Article

The Correlation Production in Thermodynamics

1 Center for Quantum Technology Research, School of Physics, Beijing Institute of Technology, Beijing 100081, China
2 Institute for Quantum Science and Engineering, Texas A&M University, College Station, TX 77843, USA
Entropy 2019, 21(2), 111; https://doi.org/10.3390/e21020111
Submission received: 21 December 2018 / Revised: 18 January 2019 / Accepted: 20 January 2019 / Published: 24 January 2019
(This article belongs to the Special Issue Thermalization in Isolated Quantum Systems)

Abstract

Macroscopic many-body systems always exhibit irreversible behaviors. However, the underlying microscopic dynamics of a many-body system, either the (quantum) von Neumann equation or the (classical) Liouville equation, guarantees that the entropy of an isolated system does not change with time, which seems to contradict the macroscopic irreversibility. We notice that the macroscopic entropy increase of standard thermodynamics is in fact associated with the correlation production inside the full ensemble state of the whole system. For an open system, the irreversible entropy production can be proved to be equivalent to the correlation production between the open system and its environment. During the free diffusion of an isolated ideal gas, the correlation between the spatial and momentum distributions increases monotonically, and it reproduces the entropy increase of standard thermodynamics. In the presence of particle collisions, the single-particle distribution always approaches the Maxwell-Boltzmann distribution as its steady state, and its entropy increase indeed indicates the correlation production between the particles. In all these examples, the total entropy of the whole isolated system remains constant, while the correlation production reproduces the irreversible entropy increase of standard macroscopic thermodynamics. In this sense, the macroscopic irreversibility and the microscopic reversibility no longer contradict each other.

1. Introduction

Consider an isolated ideal gas of N particles that initially occupies only part of a box. After a long enough diffusion time, the gas spreads over the whole volume uniformly (Figure 1). Standard macroscopic thermodynamics shows that the gas entropy increases by $\Delta S = N k_B\ln(V/V_0)$, where $V$ ($V_0$) is the final (initial) occupied volume [1,2].
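For orientation, this formula is easy to evaluate numerically; the following minimal Python sketch (a hypothetical example of one mole of gas doubling its volume, not a calculation from the paper) recovers the familiar $R\ln 2$ per mole:

```python
import numpy as np

# Quick numerical reading of Delta S = N k_B ln(V/V_0) for the free expansion.
# Hypothetical numbers: one mole of gas doubling its volume (V/V_0 = 2).
k_B, N_A = 1.380649e-23, 6.02214076e23      # J/K, 1/mol
dS = N_A * k_B * np.log(2.0)
print(f"Delta S = {dS:.2f} J/K  (= R ln 2 = {8.314 * np.log(2):.2f} J/K per mole)")
```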
This is a typical example of the entropy increase in macroscopic thermodynamics. However, an isolated quantum system always follows a unitary evolution, and its density matrix $\hat\rho(t)$ obeys the von Neumann equation $\partial_t\hat\rho = i[\hat\rho,\hat H]$, which guarantees that the von Neumann entropy $S_V[\hat\rho] = -\mathrm{tr}[\hat\rho\ln\hat\rho]$ does not change with time. In principle, this result also applies to many-body systems, and it then seems inconsistent with the above entropy increase in standard macroscopic thermodynamics.
Indeed, this problem is not unique to quantum physics; classical physics faces the same situation. For an isolated classical system, the ensemble evolution follows the Liouville equation [1,3,4],
$$\partial_t\rho(P,Q,t) = \{\rho(P,Q,t),\, H\}, \tag{1}$$
which is derived from the Hamiltonian dynamics. Here $\{\cdot,\cdot\}$ is the Poisson bracket, and $\rho(P,Q,t)$ is the probability density around the microstate $(P,Q) := (p_1,p_2,\dots;\,q_1,q_2,\dots)$ at time $t$. As a result, the Gibbs entropy of the whole system remains constant and never changes with time,
$$\frac{d}{dt}S_G[\rho(P,Q,t)] = -\frac{d}{dt}\int d^{3N}\!p\, d^{3N}\!q\;\rho\ln\rho = 0. \tag{2}$$
Therefore, this constant entropy result exists in both quantum and classical physics.
This is rather confusing when compared with our intuition of the “irreversibility” of the macroscopic world [4,5,6,7,8,9,10,11]. Moreover, even if the particles have complicated nonlinear interactions, so that the system dynamics could be highly chaotic and unpredictable, the (classical) Liouville or (quantum) unitary dynamics still guarantees that the entropy of an isolated system does not change with time.
Here we need to clarify the word “irreversibility”. In thermodynamics, a “reversible (irreversible)” process means the system is (not) in the thermal equilibrium state at every moment. Throughout this paper, we adopt the dynamical meaning: if, for any initial condition, some function (distribution, state, etc.) always approaches the same steady state, such behavior is regarded as “irreversible”.
On the other hand, if there is no inter-particle interaction, the microstate evolution is fully predictable, yet the above macroscopic diffusion still proceeds irreversibly until the gas reaches the new uniform distribution in the whole volume. In this sense, the contradiction between the constant entropy and the appearance of macroscopic irreversibility does not depend on whether complicated interactions exist. Thus we need to ask: how could the macroscopic irreversibility and entropy increase arise from the underlying microscopic dynamics, which is reversible and has time-reversal symmetry [6,12]?
Recently, it has been noticed that the irreversible entropy production in open systems is deeply related to the correlation between the open system and its environment [13,14,15,16]. In an open system, the entropy of the system itself can either increase or decrease, depending on whether it absorbs heat from or emits heat to its environment. Subtracting this thermal entropy due to heat exchange, the remaining part of the system entropy change is called the irreversible entropy production [17,18,19,20,21], which increases monotonically with time until thermal equilibrium is reached.
Under proper approximations, we can prove that the thermal entropy change due to heat exchange is just equal to the entropy change of the environment state [16,22,23], and that the irreversible entropy production is equivalent to the correlation generation between the open system and its environment, measured by their relative entropy [13,14,24,25,26] or mutual information [16,23]. At the same time, the system and its environment together, as a whole isolated system, maintain constant entropy during the evolution. In this sense, the constant global entropy and the increase of the system-environment correlation are fully consistent with each other. Moreover, when the baths are non-thermal states, which are beyond the scope of standard macroscopic thermodynamics, we will see that such correlation production still applies (see Section 2.4).
That is to say, due to practical restrictions of measurements, some correlation information hidden in the global state is difficult to sense, and this gives rise to the appearance of macroscopic irreversibility as well as the entropy increase. In principle, this correlation picture should also apply to isolated systems. Indeed, in the above diffusion example, the observation that “the gas spreads all over the volume uniformly” implicitly refers to the spatial distribution only, rather than to the total ensemble state.
For a classical ideal gas with no inter-particle interactions, the Liouville equation for the ensemble evolution can be solved exactly [4,9,27]. Notice that, in practice, it is the spatial and momentum distributions that are directly measured, not the full ensemble state. We can prove that the spatial distribution $P_x(x,t)$, as a marginal distribution of the whole ensemble, always approaches the new uniform one as its steady state. Moreover, by examining the correlation between the spatial and momentum distributions, we see that their correlation increases monotonically and reproduces the entropy increase of standard thermodynamics. At the same time, the total ensemble state $\rho(P,Q,t)$ keeps constant entropy during the diffusion process (see Section 3).
For a non-ideal gas with weak particle interactions, the dynamics of the single-particle probability distribution function (PDF) $f(\mathbf p,\mathbf r,t)$ can be described by the Boltzmann equation [1,8]. According to the Boltzmann H-theorem, $f(\mathbf p,\mathbf r,t)$ always approaches the Maxwell-Boltzmann (MB) distribution as its steady state, and its entropy increases monotonically. Notice that the single-particle PDF $f(\mathbf p,\mathbf r,t)$ is a marginal distribution of the full ensemble state $\rho(P,Q,t)$, obtained by averaging out all the other particles. Thus $f(\mathbf p,\mathbf r,t)$ does not contain the particle correlations, and the increase of its entropy implicitly reflects the increase of the inter-particle correlations, which exactly reproduces the entropy increase of standard macroscopic thermodynamics. At the same time, the total ensemble $\rho(P,Q,t)$ still follows the Liouville equation with constant entropy.
The correlation production between the particles also helps us understand the Loschmidt paradox: in the “backward” evolution, significant particle correlations have already been established [28], so the molecular-disorder assumption, which is the most crucial approximation in deriving the Boltzmann equation, no longer holds. Therefore, neither the Boltzmann equation nor the H-theorem of entropy increase applies to the “backward” evolution (see Section 4).
In sum, the global state keeps constant entropy, but in practice it is usually the partial information (e.g., marginal distributions, single-particle observable expectations) that is directly accessible to observation, and this gives rise to the appearance of macroscopic irreversibility [9,13,14,15,16,23,24,25,27,29,30,31,32,33,34]. The entropy increase of standard macroscopic thermodynamics reflects the correlation increase between different degrees of freedom (DoF) in the many-body system. In this sense, the reversibility of the microscopic dynamics (for the global state) and the macroscopic irreversibility (for the partial information) are consistent with each other. More importantly, this correlation picture applies to both quantum and classical systems, and to both open and isolated systems; it does not depend on whether complicated particle interactions exist, and it can also describe time-dependent non-equilibrium systems.

2. The Correlation Production in Open Systems

In this section, we first discuss the thermodynamics of an open system surrounded by an environment with which it exchanges energy. The open system can absorb heat from or emit heat to the environment, so the entropy of the open system itself can either increase or decrease. Thus the thermodynamic irreversibility is not simply related to the entropy change of the open system alone, but should be described by the “irreversible entropy”, which increases monotonically with time.
We first give a brief review of the formalism of irreversible entropy production, which is an equivalent statement of the second law. Then we show that this irreversible entropy production in open systems is equivalent to the correlation increase between the system and its environment [13,16,29], measured by their mutual information. Moreover, if the baths in contact with the system are not canonical thermal ones, their temperatures are no longer well defined, and this situation lies outside the applicable scope of the second law of standard thermodynamics; nevertheless, we will see that the correlation production still applies in this case.

2.1. The Irreversible Entropy Production Rate

We first briefly review the formalism of entropy production [17,18,19,20,21]. The entropy change $dS$ of an open system can be regarded as coming from two contributions, i.e.,
$$dS = dS_e + dS_i, \tag{3}$$
where $dS_e$ comes from the heat exchange with the external baths, and $dS_i$ is the irreversible entropy change. The exchange part $dS_e$ can be either positive or negative, corresponding to heat absorption or emission by the system, but the irreversible entropy change $dS_i$, as stated by the second law, is always non-negative.
If the system is in contact with a thermal bath in the equilibrium state with temperature $T$, the entropy change due to the heat exchange can be written as $dS_e = đQ/T$ (hereafter referred to as the thermal entropy), where $đQ$ is the heat absorbed by the system. Then the second law can be expressed as
$$dS_i = dS - \frac{đQ}{T} \ge 0, \tag{4}$$
where the equality holds only for reversible processes. This is just the Clausius inequality for an infinitesimal process [1,18,21].
More generally, if the system is in contact with multiple independent thermal baths with different temperatures $T_\alpha$ at the same time (Figure 2), the irreversible entropy change should be generalized as [17]
$$dS_i = dS - \sum_\alpha\frac{đQ_\alpha}{T_\alpha} \ge 0, \tag{5}$$
where $đQ_\alpha$ is the heat absorbed from bath-$\alpha$ [18,21]. For example, for a system in contact with two thermal baths with temperatures $T_{1,2}$, in the steady state we have $dS = 0$ and $đQ_1 = -đQ_2$, thus the above equation gives [21]
$$đQ_1\Big(\frac{1}{T_1} - \frac{1}{T_2}\Big) \le 0. \tag{6}$$
It is easy to verify that $đQ_1 = -đQ_2 > 0$ always comes together with $T_1 > T_2$, and vice versa. That means the heat always flows from the high-temperature area to the low-temperature area, which is just the Clausius statement of the second law.
Therefore, for an open system, the second law can be equivalently expressed as the simple inequality $dS_i \ge 0$, which means the irreversible entropy always increases monotonically. This can also be expressed through the entropy production rate (EPr), defined as
$$R_{EP} := \frac{dS_i}{dt} = \frac{dS}{dt} - \sum_\alpha\frac{1}{T_\alpha}\frac{dQ_\alpha}{dt}, \tag{7}$$
and $R_{EP} \ge 0$ is equivalent to saying that the irreversible entropy keeps increasing.
Besides its equivalence with the standard statements of the second law, the entropy production formalism also provides a proper way to study non-equilibrium thermodynamics quantitatively. If there is only one thermal bath, the system reaches thermal equilibrium with the bath in the steady state. Then the system state no longer changes, and there is no net heat exchange between the system and the bath, thus $R_{EP}\rightarrow 0$ when $t\rightarrow\infty$.
In contrast, if the system is in contact with multiple thermal baths with different temperatures, then in the steady state, although the system state no longer changes with time, there still exists a net heat flux through the system. Different from thermal equilibrium, such a steady state is a stationary non-equilibrium state [17]. In this case the EPr remains finite and positive, $R_{EP} > 0$ (see the example of Equation (6)), when $t\rightarrow\infty$, which indicates an on-going production of irreversible entropy. Therefore, $R_{EP} = 0$ (or $> 0$) indicates whether (or not) the system is in the thermal equilibrium state.
The above discussion of entropy production applies to both classical and quantum systems, as long as quantities like $\dot S$ and $\dot Q_\alpha$ are calculated from the classical ensemble or the quantum state correspondingly.

2.2. The Production Rate of the System-Bath Correlation

Now we show that the above EPr is equivalent to the production rate of the correlation between the open system and its environment. Usually, only the dynamics of the open system alone is considered in the literature. The baths, due to their large size, are usually regarded as unaffected by the system, providing only a background with fluctuations. But the system surely influences its environment [16,23,35]; for example, when the system emits energy, this energy is added to the environment. To study the correlation between the system and its environment, here we also need to know the dynamics of the whole environment.

2.2.1. Quantum Case

We first consider a quantum system in contact with several independent thermal baths with temperatures $T_\alpha$. Initially, each bath-$\alpha$ stays in the canonical thermal state
$$\hat\rho_{B,\alpha}(0) = \frac{1}{Z_\alpha}\exp\!\Big[-\frac{\hat H_{B,\alpha}}{T_\alpha}\Big], \tag{8}$$
with $Z_\alpha$ the normalization factor. The exact changing rate of the information entropy of bath-$\alpha$ is $\frac{d}{dt}S_{B,\alpha}(t) = -\mathrm{tr}[\dot{\hat\rho}_{B,\alpha}(t)\ln\hat\rho_{B,\alpha}(t)]$. To proceed, we assume the bath state $\hat\rho_{B,\alpha}(t)$ does not deviate much from the initial state, thus $\ln\hat\rho_{B,\alpha}(t) = \ln[\hat\rho_{B,\alpha}(0) + \delta\hat\rho_t] \simeq \ln\hat\rho_{B,\alpha}(0) + o(\delta\hat\rho_t)$, and the bath entropy change $\dot S_{B,\alpha}(t)$ becomes [16,22,23,26]
$$\dot S_{B,\alpha}(t) \simeq -\mathrm{tr}\big[\dot{\hat\rho}_{B,\alpha}(t)\ln\hat\rho_{B,\alpha}(0)\big] = -\mathrm{tr}\Big\{\dot{\hat\rho}_{B,\alpha}(t)\cdot\ln\frac{1}{Z_\alpha}\exp\!\Big[-\frac{\hat H_{B,\alpha}}{T_\alpha}\Big]\Big\} = \frac{1}{T_\alpha}\frac{d}{dt}\big\langle\hat H_{B,\alpha}\big\rangle. \tag{9}$$
Notice that $\frac{d}{dt}\langle\hat H_{B,\alpha}\rangle$ is the energy increase of bath-$\alpha$, which is just equal to the energy loss of the system to bath-$\alpha$ (i.e., $-\dot Q_\alpha$) when the system-bath interaction strength is negligibly small. Therefore, the above EPr $R_{EP}$ (Equation (7)) can be rewritten as $R_{EP} \simeq \dot S_S(t) + \sum_\alpha\dot S_{B,\alpha}(t)$.
Since initially the different baths are independent and do not interact with each other directly, we assume they do not generate significant correlations during the evolution, thus the entropy of the whole environment is simply the sum over the single baths, namely, $S_B(t) \simeq \sum_\alpha S_{B,\alpha}(t)$. Therefore, the above EPr $R_{EP}$ can be further rewritten as
$$R_{EP} \simeq \dot S_S(t) + \dot S_B(t) = \frac{d}{dt}\big[S_S + S_B - S_{SB}\big] = \frac{d}{dt}I_{SB}(t). \tag{10}$$
Here $S_{SB}$ is the von Neumann entropy of the whole s+b state, which does not change with time ($\frac{d}{dt}S_{SB} = 0$), since the whole s+b system is isolated and follows a unitary evolution [36].
Therefore, the production rate of the irreversible entropy, $R_{EP}$, is just the production rate of the mutual information between the system and its environment, $I_{SB} = S_S + S_B - S_{SB}$, which measures their correlation [36]. That means the second-law statement that the irreversible entropy keeps increasing ($R_{EP} \ge 0$) can be equivalently understood as: the correlation between the system and its environment, measured by their mutual information, keeps increasing until they reach equilibrium.
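As a toy numerical illustration of Equation (10) (not a model used in the paper; the spin-bath Hamiltonian, temperatures, and couplings below are all assumptions), one can evolve a qubit unitarily together with a small “bath” of four qubits prepared in thermal states and track $S_{SB}$ and $I_{SB} = S_S + S_B - S_{SB}$. With such a tiny bath the growth of $I_{SB}$ is not strictly monotonic (the dynamics is far from Markovian), but the constant global entropy and the build-up of system-bath correlation are already visible:

```python
import numpy as np
from scipy.linalg import expm, logm

# Toy check of Eq. (10): a qubit unitarily coupled to a tiny "bath" of 4 qubits.
# All parameters are hypothetical; hbar = k_B = 1.
nb = 4                                    # number of bath qubits
sz = np.diag([1.0, -1.0]); sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def kronN(ops):
    out = np.array([[1.0]])
    for o in ops:
        out = np.kron(out, o)
    return out

def op(single, site, n):                  # embed a single-qubit operator at `site`
    return kronN([single if i == site else np.eye(2) for i in range(n)])

n = nb + 1                                # site 0 = system, sites 1..nb = bath
H = 1.0 * op(sz, 0, n)
for k in range(1, n):
    H += (0.8 + 0.1 * k) * op(sz, k, n)           # bath level splittings
    H += 0.3 * op(sx, 0, n) @ op(sx, k, n)        # system-bath coupling

def thermal(h2, T):                       # single-qubit thermal state
    r = expm(-h2 / T)
    return r / np.trace(r)

rho = kronN([thermal(1.0 * sz, 0.2)] +
            [thermal((0.8 + 0.1 * k) * sz, 1.0) for k in range(1, n)])

def entropy(r):
    return -np.trace(r @ logm(r)).real

def ptrace_keep(r, keep):                 # partial trace keeping the listed qubits
    dims = [2] * n
    r = r.reshape(dims + dims)
    for site in sorted(set(range(n)) - set(keep), reverse=True):
        r = np.trace(r, axis1=site, axis2=site + r.ndim // 2)
    d = 2 ** len(keep)
    return r.reshape(d, d)

U = expm(-1j * H * 0.5)                   # evolve in time steps of 0.5
for step in range(8):
    rS = ptrace_keep(rho, [0])
    rB = ptrace_keep(rho, list(range(1, n)))
    print(f"t={0.5*step:3.1f}  S_SB={entropy(rho):.3f}  "
          f"I_SB={entropy(rS) + entropy(rB) - entropy(rho):.3f}")
    rho = U @ rho @ U.conj().T
```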

2.2.2. Classical Case

The above discussion of quantum open systems also applies to classical ones. For a classical system, the initial state of bath-$\alpha$ is represented by the canonical ensemble distribution
$$\rho_{B,\alpha}(P,Q,t=0) = \frac{1}{Z_\alpha}\exp\!\Big[-\frac{1}{T_\alpha}H_{B,\alpha}(P,Q)\Big], \tag{11}$$
where $(P,Q) := (p_1,p_2,\dots;\,q_1,q_2,\dots)$ denotes the momenta and positions of the DoF in bath-$\alpha$. Then the changing rate of the Gibbs entropy of bath-$\alpha$ is
$$\begin{aligned}\frac{d}{dt}S_G\big[\rho_{B,\alpha}(P,Q,t)\big] &= -\int d^{3N}\!p\, d^{3N}\!q\;\partial_t\rho_{B,\alpha}(t)\ln\rho_{B,\alpha}(t) \simeq -\int d^{3N}\!p\, d^{3N}\!q\;\partial_t\rho_{B,\alpha}(t)\ln\rho_{B,\alpha}(0) \\ &= \int d^{3N}\!p\, d^{3N}\!q\;\partial_t\rho_{B,\alpha}(t)\cdot\frac{1}{T_\alpha}H_{B,\alpha}(P,Q) = \frac{1}{T_\alpha}\frac{d}{dt}\big\langle H_{B,\alpha}(P,Q)\big\rangle.\end{aligned} \tag{12}$$
Here we adopted the same approximation $\ln\rho_{B,\alpha}(P,Q,t) \simeq \ln\rho_{B,\alpha}(P,Q,0)$ as above, and this result is simply the classical counterpart of Equation (9).
Therefore, for classical open systems, the EPr $R_{EP}$ in Equation (7) can also be rewritten as $R_{EP} \simeq \frac{d}{dt}(S_S + S_B)$. Furthermore, since the whole s+b system is isolated, its dynamics follows the Liouville equation, and the Gibbs entropy of the whole s+b system does not change with time, i.e., $\frac{d}{dt}S_{SB} = 0$. Therefore, the equivalence between the irreversible entropy production and the system-bath correlation (Equation (10)) also holds for classical systems. That means, for both classical and quantum open systems in contact with thermal baths, the second law can be equivalently stated as: the correlation between the system and its environment, measured by their mutual information, keeps increasing.

2.3. Master Equation Representation

Besides the above general discussion, the time-dependent dynamics of an open system, either classical or quantum, can be described quantitatively by a master equation. With the help of master equations, the above EPr can be written in a more explicit form. Here we show this for both the classical and the quantum case.

2.3.1. Classical Case

For a classical open system, the interaction with the baths leads to probability transitions between its different states, and this dynamics is usually described by the Pauli master equation [37]
$$\dot p_n = \sum_\alpha\sum_m\big[L_{nm}^{(\alpha)}\,p_m - L_{mn}^{(\alpha)}\,p_n\big], \tag{13}$$
which describes a Markovian process. Here $p_n$ is the probability to find the system in state-$n$ (with energy $E_n$), and $L_{nm}^{(\alpha)}$ is the transition rate from state-$m$ to state-$n$ due to the interaction with thermal bath-$\alpha$. The back and forth transition rates between states $m,n$ satisfy the ratio [17,38]
$$\frac{L_{mn}^{(\alpha)}}{L_{nm}^{(\alpha)}} = \exp\!\Big[-\frac{1}{T_\alpha}(E_m - E_n)\Big], \tag{14}$$
which means the “downward” transition to the lower-energy state is faster than the “upward” one by a Boltzmann factor. In the case of only one thermal bath, this relation together with the detailed balance condition $L_{nm}^{(\alpha)}p_m - L_{mn}^{(\alpha)}p_n = 0$ leads to the Boltzmann distribution $p_n : p_m = e^{-E_n/T} : e^{-E_m/T}$ in the steady state.
If there are multiple thermal baths, the average energy $\langle E\rangle = \sum_n E_n p_n$ gives an energy-flow conservation relation
$$\partial_t\langle E\rangle = \sum_\alpha J_\alpha, \qquad J_\alpha := \sum_{m,n}\big(L_{nm}^{(\alpha)}p_m - L_{mn}^{(\alpha)}p_n\big)E_n, \tag{15}$$
thus $J_\alpha$ is the heat current flowing into the system from bath-$\alpha$ ($\dot Q_\alpha$). Putting these relations, together with the Gibbs entropy of the system $S_G = -\sum_n p_n\ln p_n$, into the above EPr (7), we obtain [39]
$$\begin{aligned}R_{EP} &= \sum_\alpha\sum_{m,n}\Big[-\big(L_{nm}^{(\alpha)}p_m - L_{mn}^{(\alpha)}p_n\big)\ln p_n - \frac{E_n}{T_\alpha}\big(L_{nm}^{(\alpha)}p_m - L_{mn}^{(\alpha)}p_n\big)\Big] \\ &= \sum_\alpha\sum_{m,n}\frac12\big(L_{nm}^{(\alpha)}p_m - L_{mn}^{(\alpha)}p_n\big)\Big(\ln\frac{e^{-E_n/T_\alpha}}{p_n} - \ln\frac{e^{-E_m/T_\alpha}}{p_m}\Big) \\ &= \sum_\alpha\sum_{m,n}\frac12\big(L_{nm}^{(\alpha)}p_m - L_{mn}^{(\alpha)}p_n\big)\ln\big[L_{nm}^{(\alpha)}p_m / L_{mn}^{(\alpha)}p_n\big]. \end{aligned} \tag{16}$$
Notice that each summation term is non-negative, thus we always have $R_{EP} \ge 0$, which is consistent with the above second-law statement that the irreversible entropy keeps increasing. $R_{EP} = 0$ holds only when $L_{nm}^{(\alpha)}p_m = L_{mn}^{(\alpha)}p_n$ for every $\alpha$, which is possible only when all the baths have the same temperature, i.e., at thermal equilibrium. Otherwise, although the steady state is time-independent, there still exists a non-equilibrium flux across the system, indicated by $R_{EP} > 0$.
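To make Equation (16) concrete, here is a minimal sketch (with hypothetical energies, rates, and temperatures, not values from the paper) of a two-level system obeying the Pauli master equation (13) with two thermal baths whose rates satisfy the ratio (14); it relaxes to a stationary non-equilibrium state, where the EPr of Equation (16) stays strictly positive:

```python
import numpy as np

# Hypothetical two-level example: energies and bath temperatures (k_B = 1).
E = np.array([0.0, 1.0])
baths = [{"T": 1.0, "gamma": 0.2}, {"T": 2.0, "gamma": 0.1}]

def rates(bath):
    """Transition-rate matrix L[n, m] (m -> n) obeying the ratio of Eq. (14)."""
    T, g = bath["T"], bath["gamma"]
    L = np.zeros((2, 2))
    L[0, 1] = g                                  # downward transition 1 -> 0
    L[1, 0] = g * np.exp(-(E[1] - E[0]) / T)     # upward transition 0 -> 1
    return L

def epr(p):
    """Entropy production rate, last line of Eq. (16)."""
    R = 0.0
    for bath in baths:
        L = rates(bath)
        for n in range(2):
            for m in range(2):
                if n == m or L[n, m] == 0:
                    continue
                flux = L[n, m] * p[m] - L[m, n] * p[n]
                R += 0.5 * flux * np.log((L[n, m] * p[m]) / (L[m, n] * p[n]))
    return R

p = np.array([1.0, 0.0])                          # start in the ground state
dt = 0.01
for step in range(5000):                          # integrate Eq. (13)
    dp = np.zeros(2)
    for bath in baths:
        L = rates(bath)
        dp += L @ p - L.sum(axis=0) * p
    p += dt * dp

print("steady-state populations:", p)
print("steady-state EPr (stays > 0 since T1 != T2):", epr(p))
```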

2.3.2. Quantum Case

For a quantum system weakly coupled to multiple thermal baths, its dynamics can usually be described by the GKSL (Lindblad) equation [40,41],
$$\dot{\hat\rho} = i[\hat\rho, \hat H_S] + \sum_\alpha\mathcal L_\alpha[\hat\rho], \tag{17}$$
where $\hat\rho$ is the system state and $\mathcal L_\alpha[\hat\rho]$ describes the dissipation due to bath-$\alpha$. Using the von Neumann entropy $S_V[\hat\rho] = -\mathrm{tr}[\hat\rho\ln\hat\rho]$ and the heat current $\dot Q_\alpha = \mathrm{tr}\{\hat H_S\cdot\mathcal L_\alpha[\hat\rho]\}$, the EPr (7) can be rewritten as the following Spohn formula [39,42,43,44,45,46,47]
$$R_{EP} = -\mathrm{tr}\big[\dot{\hat\rho}\ln\hat\rho\big] + \sum_\alpha\mathrm{tr}\big\{\mathcal L_\alpha[\hat\rho]\cdot\ln\hat\rho_{SS}^{(\alpha)}\big\} = \sum_\alpha\mathrm{tr}\big\{\big(\ln\hat\rho_{SS}^{(\alpha)} - \ln\hat\rho\big)\,\mathcal L_\alpha[\hat\rho]\big\} := R_{Sp}. \tag{18}$$
Here $\hat\rho_{SS}^{(\alpha)}$ satisfies $\mathcal L_\alpha[\hat\rho_{SS}^{(\alpha)}] = 0$, and we call it the partial steady state associated with bath-$\alpha$. If the system interacted with bath-$\alpha$ alone, $\hat\rho_{SS}^{(\alpha)}$ would be its steady state for $t\rightarrow\infty$. Clearly, $\hat\rho_{SS}^{(\alpha)}$ is the thermal state ($\propto\exp[-\hat H_S/T_\alpha]$) when bath-$\alpha$ is a canonical thermal one with temperature $T_\alpha$, and the term $\chi_\alpha := \mathrm{tr}\{\mathcal L_\alpha[\hat\rho]\cdot\ln\hat\rho_{SS}^{(\alpha)}\} = -\dot Q_\alpha/T_\alpha$ is the corresponding exchange of thermal entropy.
The positivity of $R_{Sp}$ is not as obvious as in the classical case (16), but we can still prove $R_{Sp} \ge 0$ if the master Equation (17) has the standard GKSL form (see the proof in the Appendix of Reference [16] or References [42,43]). The GKSL form of the master Equation (17) indicates that it describes a Markovian process [40,41,48], similar to the classical case above. Again this is consistent with the above second-law statement.
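For the quantum case, a minimal sketch of the Spohn formula (18) is given below for a single qubit coupled to two thermal baths (the Lindblad rates, temperatures, and the plain NumPy/SciPy Euler integration are illustrative assumptions, not the paper's calculation); it shows $R_{Sp}$ staying non-negative and remaining finite in the stationary non-equilibrium state:

```python
import numpy as np
from scipy.linalg import logm, expm

# Hypothetical qubit (H_S = Omega * sigma_+ sigma_-) coupled to two thermal baths.
Omega = 1.0
T = [0.5, 2.0]          # bath temperatures (hbar = k_B = 1)
gamma = [0.1, 0.05]     # coupling rates

sm = np.array([[0, 1], [0, 0]], complex)          # sigma_-
sp = sm.conj().T
H = Omega * sp @ sm

def dissipator(c, rho):
    return c @ rho @ c.conj().T - 0.5 * (c.conj().T @ c @ rho + rho @ c.conj().T @ c)

def L_alpha(a, rho):
    nbar = 1.0 / (np.exp(Omega / T[a]) - 1.0)
    return gamma[a] * ((nbar + 1) * dissipator(sm, rho) + nbar * dissipator(sp, rho))

def thermal(Ta):
    r = expm(-H / Ta)
    return r / np.trace(r)

rho_ss = [thermal(Ta) for Ta in T]                # partial steady states of Eq. (18)

def spohn(rho):
    """R_Sp = sum_a tr[(ln rho_ss^(a) - ln rho) L_a[rho]], last line of Eq. (18)."""
    return sum(np.trace((logm(rho_ss[a]) - logm(rho)) @ L_alpha(a, rho)).real
               for a in range(2))

rho = thermal(5.0)                                # start hotter than both baths
dt = 0.01
for step in range(3001):
    if step % 1000 == 0:
        print(f"t={step*dt:5.1f}  R_Sp={spohn(rho):.4f}")   # stays >= 0
    drho = -1j * (H @ rho - rho @ H) + L_alpha(0, rho) + L_alpha(1, rho)
    rho = rho + dt * drho
```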

2.3.3. Remark

In the above discussion, we focused on the case where the baths are canonical thermal ones. As a result, in the above master equations, the transition rate ratios (14) are Boltzmann factors, and the partial steady states $\hat\rho_{SS}^{(\alpha)}$ of the system are canonical thermal states. Strictly speaking, only for canonical thermal baths is the temperature $T$ well defined, so that the thermal entropy $dS_e = đQ/T$ can be used, as well as the above EPr (7), which is the starting point for deriving the master equation representations (Equations (16) and (18)).
If the baths are in non-thermal states, there is no well-defined temperature, thus the above EPr of standard thermodynamics in Section 2.1, especially the thermal entropy $dS_e = đQ/T$, does not apply. But master equations can still be used to study the dynamics of such systems. Due to the interaction with non-thermal baths, the transition rate ratios (14) need not be Boltzmann factors, yet we can verify that the last line of Equation (16) still remains positive. Thus Equation (16) can be regarded as a generalized EPr beyond standard thermodynamics; however, its physical meaning, as well as its relation with the non-thermal bath, is then unclear.
The quantum case has the same situation. If the master Equation (17) arises from non-thermal baths, the partial steady state $\hat\rho_{SS}^{(\alpha)}$ is no longer the thermal state at the temperature of bath-$\alpha$, but the quantity $R_{Sp}$ (last line of Equation (18)) still remains positive [16,42,43]. However, in this case the physical meaning of the Spohn formula (18) is not clear.
In the following example of an open quantum system interacting with non-thermal baths, we will show that, although it is beyond the applicable scope of standard thermodynamics, the Spohn formula (18) still equals the production rate of the system-bath correlation, just as in the thermal bath case of Section 2.2, and the term $\chi_\alpha = \mathrm{tr}\{\mathcal L_\alpha[\hat\rho]\cdot\ln\hat\rho_{SS}^{(\alpha)}\}$ is just the informational entropy change of bath-$\alpha$.

2.4. Contacting with Squeezed Thermal Baths

When the heat baths in contact with the system are not canonical thermal ones, it is possible to construct a heat engine that “seemingly” works beyond the Carnot bound. For example, in an optical cavity, a collection of atoms with non-vanishing quantum coherence can be used to generate a light force that does mechanical work by pushing the cavity wall [49]; a squeezed light field can be used as the reservoir for a harmonic oscillator that expands and compresses as a heat engine [50]. In these studies, the efficiency of the heat engine can appear higher than the Carnot bound $\eta_C = 1 - T_c/T_h$. However, since the baths are not canonical thermal ones, the parameter $T$ can no longer be regarded as a well-defined temperature. As we have emphasized, such systems are not within the applicable scope of standard thermodynamics, and therefore they do not need to obey second-law inequalities that are based on canonical thermal baths [51].
In this non-thermal bath case, the thermal entropy $dS_e = đQ/T$ does not apply, but the information entropy is still well defined. Now we study the system-bath mutual information when the baths are in non-thermal states [16]. We consider a single boson mode ($\hat H_S = \Omega\,\hat a^\dagger\hat a$) linearly coupled to multiple squeezed thermal baths ($\hat H_B = \sum_\alpha\hat H_{B,\alpha}$ with $\hat H_{B,\alpha} = \sum_k\omega_{\alpha k}\,\hat b_{\alpha k}^\dagger\hat b_{\alpha k}$), interacting through $\hat V_{SB} = \sum_{\alpha k}\big(g_{\alpha k}\,\hat a^\dagger\hat b_{\alpha k} + g_{\alpha k}^*\,\hat a\,\hat b_{\alpha k}^\dagger\big)$. The initial states of the baths are squeezed thermal ones,
$$\hat\rho_{B,\alpha}(0) = \frac{1}{Z_\alpha}\exp\!\Big[-\frac{1}{T_\alpha}\,\hat S_\alpha\hat H_{B,\alpha}\hat S_\alpha^\dagger\Big], \qquad \hat S_\alpha := \prod_k\exp\!\Big[\frac12\lambda_{\alpha k}^*\,\hat b_{\alpha k}^2 - \mathrm{h.c.}\Big], \qquad \lambda_{\alpha k} = r_{\alpha k}e^{i\theta_{\alpha k}}, \tag{19}$$
where $\hat S_\alpha$ is the squeezing operator for bath-$\alpha$. Below we use the master equation to evaluate the Spohn formula (18), and compare it with the result obtained by directly calculating the bath entropy change. We will see that, in this non-thermal case, the Spohn formula (18) still equals the increasing rate of the correlation between the system and the squeezed baths.

2.4.1. Master Equation

We first look at the dynamics of the open system alone. The total s+b system follows the von Neumann equation $\partial_t\hat\rho_{SB}(t) = i[\hat\rho_{SB}(t), \hat H_{S+B}]$. Based on it, after the Born-Markov approximation [38,52], we can derive a master equation $\dot{\hat\rho}_S = \sum_\alpha\mathcal L_\alpha[\hat\rho_S]$ for the open system $\hat\rho_S(t)$ (interaction picture), where (see the detailed derivation in Reference [16])
$$\begin{aligned}\mathcal L_\alpha[\hat\rho_S] = \gamma_\alpha\Big\{ &\, n_\alpha\big(\hat a^\dagger\hat\rho_S\hat a - \tfrac12\{\hat a\hat a^\dagger,\hat\rho_S\}\big) + (n_\alpha + 1)\big(\hat a\hat\rho_S\hat a^\dagger - \tfrac12\{\hat a^\dagger\hat a,\hat\rho_S\}\big) \\ &\, - u_\alpha\big(\hat a^\dagger\hat\rho_S\hat a^\dagger - \tfrac12\{(\hat a^\dagger)^2,\hat\rho_S\}\big) - u_\alpha^*\big(\hat a\hat\rho_S\hat a - \tfrac12\{\hat a^2,\hat\rho_S\}\big)\Big\}. \end{aligned} \tag{20}$$
Here $n_\alpha := (\bar n_{\alpha,\Omega} + \frac12)\cosh 2r_{\alpha\Omega} - \frac12$ and $u_\alpha := e^{i\theta_{\alpha\Omega}}(\bar n_{\alpha,\Omega} + \frac12)\sinh 2r_{\alpha\Omega}$. The parameters $r_{\alpha\Omega}$ and $\theta_{\alpha\Omega}$ take the values of $\lambda_{\alpha k}$ in Equation (19) at $\omega_k \simeq \Omega$, and $\bar n_{\alpha,\Omega} := 1/[\exp(\Omega/T_\alpha) - 1]$ is the Planck function. The decay rate $\gamma_\alpha := J_\alpha(\Omega) = K_\alpha(\Omega)$ is defined from the coupling spectra $J_\alpha(\omega) := 2\pi\sum_k|g_{\alpha k}|^2\,\delta(\omega - \omega_{\alpha k})$ and $K_\alpha(\omega) := 2\pi\sum_k g_{\alpha k}^2\,\delta(\omega - \omega_{\alpha k})$. In addition, we have omitted the phase of $g_{\alpha k}$, thus $K_\alpha(\omega) = K_\alpha^*(\omega) = J_\alpha(\omega)$. From this master equation we obtain
$$\frac{d}{dt}\langle\tilde a(t)\rangle = -\sum_\alpha\frac12\gamma_\alpha\langle\tilde a\rangle, \qquad \frac{d}{dt}\langle\tilde a^2\rangle = -\sum_\alpha\gamma_\alpha\big[\langle\tilde a^2\rangle - u_\alpha\big], \qquad \frac{d}{dt}\langle\tilde a^\dagger\tilde a\rangle = -\sum_\alpha\gamma_\alpha\big[\langle\tilde a^\dagger\tilde a\rangle - n_\alpha\big]. \tag{21}$$
Here $\hat\rho_S(t)$ is in the interaction picture, $\langle\tilde o(t)\rangle := \mathrm{tr}[\hat\rho_S(t)\,\hat o]$, and $\hat o$ is in the Schrödinger picture.
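As a quick numerical check of the moment equations (21), the following minimal sketch integrates them for a single squeezed bath with illustrative parameters (the values of $\Omega$, $T$, $r$, $\theta$, $\gamma$ are assumptions, not taken from the paper) and confirms the relaxation toward $n_\alpha$ and $u_\alpha$:

```python
import numpy as np

# Minimal sketch of the moment equations (21) for a single squeezed bath.
# All parameters below are illustrative assumptions (hbar = k_B = 1).
Omega, T_a, r_a, theta_a, gamma_a = 1.0, 1.0, 0.6, 0.3, 0.2
nbar = 1.0 / (np.exp(Omega / T_a) - 1.0)                # Planck function
n_a = (nbar + 0.5) * np.cosh(2 * r_a) - 0.5
u_a = np.exp(1j * theta_a) * (nbar + 0.5) * np.sinh(2 * r_a)

a1, a2, n = 1.0 + 0.0j, 0.0j, 0.0      # <a~>, <a~^2>, <a~^dag a~> at t = 0
dt = 0.01
for _ in range(5000):                   # integrate Eq. (21) up to t = 50
    a1 += dt * (-0.5 * gamma_a * a1)
    a2 += dt * (-gamma_a * (a2 - u_a))
    n  += dt * (-gamma_a * (n - n_a))

print("|<a~>|              ->", abs(a1))          # decays to 0
print("|<a~^2> - u_a|      ->", abs(a2 - u_a))    # decays to 0
print("|<a~^dag a~> - n_a| ->", abs(n - n_a))     # decays to 0
```

The steady values of the second moments coincide with those of the squeezed thermal state given as the partial steady state in Equation (22) below.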
For this master equation, the partial steady state $\hat\rho_{SS}^{(\alpha)}$ associated with bath-$\alpha$, which satisfies $\mathcal L_\alpha[\hat\rho_{SS}^{(\alpha)}] = 0$, is a squeezed thermal state,
$$\hat\rho_{SS}^{(\alpha)} = \frac{1}{z_\alpha}\exp\!\Big[-\frac{\Omega}{T_\alpha}\cdot\hat S_\alpha\,\hat a^\dagger\hat a\,\hat S_\alpha^\dagger\Big], \qquad \hat S_\alpha := \exp\!\Big[\frac12\zeta_\alpha^*\,\hat a^2 - \mathrm{h.c.}\Big], \qquad \zeta_\alpha = \lambda_{\alpha k}\big|_{\omega_k=\Omega} := r_\alpha e^{i\theta_\alpha}, \tag{22}$$
where $\hat S_\alpha$ here is the squeezing operator acting on the system mode. Putting this result into the Spohn formula (18), the term $\chi_\alpha = \mathrm{tr}\{\mathcal L_\alpha[\hat\rho]\cdot\ln\hat\rho_{SS}^{(\alpha)}\}$ gives
$$\chi_\alpha = \frac{\Omega}{T_\alpha}\cdot\gamma_\alpha\Big\{\cosh 2r_\alpha\cdot\big[\langle\tilde a^\dagger\tilde a\rangle - n_\alpha\big] + \frac12\sinh 2r_\alpha\big[e^{i\theta_\alpha}\big(\langle\tilde a^2(t)\rangle - u_\alpha\big) + \mathrm{h.c.}\big]\Big\}. \tag{23}$$
When there is no squeezing ($r_\alpha = 0$), this expression returns exactly to the thermal bath result $\chi_\alpha = -\frac{1}{T_\alpha}\frac{d}{dt}\big[\Omega\langle\tilde a^\dagger\tilde a\rangle\big] = -\dot Q_\alpha/T_\alpha$ (see Equation (21)), which is just the exchange of thermal entropy. However, due to the quantum squeezing in the bath, this $\chi_\alpha$ term is no longer the thermal entropy, and its form looks too complicated to read off a physical meaning directly. Below we show that this $\chi_\alpha$ term is just the informational entropy change of bath-$\alpha$.

2.4.2. Bath Entropy Dynamics

Now we calculate the entropy change $\dot S_{B,\alpha}$ of bath-$\alpha$ directly, adopting the same approximation as in Equation (9), which gives
$$\dot S_{B,\alpha} \simeq -\mathrm{tr}\Big\{\dot{\hat\rho}_{B,\alpha}(t)\,\ln\frac{1}{Z_\alpha}\exp\!\Big[-\frac{1}{T_\alpha}\hat S_\alpha\hat H_{B,\alpha}\hat S_\alpha^\dagger\Big]\Big\} = \sum_k\frac{\omega_{\alpha k}}{T_\alpha}\Big\{\cosh 2r_{\alpha k}\cdot\frac{d}{dt}\big\langle\tilde b_{\alpha k}^\dagger(t)\tilde b_{\alpha k}(t)\big\rangle + \frac12\sinh 2r_{\alpha k}\Big[e^{i\theta_{\alpha k}}\cdot\frac{d}{dt}\big\langle\tilde b_{\alpha k}^2(t)\big\rangle + \mathrm{h.c.}\Big]\Big\}. \tag{24}$$
Unlike the thermal bath case of Equation (9), here it is not easy to see how the bath entropy dynamics $\dot S_{B,\alpha}$ is related to the system dynamics. However, notice that $\dot S_{B,\alpha}$ is determined by the time derivatives of bath operator expectations like $\langle\tilde b_{\alpha k}^\dagger(t)\tilde b_{\alpha k}(t)\rangle$ and $\langle\tilde b_{\alpha k}^2(t)\rangle$, which can be calculated from the Heisenberg equations. After a Markovian approximation, in the weak coupling limit ($\gamma_\alpha \ll \Omega$), we can prove the following relations (see the detailed proof in Reference [16]),
$$\sum_k f_k\cdot\frac{d}{dt}\big\langle\tilde b_{\alpha k}^\dagger\tilde b_{\alpha k}\big\rangle \simeq f(\omega_k\!\rightarrow\!\Omega)\cdot\gamma_\alpha\big[\langle\tilde a^\dagger\tilde a\rangle - n_\alpha\big] = -\mathrm{tr}\big\{f(\Omega)\,\hat a^\dagger\hat a\cdot\mathcal L_\alpha[\hat\rho]\big\}, \qquad \sum_k h_k\cdot\frac{d}{dt}\big\langle\tilde b_{\alpha k}^2\big\rangle \simeq h(\omega_k\!\rightarrow\!\Omega)\cdot\gamma_\alpha\big[\langle\tilde a^2\rangle - u_\alpha\big] = -\mathrm{tr}\big\{h(\Omega)\,\hat a^2\cdot\mathcal L_\alpha[\hat\rho]\big\}, \tag{25}$$
where $f_k$ and $h_k$ are the summation weights associated with the bath mode $\hat b_{\alpha k}$.
These two relations connect the dynamics of bath-$\alpha$ (left sides) with that of the open system (right sides). For example, setting $f_k = \omega_{\alpha k}$, the first relation becomes $\frac{d}{dt}\big[\sum_k\omega_{\alpha k}\langle\tilde b_{\alpha k}^\dagger\tilde b_{\alpha k}\rangle\big] \simeq -\mathrm{tr}\{\Omega\,\hat a^\dagger\hat a\cdot\mathcal L_\alpha[\hat\rho]\}$, which is just the heat emission-absorption relation $\frac{d}{dt}\langle\hat H_{B,\alpha}\rangle = -\dot Q_\alpha$, already used in the discussion below Equation (9). To calculate the above entropy change (24) for the squeezed thermal bath, we set $f_k = \frac{1}{T_\alpha}\omega_{\alpha k}\cosh 2r_{\alpha k}$ and $h_k = \frac{1}{2T_\alpha}\omega_{\alpha k}e^{i\theta_{\alpha k}}\sinh 2r_{\alpha k}$, and then obtain
$$\dot S_{B,\alpha} = \frac{\Omega}{T_\alpha}\cdot\gamma_\alpha\Big\{\cosh 2r_\alpha\cdot\big[\langle\tilde a^\dagger\tilde a\rangle - n_\alpha\big] + \frac12\sinh 2r_\alpha\big[e^{i\theta_\alpha}\big(\langle\tilde a^2(t)\rangle - u_\alpha\big) + \mathrm{h.c.}\big]\Big\}. \tag{26}$$
This result is exactly equal to the term $\chi_\alpha = \mathrm{tr}\{\mathcal L_\alpha[\hat\rho]\,\ln\hat\rho_{SS}^{(\alpha)}\}$ in the above Spohn formula (see Equation (23)). Therefore, in this non-thermal bath case, the changing rate of the system-bath mutual information is just equal to the Spohn formula (18),
$$\frac{d}{dt}I_{SB} = \dot S_S + \sum_\alpha\dot S_{B,\alpha} = R_{Sp} \ge 0, \tag{27}$$
thus its positivity is still guaranteed [16].
That means that, although the non-thermal baths are beyond the applicable scope of standard thermodynamics, namely, $dS_i = dS - \sum_\alpha đQ_\alpha/T_\alpha \ge 0$ does not apply, the system-bath correlation $I_{SB}$ still keeps increasing monotonically as in the thermal bath case (Section 2.2). Therefore, the system-bath correlation production may be regarded as a generalization of the irreversible entropy production that also applies to non-thermal cases. Tracing back to the original decomposition of the entropy change (Equation (3)), it turns out that the term $dS_e$ can also be regarded as (minus) the informational entropy change of the bath, which reduces to $dS_e = đQ/T$ in the special case of a canonical thermal bath.

2.5. Discussions

Historically, the Spohn formula was first introduced by considering the distance between the system state $\hat\rho(t)$ and its final steady state $\hat\rho_{SS}$, measured by their relative entropy $S[\hat\rho(t)\,\|\,\hat\rho_{SS}] := \mathrm{tr}\{\hat\rho(t)\cdot(\ln\hat\rho(t) - \ln\hat\rho_{SS})\}$ (Spohn [42]). When $t\rightarrow\infty$, $\hat\rho(t)\rightarrow\hat\rho_{SS}$, and this distance decreases to zero. Thus, for a Markovian master equation $\partial_t\hat\rho = \mathcal L[\hat\rho]$, the EPr is defined from the time derivative of this distance, i.e.,
$$\sigma := -\frac{d}{dt}S[\hat\rho(t)\,\|\,\hat\rho_{SS}] = \mathrm{tr}\big\{(\ln\hat\rho_{SS} - \ln\hat\rho)\,\mathcal L[\hat\rho]\big\}. \tag{28}$$
Therefore, this EPr $\sigma$ serves as a Lyapunov indicator for the master equation. It was proved that $\sigma \ge 0$, and $\sigma\rightarrow 0$ when $t\rightarrow\infty$. When there is only one thermal bath, this EPr $\sigma$ returns to the thermodynamic result, $\sigma = \dot S - \dot Q/T$.
However, when the open system is in contact with multiple heat baths, as described by the master Equation (17) with $\mathcal L[\hat\rho] = \sum_\alpha\mathcal L_\alpha[\hat\rho]$, the above EPr $\sigma$ still goes to zero when $t\rightarrow\infty$. Thus it cannot tell the difference between reaching the equilibrium state and reaching a stationary non-equilibrium state (see the example of Equation (6)). Later (Spohn and Lebowitz [43]), this EPr was generalized to the form of Equations (7) and (18), and its positivity can be proved by a procedure similar to that of the previous study (Spohn [42]).
In standard thermodynamics, the second-law statement $dS_i \ge 0$ requires a monotonic increase of the irreversible entropy, not merely an increase compared with the initial state. Notice that in the proof of the positivity of the EPr, Markovianity is necessary for both the classical and the quantum case. If the master equation of the open system is non-Markovian, there may exist periods where $R_{EP} < 0$, meaning a decrease of the irreversible entropy (or of the system-bath correlation). When comparing the EPr with standard thermodynamics, a coarse-grained time scale is more appropriate (which usually implies a Markovian process), thus $R_{EP} < 0$ may be acceptable if it appears only on short time scales.
In the above discussion, the crucial step is the direct calculation of the bath entropy dynamics, which is usually quite difficult since the bath contains an infinite number of DoF. The above calculation is possible mainly thanks to the approximation $\dot S_B \simeq -\mathrm{tr}[\dot{\hat\rho}_B(t)\ln\hat\rho_B(0)]$. The results derived from it are consistent with the previous conclusions of thermodynamics, but the validity of this approximation still requires further examination.
There are a few exactly solvable open-system models suitable for such an examination. In Reference [23], the bath entropy dynamics was calculated for a two-level system (TLS) dispersively coupled to a squeezed thermal bath. In this problem, the density matrix evolution of each bath mode can be solved exactly. The state of each bath mode is the probabilistic sum of two displaced Gaussian states, $\hat\varrho_k(t) = p_+\hat\varrho_k^+(t) + p_-\hat\varrho_k^-(t)$, which keep separating and recombining periodically in phase space. Thus the exact entropy dynamics can be calculated and compared with the result based on the above approximation.
It turns out that the approximation fits the exact result quite well in the high-temperature regime; in the low-temperature regime, the approximate result diverges as $T\rightarrow 0$, while the exact result remains finite. The reason is that in the high-temperature regime $\hat\varrho_k(t)$ can be well approximated as a single Gaussian state, since the separation between $\hat\varrho_k^\pm(t)$ is quite small; in the low-temperature regime, the uncertainty of $\hat\varrho_k(t)$ mainly comes from the probabilities $p_\pm$ rather than from the entropy of the Gaussian states $\hat\varrho_k^\pm$. Namely, at low temperature the back-action from the system on the bath is larger, especially for nonlinear systems like the TLS. If the bath states cannot be well treated as Gaussian ones, the above approximation is questionable, and how to calculate the bath entropy in this case remains an open problem.

3. The Entropy in the Ideal Gas Diffusion

In the above discussion of the correlation production between an open system and its environment, we utilized an important fact: the whole s+b system is isolated, thus its entropy does not change with time. In the quantum case, the whole s+b system follows the von Neumann equation $\partial_t\hat\rho_{SB} = i[\hat\rho_{SB}, \hat H_{S+B}]$, thus the von Neumann entropy $S_V[\hat\rho_{SB}] = -\mathrm{tr}[\hat\rho_{SB}\ln\hat\rho_{SB}]$ does not change during the unitary evolution. Likewise, in the classical case, the whole system follows the Liouville equation $\partial_t\rho_{SB} = \{\rho_{SB}, H_{S+B}\}$, thus the Gibbs entropy $S_G[\rho] = -\int d^{3N}\!p\,d^{3N}\!q\,\rho\ln\rho$ keeps constant.
However, this is still quite counter-intuitive compared with the macroscopic irreversibility. For example, consider the diffusion process of an ideal gas mentioned at the very beginning (Figure 1): although there are no particle interactions and the dynamics of the whole system is fully predictable, the diffusion still proceeds irreversibly, and the gas finally occupies the whole volume uniformly.
In this section, we show that this puzzle can also be understood in terms of correlation production. In open systems, we have seen that it is the system-bath correlation that increases, while the total s+b entropy does not change. In an isolated system, there is no partition into “system” and “bath”, but we will see that it is the correlation between different DoF, e.g., position-momentum and particle-particle, that increases monotonically, while the total entropy does not change [4,27].

3.1. Liouville Dynamics of the Ideal Gas Diffusion

Here we carry out a full calculation of the phase-space evolution of the above ideal-gas diffusion in classical physics, so as to examine the dynamical behavior of the microstate PDF as well as its entropy.
Since there is no interaction between the particles, the dynamics of the $3N$ DoF are independent of each other. Assuming there is no initial correlation between different DoF, the total $N$-particle microstate PDF can always be written in product form, i.e., $\rho(P,Q,t) = \prod_{i,\sigma}\varrho(p_{i\sigma}, q_{i\sigma}, t)$ with $\sigma = x,y,z$, thus the problem reduces to the PDF of a single DoF, $\varrho(p,x,t)$. Correspondingly, the Liouville equation is
$$\partial_t\varrho = \{\varrho, H\} = -\frac{\partial\varrho}{\partial x}\frac{\partial H}{\partial p} + \frac{\partial\varrho}{\partial p}\frac{\partial H}{\partial x} = -\frac{p}{m}\,\partial_x\varrho, \tag{29}$$
where $H = p^2/2m$ is the single-DoF Hamiltonian.
This equation is exactly solvable, and the general solution has the form $\Phi\big(p,\, x - \frac{p}{m}t\big)$. The detailed form of the function $\Phi(\cdot,\cdot)$ is determined by the initial and boundary conditions. We assume the system starts from an equilibrium state confined in the region $x\in[a,b]$, namely,
$$\varrho(p,x,0) = \Lambda(p)\times\Pi(x). \tag{30}$$
Here $\Lambda(p) = \frac{1}{Z}\exp[-p^2/2\bar p_T^2]$ is the MB distribution, with $\bar p_T^2/2m = \frac12 k_B T$ the average kinetic energy and $Z = \sqrt{2\pi}\,\bar p_T$ the normalization factor. $\Pi(x)$ is the initial spatial distribution (Figure 3a),
$$\Pi(x) = \begin{cases}\dfrac{1}{b-a}, & a\le x\le b, \\[4pt] 0, & \text{elsewhere}.\end{cases} \tag{31}$$
Such a product form of $\varrho(p,x,0)$ indicates that the spatial and momentum distributions have no correlation initially.
For diffusion in free space $x\in(-\infty,\infty)$, the time-dependent solution is
$$\varrho_F(p,x,t) = \Lambda(p)\,\Pi\Big(x - \frac{p}{m}t\Big), \tag{32}$$
which satisfies both the Liouville Equation (29) and the initial condition (30).
For a confined region $x\in[0,L]$ with the periodic boundary condition $\varrho(p,0,t) = \varrho(p,L,t)$, the solution can be constructed from the free-space one, i.e.,
$$\varrho(p,x,t) = \sum_{n=-\infty}^{\infty}\varrho_F(p,\,x + nL,\,t), \qquad 0\le x\le L. \tag{33}$$
Here $\varrho_F(p, x+nL, t)$ can be regarded as the periodic “image” solution in the interval $[nL, nL+L]$ (Figure 4a) [35]. Clearly, Equation (33) satisfies the periodic boundary condition as well as the initial condition (30), and it is simple to verify that each summation term satisfies the Liouville Equation (29); thus Equation (33) describes the full microstate PDF evolution in the confined region $x\in[0,L]$ with the periodic boundary condition.
From the exact solutions (32) and (33) it is clear that the microstate PDF $\varrho(p,x,t)$ can no longer retain a separable form like $f_x(x,t)\times f_p(p,t)$ once the diffusion starts, thus it is not evolving towards any equilibrium state, since an equilibrium state must have a separable form similar to the initial condition (30) (see also Figure 3a).
Figure 3 shows the microstate PDF $\varrho(p,x,t)$ at different times. As time increases, the “stripe” in Figure 3a becomes more and more inclined; once it exceeds the boundary, it winds back from the other side due to the periodic boundary condition and generates a new “stripe” (Figure 3c). After a very long time, more and more stripes appear, becoming denser and thinner, but they never occupy the whole phase space continuously (Figure 3d,e).
Figure 3e shows the conditional PDF of the momentum with the position fixed at $x = L/2$ (the vertical dashed line in Figure 3d). In the limit $t\rightarrow\infty$, it becomes an exotic function discontinuous everywhere, not the MB distribution. All these features indicate that, during the diffusion of the isolated ideal gas, the ensemble is not evolving towards the new equilibrium state expected from the macroscopic intuition. Even after a long relaxation time, the microstate PDF $\varrho(p,x,t)$ does not approach the equilibrium state.

3.2. Spatial and Momentum Distributions

Even after a long relaxation time, the ideal gas does not reach the new equilibrium state. This looks counter-intuitive, since clearly the particles spread all over the box uniformly after a long enough time. However, the statement “spreading all over the box uniformly” implicitly refers to the position distribution $P_x(x,t)$ alone, not the whole ensemble state $\varrho(p,x,t)$. As a marginal distribution of $\varrho(p,x,t)$, the spatial distribution $P_x(x,t)\rightarrow 1/L$ does approach the new uniform one as its steady state (Figure 3d), and we now show that this holds for any initial $\Pi(x)$ [4,27].
We first consider an initial spatial distribution that is a $\delta$-function concentrated at $x_0$, $\Pi(x) = \delta(x - x_0)$. Since we have obtained the analytical results (32) and (33) for the ensemble evolution $\varrho(p,x,t)$, the spatial distribution $P_x(x,t)$ emerges as its marginal distribution by integrating over the momentum:
$$P_x(x,t) = \int dp\;\varrho(p,x,t) = \int dp\sum_{n=-\infty}^{\infty}\frac{1}{Z}\exp\!\Big[-\frac{p^2}{2\bar p_T^2}\Big]\,\delta\Big(x + nL - x_0 - \frac{p}{m}t\Big) = \sum_{n=-\infty}^{\infty}\frac{m}{Z\,t}\exp\!\Big[-\frac{1}{2\bar v_T^2 t^2}(x + nL - x_0)^2\Big], \tag{34}$$
where $\bar v_T := \bar p_T/m$. As time $t$ increases, these Gaussian terms become wider and lower (Figure 4a). Therefore, when $t\rightarrow\infty$, the spatial distribution $P_x(x,t)$ always approaches the uniform distribution on $x\in[0,L]$.
Any initial spatial distribution can be regarded as a combination of $\delta$-functions, i.e., $\Pi(x) = \int dx_0\,\Pi(x_0)\,\delta(x - x_0)$. Therefore, for any initial $\Pi(x)$, the spatial distribution $P_x(x,t)$ always approaches the uniform one as its steady state. In this sense, although the underlying Liouville dynamics obeys time-reversal symmetry, the diffusion appears “irreversible” to our observation.
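The approach of $P_x(x,t)$ to the uniform distribution can be checked directly from Equation (34); the following sketch (box size, initial position, and thermal momentum are illustrative assumptions) sums the periodic Gaussian images and prints the deviation from $1/L$:

```python
import numpy as np

# Sketch of Eq. (34): spatial marginal as a sum of periodic Gaussian images.
# Illustrative parameters; m = 1, so v_T = p_T.
L_box, x0, v_T = 1.0, 0.3, 1.0

def P_x(x, t, n_images=50):
    n = np.arange(-n_images, n_images + 1)[:, None]
    g = np.exp(-((x + n * L_box - x0) ** 2) / (2 * (v_T * t) ** 2))
    return g.sum(axis=0) / (np.sqrt(2 * np.pi) * v_T * t)

x = np.linspace(0, L_box, 400)
for t in [0.05, 0.2, 1.0, 5.0]:
    p = P_x(x, t)
    print(f"t={t:4.2f}  max deviation from uniform 1/L: {np.abs(p - 1/L_box).max():.2e}")
```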
On the other hand, $P_p(p,t)$ never changes with time and always maintains its initial form, which can be proved by simply changing the integration variable:
$$P_p(p,t) = \int_0^L dx\sum_{n=-\infty}^{\infty}\Lambda(p)\,\Pi\Big(x + nL - \frac{p}{m}t\Big) = \int_{-\infty}^{\infty}dx\;\Lambda(p)\,\Pi\Big(x - \frac{p}{m}t\Big) = \Lambda(p). \tag{35}$$
This is because of the periodic boundary condition: the particles always move freely. If the particles were reflected at the boundaries, the momentum distribution would also change with time.

3.3. Reflecting Boundary Condition

Now we consider the reflecting boundary condition. In this case, when a particle hits a boundary at $x = 0, L$, its position does not change, but its momentum is suddenly changed from $p$ to $-p$. Correspondingly, the analytical result for the ensemble evolution can be obtained by summing up the “reflection images” (Figure 4b), i.e.,
$$\begin{aligned}&\varrho(p,x,t) = \tilde\varrho^{(0)}(p,x,t) + \sum_{n=1}^{\infty}\Big[\tilde\varrho^{(-n)}(p,x,t) + \tilde\varrho^{(+n)}(p,x,t)\Big], \qquad x\in[0,L], \\ &\tilde\varrho^{(-n)}(p,x,t) := \mathcal R_0\big[\tilde\varrho^{[+(n-1)]}(p,x,t)\big], \qquad \tilde\varrho^{(+n)}(p,x,t) := \mathcal R_L\big[\tilde\varrho^{[-(n-1)]}(p,x,t)\big]. \end{aligned} \tag{36}$$
Here $\tilde\varrho^{(0)}(p,x,t) = \varrho_F(p,x,t)$, and $\mathcal R_a[f(p,x)] := f(-p,\,2a - x)$ denotes the mirror reflection of the function $f(p,x)$ about the axis $x = a$. Clearly, each summation term $\tilde\varrho^{(\pm n)}$ is obtained from $\varrho_F(p,x,t)$ by shifts and mirror reflections, thus each term satisfies the differential relation of the Liouville Equation (29).
To verify the boundary condition, consider a diffusing distribution in $x\in[0,L]$, initially described by $\tilde\varrho^{(0)}(p,x,t)$. As time increases, it spreads wider and eventually exceeds the box range $[0,L]$. The exceeding part should be reflected at the boundaries $x = 0, L$ as the next order $\tilde\varrho^{(\pm 1)}$ and added back to the total result $\varrho(p,x,t)$. This procedure is done iteratively: when the term $\tilde\varrho^{(\pm n)}$ exceeds the boundaries, it generates the reflected term $\tilde\varrho^{[\mp(n+1)]}$ as the next summation order (Figure 4b).
Therefore, this result is quite similar to the periodic case above, except that reflections are applied to certain summation terms. For the same reason as above, each summation term becomes flatter and flatter during the diffusion, thus the spatial distribution $P_x(x,t)$ always approaches the new uniform one as its steady state [4].
The ensemble evolution is shown in Figure 5a-d, which is quite similar to the periodic case above. Again, $\varrho(p,x,t)$ is not evolving towards the equilibrium state. A significant difference is that the momentum distribution $P_p(p,t)$ now varies with time. This is because the collisions at the boundaries change the momentum direction, so $\langle p\rangle$ is no longer conserved, although the kinetic energy $\langle p^2\rangle$ (the momentum amplitude) does not change.
Notice that the reflection “moves” the probability of momentum $p$ to the area of $-p$; therefore, some “areas” of $P_p(p,t)$ are “cut” off from the initial MB distribution and “added” to the mirror position about $p = 0$ (see especially Figure 5c,d). Thus $P_p(p,t)$ differs from the initial MB distribution.
Since the reflection transfers the probability of $p$ to its mirror position $-p$, the difference $\delta P_p(t) := P_p(t) - P_p(0)$ is always an odd function (lower blue curve in Figure 5e). As a result, the even moments $\langle p^{2n}\rangle$ of $P_p(t)$ are the same as those of the MB distribution, while the odd ones $\langle p^{2n+1}\rangle$ are changed.
As time increases, more and more “stripes” appear in $P_p(p,t)$, becoming thinner and denser. When calculating the odd moments $\langle p^{2n+1}\rangle$, the contributions from the two nearest stripes in $\delta P_p(t)$ (which have similar $p$ values), one positive and one negative, tend to cancel each other (lower blue curve in Figure 5e). Therefore, in the limit $t\rightarrow\infty$, the odd moments $\langle p^{2n+1}\rangle$ also approach the values of the original MB distribution (zero) (Figure 6c).
Therefore, in the long-time limit, $P_p(p,t)$ approaches an exotic function discontinuous everywhere, which is different from the initial MB distribution, but all of its moments $\langle p^n\rangle$ have the same values as those of the initial MB distribution [53,54]. In practice, it is difficult to distinguish these two distributions in usual experiments [9].
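The behavior of the moments can be illustrated with a simple Monte Carlo sketch of the reflecting box (all parameters below are illustrative assumptions; the elastic reflections are implemented by unfolding the motion onto a $2L$-periodic line): the odd moments of $P_p(p,t)$ first deviate from zero during the transient and then relax back, while $\langle p^2\rangle$ stays fixed:

```python
import numpy as np

# Monte Carlo sketch of the reflecting-box momentum distribution (Section 3.3).
# Illustrative parameters: m = 1, L = 1, initial interval [0.1, 0.4], p_T = 1.
rng = np.random.default_rng(0)
L_box, a, b, p_T, N = 1.0, 0.1, 0.4, 1.0, 1_000_000

x0 = rng.uniform(a, b, N)                 # initial positions ~ Pi(x)
p0 = rng.normal(0.0, p_T, N)              # initial momenta ~ Lambda(p)

def momenta_after(t):
    """Final momenta: unfold the reflecting motion onto a 2L-periodic line."""
    y = np.mod(x0 + p0 * t, 2 * L_box)
    sign = np.where(y <= L_box, 1.0, -1.0)   # odd number of reflections flips p
    return sign * p0

for t in [0.0, 0.5, 2.0, 10.0]:
    p = momenta_after(t)
    print(f"t={t:5.1f}  <p>={p.mean():+.4f}  <p^3>={np.mean(p**3):+.4f}  "
          f"<p^2>={np.mean(p**2):.4f}   (MB values: 0, 0, {p_T**2:.1f})")
```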

3.4. Correlation Entropy

From the above exact results for the ensemble evolution, we have seen that the macroscopic appearance of the new uniform distribution concerns only the spatial distribution, which reflects only the marginal information of the whole state $\varrho(p,x,t)$; thus this macroscopic appearance is not enough to conclude that $\varrho(p,x,t)$ is approaching the new equilibrium state with an entropy increase.
However, in practical experiments, the full joint distribution $\varrho(p,x)$ is difficult to measure directly. Usually it is the spatial and momentum distributions $P_x(x)$ and $P_p(p)$ that are directly accessible, e.g., by measuring the gas density and pressure. Therefore, based on these two marginal distributions, we may “infer” the microstate PDF as [7,25,55]
$$\tilde\varrho_{\mathrm{inf}}(p,x,t) := P_x(x,t)\times P_p(p,t), \tag{37}$$
which neglects the correlation between the two marginal distributions. As a result, in the long-time limit, $P_x(x,t)\rightarrow 1/L$ approaches the new uniform distribution while $P_p(p,t)$ “behaves” like the initial MB distribution, thus the inferred state $\tilde\varrho_{\mathrm{inf}}(p,x,t)$ looks just like a new “equilibrium state”.
The entropy change of this inferred state is
$$\Delta_i S(t) := S_G[\tilde\varrho_{\mathrm{inf}}(t)] - S_G[\tilde\varrho_{\mathrm{inf}}(0)] = \big\{S_x[P_x(t)] + S_p[P_p(t)] - S_G[\varrho(t)]\big\} - \big\{S_x[P_x(0)] + S_p[P_p(0)] - S_G[\varrho(0)]\big\}, \tag{38}$$
where $S_G[\varrho(t)] = S_G[\varrho(0)]$ is guaranteed by the Liouville dynamics, and
$$S_x[P_x(x,t)] := -\int_0^L dx\; P_x(x,t)\ln P_x(x,t), \qquad S_p[P_p(p,t)] := -\int_{-\infty}^{\infty}dp\; P_p(p,t)\ln P_p(p,t). \tag{39}$$
Notice that the combination $S_x + S_p - S_G := I_{xp}$ in Equation (38) is just the mutual information between the marginal distributions $P_x(x,t)$ and $P_p(p,t)$, which measures their correlation [36,56] (see the discussion of the entropy of continuous PDFs in Appendix A).
Therefore, $\Delta_i S$ describes the correlation increase between the spatial and momentum distributions. During the diffusion process, this correlation entropy $\Delta_i S(t)$ increases monotonically for both the periodic and the reflecting boundary case (Figure 6a,b). Notice that this is quite similar to the above discussion of open systems: the total entropy does not change, while the correlation entropy increases “irreversibly” [13,14,15,16,23,24,25,29,30,31].
For the periodic boundary case, $P_p(p)$ does not change, and $P_x(x)$ approaches the uniform distribution after a long time, thus the entropy increase (38) gives $\Delta_i S = \ln(L/L_0)$, where $L_0 := b - a$ is the length of the initially occupied region. For the full $N$-particle state of the ideal gas, the corresponding inferred state is $\tilde\rho_{\mathrm{inf}}(P,Q) = \prod_{i,\sigma}\tilde\varrho_{\mathrm{inf}}(p_{i\sigma},q_{i\sigma})$, which gives the entropy increase $\Delta_i S = N\ln(V/V_0)$. This result exactly reproduces the thermodynamic entropy increase mentioned at the very beginning (Figure 6a). (These conclusions still hold in the thermodynamic limit $V\rightarrow\infty$, since the system sizes always appear as ratios, e.g., $V/V_0$.)
For the reflecting boundary case, $P_x(x)\rightarrow 1/L$ still holds and gives $\Delta S_x = \ln(L/L_0)$, but now $P_p(p)$ varies with time. Moreover, it is worth noticing that $\Delta S_p[P_p]$ decreases with time (Figure 6b), which looks a little counter-intuitive. The reason is that, as mentioned before, the boundary reflections change the momentum directions and thus the distribution $P_p(p)$, but the average energy $\langle p^2\rangle$ does not change, and for fixed average energy the thermal distribution (the initial one) has the maximum entropy [20,55]. Therefore, during the evolution, the deviation of $P_p(p,t)$ from the initial MB distribution leads to a decrease of its entropy $\Delta S_p[P_p]$. However, this is quite difficult to sense in practice, and the total correlation entropy change $\Delta_i S = \Delta S_x + \Delta S_p$ still increases monotonically.
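The growth of the correlation entropy $\Delta_i S = I_{xp}$ toward $\ln(L/L_0)$ can be verified numerically from the periodic-box solution (33); the sketch below (grid sizes and parameters are illustrative assumptions) evaluates Equations (38) and (39) on a phase-space grid:

```python
import numpy as np

# Numerical check of the correlation entropy Delta_i S = I_xp (Eq. (38)) for the
# periodic box, evaluated on a grid. Illustrative parameters: m = k_B = 1,
# p_T = 1, box L = 1, initial interval [0, 0.5].
L_box, a, b, p_T = 1.0, 0.0, 0.5, 1.0
x = np.linspace(0, L_box, 400, endpoint=False)
p = np.linspace(-5, 5, 400)
dx, dp = x[1] - x[0], p[1] - p[0]
X, P = np.meshgrid(x, p, indexing="ij")

def entropy(w, dV):
    w = np.where(w > 0, w, 1.0)            # 0 * log 0 -> 0
    return -np.sum(w * np.log(w)) * dV

Lam = np.exp(-p**2 / (2 * p_T**2)) / (np.sqrt(2 * np.pi) * p_T)

for t in [0.0, 0.5, 1.0, 2.0, 4.0]:
    X0 = np.mod(X - P * t, L_box)                    # free streaming, wound back
    rho = Lam[None, :] * ((X0 >= a) & (X0 < b)) / (b - a)
    Px, Pp = rho.sum(axis=1) * dp, rho.sum(axis=0) * dx
    I = entropy(Px, dx) + entropy(Pp, dp) - entropy(rho, dx * dp)
    print(f"t={t:3.1f}  I_xp = {I:.3f}   (-> ln(L/L0) = {np.log(L_box/(b-a)):.3f})")
```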

3.5. Resolution Induced Coarse-Graining

Historically, the problem of the constant entropy under the Liouville dynamics was first studied by Gibbs [3]. To understand why the entropy increases in standard thermodynamics, he noticed that the result depends on the order of limits when calculating the “ensemble volume” (the entropy). Usually, at a given time $t$, the phase space is divided into many small cells of finite volume $\varepsilon_V$, the summation over these cells is carried out, and then the cell size is sent to $\varepsilon_V\rightarrow 0$. This gives the constant-entropy result. However, if the cell size is kept finite, the limit $t\rightarrow\infty$ is taken first, and only then $\varepsilon_V\rightarrow 0$, one obtains an increasing entropy.
This idea is now described more specifically as “coarse-graining” [3,4,8]. The “coarse-grained” ensemble state $\tilde\rho_{\mathrm{c.g.}}(P,Q)$ is obtained by averaging the exact one $\rho(P,Q)$ over a small phase-space volume around each point $(P,Q)$, namely,
$$\tilde\rho_{\mathrm{c.g.}}(P,Q) := \frac{1}{\varepsilon_V}\int_{\varepsilon_V} d^{3N}\!P'\, d^{3N}\!Q'\;\rho(P',Q'). \tag{40}$$
Here $\varepsilon_V$ is the small volume around the point $(P,Q)$ in phase space. From Figure 3d and Figure 5d, we can see that after a long relaxation time the coarse-grained $\tilde\rho_{\mathrm{c.g.}}(P,Q)$ approaches the equilibrium state.
In practical measurements, there always exists a finite resolution limit, which can be regarded as the physical origin of coarse-graining. When measuring a continuous PDF $P(x)$, we first divide the continuous region $x\in[0,L]$ into N intervals and measure the probability of appearing in the interval between $x_n$ and $x_n + \Delta x$, denoted as $p_n = P(x_n)\Delta x$. In the limit $\Delta x\rightarrow 0$, the histogram $P(x_n)$ becomes the continuous probability density (see also Appendix A).
Here $\Delta x$ is determined by the measurement resolution, but practical measurements always have a finite resolution limit $\Delta x \ge \delta\tilde x$, so $\Delta x$ cannot approach 0 arbitrarily. Therefore, the fine structure of the continuous PDF $P(x)$ within the minimum resolution interval $\delta\tilde x$ cannot be sensed in practice. Usually $P(x)$ is assumed to be a smooth function within this small interval, so $P(x)$ is effectively coarse-grained by its average value in this small region, and the resolution limit $\delta\tilde x$ practically determines the coarse-graining size.
Recall that in the reflecting boundary case, the momentum distribution $P_p(p)$ approaches an exotic function with a dense comb-like structure, discontinuous everywhere. Such an exotic structure within the resolution limit cannot be observed in practice. For a fixed measurement resolution $\delta\tilde p$, there always exists a time $t_{\delta\tilde p}$ such that for $t > t_{\delta\tilde p}$ the “comb teeth” in Figure 7a are finer than the resolution $\delta\tilde p$. Then, for $t \gg t_{\delta\tilde p}$, the coarse-grained distribution $\tilde P_p(p)$ (with coarse-graining size $\delta\tilde p$) approaches the thermal distribution well (Figure 7b).
In Figure 6c, we have shown that all the moments $\langle p^n\rangle$ of $P_p(p,t)$ coincide with those of the initial MB distribution. Due to the finite resolution limit, we again have no way to tell the difference between the exotic function $P_p(p,t)$ and its coarse-graining $\tilde P_p(p,t)$, which goes back to the initial MB distribution (Figure 7a). In this sense, the momentum distribution $P_p(p,t)$ has no “practical” difference from the thermal equilibrium distribution.
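The resolution-induced coarse-graining of Equation (40) can also be illustrated numerically: block-averaging the exact periodic-box ensemble over finite phase-space cells leaves the exact Gibbs entropy (numerically) constant while the coarse-grained entropy grows. The cell size and all parameters below are illustrative assumptions:

```python
import numpy as np

# Sketch of the coarse-graining of Eq. (40): block-average the exact ensemble
# of the periodic-box diffusion and compare the Gibbs entropies.
# Illustrative parameters as before; hypothetical block size 20 x 20.
L_box, a, b, p_T, Ncell = 1.0, 0.0, 0.5, 1.0, 20
x = np.linspace(0, L_box, 400, endpoint=False)
p = np.linspace(-5, 5, 400)
dx, dp = x[1] - x[0], p[1] - p[0]
X, P = np.meshgrid(x, p, indexing="ij")
Lam = np.exp(-p**2 / (2 * p_T**2)) / (np.sqrt(2 * np.pi) * p_T)

def entropy(w, dV):
    w = np.where(w > 0, w, 1.0)
    return -np.sum(w * np.log(w)) * dV

def coarse_grain(rho, n):
    """Average rho over n x n phase-space cells, as in Eq. (40)."""
    cells = rho.reshape(rho.shape[0] // n, n, rho.shape[1] // n, n).mean(axis=(1, 3))
    return np.repeat(np.repeat(cells, n, axis=0), n, axis=1)

for t in [0.0, 1.0, 4.0, 8.0]:
    X0 = np.mod(X - P * t, L_box)
    rho = Lam[None, :] * ((X0 >= a) & (X0 < b)) / (b - a)
    print(f"t={t:4.1f}  exact S_G = {entropy(rho, dx*dp):.3f}   "
          f"coarse-grained S_G = {entropy(coarse_grain(rho, Ncell), dx*dp):.3f}")
```

In this sketch the coarse-grained entropy rises by roughly $\ln(L/L_0)\approx 0.69$, matching the correlation entropy of Section 3.4, while the exact value stays put.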

3.6. Entropy “Decreasing” Process

Now we see that the correlation entropy, rather than the total entropy, coincides more closely with the irreversible entropy increase of macroscopic thermodynamics. Here we show that it is possible, although hardly feasible in practice, to construct an “entropy decreasing” process.
To achieve this, we first let the ideal gas experience the above “entropy increasing” diffusion process for a certain time t * (Figure 3 and Figure 5). From the state ϱ ( p , x , t * ) at this moment (e.g., Figure 5d), we construct a new “initial state” by reversing its momentum, ϱ ′ ( 0 ) = ϱ ( − p , x , t * ) . Since the Liouville equation obeys time-reversal symmetry, this new “initial state” evolves into ϱ ′ ( t * ) = ϱ ( − p , x , 0 ) after time t * , which is just the original equilibrium state confined in x ∈ [ a , b ] (Figure 5a). That means the ideal gas exhibits a process of “reversed diffusion”.
During this time-reversed process, the total Gibbs entropy, which contains the full information, still keeps constant. However, the correlation entropy change Δ i S (whether coarse-grained or not) exactly experiences the reversed “backward” evolution of Figure 6, which is an “entropy decreasing” process.
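A minimal numerical sketch of this reversed diffusion (assuming non-interacting particles in a box x ∈ [0, L] with specular reflecting walls, evolved with the standard “unfolding” trick for free flight between reflections; the parameters below are illustrative and not those behind the figures):

```python
import numpy as np

rng = np.random.default_rng(0)

# Non-interacting particles in a box [0, L] with specular (reflecting) walls.
m, L, p_T = 1.0, 1.0, 1.0
a, b = 0.2, 0.4                        # initial spatial confinement x in [a, b]
N = 100000
x0 = rng.uniform(a, b, N)              # uniform positions in [a, b]
p0 = rng.normal(0.0, p_T, N)           # MB (Gaussian) momenta

def evolve(x, p, t):
    """Free flight plus specular reflections, via unfolding onto a 2L-periodic line."""
    y = np.mod(x + (p / m) * t, 2 * L)
    x_t = np.where(y <= L, y, 2 * L - y)     # folded position in [0, L]
    p_t = np.where(y <= L, p, -p)            # momentum sign after the reflections
    return x_t, p_t

t_star = 5.0 * m * L / p_T
x1, p1 = evolve(x0, p0, t_star)        # "forward" diffusion: positions spread over [0, L]

# Momentum-reversed "initial state", evolved forward again by t*
x2, p2 = evolve(x1, -p1, t_star)       # "reversed diffusion"
print("back in [a, b]:", np.all((x2 > a - 1e-9) & (x2 < b + 1e-9)))

# Coarse spatial entropy from a histogram: it rises during the forward leg
# and returns to its initial value after the reversed leg.
def hist_entropy(x, bins=50):
    q, _ = np.histogram(x, bins=bins, range=(0, L))
    q = q[q > 0] / len(x)
    return -np.sum(q * np.log(q))

print("S_x(0)   =", hist_entropy(x0))
print("S_x(t*)  =", hist_entropy(x1))
print("S_x(2t*) =", hist_entropy(x2))
```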
This is just the idea of the Loschmidt paradox [8,57,58,59,60], except for two subtle differences: (1) here we consider an ideal gas with no particle collisions, whereas the original Loschmidt paradox concerned the Boltzmann transport equation in the presence of particle collisions; (2) here we consider the correlation entropy between the spatial and momentum distributions, while the Boltzmann equation concerns the entropy of the single-particle PDF.
We must notice that such an initial state ϱ ′ ( 0 ) is NOT an equilibrium state, but contains very delicate correlations between the spatial and momentum distributions [5]. To realize such a time-reversed process, the initial state must be precisely prepared to contain these specific correlations between the marginal distributions (Figure 5d), which is definitely quite difficult in practical operation. Therefore, such an “entropy decreasing” process is rarely seen in practice (except in some special cases such as the Hahn echo [61] and the back-propagating wave [62,63]).

4. The Correlation in the Boltzmann Equation

In the above section, we focused on the diffusion of an ideal gas with no inter-particle interactions, and the initial momentum distribution was assumed to be the MB distribution a priori. If the initial momentum distribution is not the MB one, the particles would still exhibit the irreversible diffusion, filling the whole volume, but P p ( p , t ) would never become the thermal equilibrium distribution.
If there exist weak interactions between the particles, as shown by the Boltzmann H-theorem, the single-particle PDF f ( p , r , t ) could always approach the MB distribution as its steady state, together with the irreversible entropy increase. In addition, in this case, the above “coarse-graining” is not needed for f ( p , r , t ) to approach the thermal equilibrium distribution.
Notice that the Boltzmann H-theorem concerns only the single-particle PDF f ( p , r , t ) . The total ensemble ρ ( P , Q , t ) in the 6 N -dimensional phase space still follows the Liouville equation, thus its entropy does not change with time. Therefore, the H-theorem conclusion can also be understood as saying that the inter-particle correlations are increasing irreversibly [1,5,32]. Again, this is quite similar to the correlation understanding in the last two sections, and here the inter-particle correlation entropy could well reproduce the entropy increase result of standard macroscopic thermodynamics.
The debates about the Boltzmann H-theorem started ever since its birth. The most important one is the Loschmidt paradox raised in 1876: due to the time-reversal symmetry of the microscopic dynamics of the particles, once all their momenta are reversed at the same time, the particles should follow their incoming paths “backward”, which is surely a possible evolution of the microstate; however, if the entropy of f ( p , r , t ) must increase (according to the H-theorem), then the corresponding “backward” evolution must give an entropy decreasing process, which contradicts the H-theorem conclusion.
In this section, we will see that the slowly increasing inter-particle correlations are helpful in understanding this paradox. Namely, due to the significant inter-particle correlation established during the “forward” process, the “backward” process no longer satisfies the molecular-disorder assumption, which is a crucial approximation in deriving the Boltzmann equation; thus it is not suitable to be described by the Boltzmann equation, and the H-theorem does not apply in this case either.

4.1. Derivation of the Boltzmann Equation

We first briefly review the derivation of the Boltzmann transport equation [1,64,65]. When there is no external force, the evolution equation of the single-particle microstate PDF f ( p 1 , r 1 , t ) is
[ ∂ t + ( p 1 / m ) · ∇ r 1 ] f ( p 1 , r 1 , t ) = ∂ t f | col .
The left-hand side is just the Liouville Equation (29) of the ideal gas above, and the right-hand side is the probability change due to particle collisions (assuming only bipartite collisions occur).
This collision term, rewritten as ∂ t f | col = Δ ( + ) − Δ ( − ) , contains two contributions: Δ ( − ) means a collision between two particles ( p 1 r 1 ; p 2 r 2 ) → ( p 1 ′ r 1 ′ ; p 2 ′ r 2 ′ ) kicks particle-1 out of its original region around ( p 1 , r 1 ) , thus f ( p 1 , r 1 , t ) decreases; likewise, Δ ( + ) means a collision ( p 1 ′ r 1 ′ ; p 2 ′ r 2 ′ ) → ( p 1 r 1 ; p 2 r 2 ) kicks particle-1 into the region around ( p 1 , r 1 ) , and that increases f ( p 1 , r 1 , t ) .
These two collision contributions can be further written down as (denoting d ς i : = d 3 r i d 3 p i )
Δ ( − ) = ∫ F ( p 1 r 1 ; p 2 r 2 , t ) χ [ 12 → 1 ′ 2 ′ ] d ς 1 ′ d ς 2 ′ d ς 2 , Δ ( + ) = ∫ F ( p 1 ′ r 1 ′ ; p 2 ′ r 2 ′ , t ) χ [ 1 ′ 2 ′ → 12 ] d ς 1 ′ d ς 2 ′ d ς 2 ,
where F ( p 1 r 1 ; p 2 r 2 , t ) is the two-particle joint PDF, and χ [ 12 → 1 ′ 2 ′ ] denotes the transition rate (or scattering matrix) from the initial state ( p 1 r 1 ; p 2 r 2 ) to the final state ( p 1 ′ r 1 ′ ; p 2 ′ r 2 ′ ) . Due to the time-reversal and inversion symmetry of the microscopic scattering process, the transition rates χ [ 12 → 1 ′ 2 ′ ] and χ [ 1 ′ 2 ′ → 12 ] equal each other (see Figure 8). Therefore, the above Equation (41) can be further written as [denoting F 12 : = F ( p 1 r 1 ; p 2 r 2 , t ) ]
∂ t f ( p 1 , r 1 , t ) + ( p 1 / m ) · ∇ r 1 f = ∫ ( F 1 ′ 2 ′ − F 12 ) χ [ 12 → 1 ′ 2 ′ ] d ς 1 ′ d ς 2 ′ d ς 2 .
Now we adopt the “molecular-disorder assumption”, i.e., the two-particle joint PDF can be approximately written as the product of the two single-particle PDFs
F ( p 1 r 1 ; p 2 r 2 , t ) ≈ f ( p 1 , r 1 , t ) × f ( p 2 , r 2 , t ) .
This assumption is usually known as the Stosszahlansatz, or the molecular chaos hypothesis. The word “Stosszahlansatz” was introduced by Ehrenfest in 1912, and its original meaning is “the assumption about the number of collisions” [8,60,66]. Here we adopt the wording from Boltzmann’s lectures of 1896 [64,65]. Essentially, this requires that the correlation between the two colliding particles is negligible. Then the Boltzmann transport equation is obtained as [denoting f i : = f ( p i , r i , t ) ]
[ ∂ t + ( p 1 / m ) · ∇ r 1 ] f ( p 1 , r 1 , t ) = ∫ ( f 1 ′ f 2 ′ − f 1 f 2 ) χ [ 12 → 1 ′ 2 ′ ] d ς 1 ′ d ς 2 ′ d ς 2 .
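The structure of the allowed transitions ( p 1 , p 2 ) → ( p 1 ′ , p 2 ′ ) can be made concrete with a small sketch. Assuming, purely for illustration, equal-mass particles in two dimensions undergoing elastic, pointlike collisions (an assumption not specified in the text), the post-collision momenta are obtained by rotating the relative momentum while keeping the total momentum fixed; this automatically conserves momentum and kinetic energy, and rotating back by the opposite angle recovers ( p 1 , p 2 ) , in line with the symmetry χ [ 12 → 1 ′ 2 ′ ] = χ [ 1 ′ 2 ′ → 12 ] discussed in Figure 8.

```python
import numpy as np

def collide(p1, p2, theta):
    """Elastic collision of two equal-mass particles in 2D:
    keep the total momentum, rotate the relative momentum by angle theta."""
    P = p1 + p2                       # conserved total momentum
    q = 0.5 * (p1 - p2)               # relative momentum
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])   # rotation matrix
    q_new = R @ q
    return 0.5 * P + q_new, 0.5 * P - q_new

rng = np.random.default_rng(1)
p1, p2 = rng.normal(size=2), rng.normal(size=2)
theta = rng.uniform(0, 2 * np.pi)
p1n, p2n = collide(p1, p2, theta)

# Momentum and kinetic energy are conserved by construction
print(np.allclose(p1 + p2, p1n + p2n),
      np.isclose(p1 @ p1 + p2 @ p2, p1n @ p1n + p2n @ p2n))

# The inverse scattering (rotation by -theta) maps (p1', p2') back to (p1, p2)
p1b, p2b = collide(p1n, p2n, -theta)
print(np.allclose(p1b, p1), np.allclose(p2b, p2))
```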

4.2. H-theorem and the Steady State

Now we further review how to prove the H-theorem from the above Boltzmann equation, and find its steady state. Defining the Boltzmann H-function as H [ f ( p 1 , r 1 , t ) ] : = ∫ d ς 1 f 1 ln f 1 , the Boltzmann Equation (45) guarantees that H ( t ) decreases monotonically ( d H / d t ≤ 0 ), and this is the H-theorem.
To prove this theorem, we substitute the Boltzmann equation into the time derivative d H / d t = ∫ d ς 1 ∂ t f ( p 1 , r 1 , t ) · ln f ( p 1 , r 1 , t ) (the additional term ∫ d ς 1 ∂ t f arising from differentiating the logarithm vanishes because f stays normalized). The Liouville diffusion term gives (denoting v : = p / m )
∫ d ς 1 ( p 1 / m ) · ∇ r 1 f 1 · ln f 1 = ∫ d ς ∇ r · ( v f ln f − v f ) ,
which can be turned into a surface integral and vanishes. In addition, the collision term gives
d H / d t = ∫ ( f 1 ′ f 2 ′ − f 1 f 2 ) χ [ 12 → 1 ′ 2 ′ ] ln f 1 d ς 1 d ς 2 d ς 1 ′ d ς 2 ′ = ( 1 / 2 ) ∫ ( f 1 ′ f 2 ′ − f 1 f 2 ) χ [ 12 → 1 ′ 2 ′ ] ln f 1 d ς 1 d ς 2 d ς 1 ′ d ς 2 ′ + ( 1 / 2 ) ∫ ( f 2 ′ f 1 ′ − f 2 f 1 ) χ [ 21 → 2 ′ 1 ′ ] ln f 2 d ς 2 d ς 1 d ς 2 ′ d ς 1 ′ = ( 1 / 2 ) ∫ ( f 1 ′ f 2 ′ − f 1 f 2 ) χ [ 12 → 1 ′ 2 ′ ] ln ( f 1 f 2 ) d ς 1 d ς 2 d ς 1 ′ d ς 2 ′ ,
where the second equality holds because exchanging the integration variables 1 ↔ 2 gives the same value. We can further apply a similar trick by exchanging the integration variables ( 12 ) ↔ ( 1 ′ 2 ′ ) , and that gives
d H / d t = − ( 1 / 4 ) ∫ ( f 1 ′ f 2 ′ − f 1 f 2 ) ( ln f 1 ′ f 2 ′ − ln f 1 f 2 ) χ [ 12 → 1 ′ 2 ′ ] d ς 1 d ς 2 d ς 1 ′ d ς 2 ′ .
Here the transition rate χ [ 12 → 1 ′ 2 ′ ] is non-negative, and notice that ( f 1 ′ f 2 ′ − f 1 f 2 ) ( ln f 1 ′ f 2 ′ − ln f 1 f 2 ) ≥ 0 always holds for any PDF f i . Therefore, we obtain d H / d t ≤ 0 , which means the function H ( t ) decreases monotonically, and this completes the proof.
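The sign of the integrand follows from an elementary property of the logarithm; a one-line check, writing a = f 1 ′ f 2 ′ and b = f 1 f 2 as arbitrary positive numbers:

```latex
% Since \ln is strictly increasing, (a - b) and (\ln a - \ln b) always carry the same sign, hence
(a - b)\,(\ln a - \ln b) \;\ge\; 0 , \qquad \text{with equality iff } a = b ,
% i.e., iff the detailed balance condition f_1' f_2' = f_1 f_2 holds.
```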
In the above inequality, the equality holds if and only if f 1 ′ f 2 ′ = f 1 f 2 , which means the collision-induced increase Δ ( + ) and decrease Δ ( − ) of f ( p , r ) balance each other everywhere; thus it is also known as the detailed balance condition.
The time-independent steady state of f ( p , r , t ) can be obtained from this detailed balance condition f 1 ′ f 2 ′ = f 1 f 2 . Taking the logarithm of both sides gives
ln f ( p 1 ′ , r 1 ′ ) + ln f ( p 2 ′ , r 2 ′ ) = ln f ( p 1 , r 1 ) + ln f ( p 2 , r 2 ) .
Notice that the two sides of the above equation depend on different variables and have the form of a conservation law. Therefore, ln f must be a combination of conserved quantities. During the collision ( p 1 r 1 ; p 2 r 2 ) → ( p 1 ′ r 1 ′ ; p 2 ′ r 2 ′ ) , the particles collide at the same position, and the total momentum and energy are conserved; thus ln f must be a combination of these quantities, namely, ln f = C 0 + C 1 · p + C 2 p 2 , where C 0 , C 1 , C 2 are constants. Therefore, f ( p , r ) must be a Gaussian distribution of p at any position r .
Furthermore, the diffusion term in Equation (45) requires p · ∇ r f = 0 in the steady state, thus f ( p , r ) must be homogeneous in the position r . The average momentum ⟨ p ⟩ should be 0 for a stationary gas. Therefore, the steady state of the Boltzmann Equation (45) is a Gaussian distribution f ( p , r ) ∝ exp [ − p 2 / 2 p ¯ T 2 ] independent of the position r , which is just the MB distribution.
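A minimal stochastic sketch of this relaxation, assuming a spatially homogeneous gas of equal-mass particles in two dimensions whose pairs collide by a random rotation of their relative momentum (a toy rule chosen for illustration, not the model behind the figures in the text): the estimated H decreases monotonically and the momentum marginal approaches a Gaussian, in line with the H-theorem.

```python
import numpy as np

rng = np.random.default_rng(2)

# Spatially homogeneous toy gas: N equal-mass particles, 2D momenta.
# Start far from equilibrium: momenta uniform on a square (not Maxwellian).
N = 200000
p = rng.uniform(-1.0, 1.0, size=(N, 2))

def H_estimate(p, bins=60):
    """Estimate H = integral of f ln f from a 2D histogram of the momenta."""
    f, xe, ye = np.histogram2d(p[:, 0], p[:, 1], bins=bins, density=True)
    dA = (xe[1] - xe[0]) * (ye[1] - ye[0])
    f = f[f > 0]
    return np.sum(f * np.log(f)) * dA

def collide_pairs(p, n_pairs):
    """Random pair collisions: rotate each pair's relative momentum by a random
    angle (conserves total momentum and kinetic energy pair by pair)."""
    i = rng.permutation(len(p))[: 2 * n_pairs].reshape(n_pairs, 2)
    p1, p2 = p[i[:, 0]], p[i[:, 1]]
    P, q = p1 + p2, 0.5 * (p1 - p2)
    th = rng.uniform(0, 2 * np.pi, n_pairs)
    c, s = np.cos(th), np.sin(th)
    q_new = np.stack([c * q[:, 0] - s * q[:, 1], s * q[:, 0] + c * q[:, 1]], axis=1)
    p[i[:, 0]], p[i[:, 1]] = 0.5 * P + q_new, 0.5 * P - q_new
    return p

E0 = np.sum(p**2)                      # total kinetic energy (up to the factor 1/2m)
for step in range(11):
    if step % 2 == 0:
        kurt = (p[:, 0]**4).mean() / (p[:, 0]**2).mean()**2
        print(f"step {step:2d}:  H = {H_estimate(p):+.4f},  "
              f"<p_x^4>/<p_x^2>^2 = {kurt:.3f}")
    p = collide_pairs(p, N // 2)

print("energy conserved:", np.isclose(E0, np.sum(p**2)))
# H decreases toward its minimum, and the kurtosis ratio approaches 3,
# the value for a Gaussian (MB) momentum distribution.
```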

4.3. Molecular-Disorder Assumption and Loschmidt Paradox

In the above two sections, we demonstrated all the critical steps in deriving the Boltzmann equation. Notice that there is no special requirement on the interaction form of the collisions, as long as it is short-ranged so that only bipartite collisions occur. The contribution of the collision interaction is implicitly contained in the transition rate χ [ 12 → 1 ′ 2 ′ ] , and the only properties we utilized are (1) χ [ 12 → 1 ′ 2 ′ ] ≥ 0 and (2) χ [ 12 → 1 ′ 2 ′ ] = χ [ 1 ′ 2 ′ → 12 ] . Thus it does not matter whether the interaction is nonlinear.
No doubt, the molecular-disorder assumption ( F 12 ≈ f 1 × f 2 ) is the most important basis of the above derivation. As mentioned by Boltzmann (p. 29 of Reference [65]), “…The only assumption made here is that the velocity distribution is molecular-disordered (namely, F 12 ≈ f 1 × f 2 in our notation) at the beginning, and remains so. With this assumption, one can prove that H can only decrease, and also that the velocity distribution must approach that of Maxwell.” Before this approximation, Equation (43) is still formally exact. Clearly, the validity of this assumption, which is imposed on the particle correlations, determines whether the Boltzmann Equation (45) holds. Now we re-examine this assumption as well as the Loschmidt paradox.
Once two particles collide with each other, they become correlated. In a dilute gas, collisions do not happen very frequently, and once two particles have collided, they hardly ever meet each other again. Therefore, if initially there are no correlations between the particles, we can expect that, on average, the collision-induced bipartite correlations remain negligibly small, and thus the molecular-disorder assumption holds well.
Now we look at the situation in the Loschmidt paradox. First, the particles experience a “forward” diffusion process for a certain time. According to the H-theorem, the entropy increases in this process. Then suppose all the particle momenta are suddenly reversed at this moment. From this new initial state, the particles evolve “backward” exactly along their incoming trajectories, and thus exhibit an entropy decreasing process, which contradicts the H-theorem conclusion; this is the Loschmidt paradox.
However, we should notice that the first “forward” evolution has already established significant (although very small in magnitude) bipartite correlations in this new “initial state” [28]. This is quite similar to the above discussion of the momentum-position correlation in Section 3.6. Thus, the molecular-disorder assumption F 12 ≈ f 1 × f 2 does not apply in this case. As a result, the subsequent “backward” evolution is not suitably described by the Boltzmann Equation (45). Therefore, the entropy increasing conclusion of the H-theorem ( d H / d t ≤ 0 ) does not need to hold for this “backward” process. Moreover, the preparation of such a specific initial state is practically unfeasible, thus the “backward” entropy decreasing process is rarely seen.
We emphasize that the Boltzmann Equation (45) concerns the single-particle PDF f ( p , r , t ) , which is obtained by averaging over the other N − 1 particles of the full ensemble ρ ( P , Q , t ) . Clearly, f ( p , r , t ) omits much information in ρ ( P , Q , t ) , but it is enough to give most macroscopic thermodynamic quantities. For example, the average kinetic energy of a single molecule, ⟨ p 2 ⟩ = ∫ d ς p 2 f ( p , r ) , determines the gas temperature T, and the gas pressure on the wall is given by P = ∫ p x > 0 d ς ( 2 p x ) · v x f ( p , r ) [1].
In contrast, the inter-particle correlations ignored by the single-particle PDF f ( p , r , t ) are quite difficult to sense in practice. Therefore, the N-particle ensemble may be “inferred” as ρ ˜ inf ( P , Q , t ) = ∏ i = 1 N f ( p i , r i , t ) , which clearly omits the inter-particle correlations in the exact ρ ( P , Q , t ) . Similar to the discussion in Section 3.4, based on this inferred ensemble ρ ˜ inf ( P , Q , t ) , the entropy change gives Δ i S = S G [ ρ ˜ inf ( t ) ] − S G [ ρ ˜ inf ( 0 ) ] = N ln ( V / V 0 ) , which exactly reproduces the result of standard thermodynamics. Thus Δ i S indeed characterizes the increase of the inter-particle correlations.
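A short check of the quoted value (a sketch assuming, as in the free-expansion setup above, that the single-particle PDF factorizes into a spatial part and an MB momentum part, with only the spatial part changing from uniform over V 0 to uniform over V ; k B = 1 as elsewhere in the text):

```latex
% The Gibbs entropy of the factorized ("inferred") ensemble is additive:
S_G\big[\tilde{\rho}_{\rm inf}\big]
  = -\int \prod_{i=1}^{N} f(\mathbf{p}_i,\mathbf{r}_i,t)\,
          \ln\!\Big[\prod_{j=1}^{N} f(\mathbf{p}_j,\mathbf{r}_j,t)\Big]\,
          d\varsigma_1 \cdots d\varsigma_N
  = N\, S[f(t)] .
% With f = P_x(\mathbf{r},t)\, f_{\rm MB}(\mathbf{p}) and the momentum part unchanged,
% only the uniform spatial part contributes to the change:
\Delta_i S = N\big(S[f(t)] - S[f(0)]\big) = N\,(\ln V - \ln V_0) = N \ln (V/V_0) .
```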
In sum, even in the presence of particle collisions, the full N-particle ensemble ρ ( P , Q , t ) still follows the Liouville equation exactly, thus its Gibbs entropy does not change. On the other hand, the single-particle PDF f ( p , r , t ) follows the Boltzmann equation, thus its entropy keeps increasing until it reaches the steady state. This difference roots in our ignorance of the inter-particle correlations in the full ρ ( P , Q , t ) .

5. Summary

In this paper, we studied the correlation production in open and isolated thermodynamic systems. In a many-body system, the microscopic dynamics of the whole system obeys time-reversal symmetry, which guarantees that the entropy of the global state does not change with time. Consequently, the full ensemble state does not evolve towards the new equilibrium state expected from macroscopic intuition. However, the correlation between different local DoF, as measured by their mutual information, generally increases monotonically, and its amount well reproduces the entropy increase result of standard macroscopic thermodynamics.
In open systems, as described by the second law of standard thermodynamics, the irreversible entropy production increases monotonically. It turns out that this irreversible entropy production is just equal to the correlation production between the system and its environment. Thus, the second law can be equivalently understood as stating that the system-bath correlation increases monotonically, while at the same time the system plus bath as a whole still keeps a constant entropy.
In isolated systems, there is no specific partition into “system” and “environment”, but we see that the momentum and spatial distributions, as the marginal distributions of the total ensemble, exhibit macroscopic irreversibility, and their correlation increases monotonically, which reproduces the entropy increase result of standard thermodynamics. In the presence of particle collisions, different particles also establish correlations with each other. As a result, the single-particle distribution exhibits macroscopic irreversibility as well as the entropy increase of standard thermodynamics, which is just the result of the Boltzmann H-theorem. At the same time, the full ensemble ρ ( P , Q , t ) of the many-body system still follows the Liouville equation, which guarantees that its entropy does not change with time.
It is worth noticing that, in practice, it is usually only partial information (e.g., marginal distributions, few-body observable expectations) that is directly accessible to observation. Indeed, most macroscopic thermodynamic quantities are obtained from just such partial information, like the one-body distribution, and that is why they exhibit irreversible behaviors. In contrast, due to the practical restrictions in measurements, the dynamics of the full ensemble state, such as its constant entropy, is quite difficult to sense in practice.
In sum, the global state keeps a constant entropy, while the partial information exhibits the irreversible entropy increase, and in practice it is the partial information that is directly observed. In this sense, the macroscopic irreversible entropy increase does not contradict the microscopic reversibility. Such a correlation production understanding applies to both quantum and classical systems, no matter whether there exist complicated particle interactions, and it can be well used for time-dependent non-equilibrium states. Moreover, it is worth noticing that, if the bath of an open system is in a non-thermal state, this is beyond the application scope of standard thermodynamics, yet the correlation production understanding still applies. Such correlations appear in many recent studies of thermodynamics, and it is also quite interesting to notice that a similar idea can be used to understand the paradox of black hole information loss, where the mutual information of the radiation particles is carefully considered [67,68,69,70].

Funding

This study is supported by the Beijing Institute of Technology Research Fund Program for Young Scholars.

Acknowledgments

S.-W.L. greatly appreciates the helpful discussions with R. Brick, M. B. Kim, R. Nessler, M. O. Scully, A. Svidzinsky, Z. Yi, and L. Zhang at Texas A&M University, L. Cohen at the City University of New York, and H. Dong at the Chinese Academy of Engineering Physics.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. The Entropy of a Continuous Probability Distribution

For a finite sample space of N events with probabilities { p n } , the information entropy is well defined as S { p n } : = − ∑ n p n ln p n . However, the situation for a continuous PDF is not so trivial, and a simple generalization from the discrete case leads to a divergence problem.
For example, consider a uniform PDF P ( x ) = 1 / L on the interval x ∈ [ 0 , L ] . To calculate its entropy, we first divide the interval into N equal pieces, and then turn the discrete summation into a continuous integral by taking the limit N → ∞ . Clearly, each piece takes the probability p n = 1 / N , thus the entropy is S = − ∑ n ( 1 / N ) ln ( 1 / N ) = ln N , which diverges when N → ∞ .
Generally, for a continuous PDF P ( x ) on x ∈ [ 0 , L ] , after the division into N pieces, the entropy is
S ( N ) = − ∑ n = 0 N − 1 P ( x n ) Δ x · ln [ P ( x n ) Δ x ] ,
where Δ x = L / N and x n = n · Δ x . Remember that here P ( x ) is the probability density, which has the unit of inverse length [ L − 1 ] , and P ( x ) Δ x is the dimensionless probability. When taking the limit N → ∞ , the above equation becomes
lim N → ∞ S ( N ) = lim N → ∞ ∑ n = 0 N − 1 [ − P ( x n ) ln P ( x n ) · Δ x − P ( x n ) · Δ x ln ( L / N ) ] = − ∫ 0 L d x P ( x ) ln P ( x ) − ∫ 0 L d x P ( x ) ln L + lim N → ∞ ln N · ∑ n = 0 N − 1 P ( x n ) Δ x : = − ∫ 0 L d x P ( x ) ln [ P ( x ) · L ] + ln N .
In the last term of the second step, the summation converges to 1 when N → ∞ , thus this term diverges as ln N , and we simply denote it as ln N . In addition, notice that in the first term, [ P ( x ) · L ] now appears in the logarithm as a whole dimensionless quantity.
The diverging term ln N cannot simply be omitted. Considering the above example of the uniform distribution P ( x ) = 1 / L on x ∈ [ 0 , L ] , which is supposed to give the largest entropy, we see that the first term of the above result gives 0, and the entropy is exactly given by this diverging term, S ( N ) = ln N .
Therefore, when generalizing the information entropy to a continuous PDF, it always contains a diverging term ln N . There are two ways to handle this problem. First, whenever this entropy is under discussion, we always focus on the entropy difference between two states rather than on their absolute values. For example, in Section 3.4 we always focus on the entropy change Δ S : = S ( t ) − S ( 0 ) compared with the initial state. In this case, the divergence of ln N in S ( t ) and S ( 0 ) cancels out. Likewise, the length L in the above ln [ P ( x ) · L ] also cancels. Therefore, in the definition (39) of S x and S p , the probability densities P x ( x ) and P p ( p ) appear in the logarithm directly, although they are not dimensionless.
Second, from the above simple example of the uniform PDF, we can see that this divergence originates from the idealization of a continuous probability distribution. A continuous probability density is not directly accessible in practical measurements; instead, we first divide the continuous area x ∈ [ 0 , L ] into N intervals, and then measure the probability of appearing in the interval between x n and x n + Δ x , which is denoted as p n = P ( x n ) Δ x . In the limit Δ x → 0 , the histogram P ( x n ) becomes the continuous probability density (see Section 3.5). Here Δ x is the resolution of the measurement. A finer resolution indicates a larger sample space and more possible probability distributions, which results in the divergence of ln N . However, most practical measurements have a certain resolution limit, thus Δ x cannot approach 0 arbitrarily. If this resolution restriction is considered, the diverging term ln N is bounded by the finite resolution.
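A quick numerical check of this decomposition (a sketch using an arbitrary smooth PDF on [ 0 , L ] chosen purely for illustration):

```python
import numpy as np

# Check S(N) = -sum_n p_n ln p_n  ==  -int P ln(P L) dx + ln N
# for a smooth PDF on [0, L]; here P(x) is proportional to 2 + sin(2*pi*x/L).
L = 3.0

def P(x):
    raw = 2.0 + np.sin(2 * np.pi * x / L)
    return raw / (2.0 * L)          # normalized: integral of (2 + sin) over [0, L] is 2L

for N in (100, 1000, 10000):
    dx = L / N
    xn = (np.arange(N) + 0.5) * dx                      # midpoints of the N intervals
    pn = P(xn) * dx                                     # p_n = P(x_n) * dx
    S_N = -np.sum(pn * np.log(pn))                      # discrete histogram entropy
    S_cont = -np.sum(P(xn) * np.log(P(xn) * L)) * dx    # -int P ln(P L) dx
    print(f"N = {N:5d}:  S(N) = {S_N:.5f},  -int P ln(PL) dx + ln N = {S_cont + np.log(N):.5f}")
# The two expressions agree ever more closely as N grows, while both diverge like ln N.
```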

References

  1. Huang, K. Statistical Mechanics, 2nd ed.; Wiley: New York, NY, USA, 1987. [Google Scholar]
  2. Le Bellac, M.; Mortessagne, F.; Batrouni, G.G. Equilibrium and Non-Equilibrium Statistical Thermodynamics; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  3. Gibbs, J.W. Elementary Principles in Statistical Mechanics; C. Scribner’s Sons: New York, NY, USA, 1902. [Google Scholar]
  4. Hobson, A. Concepts In Statistical Mechanics, 1st ed.; Routledge: New York, NY, USA, 1971. [Google Scholar]
  5. Jaynes, E.T. Gibbs vs. Boltzmann Entropies. Am. J. Phys. 1965, 33, 391–398. [Google Scholar] [CrossRef]
  6. Prigogine, I. Time, Structure, and Fluctuations. Science 1978, 201, 777–785. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Garrett, A.J.M. Macroirreversibility and microreversibility reconciled: The second law. In Maximum Entropy in Action; Oxford University Press: Oxford, UK, 1991; pp. 139–170. [Google Scholar]
  8. Uffink, J. Compendium of the foundations of classical statistical physics. In Philosophy of Physics; Volume Part B, Handbook of the Philosophy of Science; North Holland: Amsterdam, The Netherlands, 2006; p. 923. [Google Scholar]
  9. Swendsen, R.H. Explaining irreversibility. Am. J. Phys. 2008, 76, 643–648. [Google Scholar] [CrossRef]
  10. Han, X.; Wu, B. Entropy for quantum pure states and quantum H theorem. Phys. Rev. E 2015, 91, 062106. [Google Scholar] [CrossRef]
  11. Dong, H.; Wang, D.W.; Kim, M.B. How isolated is enough for an “isolated” system in statistical mechanics? arXiv, 2017; arXiv:1706.02636. [Google Scholar]
  12. Mackey, M.C. The dynamic origin of increasing entropy. Rev. Mod. Phys. 1989, 61, 981–1015. [Google Scholar] [CrossRef]
  13. Esposito, M.; Lindenberg, K.; Van den Broeck, C. Entropy production as correlation between system and reservoir. New J. Phys. 2010, 12, 013013. [Google Scholar] [CrossRef] [Green Version]
  14. Manzano, G.; Galve, F.; Zambrini, R.; Parrondo, J.M.R. Entropy production and thermodynamic power of the squeezed thermal reservoir. Phys. Rev. E 2016, 93, 052120. [Google Scholar] [CrossRef] [PubMed]
  15. Alipour, S.; Benatti, F.; Bakhshinezhad, F.; Afsary, M.; Marcantoni, S.; Rezakhani, A.T. Correlations in quantum thermodynamics: Heat, work, and entropy production. Sci. Rep. 2016, 6, 35568. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Li, S.W. Production rate of the system-bath mutual information. Phys. Rev. E 2017, 96, 012139. [Google Scholar] [CrossRef] [PubMed]
  17. Bergmann, P.G.; Lebowitz, J.L. New Approach to Nonequilibrium Processes. Phys. Rev. 1955, 99, 578–587. [Google Scholar] [CrossRef]
  18. De Groot, S.R.; Mazur, P. Non-Equilibrium Thermodynamics; North-Holland: Amsterdam, The Netherlands, 1962. [Google Scholar]
  19. Nicolis, G.; Prigogine, I. Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order through Fluctuations, 1st ed.; Wiley: New York, NY, USA, 1977. [Google Scholar]
  20. Reichl, L.E. A Modern Course in Statistical Physics; Wiley: New York, NY, USA, 2009. [Google Scholar]
  21. Kondepudi, D.; Prigogine, I. Modern Thermodynamics: From Heat Engines to Dissipative Structures; John Wiley & Sons, Ltd: Chichester, UK, 2014. [Google Scholar]
  22. Aurell, E.; Eichhorn, R. On the von Neumann entropy of a bath linearly coupled to a driven quantum system. New J. Phys. 2015, 17, 065007. [Google Scholar] [CrossRef]
  23. You, Y.N.; Li, S.W. Entropy dynamics of a dephasing model in a squeezed thermal bath. Phys. Rev. A 2018, 97, 012114. [Google Scholar] [CrossRef] [Green Version]
  24. Pucci, L.; Esposito, M.; Peliti, L. Entropy production in quantum Brownian motion. J. Stat. Mech. 2013, 2013, P04005. [Google Scholar] [CrossRef] [Green Version]
  25. Strasberg, P.; Schaller, G.; Brandes, T.; Esposito, M. Quantum and Information Thermodynamics: A Unifying Framework Based on Repeated Interactions. Phys. Rev. X 2017, 7, 021003. [Google Scholar] [CrossRef]
  26. Manzano, G.; Horowitz, J.M.; Parrondo, J.M.R. Quantum Fluctuation Theorems for Arbitrary Environments: Adiabatic and Nonadiabatic Entropy Production. Phys. Rev. X 2018, 8, 031037. [Google Scholar] [CrossRef]
  27. Hobson, A. Irreversibility in Simple Systems. Am. J. Phys. 1966, 34, 411–416. [Google Scholar] [CrossRef]
  28. Chliamovitch, G.; Malaspinas, O.; Chopard, B. Kinetic Theory beyond the Stosszahlansatz. Entropy 2017, 19, 381. [Google Scholar] [CrossRef]
  29. Zhang, Q.R. A general information theoretical proof for the second law of thermodynamics. Int. J. Mod. Phys. E 2008, 17, 531–537. [Google Scholar] [CrossRef]
  30. Zhang, Q.R. Information conservation, entropy increase and statistical irreversibility for an isolated system. Phys. A 2009, 388, 4041–4044. [Google Scholar] [CrossRef] [Green Version]
  31. Horowitz, J.M.; Sagawa, T. Equivalent Definitions of the Quantum Nonadiabatic Entropy Production. J. Stat. Phys. 2014, 156, 55–65. [Google Scholar] [CrossRef] [Green Version]
  32. Kalogeropoulos, N. Time irreversibility from symplectic non-squeezing. Phys. A 2018, 495, 202–210. [Google Scholar] [CrossRef] [Green Version]
  33. Cramer, M.; Dawson, C.M.; Eisert, J.; Osborne, T.J. Exact Relaxation in a Class of Nonequilibrium Quantum Lattice Systems. Phys. Rev. Lett. 2008, 100, 030602. [Google Scholar] [CrossRef] [PubMed]
  34. Eisert, J.; Friesdorf, M.; Gogolin, C. Quantum many-body systems out of equilibrium. Nat. Phys. 2015, 11, 124–130. [Google Scholar] [CrossRef] [Green Version]
  35. Wang, J.; Dong, H.; Li, S.W. Magnetic dipole-dipole interaction induced by the electromagnetic field. Phys. Rev. A 2018, 97, 013819. [Google Scholar] [CrossRef] [Green Version]
  36. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  37. Gardiner, C. Handbook of Stochastic Methods; Springer: Berlin, Germany, 1985; Volume 3. [Google Scholar]
  38. Breuer, H.; Petruccione, F. The Theory of Open Quantum Systems; Oxford University Press: Oxford, UK, 2002. [Google Scholar]
  39. Cai, C.Y.; Li, S.W.; Liu, X.F.; Sun, C.P. Entropy Production of Open Quantum System in Multi-Bath Environment. arXiv, 2014; arXiv:1407.2004. [Google Scholar]
  40. Gorini, V.; Kossakowski, A.; Sudarshan, E.C.G. Completely positive dynamical semigroups of N-level systems. J. Math. Phys. 1976, 17, 821–825. [Google Scholar] [CrossRef]
  41. Lindblad, G. On the generators of quantum dynamical semigroups. Comm. Math. Phys. 1976, 48, 119–130. [Google Scholar] [CrossRef]
  42. Spohn, H. Entropy production for quantum dynamical semigroups. J. Math. Phys. 1978, 19, 1227–1230. [Google Scholar] [CrossRef]
  43. Spohn, H.; Lebowitz, J.L. Irreversible Thermodynamics for Quantum Systems Weakly Coupled to Thermal Reservoirs. In Advances in Chemical Physics; Rice, S.A., Ed.; John Wiley & Sons, Inc.: New York, NY, USA, 1978; pp. 109–142. [Google Scholar]
  44. Alicki, R. The quantum open system as a model of the heat engine. J. Phys. A 1979, 12, L103. [Google Scholar] [CrossRef]
  45. Boukobza, E.; Tannor, D.J. Three-Level Systems as Amplifiers and Attenuators: A Thermodynamic Analysis. Phys. Rev. Lett. 2007, 98, 240601. [Google Scholar] [CrossRef]
  46. Kosloff, R. Quantum Thermodynamics: A Dynamical Viewpoint. Entropy 2013, 15, 2100–2128. [Google Scholar] [CrossRef] [Green Version]
  47. Kosloff, R.; Rezek, Y. The Quantum Harmonic Otto Cycle. Entropy 2017, 19, 136. [Google Scholar] [CrossRef]
  48. Li, S.W.; Kim, M.B.; Scully, M.O. Non-Markovianity in a non-thermal bath. arXiv, 2016; arXiv:1604.03091. [Google Scholar]
  49. Scully, M.O.; Zubairy, M.S.; Agarwal, G.S.; Walther, H. Extracting Work from a Single Heat Bath via Vanishing Quantum Coherence. Science 2003, 299, 862–864. [Google Scholar] [CrossRef] [PubMed]
  50. Roßnagel, J.; Abah, O.; Schmidt-Kaler, F.; Singer, K.; Lutz, E. Nanoscale Heat Engine Beyond the Carnot Limit. Phys. Rev. Lett. 2014, 112, 030602. [Google Scholar] [CrossRef] [PubMed]
  51. Gardas, B.; Deffner, S. Thermodynamic universality of quantum Carnot engines. Phys. Rev. E 2015, 92, 042126. [Google Scholar] [CrossRef]
  52. Walls, D.F.; Milburn, G.J. Quantum Optics, 2nd ed.; Springer: Berlin, Germany, 2008. [Google Scholar]
  53. Lin, G.D. On the moment problems. Statist. Probab. Lett. 1997, 35, 85–90. [Google Scholar] [CrossRef]
  54. Mayato, R.S.; Loughlin, P.; Cohen, L. M-indeterminate distributions in quantum mechanics and the non-overlapping wave function paradox. arXiv, 2018; arXiv:1807.11725. [Google Scholar]
  55. Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  56. Zhou, D.L. Irreducible Multiparty Correlations in Quantum States without Maximal Rank. Phys. Rev. Lett. 2008, 101, 180505. [Google Scholar] [CrossRef]
  57. Cohen, L. The history of noise. IEEE Signal Process. Mag. 2005, 22, 20–45. [Google Scholar] [CrossRef]
  58. Zaslavsky, G.M. Chaotic Dynamics and the Origin of Statistical Laws. Phys. Today 2008, 52, 39. [Google Scholar] [CrossRef]
  59. Lebowitz, J.L. Boltzmann’s Entropy and Time’s Arrow. Phys. Today 2008, 46, 32. [Google Scholar] [CrossRef]
  60. Brown, H.R.; Myrvold, W. Boltzmann’s H-theorem, its limitations, and the birth of (fully) statistical mechanics. arXiv, 2008; arXiv:0809.1304. [Google Scholar]
  61. Hahn, E.L. Spin Echoes. Phys. Rev. 1950, 80, 580–594. [Google Scholar] [CrossRef]
  62. Fink, M. Time Reversed Acoustics. Phys. Today 2008, 50, 34. [Google Scholar] [CrossRef]
  63. Przadka, A.; Feat, S.; Petitjeans, P.; Pagneux, V.; Maurel, A.; Fink, M. Time Reversal of Water Waves. Phys. Rev. Lett. 2012, 109, 064501. [Google Scholar] [CrossRef] [PubMed]
  64. Boltzmann, L. Vorlesungen über Gastheorie; Leipzig: Barth, Germany, 1896. (In German) [Google Scholar]
  65. Boltzmann, L. Lectures on Gas Theory; University of California Press: Berkeley, CA, USA, 1964. [Google Scholar]
  66. Ehrenfest, P.; Ehrenfest, T. The Conceptual Foundations of the Statistical Approach in Mechanics; Dover Publications: New York, NY, USA, 2015. first published in 1912. [Google Scholar]
  67. Zhang, B.; Cai, Q.Y.; You, L.; Zhan, M.S. Hidden messenger revealed in Hawking radiation: A resolution to the paradox of black hole information loss. Phys. Lett. B 2009, 675, 98–101. [Google Scholar] [CrossRef]
  68. Zhang, B.; Cai, Q.Y.; Zhan, M.S.; You, L. Entropy is conserved in Hawking radiation as tunneling: A revisit of the black hole information loss paradox. Ann. Phys. 2011, 326, 350–363. [Google Scholar] [CrossRef] [Green Version]
  69. Ma, Y.H.; Chen, J.F.; Sun, C.P. Dark information of black hole radiation raised by dark energy. Nucl. Phys. B 2018, 931, 418–436. [Google Scholar] [CrossRef]
  70. Ma, Y.H.; Cai, Q.Y.; Dong, H.; Sun, C.P. Non-thermal radiation of black holes off canonical typicality. Europhys. Lett. 2018, 122, 30001. [Google Scholar] [CrossRef]
Figure 1. Demonstration of gas diffusion.
Figure 2. Demonstration of an open system S surrounded by several independent baths B α .
Figure 3. (a–d) The distribution ϱ ( p , x , t ) in phase space at different times ( τ ¯ L : = m L / p ¯ T as the time unit). As time increases, P p ( p ) does not change, but P x ( x , t ) approaches the new uniform distribution on x ∈ [ 0 , L ] . (e) The conditional distribution at a fixed position, ϱ ( p , x = L / 2 ) (vertical dashed line in (d)).
Figure 4. Demonstration of how the solutions are constructed for (a) periodic and (b) reflecting boundary conditions. The free space is cut into intervals of length L, and each contributes an “image” source. The solution is the summation of all of them on x ∈ [ 0 , L ] (blue line in (a)).
Figure 5. (a–d) ϱ ( p , x , t ) under the reflecting boundary condition, together with its spatial and momentum distributions. (e) The momentum distribution from (d), and its difference (lower blue) from the initial MB one (green dashed lines).
Figure 6. The increase of the correlation entropy Δ i S for the (a) periodic and (b) reflecting boundary cases. (c) The evolution of the odd moments ⟨ p n ⟩ t under the reflecting boundary condition (the values are normalized by their maximum amplitudes for comparison). The unit τ ¯ L : = m L / p ¯ T is the time for a particle with average kinetic energy p ¯ T 2 / 2 m to traverse the length L.
Figure 7. (a) The exact distribution P p ( p ) (blue) and its coarse-grained distribution P ˜ p ( p ) (orange histogram) at t = 7 τ ¯ L , which is quite close to the MB one. (b) The entropy change calculated from the coarse-grained distribution P ˜ p ( p ) ; the solid gray line is the entropy change calculated from the exact P p ( p ) for comparison (the same as in Figure 6b). The parameters are the same as in Figure 6. The coarse-graining size is δ p ˜ = 0.2 p ¯ T .
Figure 8. Assuming collisions happen only at short range, the transition rate χ [ ( p 1 r 1 ; p 2 r 2 ) → ( p 1 ′ r 1 ′ ; p 2 ′ r 2 ′ ) ] can be nonzero only when r 1 ≈ r 2 ≈ r 1 ′ ≈ r 2 ′ . The scattering process (b) is the time reversal of (a), thus they have equal transition rates χ [ p 1 p 2 → p 1 ′ p 2 ′ ] = χ [ ( − p 1 ′ , − p 2 ′ ) → ( − p 1 , − p 2 ) ] . The scattering process (c) is obtained by a 180° inversion of (b), thus they also have equal transition rates χ [ ( − p 1 ′ , − p 2 ′ ) → ( − p 1 , − p 2 ) ] = χ [ p 1 ′ p 2 ′ → p 1 p 2 ] . Therefore, we obtain the relation χ [ 12 → 1 ′ 2 ′ ] = χ [ 1 ′ 2 ′ → 12 ] .
