Article

N-States Continuous Maxwell Demon

Paul Raux and Felix Ritort
1 Université Paris Cité, CNRS, UMR 8236-LIED, 75013 Paris, France
2 Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France
3 Small Biosystems Lab, Condensed Matter Physics Department, University of Barcelona, 08028 Barcelona, Spain
4 Institut de Nanociència i Nanotecnologia (IN2UB), Universitat de Barcelona, 08028 Barcelona, Spain
* Author to whom correspondence should be addressed.
Entropy 2023, 25(2), 321; https://doi.org/10.3390/e25020321
Submission received: 31 December 2022 / Revised: 6 February 2023 / Accepted: 7 February 2023 / Published: 9 February 2023

Abstract:
Maxwell’s demon is a famous thought experiment and a paradigm of the thermodynamics of information. It is related to Szilard’s engine, a two-state information-to-work conversion device in which the demon performs single measurements and extracts work depending on the state measurement outcome. A variant of these models, the continuous Maxwell demon (CMD), was recently introduced by Ribezzi-Crivellari and Ritort, in which work is extracted after multiple repeated measurements taken every time τ in a two-state system. The CMD is able to extract unbounded amounts of work at the cost of an unbounded amount of information storage. In this work, we built a generalization of the CMD to the N-state case. We obtained generalized analytical expressions for the average work extracted and the information content. We show that the second law inequality for information-to-work conversion is fulfilled. We illustrate the results for N states with uniform transition rates and for the N = 3 case.

1. Introduction

In 1867, James Clerk Maxwell proposed a thought experiment to better understand the scope and limitations of the second law [1]. Known as the Maxwell demon paradox, it has spurred strong research activity for many years, setting the basis for the thermodynamics of information and information-to-work conversion [2,3,4,5,6,7,8,9,10]. In 1929, Leo Szilard introduced a simple physical model [11] in which a particle is free to move in a box of volume $V$ with two compartments (denoted 0 and 1) of volumes $V_0$ and $V_1$, with $V = V_0 + V_1$ (Figure 1A). In Szilard’s engine (SZ), the “demon” is an entity able to monitor the particle’s position and store the observed compartment in a single-bit variable $\sigma = 0,1$. Information-to-work conversion proceeds as follows: once the particle’s compartment is known, a movable wall is inserted between the two compartments, and an isothermal process is implemented to extract work. A work-extracting cycle concludes when the movable wall reaches its far end, and the measurement-work extraction process restarts. The average work extracted per cycle equals the heat transferred from the isothermal reservoir to the system: $W_2^{SZ} = -k_BT\,(P_0\log P_0 + P_1\log P_1)$, with $P_{0,1} = V_{0,1}/V$ the occupancy probabilities of the compartments. For equal compartments, $P_0 = P_1 = 1/2$, Szilard’s engine extracts the maximal work set by the Landauer bound, $W^{SZ} = k_BT\log(2)$, from the reservoir without energy consumption, meaning that heat is fully converted into work, apparently violating Kelvin’s postulate. In the 1960s and 1970s, work by Landauer [12] and Bennett [13] found a solution to the paradox, which considers the information content of the measurement, the work extraction, and the resetting processes of the demon [14,15]. Most importantly, to recover the initial state at the end of the thermodynamic cycle, the demon must erase the information acquired on the system [2]. The minimal erasure cost per bit of information equals $k_BT\log(2)$ for equally probable outcomes. In the end, the information content stored in the demon is always larger than or equal to the extracted work, in agreement with the second law.
In a recent paper, a new variant of the Maxwell demon, the continuous Maxwell demon (CMD), was introduced [16] (see also [17] for additional results), analytically solved, and experimentally tested. In the CMD, the demon performs repeated measurements of a particle’s location in a two-compartment box every time τ. The first time the demon measures that the particle has changed compartments, a work extraction procedure is implemented. The main difference with the SZ engine is that, in the CMD, a work extraction cycle contains multiple measurements, whereas in the SZ, a single measurement is performed in every work cycle. Compared to the SZ, the CMD can extract a larger amount of work $W$ because of the larger information content of the multiple bits stored in a cycle. Interestingly, the average work per cycle in the CMD satisfies $W_{CMD} \ge k_BT\log(2)$, being unbounded in the limits $P_0\to 0$ ($P_1\to 1$) and $P_0\to 1$ ($P_1\to 0$). A model combining the SZ and CMD work extraction protocols showed the role of temporal correlations in optimizing information-to-energy conversion [18]. In the CMD, the time between measurements τ is arbitrary. In particular, it can be made infinitesimal, $\tau\to 0$, leading to an infinite number of measurements per cycle and justifying the adjective “continuous”.
Here, we generalize the CMD to the case of N states (N-CMD). In a possible realization of the N-CMD, a particle in a box of volume $V$ can occupy N distinct compartments of volumes $V_i$ (Figure 1B). The demon measures which compartment the particle occupies every time τ until a change of compartment is detected. Then, a work extraction process is implemented by inserting one fixed wall at one side and one movable wall at the other side of the compartment, which can expand under the elastic collisions exerted by the particle (Figure 1C). A pulley mechanism is attached to the movable wall to extract an average work equal to $W = -k_BT\log P_i$, with $P_i = V_i/V$. For N = 2, we obtain the standard CMD (2-CMD) (Figure 1A), which corresponds to transforming the Szilard box (Figure 1B) into a periodic torus (Figure 1C).
The outline of this work is as follows. In Section 2, we show how to generalize the mathematical formalism of the 2-CMD to N states (N-CMD). In Section 3, we analyze the performance of the N-CMD by studying the thermodynamic efficiency and power. In Section 4, we analyze several cases, in particular the case N = 3, to investigate the effect of topology on information-to-work conversion (IWC). We end with some conclusions and future directions.

2. General Setting

Let $\sigma$ ($= 1,\ldots,N$) denote the N states of a system following Markovian dynamics defined by transition rates that satisfy detailed balance, ensuring that the system relaxes to an equilibrium state. Let τ be the time between consecutive measurements. The conditional probability $p_\tau(\sigma|\sigma')$ that the outcome of a measurement is $\sigma$ after time τ, conditioned on starting in $\sigma'$, satisfies the following master equation:
$$\partial_\tau\, p_\tau(\sigma|\sigma') = \sum_{\sigma''=1}^{N} K_{\sigma\sigma''}\, p_\tau(\sigma''|\sigma') \qquad (1)$$
with initial condition $p_{\tau\to 0}(\sigma|\sigma') \to \delta_{\sigma,\sigma'}$, where $\delta$ is the Kronecker delta. The Markov matrix $K_{\sigma\sigma'}$ satisfies $\sum_{\sigma} K_{\sigma\sigma'} = 0\ \forall \sigma'$, defining the transition rates from state $\sigma'$ to $\sigma$:
$$K_{\sigma\sigma'} = \begin{cases} -\sum_{\sigma''(\neq\sigma)} k_{\sigma\to\sigma''} & \text{if } \sigma' = \sigma \\ k_{\sigma'\to\sigma} & \text{otherwise} \end{cases} \qquad (2)$$
with $k_{\sigma'\to\sigma}\,d\tau$ the probability of jumping from state $\sigma'$ to $\sigma$ during time $d\tau$. Let us denote by $P_\sigma$ the stationary solution of Equation (1). The detailed balance condition reads:
$$K_{\sigma\sigma'}\, P_{\sigma'} = K_{\sigma'\sigma}\, P_{\sigma} \qquad \forall\, \sigma,\sigma' \qquad (3)$$
The solution of Equation (1) can be written using the Perron–Frobenius theorem (see [19]) as a spectral expansion in terms of the eigenvalues and eigenvectors of K:
$$p_\tau(\sigma|\sigma') = P_\sigma \sum_\alpha l^\alpha_\sigma\, l^\alpha_{\sigma'}\, e^{\lambda_\alpha \tau} \qquad (4)$$
where $l^\alpha$ is the left eigenvector of K associated with the eigenvalue $\lambda_\alpha$. The sum over $\alpha$ in Equation (4) is symmetric under $\sigma \leftrightarrow \sigma'$. Therefore, the conditional probabilities also fulfil detailed balance:
$$\frac{p_\tau(\sigma|\sigma')}{p_\tau(\sigma'|\sigma)} = \frac{P_\sigma}{P_{\sigma'}} \qquad (5)$$
Remark 1.
Detailed balance ensures that there exists a unique stationary state $P_\sigma$, associated with the eigenvalue $\lambda_0 = 0$, and that the other eigenvalues are real and negative, $\lambda_{\alpha\neq 0} < 0$. Equation (4) can be rewritten as follows:
$$p_\tau(\sigma|\sigma') = P_\sigma \Big(1 + \sum_{\alpha\neq 0} l^\alpha_\sigma\, l^\alpha_{\sigma'}\, e^{\lambda_\alpha \tau}\Big) \qquad (6)$$
which gives $p_\tau(\sigma|\sigma') \to P_\sigma$ for $\tau\to\infty$, as expected.
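As a numerical sanity check (ours, not part of the original paper), the inherited detailed balance of Equation (5) can be verified for any rate matrix obeying Equation (3). The Metropolis-like rates below are an illustrative assumption, not a choice made in the text:

```python
# Illustrative check (not from the paper) of Eqs. (1)-(5): for any rate
# matrix K obeying detailed balance, the conditional probabilities
# p_tau(s|s') = [exp(K tau)]_{s,s'} satisfy p_tau(s|s') P_s' = p_tau(s'|s) P_s.
import numpy as np
from scipy.linalg import expm

P = np.array([0.5, 0.3, 0.2])            # chosen stationary distribution
N = len(P)
# Metropolis-like rates k_{s'->s} = min(1, P_s/P_s') satisfy Eq. (3).
K = np.array([[min(1.0, P[s] / P[sp]) if s != sp else 0.0
               for sp in range(N)] for s in range(N)])
K -= np.diag(K.sum(axis=0))              # columns now sum to zero

p = expm(K * 0.7)                        # p[s, sp] = p_tau(s | sp)
assert np.allclose(p.sum(axis=0), 1.0)   # column-normalized
assert np.allclose(K @ P, 0.0)           # P is stationary
for s in range(N):
    for sp in range(N):
        assert np.isclose(p[s, sp] * P[sp], p[sp, s] * P[s])  # Eq. (5)
print("conditional detailed balance holds")
```

Any other detailed-balance rates (e.g., Glauber) would pass the same check.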
In the CMD, a work-extraction cycle is defined by a sequence of $n+1$ measurement outcomes $\sigma_i$ ($1 \le i \le n+1$), repeatedly taken every time τ. In a cycle, the first n outcomes are equal ($\sigma$), and the cycle ends with the first different outcome $\sigma'$ ($\neq\sigma$). We define the trajectory of a cycle as follows:
$$T^n_{\sigma\to\sigma'} = \{\underbrace{\sigma,\ldots,\sigma}_{n\ \text{times}},\ \sigma'\} \qquad (7)$$
The probability of a given trajectory $T^n_{\sigma\to\sigma'}$ reads:
$$P\big(T^n_{\sigma\to\sigma'}\big) = p_\tau(\sigma|\sigma)^{\,n-1}\, p_\tau(\sigma'|\sigma) \qquad (8)$$
This is normalized as follows:
$$\sum_{\sigma'(\neq\sigma)}\ \sum_{n=1}^{\infty} P\big(T^n_{\sigma\to\sigma'}\big) = 1 \qquad \forall\sigma \qquad (9)$$
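The normalization of Equation (9) is a geometric-series identity and is easy to confirm numerically. The short check below is our illustration (with the same arbitrary Metropolis-type rates used above, which are not the paper's choice), truncating the sum over n:

```python
# Check of Eqs. (8)-(9): P(T) = p(s|s)^(n-1) p(s'|s) sums to one over
# s' != s and n >= 1, since sum_n p^(n-1) = 1/(1-p).
import numpy as np
from scipy.linalg import expm

P = np.array([0.5, 0.3, 0.2])
N = len(P)
K = np.array([[min(1.0, P[s] / P[sp]) if s != sp else 0.0
               for sp in range(N)] for s in range(N)])
K -= np.diag(K.sum(axis=0))
p = expm(K * 0.7)                        # p[s, sp] = p_tau(s | sp)

n_max = 2000                             # truncation of the geometric sum
for s in range(N):
    total = sum(p[s, s] ** (n - 1) * p[sp, s]
                for sp in range(N) if sp != s
                for n in range(1, n_max))
    assert np.isclose(total, 1.0, atol=1e-9)    # Eq. (9)
print("cycle probabilities normalized")
```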

2.1. Thermodynamic Work and Information Content

As in the SZ, the work extracted by the CMD in a given cycle $T^n_{\sigma\to\sigma'}$ equals $-\log P_{\sigma'}$ in $k_BT$ units. Averaging over all possible measurement cycles, we obtain the average extracted work:
$$W_N(\tau) = -\langle \log P_{\sigma'} \rangle = -\sum_{\sigma}\sum_{\sigma'(\neq\sigma)}\sum_{n=1}^{\infty} P_\sigma\, P\big(T^n_{\sigma\to\sigma'}\big)\, \log P_{\sigma'} = -\sum_{\sigma=1}^{N} \frac{P_\sigma}{1-p_\tau(\sigma|\sigma)} \sum_{\sigma'\neq\sigma} p_\tau(\sigma'|\sigma)\, \log P_{\sigma'} \qquad (10)$$
which is positive by definition. In the limit $\tau\to\infty$, we obtain the following expression:
$$W_N^{\infty} = -\sum_{\sigma=1}^{N} \frac{P_\sigma}{1-P_\sigma} \sum_{\sigma'\neq\sigma} P_{\sigma'}\, \log P_{\sigma'} \qquad (11)$$
which can be written as follows:
$$W_N^{\infty} = \Big(\sum_{\sigma=1}^{N} \frac{P_\sigma}{1-P_\sigma}\Big)\, W_N^{SZ} + \sum_{\sigma=1}^{N} \frac{P_\sigma^2\, \log P_\sigma}{1-P_\sigma} \qquad (12)$$
where $W_N^{SZ}$ is the classical statistical entropy of the system, which can also be interpreted as the average work extraction of the N-state Szilard engine, denoted N-SZ:
$$W_N^{SZ} = -\langle \log P_\sigma \rangle = -\sum_\sigma P_\sigma\, \log P_\sigma \qquad (13)$$
This expression can be readily minimized in the space of $P_\sigma$, giving the uniform solution $P_\sigma = 1/N$, for which $W_N^{\infty} = \log N$. In contrast, $W_N^{\infty} \simeq -\log(1-P_\sigma)$ if $P_\sigma \to 1$ for a given $\sigma$ (and $P_{\sigma'}\to 0$ $\forall \sigma'\neq\sigma$), diverging in that limit.
We define the average information content per cycle as the statistical entropy of the measurement-cycle probabilities [20]:
$$I_N(\tau) = -\big\langle \log\big[P_\sigma\, P\big(T^n_{\sigma\to\sigma'}\big)\big] \big\rangle = -\sum_\sigma \sum_{\sigma'(\neq\sigma)} \sum_{n=1}^{\infty} P_\sigma\, P\big(T^n_{\sigma\to\sigma'}\big)\, \log\big[P_\sigma\, P\big(T^n_{\sigma\to\sigma'}\big)\big] = W_N^{SZ} - \sum_{\sigma=1}^{N} \frac{P_\sigma}{1-p_\tau(\sigma|\sigma)} \sum_{\sigma'=1}^{N} p_\tau(\sigma'|\sigma)\, \log p_\tau(\sigma'|\sigma) \qquad (14)$$
where the last equality follows from the geometric sums $\sum_{n\ge1} p^{\,n-1} = 1/(1-p)$ and $\sum_{n\ge1}(n-1)\,p^{\,n-1} = p/(1-p)^2$ with $p = p_\tau(\sigma|\sigma)$, which collect the $\log p_\tau(\sigma|\sigma)$ and $\log p_\tau(\sigma'|\sigma)$ contributions into a single sum over all $\sigma'$.
The positivity of $I_N(\tau)$ follows from the fact that $p_\tau(\sigma'|\sigma), P_\sigma \le 1$. The second term in Equation (14) depends on τ and can be understood as the contribution of correlations between measurements to $I_N(\tau)$.
Lastly, using Equation (5), we can rearrange Equation (14) as follows:
$$I_N(\tau) = W_N(\tau) + \Delta_N\,, \qquad \Delta_N = -\sum_{\sigma=1}^{N} \frac{P_\sigma}{1-p_\tau(\sigma|\sigma)} \sum_{\sigma'=1}^{N} p_\tau(\sigma'|\sigma)\, \log p_\tau(\sigma|\sigma') \qquad (15)$$
where the second term $\Delta_N$ is positive since $p_\tau(\sigma|\sigma') \le 1$. Equation (15) implies the second law inequality:
$$I_N(\tau) - W_N(\tau) > 0 \qquad \forall\tau \qquad (16)$$
meaning that the cost of erasing the information content of the stored sequences is always larger than the work extracted by the demon.
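The chain $0 < W_N(\tau) \le I_N(\tau)$ can be spot-checked numerically. The script below is our illustration (the Metropolis-type rates are an arbitrary detailed-balance choice, not the paper's): it evaluates Equations (10), (13) and (14) and asserts the inequality (16) at several values of τ:

```python
# Numerical check of Eqs. (10), (13), (14) and the second law (16):
# I_N(tau) >= W_N(tau) > 0 (work and information in k_B T / nats units).
import numpy as np
from scipy.linalg import expm

P = np.array([0.5, 0.3, 0.2])
N = len(P)
K = np.array([[min(1.0, P[s] / P[sp]) if s != sp else 0.0
               for sp in range(N)] for s in range(N)])
K -= np.diag(K.sum(axis=0))

for tau in [0.05, 0.5, 5.0]:
    p = expm(K * tau)                                # p[s, sp] = p_tau(s|sp)
    W = sum(P[s] / (1 - p[s, s]) *
            sum(-p[sp, s] * np.log(P[sp]) for sp in range(N) if sp != s)
            for s in range(N))                       # Eq. (10)
    W_SZ = -np.sum(P * np.log(P))                    # Eq. (13)
    I = W_SZ - sum(P[s] / (1 - p[s, s]) *
                   sum(p[sp, s] * np.log(p[sp, s]) for sp in range(N))
                   for s in range(N))                # Eq. (14)
    assert I >= W > 0                                # Eq. (16)
    assert W >= W_SZ - 1e-12                         # cf. Section 2.2
print("second law I_N >= W_N verified")
```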

2.2. Comparison with the Szilard Engine

Equating Expressions (14) and (15) for $I_N(\tau)$, we obtain a relation between $W_N(\tau)$ and $W_N^{SZ}$ that compares the average work extracted by the N-CMD to that of the N-SZ engine:
$$W_N(\tau) - W_N^{SZ} = \Big\langle \log\frac{P_\sigma}{P_{\sigma'}} \Big\rangle = \sum_{\sigma} \frac{P_\sigma}{1-p_\tau(\sigma|\sigma)} \sum_{\sigma'\neq\sigma} p_\tau(\sigma'|\sigma)\, \log\frac{p_\tau(\sigma|\sigma')}{p_\tau(\sigma'|\sigma)} \ge 0 \qquad (17)$$
where the first equality follows from the difference between the first right-hand sides of Equations (10) and (13). This shows that the CMD's average work per cycle is always larger than or equal to that of the SZ. The equality holds for the uniform case $P_\sigma = 1/N$, where $W_N(\tau) = W_N^{SZ} = \log N$.

3. Thermodynamic Power and Efficiency

3.1. Average Cycle Length

As a preliminary, we first compute the average duration of a measurement cycle. This is similar to the mean residence time of the system in a state, except for the fact that (unobserved) hopping events are permitted. Since a cycle contains $n+1$ measurements taken every time τ, we define it as follows:
$$t^c_N \equiv \tau\, \langle n+1 \rangle \qquad (18)$$
and obtain the following expression:
$$t^c_N = \tau \Big(1 + \sum_\sigma \frac{P_\sigma}{1-p_\tau(\sigma|\sigma)}\Big) \qquad (19)$$
The following limits hold:
$$\lim_{\tau\to 0^+} t^c_N = \sum_i \frac{1}{-\sum_{\alpha\neq 0} (l^\alpha_i)^2\, \lambda_\alpha} > 0 \qquad (20)$$
$$\lim_{\tau\to\infty} t^c_N = +\infty \qquad (21)$$
The average cycle time is the mean first-passage time [21] of the discrete-time random walk defined by a cycle of measurements.
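Equation (19) can be cross-checked against a direct simulation of measurement cycles: given an initial state drawn from $P_\sigma$, the number of repeated outcomes n is geometric with success probability $1 - p_\tau(\sigma|\sigma)$. This is our illustrative check, not code from the paper:

```python
# Cross-check of Eq. (19) by Monte Carlo: a cycle starting in s repeats
# outcome s a geometric number of times n before the first change; its
# duration is tau * (n + 1).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
P = np.array([0.5, 0.3, 0.2])
N = len(P)
K = np.array([[min(1.0, P[s] / P[sp]) if s != sp else 0.0
               for sp in range(N)] for s in range(N)])
K -= np.diag(K.sum(axis=0))
tau = 0.5
p = expm(K * tau)                        # p[s, sp] = p_tau(s | sp)

# Eq. (19): t_c = tau * (1 + sum_s P_s / (1 - p_tau(s|s)))
t_c = tau * (1 + sum(P[s] / (1 - p[s, s]) for s in range(N)))

durations = []
for s in rng.choice(N, p=P, size=100_000):
    n = 1
    while rng.random() < p[s, s]:        # stay in s with prob p_tau(s|s)
        n += 1
    durations.append(tau * (n + 1))
assert np.isclose(np.mean(durations), t_c, rtol=0.02)
print("cycle-time formula matches simulation")
```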

3.2. Thermodynamic Power

We define the thermodynamic power as the average work $W_N$ extracted per cycle time $t^c_N$:
$$\Phi_N(\tau) = \frac{W_N(\tau)}{t^c_N} \qquad (22)$$
In the limit of uncorrelated measurements, $\tau\to\infty$, we obtain from Equations (11) and (19):
$$\Phi_N^{\infty} = \frac{1}{\tau}\; \frac{-\sum_{\sigma=1}^{N} \frac{P_\sigma}{1-P_\sigma} \sum_{\sigma'\neq\sigma} P_{\sigma'}\, \log P_{\sigma'}}{1 + \sum_\sigma \frac{P_\sigma}{1-P_\sigma}} \qquad (23)$$
For N = 2 , we recover the results in [16,17].

3.3. Information-to-Work Efficiency

In the spirit of the efficiencies defined for thermal machines, we define the information-to-work conversion (IWC) efficiency of the CMD as the ratio between $W_N$, taken as the objective function, and $I_N$, taken as the cost function, in the optimization of the CMD:
$$\eta_N = \frac{W_N}{I_N} \qquad (24)$$
Using Equation (15), we can rewrite η N as follows:
$$\eta_N = \frac{1}{1 + \dfrac{\Delta_N}{W_N}} \qquad (25)$$
From Equation (16), $\eta_N < 1$. In the limit $\tau\to\infty$, we obtain:
$$\lim_{\tau\to\infty} \eta_N = \left(1 + \frac{\sum_i \frac{P_i \log P_i}{1-P_i}}{\sum_i \frac{P_i}{1-P_i} \sum_{j\neq i} P_j \log P_j}\right)^{-1} \qquad (26)$$
In the limit $P_\sigma\to 1$ for a given state $\sigma$, one can check that the N-CMD reaches the maximal efficiency $\eta_N\to 1$.

4. Particular Cases

Here, we analyze some specific examples.

4.1. Case N = 2

We now turn to the N = 2 case considered in [16] as an illustration of our formulae. The kinetic rate matrix in this case reads:
$$K = \begin{pmatrix} -k_{0\to 1} & k_{1\to 0} \\ k_{0\to 1} & -k_{1\to 0} \end{pmatrix} \qquad (27)$$
Here, we do not need to make any particular choice of the rates $k_{\sigma\to\sigma'}$ to ensure detailed balance since, for two states, detailed balance holds unconditionally. Applying the procedure sketched in Section 2, we solve the master equation:
$$p_\tau = \big(p_\tau(\sigma|\sigma')\big)_{\sigma,\sigma'=0,1} = \begin{pmatrix} P_0 + P_1\, e^{-R\tau} & P_0\,(1-e^{-R\tau}) \\ P_1\,(1-e^{-R\tau}) & P_1 + P_0\, e^{-R\tau} \end{pmatrix} \qquad (28)$$
where $R = k_{1\to 0} + k_{0\to 1}$, $P_0 = k_{1\to 0}/R$ and $P_1 = k_{0\to 1}/R$, such that $P_0 + P_1 = 1$. $p_\tau$ is normalized per column:
$$p_\tau(\sigma|\sigma) + p_\tau(1-\sigma|\sigma) = 1\,, \qquad \sigma = 0,1 \qquad (29)$$
First, let us consider $W_2$. Since N = 2 and by normalization, there is only one term in the sum over $\sigma'\neq\sigma$ in Equation (10). Thus, $W_2$ simplifies to:
$$W_2 = -P_0\, \log(1-P_0) - (1-P_0)\, \log P_0 \qquad (30)$$
We recover the result obtained in [16] and note that the τ-independence of this result is a particular feature of the N = 2 case. Moreover, since $W_2$ has a simple expression, we obtain a tractable expression for the comparison with the average work extracted by the SZ, cf. Equation (17):
$$W_2 - W_2^{SZ} = (1-2P_0)\, \log\frac{1-P_0}{P_0} \qquad (31)$$
This quantity is positive and vanishes only for the uniform probability $P_\sigma = 1/2$, as shown in Section 2.2. Using the normalization Equation (29) again in the definition of $I_N$, Equation (14), we obtain $I_2$ as follows:
$$I_2 = -P_0 \log P_0 - (1-P_0)\log(1-P_0) - P_0\left[\frac{p_\tau(0|0)}{p_\tau(1|0)}\, \log p_\tau(0|0) + \log p_\tau(1|0)\right] - (1-P_0)\left[\frac{p_\tau(1|1)}{p_\tau(0|1)}\, \log p_\tau(1|1) + \log p_\tau(0|1)\right] \qquad (32)$$
which is the result obtained in [16]. The remaining results of [16] are obtained by combining Equations (28), (30), and (32).
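A small script (ours, added for illustration) confirms the N = 2 closed forms: $W_2$ from Equation (30) is τ-independent and matches the general Equation (10), and the gap to the Szilard value obeys Equation (31):

```python
# Check of the N = 2 closed forms, Eqs. (28), (30), (31).
import numpy as np
from scipy.linalg import expm

k01, k10 = 0.3, 0.7                      # rates 0->1 and 1->0 (arbitrary)
R = k01 + k10
P0, P1 = k10 / R, k01 / R                # stationary probabilities

K = np.array([[-k01, k10], [k01, -k10]])
for tau in [0.1, 1.0, 10.0]:
    p = expm(K * tau)                    # matches Eq. (28)
    # general Eq. (10) specialized to N = 2
    W = (P0 / (1 - p[0, 0]) * (-p[1, 0] * np.log(P1)) +
         P1 / (1 - p[1, 1]) * (-p[0, 1] * np.log(P0)))
    W2 = -P0 * np.log(1 - P0) - (1 - P0) * np.log(P0)   # Eq. (30)
    assert np.isclose(W, W2)             # tau-independent

W2_SZ = -P0 * np.log(P0) - P1 * np.log(P1)
assert np.isclose(W2 - W2_SZ, (1 - 2 * P0) * np.log((1 - P0) / P0))  # Eq. (31)
print("N = 2 closed forms verified")
```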

4.2. Uniform Transition Rates

In this subsection, we take the following particular case for the Markov matrix K:
$$K_{\sigma\sigma'} = R \times \begin{cases} -(N-1) & \text{if } \sigma = \sigma' \\ 1 & \text{otherwise} \end{cases} \qquad (33)$$
In this case, there are only two independent conditional probabilities; we can thus rewrite the master equation as follows:
$$\partial_\tau\, p_\tau(\sigma|\sigma) = R\,\big(1 - N\, p_\tau(\sigma|\sigma)\big) \qquad (34)$$
Via normalization, we obtain $p_\tau(\sigma'|\sigma)$ as follows:
$$p_\tau(\sigma'|\sigma) = \frac{1 - p_\tau(\sigma|\sigma)}{N-1}\,; \qquad \sigma'\neq\sigma \qquad (35)$$
In the remainder of this subsection, we define the dimensionless rescaled time between two measurements as $\tilde{\tau} = R\tau$. The solution of Equation (34) reads:
$$p_\tau(\sigma|\sigma) = \frac{1 + (N-1)\, e^{-N\tilde{\tau}}}{N} \qquad (36)$$
This particular case allows us to obtain a glimpse of the dependence on N of the quantities introduced in Section 2. The average work extracted is as follows:
$$W_N = \log N \qquad (37)$$
We see that the work extracted does not depend on τ . I N reads:
$$I_N = \log N - \frac{p_\tau(\sigma|\sigma)\, \log p_\tau(\sigma|\sigma)}{1 - p_\tau(\sigma|\sigma)} - \log\frac{1 - e^{-N\tilde{\tau}}}{N} \qquad (38)$$
with $p_\tau(\sigma|\sigma)$ given by Equation (36).
The first remark is that, in the limit $\tilde{\tau}\to\infty$, $I_N^{\infty} = \frac{2N-1}{N-1}\, \log N \ge W_N$.
One way to optimize the CMD is to maximize IWC efficiency, defined as follows:
$$\eta_N \equiv \frac{W_N}{I_N} = \frac{\log N}{\log N - \frac{p_\tau(\sigma|\sigma)\, \log p_\tau(\sigma|\sigma)}{1-p_\tau(\sigma|\sigma)} - \log\frac{1-e^{-N\tilde{\tau}}}{N}} \qquad (39)$$
We find the asymptotic efficiency $\eta_N^{\infty} = \frac{N-1}{2N-1}$ for $\tilde{\tau}\to\infty$, which tends to $\eta_N\to 1/2$ for $N\to\infty$. For the thermodynamic power, we obtain:
$$\Phi_N \equiv \frac{W_N}{t^c_N} = \frac{\log N}{\dfrac{N\tilde{\tau}\, e^{N\tilde{\tau}}}{(N-1)\big(e^{N\tilde{\tau}}-1\big)} + \tilde{\tau}} \qquad (40)$$
where $t^c_N$ is the average cycle time analyzed in Section 3.1, and $\Phi_N$ is expressed in units of $R\,k_BT$. One can show that the maximum thermodynamic power, $\Phi_N = (N-1)\log N$, is obtained in the limit $\tilde{\tau}\to 0$. This shows that the maximum IWC efficiency, Equation (39), and the maximum power, Equation (40), are obtained in two different limits, a general result expected for thermodynamic machines [22].
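The uniform-rate formulas can be spot-checked numerically (our illustration, not the paper's code): Equation (36) against the matrix exponential of Equation (33), the τ-independent work of Equation (37), and the uncorrelated-measurement limit of $I_N$:

```python
# Check of the uniform-rate case, Eqs. (33), (36), (37), and the limit
# I_N -> (2N-1)/(N-1) log N for uncorrelated measurements.
import numpy as np
from scipy.linalg import expm

N, R, tau = 4, 1.0, 0.8
tt = R * tau                                     # rescaled time
K = R * (np.ones((N, N)) - N * np.eye(N))        # Eq. (33)
p = expm(K * tau)
p_stay = (1 + (N - 1) * np.exp(-N * tt)) / N     # Eq. (36)
assert np.allclose(np.diag(p), p_stay)

P = np.full(N, 1.0 / N)                          # uniform stationary state
W = sum(P[s] / (1 - p[s, s]) *
        sum(-p[sp, s] * np.log(P[sp]) for sp in range(N) if sp != s)
        for s in range(N))
assert np.isclose(W, np.log(N))                  # Eq. (37)

# Uncorrelated limit: p_tau(s'|s) -> 1/N, so Eq. (14) gives (2N-1)/(N-1) log N
p_inf = np.full((N, N), 1.0 / N)
I = (-np.sum(P * np.log(P))
     - sum(P[s] / (1 - p_inf[s, s]) *
           sum(p_inf[sp, s] * np.log(p_inf[sp, s]) for sp in range(N))
           for s in range(N)))
assert np.isclose(I, (2 * N - 1) / (N - 1) * np.log(N))
print("uniform-rate formulas verified")
```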

4.3. Case N = 3

The 3-CMD is the simplest case in which two different topologies of the state space are available. They are defined in Figure 2 and are denoted as triangular (Panel A) and linear (Panel B), respectively. We denote the energy of state σ ( σ = 0 , 1 , 2 ) by ϵ σ . Taking β = 1 , the detailed balance assumption Equation (3) then reads:
$$\frac{K_{\sigma'\sigma}}{K_{\sigma\sigma'}} = e^{-(\epsilon_{\sigma'}-\epsilon_\sigma)}\,, \qquad \sigma\neq\sigma' \qquad (41)$$
Here we take ϵ 0 = 0 . This implies that the energies of states 1 , 2 read:
$$\epsilon_1 = \log\frac{P_0}{P_1} \qquad (42)$$
$$\epsilon_2 = \log\frac{P_0}{P_2} \qquad (43)$$
In the linear case, taking as a particular case $k_{0\to 1} = 1$ and $k_{2\to 1} = 1$, we obtain the following Markov matrix:
$$K_3^{lin} = \begin{pmatrix} -1 & e^{\epsilon_1} & 0 \\ 1 & -\big(e^{\epsilon_1} + e^{\epsilon_1-\epsilon_2}\big) & 1 \\ 0 & e^{\epsilon_1-\epsilon_2} & -1 \end{pmatrix} = \begin{pmatrix} -1 & \frac{P_0}{P_1} & 0 \\ 1 & -\frac{P_0}{P_1}\big(1+\frac{P_2}{P_0}\big) & 1 \\ 0 & \frac{P_2}{P_1} & -1 \end{pmatrix} \qquad (44)$$
where we used Equations (42) and (43) to express $K_3^{lin}$ in terms of $P_0, P_1, P_2$ only. In the triangular case, taking $k_{0\to 1} = 1$, $k_{2\to 1} = 1$, and $k_{0\to 2} = 1$ as a particular case, we similarly obtain the following Markov matrix:
$$K_3^{tri} = \begin{pmatrix} -2 & e^{\epsilon_1} & e^{\epsilon_2} \\ 1 & -\big(e^{\epsilon_1} + e^{\epsilon_1-\epsilon_2}\big) & 1 \\ 1 & e^{\epsilon_1-\epsilon_2} & -\big(1+e^{\epsilon_2}\big) \end{pmatrix} = \begin{pmatrix} -2 & \frac{P_0}{P_1} & \frac{P_0}{P_2} \\ 1 & -\frac{P_0}{P_1}\big(1+\frac{P_2}{P_0}\big) & 1 \\ 1 & \frac{P_2}{P_1} & -\big(1+\frac{P_0}{P_2}\big) \end{pmatrix} \qquad (45)$$
The solution of Equation (1) with Markov matrix (44) in the linear case and (45) in the triangular case, can be written using the Perron-Frobenius theorem [19] as the following spectral expansion:
$$p_\tau(\cdot|\sigma) = \Psi_0 + c_1^\sigma\, \Psi_1\, e^{\lambda_1\tau} + c_2^\sigma\, \Psi_2\, e^{\lambda_2\tau} \qquad (46)$$
where $\lambda_1, \lambda_2 < 0$, and $c_1^\sigma, c_2^\sigma$ are coefficients determined by the initial condition at $\tau\to 0$, which depend on the initial state $\sigma$. These coefficients are gathered in Table 1 for both models.
Ψ 0 , Ψ 1 , Ψ 2 are the eigenvectors of both K 3 l i n , K 3 t r i :
  • $\Psi_0$ is the eigenvector associated with the eigenvalue 0, and it corresponds to the stationary probability. Since the detailed balance condition, Equation (3), holds, the stationary probability is the Boltzmann distribution. Thus,
    $$\Psi_0 = \begin{pmatrix} P_0 \\ P_1 \\ P_2 \end{pmatrix} = \frac{1}{Z} \begin{pmatrix} 1 \\ e^{-\epsilon_1} \\ e^{-\epsilon_2} \end{pmatrix} \qquad (47)$$
    where $Z = 1 + e^{-\epsilon_1} + e^{-\epsilon_2}$.
  • $\Psi_1$ is the eigenvector associated with the second eigenvalue, which reads $\lambda_1^{lin} = -1$ in the linear case and $\lambda_1^{tri} = -\big(1 + \frac{1-P_1}{P_2}\big)$ in the triangular case. $\Psi_1$ reads:
    $$\Psi_1 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} \qquad (48)$$
  • $\Psi_2$ is the eigenvector associated, in both models, with the eigenvalue $\lambda_2 = -1/P_1$. It reads:
    $$\Psi_2 = \begin{pmatrix} \frac{P_0}{P_2} \\ -\frac{1-P_1}{P_2} \\ 1 \end{pmatrix} \qquad (49)$$
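The spectral data above can be verified directly (our check, with an arbitrary choice of $(P_0, P_1, P_2)$): both matrices of Equations (44) and (45) are Markovian, stationary on $(P_0, P_1, P_2)$, and share the eigenvalue $\lambda_2 = -1/P_1$, while $\lambda_1 = -1$ (linear) and $\lambda_1 = -(1 + (1-P_1)/P_2)$ (triangular):

```python
# Check of the 3-CMD spectra for the linear and triangular topologies.
import numpy as np

P0, P1, P2 = 0.5, 0.2, 0.3                        # illustrative choice
K_lin = np.array([[-1.0,  P0 / P1,                   0.0],
                  [ 1.0, -(P0 / P1) * (1 + P2 / P0), 1.0],
                  [ 0.0,  P2 / P1,                  -1.0]])
K_tri = np.array([[-2.0,  P0 / P1,                   P0 / P2],
                  [ 1.0, -(P0 / P1) * (1 + P2 / P0), 1.0],
                  [ 1.0,  P2 / P1,                  -(1 + P0 / P2)]])

for K in (K_lin, K_tri):
    assert np.allclose(K.sum(axis=0), 0.0)              # Markov matrix
    assert np.allclose(K @ np.array([P0, P1, P2]), 0.0) # stationarity
    ev = np.sort(np.linalg.eigvals(K).real)
    assert np.isclose(ev[0], -1.0 / P1)                 # lambda_2 = -1/P1
assert np.isclose(np.sort(np.linalg.eigvals(K_lin).real)[1], -1.0)
assert np.isclose(np.sort(np.linalg.eigvals(K_tri).real)[1],
                  -(1 + (1 - P1) / P2))
print("3-CMD spectra verified")
```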

Uncorrelated Measurements on the 3-CMD

We now turn to the limit $\tau\to\infty$. In this limit of uncorrelated measurements, the time between consecutive measurements τ is larger than the relaxation time of the system, given by the inverse of the slowest nonzero eigenvalue, $\sim 1/|\lambda_1|$. In this limit, $p_\tau(\cdot|\sigma)$ reduces to the Boltzmann distribution, Equation (47), and $p_\tau(\sigma|\sigma') = P_\sigma$. Therefore, the two models (linear and triangular) are indistinguishable. Results for work and information are shown in Figure 3.
First, it is clear that the second law inequality (16) is satisfied. In the limit $P_1\to 0$, we recover the 2-CMD. Our generalized expressions for work and information content reproduce well the trend observed in Figure 1c of [16]. In the limit of rare events, where $P_1\to 0$ and $P_2\to 0,1$, we recover the infinite average work extraction described for the 2-CMD. Large work extraction is only obtained in the 2-CMD limit.
The efficiency $\eta_3$ is shown in Figure 4. For $P_1\to 0$ and $P_2\to 0$ or $P_2\to 1$, we recover the limit of rare events and the maximal efficiency $\eta_3\to 1$. In the 3-CMD, we have $\eta_3 \in [2/5, 1]$.

4.4. Correlated Measurements in the 3-CMD

Correlated measurements are those where τ is smaller than or comparable to the equilibrium relaxation time. Equation (46) shows that the dynamics of the linear and triangular topologies of the 3-CMD are very similar. Indeed, in the limit of uncorrelated measurements, the two dynamics reduce to the same Boltzmann distribution. They also collapse in the limit $P_1\to 1$ (with $P_0, P_2\to 0$); indeed, in this case $\lambda_1^{lin} = \lambda_1^{tri}$. In between, the topology of the network is relevant. For correlated measurements, we obtain the results shown in Figure 5.
First, the average cycle time (upper-left panel in Figure 5) in the linear case is generally larger than in the triangular case. Since the average work extraction is comparable in both cases, the direct consequence is that the thermodynamic power (upper-right panel in Figure 5) extracted by the linear 3-CMD is lower than that extracted by the triangular 3-CMD. Moreover, the thermodynamic power decreases logarithmically to 0 when τ increases. Thus, the 3-CMD has optimal power production in the limit $\tau\to 0$, i.e., in the limit of continuous measurements. The efficiency of the 3-CMD as a function of τ is plotted in the lower-left panel of Figure 5. The linear 3-CMD is generally less efficient than the triangular 3-CMD. The reason for this is seen in the lower-right panel of Figure 5, where $W_3$ and $I_3$ are plotted against τ for both models: for a comparable work extraction, the linear 3-CMD needs to store more information. Again, in the limit of uncorrelated measurements, the two models converge to the same result.

5. Concluding Remarks

In this work, we generalized the 2-CMD of [16,17] to N states. We obtained generalized expressions for the average extracted work, the average information content stored in the demon's memory, and thermodynamic quantities such as the thermodynamic power and the information-to-work efficiency of the N-CMD. We proved that the second law inequality holds for the N-CMD, thus giving bounds on the efficiency of the engine. Comparing the N-CMD to the N-SZ engine, we also showed that the N-CMD can extract more work on average than the N-SZ engine. The most efficient setting of the N-CMD is the limit of rare events already described in [16]. In the N-CMD, this limit is obtained by first taking the 2-CMD limit; thus, no configuration of the N-CMD is more efficient than the 2-CMD limit.
In future work on the N-CMD, it would be interesting to implement a graph theoretic procedure to obtain, for instance, a more precise explanation of the difference between the linear and triangular cases (connected graph versus fully connected graph). It would also be interesting to determine the distributions of the quantities computed here [23] and thus optimize the fluctuations of the N-CMD.

Author Contributions

F.R. conceived the work, and P.R. did the calculations. All authors have read and agreed to the published version of the manuscript.

Funding

FR was supported by the Spanish Research Council Grant PID2019-111148GB-100 and the ICREA Academia Prize 2018.

Data Availability Statement

Data is available upon contacting the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Leff, H.S.; Rex, A. Maxwell’s Demon. In Entropy, Information, Computing; Princeton University Press: Princeton, NJ, USA, 1990; pp. 160–172.
  2. Plenio, M.B.; Vitelli, V. The physics of forgetting: Landauer’s erasure principle and information theory. Contemp. Phys. 2001, 42, 25–60.
  3. Ritort, F. The noisy and marvelous molecular world of biology. Inventions 2019, 4, 24.
  4. Rex, A. Maxwell’s demon—A historical review. Entropy 2017, 19, 240.
  5. Ciliberto, S. Experiments in stochastic thermodynamics: Short history and perspectives. Phys. Rev. X 2017, 7, 021051.
  6. Barato, A.; Seifert, U. Unifying three perspectives on information processing in stochastic thermodynamics. Phys. Rev. Lett. 2014, 112, 090601.
  7. Barato, A.C.; Seifert, U. Stochastic thermodynamics with information reservoirs. Phys. Rev. E 2014, 90, 042150.
  8. Bérut, A.; Petrosyan, A.; Ciliberto, S. Detailed Jarzynski equality applied to a logically irreversible procedure. Europhys. Lett. 2013, 103, 60002.
  9. Bérut, A.; Petrosyan, A.; Ciliberto, S. Information and thermodynamics: Experimental verification of Landauer’s erasure principle. J. Stat. Mech. Theory Exp. 2015, 2015, P06015.
  10. Lutz, E.; Ciliberto, S. From Maxwell’s demon to Landauer’s eraser. Phys. Today 2015, 68, 30.
  11. Szilard, L. Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen. Z. Phys. 1929, 53, 840–856.
  12. Landauer, R. Irreversibility and heat generation in the computing process. IBM J. Res. Dev. 1961, 5, 183–191.
  13. Bennett, C.H. The thermodynamics of computation—A review. Int. J. Theor. Phys. 1982, 21, 905–940.
  14. Sagawa, T.; Ueda, M. Information Thermodynamics: Maxwell’s Demon in Nonequilibrium Dynamics. In Nonequilibrium Statistical Physics of Small Systems: Fluctuation Relations and Beyond; Wiley: Hoboken, NJ, USA, 2013; pp. 181–211.
  15. Parrondo, J.M.; Horowitz, J.M.; Sagawa, T. Thermodynamics of information. Nat. Phys. 2015, 11, 131–139.
  16. Ribezzi-Crivellari, M.; Ritort, F. Large work extraction and the Landauer limit in a continuous Maxwell demon. Nat. Phys. 2019, 15, 660–664.
  17. Ribezzi-Crivellari, M.; Ritort, F. Work extraction, information-content and the Landauer bound in the continuous Maxwell Demon. J. Stat. Mech. Theory Exp. 2019, 2019, 084013.
  18. Garrahan, J.P.; Ritort, F. Generalized Continuous Maxwell Demons. arXiv 2021, arXiv:2104.12472.
  19. Van Kampen, N.G. Stochastic Processes in Physics and Chemistry; Elsevier: Amsterdam, The Netherlands, 1992; Volume 1.
  20. Cover, T.M.; Thomas, J.A. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing); Wiley-Interscience: Hoboken, NJ, USA, 2006.
  21. Benichou, O.; Guérin, T.; Voituriez, R. Mean first-passage times in confined media: From Markovian to non-Markovian processes. J. Phys. A Math. Theor. 2015, 48, 163001.
  22. Goupil, C.; Herbert, E. Adapted or Adaptable: How to Manage Entropy Production? Entropy 2019, 22, 29.
  23. Van den Broeck, C.; Esposito, M. Ensemble and trajectory thermodynamics: A brief introduction. Phys. A Stat. Mech. Its Appl. 2015, 418, 6–16.
Figure 1. (A) The 2-CMD represented as a two-compartment box in which a work extraction protocol is implemented (see text). The measurement cycle shown is $T^3_{0,1} = \{0,0,0,1\}$ ($n = 3$). The average work extracted for this cycle is $-k_BT\log(V_1/V)$. (B) The 4-CMD in circular geometry. Each compartment has volume $V_i$. The measurement cycle of the CMD reads $T^4_{0,2} = \{0,0,0,0,2\}$ ($n = 4$). The initial state $\sigma$ is 0 and the final state $\sigma'$ is 2; the crossing of compartment 3 remains unnoticed for measurements made every time τ. (C) In the work extraction protocol, a pair of walls limiting the volume of the last compartment, here $V_2$, is inserted. The wall between compartments 1 and 2 is fixed, whereas the wall between compartments 2 and 3 is movable and massless. To extract the work produced by the expansion of the particle confined in 2, the movable wall is connected to a pulley device. The average work extracted for this cycle is $-k_BT\log(V_2/V)$.
Figure 2. Definition of the state spaces for the 2 topologies available for the 3-CMD: (A) Triangular 3-CMD, (B) Linear 3-CMD.
Figure 3. $W_2$, $I_2$, $W_3$, $I_3$ as a function of $P_2$ for $P_1$ fixed in each panel. Large work extraction is obtained in the limit of rare events, $P_1\to 0$ and $P_2\to 0,1$.
Figure 4. Efficiency $\eta_3$ as a function of $P_2$ for $P_1$ fixed in each panel.
Figure 5. The 3-CMD for correlated measurements for P 1 = P 2 = 0.001 . (upper left) Average cycle length t 3 c / τ in both models, Equation (19); (upper right) thermodynamic power; (lower left) efficiency; (lower right) average information content and work extraction in k B T units (orange and red lines collapse on top of each other).
Table 1. Coefficients of the spectral expansion Equation (46).
               $c_1^\sigma$                    $c_2^\sigma$
$\sigma = 0$   $\frac{P_2}{P_0+P_2}$           $\frac{P_1 P_2}{P_0+P_2}$
$\sigma = 1$   $0$                             $-\frac{P_2(1-P_1)}{P_0+P_2}$
$\sigma = 2$   $-\frac{P_0}{P_0+P_2}$          $\frac{P_1 P_2}{P_0+P_2}$
