Article

Going Round in Circles: Landauer vs. Norton on the Thermodynamics of Computation

James Ladyman and Katie Robertson
1 Department of Philosophy, University of Bristol, Cotham House, Cotham Hill, Bristol BS6 6JL, UK
2 Faculty of Philosophy, Radcliffe Humanities, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG, UK
* Author to whom correspondence should be addressed.
Entropy 2014, 16(4), 2278-2290; https://doi.org/10.3390/e16042278
Submission received: 26 September 2013 / Revised: 9 January 2014 / Accepted: 16 January 2014 / Published: 22 April 2014
(This article belongs to the Special Issue Maxwell’s Demon 2013)

Abstract

There seems to be a consensus among physicists that there is a connection between information processing and thermodynamics. In particular, Landauer’s Principle (LP) is widely assumed as part of the foundation of information theoretic/computational reasoning in diverse areas of physics including cosmology. It is also often appealed to in discussions about Maxwell’s demon and the status of the Second Law of Thermodynamics. However, LP has been challenged. In his 2005 paper, Norton argued that LP has not been proved. Ladyman, Presnell, Short and Groisman (LPSG) offered a new proof of LP. Norton argued that the LPSG proof is unsound and Ladyman and Robertson defended it. However, Norton’s latest work also generalizes his critique to argue for a no go result that he purports to be the end of the thermodynamics of computation. Here we review the dialectic as it currently stands and consider Norton’s no go result.

1. Introduction

A Maxwell Demon violates the Second Law of Thermodynamics (SL). The form of the latter with which we will operate says that it is not possible to construct a system that will reliably produce a process whose sole effect is the conversion of heat into work (Kelvin version). The caveat about reliability is necessary since, once the SL was thought about in the context of statistical mechanics, it became widely accepted that such processes can occur, albeit very rarely (see, for example, [1] (p. 28)). According to statistical mechanics, the microstates that correspond to thermodynamic equilibrium states make up the vast majority of all possible microstates. Fluctuations from equilibrium constantly occur, but the conjunction of improbable events necessary for a reliable process in which entropy decreases is so improbable that the chances of one occurring are negligible over the lifetime of the universe. If macroscopic processes involving gases with temperature, volume and pressure are the aggregation of microscopic processes over relatively long timescales, then the rarity of microscopic departures from equilibrium leads naturally to the complete absence of processes that violate SL at the macroscopic scale. If we formulate SL with this in mind then we should not speak of “microscopic violations of SL”, since only a macroscopic perpetual motion machine of the second kind could violate it, and then only if its conversion of heat into work was reliable and not a fluke. (Of course, a reliable microscopic perpetual motion machine of the second kind could be reproduced and several used together to produce a reliable macroscopic perpetual motion machine of the second kind, so the former is also ruled out.)

Maxwell’s demon made his machine reliable by using its intelligence to manipulate micro features of the thermodynamic system so that they accumulated to violate SL. Hence, Maxwell’s Demon is often thought of as an information processing device. The Demon must gain information about the thermodynamical system and perform operations accordingly, and, if it is then to return to its initial state so as to perform a cyclic process the sole effect of which is the conversion of heat into work, then it must forget the information it gained about the system. Effectively then the Demon must perform two information processing tasks, namely, COPY and RESET. It was thought [2] that the measurement [COPY] of the system by the Demon would necessarily result in an entropy increase, thereby causing the Demon to fail. It was subsequently shown by Bennett that such measurements can be performed reversibly, without an entropy increase; however, it was argued (also by Bennett, see [3–7]) that the Demon must fail because entropy will be produced when it erases [RESET] its memory to complete the cycle. The latter claim is an instance of Landauer’s Principle (LP), according to which no logically irreversible operation can be implemented by thermodynamically reversible physical processes [8]. LP is now orthodoxy, and accordingly in physics textbooks it is often said that a Maxwell Demon will not decrease entropy overall because of the entropy it produces in accordance with Landauer’s Principle [9] (p. 160).
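For reference, the quantitative form of LP standardly associated with [8] (the “Landauer limit” referred to below) is that the erasure of one bit of unknown data must be accompanied by the dissipation of at least $kT \ln 2$ of heat into the environment:

$$\Delta Q \ge kT \ln 2, \qquad \text{equivalently} \qquad \Delta S_{\mathrm{env}} \ge k \ln 2,$$

where $k$ is Boltzmann’s constant and $T$ the temperature of the heat bath.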

In [10], Norton argued that LP has not been proved. LPSG [11] offered a new proof of LP using a conceptual framework (the L-machine) based on an analysis of what it means to implement a computation with a physical system. Norton [12] argued that the physical processes used by LPSG in their proof allow for violations of SL and can be used to construct a counterexample to LP, and thus that the proof fails. However, Ladyman and Robertson [13] showed that this criticism was based on an illicit conception of controlled operations and a violation of the correct analysis of implementation. Norton’s latest criticism of LP is, moreover, not confined to LPSG’s proof, and in [14,15] he offers a no go result that he purports to be the end of the thermodynamics of computation.

Here we review the dialectic as it currently stands and consider Norton’s no go result. In Section 2 we discuss what it means to say that a physical system implements a computation and review the analysis of implementation due to LPSG. In Section 3 we describe the structure of the LPSG proof of LP; in Section 4 we outline Norton’s critique of it and Ladyman and Robertson’s defence of it. In Section 5 we introduce Norton’s more general no-go result for the thermodynamics of computation, explicate his argument and raise some problems with it. In Section 6 we briefly offer some conclusions.

2. The Relation between Computation and Physics: Implementation

Precise formulation and assessment of LP requires an account of what it means to say that physical processes, which are subject to the laws of thermodynamics, implement computations or logical transformations, which are abstract mathematical entities. (We shall from now on speak of “computation”, but we have in mind a general notion that would include paradigmatic computations, like finding prime factors or sorting lists, as well as logical operations such as AND or COPY.) In this section we summarize the model of implementation that LPSG developed and applied in their proof of LP. However, the following features must belong to any adequate model of implementation, for the reasons given in each case.

(1)

Implementation means that a physical system is taken to represent abstract mathematical or logical states. There is a representation relation between physical degrees of freedom and the input and output states of the computation. The same physical degrees of freedom may or may not represent both input and output logical states. The representation relation is either a matter of stipulation and thus completely anthropocentric or it is natural and independent of our choices. The analysis of implementation below is neutral about this matter but in at least some cases it seems completely unnatural and conventional. (Whether or not information is always in the eye of the beholder is a matter of controversy. There are those who would regard everything including the universe itself as a computer, and others who merely claim that there is natural, in the sense of non-anthropocentric, computation in some physical systems such as DNA and associated molecules in cells.)

(2)

Computations cannot be implemented by single processes but only by families of them. This is because computations have multiple inputs and outputs, but physical processes are particular sequences of physical states. For example, RESET is the map that takes 0 to 0 and 1 to 0, but any given physical process can only instantiate one or other of the two possible initial to final state transformations. Hence to say that a physical system implements a computation requires not only that its initial and final states are those associated with the input and output states of the computation on some particular occasion, but also that had the initial state been that corresponding to one of the other input states of the computation, the resultant physical final state would have been that associated with the appropriate output state of the computation. In this sense implementation, like information, is modal (See [16] (p. 149)).

(3)

The physical system must be such that it can be prepared in any of the physical states that represent the logical states, and there must be a time evolution that can be set in motion and which will bring about the physical process that brings about the appropriate final physical state whatever the given initial state. This amounts to the insistence that everything involved in the computation must be included in the operation of the device, ruling out an agent choosing what to do with the device depending on which initial state it is in. LPSG argued that it is trivial that LP is false without this requirement. Norton’s schemes to violate LP all involve such an external agent (see [13]). It is easy to see that LP is false if this is allowed. Consider a particle on one side or other of a partition. One can perform RESET by doing nothing if the device is in the state corresponding to zero; and by simply rotating the device on a frictionless surface if it is in the other state. (Norton’s example is shifting a one-molecule gas from one side to the other of a box.) In sum, as Feynman says, “only if we do not know which side of the compartment the atom is in do we expend free energy” [17] (p. 144). This point gives rise to the terms “known” and “unknown” data, and we may then say LP only applies to the RESET of unknown data.

(4)

In the case of RESET, the physical system must not have another copy of the input state and use it to perform a different choice of operation. If this is done then the system is not implementing a logically irreversible computation; it is effectively implementing UNCOPY (which might be loosely called the erasure of known data).

LPSG’s model of implementation is in terms of “L-machines”, the crucial features of which are that they include a physical device situated in a heat bath, with thermodynamic states to represent logical input and output states, and a dynamics that drives it from the appropriate input to output states. That is, we require a time-evolution operator $\Lambda_L$ for the device such that $\forall x \in X,\ \Lambda_L(D_{\mathrm{in}}(x)) = D_{\mathrm{out}}(L(x))$, where $D$ labels the physical degrees of freedom of the device used to represent the input and output states of the computation. This is because, as Landauer says, “[t]he designer need not anticipate all the possible computations carried out by the machinery: that is what makes a computer more than the mechanical equivalent of looking things up in a table” [18] (p. 24).
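To make the implementation condition concrete, here is a minimal sketch in Python (our illustration; the names `D_in`, `D_out` and `time_evolution` are ours, not LPSG’s formalism) of the condition $\Lambda_L(D_{\mathrm{in}}(x)) = D_{\mathrm{out}}(L(x))$ for RESET, with a single dynamics defined over all inputs, as features (2) and (3) above require:

```python
# A minimal sketch (our illustration, not LPSG's formalism) of the
# implementation condition for RESET.

RESET = {0: 0, 1: 0}            # the logical map L: x |-> 0

def D_in(x):
    # Physical state representing logical input x (a stand-in for a
    # thermodynamic state of the device).
    return ("device", x)

def D_out(y):
    # Physical state representing logical output y.
    return ("device", y)

def time_evolution(state):
    # Lambda_L: ONE fixed dynamics covering every input, with no external
    # agent choosing a different operation depending on the initial state.
    tag, _ = state
    return (tag, 0)             # drive the device to the 0 state

# Implementation is modal: the condition must hold for EVERY input x,
# not merely for the particular input that happens to occur.
assert all(time_evolution(D_in(x)) == D_out(RESET[x]) for x in (0, 1))
```

A scheme in which an agent inspects the input and then applies a different operation would correspond to replacing `time_evolution` with a case analysis performed outside the device, violating feature (3).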

It is important to bear in mind this model or some alternative conceptual structure satisfying the above four features when discussing physical implementations of RESET in the context of LP. It is not sufficient just to imagine some particular physical process that could be representing a particular part of a computation such as, for example, 1 → 0.

3. Summary of the LPSG Proof of LP

LPSG offered a new proof of LP based on the construction of a thermodynamic cycle, arguing that if there were a machine that violated LP it would be possible to harness it to produce a violation of the second law of thermodynamics (SL). LPSG thus argued: if not LP then not SL; SL, therefore LP. (LPSG thus follow the sound horn of the dilemma posed by Earman and Norton 1999, according to which LP can at most be used to illustrate why certain Maxwell demons don’t work but, contrary to the profound horn, cannot be used to demonstrate that there are no Maxwell demons. Bennett 2003 [4] conceded that the profound horn of the dilemma cannot be defended.)

LPSG constructed their thermodynamic cycle using an L-machine assumed to violate LP, combined with a repertoire of processes involving a one-molecule gas, including controlled operations on the L-machine depending on the state of the gas and vice versa (see Section 5 in [11]). Hence, LPSG’s argument is really: if the processes in the cycle are admissible and not LP, then not SL; SL, therefore LP, or at least one of the processes in the cycle is not admissible.

LPSG presupposed standard thermodynamics which they in no way sought to revise, though they did use the Kelvin form of SL in a weakened form that prohibits cycles that reliably produce the sole effect of converting heat into work as mentioned above.

4. Norton’s Critique of LPSG

In [12], John Norton argues that LPSG’s proof is not sound for at least these reasons.

(1)

The processes LPSG use can be used to construct a counterexample to LP, i.e., “dissipationless erasure”.

(2)

It is possible to use LPSG’s processes to violate SL, so LPSG use inconsistent assumptions.

(3)

LPSG (and LPS [19]) use a statistical form of SL with no attempt to ground it in the underlying physical properties of the system.

We will not discuss this final objection here but will focus on the disagreement about the physical processes that LPSG use. LPSG’s processes are part of “a standard repertoire of idealized processes from the literature in the thermodynamics of computation” [12] (p. 185). Norton argues that these and other processes are not implementable even in principle. He also argues that there are no processes to determine the position of a particle without dissipation and that in general such thermodynamically reversible measurements are not possible. Hence, Norton challenges much more than the LPSG proof.

Ladyman and Robertson [13] argue that Norton’s critique of the LPSG proof fails because:

(1)

Norton’s counterexample to LP uses a process that is not used in LPSG’s proof and furthermore the process in question is not admissible.

(2)

The processes LPSG allow do not violate SL.

(3)

Norton illicitly allows a controlled operation of a degree of freedom onto itself [20].

(4)

Norton only considers individual physical processes in his analysis but as argued above, implementation requires consideration of a system with some means of ensuring the time evolution that corresponds to a family of physical processes.

According to Norton, the processes LPSG use, which, as stated above, are standard in the literature, allow for violations of SL and LP. However, the processes Norton uses are importantly different from those LPSG use. For example, one of the processes Norton uses is the insertion of a partition into a one-molecule gas, followed by a measurement of which side of the partition the molecule is on, where the result is used to control an operation that inserts a piston on the relevant side; the partition is then removed and the gas isothermally expanded to its initial volume. This would be a cycle whose sole effect is the conversion of heat (from the heat bath) to work (moving the piston). Another of Norton’s suggestions is to perform “dissipationless erasure” by removing the partition and re-inserting it: if the molecule is then on the side of the partition that represents 0, leave it; if not, repeat until it is.

In these cases, Norton fails to consider the degree of freedom that must be used to control the operation on the gas to extract work and perform RESET respectively. Norton’s schemes require that a controlled operation can be performed from a degree of freedom onto itself (see Section 4.1 in [13]). This violates Condition 3 in Section 2 above. Similarly, the work can only be extracted for the putative violation of SL if something different is done depending on the initial state of the system. The alternative is to consider another physical degree of freedom to control the operation, but then either the RESET of that must be included in the analysis, or the information about which side the particle was on remains stored and the operation becomes the logically reversible UNCOPY as explained above.
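The point about controlled operations can be made concrete (a sketch of ours, echoing note [20]; the names are ours): a controlled operation reads a control degree of freedom and applies an operation to a distinct target, whereas “controlling on itself” collapses into an unconditional map, with the case analysis done by something outside the device.

```python
def controlled_op(control, target, op):
    # Apply op to target iff control == 1. Control and target are DISTINCT
    # degrees of freedom, and the control is left unchanged.
    return control, op(target) if control == 1 else target

flip = lambda b: 1 - b

# CNOT-style use: the control can vary independently of the target.
print(controlled_op(0, 1, flip))   # (0, 1)  -- target untouched
print(controlled_op(1, 1, flip))   # (1, 0)  -- target flipped

# A "self-controlled" RESET, by contrast, is just the unconditional map
# x -> 0; any case analysis must be performed outside the device.
self_controlled_reset = lambda x: 0 if x == 1 else x   # == RESET on {0, 1}
```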

Norton also objects to the idea that the insertion and removal of a partition in a one-molecule gas does not change the thermodynamic entropy of the system. This raises subtle issues concerning the connection between information-theoretic and thermodynamic entropies. However, Norton’s objections to the processes LPSG use are much more far-reaching and fundamental. Clearly, the soundness of the LPSG proof depends on the admissibility of the processes they use. While concerns may be raised about dissipationless measurement and unspecified means of controlling operations, the idea that at least some processes, such as the isothermal compression and expansion of a one-molecule gas, are thermodynamically reversible is orthodox. This is the latest focus of Norton’s assault on the thermodynamics of computation, as he now argues that all such microscopic thermodynamically reversible processes are physically unrealizable even in principle. In the next section we explain and assess Norton’s “no go result”.

5. Norton’s No-Go Theorem

In his most recent work, Norton [12,14,15] has strengthened and broadened his criticism of the thermodynamics of computation with an argument for the conclusion that neither reversible nor irreversible computations can be physically implemented in a thermodynamically reversible way at the microscopic scale. He claims that his “no go result shows that fluctuations preclude completion of thermodynamically reversible processes” [14] (p. 1). According to Norton, “completion can be achieved only by irreversible processes that create thermodynamic entropy in excess of the Landauer limit” (ibid.). The result is supposed to be completely general and to show that the quantitative LP lower bound is significantly exceeded by all implementations of computation, not just irreversible computations. Since LP is the foundational connection between computation and physics, Norton takes his no-go result to show that in the case of the thermodynamics of computation the “image of a well-developed science is an illusion. The thermodynamics of computation is an underdeveloped muddle of vague plausibility arguments and misapplications of statistical physics” (ibid. p. 2). Below we explain Norton’s [12,14] argument for the no-go result, in which he appeals to an analysis of fluctuations from equilibrium due to Einstein and Tolman. In the next subsection we set out the argument, and in the following subsections we criticize it [21].

The thermodynamically reversible processes that Norton says cannot occur are often called “quasi-static”, implying that they can happen arbitrarily slowly. Not all arbitrarily slow processes are thermodynamically reversible (as Norton forthcoming b points out). In the kind of processes commonly discussed in the literature in general, and used in the LPSG proof of LP in particular, the point of considering the processes as occurring over long periods of time is to allow the systems undergoing them to stay virtually in thermal equilibrium with a heat bath [22].

In the limit of infinite time the system is idealized as being perfectly thermodynamically reversible. Examples include the isothermal expansion and compression of gases. Norton says that because such processes are at “perfect equilibrium” [14] (p. 3) they must be impossible (as the defining feature of equilibrium states is that they do not change).

According to Norton, if such a process were to proceed, and if its final state is not intrinsically more probable than its initial state, then a difference in the free energy between the two states must be introduced to make it more probable. Heat flow between systems in thermal equilibrium requires a disequilibrium between them, and Norton argues that, at the microscopic level, this cannot be idealized away; hence the conclusion of the theorem is that “fluctuations obliterate thermodynamically reversible processes” [14] (p. 6).

When considering how microscopic reversible processes proceed, Norton says “[e]ach computational step is carried out by a thermodynamically reversible process, whose stages are parameterized by λ. Fluctuations will carry the system spontaneously from one stage to another. As a result, the system is probabilistically distributed over the different stages” [14] (p. 5). Assuming that the processes in question must spontaneously occur due to fluctuations leads Norton [12,14] to employ a method due to Einstein and Tolman [23] (pp. 637–638) for calculating the probability of fluctuations of a macroscopic quantity away from equilibrium. The Einstein-Tolman formula connects the probability of a fluctuation with a corresponding increase in free energy and applies to fluctuations in some macroscopically controllable variable $x$ around its mean value $x_0$ (at equilibrium the mean value is the equilibrium value). Tolman considers the case of the mean value $v_0$ of the volume $v$ of a fluid. If the volume of the fluid were controlled (for example, by a movable partition) to a different upper value $v_{\lim}$, then the free energy of the system would be different. Of all the $n$ possible states of the system, there are some $n(x_{\lim})$ which are compatible with the hypothetical control $x_{\lim}$. The probability of fluctuating to the value $x$ is the total probability of all states compatible with the control, which is the sum of the probabilities of each individual state:

$$P(x) = \sum_{n(x_{\lim})} e^{(\varphi - E_n)/\theta} \tag{1}$$

where $E_n$ is the energy of state $n$, and $\theta$ and $\varphi$ are parameters. This can be re-expressed as

$$P(x) = C \exp\{-F(x)/kT\} \tag{2}$$

where $C$ is a constant and $F(x)$ is the free energy that would be given to the system as a whole if the control were put in (limiting the variable $x$) rather than allowing it to fluctuate freely. Relative to the mean value $x_0$, this gives

$$P(x) = P(x_0)\, e^{-[F(x) - F(x_0)]/kT} \tag{3}$$

As Tolman says, this formula is not precise, because P(x) is the probability of being in a state compatible with the control, which is not the same as being in a state at the control value [24].

Equation (3) tells us that fluctuations far from the mean (where $x - x_0$ is large) will be exponentially suppressed: when $x - x_0$ is large, $F(x) - F(x_0)$ is very large compared to $kT$, since $F(x_0)$ is a minimum. Obviously the most probable fluctuations are those close to the mean, and they contribute most to $P(x)$. In the usual case, the probability distribution $P(x)$ generated when $F(x_0)$ is a minimum is a Gaussian centered around the mean value $x_0$.
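As a numerical illustration of this point (our own sketch, assuming a simple quadratic free energy rather than anything in Tolman), the suppression in Equation (3) can be tabulated directly:

```python
import math

kT = 1.0                        # work in units of kT
a = 50.0                        # stiffness of the free-energy minimum
x0 = 0.0                        # mean (equilibrium) value

def F(x):
    # Assumed quadratic free energy about its minimum at x0.
    return a * (x - x0) ** 2

def P(x):
    # Unnormalized fluctuation probability, as in Equations (2)-(3).
    return math.exp(-F(x) / kT)

for x in (0.0, 0.1, 0.2, 0.5):
    print(f"x = {x:3.1f}   F(x) - F(x0) = {F(x):5.1f} kT   "
          f"P(x)/P(x0) = {P(x) / P(x0):.2e}")
# Fluctuations costing a few kT are common; those costing tens of kT are
# exponentially suppressed, giving the Gaussian profile described above.
```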

5.1. Norton’s Argument

Premise 1

Processes proceed spontaneously by fluctuations, so if a process involves an evolution from stage 1 to stage 2 then stage 2 must be more probable than stage 1. Therefore the ratio in Equation (3) must be large for the process to be more likely to proceed than not (and so to be a reliable means of implementing the relevant branch of the computation). Further, Norton assumes that the fluctuations from the Einstein-Tolman method can be treated as the stages of the process. The mean state $x_0$ of the section above is relabeled by Norton as stage $\lambda_1$, and the fluctuation value $x$ is called $\lambda_2$.

Premise 2

The probability densities for the system fluctuating between stages $\lambda_1$ and $\lambda_2$ satisfy:

$$p(\lambda_2)/p(\lambda_1) = e^{-[F(\lambda_2) - F(\lambda_1)]/kT} \tag{4}$$

(This is Equation (3) above, relabeled as in premise 1.)

Premise 3

“The process is thermodynamically reversible, hence it is in equilibrium at every stage. Equilibrium requires the vanishing of the generalized thermodynamic force X(λ), acting on the system” (ibid. p. 5).

$$X(\lambda) = -\left(\frac{\partial F(\lambda)}{\partial \lambda}\right)_T = 0 \tag{5}$$

Integrating over λ, we find that the free energy (F) is constant over the stages of the process:

$$F(\lambda) = \text{constant}, \quad\text{so}\quad F(\lambda_2) = F(\lambda_1) \tag{6}$$

Therefore, from Equation (4) above,

$$p(\lambda_2) = p(\lambda_1) \tag{7}$$

Since this holds for all $\lambda$, it follows that $p(\lambda)$ is constant. Norton says this last equation is the no go result [14] (p. 6).

The crucial point of Norton’s analysis is that the probability of the system being in the final state is the same as that of it being in the initial state. Hence the “process” fails reliably to take the system from the initial state to the final state; indeed the system fluctuates through all stages of the “process” with equal probability. Norton says, “if we try to set up the process in its initial stage, it is as likely to remain there as to fluctuate to any intermediate stage or the final stage. If the process has arrived at its final stage, it is as likely to remain there as to fluctuate back to any earlier stage” [12] (p. 192).

Hence the conclusion quoted above that “[f]luctuations obliterate thermodynamically reversible processes” [14] (p. 6).

It is important to note that it has not been assumed thus far that the system in question is microscopic. Given that Norton does not want to deny the validity of phenomenological thermodynamics, or at least its approximate validity, it must be possible for the system’s evolution to “overcome fluctuations”, in the sense that the final state can be reliably arrived at from the initial state. In his analysis this happens through the introduction of a difference in the free energy of the two states, so that the ratio $p(\lambda_2)/p(\lambda_1)$ increases to the extent that the system will surely evolve from state 1 to state 2, while the associated increase in entropy between the states is negligible at the macroscopic level, allowing us to maintain the approximate truth of the claim that the process is thermodynamically reversible. Yet, argues Norton, at the microscopic level this entropy cannot be ignored, because it is much greater than the Landauer bound.

The example that he gives is as follows [14] (p. 8). If we introduce a difference in the free energy of $25\,kT$ between the initial and final states, then the final state is strongly favored, as the probability density ratio is $p(\lambda_2)/p(\lambda_1) = e^{25} \approx 7.2 \times 10^{10}$. As $25\,kT$ is roughly the thermal energy of 10 oxygen molecules, it is negligible for macroscopic processes.
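Norton’s figures can be checked directly (the $25\,kT$ value is his; the code is our sketch):

```python
import math

# Probability density ratio for a free-energy difference of 25 kT, Eq. (4):
ratio = math.exp(25)
print(f"p(lambda_2)/p(lambda_1) = {ratio:.1e}")   # ~7.2e+10, as Norton reports

# For scale: a diatomic molecule such as O2 carries roughly (5/2) kT of
# thermal energy, so 25 kT is about the thermal energy of ten such molecules.
print(25 / 2.5)                                   # 10.0
```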

5.2. Critique of the No-Go Result

We offer a number of objections to the no-go result below.

(1)

Even if it is appropriate to treat the processes in question as proceeding by fluctuations (which we argue below it is not), it is inconsistent to say both that the final state $\lambda_2$ has arisen by a fluctuation (premise 1) and that it is an equilibrium state (premise 3). A fluctuation is a transient deviation from equilibrium.

(2)

If fluctuations to any of the “stages” were equally likely, then the probability distribution would be a constant function. This is not compatible with the results of the Einstein-Tolman method, because the latter assumes that the free energy is a minimum at equilibrium, from which it follows that the probability distribution of the fluctuations will be a Gaussian centered on the mean [25] (p. 245).

(3)

The no go result applies to all systems, not just molecular scale processes. If Norton is right, the only way to recover macroscopic thermodynamically reversible processes is to introduce an entropy increase that cannot be idealized away as standard thermodynamics demands. Consequently, whilst the target of the no-go result is the thermodynamics of computation, if successful it actually undermines the orthodox way of thinking about thermodynamic processes. Whilst this consequence might be palatable for Norton, it is important to note that it is the price of his result. In particular, if the only way to overcome the fluctuations is to make a sufficient increase in the free energy, then we cannot take the limit of infinite time discussed above in which the process is supposed to be thermodynamically reversible. Rather than showing that reversible processes cannot proceed at the microscale without violating the Landauer bound, Norton’s result is unrestricted: if the argument is sound, it shows that no supposedly thermodynamically reversible process can proceed without some entropy increase. The only difference is that this entropy is “negligible” at the macroscale but not at the microscale. However, this runs against the thermodynamic orthodoxy, and so Norton’s argument against the thermodynamics of computation conflicts with standard thermodynamics too.

(4)

The Einstein-Tolman method is for fluctuations. Processes do not proceed by fluctuations. In the next subsection we argue that premise 1 should be rejected.

5.2.1. Processes Do Not Proceed by Fluctuations

When Norton [14] (Section 2.2) argues for his “main result” he models a generic physical process that he describes as a “computational step” and says of it that “[f]luctuations will carry the system spontaneously from one stage to another” ([14] (p. 5), also quoted above). According to Norton’s analysis, because the processes that implement computations must proceed by spontaneous fluctuations, the final state must be more probable than the initial state if it is to be reached reliably, and likewise for each of the stages in between. Of course, even if a fluctuation to a value of $x$ far from the mean occurs, the system is overwhelmingly likely to revert to the mean over time. To manufacture a large probability ratio, Norton introduces a difference in the free energies, because the probability ratio scales with the difference in the free energies of the states (the final state has a lower free energy than the initial state) in accordance with Equation (3) above. The “disequilibrium” would have to be assumed to be infinitesimal so that in the infinite limit of slowing down the process it will be reversible. However, in the limit of taking the difference in the free energy to zero, the probability ratio tends to one and therefore would not drive the process. Therefore, according to Norton, there is a lower bound on the size of the “disequilibrium”; we cannot idealize it and take the limit of it becoming infinitesimal.
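In terms of Equation (4), writing $\Delta F = F(\lambda_1) - F(\lambda_2) > 0$ for the bias introduced to drive the process, the problem is that

$$\frac{p(\lambda_2)}{p(\lambda_1)} = e^{\Delta F / kT} \to 1 \quad \text{as } \Delta F \to 0,$$

so in the strict reversible limit the final stage is no more probable than the initial one.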

In Norton’s model the transition probability for the process is determined solely by the free energies of the different stages, and therefore synchronically. But if we are considering, say, isothermal compression and isothermal expansion, the same two states must be passed through in opposite temporal directions. If the processes were driven solely by properties of each state, then we would expect to see only isothermal expansion or only isothermal compression. Furthermore, there would be no way to control the rate at which processes happen, which undermines the idea of reversible/quasi-static processes explained above.

The processes in question are not supposed to happen spontaneously but are driven by the dynamics of the computational device. Processes do not proceed by fluctuations alone but by fluctuations together with some driving potential, i.e., under the operation of a Hamiltonian [26].
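This point can be illustrated with a toy simulation (entirely our construction: a schematic overdamped Langevin model, not Norton’s or LPSG’s). The same thermal fluctuations are present in both runs below, but only the system whose trap centre is slowly dragged by a time-dependent potential reliably completes the transition between stages:

```python
import math, random

random.seed(0)
kT, gamma, k_trap, dt = 1.0, 1.0, 10.0, 1e-3
steps = 200_000
noise = math.sqrt(2 * kT * dt / gamma)   # thermal fluctuation strength

def run(driven):
    # Overdamped Langevin dynamics in a harmonic trap. If driven, the trap
    # centre is dragged slowly from stage 1 (x = 0) to stage 2 (x = 1);
    # otherwise the trap is static and the particle merely fluctuates.
    x, tail = 0.0, []
    for i in range(steps):
        centre = i / steps if driven else 0.0
        force = -k_trap * (x - centre)            # driving potential
        x += force * dt / gamma + noise * random.gauss(0.0, 1.0)
        if i >= steps * 9 // 10:
            tail.append(x)                        # record the late stages
    return sum(tail) / len(tail)                  # late-time average position

print(f"driven (trap dragged 0 -> 1): late-time x = {run(True):+.2f}")
print(f"undriven (trap fixed at 0):   late-time x = {run(False):+.2f}")
```

The fluctuations set the spread about each stage; the slowly varied potential is what carries the system from one stage to the next, however near to quasi-static the schedule is.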

In sum, for the reasons given above we argue that Norton’s way of modeling the processes implementing computations as a series of fluctuations from equilibrium is unjustified. Norton may be right that fluctuations prevent thermodynamically reversible processes from occurring at the microscopic scale, but we have argued that the above argument for the no go result is not sound.

6. Conclusions

Even if the only way we can prove LP is by assuming SL, using the former to show that a Maxwell Demon is impossible is still of interest, as it shows the overall coherence of physics and information theory. In any case, LP is the formal foundation of the thermodynamics of computation and it is important to consider its precise formulation and justification. Norton [10] is right that consideration of instances of RESET or erasure is insufficient to demonstrate LP in full generality. However, if every schema for the physical implementation of an irreversible computation of which we can think does satisfy LP, then we may regard it as a reasonable inductive hypothesis. On the other hand, we could take LP as a regulative principle that is somehow constitutive of the theory and assumed a priori [27]. LPSG offered a proof of LP from SL. That proof may survive some of Norton’s criticisms, but it certainly relies on the admissibility of processes that are idealized to be thermodynamically reversible. Norton’s no go result implies that there are no thermodynamically reversible processes that can be reliably implemented. He advertises the result as applying to microscopic processes; however, it applies to all processes if to any, because nowhere in the proof does he use the fact that the system in question is microscopic. As he points out, the result does not rule out approximations to thermodynamically reversible processes that produce only macroscopically negligible quantities of entropy, because “the disequilibrium required to overcome fluctuations is negligible macroscopically” [14] (p. 7). Clearly, then, the phenomenological validity of thermodynamics is not challenged by Norton’s result; however, thermodynamics certainly cannot have any kind of fundamental status if he is right, and its use in regimes outside of those probed by our experiments with work and heat may be problematic [28].

Behind the no-go result is a more general worry about the illegitimacy of neglecting fluctuations when describing microscopic processes with thermodynamics. We have argued that the no go result is not established, but we have not sought to dismiss Norton’s concerns about reasoning that assumes one-molecule gases and other microscopic systems can undergo thermodynamically reversible processes.

Acknowledgments

We are very grateful to John Norton for discussion.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Landau, L.D.; Lifshitz, E.M. Statistical Physics; Pergamon Press: London, UK, 1958.
2. Szilard, L. Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen. Z. Phys. 1929, 53, 840–856. (In German).
3. Bennett, C.H. The logical reversibility of computation. IBM J. Res. Dev. 1973, 17, 525–532.
4. Bennett, C.H. Notes on Landauer’s principle, reversible computation, and Maxwell’s demon. Stud. Hist. Philos. Mod. Phys. 2003, 34, 501–510.
5. Bennett, C.H. The thermodynamics of computation—A review. Int. J. Theor. Phys. 1982, 21, 905–940.
6. Bennett, C.H. Demons, engines and the second law. Sci. Am. 1987, 257, 108–116.
7. Bennett, C.H. Notes on the history of reversible computation. IBM J. Res. Dev. 1988, 32, 16–23.
8. Landauer, R. Irreversibility and heat generation in the computing process. IBM J. Res. Dev. 1961, 5, 183–191.
9. Blundell, S.; Blundell, K. Concepts in Thermal Physics; Oxford University Press: Oxford, UK, 2010.
10. Norton, J.D. Eaters of the lotus: Landauer’s principle and the return of Maxwell’s demon. Stud. Hist. Philos. Mod. Phys. 2005, 36, 375–411.
11. Ladyman, J.; Presnell, S.; Short, A.; Groisman, B. The connection between logical and thermodynamic irreversibility. Stud. Hist. Philos. Mod. Phys. 2007, 38, 58–79.
12. Norton, J.D. Waiting for Landauer. Stud. Hist. Philos. Mod. Phys. 2011, 42, 184–198.
13. Ladyman, J.; Robertson, K. Landauer defended: Reply to Norton. Stud. Hist. Philos. Sci. B 2013, 44, 263–271.
14. Norton, J.D. The end of the thermodynamics of computation: A no go result. Philos. Sci. 2013, 80, 1182–1192.
15. Norton, J.D. All shook up: Fluctuations, Maxwell’s demon and the thermodynamics of computation. Entropy 2013, 15, 4432–4483.
16. Maudlin, T. Quantum Non-Locality and Relativity: Metaphysical Intimations of Modern Physics, 3rd ed.; Wiley-Blackwell: Oxford, UK, 2011.
17. Feynman, R.P. Feynman Lectures on Computation; Hey, J.G., Allen, W., Eds.; Addison-Wesley: Reading, MA, USA, 1996.
18. Landauer, R. Information is physical. Phys. Today 1991, 44, 23–29.
19. Ladyman, J.; Presnell, S.; Short, A. The use of the information-theoretic entropy in thermodynamics. Stud. Hist. Philos. Mod. Phys. 2008, 39, 315–324.
20. A controlled operation of a degree of freedom onto itself is one where the form of the operation depends on the state of the degree of freedom and nothing else. As Ladyman and Robertson argue, it is trivial that LP is false if such operations are admitted, and controlled operations such as CNOT require a control system and a target system that consist of different physical degrees of freedom, so that the state of the former can vary independently of the state of the latter prior to the operation being implemented.
21. Norton’s arguments for the no-go result in this volume do not make reference to the Einstein-Tolman method. Some but not all of our criticisms here apply only to his argument in his 2011 and forthcoming a. We do not consider Norton forthcoming b here.
22. Hence, sometimes “quasi-static” is used to refer to processes that involve a sequence of states each of which is almost at equilibrium (see, for example, [25] (p. 243)).
23. Tolman, R. The Principles of Statistical Mechanics; Clarendon Press: Oxford, UK, 1938.
24. P(being in a state compatible with the control) = P(fluctuating to the state at the control value) is approximately valid when the control value is near the mean value.
25. Cowan, B. Topics in Statistical Mechanics; Imperial College Press: London, UK, 2005.
26. In any computational device the physical states that represent computational states are likely to be coarse-grained with respect to the underlying fundamental physical state. For example, in a standard logic gate the computational states are represented by potential differences, and all that matters is whether they are either approximately 0 V or approximately 3 V, and so all manner of fluctuations in the exact physical state of the system are compatible with the computation proceeding in a particular way. Even in a molecular scale computation fluctuations in the exact state of the system will be permissible providing that they are compatible with the reliable evolution of the system in such a way as to take the relevant physical input states to the relevant physical output states so as to implement the computation. Of course, one can also consider probabilistic computation where some aspects of the computation proceed non-deterministically; however, in the present context it is deterministic computation that is at issue. This does not mean that the underlying physics must be deterministic in toto, just that the system will follow the appropriate time evolution at the scale of the physical states used to represent computational states so that the computation can be reliably implemented.
27. Friedman, M. Dynamics of Reason; Chicago University Press: Chicago, IL, USA, 2001.
28. Bérut et al. [29] claim to have experimentally verified LP. However, Norton [15] (Section 3.7) argues that their “erasure protocol” includes an unnecessary step that involves dissipation, and also that LP is not tested since the entropy associated with the heat produced by the whole apparatus is not taken into account.
29. Bérut, A.; Arakelyan, A.; Petrosyan, A.; Ciliberto, S.; Dillenschneider, R.; Lutz, E. Experimental verification of Landauer’s principle linking information and thermodynamics. Nature 2012, 483, 187–189.
