
*Entropy*
**2015**,
*17*(2),
772-789;
doi:10.3390/e17020772


## Abstract

There is a well-known analogy between statistical and quantum mechanics. In statistical mechanics, Boltzmann realized that the probability for a system in thermal equilibrium to occupy a given state is proportional to exp(−E/kT ), where E is the energy of that state. In quantum mechanics, Feynman realized that the amplitude for a system to undergo a given history is proportional to exp(−S/iħ), where S is the action of that history. In statistical mechanics, we can recover Boltzmann’s formula by maximizing entropy subject to a constraint on the expected energy. This raises the question: what is the quantum mechanical analogue of entropy? We give a formula for this quantity, which we call “quantropy”. We recover Feynman’s formula from assuming that histories have complex amplitudes, that these amplitudes sum to one and that the amplitudes give a stationary point of quantropy subject to a constraint on the expected action. Alternatively, we can assume the amplitudes sum to one and that they give a stationary point of a quantity that we call “free action”, which is analogous to free energy in statistical mechanics. We compute the quantropy, expected action and free action for a free particle and draw some conclusions from the results.

## 1. Introduction

There is a famous analogy between statistical mechanics and quantum mechanics. In statistical mechanics, a system can be in any state, but its probability of being in a state with energy E is proportional to exp(−E/T ), where T is the temperature in units where Boltzmann’s constant is one. In quantum mechanics, a system can move along any path, but its amplitude for moving along a path with action S is proportional to exp(−S/iħ), where ħ is Planck’s constant. Therefore, we have an analogy, where we make these replacements:

$$\begin{array}{l}E\; \mapsto\; A\\ T\; \mapsto\; i\hslash\end{array}$$

In statistical mechanics, the probabilities exp(−E/T ) arise naturally from maximizing entropy subject to a constraint on the expected value of energy. Following the analogy, we might guess that the amplitudes exp(−S/iħ) arise from maximizing some quantity subject to a constraint on the expected value of action. This quantity deserves a name, so let us tentatively call it “quantropy”.

In fact, Lisi [5] and Munkhammar [7] have already treated quantum systems as interacting with a “heat bath” of action and sought to derive quantum mechanics from a principle of maximum entropy with amplitudes (or as they prefer to put it, complex probabilities) replacing probabilities. However, seeking to derive amplitudes for paths in quantum mechanics from a maximum principle is not quite correct. Quantum mechanics is rife with complex numbers, and it makes no sense to maximize a complex function. However, a complex function can still have stationary points, where its first derivative vanishes. Therefore, a less naive program is to derive the amplitudes in quantum mechanics from a “principle of stationary quantropy”. We do this for a class of discrete systems and then illustrate the idea with the example of a free particle, discretizing both space and time.

Carrying this out rigorously is not completely trivial. In the simplest case, entropy is defined as a sum involving logarithms. Moving to quantropy, each term in the sum involves a logarithm of a complex number. Making each term well defined requires a choice of branch cut; it is not immediately clear that we can do this and obtain a differentiable function as the result. Additional complications arise when we consider the continuum limit of the free particle. Our treatment handles all these issues.

We begin by reviewing the main variational principles in physics and pointing out the conceptual gap that quantropy fills. In Section 2, we introduce quantropy along with two related quantities: the free action and the expected action. In Section 3, we develop tools for computing all of these quantities. In Section 4, we illustrate our methods with the example of a free particle, and address some of the conceptual questions raised by our results. We conclude by mentioning some open issues in Section 5.

#### 1.1. Statics

Static systems at temperature zero obey the principle of minimum energy. In classical mechanics, energy is often the sum of kinetic and potential energy:

$$E = K + V$$

While familiar, this is actually somewhat noteworthy. Usually, minimizing the sum of two things involves an interesting tradeoff. In quantum physics, a tradeoff really is required, thanks to the uncertainty principle. We cannot know the position and velocity of a particle simultaneously, so we cannot simultaneously minimize potential and kinetic energy. This makes minimizing their sum much more interesting. However, in classical mechanics, in situations where the kinetic energy K has a minimum at velocity zero, statics at temperature zero is governed by a principle of minimum potential energy.

The study of static systems at nonzero temperature deserves to be called “thermostatics”, though it is usually called “equilibrium thermodynamics”. In classical or quantum equilibrium thermodynamics at any fixed temperature, a system is governed by the principle of minimum free energy. Instead of our system occupying a single definite state, it will have different probabilities of occupying different states, and these probabilities will be chosen to minimize the free energy:

$$F = \langle E\rangle - TS$$

Where does the principle of minimum free energy come from? One answer is that free energy F is the amount of “useful” energy: the expected energy 〈E〉 minus the amount in the form of heat, TS. For some reason, systems in equilibrium minimize this.

Boltzmann and Gibbs gave a deeper answer in terms of entropy. Suppose that our system has some space of states X and that the energy of the state x ∈ X is E(x). Suppose that X is a measure space with some measure dx, and assume that we can describe the equilibrium state using a probability distribution, a function p: X → [0, ∞) with:

$$\int_X p(x)\, dx = 1$$

In summary, every minimum or maximum principle in statics can be seen as a special case or a limiting case of the principle of maximum entropy, as long as we admit that sometimes we need to maximize entropy subject to constraints. This is quite satisfying, because as noted by Jaynes, the principle of maximum entropy is a general principle for reasoning in situations of partial ignorance [4]. Therefore, we have a kind of “logical” explanation for the laws of statics.
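The principle of maximum entropy can be checked directly for a small discrete system. The sketch below is our own illustration, not part of the original argument: the three-state system, its energies and the temperature are arbitrary choices. It verifies that the Boltzmann distribution has strictly greater entropy than nearby distributions with the same normalization and the same expected energy.

```python
import math

# Three states with energies E_i, temperature T = 1 (Boltzmann's constant = 1).
E = [0.0, 1.0, 2.0]
T = 1.0
Z = sum(math.exp(-e / T) for e in E)
p_boltz = [math.exp(-e / T) / Z for e in E]

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

# Perturbations u with sum(u) = 0 and sum(u_i * E_i) = 0 preserve both the
# normalization and the expected energy. For E = (0, 1, 2), u = (1, -2, 1).
u = [1.0, -2.0, 1.0]
assert abs(sum(u)) < 1e-12
assert abs(sum(ui * e for ui, e in zip(u, E))) < 1e-12

S0 = entropy(p_boltz)
for t in (-0.05, -0.01, 0.01, 0.05):
    p = [pi + t * ui for pi, ui in zip(p_boltz, u)]
    assert all(pi > 0 for pi in p)
    assert entropy(p) < S0   # the Boltzmann distribution maximizes entropy
```

Because entropy is strictly concave, the constrained maximum is unique, so every admissible perturbation lowers the entropy.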

#### 1.2. Dynamics

Now, suppose things are changing as time passes, so we are doing dynamics instead of statics. In classical mechanics, we can imagine a system tracing out a path q(t) as time passes from t = t_{0} to t = t_{1}. The action of this path is often the integral of the kinetic minus potential energy:

$$S(q) = \int_{t_0}^{t_1} \left(K - V\right) dt$$

The principle of least action says that if we fix the endpoints of this path, that is the points q(t_{0}) and q(t_{1}), the system will follow the path that minimizes the action subject to these constraints. This is a powerful idea in classical mechanics. However, in fact, sometimes, the system merely chooses a stationary point of the action. The Euler–Lagrange equations can be derived just from this assumption. Therefore, it is better to speak of the principle of stationary action.

This principle governs classical dynamics. To generalize it to quantum dynamics, Feynman [2] proposed that instead of our system following a single definite path, it can follow any path, with an amplitude a(q) of following the path q. He proposed this formula for the amplitude:

$$a(q) \propto e^{-S(q)/i\hslash}$$

He showed that if we integrate these amplitudes over all paths starting at a point x_{0} at time t_{0} and ending at a point x_{1} at time t_{1}, we obtain a result proportional to the amplitude for a particle to go from the first point to the second. He also gave a heuristic argument showing that as ħ → 0, this prescription reduces to the principle of stationary action.

Unfortunately, the integral over all paths is hard to make rigorous, except in certain special cases. This is a bit of a distraction for our discussion now, so let us talk more abstractly about “histories” instead of paths and consider a system whose possible histories form some space X with a measure dx. We will look at an example later.

Suppose the action of the history x ∈ X is A(x). Then, Feynman’s sum over histories formulation of quantum mechanics says the amplitude of the history x is:

$$a(x) = \frac{e^{-A(x)/i\hslash}}{Z}, \qquad Z = \int_X e^{-A(x)/i\hslash}\, dx$$

## 2. Quantropy

We have described statics and dynamics and a well-known analogy between them. However, we have seen that there are some missing items in the analogy:

Statics | Dynamics
---|---
statistical mechanics | quantum mechanics
probabilities | amplitudes
Boltzmann distribution | Feynman sum over histories
energy | action
temperature | Planck’s constant times i
entropy | ???
free energy | ???

Our goal now is to fill in the missing entries in this chart. Since the Boltzmann distribution:

$$p(x) = \frac{e^{-E(x)/T}}{Z}$$

arises from maximizing entropy, we might hope to derive Feynman’s sum over histories from a principle of maximum quantropy. Unfortunately, Feynman’s sum over histories involves complex numbers, and it does not make sense to maximize a complex function. So, let us instead try to derive Feynman’s prescription from a principle of stationary quantropy.

Suppose we have a set of histories, X, equipped with a measure dx. Suppose there is a function a: X → ℂ assigning to each history x ∈ X a complex amplitude a(x). We assume these amplitudes are normalized, so that:

$$\int_X a(x)\, dx = 1$$

By analogy with entropy, we define the quantropy of the amplitudes to be:

$$Q = -\int_X a(x) \ln a(x)\, dx$$

Since the logarithm of a complex number is only defined up to a multiple of 2πi, this definition requires choosing a branch of ln a(x) for each history x.

To formalize this, we could treat quantropy as depending not on the amplitudes a(x), but on some function b: X → ℂ, such that exp(b(x)) = a(x). In this approach, we require:

$$\int_X e^{b(x)}\, dx = 1$$

and define the quantropy as:

$$Q = -\int_X e^{b(x)}\, b(x)\, dx$$

Next, let us seek amplitudes a(x) that give a stationary point of the quantropy Q subject to a constraint on the “expected action”:

$$\langle A\rangle = \int_X a(x) A(x)\, dx$$

Let us look for a stationary point of Q subject to a constraint on 〈A〉, say 〈A〉 = α. To do this, one would be inclined to use Lagrange multipliers and to look for a stationary point of:

$$Q - \lambda\left(\langle A\rangle - \alpha\right) - \mu\left(\int_X a(x)\, dx - 1\right)$$

where λ and μ are Lagrange multipliers.

Following the usual Lagrange multiplier recipe, we find that the stationary points are given by:

$$a(x) = \frac{e^{-\lambda A(x)}}{Z}, \qquad Z = \int_X e^{-\lambda A(x)}\, dx$$

where the multiplier λ is determined by the constraint 〈A〉 = α. This is precisely Feynman’s prescription when λ = 1/iħ.

Note that the final answer does two equivalent things in one blow:

- It gives a stationary point of quantropy subject to the constraints that the amplitudes sum to 1 and the expected action takes some fixed value.
- It gives a stationary point of the free action:

$$\Phi = \langle A\rangle - i\hslash Q$$

subject only to the constraint that the amplitudes sum to 1.

It is also worth noting that when ħ → 0, the free action reduces to the action. Thus, in this limit, the principle of stationary free action reduces to the principle of stationary action in classical dynamics.
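For a system with finitely many histories, the claims of this section can be verified numerically. The following sketch is our own illustration (the two action values and ħ = 1 are arbitrary choices): it builds Feynman-style amplitudes for two histories, checks that they sum to 1, and confirms that the free action 〈A〉 − iħQ agrees with −(1/λ) ln Z, a relation that also appears in the chart of Section 3.

```python
import cmath

hbar = 1.0
lam = 1 / (1j * hbar)    # the "classicality" lambda = 1/(i*hbar)
A = [0.3, 1.1]           # actions of the two histories (arbitrary)

# Feynman-style amplitudes a(x) = exp(-lam*A(x)) / Z.
Z = sum(cmath.exp(-lam * a) for a in A)
amp = [cmath.exp(-lam * a) / Z for a in A]
assert abs(sum(amp) - 1) < 1e-12          # amplitudes sum to 1

# Quantropy with the branch ln a(x) = -lam*A(x) - ln Z.
lnZ = cmath.log(Z)
Q = -sum(ai * (-lam * a - lnZ) for ai, a in zip(amp, A))
expA = sum(ai * a for ai, a in zip(amp, A))   # complex "expected action"

# The free action <A> - i*hbar*Q coincides with -(1/lam) * ln Z.
Phi = expA - 1j * hbar * Q
assert abs(Phi - (-lnZ / lam)) < 1e-12
```

The identity Φ = −(1/λ) ln Z holds exactly here, since Q = λ〈A〉 + ln Z whenever the amplitudes are normalized.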

## 3. Computing Quantropy

In thermodynamics, there is a standard way to compute the entropy of a system in equilibrium starting from its partition function. We can use the same techniques to compute quantropy. It is harder to get the integrals to converge in interesting examples. But we can worry about that later, when we do an example.

First, recall how to compute the entropy of a system in equilibrium starting from its partition function. Let X be the set of states of the system. We assume that X is a measure space and that the system is in a mixed state given by some probability distribution p: X → [0, ∞), where, of course:

$$\int_X p(x)\, dx = 1$$

Of course, we can also write the free energy in terms of the partition function and β:

$$F = -\frac{1}{\beta}\ln Z$$

Similarly, if we know the partition function of a quantum system as a function of λ = 1/iħ, we can compute its quantropy, expected action and free action. Let X be the set of histories of some system. We assume that X is a measure space and that the amplitudes for histories are given by a function a: X → ℂ obeying:

$$\int_X a(x)\, dx = 1$$

where, following Feynman’s prescription:

$$a(x) = \frac{e^{-\lambda A(x)}}{Z}, \qquad Z = \int_X e^{-\lambda A(x)}\, dx$$

As mentioned, the formula for quantropy here is a bit dangerous, since we are taking the logarithm of the complex-valued function a(x), which requires choosing a branch. Luckily, the ambiguity is greatly reduced when we use Feynman’s prescription for a, because in this case, a(x) is defined in terms of an exponential. Therefore, we can choose this branch of the logarithm:

$$\ln a(x) = -\lambda A(x) - \ln Z$$

where we fix some choice of ln Z once and for all.

Inserting this formula for ln a(x) into the formula for quantropy, we obtain:

$$Q = -\int_X a(x)\left(-\lambda A(x) - \ln Z\right) dx = \lambda\langle A\rangle + \ln Z$$

In terms of λ, we have:

$$\langle A\rangle = -\frac{d}{d\lambda}\ln Z$$

and thus:

$$Q = \ln Z - \lambda\frac{d}{d\lambda}\ln Z$$

The following chart summarizes the analogy:

Statistical Mechanics | Quantum Mechanics
---|---
states: x ∈ X | histories: x ∈ X
probabilities: p: X → [0, ∞) | amplitudes: a: X → ℂ
energy: E: X → ℝ | action: A: X → ℝ
temperature: T | Planck’s constant times i: iħ
coolness: β = 1/T | classicality: λ = 1/iħ
partition function: $Z={\int}_{X}{e}^{-\beta E(x)}\,dx$ | partition function: $Z={\int}_{X}{e}^{-\lambda A(x)}\,dx$
Boltzmann distribution: p(x) = e^{−βE(x)}/Z | Feynman sum over histories: a(x) = e^{−λA(x)}/Z
entropy: $S=-{\int}_{X}p(x)\,\mathrm{ln}\,p(x)\,dx$ | quantropy: $Q=-{\int}_{X}a(x)\,\mathrm{ln}\,a(x)\,dx$
expected energy: 〈E〉 = ∫_{X} p(x)E(x) dx | expected action: 〈A〉 = ∫_{X} a(x)A(x) dx
free energy: F = 〈E〉 − TS | free action: Φ = 〈A〉 − iħQ
$\langle E\rangle =-\frac{d}{d\beta}\mathrm{ln}Z$ | $\langle A\rangle =-\frac{d}{d\lambda}\mathrm{ln}Z$
$F=-\frac{1}{\beta}\mathrm{ln}Z$ | $\mathrm{\Phi}=-\frac{1}{\lambda}\mathrm{ln}Z$
$S=\mathrm{ln}Z-\beta \frac{d}{d\beta}\mathrm{ln}Z$ | $Q=\mathrm{ln}Z-\lambda\frac{d}{d\lambda}\mathrm{ln}Z$
principle of maximum entropy | principle of stationary quantropy
principle of minimum energy (in T → 0 limit) | principle of stationary action (in ħ → 0 limit)
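The derivative formulas in the last rows of this chart can be sanity-checked numerically for a system with finitely many states or histories. In the sketch below (our own illustration; the action values are arbitrary), Q is computed both from the partition function, with the λ-derivative taken by finite differences, and directly as −Σ a ln a with the branch ln a(x) = −λA(x) − ln Z. Taking λ real recovers the entropy column of the chart.

```python
import cmath

def lnZ(lam, actions):
    """Log partition function of a finite set of actions (or energies)."""
    return cmath.log(sum(cmath.exp(-lam * a) for a in actions))

def Q_from_Z(lam, actions, h=1e-6):
    """Q = ln Z - lam * d(ln Z)/d(lam), derivative by central difference."""
    d = (lnZ(lam + h, actions) - lnZ(lam - h, actions)) / (2 * h)
    return lnZ(lam, actions) - lam * d

def Q_direct(lam, actions):
    """Q = -sum a ln a, with the branch ln a(x) = -lam*A(x) - ln Z."""
    z = lnZ(lam, actions)
    amps = [cmath.exp(-lam * a - z) for a in actions]
    return -sum(ai * (-lam * a - z) for ai, a in zip(amps, actions))

acts = [0.0, 0.7, 1.9]

# Real lam = beta: Q reduces to the ordinary (real) entropy S.
S = Q_from_Z(2.0, acts)
assert abs(S - Q_direct(2.0, acts)) < 1e-5
assert abs(S.imag) < 1e-6

# Imaginary lam = 1/(i*hbar): Q is genuinely complex, but the identity holds.
lam = 1 / 1j
assert abs(Q_from_Z(lam, acts) - Q_direct(lam, acts)) < 1e-5
```

Since ln Z is analytic in λ away from zeros of Z, the same finite-difference derivative works for real and complex λ alike.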

## 4. The Quantropy of a Free Particle

Let us illustrate these ideas with an example: a free particle. Suppose we have a free particle on a line tracing out some path as time goes by:

$$q: [0, T] \to \mathbb{R}$$

Naively, we would like to compute its partition function by integrating over all such paths:

$$Z = \int e^{-\lambda A(q)}\, Dq$$

Unfortunately, the space of all paths is infinite-dimensional, so Dq is ill-defined: there is no “Lebesgue measure” on an infinite-dimensional vector space. Thus, we start by treating time as discrete, a trick going back to Feynman’s original work [2]. We consider n time intervals of length Δt. We say the position of our particle at the i-th time step is q_{i} ∈ ℝ, and require that the particle keeps a constant velocity v_{i} between the (i − 1)-st and i-th time steps:

$$v_i = \frac{q_i - q_{i-1}}{\Delta t}$$

We require that the particle starts at a fixed position q_{0} = 0, but its final position q_{n} is arbitrary. If we do not “nail down” the particle at some particular time in this way, our path integrals will diverge. Therefore, our space of histories is:

$$X = \mathbb{R}^n$$

We start with the partition function. Naively, it is:

$$Z = \int_X e^{-\lambda A(x)}\, dx$$

Since X is ℝ^{n} with coordinates q_{1},…, q_{n}, an obvious guess for the measure would be:

$$dq_1 \cdots dq_n$$

The problem is that dq_{1}⋯dq_{n} has units of length^{n}. Therefore, to make the measure dimensionless, we introduce a length scale, Δx, and use the measure:

$$dx = \frac{dq_1 \cdots dq_n}{(\Delta x)^n}$$

Now, let us compute the partition function. For starters, we have:

$$Z = \int_{\mathbb{R}^n} e^{-\lambda A(q_1, \dots, q_n)}\, \frac{dq_1 \cdots dq_n}{(\Delta x)^n}$$

where the action of a discretized path is the integral of its kinetic energy:

$$A(q_1, \dots, q_n) = \sum_{i=1}^n \frac{m}{2}\, v_i^2\, \Delta t$$

Since q_{0} is fixed, we can express the positions q_{1},…, q_{n} in terms of the velocities v_{1},…, v_{n}. Since:

$$q_i = (v_1 + \cdots + v_i)\,\Delta t$$

this change of variables has Jacobian (Δt)^{n}, giving:

$$Z = \left(\frac{\Delta t}{\Delta x}\right)^n \int_{\mathbb{R}^n} e^{-\frac{1}{2}\lambda m \Delta t\, (v_1^2 + \cdots + v_n^2)}\, dv_1 \cdots dv_n$$

Now, when α is positive, we have:

$$\int_{-\infty}^{\infty} e^{-\alpha v^2/2}\, dv = \sqrt{\frac{2\pi}{\alpha}}$$

Analytically continuing this formula to complex α = λmΔt, we obtain:

$$Z = \left(\frac{\Delta t}{\Delta x}\right)^n \left(\frac{2\pi}{\lambda m \Delta t}\right)^{n/2} = \left(\frac{2\pi\, \Delta t}{\lambda m (\Delta x)^2}\right)^{n/2}$$

Given this formula for the partition function, we can compute everything we care about: the expected action, free action and quantropy. Let us start with the expected action:

$$\langle A\rangle = -\frac{d}{d\lambda}\ln Z = \frac{n}{2\lambda} = \frac{n}{2}\, i\hslash$$

This formula says that the expected action of our freely moving quantum particle is proportional to n, the number of time steps. Each time step contributes iħ/2 to the expected action. The mass of the particle, the time step Δt and the length scale Δx do not matter at all; they disappear when we take the derivative of the logarithm containing them. Indeed, our action could be any function of this sort:

$$A(x) = \sum_{i=1}^n c_i x_i^2$$

where the c_{i} are positive numbers, and we would still get the same expected action:

$$\langle A\rangle = \frac{n}{2}\, i\hslash$$

We can try to interpret this as follows. In the path integral approach to quantum mechanics, a system can trace out any history it wants. If the space of histories is an n-dimensional vector space, it takes n real numbers to determine a specific history. Each number counts as one “decision”. In the situation we have described, where the action is a positive definite quadratic form, each decision contributes iħ/2 to the expected action.
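The claim that each “decision” contributes iħ/2 can be checked numerically for a single quadratic degree of freedom. Since the integral only converges absolutely when Re λ > 0, the sketch below (our own illustration; the particular λ and grid are arbitrary) evaluates 〈A〉 = ∫ x² e^{−λx²} dx / ∫ e^{−λx²} dx at a complex λ with positive real part and compares it with 1/(2λ); the value at λ = 1/iħ then follows by analytic continuation.

```python
import numpy as np

# One quadratic "decision": action A(x) = x^2, so the expected action
# should be 1/(2*lambda). We check this where the integral converges
# absolutely (Re(lambda) > 0); lambda = 1/(i*hbar) is then reached by
# analytic continuation.
lam = 1.0 + 1.0j
x = np.linspace(-10.0, 10.0, 400_001)
h = x[1] - x[0]

w = np.exp(-lam * x**2)             # complex weight e^{-lambda A(x)}
Z = w.sum() * h                     # partition function (Riemann sum)
expA = (x**2 * w).sum() * h / Z     # "expected action"

assert abs(expA - 1 / (2 * lam)) < 1e-8
```

For one degree of freedom, 1/(2λ) = iħ/2 at λ = 1/iħ, and the result for n independent quadratic decisions is just n times this.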

There are some questions worth answering:

Why is the expected action imaginary? The action A is real. How can its expected value be imaginary? The reason is that we are not taking its expected value with respect to a probability measure, but instead, with respect to a complex-valued measure. Recall that:

$$\langle A\rangle =\frac{{\displaystyle {\int}_{X}A(x){e}^{-\mathrm{\lambda}A(x)}}\mathit{dx}}{{\displaystyle {\int}_{X}{e}^{-\mathrm{\lambda}A(x)}}\mathit{dx}}.$$The action A is real, but λ = 1/iħ is imaginary; so, it is not surprising that this “expected value” is complex-valued.

Why does the expected action diverge as n → ∞? We have discretized time in our calculation. To take the continuum limit, we must let n → ∞, while simultaneously letting Δt → 0 in such a way that nΔt stays constant. Some quantities will converge when we take this limit, but the expected action will not: it will go to infinity. What does this mean?

This phenomenon is similar to how the expected length of the path of a particle undergoing Brownian motion is infinite. In fact, the free quantum particle is just a Wick-rotated version of Brownian motion, where we replace time by imaginary time; so, the analogy is fairly close. The action we are considering now is not exactly analogous to the arc length of a path:

$$\int_0^T \left|\frac{dq}{dt}\right| dt$$

but rather to the integral:

$$\int_0^T {\left|\frac{dq}{dt}\right|}^2 dt.$$

However, both of these quantities diverge when we discretize Brownian motion and then take the continuum limit. The reason is that for Brownian motion, with probability 1, the path of the particle is non-differentiable, with Hausdorff dimension > 1 [6]. We cannot apply probability theory to the quantum situation, but we are seeing that the “typical” path of a quantum free particle has infinite expected action in the continuum limit.
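The divergence of the discretized arc length is easy to see in simulation. The sketch below is illustrative only (the number of trials, the seed and the step counts are arbitrary): it estimates the expected length of a discretized Brownian path on [0, 1] and shows it growing like √n as the time step shrinks.

```python
import numpy as np

def mean_path_length(n, T=1.0, trials=100, seed=0):
    """Average arc length of a random walk approximating Brownian motion
    on [0, T] with n time steps of size dt = T/n."""
    rng = np.random.default_rng(seed)
    dt = T / n
    # Each increment is normally distributed with variance dt.
    steps = rng.normal(0.0, np.sqrt(dt), size=(trials, n))
    return np.abs(steps).sum(axis=1).mean()

# The expected length is sqrt(2*n*T/pi), which diverges as n -> infinity.
coarse = mean_path_length(100)
fine = mean_path_length(10_000)
assert 8 < fine / coarse < 12   # length grows ~ sqrt(n): factor ~10 here
```

Each step has typical size √dt, so the total length scales like n·√(T/n) = √(nT), with no finite limit.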

Why does the expected action of the free particle resemble the expected energy of an ideal gas? For a classical ideal gas with n particles in three-dimensional space, the expected energy is:

$$\langle E\rangle =\frac{3}{2}nT$$

while for a free quantum particle in three-dimensional space, discretized using n time steps, the expected action is:

$$\langle A\rangle =\frac{3}{2}n\, i\hslash$$

Why are the answers so similar?

The answers are similar because of the analogy we are discussing. Just as the action of the free particle is a positive definite quadratic form on ℝ^{n}, so is the energy of the ideal gas. Thus, computing the expected action of the free particle is just like computing the expected energy of the ideal gas, after we make these replacements:

$$\begin{array}{l}E\; \mapsto\; A\\ T\; \mapsto\; i\hslash\end{array}$$

The last remark also means that the formulas for the free action and quantropy of a quantum free particle will be analogous to those for the free energy and entropy of a classical ideal gas, except missing the factor of 3 when we consider a particle on a line. For the free particle on a line, we have seen that:

$$Z = K^{n/2}, \qquad K = \frac{2\pi i\hslash\, \Delta t}{m(\Delta x)^2}$$

so the free action is:

$$\Phi = -\frac{1}{\lambda}\ln Z = -\frac{n}{2}\, i\hslash \ln K$$

The presence of this ln K term is surprising, since the constant K is not part of the usual theory of a free quantum particle. A completely analogous surprise occurs when computing the partition function of a classical ideal gas. The usual textbook answer involves a term of type ln K, where K is proportional to the volume of the box containing the gas divided by the cube of the thermal de Broglie wavelength of the gas molecules [8]. Curiously, the latter quantity involves Planck’s constant, despite the fact that we are considering a classical ideal gas! Indeed, we are forced to introduce a quantity with dimensions of action to make the partition function of the gas dimensionless, because the partition function is an integral of a dimensionless quantity over position-momentum pairs, and dpdq has units of action. Nothing within classical mechanics forces us to choose this quantity to be Planck’s constant; any choice will do. Changing our choice only changes the free energy by an additive constant. Nonetheless, introducing Planck’s constant has the advantage of removing this ambiguity in the free energy of the classical ideal gas, in a way that is retroactively justified by quantum mechanics.

Analogous remarks apply to the length scale Δx in our computation of the free action of a quantum particle. We introduced it only to make the partition function dimensionless. It is mysterious, much as Planck’s constant was mysterious when it first forced its way into thermodynamics. We do not have a theory or experiment that chooses a favored value for this constant. All we can say at present is that it appears naturally when we push the analogy between statistical mechanics and quantum mechanics to its logical conclusion, or, a skeptic might say, to its breaking point.

Finally, the quantropy of the free particle on a line is:

$$Q = \ln Z - \lambda\frac{d}{d\lambda}\ln Z = \frac{n}{2}\left(\ln K + 1\right)$$

where K is the same dimensionless constant as above.

## 5. Conclusions

There are many questions left to tackle. The biggest is: what is the meaning of quantropy? Unfortunately, it seems hard to attack this question directly. It may be easier to work out more examples and develop more of an intuition for this concept. There are, however, some related puzzles worth keeping in mind.

As emphasized by Lisi [5], it is rather peculiar that in the path-integral approach to quantum mechanics, we normalize the complex numbers a(x) associated with paths, so that they integrate to 1:

$$\int_X a(x)\, dx = 1$$

It is also worth keeping in mind another analogy: “coolness as imaginary time”. Here, we treat β as analogous to it/ħ, rather than 1/iħ. This is widely used to convert quantum mechanics problems into statistical mechanics problems by means of Wick rotation, which essentially means studying the unitary group exp(−itH/ħ) by studying the semigroup exp(−βH) and then analytically continuing β to imaginary values. Wick rotation plays an important role in Hawking’s computation of the entropy of a black hole, nicely summarized in his book with Penrose [3]. The precise relation of this other analogy to the one explored here remains unclear and is worth exploring. Note that the quantum Hamiltonian H shows up on both sides of this other analogy.
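Wick rotation can be illustrated concretely for a finite-dimensional system. In the sketch below (our own toy example; the 2 × 2 Hermitian matrix and ħ = 1 are arbitrary choices), both the semigroup exp(−βH) and the unitary group exp(−itH/ħ) come from one spectral decomposition, and continuing β → it/ħ turns the former into the latter.

```python
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])        # a toy Hermitian Hamiltonian

evals, V = np.linalg.eigh(H)       # H = V diag(evals) V^T

def exp_minus(z):
    """exp(-z*H) for any complex z, via the spectral decomposition."""
    return (V * np.exp(-z * evals)) @ V.conj().T

# Semigroup property for real beta: exp(-b1 H) exp(-b2 H) = exp(-(b1+b2) H).
assert np.allclose(exp_minus(0.3) @ exp_minus(0.9), exp_minus(1.2))

# Continuing beta -> i*t/hbar turns the semigroup into the unitary group.
t = 0.7
U = exp_minus(1j * t / hbar)
assert np.allclose(U @ U.conj().T, np.eye(2))   # U is unitary
```

The spectral decomposition makes the analytic continuation in β explicit: only the scalar factors e^{−zλᵢ} change, while the eigenvectors stay fixed.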

## Acknowledgments

We thank Garrett Lisi, Joakim Munkhammar and readers of the Azimuth blog for many helpful suggestions. We thank the Centre for Quantum Technology and a Foundational Questions Institute mini-grant for supporting this research.

## Author Contributions

Both authors contributed to the research and the writing. Both authors have read and approved the final manuscript.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Feynman, R.P. Negative probability. In Quantum Implications: Essays in Honour of David Bohm; Peat, F., Hiley, B., Eds.; Routledge & Kegan Paul Ltd.: London, UK, 1987; pp. 235–248. Available online: http://cds.cern.ch/record/154856/files/pre-27827.pdf (accessed on 5 February 2015).
2. Feynman, R.P.; Hibbs, A.R. Quantum Mechanics and Path Integrals; McGraw-Hill: New York, NY, USA, 1965.
3. Hawking, S.; Penrose, R. The Nature of Space and Time; Princeton University Press: Princeton, NJ, USA, 1996; pp. 46–50.
4. Jaynes, E.T. Probability Theory: The Logic of Science; Cambridge University Press: Cambridge, UK, 2003. Available online: http://omega.albany.edu:8008/JaynesBook.html (accessed on 5 February 2015).
5. Lisi, G. Quantum mechanics from a universal action reservoir. 2006, arXiv:physics/0605068.
6. Mörters, P.; Peres, Y. Brownian Motion; Cambridge University Press: Cambridge, UK, 2010. Available online: http://www.stat.berkeley.edu/~peres/bmbook.pdf (accessed on 5 February 2015).
7. Munkhammar, J. Canonical relational quantum mechanics from information theory. Electron. J. Theor. Phys. 2011, 8, 93–108. Available online: http://www.ejtp.com/articles/ejtpv8i25p93.pdf (accessed on 5 February 2015).
8. Reif, F. Fundamentals of Statistical and Thermal Physics; McGraw-Hill: New York, NY, USA, 1965; pp. 239–248.

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).