## 1. Introduction: The Measurement Problem

Although quantum mechanics is a well-defined theory in terms of providing unambiguous experimental predictions that can be tested, several physicists and philosophers of science find its presentation to be unsatisfactory. At the center of the controversy is the well-known measurement problem. In the quantum theory, states evolve unitarily, unless a measurement takes place. During a measurement, the state suffers a reduction that is not described by a unitary operator. In traditional formulations, this non-unitary evolution is postulated. Such an approach makes the theory complete from a calculational point of view. However, one is left with an odd formulation: a theory that claims our world is quantum in nature, yet its own definition requires referring to a classical world, as measurements are supposed to take place when the system under study interacts with a classical measurement device.

More recently, a more careful inspection of how the interaction with a measurement device takes place has led to a potential solution to the problem. In the decoherence program (for a review and references, see [1,2]), the interaction with a measurement device and, more generally, an environment with a large number of degrees of freedom, leads the quantum system to behave almost as if a reduction had taken place. Essentially, the large number of degrees of freedom of the measurement device and environment “smother” the quantum behavior of the system under study. The evolution of the combined system plus measurement device plus environment is unitary, and everything is ruled by quantum mechanics. However, if one concentrates on the wavefunction of the system under study only, tracing out the environmental degrees of freedom, the evolution appears to be non-unitary and very close to a reduction.
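The suppression of coherences by tracing out an environment can be seen concretely in a toy computation (our own illustration, not a model from the literature; the random environment states are arbitrary choices): a system qubit entangled with $N$ environment qubits, $|\Psi\rangle = a|0\rangle|E_0\rangle + b|1\rangle|E_1\rangle$. Tracing out the environment suppresses the system's off-diagonal ("coherence") term by the overlap $\langle E_1|E_0\rangle$, which shrinks as $N$ grows.

```python
# Toy illustration: a system qubit entangled with N environment qubits.
# Tracing out the environment suppresses the off-diagonal element of the
# reduced density matrix by the product of single-qubit overlaps.
from functools import reduce

import numpy as np

rng = np.random.default_rng(1)
N = 10
a = b = 1 / np.sqrt(2)

def rand_state():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

e0 = [rand_state() for _ in range(N)]   # environment record if system is |0>
e1 = [rand_state() for _ in range(N)]   # environment record if system is |1>
kron_all = lambda vs: reduce(np.kron, vs)

psi = a * np.kron([1, 0], kron_all(e0)) + b * np.kron([0, 1], kron_all(e1))
rho = np.outer(psi, psi.conj())

# Partial trace over the environment: reshape to (2, d, 2, d), trace axes 1, 3.
d = 2 ** N
rho_sys = np.trace(rho.reshape(2, d, 2, d), axis1=1, axis2=3)

overlap = np.prod([np.vdot(x, y) for x, y in zip(e1, e0)])
print(abs(rho_sys[0, 1]))   # equals |a b <E1|E0>|, well below |a b| = 0.5
```

The diagonal entries stay at $|a|^2$ and $|b|^2$ while the coherence is exponentially small in $N$; this is the "smothering" described above, with the total evolution still perfectly unitary.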

The decoherence program, suitably supplemented by an ontology like the many worlds one, has not convinced everyone (see for instance [3,4]) that it provides a complete solution to the measurement problem. Objections can be summarized in two main points:

Since the evolution of the system plus environment plus measuring device is unitary, it could happen that the quantum coherence of the system being studied is recovered. Model calculations show that such “revivals” could happen, although they would take a very long time for most realistic measuring devices. It is therefore clear that the picture that emerges differs slightly from the traditional formulation, where one can never dial back a reduction. A possible answer is that for most real experimental situations, one would have to wait longer than the age of the universe. Related to this is the question of when exactly the measurement takes place. Since all quantum states throughout the evolution are unitarily equivalent, what distinguishes the moment when the measurement takes place? Some have put this as: “in this picture nothing ever happens”. A possible response is that after a certain amount of time, the state of the system is indistinguishable from the result of a reduction “for all practical purposes” (FAPP) [5]. However, from a conceptual point of view, the formulation of a theory should not rely on practical aspects. One could imagine that future scientists could perhaps find more accurate ways of measuring things and be able to distinguish what today is “FAPP” indistinguishable from a reduction.

A related point is that one can define global observables for the system plus measuring device plus environment [3,6]. The expectation value of one of these observables takes different values depending on whether a collapse takes place or not. That could in principle allow one to distinguish the FAPP picture of decoherence from a real collapse. From the FAPP perspective, the answer is that these types of observables are very difficult to measure, since this requires measuring the many degrees of freedom of the environment. However, the mere possibility of measuring these observables is not consistent with a realistic description. This point has recently been highlighted by Frauchiger and Renner [7], who show that quantum mechanics is inconsistent with single-world interpretations.

The “and/or” problem [8]: even though the interaction with the environment produces a reduced density matrix for the system that has an approximately diagonal form, the density matrix, like all quantum states, still represents a superposition of coexisting alternatives. Why is one to interpret it as exclusive alternatives with given probabilities? When is one to transition from an improper to a proper mixture, in d’Espagnat’s terminology [3]?

The Montevideo interpretation [9] seeks to address these two criticisms. In the spirit of the decoherence program, it examines more finely what happens in a measurement and how the theory is formulated. It also brings into play the role of gravity in physics. It may be surprising that gravity has something to do with the formulation of quantum mechanics, as one can imagine many systems where quantum effects are very important but gravity seems to play no role. However, if one believes in the unity of physics, it should not be surprising that at some level, one needs to include all of physics to make certain situations work. More importantly, gravity imposes important limitations on what can be done physically. Non-gravitational physics allows one to consider in principle arbitrarily large amounts of energy in a confined region, which is clearly not feasible physically if one includes gravity. This in particular places limitations on the accuracy with which we can measure any physical quantity [10,11]. Gravity also imposes limitations on our notions of space and time, which are absolute in non-gravitational physics. In particular, one has to construct measurements of space and time using real physical (and in this context, really quantum) objects, as no externally-defined space-time is pre-existent. This forces subtle changes in how theories are formulated. In particular, unitary theories do not appear to behave entirely unitarily, since the notion of unitary evolution is defined with respect to a perfect classical time that cannot be approximated with arbitrary accuracy by a real (quantum) clock [12,13]. Notice that the role of gravity in this approach is different from that in Penrose’s [14]. Here, the emphasis is on limitations to clocks due to the intrinsically relational nature of time in gravity, whereas in Penrose’s approach, differences in the flow of time at different places form the basis of the mechanism.

These two new elements that the consideration of gravity brings to physics will be key in addressing the two objections to decoherence outlined above. Since the evolution of systems is not perfectly unitary, waiting will not help to revive coherence in quantum systems: far from being restored, coherence will be progressively further lost. The limitations on measurement will impose fundamental constraints on future physicists in developing means of distinguishing the quantum states produced by decoherence from those produced by a reduction. They will also make it impossible to measure global observables that may tell us whether a reduction took place or not. Notice that this is not FAPP: the limitations are fundamental. It is the theories of physics that tell us that the states produced by decoherence are indistinguishable from those produced by a reduction. There is therefore a natural definition of when “something happens”: a measurement takes place when the state produced by decoherence is indistinguishable from a reduction according to the laws of physics [15]. No invocation of an external observer is needed. Measurements (more generally, events) will be plentiful, happening all the time around the universe as quantum systems interact with the environment, irrespective of whether an experimenter or measuring device is present. The resulting quantum theory can therefore be formulated in purely quantum terms, without invoking a classical world. It also naturally leads to a new ontology consisting of quantum systems, states and events, all objectively defined, in terms of which to build the world. One could ask: were systems, states and events not already present in the Copenhagen interpretation? Could we not have used them already to build the world? Not entirely, since the definition of event used there required the existence of a classical external world to begin with. It therefore cannot logically be used as a basis for the world.

In this short review, we would like to outline some results supporting the above point of view. In the next section, we discuss how to use real clocks to describe physical systems where no external time is available. We will show that the evolution of the states presents a fundamental loss of coherence. Notice that we are not modifying quantum mechanics, just pointing out that we cannot entirely access the underlying usual unitary theory when we describe it in terms of real clocks (and measuring rods for space, if one is studying quantum field theories). In the following section, we discuss how fundamental limitations of measurement prevent us from distinguishing a state produced by a reduction from a state produced by decoherence. Obviously, given the complexities of the decoherence process, we cannot show in general that this is the case. We will present a modification of a model of decoherence introduced by Zurek [16] to exhibit the point we are making. The next section discusses some philosophical implications of having a realist interpretation of quantum mechanics like the one proposed. We end with a summary.

## 2. Quantum Mechanics without an External Time

When one considers a system without external time, as when one studies cosmology, or model systems like a set of particles with fixed angular and linear momentum assuming no knowledge of external clocks (see [17] for references), one finds that the Hamiltonian does not generate evolution, but becomes a constraint that can be written generically as $H=0$. One is left with what is called a “frozen formalism” (see [18,19] and the references therein). The values of the canonical coordinates at a given time, $q\left(t\right),p\left(t\right)$, are not observable, since one does not have access to $t$. Physical quantities have to have vanishing Poisson brackets with the constraint; they are what is known as “Dirac observables”, and the canonical coordinates are not. The resulting picture is very different from usual physics, and it is difficult to extract physical predictions from it, since the observables are all constants of the motion, having vanishing Poisson brackets with the Hamiltonian. Several possible solutions have been proposed to deal with the situation, although no general consensus exists. We will not summarize all the proposals here, in part because we will not need most of them and for reasons of space; for other approaches, the review by Kuchař [19] is very complete. We will focus on two proposals that, we claim, provide a satisfactory solution to the treatment of systems without external time when combined with each other.

The first proposal we call “evolving Dirac observables”. It has appeared in various guises over the years, but it has been emphasized by Rovelli [20]. The idea is to consider Dirac observables that depend on a parameter, $O\left(t\right)$. These are true Dirac observables: they have vanishing Poisson brackets with the constraint, but their value is not well defined until one specifies the value of the parameter. Notice that $t$ is just a parameter; it does not have to have any connection with “time”. The definition requires that when the parameter takes the value of one of the canonical variables, the Dirac observable takes the value of another canonical variable, for example, $Q(t={q}_{1})={q}_{2}$. This in part justifies why it is a Dirac observable. Neither ${q}_{1}$ nor ${q}_{2}$ can be observed, since we do not have an external time, but the value ${q}_{2}$ takes when ${q}_{1}$ takes a given value is a relation that can be defined without referring to an external time, i.e., it is invariant information. As an example, let us consider the relativistic particle in one dimension. We parameterize it, including the energy as one of the canonical variables, ${p}_{0}$. One then has a constraint $\varphi ={p}_{0}^{2}-{p}^{2}-{m}^{2}$. One can easily construct two independent Dirac observables: $p$ and $X\equiv q-p\,{q}^{0}/\sqrt{{p}^{2}+{m}^{2}}$, and verify that they have vanishing Poisson brackets with the constraint. An evolving constant of the motion could be,

$$Q\left(t,{q}^{a},{p}_{a}\right)=X+\frac{p\,t}{\sqrt{{p}^{2}+{m}^{2}}}=q+\frac{p\left(t-{q}^{0}\right)}{\sqrt{{p}^{2}+{m}^{2}}},\tag{1}$$

and one would have that when the parameter takes the value ${q}^{0}$, the evolving constant $Q\left(t={q}^{0},{q}^{a},{p}_{a}\right)=q$ takes the value of one of the canonical variables. Therefore, one now has an evolution for the system: the one in terms of the parameter $t$. However, problems arise when one tries to quantize things. There, variables like ${q}_{1}$ become quantum operators, but the parameter remains unquantized. How does one then make sense of $t={q}_{1}$ at the quantum level, when the left member is a classical quantity and the right a quantum operator (particularly when the quantum operator is not a Dirac observable and therefore not defined on the physical space of states of the theory)?
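The Poisson brackets in the relativistic-particle example can be checked directly. The sketch below (our own verification with sympy; the branch choice $p_0=-\sqrt{p^2+m^2}$ is our assumption, dictated by the sign convention in $X$) confirms that $p$ commutes with the constraint identically, while $X$ does so weakly, i.e. on the constraint surface:

```python
# Verify that p and X = q - p q0/sqrt(p^2+m^2) have (weakly) vanishing
# Poisson brackets with the constraint phi = p0^2 - p^2 - m^2.
import sympy as sp

q0, p0, q, p, m = sp.symbols('q0 p0 q p m', real=True)

def poisson(A, B):
    # canonical pairs (q0, p0) and (q, p)
    return (sp.diff(A, q0) * sp.diff(B, p0) - sp.diff(A, p0) * sp.diff(B, q0)
            + sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q))

phi = p0**2 - p**2 - m**2
X = q - p * q0 / sp.sqrt(p**2 + m**2)

print(sp.simplify(poisson(p, phi)))        # 0: vanishes identically
bracket = sp.simplify(poisson(X, phi))     # -2 p (1 + p0/sqrt(p^2+m^2))
# Weakly vanishing: zero on the constraint branch p0 = -sqrt(p^2+m^2)
print(sp.simplify(bracket.subs(p0, -sp.sqrt(p**2 + m**2))))   # 0
```

Weak vanishing is all a Dirac observable requires: the bracket need only vanish once the constraint holds.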

The second approach was proposed by Page and Wootters [21]. They advocate quantizing systems without time by promoting all canonical variables to quantum operators. Then, one chooses one of them as a “clock” and asks relational questions between the other canonical variables and the clock. Conditional probabilities are well defined quantum mechanically. Therefore, without invoking a classical external clock, one chooses a physical variable as a clock, and to study the evolution of probabilities, one asks relational questions: what is the expectation value of variable ${q}_{2}$ when variable ${q}_{1}$ (which we chose as clock) takes the value 3:30 p.m.? Again, because relational information does not require the use of external clocks, it has an invariant character, and one can ask physical questions about it. However, trouble arises when one actually tries to compute the conditional probabilities. Quantum probabilities require computing expectation values with quantum states. In these theories, since we argued that the Hamiltonian is a constraint $H=0$, at the quantum level one must have $\widehat{H}|\Psi \rangle =0$; only states that are annihilated by the constraint are permissible. However, such a space of states is not invariant under multiplication by one of the canonical variables, i.e., $\widehat{H}{q}_{1}|\Psi \rangle \ne 0$. Therefore, one cannot compute the expectation values required for the conditional probabilities. One can try to force a calculation pretending that one remains in the space, but then one gets incorrect results. Studies of model systems of a few particles have shown that one does not obtain the right results for the propagators, for example [19].

Our proposal [13] is to combine the two approaches we have just outlined: one computes conditional probabilities of evolving constants of the motion. Therefore, one chooses an evolving constant of the motion that will be the “clock”, $T\left(t\right)$, then chooses a variable one wishes to study, $O\left(t\right)$, and computes,

$$\mathcal{P}\left(O\in \left[{O}_{0}-{\Delta}_{1},{O}_{0}+{\Delta}_{1}\right]\,\big|\,T\in \left[{T}_{0}-{\Delta}_{2},{T}_{0}+{\Delta}_{2}\right]\right)=\lim_{\tau \to \infty}\frac{{\int}_{-\tau}^{\tau}dt\,\mathrm{Tr}\left({P}_{{O}_{0}}^{{\Delta}_{1}}\left(t\right)\,{P}_{{T}_{0}}^{{\Delta}_{2}}\left(t\right)\,\rho \,{P}_{{T}_{0}}^{{\Delta}_{2}}\left(t\right)\right)}{{\int}_{-\tau}^{\tau}dt\,\mathrm{Tr}\left({P}_{{T}_{0}}^{{\Delta}_{2}}\left(t\right)\,\rho \right)},\tag{2}$$

where we are computing the conditional probability that the variable $O$ takes a value within a range of width $2{\Delta}_{1}$ around the value ${O}_{0}$ when the clock variable takes a value within a range of width $2{\Delta}_{2}$ around the value ${T}_{0}$ (we are assuming the variables to have continuous spectra, hence the need to ask about ranges of values) on a quantum state described by the density matrix $\rho $. The quantity ${P}_{{O}_{0}}^{{\Delta}_{1}}$ is the projector on the eigenspace associated with the eigenvalue ${O}_{0}$ of the operator $\widehat{O}$, and similarly for ${P}_{{T}_{0}}^{{\Delta}_{2}}$. Notice that the expression does not require assigning a value to the classical parameter $t$, since it is integrated over all possible values.
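The structure of this conditional probability can be made concrete in a small numerical sketch (entirely our own toy, not the two-particle model of [13]; the Hamiltonian, projectors and state are arbitrary choices): a "clock" qubit and a "system" qubit, with the $t$-integrals approximated by sums over one common period of the evolution and the projectors evolved to the Heisenberg picture.

```python
# Numerical sketch of the conditional probability of evolving constants:
# P(O|T) = [sum_t Tr(P_O(t) P_T(t) rho P_T(t))] / [sum_t Tr(P_T(t) rho)]
# for a toy "clock" qubit + "system" qubit.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H = np.kron(sx, I2) + 0.5 * np.kron(I2, sx)   # H_clock + H_system
P_T = np.kron(np.diag([1.0, 0.0]), I2)        # projector: "clock reads T0"
P_O = np.kron(I2, np.diag([1.0, 0.0]))        # projector: "system reads O0"

psi = np.kron([1.0, 0.0], [1.0, 0.0]).astype(complex)
rho = np.outer(psi, psi.conj())

num = den = 0.0
for t in np.linspace(0.0, 4 * np.pi, 1000):   # one common period stands in
    U = expm(-1j * H * t)                     # for the infinite t-integral
    PT_t = U.conj().T @ P_T @ U               # Heisenberg-picture projectors
    PO_t = U.conj().T @ P_O @ U
    num += np.trace(PO_t @ PT_t @ rho @ PT_t).real
    den += np.trace(PT_t @ rho).real

print(num / den)   # a well-defined probability in [0, 1]
```

Since $P_O \le \mathbb{1}$ and $P_T \rho P_T \succeq 0$ term by term, the ratio is automatically a probability, and no value is ever assigned to the parameter $t$ itself.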

We have shown [13], using a model system of two free particles in which one of them serves as the “clock”, that this expression, provided one makes judicious assumptions about the clock, indeed reproduces the correct usual propagator to leading order, thus avoiding the problems of the Page and Wootters proposal.

The above expression in terms of conditional probabilities may look unfamiliar. It is better to rewrite it in terms of an effective density matrix. Then, it looks exactly like the ordinary definition of probability in quantum mechanics,

$$\mathcal{P}\left(O|T\right)=\frac{\mathrm{Tr}\left({P}_{{O}_{0}}\left(0\right)\,{\rho}_{\mathrm{eff}}\left(T\right)\right)}{\mathrm{Tr}\left({\rho}_{\mathrm{eff}}\left(T\right)\right)},\tag{3}$$

where on the left-hand side, we shortened the notation, omitting mention of the intervals, but they are still there. The effective density matrix is defined as,

$${\rho}_{\mathrm{eff}}\left(T\right)=\int dt\,{\mathcal{P}}_{t}\left(T\right)\,{U}_{s}\left(t\right)\,{\rho}_{s}\,{U}_{s}^{\dagger}\left(t\right),\tag{4}$$

where we have assumed that the density matrix of the total system is a direct product of that of the subsystem we use as clock, ${\rho}_{\mathrm{cl}}$, and that of the subsystem under study, ${\rho}_{s}$, and a similar assumption holds for their evolution operators $U$. The probability,

$${\mathcal{P}}_{t}\left(T\right)=\frac{\mathrm{Tr}\left({P}_{T}\left(0\right)\,{U}_{\mathrm{cl}}\left(t\right)\,{\rho}_{\mathrm{cl}}\,{U}_{\mathrm{cl}}^{\dagger}\left(t\right)\right)}{\int dt^{\prime}\,\mathrm{Tr}\left({P}_{T}\left(0\right)\,{U}_{\mathrm{cl}}\left(t^{\prime}\right)\,{\rho}_{\mathrm{cl}}\,{U}_{\mathrm{cl}}^{\dagger}\left(t^{\prime}\right)\right)},\tag{5}$$

is an unobservable quantity, since it represents the probability that the variable $\widehat{T}$ takes a given value $T$ when the unobservable parameter takes the value $t$.

The introduction of the effective density matrix clearly illustrates what happens when one describes ordinary quantum mechanics in terms of a clock variable that is a real observable, not a classical parameter. Examining Equation (4), we see on the right-hand side the ordinary density matrix evolving unitarily as a function of the unobservable parameter $t$. If the probability ${\mathcal{P}}_{t}\left(T\right)$ were a Dirac delta, then the effective density matrix would also evolve unitarily. That would mean that the real clock variable tracks the unobservable parameter $t$ perfectly. However, no physical variable can do that, so there will always be a dispersion, and the probability ${\mathcal{P}}_{t}\left(T\right)$ will have non-vanishing support over a range of $T$. What this is telling us is that the effective density matrix for the system at a time $T$ will correspond to a superposition of density matrices at different values of the unobservable parameter $t$. The resulting evolution is therefore non-unitary. We see clearly the origin of the non-unitarity: the real clock variable cannot keep track of the unitary evolution of quantum mechanics.

In fact, if we assume that the clock variable tracks the unobservable parameter almost perfectly by writing:

$${\mathcal{P}}_{t}\left(T\right)=\delta \left(t-T\right)+b\left(T\right)\,{\delta}^{\prime \prime}\left(t-T\right)+\cdots \tag{6}$$

(a term proportional to ${\delta}^{\prime}\left(t-T\right)$ only adds an unobservable shift), one can show that the evolution implied by (4) is generated by a modified Schrödinger equation,

$$\frac{\partial \rho}{\partial T}=-i\left[H,\rho \right]-\sigma \left(T\right)\left[H,\left[H,\rho \right]\right],\tag{7}$$

where $\sigma \left(T\right)=db\left(T\right)/dT$ is the rate of spread of the probability ${\mathcal{P}}_{t}\left(T\right)$, $\rho ={\rho}_{\mathrm{eff}}\left(T\right)$, and we work in units where $\hbar =1$.

Therefore, we clearly see that when describing quantum mechanics in terms of a real clock variable associated with a quantum observable rather than with a classical parameter, the system loses unitarity, and the loss grows progressively worse the longer one waits.
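A direct numerical integration illustrates the decoherence generated by the double-commutator term (a sketch under the assumption of a constant $\sigma$, chosen artificially large so the effect is visible on short times; physical values would be Planck-scale small):

```python
# Euler-integrate d rho/dT = -i[H, rho] - sigma [H, [H, rho]] (hbar = 1)
# for a qubit in an equal superposition of energy eigenstates.  The
# off-diagonal element decays like exp(-sigma * omega^2 * T), while the
# diagonal populations are untouched.
import numpy as np

H = np.diag([0.0, 1.0])                              # omega_01 = 1
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)

def comm(A, B):
    return A @ B - B @ A

sigma, dT, steps = 0.05, 1e-3, 100_000               # evolve to T = 100
for _ in range(steps):
    rho = rho + dT * (-1j * comm(H, rho) - sigma * comm(H, comm(H, rho)))

print(abs(rho[0, 1]))                    # tiny: coherence has decayed away
print(rho[0, 0].real, rho[1, 1].real)    # populations still 0.5, 0.5
```

Since $H$ is diagonal in the energy basis, the double commutator only damps the off-diagonal elements: the equation is of Lindblad form and conserves probability while destroying coherence.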

The existence of the effect we are discussing is not controversial. In fact, one can make it as large as one wishes simply by choosing a bad clock. Bonifacio et al. [22,23,24] have reinterpreted certain experiments with Rabi oscillations as being described with an inaccurate clock, and indeed, experimentally, one sees the loss of coherence described above. More recently, it has been demonstrated with entangled photons as well [25].

However, the question still remains: can this effect be made arbitrarily small by a choice of the clock variable? If one takes into account gravity, the answer is negative. Using non-gravitational quantum physics, Salecker and Wigner [10,11] examined the question of how accurate a clock can be. The answer is that the uncertainty in the measurement of time is proportional to the square root of the length of time one desires to measure and inversely proportional to the square root of the mass of the clock. Therefore, to make a clock more accurate, one needs to make it more massive. However, if one takes gravity into account, there clearly is a limitation on how massive a clock can be: at some point, it turns into a black hole. Several phenomenological models of this were proposed by various authors, and they all agree that the ultimate accuracy of a clock goes as some fractional power of the time to be measured times a fractional power of Planck’s time [26,27,28,29,30]. Different arguments lead to slightly different powers, but the result is always that the longer one wishes to measure time, the more inaccurate the clocks become. For instance, in the phenomenological model of Ng and Van Dam [26,27,28,29,30], one has that $\delta T\sim {T}^{1/3}{T}_{\mathrm{Planck}}^{2/3}$. Substituting that in the modified Schrödinger equation, its solution can be found in closed form in an energy eigenbasis,

$$\rho {\left(T\right)}_{nm}=\rho {\left(0\right)}_{nm}\,{e}^{-i{\omega}_{nm}T}\,{e}^{-{\omega}_{nm}^{2}{T}_{\mathrm{Planck}}^{4/3}{T}^{2/3}},\tag{8}$$

where ${\omega}_{nm}$ is the Bohr frequency between the two energy eigenstates $n$ and $m$. We see that the off-diagonal terms of the density matrix die off exponentially. Pure states evolve into mixed states.