The Montevideo Interpretation: How the inclusion of a Quantum Gravitational Notion of Time Solves the Measurement Problem

We review the Montevideo Interpretation of quantum mechanics, which is based on the use of real clocks to describe physics, using the framework recently introduced by Höhn, Smith and Lock to treat the problem of time in generally covariant systems. The use of the new formalism makes the whole construction more accessible to readers without familiarity with totally constrained systems. We find that, as in the original formulation, a fundamental mechanism of decoherence emerges that allows one to supplement ordinary environmental decoherence and avoid its criticisms. Recent results on quantum complexity provide additional support to the type of global protocols used to prove that within ordinary -- unitary -- quantum mechanics no definite event -- an outcome to which a probability can be associated -- occurs: states that start in a coherent superposition of possible outcomes always remain in a superposition. We show that, if one takes into account fundamental, inescapable uncertainties in measuring lengths and time intervals due to general relativity and quantum mechanics, the global protocols mentioned above no longer allow one to distinguish whether the state is in a superposition or not. One is left with a formulation of quantum mechanics purely defined in quantum mechanical terms, without any reference to the classical world, and with an intrinsic operational definition of quantum events that does not need external observers.
I. INTRODUCTION
The act of measurement plays a role in quantum mechanics unlike any it had in classical theories. Physical predictions change after a measurement. We take the point of view that our understanding of quantum mechanics is not complete until a well-defined understanding of the measurement process is in place. We want a complete characterization of the process ρ → ρ_event, that is, of how to go from a coherent superposition to a state that can be attributed definite outcomes. Environmental decoherence [1] is a proposal that comes close, identifying instances in which the transition occurs for all practical purposes. There are additional proposals like quantum Darwinism [2] and coarse-grained measurements [3].
However, solutions based on decoherence are incomplete, since they give rise to a subjective and ill-defined notion of event. There are protocols showing that the 'decoherence solution' is only apparent. We are not the only ones levying these criticisms, and much work has been devoted to modifications and/or re-interpretations of the theory to clarify the situation. Several involve compromises: for example, giving up the possibility of a single-world description as a way of keeping the formalism of quantum theory intact [4], losing the notion of objectivity [5], accepting makeshift modifications to the theory in order to restore a single-world picture [6], or even combinations of all of the above [7]. Here we would like to present a realist description, with minimal premises and a well-defined notion of event.
The Montevideo Interpretation of quantum mechanics [8] is in reality an extension of quantum mechanics obtained by taking into account that time must be a physical observable and not a classical parameter. It takes seriously the notion that everything must be quantum in a quantum universe. It does not invoke a classical world in order to define the quantum theory, like the Copenhagen interpretation does. Considering that time is a physical observable is even more natural when one considers gravity, as it is well known that in generally covariant theories the time coordinate is just a gauge parameter and clocks must be constructed from physical quantities.
In this paper we succinctly present the main results that lead to the Montevideo Interpretation in the light of recent advances in the understanding of problems related to it, pointing in each case to references where the analyses are more extensive.
In section II we introduce a notion of time for generally covariant systems. Section III discusses fundamental limits to space-time measurements based on general relativity and quantum mechanics. Section IV discusses how quantum measurements are possible when real clocks are considered. Section V discusses the objective notion of event we consider. We end with conclusions.

II. A QUANTUM NOTION OF TIME IN GENERALLY COVARIANT SYSTEMS
There has been recent progress in understanding with precision how to recover the usual quantum descriptions, that is, the Schrödinger or Heisenberg pictures, in totally covariant systems like general relativity [9]. That work provides a systematic analysis that cannot be ignored in any attempt to advance the understanding of the problem of time.
In their main paper about the problem of time Höhn, Smith and Lock [9] have shown how it is possible to treat consistently three historical approaches to this problem. We will review here the first two approaches at a classical level to focus in what follows on the nature of the time parameter used.
One wishes to describe a totally constrained system with a constraint C_H, whose classical dynamics is defined by the gauge transformations generated by such a constraint. A first approach can be made in terms of relational physical observables F_{f,T}(τ) [10, 11]. They are Dirac observables, that is, they commute with the constraint(s), and coincide with the function f of the kinematical variables when the variable T, also belonging to the kinematical space, is equal to the parameter τ. A second approach is to fix a gauge (a choice of time in the classical theory) and deparameterize. For that purpose a canonical transformation is introduced in the classical system that gives rise to a time variable T and its canonically conjugate momentum P_T, and relational Dirac observables F_{q_i,T}(τ), F_{p_i,T}(τ) associated with the remaining original canonical variables. The gauge is fixed, for instance T = 0, and one solves for P_T, which eliminates the gauge dependence. The F_{q_i,0}(τ) now obey canonical equations that describe their evolution in terms of a true Hamiltonian H_true resulting from the deparameterization, and satisfy the initial conditions F_{q_i,0}(0) = q_i^0 and F_{p_i,0}(0) = p_i^0 at T = 0. One then recovers the classical evolution in terms of the classical clock variable τ. The third approach, that of Page and Wootters [12], is purely quantum and cannot be treated classically.
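As a concrete illustration of these two approaches (our own textbook example of the parametrized free particle, not taken from [9]), the relational observable and its deparameterized evolution can be written explicitly:

```latex
% Parametrized free particle: kinematical variables (q, p, T, P_T),
% constraint  C_H = P_T + p^2/(2m) \approx 0.
%
% Relational Dirac observable: the value of q when the clock variable T
% equals the parameter \tau,
F_{q,T}(\tau) = q + \frac{p}{m}\,(\tau - T), \qquad F_{p,T}(\tau) = p .
% It commutes with the constraint,
\{F_{q,T}(\tau),\, C_H\} = \frac{p}{m} - \frac{p}{m} = 0 ,
% so it is gauge invariant.  Fixing the gauge T = 0 and solving C_H = 0
% for the clock momentum deparameterizes the system:
P_T = -\frac{p^2}{2m}, \qquad H_{\rm true} = \frac{p^2}{2m},
% and the relational observables obey the canonical equations
\frac{d F_{q,0}(\tau)}{d\tau} = \{F_{q,0}(\tau),\, H_{\rm true}\} = \frac{p}{m},
\qquad F_{q,0}(0) = q^0 .
```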
In order to quantize these approaches one needs to introduce in each case a time operator. Time operators are not defined in the physical space of states. The latter are annihilated by the constraints and therefore by the total Hamiltonian (which is a linear combination of the constraints in a totally constrained theory). Physical operators therefore have to commute with the total Hamiltonian and are thus constants of the motion. Even at the kinematical level (where states need not be annihilated by the total Hamiltonian and the constraints) it is problematic to quantize a time variable. In fact, as Pauli observed, it is not possible to define a self-adjoint time operator conjugate to a bounded-below Hamiltonian. Later on, this result was extended by Unruh and Wald [13], who showed that, if the Hamiltonian is bounded below, there is no self-adjoint operator that can only run forward in time. In other words, any realistic clock that runs forward in time has a non-vanishing probability of running backwards. Although there are no self-adjoint operators associated with a "time" variable that satisfy the conditions of a good clock, one can use generalized measurements in terms of positive operator-valued measures (POVMs) and effect operators in order to describe clock readings. This was the strategy followed by Höhn, Smith and Lock [9]. However, these operators are necessarily defined in the kinematical space and do not correspond to any Dirac observable in the physical space. One may wonder if operators defined in the kinematical space are observable: after all, any physical system in a background-independent theory should obey the physical laws derived from the constraint. Let us assume that the clock system is an element of our Universe and therefore satisfies the quantum and general relativistic laws; for instance, it is an atomic clock.
The observable time for this kind of clock cannot be described by kinematical operators, which would completely ignore the Einstein equations. Of course, if we come back to the physical Hilbert space we have to face the problem of time again, given the fact that Dirac observables are constants of the motion.
One could ask oneself if anything is gained with this construction. The answer is clearly positive. The most important result is that, following any of the mentioned techniques, i.e., introducing relational physical observables, deparameterization or the Page-Wootters formalism, one ends up with a standard formulation of quantum mechanics in totally covariant systems in the Schrödinger or Heisenberg picture. As in non-relativistic quantum mechanics, time is treated differently from any other variable. In ordinary quantum mechanics it is a classical variable, and in totally constrained systems a kinematical variable. In a totally constrained and quantum mechanical world such variables are idealizations that lead to the simplest description of the evolution in terms of unitary evolution operators. But the measurement of time is always done using a real clock subject to gravitational and quantum laws. Our observation, which we here re-derive in the language of Höhn, Smith and Lock, is that the evolution described in terms of a real clock satisfies a modified master equation.
We shall consider a physical situation described by a set of variables q, p belonging to a constrained system that includes the physical system we wish to study and a "clock" that will be used to keep track of the passage of time, which we will assume takes continuous values. A simple example of a clock variable could be the position of a certain free particle. The evolving constant of the motion associated with it is F_{T,t_c}(τ). As before, T and t_c are kinematical quantities and τ a number. It corresponds to a relational physical observable that takes the value of T when the parameter τ takes the value of the auxiliary kinematical time t_c, which will, in the quantum theory, be modeled by a POVM. We also identify the physical variables that we wish to study with the relational physical observable F_{O,t_c}(τ) associated with the kinematical variable O. We then proceed to quantize the system by promoting the observable and the clock to self-adjoint quantum operators F̂ acting on the kinematical Hilbert space, while t_c is taken as an auxiliary variable belonging to the kinematical space (or represented by a POVM in the quantum theory). We quantize the relational physical observables following the procedure discussed in [9].
As we did in reference [14], we call T the eigenvalues of the clock relational observable F̂_{T,t_c}(τ) and O the eigenvalues of the relational observable F̂_{O,t_c}(τ), and we assume that both have continuous spectra. (We are using the notation of Höhn, Smith and Lock; readers who have followed our previous papers might notice that there we referred to this operator on the physical space as T̂(τ).) They satisfy the eigenvalue equations in the kinematical space,

$$\hat F_{T,t_c}(\tau)\,|T,k,\tau\rangle = T\,|T,k,\tau\rangle,$$

where the states |T,k,τ⟩ are labeled by the eigenvalues of F̂_{T,t_c} and by the eigenvalues, collectively denoted k, of the other parameterized Dirac observables that form a complete set with it. The observables associated with T and k depend on τ; that is why we include it among the labels of the state. We observe that this operator may be written in terms of Dirac observables and τ, and we "normalize" the eigenstates in the physical state space, where τ becomes a non-observable parameter. The projector in the physical space corresponding to finding the time variable T in the interval [T_0 − ∆T, T_0 + ∆T] for a given τ is

$$P_{T_0}(\tau)=\sum_k \int_{T_0-\Delta T}^{T_0+\Delta T} dT\; |T,k,\tau\rangle\langle T,k,\tau|,$$

where we have assumed that T has a continuous spectrum while keeping a sum over k (a continuous spectrum for the remaining observables is not really necessary; that is why we kept a sum over them rather than an integral). Similarly, the projector for O is

$$P_{O_0}(\tau)=\sum_j \int_{O_0-\Delta O}^{O_0+\Delta O} dO\; |O,j,\tau\rangle\langle O,j,\tau|.$$

In order to determine the simultaneous probability of finding O in the interval ∆O around O_0 and the clock around time T_0, we start by assuming that the measurement occurs when the parameter takes the value τ,

$$P(O_0,T_0|\tau)={\rm Tr}\left(P_{O_0}(\tau)\,P_{T_0}(\tau)\,\rho\,P_{T_0}(\tau)\right),$$

where ρ is a density matrix in the physical space of states. We can compute in an analogous way the probability Tr(P_{T_0}(τ)ρ) of finding T in the given interval. The conditional probability of finding the observable in its interval when the clock is in its own interval is then given by the ratio of the simultaneous probability to the probability of finding T in the given interval.
Taking into account that we have complete ignorance of the value of τ, and that the above equations depend on τ, we need to average these expressions over all the possible values of τ. To do that we consider all the values of τ leading to a non-vanishing value of Tr(P_{T_0}(τ)ρ); let us call τ^1_{T_0} the minimum of these values and τ^2_{T_0} the maximum. Then, by taking the average over the possible values of the unobservable parameter and simplifying the factors 1/∆τ_{T_0}, with ∆τ_{T_0} = τ^2_{T_0} − τ^1_{T_0}, in numerator and denominator, we get for the conditional probability

$$P\left(O\in[O_0-\Delta O,O_0+\Delta O]\,\big|\,T\in[T_0-\Delta T,T_0+\Delta T]\right)=\frac{\int d\tau\,{\rm Tr}\left(P_{O_0}(\tau)P_{T_0}(\tau)\,\rho\,P_{T_0}(\tau)\right)}{\int d\tau\,{\rm Tr}\left(P_{T_0}(\tau)\,\rho\right)}.$$

We integrate numerator and denominator separately since the numerator, with its integral, corresponds, up to a factor 1/∆τ_{T_0}, to the joint probability of O_0 and T_0, and the integral in the denominator, up to the same factor, to the probability of T_0. The reason for this averaging is that we do not know for which value of the kinematical time τ the clock took the value T_0. Notice that something similar occurs if one deparameterizes, because in that case the parameter τ would correspond to a classical variable. If we could have a perfect clock, such that to each τ corresponded only one T, the expression would reduce to that of ordinary quantum mechanics. This behavior holds even in simple theories like ordinary quantum mechanics; it does not depend on our working with a totally constrained theory. It is enough to take into account that the Schrödinger time is an idealized variable about which one does not have perfect information.
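The damping produced by this averaging over the unknown parameter τ can be checked with a toy numerical sketch (ours, not from the references): a two-level system whose ideal-time probability oscillates as cos²(ωτ/2), read with a clock whose response is modeled as a Gaussian of width σ. The interference contrast is reduced by the smearing, in agreement with the analytic Gaussian average.

```python
import numpy as np

def ideal_prob(tau, omega=1.0):
    # Probability of the outcome at the ideal (unobservable) time tau
    return np.cos(omega * tau / 2) ** 2

def real_clock_prob(T, sigma, omega=1.0):
    # Average over the unknown ideal time tau, weighted by a model clock
    # response: a normalized Gaussian of width sigma centred at the reading T.
    taus = np.linspace(T - 6 * sigma, T + 6 * sigma, 4001)
    weights = np.exp(-((T - taus) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum()
    return np.sum(weights * ideal_prob(taus, omega))

omega, sigma, T = 1.0, 0.8, 2.0
p_real = real_clock_prob(T, sigma, omega)
# Analytic Gaussian-smeared value: 1/2 + (1/2) cos(omega T) exp(-omega^2 sigma^2 / 2);
# the oscillation amplitude is damped, i.e. coherence is lost.
p_analytic = 0.5 + 0.5 * np.cos(omega * T) * np.exp(-(omega * sigma) ** 2 / 2)
print(p_real, p_analytic)
```

A perfect clock (σ → 0) recovers the ideal probability; the broader the clock response, the closer the prediction gets to the incoherent value 1/2.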
Let us assume that we have chosen a physical clock whose interactions with the system of interest are negligible and which behaves semiclassically, with small quantum fluctuations. Typically it will be an atomic clock based on a periodic system with small deviations from the ideal, non-observable time τ. Therefore, in a first approximation one expects to recover the ordinary Schrödinger evolution plus small corrections. Let us then assume that we can divide the density matrix of the whole system into a product form between clock and system, ρ = ρ_cl ⊗ ρ_sys, and that the evolution is given by a unitary operator also of product type, U = U_cl ⊗ U_sys.
As shown by Höhn, Smith and Lock [9], the evolution in terms of a density matrix at time τ is given by the usual probability of measuring the value O at a time τ in a state ρ,

$$P(O_0|\tau)={\rm Tr}\left(P_{O_0}(\tau)\,\rho\right).$$

Since τ is unknown, we would like to shift to a description where we have density matrices as functions of the observable time T. To do this we start from the conditional probability (6) and make explicit the separation between clock and system. We introduce the probability that the measurement of T has occurred at the (unknown) time τ,

$$P_\tau(T)=\frac{{\rm Tr}\left(P_T(\tau)\,U_{cl}(\tau)\,\rho_{cl}\,U_{cl}^\dagger(\tau)\right)}{\int d\tau'\,{\rm Tr}\left(P_T(\tau')\,U_{cl}(\tau')\,\rho_{cl}\,U_{cl}^\dagger(\tau')\right)},$$

and notice that $\int_{\tau^1_T}^{\tau^2_T} d\tau\, P_\tau(T)=1$. We may then introduce a T-dependent density matrix of the system at time T,

$$\rho(T)=\int d\tau\, P_\tau(T)\, U_{sys}(\tau)\,\rho_{sys}\,U_{sys}^\dagger(\tau).$$

Noting that Tr(ρ(T)) = Tr(ρ_sys), one can derive from (5) the ordinary expression for the probabilities in quantum mechanics given in (4), but with an effective density matrix given by ρ(T). Since one ends up with a superposition of density matrices evolved unitarily for different values of τ, the effective evolution of the physical density matrix ρ(T) is not unitary.
In reference [14] we showed that if the real clock behaves semi-classically and we assume that P_t(T) = f(T − t) is a peaked, symmetric function approximated by a Dirac delta, that is, f(T − t) ≃ δ(T − t) + b(T) δ''(T − t) to leading order, then ρ(T) obeys

$$\frac{\partial\rho(T)}{\partial T}=-i\,[H,\rho(T)]-\sigma(T)\,[H,[H,\rho(T)]],$$

where σ(T) = ∂b(T)/∂T is the rate of spread of the width of the clock state. This is an equation of the Lindblad [15] type that should be considered the master equation describing the actual evolution of any physical system evolving in terms of a real clock. The correcting term proportional to σ(T), a quantity that depends on the particular clock used, controls the decoherence induced on the evolution of the states by the use of physical clocks. As we shall see, there are fundamental bounds on how good a clock can be, and for the value of b(T) implied by these bounds we will recover a master equation for optimal physical clocks. Summarizing: Schrödinger's equation is only approximate; it is a description in terms of a classical time not accessible to observers in a quantum and covariant universe. When one takes into account that clocks are physical systems just like any other and that the universe has covariant and quantum laws, the evolution needs to be modified. It ends up being described by a master equation that includes additional loss of coherence. The origin of the lack of unitarity is the fact that definite statistical predictions are only possible by repeating an experiment. If one uses a real clock, which has thermal and quantum fluctuations, each experimental run will correspond to a different value of the evolution parameter. The statistical prediction will therefore correspond to an average over several intervals, and therefore the evolution cannot be unitary.
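The content of the master equation can be visualized with a short integration for a qubit (a sketch with ℏ = 1 and a constant σ chosen arbitrarily; for H = diag(E_1, E_2) the populations are untouched and the coherence decays as exp(−σ(E_2 − E_1)²T)):

```python
import numpy as np

# Qubit version of the real-clock master equation (hbar = 1):
#   d(rho)/dT = -i [H, rho] - sigma [H, [H, rho]]
E1, E2, sigma = 0.0, 1.5, 0.02
H = np.diag([E1, E2]).astype(complex)
w = E2 - E1

def rhs(rho):
    comm = H @ rho - rho @ H
    return -1j * comm - sigma * (H @ comm - comm @ H)

# Start from an equal superposition (maximal coherence, |rho01| = 1/2)
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

T, dT = 10.0, 1e-3
for _ in range(int(T / dT)):       # 4th-order Runge-Kutta integration
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dT * k1)
    k3 = rhs(rho + 0.5 * dT * k2)
    k4 = rhs(rho + dT * k3)
    rho = rho + (dT / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

decay = abs(rho[0, 1]) / 0.5            # numerical coherence suppression
predicted = np.exp(-w**2 * sigma * T)   # analytic off-diagonal decay
print(decay, predicted)
```

Populations (the diagonal) are conserved because the extra term is built from commutators with H, so only the relative phases between energy eigenstates decohere.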

III. FUNDAMENTAL LIMITS TO SPACE-TIME MEASUREMENTS
From the above analysis it is clear that if there are fundamental limitations on how good a clock can be, the use of real clocks will introduce an additional, fundamental source of decoherence. As we do not have a complete theory of quantum gravity, this is still a contentious issue. Phenomenological arguments have been given by Salecker and Wigner, Karolyhazy, Ng and van Dam, Amelino-Camelia, Ng and Lloyd, and Frenkel [16], leading to similar estimates based on two main effects: quantum fluctuations and black hole formation. We have recently given a simple argument leading to a fundamental minimum uncertainty in the determination of time intervals, consistent with the previous estimates. It relies only on the uncertainty principle and time dilation in a gravitational field [17]. Schematically, the argument is as follows: let us consider a microscopic quantum system playing the role of a clock and a macroscopic observer that interacts with the clock by interchanging signals. Let us start from the time-energy uncertainty relation, ∆E ∆t_c ≥ ℏ, where ∆t_c is of the order of the period of oscillation of the system being considered and E the energy of the quantum oscillator. One can consider that the macroscopic system is at an infinite (macroscopic) distance from the microscopic quantum oscillator. We now consider the relationship between the time measured by the clock locally, t_c, and by an observer at an infinite distance from it, t. The gravitational time dilation measures the difference in the passage of proper time at different positions, as described by the metric tensor of space-time. It is given by

$$t=\frac{t_c}{\sqrt{1-r_S/r}},$$

where r_S is the Schwarzschild radius associated with the energy of the clock, r_S = 2GE/c^4. We will concentrate on the uncertainty of the observed period of oscillations.
Using the standard technique for the propagation of errors of a measurement, and taking into account the definition of the Schwarzschild radius and the time-energy uncertainty relation, we get

$$\Delta t \geq \frac{\Delta t_c}{\sqrt{1-r_S/r}}+\frac{G\hbar\, t}{c^4\, r\, \Delta t_c\,(1-r_S/r)}.$$

Recalling that 1 − r_S/r is a positive quantity less than one, since the size of the clock cannot be smaller than its Schwarzschild radius, and translating t_c to t, one has that

$$\Delta t \geq \Delta t_c+\frac{G\hbar\, t}{c^4\, r\, \Delta t_c}.$$

Assuming that the clock has size r and that the oscillation within it takes place at a mean speed v, we have 2πr = v∆t_c, and computing the minimum of ∆t using equation (14) we get

$$\Delta t_{\min}\sim t_P^{2/3}\,t^{1/3},$$

up to factors of order one for v ∼ c, with t_P = (Gℏ/c^5)^{1/2} the Planck time. This bound agrees with those derived by other means by the authors mentioned above [16]. Similar fundamental uncertainties hold for spatial intervals l and for any relativistic invariant interval s [18].
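The optimization over the clock resolution ∆t_c can be reproduced numerically (a sketch in Planck units, G = ℏ = c = 1, taking v ∼ c and using the illustrative relation 2πr = v∆t_c; the order-one prefactors are ours):

```python
import numpy as np

# Total uncertainty in Planck units, after substituting r = v*dtc/(2*pi), v = c = 1:
#   dt(dtc) = dtc + 2*pi*t / dtc**2
# (quantum uncertainty of the clock + gravitational time-dilation uncertainty).

def total_uncertainty(dtc, t):
    return dtc + 2 * np.pi * t / dtc**2

def minimal_uncertainty(t):
    dtc = np.logspace(-2, 8, 200001)   # scan candidate clock resolutions
    return total_uncertainty(dtc, t).min()

t1, t2 = 1e6, 8e6                      # two elapsed times, in Planck times
m1, m2 = minimal_uncertainty(t1), minimal_uncertainty(t2)
# The minimum is (3/2)*(4*pi*t)**(1/3): Delta t_min scales as t**(1/3) with an
# order-one prefactor, reproducing Delta t ~ t_P**(2/3) * t**(1/3).
print(m1 / t1 ** (1 / 3), m2 / m1)
```

Doubling the elapsed time by a factor of 8 doubles the minimal uncertainty, confirming the t^{1/3} scaling.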
If the best accuracy one can achieve with a clock is the one given above, it will induce, via the master equation, the decay of the off-diagonal terms of the density matrix ρ(T). Therefore, pure states evolve towards statistical mixtures (also known as classical mixtures), undergoing an irreversible evolution, and the system presents a fundamental loss of coherence due to this effect. This is a fundamental effect: any physical system will lose unitarity through its evolution.
Summarizing: The precision with which time lapses or spatial distances can be measured is limited by quantum and gravitational effects. As a consequence of this limitation, quantum states exhibit small deviations from the Schrödinger evolution. The evolution described by real clocks becomes irreversible and exhibits a new form of loss of coherence, independent of environmental decoherence, implying that pure states approach classical mixtures as time evolves.

IV. QUANTUM MEASUREMENTS WITH REAL CLOCKS
According to the orthodox view of the quantum measurement problem, the collapse of the wave packet during measurements is an irreducibly indeterministic change in the state of a quantum system, contravening the deterministic and continuous evolution prescribed by the Schrödinger equation. One needs to address several questions, in particular: under exactly what conditions does the collapse occur? In other words, the problem of measurement in quantum mechanics arises in standard treatments as the requirement of a reduction process when a measurement takes place. Such a process is not contained within the unitary evolution of the quantum theory but has to be postulated externally, and is not unitary. Processes that occur during measurements are usually justified through the interaction with a large, classical measuring device and an environment with many degrees of freedom, that is, through ambient decoherence.
Objections have been raised against two aspects of the solution of the problem of measurement through decoherence. 1) Since the evolution of the system plus environment is unitary, the coherence still persists and it could potentially be regained. 2) In a picture where evolution is unitary, "nothing ever occurs". This is Bell's "and/or" problem: the final reduced density matrix of the system plus the measurement device describes a set of coexistent options, not alternative options with definite probabilities. To put this feature vividly in terms of Schrödinger's cat: at the end of the decoherence process, the quantum state still describes two coexistent cats, one alive and one dead. We shall argue that these two problems are related to the issue of time, and propose a solution for them.
A first approach we took to the analysis of the first issue is model dependent [19]. It consists of considering a model where the quantum system, the measurement apparatus and the environment are completely under control, and studying its evolution in terms of real quantum clocks. One can show that, with the modified evolution described by the master equation, the first objection to decoherence does not apply. We will concentrate in this section on that first objection: the quantum coherence is still there. Although a quantum system interacting with an environment with many degrees of freedom will very likely give the appearance that the initial quantum coherence of the system is lost (the density matrix of the measurement device is almost diagonal), the information about the original superposition could be recovered, for instance by carrying out a measurement that includes the environment. The fact that such measurements are hard to carry out in practice does not prevent the issue from existing as a conceptual problem. The persistence of correlations also manifests itself in closed systems in the problem of revivals: after a very long time, the off-diagonal terms in the reduced density matrix of the system plus the measurement device become large again. Whatever definiteness of the observed preferred quantity had been gained by the end of the measurement interaction turns out, in the very long run, to have been but a temporary victory: the superposition of different outcomes reappears in the state of the measuring apparatus. This is called the problem of revivals (or 'recurrence of coherence', or 'recoherence'). The fundamental irreversible decoherence induced by the use of real clocks allows one to show that revivals are prevented by the modified evolution.
While the multiperiodic functions in the coherences of the process induced by the environment tend to return to their original values after a Poincaré recurrence time, for sufficiently large systems the exponential decay of the off-diagonal terms of the density matrix in equation (16) completely hides the revival under the noise amplitude [19].
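A toy coherence factor makes this competition explicit (our sketch; the couplings and damping rate are arbitrary). With commensurate spin-bath couplings the coherence revives exactly at the recurrence time, but an extra exponential damping of the kind produced by the master equation buries the revival:

```python
import numpy as np

# Spin-bath toy model: the coherence of the central spin is
#   r(t) = prod_k cos(g_k t),
# and with commensurate couplings g_k = k*g0 it fully revives at
# t_rev = 2*pi/g0 (a Poincare-like recurrence).
g0, N, lam = 0.01, 40, 0.05
gs = g0 * np.arange(1, N + 1)

def coherence(t, damping=0.0):
    # Optional exp(-damping*t) factor models the fundamental decoherence
    return np.prod(np.cos(gs * t)) * np.exp(-damping * t)

t_rev = 2 * np.pi / g0
unitary_revival = abs(coherence(t_rev))       # = 1: full recurrence
damped_revival = abs(coherence(t_rev, lam))   # suppressed by exp(-lam*t_rev)
print(unitary_revival, damped_revival)
```

For realistic macroscopic systems the recurrence time is enormous, so even a tiny fundamental damping rate is enough to hide the revival completely.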
Let us discuss the possibility of recovering the information that, after environmental decoherence, lies in the environment interference terms. That is, let us try to establish whether the system remains in a coherent superposition or has become a statistical mixture. Since the information in question is a characteristic of the total system (including the environment) being in a coherent superposition, it can in principle be revealed by measuring a suitable quantity of the total system. A typical procedure would be to measure an observable of the total system that takes different values for a coherent superposition and a statistical mixture; for instance, one that vanishes in the latter case. This was proposed, for instance, by d'Espagnat [20]. In reference [19] we showed that, due to the fundamental decoherence induced by quantum clocks, the expectation value of such observables decreases exponentially and becomes more and more difficult to distinguish from the vanishing value resulting from an exact statistical mixture. The analysis was based on a particular model of a spin interacting with a spin-bath environment.
In what follows we are going to show that, due to the combined effect of fundamental decoherence and the bounds on the precision of measurements of length and time intervals, this analysis may be extended to a wide range of systems [21], and that it is impossible to distinguish between the expectation value for the evolved initial state and that for a statistical mixture, not only for all practical purposes (FAPP) but fundamentally. The solution may be applied to a general class of global protocols that apply to any decoherence model. In this way, we provide a criterion that works in much more general settings than a particular model. This analysis also provides estimates for bounds on the time at which an event occurs.
Let us first consider a system with unitary evolution that interacts with its environment and presents ambient decoherence; then it is in principle always possible to distinguish whether the system is in a superposition or a statistical mixture. An obvious way of showing this is the following. Let us assume that we are interested in the measurement of an observable A with eigenvalues a_i and eigenstates |ϕ_i⟩, and that we wish to compare the evolution during the measurement process of a pure state,

$$\rho_1=|\psi\rangle\langle\psi|,\qquad |\psi\rangle=\sum_i c_i\,|\varphi_i\rangle,$$

and a statistical mixture,

$$\rho_2=\sum_i |c_i|^2\,|\varphi_i\rangle\langle\varphi_i|.$$

Measurements on both states lead to the same set of results with the same probabilities. But while after a measurement outcome is observed ρ_1 becomes ρ_2, the latter remains invariant. After an event occurs, the system becomes ρ_2. However, if one studies the unitary evolution of the systems coupled to the environment, even after the interaction of the measuring device with the environment, the total system resulting from the evolution of the first state differs from that resulting from the second. Without projections breaking the unitarity, events would not occur. The distinction between them may become harder after evolution, but it is always possible provided unitarity is preserved. A measurement on the first system with a definite outcome always requires breaking the unitarity. It is usually argued by decoherentists that at a certain moment the evolution becomes "effectively irreversible".
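A minimal two-outcome sketch (ours; the amplitudes are arbitrary) shows how an interference observable separates the two states at the initial time:

```python
import numpy as np

# |psi> = c1 |phi1> + c2 |phi2>  versus the corresponding mixture.
c = np.array([np.sqrt(0.7), np.sqrt(0.3) * np.exp(0.4j)])
rho1 = np.outer(c, c.conj())                      # pure superposition
rho2 = np.diag(np.abs(c) ** 2).astype(complex)    # statistical mixture

# Interference observable O = |phi1><phi2| + |phi2><phi1|
O = np.array([[0, 1], [1, 0]], dtype=complex)
exp1 = np.trace(O @ rho1).real   # 2*Re(conj(c1)*c2): nonzero for the pure state
exp2 = np.trace(O @ rho2).real   # 0 for the mixture
print(exp1, exp2)
```

Any unitary evolution of system plus environment preserves this distinguishability in principle; what changes is only the (extremely hard) global measurement needed to read it out.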
Aaronson, Atia and Susskind [22] have shown that observing the interference terms in a state such as ρ_1(t) is as difficult as reproducing the initial state. In terms of Schrödinger's cat, measuring the interference terms is as difficult as bringing the dead cat back to life. A trivial example of a protocol that allows one to distinguish ρ_1(t) from ρ_2(t) is time reversal: we would recover the initial states, and they are easily distinguishable. Implementing this protocol with enough precision to distinguish the two situations in an experiment is, without a doubt, an extremely hard task that would require control over the huge number of degrees of freedom in the environment (see however [23] for some progress in that direction). However, the fact that this possibility exists in principle is already an insurmountable obstacle to constructing an objective notion of event within unitary quantum mechanics at the conceptual level.
In reference [21] we analyzed the effect of the fundamental decoherence induced by the use of real clocks. If the same time reversal is considered in this case, the irreversibility of the evolution described by the master equation does not allow one to recover the initial state. In fact, if we consider the evolution of ρ_1(0) from the initial time 0 to t and then back-reverse it to t = 0, we do not re-obtain ρ_1(0) but a state that differs very little from ρ_2(0).
In order to quantify this difference, let us consider the observable O = O_S ⊗ I_E, where S refers to the system and E to the environment. Then, if O_S = |ϕ_j⟩⟨ϕ_k| + |ϕ_k⟩⟨ϕ_j|, the expectation values for ρ_1(0) and ρ_2(0) are initially

$${\rm Tr}\left(O\rho_{S,E,1}(0)\right)=2\,{\rm Re}\left(\bar c_j c_k\right)\neq 0,\qquad {\rm Tr}\left(O\rho_{S,E,2}(0)\right)=0,$$

where the trace is taken over the whole system including the environment. After evolving a time T and time-reversing the system until T_f = 2T, the fundamental decoherence exponentially suppresses the first expectation value, with a suppression controlled by τ_D, the usual environmental decoherence time of the system coupled to the measuring device. This shows that using the global protocol to distinguish the state evolved with the master equation from the state in the case an event occurs becomes increasingly harder when real clocks are used and time uncertainties are taken into account. The state evolved with the master equation becomes increasingly similar to the case in which an event occurs, and it can be interpreted as a classical mixture.
One could argue that, even though for all practical purposes both states are practically identical, they are still fundamentally different. We now show that this is not the case when one takes into account that the uncertainties in time intervals and length measurements put limitations on how the state of a system is prepared or measured. We illustrate this in the paradigmatic case of a particle in a coherent superposition over two spatial locations (for more details see [21]). To do so we take |ψ⟩ = a|ϕ₁⟩ + b|ϕ₂⟩, where |ϕ₁⟩ and |ϕ₂⟩ are normalized Gaussian wavepackets localized at different points. The state that would remain invariant if an event occurs after the measurement of the position is the statistical mixture ρ₂(0) = |a|²|ϕ₁⟩⟨ϕ₁| + |b|²|ϕ₂⟩⟨ϕ₂|. We can distinguish both states by computing the expectation value of the observable O_S = p: while Tr[pρ₂(0)] = 0, the superposition yields a nonvanishing value coming from the interference terms. Hence, if the initial state is chosen with the appropriate phases, p discriminates whether an event occurred or not. However, fundamental uncertainties on the measurement of length intervals forbid a perfect preparation of the wavepackets |ϕ₁⟩ and |ϕ₂⟩: they imply errors ∆L and ∆σ on the separation and width of the Gaussians, which in turn lead to uncertainties on the expectation value of p.
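A small numerical sketch of this discrimination (ħ = 1, hypothetical values of the separation and width, relative phase π/2) confirms that ⟨p⟩ is nonzero for the coherent superposition of two Gaussians while it vanishes for the statistical mixture:

```python
import numpy as np

# Superposition of two Gaussian packets separated by L, width sigma,
# with a relative phase theta (all values hypothetical, hbar = 1).
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
L, sigma, theta = 2.0, 0.5, np.pi / 2

g = lambda x0: np.exp(-(x - x0) ** 2 / (4 * sigma ** 2))
psi = g(-L / 2) + np.exp(1j * theta) * g(L / 2)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize on the grid

# <p> = -i * integral of psi* d(psi)/dx  (central differences)
dpsi = np.gradient(psi, dx)
p_sup = float((-1j * np.sum(psi.conj() * dpsi) * dx).real)

# In the statistical mixture each real Gaussian packet has <p> = 0.
p_mix = 0.0
print(p_sup, p_mix)   # p_sup is nonzero thanks to the interference terms
```

For these parameters the interference contribution gives ⟨p⟩ ≈ 2e⁻² ≈ 0.27, while the mixture gives exactly zero, so the momentum expectation value indeed discriminates the two situations when the phases are chosen appropriately.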
We have shown in [21] that a simple propagation of uncertainties gives the error induced on Tr[pρ₁(0)] − Tr[pρ₂(0)] by these preparation uncertainties. The resulting uncertainty in the measurement of O = p ⊗ I_E has to be taken into account when analyzing the global protocol considered above in equation (19).
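The effect of the preparation errors ∆L and ∆σ can be sketched with a schematic propagation of uncertainties for the Gaussian superposition considered above. The closed-form expression for ⟨p⟩ below is our own assumption for this toy model (ħ = 1, relative phase π/2, hypothetical error values), not the formula of [21]:

```python
import numpy as np

# Schematic propagation of preparation uncertainties: how errors dL, dsigma
# in the packet separation and width translate into an uncertainty on the
# discriminating expectation value <p> (toy Gaussian model, hbar = 1).
def p_expect(L, sigma, theta=np.pi / 2):
    """<p> for psi = g(-L/2) + e^{i theta} g(+L/2) in the toy model."""
    s = np.exp(-L ** 2 / (8 * sigma ** 2))       # overlap of the packets
    return 2 * np.sin(theta) * (L / (4 * sigma ** 2)) * s \
        / (2 + 2 * np.cos(theta) * s)

L, sigma = 2.0, 0.5
dL, dsigma = 1e-3, 1e-3                          # assumed preparation errors
eps = 1e-6                                       # step for numerical derivatives
dp_dL = (p_expect(L + eps, sigma) - p_expect(L - eps, sigma)) / (2 * eps)
dp_ds = (p_expect(L, sigma + eps) - p_expect(L, sigma - eps)) / (2 * eps)
delta_p = abs(dp_dL) * dL + abs(dp_ds) * dsigma

# Once delta_p becomes comparable to the signal p_expect(L, sigma) itself,
# the protocol can no longer tell the superposition from the mixture.
print(delta_p, p_expect(L, sigma))
```

The comparison between the propagated error delta_p and the signal ⟨p⟩ plays the role of condition (25): once the former dominates, the global protocol loses its discriminating power.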
That is, once condition (25) is satisfied, the uncertainty in the measurement of the observable prevents one from verifying whether the system was initially in a coherent superposition ρ_{S,1} or in a statistical mixture ρ_{S,2}. Notice that this is a fundamental limitation that cannot be circumvented by making multiple measurements; it is related to the impossibility of preparing exactly the same initial state with infinite precision. One cannot decide whether the system was, at the end of the evolution, in a mixed state or in the one resulting from the evolution with the master equation. While in the case of environmental decoherence the final states were indistinguishable only for all practical purposes (FAPP), when the fundamental decoherence due to the bounds on the precision of the determination of time and space intervals is taken into account, the production of events in a system that was initially in a pure superposition is compatible with the evolution equation. This limitation is fundamental and not FAPP. It determines when a system produces an event, and it does so in an objective way. In reference [21] we proved that events can occur for times T > τ_event, where τ_event is given explicitly in [21]. It could be argued that protocols more efficient than the one requiring the system's time reversal might exist, and therefore that the analysis presented is insufficient to show that events can be produced without violating the causal evolution described by the master equation. Nevertheless, the studies by Brown and Susskind [24] and by Aaronson, Atia and Susskind [22] on quantum complexity suggest that any protocol that allows one to distinguish the final states of a measurement process when the environment is included would be equally costly. In particular, it would be as difficult as implementing a procedure for bringing the dead cat back to life, as the time-reversal operation would do, and would therefore require operations of a similar degree of difficulty.
On the other hand, the quantum complexity ideas mentioned before suggest that a rigorous demonstration of the indistinguishability between the final states after measurements and statistical mixtures may be feasible.
When the fundamental decoherence effects due to the use of real quantum and relativistic clocks, and the quantum and gravitational limits on the measurement of time intervals, are taken into account, one can show that during the measurement process pure states evolve into states that are fundamentally physically indistinguishable from statistical mixtures of the various possible outcomes of the measured observable, without any correction to or violation of the evolution described by the master equation.

V. THE PRODUCTION OF EVENTS AND THE EVOLUTION OF STATES DURING MEASUREMENTS
Generically, the production of events alters the final state of a quantum system. The only exception occurs when the events take place in statistical mixture states; in that case the states can evolve according to the master equation without any change. In the preceding sections we have shown that when we take into account that time is not an ideal parameter but is measured by clocks in a quantum gravitational universe, states evolve into statistical mixtures, and it therefore becomes possible to explain why events take place in measurement devices.
At this point one could ask: "isn't the and/or problem still present? Have we adequately explained the transition from a superposition to a single outcome?" Our point of view is that events occur when the state of a system takes the form of a statistical mixture. This statement could be considered an ontological postulate, the role of physics being to predict when events can occur and with what probability. When an event happens, as in the case of the dot on a photographic plate in the double-slit experiment, typically many properties are actualized: for instance, the dot may be darker on one side than the other, or may have one of many possible shapes. The association between properties and objects typical of classical physics is now substituted by an association of properties with events. Objects, understood as systems in a certain dispositional state, do not have properties until they are measured or produce events. Observe that the production of events is not necessarily associated with a measurement: each time the state of a system becomes indistinguishable from a statistical mixture, an event occurs.
All recent criticisms of quantum mechanics based on Wigner's friend, like that of Frauchiger and Renner [25], stem from the assumption that the unitary evolution predicted by ordinary quantum mechanics with a classical time is correct with infinite precision. The quantum decoherence due to the use of real clocks eliminates all the problems mentioned above. Events can happen without violating the causal evolution described by the master equation, with probabilities that coincide exactly with those predicted by the textbook interpretations.
The treatment presented here is purely quantum mechanical: events and states provide a complete physical description. Events happen when the state acquires the form of a statistical mixture, and one of the outcomes compatible with the statistical mixture is actualized. The occurrence of an event does not require any change in the causal evolution of the statistical mixture. The reduction of the state usually associated with measurement processes is nothing more than the update of the information resulting from the observation of the state.

VI. CONCLUSIONS
We have presented a brief review of the Montevideo Interpretation of quantum mechanics, taking into account recent advances in the understanding of the problem of time [9] in general relativity and of quantum complexity [22, 24]. We showed that previous results not only still hold but are set on a more solid footing. The key observation is to take seriously the notion that time has to be a physical observable and not a classical parameter. This induces a fundamental mechanism for loss of coherence which, coupled with the fundamental limitations on measurements implied by general relativity and quantum mechanics, leads to an objective notion of quantum event without reference to a classical world or observers. We also showed that the mechanism that leads to the loss of coherence can be framed in terms of recent proposals to deal with the problem of time in quantum gravity.