1. Introduction
The Zeno effect typically occurs when a quantum system is repeatedly measured: if the time interval between two successive measurements tends to 0, the evolution of the system becomes frozen. The main reason is that, in quantum mechanics, the general short-time evolution is quadratic, i.e.,
$$ \left|\langle\psi|\,e^{-iH\,\mathrm{d}t}\,|\psi\rangle\right|^2 = 1 - (\Delta H)^2\,\mathrm{d}t^2 + O(\mathrm{d}t^4), $$
where $(\Delta H)^2 = \langle\psi|H^2|\psi\rangle - \langle\psi|H|\psi\rangle^2$ (we take $\hbar = 1$). Hence, if $n$ projective measurements along $|\psi\rangle$ are performed during a fixed time interval $T$, the probability $P_n$ that all measurements give the outcome $|\psi\rangle$ is, at leading order:
$$ P_n \simeq \left(1 - (\Delta H)^2\,\frac{T^2}{n^2}\right)^{n} \xrightarrow[n\to\infty]{} 1. $$
Note that, to obtain this limit, one has to neglect the higher-order terms, as is usually done in the standard presentations of the Zeno effect (Section 3 of [1] and Section 3.3.1.1 of [2]). Rigorously speaking, this is an additional assumption, because when expanding the expression $\left(1 - (\Delta H)^2\frac{T^2}{n^2}\right)^{n}$, the number of neglected terms actually depends on $n$.
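The standard limit above is easy to illustrate numerically. The following minimal sketch (with an arbitrary illustrative value $(\Delta H)^2 T^2 = 1$, not taken from the text) evaluates the leading-order survival probability for increasingly frequent measurements:

```python
def survival_probability(n, dH2=1.0, T=1.0):
    """Leading-order probability that n projective measurements,
    equally spaced over [0, T], all find the initial state."""
    return (1.0 - dH2 * (T / n) ** 2) ** n

# The survival probability approaches 1 as measurements become more frequent.
for n in [2, 10, 100, 10000]:
    print(n, survival_probability(n))
```

As $n$ grows, the printed values increase monotonically toward 1, which is the Zeno freezing.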
In the spirit of the theory of decoherence, one might wish to get rid of the ill-defined notion of (ideal) projective measurement. Since a measurement is nothing but a particular case of interaction with an environment that entails strong decoherence of the system in the measured basis, it is tempting to ask what level of decoherence is required to freeze a system. For example, a particle in a gas is continuously monitored (∼measured) by its neighbors, yet the gas manifestly has an internal evolution—and so does the universe in general. It is not obvious a priori whether quantum mechanics actually predicts that the universe is not frozen.
This question has already been addressed by examining the continuous dynamics of the pair system–environment for a relatively generic Hamiltonian [
3]. The Zeno limit is recovered for strong interaction, and Fermi’s golden rule is recovered in the limit of small interaction. “The model shows that the coupling to the environment leads to constant transition rates that are unaffected by the measurement if the coupling is ‘coarse enough’ to discriminate only between macroscopic properties. This may in turn be used to define what qualifies a property as macroscopic: it must be robust against monitoring by the environment” (Section 3.3.2.1 of [
2]). Similarly, the master equation for the motion of a mass point under continuous measurement indicates that the latter is not slowed down because the Ehrenfest theorems are still valid. “This may be understood as a consequence of the fact that, for a continuous degree of freedom, any measurement with finite resolution necessarily is too coarse to invoke the Zeno effect” (Section 3.3.1.1 of [
2]).
In addition, Presilla, Onofrio, and Tambini have studied how the continuous monitoring of a two-level system affects its time evolution and its coherence based on a Lindblad master equation (Section IV of [
4]). In particular, they compared their results with those of a historic experimental test of the Zeno effect [
5] and were able to obtain a better fit of the data than the naïve approach (assuming perfect von Neumann collapses) by taking into account the finite duration of the measurements. We will come back to this approach in
Section 5 to make a quantitative comparison between the latter’s results and ours.
Another interesting model is that of Sections 8.3 and 8.4 of [
6]. It may at first sight seem puzzling that an unstable nucleus continuously measured by a Geiger counter can actually decay. Indeed, if the measurement is treated as an ideal projective one, the nucleus should continuously be projected onto a non-decayed state. However, as soon as the decoherence process is no longer assumed to be instantaneous (even with a decoherence time as short as the one considered in Equation (8.45) of [6]), the deviation from the expected exponential decay is shown to be negligible.
Although these models are already convincing, our aim is to make a new contribution to this topic by explaining why the vast majority of physics is not affected by the quantum Zeno effect, the latter being detectable only in some very specific experimental setups. Our model also formalizes the competition between free evolution (no information leaking to the rest of the world) and decoherence (interaction with the environment), but differs from the previous ones in two respects: its mathematical structure is discrete and it does not assume anything about the form of the Hamiltonian, so as to be as universal as possible. The use of a discrete framework is consistent with the approach adopted in many mathematical studies on the quantum Zeno effect (see [
7] and references therein).
2. The Model: Free Evolution vs. Decoherence
Having in mind the fact that continuous degrees of freedom are less prone to the Zeno effect (recall the previous quote from [2]), in order to explain why the universe is not frozen, it may suffice to check it on a two-level system. Our system of interest will therefore be a qubit, initially in the state $|0\rangle$ and monitored by an environment producing partial decoherence in the basis $(|0\rangle, |1\rangle)$. We consider a fixed time interval $T$, divided into $n$ phases of length $\mathrm{d}t = \frac{T}{n}$ dominated by the free evolution. This evolution takes the general form
$$ |0\rangle \mapsto \alpha(\mathrm{d}t)\,|0\rangle + \beta(\mathrm{d}t)\,|1\rangle, $$
where the coefficients satisfy $|\alpha(\mathrm{d}t)|^2 + |\beta(\mathrm{d}t)|^2 = 1$ and $\beta(\mathrm{d}t) \to 0$ when $\mathrm{d}t \to 0$ (in the sequel, we will drop the argument $\mathrm{d}t$ whenever the context is clear). As mentioned in the introduction, to stick to the standard derivations of the Zeno effect, we need to neglect all the higher-order terms, so that we actually suppose $|\alpha|^2 = 1 - (\Delta H)^2\,\mathrm{d}t^2$ and $|\beta|^2 = (\Delta H)^2\,\mathrm{d}t^2$.
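The quadratic short-time law assumed here, $|\beta|^2 \simeq (\Delta H)^2\,\mathrm{d}t^2$, can be checked directly on any two-level Hamiltonian. The sketch below uses an arbitrary illustrative $2\times 2$ Hermitian matrix (the numerical entries are ours, not the paper's) and compares the leaked probability $1-|\langle 0|\psi(\mathrm{d}t)\rangle|^2$ with $(\Delta H)^2\,\mathrm{d}t^2$:

```python
import numpy as np

def evolve(H, dt, psi):
    """Apply exp(-i H dt) to psi via the eigendecomposition of H."""
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T
    return U @ psi

# An arbitrary 2x2 Hermitian Hamiltonian (purely illustrative values).
H = np.array([[0.3, 0.7 - 0.2j],
              [0.7 + 0.2j, -0.5]])
psi0 = np.array([1.0, 0.0])  # the qubit starts in |0>

# Variance of H in |0>: (ΔH)² = <H²> - <H>²
varH = (psi0.conj() @ (H @ H) @ psi0 - (psi0.conj() @ H @ psi0) ** 2).real

# |β|² = 1 - |<0|ψ(dt)>|² should match (ΔH)² dt² for small dt.
for dt in [1e-2, 1e-3]:
    psi = evolve(H, dt, psi0)
    print(dt, 1.0 - abs(psi[0]) ** 2, varH * dt ** 2)
```

The two printed columns agree up to $O(\mathrm{d}t^4)$ corrections, which shrink rapidly as $\mathrm{d}t$ decreases.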
After the $i$-th phase of free evolution, the system meets some neighboring environment $\mathcal{E}_i$, initially in the state $|E_i\rangle$, and becomes immediately entangled according to
$$ (\alpha|0\rangle + \beta|1\rangle)\,|E_i\rangle \;\mapsto\; \alpha\,|0\rangle|E_i^0\rangle + \beta\,|1\rangle|E_i^1\rangle, $$
where $\epsilon_i$, defined by $|\langle E_i^0|E_i^1\rangle| = 1 - \epsilon_i$, quantifies the level of decoherence induced by $\mathcal{E}_i$, i.e., how well the environment has recorded the system's state ($\epsilon_i = 0$ means no decoherence, $\epsilon_i = 1$ perfect decoherence). See Figure 1.

We henceforth suppose that $\epsilon_i = \epsilon$ does not depend on $i$ (taken as a mean level of decoherence), which amounts to assuming that the strength of the interaction is more or less constant over time. Finally, we also suppose that each environment is distinct from the others and non-entangled at the time it encounters the system.

Recalling that $\mathrm{d}t = \frac{T}{n}$, the relevant quantity to compute is the probability $P_n$ that, at the end of the time interval $T$, the system is still found in its initial state $|0\rangle$ and that all the successive environments have recorded 0.
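The elementary building block of the model (one phase of free evolution followed by one decoherence step) can be sketched at the level of the system's reduced density matrix. The rotation angle, the decoherence level, and the real-valued overlap below are illustrative choices of ours, not values from the text:

```python
import numpy as np

def free_step(rho, U):
    """Unitary free evolution of the qubit's density matrix."""
    return U @ rho @ U.conj().T

def decoherence_step(rho, eps):
    """Entangling with a fresh environment and tracing it out multiplies
    the coherences by the overlap <E^0|E^1> = 1 - eps (taken real here)."""
    out = rho.copy()
    out[0, 1] *= (1 - eps)
    out[1, 0] *= (1 - eps)
    return out

theta = 0.1  # small rotation standing for one phase of free evolution
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)  # initial state |0><0|

for _ in range(5):  # five alternations of free evolution and partial decoherence
    rho = decoherence_step(free_step(rho, U), eps=0.3)
print(rho.real)
```

Note that this reduced picture does not by itself compute $P_n$, which also conditions on the environments' records; it only illustrates how each decoherence step damps the coherences while the free evolution rebuilds them.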
Proposition 1. Neglecting all the higher order terms, we can write
$$ P_n \simeq 1 - (\Delta H)^2\,\frac{T^2}{n^2}\left(n + 2\sum_{k=1}^{n-1}(n-k)(1-\epsilon)^k\right). $$
Proof. The cases $n = 1$ or 2 are easy to treat. Indeed, the successive iterations are as follows ($f$ stands for the free evolution, $d$ for the decoherence step, and we write $|1\rangle \mapsto \gamma|0\rangle + \delta|1\rangle$ for the action of the free evolution on $|1\rangle$):
$$ |0\rangle|E_1\rangle \xrightarrow{f} (\alpha|0\rangle + \beta|1\rangle)|E_1\rangle \xrightarrow{d} \alpha|0\rangle|E_1^0\rangle + \beta|1\rangle|E_1^1\rangle $$
$$ \xrightarrow{f} \big(\alpha\alpha|E_1^0\rangle + \beta\gamma|E_1^1\rangle\big)|0\rangle + \big(\alpha\beta|E_1^0\rangle + \beta\delta|E_1^1\rangle\big)|1\rangle $$
$$ \xrightarrow{d} \big(\alpha\alpha|E_1^0\rangle + \beta\gamma|E_1^1\rangle\big)|0\rangle|E_2^0\rangle + \big(\alpha\beta|E_1^0\rangle + \beta\delta|E_1^1\rangle\big)|1\rangle|E_2^1\rangle. $$
The $|E_i\rangle$ seem to live in different Hilbert spaces only because we omit all environments $\mathcal{E}_j$ with which the system is not entangled yet. Consequently, neglecting all the higher order terms yields
$$ P_1 = |\alpha|^2 \simeq 1 - (\Delta H)^2\,T^2 \quad\text{and}\quad P_2 = \big|\alpha^2 + \beta\gamma\,\langle E_1^0|E_1^1\rangle\big|^2 \simeq 1 - (\Delta H)^2\,\frac{T^2}{4}\big(2 + 2(1-\epsilon)\big). $$
The last step is not obvious and comes from the following argument. A priori, the quantity $\beta\gamma\,\langle E_1^0|E_1^1\rangle$ could point in any direction of the complex plane, but the coefficients of the matrix of the free evolution are not unrelated. Using the general parametrization of a $2\times 2$ unitary matrix, it can be written as $\begin{pmatrix} \alpha & -e^{i\varphi}\bar{\beta} \\ \beta & e^{i\varphi}\bar{\alpha} \end{pmatrix}$, so that $\beta\gamma = -e^{i\varphi}|\beta|^2$. Moreover, for small $\mathrm{d}t$ (this approximation may be rough for the case $n = 1$ but improves as $n$ increases), $e^{i\varphi}$ is close to 1; hence, $\beta\gamma \simeq -|\beta|^2$ and $\delta \simeq 1$. We also expect $\langle E_1^0|E_1^1\rangle$ to be close to the real number 1 (infinitesimal decoherence). Combining all this, $\beta\gamma\,\langle E_1^0|E_1^1\rangle$ is close to being a real negative number, so its real part is approximately the opposite of its modulus.

In general, $P_n$ is the square modulus of a sum of terms of the form
$$ c_{a_1}\cdots c_{a_n}\;\prod_{i=1}^{n}\langle E_i^0|E_i^{b_i}\rangle, $$
where $a = a_1\ldots a_n$ and $b = b_1\ldots b_n$ are words on the alphabets $\{=,\neq\}$ and $\{0,1\}$, respectively (each $c_{a_i}$ being a coefficient of the free evolution: $\alpha$ or $\delta$ if the system's state is unchanged at step $i$, $\beta$ or $\gamma$ if it flips). The word $b$ is entirely deduced from $a$ according to $b_i = (\text{number of } \neq \text{ among } a_1\ldots a_i) \bmod 2$, with the additional requirement that $b_n = 0$ (the system finally measured in state $|0\rangle$), so that $a$ actually contains an even number of ≠. Note that only the indices $i$ such that $b_i = 1$ contribute non-trivially to the product of brackets, since $\langle E_i^0|E_i^0\rangle = 1$.

We now use the fact that $\left|\sum_j z_j\right|^2 = \sum_j |z_j|^2 + 2\sum_{j<l}\mathrm{Re}(z_j\bar{z}_l)$ for all complex numbers $z_j$. In our case, the leading term is clearly $|\alpha|^{2n}$, while all the other square moduli are of order $|\beta|^4$ or less because they contain at least two factors $\beta$ or $\gamma$. Furthermore, repeating the above argument, the real parts can be approximately replaced by their opposite moduli (and this approximation is more accurate as $n$ becomes larger). Therefore, only the cross-products with the leading term, where $a$ contains exactly two ≠, contribute at order $|\beta|^2$. The power of $(1-\epsilon)$ that appears in such a cross-product (i.e., the number of non-trivial brackets $\langle E_i^0|E_i^1\rangle$) is the number of indices $i$ such that $b_i = 1$, which is the number of steps elapsed between the two ≠. For instance, if the two ≠ happen at the $i$-th and $(i+k)$-th steps, the contribution is approximately $-2|\beta|^2(1-\epsilon)^k$. There are obviously $n-k$ words $a$ with exactly two ≠ separated by $k$ steps, corresponding to the $n-k$ possible choices for $i$, whose contribution is $-2|\beta|^2(1-\epsilon)^k$. Finally, the general expression for $P_n$ when neglecting all the higher order terms is
$$ P_n \simeq |\alpha|^{2n} - 2|\beta|^2\sum_{k=1}^{n-1}(n-k)(1-\epsilon)^k \simeq 1 - (\Delta H)^2\,\frac{T^2}{n^2}\left(n + 2\sum_{k=1}^{n-1}(n-k)(1-\epsilon)^k\right). $$
□
We can check the consistency of this result on two particular cases:
If $\epsilon = 0$, no decoherence occurs, so we recover the free evolution case during a time interval $T$ instead of $\frac{T}{n}$, i.e., $P_n \simeq 1 - (\Delta H)^2\,T^2$ (indeed, $n + 2\sum_{k=1}^{n-1}(n-k) = n^2$).
If $\epsilon = 1$, a perfect decoherence, the environment acts as an ideal measuring device, so we recover the Zeno case mentioned in the introduction, which is $P_n \simeq 1 - (\Delta H)^2\,\frac{T^2}{n} \xrightarrow[n\to\infty]{} 1$.
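Assuming the closed form of Proposition 1 reads $P_n \simeq 1 - (\Delta H)^2\frac{T^2}{n^2}\big(n + 2\sum_{k=1}^{n-1}(n-k)(1-\epsilon)^k\big)$ (a reconstruction consistent with both limiting cases), these checks can be scripted in a few lines:

```python
def P_n(n, eps, dH2=1.0, T=1.0):
    """Closed form of Proposition 1 (neglecting all higher order terms)."""
    s = n + 2 * sum((n - k) * (1 - eps) ** k for k in range(1, n))
    return 1.0 - dH2 * (T / n) ** 2 * s

n = 1000
# eps = 0: free evolution over the full interval, P_n = 1 - (ΔH)² T²
print(P_n(n, 0.0))
# eps = 1: ideal measurements, P_n = 1 - (ΔH)² T²/n, tending to 1
print(P_n(n, 1.0))
```

Intermediate values of $\epsilon$ interpolate monotonically between these two extremes.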
Now, a Zeno effect will freeze the system in the limit of large $n$ if and only if $P_n \xrightarrow[n\to\infty]{} 1$, that is (using Proposition 1) if $\frac{T^2}{n^2}\left(n + 2\sum_{k=1}^{n-1}(n-k)(1-\epsilon)^k\right) \xrightarrow[n\to\infty]{} 0$. After some algebra, this expression can be simplified and leads to the following criterion:
$$ n\,\epsilon \xrightarrow[n\to\infty]{} \infty. $$
We immediately note that, if $\epsilon$ is a constant independent of $n$, the criterion is satisfied. This is natural because, as the duration of each free evolution phase goes to 0, a constant (even weak) decoherence is applied infinitely many times, so the system freezes.
We will henceforth suppose that the level of decoherence depends on $n$, with $\epsilon = \epsilon\!\left(\frac{T}{n}\right)$. A global multiplicative factor in $\epsilon$ can thus be dropped in the above criterion. Our task in the following sections will be (i) to check the criterion on some common classes of functions $\epsilon(\mathrm{d}t)$ (Section 3) and (ii) to estimate the level of decoherence encountered in physical situations (Section 4).
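As a numerical illustration, one can tabulate the quantity whose vanishing signals freezing (up to the global factor $(\Delta H)^2 T^2$), for the hypothetical family of scalings $\epsilon_n = n^{-a}$; this family is our own illustrative choice and need not coincide with the classes studied in Section 3:

```python
def freezing_term(n, eps):
    """The quantity whose vanishing (as n grows) signals Zeno freezing,
    up to the global factor (ΔH)² T²."""
    return (n + 2 * sum((n - k) * (1 - eps) ** k for k in range(1, n))) / n ** 2

for a in [0.5, 1.0, 2.0]:
    # eps_n = n^(-a): slow decay (a < 1) freezes the system,
    # fast decay (a > 1) leaves the free evolution essentially intact.
    print(a, [round(freezing_term(n, n ** -a), 4) for n in (10, 100, 1000)])
```

The output exhibits the three regimes: for $a < 1$ the term tends to 0 (freezing), for $a > 1$ it tends to 1 (free evolution), and for $a = 1$ it stabilizes at an intermediate value (slowed-down evolution).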
Remark 1. How finely should the time interval be divided so that the quadratic approximation is valid? Let us forget for a moment that our system is finite dimensional and consider the Hamiltonian of a free particle, $H = \frac{P^2}{2m}$, starting from an initial state centered around a position $x_0$ and a momentum $p_0$, and compute the characteristic time below which the quadratic approximation holds. For typical values of the mass and of the width of the initial wavepacket, this time turns out to be much shorter than the mean free time of a particle in a gas in standard conditions, which is of order $10^{-10}$ s. Thus, it seems at first sight that the decoherence steps could in practice be too separated in time for the quadratic approximation to be valid all along the free evolution step. However, decoherence does not need any actual interaction to take place (a "null measurement" is still a measurement [8]). The fact that all other surrounding particles do not interact with the particle of interest is still a gain of information for the environment, which suffices to suppress coherence with other possible histories in which they would have interacted. In this case, information is continually leaking to the environment, so it seems legitimate to divide the time interval $T$ as finely as desired so that the quadratic approximation becomes valid, and the resulting behavior is then determined by the intensity of infinitesimal decoherence only. The philosophy of this argument is not specific to the infinite dimensional case and may be applied to our two-level system. It relies, however, on the already mentioned assumption that the strength of the interaction is more or less constant over time. This will be discussed in Section 6.

5. Comparison with Presilla et al.'s Continuous Model
The situation explored by Presilla, Onofrio, and Tambini in Section IV of [
4] is very similar to ours—a two-level system undergoing some external monitoring in addition to its free evolution—but the major difference lies in the continuous nature of the model. Precisely, they solve the Lindblad master equation governing the competition between the system's internal evolution in the presence of a resonant field (producing Rabi oscillations between levels 1 and 2) and a continuous probing of the occupancy of level 1, whose intensity can be modulated by a factor $\kappa$. The equation, which can in this case be solved analytically, reads
$$ \dot{\rho} = -i\,[H,\rho] - \frac{\kappa}{2}\,[A,[A,\rho]], $$
where $A = |1\rangle\langle 1|$ is the monitored observable.
It is interesting to check whether their findings are compatible with our model, in particular to determine the level of short-time decoherence $\epsilon(\mathrm{d}t)$ and the parameters that correspond to their situation, and to see if our conclusions still apply. To do so, recall that the density matrix of a system entangled with an environment in a state $\alpha|0\rangle|E^0\rangle + \beta|1\rangle|E^1\rangle$ is given by
$$ \rho = \begin{pmatrix} |\alpha|^2 & \alpha\bar{\beta}\,\langle E^1|E^0\rangle \\ \bar{\alpha}\beta\,\langle E^0|E^1\rangle & |\beta|^2 \end{pmatrix}. $$
Therefore, although the environment's evolution is not specified in Presilla et al.'s model, one can still deduce the overlap $|\langle E^0|E^1\rangle| = 1 - \epsilon$, and in particular its short-time behavior corresponding to our $\epsilon(\mathrm{d}t)$, by computing the quantity $\frac{|\rho_{12}|}{\sqrt{\rho_{11}\rho_{22}}}$. Using the expressions (84) and (85) for the populations and coherences given in [
4], we can plot $\epsilon(t)$ for different values of the parameters. In
Figure 2, we show $\epsilon(t)$ for initial conditions corresponding to a system starting in a pure state, and different values of the coupling $\kappa$ (we have set the Rabi frequency to 1, so that we actually plot both $t$ and $\kappa$ in the corresponding units).
Crucially, we see that $\epsilon(t)$ has a non-vanishing first derivative at $t = 0$ (except in the extreme case $\kappa = 0$, where of course no decoherence occurs). When plotting this derivative as a function of $\kappa$, we observe that it is an increasing function of the coupling. This means that the short-time decoherence is of the form $\epsilon(\mathrm{d}t) \simeq \lambda\,\mathrm{d}t$, which corresponds to the case $\epsilon(\mathrm{d}t) \propto \mathrm{d}t$ in our model (recalling that $\mathrm{d}t = \frac{T}{n}$ and equating the characteristic times of the two models).
Are the two models consistent? According to the table of
Section 3, we expect an intermediate regime between Zeno freezing and free evolution, where the system is still evolving but slowed down. This is indeed what happens in Presilla et al.'s model. Importantly, they conclude that "strong Zeno inhibition [$\kappa \to \infty$ limit] as well as full Rabi oscillations [$\kappa \to 0$ limit] are two trivial extreme regimes. However, [their model] tells us that another interesting and unexplored regime exists. It is the regime which occurs when the measurement coupling is comparable to the critical value. In this case a strong competition between stimulated transitions and measurement inhibition takes place."
We can even be more precise and compare quantitatively the slowing down of the system as a function of $\kappa$. In our model, the quadratic evolution term $(\Delta H)^2\,T^2$ is replaced by $(\Delta H)^2\,\frac{T^2}{n^2}\left(n + 2\sum_{k=1}^{n-1}(n-k)(1-\epsilon)^k\right)$, so that the time unit is effectively multiplied by the correction factor
$$ \sqrt{\frac{1}{n^2}\left(n + 2\sum_{k=1}^{n-1}(n-k)(1-\epsilon)^k\right)}. $$
In the continuous model, the proper frequency of the system is lowered under monitoring, so that the time unit is effectively multiplied by the ratio of the monitored frequency to the free one. Of course, these two correction factors are not equal, but they turn out to coincide quite well, at least in the small $\kappa$ case. For instance, on the whole range of couplings considered, they deviate by no more than 5% from each other, as can be seen by plotting the ratio of the two factors. For larger $\kappa$, though, the two models completely disagree at the quantitative level. This may be due to the fact that our model, by considering free evolution steps of finite duration but immediate steps of decoherence, implicitly assumes a relatively weak coupling with the environment. In particular, it is unable to account for the critical transition that happens when the coupling reaches its critical value.
As a final comment in this section, it is interesting to remember that a continuous Lindblad master equation corresponds to the intermediate regime $\epsilon(\mathrm{d}t) \propto \mathrm{d}t$. This does not necessarily affect the argument given in
Section 4 to justify that the most typical physical situation might be $\epsilon(\mathrm{d}t) \sim \mathrm{d}t^2$. Indeed, although Lindblad's equation is somehow universal (being the most general equation governing an open quantum system interacting with a Markovian environment), picking a preferred monitored observable is not. In reality, the mutual monitoring between all the subsystems in the universe arises from complex interactions between a huge number of particles, and which observables are more recorded than the others is far from obvious (this discussion is related to the famous preferred-basis problem; see [
10] for an updated bibliography).
6. Discussion
We have presented a model designed to check whether quantum mechanics indeed predicts that the universe should evolve. To remain as universal as possible, no specific form of Hamiltonian was assumed. It allowed us to determine the level of decoherence (induced by a surrounding environment) needed to freeze a two-level quantum system, arguably the kind of system most prone to the Zeno effect. We have found that if, during a time interval $\mathrm{d}t$, the environment distinguishes between the two states according to a level of decoherence $\epsilon(\mathrm{d}t)$ with $\epsilon(\mathrm{d}t) = O(\mathrm{d}t)$, then free evolution wins over decoherence, and the system is not frozen. In the most generic case, because quantum mechanical short-time evolutions are always quadratic (and this is true for the system as well as for the pair {system + environment}), we find $\epsilon(\mathrm{d}t) \sim \mathrm{d}t^2$; hence, the universe is indeed not frozen. We have finally made a quantitative comparison with the continuous master equation model by Presilla, Onofrio, and Tambini [
4]. The links between the two models are non-trivial, but we have found a good agreement, at least in the low coupling regime.
The main weaknesses of the model, leading to possible improvements, are the following.