Quantum measurements, stochastic networks, the uncertainty principle, and not so strange "weak values"

The outcomes of a series of measurements made on a quantum system form a sequence of random events which occur in a particular order. The system, together with a meter or meters, can be seen as following the paths of a stochastic network connecting all possible outcomes. The paths are shaped from the virtual paths of the system, and the corresponding probabilities are determined by the measuring devices employed. If the measurements are highly accurate, the virtual paths become "real", and the mean value of a quantity (a functional) is directly related to the frequencies with which the paths are travelled. If the measurements are highly inaccurate, the mean (weak) values are expressed in terms of the relative probability amplitudes. For pre- and post-selected systems these can be made to take arbitrary values, depending on the chosen transition. This is a direct consequence of the uncertainty principle, which forbids one to distinguish between interfering alternatives, while leaving the interference between them intact.


I. INTRODUCTION
A sequence of measurements made on a quantum system leads to a number of random outcomes, observed by an experimentalist. In this sense, the composite system+meter(s) may be seen as following a path across a stochastic network connecting all possible measurement results. The same can be said about a purely classical system, and the quantum nature of the experiment reveals itself in the manner in which the transition probabilities are calculated. While a classical theory allows for non-invasive observations, a quantum meter plays an active role in "shaping" the network. In the simplest case of von Neumann (vN) measurements [1], the transition probabilities, as well as the observed events themselves, are determined by the virtual paths available to the studied system, the nature of the measured quantity, and the accuracy of the meter(s).
Perhaps the most studied network is produced by an accurate vN measurement of a projector on a state |ψ⟩ (preparation, or pre-selection), followed by a not necessarily accurate vN measurement of an operator Â, followed by an accurate measurement of a projector on a final state |φ⟩ (post-selection) [2]. Post-selection may succeed or fail, and a sub-ensemble is selected by collecting the statistics only in the case it is successful. If Â has N distinct eigenvalues, there are N virtual paths connecting the two states, for which quantum mechanics provides probability amplitudes, but not probabilities, if the system is considered in isolation.
If the measurement of Â is accurate (ideal, strong), the meter destroys the interference between the paths, which can now be equipped with probabilities (the squared moduli of the corresponding amplitudes), and become "real". Each time the experiment is repeated, the pointer points at one of the eigenvalues of the operator Â, whose mean value is then calculated using the relative frequencies of these occurrences. The probability to end up in |φ⟩ is not what it would be, had the meter not perturbed the system's evolution. More controversial is the opposite limit, where the intermediate measurement is made inaccurate (weak), in order to avoid the said perturbation. The weakness of the measurement inevitably causes the meter's readings to be spread, covering the whole real axis in the limit where the perturbation is sent to zero [2]. This gives an operational meaning to the uncertainty principle, which states that the value of Â in a superposition of its eigenstates must be indeterminate. Indeed, trying to measure it without destroying the superposition could yield any value at all.
Much of the controversy resides, however, in the interpretation of the mean reading of a weak meter. Since the publication of [2], it has given rise to a number of intriguing concepts. These include "negative kinetic energy" [3], "negative number of particles" [4], "having one particle in several places simultaneously" [5], "photons disembodied from their polarisation" [6], "electrons with disembodied charge and mass" [6], and "an atom with the internal energy disembodied from the mass" [6], all supported by the "evidence" of weak measurements.
Recently we have shown [7] that the above statements amount to over-interpretations, easily dismissed once the weak values are identified with the transition amplitudes on the paths connecting the initial and final states, and are not seen as the values of the measured quantities. In this paper we will continue to look at the problem of consecutive quantum measurements, and the classical statistical ensembles produced whenever a meter, or meters, interact with the observed quantum system. We will pay special attention to the importance of the quantum uncertainty principle, and emphasise the role quantum interference plays in the loss of information about the system's past. Throughout the paper, we will use the simplest case of a two-level system as an example, and refer the reader to Refs. [8] - [13] for a more general analysis.
The rest of the paper is organised as follows. In Sect. II we compare two of the best known formulations of the uncertainty principle. In Sect. III we briefly describe the work of a von Neumann meter. In Section IV we introduce a simple classical toy model, to be compared with its quantum counterparts later. Section V returns to the simplest quantum network produced by a vN measurement of a quantity, with pre- and post-selection. In Section VI we use our approach to describe a measurement of the difference of two physical quantities, in such a way that the values themselves remain indeterminate. Section VII sets the rules for combining virtual paths. Accurate (strong) and inaccurate (weak) limits of the measurements are analysed in Sections VIII and IX, respectively. In Sect. X the uncertainty principle is used to explain the properties of the weak values. Common misconceptions related to the issue are discussed in Sect. XI. Section XII contains our conclusions.

II. QUANTUM UNCERTAINTY PRINCIPLE(S)
There are several formulations of the uncertainty principle (UP) covering various aspects of the problem (for a recent review see [14]). Perhaps best known is the one relating the respective standard deviations, σ_A and σ_B, of two variables, represented by non-commuting operators Â and B̂, measured in the same state. The so-called Robertson uncertainty relation [15] reads

σ_A σ_B ≥ |⟨[Â, B̂]⟩|/2,   (1)

where σ_X² ≡ ⟨X̂²⟩ − ⟨X̂⟩², [Â, B̂] ≡ ÂB̂ − B̂Â is the commutator, and the angular brackets indicate averaging in the pure state |Ψ⟩ the system is supposed to be in, ⟨X̂⟩ = ⟨Ψ|X̂|Ψ⟩.
Equation (1) is often interpreted as an indication of the fact that a measurement ofÂ must disturb a measurement ofB, and vice versa.
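Eq. (1) is easily checked numerically. The following is a minimal sketch, in which Â and B̂ are taken to be two spin components and the state |Ψ⟩ is an illustrative assumption, not a choice made in the text:

```python
import numpy as np

# Numerical check of Robertson's relation sigma_A sigma_B >= |<[A,B]>|/2
# for two spin components; the state |Psi> below is an illustrative assumption.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
psi = np.array([0.6, 0.8j])              # normalised: 0.36 + 0.64 = 1

def std(op, state):
    m = state.conj() @ op @ state        # <X>
    m2 = state.conj() @ op @ op @ state  # <X^2>
    return np.sqrt((m2 - m**2).real)

lhs = std(sx, psi) * std(sy, psi)
rhs = 0.5 * abs(psi.conj() @ (sx @ sy - sy @ sx) @ psi)
print(lhs, rhs)                          # lhs >= rhs; for this state they coincide
```

For this particular state the inequality is saturated, which makes it a convenient sanity check.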
A somewhat different approach to the UP can be found in [16], where one reads: Any determination of the alternative taken by a process capable of following more than one alternative destroys the interference between alternatives. This principle is complemented by the rule for assigning probabilities to alternative scenarios [17]: When an event can occur in several alternative ways, the probability amplitude for the event is the sum of the probability amplitudes for each way considered separately. If an experiment is performed which is capable of determining when one or another alternative is actually taken, the probability of the event is the sum of the probabilities of each alternative.
From the above statements, to which we will refer as the "Feynman's uncertainty principle", one may conclude that interfering alternatives cannot be told apart and must, therefore, form a single indivisible pathway [24]. Another corollary to the principle is that interference must be destroyed by a physical agent, e.g., a meter coupled to the observed system. Thus, construction of probabilities by means of squaring the moduli of the corresponding amplitudes, pre-supposes the existence of a suitable meter, as well as the fact that the meter has already been deployed [13].
It is Feynman's UP to which we will appeal in what follows. Despite the obvious difference between the formulations, Feynman's principle is not all that different from the UP expressed by Eq. (1). Consider, for example, a system with a zero Hamiltonian, Ĥ = 0, prepared at t = 0 in a state |ψ⟩. An accurate measurement of B̂ at t = T yields an eigenvalue b_{i_B} with the probability p(i_B|ψ) = |⟨i_B|ψ⟩|², where |i_B⟩ is the corresponding eigenstate.
Suppose we also want to learn something about the value of Â at t = T/2. The system must be, in some sense, in one of the eigenstates of Â. To see in what sense, precisely, we decompose the transition amplitude with the help of the eigenstates |i_A⟩ of Â,

⟨i_B|ψ⟩ = Σ_{i_A} ⟨i_B|i_A⟩⟨i_A|ψ⟩.

There are now different ways (paths, pathways, routes) to reach the final state by "passing through" a state |i_A⟩. Without a meter, we cannot determine which of them is actually taken.
Thus, despite the abundance of virtual paths, there is only a single "real" one, "travelled" with the probability p(i_B|ψ). If, on the other hand, we employ a meter, capable of telling us which of the eigenvalues a_{i_A} has occurred at t = T/2, the virtual paths will become real, each travelled with the probability

p(i_B, i_A|ψ) = |⟨i_B|i_A⟩⟨i_A|ψ⟩|².

We also find that the measurement of Â has disturbed the result of measuring B̂, since

Σ_{i_A} |⟨i_B|i_A⟩⟨i_A|ψ⟩|² ≠ |Σ_{i_A} ⟨i_B|i_A⟩⟨i_A|ψ⟩|² = p(i_B|ψ).

Quantifying the disturbance would lead to estimates similar to (1). We will not follow this matter any further, and in the next Section discuss possible realisations of quantum measurements.
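The disturbance is easy to exhibit numerically. Below is a minimal sketch for a two-level system; the states |ψ⟩ and |i_B⟩ are illustrative assumptions:

```python
import numpy as np

# Two virtual paths |i_B> <- |i_A> <- |psi>: the probability p(i_B) with and
# without an accurate intermediate measurement of A. States are illustrative.
psi = np.array([0.6, 0.8])                     # written in the eigenbasis of A
theta = np.pi / 5
iB = np.array([np.cos(theta), np.sin(theta)])  # one eigenstate of B

amps = iB.conj() * psi                         # <i_B|i_A><i_A|psi>, i_A = 1, 2
p_no_meter = abs(amps.sum())**2                # amplitudes interfere first
p_with_meter = (abs(amps)**2).sum()            # paths made real: probabilities add
print(p_no_meter, p_with_meter)                # the meter has changed p(i_B)
```

The two numbers differ whenever both path amplitudes are non-zero, i.e., whenever there was interference for the meter to destroy.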

III. VON NEUMANN MEASUREMENTS AND METERS
In the following we will often refer to von Neumann (vN) measurements [1], whose definition we revisit in this Section. A vN meter, designed to measure a variable Â of a system governed by a Hamiltonian Ĥ, consists of a pointer (e.g., a massive particle on a line) whose position is ξ, and which is coupled to the observed system via (ħ = 1)

Ĥ_int = −iβ(t)Â∂_ξ.

If the value of Â is measured at some t = t₀, the function β(t) is chosen to be β(t) = δ(t − t₀), where δ(z) is the Dirac delta. We will assume that Â has discrete eigenvalues a_i, and that |i⟩ are the corresponding eigenstates, Â = Σ_i |i⟩a_i⟨i|. Prior to the interaction, at t = 0, the system is described by a state |ψ⟩ = Σ_i c_i(0)|i⟩, and the pointer is in the state |M⟩, such that

⟨ξ|M⟩ = G(ξ)   (4)

is a real function (e.g., a Gaussian), peaked around ξ = 0, and rapidly tending to zero as ξ → ±∞ [18]. Immediately after the interaction has taken place, the pointer becomes entangled with the system, and their wave function is given by

⟨ξ|Ψ(t₀ + 0)⟩ = Σ_i c_i(t₀) G(ξ − a_i)|i⟩,

where c_i(t₀) = ⟨i| exp(−iĤt₀)|ψ⟩. We assume further that the final position of the pointer can be determined precisely, leaving out the fundamental question of how exactly this is done, and whether a collapse of the wave function has occurred [1]. In one way or another, after the interaction has taken place, we may "look" at the pointer, and find a meter's reading ξ. Perhaps the best known practical realisation of a vN measurement is the Stern-Gerlach apparatus (see, for example, [2] and [19]). Other vN meters can be constructed by using the spin-orbit interaction [20]-[22], or Bose-Einstein condensates trapped in double-well potentials [23].
If nothing else is done to the system, the probability (density) to find a reading ξ₀ is (tr[...] stands for the trace of the operator inside the brackets)

P(ξ₀) = tr[⟨ξ₀|Ψ(T)⟩⟨Ψ(T)|ξ₀⟩],   (6)

where |Ψ(t)⟩ = exp[−iĤ(t − t₀)]|Ψ(t₀ + 0)⟩. Alternatively, at some t = T > t₀ we may try to find (post-select) the system in some state |φ⟩, and keep the meter's readings only if the post-selection is successful. With this additional condition the probability to find a reading ξ₀ becomes

P(ξ₀, φ) = |(⟨φ| ⊗ ⟨ξ₀|)|Ψ(T)⟩|².   (7)

Thus, apart from the preparation (pre-selection) of the system in a state |ψ⟩, two other events have occurred: the pointer was found pointing at ξ₀, and the system was later found in a final state |φ⟩. Quantum mechanics provides the probabilities of such occurrences by means of expressions similar to Eqs. (6) and (7). We can imagine, for example, performing more measurements between t = 0 and t = T, thus creating a network of scenarios, each of which would occur with a certain frequency if the experiment is repeated many times. Before discussing such "quantum" networks, we briefly turn to their classical counterparts.
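For a two-level system with Ĥ = 0, both reading densities are simple sums over the shifted pointer states. The sketch below assumes a Gaussian G(ξ) and illustrative states; with a narrow G, Eq. (6) integrates to unity and Eq. (7) to the probability of successful post-selection:

```python
import numpy as np

# Pointer reading densities of Eqs. (6) and (7) for a two-level system with
# H = 0; the Gaussian G and all state choices are illustrative assumptions.
a = np.array([0.0, 1.0])                 # eigenvalues of A
c = np.array([0.6, 0.8])                 # c_i = <i|psi>
phi = np.array([1.0, 1.0]) / np.sqrt(2)  # post-selected state

xi = np.linspace(-5.0, 6.0, 2201)
dx = xi[1] - xi[0]
w = 0.05                                 # narrow G: an accurate meter

def G(x):
    return (2*np.pi*w**2)**-0.25 * np.exp(-x**2 / (4*w**2))

# Eq. (6): no post-selection -- probabilities of the shifted pointers add
P = (abs(c)**2 * G(xi[:, None] - a)**2).sum(axis=1)

# Eq. (7): post-selection on |phi> -- amplitudes are summed before squaring
P_post = abs((phi.conj() * c * G(xi[:, None] - a)).sum(axis=1))**2

print(P.sum() * dx, P_post.sum() * dx)   # total: 1 and the post-selected fraction
```

With the narrow pointer state the cross terms are exponentially small, so the post-selected norm reduces to Σ_i |⟨φ|i⟩c_i|².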

IV. CLASSICAL NETWORKS AND FUNCTIONALS
We start with a simple toy model which, we hope, will be helpful for the discussion of the following Sections. Suppose one has at one's disposal a kit containing purely classical elements: little balls, tubes and connectors, which can be joined together. A connector x has two entrances, and w_x(i|j) is the probability to enter via the inlet j, and subsequently leave via the outlet i. If one of the outlets is blocked, the probability to exit via the remaining one is unity for both entrances. Connecting the elements as shown in Fig. 1, one obtains a stochastic network, in which each path is travelled with the product of the probabilities of the connectors it passes. For example, we may limit our attention only to the cases where the ball goes to b₁. As a result of this "post-selection" there are only two paths, taking the values a_i, i = 1, 2, and the mean value, evaluated over many runs by counting the times the ball passes through a₁ and a₂, is the sum of the a_i weighted by the relative frequencies of the two paths. Alternatively, we may focus only on those cases where the ball ends up in f₁, and ask for the difference between the values of b_i and a_j, b_i − a_j, which is recorded for each run. We have invoked the simple toy model in Fig. 1 in order to emphasise its resemblances to and differences from the quantum case outlined at the end of the previous Section. As in the classical case, a set of vN measurements creates sequences of events occurring with certain probabilities, which can be seen as classical pathways "travelled" with certain frequencies, if the experiment is repeated many times.
The quantum origin of the experiment becomes evident in the way the probabilities of the events are calculated, as well as in the nature of these events. A meter recombines and divides the system's virtual paths, endowed with probability amplitudes only, thereby producing "real" ones, to which probabilities can be assigned. In particular, two interfering paths are combined into a single one in a much stronger sense than in the classical case.
For example, in Fig. 1b the paths {1} and {4} are combined into {1 + 4}, since we chose to be interested only in the difference of b_i and a_i, and not in their actual values. We will show that in the quantum case, these actual values may not even exist. Finally, in the classical case, it is understood that two different setups built from the elements of the same kit can be completely different, and need not share any statistical properties. In the quantum case this is less clear, and we will return to the problem of "contextuality" later.

V. MEASURING A QUANTITY WITH PRE- AND POST-SELECTION
As the first example, we consider a vN measurement of an operator Â = Σ_{i=1}^2 |i⟩a_i⟨i|, for a two-level system, pre- and post-selected in the states |ψ⟩ and |φ⟩, at t = 0 and t = T, respectively. For simplicity we put the system's Hamiltonian to zero, Ĥ = 0, so that nothing happens between t = 0 and t = t₀, as well as between t = t₀ and t = T. Our purpose is to show that this creates a classical statistical ensemble not too dissimilar from the one built from the classical elements in the previous Section. To see how this is done, we consider the two virtual paths connecting |ψ⟩ and |φ⟩, {i}, i = 1, 2. These are

{i}:  |φ⟩ ← |i⟩ ← |ψ⟩,

travelled with the amplitudes

A[i] = ⟨φ|i⟩⟨i|ψ⟩,

on which the functional F₁ takes the values F₁[i] = a_i. Without a meter, we may define an amplitude distribution Φ(f) for the value f taken by F₁, but cannot say what is the probability of having, e.g., the value a₁.
With a meter of Sect. III coupled to the system, the final pointer's state |M⟩ is easily found to be given by

⟨ξ|M⟩ = Σ_{i=1}^2 A[i] G(ξ − a_i),

and the meter reads ξ₀ with an (unnormalised) probability

P(ξ₀) = |Σ_{i=1}^2 A[i] G(ξ₀ − a_i)|².   (12)

Equation (12) may be interpreted in the following way. By turning the meter on, we have created a continuum of real paths, all labelled by ξ. These connect the three observed events, system in |φ⟩ ← pointer at ξ₀ ← system in |ψ⟩, assuming the meter is read before the post-selection is performed.
The range of possible scenarios is determined by the width of the initial pointer's state G(ξ) in Eq. (4), which also determines the accuracy of the measurement. For example, if G(ξ) is chosen to be

G(ξ) = Δξ^{−1/2} for |ξ| ≤ Δξ/2, and 0 otherwise,   (13)

the possible values of ξ would lie inside the interval [a₁ − Δξ/2, a₂ + Δξ/2], assuming a₂ > a₁. The path labelled ξ is travelled with a probability (density) P(ξ). Far from passively observing the motion of the system, a meter actively participates in shaping the observed statistical ensemble. Accordingly, the probabilities in Eq. (12) contain both the amplitudes A[i], which characterise the unobserved system, and the meter's own G(ξ).
In the general case we may learn little about the quantity of interest, Â. Instead of extracting its value, we can only conclude that if the meter reading is ξ₀, the system's state after interaction with the meter was [25]

|Ψ(ξ₀)⟩ ∝ Σ_{i=1}^2 G(ξ₀ − a_i)⟨i|ψ⟩|i⟩,

for which the precise value of Â remains indeterminate. It is, however, possible to say that the measurement "drives" the system through a state |Ψ(ξ₀)⟩, with a probability P(ξ₀). To see that this is, indeed, the case, we could make an accurate measurement of the projector onto |Ψ(ξ)⟩ just after t₀. Repeating the procedure many times will show that whenever the reading of the first meter lies in a narrow interval around ξ₀, the additional projection will also succeed. Thus, while every attempt to measure Â leads to a particular stochastic network, not every measurement allows us to determine the value of Â. We will return to the question of accuracy shortly, after considering yet another example.
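The dependence of the reading statistics on the meter's accuracy can be sketched numerically. Assuming a Gaussian G(ξ) and illustrative pre- and post-selected states, the mean of the density (12) is computed below for a narrow and for a very broad pointer:

```python
import numpy as np

# Reading density P(xi) = |A[1]G(xi-a_1) + A[2]G(xi-a_2)|^2 of Eq. (12) for an
# accurate and a highly inaccurate Gaussian meter; states and eigenvalues are
# illustrative assumptions.
psi = np.array([0.6, 0.8])
phi = np.array([1.0, -1.0]) / np.sqrt(2)
a = np.array([0.0, 1.0])
A = phi.conj() * psi                   # path amplitudes A[i] = <phi|i><i|psi>

xi = np.linspace(-80.0, 80.0, 32001)
dx = xi[1] - xi[0]

def G(x, w):
    return (2*np.pi*w**2)**-0.25 * np.exp(-x**2 / (4*w**2))

means = {}
for w in (0.05, 10.0):                 # strong vs weak measurement
    P = abs((A * G(xi[:, None] - a, w)).sum(axis=1))**2
    means[w] = (xi * P).sum() / P.sum()
print(means)                           # ~0.64 (strong) vs ~3.9 (broad meter)
```

For the narrow pointer the mean is the frequency-weighted average of the eigenvalues; for the broad one it drifts far outside [a₁, a₂], anticipating the "weak values" discussed later.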

VI. THE DIFFERENCE OF TWO QUANTITIES
Suppose that, as in the previous Section, we have a two-level system pre- and post-selected in the states |ψ⟩ and |φ⟩ at t = 0 and t = T, respectively. Now we want to learn something about the difference of the quantities represented by operators Â = Σ_{i=1}^2 |i_A⟩a_i⟨i_A| and B̂ = Σ_{i=1}^2 |i_B⟩b_i⟨i_B|, which may or may not commute, at t = t₁ and t = t₂, such that 0 < t₁ < t₂ < T. [Classically, we could consider, for example, the x- and y-components of the momentum, and ask for p_y(t₂) − p_x(t₁).] Quantally, there is more than one way to approach this task.
(i) We could employ two vN meters, set to measureÂ andB separately, and evaluate the difference of their readings.
(ii) We could also construct a Hermitian operator formally representing the difference, and use a single vN meter to measure it at t = t₁. As discussed in [26], a classical analog of this procedure is a predictive measurement, which uses the fact that knowing the system's coordinates and momenta at t = t₁ allows one to restore the system's subsequent trajectory and, with it, all future values of all quantities.
(iii) A procedure unique to quantum mechanics allows one to evaluate the difference of interest, leaving the actual values ofÂ andB indeterminate.
In general, all three methods give different results [26], and here we are only interested in option (iii). With this in mind, we couple a vN pointer (position ξ) to Â and B̂ at t = t₁ and t = t₂, respectively, so that the interaction with the meter is given by

Ĥ_int = iδ(t − t₁)Â∂_ξ − iδ(t − t₂)B̂∂_ξ,   (15)

and the pointer's initial state G(ξ) has, as before, a Gaussian shape, centred at ξ = 0. The measurement works as follows: after the first coupling the pointer is shifted by one of the eigenvalues of Â, albeit in the opposite sense, and after the second one by an eigenvalue of B̂. The resulting shift is, thus, the difference of the two values. This is not exactly the scheme used by von Neumann [1], and we will call this meter von Neumann-like [10].
Putting, as before, Ĥ = 0, we therefore have the four virtual paths

{i}:  |φ⟩ ← |b_k⟩ ← |a_j⟩ ← |ψ⟩,   j, k = 1, 2,

travelled with the amplitudes

A[i] = ⟨φ|b_k⟩⟨b_k|a_j⟩⟨a_j|ψ⟩,

where the index i = 1, ..., 4 labels the pairs (j, k), and also the condition Σ_{i=1}^4 A[i] = ⟨φ|ψ⟩. The four paths are easily identified in the diagram in Fig. 3a.
The difference between B̂ and Â is a functional F₂[path] defined on the four paths, where it may take four different values,

F₂[i] = b_k − a_j.

Let Â and B̂ represent components of the spin (without a factor of 1/2) along two different axes, so that a₁ = b₁ = −1, and a₂ = b₂ = 1. Now F₂ takes three different values, −2, 0 and 2. With no meter employed, we cannot say what is the chance of having, e.g., F₂ = 2, but, as before, may write down an amplitude distribution Φ(f) for the value f of the functional,

Φ(f) = Σ_{i=1}^4 δ(f − F₂[i]) A[i],

normalised by the condition ∫Φ(f)df = Σ_{i=1}^4 A[i] = ⟨φ|ψ⟩. As in the previous Section, this so far purely cosmetic re-arrangement of the virtual paths is useful for describing the effects of a meter, once it is coupled to the system as prescribed by Eq. (15). After a successful post-selection the pointer is in a pure state |M⟩, and it is a simple matter to check that

⟨ξ|M⟩ = Σ_{i=1}^4 A[i] G(ξ − F₂[i]).   (18)

We can now look at Eq. (18) from the point of view adopted in the previous Section. As in Sect. IV, by external manipulation (connecting the system to a meter) we have created a classical statistical ensemble, where the system can reach the final state via a continuum of real paths, labelled by the variable ξ. Again, the meter plays an active role, and the (unnormalised) probability to travel the ξ₀-th path is determined by the shape of G, which is the property of the pointer, as well as by the amplitudes of the virtual paths, which represent the system "on its own",

P(ξ₀) = |Σ_{i=1}^4 A[i] G(ξ₀ − F₂[i])|².   (19)

With this, the mean reading of the meter over many trials is readily seen to be given by

⟨ξ⟩ = ∫ξ P(ξ)dξ / ∫P(ξ)dξ.

As in the previous example, the value of F₂ remains indeterminate if there is more than one term in the r.h.s. of Eq. (19). To establish a good correlation between the pointer readings ξ and the values of F₂ we must, therefore, try to avoid the overlap between different G's in (19). We will do so after discussing the general rules for combining virtual paths in the next Section.
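The four path amplitudes, the values of F₂, and the sum rule Σ_i A[i] = ⟨φ|ψ⟩ can be checked in a few lines. In the sketch below Â and B̂ are taken to be the spin components σ_z and σ_x, and the pre- and post-selected states are illustrative assumptions:

```python
import numpy as np

# Four virtual paths for measuring B(t2) - A(t1) on a qubit; A and B are the
# spin components sigma_z and sigma_x (a_i, b_i = +/-1). States below are
# illustrative assumptions.
psi = np.array([0.8, 0.6], dtype=complex)   # in the sigma_z eigenbasis
phi = np.array([0.6, 0.8], dtype=complex)
az = np.array([-1.0, 1.0])                  # eigenvalues of A = sigma_z
bx = np.array([-1.0, 1.0])                  # eigenvalues of B = sigma_x
# eigenstates of sigma_x written in the z basis
bvec = [np.array([1.0, -1.0]) / np.sqrt(2), np.array([1.0, 1.0]) / np.sqrt(2)]

paths, F2 = [], []
for j in range(2):           # eigenstate of A at t1
    for k in range(2):       # eigenstate of B at t2
        amp = (phi.conj() @ bvec[k]) * bvec[k].conj()[j] * psi[j]
        paths.append(amp)    # A[i] = <phi|b_k><b_k|a_j><a_j|psi>
        F2.append(bx[k] - az[j])

print(sorted(set(F2)))       # F2 takes only the values -2, 0, 2
print(sum(paths), phi.conj() @ psi)   # sum rule: both equal <phi|psi>
```

The four amplitudes collapse onto three values of F₂, exactly as in the text, and their sum reproduces the bare transition amplitude ⟨φ|ψ⟩.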

VII. SUPERPOSITION PRINCIPLE FOR VIRTUAL PATHS
Feynman's uncertainty principle, cited in Sect. II, implies that virtual paths can be combined and recombined into new sets of paths, with interference being responsible for the loss of information about certain aspects of quantum motion. Next we briefly discuss the rules for creating such combinations.
We start with the well known case of changing the basis in which a state of the system can be expanded at a given time. Let a two-state system be in the state |Ψ⟩ = a₁|ϕ₁⟩ + a₂|ϕ₂⟩, where ⟨ϕ_m|ϕ_{m′}⟩ = δ_{mm′}, and a_m is the amplitude with which |ϕ_m⟩ enters |Ψ⟩. Consider a different basis, |φ_i⟩, ⟨φ_i|φ_{i′}⟩ = δ_{ii′}, which is related to the first basis by a unitary transformation, |ϕ_i⟩ = u_{i1}|φ₁⟩ + u_{i2}|φ₂⟩. Now the amplitude b_i, with which |φ_i⟩ enters |Ψ⟩, is b_i = ⟨φ_i|Ψ⟩ = Σ_j u_{ji} a_j, i = 1, 2. These equations can be read as follows: if a new basis state is a weighted sum of the old ones, the corresponding amplitude is the weighted sum of the corresponding amplitudes.
The same rules can be applied to virtual paths, and we can describe the action of the meter in these terms: the meter combines the virtual paths on which the measured functional takes the same value f into new pathways, travelled with the amplitudes Φ(f).

VIII. HIGHLY ACCURATE (STRONG) MEASUREMENTS

That the width of G(ξ), Δξ, determines the resolution, or the accuracy, of a measurement is readily seen if G(ξ) is chosen to have the form of the "rectangular window" (13). Inserting (13) into Eq. (18) yields

⟨ξ₀|M⟩ = Δξ^{−1/2} Σ_{|ξ₀ − F₂[i]| ≤ Δξ/2} A[i],

and we can be sure that if the reading is ξ₀, the value of the measured functional, f, is no smaller than ξ₀ − Δξ/2, and no greater than ξ₀ + Δξ/2. We are, however, "banned" from looking inside the interval [ξ₀ − Δξ/2, ξ₀ + Δξ/2], where all information about f is lost to the interference which the meter has failed to destroy. Next we look at the question of accuracy in more detail.
Since, by assumption, the pointer's final position is evaluated accurately, it is the uncertainty of its initial setting that determines the precision of the measurement. For a highly accurate measurement we should choose Δξ as small as possible, thus requiring

Δξ ≪ min_{i≠j} |F[i] − F[j]|.

If so, in the example of Sect. V, Eq. (12) reduces to

P(ξ₀) = Σ_{i=1}^2 |A[i]|² G²(ξ₀ − a_i),

so that the probability to travel the i-th path, p[i] = |A[i]|² = |⟨φ|i⟩⟨i|ψ⟩|², is not postulated, but is a consequence of the change in the physical state of the system, caused by its interaction with the meter [13].
Recalling that whenever post-selection in the state |φ⟩ fails, the system ends up in its orthogonal complement, |φ⊥⟩, |φ⟩⟨φ| + |φ⊥⟩⟨φ⊥| = 1, we can add two new paths, {3} and {4}, connecting |ψ⟩ with |φ⊥⟩, and travelled with the probabilities

p[3] = |⟨φ⊥|1⟩⟨1|ψ⟩|²  and  p[4] = |⟨φ⊥|2⟩⟨2|ψ⟩|²,

respectively. We now have an exact equivalent of the (upper part of the) model in Fig. 1.
The only difference is that this time it is built from quantum "elements", rather than from pieces of a classical kit. The rest is a simple exercise in classical statistics. In both cases we can calculate the averages of the functional F₁. For example, conditioned on successful post-selection in |φ⟩, the mean value of F₁ is given by

⟨F₁⟩ = (a₁p[1] + a₂p[2]) / (p[1] + p[2]).   (28)

We can also choose the operator Â to be the projector on the state |1⟩, so that its eigenvalues are 1 and 0, and we have F₁[1] = 1 and F₁[2] = 0. Now the mean value of F₁ yields precisely the probability to travel the first path,

⟨F₁⟩ = p[1] / (p[1] + p[2]).   (29)
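The strong-limit statistics of Eqs. (28) and (29) are just weighted averages over the Born probabilities p[i] = |A[i]|². A minimal sketch, with illustrative states and eigenvalues:

```python
import numpy as np

# Strong (accurate) limit: the two paths acquire probabilities
# p[i] = |<phi|i><i|psi>|^2, and the conditional mean of F1 follows Eq. (28).
# States and eigenvalues below are illustrative assumptions.
psi = np.array([0.6, 0.8])
phi = np.array([1.0, 1.0]) / np.sqrt(2)
a = np.array([3.0, -1.0])          # eigenvalues of A

A = phi.conj() * psi               # path amplitudes A[i] = <phi|i><i|psi>
p = abs(A)**2                      # Born probabilities of the two real paths
mean_F1 = (a * p).sum() / p.sum()  # Eq. (28)

# choosing A to be the projector |1><1| (eigenvalues 1 and 0) yields the
# relative frequency of the first path, Eq. (29)
proj_mean = (np.array([1.0, 0.0]) * p).sum() / p.sum()
print(mean_F1, proj_mean)
```

Note that both averages necessarily lie inside the range of the corresponding eigenvalues, in sharp contrast with the weak case discussed below.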
Classically, we would arrive at the same result by writing down 1 whenever the ball passes through a₁, and 0 whenever it passes through a₂. Adding three more paths connecting |ψ⟩ with |φ⊥⟩, we obtain a classical stochastic system shown in Fig. 3, similar to that built from the tubes and connectors in Fig. 1.
To conclude, we return to the remark made at the end of Sect. IV. Joining the connectors and tubes in different ways, one may construct completely different statistical networks, which have nothing in common, except their constituent parts. In a similar way, by applying different measurement schemes, and measuring different operators to different accuracies, one may engineer equally different stochastic ensembles, having nothing in common except their parent quantum system. Ignoring this fact may lead to various counterfactual paradoxes, some of which disappear once the above remark is taken into account (for more detail see [7] and the Refs. therein).

IX. MOST INACCURATE MEASUREMENTS AND THE MEANING OF "INDETERMINATE"
Next we look at what happens in the opposite limit,

Δξ → ∞.   (30)

There are several reasons for doing so. Firstly, as in the case of the accurate limit considered in the previous Section, the result is universal in the sense that it no longer depends on the particular choice of G(ξ), and appears to represent some "intrinsic" property of the studied system. Secondly, observation of the system's history changes, in general, the probability to end up in the state |φ⟩, which, when an accurate meter is employed, is different from that without a meter, Σ_i |A[i]|² ≠ |Σ_i A[i]|². This is a well known example of "one measurement perturbing the other", and it is tempting to try to get rid of the perturbation, and to see what will then occur.
The perturbation disappears if, and only if, all G's in Eq. (18) have approximately the same shape, G(ξ − F₂[i]) ≈ G(ξ). This can only happen in the limit (30), when G(ξ) is made very broad. One immediate consequence of taking the limit (30) is that the range of possible meter's readings widens to occupy the whole real axis. This gives an operational meaning to the word "indeterminate" we have often used above.
If one wishes to keep the interference between paths intact, he/she must choose the meter so inaccurate that its reading will be arbitrary, leaving the researcher completely ignorant of the value of the functional in question.
Since it appears that not much can be learnt from an individual inaccurate measurement, we follow [2] and move on to look at the average meter reading, ⟨ξ⟩, obtained as the number of trials N tends to infinity. Now ⟨ξ⟩ must be expressed in terms of the path amplitudes A[i]. The authors of [2] have shown that in the case of Sect. V the correct expression is

⟨ξ⟩ = Re Σ_{i=1}^2 a_i α_i,   (35)

where

α_i = A[i] / Σ_j A[j]

is the relative amplitude for the path number i. In the special case of Â being the projector onto the state |1⟩, Â = |1⟩⟨1|, we have

⟨ξ⟩ = Re α₁.

In [10] the result was extended to any real functional for which, in the absence of a meter, the amplitude for taking a value f is Φ(f),

⟨ξ⟩ = Re ∫f Φ(f)df / ∫Φ(f)df.

Similarly, the average reading of a very inaccurate meter measuring the difference of B̂(t₂) and Â(t₁) in the example of Sect. VI is given by

⟨ξ⟩ = Re Σ_{i=1}^4 F₂[i] α_i.   (37)

In general, an average reading of a very inaccurate meter is expressed in terms of the relative amplitudes for the virtual paths connecting the initial and final states. Equations (35) and (37) bear an uncanny resemblance to Eqs. (28) and (29), in which the probabilities p[i] have been replaced by the corresponding probability amplitudes A[i], and the real part was taken to make the result real valued. This may be taken as a hint that the quantity under the Re sign, the weak value of a functional F, has the same universal importance as its accurate ("strong") counterpart in Eqs. (28) and (29). In the next Section we will show that this cannot be the case, and for a good reason.
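The weak averages are elementary to compute once the relative amplitudes α_i are known. The sketch below uses illustrative states and eigenvalues:

```python
import numpy as np

# Weak values as relative path amplitudes: <xi> = Re sum_i F[i] alpha_i,
# alpha_i = A[i] / sum_j A[j]. States below are illustrative assumptions.
psi = np.array([0.6, 0.8], dtype=complex)
phi = np.array([1.0, -1.0], dtype=complex) / np.sqrt(2)
a = np.array([0.0, 1.0])              # eigenvalues of A

A = phi.conj() * psi                  # A[i] = <phi|i><i|psi>
alpha = A / A.sum()                   # relative amplitudes, sum to one

weak_A = (a * alpha).sum().real       # weak mean of A, Eq. (35)
weak_P1 = alpha[0].real               # weak value of the projector |1><1|
print(weak_A, weak_P1)
```

For these states the projector's weak value is −3, well outside the interval [0, 1] spanned by its eigenvalues, while the relative amplitudes still sum to unity.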

X. WEAK VALUES AND THE UNCERTAINTY PRINCIPLE
As discussed in Sect. VIII, an accurately measured mean value of the projector onto an intermediate state gives the relative frequency with which the path created by the meter is travelled, and so serves to distinguish it from the other real paths present. What would the result of such a measurement mean in the most inaccurate limit, where the path in question exists only as a virtual one? The uncertainty principle states that two virtual paths cannot be distinguished by physical means inside the single real pathway they are combined into.
There appears to be a logical rather than a mathematical difficulty. To address it, consider a purely classical conundrum. Suppose there are two drops of water on a glass surface, and we put a grain of sand in drop number one. The drops move closer to each other, and coalesce. In which of the two drops is the sand grain now? If asked, one would certainly reject the question as meaningless, given the new circumstances. But if an answer is required under pain of death, the individual might reply "inside the water drop number 100, for all I care". The answer to a meaningless question may be anything at all.
The above passage, illustrated in Fig. 3, is not entirely out of place in the present discussion. With two real paths created by an accurate meter, in each trial the pointer would point at zero or one, indicating the realisation of one of the two real possibilities. If the meter is highly inaccurate, and the interference between the two virtual paths is not destroyed, the pointer is equally likely to point at any number. If asked "was the first or the second virtual path travelled in this transition?", the meter, unable to politely decline to answer a meaningless question, points at, say, 100. Importantly, this behaviour is a necessity, rather than a paradox. It is the uncertainty principle that makes the question meaningless, and any other kind of answer would lead to a conflict with the principle. Suppose, for example, that several researchers use vN meters to measure the z-component of a spin, σ_z, whose eigenvalues are −1 and 1. If the measurements are accurate, the researchers can all agree that the quantity they measure may take only these two values, and use this knowledge to develop theories of spin-1/2 particles, if needed. If the measurement is weak, they can only agree that the spin they measure is able to take any real value at all.
Suppose further that the equipment is such that each researcher only has access to the average values, and not to the distributions. If the measurements are accurate, for all choices of |ψ⟩ and |φ⟩ the average ⟨ξ⟩ would lie between −1 and 1. This gives the knowledge of a useful fact: σ_z represents a quantity whose value cannot be larger than one, or less than minus one, although its discrete nature must be learned from some other source. If the measurement is weak, it can be shown [28] that there is always a choice of |ψ⟩ and |φ⟩ giving ⟨ξ⟩ any real value, positive, negative or zero. Thus, after trying all possible |ψ⟩ and |φ⟩, the experimenters will remain in a state of maximal ignorance regarding the properties of the measured quantity.
The mathematical trick which allows the UP to conceal the information about individual virtual paths consists in replacing the path probabilities with path amplitudes when calculating a weak value. It is not so much the fact that the A[i] are in general complex valued, but rather the absence of restrictions on the signs of their real and imaginary parts, that can cause the weak value of a projector to be −100 [10]. The mean and variance of a non-negative distribution give estimates of the centre and the width of its region of support. The mean and variance of a distribution which can change sign do not have this property, allowing ξ to lie anywhere on the real axis, or to have zero variance even when the distribution is not a Dirac delta [10].
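Both pathologies are reproduced by any sign-changing set of weights, quantum or not. The toy numbers below (ours, chosen purely for illustration) sum to one like probabilities, yet produce a mean far outside the region of support, and a zero variance for a distribution that is not a delta:

```python
# Moments of a "distribution" whose weights sum to 1 but are allowed
# to change sign, as quasi-probabilities built from amplitudes are.

def moments(support, weights):
    """Return (mean, variance) of weights over the given support."""
    mean = sum(w * x for w, x in zip(weights, support))
    var = sum(w * (x - mean) ** 2 for w, x in zip(weights, support))
    return mean, var

# Mean far outside the region of support [0, 1]
# (the "variance" v1 even comes out negative, -9900):
m1, v1 = moments([0.0, 1.0], [-99.0, 100.0])
print(m1)        # -> 100.0

# Zero variance although the distribution is not a Dirac delta:
m2, v2 = moments([0.0, 1.0, 2.0], [0.375, 0.75, -0.125])
print(m2, v2)    # -> 0.5 0.0
```

Neither behaviour is possible once all weights are required to be non-negative, which is the point made above.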

XI. WEAK VALUES: OVER-INTERPRETATIONS AND MISCONCEPTIONS
Conclusions of the previous Section can be summarised as follows [7]. A highly inaccurate "weak" measurement (WM) on a pre- and post-selected system is a perturbative scheme, in which an additional system (a vN meter) is weakly coupled to the observed one (e.g., a spin). The coupling is such that the mean value of one of the meter's variables (pointer position ξ) coincides with the real (or imaginary, if necessary [2], [7]) part of a weighted sum of the relative amplitudes on the virtual paths connecting the system's initial and final states,

⟨ξ⟩ = Re Σ_i F[i] α_i.    (40)

In Eq. (40) the paths {i}, i = 1, 2, ..., and the weights F[i] are determined by the nature of the measured quantities, and α_i are the relative path amplitudes. If it is possible (it may not be) to measure directly a functional F_j whose values are zero on all paths except path number j, F_j[i] = δ_ij, the mean reading of the weak pointer will yield precisely the real (imaginary) part of the amplitude α_j. In short, weak measurements measure probability amplitudes.
A certain amount of confusion, which led to a plethora of recent publications on this still fashionable subject (for a recent review see [29] and Refs. therein), has its roots in Ref. [2], where the weak values, defined in [29] as "...complex numbers that one can assign to the powers of a quantum observable operator Â using two states, an initial state |i⟩ called the preparation or preselection, and a final state |f⟩ called the postselection", were first introduced. With weak values described only vaguely as a "new concept" [2], some authors proposed to use them as a tool for resolving "counterfactual paradoxes". One simple example is the so-called "three-box case" [5], [30], [24], where the final state can be reached via three virtual pathways with amplitudes A[1] = A[3] = C and A[2] = −C, where C is a complex number. An accurate measurement of a functional F_1 always finds the system taking the first path. On the other hand, an accurate measurement of a functional F_3 always finds the system taking the third path, and the authors of [5] alert the reader to "...the peculiarity of having one particle in several places simultaneously even in the stronger sense than in the double slit experiment...". While admitting that the two results were obtained in different physical circumstances, the authors of [4] rely on the WM "... to test... the assertions otherwise regarded as counterfactual". Their reasoning is as follows. Since WM perturb the system only slightly, it is possible to conduct several of them at the same time [4] (or, we add, even perform full tomography of the studied transition [7]). Simultaneous WM of F_1 and F_3 by two meters yield ξ_1 = 1 and ξ_2 = 1, and seemingly confirm the particle's presence in two places at the same time.
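The three-box weak values are easy to reproduce. The sketch below (ours; it assumes the standard three-box states |ψ⟩ = (|1⟩+|2⟩+|3⟩)/√3 and |φ⟩ = (|1⟩−|2⟩+|3⟩)/√3, which realise the amplitudes C, −C, C above) computes the weak value of each box projector, ⟨φ|j⟩⟨j|ψ⟩/⟨φ|ψ⟩, giving 1, −1 and 1 for the three paths:

```python
# Weak values of the projectors P_j = |j><j| for a pre-selected |psi>
# and post-selected |phi>, each given as a tuple of path amplitudes:
#     (P_j)_w = <phi|j><j|psi> / <phi|psi>

def weak_projector_values(psi, phi):
    """Return the list of weak values of all path projectors."""
    overlap = sum(p.conjugate() * q for p, q in zip(phi, psi))
    return [phi[j].conjugate() * psi[j] / overlap for j in range(len(psi))]

s = 3 ** -0.5
psi = (s, s, s)      # |psi> = (|1> + |2> + |3>)/sqrt(3)
phi = (s, -s, s)     # |phi> = (|1> - |2> + |3>)/sqrt(3)

for j, v in enumerate(weak_projector_values(psi, phi), start=1):
    print(f"weak value of P_{j}: {v:+.6f}")
```

The weak pointers for boxes 1 and 3 both read 1, while the three weak values still sum to unity only because box 2 contributes −1, a sign no classical probability could supply.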
A closer look suggests a far simpler explanation of this "peculiarity". Perturbation theory typically gives information about both the moduli and the phases of amplitudes and wave functions, and the WM scheme follows this rule rather than being an exception to it [7].
Finally, in a much criticised [31], [28] paper entitled "How the result of a single coin toss can turn out to be 100 heads" [32], the authors argue that "weak values are not inherently quantum but rather a purely statistical feature of pre- and postselection with disturbance".
Without going into the details of the authors' reasoning, or into the arguments of their critics, one sees that the claim of [32] must be wrong. To obtain a mean value which lies outside the region of support of a distribution, the distribution must change sign [10].
Such distributions naturally appear in quantum mechanics, whose basic building blocks are complex valued probability amplitudes [17]. They may not, however, appear in a classical theory where all physically meaningful averages are taken with non-negative probabilities.
The parallel between the titles of [2] and Ref. [32], which followed it twenty-six years later, is not coincidental. The fallacy of weak classical values in [32] stems naturally from the suggestion [2] that the combinations of transition amplitudes, which arise when a measurement is extremely inaccurate, amount to a "new concept" of a quantum variable, capable of shedding light on a new physical reality.

XII. CONCLUSIONS
In summary, we have shown that the application of a sequence of quantum measurements creates a stochastic network. The network is classical in the sense that the probabilities of all observable events have all the usual properties of probabilities in a classical theory.
The network is quantum in its origin, and the action of a meter, or meters, can be seen as a way of "engineering" real scenarios, endowed with probabilities, from the virtual paths of the system, for which only probability amplitudes are available. The network is conveniently described in terms of functionals, which take numerical values on the virtual paths, and whose values the measurements are set to determine. In the case of von Neumann and von Neumann-like meters considered here, the real pathways are determined by: (i) the set of virtual paths available to the system if no meter is present;
(ii) the values of the measured functional: two paths sharing the same value cannot be distinguished by the meter(s);
(iii) the form and width of the initial state of the pointer, G(ξ), which determine the accuracy of the measurement.
Highly accurate measurements turn all virtual paths that can be distinguished into real ones, travelled with probabilities proportional to the squared moduli of the corresponding probability amplitudes. If a functional whose value is 1 on one of the virtual paths, and 0 on all others, is measured accurately, its mean value yields the relative frequency with which the path is travelled.
If the same functional is measured in a highly inaccurate manner, its mean (weak) value coincides with the real (or imaginary) part of the relative amplitude, corresponding to the