Article

Measurement Uncertainty for Finite Quantum Observables

Quantum Information Group, Institute for Theoretical Physics, Leibniz Universität Hannover, 30167 Hannover, Germany
* Author to whom correspondence should be addressed.
Mathematics 2016, 4(2), 38; https://doi.org/10.3390/math4020038
Submission received: 30 March 2016 / Revised: 9 May 2016 / Accepted: 11 May 2016 / Published: 2 June 2016
(This article belongs to the Special Issue Mathematics of Quantum Uncertainty)

Abstract
Measurement uncertainty relations are lower bounds on the errors of any approximate joint measurement of two or more quantum observables. The aim of this paper is to provide methods to compute optimal bounds of this type. The basic method is semidefinite programming, which we apply to arbitrary finite collections of projective observables on a finite dimensional Hilbert space. The quantification of errors is based on an arbitrary cost function, which assigns a penalty to getting result x rather than y, for any pair ( x , y ) . This induces a notion of optimal transport cost for a pair of probability distributions, and we include an Appendix with a short summary of optimal transport theory as needed in our context. There are then different ways to form an overall figure of merit from the comparison of distributions. We consider three, which are related to different physical testing scenarios. The most thorough test compares the transport distances between the marginals of a joint measurement and the reference observables for every input state. Less demanding is a test just on the states for which a “true value” is known in the sense that the reference observable yields a definite outcome. Finally, we can measure a deviation as a single expectation value by comparing the two observables on the two parts of a maximally-entangled state. All three error quantities have the property that they vanish if and only if the tested observable is equal to the reference. The theory is illustrated with some characteristic examples.


1. Introduction

Measurement uncertainty relations are quantitative expressions of complementarity. As Bohr often emphasized, the predictions of quantum theory are always relative to some definite experimental arrangement, and these settings often exclude each other. In particular, one has to make a choice of measuring devices, and typically, quantum observables cannot be measured simultaneously. This often-used term is actually misleading, because time has nothing to do with it. For a better formulation, recall that quantum experiments are always statistical, so the predictions refer to the frequency with which one will see certain outcomes when the whole experiment is repeated very often. Therefore, the issue is not simultaneous measurement of two observables, but joint measurement in the same shot. That is, a device R is a joint measurement of observable A with outcomes $x \in X$ and observable B with outcomes $y \in Y$, if it produces outcomes of the form $(x,y)$ in such a way that if we ignore outcome y, the statistics of the x outcomes is always (i.e., for every input state) the same as obtained with a measurement of A, and symmetrically for ignoring x and comparing to B. It is in this sense that non-commuting projection-valued observables fail to be jointly measurable.
However, this is not the end of the story. One is often interested in approximate joint measurements. One such instance is Heisenberg’s famous γ-ray microscope [1], in which a particle’s position is measured by probing it with light of some wavelength λ, which from the outset sets a scale for the accuracy of this position measurement. Naturally, the particle’s momentum is changed by the Compton scattering, so if we make a momentum measurement on the particles after the interaction, we will find a different distribution from what would have been obtained directly. Note that in this experiment, we get from every particle a position value and a momentum value. Moreover, errors can be quantified by comparing the respective distributions with some ideal reference: the accuracy of the microscope position measurement is judged by the degree of agreement between the distribution obtained and the one an ideal position measurement would give. Similarly, the disturbance of momentum is judged by comparing a directly measured distribution with the one after the interaction. The same is true for the uncontrollable disturbance of momentum. This refers to a scenario where we do not just measure momentum after the interaction, but try to build a device that recovers the momentum in an optimal way, by making an arbitrary measurement on the particle after the interaction, utilizing everything that is known about the microscope, correcting all known systematic errors and even using the outcome of the position measurement. The only requirement is that at the end of the experiment, for each individual shot, some value of momentum must come out. Even then it is impossible to always reproduce the pre-microscope distribution of momentum. The tradeoff between accuracy and disturbance is quantified by a measurement uncertainty relation. Since it simply quantifies the impossibility of a joint exact measurement, it simultaneously gives bounds on how an approximate momentum measurement irretrievably disturbs position. The basic setup is shown in Figure 1.
Note that in this description of errors, we never brought in a comparison with some hypothetical “true value”. Indeed, it was noted already by Kennard [2] that such comparisons are problematic in quantum mechanics. Even if one is willing to feign hypotheses about the true value of position, as some hidden variable theorists will, an operational criterion for agreement will always have to be based on statistical criteria, i.e., the comparison of distributions. Another fundamental feature of this view of errors is that it provides a figure of merit for the comparison of two devices, typically some ideal reference observable and an approximate version of it. An “accuracy” ε in this sense is a promise that no matter which input state is chosen, the distributions will not deviate by more than ε. Such a promise does not involve a particular state. This is in contrast to preparation uncertainty relations, which quantify the impossibility of finding a state for which the distributions of two given observables (e.g., position and momentum) are both sharp.
Measurement uncertainty relations in the sense described here were first introduced for position and momentum in [3] and were initially largely ignored. A bit earlier, an attempt by Ozawa [4] to quantify error-disturbance tradeoffs with state dependent and somewhat unfortunately chosen [5] quantities had failed, partly for reasons already pointed out in [6]. When experiments confirmed some predictions of the Ozawa approach (including the failure of the error-disturbance tradeoff), a debate ensued [7,8,9,10]. Its unresolved part is whether a meaningful role for Ozawa’s definitions can be found.
Technically, the computation of measurement uncertainty remained hard, since there were no efficient methods to compute sharp bounds in generic cases. A direct computation along the lines of the definition is not feasible, since it involves three nested optimization problems. The only explicit solutions were for qubits [11,12,13], one case of angular momentum [14] and all cases with phase space symmetry [7,15,16], in which the high symmetry allows the reduction to preparation uncertainty as in [3,9]. The main aim of the current paper is to provide efficient algorithms for sharp measurement uncertainty relations of generic observables, even without any symmetry.
In order to do that, we restrict the setting in some ways, but allow maximal generality in others. We will restrict to finite dimensional systems and reference observables, which are projection valued and non-degenerate. Thus, each of the ideal observables will basically be given by an orthonormal basis in the same d-dimensional Hilbert space. The labels of this basis are the outcomes $x \in X$ of the measurement, where X is a set of d elements. We could choose all $X = \{1,\dots,d\}$, but it will help to keep track of things using a separate set for each observable. Moreover, this includes the choice $X \subset \mathbb{R}$, the set of eigenvalues of some Hermitian operator. We allow not just two observables, but any finite number $n \geq 2$ of them. This makes some expressions easier to write down, since the sum of an expression involving observable A and an analogous one for observable B becomes an indexed sum. We also allow much generality in the way errors are quantified. In earlier works, we relied on two elements to be chosen for each observable, namely a metric D on the outcome set and an error exponent α, distinguishing, say, absolute ($\alpha = 1$), root-mean-square ($\alpha = 2$) and maximal ($\alpha = \infty$) deviations. Deviations were then averages of $D(x,y)^\alpha$. Here, we generalize further to an arbitrary cost function $c: X \times X \to \mathbb{R}$, which we take to be positive and zero exactly on the diagonal (e.g., $c(x,y) = D(x,y)^\alpha$), but not necessarily symmetric. Again, this generality comes mostly as a simplification of notation. For a reference observable A with outcome set X and an approximate version $A'$ with the same outcome set, this defines an error $\varepsilon(A'|A)$. Our aim is to provide algorithms for computing the uncertainty diagram associated with such data, of which Figure 2 gives an example. The given data for such a diagram are n projection-valued observables $A_1,\dots,A_n$, with outcome sets $X_i$, for each of which we are given also a cost function $c_i: X_i \times X_i \to \mathbb{R}$ for quantifying errors. An approximate joint measurement is then an observable R with outcome set $X_1 \times \cdots \times X_n$ and, hence, with POVM elements $R(x_1,\dots,x_n)$, where $x_i \in X_i$. By ignoring every output but one, we get the n marginal observables:
$$A'_i(x_i) = \sum_{x_1,\dots,x_{i-1},x_{i+1},\dots,x_n} R(x_1,\dots,x_n) \tag{1}$$
and a corresponding tuple:
$$\varepsilon(R) = \big(\varepsilon(A'_1|A_1),\dots,\varepsilon(A'_n|A_n)\big) \tag{2}$$
of errors. The set of such tuples, as R runs over all joint measurements, is the uncertainty region. The surface bounding this set from below describes the uncertainty tradeoffs. For n = 2 , we call it the tradeoff curve. Measurement uncertainty is the phenomenon that, for general reference observables A i , the uncertainty region is bounded away from the origin. In principle, there are many ways to express this mathematically, from a complete characterization of the exact tradeoff curve, which is usually hard to get, to bounds that are simpler to state, but suboptimal. Linear bounds will play a special role in this paper.
We will consider three ways to build a single error quantity out of the comparison of distributions, denoted by $\varepsilon_M(A'|A)$, $\varepsilon_C(A'|A)$ and $\varepsilon_E(A'|A)$. These will be defined in Section 2. For every choice of observables and cost functions, each will give an uncertainty region, denoted by $\mathcal{U}_M$, $\mathcal{U}_C$ and $\mathcal{U}_E$, respectively. Since they are all based on the same cost function c, they are directly comparable (see Figure 2). We show in Section 3 that the three regions are convex and hence characterized completely by linear bounds. In Section 4, we show how to calculate the optimal linear lower bounds by semidefinite programs. Finally, an Appendix collects the basic information on the beautiful theory of optimal transport, which is needed in Section 2.1 and Section 4.1.

2. Deviation Measures for Observables

Here, we define the measures we use to quantify how well an observable $A'$ approximates a desired observable A. In this section, we do not use the marginal condition Equation (1), so $A'$ is an arbitrary observable with the same outcome set X as A, i.e., we drop all indices i identifying the different observables. Our error quantities are operational in the sense that each is motivated by an experimental setup, which will in particular provide a natural way to measure them. All error definitions are based on the same cost function $c: X \times X \to \mathbb{R}$, where $c(x,y)$ is the “cost” of getting a result $x \in X$ when $y \in X$ would have been correct. The only assumptions are that $c(x,y) \geq 0$ with $c(x,y) = 0$ iff $x = y$.
As described above, we consider a quantum system with Hilbert space $\mathbb{C}^d$. As a reference observable A, we allow any complete von Neumann measurement on this system, that is, any observable whose set X of possible measurement outcomes has size $|X| = d$ and whose POVM elements $A(y) \in \mathcal{B}(\mathbb{C}^d)$ ($y \in X$) are mutually orthogonal projectors of rank 1; we can then also write $A(y) = |\phi_y\rangle\langle\phi_y|$ with an orthonormal basis $\{\phi_y\}$ of $\mathbb{C}^d$. For the approximating observable $A'$, the POVM elements $A'(x)$ (with $x \in X$) are arbitrary with $A'(x) \geq 0$ and $\sum_{x\in X} A'(x) = \mathbb{1}$.
The comparison will be based on a comparison of output distributions, for which we use the following notations: given a quantum state ρ on this system, i.e., an operator with $\rho \geq 0$ and $\operatorname{tr}\rho = 1$, and an observable, such as A, we will denote the outcome distribution by $\rho^A$, so $(\rho^A)(y) := \operatorname{tr}(\rho A(y))$. This is a probability distribution on the outcome set X and can be determined physically as the empirical outcome distribution after many experiments.
For comparing just two probability distributions $p: X \to \mathbb{R}_+$ and $q: X \to \mathbb{R}_+$, a canonical choice is the “minimum transport cost”:
$$\check{c}(p,q) := \inf_\gamma\left\{\sum_{xy} c(x,y)\,\gamma(x,y) \,\middle|\, \gamma \text{ couples } p \text{ to } q\right\} \tag{3}$$
where the infimum runs over the set of all couplings or “transport plans” $\gamma: X \times X \to \mathbb{R}_+$ of p to q, i.e., the set of all probability distributions γ satisfying the marginal conditions $\sum_y \gamma(x,y) = p(x)$ and $\sum_x \gamma(x,y) = q(y)$. The motivations for this notion and the methods to compute it efficiently are described in the Appendix. Since X is finite, the infimum is over a compact set, so it is always attained. Moreover, since we assumed $c \geq 0$ and $c(x,y) = 0 \Leftrightarrow x = y$, we also have $\check{c}(p,q) \geq 0$ with equality iff $p = q$. If one of the distributions, say q, is concentrated on a point $\tilde{y}$, only one coupling exists, namely $\gamma(x,y) = p(x)\delta_{y\tilde{y}}$. In this case, we abbreviate $\check{c}(p,q) = \check{c}(p,\tilde{y})$ and get:
$$\check{c}(p,\tilde{y}) = \sum_x p(x)\,c(x,\tilde{y}) \tag{4}$$
i.e., the average cost of moving all of the points x distributed according to p to $\tilde{y}$.
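Since X is finite, the infimum in Equation (3) is an ordinary linear program over couplings γ. As an aside, here is a minimal sketch (our own illustration, assuming SciPy; the helper name is ours, not from the paper) of how it could be evaluated numerically:

```python
# Minimal sketch: the transport cost (3) as a linear program over couplings.
import numpy as np
from scipy.optimize import linprog

def transport_cost(c, p, q):
    """Minimal cost over couplings gamma of p to q; c[x, y] is the cost matrix."""
    nx, ny = c.shape
    A_eq = np.zeros((nx + ny, nx * ny))     # marginal constraints on gamma
    for x in range(nx):
        A_eq[x, x * ny:(x + 1) * ny] = 1.0  # sum_y gamma(x, y) = p(x)
    for y in range(ny):
        A_eq[nx + y, y::ny] = 1.0           # sum_x gamma(x, y) = q(y)
    res = linprog(c.ravel(), A_eq=A_eq, b_eq=np.concatenate([p, q]),
                  bounds=(0, None))
    return res.fun

c = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :])   # c(x, y) = |x - y|
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.2, 0.6])
print(transport_cost(c, p, q))   # 0.7 for this pair
```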

2.1. Maximal Measurement Error $\varepsilon_M(A'|A)$

The worst case error over all input states is:
$$\varepsilon_M(A'|A) := \sup_\rho\left\{\check{c}\big(\rho^{A'},\rho^A\big) \,\middle|\, \rho \text{ a quantum state on } \mathbb{C}^d\right\} \tag{5}$$
which we call the maximal error. Note that, like the cost function c and the transport costs $\check{c}$, the measure $\varepsilon_M(A'|A)$ need not be symmetric in its arguments, which is sensible, as the reference and approximating observables have distinct roles. Similar definitions for the deviation of an approximating measurement from an ideal one have been made, for specific cost functions, in [7,9] and [14] before.
The definition Equation (5) makes sense even if the reference observable A is not a von Neumann measurement. Instead, the only requirement is that A and $A'$ be general observables with the same (finite) outcome set X, not necessarily of size d. All of our results below that involve only the maximal measurement error immediately generalize to this case as well.
One can see that it is expensive to determine the quantity $\varepsilon_M(A'|A)$ experimentally according to the definition: one would have to measure and compare (see Figure 3) the outcome statistics $\rho^A$ and $\rho^{A'}$ for all possible input states ρ, which form a continuous set. The following definition of observable deviation alleviates this burden.

2.2. Calibration Error $\varepsilon_C(A'|A)$

Calibration (see Figure 4) is a process by which one tests a measuring device on inputs (or measured objects) for which the “true value” is known. Even in quantum mechanics, we can set this up by demanding that the measurement of the reference observable on the input state gives a sharp value y. In a general scenario with continuous outcomes, this can only be asked with a finite error δ, which goes to zero at the end [7], but in the present finite scenario, we can just demand $(\rho^A)(y) = 1$. Since, for every outcome y of a von Neumann measurement, there is only one state with this property (namely $\rho = |\phi_y\rangle\langle\phi_y|$), we can simplify even further and define the calibration error by:
$$\varepsilon_C(A'|A) := \sup_{y,\rho}\left\{\check{c}\big(\rho^{A'},y\big) \,\middle|\, \operatorname{tr}(\rho A(y)) = 1\right\} = \max_y \sum_x \langle\phi_y|A'(x)|\phi_y\rangle\, c(x,y) \tag{6}$$
Note that the calibration idea only makes sense when there are sufficiently many states for which the reference observable has deterministic outcomes, i.e., for projective observables A.
A closely related quantity has recently been proposed by Appleby [10]. It is formulated for real-valued quantities with cost function $c(x,y) = (x-y)^2$ and has the virtue that it can be expressed entirely in terms of first and second moments of the probability distributions involved. Therefore, for any ρ, let m and v be the mean and variance of $\rho^A$ and $v'$ the mean quadratic deviation of $\rho^{A'}$ from m. Then, Appleby defines:
$$\varepsilon_D(A'|A) = \sup_\rho\left(\sqrt{v'} - \sqrt{v}\right)^2 \tag{7}$$
Here, we added the square to make Appleby’s quantity comparable to our variance-like (rather than standard-deviation-like) quantities and chose the letter D, because Appleby calls this the D-error. Since the supremum also includes the states for which A has a sharp distribution (i.e., $v = 0$), we clearly have $\varepsilon_D(A'|A) \geq \varepsilon_C(A'|A)$. On the other hand, let $\Phi(x) = t(x-m)^2$ and $\Psi(y) = \frac{t}{1-t}(y-m)^2$ for $t \in [0,1)$. Then, one easily checks that $\Phi(x) - \Psi(y) \leq (x-y)^2$, so $(\Phi,\Psi)$ is a pricing scheme in the sense defined in the Appendix. Therefore:
$$\check{c}\big(\rho^{A'},\rho^A\big) \geq \sum_x (\rho^{A'})(x)\,\Phi(x) - \sum_y (\rho^A)(y)\,\Psi(y) = t\,v' - \frac{t}{1-t}\,v \tag{8}$$
Maximizing this expression over t gives exactly Equation (7). Therefore,
$$\varepsilon_C(A'|A) \leq \varepsilon_D(A'|A) \leq \varepsilon_M(A'|A) \tag{9}$$

2.3. Entangled Reference Error $\varepsilon_E(A'|A)$

In quantum information theory, a standard way of providing a reference state for later comparison is by applying a channel or observable to one half of a maximally-entangled system. Two observables would be compared by measuring them (or suitable modifications) on the two parts of a maximally-entangled system (see Figure 5). Let us denote the entangled vector by $\Omega = d^{-1/2}\sum_k |kk\rangle$. Since later, we will look at several distinct reference observables, the basis kets $|k\rangle$ in this expression have no special relation to A or its eigenbasis $\phi_y$. We denote by $X^T$ the transpose of an operator X in the $|k\rangle$ basis and by $A^T$ the observable with POVM elements $A(y)^T = |\overline{\phi_y}\rangle\langle\overline{\phi_y}|$, where $\overline{\phi_y}$ is the complex conjugate of $\phi_y$ in the $|k\rangle$ basis. These transposes are needed due to the well-known relation $(X \otimes \mathbb{1})\Omega = (\mathbb{1} \otimes X^T)\Omega$. We now consider an experiment in which $A'$ is measured on the first part and $A^T$ on the second part of the entangled system, so we get the outcome pair $(x,y)$ with probability:
$$p(x,y) = \langle\Omega|A'(x) \otimes A(y)^T|\Omega\rangle = \langle\Omega|A'(x)A(y) \otimes \mathbb{1}|\Omega\rangle = \frac{1}{d}\operatorname{tr}\big[A'(x)A(y)\big]$$
As A is a complete von Neumann measurement, this probability distribution is concentrated on the diagonal ($x = y$) iff $A' = A$, i.e., there are no errors of $A'$ relative to A. Averaging with the error costs, we get a quantity we call the entangled reference error:
$$\varepsilon_E(A'|A) := \sum_{xy} \frac{1}{d}\operatorname{tr}\big[A'(x)A(y)\big]\,c(x,y) \tag{10}$$
Note that this quantity is measured as a single expectation value in the experiment with source Ω. Moreover, when we later want to measure different such deviations for the various marginals, the source and the tested joint measurement device can be kept fixed, and only the various reference observables $A_i^T$ acting on the second part need to be adapted suitably.
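For a finite-dimensional illustration, both Equation (6) and Equation (10) are directly computable from the matrices involved, using $\langle\phi_y|A'(x)|\phi_y\rangle = \operatorname{tr}[A'(x)A(y)]$. The following small sketch is our own (assuming NumPy), not code from the paper:

```python
# Sketch: evaluating (6) and (10) for given POVMs. A and Aprime are lists of
# d x d matrices; A must consist of rank-1 projectors A(y) = |phi_y><phi_y|.
import numpy as np

def eps_C(Aprime, A, c):
    """Calibration error: max_y sum_x tr[A'(x) A(y)] c(x, y)."""
    return max(
        sum(np.trace(Aprime[x] @ A[y]).real * c[x, y] for x in range(len(Aprime)))
        for y in range(len(A))
    )

def eps_E(Aprime, A, c):
    """Entangled reference error: (1/d) sum_{x,y} tr[A'(x) A(y)] c(x, y)."""
    d = A[0].shape[0]
    return sum(
        np.trace(Aprime[x] @ A[y]).real * c[x, y]
        for x in range(len(Aprime)) for y in range(len(A))
    ) / d
```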

2.4. Summary and Comparison

The quantities $\varepsilon_M(A'|A)$, $\varepsilon_C(A'|A)$ and $\varepsilon_E(A'|A)$ constitute three different ways to quantify the deviation of an observable $A'$ from a projective reference observable A. Nevertheless, they are all based on the same distance-like measure, the cost function c on the outcome set X. Therefore, it makes sense to compare them quantitatively. Indeed, they are ordered as follows:
$$\varepsilon_M(A'|A) \geq \varepsilon_C(A'|A) \geq \varepsilon_E(A'|A) \tag{11}$$
Here, the first inequality follows by restricting the supremum Equation (5) to states that are sharp for A, and the second by noting that Equation (6) is the maximum of a function of y, of which Equation (10) is the average.
Moreover, as we argued before Equation (10), $\varepsilon_E(A'|A) = 0$ if and only if $A' = A$, which is hence equivalent also to $\varepsilon_M(A'|A) = 0$ and $\varepsilon_C(A'|A) = 0$.

3. Convexity of Uncertainty Diagrams

For two observables $B_1$ and $B_2$ with the same outcome set, we can easily realize their mixture or convex combination $B = tB_1 + (1-t)B_2$ by flipping a coin with probability t for heads in each instance and then applying $B_1$ when heads is up and $B_2$ otherwise. In terms of POVM elements, this reads $B(x) = tB_1(x) + (1-t)B_2(x)$. We show first that this mixing operation does not increase the error quantities from Section 2.
Lemma 1. 
For $L \in \{M,D,C,E\}$, the error quantity $\varepsilon_L(B|A)$ is a convex function of B, i.e., for $B = tB_1 + (1-t)B_2$ and $t \in [0,1]$:
$$\varepsilon_L(B|A) \leq t\,\varepsilon_L(B_1|A) + (1-t)\,\varepsilon_L(B_2|A) \tag{12}$$
Proof. 
The basic fact used here is that the pointwise supremum of affine functions (i.e., those for which equality holds in the definition of a convex function) is convex. This is geometrically obvious and easily verified from the definitions. Hence, we only have to check that each of the error quantities is indeed represented as a supremum of functions, which are affine in the observable B.
For $L = E$, we even get an affine function, because Equation (10) is linear in $A'$. For $L = C$, Equation (6) has the required form. For $L = M$, Equation (5) is a supremum, but the function $\check{c}$ is defined as an infimum. However, we can use the duality theory described in the Appendix to write it instead as a supremum over pricing schemes of an expression that is just the expectation of $\Phi(x)$ plus a constant and, therefore, an affine function. Finally, for Appleby’s case Equation (7), we get the same supremum, but over the subset of pricing schemes (the quadratic ones).  ☐
The convexity of the error quantities distinguishes measurement from preparation uncertainty. Indeed, the variances appearing in preparation uncertainty relations are typically concave functions, because they arise from minimizing the expectation of ( x - m ) 2 over m. Consequently, the preparation uncertainty regions may have gaps and non-trivial behavior on the side of large variances. The following proposition will show that measurement uncertainty regions are better behaved.
For every cost function c on a set X, we can define a “radius” $\bar{c}^*$, the largest transportation cost from the uniform distribution (the “center” of the set of probability distributions), and a “diameter” $c^*$, the largest transportation cost between any two distributions:
$$\bar{c}^* = \max_y \sum_x c(x,y)/d, \qquad c^* = \max_{x,y} c(x,y) \tag{13}$$
Proposition 2. 
Let n observables $A_i$ and cost functions $c_i$ be given, and define $c_i^M = c_i^C = c_i^*$ and $c_i^E = \bar{c}_i^*$. Then, for $L \in \{M,C,E\}$, the uncertainty region $\mathcal{U}_L$ is a convex set and has the following (monotonicity) property: when $x = (x_1,\dots,x_n) \in \mathcal{U}_L$ and $y = (y_1,\dots,y_n) \in \mathbb{R}^n$ are such that $x_i \leq y_i \leq c_i^L$, then $y \in \mathcal{U}_L$.
Proof. 
Let us first clarify how to make the worst possible measurement B, according to the various error criteria, for which we go back to the setting of Section 2, with just one observable A and cost function c. In all cases, the worst measurement is one with constant and deterministic output, i.e., $B(x) = \delta_{x^*,x}\mathbb{1}$. For $L = C$ and $L = M$, such a measurement will have $\varepsilon_L(B|A) = \max_y c(x^*,y)$, and we can choose $x^*$ to make this equal to $c^* = c^L$. For $L = E$, we get instead the average, which is maximized by $\bar{c}^*$.
We can now make a given joint measurement R worse by replacing it partly by a bad one, say for the first observable $A_1$. That is, we set, for $\lambda \in [0,1]$,
$$\tilde{R}(x_1,x_2,\dots,x_n) = \lambda\, B_1(x_1) \sum_{y_1} R(y_1,x_2,\dots,x_n) + (1-\lambda)\,R(x_1,x_2,\dots,x_n) \tag{14}$$
Then, all marginals $\tilde{A}'_i$ for $i \neq 1$ are unchanged, but $\tilde{A}'_1(x_1) = \lambda B_1(x_1) + (1-\lambda)A'_1(x_1)$. Now, as λ changes from zero to one, the point in the uncertainty diagram will move continuously in the first coordinate direction from $x$ to the point in which the first coordinate is replaced by its maximum value (see Figure 6 (left)). Obviously, the same holds for every other coordinate direction, which proves the monotonicity statement of the proposition.
Let $R_1$ and $R_2$ be two observables, and let $R = \lambda R_1 + (1-\lambda)R_2$ be their mixture. For proving the convexity of $\mathcal{U}_L$, we will have to show that every point on the line between $\varepsilon_L(R_1)$ and $\varepsilon_L(R_2)$ can be attained by a tuple of errors corresponding to some allowed observable (see Figure 6 (right)). Now, Lemma 1 tells us that every component of $\varepsilon_L(R)$ is convex, which implies that $\varepsilon_L(R) \leq \lambda\varepsilon_L(R_1) + (1-\lambda)\varepsilon_L(R_2)$. However, by monotonicity, this also means that $\lambda\varepsilon_L(R_1) + (1-\lambda)\varepsilon_L(R_2)$ is in $\mathcal{U}_L$ again, which shows the convexity of $\mathcal{U}_L$. ☐

Example: Phase Space Pairs

As is plainly visible from Figure 2, the three error criteria considered here usually give different results. However, under suitable circumstances, they all coincide. This is the case for conjugate pairs related by Fourier transform [15]. The techniques needed to show this are the same as for the standard position/momentum case [9,17] and, in addition, imply that the region for preparation uncertainty is also the same.
In the finite case, there is not much to choose: we have to start from a finite abelian group, which we think of as position space, and its dual group, which is then the analogue of momentum space. The unitary connecting the two observables is the finite Fourier transform associated with the group. The cost function needs to be translation invariant, i.e., $c(x,y) = c(x-y)$. Then, by an averaging argument, we find for all error measures that a covariant phase space observable minimizes measurement uncertainty (all three versions). The marginals of such an observable can be simulated by first doing the corresponding reference measurement and then adding some random noise. This implies [14] that $\varepsilon_M(A'|A) = \varepsilon_C(A'|A)$. However, we know more about this noise: it is independent of the input state, so that the average and the maximum of the noise (as a function of the input) coincide, i.e., $\varepsilon_C(A'|A) = \varepsilon_E(A'|A)$. Finally, we know that the noise of the position marginal is distributed according to the position distribution of a certain quantum state, which is, up to normalization and a unitary parity inversion, the POVM element of the covariant phase space observable at the origin. The same holds for the momentum noise. However, then the two noise quantities are exactly related like the position and momentum distributions of a state, and the tradeoff curve for that problem is exactly preparation uncertainty, with variance criteria based on the same cost function.
If we choose the discrete metric for c, the uncertainty region depends only on the number d of elements in the group we started from [15]. The largest ε for all quantities is the distance from a maximally-mixed state to any pure state, which is $\Delta = (1 - 1/d)$. The exact tradeoff curve is then an ellipse, touching the axes at the points $(0,\Delta)$ and $(\Delta,0)$. The resulting family of curves, parameterized by d, is shown in Figure 7. In general, however, the tradeoff curve requires the solution of a non-trivial family of ground state problems and cannot be given in closed form. For bit strings of length n with a cost given by some convex function of Hamming distance, there is an expression for large n [15].

4. Computing Uncertainty Regions via Semidefinite Programming

We show here how the uncertainty regions, and therefore optimal uncertainty relations, corresponding to each of the three error measures can actually be computed, for any given set of projective observables $A_1,\dots,A_n$ and cost functions $c_1,\dots,c_n$. Our algorithms will come in the form of semidefinite programs (SDPs) [18,19], facilitating efficient numerical computation of the uncertainty regions via the many existing program packages to solve SDPs. Moreover, the accuracy of such numerical results can be rigorously certified via the duality theory of SDPs. To obtain the illustrations in this paper, we used the CVX package [20,21] under MATLAB.
As all of our uncertainty regions $\mathcal{U}_L \subset \mathbb{R}^n$ (for $L = M,C,E$) are convex and closed (Section 3), they are completely characterized by their supporting hyperplanes (for a reference to convex geometry, see [22]). Due to the monotonicity property stated in Proposition 2, some of these hyperplanes just cut off the set parallel along the planes $x_i = c_i^L$. The only hyperplanes of interest are those with nonnegative normal vectors $w = (w_1,\dots,w_n) \in \mathbb{R}_+^n$ (see Figure 8). Each hyperplane is completely specified by its “offset” $b_L(w)$ away from the origin, and this function determines $\mathcal{U}_L$:
$$b_L(w) := \inf\left\{w \cdot \varepsilon \,\middle|\, \varepsilon \in \mathcal{U}_L\right\} \tag{15}$$
$$\mathcal{U}_L = \left\{\varepsilon \in \mathbb{R}^n \,\middle|\, \forall w \in \mathbb{R}_+^n:\ w \cdot \varepsilon \geq b_L(w)\right\} \tag{16}$$
In fact, due to the homogeneity $b_L(tw) = t\,b_L(w)$, we can restrict everywhere to the subset of vectors $w \in \mathbb{R}_+^n$ that, for example, satisfy $\sum_i w_i = 1$, suggesting an interpretation of the $w_i$ as weights of the different uncertainties $\varepsilon_i$. Our algorithms will, besides evaluating $b_L(w)$, also allow one to compute an (approximate) minimizer $\varepsilon$, so that one can plot the boundary of the uncertainty region $\mathcal{U}_L$ by sampling over $w$, which is how the figures in this paper were obtained.
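Schematically, this boundary scan might look as follows; `solve_b` is a hypothetical stand-in for any of the SDP routines sketched in the following subsections (it is not a real library function) and is assumed to return the offset $b_L(w)$ together with a minimizing error tuple:

```python
# Hypothetical sketch: sweep the weights over the simplex for n = 2 and
# collect one boundary point of U_L per weight vector.
import numpy as np

points = []
for t in np.linspace(0.01, 0.99, 49):
    w = np.array([t, 1.0 - t])
    b, eps = solve_b(w)      # hypothetical: returns b_L(w) and a minimizer
    points.append(eps)
# `points` now samples the lower boundary of the uncertainty region U_L.
```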
Let us further note that knowledge of $b_L(w)$ for some $w \in \mathbb{R}_+^n$ immediately yields a quantitative uncertainty relation: every error tuple $\varepsilon \in \mathcal{U}_L$ attainable via a joint measurement is constrained by the affine inequality $w \cdot \varepsilon \geq b_L(w)$, meaning that some weighted average of the attainable error quantities $\varepsilon_i$ cannot become too small. When $b_L(w) > 0$ is strictly positive, this excludes in particular the zero-error point $\varepsilon = 0$. The obtained uncertainty relations are optimal in the sense that there exists $\varepsilon \in \mathcal{U}_L$ which attains equality $w \cdot \varepsilon = b_L(w)$.
Having reduced the computation of an uncertainty region essentially to determining b L ( w ) (possibly along with an optimizer ε ), we now treat each case L = M , C , E in turn.

4.1. Computing the Uncertainty Region U M

On the face of it, the computation of the offset $b_M(w)$ looks daunting: expanding the definitions, we obtain:
$$b_M(w) = \inf_R \sum_{i=1}^n w_i \sup_\rho \check{c}_i\big(\rho^{A'_i},\rho^{A_i}\big) \tag{17}$$
where the infimum runs over all joint measurements R with outcome set $X_1 \times \cdots \times X_n$, inducing the marginal observables $A'_i = A'_i(R)$ according to Equation (1), and the supremum over all sets of n quantum states $\rho_1,\dots,\rho_n$ (one for each i); the transport costs $\check{c}_i(p,q)$ are given as a further infimum Equation (3) over the couplings $\gamma_i$ of $p = \rho^{A'_i}$ and $q = \rho^{A_i}$.
The first simplification is to replace the infimum over each coupling $\gamma_i$, via a dual representation of the transport costs, by a maximum over optimal pricing schemes $(\Phi_\alpha,\Psi_\alpha)$, which are certain pairs of functions $\Phi_\alpha,\Psi_\alpha: X_i \to \mathbb{R}$, where α runs over some finite label set $S_i$. The characterization and computation of the pairs $(\Phi_\alpha,\Psi_\alpha)$, which depend only on the chosen cost function $c_i$ on $X_i$, are described in the Appendix. The simplified expression for the optimal transport costs is then:
$$\check{c}_i(p,q) = \max_{\alpha\in S_i}\left[\sum_x \Phi_\alpha(x)\,p(x) - \sum_y \Psi_\alpha(y)\,q(y)\right] \tag{18}$$
We can then continue our computation of $b_M(w)$:
$$b_M(w) = \inf_R \sum_i w_i \sup_\rho \max_{\alpha\in S_i}\left[\sum_x \Phi_\alpha(x)\operatorname{tr}[\rho A'_i(x)] - \sum_y \Psi_\alpha(y)\operatorname{tr}[\rho A_i(y)]\right] \tag{19}$$
$$= \inf_R \sum_i w_i \max_{\alpha\in S_i} \sup_\rho \operatorname{tr}\,\rho\left[\sum_x \Phi_\alpha(x)A'_i(x) - \sum_y \Psi_\alpha(y)A_i(y)\right] \tag{20}$$
$$= \inf_R \sum_i w_i \max_{\alpha\in S_i} \lambda_{\max}\left(\sum_x \Phi_\alpha(x)A'_i(x) - \sum_y \Psi_\alpha(y)A_i(y)\right) \tag{21}$$
where $\lambda_{\max}(B_{i,\alpha})$ denotes the maximum eigenvalue of a Hermitian operator $B_{i,\alpha}$. Note that $\lambda_{\max}(B_{i,\alpha}) = \inf\{\mu_i \mid B_{i,\alpha} \leq \mu_i\mathbb{1}\}$, which one can also recognize as the dual formulation of the convex optimization $\sup_\rho \operatorname{tr}(\rho B_{i,\alpha})$ over density matrices, so that:
$$\max_{\alpha\in S_i} \lambda_{\max}(B_{i,\alpha}) = \inf\left\{\mu_i \,\middle|\, \forall\alpha\in S_i:\ B_{i,\alpha} \leq \mu_i\mathbb{1}\right\} \tag{22}$$
We thus obtain a single constrained minimization:
$$b_M(w) = \inf_{R,\{\mu_i\}}\left\{\sum_i w_i\mu_i \,\middle|\, \forall i\ \forall\alpha\in S_i:\ \sum_x \Phi_\alpha(x)A'_i(x) - \sum_y \Psi_\alpha(y)A_i(y) \leq \mu_i\mathbb{1}\right\} \tag{23}$$
Making the constraints on the POVM elements $R(x_1,\dots,x_n)$ of the joint observable R explicit and expressing the marginal observables $A'_i = A'_i(R)$ directly in terms of them by Equation (1), we finally obtain the following SDP representation for the quantity $b_M(w)$:
$$\begin{aligned} b_M(w) = \inf\ \sum_i w_i\mu_i \quad & \text{with real variables } \mu_i \text{ and } d\times d \text{ matrix variables } R(x_1,\dots,x_n), \text{ subject to} \\ \mu_i\mathbb{1} &\geq \sum_{x_1,\dots,x_n} \Phi_\alpha(x_i)\,R(x_1,\dots,x_n) - \sum_y \Psi_\alpha(y)\,A_i(y) && \forall i\ \forall\alpha\in S_i \\ R(x_1,\dots,x_n) &\geq 0 && \forall x_1,\dots,x_n \\ \sum_{x_1,\dots,x_n} R(x_1,\dots,x_n) &= \mathbb{1}. \end{aligned} \tag{24}$$
The derivation above shows further that, when $w_i > 0$, the $\mu_i$ attaining the infimum equals $\mu_i = \sup_\rho \check{c}_i(\rho^{A'_i},\rho^{A_i}) = \varepsilon_M(A'_i|A_i)$, where $A'_i$ is the marginal coming from a corresponding optimal joint measurement $R(x_1,\dots,x_n)$. Since numerical SDP solvers usually output an (approximate) optimal variable assignment, one obtains in this way directly a boundary point $\varepsilon = (\mu_1,\dots,\mu_n)$ of $\mathcal{U}_M$ when all $w_i$ are strictly positive. If $w_i = 0$ vanishes, a corresponding boundary point ε can be computed via $\varepsilon_i = \varepsilon_M(A'_i|A_i) = \max_{\alpha\in S_i} \lambda_{\max}\big(\sum_{x_1,\dots,x_n} \Phi_\alpha(x_i)R(x_1,\dots,x_n) - \sum_y \Psi_\alpha(y)A_i(y)\big)$ from an optimal assignment for the POVM elements $R(x_1,\dots,x_n)$.
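To illustrate the structure of the primal SDP Equation (24), here is a hedged CVXPY sketch for the case $n = 2$ (the paper itself used CVX under MATLAB); the input format for the pricing schemes is our own assumption:

```python
# Sketch: the primal SDP (24) for n = 2 in CVXPY. schemes[i] is assumed to be
# a list of optimal pricing-scheme pairs (Phi, Psi) for observable i, each a
# pair of length-|X_i| real vectors computed as in the Appendix.
import cvxpy as cp
import numpy as np

def b_M(w, A, schemes, d):
    """A: list of two reference POVMs, each a list of d x d projectors."""
    X = [range(len(Ai)) for Ai in A]
    R = {(x1, x2): cp.Variable((d, d), hermitian=True)
         for x1 in X[0] for x2 in X[1]}
    mu = cp.Variable(2)
    cons = [Rx >> 0 for Rx in R.values()]
    cons.append(sum(R.values()) == np.eye(d))
    for i in range(2):
        for Phi, Psi in schemes[i]:
            # Phi-weighted marginal of R minus Psi-weighted reference
            B = sum(Phi[x[i]] * R[x] for x in R) \
                - sum(Psi[y] * A[i][y] for y in X[i])
            cons.append(mu[i] * np.eye(d) >> B)
    prob = cp.Problem(cp.Minimize(w @ mu), cons)
    prob.solve()
    return prob.value, mu.value
```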
For completeness, we also display the corresponding dual program [18] (note that strong duality holds and the optima of both the primal and the dual problem are attained):
$$\begin{aligned} b_M(w) = \sup\ \operatorname{tr}[C] - \sum_{i,\alpha}\operatorname{tr}\Big[D_{i,\alpha}\sum_y \Psi_\alpha(y)A_i(y)\Big] \quad & \text{with } d\times d \text{ matrix variables } C \text{ and } D_{i,\alpha}, \text{ subject to} \\ C &\leq \sum_{i,\alpha} \Phi_\alpha(x_i)\,D_{i,\alpha} && \forall x_1,\dots,x_n \\ D_{i,\alpha} &\geq 0 && \forall i\ \forall\alpha\in S_i \\ w_i &= \sum_\alpha \operatorname{tr}[D_{i,\alpha}] && \forall i. \end{aligned} \tag{25}$$

4.2. Computing the Uncertainty Region U C

To compute the offset function $b_C(w)$ for the calibration uncertainty region $\mathcal{U}_C$, we use the last form in Equation (6) and recall that the projectors onto the sharp eigenstates of $A_i$ (see Section 2.2) are exactly the POVM elements $A_i(x)$ for $x \in X_i$:
$$b_C(w) = \inf_R \sum_i w_i \max_y \sum_x \operatorname{tr}[A'_i(x)A_i(y)]\,c_i(x,y) \tag{26}$$
$$= \inf_R \sum_i w_i \sup_{\{\lambda_{i,y}\}} \sum_y \lambda_{i,y} \sum_x \operatorname{tr}[A'_i(x)A_i(y)]\,c_i(x,y) \tag{27}$$
$$= \inf_R \sup_{\{\lambda_{i,y}\}} \sum_{x_1,\dots,x_n} \operatorname{tr}\Big[R(x_1,\dots,x_n) \sum_{i,y} w_i\lambda_{i,y}\,c_i(x_i,y)\,A_i(y)\Big] \tag{28}$$
where again, the infimum runs over all joint measurements R, inducing the marginals $A'_i$, and we have turned, for each $i = 1,\dots,n$, the maximum over y into a linear optimization over probabilities $\lambda_{i,y} \geq 0$ ($y = 1,\dots,d$) subject to the normalization constraint $\sum_y \lambda_{i,y} = 1$. In the last step, we have made the $A'_i$ explicit via Equation (1).
The first main step towards a tractable form is von Neumann’s minimax theorem [23,24]: as the sets of joint measurements R and of probabilities { λ i , y } are both convex and the optimization function is an affine function of R and, separately, also an affine function of the { λ i , y } , we can interchange the infimum and the supremum:
$$b_C(w) = \sup_{\{\lambda_{i,y}\}} \inf_R \sum_{x_1,\dots,x_n} \operatorname{tr}\Big[R(x_1,\dots,x_n) \sum_{i,y} w_i\lambda_{i,y}\,c_i(x_i,y)\,A_i(y)\Big] \tag{29}$$
The second main step is to use SDP duality [19] to turn the constrained infimum over R into a supremum, abbreviating the POVM elements as $R(x_1,\dots,x_n) = R_\xi$ with $\xi = (x_1,\dots,x_n)$:
$$\inf_{\{R_\xi\}}\left\{\sum_\xi \operatorname{tr}[R_\xi B_\xi] \,\middle|\, R_\xi \geq 0\ \forall\xi,\ \sum_\xi R_\xi = \mathbb{1}\right\} = \sup_Y\left\{\operatorname{tr}[Y] \,\middle|\, Y \leq B_\xi\ \forall\xi\right\} \tag{30}$$
which is very similar to a dual formulation often employed in optimal ambiguous state discrimination [25,26].
Putting everything together, we arrive at the following SDP representation for the offset quantity b C ( w ) :
$$\begin{aligned} b_C(w) = \sup\ \operatorname{tr}[Y] \quad & \text{with real variables } \lambda_{i,y} \text{ and a } d\times d \text{ matrix variable } Y, \text{ subject to} \\ Y &\leq \sum_{i,y} w_i\lambda_{i,y}\,c_i(x_i,y)\,A_i(y) && \forall x_1,\dots,x_n \\ \lambda_{i,y} &\geq 0 \quad \forall i\ \forall y, \qquad \sum_y \lambda_{i,y} = 1 \quad \forall i. \end{aligned} \tag{31}$$
The dual SDP program reads (again, strong duality holds, and both optima are attained):
$$\begin{aligned} b_C(w) = \inf\ \sum_i w_i m_i \quad & \text{with real variables } m_i \text{ and } d\times d \text{ matrix variables } R(x_1,\dots,x_n), \text{ subject to} \\ m_i &\geq \sum_{x_1,\dots,x_n} \operatorname{tr}[R(x_1,\dots,x_n)\,A_i(y)]\,c_i(x_i,y) && \forall i\ \forall y \\ R(x_1,\dots,x_n) &\geq 0 \quad \forall x_1,\dots,x_n, \qquad \sum_{x_1,\dots,x_n} R(x_1,\dots,x_n) = \mathbb{1}. \end{aligned} \tag{32}$$
This dual version can immediately be recognized as a translation of Equation (26) into SDP form, via an alternative way of expressing the maximum over y (or via the linear programming dual of sup { λ i , y } from Equation (28)).
To compute a boundary point ε of $\mathcal{U}_C$ lying on the supporting hyperplane with normal vector w, it is best to solve the dual SDP Equation (32) and to obtain $\varepsilon = (m_1,\dots,m_n)$ from an (approximate) optimal assignment of the $m_i$. Again, this works when $w_i > 0$, whereas otherwise, one can compute $\varepsilon_i = \max_y \sum_{x_1,\dots,x_n} \operatorname{tr}[R(x_1,\dots,x_n)A_i(y)]\,c_i(x_i,y)$ from an optimal assignment of the $R(x_1,\dots,x_n)$. From many primal-dual numerical SDP solvers (such as CVX [20,21]), one can alternatively obtain optimal POVM elements $R(x_1,\dots,x_n)$ from solving the primal SDP Equation (31), as optimal dual variables corresponding to the constraints on Y, and compute ε from there.
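As an illustration, the dual SDP Equation (32) can be set up in a few lines of CVXPY; the following sketch (our notation, not published code) does so for two qubit reference observables, the $\sigma_z$ and $\sigma_x$ eigenbases, with the discrete metric as cost and equal weights:

```python
# Hedged CVXPY sketch of the dual SDP (32) for a pair of qubit observables.
import cvxpy as cp
import numpy as np

d = 2
e0, e1 = np.eye(d)
hp, hm = (e0 + e1) / np.sqrt(2), (e0 - e1) / np.sqrt(2)
A = [[np.outer(v, v) for v in basis] for basis in ([e0, e1], [hp, hm])]
c = 1.0 - np.eye(d)                 # discrete metric, used for both observables
w = np.array([0.5, 0.5])

R = {(x1, x2): cp.Variable((d, d), hermitian=True)
     for x1 in range(d) for x2 in range(d)}
m = cp.Variable(2)
cons = [Rx >> 0 for Rx in R.values()] + [sum(R.values()) == np.eye(d)]
for i in range(2):
    for y in range(d):
        cons.append(m[i] >= sum(cp.real(cp.trace(R[x] @ A[i][y])) * c[x[i], y]
                                for x in R))
prob = cp.Problem(cp.Minimize(w @ m), cons)
prob.solve()
print(prob.value, m.value)          # b_C(w) and a boundary point of U_C
```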

4.3. Computing the Uncertainty Region U E

As one can see by comparing the last expressions in the defining Equations (6) and (10), respectively, the evaluation of $b_E(w)$ is quite similar to Equation (26), except that the maximum over y is replaced by a uniform average over y. This simply corresponds to fixing $\lambda_{i,y} = 1/d$ for all $i,y$ in Equation (28), instead of taking the supremum. Therefore, the primal and dual SDPs for the offset $b_E(w)$ are:
$$\begin{aligned} b_E(w) = \sup\ \frac{1}{d}\operatorname{tr}[Y] \quad & \text{with a } d\times d \text{ matrix variable } Y, \text{ subject to} \\ Y &\leq \sum_{i,y} w_i\,c_i(x_i,y)\,A_i(y) && \forall x_1,\dots,x_n. \end{aligned} \tag{33}$$
and:
$$\begin{aligned} b_E(w) = \inf\ \frac{1}{d}\sum_i\sum_y\sum_{x_1,\dots,x_n} w_i\operatorname{tr}[R(x_1,\dots,x_n)\,A_i(y)]\,c_i(x_i,y) \quad & \text{with } d\times d \text{ matrix variables } R(x_1,\dots,x_n), \text{ subject to} \\ R(x_1,\dots,x_n) &\geq 0 \quad \forall x_1,\dots,x_n, \qquad \sum_{x_1,\dots,x_n} R(x_1,\dots,x_n) = \mathbb{1}. \end{aligned} \tag{34}$$
The computation of a corresponding boundary point $\varepsilon \in \mathcal{U}_E$ is similar to the above.
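For completeness, a sketch of the very small primal SDP Equation (33) in CVXPY, with the same qubit data as in the previous snippet (again our own illustration):

```python
# Sketch: the primal SDP (33) for the entangled reference error, n = 2.
import cvxpy as cp
import numpy as np

d = 2
e0, e1 = np.eye(d)
hp, hm = (e0 + e1) / np.sqrt(2), (e0 - e1) / np.sqrt(2)
A = [[np.outer(v, v) for v in basis] for basis in ([e0, e1], [hp, hm])]
c = 1.0 - np.eye(d)                 # discrete metric
w = np.array([0.5, 0.5])

Y = cp.Variable((d, d), hermitian=True)
cons = [
    Y << sum(w[i] * c[(x1, x2)[i], y] * A[i][y]
             for i in range(2) for y in range(d))
    for x1 in range(d) for x2 in range(d)
]
prob = cp.Problem(cp.Maximize(cp.real(cp.trace(Y)) / d), cons)
prob.solve()
print(prob.value)                   # = b_E(w) for this pair of observables
```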

5. Conclusions

We have provided efficient methods for computing optimal measurement uncertainty bounds in the case of finite observables. The extension to infinite dimensional and unbounded observables would be very interesting. The SDP formulation is also a powerful tool for deriving further analytic statements. This process has only just begun.

Acknowledgments

The authors acknowledge financial support from the BMBF project Q.com-Q , the DFG project WE1240/20 and the European grants DQSIM and SIQS.

Author Contributions

All authors contributed equally to this work.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Optimal Transport

Appendix A.1. Kantorovich Duality

In this Appendix, we collect the basic theory of optimal transport adapted to the finite setting at hand. This eliminates all of the topological and measure theoretic fine points that can be found, e.g., in Villani’s book [27], which we also recommend for extended proofs of the statements in our summary. We slightly generalize the setting from the cost functions used in the main text of this paper: we allow the two variables on which the cost function depends to range over different sets. This might actually be useful for comparing observables, which then need not have the same outcome sets. Which outcomes are considered to be close or the same must be specified in terms of the cost function. We introduce this generalization here less for the sake of applications than for the simplification of the proofs, in particular for the book-keeping of paths in the proof of Lemma 5.
The basic setting is that of two finite sets X and Y and an arbitrary function $c: X \times Y \to \mathbb{R}$, called the cost function. The task is to optimize the transport of some distribution of stuff on X, described by a distribution function $p: X \to \mathbb{R}_+$, to a final distribution $q: Y \to \mathbb{R}_+$ on Y when the transportation of one unit of stuff from the point x to the point y costs $c(x,y)$. In the first such scenario ever considered, namely by Gaspard Monge, the “stuff” was earth, the distribution p a hill and q a fortress. Villani [27] likes to phrase the scenario in terms of bread produced at bakeries $x \in X$ to be delivered to cafés $y \in Y$. This makes plain that optimal transport is sometimes considered a branch of mathematical economics, and indeed, Leonid Kantorovich, who created much of the theory, received a Nobel prize in economics. In our case, the “stuff” will be probability.
A transport plan (or coupling) will be a probability distribution $\gamma: X \times Y \to \mathbb{R}_+$, which encodes how much stuff is moved from any x to any y. Since all of p is to be moved, $\sum_y \gamma(x,y) = p(x)$, and since all stuff is to be delivered, $\sum_x \gamma(x,y) = q(y)$. Now, for any transport plan γ, we get a total cost of $\sum_{x,y} \gamma(x,y)c(x,y)$, and we are interested in the optimum:
$$\check{c}(p,q) = \inf_\gamma\left\{\sum_{xy} c(x,y)\,\gamma(x,y) \,\middle|\, \gamma \text{ couples } p \text{ to } q\right\} \tag{A1}$$
This is called the primal problem, to which there is also a dual problem. In economic language, it concerns pricing schemes, that is, pairs of functions $\Phi: X \to \mathbb{R}$ and $\Psi: Y \to \mathbb{R}$ satisfying the inequality:
$$\Phi(x) - \Psi(y) \leq c(x,y) \quad \text{for all } x \in X,\ y \in Y \tag{A2}$$
and demands to maximize:
$$\hat{c}(p,q) = \sup_{\Phi,\Psi}\left\{\sum_x \Phi(x)p(x) - \sum_y \Psi(y)q(y) \,\middle|\, (\Phi,\Psi) \text{ is a pricing scheme}\right\} \tag{A3}$$
In Villani’s example [27], think of a consortium of bakeries and cafés that used to organize the transport themselves according to some plan γ. Now, they are thinking of hiring a contractor, which offers to do the job, charging $\Phi(x)$ for every unit picked up from bakery x and giving $\Psi(y)$ to café y on delivery (these numbers can be negative). Their offer is that this will reduce overall costs, since their pricing scheme satisfies Equation (A2). Indeed, the overall charge to the consortium will be:
$$\sum_x \Phi(x)p(x) - \sum_y \Psi(y)q(y) = \sum_{xy}\big(\Phi(x) - \Psi(y)\big)\gamma(x,y) \leq \sum_{xy} c(x,y)\gamma(x,y) \tag{A4}$$
Taking the sup on the left-hand side of this inequality (the company will try to maximize their profits by adjusting the pricing scheme $(\Phi,\Psi)$) and the inf on the right-hand side (the transport plan γ was already optimized), we get $\hat{c}(p,q) \leq \check{c}(p,q)$. The general duality theory for linear programs shows that the duality gap closes in this case, since both optimization problems satisfy Slater’s constraint qualification condition ([18] Section 5.3.2) [22], i.e., we actually always have:
$$\hat{c}(p,q) = \check{c}(p,q) \tag{A5}$$
Therefore, the consortium will face the same transport costs in the end if the contractor chooses an optimal pricing scheme (note that both the infimum and the supremum in the definitions of $\check{c}$ and $\hat{c}$, respectively, are attained, as X and Y are finite sets).
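In the finite case at hand, both sides of Equation (A5) are ordinary linear programs, so the duality can be checked numerically; the following sketch (assuming SciPy, with the `transport_cost` helper sketched in Section 2) evaluates the dual side:

```python
# Numerical check of the duality (A5): dual_cost() evaluates (A3) as an LP.
import numpy as np
from scipy.optimize import linprog

def dual_cost(c, p, q):
    """sup { p.Phi - q.Psi : Phi(x) - Psi(y) <= c(x, y) } over pricing schemes."""
    nx, ny = c.shape
    A_ub = np.zeros((nx * ny, nx + ny))
    for x in range(nx):
        for y in range(ny):
            A_ub[x * ny + y, x] = 1.0        # +Phi(x)
            A_ub[x * ny + y, nx + y] = -1.0  # -Psi(y)
    res = linprog(np.concatenate([-p, q]),   # linprog minimizes -(p.Phi - q.Psi)
                  A_ub=A_ub, b_ub=c.ravel(), bounds=(None, None))
    return -res.fun

rng = np.random.default_rng(0)
c = rng.random((4, 3))                       # different outcome sets X, Y
p = rng.random(4); p /= p.sum()
q = rng.random(3); q /= q.sum()
print(dual_cost(c, p, q))                    # matches transport_cost(c, p, q)
```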
What is especially interesting for us, however, is that the structure of the optimal solutions for both variational problems is very special, and both problems can be reduced to a combinatorial optimization over finitely many possibilities, which furthermore can be constructed independently of p and q. Indeed, pricing schemes and transport plans are both related to certain subsets of $X \times Y$. We define $S(\gamma) \subseteq X \times Y$ as the support of γ, i.e., the set of pairs on which $\gamma(x,y) > 0$. For a pricing scheme $(\Phi,\Psi)$, we define the equality set $E(\Phi,\Psi)$ as the set of points $(x,y)$ for which equality holds in Equation (A2). Then, equality holds in Equation (A4) if and only if $S(\gamma) \subseteq E(\Phi,\Psi)$. Note that for γ to satisfy the marginal condition for given p and q, its support $S(\gamma)$ cannot become too small (depending on p and q). On the other hand, $E(\Phi,\Psi)$ cannot be too large, because the resulting system of equations for $\Phi(x)$ and $\Psi(y)$ would become overdetermined and inconsistent. The kind of set for which they meet is described in the following definition.
Definition 3. 
Let $X,Y$ be finite sets and $c: X \times Y \to \mathbb{R}$ a function. Then, a subset $\Gamma \subseteq X \times Y$ is called cyclically c-monotone (“ccm” for short) if, for any sequence of distinct pairs $(x_1,y_1) \in \Gamma,\dots,(x_n,y_n) \in \Gamma$ and any permutation π of $\{1,\dots,n\}$, the inequality:
$$\sum_{i=1}^n c(x_i,y_i) \leq \sum_{i=1}^n c(x_i,y_{\pi i}) \tag{A6}$$
holds. When Γ is not properly contained in another cyclically c-monotone set, it is called maximally cyclically c-monotone (“mccm” for short).
A basic example of a ccm set is the equality set $E(\Phi,\Psi)$ for any pricing scheme $(\Phi,\Psi)$. Indeed, for $(x_i,y_i) \in E(\Phi,\Psi)$ and any permutation π, we have:
$$\sum_{i=1}^n c(x_i,y_i) = \sum_{i=1}^n\big(\Phi(x_i) - \Psi(y_i)\big) = \sum_{i=1}^n\big(\Phi(x_i) - \Psi(y_{\pi i})\big) \leq \sum_{i=1}^n c(x_i,y_{\pi i}) \tag{A7}$$
The role of ccm sets in the variational problems Equations (A1) and (A3) is summarized in the following proposition.
Proposition 4. 
Let X , Y , c , p , q be given as above. Then:
(1) 
A coupling γ minimizes Equation (A1) if and only if S ( γ ) is ccm.
(2) 
The dual problem Equation (A3) has a maximizer ( Φ , Ψ ) for which E ( Φ , Ψ ) is mccm.
(3) 
If $\Gamma \subseteq X \times Y$ is mccm, there is a pricing scheme $(\Phi,\Psi)$ with $E(\Phi,\Psi) = \Gamma$, and $(\Phi,\Psi)$ is uniquely determined by Γ up to the addition of the same constant to Φ and to Ψ.
Sketch of the proof. 
(1)
Suppose $(x_i,y_i) \in S(\gamma)$ ($i = 1,\dots,n$), and let π be any permutation. Set $\delta = \min_i \gamma(x_i,y_i)$. Then, we can modify γ by subtracting δ from every $\gamma(x_i,y_i)$ and adding δ to $\gamma(x_i,y_{\pi i})$. This operation keeps $\gamma \geq 0$ and does not change the marginals. The target functional in the infimum Equation (A1) is changed by δ times the difference of the two sides of Equation (A6). For a minimizer γ, this change must be $\geq 0$, which gives the inequality Equation (A6). For the converse, we need a Lemma, whose proof will be sketched below.
Lemma 5. 
For any ccm set Γ, there is some pricing scheme $(\Phi,\Psi)$ with $E(\Phi,\Psi) \supseteq \Gamma$.
 
By applying this to Γ = S ( γ ) , we find that the duality gap closes for γ, i.e., equality holds in Equation (A4), and hence, γ is a minimizer.
(2)
Every subset $\Gamma \subseteq X \times Y$ can be thought of as a bipartite graph with vertices $X \cup Y$ and an edge joining $x \in X$ and $y \in Y$ iff $(x,y) \in \Gamma$ (see Figure A1). We call Γ connected if any two vertices are linked by a sequence of edges. Consider now the equality set $E(\Phi,\Psi)$ of some pricing scheme. We modify $(\Phi,\Psi)$ by picking some connected component and setting $\Phi'(x) = \Phi(x) + a$ and $\Psi'(y) = \Psi(y) + a$ for all $x,y$ in that component. If $|a|$ is sufficiently small, $(\Phi',\Psi')$ will still satisfy all of the inequalities of Equation (A2), and $E(\Phi',\Psi') = E(\Phi,\Psi)$. The target functional in the optimization Equation (A3) depends linearly on a, so moving in the appropriate direction will increase, or at least not decrease, it. We can continue until another one of the inequalities of Equation (A2) becomes tight. At this point, $E(\Phi',\Psi') \supsetneq E(\Phi,\Psi)$. This process can be continued until the equality set $E(\Phi,\Psi)$ is connected. Then, $(\Phi,\Psi)$ is uniquely determined by $E(\Phi,\Psi)$ up to a common constant.
Figure A1. Representation of a subset $\Gamma \subseteq X \times Y$ (left) as a bipartite graph (right). The graph is a connected tree.
It remains to show that connected equality sets $E(\Phi,\Psi)$ are mccm. Suppose that $\Gamma \supseteq E(\Phi,\Psi)$ is ccm. Then, by Lemma 5, we can find a pricing scheme $(\Phi',\Psi')$ with $E(\Phi',\Psi') \supseteq \Gamma \supseteq E(\Phi,\Psi)$. However, using just the equalities in Equation (A2) coming from the connected $E(\Phi,\Psi)$, we already find that $\Phi' = \Phi + a$ and $\Psi' = \Psi + a$, so we must have $E(\Phi',\Psi') = E(\Phi,\Psi)$.
(3)
This is trivial from the proof of (2) that mccm sets are connected.  ☐
Proof sketch of Lemma 5. 
Our proof will give some additional information on the set of all pricing schemes that satisfy $E(\Phi,\Psi) \supseteq \Gamma$ and $\Phi(x_0) = 0$ for some reference point $x_0 \in X$ to fix the otherwise arbitrary additive constant. Namely, we will explicitly construct the largest element $(\Phi_+,\Psi_+)$ of this set and the smallest $(\Phi_-,\Psi_-)$, so that all other schemes $(\Phi,\Psi)$ satisfy:
$$\Phi_-(x) \leq \Phi(x) \leq \Phi_+(x) \quad \text{and} \quad \Psi_-(y) \leq \Psi(y) \leq \Psi_+(y) \tag{A8}$$
for all $x \in X$ and $y \in Y$. The idea is to optimize the sums of certain costs over paths in $X \cup Y$.
We define a Γ-adapted path as a sequence of vertices $z_1,\dots,z_n \in X \cup Y$ such that $z_i \in X$ implies $(z_i,z_{i+1}) \in \Gamma$, and $z_i \in Y$ implies $z_{i+1} \in X$. For such a path, we define:
$$c(z_1,\dots,z_n) = \sum_{i=1}^{n-1} c(z_i,z_{i+1}) \tag{A9}$$
with the convention $c(y,x) := -c(x,y)$ for $x \in X$, $y \in Y$. Then, Γ is ccm if and only if $c(z_1,\dots,z_n,z_1) \leq 0$ for every Γ-adapted closed path. This is immediate for cyclic permutations and follows for more general ones by cycle decomposition. The assertion of Lemma 5 is trivial if $\Gamma = \emptyset$, so we can pick a point $x_0 \in X$ for which some edge $(x_0,y) \in \Gamma$ exists. Then, for any $z \in X \cup Y$ with $z \neq x_0$, we define:
$$\chi_+(z) := -\sup c(x_0,\dots,z) \quad \text{and} \quad \chi_-(z) := \sup c(z,\dots,x_0) \tag{A10}$$
where the suprema are over all Γ-adapted paths between the specified endpoints; we define $\chi_+(x_0) := \chi_-(x_0) := 0$, and empty suprema are defined as $-\infty$. Then, $\chi_\pm$ are the maximal and minimal pricing schemes, when written as two functions $\Phi_\pm(x) = \chi_\pm(x)$ and $\Psi_\pm(y) = \chi_\pm(y)$ for $x \in X$ and $y \in Y$.
For proving these assertions, consider paths of the type $(x_0,\dots,y,x)$. For this to be Γ-adapted, there is no constraint on the last link, so:
$$-\chi_+(y) - c(x,y) \leq -\chi_+(x), \quad \text{and} \quad \sup_y\big[-\chi_+(y) - c(x,y)\big] = -\chi_+(x) \tag{A11}$$
Here, the inequality follows because the adapted paths from $x_0$ to x going via y as the last step are a subclass of all adapted paths and give a smaller supremum. The second statement follows because, for $x \neq x_0$, there has to be some last step from Y to x. The inequality Equation (A11) also shows that $(\Phi_+,\Psi_+)$ is a pricing scheme. The same argument applied to the decomposition of paths $(x_0,\dots,x,y)$ with $(x,y) \in \Gamma$ gives the inequality:
$$-\chi_+(x) + c(x,y) \leq -\chi_+(y) \quad \text{for } (x,y) \in \Gamma \tag{A12}$$
Combined with the inequality Equation (A11), we get that $(\Phi_+,\Psi_+)$ has equality set $E(\Phi_+,\Psi_+) \supseteq \Gamma$. The corresponding statements for $\chi_-$ follow by first considering paths $(y,x,\dots,x_0)$ and then $(x,y,\dots,x_0)$ with $(x,y) \in \Gamma$.
Finally, in order to show the inequalities Equation (A8), let $(\Phi,\Psi)$ be a pricing scheme with $\Phi(x_0) = 0$ and $E(\Phi,\Psi) \supseteq \Gamma$. Consider first any Γ-adapted path $(x_0,y_0,x_1,\dots,x_n,y)$. Then,
$$c(x_0,\dots,x_n,y) = \sum_{i=0}^{n-1}\big(\Phi(x_i) - \Psi(y_i) - c(x_{i+1},y_i)\big) + \Phi(x_n) - \Psi(y) = \Phi(x_0) - \Psi(y) + \sum_{i=0}^{n-1}\big(\Phi(x_{i+1}) - \Psi(y_i) - c(x_{i+1},y_i)\big) \leq \Phi(x_0) - \Psi(y) = -\Psi(y) \tag{A13}$$
because the sum is term-wise non-positive due to the pricing scheme property. Hence, by taking the supremum, we get $\chi_+(y) \geq \Psi(y)$. The other inequalities follow with the same arguments applied to paths of the type $(x_0,\dots,y_n,x)$, $(x,y_0,\dots,x_0)$ and $(y,x_1,\dots,x_0)$.  ☐
Let us summarize the consequences of Proposition 4 for the computation of minimal costs Equation (A1). Given any cost function c, the first step is to enumerate the corresponding mccm sets, say $\Gamma_\alpha$, $\alpha \in S$, for some finite label set S, and to compute for each of these the pricing scheme $(\Phi_\alpha,\Psi_\alpha)$ (up to an overall additive constant; see Proposition 4). This step depends only on the chosen cost function c. Then, for any distributions $p,q$, we get:
$$\hat{c}(p,q) = \check{c}(p,q) = \max_{\alpha\in S}\left[\sum_x \Phi_\alpha(x)p(x) - \sum_y \Psi_\alpha(y)q(y)\right] \tag{A14}$$
This is very fast to compute, so the preparatory work of determining the ( Φ α , Ψ α ) is well invested if many such expressions have to be computed. However, even more important for us is that Equation (A14) simplifies the variational problem sufficiently, so that we can combine it with the optimization over joint measurements (see Section 4.1). Of course, this leaves open the question of how to determine all mccm sets for a cost function. Some remarks about this will be collected in the next subsection.

Appendix A.2. How to Find All mccm Sets

We will begin with a basic algorithm for the general finite setting, in which X , Y and the cost function c are arbitrary. Often, the task can be greatly simplified if more structure is given. These simplifications will be described in the following sections.
The basic algorithm will be a growth process for ccm subsets $\Gamma \subseteq X \times Y$, which stops as soon as Γ is connected (cf. the proof of Proposition 4(2)). After that, we can compute the unique pricing scheme $(\Phi,\Psi)$ with equality on Γ by solving the system of linear equations from Equation (A2) with $(x,y) \in \Gamma$. This scheme may have additional equality pairs extending Γ to an mccm set. Hence, the same $(\Phi,\Psi)$ and mccm sets may arise from another route of the growth process. Nevertheless, we can stop the growth when Γ is connected and eliminate doubles as a last step of the algorithm. The main part of the algorithm will thus aim at finding all connected ccm trees, where by definition, a tree is a graph containing no cycles. We take each tree to be given by a list of edges $(x_1,y_1),\dots,(x_N,y_N)$, which we take to be written in lexicographic ordering, relative to some arbitrary numberings $X = \{1,\dots,|X|\}$ and $Y = \{1,\dots,|Y|\}$. Hence, the first element in the list will be $(1,y)$, where y is the first element connected to $1 \in X$.
At stage k of the algorithm, we will have a list of all possible initial sequences $(x_1,y_1),\dots,(x_k,y_k)$ of lexicographically-ordered ccm trees. For each such sequence, the possible next elements will be determined, and all of the resulting edge-lists of length $k+1$ form the next stage of the algorithm. Now, suppose we have some list $(x_1,y_1),\dots,(x_k,y_k)$. What can the next pair $(x,y)$ be? There are two possibilities:
(1)
$x = x_k$ is unchanged. Then, lexicographic ordering dictates that $y > y_k$. Suppose that y is already connected to some $x' < x_k$. Then, adding the edge $(x_k,y)$ would imply that y could be reached in two different ways from the starting node ($x = 1$). Since we are looking only for trees, we must therefore restrict to only those $y > y_k$ that are yet unconnected.
(2)
x is incremented. Since, in the end, all vertices x must lie in one connected component, the next one has to be x = x k + 1 . Since the graphs at any stage should be connected, y must be a previously-connected Y-vertex.
With each new addition, we also check the ccm property of the resulting graph. The best way to do this is to store with any graph the functions Φ , Ψ on the set of already connected nodes (starting from Φ ( 1 ) = 0 ) and to update them with any growth step. We then only have to verify inequality Equation (A2) for every new node paired with every old one. Since the equality set of any pricing scheme is ccm, this is sufficient. The algorithm will stop as soon as all nodes are included, i.e., after | X | + | Y | - 1 steps.

Appendix A.3. The Linearly-Ordered Case

When we look at standard quantum observables, given by a Hermitian operator A, the outcomes are understood to be the eigenvalues of A, i.e., real numbers. Moreover, we typically look at cost functions, which depend on the difference ( x - y ) of two eigenvalues, i.e.,
$$c(x,y) = h(x-y) \tag{A15}$$
For the Wasserstein distances, one uses $h(t) = |t|^\alpha$ with $\alpha \geq 1$. The following Lemma allows, in addition, arbitrary convex, not necessarily even, functions h.
Lemma 6. 
Let $h: \mathbb{R} \to \mathbb{R}$ be convex and c be given by Equation (A15). Then, for $x_1 \leq x_2$ and $y_1 \leq y_2$, we have:
$$c(x_1,y_1) + c(x_2,y_2) \leq c(x_1,y_2) + c(x_2,y_1) \tag{A16}$$
with strict inequality if h is strictly convex, x 1 < x 2 and y 1 < y 2 .
Proof. 
Since $x_2 - x_1 \geq 0$ and $y_2 - y_1 \geq 0$, there exists $\lambda \in [0,1]$ such that $(1-\lambda)(x_2-x_1) = \lambda(y_2-y_1)$. This implies $x_1 - y_1 = \lambda(x_1-y_2) + (1-\lambda)(x_2-y_1)$, so that convexity of h gives $c(x_1,y_1) = h(x_1-y_1) \leq \lambda h(x_1-y_2) + (1-\lambda)h(x_2-y_1) = \lambda c(x_1,y_2) + (1-\lambda)c(x_2,y_1)$. The same choice of λ also implies $x_2 - y_2 = (1-\lambda)(x_1-y_2) + \lambda(x_2-y_1)$, so that similarly, $c(x_2,y_2) \leq (1-\lambda)c(x_1,y_2) + \lambda c(x_2,y_1)$. Adding up the two inequalities yields the desired result. If $x_1 < x_2$ and $y_1 < y_2$ are strict inequalities, then $\lambda \in (0,1)$, so that strict convexity of h gives a strict overall inequality.  ☐
As a consequence, if $\Gamma$ is a ccm set for the cost function $c$ and $(x_1, y_1) \in \Gamma$, then every $(x, y) \in \Gamma$ satisfies either $x \ge x_1$ and $y \ge y_1$, or $x \le x_1$ and $y \le y_1$. Loosely speaking, within $\Gamma$ one can only move north-east or south-west, but never north-west or south-east.
This has immediate consequences for ccm sets: in each step through the lexicographically-ordered list (see the algorithm in the previous subsection), one either increases $x$ by one or increases $y$ by one, going from $(1, 1)$ to the maximum $(|X|, |Y|)$. This is a simple walk on the Manhattan grid, parameterized by the instructions on whether to go north or east in every step. Of the $|X| + |Y| - 2$ necessary steps, $|X| - 1$ have to go in the east direction, so altogether, we will have at most:
$r = \binom{|X| + |Y| - 2}{|X| - 1}$
mccm sets and pricing schemes. They are quickly enumerated without going through the full tree search described in the previous subsection.
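Because the candidates are exactly the monotone staircase paths, they can be generated directly. The following sketch (again our own code, with invented names) enumerates them by choosing which of the $|X| + |Y| - 2$ steps go east:

from itertools import combinations

def staircases(nX, nY):
    """All monotone paths from (1, 1) to (nX, nY): the candidate mccm
    sets for a cost c(x, y) = h(x - y) with h convex."""
    steps = nX + nY - 2
    paths = []
    for east in combinations(range(steps), nX - 1):
        x, y, path = 1, 1, [(1, 1)]
        for s in range(steps):
            if s in east:
                x += 1            # east: increment x
            else:
                y += 1            # north: increment y
            path.append((x, y))
        paths.append(path)
    return paths

# For |X| = |Y| = 3, this gives binomial(4, 2) = 6 candidate sets.
assert len(staircases(3, 3)) == 6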

Appendix A.4. The Metric Case

Another case in which a little more can be said is the following ([27], Case 5.4, p. 56):
Lemma 7. 
Let $X = Y$, and consider a cost function $c(x, y)$ that is a metric on $X$. Then:
(1) 
Optimal pricing schemes satisfy $\Phi = \Psi$ and the Lipschitz condition $|\Phi(x) - \Phi(y)| \le c(x, y)$.
(2) 
All mccm sets contain the diagonal.
Proof. 
Any pricing scheme satisfies $\Phi(x) - \Psi(x) \le c(x, x) = 0$, i.e., $\Phi(x) \le \Psi(x)$. For an optimal scheme and $y \in X$, we can find $x'$ such that $\Psi(y) = \Phi(x') - c(x', y)$. Hence:
$\Psi(y) - \Psi(x) \le \Phi(x') - c(x', y) + c(x', x) - \Phi(x') \le c(y, x)$
By exchanging $x$ and $y$, we get $|\Psi(y) - \Psi(x)| \le c(y, x)$. Moreover, given $x$, some $y$ will satisfy:
$\Phi(x) = \Psi(y) + c(x, y) \ge \Psi(x)$
which, combined with the inequality $\Phi(x) \le \Psi(x)$ obtained at the start, gives $\Phi = \Psi$. In particular, every $(x, x)$ belongs to the equality set.  ☐
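Lemma 7(1) can also be checked numerically. The sketch below (our own code, assuming NumPy and SciPy are available) solves the pricing-scheme linear program for the metric $c(x, y) = |x - y|$ and strictly positive distributions $p, q$, and confirms that $\Phi = \Psi$ at the optimum. The convention, maximize $\sum_x p(x)\Phi(x) - \sum_y q(y)\Psi(y)$ subject to $\Phi(x) - \Psi(y) \le c(x, y)$, is our reading of the duality used in the text.

import numpy as np
from scipy.optimize import linprog

n = 4
rng = np.random.default_rng(0)
pts = np.sort(rng.random(n))                 # points on the line, X = Y
c = np.abs(pts[:, None] - pts[None, :])      # |x - y| is a metric
p = rng.random(n); p /= p.sum()              # strictly positive p, q
q = rng.random(n); q /= q.sum()

# Variables z = (Phi, Psi); minimize -p.Phi + q.Psi, i.e., maximize p.Phi - q.Psi.
obj = np.concatenate([-p, q])
# One inequality per pair (x, y): Phi[x] - Psi[y] <= c[x, y].
A_ub = np.zeros((n * n, 2 * n)); b_ub = np.zeros(n * n)
for x in range(n):
    for y in range(n):
        A_ub[x * n + y, x] = 1.0
        A_ub[x * n + y, n + y] = -1.0
        b_ub[x * n + y] = c[x, y]
# Fix the gauge freedom Phi -> Phi + t, Psi -> Psi + t by setting Phi[0] = 0.
A_eq = np.zeros((1, 2 * n)); A_eq[0, 0] = 1.0
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0.0],
              bounds=(None, None))
Phi, Psi = res.x[:n], res.x[n:]
print(np.allclose(Phi, Psi, atol=1e-7))      # True, as Lemma 7(1) predicts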
An even more special case is that of the discrete metric, $c(x, y) = 1 - \delta_{xy}$. In this case, it makes no sense to look at error exponents, because $c(x, y)^\alpha = c(x, y)$. Moreover, the Lipschitz condition $|\Phi(x) - \Phi(y)| \le c(x, y)$ is vacuous for $x = y$ and otherwise only asserts that $|\Phi(x) - \Phi(y)| \le 1$, which, after adjustment of a constant, just means that $|\Phi(x)| \le 1/2$ for all $x$. Hence, the transportation cost is just the $\ell_1$ norm up to a factor, i.e.,
$\check{c}(p, q) = \frac{1}{2} \sup_{|\Phi| \le 1} \sum_x (p(x) - q(x)) \Phi(x) = \frac{1}{2} \sum_x |p(x) - q(x)|$
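In this case, no linear program is needed at all; a one-line function (our own illustration) computes the transport cost:

# Transport cost for the discrete metric c(x, y) = 1 - delta_{xy}:
# it equals half the l1 distance between the two distributions.
def discrete_transport_cost(p, q):
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

print(discrete_transport_cost([0.5, 0.3, 0.2], [0.2, 0.3, 0.5]))  # 0.3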

References

  1. Heisenberg, W. Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Z. Phys. 1927, 43, 172–198.
  2. Kennard, E. Zur Quantenmechanik einfacher Bewegungstypen. Z. Phys. 1927, 44, 326–352.
  3. Werner, R.F. The uncertainty relation for joint measurement of position and momentum. Quant. Inform. Comput. 2004, 4, 546–562.
  4. Ozawa, M. Uncertainty relations for joint measurements of noncommuting observables. Phys. Lett. A 2004, 320, 367–374.
  5. Busch, P.; Lahti, P.; Werner, R.F. Quantum root-mean-square error and measurement uncertainty relations. Rev. Mod. Phys. 2014, 86, 1261–1281.
  6. Appleby, D.M. Concept of experimental accuracy and simultaneous measurements of position and momentum. Int. J. Theor. Phys. 1998, 37, 1491–1509.
  7. Busch, P.; Lahti, P.; Werner, R.F. Proof of Heisenberg’s error-disturbance relation. Phys. Rev. Lett. 2013, 111, 160405.
  8. Ozawa, M. Disproving Heisenberg’s error-disturbance relation. 2013. arXiv.org e-Print archive. Available online: http://arxiv.org/abs/1308.3540 (accessed on 1 April 2016).
  9. Busch, P.; Lahti, P.; Werner, R.F. Measurement uncertainty relations. J. Math. Phys. 2014, 55, 042111.
  10. Appleby, D.M. Quantum errors and disturbances: Response to Busch, Lahti and Werner. Entropy 2016, 18, 174.
  11. Busch, P.; Heinosaari, T. Approximate joint measurement of qubit observables. Quantum Inf. Comput. 2008, 8, 0797–0818.
  12. Bullock, T.; Busch, P. Incompatibility and error relations for qubit observables. 2015. arXiv.org e-Print archive. Available online: http://arxiv.org/abs/1512.00104 (accessed on 1 April 2016).
  13. Busch, P.; Lahti, P.; Werner, R.F. Heisenberg uncertainty for qubit measurements. Phys. Rev. A 2014, 89, 012129.
  14. Dammeier, L.; Schwonnek, R.; Werner, R.F. Uncertainty relations for angular momentum. New J. Phys. 2015, 17, 093046.
  15. Werner, R.F. Uncertainty relations for general phase spaces. In Proceedings of the QCMC 2014: 12th International Conference on Quantum Communication, Measurement and Computing, Hefei, China, 2–6 November 2014.
  16. Busch, P.; Kiukas, J.; Werner, R.F. Sharp uncertainty relations for number and angle. 2016. arXiv.org e-Print archive. Available online: http://arxiv.org/abs/1604.00566 (accessed on 1 April 2016).
  17. Werner, R.F. Quantum harmonic analysis on phase space. J. Math. Phys. 1984, 25, 1404–1411.
  18. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
  19. Vandenberghe, L.; Boyd, S. Semidefinite programming. SIAM Rev. 1996, 38, 49–95.
  20. Grant, M.; Boyd, S. CVX: Matlab Software for Disciplined Convex Programming, Version 2.1. Available online: http://cvxr.com/cvx (accessed on 1 April 2016).
  21. Grant, M.; Boyd, S. Graph implementations for nonsmooth convex programs. In Recent Advances in Learning and Control; Blondel, V., Boyd, S., Kimura, H., Eds.; Springer-Verlag Limited: Berlin, Germany, 2008; pp. 95–110.
  22. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970.
  23. Nikaidô, H. On von Neumann’s minimax theorem. Pac. J. Math. 1954, 4, 65–72.
  24. Sion, M. On general minimax theorems. Pac. J. Math. 1958, 8, 171–176.
  25. Holevo, A.S. Statistical decision theory for quantum systems. J. Multivar. Anal. 1973, 3, 337–394.
  26. Yuen, H.P.; Kennedy, R.S.; Lax, M. Optimum testing of multiple hypotheses in quantum detection theory. IEEE Trans. Inf. Theory 1975, 21, 125–134.
  27. Villani, C. Optimal Transport: Old and New; Springer: Berlin, Germany, 2009.
Figure 1. Basic setup of measurement uncertainty relations. The approximate joint measurement $R$ is shown in the middle, with its array of output probabilities. The marginals $A'$ and $B'$ of this array are compared to the output probabilities of the reference observables $A$ and $B$, shown at the top and at the bottom. The uncertainties $\varepsilon(A'|A)$ and $\varepsilon(B'|B)$ are quantitative measures for the difference between these distributions.
Figure 2. Uncertainty regions for three reference observables, namely the angular momentum components $L_1, L_2, L_3$ for spin 1, each with outcome set $X = \{-1, 0, +1\}$ and the choice $c(x, y) = (x - y)^2$ for the cost function. The three regions indicated correspond to the different overall figures of merit $\varepsilon_M(A'|A)$, $\varepsilon_C(A'|A)$, $\varepsilon_E(A'|A)$ described in Section 2.
Figure 3. For the maximal measurement error $\varepsilon_M(A'|A)$, the transport distance of output distributions is maximized over all input states $\rho$.
Figure 4. For the calibration error $\varepsilon_C(A'|A)$, the input state is constrained to the eigenstates of $A$, say with sharp $A$-value $y$, and the cost of moving the $A'$-distribution to $y$ is maximized over $y$.
Figure 5. The entangled reference error $\varepsilon_E(A'|A)$ is a single expectation value, namely of the cost $c(x, y)$, where $y$ is the output of $A^T$ and $x$ the output of $A'$. Like the other error quantities, this expectation vanishes iff $A' = A$.
Figure 6. The blue shaded region corresponds to the monotonicity statement for $\varepsilon_L(R)$. (Left) $\tilde{R}$ is a mixture of $R$ and $B_1$. We can also get an observable $V$ by mixing the second marginal of $\tilde{R}$ with $B_2$ and thus reach every point in the blue shaded region. (Right) The map $R \mapsto \varepsilon_L(R)$ is componentwise convex. Therefore, the mixture of the points $\varepsilon_L(R)$ and $\varepsilon_L(R')$ always lies in the monotonicity region corresponding to the mixture of $R$ and $R'$.
Figure 7. The uncertainty tradeoff curves for discrete position/momentum pairs, with a discrete metric. In this case, all uncertainty regions, including the one for preparation uncertainty, coincide. The parameter of the tradeoff curves is the order $d = 2, 3, \ldots, 10, \ldots$ of the underlying abelian group.
Figure 8. The lower bound of the uncertainty region $U_L$ can be described by its supporting hyperplanes (red line) with a normal vector $w \in \mathbb{R}^n_+$.
