Coarse-Graining of Observables

We first define the coarse-graining of probability measures in terms of stochastic kernels. We define when a probability measure is part of another probability measure and say that two probability measures coexist if they are both parts of a single probability measure. We then show that any two probability measures coexist. We extend these concepts to observables and instruments and mention that two observables need not coexist. We define the discretization of an observable as a special case of coarse-graining and show that discretizations have zero-one stochastic kernels. We next consider finite observables and instruments and show that in these cases stochastic kernels are replaced by stochastic matrices. We also show that coarse-graining is the same as post-processing in this finite case. We then consider sequential products of observables and discuss the sequential product of a post-processed observable with another observable. We briefly discuss SIC observables and the example of qubit observables.

We define the Dirac measure at x on (Ω, F) by δ x where δ x (∆) = 1 if and only if x ∈ ∆. It can be shown that an arbitrary affine map V : Prob (Ω 1 , F 1 ) → Prob (Ω 2 , F 2 ) need not have a stochastic kernel and hence need not be a coarse-graining. One way to accomplish this is to construct such a map V where x → V (δ x )(∆) is not measurable for some ∆ ∈ F 2 . We leave the details of this to the reader. Then V does not have a stochastic kernel v because if it did, then by Lemma 1.1(a), v(x, ∆) = V (δ x )(∆) so x → v(x, ∆) is not measurable for some ∆ ∈ F 2 which is a contradiction.
We say that µ_2 ∈ Prob (Ω_2, F_2) is part of µ_1 ∈ Prob (Ω_1, F_1) if there exists a measurable function f : Ω_1 → Ω_2 such that µ_2(∆) = µ_1(f^{-1}(∆)) for all ∆ ∈ F_2. In this case µ_2 = V_f(µ_1), where V_f(µ)(∆) = µ(f^{-1}(∆)). Setting v(x, ∆) = δ_{f(x)}(∆) for all x ∈ Ω_1, ∆ ∈ F_2, we have that v is a stochastic kernel for V_f. Since stochastic kernels are unique, the converse holds.
Proof. For all µ ∈ Prob (Ω_1, F_1) and ∆ ∈ F_3 we have that V_2(V_1(µ))(∆) = ∫ v_2(y, ∆) dV_1(µ)(y) = ∫∫ v_2(y, ∆) v_1(x, dy) dµ(x), so V_2 ∘ V_1 has the stochastic kernel v(x, ∆) = ∫ v_2(y, ∆) v_1(x, dy).

We say that two probability measures coexist if they are both parts of another probability measure.
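Any two probability measures indeed coexist, and in the finite case the witness is transparent: the product measure has the two given measures as its marginals, each a part via a coordinate projection. A minimal sketch (the particular probability vectors are illustrative choices, not from the text):

```python
# Any two (finite) probability measures coexist: both are parts of
# their product measure, via the two coordinate projections.
p = [0.2, 0.8]            # probability measure on Omega_1 = {x1, x2}
q = [0.5, 0.3, 0.2]       # probability measure on Omega_2 = {y1, y2, y3}

# Joint (product) measure on Omega_1 x Omega_2.
joint = [[pi * qj for qj in q] for pi in p]

# The marginals recover p and q: f(x, y) = x and g(x, y) = y are the
# measurable surjections exhibiting p and q as parts of the joint.
marg1 = [sum(row) for row in joint]
marg2 = [sum(joint[i][j] for i in range(len(p))) for j in range(len(q))]

print([round(v, 12) for v in marg1])   # recovers p
print([round(v, 12) for v in marg2])   # recovers q
```

The same product construction works for arbitrary measurable spaces, which is why coexistence of probability measures is automatic while coexistence of observables is not.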
When we consider a finite measurable space (Ω, F) we always assume that F = 2^Ω, so F need not be specified. For Ω = {x_1, x_2, . . . , x_n} we identify a µ ∈ Prob (Ω) with the column vector with entries µ(x_1), µ(x_2), . . . , µ(x_n). In this finite case, the stochastic kernels are replaced by stochastic matrices. This is because, in the finite case, if v(x, ∆) is a stochastic kernel, then [m_ij] with m_ij = v(x_i, {y_j}) is a stochastic matrix and, conversely, if [m_ij] is a stochastic matrix, then v(x_i, ∆) = Σ{m_ij : y_j ∈ ∆} is a stochastic kernel.
We conclude that in the finite case, every affine map V : Prob (Ω_1) → Prob (Ω_2) is a coarse-graining and is implemented by a unique stochastic matrix, which we also denote by V, identifying the map with its matrix.
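The finite identification can be sketched directly (the matrix and vector below are illustrative choices): a stochastic matrix [m_ij] with rows summing to one implements the affine map µ → V(µ), and a point function f yields a zero-one matrix:

```python
# Coarse-graining in the finite case: a stochastic matrix m with
# m[i][j] = v(x_i, {y_j}) (each row sums to 1) implements the affine
# map V(mu)(y_j) = sum_i mu(x_i) * m[i][j].
m = [[0.5, 0.5, 0.0],
     [0.0, 0.25, 0.75]]          # illustrative 2x3 stochastic matrix

def coarse_grain(mu):
    return [sum(mu[i] * m[i][j] for i in range(len(mu)))
            for j in range(len(m[0]))]

mu = [0.4, 0.6]
out = coarse_grain(mu)           # a probability vector on Omega_2
print(round(sum(out), 12))       # total mass is preserved

# A zero-one stochastic matrix arises from a function f: Omega_1 -> Omega_2
# via m[i][j] = 1 iff f(x_i) = y_j; this is exactly the "part" relation.
```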

Observables and Instruments
In this section we employ our previous work to study coarse-graining of observables and instruments. Let H be a complex Hilbert space that represents a quantum system S. We denote the set of bounded linear operators on H by L(H).
An effect is an operator A ∈ L(H) that satisfies 0 ≤ A ≤ I, where 0, I are the zero and identity operators, respectively. We denote the set of effects by E(H) and interpret an E ∈ E(H) as a 1-0 (true-false) measurement [1,6,9]. An observable is a normalized effect-valued measure A on a measurable space (Ω, G) [1,6,9]. That is, A(∆) ∈ E(H) for all ∆ ∈ G, A(∪∆_i) = Σ A(∆_i) for any sequence of mutually disjoint ∆_i ∈ G, and A(Ω) = I. We interpret A(∆) as the effect that occurs when a measurement of A results in an outcome in ∆. A state for S is an effect ρ ∈ E(H) that satisfies tr (ρ) = 1. We denote the set of states on H by S(H). If ρ ∈ S(H), E ∈ E(H), we interpret tr (ρE) as the probability that E occurs (is true) when S is in the state ρ. If A is an observable, its statistics in the state ρ are given by the distribution Φ_ρ^A(∆) = tr (ρA(∆)) [1,6,9]. We now discuss a method for constructing stochastic kernels from observables. Let (Ω, F), (Ω_A, G) be measurable spaces, {α_x : x ∈ Ω} ⊆ S(H) a collection of states and A an observable with outcome space Ω_A. We say that (α, A) is measurable if x → tr (α_x A(∆)) is measurable for all ∆ ∈ G. In this case

v(x, ∆) = tr (α_x A(∆))   (2.1)

is a stochastic kernel, with the corresponding coarse-graining

V_(α,A)(µ)(∆) = ∫ tr (α_x A(∆)) dµ(x)   (2.2)

We interpret (2.1) as the probability that a measurement of A results in an outcome in ∆ when S is in the state α_x.
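In finite dimensions the distribution is a simple trace computation. The following sketch (the particular state ρ and effects below are illustrative choices, not from the text) computes Φ_ρ^A for a two-outcome qubit observable:

```python
import numpy as np

# A two-outcome qubit observable: effects with 0 <= A_i <= I and A_1 + A_2 = I.
A1 = np.array([[0.8, 0.0], [0.0, 0.3]])
A2 = np.eye(2) - A1

# A state: positive, trace 1 (illustrative).
rho = np.array([[0.6, 0.2], [0.2, 0.4]])

# Distribution Phi_rho^A(x) = tr(rho A_x).
dist = [np.trace(rho @ A).real for A in (A1, A2)]
print([round(p, 12) for p in dist])   # two probabilities summing to 1
```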
We now give an application of the previous structure to the study of the dynamics of the system S. Suppose the dynamics of S is described by the strongly continuous unitary group U_t = e^{-itK}, t ∈ [0, ∞), where K is a self-adjoint energy operator (Hamiltonian). For a unit vector φ ∈ H, let φ_t = e^{-itK}φ. We can consider φ_t as a collection of (vector) states indexed by the points of the measurable space ([0, ∞), B), where B is the σ-algebra of Borel subsets of [0, ∞). If A is an observable with outcome space (Ω_A, F), then t → ⟨A(∆)φ_t, φ_t⟩ is continuous for all ∆ ∈ F. It follows that (φ_t, A) is measurable. We conclude that the map v : [0, ∞) × F → [0, 1] given by v(t, ∆) = ⟨A(∆)φ_t, φ_t⟩ is a stochastic kernel called the dynamical kernel for (φ_t, A). We interpret v(t, ∆) as the probability that a measurement of A at time t results in an outcome in ∆. In terms of the dynamical group we have

v(t, ∆) = ⟨A(∆)e^{-itK}φ, e^{-itK}φ⟩ = ⟨e^{itK}A(∆)e^{-itK}φ, φ⟩   (2.5)

The observable ∆ → e^{itK}A(∆)e^{-itK}, which gives the time evolution of A, is the Heisenberg picture of quantum mechanics, while (2.5) in terms of the evolved states gives the Schrödinger picture. The corresponding coarse-graining map is V_(φ_t,A)(µ)(∆) = ∫ ⟨A(∆)φ_t, φ_t⟩ dµ(t).

Let A be an observable with outcome space (Ω_A, F) and let (Ω, G) be a measurable space. If v : Ω_A × G → [0, 1] is a stochastic kernel, we define the observable V • A with outcome space (Ω, G) by (V • A)(∆) = ∫ v(x, ∆) dA(x), so that Φ_ρ^{V•A}(∆) = ∫ v(x, ∆) dΦ_ρ^A(x) for all ρ ∈ S(H). We now show that this idea of coarse-graining extends to observables.
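As a numerical illustration of the dynamical kernel (the Hamiltonian K = σ_x, the initial vector, and the standard-basis observable are all illustrative assumptions, not from the text), for a qubit one can check that v(t, ·) is a probability measure for each t:

```python
import numpy as np

# Illustrative dynamics: K = sigma_x, so e^{-itK} = cos(t) I - i sin(t) sigma_x.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def U(t):
    return np.cos(t) * I2 - 1j * np.sin(t) * sx

# Projective observable in the standard basis.
A = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

phi = np.array([1.0, 0.0], dtype=complex)   # initial vector state

def v(t, x):
    # dynamical kernel v(t, {x}) = <A({x}) phi_t, phi_t>
    phi_t = U(t) @ phi
    return (phi_t.conj() @ (A[x] @ phi_t)).real

for t in (0.0, 0.5, 1.0):
    print(t, round(v(t, 0) + v(t, 1), 12))   # normalization holds at each t
```

Here v(t, {0}) = cos²t oscillates in t (a Rabi-type oscillation), while v(t, Ω_A) = 1 for all t, as a stochastic kernel requires.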
Proof. For all ρ ∈ S(H), ∆ ∈ G we obtain Φ_ρ^{V•A}(∆) = tr (ρ(V • A)(∆)) = ∫ v(x, ∆) dΦ_ρ^A(x) = V(Φ_ρ^A)(∆). The observable V • A is unique because two observables on H with the same distributions for every ρ ∈ S(H) are identical [1,6,9].
If A_i, i = 1, 2, . . . , n, are observables on H with the same outcome set and 0 ≤ λ_i ≤ 1 with Σλ_i = 1, it is clear that Σλ_iA_i is again an observable. Thus, such observables form a convex set. We conclude that A → V • A is an affine map because V • (Σλ_iA_i) = Σλ_i(V • A_i).

Let (Ω, F), (Ω_A, G) be measurable spaces and let (α, A) be measurable with corresponding stochastic kernel v(x, ∆) and coarse-graining V_(α,A) given by (2.1) and (2.2). If B is an observable with outcome space Ω, we obtain the following result.
An observable B with outcome space (Ω_B, F_B) is part of an observable A if there exists a measurable surjection f : Ω_A → Ω_B such that B(∆) = A(f^{-1}(∆)) for all ∆ ∈ F_B. If B is part of A, then Φ_ρ^B is part of Φ_ρ^A for all ρ ∈ S(H).

Proof. If B is part of A, there exists a measurable surjection f : Ω_A → Ω_B such that B(∆) = A(f^{-1}(∆)). If v(x, ∆) = δ_{f(x)}(∆) is the corresponding stochastic kernel, then for ∆ ∈ F_B we obtain Φ_ρ^B(∆) = tr (ρB(∆)) = tr (ρA(f^{-1}(∆))) = Φ_ρ^A(f^{-1}(∆)), so Φ_ρ^B is part of Φ_ρ^A for all ρ ∈ S(H).

Two observables B, C coexist if there exists an observable A such that B and C are both parts of A [1,6,7,8]. It is well known that, unlike in Lemma 1.5, two observables need not coexist [1,6,8]. Let A be an observable with outcome space (Ω_A, F) and let {Γ_1, Γ_2, . . . , Γ_n} ⊆ F be a partition of Ω_A. The observable B with outcome space Ω_1 = {1, 2, . . . , n} given by B({i}) = A(Γ_i) is called a discretization of A. Then B is part of A via the measurable surjection f : Ω_A → Ω_1 with f(x) = i when x ∈ Γ_i. If v(x, ∆) = δ_{f(x)}(∆) is the corresponding stochastic kernel we obtain a zero-one kernel, and it follows that for all ∆ ⊆ Ω_1 we obtain B(∆) = A(∪{Γ_i : i ∈ ∆}).

Corollary 2.5. Any two discretizations of an observable coexist.

Let T(H) be the set of trace-class operators on H. An operation on H is a trace non-increasing, completely positive linear map T : T(H) → T(H) [1,6,7,9]. If an operation T preserves the trace, then T is called a channel on H. An instrument on H with outcome space Ω_I is an operation-valued measure I on (Ω_I, F) such that I(Ω_I) is a channel. The statistics of an instrument I for a state ρ ∈ S(H) is given by its distribution Φ_ρ^I(∆) = tr [I(∆)ρ] for all ∆ ∈ F. Of course, Φ_ρ^I is a probability measure on (Ω_I, F). We say that an instrument I measures an observable A if Ω_A = Ω_I and for all ρ ∈ S(H) and ∆ ∈ F we have tr [I(∆)ρ] = tr [ρA(∆)]. It can be shown that an instrument measures a unique observable, but an observable is measured by many instruments [6]. If I measures A we write Î = A. We think of I as an apparatus that can be employed to measure the observable Î and conclude that there are many such apparatuses. Although Î reproduces the statistics of I, I gives more information than Î. This is because when a measurement of I produces a result in ∆ ∈ F, the instrument updates the state ρ to the conditional state I(∆)ρ/tr [I(∆)ρ]. If v is a stochastic kernel, coarse-graining applies to instruments just as it does to observables, and it follows that V • I is an instrument. It is easy to check that instruments form a convex set and that I → V • I is affine.
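One standard way to realize an instrument measuring a given finite observable A is the Lüders form I({x})ρ = A_x^{1/2} ρ A_x^{1/2}. The sketch below (with illustrative effects and state, not from the text) checks that this instrument reproduces the statistics of A and yields the updated conditional state:

```python
import numpy as np

def sqrtm_psd(M):
    # square root of a positive semidefinite matrix via eigendecomposition
    w, u = np.linalg.eigh(M)
    return u @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ u.conj().T

# Illustrative two-outcome qubit observable.
A = [np.array([[0.7, 0.1], [0.1, 0.4]]), None]
A[1] = np.eye(2) - A[0]

rho = np.array([[0.5, 0.25], [0.25, 0.5]])   # illustrative state

# Luders instrument measuring A: I({x}) rho = A_x^{1/2} rho A_x^{1/2}.
def instr(x, state):
    r = sqrtm_psd(A[x])
    return r @ state @ r

for x in (0, 1):
    p_instr = np.trace(instr(x, rho)).real
    p_obs = np.trace(rho @ A[x]).real
    print(x, abs(p_instr - p_obs) < 1e-12)   # statistics agree

# Updated (conditional) state after outcome x = 0:
post = instr(0, rho) / np.trace(instr(0, rho)).real
```

Any other instrument with the same statistics measures the same observable A but performs a different state update, which is the sense in which an instrument carries more information than the observable it measures.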

Finite Observables
In this section, we restrict our attention to finite observables. If A is an observable with Ω_A = {x_1, . . . , x_n}, then A is completely determined by the effects A_x = A({x}), x ∈ Ω_A, since Φ_ρ^A(x) = tr (ρA_x) for all ρ ∈ S(H). If B is an observable with finite outcome space Ω_B and there is a stochastic matrix v such that B_y = Σ_x v(x, y)A_x for all y ∈ Ω_B, we then say that B is a post-processing of A [5,6]. Thus, post-processing is the same as coarse-graining for finite observables.
Proof. Suppose V : Prob (Ω_A) → Prob (Ω_B) is affine and B = V • A. By Theorem 1.7, V is a stochastic matrix and for all ρ ∈ S(H) we obtain Φ_ρ^B(y) = (VΦ_ρ^A)(y) = Σ_x V(x, y) tr (ρA_x) = tr (ρ Σ_x V(x, y)A_x) for all y ∈ Ω_B, so B_y = Σ_x V(x, y)A_x and B is a post-processing of A. Conversely, suppose B_y = Σ_x V(x, y)A_x for a stochastic matrix V. Then for all ρ ∈ S(H) and y ∈ Ω_B we obtain Φ_ρ^B(y) = tr (ρB_y) = tr (ρ Σ_x V(x, y)A_x) = (VΦ_ρ^A)(y), so B = V • A.

We can identify an observable A with the set of its effects {A_x : x ∈ Ω_A} [5,6]. We say that A is atomic if every A_x is a one-dimensional projection. If A is atomic, there exists an orthonormal basis {φ_i} for H such that A_{x_i} = |φ_i⟩⟨φ_i|, i = 1, 2, . . . , n.
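The theorem is easy to check numerically. In this sketch (the effects, stochastic matrix, and state are all illustrative choices, not from the text) a three-outcome qubit observable is post-processed to a two-outcome one, and the distributions transform by the stochastic matrix:

```python
import numpy as np

# Illustrative three-outcome qubit observable A (effects sum to I).
A = [np.diag([0.5, 0.2]), np.diag([0.3, 0.3]), np.diag([0.2, 0.5])]

# Stochastic matrix v(x, y): rows indexed by x, columns by y, rows sum to 1.
v = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])

# Post-processing: B_y = sum_x v(x, y) A_x.
B = [sum(v[x, y] * A[x] for x in range(3)) for y in range(2)]

print(np.allclose(B[0] + B[1], np.eye(2)))    # B is again an observable

# Distributions relate by the coarse-graining: Phi_rho^B = V(Phi_rho^A).
rho = np.array([[0.7, 0.0], [0.0, 0.3]])
phiA = np.array([np.trace(rho @ a).real for a in A])
phiB = np.array([np.trace(rho @ b).real for b in B])
print(np.allclose(phiB, v.T @ phiA))          # coarse-graining of statistics
```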
Notice that A_x is rank 1 if and only if A_x = λP, where 0 < λ ≤ 1 and P is a one-dimensional projection. If A = {A_x : x ∈ Ω_A}, B = {B_y : y ∈ Ω_B} are observables on H, their sequential product A • B is the observable with outcome space Ω_A × Ω_B given by (A • B)_{(x,y)} = A_x^{1/2} B_y A_x^{1/2} [3,5].
We also define the observable B conditioned by the observable A as (B | A)_y = Σ_x A_x^{1/2} B_y A_x^{1/2}. It can be shown that (B | A) coexists with A [5]. If µ is a stochastic matrix of the appropriate size, then

[A • (µ • B)]_{(x,y)} = Σ_z µ(z, y) A_x^{1/2} B_z A_x^{1/2}   (3.1)

and if ν is a stochastic matrix of the appropriate size, then

[(ν • A) • B]_{(x,y)} = (Σ_z ν(z, x)A_z)^{1/2} B_y (Σ_z ν(z, x)A_z)^{1/2}   (3.2)

Notice that (3.2) is much more complicated than (3.1). If A is sharp (that is, every A_x is a projection), then (3.1), (3.2) become

[A • (µ • B)]_{(x,y)} = Σ_z µ(z, y) A_x B_z A_x   (3.3)

and

[(ν • A) • B]_{(x,y)} = Σ_{z,w} [ν(z, x)ν(w, x)]^{1/2} A_z B_y A_w   (3.4)

If A and B are atomic with A_x = |φ_x⟩⟨φ_x| and B_y = |ψ_y⟩⟨ψ_y|, then (3.1), (3.2) become

[A • (µ • B)]_{(x,y)} = Σ_z µ(z, y)|⟨φ_x, ψ_z⟩|^2 |φ_x⟩⟨φ_x|   (3.5)

and

[(ν • A) • B]_{(x,y)} = |η_{xy}⟩⟨η_{xy}|, where η_{xy} = Σ_z ν(z, x)^{1/2} ⟨φ_z, ψ_y⟩φ_z   (3.6)

Notice from (3.5) and (3.6) that both A • (µ • B) and (ν • A) • B are rank 1 observables. The next lemma shows that post-processing and conditioning interact in a regular way.
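A small numerical sketch of the sequential product of a post-processed observable (all matrices below are illustrative choices, not from the text): for atomic A, B the effects of (ν • A) • B are rank 1 and sum to the identity, as the formulas above predict:

```python
import numpy as np

def sqrtm_psd(M):
    # square root of a positive semidefinite matrix via eigendecomposition
    w, u = np.linalg.eigh(M)
    return u @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ u.conj().T

def seq_prod(A, B):
    # sequential product: (A . B)_(x,y) = A_x^{1/2} B_y A_x^{1/2}
    out = {}
    for x, Ax in enumerate(A):
        r = sqrtm_psd(Ax)
        for y, By in enumerate(B):
            out[(x, y)] = r @ By @ r
    return out

def post(nu, A):
    # post-processing: (nu . A)_x = sum_z nu[z, x] A_z
    # (columns of nu sum to 1, so the new effects sum to I)
    return [sum(nu[z, x] * A[z] for z in range(len(A)))
            for x in range(nu.shape[1])]

# Illustrative atomic qubit observables: the standard basis and a rotated basis.
c, s = np.cos(0.4), np.sin(0.4)
A = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
B = [np.outer(p, p) for p in (np.array([c, s]), np.array([-s, c]))]

nu = np.array([[0.7, 0.3], [0.3, 0.7]])   # stochastic matrix

C = seq_prod(post(nu, A), B)              # the observable (nu . A) . B
print(np.allclose(sum(C.values()), np.eye(2)))                 # observable
print(all(np.linalg.matrix_rank(E) == 1 for E in C.values()))  # rank 1
```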

SIC Observables
This section is more speculative than the previous ones and we do not come to many definite conclusions. A finite observable A is informationally complete (IC) if tr (ρA_x) = tr (σA_x) for all x ∈ Ω_A implies ρ = σ. For dim H = d, we consider observables satisfying (S1) |Ω_A| = d^2 and (S2) A has rank 1, that is, every A_x is rank 1. It can be shown that d^2 is the smallest cardinality for the outcome space of an IC observable [6]. Also, an IC observable A with |Ω_A| = d^2 is symmetric if tr (A_xA_y) has the same value for all x ≠ y [6]. A symmetric IC observable is called a SIC observable. An important unsolved problem is whether SIC observables exist for every finite dimensional Hilbert space [6]. It is not even known whether high dimensional SIC observables exist. We would like to propose a possible method for attacking this problem. Unfortunately, we have not been able to complete this method and we leave this to future work. Let dim H = d and let A = {|φ_x⟩⟨φ_x| : x ∈ Ω_A}, B = {|ψ_y⟩⟨ψ_y| : y ∈ Ω_B} be atomic observables. For a d × d stochastic matrix ν we define the observable C = (ν • A) • B. For example, (µ • Q) • P of Example 4 is such an observable. Letting η_{xy} ∈ H be the vector given by

η_{xy} = Σ_z ν(z, x)^{1/2} ⟨φ_z, ψ_y⟩φ_z   (4.1)

we conclude from (3.6) that for all (x, y) ∈ Ω_C we have that

C_{(x,y)} = |η_{xy}⟩⟨η_{xy}|   (4.2)

It immediately follows that C satisfies (S1) and (S2). We say that a stochastic matrix ν is doubly stochastic if its columns, as well as its rows, sum to one. We now illustrate our SIC method in the qubit case H = C^2. Let φ_1 = (1, 0), φ_2 = (0, 1) be the standard basis for H and let {ψ_1, ψ_2} be a basis for H such that {φ_i}, {ψ_j} are MUB (mutually unbiased bases). For example, we could use ψ_1 = (1/√2)(1, 1), ψ_2 = (1/√2)(1, -1). Letting η_{xy}, x, y ∈ {1, 2}, be the vectors defined by (4.1), and setting D = diag(ν(1, 1)^{1/2}, ν(2, 1)^{1/2}), E = diag(ν(1, 2)^{1/2}, ν(2, 2)^{1/2}), we have that η_11 = Dψ_1, η_12 = Dψ_2, η_21 = Eψ_1, η_22 = Eψ_2.
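Under these choices the construction can be checked numerically. The sketch below (with an illustrative doubly stochastic ν; indices are 0-based) verifies that C is an observable satisfying (S1) and (S2), and that its four effects are real symmetric and hence linearly dependent in the 4-dimensional space of Hermitian matrices, so C is not IC:

```python
import numpy as np

# Standard basis {phi_1, phi_2} and a mutually unbiased basis {psi_1, psi_2}.
psi = [np.array([1.0, 1.0]) / np.sqrt(2.0),
       np.array([1.0, -1.0]) / np.sqrt(2.0)]

# Illustrative doubly stochastic matrix (rows and columns sum to 1).
nu = np.array([[0.7, 0.3], [0.3, 0.7]])

# eta_xy = (nu . A)_x^{1/2} psi_y; for the standard-basis observable A,
# (nu . A)_x^{1/2} is the diagonal matrix of square roots of column x of nu.
C = {}
for x in range(2):
    D = np.diag(np.sqrt(nu[:, x]))
    for y in range(2):
        eta = D @ psi[y]
        C[(x, y)] = np.outer(eta, eta)    # C_(x,y) = |eta_xy><eta_xy|

print(np.allclose(sum(C.values()), np.eye(2)))                 # observable
print(all(np.linalg.matrix_rank(E) == 1 for E in C.values()))  # (S2) holds

# IC would require the 4 effects to be linearly independent, but all four
# are real symmetric 2x2 matrices, which span at most 3 dimensions:
M = np.array([E.reshape(-1) for E in C.values()])
print(np.linalg.matrix_rank(M))           # 3, so C is not IC

# The effects still satisfy the non-commutativity condition (b) below:
print(np.allclose(C[(0, 0)] @ C[(0, 1)], C[(0, 1)] @ C[(0, 0)]))  # False
```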
It is possible that for other choices of {ψ_1, ψ_2} we obtain an IC observable C. It is also possible that for higher dimensional spaces we obtain SIC observables using this method. Even though C is not IC, it satisfies two necessary (but not sufficient) conditions for IC [6, Prop. 3.35]. If C is IC, then: (a) C_{(j,k)} does not have both eigenvalues 0, 1, and (b) for all j, k there exist j′, k′ such that C_{(j,k)}C_{(j′,k′)} ≠ C_{(j′,k′)}C_{(j,k)}. Indeed, (a) is clear and (b) follows from the fact that C_{(1,1)}C_{(1,2)} ≠ C_{(1,2)}C_{(1,1)} and C_{(2,2)}C_{(2,1)} ≠ C_{(2,1)}C_{(2,2)}.