
Coarse-Graining of Observables

Department of Mathematics, University of Denver, Denver, CO 80208, USA
Quantum Rep. 2022, 4(4), 401-417; https://doi.org/10.3390/quantum4040029
Submission received: 23 August 2022 / Revised: 25 September 2022 / Accepted: 27 September 2022 / Published: 3 October 2022
(This article belongs to the Special Issue Exclusive Feature Papers of Quantum Reports)

Abstract:
We first define the coarse-graining of probability measures in terms of stochastic kernels. We define when a probability measure is part of another probability measure and say that two probability measures coexist if they are both parts of a single probability measure. We then show that any two probability measures coexist. We extend these concepts to observables and instruments and mention that two observables need not coexist. We define the discretization of an observable as a special case of coarse-graining and show that discretizations have 0–1 stochastic kernels. We next consider finite observables and instruments and show that in these cases, stochastic kernels are replaced by stochastic matrices. We also show that coarse-graining is the same as post-processing in this finite case. We then consider sequential products of observables and discuss the sequential product of a post-processed observable with another observable. We briefly discuss SIC observables and the example of qubit observables.

1. Introduction

In this introduction, we present a general discussion, with detailed definitions and results given in later sections. A coarse-graining of an observable A is an imprecise version of A [1,2]. Generally speaking, coarse-graining means a reduction in the statistical description of a system. If A and B are observables, we say that B is a coarse-graining of A if the probability distribution of B is an affine function of the probability distribution of A. A specific type of coarse-graining is when B is an unsharp or fuzzy version of A [3,4,5].
In Section 2, we discuss a coarse-graining of probability measures that involves stochastic kernels. Since probability measures are the classical counterparts of quantum states, we can consider this as pertaining to coarse-graining in classical physics. This work is applied in Section 3 to studying coarse-graining of quantum observables and instruments. Section 3 begins with an application for the dynamics of a quantum system described by a strongly continuous unitary group. We then discuss parts and discretizations of observables and show that any two discretizations of observables coexist.
Section 4 discusses finite observables and shows that in this case coarse-graining is the same as post-processing [5,6]. Rank 1, sharp and atomic observables are considered next. Sequential products and conditioned observables are treated. The concepts of this section are illustrated with the example of finite position and momentum observables where a crucial role is played by the finite Fourier transform.
Section 5 is more speculative than the previous sections and we do not arrive at many definite conclusions. This section discusses symmetric informationally complete (SIC) observables. An important unsolved problem is whether SIC observables exist for every finite-dimensional Hilbert space [5]. It is not even known whether high-dimensional SIC observables exist. Applying some of the work in Section 4, we propose a possible method for attacking this problem. Unfortunately, we have not been able to complete the method and leave this for future work.

2. Coarse-Graining of Measures

We denote the set of probability measures on a measurable space (Ω, F) by Prob(Ω, F). Let (Ω1, F1), (Ω2, F2) be measurable spaces. A map v : Ω1 × F2 → [0, 1] such that x ↦ v(x, Δ) is measurable for all Δ ∈ F2 and v(x, ·) ∈ Prob(Ω2, F2) for all x ∈ Ω1 is called a stochastic kernel [5]. If v is a stochastic kernel, define
V : Prob(Ω1, F1) → Prob(Ω2, F2)
by V(μ)(Δ) = ∫ v(x, Δ) μ(dx). We call v the stochastic kernel for V and we say that V(μ) is a coarse-graining of μ. We think of V(μ) as an imprecise version of μ ∈ Prob(Ω1, F1) on (Ω2, F2). Notice that V is an affine map because if 0 ≤ λ_i ≤ 1, Σλ_i = 1, then
V(Σλ_iμ_i)(Δ) = ∫ v(x, Δ) Σλ_iμ_i(dx) = Σλ_i ∫ v(x, Δ) μ_i(dx) = Σλ_i V(μ_i)(Δ)
for all Δ ∈ F2. Moreover, if f : Ω2 → ℝ is measurable, then
∫_{Ω2} f(y) V(μ)(dy) = ∫_{Ω2} f(y) ∫_{Ω1} v(x, dy) μ(dx) = ∫_{Ω1} ∫_{Ω2} f(y) v(x, dy) μ(dx)
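In the finite case, these abstract definitions can be sketched numerically: the kernel v becomes a row-stochastic matrix whose i-th row is the probability measure v(x_i, ·), and V(μ) is the corresponding mixture. The following is a minimal illustration with made-up numbers, not taken from the paper:

```python
import numpy as np

# Finite sketch of a stochastic kernel: row i of v is the probability
# measure v(x_i, .), and the coarse-graining is the mixture
# V(mu)({y_j}) = sum_i v(x_i, {y_j}) mu(x_i).
def coarse_grain(v, mu):
    """v: (n, m) array with each row a probability vector; mu: length-n measure."""
    assert np.allclose(v.sum(axis=1), 1.0)   # each v(x_i, .) is a probability
    return mu @ v

v = np.array([[0.7, 0.3],
              [0.2, 0.8],
              [0.5, 0.5]])          # illustrative kernel on 3 x 2 points
mu = np.array([0.5, 0.25, 0.25])    # illustrative measure on Omega_1
out = coarse_grain(v, mu)           # -> array([0.525, 0.475])
```

Affinity of V is immediate here, since matrix multiplication is linear in mu.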
Example 1.
The map v(x, Δ) = χ_Δ(x) is a stochastic kernel from (Ω, F) to (Ω, F). The corresponding coarse-graining map V : Prob(Ω, F) → Prob(Ω, F) satisfies
V(μ)(Δ) = ∫ v(x, Δ) μ(dx) = ∫ χ_Δ(x) μ(dx) = ∫_Δ μ(dx) = μ(Δ)
for all Δ F . Hence, V ( μ ) = μ so V is the identity map.
Example 2.
Let ν ∈ Prob(Ω2, F2) and let v : Ω1 × F2 → [0, 1] be defined by v(x, Δ) = ν(Δ) for all x ∈ Ω1, Δ ∈ F2. Then v is a stochastic kernel and the corresponding coarse-graining map is
V(μ)(Δ) = ∫ v(x, Δ) μ(dx) = ν(Δ) ∫ μ(dx) = ν(Δ)
Hence, V is the constant map V ( μ ) = ν .
We define the Dirac measure δ_x at x on (Ω, F) by δ_x(Δ) = 1 if and only if x ∈ Δ.
Lemma 1. 
(a) If v : Ω1 × F2 → [0, 1] is a stochastic kernel for V : Prob(Ω1, F1) → Prob(Ω2, F2), then v(x, Δ) = V(δ_x)(Δ). (b) If V : Prob(Ω1, F1) → Prob(Ω2, F2) has a stochastic kernel v, then v is unique.
Proof. 
(a) If v is a stochastic kernel for V, then
V(δ_x)(Δ) = ∫ v(y, Δ) δ_x(dy) = v(x, Δ)
(b) follows from (a). □
It can be shown that an arbitrary affine map V : Prob(Ω1, F1) → Prob(Ω2, F2) need not have a stochastic kernel and hence need not be a coarse-graining. One way to see this is to construct such a map V where x ↦ V(δ_x)(Δ) is not measurable for some Δ ∈ F2. We leave the details of this to the reader. Such a V cannot have a stochastic kernel v because if it did, then by Lemma 1(a), v(x, Δ) = V(δ_x)(Δ), so x ↦ v(x, Δ) would not be measurable for some Δ ∈ F2, which is a contradiction.
Let (Ω_j, F_j), j = 1, 2, 3, be measurable spaces and let v : Ω1 × F2 → [0, 1], u : Ω2 × F3 → [0, 1] be stochastic kernels. Define u∘v : Ω1 × F3 → [0, 1] by (u∘v)(x, Δ) = ∫_{Ω2} u(y, Δ) v(x, dy). Then u∘v is a stochastic kernel.
Lemma 2.
Let V : Prob(Ω1, F1) → Prob(Ω2, F2) and U : Prob(Ω2, F2) → Prob(Ω3, F3) be coarse-grainings with corresponding stochastic kernels v, u. Then their composition U∘V : Prob(Ω1, F1) → Prob(Ω3, F3) has stochastic kernel u∘v.
Proof. 
For all μ Prob ( Ω 1 , F 1 ) , Δ F 3 we have that
(U∘V)(μ)(Δ) = ∫_{Ω2} u(y, Δ) V(μ)(dy) = ∫_{Ω1} ∫_{Ω2} u(y, Δ) v(x, dy) μ(dx) = ∫_{Ω1} (u∘v)(x, Δ) μ(dx)
Hence, the stochastic kernel for U∘V is u∘v. □
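Lemma 2 is easy to check numerically in the finite case, where composing kernels is just matrix multiplication of the row-stochastic kernel matrices (the matrices below are illustrative, not from the paper):

```python
import numpy as np

# With row-stochastic matrices v (n x m) and u (m x k), the composed
# kernel (u o v)(x_i, {z_l}) = sum_j u(y_j, {z_l}) v(x_i, {y_j}) is the
# matrix product v @ u, so U(V(mu)) = mu @ v @ u.
v = np.array([[0.7, 0.3],
              [0.2, 0.8]])
u = np.array([[0.6, 0.4],
              [0.1, 0.9]])
uv = v @ u                                   # kernel of the composition U o V
mu = np.array([0.5, 0.5])
assert np.allclose((mu @ v) @ u, mu @ uv)    # Lemma 2 in matrix form
assert np.allclose(uv.sum(axis=1), 1.0)      # u o v is again a stochastic kernel
```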
We say that μ2 ∈ Prob(Ω2, F2) is part of μ1 ∈ Prob(Ω1, F1) if there exists a measurable function f : Ω1 → Ω2 such that μ2(Δ) = μ1(f⁻¹(Δ)) for all Δ ∈ F2. Define V_f : Prob(Ω1, F1) → Prob(Ω2, F2) by (V_f μ)(Δ) = μ(f⁻¹(Δ)). Thus, μ2 is part of μ1 if and only if μ2 = V_f(μ1) for some measurable function f : Ω1 → Ω2. Notice that V_f is affine because
(V_f Σλ_iμ_i)(Δ) = Σλ_i μ_i(f⁻¹(Δ)) = Σλ_i (V_f μ_i)(Δ)
and hence, V_f(Σλ_iμ_i) = Σλ_i V_f(μ_i).
Lemma 3.
A map v : Ω1 × F2 → [0, 1] is the stochastic kernel for V_f if and only if v(x, Δ) = χ_{f⁻¹(Δ)}(x) for all x ∈ Ω1, Δ ∈ F2.
Proof. 
If v(x, Δ) = χ_{f⁻¹(Δ)}(x), then v is a stochastic kernel and
∫ v(x, Δ) μ(dx) = ∫ χ_{f⁻¹(Δ)}(x) μ(dx) = ∫_{f⁻¹(Δ)} μ(dx) = μ(f⁻¹(Δ)) = (V_f μ)(Δ)
for all Δ ∈ F2. Hence, v is the stochastic kernel for V_f. Since stochastic kernels are unique, the converse holds. □
Lemma 4.
If f : Ω1 → Ω2 and g : Ω2 → Ω3 are measurable, then V_g ∘ V_f = V_{g∘f} and the stochastic kernel for V_{g∘f} is w(x, Δ) = χ_{f⁻¹(g⁻¹(Δ))}(x).
Proof. 
For all μ Prob ( Ω 1 , F 1 ) and Δ F 3 we have that
(V_g ∘ V_f)(μ)(Δ) = (V_f μ)(g⁻¹(Δ)) = μ(f⁻¹(g⁻¹(Δ))) = μ((g∘f)⁻¹(Δ)) = (V_{g∘f} μ)(Δ)
Hence, V_g ∘ V_f = V_{g∘f}. It follows from Lemma 3 that the stochastic kernel for V_{g∘f} is
w(x, Δ) = χ_{(g∘f)⁻¹(Δ)}(x) = χ_{f⁻¹(g⁻¹(Δ))}(x) □
We say that two probability measures coexist if they are both parts of a single probability measure.
Lemma 5.
If μ1 ∈ Prob(Ω1, F1) and μ2 ∈ Prob(Ω2, F2), then μ1, μ2 coexist.
Proof. 
Define μ ∈ Prob(Ω1 × Ω2, F1 × F2) by μ = μ1 × μ2 and define f : Ω1 × Ω2 → Ω1 by f(x, y) = x and g : Ω1 × Ω2 → Ω2 by g(x, y) = y. Then f and g are measurable and if Δ1 ∈ F1 we obtain
μ(f⁻¹(Δ1)) = μ(Δ1 × Ω2) = μ1(Δ1) μ2(Ω2) = μ1(Δ1)
Hence, μ1 is a part of μ. Similarly, if Δ2 ∈ F2, then μ(g⁻¹(Δ2)) = μ2(Δ2) so μ2 is a part of μ. □
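The proof of Lemma 5 can be checked numerically in the finite case, where the product measure is an outer product and the two parts are its marginals (the example measures below are made up):

```python
import numpy as np

# Finite sketch of Lemma 5: the product measure mu = mu1 x mu2 on
# Omega1 x Omega2 has mu1 and mu2 as its marginals, so any two
# probability measures are parts of a single probability measure.
mu1 = np.array([0.2, 0.8])
mu2 = np.array([0.5, 0.3, 0.2])
mu = np.outer(mu1, mu2)                   # mu({(x, y)}) = mu1({x}) mu2({y})
assert np.allclose(mu.sum(axis=1), mu1)   # part via f(x, y) = x
assert np.allclose(mu.sum(axis=0), mu2)   # part via g(x, y) = y
```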
Let (Ω, F) be a measurable space and let (Ω1, 2^{Ω1}) be a finite measurable space with Ω1 = {1, 2, …, n}. Let B_1, B_2, …, B_n be a measurable partition of Ω. That is, B_i ∩ B_j = ∅ for i ≠ j and ∪B_i = Ω. Define
V : Prob(Ω, F) → Prob(Ω1, 2^{Ω1})
by V(μ)(Δ) = Σ{μ(B_i) : i ∈ Δ}. Then V is affine because
V(Σ_j λ_jμ_j)(Δ) = Σ_{i∈Δ} Σ_j λ_jμ_j(B_i) = Σ_j λ_j Σ_{i∈Δ} μ_j(B_i) = Σ_j λ_j V(μ_j)(Δ)
so that V(Σλ_jμ_j) = Σλ_j V(μ_j). We call V a discretization map and V(μ) a discretization of μ. A stochastic kernel v(x, Δ) is called a 0–1 stochastic kernel if v(x, Δ) = 0 or 1 for all x, Δ.
Theorem 1.
An affine map V : Prob(Ω, F) → Prob(Ω1, 2^{Ω1}) is a discretization if and only if V has a 0–1 stochastic kernel.
Proof. 
Suppose V is a discretization and V has stochastic kernel v(x, Δ). Then by Lemma 1 we obtain for j = 1, 2, …, n that
v(x, {j}) = V(δ_x)({j}) = Σ{δ_x(B_i) : i ∈ {j}} = δ_x(B_j)
Hence,
v(x, Δ) = V(δ_x)(Δ) = Σ_{j∈Δ} V(δ_x)({j}) = Σ_{j∈Δ} δ_x(B_j) = δ_x(∪_{j∈Δ} B_j) = χ_{∪_{j∈Δ} B_j}(x)
for all x ∈ Ω, Δ ∈ 2^{Ω1}. To show that v(x, Δ) is actually the stochastic kernel for V we have that
∫ v(x, Δ) μ(dx) = ∫ χ_{∪_{j∈Δ} B_j}(x) μ(dx) = ∫_{∪_{j∈Δ} B_j} μ(dx) = Σ_{j∈Δ} μ(B_j) = V(μ)(Δ)
Of course, v(x, Δ) is a 0–1 stochastic kernel. Conversely, suppose v(x, Δ) is a 0–1 stochastic kernel for
V : Prob(Ω, F) → Prob(Ω1, 2^{Ω1})
Then v(x, Δ) = Σ_{j∈Δ} v(x, {j}) for all x ∈ Ω, Δ ∈ 2^{Ω1}. Let B_i, i = 1, 2, …, n, be the measurable sets
B_i = {x ∈ Ω : v(x, {i}) = 1}
If x ∈ B_i ∩ B_j for i ≠ j, then v(x, {i}) = v(x, {j}) = 1 and v(x, {i, j}) = 2, which is a contradiction. Hence, B_i ∩ B_j = ∅ for i ≠ j. If x ∈ Ω and v(x, {i}) = 0 for all i = 1, 2, …, n, then
v(x, Ω1) = Σ_{i∈Ω1} v(x, {i}) = 0
which is a contradiction. Hence, for every x ∈ Ω there exists an i such that v(x, {i}) = 1, so ∪B_i = Ω. We conclude that {B_i} is a measurable partition of Ω. Since
(Vμ)({i}) = ∫ v(x, {i}) μ(dx) = ∫_{B_i} μ(dx) = μ(B_i)
we have for all Δ ∈ 2^{Ω1} that
(Vμ)(Δ) = Σ_{i∈Δ} (Vμ)({i}) = Σ_{i∈Δ} μ(B_i)
We conclude that V is a discretization map. □
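A finite sketch of Theorem 1: a measurable partition yields a 0–1 stochastic matrix whose rows are Dirac measures, and the induced map sends μ to (μ(B_1), …, μ(B_k)). The partition and measure below are illustrative:

```python
import numpy as np

# Discretization map from a partition B_1, ..., B_k of a finite Omega:
# the kernel v[x, i] = 1 iff x in B_i is a 0-1 stochastic matrix, and
# V(mu)({i}) = mu(B_i).
omega = range(6)
partition = [{0, 1}, {2}, {3, 4, 5}]                   # B_1, B_2, B_3
v = np.array([[1.0 if x in B else 0.0 for B in partition] for x in omega])
assert set(v.ravel()) <= {0.0, 1.0}                    # 0-1 kernel
assert np.allclose(v.sum(axis=1), 1.0)                 # each row is a Dirac measure
mu = np.full(6, 1 / 6)                                 # uniform measure on Omega
discretized = mu @ v                                   # (mu(B_1), mu(B_2), mu(B_3))
```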
When we consider a finite measurable space (Ω, F) we always assume that F = 2^Ω, so F need not be specified. For Ω = {x_1, x_2, …, x_n} we identify μ ∈ Prob(Ω) with the column vector with entries μ(x_1), μ(x_2), …, μ(x_n), where we write μ(x_i) = μ({x_i}), i = 1, 2, …, n. An m × n matrix M = [m_ij] is a stochastic matrix if 0 ≤ m_ij ≤ 1 and Σ_{j=1}^m m_ij = 1 for all i = 1, 2, …, n. In this finite case, the stochastic kernels are replaced by stochastic matrices. This is because, in the finite case, if v(x, Δ) is a stochastic kernel, then m_ij = v(x_i, {y_j}) is a stochastic matrix and conversely, if [m_ij] is a stochastic matrix, then
v(x_i, Δ) = Σ{m_ij : y_j ∈ Δ}
is a stochastic kernel.
Theorem 2.
Let Ω1 = {x_1, x_2, …, x_n}, Ω2 = {y_1, y_2, …, y_m} and let V : Prob(Ω1) → Prob(Ω2) be affine. Then there exists a unique m × n stochastic matrix Ṽ such that for every ν ∈ Prob(Ω1) we have V(ν) = Ṽν. Conversely, if M is an m × n stochastic matrix, then there exists an affine map V : Prob(Ω1) → Prob(Ω2) such that Ṽ = M.
Proof. 
Let V : Prob(Ω1) → Prob(Ω2) be affine. Since every element of Prob(Ω2) is a convex combination of the δ_{y_j}, j = 1, 2, …, m, we have that
V(δ_{x_i}) = Σ_{j=1}^m μ_ij δ_{y_j}
where 0 ≤ μ_ij ≤ 1 and Σ_{j=1}^m μ_ij = 1, i = 1, 2, …, n. We conclude that Ṽ = [μ_ij] is an m × n stochastic matrix and μ_ij = V(δ_{x_i})({y_j}). Letting ν ∈ Prob(Ω1) we obtain ν = Σ_{i=1}^n ν_i δ_{x_i} where ν_i = ν({x_i}), i = 1, 2, …, n. Since 0 ≤ ν_i ≤ 1, Σ_{i=1}^n ν_i = 1 and V is affine, we conclude that
V(ν) = V(Σ_{i=1}^n ν_i δ_{x_i}) = Σ_{i=1}^n ν_i V(δ_{x_i}) = Σ_{i=1}^n ν_i Σ_{j=1}^m μ_ij δ_{y_j} = Σ_{i,j} μ_ij ν(x_i) δ_{y_j}
It follows that
V(ν) = [V(ν)(y_1), V(ν)(y_2), …, V(ν)(y_m)]ᵀ = [Σ_i μ_i1 ν(x_i), Σ_i μ_i2 ν(x_i), …, Σ_i μ_im ν(x_i)]ᵀ = Ṽ[ν(x_1), ν(x_2), …, ν(x_n)]ᵀ = Ṽν
To show that Ṽ is unique, suppose V(ν) = Mν where M = [M_ij] is an m × n matrix. We then obtain
M_ij = ⟨δ_{y_j}, Mδ_{x_i}⟩ = ⟨δ_{y_j}, V(δ_{x_i})⟩ = ⟨δ_{y_j}, Σ_{k=1}^m μ_ik δ_{y_k}⟩ = μ_ij = Ṽ_ij
Conversely, let M = [μ_ij] be an m × n stochastic matrix. Define V : Prob(Ω1) → Prob(Ω2) by V(δ_{x_i}) = Σ_{j=1}^m μ_ij δ_{y_j}, i = 1, 2, …, n, and extend V affinely to all of Prob(Ω1). By our previous work, Ṽ = M. □
We conclude that in the finite case, every affine map V : Prob ( Ω 1 ) Prob ( Ω 2 ) is a coarse-graining and is implemented by a unique stochastic matrix V ˜ . We then identify V and V ˜ .

3. Observables and Instruments

In this section, we employ our previous work to study coarse-graining of observables and instruments. Let H be a complex Hilbert space that represents a quantum system S. We denote the set of bounded linear operators on H by L(H). For A, B ∈ L(H), we write A ≤ B if ⟨φ, Aφ⟩ ≤ ⟨φ, Bφ⟩ for all φ ∈ H. An operator E ∈ L(H) is an effect if 0 ≤ E ≤ I, where 0, I are the zero and identity operators, respectively. We denote the set of effects by E(H) and interpret an E ∈ E(H) as a 1–0 (true–false) measurement [5,7,8]. If (Ω_A, F) is a measurable space, an observable with outcome space Ω_A is an effect-valued measure A : F → E(H) [5,7,8]. That is, A(∪Δ_i) = Σ A(Δ_i) when Δ_i ∩ Δ_j = ∅ for i ≠ j, and A(Ω_A) = I. We interpret A(Δ) as the effect that occurs when a measurement of A results in an outcome in Δ. A state for S is an effect ρ ∈ E(H) that satisfies tr(ρ) = 1. We denote the set of states on H by S(H). If ρ ∈ S(H), E ∈ E(H), we interpret tr(ρE) as the probability that E occurs (is true) when S is in the state ρ. If A is an observable, its statistics in the state ρ are given by the distribution
Φ_ρ^A(Δ) = tr[ρA(Δ)]
for all Δ ∈ F. Of course, Φ_ρ^A ∈ Prob(Ω_A, F) for all ρ ∈ S(H) [5,7,8].
We now discuss a method for constructing stochastic kernels from observables. Let (Ω, F), (Ω_A, G) be measurable spaces, {α_x : x ∈ Ω} ⊆ S(H) a collection of states and A an observable with outcome space Ω_A. We say that (α, A) is measurable if x ↦ Φ_{α_x}^A(Δ) is measurable for all Δ ∈ G. If (α, A) is measurable, we define the stochastic kernel
v(x, Δ) = tr[α_x A(Δ)] = Φ_{α_x}^A(Δ)    (1)
with the corresponding coarse-graining
V_{(α,A)}(μ)(Δ) = ∫ v(x, Δ) μ(dx) = ∫ tr[α_x A(Δ)] μ(dx) = ∫ Φ_{α_x}^A(Δ) μ(dx)    (2)
If the α_x are pure states α_x = |φ_x⟩⟨φ_x|, φ_x ∈ H, then (1) and (2) become
v(x, Δ) = ⟨A(Δ)φ_x, φ_x⟩    (3)
and
V_{(α,A)}(μ)(Δ) = ∫ ⟨A(Δ)φ_x, φ_x⟩ μ(dx)    (4)
We interpret (1) as the probability that a measurement of A results in an outcome in Δ when S is in the state α x .
Example 3.
Let Ω = {1, 2, …, n} be a finite measurable space. We show that any stochastic matrix M = [μ_ij], i, j = 1, 2, …, n, can be written in the form of the previous paragraph. Let H be a complex Hilbert space with dimension n and let {φ_i : i = 1, 2, …, n} be an orthonormal basis for H. Let A be the observable with outcome space Ω satisfying
A_j = diag(μ_1j, μ_2j, …, μ_nj)
Letting α_i be the pure state α_i = |φ_i⟩⟨φ_i|, i = 1, 2, …, n, we obtain
⟨A_j φ_i, φ_i⟩ = μ_ij
This is essentially (3).
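Example 3 can be verified directly; the 2 × 2 stochastic matrix below is hypothetical and is realized by diagonal effects in an orthonormal basis:

```python
import numpy as np

# Sketch of Example 3: the columns of a row-stochastic matrix M define
# diagonal effects A_j = diag(M[:, j]); they sum to I (so they form an
# observable), and <A_j phi_i, phi_i> = M[i, j].
M = np.array([[0.5, 0.5],
              [0.1, 0.9]])                    # illustrative stochastic matrix
A = [np.diag(M[:, j]) for j in range(2)]
assert np.allclose(sum(A), np.eye(2))         # effect-valued measure
basis = np.eye(2)                             # orthonormal basis {phi_i}
for i in range(2):
    for j in range(2):
        phi = basis[i]
        assert np.isclose(phi @ A[j] @ phi, M[i, j])
```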
We now give an application of the previous structure to the study of the dynamics of the system S. Suppose the dynamics of S is described by the strongly continuous unitary group e^{−itK}, t ∈ [0, ∞), where K is the Hamiltonian for S. If φ_0 ∈ H is the initial state, then φ_t = e^{−itK}φ_0 is the state at time t ∈ [0, ∞). We can consider {φ_t} as a collection of states indexed by the points of the measurable space ([0, ∞), B[0, ∞)). Let A be an observable with outcome space (Ω_A, F). Since t ↦ φ_t is continuous we have that
t ↦ Φ_{φ_t}^A(Δ) = ⟨A(Δ)φ_t, φ_t⟩
is continuous for all Δ ∈ F. It follows that (φ_t, A) is measurable. We conclude that the map v : [0, ∞) × F → [0, 1] given by v(t, Δ) = ⟨A(Δ)φ_t, φ_t⟩ is a stochastic kernel called the dynamical kernel for (φ_t, A). We interpret v(t, Δ) as the probability that a measurement of A at time t results in an outcome in Δ. In terms of the dynamical group we have
v(t, Δ) = ⟨A(Δ)e^{−itK}φ_0, e^{−itK}φ_0⟩ = ⟨e^{itK}A(Δ)e^{−itK}φ_0, φ_0⟩    (5)
The observable Δ ↦ e^{itK}A(Δ)e^{−itK}, which gives the time evolution of A, is the Heisenberg picture of quantum mechanics, while (5) gives the Schrödinger picture. The corresponding coarse-graining map
V_{(φ,A)} : Prob([0, ∞), B[0, ∞)) → Prob(Ω_A, F)
satisfies
V_{(φ,A)}(μ)(Δ) = ∫ ⟨A(Δ)φ_t, φ_t⟩ μ(dt) = ∫ ⟨e^{itK}A(Δ)e^{−itK}φ_0, φ_0⟩ μ(dt)
For a particular time t_0 ∈ [0, ∞) we have
V_{(φ,A)}(δ_{t_0})(Δ) = v(t_0, Δ) = ⟨e^{it_0K}A(Δ)e^{−it_0K}φ_0, φ_0⟩
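A concrete sketch of the dynamical kernel for a qubit, with all choices (Hamiltonian K = σ_z, initial state |+⟩, observable A = {|+⟩⟨+|, |−⟩⟨−|}) purely illustrative; for K = σ_z the group e^{−itK} is simply diag(e^{−it}, e^{it}):

```python
import numpy as np

# Qubit sketch of the dynamical kernel v(t, Delta) = <A(Delta) phi_t, phi_t>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
effects = [np.outer(plus, plus), np.outer(minus, minus)]   # two-outcome A

def v(t, k):
    # phi_t = e^{-itK} phi_0 with K = sigma_z, phi_0 = |+>
    phi_t = np.diag([np.exp(-1j * t), np.exp(1j * t)]) @ plus
    return np.real(np.vdot(phi_t, effects[k] @ phi_t))

assert np.isclose(v(0.0, 0), 1.0)               # at t = 0 the state is |+>
for t in (0.3, 1.0):
    assert np.isclose(v(t, 0) + v(t, 1), 1.0)   # v(t, .) is a probability measure
    assert np.isclose(v(t, 0), np.cos(t) ** 2)  # closed form for this choice
```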
Let A be an observable with outcome space (Ω_A, F) and let (Ω, G) be a measurable space. If v : Ω_A × G → [0, 1] is a stochastic kernel, we define the observable V·A with outcome space Ω by
(V·A)(Δ) = ∫ v(x, Δ) A(dx) for all Δ ∈ G    (6)
We call v the stochastic kernel for V and V·A is a coarse-graining of A [5,7]. We see that (V·A)(Δ) is the unique effect satisfying
tr[ρ(V·A)(Δ)] = ∫ v(x, Δ) tr[ρA(dx)]
for all ρ S ( H ) . We now show that this idea extends to observables.
Lemma 6.
V · A is the unique observable with distribution
Φ_ρ^{V·A}(Δ) = V(Φ_ρ^A)(Δ)
Proof. 
For all ρ S ( H ) , Δ G we obtain
Φ_ρ^{V·A}(Δ) = tr[ρ(V·A)(Δ)] = tr[ρ ∫ v(x, Δ) A(dx)] = ∫ v(x, Δ) tr[ρA(dx)] = ∫ v(x, Δ) Φ_ρ^A(dx) = V(Φ_ρ^A)(Δ)
The observable V · A is unique because two observables on H with the same distributions for every ρ S ( H ) are identical [5,7,8]. □
If A_i, i = 1, 2, …, n, are observables on H with the same outcome set and 0 ≤ λ_i ≤ 1, Σλ_i = 1, it is clear that Σλ_iA_i is again an observable. Thus, such observables form a convex set. We conclude that A ↦ V·A is an affine map because
(V·Σλ_iA_i)(Δ) = ∫ v(x, Δ) Σλ_iA_i(dx) = Σλ_i ∫ v(x, Δ) A_i(dx) = Σλ_i (V·A_i)(Δ)
Let ( Ω , F ) , ( Ω A , G ) be measurable spaces and let ( α , A ) be measurable with corresponding stochastic kernel v ( x , Δ ) and coarse-graining V ( α , A ) given by (1) and (2). If B is an observable with outcome space Ω we obtain the following result.
Lemma 7. 
(a) For all Δ G we have that
(V_{(α,A)}·B)(Δ) = ∫ Φ_{α_x}^A(Δ) B(dx)
(b) For all ρ S ( H ) , Δ G , we have that
Φ_ρ^{V_{(α,A)}·B}(Δ) = ∫ Φ_{α_x}^A(Δ) Φ_ρ^B(dx)
Proof. 
(a) Since v ( x , Δ ) = Φ α x A ( Δ ) for all Δ G , we obtain
(V_{(α,A)}·B)(Δ) = ∫ v(x, Δ) B(dx) = ∫ Φ_{α_x}^A(Δ) B(dx)
(b) For all ρ S ( H ) , Δ G , applying (a) we obtain
Φ_ρ^{V_{(α,A)}·B}(Δ) = tr[ρ(V_{(α,A)}·B)(Δ)] = tr[ρ ∫ Φ_{α_x}^A(Δ) B(dx)] = ∫ Φ_{α_x}^A(Δ) tr[ρB(dx)] = ∫ Φ_{α_x}^A(Δ) Φ_ρ^B(dx) □
An observable B is part of an observable A if there exists a measurable surjection f : Ω_A → Ω_B such that B = V_f·A [6,9,10].
Lemma 8.
Let A, B be observables on H with outcome spaces (Ω_A, F_A), (Ω_B, F_B), respectively. Then B is part of A if and only if there is a measurable surjection f : Ω_A → Ω_B such that B(Δ) = A(f⁻¹(Δ)) for all Δ ∈ F_B.
Proof. 
If B is a part of A, there exists a measurable surjection f : Ω_A → Ω_B such that B = V_f·A. If v(x, Δ) = χ_{f⁻¹(Δ)}(x) is the corresponding stochastic kernel, then for Δ ∈ F_B we obtain
B(Δ) = (V_f·A)(Δ) = ∫ v(x, Δ) A(dx) = ∫ χ_{f⁻¹(Δ)}(x) A(dx) = ∫_{f⁻¹(Δ)} A(dx) = A(f⁻¹(Δ))
Conversely, if B(Δ) = A(f⁻¹(Δ)) for all Δ ∈ F_B, then letting v(x, Δ) = χ_{f⁻¹(Δ)}(x) we obtain B(Δ) = (V_f·A)(Δ) by reversing the previous argument. Hence, B = V_f·A so B is part of A. □
By Lemma 6, if B is part of A so that B = V_f·A, then Φ_ρ^B = V_f(Φ_ρ^A) and hence Φ_ρ^B is part of Φ_ρ^A for all ρ ∈ S(H). Two observables B, C coexist if there exists an observable A such that B and C are part of A [5,7,11,12]. It is well known that, unlike in Lemma 5, two observables need not coexist [5,7,12]. Let A be an observable with outcome space (Ω_A, F). If V is a discretization of (Ω_A, F), we call V·A a discretization of A [5]. If v(x, {i}) = χ_{B_i}(x) is the corresponding stochastic kernel we obtain
(V·A)({i}) = ∫ v(x, {i}) A(dx) = ∫ χ_{B_i}(x) A(dx) = ∫_{B_i} A(dx) = A(B_i)    (7)
Moreover,
(V·A)(Δ) = Σ_{i∈Δ} (V·A)({i}) = Σ{A(B_i) : i ∈ Δ}
Lemma 9.
If V · A is a discretization of A, then V · A is a part of A.
Proof. 
Let V : Prob(Ω_A, F) → Prob(Ω1) where Ω1 = {1, 2, …, n}, so the outcome space of V·A is Ω1. Let v(x, {i}) = χ_{B_i}(x) be the corresponding stochastic kernel. Define f : Ω_A → Ω1 by f(x) = i if x ∈ B_i. Then by (7)
(V·A)({i}) = A(B_i) = A(f⁻¹({i}))
and it follows that for all Δ ⊆ Ω1 we obtain
(V·A)(Δ) = Σ_{i∈Δ} (V·A)({i}) = A(f⁻¹(Δ))
Hence, V · A is part of A. □
Corollary 1.
Any two discretizations of an observable coexist.
Let T(H) be the set of trace-class operators on H. An operation on H is a trace non-increasing, completely positive linear map T : T(H) → T(H) [5,7,8,11]. If an operation T preserves the trace, then T is called a channel on H. An instrument on H with outcome space Ω_I is an operation-valued measure I on (Ω_I, F) such that I(Ω_I) is a channel. The statistics of an instrument I for a state ρ ∈ S(H) is given by its distribution
Φ_ρ^I(Δ) = tr[I(Δ)(ρ)]
for all Δ ∈ F. Of course, Φ_ρ^I is a probability measure on (Ω_I, F). We say that an instrument I measures an observable A if Ω_A = Ω_I and for all ρ ∈ S(H) and Δ ∈ F we have
Φ_ρ^A(Δ) = tr[ρA(Δ)] = tr[I(Δ)(ρ)] = Φ_ρ^I(Δ)
It can be shown that an instrument measures a unique observable, but an observable is measured by many instruments [5]. If I measures A we write Î = A. We think of I as an apparatus that can be employed to measure the observable Î and conclude that there are many such apparatuses. Although I reproduces the statistics of Î, I gives more information than Î. This is because when a measurement of I produces a result in Δ ∈ F, the instrument updates the state of the system to the new state I(Δ)ρ/tr[I(Δ)ρ] when tr[I(Δ)ρ] ≠ 0 [5,7,8].
If I is an instrument on (Ω_I, F) and v : Ω_I × G → [0, 1] is a stochastic kernel, then we shall show that
(V·I)(Δ) = ∫ v(x, Δ) I(dx)
is an instrument with outcome space (Ω, G) called a coarse-graining of I. To show this we have that (V·I)(Δ) is countably additive on G and
(V·I)(Ω) = ∫ v(x, Ω) I(dx) = ∫ I(dx) = I(Ω_I)
so (V·I)(Ω) is a channel. Moreover, if ρ ∈ S(H) we obtain
tr[(V·I)(Δ)(ρ)] = tr[∫ v(x, Δ) I(dx)(ρ)] = ∫ v(x, Δ) tr[I(dx)(ρ)] ≤ ∫ tr[I(dx)(ρ)] = tr[I(Ω_I)(ρ)] = tr(ρ)
It follows that V · I is an instrument. It is easy to check that instruments form a convex set and that I V · I is affine.
Theorem 3. 
(a) (V·I)^ = V·Î. (b) For instruments I, J we have that Φ_ρ^J = V(Φ_ρ^I) for all ρ ∈ S(H) if and only if Ĵ = V·Î. (c) If J = V·I, then Φ_ρ^J = V(Φ_ρ^I) for all ρ ∈ S(H).
Proof.  
(a) For all ρ ∈ S(H) we obtain
tr[ρ(V·Î)(Δ)] = tr[ρ ∫ v(x, Δ) Î(dx)] = ∫ v(x, Δ) tr[ρÎ(dx)] = ∫ v(x, Δ) tr[I(dx)(ρ)] = tr[∫ v(x, Δ) I(dx)(ρ)] = tr[(V·I)(Δ)(ρ)] = tr[ρ(V·I)^(Δ)]
It follows that (V·I)^ = V·Î. (b) If Φ_ρ^J = V(Φ_ρ^I), then for all ρ ∈ S(H) we have that
tr[ρĴ(Δ)] = tr[J(Δ)(ρ)] = Φ_ρ^J(Δ) = V(Φ_ρ^I)(Δ) = ∫ v(x, Δ) Φ_ρ^I(dx) = ∫ v(x, Δ) tr[I(dx)(ρ)] = ∫ v(x, Δ) tr[ρÎ(dx)] = tr[ρ ∫ v(x, Δ) Î(dx)] = tr[ρ(V·Î)(Δ)]
Therefore, Ĵ(Δ) = (V·Î)(Δ) for all Δ so Ĵ = V·Î. Conversely, if Ĵ = V·Î, then for all ρ ∈ S(H) we obtain
Φ_ρ^J(Δ) = tr[ρĴ(Δ)] = tr[ρ(V·Î)(Δ)] = ∫ v(x, Δ) tr[ρÎ(dx)] = ∫ v(x, Δ) tr[I(dx)(ρ)] = ∫ v(x, Δ) Φ_ρ^I(dx) = V(Φ_ρ^I)(Δ)
Hence, Φ_ρ^J = V(Φ_ρ^I). (c) If J = V·I, then by (a), Ĵ = (V·I)^ = V·Î. Applying (b) gives Φ_ρ^J = V(Φ_ρ^I) for all ρ ∈ S(H). □
The converse of Theorem 3(c) does not hold. That is, if Φ_ρ^J = V(Φ_ρ^I) for all ρ ∈ S(H), we need not have J = V·I. For example, let V be the identity map. Then the condition Φ_ρ^J = V(Φ_ρ^I) reduces to Φ_ρ^J = Φ_ρ^I for all ρ ∈ S(H). However, there exist instruments J ≠ I with Φ_ρ^J = Φ_ρ^I for all ρ ∈ S(H), and for such J we have J ≠ V·I = I. Applying Theorem 3, we can consider the various special types of coarse-graining for instruments.

4. Finite Observables

In this section, we restrict our attention to finite observables. If A is an observable with Ω_A = {x_1, …, x_n}, then A is completely determined by
A({x_1}), A({x_2}), …, A({x_n})
We then define A_x = A({x}) and write A = {A_x : x ∈ Ω_A}. It follows that for all Δ ⊆ Ω_A we have that A(Δ) = Σ{A_x : x ∈ Δ}. Let B = {B_y : y ∈ Ω_B} be another observable and let V : Prob(Ω_A) → Prob(Ω_B) be an affine map. We write B = V·A if Φ_ρ^B = V(Φ_ρ^A) for all ρ ∈ S(H). We then say that B is a post-processing of A [5,6]. Thus, post-processing is the same as coarse-graining for finite observables.
Theorem 4.
If V : Prob(Ω_A) → Prob(Ω_B) is affine, then B = V·A if and only if B_y = Σ_{x∈Ω_A} Ṽ_{xy} A_x for all y ∈ Ω_B, where Ṽ = [Ṽ_{xy}] is the stochastic matrix corresponding to V.
Proof. 
Suppose V : Prob(Ω_A) → Prob(Ω_B) is affine and B = V·A. By Theorem 2, Ṽ is a stochastic matrix and for all ρ ∈ S(H) we obtain
tr(ρB_y) = Φ_ρ^B(y) = V(Φ_ρ^A)(y) = Ṽ(Φ_ρ^A)(y) = Σ_{x∈Ω_A} Ṽ_{xy} Φ_ρ^A(x) = Σ_{x∈Ω_A} Ṽ_{xy} tr(ρA_x) = tr(ρ Σ_{x∈Ω_A} Ṽ_{xy} A_x)
It follows that B_y = Σ_{x∈Ω_A} Ṽ_{xy} A_x. Conversely, suppose B_y = Σ_{x∈Ω_A} Ṽ_{xy} A_x for all y ∈ Ω_B. Then for all ρ ∈ S(H) and y ∈ Ω_B we obtain
Φ_ρ^B(y) = tr(ρB_y) = tr(ρ Σ_{x∈Ω_A} Ṽ_{xy} A_x) = Σ_{x∈Ω_A} Ṽ_{xy} Φ_ρ^A(x) = (V Φ_ρ^A)(y)
Hence, B = V·A. □
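Theorem 4 is easy to test numerically; the qubit observable A and stochastic matrix Ṽ below are illustrative, not from the paper:

```python
import numpy as np

# Sketch of Theorem 4: post-processing a finite observable A by a
# row-stochastic matrix V gives B_y = sum_x V[x, y] A_x, and the
# distributions satisfy Phi_rho^B(y) = sum_x V[x, y] Phi_rho^A(x).
A = [np.diag([0.8, 0.3]), np.diag([0.2, 0.7])]     # two-outcome qubit observable
V = np.array([[0.9, 0.1],
              [0.4, 0.6]])                          # illustrative stochastic matrix
B = [sum(V[x, y] * A[x] for x in range(2)) for y in range(2)]
assert np.allclose(sum(B), np.eye(2))               # B is again an observable
rho = np.diag([0.6, 0.4])                           # an illustrative state
phiA = np.array([np.trace(rho @ a).real for a in A])
phiB = np.array([np.trace(rho @ b).real for b in B])
assert np.allclose(phiB, phiA @ V)                  # post-processed distribution
```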
We can identify an observable with a set A = {A_{x_1}, A_{x_2}, …, A_{x_n}} ⊆ E(H) satisfying Σ_{i=1}^n A_{x_i} = I. We say that A is rank 1, sharp, or atomic, respectively, if the A_{x_i} are rank 1 operators, projections, or 1-dimensional projections. If A is sharp, it follows that A_xA_y = A_yA_x = 0 for x ≠ y [5,6]. If A is atomic, there exists an orthonormal basis {φ_i} for H such that A_{x_i} = |φ_i⟩⟨φ_i|, i = 1, 2, …, n. Notice that A_x is rank 1 if and only if A_x = λP where 0 < λ ≤ 1 and P is a 1-dimensional projection.
If A = {A_x : x ∈ Ω_A}, B = {B_y : y ∈ Ω_B} are observables on H, their sequential product A∘B is the observable with outcome space Ω_A × Ω_B given by [6,13]
(A∘B)(x, y) = A_x ∘ B_y = A_x^{1/2} B_y A_x^{1/2}
We also define the observable B conditioned by the observable A as
(B | A)_y = Σ_{x∈Ω_A} (A_x ∘ B_y)
It can be shown that (B | A) coexists with A [6]. If μ is a stochastic matrix of the appropriate size, then
(A∘(μ·B))(x, y) = A_x ∘ (μ·B)_y = A_x ∘ Σ_{z∈Ω_B} μ_{zy} B_z = Σ_{z∈Ω_B} μ_{zy} A_x ∘ B_z = Σ_{z∈Ω_B} μ_{zy} A_x^{1/2} B_z A_x^{1/2}    (8)
and if ν is a stochastic matrix of the appropriate size, then
((ν·A)∘B)(x, y) = (ν·A)_x ∘ B_y = (Σ_{z∈Ω_A} ν_{zx} A_z) ∘ B_y = (Σ_{z∈Ω_A} ν_{zx} A_z)^{1/2} B_y (Σ_{z∈Ω_A} ν_{zx} A_z)^{1/2}    (9)
Notice that (9) is much more complicated than (8). If A is sharp, then (8) and (9) become
(A∘(μ·B))(x, y) = Σ_{z∈Ω_B} μ_{zy} A_x B_z A_x    (10)
and
((ν·A)∘B)(x, y) = (Σ_{z∈Ω_A} ν_{zx}^{1/2} A_z) B_y (Σ_{z∈Ω_A} ν_{zx}^{1/2} A_z) = Σ_{r,s∈Ω_A} ν_{rx}^{1/2} ν_{sx}^{1/2} A_r B_y A_s    (11)
If A and B are atomic with A_x = |φ_x⟩⟨φ_x| and B_y = |ψ_y⟩⟨ψ_y|, then (8), (9) become
(A∘(μ·B))(x, y) = Σ_{z∈Ω_B} μ_{zy} |⟨φ_x, ψ_z⟩|² |φ_x⟩⟨φ_x|    (12)
and
((ν·A)∘B)(x, y) = Σ_{r,s∈Ω_A} ν_{rx}^{1/2} ν_{sx}^{1/2} ⟨φ_r, ψ_y⟩⟨ψ_y, φ_s⟩ |φ_r⟩⟨φ_s| = |η⟩⟨η| where η = Σ_{r∈Ω_A} ν_{rx}^{1/2} ⟨φ_r, ψ_y⟩ φ_r    (13)
Notice from (12) and (13) that both A∘(μ·B) and (ν·A)∘B are rank 1 observables. The next lemma shows that post-processing and conditioning interact in a regular way.
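The sequential product can be sketched for a qubit; the unsharp observable A and atomic observable B below are illustrative, and the positive square root is computed by eigendecomposition:

```python
import numpy as np

# Sketch of the sequential product (A o B)(x, y) = A_x^{1/2} B_y A_x^{1/2}
# and the conditioned observable (B | A)_y = sum_x (A o B)(x, y).
def psd_sqrt(E):
    """Positive square root of a positive semidefinite matrix."""
    w, U = np.linalg.eigh(E)
    return (U * np.sqrt(np.clip(w, 0, None))) @ U.conj().T

A = [np.diag([0.8, 0.3]), np.diag([0.2, 0.7])]   # illustrative unsharp observable
h0 = np.array([1.0, 1.0]) / np.sqrt(2)           # Hadamard basis vectors
h1 = np.array([1.0, -1.0]) / np.sqrt(2)
B = [np.outer(h0, h0), np.outer(h1, h1)]         # illustrative atomic observable
AB = {(x, y): psd_sqrt(A[x]) @ B[y] @ psd_sqrt(A[x])
      for x in range(2) for y in range(2)}
assert np.allclose(sum(AB.values()), np.eye(2))  # A o B is an observable
BgivenA = [AB[(0, y)] + AB[(1, y)] for y in range(2)]
```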
Lemma 10. 
(μ·B | A) = μ·(B | A)
Proof. 
The result follows because
(μ·B | A)_z = Σ_{x∈Ω_A} A_x ∘ (μ·B)_z = Σ_{x∈Ω_A} A_x ∘ Σ_{y∈Ω_B} μ_{yz} B_y = Σ_{y∈Ω_B} μ_{yz} Σ_{x∈Ω_A} (A_x ∘ B_y) = (μ·(B | A))_z
Hence, (μ·B | A) = μ·(B | A). □
Example 4.
This example illustrates the concepts of this section in terms of finite position and momentum observables. Let H be a finite-dimensional Hilbert space with dimension d and let {φ_j : j = 0, 1, …, d−1} be an orthonormal basis for H. The finite Fourier transform is the unitary operator on H given by
F = (1/√d) Σ_{j,k=0}^{d−1} e^{2πijk/d} |φ_k⟩⟨φ_j|
where i = √−1 [5]. Equivalently, F is the operator satisfying
F(φ_k) = (1/√d) Σ_{j=0}^{d−1} e^{2πijk/d} φ_j
for all k = 0, 1, …, d−1. We call Q = {Q_j : j = 0, 1, …, d−1}, where Q_j = |φ_j⟩⟨φ_j|, the finite position observable and P = {P_j : j = 0, 1, …, d−1}, where P_j = |ψ_j⟩⟨ψ_j| with ψ_j = Fφ_j, the finite momentum observable. Notice that P_j = FQ_jF*, j = 0, 1, …, d−1, and Ω_Q = Ω_P = {0, 1, …, d−1}. We also see that Q and P are atomic observables. The observable Q∘P has effects
(Q∘P)(j, k) = Q_j ∘ P_k = Q_j P_k Q_j = |⟨φ_j, ψ_k⟩|² |φ_j⟩⟨φ_j| = (1/d) Q_j
Thus, Q∘P is a rank 1 observable and (P | Q) is the trivial observable
(P | Q)_k = Σ_j (Q_j ∘ P_k) = (1/d) I
for k = 0, 1, …, d−1. In a similar way, (P∘Q)(j, k) = (1/d) P_j and (Q | P)_k = (1/d) I for j, k = 0, 1, …, d−1. The distribution of Q in the state ρ ∈ S(H) becomes
Φ_ρ^Q(j) = tr(ρQ_j) = ⟨φ_j, ρφ_j⟩
for j = 0, 1, …, d−1.
More interesting observables are obtained by post-processing. Let μ = [μ_{rj}] be a stochastic matrix, so that μ_{rj} ≥ 0 and Σ_{j=0}^{d−1} μ_{rj} = 1 for all r = 0, 1, …, d−1. Then the post-processed observable μ·Q satisfies
(μ·Q)_j = Σ_{r=0}^{d−1} μ_{rj} Q_r = Σ_{r=0}^{d−1} μ_{rj} |φ_r⟩⟨φ_r|
We see that the eigenvalues of (μ·Q)_j are μ_{rj}, r = 0, 1, …, d−1, with corresponding eigenvectors φ_r. The distribution of μ·Q in the state ρ ∈ S(H) becomes
Φ_ρ^{μ·Q}(j) = tr[ρ(μ·Q)_j] = Σ_{r=0}^{d−1} μ_{rj} ⟨φ_r, ρφ_r⟩ = Σ_{r=0}^{d−1} μ_{rj} Φ_ρ^Q(r)
The observable (μ·Q)∘P satisfies
((μ·Q)∘P)(j, k) = (μ·Q)_j ∘ P_k = (μ·Q)_j^{1/2} P_k (μ·Q)_j^{1/2} = (Σ_{r=0}^{d−1} μ_{rj}^{1/2} |φ_r⟩⟨φ_r|) P_k (Σ_{s=0}^{d−1} μ_{sj}^{1/2} |φ_s⟩⟨φ_s|) = Σ_{r,s=0}^{d−1} μ_{rj}^{1/2} μ_{sj}^{1/2} ⟨φ_r, ψ_k⟩⟨ψ_k, φ_s⟩ |φ_r⟩⟨φ_s| = (1/d) Σ_{r,s=0}^{d−1} μ_{rj}^{1/2} μ_{sj}^{1/2} e^{2πik(s−r)/d} |φ_r⟩⟨φ_s|    (14)
Equation (14) also follows from (13).
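Example 4 can be verified numerically for a small dimension (here d = 4, chosen purely for illustration):

```python
import numpy as np

# Sketch of Example 4: finite Fourier transform F, position effects
# Q_j = |phi_j><phi_j| in the standard basis, momentum effects
# P_k = F Q_k F^*; then (Q o P)(j, k) = Q_j P_k Q_j = (1/d) Q_j.
d = 4
F = np.array([[np.exp(2j * np.pi * j * k / d) for k in range(d)]
              for j in range(d)]) / np.sqrt(d)
Q = [np.diag(np.eye(d)[j]) for j in range(d)]
P = [F @ Q[k] @ F.conj().T for k in range(d)]
assert np.allclose(sum(P), np.eye(d))                # P is an observable
for j in range(d):
    for k in range(d):
        # Q_j is a projection, so Q_j^{1/2} = Q_j
        assert np.allclose(Q[j] @ P[k] @ Q[j], Q[j] / d)
```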

5. SIC Observables

This section is more speculative than the previous ones and we do not come to many definite conclusions. A finite observable A is informationally complete (IC) if tr(ρ_1 A_x) = tr(ρ_2 A_x) for all x ∈ Ω_A implies that ρ_1 = ρ_2. Equivalently, A is informationally complete if Φ_{ρ_1}^A = Φ_{ρ_2}^A implies that ρ_1 = ρ_2. It can be shown that there exist IC observables for every finite-dimensional Hilbert space H [5]. An observable A on a Hilbert space H with dim H = d is symmetric if [5]:
(S1)
$|\Omega_A|=d^2$,
(S2)
$A$ has rank 1,
(S3)
$\mathrm{tr}(A_x)=1/d$ for all $x\in\Omega_A$,
(S4)
$\mathrm{tr}(A_xA_y)=1/[d^2(d+1)]$ for all $x\ne y\in\Omega_A$.
It can be shown that $d^2$ is the smallest cardinality for the outcome space of an IC observable [5]. Furthermore, $\mathrm{tr}(A_x)=1/d$ if $\mathrm{tr}(A_x)$ is constant for all $x\in\Omega_A$ (since $\sum_xA_x=I$), and $\mathrm{tr}(A_xA_y)=1/[d^2(d+1)]$ for $x\ne y\in\Omega_A$ if $\mathrm{tr}(A_xA_y)$ is constant for $x\ne y$ [5]. A symmetric IC observable is called a SIC observable. An important unsolved problem is whether SIC observables exist for every finite-dimensional Hilbert space [5]. It is not even known whether high-dimensional SIC observables exist. We would like to propose a possible method for attacking this problem. Unfortunately, we have not been able to complete this method, and we leave this to future work.
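For orientation, Conditions (S1)–(S4) can be checked numerically on a known example. The sketch below uses the standard qubit SIC observable whose effects are $A_x=\frac{1}{4}(I+\mathbf{n}_x\cdot\boldsymbol{\sigma})$ with the Bloch vectors $\mathbf{n}_x$ forming a regular tetrahedron; this well-known example is our illustrative choice and is not constructed in the text.

```python
import numpy as np

I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
# Regular-tetrahedron Bloch vectors: pairwise inner products equal -1/3.
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
A = [(I2 + sum(n[i] * sig[i] for i in range(3))) / 4 for n in ns]

d = 2
assert len(A) == d**2                                   # (S1): d^2 outcomes
assert all(np.linalg.matrix_rank(a) == 1 for a in A)    # (S2): rank 1
assert all(np.isclose(np.trace(a), 1 / d) for a in A)   # (S3): tr A_x = 1/d
assert all(np.isclose(np.trace(A[x] @ A[y]).real, 1 / (d**2 * (d + 1)))
           for x in range(4) for y in range(4) if x != y)   # (S4)
assert np.allclose(sum(A), I2)                          # A is an observable
```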
Let $\dim H=d$ and let $A=\{|\phi_x\rangle\langle\phi_x|: x\in\Omega_A\}$, $B=\{|\psi_y\rangle\langle\psi_y|: y\in\Omega_B\}$ be atomic observables. For a $d\times d$ stochastic matrix $\nu$ we define the observable $C=(\nu\cdot A)\circ B$. For example, $(\mu\cdot Q)\circ P$ of Example 4 is such an observable. Letting $\eta_{xy}\in H$ be the vector given by
$$\eta_{xy}=\sum_{r\in\Omega_A}\nu_{rx}^{1/2}\langle\phi_r,\psi_y\rangle\phi_r\tag{15}$$
we conclude from (13) that for all $(x,y)\in\Omega_C$ we have that
$$C(x,y)=|\eta_{xy}\rangle\langle\eta_{xy}|\tag{16}$$
It immediately follows that $C$ satisfies (S1) and (S2). We say that a stochastic matrix $\nu$ is doubly stochastic if $\sum_x\nu_{xy}=1$ for all $y$ [5]. The bases $\{\phi_r\}$, $\{\psi_y\}$ are mutually unbiased bases (MUB) if $|\langle\phi_r,\psi_y\rangle|^2=1/d$ for all $r,y=1,2,\dots,d$ [6]. It is easy to show that there exist pairs of MUB for every finite dimension. In fact, the two bases in Example 4 are MUB.
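Formula (16) can be checked numerically: the sequential product $(\nu\cdot A)\circ B$, computed from the positive square root, agrees with the rank-1 operator $|\eta_{xy}\rangle\langle\eta_{xy}|$. A sketch assuming randomly generated orthonormal bases and a random stochastic matrix (our illustrative inputs):

```python
import numpy as np

d = 3
rng = np.random.default_rng(2)

# Two random orthonormal bases {phi_r}, {psi_y} (columns of unitaries via QR)
# and a row-stochastic matrix nu.
phi = np.linalg.qr(rng.random((d, d)) + 1j * rng.random((d, d)))[0]
psi = np.linalg.qr(rng.random((d, d)) + 1j * rng.random((d, d)))[0]
nu = rng.random((d, d)); nu /= nu.sum(axis=1, keepdims=True)

B = [np.outer(psi[:, y], psi[:, y].conj()) for y in range(d)]
nuA = [sum(nu[r, x] * np.outer(phi[:, r], phi[:, r].conj()) for r in range(d))
       for x in range(d)]

def sqrtm_psd(m):
    # unique positive square root of a positive semidefinite matrix
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

for x in range(d):
    for y in range(d):
        root = sqrtm_psd(nuA[x])
        Cxy = root @ B[y] @ root          # ((nu.A) o B)(x, y) directly
        # the vector eta_xy of (15); |eta_xy><eta_xy| reproduces C(x, y) as in (16)
        eta = sum(np.sqrt(nu[r, x]) * np.vdot(phi[:, r], psi[:, y]) * phi[:, r]
                  for r in range(d))
        assert np.allclose(Cxy, np.outer(eta, eta.conj()))
```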
Theorem 5. 
(a) If $\nu$ is doubly stochastic and $\{\phi_r\}$, $\{\psi_y\}$ are MUB, then $\mathrm{tr}\,C(x,y)=1/d$ for all $x,y$. (b) If $\mathrm{tr}\,C(x,y)=1/d$ for all $x,y$, then $\nu$ is doubly stochastic.
Proof. 
(a) Applying (15) and (16) we have that
$$\mathrm{tr}\,C(x,y)=\|\eta_{xy}\|^2=\left\langle\sum_r\nu_{rx}^{1/2}\langle\phi_r,\psi_y\rangle\phi_r,\sum_s\nu_{sx}^{1/2}\langle\phi_s,\psi_y\rangle\phi_s\right\rangle=\sum_r\nu_{rx}|\langle\phi_r,\psi_y\rangle|^2$$
for all $x,y$. If $\nu$ is doubly stochastic and $\{\phi_r\}$, $\{\psi_y\}$ are MUB, we conclude that
$$\mathrm{tr}\,C(x,y)=\frac{1}{d}\sum_r\nu_{rx}=\frac{1}{d}$$
for all $x,y$. (b) If $\mathrm{tr}\,C(x,y)=1/d$ for all $x,y$, then by (a) we obtain
$$\sum_r\nu_{rx}|\langle\phi_r,\psi_y\rangle|^2=\frac{1}{d}$$
for all $x,y$. Summing over $y$, and using $\sum_y|\langle\phi_r,\psi_y\rangle|^2=1$, gives $\sum_r\nu_{rx}=1$ for all $x$, so $\nu$ is doubly stochastic. □
In Theorem 5(b), if $\mathrm{tr}\,C(x,y)=1/d$ for all $x,y$, then $\{\phi_r\}$, $\{\psi_y\}$ need not be MUB, so the converse of Theorem 5(a) does not hold. For example, suppose $\nu_{rx}=1/d$ for all $r,x$. Then $\mathrm{tr}\,C(x,y)=1/d$ for all $x,y$, but $\{\phi_r\}$, $\{\psi_y\}$ can be arbitrary bases. We conclude from Theorem 5(a) that if $\nu$ is doubly stochastic and $\{\phi_r\}$, $\{\psi_y\}$ are MUB, then Condition (S3) holds.
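Theorem 5(a) is easy to confirm numerically for a concrete MUB pair and doubly stochastic matrix. A sketch (the dimension, Fourier MUB pair, and the particular convex combination of permutations are our illustrative choices):

```python
import numpy as np

d = 3
F = np.array([[np.exp(2j * np.pi * r * c / d) for c in range(d)]
              for r in range(d)]) / np.sqrt(d)
phi, psi = np.eye(d), F        # MUB pair: |<phi_r, psi_y>|^2 = 1/d

# A doubly stochastic nu: a convex combination of permutation matrices.
P0, P1 = np.eye(d), np.roll(np.eye(d), 1, axis=1)
nu = 0.4 * P0 + 0.6 * P1

for x in range(d):
    for y in range(d):
        eta = sum(np.sqrt(nu[r, x]) * np.vdot(phi[:, r], psi[:, y]) * phi[:, r]
                  for r in range(d))
        # tr C(x,y) = ||eta_xy||^2 = sum_r nu_rx |<phi_r, psi_y>|^2 = 1/d
        assert np.isclose(np.vdot(eta, eta).real, 1 / d)
```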
Lemma 11. 
(a) Condition (S4) holds if and only if $|\langle\eta_{xy},\eta_{x'y'}\rangle|^2=1/[d^2(d+1)]$ for all $(x,y)\ne(x',y')$. (b) The observable $C$ is IC if and only if for $\rho_1,\rho_2\in\mathcal{S}(H)$ we have that $\langle\rho_1\eta_{xy},\eta_{xy}\rangle=\langle\rho_2\eta_{xy},\eta_{xy}\rangle$ for all $x,y$ implies that $\rho_1=\rho_2$.
Proof. 
(a) Applying (16) we have that
$$\mathrm{tr}\left[C(x,y)C(x',y')\right]=|\langle\eta_{xy},\eta_{x'y'}\rangle|^2$$
and the result follows. (b) Applying (16) we have that
$$\mathrm{tr}\left[\rho C(x,y)\right]=\mathrm{tr}\left(\rho\,|\eta_{xy}\rangle\langle\eta_{xy}|\right)=\langle\rho\eta_{xy},\eta_{xy}\rangle$$
and the result follows. □
Theorem 5 and Lemma 11 give conditions under which $C$ becomes a SIC observable.
We now illustrate our SIC method in the qubit case $H=\mathbb{C}^2$. Let $\phi_1=(1,0)$, $\phi_2=(0,1)$ be the standard basis for $H$ and let $\psi_1,\psi_2$ be a basis for $H$ such that $\{\phi_i\}$, $\{\psi_j\}$ are MUB. For example, we could use
$$\psi_1=F(\phi_1)=\frac{1}{\sqrt{2}}(\phi_1+\phi_2),\qquad\psi_2=F(\phi_2)=\frac{1}{\sqrt{2}}(\phi_1-\phi_2)$$
of Example 4. Define the atomic observables $A=\{|\phi_1\rangle\langle\phi_1|,|\phi_2\rangle\langle\phi_2|\}$, $B=\{|\psi_1\rangle\langle\psi_1|,|\psi_2\rangle\langle\psi_2|\}$ with $\Omega_A=\Omega_B=\{1,2\}$. Let $\nu$ be the doubly stochastic matrix
$$\nu=\begin{bmatrix}a & 1-a\\ 1-a & a\end{bmatrix}$$
where $0\le a\le 1$. Define the observable $C=(\nu\cdot A)\circ B$ and the effects
$$D=\begin{bmatrix}a^{1/2} & 0\\ 0 & (1-a)^{1/2}\end{bmatrix},\qquad E=\begin{bmatrix}(1-a)^{1/2} & 0\\ 0 & a^{1/2}\end{bmatrix}$$
Letting $\eta_{xy}$, $x,y\in\{1,2\}$, be the vectors defined by (15), we have that $\eta_{11}=D\psi_1$, $\eta_{12}=D\psi_2$, $\eta_{21}=E\psi_1$, $\eta_{22}=E\psi_2$.
We have that $C$ satisfies Conditions (S1), (S2) and (S3). According to Lemma 11(a), $C$ satisfies Condition (S4) if and only if $|\langle\eta_{jk},\eta_{j'k'}\rangle|^2=1/12$ when $(j,k)\ne(j',k')$. Now
$$|\langle\eta_{11},\eta_{22}\rangle|^2=|\langle D\psi_1,E\psi_2\rangle|^2=|\langle ED\psi_1,\psi_2\rangle|^2=a(1-a)\,|\langle\psi_1,\psi_2\rangle|^2=0$$
Hence, (S4) is not satisfied.
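The failure of (S4) is immediate to verify numerically: $ED$ is a multiple of the identity, so $\langle\eta_{11},\eta_{22}\rangle$ vanishes with the orthogonal $\psi_1,\psi_2$. A sketch (the value $a=0.3$ is an arbitrary illustrative choice in $(0,1)$):

```python
import numpy as np

a = 0.3                                   # any a in (0, 1); illustrative
rt2 = np.sqrt(2)
psi1, psi2 = np.array([1, 1]) / rt2, np.array([1, -1]) / rt2
D = np.diag([np.sqrt(a), np.sqrt(1 - a)])
E = np.diag([np.sqrt(1 - a), np.sqrt(a)])

# The vectors eta_xy of (15) in the qubit example.
eta = {(1, 1): D @ psi1, (1, 2): D @ psi2, (2, 1): E @ psi1, (2, 2): E @ psi2}

# ED = sqrt(a(1-a)) I, so <eta_11, eta_22> = sqrt(a(1-a)) <psi1, psi2> = 0,
# whereas (S4) would require |<eta_11, eta_22>|^2 = 1/12.
assert np.isclose(np.vdot(eta[1, 1], eta[2, 2]), 0)
```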
If $a=0$, $1$ or $1/2$, it is easy to check that $C$ is not IC. Unfortunately, even when $a\ne 0,1,1/2$, $C$ need not be IC. For example, let $\psi_1=\frac{1}{\sqrt{2}}(\phi_1+\phi_2)$, $\psi_2=\frac{1}{\sqrt{2}}(\phi_1-\phi_2)$ as before. We then have the following result.
Theorem 6.
If $a\ne 0,1,1/2$ and $G$ is a $2\times 2$ self-adjoint matrix, then $\mathrm{tr}\left[GC(j,k)\right]=0$ for all $j,k=1,2$, if and only if $G=\begin{bmatrix}0 & i\alpha\\ -i\alpha & 0\end{bmatrix}$ where $\alpha\in\mathbb{R}$.
Proof. 
By (16), $\mathrm{tr}\left[GC(j,k)\right]=0$ if and only if
$$\langle G\eta_{jk},\eta_{jk}\rangle=0\tag{17}$$
for all $j,k=1,2$. Writing $G=\begin{bmatrix}G_{11} & G_{12}\\ \overline{G_{12}} & G_{22}\end{bmatrix}$, we conclude that (17) holds if and only if
$$\langle GD\psi_1,D\psi_1\rangle=\tfrac{1}{2}\left[G_{11}a+G_{22}(1-a)+2a^{1/2}(1-a)^{1/2}\,\mathrm{Re}\,G_{12}\right]=0\tag{18}$$
$$\langle GE\psi_1,E\psi_1\rangle=\tfrac{1}{2}\left[G_{11}(1-a)+G_{22}a+2a^{1/2}(1-a)^{1/2}\,\mathrm{Re}\,G_{12}\right]=0\tag{19}$$
$$\langle GD\psi_2,D\psi_2\rangle=\tfrac{1}{2}\left[G_{11}a+G_{22}(1-a)-2a^{1/2}(1-a)^{1/2}\,\mathrm{Re}\,G_{12}\right]=0\tag{20}$$
$$\langle GE\psi_2,E\psi_2\rangle=\tfrac{1}{2}\left[G_{11}(1-a)+G_{22}a-2a^{1/2}(1-a)^{1/2}\,\mathrm{Re}\,G_{12}\right]=0\tag{21}$$
Adding (18) and (20) gives $G_{11}a+G_{22}(1-a)=0$, and adding (19) and (21) gives $G_{11}(1-a)+G_{22}a=0$. If $G_{22}\ne 0$, these two relations give $(1-a)^2=a^2$, so $a=1/2$, which is a contradiction. Hence, $G_{22}=0$, and it follows that $G_{11}=\mathrm{Re}\,G_{12}=0$. Hence, $G_{12}=i\alpha$, $\alpha\in\mathbb{R}$, and the result follows. The converse is clear. □
Corollary 2.
If $\psi_1=\frac{1}{\sqrt{2}}(\phi_1+\phi_2)$, $\psi_2=\frac{1}{\sqrt{2}}(\phi_1-\phi_2)$, then $C$ is not IC.
Proof. 
Define $\rho_1=\frac{1}{2}\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}$, $\rho_2=\begin{bmatrix}1/2 & i\alpha\\ -i\alpha & 1/2\end{bmatrix}$ where $0<\alpha<1/2$. Then $\rho_1\in\mathcal{S}(H)$, and it is easy to check that $\rho_2\in\mathcal{S}(H)$ by showing that the eigenvalues of $\rho_2$ are $\frac{1}{2}\pm\alpha$. Since $\rho_2-\rho_1=\begin{bmatrix}0 & i\alpha\\ -i\alpha & 0\end{bmatrix}$, it follows from Theorem 6 that
$$\mathrm{tr}\left[\rho_2C(j,k)\right]=\mathrm{tr}\left[\rho_1C(j,k)\right]$$
for all $j,k=1,2$. However, $\rho_2\ne\rho_1$, so $C$ is not IC. □
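The pair $\rho_1,\rho_2$ of Corollary 2 can be checked directly: the two distinct states give identical outcome probabilities on every effect of $C$. A sketch (the values $a=0.3$, $\alpha=0.25$ are illustrative choices satisfying the stated constraints):

```python
import numpy as np

a, alpha = 0.3, 0.25          # a not in {0, 1/2, 1}; 0 < alpha < 1/2
rt2 = np.sqrt(2)
D = np.diag([np.sqrt(a), np.sqrt(1 - a)])
E = np.diag([np.sqrt(1 - a), np.sqrt(a)])
psi = {1: np.array([1, 1]) / rt2, 2: np.array([1, -1]) / rt2}

# Effects of C via (16): C(j,k) = |eta_jk><eta_jk|.
eta = {(1, 1): D @ psi[1], (1, 2): D @ psi[2], (2, 1): E @ psi[1], (2, 2): E @ psi[2]}
C = {jk: np.outer(v, v.conj()) for jk, v in eta.items()}

rho1 = np.eye(2) / 2
rho2 = np.array([[0.5, 1j * alpha], [-1j * alpha, 0.5]])
# rho2 is a state: its eigenvalues are 1/2 +- alpha.
assert np.allclose(np.linalg.eigvalsh(rho2), [0.5 - alpha, 0.5 + alpha])

# Identical outcome statistics on every effect of C, yet rho1 != rho2.
for jk in C:
    assert np.isclose(np.trace(rho1 @ C[jk]).real, np.trace(rho2 @ C[jk]).real)
assert not np.allclose(rho1, rho2)
```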
It is possible that for other $\psi_1,\psi_2$ we obtain an IC observable $C$. It is also possible that for higher-dimensional spaces we obtain SIC observables using this method. Even though $C$ is not IC, it satisfies two necessary (but not sufficient) conditions for IC [5] (Prop. 3.35). These conditions are: (a) $C(j,k)$ does not have both eigenvalues 0 and 1, and (b) for all $(j,k)$ there exists $(j',k')$ such that
$$C(j,k)C(j',k')\ne C(j',k')C(j,k)$$
Indeed, (a) is clear and (b) follows from the fact that
$$C(1,1)C(1,2)\ne C(1,2)C(1,1)\quad\text{and}\quad C(2,2)C(2,1)\ne C(2,1)C(2,2)$$

Funding

This research received no external funding.

Data Availability Statement

No data.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Glatzel, F.; Schilling, T. The interplay between memory and potentials of mean force: A discussion on the structure of equations of motion for coarse-grained observables. EPL 2021, 136, 1–6.
  2. Rudnicki, Ł. Majorization approach to entropic uncertainty relations for coarse-grained observables. arXiv 2015, arXiv:1503.03682.
  3. Ali, S.; Emch, G. Fuzzy observables in quantum mechanics. J. Math. Phys. 1974, 15, 176–182.
  4. Ali, S.; Carmeli, C.; Heinosaari, T.; Toigo, A. Commutative POVMs and fuzzy observables. Found. Phys. 2009, 39, 593–612.
  5. Heinosaari, T.; Ziman, M. The Mathematical Language of Quantum Theory; Cambridge University Press: Cambridge, UK, 2012.
  6. Gudder, S. Combinations of quantum observables and instruments. arXiv 2020, arXiv:2010.08025.
  7. Busch, P.; Grabowski, M.; Lahti, P. Operational Quantum Physics; Springer: Berlin/Heidelberg, Germany, 1995.
  8. Nielsen, M.; Chuang, I. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, UK, 2000.
  9. Gudder, S. Parts and composites of quantum systems. arXiv 2020, arXiv:2009.07371.
  10. Filippov, S.; Heinosaari, T.; Leppäjärvi, L. Simulability of observables in general probabilistic theories. Phys. Rev. A 2018, 97, 062102.
  11. Heinosaari, T.; Reitzner, D.; Stano, P.; Ziman, M. Coexistence of quantum operations. J. Phys. A Math. Theor. 2009, 42, 365302.
  12. Lahti, P. Coexistence and joint measurability in quantum mechanics. Int. J. Theor. Phys. 2003, 42, 893–906.
  13. Gudder, S.; Greechie, R. Sequential products on effect algebras. Rep. Math. Phys. 2002, 49, 87–111.
