On two-distillable Werner states

We consider bipartite mixed states in a $d\otimes d$ quantum system. We say that $\rho$ is PPT if its partial transpose $1 \otimes T (\rho)$ is positive semidefinite, and otherwise $\rho$ is NPT. The well-known Werner states are divided into three types: (a) the separable states (the same as the PPT states); (b) the one-distillable states (necessarily NPT); and (c) the NPT states which are not one-distillable. We give several different formulations and provide further evidence for the validity of the conjecture that the Werner states of type (c) are not two-distillable.


Introduction
Let H = H A ⊗H B be the Hilbert space for the quantum system consisting of two parties, A and B (Alice and Bob). We assume that the Hilbert spaces H A and H B have the same finite dimension, which we denote by d. A product state is a tensor product ρ A ⊗ ρ B of the states ρ A and ρ B of the first and second party, respectively. A bipartite state ρ is separable if it can be written as a convex linear combination of product states. We say that a bipartite state is entangled if it is not separable. We say that ρ is PPT if its partial transpose σ = 1 ⊗ T (ρ), computed in some fixed orthonormal (o.n.) basis of H B , is a positive semidefinite operator. Otherwise σ has a negative eigenvalue, and we say that ρ is NPT.
It is more complicated to give the definition of distillability for bipartite states ρ. For that purpose we have to consider multiple copies of ρ. For k copies the density matrix is the kth tensor power ρ^{⊗k}, which acts on the Hilbert space H^{⊗k}. We can identify H^{⊗k} with the tensor product of the Hilbert spaces H_A^{⊗k} and H_B^{⊗k}. In this way we can view ρ^{⊗k} as a bipartite state. Thus any vector |ψ⟩ ∈ H^{⊗k} has its Schmidt decomposition and a well-defined Schmidt rank.
The definition of distillability given below is not the original one, but it is the only one that we are going to use. Replacing the original definition with this one was nontrivial, see [5].
Definition 1.1. For a bipartite state ρ acting on H and an integer k ≥ 1, we say that ρ is k-distillable if there exists a (non-normalized) pure state |ψ⟩ ∈ H^{⊗k} of Schmidt rank at most two such that
(1.1) ⟨ψ| σ^{⊗k} |ψ⟩ < 0,
where σ = 1 ⊗ T(ρ) is the partial transpose of ρ. We say that ρ is distillable if it is k-distillable for some integer k ≥ 1.
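To make Definition 1.1 concrete, here is a minimal numerical sketch (not from the paper; the basis ordering and the convention that T acts on Bob's factor are our assumptions) that exhibits a Schmidt-rank-2 witness for k = 1, taking ρ to be the projector onto the maximally entangled state for d = 3:

```python
import numpy as np

d = 3  # local dimension; basis |i>|j> of C^d ⊗ C^d is indexed by d*i + j

# rho = |psi_max><psi_max|, with |psi_max> = (1/sqrt(d)) * sum_i |i>|i>
v = np.zeros(d * d)
for i in range(d):
    v[d * i + i] = 1.0
rho = np.outer(v, v) / d

# Partial transpose 1 ⊗ T: transpose Bob's (second) tensor index only.
sigma = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

# Non-normalized Schmidt-rank-2 vector |psi> = |1>|2> - |2>|1>.
psi = np.zeros(d * d)
psi[d * 0 + 1] = 1.0
psi[d * 1 + 0] = -1.0

value = psi @ sigma @ psi
print(value)  # -2/d = -0.666..., negative, so (1.1) holds with k = 1
```

Since the value is negative, this ρ is 1-distillable, hence distillable.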
If a bipartite state ρ is separable then it is PPT, i.e., σ is positive semidefinite, and consequently ρ is not distillable. For the same reason, the entangled bipartite PPT states are not distillable. Equivalently, every distillable bipartite state is necessarily NPT. It is not known whether the converse holds, i.e., whether every bipartite NPT state is distillable. However, it is widely believed that the converse is false. In particular the following conjecture has been raised [2].
Conjecture 1.2. There exist bipartite NPT states which are not distillable.
It is known [10] that for each integer k ≥ 1 there exist examples of bipartite states which are distillable but not k-distillable.
We fix an o.n. basis |i⟩, i = 1, …, d, of H_A, and an o.n. basis of H_B for which we use the same notation; the context will make clear which basis is used. After fixing these bases, we can define the flip operator F : H → H by
F |i⟩ ⊗ |j⟩ = |j⟩ ⊗ |i⟩, i, j = 1, …, d.
The (non-normalized) Werner states on H can be parametrized as
ρ_W(t) = 1 − tF, t ∈ [−1, 1].
Let |ψ_max⟩ ∈ H be the maximally entangled (pure) state given by
|ψ_max⟩ = (1/√d) Σ_i |i⟩ ⊗ |i⟩.
Its density matrix is the projector
P = |ψ_max⟩⟨ψ_max|.
Since dP is the partial transpose of F, the partial transpose of ρ_W(t) is
σ_W(t) = 1 − t d P.
The following facts about the Werner states are well known: ρ_W(t) is separable if and only if it is PPT, which is the case if and only if t ≤ 1/d, and ρ_W(t) is 1-distillable if and only if t ∈ (1/2, 1]. In particular, for t ∈ (1/d, 1/2] the state ρ_W(t) is NPT but not 1-distillable. The importance of Werner states for the distillability problem for bipartite states was first established in [4]. From now on we assume that d ≥ 3. In fact the following stronger conjecture is believed to be true [1,2,3]: none of the NPT Werner states ρ_W(t), t ∈ (1/d, 1/2], is distillable. In this paper we will consider a very weak version of it.
Conjecture 1.6. None of the Werner states ρ_W(t), t ∈ (1/d, 1/2], is 2-distillable.
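The facts above can be checked numerically for small d. The following sketch assumes the parametrization ρ_W(t) = 1 − tF with partial transpose σ_W(t) = 1 − tdP (our reconstruction of the lost displayed equations); it verifies for d = 3 that dP is the partial transpose of F, that σ_W(t) has a negative eigenvalue exactly when t > 1/d, and that the Schmidt-rank-2 vector (|1⟩|1⟩ + |2⟩|2⟩)/√2 certifies 1-distillability exactly when t > 1/2:

```python
import numpy as np

d = 3
I = np.eye(d * d)

# Flip operator F|i>|j> = |j>|i> in the product basis |i>|j> <-> index d*i + j.
F = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        F[d * j + i, d * i + j] = 1.0

# Unnormalized maximally entangled vector; P = |psi_max><psi_max| (normalized).
vmax = np.zeros(d * d)
for i in range(d):
    vmax[d * i + i] = 1.0
P = np.outer(vmax, vmax) / d

def partial_transpose(rho):
    """Transpose Bob's (second) tensor index only."""
    return rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

# dP is the partial transpose of F.
assert np.allclose(partial_transpose(F), d * P)

def sigma(t):
    """Partial transpose of the (non-normalized) Werner state rho_W(t) = 1 - t*F."""
    return I - t * d * P

# PPT iff t <= 1/d: the smallest eigenvalue of sigma(t) is 1 - t*d.
for t in (0.2, 1.0 / d, 0.4, 0.5):
    assert np.isclose(np.linalg.eigvalsh(sigma(t)).min(), 1 - t * d)

# 1-distillability at t > 1/2: the Schmidt-rank-2 vector (|1>|1> + |2>|2>)/sqrt(2)
# gives <psi|sigma(t)|psi> = 1 - 2t, negative exactly when t > 1/2.
w = np.zeros(d * d)
w[0] = w[d + 1] = 1 / np.sqrt(2)
for t in (0.4, 0.5, 0.6):
    assert np.isclose(w @ sigma(t) @ w, 1 - 2 * t)

print("all checks passed")
```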
For the k-distillability problem the following fact [2] is important.
Proposition 1.7. If the Werner state ρ_W(1/2) is not k-distillable, then none of the Werner states ρ_W(t), t ∈ (1/d, 1/2], is k-distillable.
In view of this proposition, it suffices to prove Conjecture 1.6 for t = 1/2 only. Extensive numerical evidence for the validity of this conjecture in the case d = 3 is presented in [2,3,7] and [9]; in the latter it is also claimed that the numerical proof is rigorous. The case d = 4 was analyzed recently in [8], but it remains open.
The paper is organized as follows. In Section 2 we construct a hermitian biquadratic form Φ and show that Conjecture 1.6 is equivalent to Φ being positive semidefinite, Φ ≥ 0. The form Φ depends on 4d arbitrary vectors x_i, y_i ∈ H_A and u_i, v_i ∈ H_B, i = 1, …, d.
In Section 3 we express Φ as a function of four matrices X, Y, U, V of order d, where X = [x_1, …, x_d], etc. The formula that we obtain for Φ is very attractive and looks simple, but proving that Φ ≥ 0 remains a challenge.
In Section 4 we compute the matrix H of Φ when the latter is viewed as a hermitian quadratic form in the 2d² complex entries of U and V. The entries of X and Y play the role of parameters. As expected, Conjecture 1.6 is equivalent to H being a positive semidefinite matrix.
In Section 5 we prove that H(X, Y ) ≥ 0 when X and Y are diagonal matrices. We point out that H(X, Y ) is not diagonal even when both X and Y are. Since this is done for arbitrary d ≥ 3, and the proof is quite nontrivial, this is an important piece of evidence for the validity of Conjecture 1.6.
In Section 6 we propose a much simpler conjecture. Namely, we replace the condition H(X, Y) ≥ 0 by just det H(X, Y) ≠ 0, under a mild condition on the pair (X, Y). The condition says that the pair (X, Y) is generic, which means that the matrices X and Y are linearly independent and that some linear combination of them is nonsingular. We prove that this new conjecture implies Conjecture 1.6, but we failed to show that they are equivalent.

The hermitian biquadratic form Φ
Since we are going to use only one Werner state, the one for t = 1/2, we set
ρ_W = ρ_W(1/2), σ_W = σ_W(1/2) = 1 − (d/2) P.
Conjecture 1.6 is equivalent to the claim that the inequality
(2.1) ⟨ψ| σ_W ⊗ σ_W |ψ⟩ ≥ 0
holds for all vectors |ψ⟩ ∈ H ⊗ H of Schmidt rank at most two. Such |ψ⟩ can be written as
|ψ⟩ = |ψ_1⟩ + |ψ_2⟩, |ψ_1⟩ = |x⟩ ⊗ |u⟩, |ψ_2⟩ = |y⟩ ⊗ |v⟩.
Note that |x⟩, |y⟩ ∈ H_A ⊗ H_A while |u⟩, |v⟩ ∈ H_B ⊗ H_B. We point out that we do not require |ψ_1⟩ + |ψ_2⟩ to be the Schmidt decomposition of |ψ⟩, i.e., we do not require that ⟨x|y⟩ = ⟨u|v⟩ = 0. The reason for this is to allow the vectors |x⟩, |y⟩, |u⟩, |v⟩ to be completely arbitrary.
We can rewrite |ψ_1⟩ and |ψ_2⟩ as
|ψ_1⟩ = Σ_{i,j} |i⟩ ⊗ |x_i⟩ ⊗ |j⟩ ⊗ |u_j⟩, |ψ_2⟩ = Σ_{i,j} |i⟩ ⊗ |y_i⟩ ⊗ |j⟩ ⊗ |v_j⟩,
where |x⟩ = Σ_i |i⟩ ⊗ |x_i⟩, and similarly for |y⟩, |u⟩, |v⟩. The vectors |x_i⟩ and |y_i⟩ live in Alice's second copy of H_A, while |u_i⟩ and |v_i⟩ live in Bob's second copy of H_B. The summation is taken over all i and j in {1, 2, …, d}.
Consequently, we can view the LHS of (2.1) as a function Φ of the 4d vectors x_i, y_j, u_r, v_s. Since σ_W = 1 − (d/2) P, we have
σ_W ⊗ σ_W = 1 ⊗ 1 − (d/2)(P ⊗ 1 + 1 ⊗ P) + (d²/4) P ⊗ P,
and accordingly Φ = Φ_1 − (d/2)(Φ_2 + Φ_3) + (d²/4) Φ_4, where Φ_1 = ⟨ψ|ψ⟩, Φ_2 = ⟨ψ| P ⊗ 1 |ψ⟩, Φ_3 = ⟨ψ| 1 ⊗ P |ψ⟩ and Φ_4 = ⟨ψ| P ⊗ P |ψ⟩. After the substitution |ψ⟩ = |ψ_1⟩ + |ψ_2⟩, each of the Φ_k breaks up into four pieces. For instance,
Φ_2 = ⟨ψ_1| P ⊗ 1 |ψ_1⟩ + ⟨ψ_1| P ⊗ 1 |ψ_2⟩ + ⟨ψ_2| P ⊗ 1 |ψ_1⟩ + ⟨ψ_2| P ⊗ 1 |ψ_2⟩.
We have computed each of the resulting 16 pieces. The final formulae show that each Φ_k, viewed as a function of the components of the x_i and y_j, is a hermitian quadratic form, and the same is true when we view them as functions of the components of the u_i and v_j. Hence we shall refer to the Φ_k (and Φ) as hermitian biquadratic forms. We note that Φ remains a hermitian biquadratic form when we replace the partition of the vector variables x_i, y_j | u_r, v_s with the new partition x_i, v_s | y_j, u_r. (This is a consequence of the fact that the two-copy Werner state ρ_W^{⊗2} is invariant under the flip operator 1 ⊗ F of the second pair.) The following proposition follows immediately from Eq. (2.1) and the definition of the form Φ.
Proposition 2.1. Conjecture 1.6 is equivalent to the assertion that the form Φ is positive semidefinite.
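Proposition 2.1 can be probed numerically. The following sketch (our reconstruction: it realizes Φ as ⟨ψ| σ_W ⊗ σ_W |ψ⟩ at t = 1/2 with |ψ⟩ = |x⟩|u⟩ + |y⟩|v⟩, and the ordering of tensor factors is our convention) samples Φ at random points for d = 3; no negative value shows up, consistent with the numerical evidence cited in the introduction:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# sigma_W = sigma_W(1/2) = 1 - (d/2) P on C^d ⊗ C^d, basis |i>|j> <-> d*i + j.
vmax = np.zeros(d * d)
for i in range(d):
    vmax[d * i + i] = 1.0
P = np.outer(vmax, vmax) / d
sigma = np.eye(d * d) - (d / 2) * P
sigma2 = np.kron(sigma, sigma)   # two copies, factors ordered (A1 B1)(A2 B2)

def embed(a, b):
    # |a> lives on (A1, A2) and |b> on (B1, B2); reorder kron(a, b) from
    # (A1 A2 B1 B2) to (A1 B1 A2 B2) to match sigma2.
    return np.kron(a, b).reshape(d, d, d, d).transpose(0, 2, 1, 3).ravel()

def phi(x, y, u, v):
    """Phi = <psi| sigma_W ⊗ sigma_W |psi> for |psi> = |x>|u> + |y>|v>."""
    psi = embed(x, u) + embed(y, v)
    return (psi.conj() @ sigma2 @ psi).real

def rand_vec():
    return rng.standard_normal(d * d) + 1j * rng.standard_normal(d * d)

values = [phi(rand_vec(), rand_vec(), rand_vec(), rand_vec())
          for _ in range(500)]
print(min(values))   # stays nonnegative in our runs
```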

Φ as a function of four matrices
It is quite natural to introduce the d × d matrix X whose successive columns are the vectors x_1, …, x_d, and likewise the matrices Y, U and V with columns y_i, u_i and v_i, respectively. Now the formulae for Φ can be rewritten in terms of the matrices X, Y, U and V; in the resulting expressions ℜ stands for "the real part of".
The first expression can be further simplified by using the standard Frobenius norm on the tensor product of matrices, and the third expression simplifies similarly. One benefit of these formulae is that they make the verification of the next proposition trivial. We can now reformulate Proposition 2.1 as follows.

Proposition 3.2. Conjecture 1.6 is equivalent to the assertion that Φ(X, Y, U, V) ≥ 0 for all matrices X, Y, U, V of order d.
Let us remark that in view of the previous proposition we can assume, without any loss of generality, that the matrix X is diagonal with real nonnegative diagonal entries.

The matrix H of the form Φ
We shall consider the entries of X and Y as parameters and those of U and V as complex variables. Then Φ (and each Φ_k) becomes a family of hermitian quadratic forms depending on these parameters. Since we have derived explicit formulae for Φ, it is a routine matter to compute the matrix H = H(X, Y) of Φ. For that purpose we used the formulae given in Section 2. We shall only give a very simple recipe for the construction of H. As Φ depends on 2d² complex variables, its matrix is a hermitian matrix of order 2d². We shall write I_m for the identity matrix of order m.
For any complex matrix Z let Ẑ denote the column vector obtained by writing the columns of Z one below the other, starting with the first column, then the second, etc. Now we can express the relationship between the form Φ and its matrix H by the formula
Φ = ŵ† H ŵ,
where ŵ is the column vector of length 2d² obtained by stacking Û on top of V̂. In view of Proposition 2.1, we can restate Conjecture 1.6 in the following equivalent form.
Conjecture 4.1. The matrix H(X, Y) is positive semidefinite for all matrices X and Y of order d.
Recall that d ≥ 3 by the assumption made earlier, but this conjecture is also valid for d = 1 and d = 2. However in these two cases the determinant of H(X, Y ) is identically 0.
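This equivalent formulation can be tested numerically by recovering H from Φ by polarization. In the sketch below (d = 3; the realization of Φ as ⟨ψ| σ_W(1/2) ⊗ σ_W(1/2) |ψ⟩ and the column-stacking order of Û and V̂ are our assumptions), H comes out hermitian, reproduces Φ, and its smallest eigenvalue is nonnegative for random X, Y; for a random generic diagonal pair it is strictly positive, in line with the result of Section 5:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
n = d * d

# sigma_W(1/2) = 1 - (d/2) P on C^d ⊗ C^d, basis |i>|j> <-> d*i + j.
vmax = np.zeros(n)
for i in range(d):
    vmax[d * i + i] = 1.0
P = np.outer(vmax, vmax) / d
sigma = np.eye(n) - (d / 2) * P
sigma2 = np.kron(sigma, sigma)

def embed(a, b):
    # |a> on (A1, A2), |b> on (B1, B2); reorder to (A1 B1 A2 B2).
    return np.kron(a, b).reshape(d, d, d, d).transpose(0, 2, 1, 3).ravel()

def phi(X, Y, U, V):
    """Phi(X, Y, U, V) = <psi| sigma_W ⊗ sigma_W |psi>, the columns of
    X, Y, U, V being the vectors x_i, y_i, u_i, v_i of Section 2."""
    x, y, u, v = (M.T.ravel() for M in (X, Y, U, V))
    psi = embed(x, u) + embed(y, v)
    return (psi.conj() @ sigma2 @ psi).real

def build_H(X, Y):
    """Recover the hermitian matrix H of w -> Phi, w = (vec U, vec V),
    by polarization over the standard basis vectors."""
    m = 2 * n
    def phi_w(w):
        return phi(X, Y, w[:n].reshape(d, d).T, w[n:].reshape(d, d).T)
    e = np.eye(m, dtype=complex)
    diag = np.array([phi_w(e[a]) for a in range(m)])
    H = np.diag(diag).astype(complex)
    for a in range(m):
        for b in range(a + 1, m):
            re = (phi_w(e[a] + e[b]) - diag[a] - diag[b]) / 2
            im = -(phi_w(e[a] + 1j * e[b]) - diag[a] - diag[b]) / 2
            H[a, b] = re + 1j * im
            H[b, a] = np.conj(H[a, b])
    return H

X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Y = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = build_H(X, Y)

# H is hermitian of order 2*d^2 = 18, and w† H w reproduces Phi.
assert np.allclose(H, H.conj().T)
w = rng.standard_normal(2 * n) + 1j * rng.standard_normal(2 * n)
assert np.isclose((w.conj() @ H @ w).real,
                  phi(X, Y, w[:n].reshape(d, d).T, w[n:].reshape(d, d).T))

print(np.linalg.eigvalsh(H).min())   # nonnegative, as Conjecture 4.1 predicts

# Generic diagonal pair: positive definiteness, as proved in Section 5.
Xd = np.diag(rng.standard_normal(d) + 1j * rng.standard_normal(d))
Yd = np.diag(rng.standard_normal(d) + 1j * rng.standard_normal(d))
print(np.linalg.eigvalsh(build_H(Xd, Yd)).min())   # strictly positive
```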
If A ∈ U(d) and we replace X and Y with AX and AY, respectively, then the H_k undergo the transformation Z → (I_{2d} ⊗ A^*) Z (I_{2d} ⊗ A^T). In fact H_1 and H_2 remain fixed under this transformation.
Similarly, if B ∈ U(d) and we replace X and Y with XB and YB, respectively, then the H_k undergo the transformation Z → (I_2 ⊗ B† ⊗ I_d) Z (I_2 ⊗ B ⊗ I_d). This time H_1 and H_3 remain fixed.
Hence, for A, B ∈ U(d), we have the following formula:
(4.1) H(AXB, AYB) = (I_2 ⊗ B† ⊗ A^*) H(X, Y) (I_2 ⊗ B ⊗ A^T).
This formula is also valid for the matrices H_k.
Next we show that H(X, Y) satisfies yet another identity. Let
Λ = [α β; γ δ]
be a complex matrix of order 2. The substitution (X, Y) → (αX + βY, γX + δY) in Φ has the same effect as the substitution (U, V) → (αU + γV, βU + δV). By using this observation we deduce that
(4.6) H(αX + βY, γX + δY) = (Λ^T ⊗ I_{d²})† H(X, Y) (Λ^T ⊗ I_{d²}).

The diagonal case
We say that the pair of complex matrices (X, Y) in M_d is generic if X and Y are linearly independent and some linear combination of X and Y is nonsingular. In this section we analyze the structure of H = H(X, Y) for a generic pair (X, Y) of diagonal matrices and prove Conjecture 4.1 in that special case. We denote the diagonal entries of X and Y by λ_1, …, λ_d and μ_1, …, μ_d, respectively.
Theorem 5.1. If (X, Y) is a generic pair of diagonal matrices, then H(X, Y) is positive definite.
In the proof one decomposes certain principal submatrices H(p) of H as a sum of positive semidefinite matrices with coefficients c_k, where c_k = 1 for k ≠ i, j and c_i = c_j = 1/2. Each matrix on the RHS is positive semidefinite. If H(p) is singular, then all of these matrices must be singular and must have the same kernel. This contradicts the linear independence of X and Y. Hence H(p) must be positive definite.
It remains to consider the remaining block B of size 2d, i.e., the principal submatrix of H corresponding to the indices (i − 1)d + i and d² + (i − 1)d + i, i = 1, …, d. Each G(i) is positive semidefinite of rank 1 or 2. Thus in the decomposition B = B′ + B_4 both summands, B′ and B_4, are positive semidefinite. Assume that, say, G(1) is singular. Then all d − 1 matrices in the above formula for G(1) must be singular and have the same kernel, i.e., the vectors (μ_2, μ_3, …, μ_d) and (λ_2, λ_3, …, λ_d) are linearly dependent. Since X and Y are linearly independent, all the other G(i) must be positive definite. We deduce that the kernel of B′ is 1-dimensional and is spanned by the column vector having all components 0 except the first, which is −μ_2, and the (d + 1)st, which is λ_2. Since this vector is not annihilated by B_4, we conclude that B is positive definite.
Corollary. H(X, Y) is positive semidefinite for every pair (X, Y) of diagonal matrices.
Proof. This follows from the theorem because any pair of diagonal matrices can be approximated by a generic pair of diagonal matrices.

A simplified conjecture
We have already observed that in proving that Φ(X, Y, U, V ), or equivalently its matrix H(X, Y ), is positive semidefinite we may assume that X is diagonal with real nonnegative diagonal entries. In fact we may clearly assume that these diagonal entries are positive. As the size of H is rapidly increasing with d (it is 18 for d = 3) and it depends on many parameters, the task of proving that H is positive semidefinite seems formidable. The standard criterion would require checking that all principal minors of H are positive semidefinite. For positive definiteness of H the leading principal minors would suffice.
Let us partition H into four square blocks of order d 2 . Then it is not hard to show that the two diagonal blocks are positive semidefinite. In the case d = 3 we were able to show that also the leading principal minor of H of order 10 is a positive semidefinite polynomial. This was accomplished by finding an explicit expression for this minor as a sum of squares of real polynomials. Although some minor further simplifications are possible, it is clear that such a brute force method will not provide a proof of the conjecture in general. Even in the case d = 3 we failed to finish the job.
We propose now a new, simpler conjecture which is stronger than Conjecture 4.1. We introduce the real-valued polynomial D(X, Y) = det H(X, Y). By taking determinants in Eq. (4.6) we deduce that
(6.2) D(αX + βY, γX + δY) = |αδ − βγ|^{2d²} D(X, Y)
is valid when Λ is invertible. Since both sides are polynomials, this identity must be valid for arbitrary Λ. Note that D(X, 0) = 0 for all matrices X. More generally, we claim that D(X, Y) = 0 if X and Y are linearly dependent. Indeed, it suffices to choose a matrix Λ as in (4.4) such that γX + δY = 0 and apply Eq. (6.2). The converse of this claim is false, but we conjecture that it is true in a weaker form.
Conjecture 6.1. If the pair (X, Y) is generic, then D(X, Y) ≠ 0.
Theorem 5.1 shows that this conjecture is true when the matrices X and Y are diagonal. As this conjecture deals with only one polynomial and has no positivity conditions whatsoever, it should be much easier to prove (or disprove).
Proposition 6.2. Conjecture 4.1 is a consequence of Conjecture 6.1.
Proof. Let X_1 and Y_1 be any matrices in M_d. We have to show that H(X_1, Y_1) is positive semidefinite. Clearly it suffices to prove this when the pair (X_1, Y_1) is generic. Let (X_0, Y_0) be a generic pair of diagonal matrices. Then H(X_0, Y_0) is positive definite by Theorem 5.1. Consequently, D(X_0, Y_0) > 0 and all eigenvalues of H(X_0, Y_0) are positive. We can join the pairs (X_0, Y_0) and (X_1, Y_1) by a continuous path (X_t, Y_t), 0 ≤ t ≤ 1, such that (X_t, Y_t) is generic for each t. By Conjecture 6.1, D(X_t, Y_t) ≠ 0 for all t. Hence H(X_t, Y_t) has no zero eigenvalues. Since the eigenvalues of H(X_t, Y_t) are continuous functions of t and they are all positive for t = 0, they must all remain positive for all values of t. In particular this is true for t = 1. We thus conclude that H(X_1, Y_1) is positive definite.
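The determinant identity (6.2) can be verified numerically for d = 3. The following sketch (the realization of Φ as ⟨ψ| σ_W(1/2) ⊗ σ_W(1/2) |ψ⟩ and the recovery of H by polarization are our assumptions) compares log-determinants of H(αX + βY, γX + δY) and H(X, Y) for random data, recovering the factor |αδ − βγ|^{2d²} with 2d² = 18:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
n = d * d

# Reconstructed form Phi and its matrix H (same conventions as in Section 2).
vmax = np.zeros(n)
for i in range(d):
    vmax[d * i + i] = 1.0
P = np.outer(vmax, vmax) / d
sigma = np.eye(n) - (d / 2) * P
sigma2 = np.kron(sigma, sigma)

def embed(a, b):
    # reorder kron(a, b) from (A1 A2 B1 B2) to (A1 B1 A2 B2)
    return np.kron(a, b).reshape(d, d, d, d).transpose(0, 2, 1, 3).ravel()

def phi(X, Y, U, V):
    x, y, u, v = (M.T.ravel() for M in (X, Y, U, V))
    psi = embed(x, u) + embed(y, v)
    return (psi.conj() @ sigma2 @ psi).real

def build_H(X, Y):
    # recover the hermitian matrix of w -> Phi by polarization
    m = 2 * n
    def phi_w(w):
        return phi(X, Y, w[:n].reshape(d, d).T, w[n:].reshape(d, d).T)
    e = np.eye(m, dtype=complex)
    diag = np.array([phi_w(e[a]) for a in range(m)])
    H = np.diag(diag).astype(complex)
    for a in range(m):
        for b in range(a + 1, m):
            re = (phi_w(e[a] + e[b]) - diag[a] - diag[b]) / 2
            im = -(phi_w(e[a] + 1j * e[b]) - diag[a] - diag[b]) / 2
            H[a, b] = re + 1j * im
            H[b, a] = np.conj(H[a, b])
    return H

def rnd():
    return rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

X, Y = rnd(), rnd()
al, be, ga, de = (rng.standard_normal() + 1j * rng.standard_normal()
                  for _ in range(4))

s1, l1 = np.linalg.slogdet(build_H(X, Y))
s2, l2 = np.linalg.slogdet(build_H(al * X + be * Y, ga * X + de * Y))

# (6.2): log D(aX+bY, cX+dY) - log D(X, Y) = 2*d^2 * log|ad - bc|
print(np.isclose(l2 - l1, 2 * n * np.log(abs(al * de - be * ga))))
```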
We do not know whether Conjectures 4.1 and 6.1 are equivalent.