Article

Comparing Singlet Testing Schemes

by George Cowperthwaite 1,† and Adrian Kent 1,2,*,†
1 Centre for Quantum Information and Foundations, DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK
2 Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON N2L 2Y5, Canada
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2025, 27(5), 515; https://doi.org/10.3390/e27050515
Submission received: 7 April 2025 / Revised: 4 May 2025 / Accepted: 6 May 2025 / Published: 11 May 2025
(This article belongs to the Special Issue Editorial Board Members' Collection Series on Quantum Entanglement)

Abstract: We compare schemes for testing whether two parties share a two-qubit singlet state. The first, standard, scheme tests Braunstein–Caves (or CHSH) inequalities, comparing the correlations of local measurements drawn from a fixed finite set against the quantum predictions for a singlet. The second, alternative, scheme tests the correlations of local measurements, drawn randomly from the set of those that are $\theta$-separated on the Bloch sphere, against the quantum predictions. We formulate each scheme as a hypothesis test and then evaluate the test power in a number of adversarial scenarios involving an eavesdropper altering or replacing the singlet qubits. We find the 'random measurement' test to be superior in most natural scenarios.

1. Introduction

Many quantum information protocols require two parties (Alice and Bob) to share a two-qubit singlet state:
$$|\Psi^-\rangle = \frac{1}{\sqrt{2}}\left(|01\rangle - |10\rangle\right),$$
where Alice holds the first qubit and Bob the second. Common examples of such two-party protocols include teleportation [1], summoning tasks [2,3,4,5] and other forms of distributed quantum computing (e.g., [6]), entanglement-based key distribution protocols (e.g., [7]), communication and information processing between collaborating agents in some protocols for position verification and position-based cryptography (e.g., [8,9,10]), and in relativistic quantum bit commitment (e.g., [11]).
The two parties should be confident they share singlets, both to ensure the protocol will execute as intended and to preclude the possibility of an adversarial third party having interfered with the system for their own advantage. A natural method for distinguishing singlets from other quantum states is to measure a quantity for which the singlet attains a unique maximum. Common examples are the expressions in the CHSH [12] and Braunstein–Caves [13] inequalities. These have the additional advantage that they test Bell nonlocality. They can thus detect any adversarial attack that replaces the singlet qubits with classical physical systems programmed to produce deterministic or probabilistic results in response to measurements, since these can be modelled by local hidden variables. In this paper, we examine a quantity that has the same properties but has not previously been studied as a singlet test, the (anti-)correlation of outcomes of random measurements separated by a fixed angle $\theta \in (0, \pi/3)$, and compare it to schemes derived from the CHSH and Braunstein–Caves inequalities (e.g., [14,15,16,17]).
The singlet testing schemes we examine in this paper only require both parties to accurately perform projective measurements. Unlike singlet purification schemes [18], they do not need quantum computers or quantum memory. This makes them potentially advantageous when users' technology is limited, or more generally when single-qubit measurements are cheaper than multi-qubit operations.
We assume that Alice and Bob are separated, and the singlet is created either by Alice or by a source separate from both Alice and Bob, before each qubit is transmitted to the respective party. An adversarial third party may intercept the qubits during transmission and alter them, either to obtain information or to disrupt the protocol. We refer to her as ‘Eve’, but emphasize that we are interested in protocols beyond key distribution and that her potential interference need not necessarily involve eavesdropping.
We compare the power of our proposed singlet testing schemes against four commonly studied attacks. These do not represent the full range of possible adversarial action, but illustrate why random measurement testing schemes can be advantageous in a variety of scenarios:
  • Single-qubit intercept–resend attack: Eve intercepts Bob’s qubit, performs a local projective measurement, notes the outcome, and sends the post-measurement state on to Bob. This could occur in a setting where Alice creates the singlet and transmits a qubit to Bob.
  • Bipartite state transformation: Eve intercepts both qubits and performs a quantum operation on them, replacing the singlet with some other two-qubit state that is sent to Alice and Bob.
  • LHV replacement: Eve replaces the singlet with a non-quantum system chosen so that Alice’s and Bob’s measurement outcomes are determined by local hidden variables instead of quantum entanglement.
  • Noisy quantum channel: This is described by a physically natural noise model (and is hence a special case of scenario 2, if we consider the noise as being due to Eve). This alters the singlet state as it is transmitted to Alice and Bob.
The advantages of these various attacks for Eve, in disrupting or obtaining information from the protocol, will depend on the context. We assume each offers Eve some potential advantage and focus on the extent to which Alice and Bob can detect the attacks.
We consider two different types of scheme that Alice and Bob may use to test the purported singlet:
  • Braunstein–Caves test: Testing the Braunstein–Caves inequality [13] with a specific set of $N$ measurement choices for which the singlet uniquely induces the maximum violation [19]. We will often focus particularly on the $N = 2$ case, the CHSH inequality [12], for which self-testing schemes have been extensively studied (e.g., [14,15,16,17]).
  • Random measurement test: Alice and Bob choose random local projective measurements that are constrained to have a fixed separation angle on the Bloch sphere [20] and calculate the anti-correlation of their measurement outcomes. For a wide range of angles, this is uniquely maximized by the singlet.
The intuition, which we test and quantify, is that the random measurement test may generally be more efficient than Braunstein–Caves, as it tests anti-correlations for the same set of axis separations ($\pi/2N$), but chooses axes randomly over the Bloch sphere, providing Eve with less information about the test measurements, and hence offering her less scope to tailor her attack to minimize its detectability. In particular, the random measurement test is rotationally symmetric, and is hence sensitive to any attack by Eve that breaks rotational symmetry. It can also be applied to any $\theta$, not just the discrete set of the form $\{\pi/2N : N \in \mathbb{N}, N > 1\}$.
We first describe these schemes and analyse their efficiency. We discuss their feasibility in the final section.

2. Materials and Methods

2.1. Braunstein–Caves Singlet Test

2.1.1. Preliminaries

Alice and Bob wish to test whether they share a two-qubit singlet. It is possible that they instead share a not necessarily quantum system governed by a local hidden variable theory (see Section 3.3) or a more complex quantum system with further degrees of freedom. However, we start by assuming the parties are confident they share a (potentially mixed) two-qubit state $\rho_{AB}$. To start the test, Alice and Bob use an authenticated channel to fix a parameter $N \ge 2$ and uniformly randomly generate a projective measurement pair $(A, B)$ from the set
$$\{(a_k, b_k)\}_{k=0}^{N-1} \cup \{(a_{k+1}, b_k)\}_{k=0}^{N-2} \cup \{(a_0, b_{N-1})\},$$
where
$$a_k = \left\{|m_{k\pi/2N}\rangle,\; |m_{k\pi/2N + \pi/2}\rangle\right\}, \qquad b_k = \left\{|m_{(2k+1)\pi/4N}\rangle,\; |m_{(2k+1)\pi/4N + \pi/2}\rangle\right\},$$
with $|m_\theta\rangle = \cos\theta\,|0\rangle + \sin\theta\,|1\rangle$; outcomes are labelled $\{1, -1\}$, respectively.
Next, Alice and Bob perform measurements $A$ and $B$, respectively, on their qubit and store their outcomes as $O_A$ and $O_B$, respectively. They compute the following quantity:
$$\hat{C} = \begin{cases} -O_A O_B & \text{when } (A, B) = (a_0, b_{N-1}), \\ O_A O_B & \text{otherwise,} \end{cases}$$
through an authenticated classical channel. We call C ^ the ‘Braunstein–Caves sample’, as it possesses properties derived from the Braunstein–Caves inequality [13].
$\hat{C}$ takes outcomes $\{1, -1\}$, so it follows a shifted Bernoulli distribution. The expected value of $\hat{C}$ resulting from a uniformly random choice of measurement bases $(A, B)$ and the application of those measurements is as follows:
$$E[\hat{C}] = \frac{1}{2N}\left(\sum_{k=0}^{N-1} E[a_k, b_k] + \sum_{k=0}^{N-2} E[a_{k+1}, b_k] - E[a_0, b_{N-1}]\right),$$
with $E[x, y]$ defined as the expected correlation between Alice and Bob's measurement outcomes for choices $(A, B) = (x, y)$. The expectation is bounded [19] for quantum states by
$$|E[\hat{C}]| \le \cos\frac{\pi}{2N},$$
and the singlet saturates this bound [19].
As we review below, for the measurement choices defined in (3) and (4), the singlet uniquely achieves the minimum expectation $-\cos(\pi/2N)$. Any other state thus produces a detectable deviation in the sample mean of $\hat{C}$, given a large enough sample size, assuming perfect measurements.

2.1.2. Calculating $E[\hat{C}]$ for $\rho_{AB}$

If Alice and Bob each utilise local projective measurements, a general combined measurement basis for a Braunstein–Caves test can be described by the following:
$$\left\{|m_\theta\rangle|m_\phi\rangle,\; |m_{\theta+\frac{\pi}{2}}\rangle|m_{\phi+\frac{\pi}{2}}\rangle,\; |m_\theta\rangle|m_{\phi+\frac{\pi}{2}}\rangle,\; |m_{\theta+\frac{\pi}{2}}\rangle|m_\phi\rangle\right\},$$
where $|m_\theta\rangle = \cos\theta\,|0\rangle + \sin\theta\,|1\rangle$, with the first two results corresponding to the correlated outcomes and the final two results corresponding to the anticorrelated outcomes.
Define $a_{ijpq} = \langle ij|\rho_{AB}|pq\rangle$ for $i, j, p, q \in \{0, 1\}$. The expected correlation between the outcomes of measurement $\{|m_\theta\rangle, |m_{\theta+\frac{\pi}{2}}\rangle\}$ on Alice's qubit and $\{|m_\phi\rangle, |m_{\phi+\frac{\pi}{2}}\rangle\}$ on Bob's qubit is given by the following:
$$\begin{aligned}
E\left[\{|m_\theta\rangle, |m_{\theta+\frac{\pi}{2}}\rangle\}, \{|m_\phi\rangle, |m_{\phi+\frac{\pi}{2}}\rangle\}\right]
&= P(\text{outcomes same}) - P(\text{outcomes differ}) = 2P(\text{outcomes same}) - 1 \\
&= 2\,\langle m_\phi|\langle m_\theta|\,\rho_{AB}\,|m_\theta\rangle|m_\phi\rangle + 2\,\langle m_{\phi+\frac{\pi}{2}}|\langle m_{\theta+\frac{\pi}{2}}|\,\rho_{AB}\,|m_{\theta+\frac{\pi}{2}}\rangle|m_{\phi+\frac{\pi}{2}}\rangle - 1 \\
&= \cos 2\theta\cos 2\phi\,(a_{0000} + a_{1111} - a_{0101} - a_{1010}) + 2\cos 2\theta\sin 2\phi\cdot\mathrm{Re}(a_{0001} - a_{1011}) \\
&\quad + 2\sin 2\theta\cos 2\phi\cdot\mathrm{Re}(a_{0010} - a_{0111}) + 2\sin 2\theta\sin 2\phi\cdot\mathrm{Re}(a_{0011} + a_{0110}).
\end{aligned}$$
Thus, by utilising values of $\theta, \phi$ corresponding to the measurements in (3) and (4), quantity (6) can be evaluated as follows:
$$\begin{aligned}
E[\hat{C}] &= \frac{1}{2N}\left(\sum_{k=0}^{N-1} E[a_k, b_k] + \sum_{k=0}^{N-2} E[a_{k+1}, b_k] - E[a_0, b_{N-1}]\right) \\
&= \frac{1}{2}\cos\frac{\pi}{2N}\,(a_{0000} + a_{1111} - a_{0101} - a_{1010}) + \cos\frac{\pi}{2N}\,\mathrm{Re}(a_{0011} + a_{0110}) \\
&= \cos\frac{\pi}{2N}\left(\langle\Phi^+|\rho_{AB}|\Phi^+\rangle - \langle\Psi^-|\rho_{AB}|\Psi^-\rangle\right),
\end{aligned}$$
where $|\Phi^+\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ and $|\Psi^-\rangle = \frac{1}{\sqrt{2}}(|01\rangle - |10\rangle)$.
Clearly, the minimum of $E[\hat{C}]$ is uniquely attained by the singlet.
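This closed form can be checked numerically. The sketch below (ours, not the authors') evaluates the per-pair correlation formula (6) over the Braunstein–Caves measurement pairs for a random pure two-qubit state, and compares the average against $\cos(\pi/2N)\left(\langle\Phi^+|\rho_{AB}|\Phi^+\rangle - \langle\Psi^-|\rho_{AB}|\Psi^-\rangle\right)$.

```python
import math, random

def correlation(theta, phi, a):
    """Per-pair expected correlation, Eq. (6), with a[i,j,p,q] = <ij|rho|pq>."""
    c2t, s2t = math.cos(2*theta), math.sin(2*theta)
    c2p, s2p = math.cos(2*phi), math.sin(2*phi)
    return (c2t*c2p*(a[0,0,0,0] + a[1,1,1,1] - a[0,1,0,1] - a[1,0,1,0]).real
            + 2*c2t*s2p*(a[0,0,0,1] - a[1,0,1,1]).real
            + 2*s2t*c2p*(a[0,0,1,0] - a[0,1,1,1]).real
            + 2*s2t*s2p*(a[0,0,1,1] + a[0,1,1,0]).real)

def bc_expectation(N, a):
    """E[C^]: average of Eq. (6) over the 2N Braunstein-Caves pairs."""
    ak = [k*math.pi/(2*N) for k in range(N)]
    bk = [(2*k + 1)*math.pi/(4*N) for k in range(N)]
    total = sum(correlation(ak[k], bk[k], a) for k in range(N))
    total += sum(correlation(ak[k+1], bk[k], a) for k in range(N - 1))
    total -= correlation(ak[0], bk[N-1], a)
    return total / (2*N)

# random pure two-qubit state rho = |psi><psi|
rng = random.Random(1)
c = {(i, j): complex(rng.gauss(0, 1), rng.gauss(0, 1)) for i in (0, 1) for j in (0, 1)}
norm = math.sqrt(sum(abs(v)**2 for v in c.values()))
c = {k: v/norm for k, v in c.items()}
a = {(i, j, p, q): c[i, j] * c[p, q].conjugate()
     for i in (0, 1) for j in (0, 1) for p in (0, 1) for q in (0, 1)}

N = 3
phi_plus = abs(c[0, 0] + c[1, 1])**2 / 2    # <Phi+|rho|Phi+>
psi_minus = abs(c[0, 1] - c[1, 0])**2 / 2   # <Psi-|rho|Psi->
closed_form = math.cos(math.pi/(2*N)) * (phi_plus - psi_minus)
assert abs(bc_expectation(N, a) - closed_form) < 1e-9
```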

2.2. Random Measurement Singlet Test

2.2.1. Preliminaries

Alice and Bob wish to test whether they share a two-qubit singlet. Again, we start by assuming the parties are confident they share a (potentially mixed) two-qubit state $\rho_{AB}$. Alice uniformly randomly generates a projective qubit measurement
$$\{|\psi_A\rangle, |\psi_A^\perp\rangle\},$$
corresponding to outcomes $\{1, -1\}$ and, likewise, Bob uniformly randomly generates a projective qubit measurement
$$\{|\psi_B\rangle, |\psi_B^\perp\rangle\},$$
from a set with the defining restriction that $|\psi_A\rangle$ and $|\psi_B\rangle$ must be separated by angle $\theta \in [0, \pi/2]$ on the Bloch sphere, so that $|\langle\psi_A|\psi_B\rangle| = \cos(\theta/2)$. This separation can be achieved in many ways: for example, Alice and Bob could share a list of pre-agreed measurements, or Bob could delay his measurement choice until Alice has made and communicated hers. The optimal method of achieving this depends on the parent protocol within which the shared singlet is required. For example, the pre-sharing of measurement choices may be reasonable when verifying singlets for use in a teleportation protocol, but may not be used in a key generation scheme, as a one-time pad could instead be pre-shared with similar resources.
Next, Alice and Bob perform their chosen measurements on their qubit and compute the product of their outcomes through an authenticated classical channel to obtain
$$\hat{O} = O_A O_B,$$
where $O_A$ and $O_B$ are the outcomes of Alice and Bob's measurements, respectively. We call $\hat{O}$ the 'random measurement sample'.
$\hat{O}$ takes outcomes $\{1, -1\}$, so it follows a shifted Bernoulli distribution. The expected value of $\hat{O}$ resulting from a uniformly random choice of $\theta$-separated measurement bases on the Bloch sphere and the application of those measurements is denoted by $E[\hat{O}]$.
It will be shown that, for this test, for $0 \le \theta < \pi/2$ the singlet uniquely (among quantum states) attains the minimum expectation $-\cos\theta$. Any other state would produce a significant deviation in the sample mean of $\hat{O}$ given a large enough sample size. If Alice and Bob can implement a random measurement test precisely, for any given $\theta$ in the range, they can thus distinguish a source of singlets from a source of any other quantum state. Note, however, that since the measurements need to be individually calibrated, this is not a device-independent singlet test.

2.2.2. Relation Between $|\psi_A\rangle$ and $|\psi_B\rangle$

If $|\psi_A\rangle$ and $|\psi_B\rangle$ are separated by angle $\theta$ on the Bloch sphere, they are related by
$$|\psi_B\rangle = U_{|\psi_A\rangle} P(\alpha) R(\theta/2)|0\rangle, \qquad |\psi_B^\perp\rangle = U_{|\psi_A\rangle} P(\alpha) R(\theta/2)|1\rangle,$$
for some $\alpha \in [0, 2\pi)$, where $U_{|\psi_A\rangle} = |\psi_A\rangle\langle 0| + |\psi_A^\perp\rangle\langle 1|$ is a unitary transformation and
$$P(\alpha) = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\alpha} \end{pmatrix}, \qquad R(\theta/2) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix},$$
in the computational basis. Thus,
$$\begin{aligned}
|\psi_B\rangle &= \langle 0|P(\alpha)R(\theta/2)|0\rangle\,|\psi_A\rangle + \langle 1|P(\alpha)R(\theta/2)|0\rangle\,|\psi_A^\perp\rangle, \\
|\psi_B^\perp\rangle &= \langle 0|P(\alpha)R(\theta/2)|1\rangle\,|\psi_A\rangle + \langle 1|P(\alpha)R(\theta/2)|1\rangle\,|\psi_A^\perp\rangle.
\end{aligned}$$
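A quick numeric check of this parametrisation (our sketch; the helper names are ours): constructing $|\psi_B\rangle$ and $|\psi_B^\perp\rangle$ via (16) and confirming that Bob's pair is orthonormal and that the Hilbert-space overlap with $|\psi_A\rangle$ is $\cos(\theta/2)$, as expected for Bloch-sphere separation $\theta$.

```python
import math, cmath

def bob_states(psiA, psiAperp, theta, alpha):
    """|psi_B> and |psi_B^perp> from Eq. (16):
    |psi_B>      =  cos(theta/2)|psi_A> + e^{i alpha} sin(theta/2)|psi_A^perp>,
    |psi_B^perp> = -sin(theta/2)|psi_A> + e^{i alpha} cos(theta/2)|psi_A^perp>."""
    c, s, ph = math.cos(theta/2), math.sin(theta/2), cmath.exp(1j*alpha)
    psiB = [c*psiA[k] + ph*s*psiAperp[k] for k in range(2)]
    psiBperp = [-s*psiA[k] + ph*c*psiAperp[k] for k in range(2)]
    return psiB, psiBperp

def inner(u, v):  # <u|v>
    return sum(u[k].conjugate()*v[k] for k in range(2))

# example: |psi_A> parametrised by Bloch angles (omega, beta)
omega, beta, theta, alpha = 1.1, 0.7, math.pi/4, 2.0
psiA = [math.cos(omega/2), cmath.exp(1j*beta)*math.sin(omega/2)]
psiAperp = [math.sin(omega/2), -cmath.exp(1j*beta)*math.cos(omega/2)]
psiB, psiBperp = bob_states(psiA, psiAperp, theta, alpha)

assert abs(abs(inner(psiA, psiB)) - math.cos(theta/2)) < 1e-12  # overlap cos(theta/2)
assert abs(inner(psiB, psiBperp)) < 1e-12                       # orthonormal pair
```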

2.2.3. $E_{AB}[\hat{O}]$ for a Fixed Pair of Measurements

For ease of notation, define the following product states:
$$\begin{aligned}
|\psi_{AA}\rangle &= |\psi_A\rangle|\psi_A\rangle, & |\psi_{AA^\perp}\rangle &= |\psi_A\rangle|\psi_A^\perp\rangle, \\
|\psi_{A^\perp A}\rangle &= |\psi_A^\perp\rangle|\psi_A\rangle, & |\psi_{A^\perp A^\perp}\rangle &= |\psi_A^\perp\rangle|\psi_A^\perp\rangle, \\
|\psi_{AB}\rangle &= |\psi_A\rangle|\psi_B\rangle, & |\psi_{A^\perp B^\perp}\rangle &= |\psi_A^\perp\rangle|\psi_B^\perp\rangle.
\end{aligned}$$
Let $E_{AB}[\hat{O}]$ be the expected measurement correlation for a fixed choice of $|\psi_A\rangle$ and $|\psi_B\rangle$. Then,
$$E_{AB}[\hat{O}] = P[\hat{O} = 1] - P[\hat{O} = -1] = 2P[\hat{O} = 1] - 1 = 2\langle\psi_{AB}|\rho_{AB}|\psi_{AB}\rangle + 2\langle\psi_{A^\perp B^\perp}|\rho_{AB}|\psi_{A^\perp B^\perp}\rangle - 1.$$
Using the expressions for $|\psi_B\rangle$ and $|\psi_B^\perp\rangle$ in (16) to evaluate each term individually,
$$\begin{aligned}
E_{AB}[\hat{O}] ={}& 2\cos^2(\theta/2)\,\langle\psi_{AA}|\rho_{AB}|\psi_{AA}\rangle + 2\sin^2(\theta/2)\,\langle\psi_{AA^\perp}|\rho_{AB}|\psi_{AA^\perp}\rangle + 2\,\mathrm{Re}\!\left[e^{i\alpha}\sin\theta\,\langle\psi_{AA}|\rho_{AB}|\psi_{AA^\perp}\rangle\right] \\
&+ 2\cos^2(\theta/2)\,\langle\psi_{A^\perp A^\perp}|\rho_{AB}|\psi_{A^\perp A^\perp}\rangle + 2\sin^2(\theta/2)\,\langle\psi_{A^\perp A}|\rho_{AB}|\psi_{A^\perp A}\rangle - 2\,\mathrm{Re}\!\left[e^{i\alpha}\sin\theta\,\langle\psi_{A^\perp A}|\rho_{AB}|\psi_{A^\perp A^\perp}\rangle\right] - 1.
\end{aligned}$$

2.2.4. $E_A[\hat{O}]$ for a Fixed Alice Measurement

For a fixed measurement choice for Alice, the expected correlation $E_A[\hat{O}]$ over all Bob's possible measurement choices is found by integrating over $\alpha$ in $[0, 2\pi]$. The integrals of $e^{i\alpha}$ and $e^{-i\alpha}$ vanish over this interval; hence,
$$\begin{aligned}
E_A[\hat{O}] &= \frac{1}{2\pi}\int_0^{2\pi} E_{AB}[\hat{O}]\,d\alpha \\
&= 2\cos^2(\theta/2)\left(\langle\psi_{AA}|\rho_{AB}|\psi_{AA}\rangle + \langle\psi_{A^\perp A^\perp}|\rho_{AB}|\psi_{A^\perp A^\perp}\rangle\right) - 1 + 2\sin^2(\theta/2)\left(\langle\psi_{AA^\perp}|\rho_{AB}|\psi_{AA^\perp}\rangle + \langle\psi_{A^\perp A}|\rho_{AB}|\psi_{A^\perp A}\rangle\right) \\
&= \cos\theta\left(2\langle\psi_{AA}|\rho_{AB}|\psi_{AA}\rangle + 2\langle\psi_{A^\perp A^\perp}|\rho_{AB}|\psi_{A^\perp A^\perp}\rangle - 1\right).
\end{aligned}$$
Note that $|\psi_A\rangle$ can be written as
$$|\psi_A\rangle = \cos(\omega/2)|0\rangle + e^{i\beta}\sin(\omega/2)|1\rangle, \qquad |\psi_A^\perp\rangle = \sin(\omega/2)|0\rangle - e^{i\beta}\cos(\omega/2)|1\rangle,$$
for some $\omega \in [0, \pi]$ and $\beta \in [0, 2\pi)$, so that
$$\begin{aligned}
|\psi_{AA}\rangle &= \tfrac12(1 + \cos\omega)|00\rangle + \tfrac12 e^{i\beta}\sin\omega\,|01\rangle + \tfrac12 e^{i\beta}\sin\omega\,|10\rangle + \tfrac12 e^{2i\beta}(1 - \cos\omega)|11\rangle, \\
|\psi_{A^\perp A^\perp}\rangle &= \tfrac12(1 - \cos\omega)|00\rangle - \tfrac12 e^{i\beta}\sin\omega\,|01\rangle - \tfrac12 e^{i\beta}\sin\omega\,|10\rangle + \tfrac12 e^{2i\beta}(1 + \cos\omega)|11\rangle.
\end{aligned}$$
For ease of notation, define the following quantities:
$$a_{ijpq} = \langle ij|\rho_{AB}|pq\rangle,$$
for $i, j, p, q \in \{0, 1\}$. Using these quantities, Equation (20) provides the following:
$$\begin{aligned}
E_A[\hat{O}] ={}& \cos\theta\left(2\langle\psi_{AA}|\rho_{AB}|\psi_{AA}\rangle + 2\langle\psi_{A^\perp A^\perp}|\rho_{AB}|\psi_{A^\perp A^\perp}\rangle - 1\right) \\
={}& \cos\theta\Big[\tfrac12(1 + \cos\omega)^2 a_{0000} + \tfrac12(1 - \cos\omega)^2 a_{1111} + \tfrac12(1 - \cos\omega)^2 a_{0000} + \tfrac12(1 + \cos\omega)^2 a_{1111} \\
&\quad + \sin^2\omega\cdot a_{0101} + \sin^2\omega\cdot a_{1010} + 2\sin^2\omega\cdot\mathrm{Re}[a_{0110}] - 1 + f(e^{i\beta}, e^{2i\beta}, e^{-i\beta}, e^{-2i\beta})\Big] \\
={}& \cos\theta\Big[\cos^2\omega - (\cos 2\omega + 1)(a_{0101} + a_{1010}) + 2\sin^2\omega\cdot\mathrm{Re}[a_{0110}] + f(e^{i\beta}, e^{2i\beta}, e^{-i\beta}, e^{-2i\beta})\Big],
\end{aligned}$$
where $f$ is a function representing a linear combination of its arguments.

2.2.5. Calculating $E[\hat{O}]$

The expected correlation over all Alice's possible measurement choices is found by integrating (24) over $\beta$ in $[0, 2\pi]$ and over $\omega$ in $[0, \pi]$, with the Jacobian $\sin\omega$ appropriate for integration over a sphere's surface. The integral of $e^{in\beta}$ vanishes over the interval $[0, 2\pi]$ for nonzero integer $n$; hence,
$$\begin{aligned}
E[\hat{O}] &= \frac{1}{4\pi}\int_0^{2\pi}\!\!\int_0^\pi E_A[\hat{O}]\sin\omega\,d\omega\,d\beta \\
&= \frac12\cos\theta\int_0^\pi\Big[\cos^2\omega - (\cos 2\omega + 1)(a_{0101} + a_{1010}) + 2\sin^2\omega\cdot\mathrm{Re}[a_{0110}]\Big]\sin\omega\,d\omega \\
&= \cos\theta\left(\frac13 - \frac23(a_{0101} + a_{1010}) + \frac43\,\mathrm{Re}[a_{0110}]\right) \\
&= \cos\theta\left(\frac13 - \frac43\langle\Psi^-|\rho_{AB}|\Psi^-\rangle\right),
\end{aligned}$$
where $\langle\Psi^-|\rho_{AB}|\Psi^-\rangle$ is the fidelity between $\rho_{AB}$ and the singlet state.
Clearly, the minimum of $E[\hat{O}]$ is uniquely attained by the singlet.
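The result (25) can be verified numerically. The sketch below (ours; all names are ours) computes $E_{AB}[\hat{O}]$ directly from outcome probabilities and averages it over Alice's Bloch sphere and Bob's phase $\alpha$, using a quadrature that is exact for the trigonometric polynomials involved, then compares against the closed form $\cos\theta\left(\frac13 - \frac43\langle\Psi^-|\rho_{AB}|\Psi^-\rangle\right)$.

```python
import math, cmath, random

def eab(psiA, psiAperp, theta, alpha, rho):
    """E_AB[O^] = 2<psi_AB|rho|psi_AB> + 2<psi_{A-perp B-perp}|rho|...> - 1."""
    c, s, ph = math.cos(theta/2), math.sin(theta/2), cmath.exp(1j*alpha)
    psiB = [c*psiA[k] + ph*s*psiAperp[k] for k in range(2)]
    psiBperp = [-s*psiA[k] + ph*c*psiAperp[k] for k in range(2)]
    def qform(u):  # <u|rho|u> for a 4-component vector u
        return sum(u[m].conjugate()*rho[m][n]*u[n]
                   for m in range(4) for n in range(4)).real
    psiAB = [psiA[i]*psiB[j] for i in range(2) for j in range(2)]
    psiApBp = [psiAperp[i]*psiBperp[j] for i in range(2) for j in range(2)]
    return 2*qform(psiAB) + 2*qform(psiApBp) - 1

def expected_O(theta, rho):
    """Average E_AB over omega, beta (Alice's axis) and alpha (Bob's phase):
    3-point Gauss-Legendre in u = cos(omega); uniform grids in beta, alpha."""
    gl = [(-math.sqrt(0.6), 5/9), (0.0, 8/9), (math.sqrt(0.6), 5/9)]
    nb = na = 8
    total = 0.0
    for u, w in gl:
        omega = math.acos(u)
        for ib in range(nb):
            beta = 2*math.pi*ib/nb
            psiA = [math.cos(omega/2), cmath.exp(1j*beta)*math.sin(omega/2)]
            psiAperp = [math.sin(omega/2), -cmath.exp(1j*beta)*math.cos(omega/2)]
            for ia in range(na):
                alpha = 2*math.pi*ia/na
                total += (w/2)*eab(psiA, psiAperp, theta, alpha, rho)/(nb*na)
    return total

theta = math.pi/4
sv = [0.0, 1/math.sqrt(2), -1/math.sqrt(2), 0.0]             # singlet vector
rho_singlet = [[sv[m]*sv[n] for n in range(4)] for m in range(4)]
assert abs(expected_O(theta, rho_singlet) + math.cos(theta)) < 1e-9

rng = random.Random(7)                                        # random pure state
v = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(4)]
norm = math.sqrt(sum(abs(z)**2 for z in v))
v = [z/norm for z in v]
rho = [[v[m]*v[n].conjugate() for n in range(4)] for m in range(4)]
fidelity = abs(v[1] - v[2])**2 / 2                            # <Psi-|rho|Psi->
assert abs(expected_O(theta, rho) - math.cos(theta)*(1/3 - 4/3*fidelity)) < 1e-9
```

The uniform grids integrate the $e^{in\beta}$ and $e^{i\alpha}$ terms exactly, and the remaining integrand is a low-degree polynomial in $\cos\omega$, so the three-point Gauss–Legendre rule is exact up to rounding.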

3. Results

We will now describe and compare hypothesis tests for the singlet using (i) the Braunstein–Caves samples with parameter $N$ or (ii) random measurement samples with $\theta = \pi/2N$. We link our choice of $\theta$ to $N$ in this way, as this ensures both tests induce equal expected correlations when measuring singlets, allowing for a clear comparison of the effect of deviations. We recall that the test samples both follow shifted Bernoulli distributions:
$$\text{BC sample: } \hat{C} \sim 2\cdot\mathrm{Bernoulli}(p) - 1, \qquad \text{RM sample: } \hat{O} \sim 2\cdot\mathrm{Bernoulli}(q) - 1,$$
with parameters defined through the expectation values in (10) and (25) as
$$p = \frac12 + \frac12\cos\frac{\pi}{2N}\left(\langle\Phi^+|\rho_{AB}|\Phi^+\rangle - \langle\Psi^-|\rho_{AB}|\Psi^-\rangle\right), \qquad q = \frac12 + \frac12\cos\frac{\pi}{2N}\left(\frac13 - \frac43\langle\Psi^-|\rho_{AB}|\Psi^-\rangle\right).$$
If we denote the sample mean of $n$ Braunstein–Caves samples as $\bar{C}$ and the sample mean of $n$ random measurement samples as $\bar{O}$, then they both follow shifted binomial distributions:
$$\text{BC sample mean: } \bar{C} \sim \tfrac{2}{n}B(n, p) - 1, \qquad \text{RM sample mean: } \bar{O} \sim \tfrac{2}{n}B(n, q) - 1.$$

3.1. Description of Hypothesis Tests

We aim to test the following hypotheses:
$$H_0: \rho_{AB} = |\Psi^-\rangle\langle\Psi^-| \quad \text{vs.} \quad H_1: \rho_{AB} \ne |\Psi^-\rangle\langle\Psi^-|.$$
If we wish to conduct the test using Braunstein–Caves samples, we generate $n$ samples of $\hat{C}$ (as in Section 2.1) and let the test statistic be $\bar{C}$.
If we wish to conduct the test using random measurement samples, we generate $n$ samples of $\hat{O}$ (as in Section 2.2) and let the test statistic be $\bar{O}$.
Let $\alpha$ be the desired size of the test, defined as the probability the null hypothesis is erroneously rejected when Alice and Bob do in fact share a singlet. The critical region $R$ is a set of values for the test statistic for which the null hypothesis is rejected. For both tests, we wish to define $R$ as follows:
$$R = \left\{x : x > \frac{2z_{\alpha,n}}{n} - 1\right\},$$
where $z_{\alpha,n}$ is defined as the upper $\alpha$-quantile of a $B\!\left(n, \frac12 - \frac12\cos\frac{\pi}{2N}\right)$ distribution. However, binomial quantiles can only take discrete values, so we are often unable to select one exactly corresponding to $\alpha$. To rectify this, we instead set $z_{\alpha,n}$ to be the smallest integer such that $P\!\left(X > z_{\alpha,n} \,\middle|\, X \sim B\!\left(n, \frac12 - \frac12\cos\frac{\pi}{2N}\right)\right) < \alpha$, and extend $R$ to a critical decision region $R^+$: if our test statistic exactly equals $2z_{\alpha,n}/n - 1$, we reject the null hypothesis with probability $r$, where $r$ is chosen so that $P\!\left(X \in R^+ \,\middle|\, X \sim B\!\left(n, \frac12 - \frac12\cos\frac{\pi}{2N}\right)\right) = \alpha$.
Through (28), it follows that $P(\bar{C} \in R^+ \mid H_0) = P(\bar{O} \in R^+ \mid H_0) = \alpha$, as $\bar{C}$ and $\bar{O}$ are identically distributed under the null hypothesis.
The power functions $\pi_{BC}$ and $\pi_{RM}$ for each test describe the probability the null hypothesis is rejected given the density matrix of the state being tested, and are defined using (27)–(30) as follows:
$$\pi_{BC}(\rho_{AB}) = P(X \in R^+ \mid X \sim B(n, p)), \qquad \pi_{RM}(\rho_{AB}) = P(X \in R^+ \mid X \sim B(n, q)).$$
It is clear that
$$\pi_{BC}(\rho_{AB}) > \pi_{RM}(\rho_{AB}) \iff p > q \iff E[\hat{C}] > E[\hat{O}],$$
so whether (i) or (ii) is better at detecting non-singlet states in a given scenario can be determined by comparing the values of E [ C ^ ] and E [ O ^ ] associated with the testing of typical states arising from that scenario.
For large $n$, the asymptotic power functions are described by the central limit theorem. If $\Phi$ is the cumulative distribution function of a $N(0, 1)$ distribution, then, as $n \to \infty$, the following holds:
$$\pi_{BC}(\rho_{AB}) \to 1 - \Phi\!\left(\frac{\tilde{z}_{\alpha,n} + 0.5 - np}{\sqrt{np(1-p)}}\right), \qquad \pi_{RM}(\rho_{AB}) \to 1 - \Phi\!\left(\frac{\tilde{z}_{\alpha,n} + 0.5 - nq}{\sqrt{nq(1-q)}}\right),$$
where the '$+0.5$' terms are the appropriate continuity correction for a discrete distribution and $\tilde{z}_{\alpha,n}$ is the upper $\alpha$-quantile of a $N(np_0, np_0(1-p_0))$ distribution, with $p_0 = \frac12 - \frac12\cos\frac{\pi}{2N}$.
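As a concrete sketch of the randomized test construction in this section (our code; function names are ours), the following computes the critical value $z_{\alpha,n}$, the boundary rejection probability, and the resulting exact power function for shifted-binomial samples.

```python
import math

def binom_sf(n, p, k):
    """P(X > k) for X ~ B(n, p)."""
    return sum(math.comb(n, i)*p**i*(1 - p)**(n - i) for i in range(k + 1, n + 1))

def randomized_test(n, N, alpha):
    """Critical value z and boundary rejection probability r giving the test
    exact size alpha under the singlet null, p0 = 1/2 - cos(pi/2N)/2."""
    p0 = 0.5 - 0.5*math.cos(math.pi/(2*N))
    z = 0
    while binom_sf(n, p0, z) >= alpha:   # smallest z with P(X > z) < alpha
        z += 1
    pmf_z = math.comb(n, z)*p0**z*(1 - p0)**(n - z)
    r = (alpha - binom_sf(n, p0, z))/pmf_z
    return z, r

def power(n, N, alpha, p):
    """Probability of rejecting H0 when each sample is 2*Bernoulli(p) - 1."""
    z, r = randomized_test(n, N, alpha)
    return binom_sf(n, p, z) + r*math.comb(n, z)*p**z*(1 - p)**(n - z)

n, N, alpha = 100, 2, 0.05
p0 = 0.5 - 0.5*math.cos(math.pi/(2*N))
assert abs(power(n, N, alpha, p0) - alpha) < 1e-9   # exact size at the null
assert power(n, N, alpha, p0 + 0.1) > alpha          # power grows off the null
```

The randomization at the boundary value is what allows the discrete binomial test to achieve the size $\alpha$ exactly.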

3.2. Comparison for Simple Intercept-Resend Attack

Consider the scenario in which Eve manages to intercept Bob's qubit and performs the following measurement:
$$\left\{|\alpha\rangle = \cos\psi|0\rangle + e^{i\beta}\sin\psi|1\rangle,\; |\alpha^\perp\rangle\right\},$$
before sending the post-measurement qubit on to Bob. As the singlet can be expressed (up to a global phase) as follows:
$$|\Psi^-\rangle = \frac{1}{\sqrt{2}}|\alpha\rangle|\alpha^\perp\rangle - \frac{1}{\sqrt{2}}|\alpha^\perp\rangle|\alpha\rangle,$$
it is clear that the post-measurement state will be the following:
$$\rho_{AB} = \begin{cases} |\alpha^\perp\rangle\langle\alpha^\perp| \otimes |\alpha\rangle\langle\alpha| & \text{with probability } 1/2, \\ |\alpha\rangle\langle\alpha| \otimes |\alpha^\perp\rangle\langle\alpha^\perp| & \text{with probability } 1/2. \end{cases}$$
The expected test samples are independent of Eve’s measurement outcome and are calculated in Table 1 using (10) and (25).
These results show that a single-qubit intercept–resend attack reduces $E[\hat{O}]$ to $1/3$ of its singlet value, and reduces $E[\hat{C}]$ to between $1/4$ and $1/2$ of its singlet value, depending on the measurement made by Eve.
Eve will choose $(\psi, \beta)$ to achieve a desired balance of minimal disruption and maximal information gain; hence, her choice will depend on the parent protocol within which Alice and Bob intended to use the singlet. For example, if BB84 is the parent protocol, it is known [21] that the Breidbart basis ($\beta = 0$, $\psi = \pi/8$) is optimal for Eve; hence, $E[\hat{O}] > E[\hat{C}]$ and the random measurement test has greater power. More generally, if Eve's priority is to choose a basis which minimises disruption, the random measurement test will be more powerful (see Figure 1).
It is clear that choosing $N$ to be as large as possible will maximise the difference between the expected correlations under the null and alternative hypotheses, leading to a more powerful test for both schemes. This implies that choosing $\theta = 0$ is optimal for the random measurement test in this scenario; hence, it is optimal for Alice and Bob to use the same randomly chosen measurements if they know they are distinguishing a singlet from a post-measurement state. However, it is known [22] (see Section 3.4) that for $\theta = 0$, the test does not distinguish the singlet from a class of simple LHV models.
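The comparison in this section can be reproduced from the closed forms (10) and (25). The sketch below (ours; names are ours) computes both expectations for the post-measurement mixture left by Eve's intercept–resend measurement, and checks the Breidbart-basis case.

```python
import math, cmath

def intercept_resend_expectations(N, psi, beta):
    """E[O^] and E[C^] from Eqs. (25) and (10) for the equal mixture of
    |a_perp><a_perp| x |a><a| and |a><a| x |a_perp><a_perp|."""
    a = [math.cos(psi), cmath.exp(1j*beta)*math.sin(psi)]
    ap = [math.sin(psi), -cmath.exp(1j*beta)*math.cos(psi)]
    phi_plus = [1/math.sqrt(2), 0, 0, 1/math.sqrt(2)]    # real coefficients
    psi_minus = [0, 1/math.sqrt(2), -1/math.sqrt(2), 0]
    def overlap2(bell, prod):  # |<bell|prod>|^2 (bell has real coefficients)
        return abs(sum(bell[k]*prod[k] for k in range(4)))**2
    v1 = [ap[i]*a[j] for i in range(2) for j in range(2)]
    v2 = [a[i]*ap[j] for i in range(2) for j in range(2)]
    f_phi = 0.5*overlap2(phi_plus, v1) + 0.5*overlap2(phi_plus, v2)
    f_psi = 0.5*overlap2(psi_minus, v1) + 0.5*overlap2(psi_minus, v2)
    cN = math.cos(math.pi/(2*N))
    return cN*(1/3 - 4/3*f_psi), cN*(f_phi - f_psi)

EO, EC = intercept_resend_expectations(2, math.pi/8, 0.0)   # Breidbart basis
assert abs(EO + math.cos(math.pi/4)/3) < 1e-9   # E[O^] = -cos(pi/2N)/3 always
assert EO > EC                                   # RM test more powerful here
```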

3.3. Comparison for Bipartite State Transformation Attack

Consider the scenario in which Eve intercepts both qubits and manipulates them so that the singlet is transformed into some other state $\rho_{AB}$, with the following singlet fidelity:
$$\langle\Psi^-|\rho_{AB}|\Psi^-\rangle = 1 - \epsilon.$$
While we permit any $\epsilon \in [0, 1]$, we are particularly interested in small values. The expected test samples are calculated in Table 2 using (10) and (25).
The results show that $E[\hat{O}]$ increases with $\epsilon$ at linear rate $\frac43\cos(\pi/2N)$, while $E[\hat{C}]$ increases with $\epsilon + \langle\Phi^+|\rho_{AB}|\Phi^+\rangle$ at linear rate $\cos(\pi/2N)$; thus, the test of greater power can be identified by comparing the values of $\langle\Phi^+|\rho_{AB}|\Phi^+\rangle$ and $\epsilon$. Note that the orthogonality of Bell states imposes the constraint $0 \le \langle\Phi^+|\rho_{AB}|\Phi^+\rangle \le \epsilon$.
When $0 \le \langle\Phi^+|\rho_{AB}|\Phi^+\rangle < \epsilon/3$, we have $E[\hat{O}] > E[\hat{C}]$; when $\epsilon/3 < \langle\Phi^+|\rho_{AB}|\Phi^+\rangle \le \epsilon$, we have $E[\hat{C}] > E[\hat{O}]$; while for $\langle\Phi^+|\rho_{AB}|\Phi^+\rangle = \epsilon/3$, both tests are equally strong.
As an example, in a scenario where Eve prioritises being as undetectable as possible for a given $\epsilon$, she would choose a transformation with $\langle\Phi^+|\rho_{AB}|\Phi^+\rangle = 0$, so the random measurement test would be superior in this case (see Figure 2).
One way to overcome the uncertainty in the value of $\langle\Phi^+|\rho_{AB}|\Phi^+\rangle$ is to require Alice and Bob to apply the same randomly chosen unitary operation $U$ to both of their qubits before measurement, without remembering the identity of $U$. This effectively transforms their shared system to a mixed state of the following form:
$$(1 - \epsilon)|\Psi^-\rangle\langle\Psi^-| + \frac{\epsilon}{3}\left(I - |\Psi^-\rangle\langle\Psi^-|\right),$$
as the singlet component remains invariant under a $U \otimes U$ transformation, while the complement becomes maximally mixed. This ensures that both tests are equally strong when testing the resulting state, as $\langle\Phi^+|\rho_{AB}|\Phi^+\rangle = \epsilon/3$.
The largest possible choice of parameter N leads to the test of greater power for each type of scheme, much as it did in Section 3.2.

3.4. Comparison for LHV Replacement Attack

Consider the scenario in which Eve intercepts both qubits and replaces them with a not necessarily quantum system where the correlation between Alice and Bob is governed entirely by a local hidden variable (LHV) theory. Using the Braunstein–Caves inequality [13], the expected value of the Braunstein–Caves sample is bounded for integers $N \ge 2$ as
$$|E[\hat{C}]| \le 1 - \frac1N < \cos\frac{\pi}{2N},$$
for measurements of a two-sided LHV system. It is also known (Theorem 1 in [20]) that the expected value of the random measurement sample for $\theta = \pi/2N$ is bounded for integers $N \ge 2$ as follows:
$$|E[\hat{O}]| \le 1 - \frac1N < \cos\frac{\pi}{2N},$$
for measurements of a two-sided LHV system. The strictly positive difference between correlations resulting from LHV models and singlets implies that both tests can detect when the correlation between Alice and Bob’s measurement outcomes is caused by an LHV theory, with a power that is uniformly bounded for all possible LHV theories.
For $N \ge 2$, the optimal parameters for both types of scheme are found by selecting the value of $N$ that maximises the difference between expected singlet correlations and the bound on LHV correlations. This difference is defined in (39) and (40) as follows:
$$D(N) = \cos\frac{\pi}{2N} + \frac1N - 1.$$
As $D(2) \approx 0.207$, $D(3) \approx 0.199$ and $D'(N) < 0$ for $N \ge 3$, it follows that $D(N)$ is maximised by $N = 2$ over integer inputs greater than 1, providing an optimal minimum bound on test power for both schemes.
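These numbers are easy to reproduce (our sketch):

```python
import math

def D(N):
    """Gap between the singlet correlation magnitude cos(pi/2N)
    and the LHV bound 1 - 1/N."""
    return math.cos(math.pi/(2*N)) + 1/N - 1

assert round(D(2), 3) == 0.207
assert round(D(3), 3) == 0.199
assert max(range(2, 100), key=D) == 2   # N = 2 maximises the guaranteed gap
```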
This result does not identify which value of $\theta$ leads to the random measurement test with the greatest power for detecting LHV models, as it is possible to use any $\theta \in [0, \pi/2]$, not just the discrete selection considered above, and the gap is not generally given by (41). This question was explored further in [23] and resolved numerically in [24]. The optimal value for detecting general LHV models that are optimised to simulate the singlet is $\theta = \pi/4$; the optimal value for detecting general LHV models that are optimised to simulate the singlet, with the constraint that they provide perfect anticorrelations for measurements about the same axis, is $\theta = \pi/5$.
It is also interesting to compare the optimal value of $\theta$ for detecting the LHVs given by Bell's original model [22], which is defined such that Alice's measurement on one hemisphere of the Bloch sphere leads to outcome $+1$ and the other leads to outcome $-1$, with Bob's measurement providing opposite values on the same hemispheres. For this model, it is easy to verify that
$$E[\hat{O}] = -1 + \frac{2\theta}{\pi},$$
with the difference between this expected correlation and that for the singlet being
$$\tilde{D}(\theta) = \cos\theta + \frac{2\theta}{\pi} - 1.$$
This quantity is maximised by $\theta = \arcsin(2/\pi) \approx 0.6901$, leading to $\tilde{D}(\arcsin(2/\pi)) \approx 0.2105$. Hence, the random measurement test with this parameter has the greatest power for detecting this class of LHV models (see Figure 3). For comparison, $\tilde{D}(\pi/4) \approx 0.2071$.
Since $\tilde{D}(\theta) > 0$ for all $\theta \in (0, \pi/2)$, a test with any $\theta$ in this range would detect Bell's LHV models with some efficiency.
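A numeric confirmation of this optimisation (our sketch):

```python
import math

def D_tilde(theta):
    """Gap for Bell's original LHV model: cos(theta) + 2*theta/pi - 1."""
    return math.cos(theta) + 2*theta/math.pi - 1

theta_star = math.asin(2/math.pi)   # stationary point: sin(theta) = 2/pi
assert abs(theta_star - 0.6901) < 1e-4
assert abs(D_tilde(theta_star) - 0.2105) < 1e-4
assert abs(D_tilde(math.pi/4) - 0.2071) < 1e-4
# theta_star beats a grid of alternatives across (0, pi/2)
grid = [k*math.pi/2/1000 for k in range(1, 1000)]
assert all(D_tilde(theta_star) >= D_tilde(t) for t in grid)
```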

3.5. Comments on LHV Model Testing with Measurement Errors

When Alice and Bob program their measurement devices during a test, there is a possibility they incur small calibration errors. These could be realised as small deviations in their measurement angles on the Bloch sphere. We fix δ > 0 as a bound on the magnitude of a deviation in any single measurement for both Alice and Bob.
We examine the effect of such errors on the random measurement and Braunstein–Caves schemes.

3.5.1. Random Measurement Scheme

Theorem 1 of Ref. [20] provides a bound on the expected value of a random measurement sample from an LHV model in this error regime. The theorem equivalently states that for any LHV model, any integer $N \ge 2$ and any $\theta \in \left[\frac{\pi}{2N}, \frac{\pi}{2(N-1)}\right)$, we have $-1 + \frac1N \le E[\hat{O}] \le 1 - \frac1N$.
For $\delta < \frac{\pi}{8N(N-1)}$, it follows that the expected value of a random measurement sample from any LHV model with chosen angle $\theta = \frac{\pi}{2N} + 2\delta$ satisfies
$$|E[\hat{O}]| \le 1 - \frac1N.$$
In this setting, the greatest assured difference between the expected correlation for a singlet and that for an LHV model over all possible $\delta$-bounded errors is
$$D(N) = \cos\left(\frac{\pi}{2N} + 4\delta\right) - 1 + \frac1N.$$
$D(N)$ is positive when $\delta < \frac14\left(\arccos\left(1 - \frac1N\right) - \frac{\pi}{2N}\right)$, implying that the random measurement test can distinguish between singlet and LHV models in the presence of $\delta$-bounded measurement errors when $\delta < \min\left\{\frac{\pi}{8N(N-1)}, \frac14\left(\arccos\left(1 - \frac1N\right) - \frac{\pi}{2N}\right)\right\}$.
For $N \ge 3$, this required bound on $\delta$ converges monotonically to 0 as $N$ increases. This implies that the scheme can only reliably tolerate a smaller range of absolute measurement errors when $N$ is large, suggesting that schemes with reasonably large $\theta$ may be more robust.

3.5.2. Braunstein–Caves Scheme

The expected value of a Braunstein–Caves sample from an LHV model in this error regime is still bounded as
$$|E[\hat{C}]| \le 1 - \frac1N,$$
as the Braunstein–Caves inequality holds independently of Alice and Bob's measurement choices.
In this setting, the expected correlation for a singlet over all $\delta$-bounded measurement errors can be calculated using (6) and (9) by shifting the usual Braunstein–Caves measurement angles for Alice by $\epsilon_A$ and likewise for Bob by $\epsilon_B$, where both $\epsilon_A$ and $\epsilon_B$ represent $\delta$-bounded errors, leading to the following:
$$E[\hat{C}] = -\cos\frac{\pi}{2N}\cos(2(\epsilon_A - \epsilon_B)).$$
This implies that the greatest assured difference between the expected correlation for a singlet and that of an LHV model over all possible $\delta$-bounded errors is
$$D(N) = \cos\frac{\pi}{2N}\cos(4\delta) - 1 + \frac1N.$$
$D(N)$ is positive when $\delta < \frac14\left(\arccos\left(1 - \frac1N\right) - \frac{\pi}{2N}\right)$, implying that, under this condition, the Braunstein–Caves test can distinguish between singlet and LHV models in the presence of $\delta$-bounded measurement errors.
For large $N$, this required bound on $\delta$ converges to 0. This implies that the scheme can only reliably tolerate a smaller range of absolute measurement errors when $N$ is large, again suggesting that schemes with a small $N$ may be more robust.
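The tolerable-error bounds for the two schemes can be tabulated as follows (our sketch; note that while both bounds tend to 0 for large $N$, the Braunstein–Caves positivity bound is not monotone at small $N$):

```python
import math

def delta_max_rm(N):
    """Tolerable delta for the random measurement scheme: the minimum of the
    interval condition pi/8N(N-1) and the positivity condition."""
    return min(math.pi/(8*N*(N - 1)),
               (math.acos(1 - 1/N) - math.pi/(2*N))/4)

def delta_max_bc(N):
    """Positivity condition for the Braunstein-Caves scheme."""
    return (math.acos(1 - 1/N) - math.pi/(2*N))/4

for N in range(3, 30):
    assert delta_max_rm(N + 1) < delta_max_rm(N)   # RM tolerance shrinks for N >= 3
assert delta_max_bc(10_000) < 0.01                 # BC tolerance tends to 0
```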

3.5.3. Conclusions

In summary, it is shown that both schemes are still able to distinguish between singlet and LHV models in the presence of small deviations in the intended measurement angle. As N becomes large, we become less sure of the robustness of each scheme, as the proven range of tolerable measurement errors decreases.

3.6. Comparison for Noisy Quantum Channel

Consider the scenario in which Eve takes no action, but the quantum channel used for state transmission to Alice and Bob is affected by noise. Different quantum channels are afflicted with different types of noise; however, as a simple example, we can consider a depolarising channel that replaces the singlet with the maximally mixed state with probability δ .
The effect of this noise on a singlet can be modelled using two-qubit Werner states, with these being the only set of states that is invariant under arbitrary unitary transformations acting equally on both qubits [25].
The two-qubit Werner state can be defined as follows:
W_δ = (1 − δ) |Ψ⟩⟨Ψ| + (δ/4) I,
where δ parametrises the strength of the noise, with δ = 0 corresponding to a pure singlet state in the absence of noise.
The expected test samples are calculated in Table 3 using (10) and (25).
Hence, both tests are equally powerful in testing for depolarising noise. As in Section 3.2 and Section 3.3, a larger value of N leads to a test of greater power, so the choice of a large N and θ = 0 would be optimal.
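The (1 − δ) factor in Table 3 can be recovered numerically. This is a sketch under two assumptions not spelled out here: the singlet convention |Ψ⟩ = (|01⟩ − |10⟩)/√2, and the standard relation between singlet fidelity F and the rotationally averaged anticorrelation scale, (4F − 1)/3.

```python
import numpy as np

# Singlet |Psi> = (|01> - |10>)/sqrt(2), in the basis |00>,|01>,|10>,|11>
# (assumed convention).
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def werner(delta):
    """Two-qubit Werner state: singlet mixed with white noise of strength delta."""
    return (1 - delta) * np.outer(psi, psi) + (delta / 4) * np.eye(4)

delta = 0.2
W = werner(delta)
F = float(psi @ W @ psi)     # singlet fidelity: (1 - delta) + delta/4
scale = (4 * F - 1) / 3      # factor multiplying cos(pi/2N), up to float rounding
print(F, scale)              # 0.85 and ~0.8 (= 1 - delta)
```

The computed scale equals 1 − δ, matching the Werner-state rows of Table 3 for both tests.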
As an additional example, we can consider the effect of a simple dephasing channel that applies the Pauli Z gate to each qubit independently with probability p. The effect of this noise on a singlet can be described as follows:
Δ_p = (1 − p)² |Ψ⟩⟨Ψ| + p(1 − p) (I ⊗ Z)|Ψ⟩⟨Ψ|(I ⊗ Z) + p(1 − p) (Z ⊗ I)|Ψ⟩⟨Ψ|(Z ⊗ I) + p² (Z ⊗ Z)|Ψ⟩⟨Ψ|(Z ⊗ Z) = ((1 − p)² + p²) |Ψ⟩⟨Ψ| + 2p(1 − p) |Ψ⁺⟩⟨Ψ⁺|,
where we restrict 0 < p < 1 .
The expected test samples are calculated in Table 4 using (10) and (25).
It is clear that the random measurement test has greater power in testing for this type of dephasing for any 0 < p < 1 . Just as in Section 3.2 and Section 3.3, a larger value of N leads to a test of greater power, so a choice of large N and θ = 0 would be optimal.
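The Bell-basis weights of the dephased state can be verified directly. This sketch assumes the same singlet convention as above; `dephase` is our own helper implementing independent single-qubit phase flips.

```python
import numpy as np

Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # singlet (assumed)
psi_plus = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)    # |Psi+>

def dephase(rho, p):
    """Each qubit independently suffers a phase flip Z with probability p."""
    unitaries = [np.kron(I2, I2), np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)]
    weights = [(1 - p) ** 2, p * (1 - p), p * (1 - p), p ** 2]
    return sum(w * U @ rho @ U for w, U in zip(weights, unitaries))

p = 0.1
rho = dephase(np.outer(psi_minus, psi_minus), p)
print(float(psi_minus @ rho @ psi_minus))  # (1-p)^2 + p^2 = 0.82
print(float(psi_plus @ rho @ psi_plus))    # 2p(1-p)       = 0.18
```

The two overlaps reproduce the weights (1 − p)² + p² and 2p(1 − p) in the decomposition above, and the resulting singlet fidelity gives the (1 − (8/3)p(1 − p)) factor in Table 4 via (4F − 1)/3.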

4. Discussion

While there is no universally superior choice of singlet test, we have seen that the random measurement test is theoretically superior or equal in many natural scenarios: detecting intercept–resend or transformation attacks in which Eve prioritises minimising her chance of detection, distinguishing LHV models, and detecting rotationally invariant noise.
These results provide a rationale for considering the random measurement test for singlet verification over more conventional CHSH schemes (e.g., [14,15,16,17]). A complete analysis would consider the full range of attacks open to Eve and the full range of tests available to Alice and Bob. This would define a two-party game (with Alice and Bob collaborating as one party and Eve as the other), in which the optimal strategy for each party is likely probabilistic. However, Eve’s actions may be limited depending on how the singlets are generated and distributed and on the technologies available to her. Also, Alice and Bob may be able to exclude non-quantum LHV attacks if they can test qubits before measurement to ensure they are in the appropriate physical state.
Our discussion has mainly focussed on the ideal case, in which Alice and Bob can carry out perfectly precise measurements. Establishing that random measurement tests have an advantage in this case shows they are potentially valuable options, and motivates the development of technology that can implement them more easily and precisely. However, at present, imprecisions need to be taken into account when assessing the relative feasibility, advantages and costs of all the considered tests. For example, the Braunstein–Caves test only requires the calibration of measurement devices in a finite number ( 2 N ) of orientations around a great circle on the Bloch sphere, while the random measurement test requires the ability to measure all possible orientations. The Braunstein–Caves test may thus be a more desirable choice if calibrating detectors or, equivalently, if manipulating qubits precisely is difficult. An analysis of the feasibility of carrying out random measurement tests with current or foreseeable future technology—a task for future work—would illuminate these tradeoffs.
In principle, the random measurement protocol can be implemented in various ways, each of which requires some resources. One option is for Alice and Bob to pre-coordinate their measurements. This requires secure classical communication and/or secure classical memory, albeit not necessarily a large amount. For example, if Alice and Bob choose from a pre-agreed list of 10 6 approximately uniformly distributed axes on the Bloch sphere, they can specify a measurement pair with about 40 bits, choosing pairs separated by the chosen θ to within error 3 × 10 4 . Consuming secure classical communication and/or memory at this rate is not hugely demanding, and may be a reasonable option in many quantum cryptographic and communication scenarios. However, relatively precise pre-coordinated measurements effectively define (if pre-agreed) or consume (if securely communicated) large amounts of a shared secret key. Singlet verification may be required for only a small fraction of the shared singlets. Still, the advantage is, at best, context-dependent in protocols that aim to generate one-time pads.
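The cost estimate above (~40 bits per pre-coordinated pair) and the pair-selection step can be sketched as follows. The helper names `random_axis` and `partner_axis` are ours, not the paper's; the second helper picks a uniformly random Bloch axis at an exact angle θ from the first, whereas a finite pre-agreed list would only achieve this to within some tolerance.

```python
import math
import random

n_axes = 10 ** 6
bits_per_pair = 2 * math.ceil(math.log2(n_axes))  # 2 axes x 20 bits = 40 bits

def random_axis(rng=random):
    """Uniform random unit vector on the Bloch sphere."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(max(0.0, 1.0 - z * z))
    return (s * math.cos(phi), s * math.sin(phi), z)

def partner_axis(a, theta, rng=random):
    """A uniformly random axis at angle theta from axis a."""
    ax, ay, az = a
    # an arbitrary unit vector orthogonal to a
    u = (-ay, ax, 0.0) if abs(az) < 0.9 else (0.0, -az, ay)
    n = math.sqrt(sum(c * c for c in u))
    u = tuple(c / n for c in u)
    v = (ay * u[2] - az * u[1], az * u[0] - ax * u[2], ax * u[1] - ay * u[0])
    chi = rng.uniform(0.0, 2.0 * math.pi)
    return tuple(math.cos(theta) * a[i]
                 + math.sin(theta) * (math.cos(chi) * u[i] + math.sin(chi) * v[i])
                 for i in range(3))

a = random_axis()
b = partner_axis(a, math.pi / 4)
print(bits_per_pair)                               # 40
print(round(sum(x * y for x, y in zip(a, b)), 6))  # ~0.707107 = cos(pi/4)
```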
An alternative, if Bob has short-term quantum memory, is for Alice to communicate her measurement choice after Bob receives and stores his qubit. Each can then define their measurement choice using locally generated or stored random bits, and Bob can delay his measurement choice until he receives Alice’s, with no additional security risk.
Another possible option is for Alice and Bob to choose measurements randomly and independently, and then sort their results into approximately θ -separated pairs post-measurement for some discrete set of θ in the range [ 0 , π / 2 ] . This effectively means carrying out random measurement tests for each θ in the chosen set, up to some chosen finite precision. This protocol effectively uses a random variable θ , and further analysis is needed to characterise its efficiency. The Braunstein–Caves protocol can be similarly adapted to avoid pre-coordination if Alice and Bob each independently choose measurements from set (2) and then sort their results into pairs that correspond to complete elements of (2). For a test with parameter N, they would, on average, retain a fraction 2 / N of their samples. If the remainder are discarded, this requires them to multiply their initial sample size by N / 2 to compensate. However, some of the discarded data could be used for further Braunstein–Caves tests if N is factorisable. Other anti-correlation tests could, in principle, be carried out on the remainder (although the finite precision loophole for measurements on the circle [23] needs to be allowed for). In the N = 2 (CHSH) case, there is no loss of efficiency, as all choices by Alice and Bob would correspond to an allowed pair.
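The retained fraction 2/N for the uncoordinated Braunstein–Caves variant can be checked by simulation. This sketch assumes an interleaved labelling of set (2) in which Bob's angle j lies between Alice's angles j and j + 1 (mod N), so each of Alice's settings is adjacent to exactly two of Bob's; `retained_fraction` is our own helper name.

```python
import random

def retained_fraction(N, trials=200_000, rng=random):
    """Monte Carlo estimate of the fraction of independently chosen
    Braunstein-Caves settings that form an allowed (adjacent) pair."""
    kept = 0
    for _ in range(trials):
        i = rng.randrange(N)            # Alice's setting index
        j = rng.randrange(N)            # Bob's setting index
        if j == i or (j + 1) % N == i:  # Bob's angle j is adjacent to Alice's i
            kept += 1
    return kept / trials

print(round(retained_fraction(4), 2))  # ~0.5, i.e. 2/N for N = 4
print(retained_fraction(2))            # 1.0: every CHSH pair is allowed
```

The N = 2 case keeps every sample, matching the observation that CHSH suffers no loss of efficiency.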
Larger values of N provide more powerful tests for detecting bipartite state transformation attacks and rotationally invariant noise, while the smallest possible N is optimal for detecting LHV correlations. Alice and Bob should thus either choose N according to which type of attack is most likely or—if they are in the type of game-theoretic scenario discussed above—act against the potential use of any of the attacks by employing a probabilistic strategy that mixes different values of N.
In the case N = 2, there is a natural sense in which the random measurement test is at least as good as, or better than, the Braunstein–Caves test in every scenario. In Section 3.2, Eve’s goal could be to carry out an intercept–resend attack with minimum probability of detection (i.e., ψ = 0 or β = 0), in which case the random measurement test is more powerful. In Section 3.3, Eve’s goal could be to carry out a state replacement that achieves fidelity 1 − ϵ with minimum probability of detection (i.e., ⟨Φ⁺|ρ_AB|Φ⁺⟩ = 0), in which case the random measurement test is again more powerful. In Section 3.4 and Section 3.6, both tests are equally good for all variations. In Section 3.5, it is shown that both tests are still effective in the presence of small measurement calibration errors.
Our results thus make a clear case for the consideration of random measurement tests, and add motivation to continue work [23,24] focused on identifying their power for the full range of θ ( 0 , π / 3 ) .
Random measurement tests are, at present, technologically challenging. More work is also needed to characterise their robustness in real-world applications where finite precision is inevitable, with various plausible error models, and where there may be a wide range of plausible adversarial attacks. For example, Eve might employ a mixture of the attacks discussed above, choosing different attacks randomly for different singlets, and/or combinations of these attacks on each singlet. That said, our results suggest that random measurement tests should be considered, as and when the technology allows, in scenarios where efficient singlet testing is critical and the costs of classical and/or quantum memory resources are relatively negligible. The optimal testing strategies against general attacks likely also involve random mixtures of tests. It would thus also be very interesting to explore the advantages of random measurement tests in more sophisticated testing strategies, such as mixtures of random measurements with different angles [24] and routed singlet tests [26] using random measurements.

Author Contributions

Conceptualization, A.K.; methodology, G.C. and A.K.; software, G.C.; validation, G.C. and A.K.; formal analysis, G.C. and A.K.; investigation, G.C. and A.K.; writing—original draft preparation, G.C. and A.K.; writing—review and editing, G.C. and A.K.; visualization, G.C.; supervision, A.K.; project administration, A.K.; funding acquisition, A.K. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the financial support from the UK Quantum Communications Hub grant no. EP/T001011/1 and from UK-Canada Quantum for Science research collaboration grant OPP640. G.C. was supported by a studentship from the Engineering and Physical Sciences Research Council. A.K. is partially supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bennett, C.H.; Brassard, G.; Crépeau, C.; Jozsa, R.; Peres, A.; Wootters, W.K. Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels. Phys. Rev. Lett. 1993, 70, 1895–1899.
  2. Kent, A. Quantum tasks in Minkowski space. Class. Quantum Gravity 2012, 29, 224013.
  3. Kent, A. A no-summoning theorem in relativistic quantum theory. Quantum Inf. Process. 2013, 12, 1023–1032.
  4. Hayden, P.; May, A. Summoning information in spacetime, or where and when can a qubit be? J. Phys. A Math. Theor. 2016, 49, 175304.
  5. Adlam, E.; Kent, A. Quantum paradox of choice: More freedom makes summoning a quantum state harder. Phys. Rev. A 2016, 93, 062327.
  6. Vaidman, L. Instantaneous measurement of nonlocal variables. Phys. Rev. Lett. 2003, 90, 010402.
  7. Ekert, A.K. Quantum cryptography based on Bell’s theorem. Phys. Rev. Lett. 1991, 67, 661–663.
  8. Kent, A.; Munro, W.J.; Spiller, T.P. Quantum tagging: Authenticating location via quantum information and relativistic signaling constraints. Phys. Rev. A 2011, 84, 012326.
  9. Kent, A. Quantum tagging for tags containing secret classical data. Phys. Rev. A 2011, 84, 022335.
  10. Buhrman, H.; Chandran, N.; Fehr, S.; Gelles, R.; Goyal, V.; Ostrovsky, R.; Schaffner, C. Position-based quantum cryptography: Impossibility and constructions. SIAM J. Comput. 2014, 43, 150–178.
  11. Kent, A. Unconditionally secure bit commitment with flying qudits. New J. Phys. 2011, 13, 113015.
  12. Clauser, J.F.; Horne, M.A.; Shimony, A.; Holt, R.A. Proposed Experiment to Test Local Hidden-Variable Theories. Phys. Rev. Lett. 1969, 23, 880–884.
  13. Braunstein, S.L.; Caves, C.M. Wringing out better Bell inequalities. Ann. Phys. 1990, 202, 22–56.
  14. McKague, M.; Yang, T.H.; Scarani, V. Robust Self Testing of the Singlet. J. Phys. A Math. Theor. 2012, 45, 455304.
  15. Valcarce, X.; Sekatski, P.; Orsucci, D.; Oudot, E.; Bancal, J.D.; Sangouard, N. What is the minimum CHSH score certifying that a state resembles the singlet? Quantum 2019, 4, 246.
  16. Bancal, J.D.; Navascués, M.; Scarani, V.; Vértesi, T.; Yang, T.H. Physical characterization of quantum devices from nonlocal correlations. Phys. Rev. A 2015, 91, 022115.
  17. Yang, T.H.; Vértesi, T.; Bancal, J.D.; Scarani, V.; Navascués, M. Robust and Versatile Black-Box Certification of Quantum Devices. Phys. Rev. Lett. 2014, 113, 040401.
  18. Bennett, C.H.; Brassard, G.; Popescu, S.; Schumacher, B.; Smolin, J.A.; Wootters, W.K. Purification of noisy entanglement and faithful teleportation via noisy channels. Phys. Rev. Lett. 1996, 76, 722.
  19. Wehner, S. Tsirelson bounds for generalized Clauser-Horne-Shimony-Holt inequalities. Phys. Rev. A 2006, 73, 022110.
  20. Kent, A.; Pitalúa-García, D. Bloch sphere colourings and Bell inequalities. Phys. Rev. A 2013, 90, 062124.
  21. Dan, L.; Chang-xing, P.; Dong-xiao, Q.; Bao-bin, H.; Nan, Z. A new attack strategy for BB84 protocol based on Breidbart basis. In Proceedings of the 2009 Fourth International Conference on Communications and Networking in China, Xi’an, China, 26–28 August 2009.
  22. Bell, J.S. On the Einstein Podolsky Rosen paradox. Physics 1964, 1, 195–200.
  23. Chistikov, D.; Goulko, O.; Kent, A.; Paterson, M. Globe-hopping. Proc. R. Soc. A 2020, 476, 2020038.
  24. Llamas, D.; Chistikov, D.; Kent, A.; Paterson, M.; Goulko, O. Optimal Local Simulations of a Quantum Singlet. arXiv 2025, arXiv:2504.20953.
  25. Werner, R.F. Quantum states with Einstein-Podolsky-Rosen correlations admitting a hidden-variable model. Phys. Rev. A 1989, 40, 4277–4281.
  26. Le Roy-Deloison, T.; Lobo, E.P.; Pauwels, J.; Pironio, S. Device-Independent Quantum Key Distribution Based on Routed Bell Tests. PRX Quantum 2025, 6, 020311.
Figure 1. Results when Eve measures singlets in the computational basis and transmits the outcome states (see Section 3.2). Test parameters are N = 2 and θ = π / 4 . Curves are theoretical asymptotic power functions (33). Dots are empirical data, representing the proportion of 400 simulated tests leading to a rejection of the null hypothesis at the 1 % level.
Figure 2. Results when Eve transforms singlets to states with ⟨Ψ|ρ_AB|Ψ⟩ = 0.9 and ⟨Φ⁺|ρ_AB|Φ⁺⟩ = 0, choosing the latter to make the transformation as undetectable as possible with the BC test for the given fidelity (see Section 3.3). Test parameters are N = 2 and θ = π/4. Curves are theoretical asymptotic power functions (33). Dots are empirical data, representing the proportion of 400 simulated tests leading to a rejection of the null hypothesis at the 1% level.
Figure 3. Results when Eve replaces singlets with Bell’s LHV model. Test parameters are N = 2 and θ = arcsin ( 2 / π ) , the optimal choice for the random measurement test in this case (see Section 3.4). Curves are theoretical asymptotic power functions (33). Dots are empirical data, representing the proportion of 400 simulated tests leading to a rejection of the null hypothesis at the 1 % level.
Table 1. Comparison of expectation values for post-measurement state and singlet, with the BC test using parameter N and the RM test using θ = π/2N.

          ρ_AB                                            |Ψ⟩
E[Ĉ]     (1/2 + (1/4) sin²(2ψ) sin²(2β)) cos(π/2N)       cos(π/2N)
E[Ô]     (1/3) cos(π/2N)                                  cos(π/2N)
Table 2. Comparison of expectation values for post-transformation state and singlet, with the BC test using parameter N and the RM test using θ = π/2N.

          ρ_AB                                    |Ψ⟩⟨Ψ|
E[Ĉ]     (1 − ϵ − ⟨Φ⁺|ρ_AB|Φ⁺⟩) cos(π/2N)       cos(π/2N)
E[Ô]     (1 − (4/3)ϵ) cos(π/2N)                  cos(π/2N)
Table 3. Comparison of expectation values for the Werner state and singlet, with the BC test using parameter N and the RM test using θ = π/2N.

          W_δ                    |Ψ⟩⟨Ψ|
E[Ĉ]     (1 − δ) cos(π/2N)      cos(π/2N)
E[Ô]     (1 − δ) cos(π/2N)      cos(π/2N)
Table 4. Comparison of expectation values for the dephased state and singlet, with the BC test using parameter N and the RM test using θ = π/2N.

          Δ_p                               |Ψ⟩⟨Ψ|
E[Ĉ]     (1 − 2p(1 − p)) cos(π/2N)         cos(π/2N)
E[Ô]     (1 − (8/3)p(1 − p)) cos(π/2N)     cos(π/2N)
Note the superior performance of the random measurement test due to its sensitivity to broken rotational symmetry (in contrast to Table 3).