1. Introduction
Let
$\mathbb{X}$ be a nonempty finite set containing
n elements and
$p={\left({p}_{x}\right)}_{x\in \mathbb{X}}$ be a probability vector parameterized by some vector
$\theta =({\theta}_{1},\dots ,{\theta}_{m})\in {\mathsf{\Theta}}^{m}$ for an integer
$m\ge 2$. For instance, the set
$\mathsf{\Theta}$ can be the real interval
$[0,1]$ or the set of Hermitian semidefinite positive matrices, as is the case for the simulation of entanglement. The probability vector
p defines a random variable
X such that
$\mathbf{P}\{X=x\}\stackrel{\mathrm{def}}{=}{p}_{x}$ for
$x\in \mathbb{X}$. To sample exactly the probability vector
p means to produce an output
x such that
$\mathbf{P}\{X=x\}={p}_{x}$. The problem of sampling probability distributions has been, and continues to be, studied extensively within different random and computational models. Here, we are interested in sampling
exactly a discrete distribution whose defining parameters are distributed among
m different parties. The
${\theta}_{i}$’s for
$i\in \{1,\dots ,m\}$ are stored in
m different locations where the
ith party holds
${\theta}_{i}$. In general, any communication topology between the parties would be allowed, but, in this work, we concentrate for simplicity on a model in which we add a designated party known as the
leader, whereas the
m other parties are known as the
custodians because each of them is sole keeper of the corresponding parameter
$\theta $—hence there are
$m+1$ parties in total. The leader communicates in both directions with the custodians, who do not communicate among themselves. Allowing intercustodian communication would not improve the communication efficiency of our scheme and can, at best, halve the number of bits communicated in any protocol. However, it could dramatically improve the sampling
time in a realistic model in which each party is limited to sending and receiving a fixed number of bits at any given time step, as demonstrated in our previous work [
1] concerning a special case of the problem considered here. The communication scheme is illustrated in
Figure 1.
It may seem paradoxical that the leader can sample
exactly the probability vector
p with a
finite expected number of bits sent by the custodians, who may hold
continuous parameters that define
p. However, this counterintuitive possibility has been known to be achievable for more than a quarter-century in earlier work on the simulation of quantum entanglement by classical communication, starting with Refs. [
2,
3,
4,
5,
6,
7], continuing with Refs. [
8,
9,
10,
11,
12,
13,
14], etc. for the bipartite case and Refs. [
15,
16,
17], etc. for the multipartite case, and culminating with our own Ref. [
1].
Our protocol to sample remotely a given probability vector is presented in
Section 2. For this purpose, the von Neumann rejection algorithm [
18] is modified to produce an output
$x\in \mathbb{X}$ with exact probability
${p}_{x}$ using mere approximations of those probabilities, which are computed based on partial knowledge of the parameters transmitted on demand by the custodians to the leader. For the sake of simplicity, and to concentrate on the new techniques, we assume initially that algebraic operations on real numbers can be carried out with infinite precision and that continuous random variables can be sampled. Later, in
Section 4, we build on techniques developed in Ref. [
1] to obtain exact sampling in a realistic scenario in which all computations are performed with finite precision and the only source of randomness comes from flipping independent fair coins.
In the intervening
Section 3, we study our motivating application of remote sampling, which is the simulation of quantum entanglement using classical resources and classical communication. Readers who may not be interested in quantum information can still benefit from
Section 2 and most of
Section 4, which make no reference to quantum theory in order to explain our general remote sampling strategies. A special case of remote sampling has been used by the authors [
1], in which the aim was to sample a specific probability distribution appearing often in quantum information science, namely the
m-partite Greenberger–Horne–Zeilinger (GHZ) distribution [
19]. More generally, consider a quantum system of dimension
$d={d}_{1}\cdots {d}_{m}$ represented by a density matrix
$\rho $ known by the leader (surprisingly, the custodians have no need to know
$\rho $). Suppose that there are
m generalized measurements (
povms) acting on quantum systems of dimensions
${d}_{1},\dots ,{d}_{m}$ whose possible outcomes lie in sets
${\mathbb{X}}_{1},\dots ,{\mathbb{X}}_{m}$ of cardinality
${n}_{1},\dots ,{n}_{m}$, respectively. Each custodian knows one and only one of the
povms and nothing else about the others. The leader does not know initially any information about any of the
povms. Suppose in addition that the leader can generate independent identically distributed uniform random variables on the real interval
$[0,1]$. We show how to generate a random vector
$X=({X}_{1},\dots ,{X}_{m})\in \mathbb{X}={\mathbb{X}}_{1}\times \dots \times {\mathbb{X}}_{m}$ sampled from the exact joint probability distribution that would be obtained if each custodian
i had the
ith share of
$\rho $ (of dimension
${d}_{i}$) and measured it according to the
ith
povm, producing outcome
${x}_{i}\in {\mathbb{X}}_{i}$. This task is defined formally in
Section 3, where we prove that the total expected number of bits transmitted between the leader and the custodians using remote sampling is
$O\left({m}^{2}\right)$ provided all the
${d}_{i}$’s and
${n}_{i}$’s are bounded by some constant. The exact formula, involving
m as well as the
${d}_{i}$’s and
${n}_{i}$’s, is given as Equation (
14) in
Section 3, where
d and
n denote the product of the
${d}_{i}$’s and the
${n}_{i}$’s, respectively. In
Section 4, we obtain the same asymptotic result in the more realistic scenario in which the only source of randomness comes from independent identically distributed uniform random
bits. This result subsumes that of Ref. [
1] since all
${d}_{i}$’s and
${n}_{i}$’s are equal to 2 for projective measurements on individual qubits of the
m-partite GHZ state.
2. Remote Sampling
As explained in the Introduction, we show how to sample remotely a discrete probability vector $p={\left({p}_{x}\right)}_{x\in \mathbb{X}}$. The task of sampling is carried out by a leader ignorant of some parameters $\theta =({\theta}_{1},\dots ,{\theta}_{m})$ that enter the definition of the probability vector, where each ${\theta}_{i}$ is known by the ith custodian only, with whom the leader can communicate. We strive to minimize the amount of communication required to achieve this task.
To solve our conundrum, we modify the von Neumann rejection algorithm [
18,
20]. Before explaining those modifications, let us review the original algorithm. Let
$q={\left({q}_{x}\right)}_{x\in \mathbb{X}}$ be a probability vector that we know how to sample on the same set
$\mathbb{X}$, and let
$C\ge 1$ be such that
${p}_{x}\le C{q}_{x}$ for all
$x\in \mathbb{X}$. The classical von Neumann rejection algorithm is shown as Algorithm 1. It is well known that the expected number of times round the
repeat loop is exactly
C.
Algorithm 1 Original von Neumann rejection algorithm
1: repeat
2:  Sample X according to ${\left({q}_{x}\right)}_{x\in \mathbb{X}}$
3:  Sample U uniformly on $[0,1]$
4:  if $UC{q}_{X}\le {p}_{X}$ then
5:   return X {X is accepted}
6:  end if
7: end repeat

If only partial knowledge about the parameters defining
p is known, it would seem that the condition in line 4 cannot be decided. Nevertheless, the strategy is to build a sequence of increasingly accurate approximations that converge to the left and right sides of the test. As explained below, the number of bits transmitted depends on the number of bits needed to compute
q, and on the accuracy in
p required to accept or reject. This task can be achieved either in the
random bit model, in which only i.i.d. random bits are generated, or in the less realistic
uniform model, in which uniform continuous random variables are needed. The random bit model was originally suggested by von Neumann [
18], but only later given this name and formalized by Knuth and Yao [
21]. In this section, we concentrate for simplicity on the uniform model, leaving the more practical random bit model for
Section 4.
Definition 1. A t-bit approximation of a real number x is any $\widehat{x}$ such that $\left|x-\widehat{x}\right|\le {2}^{-t}$. A special case of t-bit approximation is the t-bit truncation $\widehat{x}=\mathrm{sign}\left(x\right)\lfloor \left|x\right|{2}^{t}\rfloor /{2}^{t}$, where $\mathrm{sign}\left(x\right)$ is equal to $+1$, 0 or $-1$ depending on the sign of x. If $\alpha =a+bi$ is a complex number, where $i=\sqrt{-1}$, then a t-bit approximation (resp. truncation) $\widehat{\alpha}$ of α is any $\widehat{a}+\widehat{b}i$, where $\widehat{a}$ and $\widehat{b}$ are t-bit approximations (resp. truncations) of a and b, respectively.
Note that we assume without loss of generality that approximations of probabilities are always constrained to be real numbers between 0 and 1, which can be enforced by snapping any out-of-bound approximation (even if it is a complex number) to the closest valid value.
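As a concrete illustration, the t-bit truncation of Definition 1 can be computed exactly over the rationals. The following is a minimal sketch (the function names are ours, not from the paper):

```python
from fractions import Fraction
import math

def t_bit_truncation(x, t):
    """t-bit truncation of Definition 1: sign(x) * floor(|x| * 2^t) / 2^t.
    Computed exactly over the rationals; the result is within 2^-t of x."""
    x = Fraction(x)
    sign = (x > 0) - (x < 0)
    return Fraction(sign * math.floor(abs(x) * 2**t), 2**t)

def snap_to_unit_interval(a):
    """Snap an out-of-bound approximation of a probability back to [0, 1]."""
    return min(Fraction(1), max(Fraction(0), Fraction(a)))
```

For example, `t_bit_truncation(Fraction(5, 7), 4)` yields $11/16$, which is within $2^{-4}$ of $5/7$.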
Consider an integer ${t}_{0}>0$ to be determined later. Our strategy is for the leader to compute the probability vector $q={\left({q}_{x}\right)}_{x\in \mathbb{X}}$ defined below, based on ${t}_{0}$-bit approximations ${p}_{x}\left({t}_{0}\right)$ of the probabilities ${p}_{x}$ for each $x\in \mathbb{X}$. For this purpose, the leader receives sufficient information from the custodians to build the entire vector q at the outset of the protocol. This makes q the “easy-to-sample” distribution required in von Neumann’s technique, which is easy not from a computational viewpoint, but in the sense that no further communication is required for the leader to sample it as many times as needed.
Let
$$C\phantom{\rule{0.166667em}{0ex}}\stackrel{\mathrm{def}}{=}\phantom{\rule{0.166667em}{0ex}}\sum_{x\in \mathbb{X}}\left({p}_{x}\left({t}_{0}\right)+{2}^{-{t}_{0}}\right) \qquad (1)$$
and
$${q}_{x}\phantom{\rule{0.166667em}{0ex}}\stackrel{\mathrm{def}}{=}\phantom{\rule{0.166667em}{0ex}}\frac{{p}_{x}\left({t}_{0}\right)+{2}^{-{t}_{0}}}{C}\,. \qquad (2)$$
Noting that ${\sum}_{x}{q}_{x}=1$, these ${q}_{x}$ define a proper probability vector $q={\left({q}_{x}\right)}_{x\in \mathbb{X}}$. Using the definition of a t-bit approximation and the definition of ${q}_{x}$ from Equation (2), we have that
$${p}_{x}\le {p}_{x}\left({t}_{0}\right)+{2}^{-{t}_{0}}=C{q}_{x}\,.$$
Taking the sum over the possible values for x and recalling that set $\mathbb{X}$ is of cardinality n,
$$C=\sum_{x\in \mathbb{X}}\left({p}_{x}\left({t}_{0}\right)+{2}^{-{t}_{0}}\right)\le \sum_{x\in \mathbb{X}}\left({p}_{x}+{2}^{1-{t}_{0}}\right)=1+n\,{2}^{1-{t}_{0}}\,. \qquad (3)$$
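Concretely, one standard construction of C and q from ${t}_{0}$-bit truncations, consistent with Equations (1) and (2) as discussed above, can be sketched in exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction
import math

def build_proposal(p, t0):
    """From t0-bit truncations p_hat of the true probabilities p, build
    C = sum_x (p_hat[x] + 2^-t0) and q_x = (p_hat[x] + 2^-t0) / C,
    so that p_x <= C * q_x for every x and sum_x q_x = 1."""
    eps = Fraction(1, 2**t0)
    p_hat = {x: Fraction(math.floor(px * 2**t0), 2**t0) for x, px in p.items()}
    C = sum(p_hat[x] + eps for x in p)
    q = {x: (p_hat[x] + eps) / C for x in p}
    return C, q
```

For instance, with $p=\{1/3,2/3\}$ and ${t}_{0}=3$, this yields $C=9/8$, each ${p}_{x}\le C{q}_{x}$, and $C\le 1+n\,{2}^{1-{t}_{0}}=3/2$.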
Consider any $x\in \mathbb{X}$ sampled according to q and U sampled uniformly in $[0,1]$ as in lines 2 and 3 of Algorithm 1. Should x be accepted because $UC{q}_{x}\le {p}_{x}$, this can be certified by any t-bit approximation ${p}_{x}\left(t\right)$ of ${p}_{x}$ such that $UC{q}_{x}\le {p}_{x}\left(t\right)-{2}^{-t}$ for some positive integer t since ${p}_{x}\left(t\right)\le {p}_{x}+{2}^{-t}$. Conversely, any integer t such that $UC{q}_{x}>{p}_{x}\left(t\right)+{2}^{-t}$ certifies that x should be rejected because it implies that $UC{q}_{x}>{p}_{x}$ since ${p}_{x}\left(t\right)\ge {p}_{x}-{2}^{-t}$. On the other hand, no decision can be made concerning $UC{q}_{x}$ versus ${p}_{x}$ if $-{2}^{-t}<UC{q}_{x}-{p}_{x}\left(t\right)\le {2}^{-t}$. It follows that one can modify Algorithm 1 above into Algorithm 2 below, in which a sufficiently precise approximation of ${p}_{x}$ suffices to make the correct decision to accept or reject an x sampled according to distribution q. A well-chosen value of ${t}_{0}$ must be input into this algorithm, as discussed later.
Algorithm 2 Modified rejection algorithm—Protocol for the leader
Input: Value of ${t}_{0}$
1: Compute ${p}_{x}\left({t}_{0}\right)$ for each $x\in \mathbb{X}$ {The leader needs information from the custodians in order to compute these approximations}
2: Compute C and $q={\left({q}_{x}\right)}_{x\in \mathbb{X}}$ as per Equations (1) and (2)
3: Sample X according to q
4: Sample U uniformly on $[0,1]$
5: for $t={t}_{0}$ to ∞ do
6:  if $UC{q}_{X}\le {p}_{X}\left(t\right)-{2}^{-t}$ then
7:   return X {X is accepted}
8:  else if $UC{q}_{X}>{p}_{X}\left(t\right)+{2}^{-t}$ then
9:   go to line 3 {X is rejected}
10:  else
11:   Continue the for loop {We cannot decide whether to accept or reject because $-{2}^{-t}<UC{q}_{X}-{p}_{X}\left(t\right)\le {2}^{-t}$; communication may be required in order for the leader to compute ${p}_{X}(t+1)$; it could be that bits previously communicated to compute ${p}_{X}\left(t\right)$ can be reused}
12:  end if
13: end for

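In the uniform model, the logic of Algorithm 2 can be sketched in Python as follows, where `approx(x, t)` stands for the leader's (communication-backed) computation of ${p}_{x}\left(t\right)$ and `random.random()` idealizes the continuous uniform U; all names are ours:

```python
import random
from fractions import Fraction

def modified_rejection(q, C, approx, t0):
    """Sketch of Algorithm 2: accept or reject a candidate X ~ q by refining
    a t-bit approximation approx(X, t) of p_X until a decision is certified."""
    xs = list(q)
    weights = [float(q[x]) for x in xs]
    while True:
        x = random.choices(xs, weights=weights)[0]   # line 3: X ~ q
        u = Fraction(random.random())                # line 4: idealized uniform U
        t = t0
        while True:
            eps = Fraction(1, 2**t)
            lhs = u * C * q[x]
            if lhs <= approx(x, t) - eps:
                return x                             # accept
            if lhs > approx(x, t) + eps:
                break                                # reject: resample X and U
            t += 1                                   # undecided: refine p_x(t)
```

Passing the exact ${p}_{x}$ as `approx` (a valid t-bit approximation for every t) recovers plain von Neumann rejection, with acceptance probability $1/C$ per round.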
Theorem 1. Algorithm 2 is correct, i.e., it terminates and returns $X=x$ with probability ${p}_{x}$. Furthermore, let T be the random variable that denotes the value of variable t upon termination of any instance of the for loop, whether the loop terminates in rejection or acceptance. Then,
$$\mathbf{E}\left(T\right)\le {t}_{0}+3\,. \qquad (4)$$
Proof. Consider any $x\in \mathbb{X}$ and $t\ge {t}_{0}$. To reach $T>t$, it must be that $-{2}^{-t}<UC{q}_{x}-{p}_{x}\left(t\right)\le {2}^{-t}$. Noting that ${q}_{x}\ne 0$ according to Equation (2), the probability that $T>t$ when $X=x$ is therefore upper-bounded as follows:
$$\mathbf{P}\{T>t\mid X=x\}\le \frac{{2}^{1-t}}{C{q}_{x}}\le {2}^{1+{t}_{0}-t}\,. \qquad (5)$$
The last inequality uses the fact that
$$C{q}_{x}={p}_{x}\left({t}_{0}\right)+{2}^{-{t}_{0}}\ge {2}^{-{t}_{0}}\,.$$
It follows that the probability that more turns round the for loop are required decreases exponentially with each new turn once $t>{t}_{0}+1$, which suffices to guarantee termination of the for loop with probability 1. Termination of the algorithm itself comes from the fact that the choice of X and U in lines 3 and 4 leads to acceptance at line 7—and therefore termination—with probability $1/C$, as demonstrated by von Neumann in the analysis of his rejection algorithm.
The fact that $X=x$ is returned with probability ${p}_{x}$ is an immediate consequence of the correctness of the von Neumann rejection algorithm since our adaptation of this method to handle the fact that only approximations of ${p}_{X}$ are available does not change the decision to accept or reject any given candidate sampled according to q.
In order to bound the expectation of T, we note that $\mathbf{P}\{T>t\mid X=x\}=1$ when $t<{t}_{0}$ since we start the for loop at $t={t}_{0}$. We can also use the vacuous $\mathbf{P}\{T>{t}_{0}\mid X=x\}\le 1$ rather than the worse-than-vacuous upper bound of 2 given by Equation (5) in the case $t={t}_{0}$. Therefore,
$$\mathbf{E}\left(T\mid X=x\right)=\sum_{t\ge 0}\mathbf{P}\{T>t\mid X=x\}\le {t}_{0}+1+\sum_{t>{t}_{0}}{2}^{1+{t}_{0}-t}={t}_{0}+3\,.$$
It remains to note that, since $\mathbf{E}\left(T\mid X=x\right)\le {t}_{0}+3$ for all $x\in \mathbb{X}$, it follows that $\mathbf{E}\left(T\right)\le {t}_{0}+3$ without condition. □
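The counting behind the bound $\mathbf{E}\left(T\right)\le {t}_{0}+3$ (${t}_{0}$ certain terms, one vacuous term at $t={t}_{0}$, plus a geometric tail summing to 2) can be checked numerically; a small sketch in exact arithmetic (names ours):

```python
from fractions import Fraction

def expected_T_bound(t0, cutoff=60):
    """Upper bound on E(T): t0 terms equal to 1 (for t < t0), one vacuous
    term at t = t0, plus the truncated geometric tail sum of 2^(1+t0-t)."""
    tail = sum(Fraction(2)**(1 + t0 - t) for t in range(t0 + 1, t0 + cutoff))
    return t0 + 1 + tail
```

The truncated tail equals $2-{2}^{-(\mathrm{cutoff}-2)}$, so the bound approaches ${t}_{0}+3$ from below as the cutoff grows.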
Let S be the random variable that represents the number of times variable X is sampled according to q at line 3, and let ${T}_{i}$ be the random variable that represents the value of variable T upon termination of the ith instance of the for loop starting at line 5, for $i\in \{1,\dots ,S\}$. The random variables ${T}_{i}$ are independent and identically distributed as the random variable T in Theorem 1 and the expected value of S is C. Let ${X}_{1},\dots ,{X}_{S}$ be the random variables chosen at successive passes at line 3, so that ${X}_{1},\dots ,{X}_{S-1}$ are rejected, whereas ${X}_{S}$ is returned as the final result of the algorithm.
To analyse the communication complexity of Algorithm 2, we introduce function
${\gamma}_{x}\left(t\right)$ for each
$x\in \mathbb{X}$ and
$t>{t}_{0}$, which denotes the
incremental number of bits that the leader must receive from the custodians in order to compute
${p}_{x}\left(t\right)$, taking account of the information that may already be available if he had previously computed
${p}_{x}(t-1)$. For completeness, we include in
${\gamma}_{x}\left(t\right)$ the cost of the communication required for the leader to request more information from the custodians. We also introduce function
$\delta \left(t\right)$ for
$t\ge 0$, which denotes the number of bits that the leader must receive from the custodians in order to compute
${p}_{x}\left(t\right)$ for all
$x\in \mathbb{X}$ in a “simultaneous” manner. Note that it could be much less expensive to compute those
n values than
n times the cost of computing any single one of them because some of the parameters held by the custodians may be relevant to more than one of the
${p}_{x}$’s. The total number of bits communicated in order to implement Algorithm 2 is therefore given by random variable
For simplicity, let us define function $\gamma \left(t\right)\phantom{\rule{0.166667em}{0ex}}\stackrel{\mathrm{def}}{=}\phantom{\rule{0.166667em}{0ex}}{max}_{x\in \mathbb{X}}{\gamma}_{x}\left(t\right)$. We then have that the quantity in Equation (6) is at most
$$\delta \left({t}_{0}\right)+\sum_{i=1}^{S}\,\sum_{t={t}_{0}+1}^{{T}_{i}}\gamma \left(t\right)\,,$$
whose expectation, according to Wald’s identity, is
$$\delta \left({t}_{0}\right)+\mathbf{E}\left(S\right)\,\mathbf{E}\left(\sum_{t={t}_{0}+1}^{T}\gamma \left(t\right)\right).$$
Assuming the value of $\gamma \left(t\right)$ is upper-bounded by some $\gamma $, the expected total number of bits communicated is at most
$$\delta \left({t}_{0}\right)+3\,\gamma \left(1+n\,{2}^{1-{t}_{0}}\right) \qquad (7)$$
because $\mathbf{E}\left(S\right)=C$ and using Equations (4) and (3).
Depending on the specific application, which determines
$\gamma $ and function
$\delta \left(t\right)$, Equation (
7) is key to a tradeoff that can lead to an optimal choice of
${t}_{0}$ since a larger
${t}_{0}$ decreases
${2}^{1-{t}_{0}}$ but is likely to increase
$\delta \left({t}_{0}\right)$. The value of
$\gamma $ may play a rôle in the balance. The next section, in which we consider the simulation of quantum entanglement by classical communication, gives an example of this tradeoff in action.
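To see this trade-off in action numerically, one can tabulate the right-hand side of Equation (7) under an illustrative cost model; the linear form of $\delta \left(t\right)$ below is purely an assumption for the sake of the example, not a claim about any particular application:

```python
def expected_bits_bound(t0, n, gamma, delta):
    """Right-hand side of Equation (7): delta(t0) + 3*gamma*(1 + n*2^(1-t0))."""
    return delta(t0) + 3 * gamma * (1 + n * 2.0**(1 - t0))

# Illustrative cost model (assumed, not from the paper): each extra bit of
# precision in t0 costs 50 transmitted bits, with n = 1024 and gamma = 8.
n, gamma = 1024, 8
delta = lambda t: 50 * t
best_t0 = min(range(1, 64),
              key=lambda t0: expected_bits_bound(t0, n, gamma, delta))
```

Under this assumed model the optimum lands near ${t}_{0}\approx \lg n$: a larger ${t}_{0}$ shrinks the rejection term but pays linearly in the initial transmission.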
3. Simulation of Quantum Entanglement Based on Remote Sampling
Before introducing the simulation of entanglement, let us establish some notation and mention the mathematical objects that we shall need. It is assumed that the reader is familiar with linear algebra, in particular the notion of a semidefinite positive matrix, Hermitian matrix, trace of a matrix, tensor product, etc. For a discussion about the probabilistic and statistical nature of quantum theory, see Ref. [
22]. For convenience, we use
$\left[n\right]$ to denote the set
$\{1,2,\dots ,n\}$ for any integer
n.
Consider integers
$m,{d}_{1},{d}_{2},\dots ,{d}_{m},{n}_{1},{n}_{2},\dots ,{n}_{m}$, all greater than or equal to 2. Define
$d={\prod}_{i=1}^{m}{d}_{i}$ and
$n={\prod}_{i=1}^{m}{n}_{i}$. Let
$\rho $ be a
$d\times d$ density matrix. Recall that any density matrix is Hermitian, semidefinite positive and unittrace, which implies that its diagonal elements are real numbers between 0 and 1. For each
$i\in \left[m\right]$ and
$j\in \left[{n}_{i}\right]$, let
${M}_{ij}$ be a
${d}_{i}\times {d}_{i}$ Hermitian semidefinite positive matrix such that
$$\sum_{j=1}^{{n}_{i}}{M}_{ij}={I}_{{d}_{i}}\,, \qquad (8)$$
where
${I}_{{d}_{i}}$ is the
${d}_{i}\times {d}_{i}$ identity matrix. In other words, each set
${\left\{{M}_{ij}\right\}}_{j\in \left[{n}_{i}\right]}$ is a
povm (positive operator-valued measure) [
22].
As introduced in
Section 1, we consider one
leader and
m custodians. The leader knows density matrix
$\rho $ and the
ith custodian knows the
ith
povm, meaning that he knows the matrices
${M}_{ij}$ for all
$j\in \left[{n}_{i}\right]$. If a physical system of dimension
d in state
$\rho $ were shared between the custodians, in the sense that the
ith custodian had possession of the
ith subsystem of dimension
${d}_{i}$, each custodian could perform locally his assigned
povm and output the outcome, an integer between 1 and
${n}_{i}$. The joint output would belong to
$\mathbb{X}\phantom{\rule{0.166667em}{0ex}}\stackrel{\mathrm{def}}{=}\phantom{\rule{0.166667em}{0ex}}\left[{n}_{1}\right]\times \left[{n}_{2}\right]\times \cdots \times \left[{n}_{m}\right]$, a set of cardinality
n, sampled according to the probability distribution stipulated by the laws of quantum theory, which we review below.
Our task is to sample
$\mathbb{X}$ with the exact same probability distribution even though there is no physical system in state
$\rho $ available to the custodians, and in fact all parties considered are purely classical! We know from Bell’s Theorem [
23] that this task is impossible in general without communication, even when
$m=2$, and our goal is to minimize the amount of communication required to achieve it. Special cases of this problem have been studied extensively for expected [
1,
2,
4,
5,
6], etc. and worstcase [
3,
8], etc. communication complexity, but here we solve it in its essentially most general setting, albeit only in the expected sense. For this purpose, the leader will centralize the operations while requesting as little information as possible from the custodians on their assigned
povms. Once the leader has successfully sampled
$X=({X}_{1},\dots ,{X}_{m})$, he transmits each
${X}_{i}$ to the
ith custodian, who can then output it as would have been the case had quantum measurements actually taken place.
We now review the probability distribution
$\mathbb{X}$ that we need to sample, according to quantum theory. For each vector
$x=({x}_{1},\dots ,{x}_{m})\in \mathbb{X}$, let
${M}_{x}$ be the
$d\times d$ tensor product of matrices
${M}_{i{x}_{i}}$ for each
$i\in \left[m\right]$:
$${M}_{x}\phantom{\rule{0.166667em}{0ex}}\stackrel{\mathrm{def}}{=}\phantom{\rule{0.166667em}{0ex}}{M}_{1{x}_{1}}\otimes {M}_{2{x}_{2}}\otimes \cdots \otimes {M}_{m{x}_{m}}\,. \qquad (9)$$
The set
${\left\{{M}_{x}\right\}}_{x\in \mathbb{X}}$ forms a global
povm of dimension
d, which applied to density matrix
$\rho $ defines a joint probability vector on
$\mathbb{X}$. The probability
${p}_{x}$ of obtaining any
$x=({x}_{1},\dots ,{x}_{m})\in \mathbb{X}$ is given by
$${p}_{x}=\mathrm{tr}\left(\rho {M}_{x}\right)\,. \qquad (10)$$
For a matrix A of size $s\times s$ and any pair of indices r and c between 0 and $s-1$, we use ${\left(A\right)}_{rc}$ to denote the entry of A located in the ${r}^{\mathrm{th}}$ row and ${c}^{\mathrm{th}}$ column. Matrix indices start at 0 rather than 1 to facilitate Fact 2 below. We now state various facts for which we provide cursory justifications since they follow from elementary linear algebra and quantum theory, or they are lifted from previous work.
Fact 1. For all
$x\in \mathbb{X}$, we have
$0\le {p}_{x}\le 1$ when
${p}_{x}$ is defined according to Equation (
10); furthermore,
${\sum}_{x\in \mathbb{X}}{p}_{x}=1$. This is obvious because quantum theory tells us that Equation (
10) defines a probability distribution over all possible outcomes
$x\in \mathbb{X}$, as sampled by the joint measurement. Naturally, this statement could also be proven from Equations (
8) and (
10) using elementary linear algebra.
Fact 2. For each
$x=({x}_{1},\dots ,{x}_{m})\in \mathbb{X}$, matrix
${M}_{x}$ is the tensor product of
m matrices as given in Equation (
9). Therefore, each entry
${\left({M}_{x}\right)}_{rc}$ is the product of
m entries of the
${M}_{i{x}_{i}}$’s. Specifically, consider any indices
r and
c between 0 and
$d1$ and let
${r}_{i}$ and
${c}_{i}$ be the indices between 0 and
${d}_{i}1$, for each
$i\in \left[m\right]$, such that
$$r=\sum_{i=1}^{m}{r}_{i}\prod_{k=i+1}^{m}{d}_{k}\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}c=\sum_{i=1}^{m}{c}_{i}\prod_{k=i+1}^{m}{d}_{k}\,.$$
The ${r}_{i}$’s and ${c}_{i}$’s are uniquely defined by the principle of mixed-radix numeration. We have
$${\left({M}_{x}\right)}_{rc}=\prod_{i=1}^{m}{\left({M}_{i{x}_{i}}\right)}_{{r}_{i}{c}_{i}}\,.$$
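The index bookkeeping of Fact 2 is easy to make concrete; the following sketch (names ours) decomposes a global index into mixed-radix digits and checks the entrywise product rule on a small Kronecker product:

```python
def mixed_radix_digits(r, dims):
    """Digits (r_1, ..., r_m), with 0 <= r_i < d_i, of index r in the
    mixed-radix numeration of Fact 2 (most significant digit first)."""
    digits = []
    for d in reversed(dims):
        digits.append(r % d)
        r //= d
    return digits[::-1]

def kron(A, B):
    """Kronecker (tensor) product of square matrices given as nested lists."""
    da, db = len(A), len(B)
    return [[A[i // db][j // db] * B[i % db][j % db]
             for j in range(da * db)] for i in range(da * db)]
```

For two $2\times 2$ factors, entry $(r,c)$ of the product is the product of entries $({r}_{1},{c}_{1})$ and $({r}_{2},{c}_{2})$ given by the digit decompositions of r and c.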
Fact 3. Let
M be a Hermitian semidefinite positive matrix. Every entry
${\left(M\right)}_{ij}$ of the matrix satisfies
$${\left|{\left(M\right)}_{ij}\right|}^{2}\le {\left(M\right)}_{ii}{\left(M\right)}_{jj}\,.$$
This follows from the fact that all principal submatrices of any Hermitian semidefinite positive matrix are semidefinite positive [
24] (Observation 7.1.2, page 430). In particular, the principal submatrix
$$\left(\begin{array}{cc}{\left(M\right)}_{ii}& {\left(M\right)}_{ij}\\ {\left(M\right)}_{ji}& {\left(M\right)}_{jj}\end{array}\right)$$
is semidefinite positive, and therefore it has nonnegative determinant:
$${\left(M\right)}_{ii}{\left(M\right)}_{jj}-{\left(M\right)}_{ij}{\left(M\right)}_{ji}={\left(M\right)}_{ii}{\left(M\right)}_{jj}-{\left|{\left(M\right)}_{ij}\right|}^{2}\ge 0$$
by virtue of
M being Hermitian, where
${\alpha}^{*}$ denotes the complex conjugate of
α.
Fact 4. The norm $\left|{\left(\rho \right)}_{ij}\right|$ of any entry of a density matrix ρ is less than or equal to 1. This follows directly from Fact 3 since density matrices are Hermitian semidefinite positive, and from the fact that diagonal entries of density matrices, such as ${\left(\rho \right)}_{ii}$ and ${\left(\rho \right)}_{jj}$, are real values between 0 and 1.
Fact 5. Given any povm ${\left\{{M}_{\ell}\right\}}_{\ell =1}^{L}$, we have that
$0\le {\left({M}_{\ell}\right)}_{ii}\le 1$ for all ℓ and i, and
$\left|{\left({M}_{\ell}\right)}_{ij}\right|\le 1$ for all ℓ, i and j.
The first statement follows from the fact that ${\sum}_{\ell =1}^{L}{M}_{\ell}$ is the identity matrix by definition of povms, and therefore ${\sum}_{\ell =1}^{L}{\left({M}_{\ell}\right)}_{ii}=1$ for all i, and the fact that each ${\left({M}_{\ell}\right)}_{ii}\ge 0$ because each ${M}_{\ell}$ is semidefinite positive. The second statement follows from the first by applying Fact 3.
Fact 6. (This is a special case of Theorem 1 from Ref. [
1], with
$v=0$). Let
$k\ge 1$ be an integer and consider any two real numbers
a and
b. If
$\widehat{a}$ and
$\widehat{b}$ are arbitrary
k-bit approximations of
a and
b, respectively, then
$\widehat{a}+\widehat{b}$ is a
$(k-1)$-bit approximation of
$a+b$. If, in addition,
a and
b are known to lie in interval
$[-1,1]$, which can also be assumed without loss of generality concerning
$\widehat{a}$ and
$\widehat{b}$ since otherwise they can be safely pushed back to the appropriate frontier of this interval, then
$\widehat{a}\widehat{b}$ is a
$(k-1)$-bit approximation of
$ab$.
Fact 7. Let
$k\ge 1$ be an integer and consider any two
complex numbers
α and
β. If
$\widehat{\alpha}$ and
$\widehat{\beta}$ are arbitrary
k-bit approximations of
α and
β, respectively, then
$\widehat{\alpha}+\widehat{\beta}$ is a
$(k-1)$-bit approximation of
$\alpha +\beta $. If, in addition,
$k\ge 2$ and the real and imaginary parts of
α and
β are known to lie in interval
$[-1,1]$, which can also be assumed without loss of generality concerning
$\widehat{\alpha}$ and
$\widehat{\beta}$, then
$\widehat{\alpha}\widehat{\beta}$ is a
$(k-2)$-bit approximation of
$\alpha \beta $. This is a direct consequence of Fact 6 in the case of addition. In the case of multiplication, consider
$\alpha =a+bi$,
$\beta =c+di$,
$\widehat{\alpha}=\widehat{a}+\widehat{b}i$ and
$\widehat{\beta}=\widehat{c}+\widehat{d}i$, so that
$$\alpha \beta =(ac-bd)+(ad+bc)i\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\widehat{\alpha}\widehat{\beta}=(\widehat{a}\widehat{c}-\widehat{b}\widehat{d})+(\widehat{a}\widehat{d}+\widehat{b}\widehat{c})i\,.$$
By the multiplicative part of Fact 6,
$\widehat{a}\widehat{c}$,
$\widehat{b}\widehat{d}$,
$\widehat{a}\widehat{d}$ and
$\widehat{b}\widehat{c}$ are
$(k-1)$-bit approximations of
$ac$,
$bd$,
$ad$ and
$bc$, respectively; and then by the additive part of the same fact (which obviously applies equally well to subtraction),
$\widehat{a}\widehat{c}-\widehat{b}\widehat{d}$ and
$\widehat{a}\widehat{d}+\widehat{b}\widehat{c}$ are
$(k-2)$-bit approximations of
$ac-bd$ and
$ad+bc$, respectively.
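A quick numeric check of the multiplicative part of Fact 7, using k-bit truncations represented exactly as rationals (helper names ours):

```python
from fractions import Fraction
import math

def trunc(x, k):
    """k-bit truncation of a rational x: sign(x) * floor(|x| * 2^k) / 2^k."""
    sign = (x > 0) - (x < 0)
    return Fraction(sign * math.floor(abs(x) * 2**k), 2**k)

def cmul(a, b, c, d):
    """(a + bi)(c + di) computed as (ac - bd) + (ad + bc)i."""
    return (a * c - b * d, a * d + b * c)
```

Multiplying k-bit truncations of two unit-norm complex numbers gives real and imaginary parts within ${2}^{-(k-2)}$ of the exact product, as Fact 7 guarantees.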
Fact 8 (This is Corollary 2 from Ref. [
1]). Let
$m\ge 2$ and
$k\ge \lceil \lg m\rceil $ be integers and let
${\left\{{a}_{j}\right\}}_{j=1}^{m}$ and
${\left\{{\widehat{a}}_{j}\right\}}_{j=1}^{m}$ be real numbers and their
k-bit approximations, respectively, all in interval
$[-1,1]$. Then,
${\prod}_{j=1}^{m}{\widehat{a}}_{j}$ is a
$(k-\lceil \lg m\rceil )$-bit approximation of
${\prod}_{j=1}^{m}{a}_{j}$.
Fact 9. Let
$m\ge 2$ and
$k\ge 2\lceil \lg m\rceil $ be integers and let
${\left\{{\alpha}_{j}\right\}}_{j=1}^{m}$ and
${\left\{{\widehat{\alpha}}_{j}\right\}}_{j=1}^{m}$ be complex numbers and their
k-bit approximations, respectively. Provided it is known that
$\left|{\alpha}_{j}\right|\le 1$ for each
$j\in \left[m\right]$, a
$(k-2\lceil \lg m\rceil )$-bit approximation of
${\prod}_{j=1}^{m}{\alpha}_{j}$ can be computed from knowledge of the
${\widehat{\alpha}}_{j}$’s. The proof of this fact follows essentially the same template as Fact 8, except that
two bits of precision may be lost at each level up the binary tree introduced in Ref. [
1], due to the difference between Facts 6 and 7. A subtlety arises because, for Fact 7 to apply, the real and imaginary parts of all the complex numbers under consideration must lie in interval
$[-1,1]$. This is automatic for the exact values since the ${\alpha}_{j}$’s are upper-bounded in norm by 1 and the product of such-bounded complex numbers is also upper-bounded in norm by 1, which implies that their real and imaginary parts lie in interval $[-1,1]$. For the approximations, however, we cannot force their norm to be bounded by 1 because we need the approximations to be rational for communication purposes. Fortunately, we can force the real and imaginary parts of all approximations computed at each level up the binary tree to lie in interval $[-1,1]$ because we know that they approximate such-bounded numbers. Note that the product of two complex numbers whose real and imaginary parts lie in interval $[-1,1]$, such as $1+{2}^{-k}i$ and $1-{2}^{-k}i$, may not have this property, even if they are k-bit approximations of numbers bounded in norm by 1.
Fact 10. Let $s\ge 2$ and $k\ge \lceil \lg s\rceil $ be integers and let ${\left\{{\alpha}_{j}\right\}}_{j=1}^{s}$ and ${\left\{{\widehat{\alpha}}_{j}\right\}}_{j=1}^{s}$ be complex numbers and their k-bit approximations, respectively, without any restriction on their norm. Then ${\sum}_{j=1}^{s}{\widehat{\alpha}}_{j}$ is a $(k-\lceil \lg s\rceil )$-bit approximation of ${\sum}_{j=1}^{s}{\alpha}_{j}$. Again, this follows the same proof template as Fact 8, substituting multiplication of real numbers by addition of complex numbers, which allows us to drop any condition on the size of the numbers considered.
Fact 11. Consider any
$x=({x}_{1},\dots ,{x}_{m})\in \mathbb{X}$ and any positive integer
t. In order to compute a
t-bit approximation of
${p}_{x}$, it suffices to have
$(t+1+\lceil 2\lg d\rceil +2\lceil \lg m\rceil )$-bit approximations of each entry of the
${M}_{i{x}_{i}}$ matrices for all
$i\in \left[m\right]$. This is because
$${p}_{x}=\mathrm{tr}\left(\rho {M}_{x}\right)=\sum_{r=0}^{d-1}\sum_{c=0}^{d-1}{\left(\rho \right)}_{rc}{\left({M}_{x}\right)}_{cr}=\sum_{r=0}^{d-1}\sum_{c=0}^{d-1}{\left(\rho \right)}_{rc}\prod_{i=1}^{m}{\left({M}_{i{x}_{i}}\right)}_{{c}_{i}{r}_{i}} \qquad (11)$$
by virtue of Fact 2. Every term of the double sum in Equation (
11) involves a product of
m entries, one per
povm element, and therefore incurs a loss of at most
$2\lceil \lg m\rceil $ bits of precision by Fact 9, whose condition holds thanks to Fact 5. An additional bit of precision may be lost in the multiplication by
${\left(\rho \right)}_{rc}$, even though that value is available with arbitrary precision (and is upperbounded by 1 in norm by Fact 4) because of the additions involved in multiplying complex numbers. Then, we have to add
$s={d}^{2}$ terms, which incurs an additional loss of at most
$\lceil \lg s\rceil =\lceil 2\lg d\rceil $ bits of precision by Fact 10. In total,
$(t+1+\lceil 2\lg d\rceil +2\lceil \lg m\rceil )$-bit approximations of the
${\left({M}_{i{x}_{i}}\right)}_{{c}_{i}{r}_{i}}$’s will result in a
t-bit approximation of
${p}_{x}$.
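As a concrete instance of the double sum in Equation (11), the following sketch evaluates $\mathrm{tr}\left(\rho {M}_{x}\right)$ for two custodians measuring a Bell state in the computational basis (matrices as nested lists; the worked example is ours):

```python
def kron(A, B):
    """Kronecker (tensor) product of square matrices given as nested lists."""
    da, db = len(A), len(B)
    return [[A[i // db][j // db] * B[i % db][j % db]
             for j in range(da * db)] for i in range(da * db)]

def trace_prod(R, M):
    """tr(R M) = sum over r, c of R[r][c] * M[c][r], as in Equation (11)."""
    d = len(R)
    return sum(R[r][c] * M[c][r] for r in range(d) for c in range(d))

# rho = |Phi+><Phi+| for the Bell state (|00> + |11>)/sqrt(2)
rho = [[0.5, 0, 0, 0.5],
       [0, 0, 0, 0],
       [0, 0, 0, 0],
       [0.5, 0, 0, 0.5]]
# Each custodian's POVM: the two computational-basis projectors
P = [[[1, 0], [0, 0]], [[0, 0], [0, 1]]]
p = {(x1, x2): trace_prod(rho, kron(P[x1], P[x2]))
     for x1 in (0, 1) for x2 in (0, 1)}
```

The resulting distribution is perfectly correlated, ${p}_{00}={p}_{11}=1/2$ and ${p}_{01}={p}_{10}=0$, which is exactly the kind of joint behaviour the custodians must reproduce without holding any quantum state.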
Fact 12. The leader can compute
${p}_{x}\left(t\right)$ for any specific
$x=({x}_{1},\dots ,{x}_{m})\in \mathbb{X}$ and integer
t if he receives a total of
$$\left(t+2+\lceil 2\lg d\rceil +2\lceil \lg m\rceil \right)\sum_{i=1}^{m}{d}_{i}^{2} \qquad (12)$$
bits from the custodians. This is because the
ith custodian has the description of matrix
${M}_{i{x}_{i}}$ of size
${d}_{i}\times {d}_{i}$, which is defined by exactly
${d}_{i}^{2}$ real numbers since the matrix is Hermitian. By virtue of Fact 11, it is sufficient for the leader to have
$(t+1+\lceil 2\lg d\rceil +2\lceil \lg m\rceil )$-bit approximations for all those
${\sum}_{i=1}^{m}{d}_{i}^{2}$ numbers. Since each one of them lies in interval $[-1,1]$ by Fact 5, well-chosen k-bit approximations (for instance k-bit truncations) can be conveyed by the transmission of
$k+1$ bits, one of which carries the sign.
Note that the t-bit approximation of ${p}_{x}$ computed according to Fact 12, say $a+bi$, may very well have a nonzero imaginary part b, albeit necessarily between $-{2}^{-t}$ and ${2}^{-t}$. Since ${p}_{x}\left(t\right)$ must be a real number between 0 and 1, the leader sets ${p}_{x}\left(t\right)=\max (0,\min (1,a))$, taking no account of b, although a paranoid leader may wish to test that $-{2}^{-t}\le b\le {2}^{-t}$ indeed and raise an alarm in case it is not (which of course is mathematically impossible unless the custodians are not given proper povms, unless they misbehave, or unless a computation or communication error has occurred).
Fact 13. For any
t, the leader can compute
${p}_{x}\left(t\right)$ for each and every
$x\in \mathbb{X}$ if he receives
$$\left(t+2+\lceil 2\lg d\rceil +2\lceil \lg m\rceil \right)\sum_{i=1}^{m}{n}_{i}{d}_{i}^{2} \qquad (13)$$
bits from the custodians. This is because it suffices for each custodian
i to send to the leader
$(t+1+\lceil 2\lg d\rceil +2\lceil \lg m\rceil )$-bit approximations of all
${n}_{i}{d}_{i}^{2}$ real numbers that define the entire
ith
povm, i.e., all the matrices
${M}_{ij}$ for
$j\in \left[{n}_{i}\right]$. This is a nice example of the fact that it may be much less expensive for the leader to compute at once
${p}_{x}\left(t\right)$ for all
$x\in \mathbb{X}$, rather than computing them one by one independently, which would cost
bits of communication by applying
n times Fact 12.
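To make the comparison between Facts 12 and 13 concrete, here is a small Python sketch of both communication counts; it assumes $d=\prod_i d_i$ (the dimension of the joint system) and one extra sign bit per transmitted number, as described above. All function names and the sample parameters are illustrative.

```python
import math

def bits_per_number(t: int, d_list: list, m: int) -> int:
    """Bits to convey one (t+1+ceil(2 lg d)+2 ceil(lg m))-bit truncation,
    plus one sign bit.  We take d = prod(d_list), the dimension of the
    joint system (an assumption for this illustration)."""
    d = math.prod(d_list)
    return t + 2 + math.ceil(2 * math.log2(d)) + 2 * math.ceil(math.log2(m))

def cost_fact12(t, d_list, m):
    """Fact 12: bits for the leader to compute p_x(t) for ONE outcome x."""
    return bits_per_number(t, d_list, m) * sum(d * d for d in d_list)

def cost_fact13(t, n_list, d_list, m):
    """Fact 13: bits for the leader to compute p_x(t) for ALL outcomes at once."""
    return bits_per_number(t, d_list, m) * sum(n * d * d for n, d in zip(n_list, d_list))
```

For instance, with $m=3$ custodians, ${n}_{i}={d}_{i}=2$ and $t=10$, Fact 13 costs 528 bits, whereas applying Fact 12 to all $n=8$ outcomes separately would cost 2112 bits.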
After all these preliminaries, we are now ready to adapt the general template of Algorithm 2 to our entanglement-simulation conundrum, yielding Algorithm 3. We postpone the choice of ${t}_{0}$ until after the communication complexity analysis of this new algorithm.
Algorithm 3 Protocol for simulating arbitrary entanglement subjected to arbitrary measurements
 1: Each custodian $i\in \left[m\right]$ sends his value of ${n}_{i}$ to the leader, who computes $n=\prod_{i=1}^{m}{n}_{i}$
 2: The leader chooses ${t}_{0}$ and informs the custodians of its value
 3: Each custodian $i\in \left[m\right]$ sends to the leader $({t}_{0}+1+\lceil 2\lg d\rceil +2\lceil \lg m\rceil )$-bit truncations of the real and imaginary parts of the entries defining matrix ${M}_{ij}$ for each $j\in \left[{n}_{i}\right]$
 4: The leader computes ${p}_{x}\left({t}_{0}\right)$ for every $x\in \mathbb{X}$, using Fact 13
 5: The leader computes $C$ and $q={\left({q}_{x}\right)}_{x\in \mathbb{X}}$ as per Equations (1) and (2)
 6: $\mathrm{accept}\leftarrow \mathrm{false}$
 7: repeat
 8:  $\mathrm{reject}\leftarrow \mathrm{false}$
 9:  The leader samples $X=({X}_{1},{X}_{2},\dots ,{X}_{m})$ according to $q$
10:  The leader informs each custodian $i\in \left[m\right]$ of the value of ${X}_{i}$
11:  The leader samples $U$ uniformly on $[0,1]$
12:  $t\leftarrow {t}_{0}$
13:  repeat
14:   if $UC{q}_{X}\le {p}_{X}\left(t\right)-{2}^{-t}$ then
15:    $\mathrm{accept}\leftarrow \mathrm{true}$ {$X$ is accepted}
16:   else if $UC{q}_{X}>{p}_{X}\left(t\right)+{2}^{-t}$ then
17:    $\mathrm{reject}\leftarrow \mathrm{true}$ {$X$ is rejected}
18:   else
19:    The leader asks each custodian $i\in \left[m\right]$ for one more bit in the truncation of the real and imaginary parts of the entries defining matrix ${M}_{i{X}_{i}}$
20:    Using this information, the leader updates ${p}_{X}\left(t\right)$ into ${p}_{X}(t+1)$
21:    $t\leftarrow t+1$
22:   end if
23:  until accept or reject
24: until accept
25: The leader requests each custodian $i\in \left[m\right]$ to output his ${X}_{i}$
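Abstracting away the actual exchange with the custodians, the leader's side of lines 6–24 can be sketched in Python as follows; the callable p_at(x, t), which stands in for the truncation traffic of lines 19–20, and all other names are our own illustrative choices, not part of the protocol.

```python
import random

def leader_rejection_loop(q, C, p_at, t0, rng=random):
    """Sketch of lines 6-24 of Algorithm 3 (illustrative, not the protocol
    itself).  q maps outcomes to proposal probabilities with p_x <= C*q_x;
    p_at(x, t) returns a t-bit approximation of p_x, i.e. within 2**-t."""
    outcomes, weights = zip(*q.items())
    while True:                                     # repeat ... until accept
        X = rng.choices(outcomes, weights=weights)[0]   # line 9
        U = rng.random()                                # line 11
        t = t0                                          # line 12
        while True:                                 # repeat ... until accept or reject
            pXt = p_at(X, t)
            if U * C * q[X] <= pXt - 2.0 ** -t:     # line 14
                return X                            # X is accepted
            if U * C * q[X] > pXt + 2.0 ** -t:      # line 16
                break                               # X is rejected: resample
            t += 1                                  # line 21: one more bit of precision
```

Run on a toy target distribution, the loop reproduces the target frequencies even though it only ever sees truncations of the ${p}_{x}$'s.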

To analyse the expected number of bits of communication required by this algorithm, we apply Equation (
7) from
Section 2 after defining explicitly the cost parameters
$\delta \left({t}_{0}\right)$ for the initial computation of
${p}_{x}\left({t}_{0}\right)$ for all
$x\in \mathbb{X}$ at lines 3 and 4, and
$\gamma $ for the upgrade from a specific
${p}_{X}\left(t\right)$ to
${p}_{X}(t+1)$ at lines 19 and 20. For simplicity, we shall ignore the negligible amount of communication entailed at line 1 (which is $\sum_{i=1}^{m}\lceil \lg {n}_{i}\rceil \le m+\lg n$ bits), line 2 ($\lceil \lg {t}_{0}\rceil$ bits), line 10 (also $\sum_{i=1}^{m}\lceil \lg {n}_{i}\rceil$ bits, but repeated $\mathbf{E}\left(S\right)\le 1+{2}^{1-{t}_{0}}n$ times) and line 25 ($m$ bits) because they are not taken into account in Equation (7) since they are absent from Algorithm 2. If we counted it all, this would add $O((1+{2}^{1-{t}_{0}}n)\lg n+\lg {t}_{0})$ bits to Equation (13) below, which would be less than $10\lg n$ bits added to Equation (14), with no effect at all on Equation (15).
According to Fact 13,
$$\delta \left({t}_{0}\right)=({t}_{0}+2+\lceil 2\lg d\rceil +2\lceil \lg m\rceil )\sum_{i=1}^{m}{n}_{i}{d}_{i}^{2}\,.$$
The cost of line 19 is very modest because we use truncations rather than general approximations in line 3 for the leader to compute ${p}_{x}\left({t}_{0}\right)$ for all $x\in \mathbb{X}$. Indeed, it suffices to obtain a single additional bit of precision in the real and imaginary parts of each entry defining matrix ${M}_{i{X}_{i}}$ from each custodian $i\in \left[m\right]$. The cost of this update is simply
$$\gamma =m+\sum_{i=1}^{m}{d}_{i}^{2}$$
bits of communication, where the addition of $m$ is to account for the leader needing to request new bits from the custodians. This is a nice example of what we meant by “it could be that bits previously communicated can be reused” in line 11 of Algorithm 2.
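A minimal sketch of this single-bit upgrade mechanism (function names ours): a $k$-bit truncation plus the next bit of the binary expansion yields exactly the $(k+1)$-bit truncation.

```python
def truncate(x: float, k: int) -> float:
    """k-bit binary truncation of x in [0, 1): floor(x * 2**k) / 2**k."""
    return int(x * 2 ** k) / 2 ** k

def next_bit(x: float, k: int) -> int:
    """Bit number k+1 of x's binary expansion: the single extra bit a
    custodian sends to upgrade a k-bit truncation."""
    return int(x * 2 ** (k + 1)) % 2

def upgrade(trunc_k: float, bit: int, k: int) -> float:
    """Combine a k-bit truncation with the next bit into a (k+1)-bit one."""
    return trunc_k + bit * 2.0 ** -(k + 1)
```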
Putting it all together in Equation (
7), the total expected number of bits communicated in Algorithm 3 in order to sample exactly according to the quantum probability distribution is
We are finally in a position to choose the value of parameter ${t}_{0}$. First note that $n=\prod_{i=1}^{m}{n}_{i}\ge {2}^{m}$. Therefore, any constant choice of ${t}_{0}$ will entail an expected amount of communication that is exponential in $m$ because of the right-hand term in Equation (13). At the other extreme, choosing ${t}_{0}=n$ would also entail an expected amount of communication that is exponential in $m$, this time because of the left-hand term in Equation (13). A good compromise is to choose ${t}_{0}=\lceil \lg n\rceil$, which results in $1\le C\le 3$ according to Equation (3), because in that case ${2}^{{t}_{0}}\ge n$ and therefore
so that Equation (
13) becomes
In case all the ${n}_{i}$’s and ${d}_{i}$’s are upper-bounded by some constant $\xi$, we have that $n=\prod_{i=1}^{m}{n}_{i}\le {\xi}^{m}$, hence $\lg n\le m\lg \xi$, similarly $\lg d\le m\lg \xi$, and also $\sum_{i=1}^{m}{n}_{i}{d}_{i}^{2}\le m{\xi}^{3}$. It follows that
which is on the order of
${m}^{2}$, thus matching with our most general method the result that was already known for the very specific case of simulating the quantum
$m$-partite GHZ distribution [
1].
4. Practical Implementation Using a Source of Discrete Randomness
In practice, we cannot work with continuous random variables since our computers have finite storage capacities and finite precision arithmetic. Furthermore, the generation of uniform continuous random variables does not make sense computationally speaking and we must adapt Algorithms 2 and 3 to work in a finite world.
For this purpose, recall that
U is a uniform continuous random variable on
$[0,1]$ used in all the algorithms seen so far. For each
$i\ge 1$, let
${U}_{i}$ denote the
ith bit in the binary expansion of
$U$, so that
$$U=\sum_{i=1}^{\infty}{U}_{i}\,{2}^{-i}\,.$$
We acknowledge the fact that the
${U}_{i}$’s are not uniquely defined in case
$U=j/{2}^{k}$ for integers
$k>0$ and
$0<j<{2}^{k}$, but we only mention this phenomenon to ignore it since it occurs with probability 0 when
U is uniformly distributed on
$[0,1]$. We denote the $t$-bit truncation of $U$ by $U\left[t\right]$:
$$U\left[t\right]\,\stackrel{\mathrm{def}}{=}\,\sum_{i=1}^{t}{U}_{i}\,{2}^{-i}\,.$$
For all $t\ge 1$, we have that
$$U\left[t\right]\le U\le U\left[t\right]+{2}^{-t}\,. \qquad (16)$$
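As a quick numerical sanity check of the truncation bound $U\left[t\right]\le U\le U\left[t\right]+{2}^{-t}$, one can rebuild $U\left[t\right]$ from the bits ${U}_{i}$; this Python sketch (helper names ours) does exactly that.

```python
def bit(u: float, i: int) -> int:
    """U_i: the i-th bit of the binary expansion of u in [0, 1)."""
    return int(u * 2 ** i) % 2

def trunc(u: float, t: int) -> float:
    """U[t] = sum_{i=1}^{t} U_i 2**-i, the t-bit truncation of u."""
    return sum(bit(u, i) * 2.0 ** -i for i in range(1, t + 1))
```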
We modify Algorithm 2 into Algorithm 4 as follows, leaving to the reader the corresponding modification of Algorithm 3, thus yielding a practical protocol for the simulation of general entanglement under arbitrary measurements.
Algorithm 4 Modified rejection algorithm with discrete randomness source—Protocol for the leader
Input: Value of ${t}_{0}$
 1: Compute ${p}_{x}\left({t}_{0}\right)$ for each $x\in \mathbb{X}$ {The leader needs information from the custodians in order to compute these approximations}
 2: Compute $C$ and $q={\left({q}_{x}\right)}_{x\in \mathbb{X}}$ as per Equations (1) and (2)
 3: Sample $X$ according to $q$
 4: $U\left[0\right]\leftarrow 0$
 5: for $t=1$ to ${t}_{0}-1$ do
 6:  Generate i.i.d. unbiased bit ${U}_{t}$
 7:  $U\left[t\right]\leftarrow U[t-1]+{U}_{t}\,{2}^{-t}$
 8: end for
 9: for $t={t}_{0}$ to $\infty$ do
10:  Generate i.i.d. unbiased bit ${U}_{t}$
11:  $U\left[t\right]\leftarrow U[t-1]+{U}_{t}\,{2}^{-t}$
12:  if $\left(U\left[t\right]+{2}^{-t}\right)C{q}_{X}\le {p}_{X}\left(t\right)-{2}^{-t}$ then
13:   return $X$ {$X$ is accepted}
14:  else if $U\left[t\right]C{q}_{X}>{p}_{X}\left(t\right)+{2}^{-t}$ then
15:   go to line 3 {$X$ is rejected}
16:  else
17:   Continue the for loop {We cannot decide to accept or reject because $-(1+C{q}_{X})\,{2}^{-t}<U\left[t\right]C{q}_{X}-{p}_{X}\left(t\right)\le {2}^{-t}$; communication may be required in order for the leader to compute ${p}_{X}(t+1)$; it could be that bits previously communicated to compute ${p}_{X}\left(t\right)$ can be reused.}
18:  end if
19: end for
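The following Python sketch mirrors Algorithm 4. The proposal sampling at line 3 is delegated to a library routine standing in for Knuth–Yao sampling, each ${U}_{t}$ is drawn with getrandbits(1), and the callable p_at(x, t), like all names here, is an illustrative assumption rather than the paper's interface.

```python
import random

def modified_rejection(p_at, q, C, t0, rng):
    """Sketch of Algorithm 4.  p_at(x, t) is a t-bit approximation of p_x
    (within 2**-t); q maps outcomes to proposal probabilities satisfying
    p_x <= C * q_x.  Beyond the proposal, every random choice consumes
    exactly one unbiased bit per loop iteration."""
    outcomes, weights = zip(*q.items())
    while True:
        X = rng.choices(outcomes, weights=weights)[0]  # line 3 (Knuth-Yao stand-in)
        Ut = 0.0                                       # line 4: U[0] <- 0
        for t in range(1, t0):                         # lines 5-8
            Ut += rng.getrandbits(1) * 2.0 ** -t
        t = t0
        while True:                                    # lines 9-19
            Ut += rng.getrandbits(1) * 2.0 ** -t       # U[t] <- U[t-1] + U_t 2**-t
            eps = 2.0 ** -t
            pXt = p_at(X, t)
            if (Ut + eps) * C * q[X] <= pXt - eps:     # line 12
                return X                               # line 13: accepted
            if Ut * C * q[X] > pXt + eps:              # line 14
                break                                  # line 15: rejected, resample X
            t += 1                                     # line 17: cannot decide yet
```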

Theorem 2. Algorithm 4 is correct, i.e., it terminates and returns $X=x$ with probability ${p}_{x}$. Furthermore, let $T$ be the random variable that denotes the value of variable $t$ upon termination of any instance of the for loop that starts at line 9, whether it terminates in rejection or acceptance. Then,
Proof. This is very similar to the proof of Theorem 1, so let us concentrate on the differences. First note that it follows from Equation (16) and the fact that $|{p}_{X}\left(t\right)-{p}_{X}|\le {2}^{-t}$ that
and
Therefore, whenever
X is accepted at line 13 (resp. rejected at line 15), it would also have been accepted (resp. rejected) in the original von Neumann algorithm, which shows sampling correctness. Conversely, whenever we reach a value of
$t\ge {t}_{0}$ such that $\left(U\left[t\right]+{2}^{-t}\right)C{q}_{X}>{p}_{X}\left(t\right)-{2}^{-t}$ and $U\left[t\right]C{q}_{X}\le {p}_{X}\left(t\right)+{2}^{-t}$, we do not have enough information to decide whether to accept or reject, and therefore we reach line 17, causing $t$ to increase. This happens precisely when
To obtain an upper bound on
$\mathbf{E}\left(T\right)$, we mimic the proof of Theorem 1, but in the discrete rather than continuous regime. In particular, for any
$x\in \mathbb{X}$ and
$t\ge {t}_{0}$,
To understand Equation (
17), think of
${2}^{t}U\left[t\right]$ as an integer chosen randomly and uniformly between 0 and ${2}^{t}-1$. The probability that it falls within some real interval $(a,b]$ for $a<b$ is equal to ${2}^{-t}$ times the number of integers between 0 and ${2}^{t}-1$ in that interval, the latter being upper-bounded by the number of unrestricted integers in that interval, which is at most $b-a+1$.
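The counting argument can be checked mechanically: the number of integers in $(a,b]$ is $\lfloor b\rfloor -\lfloor a\rfloor$, which never exceeds $b-a+1$. A small sketch (names ours):

```python
import math

def integers_in(a: float, b: float) -> int:
    """Number of integers n with a < n <= b (the half-open interval (a, b])."""
    return math.floor(b) - math.floor(a)
```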
Noting how similar Equation (
18) is to the corresponding Equation (
5) in the analysis of Algorithm 2, it is not surprising that the expected value of
T will be similar as well. Indeed, continuing as in the proof of Theorem 1, without belabouring the details,
We conclude that $\mathbf{E}\left(T\right)\le {t}_{0}+3+{2}^{-{t}_{0}}$ without condition since Equation (
19) does not depend on
x. □
The similarity between Theorems 1 and 2 means that there is no significant additional cost in the amount of communication required to achieve remote sampling in the random bit model, i.e., if we consider a realistic scenario in which the only source of randomness comes from i.i.d. unbiased bits, compared to an unrealistic scenario in which continuous random variables can be drawn. For instance, the reasoning that led to Equation (
7) applies
mutatis mutandis to conclude that the expected number
Z of bits that needs to be communicated to achieve remote sampling in the random bit model is
where
$\delta $ and
$\gamma $ have the same meaning as in
Section 2.
If we use the random bit approach for the general simulation of quantum entanglement studied in
Section 3, choosing
${t}_{0}=\lceil \lg n\rceil$ again, Equation (
14) becomes
which reduces to the identical
in case all the
${n}_{i}$’s and
${d}_{i}$’s are upper-bounded by some constant
$\xi $, which again is on the order of
${m}^{2}$.
In addition to communication complexity, another natural efficiency measure in the random bit model concerns the
expected number of random bits that needs to be drawn in order to achieve sampling. Randomness is needed in lines 3, 6 and 10 of Algorithm 4. A single random bit is required each time lines 6 and 10 are entered, but line 3 calls for sampling
X according to distribution
q. Let
${V}_{i}$ be the random variable that represents the number of random bits needed on the
$i$th passage through line 3. For this purpose, we use the algorithm introduced by Donald Knuth and Andrew Chi-Chih Yao [
21], which enables sampling within any finite discrete probability distribution in the random bit model by using an expectation of no more than two random bits in addition to the Shannon binary entropy of the distribution. Since each such sampling is independent from the others, it follows that
${V}_{i}$ is independently and identically distributed as a random variable
$V$ such that
$$\mathbf{E}\left(V\right)\le H\left(q\right)+2\,,$$
where
$H\left(q\right)$ denotes the binary entropy of
$q$, which is never more than the base-two logarithm of the number of atoms in the distribution, here
n.
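The Knuth–Yao bound quoted above is easy to evaluate; here is a small sketch (function names ours), assuming $q$ is given as a probability vector:

```python
import math

def binary_entropy(q) -> float:
    """Shannon entropy H(q) in bits of a probability vector q."""
    return -sum(p * math.log2(p) for p in q if p > 0)

def knuth_yao_bound(q) -> float:
    """Knuth-Yao: sampling q exactly in the random bit model needs an
    expected number of unbiased random bits E(V) <= H(q) + 2."""
    return binary_entropy(q) + 2
```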
Let
R be the random variable that represents the number of random bits drawn when running Algorithm 4. Reusing the notation of
Section 2, let
S be the random variable that represents the number of times variable
X is sampled at line 3 and let
${T}_{i}$ be the random variable that represents the value of variable
T upon termination of the
ith instance of the
for loop starting at line 9, for
$i\in \{1,\dots ,S\}$. The random variables
${T}_{i}$ are independently and identically distributed as the random variable
T in Theorem 2 and the expected value of
S is
C. Since one new random bit is generated precisely each time variable
$t$ is increased by 1 in any pass through either for loop (line 5 or 9), we simply have
By virtue of Equations (
3) and (
21), Theorem 2, and using Wald’s identity again, we conclude that:
Taking
${t}_{0}=\lceil \lg n\rceil$ again, remote sampling can be completed using an expected number of random bits in $O(\lg n)$, with a hidden multiplicative constant no larger than 6. The hidden constant can be reduced arbitrarily close to 2 by taking ${t}_{0}=\lceil \lg n\rceil +a$ for an arbitrarily large constant
a. Whenever target distribution
p has close to full entropy, this is only twice the optimal number of random bits that would be required according to the Knuth–Yao lower bound [
21] in the usual case when full knowledge of
p is available in a central place rather than having to perform remote sampling. Note, however, that, if our primary consideration is to optimize communication for the classical simulation of entanglement, as in
Section 3, choosing
${t}_{0}=\lceil \lg n\rceil -a$ would be a better idea because the summation in the left-hand term of Equation (
13) dominates that of the right-hand term. For this inconsequential optimization,
a does not have to be a constant, but it should not exceed
$\lg \left(\xi m\right)$, where
$\xi$ is our usual upper bound on the number of possible outcomes for each participant (if it exists), lest the right-hand term of Equation (13) overtake the left-hand term. Provided
$\xi $ exists, the expected number of random bits that needs to be drawn is linear in the number of participants.
The need for continuous random variables was not the only unrealistic assumption in Algorithms 1–3. We had also assumed implicitly that custodians know their private parameters precisely (and that the leader knows exactly each entry of density matrix
$\rho $ in
Section 3). This could be reasonable in some situations, but it could also be that some of those parameters are transcendental numbers or the result of applying transcendental functions to other parameters, for example
$\cos (\pi /8)$. More interestingly, it could be that the actual parameters are spoon-fed to the custodians by
examiners, who want to test the custodians’ ability to respond appropriately to unpredictable inputs. However, all we need is for the custodians to be able to obtain their own parameters with arbitrary precision, so that they can provide that information to the leader upon request. For example, if a parameter is
$\pi /4$ and the leader requests a
kbit approximation of that parameter, the custodian can compute some number
$\widehat{x}$ such that
$|\widehat{x}-\pi /4|\le {2}^{-k}$ and provide it to the leader. For communication efficiency purposes, it is best if
$\widehat{x}$ itself requires only
$k$ bits to be communicated, or perhaps one more (for the sign) in case the parameter is constrained to be between $-1$ and 1. It is even better if the custodian can supply a $k$-bit truncation because this enables the possibility to upgrade it to a $(k+1)$-bit truncation by the transmission of a single bit upon request from the leader, as we have done explicitly for the simulation of entanglement at line 19 of Algorithm 3 in
Section 3.
Nevertheless, it may be impossible for the custodians to compute truncations of their own parameters in some cases, even when they can compute arbitrarily precise approximations. Consider for instance a situation in which one parameter held by a custodian is
$x=\cos \theta$ for some angle
$\theta $ for which he can only obtain arbitrarily precise truncations. Unbeknownst to the custodian,
$\theta =\pi /3$ and therefore
$x=1/2$. No matter how many bits the custodian obtains in the truncation of
$\theta $, however, he can never decide whether
$\theta <\pi /3$ or
$\theta \ge \pi /3$. In the first case, $x<1/2$ and therefore the 1-bit truncation of $x$ should be 0, whereas in the second (correct) case, $x\ge 1/2$ and therefore the 1-bit truncation of $x$ is $1/2$ (or $0.1$ in binary). Thus, the custodian will be unable to respond if the leader asks him for a 1-bit truncation of
x, no matter how much time he spends on the task. In this example, by contrast, the custodian can supply the leader with arbitrarily precise
approximations of
x from appropriate approximations of
$\theta $. Should a situation like this occur, for instance in the simulation of entanglement, there would be two solutions. The first one is for the custodian to transmit increasingly precise truncations of
$\theta $ to the leader and let
him compute the cosine of it. This approach is only valid if it is known at the outset that the custodian’s parameter will be of that form, which was essentially the solution taken in our earlier work on the simulation of the quantum
$m$-partite GHZ distribution [
1]. The more general solution is to modify the protocol and declare that custodians can send arbitrary approximations to the leader rather than truncations. The consequence on Algorithm 3 is that line 19 would become much more expensive since each custodian
$i$ would have to transmit a fresh one-bit-better approximation for the real and imaginary parts of the
${d}_{i}^{2}$ entries defining matrix
${M}_{i{X}_{i}}$. As a result, efficiency parameter
$\gamma \left(t\right)$ in Equation (
6) would become
which should be compared with the much smaller (constant) value of
$\gamma $ given in Equation (
12) when truncations of the parameters are available. Nevertheless, taking
${t}_{0}=\lceil \lg n\rceil$ again, this increase in
$\gamma \left(t\right)$ would make no significant difference in the total number of bits transmitted for the simulation of entanglement because it would increase only the right-hand term in Equations (14) and (20), but not enough to make it dominate the left-hand term. All counted, we still have an expected number of bits transmitted that is upper-bounded by
$(3{\xi}^{3}\lg \xi ){m}^{2}+O(m\log m)$ whenever all the
${n}_{i}$’s and
${d}_{i}$’s are upper-bounded by some constant
$\xi $, which again is on the order of
${m}^{2}$.
5. Discussion and Open Problems
We have introduced and studied the general problem of sampling a discrete probability distribution characterized by parameters that are scattered in remote locations. Our main goal was to minimize the amount of communication required to solve this conundrum. We considered both the unrealistic model in which arithmetic can be carried out with infinite precision and continuous random variables can be sampled exactly, and the more reasonable
random bit model studied by Knuth and Yao [
21], in which all arithmetic is carried out with finite precision and the only source of randomness comes from independent tosses of a fair coin. For a small increase in the amount of communication, we can fine-tune our technique to require only twice the number of random bits that would be provably required in the standard context in which all the parameters defining the probability distribution would be available in a single location, provided the entropy of the distribution is close to maximal.
When our framework is applied to the problem of simulating quantum entanglement with classical communication in its essentially most general form, we find that an expected number of
$O\left({m}^{2}\right)$ bits of communication suffices when there are
m participants and each one of them (in the simulated world) is given an arbitrary quantum system of bounded dimension and asked to perform an arbitrary generalized measurement (
povm) with a bounded number of possible outcomes. This result generalizes and supersedes the best approach previously known in the context of multiparty entanglement, which was for the simulation of the
$m$-partite GHZ state under projective measurements [
1]. Our technique also applies without the boundedness condition on the dimension of individual systems and the number of possible outcomes per party, provided those parameters remain finite.
It would be preferable if we could eliminate the dependency of the expected number of bits of communication on the number of possible measurement outcomes. Is perfect simulation possible at all when that number is infinite, regardless of communication efficiency, a scenario in which our approach cannot be applied? In the bipartite case, Serge Massar, Dave Bacon, Nicolas Cerf, and Richard Cleve proved that classical communication can serve to simulate the effect of arbitrary measurements on maximally entangled states in a way that does not require any bounds on the number of possible outcomes [
6]. More specifically, they showed that arbitrary
povms on systems of
n Bell states can be simulated with an expectation of
$O\left(n{2}^{n}\right)$ bits of communication. However, their approach exploits the equivalence of this problem with a variant known as
classical teleportation [
5], in which one party has full knowledge of the quantum state and the other has full knowledge of the measurement to be applied to that state. Unfortunately, the equivalence between those two problems breaks down in a multipartite scenario and there is no obvious way to extend the approach. We leave as an open question the possibility of a simulation protocol in which the expected amount of communication would only depend on the number of participants and the dimension of their simulated quantum systems.
Our work leaves several additional important questions open. Recall that our approach provides a bound on the expected communication required to perform exact remote sampling. The most challenging open question is to determine if it is possible to achieve the same goal with a guaranteed bounded amount of communication
in the worst case. If possible, this would certainly require the participants to share ahead of time the realization of random variables, possibly even continuous ones. Furthermore, a radically different approach would be needed since we had based ours on the von Neumann rejection algorithm, which has intrinsically no worstcase upper bound on its performance. This task may seem hopeless, but it has been shown to be possible for special cases of entanglement simulation in which the remote parameters are taken from a continuum of possibilities [
3,
8], despite earlier “proofs” that it is impossible [
2].
A much easier task would be to consider other communication models, in which communication is no longer restricted to being between a single leader and various custodians. Would there be an advantage in communicating through the edges of a complete graph? Obviously, the maximum possible savings in terms of communication would be a factor of 2 since any time one participant wants to send a bit to some other participant, he can do so via the leader. However, if we care not only about the total number of bits communicated, but also the
time it takes to complete the protocol in a realistic model in which each party is limited to sending and receiving a fixed number of bits at any given time step, parallelizing communication could become valuable. We had already shown in Ref. [
1] that a parallel model of communication can dramatically improve the time needed to sample the
$m$-partite GHZ distribution. Can this approach be generalized to arbitrary remote sampling settings?
Finally, we would like to see applications for remote sampling outside the realm of quantum information science.