On the possibility of classical client blind quantum computing

We define the functionality of a delegated pseudo-secret random qubit generator (PSRQG), where a classical client can instruct the preparation of a sequence of random qubits at some distant party. Their classical description is (computationally) unknown to any other party (including the distant party preparing them) but known to the client. We emphasize the unique feature that no quantum communication is required to implement PSRQG. This enables classical clients to perform a class of quantum communication protocols with only a public classical channel to a quantum server. A key such example is delegated universal blind quantum computing. Using our functionality one could achieve purely classical-client, computationally secure, verifiable delegated universal quantum computing (also referred to as verifiable blind quantum computation). We give a concrete protocol (QFactory) implementing PSRQG, using the Learning-With-Errors problem to construct a trapdoor one-way function with certain desired properties (quantum-safe, two-regular, collision resistant). We then prove security in the Quantum-Honest-But-Curious setting and briefly discuss the extension to the malicious case.


Introduction and Related Works
The recent interest in quantum technologies has brought forward a vision of a quantum internet [ELG + 17] that could implement a collection of known protocols for enhanced security or communication complexity (see a recent review in [BS16]). On the other hand, the rapid development of quantum hardware has increased the computational capacity of quantum servers that could be linked in such a communicating network. This has raised the importance of privacy-preserving functionalities, such as the research developed around quantum computing on encrypted data (see a recent review in [Fit17]).
However, there exist some challenges in widely adopting the above vision: a reliable long-distance quantum communication network connecting all the interested parties might be very costly. Moreover, some of the currently most promising quantum computation devices (e.g. superconducting devices such as those developed by IBM, Google, etc.) do not yet offer the possibility of a "networked" architecture, i.e. they cannot receive and send quantum states.
For this reason, there has been extensive research focusing on the practicality aspect of quantum delegated computation protocols (and related functionalities). One direction is to reduce the required communications by exploiting classical fully-homomorphic-encryption schemes [BJ15,DSS16,ADSS17], or by defining their direct quantum analogues [Lia15, OTF15, TKO + 16, LC17]. Different encodings, on the client side, could also reduce the communication [MPDF13,GMMR13]. However, in all these approaches the client still requires some quantum capabilities. While no-go results indicate restrictions on which of the above properties are jointly achievable for classical clients [AGKP14,YPDF14,ACGK17,NS17], completing this picture remains an open problem. Another direction is to consider fully-classical client protocols, compatible with the no-go results, that can therefore achieve more restricted levels of security. The first such procedure achieving statistical security (but not for universal computations) was proposed in [MDMF17]. Focusing on post-quantum computational security a universal blind delegated protocol was proposed in [Mah17] and a verifiable one in [Mah18].
Our own independent work presented here is also based on post-quantum computational security; it appeared (as the preprint [CCKW18]) between the two works mentioned above, and takes a different approach, one more natural to measurement-based quantum computing protocols. The approach we take is modular. We replace the need for a (particular) quantum communication channel with a computationally (but post-quantum) secure generation of secret and random qubits. This can be used by classical clients to achieve blind quantum computing and a number of other applications.

Our Contributions
1. We define a classical client/quantum server delegated ideal functionality of pseudo-secret random qubit generator (PSRQG), in Section 3. PSRQG can replace the need for a quantum channel between parties in certain quantum communication protocols, with the trade-off that the protocols become computationally secure (against quantum adversaries).
2. We give a basic protocol (QFactory) that achieves this functionality, given a trapdoor one-way function that is quantum-safe, two-regular and collision resistant, in Section 4, and prove its correctness.
3. We prove the security of QFactory against a Quantum-Honest-But-Curious server or against any malicious third party by proving that the classical description of the generated qubits is a hard-core function (following a reduction similar to that of the Goldreich-Levin theorem), in Section 5.
4. While our previous results do not depend on the specific function used, the existence of such specific functions (with all desired properties) makes the PSRQG a practical primitive that can be employed as described in this paper. In Section 6, we first give methods for obtaining two-regular trapdoor one-way functions with extra properties (collision resistant or second preimage resistant) assuming the existence of simpler trapdoor one-way functions (permutation trapdoor or homomorphic trapdoor functions). We use reductions to prove that the resulting functions maintain all the properties required. Furthermore, we give in Subsection 6.3 an explicit family of functions that respects all the required properties based on the security of the Learning-With-Errors problem, as well as a possible instantiation of the parameters. Thus, this function is also quantum-safe, and thus directly applicable to our setting. Note that other functions may also be used, such as the one in [BCM + 18] or functions based on the Niederreiter cryptosystem and the construction in [FGK + 10].

Applications
The PSRQG functionality, viewed as a resource, has a wide range of applications. Here we give a general overview of the applications, while for details on how to use the exact output of the PSRQG obtained in this paper in specific protocols we refer the reader to Appendix A. PSRQG enables fully-classical parties to participate in many quantum protocols using only public classical channels and a single (potentially malicious) quantum server.
The first type of applications concerns a large class of delegated quantum computation protocols, including blind quantum computation and verifiable blind quantum computation. These protocols are of great importance, enabling information-theoretically secure (and verifiable) access to a quantum cloud. However, the requirement for quantum communication limits their domain of applicability. This limitation is removed by replacing the off-line preparation stage with our QFactory protocol. Concretely, we can use QFactory to implement the blind quantum computation protocol of [BFK09], as well as the verifiable blind quantum computation protocols (e.g. those in [FK12,Bro15,FKD17]), in order to achieve classical-client secure and verifiable access to a quantum cloud.
In all these cases, the cost of using PSRQG is that the security becomes post-quantum computational (from information-theoretic). However, the possibility of information-theoretically secure classical client blind quantum computation seems highly unlikely due to strong complexity-theoretic arguments given in [ACGK17] and therefore this is the best we could hope for.
Finally, we note that in order to use PSRQG as a subroutine in a larger protocol, we need to address the issue of composition and formulate the functionality in the universal composability framework [Unr10]. This could be done as in [DK16] (where quantum communication was required, using a quantum version of SRQG), but the full details are outside of the scope of this paper.

Overview of the Protocol and Proof
The general idea is that a classical client gives instructions to a quantum server to perform certain actions (quantum computation). Those actions lead to the server having as output a single qubit, which is randomly chosen from within a set of possible states of the form |0⟩ + e^{irπ/4}|1⟩, where r ∈ {0, · · · , 7}. The randomness of the output qubit is due to the (fundamental) randomness of the quantum measurements that are part of the instructions that the client gives. Moreover, the server cannot guess the value of r any better than if he had just received that state directly from the client (up to negligible probability). This is possible because the instructed quantum computation is generically a computation that is hard (i) to simulate classically and (ii) to reproduce quantumly, because it is unlikely (exponentially in the number of measurements) that by running the same instructions the server obtains the exact same measurement outcomes twice. On the other hand, we wish the client to know the classical description and thus the value of r. To achieve this task, the instructions/quantum computation the client uses are based on a family of trapdoor one-way functions with certain extra properties 1 . Such functions are hard to invert (e.g. for the server) unless someone (the client in our case) has some extra "trapdoor" information t_k. This extra information makes the quantum computation easy to reproduce classically for the client, who can recover the value r, while it remains hard to reproduce classically for the server. Sending random qubits of the above type is exactly what is required from the client in most of the protocols and applications given earlier, while with simple modifications our protocol could achieve other similar sets of states.
Our QFactory protocol can be described, at a high level, in the following steps. Preparation. The client randomly selects a function f_k from a family of trapdoor one-way, quantum-safe, two-regular and collision-resistant functions. The choice of f_k is public (the server knows it), but the trapdoor information t_k needed to invert the function is known only to the client. Stage 1: Preimages Superposition. The client instructs the server (i) to apply Hadamard(s) on the control register, (ii) to apply U_{f_k} on the target register, i.e. to obtain Σ_x |x⟩ ⊗ |f_k(x)⟩, and (iii) to measure the target register in the computational basis, in order to obtain a value y. This collapses his state to (|x⟩ + |x'⟩) ⊗ |y⟩, where x, x' are the unique two preimages of y.
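Stage 1 can be sketched on a toy classical statevector simulation. The function below, f(x) = x with its first bit dropped, is a stand-in two-regular function chosen only for illustration (it is trivially invertible, so nothing here is hiding); the sketch only shows how measuring the target register collapses the control register onto the two preimages of the observed image y.

```python
# Toy simulation of QFactory Stage 1 (preimage superposition).
# f is an illustrative 2-regular function, NOT one-way or trapdoor.
import itertools
import math
import random

def f(x):                 # x is an n-bit tuple; the two preimages of any
    return x[1:]          # image y are (0,)+y and (1,)+y

def stage1(n, rng):
    # |psi> = sum_x |x>|f(x)> / sqrt(2^n), after H^n on control and U_f
    N = 2 ** n
    amps = {(bits, f(bits)): 1 / math.sqrt(N)
            for bits in itertools.product((0, 1), repeat=n)}
    # measure the target register in the computational basis
    probs = {}
    for (x, y), a in amps.items():
        probs[y] = probs.get(y, 0.0) + abs(a) ** 2
    y = rng.choices(list(probs), weights=list(probs.values()))[0]
    # post-measurement state of the control register (renormalised)
    post = {x: a for (x, yy), a in amps.items() if yy == y}
    norm = math.sqrt(sum(abs(a) ** 2 for a in post.values()))
    return y, {x: a / norm for x, a in post.items()}

rng = random.Random(0)
y, state = stage1(4, rng)
preimages = sorted(state)
assert len(preimages) == 2                      # exactly two preimages of y
assert all(f(x) == y for x in preimages)        # state is (|x> + |x'>)/sqrt(2)
```

Each image y occurs with probability 2/2^n, matching the remark below that every image appears with the same probability.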
Remarks. First we note that each image y appears with the same probability (therefore, obtaining the same y twice happens with negligible probability). We now consider the first register |x⟩ + |x'⟩ = |x_1 · · · x_n⟩ + |x'_1 · · · x'_n⟩, where the subscripts denote the different bits of the corresponding preimages x and x'. We rewrite this state as ⊗_{i∈Ḡ} |x_i⟩ ⊗ (⊗_{j∈G} |x_j⟩ + ⊗_{j∈G} |x'_j⟩), where Ḡ is the set of bit positions where x and x' are identical, G is the set of bit positions where the preimages differ, and we have suitably changed the order in which the qubits are written. It is now evident that the state at the end of Stage 1 is a tensor product of isolated |0⟩ and |1⟩ states, and a Greenberger-Horne-Zeilinger (GHZ) state with random X's applied. The crucial observation is that the connectivity (which qubit belongs to the GHZ state and which does not) depends on the XOR of the two preimages x ⊕ x' and is computationally impossible to determine, with non-negligible advantage, without the trapdoor information t_k. Stage 2: Squeezing. The client instructs the server to measure each qubit i (except the output) in a random basis {|0⟩ ± e^{iα_i π/4}|1⟩} and return the measurement outcome b_i. The output qubit is of the form |+_θ⟩ = |0⟩ + e^{iθ}|1⟩, where θ is determined by the α_i's, the b_i's and the two preimages (see [CCKW18]). Intuitively, measuring qubits that are not connected has no effect on the output, while measuring qubits within the GHZ part rotates the phase of the output qubit (by an angle (−(−1)^{x_i} α_i + 4b_i)π/4).
Security. The protocol is secure if we can prove that the server (or other third parties) cannot guess (obtain noticeable advantage in guessing) the classical description of the state, i.e. the value of θ. We consider a quantum-honest-but-curious server (see formal definition below), which means that he essentially follows the protocol, and the security reduces to proving that the server cannot use his classical information to obtain any advantage in guessing the classical description of the (honest) quantum output. The server does not know the two preimages x, x' and needs to guess θ from the value of the image y. A similar (simpler) result that we use is the Goldreich-Levin theorem [GL89a], which (informally) states that the inner product of the preimage of a one-way function with a random vector, taken modulo 2, is indistinguishable from a random bit. Our case is similar, since Eq. (1) has the form of an inner product of the XOR of two preimages with a random vector, taken modulo 8. We prove that if a computationally bounded server could obtain non-trivial advantage in guessing θ, then he could also break the property of "second preimage resistance" which we required of our function f_k. The function. Our protocol relies on using functions that have a number of properties (one-way, trapdoor, two-regular, collision resistant (see Remark 3.1), quantum-safe). Any function satisfying those conditions is suitable for our protocol. While at first thought some of these appear hard to satisfy jointly (e.g. two-regularity and collision resistance), we give two constructions that achieve those properties from simpler functions: one from an injective, homomorphic trapdoor one-way function and one from a bijective trapdoor one-way function. Both constructions define a new function whose domain is extended by one bit, and the value of that bit "decides" whether one uses the initial basic function or not.
We then use a (slight) modification of the first construction and the trapdoor one-way function based on Learning-with-Errors of [MP12] with a suitable choice of parameters, and obtain a function that has all the desired properties. In a nutshell, the idea is to use the construction of [MP12] to create an injective function g(s, e) that is hard to invert without the secret trapdoor, and then to sample from a Gaussian distribution a small error term e_0 ∈ Z_q^m as well as a (uniformly) random s_0 ∈ Z_q^n. According to [MP12], it should be impossible to efficiently recover s_0 and e_0 from b_0 := g(s_0, e_0). Then, to create the function f(s, e, c), we define f(s, e, 0) = g(s, e) and f(s, e, 1) = g(s, e) + b_0, and we require e to have infinity norm smaller than a parameter µ. Because the function is "nearly homomorphic", we have f(s, e, 1) = f(s + s_0, e + e_0, 0), so this function intuitively has two preimages. However, e + e_0 may not be small enough to stay in the input domain, so some y may have only one preimage. What we show is that if we sample e_0 "small enough" (at least as small as O(µ/m)), then the probability of having two preimages is at least constant. Moreover, we prove that this modification does not break the security of g, and leads to a function f that is both one-way and collision resistant under the LWE assumption, which reduces to SIVP_γ with γ = poly(n).
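The "nearly homomorphic" collision f(s, e, 1) = f(s + s_0, e + e_0, 0) can be checked numerically on a toy instance. The sketch below uses g(s, e) = As + e mod q with uniformly random noise instead of the Gaussian sampling and secure parameters of [MP12], so it illustrates only the algebra of the construction, not its security.

```python
# Toy sketch of the two-regular construction f(s,e,c) = g(s,e) + c*b0,
# with g(s,e) = A s + e mod q.  Parameters are illustrative, NOT secure.
import random

q, n, m, mu = 97, 4, 8, 10                 # toy parameters
rng = random.Random(1)
A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]

def g(s, e):                               # injective for suitable parameters
    return tuple((sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q
                 for i in range(m))

def add(u, v):
    return tuple((a + b) % q for a, b in zip(u, v))

s0 = [rng.randrange(q) for _ in range(n)]
e0 = [rng.randrange(-1, 2) for _ in range(m)]    # "small": |e0|_inf <= 1
b0 = g(s0, e0)

def fk(s, e, c):                           # domain: |e|_inf <= mu, c in {0,1}
    return add(g(s, e), b0) if c else g(s, e)

# f(s,e,1) = A s + e + (A s0 + e0) = A(s+s0) + (e+e0) = f(s+s0, e+e0, 0),
# provided e + e0 stays inside the norm bound of the domain
s = [rng.randrange(q) for _ in range(n)]
e = [rng.randrange(-mu + 1, mu) for _ in range(m)]   # leave room for e0
s1 = [(a + b) % q for a, b in zip(s, s0)]
e1 = [a + b for a, b in zip(e, e0)]
assert fk(s, e, 1) == fk(s1, e1, 0)        # the second preimage
assert max(abs(x) for x in e1) <= mu       # it stays in the input domain
```

This is exactly why e_0 must be sampled small relative to µ: if e + e_0 left the norm bound, the colliding input would fall outside the domain and y would have a single preimage.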

Classical Definitions
We are considering protocols secure against quantum adversaries, so we assume that all the properties of our functions hold against a general Quantum Polynomial Time (QPT) adversary, rather than the usual Probabilistic Polynomial Time (PPT) one. We will denote by D the domain of the functions, while D(n) is the subset of strings of length n.
Definition 2.1 (Quantum-Safe (informal)). A protocol/function is quantum-safe (also known as post-quantum secure), if all its properties remain valid when the adversaries are QPT (instead of PPT).
The following definitions are for PPT adversaries, however in this paper we will generally use quantum-safe versions of those definitions and thus security is guaranteed against QPT adversaries.
Definition 2.2 (One-way). A family of functions {f_k : D → R}_{k∈K} is one-way if:
• There exists a PPT algorithm that can compute f_k(x) for any function index k, outcome of the PPT parameter-generation algorithm Gen, and any input x ∈ D;
• Any PPT algorithm A can invert f_k with at most negligible probability over the choice of k:
Pr_{k ← Gen(1^n), x ← D(n), rc} [ A(k, f_k(x); rc) ∈ f_k^{-1}(f_k(x)) ] ≤ negl(n),
where rc represents the randomness used by A.
Definition 2.3 (Second preimage resistant). A family of functions {f_k : D → R}_{k∈K} is second preimage resistant if:
• There exists a PPT algorithm that can compute f_k(x) for any function index k, outcome of the PPT parameter-generation algorithm Gen, and any input x ∈ D;
• Any PPT algorithm A, given an input x, can find a different input x' such that f_k(x) = f_k(x') with at most negligible probability over the choice of k:
Pr_{k ← Gen(1^n), x ← D(n), rc} [ A(k, x; rc) = x' such that x' ≠ x and f_k(x) = f_k(x') ] ≤ negl(n),
where rc is the randomness of A;
Definition 2.4 (Collision resistant). A family of functions {f_k : D → R}_{k∈K} is collision resistant if:
• There exists a PPT algorithm that can compute f_k(x) for any function index k, outcome of the PPT parameter-generation algorithm Gen, and any input x ∈ D;
• Any PPT algorithm A can find two inputs x ≠ x' such that f_k(x) = f_k(x') with at most negligible probability over the choice of k:
Pr_{k ← Gen(1^n), rc} [ A(k; rc) = (x, x') such that x ≠ x' and f_k(x) = f_k(x') ] ≤ negl(n),
where rc is the randomness of A (rc will be omitted from now on).
Theorem 2.1. [KL14] Any function that is collision resistant is also second preimage resistant.
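The intuition behind Theorem 2.1 is the contrapositive: any adversary that finds second preimages is, in particular, a collision finder. This can be sketched as code, using a toy (non-cryptographic) two-to-one function and a hypothetical second-preimage "adversary" that simply knows the pairing.

```python
# Contrapositive of Theorem 2.1: a second-preimage finder yields a
# collision finder.  f and the adversary are toy placeholders.
import random

def f(x):                       # toy 2-to-1 function on integers 0..15
    return x % 8

def second_preimage_adv(x):     # hypothetical second-preimage break
    return (x + 8) % 16         # the other preimage of f(x)

def collision_finder(rng):
    x = rng.randrange(16)       # pick a random input ourselves...
    x2 = second_preimage_adv(x) # ...and ask the adversary for its partner
    return x, x2                # (x, x2) is a collision for f

rng = random.Random(0)
x, x2 = collision_finder(rng)
assert x != x2 and f(x) == f(x2)
```

So if the family is collision resistant, no efficient second-preimage adversary with non-negligible success can exist, which is the statement of the theorem.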
Definition 2.6 (Trapdoor Function). A family of functions {f_k : D → R} is a trapdoor function if: • There exists a PPT algorithm Gen which on input 1^n outputs (k, t_k), where k represents the index of the function; • {f_k : D → R}_{k∈K} is a family of one-way functions; • There exists a PPT algorithm Inv which, on input t_k (called the trapdoor information) output by Gen(1^n) and y = f_k(x), can invert y (by returning all preimages of y 2 ) with non-negligible probability over the choice of (k, t_k) and a uniform choice of x.
Definition 2.7 (Hard-core Predicate). A function hc : D → {0, 1} is a hard-core predicate for a function f if: • There exists a QPT algorithm that for any input x can compute hc(x); • Any PPT algorithm A, when given f(x), can compute hc(x) with probability at most negligibly better than 1/2:
Pr_{x ← D(n), rc} [ A(f(x); rc) = hc(x) ] ≤ 1/2 + negl(n),
where rc represents the randomness used by A;
Definition 2.8 (Hard-core Function). A function h : D → E is a hard-core function for a function f if: • There exists a QPT algorithm that can compute h(x) for any input x; • Any PPT algorithm A, when given f(x), can distinguish between h(x) and a uniformly distributed element of E with at most negligible advantage:
| Pr_{x ← D(n)} [ A(f(x), h(x)) = 1 ] − Pr_{x ← D(n), u ← E} [ A(f(x), u) = 1 ] | ≤ negl(n).
The intuition behind this definition is that, as far as a QPT adversary is concerned, the hard-core function appears indistinguishable from a randomly chosen element of the same length.
Theorem 2.2 (Goldreich-Levin [GL89b]). From any one-way function f : D → R we can construct another one-way function g : D × D → R × D together with a hard-core predicate for g. Concretely, if f is a one-way function, then:
• g(x, r) = (f(x), r) is a one-way function, where |x| = |r|;
• hc(x, r) = ⟨x, r⟩ mod 2 is a hard-core predicate for g.
Informally, the Goldreich-Levin theorem shows that when f is a one-way function, f(x) hides the XOR of a random subset of the bits of x from any PPT adversary 3 . Theorem 2.3 (Vazirani-Vazirani XOR-Condition Theorem [VV85]). A function h is a hard-core function for f if and only if the XOR of any non-empty subset of h's bits is a hard-core predicate for f.
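The Goldreich-Levin predicate itself is trivial to compute from x; the hardness claim is only about predicting it from (f(x), r). A small sketch of the predicate and its key statistical property:

```python
# The Goldreich-Levin predicate hc(x, r) = <x, r> mod 2 on bit lists.
# Easy to compute from x; conjectured hard to predict from (f(x), r).
def hc(x, r):
    assert len(x) == len(r)
    return sum(a & b for a, b in zip(x, r)) % 2

x = [1, 0, 1, 1]
r = [1, 1, 0, 1]
assert hc(x, r) == (1*1 + 0*1 + 1*0 + 1*1) % 2 == 0

# For any nonzero x, hc(x, r) is unbiased over a uniformly random r,
# which is why a random-looking predicate is even possible here.
vals = [hc(x, [(j >> i) & 1 for i in range(4)]) for j in range(16)]
assert vals.count(0) == vals.count(1) == 8
```

The paper's hard-core quantity (Eq. (1)) replaces this single bit by an inner product of x ⊕ x' with a random vector taken modulo 8, which is why the security proof in Section 5 needs a generalisation of the GL argument rather than the theorem as stated.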
The Learning with Errors (LWE) problem can be described in the following way: Definition 2.9 (LWE problem (informal)). Given s, an n-dimensional vector with elements in Z_q, the task is to distinguish between a set of polynomially many noisy random linear combinations of the elements of s and a set of polynomially many random numbers from Z_q.
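A toy sampler makes Definition 2.9 concrete: an LWE sample is a pair (a, ⟨a, s⟩ + e mod q) with small noise e, and the challenge is telling a list of such pairs from uniformly random pairs. The parameters and the ±1 noise below are for illustration only and carry no security.

```python
# Toy LWE samples versus uniform samples (Definition 2.9, informal).
import random

def lwe_samples(s, num, q, rng):
    out = []
    for _ in range(num):
        a = [rng.randrange(q) for _ in s]
        e = rng.choice([-1, 0, 1])          # small noise (toy, not Gaussian)
        out.append((a, (sum(ai * si for ai, si in zip(a, s)) + e) % q))
    return out

def uniform_samples(n, num, q, rng):
    return [([rng.randrange(q) for _ in range(n)], rng.randrange(q))
            for _ in range(num)]

rng = random.Random(0)
q, n = 101, 5
s = [rng.randrange(q) for _ in range(n)]
real = lwe_samples(s, 20, q, rng)
fake = uniform_samples(n, 20, q, rng)

# Knowing s, the noise is visible: the centered residue b - <a,s> is tiny.
def centered_residue(a, b):
    return ((b - sum(ai * si for ai, si in zip(a, s)) + q // 2) % q) - q // 2

assert all(abs(centered_residue(a, b)) <= 1 for a, b in real)
```

Without s, the pairs in `real` and `fake` look alike; the LWE assumption is precisely that no efficient (even quantum) algorithm can tell them apart for suitable parameters.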
Regev [Reg05] and Peikert [Pei09] have given quantum and classical reductions from the average case of LWE to problems such as approximating the length of the shortest vector or the shortest independent vectors problem in the worst case, problems which are conjectured to be hard even for quantum computers.
Theorem 2.4 (Reduction for LWE, from [Reg05, Theorem 1.1]). Let n, q be integers and α ∈ (0, 1) be such that αq > 2√n. If there exists an efficient algorithm that solves LWE_{q,Ψ_α}, then there exists an efficient quantum algorithm that approximates the decision version of the shortest vector problem GapSVP and the shortest independent vectors problem SIVP to within Õ(n/α) in the worst case.

Quantum definitions
We assume basic familiarity with quantum computing notions. For any function f : A → B that can be described by a polynomially-sized classical circuit, we define the controlled-unitary U_f as acting in the following way:
U_f |x⟩|y⟩ = |x⟩|y ⊕ f(x)⟩,
where we name the first register |x⟩ the control and the second register |y⟩ the target. Given the classical description of this function f, we can always define a QPT algorithm that efficiently implements U_f. The protocol we want to implement (achieving PSRQG) can be viewed as a special case of a two-party quantum computation protocol, where one side (Client) has only classical information and thus the communication consists of classical messages. Furthermore, the client is honest, so we only need to prove security (and give simulators) against an adversarial server. Finally, the ideal protocol (giving the same output but mediated by a trusted party; see definition below) that the real protocol implements needs to be by itself PSRQG, i.e. obtaining the legitimate outputs should not leak any extra information (see Sections 3 and 5). In this paper, unless stated otherwise, we use the convention that all quantum operators considered are described by polynomially-sized quantum circuits.
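Since U_f permutes computational-basis labels, it can be sketched classically as a relabelling of basis states. The sketch below applies the standard XOR action |x⟩|y⟩ → |x⟩|y ⊕ f(x)⟩ to an amplitude dictionary, with a toy f for illustration.

```python
# U_f on computational basis states: |x>|y> -> |x>|y XOR f(x)>,
# simulated as a permutation of basis-state labels.
def apply_Uf(f, amps):
    """amps: dict mapping (x, y) tuples of bit tuples to amplitudes."""
    out = {}
    for (x, y), a in amps.items():
        y2 = tuple(yi ^ fi for yi, fi in zip(y, f(x)))
        out[(x, y2)] = out.get((x, y2), 0) + a
    return out

f = lambda x: (x[0] ^ x[1],)                   # toy f: XOR of two input bits
# (H|0>)^2 |0> : uniform superposition on the control, |0> on the target
amps = {((0, 0), (0,)): 0.5, ((0, 1), (0,)): 0.5,
        ((1, 0), (0,)): 0.5, ((1, 1), (0,)): 0.5}
out = apply_Uf(f, amps)
assert out[((0, 1), (1,))] == 0.5 and out[((0, 0), (0,))] == 0.5
# U_f permutes basis states, hence it is unitary
assert sorted(abs(a) for a in out.values()) == [0.5] * 4
```

Applied to the uniform superposition with the target at |0^m⟩, this produces exactly the state Σ_x |x⟩|f(x)⟩ used in Stage 1 of QFactory.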
We follow the notations and conventions of [DNS10]. We have two parties A, B with registers A, B and an extra register R with dim R = dim A + dim B. The input state is denoted ρ_in ∈ D(A ⊗ B ⊗ R), where D(A) is the set of all possible quantum states in register A. We also denote by L(A) the set of linear mappings from A to itself. The ideal output 4 is given by ρ_out = (U ⊗ I_R) · ρ_in, where for simplicity we write U · ρ instead of UρU†. For two states ρ_0, ρ_1 we denote the trace norm distance Δ(ρ_0, ρ_1) := ½‖ρ_0 − ρ_1‖. If Δ(ρ_0, ρ_1) ≤ ε then any process applied on ρ_0 behaves as on ρ_1 except with probability at most ε. Definition 2.10 (taken from [DNS10]). An n-step two-party strategy, denoted Π^O = (A, B, O, n), consists of: 1. input spaces A_0, B_0 and memory spaces A_1, · · · , A_n and B_1, · · · , B_n; 2. an n-tuple of quantum operations (L^A_1, · · · , L^A_n) and (L^B_1, · · · , L^B_n) such that L^A_i : L(A_{i−1}) → L(A_i), and similarly for L^B_i;
3. an n-tuple of global operations (O_1, · · · , O_n), one for each step. The global operations (in our case) are communications that transfer some (classical) register from one party to another. The quantum state at each step of the protocol is obtained by applying, in order, the local and global operations of the preceding steps to ρ_in. Definition 2.11 (Ideal Protocol). Given a real protocol, we call the corresponding "ideal protocol" a protocol that has the same input/output distributions as an honest run of the real protocol, but all intermediate steps are completed by a trusted third party.
The security definitions are based on the corresponding ideal protocol of secure two-party quantum computation (S2PQC), which takes a joint input ρ_in ∈ A_0 ⊗ B_0, obtains the state U · ρ_in and returns to each party their corresponding quantum registers. A protocol Π^O_U implements the protocol securely if no possible adversary, at any step of the protocol, can distinguish with non-negligible probability whether they interact with the real protocol or with a simulator (which has access to the ideal protocol). When a party is malicious we add the notation "∼", e.g. Ã.
Definition 2.12 (Simulator). S(Ã) = ((S_1, · · · , S_n), q) is a simulator for adversary Ã in Π^O_U if it consists of: 1. operations S_i : L(A_0) → L(Ã_i) described by polynomially-sized quantum circuits; 2. a sequence of bits q ∈ {0, 1}^n determining whether the simulator calls the ideal functionality at step i (q_i = 1 calls the ideal functionality).
Given input ρ_in, the simulated view for step i, denoted ν_i(Ã, ρ_in), is obtained by applying the simulator's operations S_1, · · · , S_i to ρ_in, invoking the ideal functionality at the steps where q_i = 1. Definition 2.13 (Privacy with respect to the Ideal Protocol). We say that the protocol is δ-private (with respect to an ideal protocol) if for all adversaries Ã and for all steps i, Δ(ν_i(Ã, ρ_in), ρ_i(Ã, ρ_in)) ≤ δ, where ρ_i(Ã, ρ_in) is the state of the real protocol with corrupted party Ã at step i.
Honest-but-curious (HBC) adversaries follow the protocol honestly, keeping records of all communication, and attempt to learn from those records more than they should. Since quantum states cannot be copied, [DNS10] defined an adversary that can be considered the quantum analogue, called a specious adversary.
Definition 2.14 (Specious). An adversary Ã is ε-specious if there exists a sequence of operations (T_1, · · · , T_n), where each T_i : L(Ã_i) → L(A_i) can be described by a polynomially-sized quantum circuit, such that, for all i, applying T_i to the adversary's state yields a state within trace distance ε of the state of an honest run at step i. In our protocol, where communications are classical, it is sensible to define a weaker version of the adversary: Definition 2.15 (Quantum-Honest-But-Curious (QHBC)). An adversary Ã is QHBC if it is 0-specious.

Ideal Functionality
In many distributed protocols the required communication consists of sending a sequence of single qubits prepared in random states that are unknown to the receiver (and to any other third party). What we want to achieve is a way to remotely generate single qubits that are random and (appear to be) unknown to all parties but the "client" that gives the instructions.
In this work, for clarity and having in mind the applications we wish to implement, we will focus on a particular choice for the set R of possible states that contains eight different single-qubit states (see below). One could easily modify our work to restrict to a smaller set (e.g. the four BB84 states [BB84] that would actually simplify our proofs) or a larger set.
We define the set of states
R := {|+_θ⟩} where θ ∈ {0, π/4, π/2, · · · , 7π/4}. (7)
By including magic states (|+_{π/4}⟩), this set of states can be viewed as a "universal" resource, as applying Clifford operations on those states is sufficient for universal quantum computation. Furthermore, it is sufficient to implement both Blind Quantum Computation (e.g. [BFK09]) and Verifiable Blind Quantum Computation (e.g. [FKD17]). The functionality:
- Either returns abort to both client and server,
- Or returns (m_C, r) to the client, and (m_S, |+_{rπ/4}⟩) to the server.
Remarks: (i) The outcome of this functionality is the client "sending" the qubit |+_θ⟩ (whose description he knows) to the server, thus simulating a quantum channel. (ii) We note that there is an abort possibility and some auxiliary classical message m, both included to make the functionality general enough to allow for our construction. Furthermore, the classical description of the qubit r and the classical message m are totally uncorrelated (as r is chosen randomly for each m). (iii) While the server can learn something about the classical description (e.g. by measuring the qubit), this information is limited and is exactly the same information that he could obtain if the client had prepared and sent a random qubit. Therefore, privacy is defined with respect to this ideal setting.
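The eight states of R can be written down explicitly as amplitude pairs. A small sketch, which also checks the property behind remark (iii): all eight states sit on the equator of the Bloch sphere, so a computational-basis measurement gives outcome probabilities 1/2 regardless of r.

```python
# The eight output states |+_theta> = (|0> + e^{i r pi/4}|1>)/sqrt(2),
# r in {0, ..., 7}, as (amplitude of |0>, amplitude of |1>) pairs.
import cmath
import math

def plus_theta(r):
    theta = r * math.pi / 4
    return (1 / math.sqrt(2), cmath.exp(1j * theta) / math.sqrt(2))

states = [plus_theta(r) for r in range(8)]
# all eight are equatorial: equal-magnitude amplitudes ...
assert all(abs(abs(a0) - abs(a1)) < 1e-12 for a0, a1 in states)
# ... so a computational-basis measurement reveals nothing about r
assert all(abs(abs(a1) ** 2 - 0.5) < 1e-12 for _, a1 in states)
# e.g. r = 2 gives |+_{pi/2}> = (|0> + i|1>)/sqrt(2)
assert abs(states[2][1] - 1j / math.sqrt(2)) < 1e-12
```

Restricting r to {0, 2, 4, 6} would give the four BB84 basis states mentioned above as a simpler alternative choice for R.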
We are interested only in the honest-but-curious setting for now. The idea is that we will allow the adversary to have access to the classical registers/variables of the server (we will call this information a "view"), as well as the classical variables produced by the ideal functionality (uncorrelated with the quantum output, so secure by definition). The goal of the adversary will be to distinguish whether he is interacting with a view of the ideal functionality or a view of the real protocol. More formally, we will denote by P_S the view of server S in protocol P, which is the list of the contents of the variables/classical registers assigned by the server S in the protocol P. Similarly, F_S will be the view of the server S in the ideal functionality F, equal to the value of m_S in a run of the ideal functionality.
To achieve the PSRQG functionality we define an ideal protocol, called Ideal QFactory 5 , mediated by a trusted third party that (under certain assumptions) achieves the PSRQG functionality. This ideal protocol can be realised by a concrete protocol without any trusted parties (see later), and certain choices in the definition of the ideal QFactory (e.g. the function required) are made with this in mind.

Protocol 3.2 Ideal QFactory Protocol
Public Information: a security parameter n ∈ N*, a family of trapdoor one-way functions {f_k : D → R}_{k∈K} that is quantum-safe, two-regular and collision resistant (or satisfying the weaker second preimage resistance property, see Remark 3.1), and a family of functions g_k.
The trusted party:
- Runs (k, t_k) ← Gen(1^n), samples y uniformly among the elements of R having two preimages x, x', and samples β uniformly in E.
- If the last bit of x and x' is the same, aborts; otherwise
- Computes B̃ := g_k(x, x', β). Setting θ := B̃ × π/4, prepares a qubit in the state |+_θ⟩.
Outputs:
- Either returns abort to both parties,
- Or returns (k, y, β, |+_θ⟩) to server S and (t_k, y, β, θ) to client C. Note that θ is optional and could have been recomputed by the client from t_k.
Remark 3.1. The second preimage resistance property is enough to prove the security of our scheme in the honest-but-curious setting. However, as soon as the server can be malicious, the collision resistance property becomes important, since otherwise the server might forge known valid states, which would break the security.
We will denote by M_QF the distribution obtained by sampling, as above, the index k and trapdoor t_k according to Gen(1^n), y uniformly among the elements of R having two preimages, and β uniformly in E, and then outputting ((t_k, y, β), (k, y, β)).
Lemma 3.1. Ideal QFactory Protocol 3.2 is a PSRQG protocol as described in Definition 3.
Proof. We can see that Protocol 3.2 is identical to Protocol 3.1 with M = M_QF (since the client, having t_k, can determine whether it aborts or not), apart from the fact that in Protocol 3.2 the state received by the server is |+_θ⟩, while in Protocol 3.1 it is |+_r⟩. Now we use the fact that g_k is a hard-core function. By Definition 2.8, for a QPT adversary that has access to m = (k, y = f_k(x), β), the value of the hard-core function g_k(x, x', β) = 4θ/π, where x, x' are the unique preimages of y, is indistinguishable (up to negligible probability) from a random value r. It follows that such an adversary cannot distinguish (except with negligible probability) whether he received the state |+_θ⟩ as in Protocol 3.2 or the state |+_r⟩ as in the ideal functionality described in Protocol 3.1, and therefore Eq. (8) is satisfied.
It is not sufficient to prove that given the image y = f_k(x) it is hard to obtain the exact value of the function g (we will omit the k when it is clear from the context); we want the stronger requirement that given y, a QPT adversary obtains no advantage in distinguishing the value of g (the classical description of the state) from a totally random value r. Intuitively, what Protocol 3.2 achieves is that it produces (truly) random qubits in states that are pseudo-secret, i.e. their classical description is computationally unknown to anyone that does not have access to the trapdoor t_k (e.g. the server).

The Real Protocol
We assume the existence 6 of a family {f_k : {0, 1}^n → {0, 1}^m}_{k∈K} of trapdoor one-way functions that are two-regular and collision resistant (or satisfy the weaker second preimage resistance property, see Remark 3.1) even against a quantum adversary. For any y, we will denote by x(y) and x'(y) the two unique different preimages of y under f_k (if the y is clear, we may drop it). Note that because of the two-regularity property m ≥ n − 1. We use subscripts to denote the different bits of the strings.
Preparation
- Client: uniformly samples a set of random three-bit strings α = (α_1, · · · , α_{n−1}) where α_i ← {0, 1}^3, and runs the algorithm (k, t_k) ← Gen_F(1^n). The α and k are public inputs (known to both parties), while t_k is the "private" input of the Client.
Stage 1: Preimages superposition
- Client: instructs the Server to prepare one register at H^{⊗n}|0 · · · 0⟩ and a second register initiated at |0^m⟩.
- Client: returns k to the Server, and the Server applies U_{f_k} using the first register as control and the second as target.
- Server: measures the second register in the computational basis, obtains the outcome y and returns this result y to the Client. Here, an honest Server would have the state (|x⟩ + |x'⟩) ⊗ |y⟩ with f_k(x) = f_k(x') = y and y ∈ Im f_k.
Stage 2: Squeezing
- Client: instructs the Server to measure all the qubits (except the last one) of the first register in the {|0⟩ ± e^{iα_i π/4}|1⟩} basis. The Server obtains the outcomes b = (b_1, · · · , b_{n−1}) and returns the result b to the Client.
- Client: using the trapdoor t_k computes x, x'. Then checks whether the n-th bits of x and x' (corresponding to the y received in Stage 1) are the same or different. If they are the same, returns abort; otherwise, obtains the classical description of the Server's state.
Output: If the protocol is run honestly, when there is no abort, the state that the Server has is |+_θ⟩ = (1/√2)(|0⟩ + e^{iθ}|1⟩), where the Client (only) knows the classical description (see Theorem 4.1):

θ = (−1)^{x_n} (π/4) Σ_{i=1}^{n−1} (x_i − x′_i)(α_i + 4 b_i) mod 2π    (9)

Remarks: The first thing to note is that the Server should not only be unable to guess θ from his classical communications; he should also be unable to distinguish it from a random string with probability greater than negligible. We will prove this later, but for now it is enough to point out that θ depends on the preimages x and x′ of y (which the Client can obtain using t_k). The second thing to note is that previously, in Protocol 3.2 and in Theorem 3.1, we used the variable β. In our case, β corresponds to both the α_i's and b. While our expression resembles the inner product in the Goldreich-Levin (GL) theorem, it differs in a number of places, and our proof (that θ is a hard-core function), while it builds on the GL theorem proof, is considerably more complicated. Details can be found in the security proof, but here we simply mention the differences: (i) our case involves three bits rather than a single predicate, and the different bits, if we view them separately, may not be independent; (ii) we have a term (x − x′) rather than a single preimage, so rather than the one-way property of the function we will need second preimage resistance; and (iii) for the same reason, if we view our function as an inner product, it can take both negative and positive values ((x − x′) could be negative).
A third thing to note is that we have singled out the last qubit of the first register as the output qubit. One could have a more general protocol where the output qubit is chosen randomly or, for example, from the set of qubits that are known to have different bit values between x and x′, but this would not improve our analysis, so we keep it like this for simplicity. Moreover, while the "inner product" normally involves the full string x that one tries to invert, in our case it does not include one of the bits (the last) of the string we wish to invert. It is important to note that this does not change anything in our proofs, since if one can invert all of the string apart from one bit with inverse-polynomial probability of success, then trivially one can invert the full string with inverse-polynomial probability (by randomly guessing the remaining bit or by trying out both values of that bit). Therefore all the proofs by contradiction are still valid, and in the remainder, for notational simplicity, we will take the inner products to involve all n bits.
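The Client's Stage-2 post-processing can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `client_theta` is a hypothetical helper implementing the angle formula from the correctness section, and the preimages below are hand-picked stand-ins for the values x, x′ that the Client would actually recover from y using the trapdoor t_k.

```python
from math import pi

def client_theta(x, x_prime, alpha, b):
    # x, x_prime: the two n-bit preimages of y (recovered via the trapdoor t_k)
    # alpha: n-1 integers in {0,...,7}, each encoding a measurement angle alpha_i * pi/4
    # b: n-1 measurement outcomes in {0, 1} reported by the Server
    n = len(x)
    if x[n - 1] == x_prime[n - 1]:
        return None                      # abort: the preimages agree on the last bit
    sign = (-1) ** x[n - 1]
    units = sign * sum((x[i] - x_prime[i]) * (alpha[i] + 4 * b[i]) for i in range(n - 1))
    return (units % 8) * pi / 4          # theta is a multiple of pi/4, defined mod 2*pi

# toy run with hand-picked preimages (illustrative values only)
print(client_theta([1, 0, 1, 1], [0, 0, 1, 0], alpha=[3, 5, 1], b=[1, 0, 1]))  # pi/4
```

Note that swapping the roles of x and x′ leaves the result unchanged, since both the sign (−1)^{x_n} and every difference x_i − x′_i flip.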

Correctness and intuition
Theorem 4.1. If both the Client and the Server follow Protocol 4.1, the protocol aborts when the two preimages x, x′ ∈ f_k^{−1}(y) agree on the last bit (x_n = x′_n), while otherwise the Server ends up with the output (single) qubit being in the state |+_θ⟩, where θ is given by Eq. (9).
Proof. In the first stage, before the first measurement, but after the application of U_{f_k}, the state is Σ_x |x⟩ ⊗ |f_k(x)⟩. What the measurement does is collapse the first register into the equal superposition of the two unique preimages of the measured y = f_k(x) = f_k(x′), in other words into the state (|x⟩ + |x′⟩) ⊗ |y⟩. It is not possible, even for a malicious adversary (not considered here), to force the output of the measurement to be a given y (see [Aar05] for the relation of PostBQP to BQP). This completes the first stage of the protocol. Before proceeding with the proof of correctness we make three observations. By the second preimage resistance property of the trapdoor function, learning x is not sufficient to learn x′, except with negligible probability; and intuitively, by the stronger collision resistance property, even a malicious server cannot forge a state |x⟩ + |x′⟩ (with f(x) = f(x′)) fully known to him.
Then, we examine what happens if the last bits of x and x′ are the same, and see why the protocol aborts. In this case, in the first register, the last qubit is in product form with the remaining state, and therefore any further measurements in Stage 2 do not affect it, leaving it in the state |x_n⟩. Because of this, the output state is not of the form of Eq. (9), and including these states in the set of possible outputs would considerably change our analysis.
Finally, we should note that the resulting state is essentially a Greenberger-Horne-Zeilinger (GHZ) state [GHZ89]: let G be the set of bit positions where x and x′ differ (which includes n, the position of the output qubit), while Ḡ is the set of positions where they are identical. The state is then (where we no longer keep the qubits in order, but group them depending on whether they belong to G or Ḡ):

(⊗_{i∈Ḡ} |x_i⟩) ⊗ (⊗_{i∈G} |x_i⟩ + ⊗_{i∈G} |x′_i⟩)

This can be rewritten as (up to trivial re-normalization):

(⊗_{i∈Ḡ} |x_i⟩) ⊗ (⊗_{i∈G} X^{x_i}) (|0⟩^{⊗|G|} + |1⟩^{⊗|G|})

It is now evident that the state at the end of Stage 1 is a tensor product of isolated |0⟩ and |1⟩ states, and a GHZ state with random X's applied. Figure 1 gives an illustration of this state⁷, before and after Stage 2.

Figure 1: A simplified representation of the protocol. The red and yellow ellipses represent the qubits, the inner circle contains the bits of x and the outer circle contains the bits of x′. The central qubit is the last one, which is not measured and which will be the output qubit.
The important thing to note is that the set G, which determines which qubits are in the GHZ state and which are not, is not known to the Server (apart from the fact that the position of the output qubit belongs to G, since otherwise the protocol aborts). Moreover, this set denotes the positions where x and x′ differ, which is given by the XOR of the two preimages x ⊕ x′ := (x_1 ⊕ x′_1, ..., x_n ⊕ x′_n). Because of the second preimage resistance of the function, the Server should not be able to invert and obtain x ⊕ x′, except with negligible probability (without access to the trapdoor t_k). This in itself does not guarantee that the Server cannot learn any information about the XOR of the preimages, but we will see that the actual form of the state is such that being able to obtain information would lead to inverting the full XOR and thus breaking the second preimage resistance. Now let us continue towards Stage 2. Measuring a qubit (other than the last one) in Ḡ has no effect on the last qubit (since it is disentangled). When the qubit index is in G, then measuring it at angle α_i π/4 gives a phase to the output qubit of the form (−(−1)^{x_i} α_i + 4 b_i)π/4, as one can easily check⁸. Therefore, adding all the phases leads to the output state being (|x_n⟩ + e^{iθ_acc}|x̄_n⟩)/√2, where:

θ_acc = (π/4) Σ_{i∈G\{n}} (−(−1)^{x_i} α_i + 4 b_i) mod 2π

Because θ is defined modulo 2π and −4 = 4 mod 8, we can express the output angle in a more symmetrical way, so that the output qubit is |+_θ⟩ with:

θ = (−1)^{x_n} (π/4) Σ_{i=1}^{n−1} (x_i − x′_i)(α_i + 4 b_i) mod 2π

Note that because the angles are defined modulo 2π, one can represent this angle as a three-bit string B (interpretable as an integer) such that θ := B × π/4, and eventually remove the (−1)^{x_n}, if needed, by choosing a suitable convention in defining x and x′.
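The derivation above can be checked numerically on a small instance. The sketch below is a toy simulation under stated assumptions: it starts from the Stage-1 post-measurement state (|x⟩ + |x′⟩)/√2 for hand-picked preimages (rather than from a real trapdoor function), simulates the Stage-2 measurements with numpy, and verifies that the unmeasured qubit ends up in |+_θ⟩ for the θ formula just derived.

```python
import numpy as np

rng = np.random.default_rng(1)

def plus(phi):
    # |+_phi> = (|0> + e^{i phi} |1>) / sqrt(2)
    return np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)

def measure(state, i, phi):
    # measure qubit i (axis i) in the {|+_phi>, |-_phi>} basis; Born rule outcome
    posts, probs = [], []
    for outcome in (0, 1):
        v = plus(phi + outcome * np.pi).conj()
        post = np.tensordot(v, state, axes=([0], [i]))
        probs.append(float(np.linalg.norm(post) ** 2))
        posts.append(post)
    outcome = rng.choice(2, p=np.array(probs) / sum(probs))
    return outcome, posts[outcome] / np.linalg.norm(posts[outcome])

n = 3
x, xp = (1, 0, 0), (0, 1, 1)                 # toy preimages of a common y; last bits differ
alpha = [int(a) for a in rng.integers(0, 8, size=n - 1)]

# Stage-1 post-measurement state (|x> + |x'>)/sqrt(2)
state = np.zeros([2] * n, complex)
state[x] = state[xp] = 1 / np.sqrt(2)

# Stage 2: measure the first n-1 qubits at angles alpha_i * pi / 4
b = []
for i in range(n - 1):
    bi, state = measure(state, 0, alpha[i] * np.pi / 4)  # axis 0 = next unmeasured qubit
    b.append(int(bi))

sign = (-1) ** x[n - 1]
theta = (sign * sum((x[i] - xp[i]) * (alpha[i] + 4 * b[i]) for i in range(n - 1)) % 8) * np.pi / 4
overlap = abs(np.vdot(plus(theta), state))   # |<+_theta | output qubit>|
print(round(overlap, 6))  # 1.0: the output qubit is exactly |+_theta>
```

The absolute value of the inner product discards the outcome-dependent global phase, which is unobservable.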
A final remark is that in an honest run of this protocol, the measurement outcomes b_i and y are uniformly chosen from {0, 1} and Im(f_k) respectively. This justifies why, in the honest-but-curious model, we can view the protocol as sampling the different α, y, b uniformly at random.

Privacy against QHBC adversaries
Here we will prove the security of Protocol 4.1 against QHBC adversaries (Definition 2.15); it can easily be generalised to specious adversaries (Definition 2.14). Before proceeding further, it is worth stressing that this security level has a three-fold importance. First, the QHBC model covers any application of PSRQG within a protocol where the adversaries are third parties that have access to the classical communication and nothing else. In this case, we can safely assume that the quantum part of the protocol is followed honestly, and we only need to prove that the third parties learn nothing about the classical description of the state from the public classical communication. The second case of interest is the scenario where the "server" does not intend to sabotage/corrupt the computation but may be interested in learning (for free) extra information. In such a case, the protocol should be followed honestly, since any non-reversible deviation other than copying classical information could corrupt the computation. Finally, the QHBC case, as in the classical setting, is a first step towards proving full security against malicious adversaries, as we will discuss in Section 7.
Theorem 5.1. Protocol 4.1 realises a PSRQG Ideal Protocol (as in Definition 3.2) that is private with respect to this ideal protocol (as in Definition 2.13) against a QHBC server A (Definition 2.15).
Before proving the privacy with respect to the ideal functionality (see below for construction of simulators), the first step is to show that the corresponding ideal protocol (Definition 2.11) is a PSRQG. By Theorem 3.1 this reduces to proving that the classical description is a hard-core function with respect to f k .
Theorem 5.2. The function θ, as defined in Protocol 4.1, is a hard-core function with respect to f_k. NB: here collision resistance is not needed and is replaced by the weaker second preimage resistance property.
Sketch Proof of Theorem 5.2. In Protocol 4.1, the adversary (Server) can only use the classical information that he possesses (k, y, α, b) in order to try and guess with some probability the value of θ in the case that there is no abort. Since the adversary follows the honest protocol, the choices of y, b are truly random (and not determined by the adversary as he could in the malicious case).
Outline of the sketch proof: We first express the classical description of the state as expressions for each of the three corresponding bits. The aim is to prove that it is impossible to distinguish the sequence of these three bits from three random bits with non-negligible probability. To show this we follow five steps. In Step 1 we express each of the bits as a sum modulo two of an inner product (of the form present in the GL theorem) and some other terms. In Step 2 we show that guessing the XOR of the two preimages breaks the second preimage resistance of the function and is thus impossible. We then assume that the adversary can achieve some inverse-polynomial advantage in guessing certain predicates, and in the remaining steps we show that in that case he can obtain a polynomial inversion algorithm for the one-way function f_k, thus reaching a contradiction. In Step 3 we use the Vazirani-Vazirani Theorem 2.3 to reduce the proof of the hard-core function property to a number of single hard-core bits (predicates). In Step 4 we use a lemma that allows us to fix all but one variable in each expression, at an extra cost that is an inverse-polynomial probability, so the (fixed-variables) guessing algorithm still needs to have negligible success probability. Finally, in Step 5, we reduce all the predicates to the form of a known hard-core predicate XORed with a function that involves variables not included in that predicate. Using the previous step, this reduces to guessing the XOR of a hard-core predicate with a constant, which is bounded by the probability of guessing the (known to be hard-core) predicate.
Here we give the sketch described above, while the full proof can be found in Appendix B. Let us start by defining B = (B_1, B_2, B_3), where the B_i are single bits and θ = B × π/4 (viewing B as an integer). Moreover, we treat x, x′ as vectors in {0, 1}^n; we define α^(j) = (α_1^(j), ..., α_{n−1}^(j)) as the vector that collects the j-th bit (j ∈ {1, 2, 3}) of each of the three-bit strings α_i, and we define x̂ := x ⊕ x′. We define z as a vector in {−1, 0, 1}^n given by the element-wise differences of the bits of x and x′, i.e. z_i = x_i − x′_i. Finally, as in the GL theorem, we use the notation ⟨a, b⟩ = Σ_{i=1}^{n−1} a_i b_i for the inner product. We will prove that any QPT adversary A having all the classical information that the Server has (y, α, b) can guess B with probability at most negligibly better than a random guess (for simplicity we write f instead of f_k). This means that the adversary A cannot distinguish B from a random three-bit string with non-negligible probability, and thus Protocol 4.1 is a PSRQG as given in Definition 3.2.
Step 1: We decompose Eq. (15) into three separate bits, and use the variables x̂, z defined above.
where the derivation and exact expressions for the functions h_1, h_2 are given in Appendix B. We notice from Eq. (17) that each bit includes a term of the form ⟨x̂, α^(i)⟩ mod 2, which on its own is a hard-core predicate by the GL theorem.
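The Goldreich-Levin mechanism invoked above can be illustrated with a minimal toy: if an adversary could predict the inner-product predicate r ↦ ⟨x, r⟩ mod 2, then x itself can be recovered. The sketch below assumes a *perfect* oracle for simplicity; the actual GL theorem (and our proof) handles oracles with only a 1/2 + ε advantage, via self-correction and majority voting, which is omitted here.

```python
import secrets

def inner(x, r):
    # <x, r> mod 2
    return sum(a & b for a, b in zip(x, r)) % 2

n = 16
x = [secrets.randbelow(2) for _ in range(n)]

def oracle(r):
    # stand-in for an adversary that predicts <x, r> mod 2 perfectly;
    # the real GL theorem only needs success probability 1/2 + eps
    return inner(x, r)

def recover(n, oracle):
    # pair each random query r with r XOR e_i; with a noisy oracle one would
    # repeat this and take a majority vote per bit
    recovered = []
    for i in range(n):
        r = [secrets.randbelow(2) for _ in range(n)]
        r_flip = list(r)
        r_flip[i] ^= 1
        recovered.append(oracle(r) ^ oracle(r_flip))  # = <x, e_i> = x_i
    return recovered

print(recover(n, oracle) == x)  # True
```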
Step 2: By the second preimage resistance we have:

Pr[A(k, y) = x̂] ≤ negl(n)    (18)

For each bit j ∈ {1, 2, 3} separately, we assume that the adversary can guess the corresponding bit with probability 1/2 + ε_j(n). Then, similarly to the GL theorem, we prove that if ε_j(n) is inverse polynomial, this leads to a contradiction with Eq. (18), since one can obtain an inverse-polynomial inversion algorithm for the one-way function f.
Step 3: While each bit includes a term that on its own would make it a hard-core predicate (as stated in Step 1), XORing the overall bit with other bits could destroy this property. To proceed with the proof that B is a hard-core function, we use the Vazirani-Vazirani theorem, which states that it suffices to show that the individual bits, as well as all XOR combinations of individual bits, are hard-core predicates. In this way one evades the need to show explicitly that the guesses for different bits are uncorrelated. To proceed with the proof, we use a trick that "disentangles" the different variables.
Step 4: We would like to be able to fix one variable and vary only the remaining ones, while at the same time maintaining some bound on the guessing probability.
The advantage ε_j(n) that we assume the adversary has for guessing one bit (or an XOR of bits) is calculated "on average" over all the random choices of (x, α^(i), b). Using Theorem 5.3 we can fix one-by-one all but one variable (applying the lemma iteratively, see Appendix B). With suitable choices, the cardinality of the set of values that satisfy all these conditions is O(2^n ε_j(n)) for each iteration. Unless ε_j(n) is negligible, this size is an inverse-polynomial fraction of all values, which suffices to reach the contradiction. The actual inversion probability that we obtain is simply the product of the extra cost of fixing the variables with the standard GL inversion probability. This extra cost is exactly the ratio of the cardinality of the Good sets (defined below) to the set of all values, and is O(ε_{v_i}(n)).
Step 5: If the expression we wish to guess involves an XOR of terms that depend on different variables, then by using Step 4 we can fix the variables of all but one term. Then we note that trying to guess a bit (that depends on some variable and has expectation value close to 1/2) is at least as hard as trying to guess the XOR of that bit with a constant. For example, if the bit we want to guess is ⟨x̂, r_1⟩ mod 2 ⊕ h(z, r_2, r_3) and we have a bound on the guessing probability where only r_1 is varied, then we have⁹:

Pr_{r_1}[A(y, r_1, r_2, r_3) = ⟨x̂, r_1⟩ ⊕ h(z, r_2, r_3)] ≤ Pr_{r_1}[A′(y, r_1) = ⟨x̂, r_1⟩]

for a suitable QPT A′, since h(z, r_2, r_3) is a constant once r_2, r_3 are fixed. We note that all bits of B, and their XOR's, can be brought into this form. Using this, we can now prove security, as the r.h.s. is exactly in the form where the GL theorem provides an inversion algorithm for the one-way function f. For details, see Appendix B.
We now return and prove Theorem 5.1.
Proof of Theorem 5.1. In the proof we use QHBC adversaries, but as we follow closely the treatment of the more general specious adversaries, it can be seen that the proof generalises easily. The proof has two steps. In the first step, using the operators T_i (as per the definition of specious adversaries) and the existence of a certain fixed state (see below), the simulator can reproduce the real view of the Server provided it can reproduce the honest state ρ_i(ρ_in) of the corresponding part of the protocol. The second step is to notice that, apart from the last step of the protocol (the decision to abort or not), the (only) secret input t_k of the Client plays no role, and thus the simulator can reproduce the view of the Server without calling the ideal functionality. Finally, the simulator of the last step of the protocol calls the ideal functionality (and thus q_i = 1 in Eq. (4)) and receives the decision to abort (without access to the secret t_k).
Step 1: We use the no-extra-information lemma from [KW17]:

Lemma 5.4 (No-extra information (from [KW17])). Let Π_U = (A, B, n) be a correct protocol for two-party evaluation of U. Let Ã be any ε-specious adversary. Then there exists an isometry T_i : Ã_i → A_i ⊗ Â_i and a (fixed) mixed state ρ̂_i ∈ D(Â_i) such that, for all joint input states ρ_in, applying T_i to the real state ρ̃_i(Ã, ρ_in) yields a state close (as a function of ε) to ρ̂_i ⊗ ρ_i(ρ_in), where ρ_i(ρ_in) is the state in the honest run and ρ̃_i(Ã, ρ_in) is the real state (with the specious adversary Ã).
By setting ε = 0 (as in the QHBC case) and using the inverse of the isometry T_i, we have¹⁰ that the operation S_i of the simulator for any step consists of generating ρ_i(ρ_in) (see the next part of the proof), tensoring it with the fixed state ρ̂_i, and applying the inverse of the isometry T_i. This recovers exactly the real state ρ̃_i(S, ρ_in), and thus tracing out the system of the Client to obtain the simulated view ν_i(S, ρ_in) gives (δ = 0)-privacy with respect to the ideal protocol (see Eq. (5)).
Step 2: We give below the honest states at the two steps of the protocol before the Server (classically) communicates with the Client, noting that a simulator (with no access to the private information t k ) could interact with the Server (instead of the Client) just following the normal steps of the protocol, using the public inputs (k, α).
• State after the Server measures the second register: (1/√2)(|x⟩ + |x′⟩) ⊗ |y⟩.
• State after the Server measures the first register at the α angles: |Output⟩, together with the classical outcomes b, where |Output⟩ = |+_θ⟩ if there is no abort, while |Output⟩ = |x_n⟩ otherwise.
The final state additionally carries the abort/no-abort decision. To obtain the corresponding view, the Simulator calls the ideal functionality, but only uses the abort/no-abort decision, and otherwise acts as in the previous steps: it generates the state ρ̂_f (from the no-extra-information lemma), obtains the final state ρ_f(ρ_in) by running the actual protocol until the previous step and adding the extra register |abort⟩ / |no-abort⟩, and then applies the inverse of the isometry T_f and traces out the Client's registers. Note that, as given in the definitions, all operators used correspond to polynomially-sized quantum circuits, and therefore the Simulator is also QPT.
Before moving to the constructions of trapdoor functions with the required properties and discussing the malicious case, we need to make an important observation. The ideal Protocol 3.1, other than the classical information (k, y, α, b), returns the state |+_θ⟩ to the Server. The security of our real Protocol 4.1 that we proved is with respect to the ideal protocol (i.e. no information beyond that of the ideal protocol is obtained). However, having access to (a single copy of) the state |+_θ⟩ can (and does) give some non-negligible information on the classical description of that specific θ. For example, by making a measurement one can rule out one of the eight states with certainty. This, naively, would appear to contradict the properties of the function we have (where we prove that one can have only negligible advantage in guessing θ). It is, however, no different from the SRQG functionality, where the server can also obtain some information on r_m. The resolution of this apparent contradiction is that the proof of the hard-core property of θ with respect to the function relies on being able to repeat the same guessing algorithm keeping the same x (or y) but varying the α's. However, to obtain any information from the (output) qubit, one needs to measure and thus disturb it. When repeating the experiment, the probability of obtaining the same y a second time (and thus of having prepared the same θ) is negligible for any QPT adversary (who can repeat only a polynomial number of times). Therefore, this one-shot extra information on θ cannot be distinguished from one-shot information on a truly random r_m.

Function Constructions
For our Protocol 4.1 we need a trapdoor one-way function that is also quantum-safe, two-regular and second preimage resistant (or, stronger, collision resistant). These properties may appear too strong to achieve; however, we give here methods to construct functions that achieve them starting from trapdoor one-way functions with fewer (more realistic) properties, and we give one concrete example that achieves all the desired properties. In particular we give:

• A general construction that, given either (i) an injective, homomorphic (with respect to any operation¹¹) trapdoor one-way function or (ii) a bijective trapdoor one-way function, yields a two-regular, second preimage resistant¹², trapdoor one-way function. In both cases the quantum-safe property is maintained (if the initial function has this property, so does the constructed function).
• A method (taken from [MP12]) to realise injective quantum-safe trapdoor functions, derived from the LWE problem, that have a certain homomorphic property.
• A way to use the first construction with the trapdoor function from [MP12], requiring a number of modifications, including a relaxation of the notion of two-regularity. The resulting function satisfies all the desired properties provided there exists a choice of parameters satisfying multiple constraints.
• A specific choice of these parameters, satisfying all constraints, that leads to a concrete function with all the desired properties.

Obtaining two-regular, collision resistant/second preimage resistant, trapdoor one-way functions
Here we give two constructions. The first uses as its starting point an injective, homomorphic trapdoor function, while the second uses a bijective trapdoor function. While we give both constructions, we focus on the first, since (i) we can prove the stronger collision-resistance property and (ii) (to our knowledge) there is no known bijective trapdoor function that is believed to be quantum-safe.
Theorem 6.1. If G is a family of injective, homomorphic, trapdoor one-way functions, then there exists a family F of two-regular, collision resistant, trapdoor one-way functions. Moreover the family F is quantum-safe if and only if the family G is quantum-safe.
From now on, we consider that any function g_k ∈ G has domain D and range R, and let ⋆ be the closed operation on D and ∘ the closed operation on R such that g_k is a morphism between D and R with respect to these two operations: g_k(a ⋆ b) = g_k(a) ∘ g_k(b) for all a, b ∈ D. We also denote by ⊘ the inverse operation of ⋆ on D, specifically a ⊘ b = a ⋆ b^{−1} for all a, b ∈ D, and let 0 be the identity element for ⋆. Then, the family F is described by the following PPT algorithms:

FromInj.Gen_F(1^n)
1: (k, t_k) ←$ Gen_G(1^n)  // k is an index of a function from G and t_k is its associated trapdoor
2: x_0 ←$ D \ {0}  // x_0 ≠ 0, to ensure that the 2 preimages mapped to the same output are distinct
3: k′ := (k, g_k(x_0))  // the description of the new function
4: t_k′ := (t_k, x_0)  // the trapdoor associated with the function f_k′
5: return (k′, t_k′)

The Evaluation procedure receives as input an index k′ of a function from F and an element x̃ = (x, c) from the function's domain (x̃ ∈ D × {0, 1}), where every function from F is defined as:

f_k′(x, c) := g_k(x) ∘ g_k(x_0)^c, i.e. f_k′(x, 0) = g_k(x) and f_k′(x, 1) = g_k(x ⋆ x_0)

FromInj.Inv_F(k′, y, t_k′)
1: // y is an element from the image of f_k′
2: x_1 := Inv_G(k, y, t_k)
3: x_2 := x_1 ⊘ x_0
4: return (x_1, 0) and (x_2, 1)  // the unique 2 preimages corresponding to
5:                               // an element from the image of f_k′

Proof. To prove Theorem 6.1 we give below five lemmata showing that the family F of functions defined above satisfies the following properties: (i) two-regularity, (ii) trapdoor, (iii) one-wayness, (iv) collision resistance and (v) quantum-safety if G is quantum-safe.
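To make the mechanics of the FromInj construction concrete, here is a toy instantiation in which g is modular exponentiation, which is injective on Z_{p−1} and homomorphic (sending ⋆ = addition mod p−1 to ∘ = multiplication mod p). Discrete exponentiation is of course not quantum-safe, and the "trapdoor" below is a brute-force lookup table; this sketch illustrates only the structure of the construction, not a secure choice of G.

```python
import secrets

# toy instantiation (NOT quantum-safe): g(x) = h^x mod p, injective on Z_{p-1},
# homomorphic: g(a + b mod p-1) = g(a) * g(b) mod p
p, h = 11, 2                         # tiny prime with generator 2
D = p - 1                            # domain Z_{p-1}

def g(x):
    return pow(h, x, p)

dlog = {g(x): x for x in range(D)}   # brute-force table standing in for the trapdoor of g

# FromInj.Gen: sample x0 != 0, publish k' = (k, g(x0)), keep x0 secret
x0 = 1 + secrets.randbelow(D - 1)
k_pub = g(x0)

def f(x, c):                         # f_{k'}(x, c) = g(x) * g(x0)^c mod p = g(x + c*x0)
    return (g(x) * pow(k_pub, c, p)) % p

def invert(y):                       # FromInj.Inv: the exactly-two preimages of y
    x = dlog[y]
    return (x, 0), ((x - x0) % D, 1)

y = f(secrets.randbelow(D), 0)
(p0, c0), (p1, c1) = invert(y)
print(f(p0, c0) == y == f(p1, c1))   # True: both returned points map to y
```

Since Im f = Im g, every image here has exactly two preimages, matching Lemma 6.2.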
Lemma 6.2 (two-regular). If G is a family of injective, homomorphic functions, then F is a family of two-regular functions.
Proof. For every y ∈ Im f_k′ ⊆ R, where k′ = (k, g_k(x_0)):

1. Since Im f_k′ = Im g_k and g_k is injective, there exists a unique x := g_k^{−1}(y) such that f_k′(x, 0) = g_k(x) = y.
2. Assume x′ is such that f_k′(x′, 1) = y. By definition f_k′(x′, 1) = g_k(x′ ⋆ x_0)¹³ = y; but g_k is injective and g_k(x) = y by assumption, therefore there exists a unique x′ = x ⊘ x_0 such that f_k′(x′, 1) = y.

Therefore, we conclude that:

∀ y ∈ Im f_k′ : f_k′^{−1}(y) := {(g_k^{−1}(y), 0), (g_k^{−1}(y) ⊘ x_0, 1)}    (24)

13: The last equality follows since each function g_k from G is homomorphic.

Lemma 6.3 (trapdoor). If G is a family of injective, homomorphic, trapdoor functions, then F is a family of trapdoor functions.
Proof. Let y ∈ Im f_k′ ⊆ R. Using the trapdoor t_k′ = (t_k, x_0), we construct the following inversion algorithm:
1: x := Inv_G(k, y, t_k)
2: return (x, 0) and (x ⊘ x_0, 1)

Lemma 6.4 (one-way). If G is a family of injective, homomorphic, one-way functions, then F is a family of one-way functions.
Proof. We prove it by contradiction. Assume that a QPT adversary A can invert any function in F with non-negligible probability P (i.e. given y ∈ Im f_k′, A returns a correct preimage of the form (x′, b) with probability P). We then construct a QPT adversary A′ that inverts a function in G with the same non-negligible probability P, reaching a contradiction, since G is one-way by assumption. From Eq. (24) of Lemma 6.2 we know the two preimages of y are (i) (g_k^{−1}(y), 0) and (ii) (g_k^{−1}(y) ⊘ x_0, 1). We see that information on g_k^{−1}(y) is obtained in both cases, i.e. obtaining either of these two preimages is sufficient to recover g_k^{−1}(y) when x_0 is known. We now construct an adversary A′ that, for any function g_k : D → R, inverts any output y = g_k(x) with the same probability P with which A succeeds.

Lemma 6.5 (collision resistance). If G is a family of injective, homomorphic, one-way functions, then any function f ∈ F is collision resistant.
Proof. We assume that a QPT adversary A can find, with non-negligible probability P, a collision of f_k′, i.e. a pair ((x_1, b_1), (x_2, b_2)) of distinct preimages of some y. From Eq. (24) we know that the two preimages are of the form (x, 0), (x ⊘ x_0, 1), where g_k(x) = y. It follows that when A is successful, by comparing the first arguments of the two preimages, it can recover x_0.
We now construct a QPT adversary A′ that inverts the function g_k with the same probability P, reaching a contradiction. Given a challenge y ∈ Im g_k, A′ sets k′ := (k, y) and runs A to obtain a collision ((x_1, b_1), (x_2, b_2)) of f_k′; then:
4: return x := x_1 ⊘ x_2
5: else  // A failed to find a collision of f_k′; happens with probability (1 − P)
6: return 0

Lemma 6.6 (quantum-safe). If G is a family of quantum-safe trapdoor functions, with properties as above, then F is also a family of quantum-safe trapdoor functions.
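The reduction of Lemma 6.5 can be sketched in the same toy discrete-exponentiation setting as before (not quantum-safe, purely illustrative): the inverter embeds its challenge y as g(x_0), so the first arguments of any collision differ by exactly g^{−1}(y). The collision finder below cheats with a lookup table solely to drive the demonstration.

```python
import secrets

# toy setup as in the FromInj sketch (NOT quantum-safe; illustration only)
p, h = 11, 2
D = p - 1
g = lambda x: pow(h, x, p)
dlog = {g(x): x for x in range(D)}   # cheat table standing in for the collision finder's power

def collision_finder(k_pub):
    # stand-in adversary A: returns a collision ((x1,0),(x2,1)) of
    # f_{k'}(x, c) = g(x) * k_pub^c mod p, for k' = (k, k_pub)
    x1 = secrets.randbelow(D)
    x2 = (x1 - dlog[k_pub]) % D      # then f(x1, 0) = f(x2, 1)
    return (x1, 0), (x2, 1)

def invert_g(y):
    # the reduction A': feed the challenge y in as g(x0), so the
    # collision's first arguments differ by exactly x0 = g^{-1}(y)
    (x1, _), (x2, _) = collision_finder(y)
    return (x1 - x2) % D

x = secrets.randbelow(D)
print(invert_g(g(x)) == x)           # True: a collision finder inverts g
```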
Proof. The properties that need to be quantum-safe are one-wayness and collision resistance. Both of these properties of F were derived above by reduction to the hardness (one-wayness) of G. Therefore, if G is quantum-safe, its one-wayness holds against quantum adversaries, and thus both properties of F are also quantum-safe.
Theorem 6.7. If G is a family of bijective, trapdoor one-way functions, then there exists a family F of two-regular, second preimage resistant, trapdoor one-way functions. Moreover, the family F is quantum-safe if and only if the family G is quantum-safe.
The family F is described by the following PPT algorithms, where each function g_k ∈ G has domain D and range R. The key generation samples two indices (k_1, t_{k_1}), (k_2, t_{k_2}) ← Gen_G(1^n) and sets k′ := (k_1, k_2), t_k′ := (t_{k_1}, t_{k_2}), and every function from F is defined as:

f_k′(x, c) := g_{k_{c+1}}(x), i.e. f_k′(x, 0) = g_{k_1}(x) and f_k′(x, 1) = g_{k_2}(x)

FromBij.Inv_F(k′, y, t_k′)
1: // y is an element from the image of f_k′, k′ = (k_1, k_2), t_k′ = (t_{k_1}, t_{k_2})
2: x_1 := Inv_G(k_1, y, t_{k_1})
3: x_2 := Inv_G(k_2, y, t_{k_2})
4: return (x_1, 0) and (x_2, 1)  // the unique 2 preimages corresponding to
5:                               // an element from the image of f_k′

The proof of Theorem 6.7, using the family of functions defined above, follows the same steps as that of Theorem 6.1 and is given in Appendix C.
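A toy instantiation of the FromBij construction, with two random permutations standing in for the bijective trapdoor functions and their inverse tables playing the role of the trapdoors. Random permutations are of course not one-way; this shows only the two-regularity mechanics.

```python
import random

rng = random.Random(7)
N = 16                                       # toy domain {0, ..., 15}
perm1 = list(range(N)); rng.shuffle(perm1)   # two "bijective trapdoor" functions;
perm2 = list(range(N)); rng.shuffle(perm2)   # the inverse tables act as the trapdoors
inv1 = {y: x for x, y in enumerate(perm1)}
inv2 = {y: x for x, y in enumerate(perm2)}

def f(x, c):                                 # f_{k'}(x, c) = g_{k_{c+1}}(x)
    return perm1[x] if c == 0 else perm2[x]

def invert(y):                               # trapdoor inversion: the two preimages
    return (inv1[y], 0), (inv2[y], 1)

y = f(rng.randrange(N), 1)
(a, ca), (b, cb) = invert(y)
print(f(a, ca) == y == f(b, cb))             # True: every image has exactly two preimages
```

Because both g's are bijections onto the same range, every y has exactly one preimage per branch, which is precisely the two-regularity of Theorem 6.7.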

Injective, homomorphic quantum-safe trapdoor one-way function from LWE (taken from [MP12])
We outline the Micciancio and Peikert [MP12] construction of injective trapdoor one-way functions, naturally derived from the Learning With Errors problem. At the end we comment on the homomorphic property of the function, since this is crucial in order to use this function as the basis to obtain our desired two-regular, collision resistant trapdoor one-way functions. The key-generation algorithm generates the index of an injective function and its corresponding trapdoor. The matrix G used in this procedure is a fixed matrix (whose exact form can be seen in [MP12]) for which the function from the family G with index G can be efficiently inverted. The actual description of the injective trapdoor function is given in the evaluation algorithm below, where each function from G is defined on g_K : Z_q^n × L^m → Z_q^m, and L is the domain of the errors in the LWE problem (the set of integers bounded in absolute value by µ):

LWE.Eval_G(K, (s, e))
1: y := g_K(s, e) = s^t K + e^t
2: return y

The inversion algorithm returns the unique preimage (s, e) corresponding to b^t ∈ Im(g_K). The algorithm uses as a subroutine the efficient algorithm Inv_G for inverting the function g_G, with G the fixed matrix mentioned before. We now examine whether the functions g_K are homomorphic with respect to some operation.
Given a = (s_1, e_1) ∈ Z_q^n × L^m and b = (s_2, e_2) ∈ Z_q^n × L^m, the operation ⋆ is defined as:

(s_1, e_1) ⋆ (s_2, e_2) = (s_1 + s_2 mod q, e_1 + e_2)

Given y_1 = g_K(a) ∈ Z_q^m and y_2 = g_K(b) ∈ Z_q^m, the operation ∘ is defined as:

y_1 ∘ y_2 = y_1 + y_2 mod q

Then, we can easily verify that:

g_K(s_1, e_1) + g_K(s_2, e_2) mod q = s_1^t K + e_1^t + s_2^t K + e_2^t mod q = (s_1 + s_2 mod q)^t K + (e_1 + e_2)^t = g_K((s_1 + s_2) mod q, e_1 + e_2)

However, the sum of two error terms, each bounded by µ, may not be bounded by µ. This means that the function is not (properly) homomorphic. Instead, what we conclude is that as long as the vector e_1 + e_2 lies inside the domain of g_K, then g_K behaves homomorphically. To address this issue, we will need to define a weaker notion of 2-regularity, and a (slight) modification of the FromInj construction, to obtain the desired function starting from the trapdoor function of [MP12].
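The approximate homomorphism, and the noise-growth problem, can be seen directly on a toy instance of g_K(s, e) = s^t K + e^t mod q. The sizes below are illustrative only, and the matrix K carries no trapdoor structure; the point is just that the identity holds modulo q while the summed error may leave the bounded error domain.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q, mu = 4, 8, 97, 4                 # toy LWE sizes; errors bounded by mu in abs value

K = rng.integers(0, q, size=(n, m))       # public matrix (no trapdoor structure here)

def g(s, e):                              # g_K(s, e) = s^t K + e^t mod q
    return (s @ K + e) % q

s1, s2 = rng.integers(0, q, n), rng.integers(0, q, n)
e1, e2 = rng.integers(-mu, mu + 1, m), rng.integers(-mu, mu + 1, m)

lhs = (g(s1, e1) + g(s2, e2)) % q
rhs = g((s1 + s2) % q, e1 + e2)
print(np.array_equal(lhs, rhs))           # True: the identity holds mod q...
print(int(np.abs(e1 + e2).max()))         # ...but the summed noise can reach 2*mu
```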

A suitable δ-2 regular trapdoor function
Using the homomorphic injective trapdoor function of Micciancio and Peikert [MP12] and the construction defined in the proof of Theorem 6.1, we derive a family F of collision-resistant trapdoor one-way functions with a weaker notion of 2-regularity, called δ-2 regularity:

Definition 6.1 (δ-2 regular). A family of functions (f_i)_{i←Gen_F} is said to be δ-2 regular, with δ ∈ [0, 1], if:

Pr_{y←Im(f_i)}[|f_i^{−1}(y)| = 2] ≥ δ

Given this definition, we should note here that in Protocol 4.1 we need to modify the abort case to include the possibility that the image y obtained from the measurement does not have two preimages (something that happens with probability at most (1 − δ)).
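The effect captured by δ-2 regularity can be mimicked in one dimension: take g to be the identity embedding of a bounded integer interval (injective, homomorphic over addition), so that shifting by x_0 pushes some domain points "over the edge", exactly as the bounded LWE noise does. Counting preimages then gives a δ strictly below 1. This is an analogy of our own making, not the construction of Definition 6.2.

```python
from collections import Counter
from fractions import Fraction

M, x0 = 1000, 60             # domain {0,...,M-1}; x0 plays the role of the trapdoor shift

def f(x, c):                 # g = identity embedding of {0,...,M-1} into Z (injective,
    return x + c * x0        # homomorphic over +); f(x, c) = g(x) + c * g(x0)

counts = Counter(f(x, c) for x in range(M) for c in (0, 1))
delta = Fraction(sum(1 for v in counts.values() if v == 2), len(counts))
print(delta)                 # equals (M - x0)/(M + x0): most images have two preimages
```

Only the images y with both y and y − x_0 inside the domain keep two preimages, which is the one-dimensional analogue of the noise term e + e_0 staying inside the error set.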
Theorem 6.8 (Existence of a δ-2 regular trapdoor function family). There exists a family of functions that are δ-2 regular (with δ at least as big as a fixed constant), trapdoor, one-way, collision resistant and quantum-safe, assuming that there is no quantum algorithm that can efficiently solve SIVP γ for γ = poly(n).
Proof. To prove this theorem, we define a function similar to the one in the FromInj construction, where the starting point is the function defined in [MP12]. Crucial for the security is a choice of parameters that satisfies a number of conditions, given by Lemma 6.9 and proven in Appendix D. The proof is then completed by providing a choice of parameters, given in Lemma 6.10, that satisfies all conditions, as shown in Appendix E.
Definition 6.2. For a given set of parameters P chosen as in Lemma 6.9, we define the following functions, similar to the construction FromInj except for the key generation, which requires an error sampled from a smaller set. Note that the pairs (s, e) and (s_0, e_0) correspond to the x and x_0 of the FromInj construction of Subsection 6.1. The idea behind this construction is that the noise of the trapdoor is sampled from a set which is small compared to the noise of the input of the function. That way, when the trapdoor is added to an input, the total noise will still be small enough to stay in the set of possible input noise with good probability, mimicking the homomorphic property needed in Theorem 6.1. Note that the parameters need to be carefully chosen, and a trade-off between probability of success and security exists.

Lemma 6.9 (Requirements on the parameters). For all n, q, µ ∈ Z, µ′ ∈ R, let us define:
4. αq ≥ 2√n (required for the LWE to SIVP reduction)
5. n/α is poly(n) (representing, up to a constant factor, the approximation factor γ in the SIVP_γ problem)
6.
(required for the correctness of the inversion algorithm: r_max represents the maximum length of an error vector that can be corrected using the [MP12] function 14, and the last term is needed in the proof of collision resistance to ensure injectivity even when we add the secret trapdoor noise, as illustrated in Figure 2); then the family of functions of Definition 6.2 is δ-2 regular (with δ at least as large as a fixed constant), trapdoor, one-way and collision resistant (all these properties hold even against a quantum attacker), assuming that there is no quantum algorithm that can efficiently solve SIVP_γ for γ = poly(n).
Proof. The proof follows by showing that the function with these constraints on the parameters is (i) δ-2 regular, (ii) collision resistant, (iii) one-way and (iv) trapdoor. In Appendix D we state and prove one lemma for each of these properties. For an intuition of the choice of parameters see also Figure 2.

Figure 2: The red circle represents the domain of the error term of the trapdoor information, which is sampled from a Gaussian distribution. The orange square is an approximation of this domain, whose side length must be much smaller (by a factor of at least m, the dimension of the error) than the side length of the blue square, which is used for the actual sampling from the domain of the error terms; the green circle represents the domain on which the trapdoor function is known to be invertible. The dashed part is needed to ensure that if there is a collision (x₁, x₂), the colliding inputs still lie within the region where the [MP12] function is injective.
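The homomorphic-shift idea behind Definition 6.2 (and FromInj) can be illustrated with a toy numeric sketch. All names (f, b0, mu, mu_p) and parameter sizes below are illustrative assumptions only and are far too small to be secure:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 4, 8, 97      # toy sizes, chosen only for illustration (insecure)
mu, mu_p = 10, 1        # input-noise bound and (much smaller) trapdoor-noise bound

A = rng.integers(0, q, size=(m, n))
s0 = rng.integers(0, q, size=n)               # trapdoor secret
e0 = rng.integers(-mu_p, mu_p + 1, size=m)    # trapdoor noise, from the SMALL set
b0 = (A @ s0 + e0) % q                        # public LWE-style shift

def f(s, e, c):
    # f(s, e, c) = A s + e + c * b0 (mod q); the bit c selects the shift
    return (A @ s + e + c * b0) % q

s = rng.integers(0, q, size=n)
e = rng.integers(-mu, mu + 1, size=m)

y1 = f(s, e, 0)
y2 = f((s - s0) % q, e - e0, 1)   # the intended second preimage
assert np.array_equal(y1, y2)     # both branches hit the same image

# The second preimage lies inside the domain only if the shifted noise stays
# bounded -- exactly the event whose probability is lower-bounded by delta:
has_two_preimages = bool(np.all(np.abs(e - e0) <= mu))
```

Because the trapdoor noise bound mu_p is much smaller than mu, the event `has_two_preimages` occurs with good probability, which is the δ-2 regularity of the construction.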

Parameter Choices
Lemma 6.10 (Existence of parameters). The following set of parameters fulfils the requirements of Theorem 6.9: n = λ, k = 5 log(n) + 21, and α, α′, C defined as in Theorem 6.9.
The proof is given in Appendix E. As a final remark, we stress that other choices of parameters are possible (given the trade-off between security and probability of success); we have not attempted to find an optimal set.

Discussion
In this work we deal with Quantum-Honest-But-Curious adversaries. Naturally, the final aim should be to provide security against a (fully) malicious adversary/Server. There are two (linked) issues to consider when dealing with malicious adversaries. The first is whether the Server (by deviating arbitrarily) can obtain extra information about the secret classical description (of the state supposedly prepared). The second is whether the actual state at the end of the protocol is (essentially) the one that the Client believes it to be, i.e. whether the functionality provides verification. We comment on these issues separately, and then conclude with an approach that could lead to a solution of both.

Issue 1 (privacy): The most naive way for the Server to deviate in order to obtain information is to return y, b other than those obtained from an honest run of the protocol. Since y, b determine (along with other parameters) the value of the secret θ, a deviation there could break the security. For example, instead of the (truly) random y obtained in an honest run, the Server can choose a y for which he has information on the preimages for the given k, or can choose b adaptively depending on the values of α. However, the function f_k is collision resistant, which means that even if the adversary chooses y, he cannot find a y for which he knows both preimages, except with negligible probability. Moreover, if the Server chooses y, the protocol was not followed, and thus the final output state will not be related to the value θ as expected. We conjecture that the hard-core function proof (Theorem 5.2) remains valid in that case with minor modifications. The more significant difficulty, however, comes from "mixed" strategies, where the adversary partly follows the protocol (so the output qubit is correlated with the classical secret description) and partly deviates.
In those cases it is hard to quantify what information the Server has, and whether it is strictly less than in an ideal protocol (where the state |+_θ⟩ gives some legitimate information).

Issue 2 (verification):
The first thing to note is that the adversary has the output state in his lab, and can therefore (trivially) apply a final-step deviation corrupting the legitimate output. Thus when we speak of verification, we mean a correct state up to a (reversible) deviation on the Server's side (as the operations T_i in the definition of specious adversaries). The second thing to stress is that Protocol 4.1 cannot be verifiable against a malicious Server unless some extra mechanism is added. There is a way, by deviating from the instructions, to corrupt the output in a way that depends on the secret classical description θ, without actually learning any information about that description. In particular, by measuring all qubits of the first register at angle 3α, the Server can generate the state |+_{3θ}⟩ as output. This deviation does not help the Server learn any information about θ (the protocol remains "private"), but it affects the output state in a "non-reversible" way and thus compromises verifiability.

A way forward: The ultimate goal would be to extend QFactory to a Quantum Universal Composable protocol [Unr10], so that it can be composed with any other protocol, or at least to prove security against a malicious adversary. In classical protocols (and recently in quantum ones too [KMW17]), the way to boost security from honest-but-curious to malicious is to introduce a "compiler" (e.g. using the construction in [GMW87] or a cut-and-choose technique) that essentially enforces honest-but-curious behaviour on malicious adversaries (or aborts). In our case the protocol is simple, having single qubits as outputs. One method could be to prepare a large string of qubits, have the Client choose a random subset of them, and instruct the Server to measure those. By observing correct statistics on the "test" qubits, one can infer the correct preparation.
This is closely related to parameter estimation in QKD and to self-testing [MYS12]. The exact details are involved, as the analogous cases of compilers, parameter estimation and self-testing suggest, and will be explored in a future publication.

Appendices

A PSRQG within several applications
In Subsection 1.2 we listed several applications that can use the PSRQG functionality to allow fully classical parties to participate using a (potentially malicious) quantum server. Here we give details on how to use the exact output of our QFactory protocol in these applications. We emphasize that in all protocols in which the "server" used by the classical party is a malicious party, the cost of using our QFactory construction is that the security becomes computational and applies in the quantum-honest-but-curious setting.
1. In the quantum homomorphic encryption scheme AUX of [BJ15], where the target quantum computation must have constant T-gate depth, using our QFactory protocol would allow a classical client to participate (delegate such a computation) provided, of course, that the input/output are classical. Specifically, as the input is classical, the client will instruct the server to prepare a quantum state of the classical one-time pad of this input (and then the client will also send the server a classical homomorphic encryption of the classical one-time-pad key of each of the input's bits). Moreover, for every T-gate in the quantum computation, the auxiliary qubits in the evaluation key can be produced using QFactory: |+⟩, P|+⟩ = |+_{2π/4}⟩, Z|+⟩ = |+_{4π/4}⟩, ZP|+⟩ = |+_{6π/4}⟩. We note that, due to the use of a classical fully homomorphic encryption scheme, the AUX protocol [BJ15] is computationally secure; thus the computational security offered by QFactory does not downgrade the security of this protocol.
2. In the blind delegated quantum computation protocol of [BFK09], the client needs to prepare and send to the server qubits chosen randomly from the set of states {|+⟩, |+_{π/4}⟩, ..., |+_{7π/4}⟩}. This is exactly the set of states of Eq. (7) given by QFactory. It follows that our construction eliminates the need for quantum communication, and thus any classical client can use this protocol.
3. In the verifiable blind quantum computation protocol of [FKD17], the only quantum ability the verifier needs is to prepare and send to the prover single qubits chosen randomly from the set of states {|+_{kπ/4}⟩}. Again, this is exactly the set of states given by QFactory. Therefore the quantum communication, and thus the quantum abilities of the verifier, can be completely replaced by the QFactory functionality.
4. For the quantum key-distribution construction in [BB84], we can use two conjugate bases to realise the protocol, namely the diagonal basis {|+⟩, |+_π⟩} and the left/right-handed circular basis {|+_{π/2}⟩, |+_{3π/2}⟩}. All four of these quantum states can be obtained by the QFactory protocol 15. As the quantum coin-flipping protocol of [BB84], the quantum money protocol of [BOV+18] and the quantum digital signatures protocol of [WDKA15] only require, as in [BB84], some pair of conjugate bases, we can use QFactory in a straightforward way. On the other hand, for the quantum coin-flip construction in [PCDK11], the single-qubit states needed are of the form √a|0⟩ + (−1)^{α_i}√(1 − a)|1⟩, which might be achieved by a different construction of the PSRQG.

5. In the multiparty quantum computation protocol of [KP17], the n clients need to send multiple copies of quantum states in the set {|+_{kπ/4}⟩} to the server, who entangles and measures all but one of them. Using QFactory, all these states can be prepared by the server, which would enable the n clients to be fully classical.
6. The verifiable blind quantum computation protocols in [Bro15], [FK12] and the two-party quantum computation protocols in [KW17], [KMW17] require the honest party to prepare single-qubit states from the set {|0⟩, |1⟩, |+_{kπ/4}⟩}. While the QFactory primitive can output the |+_{kπ/4}⟩ states, in order to make the honest party fully classical we would need to modify the QFactory construction so that it can also output the |0⟩ and |1⟩ states, while maintaining the same privacy guarantees as QFactory.
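All the applications above draw states from the family |+_θ⟩ = (|0⟩ + e^{iθ}|1⟩)/√2 with θ = kπ/4. A small NumPy sketch (illustrative only) verifying the identities used in item 1, i.e. that P, Z and ZP rotate |+⟩ within this set:

```python
import numpy as np

def plus(theta):
    # |+_theta> = (|0> + e^{i theta} |1>) / sqrt(2)
    return np.array([1, np.exp(1j * theta)]) / np.sqrt(2)

Z = np.diag([1, -1])     # Pauli Z
P = np.diag([1, 1j])     # phase gate

# The eight-state output set of QFactory, Eq. (7):
eight_states = [plus(k * np.pi / 4) for k in range(8)]

def same_state(u, v):
    # equality of pure states up to a global phase
    return abs(abs(np.vdot(u, v)) - 1) < 1e-12

assert same_state(P @ plus(0), plus(np.pi / 2))        # P|+> = |+_{2pi/4}>
assert same_state(Z @ plus(0), plus(np.pi))            # Z|+> = |+_{4pi/4}>
assert same_state(Z @ P @ plus(0), plus(3 * np.pi / 2))  # ZP|+> = |+_{6pi/4}>
```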
B Full proof of Theorem 5.2

Proof. From Eq. (15) we have the definition of B in terms of the three corresponding bits, and we aim to prove that it is hard-core, i.e. that Eq. (17) is satisfied. We will follow the five steps outlined in the main text. Before that, let us define some simple identities (I4–I7) that will be used: for all a, b, d, e ∈ N, we have the following. We now return to Eq. (15). We also define x̄ = x ⊕ x′ ∈ {0, 1}^n and let z ∈ {−1, 0, 1}^n be the vector defined as:

Step 1: We rewrite this expression in terms of single bits.
B₂ = (S₂ mod 2) ⊕ ((S₃ mod 4 − S₃ mod 2)/2). Finally, we can derive B₁: applying I7, then I5, and then I6 to rewrite the first term; combining the first and third terms, we notice that both S₂ − (S₂ mod 2) and (S₃ − (S₃ mod 4))/2 are even, so the first big term is 0, and finally, using I4, we obtain the desired form. Therefore, to make our analysis easier, we can consider z and x̄ fixed. Then, defining a suitable function, we can rewrite B₃, B₂, B₁ as in Eq. (17), completing Step 1.

Step 2: We see from Eq. (28) that each of the three bits involves a term similar to that of the GL Theorem 2.2 (the B_α(i) term), but with two important differences. First, there is another term, and the bits of B are XORs of the GL-looking term and that other one. These second terms (involving h₁, h₂) depend on variables that appear in the expressions of the other bits, potentially introducing correlations among the different bits. We deal with the issue of correlations in Step 3, and with the effects of the extra terms in Steps 4 and 5. Here we deal with the second important difference, namely that the GL-looking terms (those of the form ⟨x̄, r⟩ mod 2) depend on x̄ rather than x in the inner product. For the remainder of Step 2, we assume that the first issue is resolved and everything reduces to the GL theorem, subject to having x̄ rather than x.
Since we have x̄ in our expression, if we follow the proof of the GL theorem we can proceed up to the point where we obtain a polynomial number of guesses for x̄, of which one is the correct value with probability negligibly close to unity. To continue the proof we are lacking two elements. First, the GL theorem uses the fact that computing f(x) given x is easy, and checks the polynomially many guesses one by one to see which (if any) is correct. We cannot do this, since we only obtain x̄, and without extra information there is no way to check whether a given x̄ actually corresponds to a given image y = f(x) = f(x′). The second issue is that, even if we could check this, having obtained x̄ does not contradict the definition of a one-way function (Definition 2.2).
We resolve both issues with two observations. Observation 1: We notice that, because of the 2-regularity property of f, x′ is uniquely determined by x and x̄ (since x′ = x ⊕ x̄). The assumption that our 2-regular trapdoor function f is second-preimage resistant (i.e. a QPT adversary, given x, cannot find the second preimage x′ with f(x) = f(x′)) is formalized as Eq. (33). As we have mentioned, following the GL theorem proof we obtain polynomially many guesses x̄_g for x̄ (where the subscript g stands for guess). By second-preimage resistance, given x we should be unable to obtain x′ in polynomial time. However, using our polynomially many guesses for x̄ and checking for each guess whether f(x ⊕ x̄_g) = f(x), we can obtain the correct x̄ with probability negligibly close to unity, and therefore reach a contradiction with Eq. (33).
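The guess-checking step of Observation 1 can be sketched numerically. Below, f is a hypothetical toy 2-regular function whose preimage pairs differ by a hidden mask XBAR (standing in for x̄); checking f(x ⊕ x̄_g) = f(x) identifies the correct guess among polynomially many:

```python
import random

rng = random.Random(1)
n = 16
XBAR = rng.getrandbits(n) or 1   # hidden XOR-mask relating the two preimages

def f(x):
    # Toy 2-regular function: f(x) = f(x ^ XBAR), so each image has the
    # preimage pair {x, x ^ XBAR}.  (Illustration only; not one-way.)
    return min(x, x ^ XBAR)

def recover_second_preimage(x, guesses):
    # GL-style check: test each candidate xbar_g against the public image f(x).
    y = f(x)
    for g in guesses:
        if g != 0 and f(x ^ g) == y:
            return x ^ g
    return None

x = rng.getrandbits(n)
guesses = [rng.getrandbits(n) for _ in range(20)] + [XBAR]  # one guess is correct
assert recover_second_preimage(x, guesses) == x ^ XBAR
```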
Step 3: Since the different bits involve common variables, to prove that our function is hard-core we need to address possible correlations. One way to do this would be to prove the independence both of the bits and of the optimal guessing algorithms. Instead, we use the Vazirani-Vazirani Theorem 2.3, which in our case means that it suffices to show that every XOR of a non-empty subset of {B₁, B₂, B₃} is a hard-core predicate. The most general expression that captures all these (to-be-proven) hard-core predicates is E(x, r₁, r₂, r₃), where g can be any binary function. Using ⟨x̄, r₁ ⊕ r₂⟩ mod 2 = ⟨x̄, r₁⟩ mod 2 ⊕ ⟨x̄, r₂⟩ mod 2, we can rewrite this as E(x, r₁, r₂, r₃) = ⟨x̄, r₁⟩ mod 2 ⊕ g′(z, r₂, r₃), where g′(z, r₂, r₃) = ⟨x̄, r₂⟩ mod 2 ⊕ g(z, r₂, r₃). In other words, in order to prove that B₁B₂B₃ is a hard-core function for f, it suffices to prove that E(x, r₁, r₂, r₃) is a hard-core predicate for f.
Step 4: In this step we show how to effectively fix all but one of the variables, turning Eq. (35) into an expression depending only on r₁.
We want to prove that if there exists a QPT algorithm A that can guess the predicate E of Eq. (35), A(f(x), r₁, r₂, r₃) = E(x, r₁, r₂, r₃), with probability non-negligibly better than 1/2, then the second-preimage resistance assumption is violated: we construct a QPT algorithm A′ that, given x, obtains x̄ (and hence x′ = x ⊕ x̄) with non-negligible probability.
We now assume that the advantage of A in computing E is ε(n), without restricting ε(n) to be non-negligible, aiming to reach a contradiction if ε(n) is inverse polynomial; we therefore assume that Eq. (36) holds. Since the different variables (x, r₁, r₂, r₃) are chosen randomly and independently, we can effectively "fix" one variable: we consider the set of values of that variable satisfying some condition we need (e.g. that the guessing algorithm A succeeds with better than negligible advantage) and call these the "Good" values. We can then work under the assumption that the fixed variable is within the "Good" set, with the only caveat that, at the end, whatever inversion probability we obtain is conditional on the fixed variables being "Good", so we must multiply that probability by the probability that the fixed variable is "Good". For this reason, it is important that the probability of being "Good" (the ratio of the number of Good values to the total number of values) be at least inverse polynomial.
We will therefore use the following lemma (Theorem 5.3): if Pr[Guessing] ≥ p + ε(n), then for any variable v_i there exists a set Good_{v_i}, with |Good_{v_i}| ≥ (ε(n)/2) · 2^n, such that for every v_i ∈ Good_{v_i} we have Pr[Guessing | v_i] ≥ p + ε(n)/2, where the probability is taken over all variables except v_i. (The proof is a standard averaging argument.)
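A quick numerical illustration of this averaging lemma (Python, with made-up success probabilities): when the overall guessing probability exceeds p + ε, a fraction of at least ε/2 of the values of the fixed variable retains advantage at least ε/2:

```python
import random

rng = random.Random(7)
N = 200      # possible values of the variable v_i we may fix
M = 500      # joint settings of the remaining variables
eps = 0.2    # made-up advantage, for illustration only

# Hypothetical success table: success[i][j] = 1 iff the guesser succeeds
# when v_i takes its i-th value and the remaining variables their j-th setting.
success = [[1 if rng.random() < 0.5 + eps else 0 for _ in range(M)]
           for _ in range(N)]

overall = sum(map(sum, success)) / (N * M)
assert overall >= 0.5 + eps / 2        # overall advantage is about eps

# "Good" values of v_i: those whose conditional success is >= 1/2 + eps/2.
good = [i for i in range(N) if sum(success[i]) / M >= 0.5 + eps / 2]

# The averaging lemma promises |Good| >= (eps/2) * N whenever the overall
# success probability is >= 1/2 + eps.
assert len(good) >= (eps / 2) * N
```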
Now we return to Eq. (36) and fix the set Good_x of inputs x for which the conditional guessing probability of Eq. (39) holds; using Theorem 5.3 we have |Good_x| ≥ (ε(n)/2) 2^n. Note that fixing x is equivalent to fixing x̄ or z, given the definition of the 2-regular function f. Starting from Eq. (39) we can now fix r₃ (conditional on x ∈ Good_x), where, using Theorem 5.3 again, |Good_{r₃}| ≥ (ε(n)/4) 2^n. Finally, we can fix r₂ (conditional on x ∈ Good_x and r₃ ∈ Good_{r₃}), obtaining Eq. (41), and again by Theorem 5.3 we have |Good_{r₂}| ≥ (ε(n)/8) 2^n.
Step 5: In Eq. (41) the only free variable is r₁. Using Eq. (35), given that x, r₂, r₃ are all fixed, E(x, r₁, r₂, r₃) = ⟨x̄, r₁⟩ ⊕ g′(z, r₂, r₃), where g′(z, r₂, r₃) = c is a constant. Because c is constant, we can define Ã = A ⊕ c. It is then easy to see that Pr_{r₁←{0,1}^n}[A(f(x), r₁, r₂, r₃) = ⟨x̄, r₁⟩ mod 2 ⊕ g′(z, r₂, r₃)] = Pr_{r₁←{0,1}^n}[Ã(f(x), r₁, r₂, r₃) = ⟨x̄, r₁⟩ mod 2]. So, using Eq. (41), we obtain exactly the expression of the GL theorem. There, one obtains guesses for inversion, i.e. obtains x̄ with a probability of success polynomial in ε(n), given the fixed x, r₂, r₃. Multiplying this by the probability of actually being in Good_x, Good_{r₃} and Good_{r₂}, we obtain another polynomial in ε(n). This rules out the possibility of ε(n) being inverse polynomial, since that would break second-preimage resistance. As we have already stated, guessing x̄ with inverse-polynomial success probability does not contradict the one-way property of the trapdoor function, but it does contradict second-preimage resistance, since given x and x̄ one can deterministically obtain x′.
This, as explained in Step 2, breaks second-preimage resistance, Eq. (33). Since all the terms given in Step 3 (B_i, B_i ⊕ B_j, B₁ ⊕ B₂ ⊕ B₃) are of the form E(x, r₁, r₂, r₃) of Eq. (35), our analysis suffices to prove that B₁B₂B₃ is a hard-core function for f.

C Proof of Theorem 6.7

Lemma C.1 (two-regular). If G is a family of bijective functions, then F is a family of two-regular functions.
2. Since Im f_k = Im g_{k₂} and g_{k₂} is bijective, there exists a unique x₂ := g⁻¹_{k₂}(y) such that f_k(x₂, 1) = g_{k₂}(x₂) = y.
Therefore, we conclude that f_k is two-regular.

Lemma C.2 (trapdoor). If G is a family of bijective trapdoor functions, then F is a family of trapdoor functions.
Proof. We prove it by contradiction. Assume that a PPT adversary A can invert any function in F with non-negligible probability P (i.e. given y ∈ Im f_k, A returns a correct preimage of the form (x′, b) with probability P). We then construct a PPT adversary A′ that inverts any function in G with the non-negligible probability P/2, reaching a contradiction since G is one-way by assumption. From Eq. (45) we know the two preimages of y are (i) (g⁻¹_{k₁}(y), 0) and (ii) (g⁻¹_{k₂}(y), 1). We now construct an adversary A′ that, for any function g_k : D → R, inverts any output y = g_k(x) with probability P/2. The inversion algorithm succeeds with probability 1 − ((1 − P) + P/2) = P/2 and thus reaches a contradiction.
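The two-regularity of F and the shape of this inversion reduction can be mirrored on a toy instance of the F-from-two-bijections construction (Python sketch; affine maps mod p stand in for G purely for illustration — unlike the real G, they are of course neither one-way nor trapdoor in any meaningful sense):

```python
# Toy instance: g_k(x) = (a x + b) mod p is a bijection on Z_p, and
# f_{(k1,k2)}(x, c) = g_{k_{c+1}}(x).  Illustrative parameters only.
p = 101

def g(k, x):
    a, b = k
    return (a * x + b) % p

def g_inv(k, y):
    # "trapdoor" inversion of the toy bijection
    a, b = k
    return ((y - b) * pow(a, -1, p)) % p

k1, k2 = (3, 7), (5, 11)

def f(x, c):
    return g(k2 if c else k1, x)

# Every image y has exactly two preimages, one for each value of the bit:
y = f(42, 0)
x1, x2 = g_inv(k1, y), g_inv(k2, y)
assert f(x1, 0) == f(x2, 1) == y
assert x1 == 42

# Sketch of the reduction: given y and the preimage (x2, 1) (computed with
# the trapdoor of g_{k2}), an inverter for f that returns the other preimage
# (x1, 0) has in fact inverted g_{k1}.
```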
Lemma C.4 (second-preimage resistance). If G is a family of bijective one-way functions, then any function f ∈ F is second-preimage resistant.
Proof. Assume there exists a PPT adversary B that, given k = (k₁, k₂) and (y, (x, b)) such that f_k(x, b) = y, can find (x′, b′) such that f_k(x′, b′) = y with non-negligible probability P. From Eq.
(45) we know that the two preimages have different b's. We now construct a PPT adversary B′ that inverts the function g_{k_c} with the same probability P, reaching a contradiction:

B′(k_c, y):
1: (k₂, t_{k₂}) ←$ Gen_G(1^n)
2: x₂ = g⁻¹_{k₂}(y)  // using the trapdoor t_{k₂}
3: k′ := (k_c, k₂)
4: (x, 0) ← B(k′, y, (x₂, 1))  // where y is an element from the image of f_{k′}
5: if f_{k′}(x, 0) == f_{k′}(x₂, 1) == y
6:   return x
7: else  // B failed to find a second preimage; happens with probability 1 − P
8:   return 0

Lemma C.5 (quantum-safe). If G is a family of quantum-safe trapdoor functions with properties as above, then F is also a family of quantum-safe trapdoor functions.
Proof. The properties required to be quantum-safe are one-wayness and second-preimage resistance. Both of these properties of F were derived above via reductions to the hardness (one-wayness) of G. Therefore, if G is quantum-safe, its one-wayness is also quantum-safe, and thus both properties of F are quantum-safe as well.

D Proof of Theorem 6.9

In the following, we denote by f(s, e, c) the function REG2.Eval_P(k, (s, e, c)), for k the function index obtained from REG2.Gen_P(1^n), and by (s₀, e₀) the trapdoor information associated with this function f.
We now prove separately the δ-2 regularity, collision resistance, one-wayness and trapdoor property of the function in Definition 6.2.

D.1 δ-2 regularity
Here we describe how to achieve δ-2 regularity using the construction FromInj and specifically, the function in Definition 6.2.
This reduces to ensuring that the two function inputs (s, e) and (s − s₀, e − e₀) both lie within the domain of the function. The input (s, e) is the result of the inversion algorithm, so it is by definition inside the domain. Additionally, as the first component of the domain is only required to be in Z_q^n, and Z_q is closed under subtraction mod q, we have s − s₀ ∈ Z_q^n for any s, s₀ ∈ Z_q^n. On the other hand, the second component of the domain is required to be in Z^m with each coordinate bounded in absolute value by some value µ. In this case we are not guaranteed that the result of adding or subtracting two such elements is still in the domain. What we want to ensure is that, with (at least) constant probability over the choice of (s, e) and (s₀, e₀), the result (s − s₀, e − e₀) is in the domain of the function.
It is not difficult to show that if (s₀, e₀) is chosen arbitrarily from the domain of the function, then (s − s₀, e − e₀) lies within the domain only with probability inverse exponential in m. This is why we restrict e₀ to a subset of the domain. By a suitable choice of this subset we can make the success probability (of having two preimages), seen as a function of m, at least a constant.
Firstly, we remark that the exact probability of success can be computed explicitly: if the trapdoor noise e₀ is sampled from a Gaussian of dimension m and standard deviation σ, and the noise e₁ is sampled uniformly from a hypercube C of side length 2µ (both distributions centered at 0), then the probability that e₀ + e₁ is still inside C admits a closed form. However, for simplicity, and because we do not aim to find optimal parameters, we will use a (simpler) lower bound on this probability (which will be less efficient by a factor of √m). To do so, remark that by Lemma 2.5 in [Reg05], if e₀ ∈ Z^m has each component sampled from a Gaussian distribution with parameter α′q, then every component of e₀ is smaller than µ′ := α′q√m with overwhelming probability as m increases. One can then remark that, up to a negligible term, the Gaussian distribution with parameter α′q is "closer to 0" than the uniform distribution on [−α′q√m, α′q√m] for sufficiently large m (i.e. for any x, the integral of the Gaussian density between −x and x is larger, up to a negligible term, than that of the uniform density). Therefore, to obtain a lower bound on the probability of having two preimages, we can consider that e₀ is sampled according to the uniform distribution on a hypercube of side length 2α′q√m rather than according to the Gaussian distribution with parameter α′q.
This simplifies our analysis and allows us to find the subset in which e₀ must reside, as shown in the following lemma. Note also that if one does not want to make any assumption on the input distribution, and only assumes that the infinity norm is smaller than µ′, then the same lemma applies with the constant 4 replaced by 2.
Proof. Since ‖e₀ + e₁‖_∞ must be less than µ, i.e. each component of the sum vector must be less than µ in absolute value, and since each component of the two vectors e₀ and e₁ is sampled independently, we can reduce to the one-dimensional case: we determine P_{1,µ,µ′} and then compute P_{m,µ,µ′} = (P_{1,µ,µ′})^m. Let E₁ be a random variable sampled uniformly from [−µ, µ], E₀ a random variable sampled uniformly from [−µ′, µ′], and E = E₁ + E₀, so that P_{1,µ,µ′} = Pr[−µ ≤ E ≤ µ]. We compute the density function of E by convolution of f_{E₁} and f_{E₀}, the probability density functions of E₁ and E₀ (f_{E₁}(e₁) = 1/(2µ) for e₁ ∈ [−µ, µ] and 0 elsewhere; f_{E₀}(e₀) = 1/(2µ′) for e₀ ∈ [−µ′, µ′] and 0 elsewhere). We are only interested in the cases where both f_{E₁}(e₁) and f_{E₀}(e − e₁) are non-zero, for which we consider three cases for e, given by the intervals [−µ − µ′, µ′ − µ], [µ′ − µ, µ − µ′] and [µ − µ′, µ + µ′]. Computing the resulting integrals, we derive P_{1,µ,µ′} = 1 − µ′/(4µ), and consequently P_{m,µ,µ′} = (1 − µ′/(4µ))^m. Now, given that µ′ is a function of m, µ′ = µ′(m), we want to determine the values of µ′ for which this probability (seen as a function of m) is at least a positive constant. If lim_{m→∞} µ′/µ > 0 (and less than 1, as 0 < µ′ < µ), then the probability tends to 0 exponentially fast. Consequently, it is clear that in order to get a positive constant lower bound for the success probability, we must have µ′/µ = O(1/m). Thus, in our case, if e₁ is sampled uniformly on a hypercube of side length 2µ and e₀ from a Gaussian with parameter α′q, replacing the actual values µ = αq√m and µ′ := α′q√m, what we require is that α′/α = O(1/m).
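The closed form P_{1,µ,µ′} = 1 − µ′/(4µ) can be sanity-checked numerically; a Python sketch with arbitrary toy values of µ and µ′:

```python
import random

rng = random.Random(3)

def p1_exact(mu, mu_p):
    # Pr[ |E0 + E1| <= mu ] for E1 ~ U[-mu, mu], E0 ~ U[-mu_p, mu_p], mu_p < mu
    return 1 - mu_p / (4 * mu)

def p1_montecarlo(mu, mu_p, trials=200_000):
    hits = 0
    for _ in range(trials):
        e1 = rng.uniform(-mu, mu)
        e0 = rng.uniform(-mu_p, mu_p)
        hits += abs(e0 + e1) <= mu
    return hits / trials

mu, mu_p = 8.0, 2.0
assert abs(p1_montecarlo(mu, mu_p) - p1_exact(mu, mu_p)) < 0.01

# Per-component independence gives P_{m,mu,mu'} = (1 - mu'/(4 mu))^m, which
# stays bounded below by a constant only when mu'/mu = O(1/m):
p_m = p1_exact(mu, mu_p) ** 100
```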

D.2 Collision resistance
We start with the observation that, for the choices of Definition 6.2, no PPT adversary can infer the trapdoor information (s₀, e₀), as determining s₀ from k = (A, b₀) would be equivalent to solving LWE_{q,Ψ_{α′}}:

Corollary D.2 (One-wayness of the trapdoor [Reg05, Theorem 1.1]). Under the SIVP_γ (with γ = poly(n)) assumption, no PPT adversary can recover the trapdoor information (s₀, e₀).

Lemma D.3 (Collision resistance). The function f defined in Definition 6.2 is collision resistant if the parameters are chosen according to Theorem 6.9, assuming that SIVP_γ is hard.
Then, according to [MP12, Theorem 5.4], there is exactly one element (s, e) with e of length smaller than r_max such that As + e = y. Because (s₁, e₁) is such a solution, we then have s₂ + s₀ = s₁ and e₂ + e₀ = e₁, i.e. e₀ = e₁ − e₂ and s₀ = s₁ − s₂. Hence it is possible to deduce the trapdoor information (s₀, e₀) from the collision pair, which is impossible by Corollary D.2.

D.3 One-wayness
One could imagine that the one-wayness of the function of Definition 6.2 is implied by the one-wayness of the function in [MP12] (as is the case in Theorem 6.4). However, more care is needed here, since in our construction the error term e is not sampled from a Gaussian distribution with suitable parameters (unlike the error term e₀). 16

Lemma D.4 (Collision resistance to one-wayness). Let f : A → B ∪ {⊥} (with ⊥ ∉ B), where A is finite and can be sampled uniformly and efficiently, and let C be the set of all y ∈ B that admit 2 preimages. If the restriction of f to the set f⁻¹(B) is a collision-resistant function that, with non-negligible probability, admits two preimages for any y from its image, and if |f⁻¹(C)|/|A| is non-negligible, then f restricted to the set f⁻¹(C) is a one-way function.
Proof. By contradiction: suppose that f is not one-way on C, i.e. with non-negligible probability we can find a preimage of y for y uniformly sampled from C; from this we show how to find a collision. The idea is to sample an input x ∈ A and compute y := f(x). Since |f⁻¹(C)|/|A| is non-negligible, with non-negligible probability this y has two preimages. Then, with non-negligible probability, the inverter succeeds on y and returns some preimage x′. Because x was sampled uniformly, the two preimages of y are equally likely to have been the sampled one, so with probability 1/2 we have x′ ≠ x, and we have found a collision.
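A toy run of this reduction (Python; f is a hypothetical 2-regular function whose preimage pairs differ by a hidden mask, and the inverter oracle returns a uniformly random preimage, modeling a broken one-wayness):

```python
import random

rng = random.Random(11)
n = 12
XBAR = rng.getrandbits(n) or 1

def f(x):
    # Toy 2-regular function: the preimage pair of f(x) is {x, x ^ XBAR}.
    return min(x, x ^ XBAR)

def inverter(y):
    # Hypothetical oracle witnessing that f is NOT one-way: it returns a
    # uniformly random preimage of y.
    return y if rng.random() < 0.5 else y ^ XBAR

def find_collision(trials=64):
    # Lemma D.4 reduction: sample x, ask the inverter for a preimage of f(x);
    # with probability 1/2 it returns the *other* preimage -> a collision.
    for _ in range(trials):
        x = rng.getrandbits(n)
        xp = inverter(f(x))
        if xp != x:
            return x, xp
    return None

pair = find_collision()
assert pair is not None
assert f(pair[0]) == f(pair[1])
```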
Corollary D.5 (One-wayness, from Lemma D.3 and Lemma D.4). The function defined in Definition 6.2 is one-way for all y that admit two preimages, under the SIVP_γ hardness assumption, when the parameters are chosen according to Theorem 6.9.

D.4 Trapdoor
We want to prove that, using the trapdoor information of the REG2 construction, which consists of (s₀, e₀) together with t_k (the trapdoor information of the LWE function), we can efficiently derive the preimages of an output b of REG2.Eval. Firstly, we notice that to find all the preimages we can simply run LWE.Inv on b as well as on b − b₀ and, if we succeed, keep only the preimages that lie in the input domain, i.e. whose error part e is bounded in infinity norm by µ: ‖e‖_∞ ≤ µ. Because the function is injective, these are all the possible preimages. However, because we are only interested in the case where there are exactly two preimages, the function REG2.Inv can also proceed as follows: first run LWE.Inv on b to obtain (s₁, e₁); the inversion is then completed by returning (s₁, e₁, 0) and (s₁ − s₀, e₁ − e₀, 1), which are both valid preimages if and only if the function has two preimages at b (see Lemma D.3 for more details).
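A toy sketch (Python/NumPy) of this inversion strategy; `lwe_inv` below is a brute-force stand-in for the LWE.Inv algorithm of [MP12], feasible only at these illustrative (insecure) sizes:

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
n, m, q, mu = 2, 6, 23, 2    # tiny illustrative parameters (insecure)
mu_p = 1                     # trapdoor-noise bound, smaller than mu

A = rng.integers(0, q, size=(m, n))
s0 = rng.integers(0, q, size=n)
e0 = rng.integers(-mu_p, mu_p + 1, size=m)
b0 = (A @ s0 + e0) % q

def reg2_eval(s, e, c):
    return (A @ np.asarray(s) + np.asarray(e) + c * b0) % q

def lwe_inv(b):
    # Brute-force stand-in for LWE.Inv: find (s, e) with small e and A s + e = b.
    for s in itertools.product(range(q), repeat=n):
        e = (b - A @ np.array(s)) % q
        e = np.where(e > q // 2, e - q, e)   # centered representative of e mod q
        if np.all(np.abs(e) <= mu):
            return np.array(s), e
    return None

def reg2_inv(b):
    # REG2.Inv: invert once, then shift by the trapdoor (s0, e0) for the c=1 branch.
    s1, e1 = lwe_inv(b)
    pre1 = (s1, e1, 0)
    e2 = e1 - e0
    pre2 = ((s1 - s0) % q, e2, 1) if np.all(np.abs(e2) <= mu) else None
    return pre1, pre2

b = reg2_eval(rng.integers(0, q, size=n), rng.integers(-mu, mu + 1, size=m), 0)
pre1, pre2 = reg2_inv(b)
assert np.array_equal(reg2_eval(*pre1), b)
if pre2 is not None:
    assert np.array_equal(reg2_eval(*pre2), b)
```

Note that, as in the text, the shifted candidate is returned only when its error part stays within the bound µ, i.e. exactly when b has two preimages.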

E Proof of Theorem 6.10
Proof. Using the following explicit values for the parameters of the Micciancio-Peikert injective trapdoor function [MP12], we want to prove that they fulfil all of the requirements of Theorem 6.9; α, α′ and C are defined as in Theorem 6.9. Let us now prove that these parameters satisfy all the requirements.
• The first three requirements are trivially satisfied.
• For the fifth condition, i.e. that n/α′ is poly(n), we just need to remark that 1/α′ = m^{3/2} q/µ < m^{3/2} q, and that both m and q are poly(n).
• Finally, to show that the last condition is satisfied, we note that it holds if and only if a bound of the form A ≤ 2^k holds. Now let us suppose that k := u log(n) + v, with u ≤ 5 and v ≥ 19; we need to find u, v such that A ≤ 2^k. Note that we will absorb v into some constants and then choose a suitable v at the end. First, remark that the relevant quantity equals √m (1 + 1/(n(2 + k))) (49).