New Bounds and a Generalization for Share Conversion for 3-Server PIR

Private Information Retrieval (PIR) protocols, which allow a client to obtain data from servers without revealing its request, have many applications such as anonymous communication, media streaming, blockchain security, advertisement, etc. Multi-server PIR protocols, where the database is replicated among non-colluding servers, provide high efficiency in the information-theoretic setting. Beimel et al. in CCC '12 (further referred to as BIKO) put forward a paradigm for constructing multi-server PIR, capturing several previous constructions for k ≥ 3 servers, as well as improving the best-known share complexity for 3-server PIR. A key component there is a share conversion scheme from corresponding linear three-party secret sharing schemes with respect to a certain type of "modified universal" relation. In a useful particular instantiation of the paradigm, they used a share conversion from (2,3)-CNF over Z_m to three-additive sharing over Z_p^β for primes p_1, p_2, p where p_1 ≠ p_2 and m = p_1·p_2, and the relation is the modified universal relation C_{S_m}. They reduced the question of the existence of the share conversion for a triple (p_1, p_2, p) to the (in)solvability of a certain linear system over Z_p, and provided an efficient (in m, log p) construction of such a sharing scheme. Unfortunately, the size of the system is Θ(m^2), which makes a direct solution infeasible in practice for big m. Paskin-Cherniavsky and Schmerler in 2019 proved the existence of the conversion for the case of odd p_1, p_2 when p = p_1, obtaining in this way infinitely many parameters for which the conversion exists, but for infinitely many others the question remained open. In this work, using algebraic techniques from the work of Paskin-Cherniavsky and Schmerler, we prove the existence of the conversion for even m when p = 2 (we compute β in this case) and the absence of the conversion for even m when p > 2.
This does not improve the concrete efficiency of 3-server PIR; however, our result is promising in the broader context of constructing PIR through composition techniques with k ≥ 3 servers, using the relation C_{S_m} where m has more than two prime divisors. Another suggestion of ours about 3-server PIR is that it is possible to achieve a shorter server response using the relation C_{S'_m} for an extended set S'_m ⊃ S_m. By computer search within the BIKO framework, we found several such sets for small m which result in a share conversion from (2,3)-CNF over Z_m to 3-additive secret sharing over Z_p^{β'}, where β' > 0 is several times smaller than β, which implies a several times shorter server response. We also suggest that such extended sets S'_m can result in better PIR due to the potential existence of matching vector families with higher Vapnik-Chervonenkis dimension.


Private Information Retrieval
Private Information Retrieval (PIR) protocols allow a client to fetch items from a server's database without disclosing to the server which item was requested. A main challenge in constructing PIR protocols is minimizing the communication complexity. The idea of PIR was introduced by Chor et al. [1], together with a 2-server PIR protocol having communication complexity O(n^{1/3}) for database size n. PIR has a wide variety of applications such as anonymous communication [2,3], privacy-preserving media streaming [4], blockchain security [5,6], personalized advertisement [7], location and contact discovery [8][9][10], etc.
The naive approach to PIR is simply to make the server send all the items in the database to the client: we stress that PIR cares only about the privacy of the client's request, not about the privacy of the server. However, this entails a huge communication complexity, equal to the size of the database. To shorten the communication while still keeping the request private, there are two main approaches to constructing PIR:
• Historically, the first type of PIR was Multi-Server PIR [1], where the database is replicated across k ≥ 2 non-colluding servers. The client secret-shares its request, and the servers locally compute the secret-shared response and send it back to the client. The client recovers the item from the shares of the response. Multi-Server PIR protocols, such as [11][12][13][14], are relatively efficient in the information-theoretic setting. The requirement of a replicated database kept by non-colluding parties is restrictive; however, there is room for such PIR, e.g., in blockchain databases, cloud services, and multi-server enterprise ecosystems, where a small number of servers (but not all) are likely to be compromised.
• Single-Server PIR protocols work in a computational setting and are built on the basis of homomorphic encryption (FHE, AHE, or SHE). The starting point in single-server PIR is the AHE-based protocol of Kushilevitz and Ostrovsky [15]. The early single-server PIR constructions were inefficient in both computation and communication, although recently significant progress has been made, allowing one to speak of practically suitable single-server PIR solutions [16][17][18][19]. For instance, the OnionPIR protocol from SHE [16] achieves a 64 KB request and 128 KB response in the online phase of the protocol (and the same in the offline phase) for all realistic database sizes.
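To make the multi-server template above concrete, the following toy sketch (our own minimal illustration, not a scheme from the cited works) implements the classic linear-communication 2-server XOR-based PIR in the spirit of [1]: the client sends two random selection vectors that differ only at the queried index, each server XORs the selected bits, and the client XORs the two answers.

```python
import secrets

def share_query(n, i):
    """Client: two selection vectors that agree everywhere except index i."""
    s1 = [secrets.randbelow(2) for _ in range(n)]  # uniformly random vector
    s2 = s1.copy()
    s2[i] ^= 1                                     # flip only the queried position
    return s1, s2                                  # each vector alone reveals nothing about i

def answer(db, sel):
    """Server: XOR of the database bits selected by the query vector."""
    a = 0
    for bit, s in zip(db, sel):
        a ^= bit & s
    return a

db = [1, 0, 1, 1, 0, 0, 1, 0]
for i in range(len(db)):
    q1, q2 = share_query(len(db), i)
    # The XOR of the two answers collapses to db[i], since q1 ^ q2 is 1 only at i.
    assert answer(db, q1) ^ answer(db, q2) == db[i]
```

Each query vector is uniformly random on its own, so neither server learns the index; the communication is linear in n, which is exactly what the later generations of protocols improve upon.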
On a high level, for both approaches, the database is represented as a function (usually, a polynomial) f such that for any key x and its corresponding value (a record) y it holds that y = f(x). Then, the client has to send the request x to the server (servers) in a way that preserves its privacy. For Single-Server PIR, this means that x is sent encrypted; in the Multi-Server paradigm, x is secret-shared. The encryption or secret sharing has to be homomorphic so that the server (servers) can compute f(x) under the encryption/secret-sharing and send the encrypted or secret-shared response y back to the client.
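As a toy illustration of representing a database as a polynomial (again our own sketch, not taken from the cited works), the snippet below builds the multilinear extension of a 4-record bit database over the integers and checks that y = f(x) on every key:

```python
from itertools import product

def multilinear_extension(db_bits, num_vars):
    """Return f with f(x) = sum_a db[a] * prod_j (x_j if a_j = 1 else 1 - x_j)."""
    def f(x):
        total = 0
        for idx, a in enumerate(product([0, 1], repeat=num_vars)):
            term = db_bits[idx]                 # record stored under key a
            for xj, aj in zip(x, a):
                term *= xj if aj else 1 - xj    # indicator polynomial for key a
            total += term
        return total
    return f

db = [0, 1, 1, 0]                               # records for keys 00, 01, 10, 11
f = multilinear_extension(db, 2)
assert [f(x) for x in product([0, 1], repeat=2)] == db
```

The indicator products make f agree with the database on every Boolean key, so a server that can evaluate f under encryption or secret sharing can answer any query.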
In the 2-server computationally-secure PIR of Gilboa and Ishai [20], the request is shared as a DPF (Distributed Point Function) and has polylogarithmic length. In this case, to compute the shares of the response, only additive operations are needed (DPF sharing is homomorphic with respect to them). However, in the information-theoretic setting, which is the focus of this work, it is still unclear how to construct PIR that is efficient in terms of communication and computation with secret sharing that is homomorphic with respect to any number of additions and multiplications.
Currently, 3 generations of information-theoretic PIR protocols exist. The first generation, originating from the work of Chor et al. [1], is based on Reed-Muller codes and has communication complexity n^{1/Θ(k)}. In the second generation, Beimel et al. [21] restated some of the previous results in a more arithmetic language, in terms of polynomials, and also considered a certain encoding of the inputs with element-wise secret sharing of the encoding, which resulted in n^{O(log log k/(k log k))} communication complexity. The third generation, from the works of Efremenko [11] followed by [22][23][24][25][26], Yekhanin [12], Beimel et al. [13], and Dvir and Gopi [14], is based on matching vectors and is the most efficient line of protocols, with complexity n^{o(1)} for database size n. As was demonstrated by Beimel et al. [13], all third-generation schemes except [14] in fact utilize a combination of two secret-sharing schemes, both linear but over different groups, together with a share conversion with respect to some relation, allowing the servers to locally perform some non-linear operation over the shares (apart from the case of the identity relation).

Share Conversion
Suppose that there is some number of parties, each holding a share of a secret s which was created by a secret-sharing scheme Sh_1. Share conversion is defined as a process of local computation performed by those parties, based only on their shares, outputting new shares of a secret s′ in a different scheme Sh_2 so that there is some predefined relation between s and s′. A systematic study of share conversion was started by Cramer et al. [27], who considered the case s′ = s for two arbitrary linear secret sharing schemes over different fields.
Let us consider an easy illustrative example: for the function f(x) = x_1 · x_2 over a ring R′, and for the conversion relation s′ = s^2, for the input x = (x_1, x_2) shared in a linear scheme over the ring R, it is possible to compute f(x) in the following circuit: first, according to the linearity of the first scheme, servers locally compute shares of x_3 = x_1 + x_2; then they convert shares of x_1, x_2 and x_3 to shares of x_1^2, x_2^2 and x_3^2 over R′; finally, they obtain shares of the response y = 2^{−1}(x_3^2 − x_1^2 − x_2^2). This approach, however, leaves room for improvement, as such a conversion usually increases the size of the request and response in PIR. Because the conversion is a local operation, this is not a trivial issue: to evaluate the circuit which computes some succinct function f(x) representing the database, the client forms its request as a proper input to this circuit. In addition, not every circuit can be computed within the existing secret sharing and conversion schemes, which means that we are bound to only certain kinds of circuit families; depending on the VC-dimension of these function families, the proper representation of the request might be much larger than the size of the database. Recall that the notion of VC-dimension was introduced by V. Vapnik and A. Chervonenkis in [28]. Informally, for a boolean function family F, where each f ∈ F : D → {0, 1}, the VC-dimension VC(F) is the size of the largest I ⊆ D such that the set {f|_I : f ∈ F} of restrictions of functions from F contains all possible boolean functions over I. The higher VC(F) relative to |D|, the more efficient PIR can be built. For a precise definition, see [13].
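The identity behind this example, x_1·x_2 = 2^{−1}((x_1 + x_2)^2 − x_1^2 − x_2^2), can be checked exhaustively over any small ring in which 2 is invertible; the snippet below (a toy verification of ours, with Z_5 chosen arbitrarily) does so:

```python
p = 5                      # any modulus with 2 invertible works; Z_5 chosen arbitrarily
inv2 = pow(2, -1, p)       # 2^{-1} mod 5 = 3
for x1 in range(p):
    for x2 in range(p):
        x3 = (x1 + x2) % p
        # the polarization identity recovers the product from three squares
        y = inv2 * (x3 * x3 - x1 * x1 - x2 * x2) % p
        assert y == x1 * x2 % p
```

This is exactly why converting shares of x_1, x_2, x_3 into shares of their squares suffices: the remaining steps are linear and hence free under a linear scheme.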
Using homomorphic properties of secret sharing schemes to perform MPC on shared values is a widely used technique in information-theoretic MPC, initiated by the seminal work of [29]. Indeed, in order to (semi-honestly) securely evaluate an algebraic circuit, the parties share their inputs with Shamir secret sharing. Then, linear combinations can be homomorphically evaluated 'for free' via local computation on the shares, so additions can be performed repeatedly any number of times. Multiplications can also be performed; however, multiplying two shared values results in a value shared according to Shamir with doubled degree. This limits the depth of a circuit computable with (even) 1-privacy if we require that the only communication round be sending shares for the final reconstruction. This idea transfers to PIR, where inputs come from a single party, so they may also be conveniently preprocessed by it via arbitrarily complex functions (which is not always possible for inputs distributed among multiple parties). For instance, for 3-server PIR, degree-2 polynomials can be locally evaluated if Shamir secret sharing was used. As degree-2 polynomials (over a field) in n variables have non-trivially high VC dimension (Θ(n^2)), this allows encoding each input via a vector of O(2^{n/2}) entries and using the appropriate share conversion. For k-server PIR, different kinds of share conversion may enable us to evaluate a family of shallow circuits that both have high VC dimension and admit suitable secret sharing with share conversion, allowing us to evaluate them locally. In particular, note that a share conversion for a suitable relation, rather than a function, suffices to evaluate circuits of that type.
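The degree-doubling behavior described above can be observed directly in a toy Shamir implementation (our own sketch over the arbitrarily chosen field Z_101, not code from the cited works): addition of shares keeps the threshold, while local multiplication doubles the degree, so more shares are needed to reconstruct.

```python
import random

P = 101  # prime field modulus for this toy example

def shamir_share(secret, t, n):
    """Degree-t Shamir sharing: shares are points of a random degree-t polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0; needs degree+1 shares."""
    s = 0
    for x_i, y_i in shares:
        li = 1
        for x_j, _ in shares:
            if x_j != x_i:
                li = li * x_j * pow(x_j - x_i, -1, P) % P
        s = (s + y_i * li) % P
    return s

a, b = 17, 29
sa = shamir_share(a, 1, 3)   # degree-1 sharing among 3 parties
sb = shamir_share(b, 1, 3)
# Addition is local and keeps the degree:
s_sum = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(sa, sb)]
assert reconstruct(s_sum[:2]) == (a + b) % P        # 2 shares suffice (degree 1)
# Multiplication is local too, but doubles the degree:
s_prod = [(x, ya * yb % P) for (x, ya), (_, yb) in zip(sa, sb)]
assert reconstruct(s_prod) == a * b % P             # now all 3 shares are needed
```

The product of two degree-1 polynomials is a degree-2 polynomial, so its constant term is recoverable from 3 points but not from 2, which is precisely the depth limitation mentioned above.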

BIKO Framework
In [13], Beimel, Ishai, Kushilevitz, and Orlov (BIKO) interpret the state-of-the-art 3-server PIR schemes as using share conversion from a (variant of) Shamir secret sharing over a certain ring R_m for small composite m, applied to circuits stemming from MV codes [30]. We refer to such codes as S-bounded MV codes. They manage to get improved complexity of the resulting PIR by using conversions from CNF secret sharing rather than from Shamir over certain small R_m, for which a conversion from Shamir for that relation does not exist (the (t, k)-CNF is a threshold secret sharing scheme introduced in [31]; see Section 2.2 for a detailed description). Specifically, they obtained conversions from (2, 3)-CNF over Z_m to the additive secret sharing scheme over Z_p^β for the relation C_{S_m}, where S_m ⊂ Z_m is the canonical set determined by the decomposition m = ∏_i p_i of m into prime factors. This is a useful choice, due to the existence of good S_m-bounded MV codes over composite moduli m. Their approach is motivated by the existence of conversions from CNF to additive sharing (roughly, CNF can be converted to "any" scheme, and any scheme can be converted to additive): they use Sh_1 as CNF over a certain ring and Sh_2 as additive over another ring. This relation (although not a function) suffices to evaluate the required type of circuits arising from the MV family. There is a potential tradeoff here between the best MV codes that exist over a certain ring R and the size (more generally, the identity) of the set S that can be achieved. On a high level:

1. The smaller S is, the easier it is to find a suitable share conversion (required to evaluate functions in the circuit family induced by the MV code).
2. The larger S is, the easier it is to find an MV code resulting in a family of circuits with high VC dimension. The communication complexity of the resulting PIR decreases with the VC dimension of the set (and, eventually, the size of the shallow circuit to evaluate).
The concrete parameters of both constructions used so far for 3-server PIR (in their most efficient variants) follow from Theorem 7 below and its instantiations via known constructions of MV codes and share conversion schemes.
On a very high level, these PIR protocols consist of three steps, shown in Construction 1.

Construction 1: BIKO Framework [13]
1. Let f : {0, 1}^{log(n)} → {0, 1} denote the server's database. The client preprocesses its input x ∈ {0, 1}^{log(n)} into a vector v_x ∈ R^h for a (constant) ring R, where {v_x}_x is a set of vectors of an S-bounded MV code. It shares the vector coordinate-wise among the k servers via some (2, k)-private secret sharing scheme Sh_1 (so no single server learns anything about the secret).
2. The servers use linear homomorphism properties of Sh_1, Sh_2, which are homomorphic over certain finite groups, to locally evaluate (an encoding of) f on the shared v. In some more detail, each ⟨u_i, v⟩ uses linear homomorphism of Sh_1; then a share conversion from Sh_1 to Sh_2 with respect to C_S is applied to each share of f_i(v); finally, linear homomorphism of Sh_2 is applied to evaluate ∑_i f_i(v) on the resulting shares. The share conversion is required to transform ⟨u_i, v⟩ for v_i = v into a non-zero value, and ⟨u_j, v⟩ for v_j ≠ v into 0's, making the sum non-zero iff f(v) = 1. Then each server sends its share to the client.
3. The client recovers the output using linear homomorphism of Sh_2 and post-processes the value.
The correctness of the scheme is easy to verify. For 3-server PIR, Ref. [13] provides a technique for constructing the conversion (if such a conversion exists) from (2,3)-CNF to the additive secret sharing scheme and obtains results for some special cases. Utilizing the results of Beimel et al., Paskin-Cherniavsky and Schmerler in [32] proved that there is a share conversion from (2,3)-CNF over Z_m to 3-additive secret sharing over Z_p^β if m = p_1p_2 for distinct odd primes p_1 and p_2, one of which is equal to p. Thereby they found infinitely many cases when a conversion falling into the BIKO framework exists.
Theorem 1 ([13,32]). Let m = p_1 · p_2, where p_1, p_2 are distinct primes, and let p be a prime. Then, there exists a share conversion from (2, 3)-CNF to additive over Z_p^β for the relation C_{S_m} for some β in the following cases: (1) m = 6 and p = 2 [13]; (2) p_1 and p_2 are odd and p = p_1 [32]. For other cases of m = p_1 · p_2 and p, however, the existence of the conversion was neither confirmed nor disproved. The constant β in Theorem 1 seems to grow with m, but, due to the techniques used, this has not been proven for any infinite family of parameters.

Remark 1.
However, not all third-generation information-theoretic PIR protocols fall into the BIKO framework. For instance, the work of [14] can be viewed as a certain generalization of it. This beautiful work surprisingly manages to carry over the "3rd generation" PIR communication complexity, previously achieved only for 3 or more servers, to the 2-server setting, resolving a long-standing open problem. It thereby illustrates the limitations of the BIKO framework, providing evidence that generalizing it in certain directions can be instrumental in the context of PIR. In some more detail, [14]'s PIR has a bilinear, rather than linear, reconstruction in Sh_2, and the step corresponding to share conversion cannot be cleanly viewed as a share conversion from Sh_1 to Sh_2 according to C_S (or, in fact, any) relation. In particular, the client essentially uses a 2-out-of-3 sharing scheme to make the share conversion work, with itself holding one of the shares.
Following the BIKO framework [13] and utilizing some results of [32], we prove that:
• There exists a share conversion from (2,3)-CNF over Z_{2q} to a 3-additive secret-sharing scheme over Z_2^{(q−1)(q−2)} for any odd prime q.
• There is no conversion from (2,3)-CNF over Z_{2q} to Z_p^β for any odd primes q and p (including the case q = p) and any β > 0.
In this way, we prove the existence of the conversion for infinitely many cases, and for infinitely many cases we also prove that a conversion does not exist. Together with [32], for m's which are products of two primes, this leaves open only the question of the conversion in the case m = p_1p_2, where p_1 and p_2 are both odd and neither equals p.
Note also that for the considered cases, we managed to compute the parameter β, which determines the server's response size. We prove that the β in Theorem 7 is indeed the best possible for m = 6 among m = p_1p_2 where p_1 = 2. More concretely, one of our contributions is the precise value of β for share conversion with respect to the relation C_{S_{2q}}. Previous techniques did not allow computing β, as they traded the generality that would allow computing β for some additional simplicity: using a single row of M_{≡} instead of the full matrix to understand the rank difference β = rank(M_{≡,≢}) − rank(M_{≢}).

Computing and Improving Server Reply Size
Another somewhat surprising observation we made is that we may sometimes increase S beyond S_m so that a conversion from (2, 3)-CNF over Z_m to Z_q^β (for the same m, q as before) still exists. This may have two possible implications. A direct implication, which we observed experimentally for several values of m, is that the rank difference β sometimes goes down, but not all the way to 0. Thus, if the share conversion still exists, as follows from the BIKO technique, β may decrease, leading to a reduced size of the server's response. We checked this fact for some small m's by computer search and obtained positive results, presented in Section 4. Indeed, we obtained smaller β by supplementing S_m up to S′_m with additional values. We informally summarize the result of the computer search in the following theorem. This result may also be viewed as evidence that canonical sets S_m for m with a larger number r of prime factors may potentially have share conversions for C_{S_m} for a (significantly) smaller number of servers than 2^r − 1 (as we have conversions for 2^r − 1 servers but S larger than S_m, where the resulting linear system has many more rows than columns). This direction is interesting to explore, initiating a systematic search for share conversions with server sets as small as possible, resulting in PIR with share complexity polynomial in the MV codeword length for m which is a product of r primes.
In addition to our two main contributions, we identify a few minor errors in [13,32]. Nevertheless, these errors do not affect the correctness of any of their main contributions.

• We recalculated some computer search results of [13] (BIKO), as they contradict the theoretical result of Paskin-Cherniavsky and Schmerler. In particular, [13] showed the absence of the conversion for m = 35, p = 7, while [32] proved that the conversion for this case exists. In addition, we obtained numerical results for the cases m = 22, 26, 33, which were not considered in BIKO. Our numerical results, given in Section 4, confirm both our theoretical result for p_1 = 2 and the conclusion of [32].
• We corrected some calculation mistakes made in the previous work [32]. The corrigenda are shown in Appendix A.

Instantiations of BIKO and Future Directions of Our Work
Almost all third-generation PIR protocols falling into the BIKO framework utilize the conversion from Shamir secret sharing instead of CNF. The existence of a conversion from the Shamir secret sharing scheme implies the existence of a conversion from CNF, but not vice versa [13].
The following theorem by V. Grolmusz generalizes a similar instance of the theorem for 3 servers in [13]; we state it to put our work in a broader context. It gives the size of the MV families depending on the constant m, which impacts the complexity of the PIR protocols based on them; the resulting family is S_m-bounded. Here c ≤ p_r^{−r}, where p_r is the largest prime.
In fact, the construction in Theorem 4 generalizes to any m with r distinct prime divisors. Next, we outline some parameters for which suitable share conversions leading to (3rd generation) PIR via the BIKO framework and MV codes from Theorem 4 exist. Note that Theorems 5 and 6 were initially stated in terms of conversion from Shamir secret sharing, but a corresponding conversion from CNF is implied.
Theorem 5 ([11,26]). For each r ≥ 2, there exists a number m with r distinct prime divisors p_1 ≤ . . . ≤ p_r, with p_r ≥ 73, for which there exists a share conversion from (2, (3/4)·2^r)-CNF over Z_m to (3/4)·2^r-additive over Z_2^β for some β < m, and relation C_{S_m}. Furthermore, such a conversion exists for every m of the form 2^t − 1 with r distinct prime divisors, if the number of parties, (3/4)·2^r, is replaced by 2^r.
In a nutshell, the above result is obtained by [26] via a composition technique applied to [11]'s results for 3-server and 2^r-server PIR. The reduction in the number of parties from 2^r to (3/4)·2^r for m with r distinct prime divisors follows from the (somewhat surprising) 3-party conversion for r = 2 and m = 73 · 7.
In [23], the authors found 50 additional such 3-party conversions for m = p_1 · p_2 (which need to satisfy a certain condition), leading to further improvements in the number of parties as a function of r. Note that for all m found in [23], p_r ≥ 73 is large, so the constant in Theorem 1 grows fast with r.
Note that for the above instantiations, "descending" from [11], m must be odd.
Theorem 7 (Implicit in [13]). Let m ∈ N, {0} ⊆ S ⊆ Z_m, and let C be an S-bounded MV code family {C_h} of vectors in Z_m^h. Assume also that there exists a share conversion from (2, k)-CNF over Z_m to Z_p^β for some constant β, for the relation C_S. Then there exists a k-server PIR family for databases of size n = |C_h| with client's message of size h·log(m) and server's message of size β·log(p).
We note that among the known m's in the corollary above for r = 2, p_r ≥ 73, and p_r grows particularly fast with r; for r ≥ 104, conversions for (3/4)^{51}·2^r servers (instead of (3/4)·2^r) exist.
Instantiating Theorem 7 with Theorem 4 for the MV-code construction and with Theorem 1 for the share conversion, we obtain the best known concrete efficiency of 3-server PIR, with 2^{6√(log(n) log log(n))} communication complexity. On the other hand, for more than polynomially improved communication complexity and a larger number of servers, the best result is obtained by instantiating the share conversion via Theorem 5.
Our concrete result does not improve communication complexity for 3-server PIR, which is essentially optimal for conversion from Z_6 by [13], as stated in Theorem 1. However, the technical tools developed may help understand the existence of share conversions for even m with a larger number of prime factors, with better PIR communication complexity and a larger number of servers. Due to the generality of BIKO's framework, converting from CNF, one could hopefully get improved communication complexity relative to the number of servers. In particular, as noted above, the instantiation of BIKO as in [11] does not yield PIR protocols with even m, and the known values of m have large maximal factors and lead to PIR with high constants in the exponent. By a direct corollary of Theorems 4 and 5, similar to Corollary 7, we get a 6-server PIR with communication complexity O(2^{146·log^{1/3}(n) log log^{2/3}(n)}). Using the BIKO framework instantiated with Theorem 5 (the 'furthermore' part), for 8-server PIR we obtain a complexity of O(2^{34·log^{1/3}(n) log log^{2/3}(n)}), by using m = 255 = 3 · 5 · 17 = 2^8 − 1 and instantiating Theorem 4 with m = 255. Thus, as far as we know, no PIR with complexity better than O(2^{146·log^{1/3}(n) log log^{2/3}(n)}) (the best known 6-server PIR) exists for 7 servers. We conjecture that a 7-server PIR with much improved constants exists, using a share conversion from (2, 7)-CNF with parameters generalizing the conversions we obtained for m = 2·p_2.
We hope to be able to verify the conjecture more easily by generalizing the insights we have for the existence of a share conversion for m = 2·p_2 to a share conversion for m = 2·15 (more generally, for 2·c for some composite c), given that for this case of p_1 the analysis turned out to be rather simple. Another reason to hope we can manage with 7 servers is that M_{≢} in that case has a form similar to the 3-server case considered in the present work (unlike, for example, the 6-server case). See Section 1.6 for more details.
A broader goal is improving the number of servers one can tolerate for PIR with communication complexity corresponding to MV codes over Z_m with r prime factors. While [26] shows how to achieve (3/4)^{51}·2^r servers for an infinite number of r's and corresponding m's, and 3^{r/2}-server PIR for finitely many r's, it would be interesting to improve Theorem 6 to get a share conversion for 3^{r/2}-server PIR for all r. Our hope is to devise a composition theorem along the lines of [26], composing 'gadgets' of conversions from (2, 3)-CNF over Z_m for coprime composite m's. As we already have such conversions for infinitely many pairwise coprime m's via Theorem 1, we only need a suitable composition theorem. In fact, it is not hard to show that if we had conversions for coprime m_1, m_2, respectively, both to Z_p^β for the same p, say Z_2^β, we would obtain the result. In particular, it is strictly easier to prove the existence of a conversion from Z_{p_1p_2} to Z_2^β for some β depending on p_1, p_2 for infinitely many coprime p_{2i+1}·p_{2i+2}'s (as the 51 known cases based on Mersenne-style primes in [26] are a special case). To summarize, to complete this direction we only need to find a conversion from (2, 3)-CNF over Z_{m_i} to Z_2^{β_i} for infinitely many coprime m_i's of the form m_i = p_1·p_2, where p_1, p_2 are distinct primes. This seems to require only a moderate extension of the (linear algebraic) toolbox for conversions from (2, 3)-CNF that has been laid out in the seminal work of [13] and subsequently in [32].
A still more ambitious direction, which we expect to be more technically involved, should lead to dramatic improvements in the number of servers, bringing it down from exponential to linear in r. It relies on the following composition lemma, which is not hard to prove (see the full version for details).

Lemma 1.
Let m_1 = 2m′_1, m_2 = 2m′_2, where m′_1, m′_2 > 1 are odd coprime integers. Assume there exists a share conversion from (2, k)-CNF over Z_{m_1} to (t_1, k)-CNF over Z_2^{β_1} for the relation C_{S_{m_1}} (and an analogous conversion exists for m_2). Then there exists a share conversion from (2, k)-CNF over Z_{2m′_1m′_2}.

Remark 2. More generally, slightly optimizing parameters relative to iteratively applying Lemma 1 for two m_i's, for any r ≥ 2 and m_1, . . . , m_r as above, we obtain a share conversion from (2, k)-CNF over Z_{2m′_1···m′_r}.

Assume a conversion generalizing our result from Theorem A1 for 3 servers to more servers, while keeping the conversion to a scheme (t, k)-CNF for sufficiently small t. Such a scheme has enough redundancy to support multiplications over the resulting field F_{2^β}, unlike (k, k)-additive sharing, which has none (if needed, the field characteristic 2 may be replaced with some other prime, generalizing Theorem 1 instead). Then we can obtain PIR with linear server complexity k = O(r), using Theorem 4 and applying Lemma 1 r − 1 times. More precisely, we have: assume there exists a (global) constant t such that, for all sufficiently large k, the following holds. For infinitely many m_i's of the form m_i = 2p_i, where all p_i are odd distinct primes, there exists a share conversion from (2, k)-CNF to (t, k)-CNF over Z_2^{β_i} for the relation C_{S_{m_i}}. Then, for all sufficiently large r, there exists a (k = t(r − 1) + 1)-server PIR with communication complexity 2^{O(log^{1/r}(n) log log^{1−1/r}(n))}.

Our Techniques
As described above, one of the main contributions of [13] was an instantiation of the framework for designing PIR protocols, which reduces the question of the existence of a three-server PIR protocol to the existence of a share conversion for certain parameters p_1, p_2, p and certain linear sharing schemes over Abelian rings R, R′ determined by the parameters.
BIKO provides the criterion for the existence of the share conversion in the case when m = p_1p_2 for distinct primes p_1 and p_2 and the set S_m = {s_1, s_2, 1}, where s_1 mod p_1 = 0, s_1 mod p_2 = 1, s_2 mod p_1 = 1, s_2 mod p_2 = 0. Namely, they prove that for such m and S_m, the share conversion from (2,3)-CNF over Z_m to a 3-additive scheme over Z_p^β exists if and only if rank(M_{≡,≢}) − rank(M_{≢}) = β > 0, where the rank is computed over F_p. The matrices M_{≢} and M_{≡,≢} are matrices over Z_p with 3m^2 columns and 3m^2 and 4m^2 rows, respectively, constructed from a specific system of equations and inequations. Beimel et al. in [13] did not provide a general solution for this system; however, they proved the existence and nonexistence of the conversion for some special cases.
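For concreteness, the canonical set S_m can be enumerated directly from its CRT description: its elements are exactly the nonzero residues whose CRT coordinates both lie in {0, 1}. The helper below (`canonical_S` is our own illustrative name, not notation from the cited works) does this for m = p_1·p_2:

```python
def canonical_S(p1, p2):
    """Nonzero s in Z_m whose CRT coordinates (s mod p1, s mod p2) are 0 or 1."""
    m = p1 * p2
    return m, [s for s in range(1, m) if s % p1 in (0, 1) and s % p2 in (0, 1)]

m, S = canonical_S(2, 3)
assert (m, S) == (6, [1, 3, 4])   # s1 = 4 (0 mod 2, 1 mod 3), s2 = 3 (1 mod 2, 0 mod 3), plus 1
```

For m = 6 this recovers S_6 = {1, 3, 4}, matching the definition of s_1, s_2 above.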
While the solvability of the system can be verified efficiently for a concrete instance, this does not provide a simple condition characterizing the triples (p_1, p_2, p) for which solutions exist. Moreover, the size of the matrix M_{≡,≢} in this system is 4m^2 × 3m^2, which makes the numerical solution for big m's heavy in practice (though asymptotically efficient). Before [32], where the solvability of the system was proven for the case of odd primes p_1 and p_2 when one of them equals p, even the question of whether an infinite set of such triples exists remained open.
Our concrete goal in this work is to better understand the case of m = p_1p_2, motivated by understanding the technical foundations of the broader problem for m which is a product of r > 2 distinct primes (see Section 1.5 for details). We proceed using the BIKO characterization above. Concretely, for parameters m = p_1p_2 and p, this reduces to calculating the quantity β = rank(M_{≡,≢}) − rank(M_{≢}), where the rank is computed over Z_p.
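The rank-difference criterion can be evaluated mechanically once the matrices are written down. The sketch below (our own illustration, with tiny stand-in matrices rather than the actual 4m^2 × 3m^2 matrices M_{≡,≢} and M_{≢}) computes β = rank(M_{≡,≢}) − rank(M_{≢}) over F_p via Gaussian elimination:

```python
def rank_mod_p(matrix, p):
    """Rank over F_p via Gaussian elimination."""
    M = [[x % p for x in row] for row in matrix]
    rank = 0
    for col in range(len(M[0])):
        # find a pivot row for this column below the current rank
        pivot = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][col], -1, p)
        M[rank] = [v * inv % p for v in M[rank]]      # normalize pivot to 1
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [(a - M[r][col] * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# Tiny stand-ins for M_{neq} and M_{eq,neq} (NOT the real BIKO matrices):
M_neq = [[1, 2, 0], [0, 1, 1]]
M_eq_neq = M_neq + [[0, 0, 1]]    # one extra "equality" row appended
beta = rank_mod_p(M_eq_neq, 3) - rank_mod_p(M_neq, 3)
assert beta == 1                  # the extra row is independent over F_3, so a conversion would exist
```

In this toy instance the appended row falls outside the span of the first matrix, so β > 0; for the real matrices, exactly this difference decides the existence of the conversion and the server response length.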
In [32], the case p_1 = p for odd p_1 and p_2 was explored. To simplify the technical task, the authors of [32] rely on the observation from [13] that β > 0 iff M_{≢} does not span v_{≡} for some row v_{≡} of M_{≡}. Thus, they replace M_{≡} with some such v_{≡} and work with that (forgoing the goal of understanding the particular value of β). Then, they proceed by bringing the matrix M_{≡,≢} to a more convenient form by performing a sequence of carefully tailored elimination steps on its rows. The sequence of eliminations is based on observing a 3-level structure of the matrix M_{≡,≢} and working on blocks of decreasing coarseness as the elimination process progresses. It also involves a change of basis at some point, to make the matrix's structure easier to understand. That is, the matrix is rewritten so that the set of columns corresponds to a new basis; here one even manages to use fewer vectors, as it suffices to include a set of vectors which is guaranteed to span M_{≡,≢}. However, the resulting matrix after that process remains too complex for checking whether β > 0 for all parameters. The analysis up to that point (resulting in some matrix A′_inter = (A_inter, v_{≡}) to analyze) is oblivious to the particular parameters, except for not considering even m (not because it was particularly hard, but rather out of a decision to limit the scope of the paper to what had already been achieved). To obtain their partial result for some of the parameters, the authors then reduce the matrix's rows modulo a certain vector subspace (formally, multiplying it from the right by a certain square matrix L with non-trivial left kernel). Clearly, if rank(A′_inter L) − rank(A_inter L) > 0, then rank(A′_inter) − rank(A_inter) > 0 as well (implying the existence of a share conversion), but not necessarily the other way around.
The matrix A_inter L turns out to be sufficiently simple to analyze, and for p equal to either p_1 or p_2, the resulting rank difference is non-zero. However, we do not yet understand the other parameters, for which rank(A_inter L) − rank(A_≡ L) = 0, or the case of even m. Also, due to the first simplification, the concrete value of β is not found, and thus the concrete answer complexity of the resulting PIR, as implied by Theorem 8, remains unknown.
Our current paper considers the case p_1 = 2. We proceed by a quite straightforward generalization of [32]'s elimination process up until producing the matrix A_inter, except that we do not make the simplification of keeping a single row out of M≢, but rather keep the entire matrix. The main divergence from [32] is that we do not perform the reduction modulo a subspace, but are able to directly check whether rank(A_inter) − rank(A_≡) > 0, and furthermore to compute the exact value of β. This is possible because the case p_1 = 2 turns out to be particularly simple, and we manage to analyze it directly (for all p_2, p). The other cases (when m is odd and p is not equal to p_1 or p_2) remain open.

Some Notation
Parameters of the secret sharing schemes. Throughout this paper, we fix the notation: p_1, p_2 and p are prime numbers such that p_1 ≠ p_2, and m = p_1 · p_2; these are the parameters of the secret sharing schemes and of the conversion. Later, when considering the corner case p_1 = 2 in Section 3.4, we introduce an odd prime number q and set p_2 = q.
Matrices and block-matrices. In this paper, we consider matrices and block-matrices over a finite field F = Z_p. These matrices are defined on 3 levels. The level-2 ("big") block-matrices we denote by the letter A with corresponding indexes. The elements of level-2 matrices are level-1 block-matrices, which we denote by the letter R with a lower index equal to the upper index of its "host" A. The level-0 ("small") matrices are square matrices, initially of size m × m; for them, we use distinct letters.
For the entry i, j of some matrix X, we use the standard notation X[i, j]. When addressing the elements of level-2 and level-1 matrices, we address their blocks; thus, A_1[i, j] denotes the block in the ith row and jth column of A_1. For level-0 matrices, we address the particular elements of the matrix. More generally, for a matrix X ∈ F^{u×v}, for subsets R ⊆ [u] of rows and C ⊆ [v] of columns, X[R, C] denotes the sub-matrix with rows (or block-rows) restricted to R and columns (or block-columns) restricted to C; these rows and columns keep their original order in X. As special cases, using a single index i instead of R (C) refers to a single row (column), and a "·" instead of R (C) stands for [u] ([v]). Most of the time, index arithmetic will be done modulo the matrices' number of (block-)rows and columns (we will, however, state this explicitly).
When we consider the case p_1 = 2 in Section 3.4, the level-1 matrices R^j_i are quite small and have only 2 level-0 blocks. Therefore, we omit level-1 and refer to level-0 blocks as the entries of the level-2 matrices A^{(k),·}.

Some concrete matrices and vectors.
By the letter I, we denote the m × m identity matrix. If the identity matrix has a different size, we indicate this size in the lower index; for instance, I_q is a q × q identity matrix. By a_{b×c} we denote a b × c matrix with all elements equal to a. When a = 0 and the size of the zero block is clear from the context, we omit b × c and write 0 instead of 0_{b×c}. By a_b we denote the row of a's of length b; for example, 1_m is the m-long row of 1's.
By e_i we denote the unit vector; its length is, as a rule, clear from the context or specified in the accompanying text, and the lower index i specifies the position of the 1 in the vector. In Section 2.5 and subsequently in Section 3.2, when we construct matrices in the basis B = B_1 ∪ B_2, the unit vectors carry double indexes, e_{b,c}. As explained in Section 2.5, a telescopic indexing system is used there, and this double index points to a single position in the vector.
Concatenation and circular shifts over matrices. For matrices X, Y with the same number of columns, (X; Y) denotes the matrix comprised by concatenating Y below X. For matrices X, Y with the same number of rows, we denote by (X|Y) the matrix obtained by concatenating Y to the right of X.
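As a small illustrative sketch of this concatenation notation (the helper names vstack and hstack are ours, not the paper's; matrices are lists of rows):

```python
def vstack(X, Y):
    """(X; Y): stack matrix Y below matrix X (same number of columns)."""
    assert len(X[0]) == len(Y[0])
    return [row[:] for row in X] + [row[:] for row in Y]

def hstack(X, Y):
    """(X | Y): append matrix Y to the right of matrix X (same number of rows)."""
    assert len(X) == len(Y)
    return [rx + ry for rx, ry in zip(X, Y)]

# (X; Y) and (X | Y) on small examples
print(vstack([[1, 2]], [[3, 4]]))      # → [[1, 2], [3, 4]]
print(hstack([[1], [2]], [[3], [4]]))  # → [[1, 3], [2, 4]]
```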
In Section 3.4, we obtain a set of circularly shifted matrices. By X^{<<k} we denote the matrix X with its columns circularly shifted left by k positions.

Secret Sharing Schemes
A secret sharing scheme is defined by a pair of algorithms Sh = (Share, Dec). The randomized algorithm Share splits a secret message s ∈ S into an n-tuple of shares (s_1, . . . , s_n). The deterministic algorithm Dec reconstructs s from any allowed (qualified) subset of the shares. The set of all qualified sets is called the access structure of the secret-sharing scheme. We say that Sh is t-private (and has a threshold access structure) if any t shares jointly reveal no information about the secret s.
We say that Sh is linear over a finite Abelian ring G if S ⊆ G and each share s_i is obtained by applying a linear function over G to the vector (s, r_1, . . . , r_ℓ) ∈ G^{ℓ+1}, where r_1, . . . , r_ℓ are random and independent elements of G. A useful property of such schemes is that they allow locally evaluating linear functions of the shares, that is, additions and multiplications by constants from G. In this work, we consider two types of linear secret sharing schemes:
• Additive secret sharing: the algorithm Share splits s ∈ G into n random ring elements that add up to s; the algorithm Dec reconstructs s by adding up all the shares. This scheme is (n − 1)-private. Within this work, we consider the 3-additive scheme, where n = 3.
• CNF secret sharing: the algorithm Share first splits s ∈ G into (n choose t) additive shares s_T, each labeled by a distinct set T ∈ ([n] choose t), and then lets each share s_i consist of all the s_T with i ∉ T. In the (2,3)-CNF scheme we consider in this work, each of the 3 parties obtains 2 of the 3 additive shares: if the additive shares of s are (a, b, c), then s_1 = (b, c), s_2 = (a, c), and s_3 = (a, b). This scheme is 1-private; any two parties jointly hold all three additive shares and can sum them up to compute the secret s.
See [33] for a survey on secret sharing.

Share Conversion
We recall the definition of (generalized) share conversion schemes as considered in our paper. Our definition is exactly the definition in [13], in turn, adopted from previous work.

Definition 1 ([13]). Let Sh_1 and Sh_2 be two n-party secret-sharing schemes over the domains of secrets S_1 and S_2, respectively, and let C ⊆ S_1 × S_2 be a relation such that, for every a ∈ S_1, there exists at least one b ∈ S_2 such that (a, b) ∈ C. A share conversion scheme convert(s_1, . . . , s_n) from Sh_1 to Sh_2 with respect to the relation C is specified by (deterministic) local conversion functions g_1, . . . , g_n such that: if (s_1, . . . , s_n) is a valid sharing of some secret s in Sh_1, then (g_1(s_1), . . . , g_n(s_n)) is a valid sharing of some secret s′ in Sh_2 such that (s, s′) ∈ C.
For a pair of Abelian groups G_1, G_2 (when G_1, G_2 are rings, we consider them as groups with respect to the "+" operation of the rings), we define the relation C_S as in [13].

Definition 2 (The relation C_S [13]). Let G_1 and G_2 be finite Abelian groups, and let S ⊆ G_1 \ {0}. The relation C_S converts s = 0 ∈ G_1 to any nonzero s′ ∈ G_2 and every s ∈ S to s′ = 0; there is no requirement when s ∉ S ∪ {0}.

Given m = p_1 · p_2, where p_1 ≠ p_2 are primes, and a prime p, we consider the pair of rings G_1 = Z_m, G_2 = Z_p^β. We denote by S_m the set with which the relation C_{S_m} is built in this work, where (a, b)_{Z_m} means the element of Z_m which has the remainder a modulo p_1 and b modulo p_2. For S = S_m, we refer to C_{S_m} as the canonical relation for Z_m.
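The element (a, b)_{Z_m} can be computed by a brute-force CRT lookup; a minimal sketch (the helper name crt_elem is ours, not from [13]):

```python
def crt_elem(a, b, p1, p2):
    """Brute-force CRT: the unique x in Z_{p1*p2} with
    x ≡ a (mod p1) and x ≡ b (mod p2)."""
    for x in range(p1 * p2):
        if x % p1 == a % p1 and x % p2 == b % p2:
            return x

print(crt_elem(0, 1, 2, 3))  # (0, 1)_{Z_6} → 4
print(crt_elem(1, 0, 2, 3))  # (1, 0)_{Z_6} → 3
```

For the small moduli considered in the paper's computer searches, the linear scan is entirely sufficient.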

The Characterization of BIKO.
In Beimel et al. [13], Sh_1 is the (2,3)-CNF secret sharing scheme over Z_m, and Sh_2 is the 3-additive sharing over Z_p^β. The conversion from Sh_1 to Sh_2 with respect to the relation C_{S_m} is considered. In [13], it is proven that such a conversion exists iff a certain condition is satisfied by the matrix M≡,≢ over Z_p.
The work in [13] also provided a quantitative lower bound on β, the rank difference between M≡,≢ and M≡.

Theorem 8 (Theorem 4.5 [13]). Let β = rank_{F_p}(M≡,≢) − rank_{F_p}(M≡). Then:
• If β = 0, then there is no conversion from (2,3)-CNF sharing over Z_m to additive sharing over Z_p^κ with respect to C_{S_m}, for every κ > 0.
• If β > 0, then there is a conversion from (2,3)-CNF sharing over Z_m to additive sharing over Z_p^β with respect to C_{S_m}. Furthermore, in this case, every row v of M≢ is not spanned by the rows of M≡.
Theorem 8 provides a full characterization via a condition that, given (p_1, p_2, p), can be verified in time polynomial in (p_1, p_2, log p). More precisely, the size of our matrix M≡,≢ is 4m^2 × 3m^2, so verifying the condition amounts to solving a system of linear equations, which naïvely takes about O(m^6) time, or slightly less using improved algorithms for matrix multiplication; the running time cannot be better than Ω(m^4) using generic matrix multiplication algorithms. Thus, the complexity of verification grows very quickly with m, becoming essentially infeasible for p_1, p_2 around 50.
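Verifying the condition of Theorem 8 boils down to computing ranks over F_p. A hedged, generic sketch (plain Gaussian elimination on toy matrices, not the actual BIKO matrices, whose construction is given in [13]):

```python
def rank_mod_p(M, p):
    """Rank of a matrix (list of rows of ints) over the prime field Z_p,
    by plain O(rows * cols^2) Gaussian elimination."""
    M = [[x % p for x in row] for row in M]
    n_rows, n_cols = len(M), len(M[0])
    r = 0
    for c in range(n_cols):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, n_rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)  # inverse via Fermat's little theorem
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(n_rows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
        if r == n_rows:
            break
    return r

# toy analogue of the rank difference: an extra row stacked on a "span" matrix
beta = rank_mod_p([[1, 1, 0], [0, 1, 1]], 3) - rank_mod_p([[1, 1, 0]], 3)
print(beta)  # → 1
```

The cubic cost of this routine on a 4m^2 × 3m^2 matrix is exactly the O(m^6) bottleneck discussed above.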

Our Starting Point-The Result of Paskin-Cherniavsky and Schmerler
The work [32] is set within BIKO's framework. Starting with the matrix M≡,≢, the authors performed a sequence of elimination steps, according to the following lemma.
Lemma 2 ([32], informal). Let I_1 and I_2 denote non-empty sets of rows and columns, respectively. A′ is obtained from A by a sequence of row operations on A, so that the rows of A′[I_1, I_2] form a basis, and the rest of the rows in A′[I_1, ·], together with the columns I_2, can be removed without affecting the rank difference in question.

In fact, the result of [32] is a proof of the existence of the conversion for the finite rings G_1 = Z_{p_1 p_2} and G_2 = Z_{p_1}^β with distinct odd p_1 and p_2, for which it was enough to prove that the first row of M≢ is not spanned by M≡. Therefore, the matrix considered in [32] contained the full matrix M≡ and a single row from M≢. (As in this work we solve the problem for two sets of parameters, proving both positive and negative results, we consider the full matrix M≡,≢.) After two elimination steps, which cut the matrix M≡ down to m + m^2 rows, and a permutation of columns, they introduced a new basis B, where e_{x,y} is a vector of length m^2 having 1 in the position indexed by (x, y) and 0's elsewhere; the indexes x and y are taken modulo m.
The basic m × m blocks of Type-2 are identity matrices I, and the bigger blocks composed from them have the size of p_1 × (p_1 − 1) "small" m × m blocks. The matrix M≡ is brought to the form (A_1; A_2), where each of the matrices A_1 and A_2 is a block matrix with a left and a right part. (In [32], the matrices we are talking about carry an additional upper index (6), which is omitted here; thus, A_i = A^{(6),i} = (A^{(6),L,i} | A^{(6),R,i}) for i ∈ {1, 2}.) We remark that the appearance of A^{R,1} and A^{R,2} is slightly different from that in the work of Paskin-Cherniavsky and Schmerler: the difference is in the (p_2 − 1)-st block-row and comes from a computational mistake made in [32] while changing the basis to B. We correct this mistake and give the corrigendum in Appendix A.
In [32], only the first row of the Type-3 layer was constructed. For our purpose of obtaining β, we need the full Type-3 matrix. Therefore, we write down here the general formula for Type-3 rows, taken from [32] (Equation (12)), and use it below to construct the entire Type-3 matrix in the basis B.

Starting Point and Main Technical Tool
Our goal is to compute the difference between the ranks of the matrices M≡ and M≡,≢. We start from the matrix M≡ brought to the form (9), and we also construct the result of the initial elimination steps over the matrix M≢ from (12), obtained in [32]. We continue the elimination process using Lemma 2, considering the case m = 2q, where q is an odd prime number.

Construction of the Type-3 Matrix
First, as we need to compute the rank of the matrix M≡,≢, it is not enough to consider only a single row of the Type-3 matrix (which is the result of the sequence of elimination steps over M≢). Therefore, our first step is to reconstruct this matrix from (12) and to perform initial elimination steps, similar to those made over the matrix A_2 in [32], to bring it to the form (11).
To be consistent, we denote the initial Type-3 matrix by A^{(−1),3}, the intermediate result of the inner elimination steps over this matrix by A^{(0),3}, and the final result (at the same stage as (9)) by A^3. Next, we describe the process of obtaining the Type-3 matrix from (12). Recall that each of the matrices A is separated into a left and a right part, where the left part contains indexes of vectors from the basis B_1, and the right part from B_2. Each row of A^{(−1),3} is indexed by i, j, b, such that the largest blocks are indexed by i ∈ Z_{p_2} and contain blocks indexed by j ∈ Z_{p_1}; the smallest blocks, indexed by b ∈ Z_m, are, in turn, parts of the middle-size blocks.
First, we rewrite (12) case by case.
• Consider the case i ≠ p_2 − 1, j = 0. Each row indexed by (i, j, b) is determined by (13), where B = b, C = i + j(1, 0)_{Z_m}. The first two terms in (13) are then given by (14), and the term under the sum in (13) by (15).
• When j ≠ 0, i ≠ p_2 − 1, the sum in (15) turns to 0, while the first two terms in (13) are given by (16) instead of (14).
• When j = 0, i = p_2 − 1, the first two terms of (13) are the same as in (14); the only difference is in the sum of terms, given by (17).
• Finally, for i = p_2 − 1, j ≠ 0, the first two terms in (13) are according to (16), and the terms in the sum are as in (17), except for the term given in (18).
Substituting expressions (14)–(18) into (13) for the appropriate i, j, b, we obtain the matrix (19), which has a structure similar to that of A_2.
Here, the block matrices R^2_3 and R^3_3 are of the same form as R^2_2 (Equation (6)) and R^3_2 (Equation (7)), respectively, with the blocks T_2 replaced by the blocks T_3. The matrix R^1_3 is similar to R^1_2, but with the opposite sign and permuted rows. The matrix R^4_3 can be obtained from R^1_3 by a circular permutation of rows.

Elimination Steps in Type-3 Matrix
Following the approach of [32] to the elimination steps in A_2, we first add the block-rows of (19) with ordinal numbers from 0 to p_1 − 2 to the last block-row. The resulting matrix A^{(0),3} equals A^{(−1),3} except for the last block-row, where A^{(0),L,3}[p_2 − 1, ·] is a 0-block. The second elimination step is an inner step in every block-row except the last one (as those block-rows are the same in A^{(−1),3} and A^{(0),3}, we can say that this step is performed over (19)). Namely, in every level-2 block-row A^{(0),3}[i, ·] with i ∈ {0, ..., p_1 − 2}, we subtract the level-1 block-row with ordinal number 0 from all other sub-rows in this block-row. As a result, A^3 = (A^{L,3} | A^{R,3}), where the left-side matrix takes the form (23) and the right-side matrix the form (24).

3.4. The Case of Even m (p_1 = 2, p_2 = q)

In this section, we consider the case p_1 = 2, p_2 > 2, for both p = 2 and p > 2 (including the case p_2 = p). We obtain feasibility results; moreover, in the case p = 2, when (as we prove) the conversion from (2,3)-CNF over Z_{2p_2} to three-additive secret sharing over Z_2^β exists, we also compute β. We adopt the following notation in this section: p_2 = q > 2 and p are prime numbers, and m = 2q (later we split this case into the subcases p = 2 and p > 2). Our starting point is the block matrix A = (A^2; A^1; A^3) over Z_p, where A^1 and A^2 are described in (9), and A^3 in (23) and (24).
We next consider the matrices in the case m = 2q, where q is an odd prime number. Below, we write down the block matrices completing the matrix A; each of them contains two square m × m blocks. Recall that 1_m = (1, ..., 1) is the m-element vector of 1's, and 1_q is the q-element vector of 1's; I is the m × m identity matrix, and I_q is the q × q identity matrix.
The left-side and the right-side matrices are divided by the double vertical line. All the subsequent matrices obtained from A will carry an additional upper index (i), where i is the ordinal number of the matrix in the sequence of transformation steps. Within this section, we consider the matrices A^{(·),·} as having level-0 blocks as their entries, and we therefore refer to level-0 blocks as the entries of the level-2 matrices A^{(·),·}.
First of all, we subtract A_2 from A_3 and rewrite all the matrices so that the block-rows for i < p_2 − 1, j = 0 come first, then the block-rows for i < p_2 − 1, j = 1, while the two last block-rows remain where they were before.
We note that the last block-row in A^{(1),2} is the same as the previous one up to sign; the same observation applies to A^{(1),3}.

Internal Transformations in Matrices on the Level-2
We perform some straightforward steps inside each of the matrices A^{(1),·} to reach a more convenient form. In A^{(1),2}, we eliminate the last block-row; in addition, we add all the block-rows A^{(1),2}[i] for i ∈ {q − 1, ..., 2q − 3} to the block-row A^{(1),2}[2q − 2] and change the sign of this row. The resulting matrix is A^{(2),2} (Equation (34)). In A^{(1),1}, we change the sign of the penultimate row, move it to the position p_2 − 1 = q − 1, and perform a telescopic elimination by the following sequence of steps: starting with the row i = q, we subtract the previous row from the ith one, then change the sign of the ith row; then we increment i by 1. We repeat this up to the last row of the matrix. Note that the last row turns to 0, so we eliminate it from the matrix. The result is the matrix A^{(2),1} with (2q − 1) rows (Equation (35)). We perform the same transformation in A^{(1),3}, working with block-rows instead of rows. The result is the matrix A^{(2),3} with (2q − 1) block-rows (Equation (36)).

Resolution of the Level-0 Blocks in A (2),3
Consider the block-rows of the form (· · · T + 2I · · ·) in (36). According to (30), the block T + 2I equals the matrix ((I_q|I_q); (I_q|I_q)), whose first q rows coincide with the last q ones, so those q rows can be eliminated from the matrix. (According to our notation, the q × m block equal to the concatenation of two I_q's is denoted (I_q|I_q).) The rows just considered belong to the local basis of A^{(2),3}. Next, we consider the block-rows (· · · T · · · −I I · · ·) in (36). Subtracting from such a block-row the basis vectors from the two corresponding block-rows (· · · (I_q|I_q) · · ·), and taking into account (30), we transform it to the form (37). Again, the first q rows in this block-row can be crossed out from the matrix, as they coincide, up to sign, with the q following ones. The resulting matrix then contains only q(2q − 1) rows of the local basis.

Next, we consider the block-rows of the form (· · · T_2 · · · I · · ·) in A^{(2),2} (Equation (34)). For each j > q, the row T_2[j] is spanned by the rows of T_2[{0, ..., q}, ·]; hence, we can bring all the T_2[j] to 0 by inner linear row operations within the block-row, after which the block-row under consideration takes the form (38), where T_2′ = T_2[{0, ..., q}, ·] and J is a (q − 1) × q matrix (Equation (39)). The set of all the upper block-rows as in (38) is in echelon form and belongs to the basis of our matrix. Now we turn to the first q − 1 rows of A^{(2),1} (Equation (35)), taking into account that 1_m = T_2[0] + T_2[q]. Subtracting the corresponding rows of A^{(2),2} from A^{(2),1}, we bring the rows of the form (A^{(2),1}[i, ·] = · · · 1_m · · ·) to the form (0 · · · (−e_0 | −e_0) · · ·). Considering this resulting row together with the second block-row of (38), we see that these are easily transformable to the form (0 · · · (I_q|I_q) · · ·).
As the block-rows of the form (0 · · · (I_q|I_q) · · ·) are composed from transformed rows of both matrices A^{(2),2} and A^{(2),1}, we denote this merged matrix of q − 1 block-rows by A^{(3),1&2} (Equation (40)). This matrix is in left echelon form and thus consists of basis vectors of M≡,≢. In addition, we consider the last block-row of A^{(2),2}, namely (0 I I · · · I T_2 − I), and subtract from it all the corresponding basis vectors of (40). Each block I then turns into ((0 −I_q); (0 I_q)). Adding the upper half of the resulting block-row to the lower one (and changing the sign of the upper one), we get a block-row of the form (0 (0|I_q) (0|I_q) · · · (0|I_q) · · ·). We subtract the last row of A^{(2),1} from each of the last q rows of this block-row. These last q rows then take the form (0 · · · (I_q|I_q)), and we append them to (40) to complete the matrix A^{(3),1&2}. Then, subtracting the appropriate rows of the block (I_q|I_q) from the block (I − T_2)[0, ..., q − 1], we transform it to the form (0_{q×q} | I_q − N), where N is the q-dimensional matrix defined in (41). Finally, we consider the block-rows of A^{(2),2} in (34) of the form (0 · · · T_2 − 2I −T_2 · · ·). Eliminating the basis (I_q|I_q) from the block T_2 − 2I gives (0 N 0 −N); for the block −T_2, the result of the elimination is given by (42). Thus, each block-row under consideration turns into the following q-row block-row: (0 · · · (0|N) (0|2I_q − N) · · ·).

Resolution of the A (3),1&2 Basis
To apply Lemma 2, it is necessary to subtract the vectors of the basis A^{(3),1&2} from the first (q − 1) block-rows of A^{(3),2}. The matrix A^{(3),3} contains exactly the same block-rows as A^{(3),1&2}, which can simply be crossed out.
Subtracting the block (I_q|I_q) from the block (I_{q+1}|0), we obtain the latter in the form (0 −I_q; 0 e_0).
We denote the (q + 1) × q block (−I_q; e_0) as (−I_{q+1}). Then, after applying Lemma 2 to remove the basis A^{(3),1&2} and the corresponding columns, we obtain the matrix A^{(4)}.

Elimination of the Left-Side Matrices
We note that each row in the block (I_q | −I_q) is spanned by the rows of T_2. Subtracting the corresponding rows of A^{(4),2} from A^{(4),3} and applying Lemma 2, we obtain the matrix A^{(5)} = (A^{(5),2}; A^{(5),3}) of q × q level-0 blocks (Equation (45)); here, the left side is crossed out.
3.4.6. Resolution of the N and 2I_q − N Blocks

Up to this point, we performed transformations in the matrices without reference to any particular modulus. Considering the blocks N and 2I_q − N in A^{(5),2} (Equation (45)), we see two different situations, depending on whether the prime modulus satisfies p = 2 or p > 2:

• In the case p > 2, each N defined in (41) can be transformed into I_q by linear transformation steps such as additions of rows and multiplications by (−1) and 2^{−1} (which exists because p is odd). Applying the same transformations to the adjacent block 2I_q − N, we turn it into −I*_q.
• In the case p = 2, each block-row of the form (· · · N 2I_q − N · · ·) in (45) contains q equal rows (· · · 1_q 1_q · · ·).
According to the dichotomy above, we next consider two cases.
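The p > 2 transformation above relies only on 2 being invertible modulo an odd prime; a quick runnable check (the helper name inv2 is ours):

```python
def inv2(p):
    """2^{-1} modulo an odd prime p, via Fermat's little theorem."""
    return pow(2, p - 2, p)

# scaling a pivot row: [2, 4, 6] * 2^{-1} (mod 5) becomes [1, 2, 3]
row = [(x * inv2(5)) % 5 for x in [2, 4, 6]]
print(row)  # → [1, 2, 3]
```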
Theorem 9. Assume m = 2q, and p, q are odd prime numbers. Then there is no share conversion from (2,3)-CNF over Z 2q to three-additive secret-sharing scheme over Z β p for any β.
Proof. The proof follows from Theorem 8 and the fact that rank(M≡,≢) = rank(M≡).
Performing the same permutation of rows in each block-row of A^{(6),3} as in the case p > 2, we obtain block-rows of the form (· · · I_q I_q^{<<1} · · ·), where, according to our notation, I_q^{<<k} is the result of the circular left shift of I_q by k positions. We remark that I_q^{<<k_1} · I_q^{<<k_2} = I_q^{<<(k_1+k_2) mod q}. Subtracting the last block-row of A^{(6),2} from the first block-row of A^{(6),3}, we obtain the matrix A^{(7),3}. The matrix I_q ⊕ I_q^{<<1} is invertible, hence it defines a linear transformation.
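The shift identities used in this argument can be checked mechanically; a small sketch with arbitrary sample values q = 5, p = 2 (helper names identity, shift, and matmul are ours):

```python
def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def shift(X, k):
    """X^{<<k}: circular left shift of the columns of X by k positions."""
    n = len(X[0])
    k %= n
    return [row[k:] + row[:k] for row in X]

def matmul(A, B, p):
    return [[sum(a * b for a, b in zip(row, col)) % p
             for col in zip(*B)] for row in A]

q, p = 5, 2  # sample values: odd prime q, modulus p
I = identity(q)

# composition rule: I^{<<k1} * I^{<<k2} = I^{<<((k1+k2) mod q)}
assert matmul(shift(I, 2), shift(I, 4), p) == shift(I, (2 + 4) % q)

# the sum of all q shifts of I_q is the all-ones matrix 1_{q x q}
S = [[sum(shift(I, k)[i][j] for k in range(q)) % p for j in range(q)]
     for i in range(q)]
assert S == [[1] * q for _ in range(q)]
```

The second assertion is the fact that each entry of ∑_k I_q^{<<k} counts exactly one shift placing a 1 there.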
We subtract the row (I_q ⊕ I_q^{<<1}) A^{(7),3}[1] from A^{(7),3}[0]. Then we similarly subtract the 3rd block-row multiplied by the 3rd element of the first block-row, then the 4th, and so on. As a result, taking into account that ∑_{k=0}^{q−1} I_q^{<<k} = 1_{q×q} = N, the first block-row A^{(7),3}[0] becomes zero and can be crossed out of the matrix. Now, we make some elimination steps to bring the matrix A^{(6),2} to echelon form (such that all its rows are basis rows). For this, we subtract all the rows with even ordinal numbers from the first row of the last block-row. The resulting last block-row in A^{(6),2} turns into (K K · · · K K ⊕ N), where K is a certain q × q 0-1 matrix whose first row equals the sum of its other rows.
In both blocks K and K ⊕ N, the first row is spanned by the others (as their sum); thus, the first row of this block-row can be crossed out. The remaining rows in this block-row are basis vectors which do not span A^{(7),3} and thus, according to Lemma 2, can be excluded from further consideration (together with the first row, which also does not span A^{(7),3}).
The matrix A^{(7),3} contains (q − 2) block-rows with q rows each. We make the last elimination step by subtracting the rows of A^{(7),2} from the first rows of the corresponding block-rows of A^{(7),3}. Then each block I_q turns into K, and each I_q^{<<1} into K^{<<1}, where the first row is the sum of the others. Hence, each block-row in A^{(7),3} loses its first row, and the rest of its rows are not spanned by the basis vectors of A^{(7),2}. Thereby, there are (q − 2) block-rows with (q − 1) rows each, and rank(M≡,≢) − rank(M≡) = (q − 1)(q − 2).

Table 4 in the work of Beimel et al. [13] reports the ranks of the matrices M≡ and M≡,≢ for m = 6, 10, 14, 15, 21, 35 and p = 2, 5, 7, 11. Unfortunately, some of the data there contradict the proven properties of those matrices. For instance, in [32] it was proven that a share conversion exists in the case m = p_1 · p_2 and p = p_1, where p_1 and p_2 are distinct odd primes. At the same time, Table 4 in [13] shows that for the case m = 5 · 7 it holds that rank(M≡) = rank(M≡,≢) over Z_7, which would mean the absence of the conversion. Moreover, rank(M≡) cannot be less than m^2, since the matrix M≡ has an identity block of size m^2 × m^2 in its upper left corner; however, for the case m = 35 in Table 4 of [13], this rank appears to be smaller. Therefore, we recalculated this table (also by computer search) in Tables 1 and 2, both to correct the results of [13] and to check the soundness of our derivations. The results in Table 1 confirm our conclusions: indeed, for the case m = 2q, there is a conversion if and only if the modulus of the group is 2, and in this case β = (q − 1)(q − 2). Table 2 is relevant to the result of [32]: for the case of odd m = p_1 · p_2, there is a conversion if the modulus p of the group equals either p_1 or p_2. In this paper, as well as in [32], only the case of the set S_m of size 3 was considered.
However, as we noted in the Introduction, larger sets, if the conversion for them exists, could result in MV families with higher VC dimension and hence in better PIR. For the cases when the conversion exists with respect to the relation C_{S_m}, we decided to also check extended sets S_m′, trying different additional values from Z_m and checking the ranks of M≡ and M≡,≢.

Computer Search Results on the Set S_m and the Extended Set S_m′
For even m, we only tested possible extensions of S_m modulo 2 (because if there is no conversion for a set S_m, then there is no conversion for any extended set). Of all the cases in Table 1, only for m = 2 · 7 are there extended sets S_m′ = {1, 3, 7, 8} and {1, 5, 7, 8} with β > 0 (namely, β = 6). The set S_m′ = {1, 3, 5, 7, 8} gives β = 0 and therefore the absence of the conversion.
Surprisingly, for odd m's the results are more encouraging: for all m and p in Table 2 which provide β > 0 for S_m, there were also extended sets S_m′ with non-zero β. We summarize them in Table 3; the row S_m′ \ S_m contains the subset extending S_m up to S_m′. It is interesting that adding any number of entries from S_m′ \ S_m to S_m gives the same rank(M≡) and β. It is also interesting that, in all the checked cases with odd m, the set S_m′ contains all the entries which are equal to 1 modulo p_2 (taking p_1 = p), except for 1 and (0, 1)_{Z_m}, which are already in S_m; namely, S_m′ \ S_m = {(2, 1)_{Z_m}, ..., (p − 1, 1)_{Z_m}}.

Appendix A.1. The Matrix A^{(−1),2}

The mistake in the construction of A_1 and A_2 only affects the last block-rows of these matrices (i.e., those for i = p_2 − 1). Therefore, we consider in detail how these block-rows are built. According to Section 4.5 in [32], the Type-2 vectors are of the form (A1), with the terms (e_{b+k,c} − e_{b+k,c+1}) under the summation.
When i = p_2 − 1, each term under the summation in (A1) is similar to (17) and is given by (A2). The first two terms of (A1) differ for distinct j: if j < p_1 − 1, they are given by (A3); for i = p_2 − 1, j = p_1 − 1, they are given by (A4). Equations (A3) and (A4) give us the matrix R^1_2 in the last block of the last block-row of A^{(−1),2}. Equation (A2) gives the last block-row in A^{(−1),L,2} and the blocks −R^3_2 and R^2_2 in the last block-row of A^{(−1),R,2}. The upper block-rows in A^{(−1),2} remain as in [32].
Here, the "right side" A (−1),R,2 is a p 2 × p 2 block matrix. Its content is as follows.
The left-side matrix A (−1),L,2 is a block matrix of size p 2 × 1 (where indeed the number of rows in each of its p 2 blocks is consistent with that of A (−1),R,2 ). It has the following structure (There was a missing "-" in the description of A (−1),L,2 [p 2 − 1] in [32]).
Appendix A.2. The Matrix A^{(−1),3}

We choose (B, C) = (0, 0). Then, by Equation (12) and a simple calculation, the line takes the following form (note that another T_{0,0} element is required to take care of the e_{0,0} − e_{0,(0,1)_{Z_m}} part, so it is subtracted and is thus missing from the first sum).

Appendix A.3. The Matrix A^{(−1),1}

We now consider in detail only the last block-row of A^{(−1),1}. According to Section 4.5 in [32], the Type-1 vectors are of the form (A5). When i = p_2 − 1, each term under the summation in (A5) is given by (A6). The first term in (A6) gives the block filled by (−1)'s in the left part of the matrix A^{(−1),1}, the second term gives R^3_1 in the first block of the right part, and the third term gives R^2_1 in the last block of the last block-row of A^{(−1),1}. The upper block-rows in A^{(−1),1} remain as in [32].
The resulting matrix A^{(−1),R,1} is as follows (the last block-row is changed to correct the mistake made in [32]), where the resulting R^2_1 is defined in (2), and R^3_1 in (3). (The matrix R^3_1 is corrected in comparison with [32].)

Appendix A.4. Another Elimination Sequence
From now on, assume that p = p_1 and that p_2 > 2; we leave the full analysis of the other cases for future work. We are now able to apply Lemma 2. We perform this step for I_2 corresponding to the L-part blocks of A^{(−1)}, and proceed in several steps, performing the row operations starting at a coarser resolution and then proceeding to a finer one.
We perform a similar transformation on A^{(−1),1}, resulting in A^{(0),1}, where A^{(0),R,1} corresponds to A^{(5),R,1} in [32] (and differs from it because of the corrected mistake), and A^{(0),L,1} corresponds to A^{(5),L,1} in [32].

Step 2: Working at the Resolution of Level-0 Blocks

Here, we view the matrix A^{(0)} as a block matrix over level-0 blocks. That is, we denote by (i, j) the row block corresponding to the jth level-0 block inside the ith level-1 block of A. We transform A^{(0)} into a matrix A^{(1)} as follows.
Step 3: Working within Level-0 Blocks

Here, we move to working with individual rows and complete the task of leaving a basis of the original rows of A^{(−1),L} as the set of non-zero rows of the matrix A^{(1),L} obtained by a series of row operations. To this end, our goal is to understand the set of remaining rows in A^L. In the A^{L,2} part, each level-0 block-column (with blocks of size m × m) has only G =def Rows(T_2) ∪ {(1, . . . , 1)} (here, 1 appears m times) as non-zero rows, and in each row, there are non-zero entries in only one of the blocks. Thus, it suffices to find a basis for the set G of vectors.
Lemma A1. Assume p = p_1. Then, the index set I = {k | 0 ≤ k ≤ (p_1 − 1)p_2} satisfies that Rows(T_2[I, [m]]) is a basis for G. In particular, for each i ∈ Z_p, the vector (1, . . . , 1) is a linear combination of these rows with non-zero coefficients equal to x, where x is computed as follows: first calculate y as p_2^{−1} modulo p_1 (that is, in Z_{p_1}); then, we "lift" y back into Z (1 ≤ y ≤ p_1 − 1) and set x to be y modulo p, that is, x is an element of Z_p (note that all non-zero coefficients in the linear combination that results in (1, . . . , 1) indeed belong to I).
Another observation that will be useful to us identifies the dual of T_2.
Lemma A2. Assume p = p_1. Then, the set of vectors: is a basis of Ker(T_2), where Ker(T_2) := {v | v · T_2 = 0} denotes the left kernel of the matrix T_2.
The observations are rather simple to prove by basic techniques; see [32]. Note that the general theory of cyclotomic matrices is not useful here, as it holds over infinite or large fields (larger than the matrix size), so we proceed by an ad-hoc analysis of the (particularly simple) matrices at hand.
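Membership in the left kernel, as used in Lemma A2, amounts to checking that v·T vanishes entrywise modulo p. The following sketch verifies this for a toy matrix (our own example, not the paper's T_2):

```python
def in_left_kernel(v, T, p):
    """Check that v · T ≡ 0 (mod p), i.e., v lies in the left kernel of T."""
    ncols = len(T[0])
    return all(
        sum(v[i] * T[i][j] for i in range(len(T))) % p == 0
        for j in range(ncols)
    )
```

For example, with T = [[1,0], [2,0], [0,1]] over Z_5, the vector (2, 4, 0) is in the left kernel since 2·1 + 4·2 = 10 ≡ 0 (mod 5), while (1, 0, 0) is not.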
Overall, the resulting A^{(1),2} is as follows: A^{(1),R,2} is identical to A^{R,2}, except for replacing R_2^4 with R_2^5. That is, in the last block row R_2^5, all cells are R_2^{-1}, and there are p_1 such cells.
Next, we handle the A^{(1),1} part (here, A^{(1),1} corresponds to A^{(7),1} in [32]). Here, we b-zero the remaining rows in A^{L,1} by adding the right combination of rows in A^{L,2}. The combination is determined by the "in particular" part of Lemma A2. The resulting matrix A^{(1),L,2} is identical to A^{L,2}, except for T_2 in each L_2^i being replaced by T_2'. Here, T_2' has the form: Here, A^{(1),L,1} becomes zero, which was our goal. Note that, as opposed to the previous transformations, the transformation performed on A^{L,1} does not "mirror" the transformation performed on A^{L,2} and in fact involves rows from both A^{L,2} and A^{L,1}.

A^{(1),R,1} is identical to A^{R,1}, except that in each Level-1 block (i, i) for i ∈ {0, . . . , p_2 - 2}, the first row of R_1^2 (the content of this block) is replaced by (we correct here a typo in [32] by adding the exponent (-1) to x):

It remains to b-zero the L-part of A^3. For simplicity, we focus on V^{L,3}[0, 0] (which is the only non-zero block in V^{L,2}) and then use the resulting linear dependence to produce the new row V^3[0, ·]. Taking I_1 to be the set of rows in A^{(1)} that correspond to non-zero rows in A^{(1),L,1} and I_2 corresponding to L, we obtain the following matrix A^{(2)} (the matrix A^{(2)} here corresponds to A^{(8)} in [32]). On Level 1, it has a block structure similar to that of A^{(1),R} (where the number of rows changes in some of the matrices). More concretely, A^{(2),2} has the form (the correction of the mistake in [32] leads to the changed first block in the last block row in A^{(2),2}):

Here, R_2^{5,-} is identical to R_2^5 except that the top m - p_2 + 1 rows in it are removed. That is, it is identical to R_2^4, except that the (0,0)-th Level-0 block in R_2^4 replaces I by C, which are equal.
Similarly, R_2^{2,-} is obtained from R_2^2 in the same manner; in this case, only zero rows are removed. A^{(2),1} is precisely A^{(1),R,1} (no rows were eliminated from there, as all corresponding rows on the left side became zero). Similarly, A^{(2),3} = A^{(1),R,3}.
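The row-clearing operation used throughout these steps (adding a coefficient combination of helper rows to zero out a target row) can be sketched as follows. This is an illustrative helper with made-up names; the coefficients in the paper come from the linear dependences established in Lemmas A1 and A2.

```python
def add_combination(target, helpers, coeffs, p):
    """Return target + sum_i coeffs[i] * helpers[i], entrywise modulo p."""
    out = list(target)
    for c, h in zip(coeffs, helpers):
        out = [(a + c * b) % p for a, b in zip(out, h)]
    return out
```

For example, over Z_5, adding 4·(1,0) + 3·(0,1) to the row (1,2) clears it to (0,0), since 4 ≡ -1 and 3 ≡ -2 (mod 5).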