Article

Designing a Practical Code-Based Signature Scheme from Zero-Knowledge Proofs with Trusted Setup

Shay Gueron, Edoardo Persichetti and Paolo Santini
1 Amazon Web Services Inc., Seattle, WA 98101, USA
2 Department of Mathematics, University of Haifa, Haifa 3498838, Israel
3 Department of Mathematical Sciences, Florida Atlantic University, Boca Raton, FL 33431, USA
4 Department of Engineering, Marche Polytechnic University, 60121 Ancona, Italy
* Author to whom correspondence should be addressed.
Cryptography 2022, 6(1), 5; https://doi.org/10.3390/cryptography6010005
Submission received: 10 December 2021 / Revised: 5 January 2022 / Accepted: 11 January 2022 / Published: 27 January 2022

Abstract

This paper defines a new practical construction for a code-based signature scheme. We introduce a new protocol that is designed to follow the recent paradigm known as “Sigma protocol with helper”, and prove that the protocol’s security reduces directly to the Syndrome Decoding Problem. The protocol is then converted to a full-fledged signature scheme via a sequence of generic steps that include: removing the role of the helper; incorporating a variety of protocol optimizations (using e.g., Merkle trees); applying the Fiat–Shamir transformation. The resulting signature scheme is EUF-CMA secure in the QROM, with the following advantages: (a) Security relies on only minimal assumptions and is backed by a long-studied NP-complete problem; (b) the trusted setup structure allows for obtaining an arbitrarily small soundness error. This minimizes the required number of repetitions, thus alleviating a major bottleneck associated with Fiat–Shamir schemes. We outline an initial performance estimation to confirm that our scheme is competitive with respect to existing solutions of similar type.

1. Introduction

Most of the public-key cryptosystems currently in use are threatened by the development of quantum computers. Due to Shor’s algorithm [1], for example, the widely used RSA and Elliptic-Curve Cryptography (ECC) will be rendered insecure as soon as a large-scale functional quantum computer is built. To prepare for this scenario, there is a large body of active research aimed at providing alternative cryptosystems for which no quantum vulnerabilities are known. The earliest among those is the McEliece cryptosystem [2], which was introduced over four decades ago, and relies on the hardness of decoding random linear codes. Indeed, a modern rendition of McEliece’s scheme [3] is one of the major players in NIST’s recent standardization effort for quantum-resistant public-key cryptographic schemes [4].
Lattice-based cryptosystems play a major role in NIST’s process, especially due to their impressive performance figures. Yet, code-based cryptography, the area comprising the McEliece cryptosystem and its offspring, provides credible candidates for the task of key establishment (other examples being HQC [5] and BIKE [6], both admitted to the final round as “alternates”). The situation, however, is not the same when talking about digital signatures. Indeed, NIST identified a shortage of alternatives to lattice-based candidates, to the point of planning a reopening of the call for proposals (see for instance [7]).
There is a long history of code-based signatures, dating back to Stern’s work [8], which introduced a Zero-Knowledge Identification Scheme (ZKID). It is known that ZKIDs can be turned into full-fledged signature schemes via the Fiat–Shamir transformation [9]. Stern’s first proposal has since been extended, improved and generalized (e.g., [10,11,12]). However, all of these proposals share a common drawback: a high soundness error, ranging from 2/3 to (asymptotically close to) 1/2. This implies that the protocol requires multiple repetitions and leads to very long signatures. Other schemes, based on different techniques, have also been proposed in the literature. Trapdoor-based constructions (e.g., [13,14]) usually suffer from a problem strictly connected to the (Hamming) code-based setting: to be precise, unlike the case of the RSA-based Full-Domain Hash (FDH) and similar schemes, a randomly chosen syndrome is, in general, not decodable. This makes signing very slow, since multiple attempts need to be made, and furthermore leads to parameter choices for the underlying linear codes that yield very large public keys. This issue is somewhat mitigated in [14], although key sizes are still large (3.2 MB for 128 security bits) and signing is still hindered by complex sampling techniques. Protocols based on code equivalence (e.g., [15,16]) show promising performance numbers, yet are very new and require further study before becoming established; the attack presented in [17], for instance, suggests that the exact hardness of the code equivalence problem has yet to be established in practice. Finally, there exists a literature on schemes using a different metric, such as the rank metric [18,19] or the “restricted” metric [20]. All of these schemes typically show very good performance, yet the hardness of the underlying problems is also not fully trusted; for instance, RankSign was broken in [21], Durandal attacked in [22], and the scheme of [20] appears to be vulnerable to subset-sum solvers.

Our Contribution

We present a new code-based zero-knowledge scheme that improves on the existing literature by featuring an arbitrarily low soundness error, typically equal to the reciprocal of the size q of the underlying finite field. This allows us to greatly reduce the number of repetition rounds needed to obtain the target security level, consequently reducing the signature size. To do this, our construction leverages the recent approach by Katz et al. [23], using a Multi-Party Computation (MPC) protocol with preprocessing. More concretely, we design a “Sigma protocol with helper”, following the nomenclature of [24]. We then show how to convert it to a full-fledged signature scheme and provide an in-depth analysis of various techniques utilized to refine the protocol. Our scheme is equipped with a wide range of optimizations, to balance the added computational cost that stems from the MPC construction. In the end, we are able to achieve very satisfactory performance figures, with a very small public key, and rather short signatures.
It is worth remarking on security aspects. First of all, our scheme rests on an incredibly solid basis: the security of the main building block, in fact, is directly connected to the Syndrome Decoding Problem (SDP). To the best of our knowledge, the only other code-based schemes to do so are the original Stern construction and its variants which, however, pay a heavy price in terms of signature size. Our signature scheme is obtained via standard theoretical tools, which exploit the underlying zero-knowledge and naturally do not add any vulnerabilities. In the end, we obtain a scheme that is secure in the QROM with strong security guarantees and minimal assumptions. We deem this as a very important feature of our proposal.

2. Preliminaries

We will use the following conventions throughout the rest of the paper:
  • a: a scalar
  • a (bold, lowercase): a vector
  • A (bold, uppercase): a matrix
  • A: a function or algorithm
  • A: a protocol
  • I n : the n × n identity matrix
  • λ: a security parameter
  • [ a ; b ]: the set of integers { a , a + 1 , … , b }

2.1. Coding Theory

An [ n , k ] -linear code C of length n and dimension k over F q is a k-dimensional vector subspace of F q n . It is usually represented in one of two ways. The first identifies a matrix G F q k × n , called generator matrix, whose rows form a basis for the vector space, i.e., C = { u G , u F q k } . The second way, instead, describes the code as the kernel of a matrix H F q ( n k ) × n , called parity-check matrix, i.e., C = { x F q n : x H = 0 } . Linear codes are usually measured using the Hamming weight, which corresponds to the number of non-zero positions in a vector. Isometries for the Hamming metric are given by monomial transformations τ = ( v ; π ) F q * n S n , i.e., permutations combined with scaling factors. In this paper, we will denote by FWV ( q , n , w ) the set of vectors of length n with elements in F q and fixed Hamming weight w. The parity-check representation for a linear code leads to the following well-known problem.
Definition 1
(Syndrome Decoding Problem (SDP)). Let C be a code of length n and dimension k defined by a parity-check matrix H ∈ F q ( n − k ) × n , and let s ∈ F q n − k , w ≤ n . Find e ∈ F q n such that e H ⊤ = s and e is of weight w.
SDP was proved to be NP-complete for both the binary and q-ary cases [25,26], and is thus a solid base to build cryptography on. In fact, the near-entirety of code-based cryptography relies more or less directly on SDP. The main solvers for SDP belong to the family of Information-Set Decoding (ISD); we will expand on this when discussing practical instantiations and parameter choices, in Section 5.
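As a concrete illustration, the following minimal Python sketch (with toy parameters and illustrative names of our own choosing, far too small to be secure) samples a random SDP instance: a parity-check matrix H, an error vector e of weight w, and the corresponding syndrome s = e H⊤.

```python
import random

q, n, k, w = 11, 12, 6, 3        # toy parameters, far too small to be secure

def random_sdp_instance(rng=random):
    # Parity-check matrix H in F_q^{(n-k) x n}, sampled uniformly at random.
    H = [[rng.randrange(q) for _ in range(n)] for _ in range(n - k)]
    # Error vector e of Hamming weight exactly w.
    e = [0] * n
    for i in rng.sample(range(n), w):
        e[i] = rng.randrange(1, q)          # non-zero entries only
    # Syndrome s = e * H^T over F_q.
    s = [sum(e[j] * row[j] for j in range(n)) % q for row in H]
    return H, e, s

H, e, s = random_sdp_instance()
assert sum(x != 0 for x in e) == w           # the planted solution has weight w
print("syndrome:", s)
```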

2.2. Technical Tools

We recall here the characteristics of a Sigma protocol with helper, as formalized in [24]. This is an interactive protocol between three parties (which we model as PPT algorithms): a prover P = ( P 1 , P 2 ) , a verifier V = ( V 1 , V 2 ) and a trusted third party H called helper. The protocol takes place in the following phases:
  • I. Setup: the helper takes a random seed seed as input, generates some auxiliary information aux , then sends the former to the prover and the latter to the verifier.
  • II. Commitment: the prover uses seed , in addition to his secret sk , to create a commitment c and sends it to the verifier.
  • III. Challenge: the verifier selects a random challenge ch from the challenge space and sends it to the prover.
  • IV. Response: the prover computes a response rsp using ch (in addition to his previous information), and sends it to the verifier.
  • V. Verification: the verifier checks the correctness of rsp , then checks that this was correctly formed using aux , and accepts or rejects accordingly.
A Sigma protocol with helper is expected to satisfy the following properties, which are closely related to the standard ones for ZK protocols:
  • Completeness: if all parties follow the protocol correctly, the verifier always accepts.
  • 2-Special Soundness: given an adversary A that outputs two valid transcripts ( aux , c , ch , rsp ) and ( aux , c , ch ′ , rsp ′ ) with ch ≠ ch ′ , it is possible to extract a valid secret sk . Note that this is not necessarily the one held by the prover, but could in principle be any witness for the relation ( pk , sk ) .
  • Special Honest-Verifier Zero-Knowledge: there exists a probabilistic polynomial-time simulator algorithm that is capable, on input ( pk , seed , ch ) , of outputting a transcript ( aux , c , ch , rsp ) which is computationally indistinguishable from one obtained via an honest execution of the protocol.
Of course, the existence of a helper party fitting the above description is not realistic, and thus it is to be considered just a technical tool to enable the design of the protocol. In Section 3.2, we will show the details of the transformation to convert a Sigma protocol with helper into a customary 3-pass ZK protocol.
The design of such a protocol will crucially rely on several building blocks, in addition to those coming from coding theory. In particular, we employ a non-interactive commitment function Com : { 0 , 1 } λ × { 0 , 1 } * → { 0 , 1 } 2 λ . The first λ bits of input, chosen uniformly at random, guarantee that the input message is hidden in a very strong sense, as captured in the next definition.
Definition 2.
Given an adversary A , we define the two following quantities:
\[ \mathrm{Adv}^{\mathrm{Bind}}(\mathcal{A}) = \Pr\left[ \mathsf{Com}(r, x) = \mathsf{Com}(r', x') \;:\; (r, x, r', x') \leftarrow \mathcal{A}(1^{\lambda}) \right]; \]
\[ \mathrm{Adv}^{\mathrm{Hide}}(\mathcal{A}, x, x') = \left| \Pr_{r \leftarrow \{0,1\}^{\lambda}}\left[ \mathcal{A}(\mathsf{Com}(r, x)) = 1 \right] - \Pr_{r \leftarrow \{0,1\}^{\lambda}}\left[ \mathcal{A}(\mathsf{Com}(r, x')) = 1 \right] \right|. \]
We say that Com is computationally binding if, for all polynomial-time adversaries A , the quantity Adv Bind ( A ) is negligible in λ. We say that Com is computationally hiding if, for all polynomial-time adversaries A and every pair ( x , x ′ ) , the quantity Adv Hide ( A , x , x ′ ) is negligible in λ.
Informally, the two properties defined above ensure that nothing about the input is revealed by the commitment and that, furthermore, it is infeasible to open the commitment to a different input.
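For illustration, such a commitment can be instantiated (heuristically, under the usual random-oracle-style assumptions) with an extendable-output hash function; the sketch below is our own choice and uses SHAKE-256 with λ = 128, so that Com maps a λ-bit randomness and an arbitrary message to a 2λ-bit digest.

```python
import hashlib, os

LAMBDA = 128                               # security parameter, in bits

def com(r: bytes, x: bytes) -> bytes:
    """Com(r, x): hash-based commitment with a 2*lambda-bit output."""
    assert len(r) == LAMBDA // 8
    h = hashlib.shake_256()
    h.update(len(r).to_bytes(2, "big") + r + x)   # unambiguous encoding of (r, x)
    return h.digest(2 * LAMBDA // 8)

# Usage: commit to x, later open by revealing (r, x).
r = os.urandom(LAMBDA // 8)
c = com(r, b"value to be committed")
assert c == com(r, b"value to be committed")      # the opening verifies
```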

3. The New Scheme

The new Sigma protocol with helper is described in Algorithm 1. Next, we proceed to prove that the scheme satisfies the three fundamental properties of a Sigma protocol with helper, which we described in Section 2.2.

3.1. Security

Correctness: we have that τ ( y ) H ⊤ = τ ( u + z ẽ ) H ⊤ = τ ( u ) H ⊤ + z τ ( ẽ ) H ⊤ , and the second addend is exactly z s ; hence t = τ ( y ) H ⊤ − z s = τ ( u ) H ⊤ , as expected.
Soundness: intuitively, this is based on the fact that enforcing τ to be an isometry is equivalent to checking that ẽ has the correct weight. In fact, we want to show that, given two transcripts that differ in the challenge, we are able to extract a solution for SDP. Consider then the two transcripts ( aux , c , z , r , r z , τ , y ) and ( aux , c , z ′ , r ′ , r z ′ , τ ′ , y ′ ) with z ≠ z ′ . Now let t = τ ( y ) H ⊤ − z s and t ′ = τ ′ ( y ′ ) H ⊤ − z ′ s . By the binding properties of the commitment (hash, etc.), the verifier only accepts if c = Com ( r , τ , t ) = Com ( r ′ , τ ′ , t ′ ) , so in particular this implies τ = τ ′ and t = t ′ . Since the helper computed everything honestly, and aux is properly formed, the verifier also requires that c z = Com ( r z , u + z ẽ ) = Com ( r z , y ) and c z ′ = Com ( r z ′ , u + z ′ ẽ ) = Com ( r z ′ , y ′ ) , from which it follows, respectively, that y = u + z ẽ and y ′ = u + z ′ ẽ . We now put all the pieces together, starting from t = t ′ , τ = τ ′ and substituting in. We obtain that τ ( u + z ẽ ) H ⊤ − z s = τ ( u + z ′ ẽ ) H ⊤ − z ′ s , which implies that z τ ( ẽ ) H ⊤ − z s = z ′ τ ( ẽ ) H ⊤ − z ′ s . Rearranging and gathering common terms leads to ( z − z ′ ) τ ( ẽ ) H ⊤ = ( z − z ′ ) s and, since z ≠ z ′ , we conclude that τ ( ẽ ) H ⊤ = s . It follows that τ ( ẽ ) is a solution to SDP, as desired. Note that this can easily be calculated since y − y ′ = ( z − z ′ ) ẽ , from which ẽ can be obtained and hence τ ( ẽ ) (since τ is known).
Zero-knowledge: it is easy to prove that no knowledge is leaked by honest executions. To show this, we construct a simulator which proceeds as follows. It takes as input H , s , a challenge z and the seed. The simulator then reconstructs aux , r z and y = u + z ẽ , having obtained u and ẽ from the seed. It then selects a random permutation π and computes t = π ( y ) H ⊤ − z s . Finally, it generates a randomness r , calculates the commitment as c = Com ( r , π , t ) , and outputs the transcript ( aux , c , z , r , r z , π , y ) . It is easy to see that such a transcript passes verification, and in fact it is identical to one produced by an honest prover, with the exception of π being used instead of τ .
Algorithm 1: Our Proposed Sigma Protocol with Helper
Public Data
Parameters q , n , k , w N , a full-rank matrix H F q ( n k ) × n and a commitment function Com : { 0 , 1 } λ × { 0 , 1 } * { 0 , 1 } 2 λ .
Private Key
A vector e FWV ( q , n , w ) .
Public Key
The syndrome s = e H ⊤ .
I. Setup ( H )
Input: Uniform random seed { 0 , 1 } λ .
1.  Generate u F q n and e ˜ FWV ( q , n , w ) from seed .
2.  For all v F q :
   i.  Generate randomness r v { 0 , 1 } λ from seed .
   ii.  Compute c v = Com ( r v , u + v e ˜ ) .
3.  Set aux = { c v v F q } .
II. Commitment ( P 1 )
Input: H , e and seed .
1.  Regenerate u F q n and e ˜ FWV ( q , n , w ) from seed .
2.  Determine isometry τ such that e = τ ( e ˜ ) .
3.  Generate randomness r { 0 , 1 } λ .
4.  Compute c = Com ( r , τ , τ ( u ) H ) .
III. Challenge ( V 1 )
Input: -
1.  Sample uniform random z F q .
2.  Set ch = z .
IV. Response ( P 2 )
Input: ch and seed .
1.  Regenerate r z from seed .
2.  Compute y = u + z e ˜ .
3.  Set rsp = ( r , r z , τ , y ) .
V. Verification ( V 2 )
Input: H , s , aux , c and rsp .
1.  Compute t = τ ( y ) H ⊤ − z s .
2.  Check that Com ( r , τ , t ) = c and that τ is an isometry.
3.  Check that Com ( r z , y ) = c z .
4.  Output 1 (accept) if both checks are successful, or 0 (reject) otherwise.
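To make the round structure concrete, the following self-contained Python sketch runs one honest execution of Algorithm 1 with toy parameters (q = 11, n = 12, w = 3). It is purely illustrative: the parameter sizes, the hash-based commitment, the PRNG and the serialization are simplifying assumptions of ours and are not secure choices.

```python
import hashlib, random

q, n, k, w, LAMBDA = 11, 12, 6, 3, 128

def com(r, *payload):                      # hash-based commitment (illustrative)
    return hashlib.shake_256(r + repr(payload).encode()).digest(2 * LAMBDA // 8)

def rand_bytes(rng, nbytes=LAMBDA // 8):
    return bytes(rng.getrandbits(8) for _ in range(nbytes))

def fixed_weight_vector(rng):              # element of FWV(q, n, w)
    e = [0] * n
    for i in rng.sample(range(n), w):
        e[i] = rng.randrange(1, q)
    return e

def syndrome(H, x):                        # x * H^T over F_q
    return [sum(x[j] * row[j] for j in range(n)) % q for row in H]

def apply_tau(tau, x):                     # monomial transformation: permutation + scalings
    perm, scale = tau
    y = [0] * n
    for i in range(n):
        y[perm[i]] = (scale[i] * x[i]) % q
    return y

def expand(seed):                          # deterministic expansion of the helper seed
    rng = random.Random(seed)
    u = [rng.randrange(q) for _ in range(n)]
    etilde = fixed_weight_vector(rng)
    rs = [rand_bytes(rng) for _ in range(q)]
    return u, etilde, rs

# Key generation.
rng = random.Random(0xC0DE)
H = [[rng.randrange(q) for _ in range(n)] for _ in range(n - k)]
e = fixed_weight_vector(rng)               # private key
s = syndrome(H, e)                         # public key

# I. Setup (helper): commitments to u + v*etilde for every v in F_q.
seed = 42
u, etilde, rs = expand(seed)
aux = [com(rs[v], [(u[j] + v * etilde[j]) % q for j in range(n)]) for v in range(q)]

# II. Commitment (prover): find an isometry tau with e = tau(etilde), i.e. map the
# support of etilde onto the support of e and adjust the scaling factors.
supp_e = [i for i in range(n) if e[i]]
supp_t = [i for i in range(n) if etilde[i]]
perm, scale = [None] * n, [1] * n
for i, j in zip(supp_t, supp_e):
    perm[i], scale[i] = j, (e[j] * pow(etilde[i], -1, q)) % q
rest = iter(j for j in range(n) if j not in supp_e)
for i in range(n):
    if perm[i] is None:
        perm[i] = next(rest)
tau = (perm, scale)
assert apply_tau(tau, etilde) == e
r = rand_bytes(random.Random(7))
c = com(r, tau, syndrome(H, apply_tau(tau, u)))      # Com(r, tau, tau(u) H^T)

# III/IV. Challenge and response.
z = random.Random(99).randrange(q)
y = [(u[j] + z * etilde[j]) % q for j in range(n)]
rsp = (r, rs[z], tau, y)

# V. Verification.
r, r_z, tau, y = rsp
perm, scale = tau
t = [(a - z * b) % q for a, b in zip(syndrome(H, apply_tau(tau, y)), s)]
ok = (com(r, tau, t) == c                                # first check
      and sorted(perm) == list(range(n)) and all(scale)  # tau is an isometry
      and com(r_z, y) == aux[z])                         # second check, against aux
print("round accepted:", ok)
```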

3.2. Removing the Helper

We now explain how the protocol above can be converted into a standard Zero-Knowledge protocol (without the artificial helper). The main idea is to use a “cut-and-choose” technique, as suggested by Katz et al. [23]. The simplest way to accomplish this is the following. The prover can simulate the work of the helper by performing all the duties of the precomputation phase; namely, the prover is able to generate a seed, run the Setup process to obtain aux and generate the protocol commitment c . The verifier can then hold the prover accountable by asking to do this several times, and then querying on a single random instance, which gets executed. The other instances are still checked, by simply using the respective seeds, which are transmitted along with the protocol response of the one selected instance. To be more precise, we give a schematic description of the new protocol in Algorithm 2, below.
Algorithm 2: Generic Transformation to Transform Sigma Protocol with Helper into a Zero-Knowledge Proof
Public Data, Private Key, Public Key
Same as in Algorithm 1.
I. Commitment (Prover)
Input: Public data and private key.
1.  For all i [ 0 ; N 1 ] :
   i.  Sample uniform random seed ( i ) { 0 , 1 } λ .
   ii.  Compute aux ( i ) = H ( seed ( i ) ) .
   iii.  Compute c ( i ) = P 1 ( H , e , seed ( i ) ) .
2.  Send aux ( 0 ) , , aux ( N 1 ) and c ( 0 ) , , c ( N 1 ) to verifier.
II. Challenge (Verifier)
Input: -
1.  Sample uniform random index I [ 0 ; N 1 ] .
2.  Sample uniform random challenge z F q .
3.  Set ch = { I , z } .
III. Response (Prover)
Input: ch and seed ( I ) .
1.  Compute rsp ( I ) = P 2 ( ch , seed ( I ) ) .
2.  Send rsp ( I ) and { seed ( i ) } i ≠ I to verifier.
IV. Verification (Verifier)
Input: H , s , aux ( 0 ) , , aux ( N 1 ) , c ( 0 ) , , c ( N 1 ) , rsp ( I ) and { seed ( i ) } i I .
1.  For all i ∈ [ 0 ; N − 1 ] with i ≠ I :
   i.  Compute aux ¯ ( i ) = H ( seed ( i ) ) .
   ii.  Check that this is equal to aux ( i ) .
2.  Set b = 1 if all checks are successful, and b = 0 otherwise.
3.  Compute b ′ = V 2 ( H , s , aux ( I ) , c ( I ) , rsp ( I ) ) .
4.  Output b ∧ b ′ .
It is possible to see that this new protocol yields a zero-knowledge proof of knowledge. More specifically, we have the following result.
Theorem 1.
Let P be a Sigma protocol with helper with challenge space C , and let I be the identification protocol described in Algorithm 2. Then I is an honest-verifier zero-knowledge proof of knowledge with challenge space [ 0 ; N − 1 ] × C and soundness error
\[ \varepsilon = \max \left\{ \frac{1}{N}, \frac{1}{|\mathcal{C}|} \right\}. \]
A fully general proof was given in [27] (Theorem 3). In our case the challenge space is F q , a challenge is a random value z ∈ F q , and therefore we have | C | = q . The protocol can then be iterated, as customary, to obtain the desired soundness error of 2 −λ ; namely, the protocol is repeated t times, with t = ⌈ λ / log 2 ( 1 / ε ) ⌉ . Katz et al. [23] also show how it is possible to improve on plain parallel repetition, by using a more sophisticated approach. In order to have a clearer description of the costs, we postpone the discussion of this approach until the end of the next section.
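As a quick numerical aid (our own helper, using ε = max{1/N, 1/q} as above), the number of repetitions for a target of λ bits can be computed as follows.

```python
import math

def repetitions(lam, N, q):
    """Smallest t such that eps^t <= 2^(-lam), with eps = max(1/N, 1/q)."""
    eps = max(1.0 / N, 1.0 / q)
    return math.ceil(lam / math.log2(1.0 / eps))

# Example: for lambda = 128, N = 8 and q = 256 the soundness error per round
# is 1/N = 1/8, so t = ceil(128 / 3) = 43 repetitions are required.
print(repetitions(128, 8, 256))
```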

3.3. Obtaining a Signature Scheme

The standard way to obtain a signature scheme from a ZKID is the Fiat–Shamir transformation. In fact, this allows one to securely convert an interactive scheme (identification) into a non-interactive one (signature). To be precise, the following theorem was proved in [28], stating the security of a generalized version of the Fiat–Shamir transformation.
Theorem 2.
Let I be a canonical identification protocol that is secure against impersonation under passive attacks. Let S = FS ( I ) be the signature scheme obtained applying the Fiat–Shamir transformation to I . Then, S is secure against chosen-message attacks in the random oracle model.
The main idea is to replace the interaction with the verifier in the challenge step with an automated procedure. Namely, the prover can generate the challenge by computing it as a hash value of the message and commitment. The signature will consist of the commitment and the corresponding response. The verifier can then regenerate the challenge himself, and proceed with the same verification steps as in the identification protocol. The process is summarized in Algorithm 3, where we denote by ℓ the challenge length.
Algorithm 3: The Fiat–Shamir Transformation
Public Data, Private Key, Public Key
Same as in I , plus a collision-resistant hash function Hash FS : { 0 , 1 } * → { 0 , 1 } ℓ .
I. Signature (Signer)
Input: Public data, private key and message m .
1.  Generate commitment cmt as in I .
2.  Compute challenge ch = Hash FS ( m , cmt ) .
3.  Produce response rsp as in I .
4.  Output signature σ = ( cmt , rsp ) .
II. Verification (Verifier)
Input: Public data, public key, message m and signature σ .
1.  Parse σ as ( cmt , rsp ) .
2.  Compute challenge ch = Hash FS ( m , cmt ) .
3.  Perform verification as in I .
4.  Accept or reject accordingly.
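As a small illustration (the digest-to-challenge encoding below is an assumption of ours, not part of the paper's specification), the non-interactive challenge for the protocol of Algorithm 2 could be derived as follows.

```python
import hashlib

def fs_challenge(msg: bytes, cmt: bytes, N: int, q: int):
    """Derive the challenge (I, z) in [0; N-1] x F_q from the message and commitment."""
    digest = hashlib.shake_256(msg + cmt).digest(16)
    val = int.from_bytes(digest, "big")
    return val % N, (val // N) % q

# The signer computes (I, z) itself; the verifier re-derives the same pair
# from (msg, cmt) during verification, so no interaction is needed.
print(fs_challenge(b"message", b"commitment bytes", N=8, q=256))
```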
Note that the security result in Theorem 2 is intentionally vague, as the exact result depends on the specific security notions defined for identification and signature schemes. In our case, we rely on the fact that the underlying identification scheme provides honest-verifier zero-knowledge, with negligible soundness error, to achieve EUF-CMA security (see for example [29]). Moreover, note that, as per the theorem statement, security depends on the hash function being modeled as a random oracle. This could, in principle, raise doubts about whether such a scheme would remain secure against an adversary equipped with quantum capabilities. However, following some recent works [30,31], we are able to claim that applying Fiat–Shamir to our identification scheme is enough to produce a signature scheme whose EUF-CMA security is preserved in the Quantum Random Oracle Model (QROM). The author of [27] (Theorem 3) argues that such schemes satisfy the collapsing property, although this property is not explicitly defined in the paper. Thus, we present its definition below, following the generalized version of [30].
Definition 3.
Let R : X × Y → { 0 , 1 } be a relation with | X | and | Y | superpolynomial in the security parameter λ, and define the following two games for any polynomial-time two-stage adversary A = ( A 1 , A 2 ) ,
\[ \text{Game 1}: \; (S, X, Y) \leftarrow \mathcal{A}_1, \; r \leftarrow R(X, Y), \; X \leftarrow M(X), \; Y \leftarrow M(Y), \; b \leftarrow \mathcal{A}_2(S, X, Y) \]
\[ \text{Game 2}: \; (S, X, Y) \leftarrow \mathcal{A}_1, \; r \leftarrow R(X, Y), \; Y \leftarrow M(Y), \; b \leftarrow \mathcal{A}_2(S, X, Y) \]
where X and Y are quantum registers of dimension | X | and | Y | , respectively, M denotes a measurement in the computational basis, and applying R to quantum registers is achieved by computing the relation coherently and measuring it. We say that R is collapsing from X to Y, if an adversary cannot distinguish the two experiments when the relation holds, i.e., if for all adversaries A
\[ \left| \Pr_{\mathcal{A}, \text{Game 1}}[\, r = b = 1 \,] - \Pr_{\mathcal{A}, \text{Game 2}}[\, r = b = 1 \,] \right| \le \mathrm{negl}(\lambda). \]
The above property allows to show that a Sigma protocol has quantum computationally unique responses, which is necessary to achieve existential unforgeability in the QROM. We then have the following result.
Theorem 3.
Let Com and PRNG be modeled as quantum random oracles. Then, the signature scheme obtained by applying the Fiat–Shamir transformation to the scheme in Algorithm 2 is EUF-CMA secure in the QROM.
Proof. 
We follow the steps of [27] (Theorem 4), and note that these are standard (for instance, they are similar to those given for the proof of the Picnic signature scheme, see Section 6.1 of [30]). First, consider the setup algorithm, which consists of expanding a randomness seed using PRNG , generating values accordingly, and then committing to them using Com . Note that, since Com is modeled as a quantum random oracle, then it is collapsing, as shown in [32]. As for PRNG , this is injective with overwhelming probability (as the output is much longer than the input), and so is the computation of the values derived from it. Since the composition of collapsing functions is also collapsing, as shown in [33], and composing a collapsing function with an injective one preserves collapsingness, we conclude that the setup algorithm is overall collapsing. Next, we examine protocol responses: these consist only of preimages of Com (the commitment openings) and preimages of the setup algorithm, and thus we are able to argue that the protocol has quantum computationally unique responses, as mentioned above. The thesis then follows by applying Theorems 22 and 25 from [30]. □

4. Communication Cost and Optimizations

Several optimizations are presented in [27], albeit in a rather informal way. Moreover, these are all combined and nested into each other, so that, in the end, it is quite hard to have a clear view of the full picture. Here, we strive to present the optimizations in full detail, one at a time, show how they can all be applied to our protocol, and illustrate their impact on the communication cost. To begin, we analyze the communication cost of a single iteration of the protocol, before any optimizations are applied. This consists of several components:
  • N copies of the auxiliary information aux ( i ) , each consisting of q hash values.
  • N copies of the commitment c ( i ) , each consisting of a single hash value.
  • The index I and the challenge z, respectively an integer and a field element.
  • The protocol response ( r , r z , τ , y ) , consisting of two λ -bit randomness strings, an isometry, and a length-n vector with elements in F q .
  • The N 1 seeds { seed ( i ) } i I , each a λ -bit string.
The bit-length of most of the objects listed above is self-explanatory. For linear isometries, recall from Section 2.1 that these are composed of a permutation combined with scaling factors. The former can be compactly represented as a list of entries using n log n bits, while the latter amount to n non-zero field elements (each corresponding to log q bits). This leads to the following formula for the communication cost (in bits):
\[ \underbrace{2\lambda q N}_{\{\mathsf{aux}^{(i)}\}} + \underbrace{2\lambda N}_{\{c^{(i)}\}} + \underbrace{\log N}_{I} + \underbrace{\log q}_{z} + \underbrace{2\lambda}_{r,\, r_z} + \underbrace{n \log n + n \log q}_{\tau} + \underbrace{n \log q}_{\mathbf{y}} + \underbrace{\lambda (N-1)}_{\{\mathsf{seed}^{(i)}\}_{i \neq I}}, \]
where all logarithms are assumed to be in base 2. Note that we have chosen to leave the above formula in its unsimplified version, in order to highlight all the various components. This will be helpful when analyzing the impact of the different optimizations.
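For convenience, the cost above can be tallied term by term with a small helper such as the one below (our own sketch; logarithms are rounded up to whole bits, which the formula leaves implicit).

```python
from math import ceil, log2

def round_cost_bits(lam, q, n, N):
    """Communication cost (in bits) of one unoptimized round, term by term."""
    return {
        "aux":         2 * lam * q * N,
        "commitments": 2 * lam * N,
        "index I":     ceil(log2(N)),
        "challenge z": ceil(log2(q)),
        "r, r_z":      2 * lam,
        "tau":         ceil(n * log2(n)) + ceil(n * log2(q)),
        "y":           ceil(n * log2(q)),
        "seeds":       lam * (N - 1),
    }

cost = round_cost_bits(lam=128, q=128, n=220, N=8)
print(sum(cost.values()) / 8 / 1024, "kB per round before optimizations")
```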

4.1. Protocol Commitments

The first optimization that we discuss regards the commitments c ( i ) ; we choose to present this first, because it involves the first verifier check, and also because it can serve as a blueprint for other optimizations. Now, note that the prover transmits N copies of c ( i ) , but only one of them is actually employed in the verification (the one corresponding to instance I). It follows that the transmission cost can be reduced, by employing a Merkle tree T of depth d = log N , whose leaves are associated to c ( 0 ) , , c ( N 1 ) .
To be more precise, we define a function MerkleTree , which uses a collision-resistant hash function Hash tree : { 0 , 1 } * { 0 , 1 } 2 λ , and, on input a list of elements ( a 0 , , a 2 d 1 ) , generates a Merkle tree in the following way. First, generate the leaves as
T d , l = Hash tree ( a l )
for 0 ≤ l ≤ 2^d − 1 . Then, the internal nodes are created, starting from the leaves and working upwards, as
T u , l = Hash tree ( T u + 1 , 2 l | | T u + 1 , 2 l + 1 )
for 0 ≤ u < d and 0 ≤ l ≤ 2^u − 1 . Only the root of the tree, root = T 0 , 0 , needs to be initially transmitted to the verifier; the prover will then include the authentication path of the tree in his response, after receiving the challenge. By authentication path, we mean the list of the hash values corresponding to the siblings of the nodes on the path from the leaf to the root. This can be used as input to a function ReconstructRoot , together with the corresponding leaf, to recalculate root , by using the supplied nodes inside (3). In our case, using a tree T c = MerkleTree ( c ( 0 ) , … , c ( N − 1 ) ) and transmitting only its root and a path, the component 2 λ N in Equation (1) is reduced to 2 λ ( 1 + log N ) .
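A minimal MerkleTree/ReconstructRoot pair in the spirit of the description above might look as follows (hash choice and padding conventions are our own assumptions; the number of leaves is taken to be a power of two, and no leaf/node domain separation is applied).

```python
import hashlib

def hash_node(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def merkle_tree(leaves):
    """Return all levels, from the leaf hashes up to the root."""
    level, levels = [hash_node(x) for x in leaves], []
    levels.append(level)
    while len(level) > 1:
        level = [hash_node(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Siblings of the nodes on the path from leaf `index` to the root."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def reconstruct_root(leaf, index, path):
    node = hash_node(leaf)
    for sibling in path:
        node = hash_node(node + sibling) if index % 2 == 0 else hash_node(sibling + node)
        index //= 2
    return node

leaves = [bytes([i]) * 32 for i in range(8)]          # e.g. N = 8 commitments
levels = merkle_tree(leaves)
assert reconstruct_root(leaves[3], 3, auth_path(levels, 3)) == levels[-1][0]
```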

4.2. Auxiliary Information

We now illustrate how to deal with the cost of transmitting the N copies of the auxiliary information aux ( i ) . This can be greatly reduced, using two distinct optimizations.
First, notice that only one out of the q commitment values is employed in a single protocol execution, namely, in the second verifier check. This means that we can again use a Merkle tree, with a setup similar to the previous optimization; in this case, the leaves will be associated with the values { c v , v ∈ F q } . Thus, once more, only the root needs to be transmitted initially, and the authentication path can be included in the response phase. Accordingly, the component 2 λ q N in Equation (1) is reduced to 2 λ N ( 1 + log q ) .
Furthermore, we can look at the previous improvement in another light: only one of the N instances is actually executed, while the other ones are simply checked by recomputing the setups using the seeds. Thus, there is no need to transmit all N copies of aux ( i ) , even in the above simplified version consisting of root + path. Instead, the prover can compute a hash of the roots, send it to the verifier, and include in his response only the authentication path for instance I. In the verification phase, the root for instance I is computed via protocol execution, while the other roots are recomputed via the seeds { seed ( i ) } i I ; then, the verifier hashes the newly computed roots and checks the resulting value against the transmitted one. In the end, with this second technique, the component 2 λ N ( 1 + log q ) that we previously obtained is reduced to 2 λ ( 1 + log q ) .

4.3. Seeds

We are going to use a binary tree again, to reduce the communication cost associated with the N 1 seeds sent by the prover along with his response. However, this is not going to be a Merkle tree; instead, in this case, the tree is built starting from the root (i.e., the opposite of what one does to build Merkle trees). The prover begins by choosing a random seed to be the root of the tree. He then uses a pseudo-random generator PRNG : { 0 , 1 } λ { 0 , 1 } 2 λ to generate internal nodes.
To be precise, we define a function SeedTree that, on input seed , generates a full binary tree as follows. First, set T 0 , 0 = seed . Then, each node T u , l is expanded into the two nodes
( T u + 1 , 2 l | | T u + 1 , 2 l + 1 ) = PRNG ( T u , l )
for 0 ≤ u ≤ d − 1 and 0 ≤ l ≤ 2^u − 1 , where d = log N . In the end, the tree will produce N values as leaves, which are going to be exactly the N seeds to be used in the protocol. Now, one seed is used and the remaining N − 1 need to be made available to the verifier. However, seed ( I ) should not be revealed, so the prover cannot simply transmit the root of the tree. Instead, the prover transmits the list path seed of the d internal nodes that are “companions”, i.e., siblings to the parents (and grandparents, etc.) of seed ( I ) . An illustration is given in Figure 1. With this technique, the component λ ( N − 1 ) in Equation (1) is reduced to just λ log N . For more details about the “Seed Tree” primitive, we refer to [34], where an extensive treatment is given.
Figure 1. Example of binary tree for N = 8 . The chosen seed (in green) is used and not revealed. The prover transmits the red nodes and the verifier can generate the remaining seeds (but not the chosen one) by applying PRNG to T 2 , 0 and T 1 , 1 (and hence to T 2 , 2 and T 2 , 3 ). The nodes generated in this way are colored in gray. The leaves obtained are highlighted with the thick line.
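A sketch of the SeedTree construction and of the “companion node” selection (our own code, assuming N is a power of two and instantiating PRNG with SHAKE-256) is given below; revealing the listed nodes lets the verifier regenerate every leaf seed except the hidden one.

```python
import hashlib

LAMBDA = 128

def prng(node: bytes):
    """PRNG: {0,1}^lambda -> {0,1}^(2*lambda), split into the two child seeds."""
    out = hashlib.shake_256(node).digest(2 * LAMBDA // 8)
    return out[:LAMBDA // 8], out[LAMBDA // 8:]

def seed_tree(root_seed: bytes, depth: int):
    levels = [[root_seed]]
    for _ in range(depth):
        nxt = []
        for node in levels[-1]:
            nxt.extend(prng(node))
        levels.append(nxt)
    return levels                     # levels[depth] holds the N = 2^depth leaf seeds

def reveal_all_but(levels, hidden: int):
    """Companion nodes: siblings of the ancestors of the hidden leaf (cf. Figure 1)."""
    nodes, index = [], hidden
    for level in reversed(levels[1:]):
        nodes.append(level[index ^ 1])
        index //= 2
    return nodes

levels = seed_tree(b"\x00" * (LAMBDA // 8), depth=3)      # N = 8 leaves
path_seed = reveal_all_but(levels, hidden=5)
print(len(path_seed), "nodes revealed; log2(N) = 3")      # 3 nodes suffice for N = 8
```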

4.4. Executions

We are now ready to discuss the more sophisticated approach of Katz et al. [23], which will yield better performance compared to a simple parallel repetition of the protocol. The main idea is to modify the “cut-and-choose” technique as follows. Currently, the protocol precomputes N distinct instances and only executes one, then repeats this process t times; thus, one has to precompute a total of t N instances, out of which only t are executed. As mentioned above, this leads to a soundness error of ε t = 2 −λ , where ε = max { 1 / N , 1 / q } (in our setting, ε = 1 / N ). Instead, the same soundness error can be obtained by having a larger number of instances, say M, but executing a subset of them, rather than just one; this entirely avoids the need for repetition. In fact, as explained in [27], the soundness error, in this case, is bounded above by
\[ \max_{e \in [0;\, s]} \; \frac{\binom{M-e}{s-e}}{\binom{M}{s}} \, q^{-(s-e)}, \]
where s is the number of executed instances (which are indexed by a set S ⊆ { 0 , … , M − 1 } ) and e ≤ s is a parameter that indicates how many instances are incorrectly precomputed by an adversary. Note that, for these instances, the adversary would not be able to produce the values seed ( i ) and, therefore, the only chance to win is if the verifier chooses to execute exactly all such instances; this happens with probability equal to $\binom{M-e}{s-e} / \binom{M}{s}$. The remaining term in Equation (4) is given by the probability of answering correctly (which is 1 / q ) in each of the remaining s − e instances. For a formal proof of this fact, we refer the reader to [23].
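The bound above is straightforward to evaluate numerically; the helper below (our own) computes it and searches for the smallest number s of executed instances reaching a target soundness level, which should reproduce the choices reported in Table 1.

```python
from math import comb, log2

def soundness_error(M, s, q):
    """max over e in [0, s] of C(M-e, s-e) / C(M, s) * q^(-(s-e))."""
    return max(comb(M - e, s - e) / comb(M, s) * q ** -(s - e) for e in range(s + 1))

def min_executions(M, q, lam=128):
    s = 1
    while -log2(soundness_error(M, s, q)) < lam:
        s += 1
    return s

# For M = 512 and q = 128 this yields s = 23, as in the first row of Table 1.
print(min_executions(512, 128))
```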
In terms of communication cost, the total amount for the method using plain parallel repetition is given by t times the cost of one execution, refined with the previous optimizations. This is given (in bits) by
\[ t \left[ \underbrace{2\lambda (1 + \log q)}_{\{\mathsf{aux}^{(i)}\}} + \underbrace{2\lambda (1 + \log N)}_{\{c^{(i)}\}} + \underbrace{\lambda \log N}_{\{\mathsf{seed}^{(i)}\}_{i \neq I}} + C_{cr} \right], \]
where we write C c r = 2 λ + log N + n log n + ( 2 n + 1 ) log q for the cost of transmitting the challenge and response, which is fixed. In comparison, with the new method, the various tree roots need only be transmitted once. This leads to the following total cost (again, in bits)
\[ \underbrace{2\lambda (1 + s \log q)}_{\{\mathsf{aux}^{(i)}\}} + \underbrace{2\lambda (1 + s \log M)}_{\{c^{(i)}\}} + \underbrace{s \lambda \log M}_{\{\mathsf{seed}^{(i)}\}_{i \notin S}} + s\, C_{cr}, \]
with C c r = 2 λ + log M + n log n + ( 2 n + 1 ) log q . While the number of instances necessarily increases (i.e., M > N ), the number of executions remains about the same as for the case of parallel repetition (i.e., s t ). Since the former number appears only logarithmically in the cost, while the latter is linear, the above optimization allows us to greatly reduce the total number of setups needed (from t N down to M) without increasing communication cost.
It is worth commenting on the behavior of some of the logarithmic terms in Equation (6), which correspond to the different trees used in the various optimizations. First of all, since the executed instances are chosen among the same set, all of the commitments { c ( i ) } are taken from a single tree. In this case, the different authentication paths will present some common nodes, and fewer nodes need to be transmitted in practice. More specifically, it is enough to transmit at most s log ( M / s ) nodes, to be able to simultaneously check membership for all leaves. This leads to a noticeable cost reduction.
For the tree of the { seed ( i ) } , it is possible to follow a similar process, so that it is not necessary to send multiple paths, and, instead, the required M s seeds can all be obtained in batch using a variable number of nodes. This number depends on the position of the chosen leaves; an illustration is given in Figure 2.
Figure 2. Example of two binary trees for M = 8 , s = 3 . Color codes are the same as in Figure 1. For tree (a), it is necessary to transmit 4 nodes, whereas for tree (b), only 2 nodes need to be transmitted.
With simple calculations, one can find that, in the worst case, the number of nodes which must be transmitted is given by
\[ 2 \log(s) + s \left( \log(M) - \log(s) - 1 \right). \]
Conservatively, we will then estimate the cost as λ times this number, and substitute this in Equation (6), obtaining the final cost formula (again, in bits):
\[ \underbrace{2\lambda (1 + s \log q)}_{\{\mathsf{aux}^{(i)}\}} + \underbrace{2\lambda \left(1 + s \log (M/s)\right)}_{\{c^{(i)}\}} + \underbrace{\lambda \left( 2\log(s) + s(\log(M) - \log(s) - 1) \right)}_{\{\mathsf{seed}^{(i)}\}_{i \notin S}} + s\, C_{cr}. \]
To give a complete picture, that sums up all the optimizations we consider, we present a schematic description in Algorithm 4.

5. Practical Considerations

We now move on to a discussion about practical instantiations. We start by summarizing some important facts about SDP.
As a first consideration, note that, once one fixes the code parameters (i.e., the values of q, k and n), the difficulty of solving SDP depends heavily on the weight w of the searched solution. Generically, hard instances are those in which the ratio w / n is outside the range [ (q − 1)/q · (1 − k/n) ; (q − 1)/q + k/(nq) ] (for a detailed discussion about this topic, we refer the interested reader to [14] (Section 3)). Roughly speaking, the problem is hard when the weight of the solution is either high or low: in the first case SDP admits multiple solutions, while in the latter case we essentially expect to have a single solution. In this paper we will consider the low-weight regime: as is essentially folklore in coding theory, for random codes the hardest instances are obtained when w is close to the Gilbert–Varshamov (GV) distance, which is defined as
\[ d(q, n, k) = \max \left\{ d \in \mathbb{N} \;:\; \sum_{j=0}^{d-1} \binom{n}{j} (q-1)^{j} \le q^{n-k} \right\}. \]
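Following the definition above, the GV distance can be computed directly with exact integer arithmetic; the short helper below is our own, and for the parameter sets of Table 1 it should land close to the listed values of w.

```python
from math import comb

def gv_distance(q, n, k):
    """Largest d with sum_{j=0}^{d-1} C(n, j) * (q-1)^j <= q^(n-k)."""
    bound, total, d = q ** (n - k), 0, 0
    while d < n and total + comb(n, d) * (q - 1) ** d <= bound:
        total += comb(n, d) * (q - 1) ** d
        d += 1
    return d

print(gv_distance(128, 220, 101), gv_distance(1024, 187, 90))
```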
Algorithm 4: The Optimized Zero-Knowledge Proof
Public Data
Parameters q , n , k , w ∈ N , a full-rank matrix H ∈ F q ( n − k ) × n , a commitment function Com : { 0 , 1 } λ × { 0 , 1 } * → { 0 , 1 } 2 λ and a collision-resistant hash function Hash root : ( { 0 , 1 } 2 λ ) M → { 0 , 1 } 2 λ .
Private Key
A vector e FWV ( q , n , w ) .
Public Key
The syndrome s = e H ⊤ .
I. Commitment (Prover)
Input: H and e .
1.  Sample uniform random seed { 0 , 1 } λ .
2.  Compute seed ( 0 ) , , seed ( M 1 ) = SeedTree ( seed ) .
3.  For all i [ 0 ; M 1 ] :
   i.  Compute aux ( i ) = H ( seed ( i ) ) .
   ii.  Build tree T aux ( i ) = MerkleTree ( aux ( i ) ) and call root aux ( i ) its root.
4.  Compute h = Hash root ( root aux ( 0 ) , , root aux ( M 1 ) ) .
5.  For all i [ 0 ; M 1 ] :
   i.  Compute c ( i ) = P 1 ( H , e , seed ( i ) ) .
6.  Build tree T c = MerkleTree ( c ( 0 ) , , c ( M 1 ) ) and call root c its root.
7.  Send h and root c to verifier.
II. Challenge (Verifier)
Input: -
1.  Sample uniform random S [ 0 ; M 1 ] with | S | = s .
2.  For all j S :
   i.  Sample uniform random z ( j ) F q .
3.  Set ch = { S , { z ( j ) } j S } .
III. Response (Prover)
Input: ch and { seed ( j ) } j S .
1.  For all j S :
   i.  Compute rsp ( j ) = P 2 ( z ( j ) , seed ( j ) ) .
2.  Send { rsp ( j ) } j S , { path aux ( j ) } j S , { path c ( j ) } j S and path seed to verifier.
IV. Verification (Verifier)
Input: H , s , h , root c , { rsp ( j ) } j S , { path aux ( j ) } j S , { path c ( j ) } j S and path seed .
1.  For all j S :
   i.  Compute t ( j ) = τ ( j ) ( y ( j ) ) H ⊤ − z ( j ) s .
   ii.  Compute c ( j ) = Com ( r ( j ) , τ ( j ) , t ( j ) ) .
   iii.  Compute root ¯ c = ReconstructRoot ( path c ( j ) , c ( j ) ) .
   iv.  Check that this is equal to root c .
2.  Set b = 1 if all checks are successful, and b = 0 otherwise.
3.  For all j S :
   i.  Compute c z ( j ) ( j ) = Com ( r z ( j ) , y ( j ) ) .
   ii.  Compute root ¯ aux ( j ) = ReconstructRoot ( path aux ( j ) , c z ( j ) ( j ) ) .
4.  For all j S :
   i.  Recover seed ¯ ( j ) from path seed .
   ii.  Compute aux ¯ ( j ) = H ( seed ¯ ( j ) ) .
   iii.  Build tree T aux ¯ ( j ) = MerkleTree ( aux ¯ ( j ) ) and call root ¯ aux ( j ) its root.
5.  Compute h ¯ = Hash ( root ¯ aux ( 0 ) , , root ¯ aux ( M 1 ) )
6.  Set b ′ = 1 if h ¯ = h and b ′ = 0 otherwise.
7.  Output b ∧ b ′ .
In such a regime, the best SDP solvers are known as ISD algorithms, as we mentioned in Section 2.1. Introduced by Prange in 1962 [35], ISD techniques have attracted significant interest over the years, which has ultimately led to strong confidence in their performance. In particular, for the case of non-binary codes, the state of the art is represented by Peters’ algorithm [36], proposed in 2010 and still unbeaten in practice. In our scheme we will always set w = d ( q , n , k ) and, to guarantee a security of λ bits, will choose parameters q, n and k so that the complexity resulting from Peters’ ISD [36] is at least 2^λ.
Taking the above reasoning into account, we have devised several parameters sets, for different values of M. The resulting parameters are reported in Table 1, below.
We observe that our scheme offers an interesting trade-off between the signature size and the computational efficiency: increasing M leads to a significant reduction in the signature size, but this comes at the cost of an increase in the number of operations for signing and verification. Indeed, we expect the computational overhead to be dominated by the computation of hash functions, whose number grows proportionally to M · q (these are required to obtain the M values of { aux ( i ) } ). Optimization of the scheme can leverage a choice of efficient primitives for such short-input hashes. In addition, we note that these hashes operate on independent inputs, so software and hardware implementations can benefit from parallelization.
Remark 1.
We note that, in order to decrease the algorithmic complexity of the scheme, one can reduce the size of the challenge space. Recall that, in the commitment phase, the prover uses all the values from F q to prepare the values aux ( i ) ; this means that, for each setup, the prover has to compute a Merkle tree having q leaves in the base layer. The same operations are essentially repeated by the verifier and, as we have already said, we expect this step to be the most time-consuming. Indeed, to select the challenges (and consequently, to prepare the aux ( i ) values), one can use a subset C ′ ⊆ F q of size q ′ < q . By doing so, the computation cost will decrease greatly. On the other hand, the soundness error will also change, since in (4) we need to replace q with q ′ , and thus the code parameters may have to change accordingly; however, this has very little impact on the communication cost, which will essentially remain unchanged (actually, it may become slightly smaller if representing the challenges requires fewer bits).
Remark 2.
We would also like to point out that, while all the values of q we have chosen are powers of 2, this does not have to be the case. For the smallest parameter sets, for example, we estimate that a practical choice for the value of q would be either q = 2^8 or the prime q = 251 . In both cases, the field arithmetic in the chosen field F q can be implemented efficiently. For example, for the case q = 2^8 , note that Intel has recently introduced (as of the microarchitecture codenamed "Ice Lake") the Galois Field New Instructions (GF-NI). These instructions (namely VGF2P8AFFINEINVQB, VGF2P8AFFINEQB, VGF2P8MULB) allow software to compute multiplications and inversions in F 2^8 over the wide registers that are available with the AVX512 architecture.
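For instance, scalar multiplication in F 2^8 with the reduction polynomial x^8 + x^4 + x^3 + x + 1 used by these instructions can be sketched in a few lines of Python; a vectorized implementation would map this loop onto VGF2P8MULB, processing many field elements per instruction.

```python
def gf256_mul(a: int, b: int) -> int:
    """Schoolbook multiplication in F_{2^8}, reduced modulo x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B          # reduce: x^8 = x^4 + x^3 + x + 1
    return result

# Sanity check with a well-known inverse pair from the AES specification: {53} * {CA} = {01}.
assert gf256_mul(0x53, 0xCA) == 0x01
```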
To explain the potential of our scheme, we present next a comparison with the current scenario of code-based signature schemes. Note that most of the considered schemes make use of a public matrix which, however, does not depend on the private key. As suggested, for instance, in [19], this matrix can be generated from a seed and, consequently, its size can be excluded from the calculations relative to the public key (it is instead included in the column “Public data”). We have taken this into account to compute the various numbers for the schemes that we consider in Table 2. Note that the original papers of Durandal [19] and LESS-FM [16] already do this, while we have recomputed the public key size for the following schemes: Stern [8], Veron [10], CVE [12] and cRVDC [37]. For a comprehensive list of these algorithms’ parameters, we refer the interested reader to Table 2 in the preprint version of [37]. The public data expression for Stern, Veron, CVE and cRVDC is given by, respectively, k ( n − k ) log 2 ( q ) , k ( n − k ) log 2 ( q ) , k ( n − k ) and k m log 2 ( q ) . Finally, in the comparison we have also included Wave [14], which is based on the hash-and-sign framework and does not make use of any additional public matrix.
The results in Table 2 capture the current status of code-based signatures, highlighting the advantages and disadvantages of each type of approach. For the schemes that work with multiple repetitions of a single-round identification protocol (namely, Stern, Veron, CVE and cRVDC), we have extremely compact public keys, at the cost of rather large signatures. In this light, our scheme presents a clear improvement, as the signature size is smaller in all cases. On the other hand, the most compact signatures are obtained with Wave which, unfortunately, needs very large public keys. Durandal, which follows a modified version of the Schnorr–Lyubashevsky approach [38], yields very good performance overall; however, like cRVDC, the scheme is based on the rank metric, which relies on relatively recent security assumptions, which are sometimes ad hoc and whose security is not yet completely understood [21,22,39,40]. Still, our work compares well with such schemes, for example when considering the sum of public key and signature sizes, a measure which is relevant in several situations where key/signature pairs are transmitted, as in TLS. Finally, we have included numbers for the LESS-FM scheme, which is constructed via a very different method, exploiting a group action associated to the notion of code equivalence. Thanks to this, the scheme is able to leverage various techniques aimed at manipulating public key and signature size; this leads to a trade-off between the two, with an overall high degree of flexibility (as is evident from the three parameter sets). In all cases, our scheme presents a clear advantage, especially when considering the size of the public key.
Remark 3.
In a recent preprint [41], Feneuil, Joux and Rivain introduce a scheme built using techniques very similar to ours, leveraging the trusted setup in conjunction with an ad hoc affine transformation. The scheme is very interesting and offers attractive performance; however, given that it was posted after the submission of this manuscript, and in order to avoid altering the manuscript from the form in which it was accepted, we limit ourselves to citing it here, and commit to including a detailed comparison in the full version of this work (available online at [42]).

Author Contributions

Conceptualization: S.G., E.P. and P.S.; writing: E.P. and P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly supported by: National Science Foundation (grant number 1906360); NSF-BSF (grant number 2018640); The Israel Science Foundation (grant number 3380/19); The Center for Cyber Law and Policy at the University of Haifa, in conjunction with the Israel National Cyber Bureau in the Prime Minister’s Office.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shor, P. Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer. SIAM J. Comput. 1997, 26, 1484–1509. [Google Scholar] [CrossRef] [Green Version]
  2. McEliece, R. A Public-Key Cryptosystem Based On Algebraic Coding Theory. Deep Space Netw. Prog. Rep. 1978, 44, 114–116. [Google Scholar]
  3. Albrecht, M.R.; Bernstein, D.J.; Chou, T.; Cid, C.; Gilcher, J.; Lange, T.; Maram, V.; von Maurich, I.; Misoczki, R.; Niederhagen, R.; et al. Classic McEliece: Conservative Code-Based Cryptography. In NIST Post-Quantum Standardization, 3rd Round; 2021; Available online: https://www.hyperelliptic.org/tanja/vortraege/mceliece-round-3.pdf (accessed on 9 December 2021).
  4. 2017. NIST Call for Standardization. Available online: https://csrc.nist.gov/Projects/Post-Quantum-Cryptography (accessed on 9 December 2021).
  5. Melchor, C.A.; Aragon, N.; Bettaieb, S.; Bidoux, L.; Blazy, O.; Bos, J.; Deneuville, J.C.; Arnaud Dion, I.S.; Gaborit, P.; Lacan, J.; et al. HQC: Hamming Quasi-Cyclic. In NIST Post-Quantum Standardization, 3rd Round; 2021; Available online: https://pqc-hqc.org/doc/hqc-specification_2021-06-06.pdf (accessed on 9 December 2021).
  6. Aragon, N.; Barreto, P.S.L.M.; Bettaieb, S.; Bidoux, L.; Blazy, O.; Deneuville, J.C.; Gaborit, P.; Gueron, S.; Güneysu, T.; Melchor, C.A.; et al. BIKE: Bit Flipping Key Encapsulation. In NIST Post-Quantum Standardization, 3rd Round; 2021; Available online: https://bikesuite.org/files/v4.2/BIKE_Spec.2021.07.26.1.pdf (accessed on 9 December 2021).
  7. 2021. NIST Status Update. Available online: https://csrc.nist.gov/Presentations/2021/status-update-on-the-3rd-round (accessed on 9 December 2021).
  8. Stern, J. A new identification scheme based on syndrome decoding. In Advances in Cryptology—CRYPTO’ 93; Stinson, D.R., Ed.; Springer: Berlin/Heidelberg, Germany, 1994; pp. 13–21. [Google Scholar]
  9. Fiat, A.; Shamir, A. How to prove yourself: Practical solutions to identification and signature problems. In CRYPTO; Springer: Berlin/Heidelberg, Germany, 1986; pp. 186–194. [Google Scholar]
  10. Véron, P. Improved identification schemes based on error-correcting codes. Appl. Algebra Eng. Commun. Comput. 1997, 8, 57–69. [Google Scholar] [CrossRef] [Green Version]
  11. Gaborit, P.; Girault, M. Lightweight code-based identification and signature. In Proceedings of the 2007 IEEE International Symposium on Information Theory, Nice, France, 24–29 June 2007; pp. 191–195. [Google Scholar]
  12. Cayrel, P.L.; Véron, P.; El Yousfi Alaoui, S.M. A zero-knowledge identification scheme based on the q-ary syndrome decoding problem. In Selected Areas in Cryptography; Springer: Berlin/Heidelberg, Germany, 2011; pp. 171–186. [Google Scholar]
  13. Courtois, N.T.; Finiasz, M.; Sendrier, N. How to Achieve a McEliece-Based Digital Signature Scheme. Lect. Notes Comput. Sci. 2001, 2248, 157–174. [Google Scholar]
  14. Debris-Alazard, T.; Sendrier, N.; Tillich, J.P. Wave: A new family of trapdoor one-way preimage sampleable functions based on codes. In ASIACRYPT; Springer: Berlin/Heidelberg, Germany, 2019; pp. 21–51. [Google Scholar]
  15. Biasse, J.F.; Micheli, G.; Persichetti, E.; Santini, P. LESS is More: Code-Based Signatures Without Syndromes. In AFRICACRYPT; Nitaj, A., Youssef, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2020; pp. 45–65. [Google Scholar]
  16. Barenghi, A.; Biasse, J.F.; Persichetti, E.; Santini, P. LESS-FM: Fine-tuning Signatures from a Code-based Cryptographic Group Action. PQCrypto 2021, 2021, 23–43. [Google Scholar]
  17. Beullens, W. Not Enough LESS: An Improved Algorithm for Solving Code Equivalence Problems over F q . In Proceedings of the International Conference on Selected Areas in Cryptography, Halifax, NS, Canada, 21–23 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 387–403. [Google Scholar]
  18. Gaborit, P.; Ruatta, O.; Schrek, J.; Zémor, G. RankSign: An efficient signature algorithm based on the rank metric. In International Workshop on Post-Quantum Cryptography; Springer: Berlin/Heidelberg, Germany, 2014; pp. 88–107. [Google Scholar]
  19. Aragon, N.; Blazy, O.; Gaborit, P.; Hauteville, A.; Zémor, G. Durandal: A Rank Metric Based Signature Scheme. In Advances in Cryptology–EUROCRYPT 2019; Ishai, Y., Rijmen, V., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 728–758. [Google Scholar]
  20. Baldi, M.; Battaglioni, M.; Chiaraluce, F.; Horlemann-Trautmann, A.L.; Persichetti, E.; Santini, P.; Weger, V. A new path to code-based signatures via identification schemes with restricted errors. arXiv 2020, arXiv:2008.06403. [Google Scholar]
  21. Debris-Alazard, T.; Tillich, J.P. Two attacks on rank metric code-based schemes: RankSign and an IBE scheme. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Brisbane, Australia, 2–6 December 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 62–92. [Google Scholar]
  22. Bardet, M.; Briaud, P. An algebraic approach to the Rank Support Learning problem. arXiv 2021, arXiv:2103.03558. [Google Scholar]
  23. Katz, J.; Kolesnikov, V.; Wang, X. Improved non-interactive zero knowledge with applications to post-quantum signatures. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018; pp. 525–537. [Google Scholar]
  24. Beullens, W. Sigma Protocols for MQ, PKP and SIS, and Fishy Signature Schemes. Eurocrypt 2020, 12107, 183–211. [Google Scholar]
  25. Berlekamp, E.; McEliece, R.; van Tilborg, H. On the inherent intractability of certain coding problems (Corresp.). IEEE Trans. Inf. Theory 1978, 24, 384–386. [Google Scholar] [CrossRef]
  26. Barg, S. Some new NP-complete coding problems. Probl. Peredachi Informatsii 1994, 30, 23–28. [Google Scholar]
  27. Beullens, W. Sigma protocols for MQ, PKP and SIS, and fishy signature schemes. In Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, Zagreb, Croatia, 10–14 May 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 183–211. [Google Scholar]
  28. Abdalla, M.; An, J.H.; Bellare, M.; Namprempre, C. From identification to signatures via the Fiat-Shamir transform: Minimizing assumptions for security and forward-security. In EUROCRYPT; Springer: Berlin/Heidelberg, Germany, 2002; pp. 418–433. [Google Scholar]
  29. Kiltz, E.; Lyubashevsky, V.; Schaffner, C. A concrete treatment of Fiat-Shamir signatures in the quantum random-oracle model. In Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, Tel Aviv, Israel, 29 April–3 May 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 552–586. [Google Scholar]
  30. Don, J.; Fehr, S.; Majenz, C.; Schaffner, C. Security of the Fiat-Shamir Transformation in the Quantum Random-Oracle Model. In CRYPTO; Springer: Cham, Switzerland, 2019; pp. 356–383. [Google Scholar]
  31. Liu, Q.; Zhandry, M. Revisiting Post-quantum Fiat-Shamir. In Advances in Cryptology-CRYPTO 2019; Springer: Cham, Switzerland, 2019; pp. 326–355. [Google Scholar]
  32. Unruh, D. Computationally binding quantum commitments. In Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, Vienna, Austria, 8–12 May 2016; pp. 497–527. [Google Scholar]
  33. Fehr, S. Classical proofs for the quantum collapsing property of classical hash functions. In Proceedings of the Theory of Cryptography Conference, Panaji, India, 11–14 November 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 315–338. [Google Scholar]
  34. Beullens, W.; Katsumata, S.; Pintore, F. Calamari and Falafl: Logarithmic (linkable) ring signatures from isogenies and lattices. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Daejeon-gu, Korea, 7–11 December 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 464–492. [Google Scholar]
  35. Prange, E. The use of information sets in decoding cyclic codes. IRE Trans. Inf. Theory 1962, 8, 5–9. [Google Scholar] [CrossRef]
  36. Peters, C. Information-set decoding for linear codes over F q . In International Workshop on Post-Quantum Cryptography; Springer: Berlin/Heidelberg, Germany, 2010; pp. 81–94. [Google Scholar]
  37. Bellini, E.; Caullery, F.; Gaborit, P.; Manzano, M.; Mateu, V. Improved Veron Identification and Signature Schemes in the Rank Metric. In Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 1872–1876. [Google Scholar]
  38. Lyubashevsky, V. Fiat-Shamir with Aborts: Applications to Lattice and Factoring-Based Signatures. In ASIACRYPT; Springer: Berlin/Heidelberg, Germany, 2009; pp. 598–616. [Google Scholar]
  39. Bardet, M.; Briaud, P.; Bros, M.; Gaborit, P.; Neiger, V.; Ruatta, O.; Tillich, J.P. An algebraic attack on rank metric code-based cryptosystems. In Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, Zagreb, Croatia, 10–14 May 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 64–93. [Google Scholar]
  40. Bardet, M.; Bros, M.; Cabarcas, D.; Gaborit, P.; Perlner, R.; Smith-Tone, D.; Tillich, J.P.; Verbel, J. Improvements of Algebraic Attacks for solving the Rank Decoding and MinRank problems. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Daejeon-gu, Korea, 7–11 December 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 507–536. [Google Scholar]
  41. Feneuil, T.; Joux, A.; Rivain, M. Shared Permutation for Syndrome Decoding: New Zero-Knowledge Protocol and Code-Based Signature. Cryptology ePrint Archive: Report 2021/1576. Available online: https://eprint.iacr.org/2021/1576 (accessed on 9 December 2021).
  42. Gueron, S.; Persichetti, E.; Santini, P. Designing a Practical Code-Based Signature Scheme from Zero-Knowledge Proofs with Trusted Setup. Cryptology ePrint Archive: Report 2021/1020. Available online: https://eprint.iacr.org/2021/1020 (accessed on 9 December 2021).
Table 1. Parameters for the proposed instances, for different values of M.

M     s   q     n    k    w   Pk Size (B)   Signature Size (kB)
512   23  128   220  101  90  104.2         24.6
1024  19  256   207  93   90  114           22.2
2048  16  512   196  92   84  117           20.2
4096  14  1024  187  90   80  121.3         19.5
Table 2. A comparison of public keys and signature sizes with other code-based signature schemes. All sizes are in Kilobytes (kB).

Scheme        Security Level  Public Data  Public Key  Sig.     PK + Sig.  Security Assumption
Stern         80              18.43        0.048       113.57   113.62     Low-weight Hamming
Veron         80              18.43        0.096       109.06   109.16     Low-weight Hamming
CVE           80              5.18         0.072       66.44    66.54      Low-weight Hamming
Wave          128             -            3205        1.04     3206.04    High-weight Hamming
cRVDC         125             0.050        0.15        22.48    22.63      Low-weight Rank
Durandal-I    128             307.31       15.24       4.06     19.3       Low-weight Rank
Durandal-II   128             419.78       18.60       5.01     23.61      Low-weight Rank
LESS-FM-I     128             9.78         9.78        15.2     24.97      Linear Equivalence
LESS-FM-II    128             13.71        205.74      5.25     210.99     Permutation Equivalence
LESS-FM-III   128             11.57        11.57       10.39    21.96      Permutation Equivalence
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

