Appendix A. Full Security Proof
This appendix presents the detailed security proof for the two-party signature scheme.
Definition A1 (no-abort honest-verifier zero-knowledge). An identification scheme is said to be -naHVZK if there exists a probabilistic expected polynomial-time algorithm  that is given only the public key  and that outputs  such that the following holds:
- The distribution of the simulated transcript produced by  () has a statistical distance at most  from the real transcript produced by the transcript algorithm . 
- The distribution of c in the output , conditioned on , is uniformly random over the set C. 
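To make the simulator in Definition A1 concrete, the following sketch implements a naHVZK simulator for a toy Schnorr identification scheme (the parameters and helper names are our own illustration, not the scheme analysed here): the simulator draws the challenge and response first and solves the verification equation for the commitment, using only the public key.

```python
import random

# Toy Schnorr identification over a prime-order subgroup.
# Parameters are illustrative only, far below secure sizes.
P = 23          # prime modulus, p = 2q + 1
Q = 11          # subgroup order
G = 4           # generator of the order-11 subgroup mod 23

def keygen():
    x = random.randrange(Q)          # secret key
    y = pow(G, x, P)                 # public key
    return x, y

def real_transcript(x, y):
    k = random.randrange(Q)
    w = pow(G, k, P)                 # commitment
    c = random.randrange(Q)          # honest-verifier challenge
    z = (k + c * x) % Q              # response
    return w, c, z

def simulate(y):
    # naHVZK simulator: uses only the public key y.
    c = random.randrange(Q)
    z = random.randrange(Q)
    # Solve the verification equation for the commitment:
    # w = g^z * y^(-c) mod p
    w = (pow(G, z, P) * pow(y, (-c) % Q, P)) % P
    return w, c, z

def verify(y, w, c, z):
    return pow(G, z, P) == (w * pow(y, c, P)) % P
```

Both branches produce verifying transcripts, and for this toy scheme the simulated and real distributions coincide exactly, i.e., the simulation is perfect.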
Theorem A1. Assume that a homomorphic hash function  is collision-resistant and ϵ-regular; then, for any probabilistic polynomial-time adversary  that makes a single query to the key generation oracle,  queries to the signing oracle, and  queries to the random oracles , the distributed signature protocol is DS-UF-CMA secure in the random oracle model under the Module-LWE, rejected Module-LWE, and Module-SIS assumptions.
Proof. Given an adversary  that breaks the distributed signature protocol with advantage , we construct a simulator .  simulates the behaviour of the single honest party without using honestly generated secret keys in the computation. The algorithm  is constructed so that it satisfies all the assumptions of the forking lemma; in particular, the forking lemma requires that  take a public key and the random-oracle query replies as input.  simulates the behaviour of the honest party , while the party  is corrupted by the adversary. The algorithm  is defined in Algorithm A1.
 | Algorithm A1(). | 
| 1: Create empty hash tables  for . | 
| 2: Create a set of queried messages . | 
| 3: Simulate the honest party oracle as follows: Upon receiving a query from  of the form , reply as described in  (Oracle A1) and  (Oracle A2). If one of the oracles terminates with an output of the form , then  also terminates with the same output . | 
| 4: Simulate random oracles as follows: | 
| 5: Upon receiving a forgery  on message  from : If , then  terminates with output . Compute . Make query . If  or  or , then  terminates with output . Find index  such that .  terminates with the output  | 
  Appendix A.1. Random Oracle Simulation
There are several random oracles that need to be simulated:
- C is the set of all vectors in  with exactly  nonzero elements. 
All of the random oracles are simulated as described in Algorithm A2. Additionally, the searchHash() algorithm for looking up entries in the hash table is defined in Algorithm A3.
| Algorithm A2. | 
| is a hash table that is initially empty. | 
| 1: On a query x, return element  if it was previously defined. | 
| 2: Otherwise, sample output y uniformly at random from the range of  and return y. | 
| Algorithm A3. searchHash() | 
| 1: For value h, find its preimage m in the hash table such that . | 
| 2: If no preimage of value h exists, set flag  and set preimage . | 
| 3: If for value h more than one preimage exists in hash table , set flag . | 
| 4: Output: | 
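A minimal sketch of the lazy random-oracle simulation of Algorithm A2 together with the preimage lookup of Algorithm A3 (class and flag names are hypothetical):

```python
import os

class SimulatedOracle:
    """Lazily simulated random oracle with a hash table (Algorithm A2 style)."""

    def __init__(self, out_len=32):
        self.table = {}          # hash table: query -> response
        self.out_len = out_len

    def query(self, x):
        # Return the previously defined value; otherwise sample a fresh
        # uniformly random output and record it in the table.
        if x not in self.table:
            self.table[x] = os.urandom(self.out_len)
        return self.table[x]

    def search_hash(self, h):
        """searchHash (Algorithm A3 style): find preimages of h."""
        preimages = [m for m, v in self.table.items() if v == h]
        if not preimages:
            return None, "bad"          # no recorded preimage exists
        if len(preimages) > 1:
            return preimages[0], "coll"  # collision in the table
        return preimages[0], None
```

The "bad" flag corresponds to an output that the adversary predicted without querying the oracle; the "coll" flag corresponds to a collision among recorded responses.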
Simulators for the key generation and signing processes are constructed using several intermediate games. The goal is to remove the usage of the actual secret key share of the party  from both processes. Let Pr[] denote the probability that  does not output (0,⊥) in game ; this means that the adversary has created a valid forgery (as defined in Algorithm A1). Then, Pr[] . In Game 0,  simulates the honest party's behaviour using the same instructions as in the original KeyGen  and Sign  protocols.
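The intermediate games are combined by the standard hybrid (triangle-inequality) argument; writing E_i for the event that  does not output (0,⊥) in Game i (the symbol names are ours), the hops below telescope as:

```latex
\left| \Pr[E_0] - \Pr[E_6] \right|
  \;\le\; \sum_{i=0}^{5} \left| \Pr[E_i] - \Pr[E_{i+1}] \right| .
```

Each of the following game transitions bounds one summand on the right-hand side.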
  Appendix A.2. Game 1
In Game 1, only the signing process is changed with respect to the previous game. The simulator for the signing process in Game 1 is described in Algorithm A4. The challenge c is now sampled uniformly at random, and the signature shares are computed without communicating with the adversary. Changes with respect to the previous game are highlighted.
| Algorithm A4. | 
| 1: . | 
| 2: . | 
| 3: . | 
| 4:  and . | 
| 5: , send out . | 
| 6: Upon receiving , search for . | 
| 7: If the flag  is set, then the simulation fails with output . If the flag  is set, then send out . | 
| 8: . | 
| 9: Program random oracle  to respond to queries  with c. Set . If  has already been set, set flag  and the simulation fails with output . | 
| 10: Send out . Upon receiving : if : send out ABORT. If the flag  is set and : set the flag  and the simulation fails with output . | 
| 11: Otherwise, run rejection sampling; if it does not pass: send out RESTART and go to step 1. | 
| 12: Otherwise, send out . Upon receiving RESTART, go to step 1. | 
| 13: Upon receiving , reconstruct  and check that , if not: send out ABORT. | 
| 14: Otherwise, set ,  and output composed signature . | 
Game 0 → Game 1:
The difference between Game 0 and Game 1 can be expressed using the  events that can happen with the following probabilities:
- Pr[] is the probability that at least one collision occurs during at most  queries to the random oracle  made by adversary or simulator. This means that two values  were found such that . As all of the responses of  are chosen uniformly at random from  and there are at most  queries to the random oracle , the probability of at least one collision occurring can be expressed as , where  is the length of  output. 
- Pr[] is the probability that programming random oracle  fails at least once during  queries. This event can happen in two cases:  was previously queried by the adversary, or it was not: 
- Case 1:  has already been asked by the adversary during at most  queries to . This means that the adversary knows  and may have queried  before; this event corresponds to guessing the value of . Let the uniform distribution over  be denoted X and the distribution of the  output be denoted Y. As  is -regular (for some negligibly small ), it holds that SD . Then, for any subset T of , by the definition of statistical distance, it holds that . Therefore, for a uniform distribution X, the probability of guessing Y via T is bounded by . Since  was produced by  at the beginning of the signing protocol, completely independently of , the probability that  queried  is at most  for each query. 
- Case 2:  was set by the adversary or the simulator by chance during at most  prior queries to . Since  has not queried  before, the adversary does not know , and the view of  is completely independent of . The probability that  occurred by chance in one of the previous queries to  is at most . 
- Pr[] is the probability that the adversary predicted at least one of two outputs of the random oracle  without making a query to it. In this case, there is no record in the hash table  that corresponds to the preimage . This can happen with probability at most  for each signing query. 
Therefore, the difference between the two games is
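The collision term above is the standard birthday bound q²/2^(l+1); a small helper (the names are ours) makes the estimate easy to evaluate for concrete parameters:

```python
def birthday_bound(queries: int, out_bits: int) -> float:
    """Upper bound on the probability that `queries` uniform samples from
    a range of size 2**out_bits contain a collision: q^2 / 2^(l+1)."""
    return queries * queries / 2 ** (out_bits + 1)
```

For example, 2**64 oracle queries against a 256-bit output give a bound of roughly 2**-129, i.e., negligible.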
  Appendix A.3. Game 2
In Game 2, when the signature share gets rejected, the simulator commits to a uniformly random vector  from the ring  instead of committing to a vector computed as . The simulator for the signing process in Game 2 is described in Algorithm A5.
Game 1 → Game 2:
The difference between Game 1 and Game 2 can be expressed with the probability that the adversary can distinguish the simulated commitment with random  from the real one when the rejection sampling algorithm does not pass. If the signature shares are rejected, it means that  or .
Let us assume that there exists an adversary  who succeeds in distinguishing the simulated commitment with random  from the real one with non-negligible probability:
Then, the adversary  can be used to construct an adversary  who solves the rejected Module-LWE for parameters , where U is the uniform distribution. The adversary  is defined in Algorithm A6.
| Algorithm A5. | 
| 1: . | 
| 2: . | 
| 3:  and . | 
| 4: Run rejection sampling; if it does not pass, proceed as follows: | 
| 1. . | 
| 2. , send out . | 
| 3. Upon receiving , search for . | 
| 4. If the flag  is set, then simulation fails with output . If the flag  is set, then send out . | 
| 5. . | 
| 6. Program random oracle  to respond to queries  with c. Set . If  has already been set, set flag  and the simulation fails with output . | 
| 7. Send out . Upon receiving : if : send out ABORT. If the flag  is set and : set the flag  and the simulation fails with output . | 
| 8. Otherwise, send out RESTART and go to step 1. | 
| 5: If rejection sampling passes, proceed as follows: | 
| 1. . | 
| 2. , send out . | 
| 3. Upon receiving , search for . | 
| 4. If the flag  is set, then simulation fails with output . If the flag  is set, then continue. | 
| 5. . | 
| 6. Program random oracle  to respond to queries  with c. Set . If  has already been set, set flag  and the simulation fails with output . | 
| 7. Send out . Upon receiving : if : send out ABORT. If the flag  is set and : set the flag  and the simulation fails with output . | 
| 8. Otherwise, send out . Upon receiving RESTART, go to step 1. | 
| 9. Upon receiving , reconstruct  and check that , if not: send out ABORT. | 
| 10. Otherwise, set ,  and output composed signature . | 
| Algorithm A6. | 
| 1: | 
| 2: | 
| 3: return | 
As a consequence, the difference between the two games is bounded by the following:
  Appendix A.4. Game 3
In Game 3, the simulator does not generate the signature shares honestly and thus does not perform rejection sampling honestly. Rejection sampling is simulated as follows:
- Rejection case: with probability , the simulator generates a commitment to a random  as in the previous game. 
- Otherwise, sample the signature shares from the set  and compute  from them. 
The simulator for the signing process in Game 3 is described in Algorithm A7.
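The case split in Algorithm A7 can be sketched as follows; `sample_share` and `sample_random_commitment` stand in for the naHVZK share sampling and the random commitment of Game 2 (both names are hypothetical):

```python
import random

def simulate_signing_round(M, sample_share, sample_random_commitment):
    """Game-3-style simulation of rejection sampling.

    With probability 1 - 1/M the round is 'rejected' and the simulator
    commits to a fresh random vector; otherwise it outputs a signature
    share sampled directly from the target distribution, without ever
    using the secret key. M is the expected number of repetitions.
    """
    if random.random() < 1.0 - 1.0 / M:
        return ("RESTART", sample_random_commitment())
    return ("OK", sample_share())
```

Since an honest signer restarts with the same probability, flipping this coin lets the simulator reproduce the restart pattern that the adversary observes.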
Game 2 → Game 3:
The signature shares generated in Algorithm A7 are indistinguishable from the real ones because of the -naHVZK property of the underlying identification scheme from [13], Appendix B. Therefore, the difference between Game 2 and Game 3 can be defined as follows:
According to the proof from [13],  for the underlying identification scheme.
| Algorithm A7. | 
| 1: With probability , proceed as follows: 1. . 2. . 3. , send out . 4. Upon receiving , search for . 5. If the flag  is set, then simulation fails with output . If the flag  is set, then send out . 6. . 7. Program random oracle  to respond to queries  with c. Set . If  has already been set, set flag  and the simulation fails with output . 8. Send out . Upon receiving : if : send out ABORT. If the flag  is set and : set the flag  and the simulation fails with output . 9. Otherwise, send out RESTART and go to step 1. | 
| 2: Otherwise, proceed as follows: 1. . 2.  and . 3. . 4. , send out . 5. Upon receiving , search for . 6. If the flag  is set, then simulation fails with output . If the flag  is set, then continue. 7. . 8. Program random oracle  to respond to queries  with c. Set . If  has already been set, set flag  and the simulation fails with output . 9. Send out . Upon receiving : if : send out ABORT. If the flag  is set and : set the flag  and the simulation fails with output . 10. Otherwise, send out . Upon receiving RESTART, go to step 1. 11. Upon receiving , reconstruct  and check that , if not: send out ABORT. 12. Otherwise, set ,  and output the composed signature . | 
  Appendix A.5. Game 4
Now, the signing process does not rely on the actual secret key share of the honest party . In the next games, the key generation process is changed so that it does not use the secret keys either. In this game, the simulator is given a predefined uniformly random matrix , and the simulator defines its own matrix share from it. By definition, the algorithm  (Algorithm A1) receives a pre-generated public key  as input. Therefore, the simulator in Game 4 is given a predefined matrix , and in the later games, the simulator is changed so that it receives the entire public key and uses it to compute its shares . The simulator for the key generation process in Game 4 is described in Algorithm A8.
| Algorithm A8. | 
| 1: Send out . | 
| 2: Upon receiving : | 
| search for . If the flag  is set, then simulation fails with output . If the flag  is set, then sample . Otherwise, define . | 
| 3: Program random oracle  to respond to queries  with . Set . If  has already been set, then set the flag  and the simulation fails with output . | 
| 4: Send out . Upon receiving : if : send out ABORT. If the flag  is set and : set the flag  and the simulation fails with output . | 
| 5: (, ) . | 
| 6: , send out . | 
| 7: Upon receiving , send out . | 
| 8: Upon receiving , check that . If not: send out ABORT. | 
| 9: Otherwise, ,  and . | 
Game 3 → Game 4:
The distribution of public matrix  does not change between Game 3 and Game 4. The difference between Game 3 and Game 4 can be expressed using  events that happen with the following probabilities:
- Pr[] is the probability that at least one collision occurs during at most  queries to the random oracle  made by adversary or simulator. This can happen with probability at most , where  is the length of  output. 
- Pr[] is the probability that programming random oracle  fails, which happens if  has been previously asked by the adversary during at most  queries to the random oracle . This event corresponds to guessing a random ; for each query, the probability of this event is bounded by . 
- Pr[] is the probability that adversary predicted at least one of two outputs of the random oracle  without making a query to it. This can happen with probability at most . 
Therefore, the difference between the two games is
  Appendix A.6. Game 5
In Game 5, the simulator picks public key share  randomly from the ring instead of computing it using secret keys. The simulator for the key generation process in Game 5 is described in Algorithm A9.
| Algorithm A9. | 
| 1: Send out . | 
| 2: Upon receiving : search for . If the flag  is set, then simulation fails with output . If the flag  is set, then sample . Otherwise, define . | 
| 3: Program random oracle  to respond to queries  with . Set . If  has already been set, then set the flag  and the simulation fails with output . | 
| 4: Send out . Upon receiving : if : send out ABORT. If the flag  is set and : set the flag  and the simulation fails with output . | 
| 5: , send out . | 
| 6: Upon receiving , send out . | 
| 7: Upon receiving , check that . If not: send out ABORT. | 
| 8: Otherwise, , . | 
Game 4 → Game 5:
In Game 5, the public key share  is sampled uniformly at random from  instead of being computed as , where  are random elements from . As matrix  follows the uniform distribution over , if the adversary can distinguish between Game 4 and Game 5, this adversary can be used as a distinguisher that breaks the decisional Module-LWE problem for parameters , where U is the uniform distribution.
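The Game 4 → Game 5 hop is exactly the decisional LWE experiment: public-key shares of the form A·s + e versus uniform ones. A toy (non-module, insecure-parameter) version for intuition, with all names and sizes our own:

```python
import random

Q = 3329            # toy modulus (illustrative only)
N, M = 8, 16        # dimensions far below secure sizes

def lwe_sample():
    """Return (A, b) with b = A*s + e mod Q for a small secret and error,
    i.e., the shape of an honestly computed public-key share."""
    A = [[random.randrange(Q) for _ in range(N)] for _ in range(M)]
    s = [random.randrange(-2, 3) for _ in range(N)]
    e = [random.randrange(-2, 3) for _ in range(M)]
    b = [(sum(A[i][j] * s[j] for j in range(N)) + e[i]) % Q for i in range(M)]
    return A, b

def uniform_sample():
    """Return (A, b) with b uniform: the simulated public-key share."""
    A = [[random.randrange(Q) for _ in range(N)] for _ in range(M)]
    b = [random.randrange(Q) for _ in range(M)]
    return A, b
```

An adversary telling Game 4 from Game 5 distinguishes `lwe_sample()` from `uniform_sample()`, which is precisely the decisional (Module-)LWE problem.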
Therefore, the difference between the two games is bounded by the advantage of the adversary in breaking decisional Module-LWE:
  Appendix A.7. Game 6
In Game 6, the simulator uses as input a random resulting public key  to compute its own share . The simulator for the key generation process in Game 6 is described in Algorithm A10.
Game 5 → Game 6:
The distributions of  do not change with respect to Game 5. The difference between Game 5 and Game 6 can be expressed using  events that happen with the following probabilities:
- Pr[] is the probability that at least one collision occurs during at most  queries to the random oracle  made by adversary or simulator. This can happen with probability at most , where  is the length of  output. 
- Pr[] is the probability that programming random oracle  fails, which happens if  was previously asked by the adversary during at most  queries to the random oracle . This event corresponds to guessing a uniformly random ; for each query, the probability of this event is bounded by . 
- Pr[] is the probability that adversary predicted at least one of two outputs of the random oracle  without making a query to it. This can happen with probability at most . 
Therefore, the difference between the two games is
| Algorithm A10. | 
| 1: Send out . | 
| 2: Upon receiving : search for . If the flag  is set, then simulation fails with output . If the flag  is set, then sample . Otherwise, define . | 
| 3: Program random oracle  to respond to queries  with . Set . If  has already been set, then set the flag  and the simulation fails with output . | 
| 4: Send out . Upon receiving : if : send out ABORT. If the flag  is set and : set the flag  and the simulation fails with output . | 
| 5: Send out . | 
| 6: Upon receiving , search for . | 
| 7: If the flag  is set, then simulation fails with output . | 
| 8: Compute public key share: If the flag  is set, . Otherwise, . | 
| 9: Program random oracle  to respond to queries  with . Set . If  has already been set, set flag  and the simulation fails with output . | 
| 10: Send out . Upon receiving : if : send out ABORT. If the flag  is set and : set the flag  and the simulation fails with output . | 
| 11: Otherwise, , . | 
  Appendix A.8. Forking Lemma
Now, neither key generation nor signing relies on the actual secret key share of the honest party . To conclude the proof, we need to invoke the forking lemma to obtain two valid forgeries that are constructed using the same commitment  but different challenges .
Currently, the combined public key consists of a matrix  uniformly distributed over  and a vector  uniformly distributed over . We want to replace it with a Module-SIS instance , where . The view of the adversary does not change if we set .
Let us define an input generation algorithm  that produces the following input:  for . Now, we construct  around the previously defined simulator ;  invokes the forking algorithm  on the input .
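The rewinding performed by  follows the generalized forking lemma: run the adversary once, then replay it with the same random tape and the same oracle replies up to the crucial query, resampling from there on. A sketch with a simplified callback signature of our own design:

```python
import random

def fork(adversary, q, sample_response, tape_bits=64):
    """Generic forking (Bellare--Neven style) around adversary A.

    `adversary(rho, responses)` must be deterministic given its random
    tape `rho` and the list of oracle responses; it returns (i, out),
    where i >= 1 is the index of the oracle query its output depends on,
    or (0, None) on failure. All names here are illustrative.
    """
    rho = random.getrandbits(tape_bits)          # fixed random tape
    h = [sample_response() for _ in range(q)]    # first run's oracle replies
    i, out1 = adversary(rho, h)
    if i == 0:
        return None
    # Rewind: keep replies before index i, resample from i onward.
    h2 = h[:i - 1] + [sample_response() for _ in range(q - i + 1)]
    j, out2 = adversary(rho, h2)
    if j != i or h2[i - 1] == h[i - 1]:
        return None                              # fork did not land
    return out1, out2                            # two related forgeries
```

A successful fork yields two forgeries on the same commitment but with distinct challenges, which is exactly what the case analysis below consumes.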
As a result, with probability , two valid forgeries  and  are obtained. Here, by the construction of , it holds that , . The probability  satisfies the following:
Since both signatures are valid, it holds that
Let us examine the following cases:
Case 1: , and  is able to break the collision resistance of the hash function (which is hard under the worst-case difficulty of finding short vectors in cyclic/ideal lattices), as was proven in [35,36].
Case 2: . It can be rearranged as , and this, in turn, leads to
Now, recall that  is an instance of the Module-SIS problem; this means that we have found a solution for Module-SIS with parameters , where .
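In generic notation (the symbols are ours, following the standard Fiat–Shamir-with-aborts argument), the two verification equations of Case 2 share the commitment w, which yields the Module-SIS solution:

```latex
\bar{A} z_1 - c_1 t \;=\; w \;=\; \bar{A} z_2 - c_2 t
\;\Longrightarrow\;
\begin{bmatrix} \bar{A} \mid t \end{bmatrix}
\begin{bmatrix} z_1 - z_2 \\ c_2 - c_1 \end{bmatrix} \;=\; 0 ,
```

where z_1 − z_2 is short and c_2 − c_1 ≠ 0 is sparse, so the stacked vector is a nonzero short solution for the instance.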
Therefore, the probability  is the following:
Finally, taking into account that the underlying identification scheme has perfect naHVZK (i.e., ), the advantage of the adversary is bounded by the following:
 ☐