Article

A Mathematical Perspective on Post-Quantum Cryptography

by Maximilian Richter 1,*, Magdalena Bertram 1, Jasper Seidensticker 1 and Alexander Tschache 2
1 Secure Systems Engineering, Fraunhofer AISEC, 14199 Berlin, Germany
2 Volkswagen AG, 38440 Wolfsburg, Germany
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(15), 2579; https://doi.org/10.3390/math10152579
Submission received: 27 June 2022 / Revised: 18 July 2022 / Accepted: 21 July 2022 / Published: 25 July 2022
(This article belongs to the Special Issue Mathematics Cryptography and Information Security 2021)

Abstract: In 2016, the National Institute of Standards and Technology (NIST) announced an open competition with the goal of finding and standardizing suitable algorithms for quantum-resistant cryptography. This study presents a detailed, mathematically oriented overview of the round-three finalists of NIST's post-quantum cryptography standardization consisting of the lattice-based key encapsulation mechanisms (KEMs) CRYSTALS-Kyber, NTRU and SABER; the code-based KEM Classic McEliece; the lattice-based signature schemes CRYSTALS-Dilithium and FALCON; and the multivariate-based signature scheme Rainbow. The above-cited algorithm descriptions are precise technical specifications intended for cryptographic experts. Nevertheless, the documents are not well-suited for a generally interested mathematical audience. Therefore, the main focus is put on the algorithms' corresponding algebraic foundations, in particular LWE problems, NTRU lattices, linear codes and multivariate equation systems, with the aim of fostering a broader understanding of the mathematical concepts behind post-quantum cryptography.

1. Introduction

In recent years, significant progress in researching and building quantum computers has been made. The existence of such computers threatens the security of many modern cryptographic systems. This affects, in particular, asymmetric cryptography, i.e., KEMs and digital signatures. By leveraging Shor’s quantum algorithm to find the period of a function in a large group, a quantum computer can solve a distinct set of mathematical problems. In particular, this includes integer factorization and the discrete logarithm, which are the basis for a wide range of cryptographic schemes. Therefore, a fully fledged quantum computer would be able to efficiently break the security of many modern cryptosystems. To defend against this threat, the need for novel mathematical problems which are resistant to Shor’s algorithm arises. Such problems are thereby promising candidates to withstand the superior computing possibilities of quantum computers.
In 2016, NIST announced an open competition with the goal of finding and standardizing suitable algorithms for quantum-resistant cryptography. The standardization effort by NIST is aimed at KEMs and digital signatures [1]. This process is currently in its third round of candidate selection (April 2022).
At this point, the submitted algorithms are complex technical specifications without a presentation of the underlying mathematical fundamentals and therefore do not allow easy access to these novel post-quantum algorithm approaches. As some of these algorithms will probably become widely used in industrial areas very soon, it is vital to foster a broad understanding of these mathematical concepts. In this document, we therefore address the described lack of educational presentation. As we do not intend to give a detailed comparison of the presented methods and their performance in practice, we refer to the post-quantum database PQDB [2]. This website is an internal project within Fraunhofer AISEC and aims to provide an up-to-date overview of implementation details and performance measurements of post-quantum secure cryptographic schemes according to available research.
In the following sections, the round-three finalists of NIST's competition are presented, and their mathematical details and properties are outlined. For quick access to any of these algorithms, we have structured the document into separate parts containing distinct mathematical concepts, so that each part can be read independently. These concepts correspond to the algorithms' respective algebraic foundations, which are LWE problems as well as NTRU lattices in Section 2, linear codes in Section 3 and multivariate equation systems in Section 4.

2. Lattice-Based Cryptography

2.1. Lattice Fundamentals

The cryptographic interest in lattices mainly arises from the fact that a given lattice L can have widely different bases. While a good basis can simplify some computational tasks significantly, a bad basis can make them almost impossible. In this section, we will give a short introduction to the fundamental mathematics and the two most important computational problems of lattices.

2.1.1. Lattices

Definition 1
(lattice, basis). Let $B = \{b_1, b_2, \dots, b_m\}$ be a set of linearly independent vectors of $\mathbb{R}^n$. Then, the set of all integer linear combinations
$$\mathcal{L}(B) = \left\{ \sum_i a_i b_i \;\middle|\; a_i \in \mathbb{Z} \right\} \subseteq \mathbb{R}^n$$
is called a lattice in $\mathbb{R}^n$ generated by B. We furthermore refer to $\{b_1, b_2, \dots, b_m\}$ as a basis of the lattice L.
An example of a lattice with corresponding basis is shown in Figure 1. We can equivalently generate L via a matrix B containing the basis vectors as column vectors.
Definition 2
(lattice, rank, dimension, full-rank lattice). Let { b 1 , b 2 , . . . , b m } be a set of linearly independent vectors of R n . Let B be the n × m matrix with column vectors b 1 , . . . , b m . Then:
$$\mathcal{L}(B) = \{ B\cdot x \mid x \in \mathbb{Z}^m \}$$
is called the lattice in $\mathbb{R}^n$ generated by B. We call m the rank and n the dimension of the lattice. For $m = n$, the lattice is called a full-rank lattice.
Observe that the basis underlying a lattice L is not unique. For example, the lattice generated by the vectors
$$\begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$
is Z 2 , the set of all integer points. Z 2 is also generated by the vectors
$$\begin{pmatrix} 2 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Figure 2 also illustrates this fact. On the other hand, n linearly independent vectors in Z n are not necessarily a basis of Z n . As an example, observe that the modified vectors from the example above
$$\begin{pmatrix} 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
do not form a basis of Z 2 .
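As a quick numerical illustration of these claims, the following Python sketch (purely illustrative, not part of any of the referenced schemes) uses the fact that an integer matrix generates all of $\mathbb{Z}^2$ exactly when its determinant is $\pm 1$:

```python
import numpy as np

# Columns are the candidate basis vectors from the examples above.
B1 = np.array([[0, 1],
               [1, 0]])          # (0,1) and (1,0)
B2 = np.array([[2, 1],
               [1, 1]])          # (2,1) and (1,1)
B3 = np.array([[2, 1],
               [0, 1]])          # (2,0) and (1,1)

# An integer matrix generates all of Z^2 exactly when |det| = 1 (it is unimodular);
# otherwise its columns only generate a proper sublattice.
for name, B in (("B1", B1), ("B2", B2), ("B3", B3)):
    d = round(np.linalg.det(B))
    print(name, "det =", d, "->", "basis of Z^2" if abs(d) == 1 else "proper sublattice")
```

B1 and B2 are unimodular and therefore generate the same lattice $\mathbb{Z}^2$, while B3 (determinant 2) only generates a sublattice of index 2.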

2.1.2. Computational Lattice Problems

The particular structure of lattices allows them to have special mathematical properties. The following computations can be efficiently evaluated using linear algebra algorithms:
  • Let g 1 , . . . , g k R n be a set of vectors generating the lattice L. Calculate a basis b 1 , . . . , b m R n of L.
  • Let L be a lattice. Evaluate whether a given vector c is an element of L.
Other computational lattice problems appear to be generally hard and are, as indicated in the introduction, even believed to be resistant against Shor's algorithm. Therefore, they are interesting candidates for usage in post-quantum cryptography. These problems are presented in the following.

Shortest Vector Problem

Let L be a lattice with some basis $B \in \mathbb{R}^{n\times m}$ and $\|\cdot\|$ some norm. Let $\lambda(L)$ be the length of the shortest nonzero vector in L. The task of finding $l \in L$ such that $\|l\| = \lambda(L)$, i.e., finding any shortest vector of L, is called the shortest vector problem (SVP). Figure 3 illustrates such a shortest vector in a lattice.

Closest Vector Problem

Let L be a lattice with some basis $B \in \mathbb{R}^{n\times m}$ and $\|\cdot\|$ some norm. Given $q \in \mathbb{R}^n$, the task of finding $l \in L$ such that $\|l - q\|$ is minimal, i.e., finding the lattice vector l closest to a given arbitrary vector, is called the closest vector problem (CVP). Figure 4 illustrates a random point with its corresponding closest lattice vector.

2.2. Cryptography Based on Learning with Errors (LWE)

2.2.1. LWE Fundamentals

Learning with Errors

Let Z q = Z / q Z be the ring of integers modulo q. We can naturally form a linear equation system
$$A \cdot s = b,$$
where $A \in \mathbb{Z}_q^{n\times m}$, $s \in \mathbb{Z}_q^m$, $b \in \mathbb{Z}_q^n$. For example, consider the following system:
$$A = \begin{pmatrix} 10 & 3 & 5 & 1 \\ 4 & 1 & 1 & 2 \\ 3 & 1 & 1 & 5 \end{pmatrix}, \qquad b = \begin{pmatrix} 10 \\ 3 \\ 8 \end{pmatrix}$$
Then, the associated equations look like:
$$\begin{aligned} 10\cdot s_1 + 3\cdot s_2 + 5\cdot s_3 + 1\cdot s_4 &= 10 \\ 4\cdot s_1 + 1\cdot s_2 + 1\cdot s_3 + 2\cdot s_4 &= 3 \\ 3\cdot s_1 + 1\cdot s_2 + 1\cdot s_3 + 5\cdot s_4 &= 8 \end{aligned}$$
Solving this equation system can be efficiently realized using Gaussian elimination. However, adding even only small error values $e \in \mathbb{Z}_q^n$ to the equation system yields:
A · s + e = b ,
which renders solving the equation system and retrieving the solution vector s surprisingly hard. This fact is founded in the relation to the hard lattice problems described above, which is presented in a nutshell below.
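To make this contrast concrete, the following minimal sketch solves a small system over $\mathbb{Z}_{97}$ by Gaussian elimination, once without and once with small errors. It is an illustration only; the matrix extends the example above by one additional, arbitrarily chosen row so that it is square and invertible modulo q.

```python
import numpy as np

q = 97

def solve_mod(A, b, q):
    """Solve A x = b over Z_q (q prime) by Gaussian elimination; A is assumed invertible mod q."""
    M = np.concatenate([A % q, (b % q).reshape(-1, 1)], axis=1)
    n = A.shape[0]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r, col] % q != 0)   # pivot search
        M[[col, piv]] = M[[piv, col]]
        M[col] = (M[col] * pow(int(M[col, col]), -1, q)) % q         # normalise pivot to 1
        for r in range(n):
            if r != col:
                M[r] = (M[r] - M[r, col] * M[col]) % q               # eliminate the column
    return M[:, -1]

A = np.array([[10, 3, 5, 1],
              [4, 1, 1, 2],
              [3, 1, 1, 5],
              [7, 2, 9, 4]])            # the example system from above, padded to a square matrix
s = np.array([5, 23, 41, 2])            # the (secret) solution vector
e = np.array([1, -1, 2, 0])             # small errors

print(solve_mod(A, (A @ s) % q, q))     # exact system: Gaussian elimination recovers s
print(solve_mod(A, (A @ s + e) % q, q)) # noisy system: the result looks unrelated to s
```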

Decisional LWE

The LWE problem can also be rephrased as a decision problem, usually abbreviated dLWE. Given an LWE sample ( A , b ) as defined above (s and e are kept secret), the task is to guess whether the values of b have been calculated as A · s + e with small error values e, or whether they have been chosen arbitrarily. Both variants are equivalently hard. The reduction from LWE to dLWE has been proven by Regev ([3], Lemma 4.2), the inverse reduction from dLWE to LWE is trivial.

Linking LWE to Computational Lattice Problems

Consider an LWE problem of the form:
$$A \cdot s + e = b,$$
where $A \in \mathbb{Z}_q^{n\times m}$, $b \in \mathbb{Z}_q^n$ and small vectors $s \in \mathbb{Z}_q^m$, $e \in \mathbb{Z}_q^n$. It is straightforward to solve a concrete LWE instance by solving the closest vector problem. Observe that the closest vector to b is almost always the lattice vector $A\cdot s$ with distance $\|e\|$.
To give an intuition of the relationship between learning with errors and the shortest vector problem, consider the following lattice:
$$L = \{ x \in \mathbb{Z}^{m+n+1} \mid (A \mid I_n \mid -b)\cdot x = 0 \bmod q \},$$
where the '$\mid$' operator denotes concatenation and $I_n$ denotes the $n\times n$ identity matrix. It can be observed that the column vector $(s, e, 1)$ is an element of L by verifying that
$$\left(A \mid I_n \mid -b\right)\cdot\begin{pmatrix} s \\ e \\ 1 \end{pmatrix} = A\cdot s + e - b = b - b = 0 \bmod q$$
holds. It can be shown that the vector ( s , e , 1 ) is actually a shortest vector in L and therefore is an SVP solution for L. This means retrieving the vector ( s , e , 1 ) directly yields the secret s as well as the error vector e and therefore solves the LWE system.
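This embedding can be checked directly: the short sketch below builds the matrix $(A \mid I_n \mid -b)$ for a small toy LWE instance (parameters chosen arbitrarily for illustration) and verifies that $(s, e, 1)$ lies in its kernel modulo q.

```python
import numpy as np

q, n, m = 97, 3, 5                          # toy sizes
rng = np.random.default_rng(1)

A = rng.integers(0, q, size=(n, m))
s = rng.integers(0, q, size=m)
e = rng.integers(-2, 3, size=n)             # small errors
b = (A @ s + e) % q

# Build the block matrix (A | I_n | -b) from the lattice definition above.
M = np.concatenate([A, np.eye(n, dtype=int), -b.reshape(-1, 1)], axis=1)

# The column vector (s, e, 1) is an element of the lattice: M (s, e, 1)^T = 0 mod q.
x = np.concatenate([s, e, [1]])
print((M @ x) % q)                          # -> [0 0 0]
```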

LWE-Based Encryption Schemes

This section aims to serve as a high-level introduction to LWE-based encryption schemes, so that their basic idea can be easily understood. The following simplified example will only be used to transmit a message consisting of a single bit, but it can be trivially extended to transmit a bitstring of any desired length.
Consider an LWE instance $A\cdot s + e = b$, where $A \in \mathbb{Z}_q^{n\times m}$ is chosen uniformly at random and $s \in \mathbb{Z}_q^m$ and $e \in \mathbb{Z}_q^n$ are chosen from an error distribution, i.e., their values are rather small. Let us assume the values A and b are public while the corresponding values s and e are kept secret. The LWE problem then states that it is hard to calculate s or e.
To build the actual encryption scheme, we will randomly sample the additional values $r \in \mathbb{Z}_q^n$ as well as errors $e_1 \in \mathbb{Z}_q^m$ and $e_2 \in \mathbb{Z}_q$. With that, we construct the equation system:
$$\begin{aligned} u &= A^T\cdot r + e_1 \in \mathbb{Z}_q^m \\ v &= b^T\cdot r + e_2 \in \mathbb{Z}_q, \end{aligned}$$
which can be equivalently represented as:
$$\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} A^T \\ b^T \end{pmatrix} r + \begin{pmatrix} e_1 \\ e_2 \end{pmatrix}$$
in a compact form.
It is then easy to see that this is also another instance of the LWE problem. With knowledge of ( A , b , u , v ) , it is hard to calculate any of the other values. Furthermore, the decisional LWE problem states that it is even hard to differentiate between the values u , v calculated in the method described above and u , v with some arbitrary value v . This is a core part of our encryption system.
For now, let us assume we would just send $(u, v)$ back to the recipient, who (knowing s) could then calculate the value $s^T\cdot u = s^T\cdot(A^T\cdot r + e_1)$. Taking into account that the error values are relatively small, we observe that $s^T\cdot u \approx s^T\cdot A^T\cdot r$ and also that $v = b^T\cdot r + e_2 \approx b^T\cdot r \approx (A\cdot s)^T\cdot r = s^T\cdot A^T\cdot r$. Thus, neglecting the error values, we find that $s^T\cdot u \approx v$.
This means we have found a way to indirectly transmit about the same value in two separate ways, and we have done so unnoticed by a third person: without knowledge of s, it cannot be deduced how close exactly these values are to each other (dLWE assumption).
The trick is to hide the message in one of these values. When the message is 0, we will just transmit $v' = v$. However, in case it is 1, we will transmit $v' = v + q/2$ (remember that we are operating on $\mathbb{Z}_q$, so this is the value "opposite" to 0). The receiver can then calculate $\mu = v' - s^T\cdot u$. If $\mu$ is close to zero (mod q), the message was 0; if it is closer to $q/2$, the message was 1.
Let us summarize the process more formally. Let $\mathrm{round}_n(\cdot)$ denote rounding to the nearest multiple of n. For a one-bit message encoded as $\mu \in \{0, q/2\}$, the ciphertext is $(u, v)$ with
$$\begin{aligned} u &= A^T\cdot r + e_1 \\ v &= b^T\cdot r + e_2 + \mu, \end{aligned}$$
from which the receiver can calculate:
$$\begin{aligned} \mathrm{round}_{q/2}(v - s^T\cdot u) &= \mathrm{round}_{q/2}\big(b^T r + e_2 + \mu - s^T(A^T r + e_1)\big) \\ &= \mathrm{round}_{q/2}\big((As + e)^T r + e_2 + \mu - s^T A^T r - s^T e_1\big) \\ &= \mathrm{round}_{q/2}\big((As)^T r + e^T r + e_2 + \mu - (As)^T r - s^T e_1\big) \\ &= \mathrm{round}_{q/2}\big(\mu + e^T r + e_2 - s^T e_1\big) \\ &= \mu. \end{aligned}$$
For the last equality to hold (and thus, the decryption to succeed), we need the overall effect of the error term $(e^T r + e_2 - s^T e_1)$ to stay below $q/4$ in absolute value. In practice, all candidate schemes use an error distribution and a modulus q where this is not always the case in order to have reasonable ciphertext sizes. The failure probability in all cases is extremely small, so it is usually negligible in practice. However, care must be taken that attackers cannot learn anything about the secret key by intentionally crafting ciphertexts that cause decryption failures.
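The following toy implementation of the single-bit scheme just described may help to see the mechanics. The parameters are made up and far too small for any security; with the noise range used here, the error term always stays well below $q/4$, so decryption never fails.

```python
import numpy as np

q, n, m = 3329, 8, 16                   # toy parameters, not secure
rng = np.random.default_rng(42)

def small(shape):                       # stand-in "error distribution": values in {-2, ..., 2}
    return rng.integers(-2, 3, size=shape)

# Key generation: public key (A, b) with b = A s + e, secret key s.
A = rng.integers(0, q, size=(n, m))
s, e = small(m), small(n)
b = (A @ s + e) % q

def encrypt(bit):
    r, e1, e2 = small(n), small(m), int(small(1)[0])
    u = (A.T @ r + e1) % q
    v = (b @ r + e2 + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    mu = (v - s @ u) % q
    return int(min(mu, q - mu) > q // 4)    # close to 0 -> bit 0, close to q/2 -> bit 1

for bit in (0, 1):
    u, v = encrypt(bit)
    print(bit, "->", decrypt(u, v))
```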

Flavors of LWE: Ring-LWE and Module-LWE

The sample cryptosystem described above can be trivially extended to encapsulate bitstrings of a fixed length by running the same protocol several times in parallel. In contrast to the flavors described below, this approach is called Plain LWE (note that even though $\mathbb{Z}_q$ is a ring, the term Ring-LWE refers to another approach, see below). A production-ready scheme that uses Plain LWE is Frodo [4]. Because of its simplicity, it is considered to have the least potential for attacks. However, this is paid for by communication costs about 15 times higher than with Ring-LWE or Module-LWE. The comparison of Frodo's public key and ciphertext size to the respective sizes of Kyber and Saber shows this fact. Because of the relatively bad performance, it is not among the NIST standardization finalists (but included as an alternate candidate) and is thus not included in this report. Other variants of LWE can be created by exchanging the underlying algebraic structure. Various flavors have been researched, and we will detail the relevant ones in the following.
Ring-LWE was first proposed by Vadim Lyubashevsky, Chris Peikert and Oded Regev in 2010 [5]. Calculations take place in a polynomial ring R q : = Z q [ x ] / f ( x ) for some polynomial f ( x ) . Therefore, polynomial multiplication is used instead of matrix multiplication.
Module-LWE is a variant that further improves Ring-LWE and was proposed by Adeline Langlois and Damien Stehlé in 2012 [6]. It uses the exact same structure as the sample system detailed above, but the scalars are replaced by ring elements of R q , as defined in the previous paragraph. Consequently, vectors become elements of so-called modules, which are a generalization of vector spaces over rings, hence the name (see Table 1 for a comparison).
Most early practical implementations of LWE-based cryptography, such as the NewHope scheme [7], use Ring-LWE. However, it was shown that Ring-LWE possibly provides more attack surface, so that a Ring-LWE scheme is at most as secure as an equally parameterized Module-LWE scheme [8]. For that reason, NIST has decided not to consider Ring-LWE schemes in the third round.

Learning with Rounding

The learning with rounding (LWR) problem is a variant of the LWE problem. Consider a single line of the LWE problem $As + e = b$, where $A \in \mathbb{Z}_q^{n\times m}$ is chosen uniformly and $s \in \mathbb{Z}_q^m$ and $e \in \mathbb{Z}_q^n$ from a small error distribution, i.e.,
$$(As)_k + e_k = (a_{k1}\cdot s_1 + \dots + a_{km}\cdot s_m) + e_k = b_k.$$
Instead of sampling and adding a random small error $e_k$, noise is added to the equation by simple rounding. In this case, that means defining a rounding function $\lfloor\cdot\rceil_p : \mathbb{Z}_q \to \mathbb{Z}_p$ for some $p < q$, dividing $\mathbb{Z}_q$ into p roughly same-sized intervals and mapping an element in $\mathbb{Z}_q$ to the index of its corresponding interval. For example, when p and q are both powers of 2, rounding simplifies to mapping an element to its $\log_2(p)$ most significant bits.
This rounding function can be extended to vectors in $\mathbb{Z}_q^n$ by component-wise rounding, i.e., rounding each $(As)_k$ separately. Counter-intuitively, although the noise in LWR is deterministically computed, it is computationally as difficult as solving LWE, i.e., deriving s from A and $\lfloor A\cdot s\rceil_p$ is hard [9]. Just as in the LWE case, variants of LWR can be created by exchanging the underlying structure. For example, the scheme Saber uses Module-LWR.
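For powers of two, the rounding function reduces to keeping the top bits, as the following small sketch illustrates (toy parameters, not taken from any specification):

```python
def round_q_to_p(x, q, p):
    """Map x in Z_q to Z_p (p, q powers of two): keep the log2(p) most significant bits."""
    return x >> (q.bit_length() - p.bit_length())   # same as (x * p) // q here

q, p = 1024, 8
for x in (0, 127, 128, 511, 1023):
    print(x, "->", round_q_to_p(x, q, p))           # 0, 0, 1, 3, 7
```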

2.2.2. Kyber

Kyber [10] is a CCA-secure KEM derived from a CPA-secure public-key encryption (PKE) scheme based on Module-LWE. For $n, q \in \mathbb{N}$, the underlying ring is $R_q = \mathbb{Z}_q[X]/(X^n + 1)$, i.e., the ring of polynomials up to degree $n - 1$ with coefficients in $\mathbb{Z}_q$. The corresponding module is $R_q^k$ with rank $k \in \mathbb{N}$.
The following primitives are required: a noise space B, where sampling a value from B yields a random small integer value in the range $\{-4, \dots, 4\}$. Additionally, for the KEM construction, secure hash functions $H_1$, $H_2$ and a secure key derivation function $KDF$ are required.
Internally, the plaintext encrypted by Kyber is a ring element r R q . Therefore, the input bitstring m { 0 , 1 } 256 is converted to a ring element r = t o R i n g ( m ) , i.e., a polynomial, as follows:
$$(0, 0, 1, \dots, 0, 1) \;\overset{\mathrm{toRing}}{\longmapsto}\; \left(0, 0, \tfrac{q}{2}, \dots, 0, \tfrac{q}{2}\right) \;\widehat{=}\; 0 + 0\cdot x + \tfrac{q}{2}\cdot x^2 + \dots + 0\cdot x^{n-2} + \tfrac{q}{2}\cdot x^{n-1}$$
It can already be observed that even after having added a vector with small coefficients, the original polynomial can easily be reconstructed. The reverse operation $\mathrm{fromRing}$ reconstructs a bitstring from a given ring element through coefficient-wise division by $\frac{q}{2}$ and subsequent rounding. The Kyber specification introduces encoding and compression functions, which we have simplified to the $\mathrm{toRing}$ and $\mathrm{fromRing}$ functions to increase readability and understanding.
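A possible coefficient-wise interpretation of these simplified functions (a sketch under the simplifications made above, not the encoding/compression routines of the actual specification) is:

```python
import numpy as np

q, n = 3329, 256

def to_ring(bits):
    """bit 1 -> coefficient close to q/2, bit 0 -> coefficient 0."""
    return (np.asarray(bits, dtype=int) * ((q + 1) // 2)) % q

def from_ring(coeffs):
    """A coefficient closer to q/2 than to 0 (or q) is decoded as 1, otherwise as 0."""
    c = np.asarray(coeffs) % q
    return ((c > q // 4) & (c < 3 * q // 4)).astype(int)

bits = np.random.default_rng(0).integers(0, 2, size=n)
noise = np.random.default_rng(1).integers(-100, 101, size=n)      # small disturbance
print(np.array_equal(from_ring(to_ring(bits) + noise), bits))     # True: small noise is tolerated
```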
Analogously to the general LWE-based encryption scheme described in Section 2.2, the Kyber key generation (Algorithm 1) instantiates a particular LWE problem, A s + e = b , by generating coefficients A for the linear equation system and sampling a solution vector s as well as an error vector e.
Algorithm 1 Kyber PKE Key Generation: keyGen.
Input: none
  • Generate A R q k × k
  • Sample s R q k with coefficients from B
  • Sample e R q k with coefficients from B
  • Calculate b = A s + e
Output: public key p k = ( A , b ) , secret key s
The solution vector s functions as the secret key, while A and the vector b = A s + e are used as the public key. Calculating s from the public key would be identical to solving an instance of the LWE problem.
The Kyber PKE encryption (Algorithm 2) looks similar to the LWE-based encryption scheme introduced in Section 2.2 expanded to a Module-LWE setting.
Algorithm 2 Kyber PKE Encryption: enc.
Input: public key p k = ( A , b ) , message m { 0 , 1 } 256
  • Sample r R q k with coefficients from B
  • Sample e 1 R q k with coefficients from B
  • Sample e 2 R q with coefficients from B
  • Calculate u = A T r + e 1
  • Calculate v = b T r + e 2 + t o R i n g ( m )
Output: ciphertext c = ( u , v )
With knowledge of the secret value s, the reconstruction of the message m is possible through the corresponding Kyber PKE decryption routine (Algorithm 3).
Algorithm 3 Kyber PKE Decryption: dec
Input: secret key s, ciphertext c = ( u , v )
  • Calculate $m^* = v - s^T u$
Output: message m = f r o m R i n g ( m * )
Applying the operation f r o m R i n g ( m * ) reconstructs the original m with very high probability. Indeed, the Kyber encryption scheme is a probabilistic algorithm returning the original message m with very high probability (see Table 2 for concrete failure probability values), depending on the amount of noise within the sampled vectors.
To construct a CCA-secure KEM from the given PKE, a variant of the Fujisaki–Okamoto transformation (FO-transformation) is used. Fujisaki and Okamoto [11] presented the first generic transformation from asymmetric and symmetric encryption schemes to a secure hybrid encryption scheme. Later, Hofheinz, Hövelmanns and Kiltz [12] extended the work of Fujisaki and Okamoto and presented a generic transformation toolkit, including a transformation of a PKE scheme into a secure KEM. Algorithm 4 shows the Kyber KEM key generation.
Algorithm 4 Kyber KEM Key Generation.
Input: none
  • Generate σ { 0 , 1 } 256
  • Generate ( p k , s ) = P K E . k e y G e n ( )
Output: public key p k , secret key s k = ( s , σ )
In the KEM encapsulation (Algorithm 5), observe that the value r is used in the underlying PKE as a seed for the generation of the otherwise random values during encryption. Although a deterministic public key encryption algorithm is usually not desirable, for a KEM, the receiver needs to be able to repeat the encryption procedure in the same way as the sender. We denote the deterministic version of the encryption routine with given seed r by P K E . e n c r ( p k , m ) . Furthermore, the message m is hashed before being fed to the PKE encryption routine.
Algorithm 5 Kyber KEM Encapsulation.
Input: public key p k
  • Generate message m { 0 , 1 } 256
  • Calculate ( K , r ) = H 1 ( H 2 ( m ) H 2 ( p k ) )
  • Calculate c = P K E . e n c r ( p k , H 2 ( m ) )
  • Calculate K = K D F ( K H 2 ( c ) )
Output: encapsulation c, shared secret K
The decapsulation routine (Algorithm 6) calculates the required values analogously to the encapsulation routine.
Algorithm 6 Kyber KEM Decapsulation.
Input: public key pk, secret key s k = ( s , σ ) , encapsulation c
  • Calculate H m = P K E . d e c ( s , c )
  • Calculate ( K , r ) = H 1 ( H m H 2 ( p k ) )
  • Calculate $c' = PKE.enc_r(pk, H_m)$
  • If $c' = c$ set $K = KDF(K \| H_2(c))$
  • If $c' \neq c$ set $K = KDF(\sigma \| H_2(c))$
Output: shared secret K
To gain some intuition of how ciphertext validation in Kyber works, have a look at the decryption process as described in detail in Section 2.2. In the Kyber PKE scheme, the message m is embedded within the difference of the vectors v and s T u , i.e.,
$$v - s^T\cdot u = \mathrm{toRing}(m) + (e^T r + e_2 - s^T e_1),$$
where e , e 1 , e 2 are random error vectors. There are a lot of different combinations of values of these error terms that all correspond to the same m. In the KEM, however, the randomness becomes deterministic by deriving it from a chosen r, so there is a unique set of values ( e , e 1 , e 2 ) for each m. This property establishes the required CCA-security of the KEM. When an adversary sends a random ciphertext to the decapsulation routine, it will always decipher to a message m, but the probability that the adversary has chosen the specific ciphertext (generated by the correct “random” terms) corresponding to m is negligible.
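The sketch below shows this FO-style structure, i.e., deriving the PKE randomness from the message, re-encrypting during decapsulation and falling back to a σ-based key on mismatch, on top of a toy single-bit LWE scheme like the one sketched in Section 2.2. It is a schematic illustration only; hash choices, encodings and parameters are placeholders and deliberately do not follow the exact Kyber specification.

```python
import hashlib
import numpy as np

q, n, m = 3329, 8, 16                       # toy parameters, not secure

def H(*parts):                              # stand-in for the hash functions H1, H2 and the KDF
    h = hashlib.sha3_256()
    for p in parts:
        h.update(p)
    return h.digest()

def small(rng, shape):                      # toy replacement for the noise space B
    return rng.integers(-2, 3, size=shape)

def pke_keygen(rng):
    A = rng.integers(0, q, size=(n, m))
    s, e = small(rng, m), small(rng, n)
    return (A, (A @ s + e) % q), s          # pk = (A, b), sk = s

def pke_enc(pk, bit, seed):                 # deterministic: all randomness is derived from seed
    A, b = pk
    rng = np.random.default_rng(int.from_bytes(seed[:8], "big"))
    r, e1, e2 = small(rng, n), small(rng, m), int(small(rng, 1)[0])
    return (A.T @ r + e1) % q, int((b @ r + e2 + bit * (q // 2)) % q)

def pke_dec(sk, ct):
    u, v = ct
    mu = (v - sk @ u) % q
    return int(min(mu, q - mu) > q // 4)

def encaps(pk, rng):                        # KEM encapsulation, FO style
    bit = int(rng.integers(0, 2))
    ct = pke_enc(pk, bit, H(b"seed", bytes([bit])))
    return ct, H(b"key", bytes([bit]), ct[0].tobytes(), ct[1].to_bytes(2, "big"))

def decaps(pk, sk, sigma, ct):              # re-encrypt and compare; implicit rejection otherwise
    u, v = ct
    bit = pke_dec(sk, ct)
    u2, v2 = pke_enc(pk, bit, H(b"seed", bytes([bit])))
    if np.array_equal(u, u2) and v == v2:
        return H(b"key", bytes([bit]), u.tobytes(), v.to_bytes(2, "big"))
    return H(b"key", sigma, u.tobytes(), v.to_bytes(2, "big"))

rng = np.random.default_rng(7)
pk, sk = pke_keygen(rng)
sigma = rng.bytes(32)                       # random value used for implicit rejection
ct, K = encaps(pk, rng)
print(K == decaps(pk, sk, sigma, ct))                           # True for an honest ciphertext
print(decaps(pk, sk, sigma, (ct[0], (ct[1] + 1) % q)) == K)     # False: tampering yields an unrelated key
```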
The Kyber instances with their corresponding parameter choices are shown in Table 2.

2.2.3. Saber

Saber [13] is a CCA-secure KEM derived from a CPA-secure PKE based on Module-LWR. For $n, q \in \mathbb{N}$, the underlying ring is $R_q = \mathbb{Z}_q[X]/(X^n + 1)$, i.e., the ring of polynomials up to degree $n - 1$ with coefficients in $\mathbb{Z}_q$. The corresponding module is $R_q^k$ with rank $k \in \mathbb{N}$.
The following primitives are required: a noise space B, where sampling a value from B yields a random small integer value in the range $\{-5, \dots, 5\}$. Additionally, for the KEM construction, secure hash functions $H_1$, $H_2$, $H_3$ and a secure key derivation function $KDF$ are required.
Saber's rounding function does not strictly round down, as we have seen in the general case of LWR in Section 2.2; instead, it rounds to the median of each of the p intervals. (This is basically just the most naive approach for rounding.) This is implemented by adding half of the interval's length $h = \frac{q}{2p}$ and subsequently rounding down, i.e.,
$$\lfloor x\rceil_p := \lfloor x + h\rfloor_p.$$
To implement that efficiently, Saber only uses powers of 2 for the parameters q and p. This simplifies rounding to an addition followed by a bitwise shift.
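A minimal sketch of this power-of-two rounding (the parameters here are illustrative, not Saber's actual ones):

```python
q, p = 2 ** 13, 2 ** 10                     # powers of two, as Saber requires
h = q // (2 * p)                            # half of an interval's length
shift = q.bit_length() - p.bit_length()     # log2(q) - log2(p)

def saber_round(x):
    """Round x in Z_q to the index of its interval in Z_p: add h, then drop the low bits."""
    return ((x + h) >> shift) % p

for x in (0, 3, 4, 8191):
    print(x, "->", saber_round(x))          # 0 -> 0, 3 -> 0, 4 -> 1, 8191 -> 0 (wraps around)
```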
Like Kyber, the Saber PKE (Algorithms 7–9) is based on the classic LWE-based encryption scheme introduced in Section 2.2. However, error addition is replaced by rounding. This is the only difference to the Kyber PKE presented in Section 2.2.2.
Algorithm 7 Saber PKE Key Generation: keyGen.
Input: none
  • Generate A R q k × k
  • Sample s R q k with coefficients from B
  • Calculate $b = \lfloor A\cdot s\rceil_p$
Output: public key p k = ( A , b ) , secret key s
Algorithm 8 Saber PKE Encryption: enc
Input: public key p k = ( A , b ) , message m { 0 , 1 } 256
  • Sample r R q k with coefficients from B
  • Calculate $u = \lfloor A^T\cdot r\rceil_p$
  • Calculate $v = \lfloor b^T\cdot r + \mathrm{toRing}(m)\rceil_p$
Output: ciphertext c = ( u , v )
Algorithm 9 Saber PKE Decryption: dec
Input: secret key s, ciphertext c = ( u , v )
  • Calculate $m^* = v - s^T u$
Output: message m = f r o m R i n g ( m * )
Analogously to Kyber, to construct a CCA-secure KEM from the given PKE, a variant of the FO-transformation is used. In fact, the key generation algorithm (Algorithm 10) is completely identical.
Algorithm 10 Saber KEM Key Generation.
Input: none
  • Generate σ { 0 , 1 } 256
  • Generate ( p k , s ) = P K E . k e y G e n ( )
Output: public key p k , secret key s k = ( s , σ )
Again, the KEM construction (Algorithms 11 and 12) is very similar to Kyber. The only structural difference is that the additional hash function applied to the message m is absent.
Algorithm 11 Saber KEM Encapsulation.
Input: public key p k
  • Generate message m { 0 , 1 } 256
  • Calculate ( K , r ) = H 2 ( H 1 ( p k ) m )
  • Calculate c = P K E . e n c r ( p k , m )
  • Calculate K = H 3 ( K c )
Output: encapsulation c, shared secret K
Algorithm 12 Saber KEM Decapsulation.
Input: public key pk, secret key s k = ( s , σ ) , encapsulation c
  • Calculate m = P K E . d e c ( s k , c )
  • Calculate ( K , r ) = H 2 ( H 1 ( p k ) m )
  • Calculate $c' = PKE.enc_r(pk, m)$
  • If $c' = c$ set $K = H_3(K \| c)$
  • If $c' \neq c$ set $K = H_3(\sigma \| c)$
Output: shared secret K
The Saber instances with their corresponding parameter choices are shown in Table 3.

2.2.4. Dilithium

Dilithium [14] is a signature scheme based on Module-LWE. For $n, q \in \mathbb{N}$, the underlying ring is $R_q = \mathbb{Z}_q[X]/(X^n + 1)$, i.e., the ring of polynomials up to degree $n - 1$ with coefficients in $\mathbb{Z}_q$. The corresponding module is $R_q^l$ with rank $l \in \mathbb{N}$. Additionally, Dilithium requires a secure hash function H.
The key generation (Algorithm 13) is almost identical to Kyber’s key generation. An LWE instance is generated, i.e., a matrix A R q k × l with k N , a secret vector s R q l and an error term e R q k . As usual, A and b are public, while s is kept private.
Algorithm 13 Dilithium Key Generation: keyGen.
Input: none
  • Generate A R q k × l
  • Sample s R q l with small coefficients
  • Sample e R q k with small coefficients
  • Calculate b = A s + e
Output: public key p k = ( A , b ) , secret key s
Dilithium's signing process (Algorithm 14) is probabilistic. In the first step, a random vector $y \in R_q^l$ is sampled. As we will see in the verification process, to achieve correctness, we will use the rounded version of $Ay$ by means of a function $\mathrm{round}(\cdot)$. This function takes a given vector of polynomials and rounds each coefficient of every polynomial. The signature is formed by calculating a pair $(z, c)$, where c is formed by hashing the message m and the value $\mathrm{round}(Ay)$. The hash function H maps an input to a polynomial with coefficients in $\{-1, 0, 1\}$.
Due to the fact that z depends on the secret key s, which potentially leads to serious security issues, z is not output directly. Instead, in order to remove the statistical dependencies between z and s, Dilithium follows a so-called rejection sampling approach. For the details of rejection sampling, we refer to [15,16]. In case z is rendered invalid ('rejected'), the algorithm restarts from step 1.
Algorithm 14 Dilithium Signature generation.
Input: public key p k = ( A , b ) , secret key s, message m { 0 , 1 }
Until z is valid:
  • Sample y R q l with small coefficients
  • Calculate w = r o u n d ( A y )
  • Calculate c = H ( m w )
  • Calculate z = y + c s
Output: signature σ = ( z , c )
Given a correct signature σ , it is possible to recover w using the following calculation:
$$\mathrm{round}(Az - bc) = \mathrm{round}\big(A(y + cs) - (As + e)c\big) = \mathrm{round}(Ay + Acs - Acs - ce) = \mathrm{round}(Ay - ce) = w$$
To indeed recover w, the last step requires $\mathrm{round}(Ay - ce) = \mathrm{round}(Ay)$. Since c and e both have small coefficients, their product ce does not influence the outcome of the rounding. In order to verify the signature, we can use the recovered w to recalculate $c' = H(m \| w)$ and compare it to the provided signature value c (Algorithm 15). Observe that if z has not been calculated using the secret key s, i.e., by $z = y + cs$, the terms Acs would not cancel in the equation above, leading to an incorrect $w' \neq w$. Hence, the value $c'$ would be incorrect as well, leading to a rejection of the provided signature.
Algorithm 15 Dilithium Verification.
Input: public key p k = ( A , b ) , message m { 0 , 1 } , signature σ = ( z , c )
  • Calculate $w = \mathrm{round}(Az - bc)$
  • Calculate $c' = H(m \| w)$
Output: valid if $c' = c$, else invalid
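The cancellation used in the verification equation can be checked numerically. The sketch below replaces ring elements by plain integer vectors and the hash output by a single small scalar, so it is only an illustration of the identity $\mathrm{round}(Az - bc) = \mathrm{round}(Ay)$, not of Dilithium itself (which additionally uses rejection sampling and dedicated decomposition/hint mechanisms to handle the rare rounding-boundary cases).

```python
import numpy as np

q, k, l, d = 8380417, 6, 5, 13             # toy setup; rounding keeps all but the d low bits
rng = np.random.default_rng(3)
rnd = lambda x: (x % q) >> d               # simplistic stand-in for the round() function

A = rng.integers(0, q, size=(k, l))
s = rng.integers(-2, 3, size=l)            # small secret
e = rng.integers(-2, 3, size=k)            # small error
b = (A @ s + e) % q                        # public key part

y = rng.integers(0, q, size=l)             # signing randomness
w = rnd(A @ y)
c = int(rng.integers(-1, 2))               # stand-in for the hash output (a single small scalar here)
z = y + c * s                              # the signature value

# Verification only uses public data (A, b), the value c and the signature z:
print(w)
print(rnd(A @ z - b * c))                  # agrees with w except (rarely) right at a rounding boundary
```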
The Dilithium instances with their corresponding parameter choices are shown in Table 4.

2.3. NTRU-Based Cryptography

2.3.1. NTRU Fundamentals

The NTRU Assumption

NTRU is a lattice-based cryptosystem, which was first developed by Hoffstein, Pipher and Silverman in 1996. It originates from the two well-known schemes NTRUEncrypt and NTRUSign. For its abbreviation, “NTRU”, one can find multiple explanation attempts, for example: n-th degree truncated polynomial ring or ’number theorists r us’. As the former indicates, NTRU’s operations take place in the ring of truncated polynomials R q = Z q [ X ] / ( X n 1 ) , where n and q are two positive coprime integers and Z q = Z / q Z denotes the ring of integers modulo q. Therefore, R q is the ring of all polynomials of degree < n with coefficients in Z q .
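Arithmetic in this truncated ring amounts to a cyclic convolution of coefficient vectors, as the following toy sketch illustrates (illustrative parameters only, far from a secure instantiation):

```python
import numpy as np

n, q = 7, 41                                  # toy parameters

def ring_mul(a, b):
    """Multiply two polynomials, given as coefficient arrays of length n,
    in R_q = Z_q[X]/(X^n - 1): a cyclic convolution, since X^n is identified with 1."""
    c = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c

f = np.array([1, 1, 0, q - 1, 0, 1, 0])       # 1 + x - x^3 + x^5  (coefficient -1 stored as q-1)
g = np.array([0, 1, q - 1, 0, 1, 0, q - 1])   # x - x^2 + x^4 - x^6
print(ring_mul(f, g))
```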
Similar to RSA, where it cannot be proven that breaking RSA is as hard as integer factorization, the security of NTRU rests only on a hardness assumption. Let the notation $\equiv_R$ denote congruence in the ring R. The so-called NTRU assumption states that the following task is difficult to solve:
Given $h \in R_q$, find ternary polynomials $f, g \in \mathbb{Z}_3[X]/(X^n - 1)$ (a ternary polynomial has coefficients in $\mathbb{Z}_3$) such that
$$f \cdot h \equiv_{R_q} g.$$
Later, we will see that this can actually be solved as a shortest vector problem.

NTRU-Based Encryption Schemes

This section will provide an overview of the main theory used to build NTRU cryptosystems. To build a cryptosystem around the NTRU assumption, we need two primes, n and p, as well as an integer, q, which is coprime to both. Furthermore, p is significantly smaller than q; in our case, we always have p = 3 . These integers will define the rings
$$R_q = \mathbb{Z}_q[X]/(X^n - 1), \qquad R_p = \mathbb{Z}_p[X]/(X^n - 1) = R_3 = \mathbb{Z}_3[X]/(X^n - 1),$$
in which the operations take place.
In the first step, we sample two ternary polynomials f , g R 3 , where f needs to be invertible in R 3 and R q . Then, we need to calculate said inverses:
$$f_q := f^{-1} \in R_q, \qquad f_3 := f^{-1} \in R_3$$
While f q is used to calculate the public key:
$$h \equiv_{R_q} f_q \cdot g,$$
f and f 3 serve as the secret key. It is now easy to see that deriving the secret key from the public key provides a solution to the NTRU assumption.
To now encrypt a message m R 3 , we need another random ternary polynomial r R 3 and calculate the ciphertext as:
$$c \equiv_{R_q} p\cdot r\cdot h + m = 3\cdot r\cdot h + m$$
The r ensures that encryption is not deterministic, while multiplication by 3 enables correct decryption, as we are about to see.
The decryption process then consists of two steps. First, we calculate:
$$a = f\cdot c = f\cdot(3\cdot r\cdot h + m) = f\cdot f_q\cdot 3\cdot r\cdot g + f\cdot m \equiv_{R_q} 3\cdot r\cdot g + f\cdot m$$
The second step is calculated in R 3 , which ensures that the first term of a vanishes. Multiplying by f 3 then leads to the original message m:
$$f_3\cdot a = f_3\cdot(3\cdot r\cdot g + f\cdot m) \equiv_{R_3} m$$
The attentive reader might ask why the condition $p \ll q$ is obligatory, and indeed, this is not clearly evident. The problem lies in the transition between $R_q$ and $R_p$. It is vital for flawless decryption that $a = p\cdot r\cdot g + f\cdot m$ does not only hold true in $R_q$ but also in $\mathbb{Z}[X]$. To be more precise, if the coefficients of $p\cdot r\cdot g + f\cdot m$ became reduced mod q, a subsequent reduction mod p would not yield m.
In conclusion, correctness is assured if this calculation yields a polynomial of degree $< n$ and coefficients $< q$. Since $r, g, f, m \in R_p$ have small coefficients, it is sufficient to require $p \ll q$.
Another observable fact is that the decryption process also establishes the possibility to obtain the message m without the knowledge of f. Indeed (according to the NTRU assumption), it is sufficient for an adversary to find any ternary polynomial f ^ such that f ^ · h is again ternary modulo q, since this still ensures that f ^ · c does not become reduced modulo q and the subsequent reduction modulo p would yield m. The authors showed in [17] that in all likelihood the only polynomials with this property are just rotations of f (i.e., polynomials obtained by cyclically rotating the coefficients of f).

Linking NTRU to Computational Lattice Problems

To get an idea of the connection between lattices and NTRU, consider the lattice:
$$L = \{(u, v) \in R_q \times R_q \mid u\cdot h \equiv_{R_q} v\},$$
consisting of every possible solution for a fixed NTRU assumption given $h \in R_q$.
In the following, all calculations are reduced modulo ( X n 1 ) , and we therefore write u · h = v mod q . To find a basis of L , observe that every ( u , v ) L equivalently fulfills:
$$u\cdot h - k\cdot q = v$$
for some k R q . This can be rewritten as:
$$\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ h & -q \end{pmatrix}\cdot\begin{pmatrix} u \\ k \end{pmatrix}$$
in an equivalent form. Using the coefficients of u = i = 0 n 1 u i x i , v = i = 0 n 1 v i x i , h = i = 0 n 1 h i x i and k = i = 0 n 1 k i x i , this can be transformed into:
$$\begin{pmatrix} u_0 \\ u_1 \\ \vdots \\ u_{n-1} \\ v_0 \\ v_1 \\ \vdots \\ v_{n-1} \end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 & 0 & 0 & \cdots & 0 \\ \vdots & & \ddots & & & & & \vdots \\ 0 & 0 & \cdots & 1 & 0 & 0 & \cdots & 0 \\ h_0 & h_1 & \cdots & h_{n-1} & -q & 0 & \cdots & 0 \\ h_{n-1} & h_0 & \cdots & h_{n-2} & 0 & -q & \cdots & 0 \\ \vdots & & & & & & \ddots & \vdots \\ h_1 & h_2 & \cdots & h_0 & 0 & 0 & \cdots & -q \end{pmatrix} \cdot \begin{pmatrix} u_0 \\ u_1 \\ \vdots \\ u_{n-1} \\ k_0 \\ k_1 \\ \vdots \\ k_{n-1} \end{pmatrix}$$
Defining $M_h$ as the bottom-left quadrant, it is easy to see that the matrix
$$\begin{pmatrix} I_n & 0_n \\ M_h & -q\cdot I_n \end{pmatrix}$$
defines a basis of the lattice L. It is obvious that $(f, g) \in L$, and since $f, g \in R_3$, it is also a rather short vector. Furthermore, it can be shown that with overwhelming probability $(f, g)$ is indeed the shortest vector of L. Therefore, being able to solve an SVP also enables finding the secret key of any NTRU cryptosystem.

2.3.2. NTRU

Even though the NIST round three submission of NTRU [18] is mainly based on the generic NTRU encryption scheme we have just seen, it contains a few major differences that need further explanation. The most obvious change regards the underlying polynomial rings. Before, we just considered the two truncated polynomial rings R q = Z q [ X ] / ( X n 1 ) and R p = Z p [ X ] / ( X n 1 ) that were both generated by ( X n 1 ) . Instead, we consider polynomial rings that are generated by the three polynomials
$$\phi_1 = (X - 1) \qquad \phi_n = \frac{X^n - 1}{X - 1} \qquad \phi_1\cdot\phi_n = (X^n - 1).$$
To differentiate the corresponding rings, we introduce the following notation:
$$R_k^{\phi} := \mathbb{Z}_k[X]/\phi$$
For example, the formerly used $R_3$ is now denoted as $R_3^{\phi_1\phi_n}$.
The key generation (Algorithm 16) mainly consists of the same steps as in the generic NTRU construction. The difference is that sampling and operations take place in different rings and spaces. Note that g is always a multiple of ϕ 1 . Other than that, the step of calculating the inverse h q is added. All these modifications aim to add another layer of security against forging ciphertexts, as we will see in the decryption step.
Algorithm 16 NTRU PKE Key Generation: keyGen.
Input: none
  • Sample f R 3 ϕ n
  • Sample g { ϕ 1 · v v R 3 ϕ n }
  • Calculate f q = f 1 R q ϕ n
  • Calculate h R q ϕ 1 ϕ n g · f q
  • Calculate h q = h 1 R q ϕ n
  • Calculate f 3 = f 1 R 3 ϕ n
Output: public key h, secret key s k = ( f , f p , h q )
The encryption process (Algorithm 17) only differs in using a lift-function on the message m before encryption. Let $(m\cdot\phi_1^{-1})_{R_3^{\phi_n}}$ denote a calculation within the ring $R_3^{\phi_n}$; then:
$$\mathrm{Lift}(m) = \phi_1\cdot\left(m\cdot\phi_1^{-1}\right)_{R_3^{\phi_n}}.$$
It is easy to see that $\mathrm{Lift}(m) \equiv_{R_3^{\phi_n}} m$. Now, as a consequence of g and $\hat{m}$ being multiples of $\phi_1$, the same is true for c.
Algorithm 17 NTRU PKE Encryption: enc.
Input: public key h, message m R 3 ϕ n
  • Calculate m ^ = L i f t ( m )
  • Sample r R 3 ϕ n
  • Calculate c R q ϕ 1 ϕ n 3 · r · h + m ^
Output: ciphertext c
The decryption differs the most compared to the general NTRU construction and therefore requires further explanation. In case of a correctly encrypted message m, deciphering takes place in steps 2 and 3. It is not obvious that the calculation in step 2 still yields the same result as in the general NTRU scheme since f q is the inverse in R q ϕ n and not in R q ϕ 1 ϕ n .
Using the fact that g is a multiple of $\phi_1$ and $f_q$ is the inverse of f in $R_q^{\phi_n}$, we can equivalently say
$$g = \phi_1\cdot v \quad \text{for some } v \in R_3^{\phi_n} \qquad (1)$$
$$f\cdot f_q = 1 + k\cdot\phi_n \quad \text{for some } k \in \mathbb{Z}[X] \qquad (2)$$
Step 2 then resolves to
$$\begin{aligned} a = f\cdot c &= f\cdot f_q\cdot 3\cdot r\cdot g + f\cdot\hat{m} \\ &\overset{(2)}{=} (1 + k\cdot\phi_n)\cdot 3\cdot r\cdot g + f\cdot\hat{m} \\ &\overset{(1)}{=} 3\cdot r\cdot g + \underbrace{k\cdot\phi_n\cdot\phi_1\cdot v\cdot 3\cdot r}_{\equiv_{R_q^{\phi_1\phi_n}}\, 0} + f\cdot\hat{m} \\ &\equiv_{R_q^{\phi_1\phi_n}} 3\cdot r\cdot g + f\cdot\hat{m} \end{aligned}$$
Finally, we can obtain m in step 3, since $f_3$ is indeed the inverse in the considered ring $R_3^{\phi_n}$, by calculating
$$a\cdot f_3 = 3\cdot r\cdot g\cdot f_3 + f\cdot f_3\cdot\hat{m} \equiv_{R_3^{\phi_n}} m$$
The first term vanishes since it is a multiple of 3, and as seen before, m ^ = L i f t ( m ) R 3 ϕ n m .
The decryption (Algorithm 18) contains a built-in validation process, which justifies the additional steps. As shown later, this enables the construction of a KEM that avoids re-encryption (in contrast to classic FO-transformation).
The first step validates whether or not c is a multiple of ϕ 1 , which is true for any correctly generated ciphertext, as seen before. In order to verify that r is correctly sampled from R 3 ϕ n (step 6 and 7), step 4 and 5 retrieve r using c , L i f t ( m ) and h q . If any of the validation steps fail, the procedure returns the error vector ( 0 , 0 , 1 ) ; otherwise, ( r , m , 0 ) is returned.
Algorithm 18 NTRU PKE Decryption: dec.
Input: secret key sk = ( f , f p , h q ) , ciphertext c
  • if $c \not\equiv_{R_q^{\phi_1}} 0$ return $(0, 0, 1)$
  • Calculate a R q ϕ 1 ϕ n f · c
  • Calculate m R 3 ϕ n a · f 3
  • Calculate m ^ = L i f t ( m )
  • Calculate r R q ϕ n ( c m ^ ) · h q 3
  • if $r \in R_3^{\phi_n}$ return $(r, m, 0)$
  • else return ( 0 , 0 , 1 )
Output: Correct ( r , m , 0 ) or error ( 0 , 0 , 1 )
Constructing the NTRU KEM is now straightforward. The key generation (Algorithm 19) simply calls PKE.keyGen() and samples a random value σ , which is later used in the decapsulation for implicit rejection.
Algorithm 19 NTRU KEM Key Generation.
Input: none
  • Generate ( h , ( f , f p , h q ) ) = PKE.keyGen()
  • Sample σ { 0 , 1 } 256
Output: public key h, secret key s k = ( f , f p , h q , σ )
Encapsulation (Algorithm 20) consists of three steps: Random sampling r , m R 3 ϕ n , generating the ciphertext using PKE.enc() and calculating the shared secret K as the hash of r and m with some cryptographic hash function H 1 .
Algorithm 20 NTRU KEM Encapsulation.
Input: public key h
  • Sample r , m R 3 ϕ n
  • Calculate c = PKE . enc r ( h , m )
  • K = H 1 ( r m )
Output: encapsulation c, shared secret K
Decapsulation (Algorithm 21) starts with the decryption of c using PKE.dec(). Next, two hashes are calculated, the correct one as a hash of r and m and a decoy as a hash of the sampled values s and c with some cryptographic hash function H 2 . In case of valid decryption, the former is returned; otherwise, the decoy value is returned.
Algorithm 21 NTRU KEM Decapsulation.
Input: secret key sk = ( f , f p , h q , σ ) , encapsulation c
  • ( r , m , f a i l ) = PKE.dec( ( f , f p , h q ) , c )
  • k 1 = H 1 ( r m )
  • k 2 = H 2 ( σ c )
  • if ( f a i l = 0 ) set K = k 1
  • else set K = k 2
Output: shared secret K
The NTRU submission recommends two different families of parameter sets, which are referred to as NTRU-HRSS and NTRU-HPS. The explanations of this section regard NTRU-HRSS, but the details of both can be found in the algorithm specification [18].

2.3.3. Falcon

Falcon [19] is a signature scheme based on the Gentry–Peikert–Vaikuntanathan (GPV) signature scheme using the NTRU structure [20]. On a very high level, the underlying idea of the GPV framework is as follows. The public key is a full-rank matrix A Z q n × m (with m > n ) generating a lattice L, while the secret key is a matrix B Z q m × m generating a corresponding lattice L q . The lattices L and L q are orthogonal modulo q, meaning that
$$\forall x \in L,\ y \in L_q:\quad \langle x, y\rangle = 0 \bmod q.$$
Equivalently, the rows of A and B are pairwise orthogonal, i.e.,  B · A t = 0 mod q.
Given a hash H ( m ) of some arbitrary message m and a hash-function H that maps onto L, a valid signature s has to fulfill two properties:
1.
A · s = H ( m ) mod q;
2.
$\|s\| < \beta$ for some boundary β, i.e., s has to be short.
A solution s satisfying the first property can be easily computed using standard linear algebra; however, additionally considering the second property, finding a valid s is much harder. These requirements are almost identical to the short integer solution problem (SIS), the only difference being that an SIS solution s fulfills A · s = 0 mod q instead. The SIS problem is average-case-hard and reducible to SVP [21].
However, with knowledge of the secret matrix B, a valid signature s can be efficiently computed. The first step is to find any solution $s'$ to the first requirement $A\cdot s' = H(m)$. Afterwards, a sufficiently close vector v in the orthogonal lattice $L_q$ needs to be found. Knowing B, this can be achieved using an efficient CVP approximation algorithm such as Babai's algorithm [22], satisfying $\|s' - v\| < \beta$.
Finally, $s = s' - v$ forms a valid signature since $A\cdot s = A\cdot s' - A\cdot v = H(m) - 0$ due to v being orthogonal to the rows of A.
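The simplest of Babai's methods, the round-off algorithm, conveys the idea of such a CVP approximation. The sketch below is generic and only illustrative; Falcon itself uses a refined variant over its NTRU lattice.

```python
import numpy as np

def babai_round(B, t):
    """Approximate the closest lattice vector to the target t, given a basis B
    (columns are basis vectors), by rounding the real coordinates of t to integers."""
    coords = np.linalg.solve(B, t)
    return B @ np.rint(coords)

B = np.array([[2.0, 1.0],
              [1.0, 3.0]])                    # a toy (fairly good) basis
t = np.array([4.3, 5.6])
v = babai_round(B, t)
print(v, np.linalg.norm(t - v))               # a nearby lattice vector and its distance to t
```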
The overall framework of Falcon is quite similar to GPV and uses the basic idea of the NTRU scheme to generate the required lattices. Similar to NTRU, Falcon’s operations take place in the ring of truncated polynomials R q = Z q [ X ] / ( X n + 1 ) , where n and q are two positive coprime integers and Z q = Z / q Z denotes the ring of integers modulo q. Therefore, R q is the ring of all polynomials of degree < n with coefficients in Z q .
For the key generation (Algorithm 22) a set of four polynomials $f, g \in R_q$ and $F, G \in R = \mathbb{Z}[x]/(X^n + 1)$ that fulfills
$$f\cdot G - g\cdot F = q \bmod (X^n + 1)$$
is needed. Afterwards, analogously to NTRU, we calculate h = g · f 1 R q . From these polynomials, we can generate the public key matrix A Z q n × 2 n and the secret key matrix B Z q 2 n × 2 n as
$$A = (1 \mid h) \qquad B = \begin{pmatrix} g & -f \\ G & -F \end{pmatrix},$$
where every polynomial is represented as its corresponding matrix (see Section 2.3.1 for notation details).
It is easy to check that
$$B\cdot A^t = \begin{pmatrix} g - hf \\ G - hF \end{pmatrix} = \begin{pmatrix} g - (gf^{-1})f \\ G - (gf^{-1})F \end{pmatrix} = \begin{pmatrix} g - gf^{-1}f \\ G - gf^{-1}F \end{pmatrix} = \begin{pmatrix} 0 \\ f^{-1}(fG - gF) \end{pmatrix} = \begin{pmatrix} 0 \\ f^{-1}\cdot q \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \bmod q$$
indeed holds.
Algorithm 22 Falcon Key Generation.
Input: none
  • Sample f , g R q
  • Find $F, G \in R$ such that $f\cdot G - g\cdot F = q \bmod (X^n + 1)$
  • Calculate h = g · f 1
Output: public key h, secret key s k = ( f , g , F , G )
To make a Falcon signature probabilistic, a random r { 0 , 1 } 320 is sampled and used to generate the hash c = H ( r m ) with some cryptographic hash function H. Due to the construction of A = 1 h , finding an s satisfying property 1 is easy. Since
$$(1 \mid h)\cdot\begin{pmatrix} c \\ 0 \end{pmatrix} = c$$
holds, we can always use $s' = (c, 0)$. As described above, we are looking for a vector $v \in L_q$ close to $s'$. Falcon does that using a variant of Babai's algorithm [23]. Then, the difference $s' - v$ satisfies properties 1 and 2 and forms a valid signature (Algorithm 23).
To increase security, only the second component of $s = (s_1, s_2) = (c - v_1, -v_2)$ is transmitted, which is already sufficient to verify the validity of s. Due to
$$A\cdot s = (1 \mid h)\cdot\begin{pmatrix} s_1 \\ s_2 \end{pmatrix} = s_1 + h\cdot s_2 = c,$$
we can see that s 1 just represents a shift by a small constant.
Algorithm 23 Falcon Signature generation.
Input: secret key sk = ( f , g , F , G ) , message m { 0 , 1 }
  • Sample r { 0 , 1 } 320
  • Calculate c = H ( r m )
  • Set $s' = (c, 0)$
  • Find $v \in L_q$ with $\|s' - v\| < \beta$
  • Calculate $(s_1, s_2) = s' - v = (c - v_1, -v_2)$
Output: signature σ = ( r , s 2 )
Analogously to the signature generation, for verification (Algorithm 24), the message m is hashed together with the provided r. Assuming s 2 was correctly generated, the missing value s 1 can be calculated by
$$s_1 = c - s_2\cdot h$$
and is declared valid if it is sufficiently small, i.e., $\|(s_1, s_2)\| < \beta$.
Algorithm 24 Falcon Verification.
Input: public key h, message m { 0 , 1 } , signature σ = ( r , s 2 )
  • Calculate c = H ( r m )
  • Calculate $s_1 = c - s_2\cdot h$
Output: valid if $\|(s_1, s_2)\| < \beta$, else invalid
The Falcon instances with their corresponding parameter choices are shown in Table 5.

3. Code-Based Cryptography

3.1. Linear Code Fundamentals

Error-correcting codes are a standard approach to detect and correct communication errors that might happen due to noise during transmission. This technique can be applied to the construction of cryptographic systems where errors are intentionally inserted and can only be corrected by the intended receiver.

3.1.1. Linear Codes

Definition 3
(Hamming weight, Hamming distance). For a given vector $x = (x_1, \dots, x_n) \in \mathbb{F}_2^n$, we call
$$\mathrm{weight}(x) = \sum_{i=1}^{n} x_i$$
the Hamming weight of x. Note that this simply counts the number of entries equal to 1.
Given another vector $y = (y_1, \dots, y_n) \in \mathbb{F}_2^n$, we call
$$\mathrm{dist}(x, y) = \sum_{i=1}^{n} x_i \oplus y_i$$
the Hamming distance, which denotes the number of bits in which x and y differ.
Definition 4
(linear code). Let $\mathcal{C}$ be a linear subspace of the vector space $\mathbb{F}_2^n$ with dimension k. Furthermore, let
$$d = \min_{c_i, c_j \in \mathcal{C},\ c_i \neq c_j} \mathrm{dist}(c_i, c_j)$$
be the minimum distance of two distinct elements of $\mathcal{C}$.
$\mathcal{C}$ is called a linear code or, equivalently, an $(n, k, d)$-code. The elements of $\mathcal{C}$ are called codewords.
Elements of an $(n, k, d)$-code $\mathcal{C}$ are binary vectors of length n. However, since $\mathcal{C}$ has dimension k, only k entries can be arbitrarily chosen, and this choice already defines the remaining $n - k$ entries. This means we end up with vectors of length n containing only k bits of non-redundant information. Therefore, it is possible to represent $\mathcal{C}$ as the span of k linearly independent codewords $c_1, \dots, c_k \in \mathcal{C}$, i.e.,
$$\mathcal{C} = \{G\cdot x \mid x \in \mathbb{F}_2^k\},$$
where $G \in \mathbb{F}_2^{n\times k}$ with codewords $c_1, \dots, c_k$ as columns. G is called the generator matrix of $\mathcal{C}$. We can transform G to its standard form (this definition deviates a little from standard literature; however, it is more useful in the context of Classic McEliece) $G = \begin{pmatrix} T \\ I_k \end{pmatrix}$, where $I_k$ is the $k\times k$ identity matrix and $T \in \mathbb{F}_2^{(n-k)\times k}$.
Given a generator matrix $G = \begin{pmatrix} T \\ I_k \end{pmatrix}$ in standard form, there is a neat way to check whether a given word $c \in \mathbb{F}_2^n$ is a valid codeword, i.e., an element of $\mathcal{C}$. Let $H = (I_{n-k} \mid T) \in \mathbb{F}_2^{(n-k)\times n}$ and $c \in \mathbb{F}_2^n$ be a valid codeword, i.e., $c = G\cdot x$ for some $x \in \mathbb{F}_2^k$. Then,
$$H\cdot c = H\cdot(G\cdot x) = (H\cdot G)\cdot x = \begin{pmatrix} 1 & \cdots & 0 & t_{1,1} & \cdots & t_{1,k} \\ \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & \cdots & 1 & t_{n-k,1} & \cdots & t_{n-k,k} \end{pmatrix}\cdot\begin{pmatrix} t_{1,1} & \cdots & t_{1,k} \\ \vdots & & \vdots \\ t_{n-k,1} & \cdots & t_{n-k,k} \\ 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix}\cdot x = \begin{pmatrix} t_{1,1}\oplus t_{1,1} & \cdots & t_{1,k}\oplus t_{1,k} \\ \vdots & & \vdots \\ t_{n-k,1}\oplus t_{n-k,1} & \cdots & t_{n-k,k}\oplus t_{n-k,k} \end{pmatrix}\cdot x = 0 \in \mathbb{F}_2^{n-k}$$
equals the zero-vector in $\mathbb{F}_2^{n-k}$ for any codeword c. Because of this property, H is called the parity check matrix. Indeed, the condition $H\cdot c = 0$ holds if and only if c is a valid codeword.
Therefore, a linear code $\mathcal{C}$ can equivalently be defined via its parity check matrix H since $\mathcal{C}$ is exactly the kernel of the map H, so $\mathcal{C} = \{x \in \mathbb{F}_2^n \mid H\cdot x = 0\}$. This also motivates the following definition.
Definition 5
(syndrome). Let C be a linear code with parity check matrix H and x F 2 n be a vector. We call
$$H\cdot x \in \mathbb{F}_2^{n-k}$$
the syndrome of x.
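The interplay of generator matrix, parity check matrix and syndrome can be seen in a few lines. The sketch uses a toy $(7, 4)$ code with a randomly chosen block T, purely for illustration:

```python
import numpy as np

n, k = 7, 4
rng = np.random.default_rng(0)
T = rng.integers(0, 2, size=(n - k, k))           # an arbitrary (n-k) x k binary block

G = np.vstack([T, np.eye(k, dtype=int)])          # generator matrix in standard form
H = np.hstack([np.eye(n - k, dtype=int), T])      # corresponding parity check matrix

x = rng.integers(0, 2, size=k)
c = (G @ x) % 2                                   # a codeword
print((H @ c) % 2)                                # [0 0 0]: codewords have syndrome zero

c[0] ^= 1                                         # flip one bit -> no longer a codeword
print((H @ c) % 2)                                # a nonzero syndrome reveals the error
```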

3.1.2. Binary Goppa Codes

A traditional and well-studied family of linear codes are the so-called binary Goppa codes [24]. They were proposed for cryptography in 1978 due to their good security properties and fast decoding capabilities [25].
Definition 6
(binary Goppa code, support, Goppa polynomial). Let F 2 m be a finite field for some m N and g ( x ) F 2 m [ x ] be an irreducible polynomial of degree t < 2 m . Let L = ( α 1 , α 2 , , α n ) be a sequence of n distinct elements of F 2 m which are not roots of g ( x ) . Then, we define a binary Goppa code Γ ( g , L ) by
$$\Gamma(g, L) = \left\{ c \in \mathbb{F}_2^n \;\middle|\; \sum_{i=1}^{n} \frac{1}{x - \alpha_i}\cdot c_i \equiv 0 \bmod g(x) \right\}.$$
We call L the support and $g(x)$ the Goppa polynomial.
While we will not delve into the mathematical structure behind binary Goppa codes, it is important to highlight that a given Goppa code depends on its Goppa polynomial and its support. In order to derive the parity check matrix of a given Goppa code Γ ( g , L ) , we define
$$\hat{I}_i(x, \alpha_i) := \frac{1}{x - \alpha_i} \bmod g(x)$$
to be the inverse of $x - \alpha_i$ reduced modulo $g(x)$ for $i \in \{1, \dots, n\}$. Note that the condition of $\alpha_1, \dots, \alpha_n$ not being roots of g ensures that the inverses $\frac{1}{x - \alpha_1}, \dots, \frac{1}{x - \alpha_n}$ exist since, otherwise, $x - \alpha_i$ would divide g.
Since I ^ i ( x , α i ) is already reduced modulo g ( x ) and the addition of polynomials with the same degree cannot increase their degree, we can rewrite the defining condition of Γ ( g , L ) in the following way:
$$\sum_{i=1}^{n} \hat{I}_i(x, \alpha_i)\cdot c_i = 0$$
Moreover, $\hat{I}_i(x, \alpha_i)$ is a polynomial with a maximum degree of $t - 1$, i.e., $\hat{I}_i(x, \alpha_i) = \sum_{k=1}^{t} \hat{I}_{i,k}(\alpha_i)\cdot x^{k-1}$. Again, we can rewrite the condition as:
$$\sum_{i=1}^{n} c_i \sum_{k=1}^{t} \hat{I}_{i,k}(\alpha_i)\cdot x^{k-1} = 0.$$
From this equation, one can easily derive the parity check matrix H ^ for Γ ( g , L ) :
$$\hat{H} = \begin{pmatrix} \hat{I}_{1,1}(\alpha_1) & \hat{I}_{2,1}(\alpha_2) & \cdots & \hat{I}_{n,1}(\alpha_n) \\ \hat{I}_{1,2}(\alpha_1) & \hat{I}_{2,2}(\alpha_2) & \cdots & \hat{I}_{n,2}(\alpha_n) \\ \vdots & & & \vdots \\ \hat{I}_{1,t}(\alpha_1) & \hat{I}_{2,t}(\alpha_2) & \cdots & \hat{I}_{n,t}(\alpha_n) \end{pmatrix} \in \mathbb{F}_{2^m}^{t\times n}$$
The inverses in $\hat{H}$ can then be calculated using n executions of the extended Euclidean algorithm (EEA). Applying EEA directly to any $(x - \alpha_i)$ and $g(x) = \sum_{i=0}^{t} g_i\cdot x^i$ would lead to a simpler version of $\hat{H}$:
$$\hat{H} = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ g_{t-1} & 1 & 0 & \cdots & 0 \\ g_{t-2} & g_{t-1} & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ g_1 & g_2 & g_3 & \cdots & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_n \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_n^2 \\ \vdots & & & \vdots \\ \alpha_1^{t-1} & \alpha_2^{t-1} & \cdots & \alpha_n^{t-1} \end{pmatrix} \cdot \begin{pmatrix} \frac{1}{g(\alpha_1)} & 0 & \cdots & 0 \\ 0 & \frac{1}{g(\alpha_2)} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{g(\alpha_n)} \end{pmatrix} \in \mathbb{F}_{2^m}^{t\times n}$$
The details of the derivation can be found in [26].
In Classic McEliece, we will always consider a binary version of the parity check matrix $\hat{H} \in \mathbb{F}_{2^m}^{t\times n}$. That means that all elements in $\mathbb{F}_{2^m}$ are converted to a column vector representing their binary form of length m. We call the resulting matrix
$$H \in \mathbb{F}_2^{mt\times n}$$
the binary parity check matrix.
Theorem 1.
Let $\Gamma(g, L)$ be an $(n, k, d)$ binary Goppa code, where $g \in \mathbb{F}_{2^m}[x]$ has degree t. Then, we can lower-bound the dimension k by
$$k \geq n - mt$$
and the minimum distance d by
$$d \geq 2t + 1.$$
Since H is an $mt\times n$ matrix, the corresponding generator matrix G has dimension $n\times k$ with $k \geq n - mt$. The derivation of the lower bound for the minimum distance is not easy to see; that is why we refer to [27] for a detailed proof.

3.1.3. Computational Linear Code Problems

Analogously to lattices, there exist various code-based calculation problems which are considered to be computationally hard and are therefore suitable for cryptographic algorithms.
Given a linear ( n , k , d ) -code C F 2 n with a random parity-check matrix H F 2 ( n k ) × n and q F 2 n , the task of finding the closest codeword c C to q, i.e., the codeword c which minimizes d i s t ( q , c ) , is computationally hard and is called a syndrome decoding problem.
Due to its structured parity-check matrix, the hardness of the syndrome decoding problem cannot directly be applied to Goppa codes. However, research indicates that distinguishing a Goppa parity-check matrix from a parity-check matrix of a random code is difficult. The parity-check matrix $\hat{H}$ introduced above gives an intuition of that fact. Observe that each entry is a polynomial inverse depending on a random Goppa support and a random Goppa polynomial. This Goppa code indistinguishability assumption is the basis for the Classic McEliece cryptosystem, which we will discuss in the following section.

3.2. Classic McEliece

Classic McEliece [28] is a CCA-secure key encapsulation mechanism based on a version of the Niederreiter encryption scheme. A message is represented as an error vector e whose syndrome, i.e., the parity check matrix (public key) applied on e, is used as encryption. Knowing the structure (secret key) of the underlying code, the receiver is able to restore the error vector e from the syndrome using a syndrome decoding algorithm [29].
The Classic McEliece scheme uses binary Goppa codes, which form linear ( n , k , d ) -codes. During key generation (Algorithm 25), Classic McEliece generates a random binary Goppa code Γ ( g , L ) . As described above, the code Γ ( g , L ) consists of a Goppa polynomial g ( x ) F 2 m [ x ] with degree t for a suitable m and a support L. Then, the corresponding binary parity-check matrix H F 2 m t × n is computed and transformed to standard form. H is then published as public key while the Goppa parameters g and L are kept secret. Classic McEliece uses the fact that it is generally infeasible to reconstruct a Goppa code from a given parity-check matrix H.
Algorithm 25 Classic McEliece PKE Key Generation: keyGen.
Input: none
  • Generate random Goppa Code Γ ( g , L ) with
    • Goppa polynomial g ( x ) F 2 m [ x ] of degree t
    • Uniform random sequence L = ( α 1 , , α n ) of n distinct elements of F 2 m
  • Compute corresponding binary parity-check matrix H F 2 m t × n
Output: public key H, private key Γ ( g , L ) .
The idea of Classic McEliece encryption (Algorithm 26) is to send the syndrome $H\cdot(c + e)$ of an erroneous codeword $c \in \Gamma(g, L)$ with error $e \in \mathbb{F}_2^n$. We can observe that $H\cdot(c + e)$ is independent of the concrete codeword c, because $H\cdot(c + e) = H\cdot c + H\cdot e = H\cdot e$ holds for all codewords. Therefore, we drop c and just calculate $H\cdot e$. The value e serves as the message, and we require it to have weight t, which is defined to be the largest value possible while also guaranteeing correct decryption. Therefore, the size of the message space is $\binom{n}{t}$. Using the provided parity-check matrix H, the corresponding syndrome $H\cdot e$ is calculated and sent to the receiver.
Algorithm 26 Classic McEliece PKE Encryption: enc.
Input: public key H F 2 m t × n , message e F 2 n with weight t
  • Compute C = H e F 2 n k
Output: ciphertext C
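The following continuation of the toy code from Section 3.1 illustrates that the syndrome depends only on the error vector, not on the codeword it is added to. Note that this toy code is not a Goppa code, so no efficient decoder exists for it; it only demonstrates the encryption-side identity.

```python
import numpy as np

n, k, t = 7, 4, 1
rng = np.random.default_rng(1)
T = rng.integers(0, 2, size=(n - k, k))
G = np.vstack([T, np.eye(k, dtype=int)])          # toy generator matrix in standard form
H = np.hstack([np.eye(n - k, dtype=int), T])      # toy parity check matrix (the "public key")

e = np.zeros(n, dtype=int)
e[int(rng.integers(0, n))] = 1                    # the "message": an error vector of weight t = 1
C = (H @ e) % 2                                   # ciphertext = syndrome of e

# Adding any codeword G x on top of e does not change the syndrome:
x = rng.integers(0, 2, size=k)
print(np.array_equal((H @ ((G @ x + e) % 2)) % 2, C))   # True
```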
Knowing the structure of the Goppa code, i.e., the secret Goppa polynomial g and support $(\alpha_1, \dots, \alpha_n)$, the receiver is able to reconstruct e from the provided syndrome $C = H\cdot e$. In order to do that, the given syndrome $C \in \mathbb{F}_2^{n-k}$ is extended to the column vector $C' = (C, 0, \dots, 0) \in \mathbb{F}_2^n$ by appending k zeros. We first observe that
$$H\cdot(C' + e) = H\cdot\big((He, 0, \dots, 0) + e\big) = H\cdot(He, 0, \dots, 0) + H\cdot e = H\cdot e + H\cdot e = 0$$
Note that $H\cdot(He, 0, \dots, 0) = H\cdot e$ holds because H is in standard form, i.e., $H = (I_{n-k} \mid T)$ for some matrix T. The equation above implies that $c = C' + e$ is a codeword in our Goppa code $\Gamma(g, L)$. Furthermore, we know there can only be one codeword in $\Gamma(g, L)$ with distance t to $C'$ since, due to Theorem 1, Goppa codewords have a minimum distance of $2t + 1$ (see Figure 5). Since e has weight t, the codeword $C' + e$ is the unique codeword with distance t to $C'$.
Having the secret Goppa parameters, the receiver is able to use a syndrome-decoding algorithm, e.g., Patterson's algorithm [30], to find the closest codeword to C′, obtaining c = C′ + e. Then, simple addition yields e = C′ + c. These steps are summarized in Algorithm 27.
Note that general syndrome decoding is difficult, as seen in Section 3.1. Therefore, a third party without knowledge of the secret Goppa parameters is not able to perform this decryption step.
Algorithm 27 Classic McEliece PKE Decryption: dec.
Input: ciphertext C ∈ F_2^{n−k}, Goppa code Γ(g, L)
  • Extend C to C′ = (C, 0, …, 0) ∈ F_2^n by appending k zeros
  • Find the unique codeword c ∈ Γ(g, L) that is at distance t from C′. If there is no such codeword, return ⊥
  • Set e = C′ + c
  • If weight(e) ≠ t or C ≠ He, return ⊥
Output: message e
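The decryption idea can be replayed on a toy code as well. In the sketch below, a brute-force search over weight-t patterns stands in for an efficient syndrome decoder such as Patterson's algorithm, and a rejection loop on the random matrix stands in for the Goppa minimum-distance guarantee; both are only feasible or necessary because the parameters are tiny.

```python
# Toy version of Algorithm 27: recover a weight-t error e from its syndrome C.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, k, t = 20, 6, 2                                   # toy parameters

while True:  # resample until every pattern of weight <= t has a distinct syndrome
    T = rng.integers(0, 2, size=(n - k, k))
    H = np.concatenate([np.eye(n - k, dtype=int), T], axis=1)    # H = (I_{n-k} | T)
    pats = [p for w in range(1, t + 1) for p in itertools.combinations(range(n), w)]
    syns = {tuple(H[:, list(p)].sum(axis=1) % 2) for p in pats}
    if len(syns) == len(pats):
        break

# Sender: the message is a weight-t vector e, the ciphertext its syndrome C = He.
e = np.zeros(n, dtype=int)
e[rng.choice(n, size=t, replace=False)] = 1
C = H @ e % 2

# Receiver: extend C with k zeros to C' and search for the weight-t pattern whose
# syndrome equals C; then C' + e is the unique codeword at distance t from C'.
C_ext = np.concatenate([C, np.zeros(k, dtype=int)])
recovered = None
for positions in itertools.combinations(range(n), t):
    cand = np.zeros(n, dtype=int)
    cand[list(positions)] = 1
    if np.array_equal(H @ cand % 2, C):
        recovered = cand
        break

assert recovered is not None and recovered.sum() == t
codeword = (C_ext + recovered) % 2
assert not (H @ codeword % 2).any()       # C' + e is indeed a codeword
assert np.array_equal(recovered, e)       # the original message is recovered
print("recovered error vector matches the encrypted message")
```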
Classic McEliece KEM key generation (Algorithm 28) samples an additional value σ ∈ F_2^n. Apart from that, the key generation is identical to the key generation of the PKE.
Algorithm 28 Classic McEliece KEM Key Generation.
Input: none
  • Generate random σ ∈ F_2^n
  • Generate (H, Γ(g, L)) = PKE.keyGen()
Output: public key H, secret key sk = (Γ(g, L), σ)
The family of cryptographic hash functions H_i for i ∈ {0, 1, 2} is used for both encapsulation and decapsulation. A random vector e ∈ F_2^n of weight t is sampled and encrypted with the given parity-check matrix H. Additionally, e is hashed to e_H. The shared secret K is computed as K = H_1(e, C, e_H), i.e., a random-looking value depending on e and C (Algorithm 29).
Algorithm 29 Classic McEliece KEM Encapsulation.
Input: public key H
  • Generate random vector e ∈ F_2^n of weight t
  • Compute C = PKE.enc(e, H)
  • Compute e_H = H_2(e)
  • Compute K = H_1(e, C, e_H)
Output: encapsulation (C, e_H), shared secret K
The decapsulation (Algorithm 30) starts with the decryption of the given C, thereby calculating a message candidate e. Assuming a valid input, the original e is obtained, and K = H_1(e, C, e_H) yields the same shared secret as in the encapsulation.
However, prior to that calculation, the decapsulation process has two means of verifying its input. First, the obtained message candidate e is verified by checking whether it is indeed a weight-t vector (this happens during PKE.dec); if decryption fails, the hash value e′_H is computed from the random value σ instead of e, which ensures that the following comparison will not hold. Second, the provided e_H is compared with the recomputed version e′_H, thereby checking that the encapsulation was indeed performed based on e as well as the provided public key H and according to the rules of the algorithm.
Algorithm 30 Classic McEliece KEM Decapsulation.
Input: secret key sk = (Γ(g, L), σ), encapsulation (C, e_H)
  • Calculate e = PKE.dec(C, Γ(g, L))
  • If e = ⊥, calculate e′_H = H_2(σ)
  • If e ≠ ⊥, calculate e′_H = H_2(e)
  • If e′_H = e_H, set K = H_1(e, C, e_H)
  • If e′_H ≠ e_H, set K = H_0(σ, C, e_H)
Output: shared secret K
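The hash calls and the implicit-rejection branch of Algorithms 28–30 can be illustrated with a stripped-down KEM built around the t = 1 case of the toy syndrome PKE, where decryption is a simple column lookup. SHAKE256 with a one-byte prefix stands in for the hash family H_0, H_1, H_2; this encoding and all parameters are simplifications of ours, not the ones prescribed by the Classic McEliece specification.

```python
# Stripped-down KEM illustrating Algorithms 28-30.  The "PKE" is a t = 1 toy
# syndrome scheme (decryption = column lookup); SHAKE256 with a one-byte
# prefix stands in for the hash family H_0, H_1, H_2.
import hashlib
import numpy as np

rng = np.random.default_rng(3)
n, k = 12, 4                                     # toy sizes, error weight t = 1

def Hash(i, *parts):                             # H_i(...) via prefixed SHAKE256
    shake = hashlib.shake_256(bytes([i]) + b"".join(bytes(p) for p in parts))
    return shake.digest(32)

def pke_keygen():
    while True:                                  # distinct nonzero columns => unique decoding of 1 error
        H = rng.integers(0, 2, size=(n - k, n))
        cols = {tuple(H[:, j]) for j in range(n)}
        if len(cols) == n and (0,) * (n - k) not in cols:
            return H

def pke_enc(e, H):
    return H @ e % 2                             # syndrome of the weight-1 error e

def pke_dec(C, H):
    for j in range(n):                           # find the column equal to the syndrome
        if np.array_equal(H[:, j], C):
            e = np.zeros(n, dtype=int)
            e[j] = 1
            return e
    return None                                  # decryption failure (bottom)

def kem_keygen():
    sigma = rng.integers(0, 2, size=n)           # random value for implicit rejection
    H = pke_keygen()
    return H, (H, sigma)

def kem_encaps(H):
    e = np.zeros(n, dtype=int)
    e[rng.integers(n)] = 1                       # random weight-1 message
    C = pke_enc(e, H)
    e_H = Hash(2, e)                             # e_H = H_2(e)
    K = Hash(1, e, C, e_H)                       # K  = H_1(e, C, e_H)
    return (C, e_H), K

def kem_decaps(sk, encaps):
    H, sigma = sk
    C, e_H = encaps
    e = pke_dec(C, H)
    e_H_prime = Hash(2, sigma if e is None else e)
    if e_H_prime == e_H:
        return Hash(1, e, C, e_H)
    return Hash(0, sigma, C, e_H)                # implicit rejection

pk, sk = kem_keygen()
ct, K_sender = kem_encaps(pk)
assert kem_decaps(sk, ct) == K_sender            # honest run: shared secrets agree
bad_ct = ((ct[0] + 1) % 2, ct[1])                # tampered ciphertext
assert kem_decaps(sk, bad_ct) != K_sender        # rejected to a different, random-looking key
print("KEM toy example: encapsulation and decapsulation agree")
```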
Encryption and decryption operations are competitively fast compared to lattice-based cryptography; however, the key sizes in Classic McEliece are quite large [31]. For example, for the largest parameter set (see Table 6), storing the compressed public key H requires k · (n − k) = 6528 · (8192 − 6528) = 10,862,592 bits ≈ 1.3 MB.
The Classic McEliece instances with their corresponding parameter choices are shown in Table 6.
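The public-key sizes follow directly from the parameters in Table 6; the short calculation below reproduces them using the formula k · (n − k) bits from above.

```python
# Public-key sizes for the Classic McEliece parameter sets of Table 6: the
# compressed public key stores the non-identity part T of H = (I_{n-k} | T),
# i.e., k * (n - k) bits.
params = {  # name: (n, k), values taken from Table 6
    "McEliece348864":  (3488, 2720),
    "McEliece460896":  (4608, 3360),
    "McEliece6688128": (6688, 5024),
    "McEliece6960119": (6960, 5413),
    "McEliece8192128": (8192, 6528),
}
for name, (n, k) in params.items():
    bits = k * (n - k)
    print(f"{name:16s} {bits:>12,} bits  ~ {bits / 8 / 10**6:5.2f} MB")
```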

4. Multivariate Cryptography

Multivariate cryptography uses multivariate polynomials, i.e., polynomials in multiple variables, for the construction of key pairs. Its security is based on the assumption that solving a set of multivariate quadratic polynomial equations over a finite field is computationally hard.

4.1. Multivariate Polynomial Fundamentals

4.1.1. Multivariate Polynomial Functions

Definition 7
(multivariate quadratic polynomial function). Let F be a field. A function f : F^n → F is called a multivariate function. Let p be a polynomial in the variables x_1, …, x_n over F. If f can be represented as p(x_1, …, x_n), then f is called a multivariate polynomial function (for finite F, this is possible for any function). If p only contains terms of degree two or less, f is called a multivariate quadratic polynomial function.
To give an example, the function f_1(x_1, x_2, x_3) = x_1^2 x_2 + 2 x_1 x_2^2 + 3 x_3 + 4 is a multivariate polynomial function (of degree 3), while the function f_2(x_1, x_2, x_3) = x_1^2 + 2 x_1 x_2 + 3 x_3 + 4 is a multivariate quadratic polynomial function. Let p_1, …, p_k be multivariate quadratic polynomial functions. The vector
P = (p_1, …, p_k)
can be interpreted as a function P : F^n → F^k by component-wise application and is called a polynomial map.
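As a concrete illustration, the sketch below evaluates the two example functions f_1 and f_2 from above over the toy field F_7 (the choice of field and the additional quadratic component f_3, used to form a quadratic polynomial map, are our own).

```python
# Evaluating the example functions f_1 (degree 3) and f_2 (quadratic) over the
# toy field F_7, and a quadratic polynomial map P = (f_2, f_3).
q = 7

def f1(x1, x2, x3):
    return (x1**2 * x2 + 2 * x1 * x2**2 + 3 * x3 + 4) % q

def f2(x1, x2, x3):
    return (x1**2 + 2 * x1 * x2 + 3 * x3 + 4) % q

def f3(x1, x2, x3):                      # additional quadratic, chosen for illustration
    return (x2**2 + 5 * x1 * x3 + 1) % q

def P(x):                                # component-wise application: F^3 -> F^2
    return (f2(*x), f3(*x))

print(f1(1, 2, 3), f2(1, 2, 3))          # 23 mod 7 = 2 and 18 mod 7 = 4
print(P((1, 2, 3)))                      # image of (1, 2, 3) under the quadratic map P
```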

4.1.2. MQ Problem

Let F be a finite field. Let p_1, …, p_k : F^n → F be multivariate quadratic polynomial functions. Finding a solution s ∈ F^n to the system of equations
p_1(s) = 0, …, p_k(s) = 0
is called an MQ (multivariate quadratic) problem [32]. This problem is known to be computationally hard (NP-hard in general).
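For very small instances, the MQ problem can of course be solved by exhaustive search, as the sketch below does for a random toy system over F_2; the search space grows as q^n, which is exactly what makes larger instances intractable. The random system is our own example.

```python
# Brute-force solver for a tiny random MQ instance over F_2.  The search space
# has size q^n, so this approach collapses immediately for realistic n.
import itertools
import random

random.seed(0)
q, n, k = 2, 10, 10                       # toy instance: 10 quadratic equations in 10 variables

# Represent each p_r by random coefficients: quadratic (i <= j), linear, constant.
def random_poly():
    quad = {(i, j): random.randrange(q) for i in range(n) for j in range(i, n)}
    lin = [random.randrange(q) for _ in range(n)]
    return quad, lin, random.randrange(q)

def evaluate(poly, x):
    quad, lin, const = poly
    val = const + sum(c * x[i] * x[j] for (i, j), c in quad.items())
    val += sum(c * xi for c, xi in zip(lin, x))
    return val % q

system = [random_poly() for _ in range(k)]

solutions = [x for x in itertools.product(range(q), repeat=n)
             if all(evaluate(p, x) == 0 for p in system)]
print(f"tried {q**n} assignments, found {len(solutions)} solution(s)")
```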

4.1.3. Multivariate Signature Schemes

Multivariate public-key cryptosystems (MPKC) are constructions where polynomial maps are used to represent public and private keys. However, MPKC constructions are mainly used as digital signature schemes and are not suited for encryption purposes [33].
Let F be a finite field. The main idea of generating a signature s ∈ F^n for a given message m ∈ F^k is to calculate one of the possibly many pre-images of m under a polynomial map P. This is equivalent to finding a solution s to the system of equations
p_1(s) = m_1, …, p_k(s) = m_k,    or equivalently    p_1(s) − m_1 = 0, …, p_k(s) − m_k = 0,
where P = (p_1, …, p_k) and m = (m_1, …, m_k). P is called the public map and represents the public key. We can design an MPKC scheme in a way that allows finding pre-images under the public map without directly solving the MQ problem. Usually, this mechanism involves some polynomial map F : F^n → F^k, which we call the central map. We hide F by composing it with two affine maps S : F^n → F^n and T : F^k → F^k. The resulting function
P = T ∘ F ∘ S : F^n → F^k
is the public key corresponding to the secret key (T, F, S). The central map F must allow efficient computation of pre-images. The affine maps S and T have to be invertible; therefore, their linear parts need to be of full rank.
As mentioned above, a signature s for a given message m is a pre-image of m under P. With knowledge of the decomposition P = T ∘ F ∘ S, s can be computed by calculating a pre-image of T^{−1}(m) under F and subsequently applying S^{−1}. To verify a signature s for a message m, we simply check whether m = P(s) holds. Note that there may exist multiple valid signatures for a given message m.
The key component of an MPKC is the design of the central map F. Without knowledge of the secret key (T, F, S), an attacker cannot distinguish the public map from a randomly generated polynomial map. The complexity of a direct attack therefore corresponds to the difficulty of the MQ problem. However, observe that the security assumption of this system is stronger than the plain MQ problem due to possible efficient attacks against the central map F. There also exists an alternative approach for breaking an MPKC without directly attacking the public map: the idea is to find two alternative affine maps that satisfy the same criteria of transforming the central map F into P. Thus, the cryptosystem can be broken by finding alternative private keys which correspond to the same public map. In this context, we need to define a new problem, the IP problem (Isomorphism of Polynomials) [32].

4.1.4. IP Problem

Let P, F be two polynomial maps from F^n to F^k with
P = (p_1, …, p_k), F = (f_1, …, f_k).
Assuming two invertible affine transformations S : F^n → F^n and T : F^k → F^k with
P = T ∘ F ∘ S
exist, finding S and T is called an IP problem, which is also computationally hard [34]. Solving the IP problem could possibly break an MPKC by finding alternative secret affine maps for a given public map and an arbitrarily chosen central map.

4.2. Rainbow

The Rainbow signature scheme [35] is closely related to arguably the most common multivariate-based signature scheme, namely, Unbalanced Oil and Vinegar (UOV). In order to understand Rainbow, we first need to properly introduce UOV.
The basic idea of UOV consists of dividing the set of variables of the central map F into two disjoint subsets, which we refer to as oil and vinegar variables. The most important property is that the quadratic polynomials of F are not allowed to contain cross-terms between two oil variables. UOV has the general advantage of offering fast computations with both the public and the private key, as well as a simple structure. A major disadvantage lies in its comparatively large key sizes.
The UOV scheme is an MPKC scheme, as described in Section 4.1, with the public map
P = T ∘ F ∘ S : F^n → F^k
and secret polynomial maps (T, F, S). It is characterized by the integer parameters o and v, specifying the number of oil and vinegar variables, respectively. These parameters define the structure of the central map F : F^n → F^k, which consists of k = o polynomial functions f_1, …, f_k in n = o + v variables. We restrict these to their homogeneous quadratic forms, i.e., we omit constant and linear terms. Then, each f_r has the form
f_r = Σ_{i=1}^{v} Σ_{j=i}^{v} α_{ij}^{(r)} x_i x_j + Σ_{i=1}^{v} Σ_{j=v+1}^{n} β_{ij}^{(r)} x_i x_j,
where x_1, …, x_v and x_{v+1}, …, x_n correspond to the vinegar variables and oil variables, respectively. A central map F is generated by randomly assigning values from F to the coefficients α_{ij}^{(r)}, β_{ij}^{(r)}.
The affine transformations S and T are also sampled by randomly assigning their coefficients, which is repeated if they turn out not to be invertible. The calculation of pre-images under F works as follows:
  • We randomly assign values a_1, …, a_v to the vinegar variables x_1, …, x_v, thus reducing each product of two vinegar variables to a constant and each product of an oil and a vinegar variable to a linear term:
    f_r = Σ_{i=1}^{v} Σ_{j=i}^{v} α_{ij}^{(r)} a_i a_j + Σ_{i=1}^{v} Σ_{j=v+1}^{n} β_{ij}^{(r)} a_i x_j
  • This results in a linear system of k equations in n − v = k variables, namely, x_{v+1}, …, x_n. By applying Gaussian elimination, we solve this system and thereby obtain the remaining oil values x_{v+1}, …, x_n. In case the system does not have a solution, we repeat the previous step by sampling some other random values for the vinegar variables.
As mentioned in Section 4.1, this MPKC construction is designed in a way that makes it hard to find pre-images under a given public map P; however, knowing the secret decomposition P = T ∘ F ∘ S enables an efficient computation of pre-images, i.e., signature generation.
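The complete UOV procedure fits into a small self-contained sketch. It works over a toy prime field, uses random invertible linear maps in place of affine ones, and evaluates the public map by composing the secret maps rather than expanding the public polynomials; the parameters (q, v, o) are illustrative choices, far too small to be secure.

```python
# Toy UOV signature scheme over F_q.  Linear (instead of affine) maps S, T and
# the tiny parameters are simplifications for illustration only.
import numpy as np

q = 31                                   # small prime field
v, o = 6, 3                              # vinegar / oil variables
n, k = v + o, o
rng = np.random.default_rng(0)

def gauss_solve(A, b):
    """Solve the square system A x = b over F_q; return None if A is singular."""
    M = np.hstack([np.array(A, dtype=int) % q,
                   (np.array(b, dtype=int) % q).reshape(-1, 1)])
    m = M.shape[0]
    for c in range(m):
        piv = next((r for r in range(c, m) if M[r, c] % q), None)
        if piv is None:
            return None
        M[[c, piv]] = M[[piv, c]]
        M[c] = M[c] * pow(int(M[c, c]), -1, q) % q
        for r in range(m):
            if r != c:
                M[r] = (M[r] - M[r, c] * M[c]) % q
    return M[:, -1]

def random_invertible(dim):
    while True:
        M = rng.integers(0, q, size=(dim, dim))
        if gauss_solve(M, np.zeros(dim, dtype=int)) is not None:   # solvable <=> invertible
            return M

# Central map F: k quadratic forms x^T Q_r x whose oil-oil block is zero.
Q = []
for _ in range(k):
    Qr = rng.integers(0, q, size=(n, n))
    Qr[v:, v:] = 0                        # no oil x oil cross terms
    Q.append(Qr)

S = random_invertible(n)                  # secret map on the variables
T = random_invertible(k)                  # secret map on the equations

def eval_F(y):
    return np.array([int(y @ Qr @ y) % q for Qr in Q])

def public_map(x):
    # The verifier would use the expanded polynomials of P = T o F o S;
    # evaluating the composition gives the same values.
    return T @ eval_F(S @ x % q) % q

def sign(msg):
    u = gauss_solve(T, msg)                               # u = T^{-1}(msg)
    while True:
        a = rng.integers(0, q, size=v)                    # random vinegar values
        # With the vinegar part fixed, every f_r is linear in the oil part y_o:
        #   f_r = a^T A_r a + (a^T B_r + C_r a) y_o,  with blocks A_r, B_r, C_r of Q_r.
        lin = np.array([(a @ Qr[:v, v:] + Qr[v:, :v] @ a) % q for Qr in Q])
        rhs = (u - np.array([int(a @ Qr[:v, :v] @ a) for Qr in Q])) % q
        y_oil = gauss_solve(lin, rhs)
        if y_oil is not None:                             # otherwise retry with new vinegar values
            y = np.concatenate([a, y_oil]) % q            # now F(y) = u
            return gauss_solve(S, y)                      # s = S^{-1}(y)

msg = rng.integers(0, q, size=k)                          # stands in for the hashed message
s = sign(msg)
assert np.array_equal(public_map(s), msg)                 # verification: P(s) = m
print("toy UOV signature verified")
```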
Rainbow generalizes the UOV concept. The Rainbow signature scheme consists of two layers of Oil–Vinegar polynomials, where the second layer includes all variables from the first layer, i.e., the set of vinegar variables from layer two contains all variables of layer one.
A concrete Rainbow instance is defined by an initial number v_1 of vinegar variables for layer one and the numbers of oil variables for layers one and two, i.e., o_1 and o_2, respectively. The total number of variables n and the number of equations k are given by n = v_1 + o_1 + o_2 and k = o_1 + o_2. The resulting central map F of Rainbow consists of the two layers:
Layer 1: x_1, …, x_{v_1} (vinegar variables), x_{v_1+1}, …, x_{v_1+o_1} (oil variables),
Layer 2: x_1, …, x_{v_1}, x_{v_1+1}, …, x_{v_1+o_1} (vinegar variables), x_{v_1+o_1+1}, …, x_{v_1+o_1+o_2} (oil variables).
This construction improves the ratio between the signature size and the message size from (v + o)/o in the UOV case to (v_1 + o_1 + o_2)/(o_1 + o_2) in the two-layer Rainbow case, since v_1 < v, i.e., signatures are relatively smaller (in practice, this amounts to a reduction of about 26% [36]). Due to the reduced number of coefficients needed in the first layer, this also results in a smaller private key. However, recent attacks have shown this construction to be vulnerable, resulting in a loss of these efficiency gains; see Section 4.2.1.
To generate a concrete key pair (Algorithm 31), the following maps are sampled:
  • The secret central map F consisting of k = o_1 + o_2 polynomials f_1, …, f_k of the form
    f_r = Σ_{i,j ∈ V_1, i ≤ j} α_{ij}^{(r)} x_i x_j + Σ_{i ∈ V_1, j ∈ O_1} β_{ij}^{(r)} x_i x_j        for r ∈ {1, …, o_1},
    f_r = Σ_{i,j ∈ V_2, i ≤ j} γ_{ij}^{(r)} x_i x_j + Σ_{i ∈ V_2, j ∈ O_2} δ_{ij}^{(r)} x_i x_j        for r ∈ {o_1 + 1, …, o_1 + o_2},
    with layer-one vinegar indices V_1 = {1, …, v_1}, layer-one oil indices O_1 = {v_1 + 1, …, v_1 + o_1}, layer-two vinegar indices V_2 = V_1 ∪ O_1 and layer-two oil indices O_2 = {v_1 + o_1 + 1, …, v_1 + o_1 + o_2}.
    The coefficients α_{ij}^{(r)}, β_{ij}^{(r)}, γ_{ij}^{(r)}, δ_{ij}^{(r)} are randomly chosen from F.
  • Two secret, randomly chosen invertible affine maps T : F^k → F^k and S : F^n → F^n. Their coefficients are randomly sampled from F; the sampling is repeated if the maps turn out not to be invertible.
  • The public key P = T ∘ F ∘ S : F^n → F^k.
Algorithm 31 Rainbow Key Generation.
Input: none
  • Sample S : F^n → F^n
  • Sample F : F^n → F^k
  • Sample T : F^k → F^k
  • Calculate P = T ∘ F ∘ S : F^n → F^k
Output: public key P, secret key sk = (T, F, S)
As described in Section 4.1, a valid signature s of a given message m ∈ {0, 1}* of arbitrary length is a pre-image of H(m) under P, i.e., the equation
P(s) = H(m)
holds for a suitable cryptographic hash function H : {0, 1}* → F^k. Since S and T are efficiently invertible, finding pre-images under those maps is trivial. Finding pre-images under the central map F is similar to finding pre-images in the UOV scheme:
  • We randomly assign values to the vinegar variables of layer one, i.e., x_1, …, x_{v_1}:
    Layer 1: x_1, …, x_{v_1} (vinegar variables), x_{v_1+1}, …, x_{v_1+o_1} (oil variables);
  • We solve the resulting linear system of o 1 equations to obtain concrete values for the o 1 oil variables of layer one;
  • The resulting assignment of values for x_1, …, x_{v_1+o_1} is substituted into the second layer:
    Layer 2: x_1, …, x_{v_1}, x_{v_1+1}, …, x_{v_1+o_1} (vinegar variables), x_{v_1+o_1+1}, …, x_{v_1+o_1+o_2} (oil variables);
  • We solve the resulting linear system of o 2 equations to obtain concrete values for the remaining oil variables of layer two;
  • In case one of the linear systems has no solution, we start from the beginning by choosing other random values for the vinegar variables of the first layer.
A valid signature of a message m under P can be computed by hashing and subsequently finding pre-images under the secret maps T , F and S (Algorithm 32).
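The layer-by-layer inversion of the central map can be sketched in the same style as the UOV example above (including the same Gaussian-elimination helper so the snippet stays self-contained); the parameters (q, v_1, o_1, o_2) are again our own toy choices, and the composition with S and T from Algorithm 32 is omitted.

```python
# Layer-by-layer inversion of a toy Rainbow central map over F_q, mirroring the
# steps listed above.  Parameters are illustrative only.
import numpy as np

q = 31
v1, o1, o2 = 4, 2, 2
n, k = v1 + o1 + o2, o1 + o2
rng = np.random.default_rng(1)

def gauss_solve(A, b):                    # same helper as in the UOV sketch
    M = np.hstack([np.array(A, dtype=int) % q, (np.array(b, dtype=int) % q).reshape(-1, 1)])
    m = M.shape[0]
    for c in range(m):
        piv = next((r for r in range(c, m) if M[r, c] % q), None)
        if piv is None:
            return None
        M[[c, piv]] = M[[piv, c]]
        M[c] = M[c] * pow(int(M[c, c]), -1, q) % q
        for r in range(m):
            if r != c:
                M[r] = (M[r] - M[r, c] * M[c]) % q
    return M[:, -1]

# Central map: quadratic forms x^T Q_r x.  Layer-one polynomials only use the
# variables of V_1 and O_1 and contain no O_1 x O_1 terms; layer-two polynomials
# may use all variables but contain no O_2 x O_2 terms.
Q = []
for r in range(k):
    Qr = rng.integers(0, q, size=(n, n))
    if r < o1:
        Qr[v1:, v1:] = 0                  # no oil x oil terms in layer one ...
        Qr[:, v1 + o1:] = 0               # ... and no layer-two oil variables at all
        Qr[v1 + o1:, :] = 0
    else:
        Qr[v1 + o1:, v1 + o1:] = 0        # no oil x oil terms in layer two
    Q.append(Qr)

def eval_F(y):
    return np.array([int(y @ Qr @ y) % q for Qr in Q])

def invert_central_map(u):
    """Find y with F(y) = u by fixing layer-one vinegar values and solving twice."""
    while True:
        y = np.zeros(n, dtype=int)
        y[:v1] = rng.integers(0, q, size=v1)                       # random vinegar values
        # Layer one: o1 linear equations in the o1 oil variables of layer one.
        lin1 = np.array([(y[:v1] @ Qr[:v1, v1:v1 + o1] + Qr[v1:v1 + o1, :v1] @ y[:v1]) % q
                         for Qr in Q[:o1]])
        rhs1 = (u[:o1] - np.array([int(y[:v1] @ Qr[:v1, :v1] @ y[:v1]) for Qr in Q[:o1]])) % q
        sol1 = gauss_solve(lin1, rhs1)
        if sol1 is None:
            continue                                               # restart with new vinegar values
        y[v1:v1 + o1] = sol1
        # Layer two: o2 linear equations in the o2 oil variables of layer two.
        w = v1 + o1
        lin2 = np.array([(y[:w] @ Qr[:w, w:] + Qr[w:, :w] @ y[:w]) % q for Qr in Q[o1:]])
        rhs2 = (u[o1:] - np.array([int(y[:w] @ Qr[:w, :w] @ y[:w]) for Qr in Q[o1:]])) % q
        sol2 = gauss_solve(lin2, rhs2)
        if sol2 is None:
            continue
        y[w:] = sol2
        return y

u = rng.integers(0, q, size=k)            # stands in for T^{-1}(H(m)) in a full scheme
y = invert_central_map(u)
assert np.array_equal(eval_F(y), u)
print("central map inverted layer by layer")
```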
Algorithm 32 Rainbow Signature generation.
Input: secret key sk = (T, F, S), message m ∈ {0, 1}*
  • Calculate m_H = H(m)
  • Calculate u = T^{−1}(m_H)
  • Find a pre-image u_1 of u under F
  • Calculate s = S^{−1}(u_1)
Output: signature s
Let m ∈ {0, 1}* be a message and s ∈ F^n a signature. The signature s is accepted if H(m) = P(s) holds; otherwise, it is rejected (Algorithm 33).
Algorithm 33 Rainbow Verification.
Input: public key P, message m ∈ {0, 1}*, signature s ∈ F^n
  • Calculate m′_H = P(s)
  • Calculate m_H = H(m)
Output: valid if m′_H = m_H, else invalid
The Rainbow instances with their corresponding parameter choices are shown in Table 7.

4.2.1. Rainbow Security Considerations

A recent attack by Ward Beullens [37], published in February 2022, considerably improves on previously known attacks against Rainbow. The main enhancement arises from the combination of a previously developed rectangular MinRank attack [38] with an improved guessing technique. In order to resist this attack, the Rainbow parameters would have to be increased to the point where they exceed those of the standard (and better-understood) UOV approach. This essentially renders the preferred usage of Rainbow over UOV questionable, since the remaining performance gain is small compared to the additional complexity of the scheme.

Author Contributions

Writing—original draft, M.R., M.B. and J.S.; Writing—review & editing, A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was funded by Volkswagen AG as part of a joint project.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, L.; Jordan, S.; Liu, Y.K.; Moody, D.; Peralta, R.; Perlner, R.; Smith-Tone, D. Report on Post-Quantum Cryptography; Technical Report NIST Internal or Interagency Report (NISTIR) 8105; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2016.
  2. Fraunhofer AISEC: Crypto Engineering. Post-Quantum Database (pqdb). Available online: https://cryptoeng.github.io/pqdb/ (accessed on 1 July 2022).
  3. Regev, O. On lattices, learning with errors, random linear codes, and cryptography. J. ACM 2009, 56, 34:1–34:40.
  4. Bos, J.; Costello, C.; Ducas, L.; Mironov, I.; Naehrig, M.; Nikolaenko, V.; Raghunathan, A.; Stebila, D. Frodo: Take off the Ring! Practical, Quantum-Secure Key Exchange from LWE. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 1006–1018.
  5. Lyubashevsky, V.; Peikert, C.; Regev, O. On Ideal Lattices and Learning with Errors over Rings. In Advances in Cryptology—EUROCRYPT 2010, Proceedings of the 29th Annual International Conference on the Theory and Applications of Cryptographic Techniques, French Riviera, 30 May–3 June 2010; Gilbert, H., Ed.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 1–23.
  6. Langlois, A.; Stehle, D. Worst-Case to Average-Case Reductions for Module Lattices. Cryptology ePrint Archive, Report 2012/090. 2012. Available online: https://ia.cr/2012/090 (accessed on 1 July 2022).
  7. Alkim, E.; Ducas, L.; Pöppelmann, T.; Schwabe, P. Post-Quantum Key Exchange—A New Hope. Cryptology ePrint Archive, Report 2015/1092. 2015. Available online: https://ia.cr/2015/1092 (accessed on 1 July 2022).
  8. Peikert, C.; Pepin, Z. Algebraically Structured LWE, Revisited. Cryptology ePrint Archive, Report 2019/878. 2019. Available online: https://ia.cr/2019/878 (accessed on 1 July 2022).
  9. Banerjee, A.; Peikert, C.; Rosen, A. Pseudorandom Functions and Lattices. Cryptology ePrint Archive, Report 2011/401. 2011. Available online: https://ia.cr/2011/401 (accessed on 1 July 2022).
  10. Avanzi, R.; Bos, J.; Ducas, L.; Kiltz, E.; Lepoint, T.; Lyubashevsky, V.; Schanck, J.M.; Schwabe, P.; Seiler, G.; Stehlé, D. CRYSTALS-KYBER: Algorithm Specifications and Supporting Documentation. Version 3.02. Available online: https://pq-crystals.org/kyber/data/kyber-specification-round3-20210804.pdf (accessed on 1 July 2022).
  11. Fujisaki, E.; Okamoto, T. Secure Integration of Asymmetric and Symmetric Encryption Schemes. J. Cryptol. 2013, 26, 80–101.
  12. Hofheinz, D.; Hövelmanns, K.; Kiltz, E. A Modular Analysis of the Fujisaki-Okamoto Transformation. In Theory of Cryptography; Kalai, Y., Reyzin, L., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2017; Volume 10677, pp. 341–371.
  13. Basso, A.; Bermudo Mera, J.M.; D'Anvers, J.P.; Karmakar, A.; Roy, S.S.; Van Beirendonck, M.; Vercauteren, F. SABER: Mod-LWR Based KEM (Round 3 Submission). Available online: https://www.esat.kuleuven.be/cosic/pqcrypto/saber/files/saberspecround3.pdf (accessed on 1 July 2022).
  14. Bai, S.; Ducas, L.; Kiltz, E.; Lepoint, T.; Lyubashevsky, V.; Schwabe, P.; Seiler, G.; Stehlé, D. CRYSTALS-Dilithium: Algorithm Specifications And Supporting Documentation. Version 3.1. Available online: https://pq-crystals.org/dilithium/data/dilithium-specification-round3-20210208.pdf (accessed on 1 July 2022).
  15. Lyubashevsky, V. Lattice signatures without trapdoors. In Proceedings of the EUROCRYPT 2012—31st Annual International Conference on the Theory and Applications of Cryptographic Techniques, Lecture Notes in Computer Science, Cambridge, UK, 15–19 April 2012; Pointcheval, D., Schaumont, P., Eds.; Springer: Cambridge, UK, 2012; Volume 7237, pp. 738–755.
  16. Bai, S.; Galbraith, S.D. An Improved Compression Technique for Signatures Based on Learning with Errors. In Topics in Cryptology—CT-RSA 2014, Proceedings of the Cryptographer's Track at the RSA Conference 2014, San Francisco, CA, USA, 25–28 February 2014; Benaloh, J., Ed.; Springer International Publishing: Cham, Switzerland, 2014; pp. 28–47.
  17. Hoffstein, J.; Pipher, J.; Silverman, J. An Introduction to Mathematical Cryptography, 1st ed.; Springer Publishing Company, Incorporated: New York, NY, USA, 2008.
  18. Chen, C.; Danba, O.; Hoffstein, J.; Hülsing, A.; Rijneveld, J.; Schanck, J.M.; Schwabe, P.; Whyte, W.; Zhang, Z. NTRU: Algorithm Specifications and Supporting Documentation; Version September 2020. Available online: https://csrc.nist.gov/CSRC/media/Projects/post-quantum-cryptography/documents/round-3/submissions/NTRU-Round3.zip (accessed on 1 July 2022).
  19. Fouque, P.A.; Hoffstein, J.; Kirchner, P.; Lyubashevsky, V.; Pornin, T.; Prest, T.; Ricosset, T.; Seiler, G.; Whyte, W.; Zhang, Z. Falcon: Fast-Fourier Lattice-Based Compact Signatures over NTRU. Version 1.2. Available online: https://falcon-sign.info/falcon.pdf (accessed on 1 July 2022).
  20. Gentry, C.; Peikert, C.; Vaikuntanathan, V. Trapdoors for Hard Lattices and New Cryptographic Constructions. Cryptology ePrint Archive, Report 2007/432. 2007. Available online: https://ia.cr/2007/432 (accessed on 1 July 2022).
  21. Ajtai, M. Generating Hard Instances of the Short Basis Problem; Springer: Berlin/Heidelberg, Germany, 1999.
  22. Babai, L. On Lovász' lattice reduction and the nearest lattice point problem. Combinatorica 1986, 6, 1–13.
  23. Ducas, L.; Prest, T. Fast Fourier Orthogonalization. Cryptology ePrint Archive, Report 2015/1014. 2015. Available online: https://ia.cr/2015/1014 (accessed on 1 July 2022).
  24. Lint, J.H.V. Introduction to Coding Theory, 3rd ed.; Number 86 in Graduate Texts in Mathematics; Springer: Berlin/Heidelberg, Germany, 1999.
  25. McEliece, R.J. A Public-Key Cryptosystem Based on Algebraic Coding Theory; National Aeronautics and Space Administration: Washington, DC, USA, 1978.
  26. Marcus, M. White Paper on McEliece with Binary Goppa Codes. 2019. Available online: https://www.hyperelliptic.org/tanja/students/m_marcus/whitepaper.pdf (accessed on 1 July 2022).
  27. Engelbert, D.; Overbeck, R.; Schmidt, A. A Summary of McEliece-Type Cryptosystems and their Security. 2006. Available online: https://ia.cr/2006/162 (accessed on 1 July 2022).
  28. Albrecht, M.R.; Bernstein, D.J.; Chou, T.; Cid, C.; Gilcher, J.; Lange, T.; Maram, V.; von Maurich, I.; Misoczki, R.; Niederhagen, R.; et al. Classic McEliece: Conservative Code-Based Cryptography. Version October 2020. Available online: https://classic.mceliece.org/nist/mceliece-20201010.pdf (accessed on 1 July 2022).
  29. Niederhagen, R.; Waidner, M. Practical Post-Quantum Cryptography; Fraunhofer SIT: Darmstadt, Germany, 2017.
  30. Sardinas, A.; Patterson, C. A necessary and sufficient condition for the unique decomposition of coded messages. IRE Internat. Conv. Rec. 1953, 104–108.
  31. Niederreiter, H.; Xing, C. Algebraic Geometry in Coding Theory and Cryptography; Princeton University Press: Princeton, NJ, USA, 2009.
  32. Ding, J.; Yang, B.Y. Multivariate Public Key Cryptography. In Post-Quantum Cryptography; Springer: Berlin/Heidelberg, Germany, 2009; pp. 193–241.
  33. Tao, C.; Diene, A.; Tang, S.; Ding, J. Simple Matrix Scheme for Encryption. In Post-Quantum Cryptography; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; pp. 231–242.
  34. Patarin, J. Hidden Fields Equations (HFE) and Isomorphisms of Polynomials (IP): Two New Families of Asymmetric Algorithms. In Advances in Cryptology—EUROCRYPT '96, Proceedings of the International Conference on the Theory and Application of Cryptographic Techniques, Saragossa, Spain, 12–16 May 1996; Maurer, U., Ed.; Springer: Berlin/Heidelberg, Germany, 1996; pp. 33–48.
  35. Ding, J.; Chen, M.S.; Petzoldt, A.; Schmidt, D.; Yang, B.Y. Rainbow—Algorithm Specification and Documentation, The 3rd Round Proposal. Available online: https://csrc.nist.gov/CSRC/media/Projects/post-quantum-cryptography/documents/round-3/submissions/Rainbow-Round3.zip (accessed on 1 July 2022).
  36. Thomae, E. About the Security of Multivariate Quadratic Public Key Schemes. Ph.D. Thesis, Universitätsbibliothek, Ruhr-Universität Bochum, Bochum, Germany, 2013; pp. 84–85.
  37. Beullens, W. Breaking Rainbow Takes a Weekend on a Laptop. Cryptology ePrint Archive. 2022. Available online: https://ia.cr/2022/214 (accessed on 1 July 2022).
  38. Beullens, W. Improved Cryptanalysis of UOV and Rainbow. Cryptology ePrint Archive, Report 2020/1343. 2020. Available online: https://ia.cr/2020/1343 (accessed on 1 July 2022).
Figure 1. A 2-dimensional lattice.
Figure 2. Two-dimensional lattice with a reduced (good) basis {b_1, b_2} and a bad basis {b′_1, b′_2}.
Figure 3. Two-dimensional lattice with basis {b_1, b_2} and its shortest vector.
Figure 4. Two-dimensional lattice with the closest lattice vector to a point q.
Figure 5. Black dots represent Goppa codewords. The interior of each black circle represents the set of words which are mapped to its central black dot during error-correction. This figure shows the intuition behind Classic McEliece encryption: an error vector e is encrypted to a point C on a black circle around some Goppa codeword. The receiver is able to obtain the corresponding central black dot and thereby retrieve the error vector e.
Table 1. Comparison of algebraic structures used in LWE variants.
          Plain LWE        Ring-LWE          Module-LWE
A         Z_q^{n×m}        Z_q[x]/f          (Z_q[x]/f)^{n×m}
·         matrix mult.     polynomial mult.  matrix mult.
s         Z_q^m            Z_q[x]/f          (Z_q[x]/f)^m
b, e      Z_q^n            Z_q[x]/f          (Z_q[x]/f)^n
Table 2. Kyber parameter sets with corresponding decryption failure probability δ.
            n     k   q      δ
Kyber512    256   2   3329   2^−139
Kyber768    256   3   3329   2^−164
Kyber1024   256   4   3329   2^−174
Table 3. Saber parameter sets with corresponding decryption failure probability δ.
            n     k   q      p      δ
LightSaber  256   2   2^13   2^10   2^−120
Saber       256   3   2^13   2^10   2^−136
FireSaber   256   4   2^13   2^10   2^−165
Table 4. Dilithium parameter sets for NIST security levels 2, 3 and 5 with corresponding expected number of needed repetitions #reps of signature generation.
              n     (k, l)   q         #reps
Dilithium 2   256   (4, 4)   8380417   4.25
Dilithium 3   256   (6, 5)   8380417   5.1
Dilithium 5   256   (8, 7)   8380417   3.85
Table 5. Falcon parameter sets.
              n      q
Falcon-512    512    12,289
Falcon-1024   1024   12,289
Table 6. Classic McEliece parameter sets using an (n, k, d) Goppa code with error-correction capability t.
                  n      k      d     t
McEliece348864    3488   2720   129   64
McEliece460896    4608   3360   193   96
McEliece6688128   6688   5024   257   128
McEliece6960119   6960   5413   239   119
McEliece8192128   8192   6528   257   128
Table 7. Rainbow round 3 parameter sets.
            F         v_1   o_1   o_2
Level I     GF(16)    36    32    32
Level III   GF(256)   68    32    48
Level V     GF(256)   96    36    64
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
