Abstract
In recent years, several new notions of security have begun receiving consideration for public-key cryptosystems, beyond the standard of security against adaptive chosen ciphertext attack (CCA2). Among these are security against randomness reset attacks, in which the randomness used in encryption is forcibly set to some previous value, and against constant secret-key leakage attacks, wherein a constant factor of a secret key’s bits is leaked. In terms of formal security definitions, cast as attack games between a challenger and an adversary, a joint combination of these attacks means that the adversary has access to additional encryption queries under randomness of his own choosing, along with secret-key leakage queries. This implies that both the encryption and decryption processes of a cryptosystem are tampered with under this security notion. In this paper, we address this problem of a joint combination of randomness and secret-key leakage attacks through two cryptosystems that incorporate hash proof system and randomness extractor primitives. The first cryptosystem relies on the random oracle model and is secure against a class of adversaries, called non-reversing adversaries. We remove the random oracle assumption and the non-reversing adversary requirement in our second cryptosystem, which is proven in the standard model and relies on a proposed primitive called lossy functions. These functions allow up to M lossy branches in the collection to substantially lose information, allowing the cryptosystem to use this loss of information across several encryption and challenge queries. For each cryptosystem, we present detailed security proofs using the game-hopping procedure. In addition, we present a concrete instantiation of lossy functions at the end of the paper, which relies on the DDH assumption.
1. Introduction
Adaptive chosen ciphertext attack (CCA2) secure cryptosystems. Since the invention of the Diffie–Hellman key exchange and the RSA primitive, public-key cryptography has become one of the most well-studied areas in cryptography research [1]. Currently, the security notion required of any public-key cryptosystem is security against adaptive chosen ciphertext attacks [2,3], or CCA2 security. Security against adaptive chosen ciphertext attacks guarantees that ciphertexts are not malleable, which implies that ciphertexts cannot be modified in transit by some efficient adversarial algorithm. Initially, encryption schemes provided CCA2 security under the random oracle model [4]. Random oracle-based models are heuristic in approach and are randomness-recovering, i.e., they allow a scheme to recover its randomness during an encryption. However, they rely on very strong assumptions, for example, that some functions, i.e., hash functions, are indistinguishable from truly random functions. A more practical CCA2-secure public-key encryption scheme is presented in [3], which relies on the decisional Diffie–Hellman (DDH) assumption. The scheme of [3] uses hash proof systems, which involve projective hash functions that perform function delegation through auxiliary information. Without this auxiliary information, however, the function’s behaviour is close to uniform and is hard to distinguish from random. Several practical public-key encryption schemes essentially use this hash proof system [5] or some variants of it [6,7].
Following the hash proof system of [3], several other CCA2 secure cryptosystems have been proposed. Among these is the CCA2 secure cryptosystem of [8], which relies on a lossy function primitive. A lossy function primitive is a collection of functions such that some functions in the collection, i.e., the lossy functions, substantially lose information about the input. It is computationally difficult, however, to determine whether a function is lossy or not. By exploiting the loss of information in the function, along with the computational difficulty of determining the type of a function, several CCA2 secure cryptosystems can be developed under the standard model, thereby presenting an alternative to the practical CCA2 cryptosystem of [3].
Yet, even as more practical and efficient CCA2 cryptosystems are being developed, a number of recent cryptography papers have called into question the security guarantees provided by CCA2 security [9,10,11,12,13,14], owing to newer types of attacks. Among the common categories of these newer attacks are (i) randomness attacks and (ii) secret-key leakage attacks. Both of these attack categories have been shown to be strong enough to trivially break CCA2 security. These attacks are briefly described as follows.
Randomness Attacks. The first category, i.e., randomness attacks, considers the case where the randomness used in cryptosystems fails to be truly random. These attacks tamper with the encryption process of a cryptosystem. Randomness failures can be due to a faulty pseudorandom generator design or implementation [15], or due to simple attacks, such as virtual machine resets [16]. In a virtual machine reset (called a randomness reset attack), a computer is forced to restart to some previous state stored in memory, and random number variables are reset to some previous value. This applies, especially, to virtual machine systems that use virtual machine monitors (or hypervisors) to manage several operating systems. Given the increased use of cloud computing, such as with Amazon’s servers, many systems have become reliant on virtual machines. A feature of a virtual machine monitor is the taking of snapshots [16] of the system’s state, where the snapshot includes all items of the system in memory, at a certain point in time, for backup and fault tolerance. Included in this snapshot are the random numbers used by the operating system for its encryptions. A hacker, however, may force a virtual machine to be reset to a prior snapshot and re-use the randomness therefrom. For instance, a hacker may perform a denial-of-service attack against a virtual machine, whereupon the virtual machine is forced to be reset to some previous state. In particular, [17] points out that snapshots of virtual machines may impair security due to the reuse of security-critical states. Ref. [18] exhibits virtual machine reset vulnerabilities in TLS clients and servers, wherein the attacker takes advantage of snapshots to expose a server’s DSA signing key. Given these actual examples, ref. [16] considered the effect of virtual machine resets on existing CCA2 secure cryptosystems.
The results were not positive, as [16] showed a scenario wherein an adversary can trivially break CCA2 security by exploiting such vulnerabilities.
We demonstrate such vulnerabilities on a simpler cryptosystem in Section 3, wherein we present a concrete example of a randomness reset attack in the context of the ElGamal cryptosystem, along with the effect of a randomness reset attack on the formal definition of CCA2 security, as done in [16]. Briefly, CCA2 security is formally defined in terms of an attack game between a challenger and an adversary [19], where the adversary may perform challenge queries and decryption queries. The adversary’s task is to correctly guess the underlying message of a challenge ciphertext, given that the challenge ciphertext cannot serve as input to a decryption query. An attack game with a randomness reset attack will incorporate additional encryption queries, in which random numbers can be set, by the adversary, to some previous value. These may present difficulties to some existing cryptosystems, since, unlike challenge ciphertexts, ciphertexts from encryption queries may validly serve as input for decryption queries. In fact, randomness reset attacks are so strong that [11,16,20] have been forced to rule out situations wherein adversaries can perform arbitrary queries. Instead, adversaries are assumed to satisfy an equality-pattern-respecting constraint. The cryptosystem of [16] provides a generic transformation to render a CCA2 public-key cryptosystem secure against randomness reset attacks. The transformation involves feeding a random number and an associated plaintext message to a pseudorandom function to derive the actual encryption randomness. If the joint entropy of the random number and the plaintext message is high enough, the security properties of the pseudorandom function can fix any faulty randomness in the cryptosystem. In Section 4, we present a list of schemes that consider various other types of randomness reset attacks.
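The randomness-fixing transformation of [16] described above can be sketched in a few lines; this is a minimal illustration only, with HMAC-SHA256 standing in for the pseudorandom function (an assumption of this sketch, not a detail from [16]):

```python
import hmac
import hashlib

def derive_randomness(prf_key: bytes, r: bytes, message: bytes) -> bytes:
    # Feed the (possibly reset) random number r together with the message
    # into a PRF; distinct (r, message) pairs yield independent-looking
    # outputs, so a reset r combined with a fresh message still produces
    # fresh-looking encryption randomness.
    return hmac.new(prf_key, r + b"||" + message, hashlib.sha256).digest()
```

Note that the same (r, message) pair deterministically reproduces the same output, which is exactly why the joint entropy of the random number and the plaintext must be high enough for the transformation to help.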
Secret Key Leakage Attacks. The second category, i.e., secret-key leakage attacks, considers the case where an adversary learns bits of the secret key [9]. These attacks tamper with the decryption process of a cryptosystem. Leakage of secret keys may, perhaps, be due to some devious means, such as side-channel attacks. For instance, [21] reports that practical implementations of cryptosystems in software are often vulnerable to side-channel attacks. For example, the power traces of 8000 encryptions are sufficient to extract the secret key of an ASIC AES implementation, which is substantially faster than a brute-force search for the secret key. It follows that, given enough bits of the secret key, a simple exhaustive search over the set of candidate keys can break any cryptosystem’s security. A formal definition of this attack was first considered in [9]. Several articles have provided cryptosystems that are provably secure against secret-key leakage attacks, on top of CCA2 security [12,13]. In particular, the scheme of [13] provides a cryptosystem that is secure against a constant factor of secret key bits leaked to the adversary, where the factor can be a large fraction of the secret key’s length. The scheme of [13] is composed of an ensemble of various cryptographic primitives, such as hash proof systems [5], lossy functions [8], and randomness extractors [22,23]. In particular, ref. [13] proposed the one-time lossy filter, which is a special type of lossy function that does not implement a trapdoor. Unlike the cryptosystem of [3], the scheme of [13] is randomness-recovering and can tolerate a higher degree of secret-key leakage. The paper of [12] showed that the cryptosystem of [13] is also secure against arbitrary functions of secret-key leakage. In Section 3, we illustrate how secret-key leakage attacks would affect the formal definition of CCA2 security (expressed as an attack game between a challenger and an adversary).
Briefly, secret-key leakage attacks would provide additional leakage queries for the adversary, where he gets to learn a constant factor of bits of the secret key. In Section 4, we present a list of various other types of secret-key leakage attacks along with the primitives that they use.
Our contributions. As noted in [11], given these new types of attacks, an interesting problem is the construction of a public-key cryptosystem that is resistant to both randomness attacks and secret-key leakage attacks. Attacks that jointly involve both types effectively tamper with both the encryption and decryption processes of a cryptosystem, and the cryptosystem has to deal with attacks from both sides. On the one hand, in a randomness attack, the randomness involved in encryption is set to some value dictated by the adversary. On the other hand, in a secret-key leakage attack, the adversary learns information about the secret key involved in decryption. In terms of attack games between a challenger and an adversary, this implies that the adversary has access to additional encryption queries and secret-key leakage queries, aside from the usual decryption and challenge queries in a CCA2 attack game. To address these challenges, we propose two cryptosystems; the first is proven in the random oracle model, and the second is proven in the standard model and relies on a proposed primitive, called lossy functions. A collection of lossy functions provides multiple lossy branches and is simple to construct from existing ABO lossy functions [8]. Having multiple lossy branches is crucial for our cryptosystem, given that the adversary may perform multiple encryption and challenge queries, and, unlike challenge ciphertexts, encrypted ciphertexts can validly serve as input for decryption. By having several lossy branches, the cryptosystem is able to exploit the loss of information given by the lossy branches even under multiple encryption and challenge queries. We can say that the lossy function forms the core primitive of our second cryptosystem since, without this primitive, the hash proof systems would be insufficient for security (at least in the context of our constructions).
The presentation of our cryptosystems follows [2,10], which begins with random oracle models, followed by standard models. This is because, while the random oracle model from [4] is useful for simplifying security proofs, it relies on the strong assumption that some hash functions are truly random, which may not necessarily hold in practice [2]. For this reason, standard models usually follow initial random oracle models, albeit with some added complexity in their schemes. Both of our proposed cryptosystems apply several primitives, such as hash proof systems, pseudorandom functions, and randomness extractors.
To put our contributions into context, the problem mentioned in [11] considers general classes of related randomness and related secret-key leakage attacks. For this paper, however, we approach the problem under a more limited class of randomness reset attacks [16], in which random numbers are reset to previous values, and under constant-bit secret-key leakage attacks [13], where a constant number of bits of the secret key are leaked. At the end of the paper, we present concrete instances of collections that rely on the decisional Diffie–Hellman assumption and on ElGamal matrix encryptions. We present security proofs for our proposed cryptosystems using the well-known game-hopping proof technique, as described in [2,19].
2. Preliminaries
2.1. Notations
Given the set of natural numbers ℕ, let [n] denote the set {1, …, n} for any n ∈ ℕ. Let κ denote a security parameter, following standard cryptography literature [2]. A function f is negligible in κ if f(κ) < κ^(−c) for every fixed constant c and all sufficiently large κ. A function f is superpolynomial in κ if 1/f is negligible. Throughout the paper, the notation x ← X refers to x being randomly drawn from the probability distribution of a random variable, X. Let A be any probabilistic polynomial-time algorithm. The advantage of A is defined to be its capacity to distinguish between the probability distributions of two collections of random variables. For instance, let X = {X_κ} and Y = {Y_κ} be two collections of random variables indexed by κ. The advantage of A, in this instance, is |Pr[A(x) = 1] − Pr[A(y) = 1]| for x ← X_κ and y ← Y_κ. Two collections of random variables X and Y are computationally indistinguishable if the advantage of any polynomial-time algorithm is negligible in κ. The statistical distance between two random variables X and Y with the same domain D is denoted as Δ(X, Y) = (1/2) Σ_{d ∈ D} |Pr[X = d] − Pr[Y = d]| [22]. The min-entropy of X is denoted as H∞(X) = −log max_x Pr[X = x]. If X is conditioned on Y, the average min-entropy of X conditioned on Y is H̃∞(X | Y) = −log E_{y←Y}[max_x Pr[X = x | Y = y]].
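As a quick illustration of the last two notions, the statistical distance and min-entropy of finite distributions can be computed directly. This toy Python sketch represents a distribution as a dictionary mapping outcomes to probabilities:

```python
import math

def statistical_distance(P: dict, Q: dict) -> float:
    # Delta(X, Y) = (1/2) * sum over the shared domain of |Pr[X=d] - Pr[Y=d]|.
    domain = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(d, 0.0) - Q.get(d, 0.0)) for d in domain)

def min_entropy(P: dict) -> float:
    # H_inf(X) = -log2 of the largest single-outcome probability.
    return -math.log2(max(P.values()))
```

For example, the uniform distribution over four outcomes has min-entropy 2, and its statistical distance to a point mass on one of those outcomes is 0.75.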
2.2. Hashing and Randomness Extractors
Given the security parameter κ, let the input and output lengths of a hash function be values of polynomials in κ, with the output length smaller than the input length. A family of hash functions with domain X and range Y is pairwise independent if, for every distinct pair x1, x2 ∈ X and every y1, y2 ∈ Y, the probability that h(x1) = y1 and h(x2) = y2 is equal to 1/|Y|^2 for a randomly drawn h from the family. On the other hand, if, for every distinct pair x1, x2 ∈ X, we have Pr[h(x1) = h(x2)] ≤ 1/|Y| for a randomly drawn h, the family is a universal family of hash functions, which is a strictly weaker property than pairwise independence [13]. A family of hash functions is collision resistant if no polynomial-time algorithm can compute a distinct pair x1, x2 ∈ X such that h(x1) = h(x2) for a given h from the family. The following useful result regarding average min-entropy will be used in several security proofs.
Lemma 1
([24]). Given the random variables X, Y, and Z, suppose that Y has at most 2^r possible values; then, H̃∞(X | (Y, Z)) ≥ H̃∞(X | Z) − r.
Definition 1.
Randomness Extractor. Let X be a random variable over a domain D, and let Z denote a finite range. Let Y be any random variable such that H̃∞(X | Y) ≥ k for some k. Let R be a random variable uniformly distributed over a seed space S. An efficiently computable function, Ext : D × S → Z, is an average-case (k, ϵ)-strong extractor if Δ((Ext(X, R), R, Y), (U_Z, R, Y)) ≤ ϵ, where U_Z is the uniform distribution over Z.
Concrete instantiations of strong randomness extractors involve a family of universal hash functions. This leads to the following lemma.
Lemma 2
([25]). A universal family of hash functions with range {0, 1}^m can be used as an average-case (k, ϵ)-strong extractor whenever m ≤ k − 2 log(1/ϵ).
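To illustrate, the family h_a(x) = a·x mod p over a prime field is universal, which a toy Python sketch can verify by exhaustive counting (toy parameters; a deployed extractor would additionally truncate the output to m bits as bounded in Lemma 2):

```python
P = 1009  # a small prime; the family maps Z_P to Z_P (toy parameter)

def h(a: int, x: int) -> int:
    # The family {h_a : a in Z_P} with h_a(x) = a*x mod P. For distinct
    # x1 != x2, h_a(x1) = h_a(x2) iff a*(x1 - x2) = 0 mod P, which holds
    # only for a = 0 since P is prime; the collision probability over a
    # random a is therefore exactly 1/P, so the family is universal.
    return (a * x) % P

def collision_count(x1: int, x2: int) -> int:
    # Exhaustively count the keys a on which two distinct inputs collide.
    return sum(1 for a in range(P) if h(a, x1) == h(a, x2))
```

For any distinct pair of inputs, exactly one key (a = 0) causes a collision, matching the 1/P collision bound.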
2.3. Public Key Cryptosystems and CCA2 Security
A public-key cryptosystem consists of three probabilistic algorithms, (G, E, D), described as follows.
- G(κ) is an initialization algorithm that outputs a public/secret key pair (pk, sk) given security parameter κ.
- E(pk, m) is an encryption algorithm that outputs a ciphertext, c, given pk, a plaintext message, m, and a random number, r, sampled during computation.
- D(sk, c) is a decryption algorithm that outputs m such that D(sk, E(pk, m)) = m.
We now describe security against adaptive chosen ciphertext attacks or CCA2 security using attack games, following [4].
Security Notion of Adaptive Chosen Ciphertext Attack (CCA2 Security)
This security notion is defined in terms of an attack game between a challenger and an adversary, in which both are polynomial-time algorithms. On input κ, the challenger draws a random bit b and then generates the public/secret key pair (pk, sk). It forwards pk to the adversary. The adversary can perform decryption queries by providing a ciphertext, c, to the challenger, and the challenger returns the decryption of c under sk. The adversary performs a challenge query by giving a message pair (m0, m1) to the challenger, and the challenger returns the challenge ciphertext, c*, an encryption of m_b under pk. To prevent trivial wins, any ciphertext input for decryption should not equal c*. The game ends when the adversary outputs a guess b′, and the adversary wins the game if b′ = b. The advantage of the adversary is |Pr[b′ = b] − 1/2|. If the advantage of any polynomial-time adversary for this game is negligible in κ, the public-key cryptosystem is CCA2 secure.
2.4. Hash Proof Systems
A hash proof system is an encapsulation system that uses a projective hash [5]. The domain of a projective hash consists of two disjoint sets, the valid set and the invalid set. Each projective hash function is associated with a projection function whose role is to provide auxiliary information. Without this auxiliary information, it is computationally difficult to evaluate the projective hash over the valid set in its domain, and its behaviour is close to uniform. In more detail, let H_sk denote a projective hash with ciphertext domain C, indexed by a secret key, sk. Let V ⊂ C denote the set of valid ciphertexts and C \ V denote the set of invalid ciphertexts. Let K denote the set of encapsulated keys, i.e., the range of H_sk. A hash proof system, H, consists of three polynomial-time algorithms that are as follows.
- Param is a parameter generation algorithm that generates a secret key, sk, and projective hash, H_sk, with the associated projection function, μ. It computes public key pk = μ(sk), representing auxiliary information.
- Pub is a public evaluation algorithm that, given pk, ciphertext C ∈ V, and witness, w, of the fact that C ∈ V, outputs H_sk(C).
- Priv is a private evaluation algorithm that, given sk and ciphertext C, outputs H_sk(C), without requiring a witness, w, of the fact that C ∈ V.
A key property required of H is the subset membership hardness property, whereby V is computationally difficult to distinguish from C \ V. Formally, let A be any polynomial-time algorithm. The advantage of A with respect to the subset membership problem over H is defined as |Pr[A(C) = 1 : C ← V] − Pr[A(C) = 1 : C ← C \ V]|. If this advantage is negligible for any A, the subset membership problem over H is computationally hard.
Definition 2.
Given κ, a projective hash function, H_sk, with corresponding projection function, μ, is ϵ-universal if, for all public keys pk, all invalid ciphertexts C ∈ C \ V, and all K ∈ K, we have Pr[H_sk(C) = K | μ(sk) = pk] ≤ ϵ, where the probability is computed over all secret keys sk consistent with pk.
The following lemma and definition will be used in the security proofs.
Lemma 3
([13]). Let H_sk be an ϵ-universal projective hash function with the associated projection function, μ. For all pk and invalid ciphertexts C ∈ C \ V, we have H̃∞(H_sk(C) | (pk, C)) ≥ log(1/ϵ), where sk is a randomly drawn secret key, and pk = μ(sk).
Definition 3.
A hash proof system, H, is ϵ-universal if the underlying projective hash function is ϵ-universal and the underlying subset membership problem is computationally hard.
2.5. Lossy Functions
2.5.1. Lossy Functions
A lossy function is a function that loses information from its input. A collection of lossy functions (lossy collection) consists of a set of injective functions, along with a set of lossy functions [8]. Let n and p be values of polynomials in κ. Given κ, the input length of any function in the collection is n, and the size of the domain is 2^n. A lossy function in the collection has an image size of, at most, p, where p is much smaller than 2^n. For convenience, the dependence of n and p on κ is omitted hereafter. The functions in the collection are indexed by the set S. A collection of lossy functions is given by the polynomial-time algorithms (S, F):
- S(κ, b) for b ∈ {injective, lossy} is a function index sampling algorithm. If b = injective, it outputs s, where s is the index of an injective function. If b = lossy, its output is s, where s is the index of a lossy function.
- F(s, x) is an evaluation algorithm that, on input s ∈ S and x ∈ {0, 1}^n, outputs an element in the range. If s refers to an injective function, F(s, ·) is injective. If s refers to a lossy function, the image size of F(s, ·) is, at most, p.
Definition 4.
Required properties of a collection of lossy functions: (i) the index of an injective or lossy function can be efficiently sampled; (ii) the distribution of injective function indices is computationally hard to distinguish from the distribution of lossy function indices.
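The information-loss aspect of the definition can be illustrated with a toy example: a linear map over (Z_p)^2 is injective when its matrix is invertible and lossy when the matrix has rank 1. This sketch shows only the lossiness property; the DDH-based construction of [8] additionally hides which case holds, which this toy does not attempt:

```python
P = 5  # toy modulus; inputs are vectors in (Z_P)^2, so the domain has 25 points

def apply_fn(M, v):
    # Evaluate the linear map v -> M v over Z_P.
    return ((M[0][0] * v[0] + M[0][1] * v[1]) % P,
            (M[1][0] * v[0] + M[1][1] * v[1]) % P)

def image_size(M):
    # Enumerate the whole domain and count distinct outputs.
    return len({apply_fn(M, (x0, x1)) for x0 in range(P) for x1 in range(P)})

injective_M = [[1, 0], [0, 1]]  # invertible matrix: the map loses nothing
lossy_M = [[1, 2], [2, 4]]      # rank 1 over Z_5 (row 2 = 2 * row 1):
                                # the image collapses from 25 points to 5
```

The injective branch preserves all 25 domain points, while the lossy branch maps them onto only 5 outputs, losing log2(5) bits of information about the input.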
2.5.2. ABO Lossy Functions
A collection of all-but-one (ABO) lossy functions (ABO collection) consists of functions that are each equipped with a set of branches [8]. One branch corresponds to a lossy branch, while the rest are injective branches. Let B denote the collection of branches. Given κ, an ABO collection is defined over B with lossy branch b* ∈ B. The functions in the ABO collection are indexed by the set S. A collection of ABO lossy functions is given by polynomial-time algorithms (S, F), which are as follows.
- S(κ, b*) is a function index sampling algorithm that, given lossy branch b* ∈ B, outputs function index s ∈ S.
- F(s, b, x) is an evaluation algorithm that, on input index s, branch b ∈ B, and x ∈ {0, 1}^n, outputs an element in the range. If b = b*, its image has size at most p. Otherwise, F(s, b, ·) is injective.
Definition 5.
Required properties of a collection of ABO lossy functions: (i) given κ, a lossy branch b* can be efficiently sampled; (ii) the index sampling algorithm can efficiently sample s given b*; (iii) it is computationally difficult to distinguish the distributions of indices sampled under distinct lossy branches; and (iv) given s, it is computationally difficult to determine b*.
2.5.3. Lossy Functions
A collection of lossy functions with M lossy branches generalizes the ABO collection. Each function in the collection is equipped with a set of branches, but there are several possible lossy branches. Such collections are similar to ABN lossy functions in [26] and ABM lossy functions in [27]. However, they are simpler and can be constructed from a set of ABO collections using Cartesian products. Let B denote the collection of branches. Define the collection over B with a lossy branch set B* ⊂ B of size M and with elements b*_1, …, b*_M. Define q to be an ordered tuple that corresponds to B*, i.e., q = (b*_1, …, b*_M). The functions in the collection are indexed by the set S. A collection of lossy functions with M lossy branches is given by the polynomial-time algorithms (S, F), which are as follows.
- S(κ, q) is a function index sampling algorithm that takes as input q, corresponding to B*, and outputs s ∈ S.
- F(s, b, x) is an evaluation algorithm that, on input s, branch b ∈ B, and x ∈ {0, 1}^n, outputs an element in the range. If b ∈ B*, the image size is at most p.
Definition 6.
Required properties of a collection of lossy functions with M lossy branches: (i) given κ, a lossy branch set B* can be efficiently sampled; (ii) the index sampling algorithm can efficiently sample s, given q corresponding to B*; (iii) it is computationally difficult to distinguish the distributions of indices sampled under distinct lossy branch sets; and (iv) given s, it is computationally difficult to generate an element of B*.
2.6. Pseudorandom Functions
A pseudorandom function, P : R × M → Y, where R is a key space and M is a space of input data blocks, is a deterministic algorithm that behaves like a truly random function [2].
Security Notion of a Pseudorandom Function
The security of a pseudorandom function, P, is defined in terms of an attack game between a challenger and an adversary. Given κ, at the start of the game, the challenger draws a random bit b, draws a key k ← R, and selects a random function, f, from M to Y. The adversary submits a sequence of queries to the challenger, where each query consists of an element m ∈ M. If b = 0, the challenger submits f(m) to the adversary. If b = 1, the challenger submits P(k, m) to the adversary. The game ends once the adversary submits a guess b′, and the adversary wins if b′ = b. The advantage of the adversary in this game is defined as |Pr[b′ = b] − 1/2|. The pseudorandom function, P, is a secure PRF if the advantage of any polynomial-time adversary in this game is negligible in κ.
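The attack game above can be sketched as a challenger that answers queries either with a PRF or with a lazily sampled random function, depending on its hidden bit; HMAC-SHA256 stands in for the PRF here (an assumption of this sketch):

```python
import hmac
import hashlib
import secrets

def prf(key: bytes, x: bytes) -> bytes:
    # HMAC-SHA256 as a stand-in pseudorandom function.
    return hmac.new(key, x, hashlib.sha256).digest()

class Challenger:
    """PRF attack-game challenger: answers queries with the PRF when its
    hidden bit is 1, and with a lazily sampled truly random function when
    the bit is 0."""

    def __init__(self):
        self.b = secrets.randbelow(2)        # hidden bit the adversary must guess
        self.key = secrets.token_bytes(32)   # PRF key, drawn from the key space
        self.random_fn = {}                  # lazy table simulating a random function

    def query(self, x: bytes) -> bytes:
        if self.b == 1:
            return prf(self.key, x)
        if x not in self.random_fn:
            self.random_fn[x] = secrets.token_bytes(32)
        return self.random_fn[x]
```

In either case, repeated queries on the same block return the same answer, so the adversary cannot win by querying twice; it must actually distinguish the two function distributions.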
2.7. Strongly Unforgeable One-Time Signatures
A strongly unforgeable one-time signature scheme has the strong one-time unforgeability property [8] and is given by the algorithms below.
- Key Generation.
- Gen: on input κ, it outputs the verification/signing key pair (vk, sk).
- Signing.
- Sign: given sk and a plaintext x, it outputs a signature σ.
- Verification.
- Verify: given vk, x, and σ, it outputs 0 if σ is not a valid signature of x under vk and 1 otherwise.
Security Notion of a Strongly Unforgeable One-Time Signature Scheme
The security of a strongly unforgeable one-time signature scheme is defined in terms of an attack game consisting of a challenger and an adversary. Given κ, at the start of the game, the challenger generates the verification/signing key pair and gives the verification key to the adversary. The adversary queries a plaintext message, x, to the challenger, and the challenger returns the corresponding signature, σ. The adversary wins the game if it outputs a message-signature pair (x′, σ′), distinct from (x, σ), that passes verification. A signature scheme is strongly unforgeable one-time secure if no probabilistic polynomial-time adversary can win the attack game described with non-negligible probability.
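As a concrete illustration, Lamport's classic hash-based scheme is a standard example of a one-time signature; it is shown here for illustration only and is not necessarily the instantiation used later in the paper:

```python
import hashlib
import secrets

def H(m: bytes) -> bytes:
    return hashlib.sha256(m).digest()

N = 256  # one secret pair per bit of the 256-bit message digest

def keygen():
    # Signing key: 256 pairs of random preimages; verification key: their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(N)]
    vk = [(H(a), H(b)) for a, b in sk]
    return vk, sk

def bits_of(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(N)]

def sign(sk, msg: bytes):
    # Reveal, for each digest bit, the preimage matching that bit value.
    return [sk[i][bit] for i, bit in enumerate(bits_of(msg))]

def verify(vk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == vk[i][bit] for i, bit in enumerate(bits_of(msg)))
```

The key may only sign once: a second signature would reveal further preimages and allow forgeries, which is exactly the one-time restriction in the security notion above.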
3. Security Notions
For illustration, we first describe randomness reset attacks and secret-key leakage attacks in the context of the ElGamal public-key cryptosystem. Recall that, given a group, G, of prime order p, with generator g, the ElGamal cryptosystem draws a secret key, x ∈ Z_p, and defines the public key as h = g^x. A message, m ∈ G, is encrypted as (g^r, m · h^r) for a randomly chosen r ∈ Z_p.
Randomness Reset Attack Example. In a randomness reset attack [16], an adversary can force the cryptosystem to re-use a previous random number. In terms of the ElGamal cryptosystem above, suppose that Alice draws a secret key, x, and gives the public key, h = g^x, to Bob. Bob now encrypts a message, m, by drawing a random number, r, and sends the ciphertext c = (g^r, m · h^r) to Alice. In a normal setting, without randomness reset attacks, suppose that Bob wants to send another message, m′, for m′ ≠ m, to Alice. To do this in the normal setting, Bob draws a fresh random number, r′, and sends the new ciphertext c′ = (g^{r′}, m′ · h^{r′}) to Alice. Given that c and c′ involve different random numbers, they are computationally indistinguishable for any efficient adversary. In a setting with randomness reset attacks, however, an adversary forces Bob to re-use r in encrypting m′, i.e., c′ = (g^r, m′ · h^r). This arbitrarily breaks the security of the ElGamal cryptosystem. To see this, suppose that some adversary obtained c = (c1, c2) and c′ = (c1′, c2′), and let the adversary know, as well, m, along with two candidate messages, m0 and m1, for m′. The adversary can compute c2′/c2 = m′/m. It follows that if m′ = m0, we have c2′/c2 = m0/m, but if m′ = m1, we have c2′/c2 = m1/m. The adversary can compute m0/m and m1/m on its own, given that it knows m, m0, and m1. It follows that, with randomness reset attacks, the ElGamal cryptosystem is not even semantically secure. Relating this scenario to a CCA2 attack game between a challenger and an adversary, a randomness reset attack allows the adversary to perform encryption queries apart from challenge queries. In the example above, the adversary can ask the challenger to encrypt m (encryption query) followed by the encryption of m′. The adversary can also force the challenger to re-use a previous random number in both encryption and challenge queries. For more details on the power of randomness attacks, it has been shown in [16] that randomness reset attacks may break arbitrary CCA2 cryptosystems that are more complicated than the ElGamal cryptosystem if no additional primitives are applied to secure the randomness.
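The attack described above can be sketched in a few lines of Python over a toy group (modulus 23, with a generator of order 11; toy parameters, obviously not secure):

```python
import secrets

Q = 23           # toy modulus: g = 2 has prime order 11 mod 23
G_GEN, ORDER = 2, 11

def keygen():
    x = 1 + secrets.randbelow(ORDER - 1)     # secret key
    return pow(G_GEN, x, Q), x               # public key h = g^x

def encrypt(h, m, r):
    # ElGamal encryption (g^r, m * h^r); r is an explicit argument so that
    # a "reset" can force its reuse.
    return pow(G_GEN, r, Q), (m * pow(h, r, Q)) % Q

h, x = keygen()
m, m0, m1 = 3, 4, 9                          # known message and challenge pair
r = 1 + secrets.randbelow(ORDER - 1)
c1, c2 = encrypt(h, m, r)                    # encryption query for known m
b = secrets.randbelow(2)
d1, d2 = encrypt(h, m1 if b else m0, r)      # challenge re-uses the SAME r

# Adversary: with equal randomness, d2 / c2 = m_b / m, hence m_b = m * d2 / c2.
recovered = (m * d2 * pow(c2, -1, Q)) % Q
guess = 1 if recovered == m1 else 0
```

Because the blinding factor h^r is identical in both ciphertexts, it cancels in the ratio, and the adversary recovers the challenge message exactly, winning the game with probability 1.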
Constant secret-key leakage attack example. In a secret-key leakage attack, the adversary can obtain bits of the secret key. The secret-key leakage attack in [13] considers the leakage of a constant factor of the bits of the secret key, i.e., a fixed fraction of the secret key’s length. The fraction for any public-key cryptosystem should obviously be less than one; otherwise, the entire secret key is leaked. In terms of the ElGamal cryptosystem, this implies that a fraction of the bits of x is leaked to the adversary, which arbitrarily breaks the cryptosystem. However, in some cryptosystems, such as the Cramer and Shoup CCA2 cryptosystem [3], the allowable amount of leakage is even lower. This is because the security of the Cramer and Shoup cryptosystem involves jointly using two secret keys, and, if either is leaked, the entire cryptosystem is insecure. Relating this scenario to a CCA2 attack game between a challenger and an adversary, a constant secret-key leakage attack involves leaking some constant number of bits to the adversary through a leakage query.
We now present the attack game corresponding to the security notion of a public-key cryptosystem that is secure against (i) adaptive chosen ciphertext attacks, (ii) randomness reset attacks, and (iii) constant secret-key leakage attacks. Table 1 presents the attack game.
Table 1.
Attack game with adaptive chosen ciphertext attack, randomness reset, and constant secret-key leakage, where the combined length of the random numbers used during encryption is the value of a polynomial in κ. The adversary in this game is allowed to perform multiple encryption and challenge queries under different randomness indices. For each index j, the notation refers to the jth element of the corresponding randomness tuple.
The attack game is initialized by the challenger through Initialize, where the challenger generates the public/secret key pair (pk, sk) and forwards pk to the adversary. The adversary has access to (i) decryption queries Dec, (ii) secret-key leakage queries Leak, (iii) challenge queries Challenge, and (iv) encryption queries Enc, which are all described in Table 1. In Table 1, the adversary has access to a set of indices that are mapped to prior random numbers generated during encryption or challenge queries. In any subsequent encryption query or challenge query, the adversary can use any of these indices at will, representing a randomness reset attack. The adversary can ask the challenger in an encryption query to use any public key; this follows [16]. In a leakage query, Leak, the adversary may request up to a bounded number of bits of the secret key, sk. The game ends once the adversary outputs a bit, b′, through Finalize, and the adversary wins if b′ = b. The advantage of the adversary, in this case, is defined as |Pr[b′ = b] − 1/2|, and a cryptosystem is secure with respect to the attack game of Table 1 if no polynomial-time adversary has a non-negligible advantage.
3.1. Attack Game with Random Oracles
In certain situations, the random oracle heuristic [4] is convenient for simplifying security proofs. A random oracle Φ captures a truly random function and can be incorporated in cryptosystems. Suppose that a random oracle is part of a cryptosystem. The corresponding attack game incorporates additional random oracle queries, whereby, on input x from the adversary, the challenger returns y = Φ(x), where y is drawn uniformly at random the first time x is queried, and the same y is returned on any repeated query of x.
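In proofs and simulations, such an oracle is typically realized by lazy sampling, as in this minimal sketch:

```python
import secrets

class RandomOracle:
    """Lazily sampled random oracle Phi: a fresh query x gets a uniformly
    random output, and repeated queries of the same x return the same value."""

    def __init__(self, out_bytes: int = 32):
        self.out_bytes = out_bytes
        self.table = {}   # remembers past answers to stay consistent

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = secrets.token_bytes(self.out_bytes)
        return self.table[x]
```

Lazy sampling is what lets the challenger in a security proof answer oracle queries on the fly, and, if needed, program the oracle's answers during a game hop.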
3.2. Adversary Constraints
As stated in [11,16], if the adversary can perform a randomness reset attack and has no constraints on encryption/challenge queries, it can trivially win. To prevent this, the adversary is assumed to be equality-pattern-respecting. We provide the definition of an equality-pattern-respecting adversary below, along with the corresponding notion of a non-reversing adversary that will be used in cryptosystem 1.
Definition 7.
Let A be an adversary in Table 1’s attack game. Let I represent the set of randomness indices mapped to random numbers generated by the challenger during A’s challenge and encryption queries. Let A perform encryption queries and challenge queries using indices i ∈ I. Let M_{pk,i} represent the set of input messages, m, in the encryption queries done by A using the public key, pk, and randomness index, i. Let M*_i represent the set of message pairs given in challenge queries using randomness index i. For Table 1’s attack game, we say that A is equality-pattern-respecting: (1) if, for all i ∈ I and for all pairs (m0, m1) ∈ M*_i, neither m0 nor m1 belongs to M_{pk,i}, and (2) if, for all i ∈ I and any two distinct pairs (m0, m1), (m0′, m1′) ∈ M*_i, we have m0 = m0′ if and only if m1 = m1′.
Definition 8.
Let be an adversary in Table 1’s attack game that performs encryption queries and decryption queries. Let denote the set of ciphertext outputs received by from the challenger in its encryption queries. Let denote the set of ciphertexts submitted by for its decryption queries. We say that is non-reversing if, for any , we have at any point in the game.
4. Comparison of Cryptosystems/Lossy Functions
Given the security notion presented in the previous section, we list, for context, various cryptosystems in Table 2 that deal with the related notions of randomness attacks and secret-key leakage attacks. As Table 2 shows, the types of randomness attacks considered in the literature involve linear and polynomial functions of the random numbers used in the encryption process. The same holds for secret-key leakage attacks, where leakage may consist of affine or polynomial functions of the bits of the secret key, along with bounded degrees of secret-key tampering. Our proposed cryptosystems are listed in the last two lines of Table 2 and consider joint attacks involving randomness reset and constant secret-key leakage. Similar to the other constructions in Table 2, we propose both a random oracle model and a standard model version of our cryptosystems.
In Table 3, we list several lossy function constructions from the literature. The first lossy function collection in Table 3 is from [8], which uses the DDH assumption. The subsequent lossy function constructions present improvements in terms of the number of lossy function branches or tags that can be sampled efficiently while retaining their amounts of lossiness. In particular, the construction of [27] provides an efficient lossy function that can sample a superpolynomially large number of lossy tags. The construction of [27], however, is quite complicated, since it involves Waters signatures along with chameleon hashing. For the purposes of our cryptosystems, our proposed lossy function collection (the last line of Table 3) is able to sample up to M lossy branches per function index but is compromised by a rather high amount of lossiness, i.e., . Yet, despite this amount of lossiness, the security proof still holds, given that the factor is still superpolynomial in . In addition, our proposed lossy function collection is simpler to construct, involving only the DDH assumption along with the Cartesian product operation. In terms of size complexity, we show, in the FIS and LBS columns, the size of the function index and the lossy branch index, respectively, where size is measured in terms of matrix representations. For instance, the function index size of the lossy functions in [8] is , which means that the index is a square matrix consisting of n rows and n columns. Our proposed collection has a larger function index size, of , and a larger branch size, of . This is because it relies on the M-fold Cartesian product operation. A concrete instantiation of an collection is presented in Section 6.
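For intuition about the injective/lossy distinction underlying Table 3, the following toy sketch (ours, not the concrete Section 6 construction) contrasts an injective mode, f(x) = Ax mod p for a full-rank matrix A, with a lossy mode in which A has rank 1 and the image collapses from p^n points to at most p; DDH-based constructions such as [8] additionally hide which mode was sampled inside group exponents:

```python
import itertools

p, n = 11, 2  # tiny parameters for illustration only

def matvec(A, x):
    return tuple(sum(a * xi for a, xi in zip(row, x)) % p for row in A)

# Injective mode: full-rank matrix (here, the identity).
A_inj = [[1, 0], [0, 1]]
# Lossy mode: rank-1 matrix u * v^T, whose image has at most p points
# instead of p^n.
u, v = [3, 5], [2, 7]
A_lossy = [[(ui * vj) % p for vj in v] for ui in u]

domain = list(itertools.product(range(p), repeat=n))
img_inj = {matvec(A_inj, x) for x in domain}
img_lossy = {matvec(A_lossy, x) for x in domain}

print(len(domain), len(img_inj), len(img_lossy))  # 121 121 11
```

The lossy mode retains only log2(p) of the 2·log2(p) input bits; lossiness measures exactly this gap.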
Table 2.
List of CCA2 secure PKE schemes that incorporate either randomness attack or secret-key leakage attack along with their primitives. Scheme models are classified according to random oracle or standard, where standard refers to schemes that do not use random oracles. Our proposed schemes are in the last two lines of the table and incorporate joint randomness reset attack and constant-bit secret-key leakage attack.
| Reference | Randomness Attack | Secret-Key Leakage Attack | Model | Primitives/Assumptions |
|---|---|---|---|---|
| Canetti and Goldwasser [28] | − | − | random oracle | random oracle assumption |
| Cramer and Shoup [3] | − | − | standard | hash proof system/DDH |
| Yilek [16] | randomness reset | − | standard | pseudorandom function |
| Peikert and Waters [8] | − | − | standard | lossy functions/DDH/DCR |
| Wee [29] | − | linear leakage | standard | BDDH/LWE |
| Qin and Liu [13] | − | constant leakage | standard | hash proof system + lossy filter/DDH |
| Bellare et al. [10] | chosen distribution attack | − | random oracle | random oracle assumption |
| Bellare et al. [10] | chosen distribution attack | − | standard | lossy functions |
| Paterson et al. [11] | linear/polynomial | − | random oracle | random oracle assumption |
| Paterson et al. [11] | linear functions | − | standard | pseudorandom function |
| Paterson et al. [11] | polynomial functions | − | standard | CIS hash functions |
| Paterson et al. [30] | vector of functions | − | standard | Goldreich-Levin extractor |
| Boneh et al. [31] | − | affine leakage | random oracle | random oracle assumption |
| Boneh et al. [31] | − | polynomial leakage | standard | EDBDH |
| Faonio and Venturi [12] | − | leakage + bounded tampering | standard | hash proof system + lossy filter/RSI |
| ours | randomness reset | constant leakage | random oracle | random oracle assumption |
| ours | randomness reset | constant leakage | standard | hash proof system + lossy functions |
Table 3.
List of lossy function constructions found in the literature. Our proposed construction is shown in the last line of the table, where up to M lossy branches are given for each function index. While the lossiness of our construction is higher than the other schemes, it is simpler to construct and uses only the DDH assumption. ABO: all-but-one lossy functions; ABN: all-but-N lossy functions; ABM: all-but-many lossy functions; LF: lossy function; LB: lossy branch; LT: lossy tag; DS: domain size; LS: lossiness size; FIS: function index size (in terms of matrix representation, where n is the number of rows/columns); LBS: lossy branch index size; DDH: decisional Diffie–Hellman; DCR: decisional composite residuosity; QR: quadratic residuosity; CH: chameleon hash.
| Reference | Primitive | No. of LF/LB/LT | DS | LS | FIS | LBS | Assumptions |
|---|---|---|---|---|---|---|---|
| Peikert and Waters [8] | lossy functions | several | n | DDH/DCR (lattice) | |||
| Peikert and Waters [8] | ABO lossy functions | 1 LB/function | n | DDH | |||
| Hemenway et al. [26] | ABN lossy functions | N LB/function | - | - | DDH/DCR/QR | ||
| Hofheinz [27] | ABM lossy functions | superpolynomial LT | - | - | Waters sig./CH | ||
| Qin and Liu [13] | one-time lossy filter | superpolynomial LT | n | DDH/CH | |||
| ours | lossy functions | M LB/function | DDH |
5. Proposed Cryptosystems
5.1. Cryptosystem 1
In this section, we present our first cryptosystem that is secure against the attack game of Table 1. It uses several primitives, such as hash proof systems, randomness extractors, and pseudorandom functions, along with a random oracle . Using a random oracle assumption simplifies the security proof and usually serves as the starting point in cryptosystem design, as done in [10]. However, as mentioned, the random oracle assumption is rather strong. In addition, this cryptosystem is limited to facing non-reversing adversaries who cannot submit prior-encrypted ciphertexts for decryption. We overcome the non-reversing limitation in the next cryptosystem—which also does away with the random oracle requirement.
5.2. Cryptosystem 1 Requirements
Let , , and denote the bounds on the number of encryption, decryption, and challenge queries, respectively. The requirements of cryptosystem 1 are as follows.
- An -universal hash proof system H for some given by . The ciphertext domain of H is C, with as its valid subset. and denote the secret-key space and public-key space of H, with elements and . The space W of the witnesses of H is set to R. The encapsulated key space of H is K, with elements . is set as , and . The projective function is , with associated projection function .
- A ABO collection given by . The set of branches is B with elements , and lossy branch . Functions are indexed by S, and
- E is a average-case strong-randomness extractor
- A secure pseudorandom function
- A strongly unforgeable one-time signature scheme given by . and denote the spaces of signature and verification keys, with elements and , and where the domain of is , and the domain of is equal to B.
- Elements of have length n. Elements of M have length l.
- The values of , l, and are such that for some .
- n and p satisfy .
- The polynomial in whose value is n is superpolynomial with respect to , , and .
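The average-case strong randomness extractor E required above can be met with a universal hash family via the leftover hash lemma. The sketch below uses the classic family h_(a,b)(x) = ((ax + b) mod q) mod 2^m; the modulus, output length, and function names are our illustrative choices, not the paper's parameters:

```python
import secrets

Q = (1 << 61) - 1  # a Mersenne prime; illustrative modulus choice
M_BITS = 16        # extractor output length in bits

def extract(seed, x):
    """Universal-hash extractor: seed = (a, b), source sample x < Q."""
    a, b = seed
    return ((a * x + b) % Q) & ((1 << M_BITS) - 1)

def sample_seed():
    # The seed is public randomness, independent of the weak source.
    return (secrets.randbelow(Q - 1) + 1, secrets.randbelow(Q))

seed = sample_seed()
y = extract(seed, 123456789)
assert 0 <= y < (1 << M_BITS)
```

By the leftover hash lemma, as long as the source retains enough average min-entropy given the adversary's view, the output is statistically close to uniform even when the seed is published.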
5.3. Cryptosystem 1
- Key Generation.
- first runs to obtain and with . It computes . The output is a public/secret key pair , where and .
- Encryption.
- : on input and message , let denote a random oracle. It performs the following:
- It samples then computes . It sets . Using , it chooses
- It samples , then computes , followed by .
- Using , it computes and
- It samples and computes
- It computes
- It returns ciphertext
- Decryption.
- : on input and , performs the following:
- The algorithm checks if . If not, it outputs ⊥.
- It computes
- It computes
- It checks if . If not, it outputs ⊥
- It returns the plaintext message .
For cryptosystem 1, we note that the role of P is to generate fresh random numbers and using the joint entropy of the message m and old random numbers and (due to reset attacks). It follows that and serve as the actual randomness input to and E, respectively. To show correctness of the cryptosystem, given m and , first computes and followed by , , and . Let be the corresponding ciphertext where is jointly sampled with under , and is derived using as shown above. If this c were given to , it follows that given that and the signature scheme satisfies the correctness property. Having passed this first check, computes and we have , given that uses the same from encryption and is paired with using . Given that satisfies the correctness property, it follows that as claimed. Given that , we have under given that is a function, thereby passing the second check. Finally, with , we have , and the original message m is recovered.
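The re-randomization role of P can be pictured with HMAC-SHA256 standing in for the pseudorandom function (our substitution, not the paper's instantiation): under a reset attack the old random value repeats, but as long as the (message, old randomness) pair changes, the derived randomness changes.

```python
import hmac
import hashlib

def derive_randomness(prf_key: bytes, message: bytes, old_rand: bytes) -> bytes:
    # Fresh randomness from the joint entropy of the message and the
    # (possibly reset, hence repeated) old random value.
    return hmac.new(prf_key, message + old_rand, hashlib.sha256).digest()

key = b"k" * 32
r_old = b"fixed-by-reset-attack"
r1 = derive_randomness(key, b"message one", r_old)
r2 = derive_randomness(key, b"message two", r_old)
assert r1 != r2  # same reset randomness, different messages
assert r1 == derive_randomness(key, b"message one", r_old)  # deterministic
```

This also shows why the equality-pattern-respecting constraint is needed: repeating both the message and the randomness index deterministically reproduces the same derived randomness and hence the same ciphertext.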
5.4. Security Results for Cryptosystem 1
Theorem 1.
Let be a random oracle and let denote cryptosystem 1. For any non-reversing, equality-respecting, polynomial-time adversary that makes (a) at most challenge queries under multiple randomness indices, (b) at most encryption queries under multiple randomness indices, and (c) at most decryption queries, following the attack game of Table 1, cryptosystem 1 is secure against (i) adaptive chosen ciphertext attack, (ii) λ bits of secret-key leakage, and (iii) randomness reset attacks.
- Game 0.
- This game implements the original cryptosystem with no modifications. The following attack game incorporates random oracle queries in the attack game of Table 1. is modelled using an associative array, , which follows the faithful/forgetful gnome method [2]. The notation for refers to the jth element of .
- proc.Initialize()
- initialize empty associative array
- initialize empty arrays and
- send to adversary
- proc.Enc()
- if then randomly sample and set
- compute with line 5 modified as:
- -
- if then and set . Set
- return c
- proc.Challenge()
- if return ⊥
- if then randomly sample and set
- compute with the line 5 modified as:
- -
- if then and set . Set
- return
- proc.Dec(c)
- if , return ⊥
- compute with line 3 modified as:
- -
- if , then and set . Set
- return m
- proc.Leak()
- return bits of
- proc.Oracle(k)
- if , then and set
- return
The adversary can perform any number of encryption queries under different randomness indices in . Prior to making any challenge query, the adversary can request bits of the secret key. At any point in the game, the adversary can perform a decryption query under the non-reversing condition.
- Game 1.
- This game is similar to Game 0, except that are drawn randomly, instead of being computed using the pseudorandom function P in .
- Game 2.
- This game is similar to Game 1, except that once the adversary submits a ciphertext for decryption such that for some , the challenger returns ⊥.
- Game 3.
- This game is similar to Game 2, except that is sampled randomly in instead of being queried using .
- Game 4.
- This game is similar to Game 3, except that the challenger computes instead of in .
- Game 5.
- This game is similar to Game 4, except that is sampled from instead of V in .
- Game 6.
- This game is similar to Game 5, except that if the adversary submits a ciphertext for decryption such that , the challenger returns ⊥.
- Game 7.
- This game is similar to Game 6, except that the challenger draws uniformly at random from instead of computing in
Proposition 1.
Game 0 and Game 1 are computationally indistinguishable, given the security of the pseudorandom function P.
Proof.
To prove this claim, we define hybrid experiments, , where emulates the challenger in Game 0, samples randomly, and samples both and randomly. We have to show that for any , experiments and are computationally indistinguishable.
Suppose that some adversary can efficiently distinguish between and for . Using this adversary, we construct a simulator that breaks the security of P. The simulator has access to an oracle that, on input m, returns a number, y, where we either have for some random number r, or y is sampled using a truly random function. For , the simulator emulates the challenger in Game 0 perfectly in the initialization phase and draws . If , the simulator does not modify anything from the challenger in Game 0. For , , , the simulator knows and it can answer decryption and secret-key leakage queries. In encryption queries with input , if , the simulator sends m to its oracle and receives y, where we either have or y is sampled randomly. It sets and proceeds with the rest, as before. If , it samples randomly and sends m to its oracle. It receives y, where we either have or y is sampled randomly. It sets .
In challenge queries, the input is a message pair, . If , the simulator sends to its oracle and receives y, where we either have or y is sampled randomly. If , the simulator samples randomly and sends to its oracle. It receives y, where we either have or y is sampled randomly. When the adversary submits a guess , the simulator outputs 1 if and 0 otherwise. The probability that the oracle computes y using P and the simulator outputs 1 is equal to the advantage of the simulator in distinguishing between outputs of P and randomly sampled numbers. In turn, the simulator’s advantage is equal to the probability that the adversary outputs such that minus 1/2. However, due to the pseudorandomness of P, no efficient adversary is able to output such that with non-negligible probability. Hence, the simulator’s advantage is likewise negligible. By construction, experiment perfectly simulates Game 0, while experiment perfectly simulates Game 1. The proposition thus follows. □
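The reduction above can be phrased as a concrete distinguishing game: an oracle is either a PRF under a hidden key (HMAC-SHA256 here, as an illustrative stand-in for P) or a lazily sampled truly random function, and the simulator's advantage is the gap between its acceptance probabilities in the two worlds.

```python
import hashlib
import hmac
import os
import secrets

class PRFGameOracle:
    """Either a keyed PRF or a truly random function, chosen by a hidden bit."""

    def __init__(self):
        self.real = secrets.choice([True, False])  # hidden world bit
        self.key = os.urandom(32)
        self.table = {}  # lazy sampling for the random-function world

    def query(self, m: bytes) -> bytes:
        if self.real:
            return hmac.new(self.key, m, hashlib.sha256).digest()
        if m not in self.table:
            self.table[m] = os.urandom(32)
        return self.table[m]

oracle = PRFGameOracle()
# A distinguisher interacts with the oracle and outputs a guess; its
# advantage is |Pr[guess=1 | real] - Pr[guess=1 | random]|, which must be
# negligible for a secure PRF.
assert oracle.query(b"m") == oracle.query(b"m")  # both worlds are functions
```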
Proposition 2.
Game 1 and Game 2 are computationally indistinguishable, given the strong one-time existential unforgeability of the signature scheme.
Proof.
Games 1 and 2 behave the same, except when the adversary submits a ciphertext query , such that and for some but . We construct a simulator that attacks the security of the signature scheme as follows. The simulator emulates Game 2 against an adversary. The simulator has access to an oracle that provides it with a verification key upon request. Since the simulator does not know , it can query the oracle for a signature, where on input , the oracle returns computed using the hidden associated with the latest provided to the simulator. The simulator emulates the challenger in every aspect of the initialization phase of Game 1, but requests a preliminary . Since the simulator knows , it can answer any decryption query and secret leakage query. Moreover, prior to any challenge or encryption query, for any decryption query with input , the simulator checks if and if . If this condition is met, the simulator outputs as a forgery and terminates the simulation. If this event does not occur and the simulator encounters the first challenge or encryption query, the simulator uses as the verification key and queries the oracle for . Subsequent challenge or encryption queries require the simulator to ask the oracle for a fresh as well as for . Once it receives another decryption query with input , it checks if with for some . If this is true for some , it checks if and and . If this is true as well, the simulator outputs and terminates the simulation. By construction, the simulator emulates Game 2 perfectly against the adversary, and the advantage of the simulator in coming up with a forgery is equal to the probability that the adversary queries a ciphertext that meets the conditions mentioned. However, because the signature scheme is strongly one-time unforgeable, no efficient adversary can query such a ciphertext with non-negligible probability. It follows that the probability that the simulator outputs a valid forgery is negligible.
Thus, with large probability, the ciphertexts that involve with for some , and which meet the check requirements for decryption are not forgeries and are equal to some prior challenge ciphertext. However, no challenge ciphertexts can be valid for decryption as of Game 0. □
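The strongly unforgeable one-time signature assumed by the cryptosystem can be instantiated, for example, with a hash-based Lamport scheme (our illustrative choice, not necessarily the paper's): signing reveals one hash preimage per message-digest bit, so a key pair can sign exactly one message.

```python
import hashlib
import os

H = lambda x: hashlib.sha256(x).digest()
N = 256  # we sign the 256-bit hash of the message

def keygen():
    # Two secret preimages per digest bit; the verification key holds
    # their hashes.
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(N)]
    vk = [[H(s0), H(s1)] for s0, s1 in sk]
    return sk, vk

def bits(msg: bytes):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(N)]

def sign(sk, msg: bytes):
    # Reveal one preimage per digest bit; the key must be used only once.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(vk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == vk[i][b] for i, b in enumerate(bits(msg)))

sk, vk = keygen()
sigma = sign(sk, b"ciphertext components")
assert verify(vk, b"ciphertext components", sigma)
assert not verify(vk, b"tampered components", sigma)
```

A second signature under the same key would reveal further preimages, which is precisely why the scheme samples a fresh verification/signing key pair per ciphertext.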
Proposition 3.
Game 2 and Game 3 are computationally indistinguishable, given that Φ is a random oracle.
Proof.
We note that due to the non-reversing nature of the adversary, no ciphertext outputs of prior encryption or challenge queries can be submitted for decryption. It follows that in Game 3, the adversary cannot use decryption to check if is randomly drawn or not. Thus, the only event where Games 2 and 3 differ is when the adversary performs an oracle query in Game 3 on some input k, where k is computed under line 3 of in some prior encryption or challenge query, and is associated with some randomly drawn such that in Game 2. The probability that this event occurs is . Since , this probability is negligible. □
Proposition 4.
Game 3 and Game 4 are perfectly equivalent.
Proof.
The claim readily follows, since the change from computing k using to computing k using and is merely conceptual. □
Proposition 5.
Game 4 and Game 5 are computationally indistinguishable, given the hardness of the underlying subset membership problem in the hash proof system.
Proof.
To prove this claim, we define two experiments, and . and behave the same, except that samples from V, while samples from . Suppose that some adversary can distinguish between and with non-negligible probability. Using this adversary, we construct a simulator that can break the hardness of the underlying subset membership problem of hash proof system H. The simulator has access to an oracle that, on input , provides it with , where we either have , or . At initialization, the simulator emulates the challenger of Game 4 in all aspects. In encryption queries with input , the simulator forwards to its oracle and receives . In challenge queries, the simulator forwards to its oracle and receives . In both challenge and encryption queries, it does not compute for using . Since the simulator knows , the simulator can answer any decryption or secret-key leakage queries. Once the adversary submits a guess , the simulator outputs 1 if . It follows that the advantage of the simulator in distinguishing from is equal to the probability that the adversary outputs such that . However, given that the underlying subset membership problem of hash proof system H is hard, the probability that is negligible. It follows that the simulator’s advantage is likewise negligible. Experiment emulates Game 4, while emulates Game 5, thereby proving the proposition. □
Proposition 6.
Game 5 and Game 6 are computationally indistinguishable, given the -universal hash proof system.
Proof.
Define Z to be the event that some ciphertext with , is accepted for a decryption query in Game 5, but is rejected in Game 6. Games 5 and 6 proceed identically until Z occurs. We claim the following.
Let be a decryption query input that triggers Z. In encryption, serves as input to . From the adversary’s point of view, is dependent on the set of challenge ciphertexts where and on the set of encryption query outputs , where . For , do not provide additional information on since it is a function of . The same applies to for . The sets of verification keys and do not provide additional information on since they are sampled independently. Likewise, and do not provide additional information since they are randomly sampled as of Game 5. It follows that only and provide information on . Given that for all , has possible values and for all , has possible values, we have the following using Lemma 1.
Applying Lemma 1 to the secret-key leakage , the above reduces to:
Using the fact that H is an -universal hash proof system we have:
In addition, we can assume that has not been queried to the oracle, since the event that k is queried is taken into consideration in Proposition 3. It follows that, from the adversary’s point of view, the mapping of to k is injective. Since injective mappings preserve average min-entropies, we have:
Taking the logarithm of both sides and multiplying by , the probability that Z occurs is at most . Assuming that up to decryption queries are not rejected, the adversary can rule out up to values of (i.e., outputs of ). Combining these, we have Equation (1) which represents an upper bound on the probability of Z. This probability is negligible given that under the assumptions. This proves the proposition. □
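The entropy accounting above repeatedly applies the standard chain rule for average min-entropy (the generic form behind Lemma 1; the symbols here are ours): conditioning on a variable with at most 2^λ possible values costs at most λ bits.

```latex
% Chain rule for average min-entropy: if Y takes at most 2^{\lambda} values,
\widetilde{H}_{\infty}(X \mid Y, Z) \;\ge\; \widetilde{H}_{\infty}(X \mid Z) - \lambda .
% Each leakage answer, encryption output, and challenge output is such a Y,
% so the residual entropy after all queries is the initial entropy minus
% the sum of the corresponding \lambda terms.
```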
Proposition 7.
Game 6 and Game 7 are computationally indistinguishable, given that E is a average case strong randomness extractor.
Proof.
By Game 6, all ciphertexts with an invalid component are explicitly rejected for decryption. Given any challenge ciphertext for , the adversary cannot learn any additional information on aside from those provided by , , the secret-key leakage , and outputs of up to encryption queries: and challenge queries: . Let and . Both and have possible values for and . Denote by the information from the point of view of the adversary, i.e., . Under the assumptions of the cryptosystem, for all and , we have . Combining these, we apply Lemma 1, and have the following result for each .
Given that extractor E is a average case strong randomness extractor, the value of is close to uniform from the point of view of the adversary. The claim thus follows. Combining all these claims proves the stated theorem as well. □
5.5. Cryptosystem 2
The rationale for constructing cryptosystem 2 over cryptosystem 1 is to do away with the non-reversing property of the adversary along with the need for random oracles. Cryptosystem 2 uses an collection of lossy functions. The idea behind an collection is that the system can sample up to M lossy branches. Having several lossy branches is useful given that, in a cryptosystem that uses lossy function primitives, the branch is published in the ciphertext.
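One natural way to realize up to M lossy branches, consistent with the Cartesian-product operation mentioned in Section 4 (our reconstruction; the concrete construction appears in Section 6), is to run M all-but-one collections componentwise on a product domain:

```latex
% M ABO indices s_1,\dots,s_M with designated lossy branches b_1,\dots,b_M,
% applied componentwise to x = (x_1,\dots,x_M):
g_{(s_1,\dots,s_M)}\bigl(b,(x_1,\dots,x_M)\bigr)
  \;=\; \bigl(f_{s_1}(b,x_1),\,\dots,\,f_{s_M}(b,x_M)\bigr).
% On branch b = b_i only the i-th component is lossy, so the collection has
% M lossy branches per index but loses only a single component's worth of
% information per branch.
```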
5.6. Cryptosystem 2 Requirements
Let , , and denote the bounds on the number of encryption, decryption, and challenge queries, respectively. Define . Define . All requirements of cryptosystem 2 are the same as those of cryptosystem 1, except for 2, 3, 6, and 7, which are now as follows.
- 2.
- An lossy function collection given by with function index set S, branch set B, set of lossy branches , and lossy branch , and where .
- 3.
- E is a average-case strong-randomness extractor
- 6.
- Elements of have length n. Elements of M have length l. Elements of S have length .
- 7.
- , l, , and p are such that for some constant .
5.7. Cryptosystem 2
- Key Generation.
- first runs to obtain , and with . It computes . It defines and generates . The output is a public/secret key pair , where and along with q.
- Encryption.
- : on input and message , performs the following:
- It samples , then computes and sets . Using and , it chooses .
- It samples , then computes , followed by . Using , it computes and .
- It generates . It defines , and computes .
- It computes .
- It returns ciphertext
- Decryption.
- : on input and , performs the following:
- The algorithm checks if . If not, it outputs ⊥
- It computes and
- It checks if . If not, it outputs ⊥
- It returns the plaintext message
For cryptosystem 2, the role of P is to generate fresh random numbers, and , using the joint entropy of the message m and old random numbers and (due to reset attacks). Similar to cryptosystem 1, and serve as the actual randomness input to and E, respectively. To show correctness of the cryptosystem, given m and , first computes and followed by , , and , where s is part of and b is equal to . Let be the corresponding ciphertext where is jointly sampled with under and is derived using as shown above. If this c were given to , it follows that given that , and the signature scheme satisfies the correctness property. Having passed this first check, computes and we have given that uses the same from encryption and is paired with using . Given that satisfies the correctness property of a hash proof system, we have as claimed. Given that , we have under , given that s and b are the same as those used in and is a function, thereby passing the second check. Finally, with , we have , and the original message m is recovered.
5.8. Security Results for Cryptosystem 2 Scheme
Theorem 2.
Let denote cryptosystem 2. For any equality-respecting, polynomial-time adversary that makes (a) at most challenge queries under multiple randomness indices, (b) at most encryption queries under multiple randomness indices, and (c) at most decryption queries, following the attack game of Table 1, cryptosystem 2 is secure against (i) adaptive chosen ciphertext attack, (ii) λ bits of secret-key leakage, and (iii) randomness reset attacks.
Denote the challenge ciphertext as for .
- Game 0.
- This game implements the original cryptosystem with no modifications.
- proc.Initialize()
- initialize empty arrays and
- initialize empty associative arrays and
- for sample and set
- for sample and set
- send to adversary
- proc.Enc()
- if then randomly sample and set
- compute
- return c
- proc.Challenge()
- if return ⊥
- if then randomly sample and set
- compute
- return
- proc.Dec(c)
- if , return ⊥
- compute
- return m
- proc.Leak()
- return bits of
- Game 1.
- This game is similar to Game 0, except that in , the challenger samples randomly.
- Game 2.
- This game is similar to Game 1, except that once the adversary submits a ciphertext for decryption such that and for some , the challenger automatically returns ⊥.
- Game 3.
- This game is similar to Game 2, except in encryption query i for , on input from the adversary such that , instead of sampling a fresh verification/signing key pair , it sets the verification/signing key pair as from the initialization phase.
- Game 4.
- This game is similar to Game 3, except in challenge query j for all , on input from the adversary such that , instead of sampling a fresh verification/signing key pair , it sets the verification/signing key pair as from the initialization phase.
- Game 5.
- This game is similar to Game 4, except that during initialization, it defines instead of .
- Game 6.
- This game is similar to Game 5, except that in , the challenger computes instead of using .
- Game 7.
- This game is similar to Game 6, except that in encryption queries or challenge queries, is sampled from instead of V.
- Game 8.
- This game is similar to Game 7, except that if the adversary submits a ciphertext for decryption such that , the challenger returns ⊥.
- Game 9.
- This game is similar to Game 8, except that in , the challenger draws uniformly at random from instead of computing .
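The game sequence is tied together by the standard game-hopping bound; writing S_i for the event that the adversary wins Game i (our notation):

```latex
\Bigl|\Pr[S_0] - \tfrac{1}{2}\Bigr|
  \;\le\; \sum_{i=0}^{8} \bigl|\Pr[S_i] - \Pr[S_{i+1}]\bigr|
        \;+\; \Bigl|\Pr[S_9] - \tfrac{1}{2}\Bigr| ,
% where each summand is bounded by the propositions that follow, and in
% Game 9 the extractor output is uniform, hiding the challenge bit.
```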
Proposition 8.
Game 0 and Game 1 are computationally indistinguishable, given the security of the pseudorandom function P.
Proof.
The proof for this proposition is similar to the proof for Proposition 1. □
Proposition 9.
Game 1 and Game 2 are computationally indistinguishable, given the strong one-time existential unforgeability of the signature scheme.
Proof.
The proof for this proposition is similar to the proof for Proposition 2 since in . □
Proposition 10.
Games 2 and 3 are perfectly equivalent.
Proof.
We note that the only difference in Games 2 and 3 is that, in Game 3, the verification/signature keys used in encryption queries are drawn from the initialization phase instead of being sampled on the fly. This does not affect any other part of the computation. □
Proposition 11.
Games 3 and 4 are perfectly equivalent.
Proof.
We note that the only difference in Games 3 and 4 is that, in Game 4, the verification/signature keys used in challenge queries are drawn from the initialization phase instead of being sampled on the fly. This does not affect any other part of the computation. □
Proposition 12.
Games 4 and 5 are computationally indistinguishable, given that two candidate lossy branch sets of the collection are computationally indistinguishable.
Proof.
To prove this proposition, we define two experiments, A and B. Experiment A is a sequence of sub-experiments . Experiment B is a sequence of sub-experiments . For all sub-experiments in A and B, assume a fixed and .
For experiment A, sub-experiment emulates the challenger of Game 4 perfectly. Given , sub-experiment defines q as in the initialization phase and computes , where for , we have such that . Suppose that there exists an efficient adversary that can distinguish between and . Using this adversary, we construct a simulator that can distinguish between two candidate lossy branch sets of the collection. The simulator has access to an oracle that, on input , provides it with the function index s, where s can either be or . At initialization, given , the simulator constructs and . It forwards to its oracle and receives function index s. Using s, it constructs and , draws , then forwards to the adversary. Since the simulator knows , it can answer encryption and challenge queries. Since it knows , it can answer decryption and secret-key leakage queries. Once the adversary outputs a guess , the simulator outputs 1 if . The advantage of the simulator in distinguishing between two candidate lossy branch sets of the collection is equivalent to the probability that the adversary outputs such that minus 1/2. However, given that it is computationally difficult to distinguish between two lossy branch sets in an collection, no efficient adversary can output such that with non-negligible probability. It follows that the advantage of the simulator is likewise negligible. By construction, if the oracle computes , the simulator is performing sub-experiment . If the oracle computes , the simulator is performing sub-experiment for any .
For experiment B, sub-experiment emulates sub-experiment of experiment A perfectly. Given , sub-experiment defines q as:
in the initialization phase and computes , where for , we have such that , and where for , we have such that . Suppose that there exists an efficient adversary that can distinguish between and . Using this adversary, we construct a simulator that can distinguish between two candidate lossy branch sets of the collection. The simulator has access to an oracle that, on input , provides it with function index s, where s can either be or . At initialization, given , the simulator constructs as follows
It forwards to its oracle and receives function index s. Using s, it constructs and , draws , then forwards to the adversary. Since the simulator knows , it can answer encryption and challenge queries. Since it knows , it can answer decryption and secret-key leakage queries. Once the adversary outputs a guess , the simulator outputs 1 if . The advantage of the simulator in distinguishing between two candidate lossy branch sets of the collection is equivalent to the probability that the adversary outputs such that less 1/2. However, given that it is computationally difficult to distinguish between two lossy branch sets in an collection, no efficient adversary can output such that with non-negligible probability. It follows that the advantage of the simulator is likewise negligible. By construction, if the oracle computes , the simulator is performing sub-experiment . If the oracle computes , the simulator is performing sub-experiment for any .
Combining the above, we have that sub-experiment of experiment A perfectly simulates Game 4. Sub-experiment of experiment B perfectly simulates sub-experiment of experiment A. Sub-experiment of experiment B perfectly simulates Game 5. Since all sub-experiments in A and B are pairwise indistinguishable, the proposition thus follows. □
Proposition 13.
Games 5 and 6 are perfectly equivalent.
Proof.
The proof for this proposition is similar to the proof for Proposition 4. □
Proposition 14.
Games 6 and 7 are indistinguishable given the hardness of the underlying subset membership problem in H.
Proof.
The proof for this proposition is similar to the proof for Proposition 5. □
Proposition 15.
Game 7 and Game 8 are computationally indistinguishable, given (i) the lossy property of the collection, (ii) the computational hardness of determining lossy branches for the collection, and (iii) the -universal property of H.
Proof.
Define Z to be the event that some ciphertext with , is accepted for a decryption query in Game 7, but is rejected in Game 8. It follows that Game 7 and Game 8 proceed identically until Z occurs. Given at most decryption queries, encryption queries, and challenge queries, we claim the following.
where represents the probability of some adversary generating a lossy branch for the collection and , with and . We first prove the equation for . represents the event that c is accepted for decryption, and where is an injective branch with . Let be a decryption query input that triggers . In encryption, serves as input to . From the adversary’s point of view, is dependent on , , , the set of encryption query outputs and the set of challenge ciphertexts. For and , the signatures and do not provide additional information on , since they are functions of and , respectively. For and , the branches and likewise do not provide additional information as they are independently sampled. Using results of Lemma 1, we prove the following:
The first inequality applies Lemma 1. For the second inequality, given challenge ciphertext for , information on is provided by , and as shown above. Given that has possible values, and has possible values, the second inequality follows by applying Lemma 1. For the third inequality, given any encryption query output for , information on is covered by , and as shown above, where has possible values, while has possible values. Given that the total number of encryption queries is , the third inequality follows from Lemma 1. For the last inequality, for all and , we have , under the assumption that H is an -universal hash proof system.
Using the above set of inequalities, we note that with . Starting with Game 2, all ciphertexts for decryption involve injective branches. Injective functions preserve the min-entropy of their inputs, and we have , where . Using the fact that H is an -universal hash proof system, we have . We then have:
Taking the logarithm of both sides and multiplying by , the probability that occurs is at most . Assuming that up to decryption queries are not rejected, the adversary can rule out up to values of (i.e., outputs of ). Combining these, we have:
is negligible given that by assumption. We now prove the second term on the right-hand side of Equation (2). accounts for the event that an adversary submits c for decryption such that is a new lossy branch, i.e., for any prior encryption query output and for any . Suppose that occurs under some efficient adversary. Using this adversary, we construct a simulator that can efficiently generate a lossy branch for . The simulator has access to an oracle that provides it with a function index s, but the oracle does not disclose the corresponding set of lossy branches. In encryption query i for , the oracle provides the simulator with a signing/verification key pair such that is equal to a lossy branch in . The same is done by the oracle for challenge queries. At initialization, the simulator completely emulates the challenger in Game 7, but requests its oracle for s. Since the simulator knows , s, , and can request its oracle for signing and verification keys, it can answer any encryption or challenge query. Since the simulator knows , it can answer secret-key leakage queries and decryption queries. The simulator keeps a list of all the ciphertext inputs it received for a decryption query. At the end of the game, the simulator randomly picks and outputs the branch of the ith decryption query ciphertext input. Suppose that with probability , the adversary is able to query a ciphertext for decryption such that its branch is lossy. The probability that the simulator outputs a lossy branch for is then . However, given that it is computationally difficult to generate a lossy branch for , is negligible. It follows that in Equation (2) or is also negligible. Combining all these facts, the proposition follows. □
Proposition 16.
Game 8 and Game 9 are computationally indistinguishable, given that E is an average-case strong randomness extractor.
Proof.
By Game 8, all ciphertexts with an invalid component are explicitly rejected for decryption. Denote challenge ciphertext as for . Denote an encryption query output as for . For each , the adversary cannot learn any information on aside from that provided by , and , , encryption query outputs and challenge ciphertexts . Both and have possible values for and . has possible values, and has possible values for and . Under the assumptions of the cryptosystem, for all and , we have . Combining these, we apply Lemma 1, and have the following result for each .
Applying the above for , and given that extractor E is an average-case strong randomness extractor, the value of is close to uniform from the point of view of the adversary. The claim thus follows. Combining all these claims proves the stated theorem as well. □
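The extractor E invoked in this proposition can be realized, for instance, from a pairwise-independent hash family via the average-case leftover hash lemma. The following Python sketch illustrates the interface under that assumption; the modulus, function names, and parameter sizes are ours, not the paper's.

```python
import secrets

# A pairwise-independent hash family h_{a,b}(x) = ((a*x + b) mod P) mod 2^m.
# By the average-case leftover hash lemma, a function drawn from such a
# family with a public seed (a, b) acts as a strong randomness extractor:
# the output is close to uniform even given the seed, provided the source
# x has enough min-entropy.
P = (1 << 127) - 1  # a Mersenne prime, so arithmetic mod P is convenient

def ext_setup(m_bits):
    """Sample the extractor seed; it may be published in the clear."""
    a = secrets.randbelow(P - 1) + 1
    b = secrets.randbelow(P)
    return (a, b, m_bits)

def ext(seed, x):
    """Extract m_bits nearly uniform bits from a high-min-entropy input x."""
    a, b, m_bits = seed
    return ((a * x + b) % P) % (1 << m_bits)

seed = ext_setup(64)
x = secrets.randbits(120)   # stand-in for a high-min-entropy source
pad = ext(seed, x)          # e.g., a mask used to blind the plaintext
assert 0 <= pad < (1 << 64)
```

In the cryptosystems above, the seed travels with the ciphertext while `pad` masks the message; the strong-extractor property is what keeps the masked value close to uniform even after the leakage queries.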
6. Concrete Instantiations
Let be a group sampling algorithm that takes as input a security parameter and outputs a tuple , where is a cyclic group of order p for some prime p, and g is a generator of the group. Let . The DDH assumption states that the ensemble of tuples is computationally difficult to distinguish from the ensemble of tuples .
6.1. Hash Proof System Based on DDH
This concrete example of a hash proof system follows [2].
- Parameter Generation.
- first runs to obtain . It randomly samples followed by . It defines the secret key as . It defines the projective hash function as . In this case, the projection function associated with is the same, i.e., . Using , it constructs the public key , where . In this case, the set of ciphertexts C consists of , i.e., . The set of valid ciphertexts consists of all elements of for which is a DH-triple, i.e., .
- Public Evaluation.
- takes as input public key , and witness that . Let . Given u, first checks if . If true, outputs , which equals given that: We note here that the value of is evaluated using only the auxiliary information h, without knowing the secret key .
- Private Evaluation.
- takes as input private key , and . Let . evaluates on any element of C without requiring a witness. In this case, outputs .
As shown in [2], if is evaluated in without using the auxiliary information h, the probability that equals some number is , which is close to uniform.
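As a minimal illustration of this hash proof system, the following Python sketch instantiates it over a toy order-11 subgroup of Z_23^* (parameters far too small for security); it checks that public evaluation with the witness agrees with private evaluation under the secret key.

```python
import secrets

# Toy parameters: q = 2p + 1 is a safe prime; we work in the order-p
# subgroup of Z_q^* (the quadratic residues). Illustrative sizes only.
q, p = 23, 11
g1, g2 = 4, 9          # two generators of the order-11 subgroup

# Parameter generation: secret key (x1, x2); projective key h = g1^x1 * g2^x2.
x1, x2 = secrets.randbelow(p), secrets.randbelow(p)
h = (pow(g1, x1, q) * pow(g2, x2, q)) % q

# A valid element is a DH-pair (u1, u2) = (g1^r, g2^r) with witness r.
r = secrets.randbelow(p)
u1, u2 = pow(g1, r, q), pow(g2, r, q)

pub = pow(h, r, q)                             # public evaluation: uses h and the witness r
priv = (pow(u1, x1, q) * pow(u2, x2, q)) % q   # private evaluation: uses only the secret key
assert pub == priv   # h^r = g1^(x1*r) * g2^(x2*r) = u1^x1 * u2^x2
```

On invalid pairs (u1, u2) that are not of the form (g1^r, g2^r), the private evaluation remains close to uniform from the adversary's view, which is the smoothness property the proofs rely on.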
6.2. ABO Function Collection Based on DDH
Given , let . The ElGamal cryptosystem is set up as follows: (i) a secret key is drawn randomly, and (ii) the public key is computed as [8]. Given public key h and secret key z, let denote the encryption algorithm of ElGamal, while is the decryption algorithm. Encryption of a message is done by first drawing a random number ; the ciphertext c is the tuple . Decryption, on the other hand, is computed as . The correctness of the ElGamal cryptosystem follows from . From [8], this cryptosystem is semantically secure under the DDH assumption and is additively homomorphic: given two ciphertexts and for any pair of messages and random elements , they can be 'added' as , where ⊠ is the coordinate-wise multiplication of ciphertexts. This likewise applies to the exponentiation operation, i.e., . We also define the operation ⊞: given ciphertext , we can add a scalar to the underlying plaintext, i.e., , using the r used in computing c.
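A toy Python sketch of this "exponential" ElGamal variant and its homomorphic operations (⊠ for adding plaintexts, ⊞ for adding a scalar) follows; the group parameters are illustrative only.

```python
import secrets

# "Exponential" ElGamal over the order-p subgroup of Z_q^*: the message m
# sits in the exponent (g^m), so multiplying ciphertexts coordinate-wise
# adds the underlying plaintexts. Toy parameters only.
q, p, g = 23, 11, 4

z = secrets.randbelow(p)    # secret key
h = pow(g, z, q)            # public key h = g^z

def enc(m):
    r = secrets.randbelow(p)
    return (pow(g, r, q), (pow(g, m, q) * pow(h, r, q)) % q)

def dec(c):
    c1, c2 = c
    return (c2 * pow(c1, -z, q)) % q   # recovers g^m (not m itself)

def cmul(ca, cb):   # the ⊠ operation: coordinate-wise ciphertext product
    return ((ca[0] * cb[0]) % q, (ca[1] * cb[1]) % q)

def cadd(c, a):     # the ⊞ operation: add a scalar to the plaintext
    return (c[0], (c[1] * pow(g, a, q)) % q)

m1, m2 = 3, 5
assert dec(cmul(enc(m1), enc(m2))) == pow(g, m1 + m2, q)  # plaintexts add under ⊠
assert dec(cadd(enc(m1), 2)) == pow(g, m1 + 2, q)
```

Note that decryption returns g^m rather than m; this is harmless for the ABO construction, which only needs the homomorphic structure, not message recovery.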
6.2.1. Matrix Encryption
Let be some integer and p a prime number. Given a matrix , can be encrypted based on ElGamal. Encryption produces indistinguishable ciphertexts under the DDH assumption (Lemma 4 below). The first step is to sample for , followed by computing for . Encrypting is done by drawing n random numbers , and then, for rows and columns , we compute the ciphertext . This scheme outputs a ciphertext tuple that serves as the encryption of , where the vector and the matrix are as follows.
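The matrix encryption step can be sketched as follows in Python, again over a toy group: one key pair per column, one shared random value per row, and an exhaustive-search discrete log for decryption that is feasible only at this toy scale.

```python
import secrets

# ElGamal matrix encryption (after [8]): one ElGamal key pair per column and
# one shared random value per row, so an n x n matrix is encrypted with only
# n first components plus the n x n matrix of second components.
q, p, g = 23, 11, 4     # toy parameters
n = 3

z = [secrets.randbelow(p) for _ in range(n)]   # column secret keys
h = [pow(g, zj, q) for zj in z]                # column public keys h_j = g^z_j
r = [secrets.randbelow(p) for _ in range(n)]   # row randomness r_i

def enc_matrix(M):
    c1 = [pow(g, ri, q) for ri in r]           # one first component per row
    c2 = [[(pow(g, M[i][j], q) * pow(h[j], r[i], q)) % q
           for j in range(n)] for i in range(n)]
    return c1, c2

def dlog(y):
    # brute-force discrete log; feasible only because p is tiny here
    return next(e for e in range(p) if pow(g, e, q) == y)

def dec_matrix(c1, c2):
    return [[dlog((c2[i][j] * pow(c1[i], -z[j], q)) % q)
             for j in range(n)] for i in range(n)]

I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
c1, c2 = enc_matrix(I)
assert dec_matrix(c1, c2) == I
```

Sharing r_i across a row is what makes the ciphertext compact, and the DDH-based indistinguishability of Lemma 4 covers exactly this reuse pattern.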
Lemma 4
([8]). The ElGamal matrix encryption scheme produces indistinguishable ciphertexts.
6.2.2. DDH-Based ABO Function Collection
Let a collection of ABO functions be given by ([8] also includes a trapdoor algorithm but we do not include it here). Given , let denote the set of branches with lossy branch . Each element of B belongs to . The algorithms are described as follows:
- Function sampling algorithm.
- takes as input the security parameter , and the lossy branch . Denote . The algorithm first runs . Let denote the identity matrix over . samples a vector of secret keys and forms the vector . It also samples a random vector, . The output of consists of the function index , which is the second element (i.e., the matrix ) of the ElGamal matrix encryption of (using , , and ), along with the consisting of the vector of secret keys . The vectors and do not need to be made public. In detail, we have as follows.
- Evaluation algorithm.
- takes as input the function index consisting of the matrix sampled by , a desired branch b from B, and a vector . The output consists of a vector . In summary: Let denote the jth coordinate of and let denote the dot product of and . Let denote the jth coordinate of , and let denote the function that discards the first element in its output and returns only . Using the homomorphic properties of the system, we restate as: . This is computed without knowing and from the sampling stage; only is needed to compute .
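Stripping away the ElGamal layer entirely, the functional skeleton of this ABO evaluation reduces to multiplying the input by the matrix (b − b0)·I over Z_p, where b0 denotes the lossy branch (a name we introduce here). The toy sketch below keeps only this skeleton, so it hides nothing about b0; it merely shows why branches b ≠ b0 act injectively while b = b0 collapses the input.

```python
# Toy skeleton of the ABO branch structure with the encryption layer removed:
# the plaintext matrix underlying the function index is (b - b0) * I over Z_p,
# where b0 is the lossy branch. This toy does NOT hide b0 (the real scheme
# hides it under ElGamal matrix encryption); it only exhibits the
# injective-versus-lossy geometry of the branches.
p, n = 11, 4
b0 = 7                                    # the lossy branch (hypothetical value)

def abo_eval(b, x):
    scale = (b - b0) % p                  # diagonal entry of (b - b0) * I
    return tuple((scale * xi) % p for xi in x)

x1, x2 = (1, 2, 3, 4), (5, 6, 7, 8)
assert abo_eval(3, x1) != abo_eval(3, x2)     # b != b0: distinct inputs stay distinct
assert abo_eval(b0, x1) == abo_eval(b0, x2)   # b == b0: every input maps to zero
```

In the real construction the residual first components leave the lossy branch with a small, bounded image rather than a constant one, which is what the lossiness bound in Lemma 5 quantifies.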
Lemma 5
([8]). The algorithms , described above, give a collection of ABO lossy functions under the DDH assumption for .
6.3. Lossy Function Collection
Let n, p be values of polynomials in , and let . Let S denote the set of function indices of a collection and let denote the collection of sets of branches indexed by . A concrete instantiation of a collection is given by algorithms , , which are as follows.
Concrete Instantiation of the Collection
Given , define as the branch set corresponding to function index . Define as a lossy branch set of size M with elements for . Using , define q as the ordered tuple for , . Suppose that are algorithms that give an ABO collection satisfying the required properties. Using , we implement , as follows.
- Function sampling algorithm.
- takes as input and q. The algorithm computes . Given q and p, outputs s, where . Let . Denote by the ith coordinate of s, denote by the ith coordinate of q. Using the algorithm , we compute for so that .
- Evaluation algorithm.
- : on input s, branch and message , the output is a vector . Denote by the ith coordinate of s and denote by the ith coordinate of . Using the algorithm , is computed as for , so that .
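The Cartesian-product structure of the LM collection can be sketched by instantiating M independent ABO functions, one per lossy branch q_i, and evaluating all of them on the same input. The sketch below uses a non-hiding toy ABO (plaintext skeleton (b − q_i)·I over Z_p) purely to exhibit the branch behaviour; every name here is our own.

```python
# Sketch of the LM collection as a Cartesian product of M ABO instances: the
# i-th coordinate of the function index is an ABO function whose lossy branch
# is q_i, and evaluation applies every coordinate to the same input. The ABO
# used here is a non-hiding toy; the real scheme wraps each coordinate in
# ElGamal matrix encryption so that q stays hidden.
p, n = 11, 4

def abo_gen(lossy_branch):
    def f(b, x):
        scale = (b - lossy_branch) % p
        return tuple((scale * xi) % p for xi in x)
    return f

def lm_gen(q):                 # q: ordered tuple of M lossy branches
    return [abo_gen(qi) for qi in q]

def lm_eval(s, b, x):          # evaluate all M coordinates on the same input
    return tuple(f(b, x) for f in s)

q = (2, 5, 9)                  # M = 3 lossy branches
s = lm_gen(q)
x1, x2 = (1, 2, 3, 4), (1, 2, 3, 5)
zero = (0,) * n
assert lm_eval(s, 7, x1) != lm_eval(s, 7, x2)   # branch outside q: injective
assert lm_eval(s, 5, x1)[1] == zero             # branch q_1 = 5: that coordinate collapses
```

Sampling a lossy branch is then just picking any coordinate of q, matching the argument in the proof of Lemma 6.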
Lemma 6.
Assume that are algorithms that give an ABO collection of lossy functions satisfying the required properties for an ABO collection. Using , the algorithms , , presented above, give a collection of lossy functions that satisfies the required properties of a collection under the DDH assumption.
Proof.
Property 1 is satisfied since , which can be constructed in polynomial time by drawing M elements of . For property 2, each coordinate of the function index s is computed as in polynomial time. Since this is done M times, the total time of is polynomial. For property 4, to generate an element of , one has to sample elements of , which cannot be done in polynomial time. To prove property 3, we construct experiments . Let with represent two ordered tuples. Denote and . Experiment computes . For , experiment constructs q as follows:
Suppose that some polynomial time adversary can distinguish between and for any . Using this adversary, we construct a simulator that breaks property 3 of the ABO collection given by . The simulator has access to an oracle that, on input , provides it with either or . For , let be fixed with and where and . Given , the simulator constructs and as follows during initialization:
For , the simulator computes , since and are equal for all coordinates not equal to . For , the simulator forwards to its oracle and it receives . The simulator then forms and forwards it to its adversary. Once the adversary outputs a guess , the simulator outputs . The advantage of the simulator is thus equal to the probability that the adversary outputs and the oracle computes . However, given that the ABO collection given by satisfies the required properties, no efficient adversary would be able to output correctly with non-negligible probability. It follows that the advantage of the simulator is likewise negligible. By construction, if the oracle computes , the simulator is performing experiment , while if the oracle computes , the simulator is performing experiment for . Experiment computes , while computes . This proves property 3.
We now show that , satisfy the lossiness property. Let for some q. We note that to efficiently sample a lossy branch, one can simply pick any ith coordinate of q. Let denote the jth coordinate of , where is the output of for some . For with , we have , which is injective since . For , we have , which is lossy given that , i.e., is computed with as the lossy branch. It follows that has, at most, p possible values. Moving from the domain to the binary domain, let , with , have possible values for some . It follows that can have, at most, possible values. This proves the lossiness property. □
7. Conclusions and Future Work
In this paper, we proposed cryptosystems that seek to address the problem of constructing CCA2-secure public-key cryptosystems that are resilient against related randomness attacks and related key attacks, albeit under the more limited classes of randomness reset attacks and constant-bit secret-key leakage attacks. Under this security notion, attacks on both the encryption and decryption processes of a cryptosystem are considered. We formally defined this security notion in terms of an attack game between a challenger and an adversary, where the adversary has access to encryption queries with randomness reset and to secret-key leakage queries, in addition to the standard challenge and decryption queries of CCA2 security. Under this attack game, we presented two provably secure cryptosystems: the first relies on the random oracle assumption, while the second is a standard-model construction that relies on a proposed primitive called lossy functions, which can provide up to M lossy branches. In particular, the second cryptosystem uses the loss of information provided by multiple lossy branches to remain secure under a bounded number of encryption and challenge queries. While the collection retains a higher degree of information in its lossy branches and requires more memory than other lossy functions, it is easy to construct, as it uses Cartesian product operations over ABO lossy function primitives.
For future work, stronger public-key cryptosystems could perhaps be developed such that they are secure against more general types of related randomness attacks or related key attacks. For instance, instead of a randomness reset, a randomness attack may involve linear or polynomial functions of the randomness. The same can be said of secret-key leakage attacks, wherein, instead of the leakage of a constant number of bits, the leakage could be arbitrary functions of the secret key. In addition, further analysis can be done on the efficiency and experimental performance of the algorithm. In particular, the following can be explored: (1) to experimentally assess the efficiency/speed of cryptosystem 1 using an appropriate hash function as a simulated random oracle, and to compare this with cryptosystem 2 using the LM hash function and with other algorithms; (2) to experimentally assess the memory complexity of the LM lossy function relative to the other hash functions in the literature; (3) to provide a discussion on the time and space complexity of our algorithm relative to other algorithms; and (4) to perform an experimental simulation of a randomness reset attack and constant secret-key leakage attack, and demonstrate the algorithm’s response and resiliency to these attacks.
Author Contributions
Conceptualization, A.L.; formal analysis, A.L. and H.A.; writing—review and editing, H.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Engineering Research and Development Technology (ERDT) program of the Department of Science and Technology (DOST), Philippines.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Koç, Ç.K.; Özdemir, F.; Ödemiş Özger, Z. Rivest-Shamir-Adleman Algorithm. In Partially Homomorphic Encryption; Springer: Berlin/Heidelberg, Germany, 2021; pp. 37–41.
- Boneh, D.; Shoup, V. A Graduate Course in Applied Cryptography. 2020. Available online: https://toc.cryptobook.us/book.pdf (accessed on 23 December 2021).
- Cramer, R.; Shoup, V. A practical public key cryptosystem provably secure against adaptive chosen ciphertext attack. In Annual International Cryptology Conference; Springer: Berlin/Heidelberg, Germany, 1998; pp. 13–25.
- Bellare, M.; Rogaway, P. Random oracles are practical: A paradigm for designing efficient protocols. In Proceedings of the 1st ACM Conference on Computer and Communications Security, Fairfax, VA, USA, 3–5 November 1993; pp. 62–73.
- Cramer, R.; Shoup, V. Universal hash proofs and a paradigm for adaptive chosen ciphertext secure public-key encryption. In Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, Amsterdam, The Netherlands, 28 April–2 May 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 45–64.
- Lucks, S. A variant of the Cramer-Shoup cryptosystem for groups of unknown order. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Queenstown, New Zealand, 1–5 December 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 27–45.
- Lafourcade, P.; Robert, L.; Sow, D. Fast Cramer-Shoup Cryptosystem. In Proceedings of the 18th International Conference on Security and Cryptography SECRYPT 2021, Paris, France, 6–8 July 2021.
- Peikert, C.; Waters, B. Lossy trapdoor functions and their applications. SIAM J. Comput. 2011, 40, 1803–1844.
- Naor, M.; Segev, G. Public-key cryptosystems resilient to key leakage. In Proceedings of the Annual International Cryptology Conference, Santa Barbara, CA, USA, 16–20 August 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 18–35.
- Bellare, M.; Brakerski, Z.; Naor, M.; Ristenpart, T.; Segev, G.; Shacham, H.; Yilek, S. Hedged public-key encryption: How to protect against bad randomness. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Tokyo, Japan, 6–10 December 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 232–249.
- Paterson, K.G.; Schuldt, J.C.; Sibborn, D.L. Related randomness attacks for public key encryption. In Proceedings of the International Workshop on Public Key Cryptography, Buenos Aires, Argentina, 26–28 March 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 465–482.
- Faonio, A.; Venturi, D. Efficient public-key cryptography with bounded leakage and tamper resilience. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Hanoi, Vietnam, 4–8 December 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 877–907.
- Qin, B.; Liu, S. Leakage-resilient chosen-ciphertext secure public-key encryption from hash proof system and one-time lossy filter. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Bengaluru, India, 1–5 December 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 381–400.
- Marton, K.; Suciu, A.; Ignat, I. Randomness in digital cryptography: A survey. Rom. J. Inf. Sci. Technol. 2010, 13, 219–240.
- Yuen, T.H.; Zhang, C.; Chow, S.S.; Yiu, S.M. Related randomness attacks for public key cryptosystems. In Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, Singapore, 14–17 April 2015; pp. 215–223.
- Yilek, S. Resettable public-key encryption: How to encrypt on a virtual machine. In Proceedings of the Cryptographers’ Track at the RSA Conference, San Francisco, CA, USA, 1–5 March 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 41–56.
- Garfinkel, T.; Rosenblum, M. When Virtual Is Harder than Real: Security Challenges in Virtual Machine Based Computing Environments. In HotOS; USENIX: Berkeley, CA, USA, 2005.
- Ristenpart, T.; Yilek, S. When Good Randomness Goes Bad: Virtual Machine Reset Vulnerabilities and Hedging Deployed Cryptography. In NDSS; Internet Society: Reston, VA, USA, 2010.
- Bellare, M.; Rogaway, P. Code-Based Game-Playing Proofs and the Security of Triple Encryption. IACR Cryptol. ePrint Arch. 2004, 2004, 331.
- Austrin, P.; Chung, K.M.; Mahmoody, M.; Pass, R.; Seth, K. On the impossibility of cryptography with tamperable randomness. Algorithmica 2017, 79, 1052–1101.
- Tiri, K. Side-channel attack pitfalls. In Proceedings of the 2007 44th ACM/IEEE Design Automation Conference, San Diego, CA, USA, 4–8 June 2007; pp. 15–20.
- Nisan, N.; Ta-Shma, A. Extracting randomness: A survey and new constructions. J. Comput. Syst. Sci. 1999, 58, 148–173.
- Dodis, Y.; Vaikuntanathan, V.; Wichs, D. Extracting Randomness from Extractor-Dependent Sources. In Advances in Cryptology–EUROCRYPT 2020; 2020; pp. 313–342. Available online: https://par.nsf.gov/servlets/purl/10165162 (accessed on 23 December 2021).
- Dodis, Y.; Reyzin, L.; Smith, A. Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. In Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, Interlaken, Switzerland, 2–6 May 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 523–540.
- Håstad, J.; Impagliazzo, R.; Levin, L.A.; Luby, M. A pseudorandom generator from any one-way function. SIAM J. Comput. 1999, 28, 1364–1396.
- Hemenway, B.; Libert, B.; Ostrovsky, R.; Vergnaud, D. Lossy encryption: Constructions from general assumptions and efficient selective opening chosen ciphertext security. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Seoul, Korea, 4–8 December 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 70–88.
- Hofheinz, D. All-but-many lossy trapdoor functions. In Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, Cambridge, UK, 15–19 April 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 209–227.
- Canetti, R.; Goldwasser, S. An efficient threshold public key cryptosystem secure against adaptive chosen ciphertext attack. In Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, Prague, Czech Republic, 2–6 May 1999; Springer: Berlin/Heidelberg, Germany, 1999; pp. 90–106.
- Wee, H. Public key encryption against related key attacks. In Proceedings of the International Workshop on Public Key Cryptography, Darmstadt, Germany, 21–23 May 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 262–279.
- Paterson, K.G.; Schuldt, J.C.; Sibborn, D.L.; Wee, H. Security against related randomness attacks via reconstructive extractors. In Proceedings of the IMA International Conference on Cryptography and Coding, Oxford, UK, 15–17 December 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 23–40.
- Boneh, D.; Boyen, X.; Shacham, H. Short group signatures. In Proceedings of the Annual International Cryptology Conference, Santa Barbara, CA, USA, 15–19 August 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 41–55.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).