Statically Aggregate Verifiable Random Functions and Application to E-Lottery

Abstract: Cohen, Goldwasser, and Vaikuntanathan (TCC’15) introduced the concept of aggregate pseudo-random functions (PRFs), which allow efficiently computing the aggregate of PRF values over exponential-sized sets. In this paper, we explore the aggregation augmentation on verifiable random functions (VRFs), introduced by Micali, Rabin, and Vadhan (FOCS’99), as well as its application to e-lottery schemes. We introduce the notion of static aggregate verifiable random functions (Agg-VRFs), which perform aggregation for VRFs in a static setting. Our contributions can be summarized as follows: (1) we define static aggregate VRFs, which allow the efficient aggregation of VRF values and the corresponding proofs over super-polynomially large sets; (2) we present a static Agg-VRF construction over bit-fixing sets with respect to product aggregation, based on the q-decisional Diffie–Hellman exponent assumption; (3) we test the performance of our static Agg-VRF instantiation in comparison to a standard (non-aggregate) VRF in terms of the time cost of the aggregation and verification processes, which shows that Agg-VRFs considerably lower the verification time over large sets; and (4) by employing Agg-VRFs, we propose an improved e-lottery scheme based on the framework of Chow et al.’s VRF-based e-lottery proposal (ICCSA’05). We evaluate the performance of Chow et al.’s e-lottery scheme and our improved scheme, and the latter shows a significant improvement in the efficiency of generating the winning number and of the player verification. In the following, ℓ denotes the input length. The experimental results show that, even for 1024-bit inputs, the aggregation of 2^1004 pairs of function values/proofs can be computed very efficiently, in 6881 ms. Moreover, the time required to verify the aggregated function value/proof of these 2^1004 pairs increases by only 50% compared with the verification time for a single function value/proof pair of the standard VRF.
Sections 3.2 and 3.3 present a detailed efﬁciency discussion and our experimental tests and comparisons.


Introduction
Verifiable random functions (VRFs), initially introduced by Micali, Rabin, and Vadhan [1], can be seen as the public key equivalent of pseudorandom functions (PRFs) that, besides the pseudorandomness property (i.e., the function looks random at any input x), also provide the property of verifiability. More precisely, VRFs are defined by a pair of public and secret keys (pk, sk) in such a way that they provide not only the efficient computation of the pseudorandom function f sk (x) = y for any input x but also a non-interactive publicly verifiable proof π sk (x) that, given access to pk, allows the efficient verification of the statement f sk (x) = y for all inputs x. VRFs have been shown to be very useful in multiple application scenarios including key distribution centres [2], non-interactive lottery systems used in micropayments [3], domain name security extensions (DNSSEC) [4][5][6], e-lottery schemes [7], and proof-of-stake blockchain protocols such as Ouroboros Praos [8,9].
Cohen, Goldwasser, and Vaikuntanathan [10] were the first to investigate how to answer aggregate queries for PRFs over exponential-sized sets and introduced a type of augmented PRFs, called aggregate pseudo-random functions, which significantly enriched the existing family of (augmented) PRFs including constrained PRFs [11], key-homomorphic PRFs [12], and distributed PRFs [2]. Inspired by the idea of aggregated PRFs [10], in this paper, we explore the aggregation of VRFs and introduce a new cryptographic primitive, static aggregate verifiable random functions (static Agg-VRFs), which allow not only the efficient aggregation operation both on function values and proofs but also the verification on the correctness of the aggregated results.
Aggregate VRFs allow the efficient aggregation of a large number of function values, as well as the efficient verification of the correctness of the aggregated result by employing the corresponding aggregated proof. Let us give an example to illustrate this property. Consider a cloud-assisted computing setting where a VRF is employed in the client-server model, i.e., Alice is given access to a random function whose description (or secret key) is stored by a server (seen as the random value provider). Whenever Alice requests an arbitrary bit-string x, the server simply computes the function value y = f(x) together with the corresponding proof π and returns the tuple (x, y, π) to Alice. Alice may also request the aggregation (such as the product) of the function values over a large number of points (e.g., x_1, x_2, ..., x_n, which may match some pattern, such as having the same bits at some bit locations). In this case, aggregate VRFs allow the server to compute the product of f(x_1), ..., f(x_n) efficiently, instead of first evaluating f(x_1), ..., f(x_n) and then calculating their product. On receiving either the function value y of an individual input or the aggregated function value y_agg over multiple inputs, Alice needs to verify the correctness of the returned value. VRFs allow the verification of the correctness of y using π, while, to verify the correctness of y_agg, there is a trivial way, namely first verifying (x_i, y_i, π_i) for i = 1, ..., n using the verification algorithm of VRFs and then checking whether y_agg = ∏_{i=1}^{n} y_i; however, the running time of this approach grows with the number n. Via aggregate VRFs, the verification of y_agg can be achieved much more efficiently by using the aggregated proof π_agg that is generated by the server and returned to Alice along with y_agg.
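The trivial verification path described above can be sketched as follows. This is a toy simulation: an HMAC merely stands in for f_sk (so per-pair verification is simulated by recomputation rather than by checking real proofs), and all names (SK, P, server_aggregate, client_verify_trivially) are illustrative, not from the paper.

```python
import hashlib, hmac
from functools import reduce

SK = b"server-secret"        # held by the server (the random value provider)
P = 2**127 - 1               # toy prime modulus for the product aggregation

def f(x: bytes) -> int:
    """Toy stand-in for f_sk: an HMAC value, taken as an integer mod P."""
    return int.from_bytes(hmac.new(SK, x, hashlib.sha256).digest(), "big") % P

def server_aggregate(points):
    """The server's product aggregation over the requested points."""
    return reduce(lambda acc, x: acc * f(x) % P, points, 1)

def client_verify_trivially(points, y_agg):
    """Without Agg-VRFs the client checks every single pair: Theta(n) work.
    (Checking a pair is simulated here by recomputing f; a real client would
    instead run Verify(pk, x_i, y_i, pi_i) for each i.)"""
    prod = 1
    for x in points:
        prod = prod * f(x) % P
    return prod == y_agg

points = [b"point-%d" % i for i in range(100)]
y_agg = server_aggregate(points)
assert client_verify_trivially(points, y_agg)
```

The point of an aggregated proof π_agg is precisely to replace the Θ(n) loop in `client_verify_trivially` by a single check whose cost does not depend on n.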
A representative application of aggregate VRFs is in e-lottery schemes. More precisely, aggregate VRFs can be employed in VRF-based e-lottery schemes [7], where a random number generation mechanism is required to provide not only a winning number but also the public verifiability of the winning result, which guarantees that the dealer cannot cheat in the random number generation process. In this paper, we provide an e-lottery scheme with a significant gain in the efficiency of generating the winning numbers and verifying the winning results. In a nutshell, VRF-based e-lottery schemes [7] proceed as follows. Initially, the dealer generates a secret/public key pair (sk, pk) of a VRF and publishes the public key pk, together with a parameter T associated with the time (this is the input parameter controlling the time complexity of the delaying function D(·)) within which the dealer must release the winning ticket value. To purchase a ticket, a player chooses his bet number s and obtains a ticket ticket_i from the dealer (please refer to Section 4.2 for the generation of a ticket ticket_i on a bet number in detail). The dealer links the ticket to a blockchain, which could be created as chain_1 := H(ticket_1), chain_i := H(chain_{i−1}||ticket_i) for i > 1, and publishes chain_j, where j is the number of tickets sold so far. To generate the random winning number, the dealer first computes a VRF as (w_0, π_0) = (f_sk(d), π_sk(d)) on d = D(h), where h is the final value of the blockchain (i.e., supposing there are n tickets sold, h := H(chain_n)). Assume that the numbers used in the lottery game are {1, 2, ..., N_max}. If w_0 > N_max, then the dealer iteratively applies the VRF on w_{i−1}||d to obtain (w_i, π_i). Suppose that, within T units of time after the closing of the lottery session and after applying the VRF t times, the dealer obtains (w_t, π_t) such that w_t ≤ N_max.
Afterwards, the dealer publishes (w t , π t ) as the winning number and the corresponding proof as well as all the intermediate tuples (w 0 , π 0 ), . . . , (w t−1 , π t−1 ). If s = w t , a player wins. To verify the validity of a winning number w t , each player verifies the validity of (w i , π i ) for i = 0, . . . , t.
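The whole flow can be sketched in a few lines of Python. In this toy simulation an HMAC stands in for the VRF (so the "proofs" are placeholders and nothing is actually verifiable), the delay function D is omitted, and all names (SK, N_MAX, vrf_eval) are ours, not from [7].

```python
import hashlib, hmac

SK = b"dealer-secret-key"   # dealer's secret key (HMAC stands in for the VRF)
N_MAX = 1000                # lottery numbers are {1, ..., N_MAX}

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def vrf_eval(x: bytes):
    """Toy VRF stand-in; a real VRF would also return a verifiable proof."""
    digest = hmac.new(SK, x, hashlib.sha256).digest()
    y = int.from_bytes(digest, "big") % 1024 + 1   # value in {1, ..., 1024}
    pi = b"<placeholder proof of y on x>"
    return y, pi

# Ticket chain: chain_1 = H(ticket_1), chain_i = H(chain_{i-1} || ticket_i).
tickets = [b"ticket-%d" % i for i in range(1, 6)]
chain = H(tickets[0])
for t in tickets[1:]:
    chain = H(chain + t)
h = H(chain)                # final chain value once the session closes
d = h                       # D(h): the delay function is omitted in this toy

# Winning-number generation: re-apply the VRF until w_t <= N_MAX.
w, pi = vrf_eval(d)
transcript = [(w, pi)]      # all intermediate pairs must be published
while w > N_MAX:
    w, pi = vrf_eval(w.to_bytes(2, "big") + d)   # apply VRF on w_{i-1} || d
    transcript.append((w, pi))

print("winning number:", w, "after", len(transcript), "VRF application(s)")
```

Note that a player verifying the result must check every pair in `transcript`, which is exactly the per-step cost our aggregate construction removes.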
Chow et al.'s e-lottery scheme [7] is very promising in the ideal case where, after a small number t of applications of the VRF, a function value w_t with w_t ≤ N_max is obtained. Otherwise, the dealer needs to evaluate the VRF more times, while the player needs to verify the correctness of more tuples in order to check the winning result; the latter leads to a large computational overhead and requires storing all intermediate tuples of VRF function values and corresponding proofs, for both the dealer and the player.
Observe that both the evaluation and the verification of multiple VRF function value/proof pairs are time consuming. By using our aggregate VRF instantiation, we improve the e-lottery scheme by having the dealer evaluate the aggregate VRF at most twice to obtain a random winning number together with the corresponding proof, so that each player verifies only a single pair. This reduces the amount of data written to the dealer's storage space and also decreases the computational cost of the verification process for each player.
Our Contribution. We introduce the notion of static aggregate verifiable random functions (static Agg-VRFs). Briefly, a static Agg-VRF is a family of keyed functions each associated with a pair of keys, such that, given the secret key, one can compute the aggregation function for both the function values and the proofs of the VRFs over super-polynomially large sets in polynomial time, while, given the public key, the correctness of the aggregate function values could be checked by the corresponding aggregated proof. It is very important that the sizes of the aggregated function values and proofs should be independent of the size of the set over which the aggregation is performed. The security requirement of a static Agg-VRF states that access to an aggregate oracle provides no advantage to the ability of a polynomial time adversary to distinguish the function value from a random value, even when the adversary could query an aggregation of the function values over a specific set (of possibly super-polynomial size) of his choice.
In this paper, the aggregate operation we consider is the product of all the VRF values and proofs over inputs belonging to a super-polynomially large set. Since it is impossible to directly compute the product of a super-polynomial number of values, we show how to compute the product aggregation over a super-polynomial-size set in polynomial time. More specifically, we show how to achieve a static Agg-VRF based on Hohenberger and Waters' VRF scheme [13] for the product aggregation with respect to a bit-fixing set. We stress that, after revisiting the JN-VRF scheme [14] proposed by Jager and Niehues (currently the most efficient VRF with full adaptive security in the standard model), we find that, even though JN-VRF enjoys almost the same framework as HW-VRF, an admissible hash function H_AHF is applied to the inputs x before evaluating the function value and the corresponding proof, which destroys the nice pattern shared by all inputs in a bit-fixing set; as a consequence, it is impossible to efficiently perform product aggregation of a super-polynomial number of values f_sk(H_AHF(x)) over bit-fixing sets.
We implemented and evaluated the performance of our proposed static aggregate VRF in comparison to a standard (non-aggregate) VRF for inputs of different lengths, i.e., 56, 128, 256, 512, and 1024 bits, in terms of the time cost of aggregating the function values, aggregating the proofs, and verifying the aggregation. In all cases, our aggregate VRFs present a significant computational advantage and are more efficient than standard VRFs. Furthermore, by employing aggregate VRFs for bit-fixing sets, we propose an improved e-lottery scheme based on the framework of Chow et al.'s VRF-based e-lottery proposal [7], mainly modifying the winning result generation phase and the player verification phase. We implemented and tested the performance of both Chow et al.'s and our improved e-lottery schemes. Our improved scheme shows a significant improvement in efficiency in comparison to Chow et al.'s scheme.
Core Technique. We present a construction of static aggregate VRFs that performs the product aggregation over a bit-fixing set, following Hohenberger and Waters' [13] VRF scheme. A bit-fixing set consists of bit-strings that match a particular bit pattern; it is defined by a pattern string v ∈ {0, 1, ⊥}^ℓ as S_v = {x ∈ {0, 1}^ℓ : x_i = v_i for all i with v_i ≠ ⊥}. In the underlying HW-VRF, U_0 = g^{u_0}, ..., U_ℓ = g^{u_ℓ} are public keys and u_0, ..., u_ℓ are kept secret; the function value on x is F_sk(x) = e(g, h)^{u_0 ∏_{i=1}^{ℓ} u_i^{x_i}}, and the corresponding proofs are given using a step-ladder approach, namely π_j = g^{∏_{i=1}^{j} u_i^{x_i}} for j = 1, ..., ℓ, and π_{ℓ+1} = π_ℓ^{u_0}. To aggregate the VRF over S_v with τ fixed positions, let π^agg_0 = g^{2^{ℓ−τ}}; for i = 1, ..., ℓ, we compute π^agg_i = (π^agg_{i−1})^{c_i}, where c_i = u_i if v_i = 1, c_i = 1 if v_i = 0, and c_i = (1 + u_i)/2 if v_i = ⊥, and finally π^agg_{ℓ+1} = (π^agg_ℓ)^{u_0}. The aggregated function value is computed as y_agg = e(π^agg_{ℓ+1}, h). The aggregation verification algorithm checks the corresponding pairing equations: for i = 1, ..., ℓ, it checks the relation between π^agg_i and π^agg_{i−1} determined by v_i (e.g., e(π^agg_i, g) = e(π^agg_{i−1}, U_i) when v_i = 1), and finally that e(π^agg_{ℓ+1}, g) = e(π^agg_ℓ, U_0) and e(π^agg_{ℓ+1}, h) = y_agg.

Improved Efficiency. We provide some highlights on the achieved efficiency.

Efficiency of Improved E-Lottery Scheme. We test the performance of Chow et al.'s e-lottery scheme [7] and our improved (aggregate-VRF-based) counterpart and compare them. In our improved e-lottery scheme, the computation of the aggregated function value/proof pair and its verification are performed via a single execution of the Aggregation and AggVerify algorithms, respectively, whereas Chow et al.'s e-lottery scheme requires t steps. We performed experiments on Chow et al.'s scheme to determine how large t is in practice, i.e., at which point the dealer obtains (w_t, π_t) such that w_t ≤ N_max, and thus the computation time for the corresponding multiple function evaluations and verifications. In the experiments, we ran Chow et al.'s scheme 10 times and took the median over all runs. We reached t ≈ 2, and each run took ≈100 s for winner generation and ≈5 s for player verification. In our improved version, the generation of the winning ticket costs less than 90 s, and the verification time decreases to ≈2.5 s, which shows a significant improvement in efficiency.
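The reason the product over the super-polynomially large S_v is computable in polynomial time is distributivity: since F_sk(x) = e(g, h)^{u_0 ∏_{i: x_i=1} u_i}, the sum of these exponents over S_v factors as u_0 · ∏_{fixed i, v_i=1} u_i · ∏_{free i} (1 + u_i). The toy check below verifies this identity by working directly with exponents modulo a stand-in prime order, writing '*' for ⊥ and using a tiny ℓ so that S_v can be enumerated; all parameters are illustrative.

```python
from itertools import product
import random

# Work "in the exponent": the product of F_sk over S_v corresponds to the
# SUM of the exponents u_0 * prod_{i: x_i=1} u_i, which factors.
q = 2**127 - 1                 # stand-in for the (prime) group order
ELL = 8                        # tiny input length so S_v can be enumerated
random.seed(7)
u = [random.randrange(1, q) for _ in range(ELL + 1)]   # u_0, u_1, ..., u_ELL

v = "01" + "*" * 6             # pattern: two fixed bits, six free ('*') bits

def members(v):
    free = [i for i, c in enumerate(v) if c == "*"]
    for bits in product("01", repeat=len(free)):
        x = list(v)
        for i, b in zip(free, bits):
            x[i] = b
        yield "".join(x)

def exponent(x):
    """Exponent of F_sk(x): u_0 times u_i for every position with x_i = 1."""
    e = u[0]
    for i, b in enumerate(x):
        if b == "1":
            e = e * u[i + 1] % q
    return e

# Naive aggregation: add up all 2^(ELL - tau) exponents one by one.
naive = sum(exponent(x) for x in members(v)) % q

# Closed form: u_0 * prod_{fixed i, v_i=1} u_i * prod_{free i} (1 + u_i).
closed = u[0]
for i, c in enumerate(v):
    if c == "1":
        closed = closed * u[i + 1] % q
    elif c == "*":
        closed = closed * (1 + u[i + 1]) % q

assert naive == closed
```

The closed form touches each of the ℓ positions once, which is why the aggregation cost is polynomial in ℓ rather than in |S_v| = 2^{ℓ−τ}.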
Related work. We summarize the relevant current state-of-the-art. Verifiable Random Functions. Hohenberger and Waters' VRF scheme [13] is the first that exhibits all the desired properties of a VRF (we say that a VRF scheme has all the desired properties if it allows an exponential-sized input space, achieves full adaptive security, and is based on a non-interactive assumption). Before it, there were several VRF proposals [15][16][17], all of which have some limitation: they only allow a polynomial-sized input space, they do not achieve fully adaptive security, or they are based on an interactive assumption. Thus far, there are also many constructions of VRFs with all the desired properties based on the decisional Diffie-Hellman assumption (DDH) or the decision linear assumption (DLIN), presenting different security losses [18][19][20][21][22]. Kohl [22] provided a detailed summary and comparison of all existing efficient constructions of VRFs in terms of the underlying assumption, the sizes of the verification key and the corresponding proof, and the associated security loss. Recently, Jager and Niehues [14] provided the most efficient VRF scheme with adaptive security in the standard model, relying on computational admissible hash functions.
Aggregate Pseudorandom Functions. Cohen et al. [10] introduced the notion of aggregate PRFs, which is a family of functions indexed by a secret key with the functionality that, given the secret key, anyone is able to aggregate the values of the function over super-polynomially many PRF values with only a polynomial-time computation. They also proposed constructions of aggregate PRFs under various cryptographic hardness assumptions (one-way functions and sub-exponential hardness of the Decisional Diffie-Hellman assumption) for different types of aggregation operators such as sums and products and for several set systems including intervals, bit-fixing sets, and sets that can be recognized by polynomial-size decision trees and read-once Boolean formulas. In this paper, we explore how to aggregate VRFs, which involves efficient aggregations both on the function evaluations and on the corresponding proofs, while providing verifiability for the correctness of aggregated function value via corresponding proof.
E-lottery Schemes/Protocols. In 2005, Chow et al. [7] proposed an e-lottery scheme using a verifiable random function (VRF) and a delay function. To reduce the complexity in the (purchaser) verification phase, Liu et al. [23] improved Chow et al.'s scheme by proposing a multi-level hash chain to replace the original linear hash chain, as well as a hash-function-based delay function, which is more suitable for e-lottery networks with mobile portable terminals. Based on the secure one-way hash function and the factorization problem in RSA, Lee and Chang [24] presented an electronic t-out-of-n lottery on the Internet, which allows lottery players to simultaneously select t out of n numbers in a ticket without iterative selection. Given that the previous schemes [7,23,24] offer single participant lottery purchases on the Internet, Chen et al. [25] proposed an e-lottery purchase protocol that supports the joint purchase from multi-participants that enables them to safely and fairly participate in a mobile environment. Aiming to provide an online lottery protocol that does not rely on a trusted third party, Grumbach and Riemann [26] proposed a novel distributed e-lottery protocol based on the centralized e-lottery of Chow et al. [7] and incorporated the aforementioned multi-level hash chain verification phase of Liu et al. [23]. Considering that the existing works on e-lottery focus either on providing new functionalities (such as decentralization or threshold) or improving the hash chain or delay function, the building block of VRFs has received little attention. In this paper, we explore how to improve the efficiency of Chow et al.'s [7] e-lottery scheme by using aggregate VRFs.

Verifiable Random Functions
Let F : K × X → Y × P be an efficient function, where the key space K, domain X , range Y, and proof space P depend on the security parameter λ. Consider (Setup, Eval, Verify) as the following algorithms:
• Setup(1^λ) → (sk, pk) takes as input a security parameter λ and outputs a key pair (pk, sk). We say that sk is the secret key and pk is the verification key.
• Eval(sk, x) → (y, π) takes as input the secret key sk and x ∈ X and outputs a function value y ∈ Y and a proof π ∈ P. We write Fun_sk(x) to denote the function value y and Prove_sk(x) to denote the proof of correctness computed by Eval on input (sk, x).
• Verify(pk, x, y, π) → {0, 1} takes as input the verification key pk, x ∈ X , y ∈ Y, and the proof π ∈ P and outputs a bit.
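To make this three-algorithm interface concrete, the following toy Python sketch instantiates it in the spirit of an RSA full-domain-hash VRF (cf. the RSA-FDH-VRF of RFC 9381, not the pairing-based scheme used later in this paper); the primes are demo-sized and insecure, and all names are illustrative only.

```python
import hashlib

# Toy RSA-FDH-style VRF: the proof is a unique RSA "signature" of H(x),
# and the function value is a hash of that proof. Insecure parameters.
def setup():
    p, q = 104729, 1299709                       # toy primes: NOT secure
    N = p * q
    e = 65537
    d = pow(e, -1, (p - 1) * (q - 1))
    return d, (N, e)                             # (sk, pk)

def Hx(x: bytes, N: int) -> int:
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % N

def eval_vrf(sk, pk, x: bytes):
    N, _ = pk
    pi = pow(Hx(x, N), sk, N)                    # proof: unique RSA root
    y = hashlib.sha256(pi.to_bytes(16, "big")).digest()   # function value
    return y, pi

def verify(pk, x: bytes, y: bytes, pi: int) -> bool:
    N, e = pk
    return pow(pi, e, N) == Hx(x, N) and \
           y == hashlib.sha256(pi.to_bytes(16, "big")).digest()

sk, pk = setup()
y, pi = eval_vrf(sk, pk, b"hello")
assert verify(pk, b"hello", y, pi)
```

Uniqueness here stems from the fact that H(x) has exactly one e-th root modulo N, while pseudorandomness relies on the hash of the proof.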

Definition 1.
We say that (Setup, Eval, Verify) is a verifiable random function (VRF) if all the following properties hold: provability (correctness), uniqueness, and pseudorandomness.
Here, we note that Eval and Fun denote two different functions. The former denotes the function that outputs both function value y and proof π, while the latter denotes the function that only outputs function value y.

Bilinear Maps and the HW-VRF Scheme
Bilinear Groups. Let G and G_T be cyclic groups of prime order p. A bilinear map is an efficient mapping e : G × G → G_T which is both: (bilinear) for all g ∈ G and a, b ← Z_p, e(g^a, g^b) = e(g, g)^{ab}; and (non-degenerate) if g generates G, then e(g, g) ≠ 1.
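The two axioms can be sanity-checked with a deliberately insecure toy pairing: take G = G_T = (Z_q, +) and e(a, b) = ab mod q, which is bilinear and non-degenerate but offers no cryptographic hardness whatsoever (discrete logs are trivial); it is used here, and in the sketch accompanying the HW-VRF below, only to illustrate the algebra.

```python
# Toy bilinear map over the additive group Z_q: e(a, b) = a*b mod q.
q = 2**61 - 1          # a Mersenne prime, standing in for the group order
g = 1                  # generator of the additive group Z_q

def exp(base, a):      # "g^a" in additive notation is a * g mod q
    return a * base % q

def e(P, Q):           # the pairing
    return P * Q % q

a, b = 1234567, 7654321
assert e(exp(g, a), exp(g, b)) == exp(e(g, g), a * b)   # bilinearity
assert e(g, g) != 0                                     # non-degeneracy
```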

HW-VRF Scheme
Here, we describe one of the elegant constructions of VRFs proposed by Hohenberger and Waters [13] (abbreviated as the HW-VRF scheme), which is employed as the basis for our aggregate VRF scheme. HW-VRF is the first fully secure VRF built from the Naor–Reingold PRF [27] with an exponential-size input space; its security is based on a so-called q-type complexity assumption, namely the q-DDHE assumption. It is built as follows.
• Setup(1^λ, 1^ℓ): The setup algorithm takes as input the security parameter λ as well as the input length ℓ. It firstly runs G(1^λ) to obtain the description of the groups G, G_T and of a bilinear map e : G × G → G_T. The description of G contains the generators g, h ∈ G. Let {0, 1}^ℓ be the input space. It next selects random values u_0, u_1, ..., u_ℓ ∈ Z_p and sets U_0 = g^{u_0}, U_1 = g^{u_1}, ..., U_ℓ = g^{u_ℓ}. It then sets the keys as pk = (G, G_T, e, g, h, U_0, U_1, ..., U_ℓ) and sk = (u_0, u_1, ..., u_ℓ).
• Eval(sk, x): The evaluation algorithm computes the function value on x = x_1 ... x_ℓ as y = F_sk(x) = e(g, h)^{u_0 ∏_{i=1}^{ℓ} u_i^{x_i}}. This algorithm also outputs a proof π, which is comprised as follows. Let π_j = g^{∏_{i=1}^{j} u_i^{x_i}} for j = 1, ..., ℓ, and π_{ℓ+1} = π_ℓ^{u_0}. Let π = (π_1, ..., π_ℓ, π_{ℓ+1}).
• Verify(pk, x, y, π): To verify that y was computed correctly, proceed in a step-by-step manner: set π_0 = g and, for i = 1, ..., ℓ, check whether e(π_i, g) = e(π_{i−1}, U_i) if x_i = 1, and π_i = π_{i−1} if x_i = 0. Finally, it checks that e(π_{ℓ+1}, g) = e(π_ℓ, U_0) and e(π_{ℓ+1}, h) = y. It outputs 1 if and only if all checks verify; otherwise, it outputs 0.
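The step-ladder evaluation and verification can be exercised end-to-end inside an insecure "pairing in the clear" toy: group elements are represented by their discrete logarithms modulo q and the pairing is e(A, B) = AB mod q, so the snippet illustrates only the algebra of the checks, not security; sizes and names are illustrative.

```python
import random

# Toy run of the HW-VRF step ladder with a cleartext pairing e(A,B) = A*B mod q.
q = 2**61 - 1
random.seed(1)
ELL = 8
g, h = 1, random.randrange(2, q)
u = [random.randrange(1, q) for _ in range(ELL + 1)]    # u_0 .. u_ELL
U = [ui * g % q for ui in u]                            # U_i = "g^{u_i}"

def e(A, B):
    return A * B % q

def eval_vrf(x):
    pis, t = [], 1                                      # t: running product
    for i, b in enumerate(x):
        if b == "1":
            t = t * u[i + 1] % q
        pis.append(t * g % q)                           # pi_j = g^{prod}
    pis.append(u[0] * pis[-1] % q)                      # pi_{l+1} = pi_l^{u_0}
    return e(pis[-1], h), pis                           # y = e(pi_{l+1}, h)

def verify(x, y, pis):
    prev = g                                            # pi_0 = g
    for i, b in enumerate(x):
        ok = (e(pis[i], g) == e(prev, U[i + 1])) if b == "1" else (pis[i] == prev)
        if not ok:
            return False
        prev = pis[i]
    return e(pis[ELL], g) == e(prev, U[0]) and e(pis[ELL], h) == y

x = "10110010"
y, pis = eval_vrf(x)
assert verify(x, y, pis)
```

Each ladder step re-checks one multiplication in the exponent, which is why verification costs O(ℓ) pairings per input.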

Static Aggregate VRFs
In a (static) aggregate PRF [10] (we call the aggregate PRF proposed by Cohen, Goldwasser, and Vaikuntanathan [10] a static aggregate PRF since their aggregation algorithm takes the secret key of the PRF as input), there is an additional aggregation algorithm which, given the secret key, can efficiently compute the aggregated result of the function values over a set of inputs in polynomial time, even if the input set is of super-polynomial size. Note that in an aggregate VRF, similarly to an aggregate PRF, an additional aggregation algorithm is added to the ordinary VRF [1]; thus, aggregate VRFs can be regarded as an extension of ordinary VRFs. A static aggregate VRF differs from a static aggregate PRF [10] in that, given the secret key, the aggregation operation is performed not only on the function values but also on the corresponding proofs. Moreover, the resulting aggregated function value can be publicly verified by using the aggregated proof (together with the public key and the input subset), which proves that the aggregated function value is the correct result of aggregating all function values over the input subset.
Cohen, Goldwasser, and Vaikuntanathan [10] were the first to consider the notion of aggregate PRFs over the super-polynomial large but efficiently recognizable set classes. In their model, they treat the efficiently recognizable set ensemble as a family of predicates, i.e., for any set S there exists a polynomial-size boolean circuit C : {0, 1} * → {0, 1} such that x ∈ S if and only if C(x) = 1. Boneh and Waters [11] also employed such a predicate to define the concept of constrained PRFs with respect to a constrained set. In this paper, we employ the concept and formalization of the efficiently recognizable set in the definition of static aggregate VRFs.
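For bit-fixing sets (used later in our instantiation), such a recognizing predicate C is particularly simple; the sketch below, with '*' standing for ⊥, is one possible Python rendering (the function name and the wildcard symbol are ours).

```python
# A bit-fixing set S_v is efficiently recognizable: the predicate below plays
# the role of the polynomial-size circuit C with x in S_v iff C(x) = 1.
def C(v: str, x: str) -> bool:
    """Pattern v over {'0','1','*'}; '*' marks a free (wildcard) position."""
    return len(v) == len(x) and all(c in ("*", b) for c, b in zip(v, x))

v = "1*0*"
assert C(v, "1000") and C(v, "1101")
assert not C(v, "1110")
```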
Recall that a verifiable random function (VRF) [1] is a function F : K × X → Y × P defined over a secret key space K, a domain X , a range Y, and a proof space P (and these sets may be parameterized by the security parameter λ). Let Fun : K × X → Y denote the mapping of random function evaluations on arbitrary inputs and Prove : K × X → P denote the mapping of proof evaluations on inputs, each of which can be computed by a deterministic polynomial time algorithm.
Let Ψ_λ : (Y_λ × P_λ)* → (Y_λ × P_λ) be the aggregation function that takes as input multiple pairs of values from the range Y_λ and the proof space P_λ of the function family, and aggregates them into an aggregated function value in the range Y_λ together with the corresponding aggregated proof in the proof space P_λ.

Definition 2 (Static Aggregate VRF). Let F = {F_λ}_{λ∈N} be a VRF function family where each function F ∈ F_λ : K × X → Y × P, computable in polynomial time, is defined over a key space K, a domain X , a range Y, and a proof space P. Let S be an efficiently recognizable ensemble of sets {S_λ}_λ, where S ⊂ X for any S ∈ S, and let Ψ_λ : (Y_λ × P_λ)* → (Y_λ × P_λ) be an aggregation function. We say that F is an (S, Ψ)-static aggregate verifiable random function family (abbreviated (S, Ψ)-sAgg-VRFs) if it satisfies:
• Efficient aggregation: There exists an efficient (polynomial-time) algorithm Aggregate_{F,S,Ψ}(sk, S) → (y_agg, π_agg) which, on input the secret key sk of a VRF and a set S = {x_1, ..., x_|S|} ∈ S, outputs aggregated results (y_agg, π_agg) ∈ Y × P such that y_agg = Ψ(Fun_sk(x_1), ..., Fun_sk(x_|S|)).
• Verification for aggregation: There exists an efficient (polynomial-time) algorithm AggVerify(pk, S, y_agg, π_agg) → {0, 1} which, on input the aggregated function value y_agg and the proof π_agg for a set S ∈ S of the domain, verifies whether y_agg = Ψ(Fun_sk(x_1), ..., Fun_sk(x_|S|)) holds, using the aggregated proof π_agg.
• Correctness of aggregated values: For all (pk, sk) ← Setup(1^λ), every set S ∈ S, and the aggregate function Ψ ∈ Ψ_λ, if (y_agg, π_agg) ← Aggregate_{F,S,Ψ}(sk, S), then AggVerify(pk, S, y_agg, π_agg) = 1.
• Aggregate pseudorandomness: No p.p.t. distinguisher D, given oracle access to Eval(sk, ·) and Aggregate_{F,S,Ψ}(sk, ·), can distinguish the function value at a challenge point x* from a random element of Y with non-negligible advantage, provided that x* ∉ L_Eval and C_{S_i}(x*) = 0 for every S_i ∈ L_Agg, where L_Eval is the set of all inputs that D queries to its oracle Eval, L_Agg consists of all the sets S_i that D queries to its oracle Aggregate, and C_{S_i} is the polynomial-size boolean circuit that recognizes the set S_i.

• Compactness: There exists a polynomial poly(·) such that for every λ ∈ N, x ∈ X , set S ∈ S, and aggregate function Ψ ∈ Ψ_λ, it holds with overwhelming probability over (pk, sk) ← Setup(1^λ), (y, π) ← Eval(sk, x), and Aggregate_{F,S,Ψ}(sk, S) → (y_agg, π_agg) that the resulting aggregated value y_agg and aggregated proof π_agg have sizes |y_agg|, |π_agg| ≤ poly(λ, |x|). In particular, the sizes of y_agg and π_agg are independent of the size of the set S.
We stress that the set S over which the aggregation is performed can be super-polynomially large. Clearly, given an exponential number of values F_sk(·), it is impossible to aggregate them one by one; nevertheless, we show how to efficiently compute the aggregation function over an exponentially large set for a concrete VRF, given the secret key.
Some explanations on the notion of static aggregate VRFs. Firstly, the algorithm Aggregate_{F,S,Ψ} achieves an efficient aggregation of function values/proofs over super-polynomially large sets S in polynomial time. We stress that our aim is to work with super-polynomially large sets since, for any constant-size set, the (product) aggregation can be computed trivially, given the function value/proof pairs for all inputs in such a set. Secondly, the verification algorithm AggVerify is employed to efficiently verify the correctness of the aggregated function value y_agg. Given {(x_i, y_i, π_i)}_{i=1}^{|S|} and the aggregated function value y_agg, there is a trivial way to verify the correctness of y_agg, namely by verifying the correctness of each tuple (x_i, y_i, π_i) for i = 1, ..., |S| and then checking whether y_agg = ∏_{i=1}^{|S|} y_i; this is not computable in polynomial time if S is a super-polynomially large set. Therefore, our main concern is to achieve efficient verification of y_agg via the corresponding proof π_agg, whose size is independent of the size of S. Thirdly, the condition AggVerify(pk, S, y_agg, π_agg) = 1 is interpreted as meaning that the value y_agg is a correct result of the aggregation of {Fun_sk(x_i)}_{i=1}^{|S|}, i.e., y_agg = Ψ(Fun_sk(x_1), ..., Fun_sk(x_|S|)), as certified by the corresponding proof π_agg. We note that the verification for the aggregation does not violate the uniqueness of the underlying basic VRF. Indeed, there may exist different sets S_1 and S_2 that result in the same y_agg, but the uniqueness at any input point x ∈ S_1 (or x ∈ S_2) always holds. Looking ahead, in our instantiation of aggregate VRFs, finding two sets S_1 ≠ S_2 such that ∏_{x_i∈S_1} Fun_sk(x_i) = ∏_{x_i∈S_2} Fun_sk(x_i) is computationally hard without knowledge of sk. Lastly, the condition AggVerify(pk, S, y_agg, π_agg) = 1 does not imply Verify(pk, x_i, y_i, π_i) = 1 for all i = 1, ..., |S|, since, while maintaining a correct pair (y_agg, π_agg), we can always alter any two tuples into (x_i, y_i · r, π_i) and (x_j, y_j · r^{−1}, π_j) for a random r ∈ G, in which case Verify(pk, x_i, y_i · r, π_i) = Verify(pk, x_j, y_j · r^{−1}, π_j) = 0.
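The r / r^{−1} observation is easy to check numerically; the toy below works in a group of prime order with illustrative values (the variable names are ours).

```python
from functools import reduce

# Perturbing two "function values" by r and r^{-1} mod p leaves their product
# (and hence y_agg) unchanged, even though both tuples were tampered with.
p = 2**61 - 1
ys = [123456789, 987654321, 555555555]          # pretend function values
r = 424242

ys_altered = list(ys)
ys_altered[0] = ys_altered[0] * r % p
ys_altered[1] = ys_altered[1] * pow(r, -1, p) % p

def prod(vals):
    return reduce(lambda a, b: a * b % p, vals, 1)

assert prod(ys) == prod(ys_altered)   # the aggregated value is unchanged
assert ys != ys_altered               # but individual pairs no longer verify
```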

A Static Aggregate VRF for Bit-Fixing Sets
We now propose a static aggregate VRF, whose aggregation function is to compute products over bit-fixing sets. In a nutshell, a bit-fixing set consists of bit-strings, which match a particular bit pattern.
Recalling that S_BF = {S_v : v ∈ {0, 1, ⊥}^ℓ} is the family of bit-fixing sets on {0, 1}^ℓ, we now prove the following theorem: Theorem 1. Let ε > 0 be a constant. Choose the security parameter λ = Ω(ℓ^{1/ε}), and assume the (2^λ, 2^{−λ})-hardness of q-DDHE over the groups G and G_T. Then, the collection of verifiable random functions F defined above is a secure aggregate VRF with respect to the subsets S_BF and the product aggregation function over G and G_T.
The compactness follows straightforwardly, since the aggregated function value y_agg ∈ G_T and the aggregated proof π_agg = (π^agg_1, ..., π^agg_{ℓ+1}) ∈ G^{ℓ+1} have sizes independent of the size of the bit-fixing set S_v, i.e., of 2^{ℓ−τ}.
The proof for pseudorandomness is similar to that of HW-VRF scheme in [13] since our static aggregate VRF is built on the ground of HW-VRF and the only phase we need to deal with in the proof is to simulate the responses of the aggregation queries. Here, we provide the simulation routine that the q-DDHE solver executes to act as a challenger in the pseudorandomness game of the aggregated VRFs. The detailed analysis of the game sequence is similar to the related descriptions in [13].
Proof of Theorem 1. Let Q(λ) be a polynomial upper bound on the number of queries made by a p.p.t. distinguisher D to the oracles Eval and Aggregate. We use D to create an adversary B such that, if D wins the pseudorandomness game for aggregate VRFs with probability 1/2 + ε, then B breaks the q-DDHE assumption with probability 1/2 + 3ε/(64Q(ℓ+1)), where q = 4Q(ℓ+1) and ℓ is the input length of the static Agg-VRFs.
For x ∈ {0, 1}^ℓ, let x_i denote the ith bit of x. Define the functions C(x), J(x), Ĉ(x, i), and Ĵ(x, i) as in the HW-VRF security proof [13]. B sets U_0 = (g^{a^{m(1+k)+r'}})^{s'} and U_i = (g^{a^{r_i}})^{s_i} for i = 1, ..., ℓ. It sets the public key as (G, p, g, h, U_0, ..., U_ℓ), and the secret key implicitly includes the values u_0 = a^{m(1+k)+r'} s' and {u_i = a^{r_i} s_i}_{i∈[1,ℓ]}.

Oracle Queries to Eval(sk, ·). The distinguisher D will make queries for VRF evaluations and proofs. On receiving an input x, B first checks whether C(x) = q and aborts if this is the case. Otherwise, it defines the function value as F(x) = e((g^{a^{C(x)}})^{J(x)}, h) and the corresponding proof as π = (π_0, π_1, ..., π_ℓ), where π_0 = (g^{a^{C(x)}})^{J(x)} and π_i = (g^{a^{Ĉ(x,i)}})^{Ĵ(x,i)} for i = 1, ..., ℓ. Note that, for any x ∈ {0, 1}^ℓ, these values coincide with the real function value and proof. As a result, whenever C(x) ≠ q, B can answer all the Eval queries.

Oracle Queries to Aggregate_{F_sk,S,Ψ}(·). The distinguisher D will also make queries for aggregate values. On receiving a pattern string v ∈ {0, 1, ⊥}^ℓ, B uses the above secret key to compute the aggregated proof and the aggregated function value; that is, B answers the query Aggregate_{F_sk,S,Ψ}(S_v) as follows. Let π^agg_0 := g^{2^{ℓ−τ}}. The aggregated proof is defined as π_agg = (π^agg_1, ..., π^agg_ℓ, π^agg_{ℓ+1}), where each π^agg_i, for i = 1, ..., ℓ, is obtained from π^agg_{i−1} by an exponentiation determined by u_i and v_i, and π^agg_{ℓ+1} = (π^agg_ℓ)^{u_0}. B can compute the values π^agg_1, ..., π^agg_ℓ concretely through its knowledge of r_i, s_i, and the value π^agg_{ℓ+1} can be handled similarly using m, k, r', s'. The aggregated function value is defined as y_agg = e(π^agg_{ℓ+1}, h).

Challenge. D will send a challenge input x* with the condition that x* was never queried to its Eval oracle. If C(x*) = q, B returns the value y; when D responds with a bit b', B outputs b' as its guess to its own q-DDHE challenger. If C(x*) ≠ q, B outputs a random bit as its guess. This ends our description of the q-DDHE adversary B.

Remark 1. Discussion on the impossibility of product aggregation on JN-VRF for bit-fixing sets.
Recently, based on the q-DDH assumption, Jager and Niehues [14] proposed the currently most efficient VRFs (abbreviated as the JN-VRF scheme) with full adaptive security in the standard model. JN-VRF follows almost the same framework as HW-VRF; the only difference is that the former applies an admissible hash function H_AHF to the input x before evaluating the function value and the corresponding proof, while the latter does not. We stress that the hash function H_AHF : {0, 1}^ℓ → {0, 1}^n destroys the nice pattern of the inputs in a bit-fixing set: the images H_AHF(x) of the inputs x ∈ S_v no longer agree on a fixed set of bit positions, as otherwise it would be possible to find collisions of H_AHF. Therefore, given exponentially many values F_sk(H_AHF(x)), it is impossible to perform product aggregation over them efficiently by using the same technique as in the last subsection.

Efficiency Analysis
Analysis of Costs. The instantiation in Section 3.1 is very compact, since the aggregated function value consists of a single element in G_T, while the aggregated proof is composed of ℓ + 1 elements in G, both independent of the size of the set S. The Aggregate algorithm requires at most ℓ multiplications plus one exponentiation to compute y^{agg} and ℓ + 2 exponentiations to evaluate π^{agg}, which is much less computation than the 2^{ℓ−τ} multiplications needed to obtain y^{agg} and the 2^{ℓ−τ} · (ℓ + 1) multiplications needed to obtain π^{agg} over all 2^{ℓ−τ} inputs in S. The AggVerify algorithm requires at most 2ℓ + 3 pairing operations, while 2^{ℓ−τ} · (2ℓ + 3) pairings are needed for verifying the 2^{ℓ−τ} function values/proofs on all inputs in S. We summarize the costs of the Aggregate and AggVerify algorithms in Table 1, where MUL is the shortened form of the multiplication operation, EXP is the abbreviation for the exponentiation operation, and ADD denotes the addition operation.
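The source of this gap can be checked directly on the exponents: over a bit-fixing set, the sum of u_0 · ∏ u_i^{x_i} over all set members factors into u_0 · ∏_{i free}(1 + u_i) · ∏_{i fixed to 1} u_i, which is why Aggregate needs only about ℓ multiplications instead of 2^{ℓ−τ}. A minimal Python sketch of this identity (toy integer parameters, not the pairing instantiation):

```python
from itertools import product

def brute_force_exponent(u0, u, pattern):
    """Sum u0 * prod(u_i^{x_i}) over every x matching the bit-fixing pattern.

    pattern[i] is 0 or 1 for a fixed bit, None for a free position.
    Cost: 2^(number of free bits) terms -- exponential in l - tau.
    """
    free = [i for i, b in enumerate(pattern) if b is None]
    total = 0
    for bits in product([0, 1], repeat=len(free)):
        x = list(pattern)
        for i, b in zip(free, bits):
            x[i] = b
        term = u0
        for ui, xi in zip(u, x):
            if xi == 1:
                term *= ui
        total += term
    return total

def aggregated_exponent(u0, u, pattern):
    """Same sum in closed form: u0 * prod_{free}(1 + u_i) * prod_{fixed 1} u_i.

    Cost: at most l multiplications -- the source of the speedup.
    """
    agg = u0
    for ui, b in zip(u, pattern):
        if b is None:
            agg *= 1 + ui
        elif b == 1:
            agg *= ui
    return agg

u0, u = 2, [3, 4, 5, 7]
pattern = [None, None, 1, 0]   # last two bits fixed, first two free
assert brute_force_exponent(u0, u, pattern) == aggregated_exponent(u0, u, pattern)
```

In the actual scheme this factorization happens in the exponent of the pairing, but the cost structure is the same: one factor per input bit instead of one term per set member.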

Implementation and Experimental Results
Choice of elliptic curves and pairings. In our implementation, we use Type A curves as described in [28], which can be defined as follows. Let q be a prime satisfying q ≡ 3 mod 4 and let p be some odd prime dividing q + 1. Let E be the elliptic curve defined by the equation y^2 = x^3 + x over F_q; then E(F_q) is supersingular, #E(F_q) = q + 1, #E(F_{q^2}) = (q + 1)^2, and G = E(F_q)[p] is a cyclic group of order p with embedding degree k = 2. Given the map Ψ(x, y) = (−x, iy), where i is the square root of −1, Ψ maps points of E(F_q) to points of E(F_{q^2}) \ E(F_q), and if f denotes the Tate pairing on the curve E(F_{q^2}), then defining e : G × G → F_{q^2} by e(P, Q) = f(P, Ψ(Q)) gives a nondegenerate bilinear map. For more details about the choice of parameters, please refer to [28]. In our case, we use the standard parameters proposed by Lynn [28] (https://crypto.stanford.edu/pbc/), where q has 126 bits and p = 730750818665451621361119245571504901405976559617. To generate random elements, we use libsodium (https://libsodium.gitbook.io/). Our implementation is written in the programming language C and uses the GNU Multiple Precision Arithmetic Library for big-number arithmetic. We use GCC version 10.0.1 with the following compilation flags: "-O3 -m64 -fPIC -pthread -MMD -MP -MF".

Implementing HW-VRF. In our implementation, we use the bilinear map implemented as a pairing by Lynn [28] for the BLS signature scheme. We notice that, when computing the function value Fun_sk(x) = e(g, h)^{u_0 Π_{i=1}^{ℓ} u_i^{x_i}}, one usually first computes the pairing e(g, h) and then performs the exponentiation. However, exponentiation of an element in G_T is expensive. To improve the efficiency of computing Fun_sk(x), we use the following mathematical identity: e(g, h)^{ab} = e(g^a, h^b), which allows us to calculate Fun_sk(x) as e(g^{u_0}, h^{Π_{i=1}^{ℓ} u_i^{x_i}}).
Since the computation of g^a (or h^b) corresponds to the scalar multiplication of a point P (or Q) by a scalar a (or b), this trick avoids the exponentiation of an element in G_T at the cost of two scalar multiplications of a point on the curve.
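The identity behind this trick can be illustrated with a toy bilinear map in which a group element "g^a" is identified with its scalar a, and the pairing multiplies scalars in the exponent. This is not the Type A curve pairing used in our implementation, just a minimal model that satisfies the same bilinearity law:

```python
# Toy bilinear map: represent "g^a" by the scalar a modulo m, and define
# e(a, b) = w^(a*b) mod q, where w has order m in Z_q*. Bilinear by construction.
# Illustrative parameters only -- NOT the curve parameters from the paper.
q = 23     # small prime; quadratic residues mod 23 form a group of order 11
m = 11     # order of w
w = 4      # generates the order-11 subgroup of Z_23*

def e(a, b):
    """Pairing on scalar representatives."""
    return pow(w, (a * b) % m, q)

g, h = 1, 1    # scalar representatives of the base points g and h
a, b = 7, 9    # secret exponents

slow = pow(e(g, h), (a * b) % m, q)   # pair first, then exponentiate in G_T
fast = e(a * g % m, b * h % m)        # "scalar-multiply" first, then pair once
assert slow == fast                   # e(g, h)^(ab) == e(g^a, h^b)
```

On a real curve, the `a * g` and `b * h` operations are elliptic-curve scalar multiplications, which are considerably cheaper than exponentiation in G_T.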
Implementing our static Agg-VRFs. Since p is fixed, when calculating the aggregated proof as π_i^{agg} := (π_{i−1}^{agg})^{(u_i+1)/2}, we can precompute the inverse of 2 and thus only need to compute (π_{i−1}^{agg})^{(u_i+1)·inv(2)} by the scalar multiplication of a point on the curve with scalar (u_i + 1) inv(2). We use a similar approach when computing e(π_{i−1}^{agg}, g · U_i)^{1/2}; in this case, we always perform e((π_{i−1}^{agg})^{inv(2)}, g · U_i). Again, (π_{i−1}^{agg})^{inv(2)} corresponds to the scalar multiplication of a point with scalar inv(2), while g · U_i corresponds to the addition of two points on the elliptic curve.
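In a group of fixed prime order, "dividing by 2" in the exponent means multiplying by the inverse of 2 modulo the group order, which can be precomputed once. A Python sketch of this halving trick in a toy prime-order subgroup of Z_p* (illustrative parameters, not the paper's curve):

```python
# Halving in the exponent via a precomputed inverse of 2 modulo the group order.
# Toy subgroup: quadratic residues mod p = 23 have prime order r = 11,
# so 2 is invertible modulo r.
p, r = 23, 11
g = 4                      # generator of the order-11 subgroup
inv2 = pow(2, -1, r)       # precomputed once, since the order is fixed

def half_power(P, u):
    """Return P^((u+1)/2), computed as P^((u+1)*inv(2) mod r)."""
    return pow(P, (u + 1) * inv2 % r, p)

u = 8
y = half_power(g, u)
# Squaring the result must undo the halving: y^2 == g^(u+1).
assert pow(y, 2, p) == pow(g, u + 1, p)
```

The `pow(2, -1, r)` modular inverse requires Python 3.8+; in the C implementation the analogous value inv(2) mod p is computed once with GMP and reused for every aggregation step.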
Comparison. We tested the performance of our static Agg-VRFs in comparison to a standard (non-aggregate) VRF for five different input lengths, i.e., 56, 128, 256, 512, and 1024 bits. In all cases, we set the number of fixed bits equal to 20. Thus, naturally, we wanted to compare the efficiency of our aggregate VRF versus the evaluation and corresponding verification of 2^36, 2^108, 2^236, 2^492, and 2^1004 VRF values. To perform our comparisons, we recorded the verification time for 100 pairs of function values and their corresponding proofs when the verification is performed one-by-one (i.e., without using the aggregation) versus the corresponding performance of our proposed static aggregate VRF. Obviously, it holds that 100 ≪ 2^36, 100 ≪ 2^108, 100 ≪ 2^236, 100 ≪ 2^492, and 100 ≪ 2^1004; in fact, any number smaller than 2^36 would do, and we chose 100 to obtain a sensible running time for the standard (non-aggregate) VRF. Taking the 56-bit input length with 20 fixed bits as an example, the bit-fixing set contains 2^36 elements; we should therefore consider the verification time for 2^36 pairs of function values/proofs, which is drastically larger than the running time for verifying only 100 pairs. Thus, showing that our aggregate VRF is much more efficient than the evaluation and corresponding verification of 100 VRF values immediately implies that it is more efficient than the evaluation and corresponding verification of 2^36, 2^108, 2^236, 2^492, and 2^1004 VRF values, respectively. Table 2 shows the results of our experiments. The column "Verify" corresponds to the time required for verifying a single pair of function value/proof. We tested how much time it takes to aggregate all the function values and their proofs for inputs belonging to the bit-fixing set, and we evaluated the verification time for checking the aggregated function value/proof.
The column "Total Verification" corresponds to the total time required for verifying 100 pairs of function values/proofs via the standard VRF (i.e., one-by-one verification), while the column "AggVerify" reports the time for verifying the aggregated value/proof via the aggregate VRF (i.e., the aggregated verification algorithm). The experimental results show that, even for 1024-bit inputs, the aggregation of 2^1004 pairs of function values/proofs can be computed very efficiently, in 6881 ms. Moreover, the time required to verify the aggregated function values/proofs of 2^1004 pairs only increases by 50% compared with the verification time for a single function value/proof pair of HW-VRF.

Table 2. Running time (milliseconds) of the aggregate VRFs and the standard (non-aggregate) VRF [13].

| ℓ | Verify | # Pairs | Total Verification | τ | Aggregate | AggVerify |
|------|--------|---------|--------------------|----|-----------|-----------|
| 56 | 89 | 100 | 949 | 20 | 41 | 122 |
| 128 | 197 | 100 | 22371 | 20 | 196 | 298 |
| 256 | 472 | 100 | 52199 | 20 | 602 | 579 |
| 512 | 842 | 100 | 95233 | 20 | 1924 | 1212 |
| 1024 | 1556 | 100 | 164129 | 20 | 6881 | 2459 |

We stress that our implementation is hardware independent; the only requirement is a compiler able to translate C code to the specific architecture. To estimate what would happen if our code for HW-VRF and our aggregate VRFs were run at a different CPU frequency, we took the original runs with 56, 128, 256, 512, and 1024 bits, respectively, computed the ratio between the frequencies, and scaled the measured times by this ratio, as shown in Table 3. For different frequencies (GHz), the verification time for the aggregated function values/proofs increases by 30-50% compared with that for a single function value/proof pair of HW-VRF, as shown in Table 3.
Moreover, we performed experiments for the cases where the input length ℓ is equal to 256 (depicted in Figure 1b) and 1024 (in Figure 1a), respectively, choosing different numbers τ of fixed bits to observe the variation of the time spent on the aggregation and verification processes. When ℓ = 256, we ran experiments for three cases: the worst case, where all τ fixed bits are 1; the best case, where all τ fixed bits are 0; and the average case, where the τ fixed bits are chosen at random from {0, 1}. In the worst case, the Aggregate algorithm requires 256 multiplications plus 1 exponentiation to compute y^{agg} and 258 exponentiations to evaluate π^{agg}, while the AggVerify algorithm requires 515 pairing operations, as shown in Figure 1b with the square-dotted dashed line; these cost almost the same amount of time for different τ. In the best case, the Aggregate algorithm requires (256 − τ) multiplications plus 1 exponentiation to compute y^{agg} and (258 − τ) exponentiations to evaluate π^{agg}, while the AggVerify algorithm requires (516 − τ) pairing operations, as shown in Figure 1b.


Discussion on the Practical Instantiation of Chow et al.'s E-Lottery
Chow et al. [7] proposed an e-lottery scheme based on VRFs, where a random number generation mechanism is required to provide not only a winning number (chosen from a predefined domain) but also the public verifiability of the winning result, which guarantees that the dealer cannot cheat in the random number generation process. Let the numbers used in the lottery game be {1, 2, . . . , N_max}. To generate the winning number, Chow et al. [7] employed a VRF that maps a bit-string of length k to a bit-string of length l. More precisely, the dealer first computes (w_0, π_0) := (VRF.Fun(sk_VRF, d), VRF.Prove(sk_VRF, d)), where d is the hash value of all tickets sold so far. If w_0 > N_max, then the dealer iteratively calculates (w_i, π_i) := (VRF.Fun(sk_VRF, w_{i−1}‖d), VRF.Prove(sk_VRF, w_{i−1}‖d)) for i = 1, 2, . . ., until obtaining (w_t, π_t) such that w_t ≤ N_max. Afterwards, the dealer publishes the final (w_t, π_t) and the intermediate tuples (w_i, π_i) for i = 0, 1, . . . , t − 1 of VRF function values and their corresponding proofs.
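The dealer's iteration can be sketched as follows, with a hash function standing in for the VRF (the real scheme uses VRF.Fun/VRF.Prove and publishes proofs alongside the values; the output length and bound below are illustrative):

```python
import hashlib

L_BYTES = 4            # toy output length of the stand-in "VRF" (32 bits)
N_MAX = 2**28          # toy winning-number bound, N_max << 2^l

def vrf_stub(sk: bytes, msg: bytes) -> int:
    """Hash stand-in for VRF.Fun; the real VRF also outputs a proof pi."""
    return int.from_bytes(hashlib.sha256(sk + msg).digest()[:L_BYTES], "big")

def winning_number(sk: bytes, d: bytes):
    """Iterate w_i = VRF(sk, w_{i-1} || d) until w_t <= N_max.

    Returns (w_t, intermediate values); the dealer publishes all of them.
    """
    transcript = []
    w = vrf_stub(sk, d)                         # w_0 on the ticket digest d
    while w > N_MAX:
        transcript.append(w)
        w = vrf_stub(sk, w.to_bytes(L_BYTES, "big") + d)
    return w, transcript

w_t, inter = winning_number(b"secret-key", b"hash-of-all-tickets")
assert w_t <= N_MAX
```

With these toy parameters each draw lands below N_max with probability 1/16, so t is small on average; the point of the next subsections is precisely that t can be large, making one-by-one verification of the transcript expensive.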
To instantiate such an e-lottery scheme, a practical and concrete VRF scheme is needed. To the best of our knowledge, so far, most of the efficient instantiations of VRFs are based on bilinear maps (e.g., [13,14,18,19,22]), where the input spaces of VRFs are defined over binary strings and the function values on inputs are elements in a group G T , i.e., w i ∈ G T , while the verification needs to compute some pairings.
Chow et al.'s construction [7] seems very promising in the ideal case in which, after a small number t of VRF applications, a function value w_t such that w_t ≤ N_max is obtained. Nevertheless, what if t is large? Then the dealer needs to evaluate the VRF more times, while the player needs to verify the correctness of more tuples in order to check the winning result, which requires more pairing computations from both the dealer and the player.
In the following section, we show how we modify Chow et al.'s e-lottery scheme [7] by employing our aggregate VRF, in order to reduce the computational overhead for the verification process of each player.

An E-Lottery Scheme Based on Aggregate VRFs
Observe that each intermediate tuple (w_i, π_i) is a VRF function value/proof on an input w_{i−1}‖d; all such inputs share the same last |d| bits and hence form a bit-fixing set. Thus, we can compress all the intermediate tuples and their corresponding proofs into a single aggregated function value and proof by using the efficient aggregation algorithm as (y_agg, π_agg) := Aggregate(sk_VRF, d̃), where d̃ = ⊥^{ℓ−|d|} d_1 · · · d_{|d|}. More precisely, since sk_VRF = (u_0, u_1, . . . , u_ℓ), as defined by the HW-VRF scheme in Section 2.2, we have y_agg = e(g, h)^{EXP(d̃)} for the aggregated exponent EXP(d̃). The dealer checks if EXP(y_agg) (mod p − 1) ≤ N_max; if it is true, then it sets y_win := y_agg as the winning result. 5. Otherwise, the dealer chooses a random index ζ ∈ [1, ℓ − |d|] and uses ζ to define a new wildcard string d̃' = ⊥^{ζ−1} 0 ⊥^{ℓ−|d|−ζ} d_1 · · · d_{|d|}; the dealer sets y_win := e(g, h)^{EXP(d̃')} as the winning result and computes the corresponding proof by using the efficient aggregation algorithm Aggregate(sk_VRF, d̃'). 6. The dealer publishes the winning result and its proof (y_win, π_win), together with the corresponding d̃ (or d̃'), within ∆ units of time after the closing of the lottery session.
Prize Claiming: 1. The player checks if e(g, h)^x = y_win. If it is true, the player wins. 2. The player submits (s, r) to the dealer. 3. The dealer checks whether there exists a ticket ticket_i in the blockchain C such that ticket_i = s‖(x ⊕ r)‖H(x‖s‖r). 4. If it is true, the dealer checks whether the tuple (s, r) has already been published (i.e., whether the prize has already been claimed by someone). 5. If the prize is not yet claimed, the dealer pays the player and publishes (s, r).
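The ticket format and the dealer's check in step 3 can be sketched with SHA-256 for H. Fixed 32-byte fields for s, x, and r are an assumption made here so that parsing the concatenation is unambiguous:

```python
import hashlib

def H(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()

def make_ticket(s: bytes, x: bytes, r: bytes) -> bytes:
    """ticket = s || (x XOR r) || H(x || s || r), with 32-byte fields assumed."""
    xr = bytes(a ^ b for a, b in zip(x, r))
    return s + xr + H(x + s + r)

def claim_valid(ticket: bytes, s: bytes, r: bytes) -> bool:
    """Dealer side: recover x from the ticket and the revealed (s, r), re-check."""
    xr, tag = ticket[32:64], ticket[64:]
    x = bytes(a ^ b for a, b in zip(xr, r))
    return ticket[:32] == s and tag == H(x + s + r)

s, x, r = b"S" * 32, b"X" * 32, b"R" * 32
t = make_ticket(s, x, r)
assert claim_valid(t, s, r)
assert not claim_valid(t, s, b"Q" * 32)   # wrong randomness is rejected
```

Revealing (s, r) lets anyone recompute x and the hash tag, so a claim can be checked against the blockchain without the dealer's secret key.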

Player Verification:
Player side: 1. The player checks whether his/her ticket(s) is/are included in the blockchain C and whether the final state st_n of the blockchain C is correct. 2. The player verifies the correctness of d by using the verification algorithm of the VDF. 3. The player parses d̃' = ⊥^{ζ−1} 0 ⊥^{ℓ−|d|−ζ} d_1 · · · d_{|d|} and checks if d = d_1 · · · d_{|d|}. 4. The player verifies the correctness of y_win by using the verification algorithm b ← AggVerify(pk_VRF, d̃', y_win, π_win). 5. For each winning ticket published, the players verify the validity of s‖(x ⊕ r)‖H(x‖s‖r).

Implementation and Comparison on Chow et al.'s/Improved E-Lottery
To implement Chow et al.'s e-lottery scheme, we use the instantiation of HW-VRF presented in Section 2.2, while we use our aggregate VRF scheme from Section 3.1 to implement the improved counterpart. When implementing the winning result generation phase and the player verification phase, we reuse part of the code from our implementation in Section 3.3. For the required delay function, we use an instantiation of VDFs proposed by Pietrzak [29] (see Appendix A for details), which is defined as D(x) := x^{2^T} (mod N), where N = p_1 q_1 is a product of two large, distinct primes p_1 and q_1, x is a random element in Z*_N, and T is the timing parameter. For the signature scheme, we use EdDSA [30] as provided by the libsodium (https://libsodium.gitbook.io/doc/) library.
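Evaluating D(x) = x^{2^T} mod N amounts to T sequential squarings, which is what makes the function a delay function. A sketch with toy parameters (real deployments use T around 2^40 and a 512+ bit modulus; Pietrzak's protocol additionally produces a succinct proof, omitted here):

```python
# Sequential evaluation of the VDF base function D(x) = x^(2^T) mod N.
def vdf_eval(x: int, T: int, N: int) -> int:
    """T sequential squarings; believed hard to parallelize without factoring N."""
    y = x % N
    for _ in range(T):
        y = y * y % N
    return y

p1, q1 = 1019, 1031        # toy stand-ins for large safe primes
N = p1 * q1
x, T = 5, 16               # tiny T so the direct check below is feasible

y = vdf_eval(x, T, N)
# Sanity check against direct exponentiation (only feasible for tiny T).
assert y == pow(x, 2**T, N)
```

Whoever knows the factorization of N can shortcut the loop by reducing the exponent 2^T modulo φ(N); everyone else must perform the squarings one after another, which is exactly the delay property the lottery relies on.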
Parameters. In our proof-of-concept implementation, we consider Pietrzak's VDF scheme [29] with the following parameters: the statistical soundness security parameter of a proof is λ_Sound = 100, the time parameter is T = 2^40, the bit-length of each of the two safe primes p_1 and q_1 is λ_QR/2 = 256, and the bit-length of the RSA modulus N = p_1 q_1 is λ_RSA = 512. (Here, we set λ_RSA = 512 as a toy example, which can easily be scaled to 1024 or 2048 if we adjust the corresponding parameters of the hash function as well as the input/output spaces of the VRFs.) Then, the VDF defines a function D : {0, 1}^256 → {0, 1}^512. We use SHA-256 as the hash function H : {0, 1}* → {0, 1}^256. The inputs/outputs of the VRFs are set to 1024-bit binary strings. We randomly choose an integer N_max ∈ Z_p s.t. 1 ≤ N_max < p − 1, and the numbers used in the lottery game are {1, 2, . . . , N_max}. For every outcome w ∈ G_T of a VRF evaluation, to check whether w can be set as a winning number, the dealer actually checks whether the discrete log of w with respect to the base generator e(g, h) lies in the set {1, 2, . . . , N_max}. In fact, as explained in the implementation of HW-VRF and the aggregate VRF in Section 3.3, to obtain the function value Fun_sk(x) = e(g, h)^{u_0 Π_{i=1}^{ℓ} u_i^{x_i}}, the exponent u_0 Π_{i=1}^{ℓ} u_i^{x_i} is computed beforehand, which makes it easy to decide whether w can be set as a winning number.
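Because the exponent is computed anyway while evaluating Fun_sk, the dealer never needs to solve a discrete logarithm in G_T: it simply reduces the exponent modulo the group order and compares it against N_max. A minimal sketch (r stands in for the modulus of the exponent arithmetic; all key values are illustrative):

```python
# Deciding whether a VRF outcome can serve as the winning number by testing
# its exponent -- computed as a by-product of Fun_sk(x) -- against N_max.
r = 730750818665451621361119245571504901405976559617  # order used in our instantiation
N_MAX = 10**20                                        # illustrative bound

def exponent(u0: int, u: list, x_bits: list) -> int:
    """u0 * prod_{i : x_i = 1} u_i mod r -- the exponent of Fun_sk(x)."""
    e = u0 % r
    for ui, xi in zip(u, x_bits):
        if xi:
            e = e * ui % r
    return e

def is_winning(u0: int, u: list, x_bits: list) -> bool:
    e = exponent(u0, u, x_bits)
    return 1 <= e <= N_MAX

u0, u = 3, [5, 7, 11, 13]
assert exponent(u0, u, [1, 0, 1, 1]) == 3 * 5 * 11 * 13
```

The comparison costs a handful of big-integer multiplications, versus a discrete-log computation in G_T, which would be infeasible at these parameter sizes.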
Results. Table 4 shows the implementation results for both Chow et al.'s and our improved e-lottery scheme, in particular the time spent on the winning number generation and the player verification. We employ a small blockchain scenario with 1000 blocks, that is, there are 1000 random tickets in the system. The running time in Table 4 corresponds to the experimental scenario in which, in Chow et al.'s e-lottery scheme, the VRF is applied only twice before a function value is obtained whose exponent satisfies EXP(w_1) (mod p − 1) ≤ N_max, while in our scheme the exponent of y_agg exceeds N_max, i.e., EXP(y_agg) (mod p − 1) > N_max, so that extra time is taken to search for another y_agg whose exponent is below N_max. We consider such a case because it optimally shows the efficiency gain of our e-lottery scheme over Chow et al.'s.

Table 4. Running time (seconds) of the e-lottery schemes.

| Lottery Scheme | Winning Number Generation | Player Verification |
|----------------|---------------------------|---------------------|
| Scheme in [7] | 96.12992 s | 4.384418 s |
| Our Scheme | 90.13322 s | 2.782610 s |

Conclusions
Inspired by the idea of aggregate PRFs [10], in this paper, we investigate how to efficiently aggregate VRF values and their corresponding proofs. We introduce the notion of static aggregate VRFs and show how to achieve them under Hohenberger and Waters' VRF scheme [13] for product aggregation with respect to bit-fixing sets, based on the q-decisional Diffie-Hellman exponent assumption. Furthermore, we apply our aggregate VRFs to improve a prior VRF-based e-lottery scheme in the efficiency of its winning number generation and player verification phases. We test the performance of our static aggregate VRFs and the improved e-lottery scheme, which show significant computational advantages and efficiency gains compared with their original counterparts.
As future work, it would be interesting to explore whether it is possible to realize static aggregate VRFs for much more expressive sets, such as sets recognized by polynomial-size decision trees, read-once Boolean formulas, or polynomial-size Boolean circuits. Furthermore, it would be interesting to consider a public dynamic aggregate VRF, which allows taking any two fresh (aggregate) function values and proofs and combining them into a new aggregate function value and proof, without requiring the exponentially many inputs (from an exponentially large set) to be known in advance, as required in static aggregation.

Funding: This work was partially supported by the GENIE Chalmers CryptoQuaC project, the VR PRECIS project, the WASP expedition project "Massive, Secure, and Low-Latency Connectivity for IoT Applications", and the National Natural Science Foundation of China (No. 61972124).

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript: