Article

A Transformation Approach from Constrained Pseudo-Random Functions to Constrained Verifiable Random Functions

College of Information Engineering, Henan University of Science and Technology, Luoyang 471023, China
*
Author to whom correspondence should be addressed.
Electronics 2025, 14(11), 2194; https://doi.org/10.3390/electronics14112194
Submission received: 14 April 2025 / Revised: 14 May 2025 / Accepted: 26 May 2025 / Published: 28 May 2025
(This article belongs to the Special Issue Cryptography and Computer Security)

Abstract

Constrained pseudorandom functions (CPRFs) are fundamental cryptographic primitives used in broadcast encryption and attribute-based encryption. Constrained verifiable random functions (CVRFs) extend CPRFs by incorporating verifiability. A constrained key sk_S, derived from the master secret key sk, restricts correct evaluation to a set S: holders of sk_S can compute function values only for inputs in S. Prior constructions of CVRFs rely on strong assumptions such as multilinear maps or indistinguishability obfuscation, which often suffer from theoretical or practical limitations. In this work, we introduce a simple, generic approach for building CVRFs from basic cryptographic primitives. Specifically, we give a general transformation from any CPRF to a CVRF, achieving provability, uniqueness, and pseudorandomness. We demonstrate that CVRFs can be generically constructed from the following cryptographic primitives: CPRFs, perfectly binding commitment schemes, and non-interactive proof systems. Compared to previous schemes, our approach features a fixed-length public key independent of the circuit depth, improving efficiency and scalability.

1. Introduction

With the advancement of communication technologies, security and privacy concerns in networks have become increasingly prominent. Lightweight encryption mechanisms remain a fundamental approach to safeguarding cyberspace security and data privacy. Goldreich et al. [1] pioneered pseudorandom functions (PRFs), a fundamental primitive that revolutionized cryptographic theory and practice. Pseudorandom functions are widely used in various cryptographic applications, including secure multiparty computation, digital signature schemes, and encryption algorithms.
Micali et al. [2] introduced verifiable random functions (VRFs), an important extension of pseudorandom functions (PRFs) that enables publicly verifiable proofs of correct computation. A VRF system enables the secret key holder to compute both a function value y = f(sk, x) and an accompanying proof π. Using π, any party can verify that y was correctly computed from x without interacting with the prover. Meanwhile, the pseudorandomness of the function on all other points is preserved even if an attacker is given polynomially many evaluations along with the corresponding proofs. VRFs are useful primitives for constructing consensus algorithms in blockchains [3,4,5].
Boneh and Waters [6] proposed constrained pseudorandom functions (CPRFs). A constrained PRF (CPRF) scheme extends standard PRFs with a constrained key generation algorithm. A derived key sk_S permits evaluation of the PRF only on inputs x ∈ S while guaranteeing that the function remains pseudorandom on all x ∉ S (even when given access to sk_S). The scheme maintains computational pseudorandomness at a challenge point x*: for any adversary obtaining polynomially many evaluations {F(sk, x_i)} with x_i ≠ x* (and potentially constrained keys sk_{S_j} with x* ∉ S_j), the value F(sk, x*) remains indistinguishable from a uniformly random value. Constrained PRFs have been widely used across cryptography, for example in broadcast encryption [7,8], multi-party key exchange [9], and attribute-based encryption [10,11].
Fuchsbauer [12] proposed constrained verifiable random functions (CVRFs), combining the properties of VRFs and constrained PRFs. In a CVRF system, the master secret key holder can generate pseudorandom evaluations along with publicly verifiable proofs for any input x ∈ X, while also supporting constrained key derivation for subsets S ⊆ X. A constrained key sk_S permits computation of the function (with proofs) only on inputs x ∈ S, yet preserves the pseudorandomness of evaluations at all x ∉ S, even against adversaries who obtain polynomially many constrained keys for sets excluding x and observe valid function outputs (with proofs) at other points.
CVRFs have demonstrated valuable practical applications, with prior work by Liu et al. [13] and Zan et al. [14] exploring their use in micropayment systems. We present a novel application of CVRFs for random leader election in consensus mechanisms. Consider a network of n nodes N = {N_1, N_2, …, N_n} with respective stakes w_1, w_2, …, w_n (total stake W = Σ_i w_i) and current round seed r. Using a minimum stake threshold w_min, we define a constraint set S = {w_i | w_i > w_min} to filter out low-stake nodes. The system generates a constrained private key sk_S = Constrain(sk, S), enabling each node to compute (y_i, π_i) = Prove(sk_S, x), where x = r ∥ round ∥ N_i and round denotes the election round. Normalizing y_i yields p_i ∈ [0, 1), and node N_i becomes leader if p_i < w_i / W, with results verifiable via Verify(pk, x, y, π). This scheme uses the constraint set S to enforce participation rules, while x guarantees fairness; the constrained key embeds the stake requirement into the random generation process, ensuring that only qualified nodes produce valid outputs. By leveraging the cryptographic properties of CVRFs, we obtain a secure, efficient, and provably fair leader election method that could significantly enhance next-generation blockchain consensus protocols. A sketch of the election check appears below.
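The election check itself reduces to a few lines. The following is a minimal sketch under an assumed CVRF interface; the `cvrf` object, its `prove`/`verify` methods, and the stake bookkeeping are placeholders for illustration, not part of a concrete scheme.

```python
def normalize(y: bytes) -> float:
    """Map a CVRF output y to p in [0, 1) by interpreting it as an integer."""
    n = int.from_bytes(y, "big")
    return n / float(1 << (8 * len(y)))

def elect_leader(cvrf, sk_S, node_id, stake, total_stake, seed, round_no):
    """One node's view of the election; cvrf is an assumed CVRF interface."""
    x = f"{seed}|{round_no}|{node_id}".encode()   # x = r || round || N_i
    y, proof = cvrf.prove(sk_S, x)                # (None, None) if the node is not in S
    if y is None:
        return False, None, None
    p = normalize(y)                              # p_i in [0, 1)
    is_leader = p < stake / total_stake           # leader iff p_i < w_i / W
    return is_leader, y, proof

# Any verifier re-checks the claim with the public key only:
#   assert cvrf.verify(pk, x, y, proof) == 1 and normalize(y) < stake / total_stake
```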
Prior works [12,13,14,15,16,17] have proposed various constructions of constrained verifiable random functions, primarily relying on either multilinear maps or indistinguishability obfuscation (iO) [18]. However, these approaches suffer from significant limitations. First, iO remains an impractical assumption due to its reliance on an exponential number of security assumptions [19], and recent cryptanalytic advances have further undermined confidence in multilinear maps [20]. Second, existing multilinear-map-based constructions [12,15] exhibit inefficient parameter sizes. In particular, the public key size grows linearly with the circuit depth d_C, as the construction requires multilinear maps of degree n + d_C, where n denotes the input bit length. These shortcomings necessitate more efficient CVRF constructions based on standard assumptions.

1.1. Our Work

We introduce a simple and generic framework for constructing constrained verifiable random functions from basic cryptographic primitives. Our approach is inspired by Bitansky’s construction of standard VRFs [21], which relies on general primitives such as the NIWI proof system and perfectly binding commitments. Building upon these techniques, we propose a universal transformation from any constrained pseudorandom function to a constrained VRF that achieves strong security guarantees while preserving the desired properties of the underlying constrained PRF.
We present a generic construction of constrained verifiable random functions building on constrained PRFs [6], perfectly binding commitments, a NIWI proof system, and an NIZK proof system. Our approach, formally detailed in Section 4, adapts the VRF framework of Goyal et al. [22] to the constrained setting. The setup algorithm generates a secret key containing a constrained PRF key K along with commitment randomness (r_1, r_2, r_3), and produces a public key consisting of three perfectly binding commitments (c_1, c_2, c_3) to the PRF key K. Evaluations use the underlying constrained PRF directly, and its output serves as the VRF output. The proof system employs NIWI proofs to establish that at least two commitments open to keys K′, K″ producing the matching PRF output (i.e., y = CPRF.Eval(K′, x) = CPRF.Eval(K″, x)), enabling public verification while maintaining privacy. For constrained key generation, we naturally inherit the constrained PRF's key derivation mechanism, outputting the constrained key without additional proof generation; verification of constrained evaluations relies entirely on the same NIWI-based approach as unconstrained outputs. This construction maintains the security guarantees of the base PRF while introducing verifiability via standard cryptographic components. However, the proving algorithm cannot generate validity proofs for evaluations computed with a constrained key on inputs x ∈ S—a critical limitation that prevents full CVRF functionality.
To address this problem, we enhance the base scheme through two key modifications: (1) extending the setup algorithm to generate a common reference string (CRS) that is included in the system parameters, and (2) augmenting constrained key generation with a non-interactive zero-knowledge (NIZK) proof to preserve verifiability for constrained evaluations. Specifically, when deriving a constrained key K_S for a set S, the algorithm now produces an NIZK proof attesting that at least two of the committed keys K′, K″ in (c_1, c_2, c_3) generate the same constrained key K_S when restricted to S. This preserves consistency between the master key and constrained keys while maintaining the zero-knowledge property, ensuring that no additional information about the master key is leaked through the proof. The NIZK proof becomes part of the constrained key. To compute an evaluation on x ∈ S, the constrained key holder first runs the evaluation algorithm y = CPRF.Eval(K_S, x) of the constrained PRF and then generates a NIWI proof; since it does not have a witness for the original statement, we modify the NIWI statement so that either the original statement holds or there exists a witness (K′, K″, π̂′, π̂″, S, crs) such that NIZK.V(crs, (c_1, c_2, c_3, K′, S), π̂′) = 1, NIZK.V(crs, (c_1, c_2, c_3, K″, S), π̂″) = 1, and CPRF.Eval(K′, x) = CPRF.Eval(K″, x) = y. The constrained key holder has no witness for the first sub-statement, but it can generate a proof for the second one. This construction enables efficient verification of evaluations through the NIWI verifier, which runs non-interactively using only the public parameters and the proof. The verification process maintains constant-time complexity regardless of the constraint set S, as it simply checks the validity of the NIWI proof against the committed public values.
The pseudorandomness of our constrained VRF construction can be tightly reduced to the security of the underlying constrained PRF via a standard hybrid argument. Our security proof establishes selective security, where the adversary must commit to its challenge input prior to observing the public parameters. While this can be generically boosted to adaptive security through complexity leveraging techniques [23]—requiring the reduction to predict the adversary’s challenge query—this approach incurs an exponential (in the input length) security loss. Nevertheless, by carefully adjusting the input length parameter, we maintain meaningful polynomial-time security guarantees.
Table 1 provides a comprehensive comparison between our construction and prior works. Fuchsbauer's constructions [12] are based on leveled multilinear groups, where the public key consists of 2n group elements for an input of length n. During the evaluation phase, the scheme performs 2n + 1 multiplication operations, while its verification phase requires n multiplications and 2 multilinear operations. The approaches of Liu et al. [13] and Zan et al. [14] incorporate an obfuscated circuit in the public key, demanding full circuit computation during evaluation. For verification, Liu et al.'s construction [13] involves computing a circuit C along with 2 bilinear group operations, whereas Zan et al.'s scheme [14] requires the computation of circuit C and a commitment operation. Our construction relies on a CPRF, a commitment scheme, an NIZK, and a NIWI, with the public key comprising three commitments. During the constrained evaluation phase, we employ Groth and Sahai's bilinear-group-based NIWI proof system [24], which requires computing an arithmetic circuit C to generate the proof. The verification process involves checking this NIWI proof through only O(1) bilinear group operations, making it highly efficient. The proof size scales with the number of constraints in C while maintaining constant-time verification regardless of the circuit's complexity. This approach achieves both expressiveness, through arbitrary constraints encoded in C, and practical efficiency, particularly in the verification phase. The arithmetic circuit representation can be further optimized using techniques like R1CS or QAP to improve prover efficiency.

1.2. Related Works

The construction of verifiable random functions has evolved through several important milestones. Lysyanskaya's pioneering work [25] established the first VRF construction based on bilinear groups, though it suffered from linear-size proofs and keys. A significant efficiency improvement came from Dodis and Yampolskiy [26], who achieved constant-size proofs and keys, but their scheme was limited to polynomially sized input spaces. Subsequent work by Jager [27] demonstrated how to construct efficient VRFs with full adaptive security, though the construction relied on parameterized q-type assumptions, where security depends on the (potentially large) parameter q. Notably, most VRF constructions offering desirable properties have relied on these progressively stronger q-type assumptions. A major theoretical breakthrough came from Hofheinz and Jager [28], who presented the first VRF construction based on non-interactive, constant-size assumptions while simultaneously supporting exponentially large input spaces and achieving full adaptive security.
Recent advances in constrained pseudorandom functions have made significant progress in both security guarantees and construction paradigms. Fuchsbauer et al. [29] established a breakthrough in adaptive security for the Goldreich–Goldwasser–Micali (GGM) construction, achieving a quasi-polynomial security reduction relative to the adversary’s query complexity. Parallel developments include Hofheinz et al.’s [30] novel constrained PRF construction in the RO model, as well as Hohenberger et al.’s [31] standard-model puncturable PRFs with polynomial security loss. The latter work also demonstrated how indistinguishability obfuscation ( i O ) can transform standard puncturable PRFs into t-puncturable variants, though requiring polynomially sized constraint sets. Attrapadung et al. [32] contributed traditional group-based constructions, albeit with selective security limitations. Their subsequent work [33] overcame this restriction by achieving adaptive single-key security through a combination of i O and subgroup hiding assumptions.
Boyle et al. [34] introduced functional pseudorandom functions (F-PRFs) that enable fine-grained access control. In their definition, the holder of the secret key sk can evaluate the function over its entire domain, while a function-specific key sk_f restricts evaluation to outputs within the range of a function f (i.e., values y for which there exists some x such that f(x) = y). A related notion was proposed by Kiayias et al. [35] through delegatable pseudorandom functions (D-PRFs), which enable the controlled delegation of PRF evaluation rights for specific subsets of the domain. These two primitives (F-PRFs and D-PRFs) are fundamentally equivalent to constrained PRFs, differing primarily in their formulation and intended application scenarios. The D-PRF framework particularly emphasizes the delegation aspect, where a proxy can be authorized to evaluate the PRF on precisely defined subsets of inputs.
Fuchsbauer [12] introduced the notion of CVRFs and provided two constructions derived from bit-fixing VRFs and circuit-constrained VRFs. Following this work, numerous other constructions emerged, including those based on multilinear maps [15] and indistinguishability obfuscation (iO) [13,14,17]. While most existing constrained VRF (CVRF) constructions only achieve selective security, recent works have made incremental progress toward stronger security notions. Liu et al. [13] proposed a semi-adaptive secure CVRF where the adversary can make evaluation queries before selecting (but must commit to) the challenge point. In a parallel development, Zan et al. [14] introduced single-key security, requiring the adversary to specify the constraint set before receiving the public key, and provided a construction based on indistinguishability obfuscation. However, the fundamental challenge of achieving full adaptive security based on standard assumptions, where the adversary has complete flexibility in choosing both the challenge point and constraint queries at any stage, remains open. This gap presents several research directions: (1) developing novel proof techniques to bridge semi-adaptive to fully adaptive security without exponential security loss; (2) exploring alternative constructions beyond obfuscation-based approaches that might offer better efficiency. Resolving these challenges would significantly advance the deployability of CVRFs in real-world privacy-preserving systems.

2. Preliminaries

This section formally defines the cryptographic primitives underlying our CVRF construction. We introduce these notions with precise notation to establish a rigorous foundation for both security analysis and protocol design. We assume standard familiarity with the following:
  • Commitment schemes (Com, Ver);
  • Non-interactive zero-knowledge proofs (NIZK.Setup, NIZK.P, NIZK.V),
and omit their formal definitions for brevity.

2.1. Constrained Pseudorandom Functions (CPRFs)

We formalize the notion of CPRFs as introduced by Boneh and Waters [6]. A CPRF system extends standard pseudorandom functions (PRFs) by enabling the generation of constrained keys that only permit evaluation of authorized inputs while preserving pseudorandomness on all other inputs.
Definition 1. 
(CPRF). A CPRF system for a constraint family S ⊆ 2^X is defined by a function CPRF.F : K × X → Y and three polynomial-time algorithms (CPRF.Setup, CPRF.Constrain, CPRF.Eval).
  • CPRF.Setup(1^λ) → K: A PPT algorithm that, on input of a security parameter λ, outputs a master secret key K ∈ K.
  • CPRF.Constrain(K, S) → K_S: A PPT algorithm that, on input K and a constraint set S ∈ S, outputs a constrained key K_S ∈ K.
  • CPRF.Eval(K_S, x) → y: A deterministic algorithm that, on input K_S and x ∈ X, outputs
    y = CPRF.F(K, x) if x ∈ S, and ⊥ otherwise.
(Pseudorandomness). Pseudorandomness of a CPRF guarantees computational indistinguishability between real function evaluations and random values in the presence of an adversary that can adaptively query a constrained-key oracle and an evaluation oracle. Formally, this is captured through the following security experiment Exp_A^CPRF(λ, b):
  • The challenger runs the setup algorithm K ← CPRF.Setup(1^λ).
  • The challenger initializes two empty sets:
    V ⊆ X, the set of points derivable through constrained key evaluations;
    E ⊆ X, the set of points on which A queried the evaluation oracle.
  • A interacts with the following oracles:
    Constrain Oracle: Upon input S ∈ S, it returns K_S ← CPRF.Constrain(K, S) and updates V := V ∪ S, which records the constrained domains.
    Evaluation Oracle: Upon input x ∈ X, it returns CPRF.F(K, x) and updates E := E ∪ {x}, which records evaluation queries.
    Challenge Phase: A submits a challenge point x* ∈ X. If x* ∈ E ∪ V, it returns ⊥, since this would be a trivial challenge. Otherwise, it returns
    y ← CPRF.F(K, x*) if b = 0, or a uniform sample y ← Y if b = 1.
  • A outputs a bit b′ ∈ {0, 1}. The experiment outputs 1 if b′ = b, and 0 otherwise.
A CPRF scheme provides pseudorandomness security if, for all probabilistic polynomial-time (PPT) adversaries A, the following advantage is negligible in the security parameter λ:
| Pr[Exp_A^CPRF(λ, 0) = 1] − Pr[Exp_A^CPRF(λ, 1) = 1] | .
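For intuition, the sketch below gives a minimal GGM-style CPRF in which constraints are input prefixes rather than the general sets S of Definition 1. It is illustrative only and uses HMAC-SHA256 as the length-doubling PRG; none of the function names are part of the paper's construction.

```python
import hmac, hashlib, secrets

def _step(seed: bytes, bit: str) -> bytes:
    """One GGM step: derive the child seed for the chosen bit."""
    return hmac.new(seed, bit.encode(), hashlib.sha256).digest()

def setup() -> bytes:
    return secrets.token_bytes(32)                  # master key K

def evaluate(K: bytes, x: str) -> bytes:
    """F(K, x) for a bit string x, walking the GGM tree from the root."""
    s = K
    for b in x:
        s = _step(s, b)
    return s

def constrain(K: bytes, prefix: str):
    """Constrained key for S = {x : x starts with prefix}."""
    return (prefix, evaluate(K, prefix))

def constrained_eval(K_S, x: str):
    prefix, seed = K_S
    if not x.startswith(prefix):
        return None                                 # ⊥ outside S
    s = seed
    for b in x[len(prefix):]:
        s = _step(s, b)
    return s

# K = setup(); K_S = constrain(K, "01")
# assert constrained_eval(K_S, "0110") == evaluate(K, "0110")
```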

2.2. Non-Interactive Witness Indistinguishable Proofs (NIWIs)

Definition 2. 
(NIWI). For an NP relation R ⊆ {0,1}* × {0,1}*, a NIWI proof system consists of two PPT algorithms:
  • NIWI.P(x, w) → π: Upon input of a statement x and a witness w with (x, w) ∈ R, it outputs a proof π.
  • NIWI.V(x, π) → 0 or 1: Upon input of x and a proof π, it outputs an acceptance bit b.
For an NP relation R with associated language L(R) = {x | ∃ w : (x, w) ∈ R}, a NIWI proof system (NIWI.P, NIWI.V) must satisfy the following:
(Perfect completeness). For all (x, w) ∈ R,
Pr[NIWI.V(x, π) = 1 | π ← NIWI.P(x, w)] = 1.
(Statistical soundness). For all PPT adversaries A, there exists a negligible function negl(λ) such that
Pr[(x, π) ← A(1^λ) : x ∉ L(R) ∧ NIWI.V(x, π) = 1] ≤ negl(λ).
(Witness indistinguishability). For all PPT adversaries A and all triples (x, w_0, w_1) with (x, w_0), (x, w_1) ∈ R:
{π_0 ← NIWI.P(x, w_0)} ≈_c {π_1 ← NIWI.P(x, w_1)},
where ≈_c denotes computational indistinguishability.

3. Definition of Constrained Verifiable Random Functions (CVRFs)

Definition 3. 
Let F : K × X → Y be a polynomial-time computable function with the following:
  • Key space: K = {0,1}^λ (parameterized by the security parameter λ);
  • Domain: X = {0,1}^{p(λ)} for some polynomial p;
  • Range: Y = {0,1}^{q(λ)} for some polynomial q,
where F(k, ·) is computable in polynomial time for all k ∈ K. A function F : K × X → Y is called a constrained verifiable random function (CVRF) with respect to a constraint family S ⊆ 2^X if there exist a constrained key space K′, a proof space P, and four algorithms.
  • Setup(1^λ) → (pk, sk): Upon input of a security parameter λ, it outputs (pk, sk), where pk denotes the public verification key and sk denotes the private evaluation key.
  • Constrain(sk, S) → sk_S: The constrained key generation algorithm takes as input a private evaluation key sk and a subset S ∈ S, then derives a restricted key sk_S ∈ K′ that retains functionality only for inputs in S.
  • Prove(sk_S, x) → (y, π) or (⊥, ⊥): Upon input sk_S and x, it computes an evaluation y together with a proof π such that (y, π) ∈ Y × P if x ∈ S. Otherwise, it outputs (⊥, ⊥).
  • Verify(pk, x, y, π) → 0 or 1: Upon input pk, x, y, and a proof π, it outputs an acceptance bit b ∈ {0, 1}.
Provability. For all security parameters λ ∈ N, key pairs (pk, sk) ← Setup(1^λ), constraint sets S ∈ S, and sk_S ← Constrain(sk, S), the following holds for any input x ∈ X:
  • Membership case (x ∈ S): let (y, π) ← Prove(sk_S, x); we require
    Correct output: y = F(sk, x);
    Valid proof: Verify(pk, x, y, π) = 1.
  • Non-membership case (x ∉ S): the algorithm must satisfy
    Explicit rejection: (y, π) = (⊥, ⊥).
(Uniqueness). This implies that no two valid proofs π_0, π_1 can verify distinct outputs y_0 ≠ y_1 for the same x. Specifically, for all security parameters λ ∈ N, public/secret key pairs (pk, sk) ← Setup(1^λ), and x ∈ X, the function F(sk, x) is deterministic; i.e., for all (y_0, π_0), (y_1, π_1) ∈ Y × P, if Verify(pk, x, y_0, π_0) = 1 and Verify(pk, x, y_1, π_1) = 1 hold, then y_0 = y_1.
(Pseudorandomness). A CVRF scheme achieves pseudorandomness if, for all PPT adversaries A , there exists a negligible function negl ( λ ) such that
| Pr[Exp_A^CVRF(λ, 0) = 1] − Pr[Exp_A^CVRF(λ, 1) = 1] | ≤ negl(λ),
where the security experiment Exp_A^CVRF(λ, b) is defined as follows:
  • The challenger begins by generating (pk, sk) ← Setup(1^λ) and provides pk to A.
  • The challenger initializes V and E as empty sets:
    V tracks all points for which the adversary A can compute evaluations (e.g., via constrained keys);
    E records all evaluation oracle queries made by A.
  • A can query the following oracles:
    Constrain(·): For input S ∈ S, it returns sk_S ← Constrain(sk, S) and updates V ← V ∪ S.
    Evaluation(·): For input x ∈ X, it returns the evaluation and proof computed with the master secret key and updates E ← E ∪ {x}.
    Challenge(x*): For input x* ∈ X, it checks the following:
    If x* ∈ E ∪ V, it returns ⊥;
    Otherwise, it returns F(sk, x*) if b = 0, or a uniformly random string y ← Y if b = 1.
  • The experiment's output is the adversary's guess b′.
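A compact driver for this experiment is sketched below. The `cvrf` and `adversary` objects and the `output_len` attribute are assumed interfaces, and the evaluation oracle is taken to answer with the master-key evaluation and proof, as in the construction of Section 4.

```python
import secrets

def cvrf_pseudorandomness_exp(cvrf, adversary, b: int, lam: int = 128) -> int:
    """One run of Exp_A^CVRF(lambda, b); returns the adversary's guess b'."""
    pk, sk = cvrf.setup(lam)
    V, E = set(), set()                      # constrained points / queried points

    def constrain_oracle(S):
        V.update(S)
        return cvrf.constrain(sk, S)

    def eval_oracle(x):
        E.add(x)
        return cvrf.prove(sk, x)             # master-key evaluation with proof

    def challenge_oracle(x_star):
        if x_star in E or x_star in V:       # trivial challenge
            return None
        if b == 0:
            y, _ = cvrf.prove(sk, x_star)
            return y
        return secrets.token_bytes(cvrf.output_len)   # uniform y in Y

    return adversary(pk, constrain_oracle, eval_oracle, challenge_oracle)
```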

4. Construction of CVRFs

A CVRF is a tuple F = (Setup, Constrain, Prove, Verify) with domain X ⊆ {0,1}*, key space K, and output space Y. It allows verifiable computation of pseudorandom outputs while supporting constrained key derivation.
Our construction relies on a NIWI proof system (NIWI.P, NIWI.V), an NIZK proof system (NIZK.Setup, NIZK.P, NIZK.V), a perfectly binding commitment scheme (Com, Ver), and a CPRF CPRF = (CPRF.Setup, CPRF.Constrain, CPRF.Eval).
Definition 4. 
An instance (c_1, c_2, c_3, x, y) belongs to L if there exists a witness w = (i, j, K′, K″, r′, r″, π̂′, π̂″, S, crs) with 1 ≤ i < j ≤ 3, such that at least one of the following two conditions holds:
  • Verification condition:
    Ver(K′, c_i, r′) = 1 and Ver(K″, c_j, r″) = 1;
    CPRF.Eval(K′, x) = y and CPRF.Eval(K″, x) = y.
  • Proof condition:
    NIZK.V(crs, (c_1, c_2, c_3, K′, S), π̂′) = 1;
    NIZK.V(crs, (c_1, c_2, c_3, K″, S), π̂″) = 1;
    CPRF.Eval(K′, x) = y and CPRF.Eval(K″, x) = y.
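The membership check for L can be read directly off Definition 4. The sketch below spells it out; `commit_verify`, `nizk_verify`, and `cprf_eval` stand in for Ver, NIZK.V, and CPRF.Eval, all of which are assumed interfaces here.

```python
def in_language_L(statement, witness, commit_verify, nizk_verify, cprf_eval) -> bool:
    """Check (c1, c2, c3, x, y) in L with witness (i, j, K1, K2, r1, r2, pi1, pi2, S, crs)."""
    c1, c2, c3, x, y = statement
    i, j, K1, K2, r1, r2, pi1, pi2, S, crs = witness
    commitments = {1: c1, 2: c2, 3: c3}
    if not (1 <= i < j <= 3):
        return False
    same_output = (cprf_eval(K1, x) == y and cprf_eval(K2, x) == y)

    # Branch 1 (verification condition): both keys open two of the public commitments.
    opens = (commit_verify(K1, commitments[i], r1) and
             commit_verify(K2, commitments[j], r2))
    if opens and same_output:
        return True

    # Branch 2 (proof condition): both keys carry valid NIZK proofs for the set S.
    proved = (nizk_verify(crs, (c1, c2, c3, K1, S), pi1) and
              nizk_verify(crs, (c1, c2, c3, K2, S), pi2))
    return proved and same_output
```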
We present our CVRF construction as follows.
  • Setup(1^λ) → (pk, sk):
    Generate cryptographic parameters: compute K ← CPRF.Setup(1^λ) and crs ← NIZK.Setup(1^λ);
    Commit to the PRF key K: for i = 1, 2, 3, compute c_i ← Com(K; r_i), where r_i is a fresh random string;
    Set the key pair: sk = (K, {c_i, r_i}_{i=1}^{3}, crs), pk = (c_1, c_2, c_3).
    The evaluation algorithm of the constrained VRF is defined as follows. Upon input x:
    Compute y = CPRF.Eval(K, x);
    Generate the NIWI proof π = NIWI.P((c_1, c_2, c_3, x, y), w) for (c_1, c_2, c_3, x, y) ∈ L using the witness w = (i = 1, j = 2, K, K, r_1, r_2, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|});
    Output (y, π).
  • Constrain(sk, S) → sk_S: Upon input sk = (K, {c_i, r_i}_{i=1}^{3}, crs) and a set S ⊆ X:
    Generate the constrained key: K_S ← CPRF.Constrain(K, S);
    Compute the NIZK proof π̂ ← NIZK.P(crs, s_1, w_1) for the statement s_1 = (c_1, c_2, c_3, K_S, S) using the witness w_1 = (1, 2, K, K, r_1, r_2), which proves that K_S is properly constrained from the committed key K;
    Output the constrained key sk_S = (K_S, S, π̂, crs).
  • Prove(sk_S, x) → (y, π) or (⊥, ⊥): Upon input sk_S = (K_S, S, π̂, crs) and x ∈ X:
    If x ∉ S, output (⊥, ⊥);
    Otherwise, first compute y ← CPRF.Eval(K_S, x). Then construct the witness w = (i = 1, j = 2, K_S, K_S, 0^{|r_1|}, 0^{|r_2|}, π̂, π̂, S, crs). Finally, generate the NIWI proof π ← NIWI.P((c_1, c_2, c_3, x, y), w);
    Output (y, π).
  • Verify(pk, x, y, π) → {0, 1}: Upon input pk = (c_1, c_2, c_3), x, y, and π, it verifies the proof as follows:
    If NIWI.V((c_1, c_2, c_3, x, y), π) = 1, it accepts y as a valid evaluation of F(K, x) and returns 1;
    Otherwise, it returns 0.
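The following structural sketch mirrors the four algorithms above. It is not a concrete implementation: `CPRF`, `Com`, `NIWI`, and `NIZK` are placeholder objects for the assumed primitives, and the zero-padding of unused witness slots is abbreviated with `None`.

```python
import secrets
from typing import Optional, Tuple

def setup(CPRF, Com, NIZK, lam: int = 128):
    """Setup(1^λ): commit to the CPRF key three times and sample a CRS."""
    K = CPRF.setup(lam)
    crs = NIZK.setup(lam)
    rs = [secrets.token_bytes(32) for _ in range(3)]
    cs = [Com.commit(K, r) for r in rs]
    return tuple(cs), (K, cs, rs, crs)          # (pk, sk)

def constrain(CPRF, NIZK, sk, S):
    """Constrain(sk, S): derive K_S and prove it is constrained from a committed key."""
    K, cs, rs, crs = sk
    K_S = CPRF.constrain(K, S)
    stmt = (cs[0], cs[1], cs[2], K_S, S)
    pi_hat = NIZK.prove(crs, stmt, witness=(1, 2, K, K, rs[0], rs[1]))
    return (K_S, S, pi_hat, crs)

def prove(CPRF, NIWI, pk, sk_S, x) -> Tuple[Optional[bytes], Optional[bytes]]:
    """Prove(sk_S, x): evaluate and prove via the second branch of L."""
    K_S, S, pi_hat, crs = sk_S
    if x not in S:
        return None, None
    y = CPRF.eval(K_S, x)
    w = (1, 2, K_S, K_S, None, None, pi_hat, pi_hat, S, crs)
    return y, NIWI.prove((*pk, x, y), w)

def verify(NIWI, pk, x, y, pi) -> int:
    """Verify(pk, x, y, π): accept iff the NIWI proof verifies."""
    return 1 if NIWI.verify((*pk, x, y), pi) else 0
```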

4.1. Properties of Constrained VRFs

Firstly, we describe the property of provability.
Provability. The construction achieves provability by leveraging (1) the perfect correctness of the constrained PRF and (2) the perfect completeness of the NIWI proof system, guaranteeing that valid proofs exist for all properly computed outputs. According to the description of the algorithm Prove, for (pk, sk) ← Setup(1^λ), S ⊆ X, and K_S ← CPRF.Constrain(K, S), if x ∈ S, then y = CPRF.Eval(K_S, x) and π ← NIWI.P((c_1, c_2, c_3, x, y), w). For the statement (c_1, c_2, c_3, x, y) ∈ L, there exists a witness w = (1, 2, K_S, K_S, 0^{|r_1|}, 0^{|r_2|}, π̂, π̂, S, crs) that satisfies the second condition of L. Therefore, we have Verify(pk, x, y, π) = 1. The algorithm outputs (⊥, ⊥) if x ∉ S.
Next, we describe the property of uniqueness.
Uniqueness. We prove the uniqueness of the output by contradiction. Assume there exists an adversary producing (pk, x, y_1, π_1, y_2, π_2) such that the following are true:
  • Verify(pk, x, y_1, π_1) = 1;
  • Verify(pk, x, y_2, π_2) = 1;
  • y_1 ≠ y_2.
We exhaustively analyze all possible scenarios below.
Case 1: Both proofs are derived from constrained keys.
  • The perfectly binding property of the commitment scheme ensures that, for each commitment c_i, there exists at most one key K_i satisfying Ver(K_i, c_i, r_i) = 1.
  • Suppose the first proof uses a constrained key K_S with CPRF.Constrain(K_1, S) = CPRF.Constrain(K_2, S) = K_S; then
    By constrained PRF correctness: y_1 = CPRF.Eval(K_S, x) = CPRF.Eval(K_1, x) = CPRF.Eval(K_2, x);
    For y_1 ≠ y_2 to hold, the second proof would require a constrained key K_S′ derived from two committed keys with CPRF.Eval(K_S′, x) = y_2, but only K_3 remains;
    Contradiction: the NIZK statement (c_1, c_2, c_3, K_S′, S) is false (no two keys in (K_1, K_2, K_3) constrain to K_S′). Thus, π̂ is invalid, violating NIZK soundness.
Case 2: Both proofs are derived from the secret key K.
  • Let c_i commit to K_i. By the verification condition of L and the determinism of the constrained PRF, the following hold:
    If CPRF.Eval(K_1, x) = y_1 and CPRF.Eval(K_2, x) = y_1, then y_1 ≠ y_2 would require two of the committed keys to evaluate to y_2, yet only K_3 remains;
    Contradiction: the statement (c_1, c_2, c_3, x, y_2) must be false for the NIWI proof system, as no two distinct committed keys can produce the output y_2. This contradicts the soundness guarantee of NIWI, rendering the proof π_2 invalid.
Case 3: Mixed proofs (one from a constrained key, one from the secret key).
  • Suppose (y_1, π_1) uses a constrained key K_S and (y_2, π_2) uses K.
    By NIZK soundness, the proof π̂ inside π_1 guarantees that two of the committed keys constrain to K_S; hence these two keys evaluate to y_1 = CPRF.Eval(K_S, x) on x.
    For y_2 ≠ y_1, the verification condition for π_2 would require two committed keys with CPRF.Eval(·, x) = y_2, but at most one committed key remains.
    Contradiction: the NIWI statement (c_1, c_2, c_3, x, y_2) is necessarily false, so a valid π_2 would violate NIWI soundness.

4.2. Proof of Pseudorandomness

Theorem 1. 
Our construction achieves pseudorandomness under the following cryptographic assumptions:
  • (NIWI.P, NIWI.V) is witness-indistinguishable;
  • (NIZK.Setup, NIZK.P, NIZK.V) satisfies zero-knowledge and soundness;
  • (Com, Ver) is perfectly binding and computationally hiding;
  • (CPRF.Setup, CPRF.Constrain, CPRF.Eval) satisfies pseudorandomness for constrained keys.
Proof. 
We prove security via a sequence of hybrid arguments, where each adjacent experiment differs by at most one cryptographic modification, enabling rigorous analysis of the adversary’s advantage through incremental transitions. The first experiment is an exact emulation of the original pseudorandomness security game. Each subsequent hybrid introduces controlled modifications while maintaining computational indistinguishability from its predecessor for all PPT adversaries. The proof framework maintains two critical restrictions on the adversary A : it cannot request constrained keys for sets containing its chosen challenge point x , nor can it directly query the evaluation oracle on x . This hybrid argument ultimately shows that any adversary breaking our construction can be converted into an algorithm B that breaks the underlying CPRF’s pseudorandomness. For clarity, we provide complete specifications only for the initial experiment, with successive hybrids described solely in terms of their incremental differences from previous configurations. This approach ensures rigorous security analysis while efficiently highlighting the essential modifications at each proof stage. The main experimental differences are systematically compared in Table 2. □
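Schematically, writing Q and Q_1 for the numbers of constrain and evaluation queries, the chain of hybrids Exp_1 through Exp_8 defined below bounds the adversary's advantage as follows; the individual factors come from the lemmas in this section.

```latex
\left| \Pr[\mathrm{Exp}_1 = 1] - \Pr[\mathrm{Exp}_8 = 1] \right|
\le \sum_{t=1}^{7} \left| \Pr[\mathrm{Exp}_t = 1] - \Pr[\mathrm{Exp}_{t+1} = 1] \right|
\le Q \cdot \mathrm{Adv}^{\mathrm{zk}}_{\mathrm{NIZK}}(\lambda)
 + 3 \cdot \mathrm{Adv}^{\mathrm{hide}}_{\mathrm{Com}}(\lambda)
 + 2 Q_1 \cdot \mathrm{Adv}^{\mathrm{wi}}_{\mathrm{NIWI}}(\lambda)
 + \mathrm{Adv}^{\mathrm{pr}}_{\mathrm{CPRF}}(\lambda).
```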
Experiment Exp_1: The initial experiment is the original security experiment for our construction with challenge bit b = 0, which implies y* = F(sk, x*).
  • A selects its challenge point x* ∈ X.
  • The challenger generates a fresh CPRF key K ← CPRF.Setup(1^λ) and establishes the necessary proof infrastructure by sampling crs ← NIZK.Setup(1^λ). Then, the challenger generates three perfectly binding commitments to the PRF key K, computing c_i ← Com(K; r_i) for each i ∈ {1, 2, 3}, where each r_i is sampled uniformly from the commitment scheme's randomness space. It sets sk = (K, {(c_i, r_i)}_{i=1}^{3}, crs) and pk = (c_1, c_2, c_3).
  • When the adversary A queries the oracles, the challenger responds as follows:
    • Constrain Queries: Upon input of a set S_j ⊆ X, it computes K_{S_j} ← CPRF.Constrain(K, S_j). Then, it computes an NIZK proof π̂_j ← NIZK.P(crs, s_1, w_1) for the NP statement s_1 = (c_1, c_2, c_3, K_{S_j}, S_j), defined as in the Constrain algorithm, where w_1 = (1, 2, K, K, r_1, r_2). If x* ∉ S_j, it returns sk_{S_j} = (K_{S_j}, S_j, π̂_j, crs). Otherwise, it returns ⊥.
    • Evaluation Queries: Upon input x_i ∈ X, it evaluates y_i = CPRF.Eval(K, x_i) and π_i = NIWI.P((c_1, c_2, c_3, x_i, y_i), w), where w = (1, 2, K, K, r_1, r_2, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|}). If x_i ≠ x*, it returns (y_i, π_i). Otherwise, it returns (⊥, ⊥).
  • The challenger computes and returns the VRF evaluation at the challenge point, y* = CPRF.Eval(K, x*).
  • Oracle queries:
    • Constrain queries: Answered as specified in the previous protocol description.
    • Evaluation queries: Processed identically to prior interactions (returning (y, π) for x ≠ x*, and (⊥, ⊥) for x*).
  • The experiment terminates when A outputs a guess b′, returning 1 if b′ = b (indicating a correct guess), and 0 otherwise.
Experiment Exp_2: This experiment is the same as the previous one, except that the random string crs and the legitimate NIZK proofs are replaced with a simulated string c̃rs and simulated proofs. Let NIZK.Sim = (NIZK.Sim_1, NIZK.Sim_2) be the simulator for the NIZK proof system.
  • The challenger generates a fresh CPRF key K ← CPRF.Setup(1^λ) and establishes the necessary proof infrastructure by sampling c̃rs ← NIZK.Sim_1(1^λ). Then, it generates three commitments to the PRF key K, c_i ← Com(K; r_i), i = 1, 2, 3. It sets sk = (K, {(c_i, r_i)}_{i=1}^{3}, c̃rs) and pk = (c_1, c_2, c_3).
  • Constrain Queries: Upon input of a set S_j ⊆ X, the challenger computes K_{S_j} ← CPRF.Constrain(K, S_j). Then, it computes a simulated NIZK proof π̂_j ← NIZK.Sim_2(s_1) for the NP statement s_1 = (c_1, c_2, c_3, K_{S_j}, S_j), defined as in the Constrain algorithm. If x* ∉ S_j, it returns sk_{S_j} = (K_{S_j}, S_j, π̂_j, c̃rs). Otherwise, it returns ⊥.
Lemma 1. 
If the NIZK scheme satisfies zero-knowledge and soundness, then Exp_1 ≈_c Exp_2.
Proof. 
If there exists a PPT adversary A capable of distinguishing Exp_1 from Exp_2 with non-negligible advantage, we construct a reduction algorithm B that leverages A to break the zero-knowledge property of the NIZK system. B perfectly simulates A's environment while forwarding its constrain queries to the NIZK challenger and ultimately outputs A's guess, preserving the distinguishing advantage.
  • A selects a challenge point x* ∈ X.
  • B generates a fresh CPRF key K ← CPRF.Setup(1^λ) and receives a string crs from the NIZK challenger. Then, it generates three commitments to the PRF key, c_i ← Com(K; r_i), i = 1, 2, 3, and returns pk = (c_1, c_2, c_3) to A.
  • When A queries the constrain oracle on S ⊆ X, B computes K_S ← CPRF.Constrain(K, S). Then, it sends (s = (c_1, c_2, c_3, K_S, S), w = (1, 2, K, K, r_1, r_2)) to the NIZK challenger and obtains a proof π̂. If x* ∉ S, it returns sk_S = (K_S, S, π̂, crs). Otherwise, it returns ⊥. When A queries the evaluation oracle, B answers as in experiment Exp_1.
  • B runs steps 4–5 as in experiment Exp_1 and outputs A's result.
The security reduction establishes a perfect correspondence between the following:
  • When B receives authentic NIZK parameters (crs, π̂), this exactly replicates A's view in Exp_1;
  • When B receives simulated parameters (c̃rs, π̂), this perfectly matches A's view in Exp_2.
Consequently, any non-negligible distinguishing advantage ε(λ) that A achieves between Exp_1 and Exp_2 translates directly to an identical advantage for B in breaking the NIZK's zero-knowledge property. This reduction preserves the adversary's success probability while maintaining perfect simulation fidelity in both operational modes.
If A makes Q = Q(λ) constrain queries, we can define Q + 1 hybrid experiments Exp_1^i, i = 0, 1, …, Q, where the first one is Exp_1 and the last one is Exp_2. In the i-th intermediate experiment Exp_1^i, B uses the string and proof returned by the NIZK challenger to answer the i-th constrain query. For j < i, B uses the simulated string and proof to answer the j-th constrain query; for j > i, B uses the real string and proof. By recursively applying the proof technique above to each adjacent pair of hybrid experiments, we establish a chain of computational indistinguishability across all intermediate hybrids, as shown below. This hybrid argument demonstrates the computational indistinguishability of Exp_1 and Exp_2, since any non-negligible advantage in distinguishing the initial and final experiments would violate the security of one of the underlying cryptographic primitives. □
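In symbols, the chain of hybrids yields the standard triangle-inequality bound:

```latex
\left| \Pr[\mathrm{Exp}_1 = 1] - \Pr[\mathrm{Exp}_2 = 1] \right|
\le \sum_{i=1}^{Q} \left| \Pr[\mathrm{Exp}_1^{\,i-1} = 1] - \Pr[\mathrm{Exp}_1^{\,i} = 1] \right|
\le Q \cdot \mathrm{Adv}^{\mathrm{zk}}_{\mathrm{NIZK}}(\lambda).
```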
Experiment Exp_3: This experiment is the same as the previous one, except that c_3 is a commitment to the constrained key K_{x*}. Note that the proofs π and the constrained keys sk_S are generated exactly as in experiment Exp_2, because the random strings r_1, r_2 still satisfy the witness relation.
  • The challenger generates a fresh PRF key K ← CPRF.Setup(1^λ) and establishes the necessary proof infrastructure by sampling c̃rs ← NIZK.Sim_1(1^λ). Then, it generates the constrained key K_{x*} ← CPRF.Constrain(K, X ∖ {x*}) (the key punctured at x*, which evaluates correctly on all x ≠ x*), and commits c_i ← Com(K; r_i) for i = 1, 2 and c_3 ← Com(K_{x*}; r_3). It sets sk = (K, {(c_i, r_i)}_{i=1}^{3}, c̃rs) and pk = (c_1, c_2, c_3).
Lemma 2. 
If (Com, Ver) is a computationally hiding commitment scheme, then Exp_2 ≈_c Exp_3.
Proof. 
Suppose a PPT adversary A can distinguish experiment Exp_2 from Exp_3 with advantage ε(λ). We construct a reduction algorithm B that leverages A to break the hiding property of the commitment scheme (Com, Ver). B first runs Step 1 and Step 2 as in experiment Exp_2, except that c_3 is produced by the challenger of the commitment scheme. B sends K and K_{x*} to the commitment challenger, where K_{x*} ← CPRF.Constrain(K, X ∖ {x*}), and receives a commitment c in return. B sets c_3 = c and leaves r_3 empty; c_3 is either a commitment to K or a commitment to K_{x*}. Because B does not need the random string r_3 for the NIWI proofs, B perfectly simulates experiment Exp_2 if c_3 = Com(K; r), and perfectly simulates experiment Exp_3 if c_3 = Com(K_{x*}; r). Thus, any PPT adversary A distinguishing the two experiments with advantage ε(λ) implies an efficient adversary B that breaks the commitment scheme's hiding property with the same advantage ε(λ). □
Experiment Exp_4: This experiment retains the same structure as the previous one, with only one modification: the witness selection for NIWI proof generation. Specifically, the challenger now exclusively uses the alternative witness
w = (i = 2, j = 3, K, K_{x*}, r_2, r_3, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|})
in place of the original witness
w = (i = 1, j = 2, K, K, r_1, r_2, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|})
for all NIWI proof computations, while keeping all other experiment procedures unchanged.
  • Evaluation Queries: Upon input x_i ∈ X, it computes y_i = CPRF.Eval(K, x_i) and π_i = NIWI.P((c_1, c_2, c_3, x_i, y_i), w), where w = (i = 2, j = 3, K, K_{x*}, r_2, r_3, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|}). If x_i ≠ x*, it returns (y_i, π_i). Otherwise, it returns (⊥, ⊥).
Lemma 3. 
Assume (NIWI.P, NIWI.V) is a secure NIWI proof system; then Exp_3 ≈_c Exp_4.
Proof. 
Suppose a PPT adversary A can distinguish Exp_3 from Exp_4 with advantage ε(λ). An adversary B can then be constructed to break the witness indistinguishability of (NIWI.P, NIWI.V). We first consider the restricted case where A makes a single evaluation query.
  • B runs steps 1–2 as in experiment Exp_3.
  • When A queries the constrain oracle, B answers as in experiment Exp_3. When A queries the evaluation oracle on x ≠ x*, B computes y = CPRF.Eval(K, x) and sends (s = (c_1, c_2, c_3, x, y), w_1 = (i = 1, j = 2, K, K, r_1, r_2, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|}), w_2 = (i = 2, j = 3, K, K_{x*}, r_2, r_3, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|})) to the NIWI challenger. The challenger returns a proof π to B, and B returns (y, π) to A.
  • B runs steps 4–5 as in experiment Exp_3 and outputs A's result.
When the proof π is generated from witness w_1, the simulation perfectly reconstructs A's view in experiment Exp_3. Conversely, when π is generated using w_2, A's view corresponds exactly to experiment Exp_4. Thus, any adversary capable of distinguishing Exp_3 from Exp_4 with non-negligible advantage immediately implies a PPT adversary that attacks the witness indistinguishability of the NIWI system with advantage ε(λ). This contradicts the fundamental security guarantee of NIWI proofs.
If A makes Q_1 = Q_1(λ) evaluation queries, we can define Q_1 + 1 hybrid experiments Exp_3^i, i = 0, 1, …, Q_1, where the first one is Exp_3 and the last one is Exp_4. In the i-th intermediate experiment Exp_3^i, B uses the proof returned by the NIWI challenger to answer the i-th evaluation query. For j < i, B uses the witness w_2 = (i = 2, j = 3, K, K_{x*}, r_2, r_3, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|}) to answer the j-th evaluation query; for j > i, B uses the witness w_1 = (i = 1, j = 2, K, K, r_1, r_2, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|}). Applying the argument above, the outputs of adjacent experiments are indistinguishable. Using a hybrid argument over the Q_1 intermediate experiments, we conclude that the outputs of Exp_3 and Exp_4 are computationally indistinguishable. □
Experiment Exp_5: This experiment is the same as the previous one, except that c_1 is a commitment to the constrained key K_{x*}. Note that the proofs π in the evaluation queries and the constrained keys sk_S in the constrain queries are generated exactly as in experiment Exp_4, because the random strings r_2, r_3 still satisfy the witness relation.
  • The challenger generates K ← CPRF.Setup(1^λ) and establishes the necessary proof infrastructure by sampling c̃rs ← NIZK.Sim_1(1^λ). Then, it generates K_{x*} ← CPRF.Constrain(K, X ∖ {x*}) and the three commitments c_1 ← Com(K_{x*}; r_1), c_2 ← Com(K; r_2), and c_3 ← Com(K_{x*}; r_3). It sets sk = (K, {(c_i, r_i)}_{i=1}^{3}, c̃rs) and pk = (c_1, c_2, c_3).
Lemma 4. 
If (Com, Ver) is a computationally hiding commitment scheme, then Exp_4 ≈_c Exp_5.
Proof. 
This proof employs an identical reduction strategy to that of Lemma 2. □
Experiment Exp_6: This experiment is the same as Exp_5, except that the challenger uses w = (i = 1, j = 3, K_{x*}, K_{x*}, r_1, r_3, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|}) as the witness, instead of (i = 2, j = 3, K, K_{x*}, r_2, r_3, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|}), for generating all NIWI proofs.
  • Evaluation Queries: Given x_i ∈ X, it computes y_i = CPRF.Eval(K, x_i) and π_i = NIWI.P((c_1, c_2, c_3, x_i, y_i), w), where w = (i = 1, j = 3, K_{x*}, K_{x*}, r_1, r_3, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|}). If x_i ≠ x*, it returns (y_i, π_i). Otherwise, it returns (⊥, ⊥).
Lemma 5. 
Assume (NIWI.P, NIWI.V) is a secure NIWI proof system; then Exp_5 ≈_c Exp_6.
Proof. 
This proof employs an identical reduction strategy to that of Lemma 3. □
Experiment Exp_7: This experiment is the same as the previous one, except that c_2 is a commitment to the constrained key K_{x*}. Note that the proofs π in the evaluation queries and the constrained keys sk_S in the constrain queries are generated exactly as in experiment Exp_6, because the random strings r_1, r_3 still satisfy the witness relation.
  • The challenger generates K ← CPRF.Setup(1^λ) and establishes the necessary proof infrastructure by sampling c̃rs ← NIZK.Sim_1(1^λ). Then, it generates K_{x*} ← CPRF.Constrain(K, X ∖ {x*}) and three commitments to the constrained key K_{x*}: c_1 ← Com(K_{x*}; r_1), c_2 ← Com(K_{x*}; r_2), and c_3 ← Com(K_{x*}; r_3). It sets sk = (K, {(c_i, r_i)}_{i=1}^{3}, c̃rs) and pk = (c_1, c_2, c_3).
Lemma 6. 
If (Com, Ver) is a computationally hiding commitment scheme, then Exp_6 ≈_c Exp_7.
Proof. 
This proof employs an identical reduction strategy to that of Lemma 2. □
Experiment Exp_8: This experiment is the same as the previous one, except that the challenger answers the challenge with a random value y* from Y.
  • The challenger responds with a uniformly random value y* ← Y in place of the actual value y* = CPRF.Eval(K, x*).
Lemma 7. 
Assume that (CPRF.Setup, CPRF.Constrain, CPRF.Eval) satisfies pseudorandomness; then Exp_7 ≈_c Exp_8.
Proof. 
We reduce to the pseudorandomness property of the CPRF: we build a distinguisher B that, given black-box access to A, attempts to distinguish the CPRF's output from a uniformly random value. B operates as follows.
  • A first outputs x* ∈ X.
  • B samples c̃rs ← NIZK.Sim_1(1^λ). Then, it sends the set X ∖ {x*} to the constrained PRF challenger to obtain the punctured key K_{x*}, and submits x* as its challenge point to obtain a value y*. B generates three commitments to this key, c_i ← Com(K_{x*}; r_i), i = 1, 2, 3, and returns pk = (c_1, c_2, c_3) to A. It sets sk = (K_{x*}, {c_i, r_i}_{i=1}^{3}, c̃rs).
  • When A queries the constrain oracle on S_j ⊆ X with x* ∉ S_j, B sends S_j to the constrained PRF challenger and obtains K_{S_j}. Then, it computes a simulated NIZK proof π̂_j ← NIZK.Sim_2(s_j), where s_j = (c_1, c_2, c_3, K_{S_j}, S_j), and returns sk_{S_j} = (K_{S_j}, S_j, π̂_j, c̃rs). If x* ∈ S_j, it returns ⊥. When A queries the evaluation oracle on x_j ≠ x*, B computes y_j = CPRF.Eval(K_{x*}, x_j) and π_j = NIWI.P((c_1, c_2, c_3, x_j, y_j), w), where w = (i = 1, j = 2, K_{x*}, K_{x*}, r_1, r_2, 0^{|π̂|}, 0^{|π̂|}, ∅, 0^{|crs|}). It returns (y_j, π_j) to A.
  • B sends the challenge value y* to A.
  • A may adaptively query the constrained key oracle and the evaluation oracle throughout the security experiment; B answers as in the previous step.
  • B outputs A's result.
When the CPRF challenger returns the real evaluation y* = CPRF.Eval(K, x*), adversary A's view perfectly simulates its view in experiment Exp_7. Conversely, if the challenger's output is uniformly random, A's environment is distributed exactly as in Exp_8. Hence, a non-negligible distinguishing advantage ε(λ) between Exp_7 and Exp_8 translates directly to B achieving advantage ε(λ) against the CPRF's pseudorandomness. □

5. Conclusions

In this study, we present a generic construction for transforming any constrained pseudorandom function into a constrained verifiable random function that simultaneously achieves provability, uniqueness, and pseudorandomness. Our construction relies solely on standard cryptographic primitives—CPRFs, perfectly binding commitments, NIWI proofs, and NIZK proofs—thereby circumventing the need for indistinguishability obfuscation. Our scheme achieves constant-size public keys that do not scale with the circuit depth, yielding significant efficiency improvements over obfuscation-based constructions. However, our construction only satisfies selective security, where the adversary must declare the challenge point a priori. Extending it to the adaptive security model—where the adversary selects the challenge point dynamically during the experiment—poses an important open challenge for future work.

Author Contributions

Conceptualization, Y.S.; Methodology, P.L. and M.L.; Formal analysis, P.L. and M.L.; Writing—original draft, P.L.; Writing—review & editing, P.L. and Y.S.; Funding acquisition, Y.S. and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This project received financial support from China’s National Natural Science Foundation under grants 62102134 and 12071112.

Data Availability Statement

This work presents a theoretical framework without experimental components; hence, no research data are available for sharing.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goldreich, O.; Goldwasser, S.; Micali, S. How to Construct Random Functions. In Proceedings of the 25th Annual Symposium on Foundations of Computer Science, Singer Island, FL, USA, 24–26 October 1984; pp. 464–479. [Google Scholar]
  2. Micali, S.; Rabin, M.; Vadhan, S. Verifiable Random Functions. In Proceedings of the 40th Annual Symposium on the Foundations of Computer Science, New York, NY, USA, 17–19 October 1999; pp. 120–130. [Google Scholar]
  3. Giunta, E.; Stewart, A. Unbiasable Verifiable Random Functions. In Advances in Cryptology—EUROCRYPT 2024—Proceedings of the 43rd Annual International Conference on the Theory and Applications of Cryptographic Techniques, Zurich, Switzerland, 26–30 May 2024; Lecture Notes in Computer Science; Proceedings, Part IV; Joye, M., Leander, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2024; Volume 14654, pp. 142–167. [Google Scholar]
  4. Malavolta, G. Key-Homomorphic and Aggregate Verifiable Random Functions. In Theory of Cryptography—Proceedings of the 22nd International Conference, TCC 2024, Milan, Italy, 2–6 December 2024; Lecture Notes in Computer Science; Proceedings, Part IV; Boyle, E., Mahmoody, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2024; Volume 15367, pp. 98–129. [Google Scholar]
  5. Shi, Y.; Luo, T.; Liang, J.; Au, M.H.; Luo, X. Obfuscating Verifiable Random Functions for Proof-of-Stake Blockchains. IEEE Trans. Dependable Secur. Comput. 2024, 21, 2982–2996. [Google Scholar] [CrossRef]
  6. Boneh, D.; Waters, B. Constrained Pseudorandom Functions and Their Applications. In Advances in Cryptology—ASIACRYPT 2013—Proceedings of the 19th International Conference on the Theory and Application of Cryptology and Information Security, Bengaluru, India, 1–5 December 2013; Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2013; pp. 280–300. [Google Scholar]
  7. Chen, Z.; Deng, L.; Ruan, Y.; Feng, S.; Wang, T.; Wang, B. Certificateless Broadcast Encryption with Authorization Suitable for Storing Personal Health Records. Comput. J. 2024, 67, 617–631. [Google Scholar] [CrossRef]
  8. Maiti, S.; Misra, S.; Mondal, A. ABP: Attribute-Based Broadcast Proxy Re-Encryption With Coalitional Game Theory. IEEE Syst. J. 2024, 18, 85–95. [Google Scholar] [CrossRef]
  9. Roy, A.K.; Nath, K.; Srivastava, G.; Gadekallu, T.R.; Lin, J.C. Privacy Preserving Multi-Party Key Exchange Protocol for Wireless Mesh Networks. Sensors 2022, 22, 1958. [Google Scholar] [CrossRef] [PubMed]
  10. Li, X.; Wang, H.; Ma, S. An efficient ciphertext-policy weighted attribute-based encryption with collaborative access for cloud storage. Comput. Stand. Interfaces 2025, 91, 103872. [Google Scholar] [CrossRef]
  11. Ge, C.; Susilo, W.; Liu, Z.; Baek, J.; Luo, X.; Fang, L. Attribute-Based Proxy Re-Encryption With Direct Revocation Mechanism for Data Sharing in Clouds. IEEE Trans. Dependable Secur. Comput. 2024, 21, 949–960. [Google Scholar] [CrossRef]
  12. Fuchsbauer, G. Constrained Verifiable Random Functions. In Security and Cryptography for Networks—Proceedings of the 9th International Conference, SCN 2014, Amalfi, Italy, 3–5 September 2014; Proceedings; Springer: Berlin/Heidelberg, Germany, 2014; pp. 95–114. [Google Scholar]
  13. Liu, M.; Zhang, P.; Wu, Q. A Novel Construction of Constrained Verifiable Random Functions. Secur. Commun. Netw. 2019, 2019, 4187892:1–4187892:15. [Google Scholar] [CrossRef]
  14. Zan, Y.; Li, H.; Xu, H. Adaptively Secure Constrained Verifiable Random Function. In Science of Cyber Security—Proceedings of the 5th International Conference, SciSec 2023, Melbourne, VIC, Australia, 11–14 July 2023; Lecture Notes in Computer Science; Proceedings; Yung, M., Chen, C., Meng, W., Eds.; Springer: Berlin/Heidelberg, Germany, 2023; Volume 14299, pp. 367–385. [Google Scholar]
  15. Chandran, N.; Raghuraman, S.; Vinayagamurthy, D. Constrained Pseudorandom Functions: Verifiable and Delegatable. IACR Cryptol. ePrint Arch. 2014, 2014, 522. [Google Scholar]
  16. Datta, P.; Dutta, R.; Mukhopadhyay, S. Constrained Pseudorandom Functions for Turing Machines Revisited: How to Achieve Verifiability and Key Delegation. Algorithmica 2019, 81, 3245–3390. [Google Scholar] [CrossRef]
  17. Liang, B.; Li, H.; Chang, J. Verifiable Random Functions from (Leveled) Multilinear Maps. In Cryptology and Network Security—Proceedings of the 14th International Conference, CANS 2015, Marrakesh, Morocco, 10–12 December 2015; Proceedings; Springer: Berlin/Heidelberg, Germany, 2015; pp. 129–143. [Google Scholar]
  18. Garg, S.; Gentry, C.; Halevi, S.; Raykova, M.; Sahai, A.; Waters, B. Candidate Indistinguishability Obfuscation and Functional Encryption for all Circuits. In Proceedings of the 54th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2013, Berkeley, CA, USA, 26–29 October 2013; pp. 40–49. [Google Scholar]
  19. Brakerski, Z.; Rothblum, G.N. Virtual Black-Box Obfuscation for All Circuits via Generic Graded Encoding. In Theory of Cryptography—Proceedings of the 11th Theory of Cryptography Conference, TCC 2014, San Diego, CA, USA, 24–26 February 2014; Proceedings; Springer: Berlin/Heidelberg, Germany, 2014; pp. 1–25. [Google Scholar]
  20. Hu, Y.; Jia, H. Cryptanalysis of GGH Map. In Advances in Cryptology—EUROCRYPT 2016—Proceedings of the 35th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Vienna, Austria, 8–12 May 2016; Lecture Notes in Computer Science; Proceedings, Part I; Fischlin, M., Coron, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; Volume 9665, pp. 537–565. [Google Scholar]
  21. Bitansky, N. Verifiable Random Functions from Non-interactive Witness-Indistinguishable Proofs. J. Cryptol. 2020, 33, 459–493. [Google Scholar] [CrossRef]
  22. Goyal, R.; Hohenberger, S.; Koppula, V.; Waters, B. A Generic Approach to Constructing and Proving Verifiable Random Functions. In Theory of Cryptography—Proceedings of the 15th International Conference, TCC 2017, Baltimore, MD, USA, 12–15 November 2017; Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2017; pp. 537–566. [Google Scholar]
  23. Boneh, D.; Boyen, X. Short Signatures Without Random Oracles. In Advances in Cryptology—EUROCRYPT 2004, Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, Interlaken, Switzerland, 2–6 May 2004; Lecture Notes in Computer Science; Proceedings; Cachin, C., Camenisch, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3027, pp. 56–73. [Google Scholar]
  24. Groth, J.; Sahai, A. Efficient Non-interactive Proof Systems for Bilinear Groups. In Advances in Cryptology—EUROCRYPT 2008, Proceedings of the 27th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Istanbul, Turkey, 13–17 April 2008; Lecture Notes in Computer Science; Proceedings; Smart, N.P., Ed.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 4965, pp. 415–432. [Google Scholar]
  25. Lysyanskaya, A. Unique Signatures and Verifiable Random Functions from the DH-DDH Separation. In Advances in Cryptology—CRYPTO 2002, Proceedings of the 22nd Annual International Cryptology Conference, Santa Barbara, CA, USA, 18–22 August 2002; Proceedings; Springer: Berlin/Heidelberg, Germany, 2002; pp. 597–612. [Google Scholar]
  26. Dodis, Y.; Yampolskiy, A. A Verifiable Random Function with Short Proofs and Keys. In Public Key Cryptography—PKC 2005, Proceedings of the 8th International Workshop on Theory and Practice in Public Key Cryptography, Les Diablerets, Switzerland, 23–26 January 2005; Proceedings; Springer: Berlin/Heidelberg, Germany, 2005; pp. 416–431. [Google Scholar]
  27. Jager, T. Verifiable Random Functions from Weaker Assumptions. In Theory of Cryptography—Proceedings of the 12th Theory of Cryptography Conference, TCC 2015, Warsaw, Poland, 23–25 March 2015; Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2015; pp. 121–143. [Google Scholar]
  28. Hofheinz, D.; Jager, T. Verifiable Random Functions from Standard Assumptions. In Theory of Cryptography—Proceedings of the 13th International Conference, TCC 2016-A, Tel Aviv, Israel, 10–13 January 2016; Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2016; pp. 336–362. [Google Scholar]
  29. Fuchsbauer, G.; Konstantinov, M.; Pietrzak, K.; Rao, V. Adaptive Security of Constrained PRFs. IACR Cryptol. ePrint Arch. 2014, 2014, 416. [Google Scholar]
  30. Hofheinz, D.; Kamath, A.; Koppula, V.; Waters, B. Adaptively Secure Constrained Pseudorandom Functions. IACR Cryptol. ePrint Arch. 2014, 2014, 720. [Google Scholar]
  31. Hohenberger, S.; Koppula, V.; Waters, B. Adaptively Secure Puncturable Pseudorandom Functions in the Standard Model. In Advances in Cryptology—ASIACRYPT 2015—Proceedings of the 21st International Conference on the Theory and Application of Cryptology and Information Security, Auckland, New Zealand, 29 November–3 December 2015; Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2015; pp. 79–102. [Google Scholar]
  32. Attrapadung, N.; Matsuda, T.; Nishimaki, R.; Yamada, S.; Yamakawa, T. Constrained PRFs for NC1 in Traditional Groups. In Advances in Cryptology—CRYPTO 2018—Proceedings of the 38th Annual International Cryptology Conference, Santa Barbara, CA, USA, 19–23 August 2018; Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2018; pp. 543–574. [Google Scholar]
  33. Attrapadung, N.; Matsuda, T.; Nishimaki, R.; Yamada, S.; Yamakawa, T. Adaptively Single-Key Secure Constrained PRFs for NC1. In Public-Key Cryptography—PKC 2019—Proceedings of the 22nd IACR International Conference on Practice and Theory of Public-Key Cryptography, Beijing, China, 14–17 April 2019; Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2019; pp. 223–253. [Google Scholar]
  34. Boyle, E.; Goldwasser, S.; Ivan, I. Functional Signatures and Pseudorandom Functions. In Public-Key Cryptography—PKC 2014—Proceedings of the 17th International Conference on Practice and Theory in Public-Key Cryptography, Buenos Aires, Argentina, 26–28 March 2014; Proceedings; Springer: Berlin/Heidelberg, Germany, 2014; pp. 501–519. [Google Scholar]
  35. Kiayias, A.; Papadopoulos, S.; Triandopoulos, N.; Zacharias, T. Delegatable pseudorandom functions and applications. In Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications Security, CCS’13, Berlin, Germany, 4–8 November 2013; pp. 669–684. [Google Scholar]
Table 1. Comprehensive comparison between our construction and prior works.

Cite               Public Key Size    Proof Size    Evaluation Complexity    Verification Complexity
Fuchsbauer [12]    (2n + 1)λ          poly(λ)       O(λn)                    O(λn) + 2 mul
Liu et al. [13]    poly(λ, |C|)       poly(λ)       poly(λ, |C|)             poly(λ, |C|) + 2 mul
Zan et al. [14]    poly(λ, |C|)       poly(λ)       poly(λ, |C|)             poly(λ, |C|) + poly(λ)
Our construction   3λ                 poly(λ)       poly(λ, |C|)             O(1) · mul
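The fixed public key size in the last row can be checked with a minimal accounting. This is a sketch under the assumption that the public key consists exactly of the three perfectly binding commitments c_1, c_2, c_3 that appear in the hybrid argument of Table 2, each of length λ bits:

\[
|pk| = |c_1| + |c_2| + |c_3| = 3\lambda,
\]

which is independent of the size and depth of the constraining circuit C, in contrast to the poly(λ, |C|) public keys of [13,14].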
Table 2. Comparisons between consecutive experiments.

Exp 1 → Exp 2 (zero-knowledge of NIZK): crs ← NIZK.Setup(1^λ) and π̂_j ← NIZK.P(crs, s_1, w_1) are replaced by crs̃ ← NIZK.Sim_1(1^λ) and π̂_j ← NIZK.Sim_2(s_1).
Exp 2 → Exp 3 (hiding property of commitment): c_3 ← Com(K; r_3) is replaced by K_x ← CPRF.Constrain(K, {x}) and c_3 ← Com(K_x; r_3).
Exp 3 → Exp 4 (witness indistinguishability of NIWI): w = (i = 1, j = 2, K, K, r_1, r_2, 0^{|π̂|}, 0^{|π|}, , 0^{|crs|}) is replaced by w = (i = 2, j = 3, K, K_x, r_2, r_3, 0^{|π̂|}, 0^{|π|}, , 0^{|crs|}).
Exp 4 → Exp 5 (hiding property of commitment): c_1 ← Com(K; r_1) is replaced by c_1 ← Com(K_x; r_1).
Exp 5 → Exp 6 (witness indistinguishability of NIWI): w = (i = 2, j = 3, K, K_x, r_2, r_3, 0^{|π̂|}, 0^{|π|}, , 0^{|crs|}) is replaced by w = (i = 1, j = 3, K_x, K_x, r_2, r_3, 0^{|π̂|}, 0^{|π|}, , 0^{|crs|}).
Exp 6 → Exp 7 (hiding property of commitment): c_2 ← Com(K; r_2) is replaced by c_2 ← Com(K_x; r_2).
Exp 7 → Exp 8 (pseudorandomness of CPRF): y = CPRF.Eval(K, x) is replaced by y ← Y.
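To make the objects manipulated by the hybrid sequence of Table 2 concrete, the toy Python sketch below uses hypothetical stand-ins rather than the paper's primitives: SHA-256 of (r ‖ key) models a hiding commitment, HMAC-SHA256 models CPRF.Eval, and a hash derivation models the constrained key K_x. The names commit, cprf_eval, and constrain are illustrative only. The sketch walks the commitments c_1, c_2, c_3 from the master key K to K_x and finally replaces the challenge value by a uniform string, mirroring the order of the hops above.

```python
import hashlib
import hmac
import os

# Hypothetical stand-ins (not the paper's primitives): SHA-256 of (r || key)
# models a computationally hiding commitment, HMAC-SHA256 models CPRF.Eval,
# and a hash derivation models a constrained key K_x that hides K.
def commit(key: bytes, r: bytes) -> bytes:
    return hashlib.sha256(r + key).digest()

def cprf_eval(key: bytes, x: bytes) -> bytes:
    return hmac.new(key, x, hashlib.sha256).digest()

def constrain(key: bytes, point: bytes) -> bytes:
    return hashlib.sha256(b"constrain" + key + point).digest()

K = os.urandom(32)                      # master CPRF key
x = b"challenge point"                  # the point the adversary is challenged on
r1, r2, r3 = (os.urandom(32) for _ in range(3))

# Exp 1: all three public commitments open to the master key K.
c1, c2, c3 = commit(K, r1), commit(K, r2), commit(K, r3)
y_real = cprf_eval(K, x)

# Exp 2-7 (conceptually): c_3, then c_1, then c_2 are switched to the
# constrained key K_x; between the switches the NIWI witness is moved to the
# pair of commitments that currently agree, as in the rows of Table 2.
K_x = constrain(K, x)
c3 = commit(K_x, r3)                    # Exp 2 -> Exp 3 (hiding)
c1 = commit(K_x, r1)                    # Exp 4 -> Exp 5 (hiding)
c2 = commit(K_x, r2)                    # Exp 6 -> Exp 7 (hiding)

# Exp 7 -> Exp 8: the challenge value is replaced by a uniform element of Y.
y_random = os.urandom(32)
print(len(y_real) == len(y_random))     # same length; indistinguishable at x if the CPRF is pseudorandom
```

Each switch in the sketch corresponds to one indistinguishable hop in Table 2; in the actual proof the indistinguishability is argued from the hiding property of the commitment, the witness indistinguishability of the NIWI, and the constrained pseudorandomness of the CPRF, rather than from the hash-based stand-ins used here.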