Article

Lattice-Based Certificateless Proxy Re-Signature for IoT: A Computation-and-Storage Optimized Post-Quantum Scheme

Department of Electronic and Communication Engineering, Beijing Electronic Science and Technology Institute, Beijing 100070, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(15), 4848; https://doi.org/10.3390/s25154848
Submission received: 3 July 2025 / Revised: 31 July 2025 / Accepted: 2 August 2025 / Published: 6 August 2025
(This article belongs to the Special Issue IoT Network Security (Second Edition))


Highlights

What are the main findings?
  • We propose a novel and practical proxy re-signature scheme that utilizes the Dilithium algorithm.
  • Performance evaluation and comparative analysis demonstrate that our proposed scheme significantly reduces the computational overhead compared to existing algebraic lattice-based schemes. Furthermore, it substantially optimizes the signature storage size compared to mainstream NTRU-based solutions, achieving a 52.9% reduction in storage space.
What are the implications of the main findings?
  • This study fills a critical research gap by establishing a practical certificateless proxy re-signature scheme based on the Dilithium algorithm. Beyond this novelty, it also promotes the broader application and exploration of the Dilithium algorithm within the domain of certificateless cryptography.
  • The considerable reduction in computational overhead positions our proposed scheme as a highly attractive option for resource-constrained environments, such as Internet of Things (IoT) terminals.

Abstract

Proxy re-signature enables transitive authentication of digital identities across different domains and has significant application value in areas such as digital rights management, cross-domain certificate validation, and distributed system access control. However, most existing proxy re-signature schemes, which are predominantly based on traditional public-key cryptosystems, face security vulnerabilities and certificate management bottlenecks. While identity-based schemes alleviate some issues, they introduce key escrow concerns. Certificateless schemes effectively resolve both certificate management and key escrow problems but remain vulnerable to quantum computing threats. To address these limitations, this paper constructs an efficient post-quantum certificateless proxy re-signature scheme based on algebraic lattices. Building upon algebraic lattice theory and leveraging the Dilithium algorithm, our scheme innovatively employs a lattice basis reduction-assisted parameter selection strategy to mitigate the potential algebraic attack vectors inherent in the NTRU lattice structure. This ensures the security and integrity of multi-party communication in quantum-threat environments. Furthermore, the scheme significantly reduces computational overhead and optimizes signature storage complexity through structured compression techniques, facilitating deployment on resource-constrained devices like Internet of Things (IoT) terminals. We formally prove the unforgeability of the scheme under the adaptive chosen-message attack model, with its security reducible to the hardness of the corresponding underlying lattice problems.

1. Introduction

Proxy re-signature schemes provide a mechanism for the transitive authentication of cross-domain digital identities. In proxy re-signature, a proxy (P) transforms signatures on an identical message between two distinct signers. Specifically, using a proxy re-signature key, proxy P can convert delegator A’s signature on a given message into a signature on the same message that is verifiable with delegatee B’s public key. This transformation ensures that the resulting re-signed signature is indistinguishable from one genuinely generated by delegatee B while being clearly distinguishable from delegator A’s original signature. Proxy re-signature technology has significant applications in diverse fields, such as Digital Rights Management (DRM), cross-domain certificate interoperability, and access control in distributed systems. A prime example is its application in international travel document systems. Consider a traveler (C) holding an electronic signature ($Sig_E$) issued by their home country (E) who seeks entry into country F. The border control agency in country F, acting as a proxy entity, first verifies the validity of $Sig_E$. Upon successful verification, the agency can convert $Sig_E$ into an electronic signature ($Sig_F$) that conforms to country F’s standards. Subsequently, relevant authorities within country F can verify the traveler’s identity and achieve transnational authentication merely by using the public key of country F’s border control agency. This process obviates the need to manage and maintain complex, transnational certificate chains.
The concept of proxy re-signature was first introduced by Blaze et al. [1]. Ateniese and Hohenberger [2] formally defined the security model and explored additional application scenarios. Early applications primarily focused on smart card key updates, enabling the dynamic expansion of terminal device key spaces using proxy signatures. Subsequently, their use has expanded to areas such as anonymous group signatures and distributed system path attestation. For instance, in distributed network routing verification, data packets can leverage a proxy signature chain to prove their complete transmission path, with verifiers only requiring the public key of the terminal node. However, these early proxy re-signature schemes were predominantly based on certificate-based public-key cryptosystems. In such systems, the public keys of the delegatee and delegator must be obtained from certificates prior to signature verification, leading to significant certificate distribution and management challenges. To address the bottleneck of certificate management, researchers [3,4] have developed various identity-based proxy re-signature (IBPRS) schemes. These schemes utilize user identity information as public keys, thereby avoiding the reliance on certificates.
Nevertheless, in IBPRS schemes, the private keys of both the delegatee and delegator are generated by a Private Key Generator (PKG). This inherently introduces a key escrow problem [5], as the PKG possesses knowledge of all user private keys, enabling it to potentially eavesdrop on communications or to forge user signatures. To resolve both certificate management and key escrow issues, scholars have begun investigating certificateless proxy re-signature (CLPRS) schemes. Guo et al. [6] combined certificateless cryptography with proxy re-signature, proposing the first bidirectional CLPRS scheme. Although this approach eliminated certificate dependency and avoided the key escrow problem, a concrete security proof was not provided. The proxy re-signature schemes discussed previously are all based on traditional public key cryptography, with their security relying on the presumed intractability of problems such as large integer factorization and discrete logarithm problems. This reliance introduces inherent security vulnerabilities to the system. With the advent of quantum computing, the security of public-key cryptosystems founded on classically hard number-theoretic problems faces a significant challenge. As Shor [7] demonstrated, the advancement of quantum computing renders problems like discrete logarithm and integer factorization computationally tractable. Thus, developing post-quantum certificateless proxy re-signature schemes is the only comprehensive approach to resolving the certificate management bottleneck, key escrow problem, and imminent quantum computational threat faced by current proxy re-signature technologies.
To address these problems, we introduce a computationally efficient post-quantum certificateless proxy re-signature scheme based on algebraic lattices. This scheme is designed to guarantee the security and integrity of communication messages exchanged among the Key Generation Center (KGC), proxy, delegator, and delegatee, even in quantum attack environments.

2. Contribution

The main contributions of this paper are as follows:
(1)
We propose an efficient certificateless proxy re-signature scheme with a core architecture based on algebraic lattice theory. This scheme leverages the Dilithium algorithm as a foundational building block, ensuring operational efficiency while resolving the complexities associated with Discrete Gaussian Sampling. In contrast to comparable schemes, our proposal innovatively adopts a parameter-selection strategy enhanced by lattice basis reduction. This strategy ensures strict adherence to the NIST-standardized parameter configurations while effectively circumventing the potential algebraic attack vectors inherent in NTRU lattice structures.
(2)
By applying structured compression techniques, our scheme optimizes the storage size of signatures to be comparable to that of NTRU-based architectures. This results in a storage space compression rate of over 52.9% when compared to mainstream NTRU-based signature schemes, offering marked advantages for deployment in storage-constrained environments, including Internet of Things (IoT) terminals.
(3)
We formally prove our scheme to be unforgeable under the Existential Unforgeability under Chosen Message Attack (EUF-CMA) security model, based on the dual hardness assumptions of the Module Small Integer Solution (MSIS) and Module Learning With Errors (MLWE) problems. The security of the scheme is reducible to the presumed intractability of these underlying mathematical problems.

3. Related Work

Current research hotspots in proxy re-signature primarily encompass identity-based schemes, lightweight schemes for the Internet of Things (IoT), certificateless schemes, and post-quantum secure schemes. Dutta et al. [8] proposed an identity-based unidirectional proxy re-signature scheme; however, it is susceptible to private key leakage. Tian et al. [9] constructed an identity-based bidirectional proxy re-signature scheme. However, its proxy re-signature process necessitates joint computation involving the private keys of both the delegator and delegatee, thereby increasing the key management complexity. Zhang et al. [10] introduced an identity-based non-interactive proxy re-signature scheme tailored for Mobile Edge Computing. While this scheme reduces the computational overhead by avoiding bilinear pairing operations, its reliance on the hardness of large integer factorization renders it vulnerable to quantum computing threats. In the context of mobile Internet environments, Lei et al. [11] proposed a unidirectional, variable-threshold proxy re-signature scheme notable for its shorter signature lengths, reduced computational costs, enhanced verification efficiency, and improved adaptability; this construction was also formally proven secure in the Standard Model against both collusion and adaptive chosen message attacks. Nevertheless, its reliance on the hardness of the bilinear pairing problem renders it incapable of withstanding quantum computing attacks.
Operating within certificateless frameworks, Fan et al. [12] remedied the limitations inherent in the signature protocol devised by Tian et al. [9]. This resolution involved the presentation of a certificateless proxy re-signature method exhibiting superior operational efficiency and distinguished by more concise private keys. In a separate study, Wu et al. [13] proposed a flexible unidirectional certificateless proxy re-signature scheme. Nevertheless, the structural dependence of this particular scheme on the bilinear pairing problem means that it is not equipped to counteract threats emerging from quantum computation. Zhang et al. [14] developed a revocable certificateless proxy re-signature scheme capable of supporting signature evolution within Electronic Health Record (EHR) sharing systems. Specifically, it facilitates dynamic user management and enables efficient revocation and updating of signatures in response to evolving data requirements. However, this scheme also lacks resistance to quantum computing threats.
Currently, post-quantum signature methodologies predominantly fall into three main classes: those derived from hash functions, those constructed using multivariate polynomials, and those based on lattices. The security of hash-derived signatures is contingent upon the collision resistance of the underlying hash functions; however, such schemes often present limitations regarding both signature compactness and execution velocity. Multivariate polynomial signatures, on the other hand, are recognized as being vulnerable to algebraic cryptanalysis, which can potentially undermine their claimed security. Diverging from these two paradigms, lattice-based signatures exhibit notable strengths in terms of computational performance and security robustness. The initial basis for public-key cryptography leveraging lattices was provided by Ajtai [15], who demonstrated a fundamental linkage between the average-case difficulty and worst-case intractability of certain lattice problems. Following this foundational work, Gentry [16], during the process of developing signature schemes from lattices, introduced the precise notion of a ‘one-way trapdoor function.’ In [17], Lyubashevsky introduced a novel methodology for converting identification schemes into signature schemes using the Fiat-Shamir transform [18,19]. This approach incorporates an ‘abort’ mechanism to discard any signature values that might leak private key information, thereby ensuring that the output signature values adhere to a uniform distribution. Later, Lyubashevsky [20] employed rejection sampling techniques to generalize sampling to arbitrary distributions and demonstrated that signature schemes based on the Learning With Errors (LWE) problem could achieve smaller public key sizes.
Building upon Lyubashevsky’s signature scheme [20], Tian et al. [21] introduced a lattice-based certificateless signature scheme that is notable for its shorter private keys and enhanced efficiency compared with other contemporary schemes. Subsequently, Xie et al. [22] proposed a versatile unidirectional lattice-based proxy re-signature scheme. However, a significant drawback of their approach is that users must fully generate their own public and private keys, creating vulnerabilities to attacks from malicious users. Furthermore, their scheme risks exposing the delegatee’s private key during the generation of the re-signature key. Fan et al. [23] developed a lattice-based re-signature method proven secure in the CCA-PRE model. Nonetheless, this scheme is susceptible to man-in-the-middle attacks and requires a greater storage capacity for re-signatures. Jiang et al. [24] put forward a lattice-based proxy re-signature scheme that permits a message to be re-signed multiple times. A proxy re-signature construction employing lattice structures, designed for unidirectional applications and infinite use, was proposed by Chen et al. [25]. This scheme incorporates private re-signature keys, which allow an individual message to undergo an unbounded number of re-signing operations. Separately, Luo et al. [26] advanced a proposal for an attribute-based proxy re-signature methodology, establishing its foundations upon conventional lattice structures. Through the application of dual-mode cryptographic techniques, Zhou et al. [27] developed a certificateless proxy re-signature scheme engineered to be resistant to collusion attacks; however, an efficiency analysis of this scheme was not provided, and it suffers from large signature sizes.
To date, scholarly investigations into certificateless proxy re-signature schemes based on algebraic lattices are not extensive. This scarcity is primarily because certificateless signature schemes constructed using algebraic lattices often result in large signature or private key lengths, thereby imposing a notable burden on the available storage resources. To address the challenge of substantial signature lengths, Guneysu et al. [28] proposed a methodology centered on partitioning numerical values into two distinct components: higher-significance bits and lower-significance bits. This approach permits the elision of the lower-significance bits on the condition that their removal does not alter the rounding outcome of the higher-significance bits, consequently leading to diminished storage requirements. Concurrently, Bai et al. [29] introduced a method for discarding signature subcomponents, which implicitly incorporates a proof of noise knowledge within the overarching proof pertaining to the private key. Recognized as a NIST-standardized algorithm for post-quantum signatures, Dilithium utilizes compression strategies comparable to those in Guneysu’s work. It further makes use of ‘hints’ within its rounding operations, thereby aiming to avert failures during the verification process. Furthermore, Dilithium is implemented on algebraic lattices, and its security is based on the presumed hardness of the Module Small Integer Solution (MSIS) and Module Learning With Errors (MLWE) problems [30]. Due to its design, which incorporates uniform key sampling, the scheme demonstrates resistance to known algebraic and subfield attacks. For NIST-selected parameter sets of Dilithium (e.g., security levels 2, 3, and 5), solving the corresponding MLWE and MSIS instances remains computationally infeasible under currently known optimal classical and quantum attacks [31]. In summary, existing certificateless proxy re-signature schemes are typically either designed based on traditional computationally hard problems, rendering them vulnerable to quantum attacks, or they achieve quantum security at the cost of low storage and computational efficiency, making them unsuitable for practical deployment. Consequently, there is substantial value in furthering the design of certificateless proxy re-signature schemes employing lattices that concurrently achieve high efficiency and strong security guarantees.

4. Preliminaries

4.1. Parameter Notation and Their Definitions

Table 1 summarizes the notation used in the proposed scheme.

4.2. Lattices

Let $b_1, \ldots, b_m \in \mathbb{R}^n$ be $m$ linearly independent vectors, and let $B \in \mathbb{R}^{n \times m}$ be the matrix whose columns are these vectors. The lattice generated by $B$ is defined as $\mathcal{L}(B) = \{Bx : x \in \mathbb{Z}^m\}$.

4.3. Hardness Assumption

The conceptual integration of module lattices facilitates the formulation of the Module Learning With Errors (MLWE) and Module Small Integer Solution (MSIS) problems, which represent structural advancements over their foundational counterparts, Learning With Errors (LWE) and Small Integer Solution (SIS), respectively. These intractability assumptions, functioning as broader versions of LWE and SIS, are fundamentally derived from challenges in standard lattice theory. The primary application of the MLWE assumption lies in safeguarding cryptographic keys and offering resilience against attacks aimed at key recovery. In tandem, the postulated hardness of the MSIS problem forms the cryptographic basis for the robust unforgeability of the signature scheme. A comprehensive exposition of these foundational hardness assumptions is allocated to the security analysis chapter later in this document.
Definition 1. 
SIS problem:
Given a positive integer $q$, a uniformly random matrix $A \in \mathbb{Z}_q^{n \times m}$, and a bound $\beta$, find a non-zero vector $v \in \mathbb{Z}^m$ such that $Av = 0 \bmod q$ and $\|v\| \leq \beta$.
Definition 2. 
MSIS problem:
For a given matrix $A \in R_q^{m \times k}$, the MSIS challenge is to find a non-zero vector $v$ satisfying the conditions $[\,A \mid I\,]\,v = 0 \pmod q$ and $\|v\| \leq \beta$.
Definition 3. 
MLWE problem:
A matrix $A \in R_q^{n \times m}$ is selected uniformly at random. Two vectors $s_1$ and $s_2$ are then sampled as $s_1 \leftarrow \chi^m$ and $s_2 \leftarrow \chi^n$, where $\chi$ is a probability distribution over the ring $R$. The output is $t = A s_1 + s_2 \in R_q^n$. The decision version of MLWE is to distinguish the pair $(A, t = A s_1 + s_2)$ from a uniformly random pair $(A, t) \in R_q^{n \times m} \times R_q^n$.
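To make the definition concrete, the following Python sketch (ours, not part of the original paper) generates a toy decision-MLWE sample $(A, t = A s_1 + s_2)$ over $R_q = \mathbb{Z}_q[X]/(X^n + 1)$. The small module ranks and the choice of $\chi$ as the uniform distribution on $[-\eta, \eta]$ are illustrative assumptions only.

```python
import random

# Toy parameters for illustration only (much smaller ranks than a real scheme).
q, n = 8380417, 256        # modulus and polynomial degree used later in the paper
rows, cols, eta = 2, 2, 2  # assumed module ranks and noise bound

def poly_mul(a, b):
    """Multiply two polynomials in Z_q[X]/(X^n + 1) (schoolbook, for clarity)."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:                          # X^n = -1 wraps around with a sign flip
                res[k - n] = (res[k - n] - ai * bj) % q
    return res

def poly_add(a, b):
    return [(x + y) % q for x, y in zip(a, b)]

def uniform_poly():
    return [random.randrange(q) for _ in range(n)]

def small_poly():
    return [random.randint(-eta, eta) % q for _ in range(n)]

def matvec(A, s):
    """Compute A*s over R_q for a rows x cols matrix of polynomials."""
    out = []
    for i in range(rows):
        acc = [0] * n
        for j in range(cols):
            acc = poly_add(acc, poly_mul(A[i][j], s[j]))
        out.append(acc)
    return out

# MLWE sample: (A, t = A*s1 + s2); the decision problem asks to tell t apart
# from a uniformly random vector over R_q.
A = [[uniform_poly() for _ in range(cols)] for _ in range(rows)]
s1 = [small_poly() for _ in range(cols)]
s2 = [small_poly() for _ in range(rows)]
t = [poly_add(ti, s2i) for ti, s2i in zip(matvec(A, s1), s2)]
print("first coefficients of t:", t[0][:4])
```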

4.4. Number Theoretic Transform (NTT) Domain Representation

Let $a \in R_q$ be a polynomial. Its representation in the NTT domain, denoted $\hat{a} \in \mathbb{Z}_q^{256}$, is the vector $\hat{a} = (a(r_0), a(-r_0), \ldots, a(r_{127}), a(-r_{127}))$, where $r_i = r^{\mathrm{brv}(128+i)}$, $r$ is a primitive 512-th root of unity modulo $q$, and $\mathrm{brv}(k)$ denotes the bit-reversal of the 8-bit integer $k$.
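The sketch below (ours) computes $\hat{a}$ directly from this definition by evaluating $a$ at $\pm r_i$. The constant $r = 1753$ is, to our knowledge, the 512-th root of unity used by the Dilithium reference implementation; treat it as an assumption rather than a quotation from the paper.

```python
q, n = 8380417, 256
r = 1753  # assumed: 512-th root of unity used by the Dilithium reference code

def brv(k: int) -> int:
    """Bit-reverse an 8-bit integer."""
    return int(format(k, '08b')[::-1], 2)

def poly_eval(a, x):
    """Evaluate polynomial a (n coefficients, low degree first) at x mod q."""
    acc = 0
    for coeff in reversed(a):
        acc = (acc * x + coeff) % q
    return acc

def ntt_by_definition(a):
    """Return a_hat = (a(r_0), a(-r_0), ..., a(r_127), a(-r_127)), r_i = r^brv(128+i)."""
    a_hat = []
    for i in range(128):
        ri = pow(r, brv(128 + i), q)
        a_hat.append(poly_eval(a, ri))
        a_hat.append(poly_eval(a, q - ri))   # -r_i mod q
    return a_hat

assert pow(r, 256, q) == q - 1   # sanity check: r^256 = -1, so r has order 512
a = [1] + [0] * (n - 1)          # the constant polynomial 1
print(ntt_by_definition(a)[:4])  # constant polynomials map to all-equal vectors
```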

5. Construction of the Scheme

5.1. System Model

Our proposed scheme design targets general proxy re-signature requirements. To intuitively demonstrate its application value, this section constructs a system model using the Medical Internet of Things as an example scenario. The system architecture, illustrated in Figure 1, comprises four distinct entities: the Key Generation Center (KGC), Terminal User (TU), Cloud Server (CS), and Healthcare Authority (HA).
(1)
Key Generation Center (KGC): A trusted root authority entrusted with initializing the system, managing the registration of TUs and HAs, and distributing partial private keys.
(2)
Terminal User (TU): An entity representing the data owner and their associated medical devices, characterized by limited computational and storage resources. The TU obtains system parameters and a partial private key from the KGC upon registration, after which it digitally signs personal medical data (e.g., EHRs) for transmission to the CS.
(3)
Cloud Server (CS): A cloud service platform designed for the storage, processing, and dissemination of the TU’s health data. The CS validates the cryptographic signature of incoming data from the TU before ingestion. Upon a valid request from an HA, the CS executes a proxy re-signature protocol to delegate access.
(4)
Healthcare Authority (HA): A service provider that must also register with the KGC and is responsible for performing authentication and validation checks. These checks confirm the data’s authenticity (originating from the TU), integrity (unaltered content), and access legitimacy (explicit authorization from the TU).
Within this framework, the TU generates personal health records from wearable physiological monitors and delegates access to an HA for diagnostic use. The operational protocol involves the following phases:
Phase 1: System Initialization and Entity Registration. The KGC generates a master key pair and public system parameters. Entities petition the KGC for registration, and the KGC generates a unique partial private key for each entity and delivers it via a secure channel.
Phase 2: Full Key Derivation and Source Signature. Each entity derives its full key pair by combining its self-generated secret value with the partial key obtained from the KGC. The TU then applies an original signature to its data packet and securely uploads it to the CS for storage. The CS can verify the validity of the original signature, thereby ensuring integrity during transit, but lacks the ability to decipher the signature’s content or forge new signatures. Figure 2 illustrates the concrete workflow.
Phase 3: Authorization and Re-Signature Key Generation. When the TU decides to grant access to an HA, it executes a re-signature key generation algorithm to produce a key that is designated for that HA. This key is securely transmitted to the CS, which stores it in association with the TU data.
Phase 4: Proxy Re-Signature and Data Access. The HA initiates a data access request by invoking a re-signature key generation algorithm to produce a key designated for the TU and sends it to the CS. The CS verifies the identity of the HA, retrieves the TU’s data packet along with the key designated for that HA, executes the re-signature algorithm to convert the original signature into a re-signature, and transmits the resultant data packet to the HA. This phase ensures delegation unforgeability (effective re-signature keys cannot be forged without the original signer’s authorization) and delegation precision (re-signature keys are not misused by others).
Phase 5: Data Verification and Utilization. The HA validates the re-signature. A positive verification result cryptographically assures the following properties:
(1)
Authenticity: The data originated and was signed by the claimed TU.
(2)
Integrity: The data have remained unchanged since their creation.
(3)
Delegation unforgeability: Access by the HA is explicitly granted by the TU.
(4)
Long-Term Quantum Resistance: The security of historical signatures is preserved against retrospective attacks launched by future quantum computers.

5.2. Basic Functions

5.2.1. Hash

Our proposed scheme utilizes several distinct algorithms to map strings to elements within diverse domains. The algorithms are detailed below.
Collision-Resistant Hash (CRH): Maps strings to $\{0,1\}^{384}$.
ExpandMask: Maps a string of arbitrary length to a vector $y \in S_{\gamma_1-1}^l$.
ExpandA: Maps a uniform random seed $\rho \in \{0,1\}^{256}$ to a matrix $A \in R_q^{k \times l}$, represented in the NTT domain with each entry in $\mathbb{Z}_q^{256}$.
ExpandS: Maps a uniform random seed $\rho \in \{0,1\}^{256}$ to vectors $s_1 \in S_\eta^l$ and $s_2 \in S_\eta^k$, each with coefficients in the interval $[-\eta, \eta]$.
Moreover, we use the functions and parameters as defined in reference [31]: ExpandA makes use of the more efficient extendable-output function (XOF) $H_{128}$, whereas ExpandS and ExpandMask use the XOF $H$.

5.2.2. SampleInBall

Generates an element of $B_\tau$ pseudo-randomly using the seed $\rho$. The first 8 bytes of $H(\rho)$ are used to choose the signs of the nonzero entries of $c$, and subsequent bytes of $H(\rho)$ are used to choose the positions of those nonzero entries. The parameter $\tau$ is always at most 64, so 8 bytes are sufficient to choose the signs for all $\tau$ nonzero entries of $c$.
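A minimal Python sketch of this sampler is given below (ours, not the authors’ implementation). It follows the description above: SHAKE-256 stands in for $H$, the first 8 bytes supply sign bits, and later bytes select positions via Fisher-Yates style swaps with rejection, as in the Dilithium reference design. The exact byte-level encoding of the standardized algorithm may differ.

```python
import hashlib

N = 256  # polynomial degree

def sample_in_ball(rho: bytes, tau: int = 60):
    """Sample c in B_tau: tau coefficients in {-1, +1}, the rest 0."""
    stream = hashlib.shake_256(rho).digest(8 + 1024)   # ample bytes for rejections
    signs = int.from_bytes(stream[:8], 'little')       # 8 bytes of sign bits
    c = [0] * N
    pos = 8
    for i in range(N - tau, N):
        # rejection-sample a position j uniformly from {0, ..., i}
        while True:
            j = stream[pos]
            pos += 1
            if j <= i:
                break
        c[i] = c[j]
        c[j] = 1 - 2 * (signs & 1)   # +1 or -1 from the next sign bit
        signs >>= 1
    return c

c = sample_in_ball(b"example seed")
assert sum(abs(x) for x in c) == 60   # exactly tau nonzero entries, each +-1
print(c.count(1), c.count(-1))
```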

5.2.3. Element Decomposition

The basic idea is to drop the $d$ low-order bits of each coefficient of the polynomial vector $t$ in the public key using the function Power2Round. The procedure detailed in Algorithm 1 partitions an element $r \in \mathbb{Z}_q$ into its higher-order bits $r_1$ and lower-order bits $r_0$, so that $r = r_1 \cdot 2^d + r_0$ with $r_0 = r \bmod^{\pm} 2^d$ and $r_1 = (r - r_0)/2^d$. Here, $d$ denotes the number of dropped binary bits, and taking $r_0$ as the centered residue ensures that the deviation in the value represented by the higher-order bits does not exceed a magnitude of one.
Algorithm 1 $\mathrm{Power2Round}_q(r, d)$
Decomposes $r$ into $(r_1, r_0)$ such that $r \equiv r_1 \cdot 2^d + r_0 \pmod q$
Input: $r \in \mathbb{Z}_q$
  1: $r^{+} := r \bmod^{+} q$
  2: $r_0 := r^{+} \bmod^{\pm} 2^d$
  3: return $((r^{+} - r_0)/2^d,\ r_0)$
Output: $(r_1, r_0)$
In Algorithm 2, the parameter $\alpha$ is selected as a divisor of $q - 1$; this alternative decomposition expresses $r$ as $r = r_1 \cdot \alpha + r_0$.
Algorithm 2 $\mathrm{Decompose}(r, \alpha)$
Input: $r, \alpha$
  1: $r := r \bmod^{+} q$
  2: $r_0 := r \bmod^{\pm} \alpha$
  3: if $r - r_0 = q - 1$
  4:  then $r_1 := 0$; $r_0 := r_0 - 1$
  5: else $r_1 := (r - r_0)/\alpha$
Output: $(r_1, r_0)$
Subsequently, the extraction of the higher-order bits $r_1$ and lower-order bits $r_0$ from an element $r$ is accomplished using Algorithms 3 and 4.
Algorithm 3 $\mathrm{HighBits}(r, \alpha)$
Input: $r, \alpha$
  1: $(r_1, r_0) := \mathrm{Decompose}(r, \alpha)$
Output: $r_1$
Algorithm 4 $\mathrm{LowBits}(r, \alpha)$
Input: $r, \alpha$
  1: $(r_1, r_0) := \mathrm{Decompose}(r, \alpha)$
Output: $r_0$
Lemma 1. 
Assume $\|s\|_\infty \leq \beta$ and $\|\mathrm{LowBits}(r, \alpha)\|_\infty < \alpha/2 - \beta$. Then the following holds: $\mathrm{HighBits}(r, \alpha) = \mathrm{HighBits}(r + s, \alpha)$.
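The Python sketch below (ours) transcribes Algorithms 1–4 directly and spot-checks Lemma 1 on random inputs. The modulus and $\alpha = 2\gamma_2$ follow the parameter choices of Section 5.3; the concrete value $\beta = 78$ is only an illustrative assumption, since the paper does not fix $\beta$ numerically here.

```python
import random

q = 2**23 - 2**13 + 1   # modulus from Section 5.3
d = 13

def mod_pm(r, m):
    """Centered residue r mod± m, lying in (-m/2, m/2]."""
    r = r % m
    return r - m if r > m // 2 else r

def power2round(r, d=d):
    rp = r % q
    r0 = mod_pm(rp, 2**d)
    return (rp - r0) // 2**d, r0

def decompose(r, alpha):
    r = r % q
    r0 = mod_pm(r, alpha)
    if r - r0 == q - 1:
        return 0, r0 - 1
    return (r - r0) // alpha, r0

def highbits(r, alpha):
    return decompose(r, alpha)[0]

def lowbits(r, alpha):
    return decompose(r, alpha)[1]

# Spot-check Lemma 1: if |s| <= beta and |LowBits(r, alpha)| < alpha/2 - beta,
# then HighBits(r, alpha) = HighBits(r + s, alpha).
gamma2 = (q - 1) // 32        # gamma_2 as defined in Section 5.3
alpha, beta = 2 * gamma2, 78  # beta is an illustrative assumption
for _ in range(100000):
    r = random.randrange(q)
    s = random.randint(-beta, beta)
    if abs(lowbits(r, alpha)) < alpha // 2 - beta:
        assert highbits(r, alpha) == highbits((r + s) % q, alpha)
print("Lemma 1 spot-check passed")
```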

5.3. The Proposed Scheme

Our scheme builds on the work of Ducas et al. [30]. The parameter settings are as follows. Matrix entries are polynomials in the ring $R_q = \mathbb{Z}_q[X]/(X^n + 1)$. The positive integer $d$ indicates the number of low-order bits dropped during element decomposition and is fixed at $d = 13$. For the modulus we choose $q = 2^{23} - 2^{13} + 1$, and the polynomial degree $n$ is fixed at 256. A critical constraint is that the coefficients of all sampled random vectors must not exceed $\eta$ in absolute value. We define $\gamma_1 = (q-1)/16$, $\gamma_2 = \gamma_1/2$, and $b = \log_2 \eta$. The parameter $\beta$ denotes the maximum allowable magnitude of coefficients in vectors considered to have ‘small’ entries. The concrete values implied by these settings are evaluated in the short sketch below. The construction of the proposed scheme is outlined as follows: Algorithms 5–10 constitute the primary signature generation and verification, while Algorithms 11–13 constitute the re-signature generation and verification.
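For reference, this short sketch (ours) evaluates the concrete parameter values and checks that $q \equiv 1 \pmod{2n}$, the condition that makes $R_q$ NTT-friendly.

```python
n, d = 256, 13
q = 2**23 - 2**13 + 1
gamma1 = (q - 1) // 16
gamma2 = gamma1 // 2

print(f"q       = {q}")          # 8380417
print(f"gamma_1 = {gamma1}")     # 523776
print(f"gamma_2 = {gamma2}")     # 261888
assert (q - 1) % (2 * n) == 0    # q = 1 mod 512, so X^256 + 1 splits for the NTT
```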
Algorithm 5 Setup:
1:   $\rho \leftarrow \{0,1\}^{256}$
2:   $\rho' \leftarrow \{0,1\}^{512}$
3:   $K \leftarrow \{0,1\}^{256}$
4:   $k \times l$ matrix $A := \mathrm{ExpandA}(\rho)$
5:   $(s_1, s_2) \in S_\eta^l \times S_\eta^k := \mathrm{ExpandS}(\rho')$
6:   $t := A s_1 + s_2$
7:   $pk = (\rho, t)$, $sk = (\rho, K, s_1, s_2)$
8:   $H_1 : \{0,1\}^* \times \mathbb{Z}_q^k \times \mathbb{Z}_q^k \rightarrow \{0,1\}^*$
9:   $H_2 : \{0,1\}^* \times \{0,1\}^* \times \mathbb{Z}_q^k \times \mathbb{Z}_q^k \rightarrow \{0,1\}^*$
Algorithm 6 Partial Private Key Extract:
10: User submits identity information $ID$
11: KGC selects $\rho_1 \leftarrow \{0,1\}^{256}$
12:   $r_1 \in S_{\gamma_1-1}^l := \mathrm{ExpandMask}(K \,\|\, \rho_1)$
13:   $R := A r_1$
14:   $(R_1, R_0) := \mathrm{HighBits}(R, 2\gamma_2)$
15:   $\tilde{c}_1 \in \{0,1\}^{256} := H_1(ID \,\|\, t \,\|\, R_1)$
16:   $c_1 \in B_{60} := \mathrm{SampleInBall}(\tilde{c}_1)$
17: partial private key $d_a := c_1 s_1 + r_1$
18:   $r_0 := \mathrm{LowBits}(R - c_1 s_2, 2\gamma_2)$
19: If $\|d_a\|_\infty \geq \gamma_1 - \beta$ or $\|r_0\|_\infty \geq \gamma_2 - \beta$, then return to step 10
20: KGC sends the partial private key $(d_a, c_1)$ to the user
21: User calculates $R_1 := \mathrm{HighBits}(A d_a - c_1 t, 2\gamma_2)$
22: If $c_1 = H_1(ID \,\|\, t \,\|\, R_1)$ does not hold,
23:  then the user rejects it and re-initiates the partial-private-key request to the KGC
Output: $(d_a, c_1)$
Algorithm 7 Set Secret Value:
1:   $\rho_2 \leftarrow \{0,1\}^{256}$
2:   $K_1 \leftarrow \{0,1\}^{256}$
3:   $x_a \leftarrow S_\eta^k$
Output: $x_a$
Algorithm 8 Generate full public-private key pairs for users:
4:   $(d_{a1}, d_{a0}) := \mathrm{Power2Round}_q(d_a, b)$
5:   $w_a := A d_{a0} + x_a$
6:  private key $sk_a = (K_1, \rho_2, d_{a0}, x_a)$
7: public key $pk_a = w_a$
Algorithm 9 Signature Generation:
Input: $M, ID, sk_a$
8:   $A := \mathrm{ExpandA}(\rho)$
9:   $\mu := \mathrm{CRH}(\rho \,\|\, M)$
10:   $y \in S_{\gamma_1-1}^l := \mathrm{ExpandMask}(K_1 \,\|\, \rho_2 \,\|\, \mu)$
11:   $Y := A y$
12:   $Y_1 := \mathrm{HighBits}(Y, 2\gamma_2)$
13:   $\tilde{c}_2 \in \{0,1\}^{256} := H_2(ID \,\|\, \mu \,\|\, t \,\|\, Y_1)$
14:   $c_2 \in B_{60} := \mathrm{SampleInBall}(\tilde{c}_2)$
15:   $z_{TU} := c_2 d_{a0} + y$
16:   $r_3 := \mathrm{LowBits}(Y - c_2 x_a, 2\gamma_2)$
17:  If $\|z_{TU}\|_\infty \geq \gamma_1 - \beta$ or $\|r_3\|_\infty \geq \gamma_2 - \beta$, then return to step 9
Output: $\sigma_{TU} = (z_{TU}, c_2)$
Algorithm 10 Verification:
Input: $\sigma_{TU}, M, ID, pk_a$
1:   $A := \mathrm{ExpandA}(\rho)$
2:   $\mu := \mathrm{CRH}(\rho \,\|\, M)$
3:   $Y_1' := \mathrm{HighBits}(A z_{TU} - c_2 w_a, 2\gamma_2)$
4:  If $\|z_{TU}\|_\infty < \gamma_1 - \beta$ and $c_2 = H_2(ID \,\|\, \mu \,\|\, t \,\|\, Y_1')$,
then Output 1; else Output 0
It should be noted that the core of the signature algorithm is a rejection-sampling loop: each iteration either produces a valid signature or is discarded, and the loop repeats until a valid signature is generated and output. The rejection-sampling loop follows the Fiat-Shamir with aborts paradigm [17]. The standardization document [31] specifies that Dilithium2/3/5 (corresponding to ML-DSA-44/65/87) requires an average of 4.25/5.1/3.85 loop iterations, respectively, to generate a valid signature, as calculated from Equation 5 in [30]. Before requesting shared TU data from the CS, the HA must first register with the KGC to obtain the system parameters as well as its public key $pk_b = w_b$ and private key $sk_b = (K_1, \rho_2, d_{b0}, x_b)$. The specific process has been described previously.
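The quoted averages can be reproduced from the repetition-rate estimate cited above, which is approximately $\exp(256\,\beta\,l/\gamma_1 + 256\,\beta\,k/\gamma_2)$ with $\beta = \tau \cdot \eta$. The sketch below (ours) evaluates it for the ML-DSA-44/65/87 parameter sets as we recall them from the Dilithium/FIPS 204 tables; those parameter values are assumptions on our part, not quotations from this paper.

```python
from math import exp

q = 2**23 - 2**13 + 1

# (k, l, eta, tau, gamma1, gamma2) per parameter set -- assumed values.
params = {
    "Dilithium2 (ML-DSA-44)": (4, 4, 2, 39, 2**17, (q - 1) // 88),
    "Dilithium3 (ML-DSA-65)": (6, 5, 4, 49, 2**19, (q - 1) // 32),
    "Dilithium5 (ML-DSA-87)": (8, 7, 2, 60, 2**19, (q - 1) // 32),
}

for name, (k, l, eta, tau, g1, g2) in params.items():
    beta = tau * eta
    # Expected number of rejection-loop iterations per valid signature.
    reps = exp(256 * beta * l / g1 + 256 * beta * k / g2)
    print(f"{name}: expected iterations = {reps:.2f}")
# Prints approximately 4.25, 5.09, and 3.85, matching the figures quoted above.
```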
Algorithm 11 Proxy Re-key Generation:
Input: $ID_{TU}, ID_{HA}$
1: delegator TU selects a random vector $u \leftarrow S_\eta^k$
2:   $K_{TU \rightarrow HA} := d_{a0} + u$
3: TU shares the vector $u$ with delegatee HA confidentially
4: TU transmits $K_{TU \rightarrow HA}$ to the proxy signer CS via a secure channel
5: delegatee HA computes $K_{HA \rightarrow TU} := d_{b0} + u$
6: HA transmits $K_{HA \rightarrow TU}$ to the proxy signer CS via a secure channel
Output: $(K_{TU \rightarrow HA}, K_{HA \rightarrow TU})$
Algorithm 12 Proxy Re-signature:
Input: $\sigma_{TU}, K_{TU \rightarrow HA}, K_{HA \rightarrow TU}$
1: proxy signer CS calculates $z_{HA} := z_{TU} + K_{HA \rightarrow TU} \cdot c_2 - K_{TU \rightarrow HA} \cdot c_2$
Output: $\sigma_{HA} = (z_{HA}, c_2)$
Algorithm 13 Verification:
Input: $\sigma_{HA}, M, ID, pk_b$
1:   $A := \mathrm{ExpandA}(\rho)$
2:   $\mu := \mathrm{CRH}(\rho \,\|\, M)$
3:   $Y_1' := \mathrm{HighBits}(A z_{HA} - c_2 w_b, 2\gamma_2)$
4:  If $\|z_{HA}\|_\infty < \gamma_1 - \beta$ and $c_2 = H_2(ID \,\|\, \mu \,\|\, t \,\|\, Y_1')$,
then Output 1; else Output 0

6. Correctness and Security Analysis

6.1. Correctness

The proof of the correctness of signature verification is as follows:
$$Y_1' = \mathrm{HighBits}_q(A z_i - c_2 w_i,\ 2\gamma_2) = \mathrm{HighBits}_q(A(c_2 d_{i0} + y) - c_2 w_i,\ 2\gamma_2) = \mathrm{HighBits}_q(A y + c_2(A d_{i0} - w_i),\ 2\gamma_2) = \mathrm{HighBits}_q(Y - c_2 x_i,\ 2\gamma_2)$$
According to Lemma 1, we know that:
$$Y_1' = \mathrm{HighBits}_q(Y - c_2 x_i,\ 2\gamma_2) = \mathrm{HighBits}_q(Y - c_2 x_i + c_2 x_i,\ 2\gamma_2) = \mathrm{HighBits}_q(Y,\ 2\gamma_2) = Y_1$$
The verification procedure for the proxy re-signature’s correctness is detailed as follows: When the proxy CS uses the proxy re-signing key and the signature of the delegator TU on message µ to produce a signature for the delegatee HA, the verification can be computed.
$$z_{HA} = z_{TU} + K_{HA \rightarrow TU}\, c_2 - K_{TU \rightarrow HA}\, c_2 = (c_2 d_{a0} + y) + (d_{b0} + u)\, c_2 - (d_{a0} + u)\, c_2 = c_2 d_{a0} + y + c_2 d_{b0} + c_2 u - c_2 d_{a0} - c_2 u = c_2 d_{b0} + y$$
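The cancellation above is purely linear over $R_q$ and can be spot-checked numerically. The toy Python sketch below (ours) works on a single polynomial component (the vector case is coordinate-wise), using random small polynomials and a sparse $\pm 1$ polynomial standing in for the challenge $c_2$; the coefficient bound 4 and $\tau = 60$ are illustrative assumptions.

```python
import random

q, n = 2**23 - 2**13 + 1, 256

def pmul(a, b):
    """Schoolbook multiplication in Z_q[X]/(X^n + 1)."""
    res = [0] * n
    for i, ai in enumerate(a):
        if ai == 0:
            continue
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:
                res[k - n] = (res[k - n] - ai * bj) % q
    return res

def padd(a, b):  return [(x + y) % q for x, y in zip(a, b)]
def psub(a, b):  return [(x - y) % q for x, y in zip(a, b)]
def small():     return [random.randint(-4, 4) % q for _ in range(n)]

def challenge(tau=60):
    c = [0] * n
    for pos in random.sample(range(n), tau):
        c[pos] = random.choice([1, q - 1])   # +1 or -1 mod q
    return c

d_a0, d_b0, u, y, c2 = small(), small(), small(), small(), challenge()

z_tu = padd(pmul(c2, d_a0), y)        # delegator's signature component
k_tu_ha = padd(d_a0, u)               # K_{TU->HA}
k_ha_tu = padd(d_b0, u)               # K_{HA->TU}
z_ha = padd(z_tu, psub(pmul(k_ha_tu, c2), pmul(k_tu_ha, c2)))

assert z_ha == padd(pmul(c2, d_b0), y)   # equals c_2 * d_b0 + y, as derived
print("re-signature correctness check passed")
```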

6.2. Security Analysis

As illustrated in Figure 3 and Table 2, Dilithium’s security relies on MSIS/MLWE hardness, which is rooted in the worst-case approximate Shortest Vector Problem (SVP) hardness.

6.2.1. Unforgeability

Lemma 2. 
The signature scheme introduced in this work is resistant to Type I adversaries in the random oracle model, provided that the MSIS problem cannot be solved in polynomial time.
Proof. 
Suppose a Type I adversary, denoted A 1 , successfully compromises the unforgeability of our signature scheme with a non-negligible advantage within a polynomial time frame. It can then be demonstrated that a challenger C , for the MSIS problem, can be constructed, which is consequently able to solve the MSIS problem with a corresponding non-negligible advantage.
Challenger C maintains the following four lists: $L$, $L_{H_2}$, $L_s$, and $L_R$. Here, $L$ tracks public-key queries made by $A_1$, $L_{H_2}$ tracks hash queries to the random oracle $H_2$, $L_s$ tracks signature queries, and $L_R$ tracks proxy re-signature queries.
  • Setup: Challenger C takes the security parameter $n$ as input and computes the system parameter $A$ together with the public-private key pair $pk = (\rho, t)$, $sk = (\rho, K, s_1, s_2)$.
  • Queries:
    (1)
    Query-of-user’s-creation: Challenger C selects an index $j$ uniformly at random, $0 \leq j \leq q$, with the aim of constructing a forged signature for the identity $ID_j$. Initially, challenger C prepares an empty list $L$ with entries of the form $(ID_i, c_{1i}, \rho_{2i}, K_{1i}, d_{i0}, x_i, w_i, y_i)$. When $A_1$ queries identity $ID_i$, C scans the list $L$ to check whether a public key for that identity already exists. If it does, C returns the corresponding key $w_i$ to $A_1$. Otherwise, C randomly chooses $\rho_2$, $K_1$, $d_i \in S_{\gamma_1}^l$, and $x_i \in S_\eta^k$, computes $(d_{i1}, d_{i0}) = \mathrm{Power2Round}_q(d_i, b)$ and $w_i = A d_{i0} + x_i$, and updates the tuple $(ID_i, c_{1i}, \rho_{2i}, K_{1i}, d_{i0}, x_i, w_i, y_i)$. Finally, C returns $w_i$ to $A_1$.
    (2)
    Query-of-$H_2$: Challenger C sets up an empty list $L_{H_2} = (ID_i, Y_{i1}, \mu_i, c_{2i})$. When $A_1$ makes a query for identity $ID_i$, C consults the list $L_{H_2}$; if a corresponding value $c_{2i}$ is found, C returns $c_{2i}$ to $A_1$. If not, C randomly selects $c_2$, updates the tuple $(ID_i, Y_{i1}, \mu_i, c_{2i})$ in $L_{H_2}$, and returns $c_{2i}$ to $A_1$.
    (3)
    Query-of-partial-private-key: When $A_1$ queries the partial private key for $ID_i$, C scans the list $L$. If the list lacks a relevant entry, C executes the user-creation algorithm, updates the tuple $(ID_i, c_{1i}, \rho_{2i}, K_{1i}, d_{i0}, x_i, w_i, y_i)$ in $L$, and returns $d_{i0}$ to $A_1$. If $i = j$, C aborts the query.
    (4)
    Query-of-replace-public-key: When $A_1$ wants to replace the public key $(ID_i, w_i)$ of identity $ID_i$ with $(ID_i, w_i')$, C scans the list $L$. If the entry is found, C replaces $w_i$ with $w_i'$. If not, C performs the user-creation procedure, updates list $L$ with the generated tuple $(ID_i, c_{1i}, \rho_{2i}, K_{1i}, d_{i0}, x_i, w_i, y_i)$, and then replaces $w_i$ with $w_i'$.
    (5)
    Query-of-secret-value: Upon adversary $A_1$’s query of the secret value for identity $ID_i$, challenger C inspects the list $L$. If a matching entry is found, C provides the associated tuple $(\rho_{2i}, K_{1i}, d_{i0}, x_i)$ to $A_1$. If no such entry exists, C executes the user-creation procedure, adds the new tuple $(ID_i, c_{1i}, \rho_{2i}, K_{1i}, d_{i0}, x_i, w_i, y_i)$ to $L$, and then returns the resulting $(\rho_{2i}, K_{1i}, d_{i0}, x_i)$ to $A_1$.
    (6)
    Query-of-proxy-re-key: C initializes an empty list $L_R$ with entries of the form $(ID_i, ID_j, rk_{i \rightarrow j}, rk_{j \rightarrow i})$. When $A_1$ sends $(ID_i, ID_j)$ to C, C checks $L_R$ and sends the corresponding proxy re-key to $A_1$.
    (7)
    Query-of-signature: When $A_1$ queries the signature for identity $ID_i$ and message $M_i$, C checks list $L_s$. If the entry exists, C returns $(z_i, Y_{i1}, c_{2i})$ to $A_1$. Otherwise, C runs the user-creation algorithm, computes $(z_i, Y_{i1})$, updates the tuple, and returns $(z_i, Y_{i1}, c_{2i})$ to $A_1$.
    (8)
    Query-of-re-signature: $A_1$ sends $(ID_i, ID_j, M_j, sig_j = (z_j, c_{2j}))$ to C, asking C to derive a signature on $M_j$ under $ID_j$ from the signature $sig_j = (z_j, c_{2j})$ of $ID_i$. C first verifies $sig_j = (z_j, c_{2j})$. If it is valid, C performs a signature query for $(ID_i, M_j)$ and returns the result to $A_1$; otherwise, the query is aborted.
  • Forgery: Upon termination of the query phase, the adversary $A_1$ outputs a candidate forgery $(ID^*, M^*, sig^*)$. If $ID^* \neq ID_j$, C aborts. If $ID^* = ID_j$, C deems the forgery successful when the following conditions are simultaneously met:
    • Verify$(M^*, sig^*) = 1$;
    • $ID^*$ was never submitted by $A_1$ to the Query-of-partial-private-key;
    • $(ID^*, M^*)$ was never queried in the Query-of-signature.
If adversary $A_1$ produces a valid forgery $sig^*$ under these conditions, the verification relation can be rewritten as
$$\mathrm{HighBits}(A z - c_2 w,\ 2\gamma_2) = A z - c_2 w + u,$$
where $\|u\|_\infty \leq 2\gamma_2 + 1$. Thus, an adversary $A_1$ (including a quantum adversary) capable of forging a signature on a new message can find $(z, c_2, u, M)$ satisfying
$$H_2\big([\,A \mid w \mid I_k\,](z,\ -c_2,\ u)^{T} \,\|\, M \,\|\, ID \,\|\, t\big) = c_2.$$
Invoking the Forking Lemma [32], adversary $A_1$ can obtain a second set of valid signature values $(z^*, c_2^*, u^*, M)$, so that
$$H_2\big([\,A \mid w \mid I_k\,](z,\ -c_2,\ u)^{T} \,\|\, M \,\|\, ID \,\|\, t\big) = H_2\big([\,A \mid w \mid I_k\,](z^*,\ -c_2^*,\ u^*)^{T} \,\|\, M \,\|\, ID \,\|\, t\big).$$
We can therefore rewrite this as
$$[\,A \mid w \mid I_k\,]\big(z - z^*,\ c_2^* - c_2,\ u - u^*\big)^{T} = 0.$$
Let
$$v = \big(z - z^*,\ c_2^* - c_2,\ u - u^*\big)^{T}.$$
Challenger C, through its interaction with adversary $A_1$, thus effectively solves the MSIS hard problem, yielding $v$ as its solution. □
Lemma 3. 
Assuming that the MSIS problem remains computationally difficult within polynomial time in the Random Oracle Model, our proposed signature scheme achieves existential unforgeability against Type II adversaries.
Proof. 
Suppose a Type II adversary $A_2$ breaks the existential unforgeability of our scheme in polynomial time with non-negligible advantage. A challenger C, tasked with simulating an MSIS problem instance, can then be constructed to solve the MSIS problem with a comparable non-negligible advantage by utilizing $A_2$.
Challenger C maintains the following four lists: $L$, $L_{H_2}$, $L_s$, and $L_R$. Here, $L$ tracks public-key queries made by $A_2$, $L_{H_2}$ tracks hash queries to the random oracle $H_2$, $L_s$ tracks signature queries, and $L_R$ tracks proxy re-signature queries. The proof of Lemma 3 follows the same methodology as that of Lemma 2, differing mainly in the capabilities of the adversary: a Type II adversary can issue neither secret-value queries nor public-key-replacement queries for the target identity whose signature is to be forged. Due to space constraints, a detailed elaboration is omitted here. Through a sequence of oracle interactions and an application of the Forking Lemma [32], $A_2$ eventually obtains $(z, c_2, u, M)$ satisfying
$$H_2\big([\,A \mid w \mid I_k\,](z,\ -c_2,\ u)^{T} \,\|\, M \,\|\, ID \,\|\, t\big) = c_2.$$
By virtue of the Forking Lemma [32], adversary $A_2$ can obtain a second valid signature tuple $(z^*, c_2^*, u^*, M)$, so that
$$H_2\big([\,A \mid w \mid I_k\,](z,\ -c_2,\ u)^{T} \,\|\, M \,\|\, ID \,\|\, t\big) = H_2\big([\,A \mid w \mid I_k\,](z^*,\ -c_2^*,\ u^*)^{T} \,\|\, M \,\|\, ID \,\|\, t\big).$$
We can therefore rewrite this as
$$[\,A \mid w \mid I_k\,]\big(z - z^*,\ c_2^* - c_2,\ u - u^*\big)^{T} = 0.$$
Let
$$v = \big(z - z^*,\ c_2^* - c_2,\ u - u^*\big)^{T}.$$
Challenger C, through its interaction with adversary $A_2$, thus effectively solves the MSIS hard problem, yielding $v$ as its solution. □

6.2.2. Security Against Key Recovery Attacks

The setup process of the Key Generation Center (KGC) and the private key generation process of the user within our proposed scheme are analogous to the GEN algorithm employed in Dilithium [30]; their security is predicated on the Module Learning with Errors (MLWE) assumption. This assumption implies that a public key, formed as ( A , t = A s 1 + s 2 ) , is computationally indistinguishable from a pair ( A , t ) where t is a uniformly random element. Consequently, our scheme provides robust protection against key recovery attacks under the assumption of MLWE hardness.
Definition 4. 
Existential Unforgeability under Chosen Message Attack (EUF-CMA) [33]: A signature scheme achieves EUF-CMA security in the Random Oracle Model if, for any polynomial-time adversary A, the probability that A wins the standard EUF-CMA security game is negligible. This security notion ensures that an adversary cannot forge a valid signature for any message for which they have not previously obtained a signature by querying the legitimate signing oracle.
Theorem 1. 
Lemmas 2 and 3 collectively establish the resistance of our scheme to Type I and Type II adversaries, thereby proving its EUF-CMA security.

7. Performance Evaluation

7.1. Computation Cost

In this section, we provide a comparative analysis of the computational costs associated with our proposed scheme and those of existing lattice-based certificateless proxy re-signature schemes [12,27]. The evaluation framework is adapted from the methodology established by Yu et al. [34] and incorporates enhancements from the computational techniques proposed by Xu et al. [35]. Performance benchmarking was performed on a Windows 11 system featuring an Intel(R) Core(TM) i7-8750H CPU operating at 2.20 GHz. All empirical data were acquired via the PQClean and PALISADE libraries, and the execution time for each cryptographic operation was reported as the arithmetic mean over 1000 iterations. We focused our analysis on the most computationally intensive operations: hashing, Gaussian sampling, preimage sampling, matrix-vector multiplication, and matrix-vector addition. Given that Dilithium2 satisfies NIST Security Level 2, offering collision resistance on par with SHA-256 and classical security surpassing AES-128, we configured our scheme with the initial parameters $q = 2^{23} - 2^{13} + 1$ and $n = 256$. Table 3 and Table 4 list the timing results for these fundamental operations within the schemes under comparison.
To provide a clear and quantitative comparison, Table 3 and Table 5 contrast our proposed scheme with prior certificateless lattice-based proxy re-signature schemes, including those of Fan et al. [12] and Zhou et al. [27], across multiple dimensions from algorithm design to performance metrics. For scheme [12], key generation requires one hash operation ($T_h$), three matrix-vector multiplications ($M_v$), one matrix-vector addition ($M_a$), and one preimage sampling operation ($S_M$). Signing requires one Gaussian sampling operation ($S_G$), one hash operation, two matrix-vector multiplications, and one matrix-vector addition. Its verification phase entails two hash operations, two matrix-vector multiplications, and one matrix-vector addition, and re-signing requires four matrix-vector additions. Consequently, the key generation cost of scheme [12] is $S_M + T_h + 3M_v + M_a \approx 274.37\ \mu s$, the signing and verification costs are approximately $S_G + T_h + 2M_v + M_a \approx 44.06\ \mu s$ and $2T_h + 2M_v + M_a \approx 42.05\ \mu s$, respectively, and the re-signing cost is $4M_a \approx 3.56\ \mu s$.
The operations required in the key generation phase of scheme [27] are the same as those of scheme [12]. Scheme [27] requires one Gaussian sampling operation, one hash operation, and four matrix-vector multiplications for signing, while its verification phase involves two hash operations, three matrix-vector multiplications, and one matrix-vector addition. Its re-signature phase requires five matrix-vector multiplications and two matrix-vector additions. Thus, the key generation cost of scheme [27] is $S_M + T_h + 3M_v + M_a \approx 274.37\ \mu s$, the signing cost is $S_G + T_h + 4M_v \approx 80.65\ \mu s$, the verification cost is $2T_h + 3M_v + M_a \approx 60.79\ \mu s$, and the re-signature cost is $5M_v + 2M_a \approx 95.48\ \mu s$.
Our proposed scheme requires two hash operations, four matrix-vector multiplications, and four matrix-vector additions for key generation. The signing phase uses two hash operations, one matrix-vector multiplication, and two matrix-vector additions. Verification involves one hash operation, one matrix-vector multiplication, and one matrix-vector addition, and the re-signature phase requires four matrix-vector additions. Therefore, our scheme’s key generation cost is $2T_h + 4M_v + 4M_a \approx 82.2\ \mu s$, its signing cost is $2T_h + M_v + 2M_a \approx 24.2\ \mu s$, and its verification and re-signature costs are $T_h + M_v + M_a \approx 21.47\ \mu s$ and $4M_a \approx 3.56\ \mu s$, respectively. A consistency check of these totals is sketched below.
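For transparency, the per-operation timings can be recovered from the totals just listed (for example, $4M_a \approx 3.56$, $T_h + M_v + M_a \approx 21.47$, and $2T_h + M_v + 2M_a \approx 24.2$). The Python sketch below (ours) back-solves them and re-derives every total quoted in this section; the individual values it prints (e.g., $T_h \approx 1.84\ \mu s$) are therefore inferred estimates that presumably correspond to Table 4, not independently measured figures.

```python
# Back-solve per-operation costs (microseconds) from the totals reported above.
M_a = 3.56 / 4                       # from the re-signing cost 4*M_a
T_h = (24.2 - 21.47) - M_a           # (2T_h+M_v+2M_a) - (T_h+M_v+M_a) = T_h + M_a
M_v = 21.47 - T_h - M_a              # from T_h + M_v + M_a = 21.47
S_G = 44.06 - (T_h + 2 * M_v + M_a)  # scheme [12] signing total
S_M = 274.37 - (T_h + 3 * M_v + M_a) # scheme [12]/[27] key-generation total

print(f"T_h={T_h:.2f}  M_v={M_v:.2f}  M_a={M_a:.2f}  S_G={S_G:.2f}  S_M={S_M:.2f}")

# Re-derive the remaining totals as a consistency check against the text.
checks = {
    "[12] verification (42.05)":  2 * T_h + 2 * M_v + M_a,
    "[27] signing (80.65)":       S_G + T_h + 4 * M_v,
    "[27] verification (60.79)":  2 * T_h + 3 * M_v + M_a,
    "[27] re-signature (95.48)":  5 * M_v + 2 * M_a,
    "ours key generation (82.2)": 2 * T_h + 4 * M_v + 4 * M_a,
}
for label, value in checks.items():
    print(f"{label}: {value:.2f} us")
```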
A comparative summary of computational expenditure for our scheme and others is presented in Table 5 and Figure 4. As shown in Table 4, the Gaussian and preimage sampling algorithms exhibit low efficiency. Furthermore, the TrapGen algorithm can complicate scheme deployment and is computationally intensive. Existing schemes [12,27] employ the TrapGen, preimage sampling, and Gaussian sampling algorithms, thereby consuming substantial resources at the KGC. By eliminating the requirement for these specific algorithms, our proposal facilitates a more straightforward KGC deployment of lattice-based cryptographic systems and enhances the efficiency of its setup procedure. Consequently, as the foregoing analysis indicates, our scheme offers significant advantages in terms of computational overhead.

7.2. Data Storage Efficiency

In this section, we detail the storage layout of the proposed scheme. Table 6 lists the data storage sizes.
(1)
KGC public key $pk = (\rho, t)$ is stored in a bit-packed representation. Specifically, the vector $t$ consists of $k$ polynomials $t_0, \ldots, t_{k-1}$ with coefficients in $\{0, \ldots, 2^{23} - 1\}$. Each polynomial therefore occupies $256 \cdot 23 / 8 = 736$ bytes, so the KGC public key requires $256/8 + 736k = 32 + 736k$ bytes.
(2)
KGC private key $sk = (\rho, K, s_1, s_2)$ is stored in a bit-packed representation. The polynomials in $s_1$ and $s_2$ have coefficients with infinity norm at most $\eta$; thus, each coefficient lies in $[-\eta, \eta]$. In this scheme, we select $\eta$ from $\{2^2 + 1, \ldots, 2^4 - 1\}$, so each polynomial occupies $256 \cdot 4 / 8 = 128$ bytes. Consequently, the KGC private key requires $64 + 128k + 128l$ bytes.
(3)
User’s full public key $pk_a = w$ is stored in a bit-packed format. Specifically, the vector $w$ consists of $k$ polynomials $w_0, \ldots, w_{k-1}$ with coefficients in $\{0, \ldots, 2^{23} - 1\}$. Each polynomial occupies $256 \cdot 23 / 8 = 736$ bytes, so the user’s full public key requires $736k$ bytes.
(4)
User’s full private key $sk_a = (K_1, \rho_2, d_{a0}, x_a)$ is stored in a bit-packed format. The polynomials in $d_{a0}$ have coefficients in $\{2^2 + 1, \ldots, 2^4 - 1\}$, so each polynomial occupies $256 \cdot 4 / 8 = 128$ bytes. The polynomials in $x_a$ have coefficients bounded in infinity norm by $\eta$, i.e., each coefficient lies in $[-\eta, \eta]$, so each polynomial likewise occupies $256 \cdot 4 / 8 = 128$ bytes. Thus, the user’s full private key requires $64 + 128l + 128k$ bytes.
(5)
User’s authorization information $u$ is stored in a bit-packed format. The polynomials in $u$ have coefficients bounded in infinity norm by $\eta$, i.e., each coefficient lies in $[-\eta, \eta]$, so each polynomial occupies $256 \cdot 4 / 8 = 128$ bytes.
(6)
The signature $\sigma = (z, c_2)$ is stored in a bit-packed format. The coefficients of the polynomials in $z$ lie in $\{-\gamma_1 + 1, \ldots, \gamma_1 - 1\}$, so each polynomial occupies $256 \cdot 20 / 8 = 640$ bytes, and $c_2$ requires 32 bytes. Thus, the signature requires $640l + 32$ bytes; the sketch after this list evaluates these formulas for concrete dimensions.
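The byte counts above are simple affine functions of the module dimensions $(k, l)$. The sketch below (ours) evaluates them for a Dilithium2-like choice $(k, l) = (4, 4)$, which is purely an illustrative assumption; the paper’s own instances are those listed in Table 7, which is not reproduced here.

```python
def storage_bytes(k: int, l: int):
    """Evaluate the bit-packed sizes derived in items (1)-(6) above (in bytes)."""
    return {
        "KGC public key   (32 + 736k)":         32 + 736 * k,
        "KGC private key  (64 + 128k + 128l)":  64 + 128 * k + 128 * l,
        "user public key  (736k)":              736 * k,
        "user private key (64 + 128l + 128k)":  64 + 128 * l + 128 * k,
        "authorization u  (128k)":              128 * k,
        "signature        (640l + 32)":         640 * l + 32,
    }

# (k, l) = (4, 4) mirrors Dilithium2-style dimensions; the scheme's concrete
# instances in Table 7 may use different values.
for name, size in storage_bytes(4, 4).items():
    print(f"{name:40s} {size:5d}")
```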
Table 7 presents the parameters for the specific instances. Following the Dilithium parameters, we adjusted them to better suit our scheme. In terms of actual performance, the three instances require an average of 1.3–1.4 rejection-loop iterations per signature, which is within the acceptable range of 1.0–3.0. Furthermore, all signatures are generated within five iterations, comfortably satisfying the requirement that 99.9% of signing attempts complete in under 10 iterations. It is also evident that even for the third instance, which models the worst-case scenario for NIST Security Level 2, the combined key and signature sizes remain below 5 KB.
Table 8 compares our proposed scheme with one of the most efficient existing schemes based on NTRU lattices [36]. We regard signature length as a critical factor influencing both storage and communication performance. The structural properties of NTRU lattices generally offer storage-size advantages over algebraic lattices. Nevertheless, when compared at the same security level, our scheme achieves smaller signatures, with a 52.9% reduction. Furthermore, owing to the inherent advantages of algebraic lattices, our scheme provides enhanced security and greater ease of implementation.
Furthermore, because the scheme in [36] is based on NTRU lattices, it suffers from several drawbacks [30]. For instance, it necessitates sampling from discrete Gaussian distributions, which typically precludes constant-time implementations. A second disadvantage is that its security relies on the NTRU assumption rather than on problems such as Ring Learning With Errors (RLWE) or Module Learning With Errors (MLWE). This matters because Kirchner and Fouque [37] leveraged the NTRU lattice structure to mount a markedly stronger attack on the large-modulus/small-secret version of the problem. Consequently, our approach demonstrates superior performance in terms of both security and computational overhead.

8. Limitations and Future Work

While the proposed scheme exhibits significant advantages in terms of post-quantum security, its certificateless nature, and computational and storage efficiency over comparable lattice-based schemes, its widespread deployment in resource-constrained environments, such as the Internet of Things (IoT), necessitates acknowledging its inherent limitations.
First, concerning energy consumption, our performance evaluation indicates that the proposed scheme outperforms existing solutions in terms of computational overhead. Nevertheless, the underlying lattice-based cryptographic operations are inherently computationally intensive. Furthermore, the rejection-sampling mechanism integral to the scheme has a non-constant execution time. Although the average number of repetitions is acceptable, the iteration count for signature generation still exhibits minor variability. This non-determinism can cause fluctuations in the energy consumption of a device, posing a considerable challenge for battery-dependent IoT terminals that demand extended operational lifetimes. Second, the scalability of the system presents another key concern. The system model of the scheme depends on a centralized Key Generation Center (KGC) for the issuance of partial private keys. In large-scale IoT ecosystems comprising millions of devices, this centralized KGC is likely to become a performance bottleneck and a single point of failure. Finally, and of critical importance, this paper details the scheme’s design at the algorithmic level. Although the officially specified Dilithium algorithm is purported to resist side-channel attacks, its resilience on physical hardware is not guaranteed. The potential for key information leakage through side-channel attacks, such as power analysis or timing attacks, requires empirical validation once the scheme is implemented in physical devices. Furthermore, this scheme primarily focuses on the authenticity of identities and the unforgeability of messages in its design, without optimizing privacy features such as unlinkability of signatures or signer anonymity. This may increase privacy leakage risks in highly sensitive application scenarios like electronic health record (EHR) sharing.
In conclusion, our future work will focus on four core directions: (1) Application validation: empirically validating the scheme’s deployment feasibility and efficiency advantages on resource-constrained end devices through concrete IoT use cases (e.g., smart health monitoring or environmental sensor networks). (2) Fine-grained hardware-software co-optimization: researching and developing constant-time implementations of our scheme to eliminate timing channels and flatten energy consumption. (3) Enhanced scalability: exploring distributed or hierarchical KGC architectures to alleviate centralization bottlenecks. (4) Strengthened security: formally integrating mature side-channel countermeasures and conditional privacy preservation into the scheme implementation, with formal proofs of security.

9. Conclusions

This paper introduces a certificateless proxy re-signature scheme over algebraic lattices, built on the design principles of the Dilithium signature scheme [30]. Our construction relies on the cryptographic hardness of the Module Small Integer Solution (MSIS) and Module Learning With Errors (MLWE) problems over algebraic lattices. This foundation facilitates significant reductions in both key and signature sizes, thereby enhancing storage efficiency. The security of the proposed scheme is formally proven in the Random Oracle Model. In conclusion, our scheme offers a relevant pathway for designing certificateless proxy re-signatures aligned with emerging post-quantum standards, potentially enabling its deployment in resource-constrained environments such as IoT devices and Medical Internet of Things (IoMT) data-sharing scenarios.

Author Contributions

Conceptualization, Z.W.; Methodology, H.Z.; Investigation, Z.L. and Z.J.; Writing—original draft, G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Beijing Natural Science Foundation (No. L251067) and Fundamental Research Funds for the Central Universities (No. 3282024052, No. 3282024058).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IoT: Internet of Things
DRM: Digital Rights Management
IBPRS: Identity-Based Proxy Re-Signature
PKG: Private Key Generator
CLPRS: Certificateless Proxy Re-Signature
KGC: Key Generation Center
EUF-CMA: Existential Unforgeability under Chosen Message Attack
MSIS: Module Small Integer Solution
MLWE: Module Learning With Errors
EHR: Electronic Health Record
LWE: Learning With Errors
SIS: Small Integer Solution
NTT: Number Theoretic Transform
TU: Terminal User
CS: Cloud Server
HA: Healthcare Authority
CRH: Collision-Resistant Hash
XOF: Extendable-Output Function
ML-DSA: Module-Lattice-Based Digital Signature Algorithm
RLWE: Ring Learning With Errors
SVP: Shortest Vector Problem
ROM: Random Oracle Model
IoMT: Medical Internet of Things

References

  1. Blaze, M.; Bleumer, G.; Strauss, M. Divertible protocols and atomic proxy cryptography. In Advances in Cryptology—EUROCRYPT ’98; Springer: Berlin/Heidelberg, Germany, 1998; pp. 127–144. [Google Scholar] [CrossRef]
  2. Ateniese, G.; Hohenberger, S. Proxy re-signatures: New definitions, algorithms, and applications. In Proceedings of the ACM Conference on Computer and Communications Security, Alexandria, VA, USA, 7–11 November 2005; pp. 310–319. [Google Scholar] [CrossRef]
  3. Shao, J.; Cao, Z.F.; Wang, L.C.; Liang, X. Proxy re-Signature Schemes without Random Oracles. In Proceedings of the International Conference on Cryptology in India, Chennai, India, 9–13 December 2007; pp. 197–209. [Google Scholar]
  4. Wang, Z.W.; Xia, A.D.; He, M.J. ID-Based Proxy re-Signature without Pairing. Telecommun. Syst. 2018, 69, 217–222. [Google Scholar] [CrossRef]
  5. Shao, J.; Wei, G.Y.; Ling, Y.; Xie, M. Unidirectional Identity-Based Proxy re-Signature. In Proceedings of the 2011 IEEE International Conference on Communications, Kyoto, Japan, 5–9 June 2011; pp. 1–5. [Google Scholar]
  6. Guo, D.T.; Wei, P.; Yu, D.; Yang, X. A Certificateless Proxy re-Signature Scheme. In Proceedings of the 2010 3rd International Conference on Computer Science and Information Technology, Chengdu, China, 9–11 July 2010; pp. 157–161. [Google Scholar]
  7. Shor, P.W. Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer. SIAM Rev. 1997, 41, 303–332. [Google Scholar] [CrossRef]
  8. Dutta, P.; Susilo, W.; Duong, D.H.; Baek, J. Identity-based unidirectional proxy re-encryption and re-signature in standard model: Lattice-based constructions. J. Internet Serv. Inf. Secur. 2020, 10, 245–257. [Google Scholar]
  9. Tian, M.M. Identity-based proxy re-signatures from lattices. Inform Process Lett. 2015, 115, 462–467. [Google Scholar] [CrossRef]
  10. Zhang, J.; Bai, W.; Wang, Y. Non-interactive ID-based proxy re-signature scheme for IoT based on mobile edge computing. IEEE Access 2019, 7, 37865–37875. [Google Scholar] [CrossRef]
  11. Lei, Y.; Hu, M.; Gong, B.; Wang, L. A one-way variable threshold proxy re-signature scheme for mobile internet. In Proceedings of the International Conference on Security and Privacy in New Computing Environments, Tianjin, China, 13–14 April 2019; Springer: Cham, Switzerland, 2019; pp. 521–537. [Google Scholar]
  12. Fan, Z.; Ou, H.W.; Pei, T. A certificateless proxy re-signature scheme based on lattice. J. Cryptol. Res. 2020, 7, 15–25. [Google Scholar]
  13. Wu, Y.; Xiong, H.; Jin, C. A multi-use unidirectional certificateless proxy re-signature scheme. Telecommun. Syst. 2020, 73, 455–467. [Google Scholar] [CrossRef]
  14. Zhang, Q.; Sun, Y.; Lu, Y.; Zhang, G. Revocable certificateless proxy re-signature with signature evolution for EHR sharing systems. J. Inf. Secur. Appl. 2024, 87, 103892. [Google Scholar] [CrossRef]
  15. Ajtai, M. Generating hard instances of lattice problems (extended abstract). In Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing, Philadelphia, PA, USA, 22–24 May 1996; pp. 99–108. [Google Scholar]
  16. Gentry, C.; Peikert, C.; Vaikuntanathan, V. Trapdoors for hard lattices and new cryptographic constructions. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, BC, Canada, 17–20 May 2008; pp. 197–206. [Google Scholar]
  17. Lyubashevsky, V. Fiat-Shamir with aborts: Applications to lattice and factoring-based signatures. In Advances in Cryptology—ASIACRYPT 2009, Proceedings of the 15th International Conference on the Theory and Application of Cryptology and Information Security, Tokyo, Japan, 6–10 December 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 598–616. [Google Scholar]
  18. Fiat, A.; Shamir, A. How to prove yourself: Practical solutions to identification and signature problems. In Advances in Cryptology—CRYPTO ’86; Springer: Berlin/Heidelberg, Germany, 1986; pp. 186–194. [Google Scholar]
  19. Abdalla, M.; An, J.H.; Bellare, M.; Namprempre, C. From identification to signatures via the fiat-Shamir transform: Minimizing assumptions for security and forward-security. In Advances in Cryptology—EUROCRYPT 2002, Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, Amsterdam, The Netherlands, 28 April–2 May 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 418–433. [Google Scholar]
  20. Lyubashevsky, V. Lattice signatures without trapdoors. In Advances in Cryptology—EUROCRYPT 2012, Proceedings of the 31st Annual International Conference on the Theory and Applications of Cryptographic Techniques, Cambridge, UK, 15–19 April 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 738–755. [Google Scholar]
  21. Tian, M.M.; Huang, L.S. Certificateless and certificate-based signatures from lattices. Secur. Commun. Netw. 2015, 8, 1575–1586. [Google Scholar] [CrossRef]
  22. Xie, J.; Hu, Y.; Gao, J. Multi-use unidirectional lattice-based proxy re-signatures in standard model. Secur. Commun. Netw. 2016, 9, 5615–5624. [Google Scholar] [CrossRef]
  23. Fan, X.; Liu, F.H. Proxy re-encryption and re-signatures from lattices. In Applied Cryptography and Network Security, Proceedings of the 17th International Conference, ACNS 2019, Bogota, Colombia, 5–7 June 2019; Deng, R., Gauthier-Umaña, V., Ochoa, M., Yung, M., Eds.; Springer: Cham, Switzerland, 2019; pp. 363–382. [Google Scholar]
  24. Jiang, M.; Hou, J.; Guo, Y.; Wang, Y.; Wei, S. An efficient proxy re-signature over lattices. In Frontiers in Cyber Security, Proceedings of the Second International Conference, FCS 2019, Xi’an, China, 15–17 November 2019; Springer: Singapore, 2019; pp. 145–160. [Google Scholar]
  25. Chen, W.; Li, J.; Huang, Z.; Gao, C.; Yiu, S.; Jiang, Z.L. Lattice-based unidirectional infinite-use proxy re-signatures with private re-signature key. J. Comput. Syst. Sci. 2021, 120, 137–148. [Google Scholar] [CrossRef]
  26. Luo, F.; Al-Kuwari, S.; Susilo, W.; Duong, D. Attribute-based proxy re-signature from standard lattices and its applications. Comput. Stand. Interf. 2021, 75, 103499. [Google Scholar] [CrossRef]
  27. Zhou, Y.H.; Dong, S.S.; Yang, Y.G. A unidirectional certificateless proxy re-signature scheme based on lattice. Trans. Emerg. Telecommun. Technol. 2022, 33, e4412. [Google Scholar] [CrossRef]
  28. Güneysu, T.; Lyubashevsky, V.; Pöppelmann, T. Practical lattice-based cryptography: A signature scheme for embedded systems. In Cryptographic Hardware and Embedded Systems—CHES 2012, Proceedings of the 14th International Workshop, Leuven, Belgium, 9–12 September 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 530–547. [Google Scholar]
  29. Bai, S.; Galbraith, S. An improved compression technique for signatures based on learning with errors. In Topics in Cryptology–CT-RSA 2014, Proceedings of the Cryptographer’s Track at the RSA Conference 2014, San Francisco, CA, USA, 25–28 February 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 28–47. [Google Scholar]
  30. Ducas, L.; Kiltz, E.; Lepoint, T.; Lyubashevsky, V.; Schwabe, P.; Seiler, G.; Stehlé, D. CRYSTALS-Dilithium: A lattice-based digital signature scheme. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2018, 2018, 238–268. [Google Scholar]
  31. National Institute of Standards and Technology (NIST). Module-Lattice-Based Digital Signature Standard (FIPS Publication 204); U.S. Department of Commerce: Washington, DC, USA, 2024. Available online: https://csrc.nist.gov/pubs/fips/204/final (accessed on 1 August 2025).
  32. Pointcheval, D.; Stern, J. Security proofs for signature schemes. In Advances in Cryptology—EUROCRYPT ’96, Proceedings of the International Conference on the Theory and Application of Cryptographic Techniques, Saragossa, Spain, 12–16 May 1996; Springer: Berlin/Heidelberg, Germany, 1996; pp. 387–398. [Google Scholar]
  33. Chen, J. Research on Digital Signature Without Trapdoor on Lattice; Xidian University: Xi’an, China, 2020. [Google Scholar]
  34. Yu, H.; Shi, J. Certificateless multi-source signcryption with lattice. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 10157–10166. [Google Scholar] [CrossRef]
  35. Xu, S.; Yu, S.; Yue, Z.Y.; Liu, Y. CLLS: Efficient certificateless lattice-based signature in VANETs. Comput. Netw. 2024, 255, 110858. [Google Scholar] [CrossRef]
  36. Xu, M.; Li, C. A NTRU-based certificateless aggregate signature scheme for underwater acoustic communication. IEEE Int. Things J. 2024, 11, 10031–10039. [Google Scholar] [CrossRef]
  37. Kirchner, P.; Fouque, P. Revisiting lattice attacks on overstretched NTRU parameters. In Advances in Cryptology—EUROCRYPT 2017, Proceedings of the 36th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Paris, France, 30 April–4 May 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 3–26. [Google Scholar]
Figure 1. System model of the proposed certificateless proxy re-signature scheme.
Figure 2. Process flow of TU performing the original signature and CS verifying the signature.
Figure 3. Mapping attack paths to lattice problems.
Figure 4. Computation cost [12,27].
Table 1. Symbol description.
Notation | Description
q | Prime number
K | A private random seed
k, l | Integers, the dimensions of A
η | Private key coefficient range
S_η^l | The set of length-l polynomial vectors with coefficients in [−η, η]
τ | The number of ±1 coefficients in the polynomial c
β | β = τ · η
γ_1 | Coefficient range of y
γ_2 | Low-order rounding range
Z | The ring of integers
R | Univariate polynomial ring over Z
B_τ | The set of polynomials in R_q with exactly τ coefficients equal to ±1 and the rest 0
brv | Bit reversal
Bold letters, e.g., A | Vectors
M | Message
ID_i | Entity identity
pk, sk | Public/private key pair of the KGC
pk_i, sk_i | Public/private key pair of entity ID_i
σ | Signature
Table 2. Overview of the security attributes.
Adversary Type | Capabilities | Assumption | Attack Surface Mitigated
Type I | Replace public keys | MSIS Hardness | Malicious KGC + Key Replacement
Type II | Access master key | MSIS Hardness | Honest-but-Curious KGC
Key Recovery | Eavesdrop on communications | MLWE Hardness | Quantum Key Search Attacks
Table 3. Comparison with existing certificateless lattice-based schemes.
Feature | Fan [12] | Zhou [27] | Our Scheme
Special Features | TrapGen-based | Collusion-resistant | Dilithium-based
Core Primitives | TrapGen, Gaussian Sampling, Preimage Sampling | TrapGen, Gaussian Sampling, Preimage Sampling | Rejection Sampling
Hardness Assumptions | SIS, ISIS | SIS, ISIS | MLWE, MSIS
Security Proof | EUF-CMA in Random Oracle Model | Collusion Attack Resistance | EUF-CMA in Random Oracle Model
Table 4. Notation and function execution times.
Notation | Description | Function Execution Time
T_h | One-way hash | 1.84 μs
S_M | Preimage sampling | 215.42 μs
S_G | Gaussian sampling | 3.85 μs
M_v | Matrix-vector multiplication | 18.74 μs
M_a | Matrix-vector addition | 0.89 μs
Table 5. Computational expenditure comparison.
Operation | Fan [12] | Zhou [27] | Our Scheme
KeyGen | 274.37 μs | 274.37 μs | 82.2 μs
Sign | 44.06 μs | 80.65 μs | 24.2 μs
Verify | 42.05 μs | 60.79 μs | 21.47 μs
Re-Sign | 3.56 μs | 95.48 μs | 3.56 μs
Table 6. Notation and storage size.
Notation | Description | Storage Size (Bytes)
pk | KGC public key (ρ, t) | 32 + 736k
sk | KGC private key (ρ, K, s_1, s_2) | 64 + 128k + 128l
pk_a | User's full public key (w) | 736k
sk_a | User's full private key (K_1, ρ_2, d_a0, x_a) | 64 + 128l + 128k
u | User's authorization information (u) | 128k
σ | Signature (z, c_2) | 640l + 32
Table 7. Parameters of specific instances.
Parameter | Instance 1 | Instance 2 | Instance 3
(k, l) | (3, 2) | (4, 4) | (5, 6)
η | 2 | 2 | 4
τ | 60 | 60 | 60
β | 120 | 120 | 240
KGC pk (bytes) | 2240 | 2976 | 3712
KGC sk (bytes) | 704 | 1088 | 1472
User pk (bytes) | 2208 | 2944 | 3680
User sk (bytes) | 704 | 1088 | 1472
u (bytes) | 384 | 512 | 640
Sig (bytes) | 1312 | 2592 | 3872
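As a quick consistency check (ours, not part of the paper's artifact), the sizes reported in Table 7 can be reproduced directly from the per-object formulas of Table 6; the short Python sketch below evaluates those formulas for the three (k, l) instances.

```python
# Reproduce the storage sizes of Table 7 from the formulas in Table 6.
# The formulas are taken verbatim from Table 6; only this script is ours.

def sizes(k, l):
    return {
        "KGC pk": 32 + 736 * k,
        "KGC sk": 64 + 128 * k + 128 * l,
        "User pk": 736 * k,
        "User sk": 64 + 128 * l + 128 * k,
        "u": 128 * k,
        "Sig": 640 * l + 32,
    }

for name, (k, l) in [("Instance 1", (3, 2)),
                     ("Instance 2", (4, 4)),
                     ("Instance 3", (5, 6))]:
    print(name, sizes(k, l))
# Instance 1 {'KGC pk': 2240, 'KGC sk': 704, 'User pk': 2208, 'User sk': 704, 'u': 384, 'Sig': 1312}
# Instance 2 {'KGC pk': 2976, 'KGC sk': 1088, 'User pk': 2944, 'User sk': 1088, 'u': 512, 'Sig': 2592}
# Instance 3 {'KGC pk': 3712, 'KGC sk': 1472, 'User pk': 3680, 'User sk': 1472, 'u': 640, 'Sig': 3872}
```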
Table 8. Comparison of required storage sizes.
Scheme | Xu [36] | Our Scheme
Lattice | NTRU Lattice | Algebraic Lattice
Required storage size of Sig | 2787 bytes | 1312 bytes
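The relative storage saving implied by Table 8 follows directly from the two signature sizes; a one-line check (ours) is shown below.

```python
# Signature sizes taken from Table 8 (bytes).
ntru_sig, ours_sig = 2787, 1312
reduction = 1 - ours_sig / ntru_sig
print(f"{reduction:.1%}")  # 52.9%
```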
