Article

Identity-Based Provable Data Possession with Designated Verifier from Lattices for Cloud Computing

Beijing Electronic Science and Technology Institute, Beijing 100070, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2025, 27(7), 753; https://doi.org/10.3390/e27070753
Submission received: 18 April 2025 / Revised: 6 July 2025 / Accepted: 13 July 2025 / Published: 15 July 2025
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

Provable data possession (PDP) is a technique that enables the verification of data integrity in cloud storage without the need to download the data. PDP schemes are generally categorized into public and private verification. Public verification allows third parties to assess the integrity of outsourced data, offering good openness and flexibility, but it may lead to privacy leakage and security risks. In contrast, private verification restricts the auditing capability to the data owner, providing better privacy protection but often resulting in higher verification costs and operational complexity due to limited local resources. Moreover, most existing PDP schemes are based on classical number-theoretic assumptions, making them vulnerable to quantum attacks. To address these challenges, this paper proposes an identity-based PDP scheme with a designated verifier over lattices, built from a leveled identity-based fully homomorphic signature (IB-FHS) scheme. We provide a formal security proof of the proposed scheme under the small-integer solution (SIS) and learning with errors (LWE) assumptions in the random oracle model. Theoretical analysis confirms that the scheme achieves its security guarantees while maintaining practical feasibility. Furthermore, simulation-based experiments show that for a 1 MB file and a lattice dimension of n = 128, the computation times of the core algorithms TagGen, GenProof, and CheckProof are approximately 20.76 s, 13.75 s, and 3.33 s, respectively. Compared to existing lattice-based PDP schemes, the proposed scheme introduces additional overhead due to the designated verifier mechanism; however, it achieves a well-balanced optimization among functionality, security, and efficiency.

1. Introduction

As information technology continues to advance rapidly, cloud computing has become a key component of modern infrastructure, providing powerful support for data storage and processing. As one of the core services of cloud computing, cloud storage enables users to store their local data on cloud servers, alleviating the storage burden on physical devices while enhancing the convenience and flexibility of data access [1,2]. However, as cloud storage becomes more widespread, outsourcing data to cloud service providers (CSPs) introduces security and integrity challenges.
One critical issue is ensuring the integrity of cloud-stored data. As cloud storage removes direct control over data, it exposes the data to potential attacks such as unauthorized tampering, malicious deletion, or even accidental damage due to hardware or software failures. Although users depend on CSPs to preserve data integrity, the cloud’s inherent openness and sharing raise significant issues regarding data security, privacy, and reliability. Whether due to malicious actions by CSPs or external attackers, there is a pressing need for robust mechanisms to ensure that outsourced data are properly stored and remain intact.
Real-world incidents further underscore this need. For example, during a six-month high-energy physics operation involving approximately 97 petabytes of data, CERN reported that about 128 megabytes had been irreversibly corrupted, raising concerns about the impact of even minor data degradation on scientific research outcomes [3]. In another case, Jeff Bonwick, creator of the ZFS file system, revealed that Greenplum—a high-performance database—experiences silent data corruption every 10 to 20 minutes without triggering system alerts. These cases illustrate that even in rigorously managed scientific and enterprise environments, data corruption can occur undetected, undermining both data reliability and decision making [3].
Traditional data integrity verification solutions, such as downloading the entire dataset for inspection, have become impractical due to the large data volumes involved and the high communication and computational costs [4]. To address this challenge, researchers have proposed various alternative approaches.
Among them, blockchain-based storage frameworks [5,6] leverage distributed ledgers and consensus protocols to achieve verifiability and tamper resistance. These schemes often utilize Merkle hash trees and smart contracts for auditability and state tracking. However, they typically suffer from high communication costs, significant computational overhead, and consensus-related delays, which limit their applicability in resource-constrained environments. On the other hand, coding techniques in edge computing [3,5] enhance data availability through redundant encoding and erasure coding, providing strong resilience against data loss or corruption. Nonetheless, these methods primarily focus on fault tolerance and recovery, and they lack fine-grained audit control or designated verifier mechanisms.
Intuitively, data owners can perform integrity verification tasks themselves, but this requires them to retrieve and check the data’s integrity individually, resulting in a significant communication and computational burden. To alleviate this burden, public auditing allows data users to delegate these tasks to a third-party auditor (TPA). Auditors can regularly check the integrity of outsourced data on behalf of the users. If the audit fails, the auditor quickly informs the user, indicating that the data may have been tampered with.
Although public auditing offers significant benefits, there are two main obstacles to its widespread application in cloud computing. On the one hand, many existing public auditing schemes [7,8,9,10,11] are based on traditional cryptographic hardness assumptions, which are vulnerable to the emerging threats posed by quantum computing. With the rise of quantum computing, developing post-quantum secure auditing schemes becomes increasingly vital. On the other hand, current public auditing approaches depend on third-party auditors to verify data integrity. These schemes often demand significant computational resources from the auditors, as they involve intensive verification operations such as bilinear pairings and modular exponentiations, placing a heavy computational burden on auditors and creating a performance bottleneck.
In addition to these challenges, the deployment of public auditing faces another security issue: public auditing may expose user privacy. Unauthorized third parties should not have access to sensitive user information. To protect data users’ privacy, users can designate a specific verifier to carry out data integrity checks.
In addition to existing security threats, the rapid development of quantum computing poses a fundamental challenge to data integrity verification mechanisms in cloud environments. Most widely adopted integrity verification methods are based on classical number-theoretic problems, such as integer factorization and discrete logarithms. However, these foundations are no longer secure in the face of quantum attacks—particularly because Shor’s algorithm [12] can efficiently solve these problems in polynomial time, rendering many cryptographic schemes that rely on them vulnerable. Once quantum computing becomes practical, existing integrity verification mechanisms will no longer provide long-term security guarantees, thereby seriously undermining the reliability and trustworthiness of cloud storage systems.
Consequently, designing data integrity verification mechanisms that are not only resistant to quantum attacks but also efficient in terms of computational, storage, communication, and transmission overhead has become a critical research direction for securing the next generation of cloud storage infrastructures.

1.1. Related Work

PDP is a practical method for verifying the integrity of outsourced data; it was first introduced by Ateniese et al. [13] based on the integer factorization assumption. PDP generates verifiable metadata during the data processing phase, which are then outsourced to the cloud service provider (CSP). Afterward, the verifier checks the data's integrity by randomly sampling blocks. For instance, Ateniese and colleagues demonstrated that for a file containing 10,000 blocks of which 1% are corrupted, the verifier can achieve a 99% detection rate by requesting proofs for just 460 randomly selected blocks [14].
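This sampling bound follows from elementary probability: if $t$ of $n$ blocks are corrupted and $c$ blocks are challenged uniformly without replacement, the detection probability is $P = 1 - \binom{n-t}{c}/\binom{n}{c} \approx 1 - (1 - t/n)^c$. A minimal sketch reproducing the 10,000-block example (the 1% corruption rate is the standard assumption behind the 460-block figure):

```python
from math import comb

def detection_prob(n_blocks: int, corrupted: int, challenged: int) -> float:
    """P(at least one corrupted block is sampled) when `challenged`
    blocks are drawn uniformly without replacement."""
    return 1 - comb(n_blocks - corrupted, challenged) / comb(n_blocks, challenged)

# 10,000 blocks, 1% corrupted, 460 challenged -> roughly 99% detection.
print(f"{detection_prob(10_000, 100, 460):.4f}")
```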
Subsequently, a variety of PDP mechanisms have been developed to suit the diverse requirements of various cloud deployment environments, such as dynamic scenarios, batch verification, and privacy protection. For instance, Yuan et al. [15] developed a new dynamic PDP scheme with multiple replicas to verify the integrity of files stored by users across multiple CSPs. He et al. [7] introduced a PDP scheme for shared data that enables completely dynamic updates and ensures that verifier storage costs remain constant. Zhang et al. [16] tackled enterprise private cloud data sharing challenges by employing attribute-based signatures to design a revocable integrity auditing scheme under the SIS problem. Focusing on the key management issue, Zhang et al. [17] developed an identity-based public auditing scheme with key-exposure resistance based on lattices, which updates the user’s private key using lattice basis delegation while maintaining a constant key size. Wang et al. [18] designed an identity-based data integrity auditing scheme from the lattice method, ensuring forward security. In short, identity-based PDP schemes can simplify key management, thereby reducing the burden on both CSPs and data owners. Sasikala and Bindu [19] introduced a certificateless batch verification scheme over lattices designed to support integrity checking of multiple cloud-stored files. To address the inherent private key escrow problem and lower the overhead of managing public key certificates, Zhou et al. [8] developed a certificate-based PDP scheme under the square computational Diffie–Hellman (CDH) assumption. Zhang et al. [20] developed a revocable certificateless PDP scheme that preserves user identity privacy while also eliminating the key escrow and certificate management problems found in traditional approaches, supporting efficient user revocation.
The abovementioned schemes, referred to as public PDP schemes [21], allow anyone to verify data integrity without downloading the full data. In these schemes, the proofs produced by cloud servers can be validated by anyone.
In contrast to public PDP schemes, Shen and Tzeng [22] introduced a delegatable PDP scheme in 2011, enabling the data owner to create a delegation key for the designated verifier, which is then stored on the cloud server for later verification. The following year, Wang [23] introduced the concept of proxy PDP and provided a concrete construction, allowing users to delegate auditing authority to a proxy through a delegation warrant. In fact, both of the previously discussed methods fall under the category of PDP with designated verifier (DV-PDP) schemes. In the DV-PDP scheme, the user can designate a specific verifier (proxy) to conduct the outsourced data’s integrity verification on their behalf. However, both DV-PDP schemes [22,23] were demonstrated to be insecure by Ren et al. [24], as a dishonest cloud storage server could obtain the key information associated with the delegated verifier. Unfortunately, Zhang et al. [25] demonstrated that Ren et al.’s [24] scheme is insecure and vulnerable to forgery attacks. In 2017, Wu et al. [21] introduced the earliest non-repudiable DV-PDP scheme aimed at addressing the non-repudiation issue and reducing possible conflicts between users and CSPs. Zhang et al. [26] proposed a lattice-based designated verifier auditing scheme specifically designed for cloud-assisted wireless body area networks, which ensures that only the designated verifier is capable of verifying the integrity of outsourced medical data stored on the associated cloud server. To address the vulnerability of DV-PDP to replay attacks launched by malicious cloud servers, a remote data possession verification scheme with a designated verifier was proposed by Yan et al. [27] under the CDH assumption, ensuring that only the specified verifier can validate data integrity, while others are unable to do so. However, this approach depends on public key infrastructure technology and fails to tackle data privacy concerns. To address these limitations, Bian et al. [28] designed an identity-based remote data possession verification scheme based on the discrete logarithm and CDH assumptions, allowing data owners to designate a specific verifier.

1.2. Contribution

To address the challenge of constructing a post-quantum PDP scheme that supports both identity-based cryptosystem and designated verifier auditing, this paper proposes a lattice-based PDP framework tailored for secure and accountable cloud storage in the post-quantum era. The key contributions of this paper are summarized as follows:
  • This paper proposes a novel identity-based PDP scheme that employs a leveled IB-FHS scheme to eliminate the complexity of traditional public key infrastructures, thereby simplifying key management.
  • The proposed scheme introduces a designated verifier mechanism, ensuring that only authorized auditors can perform legitimate data integrity checks. This effectively mitigates the privacy risks associated with public verifiability and enhances the controllability and accountability of the auditing process.
  • The scheme is proven secure under the SIS and LWE assumptions in the random oracle model, ensuring its resistance against quantum attacks.
  • This paper conducts a comprehensive evaluation of the proposed scheme through theoretical analysis and simulation-based experiments, covering communication overhead, storage requirements, and computational cost. The experimental results under representative parameter settings demonstrate that the core algorithms maintain reasonable computation times. Compared to existing PDP schemes, although the introduction of a designated verifier mechanism leads to a certain increase in computational overhead, the proposed scheme achieves a well-balanced tradeoff between functionality and efficiency.
Overall, our work aims to construct an identity-based PDP with designated verifier scheme over lattices, offering quantum-resistant security and flexible auditing control. It is particularly suitable for cloud auditing scenarios that require authorization verification in a post-quantum setting.

1.3. Organization

The rest of this paper is organized as follows. Section 2 introduces the necessary preliminaries. Section 3 defines our PDP scheme and its security model. Section 4 presents the detailed construction of the proposed PDP scheme. Section 5 provides formal security analysis covering unforgeability, indistinguishability, and robustness. Section 6 offers a performance evaluation in terms of computation, storage, and communication. Finally, Section 7 provides the conclusion.

2. Preliminaries

2.1. Notation

Throughout this paper, we use the notation summarized in Table 1.

2.2. Lattice

A lattice $\Lambda$ in $\mathbb{R}^m$ is a discrete additive subgroup generated by integer linear combinations of basis vectors. Given a matrix $\mathbf{A} = [\mathbf{a}_1 | \cdots | \mathbf{a}_n] \in \mathbb{R}^{m \times n}$ with linearly independent column vectors $\mathbf{a}_i$, the lattice is defined as
$$\Lambda = \Big\{ \sum_{i=1}^{n} x_i \mathbf{a}_i \;\Big|\; x_i \in \mathbb{Z} \Big\} \subseteq \mathbb{R}^m.$$
The value $n$ is called the rank of $\Lambda$. If $n = m$, then $\Lambda$ is said to be full rank.
The dual lattice $\Lambda^*$ consists of all vectors $\mathbf{y} \in \mathbb{R}^m$ such that $\langle \mathbf{x}, \mathbf{y} \rangle \in \mathbb{Z}$ for all $\mathbf{x} \in \Lambda$. When $\mathbf{B}$ is a basis of $\Lambda$, a basis of $\Lambda^*$ is given by $\mathbf{B}^* = \mathbf{B}(\mathbf{B}^T \mathbf{B})^{-1}$.
In this paper, we focus on $q$-ary lattices, i.e., full-rank lattices that contain $q\mathbb{Z}^m$.
Definition 1
($q$-ary lattice). Given $\mathbf{B} \in \mathbb{Z}_q^{n \times m}$ and $\mathbf{k} \in \mathbb{Z}_q^n$, the following $q$-ary lattices are defined:
$$\Lambda_q(\mathbf{B}) := \{ \mathbf{a} \in \mathbb{Z}^m \mid \exists\, \mathbf{e} \in \mathbb{Z}_q^n \text{ such that } \mathbf{B}^T \mathbf{e} = \mathbf{a} \bmod q \},$$
$$\Lambda_q^{\perp}(\mathbf{B}) := \{ \mathbf{a} \in \mathbb{Z}^m \mid \mathbf{B}\mathbf{a} = \mathbf{0} \bmod q \},$$
$$\Lambda_q^{\mathbf{k}}(\mathbf{B}) := \{ \mathbf{a} \in \mathbb{Z}^m \mid \mathbf{B}\mathbf{a} = \mathbf{k} \bmod q \},$$
where, if $\mathbf{y} \in \Lambda_q^{\mathbf{k}}(\mathbf{B})$, then $\Lambda_q^{\mathbf{k}}(\mathbf{B}) = \Lambda_q^{\perp}(\mathbf{B}) + \mathbf{y}$.
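As a concrete illustration, the following toy numpy sketch (with parameters far too small for any security) checks membership in the three $q$-ary lattices above; all values are arbitrary examples:

```python
import numpy as np

q, n, m = 17, 4, 8
rng = np.random.default_rng(0)
B = rng.integers(0, q, size=(n, m))

# Lambda_q(B): points of the form B^T e (mod q), lifted to Z^m.
e = rng.integers(0, q, size=n)
a = (B.T @ e) % q                      # a is in Lambda_q(B) by construction

# Lambda_q^perp(B): the kernel lattice. q*e_1 is always a (trivial) member,
# since q*Z^m is contained in every q-ary lattice.
a_perp = np.zeros(m, dtype=int); a_perp[0] = q
print((B @ a_perp) % q)                # [0 0 0 0]

# Lambda_q^k(B) is the coset Lambda_q^perp(B) + y for any y with B y = k (mod q).
k = (B @ a) % q                        # so `a` itself is one such y
print(np.array_equal((B @ (a + a_perp)) % q, k))  # True: coset shifted by a
```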
Definition 2
(Gaussian distribution on a lattice). Let $\mathbf{u} \in \mathbb{R}^m$ and $\varphi \in \mathbb{R}^{+}$, and define $\rho_{\varphi,\mathbf{u}}(\mathbf{t}) = \exp\big(-\pi \|\mathbf{t} - \mathbf{u}\|^2 / \varphi^2\big)$ and $\rho_{\varphi,\mathbf{u}}(\Lambda) = \sum_{\mathbf{t} \in \Lambda} \rho_{\varphi,\mathbf{u}}(\mathbf{t})$. The discrete Gaussian distribution on a lattice $\Lambda$, centered at $\mathbf{u}$ with parameter $\varphi$, is then given by $D_{\Lambda,\varphi,\mathbf{u}}(\mathbf{y}) = \rho_{\varphi,\mathbf{u}}(\mathbf{y}) / \rho_{\varphi,\mathbf{u}}(\Lambda)$ for $\mathbf{y} \in \Lambda$.
Lemma 1
(Leftover hash lemma [29]). Let $q > 2$ and $m > (n+1)\log q + \omega(\log n)$, and let $\mathbf{Q} \in \{-1,1\}^{m \times k}$, $\mathbf{J} \in \mathbb{Z}_q^{n \times m}$, and $\mathbf{B} \in \mathbb{Z}_q^{n \times k}$ be randomly selected matrices, where $k = k(n)$ is polynomial in $n$. Then the distributions $(\mathbf{J}, \mathbf{J}\mathbf{Q}, \mathbf{Q}^T \mathbf{d})$ and $(\mathbf{J}, \mathbf{B}, \mathbf{Q}^T \mathbf{d})$ are statistically indistinguishable for any vector $\mathbf{d} \in \mathbb{Z}^m$.

2.3. Trapdoor and Sample Algorithms from Lattices

We now present the algorithms used for trapdoor generation and lattice-based sampling. In our construction, the public key and master secret key are generated using the efficient randomized algorithm TrapGen. To derive private keys for the data owner and the designated verifier, we employ the SampleBasisLeft and NewBasisDel algorithms, respectively. For generating tags on data blocks, we utilize the SampleLeft algorithm.
Lemma 2
($\text{TrapGen}(1^n, 1^m, q)$ [29]). Given $n \geq 1$, $q \geq 2$, and $m = O(n \log q)$, we can obtain $(\mathbf{U}, \mathbf{P}) \leftarrow \text{TrapGen}(1^n, 1^m, q)$, where the distribution of $\mathbf{U} \in \mathbb{Z}_q^{n \times m}$ is statistically close (within negligible distance in $n$) to uniform over $\mathbb{Z}_q^{n \times m}$, and $\mathbf{P}$ is a trapdoor (short basis) for $\Lambda_q^{\perp}(\mathbf{U})$.
Definition 3
(Gadget matrix). Given $m = n \lceil \log q \rceil$, the gadget matrix $\mathbf{G}$ is defined as $\mathbf{G} = \mathbf{g} \otimes \mathbf{I}_n \in \mathbb{Z}_q^{n \times m}$, where $\mathbf{g} = (1, 2, 4, \ldots, 2^{\lceil \log q \rceil - 1}) \in \mathbb{Z}_q^{\lceil \log q \rceil}$. The inverse function $\mathbf{G}^{-1} : \mathbb{Z}_q^{n \times m} \rightarrow \{0,1\}^{m \times m}$ expands each entry $x \in \mathbb{Z}_q$ of a matrix $\mathbf{A}$ into a column of its binary representation. For any $\mathbf{A} \in \mathbb{Z}_q^{n \times m}$, $\mathbf{G} \cdot \mathbf{G}^{-1}(\mathbf{A}) = \mathbf{A}$.
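The identity $\mathbf{G} \cdot \mathbf{G}^{-1}(\mathbf{A}) = \mathbf{A}$ is what later lets the scheme turn a large coefficient $W_j$ into the binary matrix $\mathbf{L}_{W_j} = \mathbf{G}^{-1}(W_j\mathbf{G})$ without norm blow-up. A minimal numpy sketch follows; the Kronecker ordering (i.e., which column permutation is used) is a convention, and we use $\mathbf{I}_n \otimes \mathbf{g}$ here:

```python
import numpy as np

q, n = 16, 2                       # toy modulus and dimension (assumptions)
k = int(np.ceil(np.log2(q)))       # bits per Z_q entry
m = n * k

g = 2 ** np.arange(k)              # (1, 2, 4, ..., 2^{k-1})
G = np.kron(np.eye(n, dtype=int), g)   # gadget matrix, shape n x m

def G_inv(A: np.ndarray) -> np.ndarray:
    """Bit-decompose each entry of A (shape n x w) into k bits so that
    G @ G_inv(A) == A (mod q); output is a {0,1} matrix of shape n*k x w."""
    out = np.zeros((A.shape[0] * k, A.shape[1]), dtype=int)
    for i in range(A.shape[0]):
        for b in range(k):
            out[i * k + b] = (A[i] >> b) & 1
    return out

A = np.random.default_rng(1).integers(0, q, size=(n, m))
assert np.array_equal((G @ G_inv(A)) % q, A)
print("G @ G^{-1}(A) == A (mod q)")
```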
Lemma 3
($\text{SampleLeft}(\mathbf{J}, \mathbf{Q}, \mathbf{P}, \mathbf{k}, s)$ [30]). Let $q \geq 2$, $m > n$, and $s \geq \|\widetilde{\mathbf{P}}\| \cdot \omega(\sqrt{\log(m + m_1)})$, with matrices $\mathbf{J} \in \mathbb{Z}_q^{n \times m}$ and $\mathbf{Q} \in \mathbb{Z}_q^{n \times m_1}$, $\mathbf{J}$'s trapdoor $\mathbf{P}$, and a vector $\mathbf{k} \in \mathbb{Z}_q^n$. Then $\mathbf{e} \leftarrow \text{SampleLeft}(\mathbf{J}, \mathbf{Q}, \mathbf{P}, \mathbf{k}, s)$ yields $\mathbf{e} \in \mathbb{Z}^{m + m_1}$ distributed statistically close to $D_{\Lambda_q^{\mathbf{k}}([\mathbf{J}|\mathbf{Q}]), s}$. Accordingly, the delegated algorithm $\text{SampleBasisLeft}(\mathbf{J}, \mathbf{Q}, \mathbf{P}, s)$ outputs a basis $\mathbf{P}_{[\mathbf{J}|\mathbf{Q}]}$ of $\Lambda_q^{\perp}([\mathbf{J}|\mathbf{Q}])$.
Lemma 4
($\text{SampleRight}(\mathbf{J}, \mathbf{Q}, \mathbf{H}, \mathbf{P}_{\mathbf{Q}}, \mathbf{k}, s)$ [30]). For $q > 2$, $m > n$, $\mathbf{J} \in \mathbb{Z}_q^{n \times k}$, $\mathbf{Q} \in \mathbb{Z}_q^{n \times m}$, $\mathbf{H} \in \mathbb{Z}^{k \times m}$, $\mathbf{Q}$'s trapdoor $\mathbf{P}_{\mathbf{Q}}$, and $\mathbf{k} \in \mathbb{Z}_q^n$, we can get $\mathbf{c} \leftarrow \text{SampleRight}(\mathbf{J}, \mathbf{Q}, \mathbf{H}, \mathbf{P}_{\mathbf{Q}}, \mathbf{k}, s)$ with $\mathbf{c} \in \mathbb{Z}^{m + k}$ statistically close to the distribution $D_{\Lambda_q^{\mathbf{k}}([\mathbf{J} \mid \mathbf{J}\mathbf{H} + \mathbf{Q}]), s}$, where $s > \|\widetilde{\mathbf{P}}_{\mathbf{Q}}\| \cdot s_{\mathbf{H}} \cdot \omega(\sqrt{\log m})$ and $s_{\mathbf{H}} = \sup_{\|\mathbf{y}\| = 1} \|\mathbf{H}\mathbf{y}\|$. Furthermore, the delegated algorithm $\text{SampleBasisRight}(\mathbf{J}, \mathbf{Q}, \mathbf{H}, \mathbf{P}_{\mathbf{Q}}, s)$ outputs a basis $\mathbf{P}_{[\mathbf{J} \mid \mathbf{J}\mathbf{H} + \mathbf{Q}]}$ of $\Lambda_q^{\perp}([\mathbf{J} \mid \mathbf{J}\mathbf{H} + \mathbf{Q}])$.
Lemma 5
($\text{NewBasisDel}(\mathbf{J}, \mathbf{H}, \mathbf{P}, s)$ [31]). Given $q > 2$, $m > 2n \log q$, $\mathbf{J} \in \mathbb{Z}_q^{n \times m}$, $\mathbf{H} \in D_{m \times m}$, and $\mathbf{J}$'s trapdoor $\mathbf{P}$, we get $\mathbf{P}_{\mathbf{Q}} \leftarrow \text{NewBasisDel}(\mathbf{J}, \mathbf{H}, \mathbf{P}, s)$, where $\mathbf{P}_{\mathbf{Q}}$ is a basis of $\Lambda_q^{\perp}(\mathbf{Q} = \mathbf{J}\mathbf{H}^{-1})$, $s > \|\widetilde{\mathbf{P}}\| \cdot s_{\mathbf{H}} \sqrt{m} \cdot \omega(\log^{3/2} m)$, and $s_{\mathbf{H}} = \sqrt{n \log q} \cdot \omega(\sqrt{\log m})$. Here, $D_{m \times m}$ denotes the distribution of $\mathbb{Z}_q$-invertible $m \times m$ matrices whose columns follow the discrete Gaussian distribution $D_{\mathbb{Z}^m, s_{\mathbf{H}}}$.
Lemma 6
($\text{SampleRwithBasis}(\mathbf{J})$ [30]). Given $q > 2$, $m > 2n \log q$, and $\mathbf{J} \in \mathbb{Z}_q^{n \times m}$, $\text{SampleRwithBasis}(\mathbf{J})$ produces an invertible matrix $\mathbf{H} \in D_{m \times m}$ and a basis $\mathbf{P}_{\mathbf{Q}}$ of $\Lambda_q^{\perp}(\mathbf{Q} = \mathbf{J}\mathbf{H}^{-1})$, where $\|\widetilde{\mathbf{P}}_{\mathbf{Q}}\| \leq O(\sqrt{n \log q})$.
Lemma 7
($\text{SampleD}(\mathbf{J}, \mathbf{P}, \mathbf{k}, s)$ [29]). Given $\mathbf{J} \in \mathbb{Z}_q^{n \times m}$, $\mathbf{k} \in \mathbb{Z}_q^n$, a basis $\mathbf{P}$ of $\Lambda_q^{\perp}(\mathbf{J})$, and $s = O(\sqrt{n \log q})$, $\text{SampleD}(\mathbf{J}, \mathbf{P}, \mathbf{k}, s)$ samples $\mathbf{v} \in \mathbb{Z}^m$ from the discrete Gaussian distribution $D_{\Lambda_q^{\mathbf{k}}(\mathbf{J}), s \cdot \omega(\sqrt{\log n})}$.

2.4. Hard Problems on Lattices

Definition 4
($\mathsf{SIS}_{n,m,q,\beta}$ problem). Given $n, m, q, \beta$ with $q \geq \beta \cdot \omega(\sqrt{n \log n})$, the $\mathsf{SIS}_{n,m,q,\beta}$ problem is to find a vector $\mathbf{k} \in \mathbb{Z}^m \setminus \{\mathbf{0}\}$ with $\|\mathbf{k}\| \leq \beta$ such that $\mathbf{B}\mathbf{k} = \mathbf{0} \bmod q$, where $\mathbf{B} \in \mathbb{Z}_q^{n \times m}$ is a uniformly random matrix. The average-case $\mathsf{SIS}_{n,m,q,\beta}$ problem is as hard as solving the worst-case $\mathsf{GapSVP}_{\tilde{O}(\beta \cdot \sqrt{n})}$ [29].
Definition 5
($\mathsf{LWE}_{n,m,q,\beta}$ problem). Given $n$, $m \geq n$, $q \geq 2$, and $\beta q > 2\sqrt{n}$, randomly select vectors $\mathbf{s} \in \mathbb{Z}_q^n$ and $\mathbf{k} \in \mathbb{Z}_q^m$ and a uniform matrix $\mathbf{B} \in \mathbb{Z}_q^{n \times m}$, and sample a Gaussian error vector $\mathbf{r} \leftarrow (D_{\mathbb{Z}, \sqrt{2}\beta q})^m$. The $\mathsf{LWE}_{n,m,q,\beta}$ problem is to distinguish the distributions $(\mathbf{B}, \mathbf{B}^T\mathbf{s} + \mathbf{r})$ and $(\mathbf{B}, \mathbf{k})$. The $\mathsf{LWE}_{n,m,q,\beta}$ problem is as hard as $\mathsf{SIVP}$ under quantum reductions to within $\tilde{O}(n/\beta)$ [30].
For the LWE instance $(\mathbf{B}, \mathbf{B}^T\mathbf{s} + \mathbf{r})$, if the Euclidean norm of $\mathbf{B}$'s trapdoor $\mathbf{P}$ is sufficiently small and $\mathbf{r}$ follows a discrete Gaussian distribution, then $\mathbf{s}$ can be easily retrieved, as explained in [32]. Note that $\mathbf{P}^T(\mathbf{B}^T\mathbf{s} + \mathbf{r}) \bmod q = (\mathbf{B}\mathbf{P})^T\mathbf{s} + \mathbf{P}^T\mathbf{r} \bmod q = \mathbf{P}^T\mathbf{r} \bmod q$, since $\mathbf{B}\mathbf{P} = \mathbf{0} \bmod q$; moreover, because both $\mathbf{P}$ and $\mathbf{r}$ have very small entries, $\mathbf{P}^T\mathbf{r} \bmod q = \mathbf{P}^T\mathbf{r}$ holds over the integers. Multiplying by $(\mathbf{P}^T)^{-1}$ then yields $\mathbf{r}$, after which $\mathbf{s}$ is easily recovered from $\mathbf{B}^T\mathbf{s} = (\mathbf{B}^T\mathbf{s} + \mathbf{r}) - \mathbf{r}$ by linear algebra.

2.5. Pseudo-Random Functions

For any $\ell \geq 1$, a pseudo-random function (PRF) is defined by a pair of probabilistic polynomial-time (PPT) algorithms $\text{PRF} = (\text{PRF.Gen}, \text{PRF.Eval})$ as follows:
  • $\text{PRF.Gen}(1^\lambda)$: Output a seed $\varphi_1 \in \mathbb{Z}_q^*$ under the security parameter $\lambda$.
  • $\text{PRF.Eval}(\varphi_1, i)$: Given $\varphi_1 \in \mathbb{Z}_q^*$ and $i \in [\ell]$, return $\zeta_i \in \mathbb{Z}_q$. For convenience, we write $\text{PRF}_{\varphi_1}(\cdot)$ for $\text{PRF.Eval}$, where $\text{PRF}_{\varphi_1}(\cdot) : \mathbb{Z}_q^* \times \{1, 2, \ldots, \ell\} \rightarrow \{\zeta_1, \ldots, \zeta_\ell\}$.

2.6. Pseudo-Random Permutations

For any $\ell \geq 1$, a pseudo-random permutation (PRP) is defined by a pair of PPT algorithms $\text{PRP} = (\text{PRP.Gen}, \text{PRP.Eval})$ as follows:
  • $\text{PRP.Gen}(1^\lambda)$: Output a seed $\varphi_2 \in \mathbb{Z}_q^*$ under the security parameter $\lambda$.
  • $\text{PRP.Eval}(\varphi_2, i)$: Given $\varphi_2 \in \mathbb{Z}_q^*$ and $i \in [\ell]$, return $\psi_i \in [\ell]$. For convenience, we write $\text{PRP}_{\varphi_2}(\cdot)$ for $\text{PRP.Eval}$, where $\text{PRP}_{\varphi_2}(\cdot) : \mathbb{Z}_q^* \times \{1, 2, \ldots, \ell\} \rightarrow \{\psi_1, \ldots, \psi_\ell\}$.
It should be noted that the elements in the set $\{\psi_1, \ldots, \psi_\ell\}$ are distinct.
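The scheme only needs the PRF to expand a short seed into challenge coefficients $W_j$ and the PRP to select distinct block indices $V_j$. A minimal sketch under illustrative assumptions (HMAC-SHA256 as the PRF and a PRF-driven Fisher-Yates shuffle as the PRP; the paper does not prescribe these instantiations):

```python
import hashlib
import hmac

def prf_eval(seed: bytes, i: int, modulus: int) -> int:
    """PRF_seed(i) -> Z_modulus, instantiated here with HMAC-SHA256."""
    digest = hmac.new(seed, i.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % modulus

def prp_eval(seed: bytes, ell: int) -> list[int]:
    """PRP_seed over {1,...,ell}: a permutation realized by a Fisher-Yates
    shuffle whose swap choices are driven by the PRF, so outputs are distinct."""
    idx = list(range(1, ell + 1))
    for j in range(ell - 1, 0, -1):
        t = prf_eval(seed, j, j + 1)   # pseudo-random position in {0,...,j}
        idx[j], idx[t] = idx[t], idx[j]
    return idx

q, ell = 2**24 + 43, 8                  # toy parameters (assumptions)
W = [prf_eval(b"phi_1", i, q) for i in range(1, ell + 1)]  # coefficients W_j
V = prp_eval(b"phi_2", ell)                                # distinct indices V_j
print(W)
print(V)
```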

3. Framework of Our Provable Data Possession

3.1. System Model

The system model of our PDP scheme involves four primary entities, as depicted in Figure 1: the key generation center (KGC), the data owner (DO), the CSP, and the designated verifier (DV). Their roles and responsibilities are described as follows:
  • KGC: The KGC is a globally trusted authority in the system. It is tasked with generating the public parameter and master secret key. In addition, the KGC produces the data owner and the designated verifier’s private keys based on their identities.
  • DO: To reduce storage and management burdens, the data owner outsources their data to CSPs. Moreover, the data owner can specify a particular verifier who is exclusively authorized to perform integrity checks on the outsourced data.
  • CSP: With ample computational resources and storage space, the CSP provides data storage services to users. However, it is untrusted, meaning it may delete or tamper with the outsourced data for its own benefit, or deceive the user to preserve its reputation.
  • DV: The designated verifier is a trusted entity explicitly appointed by the data owner to conduct data integrity verification. Unlike publicly verifiable schemes where any party can audit the data, the designated verifier model restricts the ability to verify to a specific, authorized party.

3.2. Syntax

The proposed PDP scheme Π comprises six PPT algorithms: Π = (Setup, KeyGenS, KeyGenV, TagGen, GenProof, CheckProof), defined as follows (a minimal interface sketch is given at the end of this subsection):
  • $\text{Setup}(1^\lambda, 1^l)$: Given the security parameter $\lambda$ and the maximum number of data blocks $l$, produce the public parameters $params$ and the master secret key $msk$. For simplicity, $params$ is implicitly treated as an input to the remaining algorithms.
  • $\text{KeyGenS}(msk, id_A)$: On input the master secret key $msk$ and the identity $id_A$, produce the public/private key pair $(PK_A, SK_A)$ of the data owner.
  • $\text{KeyGenV}(msk, id_B)$: On input the master secret key $msk$ and the identity $id_B$, produce the public/private key pair $(PK_B, SK_B)$ of the designated verifier.
  • $\text{TagGen}(PK_A, SK_A, PK_B, \mathbf{U})$: Given public keys $PK_A, PK_B$, secret key $SK_A$, and a file $\mathbf{U}$, the data owner splits $\mathbf{U}$ into $l$ blocks (with zero padding if needed), i.e., $\mathbf{U} := (\mathbf{U}_1, \ldots, \mathbf{U}_l)$, where $\mathbf{U}_i \in \mathbb{Z}_q^{n \times n}$ for $i \in [l]$; return $\Omega_{id} = \{(\mathbf{U}_i, \mathbf{u}_i)\}_{i \in [l]}$, the set of all block–tag pairs, where $(\mathbf{U}_i, \mathbf{u}_i)$ is the $i$-th block–tag pair.
  • $\text{GenProof}(PK_A, PK_B, \Omega_{id}, CH)$: Given public keys $PK_A, PK_B$, block–tag pairs $\Omega_{id}$, and the designated verifier's challenge $CH$, return a PDP proof $\Sigma_{id}$.
  • $\text{CheckProof}(PK_A, PK_B, SK_B, CH, \Sigma_{id})$: Given public keys $PK_A, PK_B$, secret key $SK_B$, the challenge $CH$, and the PDP proof $\Sigma_{id}$, output 1 (accept) or 0 (reject).
In Figure 2, the algorithm shown below an entity name indicates that the entity is responsible for executing that algorithm. A dashed arrow indicates that the algorithm's output is transmitted to the designated receiver.
Correctness: Given $(params, msk) \leftarrow \text{Setup}(1^\lambda, 1^l)$, for all $msk, id_A, id_B, \mathbf{U}, CH$: if $(PK_A, SK_A) \leftarrow \text{KeyGenS}(msk, id_A)$, $(PK_B, SK_B) \leftarrow \text{KeyGenV}(msk, id_B)$, $\Omega_{id} \leftarrow \text{TagGen}(PK_A, SK_A, PK_B, \mathbf{U})$, and $\Sigma_{id} \leftarrow \text{GenProof}(PK_A, PK_B, \Omega_{id}, CH)$, then $\text{CheckProof}(PK_A, PK_B, SK_B, CH, \Sigma_{id}) = 1$.
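To make the data flow concrete, the following is a minimal interface sketch of the six algorithms in Python; the type signatures and the Challenge layout are our own illustrative assumptions, not part of the scheme's specification:

```python
from dataclasses import dataclass
from typing import Any, Protocol

@dataclass
class Challenge:
    """CH = (|K|, phi_1, phi_2): block count plus PRF/PRP seeds."""
    num_blocks: int
    phi1: bytes
    phi2: bytes

class PDPScheme(Protocol):
    def setup(self, lam: int, l: int) -> tuple[Any, Any]:
        """-> (params, msk); run by the KGC."""
    def keygen_s(self, msk: Any, id_a: str) -> tuple[Any, Any]:
        """-> (PK_A, SK_A) for the data owner."""
    def keygen_v(self, msk: Any, id_b: str) -> tuple[Any, Any]:
        """-> (PK_B, SK_B) for the designated verifier."""
    def tag_gen(self, pk_a: Any, sk_a: Any, pk_b: Any,
                file: bytes) -> list[tuple[Any, Any]]:
        """-> block-tag pairs Omega_id; run by the data owner."""
    def gen_proof(self, pk_a: Any, pk_b: Any, tags: list,
                  ch: Challenge) -> Any:
        """-> proof Sigma_id; run by the CSP."""
    def check_proof(self, pk_a: Any, pk_b: Any, sk_b: Any,
                    ch: Challenge, proof: Any) -> bool:
        """-> accept/reject; run only by the designated verifier."""
```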

3.3. Security

The unforgeability proof shows that a malicious CSP, acting as an adversary, cannot deceive the designated verifier or pass verification by submitting a forged auditing proof.
For the scheme Π, selective unforgeability is defined by the game $\text{Exp}_{\mathcal{A}}(1^\lambda)$ between a PPT adversary $\mathcal{A}$ and a challenger $\mathcal{C}$. Let $id_A, id_B$ denote the identities of Alice and Bob, respectively. The game proceeds as follows:
  • Initial: $\mathcal{A}$ announces to $\mathcal{C}$ the target identities $id_A^*$ and $id_B^*$, as well as a list of $\gamma$ messages $\mathbf{U}_1, \ldots, \mathbf{U}_\gamma$ under the target identity $id_A^*$, denoted $\mathbf{U} := \{\mathbf{U}_1, \ldots, \mathbf{U}_\gamma\}$, for $0 \leq \gamma \leq l$.
  • Setup: $\mathcal{C}$ obtains $(params, msk) \leftarrow \text{Setup}(1^\lambda, 1^l)$ and provides $params$ to $\mathcal{A}$ while keeping $msk$ confidential.
  • Key queries: $\mathcal{A}$ queries the secret key of any identities $id_A, id_B$ (except $id_A^*, id_B^*$). $\mathcal{C}$ obtains $(PK_A, SK_A)$ using $\text{KeyGenS}(msk, id_A)$ and $(PK_B, SK_B)$ using $\text{KeyGenV}(msk, id_B)$ and then transmits them to $\mathcal{A}$.
  • Block–tag queries: $\mathcal{A}$ sends $\mathbf{U}$ under the identity $id_A^*$ to $\mathcal{C}$. $\mathcal{C}$ returns the block–tag pairs $\Omega_{id_A^*} = \{(\mathbf{U}_i, \mathbf{u}_i^{id_A^*})\}_{i \in [\gamma]} \leftarrow \text{TagGen}(PK_A, SK_A, PK_B, \mathbf{U})$ to $\mathcal{A}$.
  • PDP proof queries: $\mathcal{A}$ transmits a challenge $CH$ limited to $\mathbf{U}$. Using $\Omega_{id_A^*}$, $\mathcal{C}$ produces a PDP proof $\Sigma_{id_A^*} \leftarrow \text{GenProof}(PK_A, PK_B, \Omega_{id_A^*}, CH)$ and transmits it to $\mathcal{A}$.
  • Challenge: $\mathcal{C}$ produces $CH_{id_A^*}$ restricted to $\mathbf{U}$ and transmits it to $\mathcal{A}$.
  • Forgery: $\mathcal{A}$ outputs a forgery $\hat{\Omega}_{id_A^*}$.
$\mathcal{A}$ wins the game if both of the following conditions are satisfied:
(1) The forged PDP proof $\hat{\Omega}_{id_A^*}$ passes the verifier's check.
(2) The forged PDP proof satisfies $\hat{\Omega}_{id_A^*} \neq \Omega_{id_A^*}$, where $\Omega_{id_A^*}$ is the honestly generated PDP proof.
Let $\text{Adv}_{\mathcal{A}}$ denote the advantage of $\mathcal{A}$ in winning the experiment, taken over the coin tosses of $\mathcal{A}$ and $\mathcal{C}$. This yields the following security definition for our PDP scheme.
Definition 6
(Unforgeability). Given the security parameter $\lambda$ and the maximum number of data blocks $l$, if no PPT adversary $\mathcal{A}$ can win the game $\text{Exp}_{\mathcal{A}}(1^\lambda)$ with non-negligible probability, i.e., $\text{Adv}_{\mathcal{A}}$ is negligible, then the proposed scheme is selectively unforgeable.

4. Our Provable Data Possession Scheme

In this section, we propose an identity-based PDP scheme with a designated verifier, constructed from a leveled IB-FHS. Building upon the approach in [14], the proposed scheme introduces a designated verifier mechanism that ensures that only the designated verifier can check data integrity. Specifically, we pre-encode the messages to be signed into matrix form, use the leveled IB-FHS in combination with the designated verifier's public key information to produce signatures, treat these signatures as tags for the file, and let the designated verifier check them. The lattice-based PDP scheme is constructed as follows.
Let $id_A$ and $id_B$ represent the identities of the data owner Alice and the designated verifier Bob, respectively. The hash function $H_1 : \mathbb{Z}_q^n \rightarrow \mathbb{Z}_q^{n \times n}$ is the full-rank difference (FRD) map of [29]. $H_2 : \{0,1\}^* \rightarrow D_{m \times m}$ and $H_3 : \{0,1\}^* \times \mathbb{Z}_q^{n \times n} \rightarrow \mathbb{Z}_q^{2m \times m_1}$ are collision-resistant hash functions.

4.1. Construction

1. $\text{Setup}(1^\lambda, 1^l)$ is shown in Algorithm 1. Given the security parameter $\lambda$ and the maximum number of data blocks $l$, return the public parameters $params$ and the master secret key $msk$.
Algorithm 1: $\text{Setup}(1^\lambda, 1^l)$
Ensure: $params$ and $msk$
  1: Set $n = n(\lambda, l)$, $q = q(n, l)$, $m_0 = m_0(n, l)$, $m_1 = n\lceil\log q\rceil$, and $m = m_0 + m_1$
  2: Pick Gaussian parameters $s_1, s_2, s_3$
  3: $(\mathbf{A}_0, \mathbf{T}_{\mathbf{A}_0}) \leftarrow \text{TrapGen}(1^n, 1^{m_0}, q)$
  4: Sample random matrices $\mathbf{A}_1, \mathbf{B}, \{\mathbf{D}_i\}_{i \in [l]} \in \mathbb{Z}_q^{n \times m_1}$ and $\mathbf{A}_2 \in \mathbb{Z}_q^{n \times m}$
  5: Return $params = (\mathbf{A}_0, \mathbf{A}_1, \mathbf{A}_2, \mathbf{B}, \{\mathbf{D}_i\}_{i \in [l]})$ and $msk = \mathbf{T}_{\mathbf{A}_0}$
2. $\text{KeyGenS}(msk, id_A)$ is shown in Algorithm 2. Given $msk$ and the identity $id_A$, return the public key and secret key of Alice.
Algorithm 2: $\text{KeyGenS}(msk, id_A)$
Ensure: $(PK_A, SK_A)$
  1: Compute $\mathbf{F}_{id_A} = \mathbf{A}_1 + H_1(id_A)\mathbf{B} \in \mathbb{Z}_q^{n \times m_1}$
  2: Set the identity matrix $\mathbf{A}_{id} = [\mathbf{A}_0 \mid \mathbf{F}_{id_A}] \in \mathbb{Z}_q^{n \times m}$
  3: Run $\text{SampleBasisLeft}(\mathbf{A}_0, \mathbf{F}_{id_A}, \mathbf{T}_{\mathbf{A}_0}, s_1)$ to obtain $\mathbf{T}_{\mathbf{A}_{id}}$, a basis of $\Lambda_q^{\perp}([\mathbf{A}_0 \mid \mathbf{F}_{id_A}])$, where $s_1 \geq \|\widetilde{\mathbf{T}}_{\mathbf{A}_0}\| \cdot \omega(\sqrt{\log m})$ according to Lemma 3
  4: Return Alice's public/secret key $(PK_A, SK_A) = (\mathbf{A}_{id}, \mathbf{T}_{\mathbf{A}_{id}})$
3. $\text{KeyGenV}(msk, id_A, id_B)$: Given $msk$ and the identities $id_A, id_B$, run Algorithm 3 to return the public key and secret key of Bob.
Algorithm 3: $\text{KeyGenV}(msk, id_A, id_B)$
Ensure: $(PK_B, SK_B)$
  1: Compute $\mathbf{F}_{id_B} = H_2(id_B) \in D_{m \times m}$ and $\mathbf{B}_{id} = \mathbf{A}_{id}\mathbf{F}_{id_B}^{-1} \in \mathbb{Z}_q^{n \times m}$
  2: Run $\text{NewBasisDel}(\mathbf{A}_{id}, \mathbf{F}_{id_B}, \mathbf{T}_{\mathbf{A}_{id}}, s_2)$ to get $\mathbf{T}_{\mathbf{B}_{id}}$, where $s_2 > \|\widetilde{\mathbf{T}}_{\mathbf{A}_{id}}\| \cdot s_{\mathbf{H}}\sqrt{m} \cdot \omega(\log^{3/2} m)$ and $s_{\mathbf{H}} = \sqrt{n \log q} \cdot \omega(\sqrt{\log m})$ according to Lemma 5
  3: Return Bob's public/secret key $(PK_B, SK_B) = (\mathbf{B}_{id}, \mathbf{T}_{\mathbf{B}_{id}})$
4. $\text{TagGen}(PK_A, SK_A, PK_B, \mathbf{U})$ is shown in Algorithm 4. On input public keys $PK_A, PK_B$, secret key $SK_A$, and a file $\mathbf{U}$, Alice outputs $(\mathbf{U}, \Omega_{id})$.
Algorithm 4: $\text{TagGen}(PK_A, SK_A, PK_B, \mathbf{U})$
Ensure: $(\mathbf{U}, \Omega_{id})$
  1: Divide the file $\mathbf{U}$ into $l$ blocks, each forming an $n \times n$ matrix over $\mathbb{Z}_q$ (padding with zeros if needed), i.e., $\mathbf{U} := (\mathbf{U}_1, \ldots, \mathbf{U}_l)$, where $\mathbf{U}_i \in \mathbb{Z}_q^{n \times n}$ for $i \in [l]$
  2: $\mathbf{u}_i^{id} \in \mathbb{Z}^{2m \times m_1} \leftarrow \text{SampleLeft}(\mathbf{A}_{id}, \mathbf{A}_2 + \mathbf{B}_{id}, \mathbf{T}_{\mathbf{A}_{id}}, \mathbf{D}_i + \mathbf{U}_i\mathbf{G}, s_3)$ such that $[\mathbf{A}_{id} \mid \mathbf{A}_2 + \mathbf{B}_{id}] \cdot \mathbf{u}_i^{id} = \mathbf{D}_i + \mathbf{U}_i\mathbf{G}$, where $s_3 \geq \|\widetilde{\mathbf{T}}_{\mathbf{A}_{id}}\| \cdot \omega(\sqrt{\log 2m})$
  3: Let $\Omega_{id} = \{(\mathbf{U}_i, \mathbf{u}_i^{id})\}_{i \in [l]} = \{(\mathbf{U}_1, \mathbf{u}_1^{id}), (\mathbf{U}_2, \mathbf{u}_2^{id}), \ldots, (\mathbf{U}_l, \mathbf{u}_l^{id})\}$ be the collection of block–tag pairs
  4: Return $(\mathbf{U}, \Omega_{id})$
5. $\text{GenProof}(PK_A, PK_B, \text{PRF}, \text{PRP}, \Omega_{id}, CH)$ is shown in Algorithm 5. On input public keys $PK_A, PK_B$, a pseudo-random function PRF, a pseudo-random permutation PRP, a collection of block–tag pairs $\Omega_{id}$, and a query $CH = (|K|, \varphi_1, \varphi_2)$ submitted by Bob, where $|K| \in [l]$ is the number of queried data blocks and $\varphi_1, \varphi_2 \in \mathbb{Z}_q^*$, the CSP outputs $\Sigma_{id} = \{\mu_{id}, \mathbf{E}_{id}, \mathbf{Q}_{id}\}$.
Algorithm 5: $\text{GenProof}(PK_A, PK_B, \text{PRF}, \text{PRP}, \Omega_{id}, CH)$
Ensure: $\Sigma_{id}$
  1: Let $K = \{1, 2, \ldots, |K|\}$ be a set, where $|K|$ is the number of queried data blocks
  2: Compute $W = \text{PRF}_{\varphi_1}(K)$, where $W = \{W_1, W_2, \ldots, W_{|K|}\}$ and $W_j \in \mathbb{Z}_q$ for $j \in [|K|]$
  3: Compute $V = \text{PRP}_{\varphi_2}(K)$, where $V = \{V_1, V_2, \ldots, V_{|K|}\}$ and $V_j \in K$ for $j \in [|K|]$
  4: Compute $\sigma_{id} = \sum_{j=1}^{|K|} \mathbf{u}_{V_j}^{id}\mathbf{L}_{W_j}$ and $\mu_{id} = \sum_{j=1}^{|K|} W_j\mathbf{U}_{V_j}$, where $\mathbf{G}\mathbf{L}_{W_j} = W_j\mathbf{G}$; note that $\mathbf{L}_{W_j} = \mathbf{G}^{-1}(W_j\mathbf{G}) \in \{0,1\}^{m_1 \times m_1}$
  5: Randomly select a matrix $\mathbf{R}_{id} \in \mathbb{Z}_q^{n \times n}$ and an error matrix $\mathbf{S}_{id} \in \mathbb{Z}_q^{n \times m}$
  6: Compute $\mathbf{Y}_{id} = H_3(\mu_{id}, \mathbf{R}_{id}) \in \mathbb{Z}_q^{2m \times m_1}$
  7: Compute $\mathbf{E}_{id} = \sigma_{id} + \mathbf{Y}_{id} \in \mathbb{Z}_q^{2m \times m_1}$
  8: Compute $\mathbf{Q}_{id} = \mathbf{B}_{id}^T\mathbf{R}_{id} + \mathbf{S}_{id}^T \in \mathbb{Z}_q^{m \times n}$
  9: Return $\Sigma_{id} = \{\mu_{id}, \mathbf{E}_{id}, \mathbf{Q}_{id}\}$
6. $\text{CheckProof}(PK_A, PK_B, SK_B, CH, \Sigma_{id})$: On input public keys $PK_A, PK_B$, secret key $SK_B$, a query $CH = (|K|, \varphi_1, \varphi_2)$, and a response $\Sigma_{id}$ from the CSP, Bob performs Algorithm 6.
Algorithm 6: $\text{CheckProof}(PK_A, PK_B, SK_B, CH, \Sigma_{id})$
Ensure: 1 (accept) or 0 (reject)
  1: Compute $\mathbf{T}_{\mathbf{B}_{id}}^T\mathbf{Q}_{id} = \mathbf{T}_{\mathbf{B}_{id}}^T(\mathbf{B}_{id}^T\mathbf{R}_{id} + \mathbf{S}_{id}^T) = \mathbf{T}_{\mathbf{B}_{id}}^T\mathbf{S}_{id}^T \bmod q$
  2: Compute $\mathbf{S}_{id}^T = (\mathbf{T}_{\mathbf{B}_{id}}^{-1})^T\mathbf{T}_{\mathbf{B}_{id}}^T\mathbf{Q}_{id}$, and then obtain $\mathbf{R}_{id}$ from $\mathbf{Q}_{id}$ and $\mathbf{S}_{id}$
  3: Compute $\sigma_{id} = \mathbf{E}_{id} - H_3(\mu_{id}, \mathbf{R}_{id}) \in \mathbb{Z}_q^{2m \times m_1}$
  4: Compute $W = \{W_1, \ldots, W_{|K|}\} = \text{PRF}_{\varphi_1}(K)$
  5: Compute $V = \{V_1, \ldots, V_{|K|}\} = \text{PRP}_{\varphi_2}(K)$
  6: Compute $\mathbf{D}_{CH} = \sum_{j=1}^{|K|}\mathbf{D}_{V_j}\mathbf{L}_{W_j}$, where $\mathbf{L}_{W_j} = \mathbf{G}^{-1}(W_j\mathbf{G}) \in \{0,1\}^{m_1 \times m_1}$
  7: Return 1 (accept) if the verification equation $[\mathbf{A}_{id} \mid \mathbf{A}_2 + \mathbf{B}_{id}] \cdot \sigma_{id} = \mathbf{D}_{CH} + \mu_{id}\mathbf{G}$ holds and $\|\sigma_{id}\| \leq s_3\sqrt{2m\,m_1}$; otherwise return 0 (reject)

4.2. Correctness

The correctness of our PDP scheme holds if all parties—the data owner, CSP, and designated verifier—follow the prescribed construction. Specifically, the CSP generates the PDP proof using GenProof, and the designated verifier checks it with CheckProof.
Lemma 8
(Correctness). If the parties act honestly, the challenge–response will pass verification, ensuring that the proposed scheme meets correctness.
Proof. 
For each $j \in [|K|]$, $W_j \in \mathbb{Z}_q$, and $V_j \in K$, we have
$$\mathbf{A}\,\mathbf{u}_{V_j}^{id} = \mathbf{D}_{V_j} + \mathbf{U}_{V_j}\mathbf{G},$$
where $\mathbf{A} = [\mathbf{A}_{id} \mid \mathbf{A}_2 + \mathbf{B}_{id}]$. Multiplying by $\mathbf{L}_{W_j} = \mathbf{G}^{-1}(W_j\mathbf{G})$ gives
$$\mathbf{A}\,\mathbf{u}_{V_j}^{id}\mathbf{L}_{W_j} = \mathbf{D}_{V_j}\mathbf{L}_{W_j} + \mathbf{U}_{V_j}\mathbf{G}\,\mathbf{G}^{-1}(W_j\mathbf{G}) = \mathbf{D}_{V_j}\mathbf{L}_{W_j} + W_j\mathbf{U}_{V_j}\mathbf{G}.$$
Summing over all $j \in [|K|]$, we obtain
$$\mathbf{A}\,\sigma_{id} = \sum_{j=1}^{|K|}\mathbf{A}\,\mathbf{u}_{V_j}^{id}\mathbf{L}_{W_j} = \sum_{j=1}^{|K|}\mathbf{D}_{V_j}\mathbf{L}_{W_j} + \sum_{j=1}^{|K|}W_j\mathbf{U}_{V_j}\mathbf{G} = \mathbf{D}_{CH} + \mu_{id}\mathbf{G},$$
where $\mathbf{D}_{CH} = \sum_{j=1}^{|K|}\mathbf{D}_{V_j}\mathbf{L}_{W_j}$ and $\mu_{id} = \sum_{j=1}^{|K|}W_j\mathbf{U}_{V_j}$.
Hence, the verification equation $[\mathbf{A}_{id} \mid \mathbf{A}_2 + \mathbf{B}_{id}] \cdot \sigma_{id} = \mathbf{D}_{CH} + \mu_{id}\mathbf{G}$ holds. □
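The verification equation rests only on the linearity of the tags and the gadget identity $\mathbf{G}\mathbf{G}^{-1}(W\mathbf{G}) = W\mathbf{G}$, which the following toy numpy sketch checks end to end. All matrices are random small-dimension stand-ins rather than scheme-sized parameters, and the tags are forced to satisfy $\mathbf{A}\mathbf{u}_i = \mathbf{D}_i + \mathbf{U}_i\mathbf{G}$ by construction:

```python
import numpy as np

rng = np.random.default_rng(2)
q, n = 16, 2
k = int(np.ceil(np.log2(q))); m1 = n * k
g = 2 ** np.arange(k)
G = np.kron(np.eye(n, dtype=int), g)                        # n x m1 gadget

def G_inv(A):
    out = np.zeros((A.shape[0] * k, A.shape[1]), dtype=int)
    for i in range(A.shape[0]):
        for b in range(k):
            out[i * k + b] = (A[i] >> b) & 1
    return out

A = rng.integers(0, q, size=(n, 3))                         # toy [A_id | A_2 + B_id]
l = 4
W = [int(w) for w in rng.integers(1, q, size=l)]            # challenge coefficients
U = [rng.integers(0, q, size=(n, n)) for _ in range(l)]     # data blocks U_i
u = [rng.integers(0, q, size=(3, m1)) for _ in range(l)]    # toy tags u_i
D = [(A @ u[i] - U[i] @ G) % q for i in range(l)]           # ensures A u_i = D_i + U_i G

L = [G_inv((W[i] * G) % q) for i in range(l)]               # L_{W_i} = G^{-1}(W_i G)
sigma = sum(u[i] @ L[i] for i in range(l)) % q              # aggregated tag
mu = sum(W[i] * U[i] for i in range(l)) % q                 # aggregated block
D_ch = sum(D[i] @ L[i] for i in range(l)) % q

print(np.array_equal((A @ sigma) % q, (D_ch + mu @ G) % q))  # True
```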

5. Security Analysis

5.1. Unforgeability

Assume that there exists an adversary $\mathcal{A}$ as defined in Section 3.3; we design an algorithm $\mathcal{B}$ that uses a forged but valid PDP proof provided by $\mathcal{A}$ to solve the $\mathsf{SIS}_{n,m_0,q,\beta}$ problem. Let $\mathbf{U} := \{\mathbf{U}_1, \ldots, \mathbf{U}_\gamma\}$ be the (possibly empty) set of blocks under the target identities $id_A^*$ and $id_B^*$ on which $\mathcal{A}$ will generate a forgery, with $0 \leq \gamma \leq l$.
Theorem 1
(Unforgeability). If a malicious CSP (the adversary) succeeds in the security game with non-negligible probability $\text{Adv}_{\mathcal{A}}$, then the $\mathsf{SIS}_{n,m_0,q,\beta}$ problem can be solved with at least the same probability. Given the assumed hardness of the $\mathsf{SIS}_{n,m_0,q,\beta}$ problem, the proposed scheme achieves selective unforgeability.
Proof. 
  • Invocation: $\mathcal{B}$ is given a random instance of the $\mathsf{SIS}_{n,m_0,q,\beta}$ problem, i.e., $\mathbf{A}_0 \in \mathbb{Z}_q^{n \times m_0}$, and is asked to return a solution $\mathbf{e} \in \mathbb{Z}^{m_0} \setminus \{\mathbf{0}\}$ with $\|\mathbf{e}\| \leq \beta$ such that $\mathbf{A}_0\mathbf{e} = \mathbf{0} \bmod q$.
  • Initial:  A announces to C the target identities i d A * and i d B * .
  • Setup: $\mathcal{B}$ proceeds as follows, using $\mathbf{A}_0$ from the SIS challenge:
    • Set $n, m_0, q$, $m_1 = n\lceil\log q\rceil$, and $m = m_0 + m_1$.
    • Let $s_1, s_2, s_3$ denote the Gaussian parameters.
    • $(\mathbf{B}, \mathbf{T}_{\mathbf{B}}) \leftarrow \text{TrapGen}(1^n, 1^{m_1}, q)$.
    • Randomly pick matrices $\mathbf{M}_1 \in \{-1,1\}^{m_0 \times m_1}$, $\mathbf{M}_2 \in \{-1,1\}^{m_0 \times m}$, and $\mathbf{M}_3 \in D_{m \times m}$.
    • Set $\mathbf{A}_1 = \mathbf{A}_0\mathbf{M}_1 - H_1(id_A^*)\mathbf{B}$ and $\mathbf{A}_2 = \mathbf{A}_0\mathbf{M}_2 - [\mathbf{A}_0 \mid \mathbf{A}_0\mathbf{M}_1]\mathbf{M}_3^{-1}$.
    • Randomly sample $[\mathbf{u}_{i,1}^{id_A^* T} \mid \mathbf{u}_{i,2}^{id_A^* T}]^T \leftarrow (D_{\mathbb{Z}^m, s_3})^{2m}$ for $i \in [\gamma]$ as "pre-signatures", and then set
      $$\mathbf{D}_i = \mathbf{A}_{id_A^*}\begin{bmatrix}\mathbf{u}_{i,1}^{id_A^*}\\ \mathbf{u}_{i,2}^{id_A^*}\end{bmatrix} - \mathbf{U}_i\mathbf{G},$$
      where $\mathbf{A}_{id_A^*} = [\mathbf{A}_0 \mid \mathbf{A}_0\mathbf{M}_1 \mid \mathbf{A}_0\mathbf{M}_2] = \mathbf{A}_0[\mathbf{I}_{m_0} \mid \mathbf{M}_1 \mid \mathbf{M}_2]$.
    • Randomly pick $l - \gamma$ matrices $\{\mathbf{D}_j\}_{j \in [l] \setminus [\gamma]} \subset \mathbb{Z}_q^{n \times m_1}$.
    • Output the public parameters $params = (\mathbf{A}_0, \mathbf{A}_1, \mathbf{A}_2, \mathbf{B}, \{\mathbf{D}_i\}_{i \in [l]})$.
    $\mathcal{B}$ simulates the random oracles $H_2$ and $H_3$ as follows, initializing two empty lists $L_1$ and $L_2$:
  • $H_2$ queries. On input an identity $id_B$, perform the following:
    • If $id_B \neq id_B^*$: if $H_2(id_B)$ already exists in list $L_1$, return $H_2(id_B) = \mathbf{F}_{id_B}$ to $\mathcal{A}$. Otherwise, run $\text{SampleRwithBasis}(\mathbf{A}_{id})$ to generate $\mathbf{F}_{id_B} \in D_{m \times m}$, $\mathbf{B}_{id} = \mathbf{A}_{id}\mathbf{F}_{id_B}^{-1} \in \mathbb{Z}_q^{n \times m}$, and a basis $\mathbf{T}_{\mathbf{B}_{id}}$ of $\Lambda_q^{\perp}(\mathbf{B}_{id})$, where $\mathbf{A}_{id} = [\mathbf{A}_0 \mid \mathbf{A}_0\mathbf{M}_1]$. Then, store $(id_B, \mathbf{F}_{id_B}, \mathbf{B}_{id}, \mathbf{T}_{\mathbf{B}_{id}})$ in list $L_1$ and return $H_2(id_B) = \mathbf{F}_{id_B}$ to $\mathcal{A}$.
    • If $id_B = id_B^*$, then add $(id_B, \mathbf{M}_3, \mathbf{A}_{id}\mathbf{M}_3^{-1}, \perp)$ to list $L_1$ and return $H_2(id_B^*) = \mathbf{M}_3$.
  • Key queries. On input identities $id_A, id_B$ (except $id_A^*, id_B^*$), perform the following:
    • $\mathbf{T}_{\mathbf{A}_{id}} \leftarrow \text{SampleBasisRight}(\mathbf{A}_0, (H_1(id_A) - H_1(id_A^*))\mathbf{B}, \mathbf{M}_1, \mathbf{T}_{\mathbf{B}}, s_1)$.
    • Search for $id_B$ in $L_1$ and return the associated $\mathbf{T}_{\mathbf{B}_{id}}$.
    • Output $(PK_A, SK_A) = (\mathbf{A}_{id}, \mathbf{T}_{\mathbf{A}_{id}})$ and $(PK_B, SK_B) = (\mathbf{B}_{id}, \mathbf{T}_{\mathbf{B}_{id}})$.
  • Block–tag pair queries. Upon receiving the set $\mathbf{U}$ from $\mathcal{A}$, $\mathcal{B}$ responds with the block–tag pairs $\{(\mathbf{U}_i, \mathbf{u}_i^{id_A^*})\}_{i \in [\gamma]}$, where $\{\mathbf{u}_i^{id_A^*}\}_{i \in [\gamma]} = \{[\mathbf{u}_{i,1}^{id_A^* T} \mid \mathbf{u}_{i,2}^{id_A^* T}]^T\}_{i \in [\gamma]}$ were precomputed during the Setup phase.
  • $H_3$ queries. On input $\mu_{id}$, perform the following:
    • If $\mu_{id}$ exists in list $L_2$, return the corresponding values $\mathbf{Y}_{id}$ and $\mathbf{Q}_{id}$ to $\mathcal{A}$.
    • Otherwise, $\mathcal{B}$ randomly picks a matrix $\mathbf{R}_{id} \in \mathbb{Z}_q^{n \times n}$ and an error matrix $\mathbf{S}_{id} \in \mathbb{Z}_q^{n \times m}$, computes
      $$\mathbf{Y}_{id} = H_3(\mu_{id}, \mathbf{R}_{id}) \quad \text{and} \quad \mathbf{Q}_{id} = \mathbf{B}_{id}^T\mathbf{R}_{id} + \mathbf{S}_{id}^T,$$
      and then adds $(\mu_{id}, \mathbf{Y}_{id}, \mathbf{Q}_{id})$ to $L_2$ and returns $\mathbf{Y}_{id}$ and $\mathbf{Q}_{id}$.
  • PDP proof queries. $\mathcal{A}$ issues a challenge $CH = (|K|, \varphi_1, \varphi_2)$ limited to the set $\mathbf{U}$, where $|K| \in [\gamma]$ and $\varphi_1, \varphi_2 \in \mathbb{Z}_q^*$. The challenger uses $\{(\mathbf{U}_i, \mathbf{u}_i^{id_A^*})\}_{i \in [\gamma]}$ to produce a valid PDP proof $\Sigma_{id_A^*} = \{\mu_{id_A^*}, \mathbf{E}_{id_A^*}, \mathbf{Q}_{id_A^*}\}$ by running Algorithm 5 and transmits it to $\mathcal{A}$.
  • Challenge. $\mathcal{B}$ generates a challenge $CH^* = (|K^*|, \varphi_1^*, \varphi_2^*)$ restricted to $\mathbf{U}$, where $0 < |K^*| \leq \gamma$ and $\varphi_1^*, \varphi_2^* \in \mathbb{Z}_q^*$, and transmits it to $\mathcal{A}$.
  • Forgery. $\mathcal{A}$ returns a forgery $\hat{\Sigma}_{id_A^*} = \{\hat{\mu}_{id_A^*}, \hat{\mathbf{E}}_{id_A^*}, \hat{\mathbf{Q}}_{id_A^*}\}$. If $\hat{\Sigma}_{id_A^*}$ passes the verifier's check with probability $\text{Adv}_{\mathcal{A}}$, then
    $$\mathbf{A}_{id_A^*}\hat{\sigma}_{id_A^*} = \mathbf{D}_{CH} + \hat{\mu}_{id_A^*}\mathbf{G}$$
    holds with probability $\text{Adv}_{\mathcal{A}}$, where $\mathbf{A}_{id_A^*} = [\mathbf{A}_0 \mid \mathbf{A}_0\mathbf{M}_1 \mid \mathbf{A}_0\mathbf{M}_2] = \mathbf{A}_0[\mathbf{I}_{m_0} \mid \mathbf{M}_1 \mid \mathbf{M}_2]$ and $\mathbf{D}_{CH} = \sum_{j=1}^{|K^*|}\mathbf{D}_{V_j}\mathbf{L}_{W_j}$.
    Let $\Sigma_{id_A^*} = \{\mu_{id_A^*}, \mathbf{E}_{id_A^*}, \mathbf{Q}_{id_A^*}\}$ be computed honestly. Here,
    $$\mu_{id_A^*} = \sum_{j=1}^{|K^*|}W_j\mathbf{U}_{V_j}, \qquad \mathbf{E}_{id_A^*} = \sigma_{id_A^*} + H_3(\mu_{id_A^*}, \mathbf{R}_{id_A^*}), \qquad \mathbf{Q}_{id_A^*} = \mathbf{B}_{id_A^*}^T\mathbf{R}_{id_A^*} + \mathbf{S}_{id_A^*}^T,$$
    where $W = \{W_1, \ldots, W_{|K^*|}\} = \text{PRF}_{\varphi_1^*}(K^*)$ and $V = \{V_1, \ldots, V_{|K^*|}\} = \text{PRP}_{\varphi_2^*}(K^*)$. Thus, we obtain
    $$\mathbf{A}_{id_A^*}\sigma_{id_A^*} = \mathbf{D}_{CH} + \mu_{id_A^*}\mathbf{G}.$$
    Subtracting the two verification equations yields
    $$\mathbf{A}_{id_A^*}(\hat{\sigma}_{id_A^*} - \sigma_{id_A^*}) = (\hat{\mu}_{id_A^*} - \mu_{id_A^*})\mathbf{G},$$
    which holds with probability $\text{Adv}_{\mathcal{A}}$.
    Next, we show that if adversary $\mathcal{A}$ succeeds in the security game with non-negligible advantage $\text{Adv}_{\mathcal{A}}$, then $\mathcal{B}$ can solve an instance of the $\mathsf{SIS}_{n,m_0,q,\beta}$ problem with at least the same probability.
    Let $\hat{\sigma}_{id_A^*} = [\hat{\mathbf{u}}_1^{id_A^* T} \mid \hat{\mathbf{u}}_2^{id_A^* T}]^T$ be the forged response produced by $\mathcal{A}$. Since $\hat{\Sigma}_{id_A^*} \neq \Sigma_{id_A^*}$, at least one of the following must hold: (i) $\hat{\mu}_{id_A^*} = \mu_{id_A^*}$ and $\hat{\sigma}_{id_A^*} \neq \sigma_{id_A^*}$; (ii) $\hat{\mu}_{id_A^*} \neq \mu_{id_A^*}$ and $\hat{\sigma}_{id_A^*} = \sigma_{id_A^*}$; (iii) $\hat{\mu}_{id_A^*} \neq \mu_{id_A^*}$ and $\hat{\sigma}_{id_A^*} \neq \sigma_{id_A^*}$.
    We analyze these cases:
    Case 1: If the forged tag differs while the file block remains unchanged, then $\mathcal{B}$ outputs $[\mathbf{I}_{m_0} \mid \mathbf{M}_1](\hat{\mathbf{u}}_1^{id_A^*} - \mathbf{u}_1^{id_A^*}) + \mathbf{M}_2(\hat{\mathbf{u}}_2^{id_A^*} - \mathbf{u}_2^{id_A^*})$ as the $\mathsf{SIS}_{n,m_0,q,\beta}$ solution.
    Case 2: If the file blocks differ but the tags are identical, then this contradicts the subtracted equation above, since its left-hand side is zero while its right-hand side is not. Thus, this case is impossible.
    Case 3: If both the file block and the tag differ, then $\mathcal{B}$ outputs $\big([\mathbf{I}_{m_0} \mid \mathbf{M}_1](\hat{\mathbf{u}}_1^{id_A^*} - \mathbf{u}_1^{id_A^*}) + \mathbf{M}_2(\hat{\mathbf{u}}_2^{id_A^*} - \mathbf{u}_2^{id_A^*})\big)\mathbf{T}_{\mathbf{G}}$ as the $\mathsf{SIS}_{n,m_0,q,\beta}$ solution, where $\mathbf{T}_{\mathbf{G}}$ is a short basis of $\Lambda_q^{\perp}(\mathbf{G})$. □

5.2. Indistinguishability

In addition to unforgeability, the proposed scheme satisfies indistinguishability: the simulated experiment and the real execution are indistinguishable from the adversary's perspective.
Theorem 2
(Indistinguishability). Let $\{params, PK_A, SK_A, PK_B, SK_B, \Sigma_{id}\}$ and $\{params^*, PK_A^*, SK_A^*, PK_B^*, SK_B^*, \Sigma_{id}^*\}$ be the outputs of the Setup, KeyGenS, KeyGenV, TagGen, and GenProof algorithms in the real and simulated executions, respectively. We show that the two distributions are statistically indistinguishable.
Proof. 
The two executions differ as follows:
  • Setup: In the real Setup algorithm, $\mathbf{A}_0 \in \mathbb{Z}_q^{n \times m_0}$ and its trapdoor $\mathbf{T}_{\mathbf{A}_0}$ are obtained using the TrapGen algorithm, and the matrices $(\mathbf{A}_1, \mathbf{A}_2, \mathbf{B}, \{\mathbf{D}_i\}_{i \in [l]})$ are selected uniformly at random. In the simulated Setup algorithm, the SIS generator selects $\mathbf{A}_0$ uniformly at random without its trapdoor. Define $\mathbf{A}_1 = \mathbf{A}_0\mathbf{M}_1 - H_1(id_A^*)\mathbf{B}$ and $\mathbf{A}_2 = \mathbf{A}_0\mathbf{M}_2 - [\mathbf{A}_0 \mid \mathbf{A}_0\mathbf{M}_1]\mathbf{M}_3^{-1}$, where $\mathbf{M}_1 \in \{-1,1\}^{m_0 \times m_1}$, $\mathbf{M}_2 \in \{-1,1\}^{m_0 \times m}$, and $\mathbf{M}_3 \in D_{m \times m}$ are selected uniformly at random, and $\mathbf{B}$ is generated using the TrapGen algorithm. Additionally, set $\mathbf{D}_i = [\mathbf{A}_0 \mid \mathbf{A}_0\mathbf{M}_1 \mid \mathbf{A}_0\mathbf{M}_2]\begin{bmatrix}\mathbf{u}_{i,1}^{id}\\ \mathbf{u}_{i,2}^{id}\end{bmatrix} - \mathbf{U}_i\mathbf{G}$ for $i \in [\gamma]$, where $[\mathbf{u}_{i,1}^{id\,T} \mid \mathbf{u}_{i,2}^{id\,T}]^T \leftarrow (D_{\mathbb{Z}^m, s_3})^{2m}$, and sample uniformly random $\mathbf{D}_j \in \mathbb{Z}_q^{n \times m_1}$ for $j \in [l] \setminus [\gamma]$.
  • KeyGenS: In the real KeyGenS algorithm, the trapdoor $\mathbf{T}_{\mathbf{A}_{id}}$ of $\mathbf{A}_{id}$ is generated by running SampleBasisLeft. In the simulated KeyGenS algorithm, $\mathbf{T}_{\mathbf{A}_{id}}$ is produced using SampleBasisRight.
  • KeyGenV: In the real KeyGenV algorithm, the trapdoor $\mathbf{T}_{\mathbf{B}_{id}}$ of $\mathbf{B}_{id}$ is produced using NewBasisDel. In the simulated KeyGenV algorithm, $\mathbf{T}_{\mathbf{B}_{id}}$ is generated using SampleRwithBasis.
  • TagGen: In the real TagGen algorithm, $\mathbf{u}_i^{id}$ is generated using the SampleLeft algorithm for $i \in [l]$. In the simulated TagGen algorithm, $\mathbf{u}_i^{id}$ is pre-produced during the Setup phase.
  • GenProof: In the real GenProof algorithm, $\mathbf{R}_{id} \in \mathbb{Z}_q^{n \times n}$ and $\mathbf{S}_{id} \in \mathbb{Z}_q^{n \times m}$ are selected uniformly at random, $\sigma_{id}$ is produced from the tags $\mathbf{u}_i^{id}$ generated by SampleLeft for $i \in [l]$, and $\Sigma_{id}$ is then derived from $\mathbf{R}_{id}$, $\mathbf{S}_{id}$, and $\mathbf{u}_i^{id}$. In the simulated GenProof algorithm, $\mathbf{R}_{id} = [\mathbf{A}_0 \mid \mathbf{A}_0\mathbf{M}_1]\mathbf{M}_4$ for $\mathbf{M}_4 \in \{-1,1\}^{m \times n}$, $\mathbf{S}_{id} = ([\mathbf{A}_0 \mid \mathbf{A}_0\mathbf{M}_1]\mathbf{M}_5)^T$ for $\mathbf{M}_5 \in \{-1,1\}^{m \times m}$, and $\mathbf{u}_i^{id}$ are pre-produced during the Setup phase, and $\Sigma_{id}$ is obtained using $\mathbf{R}_{id}$, $\mathbf{S}_{id}$, and $\mathbf{u}_i^{id}$.
Now, we argue that the distribution of $(\mathbf{A}_0, \mathbf{A}_1, \mathbf{A}_2, \mathbf{B}, \{\mathbf{D}_i\}_{i \in [l]})$ is statistically indistinguishable between the real and simulated executions. By Lemma 1, we have $(\mathbf{A}_0, [\mathbf{A}_1 \mid \mathbf{A}_2 \mid \mathbf{D}_1 \mid \mathbf{D}_2 \mid \cdots \mid \mathbf{D}_l]) \approx_s (\mathbf{A}_0, [\mathbf{A}_0\mathbf{M}_1 - H_1(id_A^*)\mathbf{B} \mid \mathbf{A}_0\mathbf{M}_2 - [\mathbf{A}_0 \mid \mathbf{A}_0\mathbf{M}_1]\mathbf{M}_3^{-1} \mid \mathbf{D}_1 \mid \mathbf{D}_2 \mid \cdots \mid \mathbf{D}_l])$, where $\mathbf{D}_i = [\mathbf{A}_0 \mid \mathbf{A}_0\mathbf{M}_1]\mathbf{u}_{i,1}^{id} + \mathbf{A}_0\mathbf{M}_2\mathbf{u}_{i,2}^{id} - \mathbf{U}_i\mathbf{G}$ for $i \in [\gamma]$ and $\mathbf{D}_j \in \mathbb{Z}_q^{n \times m_1}$ for $j \in [l] \setminus [\gamma]$, with $[\mathbf{u}_{i,1}^{id\,T} \mid \mathbf{u}_{i,2}^{id\,T}]^T \leftarrow (D_{\mathbb{Z}^m, s_3})^{2m}$. In both executions, $(\mathbf{B}, \mathbf{T}_{\mathbf{B}})$ is produced identically. Hence, the distribution of $(\mathbf{A}_0, \mathbf{A}_1, \mathbf{A}_2, \mathbf{B}, \{\mathbf{D}_i\}_{i \in [l]})$ is statistically indistinguishable in both cases.
For a key query on an identity $id_A$, the algorithms SampleBasisLeft and SampleBasisRight are invoked to generate the corresponding trapdoor. By Lemmas 3 and 4, the output of these algorithms is statistically close to a sample from the discrete Gaussian distribution $D_{\Lambda_q^{\perp}(\mathbf{A}_{id}), s_1}$, provided that the Gaussian parameter $s_1$ is chosen sufficiently large. This implies that the generated trapdoor is statistically independent of any specific structure and leaks no information. Similarly, for an identity $id_B$, the NewBasisDel and SampleRwithBasis algorithms are used to derive the trapdoor. According to Lemmas 5 and 6, their outputs are statistically indistinguishable, ensuring that no adversary can tell whether the trapdoor was generated in the real execution or through simulation. Thus, in both executions, $\mathbf{T}_{\mathbf{A}_{id}}$ (respectively, $\mathbf{T}_{\mathbf{B}_{id}}$) is statistically indistinguishable.
According to Lemmas 3 and 4, the distributions of public parameters, public/secret keys for identities, and tags associated with challenge files are statistically indistinguishable in both the real and simulated executions. Therefore, an adversary cannot leverage these components to distinguish between the two settings.
For the GenProof algorithm, given the secure hash function $H_3$ and the LWE instance, the values $\Sigma_{id} = \{\mu_{id}, \mathbf{E}_{id}, \mathbf{Q}_{id}\}$ produced by the two executions are statistically close to uniform. □

5.3. Robustness

Theorem 3
(Robustness). Our scheme meets robustness if the LWE problem is hard.
Proof. 
We define robustness as the guarantee that only the designated verifier, authorized by the data owner, is capable of successfully validating the integrity of the data.
In the proposed scheme, the verifier-specific component of the proof is $\mathbf{Q}_{id} = \mathbf{B}_{id}^T\mathbf{R}_{id} + \mathbf{S}_{id}^T$, which is an LWE instance; its security therefore rests on the hardness of the LWE problem.
Under the LWE assumption, the distribution of $\mathbf{Q}_{id}$ is indistinguishable from a uniformly random matrix over $\mathbb{Z}_q^{m \times n}$. Therefore, without knowledge of the designated verifier's private key, it is infeasible for any adversary, including other verifiers, CSPs, or external attackers, to recover the underlying secret $\mathbf{R}_{id}$.
If such an adversary A could recover R i d from Q i d without being the designated verifier, this would imply that A can distinguish Q i d from random and invert the LWE function, thereby solving an instance of the LWE problem—contradicting our hardness assumption.
Therefore, under the assumption that LWE is hard, our scheme ensures that only the designated verifier can perform valid proof verification, satisfying the definition of robustness. □
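The following toy numpy sketch illustrates the shape of the masking step; the dimensions and the small error distribution are illustrative assumptions (real parameters are far larger, and the scheme's $\mathbf{S}_{id}$ must be small enough for the trapdoor-based recovery in CheckProof to succeed):

```python
import numpy as np

rng = np.random.default_rng(3)
q, n, m = 3329, 8, 16                      # toy parameters (assumptions)

B_id = rng.integers(0, q, size=(n, m))     # stand-in for the verifier key B_id
R_id = rng.integers(0, q, size=(n, n))     # secret matrix chosen by the CSP
S_id = rng.integers(-2, 3, size=(n, m))    # small "error" matrix

# Verifier-specific component sent by the CSP: Q = B^T R + S^T (mod q).
Q_id = (B_id.T @ R_id + S_id.T) % q        # shape m x n

# Each column of Q_id is an LWE sample of the form B^T r + s; without a
# trapdoor for B_id, distinguishing Q_id from uniform (or recovering R_id)
# amounts to solving an LWE instance.
print(Q_id.shape)
```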

5.4. Parameter Selection

Since the proposed PDP scheme is based on the trapdoor construction of [33] and utilizes a specific leveled IB-FHS for signature generation, the parameter relationships were analyzed with these factors in mind. Following the computational formulas provided in [32,34], we derived the parameters of the scheme accordingly. The parameters are presented in Table 2, ensuring that the scheme achieves the specified level of security.
Although the SIS and LWE are widely considered hard problems for post-quantum cryptography, their security may be affected by unforeseen advances in quantum algorithms or future quantum computing capabilities. In such cases, it would be necessary to increase the lattice dimension n and the modulus q to maintain an adequate security level.

6. Performance Evaluation

6.1. Functionality Comparison

As presented in Table 3, several existing schemes, like Deng et al.’s (2023) [10], Wu et al.’s [21], and Yan et al.’s [27], do not incorporate an identity-based cryptosystem (IBC), thereby limiting their applicability in environments that demand streamlined identity management. Although Deng et al.’s (2024) [11], Luo et al.’s [14], and Zhou et al.’s [35] schemes support an IBC, they lack the capability for designated third-party auditing, which is crucial in applications such as cloud auditing—especially in scenarios where it is necessary to guard against potential malicious auditors tampering with or forging audit results.
With respect to security assumptions, certain schemes (e.g., [10,11,21,27]) rely on traditional number-theoretic problems like DL and CDH problems, which are potentially vulnerable in the presence of quantum adversaries. While schemes such as Luo et al.’s [14], Sasikala and Bindu’s [19], and Zhou et al.’s [35] adopt lattice-based hardness assumptions (e.g., SIS) and offer a degree of quantum resistance, they still fail to provide a complete set of functionalities, as none of them simultaneously support both IBC and feature a designated verifier.
In contrast, the PDP scheme proposed in this work offers a more comprehensive and well-balanced design. It supports both IBC and designated third-party validation, thereby enhancing its practical deployability and ensuring stronger accountability. Moreover, this scheme relies on the hardness of both the SIS and LWE problems, offering strong resistance against quantum attacks. Consequently, the proposed scheme theoretically exhibits strong resistance against quantum attacks while fulfilling advanced functional requirements, making it particularly suitable for practical deployment and applications.

6.2. Time and Space Complexity

Table 4 presents a comparison between our scheme and the scheme by Luo et al. [14] in terms of time and space complexity across three core phases: TagGen, GenProof, and CheckProof. This comparison aims to quantitatively evaluate the computational efficiency and resource consumption of different designs.
In the TagGen phase, the time complexity of our scheme is $O(l\,n\,m_1^2)$, and the space complexity is $O(l(n^2 + n m_1))$. By contrast, the scheme by Luo et al. [14] exhibits a time complexity of $O(l(n^2 m_1 + (m + m_1)m_1))$ and a space complexity of $O(l(n^2 + (m + m_1)m_1))$. Our scheme incurs lower computational and storage overhead in the TagGen phase, making it especially suitable for efficient large-scale data tag generation.
Moreover, in the GenProof and CheckProof phases, our scheme incorporates a designated verifier mechanism, which embeds verifier identity information into authentication tags and proofs, thereby enabling control over verification rights. This mechanism ensures that only authorized verifiers can perform verification, significantly enhancing the system’s support for controlled auditability.
Although this mechanism introduces additional structure and computation—leading to slightly higher complexity in the GenProof and CheckProof phases compared to the Luo et al. [14] scheme—the overall complexity remains within a practical polynomial range. The overhead remains moderate under typical parameters. In summary, our scheme achieves a well-balanced tradeoff between enhanced security capabilities and acceptable computational cost, offering both flexibility and efficiency.

6.3. Communication and Storage Cost

As shown in Table 5, we evaluated the communication and storage overhead of our scheme in comparison with several lattice-based PDP constructions, focusing on public and secret key (PK + SK) size, tag size, and proof size. The findings indicate that our scheme achieves a balanced tradeoff between efficiency and functionality.
In terms of key size, our scheme requires $(nm + m^2)\log q$ bits, which matches the construction proposed by Sasikala and Bindu [19] and is smaller than those of Luo et al. [14] and Zhou et al. [35], demonstrating efficient storage without compromising functionality. Regarding tag size, our design incurs $2m m_1 \log q$ bits per block, which is slightly larger than that of Sasikala and Bindu [19] and the approach by Zhou et al. [35] but remains considerably smaller than the quadratic growth in Luo et al.'s scheme [14], striking a balance between efficiency and security. With respect to proof size, our scheme requires $(n^2 + (n + 2m_1)m)\log q$ bits, which is larger than the proof sizes in Sasikala and Bindu's [19] and Zhou et al.'s [35] designs but still more efficient than the construction proposed by Luo et al. [14]. Notably, the increased proof size in our design stems primarily from the incorporation of the designated verifier mechanism. This feature enhances privacy and accountability in audit scenarios at the cost of a slight increase in communication during proof generation.
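To give a feel for how these asymptotic formulas translate into concrete sizes, the sketch below evaluates them for illustrative parameters; the values of $n$, $\log q$, and $m_0$ are placeholders chosen by us, not the exact settings of Table 2:

```python
# Table 5 formulas evaluated at illustrative (assumed) parameters.
n, log_q = 128, 24          # lattice dimension and an assumed bit length of q
m0 = 2 * n * log_q          # a common choice m_0 = O(n log q); our assumption
m1 = n * log_q
m = m0 + m1

key_bits   = (n * m + m * m) * log_q             # (nm + m^2) log q
tag_bits   = 2 * m * m1 * log_q                  # 2 m m_1 log q per block
proof_bits = (n * n + (n + 2 * m1) * m) * log_q  # (n^2 + (n + 2 m_1) m) log q

for name, bits in [("PK+SK", key_bits), ("tag per block", tag_bits),
                   ("proof", proof_bits)]:
    print(f"{name}: {bits / 8 / 2**20:.1f} MB")
```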
In summary, the proposed scheme maintains reasonable efficiency while supporting an identity-based cryptosystem, designated third-party verification, and post-quantum security. It demonstrates strong practicality and is particularly well suited for secure cloud storage applications in the post-quantum era.

6.4. Computation Cost

In this section, we present the experiments conducted to evaluate the proposed PDP scheme, which were executed using MATLAB 2020b on a system with an Intel(R) Core(TM) i5-13500H processor (2.60 GHz) and 16 GB of RAM. The file size M used in the experiments was approximately 1 MB.
The computation cost of the proposed PDP scheme's algorithms (TagGen, GenProof, and CheckProof) was assessed for different lattice dimensions, denoted by n. For a file of size M ≈ 1 MB, the number of data blocks is 21 when n = 128 and 5 when n = 256. The designated verifier can challenge all the data blocks to achieve a detection probability of 100%.
The results of the computation time for n = 128 and n = 256 are shown in Table 6. Figure 3 illustrates the computation time of the TagGen, GenProof, and CheckProof algorithms with respect to the number of data blocks. As shown in the figure, both the TagGen and GenProof algorithms exhibited a linear increase in computation time as the number of data blocks grew. Notably, in the CheckProof algorithm, the verifier could compute the matrix D C H offline, which helped reduce the verification time. As a result, the computation time for CheckProof remained relatively stable even as the number of data blocks increased.
Compared with existing PDP schemes based on various insecure number-theoretic assumptions, the proposed PDP scheme requires larger parameters, including public and private keys, and exhibits higher computational costs. However, it offers promising potential for post-quantum cryptography. Compared with existing lattice-based PDP schemes, the additional overhead primarily stems from the introduction of the designated verifier mechanism. Moreover, in certain practical applications and implementations, the proposed scheme inherits the same limitations as the leveled FHS scheme—namely, suboptimal performance.

7. Conclusions

This paper presents a lattice-based PDP scheme that incorporates a designated verifier using a leveled IB-FHS. The scheme was proven secure under SIS and LWE assumptions in the random oracle model, confirming its theoretical soundness and feasibility. We have also evaluated the effectiveness and practicality of the proposed scheme through performance comparisons with existing PDP schemes in terms of computation, storage, and communication cost. The results demonstrate that our approach achieves a favorable balance between security and efficiency, making it well suited for authorization-based cloud storage auditing scenarios.
However, we also acknowledge several limitations that merit further exploration. First, compared to traditional PDP schemes, the increased computational overhead of the proposed construction is primarily attributable to its reliance on lattice-based cryptography and the inclusion of a designated verifier mechanism. Second, the current design is centered around a designated verifier and does not consider the complexities introduced by multi-tenant cloud environments, where ensuring data isolation and enforcing fine-grained access control among tenants are essential. Third, although the scheme includes theoretical performance estimates, real-world deployment may involve additional overhead and integration challenges that require practical validation. Furthermore, transitioning the construction to the standard model remains an important and meaningful direction for enhancing its theoretical robustness. As future work, we plan to (1) optimize the efficiency of the core algorithms to reduce computational costs; (2) extend the proposed scheme to support secure auditing in multi-tenant cloud settings with proper isolation mechanisms and delegated verification control; and (3) develop a standard model instantiation to eliminate reliance on idealized cryptographic assumptions.

Author Contributions

Writing—original draft preparation, M.Z.; writing—review and editing, M.Z. and H.C.; supervision, H.C.; funding acquisition, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities, grant number 3282024048.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Akhtar, N.; Kerim, B.; Perwej, Y.; Tiwari, A.; Praveen, S. A comprehensive overview of privacy and data security for cloud storage. Int. J. Sci. Res. Sci. Eng. Technol. 2021, 8, 113–152.
2. Yanamala, A.K.Y. Emerging challenges in cloud computing security: A comprehensive review. Int. J. Adv. Eng. Technol. Innov. 2024, 1, 448–479.
3. Zhao, Y.; Qu, Y.; Xiang, Y.; Uddin, M.P.; Peng, D.; Gao, L. A comprehensive survey on edge data integrity verification: Fundamentals and future trends. ACM Comput. Surv. 2024, 57, 1–34.
4. Liu, Y.; Xiao, S.; Wang, H.; Wang, X.A. New provable data transfer from provable data possession and deletion for secure cloud storage. Int. J. Distrib. Sens. Netw. 2019, 15, 1550147719842493.
5. Ren, Y.; Leng, Y.; Cheng, Y.; Wang, J. Secure data storage based on blockchain and coding in edge computing. Math. Biosci. Eng. 2019, 16, 1874–1892.
6. Wei, P.; Wang, D.; Zhao, Y.; Tyagi, S.K.S.; Kumar, N. Blockchain data-based cloud data integrity protection mechanism. Future Gener. Comput. Syst. 2020, 102, 902–911.
7. He, K.; Chen, J.; Yuan, Q.; Ji, S.; He, D.; Du, R. Dynamic group-oriented provable data possession in the cloud. IEEE Trans. Dependable Secur. Comput. 2021, 18, 1394–1408.
8. Zhou, C. A certificate-based provable data possession scheme in the standard model. Secur. Commun. Netw. 2021, 2021, 9974485.
9. Tian, J.; Song, Q.; Wang, H. Blockchain-based incentive and arbitrable data auditing scheme. In Proceedings of the 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), Baltimore, MD, USA, 27–30 June 2022; pp. 170–177.
10. Deng, L.; Chen, Z.; Ruan, Y.; Zhou, H.; Li, S. Certificateless provable data possession scheme suitable for smart grid management systems. IEEE Syst. J. 2023, 17, 4245–4256.
11. Deng, L.; Feng, S.; Wang, T.; Hu, Z.; Li, S. Identity-based data auditing scheme with provable security in the standard model suitable for cloud storage. IEEE Trans. Dependable Secur. Comput. 2024, 21, 3644–3655.
12. Shor, P.W. Algorithms for quantum computation: Discrete logarithms and factoring. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, USA, 20–22 November 1994; pp. 124–134.
13. Ateniese, G.; Burns, R.; Curtmola, R.; Herring, J.; Kissner, L.; Peterson, Z.; Song, D. Provable data possession at untrusted stores. In Proceedings of the 14th ACM Conference on Computer and Communications Security, Alexandria, VA, USA, 29 October–2 November 2007; pp. 598–609.
14. Luo, F.; Al-Kuwari, S.; Lin, C.; Wang, F.; Chen, K. Provable data possession schemes from standard lattices for cloud computing. Comput. J. 2021, 65, 3223–3239.
15. Yuan, Y.; Zhang, J.; Xu, W. Dynamic multiple-replica provable data possession in cloud storage system. IEEE Access 2020, 8, 120778–120784.
16. Zhang, X.; Liu, X.; Liu, Q.; Wang, J.; Liu, X. Revocable attribute-based data integrity auditing scheme on lattices. In Proceedings of the 2022 International Conference on Computer Science, Information Engineering and Digital Economy (CSIEDE 2022), Guangzhou, China, 28–30 October 2022; Atlantis Press: Dordrecht, The Netherlands, 2022; pp. 383–396.
17. Zhang, X.; Wang, H.; Xu, C. Identity-based key-exposure resilient cloud storage public auditing scheme from lattices. Inf. Sci. 2019, 472, 223–234.
18. Wang, H.; Wang, X.A.; Liu, J.; Lin, C. Identify-based outsourcing data auditing scheme with lattice. In Frontiers in Cyber Security, Proceedings of the Third International Conference, FCS 2020, Tianjin, China, 15–17 November 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 347–358.
19. Sasikala, C.; Shoba, C.B. Certificateless batch verification protocol to ensure data integrity in multi-cloud using lattices. Int. J. Internet Protoc. Technol. 2019, 12, 236–242.
20. Zhang, K.; Guo, Z.; Wang, L.; Zhang, L.; Wei, L. Revocable certificateless provable data possession with identity privacy in cloud storage. Comput. Stand. Interfaces 2024, 90, 103848.
21. Wu, T.-Y.; Tseng, Y.-M.; Huang, S.-S.; Lai, Y.-C. Non-repudiable provable data possession scheme with designated verifier in cloud storage systems. IEEE Access 2017, 5, 19333–19341.
22. Shen, S.-T.; Tzeng, W.-G. Delegable provable data possession for remote data in the clouds. In Information and Communications Security; Springer: Berlin/Heidelberg, Germany, 2011; pp. 93–111.
23. Wang, H. Proxy provable data possession in public clouds. IEEE Trans. Serv. Comput. 2013, 6, 551–559.
24. Ren, Y.; Shen, J.; Wang, J.; Fang, L. Analysis of delegable and proxy provable data possession for cloud storage. In Proceedings of the 2014 Tenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Kitakyushu, Japan, 27–29 August 2014; pp. 779–782.
25. Zhang, J.; Li, P.; Xu, M. On the security of a mutual verifiable provable data auditing in public cloud storage. Int. J. Netw. Secur. 2017, 19, 605–612.
26. Zhang, X.; Huang, C.; Zhang, Y.; Zhang, J.; Gong, J. LDVAS: Lattice-based designated verifier auditing scheme for electronic medical data in cloud-assisted WBANs. IEEE Access 2020, 8, 54402–54414.
27. Yan, H.; Li, J.; Zhang, Y. Remote data checking with a designated verifier in cloud storage. IEEE Syst. J. 2020, 14, 1788–1797.
28. Bian, G.; Zhang, R.; Shao, B. Identity-based privacy preserving remote data integrity checking with a designated verifier. IEEE Access 2022, 10, 40556–40570.
29. Dutta, P.; Susilo, W.; Duong, D.H.; Roy, P.S. Puncturable identity-based and attribute-based encryption from lattices. Theor. Comput. Sci. 2022, 929, 18–38.
30. Xie, C.; Weng, J.; Zhou, D. Revocable identity-based fully homomorphic signature scheme with signing key exposure resistance. Inf. Sci. 2022, 594, 249–263.
31. Yang, M.; Wang, H.; He, D. PM-ABE: Puncturable bilateral fine-grained access control from lattices for secret sharing. IEEE Trans. Dependable Secur. Comput. 2024, 22, 1210–1223.
32. Regev, O. On lattices, learning with errors, random linear codes, and cryptography. J. ACM 2009, 56, 1–40.
33. El Bansarkhani, R.; Buchmann, J. Improvement and efficient implementation of a lattice-based signature scheme. In Selected Areas in Cryptography; Springer: Berlin/Heidelberg, Germany, 2014; pp. 48–67.
34. Chen, Z.; Shi, Y.; Song, X. Estimating concrete security parameters of fully homomorphic encryption. J. Cryptologic Res. 2016, 3, 480–491.
35. Zhou, C.; Wang, L.; Wang, L. Lattice-based provable data possession in the standard model for cloud-based smart grid data management systems. Int. J. Distrib. Sens. Netw. 2022, 18, 15501329221092940.
Figure 1. System model.
Figure 2. Algorithm steps.
Figure 3. Computation time (s).
Table 1. Symbol description.
  a ∈ Z_q — a random element of Z_q, the integers modulo q
  z ∈ Z_q^n — an n-dimensional vector over Z_q
  Q ∈ Z_q^{n×m} — an n-row, m-column matrix over Z_q
  Q^T — the transpose of Q
  Q̃ — the Gram–Schmidt orthogonalization of Q
  ‖Q‖ — the l2-norm of the matrix Q
  [·|·] — horizontal concatenation of vectors or matrices
  [t] — the set {1, 2, …, t}
  O(·) — asymptotic upper bound
Table 2. Parameter set for Gaussian parameter r = 8.
  L_c:    0      0      0       1       1       5        10         15
  λ:      75     100    128     75      128     75       75         128
  n:      157    178    544     497     1127    1256     4624       11,912
  m:      4059   4610   26,099  26,828  94,618  143,088  1,747,591  8,814,469
  log q:  13     13     24      27      42      57       189        370
L_c—circuit depth; λ—a security parameter; n, m—the parameters employed in the trapdoor function; q—a large integer.
Table 3. Functionality comparisons with related schemes.
  Schemes                   IBC   Designated Verifier   Assumption   QAR
  Deng et al. (2023) [10]   ✗     ✗                     DL           ✗
  Deng et al. (2024) [11]   ✓     ✗                     CDH          ✗
  Luo et al. [14]           ✗     ✗                     SIS          ✓
  Sasikala and Bindu [19]   ✗     ✗                     SIS          ✓
  Wu et al. [21]            ✗     ✓                     DL           ✗
  Yan et al. [27]           ✗     ✓                     CDH          ✗
  Zhou et al. [35]          ✗     ✗                     SIS          ✓
  Our Scheme                ✓     ✓                     SIS, LWE     ✓
IBC—identity-based cryptosystem; QAR—quantum attack resistance; DL—discrete logarithm problem; CDH—computational Diffie–Hellman problem; ✓ indicates that the scheme has this feature; ✗ indicates that it does not.
Table 4. Time and space complexity comparisons with related scheme.
  Schemes          Algorithms   Time Complexity              Space Complexity
  Luo et al. [14]  TagGen       O(l(n²m₁ + (m + m₁)m₁))      O(l(n² + (m + m₁)m₁))
                   GenProof     O(|K|(n² + (m + m₁)m₁²))     O(|K|(n² + (m + m₁)m₁))
                   CheckProof   O(|K|((m + m₁)m₁² + nm₁))    O((m + m₁) + nm₁ + |K|)
  Our Scheme       TagGen       O(lnm₁²)                     O(l(n² + nm₁))
                   GenProof     O(|K|n² + nm)                O(|K|(n² + m₁) + nm)
                   CheckProof   O(|K|mm₁ + (m + m₁)m₁)       O(|K| + mm₁ + n²)
m₁ = n log q; l—the maximum number of data blocks; |K|—the number of queried data blocks.
Table 5. Communication and storage comparisons with related schemes.
  Schemes                   PK + SK Size            Tag Size           Proof Size
  Luo et al. [14]           (nm + nm₁ + mm₁)log q   (m + m₁)m₁ log q   (n² + (m + m₁)m₁)log q
  Sasikala and Bindu [19]   (nm + m²)log q          m log q            (n + 2m)log q
  Zhou et al. [35]          (nm + m² + 2l̄m)log q    m log q            (n + 2m)log q
  Our Scheme                (nm + m²)log q          2mm₁ log q         (n² + (n + 2m₁)m)log q
PK—public key; SK—secret key; m = m₁ + m₀; m₁ = n log q; l̄—the bit length of a file block.
Table 6. Computation cost (s).
  n     TagGen   GenProof   CheckProof
  128   20.76    13.75      3.33
  256   36.66    30.67      5.17