1. Introduction
The proliferation of digital assets—such as cryptocurrencies and non-fungible tokens (NFTs)—is reshaping the global financial landscape [1]. While blockchain networks have introduced edge-computing techniques to offload intensive tasks [2,3,4], they still suffer from efficiency bottlenecks when securely processing transaction data. Moreover, most current ledgers still assume a single credential issuer, which becomes a trust and performance bottleneck once multiple regulators or service providers need to co-govern digital-asset flows.
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) provides fine-grained, attribute-oriented access control for digital asset transactions [5,6,7,8]. However, implementing existing CP-ABE schemes on digital asset trading platforms raises several challenges. In classical pairing-based constructions, cost grows with the size of the attribute set, while lattice-based post-quantum constructions require very large keys and expensive matrix operations, making them unsuitable for resource-constrained devices. Existing solutions for outsourced decryption either delegate full ciphertexts to untrusted proxies, compromising security, or keep most of the computation at the client, limiting both scalability and responsiveness [9,10,11]. Together, these limitations cause performance bottlenecks that prevent the widespread adoption of CP-ABE in practice.
Equally important, almost all of the above schemes rely on a single Attribute Authority (AA) to issue secret keys. In real trading ecosystems, attributes such as “KYC-level”, “jurisdiction”, and “credit-rating” are certified by different organizations; a single-AA design therefore reintroduces single-point-of-failure risks and invites collusion between the AA and users.
Recent blockchain-assisted outsourcing methods [12] introduce verifiable computation and fair-payment mechanisms. Nevertheless, they typically leave the underlying cryptographic operations untouched and still rely on centralized verifiers, which reintroduces single points of failure. Furthermore, existing schemes lack effective strategies to balance computational efficiency with security guarantees, especially in high-throughput digital asset scenarios.
In light of these challenges, we advocate for a symmetric allocation of computation and security responsibilities between edge decryption servers (EDSs) and end users. Such symmetry minimizes resource contention, mitigates bottlenecks, and enhances scalability under high-throughput workloads.
Challenges. This work addresses three primary challenges:
Computational Efficiency in ABE Operations. The demands of real-time digital-asset trading necessitate cryptographic operations that are highly efficient. In high-frequency trading environments, where thousands of transactions can occur within a single second, even delays measured in milliseconds have the potential to significantly impact overall system performance.
Security Risks in Outsourced Computation. Offloading decryption to edge servers inevitably expands the attack surface: a malicious or compromised proxy may mount chosen ciphertext attacks, collude with other entities, or infer users’ attribute information. Consequently, an effective outsourcing strategy must conceal sensitive data from the proxy while still relieving the client of heavy cryptographic workloads.
Centralized Trust Dependencies. Current verification frameworks often depend on a single trusted authority to validate outsourced results, which stands in contrast to the decentralized principles underlying blockchain systems. This reliance on centralization not only introduces a single point of failure but also creates an appealing target for potential adversaries. Eliminating this dependency is thus essential for building a resilient and trust-minimized decryption service.
Contributions. To tackle these challenges, we introduce EBODS (Efficient Blockchain-Based Outsourced Decryption System), a solution that combines optimized attribute-based encryption with blockchain-enhanced outsourced decryption. The key contributions of this work are as follows:
Optimized ABE: We construct an optimized ABE scheme using policy matrices and efficient polynomial operations [13], which greatly improves computational efficiency as well as scalability.
Secure Blockchain-Based Outsourced Architecture: We design a secure and verifiable outsourced decryption architecture that is built in a decentralized manner over blockchain, which allows fine-grained and computation-efficient offloading of decryption operations to edge servers.
Comprehensive Security Scheme: We construct a comprehensive security scheme that is strongly resistant to collusion attacks and provides non-repudiation through the auditability of the blockchain.
Practical Validation: We validate EBODS through large-scale experiments against state-of-the-art ABE systems and show that it outperforms them in efficiency, scalability, and security.
The remainder of this paper is organized as follows:
Section 2 presents the preliminaries, including essential concepts and cryptographic foundations.
Section 3 introduces our system model and security assumptions.
Section 4 details the design and implementation of EBODS.
Section 5 reports the experimental results and performance evaluation.
Section 6 concludes the paper and outlines directions for future work.
1.1. Related Work
We survey four inter-connected research streams: (1) classical improvements of attribute-based encryption, (2) post-quantum and lattice-based CP-ABE, (3) multi-authority and blockchain-assisted frameworks, and (4) outsourced decryption techniques.
Table 1 summarizes representative schemes published between 2021 and 2025 and highlights where our EBODS framework advances the state-of-the-art.
1.1.1. From Classical Optimization to Multi-Authority Blockchain ABE
Early optimization efforts in attribute-based encryption (ABE) primarily focused on minimizing policy size or reducing the number of pairing operations. For instance, Chen et al. [14] introduced a small policy matrix (SPM) CP-ABE scheme, achieving a 32% improvement in encryption speed. However, this approach remains limited by its reliance on a single authority, its pairing-based construction, its lack of post-quantum security, and the absence of both revocation mechanisms and result-verification features.
In efforts to advance decentralization, Wu et al. [15] and Lin et al. [17] developed multi-authority CP-ABE schemes. Despite these advancements, their frameworks continue to rely on pairing operations and do not provide quantum resistance. Meanwhile, Konda et al. [18] integrated PPO-optimized key issuance within mHealth applications, yet their approach lacks ciphertext-verifiable outsourcing and dynamic revocation capabilities. Cherukupalle [19] innovatively combined Kyber with AI-driven key rotation, achieving a throughput of 4.8 ktps. However, this method does not support fine-grained access policies.
EBODS inherits the consortium-chain governance model, but uniquely integrates (i) lattice-based SPM encryption for post-quantum security, (ii) multi-authority key management, and (iii) verifiable outsourced decryption with dynamic attribute revocation, thereby filling the gaps left by the above schemes.
1.1.2. Post-Quantum/Lattice CP-ABE
To resist Shor-style quantum attacks, researchers have turned to lattice hardness assumptions. Ma et al. [20] achieved Boolean-circuit CP-ABE over ideal lattices, but it incurs a high encryption latency for 40 attributes, which is unacceptable for wearables. Adeoye [16] used Kyber/Dilithium together with blockchain-based EHR sharing; the scheme is quantum-secure, but again single-authority and without result verification. Fu et al. [8] proposed an offline/online lattice CP-ABE that reduces the user-side cost, but it requires storing 240 KB of pre-computation data.
EBODS keeps lattice resistance via a polynomial-domain SPM encoder, whilst pushing heavy algebra to auditable blockchain nodes.
1.1.3. Outsourced Decryption
Since Green and Chase (2011) [9], three lines of work have evolved:
(i) full-ciphertext outsourcing (Riad [21]) exposes the whole ciphertext to proxies;
(ii) partial outsourcing (Tao [22]) reduces leakage but still demands more than 200 ms of local pairing;
(iii) task division and verification: Sethi [23] adds MAC-based checks, and Hong [12] uses Ethereum to settle fair payment.
EBODS follows line (iii) yet is the first to (a) support lattice-based post-quantum ciphertexts and (b) employ consortium-chain smart contracts for both integrity proofs and load balancing across the proxy set. Compared with the six most recent schemes (2025), EBODS uniquely realizes the triple goal of “PQ-secure + multi-authority + verifiable outsourcing”.
As Table 1 makes evident, only EBODS simultaneously satisfies post-quantum resilience, decentralized authority, blockchain auditability, dynamic revocation, and lightweight decryption, precisely addressing the gaps identified by recent surveys.
2. Preliminaries
2.1. Notation
The ring of integers modulo q is denoted $\mathbb{Z}_q$, while the field of real numbers is denoted $\mathbb{R}$. Standard lowercase letters represent scalars, bold lowercase letters (e.g., $\mathbf{v}$) denote column vectors, and bold uppercase letters (e.g., $\mathbf{A}$) denote matrices. For any positive integer q, we write $[q]$ as shorthand for $\{1, 2, \dots, q\}$. The Euclidean norm is denoted by $\|\cdot\|$.
For a polynomial with coefficient vector $(f_0, \dots, f_{n-1})$, the norm is defined as $\|f\| = \big(\sum_i f_i^2\big)^{1/2}$. The norm of a matrix $\mathbf{A}$ is defined as the maximum norm of its column vectors, that is, $\|\mathbf{A}\| = \max_j \|\mathbf{a}_j\|$, where $\mathbf{a}_j$ denotes the j-th column of $\mathbf{A}$.
We write $x \leftarrow D$ to indicate that the random variable x is sampled from distribution D. Additionally, the uniform distribution over a finite set S is denoted by $U(S)$.
The distributions $D_1$ and $D_2$, both defined over the same countable domain $\Omega$, are considered statistically indistinguishable if their statistical distance, denoted by $\Delta(D_1, D_2) = \frac{1}{2}\sum_{x \in \Omega} |D_1(x) - D_2(x)|$, is negligible. Alternatively, they are deemed computationally indistinguishable if no efficient probabilistic algorithm can distinguish between them with non-negligible advantage.
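For intuition, the statistical distance can be computed directly for small finite distributions. The sketch below is illustrative only; the two distributions are made-up toy values.

```python
from fractions import Fraction

def statistical_distance(d1, d2):
    """Delta(D1, D2) = (1/2) * sum over x of |D1(x) - D2(x)|."""
    support = set(d1) | set(d2)
    return Fraction(1, 2) * sum(
        abs(d1.get(x, Fraction(0)) - d2.get(x, Fraction(0))) for x in support
    )

# Uniform over {0,1,2,3} vs. a slightly biased distribution (toy values).
u = {x: Fraction(1, 4) for x in range(4)}
biased = {0: Fraction(3, 10), 1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 5)}
delta = statistical_distance(u, biased)  # = 1/2 * (1/20 + 0 + 0 + 1/20) = 1/20
```

Exact rational arithmetic avoids floating-point rounding, which matters when the distance itself is tiny.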
2.2. Outsourced Decryption in Digital Asset Systems
The outsourced decryption framework for digital assets with CP-ABE comprises four entities:
Attribute Authorities (AAs)—generate the attribute-bound secret keys that embed user-specific randomness to prevent collusion;
Edge Decryption Servers (EDSs)—perform lightweight partial decryption;
Blockchain network—serves as the immutable ledger that records access requests and checks the correctness of outsourced computations;
End users—encrypt, decrypt, and manage data under attribute-based access control.
During encryption, data owners post the access policy and the encrypted metadata to the blockchain. For decryption, an EDS first transforms a ciphertext into an intermediate value, which the user then decrypts locally with her private key. Both steps are publicly verifiable through consensus on the blockchain.
Our scheme achieves the following three security notions.
Collusion resistance: decryption keys are cryptographically bound to attribute sets and blockchain-verified identities.
Forward secrecy: a smart-contract-based key-rotation mechanism automatically revokes stale keys when a policy expires.
Non-repudiation: all important events are permanently recorded on-chain, making verification of outsourced computation decentralized and eliminating any single EDS node as a point of failure.
2.3. Lattices and Lattice Algorithms
The security of our cryptosystem rests on the infeasibility of solving certain lattice problems. Before presenting the security and correctness of our scheme, we review the necessary lattice background and algorithms. This section gives an intuitive, self-contained summary of lattice theory as used in cryptography, setting up the security analysis in later sections.
Definition of Lattice. A lattice in $\mathbb{R}^m$ is a discrete additive subgroup generated by integer linear combinations of basis vectors. Formally, given a basis matrix $\mathbf{B} = [\mathbf{b}_1, \dots, \mathbf{b}_n] \in \mathbb{R}^{m \times n}$ with linearly independent columns, the lattice generated by $\mathbf{B}$ is defined as
$\Lambda(\mathbf{B}) = \{\mathbf{B}\mathbf{z} : \mathbf{z} \in \mathbb{Z}^n\}$.
The structure and properties of lattices form the foundation for many post-quantum cryptographic schemes.
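To make the definition concrete, the following sketch enumerates a finite window of a two-dimensional lattice from a basis; the basis vectors are arbitrary toy values, not parameters of our scheme.

```python
import itertools

def lattice_points(B, coeff_range):
    """Enumerate B @ z for integer coefficient vectors z with entries in coeff_range.

    B is a list of basis (column) vectors; the lattice consists of all
    integer combinations, of which we enumerate only a finite window.
    """
    dim = len(B[0])
    pts = set()
    for z in itertools.product(coeff_range, repeat=len(B)):
        pts.add(tuple(sum(zj * B[j][i] for j, zj in enumerate(z)) for i in range(dim)))
    return pts

# The lattice generated by the basis vectors (2, 0) and (1, 2):
pts = lattice_points([(2, 0), (1, 2)], range(-1, 2))
# (0, 0) is in every lattice; (1, 0) is not in this one, since 2a + b = 1
# has no integer solution with 2b = 0.
```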
q-ary Lattices. In lattice-based cryptography, q-ary lattices play a central role. These are lattices $\Lambda$ that satisfy $q\mathbb{Z}^m \subseteq \Lambda \subseteq \mathbb{Z}^m$ for some modulus q. Given a matrix $\mathbf{A} \in \mathbb{Z}_q^{n \times m}$, two important q-ary lattices are defined as follows:
$\Lambda_q(\mathbf{A}) = \{\mathbf{y} \in \mathbb{Z}^m : \mathbf{y} = \mathbf{A}^{\top}\mathbf{s} \bmod q \text{ for some } \mathbf{s} \in \mathbb{Z}_q^n\}$,
$\Lambda_q^{\perp}(\mathbf{A}) = \{\mathbf{y} \in \mathbb{Z}^m : \mathbf{A}\mathbf{y} = \mathbf{0} \bmod q\}$.
These lattices are essential for constructing cryptographic primitives such as encryption schemes and digital signatures.
Module Lattices. For efficiency and additional structure, cryptographic schemes often use module lattices, which are defined over polynomial rings. Specifically, let $R_q = \mathbb{Z}_q[x]/(x^n + 1)$, where n is a power of 2 and q is a prime. Module lattices generalize q-ary lattices and enable more efficient implementations, especially in schemes based on the Ring-LWE or Module-LWE assumptions.
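The arithmetic of $R_q$ can be illustrated with a schoolbook negacyclic multiplication; the tiny parameters (q = 17, n = 4) are toy values chosen only for readability.

```python
def ring_mul(a, b, q, n):
    """Multiply a, b in R_q = Z_q[x]/(x^n + 1): negacyclic convolution.

    Coefficients of x^(i+j) with i + j >= n wrap around with a sign flip,
    because x^n = -1 in the quotient ring.
    """
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = (i + j) % n
            sign = -1 if i + j >= n else 1
            c[k] = (c[k] + sign * ai * bj) % q
    return c

# In Z_17[x]/(x^4 + 1): x^3 * x = x^4 = -1 = 16 mod 17.
q, n = 17, 4
print(ring_mul([0, 0, 0, 1], [0, 1, 0, 0], q, n))  # [16, 0, 0, 0]
```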
Discrete Gaussian Distribution. A crucial concept in lattice cryptography is the discrete Gaussian distribution over a lattice. For a lattice $\Lambda$, a center $\mathbf{c} \in \mathbb{R}^m$, and a positive-definite covariance matrix $\Sigma$, the discrete Gaussian is defined as
$D_{\Lambda, \mathbf{c}, \Sigma}(\mathbf{x}) = \rho_{\mathbf{c}, \Sigma}(\mathbf{x}) / \rho_{\mathbf{c}, \Sigma}(\Lambda)$, where $\rho_{\mathbf{c}, \Sigma}(\mathbf{x}) = \exp\!\big(-\pi (\mathbf{x} - \mathbf{c})^{\top} \Sigma^{-1} (\mathbf{x} - \mathbf{c})\big)$ and $\rho_{\mathbf{c}, \Sigma}(\Lambda) = \sum_{\mathbf{y} \in \Lambda} \rho_{\mathbf{c}, \Sigma}(\mathbf{y})$.
Sampling from the discrete Gaussian is fundamental for generating lattice-based trapdoors and for ensuring the security of cryptographic schemes.
Lemma 1. For an m-dimensional lattice Λ with basis $\mathbf{B}$ and Gram–Schmidt orthogonalization $\tilde{\mathbf{B}}$, the smoothing parameter satisfies $\eta_{\epsilon}(\Lambda) \leq \|\tilde{\mathbf{B}}\| \cdot \sqrt{\ln(2m(1 + 1/\epsilon))/\pi}$ for any $\epsilon > 0$.
Hardness Assumption: M-LWE. The security of our scheme is based on the Module Learning With Errors (M-LWE) problem, which generalizes the well-known LWE problem to module lattices.
Definition 1 (Decision M-LWE). Given a uniformly random matrix $\mathbf{A} \in R_q^{k \times \ell}$ and a vector $\mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e}$, where $\mathbf{s}$ is sampled uniformly from $R_q^{\ell}$ and $\mathbf{e}$ is sampled from a discrete Gaussian $D_{\sigma}^{k}$, the goal is to distinguish $(\mathbf{A}, \mathbf{b})$ from a uniformly random pair in $R_q^{k \times \ell} \times R_q^{k}$.
The presumed hardness of the M-LWE problem underpins the security of many lattice-based cryptographic constructions.
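A decision-M-LWE instance can be sketched as follows; all dimensions are toy values, and the bounded-uniform error stands in for the discrete Gaussian purely for readability.

```python
import random

def mlwe_sample(k, l, n, q, err_bound=2):
    """Produce one M-LWE sample (A, b = A*s + e) over R_q, with ring
    elements stored as coefficient lists of length n in Z_q[x]/(x^n + 1).

    NOTE: the small bounded-uniform error below is an illustrative stand-in
    for the discrete Gaussian used in the actual scheme.
    """
    def rand_small(bound):
        return [random.randrange(-bound, bound + 1) % q for _ in range(n)]

    def ring_mul(a, b):  # negacyclic convolution in Z_q[x]/(x^n + 1)
        c = [0] * n
        for i in range(n):
            for j in range(n):
                s = -1 if i + j >= n else 1
                c[(i + j) % n] = (c[(i + j) % n] + s * a[i] * b[j]) % q
        return c

    def ring_add(a, b):
        return [(x + y) % q for x, y in zip(a, b)]

    A = [[[random.randrange(q) for _ in range(n)] for _ in range(l)] for _ in range(k)]
    s = [rand_small(err_bound) for _ in range(l)]
    e = [rand_small(err_bound) for _ in range(k)]
    b = [e[i][:] for i in range(k)]
    for i in range(k):
        for j in range(l):
            b[i] = ring_add(b[i], ring_mul(A[i][j], s[j]))
    return A, b

A, b = mlwe_sample(k=2, l=2, n=8, q=3329)
```

The decision problem asks an adversary to tell such a pair (A, b) apart from a uniformly random one.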
Lattice Sampling Algorithms. Efficient and secure sampling algorithms are essential for lattice-based cryptography. The following are commonly used:
SampleZ: This algorithm samples from a discrete Gaussian over $\mathbb{Z}$, centered at a given point with a given parameter and a truncated support. It is fundamental for generating vectors with the desired statistical properties.
SampleD: Generates a sample $\mathbf{z}$ whose distribution is statistically close to the discrete Gaussian centered at a given point over the target lattice.
SamplePre: Used in preimage-sampleable functions, this probabilistic polynomial-time algorithm samples from the preimage of a function given trapdoor information. Given a trapdoor T and a target y, it outputs x such that $f(x) = y$, with x distributed according to the conditional distribution defined by the sampling algorithm.
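A minimal SampleZ-style rejection sampler over the integers might look as follows; the tail-cut width and parameters are illustrative, not the ones used in our implementation.

```python
import math
import random

def sample_z(c, s, tail=10):
    """SampleZ-style rejection sampler: discrete Gaussian over Z, center c,
    parameter s, with support truncated to [c - tail*s, c + tail*s].

    A candidate integer x is accepted with probability
    rho(x) = exp(-pi * (x - c)^2 / s^2), which yields a distribution
    proportional to rho restricted to the truncated support.
    """
    lo, hi = math.floor(c - tail * s), math.ceil(c + tail * s)
    while True:
        x = random.randint(lo, hi)
        if random.random() < math.exp(-math.pi * (x - c) ** 2 / (s * s)):
            return x

samples = [sample_z(0.0, 4.0) for _ in range(1000)]
```

Production samplers use constant-time techniques instead of plain rejection, since the running time here leaks information about the sampled value.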
Threat Model and Security Definition.
The security of our lattice-based CP-ABE scheme is defined in the indistinguishability under chosen-ciphertext attack (IND-CCA) model:
Definition 2 (IND-CCA Game).
Setup: The challenger runs Setup and gives the public key to the adversary A, keeping the master secret key.
Oracle Queries: A may obtain decryptions of any ciphertexts of its choice and request encryptions of arbitrary messages, as long as no query decrypts the forthcoming challenge.
Challenge: A submits two equal-length messages $M_0, M_1$ and a challenge access structure $\mathbb{A}^*$ such that no queried key satisfies $\mathbb{A}^*$. The challenger picks $b \leftarrow \{0, 1\}$ and returns $C^* = \mathrm{Encrypt}(pk, M_b, \mathbb{A}^*)$.
Post-challenge Queries: Same oracles, except that A cannot decrypt $C^*$ (or its variants).
Guess: A outputs $b'$ and wins if $b' = b$.
The scheme is IND-CCA secure if the advantage $|\Pr[b' = b] - 1/2|$ is negligible for every probabilistic polynomial-time adversary that never requests secret keys authorized for $\mathbb{A}^*$.
2.4. LSSS and Small Policy Matrix
A linear secret-sharing scheme (LSSS) represents an access policy as a matrix whose rows are labeled by attribute predicates. In the EBODS system, one row of the matrix is allocated for each leaf node of the policy tree. Once the matrix and the random secret-sharing polynomial are fixed, all ciphertext shares can be computed in a single offline step.
Definition 3 (Small policy matrix [14]).
Let q be a positive integer. A matrix $\mathbf{M} \in \mathbb{Z}_q^{k \times \ell}$ is called a small policy matrix (SPM) if it satisfies the following conditions: (i) every entry lies in $\{-1, 0, 1\}$; (ii) there exists an invertible $\ell \times \ell$ submatrix with determinant $\pm 1$, the first row of whose inverse serves as the reconstruction vector.
In practice, the policy matrix is relatively small. The matrix is sparse—at most three non-zero entries per row—so inner products can be computed quickly. It is also low-width: the number of columns ℓ equals the dimension of the underlying LSSS, which usually keeps the ciphertext small.
Moreover, SPMs are composable: multiple policies can be combined for the same user simply by appending new rows, and policies cannot be altered once added, which enables the reuse of a single decryption key across multiple ciphertexts. This slim and modular design limits the cost of matrix–vector multiplications and controls the growth of noise during decryption, improving both speed and decoding efficiency.
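The share-and-reconstruct mechanics of an SPM-based LSSS can be sketched as follows; the matrix, attribute labels, and modulus are hypothetical toy values chosen for illustration.

```python
import random

q = 2 ** 13 - 1  # hypothetical small prime modulus for the demo

# A toy small-policy matrix M with entries in {-1, 0, 1}; each row is one
# attribute (hypothetical labels). Rows 0 and 1 form an invertible
# submatrix [[1, -1], [0, 1]] with determinant 1; the first row of its
# inverse [[1, 1], [0, 1]] is the reconstruction vector (1, 1).
M = [
    [1, -1],   # attribute "KYC-level:2"
    [0,  1],   # attribute "jurisdiction:EU"
    [1,  1],   # attribute "credit-rating:A"
]

def share(secret):
    """Share i is the inner product of row i of M with v = (secret, r) mod q."""
    v = [secret, random.randrange(q)]
    return [sum(m * x for m, x in zip(row, v)) % q for row in M]

def reconstruct(shares, w, rows):
    """Recover the secret with reconstruction vector w over the chosen rows."""
    return sum(wi * shares[i] for wi, i in zip(w, rows)) % q

secret = 1234
lam = share(secret)
assert reconstruct(lam, [1, 1], [0, 1]) == secret
```

Because the reconstruction vector is the first row of the submatrix inverse, the linear combination of shares collapses to the first entry of v, which is exactly the secret.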
3. System Overview
3.1. System Model
The EBODS system architecture consists of four main modules that together enable secure encryption and outsourced decryption of digital content, as shown in Figure 1; the modules cooperate closely to achieve both security and efficiency.
Attribute Authorities (AAs). There are n independent AAs sharing one attribute universe. A distributed key generation (DKG) protocol outputs the public parameters and secret shares of a global master key according to a (t, n) linear secret-sharing scheme. No polynomial-time adversary corrupting fewer than t AAs can retrieve the master key.
For a user with attribute set S, each contacted AA issues a signed partial key and records it on-chain. Once any t valid shares are available, the user combines them to recover the full key. Collusion among fewer than t AAs provides no information about the master key or user keys.
Edge Decryption Servers (EDSs): The EDSs perform partial decryption, taking the computational burden off end users. An EDS receives the ciphertext and the user's transformation-key pieces (obtained from the blockchain), and returns partial results to the user. Since an EDS never holds complete private keys or plaintexts, trust and security are preserved.
Blockchain Network: A blockchain serves as a distributed ledger that (i) immutably records key-management and data-access events, (ii) maintains proofs for outsourced computations, and (iii) enforces access-policy logic, granting or revoking delegations so that unauthorized parties cannot access protected items; this yields non-repudiation and full auditability. The Key Aggregator (KA) smart contract, deployed on this ledger, maintains the public parameters and the set of posted key shares. It automatically verifies that at least the threshold number of posted shares carry valid signatures before marking a key request as complete. The KA never touches secret material; it manages only metadata and workflow, and is thus unable to decipher messages on its own.
End Users: Each end user encrypts data locally before uploading it, and decrypts data when accessing it. The user consults the blockchain to determine access rights and combines the EDS's partial result with the user's own key material to complete the decryption.
By combining CP-ABE with the blockchain, this model provides fine-grained access control with verifiability and decentralized security. The system architecture delineates the duties and responsibilities of each component.
One notable characteristic of EBODS is its symmetric structure, in which the EDS and the end user share the computational and security burden. This design ensures that no single party becomes a performance bottleneck and that the system's robustness and efficiency are mutually reinforcing. The resulting load balance also enhances fault tolerance, scalability, and ease of automation, as each component can run independently while preserving overall coordination.
Design Overview: As the performance analysis below will show, this design allows EBODS to maintain constant throughput and low latency despite a high volume of incoming transactions. As a result, the scheme is well suited to large-scale distributed digital asset management.
We now link the system design to actual operation by describing the workflow, illustrating how these components communicate to enable secure and efficient data sharing.
3.2. System Workflow
EBODS proceeds through six sequential phases, as illustrated in
Figure 2.
The process involves the Data Owner (DO), a group of independent Attribute Authorities (AAs), an on-chain Key Aggregator (KA) smart contract, Edge Decryption Servers (EDSs), the blockchain, and the End User (EU).
- (1)
Key-share request.
A user first sends a Partial-SK Share Request to the n independent AAs over the blockchain. All requests are logged on-chain immutably, making the process fully auditable.
- (2)
Threshold key-share distribution and on-chain aggregation.
The AAs, running a DKG protocol, output their signed key shares and deposit them with the KA contract. The KA accepts the transaction as valid once it obtains at least the threshold number of legitimate signatures.
The user then downloads the shares from the chain and locally reconstructs the entire secret key from them. The public parameters are released at system setup time.
- (3)
Data encryption and storage.
Given a plaintext M, the data owner first encrypts it under the public key together with an access policy, and then stores the resulting ciphertext in external storage (e.g., cloud or IPFS). A hash of the encrypted message, a timestamp, and a hash of the policy are written to the blockchain at the same time.
- (4)
Access request.
When an end user EU wants the data, it sends an Access Request that includes an attribute proof to the KA/Policy contract. The contract verifies whether the user's attribute set satisfies the policy and returns a positive response if it does.
- (5)
Outsourced decryption.
An authorized EU submits C and its transformation key to the EDS. If the on-chain authorization and the user's signature are valid, the EDS evaluates the partial decryption and outputs the partially decrypted ciphertext. Since it never holds the complete key material, the EDS cannot recover the plaintext.
- (6)
Final decryption.
The EU locally applies a Finalize operation to the partial result with its private key and retrieves the original message M.
3.3. Security Model
Trust assumptions. The DO is considered trusted. The n independent Attribute Authorities (AAs) are semi-trusted: any coalition smaller than the threshold may cooperate without compromising system secrecy. The on-chain Key Aggregator (KA) contract is assumed to execute correctly, since its rules are enforced by blockchain consensus. The EDS is honest-but-curious, i.e., it follows the protocol but attempts to learn extra information. End users may collude or share keys. Finally, the underlying blockchain provides a Byzantine-fault-tolerant, totally ordered ledger that serves as an unfailing record of all important actions.
Security goals. (1) Data confidentiality: the scheme is IND-CCA secure under the M-LWE assumption, so neither unauthorized users nor the EDS can distinguish encrypted messages. (2) Outsourced-decryption privacy and verifiability: the EDS learns nothing about the plaintext, and its output is publicly checkable. (3) Access-policy privacy: attributes remain hidden from parties that do not satisfy the policy. (4) Collusion resistance: any coalition of fewer than the threshold number of AAs together with arbitrary users cannot decrypt additional data.
IND-CCA game (sketch). A challenger runs Setup to produce public parameters $pp$ and master key $msk$, giving only $pp$ to the adversary $\mathcal{A}$. During Query Phase I, $\mathcal{A}$ adaptively requests secret keys or partial decryptions. Next, $\mathcal{A}$ chooses two equal-length messages $M_0, M_1$ and an access structure $\mathbb{A}^*$ that none of its obtained keys satisfies. The challenger selects a random bit $b$, returns the challenge ciphertext $C^* = \mathrm{Encrypt}(pp, M_b, \mathbb{A}^*)$, and Query Phase II continues under the same restriction. Finally, $\mathcal{A}$ outputs a guess $b'$. The advantage $|\Pr[b' = b] - 1/2|$ must be negligible in the security parameter. A similar game shows that forging an incorrect partial decryption that still passes verification succeeds with at most negligible probability.
4. CP-ABE Scheme with Policy Matrix on Ring
The scheme is a four-stage cycle, i.e., Setup, Extract, Encrypt, and Decrypt, executed among the Attribute Authorities, the data owners, and the users. Setup is performed independently by each of the ℓ attribute authorities, resulting in their public/master-secret key pairs. Then, the authorities jointly run Extract to generate the user's attribute secret keys. Encrypt returns a ciphertext C that encapsulates an access structure along with the protected message, which is later recovered using Decrypt if the holder's attributes satisfy this structure and a sufficient number of attribute authorities have participated.
Figure 3 consolidates these stages, indicating the specific inputs, outputs, and data flow between modules.
4.1. Cryptographic Algorithm and Workflow
This section describes the algorithms used in the system. Inspired by Chen's matrix-optimization method, we extend it from the integer domain to the polynomial domain and apply the Number-Theoretic Transform (NTT) to accelerate multiplication.
The parameters in Table 2 are chosen to provide a 112-bit security level. At the same time, they are compatible with the previous methods being compared against, thus providing a fair baseline for evaluation. The parameter security is justified by a security analysis of M-LWE and supported by the LWE estimator, striking a trade-off between security level and computational efficiency, with key and ciphertext sizes pruned to be competitive in practice.
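The NTT acceleration mentioned above rests on twisting a negacyclic convolution into pointwise products. The sketch below uses an O(n^2) transform for clarity, with toy parameters (q = 17, n = 4, psi = 2) rather than those of Table 2.

```python
def ntt_negacyclic_mul(a, b, q, n, psi):
    """Multiply a, b in Z_q[x]/(x^n + 1) via the negacyclic NTT.

    psi must be a primitive 2n-th root of unity mod q (so psi^n = -1).
    The O(n^2) transform below is written for clarity; production code
    uses the O(n log n) butterfly version.
    """
    omega = psi * psi % q

    def dft(f):
        return [sum(f[j] * pow(omega, i * j, q) for j in range(n)) % q for i in range(n)]

    def idft(F):
        n_inv = pow(n, q - 2, q)
        return [n_inv * sum(F[j] * pow(omega, -i * j, q) for j in range(n)) % q
                for i in range(n)]

    # Twist by psi^i to turn negacyclic convolution into cyclic convolution.
    a_t = [a[i] * pow(psi, i, q) % q for i in range(n)]
    b_t = [b[i] * pow(psi, i, q) % q for i in range(n)]
    C = [x * y % q for x, y in zip(dft(a_t), dft(b_t))]
    c_t = idft(C)
    psi_inv = pow(psi, q - 2, q)
    return [c_t[i] * pow(psi_inv, i, q) % q for i in range(n)]

# Sanity check in Z_17[x]/(x^4 + 1): x^3 * x should equal -1 = 16 mod 17.
print(ntt_negacyclic_mul([0, 0, 0, 1], [0, 1, 0, 0], 17, 4, 2))  # [16, 0, 0, 0]
```

Here psi = 2 works because 2^4 = 16 = -1 mod 17, making it a primitive 8th root of unity.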
4.1.1. Setup
The system setup process comprises two consecutive steps. GlobalSetup: all attribute authorities collectively publish the common ring parameters and the global parity-check matrix. This phase is executed only once, and the result is written to the blockchain as a genesis transaction so that other parties can later fetch the correct parameters.
LocalSetup_i: each authority generates its private trapdoor and public-key share for the shared modulus q. The system follows a (t, n)-threshold design such that any t or more key shares suffice to recover a user's secret key, while fewer than t key shares provide no information about the global master secret. The details are presented in Algorithm 1. Each authority publishes its tuple on-chain, and the consensus layer ensures persistence and tamper-evident auditability.
Algorithm 1. Setup_i — executed by the i-th AA
Require: Security parameters, parity-check matrix
Ensure: Public parameters and master secret key share for this AA
1: ▹ Uniform random matrix
2:
3: ▹ Discrete Gaussian sampling for trapdoor
4:
5:
6:
7: Generate signature
8: Publish on-chain (transaction type: PublishPK) for Key-Aggregator contract audit
9: return
4.1.2. Extract
Each authority, as referenced in Algorithm 2, runs the usual sampling routine with its own trapdoor to produce a key share, where every component is obtained via the preimage sampler and satisfies the required linear relation. The hash of the share and the AA's signature are written to the blockchain.
The user gathers at least t valid shares, verifies their signatures, and linearly combines the vectors with Lagrange coefficients to reconstruct a single private key. With fewer than t shares, no information about the master secret leaks, so the threshold scheme preserves both privacy and robustness.
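The Lagrange combination step can be sketched as follows, assuming Shamir-style shares over a prime field; the polynomial, evaluation points, and modulus are toy values.

```python
def lagrange_at_zero(points, q):
    """Combine t shares (x_i, y_i) of a degree-(t-1) polynomial into f(0) mod q.

    When the user aggregates t valid AA key shares, each coefficient of
    the private key is recovered by the same linear combination.
    """
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % q            # numerator of L_i(0)
                den = den * (xi - xj) % q        # denominator of L_i(0)
        secret = (secret + yi * num * pow(den, q - 2, q)) % q  # den^{-1} via Fermat
    return secret

q = 7919  # a prime modulus, hypothetical for the demo
# f(x) = 42 + 5x + 3x^2  (threshold t = 3): shares at x = 1, 2, 3
shares = [(1, 50), (2, 64), (3, 84)]
assert lagrange_at_zero(shares, q) == 42
```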
The SampleLeft algorithm, as referenced in Algorithm 3, is used to sample short vectors from a specific lattice. This sampling algorithm, as detailed in [13], is employed in practice to generate users' private keys. Specifically, SampleLeft receives a full-rank matrix, a second matrix, a short basis of the associated q-ary lattice, a Gaussian parameter, and a target vector as inputs. It then outputs a vector that is statistically close to the discrete Gaussian distribution over the solution coset defined by the concatenated matrix and the target.
Algorithm 2. Extract_i
Require: Public key share, master secret key share, attribute set S
Ensure: Private key share
1: for each attribute in S do
2:
3:
4: end for
5: Generate signature
6: Emit on-chain event ShareCreated
7: return
Algorithm 3. SampleLeft
Require: The public key, the master secret key, and a target vector
Ensure: A vector satisfying the stated linear relation
1: Sample ▹ Sample Gaussian noise
2: Compute
3: Sample
4: return
In our setting, we need a short user secret key satisfying a linear relation over the concatenated matrix. Because we hold a trapdoor basis only for the first matrix, sampling directly for the entire concatenated matrix is impractical. To alleviate this, we split the task into two steps: first, SamplePre uses the trapdoor to create a short vector for the single matrix under a linear constraint; then, SampleLeft makes a single call to SamplePre to handle the remaining workload. The sampling problem for the concatenated matrix thus yields a small secret key in the sense required by the security proof.
4.1.3. Encryption
The encryption process begins by constructing an access-policy matrix from the attribute set, with each row representing a separate attribute. A secret vector is sampled uniformly, and a noise vector is created whose entries are drawn from a centered binomial distribution. The encryption process is detailed in Algorithm 4. The scheme works over a polynomial ring and provides both security and efficiency.
The encryption steps are as follows:
Algorithm 4. Encrypt
Require: Public key, policy matrix for the access policy, message
Ensure: Ciphertext
1: Randomly select
2: for i = 1 to l do
3: Randomly select from the centered binomial distribution
4: end for
5: Set
6: Generate
7: Compute
8: for i = 1 to k do
9: Let be the i-th row of the policy matrix
10: Compute
11: Sample
12: Compute
13: end for
14: Set
15: The blockchain logs the encryption event for transparency
16: return
In conclusion, the ciphertext consists of the base component together with one component per policy row, i = 1, ..., k. This construction guarantees that only users whose attributes satisfy the access policy can correctly decrypt the message. The computations detailed here match those referenced in the correctness proof, maintaining coherence between the algorithm's specification and its theoretical treatment.
Complexity Analysis. The encryption operation is dominated by random vector sampling and matrix–vector multiplication, specifically the products computed for each policy row. When the policy matrix has k rows, both the time and space complexity grow linearly in k. This guarantees efficient encryption when the number of attributes in the access policy remains moderate.
4.1.4. Decryption
Decryption takes as input the ciphertext C and the user's private key, where S is the user's attribute set. The user first confirms whether the attributes they hold satisfy the access policy embedded in the ciphertext. If so, the user reconstructs the secret using the LSSS structure and polynomial-ring operations, as detailed in Algorithm 5.
Algorithm 5. Decryption
Require: Ciphertext and user's private key
Ensure: The original message M
1: Attribute Verification: check whether the user's attribute set S satisfies the access policy.
2: Intermediate Computation: for each corresponding attribute i in S, compute the intermediate value.
3: Secret Reconstruction: compute the linear combination whose coefficients come from the first row of the inverse of the candidate submatrix.
4: Message Recovery: recover the message by undoing the encryption offset and decoding.
5: Blockchain Logging: the blockchain logs the decryption event to ensure integrity and auditability.
6: return M
Remarks:
Attribute verification: Step 1 ensures that only users whose attribute sets satisfy the access policy can decrypt the ciphertext.
Computation of intermediate values: In Step 2, the user's private key is combined with the corresponding ciphertext components to derive the intermediate values needed for secret recovery.
Secret reconstruction: In Step 3, the secret value is evaluated as a linear combination of these intermediate values with the reconstruction vector derived from the access policy matrix.
Message recovery: Step 4 extracts the original message by undoing the encryption offset and decoding according to the chosen message encoding.
Blockchain auditing: Step 5 records an on-chain decryption transaction, providing transparency and traceability for data access.
This decryption process aligns with the computational steps outlined in the correctness proof, thereby ensuring both theoretical consistency and practical clarity.
Complexity Analysis. The decryption algorithm consists of checking whether the user's attribute set satisfies the access policy, followed by the matrix–vector multiplications and additions needed to recover the plaintext. For an access policy of k rows, the leading operations have time and space complexity linear in k times the cost of a single ring operation. Hence decryption is efficient and scales linearly with the number of attributes in the access policy.
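The linear reconstruction of Step 3, taking the first row of the inverse of the satisfied sub-matrix, can be illustrated with 2 × 2 scalar arithmetic modulo q. This is a simplified sketch: field scalars stand in for ring elements, and inv_mod, reconstruct, and the modulus are our own illustrative names.

```python
q = 3329  # illustrative prime modulus


def inv_mod(a, m):
    # Modular inverse via Python's three-argument pow (m prime here).
    return pow(a, -1, m)


def reconstruct(W_I, shares):
    """Recover the LSSS secret for a 2x2 sub-matrix W_I:
    w = first row of W_I^{-1}, then <w, shares> mod q."""
    (a, b), (c, d) = W_I
    det_inv = inv_mod((a * d - b * c) % q, q)
    w = [(d * det_inv) % q, (-b * det_inv) % q]  # first row of inverse
    return sum(wi * si for wi, si in zip(w, shares)) % q


# Shares of s = 77 under the AND-policy matrix [[1, 1], [0, -1]]:
# share_1 = s + r, share_2 = -r for masking randomness r.
r = 555
assert reconstruct([[1, 1], [0, -1]], [(77 + r) % q, (-r) % q]) == 77
```

Because w · W_I = (1, 0) by construction, the combination of shares isolates exactly the first coordinate of the sharing vector, which is the secret.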
4.1.5. Verification
The verification function is employed to determine whether a user is authorized to access specific data. Implemented as a smart contract on the blockchain, this function checks whether the user’s attributes satisfy the system-defined access policy (Algorithm 6).
Algorithm 6. Verify(PK, S, C)
Require: The public key PK, the user's attribute set S, and the ciphertext C
Ensure: Access verification result (True/False)
1: Verify that the user's attributes in S satisfy the access policy defined in C
2: if the user's attributes match the policy then
3:   return True
4: else
5:   return False
6: end if
7: The blockchain logs the verification process to ensure transparency and provide an immutable audit trail
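The attribute check at the heart of Algorithm 6 can be sketched off-chain as a plain Boolean policy test. The OR-of-ANDs encoding below is a hypothetical illustration; in the scheme itself the policy lives in the LSSS matrix embedded in the ciphertext, and the check runs inside a smart contract.

```python
def verify(policy_clauses, user_attrs):
    """Return True iff the user's attribute set satisfies at least one
    AND-clause of the policy (OR-of-ANDs encoding)."""
    S = set(user_attrs)
    return any(clause <= S for clause in policy_clauses)


# Example policy: (KYC-level-2 AND jurisdiction-EU) OR credit-rating-A.
policy = [{"KYC-level-2", "jurisdiction-EU"}, {"credit-rating-A"}]
assert verify(policy, {"KYC-level-2", "jurisdiction-EU", "wallet-ok"})
assert not verify(policy, {"KYC-level-2"})
```

An on-chain implementation would emit an event for each call so the verification outcome is part of the immutable audit trail, as Step 7 requires.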
4.2. Correctness
Theorem 1. The proposed ABE scheme can decrypt correctly with overwhelming probability.
Proof. With the optimization introduced by the minimum policy matrix algorithm, both the encryption and decryption processes are streamlined.
For the encryption process, the ciphertext components are computed as in Algorithm 4, with the policy matrix carrying the attribute control information. The final ciphertext, denoted C, is transmitted as C = (C_0, C_1, ..., C_k). For the decryption process, the user combines each private-key share with the corresponding ciphertext component to obtain the intermediate values. Next, the decrypted value is computed as the linear reconstruction of these intermediate values. Using the SampleLeft algorithm, we obtain the private-key shares; subsequently, each intermediate value is computed from its share and ciphertext component.
Note that multiple leaves of the policy tree may reference the same attribute, so the size of the policy matrix does not grow linearly with the number of distinct attributes. For lattice-based ABE, the ciphertext size grows linearly with the number of rows of the policy matrix, i.e., the number of leaf nodes in the policy tree.
In the optimized method, the ciphertext is a k-dimensional polynomial vector and the policy matrix is a polynomial matrix whose policy tree has k leaves. A larger policy matrix inflates the ciphertext size but does not influence the private-key size.
The correctness of the algorithm is ensured by the private-key generation process, in which each private key is constructed from the user's attributes. The left sampling function guarantees that each key share is a valid short preimage under the corresponding public matrix. During decryption, if the user's attributes satisfy the access policy, the LSSS reconstruction vector recombines the shares; since the linear secret sharing scheme places the secret in the first coordinate, the secret value is the first element of the reconstructed vector. Subtracting the reconstructed mask from the message component leaves the encoded message plus a small decryption noise term, so the difference between the computed value and the original encoding is exactly this noise.
When the noise stays below the decoding threshold, the algorithm correctly decrypts the ciphertext and recovers the original message M. To guarantee correctness, the overall noise term, including e and the reconstruction noise, must remain sufficiently small to satisfy this condition with overwhelming probability.
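The decoding condition matches the standard LWE-style rounding argument, which can be written out as follows (a generic sketch with illustrative symbols, assuming the conventional ⌊q/2⌋ message encoding):

```latex
\[
C_0 - \mu \;=\; \Big\lfloor \tfrac{q}{2} \Big\rfloor \cdot M \;+\; e_{\mathrm{total}},
\qquad
e_{\mathrm{total}} \;=\; e_0 - \sum_{i \in I} w_i\, e_i ,
\]
\[
\text{so rounding each coefficient of } C_0 - \mu \text{ recovers } M
\text{ whenever } \|e_{\mathrm{total}}\|_{\infty} < \tfrac{q}{4}.
\]
```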
□
4.3. Threat Model and Security Proofs
The system features n independent attribute authorities. A linear secret-sharing protocol generates master-key shares such that any t shares reconstruct the global master key, while fewer than t shares reveal no information about it. A user collects t signed local keys and executes the key-combination algorithm to obtain her complete secret key.
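The threshold property can be illustrated with plain Shamir sharing over a prime field. This is a simplified stand-in: the scheme's shares are ring elements produced jointly by the authorities, and P, share, and reconstruct are our illustrative names.

```python
import random

P = 2**61 - 1  # an illustrative prime field modulus


def share(secret, t, n):
    """Split secret into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]

    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

    return [(x, f(x)) for x in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the given t shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret


msk = 123456789
shares = share(msk, t=3, n=5)
assert reconstruct(shares[:3]) == msk   # any 3 of 5 shares suffice
assert reconstruct(shares[1:4]) == msk
```

Any t − 1 shares are consistent with every possible secret, which is exactly the property the threat model relies on when the adversary corrupts fewer than t authorities.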
Theorem 2 (CCA Security).
Let the ring dimension be n and let the policy matrix contain k rows. Under the hardness of the M–LWE problem, any PPT adversary that corrupts at most t − 1 authorities has an advantage of at most (k + 1) · Adv_{M–LWE}(B) + negl(n), where B is a reduction to M–LWE.
Proof (Game Hopping).
We follow the standard sequence-of-games technique and highlight only the modifications required by the threshold multi-authority setting.
Game 0 (Real World). This is the normal IND-CCA experiment for our scheme:
- (i)
Setup. All n authorities jointly run the DKG protocol, publishing the common parameters and holding their respective master-key shares.
- (ii)
Corruption. The adversary chooses a subset of at most t − 1 authorities and obtains their internal states, including their master-key shares.
- (iii)
Queries/Challenge. The adversary interacts with the key-generation and decryption oracles exactly as in the single-authority game, subject to the usual rule that it may not obtain a key that satisfies the challenge policy.
Game 1 (Embedding an M–LWE instance). Relative to Game 0 we make two changes.
- (1)
Simulation of honest authorities. The reduction randomly selects one honest authority index, keeps its trapdoor, and uses SampleLeft to answer all key-extraction and partial-decryption queries with a single genuine share. For every other honest authority it returns a placeholder ⊥; because the public reconstruction vector recombines the shares linearly, one correct share is sufficient, so the view of the adversary is unchanged.
- (2)
Challenge ciphertext. The reduction receives an M–LWE sample (A, b), where b is either a genuine M–LWE value (real) or uniform. It embeds b into the challenge ciphertext component u exactly as in the single-authority proof. If b is uniform, the ciphertext is statistically independent of the challenge bit.
Since the only difference from Game 0 is the possible replacement of u, the gap between the adversary's advantages in Games 0 and 1 is bounded by the M–LWE distinguishing advantage.
Games 2 to k + 1 (Hybrid Games). Let the policy matrix have rows W_1, ..., W_k. Starting from Game 1, we gradually replace, for i = 1, ..., k, the ciphertext component derived from W_i by a uniformly random vector of the same dimension. Each step changes the adversary's view by at most the M–LWE advantage, so after k steps we obtain Game k + 1, in which the challenge ciphertext is independent of the plaintext and the adversary's advantage is 0.
Final bound. By a telescoping sum over the k + 1 game transitions, the adversary's total advantage is bounded by (k + 1) times the M–LWE distinguishing advantage plus a negligible term.
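The telescoping argument can be written explicitly in standard game-hopping notation (a sketch; Adv and the game indices are the usual conventions rather than symbols defined in the extract):

```latex
\[
\mathrm{Adv}(\mathcal{A})
= \big|\Pr[\mathrm{G}_0 = 1] - \Pr[\mathrm{G}_{k+1} = 1]\big|
\le \sum_{i=0}^{k} \big|\Pr[\mathrm{G}_i = 1] - \Pr[\mathrm{G}_{i+1} = 1]\big|
\le (k+1)\cdot \mathrm{Adv}^{\mathrm{M\text{-}LWE}}(\mathcal{B}) + \mathrm{negl}(n).
\]
```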
Correctness. The threshold mechanism modifies only the way private keys are assembled; the encryption, decryption and noise analysis remain unchanged, hence correctness follows as in the single-authority case.
□
6. Future Work and Conclusions
Future Work: Future research will proceed along three lines. First, we will tighten the current conservative parameter set through a finer-grained concrete security analysis and implementation-level optimizations such as NTT and SIMD sampling, aiming to reduce ciphertext size and decryption latency by 30–50% at the same security level. Second, we intend to develop an on-chain revocation and attribute-evolution mechanism that amortizes gas costs per batch and, via key-insulated methods, avoids re-encrypting historical ciphertexts. Finally, we will generalize the policy matrix to support richer logical operations (for example, NOT gates), enabling more complex access structures over larger attribute sets.
Conclusions: In this paper, we presented an efficient outsourced decryption scheme for ABE based on a matrix optimization technique. The technique generalizes matrix optimization from the integer domain to the polynomial domain and accelerates the algorithms with the Number Theoretic Transform (NTT). Instantiated with concrete parameters, this yields more efficient decryption in the standard model without sacrificing selective security. However, the proposed scheme does not yet handle highly complex access policies. In future work, we will improve the efficiency of the matrix decomposition algorithms, support more flexible access structures, and develop efficient revocation mechanisms.