1. Introduction
In the rapidly advancing information age, cloud computing has become an integral platform for consolidating vast computing resources. Many organizations and individuals choose to upload large volumes of data to cloud servers for processing and storage. However, while cloud computing provides extensive computational resources, it also faces numerous security challenges. When users transfer their data to cloud servers, they relinquish control over it, and both cloud service providers and malicious actors may exploit the cloud to steal users’ sensitive information.
Fully homomorphic encryption (FHE) [1] addresses this privacy challenge by enabling additions and multiplications to be performed directly on encrypted data, without requiring decryption. In other words, FHE can perform a series of operations on ciphertexts that directly correspond to additions and multiplications on the respective plaintexts. However, traditional (single-key) FHE [2,3,4,5] is limited to performing homomorphic operations on ciphertexts encrypted with a single key, and therefore does not support multi-party collaborative computation scenarios.
To address this problem, several generations of schemes have been proposed that allow joint computation on encrypted data from multiple users without the need for decryption. These schemes support flexible function selection and participant configuration, while ensuring that decryption of computation results can only be performed jointly. These schemes are commonly referred to as multi-party fully homomorphic encryption (MFHE) schemes [6,7]. Over the years, various multi-key and threshold variants of MFHE schemes have been proposed [8,9,10,11,12,13,14], which serve as alternatives to traditional multi-party computation (MPC) protocols in cloud-based systems. These are particularly useful in collaborative data scenarios, such as distributed training of machine learning models [15] and biomedical analysis [16,17]. For users, compact schemes are advantageous, as the number of interactions is not affected by the function being executed or the number of parties involved, and the ciphertext size remains constant regardless of the circuit depth of the function, thereby avoiding significant computational and communication costs.
Although using passively secure threshold multi-party fully homomorphic encryption (TMFHE) schemes is effective and practical in most scenarios, these schemes are extensions of traditional FHE schemes to multiple users, and can, at most, operate among honest-but-curious parties. Therefore, there is a possibility of privacy leakage when facing malicious adversaries. In TMFHE schemes, when a set of parties corrupted by a malicious adversary or a dishonest majority outputs incorrect partial decryption results, it can cause decryption failures or even leakage of private information, thereby compromising the correctness and security of the schemes.
To ensure security against malicious adversaries in TMFHE schemes, a natural and feasible approach is to introduce commitment and non-interactive zero-knowledge (NIZK) proof systems based on the Fiat–Shamir transform [18]. However, constructing an actively secure TMFHE scheme presents several difficult challenges. On the one hand, the workflow of TMFHE schemes involves uncertainty in the parties and their numbers, and there are constraints such as the inability to ensure consistency of secrets between different protocols. This means that the verification mechanism based on commitments and NIZK must be designed specifically for the characteristics of the TMFHE scheme, without affecting the execution of the primary encryption scheme. On the other hand, the variable ciphertext dimensions and complex computation processes of current FHE schemes introduce additional computation when NIZKs are incorporated into the multi-party execution of the prescribed protocols. Therefore, there is an urgent need for an efficient TMFHE scheme that simultaneously supports flexible threshold structures, robust security against adversaries, and verifiability of protocol execution. Although homomorphic encryption has found increasingly widespread use in multi-party computation, existing TMFHE schemes generally suffer from structural limitations, incomplete security models, and inadequate verifiability. For instance, some schemes only support joint computation by all parties (e.g., N-out-of-N structures), which makes them unsuitable for real-world scenarios involving offline or dynamically changing participants. Others lack rigorous constraints on malicious adversaries and fail to provide effective verification mechanisms. While some improvement directions have been proposed in the literature (e.g., Boneh et al. [19] designed a universal thresholdizer without a complete verification mechanism; Chatel et al. [20] enhanced adversarial resistance but did not support general threshold structures; and Mouchet et al. [21] optimized key generation without addressing verifiability), these approaches still leave important issues unresolved. In light of these limitations, this work aims to construct an efficient TMFHE scheme that supports flexible threshold access structures and incorporates robust security verification mechanisms.
The verifiable threshold multi-party fully homomorphic encryption (VTMFHE) scheme proposed in this paper is specifically designed to address the aforementioned challenges. Distinct from existing works, our approach introduces improvements and innovations in two key aspects. First, we enhance share resharing techniques to allow their seamless integration into LWE-based homomorphic encryption schemes. By combining Shamir secret sharing and share resharing, our scheme enables additive sharing of secret key shares, thereby supporting arbitrary t-out-of-N threshold structures and accommodating offline or dynamically changing participants. This significantly improves the flexibility and practicality of the protocol. Second, we introduce commitment and non-interactive zero-knowledge (NIZK) techniques to design verifiable distributed key generation and threshold decryption protocols. These mechanisms ensure the structural validity and computational consistency of key shares and partial decryption results generated by each party, thereby strengthening the scheme’s resilience against malicious adversaries. Compared with prior studies, our scheme achieves better generality in structure and functionality, while maintaining a balanced trade-off among security, verifiability, and efficiency. Therefore, this work holds both theoretical significance and practical value for advancing secure and efficient multi-party homomorphic computation protocols.
1.1. Related Work
Bendlin and Damgård [22] considered a setting where parties obtain secret shares via pseudo-random secret sharing techniques, and provided a threshold variant of a CPA-secure encryption scheme based on Regev's work [23]. This led to a non-interactive key generation process; however, it is non-compact, as it necessitates a unique key for each possible subset of corrupted participants. Xie et al. [24] proposed a threshold CCA-secure public key encryption (PKE) scheme based on the Learning with Errors (LWE) problem [23], which incorporates lossy trapdoor functions. However, in this scheme, the size of the ciphertext and public key grows at least linearly with the number of servers. Owing to this expansion factor, both of the aforementioned schemes are impractical when many parties are involved.
López-Alt et al. [6] were the first to propose a multi-key fully homomorphic encryption (MKFHE) scheme, based on the NTRU algorithm, after which MKFHE research entered a phase of rapid development. In LWE-based MKFHE schemes [7,9,10,11], each participating party encrypts its input using an FHE scheme and broadcasts the ciphertext to the others. Subsequently, each participant homomorphically evaluates the received ciphertexts from other parties and engages in a round of a distributed decryption protocol. If a trusted authority thresholdizes the encryption keys during the Setup phase and distributes them to the parties, the keys can then be additively shared. However, these schemes all require a ciphertext extension step, which significantly increases the ciphertext size and makes the computational cost of homomorphic evaluation expensive. Among these, Asharov et al. [8], in their pioneering work on LWE-based multi-party homomorphic encryption, demonstrated the construction of a three-round MPC protocol using a common reference string. They achieved semi-malicious security based on the LWE problem, and further achieved full malicious security through NIZK. However, they did not specifically detail the secret sharing scheme employed to achieve the t-out-of-N access structure. Additionally, in practice, directly reconstructing the secret key shares is undesirable, because it reveals the shares of the failed parties to the other parties.
Boneh et al. [19] proposed a t-out-of-N universal thresholdizer based on the LWE problem, enabling threshold versions of arbitrary multi-party signature schemes. This scheme uses a TMFHE scheme constructed with secret sharing techniques to encrypt the keys of the underlying signature scheme, and uses threshold decryption to recover the signature. However, in more robust asynchronous settings, participants cannot determine which other parties are online when generating decryption shares, leading to increased complexity and performance overhead. Agrawal et al. [25] applied this method [19] to the signature scheme by Ducas et al. [26], and demonstrated how to handle adaptive corruption. Mouchet et al. [27] proposed a multi-party homomorphic encryption scheme based on the RLWE problem, which achieves linear communication complexity and non-interactive circuit evaluation through multi-party protocols such as key switching, relinearization, and public key switching. However, the scheme only provides passive security under the semi-honest model. Similarly, Mouchet et al. [21] constructed a TMFHE scheme based on the Ring-LWE problem, using share resharing techniques, which has a t-out-of-N access structure. They proposed reconstructing the secret key shares in the threshold decryption protocol without needing a trusted dealer to distribute the shares. Their scheme is therefore simpler and more compact, with good performance, but its security holds only against static, passive (semi-honest) adversaries.
Baum et al. [28] provided a compact threshold cryptosystem with commitment and zero-knowledge proof protocols. However, their approach is limited by the server's ability to perform only a predetermined number of online non-interactive decryption operations, after which an offline interactive step is required. Subsequently, Baum et al. [29] proposed an efficient homomorphic commitment scheme based on lattice hardness assumptions, and constructed the corresponding zero-knowledge proof of knowledge for opening. The scheme achieves significant compression in the sizes of commitments and proofs under the assumptions of computational hiding and binding, and it supports commitments to vectors with larger norms. This efficiency comes at the cost of the commitment size growing linearly with the dimension of the input vector. Building on Baum et al.'s scheme and its improvements, Yang et al. [30] developed an efficient zero-knowledge proof system for lattice-based matrix-vector relations, which supports the standard definition of soundness and achieves negligible soundness error. Rotaru et al. [31] proposed a distributed key generation protocol, based on the BGV scheme [3], that provides active security; however, it is limited to the full-threshold scenario. Gentry et al. [32] built verifiable secret sharing based on the LWE problem. Nevertheless, in their construction, the zero-knowledge proofs rely on the discrete logarithm assumption, and are therefore not quantum-resistant. Viand et al. [33] constructed a threshold FHE scheme using general zero-knowledge proofs.
Gür et al. [34] constructed an MFHE scheme with threshold decryption based on the BGV scheme [3], and implemented a verifiable distributed key generation protocol using hash functions, commitments, and NIZK. On this basis, they instantiated a variant of the signature scheme developed by Ducas et al. [26]. Chatel et al. [20] proposed a practically deployable multi-party fully homomorphic encryption scheme secure against malicious adversaries, which supports zero-knowledge verification of the correctness of key generation, decryption, and bootstrapping. However, their scheme does not support threshold settings and lacks flexibility in the configuration of participating parties. Aranha et al. [35] achieved malicious security for Bendlin and Damgård's scheme [22] using commitments and NIZK, and constructed a verifiable distributed decryption protocol based on the RLWE problem. Although Bendlin and Damgård's scheme [22] supports an arbitrary threshold, Aranha et al.'s scheme [35] only supports the full-threshold setting, resulting in reduced system reliability and flexibility, as well as higher communication overhead, compared with arbitrary threshold settings.
1.2. Our Contributions
In this work, we present the first LWE-based verifiable threshold multi-party fully homomorphic encryption (VTMFHE) scheme with active security, addressing the aforementioned challenges. Our scheme builds upon the GSW encryption scheme proposed by Gentry et al. [4]. We achieve security against malicious adversaries by integrating Shamir secret sharing, share resharing, commitments, and NIZK proofs. The main contributions are summarized as follows:
To address the high complexity and overhead in asynchronous settings of existing TMFHE schemes, as well as the rapid growth of smudging noise during decryption, we constructed an LWE-based TMFHE scheme using Shamir secret sharing. This scheme is compact, as the size of the ciphertexts or the evaluated ciphertexts remains independent of the evaluation functions and the number of parties, N. Furthermore, we propose an improved share resharing technique that transforms secret key shares in the scheme into additive shares. Each party only needs to perform the resharing protocol once to store a constant state, enabling any subset of at least t parties to locally execute precomputed additive key recombination during decryption. This supports flexible t-out-of-N threshold structures and allows non-interactive key reconstruction in the decryption phase. As a result, partial decryption results from such a subset can be combined in a single round of communication to produce the decryption of the evaluated ciphertext, enabling some parties to remain offline or to generate new keys for dynamically changing participant sets in different computation tasks.
To address the issue of existing TMFHE schemes lacking active security, we constructed a verifiable distributed key generation encryption algorithm and threshold decryption protocols using commitment and NIZK, demonstrating how to integrate these components into our proposed VTMFHE scheme. During the implementation of the scheme, the following operations can be effectively verified for correct protocol execution and parameter usage: (a) generation of the encryption public key, (b) generation of ciphertexts by each party using the encryption algorithm, and (c) computation of partial decryption results by parties using the correct secret key shares in threshold decryption. Finally, we have demonstrated, through hybrid experiments, that our scheme is secure against malicious adversaries.
1.3. Organization
In Section 2, we present the necessary preliminaries for this work, including the foundational security assumption (the LWE problem), Shamir secret sharing, and fully homomorphic encryption, as well as the definitions and properties of commitment schemes and non-interactive zero-knowledge (NIZK) proofs. Section 3 introduces the core building blocks of our scheme, including the basic structure of TMFHE, the share resharing technique, and the oracle query. Section 4 provides a detailed description of the complete construction of the proposed VTMFHE scheme, covering the verifiable distributed key generation protocol, the distributed threshold decryption protocol, and the NP relations of the NIZK proofs. Section 5 offers a systematic analysis of the scheme's correctness, security, and performance, and presents a simulation-based hybrid-experiment proof of active security. Finally, Section 6 concludes the paper by summarizing the main contributions and discussing potential directions for future research.
2. Preliminaries
This section first defines the notation and preliminaries for the construction modules used in our scheme.
Section 3 and Section 4 then demonstrate how these components are integrated into the overall scheme.
Notation. In this work, we use bold lowercase letters to denote vectors (e.g., $\mathbf{x}$), and bold uppercase letters to denote matrices (e.g., $\mathbf{A}$). For two vectors $\mathbf{x}$ and $\mathbf{y}$ of the same dimension, we use $\langle \mathbf{x}, \mathbf{y} \rangle$ to denote their inner product. Let $\lambda$ denote the security parameter, with other parameters depending on $\lambda$, such as $n = n(\lambda)$. If a parameter $\epsilon$ is asymptotically smaller than the inverse of every polynomial in $\lambda$, i.e., $\epsilon = \lambda^{-\omega(1)}$, we say that the parameter is negligible, denoted as $\operatorname{negl}(\lambda)$. For a prime power $q$, we define integers in the range $(-q/2, q/2]$ as elements of the finite field $\mathbb{F}_q$. For any set $S$, we let $|S|$ denote the cardinality of $S$, and for any element $x$, $|x|$ denotes the absolute value of $x$. We use $\|\cdot\|_{\infty}$ to denote the infinity norm of a vector, which is the maximum absolute value of its elements; a similar definition applies to the infinity norm of a matrix. For any $N \in \mathbb{N}$, we use $[N]$ to denote the set $\{1, \ldots, N\}$. We use $x \leftarrow \mathcal{D}$ to denote that $x$ is randomly sampled from the probability distribution $\mathcal{D}$, and $x \xleftarrow{\$} S$ to denote that $x$ is uniformly sampled from the set $S$. For two distributions $\mathcal{D}_1$ and $\mathcal{D}_2$, we denote that they are statistically indistinguishable and computationally indistinguishable by $\mathcal{D}_1 \approx_s \mathcal{D}_2$ and $\mathcal{D}_1 \approx_c \mathcal{D}_2$, respectively.
Definition 1 (B-bounded Distribution). An ensemble of distributions $\{\chi_\lambda\}_{\lambda \in \mathbb{N}}$ over the integers is called a B-bounded distribution for an integer bound $B = B(\lambda)$ if it satisfies the following condition: $\Pr_{e \leftarrow \chi_\lambda}[\,|e| > B\,] \le \operatorname{negl}(\lambda)$.
The technique of "smudging out", proposed by Asharov et al. [8], renders ciphertext noise statistically hidden by overwhelming it with freshly sampled noise from a distribution with much larger variance. We depend on the following lemma, which describes introducing large noise to "smudge out", or conceal, the small noise.
Lemma 1 ([8]). Let $B_1$ and $B_2$ be positive integers, with $e_1 \in [-B_1, B_1]$ a fixed integer. Let the integer $e_2 \in [-B_2, B_2]$ be uniformly randomly sampled. Then, provided that $B_1 / B_2 = \operatorname{negl}(\lambda)$, the distribution of $e_2 + e_1$ is statistically indistinguishable from the distribution of $e_2$, i.e., $e_2 + e_1 \approx_s e_2$.
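As a toy illustration of the smudging lemma (not part of the scheme itself), the following Python snippet computes the exact statistical distance between a uniform smudging distribution $U[-B_2, B_2]$ and its shift by a fixed small noise term $e_1$; the distance works out to $|e_1|/(2B_2+1)$, which becomes negligible once $B_2/B_1$ is superpolynomial:

```python
from fractions import Fraction

def smudging_distance(e1, B2):
    """Exact statistical distance between U[-B2, B2] and e1 + U[-B2, B2]."""
    p = Fraction(1, 2 * B2 + 1)                     # mass of each support point
    lo, hi = -B2 - abs(e1), B2 + abs(e1)            # union of both supports
    d1 = {x: (p if -B2 <= x <= B2 else 0) for x in range(lo, hi + 1)}
    d2 = {x: (p if -B2 <= x - e1 <= B2 else 0) for x in range(lo, hi + 1)}
    return sum(abs(d1[x] - d2[x]) for x in d1) / 2

# A fixed noise of size 3 smudged by uniform noise of size 50:
print(smudging_distance(3, 50))    # 3/101 = |e1| / (2*B2 + 1)
```

Only the non-overlapping tails of the two distributions contribute to the distance, which is why growing the smudging bound $B_2$ shrinks it linearly.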
For the short preimage matrix $\mathbf{G}$ proposed by Micciancio and Peikert [36], multiplying by $\mathbf{G}$ can be viewed as executing the $\mathsf{BitDecomp}^{-1}$ function in the GSW scheme, while the function $\mathbf{G}^{-1}(\cdot)$ corresponds to the $\mathsf{BitDecomp}$ function. The low-level details of this implementation can be omitted (see [7,9] for more details). Note that $\mathbf{G}^{-1}(\cdot)$ is not a matrix, but an efficiently computable function.
Lemma 2 ([36]). For any $n, q \in \mathbb{N}$ and $m = n\lceil \log_2 q \rceil$, there exists a fixed and efficiently computable matrix $\mathbf{G} \in \mathbb{Z}_q^{n \times m}$ and an efficiently computable deterministic short preimage function $\mathbf{G}^{-1}(\cdot)$ satisfying the following property: for any $k \in \mathbb{N}$, given an input matrix $\mathbf{M} \in \mathbb{Z}_q^{n \times k}$, the function outputs a bit-matrix $\mathbf{G}^{-1}(\mathbf{M}) \in \{0,1\}^{m \times k}$, such that $\mathbf{G} \cdot \mathbf{G}^{-1}(\mathbf{M}) = \mathbf{M} \pmod{q}$.
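For illustration only (the parameters below are hypothetical toy choices, not those of our scheme), the following Python sketch builds the standard power-of-two instantiation of $\mathbf{G}$ together with a bit-decomposition $\mathbf{G}^{-1}$, and checks the identity from Lemma 2:

```python
import numpy as np

rng = np.random.default_rng(0)
q = 2**10                      # toy modulus
ell = (q - 1).bit_length()     # bits needed per Z_q entry
n, k = 4, 3
m = n * ell

# Gadget matrix: each row i carries the powers of two (1, 2, ..., 2^(ell-1)).
g = 2 ** np.arange(ell, dtype=np.int64)
G = np.kron(np.eye(n, dtype=np.int64), g)              # n x m

def G_inv(M):
    """Bit-decompose each entry of an n x k matrix into a {0,1} matrix of size m x k."""
    M = np.asarray(M, dtype=np.int64) % q
    bits = (M[:, None, :] >> np.arange(ell)[None, :, None]) & 1   # n x ell x k
    return bits.reshape(m, -1)

M = rng.integers(0, q, (n, k))
assert (G @ G_inv(M) % q == M).all()                   # G * G^{-1}(M) = M (mod q)
```

Because $\mathbf{G}^{-1}(\mathbf{M})$ has binary entries, multiplying by it keeps noise growth small, which is exactly why it appears in GSW-style decryption and homomorphic multiplication.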
2.1. The Learning with Errors (LWE) Problem
Our scheme is based on the hardness of the LWE problem. The LWE problem, introduced by Regev [23], is a well-known lattice-based hard problem that provides security against quantum attacks and allows for efficient implementation. In lattice-based public key cryptosystems, solving the average-case LWE problem is at least as hard as solving certain well-known worst-case lattice approximation problems.
Definition 2 (LWE Problem). Given a security parameter $\lambda$, let the lattice dimension $n = n(\lambda)$ and the prime modulus $q = q(\lambda)$ be integers, and let $\chi = \chi(\lambda)$ be an error distribution over $\mathbb{Z}$. Let $\mathbf{A} \xleftarrow{\$} \mathbb{Z}_q^{m \times n}$ be a uniformly sampled matrix, $m = \operatorname{poly}(\lambda)$. The goal of the $\mathsf{LWE}_{n,q,\chi}$ problem is to distinguish between the two distributions as follows:
$\mathcal{D}_0$: Uniformly randomly sample $\mathbf{s} \xleftarrow{\$} \mathbb{Z}_q^{n}$ and $\mathbf{e} \leftarrow \chi^{m}$, and let $\mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \bmod q$. Then output $(\mathbf{A}, \mathbf{b})$.
$\mathcal{D}_1$: Uniformly randomly sample $\mathbf{b} \xleftarrow{\$} \mathbb{Z}_q^{m}$. Then output $(\mathbf{A}, \mathbf{b})$.
Then, for any probabilistic polynomial-time (PPT) adversary $\mathcal{A}$, the probability of solving the $\mathsf{LWE}_{n,q,\chi}$ problem (i.e., distinguishing between these two distributions) is negligible: $\left|\Pr[\mathcal{A}(\mathcal{D}_0) = 1] - \Pr[\mathcal{A}(\mathcal{D}_1) = 1]\right| \le \operatorname{negl}(\lambda)$.
Lemma 3 ([23]). Regev [23] proposed a reduction of the LWE problem from the worst case to the average case. For any integer lattice dimension $n$, integer prime $q$, and bound $B$, there exists a B-bounded distribution $\chi$ that can be efficiently sampled. Therefore, if a quantum algorithm exists that can efficiently solve the $\mathsf{LWE}_{n,q,\chi}$ problem, then a quantum algorithm also exists that can efficiently solve the $\tilde{O}(nq/B)$-approximate worst-case problems SIVP and GapSVP on the lattice.
2.2. Shamir Secret Sharing
In this section, we will introduce the standard definitions of the threshold access structures and the Shamir secret sharing used in this work.
Definition 3 (Threshold Access Structure). Let the set of parties be $P = \{P_1, \ldots, P_N\}$. A threshold access structure (TAS), denoted as $\mathbb{A}_t$, is defined such that for any subset of parties $S \subseteq P$, $S \in \mathbb{A}_t$ if and only if $|S| \ge t$. Furthermore, let $S^{*}$ be a subset of the party set $P$. If $S^{*} \notin \mathbb{A}_t$, but for any party $P_i \in P \setminus S^{*}$ it holds that $S^{*} \cup \{P_i\} \in \mathbb{A}_t$, then $S^{*}$ is called a maximal invalid set. We categorize the TAS as the collection of access structures $\mathbb{A}_t$ for all $t \in [N]$.
Next, we review the Shamir secret sharing scheme [37], which uses polynomial interpolation over a finite field to implement a t-out-of-N threshold access structure. In Shamir's scheme, a secret $s$ from the secret space $\mathbb{F}_q$ is divided into $N$ shares, ensuring that at least $t$ shares are needed to reconstruct the secret $s$. For simplicity, we consider reconstructing the secret using the first $t$ shares.
Definition 4 (Shamir Secret Sharing Scheme). Let the set of parties be $P = \{P_1, \ldots, P_N\}$, and let TAS represent a class of threshold access structures on $P$. A secret space $\mathbb{F}_q$ exists. Then, Shamir secret sharing is a tuple of PPT algorithms Shamir = (S.Setup, S.Share, S.Combine), defined as follows:
S.Setup: Determine the threshold access structure $\mathbb{A}_t$ for the set of parties $P$, and ensure the parties concur on a field $\mathbb{F}_q$. Each party $P_i$ is associated with a non-zero element $\alpha_i \in \mathbb{F}_q$, and if $i \ne j$, then $\alpha_i \ne \alpha_j$. This sequence of public points is referred to as the Shamir public points.
S.Share$(s, \mathbb{A}_t)$: To share a secret $s \in \mathbb{F}_q$ among $N$ parties and ensure that at least $t$ shares are required to recover the secret, the algorithm uniformly samples elements $a_1, \ldots, a_{t-1} \xleftarrow{\$} \mathbb{F}_q$. It then defines a polynomial of degree $t-1$: $f(x) = s + a_1 x + \cdots + a_{t-1} x^{t-1}$. Thus, $f(0) = s$. The value $s_i = f(\alpha_i)$ is then sent to party $P_i$.
S.Combine$(\{s_i\}_{i \in S})$: Any subset $S \in \mathbb{A}_t$ of shares from at least $t$ parties can reconstruct the secret $s$, while no subset of fewer than $t$ parties can learn any information about the secret $s$. The secret can be computed as $s = \sum_{i \in S} \lambda_i^{S} s_i$, where $\lambda_i^{S} = \prod_{j \in S, j \ne i} \frac{\alpha_j}{\alpha_j - \alpha_i}$ are the Lagrange coefficients.
For a prime q, the Shamir secret sharing scheme satisfies the following properties of correctness and privacy:
Definition 5 (Correctness). For any valid subset of parties $S \in \mathbb{A}_t$ and any secret $s \in \mathbb{F}_q$, the Shamir secret sharing scheme can correctly recover the secret $s$: $\Pr[\text{S.Combine}(\{s_i\}_{i \in S}) = s] = 1$.
Definition 6 (Privacy). For any subset of parties $S^{*} \notin \mathbb{A}_t$ and secrets $s, s' \in \mathbb{F}_q$, where $s \ne s'$, if $|S^{*}| < t$, then the two distributions of shares $\{s_i\}_{i \in S^{*}}$ and $\{s'_i\}_{i \in S^{*}}$ are indistinguishable, i.e., $\{s_i\}_{i \in S^{*}} \approx_s \{s'_i\}_{i \in S^{*}}$.
In our scheme, secret sharing is consistently applied to a vector $\mathbf{s}$ (the secret key) rather than a single scalar $s$. We can directly apply the S.Share function to each component of the vector using independently sampled random values, resulting in vector shares $\mathbf{s}_i$. Similarly, by executing the S.Combine function, we can reconstruct the secret as a linear combination of the shares $\mathbf{s}_i$, using the same Lagrange coefficients as for a single scalar element. This approach ensures that the privacy property of the scheme is preserved.
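For concreteness, here is a minimal Python sketch of S.Share and S.Combine over a prime field; the choice of Shamir public points $\alpha_i = i$ is a hypothetical simplification for illustration, not our actual instantiation:

```python
import random

q = 2**31 - 1                      # toy prime modulus

def s_share(s, t, N):
    """Split secret s into N shares; any t of them reconstruct s."""
    coeffs = [s] + [random.randrange(q) for _ in range(t - 1)]   # f(0) = s, deg t-1
    return [(i, sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q)
            for i in range(1, N + 1)]                            # public points alpha_i = i

def s_combine(shares):
    """Lagrange interpolation at x = 0 over F_q."""
    s = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * xj % q               # lambda_i = prod alpha_j / (alpha_j - alpha_i)
                den = den * (xj - xi) % q
        s = (s + yi * num * pow(den, q - 2, q)) % q
    return s

secret = 123456789
shares = s_share(secret, t=3, N=5)
assert s_combine(shares[:3]) == secret           # any 3 shares suffice
assert s_combine(shares[2:5]) == secret
```

For a vector secret, one simply runs `s_share` on each coordinate independently (with the same public points) and combines coordinatewise with the same Lagrange coefficients, exactly as described above.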
2.3. Fully Homomorphic Encryption
Our scheme is built upon FHE schemes based on the standard LWE problem, such as the GSW scheme [4]. FHE enables computation directly on encrypted data, without the need for decryption.
Definition 7 (FHE Scheme). FHE = (F.Setup, F.SecKeyGen, F.PubKeyGen, F.Enc, F.Eval, F.Dec) is a tuple of PPT algorithms, defined as follows:
F.Setup: Given a security parameter $\lambda$ and a circuit depth bound $d$, compute the lattice dimension $n = n(\lambda, d)$, the modulus $q = q(\lambda, d)$, and the error distribution $\chi = \chi(\lambda, d)$ as a B-bounded distribution, where the bound is $B = B(\lambda, d)$. Let $\ell = \lceil \log_2 q \rceil$, and $m = O(n \log q)$. Output $\mathit{params} = (n, q, \chi, B, m)$; use the $\mathit{params}$ as implicit input for other algorithms.
F.SecKeyGen: Given the $\mathit{params}$, uniformly sample $\bar{\mathbf{s}} \xleftarrow{\$} \mathbb{Z}_q^{n-1}$, and output the secret key $\mathbf{sk} = \mathbf{t}$, where $\mathbf{t} = (-\bar{\mathbf{s}}, 1) \in \mathbb{Z}_q^{n}$.
F.PubKeyGen: Given the secret key $\mathbf{t}$, uniformly sample the matrix $\mathbf{B} \xleftarrow{\$} \mathbb{Z}_q^{(n-1) \times m}$ and the error vector $\mathbf{e} \leftarrow \chi^{m}$. Let $\mathbf{b} = \bar{\mathbf{s}}\mathbf{B} + \mathbf{e} \bmod q$. Output the public key $\mathbf{pk} = \mathbf{A}$, where $\mathbf{A} = \left(\begin{smallmatrix}\mathbf{B}\\ \mathbf{b}\end{smallmatrix}\right) \in \mathbb{Z}_q^{n \times m}$, so that $\mathbf{t}\mathbf{A} = \mathbf{e} \bmod q$.
F.Enc$(\mathbf{pk}, \mu)$: Given the public key $\mathbf{A}$ and the message $\mu \in \{0, 1\}$, uniformly sample a random matrix $\mathbf{R} \xleftarrow{\$} \{0,1\}^{m \times m}$. Output the ciphertext $\mathbf{C} = \mathbf{A}\mathbf{R} + \mu\mathbf{G} \bmod q$ corresponding to the message $\mu$, where $\mathbf{C} \in \mathbb{Z}_q^{n \times m}$ and $\mathbf{G}$ is the short preimage matrix (Lemma 2).
F.Eval: Given a circuit $f$ with a depth bound $d$ and a set of ciphertexts $\mathbf{C}_1, \ldots, \mathbf{C}_k$, output the evaluated ciphertext $\hat{\mathbf{C}} = f(\mathbf{C}_1, \ldots, \mathbf{C}_k)$.
F.Dec: Given the secret key $\mathbf{t}$ and the ciphertext $\mathbf{C}$, compute $v = \mathbf{t}\,\mathbf{C}\,\mathbf{G}^{-1}(\mathbf{w}^{T}) \bmod q$, where $\mathbf{w} = (0, \ldots, 0, \lceil q/2 \rceil) \in \mathbb{Z}_q^{n}$. Output $\mu = \left|\left\lceil \frac{v}{q/2} \right\rfloor\right|$ as the decryption result, where $v = \mu\lceil q/2 \rceil + e$ for some small noise $e$.
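To make these algorithms concrete, here is a toy, deliberately insecure Python sketch of a GSW-style scheme following the structure above. The key format $\mathbf{t} = (-\bar{\mathbf{s}}, 1)$ and all parameters are illustrative assumptions, not the exact instantiation used in our scheme:

```python
import numpy as np

rng = np.random.default_rng(7)
n, q = 6, 2**20                        # toy, insecure parameters
ell = (q - 1).bit_length()             # 20 bits per entry
m = n * ell

g = 2 ** np.arange(ell, dtype=np.int64)
G = np.kron(np.eye(n, dtype=np.int64), g)              # gadget matrix (Lemma 2)

def G_inv(M):
    """Bit decomposition: binary m x k matrix with G @ G_inv(M) = M (mod q)."""
    M = np.asarray(M, dtype=np.int64) % q
    bits = (M[:, None, :] >> np.arange(ell)[None, :, None]) & 1
    return bits.reshape(m, -1)

# SecKeyGen / PubKeyGen: t = (-s_bar, 1) and A = (B; s_bar B + e), so t A = e (small).
s_bar = rng.integers(0, q, n - 1)
B = rng.integers(0, q, (n - 1, m))
e = rng.integers(-1, 2, m)                             # tiny bounded error
A = np.vstack([B, (s_bar @ B + e) % q])
t = np.concatenate([-s_bar % q, [1]])

def enc(mu):
    """C = A R + mu G with a fresh binary randomness matrix R."""
    R = rng.integers(0, 2, (m, m))
    return (A @ R + mu * G) % q

def dec(C):
    """v = t C G^{-1}(w) with w = (0, ..., 0, q/2); round to the nearest multiple of q/2."""
    w = np.zeros(n, dtype=np.int64)
    w[-1] = q // 2
    v = int(t @ (C @ G_inv(w[:, None]) % q).ravel()) % q
    return 1 if q // 4 < v < 3 * q // 4 else 0

C0, C1 = enc(0), enc(1)
assert dec(C0) == 0 and dec(C1) == 1
assert dec((C0 + C1) % q) == 1                         # homomorphic addition
assert dec(C1 @ G_inv(C1) % q) == 1                    # homomorphic multiplication
```

Homomorphic multiplication as $\mathbf{C}_1 \cdot \mathbf{G}^{-1}(\mathbf{C}_2)$ keeps the noise growth moderate because $\mathbf{G}^{-1}(\mathbf{C}_2)$ is binary; with these toy bounds the accumulated noise stays well below $q/4$, so decryption decodes correctly.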
The properties of compactness, correctness, and security, as defined below, should be satisfied by an FHE scheme.
Definition 8 (Compactness). An FHE scheme satisfies compactness if there exists a polynomial function $\operatorname{poly}(\cdot)$ such that, for arbitrary security parameters $\lambda$, a circuit $f$ with circuit depth bound $d$, and messages $\mu_1, \ldots, \mu_k \in \{0,1\}$, the following holds: for $\mathit{params} \leftarrow \text{F.Setup}(1^{\lambda}, 1^{d})$, $\mathbf{sk} \leftarrow \text{F.SecKeyGen}(\mathit{params})$, $\mathbf{pk} \leftarrow \text{F.PubKeyGen}(\mathbf{sk})$, and $\mathbf{C}_i \leftarrow \text{F.Enc}(\mathbf{pk}, \mu_i)$, the size of the evaluated ciphertext $\hat{\mathbf{C}} = \text{F.Eval}(f, \mathbf{C}_1, \ldots, \mathbf{C}_k)$ is only polynomially related to $\lambda$ and $d$, and is at most $\operatorname{poly}(\lambda, d)$, i.e., $|\hat{\mathbf{C}}| \le \operatorname{poly}(\lambda, d)$.
Definition 9 (Correctness). For all security parameters $\lambda$, a circuit $f$ with circuit depth bound $d$, and messages $\mu_1, \ldots, \mu_k \in \{0,1\}$, the FHE scheme satisfies the following: for $\mathit{params} \leftarrow \text{F.Setup}(1^{\lambda}, 1^{d})$, $\mathbf{sk} \leftarrow \text{F.SecKeyGen}(\mathit{params})$, $\mathbf{pk} \leftarrow \text{F.PubKeyGen}(\mathbf{sk})$, $\mathbf{C}_i \leftarrow \text{F.Enc}(\mathbf{pk}, \mu_i)$, and $\hat{\mathbf{C}} = \text{F.Eval}(f, \mathbf{C}_1, \ldots, \mathbf{C}_k)$, if the infinity norm of the accumulated noise in $\hat{\mathbf{C}}$ remains below $q/4$, then the ciphertext can be correctly decrypted. Thus, the FHE scheme satisfies correctness, i.e., $\Pr[\text{F.Dec}(\mathbf{sk}, \hat{\mathbf{C}}) = f(\mu_1, \ldots, \mu_k)] = 1 - \operatorname{negl}(\lambda)$.
Definition 10 (IND-CPA Security). For all security parameters $\lambda$ and a circuit depth bound $d$, an FHE scheme that relies on the $\mathsf{LWE}_{n,q,\chi}$ problem satisfies the following: for $\mathit{params} \leftarrow \text{F.Setup}(1^{\lambda}, 1^{d})$, $\mathbf{sk} \leftarrow \text{F.SecKeyGen}(\mathit{params})$, and $\mathbf{pk} \leftarrow \text{F.PubKeyGen}(\mathbf{sk})$, if any adversary $\mathcal{A}$ can distinguish between encryptions of the two messages $\mu \in \{0,1\}$ only with negligible probability, then the FHE scheme satisfies IND-CPA security, i.e., $\left|\Pr[\mathcal{A}(\mathbf{pk}, \text{F.Enc}(\mathbf{pk}, 0)) = 1] - \Pr[\mathcal{A}(\mathbf{pk}, \text{F.Enc}(\mathbf{pk}, 1)) = 1]\right| \le \operatorname{negl}(\lambda)$.
2.4. Commitments
We begin by reviewing the definition of commitment schemes. The first commitment scheme was introduced by Blum [38]. In this work, commitment schemes are employed to ensure the verifiability of modules such as key generation and distributed decryption. They bind secret key shares and intermediate computation results, providing a secure basis for subsequent non-interactive zero-knowledge (NIZK) proofs. Our scheme incorporates ideas from the lattice-based commitment scheme by Baum et al. [29], which constructs a structured commitment matrix allowing a commitment to a vector message to be expressed as a linear combination $\mathbf{c} = \mathbf{A}\mathbf{r} + \left(\begin{smallmatrix}\mathbf{0}\\ \mathbf{x}\end{smallmatrix}\right)$, where $\mathbf{x}$ is the committed message, $\mathbf{r}$ is the randomness, and the blocks $\mathbf{A}_1, \mathbf{A}_2$ of $\mathbf{A}$ are public matrices. Compared to earlier SIS-based commitment schemes, this approach removes the restriction that messages must be small-norm vectors, and instead allows for committing to arbitrary vectors, making it more suitable for binding intermediate states in multi-party encryption and zero-knowledge protocols. Moreover, this commitment scheme supports a "relaxed opening" strategy, which permits the use of a non-zero polynomial coefficient $f$ to scale both the commitment and the message simultaneously during validity verification, thus enhancing compatibility with subsequent NIZK proofs.
Definition 11 (Commitment Scheme). Let $n$, $\ell$, and $k$ be positive integers that are polynomial in the security parameter $\lambda$. Let $\beta$ be a small positive integer, such that $\beta \ll q$. The public parameters of the commitment scheme include a matrix $\mathbf{A} = \left(\begin{smallmatrix}\mathbf{A}_1\\ \mathbf{A}_2\end{smallmatrix}\right)$, structured as $\mathbf{A}_1 = [\,\mathbf{I}_n \,\|\, \mathbf{A}_1'\,]$ and $\mathbf{A}_2 = [\,\mathbf{0} \,\|\, \mathbf{I}_\ell \,\|\, \mathbf{A}_2'\,]$, where $\mathbf{I}_n$ and $\mathbf{I}_\ell$ are identity matrices, and $\mathbf{A}_1'$ and $\mathbf{A}_2'$ are random matrices sampled from $\mathbb{F}_q^{n \times (k-n)}$ and $\mathbb{F}_q^{\ell \times (k-n-\ell)}$, respectively. Let the above public parameters (denoted as $\mathit{pp}$) be provided as implicit input. The message space is defined over a set $\mathcal{M}$, and the randomness is drawn from a set $\mathcal{R}$ according to the distribution $\mathcal{D}$. A commitment scheme is defined as a tuple of two PPT algorithms C = (C.Com, C.Open), as follows:
C.Com: Given a message $\mathbf{x} \in \mathcal{M}$, sample a random value $\mathbf{r} \leftarrow \mathcal{D}$. Construct the concatenated vector $\left(\begin{smallmatrix}\mathbf{0}\\ \mathbf{x}\end{smallmatrix}\right)$, compute the commitment value $\mathbf{c} = \mathbf{A}\mathbf{r} + \left(\begin{smallmatrix}\mathbf{0}\\ \mathbf{x}\end{smallmatrix}\right) \bmod q$, and output $(\mathbf{c}, \mathbf{r})$.
C.Open: Given a commitment $\mathbf{c}$, the committed message $\mathbf{x}$, and the randomness $\mathbf{r}$ used in the commitment, the verifier reconstructs the vector $\mathbf{A}\mathbf{r} + \left(\begin{smallmatrix}\mathbf{0}\\ \mathbf{x}\end{smallmatrix}\right) \bmod q$ and checks whether it equals $\mathbf{c}$ and whether $\mathbf{r}$ is bounded by a specified norm. If the check passes, the verifier outputs 1, indicating that the commitment is valid; otherwise, it outputs 0.
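As an illustration only, here is a toy sketch of this two-part commitment structure over $\mathbb{Z}_q$, using plain integer matrices instead of the polynomial rings of [29]; all parameters are hypothetical and insecure, and the sketch only demonstrates the opening algebra, not the hiding argument:

```python
import numpy as np

rng = np.random.default_rng(3)
q, n_rows, ell, k, beta = 2**13 - 1, 8, 4, 32, 1   # toy, insecure parameters

A1 = rng.integers(0, q, (n_rows, k))   # public matrices (random parts of the structured A)
A2 = rng.integers(0, q, (ell, k))

def c_com(x):
    """Commit to x in Z_q^ell with small randomness r (entries in [-beta, beta])."""
    r = rng.integers(-beta, beta + 1, k)
    c1 = A1 @ r % q
    c2 = (A2 @ r + x) % q
    return (c1, c2), r

def c_open(com, x, r):
    """Check the norm bound on r and recompute both parts of the commitment."""
    c1, c2 = com
    return bool((np.abs(r) <= beta).all()
                and (A1 @ r % q == c1).all()
                and ((A2 @ r + x) % q == c2).all())

x = rng.integers(0, q, ell)
com, r = c_com(x)
assert c_open(com, x, r)                      # honest opening verifies
assert not c_open(com, (x + 1) % q, r)        # opening to a different message fails
```

Note that the message $\mathbf{x}$ enters only the second block, which is why arbitrary (large-norm) messages can be committed while the norm bound applies solely to the randomness $\mathbf{r}$.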
The properties of correctness, binding, and hiding, as defined below, should be satisfied by the commitment scheme:
Definition 12 (Correctness). A commitment scheme satisfies correctness if the C.Open algorithm always accepts an honestly generated commitment for any $\mathbf{x} \in \mathcal{M}$, i.e., $\Pr[\text{C.Open}(\mathbf{c}, \mathbf{x}, \mathbf{r}) = 1 : (\mathbf{c}, \mathbf{r}) \leftarrow \text{C.Com}(\mathbf{x})] = 1$.
Definition 13 (Binding). A commitment scheme satisfies the binding property if any PPT adversary $\mathcal{A}$ cannot find two valid random values $\mathbf{r}, \mathbf{r}'$ that open the same valid commitment $\mathbf{c}$ to two different messages $\mathbf{x} \ne \mathbf{x}'$, i.e., $\Pr[\text{C.Open}(\mathbf{c}, \mathbf{x}, \mathbf{r}) = 1 \wedge \text{C.Open}(\mathbf{c}, \mathbf{x}', \mathbf{r}') = 1 \wedge \mathbf{x} \ne \mathbf{x}'] \le \operatorname{negl}(\lambda)$.
Definition 14 (Hiding). A commitment scheme satisfies the hiding property if any PPT adversary $\mathcal{A}$, given two messages $\mathbf{x}_0$ and $\mathbf{x}_1$, cannot distinguish which message a given commitment corresponds to (the commitments are indistinguishable), i.e., $\left|\Pr[\mathcal{A}(\mathbf{c}_0) = 1] - \Pr[\mathcal{A}(\mathbf{c}_1) = 1]\right| \le \operatorname{negl}(\lambda)$, where $(\mathbf{c}_b, \mathbf{r}_b) \leftarrow \text{C.Com}(\mathbf{x}_b)$ for $b \in \{0, 1\}$.
2.5. Non-Interactive Zero-Knowledge Proof
A zero-knowledge (ZK) proof system allows a prover to convince a verifier of the truth of a certain statement without disclosing any private knowledge. In this work, we adopt the standard definition of non-interactive zero-knowledge (NIZK) proofs [39] to ensure the verifiability of key components. This ensures that, even in the presence of malicious adversaries, the involved parties follow the protocol correctly, the data satisfy structural and bounded constraints, and no secrets or randomness are leaked. An NIZK proof can be derived from an interactive ZK proof system using the Fiat–Shamir transform [18], which allows messages as inputs and enables non-interactive proof generation under a common reference string, thereby improving efficiency. We adopt a non-interactive variant of the zero-knowledge proof by Yang et al. [30], which supports the verification of witnesses with complex constraints in the random oracle model (ROM). In this construction, the prover commits to the witness and auxiliary vectors, hashes an initial transcript to generate a challenge, and computes response values as linear combinations. The final proof consists of the responses along with the auxiliary commitments. The verifier checks the witness's validity with respect to the relation by verifying linear and multiplicative constraints, and by ensuring commitment consistency and norm bounds. This construction supports vector arithmetic verification with polynomial complexity. We now provide the definition of the non-interactive zero-knowledge proof required in this work:
Definition 15 (NIZK). Consider $\mathcal{R}$ to be an NP relation on the language $\mathcal{L}$, with $x$ being an element of $\mathcal{L}$. A witness $w$ exists such that $(x, w) \in \mathcal{R}$ forms the statement–witness set for the NP relation. NIZK = (N.Setup, N.Prove, N.Verify) is a tuple of polynomial-time algorithms, defined as follows:
N.Setup: Given the public parameters , output the common reference string , which includes the public matrices , , and , two auxiliary vectors and , and a set consisting of triplets of the form . These elements satisfy the following conditions: , and for all , it holds that .
N.Prove: Given and the , the prover commits to the witness using the commitment scheme to obtain . The following steps are then repeated for rounds with fixed parameters:
Generate a random vector and compute ;
Construct the intermediate multiplication vectors and ;
Use the public matrix to generate commitments , and for , and , respectively;
Sample uniformly at random and construct the auxiliary commitment .
For each , compute , and then compute , , and , where , and are the randomness used during the commitment procedures. Finally, output the proof .
N.Verify: Given the element to be proven, the proof , and the , for each , perform the following checks:
;
;
;
;
.
If all the above checks pass and the norms of all vectors are within the prescribed bounds, then the proof is valid and the verifier outputs 1; otherwise, it outputs 0.
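To make the commit, challenge, and response steps above concrete, the following toy sketch applies the Fiat-Shamir transform to a Schnorr sigma protocol over a small prime-order group. The group parameters, hash format, and variable names are illustrative assumptions only; the lattice-based construction of Yang et al. commits to vectors and checks norm bounds, which this discrete-log toy does not reproduce.

```python
import hashlib
import secrets

# Toy Schnorr group (assumption; far too small to be secure):
# P = 467 is prime, Q = 233 divides P - 1, and G = 4 has order Q.
P, Q, G = 467, 233, 4

def challenge(*vals):
    # Hash the initial transcript to derive the challenge (Fiat-Shamir).
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(x):
    """Non-interactive proof of knowledge of x with y = G^x mod P."""
    y = pow(G, x, P)
    r = secrets.randbelow(Q)      # auxiliary randomness
    t = pow(G, r, P)              # commitment
    c = challenge(G, y, t)        # challenge from hashed transcript
    z = (r + c * x) % Q           # response: linear combination
    return y, (t, z)

def verify(y, proof):
    t, z = proof
    c = challenge(G, y, t)        # recompute the challenge
    return pow(G, z, P) == (t * pow(y, c, P)) % P
```

The verifier's single check G^z = t * y^c mirrors, in miniature, the linear-constraint checks of the lattice construction: a valid proof passes, and any tampering with the response fails.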
An NIZK system should satisfy the following properties:
Definition 16 (Completeness). An NIZK protocol satisfies completeness if, for any statement–witness pair in the relation, the verifier accepts, with overwhelming probability, the proof generated by the prover using the N.Prove algorithm, where the probability is taken over the randomness used in the NIZK algorithms, i.e.:
Definition 17 (Knowledge Soundness). An NIZK protocol satisfies knowledge soundness if, for any malicious prover that runs N.Prove without knowing a witness yet convinces the honest verifier to accept the statement with non-negligible probability, there exists an extractor that, through black-box access to the prover, can extract a valid witness for the statement, i.e.:
Definition 18 (Honest-Verifier Zero-Knowledge). An NIZK protocol satisfies the honest-verifier zero-knowledge property if, for any PPT adversary , a simulator algorithm exists that outputs a proof whose transcript is computationally indistinguishable from the transcript produced by the honest prover, i.e.:
Our scheme relies on three main NP relations:
for proving the correctness of the distributed key generation protocol, for the encryption algorithm, and for the threshold decryption process. The detailed definitions of these relations will be formally presented in Section 4.
3. Module Construction
This section introduces the core construction modules of our scheme, which are later integrated in Section 4. In Section 3.1, we outline the basic ideas of the TMFHE scheme and define the required properties. Section 3.2 introduces the share resharing mechanism, and Section 3.3 describes the oracle query method and the error-handling strategy.
3.1. Threshold Multi-Party Fully Homomorphic Encryption
We define a multi-party FHE scheme based on the LWE problem that incorporates the distributed threshold decryption protocol (TMFHE). The scheme is similar to the lattice-based threshold scheme by Agrawal et al. [25] and the threshold PKE scheme proposed by Aranha et al. [35]. In our construction, the decryption secret key is distributed to all parties via secret sharing, and all parties use the same public key for encryption. This enables ciphertexts generated by each party to be directly evaluated homomorphically, without requiring complex and costly ciphertext extension steps (compared to MW16 [9]). The final distributed threshold decryption can be achieved through the threshold properties of secret reconstruction in secret sharing.
Definition 19 (TMFHE). Let be a set of parties, and let be the bound for the smudging noise (Lemma 1). TMFHE = (T.Setup, T.KeyGen, T.Enc, T.Eval, T.PartDec, T.FinDec) is a tuple of algorithms, defined as follows:
T.Setup: Given the security parameter , the circuit depth bound , and the threshold access structure , run . Execute S.Setup to associate each party with a non-zero element , and incorporate the sequence of public points into . Use and as public parameters, serving as implicit inputs for other algorithms.
T.KeyGen: Given the public parameters, run the algorithms and to output a key pair . Then, run the algorithm to distribute a secret key share to each party .
T.Enc: Each party uses the public key and its own message as input, executes the algorithm, and then outputs the ciphertext .
T.Eval: Given a circuit with a depth bound and a set of ciphertexts , execute the algorithm and output the evaluated ciphertext .
T.PartDec: Each party in the subset takes its own secret key share and the ciphertext as input, computes , and then outputs the partial decryption result , where is the added “smudging” noise vector. Note that this operation on is linear.
T.FinDec: Given a set of partial decryption results and the corresponding ciphertext , select any partial decryption results from the parties and reconstruct the plaintext using the combination algorithm of the secret sharing scheme: where , and output the plaintext .
Similarly to FHE schemes, the TMFHE scheme needs to satisfy the properties of compactness, correctness, and semantic security, as defined below.
Definition 20 (Compactness). If there exists a polynomial function such that for all security parameters , circuit depth bound , threshold access structure , number of parties , and a circuit with depth at most , and , the TMFHE scheme satisfies the following: for , , , , the size of the ciphertext is only polynomially related to and , and does not depend on the number of parties . The size of any partial decryption result is only polynomially related to , , and , i.e., and ; therefore, the TMFHE scheme satisfies compactness.
Definition 21 (Correctness). For all security parameters , circuit depth bound , threshold access structure , number of parties , a circuit with depth at most , and , the TMFHE scheme satisfies the following: for , , , and , the output of the combination algorithm executed in the final decryption algorithm T.FinDec is given by . If the infinity norm condition holds, then the ciphertext can be correctly decrypted in a threshold manner. Thus, the TMFHE scheme satisfies correctness, i.e.:where is a set of partial decryption results, and .
Definition 22 (IND-CPA Security). The TMFHE scheme’s semantic security is directly derived from the FHE scheme (Definition 10). For all security parameters , circuit depth bound , threshold access structure , and number of parties , the TMFHE scheme satisfies the following: for and , the ciphertext , and the ciphertexts and are computationally indistinguishable . This implies that for any PPT adversary , the probability of distinguishing between these two ciphertexts is negligible. Thus, the TMFHE scheme is considered IND-CPA-secure, i.e.: Next, we extend the key generation method of this scheme to a distributed key generation protocol, which is an interactive process jointly run by all parties. In this protocol, all parties use the same matrix sampled from the common reference string (CRS) to generate the public key, with noise . In subsequent chapters, our construction follows a similar approach. We begin with a brief description of the modified TMFHE scheme:
T.SecKeyGen: Each party samples and sets the secret key share .
T.PubKeyGen: Each party samples a matrix and computes . Then, . The public key is set as .
T.PartDec: Each party inputs its own secret key share and the ciphertext , computes , and then outputs the partial decryption result , where .
T.FinDec: Input all the partial decryption results and the ciphertext , and then combine them to compute the plaintext result .
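The additive structure of the modified scheme can be sanity-checked with a toy symmetric LWE-style sketch: the key is a sum of per-party shares, each partial decryption adds small smudging noise, and final decryption subtracts the summed partials and rounds. All parameters, noise ranges, and function names below are toy assumptions, not the paper's parameterization.

```python
import secrets

Q = 2**16          # toy modulus (assumption)
N_DIM = 8          # toy dimension (assumption)
PARTIES = 3

# Each party holds an additive secret key share; the combined key is the sum.
s_shares = [[secrets.randbelow(Q) for _ in range(N_DIM)]
            for _ in range(PARTIES)]
s = [sum(col) % Q for col in zip(*s_shares)]

def encrypt(m):
    """Encrypt a bit m under the combined key s."""
    a = [secrets.randbelow(Q) for _ in range(N_DIM)]
    e = secrets.randbelow(5) - 2                     # small noise in [-2, 2]
    b = (sum(x * y for x, y in zip(a, s)) + e + m * (Q // 2)) % Q
    return a, b

def part_dec(a, share):
    """T.PartDec analogue: inner product plus smudging noise."""
    e_sm = secrets.randbelow(9) - 4                  # smudging noise in [-4, 4]
    return (sum(x * y for x, y in zip(a, share)) + e_sm) % Q

def fin_dec(b, partials):
    """T.FinDec analogue: subtract summed partials and round."""
    v = (b - sum(partials)) % Q
    return 1 if Q // 4 < v < 3 * Q // 4 else 0
```

Because the partials are simply summed, the smudging noise enters the result unscaled, which is the property the share resharing of Section 3.2 aims to preserve in the threshold setting.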
In the basic TMFHE scheme, when each party executes the T.PartDec algorithm, the smudging noise (Lemma 1) is added to the partial decryption result. This step prevents the leakage of information about the secret key share through the simple inner product of and the public value . This converts the simple computation into a difficult problem, where . However, ensuring that this added noise does not impact the correctness of the decryption is crucial. During the reconstruction of the secret from the partial decryption results, the noise is multiplied by the Lagrange coefficients, and the smudging noise term becomes . This can cause the noise to grow too quickly, affecting the scheme’s correct decryption and reducing its practicality, i.e.:
3.2. Share Resharing
In this scheme, we employ an “ideal secret key” generation process, where each party independently samples its own ideal secret key share . The decryption key shares are generated in the subsequent distributed key generation protocol, which necessitates secure channels between parties. We note that the ideal secret key satisfies the property , meaning that complete knowledge of is required for the final ciphertext decryption. This restricts the scheme to an N-out-of-N access structure, preventing support for arbitrary threshold settings. To address this limitation, we propose share resharing to support a t-out-of-N threshold access structure.
We mitigate the rapid growth of smudging noise in partial decryption by applying share resharing techniques in the TMFHE scheme. This approach avoids the method used by Boneh et al. [19], which scales to eliminate denominators by treating Lagrange coefficients as rational numbers, thereby violating compactness requirements, since the size of the ciphertext is no longer polynomial in the number of parties , but rather a polynomial of . In the share resharing process, each party interactively runs the protocol to achieve the ideal secret key additive sharing for the TMFHE scheme. This allows any subset of at least parties to reconstruct the secret during decryption (even if they differ from those in the key generation phase), while requiring each party to maintain a constant-size state.
Share resharing techniques leverage ideal secret keys and the linearity of resharing schemes to achieve more compact and communication-efficient schemes. Additionally, share resharing techniques allow for generating keys based on the set of parties currently online or flexibly updating keys for new computations. Specifically, let the set of parties be , and let be a threshold access structure on , with as the secret sharing space. As mentioned in Definition 6 of Section 2.2, each party uniformly samples a Shamir public point and distributes the secret shares of a polynomial of degree , , to party through the S.Share algorithm. The Lagrange coefficient for the secret reconstruction computation is .
Definition 23 (Share Resharing). Share resharing is the process where each party redistributes their own secret key share through a t-out-of-N Shamir secret sharing scheme among themselves. This enables the secret key to be reconstructed in the threshold decryption phase through simple linear addition, thereby avoiding the rapid growth of smudging noise when multiplied by the Lagrange coefficient . The process for each party to run the share resharing protocol is as follows:
RS.Rshare: Given the shared secret key share , the threshold access structure , and the public point sequence of all parties, each party uniformly samples elements . Then, using the S.Share algorithm, computes and sends to party for all .
RS.Receive: Each party receives the secret key shares from the other parties and computes .
RS.Combine: For the subset of parties with , each party computes .
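The three algorithms above can be exercised end to end with Shamir sharing over a prime field. In this sketch each party Shamir-shares its additive key share (RS.Rshare), every party sums the shares it receives into a constant-size state (RS.Receive), and any t online parties scale their states by Lagrange coefficients to obtain additive shares of the ideal key (RS.Combine). The field modulus and the party/threshold counts are toy assumptions.

```python
import secrets

Q = 2**31 - 1                     # toy prime field modulus (assumption)
N, T = 5, 3                       # parties and threshold (assumption)
POINTS = list(range(1, N + 1))    # Shamir public points

def share(secret, t, points):
    # Random degree-(t-1) polynomial with constant term `secret`.
    coeffs = [secret] + [secrets.randbelow(Q) for _ in range(t - 1)]
    return {x: sum(c * pow(x, k, Q) for k, c in enumerate(coeffs)) % Q
            for x in points}

def lagrange(i, subset):
    # Lagrange coefficient for interpolating at 0 from `subset`.
    num = den = 1
    for j in subset:
        if j != i:
            num, den = num * (-j) % Q, den * (i - j) % Q
    return num * pow(den, Q - 2, Q) % Q

# RS.Rshare: every party reshares its additive key share.
sk = [secrets.randbelow(Q) for _ in range(N)]      # ideal key = sum(sk)
outgoing = [share(sk_i, T, POINTS) for sk_i in sk]

# RS.Receive: party j sums everything sent to its point.
state = {j: sum(out[j] for out in outgoing) % Q for j in POINTS}

# RS.Combine: any T online parties form additive shares of the ideal key.
online = [1, 3, 5]
additive = {j: lagrange(j, online) * state[j] % Q for j in online}
assert sum(additive.values()) % Q == sum(sk) % Q
```

The final assertion checks the correctness claim of Definition 23: the scaled states of any qualifying subset sum to the ideal secret key, so threshold decryption needs only a linear addition of partial results.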
In Section 2.2, we thoroughly explained the definition of the Shamir secret sharing scheme and demonstrated its properties. Building on this, we prove that the additive secret key shares obtained through share resharing can meet the correctness requirements for reconstructing the ideal secret key during threshold decryption.
For a subset of parties in the threshold access structure , each party can locally precompute its secret key share and use it to compute partial decryption results during the threshold decryption phase. By summing the partial decryption results, the secret key can be reconstructed. The proof of the computation process is as follows:
3.3. Oracle Query
The share resharing proposed in Section 3.2 requires each party to store a constant-sized state locally, and it must be aggregatable. During the key reconstruction process, each party can precompute and store it locally by running the RS.Receive algorithm after receiving from other parties. After this, any subset of at least parties in the set can use the RS.Combine algorithm to compute the additive share of the secret key , denoted as . This will serve as the secret key share for threshold decryption in the final scheme. In dynamic participant settings, when new parties join the computation, the secret resharing process can be re-executed among all current participants to generate updated secret key shares. If some parties go offline or leave the computation after executing the RS.Receive algorithm, as long as the number of remaining online parties meets the threshold t, the decryption secret key can still be reconstructed by executing the RS.Combine algorithm, without affecting the correctness of the scheme.
To accommodate dynamic settings where the set of participating parties may change across computation rounds or differ between key generation and decryption phases, we provide each party with an oracle query interface to determine the set of currently online parties. The oracle determines the online parties via a round of broadcast communication.
We use to denote querying the set of online parties in the current environment, and to denote the query result given by the oracle. As long as the set of online parties is valid, i.e., , it will be broadcast to all parties as a common result; otherwise, it will return . Note that different stages may involve different parties, requiring fresh queries at each stage. However, after the broadcast query, some parties may crash or go offline, causing the secret key shares at the threshold decryption stage to fall short of the threshold requirement, and thus leading to decryption failure. Therefore, following solutions used in practical applications, the decryption protocol sets a maximum time limit for all parties to return partial decryption results, and stipulates that parties handle a timeout as follows, to ensure successful decryption:
Set a timeout for each party, marking the parties that fail to return partial decryption results within the threshold decryption stage as the failed set .
Excluding the set , re-execute the oracle query. If the number of remaining online parties does not meet the threshold , return ; otherwise, re-execute the threshold decryption.
Before re-executing the threshold decryption stage, add an encryption of 0 to the resulting ciphertext to ensure the security of the decryption process. This prevents the re-decryption from leaking information about the secret key while not changing the decryption result.
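The three timeout-handling steps above amount to a retry loop around the oracle query. The sketch below captures only that control flow; `collect_partials` and `threshold_decrypt` are hypothetical stand-ins for the partial-decryption round and the combination step, and the re-randomization with an encryption of 0 is left to the caller.

```python
def decrypt_with_retry(parties, t, collect_partials, threshold_decrypt,
                       max_rounds=3):
    """Retry threshold decryption, excluding parties that time out."""
    failed = set()
    for _ in range(max_rounds):
        # Re-execute the oracle query, excluding the failed set.
        online = [p for p in parties if p not in failed]
        if len(online) < t:
            return None                    # below threshold: abort
        results, timed_out = collect_partials(online)
        if len(results) >= t:
            return threshold_decrypt(results)
        failed |= timed_out                # mark and exclude, then retry
    return None
```

A bounded number of rounds keeps the protocol from looping forever if parties keep dropping; the bound is an assumption of this sketch, not a requirement stated by the scheme.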
4. Verifiable Threshold Multi-Party Fully Homomorphic Encryption
This section presents our proposed verifiable threshold multi-party fully homomorphic encryption scheme (VTMFHE). Section 4.1 defines a verifiable distributed key generation protocol. Section 4.2 presents the complete construction of the scheme. Section 4.3 describes the threshold decryption process in detail. Finally, Section 4.4 defines the NP relations required for the NIZK proofs used in the scheme.
4.1. Verifiable Distributed Key Generation Protocol
We propose a new verifiable distributed key generation protocol (VDKeyGen) based on the LWE problem and Shamir secret sharing. The protocol employs commitments and NIZK proofs to secure against active attacks from malicious adversaries and to verify both the correctness of secret key shares and the validity of parameters used by the parties. Unlike traditional MFHE key generation methods, our protocol does not distribute a uniformly generated pair of public and secret keys. Instead, each party uniformly randomly samples from the secret key space and generates a locally stored secret key state through mutual share resharing. According to the results of the oracle query, the secret key share is determined through the secret reconstruction algorithm in secret sharing, and the public key for the current round of encryption is computed based on it.
Building on the previous chapters, for the set of parties , the implicit parameters , the threshold access structure , and , the detailed process of the VDKeyGen protocol is as follows:
Each party uniformly randomly samples from the ideal secret key space and commits to using a random value , resulting in . Then, party runs , where . Before sending the shares to other parties , commits to each share using a random value , resulting in . In this step, each commitment generated by the participating parties must be bound to the randomly sampled value used in the current round, ensuring that the committed value cannot be altered or forged afterward. Moreover, the structure of the commitment must satisfy the correctness requirement defined in Definition 12.
Each party broadcasts the verification information and securely sends individually to the corresponding party through a secure channel. Party simultaneously receives shares from all other parties .
After receiving the verification information from other parties, party verifies the consistency of all commitments by running the C.Open algorithm to check whether each share and its randomness satisfy the required constraints. If any commitment fails verification, the corresponding party is marked as invalid, and its share is excluded from key reconstruction and local state generation. Otherwise, proceeds with secret resharing by computing , and stores as the fixed local state for computing the secret key share .
Before starting a round of computation, the public key for encryption in this round needs to be generated based on the set of parties that will remain online for this round of computation. First, the oracle is queried to obtain the set of online participants, . If , it indicates that the current set of online parties does not meet the threshold requirement and is considered an invalid set. In this case, Step 4 of the protocol should be re-executed to ensure the correctness of key generation. If , then parties are selected from to execute the algorithm . Then, these parties sample the common matrix and the noise , computing . After computing the key share, each party must generate an NIZK proof to ensure that other parties can verify the structural correctness, sampling validity, and source consistency of the share. The proof must satisfy the public NP relation , and is computed as , where and . The construction details of this relation will be explained in Section 4.4.
Finally, party broadcasts . Upon receiving it, each party verifies the proof . This ensures that each participant’s share satisfies the structural constraints defined in Definition 16. If any verification fails, the corresponding user is regarded as an abnormal participant, and their share is excluded from the public key aggregation process. This guarantees that the public key is formed only from valid data. If the number of shares that pass verification still satisfies the threshold access structure (allowing some public key shares to be invalid), then compute and output as the public key. Otherwise, re-execute Step 4 of the protocol.
Note that the VDKeyGen protocol is an interactive process in which shares are exchanged pairwise between parties via secure channels during the resharing phase. This allows each party to store a state locally, which can be used by a set of parties (at least parties) to compute the secret key share for generating the public key. Thus, any qualified subset of parties can generate new secret key shares for use in later decryption phases. During the execution of the scheme, if new participants dynamically join the computation, the VDKeyGen protocol must be rerun among all current parties to regenerate secret shares and a new public key, ensuring the correctness of homomorphic evaluation and decryption. If some participants go offline after key generation, the protocol proceeds as long as the remaining online parties meet the threshold requirement. Otherwise, execution is halted and resumed only when the number of active participants meets the minimum threshold , as specified in the timeout handling of Section 3.3.
Moreover, to provide active security, we incorporate commitment schemes and NIZK proofs to prevent corrupted parties from submitting forged or incorrect key shares during key generation. These mechanisms verify that each party follows the protocol correctly and that submitted data are well formed. If any commitment or proof fails, the corresponding share is marked invalid, the sender is treated as an abnormal participant, and their data are excluded from key aggregation. If the threshold condition is still met, the protocol proceeds with the remaining valid parties; otherwise, it must be re-executed. This ensures all inputs are structurally bound (via commitments) and relation-consistent (via NIZK) before key reconstruction, preventing malicious parties from submitting invalid inputs or tampering with data, while not relying on a trusted third party.
4.2. Construction of the VTMFHE Scheme
In this section, we define the construction of our VTMFHE scheme. The scheme is based on the LWE problem, and leverages Shamir secret sharing and share resharing to distribute secret key shares among the parties. It ensures that the computation process is honestly and correctly executed by all parties and against active malicious adversaries through commitment and NIZK proof.
Briefly, after the Setup, the scheme executes a verifiable distributed key generation protocol. Then, the current parties encrypt their plaintext using a common public key, send the ciphertexts, and perform homomorphic evaluation. Finally, a distributed threshold decryption process is performed to obtain the final plaintext result.
Let be a set of parties, and let be the bound for the smudging noise. We construct the scheme VTMFHE = (V.Setup, V.KeyGen, V.Enc, V.Eval, V.TDec) as follows:
V.Setup: Given the security parameter , the circuit depth bound , and the threshold access structure , is output as the implicit input for other algorithms, and each party is associated with a non-zero element .
V.KeyGen: The VDKeyGen protocol is executed, where each party stores the used to compute the secret key share locally. Then, based on the results of the oracle query, the public key is computed for encryption.
V.Enc: The parties involved in this round of computation uniformly sample a random matrix and use the public key to encrypt their own message , outputting the ciphertext , where and is the short preimage matrix. To prevent malicious participants from forging ciphertexts or submitting malformed data, each party commits to the encryption-related variables (e.g., and ) and computes an NIZK proof under the encryption NP relation to prove the ciphertext’s correctness and structural validity. The ciphertext and associated verification data are sent together, and must be verified before any homomorphic evaluation. Specifically, each party samples and , then computes and . The proof is computed as , where and . The final output is .
V.Eval: Given a Boolean circuit with a computation depth bound and a set of ciphertexts , before performing homomorphic evaluation, each ciphertext is parsed and verified using the proof via the NIZK verification algorithm: . If any ciphertext fails verification, it is discarded, and its sender is marked as an invalid participant, and thus excluded from this round of homomorphic evaluation. This process effectively isolates incorrect inputs and prevents invalid data from affecting the correctness of the system. If all proofs pass, the evaluation proceeds with , and the resulting ciphertext is returned.
V.TDec: Before decrypting , the oracle query is executed again to obtain the set of online parties . If , is returned; otherwise, the threshold decryption protocol is executed and the decryption result is output.
We provide a detailed analysis of the security and performance of this scheme in Section 5. Before that, we explain the specific process of the distributed threshold decryption protocol in detail.
4.3. Distributed Threshold Decryption
In this section, we will present the complete process of the decryption phase in the VTMFHE scheme. Similarly to the verification mechanism of the VDKeyGen protocol, our scheme’s threshold decryption also achieves active security through commitment and NIZK. We will now define the process of the distributed threshold decryption protocol:
After the homomorphic evaluations are completed, query the current set of online parties . If , this step is re-executed. Otherwise, each party in locally executes the algorithm , and uses as its secret key share for decryption.
The parties in execute the algorithm from TMFHE. In this process, party needs to sample a smudging noise and add it to the partial decryption result to prevent privacy leakage. Additionally, to achieve verifiability of the decryption result, the party needs to sample random values and to commit to and , resulting in and . They then compute the proof according to the NP relation , where and . Together with the dimensions and format of the partial decryption results, this information serves as the sole basis for verifying data integrity during the decryption phase. Finally, the partial decryption result and its corresponding proof are output.
Given the input ciphertext , a set of partial decryption results , and the corresponding proofs , each proof is first verified against the relation : . If any proof fails verification, the corresponding party is treated as an abnormal participant, and its partial decryption result is discarded. If the number of remaining valid partial decryption results exceeds , the algorithm is executed to combine them and obtain the final decryption result . Otherwise, the decryption process is aborted, and all valid participants are required to re-execute the distributed decryption protocol and submit new partial decryption results.
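The filter-then-combine logic of this step can be summarized in a few lines. `verify` and `combine` below are hypothetical stand-ins for the NIZK verification against the decryption relation and the secret-sharing combination algorithm; only the discard/threshold logic is the point of the sketch.

```python
def final_decrypt(partials, proofs, verify, combine, t):
    """Discard partials with invalid proofs; combine only if >= t remain."""
    valid = [p for p, proof in zip(partials, proofs) if verify(p, proof)]
    if len(valid) < t:
        return None      # abort: parties must re-run distributed decryption
    return combine(valid[:t])
```

Returning `None` models the abort branch, after which the protocol requires all valid participants to re-execute distributed decryption and submit fresh partial results.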
The distributed threshold decryption protocol is secure against malicious adversaries through the combined use of commitments and NIZK proofs. In summary, after reconstructing their decryption key shares, the current online participants perform partial decryption on the evaluated ciphertext and compute a proof based on the relation in the NIZK system, to prove that is a correctly computed partial decryption result obtained by following the protocol steps, using valid parameters and the corresponding secret key share on . This prevents the final ciphertext from being incorrectly decrypted or leaking sensitive information under malicious attacks. Moreover, with the proposed share resharing technique, any t partial decryption results can be aggregated to obtain the final decryption, thus providing threshold fault tolerance: decryption can still proceed correctly even if some participants are corrupted. We define the maximum fault tolerance as . Therefore, this mechanism does not rely on a trusted third party, and ensures the reliability and active security of the decryption protocol, even in asynchronous and dynamic environments.
4.4. NP Relations of NIZK
To ensure security against active malicious adversaries, this scheme employs a verification mechanism based on commitments and NIZK, as formally defined in Section 2.4 and Section 2.5. In NIZK, proofs are generated based on the following NP relations over a language , covering distributed key generation, encryption, and threshold decryption. These proofs are used to demonstrate the correct execution of the corresponding algorithms. The NP relations are formally defined as follows:
Definition 24 (). During the distributed key generation protocol, the relation defines the correctness of the secret key share generation and proves this knowledge through the proof . The relation demonstrates that is correctly sampled from the specified distribution, is obtained through the share resharing process, and is correctly computed. The relation can be expressed as follows:
Definition 25 (). During the encryption phase of each party, the relation proves through the proof that the ciphertext is correctly computed by party with the use of the given public key to encrypt the message , and that the sampled random matrix meets the encryption scheme requirements. The relation can be expressed as follows:
Definition 26 (). During the threshold decryption phase, the relation proves through the proof that the partial decryption result provided by party is correctly computed with the use of the appropriate secret key share to decrypt the ciphertext through the decryption algorithm, and that the smudging noise within the specified range is added to the result. The relation can be expressed as follows:
5. Correctness, Security, and Performance Analysis
This section provides a detailed analysis of the VTMFHE scheme, focusing on three aspects: correctness, security, and performance.
5.1. Correctness
We begin by analyzing the correctness of the VTMFHE scheme in homomorphic evaluation and decryption, focusing on homomorphic addition, homomorphic multiplication, and distributed decryption. Since all parties in our scheme use a shared public key for encryption, the correctness analysis of homomorphic evaluation is similar to that of single-key FHE schemes based on the LWE problem.
As defined in Section 2.3, the VTMFHE scheme relies on the correctness of the underlying FHE scheme. We first prove the correctness of homomorphic addition and homomorphic multiplication:
For the security parameter and the circuit depth bound , let the lattice dimension be , the modulus be , and the error distribution be a B-bounded distribution. In our scheme, is the public key determined by the parties after executing the VDKeyGen protocol. The ciphertext is obtained by party encrypting the plaintext using the public key , where . The plaintext result obtained during decryption is computed as . We assume that and are encryptions of and , respectively. The following properties hold in our scheme:
Homomorphic Addition: Let . Then, , i.e., .
Homomorphic Multiplication: Let . Then, , i.e., , where is the resulting error vector. According to the definition of the GSW scheme [4], its size will not disrupt the essential form of the ciphertext.
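The additive half of the argument can be checked with a toy symmetric LWE sketch: adding two ciphertexts component-wise adds the plaintext bits (mod 2, since two half-modulus offsets wrap around) and the error terms. Parameters and noise ranges are toy assumptions; GSW multiplication is not reproduced here.

```python
import secrets

Q, N_DIM = 2**16, 8                    # toy parameters (assumption)
s = [secrets.randbelow(Q) for _ in range(N_DIM)]

def enc(m):
    a = [secrets.randbelow(Q) for _ in range(N_DIM)]
    e = secrets.randbelow(5) - 2       # small noise in [-2, 2]
    b = (sum(x * y for x, y in zip(a, s)) + e + m * (Q // 2)) % Q
    return a, b

def add(c1, c2):
    # Homomorphic addition: errors add, plaintext bits add mod 2.
    (a1, b1), (a2, b2) = c1, c2
    return [(x + y) % Q for x, y in zip(a1, a2)], (b1 + b2) % Q

def dec(c):
    a, b = c
    v = (b - sum(x * y for x, y in zip(a, s))) % Q
    return 1 if Q // 4 < v < 3 * Q // 4 else 0
```

Decrypting the sum of two fresh ciphertexts yields the XOR of the two bits, matching the homomorphic addition property stated above, as long as the accumulated error stays below the decoding threshold.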
For the correctness of the distributed threshold decryption process, after completing the homomorphic evaluation , the parties generate the secret key shares for decryption according to the share resharing algorithm . They then compute the partial decryption results . The decryption process of the VTMFHE scheme has the following correctness:
Ultimately, we rely on the correctness of the underlying FHE scheme and the TMFHE scheme to prove the correctness of our scheme. Additionally, consistency checks through the commitment and the NIZK system further ensure that the actively secure scheme is correct.
From a structural perspective, the proposed scheme offers good compactness and scalability. In terms of ciphertext and key sizes, the ciphertext size in our scheme is fixed, independent of the original plaintext length and the number of participants , and does not grow with the data volume or system scale. Additionally, since the proposed share resharing mechanism allows secret key shares to be randomly restructured into new sharing formats, the key size remains constant regardless of the number of parties or ciphertexts, providing better scalability than classical multi-party FHE schemes (e.g., Mukherjee and Wichs [9], Aranha et al. [35]), where ciphertext and key sizes depend on . However, we note that introducing commitment and NIZK proofs for security verification, although one-time and binding, leads to communication and computation costs that grow linearly with , as each party must broadcast verification data; this may impose a communication burden in large-scale deployments. For complex computations, our scheme supports Boolean circuits composed of addition and multiplication gates with fan-in 2, allowing multiple ciphertexts to be jointly evaluated, thereby offering strong computational expressiveness. Nonetheless, once the circuit depth exceeds the predefined limit, ciphertext noise will surpass the decryption bound, making correct decryption unreliable. Thus, homomorphic evaluation remains constrained by the circuit depth parameter .
5.2. Security
This section discusses the security of the VTMFHE scheme. To formally model and prove security, we define the corresponding adversarial model. We begin with a static passive adversary model (i.e., semi-honest, as defined by Mouchet et al. [21]), where the adversary can corrupt up to parties before the protocol starts. In the passive setting, corrupted parties follow the protocol honestly, and is allowed to access their internal state. However, existing TMFHE schemes typically provide security only under semi-honest adversaries. When facing malicious adversaries, a corrupted group may disrupt correctness or privacy by submitting invalid outputs (e.g., incorrect partial decryption results) or performing biased decryption. Therefore, we extend the model to an active malicious adversary setting, where corrupted parties are allowed to behave arbitrarily, such as going offline, submitting malformed data, or sending invalid verification messages at any point in time. Moreover, the adversary may observe and interfere with protocol interactions, or attempt to forge valid outputs. Compared to the scheme by Mouchet et al. [21], our approach significantly improves security by incorporating a verification mechanism based on commitments and NIZK in key generation, encryption, and threshold decryption. We prove that the scheme achieves IND-CPA security and is secure against active attacks from malicious adversaries.
In terms of semantic security, for the security parameter λ, the circuit depth bound d, and the set of parties in the VTMFHE scheme, there exists a PPT simulator such that, for a static and passive PPT adversary corrupting fewer than t parties, the hybrid experiments in the following real world and ideal world are indistinguishable:
Real world:
1. On input 1^λ, the adversary outputs the threshold access structure.
2. The challenger runs the setup and distributed key generation algorithms, and sends the public key to the adversary.
3. The adversary outputs a maximal invalid set (Definition 3) and challenge messages.
4. The challenger executes the share resharing protocol, and then sends the secret key shares of the parties in the maximal invalid set and the challenge ciphertexts to the adversary.
5. The adversary issues polynomially many adaptive decryption queries for circuits C with depth at most d. For each query, the challenger executes the evaluation and partial decryption algorithms and sends the partial decryption results to the adversary.
6. Finally, the adversary outputs its guess bit within polynomial time and sends it to the challenger.
Ideal world:
1. On input 1^λ, the adversary outputs the threshold access structure.
2. The challenger runs the setup and distributed key generation algorithms, and sends the public key to the adversary.
3. The adversary outputs a maximal invalid set (Definition 3) and challenge messages.
4. The challenger executes the share resharing protocol, and then sends the secret key shares of the parties in the maximal invalid set and the challenge ciphertexts to the adversary.
5. The adversary issues polynomially many adaptive decryption queries for circuits C with depth at most d. For each query, the challenger executes the simulator algorithm and sends the simulated partial decryption results to the adversary.
6. Finally, the adversary outputs its guess bit within polynomial time and sends it to the challenger.
Under the LWE assumption, ciphertext matrices are indistinguishable from random matrices. By the smudging lemma (Lemma 1), the information about the secret key shares contained in the partial decryption results is computationally hidden by the smudging noise. This means that the public key, ciphertexts, and partial decryption results are computationally indistinguishable from random values and carry no information about the messages. Therefore, our VTMFHE scheme is IND-CPA-secure.
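The smudging effect can be checked numerically. The following minimal sketch (with purely illustrative parameters) computes the exact statistical distance between uniform noise on [-B, B] and the same distribution shifted by a small error e; the distance is |e|/(2B + 1), which vanishes as B grows — this is what hides the secret-key-dependent error term in a partial decryption:

```python
from fractions import Fraction

def smudging_distance(e: int, B: int) -> Fraction:
    """Exact statistical distance between U[-B, B] and e + U[-B, B]."""
    support = 2 * B + 1
    p = {x: Fraction(1, support) for x in range(-B, B + 1)}
    q = {x + e: Fraction(1, support) for x in range(-B, B + 1)}
    keys = set(p) | set(q)
    return sum(abs(p.get(x, 0) - q.get(x, 0)) for x in keys) / 2

print(smudging_distance(3, 10))       # small B: noticeable distance
print(smudging_distance(3, 10**4))    # large smudging bound: tiny distance
```

Choosing the smudging bound superpolynomially larger than the evaluation noise makes this distance negligible in λ, which is the quantitative content of Lemma 1.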
The execution of the protocol should not disclose any knowledge about the secret key shares or the messages through the partial decryption results, beyond what is already contained in the homomorphic evaluation result. Additionally, corrupted parties should be unable to send incorrect information that leads to privacy leakage or decryption failure. Next, we consider an active malicious adversary and verify that our scheme is actively secure.
To verify the source of information, prevent the adversary from forging legitimate information, and ensure that ciphertexts are not tampered with during transmission, we construct a verification mechanism through commitment and NIZK. This verifies the correctness of the scheme’s execution process, including distributed key generation, encryption, and threshold decryption. We simulate proofs through a series of hybrid experiments:
Hyb₀: On input 1^λ, the adversary sends a set of corrupted parties that is invalid under the access structure (i.e., of size less than t). The parties interact according to the VDKeyGen protocol in Section 4.1, generating the corresponding commitments and proofs. After the verification completes, they generate the public key and the secret key shares through share resharing and oracle queries. The adversary issues encryption queries by choosing plaintexts and obtaining the ciphertexts through the encryption algorithm. The adversary then issues polynomially many adaptive decryption queries for circuits C with depth at most d. For each query, the scheme executes the partial decryption algorithm and sends the partial decryption results to the adversary; these partial decryption results include the information used for verification (commitments and proofs). The corrupted parties controlled by the adversary compute their partial decryption results according to the protocol and send them. If any proof in the partial decryption results fails verification, ⊥ is output; otherwise, the plaintext result is computed according to the threshold decryption protocol. Finally, the adversary determines the value of the challenge bit through the decryption queries and other information. By the preceding IND-CPA security proof, the probability that the adversary distinguishes correctly exceeds 1/2 by at most a negligible amount.
Hyb₁: We introduce the first change, using the simulator to generate proofs. In the decryption-query step of Hyb₀, when honest parties perform partial decryption, proofs generated by the simulator are used in place of the honestly generated proofs that these partial decryption results are correct, with the rest remaining unchanged. By the honest-verifier zero-knowledge property defined in Definition 18, the proofs generated by the simulator are computationally indistinguishable from those generated by an honest prover. Therefore, the experiments Hyb₀ and Hyb₁ are indistinguishable.
Hyb₂: Furthermore, we replace the commitments that do not need to be opened during the threshold decryption process. These commitments are only used for computing and verifying proofs; the rest of the experiment remains unchanged. By the commitment hiding property defined in Definition 14, the adversary cannot obtain any information about the committed values from the commitments themselves. Therefore, the experiments Hyb₁ and Hyb₂ are indistinguishable.
Hyb₃: During the VDKeyGen protocol, we use a knowledge extractor to extract the secret key shares of the corrupted parties, then derive the corresponding secrets and simulate the behavior of these parties, with the rest of the experiment remaining unchanged. By the knowledge soundness property defined in Definition 17, the verifier accepts the proofs generated using the witnesses provided by the knowledge extractor. Therefore, the experiments Hyb₂ and Hyb₃ are indistinguishable.
Hyb₄: Now, we replace the partial decryption results of each honest party during the threshold decryption process. Instead of honestly computing its partial decryption, the simulator uniformly samples a value, under the condition that the final decryption result remains unchanged, i.e., the combined result still decrypts to the correct plaintext. Since each replaced value is sampled from a uniform distribution, by the definition of the LWE problem it is indistinguishable from an honestly computed partial decryption, and the decryption result is not affected by the introduced randomness. Therefore, Hyb₃ and Hyb₄ are indistinguishable.
Hyb₅: Based on Hyb₄, the partial decryption results of honest parties no longer depend on their secret key shares. Therefore, we replace the zero-knowledge proofs generated by honest parties during the VDKeyGen process with proofs generated by the simulator. As in Hyb₁, the proofs generated by the simulator are computationally indistinguishable from those generated by an honest prover. Therefore, the experiments Hyb₄ and Hyb₅ are indistinguishable.
Hyb₆: Similarly to Hyb₂, we replace the commitments in the VDKeyGen protocol that do not need to be opened. Since the proofs in the key generation process were already replaced in Hyb₅, replacing these commitments does not affect the protocol. By the commitment hiding property defined in Definition 14, the adversary cannot obtain any information about the committed values from the commitments themselves. Therefore, the experiments Hyb₅ and Hyb₆ are indistinguishable.
We now prove that Hyb₆ is indistinguishable from the ideal world simulated by the simulator. If the adversary wins in Hyb₆, then it can be used to break the LWE problem, i.e., to efficiently solve the hard lattice problems SVP and GapSVP. The ideal world is clearly secure, because all components are generated by the simulator and no private information is leaked. By proving the indistinguishability between each pair of adjacent experiments step by step, we conclude that the initial experiment and the final experiment are computationally indistinguishable, thereby proving the security of the VTMFHE scheme against active malicious adversaries. This concludes the proof.
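The resampling step in Hyb₄ admits a simple sketch. Assuming, for illustration only, that partial decryptions combine additively modulo q (the modulus and values below are toy choices, not the scheme's parameters), the simulator can draw all but one honest value uniformly and fix the last so that the combined sum — and hence the decryption result — is unchanged:

```python
import random

def resample_partials(partials: list[int], q: int) -> list[int]:
    """Replace partial decryptions with uniform values whose sum mod q is preserved."""
    total = sum(partials) % q
    fresh = [random.randrange(q) for _ in partials[:-1]]
    last = (total - sum(fresh)) % q   # fix the final value so the sum is unchanged
    return fresh + [last]

q = 2**16
honest = [123, 4567, 89, 1011]        # toy partial decryption values
simulated = resample_partials(honest, q)
assert sum(simulated) % q == sum(honest) % q
```

Each simulated value is individually uniform, so it reveals nothing about the honest parties' secret key shares, while the combined decryption output is exactly preserved — the two properties the hybrid argument needs.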
We observe that most existing TMFHE schemes still rely on the semi-honest adversary model for their security guarantees. This assumption often lacks robustness against malicious adversaries or execution faults, where collusion, input forgery, or deliberately incorrect partial decryption by corrupted parties can threaten the data security of honest participants, particularly in the key sharing and threshold decryption processes. Moreover, traditional mechanisms generally lack systematic verification structures. Compared with representative TMFHE schemes, such as those by Mouchet et al. [27], Agrawal et al. [25], and Mouchet et al. [21], our proposed VTMFHE scheme achieves multi-dimensional improvements in security. In particular, we construct a verifiable framework for key generation and threshold decryption that addresses the need to prove complex structural constraints and maintain secret consistency across protocol phases, all without relying on a trusted third party.
5.3. Performance
To simplify the performance analysis and maintain consistency with prior studies, we adopt a symbolic-level theoretical complexity model to evaluate the computational and communication overhead of our scheme. The analysis is based on symbolic expressions involving key parameters, such as the number of participants N, the modulus q, and the threshold t, without assigning specific numerical values. The parameter settings and analytical methodology follow the typical models used in related works by Mukherjee and Wichs [9], Boneh et al. [19], Mouchet et al. [21], and Aranha et al. [35], ensuring both comparability and theoretical representativeness.
Our performance analysis begins with the underlying FHE scheme. We adopt the GSW scheme [4] as the underlying encryption scheme. Unlike the BGV scheme [3], the GSW scheme evaluates on ciphertexts in matrix form with constant dimensions, thereby avoiding the need for key switching and an evaluation key. This keeps the scheme logically simple and clear. Second, to allow multiple parties to perform secure computations jointly, we use share resharing techniques so that the parties collaboratively generate a public key, with the secret key distributed and shared among them. This allows each party to encrypt under the same public key. Unlike the currently popular multi-key fully homomorphic encryption (MKFHE) schemes, ciphertexts under the same public key can directly undergo homomorphic evaluation, avoiding the complex ciphertext extension steps of MKFHE schemes and significantly reducing the size of ciphertexts involved in the evaluation. Thus, the VTMFHE scheme is compact, and the number of interactions between parties during the execution of our scheme is not affected by the function being executed or the number of parties N. This reduces both the computational and communication complexity of the scheme.
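To illustrate why GSW needs no evaluation key: its homomorphic multiplication relies only on a public bit-decomposition operator (often written G⁻¹), which flattens ciphertext entries into 0/1 digits so that noise growth stays controlled. The following is a minimal sketch of that operator on a single vector; the modulus and values are illustrative:

```python
def bit_decompose(v: list[int], q: int) -> list[int]:
    """G^{-1}: decompose each entry of v into its bits (base 2, ceil(log2 q) digits)."""
    ell = max(1, (q - 1).bit_length())
    return [(x >> j) & 1 for x in v for j in range(ell)]

def recompose(bits: list[int], q: int) -> list[int]:
    """G: weight the bits by powers of two and sum per entry, so G(G^{-1}(v)) = v."""
    ell = max(1, (q - 1).bit_length())
    out = []
    for i in range(0, len(bits), ell):
        out.append(sum(b << j for j, b in enumerate(bits[i:i + ell])) % q)
    return out

q = 257
v = [200, 3, 129]
assert recompose(bit_decompose(v, q), q) == v
```

Because the decomposed vector has only 0/1 entries, multiplying it against a ciphertext matrix adds up few small noise terms, which is the mechanism that lets GSW evaluate without key switching or an evaluation key.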
It must be acknowledged that, to ensure the scheme is effective against malicious adversaries, we incorporate commitments and NIZK to construct a verification mechanism, which slightly increases the computational and communication overhead of our scheme. Specifically, according to the commitment scheme defined in Definition 11, each commitment corresponding to a submitted key share or intermediate value is a vector over ℤ_q of dimension k, where k is much less than the input dimension n. Thus, the communication cost of each commitment is O(k). For computation, both commitment generation and verification require a matrix-vector multiplication, resulting in a complexity of O(kn). In the VDKeyGen protocol of our scheme, each party generates and sends commitments for the other parties and verifies the commitments received from them, so the per-party communication and computation costs of this step grow linearly with the number of parties N. In the encryption and decryption algorithms, each party only needs to generate two commitments, each at a cost of O(kn). For NIZK, according to the construction in Definition 15, each proof consists of a constant number of commitments and responses; its communication cost follows the analysis of Yang et al. [30] (Appendix F.3).
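The per-commitment costs above can be grounded in a toy lattice-style commitment. This is a hedged sketch, not Definition 11 itself: the matrix shapes, modulus, and randomness handling are illustrative assumptions, and the tiny parameters give no real hiding or binding. Committing to a vector x with randomness r computes A·x + B·r mod q — one matrix-vector product — so the output has dimension k (communication O(k)) and the work is O(kn):

```python
import random

def commit(A, B, x, r, q):
    """Toy lattice-style commitment: c = A*x + B*r mod q (one matrix-vector product)."""
    return [(sum(A[i][j] * x[j] for j in range(len(x)))
             + sum(B[i][j] * r[j] for j in range(len(r)))) % q
            for i in range(len(A))]

def verify(A, B, x, r, q, c):
    """Recompute the commitment from the opening (x, r) and compare."""
    return commit(A, B, x, r, q) == c

random.seed(1)
q, k, n, m = 97, 3, 8, 4          # k << n keeps each commitment short: k elements
A = [[random.randrange(q) for _ in range(n)] for _ in range(k)]
B = [[random.randrange(q) for _ in range(m)] for _ in range(k)]
x = [random.randrange(q) for _ in range(n)]   # committed key share / intermediate value
r = [random.randrange(q) for _ in range(m)]   # commitment randomness
c = commit(A, B, x, r, q)
assert verify(A, B, x, r, q, c) and len(c) == k
```

Both `commit` and `verify` perform the same matrix-vector multiplication, matching the O(kn) generation and verification costs cited above.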
In our scheme, each proof contains a constant number of commitments. Therefore, the communication complexity per participant for each proof is O(k). In terms of computation, generating the NIZK proof involves multiple commitments and multiplication relation checks, resulting in a complexity of O(kn); the corresponding verification includes checks for linear relations, multiplication relations, and commitment consistency, and requires the same O(kn) complexity. During the VDKeyGen process, share resharing requires each party to store an additional state, a polynomial of degree t − 1, incurring a space complexity of O(t). Additionally, the protocol requires each party to send and receive N − 1 Shamir secret shares, resulting in a communication overhead of O(N). The computational overhead of executing the combination algorithm when generating the public key and secret key shares is likewise linear in the number of contributing parties. It is important to emphasize that the total overhead of the VDKeyGen protocol can be regarded as a one-time cost related to the security level. The commitments and proofs are executed only once during system initialization and can be reused across multiple homomorphic computations without impacting subsequent rounds. Therefore, the overall impact of this verification mechanism on system performance is limited: its communication and computation costs can be effectively amortized across many tasks, especially in scenarios involving long-term key sharing or multi-round collaborative computation. Overall, this overhead is both necessary for enhancing active security and acceptable in practice.
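The share resharing step above can be sketched with textbook Shamir sharing. This is a minimal illustration under toy assumptions (the prime modulus and parameters are not the scheme's): each party holding a share of the secret re-shares it with a fresh degree-(t − 1) polynomial, and the columns of re-shares are combined with Lagrange coefficients, yielding fresh shares of the same secret without any party ever reconstructing it:

```python
import random

P = 2**61 - 1  # a Mersenne prime modulus (toy choice)

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split secret into n Shamir shares with threshold t (degree t-1 polynomial)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at 0 from any t shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

secret, t, n = 123456789, 3, 5
shares = share(secret, t, n)
assert reconstruct(shares[:t]) == secret

# Resharing: t parties each re-share their own share with a fresh polynomial;
# combining column k with the Lagrange coefficients gives a fresh share for party k.
subset = shares[:t]
lam = [reconstruct([(x, 1 if j == i else 0) for j, (x, _) in enumerate(subset)])
       for i in range(t)]                      # Lagrange coefficients at 0
resharings = [share(y, t, n) for (_, y) in subset]
new_shares = [(x, sum(lam[i] * resharings[i][x - 1][1] for i in range(t)) % P)
              for x in range(1, n + 1)]
assert reconstruct(new_shares[:t]) == secret   # same secret, fresh sharing
```

Each party stores one degree-(t − 1) polynomial (O(t) state) and exchanges one share with every other party (O(N) communication), matching the costs stated above.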
During the encryption phase, each party outputs a GSW ciphertext, a matrix over ℤ_q whose dimensions are constant and independent of N. The size of an individual commitment is O(k), and a proof consists of s commitments, each of size O(k). In our scheme, the final ciphertext output by each party consists of the GSW ciphertext together with its commitments and proof. Therefore, the total size of the final output ciphertext is the constant-size GSW matrix plus an O(sk) verification payload, and it does not grow with the number of parties.
A comparison of our scheme with several other typical fully homomorphic encryption schemes that allow multi-user joint computation, covering security, adversary model, public key size, evaluation key size, and compactness, is shown in Table 1.
6. Conclusions
This work proposes a verifiable threshold multi-party fully homomorphic encryption (VTMFHE) scheme based on the LWE problem, addressing the high complexity and expense issues of existing TMFHE schemes in asynchronous settings, as well as their insufficient security against malicious adversaries. Our scheme features a t-out-of-N threshold access structure, making it more compact than multi-key FHE schemes, and utilizes Shamir secret sharing and share resharing techniques for key generation. Compared to traditional TMFHE schemes, our scheme only introduces additional interactions during the key generation, and allows any user set above the threshold to jointly perform decryption without causing the smudging noise to grow too quickly. This balances user overhead and makes it more flexible for practical applications. Additionally, we introduced commitments and NIZK to construct an effective verification mechanism. This mechanism proves the correctness and validity of the actions of each party during the execution of the scheme, addressing the issue of TMFHE schemes being insecure against malicious adversaries. Finally, we demonstrated, through hybrid experiments, that the VTMFHE scheme is secure against malicious adversaries.
Outlook. The proposed VTMFHE scheme offers strong structural generality and security, making it particularly suitable for low-latency, communication-efficient environments in which multiple parties can efficiently perform joint homomorphic computation over ciphertexts. However, since each party must generate and transmit verification information (e.g., commitments and NIZK proofs), the scheme’s performance may degrade in high-communication-cost settings or large-scale systems, as the communication overhead grows linearly with the number of parties N. Furthermore, the scheme’s homomorphic evaluation capability is constrained by the circuit depth bound, which depends on the underlying LWE parameter settings, especially when handling complex computations. To address these limitations, future work will focus on optimizing the LWE parameters using techniques such as modulus-dimension switching and binary secret reduction, aiming to achieve smaller moduli and larger circuit depth bounds. It is also worth exploring the application of this scheme in broader cryptographic settings, such as constructing threshold signature schemes with active security to support joint signing and decentralized systems.