Article

Secure and Verifiable Edge-Federated Learning with Homomorphic Encryption and a Trusted Execution Environment for UAV Communication

1
School of Computer Science, School of Cyber Science and Engineering, Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing 210044, China
2
School of Information Management, Nanjing University, Nanjing 210093, China
3
China Mobile Zijin (Jiangsu) Innovation Research Institute, Nanjing 211899, China
*
Authors to whom correspondence should be addressed.
Electronics 2026, 15(5), 1029; https://doi.org/10.3390/electronics15051029
Submission received: 9 January 2026 / Revised: 15 February 2026 / Accepted: 26 February 2026 / Published: 28 February 2026
(This article belongs to the Special Issue Novel Methods Applied to Security and Privacy Problems, Volume II)

Abstract

To address the security challenges of untrusted servers and the difficulty in verifying malicious node identities in UAV swarm federated learning when relying solely on homomorphic encryption (HE), this paper proposes a verifiable HE-based federated learning (VHEFL) framework that integrates trusted execution environments (TEEs) with aggregate signatures. The method proceeds through a series of well-defined steps. Initially, each UAV computes model gradients locally, encrypts them using additive HE, and attaches a lightweight identity-based digital signature. The encrypted model gradients and signatures are uploaded to the server. Subsequently, the ground server performs gradient aggregation, including both the encrypted gradients and signatures, in the regular execution environment (REE). After performing aggregation in the REE, the aggregated ciphertext and all the signatures are sent to the TEE. Inside the TEE, an efficient aggregate verification algorithm is executed to batch-verify the authenticity and integrity of all signatures simultaneously. This enables the TEE to securely verify the aggregated result and return the aggregated model parameters to the UAVs, where the encrypted model parameters are decrypted locally to obtain the updated parameters. By combining low-overhead aggregate signature verification with hardware isolation, VHEFL provides a high-performance and verifiable security solution that effectively addresses the challenges of data privacy, malicious update prevention, and computational overhead in UAV swarm federated learning.

1. Introduction

In edge deployment scenarios, drones continuously collect data highly relevant to mission execution, including flight status information, onboard perception images, and multi-source telemetry data [1,2,3]. Such data typically contains sensitive operational details and restricted environmental characteristics, making it difficult to centrally aggregate under multiple constraints such as privacy protection, security control, and regulatory compliance [4,5]. This weakens the feasibility of traditional centralized model training paradigms in drone applications. In addition, this mission data is crucial for improving drones’ capabilities in autonomous flight, path planning, and intelligent decision making, but it is often scattered across different fleets, operating entities, and deployment areas [6,7]. This means that the data held by a single node is insufficient in terms of scale and diversity to meet the demands of high-capacity model training [8,9,10,11].
Federated learning (FL) provides a distributed training mechanism for collaborative modeling in UAV scenarios, eliminating the need for centralized raw data [12,13]. This allows UAVs or edge nodes to participate in collaborative model updates while maintaining the privacy of their local data. In this mechanism, each node independently trains its model based on its onboard data and submits only the updated results to the coordinating node for aggregation. Through multiple rounds of interaction, a shared global model is gradually formed, enabling knowledge integration across different fleets and mission scenarios without compromising data sovereignty or privacy boundaries [14,15]. This provides a feasible foundation for UAV collaborative intelligence under privacy-constrained conditions and supports the adaptation of large-scale and multimodal models at the edge [16,17]. Reliable communication is a prerequisite for effective federated collaboration in UAV networks. Advances in IoT-oriented modulation and multi-user transmission techniques have demonstrated the importance of robust and spectrum-efficient transmission mechanisms [18,19].
FL enables collaborative model training without sharing raw flight data, offering significant benefits for the protection of mission-sensitive information [20]. However, the existing federated learning solutions based on HE still suffer from systemic security flaws. These schemes typically assume the aggregation server is semi-honest, meaning they cannot effectively defend against server-side attacks such as aggregation tampering or malicious update injection [21,22,23]. At the same time, they lack effective mechanisms for verifying a participant’s identity and data integrity, allowing malicious UAVs to easily upload poisoned gradients and disrupt global model convergence. Furthermore, traditional per-node digital signature verification can theoretically defend against client-side poisoning. In UAV edge networks, however, the server is required to verify the signatures of all participating nodes individually in each training round. High-frequency serial verification imposes significant computational pressure on the ground aggregator. Additionally, intermittent air-to-ground communication further exacerbates verification delays. As a result, security protection and system efficiency become severely imbalanced [24].
To address these issues, this paper proposes a federated learning framework based on verifiable HE. The framework operates as follows. First, each UAV computes its local model gradients and protects them using HE, and it attaches a lightweight identity-based digital signature before transmitting them to the server. Second, the ground server aggregates the gradients in the ciphertext domain and submits the aggregated ciphertext, along with all original signatures, to a remotely attested trusted execution environment. Next, the TEE executes an efficient aggregation verification algorithm, verifying the authenticity and integrity of all signatures in a single batch. Upon successful verification, the TEE outputs the encrypted global model after aggregation. Finally, the server distributes the encrypted global model to each UAV node, which decrypts the model locally and updates its local model.
The main contributions of this paper are as follows:
  • We propose a security framework based on TEE and aggregate signatures. By leveraging hardware isolation and verifiable execution mechanisms, the framework ensures the integrity of the aggregation process, preventing the server from tampering with the model aggregation or injecting malicious updates. This significantly enhances the overall security of the system.
  • We introduce a lightweight identity-based digital signature mechanism to ensure that the identity of each participating node is effectively verified. The aggregate signature mechanism further guarantees data integrity. It enables the rapid identification and exclusion of malicious nodes. This prevents them from uploading poisoned gradients and preserves the accuracy and consistency of the global model.
  • We propose an aggregate signature mechanism that compresses multiple individual signatures into a single one for verification, thereby substantially reducing computational overhead. This mechanism reduces the time complexity of the verification process from O(N) to O(1), effectively alleviating server-side computational pressure and improving system processing efficiency.

2. Related Work

Privacy-preserving FL at the edge typically adopts differential privacy (DP), secure multi-party computation (MPC), or HE to protect updates while keeping raw data local on UAVs or edge devices [25,26,27,28]. Among these, HE is especially appealing for embedded edge deployments because it supports computation directly over encrypted updates and aligns with strict airspace and mission data regulations [29]. In addition, shared model updates (gradients or weights) can still leak operational details such as flight trajectories, target features, or environmental imagery if left unprotected.
For UAV collaborative training, several HE-based schemes inform design choices under edge constraints. Zhang et al. [30,31] proposed BatchCrypt, combining gradient quantization with batch encoding to reduce computation and bandwidth without sacrificing accuracy, techniques well suited to bandwidth-limited UAV swarms. Li et al. [32] presented an IoT-oriented FL framework with a threshold Paillier cryptosystem, whose resilience to untrusted participants extends to cross-fleet UAV operations. He et al. [33] optimized the Paillier cryptosystem and aggregation at the edge to lower latency, a key requirement for time-sensitive aerial missions with intermittent air-ground links and tight energy budgets.
Privacy-preserving mechanisms based on cryptographic gradients have been extensively studied in distributed artificial intelligence and have provided important references for embedded edge applications. Zhang et al. [34] combined masking with HE to construct an end-to-end privacy-preserving IoT training process suitable for UAV–ground collaborative scenarios; Wang et al. [35] proposed PPFLHE, integrating HE with trust assessment to support dynamic access control during aggregation, which is particularly important for multi-fleet and multi-operator collaboration; Firdaus et al. [36] introduced the blockchain to achieve decentralized federated learning, enhancing auditability and multi-party collaboration; and Mantey et al. [37] demonstrated that cryptographic gradients can effectively preserve model utility in recommendation tasks, further confirming the practicality of HE-based privacy protection. Beyond cryptographic solutions, recent research has explored the integration of TEE to provide trusted computation in UAV networks. Lightweight TEE architectures for UAV systems and TEE-assisted privacy-preserving offloading frameworks show that hardware-enforced isolation can offer reliable aggregation and verifiable computation in resource-constrained UAV scenarios [38,39].
In summary, existing research on UAV FL still suffers from three major limitations. (1) End-to-end privacy and integrity guarantees remain insufficient under edge constraints, as HE alone cannot prevent mission-sensitive information leakage when aggregation or verification mechanisms are inadequately protected. (2) The aggregation process lacks verifiability, with servers commonly assumed semi-honest yet vulnerable to undetected tampering and inconsistent execution. (3) Finally, under tight resource budgets, it remains a significant challenge to efficiently verify node identities, update integrity, and model consistency, especially given the dynamic nature of multi-fleet collaboration. To address these issues, this paper constructs a secure and verifiable federated learning framework tailored for UAV edge scenarios. On the device side, HE is employed to encrypt local gradients before upload. This ensures that the server operates only on ciphertexts during aggregation in the REE. Each UAV also attaches a lightweight identity-based signature to its encrypted update. On the server side, ciphertext aggregation is performed in the REE. An aggregate signature mechanism combines individual signatures into a single aggregated signature. The aggregated signature is then verified inside a TEE to ensure correctness and prevent tampering during integrity validation. The encrypted global model is subsequently returned to UAV nodes for local decryption and model updates. Through this collaborative design, the framework achieves end-to-end update confidentiality, verifiable integrity of aggregation inputs, and efficient identity authentication. At the same time, it maintains low computational overhead suitable for resource-constrained UAV edge environments.

3. Preliminaries

To facilitate understanding, the key notations adopted in this paper are summarized below.
Symbol | Description
E(·) | the encryption function
m_i | plaintext
E(m_i) | encrypted ciphertext
P_i | user
x_i | private key
Q_i | public key
(R_i, s_i) | the Schnorr signature
e | the challenge value for all signers
s_agg | the sum of all signatures
R_agg | the sum of the common points of all signers
(R_agg, s_agg) | the aggregated signature
Q_agg | the sum of the public keys of all signers
TA | trusted application
ID | user identifier
H: {0,1}* → Z_p | the collision-resistant hash function
req_i | detection request
w_global | global model parameter
C_k | ciphertext of a local model parameter
σ_k | the signature on the encrypted local model parameter (its integrity proof)
RES = {res_1, …, res_N} | response collection
C = {C_1, C_2, …, C_N} | set of local model parameter ciphertexts
A | an adversary capable of launching a collusion attack
B | the simulator of A

3.1. Homomorphic Encryption

HE systems are usually composed of key generation algorithms, encryption algorithms, decryption algorithms, and evaluation rules. Among these, the key generation algorithm produces a public key for encryption and a private key for decryption, while the encryption and decryption algorithms convert between plaintext and ciphertext. The evaluation rules define which operations can be performed on ciphertexts so that the homomorphic property holds. This property guarantees that the result of the ciphertext operation corresponds to the result of the plaintext operation. The definition of a homomorphism is as follows.
Definition 1.
In HE’s arithmetic rules, homomorphisms generally include additive homomorphisms and multiplicative homomorphisms.
For additive HE, if the encryption function E(·) corresponds to the addition operation, then for any plaintexts m_1 and m_2, the encrypted ciphertexts E(m_1) and E(m_2) satisfy
E(m_1 + m_2) = E(m_1) ⊕ E(m_2)    (1)
where ⊕ denotes the addition operation on ciphertexts.
For multiplicative HE, if the encryption function E(·) corresponds to a multiplication operation, then for any plaintexts m_1 and m_2, the encrypted ciphertexts E(m_1) and E(m_2) satisfy
E(m_1 × m_2) = E(m_1) ⊗ E(m_2)    (2)
where ⊗ denotes the multiplication operation on ciphertexts.
Currently, HE technology has been widely adopted in privacy-preserving computing, encrypted computing, and cloud computing, providing strong security for related fields.
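The additive property of Equation (1) can be made concrete with a toy Paillier instance, where ciphertext "addition" is multiplication modulo n². This is a minimal sketch with demo-only primes and function names of our own choosing, not a secure implementation and not the scheme used in the paper's experiments:

```python
# Toy Paillier cryptosystem illustrating the additive homomorphism
# E(m1 + m2) = E(m1) (+) E(m2): ciphertext "addition" is multiplication
# modulo n^2. Demo primes only; NOT secure for real use.
import math
import secrets

def paillier_keygen(p=499, q=547):
    """Return (public, private) keys built from toy primes p, q."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)        # Carmichael's lambda(n)
    g = n + 1                           # standard simple choice of g
    # mu = (L(g^lam mod n^2))^{-1} mod n, with L(u) = (u - 1) // n
    u = pow(g, lam, n * n)
    mu = pow((u - 1) // n, -1, n)
    return (n, g), (lam, mu)

def paillier_enc(pub, m):
    n, g = pub
    r = secrets.randbelow(n - 2) + 1    # random r in [1, n-2], coprime to n
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 1
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def paillier_dec(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    u = pow(c, lam, n * n)
    return ((u - 1) // n) * mu % n      # L(c^lam) * mu mod n

pub, priv = paillier_keygen()
m1, m2 = 1234, 5678
c1, c2 = paillier_enc(pub, m1), paillier_enc(pub, m2)
c_sum = (c1 * c2) % (pub[0] ** 2)       # homomorphic addition of plaintexts
print(paillier_dec(pub, priv, c_sum))   # 6912 = m1 + m2
```

Decrypting the product of the two ciphertexts yields the sum of the plaintexts, which is exactly the property federated aggregation exploits.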

3.2. Trusted Execution Environment

A TEE provides a secure, isolated operating environment to protect sensitive data and code from malware and unauthorized access [40]. Its core goal is to create a hardware- or software-isolated space where programs can run independently from the OS and regular applications, ensuring data confidentiality and integrity. A TEE typically includes a trusted execution area, such as Intel SGX’s Enclave, separate from the REE. This isolation offers higher security than a conventional OS, safeguarding user data and processes. The TEE also consists of the TEE Application, which handles sensitive tasks like encryption, and the TEE Manager, which manages the environment, applications, and resources, providing an interface to the outside world. The key features of the TEE are outlined in [41]:
(1)
Isolation: The TEE realizes physical isolation of sensitive information through a combination of hardware and software. Even if the operating system or application program is attacked, the sensitive data can still remain safe.
(2)
Protection of data confidentiality and integrity: The TEE prevents data leakage or tampering during transmission and storage. It uses encryption, digital signatures, and other means to ensure data integrity.
(3)
Support for trusted computing: The TEE provides trusted computing services for applications. It ensures that the computational process remains free from external interference and guarantees the credibility of the results.
(4)
Hardware acceleration: The TEE usually relies on the security features provided by hardware (ARM’s TrustZone, Intel’s SGX, etc.), and is therefore more powerful than pure software security measures.
These benefits have led to a wide range of applications for the TEE, especially in the areas of mobile devices, payment security, cloud computing, and the Internet of Things.

3.3. Schnorr Aggregated Signature

The Schnorr Aggregate Signature is an extension of the Schnorr signature algorithm and is designed to enhance the efficiency of processing multiple signatures, especially in scenarios with multiple participants signing the same message. It combines multiple independent Schnorr signatures into a single “aggregated signature”, which allows the verifier to check just one combined signature, greatly improving verification efficiency. The key mechanism involves using aggregated challenge values to merge data from multiple signatures, thereby enabling the entire set to be verified in a single step. The flow of a Schnorr aggregate signature is as follows:
(1)
Individual Signature Generation
For each signer P_i, suppose its private key is x_i and its public key is Q_i = x_i·G. The signer generates a Schnorr-style signature (R_i, s_i): P_i selects a random number k_i and calculates R_i = k_i·G, e_i = H(m || R_i), and s_i = k_i + e_i·x_i mod q. It then sends (R_i, s_i) to the server.
(2)
Aggregate Signature Generation
The server calculates the common challenge value for all signers e = H(m || R_1 || R_2 || … || R_n), the sum of all signatures s_agg = Σ_{i=1}^{n} s_i mod q, and the sum of the common points of all signers R_agg = Σ_{i=1}^{n} R_i, obtaining the aggregated signature (R_agg, s_agg), which is subsequently sent to the verifier.
(3)
Aggregate Signature Verification
For verification, the verifier needs to check only the single aggregated signature rather than each individual signature. The process is similar to verifying an individual Schnorr signature but uses the aggregated common point and the aggregated challenge value: compute e = H(m || R_agg) and the sum of the public keys of all signers Q_agg = Σ_{i=1}^{n} Q_i, then check whether the equation s_agg·G = R_agg + e·Q_agg holds; if it does, verification passes.
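The three phases above can be sketched in a small multiplicative Schnorr group, where the elliptic-curve relation s_agg·G = R_agg + e·Q_agg becomes g^(s_agg) = R_agg · Q_agg^e. This is a toy of the common-challenge variant (all signers sign with e = H(m || R_agg) after commitments are shared); the parameters are demo-sized, rogue-key protections are omitted, and all function names are ours:

```python
# Toy Schnorr aggregate signature over the order-1019 subgroup of Z_2039^*
# (multiplicative notation). Demo parameters; NOT secure for real use.
import hashlib
import secrets

P, Q, G = 2039, 1019, 4        # safe prime P = 2Q + 1, generator G of order Q

def H(msg: bytes, R_agg: int) -> int:
    """Common challenge e = H(m || R_agg), reduced mod the group order."""
    data = msg + R_agg.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)             # (private x_i, public Q_i = G^{x_i})

def sign_round(msg, keys):
    """All signers commit R_i first, then sign under the common challenge."""
    ks = [secrets.randbelow(Q - 1) + 1 for _ in keys]
    Rs = [pow(G, k, P) for k in ks]
    R_agg = 1
    for R in Rs:
        R_agg = R_agg * R % P          # "sum" of commitment points
    e = H(msg, R_agg)
    s_agg = sum((k + e * x) % Q for k, (x, _) in zip(ks, keys)) % Q
    return R_agg, s_agg

def verify(msg, pubs, R_agg, s_agg):
    e = H(msg, R_agg)
    Q_agg = 1
    for y in pubs:
        Q_agg = Q_agg * y % P          # "sum" of public keys
    return pow(G, s_agg, P) == R_agg * pow(Q_agg, e, P) % P

keys = [keygen() for _ in range(5)]
R_agg, s_agg = sign_round(b"gradient batch #1", keys)
print(verify(b"gradient batch #1", [y for _, y in keys], R_agg, s_agg))  # True
```

A single exponentiation check replaces n individual verifications, which is the O(N) to O(1) saving claimed in the contributions.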

4. Secure and Verifiable Edge-Federated Learning for UAV Applications with HE and a TEE

Figure 1 illustrates the trusted federated learning framework proposed in this paper, which targets UAV-based edge learning scenarios. It encrypts on-device gradients with HE to protect the confidentiality of local model updates before transmission. Server-side aggregation is performed in the REE over ciphertexts, ensuring that plaintext model updates are never exposed during the aggregation process. Meanwhile, aggregate signatures authenticate participating nodes and bind updates to their sources. The aggregated signatures are verified inside a TEE, which provides hardware-enforced integrity protection for the verification process.

4.1. System Model

We consider a potentially malicious server that may arbitrarily deviate from the prescribed protocol. The server may attempt to tamper with the aggregation process, discard client updates, or manipulate the final model parameters. UAV clients are assumed to be honest but may operate in untrusted communication environments. The HE secret key is generated and stored exclusively on each UAV device and never leaves the client side. The TEE is assumed to provide hardware-enforced memory isolation against software adversaries. Physical attacks and advanced hardware side-channel attacks are considered out of scope.
This study introduces a verifiable homomorphic encryption-based federated learning framework for UAV scenarios, combining homomorphic encryption (HE) with an optimized aggregate signature mechanism to provide dual-layer security for distributed training. The CKKS scheme encrypts local model parameters on UAV devices, while aggregate signatures generate integrity proofs for these encrypted updates, ensuring both confidentiality and verifiability throughout the training process.
In practical UAV swarm deployments, nodes may dynamically exit due to battery constraints or unstable communication, making a fixed-participant assumption unrealistic. Inspired by fault-tolerant secure aggregation mechanisms [42], our VHEFL framework is designed to operate under dynamic participation without requiring a predefined set of nodes per training round. Aggregation is performed over the encrypted updates actually received in the REE, and the aggregate signature is constructed over this dynamically formed participant set. The TEE verifies only the aggregated signatures corresponding to the active nodes, ensuring correctness and integrity without assuming full participation. Unlike classical threshold-based fault-tolerant schemes that rely on secret sharing reconstruction, our approach leverages homomorphic aggregation combined with signature-level batch verification. This design achieves fault tolerance through dynamic participant selection and verifiable aggregation binding while maintaining a low computational overhead suitable for resource-constrained UAV environments. Consequently, the framework preserves security guarantees and aggregation robustness under intermittent connectivity and heterogeneous device conditions, ensuring that client dropout does not invalidate aggregation nor compromise verification correctness.
Cloud server: The cloud server is responsible for executing the federated learning process and providing aggregation and verification services to participating nodes. Internally, the server is divided into the TEE and REE, where the TEE offers an isolated and trusted computing space while the remaining system resources operate in the REE. In the proposed framework, a trusted application (TA) is deployed within the TEE to perform integrity and consistency verification of the model parameters. In addition, the TA plays a central role in secure identity management; the user identifier (ID) is not a public label but securely negotiated between the TA and each user through a confidential channel during system initialization, and it is known only to these two parties. The negotiated ID is cryptographically bound to protocol requests and responses via digital signatures, enabling the TA to verify message authenticity and integrity. To prevent replay attacks, nonces and timestamps are incorporated into the signed messages, ensuring freshness and uniqueness. During protocol execution, the TA can perform individual or aggregated verification based on ID-bound signature keys without explicitly revealing user identities, thereby supporting multi-party verification while preserving user privacy.
Although HE introduces non-negligible cryptographic overhead, our design carefully confines its usage to on-device model updates and avoids repeated key generation and expensive ciphertext operations on UAV platforms. The key pairs are generated once during system initialization and reused across training rounds, thereby amortizing the associated cost over long-running federated learning processes. In addition, lightweight update representations are employed to significantly reduce encrypted payload sizes, effectively mitigating both computation and communication overhead on resource-constrained ARM-based edge devices. To further balance security and efficiency, the proposed framework deliberately minimizes the trusted computing base by not executing all cryptographic operations inside the TEE. Specifically, only aggregation and integrity verification are performed within the TEE, while encryption, communication, and other non-sensitive operations remain in the REE. This selective deployment substantially reduces the enclave entry and exit frequencies, alleviates trusted-untrusted context-switching overhead, and ensures that strong security guarantees are achieved with minimal performance degradation.

4.2. Security Aggregation and Authentication Mechanism

4.2.1. Initialization

  • First, in Algorithm 1 the TA deployed in the TEE selects a common point G and the collision-resistant hash function (CRHF) H: {0,1}* → Z_p, and shares e_c, G, and H(·) with all federated learning users. Thus, the server and all federated learning users use the same parameters for generating and verifying integrity proofs.
  • The TA selects its private key sk_TA ∈ Z_p, calculates the corresponding public key pk_TA = sk_TA·G, and selects a random integer x. Subsequently, the TA generates a detection request for each of the N federated learning users, req_i (i ∈ [1, N]). A detection request req_i is a three-element tuple req_i = ⟨pk_TA, x, ID_i⟩, where ID_i is the identity of federated learning user P_i.
  • After consultation, all participants agree to encrypt their local model parameters using the HE function E_HE(·) and generate the public and private keys required for HE. Subsequently, the server generates the initial global model parameter w_global and distributes it to each user together with req_i.
Algorithm 1: System initialization.

4.2.2. Model Training and Upload

For each user P_k in the federated learning system, after receiving the initial global model parameter w_global from the server, the parameter is loaded into the local model and trained as described in Algorithm 2. When P_k is satisfied with the trained model, it first encrypts its local model parameter w_{P_k} using E_HE(·) to obtain the ciphertext C_k = E_HE(w_{P_k}). Subsequently, P_k generates an integrity proof for C_k.
P_k chooses a private key sk_k ∈ Z_p and the corresponding public key pk_k = sk_k·G. After receiving the request req_k, P_k first checks whether req_k is correct and then generates the response res_k. This phase consists of four basic steps:
  • User P_k checks the correctness of req_k by verifying that the user identifier (req_k.ID_k) in req_k is correct. If it is correct, then req_k is considered valid; otherwise, req_k is discarded. In a subsequent step, the integrity proof σ_k is generated using pk_TA and req_k.x.
  • User P_k chooses a random integer r_k ∈ Z_p and obtains R_k = r_k·G. Note that r_k and R_k are one-time values that will only be used once.
  • User P_k generates the signature on the ciphertext C_k of the local model parameters, denoted σ_k, which serves as the integrity proof of C_k:
    σ_k = H(pk_TA || x || R_k || C_k) · sk_k + r_k    (3)
  • User P_k generates the response res_k = ⟨pk_k, R_k, σ_k⟩ and sends it back to the TA in response to the TA's check request; it also sends C_k to the server.
Algorithm 2: Local model training and parameter upload.
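The four client-side steps above can be sketched as follows, again using a toy multiplicative group as a stand-in for the paper's elliptic-curve notation (σ_k = H(pk_TA || x || R_k || C_k)·sk_k + r_k maps to exponent arithmetic). The helper names make_proof, check_proof, and h_bind, and the hash-input encoding are our own illustrative assumptions, with demo-only parameters:

```python
# Sketch of the per-UAV integrity proof of Section 4.2.2 in a toy
# multiplicative group. Demo parameters; NOT secure for real use.
import hashlib
import secrets

P, Q, G = 2039, 1019, 4        # order-1019 subgroup of Z_2039^*

def h_bind(pk_ta: int, x: int, R: int, C: bytes) -> int:
    """H(pk_TA || x || R_k || C_k), reduced mod the group order."""
    data = b"%d|%d|%d|" % (pk_ta, x, R) + C
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def make_proof(sk_k: int, pk_ta: int, x: int, C_k: bytes):
    """UAV side: one-time r_k, commitment R_k, and proof sigma_k."""
    r_k = secrets.randbelow(Q - 1) + 1
    R_k = pow(G, r_k, P)
    sigma_k = (h_bind(pk_ta, x, R_k, C_k) * sk_k + r_k) % Q
    return pow(G, sk_k, P), R_k, sigma_k   # res_k = <pk_k, R_k, sigma_k>

def check_proof(pk_k, R_k, sigma_k, pk_ta, x, C_k):
    """Eq. (5) in multiplicative form: pk_k^h * R_k == G^sigma_k."""
    h = h_bind(pk_ta, x, R_k, C_k)
    return pow(pk_k, h, P) * R_k % P == pow(G, sigma_k, P)

sk = secrets.randbelow(Q - 1) + 1
pk_ta, x = pow(G, 7, P), 42                # TA values carried in req_k
pk_k, R_k, sigma_k = make_proof(sk, pk_ta, x, b"ciphertext C_k")
print(check_proof(pk_k, R_k, sigma_k, pk_ta, x, b"ciphertext C_k"))  # True
```

Because the hash binds pk_TA, x, R_k, and C_k together, any modification of the ciphertext in transit changes the challenge and the proof no longer verifies.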

4.2.3. Parameter Integrity Validation

In existing HE-based privacy-preserving federated learning schemes, the system directly aggregates and decrypts the encrypted parameters uploaded by participants. However, these parameters may be corrupted during transmission. In our scheme (Algorithm 3), the system first verifies the integrity of the uploaded parameters, using only those that pass verification for model aggregation. This approach enhances the trustworthiness of the federated learning system.
Algorithm 3: Local model parameter integrity validation.
To ensure the authenticity of parameters in the aggregation process, the integrity of the local model parameter ciphertexts uploaded by users must be verified. Verifying each ciphertext individually would incur significant computational overhead. Therefore, the TA uses an aggregate verification method, combining all integrity proofs (user signatures) into a single aggregated signature. This allows the TA to verify the integrity of the received ciphertexts by checking only the aggregated signature. Specifically, given a set of responses RES = {res_1, res_2, …, res_N} and local model parameter ciphertexts C = {C_1, C_2, …, C_N}, the system follows these steps to complete the aggregate verification process:
  • The REE initiates a parameter integrity verification request to the TA and sends its received ciphertexts {C_1, C_2, …, C_N} to the TA.
  • The TA aggregates the signatures σ_i and commitments R_i of each user in the response set RES to obtain σ_sum = Σ_{i=1}^{N} σ_i and R_sum = Σ_{i=1}^{N} R_i.
  • The TA performs the following calculation and checks whether the equation holds:
    σ_sum·G = Σ_{i=1}^{N} H(pk_TA || x || R_i || C_i) · pk_i + R_sum    (4)
    If Equation (4) holds, the ciphertexts received by the server for all users' local model parameters are complete, and the TA sends the message Accept to the REE, which acknowledges it and performs the ciphertext aggregation to obtain the aggregated ciphertext Ĉ = Σ_{i=1}^{N} C_i.
  • If the calculation yields that Equation (4) does not hold, then the TA starts a lookup of the corrupted local model parameters. Depending on the size of N, this scheme provides two methods of lookup.
    Sequential lookup: If N is small, the TA can verify each element in {C_1, C_2, …, C_N} individually. To verify the integrity of C_k, it checks whether
    H(pk_TA || x || R_k || C_k) · pk_k + R_k = σ_k·G    (5)
    If Equation (5) holds, the integrity of C_k is confirmed. However, as mentioned earlier, one-by-one verification incurs a high computational overhead when N is large. Note that aggregate verification can be performed on any corresponding subsets of the response set RES and the ciphertext set C, which allows the TA to divide a set of integrity proofs into multiple subsets for aggregate verification.
    Bisection lookup: When the number of users N is large, the idea of binary search can be borrowed: the response set RES is iteratively split into two equal halves, and aggregate verification is performed on each half. The specific process includes the following basic steps.
  • Given a response set RES and a ciphertext set C, first check whether Equation (4) holds. If it holds, there is no data corruption in C, and the localization process ends. If Equation (4) does not hold and |C| = 1, the single ciphertext in C is corrupted.
  • If |C| > 1 and Equation (4) does not hold, C contains multiple model parameter ciphertexts, at least one of which is corrupted. In this case, C and RES are partitioned into two corresponding subsets, denoted C⁺, C⁻, RES⁺, and RES⁻.
  • Repeat the previous two steps on (C⁺, RES⁺) and (C⁻, RES⁻) until all corrupted model parameter ciphertexts have been successfully localized.
Through this binary lookup process, all corrupted model parameter ciphertexts can be efficiently identified within a large response set. Once the corrupted parameters are located, the TA removes them from the ciphertext set and returns the updated set to the REE for normal ciphertext aggregation, yielding the result Ĉ. The aggregated decryption result remains unaffected after removing the corrupted ciphertexts.
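The bisection localization above can be sketched generically: given any batch verifier over a subset of responses and ciphertexts (the TA's aggregate check of Equation (4)), recursively split until each failing index is isolated. The function name locate_corrupted and the stand-in verifier are our own illustrative assumptions:

```python
# Bisection localization of corrupted ciphertexts (Section 4.2.3).
# `verify_subset` stands in for the TA's aggregate check of Eq. (4)
# restricted to a subset of RES and C.
def locate_corrupted(indices, verify_subset):
    """Return failing indices using roughly O(k log N) batch checks."""
    if verify_subset(indices):          # aggregate check passes: all clean
        return []
    if len(indices) == 1:               # a single failing ciphertext isolated
        return list(indices)
    mid = len(indices) // 2             # split into C+, C- and recurse
    return (locate_corrupted(indices[:mid], verify_subset) +
            locate_corrupted(indices[mid:], verify_subset))

# Usage sketch: suppose the proofs for ciphertexts 3 and 7 were corrupted.
corrupted = {3, 7}
verify = lambda idx: not (set(idx) & corrupted)   # toy aggregate verifier
print(locate_corrupted(list(range(10)), verify))  # [3, 7]
```

When only k of N ciphertexts are corrupted, this needs on the order of k·log N aggregate checks instead of N individual ones, which is why the scheme prefers it for large swarms.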

4.2.4. Parameter Consistency Check

After the REE obtains the ciphertext aggregation result C^, it sends C^ to the TEE for partial decryption. The partial decryption result is then used for aggregation decryption to obtain the new global model parameter W, which is sent to each user as described in Algorithm 4. To protect user rights, the TA verifies the consistency of the global model parameters received by all users. To prevent the REE from sending incorrect parameters to specific users, a consistency verification mechanism is introduced. This mechanism checks whether any two users have received identical global model parameters, ensuring fair distribution across all participants:
  • The TA sends a detection request req_REE = { pk_TA, x, W_I } to the REE, where W_I is the identifier of the new global model parameter W.
  • After confirming that the received req_REE is valid, the REE selects its private key sk_REE ∈ Z_p with the corresponding public key pk_REE = sk_REE · G and selects a random integer r_REE ∈ Z_p to obtain R_REE = r_REE · G.
  • The REE generates an integrity proof of W, denoted as σ_REE, which is used to perform a consistency check of the global model parameters, where
    σ_REE = H(pk_TA || x || R_REE || W) · sk_REE + r_REE    (6)
    and sends res_REE = <pk_REE, R_REE, σ_REE, W> to each user.
  • After user P_k receives res_REE from the REE, it first verifies the integrity of W by checking
    H(pk_TA || x || R_REE || W) · pk_REE + R_REE = σ_REE · G    (7)
    If Equation (7) holds, then P_k determines that its received W is complete and subsequently sends σ_REE to the TA for consistency verification.
  • Let the signatures received by the TA from any two different users P_k and P_r be σ_REE^k and σ_REE^r, respectively. Since all users obtain the same global model parameters in the normal case, σ_REE^k = σ_REE^r certifies that the two users received the same global model parameters.
Algorithm 4: Global model parameter consistency check.
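The consistency-check flow can be illustrated with a small sketch. We stand in for elliptic-curve points with a toy additive group of integers modulo a prime (a "point" is s·G mod p), so the algebra of Equations (6) and (7) carries over, but the construction is insecure and for illustration only; all constants and names below are our assumptions.

```python
import hashlib
import secrets

p = 2**256 - 189   # a 256-bit prime; toy group, not a real elliptic curve
G = 7              # insecure stand-in for the EC base point

def H(*parts) -> int:
    """Hash of pk_TA || x || R_REE || W, mapped to a scalar mod p."""
    data = b"||".join(str(q).encode() for q in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

# The REE signs the global model W once (Eq. (6)) and sends the same
# res_REE = <pk_REE, R_REE, sigma_REE, W> to every user.
sk_REE = secrets.randbelow(p); pk_REE = sk_REE * G % p
r_REE = secrets.randbelow(p);  R_REE = r_REE * G % p
pk_TA, x, W = 123, 456, "serialized-global-model"     # placeholder values
sigma_REE = (H(pk_TA, x, R_REE, W) * sk_REE + r_REE) % p

def user_check(W_recv):
    """Each user verifies Eq. (7) on its received copy of W."""
    return (H(pk_TA, x, R_REE, W_recv) * pk_REE + R_REE) % p == sigma_REE * G % p

def ta_consistent(reported_sigmas):
    """The TA flags inconsistency if any two users report different sigma_REE."""
    return len(set(reported_sigmas)) == 1
```

A user holding a tampered W fails `user_check`, while a REE that distributes different parameters to different users must also distribute different signatures, which `ta_consistent` detects.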

5. Security Analysis

We analyze the end-to-end security of VHEFL under the assumption that the server is untrusted and UAVs may be malicious.
Confidentiality: Each UAV encrypts its gradient using additive HE before uploading. The decryption key is held exclusively by each UAV and is never shared with the server or the TEE. The server performs aggregation on ciphertexts in the REE and never accesses plaintext. This ensures end-to-end confidentiality from upload to download.
Authenticity and integrity: Each UAV signs its ciphertext. All signatures are aggregated into a single aggregate signature in the REE and verified inside the TEE. Any tampering with the ciphertext or aggregation process leads to verification failure. This ensures the end-to-end authentication and integrity of all participating updates.
TEE-mediated decryption clarification: Decryption is not performed inside the TEE. The TEE only verifies signatures and does not hold the homomorphic decryption key. This clarifies our trust model; the TEE is trusted for verification correctness and not for confidentiality.

5.1. Correctness

5.1.1. Correctness of the Individual Verification Method

In Section 4, Equations (5) and (7) use the individual verification method. This section uses Equation (5) as an example for the correctness analysis of this method. In Equation (5), σ_k is the signature generated by federated learning user P_k from its private key and the local model parameter ciphertext C_k via Equation (3). Thus, σ_k is the basis for integrity verification. If the model parameter ciphertext received by the server is not corrupted, then the following should hold:
H(pk_TA || x || R_k || C_k) · pk_k + R_k = H(pk_TA || x || R_k || C_k) · sk_k · G + r_k · G = [H(pk_TA || x || R_k || C_k) · sk_k + r_k] · G = σ_k · G    (8)
Equation (8) shows that the individual verification method in the scheme is correct.
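Equation (8) can be replayed numerically. The sketch below uses a toy additive group of integers modulo a prime in place of elliptic-curve points (a "point" is s·G mod p), so the identity balances term by term exactly as in Equations (3), (5), and (8); the group, constants, and function names are our assumptions, and the construction is deliberately insecure (discrete logs are trivial here).

```python
import hashlib
import secrets

p = 2**256 - 189   # a 256-bit prime; toy group, illustration only
G = 7              # stand-in for the EC base point

def H(*parts) -> int:
    """H(pk_TA || x || R_k || C_k) mapped to a scalar mod p."""
    data = b"||".join(str(q).encode() for q in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

def keygen():
    sk = secrets.randbelow(p)
    return sk, sk * G % p          # (sk_k, pk_k = sk_k * G)

def sign(sk, pk_TA, x, C):
    """User-side integrity proof of ciphertext C, as in Eq. (3)."""
    r = secrets.randbelow(p)
    R = r * G % p
    sigma = (H(pk_TA, x, R, C) * sk + r) % p
    return R, sigma

def verify(pk, pk_TA, x, R, sigma, C):
    """TA-side check of Eq. (5): H(.)·pk_k + R_k == sigma_k·G."""
    return (H(pk_TA, x, R, C) * pk + R) % p == sigma * G % p

sk, pk = keygen()
pk_TA, x, C = 11, 22, "encrypted-gradient"   # placeholder values
R, sigma = sign(sk, pk_TA, x, C)
assert verify(pk, pk_TA, x, R, sigma, C)           # intact ciphertext passes
assert not verify(pk, pk_TA, x, R, sigma, "bad")   # tampering is detected
```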

5.1.2. Correctness of the Aggregate Verification Method

The aggregate verification method is used in Equation (4) in Section 4.2.3. Given the response set RES and the ciphertext set C of size N, together with the sums σ_sum and R_sum of all users' signature values, if all the model parameter ciphertexts received by the server are intact, then a proof similar to that of the individual verification method holds:
∑_{i=1}^{N} H(pk_TA || x || R_i || C_i) · pk_i + R_sum = ∑_{i=1}^{N} H(pk_TA || x || R_i || C_i) · pk_i + ∑_{i=1}^{N} R_i = ∑_{i=1}^{N} H(pk_TA || x || R_i || C_i) · sk_i · G + ∑_{i=1}^{N} r_i · G = ∑_{i=1}^{N} [H(pk_TA || x || R_i || C_i) · sk_i + r_i] · G = ∑_{i=1}^{N} σ_i · G = σ_sum · G    (9)
Equation (9) demonstrates that the aggregate verification method in the scheme is correct.
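The aggregate check of Equation (9) can likewise be checked numerically: summing the per-user σ_i and R_i lets a single comparison verify N proofs at once. The sketch again replaces elliptic-curve points with a toy additive group of integers modulo a prime; the group, constants, and names are our assumptions, not a secure implementation.

```python
import hashlib
import secrets

p = 2**256 - 189   # a 256-bit prime; toy additive group, illustration only
G = 7              # stand-in for the EC base point

def H(*parts) -> int:
    data = b"||".join(str(q).encode() for q in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

def keygen():
    sk = secrets.randbelow(p)
    return sk, sk * G % p

def sign(sk, pk_TA, x, C):
    r = secrets.randbelow(p)
    R = r * G % p
    return R, (H(pk_TA, x, R, C) * sk + r) % p     # per-user proof, Eq. (3)

def aggregate_verify(pks, Rs, sigmas, Cs, pk_TA, x):
    """One-shot check of Eq. (4)/(9): sum_i H(.)·pk_i + R_sum == sigma_sum·G."""
    sigma_sum = sum(sigmas) % p
    R_sum = sum(Rs) % p
    lhs = (sum(H(pk_TA, x, R, C) * pk for pk, R, C in zip(pks, Rs, Cs)) + R_sum) % p
    return lhs == sigma_sum * G % p

pk_TA, x = 11, 22                                   # placeholder values
keys = [keygen() for _ in range(5)]
Cs = [f"cipher-{i}" for i in range(5)]
proofs = [sign(sk, pk_TA, x, C) for (sk, _), C in zip(keys, Cs)]
pks = [pk for _, pk in keys]
Rs = [R for R, _ in proofs]
sigmas = [s for _, s in proofs]
assert aggregate_verify(pks, Rs, sigmas, Cs, pk_TA, x)      # all intact: passes
Cs[2] = "corrupted"
assert not aggregate_verify(pks, Rs, sigmas, Cs, pk_TA, x)  # any corruption: fails
```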

5.2. Anti-Forgery Attacks

Theorem 1. 
Assuming that the cryptographic hash function H(·) is collision-resistant and the ECDLP complexity assumption holds, model parameter ciphertexts or ciphertext integrity proofs forged by an external attacker cannot pass verification except with negligible probability.
Proof. 
The proof of the theorem needs to consider the following two scenarios: (1) attacker A intercepts the ciphertext C_k of the model parameters of user P_k and generates a forged ciphertext C′_k, which it sends to the server, and (2) attacker A intercepts the integrity proof σ_k of P_k and generates a forged data integrity proof σ′_k:
  • Scenario 1: In this case, A obtains the forged ciphertext C′_k by corrupting C_k or by generating it independently and sends it to the server. According to Equation (5), the necessary condition for A to pass the verification at the TA is that the following equality holds:
    H(pk_TA || x || R_k || C′_k) = H(pk_TA || x || R_k || C_k)    (10)
    However, since C′_k ≠ C_k, if Equation (10) holds, then H(·) produces the same hash value for two different inputs, which conflicts with the fact that H(·) is collision-resistant.
  • Scenario 2: In this case, A chooses and generates its own public-private key pair (pk_A, sk_A) and a random integer r_A. A then intercepts a valid proof σ_k of user P_k and forges an integrity proof for P_k based on σ_k:
    σ′_k = H(pk_TA || x || R_k || C_k) · sk_A + r_A    (11)
    According to Equations (3) and (5), for the forged σ′_k to pass the verification of the TA, it is necessary that sk_A = sk_k and r_A = r_k, because the hash function H(·) is a one-way function whose input contains R_k, and thus neither the hash value nor R_k in Equation (3) can be changed. In order to forge the integrity proof σ_k, A must know sk_k and r_k. However, as mentioned earlier, sk_k and r_k are held privately and undisclosed by P_k, and the attacker A can only obtain the corresponding public values pk_k and R_k. If A had a non-negligible advantage in recovering sk_k and r_k from pk_k and R_k, respectively, then this would violate the ECDLP complexity assumption. In summary, our scheme guarantees security against forgery attacks. □

5.3. Anti-Collusion Attack

The aggregation algorithm of the verification scheme must also be reliable against collusion attacks. According to Theorem 1, individual integrity proofs are unforgeable under the ECDLP assumption, so the individual verification method is reliable. This section proves the reliability of the aggregate verification method in the scheme.
Theorem 2.
Assuming that H ( · ) is a collision-resistant hash function, given the aggregation verification algorithm of our scheme, an aggregation integrity proof generated by a set containing at least one invalid integrity proof is also invalid.
Proof. 
Assume that A is an adversary capable of launching a collusion attack. We construct a simulator B to respond to A ’s queries, and the interaction between A and B proceeds as follows:
(1) Initialization phase: Given a common point G on an elliptic curve ec and a collision-resistant hash function H : {0, 1}* → Z_p, B simulates the initialization algorithm of the TA to generate a public-private key pair (pk_B, sk_B) and chooses a random integer x. Subsequently, B generates a request set REQ = {req_1, …, req_N} and sends REQ to A.
(2) Query stage: Given the above public parameters, A performs the following operations:
  • Key query: Upon receiving such a query from A on user P_k, B runs the key generation algorithm to generate a key pair (pk_k, sk_k) and returns it to A.
  • Forgery: Using the received keys, A forges individual integrity proofs {σ_1, …, σ_N} on the ciphertext set C = {C_1, …, C_N} under the public keys {pk_1, …, pk_N}. Subsequently, A generates the forged aggregate integrity proof σ_sum and R_sum.
  • Aggregate verification query: A generates a response set RES = {res_1, …, res_N} using the key pairs received for each user and sends RES together with the ciphertext set C to B for an aggregate verification query. B receives the response and simulates the aggregate verification algorithm to determine whether σ_sum is valid. B then returns the validation result to A.
A can win the game if the following requirements are met:
(1) σ_sum is a valid aggregate integrity proof, which means that
σ_sum · G = ∑_{i=1}^{N} H(pk_B || x || R_i || C_i) · pk_i + R_sum    (12)
(2) At least one honest user P_h does not collude with A, and thus, by Theorem 1, the individual integrity proof σ_h that A supplies for P_h fails to pass the verification algorithm. Then, we have
σ_h · G ≠ H(pk_B || x || R_h || C_h) · pk_h + R_h = [H(pk_B || x || R_h || C_h) · sk_h + r_h] · G    (13)
Suppose P_1 is the only honest user, so that σ_1 denotes the forged invalid individual integrity proof. Then, if σ_sum is valid, it means that
∑_{i=1}^{N} H(pk_B || x || R_i || C_i) · pk_i + R_sum = ∑_{i=1}^{N} H(pk_B || x || R_i || C_i) · sk_i · G + ∑_{i=1}^{N} r_i · G = ∑_{i=1}^{N} [H(pk_B || x || R_i || C_i) · sk_i + r_i] · G = ∑_{i=1}^{N} σ_i · G    (14)
From Equation (13), we have σ_1 · G ≠ [H(pk_B || x || R_1 || C_1) · sk_1 + r_1] · G, and thus the chain of equalities in Equation (14) cannot hold together with Equation (12), or else it would conflict with the collision resistance of H(·). Therefore, a valid aggregate integrity proof can be generated only when all valid individual integrity proofs are included as inputs to the aggregation algorithm. Otherwise, verification fails. In summary, our scheme guarantees security in the face of collusion attacks. □

5.4. Anti-Replay Attack Security

Theorem 3.
In the federated learning process, if H ( · ) is a collision-resistant hash function, then replay attacks cannot succeed. Specifically, if an attacker responds to the server and TA with outdated integrity proofs, then the response will fail verification.
Proof. 
We define a security game that occurs during the parameter integrity verification phase to simulate a replay attack. The game involves a challenger C (the TA) and an adversary A. Given a system with N federated learning users, whose model ciphertexts must be inspected, C initiates the integrity check by broadcasting a challenge to all users. In response, the users generate and return their respective integrity proofs during the verification phase.
Suppose, however, that certain users are subjected to a replay attack. Notably, as shown in Equation (3), a user is required to incorporate the parameter x provided by C with the ciphertext of its local model parameters to generate the integrity proof σ. Consequently, the value of x varies across different inspection rounds. Suppose that A forces user P_j to reuse a stale response res′_j = <pk′_j, R′_j, σ′_j>, where the response was generated based on a stale value x′. Meanwhile, the other users respond correctly to the TA by following the procedures detailed in Section 4.2.2. Assuming that all other local model parameter ciphertexts C_i (i ∈ [N], i ≠ j) are correct, we have
Result_1 = σ_sum · G = ∑_{i ≠ j} σ_i · G + σ′_j · G = [∑_{i ≠ j} H(pk_TA || x || R_i || C_i) · sk_i + ∑_{i ≠ j} r_i + H(pk_TA || x′ || R′_j || C_j) · sk_j + r′_j] · G = ∑_{i ≠ j} H(pk_TA || x || R_i || C_i) · pk_i + ∑_{i ≠ j} R_i + H(pk_TA || x′ || R′_j || C_j) · pk_j + R′_j
and
Result_2 = ∑_{i=1}^{N} H(pk_TA || x || R_i || C_i) · pk_i + R_sum
The replayed response of A passes the check if and only if Result_1 equals Result_2, which implies that
H(pk_TA || x || R′_j || C_j) · pk_j + R′_j = H(pk_TA || x′ || R′_j || C_j) · pk_j + R′_j
Since all other parameters are reused, we have pk_j = pk′_j and R_j = R′_j. However, x and x′ are different. In this case, the hash function H(·) would have to produce the same hash value for two different inputs, which contradicts the collision resistance property of H(·). Therefore, A fails to pass the verification. This completes the proof of Theorem 3, demonstrating that our scheme ensures security against replay attacks. □

5.5. Discussion on SGX Side Channel Risks

We acknowledge that TEEs such as Intel SGX may be vulnerable to microarchitectural side channel attacks, including cache timing and Spectre-type speculative execution attacks. In our design, such attacks may threaten the confidentiality of enclave-resident intermediate states, but they do not compromise the HE secret key because it is never stored or used inside the enclave. To reduce the practical risk of Spectre-type leakage, our implementation follows common hardening practices, including keeping the enclave code minimal, applying vendor-recommended SGX SDK or microcode mitigations, and avoiding secret-dependent branches and memory accesses in enclave cryptographic routines (i.e., constant-time implementations). Defending against advanced physical side channel attacks (e.g., power analysis) is orthogonal to the protocol design and is considered out of scope in this work.

6. Experiments and Analysis

6.1. Efficiency Analysis

This section analyzes the computational overhead of the proposed scheme to evaluate its efficiency. It focuses on the initialization, parameter uploading, integrity verification, and consistency checking phases, specifically examining the overhead of point addition, point multiplication, and hash operations on the elliptic curve ec. Algebraic addition and multiplication overheads are not included, as they are negligible compared with the point and hash operations. The analysis results, assuming n model parameter ciphertexts, are shown in Table 1.
Table 1 shows that our scheme did not impose excessive computational overhead on the federated learning server or the user. In the parameter upload phase, the generation of integrity proofs for the model parameter ciphertexts imposed the main computational overhead on the user. Each user performed only one hash operation, one point addition, and two point multiplications, which guaranteed the efficiency of this scheme on the user side.
In the integrity verification phase, this scheme used the aggregate verification method to check n model parameter ciphertexts with only n + 1 point multiplications. Compared with individual verification, it replaced nearly half of the point multiplications with the less costly point additions, thus reducing computation. As a result, the aggregate-signature-based verification imposed only minor overhead on the server, ensuring efficiency.
The sequential localization method in the integrity verification phase had the same computational overhead as individual verification, and thus only the bisection-based localization method was analyzed. Table 1 examines the worst-case scenario in which all user model parameter ciphertexts are corrupted. In most practical cases, only a few ciphertexts are corrupted, leading to performance similar to aggregate verification, as shown in the next section's experiments. The consistency verification phase combines the first three phases, with both the server and the user operating on a single aggregated parameter ciphertext whose cost is unaffected by n.

6.2. Experiment

We systematically evaluated the proposed algorithm from five perspectives: accuracy, real-time performance, energy consumption, communication, and security. For accuracy, we employed the UAV123 aerial video dataset to validate the algorithm's performance under dynamic conditions, including scale variation, viewpoint changes, and motion blur. In terms of real-time performance, we recorded the per-frame processing time on an embedded platform, and the experimental results demonstrate that the method met the real-time requirements for UAV vision tasks. Furthermore, based on the high-fidelity simulation environment provided by AirSim (version 1.8.1), we monitored energy consumption during flight, measured the end-to-end transmission delay and data volume for image transmission, and further evaluated the computational overhead of the algorithm.
Building on this evaluation framework, this section assesses the effectiveness and efficiency of our federated learning parameter integrity verification scheme through simulation experiments. Unlike our approach, most existing methods probabilistically verify data integrity by randomly selecting a subset of data blocks from each parameter message and generating integrity proofs for each sampled block using either RSA-based homomorphic functions or BLS signatures, which are then transmitted to the server for verification. We compared our scheme with two representative approaches—RSA-based [43] and BLS-based [44] schemes—to evaluate its performance. Effectiveness is defined as the accuracy in detecting corruption in model parameter messages, with a higher detection probability indicating better performance, while efficiency was measured by computational overhead, where a lower computation time indicates better efficiency.

6.2.1. Experimental Set-Up

We evaluated the proposed VHEFL scheme in a large-scale simulated UAV edge computing environment using AirSim and Gazebo (version 11.1.0), which were integrated with ROS 2. The entire testbed was deployed on a high-performance server cluster (8× NVIDIA V100 GPUs, 256 GB RAM), where N virtual UAV nodes were instantiated as containerized edge computing instances, each of which was constrained to 4 vCPUs and 4 GB of RAM to emulate the resource limitations of typical onboard platforms, such as Raspberry Pi 4.
In our evaluations, we also examined the computational latency of Schnorr aggregate signature generation and verification in this simulated UAV edge environment. Each UAV node generated an individual signature for its model update, with an average generation latency of 12.8 ms, based on representative values for elliptic curve operations on ARM Cortex-A72 class devices designed by ARM Limited, Cambridge, UK. On the server side, we measured the aggregate verification latency for 50 concurrent UAVs, which was 9.4 ms on the same high-performance server cluster. This reduced the verification cost from O ( N ) to O ( 1 ) , with an amortized per-UAV verification overhead of 0.19 ms, bringing the total per-UAV cryptographic overhead to 13.0 ms.
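The amortization arithmetic quoted above checks out directly (the latency figures are taken from this section; the variable names are ours):

```python
# Figures reported in this section: per-UAV signing latency and one
# aggregate verification covering all 50 concurrent UAVs.
sign_ms = 12.8
agg_verify_ms = 9.4
n_uavs = 50

amortized_ms = agg_verify_ms / n_uavs   # per-UAV share of the single aggregate check
total_ms = sign_ms + amortized_ms       # total per-UAV cryptographic overhead

assert round(amortized_ms, 2) == 0.19
assert round(total_ms, 1) == 13.0
```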
A key design consideration in our server-side implementation is the separation of out-of-enclave computation. Specifically, the aggregation of model updates is performed entirely in the REE outside the SGX enclave, while the enclave is used exclusively for secure validation of the aggregated model’s integrity and authenticity, namely signature verification. This approach reduces the memory footprint inside the enclave and mitigates the risk of enclave page cache overflow. By confining trusted execution to critical verification tasks only, the system ensures robust protection of sensitive data while avoiding performance degradation caused by the enclave’s limited memory capacity.
We explicitly modeled UAV-to-ground communication links based on typical broadband wireless connections (Wi-Fi bridge) that are commonly deployed on commercial UAV platforms. The bandwidth was configured to be in the range of 1–10 Mbps, with round-trip times between 5 and 20 ms and packet loss rates from 0 to 5.4%. Each transmission payload was determined by the actual ciphertext and signature sizes of VHEFL. Under these constraints, we measured the per-round uplink and downlink latency and system throughput.
The security of CKKS parameters is primarily determined by the polynomial modulus degree (N) and the total bit length of the coefficient modulus ( log q ). We selected the parameter set with N = 16,384 and log q = 438 , as summarized in Table 2, which achieved a 128-bit security level and aligned with NIST recommendations for long-term security. Compared with N = 8192 , this configuration offered stronger resistance to quantum attacks, while compared with N = 32,768, it reduced the communication overhead by approximately 4 × and the computational overhead by 2.3 × , making it more suitable for the bandwidth and computational constraints of UAV edge networks.
Precision loss in CKKS primarily stems from scaling rounding and noise accumulation. With N = 16,384, the MAE decreased from 1.23 × 10 4 to 3.56 × 10 5 , a 3.5× improvement over N = 8192 , which restored the model accuracy from 98.95% to 99.22%. Setting the depth to three degraded the precision by an order of magnitude and reduced the accuracy by 1.38 percentage points, justifying our choice of two for the depth. Although N = 32,768 offered higher precision, it increased the computational and communication overhead by 131% and 300%, respectively, with diminishing returns, making it unsuitable for resource-constrained UAV edge networks. These results are summarized in Table 3.
For the RSA-based scheme, a 512-bit prime was used to generate the integrity proof, with a primality-testing error probability of 2^{-64}. Both the BLS-based scheme and our scheme used SHA-256 for integrity proof generation and the secp256k1 elliptic curve, which provides security comparable to 3072-bit RSA and DSA but with 256-bit points. The curve is defined by ec: y² = x³ + 7 over the prime field with p = 2^256 − 2^32 − 2^9 − 2^8 − 2^7 − 2^6 − 2^4 − 1. To simulate corrupted model parameter messages, 1% of the data blocks (each 16 KB) was altered to random values based on the local model size.
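As a sanity check of the secp256k1 parameters quoted above, one can confirm that the standard generator lies on ec: y² = x³ + 7 over the stated prime field. The generator coordinates below are the published SEC 2 values, which the paper itself does not list:

```python
# secp256k1 field prime as given in the text
p = 2**256 - 2**32 - 2**9 - 2**8 - 2**7 - 2**6 - 2**4 - 1

# Standard generator coordinates from SEC 2 (assumed, not from the paper)
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

# The curve equation y^2 = x^3 + 7 must hold modulo p for a valid point
assert (Gy * Gy - (Gx**3 + 7)) % p == 0
print("secp256k1 generator verified on y^2 = x^3 + 7")
```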

6.2.2. Model Accuracy Analysis

We compared the model accuracy of the proposed scheme with those of RSA-based and BLS-based federated learning schemes across different aggregation rounds, as shown in Figure 2. The proposed scheme achieved approximately 3% higher accuracy, converged faster in the early rounds, and maintained its advantage until all schemes stabilized after about 30 rounds. The accuracy improvement mainly resulted from three key advantages of our UAV-oriented verifiable homomorphic encryption federated learning framework: (1) precise homomorphic ciphertext aggregation that preserved gradient fidelity without introducing additional distortion; (2) efficient REE-based aggregation and TEE-based batch verification, which reduced aggregation latency and mitigated stale gradient effects; and (3) low-overhead identity authentication that alleviated the computational burden on edge devices, allowing more resources to be allocated to local model training. These advantages collectively enhanced the training stability and convergence efficiency.

6.2.3. Energy Efficiency

In this experiment, the energy efficiency is denoted by η, defined as the amount of effective information processed per unit energy, η = B_eff / E_tot, where B_eff represents the effective information completed under a given system scale and E_tot denotes the corresponding total energy consumption. A larger η indicates higher energy utilization efficiency. The horizontal axis SF denotes the system scale parameter and characterizes the expansion of task scale as its value increases. As shown in Figure 3, with the increase in SF, the η of the proposed scheme gradually decreased from approximately 1.50 and stabilized around 1.38, indicating that the energy overhead grew slightly faster than the effective information gain as the system scale increased. In contrast, the η values of HE + TEE, HE + Aggsign, and TEE + Aggsign exhibited an increasing trend with respect to SF. Nevertheless, throughout the entire tested range, the proposed scheme consistently achieved the highest η values. These results demonstrate that the proposed scheme maintained the overall best energy utilization efficiency across different system scales.

6.2.4. Experimental Analysis of Scheme Usability

Figure 4 illustrates the accuracy performance of the four schemes across different aggregation rounds. As the number of aggregation rounds increased from 20 to 140, the proposed scheme consistently achieved the highest accuracy and exhibited steady improvement, rising from approximately 95.2% to about 99.2%. It not only converged faster but also attained significantly higher final accuracy than the other schemes. In contrast, HE + TEE and HE + Aggsign showed gradual improvements but remained noticeably below the proposed scheme, while TEE + Aggsign remained nearly unchanged and exhibited a limited convergence capability. These results demonstrate that the proposed scheme can more effectively integrate model updates while maintaining security guarantees, thereby accelerating convergence and achieving superior overall performance and robustness.
As shown in Figure 5, the computational overhead of the RSA-based and BLS-based schemes increased rapidly with the sampling size. When the sample size reached 400, their accuracy approached that of our scheme, but the computational overhead was too high for efficient inspection. In contrast, the time consumption of our scheme remained stable as the sample size increased, highlighting its better usability. A detailed analysis of its efficiency will be provided in the next section.
Figure 6 illustrates the accuracy performance of the four schemes under different malicious client ratios α (%), evaluating their robustness against adversarial updates. As α increased from 0 % to 30 % , the accuracy of HE + Aggsign and TEE + Aggsign declined significantly, with TEE + Aggsign experiencing the most severe degradation, eventually dropping to around 62 % , which indicates high sensitivity to malicious updates. HE + Aggsign also showed a continuous downward trend, suggesting limited robustness under high attack intensities. In contrast, HE + TEE exhibited only a slight decrease in accuracy, demonstrating moderate stability. Notably, the proposed scheme consistently achieved the highest accuracy across the entire range of attack ratios, with only a minimal performance drop. This indicates that the proposed scheme can effectively mitigate the impact of malicious updates and maintain stable model performance, even under strong adversarial conditions.
As shown in Table 4, the per-round communication cost of all schemes remained at a comparable scale under K = 50 clients. Our scheme incurred 5.14 × 10 8 bits per round, which was close to the other schemes. Although the RSA-based and BLS-based schemes exhibited slightly higher communication overhead, the differences remained marginal. This indicates that the proposed design does not introduce excessive communication burden compared to existing secure aggregation mechanisms.
Table 5 analyzes the retransmission overhead under unstable air-to-ground links. As the packet loss rate increased from 3 % to 9 % , the total communication cost of our scheme rose moderately from 5.29 × 10 8 to 5.59 × 10 8 bits. The overhead grew approximately linearly with the packet loss rate, validating the retransmission model. Importantly, even under packet loss conditions, the overall communication remained within the same order of magnitude, demonstrating stable behavior in unreliable wireless environments.
Table 6 provides a detailed per-round time breakdown. All schemes exhibited comparable performance in terms of communication time, enclave transition latency, and HE operation time. However, our scheme substantially reduced the time cost of the aggregation and verification phase compared with the other methods. Overall, the communication overhead of the proposed scheme was comparable to that of existing methods, while it achieved the lowest total system latency, mainly owing to its efficient aggregation and verification mechanism. The results show that the proposed design maintained both communication efficiency and low computational overhead.
During the parameter upload phase, each user generated an integrity proof for its model parameter message and sent it to the server. Our scheme incurred computational overhead based on the message size, as it generated proofs for the entire message, unlike the RSA- and BLS-based schemes, which only sampled a few data blocks. With a fixed sampling ratio, the overhead of the RSA- and BLS-based schemes remained constant. Figure 7 shows the average overhead when the message size increased from 4 MB to 128 MB, with 128 users and a sample size of 100. The results indicate that our scheme had significantly lower overhead than the RSA- and BLS-based schemes, taking only 5.7 ms on average, compared with 59.1 ms and 708.2 ms for the RSA and BLS schemes, respectively. This confirms the efficiency of our scheme at the user side.

7. Conclusions

We proposed a secure and verifiable federated learning framework for UAV edge environments. The framework integrates HE, a TEE, and an aggregate signature mechanism to ensure confidentiality, integrity, and robustness against malicious server behavior. The HE secret key remains exclusively on UAV devices, preserving model privacy without relying on enclave trust. The experimental results demonstrate that the proposed scheme achieved a communication overhead comparable to existing methods. At the same time, it significantly reduced aggregation and verification latency, resulting in the lowest total per-round execution time. Retransmission analysis further confirmed stable performance under unstable air-to-ground links. Overall, the framework effectively balances security, efficiency, and practicality for resource-constrained, UAV-assisted federated learning scenarios.

Author Contributions

Conceptualization, H.S.; methodology, S.H.; software, S.H.; validation, W.Z. and H.Z.; formal analysis, W.Z.; investigation, H.Z.; resources, X.Z.; data curation, S.H.; writing—original draft preparation, H.S. and Y.Z.; writing—review and editing, H.S. and Y.Z.; visualization, W.Z.; supervision, M.L.; project administration, M.L. and X.Z.; funding acquisition, M.L. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Nanjing University of China’s “Mobile Joint” Research Institute Project (NJ20250043); the National Natural Science Foundation of China (L2324126); and the Undergraduate Innovation and Entrepreneurship Program (Project Title: Applications of Secret Sharing–Based Privacy-Preserving Federated Learning; XJDC202510300547).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

Author Xiaoyang Zhou was employed by the company China Mobile Zijin (Jiangsu) Innovation Research Institute. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figure 1. System architecture of trusted federated learning based on aggregate signatures and HE.
Figure 2. Model accuracy comparison.
Figure 3. Energy efficiencies of our scheme, HE + TEE, HE + Aggsign, and TEE + Aggsign.
Figure 4. Comparison of test accuracy.
Figure 5. Comparison of overall computational overhead.
Figure 6. Accuracy under different malicious client ratios of four schemes.
Figure 7. Comparison of computational overhead in the parameter upload phase.
Table 1. The main computation costs of the algorithm.

| Stage | Dot Addition Operation | Dot Multiplication | Hash Operation |
|---|---|---|---|
| Initialization Stage | 0 | 1 | 0 |
| Parameter Upload Stage | 1 | 2 | 1 |
| Individual Verification (IVS) | n | 2n | n |
| Aggregation Verification (IVS) | 2n | n + 1 | n |
| Localization (IVS) | 2 log₂ n | 2n | n |
| Coherence Check Phase | 2 | 5 | 2 |
Table 2. Security and performance comparison of candidate CKKS parameter sets.

| Parameter Set | Polynomial Modulus Degree N | Total Bit Length log₂ q | Security Level | Ciphertext Size (KB) | Applicable Scenario |
|---|---|---|---|---|---|
| P-8192 | 8192 | 180 | 80-bit | 256 | Lightweight edge nodes |
| P-16,384 | 16,384 | 438 | 128-bit | 1024 | Our work |
| P-32,768 | 32,768 | 880 | 256-bit | 4096 | High security requirements |
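Choosing among these sets is a security/cost trade-off. The following hypothetical helper hard-codes the Table 2 values; the `select_params` name and structure are assumptions for illustration, not an API from the paper.

```python
# CKKS parameter sets from Table 2 (values copied from the table).
PARAM_SETS = {
    "P-8192":  {"N": 8192,  "log2_q": 180, "security_bits": 80,  "ct_kb": 256},
    "P-16384": {"N": 16384, "log2_q": 438, "security_bits": 128, "ct_kb": 1024},
    "P-32768": {"N": 32768, "log2_q": 880, "security_bits": 256, "ct_kb": 4096},
}

def select_params(min_security_bits):
    # Pick the cheapest (smallest-ciphertext) set meeting the requirement.
    eligible = [(v["ct_kb"], name) for name, v in PARAM_SETS.items()
                if v["security_bits"] >= min_security_bits]
    if not eligible:
        raise ValueError("no parameter set meets the security requirement")
    return min(eligible)[1]

print(select_params(128))  # P-16384, the set used in this work
```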
Table 3. Decryption precision under different CKKS parameter configurations.

| Parameter Configuration | Scaling Factor | Mean Absolute Error (MAE) | Max Relative Error | Model Accuracy |
|---|---|---|---|---|
| Baseline (Plaintext) | N/A | 0 | 0% | 99.31% |
| P-8192, Depth = 2 | 2^40 | 1.23 × 10⁻⁴ | 2.1% | 98.95% |
| P-16,384, Depth = 2 | 2^40 | 3.56 × 10⁻⁵ | 0.6% | 99.22% |
| P-16,384, Depth = 3 | 2^40 | 2.18 × 10⁻⁴ | 3.8% | 97.84% |
| P-32,768, Depth = 2 | 2^45 | 8.92 × 10⁻⁶ | 0.2% | 99.28% |
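The scaling factor Δ sets the fixed-point resolution of CKKS encoding: larger Δ means finer precision but consumes more modulus budget per multiplication. The sketch below models only the encoding (rounding) error, not actual CKKS or the rescaling noise that accumulates with depth (visible in the Depth = 3 row); it is an illustrative approximation.

```python
import random

def quantize(x, scaling_factor):
    # CKKS-style fixed-point encoding: round to the nearest multiple of 1/Delta.
    return round(x * scaling_factor) / scaling_factor

random.seed(0)
delta = 2 ** 40                     # scaling factor used by most rows of Table 3
values = [random.uniform(-1, 1) for _ in range(10_000)]
mae = sum(abs(v - quantize(v, delta)) for v in values) / len(values)

# Per-value rounding error is at most 1/(2 * Delta), so the MAE stays below 2^-40.
print(mae < 2 ** -40)  # True
```

The measured MAEs in Table 3 are orders of magnitude larger than this bound because ciphertext noise, not encoding, dominates after homomorphic operations.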
Table 4. Communication cost per round (K = 50 clients).

| Scheme | Upload (bit) | Download (bit) | Total per Round (bit) |
|---|---|---|---|
| Our scheme | 8.9 × 10⁶ | 1.38 × 10⁶ | 5.14 × 10⁸ |
| HE + TEE | 8.6 × 10⁶ | 1.35 × 10⁶ | 4.98 × 10⁸ |
| RSA-based | 9.1 × 10⁶ | 1.40 × 10⁶ | 5.25 × 10⁸ |
| BLS-based | 9.4 × 10⁶ | 1.45 × 10⁶ | 5.43 × 10⁸ |
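The Total column follows from K × (upload + download) per client. A quick sketch reproducing the totals from the per-client figures in Table 4:

```python
K = 50  # clients per round

# (upload_bits, download_bits) per client, from Table 4
per_client = {
    "Our scheme": (8.9e6, 1.38e6),
    "HE + TEE":  (8.6e6, 1.35e6),
    "RSA-based": (9.1e6, 1.40e6),
    "BLS-based": (9.4e6, 1.45e6),
}

def total_per_round(upload, download, clients=K):
    # Total traffic per round: every client both uploads and downloads once.
    return clients * (upload + download)

for name, (up, down) in per_client.items():
    print(f"{name}: {total_per_round(up, down):.3g} bit")
```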
Table 5. Retransmission overhead of our scheme under different packet loss rates (K = 50).

| Packet Loss Rate | Extra Overhead (bit) | Total per Round (bit) |
|---|---|---|
| 3% | 1.49 × 10⁷ | 5.29 × 10⁸ |
| 6% | 2.99 × 10⁷ | 5.44 × 10⁸ |
| 9% | 4.48 × 10⁷ | 5.59 × 10⁸ |
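The extra overhead grows nearly linearly with the loss rate, consistent with a first-order model in which a fraction p of the per-round traffic is retransmitted once. The base value below is fitted to the table and is an assumption, not a figure stated in the paper.

```python
BASE_TRAFFIC = 4.98e8   # assumed retransmittable payload per round (bits), fitted to Table 5

def extra_overhead(loss_rate, base=BASE_TRAFFIC):
    # First-order estimate: a fraction `loss_rate` of the payload is resent.
    # Ignores losses of the retransmissions themselves; for small p the exact
    # expectation base * p / (1 - p) differs only marginally.
    return loss_rate * base

for p in (0.03, 0.06, 0.09):
    print(f"{p:.0%}: {extra_overhead(p):.3g} extra bits")
```

At 9% loss the model predicts about 4.5 × 10⁷ extra bits, matching the table within rounding, which supports the paper's claim of stable behavior over lossy air-to-ground links.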
Table 6. Per-round time breakdown (ms) of four schemes (K = 50 clients).

| Scheme | Comm. | Enclave Transition | HE Operation | Agg. and Verif. | Total |
|---|---|---|---|---|---|
| Our scheme | 28 | 20 | 42 | 70 | 160 |
| HE + TEE | 27 | 22 | 43 | 288 | 380 |
| RSA-based | 28 | 21 | 42 | 164 | 255 |
| BLS-based | 27 | 20 | 42 | 146 | 235 |
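Each Total is the sum of its four components, and the aggregation-and-verification column dominates the gap between schemes, which is where batch-verifying all signatures in a single TEE entry pays off. A quick consistency check over the Table 6 figures:

```python
# Per-round time components (ms) from Table 6:
# (communication, enclave transition, HE operation, aggregation + verification)
breakdown = {
    "Our scheme": (28, 20, 42, 70),
    "HE + TEE":  (27, 22, 43, 288),
    "RSA-based": (28, 21, 42, 164),
    "BLS-based": (27, 20, 42, 146),
}

# Totals should match the Total column: 160, 380, 255, 235 ms.
totals = {name: sum(parts) for name, parts in breakdown.items()}
for name, t in totals.items():
    print(f"{name}: {t} ms")

best = min(totals, key=totals.get)
print(best)  # Our scheme
```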

Share and Cite

Su, H.; Zhao, Y.; Zhang, W.; Zhang, H.; Huang, S.; Lu, M.; Zhou, X. Secure and Verifiable Edge-Federated Learning with Homomorphic Encryption and a Trusted Execution Environment for UAV Communication. Electronics 2026, 15, 1029. https://doi.org/10.3390/electronics15051029

