
An Asynchronous Federated Learning Aggregation Method Based on Adaptive Differential Privacy

College of Computer Science and Technology, National University of Defense Technology, No.137 Yanwachi Street, Changsha 410073, China
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(14), 2847; https://doi.org/10.3390/electronics14142847
Submission received: 25 June 2025 / Revised: 9 July 2025 / Accepted: 15 July 2025 / Published: 16 July 2025
(This article belongs to the Special Issue Emerging Trends in Federated Learning and Network Security)

Abstract

Federated learning is a distributed machine learning technique that allows multiple devices to collaboratively learn a shared model without exchanging data, improving model accuracy while protecting user privacy. However, traditional federated learning is vulnerable to attacks based on generative adversarial networks (GANs). Differential privacy, a widely used privacy protection method, strengthens privacy protection at the cost of some data accuracy. To optimize the privacy budget allocation of traditional differential privacy, we propose ADP-FL, a differential privacy method that dynamically adjusts the privacy budget based on Newton's Law of Cooling. While keeping the overall privacy budget unchanged, it dynamically tunes adaptive parameters to improve training accuracy. Additionally, we propose an asynchronous federated learning aggregation scheme that combines the privacy budget with data freshness, thereby reducing the impact of differential privacy on accuracy. We conducted extensive experiments on differential privacy algorithms based on the Gaussian and Laplace mechanisms. The results show that, under the same privacy budget, our algorithm achieves higher accuracy and lower communication overhead than the baseline algorithm.

1. Introduction

With the increasing popularity of IoT devices, the amount of distributed data has skyrocketed; the total amount of global data is expected to reach 175 ZB by 2025 [1]. This growth in data volume has promoted the development of artificial intelligence in fields such as smart healthcare, smart homes, and traffic accident detection, leading artificial intelligence, driven by the big-data environment, into its third golden period of development. Traditional centralized learning requires all data collected on local devices to be stored centrally in a data center or cloud server. This requirement not only raises concerns about privacy risks and data breaches but also places high demands on the storage and computing power of servers, particularly when large amounts of data are involved. Distributed data parallelism enables multiple machines to train copies of a model in parallel on different data shards. While it is a potential answer to the storage and compute problems, it still requires access to the entire training set in order to segment it into evenly distributed fragments, which again raises security and privacy concerns.
Federated learning aims to train a global model that can train on data distributed across different devices while protecting data privacy. In 2016, McMahan et al. first introduced the concept of federated learning based on data parallelism [1] and proposed the federated averaging (FedAvg) algorithm. As a decentralized machine learning approach, FedAvg enables multiple devices to collaborate in training machine learning models while storing user data locally. FedAvg eliminates the need to upload sensitive user data to a central server, enabling edge devices to train shared models locally using their local data. By aggregating updates of local models, FedAvg meets the basic requirements of privacy protection and data security.
While federated learning offers a promising approach to privacy protection, numerous challenges arise when applying it in the real world [2]. The first is privacy. Studies in recent years have shown that gradient information exchanged during training can reveal private information [3,4,5,6,7,8,9], whether to third parties or to central servers [10,11]. As shown in [12,13], even a small amount of gradient information can reveal a significant amount of sensitive information about the local data. By simply observing the gradients, a malicious attacker can steal training data within a few iterations [6]. Although traditional privacy protection methods, such as encryption and secure multi-party computation, can prevent private information from being leaked, they were not designed for edge environments; their high algorithmic complexity leads to high latency and communication overhead in practical applications. To address these problems, this paper proposes an efficient federated learning method that combines differential privacy with staleness-aware (outdated-level) aggregation. The main contributions are as follows:
  • This paper proposes a differential privacy method with adaptive parameters that dynamically adjusts the privacy budget during training. The method improves the accuracy of federated learning while keeping the total privacy budget unchanged.
  • This paper proposes an asynchronous federated learning aggregation scheme that combines the privacy budget and device staleness. During aggregation, the scheme adjusts weights based on the privacy budget, device staleness, and dataset size to improve training accuracy.
  • This paper conducts extensive experiments and validates the effectiveness of the algorithm on real-world datasets under both Gaussian and Laplace noise.

2. Related Work

Differential privacy is a definition of privacy proposed by Dwork in 2006 [14] to address privacy leakage in databases. It is mainly implemented by adding random noise so that the results of queries on publicly visible information do not reveal individuals' private information; that is, it provides a way to maximize the accuracy of statistical queries while minimizing the chance of identifying individual records. Differential privacy strengthens privacy protection by sacrificing some data accuracy, and how to balance privacy and utility in practice is a problem worth studying.
In response to differential attacks, Robin C. et al. [15] proposed a federated optimization algorithm for protecting client-level differential privacy. This algorithm dynamically adjusts the level of differential privacy during distributed training, aiming to hide client contributions while balancing the trade-off between privacy loss and model performance. Xue J. et al. [16] proposed the improved SignDS-FL framework, which shares the dimension-selection idea of FedSel but saves privacy cost during the value perturbation stage by assigning random sign values to the selected dimensions. Patil et al. [17] introduced differential privacy into traditional random forests [18] and tested the resulting algorithm in three respects; the experiments showed that the traditional random forest and the differentially private random forest achieved nearly identical classification accuracy. Badih Ghazi et al. [19] improved the privacy guarantee of the FL model by combining the shuffling technique with DP and masking user data using an invisibility-cloak algorithm. Cai et al. [20] introduced differentially private continuous release (DPCR) into FL and proposed an FL framework based on DPCR (FL-DPCR) to effectively reduce the overall error added to the model parameters and improve the accuracy of FL. Wang et al. [21] proposed a Loss Differential Strategy (LDS) for parameter replacement in FL; the key idea is to preserve the performance of the private model through parameter replacement with multi-user participation while significantly reducing the effectiveness of privacy attacks on the model. However, these schemes introduce uncertainty into the uploaded parameters, which may affect training performance.
Although the above methods have promoted the application of differential privacy in federated learning from different perspectives [22,23,24,25,26], they have limitations in terms of dynamic adaptability and aggregation strategy. The ADP-FL algorithm proposed in this paper offers clear advantages through targeted design; the specific differences are as follows:
1. Fundamental differences in privacy budget allocation logic: J. F. et al. [27] proposed a new classification system for differential privacy, categorizing differential privacy models according to their definitions, guarantees, and federated learning scenarios. However, this classification remains at the level of a theoretical framework; it does not provide concrete dynamic allocation strategies and thus does not resolve the privacy–accuracy trade-off across different training stages. J. L. et al. [28] proposed a record-level differential privacy federated learning framework based on two-stage mixed sampling and designed a simulated curve-fitting (SCF) strategy to determine the sampling probability of all records under a given personalized privacy budget. However, this personalized scheme focuses on privacy differentiation at the level of individual records, ignores the temporal characteristics of the federated training process, and struggles to address the dynamic changes in gradient-information sensitivity as training progresses.
2. Differences in the comprehensiveness of aggregation strategies: J. M. et al. [29] proposed a differential privacy framework based on optimal sparse responses, combining convergence-rate minimization to optimize the sparsity parameters adaptively. However, this optimal sparse response mechanism only tunes the sparsity parameters to reduce noise effects and does not consider how the timeliness of device participation affects aggregation quality. Other studies, such as DP-FLAGD [30], serve different privacy requirements by adaptively allocating privacy budgets and the corresponding learning rates. However, this linked scheme does not address the differentiated weighting of contributions from multiple clients.
3. Differences in practical applicability: To address the risk of data leakage when broadcasting parameters to the central server and the issue of excessive noise affecting parameter aggregation quality, FedBADP [31] implements a bidirectional adaptive differential privacy scheme. By adaptively adding noise to the transmitted gradients, it protects data security without affecting model accuracy. However, the gradient sampling mechanism may lead to the loss of critical information.
In summary, the core advantage of this paper lies in its deep coupling of dynamic privacy control with multi-factor aggregation strategies, thereby forming a systematic solution that encompasses the entire training cycle and addresses the shortcomings of existing research in temporal adaptability and multidimensional collaborative optimization.

3. Problem Formulation

3.1. Model Definition

Neural networks update parameters through backpropagation. Similarly, in a federated learning framework based on gradient (weight) updates, gradient information is propagated between clients and servers. The gradient information transmitted by clients originates from local datasets. Attackers can use gradient information to reverse-engineer datasets, resulting in privacy leaks among participants in federated learning.
Definition 1
(Differential Privacy Problem Model). A trusted data regulator C holds a collection of data D = {D_1, D_2, …, D_n}. The goal of the data regulator is to release a randomized algorithm A(D′), D′ ⊆ D, where A(D′) describes certain information about the dataset D′ while ensuring the privacy of all data in D.
The schematic diagram of differential privacy is shown in Figure 1. For example, for a query on the average age, the average age of all individuals in the query dataset D is first calculated, followed by the average age of the adjacent dataset D′, which lacks Bob's age; Bob's age can then be inferred from the two query results, which constitutes a differential attack. The figure illustrates the core idea of differential privacy, which is to process query results so that the query algorithm produces highly similar output probability distributions on adjacent datasets. Thus, for datasets that differ in only one record, the query results are likely to be the same. Differential privacy can therefore effectively prevent attackers from inferring information about datasets through gradient analysis. However, the added noise inevitably reduces accuracy and slows convergence in federated learning, and may even prevent convergence altogether. Therefore, finding a method that minimizes the impact of differential privacy on accuracy while ensuring privacy has become an urgent research issue.
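To make the attack and the defense concrete, the following toy sketch (with made-up ages and helper names that are not from the paper) shows how two exact average queries reveal Bob's age, and how Laplace noise calibrated to the query sensitivity obscures it:

```python
import numpy as np

rng = np.random.default_rng(0)
ages = {"Alice": 34, "Bob": 29, "Carol": 52, "Dave": 45}      # toy data
D = list(ages.values())                                       # full dataset
D_adj = [v for k, v in ages.items() if k != "Bob"]            # adjacent dataset without Bob

# Differential attack: two exact average queries pinpoint Bob's age.
bob_age = len(D) * np.mean(D) - len(D_adj) * np.mean(D_adj)
print(bob_age)                                                # 29.0

# Laplace mechanism: answer each query with noise of scale sensitivity / epsilon.
def private_mean(data, epsilon, age_bound=100):
    sensitivity = age_bound / len(data)                       # approximate worst-case change of one record
    return np.mean(data) + rng.laplace(scale=sensitivity / epsilon)

noisy_guess = len(D) * private_mean(D, 0.5) - len(D_adj) * private_mean(D_adj, 0.5)
print(noisy_guess)                                            # no longer pinpoints Bob's age
```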

3.2. Algorithm Definition

This section introduces the relevant definitions and basic properties that will be applied in the composition of adaptive differential privacy algorithms. Differential privacy is defined as follows:
Definition 2
(Adjacent data sets). For data sets D and D′ with the same data structure, if the two data sets differ in only one element x, then D and D′ are called adjacent data sets.
Definition 3
(Differential Privacy [14]). A randomized algorithm M with domain R^|X| is (ϵ, δ)-differentially private if for all S ⊆ Range(M) and for all x, y ∈ R^|X| such that ||x − y||_1 ≤ 1:
Pr[M(x) ∈ S] ≤ exp(ϵ) · Pr[M(y) ∈ S] + δ
where the probability space is over the coin flips of the mechanism M. If δ = 0, we say that M is ϵ-differentially private.
(ϵ, 0)-differential privacy guarantees that the absolute value of the privacy loss is at most ϵ on all adjacent databases. (ϵ, δ)-differential privacy guarantees that the privacy loss is at most ϵ with probability at least 1 − δ; that is, a failure of the privacy guarantee with probability δ is tolerated. This is a comparatively lenient differential privacy notion.
Definition 4
(Local Sensitivity). Local sensitivity of FL training is defined as follows:
Δf_LS = max_{D′} ||f(D) − f(D′)||_1
Definition 5
(Global Sensitivity). Global sensitivity of FL training is defined as follows:
Δf_global = max_{D, D′} ||f(D) − f(D′)||_1
Definition 6
(Laplace Mechanism). For an input dataset D and a query function F, if the algorithm Γ satisfies:
Γ = F(D) + Lap(ΔF/ϵ)
then Γ satisfies ϵ-differential privacy, where Lap(ΔF/ϵ) denotes noise drawn from the Laplace distribution with scale ΔF/ϵ.
Theorem 1
(Gaussian Mechanism [14]). For any δ ∈ (0, 1), let σ > √(2 ln(1.25/δ)) · Δf / ϵ. If the algorithm Γ satisfies:
Γ = F(D) + N(0, σ²)
then Γ satisfies (ϵ, δ)-differential privacy, where N(0, σ²) is a Gaussian distribution with mean 0 and variance σ².
Theorem 2
(Composition Theorem [14]). Let each mechanism M_i provide ϵ_i-differential privacy. Then the sequence of mechanisms M_i(X) provides (Σ_i ϵ_i)-differential privacy.
The differential privacy method selected for this study is a differential privacy algorithm based on the Laplace noise mechanism and the Gaussian noise mechanism.
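As a concrete reference for the two mechanisms above, the sketch below perturbs a clipped gradient vector; the clipping bound C, the function names, and the use of Theorem 1 to calibrate σ are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def clip(grad, C=1.0):
    # Clip to L2 norm C so that the sensitivity of the gradient is bounded by C.
    norm = max(np.linalg.norm(grad), 1e-12)
    return grad * min(1.0, C / norm)

def laplace_perturb(grad, epsilon, sensitivity=1.0, rng=np.random.default_rng()):
    # Definition 6: Laplace noise with scale sensitivity / epsilon.
    return grad + rng.laplace(scale=sensitivity / epsilon, size=grad.shape)

def gaussian_perturb(grad, epsilon, delta, sensitivity=1.0, rng=np.random.default_rng()):
    # Theorem 1: sigma > sqrt(2 ln(1.25 / delta)) * sensitivity / epsilon.
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return grad + rng.normal(scale=sigma, size=grad.shape)

g = clip(np.array([0.4, -1.3, 0.7]))
print(laplace_perturb(g, epsilon=1.0))
print(gaussian_perturb(g, epsilon=1.0, delta=1e-5))
```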

4. Adaptive Differential Privacy Mechanisms for Federated Learning

In this section, we introduce the building blocks of our method and explain how to implement our algorithm. Section 4.1 presents the adaptive differential privacy design. Section 4.2 describes the weighted aggregation scheme. Section 4.3 explains the implementation of the ADP-FL algorithm in detail. The hyperparameter selection strategy and the practical deployment of the algorithm are analyzed in Appendix A.2 and Appendix A.3, respectively.

4.1. Design of Adaptive Differential Privacy

Differential privacy was first proposed by Dwork in 2006 [14] to prevent differential attacks from obtaining sensitive information about a single record, thereby protecting data confidentiality. For example, consider a query for average wages: select a group of 100 people, query the average wage of these 100 people, and then query the average wage of any 99 people in the group. The wage of the remaining person can be derived from the two query results; this is a differential attack. The core idea of differential privacy is to process query results so that, for datasets differing in only one record, the query results are highly likely to be the same.
According to the composition theorem, when deploying differential privacy multiple times on the same input data, the requirements of differential privacy can still be met. However, it should be noted that there is a correlation between the outputs of each algorithm in serial composition, which leads to an increase in the overall privacy budget ϵ and failure probability δ , thereby reducing the effectiveness of privacy protection. In particular, when different differential privacy methods are applied multiple times on the same dataset, the level of privacy protection may be significantly weakened.
Therefore, allocating privacy budgets of different sizes according to the various stages of federated learning has greater advantages than the traditional method of evenly distributing privacy budgets across all stages.
In the early stages of federated learning, the gradient information contains less sensitive information, allowing for a more relaxed privacy budget to be adopted. As training progresses, the privacy information contained in the delayed gradient increases, and the privacy budget must be reduced to protect user information. Based on this idea, this paper employs an optimization algorithm that adjusts the client’s privacy budget in real time according to the model’s training progress and accuracy, ensuring that the overall privacy budget remains unchanged. By dynamically allocating the privacy budget, this paper achieves a more balanced approach between privacy protection and model performance during federated learning. This strategy of allocating privacy budgets of different sizes for various stages of federated learning enables the method proposed in this paper to flexibly address privacy protection requirements while fully utilizing the dataset’s information to enhance the model’s accuracy and performance.
Since our overall method gradually reduces the privacy budget ϵ_i^t as training progresses, we use Newton's law of cooling [32] to adjust ϵ_i^t for each training round. We analyzed the mathematical characteristics of Newton's law of cooling (rapid decay followed by stabilization) and found that they closely match the dynamic requirements on the privacy budget in federated learning [33,34,35]. Existing privacy budget adjustment methods mainly include linear decay [36] and fixed allocation. However, linear decay may cause the privacy budget to shrink sharply in the later stages of training, introducing excessive noise that disrupts model convergence, while fixed allocation cannot adapt to the privacy requirements of different training stages. In contrast, the exponential decay of Newton's law of cooling enables a smooth transition in budget consumption and adapts to the training progress. A detailed comparison and verification of these strategies is given in Appendix A.1.
The adaptive adjustment process of the privacy budget ϵ_i^t can be formalized as:
ϵ_i^t = ϵ_i × e^(−α × (E − t)) + ϵ_i
where t is current communication round, E is maximum communication round, and α is the adjustment coefficient that defaults to 0.1.
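For intuition, a generic Newton's-cooling-style schedule can be evaluated as in the sketch below; the floor value eps_min and the exact parameterization are illustrative assumptions and not necessarily identical to the equation above.

```python
import math

def cooling_budget(t, eps0, eps_min, alpha=0.1):
    # Newton's-cooling-style decay: eps_t = eps_min + (eps0 - eps_min) * e^(-alpha * t),
    # i.e., a rapid early decline that levels off toward eps_min.
    return eps_min + (eps0 - eps_min) * math.exp(-alpha * t)

for t in (0, 10, 25, 50):
    print(t, round(cooling_budget(t, eps0=1.0, eps_min=0.1), 4))
```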
Algorithm 1 demonstrates the adaptive differential privacy process. When the client begins participating in federated learning, the cumulative privacy budget is set to 0. As training progresses, if the cumulative privacy budget exceeds the total privacy budget, continuing to participate in federated learning will result in privacy leakage risks, so the client exits federated learning. It is important to note that the decline curve of Newton’s cooling law is very rapid. To prevent the privacy budget from depleting too quickly, which could lead to excessive noise and negatively impact training, this paper introduces a detection callback mechanism. When the client detects that the model’s accuracy has decreased beyond a threshold—i.e., when noise is affecting model convergence—the privacy budget is adjusted accordingly. Through this mechanism, this paper achieves a balance between privacy and efficiency.
Algorithm 1 Adaptive Differential Privacy
Input: privacy budget ϵ, accumulated privacy budget ϵ_acc, coefficient λ, maximum number of communication rounds E
Output: ϵ_t
 1: ϵ_acc = 0
 2: while t ≤ E and ϵ_acc ≤ ϵ do
 3:     if Acc_t − Acc_{t−1} > λ then
 4:         ϵ_t ← ϵ_{t−1} × e^(−α × (E − t)) + ϵ_{t−1}
 5:     else
 6:         ϵ_t = ϵ_{t−1}
 7:     end if
 8:     ϵ_acc += ϵ_t
 9: end while
10: return ϵ_t
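A minimal client-side sketch of Algorithm 1 follows, assuming the reconstructed update rule above and reading the accuracy check as a comparison between consecutive rounds; the helper names and the placeholder accuracy curve are illustrative assumptions.

```python
import math

def adaptive_budget(acc_curr, acc_prev, eps_prev, eps_acc, t, E,
                    total_eps, lam=0.01, alpha=0.1):
    """One round of Algorithm 1: adjust the per-round budget eps_t and update
    the accumulated budget; the client leaves training once eps_acc exceeds total_eps."""
    if eps_acc > total_eps:
        return None, eps_acc                       # budget exhausted: exit federated learning
    if acc_curr - acc_prev > lam:                  # accuracy change beyond the threshold
        eps_t = eps_prev * math.exp(-alpha * (E - t)) + eps_prev
    else:
        eps_t = eps_prev                           # detection callback: keep the budget unchanged
    return eps_t, eps_acc + eps_t

eps_t, eps_acc, acc_prev = 0.2, 0.0, 0.0
for t in range(1, 51):
    acc_curr = min(0.9, 0.5 + 0.02 * t)            # placeholder accuracy curve
    eps_t, eps_acc = adaptive_budget(acc_curr, acc_prev, eps_t, eps_acc,
                                     t, E=50, total_eps=10.0)
    if eps_t is None:
        break                                      # client exits once the budget is spent
    acc_prev = acc_curr
```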

4.2. Design of Weighted Aggregation

Due to communication or device issues, some devices may not participate in training for extended periods and are therefore considered outdated (stale) devices. Their stale gradients can degrade model accuracy, which is a significant issue in practice. A common approach in federated learning is to weight a client's gradient by the degree of model obsolescence, thereby reducing the impact of stale gradient information. This paper adjusts the gradient weights according to the number of rounds a client has not participated in, using an exponential function. This dynamic weighting ensures that the contribution of long-absent devices decays gradually, so that updates from devices participating on time carry more weight; the contributions of different devices are thus better balanced, which improves the effectiveness of federated learning and the model's performance. The obsolescence degree function we use is as follows:
f(λ) = α^(t_1 − t_2) × λ
where t_1 is the current communication round, t_2 is the last communication round in which the client participated, and α is the adjustment coefficient. λ is the outdated (staleness) coefficient; it is initially set to 1 and reset to 1 each time the client participates in training.
According to the characteristics of the exponential function, the weights of clients who have not participated in training for multiple rounds will be tiny, effectively reducing the impact of outdated information.
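A small sketch of this staleness weighting under the formula above; the value α = 0.5 and the round bookkeeping are illustrative assumptions.

```python
def staleness_weight(t_now, t_last, lam=1.0, alpha=0.5):
    # f(lambda) = alpha^(t_now - t_last) * lambda: the longer a client has been
    # absent from training, the smaller its aggregation weight becomes.
    return (alpha ** (t_now - t_last)) * lam

print(staleness_weight(10, 10))   # participated this round -> 1.0
print(staleness_weight(10, 7))    # absent for three rounds  -> 0.125
```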
During parameter aggregation, differential privacy noise impacts the model’s convergence. Clients with smaller privacy budgets have a higher probability of their uploaded gradient parameters deviating from the model convergence direction. Therefore, it is necessary to adjust the weights based on the amount of noise added by the client. Regarding how to assess the amount of noise added, the privacy budget ϵ and the amount of noise added are negatively correlated. The smaller the privacy budget, the more noise is added and the greater the deviation of gradient information. Naturally, this paper uses the privacy budget ϵ as a parameter to assess the degree of noise added and uses ϵ as one of the weight parameters for model aggregation.
Combining the weight adjustment algorithm for outdated devices and the weight adjustment algorithm for noise, this paper proposes the following aggregation scheme:
g^(t+1) ← Σ_{i=1}^{n} [ϵ_i^t λ_i^t |D_i| / (ϵ_1^t λ_1^t |D_1| + ϵ_2^t λ_2^t |D_2| + … + ϵ_n^t λ_n^t |D_n|)] · g_i^t
where g_i^t is the gradient of client i in round t and |D_i| is the size of client i's dataset. In this formula, the more noise a client adds and the staler its model, the smaller the weight of the gradient it provides. When a client participates in training continuously and adjusts its privacy budget ϵ_i^t, its weight in the aggregation increases.
Accordingly, the calculation method for the global model is:
W^(t+1) ← W^t − η Σ_{i=1}^{n} [ϵ_i^t λ_i^t |D_i| / (ϵ_1^t λ_1^t |D_1| + ϵ_2^t λ_2^t |D_2| + … + ϵ_n^t λ_n^t |D_n|)] · g_i^t
The flowchart of the aggregation scheme is shown in Figure 2. As shown in the figure, in a training round, the green portion represents the time window during which the server receives gradients from clients. In contrast, the red portion represents the time window during which the server performs aggregation and updates the global model. Clients that upload gradients during the green time window are considered regular clients participating in aggregation. Clients who upload gradients during the red time window or do not upload gradients are marked as lagging clients. For users marked as lagging clients, their aggregation weights will decay exponentially over time, allowing them to participate in normal model aggregation again.
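A compact sketch of the aggregation rule above, with gradients as NumPy arrays; the client metadata fields, the learning rate, and the example values are illustrative assumptions.

```python
import numpy as np

def aggregate(clients):
    # Weight of client i is proportional to eps_i * lambda_i * |D_i|, normalized
    # over all participating clients, as in the aggregation formula above.
    weights = np.array([c["eps"] * c["lam"] * c["n_samples"] for c in clients], dtype=float)
    weights /= weights.sum()
    return sum(w * c["grad"] for w, c in zip(weights, clients))

clients = [
    {"eps": 0.8, "lam": 1.0,   "n_samples": 600, "grad": np.array([0.2, -0.1])},
    {"eps": 0.5, "lam": 0.125, "n_samples": 400, "grad": np.array([0.4,  0.3])},
]
g_next = aggregate(clients)
W_next = np.zeros(2) - 0.01 * g_next     # global update with learning rate eta = 0.01
```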

4.3. Adaptive Differential Privacy Federated Learning Algorithm

Based on the previously proposed method, we propose Adaptive Differential Privacy Federated Learning (ADP-FL). Figure 3 is the overview of ADP-FL. Algorithm 2 shows the whole process of ADP-FL. F a d p is the Adaptive Differential Privacy function. The specific details of the algorithm are as follows:
(1) Steps 2 to 7 are performed by the server. In Step 2, the server initializes the model w and sends it to all clients. Until the maximum communication round E is reached, the server receives the gradient information sent by the clients, aggregates the gradients of the participating clients according to the aggregation scheme proposed in this paper, and updates the global model W_global = W − η · g^(t+1); W_global is then sent to the participating clients to update their local models. After the update is complete, the server resets the obsolescence coefficient of participating clients to 1 and multiplies the obsolescence coefficient of non-participating clients by α (λ ← λ × α).
(2) Steps 8 to 15 are performed by the client. During the preparation phase of federated learning training, the client receives the initial model w. When the client is selected and its privacy budget has not been fully consumed, it first receives the latest model parameters W_global from the server. Based on the local dataset and the latest global model W_global, local training is performed to obtain the local gradient g_i^t. The privacy budget for this round of training is calculated as ϵ_i^t ← F_adp(t, ϵ_i^{t−1}, E). Based on the privacy budget ϵ_i^t and the Laplace or Gaussian mechanism, perturbation noise is generated and added to the gradient information, yielding the perturbed gradient ĝ_i^t, while the cumulative consumed privacy budget is updated as ϵ_i ← ϵ_i + ϵ_i^t.
Figure 3. The overview of ADP-FL.
Algorithm 2 ADP-FL
Input: initial parameters w, privacy budget ϵ, maximum communication round E, accumulated privacy budget ϵ_i = 0, current communication round t, outdated level α.
 1: Server does:
 2: Send initial parameters w to all clients i
 3: while t ≤ E do
 4:     g^(t+1) ← Σ_{i=1}^{n} [ϵ_i^t λ_i^t / (ϵ_1^t λ_1^t + ϵ_2^t λ_2^t + … + ϵ_n^t λ_n^t)] g_i^t
 5:     Return g^(t+1) to each selected client i
 6:     Set participating clients' λ to 1, other clients' λ = λ × α
 7: end while
 8: Client does:
 9: Receive initial parameters w
10: if client i selected and ϵ_i < ϵ then
11:     receive g^t from server
12:     g_i^t ← local train(g^t + w_i^{t−1})
13:     ϵ_i^t ← F_adp(t, ϵ_i^{t−1}, E)
14:     ĝ_i^t ← add noise(g_i^t, ϵ_i^t)
15:     ϵ_i += ϵ_i^t
16:     return ĝ_i^t, ϵ_i^t to server
17: end if

5. Experiment

5.1. Experimental Environment

The experiments were run on an AMD Ryzen 7 5800H processor (8 cores, 16 threads, with Radeon Graphics, 3.20 GHz) and an NVIDIA GeForce RTX 4090 Laptop GPU under Windows 10. The experiments use the PyTorch 1.13.1 deep learning framework (CUDA 11.7, GPU-accelerated) with Python 3.9.12, together with the auxiliary libraries NumPy (numerical computation), Pandas 1.4.2 (data processing), and Matplotlib 3.5.1 (result visualization).
The experiment simulates a server and 50 local client nodes. Non-IID data is constructed from the MNIST, EMNIST, CIFAR10, and CIFAR100 datasets: to test the algorithm under highly non-independent and identically distributed data, each client was randomly assigned two labels, with each client's data accounting for 10% of the dataset. The neural network models include RNN, VGG9, CNN, and Transformer models; their detailed parameters are shown in Table 1.
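The exact sampling procedure for this label-skewed split is not specified in the paper, so the sketch below illustrates one plausible way to assign each client two random labels and roughly 10% of the data:

```python
import numpy as np

def partition_non_iid(labels, n_clients=50, labels_per_client=2, frac=0.10, seed=0):
    # Assign each client a few random classes and sample ~frac of the dataset
    # from those classes only (label-skewed non-IID split).
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    per_client = int(frac * len(labels))
    splits = []
    for _ in range(n_clients):
        chosen = rng.choice(classes, size=labels_per_client, replace=False)
        pool = np.flatnonzero(np.isin(labels, chosen))
        splits.append(rng.choice(pool, size=min(per_client, len(pool)), replace=False))
    return splits

labels = np.random.default_rng(1).integers(0, 10, size=60000)   # stand-in for MNIST labels
client_indices = partition_non_iid(labels)
```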
The baseline algorithm used in this paper is the FedAvg algorithm, which evenly distributes the privacy budget. It is also the most commonly used algorithm in current research on federated learning based on differential privacy. Let the total privacy budget for each client be ϵ , the number of clients selected by the server each time be n, the total number of clients be N, and the total number of communication rounds in federated learning be E. Then, for a single client i, the differential privacy budget ϵ i for each upload is:
ϵ_i = (N × ϵ) / (n × E)
To test the algorithm’s impact on highly non-IID data, each client was randomly assigned two labels, with each client’s data accounting for 10% of the dataset. The learning rate of the gradient descent model used was 0.01, and the batch size was 16. The number of local training rounds was 3, and the optimizer was SGD. The loss function was CrossEntropyLoss. When testing the accuracy of the ADP-FL algorithm and the baseline algorithm, the privacy budget was set to 1, 5, and 10, with the differential privacy relaxation parameter δ of the Gaussian mechanism set to 0.00001. When testing the impact of different noise mechanisms, the privacy budget was set more loosely to 10, 20, 30, 40, and 50 to better observe the experimental results.

5.2. Accuracy of Algorithm on Different Datasets

This paper tests the accuracy of the FedAvg algorithm without privacy protection, the FedAvg algorithm with an evenly distributed privacy budget (the baseline algorithm), and the ADP-FL algorithm on four datasets under non-IID conditions. The total privacy budget is set to 1, 5, and 10, respectively. Other model evaluation metrics are reported in Appendix A.4.
Figure 4, Figure 5, Figure 6 and Figure 7 show the accuracy rates of the proposed method and the baseline algorithm on the MNIST, CIFAR10, CIFAR100, and EMNIST datasets, with all final accuracy rates listed in Table 2. Among these, “Non” denotes the FedAvg algorithm without privacy protection, and “Nan” denotes the algorithm failing to converge. Non-convergence occurred on the CIFAR10 and EMNIST datasets when ϵ = 1 , as seen in the figure, where the accuracy rate remained at a low level. This paper speculates that this is due to the privacy budget being too small, resulting in excessive noise addition and making it difficult for complex models to converge. In contrast, under the same privacy budget, the algorithm successfully converged on the MNIST dataset with lower complexity.
Compared to the baseline algorithm, the method proposed in this paper achieves higher accuracy and faster convergence speed. When the privacy budget is relatively lenient, the accuracy rate is even higher than that of methods without differential privacy. When ϵ = 1 , compared to the baseline on MNIST, the proposed method improves accuracy by 15.35%. This is because the proposed aggregation method not only considers the impact of noise but also adjusts the weights of lagging clients. Compared to the baseline algorithm, ADP-FL can effectively address the impact of lagging clients. From the experimental results, it is evident that in most cases, ADP-FL achieves higher accuracy compared to traditional local differential privacy algorithms.

5.3. Accuracy of Algorithm Under Differential Privacy Mechanisms

In order to verify the accuracy of the adaptive differential privacy algorithm on different differential privacy mechanisms, this paper conducted comparative experiments on differential privacy algorithms based on Laplace and Gaussian mechanisms. The experimental results are shown in Table 3 and Table 4. Intuitive experimental results are shown in Appendix A.5. Other model evaluation metrics are described in Appendix A.4.
The experimental results show that, in the vast majority of cases, the ADP algorithm achieves higher accuracy than the traditional privacy budget allocation algorithm. In particular, under the same total privacy budget, the adaptive differential privacy algorithm proposed in this paper outperforms the traditional algorithm with evenly allocated privacy budgets in most cases, whether the Laplace or the Gaussian mechanism is used. On the CIFAR10 and CIFAR100 datasets, the accuracy of the ADP algorithm is generally higher than that of the baseline algorithm. We speculate that this is because, on more complex datasets, the noise added by differential privacy increases significantly, so the advantage of the ADP algorithm in allocating the privacy budget sensibly becomes more apparent. On the MNIST dataset, the ADP algorithm exceeds the baseline in most cases. The improvement on EMNIST is less pronounced because accuracy on that dataset is already high, leaving limited room for improvement. Overall, a large number of experiments show that the ADP algorithm improves accuracy compared to the baseline algorithm.

5.4. Time Complexity of Algorithm Under Differential Privacy Mechanisms

Table 5 shows the computation time of the baseline algorithm and the ADP-FL algorithm across different datasets. Since federated learning focuses more on client performance, and servers typically have powerful computational resources, this paper measured the average time consumption per client, which mainly covers local training, noise addition, and gradient transmission; the server's time consumption was not measured. The results show that the ADP-FL algorithm exhibits lower time consumption and is more advantageous in terms of aggregation efficiency.

5.5. Gradient Leakage Attack

Figure 8 shows the performance of the ADP-FL algorithm against gradient leakage attacks. The experimental results show that, with the privacy budget ϵ and relaxation term δ within conventional ranges, the ADP-FL algorithm provides effective protection against gradient leakage attacks.
After multiple tests, we found that the protection of the ADP-FL algorithm broke down only when the privacy budget was set to ϵ = 1000 and the relaxation term to δ = 0.1, settings that far exceed the parameter ranges used in practice. We therefore conclude that the ADP-FL algorithm proposed in this paper is resistant to gradient leakage attacks.

6. Conclusions

In this paper, we proposed ADP-FL, a federated learning method based on adaptive differential privacy. First, based on Newton's law of cooling, ADP-FL dynamically adjusts the privacy budget ϵ according to the training progress and accuracy changes. In addition, we optimize the federated learning aggregation scheme by changing the aggregation weights according to the privacy budget and model obsolescence. Based on MNIST, CIFAR10, CIFAR100, and EMNIST, we construct corresponding non-IID datasets and validate the method using RNN, VGG9, CNN, and Transformer networks. Extensive experiments are conducted on differential privacy algorithms based on the Gaussian and Laplace mechanisms. The results show that, under the same privacy budget, ADP-FL achieves higher accuracy and lower communication overhead than the baseline algorithms. In future work, we will explore introducing clustering [37] into the ADP-FL algorithm, better compressing the model's parameters, improving communication efficiency [38], and enhancing the security of federated learning.

Author Contributions

Conceptualization, J.W. and G.X.; methodology, J.W.; investigation, H.H. and C.Y.; writing—original draft preparation, J.W. and Y.Z.; writing—review and editing, H.L.; visualization, J.W. and H.L.; supervision, G.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Dataset available upon request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. Comparisons of Privacy Adjustment Strategies

This section compares the accuracy (ϵ = 10) and convergence rounds of different privacy adjustment strategies on MNIST and CIFAR100, as shown in Table A1.
Table A1. Comparisons of different privacy adjustment strategies on MNIST and CIFAR100.
Different Privacy Strategies    Dataset     Accuracy   Convergence Rounds
Newton's law of cooling         MNIST       92.15      30
Linear decay                    MNIST       84.52      43
Fixed allocation                MNIST       82.56      45
Newton's law of cooling         CIFAR100    63.56      45
Linear decay                    CIFAR100    56.45      59
Fixed allocation                CIFAR100    52.69      55

Appendix A.2. Hyperparameter Selection

In this section, we focus on supplementing the hyperparameter selection strategy. We mainly analyze α and γ in Formula (7), the privacy budget ϵ , and the relaxation parameter δ .
The selection of α requires balancing the rate of privacy budget decay with the stability of model convergence. A value that is too large may cause the budget to deplete rapidly, introducing excessive noise; a value that is too small may result in decay occurring too slowly, failing to meet later privacy requirements. γ primarily ensures that the gradient weights of clients participating for the first time are not affected by decay. Its decay logic is primarily implemented using exponential decay, achieving the principle that “the longer the delay, the lower the weight.”
The choice of ϵ must match the dataset complexity and privacy requirements. For example, simple datasets (e.g., MNIST) are robust to noise, and ϵ = 5 is sufficient for convergence; in contrast, complex datasets (e.g., CIFAR100) require ϵ ≥ 10 to avoid excessive noise interference. The choice of δ follows the principle that a failure of the privacy guarantee should have low probability, so that no individual record can be identified. For example, CIFAR100 contains 60,000 samples, so δ should be at most about 1.7 × 10⁻⁵; setting it to 1 × 10⁻⁵ meets this safety constraint.
Based on the above selection strategy, Table A2 shows the hyperparameter tuning process, value basis, and effect comparison for CIFAR100 (Transformer model), covering different scenario requirements.
Table A2. Hyperparameter tuning process and comparative analysis for CIFAR100 (Transformer).
Scenario Requirement    α      λ   ϵ    δ           Accuracy   Convergence Rounds
Unoptimized Baseline    0.1    1   10   1 × 10⁻⁵    58.23      60
High-Privacy            0.15   1   5    1 × 10⁻⁶    54.55      75
Balanced                0.1    1   10   1 × 10⁻⁵    62.34      55
High-Utility            0.08   1   20   1 × 10⁻⁵    65.82      45
Unstable Network        0.1    1   10   1 × 10⁻⁵    61.55      58
Based on the data in the table, we can summarize the following:
(1) In the baseline scenario, default values are selected based on theoretical constraints, resulting in moderate model accuracy; however, privacy risks and convergence speed are not aligned with the scenario requirements.
(2) In the high-privacy scenario, strict control of privacy leakage is required (e.g., medical image data), so α is increased, and ϵ and δ are decreased to ensure “privacy first”;
(3) In the balanced scenario, α = 0.1 , ϵ = 10 , and δ = 1 × 10−5 make the algorithm more general;
(4) In the high-utility scenario, non-sensitive data can relax privacy constraints, so reducing α and increasing ϵ reduces noise interference, ensuring controllable risks and maximum efficiency;
(5) In the unstable network scenario, due to high network latency, adjust λ from 0.1 to 0.9 to avoid frequent discarding of valid gradients.

Appendix A.3. Actual Deployment Analysis

We can analyze the adaptability of ADP-FL on real devices from the experimental computational complexity results in Table 5:
1. Client computing power requirements: In simulation experiments, the training time for a single client ranged from 22 to 295 min (depending on the dataset), but the computing power of actual edge devices (such as smartphones and IoT sensors) is typically lower than that of experimental servers. Further optimization can be achieved through model lightweighting (e.g., pruning, quantization). For example, after compressing the model parameters of VGG9 by half, we observed an approximately 40% reduction in simulation time on CIFAR10 through testing. We speculate that ADP-FL meets real-time requirements on mainstream chips in mobile devices.
2. Server load: In simulation experiments, the aggregation time for 50 clients was less than 1 s. Based on linear scaling, supporting 1000 clients would result in an aggregation time of approximately 20 s, which aligns with the concurrent processing capabilities of cloud servers.
3. Weak network adaptation: If a client fails to upload gradients due to a network interruption, the staleness-based λ decay and reset mechanism allows it to rejoin training without retraining from scratch.
4. Communication Volume: Taking the VGG9 model as an example, the gradient of the VGG9 model is approximately 1.2 MB per client, which aligns with real-world usage scenarios. Additionally, in actual deployment, gradient sparsification can be employed to reduce the communication volume further.
Based on the simulation results analyzing computational time and communication volume, our algorithm is feasible for practical deployment.

Appendix A.4. Multi-Metric Performance Analysis

In this section, we add multiple metrics, including precision, recall, and F1 score, to analyze the algorithm's performance. Table A3, Table A4 and Table A5 compare precision, recall, and F1-score under different privacy budgets. Table A6 and Table A7 compare these metrics under the Gaussian and Laplace mechanisms.
Under both the Gaussian mechanism and the Laplace mechanism, the ADP-FL algorithm outperforms the Baseline algorithm in most scenarios, particularly demonstrating consistent advantages in precision, recall, and F1 scores, thereby validating the effectiveness of its “dynamic adjustment of privacy budget + weighted aggregation” strategy.
Table A3. Precision of baseline and ADP-FL algorithms under different privacy budgets ϵ.
Algorithm          MNIST   CIFAR10   CIFAR100   EMNIST
Non                86.92   55.76     53.62      66.89
Baseline ϵ = 1     50.87   Nan       Nan        Nan
Baseline ϵ = 5     85.89   56.43     52.83      65.01
Baseline ϵ = 10    85.93   61.78     61.02      63.78
ADP-FL ϵ = 1       65.98   Nan       Nan        Nan
ADP-FL ϵ = 5       87.05   53.92     52.31      64.23
ADP-FL ϵ = 10      91.98   63.42     60.65      69.12
Table A4. Recall of baseline and ADP-FL algorithms under different privacy budgets ϵ.
Algorithm          MNIST   CIFAR10   CIFAR100   EMNIST
Non                87.15   55.92     53.85      67.12
Baseline ϵ = 1     51.22   Nan       Nan        Nan
Baseline ϵ = 5     86.11   56.78     53.05      65.32
Baseline ϵ = 10    86.18   62.03     61.25      64.05
ADP-FL ϵ = 1       66.51   Nan       Nan        Nan
ADP-FL ϵ = 5       87.32   54.21     52.54      64.51
ADP-FL ϵ = 10      92.27   63.71     60.88      69.43
Table A5. F1-Score of baseline and ADP-FL algorithms under different privacy budgets ϵ.
Algorithm          MNIST   CIFAR10   CIFAR100   EMNIST
Non                87.03   55.84     53.73      67.00
Baseline ϵ = 1     51.04   Nan       Nan        Nan
Baseline ϵ = 5     85.99   56.60     52.94      65.16
Baseline ϵ = 10    86.05   61.90     61.13      63.91
ADP-FL ϵ = 1       66.24   Nan       Nan        Nan
ADP-FL ϵ = 5       87.18   54.06     52.42      64.37
ADP-FL ϵ = 10      92.12   63.56     60.76      69.27
Table A6. Performance metrics of baseline and ADP-FL algorithms under the Gaussian mechanism.
Dataset    Algorithm   ϵ    Precision   Recall   F1-Score
MNIST      Baseline    10   72.98       73.25    73.11
MNIST      ADP-FL      10   74.56       74.89    74.72
MNIST      Baseline    20   85.37       85.68    85.52
MNIST      ADP-FL      20   86.45       86.78    86.61
MNIST      Baseline    30   81.82       82.15    81.98
MNIST      ADP-FL      30   81.76       82.09    81.92
MNIST      Baseline    40   82.80       83.13    82.96
MNIST      ADP-FL      40   84.41       84.74    84.57
MNIST      Baseline    50   79.78       80.11    79.94
MNIST      ADP-FL      50   80.10       80.43    80.26
CIFAR10    Baseline    10   27.25       27.58    27.41
CIFAR10    ADP-FL      10   27.51       27.84    27.67
CIFAR10    Baseline    20   40.65       40.98    40.81
CIFAR10    ADP-FL      20   41.91       42.24    42.07
CIFAR10    Baseline    30   48.10       48.43    48.26
CIFAR10    ADP-FL      30   49.61       49.94    49.77
CIFAR10    Baseline    40   47.18       47.51    47.34
CIFAR10    ADP-FL      40   45.84       46.17    46.00
CIFAR10    Baseline    50   50.90       51.23    51.06
CIFAR10    ADP-FL      50   53.06       53.39    53.22
CIFAR100   Baseline    10   29.61       29.94    29.77
CIFAR100   ADP-FL      10   32.51       32.84    32.67
CIFAR100   Baseline    20   42.65       42.98    42.81
CIFAR100   ADP-FL      20   46.08       46.41    46.24
CIFAR100   Baseline    30   47.95       48.28    48.11
CIFAR100   ADP-FL      30   45.61       45.94    45.77
CIFAR100   Baseline    40   46.57       46.90    46.73
CIFAR100   ADP-FL      40   47.84       48.17    48.00
CIFAR100   Baseline    50   50.93       51.26    51.09
CIFAR100   ADP-FL      50   53.06       53.39    53.22
EMNIST     Baseline    10   88.01       88.34    88.17
EMNIST     ADP-FL      10   89.98       90.31    90.14
EMNIST     Baseline    20   96.65       96.98    96.81
EMNIST     ADP-FL      20   97.23       97.56    97.39
EMNIST     Baseline    30   97.55       97.88    97.71
EMNIST     ADP-FL      30   96.76       97.09    96.92
EMNIST     Baseline    40   97.35       97.68    97.51
EMNIST     ADP-FL      40   97.50       97.83    97.66
EMNIST     Baseline    50   97.90       98.23    98.06
EMNIST     ADP-FL      50   98.41       98.74    98.57
Under the Gaussian mechanism, the advantage of ADP-FL is more stable, outperforming the baseline in most scenarios (e.g., an F1 score of 53.22 for CIFAR10 with ϵ = 50, compared to 51.06 for the baseline), especially under larger privacy budgets (ϵ ≥ 30), where the optimization effect on complex datasets is more pronounced. This is due to the continuity of Gaussian noise, which suits ADP-FL's dynamic budget adjustment strategy. Under the Laplace mechanism, performance fluctuates more strongly. In some scenarios, ADP-FL performs worse than the baseline (e.g., EMNIST with ϵ = 20, where the F1 score is 95.24 versus 98.04 for the baseline), but it shows significant optimization for CIFAR100 in the ϵ = 30–40 range (an F1 improvement of 6.4%). This is related to the discreteness of Laplace noise, which can introduce local biases during dynamic budget allocation.
Table A7. Performance metrics of baseline and ADP-FL algorithms under the Laplace mechanism.
Dataset    Algorithm   ϵ    Precision   Recall   F1-Score
MNIST      Baseline    10   80.48       80.81    80.64
MNIST      ADP-FL      10   82.28       82.61    82.44
MNIST      Baseline    20   81.81       82.14    81.97
MNIST      ADP-FL      20   80.42       80.75    80.58
MNIST      Baseline    30   82.36       82.69    82.52
MNIST      ADP-FL      30   83.17       83.50    83.33
MNIST      Baseline    40   84.24       84.57    84.40
MNIST      ADP-FL      40   86.09       86.42    86.25
MNIST      Baseline    50   86.87       87.20    87.03
MNIST      ADP-FL      50   86.74       87.07    86.90
CIFAR10    Baseline    10   49.76       50.09    49.92
CIFAR10    ADP-FL      10   35.52       35.85    35.68
CIFAR10    Baseline    20   28.41       28.74    28.57
CIFAR10    ADP-FL      20   46.14       46.47    46.30
CIFAR10    Baseline    30   42.80       43.13    42.96
CIFAR10    ADP-FL      30   46.14       46.47    46.30
CIFAR10    Baseline    40   51.03       51.36    51.19
CIFAR10    ADP-FL      40   51.41       51.74    51.57
CIFAR10    Baseline    50   48.89       49.22    49.05
CIFAR10    ADP-FL      50   48.91       49.24    49.07
CIFAR100   Baseline    10   43.29       43.62    43.45
CIFAR100   ADP-FL      10   36.20       36.53    36.36
CIFAR100   Baseline    20   30.81       31.14    30.97
CIFAR100   ADP-FL      20   42.70       43.03    42.86
CIFAR100   Baseline    30   42.17       42.50    42.33
CIFAR100   ADP-FL      30   48.57       48.90    48.73
CIFAR100   Baseline    40   50.06       50.39    50.22
CIFAR100   ADP-FL      40   52.44       52.77    52.60
CIFAR100   Baseline    50   46.49       46.82    46.65
CIFAR100   ADP-FL      50   50.33       50.66    50.49
EMNIST     Baseline    10   97.30       97.63    97.46
EMNIST     ADP-FL      10   97.33       97.66    97.49
EMNIST     Baseline    20   97.88       98.21    98.04
EMNIST     ADP-FL      20   95.08       95.41    95.24
EMNIST     Baseline    30   97.80       98.13    97.96
EMNIST     ADP-FL      30   98.06       98.39    98.22
EMNIST     Baseline    40   97.83       98.16    97.99
EMNIST     ADP-FL      40   97.91       98.24    98.07
EMNIST     Baseline    50   97.74       98.07    97.90
EMNIST     ADP-FL      50   97.40       97.73    97.56

Appendix A.5. Comparisons of Performance Under Different Noise Mechanisms

Intuitive experimental results are shown in Figure A1, Figure A2, Figure A3 and Figure A4. The red line represents the accuracy of the baseline algorithm, and the green line represents the accuracy of the ADP-FL algorithm.
Figure A1. On the MNIST dataset, adaptive differential privacy algorithms based on Laplace and Gaussian mechanisms were tested against the baseline algorithm, with privacy budgets ϵ set to 10, 20, 30, 40, and 50 from top to bottom.
Figure A2. On the CIFAR10 dataset, adaptive differential privacy algorithms based on Laplace and Gaussian mechanisms were tested against the baseline algorithm, with privacy budgets ϵ set to 10, 20, 30, 40, and 50 from top to bottom.
Figure A3. On the CIFAR100 dataset, adaptive differential privacy algorithms based on Laplace and Gaussian mechanisms were tested against the baseline algorithm, with privacy budgets ϵ set to 10, 20, 30, 40, and 50 from top to bottom.
Figure A4. On the EMNIST dataset, adaptive differential privacy algorithms based on Laplace and Gaussian mechanisms were tested against the baseline algorithm, with privacy budgets ϵ set to 10, 20, 30, 40, and 50 from top to bottom.

References

  1. Xia, G.; Chen, J.; Yu, C.; Ma, J. Poisoning Attacks in Federated Learning: A Survey. IEEE Access 2023, 11, 10708–10722. [Google Scholar] [CrossRef]
  2. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated machine learning: Concept and applications. In ACM Transactions on Intelligent Systems and Technology (TIST); ACM: New York, NY, USA, 2019; Volume 10, pp. 1–19. [Google Scholar]
  3. Xia, G.; Chen, J.; Huang, X.; Yu, C.; Zhang, Z. FL-PTD: A Privacy Preserving Defense Strategy Against Poisoning Attacks in Federated Learning. In Proceedings of the 2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC), Torino, Italy, 27–29 June 2023; pp. 735–740. [Google Scholar] [CrossRef]
  4. Bhowmick, A.; Duchi, J.; Freudiger, J.; Kapoor, G.; Rogers, R. Protection against reconstruction and its applications in private federated learning. arXiv 2018, arXiv:1812.00984. [Google Scholar]
  5. Melis, L.; Song, C.; De Cristofaro, E.; Shmatikov, V. Exploiting unintended feature leakage in collaborative learning. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 19–23 May 2019; pp. 691–706. [Google Scholar]
  6. Zhu, L.; Liu, Z.; Han, S. Deep leakage from gradients. Adv. Neural Inf. Process. Syst. 2019, 32, 14774–14784. [Google Scholar]
  7. Sun, Z.; Kairouz, P.; Suresh, A.T.; McMahan, H.B. Can you really backdoor federated learning? arXiv 2019, arXiv:1911.07963. [Google Scholar] [CrossRef]
  8. Qiu, J.; Ma, H.; Wang, Z. Survey of privacy-preserving aggregation mechanisms in federated learning. Appl. Res. Comput. 2025, 42, 1601–1610. [Google Scholar]
  9. Zhang, W.; Zhou, Z.; Wang, Y.; Tong, Y. DM-PFL: Hitchhiking Generic Federated Learning for Efficient Shift-Robust Personalization. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 6–10 August 2023; KDD ’23. pp. 3396–3408. [Google Scholar]
  10. McMahan, H.B.; Ramage, D.; Talwar, K.; Zhang, L. Learning differentially private recurrent language models. arXiv 2017, arXiv:1710.06963. [Google Scholar]
  11. Agarwal, N.; Suresh, A.T.; Yu, F.X.X.; Kumar, S.; McMahan, B. cpSGD: Communication-efficient and differentially-private distributed SGD. Adv. Neural Inf. Process. Syst. 2018, 31, 7575–7586. [Google Scholar]
  12. Aono, Y.; Hayashi, T.; Wang, L.; Moriai, S.; Moriai, S. Privacy-preserving deep learning via additively homomorphic encryption. IEEE Trans. Inf. Forensics Secur. 2017, 13, 1333–1345. [Google Scholar] [CrossRef]
  13. Xu, X.; Wang, W.; Chen, Z.; Wang, B.; Li, C.; Duan, L.; Han, Z.; Han, Y. Finding the PISTE: Towards Understanding Privacy Leaks in Vertical Federated Learning Systems. IEEE Trans. Dependable Secur. Comput. 2025, 22, 1537–1550. [Google Scholar] [CrossRef]
  14. Dwork, C. Differential privacy. In Proceedings of the International Colloquium on Automata, Languages, and Programming, Venice, Italy, 10–14 July 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 1–12. [Google Scholar]
  15. Geyer, R.C.; Klein, T.; Nabi, M. Differentially private federated learning: A client level perspective. arXiv 2017, arXiv:1712.07557. [Google Scholar]
  16. Jiang, X.; Zhou, X.; Grossklags, J. Signds-FL: Local differentially private federated learning with sign-based dimension selection. ACM Trans. Intell. Syst. Technol. (TIST) 2022, 13, 1–22. [Google Scholar] [CrossRef]
  17. Patil, A.; Singh, S. Differential private random forest. In Proceedings of the 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Delhi, India, 24–27 September 2014; pp. 2623–2630. [Google Scholar]
  18. Breiman, L. Random forests. In Machine Learning; Springer: Berlin/Heidelberg, Germany, 2001; Volume 45, pp. 5–32. [Google Scholar]
  19. Ghazi, B.; Pagh, R.; Velingker, A. Scalable and differentially private distributed aggregation in the shuffled model. arXiv 2019, arXiv:1906.08320. [Google Scholar] [CrossRef]
  20. Cai, J.; Liu, X.; Ye, Q.; Liu, Y.; Wang, Y. A Federated Learning Framework Based on Differentially Private Continuous Data Release. IEEE Trans. Dependable Secur. Comput. 2024, 21, 4879–4894. [Google Scholar] [CrossRef]
  21. Wang, T.; Yang, Q.; Zhu, K.; Wang, J.; Su, C.; Sato, K. LDS-FL: Loss Differential Strategy Based Federated Learning for Privacy Preserving. IEEE Trans. Inf. Forensics Secur. 2024, 19, 1015–1030. [Google Scholar] [CrossRef]
  22. Liu, B.; Lu, J.; Wang, P.; Zhang, J.; Zeng, D.; Qian, Z.; Ge, S. Privacy-Preserving Student Learning with Differentially Private Data-Free Distillation. arXiv 2024, arXiv:2409.12384v1. [Google Scholar]
  23. Banse, A.; Kreischer, J.; i Jürgens, X.O. Federated Learning with Differential Privacy. arXiv 2024, arXiv:2402.02230. [Google Scholar] [PubMed]
  24. Chehbouni, K.; Cock, M.D.; Caporossi, G.; Taik, A.; Rabbany, R.; Farnadi, G. Enhancing Privacy in the Early Detection of Sexual Predators Through Federated Learning and Differential Privacy. arXiv 2025, arXiv:2501.12537. [Google Scholar] [CrossRef]
  25. Ni, Z.; Zhou, Q. Differential Privacy in Federated Learning: An Evolutionary Game Analysis. Appl. Sci. 2025, 15, 2914. [Google Scholar] [CrossRef]
  26. Cozuc, A.; Oltean, A.; Mocanu, I.; Cramariuc, O. Combining GANs and Federated Learning for Privacy Protection. In Proceedings of the 2024 E-Health and Bioengineering Conference (EHB), Iasi, Romania, 14–15 November 2024; pp. 1–4. [Google Scholar] [CrossRef]
  27. Fu, J.; Hong, Y.; Ling, X.; Wang, L.; Ran, X.; Sun, Z.; Wang, W.H.; Chen, Z.; Cao, Y. Differentially Private Federated Learning: A Systematic Review. arXiv 2024, arXiv:2405.08299. [Google Scholar] [CrossRef]
  28. Liu, J.; Lou, J.; Xiong, L.; Liu, J.; Meng, X. Cross-silo Federated Learning with Record-level Personalized Differential Privacy. arXiv 2024, arXiv:2401.16251. [Google Scholar]
  29. Ma, J.; Zhou, Y.; Cui, L.; Guo, S. An Optimized Sparse Response Mechanism for Differentially Private Federated Learning. IEEE Trans. Dependable Secur. Comput. 2024, 21, 2285–2295. [Google Scholar] [CrossRef]
  30. Gao, Y.; Shi, R.; Liu, C. Federated learning scheme with adaptive differential privacy. CAAI Trans. Intell. Syst. 2024, 19, 1395–1406. [Google Scholar] [CrossRef]
  31. Li, Y.; Xu, J.; Zhu, J.; Wang, Y. Bidirectional adaptive differential privacy federated learning scheme. J. Xidian Univ. 2024, 51, 158–169. [Google Scholar] [CrossRef]
  32. Maruyama, S.; Moriya, S. Newton’s Law of Cooling: Follow up and exploration. Int. J. Heat Mass Transf. 2021, 164, 120544. [Google Scholar] [CrossRef]
  33. Zhang, H.; Huang, H.; Peng, C. A Novel User Behavior Modeling Scheme for Edge Devices with Dynamic Privacy Budget Allocation. Electronics 2025, 14, 954. [Google Scholar] [CrossRef]
  34. Hong, J.; Wang, Z.; Zhou, J. On Dynamic Noise Influence in Differentially Private Learning. arXiv 2021, arXiv:2101.07413. [Google Scholar]
  35. Zhang, J.; Fay, D.; Johansson, M. Dynamic Privacy Allocation for Locally Differentially Private Federated Learning with Composite Objectives. arXiv 2023, arXiv:2308.01139. [Google Scholar] [CrossRef]
  36. Kiani, S.; Kulkarni, N.; Dziedzic, A.; Draper, S.; Boenisch, F. Differentially Private Federated Learning with Time-Adaptive Privacy Spending. In Proceedings of the The Thirteenth International Conference on Learning Representations, Singapore, 24–28 April 2025. [Google Scholar]
  37. Arisdakessian, S.; Wahab, O.A.; Mourad, A.; Otrok, H. Towards Instant Clustering Approach for Federated Learning Client Selection. In Proceedings of the 2023 International Conference on Computing, Networking and Communications (ICNC), Honolulu, HI, USA, 20–22 February 2023; pp. 409–413. [Google Scholar]
  38. Liu, R.; Cao, Y.; Yoshikawa, M.; Chen, H. Fedsel: Federated sgd under local differential privacy with top-k dimension selection. In Proceedings of the Database Systems for Advanced Applications: 25th International Conference, DASFAA 2020, Jeju, Republic of Korea, 24–27 September 2020; Proceedings, Part I 25. Springer: Berlin/Heidelberg, Germany, 2020; pp. 485–501. [Google Scholar]
Figure 1. Illustration of differential privacy.
Figure 2. Asynchronous aggregation flowchart.
Figure 4. Accuracy of ADP-FL on the MNIST dataset under different privacy budgets.
Figure 5. Accuracy of ADP-FL on the CIFAR10 dataset under different privacy budgets.
Figure 6. Accuracy of ADP-FL on the CIFAR100 dataset under different privacy budgets.
Figure 7. Accuracy of ADP-FL on the EMNIST dataset under different privacy budgets.
Figure 8. The effectiveness of ADP in combating gradient leakage attacks.
Table 1. Details of datasets and models.
Dataset    Records   Features   Classes   Model         Parameters
MNIST      70,000    784        10        RNN           24,714
EMNIST     814,255   784        62        CNN           206,922
CIFAR10    60,000    1024       10        VGG9          3,491,530
CIFAR100   60,000    1024       100       Transformer   8,568,320
Table 2. Accuracy of baseline and ADP-FL algorithms under different privacy budgets ϵ.
Algorithm          MNIST   CIFAR10   CIFAR100   EMNIST
Non                87.24   56.08     53.85      67.26
Baseline ϵ = 1     51.38   Nan       Nan        Nan
Baseline ϵ = 5     86.20   55.92     53.05      65.43
Baseline ϵ = 10    86.25   62.15     61.25      64.16
ADP-FL ϵ = 1       66.73   Nan       Nan        Nan
ADP-FL ϵ = 5       87.40   56.35     54.54      64.62
ADP-FL ϵ = 10      92.36   63.85     62.88      69.55
Table 3. Comparison of accuracy between the ADP algorithm and the baseline algorithm under the Gaussian mechanism with different privacy budgets on different datasets.
Dataset    Algorithm   ϵ = 10   ϵ = 20   ϵ = 30   ϵ = 40    ϵ = 50
MNIST      Baseline    73.46    85.82    82.32    83.30     80.28
MNIST      ADP-FL      75.04    86.91    82.26    84.91     80.60
CIFAR10    Baseline    27.72    41.13    48.60    47.68     51.40
CIFAR10    ADP-FL      27.98    42.39    50.11    46.34     53.56
CIFAR100   Baseline    30.08    43.13    48.45    47.078    51.43
CIFAR100   ADP-FL      32.98    46.56    46.11    48.34     53.56
EMNIST     Baseline    88.47    97.12    98.05    97.85     98.40
EMNIST     ADP-FL      90.45    97.70    97.26    98.00     98.91
Table 4. Comparison of accuracy between the ADP algorithm and the baseline algorithm under the Laplace mechanism with different privacy budgets on different datasets.
Dataset    Algorithm   ϵ = 10   ϵ = 20   ϵ = 30   ϵ = 40   ϵ = 50
MNIST      Baseline    80.95    82.28    82.86    84.74    87.37
MNIST      ADP-FL      82.75    80.89    83.67    86.59    87.24
CIFAR10    Baseline    50.23    28.88    43.30    51.53    49.39
CIFAR10    ADP-FL      35.99    46.61    46.64    51.91    49.41
CIFAR100   Baseline    43.76    31.28    42.67    50.56    46.99
CIFAR100   ADP-FL      36.67    43.17    49.07    52.94    50.83
EMNIST     Baseline    97.77    98.35    98.30    98.33    98.24
EMNIST     ADP-FL      97.80    95.55    98.46    98.41    97.90
Table 5. Time complexity of the baseline algorithm and ADP-FL algorithm on different datasets, in minutes.
Algorithm   MNIST   CIFAR10   CIFAR100   EMNIST
Baseline    28      45        295        159
ADP-FL      22      32        248        122