Article

PPRD-FL: Privacy-Preserving Federated Learning Based on Randomized Parameter Selection and Dynamic Local Differential Privacy

School of Engineering, Huaqiao University, Quanzhou 362000, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(5), 990; https://doi.org/10.3390/electronics14050990
Submission received: 30 December 2024 / Revised: 23 February 2025 / Accepted: 24 February 2025 / Published: 28 February 2025
(This article belongs to the Special Issue Security and Privacy in Emerging Technologies)

Abstract

As traditional federated learning algorithms often fall short in providing privacy protection, a growing body of research integrates local differential privacy methods into federated learning to strengthen privacy guarantees. However, under a fixed privacy budget, as the dimensionality of model parameters increases, the privacy budget allocated per parameter diminishes, which means that a larger amount of noise is required to meet privacy requirements. This escalation in noise may adversely affect the final model’s performance. To address this, we propose a privacy-preserving federated learning approach based on randomized parameter selection and dynamic local differential privacy (PPRD-FL). First, we design a randomized parameter selection strategy that combines randomization with importance-based filtering, effectively addressing the privacy budget dilution problem by selecting only the most crucial parameters for global aggregation. Second, we develop a dynamic local differential privacy-based perturbation mechanism, which adjusts the noise levels according to the training phase, not only providing robustness and security but also optimizing the dynamic allocation of the privacy budget. Finally, our experiments demonstrate that the proposed approach maintains high performance while ensuring strong privacy guarantees.

1. Introduction

The advancement of big data has propelled artificial intelligence to a new stage. To uncover the “secrets” hidden within massive datasets, it is essential to develop higher-precision learning models capable of capturing the inherent features of the data. However, this requires a large amount of high-quality data as support. Recent studies have explored solutions through information fusion [1,2]. Nevertheless, high-quality data often contain participants’ private information. With the growing emphasis on personal privacy, many participants are disinclined to share their data, making information fusion even more difficult.
In this context, Google presented the idea of federated learning (FL) [3] in 2016. FL allows multiple entities to collaboratively develop a shared global model while keeping private data localized, thereby ensuring data confidentiality and safety. Although FL offers some degree of privacy protection, this is far from sufficient. Research [4,5,6,7,8] has shown that there remains a potential risk of private data leakage during parameter transmission. Existing attack methods in machine learning, such as inference attacks, model inversion attacks, and reconstruction attacks, can infer local data from intermediate parameters, potentially compromising sensitive local data, as shown in Figure 1. Reference [9] demonstrates that sensitive information from an individual dataset may be obtained through a single trained model. Reference [10] shows that, with certain attacks, raw data may be reconstructed from model parameters sent by FL participants, which can lead to privacy breaches. Therefore, strengthening the privacy protection of federated learning is an urgent challenge that must be resolved.
In recent years, various methods have emerged to address privacy issues in FL, which can generally be classified into two categories: cryptographic algorithms and differential privacy (DP). Most cryptographic algorithms use homomorphic encryption (HE) [11,12], which can provide a high standard of privacy protection by allowing calculations to be carried out on encrypted data. However, for complex data and operations, HE incurs significant computational overhead, which may adversely affect system performance. Compared to encryption algorithms, DP offers a clear lightweight advantage, making it more applicable and useful. DP generally requires the entire system or dataset to adhere to differential privacy constraints during aggregation, but without a trusted third party, it is not ideal for privacy protection in distributed architectures such as FL. Therefore, based on DP, local differential privacy (LDP) [13,14,15] has been introduced, where each data holder randomizes their data prior to transmission to the central server, so that privacy protection occurs at the data source. However, in LDP, as the dimensionality of the parameters increases, the privacy budget allocated to each parameter diminishes, resulting in a greater variance introduced by the perturbation mechanism. This increased variance may lead to instability during the model training process, impacting the model’s final performance.
Furthermore, current research on LDP predominantly focuses on fixed noise perturbation, where the noise remains constant across global iterations. This approach may result in an increased training or convergence time, potentially reducing model performance [16]. Alternatively, it is feasible to allow the variance of the DP perturbation to adjust dynamically across multiple global aggregation rounds, thereby boosting the model performance of FL without affecting the level of privacy preservation. Specifically, in the initial phase of the FL process, a smaller perturbation noise is expected to facilitate convergence [17]. As the training progresses and the model approaches convergence, the noise can be gradually increased to maintain a high level of privacy protection while still ensuring efficient learning, better adapting to the evolving needs of the training process. This dynamic adjustment helps mitigate the conflict between privacy protection and model performance, and is conducive to adjusting the allocation of the privacy budget, potentially enhancing both the learning efficiency and accuracy.
In response to the issues outlined above, this article proposes an LDP-based privacy-preserving FL approach. This is particularly crucial for applications in sensitive fields such as healthcare, finance, and IoT, where ensuring privacy is essential for protecting personal information while still enabling the benefits of collaborative data usage. Specifically, we employ a randomized parameter selection strategy in conjunction with the dynamic LDP mechanism to enhance the privacy protection of participants in the FL. The main contributions of this paper are outlined as follows:
  • We propose a novel randomized parameter selection strategy (R-PSS), which selects parameters for global aggregation based on their importance in the network, while incorporating controlled randomness to further enhance the process. This strategy effectively mitigates privacy budget dilution, improving the trade-off between privacy and model usability;
  • We introduce a dynamic local differential privacy mechanism, where the standard deviation of the perturbation mechanism varies dynamically with global aggregation. This adaptive approach ensures robust privacy protection while minimizing noise, addressing the key privacy challenges in federated learning;
  • We conducted a comprehensive privacy analysis to demonstrate the feasibility of the proposed approach. Compared to existing methods, the proposed approach achieves better privacy protection while preserving model performance.

2. Related Work

2.1. Privacy-Preserving Techniques in Federated Learning

Federated learning requires participants to upload model parameters for global optimization, which can lead to privacy risks during data transmission. To address these concerns, privacy-preserving techniques like secure aggregation [18], homomorphic encryption [19], and differential privacy [20] have been proposed. Wang et al. [21] propose VOSA, a verifiable and oblivious secure aggregation protocol that utilizes aggregator oblivious encryption and dynamic group management to protect local gradients, ensure result verifiability, and tolerate user dropouts in federated learning. Xu et al. [22] propose PPFDL, a privacy-preserving federated deep learning framework that uses homomorphic encryption to protect data confidentiality and mitigate the impact of low-quality data from irregular users. Chen et al. [23] propose PDLHR, a privacy-preserving deep learning model using homomorphic re-encryption to enhance efficiency and security in robot systems while protecting data privacy. Guo et al. [24] propose an improved differential privacy algorithm for federated learning, utilizing Fast Fourier Transform and Privacy Loss Distribution to enhance privacy protection, minimize resource limitations, and balance privacy and utility during model training. Shen et al. [25] propose PEDPFL, a differential privacy-based federated learning algorithm that improves model robustness against DP noise using a classifier-perturbation regularization method. However, secure aggregation primarily focuses on aggregating encrypted model updates but may still expose sensitive information during the aggregation process. Homomorphic encryption provides strong security guarantees but suffers from high computational complexity and inefficiency, especially for large-scale models. Differential privacy offers a robust privacy framework but it relies on a trusted third party for the aggregation of data.

2.2. Challenges and Applications in Local Differential Privacy

LDP achieves privacy protection by performing noise addition locally. With the increasing interest in LDP, many scholars have conducted in-depth research. Zhao et al. [26] integrate FL and LDP for the IoV, proposing efficient mechanisms to enhance privacy, reduce costs, and maintain utility. Liu et al. [27] introduced the FedSel algorithm, which selects only the most significant dimensions to add noise, reducing the variance of LDP noise. Chen et al. [28] introduced a layer-wise LDP method, which perturbs each layer of the local model based on the privacy budget assigned to the clients. Wei et al. [29] introduced a novel DP-based FL scheme (NbAFL), which adds artificial noise to client parameters prior to aggregation. Chen et al. [13] propose SPM-FL to enhance privacy and accuracy in federated learning by reducing noise variance under smaller privacy budgets. However, in practice, deep learning models often produce a considerable number of high-dimensional parameters. In the LDP mechanism, as the dimensionality of the parameters increases, the total privacy budget required to maintain consistent privacy protection across all parameters also expands. This implies that, under a fixed privacy budget, a higher dimensionality of parameters requires injecting larger noise into each dimension.
In addition, recent studies have also paid attention to the issue of parameter dimensionality. Shin et al. [30] addressed the parameter dependence issue using dimensionality reduction techniques. However, this method randomly discards certain parameters, even those that may contribute significantly to the model, which could greatly impact the final model effectiveness. Sun et al. [31] combined NF-DP (Noise-Free Differential Privacy) with model distillation to effectively manage the balance between confidentiality and performance. Sun et al. [32] addressed the issue of dimensionality in federated learning by introducing a method that splits and shuffles model updates. Song et al. [33] introduced ASPFL, an efficient federated learning approach that combines adaptive sparsity-based pruning with differential privacy. Wang et al. [34] proposed PPeFL, which uses three LDP mechanisms to reduce privacy budget growth, improve performance, and enhance communication efficiency. However, the effectiveness of these methods in achieving their intended purpose requires further examination, and most of them consider only fixed DP noise perturbations, which can require long training or convergence times.
Therefore, we aim to propose a privacy-preserving method that avoids the drawbacks mentioned above, ensuring that the training accuracy is not significantly compromised. In comparison to existing privacy-preserving federated learning models, our approach maintains an excellent model performance while ensuring a higher level of privacy protection. The relevant symbols and parameters presented in this paper are shown in Table 1.

3. Preliminaries

3.1. Federated Learning

FL architectures usually involve two primary roles: the server and the participants. Assume there are N participants, each holding an independent dataset D_i with the same set of features, while the network model architecture used during training is also identical. In FL, each participant K_i (i ∈ {1, 2, ..., N}) builds a model from their local dataset without sharing the local data. The objective of these participants is to train a convergent model that meets the requirements of multiple parties, which involves finding optimal model parameters that minimize the given loss function [3]. The optimization process can be described as follows:
The local loss function F_i(w_i) corresponding to participant K_i can be written as

F_i(w_i) = \frac{1}{|D_i|} \sum_{j=1}^{|D_i|} f_j(x_j, w_i)

where w_i represents the model parameters of participant K_i; |D_i| denotes the size of the dataset belonging to participant K_i; and f_j(x_j, w_i) denotes the loss function for each sample x_j in participant K_i’s dataset.
The global loss function F(w) at the server side is defined as the weighted mean of the local loss functions across all participants:

F(w) = \sum_{i=1}^{N} \frac{|D_i|}{|D|} F_i(w_i)

where w represents the global model parameters and |D| = \sum_{i=1}^{N} |D_i| represents the aggregate number of samples across all participants.
Therefore, FL aims to optimize the global model parameters w by minimizing the global loss function F(w). The optimization objective is defined as

w^{*} = \arg\min_{w} F(w)
In FL, participants engage in multiple rounds of model parameter exchanges with the server, where each round is called a global iteration. The following describes the process:
Step 1: Model initialization: the server generates the global model’s initial parameters w^(0) and distributes these parameters to all participants.
Step 2: Participant-side training: each participant initializes their local model w_i^(t) using the global model parameters w^(t) provided by the server and then performs multiple iterations of training on their respective local dataset D_i.
Step 3: Parameter upload: after completing local training, each participant submits the newly trained model parameters w_i^(t) to the server for aggregation.
Step 4: Parameter aggregation: after receiving the updated model parameters from all participants, the server aggregates these parameters to compute a new global model w^(t+1).
Step 5: Parameter distribution: the server redistributes the newly aggregated result w^(t+1) to all participants.
Step 6: Steps 2 to 5 are repeated until the global model converges or the maximum number of training iterations is reached. A minimal sketch of this aggregation loop is given below.
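The following Python sketch illustrates one FedAvg global iteration (Steps 2 to 5 above), assuming a PyTorch model whose parameters are exchanged as state dictionaries and an illustrative local_train() routine for the local updates; these names are assumptions for exposition and are not the paper’s implementation.

```python
# Minimal sketch of one FedAvg global round (Steps 2-5).
import copy

def fedavg_round(global_model, client_datasets, local_epochs=3):
    client_states, client_sizes = [], []
    for D_i in client_datasets:
        local_model = copy.deepcopy(global_model)            # Step 2: start from w(t)
        local_train(local_model, D_i, epochs=local_epochs)   # assumed local training routine
        client_states.append(local_model.state_dict())       # Step 3: upload w_i(t)
        client_sizes.append(len(D_i))

    total = sum(client_sizes)                                 # |D| = sum of all |D_i|
    new_state = {key: sum((n / total) * s[key].float()
                          for s, n in zip(client_states, client_sizes))
                 for key in client_states[0]}                 # Step 4: weighted average
    global_model.load_state_dict(new_state)                   # Step 5: redistribute w(t+1)
    return global_model
```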

3.2. Potential Threats

We assume that the server operates under the premise of being honest yet curious; while it strictly adheres to the FL training protocol, there remains a potential risk of obtaining the participants’ private information. In addition, external attackers may attempt to acquire the participants’ sensitive information. Although users of the FL can locally retain and train their dataset, the local model parameters communicated between participants and the server may jeopardize user privacy [35]. The following are possible threats:
  • An attacker can eavesdrop on the communication in the system to obtain sensitive data from participants;
  • A compromised server may attempt to reconstruct participants’ data from the parameters they upload;
  • Participants controlled by attackers may try to extract private data about fellow participants from the perturbed data.

3.3. Local Differential Privacy

Differential privacy [36] serves as a robust framework for mitigating the risk of information leakage, capable of ensuring the protection of personal data during both analysis and sharing. By incorporating rigorous mathematical mechanisms, differential privacy guarantees that the results of data queries do not compromise the privacy of any single participant’s information, thus fostering a secure environment for data sharing and publication. Local differential privacy (LDP) [37] represents a decentralized adaptation of differential privacy, suitable for distributed data collection and analysis scenarios. In LDP, data are perturbed on the user’s side before being sent to the data collector. By implementing such local perturbation, LDP enhances the security of sensitive information and mitigates the risk of privacy violations in decentralized environments. Next, this article briefly introduces the concepts of differential privacy.
Definition 1.
((ε, δ)-differential privacy). A randomized algorithm M satisfies (ε, δ)-differential privacy if, for any two neighboring datasets D and D′ differing in at most one element, and for any subset of outputs S, it holds that

\Pr[M(D) \in S] \le e^{\varepsilon} \Pr[M(D') \in S] + \delta
where ε represents the privacy loss parameter, measuring the degree of privacy safeguarding. A smaller ε results in closer output probabilities between two datasets, indicating enhanced privacy protection and increasing the difficulty for an attacker to ascertain the origin of the data. The parameter δ delineates the probability of breaching differential privacy. When δ = 0 , the algorithm strictly adheres to the principles of differential privacy, signifying a higher standard of privacy assurance.
Definition 2.
(Sensitivity). Sensitivity describes the maximum impact that a change in a single element of the dataset can have on the output of a given function. The Gaussian mechanism ensures differential privacy by introducing noise, and sensitivity determines the magnitude of the noise required to maintain privacy protection. For a given function R and two neighboring datasets D and D′, the sensitivity of R is given by

\Delta f = \max_{D, D'} \| R(D) - R(D') \|_2

where R(·) denotes the function evaluated on D and D′, and ‖·‖ denotes the l2-norm. Higher sensitivity results in a greater amount of noise being added to the parameters for a given privacy budget.
The Gaussian mechanism [38] is a commonly used method in differential privacy, employed to process data while ensuring privacy. It controls the risk of sensitive information leakage by adding noise that follows a normal distribution to the query results.
Definition 3.
(Gaussian mechanism). The output of the query function M : D → ℝ^d is perturbed by adding Gaussian noise z. The perturbed output is defined as

\tilde{M}(D) = M(D) + z

where z ∼ N(0, σ²) denotes Gaussian noise with a mean of 0 and a variance of σ².
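For concreteness, the short NumPy sketch below applies the Gaussian mechanism of Definition 3, with the noise scale calibrated by the classical bound σ ≥ (Δf/ε)√(2 ln(1.25/δ)); the function name and the example query are illustrative assumptions, not part of the paper’s code.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta):
    # Noise scale from the classical Gaussian mechanism bound
    sigma = (sensitivity / epsilon) * np.sqrt(2.0 * np.log(1.25 / delta))
    noise = np.random.normal(loc=0.0, scale=sigma, size=np.shape(value))
    return np.asarray(value) + noise

# Example: perturb a scalar query whose sensitivity is 0.01
perturbed = gaussian_mechanism(value=0.42, sensitivity=0.01, epsilon=0.5, delta=1e-5)
```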

4. Method

4.1. PPRD-FL

In PPRD-FL, we begin by assuming the presence of N participants, each of whom possesses a certain amount of private data and an initial model with the same architecture. Initially, the server initializes the model parameters, after which it systematically disseminates these parameters to all participants in the network. Subsequently, each participant K_i engages in a series of iterative updates leveraging their respective local dataset, enabling each local model to effectively reflect the characteristics of the local data. Once the updates are complete, participants send their trained model parameters to the server for aggregation. During this process, the protection of local model parameters is ensured through the use of R-PSS and dynamic LDP methods, effectively mitigating potential attack threats. Ultimately, the server conveys the aggregation results back to all participants, enabling the next iteration of updates, and this iterative cycle persists until the defined training objectives are satisfactorily achieved. The architecture of PPRD-FL is schematically represented in Figure 2.
The implementation of PPRD-FL features two modules: the aggregation module and the local training module.
Aggregation module: Similarly to traditional federated aggregation, after each participant K i performs c iterations of updates on its local dataset, the server collects and aggregates the trained model parameters from all participants. We also use the Federated Averaging algorithm to perform the aggregation operation. After the global aggregation process, the server proceeds to transmit the averaged model parameters to all participants, ensuring that each participant receives the updated information necessary for subsequent iterations of the federated learning protocol.
Local training module: Upon receiving the latest weights w ( t ) , each participant K i iteratively updates their model using their local dataset. First, participant K i performs c rounds of updates, applying the Adam optimization algorithm [39] to the data in each round. After c updates, participant K i obtains the updated local parameters w i ( t + 1 ) . Subsequently, to safeguard privacy, randomized noise is injected into the updated local parameters. Due to the large number of model parameters, it is not a sensible approach to directly use the LDP algorithm to perturb the model. Therefore, it is crucial to filter the parameters beforehand, focusing the privacy budget on key parameters to minimize noise and enhance the model’s accuracy.
Algorithm 1 offers a comprehensive explanation of the overall execution process of PPRD-FL.
Algorithm 1 PPRD-FL
    Input: the number N of participants, the global model’s initial parameters w^(t), the number of global training rounds T, the number of local iteration rounds c per participant, the total size |D| of all participant datasets, the size |D_i| of each participant’s local dataset, and the learning rate η.
   Output: global model parameters w^(t+1)
1 The parameter server generates the global model weights w^(t);
2 The weights w^(t) are broadcast to all participants;
3 Global training:
4   for global epoch t = 1, 2, ..., T do:
5     for K_i (i = 1, 2, ..., N) do:
6       Accept the global weights w^(t);
7       Perform c rounds of local updates using Adam optimization;
8         for epoch j = 1, 2, ..., c do:
9           Update the parameters of participant K_i:
10            w_i^(t) = w_i^(t-1) − η ∇f(w_i^(t-1));
11        Clip the parameters of participant K_i:
12            w_i^(t) = w_i^(t) / max(1, ||w_i^(t)|| / C);
13      Apply R-PSS to filter the parameters: w_i^f(t) ← Filtering(w_i^(t));
14      Add dynamic LDP noise: w̃_i^f(t) = w_i^f(t) + z_i;
15      Send the noised local weights w̃_i^f(t) to the server;
16    End
17    Global model aggregation:
18      Collect the noised weights w̃_i^f(t) from all participants;
19      Calculate the global model weights:
20           w^(t+1) = Σ_{i=1}^{N} (|D_i| / |D|) · w̃_i^f(t)
21  End
22 return w^(t+1)
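To make the control flow concrete, the sketch below traces one global round of Algorithm 1 in Python under several assumptions: model parameters are handled as dictionaries of tensors, client.local_update() performs the c Adam epochs, and the helpers rpss_filter() (a sketch follows Algorithm 2 below) and dynamic_sigma() (Section 4.3) return, respectively, the filtered weights with a Boolean selection mask and the round-dependent noise scale. Applying noise only to the selected entries follows the discussion in Section 4.3; none of these names come from the paper’s code.

```python
import torch

def pprd_fl_round(global_state, clients, C, m, p, eps, delta, sens):
    noisy_states, sizes = [], []
    for client in clients:
        w_i = client.local_update(global_state)               # c epochs of Adam updates (lines 7-10)
        # Clipping: w_i <- w_i / max(1, ||w_i|| / C)            (lines 11-12)
        norm = torch.cat([v.flatten() for v in w_i.values()]).norm()
        w_i = {k: v / max(1.0, float(norm) / C) for k, v in w_i.items()}
        w_f, mask = rpss_filter(w_i, p=p)                      # R-PSS, Algorithm 2 (line 13)
        sigma = dynamic_sigma(m, sens, eps, delta)             # sigma(m), Section 4.3 (line 14)
        noisy = {k: torch.where(mask[k], v + torch.randn_like(v) * sigma, v)
                 for k, v in w_f.items()}                      # perturb only selected entries
        noisy_states.append(noisy)                             # upload (line 15)
        sizes.append(client.num_samples)

    total = sum(sizes)                                         # weighted aggregation (lines 17-20)
    return {k: sum((n / total) * s[k] for s, n in zip(noisy_states, sizes))
            for k in noisy_states[0]}
```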

4.2. Filtering Using Randomized Parameter Selection Strategy

In FL, training complex neural network models typically requires multiple iterations, producing a substantial number of parameters. When LDP is applied to every parameter, the privacy budget allocated to each one decreases due to the composability property of differential privacy, so larger noise must be injected in each successive iteration; the cumulative effect of this noise further diminishes the utility of the model, making it more difficult to balance privacy protection and model accuracy. This trade-off poses a significant challenge, especially when training large-scale models that require precise parameter updates to achieve a good performance.
As noted in [40], weights that are zero or close to zero signify that these connections do not affect the network, as no signals are transmitted through them. To achieve privacy protection while mitigating the impact of privacy budget dilution on model performance, we do not apply LDP to all parameters; instead, we use R-PSS to filter and select the parameters that best reflect the characteristics of the local data for perturbation. First, we sort the parameters by their absolute values to form a sequence. Then, we filter out the parameters whose weights are close to zero (treating parameters with absolute values below a threshold z = 1 × 10^-6 as close to zero) and directly select a portion of parameters with relatively higher absolute values according to a fixed proportion. Finally, to prevent attackers from inferring the participants’ sensitive information by observing changes in specific parameters, we introduce a Bernoulli distribution [41] into the selection of the remaining parameters to inject randomness. To control the number of selected parameters, we introduce a constraint parameter p for adjustment. The process is described in Algorithm 2. While the weight parameters w can be either positive or negative, we take their absolute values |w| to ensure uniform perturbation. This simplifies the process and prevents complications arising from the sign of the weights. In machine learning models, the sign of a weight indicates the direction of its influence on the model’s output: positive weights contribute positively to the prediction, while negative weights contribute negatively. For example, in a linear model, if w_1 = 0.5 and w_2 = -0.3, the feature corresponding to w_1 has a positive effect, whereas the feature corresponding to w_2 has a negative effect. By using |w|, we focus on the magnitude of the weights, ensuring that the perturbation process is consistent and unbiased, regardless of the direction of influence.
Algorithm 2 R-PSS
    Input: the weight parameters w_i of participant K_i, the Bernoulli selection probability p, and the near-zero threshold z
   Output: the weight parameters w^f after final filtering
1 Initialize w^f with the same structure as w_i;
2 Obtain the absolute values of all parameters of w_i and arrange them in descending order to obtain a list L_w;
3 Calculate the threshold c, i.e., the smallest absolute value among the top 50% of L_w;
4 for w ∈ w_i do:
5   If |w| ≥ c, then:
6     Mark w as w′ (selected) and add it to the corresponding position of w^f;
7   Else if |w| < c and |w| > z, then:
8     Generate a random bit b ~ Bernoulli(p);
9     If b = 1, then:
10      Mark w as w′ (selected) and add it to the corresponding position of w^f;
11    If b = 0, then:
12      Do not process w, and add it to the corresponding position of w^f;
13  Else if |w| ≤ z, then:
14    Do not process w, and add it to the corresponding position of w^f;
15 End
16 Return w^f
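A possible Python rendering of Algorithm 2 is given below. Returning a Boolean selection mask alongside the (unchanged) weights is an implementation assumption, used so that the dynamic LDP noise in Algorithm 1 can be applied only to the selected entries; the 50% cut-off, the near-zero threshold z, and the Bernoulli probability p follow the description above.

```python
import torch

def rpss_filter(w_i, p, z=1e-6, top_fraction=0.5):
    w_f, mask = {}, {}
    all_abs = torch.cat([v.abs().flatten() for v in w_i.values()])
    # c = smallest absolute value within the top 50% of |w| (lines 2-3)
    k = max(1, int(top_fraction * all_abs.numel()))
    c = torch.topk(all_abs, k).values.min()

    for name, w in w_i.items():
        a = w.abs()
        selected = a >= c                                # always keep the largest weights (lines 5-6)
        middle = (a < c) & (a > z)                       # candidate weights (line 7)
        bern = torch.bernoulli(torch.full_like(w, p)).bool()
        selected = selected | (middle & bern)            # randomly admit some candidates (lines 8-10)
        w_f[name] = w                                    # values themselves are kept in place
        mask[name] = selected                            # near-zero weights remain unselected (lines 13-14)
    return w_f, mask
```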
Initially, the algorithm sorts and filters out parameters close to zero, retaining only those parameters with a significant impact on the model. With each iteration, the selected parameters increasingly reflect the key characteristics of the data. After several iterations, the parameter set stabilizes, meaning that the parameters with higher weights undergo minimal changes. This process ensures that the model converges quickly while maintaining high accuracy.

4.3. Dynamic Perturbation Mechanism

Within the framework of differential privacy, the privacy budget is used to measure the effectiveness of privacy protection. In federated learning, participants consume a privacy budget for parameter protection in each global round. If the privacy budget is evenly allocated to each round, this may lead to the poorer protection of parameters in later stages. In other words, since the model is usually simpler in the early stages and has limited capacity to represent data, it requires less noise. As training progresses, the model gradually captures more local data characteristics, meaning that privacy protection for the model parameters in the later stages should be given more attention. Moreover, introducing smaller noise in the early stages of training can facilitate faster convergence [17]. Accordingly, we have designed a logarithmic scale-based dynamic LDP method that applies varying levels of noise based on different phases.
To satisfy ( ε , δ )-differential privacy, the Gaussian mechanism requires adding noise with a standard deviation of σ to the output of the function f(D), where
\sigma \ge \frac{\Delta f}{\varepsilon} \sqrt{2 \ln \frac{1.25}{\delta}}
Based on the statistical properties of the Gaussian mechanism and the privacy protection requirements at different stages, the standard deviation σ(m) in the m-th global aggregation round is defined as

\sigma(m) = \left(1 + \frac{\ln m}{a}\right) \frac{\Delta f}{\varepsilon} \sqrt{2 \ln \frac{1.25}{\delta}}
where σ = (Δf/ε)√(2 ln(1.25/δ)) is the initial standard deviation of the noise and a > 0 is an adjustable parameter controlling the noise growth rate.
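As a small illustration, the helper below evaluates σ(m) under the assumption that the baseline σ is taken at equality in the Gaussian-mechanism bound; the function name and the default a = 75 (the value later used in Section 5.1) are illustrative, not from the paper’s code.

```python
import math

def dynamic_sigma(m, sensitivity, epsilon, delta, a=75.0):
    # Baseline sigma from the Gaussian mechanism bound, taken at equality
    base = (sensitivity / epsilon) * math.sqrt(2.0 * math.log(1.25 / delta))
    # sigma(m) grows logarithmically with the global round index m
    return (1.0 + math.log(m) / a) * base

# Example: noise scale in rounds 1 and 10 for an illustrative parameter setting
s1 = dynamic_sigma(1, sensitivity=0.01, epsilon=0.4, delta=1e-5)
s10 = dynamic_sigma(10, sensitivity=0.01, epsilon=0.4, delta=1e-5)
```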
Lemma 1.
The proposed method satisfies ( ε , δ )-differential privacy by carefully controlling the noise added to the model parameters in each global aggregation round. The noise is dynamically adjusted based on the training phase, ensuring that the cumulative privacy budget remains within the global constraint.
Proof of Lemma 1.
By employing the clipping technique, the local model parameters are constrained such that ||w_i|| ≤ C, where w_i denotes the noise-free local model parameters of participant K_i and C is the clipping threshold on ||w_i||. The sensitivity can then be bounded as Δf = 2C/|D_i|. Finally, with this sensitivity Δf, logarithmically varying noise is added in each global aggregation round. □
In the Gaussian mechanism, the accumulation of the privacy budget must satisfy the constraint of a global privacy budget. To analyze and approximate the cumulative privacy budget, we use an integral approximation to show that the total privacy budget is finite.
The privacy budget for a single Gaussian mechanism, denoted as ε_m, is defined as

\varepsilon_m = \frac{\Delta f \sqrt{2 \ln (1/\delta)}}{\sigma(m)} = \frac{\Delta f \sqrt{2 \ln (1/\delta)}}{\left(1 + \frac{\ln m}{a}\right)\sigma} = \frac{\varepsilon}{1 + \frac{\ln m}{a}}
The privacy budget for the entire process is the aggregate of the privacy budgets for all single iterations:
\varepsilon_{\mathrm{total}} = \sum_{m=1}^{M} \varepsilon_m = \sum_{m=1}^{M} \frac{\varepsilon}{1 + \frac{\ln m}{a}}
Since ε m is a monotonically decreasing function, the cumulative sum can be approximated using an integral:
\varepsilon_{\mathrm{total}} \approx \int_{1}^{M} \frac{\varepsilon}{1 + \frac{\ln m}{a}}\, dm
To simplify the computation of the integral, we use the substitution u = 1 + ln(m)/a, under which m = e^{a(u-1)} and dm = a e^{a(u-1)} du, so the integral becomes

\varepsilon_{\mathrm{total}} \approx \int_{1}^{1 + \frac{\ln M}{a}} \frac{\varepsilon\, a\, e^{a(u-1)}}{u}\, du \approx \int_{1}^{1 + \frac{\ln M}{a}} \frac{\varepsilon\, a}{u}\, du \approx \varepsilon\, a \ln\left(1 + \frac{\ln M}{a}\right)
From this result, it is evident that the total privacy budget ε t o t a l grows at a rate governed by the logarithmic function ln . Even as the total number of iterations M becomes very large, the growth of ε t o t a l remains slow and controlled. Therefore, by selecting an appropriate parameter a , it is possible to effectively limit the growth of the total privacy budget and ensure that it satisfies the global privacy budget constraint.
The work in reference [34] also points out that, based on the post-processing invariance of differential privacy, provided that the local perturbation algorithm adheres to LDP, attackers cannot deduce participants’ private information from the intermediate parameters. Moreover, even if a server or participant is compromised, they can only access the perturbed parameters, not the original private data. Therefore, as long as the proposed mechanism satisfies LDP, the entire architecture can ensure privacy protection. The dynamic perturbation mechanism designed in the text is adjusted based on the initial standard deviation, while still satisfying the differential privacy definition.
Although the proposed perturbation mechanism results in a higher noise magnitude being added to the selected parameters in each round, for neural networks with millions of parameters, it is possible to choose an appropriate noise adjustment magnitude. By adding noise to only a selected subset of key parameters, as opposed to all parameters, the mechanism can significantly reduce the cumulative impact of noise on the global model while maintaining the same level of privacy protection.

5. Results

5.1. Experimental Setting

Experiment environment: The proposed approach was implemented in Python 3.10 using PyTorch 2.2.2 and tested on a single GPU. The experiments were carried out on a Windows 11 machine equipped with a 12th Gen Intel(R) Core(TM) i5-12500H CPU (Intel Corporation, Santa Clara, CA, USA), 16 GB of RAM, and an NVIDIA GeForce RTX 3050 Laptop GPU (NVIDIA Corporation, Santa Clara, CA, USA).
Datasets:
  • MNIST Dataset: Consists of 70,000 grayscale images of hand-written digits ranging from 0 to 9, each with a resolution of 28 × 28 pixels. Among these, 60,000 images are used for training and 10,000 for testing. It is a widely adopted benchmark for evaluating image classification algorithms;
  • Fashion-MNIST: The dataset contains 70,000 grayscale images across 10 fashion-related categories, such as clothing and shoes. The dataset is divided into 60,000 training images and 10,000 testing images, with each image sized at 28 × 28 pixels. It is designed as a more challenging and advanced alternative to the MNIST dataset;
  • CIFAR-10 Dataset: The dataset includes 60,000 color images divided into 10 classes, with each image having a resolution of 32 × 32 pixels. It is split into 50,000 training images and 10,000 testing images. This dataset is widely used in deep learning for object recognition and image classification tasks.
Network model: This paper employs a Convolutional Neural Network (CNN) architecture for all datasets. For the MNIST and Fashion-MNIST datasets, which are composed of grayscale images, this study implemented a CNN comprising two convolutional layers with max-pooling layers after each, followed by two fully connected layers. For the CIFAR-10 dataset, made up of color images, this study used a similar CNN architecture but adjusted the input channels to accommodate the RGB format. Both network architectures used the Adam optimization algorithm with a learning rate of 0.001. Each participant performed 3 local rounds, while the global training comprised 10 rounds. Based on multiple experimental results, we set the parameter selection range to 80% and the growth factor of the dynamic perturbation mechanism to 75.
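For reference, a minimal PyTorch sketch of the described CNN is given below; the channel widths, kernel sizes, and hidden-layer dimension are assumptions, since the paper specifies only the layer types (two convolution + max-pooling blocks followed by two fully connected layers) and the optimizer settings.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=10, img_size=28):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # first conv + max-pooling block
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # second conv + max-pooling block
        )
        flat = 64 * (img_size // 4) * (img_size // 4)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 128), nn.ReLU(),        # first fully connected layer
            nn.Linear(128, num_classes),            # second fully connected layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# MNIST / Fashion-MNIST: in_channels=1, img_size=28; CIFAR-10: in_channels=3, img_size=32
model = SimpleCNN(in_channels=1, img_size=28)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```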

5.2. Result Evaluation and Analysis

The most objective criterion for evaluating the quality of a proposed scheme is its performance on datasets. Therefore, we tested the proposed scheme on different datasets and used the average results of multiple experiments as the final outcome.

5.2.1. Component-Wise Performance Analysis of PPRD-FL

To assess the contribution of each component to the model, we compared the performance of the methods without R-PSS, without D-LDP, and with Full PPRD-FL under the same privacy budget. As shown in Figure 3, even when using a single technique (such as R-PSS or D-LDP) independently, the overall training process exhibits a smooth and stable convergence trend. However, when R-PSS is excluded, the model’s overall performance is inferior to that of Full PPRD-FL due to the noise added to all parameters, resulting in noise accumulation that degrades the model’s performance. When D-LDP is excluded, although the model performs better, the privacy protection may be uneven under a fixed privacy budget, with certain rounds having excessive privacy protection while others are insufficient. Full PPRD-FL combines R-PSS and D-LDP, which reduces noise accumulation and provides balanced privacy protection, thus achieving optimal performance in both privacy protection and model accuracy.

5.2.2. Evaluation on Privacy Budget ε

First, we analyze the changes in accuracy and loss on the MNIST dataset. As can be seen from Figure 4a,b, when the privacy budget ε ≥ 0.3, the overall trend of the model remains relatively stable, and the accumulation of noise does not significantly affect the training trend of the model. Moreover, as the privacy budget increases, the performance of the model gradually improves, which is consistent with expectations. Most importantly, when the privacy budget ε is 0.4, the accuracy of our proposed method is comparable to that of the model without differential privacy, reaching about 97%. Following this, Figure 5a,b present the experimental results of the model on the Fashion-MNIST dataset, which features more complex data compared to MNIST. A similar trend is observed: higher privacy budgets result in increased accuracy and reduced loss values, further confirming the method’s effectiveness across datasets with varying complexity. Finally, Figure 6a,b illustrate the performance of the model on the CIFAR-10 dataset. Considering the complexity of the dataset and its sensitivity to noise, this paper increases the baseline of the privacy budget. The results continue to show that increasing the privacy budget can improve accuracy and reduce loss, thereby verifying the robustness and scalability of the proposed PPRD-FL method on different datasets.

5.2.3. Model Performance Under Different Privacy Protection Schemes

We compared our results with related works. First, as illustrated in Figure 7, an increase in the privacy budget ε leads to a certain degree of improvement in the model accuracy under the same conditions, which is consistent with our practical expectations. This is because a smaller ε results in a larger discrepancy between the initial parameters and the perturbed parameters, leading to stronger privacy protection but poorer model performance; conversely, a larger ε results in the opposite effect. Second, our proposed approach maintains a superior accuracy when ε ≥ 0.4, making it highly competitive compared to other related schemes. From Figure 7, it is evident that, even under conditions of a small privacy budget, the accuracy of PPRD-FL surpasses that of LDP-FL and DPM-EU. Furthermore, in previous research [32,34], the optimal accuracy on the MNIST dataset was achieved with privacy budgets of 5 and 0.5, respectively, but we were able to obtain comparable results using a privacy budget of only 0.4, thereby providing better privacy protection. Overall, our scheme demonstrates improved privacy protection while ensuring superior performance compared to other approaches.

6. Discussion

The proposed PPRD-FL method, which combines randomized parameter selection and dynamic local differential privacy, offers significant advancements in privacy-preserving federated learning. This approach is particularly valuable in privacy-sensitive domains where data confidentiality is paramount. In medical data sharing, the PPRD-FL method allows multiple healthcare institutions to collaboratively train models while ensuring that patient data remain private and secure. The dynamic adjustment of noise levels and parameter selection helps in maintaining the model’s utility while safeguarding sensitive health information. Financial institutions can use this approach to develop collaborative models without sharing customers’ private financial data. The method ensures strong privacy protection, even with complex financial datasets, which may otherwise be vulnerable to inference or reconstruction attacks.
The practical implementation of PPRD-FL in mobile or edge computing environments presents both opportunities and challenges. The increasing computational capabilities of mobile devices make them suitable candidates for federated learning. However, PPRD-FL must be optimized to handle resource constraints, such as processing power, memory, and battery life. The dynamic LDP mechanism, which adjusts the noise levels throughout the training process, could reduce the computational burden in the early stages of training by allowing less noise and faster convergence. However, care must be taken to ensure that the privacy budget does not exceed the mobile device’s capabilities. Edge devices typically operate in distributed environments with heterogeneous computational resources. Implementing PPRD-FL on such devices requires balancing the privacy budget allocation across devices with varying capabilities. The method’s ability to dynamically allocate the privacy budget ensures that edge devices can perform local computations securely without compromising the global model’s accuracy. However, challenges remain in terms of the network bandwidth and latency during global aggregation, which could impact performance.
While PPRD-FL provides a robust privacy solution for federated learning, several limitations and open challenges remain. The method is well suited for moderate-sized datasets, but as the number of participants or the dataset dimensionality increases, the effectiveness of the randomized parameter selection strategy may diminish. There is a need for further optimization techniques that can efficiently handle large-scale data while maintaining the privacy guarantees. Although the dynamic adjustment of noise helps mitigate the privacy–utility trade-off, there is still a challenge in finding the optimal balance. The selection of parameters for noise perturbation and the adjustment of the privacy budget throughout training should be fine-tuned for specific applications to avoid compromising model accuracy.

7. Conclusions

In this paper, we propose a new FL approach that employs the dynamic LDP algorithm to provide robust privacy protection for the weight parameters of each participant, preventing adversaries from inferring participants’ private information through intermediate shared data. Specifically, we introduced the R-PSS strategy for parameter selection, which addresses the issue of parameter dimensionality dependence when integrating LDP and FL. The experimental results indicate that our scheme attains a higher accuracy with a smaller privacy budget compared to other approaches, effectively addressing privacy concerns in FL. While the proposed scheme can achieve high accuracy and stability in model training with limited training data, its applicability in large-scale data scenarios remains constrained. In future work, we plan to conduct more extensive experiments to evaluate the scalability and robustness of PPRD-FL in large-scale federated learning networks. Additionally, we aim to explore customized privacy protection mechanisms tailored to the privacy requirements of different participants, allowing for a more flexible balance between privacy and performance across participants in the federated learning system. Furthermore, we will extend the application of PPRD-FL to new domains, such as personalized recommendation systems and collaborative fraud detection, to explore its potential in these areas.

Author Contributions

Conceptualization, J.F.; methodology, J.F.; writing—original draft preparation, J.F.; writing—review and editing, R.G. and J.Z.; funding acquisition, R.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Fujian Provincial Science and Technology Project Guiding Project (2023H002) and the Quanzhou High-level Talent Innovation and Entrepreneurship Project (2022C00R).

Data Availability Statement

The data that support the findings of this study are openly available in the following public repositories: (1) MNIST dataset: available at http://yann.lecun.com/exdb/mnist/ (accessed on 2 December 2022). (2) Fashion-MNIST dataset: available at https://github.com/zalandoresearch/fashion-mnist (accessed on 20 May 2023). (3) CIFAR-10 dataset: available at https://www.cs.toronto.edu/~kriz/cifar.html (accessed on 20 May 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PPRD-FL    Privacy-preserving federated learning based on randomized parameter selection and dynamic local differential privacy
FL    Federated learning
D-LDP    Dynamic local differential privacy
R-PSS    Randomized parameter selection strategy

References

  1. Zhuang, Y.; Sun, X.; Li, Y.; Huai, J.; Hua, L.; Yang, X.; Cao, X.; Zhang, P.; Cao, Y.; Qi, L.; et al. Multi-sensor integrated navigation/positioning systems using data fusion: From analytics-based to learning-based approaches. Inf. Fusion 2023, 95, 62–90. [Google Scholar] [CrossRef]
  2. Liu, J.; Wu, G.; Luan, J.; Jiang, Z.; Liu, R.; Fan, X. HoLoCo: Holistic and local contrastive learning network for multi-exposure image fusion. Inf. Fusion 2023, 95, 237–249. [Google Scholar] [CrossRef]
  3. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A.y. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics, PMLR 2017, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. Available online: https://proceedings.mlr.press/v54/mcmahan17a?ref=https://githubhelp.com (accessed on 3 March 2023).
  4. Phong, L.T.; Aono, Y.; Hayashi, T.; Wang, L.; Moriai, H. Privacy-preserving deep learning: Revisited and enhanced. In Proceedings of the Applications and Techniques in Information Security: 8th International Conference, ATIS 2017, Auckland, New Zealand, 6–7 July 2017; Proceedings. Springer: Singapore, 2017; pp. 100–110. Available online: https://link.springer.com/chapter/10.1007/978-981-10-5421-1_9 (accessed on 15 October 2023).
  5. Moshawrab, M.; Adda, M.; Bouzouane, A.; Ibrahim, H.; Raad, A. Securing Federated Learning: Approaches, Mechanisms and Opportunities. Electronics 2024, 13, 3675. [Google Scholar] [CrossRef]
  6. Nasr, M.; Shokri, R.; Houmansadr, A. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP). IEEE, San Francisco, CA, USA, 19–23 May 2019; pp. 739–753. Available online: https://ieeexplore.ieee.org/abstract/document/8835245 (accessed on 2 March 2024).
  7. Sagar, R.; Jhaveri, R.; Borrego, C. Applications in security and evasions in machine learning: A survey. Electronics 2020, 9, 97. [Google Scholar] [CrossRef]
  8. Wei, W.; Liu, L.; Loper, M.; Chow, K.-H.; Gursoy, M.E.; Truex, S.; Wu, Y. A framework for evaluating gradient leakage attacks in federated learning. arXiv 2020, arXiv:2004.10397. [Google Scholar]
  9. Shokri, R.; Stronati, M.; Song, C.; Shmatikov, V. Membership inference attacks against machine learning models. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; pp. 3–18. Available online: https://ieeexplore.ieee.org/abstract/document/7958568 (accessed on 1 November 2023).
  10. Song, M.; Wang, Z.; Zhang, Z.; Song, Y.; Wang, Q.; Ren, J.; Qi, H. Analyzing user-level privacy attack against federated learning. IEEE J. Sel. Areas Commun. 2020, 38, 2430–2444. [Google Scholar] [CrossRef]
  11. Phong, L.T.; Aono, Y.; Hayashi, T.; Wang, L.; Moriai, S. Privacy-preserving deep learning via additively homomorphic encryption. IEEE Trans. Inf. Forensics Secur. 2017, 13, 1333–1345. [Google Scholar] [CrossRef]
  12. Yang, X.; Liu, Y.; Xie, J.; Hao, T. Improved Design and Application of Security Federation Algorithm. Electronics 2023, 12, 1375. [Google Scholar] [CrossRef]
  13. Chen, Z.; Zheng, H. SPM-FL: A Federated Learning Privacy-Protection Mechanism Based on Local Differential Privacy. Electronics 2024, 13, 4091. [Google Scholar] [CrossRef]
  14. Arachchige, P.C.M.; Bertok, P.; Khalil, I.; Liu, D.; Camtepe, S.; Atiquzzaman, M. Local differential privacy for deep learning. IEEE Internet Things J. 2019, 7, 5827–5842. [Google Scholar] [CrossRef]
  15. Truex, S.; Liu, L.; Chow, K.-H.; Gursoy, M.E.; Wei, W. LDP-Fed: Federated learning with local differential privacy. In Proceedings of the Third ACM International Workshop on Edge Systems, Analytics and Networking, Heraklion, Greece, 27 April 2020; pp. 61–66. Available online: https://dl.acm.org/doi/abs/10.1145/3378679.3394533 (accessed on 26 September 2024).
  16. Frisk, O.; Dormann, F.; Lillelund, C.M.; Pedersen, C.F. Super-convergence and differential privacy: Training faster with better privacy guarantees. In Proceedings of the 2021 55th Annual Conference on Information Sciences and Systems (CISS). IEEE, Baltimore, MD, USA, 24–26 March 2021; pp. 1–6. Available online: https://ieeexplore.ieee.org/abstract/document/9400274 (accessed on 18 November 2023).
  17. Cheng, A.; Wang, P.; Zhang, X.S.; Cheng, J. Differentially private federated learning with local regularization and sparsification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022, New Orleans, LA, USA, 18–24 June 2022; pp. 10122–10131. Available online: https://openaccess.thecvf.com/content/CVPR2022/html/Cheng_Differentially_Private_Federated_Learning_With_Local_Regularization_and_Sparsification_CVPR_2022_paper.html (accessed on 16 April 2024).
  18. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 1175–1191. Available online: https://dl.acm.org/doi/abs/10.1145/3133956.3133982 (accessed on 26 February 2023).
  19. Zhang, C.; Li, S.; Xia, J.; Wang, W. {BatchCrypt}: Efficient homomorphic encryption for {Cross-Silo} federated learning. In Proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC 20), Virtual, 15–17 July 2020; pp. 493–506. Available online: https://www.usenix.org/conference/atc20/presentation/zhang-chengliang (accessed on 15 August 2023).
  20. Geyer, R.C.; Klein, T.; Nabi, M. Differentially private federated learning: A client level perspective. arXiv 2017, arXiv:1712.07557. [Google Scholar]
  21. Wang, Y.; Zhang, A.; Wu, S.; Yu, S. VOSA: Verifiable and oblivious secure aggregation for privacy-preserving federated learning. IEEE Trans. Dependable Secur. Comput. 2022, 20, 3601–3616. [Google Scholar] [CrossRef]
  22. Xu, G.; Li, H.; Zhang, Y.; Xu, S.; Ning, J.; Deng, R. Privacy-preserving federated deep learning with irregular users. IEEE Trans. Dependable Secur. Comput. 2020, 19, 1364–1381. [Google Scholar] [CrossRef]
  23. Chen, Y.; Wang, B.; Zhang, Z. PDLHR: Privacy-preserving deep learning model with homomorphic re-encryption in robot system. IEEE Syst. J. 2021, 16, 2032–2043. [Google Scholar] [CrossRef]
  24. Guo, S.; Yang, J.; Long, S.; Wang, X.; Liu, G. Federated learning with differential privacy via fast Fourier transform for tighter-efficient combining. Sci. Rep. 2024, 14, 26770. [Google Scholar]
  25. Shen, X.; Liu, Y.; Zhang, Z. Performance-enhanced federated learning with differential privacy for internet of things. IEEE Internet Things J. 2022, 9, 24079–24094. [Google Scholar] [CrossRef]
  26. Zhao, Y.; Zhao, J.; Yang, M.; Wang, T.; Wang, N.; Lyu, L.; Niyato, D.; Lam, K.-Y. Local differential privacy-based federated learning for internet of things. IEEE Internet Things J. 2020, 8, 8836–8853. [Google Scholar] [CrossRef]
  27. Liu, R.; Cao, Y.; Yoshikawa, M.; Chen, H. Fedsel: Federated sgd under local differential privacy with top-k dimension selection. In Proceedings of the Database Systems for Advanced Applications: 25th International Conference, DASFAA 2020, Jeju, Republic of Korea, 24–27 September 2020; Proceedings, Part I 25. Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 485–501. Available online: https://link.springer.com/chapter/10.1007/978-3-030-59410-7_33 (accessed on 10 July 2024).
  28. Chen, Q.; Wang, H.; Wang, Z.; Chen, J.; Yan, H.; Lin, X. Lldp: A layer-wise local differential privacy in federated learning. In Proceedings of the 2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), Wuhan, China, 9–11 December 2022; pp. 631–637. Available online: https://ieeexplore.ieee.org/abstract/document/10063630 (accessed on 22 July 2023).
  29. Wei, K.; Li, J.; Ding, M.; Ma, C.; Yang, H.H.; Farokhi, F.; Jin, S.; Quek, T.Q.S.; Poor, H.V. Federated learning with differential privacy: Algorithms and performance analysis. IEEE Trans. Inf. Forensics Secur. 2020, 15, 3454–3469. [Google Scholar] [CrossRef]
  30. Shin, H.; Kim, S.; Shin, J.; Xiao, X. Privacy enhanced matrix factorization for recommendation with local differential privacy. IEEE Trans. Knowl. Data Eng. 2018, 30, 1770–1782. [Google Scholar] [CrossRef]
  31. Sun, L.; Lyu, L. Federated model distillation with noise-free differential privacy. arXiv 2020, arXiv:2009.05537. [Google Scholar]
  32. Sun, L.; Qian, J.; Chen, X. LDP-FL: Practical private aggregation in federated learning with local differential privacy. arXiv 2020, arXiv:2007.15789. [Google Scholar]
  33. Song, S.; Du, S.; Song, Y.; Zhu, Y. Communication-Efficient and Private Federated Learning with Adaptive Sparsity-Based Pruning on Edge Computing. Electronics 2024, 13, 3435. [Google Scholar] [CrossRef]
  34. Wang, B.; Chen, Y.; Jiang, H.; Zhao, Z. Ppefl: Privacy-preserving edge federated learning with local differential privacy. IEEE Internet Things J. 2023, 10, 15488–15500. [Google Scholar] [CrossRef]
  35. Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and open problems in federated learning. Found. Trends® Mach. Learn. 2021, 14, 1–210. [Google Scholar] [CrossRef]
  36. Dwork, C. Differential privacy: A survey of results. In Proceedings of the International Conference on Theory and Applications of Models of Computation, Xi’an, China, 25–29 April 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–19. Available online: https://link.springer.com/chapter/10.1007/978-3-540-79228-4_1 (accessed on 30 June 2023).
  37. Bhowmick, A.; Duchi, J.; Freudiger, J.; Kapoor, G.; Rogers, R. Protection against reconstruction and its applications in private federated learning. arXiv 2018, arXiv:1812.00984. [Google Scholar]
  38. Dwork, C.; Roth, A. The algorithmic foundations of differential privacy. Found. Trends® Theor. Comput. Sci. 2014, 9, 211–407. [Google Scholar] [CrossRef]
  39. Kingma, D.P. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  40. Rashid, T. Make Your Own Neural Network. 2016. Available online: https://github.com/makeyourownneuralnetwork/makeyourownneuralnetwork?tab=readme-ov-file (accessed on 22 June 2023).
  41. Ross, S.M. Introduction to Probability and Statistics for Engineers and Scientists, 4th ed.; Academic Press: San Diego, CA, USA, 2009; 141p. [Google Scholar]
Figure 1. Privacy risks in federated learning.
Figure 2. Architecture of the proposed privacy-preserving federated learning approach based on randomized parameter selection and dynamic local differential privacy (PPRD-FL).
Figure 3. Impact of R-PSS and D-LDP on model accuracy and loss over global rounds: (a) accuracy and (b) loss.
Figure 4. Model accuracy and loss under different privacy levels ε on the MNIST dataset: (a) accuracy and (b) loss.
Figure 5. Model accuracy and loss under different privacy levels ε on the Fashion-MNIST dataset: (a) accuracy and (b) loss.
Figure 6. Model accuracy and loss under different privacy levels ε on the CIFAR-10 dataset: (a) accuracy and (b) loss.
Figure 7. Model accuracy under different schemes.
Table 1. Relevant symbols and parameters.
Symbol    Definition
N    The number of participants
K_i    The i-th participant
D, D′    Neighboring datasets differing in at most one element
D_i    The dataset of the i-th participant
F_i(w_i)    The local loss function of the i-th participant
F(w)    The global loss function
w*    The optimal global model parameters minimizing F(w)
w^(t)    The global model parameters at the t-th iteration
w_i^(t)    The local model parameters of the i-th participant
ε    Privacy budget
δ    Maximum allowable probability of privacy leakage
Δf    Sensitivity
C    Parameter clipping threshold

