Article

Enhancing Privacy and Communication Efficiency in Federated Learning Through Selective Low-Rank Adaptation and Differential Privacy

Graduate School of System Informatics, Kobe University, Kobe 657-8501, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(24), 13102; https://doi.org/10.3390/app152413102
Submission received: 4 November 2025 / Revised: 5 December 2025 / Accepted: 10 December 2025 / Published: 12 December 2025

Abstract

Federated learning (FL) enables collaborative model training without centralizing raw data, but its application to large-scale vision models remains constrained by high communication cost, data heterogeneity, and privacy risks. Furthermore, in real-world applications such as autonomous driving and healthcare, model updates can inadvertently expose sensitive information even without direct data sharing. This highlights the need for frameworks that balance privacy, efficiency, and accuracy. A common approach to limiting such exposure is to encrypt the shared updates with additional encoding, but such cryptographic protection significantly increases communication cost. In this paper, we propose Federated Share-A Low-Rank Adaptation with Differential Privacy (FedSA-LoRA-DP), a parameter-efficient and privacy-preserving federated learning framework. The framework combines selective aggregation of low-rank parameters with Differential Privacy (DP), ensuring that only lightweight components are shared while formally bounding the influence of individual data. Since DP merely perturbs the numeric values of existing parameters without altering their dimensionality or structure, it does not increase communication cost. This design allows FedSA-LoRA-DP to provide strong privacy guarantees while maintaining communication efficiency and model accuracy. Experiments on the CIFAR-100, MNIST, and SVHN datasets demonstrate that the proposed framework achieves accuracy comparable to non-private counterparts, even under heterogeneous non-independent and identically distributed (non-i.i.d.) data and partial client participation. These results demonstrate that integrating differential privacy into low-rank adaptation enables privacy-preserving and communication-efficient federated learning without sacrificing model performance across heterogeneous environments.

1. Introduction

Modern applications in fields such as autonomous driving, healthcare, and smart-city infrastructure depend on vast amounts of data that are spread across different devices and organizations. In the case of autonomous vehicles, for instance, every car constantly records sensory information that could, if jointly utilized, improve perception and driving-assistance models and thus contribute to safer roads [1,2,3,4]. However, centralizing such data is often infeasible due to concerns about privacy, data protection laws, and practical limitations.
Federated Learning (FL) enables distributed training without centralizing raw data [5,6,7]. Each client trains a local model on its private dataset and transmits only model updates to a central server, which aggregates them to form a global model. Although this approach mitigates privacy risks, traditional algorithms such as Federated Averaging (FedAvg) still require exchanging the information of the entire model at every communication round, leading to significant communication overhead [4,5].
To address these challenges, Parameter-Efficient Fine-Tuning (PEFT) has been introduced as a means to reduce communication and computation costs [8,9,10]. Low-Rank Adaptation (LoRA) inserts small trainable matrices A and B into pretrained layers, enabling fine-tuning with fewer parameters [11].
Although FL does not exchange raw data during the update process, it remains vulnerable to privacy leakage from shared updates. Gradient inversion attacks [12,13,14] and membership inference attacks [15,16] have shown that private information can be reconstructed from gradients. Consequently, it is crucial to protect not only the data but also the shared parameters.
Differential Privacy (DP) provides a formal privacy guarantee by injecting calibrated noise into gradients, thus bounding the influence of any individual data sample [17,18,19,20]. Recent DP-based FL frameworks, adaptive local mechanisms, and feedback-controlled approaches aim to achieve a balance between privacy protection and model accuracy [21,22,23]. Although these approaches improve privacy protection, they often add computational complexity or reduce model accuracy, especially under strict privacy budgets. Furthermore, data heterogeneity in non-i.i.d. settings causes the global model to converge toward an average representation that fits no client well, resulting in degraded overall accuracy [24,25].
In this paper, we propose FedSA-LoRA-DP, a novel FL framework that integrates DP and leverages LoRA for efficient model adaptation. In this framework, we share only the A matrices between clients and the server and apply DP to them to ensure secure aggregation. This design keeps the amount of transmitted data unchanged even when differential privacy noise is added, as the noise only perturbs the numeric values of the A matrices. Through experiments conducted under various conditions, we confirm that the proposed framework maintains high model accuracy even in a non-independent and identically distributed (non-i.i.d.) environment, demonstrating its robustness and practical effectiveness.
We summarize our contributions as follows:
  • We design a selective aggregation strategy that aggregates only the LoRA $A_i$ matrices while keeping the $B_i$ matrices local to each client, achieving both communication efficiency and personalization.
  • We incorporate DP into the shared A matrices, providing formal privacy guarantees without increasing communication overhead.
  • We demonstrate through experiments on multiple datasets (CIFAR-100, MNIST, and SVHN) that FedSA-LoRA-DP achieves comparable accuracy to non-private baselines even under heterogeneous and partial participation conditions.
This paper is organized as follows. Section 2 reviews related work on PEFT, FL, and DP. Section 3 introduces our proposed method, which integrates selective aggregation of LoRA parameters with DP. Section 4 presents the experimental setup and performance evaluation under various conditions. Section 5 discusses the implications of the results, including the effects of data heterogeneity, LoRA rank, dataset characteristics, and model architecture. Finally, Section 6 concludes the paper.

2. Related Work

2.1. Federated Learning and Communication Efficiency

To address high communication cost and client heterogeneity, several strategies have been developed to reduce the amount of data exchanged while maintaining model accuracy. Sattler et al. [4] proposed sparse update compression to reduce bandwidth usage. Li et al. [26] proposed FedProx, which adds a regularization term to stabilize optimization under non-i.i.d. conditions, while Wang et al. [27] introduced FedNova, in which client updates are normalized to mitigate the impact of heterogeneous local training. Other communication-efficient approaches leverage gradient quantization and periodic aggregation to reduce transmission overhead and maintain scalability in federated systems [28,29,30].
While these methods improve scalability, they still transmit large parameter sets and may underperform in highly heterogeneous settings.

2.2. Parameter-Efficient Fine-Tuning

PEFT has emerged as a complementary solution to reduce communication and computational load by limiting the number of trainable parameters. Instead of updating all parameters, methods such as Adapter-Tuning [8], Prefix-Tuning [9], and Prompt-Tuning [10] learn lightweight modules that preserve most of the pretrained model’s knowledge. LoRA [11] has attracted particular attention by introducing trainable low-rank matrices A and B into existing layers, achieving both parameter and communication efficiency without degrading performance.
Aggregating the LoRA matrices A and B in an FL setting presents significant challenges. If the server directly aggregates the A and B matrices and broadcasts them to all clients, aggregation errors occur. Specifically, in an FL task with K clients, the model update of each client can be represented by the two low-rank matrices $A_i$ and $B_i$ introduced by LoRA. After aggregation and broadcasting on the server, the model update for each client is given as follows:
$$\frac{1}{K}(B_1 + B_2 + \cdots + B_K) \cdot \frac{1}{K}(A_1 + A_2 + \cdots + A_K),$$
which differs from the proper model update $\frac{1}{K}(B_1 A_1 + B_2 A_2 + \cdots + B_K A_K)$.
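This mismatch is easy to verify numerically. The toy NumPy sketch below, with hypothetical dimensions of our own choosing, shows that the product of the averaged factors differs from the average of the per-client products:

```python
# Averaging A and B separately does not reproduce the average of B_i A_i.
import numpy as np

rng = np.random.default_rng(0)
K, m, r, n = 3, 6, 2, 5   # clients, output dim, LoRA rank, input dim

A = [rng.normal(size=(r, n)) for _ in range(K)]
B = [rng.normal(size=(m, r)) for _ in range(K)]

naive = np.mean(B, axis=0) @ np.mean(A, axis=0)            # (1/K Σ B_i)(1/K Σ A_i)
proper = np.mean([B[i] @ A[i] for i in range(K)], axis=0)  # 1/K Σ B_i A_i

print(np.linalg.norm(naive - proper))  # nonzero: the two updates differ
```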
To address this challenge, several methods have been explored [31,32,33]. For instance, Sun et al. [32] proposed a method that updates only B and freezes A, so the local update on each client is $\frac{1}{K}(B_1 + B_2 + \cdots + B_K) A_0$, where $A_0$ denotes the initialized and fixed weights. Guo et al. [31] proposed a method that aggregates only the A matrices across clients while keeping the B matrices local, so the local update on each client i is $B_i \cdot \frac{1}{K}(A_1 + A_2 + \cdots + A_K)$.
Although these frameworks significantly reduce communication overhead, most have yet to achieve a well-balanced integration of model accuracy, privacy preservation, and efficiency. Ensuring consistent performance while maintaining strong privacy guarantees remains an open challenge.

2.3. Differential Privacy in LoRA-Based Federated Learning

Recent advances have explored combining DP with PEFT in FL to enhance privacy and communication efficiency. Liu et al. [34] proposed a method that integrates LoRA and DP to fine-tune large language models in a distributed manner. While this approach effectively reduces communication cost and provides formal privacy guarantees, it aggregates both LoRA matrices A and B across clients, thereby limiting client-level personalization and introducing potential bias when data distributions are non-i.i.d.
Sun et al. [32] proposed a method that freezes the shared A matrix and updates only B to simplify communication and avoid aggregation inconsistency. Although this design improves communication efficiency, it constrains model expressivity since the shared representation A remains fixed throughout training.
Overall, while these DP-integrated LoRA frameworks have made progress in improving privacy and communication efficiency in federated settings, most still face limitations in balancing personalization, model expressivity, and scalability under heterogeneous data conditions. A more comprehensive approach that jointly achieves privacy preservation, communication efficiency, and robustness to client diversity remains an open challenge.

2.4. Privacy Attacks and Defense Mechanisms

Although FL reduces direct data sharing, it does not fully eliminate privacy risks. Several studies have shown that sensitive information can be extracted from shared model updates. For example, gradient inversion attacks [12,13,14] reconstruct input samples by exploiting gradients, and membership inference attacks [15,16] reveal whether specific data points were included in training. These studies argue that even without central data aggregation, model parameters themselves can act as unintended information channels.
To mitigate such risks, several defense strategies have been proposed. For instance, blockchain-assisted FL has been investigated as a way to ensure transparency and accountability across decentralized systems [35,36]. Nevertheless, most of these methods either introduce high computational overhead or compromise model utility when applied to large-scale models.
DP provides a rigorous framework to quantify privacy guarantees by bounding the influence of any individual data sample [17,18,19]. Within FL, DP has been applied to protect model updates from inference and reconstruction attacks. Since DP simply perturbs the numeric values of existing parameters without altering their dimensionality or structure, it does not increase communication cost, in contrast to cryptographic approaches that require additional encoded data [37,38].
Recent research has also provided empirical evidence that the theoretical guarantees of differential privacy can effectively mitigate practical privacy attacks. For example, Mironov [19] introduced Rényi Differential Privacy (RDP), which provides a tighter accounting of cumulative privacy loss, while Jayaraman and Evans [39] empirically demonstrated that DP significantly reduces the success rate of membership inference and data reconstruction attacks in machine learning models. Similarly, Dong et al. [40] established refined analytical bounds for the Gaussian mechanism, confirming its robustness against adversarial inference under realistic settings. These studies collectively support the use of DP as a principled defense framework that provides both theoretical and empirical privacy protection.
Despite extensive research efforts on communication-efficient, parameter-efficient, and privacy-preserving federated learning, an effective framework that simultaneously achieves high model accuracy, strong privacy protection, and low communication overhead has yet to be established.

3. Proposed Method

We propose FedSA-LoRA-DP, a federated learning framework that integrates selective aggregation and LoRA with DP to achieve communication efficiency, privacy preservation, and personalization simultaneously. An overview of the framework is shown in Figure 1, and its main steps are summarized as follows.
Step 1 (Broadcast): The server distributes the aggregated A matrices to all clients, which combine them with their frozen pretrained weights W and local $B_i$.
Step 2 (Local training): Each client i performs local training on its private dataset $D_i$ to update $A_i$ and $B_i$.
Step 3 (Adding noise): After training, differential privacy noise is applied to the gradients of $A_i$ through gradient clipping and Gaussian perturbation, altering only their numeric values without changing the communication size.
Step 4 (Upload): The clients upload the differentially private $A_i$ matrices to the server, while keeping $B_i$ local to preserve personalization.
Step 5 (Aggregation): Finally, the server aggregates the received $A_i$ matrices using a weighted average based on the local sample sizes, and the updated global A is broadcast to all clients for the next round.
Specifically, FedSA-LoRA-DP exploits the structural asymmetry between LoRA’s low-rank matrices A and B by aggregating the $A_i$ matrices into a shared A matrix on the server while keeping the $B_i$ matrices local to each client. The overall training and aggregation procedure of the proposed framework is summarized in Algorithm 1.
Let $W \in \mathbb{R}^{m \times n}$ be a frozen pretrained weight matrix. In LoRA, a low-rank update $\Delta W$ is parameterized as $\Delta W = BA$, where $B \in \mathbb{R}^{m \times r}$ and $A \in \mathbb{R}^{r \times n}$. On each client i, the pretrained weight matrix $W_i$ is augmented with a LoRA module parameterized by low-rank matrices $A_i$ and $B_i$. The model weights are updated as
$$W_i^{t+1} = W_i^t + B_i^t A_i^t,$$
where t denotes the learning round. Only $A_i$ and $B_i$ are trainable, leading to a significant reduction in the number of trainable and transmitted parameters compared to full fine-tuning. With LoRA, clients can collaboratively learn global knowledge via $A_i$ while retaining personalization through the local $B_i$ [41].
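For concreteness, a LoRA-augmented layer along these lines could be implemented in PyTorch as follows. This is a minimal sketch under our own naming assumptions; the paper attaches LoRA to 1 × 1 convolutions, whereas a linear layer is shown here for brevity, and the randomly initialized W stands in for a loaded pretrained weight:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus trainable low-rank update B @ A."""
    def __init__(self, in_features: int, out_features: int, r: int = 8):
        super().__init__()
        W = torch.empty(out_features, in_features)
        nn.init.kaiming_uniform_(W)               # placeholder for pretrained weights
        self.register_buffer("W", W)              # buffer, not a parameter: frozen
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # shared, DP-protected
        self.B = nn.Parameter(torch.zeros(out_features, r))        # kept local per client

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight: W + B A, matching the update rule above
        return x @ (self.W + self.B @ self.A).T
```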
Algorithm 1 FedSA-LoRA-DP
Require: Pretrained model W, LoRA rank r, learning rate η, clipping norm C, noise multiplier σ, rounds T, clients K
Ensure: Personalized models for all clients
  • Server: Insert LoRA modules {A, B} into W, freeze the backbone, broadcast $A^0$
  • for t = 1 to T do
  •     for each client i = 1, …, K in parallel do
  •         Receive $A^{t-1}$ and update:
  •             Apply DP to $A_i^t$: $\tilde{A}_i^t \leftarrow A_i^t - \eta\, \tilde{g}_i^t$, where $\tilde{g}_i^t$ is obtained by clipping and adding Gaussian noise
  •             Update $B_i^t$
  •         Upload $\tilde{A}_i^t$ and $n_i$ to the server
  •     end for
  •     Server: Aggregate $A^t \leftarrow \sum_{i=1}^{K} \frac{n_i}{N}\, \tilde{A}_i^t$
  •     Broadcast $A^t$ to all clients
  •     Privacy accounting: update $\varepsilon_t$ using RDP
  • end for
For each client i, let $\mathcal{B}_i = \{x_j\}_{j=1}^{B}$ be a minibatch and $g_j = \nabla_A \ell(x_j)$ the gradient of the loss with respect to A for sample $x_j$. We apply DP to the shared parameters A as follows. Each per-sample gradient is clipped:
$$\bar{g}_j = \frac{g_j}{\max\left(1, \frac{\|g_j\|_2}{C}\right)},$$
and then perturbed with Gaussian noise:
$$\tilde{g}_i = \frac{1}{|\mathcal{B}_i|}\left(\sum_{j=1}^{B} \bar{g}_j + \mathcal{N}(0, \sigma^2 C^2 I)\right),$$
where C is the clipping norm, σ is the noise multiplier, and I is the identity matrix. The $A_i$ parameters are then updated by
$$\tilde{A}_i = A_i - \eta\, \tilde{g}_i,$$
where η is the learning rate. In contrast, $B_i$ is updated without noise. Following the privacy composition framework of Abadi et al. [18], the noise multiplier σ is computed from the specified privacy budget (ε, δ), the total number of iterations T, and the sampling rate, which is determined by the batch size relative to the total number of training samples N.
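A hand-rolled sketch of this clipping-and-perturbation step is given below. The function name and the per-sample gradient layout are illustrative assumptions; the actual implementation in this work relies on Opacus, as described later in this section:

```python
import torch

def dp_update_A(A: torch.Tensor, per_sample_grads: torch.Tensor,
                lr: float, C: float, sigma: float) -> torch.Tensor:
    """per_sample_grads has shape (batch, *A.shape): one gradient per sample."""
    batch = per_sample_grads.shape[0]
    flat = per_sample_grads.reshape(batch, -1)
    # Clip: g_j / max(1, ||g_j||_2 / C)
    norms = flat.norm(dim=1, keepdim=True)
    clipped = flat / torch.clamp(norms / C, min=1.0)
    # Perturb the summed gradient with N(0, sigma^2 C^2 I), then average
    noise = torch.normal(0.0, sigma * C, size=(flat.shape[1],))
    g_tilde = (clipped.sum(dim=0) + noise) / batch
    # Noisy SGD step on A: A_i - lr * g~_i
    return A - lr * g_tilde.reshape(A.shape)
```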
Consider a setting with K clients, where each client i holds a local dataset $D_i$ containing $n_i$ samples; we set $N = \sum_{i=1}^{K} n_i$. After local updates, only the updated $\tilde{A}_i$ is uploaded to the server. The server aggregates the uploads using a weighted average:
$$A = \sum_{i=1}^{K} \frac{n_i}{N}\, \tilde{A}_i.$$
The aggregated A is then broadcast to all clients for the next round.
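On the server side, this weighted aggregation reduces to a few lines; the helper below is a sketch with assumed variable names:

```python
from typing import List
import torch

def aggregate_A(client_As: List[torch.Tensor],
                client_sizes: List[int]) -> torch.Tensor:
    """Weighted average of the uploaded, DP-protected A matrices."""
    N = float(sum(client_sizes))
    return sum((n / N) * A for A, n in zip(client_As, client_sizes))
```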
We use Opacus, a PyTorch library for differential privacy, to implement DP and track the cumulative privacy loss during training [20]. Specifically, we employ the built-in RDP accountant to compute the privacy spending (ε, δ) across multiple communication rounds, following the composition rules of RDP [19].
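As a rough illustration of this setup, the snippet below wires a stand-in model into Opacus’s PrivacyEngine (v1.x API); the model, data, and training loop are placeholders rather than the paper’s actual pipeline. In the actual framework, only the LoRA A parameters would have requires_grad=True, so the engine’s per-sample clipping and noise touch nothing else:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Stand-in model and data for illustration only
model = nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
loader = DataLoader(TensorDataset(torch.randn(256, 16),
                                  torch.randint(0, 4, (256,))), batch_size=64)

engine = PrivacyEngine(accountant="rdp")
model, optimizer, loader = engine.make_private(
    module=model, optimizer=optimizer, data_loader=loader,
    noise_multiplier=1.4,   # sigma for the epsilon = 8 setting
    max_grad_norm=0.5,      # clipping norm C
)

criterion = nn.CrossEntropyLoss()
for x, y in loader:         # one local epoch
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()

print(engine.get_epsilon(delta=1e-5))  # cumulative privacy spend so far
```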

4. Experiment and Evaluation

4.1. Experiment Setup

Experiments were conducted on the CIFAR-100 dataset [42], using an input resolution of 224 × 224 and following the standard preprocessing pipeline for BiT models [43]. Sample images from the dataset are shown in Figure 2. The dataset was partitioned into 10 clients to simulate a federated learning environment, and both i.i.d. and non-i.i.d. settings were examined. In the non-i.i.d. scenario, data were distributed across clients according to a Dirichlet distribution with α = 0.5, introducing statistical heterogeneity to better reflect realistic federated learning conditions. A local batch size of 64 was used for client-side training.
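A common way to realize such a Dirichlet split is sketched below; this helper is a hypothetical illustration rather than the exact partitioning code used in the experiments:

```python
import numpy as np

def dirichlet_partition(labels: np.ndarray, num_clients: int = 10,
                        alpha: float = 0.5, seed: int = 0):
    """Split sample indices across clients with Dirichlet(alpha) class skew."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Per-class proportion of samples assigned to each client
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for i, part in enumerate(np.split(idx, cuts)):
            client_indices[i].extend(part.tolist())
    return client_indices
```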
For the model architecture, BiT-S R50 × 1 and BiT-S R101 × 1, both pretrained on ImageNet-21k, were employed as backbones. In all experiments, the backbone networks were kept frozen, and LoRA modules were inserted into all 1 × 1 convolution layers to enable parameter-efficient fine-tuning. Federated training was performed for 100 communication rounds. During each round, only the LoRA $A_i$ matrices were aggregated on the server, while the $B_i$ matrices remained local to each client. Client-side training was conducted for three epochs per round using Stochastic Gradient Descent (SGD) with a cosine learning rate schedule.
For experiments involving privacy preservation, differential privacy was applied exclusively to the shared LoRA A matrices, while local parameters were left unperturbed [18]. The main hyperparameters used in the federated learning setup are summarized in Table 1.
To ensure accurate privacy accounting, we adopted RDP [19] as implemented in Opacus [20]. RDP provides a tighter composition bound than standard (ε, δ)-DP by tracking the cumulative privacy loss across multiple training iterations. In our setup, each client performed per-sample gradient clipping with norm C = 0.5 and added Gaussian noise with a multiplier σ ∈ {1.4, 3.75} to the gradients of the LoRA $A_i$ parameters during local training. The sampling rate is determined by the mini-batch size (B = 64) relative to the local dataset size, and the cumulative privacy loss ε was computed from the RDP accountant after T = 100 communication rounds at a fixed δ = 10^-5. We report the final privacy budgets ε = 8 and ε = 2, corresponding to the noise multipliers σ = 1.4 and σ = 3.75, respectively.
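The round-level accounting described above can be reproduced, approximately, with Opacus’s RDP accountant as in the sketch below; the local dataset size of 5000 samples per client (an even 10-client CIFAR-100 split) is an assumption for illustration:

```python
from opacus.accountants import RDPAccountant

accountant = RDPAccountant()
sample_rate = 64 / 5000                 # batch size / assumed local dataset size
steps_per_round = 3 * (5000 // 64)      # 3 local epochs per communication round

for _ in range(100 * steps_per_round):  # T = 100 rounds
    accountant.step(noise_multiplier=3.75, sample_rate=sample_rate)

# Cumulative privacy spend; the paper reports eps = 2 for sigma = 3.75
print(accountant.get_epsilon(delta=1e-5))
```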

4.2. Analysis

This section provides an in-depth analysis of the proposed FedSA-LoRA-DP framework by examining how different factors, such as data heterogeneity, client participation rate, and LoRA rank, affect overall performance. In real-world federated learning scenarios, it is unlikely that all participating clients share the same data distribution, and the same clients rarely participate in every round. For this reason, we compared experiments in i.i.d. and non-i.i.d. environments, as well as experiments in which the client participation rate was varied. Furthermore, we evaluated whether the proposed method can maintain high accuracy despite a lighter parameter configuration, to confirm its applicability in resource-constrained federated environments. We conducted experiments with LoRA ranks set to r = 8 and r = 4, and examined the trade-off between model size reduction and accuracy.
Unless otherwise specified, the parameters listed in Table 2 are used throughout the experiments. Since the primary source of client diversity in our setting arises from differences in dataset size and class balance, clients with larger and more balanced data naturally achieve higher accuracy. Therefore, instead of reporting individual client metrics, we summarize results at the global level to focus on the overall performance trends. All reported results in this section denote the mean and standard deviation of test accuracies across all clients.

4.2.1. Effect of Data Heterogeneity

To analyze the impact of data heterogeneity, we conducted experiments under both i.i.d. and non-i.i.d. (Dirichlet α = 0.5) data distributions and compared the resulting model performance. The results of this analysis are summarized in Table 3.
When comparing the i.i.d. and non-i.i.d. settings for the BiT-S R50 × 1 model, FedSA-LoRA exhibited a performance gap of 18.87 points, whereas FedSA-LoRA-DP showed gaps of 18.75 points and 18.73 points for ε = 8 and ε = 2, respectively. Similarly, for the BiT-S R101 × 1 model, the accuracy gap between the i.i.d. and non-i.i.d. settings was 18.99 points for FedSA-LoRA, compared to 18.79 points for both ε = 8 and ε = 2 in FedSA-LoRA-DP. Furthermore, when comparing FedSA-LoRA and FedSA-LoRA-DP, the accuracy difference was only 0.40–0.41 points for the R50 × 1 model and 0.44 points for the R101 × 1 model.
These results demonstrate that the proposed FedSA-LoRA-DP framework maintains stable learning performance even under data heterogeneity. The minimal performance difference between FedSA-LoRA and FedSA-LoRA-DP indicates that the introduction of differential privacy does not significantly compromise model accuracy. A detailed discussion of the limitations under more extreme non-i.i.d. conditions is provided in Section 5.1.
As shown in Figure 3, all methods exhibit similar convergence behavior during training, indicating that the introduction of differential privacy does not hinder the learning dynamics or delay convergence. The performance difference appears primarily in the final accuracy, suggesting that DP noise affects only the fine-tuning stage rather than the overall optimization trajectory. Furthermore, when comparing ε = 8 and ε = 2 , there is almost no observable difference in the learning dynamics, indicating that the injected DP noise does not affect convergence stability. This indicates that the injected DP noise is sufficiently small compared to the model’s inherent stochasticity, resulting in nearly identical convergence behavior and learning stability across different privacy levels.

4.2.2. Effect of Client Participation Rate

To verify whether the model can maintain high accuracy even when not all clients participate in every training round, we investigate the framework’s robustness to partial client participation by comparing full participation (f = 1.0), where all clients join every round, and partial participation (f = 0.3), where 30% of clients are randomly selected in each round. To maintain comparable privacy levels across the two settings, the noise multiplier σ was set to 1.1 for ε = 8 and 3.0 for ε = 2. The results are summarized in Table 4.
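Per-round client sampling under partial participation can be as simple as the following sketch (variable names are illustrative):

```python
import random

clients = list(range(10))   # client ids
f = 0.3                     # participation fraction
participants = random.sample(clients, k=max(1, round(f * len(clients))))  # 3 of 10
```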
When comparing the client participation ratios f = 1.0 and f = 0.3 for the BiT-S R50 × 1 model, FedSA-LoRA exhibited an accuracy difference of 0.50 points, whereas FedSA-LoRA-DP showed gaps of 0.42 points and 0.40 points for ε = 8 and ε = 2, respectively. Similarly, for the BiT-S R101 × 1 model, the accuracy gap between the two participation settings was 0.55 points for FedSA-LoRA, 0.54 points for FedSA-LoRA-DP (ε = 8), and 0.53 points for FedSA-LoRA-DP (ε = 2).
The largest gap between FedSA-LoRA and FedSA-LoRA-DP was observed for the BiT-S R101 × 1 model with ε = 8, where the accuracy difference reached 0.43 points. These results suggest that FedSA-LoRA-DP maintains accuracy comparable to FedSA-LoRA even when the number of participating clients is reduced, demonstrating robustness against partial participation and communication variability in federated environments.

4.2.3. Effect of LoRA Rank

To investigate the model’s ability to maintain high accuracy while being trained with a smaller number of parameters, we analyze the effect of the LoRA rank by comparing configurations with r = 8 and r = 4 . The results are summarized in Table 5.
When comparing LoRA ranks r = 8 and r = 4 for the BiT-S R50 × 1 model, FedSA-LoRA showed a performance difference of 0.19 points, while FedSA-LoRA-DP reported gaps of 0.13 points for both ε = 8 and ε = 2. For the BiT-S R101 × 1 model, the accuracy differences were 0.05 points for FedSA-LoRA and only 0.02 and 0.01 points for FedSA-LoRA-DP (ε = 8 and ε = 2, respectively). Furthermore, when comparing FedSA-LoRA and FedSA-LoRA-DP, the accuracy gap was 0.40–0.44 points for rank r = 8, whereas it decreased to 0.34–0.41 points for rank r = 4.
The smallest accuracy gap was observed for the BiT-S R50 × 1 model with r = 4 , indicating that the proposed method maintains high accuracy even under stronger parameter compression. These results demonstrate that the proposed framework is robust to changes in LoRA rank, and that privacy preservation can be achieved with negligible loss in accuracy. A more detailed discussion on the relationship between LoRA rank and noise robustness is provided in Section 5.2.

4.3. Generalizability Across Datasets

The goal of this analysis is to evaluate the generalizability of the proposed FedSA-LoRA-DP framework across datasets of varying complexity and domain characteristics. We employed three datasets, CIFAR-100, MNIST [44], and SVHN [45], which differ in image dimensionality, visual diversity, and class structure. Sample images from MNIST and SVHN are shown in Figure 4.
All experiments in this subsection were conducted under the i.i.d. setting with a LoRA rank of r = 8 and client participation f = 1.0, to ensure consistent comparison across datasets. The resulting test accuracies are shown in Figure 5a,b, corresponding to the BiT-S R50 × 1 and R101 × 1 architectures, respectively.
For the BiT-S R50 × 1 model, FedSA-LoRA achieved 74.47% accuracy on CIFAR-100, while FedSA-LoRA-DP achieved 74.07% for ε = 8 and 74.06% for ε = 2, resulting in only a 0.4-point difference. For MNIST, FedSA-LoRA achieved 95.38%, and FedSA-LoRA-DP achieved 93.91% for both ε = 8 and ε = 2, corresponding to a 1.47-point reduction. For SVHN, FedSA-LoRA achieved 52.63%, and FedSA-LoRA-DP achieved 50.24% for ε = 8 and 50.28% for ε = 2, showing a 2.3-point difference.
For the BiT-S R101 × 1 model, FedSA-LoRA achieved 78.32% accuracy on CIFAR-100, while FedSA-LoRA-DP achieved 77.88% for both ε = 8 and ε = 2, resulting in only a 0.4-point difference. For MNIST, FedSA-LoRA achieved 94.99%, and FedSA-LoRA-DP achieved 94.11% for ε = 8 and 94.35% for ε = 2, corresponding to reductions of 0.88 and 0.64 points. For SVHN, FedSA-LoRA achieved 51.03%, and FedSA-LoRA-DP achieved 46.47% for ε = 8 and 46.49% for ε = 2, showing differences of 4.56 and 4.54 points.
These results confirm that the proposed FedSA-LoRA-DP framework maintains high accuracy across datasets with different visual and structural complexities. A deeper discussion of the dataset-dependent privacy–utility trade-off is provided in Section 5.3.

4.4. Communication Cost

To quantitatively validate the communication efficiency of our proposed methods, we measured both the per-round and total communication costs for FedSA-LoRA, FedSA-LoRA-DP, and the baseline FedAvg.
As shown in Table 6, the per-round communication cost of FedSA-LoRA and FedSA-LoRA-DP is only 0.6375 MB, whereas FedAvg requires 90.428 MB per round, corresponding to an approximate 99% reduction in communication overhead.
The substantial reduction in communication cost stems from the structural difference between FedAvg and our LoRA-based approaches. In FedAvg, the entire set of model parameters W must be transmitted between clients and the server in each communication round, resulting in a payload proportional to the full model size. In contrast, our methods employ LoRA, which decomposes the parameter update into two small matrices, A and B . During training, only A is communicated, while the pretrained base weights W remain frozen. This design effectively transforms federated learning into a transfer learning paradigm, where clients fine-tune lightweight adaptation modules rather than the entire model. Consequently, the per-round communication cost scales linearly with the low-rank dimension r, leading to a 99% reduction compared to standard FedAvg. Even when differential privacy is applied, the communication cost remains unchanged, as DP noise is injected locally without altering message size.
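A back-of-the-envelope estimate illustrates this scaling. The layer shapes below are hypothetical stand-ins for the R50 × 1 backbone, not the exact configuration behind the 0.6375 MB figure in Table 6:

```python
# Only the rank-r A matrices are uploaded, each of shape (r, c_in) per
# adapted 1x1 convolution; payload = total entries * 4 bytes (float32).
r = 8                                   # LoRA rank
in_channels = [256, 512, 1024, 2048]    # illustrative ResNet-50 stage widths
convs_per_stage = [3, 4, 6, 3]          # illustrative 1x1-conv counts

num_params = sum(r * c * k for c, k in zip(in_channels, convs_per_stage))
print(f"{num_params * 4 / 2**20:.3f} MB of A-matrices per client per round")
```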
Therefore, our method achieves substantial communication savings while maintaining competitive global accuracy.

5. Discussion

5.1. Limitations Under Extreme Data Heterogeneity

Table 7 presents the results under more extreme non-i.i.d. conditions, where the Dirichlet parameter is set to α = 0.1. As α decreases, the class distributions among clients become highly imbalanced, and the overlap between local datasets diminishes substantially. Consequently, the global model struggles to generalize across clients, resulting in a marked drop in overall test accuracy. This degradation can be attributed to two key structural factors of the proposed framework.
First, the low-rank structure of LoRA inherently limits the representational capacity of each client model. When client data distributions differ significantly, the restricted trainable subspace may lead to underfitting, as the low-rank decomposition cannot fully capture the diversity of local feature representations. This structural limitation becomes more pronounced as the degree of data heterogeneity increases, leading to diminished generalization across clients.
Furthermore, previous work [31,32] has shown that this aggregation strategy performs well even under non-i.i.d. conditions in large-scale language models, where a substantially higher-dimensional parameter space is available. However, in our image classification setting, the total number of trainable parameters is relatively small, making the expressive power of LoRA more constrained. As a result, it becomes more challenging for low-rank adaptation to capture client-specific feature variations, which exacerbates the performance drop under strong heterogeneity.
Second, since only the $A_i$ matrices are aggregated on the server while the $B_i$ matrices remain local, the amount of shared information across clients is substantially reduced. Although this selective aggregation strategy greatly enhances communication efficiency, it can intensify model divergence when client distributions are highly dissimilar. Under such conditions, the aggregated A updates fail to represent a coherent global direction, limiting the benefits of collaborative training.
In summary, while FedSA-LoRA-DP maintains robustness under moderate heterogeneity (e.g., α = 0.5), its performance deteriorates sharply under extreme data skewness, revealing an inherent structural limitation of the selective aggregation mechanism. Future work could address this issue by adopting clustering-based aggregation or personalized model fusion techniques to further enhance robustness in highly non-i.i.d. federated environments.

5.2. Impact of LoRA Rank on Noise Robustness

Table 5 reveals an interesting trend: the performance degradation caused by DP noise becomes smaller as the LoRA rank decreases. This observation indicates that lower-rank configurations inherently exhibit stronger robustness to DP-induced noise. The following factors may explain this behavior.
First, reducing the LoRA rank effectively decreases the number of trainable parameters, thereby limiting the dimensionality of the parameter space affected by the injected noise. In lower-rank settings, the gradient perturbations caused by Gaussian noise are distributed over a smaller subspace, leading to more stable optimization and less distortion of the learned representations. In contrast, higher-rank LoRA modules expose a larger number of parameters to DP noise, increasing the likelihood of performance degradation due to noisy gradient updates.
Second, in the proposed FedSA-LoRA-DP framework, DP is applied only to the $A_i$ matrices. When the rank r is smaller, the size of these matrices, and consequently the number of perturbed parameters, is also reduced. As a result, the total noise magnitude introduced per round decreases, further mitigating accuracy loss. This selective protection mechanism, combined with low-rank parameterization, thus achieves a natural form of noise resilience.
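To make this concrete, the expected energy of the injected Gaussian perturbation grows linearly with the number of perturbed entries of $A \in \mathbb{R}^{r \times n}$; the identity below is a standard calculation, shown as a sketch:

```latex
% Expected squared norm of the DP noise added to A (r x n entries):
\mathbb{E}\left\| \mathcal{N}\!\left(0,\, \sigma^2 C^2 I_{rn}\right) \right\|_2^2
  = r \, n \, \sigma^2 C^2 ,
% so halving the rank (r = 8 -> r = 4) halves the total injected noise
% energy at a fixed noise multiplier sigma and clipping norm C.
```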
Finally, these results highlight a trade-off between model capacity and privacy robustness. While higher ranks provide greater representational power, lower ranks yield improved stability against DP noise. This implies that LoRA-based federated systems can exploit rank as a controllable hyperparameter to balance privacy and performance, enabling efficient and privacy-preserving learning even with constrained model capacity.

5.3. Impact of Dataset Characteristics on Privacy–Utility Trade-Off

The variations in performance observed across CIFAR-100, MNIST, and SVHN can be attributed to differences in dataset complexity and representational diversity. Unlike the preceding quantitative analysis, this section focuses on the underlying factors that influence the privacy–utility trade-off in federated LoRA-based training.
First, the dimensionality and representational complexity of each dataset play a critical role in determining the sensitivity to noise injection. CIFAR-100 consists of high-resolution and semantically rich images, where the LoRA modules can capture diverse feature patterns even with low-rank adaptation. As a result, the Gaussian noise added to the gradients of the $A_i$ matrices has only a marginal impact on the overall optimization, leading to a minimal accuracy gap of approximately 0.4 points.
In contrast, MNIST is a low-dimensional and homogeneous dataset with limited intra-class variation. In such cases, the model’s representational subspace is narrow, and the DP-induced perturbation in the low-rank A updates may directly disrupt the already compact feature space. Consequently, a larger accuracy drop of about 1.5 points was observed. This slightly larger gap than in the CIFAR-100 case suggests that DP-induced noise has a more noticeable effect on datasets with limited feature diversity and simpler visual representations.
For SVHN, which contains real-world digit images with substantial illumination and background variability, both the low-rank approximation and the heterogeneous data distributions among clients contribute to training instability. Because the shared A matrix captures only a small portion of the total feature variation across clients, the model becomes more sensitive to gradient variance, leading to less stable updates and lower overall accuracy. This explains the relatively lower accuracy, around 2.3 points below FedSA-LoRA, despite consistent privacy guarantees. Also, this effect is inherent to the dataset’s complex visual structure rather than the privacy mechanism itself.
Additionally, these observations suggest that the effectiveness of the proposed method depends on the expressive capacity of the LoRA modules relative to dataset complexity. In large-scale models such as LLMs, where parameter space is extremely high-dimensional, low-rank adaptation can still capture rich feature interactions even under strong privacy constraints. However, for smaller-scale models or simpler tasks like image classification, the limited number of trainable parameters may not fully compensate for DP-induced noise, making the performance degradation more noticeable.
Overall, these findings highlight that the privacy–utility trade-off in federated LoRA-based learning is inherently dataset-dependent. Datasets with richer and higher-dimensional representations tend to absorb privacy noise more effectively, while simpler or noisier datasets experience more pronounced degradation. Future work could explore adaptive noise scaling or rank-aware privacy calibration strategies to balance the noise magnitude according to dataset complexity.

5.4. Future Work

In future work, we plan to verify the extent to which the proposed method resists membership inference and gradient leakage attacks. In this paper, we demonstrated that FedSA-LoRA-DP effectively balances model accuracy and privacy through differential privacy; however, further investigation is needed to evaluate its empirical resilience against direct privacy attacks.
Theoretical guarantees of differential privacy have been extensively established in prior studies [17,18], ensuring formal protection against information leakage in expectation. Nevertheless, it would be valuable to empirically validate how effectively FedSA-LoRA-DP withstands practical privacy attacks, such as membership inference or gradient inversion, under realistic federated conditions. This line of research would complement the theoretical analysis presented in this work and provide a more comprehensive assessment of the method’s robustness.
In addition, future work will extend the proposed framework to large-scale and domain-specific applications such as autonomous driving, medical imaging, and natural language processing. Evaluating FedSA-LoRA-DP under these complex, heterogeneous, and data-rich environments will be essential to verify its scalability, practical effectiveness, and robustness to extreme non-i.i.d. conditions. Such experiments will also enable a more direct connection between the theoretical contributions and real-world deployment scenarios.
Furthermore, we plan to expand our experimental evaluation to include comparisons with representative DP-FL baselines. These benchmarks will help quantify the relative advantages of FedSA-LoRA-DP in terms of accuracy retention, communication efficiency, and privacy–utility trade-offs. Such analysis will provide a more comprehensive empirical perspective on how our method aligns with and differs from existing adaptive or personalized DP-FL frameworks.
Depending on future requirements, we will also consider integrating cryptographic techniques such as secure aggregation or homomorphic encryption to further strengthen privacy protection. In any case, a systematic empirical evaluation against the attack scenarios discussed above remains an important direction for future work, to confirm whether FedSA-LoRA-DP provides robust privacy protection beyond its formal differential privacy guarantees.

6. Conclusions

In this paper, we proposed FedSA-LoRA-DP, a novel FL framework that integrates DP and LoRA for efficient model adaptation. This method protects privacy during the training process while maintaining communication efficiency and model accuracy.
Evaluation across various experimental conditions on the CIFAR-100 dataset confirmed that the proposed method minimizes accuracy degradation compared to its non-private counterpart. Furthermore, additional experiments using the MNIST and SVHN datasets confirmed that the proposed method demonstrates high versatility across different data domains. These results demonstrate that FedSA-LoRA-DP is an effective method for achieving both privacy and performance.
As future work, we plan to quantitatively evaluate the actual degree of privacy protection using attack methods such as Membership Inference Attack and Gradient Leakage Attack. In addition, we aim to extend the proposed FedSA-LoRA-DP framework to large-scale, real-world applications such as autonomous driving and healthcare, to further validate its scalability, robustness, and practical effectiveness in complex federated environments. We also plan to include comparative evaluations with representative DP-FL baselines to more comprehensively assess the privacy–utility trade-off and communication efficiency of the proposed framework.

Author Contributions

Software, T.M.; writing—original draft preparation, T.M.; writing—review and editing, T.M., L.Y., Z.Z., P.F. and C.O.; project administration, C.O.; funding acquisition, C.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by JSPS KAKENHI Grant Number JP22H03585.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nguyen, A.; Do, T.; Tran, M.; Nguyen, B.X.; Duong, C.; Phan, T.; Tjiputra, E.; Tran, Q.D. Deep Federated Learning for Autonomous Driving. In Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 4–9 June 2022; IEEE: New York, NY, USA, 2022. [Google Scholar] [CrossRef]
  2. Zhang, H.; Bosch, J.; Olsson, H.H. End-to-End Federated Learning for Autonomous Driving Vehicles. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Virtual, 18–22 July 2021; IEEE: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  3. Konečný, J.; McMahan, H.B.; Ramage, D.; Richtárik, P. Federated Optimization: Distributed Machine Learning for On-Device Intelligence. arXiv 2016, arXiv:1610.02527. [Google Scholar] [CrossRef]
  4. Sattler, F.; Wiedemann, S.; Muller, K.R.; Samek, W. Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 3400–3413. [Google Scholar] [CrossRef] [PubMed]
  5. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A.Y. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; Proceedings of Machine Learning Research (PMLR). Singh, A., Zhu, J., Eds.; Volume 54, pp. 1273–1282. [Google Scholar]
  6. Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečný, J.; Mazzocchi, S.; McMahan, B.; et al. Towards Federated Learning at Scale: System Design. Proc. Mach. Learn. Syst. 2019, 1, 374–388. [Google Scholar]
  7. Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Nitin Bhagoji, A.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and Open Problems in Federated Learning. Found. Trends Mach. Learn. 2021, 14, 1–210. [Google Scholar] [CrossRef]
  8. Houlsby, N.; Giurgiu, A.; Jastrzebski, S.; Morrone, B.; De Laroussilhe, Q.; Gesmundo, A.; Attariyan, M.; Gelly, S. Parameter-Efficient Transfer Learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; Chaudhuri, K., Salakhutdinov, R., Eds.; Proceedings of Machine Learning Research (PMLR). Volume 97, pp. 2790–2799. [Google Scholar]
  9. Li, X.L.; Liang, P. Prefix-Tuning: Optimizing Continuous Prompts for Generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online, 20–23 November 2021; Zong, C., Xia, F., Li, W., Navigli, R., Eds.; pp. 4582–4597. [Google Scholar] [CrossRef]
  10. Lester, B.; Al-Rfou, R.; Constant, N. The Power of Scale for Parameter-Efficient Prompt Tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Punta Cana, Dominican Republic, 7–11 November 2021; Moens, M.F., Huang, X., Specia, L., Yih, S.W.t., Eds.; pp. 3045–3059. [Google Scholar] [CrossRef]
  11. Hu, E.J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; Chen, W. Lora: Low-rank adaptation of large language models. arXiv 2022, arXiv:2106.09685. [Google Scholar]
  12. Zhu, L.; Liu, Z.; Han, S. Deep Leakage from Gradients. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Wallach, H., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E., Garnett, R., Eds.; Curran Associates, Inc.: New York, NY, USA, 2019; Volume 32. [Google Scholar]
  13. Geiping, J.; Bauermeister, H.; Dröge, H.; Moeller, M. Inverting gradients-how easy is it to break privacy in federated learning? In Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS’20), Red Hook, NY, USA, 6–12 December 2020. [Google Scholar]
  14. Huang, Y.; Gupta, S.; Song, Z.; Li, K.; Arora, S. Evaluating gradient inversion attacks and defenses in federated learning. In Proceedings of the 35th International Conference on Neural Information Processing Systems (NIPS’21), Red Hook, NY, USA, 6–14 December 2021. [Google Scholar]
  15. Zhu, G.; Li, D.; Gu, H.; Han, Y.; Yao, Y.; Fan, L.; Yang, Q. FedMIA: An Effective Membership Inference Attack Exploiting “All for One” Principle in Federated Learning. In Proceedings of the 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Denver, CO, USA, 3–7 June 2024; pp. 20643–20653. [Google Scholar] [CrossRef]
  16. Wang, X.; Wang, N.; Wu, L.; Guan, Z.; Du, X.; Guizani, M. GBMIA: Gradient-based Membership Inference Attack in Federated Learning. In Proceedings of the ICC 2023—IEEE International Conference on Communications, Rome, Italy, 28 May–1 June 2023; pp. 5066–5071. [Google Scholar] [CrossRef]
  17. Dwork, C.; McSherry, F.; Nissim, K.; Smith, A. Calibrating Noise to Sensitivity in Private Data Analysis; Massachusetts Institute of Technology: Cambridge, MA, USA, 2006; pp. 265–284. [Google Scholar]
  18. Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep Learning with Differential Privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS’16), New York, NY, USA, 24–28 October 2016; pp. 308–318. [Google Scholar] [CrossRef]
  19. Mironov, I. Rényi Differential Privacy. In Proceedings of the 2017 IEEE 30th Computer Security Foundations Symposium (CSF), Santa Barbara, CA, USA, 22–25 August 2017; pp. 263–275. [Google Scholar] [CrossRef]
  20. Yousefpour, A.; Shilov, I.; Sablayrolles, A.; Testuggine, D.; Prasad, K.; Malek, M.; Nguyen, J.; Ghosh, S.; Bharadwaj, A.; Zhao, J.; et al. Opacus: User-friendly differential privacy library in PyTorch. arXiv 2021, arXiv:2109.12298. [Google Scholar]
  21. Zheng, Q.; Chen, S.; Long, Q.; Su, W. Federated f-differential privacy. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Virtual, 13–15 April 2021; pp. 2251–2259. [Google Scholar]
  22. Cui, L.; Wu, X. ALDP-FL for Adaptive Local Differential Privacy in Federated Learning. Sci. Rep. 2025, 15, 26679. [Google Scholar] [CrossRef]
  23. Wang, D.; Guan, S. FedFR-ADP: Adaptive Differential Privacy with Feedback Regulation for Robust Model Performance in Federated Learning. Inf. Fusion 2025, 116, 102796. [Google Scholar] [CrossRef]
  24. Li, X.; Huang, K.; Yang, W.; Wang, S.; Zhang, Z. On the Convergence of FedAvg on Non-IID Data. In Proceedings of the 8th International Conference on Learning Representations (ICLR) 2020, Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  25. Karimireddy, S.P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; Suresh, A.T. Scaffold: Stochastic controlled averaging for federated learning. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 5132–5143. [Google Scholar]
  26. Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated Optimization in Heterogeneous Networks. Proc. Mach. Learn. Syst. 2020, 15, 429–450. [Google Scholar]
  27. Wang, J.; Liu, Q.; Liang, H.; Joshi, G.; Poor, H.V. Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization. Adv. Neural Inf. Process. Syst. 2020, 3, 7611–7623. [Google Scholar]
  28. Bernstein, J.; Wang, Y.X.; Azizzadenesheli, K.; Anandkumar, A. signSGD: Compressed optimisation for non-convex problems. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 560–569. [Google Scholar]
  29. Alistarh, D.; Grubic, D.; Li, J.; Tomioka, R.; Vojnovic, M. QSGD: Communication-efficient SGD via gradient quantization and encoding. Adv. Neural Inf. Process. Syst. 2017, 30, 1707–1718. [Google Scholar]
  30. Reisizadeh, A.; Mokhtari, A.; Hassani, H.; Jadbabaie, A.; Pedarsani, R. Fedpaq: A communication-efficient federated learning method with periodic averaging and quantization. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Online, 26–28 August 2020; pp. 2021–2031. [Google Scholar]
  31. Guo, P.; Zeng, S.; Wang, Y.; Fan, H.; Wang, F.; Qu, L. Selective Aggregation for Low-Rank Adaptation in Federated Learning. arXiv 2025, arXiv:2410.01463. [Google Scholar]
  32. Sun, Y.; Li, Z.; Li, Y.; Ding, B. Improving LoRA in Privacy-preserving Federated Learning. arXiv 2024, arXiv:2403.12313. [Google Scholar]
  33. Liu, X.Y.; Zhu, R.; Zha, D.; Gao, J.; Zhong, S.; White, M.; Qiu, M. Differentially Private Low-Rank Adaptation of Large Language Model Using Federated Learning. arXiv 2024, arXiv:2312.17493. [Google Scholar] [CrossRef]
  34. Liu, X.Y.; Zhu, R.; Zha, D.; Gao, J.; Zhong, S.; White, M.; Qiu, M. Differentially Private Low-Rank Adaptation of Large Language Model Using Federated Learning. ACM Trans. Manag. Inf. Syst. 2025, 16, 3682068. [Google Scholar] [CrossRef]
  35. Manoj, T.; Makkithaya, K.; Narendra, V.G. A Blockchain-Assisted Trusted Federated Learning for Smart Agriculture. SN Comput. Sci. 2025, 6, 221. [Google Scholar] [CrossRef]
  36. Kim, H.; Park, J.; Bennis, M.; Kim, S.L. Blockchained On-Device Federated Learning. IEEE Commun. Lett. 2020, 24, 1279–1283. [Google Scholar] [CrossRef]
  37. Weng, S.; Zhang, L.; Feng, D.; Feng, C.; Wang, R.; Klaine, P.V.; Imran, M.A. Privacy-preserving federated learning based on differential privacy and momentum gradient descent. In Proceedings of the 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 18–23 July 2022; IEEE: New York, NY, USA, 2022; pp. 1–6. [Google Scholar]
  38. Ding, J.; Zhang, X.; Chen, M.; Xue, K.; Zhang, C.; Pan, M. Differentially private robust ADMM for distributed machine learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; IEEE: New York, NY, USA, 2019; pp. 1302–1311. [Google Scholar]
  39. Jayaraman, B.; Evans, D. Evaluating differentially private machine learning in practice. In Proceedings of the 28th USENIX Security Symposium (USENIX Security 19), Santa Clara, CA, USA, 14–16 August 2019; pp. 1895–1912. [Google Scholar]
  40. Dong, J.; Roth, A.; Su, W.J. Gaussian differential privacy. J. R. Stat. Soc. Ser. Stat. Methodol. 2022, 84, 3–37. [Google Scholar] [CrossRef]
  41. Zhang, L.; Zhang, L.; Shi, S.; Chu, X.; Li, B. Lora-fa: Memory-efficient low-rank adaptation for large language models fine-tuning. arXiv 2023, arXiv:2308.03303. [Google Scholar]
  42. Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; Technical Report; University of Toronto: Toronto, ON, Canada, 2009. [Google Scholar]
  43. Kolesnikov, A.; Beyer, L.; Zhai, X.; Puigcerver, J.; Yung, J.; Gelly, S.; Houlsby, N. Big transfer (bit): General visual representation learning. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020. [Google Scholar]
  44. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  45. Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; Ng, A.Y. Reading Digits in Natural Images with Unsupervised Feature Learning. In Proceedings of the NIPS Workshop on Deep Learning and Unsupervised Feature Learning, Granada, Spain, 21 October 2011. [Google Scholar]
Figure 1. Overview of the proposed FedSA-LoRA-DP framework. The server aggregates only the LoRA $A_i$ matrices uploaded by clients, while keeping the $B_i$ matrices local to preserve personalization. DP noise is added to the $A_i$ updates before transmission to ensure privacy protection.
Figure 2. Sample images from the CIFAR-100 dataset used for evaluating the proposed method. The dataset contains 100 diverse object classes with high intra-class variation.
Figure 3. Comparison of learning curves (test accuracy) for FedSA-LoRA and FedSA-LoRA-DP under i.i.d. and non-i.i.d. settings (Dirichlet α = 0.5) on CIFAR-100. Test accuracy is evaluated every 5 communication rounds. The green and orange curves, corresponding to ε = 8 and ε = 2, nearly overlap throughout the entire training process, making the orange line barely distinguishable.
Figure 4. Examples from the MNIST and SVHN datasets used for cross-domain evaluation. MNIST consists of handwritten digits with low visual complexity, whereas SVHN contains real-world street-view digits with complex illumination and backgrounds.
Figure 5. Comparison of test accuracy (%) of FedSA-LoRA and FedSA-LoRA-DP (ε = 8, 2) across the CIFAR-100, MNIST, and SVHN datasets using the BiT-S R50 × 1 (a) and R101 × 1 (b) architectures. The results show that FedSA-LoRA-DP achieves nearly identical accuracy to FedSA-LoRA across all datasets, demonstrating that the integration of differential privacy introduces minimal degradation in performance.
Table 1. Main hyperparameter configuration for the federated learning experiments, including DP settings. The same configuration was applied across all datasets unless otherwise specified.
Parameter                    Symbol   Value
Federated learning setup
  Number of rounds           T        100
  Number of clients          N        10
  Local epochs               E        3
  Learning rate              η        1.0 × 10⁻⁴
  Optimizer                  -        SGD
DP configuration
  Privacy budget             ε        8, 2
  Failure probability        δ        1.0 × 10⁻⁵
  Clipping norm              C        0.5
  Batch size                 B        64
  Noise multiplier           σ        1.4 (ε = 8), 3.75 (ε = 2)
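As a reference for how the DP parameters above interact, the sketch below shows a standard DP-SGD-style update that clips each per-sample gradient of the LoRA A parameters to C and adds Gaussian noise scaled by σC. This illustrates the usual mechanism with the values from Table 1, not necessarily the exact implementation used here:

```python
import numpy as np

C, sigma, lr = 0.5, 1.4, 1e-4   # clipping norm, noise multiplier (ε = 8), learning rate
rng = np.random.default_rng(0)

def dp_sgd_step(per_sample_grads, params):
    """per_sample_grads: array of shape (batch, dim) for the LoRA-A parameters."""
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, C / (norms + 1e-12))
    # Gaussian noise with std σC on the clipped sum, then average over the batch.
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma * C, params.shape)
    return params - lr * noisy_sum / len(per_sample_grads)
```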
Table 2. Default hyperparameter settings used throughout the experiments unless otherwise specified.
Hyperparameter               Value
Data heterogeneity           i.i.d.
Client participation rate    1.0
LoRA rank                    8
Table 3. Test accuracy (%) of FedSA-LoRA and FedSA-LoRA-DP under i.i.d. and non-i.i.d. settings (Dirichlet α = 0.5) on CIFAR-100. Both BiT-S R50×1 and R101×1 architectures are evaluated to assess the impact of data heterogeneity on model performance. Results show that FedSA-LoRA-DP achieves nearly identical accuracy to FedSA-LoRA in both data distributions, demonstrating that differential privacy has minimal influence on convergence or generalization even under heterogeneous client data.
Method                   R50×1, i.i.d.   R50×1, non-i.i.d. (α = 0.5)   R101×1, i.i.d.   R101×1, non-i.i.d. (α = 0.5)
FedSA-LoRA               74.47 ± 0.29    55.60 ± 2.14                  78.32 ± 0.29     59.33 ± 2.55
FedSA-LoRA-DP (ε = 8)    74.07 ± 0.31    55.32 ± 2.12                  77.88 ± 0.23     59.09 ± 2.45
FedSA-LoRA-DP (ε = 2)    74.06 ± 0.29    55.33 ± 2.12                  77.88 ± 0.22     59.09 ± 2.44
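For context, the Dirichlet-based non-i.i.d. splits used above are commonly realized by drawing per-class client proportions from Dir(α); the sketch below shows one typical partitioner, which may differ in detail from the paper's exact procedure:

```python
import numpy as np

def dirichlet_partition(labels, n_clients=10, alpha=0.5, seed=0):
    """Split sample indices across clients; smaller alpha -> more skewed classes."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.where(labels == cls)[0])
        # Fraction of this class assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for i, part in enumerate(np.split(idx, cuts)):
            client_idx[i].extend(part.tolist())
    return client_idx
```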
Table 4. Test accuracy (%) of FedSA-LoRA and FedSA-LoRA-DP under different client participation ratios (f = 1.0, 0.3) using BiT-S R50×1 and R101×1 backbones on CIFAR-100. This experiment evaluates robustness to partial participation, a common scenario in real-world federated learning. The results indicate that accuracy degradation remains within about 0.5 percentage points even when only 30% of clients participate, showing that FedSA-LoRA-DP maintains stable performance despite reduced client availability.
Method                   R50×1, f = 1.0   R50×1, f = 0.3   R101×1, f = 1.0   R101×1, f = 0.3
FedSA-LoRA               74.47 ± 0.29     73.97 ± 0.33     78.32 ± 0.29      77.77 ± 0.33
FedSA-LoRA-DP (ε = 8)    74.07 ± 0.31     73.65 ± 0.43     77.88 ± 0.23      77.34 ± 0.26
FedSA-LoRA-DP (ε = 2)    74.06 ± 0.29     73.66 ± 0.42     77.88 ± 0.22      77.35 ± 0.27
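Partial participation as evaluated here means the server activates only a fraction f of the client pool each round; a minimal sketch, with hypothetical `clients` objects:

```python
import random

def sample_clients(clients, f=0.3, seed=None):
    """Select max(1, round(f * N)) clients to train and upload A this round."""
    k = max(1, round(f * len(clients)))
    return random.Random(seed).sample(clients, k)
```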
Table 5. Test accuracy (%) of FedSA-LoRA and FedSA-LoRA-DP with different LoRA ranks (r = 4, 8) on CIFAR-100 using BiT-S R50×1 and BiT-S R101×1 backbones. The left two result columns correspond to BiT-S R50×1 and the right two columns to BiT-S R101×1. This comparison evaluates the trade-off between model compactness and performance, showing that reducing the LoRA rank from 8 to 4 leads to less than a 0.2-point drop in accuracy for both backbones. Furthermore, FedSA-LoRA-DP maintains comparable accuracy to FedSA-LoRA across all configurations, indicating that differential privacy has minimal impact even under stronger parameter compression.
Method                   R50×1, r = 8   R50×1, r = 4   R101×1, r = 8   R101×1, r = 4
FedSA-LoRA               74.47 ± 0.29   74.28 ± 0.31   78.32 ± 0.29    78.27 ± 0.29
FedSA-LoRA-DP (ε = 8)    74.07 ± 0.31   73.94 ± 0.30   77.88 ± 0.23    77.86 ± 0.29
FedSA-LoRA-DP (ε = 2)    74.06 ± 0.29   73.93 ± 0.31   77.88 ± 0.22    77.87 ± 0.29
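The rank r controls the size of the trainable low-rank factors: with A ∈ R^{r×d_in} and B ∈ R^{d_out×r}, a layer with d_in = d_out = 512 at r = 8 adds 8192 trainable parameters, of which only the 4096 in A are transmitted. A minimal PyTorch-style sketch (illustrative; layer sizes and initialization are assumptions, not the paper's code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        # Frozen pretrained weight; only the low-rank factors are trained.
        self.weight = nn.Parameter(torch.randn(d_out, d_in), requires_grad=False)
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # shared (aggregated)
        self.B = nn.Parameter(torch.zeros(d_out, r))        # kept local
        self.scale = alpha / r

    def forward(self, x):
        # Effective weight: W + (alpha/r) * B A
        return x @ (self.weight + self.scale * (self.B @ self.A)).T
```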
Table 6. Comparison of communication costs per round among different federated learning methods. FedSA-LoRA and FedSA-LoRA-DP achieve approximately 99% reduction in communication overhead compared with standard FedAvg, since only the LoRA A_i matrices are transmitted. The integration of DP introduces no additional communication cost, demonstrating the efficiency of the proposed design.
Method           Per-Round Cost (MB)
FedAvg           90.428
FedSA-LoRA       0.6375
FedSA-LoRA-DP    0.6375
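The costs above follow directly from parameter counts at 32-bit precision. In the sketch below, the parameter counts are back-calculated from the reported megabyte figures (assuming 1 MB = 10^6 bytes) and are therefore illustrative rather than official model statistics:

```python
def cost_mb(n_params, bytes_per_param=4):
    """Per-round upload size in MB, assuming float32 parameters."""
    return n_params * bytes_per_param / 1e6

# Back-calculated from Table 6 (illustrative counts only):
print(cost_mb(22_607_000))  # 90.428 MB -> full model (FedAvg)
print(cost_mb(159_375))     # 0.6375 MB -> LoRA A matrices only
# DP noise changes parameter values, not shapes, so the DP cost is identical.
```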
Table 7. Test accuracy (%) of FedSA-LoRA and FedSA-LoRA-DP under varying data heterogeneity levels on the CIFAR-100 dataset. Both BiT-S R50×1 and R101×1 architectures are evaluated under i.i.d. and non-i.i.d. settings with Dirichlet parameters α ∈ {0.5, 0.1}. A smaller α value indicates stronger heterogeneity, producing more imbalanced class distributions and less data overlap among clients. The results show that both FedSA-LoRA and FedSA-LoRA-DP remain stable under moderate heterogeneity (α = 0.5) but experience a marked accuracy drop when α = 0.1.
Method                   R50×1, i.i.d.   R50×1, α = 0.5   R50×1, α = 0.1   R101×1, i.i.d.   R101×1, α = 0.5   R101×1, α = 0.1
FedSA-LoRA               74.47 ± 0.29    55.60 ± 2.14     30.10 ± 2.81     78.32 ± 0.29     59.33 ± 2.55      31.62 ± 2.91
FedSA-LoRA-DP (ε = 8)    74.07 ± 0.31    55.32 ± 2.12     30.03 ± 2.83     77.88 ± 0.23     59.09 ± 2.45      31.55 ± 2.88
FedSA-LoRA-DP (ε = 2)    74.06 ± 0.29    55.33 ± 2.12     30.03 ± 2.83     77.88 ± 0.22     59.09 ± 2.44      31.54 ± 2.88