Article

pFedKA: Personalized Federated Learning via Knowledge Distillation with Dual Attention Mechanism

School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
*
Author to whom correspondence should be addressed.
Computers 2025, 14(12), 504; https://doi.org/10.3390/computers14120504
Submission received: 26 August 2025 / Revised: 8 November 2025 / Accepted: 19 November 2025 / Published: 21 November 2025
(This article belongs to the Special Issue Mobile Fog and Edge Computing)

Abstract

Federated learning in heterogeneous data scenarios faces two key challenges. First, the conflict between global models and local personalization complicates knowledge transfer and leads to feature misalignment, hindering effective personalization for clients. Second, the lack of dynamic adaptation in standard federated learning makes it difficult to handle highly heterogeneous and changing client data, reducing the global model’s generalization ability. To address these issues, this paper proposes pFedKA, a personalized federated learning framework integrating knowledge distillation and a dual-attention mechanism. On the client-side, a cross-attention module dynamically aligns global and local feature spaces using adaptive temperature coefficients to mitigate feature misalignment. On the server-side, a Gated Recurrent Unit-based attention network adaptively adjusts aggregation weights using cross-round historical states, providing more robust aggregation than static averaging in heterogeneous settings. Experimental results on CIFAR-10, CIFAR-100, and Shakespeare datasets demonstrate that pFedKA converges faster and with greater stability in heterogeneous scenarios. Furthermore, it significantly improves personalization accuracy compared to state-of-the-art personalized federated learning methods. Additionally, we demonstrate privacy guarantees by integrating pFedKA with DP-SGD, showing comparable privacy protection to FedAvg while maintaining high personalization accuracy.

1. Introduction

Federated Learning (FL), a distributed machine learning paradigm, has demonstrated significant potential in privacy-sensitive domains such as medical diagnosis and intelligent IoT systems by adhering to the “Data Immobility and Model Mobility” principle. However, the prevalence of non-IID data in real-world applications presents critical challenges: global models often fail to adapt to diverse local data distributions, while purely local personalized models face data sparsity and isolated learning. Consequently, Personalized Federated Learning (PFL) has garnered increasing attention, aiming to provide client-specific models that maintain privacy.
Knowledge Distillation (KD) and Attention Mechanism (AM) have emerged as cornerstone techniques for addressing data heterogeneity and enhancing personalization effectiveness. Recent advancements span dynamic optimization and aggregation strategies leveraging Bayesian ensemble methods [1,2,3,4], attention-driven client selection [5], Transformer architectures [6,7], and instance-level optimization [8], all contributing to improved model adaptability. In the realm of KD, adversarial feature alignment [9] and spectral collaborative distillation [10] have enhanced generalization on heterogeneous datasets. Cross-domain frameworks—including hybrid healthcare models [11,12,13], edge caching [14], and mobile edge aggregation [15]—have facilitated real-world PFL deployment, while efficient training schemes [16] and standardized benchmarks [17] support rigorous evaluation. Despite these strides, persistent limitations remain in adapting to dynamic distributions, preserving privacy, and achieving multi-level collaborative optimization.
KD serves as a cornerstone for mitigating data heterogeneity by enabling knowledge transfer from global to local models [18,19]. However, existing KD-based methods face substantial challenges: First, many approaches risk privacy leakage by relying on public datasets (e.g., FedMD [20]) or uploading label information (e.g., FedGKT [21]), thus exposing sensitive data and undermining FL’s privacy-preserving principles. Second, most methods rely on static distillation strategies, assuming homogeneous and stable data sharing (e.g., FedDF [22]). In practice, however, local data distributions fluctuate significantly. This discrepancy results in feature misalignment, especially in non-IID environments, causing excessive distillation noise and a significant degradation in model performance. FedMD, for instance, assumes that global and local models can be aligned using public datasets, which introduces significant privacy concerns. When data distributions shift, the static distillation policy in FedMD causes misalignment between the global and local models, increasing distillation noise and reducing performance. Similarly, FedGKT relies on bidirectional distillation, but still depends on label sharing, leading to privacy concerns and misalignment in dynamic data environments.
In contrast, FedDF [22] employs soft labels to improve knowledge transfer. However, FedDF also assumes that the data distributions are homogeneous, which is unrealistic in most federated settings. The failure to accommodate heterogeneous data results in feature misalignment and ineffective distillation, further contributing to performance degradation in non-IID settings. Thus, the static distillation strategies in these methods are not adaptable to changing data distributions and lead to suboptimal performance.
AM enhances model adaptability through dynamic weighting, but it often addresses either client-side personalization or server-side aggregation in isolation. For example, FedAtt [23] employs a self-attention module on clients to bolster feature representation but lacks sufficient server-side aggregation optimization. FedAMP [24] clusters similar clients through attention message passing to promote personalization yet still relies on relatively static attention patterns that may struggle under severe non-IID heterogeneity. pFedHN [25] introduces Hypernetworks for parameter sharing, improving adaptability but encountering scalability issues. Additionally, frameworks like APFL [26], which rely on fixed model mixing, prove inadequate in highly heterogeneous environments and lack flexibility in accommodating varying client demands.
These methods, though significant, exhibit critical limitations, especially in managing both client-side personalization and server-side aggregation effectively. They may struggle with highly heterogeneous data distributions and diverse client behaviors, which can affect model robustness and adaptability in practical federated environments. FedAtt and FedAMP mainly focus on feature representation or client clustering but do not explicitly optimize server-side aggregation, while FedGKT and FedMD are constrained by their reliance on label sharing, raising privacy concerns.
Based on these shortcomings, pFedKA introduces a dual attention mechanism to overcome the limitations of both KD and AM. By utilizing cross-attention (CA) modules for feature alignment before distillation, pFedKA ensures better knowledge transfer even in the presence of data misalignment. This dynamic alignment allows for reduced distillation noise, particularly in non-IID environments, and ensures that knowledge transfer remains effective under heterogeneous client distributions. Furthermore, pFedKA incorporates a GRU-based aggregation strategy, which adjusts aggregation weights based on client data performance and historical states. The dual attention mechanism in pFedKA allows it to handle both client-side personalization and server-side aggregation simultaneously, overcoming the limitations of static attention and enabling adaptive aggregation across heterogeneous data. In addition, pFedKA is designed to be compatible with standard privacy-preserving mechanisms such as differentially private SGD [27] and secure aggregation [28]; in this work, we integrate DP-SGD as an instantiation to empirically assess the privacy–utility trade-off.
Based on these improvements, pFedKA provides the following contributions:
(1)
A dynamic KD framework employing CA modules and adaptive temperature coefficients to achieve feature alignment and noise suppression under non-IID data.
(2)
An adaptive aggregation strategy leveraging Gated Recurrent Unit (GRU) networks to exploit cross-round historical client metadata for dynamic weight assignment and improved robustness over static averaging.
(3)
A cross-level collaborative optimization scheme bridging client-side feature distillation with server-side dynamic aggregation, harmonizing global consistency with local personalization.
The rest of the paper is organized as follows: Section 2 reviews related research on KD and AM in FL; Section 3 details the core design of pFedKA; Section 4 validates the methodology through comparative experiments and ablation analysis; and Section 5 presents the conclusions and future work.

2. Related Work

The PFL approach based on KD and dual AM builds on foundational research across three key domains:

2.1. PFL

FL enables collaborative model training while preserving data privacy, yet global models often underperform for non-IID clients requiring personalization. FedAvg [29], which reduces communication by averaging parameters, struggles in highly heterogeneous environments due to its static aggregation rule. This limitation becomes even more pronounced when data distributions change over time, highlighting the need for more dynamic adaptation. FedProx [30] introduces regularization to align local and global models, yet its fixed regularization strength fails to handle shifts in local data distributions. Per-FedAvg [31], leveraging meta-learning, allows for faster personalization but still relies heavily on the global model, making it ineffective in highly diverse or sparse data scenarios.
In contrast, FedRep [32] separates shared and personalized layers to improve model adaptation but neglects feature misalignment, which can lead to misalignment between the global model and local features. FedMA [33] attempts to address this issue through neuron matching, but this approach introduces scalability problems as the number of clients increases. Ditto [34], which allows clients to maintain both global and personalized models, provides flexibility but incurs significant computational overhead, making it difficult to scale in real-world settings. FedCDA [35] introduces a cross-round aggregation approach to improve personalization by selecting models with minimal divergence, but relies on storing multiple historical models and assumes model homogeneity, limiting its flexibility. These methods, though significant, still struggle with feature misalignment, adaptability, and scalability in diverse environments.

2.2. PFL Based on KD

KD has long been a cornerstone for enabling knowledge transfer from global models to local models, especially in non-IID settings where feature spaces do not align. FedDF [22] aggregates client predictions as soft labels for global training, but its assumption of homogeneous data distributions leads to noisy aggregation and loss of personalization in heterogeneous environments. Similarly, FedMD [20] aligns client models using public datasets, which raises significant privacy concerns, as the reliance on label sharing exposes sensitive client data. FedGKT [21] enhances knowledge transfer through bidirectional distillation, yet still relies on label sharing, which compromises privacy and fails to address the dynamic nature of data distributions. These early methods often operate under static distillation policies that assume client data distributions remain stable, which is rarely the case in real-world federated environments. This static nature leads to feature misalignment, excessive distillation noise, and performance degradation in non-IID settings.
Recent advancements have attempted to overcome these limitations. FedKT [36] introduces dynamic temperature adjustment based on each client’s data distribution, but is typically evaluated under relatively mild distribution shifts, limiting the evidence for its effectiveness in more challenging heterogeneous scenarios. FedDyn [37] addresses dynamic knowledge transfer by adjusting the strength of knowledge transfer, but this introduces scalability issues. CD2-pFed [38] applies cyclic distillation to improve feature alignment, yet it overlooks long-term knowledge evolution, causing suboptimal performance as data distributions drift. MH-pFLID [39] proposes a more efficient framework for heterogeneous PFL, but it still struggles with domain generalization, particularly in highly heterogeneous, non-IID settings.
Privacy remains a key challenge in federated distillation. FedRod [40] bridges generic and personalized federated learning via a two-loss, two-predictor framework, achieving strong global and personalized performance but without explicitly integrating formal privacy mechanisms. FedKADP [41] combines knowledge distillation with differential privacy and an adaptive feedback controller to mitigate membership inference attacks while maintaining high utility. KD3A [42] further explores privacy-preserving decentralized knowledge distillation for multi-source domain adaptation, distilling consensus knowledge from multiple source models while dynamically down-weighting malicious or irrelevant domains and significantly reducing communication cost. However, most DP-based distillation frameworks are evaluated in centralized or multi-source adaptation settings and typically do not simultaneously address client-level personalization, non-IID heterogeneity, and adaptive server-side aggregation in federated learning.
New approaches such as FedFomo [43] and FLUID [44] are making strides in improving privacy and distillation effectiveness. FedFomo allows clients to optimize aggregation weights without requiring prior knowledge of data distributions, enabling out-of-distribution personalization, a challenge for methods like pFedMe [45] and LG-FedAvg [46]. However, its reliance on local validation sets limits its applicability, and its first-order approximation for weight updates may not suffice in highly non-linear parameter spaces. FLUID, on the other hand, combines dynamic pruning with KD, allowing for model personalization in resource-constrained environments like maritime predictive maintenance. Yet, it does not integrate privacy-preserving mechanisms for logits transmission and lacks scalability analysis, limiting its broader applicability. Fed-DFA [47] provides a solution for heterogeneous model fusion by using adversarial distillation, optimizing models based on boundary perception. While it shows promise in non-IID environments, its high computational overhead, particularly from PGD-based boundary estimation, and the lack of theoretical convergence analysis, raise concerns regarding its robustness in large-scale, dynamic federated settings.
While these methods have made progress, they still face challenges related to privacy, scalability, and adapting to long-term data drift. These limitations suggest that while KD-based methods have advanced PFL, there is still a need for more flexible and privacy-preserving approaches.

2.3. PFL Based on AM

AM has been widely adopted in PFL for enhancing feature selection, aggregation, and alignment between global and local models. Despite its successes, existing AM-based methods face several limitations, particularly in addressing evolving data distributions and ensuring a balanced approach to both client-side personalization and server-side aggregation. For example, FedAtt [23] utilizes self-attention on the client side to improve feature representation, leading to better personalization. However, its static per-round weighting does not explicitly incorporate client metadata or historical behavior, which may limit its adaptability under strong non-IID heterogeneity and complex client dynamics. FedAMP [24] addresses this by using an attention-based message passing mechanism to cluster similar clients, but it still struggles with local feature alignment in highly heterogeneous environments. Both approaches lack explicit mechanisms to exploit cross-round historical information in aggregation, which can hinder robust adaptation under challenging client heterogeneity. Similarly, while FedRoD [40] improves personalization by introducing dynamic sub-model selection, it is still limited by its memoryless aggregation strategy, which fails to adapt to long-term changes in client data. pFedHN [25] employs hypernetworks for parameter sharing, which enhances adaptability but at the cost of increased computational complexity and architectural overhead.
These existing AM-based methods struggle with feature misalignment, data distribution shifts, and long-term data dependencies. They remain largely ineffective in dynamic, heterogeneous environments where client data continuously evolves. This emphasizes the need for more robust solutions that can simultaneously address these issues of feature misalignment, aggregation, and long-term adaptation in a scalable manner.

3. pFedKA

For consistency, the global notation used throughout this section is summarized in Table 1. Figure 1 illustrates the overall framework of pFedKA, whose core process is divided into the following three phases:
(1)
Global model distribution: the server sends the current global teacher model down to the client collection.
(2)
Client local training phase: each client initializes the student model on local data and optimizes it via CA-based feature alignment between teacher and student, combined with a dynamic distillation loss.
(3)
Server-side global aggregation phase: the server dynamically assigns weights to clients via the gated attention network and performs a weighted aggregation of local model parameters to update the global teacher model.

3.1. Problem Formulation

The core challenge in FL lies in reconciling the global model’s consistency with the local model’s personalization requirements while preserving data privacy. Given $K$ clients, each client $k$ holds a local dataset $D_k$, drawn from a potentially non-IID distribution $P_k$, which may deviate significantly from the global distribution $P_G$. Traditional FL optimizes the global model parameters $\theta$ by minimizing the weighted empirical risk:
$$\min_{\theta} \sum_{k=1}^{K} \frac{n_k}{N}\, \mathcal{L}_k(\theta; D_k) \tag{1}$$
While effective under near-IID conditions, a single global model often underperforms on heterogeneous clients, yielding weak personalization.
To address this, PFL equips each client with a local model $\phi_k$ and augments the objective with a proximity term:
$$\min_{\theta, \{\phi_k\}} \sum_{k=1}^{K} \Big( \mathbb{E}_{D_k}\big[\mathcal{L}(\phi_k; D_k)\big] + \lambda\, R(\phi_k, \theta) \Big) \tag{2}$$
Here, $R(\phi_k, \theta)$ measures the discrepancy between the local and global models, typically instantiated as an $L_2$ proximity term (e.g., $\|\phi_k - \theta\|^2$) that encourages personalization while keeping the local solution close to the global one. However, a fixed $\lambda$ cannot adapt across rounds or across clients, and the feature-space mismatch between $\theta$ and $\phi_k$ under non-IID data further hinders effective knowledge transfer.
To tackle these challenges, pFedKA proposes a dual-attention-driven framework that integrates KD and AM, replacing the fixed $\lambda$ with dynamic, client-specific coefficients each round: a KD weight $\mu_k^{(t)}$ and a temperature $\tau_k^{(t)}$. The round-$t$ objective is:
$$\min_{\theta, \{\phi_k\}} \sum_{k=1}^{K} \mathcal{L}_{\mathrm{CE},k}(\phi_k; D_k) + \mu_k^{(t)}\, \mathcal{L}_{\text{KD-CA},k} \tag{3}$$
where the CA-guided distillation loss is:
$$\mathcal{L}_{\text{KD-CA},k}^{(t)} = \big(\tau_k^{(t)}\big)^2\, \mathrm{KL}\!\left(\mathrm{softmax}\!\Big(\frac{z_s}{\tau_k^{(t)}}\Big) \,\Big\|\, \mathrm{softmax}\!\Big(\frac{z_t}{\tau_k^{(t)}}\Big)\right) \tag{4}$$
Temperature softening rescales logit gaps and shrinks gradients by roughly $1/\tau^2$ in standard KD; multiplying by $\tau^2$ compensates for this shrinkage so the KD term remains on a scale comparable to $\mathcal{L}_{\mathrm{CE}}$. This follows common KD practice and avoids the KD term vanishing at higher temperatures.
For stability, the adaptive temperature is kept small in practice and numerically clipped to the open interval $(1, 2)$:
$$\tau_k^{(t)} = 1 + \frac{n_k\, \mathcal{L}_{\mathrm{CE},k}^{(t)}}{\sum_{j=1}^{K} n_j\, \mathcal{L}_{\mathrm{CE},j}^{(t)}} \tag{5}$$
We use a narrow, upper-bounded window to provide moderate smoothing that empirically stabilizes KD on clients without over-flattening soft targets. This design is orthogonal to simulated annealing and is chosen for KD stability on heterogeneous data.
The dynamic KD weight $\mu_k^{(t)}$ balances local fitting and global guidance. To avoid circularity, it depends on the previous-round CE:
$$\mu_{\mathrm{lower},k}^{(t)} = \frac{\mathcal{L}_{\mathrm{CE},k}^{(t-1)}}{1 + \mathcal{L}_{\mathrm{CE},k}^{(t-1)}} \tag{6}$$
$$\mu_{\mathrm{upper},k}^{(t)} = \mu_{\mathrm{lower},k}^{(t)} + \frac{n_k}{N}\, \log\!\big(1 + \mathcal{L}_{\mathrm{CE},k}^{(t-1)}\big) \tag{7}$$
We bound the adaptive KD weight using the client’s sample proportion $n_k/N$, preserving the original design and avoiding extra hyperparameters: larger clients are allowed a higher admissible KD influence, while smaller clients remain more regularized; this choice keeps $\mu_k^{(t)}$ scale-free and aligns the weighting with client representativeness.
We then map the KD/CE trade-off to $[\mu_{\mathrm{lower}}, \mu_{\mathrm{upper}}]$ with a boundary-consistent rule:
$$\mu_k^{(t)} = \mu_{\mathrm{lower},k}^{(t)} + \Big(\mu_{\mathrm{upper},k}^{(t)} - \mu_{\mathrm{lower},k}^{(t)}\Big)\, \frac{\mathcal{L}_{\text{KD-CA},k}^{(t)}}{\mathcal{L}_{\text{KD-CA},k}^{(t)} + \mathcal{L}_{\mathrm{CE},k}^{(t-1)} + \varepsilon} \tag{8}$$
Thus $\mathcal{L}_{\text{KD-CA},k}^{(t)} = 0$ implies $\mu_k^{(t)} = \mu_{\mathrm{lower},k}^{(t)}$, and as the KD term dominates, $\mu_k^{(t)} \to \mu_{\mathrm{upper},k}^{(t)}$. Here $\varepsilon$ is a small constant (e.g., $10^{-8}$) that ensures numerical stability. The $\log(1+\cdot)$ in Equation (7) compresses extreme CE values, yielding smoother, numerically stable bounds so that neither CE nor KD can dominate.
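The following is a minimal PyTorch sketch of the adaptive temperature and KD-weight schedule in Equations (4)–(8). The function and variable names (adaptive_temperature, kd_weight, ce_losses, and so on) are illustrative assumptions rather than the authors’ implementation, and the KL direction follows common KD practice.

```python
import torch
import torch.nn.functional as F

def adaptive_temperature(ce_losses, n_samples, k, eps=1e-12):
    # tau_k = 1 + n_k * L_CE,k / sum_j n_j * L_CE,j, clipped to the open interval (1, 2)
    weighted = n_samples * ce_losses          # ce_losses, n_samples: 1-D tensors over clients
    tau = 1.0 + weighted[k] / weighted.sum().clamp_min(eps)
    return tau.clamp(1.0 + 1e-4, 2.0 - 1e-4)

def kd_weight(ce_prev_k, kd_loss_k, n_k, N, eps=1e-8):
    # ce_prev_k, kd_loss_k: scalar tensors; n_k, N: sample counts
    mu_lower = ce_prev_k / (1.0 + ce_prev_k)
    mu_upper = mu_lower + (n_k / N) * torch.log1p(ce_prev_k)
    # boundary-consistent interpolation: mu -> mu_lower when KD loss is 0, -> mu_upper when KD dominates
    ratio = kd_loss_k / (kd_loss_k + ce_prev_k + eps)
    return mu_lower + (mu_upper - mu_lower) * ratio

def kd_ca_loss(student_logits, teacher_logits, tau):
    # temperature-softened KL, rescaled by tau^2 so the KD term stays on the CE scale
    log_p_s = F.log_softmax(student_logits / tau, dim=-1)
    p_t = F.softmax(teacher_logits / tau, dim=-1)
    return (tau ** 2) * F.kl_div(log_p_s, p_t, reduction="batchmean")
```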

3.2. Algorithm

The pFedKA algorithm coordinates client-side personalization and server-side aggregation, as illustrated in Figure 2a (dynamic, on-device distillation with cross-attention) and Figure 2b (GRU-based, adaptive aggregation on the server). Each round alternates between local updates on a sampled client subset and a server update of the global teacher $\theta$, refining both $\theta$ and the client students $\{\phi_k\}$. By integrating CA with the dynamic coefficients $\mu_k^{(t)}$ and $\tau_k^{(t)}$ (Section 3.1) and GRU-gated aggregation on the server, pFedKA addresses the static regularization and feature-mismatch limitations identified earlier.
The pFedKA procedure alternates client-side dynamic distillation (Figure 2a) and server-side GRU-gated aggregation (Figure 2b). At round $t$, the server broadcasts the global teacher $\theta^{(t)}$ to a sampled client subset $C_t$. Each selected client $k$ initializes its student $\phi_k^{(t,0)} \leftarrow \theta^{(t)}$ and then runs local training with CA-guided knowledge adaptation as follows. For an input mini-batch $x \in D_k$, the teacher and student encoders $f_{\mathrm{enc}}^{t}(\cdot)$ and $f_{\mathrm{enc}}^{s}(\cdot)$ produce hidden maps:
$$H_t = f_{\mathrm{enc}}^{t}\big(x; \theta^{(t)}\big) \tag{9}$$
$$H_s^{(t,e-1)} = f_{\mathrm{enc}}^{s}\big(x; \phi_k^{(t,e-1)}\big) \tag{10}$$
To mitigate feature-space mismatch, CA aligns $H_s$ with $H_t$. Using the projection matrices $W_Q$, $W_K$, $W_V$ and key dimension $d_k$:
$$\mathrm{Attn}(H_s, H_t) = \mathrm{softmax}\!\left(\frac{H_s W_Q\, (H_t W_K)^{\top}}{\sqrt{d_k}}\right), \qquad H_{\mathrm{fuse}}^{(t,e)} = \mathrm{Attn}(H_s, H_t)\, H_t W_V \tag{11}$$
Student features $H_s$ reflect local semantics; using them as queries lets the student “pull” only the relevant global components from the teacher’s bank $H_t$ (as keys/values). This asymmetry is intentional: it prevents forcing the teacher to attend to local idiosyncrasies and instead projects global knowledge onto local needs, directly addressing the feature misalignment concern.
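A minimal PyTorch sketch of the cross-attention block in Equation (11) is shown below; the module name, layer sizes, and single-head formulation are illustrative assumptions, while the asymmetric query/key-value roles follow the description above.

```python
import math
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Student features act as queries; teacher features supply keys and values."""
    def __init__(self, dim, d_k=64):
        super().__init__()
        self.w_q = nn.Linear(dim, d_k, bias=False)   # W_Q applied to student features
        self.w_k = nn.Linear(dim, d_k, bias=False)   # W_K applied to teacher features
        self.w_v = nn.Linear(dim, dim, bias=False)   # W_V applied to teacher features
        self.d_k = d_k

    def forward(self, h_s, h_t):
        # h_s, h_t: (batch, tokens, dim) hidden maps from the student / teacher encoders
        scores = self.w_q(h_s) @ self.w_k(h_t).transpose(-2, -1) / math.sqrt(self.d_k)
        attn = scores.softmax(dim=-1)
        # the student pulls relevant global components from the teacher's value bank
        return attn @ self.w_v(h_t)                  # fused representation H_fuse
```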
The fused representation is decoded by the student:
$$z_s^{(t)} = f_{\mathrm{dec}}^{s}\big(H_{\mathrm{fuse}}; \phi_k^{(t,e-1)}\big), \qquad z_t^{(t)} = f_{\mathrm{dec}}^{t}\big(H_t^{(t)}; \theta^{(t)}\big) \tag{12}$$
The teacher’s logits $z_t^{(t)}$ are generated by the global model $\theta^{(t)}$ on the same input and serve as soft targets for KD, while the student logits $z_s^{(t)}$ are the client’s local predictions based on the CA-aligned representation $H_{\mathrm{fuse}}$.
The client then minimizes the round-$t$ objective (consistent with Section 3.1):
$$\mathcal{L}_{\mathrm{client},k}^{(t)} = \mathcal{L}_{\mathrm{CE},k}^{(t)} + \mu_k^{(t)}\, \mathcal{L}_{\text{KD-CA},k}^{(t)} \tag{13}$$
The local update for epoch $e = 1, \dots, E$ is:
$$\phi_k^{(t,e)} = \phi_k^{(t,e-1)} - \eta\, \nabla_{\phi}\, \mathcal{L}_{\mathrm{client},k}^{(t)}\big(\phi_k^{(t,e-1)}; \theta^{(t)}\big) \tag{14}$$
where $\eta$ is the client-side learning rate and $\nabla_{\phi}$ denotes the gradient with respect to the student parameters $\phi$. After local training, client $k$ uploads only $\phi_k^{(t,E)}$, its sample size $n_k$, and a scalar loss summary $\mathcal{L}_{\mathrm{CE},k}^{(t)}$. No intermediate features or logits are transmitted, addressing the privacy-leakage concern raised for KD methods.
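A compact sketch of one local epoch on client $k$ (Equations (12)–(14)) is given below, reusing the helpers from the earlier sketches. The encode/decode methods, loader, and optimizer settings are placeholders assumed for illustration, not the reference implementation.

```python
import torch
import torch.nn.functional as F

def local_epoch(student, teacher, cross_attn, loader, mu_k, tau_k, lr=0.01):
    opt = torch.optim.SGD(student.parameters(), lr=lr)
    teacher.eval()
    ce_sum, n_batches = 0.0, 0
    for x, y in loader:
        with torch.no_grad():
            h_t = teacher.encode(x)          # teacher hidden map H_t
            z_t = teacher.decode(h_t)        # soft targets z_t
        h_s = student.encode(x)              # student hidden map H_s
        h_fuse = cross_attn(h_s, h_t)        # CA-aligned representation H_fuse
        z_s = student.decode(h_fuse)         # student logits z_s
        loss_ce = F.cross_entropy(z_s, y)
        loss = loss_ce + mu_k * kd_ca_loss(z_s, z_t, tau_k)   # Eq. (13)
        opt.zero_grad(); loss.backward(); opt.step()          # Eq. (14)
        ce_sum += loss_ce.item(); n_batches += 1
    # only the student parameters, the sample count, and a scalar CE summary leave the device
    return student.state_dict(), ce_sum / max(n_batches, 1)
```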
On the server, a compact per-client summary vector $u_k^{(t)}$ is constructed internally (e.g., $u_k^{(t)} = f\big(\|\phi_k^{(t,E)} - \theta^{(t)}\|_2,\ \mathcal{L}_{\mathrm{CE},k}^{(t)},\ n_k/N\big)$) and fed, together with the previous hidden state $h_k^{(t-1)}$, into a shared GRU with parameters $\psi$:
$$h_k^{(t)} = \mathrm{GRU}\big(h_k^{(t-1)}, u_k^{(t)}; \psi\big), \qquad q_k^{(t)} = w^{\top} h_k^{(t)} + b \tag{15}$$
which are normalized to attention weights:
$$a_k^{(t)} = \frac{\exp\big(q_k^{(t)}\big)}{\sum_{j \in C_t} \exp\big(q_j^{(t)}\big)} \tag{16}$$
The global teacher is then updated by weighted model averaging:
$$\theta^{(t+1)} = \sum_{k \in C_t} a_k^{(t)}\, \phi_k^{(t,E)} \tag{17}$$
Finally, $\theta^{(t+1)}$ is broadcast for the next round, combining client-side alignment and on-device distillation with server-side history-aware weighting.
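To make the server-side update concrete, here is a minimal sketch of the GRU-gated aggregator (Equations (15)–(17)). The three-feature summary and single-layer scoring head follow the description above; the GRUCell choice, hidden size, and tensor layout are implementation assumptions.

```python
import torch
import torch.nn as nn

class GatedAggregator(nn.Module):
    def __init__(self, in_dim=3, hidden=16):
        super().__init__()
        self.gru = nn.GRUCell(in_dim, hidden)   # shared across clients (parameters psi)
        self.score = nn.Linear(hidden, 1)       # q_k = w^T h_k + b

    def forward(self, u, h_prev):
        # u: (num_clients, 3) summaries; h_prev: (num_clients, hidden) carried across rounds
        h = self.gru(u, h_prev)
        a = self.score(h).squeeze(-1).softmax(dim=0)   # attention weights a_k, summing to one
        return a, h

def aggregate(client_states, weights):
    # weighted model averaging: theta^{(t+1)} = sum_k a_k * phi_k^{(t,E)}
    names = client_states[0].keys()
    return {name: sum(w * s[name].float() for w, s in zip(weights, client_states))
            for name in names}
```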

3.3. Generalization Analysis

We adopt established learning-theoretic results and specialize them to pFedKA without altering the training or communication protocol. For client $k$ with $n_k$ samples, augmenting its empirical risk with the KD penalty (Equation (4)) yields the following standard, temperature-scaled decomposition (cf. KD analyses and classical generalization bounds, e.g., [19], and FedAvg/FedProx aggregation arguments [29,30]):
$$\mathcal{L}_k(\phi_k) \le \hat{\mathcal{L}}_k(\phi_k) + \mu_k^{(t)}\, \mathrm{KL}\!\left(\mathrm{softmax}\!\Big(\frac{z_t}{\tau_k^{(t)}}\Big) \,\Big\|\, \mathrm{softmax}\!\Big(\frac{z_s}{\tau_k^{(t)}}\Big)\right) + O\!\big(n_k^{-1/2}\big) \tag{18}$$
Averaging (Equation (18)) across clients with weights $n_k/N$ (where $N = \sum_k n_k$), as in standard federated averaging analyses [29,30], gives the global form:
$$\sum_{k=1}^{K} \frac{n_k}{N}\, \mathcal{L}_k(\phi_k) \le \sum_{k=1}^{K} \frac{n_k}{N}\, \hat{\mathcal{L}}_k(\phi_k) + \sum_{k=1}^{K} \frac{n_k}{N}\, \mu_k^{(t)}\, \mathrm{KL}\!\left(\mathrm{softmax}\!\Big(\frac{z_t}{\tau_k^{(t)}}\Big) \,\Big\|\, \mathrm{softmax}\!\Big(\frac{z_s}{\tau_k^{(t)}}\Big)\right) + O\!\big(N^{-1/2}\big) \tag{19}$$
Equations (18) and (19) state that pFedKA’s improvement arises from reducing the KD term via CA feature alignment and stabilizing the student/teacher distributional gap with the adaptive temperature $\tau_k^{(t)}$, while preserving the same capacity/concentration scaling as FedAvg/FedProx. Random client sampling with $m$ participants per round retains the same order, with the empirical terms concentrating as $m$ grows [3,29].
The pFedKA algorithm is summarized in Algorithm 1, which outlines the iterative process of client-side training and server-side aggregation.
Algorithm 1 pFedKA
Require: $C = \{1, \dots, K\}$, $\theta^{(1)}$, $T$, $E$, $\eta$
Ensure: $\theta^{(T+1)}$
for $t = 1, \dots, T$ do
  Server samples $C_t \subseteq C$ and broadcasts $\theta^{(t)}$ to all $k \in C_t$;
  for each client $k \in C_t$ in parallel do
    $\phi_k^{(t,0)} \leftarrow \theta^{(t)}$;
    for $e = 1, \dots, E$ do
      $H_s^{(t,e-1)} \leftarrow f_{\mathrm{enc}}^{s}(x; \phi_k^{(t,e-1)})$, $H_t^{(t)} \leftarrow f_{\mathrm{enc}}^{t}(x; \theta^{(t)})$;
      $H_{\mathrm{fuse}}^{(t,e)} \leftarrow \mathrm{softmax}\big(H_s^{(t,e-1)} W_Q (H_t^{(t)} W_K)^{\top} / \sqrt{d_k}\big)\, (H_t^{(t)} W_V)$;
      $z_s^{(t,e)} \leftarrow f_{\mathrm{dec}}^{s}(H_{\mathrm{fuse}}^{(t,e)}; \phi_k^{(t,e-1)})$, $z_t^{(t)} \leftarrow f_{\mathrm{dec}}^{t}(H_t^{(t)}; \theta^{(t)})$;
      Compute $\mathcal{L}_{\mathrm{CE},k}^{(t)}$ on $D_k$ and $\mathcal{L}_{\text{KD-CA},k}^{(t)}$ as in (4); $\mu_k^{(t)}$ via (6)–(8) with $n_k/N$; $\tau_k^{(t)} \in (1, 2)$;
      $\phi_k^{(t,e)} = \phi_k^{(t,e-1)} - \eta\, \nabla_{\phi}\, \mathcal{L}_{\mathrm{client},k}^{(t)}(\phi_k^{(t,e-1)}; \theta^{(t)})$;
    end for; client uploads $\phi_k^{(t,E)}$, $n_k$, $\mathcal{L}_{\mathrm{CE},k}^{(t)}$;
  end for
  Server forms $u_k^{(t)} = \big(\|\phi_k^{(t,E)} - \theta^{(t)}\|_2,\ \mathcal{L}_{\mathrm{CE},k}^{(t)},\ n_k/N\big)$;
  $h_k^{(t)} = \mathrm{GRU}(h_k^{(t-1)}, u_k^{(t)}; \psi)$, $q_k^{(t)} = w^{\top} h_k^{(t)} + b$;
  $\alpha_k^{(t)} = \exp(q_k^{(t)}) / \sum_{j \in C_t} \exp(q_j^{(t)})$, $\theta^{(t+1)} = \sum_{k \in C_t} \alpha_k^{(t)}\, \phi_k^{(t,E)}$;
end for
return $\theta^{(T+1)}$

3.4. Convergence Analysis

We adopt standard FedAvg-family convergence results under nonconvex smooth objectives with partial participation and bounded heterogeneity (e.g., [3,29,30]). Let
$$F(\theta) = \sum_{k=1}^{K} \frac{n_k}{N}\, \mathcal{L}_{\mathrm{CE},k}(\theta; D_k) \tag{20}$$
Assume $F$ is $L$-smooth:
$$\|\nabla F(\theta) - \nabla F(\theta')\| \le L\, \|\theta - \theta'\| \tag{21}$$
and client-gradient divergence is bounded:
$$\mathbb{E}_k\, \big\|\nabla \mathcal{L}_k^{\mathrm{CE}}(\theta) - \nabla F(\theta)\big\|^2 \le \zeta^2 \tag{22}$$
With $m$ clients sampled per round and $E$ local steps of step size $\eta$, canonical analyses yield:
$$\frac{1}{T}\sum_{t=1}^{T} \mathbb{E}\,\big\|\nabla F(\theta^{(t)})\big\|^2 \le O\!\big((mT)^{-1/2}\big) + O\!\big(\eta^2 E^2 \zeta^2\big) + O\!\Big(\frac{\sigma^2}{m}\Big) \tag{23}$$
where $\sigma^2$ bounds the stochastic gradient variance. pFedKA inherits Equation (23) because its local objective (Equation (13)) remains smooth and the protocol matches FedAvg. Moreover, CA (Equation (11)) and temperature-controlled KD (Equation (5)) reduce residual misalignment and prediction discrepancy (i.e., the effective constant multiplying $\zeta^2$ improves), while the GRU produces nonnegative $a_k^{(t)}$ summing to one (Equation (16)), yielding convex combinations of client updates that do not worsen the variance term and, in practice, concentrate on more representative updates. Consequently, the round-averaged expected gradient norm decays at the standard FedAvg-family rate:
$$\frac{1}{T}\sum_{t=1}^{T} \mathbb{E}\,\big\|\nabla F(\theta^{(t)})\big\|^2 = O\!\big(T^{-1/2}\big) \tag{24}$$
with improved constants when alignment and distillation reduce drift—matching the empirical behavior in Section 4 and consistent with proven rates for adaptive federated optimization under heterogeneity, while retaining the communication and privacy posture of FedAvg [29].

4. Experiment

4.1. Experiment Setup and Datasets

To comprehensively evaluate the generalization and personalization capability of pFedKA under heterogeneous federated scenarios, we conduct experiments on three widely used public datasets: Shakespeare for character-level language modelling, and CIFAR-10/100 [48] for visual classification. Shakespeare [49] contains roughly 1.2 million characters extracted from plays and poems; we follow the standard preprocessing in TensorFlow Federated to convert the text into 80-character subsequences and build a 90-character vocabulary. The corpus is naturally divided into 100 clients by treating each speaking role as a user, yielding consecutive yet non-overlapping text segments that preserve the temporal correlation and vocabulary bias across clients.
For CIFAR-10 and CIFAR-100, we simulate statistical heterogeneity by partitioning the training set into 100 clients using a symmetric Dirichlet distribution with concentration parameter $\beta$. Specifically, for each class we draw a 100-dimensional Dirichlet vector and assign the corresponding fraction of class samples to each client. We adopt $\beta = 0.1$ to produce highly skewed label distributions and $\beta = 0.6$ for moderate heterogeneity; the same strategy is applied to both CIFAR-10 and CIFAR-100, giving a total of four image-based non-IID scenarios. All clients use a fixed batch size of 20, perform 5 local epochs per communication round, and the entire federation runs for 1000 rounds with a constant learning rate $\eta = 0.01$. For image tasks we instantiate a ResNet-18 [50] backbone, while for Shakespeare we employ a 2-layer LSTM [51] with 256 hidden units and an 80-dimensional character embedding. No public data, label leakage, or server-side proxy datasets are involved, ensuring a strict FL protocol.
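As an illustration, the following NumPy sketch shows how such a symmetric Dirichlet label-skew partition can be generated; the function name and seeding are assumptions, not the exact script used in the experiments.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=100, beta=0.1, seed=0):
    """Split sample indices across clients with per-class Dirichlet(beta) proportions."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # one num_clients-dimensional Dirichlet vector per class decides each client's share
        proportions = rng.dirichlet(np.full(num_clients, beta))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# smaller beta (e.g., 0.1) concentrates each class on a few clients; beta = 0.6 is milder
```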

4.2. Baseline Methods

In this study, seven representative personalized federated-learning approaches are employed as baselines: FedAvg [29], FedProx [30], Ditto [34], FedRep [32], FedAvgFT [52], Flow [8] and FedCDA [35]. FedAvg and FedProx serve as canonical global and proximal-regularized FL methods, highlighting the gap between standard aggregation and personalized optimization under non-IID data. Ditto and FedRep represent parameter-decoupling and multi-task style personalization, where each client learns a customized local head or objective on top of a shared representation. FedAvgFT reflects a simple yet competitive two-stage strategy that first learns a global model and then performs local fine-tuning. Flow introduces dynamic per-instance routing to better adapt to client heterogeneity, while FedCDA performs cross-round divergence-aware aggregation and thus represents a recent class of adaptive aggregation schemes. Together, these methods span parameter regularization, multi-task personalization, dynamic routing, and adaptive aggregation, providing a comprehensive spectrum for evaluating pFedKA. All experiments adhere to the unified protocol and hyperparameter settings detailed in Section 4.1.

4.3. Experimental Results

The experimental results demonstrate that pFedKA achieves superior personalization accuracy compared to baseline methods across CIFAR-10, CIFAR-100, and Shakespeare (see Figure 3). Detailed comparisons in Table 2 highlight its advantages across diverse tasks and heterogeneity settings.
On CIFAR-10, pFedKA demonstrates significantly faster convergence than all baseline methods under both high and moderate heterogeneity, highlighting its robust adaptability to varying data distributions. On CIFAR-100 with high heterogeneity, pFedKA achieves an accuracy of 44.78%, outperforming all competing methods by a notable margin. However, when the heterogeneity is moderate, FedCDA attains 44.57% while pFedKA achieves 42.11%. In this mild-heterogeneity regime, FedCDA’s cross-round divergence-aware aggregation simply selects historical models whose parameters are closest to the current global state, effectively acting as a lightweight denoiser that proves sufficient when client distributions are already well-aligned and consistent. In contrast, pFedKA’s client-side component alignment and dynamic temperature introduce additional optimization variables, which can potentially be overfit to local fluctuations and thus marginally dilute the global signal. This outlier case exposes a practical limitation: the dual-attention mechanism in pFedKA may become over-parameterized and less effective when heterogeneity is weak, suggesting that future work should explore heterogeneity-aware gating mechanisms to adaptively control model complexity.
For the Shakespeare dataset, pFedKA attains an accuracy of 58.92%, clearly outperforming Flow (56.36%) and exceeding FedAvg by more than 5%. This result demonstrates that the dual-attention design of pFedKA is particularly effective at capturing long-range sequential dependencies, outperforming both static aggregation strategies and pure local fine-tuning approaches.
For training loss, as presented in Figure 4, pFedKA demonstrates consistent reduction across all datasets. On CIFAR-10 with high heterogeneity, pFedKA’s loss decreases steadily from 0.3122 to 0.2238 over 1000 rounds, compared to higher final losses for FedAvg, FedAvgFT, Ditto, FedRep, FedProx, and Flow; notably, FedCDA descends even faster (0.9142 → 0.1580) but exhibits occasional jumps due to historical-model reuse. The smoother loss curve of pFedKA may result from its KD, which enhances feature alignment between the teacher and student models, and reduces variance in comparison to FedAvg’s basic averaging or FedAvgFT’s fine-tuning. On CIFAR-10 with moderate heterogeneity, pFedKA’s loss drops from 0.5023 to 0.2062, lower than the baselines, possibly due to its gated attention, which stabilizes convergence compared to the proximal term in FedProx or the dynamic adjustments in Flow. On CIFAR-100 with high heterogeneity, pFedKA achieves a loss of 0.3922, compared to higher losses observed in other baseline methods; FedCDA initially descends faster but rises around round 500 and ends at 0.2851, slightly above pFedKA’s 0.3922, indicating that its divergence-aware reuse can occasionally inject stale information under severe skew. On CIFAR-100 with moderate heterogeneity, FedCDA again converges fastest and reaches the lowest terminal loss (0.2427), outperforming pFedKA (0.3416), as its historical-model selection acts as an efficient denoiser when client distributions are already well-aligned. For Shakespeare, pFedKA’s loss declines from 0.3697 to 0.2791, compared to higher losses for FedAvg, FedAvgFT, Ditto, FedRep, FedProx, and Flow, while FedCDA records the steepest descent but occasional spikes. The consistent loss reduction across datasets suggests that pFedKA’s combination of cross-entropy optimization, KD, and AM mitigates data heterogeneity challenges, while baseline methods exhibit higher variance.

4.4. Ablation Experiments

We evaluate the contribution of each core component in pFedKA—knowledge distillation (KD), cross-attention (CA), and gated attention (GA)—by ablating them one at a time under identical data partitions, communication rounds, and hyperparameters. Concretely, the w/o KD, w/o CA, and w/o GA variants respectively remove the KD loss, the CA alignment module, and the GRU-based gated aggregation (the latter replaced by static FedAvg aggregation), while keeping all other settings unchanged. This design isolates the effect of each component and allows us to examine not only their individual benefits but also how they interact to stabilize training and improve personalization.
Results on CIFAR-10/100 under high and moderate heterogeneity and on Shakespeare are summarized in Table 3 (Accuracy) and visualized in Figure 5 for clearer comparison across variants.
Across all datasets, pFedKA consistently outperforms its ablations: removing KD produces the largest degradation, especially in high heterogeneity and on Shakespeare, confirming that KL-guided transfer of global knowledge is central for personalization under distribution shift. Removing CA also hurts markedly—most notably on CIFAR-100—indicating that feature-space alignment before decoding is important for fine-grained recognition with non-IID feature drift. Removing GA yields a steady drop (smaller than KD/CA removal), showing that history-aware, data-aware weighting based on client metadata improves robustness over static averaging. These components are complementary: KD provides a transferable soft-target prior, CA aligns global and local representations to reduce mismatch, and GA adaptively emphasizes stable, representative clients across rounds; their synergy explains the performance gap between pFedKA and any single ablation.
To complement accuracy and reflect per-class behavior under non-IID partitions, we report the macro-averaged F1 score (Macro-F1). For a $C$-class, single-label task, we compute a per-class one-vs-rest F1 score and then average across classes:
$$F1_c = \frac{2\, \mathrm{Precision}_c\, \mathrm{Recall}_c}{\mathrm{Precision}_c + \mathrm{Recall}_c}, \qquad \text{Macro-}F1 = \frac{1}{C}\sum_{c=1}^{C} F1_c \tag{25}$$
The formulas for precision and recall are as follows:
$$\mathrm{Precision}_c = \frac{\mathrm{TP}_c}{\mathrm{TP}_c + \mathrm{FP}_c}, \qquad \mathrm{Recall}_c = \frac{\mathrm{TP}_c}{\mathrm{TP}_c + \mathrm{FN}_c} \tag{26}$$
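A minimal NumPy sketch of this metric is shown below; it computes the same quantity as scikit-learn's f1_score(..., average="macro") and is included only to make the one-vs-rest counting explicit.

```python
import numpy as np

def macro_f1(y_true, y_pred, num_classes):
    scores = []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
        recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
        scores.append(f1)
    return float(np.mean(scores))
```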
Table 4 exhibits the same ordering and comparable gaps, reinforcing that each component yields non-redundant gains. KD contributes the largest share of improvement (e.g., CIFAR-100-High Macro-F1: 41.98 → 25.32 when removed), highlighting the necessity of shrinking the teacher–student KL under severe heterogeneity; CA provides the next most substantial gain (e.g., CIFAR-100-High: 41.98 → 27.57), consistent with mitigating feature misalignment prior to decoding; GA offers steady improvements (e.g., CIFAR-10-Moderate: 78.67 → 71.08), reflecting the benefit of history-aware, data-aware weighting that suppresses noisy or outlier updates. Interactions among components are also evident: with KD present, CA aligns richer soft-target structure into the local feature space; with CA present, GA more reliably emphasizes clients whose updates are stable and representative; when KD or CA is absent, the effectiveness of GA diminishes toward static averaging.
The ablation results show that pFedKA’s performance gains stem from three complementary components: (i) KD enables knowledge transfer and improves generalization across heterogeneous clients; (ii) CA aligns feature spaces to reduce non-IID drift; and (iii) GA provides history-aware, data-aware aggregation for more stable cross-round learning. KD accounts for the largest improvement, CA adds significant gains—especially on fine-grained tasks—and GA further refines aggregation. Their combined effect surpasses any single component, confirming both their individual and synergistic contributions.

4.5. Communication Overhead & Privacy Verification

4.5.1. Analysis of Communication Overhead

To quantify the bandwidth cost incurred by the CA module and the GRU-based aggregator, we record the exact payload exchanged between the server and ten randomly sampled clients during the first 100 communication rounds of CIFAR-10 training under Dirichlet heterogeneity   β = 0.1 . Each round begins with the server broadcasting the current global teacher θ ( t ) to every selected client; the client then performs five epochs of CA-guided distillation and uploads its updated student ϕ k ( t , E ) together with two scalars (sample count n k and last-epoch cross-entropy L C E , k ( t ) ). No feature maps, gradients or logits are transmitted at any stage, so the uplink volume is identical to that of FedAvg, while the downlink carries one additional full model copy required by the distillation paradigm. Traffic is captured transparently via tcpdump, aggregated over three independent runs and reported with 95% bootstrap confidence intervals.
Table 5 summarizes communication volumes. All baselines transmit at least one model per round; methods with auxiliary variables or historical models increase traffic. FedAvg and FedProx exchange one model per direction (30.82 MB). Flow adds a small routing vector (0.07 MB), totaling 30.91 MB. FedCDA uploads an extra 128-bit digest per client (30.87 MB). pFedKA transmits 33.47 MB per round—an 8.61% increase—leading to 3.35 GB after 100 rounds and 33.47 GB after 1000 rounds. This ratio remains unchanged for CIFAR-100 and Shakespeare, as payload size is independent of label or vocabulary size.
Thus, the dual-attention design improves absolute accuracy on CIFAR-10 from 63.98% (FedAvg) to 78.45%, a lift of 14.7%, while increasing the total traffic after 1000 rounds by only 8.61%. On CIFAR-10 ($\beta = 0.1$, 100 clients, 5 local epochs, batch size 20), per-round iteration time on one RTX 3080 is 4.37 s for pFedKA vs. 3.98 s for FedAvg (+9.8%), which is negligible against the +8.6% traffic and +14.7% accuracy gain. The extra volume is constant per round and scales linearly with model size rather than client count, so the same ratio applies to cross-device federations with thousands of participants. It should be noted that the present analysis accounts only for payload bytes; wall-clock time per round, the local FLOP overhead of the CA module, and server-side GRU latency were not measured. A full run-time profile will be included in future work. We therefore conclude that pFedKA achieves substantial personalization gains at a communication cost that remains negligible in practice.

4.5.2. Privacy Verification Mechanisms

We evaluate pFedKA under client-level $(\varepsilon, \delta)$-DP [53] with $\delta = 1 \times 10^{-5}$ by integrating DP-SGD [27]. Gradients are clipped to unit $\ell_2$ norm and Gaussian noise $\mathcal{N}(0, \sigma^2 C^2)$ (with $C = 1$) is injected before the CA distillation step; no features or logits are transmitted. Each privacy setting is executed once with a fixed random seed, and the final personalized accuracy after 1000 rounds is reported in Table 6.
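For reference, the following is a hedged PyTorch sketch of the per-sample clipping and Gaussian noising underlying DP-SGD with clip norm $C = 1$ and noise multiplier $\sigma$; it is a manual illustration of the mechanism rather than the authors' implementation, which could equally rely on a library such as Opacus.

```python
import torch

def dp_sgd_step(model, per_sample_grads, sigma=1.0, clip_norm=1.0, lr=0.01):
    # per_sample_grads: list over samples; each entry is a list of tensors matching model.parameters()
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for grads in per_sample_grads:
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)   # clip each sample to l2 norm C
        for acc, g in zip(summed, grads):
            acc.add_(g * scale)
    n = len(per_sample_grads)
    with torch.no_grad():
        for p, acc in zip(model.parameters(), summed):
            # average the clipped gradients and add N(0, sigma^2 * C^2) noise
            noisy = (acc + sigma * clip_norm * torch.randn_like(acc)) / n
            p.add_(noisy, alpha=-lr)
```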
Without noise ($\sigma = 0$, $\varepsilon = \infty$), pFedKA achieves 78.45% on CIFAR-10 and 44.78% on CIFAR-100, while FedAvg obtains 63.98% and 28.11%. At $\sigma = 1$ the privacy budget is $\varepsilon \approx 9.84$ for FedAvg and $\varepsilon \approx 6.45$ for pFedKA; the accuracy drop is markedly smaller for our method (0.31% vs. 2.12% on CIFAR-10). At $\sigma = 2$ ($\varepsilon \approx 3.21$ for FedAvg, $\varepsilon \approx 1.58$ for pFedKA), pFedKA still outperforms FedAvg by 48.9% on CIFAR-10 and 16.4% on CIFAR-100, confirming that the CA-guided alignment reduces effective gradient variance and thus requires less noise for the same privacy budget. We conclude that pFedKA maintains its accuracy advantage even under stringent differential privacy.

5. Conclusions and Future Work

In summary, pFedKA effectively balances global consistency and local personalization through knowledge distillation and dual-attention design. The client-side CA module aligns global and local features to address non-IID data challenges, while the server-side GRU-based gated aggregator dynamically adjusts client updates using metadata such as losses and sample counts. By leveraging historical information across rounds, this approach adapts the aggregation strategy to client-specific behavior and performance, providing more stability in federated learning. Experimental results show that pFedKA achieves superior convergence speed, training stability, and personalization accuracy compared to existing methods, making it a promising solution for federated learning under data heterogeneity. Furthermore, pFedKA provides strong privacy guarantees via DP-SGD integration, maintaining or even improving personalization accuracy compared to FedAvg.
Despite these advantages, pFedKA faces some limitations. The introduction of additional modules increases computational and memory overhead for both clients and servers, and maintaining the GRU’s historical state over many communication rounds requires extra resources. Additionally, the communication cost per round may rise slightly, though the overhead remains modest. Our experiments did not cover highly skewed data distributions or dynamic client participation, leaving these areas unverified and suggesting a key direction for future work.
Future work will focus on addressing these limitations, including exploring additional privacy mechanisms like secure aggregation to further safeguard client data. We also plan to evaluate pFedKA on larger, more diverse datasets, as well as under dynamic federated settings (e.g., time-varying client data or changing client membership), to validate its robustness and scalability in real-world applications.

Author Contributions

Conceptualization, Y.J. and K.Z.; methodology, Y.J. and K.Z.; software, Y.J.; validation, Y.J. and L.Z.; formal analysis, K.Z.; investigation, L.Z.; resources, C.M.; data curation, Y.J.; writing—original draft preparation, Y.J. and X.C.; writing—review and editing, K.Z., C.M. and H.Z.; visualization, X.C.; supervision, K.Z.; project administration, C.M.; funding acquisition, C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 62172123) and the Key Research and Development Program of Heilongjiang (Grant No. 2022ZX01A36).

Data Availability Statement

The original contributions presented in this study are included in the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, H.Y.; Chao, W.L. Fedbe: Making bayesian model ensemble applicable to federated learning. arXiv 2020, arXiv:2009.01974. [Google Scholar]
  2. Karimireddy, S.P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; Suresh, A.T. Scaffold: Stochastic controlled averaging for federated learning. In Proceedings of the 37th International Conference on Machine Learning (ICML 2020), Vienna, Austria, 12–18 July 2020; pp. 5132–5143. [Google Scholar]
  3. Reddi, S.; Charles, Z.; Zaheer, M.; Garrett, Z.; Rush, K.; Konečný, J.; Kumar, S.; McMahan, B. Adaptive federated optimization. arXiv 2020, arXiv:2003.00295. [Google Scholar]
  4. Jiang, Z.; Xu, J.; Zhang, S.; Shen, T.; Li, J.; Kuang, K.; Cai, H.; Wu, F. FedCFA: Alleviating Simpson’s Paradox in Model Aggregation with Counterfactual Federated Learning. arXiv 2024, arXiv:2412.18904. [Google Scholar] [CrossRef]
  5. Chen, Z.; Li, J.; Shen, C. Personalized Federated Learning with Attention-Based Client Selection. In Proceedings of the 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, 14–19 April 2024; pp. 6930–6934. [Google Scholar]
  6. Li, H.; Cai, Z.; Wang, J.; Tang, J.; Ding, W.; Lin, C.-T.; Shi, Y. FedTP: Federated Learning by Transformer Personalization. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 13426–13440. [Google Scholar] [CrossRef]
  7. Marfoq, O.; Neglia, G.; Vidal, R.; Kameni, L. Personalized federated learning through local memorization. In Proceedings of the 39th International Conference on Machine Learning, PMLR 2022, Baltimore, MD, USA, 17–23 July 2022; pp. 15070–15092. [Google Scholar]
  8. Panchal, K.; Choudhary, S.; Parikh, N.; Zhang, L.; Guan, H. Flow: Per-instance personalized federated learning. In Proceedings of the Advances in Neural Information Processing Systems 36 (NeurIPS 2023), New Orleans, LA, USA, 10–16 December 2023. [Google Scholar]
9. Yang, Z.; Zhang, Y.; Zheng, Y.; Tian, X.; Peng, H.; Liu, T.; Han, B. FedFed: Feature distillation against data heterogeneity in federated learning. In Proceedings of the Advances in Neural Information Processing Systems 36 (NeurIPS 2023), New Orleans, LA, USA, 10–16 December 2023. [Google Scholar]
  10. Chen, Z.; Yang, H.H.; Quek, T.; Chong, K.F.E. Spectral co-distillation for personalized federated learning. In Proceedings of the Advances in Neural Information Processing Systems 36 (NeurIPS 2023), New Orleans, LA, USA, 10–16 December 2023; pp. 8757–8773. [Google Scholar]
  11. Zhao, Y.; Liu, Q.; Liu, P.; Liu, X.; He, K. Medical Federated Model With Mixture of Personalized and Shared Components. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 47, 433–449. [Google Scholar] [CrossRef]
  12. Guo, T.; Guo, S.; Wang, J. Pfedprompt: Learning personalized prompt for vision-language models in federated learning. In Proceedings of the ACM Web Conference 2023, Austin, TX, USA, 30 April–4 May 2023; pp. 1364–1374. [Google Scholar]
  13. Wang, J.; Yang, X.; Cui, S.; Che, L.; Lyu, L.; Xu, D.; Ma, F. Towards personalized federated learning via heterogeneous model reassembly. In Proceedings of the Advances in Neural Information Processing Systems 36 (NeurIPS 2023), New Orleans, LA, USA, 10–16 December 2023. [Google Scholar]
  14. Wu, Z.; Sun, S.; Wang, Y.; Liu, M.; Xu, K.; Wang, W.; Jiang, X.; Gao, B.; Lu, J. FedCache: A Knowledge Cache-Driven Federated Learning Architecture for Personalized Edge Intelligence. IEEE Trans. Mob. Comput. 2024, 23, 9368–9382. [Google Scholar] [CrossRef]
  15. Deng, D.; Wu, X.; Zhang, T.; Tang, X.; Du, H.; Kang, J.; Liu, J.; Niyato, D. FedASA: A Personalized Federated Learning With Adaptive Model Aggregation for Heterogeneous Mobile Edge Computing. IEEE Trans. Mob. Comput. 2024, 23, 14787–14802. [Google Scholar] [CrossRef]
  16. Liu, Z.; Lin, W.; Shi, Y.; Zhao, J. A robustly optimized BERT pre-training approach with post-training. In Proceedings of the 20th China National Conference on Chinese Computational Linguistics, Hohhot, China, 13–15 August 2021; Springer International Publishing: Cham, Switzerland, 2021; pp. 471–484. [Google Scholar]
  17. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jegou, H. Training data-efficient image transformers & distillation through attention. In Proceedings of the 38th International Conference on Machine Learning PMLR, Vienna, Austria, 18–24 July 2021; pp. 10347–10357. [Google Scholar]
  18. Hinton, G. Distilling the Knowledge in a Neural Network. arXiv 2015, arXiv:1503.02531. [Google Scholar] [CrossRef]
  19. Chen, J.; Zhang, J. A Data-Free Personalized Federated Learning Algorithm Based on KD. Netinf. Secur. 2024, 24, 1562–1569. [Google Scholar]
  20. Li, D.; Wang, J. Fedmd: Heterogenous federated learning via model distillation. arXiv 2019, arXiv:1910.03581v1. [Google Scholar] [CrossRef]
  21. He, C.; Annavaram, M.; Avestimehr, S. Group knowledge transfer: Federated learning of large cnns at the edge. Adv. Neural Inf. Process. Syst. 2020, 33, 14068–14080. [Google Scholar]
  22. Lin, T.; Kong, L.; Stich, S.U.; Jaggi, M. Ensemble distillation for robust model fusion in federated learning. Adv. Neural Inf. Process. Syst. 2020, 33, 2351–2363. [Google Scholar]
  23. Ji, S.; Pan, S.; Long, G.; Li, X.; Jiang, J.; Huang, Z. Learning private neural language modeling with attentive aggregation. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
  24. Huang, Y.; Chu, L.; Zhou, Z.; Wang, L.; Liu, J.; Pei, J.; Zhan, Y. Personalized cross-silo federated learning on non-iid data. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 2–9 February 2021; AAAI Press: Palo Alto, CA, USA; Volume 35, pp. 7865–7873. [Google Scholar]
  25. Shamsian, A.; Navon, A.; Fetaya, E.; Chechik, G. Personalized federated learning using hypernetworks. In Proceedings of the 38th International Conference on Machine Learning PMLR, Vienna, Austria, 18–24 July 2021; pp. 9489–9502. [Google Scholar]
  26. Deng, Y.; Kamani, M.M.; Mahdavi, M. Adaptive personalized federated learning. arXiv 2020, arXiv:2003.13461v3. [Google Scholar] [CrossRef]
  27. Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep learning with differential privacy. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 308–318. [Google Scholar]
  28. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 1175–1191. [Google Scholar]
29. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Ft. Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  30. Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated optimization in heterogeneous networks. In Proceedings of the Machine Learning and Systems 2 (MLSys 2020), Austin, TX, USA, 2–4 March 2020; Volume 2, pp. 429–450. [Google Scholar]
  31. Fallah, A.; Mokhtari, A.; Ozdaglar, A. Personalized federated learning: A meta-learning approach. arXiv 2020, arXiv:2002.07948. [Google Scholar] [CrossRef]
  32. Collins, L.; Hassani, H.; Mokhtari, A.; Shakkottai, S. Exploiting shared representations for personalized federated learning. In Proceedings of the 38th International Conference on Machine Learning PMLR, Vienna, Austria, 18–24 July 2021; pp. 2089–2099. [Google Scholar]
  33. Wang, H.; Yurochkin, M.; Sun, Y.; Papailiopoulos, D.; Khazaeni, Y. Federated learning with matched averaging. arXiv 2020, arXiv:2002.06440. [Google Scholar] [CrossRef]
  34. Li, T.; Hu, S.; Beirami, A.; Smith, V. Ditto: Fair and robust federated learning through personalization. In Proceedings of the 38th International Conference on Machine Learning PMLR, Vienna, Austria, 18–24 July 2021; pp. 6357–6368. [Google Scholar]
  35. Wang, H.; Xu, H.; Li, Y.; Xu, Y.; Li, R.; Zhang, T. Fedcda: Federated learning with cross-rounds divergence-aware aggregation. In Proceedings of the Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
  36. Li, Q.; He, B.; Song, D. Practical One-Shot Federated Learning for Cross-Silo Setting. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 19–26 August 2021; pp. 1484–1490. [Google Scholar]
  37. Acar, D.A.E.; Zhao, Y.; Navarro, R.M.; Mattina, M.; Whatmough, P.N.; Saligrama, V. Federated learning based on dynamic regularization. In Proceedings of the 9th International Conference on Learning Representations, Vienna, Austria, 4–8 May 2021. [Google Scholar]
  38. Shen, Y.; Zhou, Y.; Yu, L. Cd2-pfed: Cyclic distillation-guided channel decoupling for model personalization in federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 21–24 June 2022; pp. 10041–10050. [Google Scholar]
  39. Xie, L.; Lin, M.; Luan, T.; Li, C.; Fang, Y.; Shen, Q.; Wu, Z. MH-pFLID: Model Heterogeneous personalized Federated Learning via Injection and Distillation for Medical Data Analysis. arXiv 2024, arXiv:2405.06822v1. [Google Scholar]
  40. Chen, H.Y.; Chao, W.L. On bridging generic and personalized federated learning for image classification. arXiv 2021, arXiv:2107.00778v2. [Google Scholar]
  41. Jiang, Y.; Zhao, X.; Li, H.; Xue, Y. A Personalized Federated Learning Method Based on Knowledge Distillation and Differential Privacy. Electronics 2024, 13, 3538. [Google Scholar] [CrossRef]
  42. Feng, H.-Z.; You, Z.; Chen, M.; Zhang, T.; Zhu, M.; Wu, F.; Wu, C.; Chen, W. KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowledge Distillation. In Proceedings of the 38th International Conference on Machine Learning, 18–24 July 2021. [Google Scholar]
  43. Zhang, M.; Sapra, K.; Fidler, S.; Yeung, S.; Alvarez, J.M. Personalized federated learning with first order model optimization. arXiv 2020, arXiv:2012.08565. [Google Scholar]
  44. Kalafatelis, A.S.; Pitsiakou, A.; Nomikos, N.; Tsoulakos, N.; Syriopoulos, T.; Trakadas, P. FLUID: Dynamic Model-Agnostic Federated Learning with Pruning and Knowledge Distillation for Maritime Predictive Maintenance. J. Mar. Sci. Eng. 2025, 13, 1569. [Google Scholar] [CrossRef]
  45. Dinh, C.T.; Tran, N.; Nguyen, J. Personalized federated learning with moreau envelopes. Adv. Neural Inf. Process. Syst. 2020, 33, 21394–21405. [Google Scholar]
  46. Liang, P.P.; Liu, T.; Ziyin, L.; Allen, N.B.; Auerbach, R.P.; Brent, D.; Salakhutdinov, R.; Morency, L.-P. Think locally, act globally: Federated learning with local and global representations. arXiv 2020, arXiv:2001.01523. [Google Scholar] [CrossRef]
  47. Wang, Z.; Yan, F.; Wang, T.; Wang, C.; Shu, Y.; Cheng, P.; Chen, J. Fed-DFA: Federated distillation for heterogeneous model fusion through the adversarial lens. In Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, 25 February–4 March 2025; Volume 39, pp. 21429–21437. [Google Scholar]
  48. Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; University of Toronto: Toronto, ON, Canada, 2009. [Google Scholar]
  49. Shakespeare. TensorFlow Federated Datasets. Available online: https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/shakespeare (accessed on 12 November 2024).
  50. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  51. Graves, A. Supervised Sequence Labelling with Recurrent Neural Networks; Springer Nature: Durham, NC, USA, 2012; pp. 37–45. [Google Scholar]
  52. Collins, L.; Hassani, H.; Mokhtari, A.; Shakkottai, S. Fedavg with fine tuning: Local updates lead to representation learning. Adv. Neural Inf. Process. Syst. 2022, 35, 10572–10586. [Google Scholar]
  53. Dwork, C.; Roth, A. The Algorithmic Foundations of Differential Privacy. Found. Trends® Theor. Comput. Sci. 2013, 9, 211–407. [Google Scholar] [CrossRef]
Figure 1. pFedKA overview: (left) the global teacher model is broadcast to allocated clients, (right) clients perform local training with cross-attention and loss calculation, and (bottom) the server aggregates updated client models to form the new global model.
Figure 2. Application of KD and Dual AM in pFedKA. (a) Client-Side Dynamic Distillation Process. (b) Server-Side Dynamic Aggregation Process.
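To make the client-side dynamic distillation step of Figure 2a concrete, the minimal PyTorch sketch below combines the local cross-entropy loss with a temperature-scaled distillation term. The helper name, the τ² rescaling convention, and the fixed example values of τ and μ are illustrative assumptions; the adaptive per-client schedules for τ_k(t) and μ_k(t) and the cross-attention alignment are part of the paper's method and are not reproduced here.

```python
import torch
import torch.nn.functional as F

def client_distillation_loss(student_logits, teacher_logits, labels, tau=2.0, mu=0.5):
    """Hypothetical helper: L_CE,k + mu_k(t) * L_KD,k with KD temperature tau_k(t)."""
    # Supervised loss on the local batch (L_CE,k).
    ce = F.cross_entropy(student_logits, labels)
    # Soft-target distillation loss (L_KD,k): KL divergence between the softened
    # teacher and student distributions, rescaled by tau^2 as in standard KD.
    kd = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(teacher_logits / tau, dim=1),
        reduction="batchmean",
    ) * (tau ** 2)
    return ce + mu * kd

# Usage inside one local epoch (teacher = global model theta^(t), kept frozen):
#   with torch.no_grad():
#       z_t = teacher(x)
#   z_s = student(x)
#   loss = client_distillation_loss(z_s, z_t, y, tau=tau_k, mu=mu_k)
```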
Figure 3. Learning Curves of pFedKA and Baselines. (a) CIFAR-10 (β = 0.1); (b) CIFAR-10 (β = 0.6); (c) CIFAR-100 (β = 0.1); (d) CIFAR-100 (β = 0.6); (e) Shakespeare.
Figure 4. Loss Curves of pFedKA and Baselines. (a) CIFAR-10 (β = 0.1); (b) CIFAR-10 (β = 0.6); (c) CIFAR-100 (β = 0.1); (d) CIFAR-100 (β = 0.6); (e) Shakespeare.
Figure 5. Ablation Experiment Comparison. (a) CIFAR-10 (β = 0.1); (b) CIFAR-10 (β = 0.6); (c) CIFAR-100 (β = 0.1); (d) CIFAR-100 (β = 0.6); (e) Shakespeare.
Table 1. Notation Table.
Symbol | Description
K | Total number of clients.
C, S(t) | Set of all clients; S(t) ⊆ C denotes the subset selected at round t.
e, E | Local epoch index and total number of local epochs in round t.
ψ | Server-side shared GRU parameters.
D_k, n_k | Local dataset of client k and its size (n_k = |D_k|).
θ(t) | Global model (teacher) parameters at the beginning of round t.
ϕ_k(t) | Client k's local model (student) parameters after local training in round t.
x, y | Input and label.
z_t, z_s | Teacher/student logits computed on device.
L_CE,k(θ; D_k) | Cross-entropy loss of client k evaluated on D_k using parameters θ.
L_KD,k | Distillation loss.
τ_k(t) | KD temperature for client k at round t.
μ_k(t) | Dynamic KD coefficient for client k at round t.
μ_lower, μ_upper | Lower/upper bounds for μ_k(t).
β | Dirichlet concentration parameter used to create static non-IID partitions in experiments.
u_k(t), h_k(t) | Server-side GRU input features and hidden state representing each client's aggregated update history.
q_k(t) | Unnormalized server-side score for client k.
a_k(t) | Aggregation weight for client k, obtained via softmax over the scores.
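Using the notation of Table 1, the server-side dynamic aggregation of Figure 2b can be sketched as follows: a GRU cell with shared parameters ψ maps each selected client's update features u_k(t) and previous hidden state h_k(t−1) to h_k(t), a scorer produces q_k(t), and a softmax over the selected clients yields the aggregation weights a_k(t). The class name, the linear scorer, and the construction of u_k(t) are assumptions made for illustration; the paper's exact feature design is not reproduced here.

```python
import torch
import torch.nn as nn

class GRUAttentionAggregator(nn.Module):
    """Sketch of the server-side attention network: one GRU cell shared across
    clients (parameters psi) plus a linear scorer; each client keeps its own
    hidden state h_k(t) across communication rounds."""

    def __init__(self, feature_dim: int, hidden_dim: int):
        super().__init__()
        self.cell = nn.GRUCell(feature_dim, hidden_dim)  # shared parameters psi
        self.scorer = nn.Linear(hidden_dim, 1)           # produces q_k(t)

    def forward(self, u, h_prev):
        # u:      (num_selected, feature_dim)  per-client update features u_k(t)
        # h_prev: (num_selected, hidden_dim)   hidden states h_k(t-1)
        h = self.cell(u, h_prev)                 # updated hidden states h_k(t)
        q = self.scorer(h).squeeze(-1)           # unnormalized scores q_k(t)
        a = torch.softmax(q, dim=0)              # aggregation weights a_k(t)
        return a, h

def aggregate(client_state_dicts, weights):
    """Weighted average of the selected clients' parameters using a_k(t)."""
    new_global = {}
    for name in client_state_dicts[0]:
        stacked = torch.stack([sd[name].float() for sd in client_state_dicts], dim=0)
        w = weights.view(-1, *([1] * (stacked.dim() - 1)))
        new_global[name] = (w * stacked).sum(dim=0)
    return new_global
```

In a full round, the server would persist h_k across rounds for every client and call aggregate() on the selected clients' state dictionaries, replacing the static averaging used by FedAvg.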
Table 2. Personalized Accuracy Comparison with 7 Baseline Methods across Different Datasets. All values are accuracy (%).
Methods | CIFAR-10 (β = 0.1) | CIFAR-100 (β = 0.1) | CIFAR-10 (β = 0.6) | CIFAR-100 (β = 0.6) | Shakespeare
FedAvg [29] | 63.98 | 28.11 | 71.50 | 30.57 | 53.47
FedAvgFT [52] | 67.23 | 34.12 | 65.42 | 39.34 | 48.19
Ditto [34] | 70.45 | 37.56 | 72.34 | 41.23 | 52.78
FedRep [32] | 69.77 | 36.89 | 68.23 | 31.57 | 51.11
FedProx [30] | 68.11 | 35.09 | 66.25 | 40.17 | 49.53
Flow [8] | 76.21 | 42.56 | 77.47 | 39.88 | 56.36
FedCDA [35] | 62.57 | 39.42 | 75.24 | 44.57 | 52.44
pFedKA (Ours) | 78.45 | 44.78 | 79.67 | 42.11 | 58.92
Table 3. Ablation Experiment Results. All values are accuracy (%).
Methods | CIFAR-10 (β = 0.1) | CIFAR-100 (β = 0.1) | CIFAR-10 (β = 0.6) | CIFAR-100 (β = 0.6) | Shakespeare
pFedKA | 78.45 | 44.78 | 79.67 | 42.11 | 58.92
w/o KD | 65.12 | 28.12 | 62.33 | 30.34 | 40.63
w/o CA | 68.27 | 30.37 | 65.59 | 32.89 | 45.21
w/o GA | 74.09 | 35.67 | 72.08 | 37.92 | 52.32
Table 4. Macro-F1. All values are macro-F1 (%).
Methods | CIFAR-10 (β = 0.1) | CIFAR-100 (β = 0.1) | CIFAR-10 (β = 0.6) | CIFAR-100 (β = 0.6) | Shakespeare
pFedKA | 76.95 | 41.98 | 78.67 | 40.11 | 55.42
w/o KD | 63.62 | 25.32 | 61.33 | 28.34 | 37.13
w/o CA | 66.77 | 27.57 | 64.59 | 30.89 | 41.71
w/o GA | 72.59 | 32.87 | 71.08 | 35.92 | 48.82
Table 5. Average communication volume (CIFAR-10, β = 0.1).
Methods | Per-Round (MB) | 1000-Round (GB)
FedAvg [29] | 30.82 ± 0.03 | 30.82 ± 0.03
FedProx [30] | 30.82 ± 0.03 | 30.82 ± 0.03
Flow [8] | 30.91 ± 0.04 | 30.91 ± 0.04
FedCDA [35] | 30.87 ± 0.03 | 30.87 ± 0.03
pFedKA (Ours) | 33.47 ± 0.05 | 33.47 ± 0.05
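Reading the two columns of Table 5 together: the 1000-round figure is the per-round volume accumulated linearly, assuming decimal units (1 GB = 1000 MB). For example, 30.82 MB/round × 1000 rounds = 30,820 MB ≈ 30.82 GB, which is why each row shows the same numerals in both columns.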
Table 6. Privacy–utility trade-off under (ε, δ)-DP after 1000 rounds.
Methods | δ | σ | ε (CIFAR-10) | Accuracy (%) (CIFAR-10) | ε (CIFAR-100) | Accuracy (%) (CIFAR-100)
FedAvg | 1 × 10⁻⁵ | 0 | - | 63.98 | - | 28.11
FedAvg | 1 × 10⁻⁵ | 1 | 9.84 | 61.86 | 8.12 | 26.74
FedAvg | 1 × 10⁻⁵ | 2 | 3.21 | 58.43 | 2.67 | 24.05
pFedKA (Ours) | 1 × 10⁻⁵ | 0 | - | 78.45 | - | 44.78
pFedKA (Ours) | 1 × 10⁻⁵ | 1 | 6.45 | 78.14 | 7.93 | 43.95
pFedKA (Ours) | 1 × 10⁻⁵ | 2 | 1.58 | 75.62 | 1.65 | 40.47
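The guarantees in Table 6 come from DP-SGD [27]. As a minimal, hedged sketch of that mechanism (not the paper's exact training loop), each per-example gradient is clipped to an L2 norm of at most C and Gaussian noise with standard deviation σC is added before the update; the clipping norm, learning rate, and loop structure below are illustrative assumptions, and the (ε, δ) accounting (e.g., the moments accountant) is omitted.

```python
import torch

def dp_sgd_step(model, loss_fn, xs, ys, clip_norm=1.0, noise_multiplier=1.0, lr=0.01):
    """One illustrative DP-SGD step: per-example clipping + Gaussian noise [27]."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Per-example gradients, each clipped to L2 norm <= clip_norm, then summed.
    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    # Add Gaussian noise calibrated to the clipping norm, then take an SGD step.
    batch_size = len(xs)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.normal(0.0, noise_multiplier * clip_norm, size=p.shape)
            p.add_(-(lr / batch_size) * (s + noise))
```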