Open Access Article
Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness
by Junhui Song 1, Zhangqi Zheng 2, Afei Li 1, Zhixin Xia 1 and Yongshan Liu 1,*
1 School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China
2 School of Mathematics and Information Technology, Hebei Normal University of Science & Technology, Qinhuangdao 066004, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(14), 7843; https://doi.org/10.3390/app15147843
Submission received: 16 June 2025 / Revised: 10 July 2025 / Accepted: 11 July 2025 / Published: 13 July 2025
Abstract
Federated learning (FL) has emerged as a prominent distributed machine learning paradigm that enables collaborative model training across multiple clients while preserving data privacy. Despite its growing adoption in practical applications, performance degradation caused by data heterogeneity, commonly referred to as the non-independent and identically distributed (non-IID) nature of client data, remains a fundamental challenge. To mitigate this issue, a heterogeneity-aware and robust FL framework is proposed to enhance model generalization and stability under non-IID conditions. The proposed approach introduces two key innovations. First, a heterogeneity quantification mechanism based on statistical feature distributions is designed, enabling effective measurement of inter-client data discrepancies; this metric then guides model aggregation through a heterogeneity-aware weighting strategy. Second, a multi-loss optimization scheme is formulated that integrates classification loss, a heterogeneity loss, feature-center alignment, and L2 regularization to improve robustness against distributional shifts during local training. Comprehensive experiments are conducted on four benchmark datasets (CIFAR-10, SVHN, MNIST, and NotMNIST) under Dirichlet-based heterogeneity settings (α = 0.1 and α = 0.5). The results demonstrate that the proposed method consistently outperforms baseline approaches such as FedAvg, FedProx, FedSAM, and FedMOON. Notably, an accuracy improvement of approximately 4.19% over FedSAM is observed on CIFAR-10 (α = 0.5) and a 1.82% gain over FedMOON on SVHN (α = 0.1), along with stable improvements on MNIST and NotMNIST. Furthermore, ablation studies confirm the contribution and necessity of each component in addressing data heterogeneity.
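The abstract only outlines the aggregation mechanism, so the following is a minimal sketch of what heterogeneity-aware weighted aggregation could look like. It assumes the heterogeneity score is the L1 distance between a client's label distribution and the global average, and that aggregation weights combine client data size with an exponential down-weighting of heterogeneous clients; the paper's actual statistical-feature metric and weighting rule are not given in the abstract, so heterogeneity_scores, aggregate, and beta here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def heterogeneity_scores(client_label_dists):
    """L1 distance of each client's label distribution from the global mean (assumed metric)."""
    dists = np.asarray(client_label_dists, dtype=float)
    global_mean = dists.mean(axis=0)
    return np.abs(dists - global_mean).sum(axis=1)

def aggregate(client_weights, client_sizes, scores, beta=1.0):
    """FedAvg-style aggregation that down-weights highly heterogeneous clients."""
    sizes = np.asarray(client_sizes, dtype=float)
    # Hypothetical weighting rule: data-size weight scaled by exp(-beta * score).
    w = sizes * np.exp(-beta * np.asarray(scores))
    w /= w.sum()
    # Weighted average of each parameter tensor across clients.
    return [sum(wi * layer for wi, layer in zip(w, layers))
            for layers in zip(*client_weights)]

# Toy usage: three clients with two-class label distributions and two-layer "models".
dists = [[0.5, 0.5], [0.9, 0.1], [0.1, 0.9]]
models = [[np.ones(4) * i, np.ones(2) * i] for i in range(3)]
print(aggregate(models, [100, 50, 50], heterogeneity_scores(dists)))
```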
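Similarly, a hedged sketch of the multi-loss local objective named in the abstract: classification loss plus heterogeneity, feature-center-alignment, and L2 terms. The exact form of each term and the weighting coefficients are assumptions (here the heterogeneity term pulls local logits toward the global model's outputs, and the alignment term pulls features toward server-provided class centers); TinyNet, lam_het, lam_align, and lam_l2 are hypothetical names introduced only for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Toy model with an explicit feature extractor and classifier head."""
    def __init__(self, in_dim=32, feat_dim=16, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)

def local_loss(model, global_model, centers, x, y,
               lam_het=0.1, lam_align=0.1, lam_l2=1e-4):
    feats = model.features(x)
    logits = model.classifier(feats)
    loss_cls = F.cross_entropy(logits, y)
    with torch.no_grad():
        g_logits = global_model.classifier(global_model.features(x))
    loss_het = F.mse_loss(logits, g_logits)     # heterogeneity term (assumed form)
    loss_align = F.mse_loss(feats, centers[y])  # feature-center alignment (assumed form)
    loss_l2 = sum((p ** 2).sum() for p in model.parameters())
    return loss_cls + lam_het * loss_het + lam_align * loss_align + lam_l2 * loss_l2

# Toy usage with random data and zero-initialized class centers.
model, global_model = TinyNet(), TinyNet()
centers = torch.zeros(10, 16)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
print(local_loss(model, global_model, centers, x, y).item())
```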