Article

Secure Hierarchical Asynchronous Federated Learning with Shuffle Model and Mask–DP

1 School of Computer Science, Hubei University of Technology, No. 28 Nanli Road, Hongshan District, Wuhan 430068, China
2 Hubei Provincial Engineering Research Center for Digital & Intelligent Manufacturing Technologies and Applications, No. 28 Nanli Road, Hongshan District, Wuhan 430068, China
3 Hubei Provincial Key Laboratory of Green Intelligent Computing Power Network, No. 28 Nanli Road, Hongshan District, Wuhan 430068, China
* Author to whom correspondence should be addressed.
Sensors 2026, 26(2), 617; https://doi.org/10.3390/s26020617
Submission received: 24 November 2025 / Revised: 22 December 2025 / Accepted: 10 January 2026 / Published: 16 January 2026

Abstract

Hierarchical asynchronous federated learning (HAFL) accommodates realistic network topologies and enables practical communication and efficient aggregation. However, existing HAFL schemes still struggle to balance privacy preservation and robustness: malicious training nodes may infer private information about other training nodes or poison the global model, undermining the system's robustness. To address these issues, we propose a secure hierarchical asynchronous federated learning (SHAFL) framework. SHAFL organizes training nodes into groups according to their respective gateways. Within each group, training nodes use a mask–DP exchange protocol to prevent inference attacks by gateways and committee nodes, and employ homomorphic encryption (HE) to prevent collusion attacks by other training nodes. Unlike conventional solutions, SHAFL uses removable noise to limit the impact of noise on global model performance, while a shuffle model and subsampling strengthen the privacy guarantees of local models. During global model aggregation, SHAFL considers both model accuracy and communication delay, effectively reducing the impact of malicious and stale models on system performance. Theoretical analysis and experimental evaluations demonstrate that SHAFL outperforms state-of-the-art solutions in terms of convergence, security, robustness, and privacy preservation.
Keywords: federated learning; differential privacy; secure aggregation; consensus mechanism; shuffle model

Share and Cite

MDPI and ACS Style

Chen, Y.; Ai, D.; Yan, L. Secure Hierarchical Asynchronous Federated Learning with Shuffle Model and Mask–DP. Sensors 2026, 26, 617. https://doi.org/10.3390/s26020617

AMA Style

Chen Y, Ai D, Yan L. Secure Hierarchical Asynchronous Federated Learning with Shuffle Model and Mask–DP. Sensors. 2026; 26(2):617. https://doi.org/10.3390/s26020617

Chicago/Turabian Style

Chen, Yonghui, Daxiang Ai, and Linglong Yan. 2026. "Secure Hierarchical Asynchronous Federated Learning with Shuffle Model and Mask–DP" Sensors 26, no. 2: 617. https://doi.org/10.3390/s26020617

APA Style

Chen, Y., Ai, D., & Yan, L. (2026). Secure Hierarchical Asynchronous Federated Learning with Shuffle Model and Mask–DP. Sensors, 26(2), 617. https://doi.org/10.3390/s26020617

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
