SAFE-MED for Privacy-Preserving Federated Learning in IoMT via Adversarial Neural Cryptography
Abstract
1. Introduction
Motivation
- We propose SAFE-MED, a novel federated learning framework that integrates adversarial neural cryptography for privacy preservation in IoMT.
- We design a lightweight, trainable tripartite architecture (encryptor, decryptor, adversary) that eliminates fixed-key cryptographic assumptions.
- We introduce anomaly-aware gradient validation and differential compression to detect malicious clients and reduce communication overhead.
- We conduct comprehensive experiments on real-world medical datasets under adversarial conditions, demonstrating significant improvements in privacy, robustness, and efficiency over state-of-the-art FL methods.
2. Related Work
3. Problem Formulation
3.1. Network Scenario
3.2. System Model
- $F(w)$ is the cumulative loss of the global model across all clients.
- $C_t$ captures the expected communication overhead per training round $t$.
- $\mathcal{L}_{\text{adv}}$ quantifies adversarial leakage, measured as the reconstruction error between the original gradient $g_i^t$ and the adversary's estimate $\hat{g}_i^t$ recovered from the encrypted gradient update $c_i^t$.
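These three criteria can be read as a single optimization target. The following LaTeX sketch is our rendering of the bullets above, with trade-off weights $\lambda_1, \lambda_2$ introduced purely for illustration (they are not the paper's symbols):

```latex
% Sketch of the tri-criterion objective implied by the bullets above.
% lambda_1, lambda_2 are illustrative trade-off weights.
\min_{w,\,\theta_E,\,\theta_D}\;
  F(w) \;+\; \lambda_1\,\mathbb{E}\!\left[C_t\right]
  \;-\; \lambda_2\,\min_{\theta_A}\,
  \mathcal{L}_{\mathrm{adv}}\!\left(\theta_A;\,\theta_E\right)
```

Subtracting the inner minimum means the encryptor is trained so that even the best-responding adversary's reconstruction error stays large, which is exactly the leakage criterion in the third bullet.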
3.3. Threat Modeling
3.3.1. Adversary Types and Assumption Boundaries
1. Passive Adversary (Honest-but-Curious): An entity, such as the cloud aggregator, that correctly follows the federated protocol but attempts to infer sensitive information by analyzing received ciphertext gradients. The passive adversary has access only to encrypted updates and is not permitted access to plaintext gradients.
2. Active Adversary (Byzantine Client): A malicious client that may arbitrarily manipulate or craft its model updates, with the objective of degrading global model performance (poisoning) or extracting information through inference. The active adversary cannot directly compromise the encryption module but may attempt to exploit statistical patterns.
3. Adaptive Adversary: An adversary that evolves strategies across training rounds by observing ciphertext distributions and outcomes. Adaptive adversaries do not have access to encryption/decryption parameters but can attempt gradient reconstruction attacks over time.
4. Insider Adversary (Malicious Fog Node): A compromised fog node may access subsets of encrypted updates but is prevented from observing plaintext gradients by the neural encryption mechanism. SAFE-MED mitigates this risk via trust-weighted aggregation and anomaly detection at both the fog and cloud layers.
5. Coordination Assumption: We assume up to 20% of clients may collude maliciously, but not all fog nodes and the cloud aggregator are simultaneously compromised. This boundary sets the threat model scope for collusion and coordination.
3.3.2. Neural Cryptographic Architecture
- Encoder $E_{\theta_E}$: Implemented as a three-layer MLP with hidden sizes [256, 128, 64]. Each hidden layer is followed by a ReLU activation and batch normalization. The final layer uses a tanh activation to bound ciphertext representations within $[-1, 1]$. Weights are initialized using Xavier uniform initialization.
- Decoder $D_{\theta_D}$: Configured as a symmetric three-layer MLP with hidden sizes [64, 128, 256]. ReLU activations and batch normalization are applied after each hidden layer, while the output layer uses a linear activation to reconstruct the gradient vector.
- Adversary $A_{\theta_A}$: Designed as a two-layer MLP with hidden sizes [256, 128]. Each hidden layer applies a ReLU activation and dropout for regularization. The final output layer is linear, producing gradient reconstructions.
- Training setup: All networks are trained with the Adam optimizer, using mean squared error (MSE) as both the reconstruction and the adversarial loss. A batch size of 256 is used, with early stopping based on validation loss (patience = 10 rounds).
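A minimal PyTorch sketch of the tripartite networks described above. The gradient dimension `d` and ciphertext dimension `d_c` are free parameters here, and the dropout rate (0.2) is a placeholder we chose, since the paper's value did not survive; everything else follows the stated layer sizes, activations, and initialization.

```python
# Sketch of the SAFE-MED encryptor/decryptor/adversary MLPs (assumptions:
# d, d_c dimensions and the 0.2 dropout rate are ours, not the paper's).
import torch
import torch.nn as nn

def xavier_init(module):
    """Xavier-uniform initialization for every linear layer, as stated."""
    if isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)
        nn.init.zeros_(module.bias)

class Encryptor(nn.Module):
    """E: gradient (d) -> ciphertext (d_c), bounded in [-1, 1] by tanh."""
    def __init__(self, d, d_c):
        super().__init__()
        layers, prev = [], d
        for h in (256, 128, 64):  # hidden sizes from Section 3.3.2
            layers += [nn.Linear(prev, h), nn.ReLU(), nn.BatchNorm1d(h)]
            prev = h
        layers += [nn.Linear(prev, d_c), nn.Tanh()]
        self.net = nn.Sequential(*layers)
        self.apply(xavier_init)

    def forward(self, g):
        return self.net(g)

class Decryptor(nn.Module):
    """D: ciphertext (d_c) -> reconstructed gradient (d), linear output."""
    def __init__(self, d_c, d):
        super().__init__()
        layers, prev = [], d_c
        for h in (64, 128, 256):  # symmetric to the encoder
            layers += [nn.Linear(prev, h), nn.ReLU(), nn.BatchNorm1d(h)]
            prev = h
        layers += [nn.Linear(prev, d)]
        self.net = nn.Sequential(*layers)
        self.apply(xavier_init)

    def forward(self, c):
        return self.net(c)

class Adversary(nn.Module):
    """A: ciphertext (d_c) -> gradient estimate (d); dropout rate assumed."""
    def __init__(self, d_c, d, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_c, 256), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(256, 128), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(128, d),
        )
        self.apply(xavier_init)

    def forward(self, c):
        return self.net(c)
```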
4. Proposed SAFE-MED Framework
4.1. System Architecture
4.2. Neural Cryptographic Protocol Design
4.3. Federated Training and Validation Pipeline
4.4. Security and Privacy Analysis
4.5. Optimization Objective
4.5.1. Objective Formulation
4.5.2. Optimization Methodology
4.5.3. Differential Privacy Noise Calibration
4.5.4. Gradient Aggregation with Trust Weighting
4.6. Adversarial Optimization Procedure
Algorithm 1. Adversarial optimization in SAFE-MED.
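Since Algorithm 1 alternates updates between the adversary and the encryptor/decryptor pair, a hedged single-step sketch is given below. It reuses the network classes from the Section 3.3.2 sketch; the weighting `lambda_adv` and the exact alternation schedule are our assumptions, not necessarily the paper's.

```python
# One alternating min-max step over a (batch, d) tensor of flattened
# client gradients. lambda_adv and the schedule are assumptions.
import torch
import torch.nn.functional as F

def adversarial_step(enc, dec, adv, opt_ed, opt_a, grads, lambda_adv=1.0):
    # (1) Adversary step: minimize its reconstruction error on the
    #     ciphertexts it can observe; detach() freezes the encryptor.
    c = enc(grads).detach()
    loss_adv = F.mse_loss(adv(c), grads)
    opt_a.zero_grad()
    loss_adv.backward()
    opt_a.step()

    # (2) Encryptor/decryptor step: keep the legitimate reconstruction
    #     faithful while pushing the (now fixed) adversary's error up.
    c = enc(grads)
    loss_dec = F.mse_loss(dec(c), grads)
    loss_leak = F.mse_loss(adv(c), grads)
    opt_ed.zero_grad()
    (loss_dec - lambda_adv * loss_leak).backward()
    opt_ed.step()
    return loss_dec.item(), loss_adv.item()

# Usage sketch: one Adam optimizer per role, per the training setup above.
# opt_ed = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
# opt_a  = torch.optim.Adam(adv.parameters(), lr=1e-3)
```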
- Local computation cost: $O(d)$ per client, dominated by gradient evaluation.
- Encryption/decryption cost: $O(d)$ per update, due to the linear complexity of the neural modules $E_{\theta_E}$ and $D_{\theta_D}$.
- Adversary update cost: $O(d)$ per gradient.
- Communication cost: $O(d)$ per client, reducible to $O(\rho d)$ under compression ratio $\rho$.
- Aggregation cost: $O(Kd)$ at the server for weighted averaging over $K$ participating clients.
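A quick numeric check of the communication bullet, using a 128-dimensional update vector (as in the deployment table of Section 5.2) and the compression ratio $\rho = 0.25$ from the simulation parameters:

```python
# A d-dimensional float32 update costs 4d bytes on the wire; compression
# at ratio rho keeps rho*d entries. Values taken from the Section 5 tables.
d, rho = 128, 0.25
full_bytes = 4 * d               # 512 bytes per encrypted update
compressed = int(rho * d) * 4    # 128 bytes after compression
print(full_bytes, compressed)    # -> 512 128
```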
4.7. Solution Strategy for Minimax Objectives
5. Experimental Evaluation
5.1. Simulation Parameters and Training Settings
5.2. Deployment Benchmarking on IoMT-Grade Hardware
5.3. Results and Performance Analysis
5.4. Extended Ablation Under Cross-Domain and Noisy Conditions
5.5. Comparison with Hybrid Privacy-Preserving FL Schemes
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Proofs of Theorems
References
- El-Saleh, A.A.; Sheikh, A.M.; Albreem, M.A.; Honnurvali, M.S. The internet of medical things (IoMT): Opportunities and challenges. Wirel. Netw. 2025, 31, 327–344.
- Anurogo, D.; Hidayat, N.A. The Art of Televasculobiomedicine 5.0; Nas Media Pustaka: Yogyakarta, Indonesia, 2023.
- Wani, R.U.Z.; Thabit, F.; Can, O. Security and privacy challenges, issues, and enhancing techniques for Internet of Medical Things: A systematic review. Secur. Priv. 2024, 7, e409.
- Li, Z.; Wang, L.; Chen, G.; Zhang, Z.; Shafiq, M.; Gu, Z. E2EGI: End-to-end gradient inversion in federated learning. IEEE J. Biomed. Health Inform. 2022, 27, 756–767.
- Aziz, R.; Banerjee, S.; Bouzefrane, S.; Le Vinh, T. Exploring homomorphic encryption and differential privacy techniques towards secure federated learning paradigm. Future Internet 2023, 15, 310.
- Mantey, E.A.; Zhou, C.; Anajemba, J.H.; Arthur, J.K.; Hamid, Y.; Chowhan, A.; Otuu, O.O. Federated learning approach for secured medical recommendation in internet of medical things using homomorphic encryption. IEEE J. Biomed. Health Inform. 2024, 28, 3329–3340.
- Wu, Y.T. Neural Networks for Mathematical Reasoning–Evaluations, Capabilities, and Techniques. Ph.D. Thesis, University of Toronto, Toronto, ON, Canada, 2024.
- Manickam, P.; Mariappan, S.A.; Murugesan, S.M.; Hansda, S.; Kaushik, A.; Shinde, R.; Thipperudraswamy, S. Artificial intelligence (AI) and internet of medical things (IoMT) assisted biomedical systems for intelligent healthcare. Biosensors 2022, 12, 562.
- Li, N.; Xu, M.; Li, Q.; Liu, J.; Bao, S.; Li, Y.; Li, J.; Zheng, H. A review of security issues and solutions for precision health in Internet-of-Medical-Things systems. Secur. Saf. 2023, 2, 2022010.
- Rauniyar, A.; Hagos, D.H.; Jha, D.; Håkegård, J.E.; Bagci, U.; Rawat, D.B.; Vlassov, V. Federated learning for medical applications: A taxonomy, current trends, challenges, and future research directions. IEEE Internet Things J. 2023, 11, 7374–7398.
- Kelly, T.; Alhonainy, A.; Rao, P. A Review of Secure Gradient Compression Techniques for Federated Learning in the Internet of Medical Things. In Federated Learning Systems: Towards Privacy-Preserving Distributed AI; Springer: Cham, Switzerland, 2025; pp. 63–85.
- McMahan, H.B.; Xu, Z.; Zhang, Y. A Hassle-free Algorithm for Strong Differential Privacy in Federated Learning Systems. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, Miami, FL, USA, 12–16 November 2024; pp. 842–865.
- Chen, Y.; Qin, X.; Wang, J.; Yu, C.; Gao, W. FedHealth: A federated transfer learning framework for wearable healthcare. IEEE Intell. Syst. 2020, 35, 83–93.
- Zhang, T.; He, C.; Ma, T.; Gao, L.; Ma, M.; Avestimehr, S. Federated learning for internet of things. In Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems, Coimbra, Portugal, 15–17 November 2021; pp. 413–419.
- Qi, P.; Chiaro, D.; Guzzo, A.; Ianni, M.; Fortino, G.; Piccialli, F. Model aggregation techniques in federated learning: A comprehensive survey. Future Gener. Comput. Syst. 2024, 150, 272–293.
- Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 1175–1191.
- Corrigan-Gibbs, H.; Boneh, D. Prio: Private, robust, and scalable computation of aggregate statistics. In Proceedings of the 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17), Boston, MA, USA, 27–29 March 2017; pp. 259–282.
- El Ouadrhiri, A.; Abdelhadi, A. Differential privacy for deep and federated learning: A survey. IEEE Access 2022, 10, 22359–22380.
- Zuraiz, M.; Javed, M.; Abbas, N.; Abbass, W.; Nawaz, W.; Farooqi, A.H. Optimizing Secure and Efficient Data Aggregation in IoMT Using NSGA-II. IEEE Access 2025, 13, 118890–118911.
- Geyer, R.C.; Klein, T.; Nabi, M. Differentially private federated learning: A client level perspective. arXiv 2017, arXiv:1712.07557.
- Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 308–318.
- Abbass, W.; Khan, M.A.; Farooqi, A.H.; Nawaz, W.; Abbas, N.; Ali, Z. Optimizing Spectrum Utilization and Security in SAS-Enabled CBRS Systems for Enhanced 5G Performance. IEEE Access 2024, 12, 165992–166010.
- Bagdasaryan, E.; Veit, A.; Hua, Y.; Estrin, D.; Shmatikov, V. How to backdoor federated learning. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, Online, 26–28 August 2020; pp. 2938–2948.
- Bhagoji, A.N.; Chakraborty, S.; Mittal, P.; Calo, S. Analyzing federated learning through an adversarial lens. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 634–643.
- Blanchard, P.; El Mhamdi, E.M.; Guerraoui, R.; Stainer, J. Machine learning with adversaries: Byzantine tolerant gradient descent. In Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, Long Beach, CA, USA, 4–9 December 2017.
- Yin, D.; Chen, Y.; Kannan, R.; Bartlett, P. Byzantine-robust distributed learning: Towards optimal statistical rates. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 5650–5659.
- Khan, I.A.; Pi, D.; Kamal, S.; Alsuhaibani, M.; Alshammari, B.M. Federated-Boosting: A Distributed and Dynamic Boosting-Powered Cyber-Attack Detection Scheme for Security and Privacy of Consumer IoT. IEEE Trans. Consum. Electron. 2024, 71, 6340–6347.
- Meraouche, I.; Dutta, S.; Tan, H.; Sakurai, K. Neural networks-based cryptography: A survey. IEEE Access 2021, 9, 124727–124740.
- Abadi, M.; Andersen, D.G. Learning to protect communications with adversarial neural cryptography. arXiv 2016, arXiv:1610.06918.
- Rezaee, M.R.; Hamid, N.A.W.A.; Hussin, M.; Zukarnain, Z.A. Comprehensive review of drones collision avoidance schemes: Challenges and open issues. IEEE Trans. Intell. Transp. Syst. 2024, 25, 6397–6426.
- Gilad-Bachrach, R.; Dowlin, N.; Laine, K.; Lauter, K.; Naehrig, M.; Wernsing, J. CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy. In Proceedings of the 33rd International Conference on Machine Learning (ICML), New York, NY, USA, 20–22 June 2016; pp. 201–210.
- Hesamifard, E.; Takabi, H.; Ghasemi, M. Privacy-Preserving Machine Learning as a Service. Proc. Priv. Enhancing Technol. 2018, 2018, 123–142.
- Jiang, L.; Tan, R.; Lou, X.; Lin, G. On Lightweight Privacy-Preserving Collaborative Learning for Internet-of-Things Objects. In Proceedings of the 2019 International Conference on Internet of Things Design and Implementation (IoTDI '19), Montreal, QC, Canada, 15–18 April 2019; ACM: New York, NY, USA, 2019; pp. 70–81.
- Truex, S.; Liu, L.; Chow, K.H.; Gursoy, M.E.; Wei, W. Hybrid Differential Privacy and Secure Aggregation for Federated Learning. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security (AISec), London, UK, 15 November 2019.
- Xie, C.; Koyejo, S.; Gupta, I. Asynchronous federated optimization. arXiv 2019, arXiv:1903.03934.
- Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated optimization in heterogeneous networks. Proc. Mach. Learn. Syst. 2020, 2, 429–450.
- Karimireddy, S.P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; Suresh, A.T. Scaffold: Stochastic controlled averaging for federated learning. In Proceedings of the 37th International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 5132–5143.
- Zhang, X.; Zeng, Z.; Zhou, X.; Niyato, D.; Shen, Z. Communication-Efficient Federated Knowledge Graph Embedding with Entity-Wise Top-K Sparsification. arXiv 2024, arXiv:2406.13225.
- Acar, A.; Aksu, H.; Uluagac, A.S.; Conti, M. A survey on homomorphic encryption schemes: Theory and implementation. ACM Comput. Surv. (CSUR) 2018, 51, 1–35.
- Ranaldi, L.; Gerardi, M.; Fallucchi, F. Crypto net: Using auto-regressive multi-layer artificial neural networks to predict financial time series. Information 2022, 13, 524.
- Zhao, J.; Huang, C.; Wang, W.; Xie, R.; Dong, R.; Matwin, S. Local differentially private federated learning with homomorphic encryption. J. Supercomput. 2023, 79, 19365–19395.
- Nam, Y.; Moitra, A.; Venkatesha, Y.; Yu, X.; De Micheli, G.; Wang, X.; Zhou, M.; Vega, A.; Panda, P.; Rosing, T. Rhychee-FL: Robust and Efficient Hyperdimensional Federated Learning with Homomorphic Encryption. In Proceedings of the 2025 Design, Automation & Test in Europe Conference (DATE), Lyon, France, 31 March–2 April 2025; pp. 1–7.
- Muñoz, A.; Ríos, R.; Román, R.; López, J. A survey on the (in)security of trusted execution environments. Comput. Secur. 2023, 129, 103180.
- Zhang, L.; Duan, B.; Li, J.; Ma, Z.; Cao, X. A TEE-based federated privacy protection method: Proposal and implementation. Appl. Sci. 2024, 14, 3533.
- Cao, Y.; Zhang, J.; Zhao, Y.; Su, P.; Huang, H. SRFL: A Secure & Robust Federated Learning framework for IoT with trusted execution environments. Expert Syst. Appl. 2024, 239, 122410.
- Wang, J.; Qi, H.; Rawat, A.S.; Reddi, S.J.; Waghmare, S.; Yu, F.X.; Joshi, G. FedLite: A Scalable Approach for Federated Learning on Resource-constrained Clients. arXiv 2022, arXiv:2201.11865.
- He, C.; Li, S.; So, J.; Zhang, M.; Wang, H.; Wang, X.; Vepakomma, P.; Singh, A.; Qiu, H.; Shen, L.; et al. FedML: A Research Library and Benchmark for Federated Machine Learning. arXiv 2020, arXiv:2007.13518.
- Qi, S.; Ramakrishnan, K.; Lee, M. LIFL: A Lightweight, Event-driven Serverless Platform for Federated Learning. Proc. Mach. Learn. Syst. 2024, 6, 408–425.
- Nasirigerdeh, R.; Torkzadehmahani, R.; Matschinske, J.O.; Baumbach, J.; Rueckert, D.; Kaissis, G. HyFed: A Hybrid Federated Framework for Privacy-preserving Machine Learning. arXiv 2021, arXiv:2105.10545.
- Fu, J.; Hong, Y.; Ling, X.; Wang, L.; Ran, X.; Sun, Z.; Wang, W.H.; Chen, Z.; Cao, Y. Differentially private federated learning: A systematic review. arXiv 2024, arXiv:2405.08299.
- Zhang, C.; Shan, G.; Roh, B.H. Fair federated learning for multi-task 6G NWDAF network anomaly detection. IEEE Trans. Intell. Transp. Syst. 2024, early access.
- Wu, D. Distributed Machine Learning on Edge Computing Systems. Ph.D. Thesis, The University of St Andrews, St Andrews, UK, 2024.
- Acar, D.A.E.; Zhao, Y.; Navarro, R.M.; Mattina, M.; Whatmough, P.N.; Saligrama, V. Federated Learning Based on Dynamic Regularization. arXiv 2021, arXiv:2111.04263.
- Fredrikson, M.; Lantz, E.; Jha, S.; Lin, S.; Page, D.; Ristenpart, T. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing. In Proceedings of the 23rd USENIX Security Symposium, San Diego, CA, USA, 20–22 August 2014; pp. 17–32.
- Javadpour, A.; Ja'fari, F.; Taleb, T.; Shojafar, M.; Benzaïd, C. A comprehensive survey on cyber deception techniques to improve honeypot performance. Comput. Secur. 2024, 140, 103792.
- Detrano, R.; Janosi, A.; Steinbrunn, W.; Pfisterer, M.; Schmid, J.-J.; Sandhu, S.; Guppy, K.H.; Lee, S.; Froelicher, V. Cleveland Heart Disease Dataset. UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/ml/datasets/Heart+Disease (accessed on 8 September 2025).
- Moody, G.B.; Mark, R.G. The Impact of the MIT-BIH Arrhythmia Database. IEEE Eng. Med. Biol. Mag. 2001, 20, 45–50.
- Goldberger, A.L.; Amaral, L.A.N.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.-K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 2000, 101, e215–e220.
Approach | Privacy/Robustness Mechanism | Limitations in IoMT | SAFE-MED Advantages |
---|---|---|---|
Federated learning extensions (FedHealth [13], FedIoT [14]) | Personalization, scheduling | Lack strong privacy guarantees, vulnerable to adversarial compromise | Embeds neural encryption into FL pipeline, robust against inference/poisoning |
Secure aggregation [16,17] | Homomorphic encryption, multiparty computation | High communication/computation cost, fragile under device churn | Lightweight neural encryption with lower overhead and better scalability |
Differential privacy [20,21] | Noise addition to gradients | Accuracy degradation on sensitive medical data, no adversarial defense | Preserves accuracy while preventing inference without noise injection |
Robust aggregation (Krum [25], trimmed mean [26]) | Outlier filtering | Assumes majority of honest clients, fails against stealthy attacks | Uses anomaly-aware validation + trust-weighted aggregation |
Neural cryptography [28,29] | Learned encryption functions | Mostly toy settings, not integrated into FL, ignores heterogeneity | Production-ready, scalable to IoMT devices with quantization support |
Homomorphic encryption (CryptoNet [40], HEFL [41]) | Computation on encrypted updates | Computationally prohibitive, high latency, poor compatibility with non-linear ops | Efficient neural encryption with sub-ms latency, compatible with deep models |
TEE-based FL [44] | Hardware enclaves (Intel SGX, ARM TrustZone) | Requires special hardware, side-channel vulnerabilities, no protection in transit | Platform-agnostic, fully software-based, end-to-end encryption |
Lightweight FL (FedLite, FedML, LIFL) | Model pruning, compression | Reduce cost but lack encryption; updates remain exposed | Combines quantization-aware encryption + compression for secure lightweight FL |
Hybrid approaches (DP-AggrFL, HyFed) | Combination of DP, HE, secure aggregation | High complexity, poor scalability, not adaptive to adversaries | Adversarially trained neural encryption adapts to diverse threats |
SAFE-MED (Proposed) | Neural cryptography + anomaly detection + trust-weighted aggregation | – | Lightweight, adaptive, adversarially robust, IoMT-ready framework |
Symbol | Meaning |
---|---|
$N$ | Total number of IoMT clients |
$\mathcal{S}_t$ | Set of selected clients in round $t$ |
$K$ | Number of participating clients per round |
$n_i$ | Number of local data samples on client $i$ |
$d$ | Dimension of model gradient/update vector |
$w^t$ | Global model parameters |
$g_i^t$ | Local gradient of client $i$ at round $t$ |
$F(w)$ | Global federated objective function |
$\ell(w; x_j)$ | Loss function on data sample $x_j$ |
$\tau_{\max}$ | Maximum staleness for async updates |
$\alpha_i^t$ | Trust weight of client $i$ at round $t$ |
$a_i$ | Historical anomaly count of client $i$ |
$\tilde{c}_i^t$ | Compressed encrypted gradient, with compression ratio $\rho$ |
$p$ | Original gradient dimension |
$g_i^t \in \mathbb{R}^p$ | Local gradient of client $i$ at round $t$, dimension $p$ |
$c_i^t$ | Encrypted gradient from client $i$ at round $t$ |
$E_{\theta_E}$ | Neural encryptor with parameters $\theta_E$ |
$D_{\theta_D}$ | Neural decryptor with parameters $\theta_D$ |
$A_{\theta_A}$ | Adversary model with parameters $\theta_A$ |
$\mathcal{L}_{\text{dec}}$ | Decryption loss (reconstruction error) |
$\mathcal{L}_{\text{adv}}$ | Adversarial loss (gradient recovery) |
$\eta_E, \eta_D, \eta_A$ | Learning rates for the encryptor, decryptor, and adversary modules |
$\rho$ | Gradient compression ratio |
$\sigma$ | Std. deviation of Gaussian DP noise |
$C$ | Gradient clipping threshold |
$d_c$ | Dimension of encrypted gradient representation |
$\rho$ | Compression ratio applied to encrypted gradient |
$B$ | Local batch size |
$c_i^t = E_{\theta_E}(g_i^t)$ | Encrypted gradient representation after neural encryption |
$R$ | Total communication rounds |
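Using this notation, the two training losses from Section 3.3.2, both stated there to be MSE, can be written as follows; a sketch, assuming squared $\ell_2$ distance as the MSE instantiation:

```latex
% Decryption (reconstruction) and adversarial (gradient-recovery) losses,
% both instantiated as MSE per Section 3.3.2.
\mathcal{L}_{\mathrm{dec}}
  = \bigl\| g_i^t - D_{\theta_D}\!\bigl(E_{\theta_E}(g_i^t)\bigr) \bigr\|_2^2,
\qquad
\mathcal{L}_{\mathrm{adv}}
  = \bigl\| g_i^t - A_{\theta_A}\!\bigl(E_{\theta_E}(g_i^t)\bigr) \bigr\|_2^2 .
```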
Parameter | Value/Description |
---|---|
Number of IoMT clients | 100 |
Number of fog nodes | 10 |
Cloud server | Single-threaded aggregator |
Client selection per round | 10% (random, varies each round) |
Communication rounds | 150 |
Local epochs per client | 2 |
Local batch size | 16 |
Gradient dimension d | Depends on dataset |
Encryptor/Decryptor architecture | 3-layer MLP, 128 hidden units |
Adversary architecture | 3-layer MLP, sigmoid output |
Learning rates | $\eta_E$, $\eta_D$, $\eta_A$ (one per module) |
Optimizer | Adam (all networks) |
Differential privacy noise | Gaussian, std. deviation $\sigma$ |
Gradient clipping threshold | $C$ |
Compression ratio | 0.25 |
Staleness threshold | 5 communication rounds |
Trust weight decay | |
Client compute capacity | 0.5–2.0 GHz (sampled per client) |
Client bandwidth profile | 100–500 kbps (sampled per round) |
Client availability (stress-test) | Up to 20% churn (intermittent dropout) |
Attack intensity (stress-test) | Increased from 10% to 30% over time |
Hardware configuration | 4× NVIDIA RTX A6000 GPUs, 128 GB RAM, Ryzen 7950X |
Frameworks used | PyTorch, TensorFlow Federated, PySyft |
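For reproducibility, the settings in the table above can be collected into a single configuration object. A minimal sketch follows; the key names are ours, and entries whose numeric values the paper's table does not state (learning rates, DP $\sigma$, clipping $C$, trust decay) are deliberately left as `None` rather than guessed.

```python
# Simulation settings from Section 5.1 as a plain config dict (key names
# are our own; None marks values not stated in the table above).
SAFE_MED_CONFIG = {
    "num_clients": 100,
    "num_fog_nodes": 10,
    "clients_per_round": 0.10,      # 10% random selection per round
    "rounds": 150,
    "local_epochs": 2,
    "batch_size": 16,
    "optimizer": "adam",            # all networks
    "dp_noise": {"type": "gaussian", "sigma": None},
    "clip_threshold": None,
    "compression_ratio": 0.25,
    "staleness_threshold": 5,       # communication rounds
    "trust_weight_decay": None,
    "client_cpu_ghz": (0.5, 2.0),   # sampled per client
    "client_bw_kbps": (100, 500),   # sampled per round
    "max_churn": 0.20,              # stress-test dropout fraction
}
```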
Module | Latency (ms) | Peak SRAM (KB) | Notes |
---|---|---|---|
Adversarial Neural Cryptography (ANC) | 0.42 | 19.3 | 128-dim update vector |
Anomaly Detection (z-score) | 0.18 | 8.6 | Statistical filtering |
Trust-Weighted Aggregation | 0.05 | 6.1 | Lightweight vector ops |
Compression (Top-k sparsification) | 0.37 | 13.2 | k = 0.1 fraction |
Total SAFE-MED | 1.02 | 47.2 | Fits within 64 KB SRAM |
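The z-score anomaly filter and trust-weighted aggregation benchmarked above can be sketched as a single server-side routine. This is a minimal NumPy illustration under our own assumptions: the multiplicative trust-decay penalty and the z-score threshold of 3.0 are placeholders, not the paper's exact rule.

```python
# Sketch of z-score anomaly filtering + trust-weighted aggregation
# (threshold and decay rule are illustrative assumptions).
import numpy as np

def aggregate(updates, trust, z_thresh=3.0, decay=0.9):
    """updates: (K, d) decrypted client updates; trust: (K,) weights."""
    norms = np.linalg.norm(updates, axis=1)
    z = (norms - norms.mean()) / (norms.std() + 1e-8)
    ok = np.abs(z) <= z_thresh                   # flag outlier updates
    trust = np.where(ok, trust, trust * decay)   # penalize flagged clients
    w = trust * ok                               # exclude anomalies this round
    w = w / (w.sum() + 1e-8)
    global_update = (w[:, None] * updates).sum(axis=0)
    return global_update, trust
```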
Method | Accuracy (%) | Gradient Leakage (%) | Communication Cost (MB) | Poison Resilience (%) |
---|---|---|---|---|
FedAvg | 95.1 | 100.0 | 1.00 | 75.3 |
FL + DP | 91.3 | 65.0 | 0.82 | 81.4 |
FL + HE | 90.8 | 18.0 | 1.72 | 84.0 |
NeuralCrypto-FL | 92.5 | 45.3 | 0.91 | 86.2 |
SAFE-MED | 94.2 | 15.3 | 0.58 | 90.6 |
Method | 0% Attack | 10% Attack | 20% Attack | 30% Attack | 40% Attack |
---|---|---|---|---|---|
FedAvg | 95.1 | 87.5 | 75.3 | 61.2 | 49.8 |
FL + DP | 91.3 | 88.4 | 81.4 | 73.7 | 66.5 |
FL + HE | 90.8 | 88.0 | 84.0 | 77.5 | 69.9 |
NeuralCrypto-FL | 92.5 | 90.3 | 86.2 | 79.8 | 72.1 |
SAFE-MED | 94.2 | 92.8 | 90.6 | 87.9 | 84.5 |
Dataset | Scenario | Accuracy (%) | Leakage Reduction (%) |
---|---|---|---|
MIT-BIH ECG | Clean | 91.3 | 85.4 |
MIT-BIH ECG | 20% Label Corruption | 87.1 | 79.6 |
PhysioNet Resp. | Noisy Signals | 80.5 | 78.2 |
PathMNIST Images | Cross-domain | 83.7 | 81.1 |
Configuration | Accuracy (%) | Leakage (%) | Poison Resilience (%) | Comm. Cost (MB) |
---|---|---|---|---|
SAFE-MED (Full) | 87.4 | 14.6 | 90.2 | 0.58 |
w/o Neural Cryptography | 82.5 | 41.2 | 82.6 | 0.60 |
w/o Adversarial Training | 84.1 | 27.8 | 85.1 | 0.58 |
w/o Anomaly Detection | 85.2 | 14.3 | 78.4 | 0.58 |
w/o Trust Aggregation | 85.6 | 13.8 | 81.2 | 0.58 |
Configuration | Accuracy (%) | Leakage (%) | Poison Resilience (%) | Comm. Cost (MB) |
---|---|---|---|---|
SAFE-MED (Full) | 94.2 | 15.3 | 90.6 | 0.58 |
w/o Neural Cryptography | 89.7 | 44.5 | 81.3 | 0.60 |
w/o Adversarial Training | 91.4 | 29.5 | 85.0 | 0.58 |
w/o Anomaly Detection | 92.1 | 15.2 | 78.9 | 0.58 |
w/o Trust Aggregation | 92.5 | 14.8 | 82.1 | 0.58 |
Configuration | Accuracy (%) | Leakage (%) | Poison Resilience (%) | Comm. Cost (MB) |
---|---|---|---|---|
SAFE-MED (Full) | 91.3 | 13.9 | 89.8 | 0.58 |
w/o Neural Cryptography | 86.1 | 39.7 | 80.4 | 0.60 |
w/o Adversarial Training | 88.3 | 28.1 | 84.6 | 0.58 |
w/o Anomaly Detection | 88.0 | 13.5 | 77.3 | 0.58 |
w/o Trust Aggregation | 88.5 | 12.9 | 80.7 | 0.58 |
Scenario (Full SAFE-MED) | Accuracy (%) | Leakage (%) | Poison Resilience (%) |
---|---|---|---|
PathMNIST (Cross-domain, Images) | 83.7 ± 0.6 | 18.9 ± 0.5 | 86.2 ± 0.7 |
MIT-BIH (20% Label Corruption) | 87.1 ± 0.5 | 21.3 ± 0.6 | 84.5 ± 0.6 |
PhysioNet (Noisy Signals) | 80.5 ± 0.6 | 22.1 ± 0.5 | 82.4 ± 0.7 |
# Clients | Accuracy (%) | Comm. Cost (MB) | Rounds to Converge | Time per Round (s) |
---|---|---|---|---|
20 | 95.0 | 0.12 | 80 | 3.1 |
50 | 94.6 | 0.28 | 100 | 3.4 |
100 | 94.2 | 0.58 | 120 | 3.8 |
200 | 93.8 | 1.02 | 135 | 4.6 |
Dataset | Reconstruction Loss | Adversarial Loss | Validation Loss |
---|---|---|---|
Cleveland | 0.032 | 0.487 | 0.221 |
MIT-BIH | 0.019 | 0.456 | 0.204 |
PhysioNet | 0.025 | 0.472 | 0.211 |
Scenario | Poisoning Success Rate (%) | Anomaly Detection TPR (%) | FPR (%) |
---|---|---|---|
Semi-trustworthy fog (baseline) | 5.8 | 93.4 | 6.1 |
Fully compromised fog (plaintext leakage) | 9.4 | 91.2 | 7.6 |
Method | Accuracy (%) | Leakage (%) | Poison Resilience (%) | Comm. Cost (MB) |
---|---|---|---|---|
FedAvg (Plaintext) | 89.3 | 72.1 | 55.4 | 0.58 |
HE + DP | 86.1 | 12.5 | 83.6 | 1.45 |
DP + Secure Agg. | 84.7 | 19.7 | 81.4 | 0.92 |
SAFE-MED (Proposed) | 87.4 | 14.6 | 90.2 | 0.58 |
Method | SSIM | PSNR (dB) | MIA Accuracy (%) |
---|---|---|---|
FedAvg (Plaintext) | 0.70 | 28.4 | 82.6 |
DP-only | 0.38 | 19.2 | 61.5 |
HE + DP | 0.25 | 14.7 | 54.3 |
SAFE-MED (Proposed) | 0.14 | 10.9 | 47.8 |