Sensitivity-Aware Differential Privacy for Federated Medical Imaging
Abstract
1. Introduction
- We propose a novel sensitivity-aware differential privacy (SDP) notion that evaluates data sensitivity based on privacy risks under adversarial attacks, ensuring customized protection for individual data samples.
- To realize this notion, we design a privacy defense mechanism to counter the gradient inversion attacks encountered during federated medical imaging training. Our approach provides stronger protection for data with higher privacy risks, effectively balancing privacy and utility.
- We demonstrate the natural scalability of our approach in handling multiple attack scenarios. By refining the sensitivity function, our method effectively quantifies comprehensive privacy risks and provides adaptive protection in complex threat environments.
- Theoretical analysis and experimental results validate the effectiveness of our method. Furthermore, while this work focuses on federated medical imaging, the proposed SDP notion is broadly applicable to other domains where data samples exhibit varying levels of sensitivity to adversarial attacks.
2. Related Work
2.1. Gradient Inversion Attacks on Medical Data
2.2. Differential Privacy for Medical Data Protection
3. Sensitivity-Aware Differential Privacy
- Privacy guarantee: for each client k, the mechanism must satisfy ε_k-differential privacy (the standard form of this guarantee is sketched after this list).
- Utility preservation: under the given privacy constraints, the mechanism should minimize the negative impact of the added noise on model utility, thereby maintaining high global model performance.
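For reference, the per-client guarantee above can be written in the standard form of differential privacy. The following is a sketch of the usual definition, with ε_k denoting client k's budget; the notation here is ours and is not reproduced from the paper's numbered equations.

```latex
% A randomized mechanism M_k satisfies eps_k-differential privacy if, for all
% neighboring datasets D and D' (differing in a single record) and every
% measurable set of outputs S:
\Pr\!\left[\mathcal{M}_k(D) \in S\right] \;\le\; e^{\varepsilon_k}\,\Pr\!\left[\mathcal{M}_k(D') \in S\right].
```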
4. Sensitivity-Aware Privacy Mechanism
Algorithm 1 Sensitivity-aware privacy mechanism.
Require: ε: the total privacy budget; n_k: number of images for client k; D_k^t: input data for user k in round t; C: gradient norm bound.
Ensure: Model parameters θ.
1: for i = 1 to n_k do
2:  Reconstruct data x̂_i using GIA.
3:  Calculate SSIM(x_i, x̂_i) for each image based on Equation (4).
4: end for
5: for i = 1 to n_k do
6:  Calculate the sensitivity of each image based on Equation (5).
7:  Allocate privacy budget ε_i according to Equation (6).
8: end for
9: θ ← LocalTraining(D_k^t, {ε_i}, C).
10: …
11: Return θ to the server.
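The following Python sketch illustrates the per-image flow of Algorithm 1 for a single client. It is not the paper's implementation: `gia_reconstruct` and `ssim` are hypothetical stand-ins for the GIA reconstruction and the SSIM of Equation (4), and the sensitivity and budget-allocation formulas below are simple placeholders for Equations (5) and (6), chosen only to show that riskier images receive smaller per-image budgets.

```python
import numpy as np

def sensitivity_aware_budgets(images, grads, total_eps, gia_reconstruct, ssim):
    """Sketch of Algorithm 1 for one client (illustrative formulas only)."""
    # Steps 1-4: simulate the attack and score each image by how well it can
    # be reconstructed from its gradient (higher SSIM = higher privacy risk).
    scores = np.array([ssim(x, gia_reconstruct(g)) for x, g in zip(images, grads)])

    # Steps 5-6: map reconstruction quality to a per-image sensitivity
    # (placeholder for Equation (5)).
    sens = scores / (scores.sum() + 1e-8)

    # Step 7: split the total budget so that more sensitive images receive a
    # smaller epsilon, i.e. stronger protection (placeholder for Equation (6)).
    weights = (1.0 - sens) + 1e-8
    return total_eps * weights / weights.sum()
```

Each client would then run differentially private local training with these per-image budgets and the gradient norm bound C, and return the noised model parameters to the server, as in steps 9-11.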
4.1. Gradient Inversion Attack in Medical Imaging
- Dummy data initialization: The FL server, acting as an adversary, begins by randomly generating a pair of dummy input data and a corresponding label. These dummy values are iteratively refined to match the gradients of the original training data.
- Gradient capture: The server collects the gradient updates submitted by honest clients during each training iteration. Since FL operates in a distributed manner, clients compute gradients locally on their private datasets before sending them to the server. These gradients contain valuable information about the underlying training samples, making them susceptible to reconstruction.
- Gradient-matching optimization: The adversary then employs an optimization process that iteratively adjusts the dummy data and labels to minimize the difference between the true gradients (received from clients) and the gradients computed from the dummy data. The most common measure of this difference is the squared Euclidean (L2) distance between the dummy gradients and the true gradients; a minimal sketch of the resulting gradient-matching loop is given after this list.
- Iterative refinement: The gradient-matching process is executed iteratively, with the attacker continuously adjusting the dummy data to minimize the gradient difference. This optimization continues until one of two stopping conditions is met: (1) the loss function begins to diverge, suggesting that further refinement will not yield a more accurate reconstruction, or (2) the maximum number of iterations is reached, at which point the attacker achieves a high-fidelity approximation of the original training sample.
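Below is a minimal PyTorch sketch of this gradient-matching loop in the style of the DLG attack. The toy model, input shape, soft-label handling, and iteration count are illustrative assumptions rather than the paper's attack configuration; the objective is the squared L2 distance between dummy and true gradients.

```python
import torch
import torch.nn.functional as F

# Illustrative victim model and a "true" gradient captured by the server.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32, 10))
x_true = torch.rand(1, 1, 32, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters()
)

# Dummy data and label logits, iteratively refined to match the gradients.
x_dummy = torch.rand_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(100):
    def closure():
        opt.zero_grad()
        # Soft-label cross entropy on the dummy pair.
        dummy_loss = torch.sum(
            -F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1)
        )
        dummy_grads = torch.autograd.grad(
            dummy_loss, model.parameters(), create_graph=True
        )
        # Squared L2 distance between dummy and true gradients.
        grad_diff = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    opt.step(closure)
```

After convergence, x_dummy approximates the private training sample, which is exactly the reconstruction quality that the SSIM-based sensitivity score in Algorithm 1 measures.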
4.2. Sensitivity Measurement
4.3. Sensitivity-Based Noise Addition
4.4. Theoretical Analysis
4.4.1. Privacy Guarantee
4.4.2. Computational Overhead
4.4.3. Communication Overhead
5. Multi-Attack Sensitivity Evaluation
6. Experiments
6.1. Experimental Setup
- PRiMIA is an open-source framework designed for end-to-end privacy-preserving deep learning in multi-institutional medical imaging settings. It integrates differential privacy mechanisms with secure aggregation and encrypted inference to protect patient data during model training and deployment. Specifically, PRiMIA employs differentially private stochastic gradient descent (DP-SGD) to ensure that individual data contributions remain confidential, while secure multi-party computation techniques are used to aggregate model updates without exposing raw data. The framework maintains model performance comparable to that of non-private training while providing robust privacy guarantees against gradient-based attacks (a minimal sketch of the per-sample clipping and noising step underlying DP-SGD follows this list).
- MSDP is a federated learning approach tailored to handle non-independent and identically distributed (non-IID) data across multiple medical institutions. It extends the DP-SGD algorithm by incorporating mechanisms that account for data heterogeneity among clients. This method is effective in maintaining model accuracy in scenarios where data distributions vary significantly across participating sites.
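For context, the following is a minimal sketch of the per-sample gradient clipping and Gaussian-noise step that underlies DP-SGD, written in plain PyTorch rather than with any particular library (PRiMIA and MSDP wrap equivalent logic in their own pipelines); the clip norm, noise multiplier, and learning rate are illustrative assumptions.

```python
import torch

def dpsgd_step(model, loss_fn, batch_x, batch_y,
               lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD update: clip each per-sample gradient to clip_norm, sum,
    add Gaussian noise scaled by noise_mult * clip_norm, then step.
    Hyperparameters are illustrative."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(batch_x, batch_y):  # per-sample gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (total_norm + 1e-6))
        for s, g in zip(summed, grads):
            s += g * scale  # clipped per-sample gradient

    n = len(batch_x)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.normal(0.0, noise_mult * clip_norm, size=p.shape)
            p -= lr * (s + noise) / n
```

A production system would additionally use mini-batch subsampling and a privacy accountant to track the cumulative privacy budget across training rounds.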
6.2. Experimental Evaluation
6.2.1. Results of the GIA Attack
6.2.2. Results of the Sensitivity-Aware Privacy Mechanism
7. Discussion
7.1. Generalizing SDP Across Data Modalities
7.2. Extending SDP with Cryptographic Techniques
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
FL | Federated learning
DP | Differential privacy
GIA | Gradient inversion attacks
AI | Artificial intelligence
SDP | Sensitivity-aware differential privacy
DPSGD | Differentially private stochastic gradient descent
SSIM | Structural similarity index measure
REA | Rotterdam EyePACS AIROGS
DLG | Deep leakage from gradients
LDP | Local differential privacy
References
- Upreti, D.; Yang, E.; Kim, H.; Seo, C. A Comprehensive Survey on Federated Learning in the Healthcare Area: Concept and Applications. CMES-Comput. Model. Eng. Sci. 2024, 140, 2239–2274. [Google Scholar] [CrossRef]
- Chang, Q.; Yan, Z.; Zhou, M.; Qu, H.; He, X.; Zhang, H.; Baskaran, L.; Al’Aref, S.; Li, H.; Zhang, S.; et al. Mining multi-center heterogeneous medical data with distributed synthetic learning. Nat. Commun. 2023, 14, 5510. [Google Scholar] [CrossRef] [PubMed]
- Feng, J.; Wu, Y.; Sun, H.; Zhang, S.; Liu, D. Panther: Practical Secure 2-Party Neural Network Inference. IEEE Trans. Inf. Forensics Secur. 2025, 20, 1149–1162. [Google Scholar] [CrossRef]
- Zhang, C.; Huang, T.; Mao, W.; Bai, H.; Yu, B. Federated Continual Learning based on Central Memory Rehearsal. Hospital 2024, 1, T2. [Google Scholar]
- Nguyen, D.C.; Pham, Q.V.; Pathirana, P.N.; Ding, M.; Seneviratne, A.; Lin, Z.; Dobre, O.; Hwang, W.J. Federated learning for smart healthcare: A survey. ACM Comput. Surv. (Csur) 2022, 55, 1–37. [Google Scholar] [CrossRef]
- Roth, H.R.; Chang, K.; Singh, P.; Neumark, N.; Li, W.; Gupta, V.; Gupta, S.; Qu, L.; Ihsani, A.; Bizzo, B.C.; et al. Federated learning for breast density classification: A real-world implementation. In Proceedings of the Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning: Second MICCAI Workshop, DART 2020, and First MICCAI Workshop, DCL 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, 4–8 October 2020; Proceedings 2. Springer: Berlin/Heidelberg, Germany, 2020; pp. 181–191. [Google Scholar]
- Rieke, N.; Hancox, J.; Li, W.; Milletari, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K.; et al. The future of digital health with federated learning. NPJ Digit. Med. 2020, 3, 1–7. [Google Scholar] [CrossRef]
- Ogier du Terrail, J.; Ayed, S.S.; Cyffers, E.; Grimberg, F.; He, C.; Loeb, R.; Mangold, P.; Marchand, T.; Marfoq, O.; Mushtaq, E.; et al. Flamby: Datasets and benchmarks for cross-silo federated learning in realistic healthcare settings. Adv. Neural Inf. Process. Syst. 2022, 35, 5315–5334. [Google Scholar]
- Abbas, S.R.; Abbas, Z.; Zahir, A.; Lee, S.W. Federated Learning in Smart Healthcare: A Comprehensive Review on Privacy, Security, and Predictive Analytics with IoT Integration. Healthcare 2024, 12, 2587. [Google Scholar] [CrossRef]
- Zhu, L.; Liu, Z.; Han, S. Deep leakage from gradients. Adv. Neural Inf. Process. Syst. 2019, 32. [Google Scholar]
- Zhao, B.; Mopuri, K.R.; Bilen, H. iDLG: Improved deep leakage from gradients. arXiv 2020, arXiv:2001.02610. [Google Scholar]
- Geiping, J.; Bauermeister, H.; Dröge, H.; Moeller, M. Inverting gradients-how easy is it to break privacy in federated learning? Adv. Neural Inf. Process. Syst. 2020, 33, 16937–16947. [Google Scholar]
- Darzi, E.; Dubost, F.; Sijtsema, N.M.; van Ooijen, P. Exploring adversarial attacks in federated learning for medical imaging. arXiv 2023, arXiv:2310.06227. [Google Scholar] [CrossRef]
- Ye, Z.; Luo, W.; Zhou, Q.; Zhu, Z.; Shi, Y.; Jia, Y. Gradient Inversion Attacks: Impact Factors Analyses and Privacy Enhancement. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 9834–9850. [Google Scholar] [CrossRef] [PubMed]
- Hu, J.; Du, J.; Wang, Z.; Pang, X.; Zhou, Y.; Sun, P.; Ren, K. Does Differential Privacy Really Protect Federated Learning From Gradient Leakage Attacks? IEEE Trans. Mob. Comput. 2024, 23, 12635–12649. [Google Scholar] [CrossRef]
- Yan, J.; Zhang, Y.; Lu, L.; Tian, Y.; Zhou, Y. A graph generating method based on local differential privacy for preserving link relationships of social networks. J. Netw. Netw. Appl. 2025, 4, 145–156. [Google Scholar]
- Zhang, P.; Fang, X.; Zhang, Z.; Fang, X.; Liu, Y.; Zhang, J. Horizontal multi-party data publishing via discriminator regularization and adaptive noise under differential privacy. Inf. Fusion 2025, 120, 103046. [Google Scholar] [CrossRef]
- Ahamed, M.F.; Hossain, M.M.; Nahiduzzaman, M.; Islam, M.R.; Islam, M.R.; Ahsan, M.; Haider, J. A review on brain tumor segmentation based on deep learning methods with federated learning techniques. Comput. Med. Imaging Graph. 2023, 102313. [Google Scholar] [CrossRef]
- Xu, J.; Glicksberg, B.S.; Su, C.; Walker, P.; Bian, J.; Wang, F. Federated learning for healthcare informatics. J. Healthc. Inform. Res. 2021, 5, 1–19. [Google Scholar] [CrossRef]
- Darzidehkalani, E.; Sijtsema, N.M.; van Ooijen, P. A comparative study of federated learning models for COVID-19 detection. arXiv 2023, arXiv:2303.16141. [Google Scholar]
- Kaissis, G.; Ziller, A.; Passerat-Palmbach, J.; Ryffel, T.; Usynin, D.; Trask, A.; Lima, I., Jr.; Mancuso, J.; Jungmann, F.; Steinborn, M.M.; et al. End-to-end privacy preserving deep learning on multi-institutional medical imaging. Nat. Mach. Intell. 2021, 3, 473–484. [Google Scholar] [CrossRef]
- Hatamizadeh, A.; Yin, H.; Molchanov, P.; Myronenko, A.; Li, W.; Dogra, P.; Feng, A.; Flores, M.G.; Kautz, J.; Xu, D.; et al. Do gradient inversion attacks make federated learning unsafe? IEEE Trans. Med. Imaging 2023, 42, 2044–2056. [Google Scholar] [CrossRef] [PubMed]
- Li, Z.; Wang, L.; Chen, G.; Zhang, Z.; Shafiq, M.; Gu, Z. E2EGI: End-to-End gradient inversion in federated learning. IEEE J. Biomed. Health Inform. 2022, 27, 756–767. [Google Scholar]
- Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 308–318. [Google Scholar] [CrossRef]
- Bassily, R.; Smith, A.; Thakurta, A. Private empirical risk minimization: Efficient algorithms and tight error bounds. In Proceedings of the 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, Philadelphia, PA, USA, 18–21 October 2014; pp. 464–473. [Google Scholar]
- Ziller, A.; Usynin, D.; Braren, R.; Makowski, M.; Rueckert, D.; Kaissis, G. Medical imaging deep learning with differential privacy. Sci. Rep. 2021, 11, 13524. [Google Scholar] [CrossRef] [PubMed]
- Jorgensen, Z.; Yu, T.; Cormode, G. Conservative or liberal? Personalized differential privacy. In Proceedings of the 2015 IEEE 31st International Conference on Data Engineering, Seoul, Republic of Korea, 13–17 April 2015; pp. 1023–1034. [Google Scholar]
- Shi, W.; Shea, R.; Chen, S.; Zhang, C.; Jia, R.; Yu, Z. Just fine-tune twice: Selective differential privacy for large language models. arXiv 2022, arXiv:2204.07667. [Google Scholar]
- Adnan, M.; Kalra, S.; Cresswell, J.C.; Taylor, G.W.; Tizhoosh, H.R. Federated learning and differential privacy for medical image analysis. Sci. Rep. 2022, 12, 1953. [Google Scholar] [CrossRef]
- Kotsogiannis, I.; Doudalis, S.; Haney, S.; Machanavajjhala, A.; Mehrotra, S. One-sided differential privacy. In Proceedings of the 2020 IEEE 36th International Conference on Data Engineering (ICDE), Dallas, TX, USA, 20–24 April 2020; pp. 493–504. [Google Scholar]
- McSherry, F.D. Privacy integrated queries: An extensible platform for privacy-preserving data analysis. In Proceedings of the 2009 ACM SIGMOD International Conference on Management of Data, Providence, RI, USA, 29 June–2 July 2009; pp. 19–30. [Google Scholar]
- Rahman, T.; Khandakar, A.; Qiblawey, Y.; Tahir, A.; Kiranyaz, S.; Kashem, S.B.A.; Islam, M.T.; Al Maadeed, S.; Zughaier, S.M.; Khan, M.S.; et al. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Comput. Biol. Med. 2021, 132, 104319. [Google Scholar]
- de Vente, C.; Vermeer, K.A.; Jaccard, N.; van Ginneken, B.; Lemij, H.G.; Sánchez, C.I. Rotterdam EyePACS AIROGS train set. arXiv 2021, arXiv:2302.01738. [Google Scholar]
Study | Methodology | Addressed Gap | Limitations |
---|---|---|---|
[21] | DPSGD | Prevents information leakage in FL | Uniform noise may degrade utility |
[27] | Personalized DP | Considers the sensitivity of different users | Ignores varying data sensitivities |
[28] | Selective-DP | Protects only the sensitive samples | Lacks consideration of real-world attacks |
Our method | Sensitivity-aware DP | Dynamically adjusts noise based on measured sensitivity to privacy attacks | Future work will consider other data modalities and combinations with cryptographic techniques