Lightweight Privacy Protection via Adversarial Sample
Abstract
1. Introduction
- To the best of our knowledge, our work is among the first to address the tension between the limited computational capabilities of user devices and the large parameter counts of the deep learning models used in adversarial-sample-based privacy protection.
- We combine state-of-the-art model pruning techniques with advanced adversarial-sample privacy protections to design two structural-pruning-based adversarial sample privacy protections (SP-ASPPs) and verify their effectiveness (a minimal end-to-end sketch follows this list).
- Thorough experiments on four real-world datasets demonstrate that pruning does not significantly degrade the defense model's performance, establishing the feasibility of running adversarial-sample privacy protection locally on user devices.
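The two-step pipeline behind these contributions, first shrink the defense model by structural pruning and then run it on the user's device, can be sketched in a few lines of PyTorch. This is a minimal illustration under assumed shapes (a toy two-layer classifier, a 50% keep ratio, an L2-magnitude criterion), not the paper's implementation:

```python
import torch
import torch.nn as nn

# Toy stand-in for the defender's model D; the real models are specified in Section 5.
model = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 10))

def num_params(m):
    return sum(p.numel() for p in m.parameters())

# Naive structured pruning: keep the 64 hidden units whose weight rows have the
# largest L2 norm, and rebuild a smaller model from the surviving rows/columns.
keep = torch.topk(model[0].weight.norm(dim=1), k=64).indices
pruned = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10))
with torch.no_grad():
    pruned[0].weight.copy_(model[0].weight[keep])
    pruned[0].bias.copy_(model[0].bias[keep])
    pruned[2].weight.copy_(model[2].weight[:, keep])
    pruned[2].bias.copy_(model[2].bias)

print(num_params(model), "->", num_params(pruned))  # fewer parameters, same interface
```

Even this naive row/column pruning cuts the parameter count by roughly four times while preserving the model's input/output interface, which is the property that lets the defense run on a constrained user device.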
2. Related Work
2.1. Privacy Protection Based on Adversarial Samples
2.2. Model Pruning
3. Model Preparation
3.1. Participating Parties
3.1.1. Defender
3.1.2. User
3.1.3. Attacker
3.2. Problem Definition
4. The Proposed Scheme
4.1. Model Pruning
4.1.1. Dependency Graph and Grouping
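The subsection title points to DepGraph [15]. To make the idea concrete, here is a minimal sketch following the documented workflow of torch-pruning, the library released with DepGraph: the dependency graph is traced from an example input, and pruning a few output channels of one convolution automatically collects every coupled layer into a group. The ResNet-18 model and the channel indices are arbitrary choices for illustration:

```python
import torch
import torch_pruning as tp
from torchvision.models import resnet18

model = resnet18()

# Build the dependency graph by tracing the model with an example input.
DG = tp.DependencyGraph().build_dependency(
    model, example_inputs=torch.randn(1, 3, 224, 224)
)

# Pruning three output channels of conv1 implicates every layer whose shape
# depends on them; DepGraph gathers all of these into one pruning group.
group = DG.get_pruning_group(model.conv1, tp.prune_conv_out_channels, idxs=[2, 6, 9])

if DG.check_pruning_group(group):  # avoid pruning a layer down to zero channels
    group.prune()
```

The point of the dependency graph is that `group.prune()` keeps the network consistent: batch-norm layers, downstream convolutions, and residual connections tied to the pruned channels are all adjusted together.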
4.1.2. Importance Evaluation
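The paper defines its own importance criterion in this subsection; as a hedged stand-in, the following sketch shows the common magnitude-based variant, scoring each output channel by the L2 norm of its weights and nominating the lowest-scoring half for pruning. The 50% ratio and the criterion itself are assumptions:

```python
import torch
import torch.nn as nn

def filter_importance(conv: nn.Conv2d) -> torch.Tensor:
    """Per-output-channel L2-norm importance, a common magnitude criterion.

    An assumed stand-in, not necessarily the paper's exact criterion.
    """
    return conv.weight.detach().flatten(1).norm(p=2, dim=1)

conv = nn.Conv2d(16, 32, kernel_size=3)
scores = filter_importance(conv)                     # shape: (32,)
prune_idxs = torch.argsort(scores)[: int(0.5 * 32)]  # least important half
print(prune_idxs.tolist())
```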
4.2. The First SP-ASPP
Algorithm 1: The first SP-ASPP.
Input: number of users N; the data of the i-th user; the dimension K; the defense model D; the minimum perturbation magnitude.
Output: the protected adversarial samples.
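Algorithm 1's body is not reproduced above, so the following is only a sketch of the general idea behind evasion-style defenses such as AttriGuard [2]: starting from the user's data, grow a gradient-sign perturbation computed against the pruned defense model D until D no longer predicts the original private attribute. The step size, iteration cap, and stopping rule are assumptions, not the paper's Algorithm 1:

```python
import torch
import torch.nn.functional as F

def first_sp_aspp_sketch(D, x, step=0.01, max_iters=100):
    """Grow a gradient-sign perturbation until D's prediction flips.

    Loose illustration in the spirit of evasion-based defenses such as
    AttriGuard [2]; not the paper's Algorithm 1.
    """
    with torch.no_grad():
        original = D(x).argmax(dim=1)  # D's current guess of the private attribute
    x_adv = x.clone().detach()
    for _ in range(max_iters):
        x_adv.requires_grad_(True)
        logits = D(x_adv)
        if (logits.argmax(dim=1) != original).all():
            break  # D no longer predicts the original attribute; stop early
        loss = F.cross_entropy(logits, original)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + step * x_adv.grad.sign()  # ascend the loss on x
    return x_adv.detach()
```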
4.3. The Second SP-ASPP
Algorithm 2: The second SP-ASPP.
Input: number of users N; the data of the i-th user; the defense model D.
Output: the protected adversarial samples.
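Algorithm 2 takes the defense model D and each user's data in the spirit of MemGuard [3], which perturbs the confidence score vector a model returns so that a membership inference classifier is misled while the predicted label is preserved. MemGuard solves a carefully constructed optimization problem; the random-search variant below is only an assumed, simplified illustration of the label-preserving constraint:

```python
import torch

def second_sp_aspp_sketch(confidences, sigma=0.05, tries=50):
    """Perturb a confidence vector without changing its argmax.

    Sketch in the spirit of MemGuard [3]: random candidate noise is drawn
    and the first label-preserving candidate is kept. All constants are
    assumptions; the paper's Algorithm 2 is more carefully optimized.
    """
    label = confidences.argmax()
    for _ in range(tries):
        noisy = torch.softmax(
            confidences.log() + sigma * torch.randn_like(confidences), dim=0
        )
        if noisy.argmax() == label:  # utility constraint: predicted label unchanged
            return noisy
    return confidences  # fall back to the unperturbed vector

scores = torch.tensor([0.7, 0.2, 0.1])
print(second_sp_aspp_sketch(scores))
```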
5. Experiment
5.1. Experimental Settings
5.1.1. Datasets
5.1.2. The First SP-ASPP
5.1.3. The Second SP-ASPP
5.2. Experimental Results
5.2.1. The First SP-ASPP
For the first SP-ASPP, every tested noise budget satisfies Definition 3 at the stated value of n; as the budget grows, both the attacker's maximum and minimum inference accuracies fall while the achievable n rises:

| Noise Budget | Max Inference Accuracy | Min Inference Accuracy | n |
|---|---|---|---|
| 1.5 | 0.3771 | 0.3283 | 0.0437 |
| 2.0 | 0.3666 | 0.3020 | 0.0583 |
| 3.0 | 0.3569 | 0.2707 | 0.0785 |
| 4.0 | 0.3518 | 0.2548 | 0.0890 |
| 5.0 | 0.3480 | 0.2459 | 0.0942 |
5.2.2. The Second SP-ASPP
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199.
2. Jia, J.; Gong, N.Z. AttriGuard: A practical defense against attribute inference attacks via adversarial machine learning. In Proceedings of the 27th USENIX Security Symposium (USENIX Security 18), Baltimore, MD, USA, 15–17 August 2018; pp. 513–529.
3. Jia, J.; Salem, A.; Backes, M.; Zhang, Y.; Gong, N.Z. MemGuard: Defending against black-box membership inference attacks via adversarial examples. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK, 11–15 November 2019; pp. 259–274.
4. Shao, R.; Shi, Z.; Yi, J.; Chen, P.Y.; Hsieh, C.J. Robust text CAPTCHAs using adversarial examples. In Proceedings of the 2022 IEEE International Conference on Big Data (Big Data), Osaka, Japan, 17–20 December 2022; pp. 1495–1504.
5. Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The limitations of deep learning in adversarial settings. In Proceedings of the 2016 IEEE European Symposium on Security and Privacy (EuroS&P), Saarbrücken, Germany, 21–24 March 2016; pp. 372–387.
6. Shokri, R.; Stronati, M.; Song, C.; Shmatikov, V. Membership inference attacks against machine learning models. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–24 May 2017; pp. 3–18.
7. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv 2017, arXiv:1706.06083.
8. Dwork, C. Differential privacy. In Proceedings of the International Colloquium on Automata, Languages, and Programming; Springer: Cham, Switzerland, 2006; pp. 1–12.
9. Shokri, R.; Theodorakopoulos, G.; Troncoso, C.; Hubaux, J.P.; Le Boudec, J.Y. Protecting location privacy: Optimal strategy against localization attacks. In Proceedings of the 2012 ACM Conference on Computer and Communications Security, Raleigh, NC, USA, 16–18 October 2012; pp. 617–627.
10. Salamatian, S.; Zhang, A.; du Pin Calmon, F.; Bhamidipati, S.; Fawaz, N.; Kveton, B.; Oliveira, P.; Taft, N. Managing your private and public data: Bringing down inference attacks against your privacy. IEEE J. Sel. Top. Signal Process. 2015, 9, 1240–1255.
11. Xie, G.; Pei, Q. Towards attack to MemGuard with nonlocal-means method. Secur. Commun. Netw. 2022, 2022, 6272737.
12. Wang, T.; Blocki, J.; Li, N.; Jha, S. Locally differentially private protocols for frequency estimation. In Proceedings of the 26th USENIX Security Symposium (USENIX Security 17), Vancouver, BC, Canada, 16–18 August 2017; pp. 729–745.
13. Ding, X.; Ding, G.; Guo, Y.; Han, J. Centripetal SGD for pruning very deep convolutional networks with complicated structure. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4943–4953.
14. You, Z.; Yan, K.; Ye, J.; Ma, M.; Wang, P. Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks. arXiv 2019.
15. Fang, G.; Ma, X.; Song, M.; Mi, M.B.; Wang, X. DepGraph: Towards any structural pruning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 16091–16101.
16. Shen, M.; Molchanov, P.; Yin, H.; Alvarez, J.M. When to prune? A policy towards early structural pruning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 12247–12256.
17. Gao, S.; Zhang, Z.; Zhang, Y.; Huang, F.; Huang, H. Structural alignment for network pruning through partial regularization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 17402–17412.
18. Shen, M.; Yin, H.; Molchanov, P.; Mao, L.; Liu, J.; Alvarez, J.M. Structural pruning via latency-saliency knapsack. Adv. Neural Inf. Process. Syst. 2022, 35, 12894–12908.
19. Fang, G.; Ma, X.; Wang, X. Structural pruning for diffusion models. arXiv 2023, arXiv:2305.10924.
20. Hou, Y.; Ma, Z.; Liu, C.; Wang, Z.; Loy, C.C. Network pruning via resource reallocation. Pattern Recognit. 2024, 145, 109886.
21. Ma, X.; Fang, G.; Wang, X. LLM-Pruner: On the structural pruning of large language models. arXiv 2023, arXiv:2305.11627.
22. Dong, X.; Chen, S.; Pan, S. Learning to prune deep neural networks via layer-wise optimal brain surgeon. arXiv 2017.
23. Guo, Y.; Yao, A.; Chen, Y. Dynamic network surgery for efficient DNNs. arXiv 2016.
24. Park, S.; Lee, J.; Mo, S.; Shin, J. Lookahead: A far-sighted alternative of magnitude-based pruning. arXiv 2020, arXiv:2002.04809.
25. Luo, J.H.; Wu, J. Neural network pruning with residual-connections and limited-data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1458–1467.
26. Yao, L.; Pi, R.; Xu, H.; Zhang, W.; Li, Z.; Zhang, T. Joint-DetNAS: Upgrade your detector with NAS, pruning and dynamic distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10175–10184.
27. Li, T.; Lin, L. AnonymousNet: Natural face de-identification with measurable privacy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019.
28. Salman, H.; Khaddaj, A.; Leclerc, G.; Ilyas, A.; Madry, A. Raising the cost of malicious AI-powered image editing. arXiv 2023, arXiv:2302.06588.
29. Shan, S.; Cryan, J.; Wenger, E.; Zheng, H.; Hanocka, R.; Zhao, B.Y. Glaze: Protecting artists from style mimicry by text-to-image models. arXiv 2023, arXiv:2302.04222.
30. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
31. Ma, X.; Li, B.; Wang, Y.; Erfani, S.M.; Wijewickrema, S.; Schoenebeck, G.; Song, D.; Houle, M.E.; Bailey, J. Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv 2018, arXiv:1801.02613.
32. Liu, Y.; Qin, Z.; Anwar, S.; Ji, P.; Kim, D.; Caldwell, S.; Gedeon, T. Invertible denoising network: A light solution for real noise removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13365–13374.
33. Meng, D.; Chen, H. MagNet: A two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 135–147.
| Datasets | Methods | Accuracy | Parameter Size (MB) | Inference Accuracy |
|---|---|---|---|---|
| Att | AttriGuard [2] | 0.4549 | 401.0400 | 0.3531 |
| Att | 1st-SP-ASPP (50) | 0.4586 | 7.0100 | 0.2863 |
| Att | 1st-SP-ASPP (100) | 0.4555 | 3.0000 | 0.2967 |
| Att | 1st-SP-ASPP (150) | 0.4506 | 2.0100 | 0.2937 |
| Att | 1st-SP-ASPP (200) | 0.4433 | 0.9900 | 0.3112 |
| Location | MemGuard [3] | 0.8575 | 0.0492 | 0.5946 |
| Location | 2nd-SP-ASPP (20) | 0.8575 | 0.0024 | 0.6147 |
| Location | 2nd-SP-ASPP (40) | 0.8575 | 0.0012 | 0.5951 |
| Location | 2nd-SP-ASPP (60) | 0.8525 | 0.0008 | 0.5949 |
| Texas100 | MemGuard [3] | 0.7360 | 0.0671 | 0.5715 |
| Texas100 | 2nd-SP-ASPP (20) | 0.7355 | 0.0033 | 0.5677 |
| Texas100 | 2nd-SP-ASPP (40) | 0.7353 | 0.0016 | 0.5673 |
| Texas100 | 2nd-SP-ASPP (60) | 0.7350 | 0.0010 | 0.5663 |
| CH-MNIST | MemGuard [3] | 0.7075 | 0.0435 | 0.5435 |
| CH-MNIST | 2nd-SP-ASPP (50) | 0.7050 | 0.0008 | 0.5442 |
| CH-MNIST | 2nd-SP-ASPP (100) | 0.7025 | 0.0004 | 0.5429 |
| CH-MNIST | 2nd-SP-ASPP (150) | 0.7075 | 0.0003 | 0.5495 |
| CH-MNIST | 2nd-SP-ASPP (200) | 0.7075 | 0.0002 | 0.6449 |