MDIFL: Robust Federated Learning Based on Malicious Detection and Incentives
Abstract
1. Introduction
2. Literature Review
2.1. Node Selection Issue
2.2. Incentive Mechanism
3. Materials and Methods
3.1. Node Selection
Algorithm 1: MDIFL Algorithm for Server
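Algorithm 1 itself is not reproduced here. As a rough illustration of the server-side idea in this section (scoring each uploaded gradient by its cosine similarity to the global gradient and tracking a per-participant reputation), the following sketch may help; the function names, the default weighting factor, and the flagging threshold are our own assumptions, not the paper's exact algorithm.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two gradient vectors; 0.0 for degenerate input.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def update_reputations(global_grad, local_grads, reputations,
                       alpha=0.9, threshold=0.0):
    """Hypothetical reputation update: an exponential moving average of each
    participant's cosine similarity to the global gradient. Participants whose
    reputation falls below `threshold` are flagged as suspected malicious."""
    suspected = []
    for pid, grad in local_grads.items():
        sim = cosine_similarity(global_grad, grad)
        reputations[pid] = alpha * reputations.get(pid, 1.0) + (1 - alpha) * sim
        if reputations[pid] < threshold:
            suspected.append(pid)
    return reputations, suspected
```

Under this sketch, an honest participant whose gradients align with the global direction keeps a reputation near 1, while a participant uploading inverted gradients (similarity near -1) is flagged after a few rounds.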
3.2. Incentive Mechanism
3.2.1. Problem Formulation
3.2.2. Optimal Contract Design
- Individual Rationality (IR): A participant will take part in the federated learning training task only if its maximum attainable utility is non-negative, i.e.,
- Incentive Compatibility (IC): A participant of type m maximizes its utility only by choosing the contract item designed for its own type, i.e.,
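The two constraints can be written in a standard contract-theory form. The notation below is our reconstruction (the paper's exact symbols may differ), with $u_m$ the utility of a type-$m$ participant and $(f_m, R_m)$ the contract item (contribution level and reward) designed for type $m$:

```latex
% Individual Rationality (IR): a type-m participant accepts its
% contract item only if the resulting utility is non-negative.
u_m(f_m, R_m) \ge 0, \qquad \forall m \in \{1, \dots, M\}

% Incentive Compatibility (IC): a type-m participant prefers the item
% designed for its own type over any other type's item.
u_m(f_m, R_m) \ge u_m(f_{m'}, R_{m'}), \qquad \forall m, m' \in \{1, \dots, M\}
```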
4. Experiment Results and Discussion
4.1. Experimental Setup
4.1.1. Datasets
4.1.2. Baselines
4.1.3. Attack Settings
- Free-riders: Free-riding is a passive attack in which participants only use the global model to update their own local models and contribute no valuable local information in return; they typically upload random gradients instead.
- Targeted poisoning attack: A typical example is label-flipping, in which the labels of honest training examples of one class are flipped to another class while the data features remain unchanged. For example, an adversary trains with '1' as the label for actual images of '7', producing distorted gradients.
- Non-targeted poisoning attack: The uploaded gradients are manipulated directly, e.g., by randomizing their signs, rescaling them, or inverting their values.
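For concreteness, the gradient and label manipulations above can be mimicked on toy data. The function names and the particular attack parameters (e.g., the 10x rescaling factor, the 7-to-1 label flip) are our own assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def free_rider(grad):
    # Free-rider: upload a random gradient instead of a real local update.
    return rng.standard_normal(grad.shape)

def flip_labels(labels, src=7, dst=1):
    # Targeted label-flipping: relabel every `src` example as `dst`;
    # the features themselves are left untouched.
    return np.where(labels == src, dst, labels)

def sign_randomize(grad):
    # Non-targeted: keep each entry's magnitude but randomize its sign.
    return np.abs(grad) * rng.choice([-1.0, 1.0], size=grad.shape)

def rescale(grad, factor=10.0):
    # Non-targeted: blow the gradient up by a large factor (assumed 10x here).
    return factor * grad

def value_invert(grad):
    # Non-targeted: invert the gradient so it points away from descent.
    return -grad
```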
4.1.4. Hyper-Parameters
4.2. Reputation Setting and Malicious Node Identification Analysis of MDIFL
4.3. Comparison of MDIFL with Other Methods under Different Datasets
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
| Symbol | Description |
| --- | --- |
| P | Collection of participants |
|  | Reputation weighting factor |
|  | Malicious participant |
|  | Suspected malicious participant |
|  | Good participant |
|  | Global gradient in round t |
|  | Local gradient of a participant in round t |
|  | Cosine similarity between the global gradient and a local gradient |
|  | Threshold 1 for judging malicious nodes |
|  | Threshold 2 for judging malicious nodes |
| Method | MNIST (UNI) | MNIST (POW) | CIFAR10 (UNI) | CIFAR10 (POW) | MR (POW) |
| --- | --- | --- | --- | --- | --- |
| N | 10 | 10 | 10 | 10 | 5 |
| FedAvg | 94 | 94 | 39 | 37 | 52 |
| q-FFL | 85 | 27 | 41 | 36 | 12 |
| RFFL | 94 | 93 | 45 | 42 | 53 |
| MDIFL | 96 | 96 | 51 | 53 | 56 |
| Method | Free-Riders | Label-Flipping | Sign-Randomizing | Re-Scaling | Value-Inverting |
| --- | --- | --- | --- | --- | --- |
| FedAvg | 96 | 89 | 10 | 10 | 9 |
| RFFL | 96 | 96 | 96 | 96 | 96 |
| MDIFL | 96 | 96 | 96 | 96 | 96 |
| Method | Free-Riders | Label-Flipping | Sign-Randomizing | Re-Scaling | Value-Inverting |
| --- | --- | --- | --- | --- | --- |
| FedAvg | 93 | 88 | 94 | 10 | 9 |
| RFFL | 96 | 96 | 96 | 96 | 96 |
| MDIFL | 96 | 96 | 96 | 96 | 96 |
| Method | Maximum Accuracy | Attack Success Rate |
| --- | --- | --- |
| FedAvg | 94.5 | 0.7 |
| RFFL | 94.0 | 0.5 |
| MDIFL | 95.8 | 0 |
Wu, R.; Chen, Y.; Tan, C.; Luo, Y. MDIFL: Robust Federated Learning Based on Malicious Detection and Incentives. Appl. Sci. 2023, 13, 2793. https://doi.org/10.3390/app13052793