Federated Learning Incentive Mechanism with Supervised Fuzzy Shapley Value
Abstract
1. Introduction
- (1)
- Given the uncertainty and fuzziness of participants’ involvement in FL, this paper proposes a fuzzy Shapley value method that more accurately assesses each participant’s contribution and the resulting payoff distribution;
- (2)
- The conflict between fairness and Pareto efficiency must be addressed: fairness requires an equitable distribution of payoffs among participants, while Pareto efficiency requires that no participant’s payoff can be increased without reducing another participant’s payoff;
- (3)
- This paper reconciles fairness with Pareto efficiency by introducing a supervisory mechanism that monitors and adjusts participant behavior, ensuring that payoffs are allocated fairly while the overall federated payoff is maximized.
2. Related Work
3. Preliminaries
3.1. Federated Learning Framework
- Step 1:
- Individual federated participants retrieve the initial global model from the aggregation server.
- Step 2:
- Each participant trains their local model using the received initial global model.
- Step 3:
- Upon completing the local model training, participants upload the updated model and associated parameters to the aggregation server.
- Step 4:
- The aggregation server consolidates the models and parameters uploaded by each participant for the subsequent round of updates.
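The four steps above can be sketched as a single training round. The linear local model, one-gradient-step update, and unweighted FedAvg-style mean are illustrative assumptions, since this section does not fix a concrete model or aggregation rule:

```python
import numpy as np

def federated_round(w_global, local_datasets, local_train, aggregate):
    """One FL round following Steps 1-4: broadcast the global model,
    train locally, upload the updates, and aggregate on the server."""
    updates = [local_train(w_global.copy(), data)   # Steps 1-3
               for data in local_datasets]
    return aggregate(updates)                       # Step 4

def local_train(w, data):
    """Illustrative local update: one gradient step on a least-squares loss."""
    X, y = data
    grad = X.T @ (X @ w - y) / len(y)
    return w - 0.1 * grad

def fedavg(updates):
    """Unweighted FedAvg-style mean of the uploaded parameter vectors."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
datasets = [(rng.normal(size=(20, 2)), rng.normal(size=20)) for _ in range(3)]
w_global = np.zeros(2)
for _ in range(10):
    w_global = federated_round(w_global, datasets, local_train, fedavg)
```

In practice the aggregation is usually weighted by local dataset size, but the round structure is the same.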
3.2. Cooperative Games
3.3. Shapley Value
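As a concrete illustration of the classical (crisp) Shapley value, it can be computed exactly for a small game by summing weighted marginal contributions over all coalitions. The three-player characteristic function below is a toy example, not data from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(n, v):
    """Exact Shapley value of an n-player cooperative game with
    characteristic function v: frozenset -> float."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                # coalition weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (v(S | {i}) - v(S))   # weighted marginal contribution
    return phi

# Toy 3-player game (values are illustrative only):
vals = {frozenset(): 0, frozenset({0}): 10, frozenset({1}): 20, frozenset({2}): 30,
        frozenset({0, 1}): 40, frozenset({0, 2}): 50, frozenset({1, 2}): 60,
        frozenset({0, 1, 2}): 90}
phi = shapley_values(3, vals.__getitem__)   # efficiency: sum(phi) == v(N) == 90
```

The efficiency axiom guarantees the allocations exhaust the grand-coalition value, which is the property the payoff distribution in Section 4 builds on.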
3.4. Fuzzy Coalition of FL
3.5. Choquet Integral
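The discrete Choquet integral, which underlies the fuzzy extension of the characteristic function, can be sketched as follows; the additive measure used in the sanity check is an illustrative assumption:

```python
def choquet_integral(f, mu):
    """Discrete Choquet integral of a non-negative function f (dict:
    element -> value) with respect to a fuzzy measure mu mapping
    frozensets of elements to [0, 1] (monotone, mu(empty set) = 0)."""
    xs = sorted(f, key=f.get)                 # ascending by f value
    total, prev = 0.0, 0.0
    for k, x in enumerate(xs):
        level_set = frozenset(xs[k:])         # elements with f >= f[x]
        total += (f[x] - prev) * mu(level_set)
        prev = f[x]
    return total

# Sanity check with an *additive* measure, for which the Choquet
# integral reduces to an ordinary weighted sum:
w = {"a": 0.2, "b": 0.3, "c": 0.5}
mu = lambda A: sum(w[x] for x in A)
val = choquet_integral({"a": 1.0, "b": 2.0, "c": 3.0}, mu)   # 0.2*1 + 0.3*2 + 0.5*3
```

For non-additive measures the integral captures interaction between participants, which is what the fuzzy coalition model exploits.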
3.6. Pareto Optimality
3.7. Nash Equilibrium
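The tension between Nash equilibrium and Pareto optimality can be illustrated with a toy public-goods investment game (an assumption for illustration, not the paper’s model): each participant’s payoff is a*sqrt(E) - e_i with E the total investment, and the Nash level of E falls short of the Pareto-optimal level, mirroring the comparison reported in Section 5:

```python
def total_investment(a, n, objective):
    """Closed-form total investment E in a toy game where participant i's
    payoff is a*sqrt(E) - e_i and E is the sum of the e_i.
    'nash': each participant equates its *own* marginal benefit with its
    marginal cost, a / (2*sqrt(E)) = 1, so E = (a/2)^2.
    'pareto': the coalition equates the *total* marginal benefit,
    n*a / (2*sqrt(E)) = 1, so E = (n*a/2)^2."""
    if objective == "nash":
        return (a / 2.0) ** 2
    return (n * a / 2.0) ** 2

a, n = 2.0, 3
e_nash = total_investment(a, n, "nash")      # 1.0: equilibrium underinvestment
e_pareto = total_investment(a, n, "pareto")  # 9.0: socially optimal level
```

Because each participant ignores the benefit its investment confers on the others, the equilibrium underinvests, which motivates the supervisory mechanism of Section 4.7.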
4. FL System Incentive Mechanism
4.1. The FL Incentive Model
- (1)
- Economic Participation: All participants in the FL framework are capable of making financial contributions towards the FL process.
- (2)
- Participant Satisfaction: The final distribution of payoffs is designed to satisfy all participants.
- (3)
- Trustworthiness and Integrity: It is assumed that all participants in the FL system are entirely trustworthy and exhibit no instances of cheating or dishonest behavior.
- (4)
- Multi-Party Agreement: To ensure seamless execution of the FL strategy, it is essential to establish a multi-party agreement.
- Step 1:
- Considering n participants in federated model training, each participant possesses its own local dataset;
- Step 2:
- Each participant downloads the initialized model from the aggregation server, independently trains it on its respective local dataset, and uploads the trained model to the federated aggregation server;
- Step 3:
- To acquire a fresh global model, the federated aggregation server gathers the uploaded model parameters and employs the federated aggregation algorithm to consolidate them.
- Step 4:
- The fuzzy Shapley values method is employed to assess the individual contributions of each participant to the global model. This approach allows for the quantification of individual contributions, considering the uncertainty and fuzziness inherent in the participants’ involvement.
- Step 5:
- The supervising organization assesses the attainment of Pareto optimality for each participant’s payoff. If Pareto optimality is achieved, the fuzzy Shapley value formula is utilized to allocate the payoff. Conversely, the supervising organization imposes penalties on the participant, resulting in the forfeiture of a predetermined fine.
- Step 6:
- Rewards are allocated to participants who achieve Pareto optimality, as determined by the payoff allocation formula.
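Steps 1-6 above can be sketched as one orchestration loop. Every hook name below (`train`, `contribution`, `is_pareto_optimal`, and so on) is a hypothetical placeholder for the corresponding component of the model; the paper instantiates Step 4 with the fuzzy Shapley value and Step 5 with the supervisory Pareto check:

```python
def run_incentive_round(participants, train, aggregate, contribution,
                        is_pareto_optimal, allocate_reward, fine):
    """Sketch of Steps 1-6 of the incentive model, with all six hooks
    supplied by the caller."""
    updates = {p: train(p) for p in participants}        # Steps 1-3: local training
    global_model = aggregate(updates)                    # Step 3: server aggregation
    shares = contribution(updates)                       # Step 4: contribution scores
    payoffs = {}
    for p in participants:                               # Steps 5-6: supervise, pay
        if is_pareto_optimal(p):
            payoffs[p] = allocate_reward(shares[p])      # reward allocation
        else:
            payoffs[p] = allocate_reward(shares[p]) - fine(p)  # penalized
    return global_model, payoffs

# Toy instantiation: contributions proportional to update magnitude;
# participant "c" fails the Pareto check and forfeits a flat fine of 1.0.
model, payoffs = run_incentive_round(
    participants=["a", "b", "c"],
    train=lambda p: {"a": 1.0, "b": 2.0, "c": 3.0}[p],
    aggregate=lambda ups: sum(ups.values()) / len(ups),
    contribution=lambda ups: {p: u / sum(ups.values()) for p, u in ups.items()},
    is_pareto_optimal=lambda p: p != "c",
    allocate_reward=lambda share: 60.0 * share,
    fine=lambda p: 1.0,
)
```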
4.2. Federated Cost Utility Function
4.3. Model Quality Utility Function
4.4. Federated Optimization Function
4.5. The Fuzzy Shapley Value
4.6. The Conflict of Fairness and Pareto Optimality
4.7. Introducing Supervisory Organization
4.7.1. The Implementation of a Supervisory System
4.7.2. Penalty Conditions
- (1)
- If the individual investment of participant i is less than the Pareto-optimal federated investment, and the fine is a monotonically increasing function of the shortfall, then participant i is fined. The profit of participant i, i.e., the payoff remaining after the fine is deducted, can be expressed as follows:
- (2)
- If the individual investment of participant i is equal to the Pareto-optimal federated investment, then no fine is deducted, and the profit of participant i can be expressed as follows:
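The two penalty conditions can be sketched as a single rule. The linear fine shape, the fine rate, and the profit form (payoff minus fine minus own investment) are illustrative assumptions, not the paper’s exact formulas:

```python
def supervised_profit(e_i, e_star, payoff, fine_rate=2.0):
    """Penalty rule sketch: investing below the Pareto-optimal level
    e_star incurs a fine that increases monotonically with the
    shortfall; investing exactly e_star incurs no fine."""
    fine = fine_rate * max(e_star - e_i, 0.0)   # monotone in the shortfall
    return payoff - fine - e_i                  # assumed profit form

# With a sufficiently deterrent fine rate, underinvesting is strictly worse:
at_optimum = supervised_profit(5.0, 5.0, 20.0)   # no fine applies
below = supervised_profit(3.0, 5.0, 20.0)        # fined for the shortfall
```

The supervisor’s design goal is precisely this ordering: the fine must be steep enough that deviating below the Pareto-optimal investment never pays.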
5. Illustrative Examples and Simulations
5.1. Illustrative Examples
- (1)
- Examining the value of participant 1, when the values of and , then
- (2)
- Examining the value of participant 2 when the values of and , then
- (3)
- Examining the value of participant 3 when the values of and , then
5.2. Illustrative Simulations
6. Discussions
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
N | The set of n FL participants
D | The set of FL participants’ local datasets
M | The model trained jointly by all FL participants
| The FL sharing model
| The traditional machine learning model
| The model accuracy of the FL sharing model
| The model accuracy of the traditional machine learning model
S | The coalition subset of different participants
| The characteristic function in the deterministic case
| The characteristic function in the fuzzy case
| The participant’s payoff through the coalition S in the deterministic case
| The participant’s payoff through the coalition S in the fuzzy case
| The overall federated payoff in the deterministic case
| The overall federated payoff in the fuzzy case
| Participant i’s payoff in the deterministic case
| Participant i’s payoff in the fuzzy case
x | The set of feasible actions that can be taken
| The federated participant’s payoff function in the fuzzy case
| The coalition cost investment of a participant in the fuzzy case
| The penalty condition the supervisor must satisfy to reach Pareto optimality in the fuzzy case
| The fines obtained by the supervisor in the fuzzy case
| The federated participant’s profit in the fuzzy case
| The computational cost function
| The federated cost utility function
| The quality assessment function
| The federated utility function
Appendix A
Appendix A.1. Proof of Theorem 1
Appendix A.2. Proof of Theorem 2
Appendix A.3. Proof of Theorem 3
Appendix A.4. Proof of Theorem 4
Appendix A.5. Proof of Theorem 5
Appendix A.6. Proof of Theorem 6
References
- Konečnỳ, J.; McMahan, H.B.; Ramage, D.; Richtárik, P. Federated optimization: Distributed machine learning for on-device intelligence. arXiv 2016, arXiv:1610.02527. [Google Scholar]
- McMahan, H.B.; Moore, E.; Ramage, D.; y Arcas, B.A. Federated learning of deep networks using model averaging. arXiv 2016, arXiv:1602.05629. [Google Scholar]
- Konečnỳ, J.; McMahan, H.B.; Yu, F.X.; Richtárik, P.; Suresh, A.T.; Bacon, D. Federated learning: Strategies for improving communication efficiency. arXiv 2016, arXiv:1610.05492. [Google Scholar]
- Cicceri, G.; Tricomi, G.; Benomar, Z.; Longo, F.; Puliafito, A.; Merlino, G. Dilocc: An approach for distributed incremental learning across the computing continuum. In Proceedings of the 2021 IEEE International Conference on Smart Computing (SMARTCOMP), Irvine, CA, USA, 23–27 August 2021; pp. 113–120. [Google Scholar]
- Zhang, Z.; Zhang, Y.; Guo, D.; Zhao, S.; Zhu, X. Communication-efficient federated continual learning for distributed learning system with non-iid data. Sci. China Inf. Sci. 2023, 66, 122102. [Google Scholar] [CrossRef]
- Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol. (Tist) 2019, 10, 1–19. [Google Scholar] [CrossRef]
- Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and open problems in federated learning. Found. Trends Mach. Learn. 2021, 14, 1–210. [Google Scholar] [CrossRef]
- Sarikaya, Y.; Ercetin, O. Motivating workers in federated learning: A stackelberg game perspective. IEEE Netw. Lett. 2019, 2, 23–27. [Google Scholar] [CrossRef]
- Yu, H.; Liu, Z.; Liu, Y.; Chen, T.; Cong, M.; Weng, X.; Niyato, D.; Yang, Q. A fairness-aware incentive scheme for federated learning. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 7–9 February 2020; Volume 2020, pp. 393–399. [Google Scholar]
- Li, T.; Sanjabi, M.; Beirami, A.; Smith, V. Fair resource allocation in federated learning. arXiv 2019, arXiv:1905.10497. [Google Scholar]
- Zhan, Y.; Li, P.; Qu, Z.; Zeng, D.; Guo, S. A learning-based incentive mechanism for federated learning. IEEE Internet Things J. 2020, 7, 6360–6368. [Google Scholar] [CrossRef]
- Kang, J.; Xiong, Z.; Niyato, D.; Xie, S.; Zhang, J. Incentive mechanism for reliable federated learning: A joint optimization approach to combining reputation and contract theory. IEEE Internet Things J. 2019, 6, 10700–10714. [Google Scholar] [CrossRef]
- Tran, N.H.; Bao, W.; Zomaya, A.; Nguyen, M.N.; Hong, C.S. Federated learning over wireless networks: Optimization model design and analysis. In Proceedings of the IEEE INFOCOM 2019-IEEE conference on computer communications, Paris, France, 29 April–2 May 2019; pp. 1387–1395. [Google Scholar]
- Kim, H.; Park, J.; Bennis, M.; Kim, S.L. Blockchained on-device federated learning. IEEE Commun. Lett. 2019, 24, 1279–1283. [Google Scholar] [CrossRef]
- Feng, S.; Niyato, D.; Wang, P.; Kim, D.I.; Liang, Y.C. Joint service pricing and cooperative relay communication for federated learning. In Proceedings of the 2019 International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), Atlanta, GA, USA, 14–17 July 2019; pp. 815–820. [Google Scholar]
- Yang, X. An exterior point method for computing points that satisfy second-order necessary conditions for a C1, 1 optimization problem. J. Math. Anal. Appl. 1994, 187, 118–133. [Google Scholar] [CrossRef]
- Wang, Z.; Hu, Q.; Li, R.; Xu, M.; Xiong, Z. Incentive mechanism design for joint resource allocation in blockchain-based federated learning. IEEE Trans. Parallel Distrib. Syst. 2023, 34, 1536–1547. [Google Scholar] [CrossRef]
- Song, T.; Tong, Y.; Wei, S. Profit allocation for federated learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 2577–2586. [Google Scholar]
- Wang, T.; Rausch, J.; Zhang, C.; Jia, R.; Song, D. A principled approach to data valuation for federated learning. In Federated Learning: Privacy and Incentive; Springer: Berlin/Heidelberg, Germany, 2020; pp. 153–167. [Google Scholar]
- Wang, G.; Dang, C.X.; Zhou, Z. Measure contribution of participants in federated learning. In Proceedings of the 2019 IEEE international conference on big data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 2597–2604. [Google Scholar]
- Liu, Y.; Ai, Z.; Sun, S.; Zhang, S.; Liu, Z.; Yu, H. Fedcoin: A peer-to-peer payment system for federated learning. In Federated Learning: Privacy and Incentive; Springer: Berlin/Heidelberg, Germany, 2020; pp. 125–138. [Google Scholar]
- Zeng, R.; Zeng, C.; Wang, X.; Li, B.; Chu, X. A comprehensive survey of incentive mechanism for federated learning. arXiv 2021, arXiv:2106.15406. [Google Scholar]
- Liu, Z.; Chen, Y.; Yu, H.; Liu, Y.; Cui, L. Gtg-shapley: Efficient and accurate participant contribution evaluation in federated learning. ACM Trans. Intell. Syst. Technol. (Tist) 2022, 13, 1–21. [Google Scholar] [CrossRef]
- Nagalapatti, L.; Narayanam, R. Game of gradients: Mitigating irrelevant clients in federated learning. Proc. AAAI Conf. Artif. Intell. 2021, 35, 9046–9054. [Google Scholar] [CrossRef]
- Fan, Z.; Fang, H.; Zhou, Z.; Pei, J.; Friedlander, M.P.; Liu, C.; Zhang, Y. Improving fairness for data valuation in horizontal federated learning. In Proceedings of the 2022 IEEE 38th International Conference on Data Engineering (ICDE), Kuala Lumpur, Malaysia, 9–12 May 2022; pp. 2440–2453. [Google Scholar]
- Fan, Z.; Fang, H.; Zhou, Z.; Pei, J.; Friedlander, M.P.; Zhang, Y. Fair and efficient contribution valuation for vertical federated learning. arXiv 2022, arXiv:2201.02658. [Google Scholar]
- Yang, X.; Tan, W.; Peng, C.; Xiang, S.; Niu, K. Federated Learning Incentive Mechanism Design via Enhanced Shapley Value Method. Wirel. Commun. Mob. Comput. 2022, 2022, 9690657. [Google Scholar] [CrossRef]
- Pene, P.; Liao, W.; Yu, W. Incentive Design for Heterogeneous Client Selection: A Robust Federated Learning Approach. IEEE Internet Things J. 2023, 11, 5939–5950. [Google Scholar] [CrossRef]
- Yang, X.; Xiang, S.; Peng, C.; Tan, W.; Li, Z.; Wu, N.; Zhou, Y. Federated Learning Incentive Mechanism Design via Shapley Value and Pareto Optimality. Axioms 2023, 12, 636. [Google Scholar] [CrossRef]
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics. PMLR, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
- Shapley, L.S. A value for n-person games. Contrib. Theory Games 1953, 2, 307–317. [Google Scholar]
- Murofushi, T.; Sugeno, M. An interpretation of fuzzy measures and the Choquet integral as an integral with respect to a fuzzy measure. Fuzzy Sets Syst. 1989, 29, 201–227. [Google Scholar] [CrossRef]
- Mesiar, R. Fuzzy measures and integrals. Fuzzy Sets Syst. 2005, 156, 365–370. [Google Scholar] [CrossRef]
- Zhou, Z.H.; Yu, Y.; Qian, C. Evolutionary Learning: Advances in Theories and Algorithms; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
- Pardalos, P.M.; Migdalas, A.; Pitsoulis, L. Pareto Optimality, Game Theory and Equilibria; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008; Volume 7. [Google Scholar]
- Nash, J. Non-cooperative games. Ann. Math. 1951, 54, 286–295. [Google Scholar] [CrossRef]
- Ye, M.; Hu, G. Distributed Nash Equilibrium Seeking by a Consensus Based Approach. IEEE Trans. Autom. Control 2017, 62, 4811–4818. [Google Scholar] [CrossRef]
- Kang, J.; Xiong, Z.; Niyato, D.; Yu, H.; Liang, Y.C.; Kim, D.I. Incentive design for efficient federated learning in mobile networks: A contract theory approach. In Proceedings of the 2019 IEEE VTS Asia Pacific Wireless Communications Symposium (APWCS), Singapore, 28–30 August 2019; pp. 1–5. [Google Scholar]
- Pandey, S.R.; Tran, N.H.; Bennis, M.; Tun, Y.K.; Manzoor, A.; Hong, C.S. A crowdsourcing framework for on-device federated learning. IEEE Trans. Wirel. Commun. 2019, 19, 3241–3256. [Google Scholar] [CrossRef]
- Deng, J.; Guo, J.; Zafeiriou, S. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4690–4699. [Google Scholar]
- De Boer, P.T.; Kroese, D.P.; Mannor, S.; Rubinstein, R.Y. A tutorial on the cross-entropy method. Ann. Oper. Res. 2005, 134, 19–67. [Google Scholar] [CrossRef]
- Jauernig, P.; Sadeghi, A.R.; Stapf, E. Trusted execution environments: Properties, applications, and challenges. IEEE Secur. Priv. Mag. 2020, 8, 56–60. [Google Scholar] [CrossRef]
- Wang, H.; Kaplan, Z.; Niu, D.; Li, B. Optimizing federated learning on non-iid data with reinforcement learning. In Proceedings of the IEEE INFOCOM 2020-IEEE Conference on Computer Communications, Toronto, ON, Canada, 6–9 July 2020; pp. 1698–1707. [Google Scholar]
- Karimireddy, S.P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; Suresh, A.T. Scaffold: Stochastic controlled averaging for federated learning. Int. Conf. Mach. Learn. 2019, 119, 5132–5143. [Google Scholar]
- Alchian, A.A.; Demsetz, H. Production, information costs, and economic organization. Am. Econ. Rev. 1972, 62, 777–795. [Google Scholar]
- Holmstrom, B. Moral hazard in teams. Bell J. Econ. 1982, 13, 324–340. [Google Scholar] [CrossRef]
Table: Benefits under Different Coalitions (the benefits obtained by participants 1, 2, and 3 under each coalition).
| Invest and Profit Comparison | Invest (participant 1) | Invest (participant 2) | Invest (participant 3) | Federated Payoff | Maximum Profit |
| --- | --- | --- | --- | --- | --- |
| Pareto optimality | 5.04 | 2.61 | 4.80 | 45.29 | 20.02 |
| Nash equilibrium | 2.94 | 1.36 | 3.73 | 26.66 | 17.01 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yang, X.; Xiang, S.; Peng, C.; Tan, W.; Wang, Y.; Liu, H.; Ding, H. Federated Learning Incentive Mechanism with Supervised Fuzzy Shapley Value. Axioms 2024, 13, 254. https://doi.org/10.3390/axioms13040254