Collusion-Resistant and Reliable Incentive Mechanism for Federated Learning
Abstract
1. Introduction
- We propose a novel collusion-resistant incentive mechanism for federated learning that integrates reputation, reputation reliability, and task complexity, ensuring that only task workers with high reputation, strong reliability, and high capability are rewarded.
- We devise a novel metric named reliability to measure the credibility of both task publishers and task workers.
- We propose the BRCM, built on the basic uncertain information (BUI) model, to compute the reputation and reputation reliability of task workers.
- We conduct extensive experiments to verify the efficiency and efficacy of our proposed schemes. The results demonstrate that our proposed schemes are not only collusion-resistant but also achieve 6.31% higher test accuracy compared with the state of the art on the MNIST dataset.
2. Related Work
3. Preliminaries
3.1. Federated Learning
3.2. Adversary Model
4. Reputation-Based Task Worker Selection Scheme
- Step 1: Broadcasting Tasks and Sending Requests: The task publisher initiates tasks based on its model requirements and sends task requests to task workers, specifying the resource requirements (computing and communication resources) that workers must meet. Idle task workers who satisfy these requirements become candidate task workers by sending requests to join the task.
- Step 2: Selection of Task Workers: First, the task publisher employs the improved multi-weight subjective logic model to compute the comprehensive reputation of each candidate task worker, using both the direct and indirect reputation opinions stored in the TA. Then, the BUI aggregation model derives each task worker’s reputation and reputation reliability from the worker’s comprehensive reputation and the reliability of the task publishers recorded in the TA. Finally, the task publisher selects reliable, high-quality task workers to participate in the task by setting reputation and reputation reliability thresholds (a simplified code sketch of the selection and aggregation steps follows this list). Reputation and reputation reliability are described in detail in Section 5.1.
- Step 3: Local Model Training: The task publisher distributes an initial global model to the selected high-quality task workers. These task workers then conduct several iterations of local training on their local datasets. Upon completion, the resulting local models are transmitted back to the task publisher.
- Step 4: Model Aggregation: In this phase, the task publisher employs an aggregation algorithm, such as federated averaging (FedAvg), to combine the local models received from task workers into an updated global model. The global model is then redistributed to the task workers for a subsequent round of training. Steps 3 and 4 are repeated until the global model satisfies the predetermined accuracy criteria.
- Step 5: Reputation and Reliability Update: After a task is completed, the direct reputation of task workers and the reliability of task publishers stored in the TA are updated based on their performance in the current round. On the one hand, the direct reputation of a task worker is determined by the task publisher from the counts of positive and negative events associated with that worker during the task. On the other hand, assessing the reliability of a task publisher begins by measuring the gap between the direct reputation it assigns to collusive task workers and the reputations assigned by other task publishers. A deviation beyond a set threshold flags the task publisher as potentially unreliable. The suspicious task publisher is then evaluated further on two aspects: its tendency to assign unrealistically high reputations, and the frequency of its interactions with collusive task workers (a sketch of this deviation check also follows this list). Conversely, if the discrepancy stays below the threshold, the task publisher is not suspected of unreliability and its reliability is left unchanged. This ensures a comprehensive and accurate update of reputations and reliability in the TA.
- Step 6: Distribution of Rewards: Finally, the task publisher determines the allocation of rewards based on the performance of task workers during task execution. A task worker’s reputation and its contributions of computational and communication resources are jointly considered as the key factors in determining rewards. Reputation not only directly influences the allocation of rewards but also indirectly affects the incentive mechanism: by setting a reputation threshold, task workers with high reputations are given more opportunities to participate in future tasks and thus earn more rewards over time. This strategy encourages and rewards task workers who provide high-quality work.
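To make the worker-selection and aggregation steps above concrete, the following Python sketch shows threshold-based selection (Step 2) and federated-averaging aggregation (Step 4). It is a minimal illustration under assumed names and values: the thresholds, the dictionaries of BRCM outputs, and the functions `select_workers` and `fed_avg` are not the paper’s notation, and the local models are stand-in NumPy arrays.

```python
import numpy as np

# Hypothetical thresholds; in the scheme the task publisher sets these per task.
REPUTATION_THRESHOLD = 0.5
RELIABILITY_THRESHOLD = 0.5

def select_workers(candidates, reputation, reliability):
    """Step 2: keep candidates whose BRCM reputation and reputation
    reliability both exceed the task publisher's thresholds."""
    return [w for w in candidates
            if reputation[w] >= REPUTATION_THRESHOLD
            and reliability[w] >= RELIABILITY_THRESHOLD]

def fed_avg(local_models, data_sizes):
    """Step 4: federated averaging (FedAvg) -- each local model is weighted
    by its share of the total training data."""
    total = float(sum(data_sizes))
    return sum((n / total) * m for n, m in zip(data_sizes, local_models))

# Toy round: three candidates, two pass both thresholds.
reputation  = {"w1": 0.9, "w2": 0.3, "w3": 0.7}
reliability = {"w1": 0.8, "w2": 0.9, "w3": 0.6}
selected = select_workers(["w1", "w2", "w3"], reputation, reliability)  # ['w1', 'w3']

local_models = [np.full(4, 1.0), np.full(4, 3.0)]   # stand-ins for model parameters
global_model = fed_avg(local_models, data_sizes=[600, 400])             # each entry 1.8
print(selected, global_model)
```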
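The deviation check that triggers a reliability update in Step 5 can be pictured with the short sketch below. The function names, the gap threshold of 0.3, and the linear penalty are assumptions for illustration; only the two follow-up aspects and their weights (0.6 for unrealistically high scores, 0.4 for the frequency of selecting collusive workers, as listed in the simulation settings) come from the scheme.

```python
def publisher_is_suspect(own_score: float, peer_scores: list[float],
                         gap_threshold: float = 0.3) -> bool:
    """Flag a task publisher whose reputation score for a worker deviates
    from the average score given by other publishers by more than the threshold."""
    peer_avg = sum(peer_scores) / len(peer_scores)
    return abs(own_score - peer_avg) > gap_threshold

def reliability_penalty(high_score_rate: float, collusion_select_rate: float,
                        w_high: float = 0.6, w_freq: float = 0.4) -> float:
    """Follow-up evaluation of a suspect publisher: weight its tendency to
    assign unrealistically high reputations against how often it selects
    the same (collusive) workers."""
    return w_high * high_score_rate + w_freq * collusion_select_rate

# A publisher rates a worker 0.95 while its peers rate the worker around 0.4.
if publisher_is_suspect(0.95, [0.42, 0.38, 0.45]):
    penalty = reliability_penalty(high_score_rate=0.8, collusion_select_rate=0.7)
    print(f"suspect publisher; penalty factor {penalty:.2f}")
```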
5. Methodology
5.1. Reputation Calculation Using BUI Aggregation Model
5.1.1. Task Worker Comprehensive Reputation Calculation
Direct Reputation Calculation
Indirect Reputation Calculation
Comprehensive Reputation Calculation
5.1.2. Task Publisher Reliability Calculation
5.2. Incentive Mechanism Based on Reputation Scheme
5.2.1. An Example
- Step 1: Publish Tasks and Scoring Rules: The task publisher releases an FL task, clearly specifying the task requirements and objectives, including the required training data volume and bandwidth resources. Additionally, the task publisher announces the scoring rule (as shown in (15)), which combines the data volume, data types, computational resources, and bandwidth offered by a task worker with the worker’s expected reward p, each resource term carrying its own weight (a simplified scoring sketch follows this list).
- Step 2: Bidding: Task workers interested in participating in the task submit their bids based on the task requirements, providing their resource vectors (r) and the corresponding expected rewards (p).
- Step 3: Select Task Workers: After receiving participation requests from the set of candidate task workers, the task publisher first obtains from the TA not only the reputations of the task workers but also the reliability scores of all relevant task publishers, including its own, since these values are required for a fair and accurate trust evaluation. The task publisher then calculates the reputation and reputation reliability of the task workers using the BRCM. Next, it forms the set of reliable and reputable task workers by applying the reputation and reputation reliability thresholds. Finally, it selects the top K workers from this set as task participants according to the scoring rule and sends the global model to the selected participants.
- Step 4: Train Local Model: After receiving the global model, the selected task worker uses its local data and computing resources to train the local model.
- Step 5: Update the Global Model: The task publisher collects all local models from the task workers and aggregates these local models to update the global model. The updated global model is then fed back to all participating task workers for use in the next round of training.
- Step 6: Complete the Task: Steps 2 through 5 are repeated until the global model reaches the predetermined accuracy target. Finally, the TA updates the reputations of task workers and the reliability of task publishers based on their performance in that round.
- Step 7: Distribute Rewards: The task publisher allocates rewards based on the scores of the task workers, incentivizing nodes to continuously participate and provide high-quality resources.
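As a rough illustration of Steps 1–3, the sketch below scores each bid by a weighted combination of data volume, data types, computational resources, and bandwidth relative to the expected reward p, filters bidders by the reputation and reputation reliability thresholds, and keeps the top K. The linear value-per-reward form of the score is only an assumed reading of scoring rule (15), and the weights are taken from the simulation-settings table (0.3, 0.2, 0.3, 0.2); all identifiers are illustrative.

```python
from dataclasses import dataclass

# Weights for data volume, data types, computational resources, and bandwidth.
WEIGHTS = {"data_volume": 0.3, "data_types": 0.2, "compute": 0.3, "bandwidth": 0.2}

@dataclass
class Bid:
    worker_id: str
    data_volume: float       # resource declarations, assumed normalised to [0, 1]
    data_types: float
    compute: float
    bandwidth: float
    expected_reward: float   # p: the reward the worker asks for

def score(bid: Bid) -> float:
    """Assumed form of scoring rule (15): weighted resource value per unit of
    requested reward, so better-provisioned, cheaper bids rank higher."""
    value = (WEIGHTS["data_volume"] * bid.data_volume
             + WEIGHTS["data_types"] * bid.data_types
             + WEIGHTS["compute"] * bid.compute
             + WEIGHTS["bandwidth"] * bid.bandwidth)
    return value / bid.expected_reward

def select_top_k(bids, reputation, reliability, rep_th, rel_th, k):
    """Step 3: discard bidders below either trust threshold, then keep the
    k highest-scoring bids as task participants."""
    eligible = [b for b in bids
                if reputation[b.worker_id] >= rep_th
                and reliability[b.worker_id] >= rel_th]
    return sorted(eligible, key=score, reverse=True)[:k]

# Toy round: w2 fails the reputation threshold; w3 wins on score per reward.
bids = [Bid("w1", 0.9, 0.5, 0.8, 0.6, expected_reward=1.0),
        Bid("w2", 0.4, 0.9, 0.5, 0.9, expected_reward=0.8),
        Bid("w3", 0.7, 0.7, 0.7, 0.7, expected_reward=0.5)]
reputation  = {"w1": 0.9, "w2": 0.2, "w3": 0.8}
reliability = {"w1": 0.7, "w2": 0.9, "w3": 0.6}
winners = select_top_k(bids, reputation, reliability, rep_th=0.5, rel_th=0.5, k=1)
print([b.worker_id for b in winners])   # ['w3']
```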
5.2.2. Analysis
6. Performance Evaluation
6.1. Dataset
6.2. Simulation Setting
6.3. Experiments
6.3.1. Experiment 1
6.3.2. Experiment 2
6.3.3. Experiment 3
6.3.4. Experiment 4
6.3.5. Experiment 5
6.3.6. Experiment 6
7. Discussion
8. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Neto, N.N.; Madnick, S.; Paula, A.M.G.D.; Borges, N.M. Developing a global data breach database and the challenges encountered. J. Data Inf. Qual. 2021, 13, 1–33.
- Li, L.; Fan, Y.; Tse, M.; Lin, K.-Y. A review of applications in federated learning. Comput. Ind. Eng. 2020, 149, 106854.
- Khan, L.U.; Saad, W.; Han, Z.; Hossain, E.; Hong, C.S. Federated learning for internet of things: Recent advances, taxonomy, and open challenges. IEEE Commun. Surv. Tutor. 2021, 23, 1759–1799.
- An, X.; Wang, D.; Shen, L.; Luo, Y.; Hu, H.; Du, B.; Wen, Y.; Tao, D. Federated Learning with Only Positive Labels by Exploring Label Correlations. IEEE Trans. Neural Netw. Learn. Syst. 2024, 36, 7651–7665.
- Han, S.; Ding, H.; Zhao, S.; Ren, S.; Wang, Z.; Lin, J.; Zhou, S. Practical and robust federated learning with highly scalable regression training. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 13801–13815.
- Kang, J.; Xiong, Z.; Niyato, D.; Xie, S.; Zhang, J. Incentive mechanism for reliable federated learning: A joint optimization approach to combining reputation and contract theory. IEEE Internet Things J. 2019, 6, 10700–10714.
- Yu, G.; Wang, X.; Sun, C.; Wang, Q.; Yu, P.; Ni, W.; Liu, R.P. IronForge: An open, secure, fair, decentralized federated learning. IEEE Trans. Neural Netw. Learn. Syst. 2023, 36, 354–368.
- Shi, Y.; Yu, H.; Leung, C. Towards fairness-aware federated learning. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 11922–11938.
- Lyu, L.; Yu, H.; Ma, X.; Chen, C.; Sun, L.; Zhao, J.; Yang, Q.; Yu, P.S. Privacy and robustness in federated learning: Attacks and defenses. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 8726–8746.
- Jebreel, N.M.; Domingo-Ferrer, J.; Blanco-Justicia, A.; Sánchez, D. Enhanced security and privacy via fragmented federated learning. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 6703–6717.
- Li, K.; Yuan, X.; Zheng, J.; Ni, W.; Dressler, F.; Jamalipour, A. Leverage Variational Graph Representation for Model Poisoning on Federated Learning. IEEE Trans. Neural Netw. Learn. Syst. 2024, 36, 116–128.
- Kang, J.; Xiong, Z.; Niyato, D.; Zou, Y.; Zhang, Y.; Guizani, M. Reliable federated learning for mobile networks. IEEE Wirel. Commun. 2020, 27, 72–80.
- Xu, X.; Lyu, L. A reputation mechanism is all you need: Collaborative fairness and adversarial robustness in federated learning. arXiv 2020, arXiv:2011.10464.
- Song, Z.; Sun, H.; Yang, H.H.; Wang, X.; Zhang, Y.; Quek, T.Q.S. Reputation-based federated learning for secure wireless networks. IEEE Internet Things J. 2021, 9, 1212–1226.
- Wang, Y.; Kantarci, B. Reputation-enabled federated learning model aggregation in mobile platforms. In Proceedings of the ICC 2021 - IEEE International Conference on Communications, Montreal, QC, Canada, 14–23 June 2021; pp. 1–6.
- ur Rehman, M.H.; Salah, K.; Damiani, E.; Svetinovic, D. Towards blockchain-based reputation-aware federated learning. In Proceedings of the IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Toronto, ON, Canada, 6–9 July 2020; pp. 183–188.
- Gao, L.; Li, L.; Chen, Y.; Zheng, W.; Xu, C.; Xu, M. FIFL: A fair incentive mechanism for federated learning. In Proceedings of the 50th International Conference on Parallel Processing, Lemont, IL, USA, 9–12 August 2021; pp. 1–10.
- Zheng, Z.; Qin, Z.; Li, D.; Li, K.; Xu, G. A holistic client selection scheme in federated mobile crowdsensing based on reverse auction. In Proceedings of the 2022 IEEE 25th International Conference on Computer Supported Cooperative Work in Design (CSCWD), Hangzhou, China, 4–6 May 2022; pp. 1305–1310.
- Zhang, W.; Lin, Y.; Xiao, S.; Wu, J.; Zhou, S. Privacy preserving ranked multi-keyword search for multiple data owners in cloud computing. IEEE Trans. Comput. 2015, 65, 1566–1577.
- Xu, G.; Li, H.; Liu, S.; Yang, K.; Lin, X. VerifyNet: Secure and verifiable federated learning. IEEE Trans. Inf. Forensics Secur. 2019, 15, 911–926.
- Fu, A.; Zhang, X.; Xiong, N.; Gao, Y.; Wang, H.; Zhang, J. VFL: A verifiable federated learning with privacy-preserving for big data in industrial IoT. IEEE Trans. Ind. Inform. 2020, 18, 3316–3326.
- Guo, X.; Liu, Z.; Li, J.; Gao, J.; Hou, B.; Dong, C.; Baker, T. VeriFL: Communication-efficient and fast verifiable aggregation for federated learning. IEEE Trans. Inf. Forensics Secur. 2020, 16, 1736–1751.
- Kalapaaking, A.P.; Khalil, I.; Yi, X.; Lam, K.-Y.; Huang, G.-B.; Wang, N. Auditable and Verifiable Federated Learning Based on Blockchain-Enabled Decentralization. IEEE Trans. Neural Netw. Learn. Syst. 2024, 36, 102–115.
- Kang, J.; Xiong, Z.; Niyato, D.; Ye, D.; Kim, D.I.; Zhao, J. Toward Secure Blockchain-Enabled Internet of Vehicles: Optimizing Consensus Management Using Reputation and Contract Theory. IEEE Trans. Veh. Technol. 2019, 68, 2906–2920.
- Jin, L.; Mesiar, R.; Borkotokey, S.; Kalina, M. Certainty aggregation and the certainty fuzzy measures. Int. J. Intell. Syst. 2018, 33, 759–770.
- Yang, Y.; Chen, Z.-S.; Pedrycz, W.; Gómez, M.; Bustince, H. Using I-subgroup-based weighted generalized interval t-norms for aggregating basic uncertain information. Fuzzy Sets Syst. 2024, 476, 108771.
- Dang, T.K.; Tran-Truong, P.T.; Trang, N.T.H. An Enhanced Incentive Mechanism for Crowdsourced Federated Learning Based on Contract Theory and Shapley Value. In Proceedings of the International Conference on Future Data and Security Engineering, Da Nang, Vietnam, 22–24 November 2023; pp. 18–33.
- Wang, S.; Zhao, H.; Wen, W.; Xia, W.; Wang, B.; Zhu, H. Contract Theory Based Incentive Mechanism for Clustered Vehicular Federated Learning. IEEE Trans. Intell. Transp. Syst. 2024, 25, 8134–8147.
- Xuan, S.; Wang, M.; Zhang, J.; Wang, W.; Man, D.; Yang, W. An Incentive Mechanism Design for Federated Learning with Multiple Task Publishers by Contract Theory Approach. Inf. Sci. 2024, 644, 120330.
- Lyu, H.; Zhang, Y.; Wang, C.; Long, S.; Guo, S. Federated learning privacy incentives: Reverse auctions and negotiations. CAAI Trans. Intell. Technol. 2023, 8, 1538–1557.
- Le, T.H.T.; Tran, N.H.; Tun, Y.K.; Nguyen, M.N.H.; Pandey, S.R.; Han, Z.; Hong, C.S. An incentive mechanism for federated learning in wireless cellular networks: An auction approach. IEEE Trans. Wirel. Commun. 2021, 20, 4874–4887.
- Zhao, N.; Pei, Y.; Liang, Y.-C.; Niyato, D. Multi-agent deep reinforcement learning based incentive mechanism for multi-task federated edge learning. IEEE Trans. Veh. Technol. 2023, 72, 13530–13535.
- Guo, W.; Wang, Y.; Jiang, P. Incentive mechanism design for Federated Learning with Stackelberg game perspective in the industrial scenario. Comput. Ind. Eng. 2023, 184, 109592.
- Chen, Y.; Zhou, H.; Li, T.; Li, J.; Zhou, H. Multi-factor incentive mechanism for federated learning in IoT: A Stackelberg game approach. IEEE Internet Things J. 2023, 10, 21595–21606.
- Zhang, J.; Wu, Y.; Pan, R. Incentive mechanism for horizontal federated learning based on reputation and reverse auction. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; pp. 947–956.
- Xiong, A.; Chen, Y.; Chen, H.; Chen, J.; Yang, S.; Huang, J.; Li, Z.; Guo, S. A truthful and reliable incentive mechanism for federated learning based on reputation mechanism and reverse auction. Electronics 2023, 12, 517.
- Tang, Y.; Liang, Y.; Liu, Y.; Zhang, J.; Ni, L.; Qi, L. Reliable federated learning based on dual-reputation reverse auction mechanism in Internet of Things. Future Gener. Comput. Syst. 2024, 156, 269–284.
- Smith, V.; Chiang, C.-K.; Sanjabi, M.; Talwalkar, A.S. Federated multi-task learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS’17), Long Beach, CA, USA, 4–9 December 2017; Volume 30.
- Zou, Y.; Shen, F.; Yan, F.; Lin, J.; Qiu, Y. Reputation-based regional federated learning for knowledge trading in blockchain-enhanced IoV. In Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC), Nanjing, China, 29 March–1 April 2021; pp. 1–6.
- Ziller, A.; Trask, A.; Lopardo, A.; Szymkow, B.; Wagner, B.; Bluemke, E.; Nounahon, J.-M.; Passerat-Palmbach, J.; Prakash, K.; Rose, N.; et al. PySyft: A library for easy federated learning. In Federated Learning Systems: Towards Next-Generation AI; Springer: Cham, Switzerland, 2021; pp. 111–139.
- Mothukuri, V.; Parizi, R.M.; Pouriyeh, S.; Huang, Y.; Dehghantanha, A.; Srivastava, G. A survey on security and privacy of federated learning. Future Gener. Comput. Syst. 2021, 115, 619–640.
- Deng, Y.; Kamani, M.M.; Mahdavi, M. Adaptive personalized federated learning. arXiv 2020, arXiv:2003.13461.
- McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; Agüera y Arcas, B. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282.
- Hard, A.; Rao, K.; Mathews, R.; Ramaswamy, S.; Beaufays, F.; Augenstein, S.; Eichner, H.; Kiddon, C.; Ramage, D. Federated learning for mobile keyboard prediction. arXiv 2018, arXiv:1811.03604.
- Zeng, R.; Zhang, S.; Wang, J.; Chu, X. FMore: An incentive scheme of multi-dimensional auction for federated learning in MEC. In Proceedings of the 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS), Singapore, 29 November–1 December 2020; pp. 278–288.

| Reference | Method | Key Idea | Limitation |
|---|---|---|---|
| Kang et al. (2019, 2020) [6,12] | Reputation + Contract | Combines reputation with contract theory. | Increased complexity; limited scalability. |
| Xu et al. (2020) [13]; Song et al. (2021) [14]; Wang et al. (2021) [15] | Reputation-based FL | Credibility via local–global model similarity. | Ignores multi-publisher settings. |
| Gao et al. (2021) [17]; Zhang et al. (2021) [35] | Reputation + Auction | Rewards based on contribution and reputation. | Weak resistance to collusion. |
| Dang et al. (2023) [27]; Wang et al. (2024) [28] | Contract theory | Differentiated contracts by data quality and resources. | Low adaptability in dynamic environments. |
| Lyu et al. (2023) [30]; Le et al. (2021) [31] | Reverse auction/Auction | Participant selection via bidding mechanism. | Prone to collusion and dishonest bids. |
| Zhao et al. (2023) [32] | Deep reinforcement learning | Adaptive incentive adjustment through reinforcement learning. | High training and computation cost. |
| Guo et al. (2023) [33]; Chen et al. (2023) [34] | Stackelberg game | Leader–follower incentive optimization. | Hard to reach equilibrium under malicious behavior. |
| Role | Role Description |
|---|---|
| Task workers | Task workers are participants responsible for receiving tasks, performing local model training, and returning the results to the central server. |
| Task publishers | Task publishers are the participants who define and distribute learning tasks. They are mainly responsible for coordinating task allocation and for collecting and aggregating the training results of task workers. |
| Collusive task workers | Collusive task workers are low-quality task workers who seek to falsely enhance their reputation by colluding with dishonest task publishers. |
| Low-quality task workers | Low-quality task workers are participants who may provide inaccurate or unreliable model updates due to poor data quality or insufficient computing power. |
| High-quality task workers | High-quality task workers are participants who possess high-quality data and reliable computing power and can provide accurate and valuable model updates. |
| Trusted authority (TA) | The TA is an institution trusted by all participating devices. It stores the reputation and reputation reliability of task workers as well as the reliability of task publishers, and it is responsible for calculating and updating the reliability of task publishers. |
| Dishonest task publishers | Dishonest task publishers are task publishers who have been bribed by task workers to repeatedly publish special tasks. |
| Honest task publishers | Honest task publishers are normal task publishers who will not participate in collusion. |
| Attribute | MNIST | CIFAR-10 |
|---|---|---|
| Image Size | 28 × 28 | 32 × 32 |
| Category Type | Digits | Objects |
| Training Samples | 60,000 | 50,000 |
| Test Samples | 10,000 | 10,000 |
| Color Type | Grayscale | RGB |
| Parameter | Setting |
|---|---|
| Level of impact of uncertainty | 0.8 |
| Learning rate | 0.01 |
| Batch size (b) | 64 |
| Weight of negative events | 0.6 |
| Weight of positive events | 0.4 |
| Weight of unrealistically high scores | 0.6 |
| Frequency of selecting collusive workers | 0.4 |
| Weight of the reputation value provided by honest task publishers | 0.8 |
| Initial value of belief | 0.8 |
| Initial value of uncertainty | 0.1 |
| Weight of data volume for task workers | 0.3 |
| Weight of data types for task workers | 0.2 |
| Weight of computational resources for task workers | 0.3 |
| Weight of bandwidth for task workers | 0.2 |
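For orientation, the fragment below shows how settings of this kind (the positive/negative event weights, the initial uncertainty, and the level of impact of uncertainty) typically enter a multi-weight subjective-logic reputation computation. This is a minimal sketch following the standard subjective-logic formulation used in reputation-based federated learning (e.g., Kang et al. [6]); the function name and the exact combination of terms are assumptions rather than the paper’s BRCM.

```python
def direct_reputation(pos_events: int, neg_events: int,
                      w_pos: float = 0.4, w_neg: float = 0.6,
                      uncertainty: float = 0.1, impact: float = 0.8) -> float:
    """Weighted subjective-logic style direct reputation: belief is the weighted
    share of positive interactions scaled by the certain mass (1 - uncertainty),
    and the uncertainty mass is added back discounted by its impact factor.
    Default values mirror the table above; the paper's exact formula may differ."""
    alpha = w_pos * pos_events
    beta = w_neg * neg_events
    if alpha + beta == 0:
        belief = 0.8   # initial value of belief from the table
    else:
        belief = (1.0 - uncertainty) * alpha / (alpha + beta)
    return belief + impact * uncertainty

# A worker with 9 positive and 1 negative event:
# alpha = 3.6, beta = 0.6 -> belief = 0.9 * 3.6 / 4.2 ≈ 0.771, reputation ≈ 0.851
print(direct_reputation(9, 1))
```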