Towards Collaborative Edge Intelligence: Blockchain-Based Data Valuation and Scheduling for Improved Quality of Service
Abstract
1. Introduction
- A time-dependent incentive mechanism is proposed atop blockchain to diversify model rewards. It improves the QoS of collaborative edge intelligence while preventing free-riders from exploiting the system (a minimal sketch of such a reward schedule follows this list).
- We propose a decentralized data valuation method that combines cross-validation and DPoS consensus to mitigate the information loss of DML on heterogeneous data. The valuation function achieves group rationality, fairness and additivity at the network edge.
- To maximize the marginal utility of data samples, a curriculum data scheduling approach is designed. With an adaptive moving window, data scheduling becomes more efficient and incurs lower latency.
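The actual blockchain incentive model is defined in Section 3.5; purely as an illustration of a time-dependent, contribution-weighted reward, consider the sketch below. The exponential decay form and the parameter names (base_reward, decay) are assumptions for illustration, not the paper's formula.

```python
import math

def round_reward(valuation: float, total_valuation: float, round_idx: int,
                 base_reward: float = 100.0, decay: float = 0.05) -> float:
    """Illustrative time-dependent reward (assumed form, not the paper's).

    A contributor's share of the per-round reward pool is proportional to its
    data valuation; the pool shrinks over rounds, so early useful contributions
    earn more. A zero valuation (free-rider) earns nothing.
    """
    if total_valuation <= 0 or valuation <= 0:
        return 0.0  # free-riders receive no reward
    pool = base_reward * math.exp(-decay * round_idx)  # assumed decay schedule
    return pool * (valuation / total_valuation)

# Example: two honest contributors and one free-rider in round 3.
vals = {"phone_a": 0.7, "phone_b": 0.3, "free_rider": 0.0}
total = sum(vals.values())
print({n: round_reward(v, total, round_idx=3) for n, v in vals.items()})
```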
2. Related Works
2.1. The Integration of Blockchain and Edge Intelligence
2.2. Data Valuation for Collaborative Edge Intelligence
3. System Model and Problem Formulation
3.1. Network Model
3.2. Task Model
3.3. Training Model
3.4. Threat Model
3.5. Blockchain Incentive Model
3.6. Problem Formulation
4. Decentralized Data Valuation
4.1. Block Generation Workflow
4.2. KL Divergence in Distributed Data
4.3. Decentralized Calculation of Data Valuation
- (1) Group rationality: The valuation of the per-round data contribution is completely distributed among all data contributors, i.e., $\sum_{n \in \mathcal{N}} v_n^r = v^r(\mathcal{N})$.
- (2) Fairness: Two data contributors with identical data contributions should have the same valuation, i.e., $v_i^r = v_j^r$ if datasets $\mathcal{D}_i$ and $\mathcal{D}_j$ are identical; a free-rider n whose data add zero marginal contribution for all other smartphones has zero valuation, i.e., $v_n^r = 0$.
- (3) Additivity: In any round r, the data valuation of multiple data contributors equals the sum of the data valuations of the individual data contributors, i.e., $v^r(\mathcal{D}_i \cup \mathcal{D}_j) = v_i^r + v_j^r$. (A toy sketch checking these properties follows this list.)
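To make these properties concrete, the toy sketch below computes exact Shapley values for a small additive utility and checks group rationality and the free-rider case. Shapley valuation is used here only as a standard stand-in; the decentralized estimator the paper actually runs at the edge is defined by (14) and (15).

```python
from itertools import combinations
from math import factorial

def shapley_values(players, utility):
    """Exact Shapley values; exponential cost, so toy sizes only."""
    n = len(players)
    values = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = utility(set(coalition) | {p}) - utility(set(coalition))
                total += weight * marginal
        values[p] = total
    return values

# Toy utility: additive validation-accuracy gains; "c" is a free-rider.
gains = {"a": 0.4, "b": 0.2, "c": 0.0}
utility = lambda coalition: sum(gains[p] for p in coalition)

v = shapley_values(list(gains), utility)
assert abs(sum(v.values()) - utility({"a", "b", "c"})) < 1e-9  # group rationality
assert abs(v["c"]) < 1e-9  # a free-rider has zero valuation
```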
5. Curriculum Data Scheduling
5.1. Marginal Utility of Data Samples
5.2. Principles of Curriculum Learning at the Network Edge
- (1) Score functions should depend on the global model: A score function maps any instance in $\mathcal{D}_n$ to a numerical value. As (8) aims to maximize the utility of the global model, this score function should rely solely on the global model.
- (2) Pacing functions should be monotonically increasing: A pacing function determines the number of data samples scheduled per round. Intuitively, improving performance becomes harder as the global model converges, so more data should be scheduled in later rounds to push the performance of the global model.
- (3) The difficulty level should increase progressively: As model training progresses through successive rounds, the average difficulty level of the per-round training dataset $\mathcal{D}_n^r$ is expected to increase with the round number r. Furthermore, $\mathcal{D}_n^r$ should be sorted so that the difficulty level of instances also increases progressively during local training.
- (4) The amount of data learned per round should be optimized: In a typical edge system, latency is a key QoS factor. Therefore, the number of data samples processed per round should be controlled and optimized; otherwise, curriculum learning may be impractical at the network edge. (A minimal sketch of a score and a pacing function follows this list.)
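As an illustration of principles (1) and (2), the sketch below scores each sample by its loss under the current global model (so the score depends only on the global model) and paces the per-round data volume with a monotonically increasing function. Both concrete forms are assumptions; the paper's own score and window definitions are (20) and (26).

```python
import torch
import torch.nn.functional as F

def difficulty_scores(global_model: torch.nn.Module,
                      xs: torch.Tensor, ys: torch.Tensor) -> torch.Tensor:
    """Per-sample loss under the global model as a difficulty score.

    Depends only on the global model, per principle (1); the loss-based
    form is an assumption standing in for (20)."""
    global_model.eval()
    with torch.no_grad():
        return F.cross_entropy(global_model(xs), ys, reduction="none")

def pacing(round_idx: int, total_rounds: int, dataset_size: int,
           start_frac: float = 0.2) -> int:
    """Monotonically increasing pacing function, per principle (2).

    Linearly grows the scheduled fraction toward 1.0; the linear form is an
    assumption (the paper optimizes the window via (26))."""
    frac = start_frac + (1.0 - start_frac) * round_idx / max(1, total_rounds - 1)
    return max(1, min(dataset_size, int(frac * dataset_size)))
```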
5.3. Optimized Data Scheduling with an Adaptive Window
6. Algorithm Design for Improved QoS
6.1. An Optimized Participation Strategy at the Network Edge
6.2. Decentralized Algorithm Design
Algorithm 1 Proposed curriculum data scheduling for any smartphone n in round r.
Input: the global model weights, the private local dataset $\mathcal{D}_n$ and the latency requirement.
Output: the local training dataset $\mathcal{D}_n^r$ for the current round.
1: for each instance $x \in \mathcal{D}_n$ do // obtain difficulty scores.
2:   Calculate the difficulty score of $x$ by (20);
3: end for
4: Sort the difficulty scores in ascending order.
5: Obtain the sorted dataset by reordering $\mathcal{D}_n$ accordingly.
6: Obtain $\mathcal{D}_n^r$ according to (26).
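Under the same assumptions as the sketch in Section 5.2 (whose difficulty_scores and pacing helpers are reused here, with a loss-based score standing in for (20) and a pacing function standing in for the window optimization in (26)), Algorithm 1 reduces to a few lines:

```python
import torch

def schedule_round_data(global_model, xs, ys, round_idx, total_rounds):
    """Sketch of Algorithm 1: score, sort ascending, take the paced prefix."""
    scores = difficulty_scores(global_model, xs, ys)  # steps 1-3
    order = torch.argsort(scores)                     # step 4: ascending difficulty
    xs_sorted, ys_sorted = xs[order], ys[order]       # step 5
    k = pacing(round_idx, total_rounds, len(xs))      # stands in for (26)
    return xs_sorted[:k], ys_sorted[:k]               # step 6
```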
Algorithm 2 Proposed aggregation algorithm for round r.
Input: the local model weights, data valuations and data quantities of all smartphones.
Output: the global model weights for round r.
1: for each smartphone n do // decentralized data valuation.
2:   for each delegated server i do
3:     Calculate the cross-validation result by (14);
4:   end for
5: end for
6: for each delegated server m do // delegated servers conduct model aggregation.
7:   if m is the round-robin leader then // a round-robin leader conducts aggregation.
8:     for each smartphone n do
9:       Calculate the data valuation by (15);
10:      Calculate the marginal utility by (19);
11:    end for
12:    Form aggregation weights from the data valuations;
13:    Form aggregation weights from the marginal utilities;
14:    Compute a candidate global model by aggregation with the valuation of data samples;
15:    Compute a candidate global model by aggregation with the data marginal utility;
16:    if the valuation-based candidate results in greater validation accuracy then
17:      Adopt it as the global model;
18:    else
19:      Adopt the marginal-utility-based candidate.
20:    end if
21:  end if
22: end for
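A minimal sketch of the leader's aggregation choice (steps 12-20): two weighted averages of the local models, one weighted by data valuations and one by marginal utilities, keeping whichever candidate validates better. The evaluate() helper, which returns validation accuracy for a set of weights, is an assumed stand-in for the leader's validation step.

```python
import copy

def weighted_average(local_weights, weights):
    """FedAvg-style weighted average of PyTorch state dicts."""
    total = sum(weights)
    avg = copy.deepcopy(local_weights[0])
    for key in avg:
        avg[key] = sum(w * sd[key] for sd, w in zip(local_weights, weights)) / total
    return avg

def aggregate_round(local_weights, valuations, marginal_utilities, evaluate):
    """Sketch of Algorithm 2, steps 12-20; evaluate(weights) -> accuracy."""
    cand_val = weighted_average(local_weights, valuations)           # step 14
    cand_util = weighted_average(local_weights, marginal_utilities)  # step 15
    if evaluate(cand_val) >= evaluate(cand_util):                    # step 16
        return cand_val                                              # step 17
    return cand_util                                                 # step 19
```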
7. Performance Evaluations
7.1. Experiment Settings and Benchmarks
- (1) Label distribution skew: For the HAR dataset, we assume each smartphone owns the same number of samples. Six human activities are labeled: sitting, lying down, walking, going upstairs, going downstairs and standing. A total of 34,440 data samples are assigned to smartphones according to a Dirichlet distribution [45]. To match real-world data distributions, we set the concentration parameter to 0.5 [46].
- (2) Label and data quantity skew: For the CIFAR-10 and CIFAR-100 datasets, we consider that each smartphone owns at most 4 out of 10 labels for CIFAR-10 and at most 8 out of 100 labels for CIFAR-100. Note that we do not consider the simple cases in which labels or the quantity of training data are uniformly distributed. We process the CIFAR-10 samples (10 classes, with 6000 images per class) and the CIFAR-100 samples (100 classes, with 600 images per class) to form our synthetic non-IID datasets based on [46]. Specifically, we simulate label skew by assigning a random subset of classes to each smartphone, with the number of classes per phone generated by the function random.randint( ). We then create quantity skew by using a Dirichlet distribution to allocate different amounts of data for each class to different smartphones; the function numpy.random.dirichlet( ) yields non-uniform data quantities across smartphones (https://github.com/IBM/probabilistic-federated-neural-matching/blob/master/experiment.py accessed on 30 June 2024). To simulate different degrees of skew, we set the concentration parameter to 10 and 0.5 for CIFAR-10 and CIFAR-100, respectively. (A minimal sketch of this partitioning follows.)
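As a minimal sketch of the label-and-quantity skew described above (following the cited IBM partitioning script), the helper below draws a random class subset per smartphone and splits each class across its owners with Dirichlet shares. The function name, the number of phones and the seed handling are illustrative assumptions.

```python
import random
import numpy as np

def partition_non_iid(labels: np.ndarray, num_phones: int,
                      max_classes: int, alpha: float, seed: int = 0):
    """Label skew via random class subsets; quantity skew via Dirichlet shares."""
    rng = np.random.default_rng(seed)
    random.seed(seed)
    num_classes = int(labels.max()) + 1
    # Label skew: each phone owns a random number of random classes.
    phone_classes = [random.sample(range(num_classes),
                                   random.randint(1, max_classes))
                     for _ in range(num_phones)]
    assignment = {p: [] for p in range(num_phones)}
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        owners = [p for p in range(num_phones) if c in phone_classes[p]]
        if not owners:
            continue  # no phone drew this class
        # Quantity skew: Dirichlet shares split this class across its owners.
        shares = rng.dirichlet([alpha] * len(owners))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for p, chunk in zip(owners, np.split(idx, cuts)):
            assignment[p].extend(chunk.tolist())
    return assignment

# Example: CIFAR-10-style labels (10 classes, 6000 images per class).
labels = np.repeat(np.arange(10), 6000)
parts = partition_non_iid(labels, num_phones=20, max_classes=4, alpha=10.0)
```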
7.2. Results and Discussion
7.2.1. QoS Improvement with Heterogeneous Data
7.2.2. Effectiveness of Data Scheduling at the Network Edge
7.2.3. Robustness against Free-Riding Attacks
8. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
ML | Machine learning |
AI | Artificial intelligence |
IoT | Internet of Things |
QoS | Quality of service |
DML | Distributed machine learning |
FL | Federated learning |
SGD | Stochastic gradient descent |
SPOF | Single point of failure |
DNN | Deep neural network |
non-IID | Not independent and identically distributed |
PoW | Proof-of-work |
MEC | Multi-access edge computing |
DPoS | Delegated proof-of-stake |
UMKP | Unbounded multiple knapsack problem |
KL | Kullback–Leibler |
HAR | Human activity recognition |
CIFAR | Canadian Institute for Advanced Research |
SOTA | State-of-the-art |
References
- Li, Z.; Wallace, E.; Shen, S.; Lin, K.; Keutzer, K.; Klein, D.; Gonzalez, J. Train big, then compress: Rethinking model size for efficient training and inference of transformers. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 5958–5968. [Google Scholar]
- Letaief, K.B.; Shi, Y.; Lu, J.; Lu, J. Edge artificial intelligence for 6G: Vision, enabling technologies, and applications. IEEE J. Sel. Areas Commun. 2021, 40, 5–36. [Google Scholar] [CrossRef]
- Yarkoni, T.; Westfall, J. Choosing prediction over explanation in psychology: Lessons from machine learning. Perspect. Psychol. Sci. 2017, 12, 1100–1122. [Google Scholar] [CrossRef]
- Lim, W.Y.B.; Luong, N.C.; Hoang, D.T.; Jiao, Y.; Liang, Y.C.; Yang, Q.; Niyato, D.; Miao, C. Federated Learning in Mobile Edge Networks: A Comprehensive Survey. IEEE Commun. Surv. Tutor. 2020, 22, 2031–2063. [Google Scholar] [CrossRef]
- Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Process. Mag. 2020, 37, 50–60. [Google Scholar] [CrossRef]
- Wang, B.; Li, H.; Liu, X.; Guo, Y. Frad: Free-rider attacks detection mechanism for federated learning in AIoT. IEEE Internet Things J. 2023, 11, 4377–4388. [Google Scholar] [CrossRef]
- Lin, J.; Du, M.; Liu, J. Free-riders in Federated Learning: Attacks and Defenses. arXiv 2019, arXiv:1911.12560. [Google Scholar]
- Fraboni, Y.; Vidal, R.; Lorenzi, M. Free-rider attacks on model aggregation in federated learning. In Proceedings of the Artificial Intelligence and Statistics, Virtual, 13–15 April 2021; pp. 1846–1854. [Google Scholar]
- Strickland, E. Andrew Ng, AI Minimalist: The Machine-Learning Pioneer Says Small is the New Big. IEEE Spectr. 2022, 59, 22–50. [Google Scholar] [CrossRef]
- Abarbanel, H.D.; Rozdeba, P.J.; Shirman, S. Machine learning: Deepest learning as statistical data assimilation problems. Neural Comput. 2018, 30, 2025–2055. [Google Scholar] [CrossRef]
- Rausch, O.; Ben-Nun, T.; Dryden, N.; Ivanov, A.; Li, S.; Hoefler, T. A data-centric optimization framework for machine learning. In Proceedings of the ACM International Conference on Supercomputing, Virtual, 28–30 June 2022; pp. 1–13. [Google Scholar]
- Nguyen, D.C.; Ding, M.; Pham, Q.V.; Pathirana, P.N.; Le, L.B.; Seneviratne, A.; Li, J.; Niyato, D.; Poor, H.V. Federated Learning Meets Blockchain in Edge Computing: Opportunities and Challenges. IEEE Internet Things J. 2021, 8, 12806–12825. [Google Scholar] [CrossRef]
- Xiao, Y.; Zhang, N.; Lou, W.; Hou, Y.T. A Survey of Distributed Consensus Protocols for Blockchain Networks. IEEE Commun. Surv. Tutor. 2020, 22, 1432–1465. [Google Scholar] [CrossRef]
- Zhang, X.; Li, Y.; Li, W.; Guo, K.; Shao, Y. Personalized federated learning via variational bayesian inference. In Proceedings of the International Conference on Machine Learning, Baltimore, MD, USA, 17–23 July 2022; pp. 26293–26310. [Google Scholar]
- Wang, X.; Ren, X.; Qiu, C.; Xiong, Z.; Yao, H.; Leung, V.C. Integrating edge intelligence and blockchain: What, why, and how. IEEE Commun. Surv. Tutor. 2022, 24, 2193–2229. [Google Scholar] [CrossRef]
- Zhang, K.; Zhu, Y.; Maharjan, S.; Zhang, Y. Edge intelligence and blockchain empowered 5G beyond for the industrial Internet of Things. IEEE Netw. 2019, 33, 12–19. [Google Scholar] [CrossRef]
- Du, Y.; Wang, Z.; Leung, C.; Leung, V.C. Accelerating and Securing Blockchain-enabled Distributed Machine Learning. IEEE Trans. Mob. Comput. 2023, 23, 6712–6730. [Google Scholar] [CrossRef]
- Wang, J.; Li, M.; He, Y.; Li, H.; Xiao, K.; Wang, C. A blockchain based privacy-preserving incentive mechanism in crowdsensing applications. IEEE Access 2018, 6, 17545–17556. [Google Scholar] [CrossRef]
- Qiu, C.; Yao, H.; Wang, X.; Zhang, N.; Yu, F.R.; Niyato, D. AI-chain: Blockchain energized edge intelligence for beyond 5G networks. IEEE Netw. 2020, 34, 62–69. [Google Scholar] [CrossRef]
- Wang, X.; Shankar, A.; Li, K.; Parameshachari, B.; Lv, J. Blockchain-Enabled Decentralized Edge Intelligence for Trustworthy 6G Consumer Electronics. IEEE Trans. Consum. Electron. 2024, 70, 1214–1225. [Google Scholar] [CrossRef]
- Xu, C.; Ge, J.; Li, Y.; Deng, Y.; Gao, L.; Zhang, M.; Xiang, Y.; Zheng, X. Scei: A smart-contract driven edge intelligence framework for IoT systems. IEEE Trans. Mob. Comput. 2023, 23, 4453–4466. [Google Scholar] [CrossRef]
- Liang, W.; Tadesse, G.A.; Ho, D.; Fei-Fei, L.; Zaharia, M.; Zhang, C.; Zou, J. Advances, challenges and opportunities in creating data for trustworthy AI. Nat. Mach. Intell. 2022, 4, 669–677. [Google Scholar] [CrossRef]
- Jia, R.; Dao, D.; Wang, B.; Hubis, F.A.; Hynes, N.; Gürel, N.M.; Li, B.; Zhang, C.; Song, D.; Spanos, C.J. Towards efficient data valuation based on the shapley value. In Proceedings of the Artificial Intelligence and Statistics, Naha, Japan, 16–18 April 2019; pp. 1167–1176. [Google Scholar]
- Ghorbani, A.; Zou, J. Data shapley: Equitable valuation of data for machine learning. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 2242–2251. [Google Scholar]
- Ghorbani, A.; Kim, M.; Zou, J. A distributional framework for data valuation. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 3535–3544. [Google Scholar]
- Song, T.; Tong, Y.; Wei, S. Profit allocation for federated learning. In Proceedings of the IEEE International Conference on Big Data, Los Angeles, CA, USA, 9–12 December 2019; pp. 2577–2586. [Google Scholar]
- Wang, T.; Rausch, J.; Zhang, C.; Jia, R.; Song, D. A principled approach to data valuation for federated learning. In Federated Learning: Privacy and Incentive; Springer: Cham, Switzerland, 2020; pp. 153–167. [Google Scholar]
- Liu, Y.; Ai, Z.; Sun, S.; Zhang, S.; Liu, Z.; Yu, H. Fedcoin: A peer-to-peer payment system for federated learning. In Federated Learning: Privacy and Incentive; Springer: Cham, Switzerland, 2020; pp. 125–138. [Google Scholar]
- Le, T.H.T.; Tran, N.H.; Tun, Y.K.; Nguyen, M.N.; Pandey, S.R.; Han, Z.; Hong, C.S. An incentive mechanism for federated learning in wireless cellular networks: An auction approach. IEEE Trans. Wirel. Commun. 2021, 20, 4874–4887. [Google Scholar]
- Koh, P.W.; Liang, P. Understanding black-box predictions via influence functions. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1885–1894. [Google Scholar]
- Yoon, J.; Arik, S.; Pfister, T. Data valuation using reinforcement learning. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 10842–10851. [Google Scholar]
- Warnat-Herresthal, S.; Schultze, H.; Shastry, K.L.; Manamohan, S.; Mukherjee, S.; Garg, V.; Sarveswara, R.; Händler, K.; Pickkers, P.; Aziz, N.A.; et al. Swarm Learning for decentralized and confidential clinical machine learning. Nature 2021, 594, 265–270. [Google Scholar] [CrossRef]
- Wang, Y.; Su, Z.; Zhang, N.; Benslimane, A. Learning in the Air: Secure Federated Learning for UAV-Assisted Crowdsensing. IEEE Trans. Netw. Sci. Eng. 2021, 8, 1055–1069. [Google Scholar] [CrossRef]
- Zhan, Y.; Li, P.; Qu, Z.; Zeng, D.; Guo, S. A Learning-Based Incentive Mechanism for Federated Learning. IEEE Internet Things J. 2020, 7, 6360–6368. [Google Scholar] [CrossRef]
- Blum, A.L.; Rivest, R.L. Training a 3-node neural network is NP-complete. Neural Netw. 1992, 5, 117–127. [Google Scholar] [CrossRef]
- Lopes, U.; Valiati, J.F. Pre-trained convolutional neural networks as feature extractors for tuberculosis detection. Comput. Biol. Med. 2017, 89, 135–143. [Google Scholar] [CrossRef] [PubMed]
- Martello, S.; Toth, P. Knapsack Problems: Algorithms and Computer Implementations; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1990. [Google Scholar]
- van Erven, T.; Harremos, P. Rényi Divergence and Kullback-Leibler Divergence. IEEE Trans. Inf. Theory 2014, 60, 3797–3820. [Google Scholar] [CrossRef]
- Du, Y.; Wang, Z.; Leung, C.; Leung, V. Blockchain-based Data Quality Assessment to Improve Distributed Machine Learning. In Proceedings of the International Conference on Computing, Networking and Communications, Honolulu, HI, USA, 20–22 February 2023; pp. 170–175. [Google Scholar]
- Soviany, P.; Ionescu, R.T.; Rota, P.; Sebe, N. Curriculum learning: A survey. Int. J. Comput. Vis. 2022, 130, 1526–1565. [Google Scholar] [CrossRef]
- Vahidian, S.; Kadaveru, S.; Baek, W.; Wang, W.; Kungurtsev, V.; Chen, C.; Shah, M.; Lin, B. When do curricula work in federated learning? In Proceedings of the IEEE/CVF International Conference on Computer Vision, Vancouver, BC, Canada, 17–24 June 2023; pp. 5084–5094. [Google Scholar]
- Wu, W.; He, L.; Lin, W.; Mao, R.; Maple, C.; Jarvis, S. SAFA: A semi-asynchronous protocol for fast federated learning with low overhead. IEEE Trans. Comput. 2020, 70, 655–668. [Google Scholar] [CrossRef]
- Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. Master’s Thesis, University of Toronto, Toronto, ON, Canada, 2009. [Google Scholar]
- Anguita, D.; Ghio, A.; Oneto, L.; Parra Perez, X.; Reyes Ortiz, J.L. A public domain dataset for human activity recognition using smartphones. In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 24–26 April 2013; pp. 437–442. [Google Scholar]
- Hsu, T.M.H.; Qi, H.; Brown, M. Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification. arXiv 2019, arXiv:1909.06335. [Google Scholar]
- Yurochkin, M.; Agarwal, M.; Ghosh, S.; Greenewald, K.; Hoang, N.; Khazaeni, Y. Bayesian nonparametric federated learning of neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 7252–7261. [Google Scholar]
- Wong, E.; Schmidt, F.; Metzen, J.H.; Kolter, J.Z. Scaling provable adversarial defenses. In Proceedings of the Conference on Neural Information Processing Systems, Montréal, QC, Canada, 3–8 December 2018; pp. 8410–8419. [Google Scholar]
- Liu, Z.; Hu, H.; Lin, Y.; Yao, Z.; Xie, Z.; Wei, Y.; Ning, J.; Cao, Y.; Zhang, Z.; Dong, L.; et al. Swin transformer v2: Scaling up capacity and resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 12009–12019. [Google Scholar]
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. Pytorch: An imperative style, high-performance deep learning library. In Proceedings of the Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 8026–8037. [Google Scholar]
- Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated optimization in heterogeneous networks. Proc. Mach. Learn. Syst. 2020, 2, 429–450. [Google Scholar]
- Dinh, C.T.; Tran, N.; Nguyen, J. Personalized federated learning with Moreau envelopes. In Proceedings of the Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 6–12 December 2020; pp. 21394–21405. [Google Scholar]
- Fallah, A.; Mokhtari, A.; Ozdaglar, A. Personalized federated learning: A meta-learning approach. arXiv 2020, arXiv:2002.07948. Available online: https://arxiv.org/abs/2002.07948 (accessed on 20 July 2024).
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics, Ft. Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]