Graph Neural Network Based Asynchronous Federated Learning for Digital Twin-Driven Distributed Multi-Agent Dynamical Systems
Abstract
1. Introduction
- (1) Ensuring the consistency of distributed intelligent labeling.
- (2) In a distributed annotation system, security problems such as network attacks and privacy disclosure easily arise during data transmission.
- (1) To address poor data quality and the difficulty of introducing prior knowledge in a purely data-driven setting, this paper combines digital twins (DTs) with graph convolutional networks (GCNs) to design a hybrid, interpretable reputation assessment method. This approach incorporates prior knowledge and enhances the model's fairness under data-driven conditions.
- (2) To enhance the fairness of federated learning (FL), this paper introduces the joint DT+GCN model into FL. Fairness is measured through a contract-based reputation incentive mechanism, which ultimately improves the consistency of the distributed models (a simple sketch of reputation-weighted aggregation is given after this list).
- (3) This paper introduces blockchain integration in the FL context, combining blockchain with the designed DT+GCN model to enable secure information exchange in FL scenarios.
- (4) Finally, through experiments on real data, we provide quantitative and qualitative analyses demonstrating the potential benefits of our method for contemporary and future challenges in the data middleware domain.
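As a rough illustration of contribution (2), the sketch below shows how a contract-style reputation score could weight client updates during aggregation. The function name, the linear reputation-to-weight mapping, and the NumPy representation of model parameters are our own assumptions for illustration; this is not the paper's implementation.

```python
import numpy as np

def reputation_weighted_aggregate(client_params, reputations):
    """Aggregate client parameter vectors, weighting each client by its
    non-negative reputation score instead of FedAvg's equal weights."""
    weights = np.asarray(reputations, dtype=float)
    if weights.sum() == 0:                # degenerate case: fall back to plain averaging
        weights = np.ones_like(weights)
    weights = weights / weights.sum()     # normalize to a convex combination
    stacked = np.stack(client_params)     # shape: (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Example: the low-reputation (possibly malicious) third client contributes little.
params = [np.array([1.0, 1.0]), np.array([0.0, 0.0]), np.array([10.0, -10.0])]
scores = [0.9, 0.8, 0.1]
print(reputation_weighted_aggregate(params, scores))
```

In this toy setting, the update of the client with reputation 0.1 is largely suppressed, which is the qualitative effect a reputation-based incentive mechanism aims for.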
2. Related Work
2.1. Digital Twins
2.2. GCN
2.3. Federated Learning
3. Model
3.1. DT+GCN
3.1.1. DT System Model
- (1) Data collection: the system collects various types of data, such as environment, device, and user behavior data, in real time through sensor networks and IoT devices.
- (2) Data fusion and processing: the distributed architecture allows data to be processed in parallel on multiple nodes, where large quantities of data are preprocessed, cleaned, and analyzed with machine learning algorithms (e.g., deep learning).
- (3) Digital twin construction: based on the collected data, a virtual, real-time-updated model of the physical system is constructed that reflects the state and behavior of the real world.
- (4) Intelligent labeling: the trained model automatically identifies and classifies the data and labels each entity or behavior.
- (5) Decision support and optimization: based on the labeling feedback, the system provides real-time decision support, optimizes production processes and maintenance strategies, and performs predictive maintenance.
- (6) Feedback closed loop: the results of actual operation further update the digital twin model, forming a closed loop of continuous learning and improvement (a minimal sketch of this cycle follows the list).
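To make the six stages concrete, here is a minimal, heavily simplified sketch of one update cycle. The class and function names (DigitalTwin, collect, fuse, label, decide, feedback, FakeSensor) and the threshold-based labeling rule are hypothetical and only illustrate the data flow, not the paper's system.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Toy stand-in for the virtual, real-time-updated model of the physical system."""
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def update(self, processed):               # (3) digital twin construction / refresh
        self.state.update(processed)

def collect(sensors):                          # (1) data collection
    return [s.read() for s in sensors]

def fuse(raw):                                 # (2) data fusion and preprocessing
    return {f"feature_{i}": x for i, x in enumerate(raw)}

def label(twin):                               # (4) intelligent labeling (placeholder rule)
    return {k: ("high" if v > 0.5 else "low") for k, v in twin.state.items()}

def decide(labels):                            # (5) decision support / optimization
    return {k: ("maintain" if v == "high" else "ignore") for k, v in labels.items()}

def feedback(twin, decisions):                 # (6) closed-loop feedback into the twin
    twin.history.append(decisions)

def one_cycle(sensors, twin):
    twin.update(fuse(collect(sensors)))
    decisions = decide(label(twin))
    feedback(twin, decisions)
    return decisions

# Usage with fake sensors standing in for the IoT layer.
class FakeSensor:
    def __init__(self, value): self.value = value
    def read(self): return self.value

twin = DigitalTwin()
print(one_cycle([FakeSensor(0.2), FakeSensor(0.9)], twin))
```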
3.1.2. Historical Opinion Model Based on GCN
- (1) Attention 1: the 10 feature weights formed by the private data are equivalent to the topology formed by the connections between the 10 nodes, and attention is generated from this topology. The greater the attention, the better the reputation.
- (2) Parameter 1: the private model parameters trained on the data of private subsidiary 1. The larger the parameter values, the better the reputation.
- (3) The validation-set data are substituted into the trained private model, and the error value MAE11 is obtained.
The cross-evaluation between participants is performed as follows (a sketch is given after this list):
- (1) Private data 1 are substituted into private model 2 to obtain the corresponding error value.
- (2) The difference between public data 1 and public data 2 is computed.
- (3) The difference between parameter 1 and parameter 2 is computed.
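A minimal sketch of the cross-evaluation idea above: each private model is scored on every participant's validation data, producing an MAE matrix whose diagonal corresponds to values such as MAE11. The model interface (a scikit-learn-style `predict`) and the inverse mapping from MAE to a reputation-style score are assumptions made only for illustration.

```python
import numpy as np

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def cross_mae_matrix(models, val_sets):
    """Entry [i, j] is the MAE of private model j on validation data i,
    so entry [0, 0] plays the role of MAE11 in the text.
    val_sets[i] is a pair (X_i, y_i); each model exposes predict(X)."""
    m = np.zeros((len(val_sets), len(models)))
    for i, (X, y) in enumerate(val_sets):
        for j, model in enumerate(models):
            m[i, j] = mae(y, model.predict(X))
    return m

def mae_to_score(mae_matrix):
    """Map errors to reputation-style scores: smaller error -> larger score."""
    return 1.0 / (1.0 + mae_matrix)
```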
3.1.3. Global Model
3.2. TFL-DT: Trust-Based Reputation Evaluation and Calculation
3.2.1. Federated Learning Framework
3.2.2. Trust Behavior Model
3.2.3. Combining Local and Recommended Views
3.3. Consistency Checking under Blockchain
3.3.1. Consistent Goals under Blockchain
- (1) Consistency: the BDCVS ensures the consistency of data interactions between the source chain (SC) and the target chain (TC). When the mechanism model on the TC has not updated its data or the data are in a modified state, the model does not pass the AC consistency check.
- (2) Privacy: during cross-chain data interaction between SCs and TCs, no private information of the DO can be obtained.
- (3) Dynamics: an auxiliary verification information form (AVF) is introduced so that data updates and dynamic information interaction can be performed with minimal overhead, improving verification efficiency.
- (4) Security: by introducing the advanced gamma multisignature scheme (AGMS), the system model encrypts the interaction between the SC and TC across the two secondary middle stations, ensuring the security of data updates and other operations.
- 1. System initiation
- 2. Data processing
- 3. Data uploading
- 4. Data updating
- 5. Data auditing
- (1) The TC sends its certificate and signature to the AC for consistency verification.
- (2) The SC generates an audit certificate and sends the certificate and signature to the AC for consistency verification.
- (3) After receiving the proofs from the SC and TC, the AC computes the verification information.
- (4) The AC checks the signatures of the two proofs and uses Equation (33) to verify their correctness. If the verification fails, the TC may refrain from storing the data as needed, and the AC notifies the SC of the abnormal situation. If the verification succeeds, the AC generates an audit log on the blockchain for further inspection (a simplified sketch of this audit flow follows the list).
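The following sketch mirrors the audit steps above in simplified form: the AC checks the signatures on the two proofs and compares their digests before logging the result. The use of SHA-256 digests in place of the paper's Equation (33) and AGMS signatures, and all names (Proof, verify_signature, audit), are assumptions for illustration only.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Proof:
    payload: bytes      # certificate / audit certificate contents
    signature: bytes    # signature over the payload
    signer_key: bytes   # key material of the SC or TC (placeholder)

def sign(payload: bytes, key: bytes) -> bytes:
    # Placeholder "signature" so the example is self-consistent; a real system
    # would use an actual (multi)signature scheme such as AGMS here.
    return hashlib.sha256(payload + key).digest()

def verify_signature(proof: Proof) -> bool:
    return hashlib.sha256(proof.payload + proof.signer_key).digest() == proof.signature

def audit(sc_proof: Proof, tc_proof: Proof, log: list) -> bool:
    # Steps (3)-(4): check both signatures, then check consistency of the proofs.
    if not (verify_signature(sc_proof) and verify_signature(tc_proof)):
        log.append("abnormal: bad signature, SC notified")
        return False
    if hashlib.sha256(sc_proof.payload).digest() != hashlib.sha256(tc_proof.payload).digest():
        log.append("abnormal: proofs inconsistent, SC notified")
        return False
    log.append("audit passed, log stored on chain")
    return True

# Usage: both chains produce proofs over the same data block.
block = b"data-update-v7"
sc, tc = Proof(block, sign(block, b"SC"), b"SC"), Proof(block, sign(block, b"TC"), b"TC")
chain_log: list = []
print(audit(sc, tc, chain_log), chain_log)
```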
3.3.2. Security Verification under Blockchain
3.3.3. Analysis of the Advantages of Blockchain
3.4. The Diagram of the Proposed Method
4. Experiment
4.1. Datasets
4.2. Experiment
4.2.1. Comparison of the Results of the Algorithms for Different Participating Individuals
- A. Performance of data analysis using five local models
- B. Performance of data analysis using 10 local models
- C. Performance of data analysis using 15 local models
4.2.2. Different Degrees of Heterogeneity
4.2.3. Different Numbers of Malicious Nodes
4.2.4. Performance of Algorithms with Different Numbers of Participating Nodes
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| n | Algorithm | Synchronous Accuracy Rate (%) | Asynchronous Accuracy Rate (%) | Client Average Download Latency (ms) | Client Average Computing Delay (ms) | Client Average Upload Delay (ms) | Client Average Total Delay (ms) |
|---|---|---|---|---|---|---|---|
| 10 | FedAvg | 85.81 | 94.87 | 0.810 | 0.138 | 2.399 | 3.347 |
| 10 | Fedrep | 86.03 | 95.84 | 0.826 | 0.137 | 2.405 | 3.368 |
| 10 | FedDTrep | 87.38 | 97.05 | 0.666 | 0.135 | 2.202 | 3.003 |
| 15 | FedAvg | 87.68 | 94.98 | 0.832 | 0.137 | 2.411 | 3.368 |
| 15 | Fedrep | 88.59 | 95.57 | 0.820 | 0.140 | 2.400 | 3.360 |
| 15 | FedDTrep | 89.63 | 97.82 | 0.685 | 0.125 | 2.130 | 2.952 |
| 20 | FedAvg | 88.52 | 95.52 | 0.828 | 0.139 | 2.396 | 3.363 |
| 20 | Fedrep | 89.05 | 96.71 | 0.821 | 0.139 | 2.406 | 3.366 |
| 20 | FedDTrep | 90.51 | 97.75 | 0.652 | 0.131 | 2.155 | 2.938 |
| 25 | FedAvg | 88.77 | 95.61 | 0.816 | 0.139 | 2.444 | 3.399 |
| 25 | Fedrep | 89.19 | 96.89 | 0.825 | 0.141 | 2.406 | 3.368 |
| 25 | FedDTrep | 90.86 | 98.03 | 0.690 | 0.135 | 2.164 | 2.995 |
| 30 | FedAvg | 89.37 | 96.86 | 0.831 | 0.138 | 2.420 | 3.386 |
| 30 | Fedrep | 90.82 | 97.86 | 0.819 | 0.139 | 2.424 | 3.382 |
| 30 | FedDTrep | 91.84 | 98.00 | 0.667 | 0.135 | 2.178 | 2.983 |