Article
Peer-Review Record

Robust Hierarchical Federated Learning with Anomaly Detection in Cloud-Edge-End Cooperation Networks

Electronics 2023, 12(1), 112; https://doi.org/10.3390/electronics12010112
by Yujie Zhou 1,2,3, Ruyan Wang 1,2,3, Xingyue Mo 1, Zhidu Li 1,2,3,* and Tong Tang 1,2,3
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Reviewer 5: Anonymous
Submission received: 9 December 2022 / Revised: 21 December 2022 / Accepted: 21 December 2022 / Published: 27 December 2022
(This article belongs to the Special Issue Resource Allocation in Cloud–Edge–End Cooperation Networks)

Round 1

Reviewer 1 Report

·      The paper Robust Hierarchical Federated Learning with Anomaly Detection in Cloud-Edge-End Cooperation Networks introduces how a common ML model can be trained collaboratively from heterogeneous data generated at network edges through federated learning (FL), in order to enable intelligent vehicle scheduling, edge computing, and image processing.

·      In traditional FL, a client-server network is generally adopted, in which each client trains the ML model locally and uploads it to the server for parameter estimation and aggregation; cloud-based FL and edge-based FL are described separately.

·      The two are combined in hierarchical federated learning as a client-edge-cloud framework, in which malicious or malfunctioning heterogeneous clients may upload incorrect or noisy model values to the server (known as Byzantine or poisoning attacks), which strongly hurts FL training and degrades the final performance.

·      The authors claim the following contributions:

1.         A novel method called R-HFL is proposed to ensure the training performance under Byzantine attacks by the distributed anomaly detection mechanism customized for the HFL framework.

2.         The effectiveness and convergence of our proposed method are mathematically discussed in the general non-convex case.

3.         Numerical results are provided to experimentally show that our proposed algorithm can effectively minimize the negative impact of abnormal behaviors, illustrating the feasibility of distributive detection in the HFL system.

·      The learning system architecture is explained as consisting of traditional Federated Learning (Algorithm 1), Hierarchical Federated Learning (Algorithm 2), and a lightweight detection mechanism (Figure 1) within Robust Hierarchical FL (Algorithm 3); a minimal aggregation sketch follows below.
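For illustration only, the following Python/NumPy sketch shows the kind of client-edge-cloud aggregation loop such a hierarchical framework builds on. It is not the authors' implementation: the function names, the linear-model local update, and plain averaging at both levels are simplifying assumptions.

```python
import numpy as np

def local_update(global_model, data, lr=0.01, steps=5):
    """Illustrative client step: a few SGD iterations on a linear least-squares model."""
    w = global_model.copy()
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def edge_aggregate(client_models):
    """Edge server averages the models of its attached clients (FedAvg style)."""
    return np.mean(client_models, axis=0)

def cloud_aggregate(edge_models):
    """Cloud server averages the edge-level models into the new global model."""
    return np.mean(edge_models, axis=0)

def hierarchical_round(global_model, edges):
    """One cloud round: each edge trains its clients locally, then the cloud aggregates."""
    edge_models = []
    for clients in edges:                     # each edge owns a list of client datasets
        client_models = [local_update(global_model, d) for d in clients]
        edge_models.append(edge_aggregate(client_models))
    return cloud_aggregate(edge_models)

# Toy run: 2 edges, 3 clients each, 4-dimensional linear model.
rng = np.random.default_rng(0)
edges = [[(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
         for _ in range(2)]
w = np.zeros(4)
for _ in range(10):
    w = hierarchical_round(w, edges)
```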

 

Is the subject matter presented in a comprehensive manner?

·      The 14-page paper is presented with a sufficient level of flow to genuinely support the title Robust Hierarchical Federated Learning with Anomaly Detection in Cloud-Edge-End Cooperation Networks.

·      There is theoretical support and related explanation to cover the subject comprehensively, and the contribution is justified by results (Figure 4) showing R-HFL performance as a function of the threshold value 't' under the proposed architectural framework.

·      Additionally, Table 1 summarizes the testing accuracy under all scenarios in different settings, where the results show that the proposed R-HFL method has significant advantages over the baselines. Furthermore, the authors provide the corresponding detection accuracy in brackets, which indicates that the test accuracy is positively correlated with the detection accuracy.

 

Are the references provided applicable and sufficient?

·      The authors take support from thirty-seven (37) recent journal and transactions references, with some from MDPI and Science Direct.

·      The whole presentation justifies well-deserved support for the title Robust Hierarchical Federated Learning with Anomaly Detection in Cloud-Edge-End Cooperation Networks.

Comments for author File: Comments.pdf

Author Response

Thanks for your supportive review.

We greatly appreciate the positive comments.

Reviewer 2 Report

The authors introduce a hierarchical cloud-edge-end collaboration-based FL framework to reduce communication costs. To this end, they design a detection mechanism based on partial cosine similarity (PCS) to filter adverse clients and improve performance, where the proposed lightweight technique has high computational parallelization. Their experimental results show that the proposed R-HFL always outperforms the baselines in general cases under malicious attacks, which further shows the effectiveness of the scheme.
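As an illustration of cosine-similarity-based filtering in this spirit (the paper's exact partial cosine similarity rule and its threshold are not reproduced here), a minimal Python/NumPy sketch follows; the coordinate-wise median reference vector and the scalar threshold are assumptions made for the example.

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """Cosine similarity between two flattened model updates."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def filter_and_aggregate(updates, threshold=0.0):
    """Keep only updates whose cosine similarity to the coordinate-wise median
    update exceeds `threshold`, then average the survivors. The median reference
    and the scalar threshold are illustrative choices, not the paper's PCS rule."""
    reference = np.median(updates, axis=0)
    kept = [u for u in updates if cosine_similarity(u, reference) > threshold]
    if not kept:                      # fall back to all updates if everything is filtered
        kept = updates
    return np.mean(kept, axis=0)

# Toy example: 4 benign updates pointing roughly the same way, 1 flipped (poisoned).
rng = np.random.default_rng(1)
benign = [np.ones(8) + 0.1 * rng.normal(size=8) for _ in range(4)]
poisoned = [-5.0 * np.ones(8)]
aggregate = filter_and_aggregate(benign + poisoned, threshold=0.5)
```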

 

Hence, I do recommend the paper for possible publication in your reputed journal.

Author Response

Thanks for your supportive review.

We greatly appreciate the positive comments.

Reviewer 3 Report

The authors describe a hierarchical federated learning scheme in cloud-edge networks with detection of malicious behaviour. A general remark is that the main proposal of the paper seems to be Equation 6, and thus there doesn't seem to be a large contribution/novelty overall.
Some remarks for the figures at the results:
1. There is no "R-HFL" in the legend of Figures 3-4
2. Why are more edge communication rounds needed to achieve the same accuracy compared to cloud communication rounds?
3. In the legend of Figures 2c and 2d, it is not clear what (i) and (ii) represent
4. Why "the edge testing accuracy is jaggedly increasing due to statistical heterogeneity"? Also why the jagging is so consistent when the number of edge rounds increases?

Author Response

Thanks for your valuable review.

For detailed responses, please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

In this article, the authors propose a robust hierarchical federated learning (R-HFL) framework to enhance inherent system resilience to anomalous behaviours, while improving communication efficiency in real-world networks and retaining the benefits of standard FL.

 

The article is technically sound, well written, and beautifully organized. The methodology of the article is correct. However, the authors need to work on the conclusion section. The conclusion section is very short. Moreover, it should include future directions for researchers working in the same field.

Author Response

Thanks for your supportive review.

For detailed responses, please see the attachment.

Author Response File: Author Response.pdf

Reviewer 5 Report

The authors propose the R-HFL framework to enhance system resilience and reduce communication costs.

The main research question is well addressed in this paper. The paper is relevant and interesting. The authors sufficiently emphasized what is new in their proposal compared with other published materials. The paper is well written, clear, and easy to read. However, the Conclusions are not consistent with the evidence and arguments presented. This section is not well written. Please explain in the Conclusions whether the enhancement and cost reduction have been achieved. The Conclusions section is too short.

In the paper, "[18–21] analyzed the FL convergence performance in iid settings, where [18] showed the result for non-convex optimization objectives..."

For the above, please insert the authors' family names and explain the iid settings.

"Work [33] designed Auror" – who/what is Auror? Please explain.

In the paper, lines 155 and 156:

"In addition, the upload of these poorly trained local models occupies the network bandwidth, which influences the learning system efficiency."

Could you give any references for the above? Please provide evidence.

Please explain in the Conclusions whether the enhancement and cost reduction have been achieved. The Conclusions section is too short.

Author Response

Thanks for your valuable review.

For detailed responses, please see the attachment.

Author Response File: Author Response.pdf
