Article
Peer-Review Record

A Dynamic Spatiotemporal Deep Learning Solution for Cloud–Edge Collaborative Industrial Control System Distributed Denial of Service Attack Detection

by Zhigang Cao 1, Bo Liu 1,2,*, Dongzhan Gao 2, Ding Zhou 1, Xiaopeng Han 1 and Jiuxin Cao 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Electronics 2025, 14(9), 1843; https://doi.org/10.3390/electronics14091843
Submission received: 1 April 2025 / Revised: 25 April 2025 / Accepted: 27 April 2025 / Published: 30 April 2025
(This article belongs to the Section Artificial Intelligence)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Abstract: Consider briefly clarifying what a "feature graph" is and what makes the spatiotemporal modeling dynamic, in plain terms.
The abstract states that the model outperforms others on CICDDoS2019 and Edge-IIoTset datasets but does not include any numerical results or comparative metrics.
Introduction: the gap that this paper fills could be more sharply defined.
Which technical aspects are original contributions?
How were the values for α (APPNP propagation), η (dynamic matrix weight), and µ (unsupervised loss weight) selected? Were these optimized via grid search, heuristics, or empirical tuning?
Was data normalization or standardization applied across clients before local training? If yes, was it done globally or locally?
Could the authors elaborate on how the binary classifier D was implemented? What architecture and optimization method were used?
How stable is the training when using the mutual information loss? Did the authors face challenges with instability or overfitting?

Author Response

We sincerely appreciate the reviewers’ valuable and insightful comments, which have greatly contributed to improving the quality of our manuscript. In response, we have carefully reviewed each suggestion and have made several significant revisions to the manuscript.

In this document, we provide a detailed, point-by-point response to each of the reviewers’ comments. Revisions in the text are marked in red to show the relevant corrections.

Thank you very much for your time and effort in processing this submission!

Comments 1: [Abstract: Consider briefly clarifying what a "feature graph" is and what makes the spatiotemporal modeling dynamic, in plain terms.]

Response 1: Thank you for the valuable comments and suggestions. We agree with this comment. A feature graph is a representation that captures the relationships among various traffic characteristics, enabling the model to identify patterns and anomalies in the data. Accordingly, we have reorganized and rewritten the abstract to further elaborate on the concept behind the proposed DDoS attack detection model and the rationale for capturing multi-scale temporal features (see Page 1, lines 8 to 12).

In our revised abstract, we enhanced the clarity of the concepts presented. We defined a feature graph as a way to represent interactions in traffic data, making it easier to understand the model's functionality. We also elaborated on what makes the spatiotemporal modeling dynamic by explaining how we incorporate both long-term traffic patterns and short-term anomalies related to DDoS attacks. By reorganizing and rewriting the abstract, we aimed to provide a clearer rationale for capturing multi-scale temporal features. This rewrite helps readers better appreciate the significance of our proposed DDoS attack detection model, FedDynST, and its novel approach to enhancing detection accuracy in the context of cloud-edge collaborative industrial control systems.
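
To make the notion of a feature graph concrete, here is a minimal sketch that builds an adjacency matrix over traffic features from pairwise correlations. It is illustrative only: the correlation-threshold construction and the threshold value are assumptions, not the manuscript's exact procedure (which is described in Section 3.3.1).

```python
import numpy as np

def build_feature_graph(X: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Build a feature graph: nodes are traffic features, and an edge is kept
    when the absolute Pearson correlation between two features in the window
    X (n_samples, n_features) exceeds `threshold` (placeholder value)."""
    corr = np.corrcoef(X, rowvar=False)             # (n_features, n_features)
    adj = (np.abs(corr) > threshold).astype(float)  # keep only strong relations
    np.fill_diagonal(adj, 0.0)                      # drop self-loops
    return adj
```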

 

Comments 2: [The abstract states that the model outperforms others on CICDDoS2019 and Edge-IIoTset datasets but does not include any numerical results or comparative metrics.]

Response 2: Thank you for pointing this out. We have revised the abstract accordingly to include specific details on the performance of our model, highlighting the significant improvements in detection accuracy and convergence achieved on the CICDDoS2019 and Edge-IIoTset datasets (see Page 1, lines 15 to 17).

 

Comments 3: [Introduction: the gap that this paper fills could be more sharply defined. Which technical aspects are original contributions?]

 

Response 3: Thanks for your insightful comment. We have, accordingly, reviewed and organized the development history of DDoS attack detection research in ICS and revised the relevant content in the introduction (see Page 2, lines 45 to 71). Additionally, we have rewritten the contributions section to emphasize the original contributions (see Page 3, lines 93 to 104).

In our revisions, we clearly defined the gaps our paper addresses. First, we highlighted the limitations of traditional detection techniques that rely on fixed rules and the challenges faced by machine learning methods in adapting to novel attack patterns. Second, we noted that current deep learning methods primarily focus on learning local features of traffic data, such as the statistical characteristics of individual fields or field groups, while neglecting the macro-level connections and global recognition patterns within industrial traffic datasets.

Our approach addresses these issues by integrating federated learning with deep learning to create feature graphs that enable adaptive and dynamic detection capabilities, leveraging both local and global traffic patterns. We also elaborated on the unique characteristics of cloud-edge collaborative ICSs and their implications for DDoS attack detection. This context emphasizes our original contributions by demonstrating how our model not only improves detection accuracy but also enhances privacy and security through a decentralized learning process. These enhancements better articulate the significance of our research and its potential impact on the field.

 

 

Comments 4: [How were the values for α (APPNP propagation), η (dynamic matrix weight), and µ (unsupervised loss weight) selected? Were these optimized via grid search, heuristics, or empirical tuning?]

 

Response 4: We are grateful for your detailed review and constructive comments. The values for α (APPNP propagation), η (dynamic matrix weight), and µ (unsupervised loss weight) were primarily selected based on empirical tuning through multiple iterations of experiments. We believe that in practical engineering implementations, these parameters can be continuously optimized according to different industrial scenarios. Additionally, for the critical parameters η and µ, we conducted a sensitivity analysis to assess the impact of varying these parameters (see Page 18, Section 4.5).
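
For readers who wish to reproduce this kind of empirical tuning, the sketch below sweeps candidate values of η and µ and keeps the best-performing pair. The candidate grids and the train_and_evaluate helper are hypothetical placeholders, not the exact procedure or values used in the paper.

```python
import itertools

def train_and_evaluate(eta: float, mu: float) -> float:
    """Hypothetical helper: train the detection model with the given eta
    (dynamic matrix weight) and mu (unsupervised loss weight) and return
    validation accuracy. Replaced by a dummy value so the sketch runs."""
    return 0.0  # placeholder

best = None
for eta, mu in itertools.product([0.1, 0.3, 0.5, 0.7, 0.9],  # candidate eta values (placeholders)
                                 [0.01, 0.05, 0.1, 0.5]):     # candidate mu values (placeholders)
    acc = train_and_evaluate(eta, mu)
    if best is None or acc > best[0]:
        best = (acc, eta, mu)
print(f"best accuracy {best[0]:.4f} at eta={best[1]}, mu={best[2]}")
```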

 

 

Comments 5: [Was data normalization or standardization applied across clients before local training? If yes, was it done globally or locally?]

 

Response 5: Thank you for this comment. Considering that the validation experiments in our paper utilize publicly available datasets, no additional normalization or standardization was performed. However, we did implement some preprocessing steps, such as random sampling and artificial categorization. First, given the excessive number of normal traffic samples and certain attack traffic samples in the CICDDoS2019 and Edge-IIoTset datasets, we conducted random sampling to create manageable subsets. Then, we performed an artificial categorization of attack types so that each client’s dataset contains different DDoS attack types, simulating the data distribution in a real ICS environment (see Page 12, Section 4.1.2).
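
As an illustration of this preprocessing, the following sketch down-samples over-represented classes and assigns a distinct set of attack types to each simulated client. The label column name, class labels, and sample sizes are placeholders rather than the exact settings of Section 4.1.2.

```python
import pandas as pd

def split_clients(df, client_attacks, per_class=20000, seed=42):
    """Down-sample over-represented classes and give each simulated client a
    distinct set of DDoS attack types plus benign traffic.
    `client_attacks` maps a client id to a list of attack labels (assumed)."""
    clients = {}
    for cid, attacks in client_attacks.items():
        parts = []
        for label in attacks + ["BENIGN"]:
            subset = df[df["Label"] == label]
            n = min(per_class, len(subset))                   # cap the class size
            parts.append(subset.sample(n=n, random_state=seed))
        clients[cid] = pd.concat(parts).sample(frac=1.0, random_state=seed)  # shuffle
    return clients
```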

 

 

Comments 6: [Could the authors elaborate on how the binary classifier D was implemented? What architecture and optimization method were used?]

 

Response 6: We apologize for not explaining the implementation of the binary classifier D clearly. We have revised the manuscript to include the relevant equations and details (see Page 9, line 295).
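
For intuition only, a generic binary classifier of this kind, in the style commonly paired with mutual-information objectives, might look like the sketch below. The layer sizes, activation, and optimizer are assumptions; the manuscript's actual architecture and optimization method are given at the cited location.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Generic binary classifier D: scores whether a node embedding h is
    consistent with a graph-level summary s (illustrative layer sizes)."""
    def __init__(self, emb_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, h: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        # h: (N, emb_dim) node embeddings; s: (emb_dim,) graph summary
        s = s.expand(h.size(0), -1)
        return torch.sigmoid(self.net(torch.cat([h, s], dim=-1))).squeeze(-1)

D = Discriminator(emb_dim=128)                          # embedding size is an assumption
optimizer = torch.optim.Adam(D.parameters(), lr=1e-3)   # optimizer choice is an assumption
```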

 

 

Comments 7: [How stable is the training when using the mutual information loss? Did the authors face challenges with instability or overfitting?]

 

Response 7: We are grateful for your detailed review and constructive comments. In our study, the proposed model demonstrated a relatively fast convergence speed during the experiments (see Page 19, Section 4.6), and we have not encountered any stability-related issues. To prevent overfitting and ensure the sparsity of the adjacency matrix, we employed a filtering method based on the average similarity score of each dynamic adjacency matrix (see Page 8, Equation 9). Consequently, we were not troubled by overfitting issues in the final experiments.
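
A minimal sketch of this average-similarity filtering, assuming the dynamic adjacency matrix holds non-negative similarity scores (the exact form of Equation 9 in the manuscript may differ):

```python
import torch

def sparsify(adj: torch.Tensor) -> torch.Tensor:
    """Zero out entries of a dynamic adjacency matrix whose similarity score
    falls below the matrix's average score, keeping the graph sparse."""
    threshold = adj.mean()
    return torch.where(adj >= threshold, adj, torch.zeros_like(adj))
```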

In future work, we will continue exploring the application of this model in real-world ICSs and will pay close attention to the stability and overfitting concerns that you have raised.

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

This paper presents a model named FedDynST for detecting DDoS attacks in cloud-edge collaborative industrial control systems. The work aims to improve detection capabilities while addressing privacy concerns through a federated learning framework. Performance evaluations on two datasets indicate that the model achieves enhanced accuracy in identifying DDoS threats. While I find the work to be of interest, I have feedback for the authors to consider in this work: 

1. The abstract needs to be rewritten to be more focused on the proposed method, as it seems too generalized.

2. The introduction seems a bit weak for such an important, widely discussed topic in recent years. The authors should improve the introduction and organize the literature from a wider discussion (DDoS in general), narrowing down to a more specific discussion (DDoS in ICS) as they proceed. Also, the contributions section needs to be rewritten to highlight the authors' contributions more clearly.

3. Complementing point (2), the authors are recommended to bring in recent works (2024 and 2025) that discuss DDoS-related literature in various fields, to highlight its effects for readers. A recommended (not imposed) list of recent good work could be:

- Aljohani, T., & Almutairi, A. (2024). Modeling time-varying wide-scale distributed denial of service attacks on electric vehicle charging Stations. Ain Shams Engineering Journal, 15(7), 102860.

- Kaur, A., Krishna, C. R., & Patil, N. V. (2025). A comprehensive review on Software-Defined Networking (SDN) and DDoS attacks: Ecosystem, taxonomy, traffic engineering, challenges and research directions. Computer Science Review, 55, 100692. 

and so on ... Some discussion on the impact of DDoS in various fields would strengthen the work in my opinion. 

4. The integration of federated learning and deep learning is a promising approach, especially given the privacy concerns of centralized data training. However, the methodology could benefit from a more detailed explanation of how the feature graphs are constructed and how exactly the static and dynamic adjacency matrices are determined.

5. While the authors report various metrics such as accuracy, precision, recall, F1-score, and AUC, it would be helpful to explain why these specific metrics were chosen. Moreover, the evaluation could be enhanced by including a comparative analysis with baseline models or a discussion of the trade-offs involved when selecting these specific metrics.

6. The paper briefly mentions that different ICS clients may have varying hardware and software setups, which can affect the model's performance. A more thorough exploration of how these differences were addressed in the training process would improve the robustness of the findings.

7. The claims regarding the model's generalization capabilities across different clients should be backed by additional quantitative data or case studies. 

8. The convergence analysis suggests that the FedDynST model converges faster than other models. However, providing more details on the training process and the number of iterations taken would offer insight into the practical applicability of the model in real-world settings.

9. Finally, the iThenticate report shows a 20% similarity. Reduce this percentage to an acceptable 10-15%.

Comments on the Quality of English Language

English could be improved further

Author Response

We sincerely appreciate the reviewers’ valuable and insightful comments, which have greatly contributed to improving the quality of our manuscript. In response, we have carefully reviewed each suggestion and have made several significant revisions to the manuscript.

In this document, we provide a detailed, point-by-point response to each of the reviewers’ comments. Revisions in the text are marked in red to show the relevant corrections.

Thank you very much for your time and effort in processing this submission!

Comments 1: [The abstract needs to be rewritten to be more focused on the proposed method, as it seems too generalized.]

Response 1: Thank you for the valuable comments and suggestions. We agree with your comment and have rewritten the abstract to make it more focused on the proposed method (see Page 1, lines 1 to 17).

In our revisions, we emphasized the unique aspects of our approach by clearly defining the context of DDoS attack detection within cloud-edge collaborative industrial control systems. By detailing the use of dynamic and static adjacency matrices, we highlighted how our model addresses both long-term traffic trends and short-term anomalies, significantly enhancing detection accuracy.

Furthermore, we clarified the advantages of using convolutional neural networks for extracting temporal characteristics and explained how the federated learning framework contributes to privacy and security in data handling. This focused approach aims to convey the novelty and effectiveness of the proposed method, showcasing its superior performance compared to existing detection models validated through rigorous testing on relevant datasets.

 

Comments 2: [The introduction seems a bit weak for such an important, widely discussed topic in recent years. The authors should improve the introduction and organize the literature from a wider discussion (DDoS in general), narrowing down to a more specific discussion (DDoS in ICS) as they proceed. Also, the contributions section needs to be rewritten to highlight the authors' contributions more clearly.]

Response 2: Thank you for pointing this out. We recognize that the introduction needed improvement, and we have revised it to strengthen the discussion. We organized the literature review to focus on the development history of DDoS attack detection research in ICS and revised the relevant content in the introduction (see Page 2, lines 45 to 71). Additionally, we have rewritten the contributions section to emphasize the original contributions of our work (see Page 3, lines 93 to 104).

In our revisions, we aimed to create a more comprehensive overview of the evolution of DDoS attack detection techniques, starting from early methods that relied on simple feature rules and signature detection techniques. We discussed the limitations of these approaches, particularly their lack of adaptability to novel DDoS attacks. As we progressed through the introduction, we highlighted the shift toward machine learning and deep learning methods, specifying their strengths and weaknesses in the context of ICS environments.

We pointed out that while machine learning has improved detection capabilities, it often demands robust labeled datasets and suffers from challenges in feature selection. We also drew attention to the advantages of deep learning models, while identifying that they mainly focus on local features, overlooking critical macro-level patterns in industrial traffic.

Furthermore, we addressed the complexities of cloud-edge collaborative ICS deployments, emphasizing how varied configurations can be exploited by attackers. This context was crucial for highlighting the necessity of our proposed model. We detailed our contributions more explicitly, including the development of a federated learning framework and a detection model that optimizes traffic data analysis across different time scales. These enhancements to the introduction aim to make our research's significance clearer within the broader field of cybersecurity.

 

Comments 3: [Complementing point (2), the authors are recommended to bring in recent works (2024 and 2025) that discuss DDoS-related literature in various fields, to highlight its effects for readers. A recommended (not imposed) list of recent good work could be:

- Aljohani, T., & Almutairi, A. (2024). Modeling time-varying wide-scale distributed denial of service attacks on electric vehicle charging Stations. Ain Shams Engineering Journal, 15(7), 102860.

- Kaur, A., Krishna, C. R., & Patil, N. V. (2025). A comprehensive review on Software-Defined Networking (SDN) and DDoS attacks: Ecosystem, taxonomy, traffic engineering, challenges and research directions. Computer Science Review, 55, 100692.

and so on ... Some discussion on the impact of DDoS in various fields would strengthen the work in my opinion.]

 

Response 3: Thank you for the positive comment. We have added a discussion on the impact of DDoS attacks across various fields and cited the relevant literature you suggested (see Page 1, lines 31-33).

We emphasized that distributed denial-of-service (DDoS) attacks are among the most harmful threats on the Internet today, significantly affecting critical sectors such as communications, energy, and transportation. In response to your comment, we transitioned from this broader discussion of DDoS attacks to a more specific examination of DDoS attack detection issues within industrial control systems (ICS). By incorporating recent studies, we aimed to provide readers with an updated perspective on the evolving nature of DDoS threats and their implications. By presenting current literature, we strengthen the foundation of our study and clarify its relevance to recent advancements in the field, underscoring the necessity for effective detection and mitigation strategies.

 

 

Comments 4: [The integration of federated learning and deep learning is a promising approach, especially given the privacy concerns of centralized data training. However, the methodology could benefit from a more detailed explanation of how the feature graphs are constructed and how exactly the static and dynamic adjacency matrices are determined.]

 

Response 4: Thank you for your valuable feedback. We agree that a more detailed explanation of the feature graph construction and the determination of static and dynamic adjacency matrices is essential. We have revised the structure and content of the methodology section and provided a comprehensive description of how the feature graphs are constructed in Section 3.3.1. We have also clarified the thought process and methods used to determine the static and dynamic adjacency matrices (see Page 6, lines 222 to 230).

In our revisions, we explained that ICSs typically operate in periodic modes, with devices regularly transmitting and receiving data. The static adjacency matrix is derived from long-term statistics of ICS traffic data, capturing enduring relationships between traffic features. This provides reliable feature associations for the model. Conversely, DDoS attacks in ICSs often induce notable changes in traffic features over short intervals. Therefore, the dynamic adjacency matrix is informed by short-term traffic data and captures relationships between traffic features, enabling real-time detection of anomalous changes in ICS traffic. This bolsters the model's ability to identify sudden attacks.

By integrating these two types of adjacency matrices, we allow for a more comprehensive exploration of traffic data feature relationships across various time scales, enhancing the model's effectiveness in detecting DDoS attacks within ICS environments.
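
To illustrate the integration step, the sketch below fuses the two matrices with a weight η and symmetrically normalizes the result before graph propagation. The fusion formula and normalization are assumptions for illustration, not necessarily the exact operations used in the paper.

```python
import torch

def fuse_adjacency(a_static: torch.Tensor, a_dynamic: torch.Tensor,
                   eta: float) -> torch.Tensor:
    """Combine long-term (static) and short-term (dynamic) feature
    relationships; eta weights the dynamic contribution."""
    a = eta * a_dynamic + (1.0 - eta) * a_static
    deg = a.sum(dim=-1).clamp(min=1e-12)     # node degrees
    d_inv_sqrt = deg.pow(-0.5)
    # D^{-1/2} A D^{-1/2} normalization, as typically used before propagation
    return d_inv_sqrt.unsqueeze(-1) * a * d_inv_sqrt.unsqueeze(0)
```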

 

 

Comments 5: [While the authors report various metrics such as accuracy, precision, recall, F1-score, and AUC, it would be helpful to explain why these specific metrics were chosen. Moreover, the evaluation could be enhanced by including a comparative analysis with baseline models or a discussion of the trade-offs involved when selecting these specific metrics.]

 

Response 5: We appreciate your suggestion and agree that further explanation is beneficial. We have added content to clarify why we chose specific metrics such as accuracy, precision, recall, F1-score, and AUC (see Page 12, lines 425 to 428). These metrics were selected to provide a comprehensive evaluation of model performance, emphasizing the balance between overall accuracy and the precision of detection versus false positives, which is crucial for the continued operation of ICSs.
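
For completeness, the five reported metrics can be computed as in the following generic scikit-learn sketch (a binary attack/benign setting is assumed); this is not the paper's evaluation code.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(y_true, y_pred, y_score):
    """Compute the five reported metrics; y_score is the predicted
    probability of the attack class."""
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall":    recall_score(y_true, y_pred),
        "f1":        f1_score(y_true, y_pred),
        "auc":       roc_auc_score(y_true, y_score),
    }
```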

 

 

Comments 6: [The paper briefly mentions that different ICS clients may have varying hardware and software setups, which can affect the model's performance. A more thorough exploration of how these differences were addressed in the training process would improve the robustness of the findings.]

 

Response 6: We are grateful for your detailed review and constructive comments. We acknowledge that different hardware and software setups in ICS clients can lead to various types of DDoS attacks. To address this, we simulated different ICS clients on our server by employing dataset sampling to represent these various attack types. This approach allowed us to adaptively optimize the training process based on the specific downstream tasks. Furthermore, we implemented dynamic weight allocation through federated learning to account for these differences effectively. We have rephrased and revised the data preprocessing section of the manuscript to better express how these factors were addressed within our training methodology (see Page 11, Lines 381 to 392).
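
As a rough illustration of dynamic weight allocation on the server side, the sketch below performs a weighted average of client model parameters. The source of the weights (e.g., client data size or recent validation performance) is an assumption here; the paper's actual aggregation rule is described in the manuscript.

```python
import torch

def aggregate(client_states, client_weights):
    """Server-side aggregation: weighted average of client model parameters.
    `client_weights` are the dynamic per-client weights (assumed inputs)."""
    total = sum(client_weights)
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(
            (w / total) * state[key].float()
            for state, w in zip(client_states, client_weights)
        )
    return global_state
```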

Additionally, we recognize the importance of exploring these differences further and will include future work to investigate the application of our model in real-world scenarios with varying hardware and software configurations.

 

 

Comments 7: [The claims regarding the model's generalization capabilities across different clients should be backed by additional quantitative data or case studies.]

 

Response 7: We appreciate your insightful comments regarding the model's generalization capabilities. We agree that these claims should be supported by additional quantitative data or case studies. Currently, our approach is based on a generalized theoretical model that distinguishes client differences by simulating various attack types through dataset sampling and classification.

In future research, we plan to focus on practical applications by introducing a hardware and software adapter that will take into account the differences among various industrial clients during federated learning. This will enable adaptive optimization based on specific downstream tasks and dynamic weight allocation for the model, as illustrated in the accompanying figure. Due to the constraints of this manuscript's length, we have opted to reserve this research for our next paper.


Changes: Accordingly, we have revised the wording in this manuscript regarding the model's generalization capabilities and clarified the procedures related to dataset sampling and classification, detailing how federated learning contributes to enhancing the model’s generalization across different clients.

 

 

Comments 8: [The convergence analysis suggests that the FedDynST model converges faster than other models. However, providing more details on the training process and the number of iterations taken would offer insight into the practical applicability of the model in real-world settings.]

 

Response 8: Thank you for the positive comment. We have added details on the training process and platform parameters in the experimental parameters section (see page 13, lines 449 to 451). Additionally, we clearly specify that the convergence analysis is based on 20 iterations. These enhancements will provide better insights into the model's practical applicability in real-world settings.

 

 

Comments 9: [Finally, the iThenticate report shows a 20% similarity. Reduce this percentage to an acceptable 10-15%.]

 

Response 9: Thank you for pointing this out. In response to your suggestion, we have revised the manuscript to reduce the similarity percentage to an acceptable range of 10-15%. This involved rephrasing the language and structure while preserving the logic and meaning, in order to enhance clarity and originality.

 

 

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The paper is well-written and can be accepted after addressing the following comments:

1-Add reference for equation 1 

2-Add a contribution and novelty paragraph for the paper in the introduction section.  The authors should outline the primary objectives of the paper within the introduction, clarifying the significance of the study within the broader field.

3-Figure 2. The DDoS attack detection model

Add some elaboration in the caption to make it easy for the reader to understand the figure

4-Table 1. Traffic division of DDoS attacks (CICDDoS2019) 

In this table, the last two columns are the same. Can you give an explanation?

5-Figure 7. The sensitivity of our model to η 

What does the author mean by sensitivity?

6-Authors should check the mathematical subject classification 2020 and provide the correct details. 

7-Authors must discuss how the study can be extended as a future course.

8-The references are not written uniformly. Also, authors must cross-check the references.

9-The paper should be proofread for typos and grammatical errors.

10- Figure 10 must be replaced 

11- What conclusions have you drawn from Tables 7 and 8?

12- How does the paper enrich the knowledge of the scientific community?

13- Discuss the behavior of the graphs in detail. Abbreviations should be spelled out when first introduced; some are not. Please check the whole paper.

Author Response

We sincerely appreciate the reviewers’ valuable and insightful comments, which have greatly contributed to improving the quality of our manuscript. In response, we have carefully reviewed each suggestion and have made several significant revisions to the manuscript.

In this document, we provide a detailed, point-by-point response to each of the reviewers’ comments. Revisions in the text are marked in red to show the relevant corrections.

Thank you very much for your time and effort in processing this submission!

Comments 1: [Add reference for equation 1]

Response 1: Thank you for pointing this out. We have made revisions as required (see Page 4, line 182).

 

Comments 2: [Add a contribution and novelty paragraph for the paper in the introduction section.  The authors should outline the primary objectives of the paper within the introduction, clarifying the significance of the study within the broader field.]

Response 2: Thank you for the valuable comments and suggestions. We agree that a contribution and novelty paragraph is essential for clarifying the primary objectives of the paper. We have rewritten the contribution section in the introduction to outline the significance of our study within the broader field more clearly (see Page 3, lines 93 to 104).

In our revisions, we explicitly stated that this study focuses on DDoS attack detection algorithms in cloud-edge collaborative industrial control scenarios leveraging deep learning and federated learning. We highlighted three main contributions:

- Federated Learning Framework: We proposed a federated learning framework tailored for cloud-edge collaborative ICSs, which optimizes the learning process of the global model by assigning dynamic weights to each industrial client, thereby enhancing overall performance.

- DDoS Detection Model: We introduced a DDoS attack detection model that constructs static and dynamic adjacency matrices to differentiate between long-term and short-term traffic data. This innovation allows for better extraction of relationships between features of industrial traffic data across various time scales, leading to a deeper understanding of the characteristics of DDoS attacks in industrial contexts.

- Model Evaluation: The proposed model was rigorously evaluated using the CICDDoS2019 and Edge-IIoTset datasets, demonstrating its effectiveness in comparison to several existing federated learning and deep learning-based DDoS attack detection models. The experimental results highlighted significant advantages over these methods, underscoring the robustness of our approach.

By clearly outlining these contributions, we aim to emphasize the significance of our research and its potential impact on improving DDoS attack detection in industrial environments.

 

Comments 3: [Add some elaboration in the caption to make it easy for the reader to understand the figure]

 

Response 3: We agree with your suggestion. We have added more detailed elaboration about the key components in the captions of Figures 1, 2, and 5 to help readers better understand their significance. This should facilitate a clearer interpretation of the figures' insights and the overall context of our research.

 

 

Comments 4: [Table 1. Traffic division of DDoS attacks (CICDDoS2019)

In this table, the last two columns are the same. Can you give an explanation?]

 

Response 4: We are grateful for your detailed review and constructive comments. We recognize that we initially did not provide sufficient detail about our data preprocessing methods, and we have revised that section accordingly (see Page 11, lines 381 to 401).

In Table 1, the last two columns are the same because we performed random sampling of the dataset, ensuring that the ratio of attack traffic to normal traffic is 1:1. This approach is commonly used to maintain the effectiveness of training by balancing the classes. Additionally, we artificially categorized the attack types to ensure that each client's dataset contains different DDoS attack types, mimicking the realistic data distribution found in actual ICS environments.

 

 

Comments 5: [Figure 7. The sensitivity of our model to η 

What does the author mean by sensitivity?]

 

Response 5: Thank you for your question regarding the term "sensitivity" in Figure 7. In this context, sensitivity refers to how responsive our model’s performance is to changes in the parameter η. We have provided a corresponding explanation in the text to clarify this concept and its implications for model evaluation (see Page 18, Lines 579 to 583).

 

 

Comments 6: [Authors should check the mathematical subject classification 2020 and provide the correct details.]

 

Response 6: Thank you for the valuable suggestion. We have reviewed the Mathematics Subject Classification 2020 and provided the correct details in our paper.

 

 

Comments 7: [Authors must discuss how the study can be extended as a future course.]

 

Response 7: Thank you for your insightful suggestion. We have revised the conclusion section as requested. In future work, we will focus on further investigating the engineering implementation of DDoS attack detection in cloud-edge collaborative industrial control scenarios. This will aim to enhance the identification capabilities of DDoS attack traffic across different industrial contexts and facilitate the implementation of more targeted defense strategies (see Page 21, Lines 648 to 651).

In our revisions, we emphasized the importance of exploring practical applications of our model in real-world industrial settings, which would involve not only refining detection algorithms but also addressing the complexities of integrating these systems with existing infrastructure. We also aim to consider how our framework can adapt to evolving attack patterns and incorporate feedback mechanisms for continuous learning. This forward-looking approach will contribute to building more resilient industrial control systems against DDoS threats and further establish the relevance of our research in the broader cybersecurity landscape.

 

 

Comments 8: [The references are not written uniformly. Also, authors must cross-check the references.]

 

Response 8: Thank you for your feedback regarding the references. We have ensured that all references are now formatted uniformly throughout the paper and have cross-checked them for accuracy.

 

 

Comments 9: [The paper should be proofread for typos and grammatical errors.]

 

Response 9: We appreciate your suggestion for proofreading. The manuscript has undergone a thorough review to correct any typos and grammatical errors, ensuring a polished final submission.

 

 

Comments 10: [Figure 10 must be replaced]

 

Response 10: Thank you for pointing this out. We agree with this comment. We have made uniform revisions to both Figure 9 and Figure 10 to better represent the experimental results of the convergence analysis (see page 20).

 

 

Comments 11: [What conclusion have you drawn from Tables 7 and 8.?]

 

Response 11: We apologize for any confusion. Tables 7 and 8 present the comparative experimental results for the Edge-IIoTset dataset. The conclusions drawn indicate that the proposed FedDynST model outperforms all other models across all metrics, both on the test sets of the four clients and the global test set. This dataset, which includes industrial traffic data based on the Modbus/TCP protocol, offers more realistic test scenarios for model training that are aligned with actual ICSs. Consequently, these findings validate the FedDynST model's detection capability and its potential application in real-world environments.

 

 

Comments 12: [How does the paper enrich the knowledge of the scientific community?]

 

Response 12: We appreciate your valuable feedback on our manuscript. This paper enriches the knowledge of the scientific community through a comprehensive analysis of DDoS attack detection in cloud-edge collaborative industrial control scenarios and the introduction of an innovative model. The model includes a multi-time-scale feature capture matrix for DDoS attacks, enhancing the understanding of traffic patterns and identifying key factors. Furthermore, by proposing targeted defense strategies, the paper offers practical insights that can be implemented in cloud-edge collaborative industrial control scenarios, fostering further research and innovation in security.

 

 

Comments 13: [Discuss the behavior of graphs in detail. Abbreviations should be initially spelled out when first introduced, beginning; some are not. Please check the whole paper.]

 

Response 13: Thank you for pointing this out. We are sorry for this error. We have carefully reviewed the entire manuscript to ensure that all abbreviations are initially spelled out when first introduced, as per your request. Additionally, we have made the necessary revisions to discuss the behavior of graphs in detail, ensuring clarity and comprehensiveness throughout the paper.

 

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The authors improved the manuscript.

Reviewer 2 Report

Comments and Suggestions for Authors

The authors did well in responding to all my points and feedback. The manuscript has improved significantly, and therefore I'm happy to recommend acceptance. Congratulations.

I'd urge the authors, though, to incorporate a glossary of terms/abbreviations to make it easier for readers to follow the content.

 
