Peer-Review Record

Enhancing IoT Security: Optimizing Anomaly Detection through Machine Learning

Electronics 2024, 13(11), 2148; https://doi.org/10.3390/electronics13112148
by Maria Balega 1,2,*, Waleed Farag 1, Xin-Wen Wu 3, Soundararajan Ezekiel 1 and Zaryn Good 1
Reviewer 1: Anonymous
Reviewer 3: Anonymous
Submission received: 12 March 2024 / Revised: 25 May 2024 / Accepted: 27 May 2024 / Published: 31 May 2024
(This article belongs to the Special Issue Machine Learning for Cybersecurity: Threat Detection and Mitigation)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper proposes evaluating three well-known machine learning models for an Intrusion Detection System (IDS), which is an interesting idea. I have provided additional comments and suggestions that could help the authors enhance their proposed work.

1/ 

The introduction does not clearly state the contribution of the paper and does not provide a proper description of the remaining sections. I suggest adding two distinct subsections within section 1 to explain the contribution and paper layout.

2/

The related works section does not provide proper coverage of the use of IDS datasets. There are recent articles on this topic that review a large number of recent and relevant literature, for example:

https://doi.org/10.1016/j.iot.2023.100819

https://doi.org/10.3390/s22166164

3/

The data cleaning and preprocessing did not address the imbalance between the benign and malicious traffic in the used datasets.
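One common way to address the imbalance the comment raises is class weighting during training. A minimal sketch of the "balanced" heuristic (weights inversely proportional to class frequency, as in scikit-learn's `class_weight='balanced'` mode); the label values here are illustrative, not taken from the paper's datasets:

```python
from collections import Counter

def compute_class_weights(labels):
    """Weights inversely proportional to class frequency.

    Balanced heuristic: weight_c = n_samples / (n_classes * count_c).
    The minority (malicious) class receives the larger weight.
    """
    counts = Counter(labels)
    n_samples = len(labels)
    n_classes = len(counts)
    return {c: n_samples / (n_classes * counts[c]) for c in counts}

# Heavily imbalanced toy labels: 8 benign flows, 2 malicious flows
weights = compute_class_weights(["benign"] * 8 + ["malicious"] * 2)
```

Such weights can then be passed to the loss function of most classifiers (e.g. `scale_pos_weight` in XGBoost or `class_weight` in scikit-learn's SVM), so misclassifying minority-class traffic is penalized more heavily.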

4/

It is not clear whether host-specific features such as IP addresses and MAC addresses were removed from the datasets used. When details like IP and MAC addresses are included in training, the classifier learns to recognize only those specific hosts; a model trained on certain IPs may fail to detect attacks coming from others. To ensure the model generalizes, it is common practice to remove these features from the training data. Their importance can be gauged by how much they affect the results, and leaving them in makes the reported results less trustworthy.
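The pruning step this comment asks about can be sketched as follows; the column names are hypothetical, since the actual schemas of the datasets are not given here:

```python
# Hypothetical identifier-like columns; real IoT-23 / IDS dataset
# schemas may use different names.
HOST_SPECIFIC = {"src_ip", "dst_ip", "src_mac", "dst_mac", "host_id"}

def drop_host_specific(records, blocklist=HOST_SPECIFIC):
    """Remove identifier-like columns so the classifier cannot simply
    memorize specific hosts instead of learning attack behaviour."""
    return [{k: v for k, v in rec.items() if k not in blocklist}
            for rec in records]

sample = [{"src_ip": "10.0.0.5", "duration": 1.2, "bytes": 512}]
cleaned = drop_host_specific(sample)
# cleaned[0] keeps only the behavioural features: duration and bytes
```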

5/

A comparison with the existing literature is essential. The comparison should be thorough and cover multiple performance measures.

Comments on the Quality of English Language

Minor editing of English language required

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Different datasets were used during the training phases: it is indicated that SVM uses a different set than XGBoost. If the sets differ, the results cannot be compared on equal terms, as the information provided to each model is not the same. Would it not be more appropriate to train and test with the same dataset? How do you ensure that the accuracy is comparable when less data is used?

The hyperparameters used in each model should be justified: how were they fitted empirically? How do you ensure that the difference in accuracy is not due to a poor choice of hyperparameters?
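One way to make the tuning this comment asks about both systematic and reportable is to enumerate an explicit search grid shared by the evaluation protocol. A minimal sketch; the parameter names and value ranges below are hypothetical, not the ones used in the paper:

```python
from itertools import product

# Hypothetical search space; the paper's actual ranges are not stated here.
grid = {
    "max_depth": [3, 6, 9],
    "learning_rate": [0.05, 0.1, 0.3],
}

def enumerate_grid(grid):
    """Yield every hyperparameter combination as a dict, so the search
    space each model was tuned over can be reported explicitly."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(enumerate_grid(grid))
# 3 depths x 3 learning rates = 9 candidate configurations
```

Each configuration would then be scored on a common validation set, and the chosen values reported alongside the grid, answering how the hyperparameters were "empirically fitted".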

Why is the splitting of the datasets not the same across the algorithms? Splits of 80%/20%, 75%/25%, 70%/30%, and 65%/35% are used depending on the model. The training is therefore not done on the same data, so it cannot be assured that the results are comparable on equal terms.
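The fix this comment implies is a single seeded split shared by all models. A minimal sketch (the ratio and seed are illustrative):

```python
import random

def shared_split(n_samples, test_fraction=0.2, seed=42):
    """Return (train_idx, test_idx) from one seeded shuffle, so every
    model is trained and evaluated on the identical partition."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)
    n_test = int(n_samples * test_fraction)
    return indices[n_test:], indices[:n_test]

train_a, test_a = shared_split(100)
train_b, test_b = shared_split(100)
# Same seed -> identical partition, regardless of which model uses it
```

Feeding every classifier the same `train_idx`/`test_idx` pair removes the split ratio as a confounding variable in the comparison.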

The results show that XGBoost predicts very effectively, but if it has been trained on a much larger dataset, it contains more information and is therefore able to predict better, so the conclusion is not sound.

How can it be explained that a model trained with less information achieves better accuracy, as, for example, XGBoost does on IoT-23?

Author Response

Please see the attachment. 

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

This paper is motivated by the fact that the Internet of Things (IoT) is emerging as a vital tool for innovation, where data sources composed of tiny internet-connected devices generate data and thereby enable smart applications. These devices, however, have limitations in computing, storage, energy, and communication capacity. As such, IoT networks suffer from many vulnerabilities and hence security challenges. The authors supported this motivation with four references (references 1 to 4). Unfortunately, none of these references is a published paper. I urge the authors to use the following recently published articles to support, and hence strengthen, their motivation.

- Hossain M, Kayas G, Hasan R, Skjellum A, Noor S, Islam SR. A Holistic Analysis of Internet of Things (IoT) Security: Principles, Practices, and New Perspectives. Future Internet. 2024 Jan 24;16(2):40.

- Al-Hejri I, Azzedin F, Almuhammadi S, et al. Lightweight Secure and Scalable Scheme for Data Transmission in the Internet of Things. Arab J Sci Eng (2024). https://doi.org/10.1007/s13369-024-08884-z

I would like to see the "Related Work" section concluded with a table summarizing the gaps in the literature and how this paper addresses them. This is crucial to pinpoint the paper's contributions.

I would also like to see a frank section discussing the limitations of the proposed study of the three representative machine learning models.

The conclusion section is short, and the main results that the authors achieved are unfortunately not stated.

The conclusion section should be renamed to “Conclusion and Future Work”. In this section, the authors should envision and elaborate on the extension of their research.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

I don't have further comments; the authors have addressed all my comments.

Reviewer 2 Report

Comments and Suggestions for Authors

The previous comments have been satisfied.
