Article
Peer-Review Record

Helping CNAs Generate CVSS Scores Faster and More Confidently Using XAI

Appl. Sci. 2024, 14(20), 9231; https://doi.org/10.3390/app14209231
by Elyes Manai *, Mohamed Mejri and Jaouhar Fattahi
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 16 August 2024 / Revised: 8 October 2024 / Accepted: 8 October 2024 / Published: 11 October 2024
(This article belongs to the Special Issue Recent Applications of Explainable AI (XAI))

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The work addresses a very interesting topic to simplify and interpret the cvss assignment process. The paper is easy to read and the experimental part is illustrative.

One omission: version 4.0 of CVSS is not cited.

From a formal point of view, there are small typos; for example, there is an "IF" in the summary where it should be "If", and it should say "Figure 16" instead of "figure 16".

The format of the tables should be reviewed to unify their appearance; for example, some lines are missing, or they somewhat confuse the reading.

Comments on the Quality of English Language

The quality of the English is adequate; only minor typos need to be corrected.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

XAI terminology:

As stated in line 404, XAI allows users to better understand the decisions made by these systems. I fully agree with the primary purpose of XAI. However, according to the epistemological triangle, every understandable decision system must be divided into a feature-extraction part and a classification part. Explainability is suitable for classification because it uses logical statements that generate decisions based on Boolean (fuzzy) logic. For features to be understood properly, the wording that features must be interpretable is necessary. This implies that we must understand the meaning of the features regardless of our understanding of the decision result. From my perspective, feature interpretation is a research task for the domain expert, not a task for an algorithm. I particularly appreciate the authors' support of this view in "The Use of Feature Attribution" (l. 507) and in Sections 7.2–7.3.

I prefer the wording: decisions are understandable, features are interpretable, and classification is explainable.

Partial comments:

l. 602: replace the word "retrain" with "update".

l. 621: The authors are very critical of NNs. Generalisation methods, retraining methods, and methods for NN learning from limited training sets are studied in the literature.

 

l. 763: unfinished sentence

Editorial remarks:

- If possible, replace abbreviations in the main heading.

- Do not introduce abbreviations in the abstract or in section headings. Introduce each abbreviation only once, at the first occurrence of the words.

 

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

1. The main comparison methods in the paper date from 2022 and earlier and do not represent the best-performing methods in this field. It is recommended to compare with state-of-the-art methods from the last two years to increase persuasiveness.

2. While the XGBoost model is relatively easy to interpret, it may not perform as well as deep learning models on certain tasks. The paper needs to justify the choice of XGBoost over more complex models.

3. Although SHAP is a powerful tool, it can have limitations when dealing with textual data. The paper needs to discuss these limitations and how to overcome them. The advantages of SHAP compared with other similar approaches (LIME, ICE, etc.) should also be discussed.
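As context for the SHAP comparison requested above, the quantity SHAP approximates is the Shapley value of each feature. A minimal pure-Python sketch of the exact computation is below; the feature names and the toy additive scoring function are illustrative assumptions, not taken from the paper under review.

```python
from itertools import combinations
from math import factorial

# Toy additive "model": the score is a weighted sum of three hypothetical
# CVSS-style features (names are illustrative only).
WEIGHTS = {"attack_vector": 3.0, "privileges_required": 2.0, "user_interaction": 1.0}

def model(features):
    """Score a coalition: sum of the weights of the features present."""
    return sum(WEIGHTS[f] for f in features)

def shapley_values(all_features):
    """Exact Shapley values: the weighted average marginal contribution of
    each feature over all subsets of the remaining features. Exponential in
    the number of features, so feasible only for tiny examples; SHAP exists
    precisely to approximate this efficiently for real models."""
    n = len(all_features)
    values = {}
    for f in all_features:
        others = [g for g in all_features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Standard Shapley weight for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (model(set(subset) | {f}) - model(set(subset)))
        values[f] = total
    return values

print(shapley_values(list(WEIGHTS)))
```

For a purely additive model like this one, each feature's Shapley value equals its weight, which is a useful sanity check on the implementation.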

4. The paper needs to provide more experimental results to demonstrate the robustness and generality of the model across different datasets and scenarios.

Comments on the Quality of English Language

Use a professional grammar checker (e.g., Grammarly, Turnitin) to find and correct grammatical errors and typos. Pay attention to tense consistency, especially when describing the research process, results, and discussion. Ensure sentence structures are complete, and avoid fragmented sentences or long compound sentences, so that the text is easy to understand.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
