Article
Peer-Review Record

An Efficient Explainability of Deep Models on Medical Images

Algorithms 2025, 18(4), 210; https://doi.org/10.3390/a18040210
by Salim Khiat 1,*, Sidi Ahmed Mahmoudi 2, Sédrick Stassin 2, Lillia Boukerroui 3, Besma Senaï 4 and Saïd Mahmoudi 2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 9 January 2025 / Revised: 6 March 2025 / Accepted: 17 March 2025 / Published: 9 April 2025
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper studies the explainability of deep learning models for chest X-ray classification, focusing on two datasets with categories like COVID-19, viral pneumonia, and normal cases. It tests pre-trained CNNs like MobileNetV2 and VGG16, and uses methods like GradCAM and LIME to show important regions. Results show that MobileNetV2 performs best on the first dataset, while VGG16 performs best on the second dataset.

The study shows the potential of explainable AI in the medical field, but the innovation is limited. It only makes small improvements over existing methods and does not propose new methods. This raises questions about how much it really advances the field.

Also, the study does not give a detailed explanation of how the XAI methods work. It does not focus on the math or algorithms behind GradCAM or LIME. This makes it hard to fully understand why they are effective for medical imaging.

Lastly, the code is not open-source. This is a problem for reproducibility and further research, especially in an important field like medical imaging. Making the code available would improve transparency and usefulness to the research community.

Author Response

Comments 1: This paper studies the explainability of deep learning models for chest X-ray classification, focusing on two datasets with categories like COVID-19, viral pneumonia, and normal cases. It tests pre-trained CNNs like MobileNetV2 and VGG16, and uses methods like GradCAM and LIME to show important regions. Results show that MobileNetV2 performs best on the first dataset, while VGG16 performs best on the second dataset.

The study shows the potential of explainable AI in the medical field, but the innovation is limited. It only makes small improvements over existing methods and does not propose new methods. This raises questions about how much it really advances the field.

Response 1: Thank you for your comment. The contribution of this work is to confirm the power and effectiveness of the GradCAM and LIME methods on the COVID-19 dataset; the originality of the work lies in the evaluation of the results with experts in the field (doctors) using the “5-point Likert explanation satisfaction scale” developed by Hoffman et al., 2018.
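For illustration, scoring on such a 5-point Likert scale typically reduces to averaging per-item ratings across evaluators. The items and scores below are hypothetical placeholders, not the study's actual data or the exact wording of the Hoffman et al. scale:

```python
import statistics

# Hypothetical ratings: each doctor scores each explanation-satisfaction
# item from 1 (strongly disagree) to 5 (strongly agree). These items and
# numbers are illustrative only, not the paper's data.
ratings = {
    "The explanation is understandable": [4, 5, 4, 3, 5],
    "The explanation is satisfying": [4, 4, 3, 4, 5],
    "The explanation is sufficiently detailed": [3, 4, 4, 3, 4],
}

for item, scores in ratings.items():
    # Mean rating per item summarizes expert satisfaction.
    print(f"{item}: mean = {statistics.mean(scores):.2f} (n = {len(scores)})")
```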

Comments 2: Also, the study does not give a detailed explanation of how the XAI methods work. It does not focus on the math or algorithms behind GradCAM or LIME. This makes it hard to fully understand why they are effective for medical imaging.

Response 2: A theoretical section has been added just after Figure 6.

Comments 3: Lastly, the code is not open-source. This is a problem for reproducibility and further research, especially in an important field like medical imaging. Making the code available would improve transparency and usefulness to the research community.

Response 3: Code extracts have been added to the results section.
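Since the paper's code extracts are not reproduced in this record, the defining Grad-CAM step (gradient-weighted channel averaging followed by a ReLU) can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' implementation; the array shapes and random inputs are assumptions for demonstration only.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Core Grad-CAM computation on precomputed activations.

    feature_maps: (H, W, K) activations of the last conv layer.
    gradients:    (H, W, K) gradients of the class score w.r.t. those maps.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Channel weights alpha_k: global-average-pool the gradients.
    weights = gradients.mean(axis=(0, 1))  # shape (K,)
    # Weighted sum of feature maps over the channel axis.
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))
    # ReLU keeps only regions with positive influence on the class score.
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam /= cam.max()  # normalize for heatmap visualization
    return cam

# Synthetic example: 7x7 feature maps with 8 channels.
rng = np.random.default_rng(0)
maps = rng.random((7, 7, 8))
grads = rng.random((7, 7, 8))
heatmap = grad_cam(maps, grads)
print(heatmap.shape)
```

In practice the heatmap is upsampled to the X-ray's resolution and overlaid on the image to highlight the regions driving the prediction.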

 

Reviewer 2 Report

Comments and Suggestions for Authors

Dear Authors,

 

I would suggest moving some information from the results to the experimental part of the paper. Please edit Figures 9 and 7. Please add more current references.

The discussion needs to be revised.

 

Thank you

Author Response

Comments 1: I would suggest moving some information from the results to the experimental part of the paper. Please edit Figures 9 and 7.

Response 1: Thank you for your comment. The figures have been highlighted (Figures 10 and 12).

Comments 2: Please add more current references.

Response 2: Seven references have been added, starting from reference [11].

Comments 3: The discussion needs to be revised.

Response 3: This section has been carefully checked, and more explanations have been added to it.

Reviewer 3 Report

Comments and Suggestions for Authors

The subject is very interesting.

The work is well analysed.

Accept as it is

Author Response

Comments 1: The subject is very interesting. The work is well analysed. Accept as it is.

Response 1: Many thanks for your positive comment.

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The current revision is poorly presented, reflecting a lack of consideration for the reviewers. The authors must significantly improve the clarity and formatting throughout the paper.

  1. Unclear Figures: Figures 1, 4, 5, 6, 10, 11, 12, 16, 17, 18, and 19 must be made as clear and high-resolution as possible.
  2. Code Presentation: Do not include code as figures. Instead, use a GitHub repository to share the code properly.
  3. Equation Formatting: The equations, especially those on Page 9, are not in their standard forms. Please ensure they are correctly formatted for clarity.

Author Response

Comments 1: The current revision is poorly presented, reflecting a lack of consideration for the reviewers. The authors must significantly improve the clarity and formatting throughout the paper.

Response 1: We are grateful for all the efforts you made in handling our paper. The clarity and formatting quality are significantly improved in the new version.

 

Comments 2: Unclear Figures: Figures 1, 4, 5, 6, 10, 11, 12, 16, 17, 18, and 19 must be made as clear and high-resolution as possible.

Response 2: All these figures are provided with a higher resolution in the new version of the paper.

 

Comments 3: Code Presentation: Do not include code as figures. Instead, use a GitHub repository to share the code properly.

Response 3: Thank you for your remark. We removed the figures including code in the new version. We realize that the source code can be of interest to the community. As we are considering making further changes to the base code to implement modified methods in our future work (ongoing), we are maintaining a private repository for the code. However, we would welcome any personal requests from the community after the publication of the paper. We appreciate your understanding of this.

Comments 4: Equation Formatting: The equations, especially those on Page 9, are not in their standard forms. Please ensure they are correctly formatted for clarity.

Response 4: These equations are rewritten in standard forms in the new version of the paper.

Round 3

Reviewer 1 Report

Comments and Suggestions for Authors

The revision is almost OK. Minor comments are:

  1. There should be no 'indent' below equations (1) and (2).
  2. In line 395, the sentence should end with a period. 
  3. Below Eq. (4), you should start as "where ...".

Author Response

Thank you for these suggestions. We are grateful for all the efforts you made in handling our paper.

Comments 1: There should be no 'indent' below equations (1) and (2).

Response 1: OK, corrected.

 

Comments 2: In line 395, the sentence should end with a period.

Response 2: OK, corrected.

 

Comments 3: Below Eq. (4), you should start as "where ...".

Response 3: OK, corrected.

 
