Peer-Review Record

State-of-the-Art Explainability Methods with Focus on Visual Analytics Showcased by Glioma Classification

BioMedInformatics 2022, 2(1), 139-158; https://doi.org/10.3390/biomedinformatics2010009
by Milot Gashi, Matej Vuković, Nikolina Jekic, Stefan Thalmann, Andreas Holzinger, Claire Jean-Quartier * and Fleur Jeanquartier *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 30 December 2021 / Revised: 12 January 2022 / Accepted: 13 January 2022 / Published: 19 January 2022

Round 1

Reviewer 1 Report

Dear authors, this manuscript does a good job of reviewing the performance of different xAI approaches on diffuse glioma. I believe this review will help the audience understand and choose the best xAI method. However, to improve the completeness and delivery of the paper, I have the following suggestions:

  1. More proofreading is needed to fix typos, e.g., "by the" in line 62 and the missing commas in line 96 and Appendix A.1; also, what does "a.o." mean in line 466?
  2. Could you add some background on why explainability of AI is necessary?
  3. Does the global vs. local introduction in the paragraph spanning lines 87-94 repeat the introduction in lines 35-40?
  4. It would be more comprehensive if you could include a short introduction to diffuse glioma and its subtypes.
  5. Is cross-validation applicable to these methods?
  6. In Table 1, what does "Data explanation" mean, and what do the asterisks indicate?
  7. You have included 13 figures and 2 tables in the manuscript. It would be more concise and increase readability if you converted some of the figures to tables (e.g., Fig. 2), combined similar figures (e.g., Figs. 10 & 11, Figs. 9 & 13), or moved some of them to the supplementary materials to reduce the total number of figures.

Author Response

First of all, thank you very much for your valuable comments! 

@1. Regarding the suggested proofreading, we checked our manuscript again and found a few typos, but not the ones you mentioned. We then checked back in the submission system and noticed that two manuscript PDF versions were available for download; we apologize for the inconvenience! The older version indeed contained the named typos, such as the duplicated "by the", among others. The newer version is much more readable, includes corrections for the named issues regarding typos, background, and introduction, and contains further minor improvements.

@2., 3., 4. We absolutely agree with your comments. The newer PDF version contains a more scannable and meaningful abstract with a clearer problem statement and motivation, followed by a short introduction to the classification of diffuse glioma and background information on xAI, as well as better structured results and discussion and a more informative conclusion.

@5. Thank you for the question. Applying 10-fold cross-validation to the RandomForestClassifier, the basic model scored a mean accuracy of 0.87 with a standard deviation of 0.02. Cross-validation of xAI methods themselves is a matter of ongoing research. We included this in the manuscript.
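For illustration, a minimal sketch of such a cross-validation run (scikit-learn assumed; the synthetic make_classification data stands in for the glioma dataset, and the hyperparameters are placeholders, not the exact setup from the manuscript):

```python
# Minimal sketch: 10-fold cross-validation of a RandomForestClassifier.
# make_classification stands in for the glioma dataset; the reported
# 0.87 +/- 0.02 refers to the authors' data, not this synthetic example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = RandomForestClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # accuracy is the default scorer
print(f"mean accuracy: {scores.mean():.2f} (std: {scores.std():.2f})")
```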

@6. The asterisk (*) marked the authors' subjective interpretation, used where some libraries' documentation lacked accurate information. We removed this unnecessary and potentially misleading marking and updated the table accordingly.

@7. We absolutely agree that there are a lot of figures and have therefore reorganized the figures and, as a first step, combined Figures 10 and 11.

Reviewer 2 Report

The authors summarize the state-of-the-art methods that enable visual explainability of data. The review is comprehensive and inspiring. I recommend this manuscript for acceptance after minor formatting revisions.

Comments:

  1. The link to the complete overview table on the GitHub repository is not working. Please check the link and update it.
  2. Figure 7 is not intact; please replot it.

Author Response

First of all, thank you very much for the positive assessment of our manuscript's usefulness and applicability.

@1. We corrected the link; the repository had unintentionally been set to private. The full comparison table is now accessible.

@2. We checked Figure 7 for plotting issues but could not reproduce the problem: the figure renders correctly in Acrobat Reader, in the built-in PDF viewers of Firefox and other browsers on Windows and macOS, and even when editing the PDF with Affinity. The problem also does not appear in other versions. Thank you for this comment nonetheless; we will specifically check rendering and image resolution in the upcoming revised and proof versions.

Reviewer 3 Report

In this manuscript, Gashi et al. compare 11 Python packages that provide visualization and explanation for ML/DL models, using a publicly available glioma dataset as a benchmark. The manuscript is well written and fits the scope of the journal. However, I have several suggestions and questions for the authors.

  1. It would be nice to add compatibility comparisons for these 11 Python packages; e.g., some may work only on TensorFlow-based models but not on PyTorch-based models. Compatibility can be a real hurdle in real-world implementation pipelines.
  2. It would also be ideal to have a minimum resource/efficiency comparison, e.g., memory and the number of CPUs/GPUs required.
  3. Several highlighted and incomplete sentences in blue or red need to be modified.
  4. The paper seems to be an equally good or better fit for the BioMedInformatics special issue "Machine Learning in Computational Pathology". I highly recommend the authors consider it. https://www.mdpi.com/journal/biomedinformatics/special_issues/Deep_Learning_in_Computational_Pathology

Author Response

Thank you for your valuable comments! 

@1. We agree that information about library compatibility is quite useful. Information on the availability of R and Python APIs, as well as on model support for TensorFlow and other frameworks, can be found in the complete table in Appendix A.1. We added a sentence to the manuscript to point to this table more prominently.

@2. Thank you for the interesting question. Table 1 already shows quite divergent runtimes, from less than a second up to a bit more than a minute depending on which library is used. Additionally, performance depends strongly on the model, so we can only give hardware recommendations for our exemplified usage. Further information on requirements and recommendations would need additional experiments, which could be part of future work.
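As a rough illustration of how such per-library runtimes can be measured, here is a hedged sketch; SHAP is assumed purely as an example library, and the numbers in Table 1 were measured on the authors' setup and models, not with this snippet:

```python
# Hypothetical timing sketch; SHAP is assumed as the example library,
# with a synthetic dataset standing in for the manuscript's data.
import time

import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

start = time.perf_counter()
explainer = shap.TreeExplainer(clf)     # model-specific explainer
shap_values = explainer.shap_values(X)  # explanation step being timed
elapsed = time.perf_counter() - start
print(f"explanation runtime: {elapsed:.2f} s")
```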

@3. Regarding the suggested modifications concerning blue and red sentences, we again checked our manuscript in the submission system and noticed that two manuscript PDF versions were available for download; we apologize for the inconvenience! The older version indeed contained red and blue to-do markers and was incomplete. The newer version is much more readable and includes corrections for the named issues.

@4. Thank you for this suggestion regarding another related special issue. We believe our current work fits the selected special issue very well, given its emphasis on explainability. Furthermore, with regard to biomarkers, the results are not solely intended for pathology but also for knowledge discovery on signaling insights. The current work exemplifies xAI by processing multiple biomedical comma-separated-value datasets, of interest to researchers as well as clinicians.
