Peer-Review Record

The Cost of Understanding—XAI Algorithms towards Sustainable ML in the View of Computational Cost

Computation 2023, 11(5), 92; https://doi.org/10.3390/computation11050092
by Claire Jean-Quartier 1,2,†, Katharina Bein 3, Lukas Hejny 3, Edith Hofer 3, Andreas Holzinger 1,3,4 and Fleur Jeanquartier 1,*,†
Submission received: 10 April 2023 / Revised: 21 April 2023 / Accepted: 26 April 2023 / Published: 4 May 2023
(This article belongs to the Special Issue Intelligent Computing, Modeling and its Applications)

Round 1

Reviewer 1 Report

I found the paper interesting and well written, with a good bibliography, which I appreciate. I also like that the authors provide links to GitHub.

Minor remarks:

- Reference 21: a preprint? Nothing published yet?

- In section 4.1, row 210, there is a reference to a "Table ??" that, if my deductions are correct, should be Table A1.

- row 229 (beginning of page 7): there are probably too many commas

Major remarks:

- I think that the authors, in Section 3.3, should better explain why they chose to use CodeCarbon, since in Section 2 they described and cited many other similar tools. Perhaps in Section 4.1 they mean to give a reason for this, but I still think it should be highlighted in the right place.

- I would like a better explanation of Figures 2-4. The graphics are not easy to interpret and, if they cannot be changed, additional wording in the captions or in Section 4 would help make them more explicit.

Author Response

Many thanks for taking the time to review our work and for your appreciation of open science! Answers to the comments are given below:

> Minor remarks:
> - Reference 21: a preprint? Nothing published yet?

Thank you for mentioning this; we corrected this reference and also another one in which the author's name was misspelled.

> - In section 4.1, row 210, there is a reference to a "Table ??" that, if my deductions are correct, should be Table A1.

Pardon us, you are absolutely right. We corrected the broken reference in the manuscript accordingly.

> - row 229 (beginning of page 7): there are probably too many commas

Thank you again; we removed the extraneous comma from the figure label.

> Major remarks:
> - I think that the authors, in Section 3.3, should better explain why they chose to use CodeCarbon, since in Section 2 they described and cited many other similar tools. Perhaps in Section 4.1 they mean to give a reason for this, but I still think it should be highlighted in the right place.

Thank you for pointing that out; we have now added a short explanation in Section 3.3, which is part of the Methods section, and referred to the longer description in Section 4.1, presented in the Results section.

> - I would like a better explanation of Figures 2-4. The graphics are not easy to interpret and, if they cannot be changed, additional wording in the captions or in Section 4 would help make them more explicit.

Again, thank you for this comment aimed at increasing the comprehensibility of the manuscript. We have extended both the figure captions and the corresponding results descriptions.

Reviewer 2 Report

This study is one of the first to explore aspects of energy consumption during model development, specifically examining the impact of XAI on sustainability. The results on the energy consumption of various models and scenarios show that explainability can contribute to model development in terms of a sustainable reduction in energy consumption.

Specifically, the authors present three different models for classification, regression, and object detection, covering cancer scenarios, building energy, and image detection, each integrated with explainable artificial intelligence (XAI) or feature reduction.

The authors have focused on using the Python CodeCarbon package along with Python-based machine learning models, as the language has become a standard choice for scientific computing and machine learning.
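For context, a minimal sketch of how CodeCarbon's EmissionsTracker can wrap a training run; the dataset, classifier, and project name below are illustrative placeholders, not the authors' actual experimental setup:

    # Illustrative sketch only: the dataset and model are placeholders,
    # not the experimental setup used in the paper. CodeCarbon's
    # EmissionsTracker measures the code between start() and stop() and
    # returns the estimated emissions in kg CO2-equivalent.
    from codecarbon import EmissionsTracker
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True)  # placeholder data

    tracker = EmissionsTracker(project_name="xai-cost-sketch")
    tracker.start()
    try:
        # Workload under measurement: a training run
        model = RandomForestClassifier(n_estimators=200).fit(X, y)
    finally:
        emissions_kg = tracker.stop()  # estimated kg CO2eq for the tracked block

    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")

The same start()/stop() bracket can be placed around an XAI step, such as computing feature attributions, to compare its energy cost against that of training alone.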

In detailing the individual results on the various test systems, the authors observed significant differences across the test runs.

Thus, reducing energy consumption in calculations performed by neural networks is an urgent task for the near future.

- It would be desirable for the authors to specify approximately what percentage of current emissions is attributable to computational data processing.

- For example, you can try to use the EvalAttAI metric in your future research on this topic (Nielsen et al.: EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models - https://arxiv.org/pdf/2303.08866.pdf)

Author Response

Thank you for taking the time to review our work and for your suggestions! Corresponding answers are given below:

> - It would be desirable for the authors to specify approximately what percentage of current emissions is attributable to computational data processing.

We extended the introduction section and added a percentage figure as well as a comparison to other industry sectors to underline the growing share of carbon emissions attributable to this sector.

> - For example, you can try to use the EvalAttAI metric in your future research on this topic (Nielsen et al.: EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models - https://arxiv.org/pdf/2303.08866.pdf)

This work focuses on analyzing energy consumption rather than testing the reliability of XAI approaches. However, we added a note to the related work section regarding the different shortcomings of XAI currently being studied.
