Article
Peer-Review Record

Method for Training and White Boxing DL, BDT, Random Forest and Mind Maps Based on GNN

by Kohei Arai
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Appl. Sci. 2023, 13(8), 4743; https://doi.org/10.3390/app13084743
Submission received: 3 February 2023 / Revised: 27 March 2023 / Accepted: 8 April 2023 / Published: 10 April 2023
(This article belongs to the Special Issue Applications of Deep Learning and Artificial Intelligence Methods)

Round 1

Reviewer 1 Report

The reviewer highly appreciates the white-box concept proposed by the author, since it gives readers insight into the AI model so that they can apply it with an educated guess instead of implementing it blindly.

The writing style of the paper reads more like a tutorial, and significant revision is suggested.

Decision trees and random forests are mentioned in a general way, but how they are applied specifically in the proposed method should be elaborated.

Definitions of the symbols used in Equations (1)–(5) should be provided.

Figure 5 is unclear: what is the majority logic?

Author Response

Thank you for your comments and suggestions.

>The reviewer highly appreciates the white-box concept proposed by the author, since it gives readers insight into the AI model so that they can apply it with an educated guess instead of implementing it blindly.

I have revised the manuscript so that it makes clear that the proposed white-box concept provides insight into AI models when they are applied with educated inference rather than blindly.

>The writing style of the paper reads more like a tutorial, and significant revision is suggested.

I have shortened the tutorial part and revised the manuscript to describe the white-boxing proposal in more detail.

>Decision trees and random forests are mentioned in a general way, but how they are applied specifically in the proposed method should be elaborated.

For decision trees and random forests, in addition to the general descriptions, I have described how the tree-structure and random-forest-structure graphs are converted into matrices, which is necessary when applying GCN and GNN.
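For illustration, the following is a minimal sketch of such a conversion (my own example, not the exact code in the manuscript; scikit-learn and a toy dataset are assumed): a trained decision tree is turned into the adjacency matrix of its directed parent-to-child graph.

```python
# Minimal sketch: convert a trained scikit-learn decision tree into the
# directed-graph adjacency matrix that a GCN/GNN can consume.
# The dataset and hyper-parameters are placeholders for illustration only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

n_nodes = clf.tree_.node_count
adj = np.zeros((n_nodes, n_nodes), dtype=int)

# children_left / children_right give the child node id of each internal
# node (-1 for leaves); every parent -> child pair is a directed edge.
for parent in range(n_nodes):
    for child in (clf.tree_.children_left[parent],
                  clf.tree_.children_right[parent]):
        if child != -1:
            adj[parent, child] = 1

print(adj.shape)  # (n_nodes, n_nodes) adjacency matrix of the tree graph
```

A random forest can be handled the same way, one adjacency matrix per tree (or one block-diagonal matrix over all trees).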

>Definitions of the symbols used in Equations (1)–(5) should be provided.

Definitions of the notation, as well as explanations of Equations (1) to (5), have been added.

>Figure 5 is unclear: what is the majority logic?

The majority logic is defined in the revised version of the manuscript, and Figure 5 is now explained more clearly. Please find the revised manuscript attached.
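As a short illustration of the term (not the manuscript's exact code), majority logic can be understood as the majority vote commonly used in random forests: each tree casts one vote and the most frequent class wins.

```python
# Minimal sketch of majority logic as used in random-forest-style voting:
# each tree predicts a label and the label with the most votes is returned.
from collections import Counter

def majority_vote(tree_predictions):
    """Return the class label predicted by the majority of the trees."""
    votes = Counter(tree_predictions)
    return votes.most_common(1)[0][0]

# Example: five trees vote on a single sample.
print(majority_vote(["A", "B", "A", "A", "B"]))  # -> "A"
```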

Author Response File: Author Response.pdf

Reviewer 2 Report

Dear Author(s),

Thank you for your work.

I think the abstract needs more explanation of the suggested procedure.

The suggested procedure is not clear.

The conclusion must reflect the main points obtained in the work.

 

Best regards 

Author Response

>I think the abstract needs more explanation of the suggested procedure.

I have added the following explanation: "DL, decision trees, random forests, and mind maps can be expressed as directed graphs and can therefore be represented as matrices. Consequently, these learning processes can be carried out with GNN and GCN, and the nodes in the hidden layers can be inspected, which makes the learning visible and accountable."
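A minimal sketch of that idea (my own illustration, not the manuscript's code): a directed graph is stored as an adjacency matrix A, and one GCN propagation step of the form H' = tanh(D^-1 (A + I) H W) makes the hidden-layer node states explicit and inspectable.

```python
# Minimal sketch: a directed graph as an adjacency matrix and one GCN-style
# propagation step whose hidden-layer node states are directly visible.
import numpy as np

A = np.array([[0, 1, 1, 0],   # adjacency matrix of a small directed graph
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
H = np.eye(4)                                        # initial node features
W = np.random.default_rng(0).normal(size=(4, 2))     # learnable weights

A_hat = A + np.eye(4)                     # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalization
H_next = np.tanh(D_inv @ A_hat @ H @ W)   # hidden-layer node states

print(H_next)  # every node's hidden representation can be inspected here
```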

>The suggested procedure is not clear.

I have added the procedure of the proposed method.

>The conclusion must reflect the main points obtained in the work.

The main points are as follows: "DL, decision trees, random forests, and mind maps can be expressed as directed graphs and can therefore be represented as matrices. Consequently, these learning processes can be carried out with GNN and GCN, and the nodes in the hidden layers can be inspected, which makes the learning visible and accountable."

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper presents a roadmap for deep learning interpretability using methods such as GNN/GCN and random forests. The topic of interpretability is important, and I agree that GNNs or random forests can provide useful information for understanding how deep learning works. However, the paper reads as an introduction to GNN and other machine learning methods. The "white box" methods presented remain at the conceptual stage without a solid, direct, and practical (even simple) application to demonstrate their effectiveness, which is also acknowledged by the author. Please add the necessary cases to support the presented method, and please use formal language and a standard paper format.

Author Response

>This paper presents a roadmap for deep learning interpretability using methods such as GNN/GCN and random forests. The topic of interpretability is important, and I agree that GNNs or random forests can provide useful information for understanding how deep learning works. However, the paper reads as an introduction to GNN and other machine learning methods. The "white box" methods presented remain at the conceptual stage without a solid, direct, and practical (even simple) application to demonstrate their effectiveness, which is also acknowledged by the author. Please add the necessary cases to support the presented method, and please use formal language and a standard paper format.

Thank you for your comments; I have made the corrections. The most important point is that "DL, decision trees, random forests, and mind maps can be expressed as directed graphs and can therefore be represented as matrices. Consequently, these learning processes can be carried out with GNN and GCN, and the nodes in the hidden layers can be inspected, which makes the learning visible and accountable." I have also added the procedure of the proposed method. My intention is to provide a hint for white-boxing DL, as well as the learning processes of decision trees, random forests, and mind maps, with GNN and GCN by visualizing the nodes in the hidden layers.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

All comments raised have been adequately addressed by the author. There are no further issues.

Author Response

>All comments raised have been adequately addressed by the author. There are no further issues.

Thank you for your message. Please find the revised version of my manuscript attached as a PDF.

Author Response File: Author Response.pdf

Reviewer 2 Report

Dear author;

Thank you for making the required revisions.

1. Could you please explain whether the table in Figure 4 is similar to a confusion matrix or not?

2. Verification of the proposed method still needs clear examples.

3. Also, it requires a comparison with other related techniques to show the differences between them.

 

Best regards

 

 

Author Response

>1. Could you please explain whether the table in Figure 4 is similar to a confusion matrix or not?

The table in Fig. 4 is the matrix that corresponds to the graph shown in Fig. 4(a).

>2. Verification of the proposed method still needs clear examples.

I have added examples of neural network architecture refinement and of converting an architecture to a graph, together with Python code.
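As an illustration of the architecture-to-graph conversion (a hedged sketch with placeholder layer names and sizes, not the architecture used in the manuscript; networkx is assumed to be available), a small feed-forward network can be expressed as a directed graph and then as a matrix:

```python
# Minimal sketch: express a feed-forward architecture as a directed graph
# with networkx and obtain its matrix form. Layer names/sizes are placeholders.
import networkx as nx

layers = ["input(4)", "hidden1(8)", "hidden2(8)", "output(3)"]

g = nx.DiGraph()
g.add_nodes_from(layers)
g.add_edges_from(zip(layers[:-1], layers[1:]))  # layer-to-layer connections

adj = nx.to_numpy_array(g, nodelist=layers)     # matrix form of the network
print(adj)
```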

>3. Also, it requires a comparison with other related techniques to show the differences between them.

An alternative technique for neural network architecture improvement is the back-propagation method, which does not guarantee a global optimum and may converge to a local minimum. Another alternative technique for white-boxing deep learning is the multi-stage neural network: interim results can be inspected by looking at the output of the n-th stage, which makes the deep learning process accountable. Thank you for your comments and suggestions. Please find the revised version of the manuscript attached as a PDF.
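A minimal sketch of the multi-stage idea mentioned above (an illustration only; the stages and weights here are arbitrary placeholders, not the manuscript's networks): each stage's output is returned alongside the final prediction so that interim results can be inspected.

```python
# Minimal sketch: a multi-stage forward pass that exposes interim results,
# so each stage's output can be inspected for accountability.
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 8))
w2 = rng.normal(size=(8, 8))
w3 = rng.normal(size=(8, 3))

def stage(x, w):
    return np.tanh(x @ w)

def multi_stage_forward(x):
    h1 = stage(x, w1)      # interim result of stage 1
    h2 = stage(h1, w2)     # interim result of stage 2
    out = stage(h2, w3)    # final output
    return out, (h1, h2)   # interim results are exposed for inspection

out, interim = multi_stage_forward(rng.normal(size=(1, 4)))
print([h.shape for h in interim], out.shape)
```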

Author Response File: Author Response.pdf

Reviewer 3 Report

I am afraid I do not find major improvements in the revised manuscript. The major issues remain: there is no solid, direct, and practical application to demonstrate the effectiveness of the "white box" methods. The paper reads as an introduction to machine learning methods and plain conceptual "ideas". I suggest taking more time to dig deeper into this topic.

Author Response

Thank you for your comments and suggestions below,

> I am afraid I do not find major improvements in the revised manuscript. The major issues remain: there is no solid, direct, and practical application to demonstrate the effectiveness of the "white box" methods. The paper reads as an introduction to machine learning methods and plain conceptual "ideas". I suggest taking more time to dig deeper into this topic.

I have revised my manuscript significantly. The proposed method is useful for improving learning network architectures. Networks can be represented as graphs and as matrices; therefore, it is possible to train the networks with a GNN, which results in optimization of the networks. I have illustrated this possibility with Python code that expresses the networks as graphs and matrices, as well as the learning processes. Please find the revised version of my manuscript attached as a PDF.

Regards, 

Author Response File: Author Response.pdf
