Article
Peer-Review Record

Predicting Astrocytic Nuclear Morphology with Machine Learning: A Tree Ensemble Classifier Study

Appl. Sci. 2023, 13(7), 4289; https://doi.org/10.3390/app13074289
by Piercesare Grimaldi 1,†, Martina Lorenzati 2,3,†, Marta Ribodino 2,3, Elena Signorino 2,3, Annalisa Buffo 2,3,‡ and Paola Berchialla 4,*,‡
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4:
Submission received: 3 January 2023 / Revised: 23 February 2023 / Accepted: 22 March 2023 / Published: 28 March 2023
(This article belongs to the Special Issue Applied Biostatistics & Statistical Computing)

Round 1

Reviewer 1 Report

The application of several decision-tree-based algorithms to the classification of astrocytic nuclear morphology is analyzed in this study. The results show that the boosting algorithms (tree ensemble), leave-one-out, and bootstrap perform better. However, some issues need to be addressed before the paper can be considered for publication.

1. The main work of this study is to apply several existing algorithms to classify astrocytic nuclear morphology. If nothing new has been proposed, what is the innovation of this article?

2. The principles of the applied algorithms are not introduced in this paper; at a minimum, the hyperparameter settings of the algorithms, which have a significant impact on their performance, need to be explained.

3. The comparative analysis in this paper is not rigorous. For example, 5-fold cross-validation is adopted, and then fitctree, TreeBagger, and fitcensemble are compared to show the advantages of the fitcensemble algorithm. However, different combinations of algorithm and validation method have different effects; perhaps the combination of the leave-one-out method and the fitctree algorithm has the best effect. The analysis in this study did not consider this problem.

4. In Line 234, the authors decomposed each decision into a binary classification. What is the rationale and basis for this approach?

5. Place the figures and tables after the positions where they are first mentioned in the text. The current positions of Table 1 and Figures 2-4 are untidy.

6. Perhaps it would be better to subdivide the Results section.

7. The explanatory text following the captions of Figures 2 and 3 is too long; why not move it into the main text?

8. The explanation or calculation formula for some key indicators is missing, such as the decision score in Line 236 and the predictor importance score in Figure 5.

9. This study lacks a theoretical analysis of the algorithms applied, which makes it read more like an experimental report.

10. There are some spelling errors in the text; please check and correct them, for example, “RUSB oost” in Line 219 and “classification ree” in Line 221. In addition, Table 1 is wrongly placed.
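Point 3 above can be made concrete with a small sketch: evaluate every combination of classifier and validation scheme, rather than fixing one scheme. The snippet below is purely illustrative, using scikit-learn analogues of the MATLAB functions the reviewer names (DecisionTreeClassifier for fitctree, BaggingClassifier for TreeBagger, AdaBoostClassifier for fitcensemble) on synthetic data; it is not the authors' code.

```python
# Illustrative sketch: cross each classifier with each validation scheme
# so that no (algorithm, validation) pairing is left unexamined.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the nuclear morphology features.
X, y = make_classification(n_samples=120, n_features=8, random_state=0)

models = {
    "tree": DecisionTreeClassifier(random_state=0),      # ~ fitctree
    "bagging": BaggingClassifier(random_state=0),        # ~ TreeBagger
    "boosting": AdaBoostClassifier(random_state=0),      # ~ fitcensemble
}
schemes = {
    "5-fold": KFold(n_splits=5, shuffle=True, random_state=0),
    "leave-one-out": LeaveOneOut(),
}

# Mean accuracy for every (model, validation scheme) pair.
results = {(m, s): cross_val_score(models[m], X, y, cv=schemes[s]).mean()
           for m in models for s in schemes}
for (m, s), acc in sorted(results.items()):
    print(f"{m:8s} {s:13s} accuracy={acc:.3f}")
```

With the full grid in hand, one can check whether, e.g., the single tree under leave-one-out beats the ensemble under 5-fold, which is exactly the scenario the reviewer raises.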

Author Response

We thank the reviewer for the comments, which were of great help in the revision of the manuscript.

Author Response File: Author Response.pdf

Reviewer 2 Report

1. First, the computational complexity of the algorithm needs to be analyzed and compared with SOTA algorithms.

2. Second, how does the algorithm adapt to different numbers of training labels, especially small label sets? Please compare with the SOTA methods.

3. How well does the algorithm proposed by the authors adapt to image noise? Please use experiments to demonstrate the advancement of the algorithm; that is, what is the classification performance when images are injected with different kinds of noise? In addition, what is the classification performance when the classes are imbalanced?

4. Finally, some papers should be cited in your paper, e.g., doi:10.1016/j.eswa.2023.119508, doi:10.1109/TGRS.2022.3202865, doi:10.1109/TGRS.2022.3198842, doi:10.1007/s00521-020-05514-1.

Author Response

We thank the reviewer for pointing out a different way to analyse the data. We did not consider an image-based data acquisition, and for this reason we adopted a supervised approach. However, we will consider the suggested approach as a further step of our research, even though we have chosen not to include it in our manuscript. We hope that the reviewer will understand our point of view.

Author Response File: Author Response.pdf

Reviewer 3 Report

Predicting astrocytic nuclear morphology with machine learning: A tree ensemble classifier study

The scope of the work is good.

It handles the imbalanced dataset for better results.

A novel ensemble is proposed, and it is claimed to give better accuracy, recall, etc. than peer techniques.

It would be helpful if the introduction and related work were described in separate sections.

Then, towards the end of the literature/related-work section, there should be a comparative table summarizing the work done so far.

In Lines 148-149, LVM is defined twice.

Cross-check all the abbreviations throughout the paper and correct them; make sure each is defined at its first occurrence and only the abbreviation is used thereafter.

Figures 2, 3, and 4 are not clear; they need to be presented more clearly.

A description of the dataset is required.

How is the unbalanced data handled?

Why is the ensemble created?

How does it offer better results?

Some recent references could be added to complete the study:

HRDEL: High ranking deep ensemble learning-based lung cancer diagnosis model

EnsembleNet: A hybrid approach for vehicle detection and estimation of traffic density based on faster R-CNN and YOLO models

Weed density estimation in soya bean crop using deep convolutional neural networks in smart agriculture

FETCH: A Deep Learning-Based Fog Computing and IoT Integrated Environment for Healthcare Monitoring and Diagnosis

Intelligent fake reviews detection based on aspect extraction and analysis using deep learning

Convolution network model based leaf disease detection using augmentation techniques


Author Response

We thank the reviewer for the helpful comments

Author Response File: Author Response.pdf

Reviewer 4 Report

Dear Authors,

I was very pleased to review this article. The paper is interesting, and the structure of the work is very good. I rate the work highly, and the topic is very topical; after all, we have Industry 4.0 and even already Industry 5.0. Machine learning is very often heavily discussed, so it is nice that the authors write about ML. However, I have a few comments:

1. Please pay attention to the spelling of the word "analysis" in the section titles (2.3, 2.4): it currently reads "analyses" but should be "analysis" (it is a noun).

2. Figures 2, 3, 4, and 5 are too small; the reader cannot see what is in them. Please enlarge them so that the annotations are visible.

3. Figure 5: I suggest using colours. This figure is poorly captioned; the caption needs to be under the figure. The figure is too small and the axis labels are not visible; please align the labels on the X axis vertically.

4. Table 1 should be placed after Line 197.

5. The literature review could be enriched by adding more citations to the discussion.

Best wishes and GOOD LUCK

In 2023, I wish you many high-quality scientific papers.

Reviewer

Author Response

We thank the reviewer for the helpful and encouraging comments

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Thank you for the authors' responses. I do not have any concerns.

Author Response

We thank the reviewer for the helpful comments

Reviewer 2 Report

1. First, the computational complexity of the algorithm needs to be analyzed and compared with SOTA algorithms.

2. Second, how does the algorithm adapt to different numbers of training labels, especially small label sets? Please compare with the SOTA methods.

3. An ablation experiment should be added to demonstrate the function of each module.

Author Response

Dear Reviewer,

Thank you for your time and for taking the opportunity to review our paper. We appreciate your comments and suggestions. In response to your first comment, we agree that the computational complexity of our algorithm is an important factor to consider. However, we would like to point out that the aim of our study was to describe an application of machine learning based on tree classifiers. Our study focused on data that present several limitations, such as small sample size, high class imbalance, and missing values, which are peculiar to neuroscience experimental data. We aimed to identify the most accurate algorithm for classifying astrocytic nuclear morphology and the most effective cross-validation technique.

Question 1: First, the computational complexity of the algorithm needs to be analyzed and compared with SOTA algorithms.

Reply: We would like to point out that our approach is based on the comparison of three methods: a tree classifier, a tree ensemble, and a TreeBagger, which is very similar to Random Forest, a well-known and widely used state-of-the-art (SOTA) algorithm for classification tasks. We also tested a Support Vector Machine (SVM, data not shown), which is likewise considered SOTA, and the results were very similar to those obtained with the classification tree. Indeed, the aim of our work is to describe an application of machine learning to data with several limitations, such as small sample size, high class imbalance, and missing values, which are peculiar to neuroscience experimental data. We think that the study's results may be valuable for practitioners in the field even without addressing the issue of computational complexity.

Question 2: How does the algorithm adapt to different numbers of training labels, especially small label sets? Please compare with the SOTA methods.

Reply: Given the small sample size, we made a trade-off between algorithm complexity and classification accuracy. To assess accuracy while avoiding optimism, we compared several validation methods, including k-fold cross-validation, stratified cross-validation, and leave-one-out cross-validation. This was our main goal: the generalization of the results. Thus, given the design of our study, we were not interested in assessing the adaptability of the algorithm to different label-set sizes. Regarding the problem of small labels (the data imbalance issue in our data description), we assessed the accuracy of the tree classifiers with and without the RUSBoost procedure.
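As an illustration of the validation comparison described in this reply, the sketch below contrasts k-fold, stratified k-fold, and leave-one-out cross-validation on a small, highly imbalanced synthetic dataset. It is a hypothetical scikit-learn stand-in (the study itself used MATLAB tree classifiers), intended only to show how the three schemes are swapped in.

```python
# Illustrative sketch: the same classifier evaluated under three
# cross-validation schemes on small, imbalanced data. With strong
# imbalance, plain k-fold can produce folds with very few minority-class
# samples, which stratification avoids by preserving class proportions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (KFold, LeaveOneOut, StratifiedKFold,
                                     cross_val_score)

# ~90/10 class imbalance, 100 samples: roughly the regime described above.
X, y = make_classification(n_samples=100, n_features=6, weights=[0.9, 0.1],
                           random_state=1)

clf = RandomForestClassifier(n_estimators=50, random_state=1)
schemes = {
    "k-fold": KFold(n_splits=5, shuffle=True, random_state=1),
    "stratified": StratifiedKFold(n_splits=5, shuffle=True, random_state=1),
    "leave-one-out": LeaveOneOut(),
}
scores = {name: cross_val_score(clf, X, y, cv=cv).mean()
          for name, cv in schemes.items()}
for name, acc in scores.items():
    print(f"{name:13s} mean accuracy={acc:.3f}")
```

Note that plain accuracy is optimistic under imbalance (a majority-class guesser already scores ~0.9 here), which is why the reply pairs the validation comparison with an undersampling-boosting procedure such as RUSBoost.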

Question 3: The ablation experiment should be added to prove the function of each module.

Reply: We used a simple classification tree, a tree bagger, and tree boosting; thus, we were not in a deep learning framework with a complex algorithm to be explained through an ablation study. Indeed, classification trees and ensembles of classification trees, such as tree bagger and tree boosting, provide a variable importance metric, which helps to explain the variables relevant to the classification. Thank you again for your review. We hope that this response addresses your concerns and clarifies some of the points you raised.
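The variable importance metric mentioned in this reply can be illustrated with a minimal sketch. This is an assumed scikit-learn analogue (a random forest's feature_importances_), not the authors' MATLAB pipeline: tree ensembles expose a per-feature score, summing to one, that ranks the predictors driving the classification.

```python
# Minimal sketch: variable importance from a tree ensemble on synthetic
# data where only a few features are actually informative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, n_informative=2,
                           n_redundant=0, random_state=2)

forest = RandomForestClassifier(n_estimators=100, random_state=2).fit(X, y)
importance = forest.feature_importances_  # one score per feature, sums to 1

# Rank predictors from most to least important.
ranking = np.argsort(importance)[::-1]
for i in ranking:
    print(f"feature {i}: importance={importance[i]:.3f}")
```

This kind of ranking is what makes tree ensembles interpretable without a module-level ablation: the importance scores directly indicate which predictors the classification relies on.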

 

Reviewer 3 Report

The paper has been revised as per my previous comments.

The paper is good in its present form and can be considered.

Author Response

We thank the reviewer for the helpful comments

Round 3

Reviewer 2 Report

- The paper should be interesting.

- It is a good idea to add a block diagram of the proposed research (step by step).

- It is a good idea to add more photos of measurements and sensors, with arrows/labels indicating what is what (if any).

- In the figures, please add a scale indicating what the white/black colours represent.

- What is the result of the analysis/review?

- The figures should be of high quality.

- The labels of the figures should be bigger.

- Please add 2-3 photos of the application of the proposed research.

- What will society gain from the paper?

- Is there a possibility to use your research for other problems?

- Please compare the advantages/disadvantages of other approaches.

- At least 50% of the references (30 references at least) should be from the Web of Science, 2020-2022.

- Conclusion: point out what you have done.

- Please add some sentences about future work.

Author Response

We appreciate the reviewer's comments and suggestions. We have provided point-by-point responses in the attached file. The changes made in the manuscript are highlighted, so they can be read in the manuscript as well.

Author Response File: Author Response.pdf

Round 4

Reviewer 2 Report

The authors did not take my comments seriously, and many of the problems in the paper that I commented on were not revised; the innovation of the paper is limited. Therefore, I strongly recommend rejecting the manuscript.
