Article
Peer-Review Record

An Empirical Evaluation of Neural Network Architectures for 3D Spheroid Segmentation

by Fadoua Oudouar 1, Ahmed Bir-Jmel 2, Hanane Grissette 3, Sidi Mohamed Douiri 4, Yassine Himeur 5,*, Sami Miniaoui 5, Shadi Atalla 5 and Wathiq Mansoor 5
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 1 December 2024 / Revised: 2 February 2025 / Accepted: 12 February 2025 / Published: 28 February 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper is a strong contribution to biomedical image segmentation, particularly for 3D spheroids. Its robust methodological approach and clear findings highlight the promise of HRNet and DeepLabV3+ in medical applications. The work tackles a critical problem in biomedical research, contributing to advances in cancer diagnosis and treatment.

The study sets out to compare three prominent neural network architectures (U-Net, HRNet, and DeepLabV3+) for 3D spheroid segmentation, providing a focused and well-defined research goal.

The literature review could have included a more critical analysis of how the selected architectures compare to other emerging models in terms of performance and applicability.

The study is limited to a single open dataset, which may not capture the full variability of spheroid images in different experimental conditions. While the paper focuses on three architectures, it does not explore newer or alternative models that might offer improved performance.

The paper explores the effects of different learning rates and optimizers, revealing the critical role of these hyperparameters in segmentation performance. This approach helps mitigate the dataset's size limitations.

While overfitting is mentioned, there is no in-depth analysis of how data augmentation or other regularization techniques could address this issue.

The dataset includes 3D spheroids with different characteristics, but a dataset of 621 images is insufficient for training and evaluating deep learning models without overfitting. The paper also does not state whether the dataset captures diverse imaging conditions or biological variability.

The selected metrics are standard but may not fully reflect biomedical segmentation needs, such as edge precision or cell boundary accuracy.
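
One way to complement region-overlap metrics is a boundary-matching score. The sketch below is a minimal, illustrative boundary F1 in Python; the function name and the pixel tolerance are assumptions for illustration, not anything reported in the paper:

```python
import numpy as np
from scipy import ndimage

def boundary_f1(pred, gt, tol=2):
    """Illustrative boundary F1: the fraction of each mask's boundary
    pixels lying within `tol` pixels of the other mask's boundary."""
    def boundary(m):
        m = m.astype(bool)
        return m ^ ndimage.binary_erosion(m)  # one-pixel boundary ring
    pb, gb = boundary(pred), boundary(gt)
    # Distance from every pixel to the nearest boundary pixel of each mask.
    d_to_gt = ndimage.distance_transform_edt(~gb)
    d_to_pred = ndimage.distance_transform_edt(~pb)
    precision = (d_to_gt[pb] <= tol).mean() if pb.any() else 0.0
    recall = (d_to_pred[gb] <= tol).mean() if gb.any() else 0.0
    return 2 * precision * recall / (precision + recall + 1e-8)

# Toy example: a ground-truth square and a slightly shifted prediction.
gt = np.zeros((64, 64), dtype=np.uint8); gt[20:44, 20:44] = 1
pred = np.zeros_like(gt); pred[22:46, 22:46] = 1
print(boundary_f1(pred, gt))
```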

The HRNet model consistently achieves the highest scores across all metrics, establishing its dominance for this task. The analysis highlights the strengths and weaknesses of each model and optimizer combination, providing actionable insights.

The study mentions overfitting issues, particularly with U-Net and the Adam optimizer, but does not deeply investigate methods to mitigate this problem, such as data augmentation or regularization techniques.

Implementing techniques such as rotation, scaling, or elastic transformations could increase the effective size of the training dataset, helping to mitigate overfitting.
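
As a rough illustration, such a pipeline might look like the following sketch using the albumentations library; the specific transforms and parameter values are assumptions, not taken from the paper. The key point is that image and mask are transformed jointly so annotations stay aligned:

```python
import numpy as np
import albumentations as A

# Dummy grayscale spheroid image and binary mask standing in for real data.
image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
mask = np.zeros((256, 256), dtype=np.uint8)
mask[96:160, 96:160] = 1

augment = A.Compose([
    A.Rotate(limit=90, p=0.5),              # random rotation
    A.RandomScale(scale_limit=0.2, p=0.5),  # random scaling (changes size)
    A.ElasticTransform(alpha=1.0, sigma=50, p=0.3),  # elastic deformation
])

out = augment(image=image, mask=mask)
aug_image, aug_mask = out["image"], out["mask"]
```

In practice a resize or random crop would typically follow the scaling step so that training batches keep a fixed spatial size.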

As models become more complex, interpreting their decisions becomes more challenging. Incorporating explainability methods could help in understanding model behavior, which is particularly important in medical applications.
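
As one lightweight option, input-gradient saliency maps can be computed in a few lines of PyTorch; Grad-CAM and related methods are more refined alternatives. The toy network below is a stand-in so the sketch runs on its own, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Toy two-class segmentation network standing in for U-Net/HRNet/DeepLabV3+.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 2, 3, padding=1),
)

x = torch.rand(1, 1, 256, 256, requires_grad=True)
logits = model(x)                  # [1, 2, 256, 256] class logits
logits[:, 1].sum().backward()      # aggregate foreground score
saliency = x.grad.abs().squeeze()  # per-pixel importance map, [256, 256]
```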

Author Response

Please refer to the attached "Response_to_Reviewer_#1"

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The manuscript attempts to evaluate segmentation models on medical image datasets. However, the study lacks sufficient experiments to be considered an empirical evaluation, and the experimental design needs improvement. I have the following comments to be considered for improving the manuscript.

- It was not possible to check the references: in-text citations all appear as question marks (?) in the PDF file.

- The objective of the study could be defined more clearly in the introduction.

- The introduction mentions that this study uses three datasets, but it is not clear which three datasets were used, and the results do not reflect this.

- Instead of reporting training and validation performance, it is suggested that the models be evaluated on independent holdout test sets.


Author Response

Please refer to the attached "Response_to_Reviewer_#2"

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

All the comments and suggestions from the previous review have been addressed. However, it is recommended that figures showing similar metrics use the same y-axis range for ease of comparison. There are no further comments.

Author Response

Our Answer: We thank the reviewer for this comment. We have updated and improved the figures accordingly.
