by Md Sabbir Hosen and Hongxin Zhang

Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper proposes a novel deep learning framework for breast and skin cancer segmentation. The paper is overall well written and interesting to the reader. It is also scientifically sound and within the scope of the journal. It is very detailed and supported with examples. The results also show that this approach outperforms existing ones. One of the big advantages is the provided GitHub code for future studies. The figures and tables are clear and visible.

I can recommend several minor modifications:

  1. Is your deep learning framework computationally efficient, or more demanding than existing approaches?
  2. Does the accuracy of your approach depend heavily on image quality? Are the devices that capture these images of adequate quality today, and are they broadly available worldwide?
  3. Consider adding a paper outline in the Introduction.
  4. Emphasize a bit more why you used the Sobel operator.
  5. Did you perform hyperparameter search of your models to determine the best parameters, such as batch size, etc.?
  6. The resolution of Figures 7 and 9 could be a bit better.

Author Response

Dear Reviewer!

Based on your comments and suggestions, we have modified and revised our manuscript.

Please have a look at the attached PDF file.

Regards!

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The authors have proposed the BAS-SegNet segmentation platform, which they tested on images containing breast and skin cancer lesions: ultrasound images in the first case and dermoscopic images in the second. Accurate segmentation of all pathological changes is an important diagnostic challenge. From this standpoint, the presented results constitute a valuable contribution to research aimed at supporting physicians in the diagnosis of breast cancer and malignant skin lesions. In this context, the approach based on combining a modified CNN architecture, Sobel-based preprocessing, and a hybrid loss function is particularly interesting. The ablation studies conducted by the authors on two datasets confirmed the high effectiveness of the proposed solution.

In summary, I conclude that the manuscript is well written and should be of interest to readers working on biomedical image segmentation problems.

I recommend accepting the manuscript for publication after minor editorial corrections:

(1) The images presented in Figure 7 are of insufficient quality. The authors should provide higher-resolution images.
(2) Regarding the images shown in Figure 9, I have the same comment as in point (1).

Author Response

Dear Reviewer!

Based on your comments and suggestions, we have modified and revised our manuscript.

Please have a look at the attached PDF file.

Regards!

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

This paper proposes BAS-SegNet, a boundary-aware deep learning framework designed for the segmentation of breast and skin cancer lesions in medical images. Addressing the limitations of manual segmentation, the framework integrates three core components: (1) an enhanced CNN architecture with a switchable feature pyramid interface, tunable Atrous Spatial Pyramid Pooling (ASPP) module, and consistent dropout regularization; (2) an edge-aware preprocessing pipeline that combines Sobel-based edge magnitude maps (as additional channels) with geometric augmentations; (3) a boundary-aware hybrid loss function fusing Binary Cross-Entropy (BCE), Dice, and Focal losses, supplemented by auxiliary edge supervision from morphological gradients. The model is validated on the BUSI (breast ultrasound) and ISIC (skin lesion) datasets, achieving Dice scores of 0.814 and 0.935, respectively.
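
For illustration, here is a minimal sketch of Sobel-based edge-magnitude preprocessing of the kind described in component (2); the function name, kernel size, and normalization are assumptions rather than the authors' implementation, and a single-channel (grayscale) input is assumed:

```python
# Minimal sketch of Sobel edge-magnitude preprocessing; kernel size and
# normalization are illustrative assumptions, not the authors' settings.
import cv2
import numpy as np

def add_sobel_edge_channel(image: np.ndarray) -> np.ndarray:
    """Append a Sobel edge-magnitude map to a grayscale image as an extra channel."""
    gray = image.astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradients
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradients
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    magnitude /= magnitude.max() + 1e-8              # scale to [0, 1]
    return np.stack([gray / 255.0, magnitude], axis=-1)
```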

Overall, the experimental setup is reasonable and reproducible. The training configuration (batch size 16, 200 epochs, AdamW optimizer, dropout for regularization) aligns with standard practices for deep learning segmentation models. The use of a Kaggle environment with NVIDIA Tesla P100 GPU ensures computational feasibility, and ablation experiments comparing against UNet, UNet++, and ResUNet validate the contribution of the proposed components.
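
For reference, the reported configuration corresponds to a training loop along these lines; the model, data, learning rate, and loss below are placeholders, not the authors' code:

```python
# Sketch of the reported training setup (batch size 16, 200 epochs, AdamW,
# dropout). The model, data, and loss are stand-ins, not BAS-SegNet itself;
# the learning rate and weight decay are illustrative, not from the paper.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Dropout2d(0.1))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
criterion = nn.BCEWithLogitsLoss()

images = torch.rand(16, 3, 256, 256)                    # one batch of 16
masks = torch.randint(0, 2, (16, 1, 256, 256)).float()  # binary lesion masks

for epoch in range(200):
    model.train()            # keeps the dropout layers active during training
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
```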

Pros:

The framework has strong clinical relevance. It directly addresses a critical need in cancer diagnosis—early lesion segmentation—with performance gains that translate to practical value, including reduced interobserver variability, faster interpretation, and improved accuracy for ambiguous cases.

BAS-SegNet exhibits cross-modality generalization. Validated on two distinct imaging modalities (ultrasound and dermoscopy) and two cancer types, it demonstrates broader applicability than single-task models that are tailored to specific datasets or modalities.


Cons:

The study relies on limited dataset diversity. The BUSI dataset is from a single hospital (Baheya Hospital, Cairo), and ISIC 2016 is an older dataset (more recent versions like ISIC 2018/2019 include more images and diverse lesions). This limits the model’s demonstrated generalizability to multi-institutional or newer datasets.

There is a lack of component-specific ablation experiments. While the paper compares BAS-SegNet to baseline models, it does not explicitly ablate the three core innovations (enhanced CNN, Sobel preprocessing, hybrid loss) individually. This makes it difficult to quantify the contribution of each component to the overall performance gain.

The model has high computational complexity. With 42.97 million trainable parameters and 188.89 G mult-adds, BAS-SegNet may be less suitable for deployment on low-resource devices (e.g., clinical workstations with limited GPU memory) compared to lighter models; a sketch of how such figures can be measured follows this list of cons.

No clinical validation with end-users is performed. The model is evaluated against expert annotations but not tested in a clinical workflow (e.g., with radiologists/dermatologists assessing its utility in real-world diagnosis). This gap limits the demonstration of its practical clinical impact.
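
On the complexity point above, figures such as parameter count and mult-adds can be reproduced with a model summary tool; a minimal sketch using torchinfo on a placeholder model (BAS-SegNet's definition is not reproduced here):

```python
# Sketch of how parameter counts and mult-adds can be measured; the model
# below is a small placeholder, not BAS-SegNet.
import torch.nn as nn
from torchinfo import summary

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))
# Prints total params and total mult-adds for a 256x256 RGB input.
summary(model, input_size=(1, 3, 256, 256))
```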


Some minor issues:

Table/figure reference mismatches, e.g., “Table 4” in text on page 21 should be Table 3.

Algorithm description is high-level. Algorithm 1 could be more detailed for better reproducibility.

Justification for loss function coefficients should be added. The paper should explain how the balancing coefficients (α, β, γ, λ_b) for the hybrid loss were selected (e.g., grid search on the validation set, empirical tuning) to enhance transparency; a sketch of such a hybrid loss follows this list.

A discussion of computational efficiency trade-offs would be valuable. Comparing the model’s parameter count and inference time to lightweight alternatives (e.g., MobileNet-based segmenters) would help readers assess its deployment feasibility in resource-constrained clinical settings.

Edge case performance analysis is lacking. The paper mentions irregular and low-contrast lesions but does not explicitly analyze performance on edge cases (e.g., very small lesions, lesions overlapping with healthy tissue). A sub-analysis of these cases would highlight the model’s strengths and weaknesses.
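
On the loss-coefficient point above, a minimal sketch of a BCE + Dice + Focal hybrid of the kind described, i.e., L = α·L_BCE + β·L_Dice + γ·L_Focal + λ_b·L_edge; the coefficient values and focal exponent here are illustrative, and the auxiliary edge-supervision term (λ_b) is omitted for brevity:

```python
# Sketch of a hybrid loss L = alpha*BCE + beta*Dice + gamma*Focal, as described
# in the paper; coefficient values are illustrative, and the auxiliary
# edge-supervision term (lambda_b) is omitted for brevity.
import torch
import torch.nn.functional as F

def hybrid_loss(logits, targets, alpha=1.0, beta=1.0, gamma=1.0, focal_gamma=2.0):
    bce = F.binary_cross_entropy_with_logits(logits, targets)

    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    dice = 1 - (2 * intersection + 1e-6) / (probs.sum() + targets.sum() + 1e-6)

    # Focal term: down-weight easy pixels by (1 - p_t)^focal_gamma.
    pt = probs * targets + (1 - probs) * (1 - targets)
    per_pixel_bce = F.binary_cross_entropy_with_logits(logits, targets,
                                                       reduction="none")
    focal = ((1 - pt) ** focal_gamma * per_pixel_bce).mean()

    return alpha * bce + beta * dice + gamma * focal
```

Coefficients of this kind are typically tuned by grid search against the validation Dice score, which is the sort of justification requested above.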

Author Response

Dear Reviewer!

Based on your comments and suggestions, we have modified and revised our manuscript.

Please have a look at the attached PDF file.

Regards!

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The Authors have addressed my comments successfully.

Reviewer 3 Report

Comments and Suggestions for Authors

I am satisfied with the authors' response to the comments and recommend the manuscript for publication.