Peer-Review Record

Development of a Knowledge-Distillation-Based Breast Cancer Classifier for LMICs: Comparison with Pruning and Quantization

Electronics 2025, 14(24), 4842; https://doi.org/10.3390/electronics14244842
by Falmata Modu 1,*, Rajesh Prasad 1,2 and Farouq Aliyu 3
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 4 November 2025 / Revised: 6 December 2025 / Accepted: 6 December 2025 / Published: 9 December 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This is a timely and well-structured study comparing lightweight deployment strategies for breast-cancer detection, with a valuable emphasis on practical deployability. The contributions are relevant for LMIC settings and embedded AI. With a few clarifications and presentation tweaks, the paper will be even stronger.

  1. Specify precisely (i) whether the splits were stratified; (ii) how KMeans-SMOTE was applied (fitted only on training folds, with no leakage into the validation/test sets?); and (iii) whether the reported metrics come from held-out sets or from cross-validation test folds only. A brief data-leakage prevention checklist would reassure readers about reproducibility (see the first sketch after this list).

  2. The three datasets differ in modality and class balance. Consider adding either (a) a small external validation (train on one dataset, test on another) or (b) a short domain-shift analysis showing how robust the distilled student is when the feature distribution changes.

  3. Please report operating points (e.g., ROC/PR curves, sensitivity at fixed specificity) and include calibration results (reliability curves or ECE). For screening contexts, high recall is critical; indicating thresholds that achieve, say, ≥95% sensitivity would help clinical readers (see the second sketch after this list).

  4. The latency, memory, and energy numbers are very helpful; thank you. Please clarify how many inferences were averaged for the latency figures and whether warm-up runs were included, and state whether the Raspberry Pi tests were CPU-only and which BLAS backends were enabled (see the third sketch after this list). Double-check the model file sizes and units; if they are correct, briefly explain why the model is so compact.

  5. When citing classical baselines (SVM, RF, etc.), clarify whether you re-implemented them under the same splits and preprocessing pipeline or report values from prior work. If re-implemented, provide their hyperparameters and tuning protocol.

  6. Since the motivation includes resource constraints, briefly report approximate training compute time/energy and any fine-tuning overheads for KD vs. non-KD models, to complement the inference-time measurements.

  7. Include a short qualitative analysis of false positives/negatives (per-dataset) and, if possible, feature-level insights (e.g., which inputs most influence misclassifications), even if using tabular features.

  8. Figures: increase font sizes and use color-blind-safe palettes.

  9. Add a brief paragraph on limitations (dataset sizes/modalities, potential overfitting on small/high-dimensional sets, generalizability to real-world screening workflows, ...).
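
To make comment 1 concrete, the sketch below shows one leakage-safe arrangement, assuming a scikit-learn/imblearn stack; the dataset, classifier, and hyperparameters are illustrative stand-ins, not the authors' actual pipeline. imblearn's Pipeline applies the scaler and KMeans-SMOTE only during fitting, so each cross-validation fold resamples only its own training portion and the held-out test split never passes through the oversampler.

```python
# Leakage-safe resampling sketch (hypothetical stack: scikit-learn + imblearn).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import KMeansSMOTE
from imblearn.pipeline import Pipeline  # resamples during fit only, never at predict time

X, y = load_breast_cancer(return_X_y=True)  # stand-in for the paper's datasets

# Stratified held-out split: the test set never sees the oversampler.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

pipe = Pipeline([
    ("scale", StandardScaler()),                # fitted on training folds only
    ("smote", KMeansSMOTE(random_state=42, cluster_balance_threshold=0.1)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Cross-validation: each fold refits the scaler and KMeans-SMOTE on its own
# training portion, so no synthetic samples leak into the scoring fold.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
print("CV recall:", cross_val_score(pipe, X_tr, y_tr, cv=cv, scoring="recall").mean())

pipe.fit(X_tr, y_tr)  # final fit on the full training split
print("Held-out accuracy:", pipe.score(X_te, y_te))
```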
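
For comment 3, the helpers below sketch how an operating point at ≥95% sensitivity and a simple binned ECE could be reported; the function names and synthetic scores are assumptions for illustration, not the paper's evaluation code.

```python
# Operating-point and calibration sketch (illustrative, with synthetic scores).
import numpy as np
from sklearn.metrics import roc_curve

def threshold_at_sensitivity(y_true, y_prob, target_tpr=0.95):
    """Return the threshold with the lowest FPR whose TPR is >= target_tpr."""
    fpr, tpr, thr = roc_curve(y_true, y_prob)
    i = np.argmax(tpr >= target_tpr)  # first index (lowest FPR) meeting the target
    return thr[i], tpr[i], fpr[i]

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Equal-width-bin ECE: |observed positive rate - mean confidence| per bin,
    weighted by the fraction of samples falling in the bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    which = np.digitize(y_prob, edges[1:-1])
    ece = 0.0
    for b in range(n_bins):
        mask = which == b
        if mask.any():
            ece += mask.mean() * abs(y_true[mask].mean() - y_prob[mask].mean())
    return ece

# Demo on synthetic scores:
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
p = np.clip(0.7 * y + rng.normal(0.15, 0.2, 1000), 0.0, 1.0)
t, tpr, fpr = threshold_at_sensitivity(y, p)
print(f"threshold={t:.3f}  sensitivity={tpr:.3f}  specificity={1 - fpr:.3f}")
print(f"ECE={expected_calibration_error(y, p):.3f}")
```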
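
For comment 4, a benchmarking loop along the following lines would answer the warm-up and averaging questions; model_predict is a hypothetical stand-in for the deployed model's inference call (e.g., a TFLite interpreter invocation), assumed to run CPU-only.

```python
# Latency-measurement sketch with explicit warm-up and run counts.
import statistics
import time

def benchmark(model_predict, sample, warmup=20, runs=200):
    for _ in range(warmup):          # warm-up runs: excluded from the statistics
        model_predict(sample)
    times_ms = []
    for _ in range(runs):
        t0 = time.perf_counter()
        model_predict(sample)
        times_ms.append((time.perf_counter() - t0) * 1e3)
    ordered = sorted(times_ms)
    return {
        "runs": runs,
        "mean_ms": statistics.fmean(times_ms),
        "p50_ms": statistics.median(times_ms),
        "p95_ms": ordered[int(0.95 * runs) - 1],
    }

# Demo with a dummy predictor:
print(benchmark(lambda x: sum(v * v for v in x), list(range(10_000))))
```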

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Although the paper addresses a recent and intriguing subject, it contains redundant information. This redundancy detracts from the overall clarity and focus of the research. To enhance its value, the authors should streamline the content and emphasize the most significant findings.

Because the proposed paper is not a review, summary descriptions of models are discouraged, e.g., “Table 2. Summary of Pruning Types in Neural Networks” and “Table 3. Comparison of Quantization Methods.”

Please avoid using acronyms as titles, such as "2.4.1. PBCNT."

For each dataset used, please add a link to its web repository and the date of last access.

The “3. Related Works” section must be moved to directly after the Introduction.

The section titled "5. Discussion of Result" should be renamed to "Results and Discussions" to reflect the correct order.

A section entitled “Materials” must be added; the authors currently place this material in the “5. Discussion of Result” section, where it is harder to follow.

The F1-score must be computed and added to the results tables and graphs; for the best results, confusion matrices must also be added.
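
A minimal sketch of the requested metrics, assuming scikit-learn and using placeholder labels rather than the paper's results:

```python
# F1 and confusion-matrix sketch (scikit-learn, placeholder labels only).
from sklearn.metrics import classification_report, confusion_matrix, f1_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]  # placeholder ground truth
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]  # placeholder predictions

print("F1:", f1_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))  # rows: true class, columns: predicted class
print(classification_report(y_true, y_pred, target_names=["benign", "malignant"]))
```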

The limitations of the study are omitted.

A comparison with cutting-edge papers that use the same datasets must be added.

From the long list of references, outdated papers (e.g., refs. 32 and 34) should be removed.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The paper was significantly improved; however, one minor observation: please avoid writing function names as they appear in Python code, e.g., tfmot.sparsity.keras.prune_low_magnitude(•), quantize_annotate_layer(•), quantize_apply(•), and so on. The content must be accessible to all readers regardless of programming environment, so each function should be referred to by its general name.

Congratulations on your work!

Author Response

Comment 1: The paper was significantly improved; however, one minor observation: please avoid writing function names as they appear in Python code, e.g., tfmot.sparsity.keras.prune_low_magnitude(•), quantize_annotate_layer(•), quantize_apply(•), and so on. The content must be accessible to all readers regardless of programming environment, so each function should be referred to by its general name.

Response 1: Thank you for this observation.

We have revised the manuscript by removing all TensorFlow API and Python function references (e.g., prune_low_magnitude, quantize_annotate_layer, quantize_apply) from Section 3.2 (Pruning), paragraph 2; Section 3.3.2 (Quantization-Aware Training), paragraph 2; and other parts of the paper.
