Communication
Peer-Review Record

G-Net Light: A Lightweight Modified Google Net for Retinal Vessel Segmentation

Photonics 2022, 9(12), 923; https://doi.org/10.3390/photonics9120923
by Shahzaib Iqbal 1, Syed S. Naqvi 1, Haroon A. Khan 1, Ahsan Saadat 2 and Tariq M. Khan 3,*
Reviewer 1: Anonymous
Reviewer 3:
Submission received: 29 August 2022 / Revised: 14 November 2022 / Accepted: 25 November 2022 / Published: 30 November 2022
(This article belongs to the Special Issue Adaptive Optics and Its Applications)

Round 1

Reviewer 1 Report

Iqbal and colleagues present a novel method for retinal vessel segmentation using a supervised machine learning approach. The paper is well structured and represents an advance in the biomedical research area. I consider the proposal a novel variant of Google Net. However, I have some comments and suggestions:

Comments:

1. In Figures 4 and 5, it is important to mention against which observer (one or two) the segmentation map is compared. The detail level of segmentation differs between human experts, at least in STARE and DRIVE datasets. It would be better if you include a comparison against both observers.

2. In the last part of the first paragraph of Section 3.2 Implementation and Training, there is an error in the wording: some text appears without spaces and in italics. Please correct it.

3. What parameter or metric is used to establish that this method outperforms the up-to-date methods? The authors claim that this method is better than the previously proposed ones; however, only 4 of the 12 reported statistics are numerically better. I am not sure this can be considered an outperforming method, and the conclusion section must be rewritten to explain how or why this proposal surpasses the state-of-the-art ones. I consider this method a similar result with a smaller number of parameters, and hence a lower computational cost.

Suggestions: 

1. The sentence where you explain where the images were acquired is not relevant to this research. (Lines 117 - 118, 123)

2. Move Figures 4 and 5 closer to the paragraphs where you explain them.

 

Author Response

  1. In Figures 4 and 5, it is important to mention against which observer (one or two) the segmentation map is compared. The detail level of segmentation differs between human experts, at least in STARE and DRIVE datasets. It would be better if you include a comparison against both observers.

Thank you for the comment. The observers used for the segmentation maps are now mentioned and highlighted in red in the revised manuscript.

  2. In the last part of the first paragraph of Section 3.2 Implementation and Training, there is an error in the wording: some text appears without spaces and in italics. Please correct it.

Thank you for highlighting this. The typographical error has been corrected in Section 3.2.

  3. What parameter or metric is used to establish that this method outperforms the up-to-date methods? The authors claim that this method is better than the previously proposed ones; however, only 4 of the 12 reported statistics are numerically better. I am not sure this can be considered an outperforming method, and the conclusion section must be rewritten to explain how or why this proposal surpasses the state-of-the-art ones. I consider this method a similar result with a smaller number of parameters, and hence a lower computational cost.

Thank you for the comment. We agree, and we note that the proposed G-Net Light achieves state-of-the-art performance and outperforms other lightweight vessel segmentation architectures in terms of accuracy and F1-score with a smaller number of trainable parameters. This was not highlighted in the submitted paper. To address it, a comparison with lightweight vessel segmentation architectures has been added in Table 4 of the updated manuscript.

 

Suggestions: 

  1. The sentence where you explain where the images were acquired is not relevant to this research. (Lines 117 - 118, 123)

Thank you for the suggestion. The irrelevant sentences have been removed from the revised manuscript.

  2. Move Figures 4 and 5 closer to the paragraphs where you explain them.

Thank you for the suggestion. Figures 4 and 5 have been moved closer to the suggested paragraphs in the revised manuscript.

 

Reviewer 2 Report

1. If the entire inception module is used as the bottleneck in the bottom layer of the proposed block (Fig. 1), there is a chance that images will become blurry. Will it not impact the outcomes of your segmentation?

2. The dataset you used is too small, and the geometric information in the data augmentation details is not adequately explained. The DRIVE dataset does not follow the stated train-to-test split ratio, and no ratio is mentioned for STARE. Because deep learning requires a large amount of labelled data for training, the question is whether data augmentation can handle this problem. Alternatively, mention the enhanced images used when training the model.

3. The authors stated that a benchmark was established using manual ophthalmologist segmentation. Give more information about this.

4. Resolutions vary between datasets. Mention the size of the images used for testing and training.

5. Mention the number of channels in each layer in the proposed block (Fig. 1).

Author Response

  1. If the entire inception module is used as the bottleneck in the bottom layer of the proposed block (Fig. 1), there is a chance that images will become blurry. Will it not impact the outcomes of your segmentation?

Thank you for the comment. The residual path is concatenated with the multi-scale feature information in the inception block to prevent the feature maps from becoming blurry, as shown in Fig. 2.
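For illustration, a minimal sketch of such a block (the layer names, channel counts, and kernel sizes below are assumptions, not the exact implementation from the paper):

```python
import torch
import torch.nn as nn

class InceptionResidualBlock(nn.Module):
    """Illustrative inception-style block: multi-scale convolution branches
    are concatenated with a residual (identity) path, so fine spatial detail
    from the input is carried forward alongside the multi-scale features.
    Channel counts and kernel sizes are assumptions, not the paper's values."""

    def __init__(self, in_ch: int, branch_ch: int = 16):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        # 1x1 projection applied after concatenating branches + residual path
        self.proj = nn.Conv2d(3 * branch_ch + in_ch, branch_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        branches = [self.act(self.b1(x)), self.act(self.b3(x)), self.act(self.b5(x))]
        # concatenate the residual path (x itself) with the multi-scale features
        out = torch.cat(branches + [x], dim=1)
        return self.act(self.proj(out))
```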

  2. The dataset you used is too small, and the geometric information in the data augmentation details is not adequately explained. The DRIVE dataset does not follow the stated train-to-test split ratio, and no ratio is mentioned for STARE. Because deep learning requires a large amount of labelled data for training, the question is whether data augmentation can handle this problem. Alternatively, mention the enhanced images used when training the model.

Thank you for the comment. The details and types of augmentation used in this paper are provided in Section 3.2. The DRIVE dataset is already divided into train and test splits, which are used as-is. However, there are no predefined training or testing sets in the CHASE and STARE datasets, so a "leave-one-out" strategy is used for the train-to-test split on CHASE and STARE; this is discussed and highlighted in Section 3.2 of the revised manuscript. We agree with the reviewer that data augmentation can address the need for a large amount of labelled training data.
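As a concrete illustration of the leave-one-out protocol (a sketch only; the file names and dataset size below are placeholders, not the actual experimental code):

```python
# Sketch of a leave-one-out train/test split, as used for datasets without
# a predefined split (e.g. STARE). File names and count are placeholders.
images = [f"im{idx:04d}.ppm" for idx in range(1, 21)]  # e.g. 20 STARE images

for held_out in images:
    train_set = [im for im in images if im != held_out]
    test_set = [held_out]
    # train on train_set, evaluate on the single held-out image,
    # then average the per-fold metrics over all folds
```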

  3. The authors stated that a benchmark was established using manual ophthalmologist segmentation. Give more information about this.

Thank you for the comment. The benchmark annotations are publicly available and are used as-is in the proposed research.

  4. Resolutions vary between datasets. Mention the size of the images used for testing and training.

Thank you for the comment. The size of the images used for testing and training varies for each dataset. We used the original images without changing their sizes, which demonstrates the generalizability of the proposed G-Net Light. The image size of each dataset is given in Section 3.1, where the datasets are discussed.

  5. Mention the number of channels in each layer in the proposed block (Fig. 1).

Thank you for the comment. The number of channels in each convolution layer is now mentioned in Fig. 1 in the revised manuscript, and the number of channels in the inception block is mentioned in Fig. 2.

Reviewer 3 Report

Dear Authors,

Please consider the following suggestions.

Note: The abstract and conclusion sections need improvement, since the research findings must be discussed there.

1. The structure of the paper can be improved considerably, since the paragraphs in the introduction section are too long. Please divide the introduction into several smaller paragraphs. Further, ensure the paper follows the Journal's template.

2. In the G-Net section, please provide the necessary information that motivated the proposed methodology for extracting blood vessels in fundus images. Further, include a section (Related research works) after the introduction to discuss other similar works found in the literature and their results.

3. Figure 1 looks small and needs improvement. Please discuss the names of the blocks in the proposed technique.

4. Include the mathematical expressions (equations) for the performance measures, such as Sensitivity, Specificity, Accuracy, and F1-score.

5. The results are fine, but a graphical representation of all the tables needs to be included.

6. A number of methods are available in the literature, yet only a few are considered in this work to benchmark performance. Please discuss the limitations of the existing works and the need for the proposed scheme.

7. Please include the following article and compare the results with the existing ones:

Retinal Vessel Segmentation with Slime-Mould-Optimization based Multi-Scale-Matched-Filter

Author Response

Please consider the following suggestions.

Note: The abstract and conclusion sections need improvement, since the research findings must be discussed there.

Thank you for the comment. The abstract and conclusion sections have been updated in the revised manuscript.

  1. The structure of the paper can be improved considerably, since the paragraphs in the introduction section are too long. Please divide the introduction into several smaller paragraphs. Further, ensure the paper follows the Journal's template.

Thank you for the comment. The content of the introduction has been reorganized for better readability and divided into smaller paragraphs in the revised manuscript.

  2. In the G-Net section, please provide the necessary information that motivated the proposed methodology for extracting blood vessels in fundus images. Further, include a section (Related research works) after the introduction to discuss other similar works found in the literature and their results.

Thank you for the comment. A related work section has been added in the revised manuscript.

  3. Figure 1 looks small and needs improvement. Please discuss the names of the blocks in the proposed technique.

Thank you for the comment. Figure 1 has been updated in the revised manuscript.

  4. Include the mathematical expressions (equations) for the performance measures, such as Sensitivity, Specificity, Accuracy, and F1-score.

Thank you for the comment. The equations for the performance measures have been added in the revised manuscript.
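For reference, the standard pixel-wise definitions of these measures, in terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), are:

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
F1-score = 2TP / (2TP + FP + FN)

(These are the conventional definitions; the exact notation used in the revised manuscript may differ.)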

  5. The results are fine, but a graphical representation of all the tables needs to be included.

Thank you for the comment. A graphical representation of all the tables has been added as Figure 6 in the revised manuscript.

  6. A number of methods are available in the literature, yet only a few are considered in this work to benchmark performance. Please discuss the limitations of the existing works and the need for the proposed scheme.

Thank you for the comment. A related work section has been added in the revised manuscript, and the limitations of the existing works and the need for the proposed scheme are also discussed there.

  7. Please include the following article and compare the results with the existing ones: Retinal Vessel Segmentation with Slime-Mould-Optimization based Multi-Scale-Matched-Filter.

Thank you for the comment. We have cited and discussed this paper in the introduction.
