Peer-Review Record

Deep Learning Approach for Automatic Segmentation and Functional Assessment of LV in Cardiac MRI

by Anupama Bhan 1,*, Parthasarathi Mangipudi 2 and Ayush Goyal 3,*
Reviewer 1:
Reviewer 2: Anonymous
Electronics 2022, 11(21), 3594; https://doi.org/10.3390/electronics11213594
Submission received: 22 September 2022 / Revised: 21 October 2022 / Accepted: 27 October 2022 / Published: 3 November 2022
(This article belongs to the Special Issue Medical Image Processing Using AI)

Round 1

Reviewer 1 Report

The Discussion should include an explanation of why the overall performance of the U-Net model with Focal Tversky Loss is better than the others.

How were the images collected, and from which period of the heart cycle? Were the manual segmentations done by two independent experts?

Could calculating the surface of each cross-section yield better performance results?

Author Response

Review Comment #1: The Discussion should include an explanation of why the overall performance of the U-Net model with Focal Tversky Loss is better than the others.

Response to comment: The authors have included an explanation in the conclusion that the U-Net model with Focal Tversky Loss outperformed the other techniques, which has been validated using the evaluation metrics Average Good Contour Percentage, Dice Metric, and Average Perpendicular Distance. The values for the proposed combinational technique are also given in Table XVII.
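For reference, a minimal NumPy sketch of the Focal Tversky loss on binary masks; the alpha = 0.7, beta = 0.3, gamma = 0.75 defaults are common values from the literature, not necessarily those used in the manuscript:

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss on flattened binary masks/probabilities.

    alpha weights false negatives, beta weights false positives, and
    gamma < 1 focuses training on hard examples (low Tversky index)."""
    y_true = y_true.ravel().astype(np.float64)
    y_pred = y_pred.ravel().astype(np.float64)
    tp = np.sum(y_true * y_pred)
    fn = np.sum(y_true * (1.0 - y_pred))
    fp = np.sum((1.0 - y_true) * y_pred)
    tversky_index = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky_index) ** gamma
```

Setting alpha = beta = 0.5 and gamma = 1 recovers the Dice loss; the extra degrees of freedom let the loss penalize false negatives more heavily and focus training on hard examples, which is the usual explanation for its advantage on small structures such as the LV cavity.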

Review Comment #2: How were the images collected, and from which period of the heart cycle? Were the manual segmentations done by two independent experts?

Response to comment: The images have been collected from the publicly available Sunnybrook Cardiac Atlas. The manual annotations were done by one expert only, and the segmented data is provided for validating the proposed techniques. The images were collected over one complete cardiac cycle, which consists of two periods: one during which the heart muscle relaxes and refills with blood, ending at End Diastole, and one of robust contraction and pumping of blood, ending at End Systole.

Review Comment #3: Could calculating the surface of each cross-section yield better performance results?

Response to comment: The calculation of the surface of each cross-section depends on good tomographic MRI visualization of the LV contours, which is possible in 3D quantification of cardiac size. In our manuscript, we have emphasized 2D images only. In three dimensions the volume can be calculated directly in voxels, whereas we have taken the volume in cubic centimetres.
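As background for how 2D slice areas relate to the volumes discussed here, a minimal sketch of the standard slice-summation (Simpson's rule) volume computation and the derived ejection fraction; the 8 mm slice thickness and 1.4 mm pixel spacing defaults are taken from the dataset description later in this record, and all function names are illustrative rather than the authors' code:

```python
import numpy as np

def lv_volume_ml(slice_masks, pixel_mm=1.4, slice_thickness_mm=8.0):
    """Approximate the LV cavity volume from a stack of binary slice masks:
    sum of (segmented pixels x pixel area) x slice thickness, in mL (cm^3)."""
    area_mm2 = sum(float(np.sum(m)) * pixel_mm * pixel_mm for m in slice_masks)
    return area_mm2 * slice_thickness_mm / 1000.0  # 1 mL = 1000 mm^3

def ejection_fraction_pct(edv_ml, esv_ml):
    """EF (%) = (EDV - ESV) / EDV x 100, from the End Diastolic and
    End Systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```

Once ED and ES segmentations are available, the functional assessment in the paper's title reduces to exactly these two numbers.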

Author Response File: Author Response.pdf

Reviewer 2 Report

The manuscript by Anupama Bhan et al. demonstrates segmentation of the left ventricle in cardiac magnetic resonance imaging. U-Net, a single network architecture, was used in this study with different loss functions, including Focal Tversky loss, Log Cosh Dice loss, Tversky loss, Dice loss, and Binary Cross Entropy loss. The authors compared these loss functions using various quantitative metrics, including the Dice metric, average good contour detection, and average perpendicular distance. The structure of the manuscript is well organized. However, the reviewer has a few comments the authors could try to address or elaborate on, as listed below.

1. What was the source of Fig. 1? If Fig. 1 came from another reference, please add this reference.

2. According to section 3.1 and Table 2, 45 patients or cases were employed in this study. So, how many images for each case were used for the training, validation, and testing sets?

3. Do the authors use cross-validation when training the proposed networks? If so, there is another question: after using five-fold cross-validation, there will be five independent models, so how do the authors choose the final model as the inference network model? For instance, it could be according to the best result on one of the quantitative metrics.

4. The authors concluded that U-Net with Focal Tversky Loss and ELU performed better than other combinations. Are there any reasons for that?

Author Response

Review Comment #1: What was the source of Fig. 1? If Fig. 1 came from another reference, please add this reference.

Response to comment: Fig. 1 is taken from Reference [6], Bernard et al.

Review Comment #2:  According to section 3.1 and Table 2, 45 patients or cases were employed in this study. So, how many images for each case were used for the training, validation, and testing sets?

Response to comment: The proposed approach has been evaluated on the Sunnybrook cardiac dataset, which comprises cine short-axis MRI from 45 patients with a total of 805 images. Each time series consists of 6 to 12 2D cine stacks with 8 mm slice thickness and 1.3 mm to 1.4 mm in-plane resolution. For the purposes of the research, the MRI data were split into training, validation, and testing sets in the ratio of 15:15:15 patients. Each patient has 12 to 28 images, and the database also provides MRI ground truth manually segmented by an expert. We have performed affine transformations (rotation, scaling, and translation) to augment the training set and mitigate overfitting.
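A minimal sketch of the patient-level 15:15:15 split described above (an illustrative helper, not the authors' code); splitting by patient rather than by image prevents slices from the same subject leaking across sets:

```python
import random

def split_patients(patient_ids, seed=42):
    """Shuffle the 45 patient IDs reproducibly and split them 15:15:15
    so that no patient's images appear in more than one of the
    training, validation, and testing sets."""
    ids = sorted(patient_ids)          # deterministic starting order
    random.Random(seed).shuffle(ids)   # reproducible shuffle
    return ids[:15], ids[15:30], ids[30:45]

# Usage: train_ids, val_ids, test_ids = split_patients(all_patient_ids)
```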

Review Comment #3: Do the authors use cross-validation when training the proposed networks? If so, there is another question: after using five-fold cross-validation, there will be five independent models, so how do the authors choose the final model as the inference network model? For instance, it could be according to the best result on one of the quantitative metrics.

Response to comment: The authors trained using cross-validation, and the final model was selected according to the best result achieved on model accuracy and model loss. We performed cross-validation by dividing the training dataset into 12 subjects for training and 3 subjects for validation. The hyper-parameters of the network, i.e., the number of layers and units, the number of filters, and the filter and pooling sizes, were determined empirically during the cross-validation process.
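Schematically, the fold-wise selection described above might look like the following (the `train_and_evaluate` callable is a placeholder for the actual U-Net training loop, and selection here uses validation accuracy alone for brevity):

```python
def select_best_model(folds, train_and_evaluate):
    """Train one model per fold and keep the best by validation accuracy.

    folds: iterable of (train_ids, val_ids) subject splits, e.g. 12/3.
    train_and_evaluate: callable returning (model, validation_accuracy)."""
    best_model, best_acc = None, float("-inf")
    for train_ids, val_ids in folds:
        model, val_acc = train_and_evaluate(train_ids, val_ids)
        if val_acc > best_acc:
            best_model, best_acc = model, val_acc
    return best_model
```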

 

Review Comment #4: The authors concluded that U-Net with Focal Tversky Loss and ELU performed better than other combinations. Are there any reasons for that?

Response to comment: The authors have included an explanation in the conclusion that the U-Net model with Focal Tversky Loss outperformed the other techniques, which has been validated using the evaluation metrics Average Good Contour Percentage, Dice Metric, and Average Perpendicular Distance. The values for the proposed combinational technique are also given in Table XVII.

 

Author Response File: Author Response.docx

Reviewer 3 Report

The last paragraph of section one should be a summary of the remaining part of the article.

What makes your study different from other existing works?

In section 3.1, the authors talked about the dataset; they should tell us the total amount of data. They should also indicate the ratio or percentage of the training, validation, and testing datasets used. They should indicate the database from which the data were obtained.

In section 3.2, the authors mentioned the hyperparameters selected for the study implementation but didn't tell us the values. The authors should indicate the value of each hyperparameter used for the investigation and why those values were used.

In section 4, the authors should let us know why they decided to use those particular performance metrics out of the many others available.

In section 5.1, the authors should give details of the tasks performed in the data preprocessing stage.

The authors only showed us the training stage; what happened during the validation and testing phases? They should present this in the results and discussion section.

The authors should compare their proposed study with existing works.

They should suggest future work for other scholars to build on.

 

 

Author Response

Review Comment #7: The last paragraph of section one should be a summary of the remaining part of the article. What makes your study different from other existing works?

Response to comment: A summary of the remaining part of the article has been added to the revised manuscript. An explanation of how the proposed technique improves on existing techniques is given in the state-of-the-art comparison in Table XVIII.


Review Comment #8: In section 3.1, the authors talked about the dataset; they should tell us the total amount of data. They should also indicate the ratio or percentage of the training, validation, and testing datasets used. They should indicate the database from which the data were obtained.

Response to comment: The proposed approach has been evaluated on the Sunnybrook cardiac dataset, which comprises cine short-axis MRI from 45 patients with a total of 805 images. Each time series consists of 6 to 12 2D cine stacks with 8 mm slice thickness and 1.3 mm to 1.4 mm in-plane resolution. For the purposes of the research, the MRI data were split into training, validation, and testing sets in the ratio of 15:15:15 patients. Each patient has 12 to 28 images, and the database also provides MRI ground truth manually segmented by an expert. We have performed affine transformations (rotation, scaling, and translation) to augment the training set and mitigate overfitting. The database description is given in Section 2 and in Reference 26.


Review Comment #9: In section 3.2, the authors mentioned the hyperparameters selected for the study implementation but didn't tell us the values. The authors should indicate the value of each hyperparameter used for the investigation and why those values were used.

Response to comment: The values of the hyperparameters have been included in Section 2.1, and the rationale for the selected values is given in Table 1.


Review Comment #10: In section 4, the authors should let us know why they decided to use those particular performance metrics out of the many others available.

Response to comment: The Sunnybrook dataset used in our paper is publicly available. The performance of the segmentation model is evaluated by comparing its output against the ground truth (manual annotations by an expert), based on the software published by the MICCAI Clinical Image Segmentation Grand Challenge Workshop. The evaluation proposed for assessing the algorithms submitted to the MICCAI LV segmentation challenge is based on three measures: (1) Percentage of Good Contours (PGC), (2) Average Dice Metric (ADM), and (3) Average Perpendicular Distance (APD).
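For concreteness, minimal NumPy sketches of two of these measures; the exact MICCAI challenge software may differ in details such as contour extraction and pixel-spacing handling, so these are illustrative only:

```python
import numpy as np

def dice_metric(mask_a, mask_b, eps=1e-7):
    """Dice similarity coefficient: 2|A intersect B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return (2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

def average_perpendicular_distance(contour_a, contour_b, pixel_mm=1.4):
    """Approximate APD in mm: for each point on contour A, take the distance
    to the nearest point on contour B, then average. Contours are (N, 2)
    NumPy arrays of pixel coordinates; pixel_mm is illustrative."""
    dists = [np.min(np.hypot(*(contour_b - p).T)) for p in contour_a]
    return pixel_mm * float(np.mean(dists))
```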


Review Comment #11: In section 5.1, the authors should give details of the tasks performed in the data pre-processing stage.

Response to comment: The authors have included an explanation of the data pre-processing in the relevant section. The steps are as follows (a sketch in code is given after this list):

  1. Orientation: An orientation shift is applied based on the DICOM InPlanePhaseEncoding metadata, which indicates the axis of phase encoding with respect to the image. A majority of the images were 'Row' oriented; hence, if an image was 'Col' oriented, it was flipped to be 'Row' oriented.
  2. Rescale: The image is rescaled based on the image's Pixel Spacing:
    a. Rescale with the first Pixel Spacing value in both the x and y directions.
    b. Rescale to 1 mm x 1 mm.
  3. Crop: The image is cropped from the center to a square; in this project, 256x256 and 176x176 crops were used.
  4. ROI Location.
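A hedged sketch of steps 1-3 (the transpose used for the orientation flip and the use of scipy's `zoom` for the 1 mm resampling are assumptions; the actual implementation may differ):

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess(image, pixel_spacing, in_plane_phase_encoding, out_size=256):
    """Orientation shift, rescale to 1 mm x 1 mm, and center crop."""
    # 1. Orientation: flip 'Col'-oriented images so all are 'Row' oriented
    #    (assumed here to be a transpose of the pixel array; if so, the
    #    Pixel Spacing values may also need to be swapped).
    if in_plane_phase_encoding == "COL":
        image = image.T
    # 2. Rescale using the DICOM Pixel Spacing (mm/pixel) so each output
    #    pixel covers 1 mm x 1 mm.
    image = zoom(image, (pixel_spacing[0], pixel_spacing[1]), order=1)
    # 3. Crop from the center to out_size x out_size (256 or 176 here).
    h, w = image.shape
    top, left = max((h - out_size) // 2, 0), max((w - out_size) // 2, 0)
    return image[top:top + out_size, left:left + out_size]
```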

 

Review Comment #12: The authors only showed us the training stage; what happened during the validation and testing phases? They should present this in the results and discussion section. The authors should compare their proposed study with existing works. They should suggest future work for other scholars to build on.

Response to comment: A comparison of the proposed technique with existing works has been added in the state-of-the-art Table XVIII. Presenting the full validation and testing results would substantially increase the number of pages; hence, the authors gave a detailed analysis of the training stage. Future work is included in the Conclusion section.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

The authors have satisfied my comments.

Reviewer 3 Report

The authors should correct the numbering of their sections: after section 2, the next section I see is numbered section 4.

 
