Article
Peer-Review Record

Enhanced Ship/Iceberg Classification in SAR Images Using Feature Extraction and the Fusion of Machine Learning Algorithms

Remote Sens. 2023, 15(21), 5202; https://doi.org/10.3390/rs15215202
by Zahra Jafari 1,2,*, Ebrahim Karami 1, Rocky Taylor 1 and Pradeep Bobby 2
Reviewer 1: Anonymous
Submission received: 30 August 2023 / Revised: 25 October 2023 / Accepted: 31 October 2023 / Published: 1 November 2023
(This article belongs to the Section Remote Sensing Image Processing)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Ship-iceberg discrimination is crucial for maritime safety and efficiency. This manuscript attempts to combine CNN-based features with traditional machine-learning classification to identify ships and icebergs. Here, I have some concerns.

Figure 4: Why not conduct feature selection for these concatenated 349 features?

 

Line 217-224: Since CNN models can produce many features from multiple layers, and these features differ in size and scale, how were they selected?

 

Table 4: How was overfitting handled in the NN-based, LightGBM, and CatBoost-based classification, given that there is an accuracy gap of approximately 10% between training and testing?

 

Line 384: How about directly using a CNN model, such as VGGNet, to identify ships and icebergs?

 

Line 384: Please give some results of your classification and the compared methods.

On using CNN-based features, some references:

DCN-based spatial features for improving parcel-based crop classification using high-resolution optical images and multi-temporal SAR data.

A CNN-based fusion method for feature extraction from sentinel data

Author Response

Dear Reviewer,

We would like to express our appreciation for the time and effort you dedicated to reviewing our paper. Your valuable insights and constructive feedback have greatly contributed to improving the quality and credibility of our work.

We truly value the thoughtfulness and attention to detail you demonstrated in your evaluation. Your comments and suggestions have played a significant role in refining and strengthening our paper.

  1. Figure 4: Why not conduct feature selection for these concatenated 349 features?

Answer:

Thank you for your insightful comment and questions regarding the selected features. We conducted feature selection exclusively for the features extracted via the CNN models. This choice was made because the CNN models produce a large number of features (6912), many of which are extremely weak. These weak features not only significantly increase computational complexity but also degrade the classifier's performance.

For the remaining 49 features, including the quantitative features and the incident angle, these were manually selected to provide the best classification patterns. Consequently, there was no need to select among them, and we incorporated all of them in conjunction with the 300 best features extracted from the CNN models.

 

  2. Line 217-224: Since CNN models can produce many features from multiple layers, and these features differ in size and scale, how were they selected?

Answer:

Thank you for your valuable feedback. We greatly appreciate your input, which prompted us to update Section 2.2 in the revised manuscript (see the text highlighted in red). In Section 2.2, we clarified that for each CNN model, the features are extracted from the last layer just before the classification layer. This design choice ensures that the extracted features can be concatenated regardless of their original sizes and scales. Therefore, the varying sizes and scales of features produced by the different CNN models do not pose an issue, as they are harmonized at the last layer prior to feature extraction. Consequently, we were able to select the 300 best features based on the mutual information between each feature and the target label, without concern for their initial size or scale.
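For illustration, this extraction step can be sketched in TensorFlow/Keras as follows (a minimal sketch, not our production code; the backbone choices, pooling, and input size shown are illustrative assumptions):

```python
import numpy as np
from tensorflow.keras.applications import vgg16, resnet50

# Pretrained backbones without their classification heads; global average
# pooling yields one fixed-length feature vector per image from the last
# layer before the (removed) classifier, so vectors of different widths
# can simply be concatenated. (ConvNeXt can be added the same way.)
extractors = [
    (vgg16.VGG16(weights="imagenet", include_top=False, pooling="avg"),
     vgg16.preprocess_input),
    (resnet50.ResNet50(weights="imagenet", include_top=False, pooling="avg"),
     resnet50.preprocess_input),
]

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) in [0, 255]."""
    feats = [model.predict(prep(images.copy()), verbose=0)
             for model, prep in extractors]
    return np.concatenate(feats, axis=1)  # one row of stacked features per image
```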

  3. Table 4: How was overfitting handled in the NN-based, LightGBM, and CatBoost-based classification, given that there is an accuracy gap of approximately 10% between training and testing?

Answer:

Thank you for your detailed consideration. For the neural network (NN), we applied L2-norm kernel regularization with a parameter of 0.01 and incorporated a dropout layer to mitigate the overfitting problem. In the case of CatBoost, we employed a 'border count' value of 125 and set a maximum depth limit of 6 to address overfitting. For LightGBM, we specified 'min child samples' as 20 and enforced a maximum depth limit of 6.

These parameter values were determined through various experiments aimed at minimizing overfitting and achieving the smallest gap between training and test accuracies. Notably, for all models, this gap remains below 10%, which is considered reasonable for such classifiers.

In response to your comment, we have updated Section 3.4 in the revised manuscript, with the relevant changes highlighted in red.
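For clarity, the settings listed above can be sketched as follows (a minimal sketch; the layer sizes, dropout rate, and training details are placeholders, as only the regularization parameters are stated above):

```python
import tensorflow as tf
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier

# NN: L2-norm kernel regularization (0.01) plus a dropout layer.
nn = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    tf.keras.layers.Dropout(0.5),  # dropout rate is a placeholder
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# CatBoost: border count of 125 and a maximum depth of 6.
cat = CatBoostClassifier(border_count=125, depth=6, verbose=False)

# LightGBM: min_child_samples=20 and a maximum depth of 6.
lgbm = LGBMClassifier(min_child_samples=20, max_depth=6)
```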

  4. Line 384: How about directly using a CNN model, such as VGGNet, to identify ships and icebergs?

Answer:

In this project, we began with transfer learning, employing diverse CNN models, including VGGNet. However, because the classification accuracy achieved through transfer learning was insufficient (for instance, the highest accuracy, approximately 87%, was obtained with VGG16), we opted to enhance our approach: we extracted features from multiple CNN models and integrated the most effective features with statistical and spatial attributes to improve our results.
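For reference, the transfer-learning baseline we started from can be sketched roughly as follows (the head size and training configuration shown are illustrative, not our exact setup):

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet",
                                   include_top=False, pooling="avg")
base.trainable = False  # reuse ImageNet features; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # ship vs. iceberg
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```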

  5. Please give some results of your classification and the compared methods.

Answer:

Thank you for your comment. In Table 7 of the revised manuscript, we present the classification results of our proposed method alongside relevant existing algorithms that used the same or closely comparable datasets.

  6. On using CNN-based features, some references:

DCN-based spatial features for improving parcel-based crop classification using high-resolution optical images and multi-temporal SAR data.

A CNN-based fusion method for feature extraction from sentinel data

Answer:

Based on your comment, we cited these two papers in the introduction section (see the text highlighted in red, as well as references 15 and 16, in the revised manuscript).

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The article entitled "Enhanced Ship/Iceberg Classification in SAR Images Using Feature Extraction and Fusion of Machine Learning Algorithms" presents a proposed workflow based on the use of deep learning and machine learning techniques to identify ships and icebergs in dual-polarised Sentinel-1 SAR images.

The structure of the article seems correct, although a Related Works section within the introduction is missing. It is also missing, at the end of the introduction, a paragraph in which the authors highlight the scientific contributions of the methodological approach.

Given that the authors use different DL network models for the feature-extraction tasks (Section 2.2), VGG-16, ResNet-50, and ConvNeXt, modifying their architectures by not using the last layers in several of the cases, a better description, e.g. a graphical one, would be desirable.

Figure 4 could be considered the main contribution describing the proposed methodology. This part is essential for the reproducibility of the methodology, so in addition to being justified with Table 2 and Figures 6-9, it should have been given a separate section. It should also better describe how the 300 features are selected from the 6912 features of the three models in Section 2.2. This part is not clear and not easily reproducible: how the experiments are run, what is calculated, and how.

At no point is there any reference to the implementation of the methodology, how and with which software or libraries the models and dataset have been used.

In subsection 4.2 the authors compare their proposal with other models. In the text they indicate that, as far as they know, only two use the same dataset, although they do not identify them, nor can this be seen at a glance in Table 6, where the results are compared. Please state which ones they are, either in the text or in the table.

In relation to the conclusions, perhaps a little more effort is required on the part of the authors: in addition to highlighting the merits of the proposal, they could criticise some of its parts, or try to identify which of them contributes most to the overall model.

Author Response

Dear reviewer,

We would like to express our appreciation for the time and effort you dedicated to reviewing our paper. Your valuable insights and constructive feedback have greatly contributed to improving the quality and credibility of our work.

We truly value the thoughtfulness and attention to detail you demonstrated in your evaluation. Your comments and suggestions have played a significant role in refining and strengthening our paper.

  1. The structure of the article seems correct, although a Related Works section within the introduction is missing. It is also missing, at the end of the introduction, a paragraph in which the authors highlight the scientific contributions of the methodological approach.

Answer:

Thank you for your insightful comment regarding the related works. The introduction already contained a related-work discussion, and based on your comment we added a heading for it. We also added a paragraph highlighting the contributions of our approach (the changes are highlighted in red in the introduction section).

 

  2. Given that the authors use different DL network models for the feature-extraction tasks (Section 2.2), VGG-16, ResNet-50, and ConvNeXt, modifying their architectures by not using the last layers in several of the cases, a better description, e.g. a graphical one, would be desirable.

Answer:

Thank you for your valuable suggestion, which has helped us improve the manuscript. Following your comment, we added a table (Table 2 in Section 3.2.A of the revised manuscript) explaining how these features were extracted from the CNN models. We also added more description to Section 2.2 (see the text in red).

 

  3. Figure 4 could be considered the main contribution describing the proposed methodology. This part is essential for the reproducibility of the methodology, so in addition to being justified with Table 2 and Figures 6-9, it should have been given a separate section. It should also better describe how the 300 features are selected from the 6912 features of the three models in Section 2.2. This part is not clear and not easily reproducible: how the experiments are run, what is calculated, and how.

Answer:

Thank you for your valuable feedback. We greatly appreciate your input, which prompted us to include a new figure (Figure 5 in the revised manuscript) together with an explanation and justification of why and how mutual information was used to select the best 300 features (see the text highlighted in red in Section 3.2.A).
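For illustration, this selection step can be sketched with scikit-learn as follows (a minimal sketch; the arrays are synthetic placeholders for our real feature matrix and labels):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6912))  # placeholder for the concatenated CNN features
y = rng.integers(0, 2, size=100)  # placeholder ship/iceberg labels

# Keep the 300 features with the highest mutual information with the label.
selector = SelectKBest(score_func=mutual_info_classif, k=300)
X_best = selector.fit_transform(X, y)      # shape: (n_samples, 300)
kept = selector.get_support(indices=True)  # indices of the retained features
```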

  4. At no point is there any reference to the implementation of the methodology, how and with which software or libraries the models and dataset have been used.

Answer:

In Table 2 in Section 3.2.A of the revised manuscript, we added a footnote indicating that these layers were extracted from the TensorFlow models. Furthermore, in Section 4 we added a description stating that we used Python with the TensorFlow library to implement the approach (see the text highlighted in red in Sections 3.2.A and 4, and the footnote under Table 2).

  5. In subsection 4.2 the authors compare their proposal with other models. In the text they indicate that, as far as they know, only two use the same dataset, although they do not identify them, nor can this be seen at a glance in Table 6, where the results are compared. Please state which ones they are, either in the text or in the table.

Answer:

Thank you for your detailed consideration. We added a footnote below Table 7 in the revised manuscript identifying the two other papers that used the same SAR dataset.

  6. In relation to the conclusions, perhaps a little more effort is required on the part of the authors: in addition to highlighting the merits of the proposal, they could criticise some of its parts, or try to identify which of them contributes most to the overall model.

Answer:

Thank you for your insightful comment. We updated the conclusion section and added a paragraph highlighting the main contributions of the paper, identifying the one that contributes most, as it is reproducible and can be used for other applications.

Reviewer 3 Report

Comments and Suggestions for Authors

I would like to commend the authors on the outstanding work presented in the paper. Your dedication and effort in the development of this research are truly commendable. The paper reads like a comprehensive literature study accompanied by well-designed experiments that are extensively evaluated.

Your research significantly contributes to the field by examining existing methods and introducing a novel combination that is not only computationally efficient but also enhances accuracy. This integration of established techniques with innovative approaches is a notable strength of your work.

However, I would like to suggest considering the inclusion of some emerging models in your future research. This addition could further enhance the paper's relevance and keep it aligned with the evolving trends in the field.

No major issues were found, but I would recommend minor edits for grammar and typos in the paper.

Once again, great work to the entire team. 

Best regards,

Comments on the Quality of English Language

Minor typos were found in the paper. 

Author Response

Dear reviewer,

We would like to express our appreciation for the time and effort you dedicated to reviewing our paper.

I would like to commend the authors on the outstanding work presented in the paper. Your dedication and effort in the development of this research are truly commendable. The paper reads like a comprehensive literature study accompanied by well-designed experiments that are extensively evaluated.

Your research significantly contributes to the field by examining existing methods and introducing a novel combination that is not only computationally efficient but also enhances accuracy. This integration of established techniques with innovative approaches is a notable strength of your work.

We would like to express our gratitude for your kind words and commendation. Your appreciation of our work means a great deal to us. We are sincerely dedicated to advancing our research, and it is truly rewarding to see our efforts recognized. We are pleased that you found our paper comprehensive and well-structured, and we value your recognition of the extensive evaluation we conducted.

However, I would like to suggest considering the inclusion of some emerging models in your future research. This addition could further enhance the paper's relevance and keep it aligned with the evolving trends in the field.

Answer:

Thank you for your insightful comment. We will certainly consider the latest emerging models in our future work to further improve the quality of our research.

No major issues were found, but would recommend minor edits for grammar and typos in the paper. 

Answer:

Thank you for your detailed attention to our paper. We proofread the paper and tried to fix any typos and grammatical mistakes.

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Dear Authors,

The revised article largely incorporates modifications and improvements in communication and description based on the recommendations given in the first round of review.

 

The authors have neglected to respond and include information describing the dataset used. They have only indicated in the article that they have used the Tensorflow libraries in a python environment. The comment from the first round of review said: "At no point is there any reference to the implementation of the methodology, how and with which software or libraries the models and dataset have been used".

 

They should check for typos in the changes. I have detected at least two: "minimze" on line 323, and, in the requested new Figure 5 (line 194), the missing full stop after "CNN models".

Author Response

Thank you for your thorough review of our paper. You are correct, and we forgot to mention in our response that we had included a full section (Section 2.1) describing both datasets used in this research, along with examples (Figures 1 and 2). Furthermore, we explain how these datasets were used in our research in Section 3.1. Regarding methodology and implementation, Section 3 comprehensively explains both, including, for example, the TensorFlow layers from which we extracted our features.

As for typos, we had used the proofreading service at our university, and the few mistakes occurred only in the text that was added later. Thank you for your comment; we have corrected them.

 
