Article
Peer-Review Record

Land-Use and Land-Cover Classification Using a Human Group-Based Particle Swarm Optimization Algorithm with an LSTM Classifier on Hybrid Pre-Processing Remote-Sensing Images

Remote Sens. 2020, 12(24), 4135; https://doi.org/10.3390/rs12244135
by Ganesh B. Rajendran 1, Uma M. Kumarasamy 1, Chiara Zarro 2, Parameshachari B. Divakarachari 3 and Silvia L. Ullo 2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 2 December 2020 / Revised: 14 December 2020 / Accepted: 15 December 2020 / Published: 17 December 2020

Round 1

Reviewer 1 Report

Thanks to the authors for all the modifications!

I think the manuscript is ready to be published.

Author Response

Please see the attached file.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors have significantly improved the manuscript.

If the authors address a few more things, I think it can be considered for publication.

These issues concern:

-Moderate English changes and improvements.

-Also, a small enrichment of the literature is needed (some suggestions are given in the attached file).

-The Discussion does not provide any comparison with similar studies and their findings, nor any challenges or perspectives for future studies.

-Finally, I think that the conclusions are a repetition of the results! Ideally, they would not summarize the main results but highlight the main findings and their importance/relevance.

A few more corrections can be found in the attached file

Comments for author File: Comments.pdf

Author Response

Please see the attached file.

Author Response File: Author Response.pdf

Reviewer 3 Report

The paper uses human group-based PSO to optimally select features from the data and then applies an LSTM architecture for the classification task. Here are some of my concerns: 

  • Several meta-heuristic algorithms have been developed recently. Have you checked their efficacy for feature selection, or did you only try PSO? 
  • Why is the LSTM used for classification? Does a deeper LSTM improve the results or not? Why not other classification networks? Usually, an LSTM is used when time-series data are available, to extract temporal features. 
  • In Tables 4 and 5, the methods presented are human group-based PSO with DNN, MSVM, and LSTM. Why did the authors not compare their approach with some of the state-of-the-art methods? 
  • In Tables 4 and 5, the classification results are very close, e.g., 99.65 and 99.90 (precision for barren land in MSVM and LSTM). How can this small improvement affect visual classification? For all practical purposes, how can this help agricultural applications? 
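For readers less familiar with swarm-based feature selection, the general idea the reviewer is probing can be illustrated with a plain binary PSO sketch in Python: each particle encodes a candidate feature subset as a 0/1 mask, and the swarm converges toward subsets that score well under some fitness function. This is a generic textbook variant, not the authors' human group-based algorithm; the fitness function, swarm size, and coefficients below are illustrative placeholders.

```python
import math
import random

def binary_pso_feature_selection(n_features, fitness, n_particles=10,
                                 n_iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic binary PSO: each particle is a 0/1 mask over the features."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal best positions
    pbest_fit = [fitness(p) for p in pos]  # personal best fitness values
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Sigmoid transfer: velocity becomes a bit-activation probability.
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))
                pos[i][d] = 1 if rng.random() < prob else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

# Toy fitness: reward the first three "informative" features, penalize subset size.
def toy_fitness(mask):
    return sum(mask[:3]) - 0.1 * sum(mask)

best_mask, best_fit = binary_pso_feature_selection(8, toy_fitness)
```

In a real pipeline, the fitness would be a classifier's validation score on the masked feature set, which is also why wrapper-style selection like this is expensive compared to letting a CNN learn features end to end.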

Author Response

Please see the attached file.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

-

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

I have reviewed the manuscript “Land use and land cover classification using a human group based particle swarm optimization algorithm with a LSTM classifier on hybrid-preprocessing remote sensing images” by Babu et al. This study proposes using PSO to select the optimal features from the LGBPHS, HOG, and Haralick texture features. Based on the selected features, it uses an LSTM to classify the LULC classes. The authors also compared the performance of their algorithm with previous ones. Overall, this study is important and can be published after some modifications.

 

Major comments:

  1. Training data. The authors randomly split each data set, with 70% for training and 30% for testing. To get more data for training the models, I am wondering whether the authors could combine all the data sets. To better test the model, I would suggest using one data set only for testing.
  2. Hyper-parameter tuning. Most neural network algorithms are sensitive to their hyper-parameters. The authors may add some discussion of hyper-parameter tuning for the LSTM (e.g., the number of neurons). In addition, initialization is also very important.
  3. Overfitting. Overfitting is common in machine learning. It is critical that the authors discuss whether overfitting occurs in their algorithm.
  4. Algorithm comparison. The authors derived three groups of features and then selected the optimal ones through PSO. The compared algorithms are based on all or some of these features. By design, convolutional neural networks can automatically learn such features at the desired scales; they can therefore use the raw data as inputs and do not need the feature-selection step. I am wondering whether the authors could compare the LSTM algorithm with convolutional neural networks based on raw data.
  5. Attention mechanism. The authors mentioned that the LSTM is superior to other algorithms because it can learn long dependencies. However, recent studies have shown that LSTMs are difficult to train and cannot capture very long dependencies. Alternatively, the attention mechanism has been suggested by many recent studies. So, I am wondering whether the authors could compare the LSTM to the attention mechanism.

 

Minor comments:

Lines 166-173: These sentences seem repetitive. It may be better to move them to the introduction.

Author Response

Letter to Editors and Reviewers

Dear Editors, dear Reviewers,

Thank you very much for sharing your expert opinions on our work. 

We really appreciate the time and effort taken in reviewing this submission. As a result, we strongly believe that our manuscript has benefited from your constructive comments and suggestions, which were helpful in improving the quality of this paper.

In this revision, we have updated some parts of our manuscript to further enhance the quality of its content.

In addition to the main file, we are sending another complementary PDF file, named “Revised manuscript with modifications marked”, in which all modified parts are highlighted in red and all added parts in blue, with respect to the previous version of the manuscript.

Our point-by-point answers to the review comments follow; the comments from the reviewers are given in regular font, while our corresponding feedback is highlighted in bold.

Reviewer 1

Major comments:

  1. Training data. The authors randomly split each data set, with 70% for training and 30% for testing. To get more data for training the models, I am wondering whether the authors could combine all the data sets. To better test the model, I would suggest using one data set only for testing.

 

We thank the Reviewer for these considerations. Combining all three datasets looks quite complicated, since each dataset has different classes. Randomly splitting each dataset, with 70% of the data for training and 30% for testing, should lead to a more robust method, trained on different datasets and able (as we prove within the validation step) to work correctly with different datasets. The choice is also related to the overfitting issue, as explained in Section 3.5.
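The per-dataset 70/30 random split described above can be sketched with a small, generic helper. The dataset names and sizes here are placeholders for illustration, not the actual Sat 4, Sat 6, and Eurosat contents:

```python
import random

def split_dataset(samples, train_frac=0.7, seed=42):
    """Shuffle one dataset and split it into train/test partitions."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Each dataset keeps its own class space, so each one is split independently
# rather than merged into a single pool.
datasets = {
    "dataset_a": list(range(100)),  # placeholder samples
    "dataset_b": list(range(200)),
}
splits = {name: split_dataset(data) for name, data in datasets.items()}
```

Splitting each dataset independently, as the authors argue, lets the method be validated against several class spaces at once instead of a single merged (and inconsistent) label set.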

 

  2. Hyper-parameter tuning. Most neural network algorithms are sensitive to their hyper-parameters. The authors may add some discussion of hyper-parameter tuning for the LSTM (e.g., the number of neurons).

 

We thank the Reviewer for this very important observation. We added a note on the choice of the hyper-parameters for the LSTM model at the end of Section 3.5. However, we did not perform hyper-parameter tuning; we instead followed what was done in [43], as specified in that section.
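Although the authors followed [43] rather than tuning, the kind of sweep the reviewer asks about can be sketched generically. The evaluate function below is a placeholder standing in for "train the LSTM with this configuration and return its validation accuracy"; the grid values are illustrative, not taken from the paper.

```python
# Placeholder for "train the LSTM with this configuration and return its
# validation accuracy"; a real version would fit and score the network.
def evaluate(hidden_units, dropout):
    return 0.9 - abs(hidden_units - 128) / 1000.0 - abs(dropout - 0.2)

# Exhaustive grid search over two illustrative hyper-parameters.
grid = [(h, d) for h in (64, 128, 256) for d in (0.1, 0.2, 0.5)]
best_config = max(grid, key=lambda cfg: evaluate(*cfg))
```

Even a coarse grid like this, scored on a held-out validation split, would make the sensitivity to the number of neurons and the initialization visible, which is what the reviewer's comment is after.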

 

  3. Overfitting. Overfitting is common in machine learning. It is critical that the authors discuss whether overfitting occurs in their algorithm.

 

We thank the Reviewer for this very important observation. The discussion about overfitting is given in Section 3.5.

 

  4. Algorithm comparison. The authors derived three groups of features and then selected the optimal ones through PSO. The compared algorithms are based on all or some of these features. By design, convolutional neural networks can automatically learn such features at the desired scales; they can therefore use the raw data as inputs and do not need the feature-selection step. I am wondering whether the authors could compare the LSTM algorithm with convolutional neural networks based on raw data.

 

We thank the Reviewer for this very important observation. The Reviewer is right that authors usually feed the images (raw data) directly into neural networks for classification. Yet, when using around 500,000 image patches, a number of sub-classes or unknown classes are generated, which significantly degrades the classification performance. Therefore, we used hybrid feature extraction and feature selection in this research study. Please see also the answer to the next point for an explanation of why feature extraction and selection were introduced when LSTM networks are used. Here, we compared the LSTM algorithm with convolutional neural networks based on raw data; refer to Table 14.

 

  5. Attention mechanism. The authors mentioned that the LSTM is superior to other algorithms because it can learn long dependencies. However, recent studies have shown that LSTMs are difficult to train and cannot capture very long dependencies. Alternatively, the attention mechanism has been suggested by many recent studies. So, I am wondering whether the authors could compare the LSTM to the attention mechanism.

 We thank the Reviewer for this observation. It is true that in our manuscript we stated that “The LSTM classifier has the default behavior of remembering data information for a long time period”. We found this statement more than once in the literature, in particular in the papers “Long Time Series Land Cover Classification in China from 1982 to 2015 Based on Bi-LSTM Deep Learning” and “Analyzing the Effects of Temporal Resolution and Classification Confidence for Modeling Land Cover Change with Long Short-Term Memory Networks”. These two papers are cited in Section 3.5. However, we agree with the Reviewer about the difficulty for the LSTM to capture very long dependencies, and this is the reason why we decided to use hybrid feature extraction and feature selection in our research study. In doing so, we can take advantage of choosing an LSTM while overcoming its long-dependency shortcomings. 

As regards the Reviewer’s last point, “I am wondering whether the authors can compare LSTM to the attention mechanism”, we thank the Reviewer for this valuable consideration. We will take it into account in our future work, where we aim to compare our proposal with other solutions, such as the one suggested by the Reviewer.

Minor comments:

  1. Lines 166-173: These sentences seem repetitive. It may be better to move them to the introduction.

            Done. As per the Reviewer’s suggestion, the repetitive sentences have been moved to the introduction section.

 

Reviewer 2 Report

The present research is a very interesting study, with comprehensive use and analysis of many different algorithms for data optimization, and a noteworthy classification methodology and accuracy assessment.

Unfortunately, the whole document has significant obstacles, deficiencies, and omissions.

-The main problem of this manuscript is the extremely poor text structure and analysis of the referenced algorithms, and in many cases the poor language and syntax quality, which make the content and meaning of the text incomprehensible in many places. The document needs to be restructured, edited by a native English speaker, and seriously proofread in order to reach an adequate level.

Moreover, I think that such a method should be implemented in a specific study area with specific characteristics in order to be more applicable to other similar areas in the future.

- There are serious problems in the methodology, with significant errors and incomplete or insufficient description and analysis of the parameters and algorithms used.

- The authors did not follow the journal guidelines, especially for the references.

- Also, enrichment of the references is needed in the text.

-Some figures have unsatisfactory resolution and quality.

-A figure showing the final classification result is needed.

-A Discussion section is needed.

 

Overall, I think that the manuscript is not suitable for publication, and the revisions that need to be undertaken are too fundamental to continue considering the submission in its current form.

You can see some more specific comments in the attached file.

Comments for author File: Comments.pdf

Author Response

Letter to Editors and Reviewers

Dear Editors, dear Reviewers,

Thank you very much for sharing your expert opinions on our work. 

We really appreciate the time and effort taken in reviewing this submission. As a result, we strongly believe that our manuscript has benefited from your constructive comments and suggestions, which were helpful in improving the quality of this paper.

In this revision, we have updated some parts of our manuscript to further enhance the quality of its content.

In addition to the main file, we are sending another complementary PDF file, named “Revised manuscript with modifications marked”, in which all modified parts are highlighted in red and all added parts in blue, with respect to the previous version of the manuscript.

Our point-by-point answers to the review comments follow; the comments from the reviewers are given in regular font, while our corresponding feedback is highlighted in bold.

 

Reviewer 2 

  1. The main problem of this manuscript is the extremely poor text structure and analysis of the referenced algorithms, and in many cases the poor language and syntax quality, which make the content and meaning of the text incomprehensible in many places. The document needs to be restructured, edited by a native English speaker, and seriously proofread in order to reach an adequate level.

            We thank the Reviewer for this feedback aimed at improving our manuscript. Accordingly, we have done our best to restructure and correct the document.

  2. Moreover, I think that such a method should be implemented in a specific study area with specific characteristics, in order to be more applicable to other similar areas in the future.

            We thank the Reviewer for this observation, which helped us to better highlight this point. In Section 3.1 of the paper, we had specified that the different datasets were utilized for experimental analysis to differentiate features not related to human habitats, in both urban and agricultural environments. The classes are specified for the Sat 4, Sat 6, and Eurosat datasets as well. In the same section of the updated paper, we added a comment explaining that in future work we intend to extend our analysis to human habitats as well.

  3. There are serious problems in the methodology, with significant errors and incomplete or insufficient description and analysis of the parameters and algorithms used.

            We thank the Reviewer for these considerations. In the updated paper, the methodology description has been improved, especially the classification section.

  4. The authors did not follow the journal guidelines, especially for the references.

            We thank the Reviewer for this observation. Accordingly, we checked and corrected all the references, following the instructions given at: https://mdpi-res.com/data/mdpi_references_guide_v5.pdf

  5. Also, enrichment of the references is needed in the text.

       We added new references, but we would be very grateful for any valuable suggestions.

  6. Some figures have unsatisfactory resolution and quality.

            Done. In the updated paper, we checked the figures and improved those with unsatisfactory resolution and quality.

  7. A figure showing the final classification result is needed.

           We thank the Reviewer for this very important suggestion. The final classification results for the Sat 4, Sat 6, and Eurosat datasets are shown in the new Figures 9, 12, and 14. Consider that the images, as specified in Section 3.1, are taken from public databases, and what we have extracted and shown also depends on their quality.

 

  8. A Discussion section is needed.

            We thank the Reviewer for this suggestion. In the updated paper, a Discussion section has been included as Section 4.5.

Reviewer 3 Report

Great job on describing the need.

Great job on introducing past works.

The paper is well written and will resonate well with process-oriented readers such as myself. The charts support the tables well.

Line 202: Sentence needs to be rewritten.

Multiple lines should be restated. For example, Line 396 says "The tables 2 and 3" and should say "Tables 2 and 3". This also applies to Lines 452, 454, 472, 477 (capitalize "Figure"), 525, and 536.

The headings in Table 5 need to be corrected (centered).

The downside to this great article is that the method requires a lot of effort for such a small payoff: only a 2.56% increase. Is there some way the authors could add detail as to how this small percentage increase will help the overall LULC process? I recommend adding content to the article to show how this 2.56% increase will help the overall process, save time, decrease cost, or add some other value.

Author Response

Letter to Editors and Reviewers

Dear Editors, dear Reviewers,

Thank you very much for sharing your expert opinions on our work. 

We really appreciate the time and effort taken in reviewing this submission. As a result, we strongly believe that our manuscript has benefited from your constructive comments and suggestions, which were helpful in improving the quality of this paper.

In this revision, we have updated some parts of our manuscript to further enhance the quality of its content.

In addition to the main file, we are sending another complementary PDF file, named “Revised manuscript with modifications marked”, in which all modified parts are highlighted in red and all added parts in blue, with respect to the previous version of the manuscript.

Our point-by-point answers to the review comments follow; the comments from the reviewers are given in regular font, while our corresponding feedback is highlighted in bold.

Reviewer 3

  1. Great job on describing the need. Great job on introducing past works. The paper is well written and will resonate well with process-oriented readers such as myself. The charts support the tables well.

            We are very grateful to the Reviewer for recognizing our efforts and giving a positive judgment of the manuscript, which is very much appreciated.

  2. Line 202: Sentence needs to be rewritten.

            We thank the Reviewer for this suggestion. Accordingly, we have rewritten Equation (1) in the updated paper.

  3. Multiple lines should be restated. For example, Line 396 says "The tables 2 and 3" and should say "Tables 2 and 3". This also applies to Lines 452, 454, 472, 477 (capitalize "Figure"), 525, and 536.

            We thank the Reviewer for these observations. In the updated paper, the sentences have been revised and the lines restated.

  4. The headings in Table 5 need to be corrected (centered).

            Done. In the updated paper, the headings of Table 5 are centered.

  5. The downside to this great article is that the method requires a lot of effort for such a small payoff: only a 2.56% increase. Is there some way the authors could add detail as to how this small percentage increase will help the overall LULC process? I recommend adding content to the article to show how this 2.56% increase will help the overall process, save time, decrease cost, or add some other value.

            We thank the Reviewer for these valuable insights. In Section 4.4, we added the advantages of achieving better LULC classification performance. 

Round 2

Reviewer 2 Report

The authors have made a significant effort to improve the manuscript based on the comments.

Nevertheless, the text still displays many problems, mainly concerning language and syntax, which remain far from an adequate level.

The enrichment of the references and the improvement of the manuscript’s quality were insufficient.

Moreover, the quality of the added Figures 9 and 12 is too low.

The Discussion does not provide any comparison with similar studies, citing the appropriate references, nor any topics or “food for thought” ideas for improving the methodology and processing in future work.

In summary, I believe that despite the improvements, the text does not meet the quality standards of the journal, and I insist that it should not be published.
