Article
Peer-Review Record

Nonlinear Manifold Learning Integrated with Fully Convolutional Networks for PolSAR Image Classification

Remote Sens. 2020, 12(4), 655; https://doi.org/10.3390/rs12040655
by Chu He 1,2,*, Mingxia Tu 1, Dehui Xiong 1 and Mingsheng Liao 2,3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 18 January 2020 / Revised: 8 February 2020 / Accepted: 11 February 2020 / Published: 17 February 2020

Round 1

Reviewer 1 Report

The manuscript focuses on a novel PolSAR image classification method that combines nonlinear manifold learning with fully convolutional networks.

The method is clearly described, with a sufficient introduction, physical background, and references. The structure of the manuscript is correct.

However, some improvements should be made.

 

Critical comments are as follows:

For the experiments, the authors used three SAR datasets:
(a) an L-band AIRSAR image from 1989 of the Flevoland region in the Netherlands;
(b) an L-band AIRSAR image of the San Francisco area (the acquisition date is missing; please add the full date);
(c) an L-band EMISAR image from 1998 of the Foulum area in Denmark.
The authors do not describe the analytical reason why they picked these particular images and test areas; please add a clarification of this choice. As PolSAR experts, the authors know very well that the exact date of the analysed images plays a crucial role in their interpretation, so please add the exact dates (with day and month) of the analysed images.

1. Subsections 4.1.1-4.1.3 have to be corrected; the body text is swapped and confused.
2. Subsection 4.1.2 - please provide the year of the San Francisco SAR image.
3. The authors criticize the Pauli decomposition method ("poor data adaptability and low feature utilization" - vide abstract and conclusions), so why do the authors show Pauli decomposition maps in Section 4.1 (Figures 6, 7, 8)? Maybe Pauli's method is informative after all? Please be consistent.
4. Figures 6, 7, and 8 have no analytical value with respect to the main research goal of the experiment and can be deleted, while a proper table with the image parameters would be nice and clear.
5. Line 431 - I suggest "Experimental analysis and results".
6. Tables 2-5 - please clarify what kind of classes you present; the captions are not clear and sufficient.
7. Figures 10-12 - please add legends and coordinates.
8. Figures 10-12 - I suggest magnifying some selected parts of the maps, those which show polarimetric differences, just for better visualization; this is not obligatory, just a suggestion.
9. Tables 6 and 7 - please move them to Section 4.
10. Was the method also tested on any newer imagery than that presented here (20-30 years old), or on any satellite imagery?
11. In the Conclusions, the authors could mention applications of the proposed method.

Author Response

We really appreciate your comments. The issues raised in the comments have been carefully addressed; please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

In this paper, the authors propose classification methods that adapt to PolSAR data characteristics. Specifically, a low-dimensional representation learned by a nonlinear manifold method is embedded into the deep multi-scale spatial features of a fully convolutional network (FCN). The fused representation is then fed into an SVM to perform the classification. The proposed approach has been validated on a series of PolSAR datasets.
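For orientation only, a minimal sketch of this kind of pipeline is given below. This is not the authors' implementation: Isomap merely stands in for the paper's manifold method, the feature arrays are random placeholders for PolSAR and FCN features, and the fusion weights are illustrative.

```python
# Minimal sketch (NOT the authors' code): fuse a nonlinear manifold embedding
# of per-pixel PolSAR features with deep spatial features, then classify with an SVM.
# All feature arrays are random placeholders standing in for real PolSAR/FCN data.
import numpy as np
from sklearn.manifold import Isomap          # stand-in for the paper's manifold method
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels = 2000
polsar_features = rng.normal(size=(n_pixels, 9))   # e.g. flattened coherency-matrix entries
deep_features = rng.normal(size=(n_pixels, 32))    # assumed FCN multi-scale features per pixel
labels = rng.integers(0, 4, size=n_pixels)         # 4 hypothetical land-cover classes

# Nonlinear manifold learning: low-dimensional representation of the PolSAR features
manifold = Isomap(n_components=3, n_neighbors=10)
low_dim = manifold.fit_transform(polsar_features)

# Simple weighted fusion of the two feature branches (weights are illustrative)
w_manifold, w_deep = 0.5, 0.5
fused = np.hstack([w_manifold * low_dim, w_deep * deep_features])

X_train, X_test, y_train, y_test = train_test_split(fused, labels, test_size=0.8, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("toy accuracy:", clf.score(X_test, y_test))
```

The point of the sketch is only the data flow: the manifold embedding and the deep features are fused and passed to an SVM classifier.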

Generally, the proposed idea is very interesting; however, some revisions need to be made, and some parts of the experiments are not complete enough to support the claimed advantages of the proposed models:

 

1) In the proposed model, could the authors explain the main contribution over standard classification methods based on PolSAR data characteristics? What are its advantages over them? And what is the motivation for using the FCN model? Could the authors give more clarification?

 

2) In the experimental setup, did the authors choose the learning samples randomly for the classification task? What happens when you change the training samples?

 

3) Furthermore, each category is sampled with the same proportion, within the range of 1%-5%, for supervised training. Could the authors explain how they set these proportions?

 

4) In the experimental setup, could the authors add some details about the hyper-parameter settings, e.g., epochs, number of hidden layers, batch size, etc.?

 

5) I suggest the authors add to the manuscript the following recent references related to 3-D CNNs, which aim to preserve the spectral and spatial features of remote sensing images:

- Spectral-spatial classification of hyperspectral imagery with 3D convolutional neural network, Remote Sensing, 2017.

- Hyperspectral imagery classification based on semi-supervised 3-D deep neural network and adaptive band selection, Expert Systems with Applications, 2019.

6) The English and format of this manuscript should be checked very carefully.

Author Response

We really appreciate your comments. The issues raised in the comments have been carefully addressed; please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The authors demonstrate a polarimetric SAR (PolSAR) image classification method by combining deep neural networks for automated feature learning with manifold learning to extract low-dimensional relationships from high-dimensional data. A weighted mixture is used for classification, utilizing support vector machines (SVMs) as classifiers. The authors report high accuracies for the proposed method.

The paper is well written, with many fundamental examples for better understanding. I have a few questions about points that I felt were not explained well in the paper:

In line 147 in the contributions, the authors state that "deep network is introduced to realize feature learning ...". However, this statement is false as stated, since DNNs have already been utilized for PolSAR image classification. In line 307, the weights are chosen based on previous experience; is there any reason they are not learned? Also, in line 310, the SVM is chosen as a classifier without an explanation for this choice.
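Purely for illustration of the alternative hinted at above (not the authors' procedure, and using placeholder data and an assumed two-branch feature split), a fusion weight could instead be selected by cross-validation rather than fixed from prior experience:

```python
# Illustrative sketch only: pick the manifold/deep fusion weight by cross-validation
# on the training set. Feature arrays and labels are random placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
manifold_feat = rng.normal(size=(500, 3))   # assumed low-dimensional manifold features
deep_feat = rng.normal(size=(500, 32))      # assumed FCN features
y = rng.integers(0, 4, size=500)            # hypothetical class labels

best_w, best_score = None, -np.inf
for w in np.linspace(0.1, 0.9, 9):          # candidate weights for the manifold branch
    fused = np.hstack([w * manifold_feat, (1.0 - w) * deep_feat])
    score = cross_val_score(SVC(kernel="rbf"), fused, y, cv=3).mean()
    if score > best_score:
        best_w, best_score = w, score

print(f"selected weight: {best_w:.1f}, CV accuracy: {best_score:.3f}")
```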

Author Response

We really appreciate your comments. The issues raised in the comments have been carefully addressed; please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

The authors have revised the manuscript carefully according to my questions. I have no further questions about this manuscript. It could be accepted.
