Peer-Review Record

Large-Area, High Spatial Resolution Land Cover Mapping Using Random Forests, GEOBIA, and NAIP Orthophotography: Findings and Recommendations

Remote Sens. 2019, 11(12), 1409; https://doi.org/10.3390/rs11121409
by Aaron E. Maxwell *, Michael P. Strager, Timothy A. Warner, Christopher A. Ramezan, Alice N. Morgan and Cameron E. Pauley
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 8 May 2019 / Revised: 8 June 2019 / Accepted: 11 June 2019 / Published: 13 June 2019
(This article belongs to the Special Issue Geographic Object-Based Image Analysis (GEOBIA))

Round 1

Reviewer 1 Report

This is an interesting paper, well written and timely. I think it is worthy of publication in this journal; however, I recommend some revisions for clarity and narrative flow. Given the tight turnaround for this journal, I have not had time to investigate all of the facets of this research, but what I have read is quality, comprehensive material.


Section 1.1, Machine Learning and Training Data, starts with a review of the fundamentals of random forest algorithms. This level of detail is not necessary, and these first two paragraphs could be streamlined a little.

Page 2, lines 63-65: References to “super-object information” and “textural measures” should be briefly explained before they are posed in a question.

Yes, and I could also use a little more explanation on page 3, line 131, as I am not familiar with the GLCM (gray-level co-occurrence matrix).
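For context, the GLCM tabulates how often pairs of gray levels co-occur at a given offset within an image window; texture measures such as contrast, homogeneity, and correlation are then derived from it. A minimal sketch, assuming scikit-image's graycomatrix/graycoprops and a single-band 8-bit window; the window size, distances, and angles here are illustrative, not those used in the paper:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Illustrative 8-bit, single-band window (e.g., a patch of one NAIP band);
# in practice this would be extracted per object or per moving window.
window = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)

# Co-occurrence counts for a 1-pixel offset in four directions,
# made symmetric and normalized to probabilities.
glcm = graycomatrix(
    window,
    distances=[1],
    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
    levels=256,
    symmetric=True,
    normed=True,
)

# Scalar texture measures, averaged over the four directions.
for prop in ("contrast", "dissimilarity", "homogeneity", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```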

Page 4, lines 158-173: I had to reread this paragraph a couple times to understand it. It’s a very pertinent argument, however. You might want to move the last sentence up closer to the beginning to frame the argument, and perhaps take a pass at the text to try to make sure that the argument comes through.

Page 10, line 407: Without checking on the fundamentals of McNemar’s test, a p-value of 1.645 for a one-tailed test seems low.
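A note on this point: 1.645 is the one-tailed critical value of the standard normal (z) statistic at α = 0.05, not a p-value, which may be the source of the confusion. A minimal sketch of McNemar's test in its z-statistic form, using hypothetical counts of samples on which two classifications disagree (the numbers are invented for illustration):

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical discordant counts from comparing two classifications
# against the same reference samples:
# b = samples correct in classification A but wrong in B
# c = samples wrong in A but correct in B
b, c = 34, 18

# McNemar's test in z form: z = (b - c) / sqrt(b + c)
z = (b - c) / sqrt(b + c)

# One-tailed p-value; the test statistic z is what gets compared
# against the 1.645 critical value at alpha = 0.05.
p_one_tailed = norm.sf(z)
print(f"z = {z:.3f}, one-tailed p = {p_one_tailed:.4f}")
```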

Page 10, lines 425-426: Because of the confusion in agricultural fields, you may want to consider adding a sentence or two somewhere about how the lines between land cover and land use are a little blurry here. In other words, you actually want to classify land use (i.e., agriculture) but are picking up a different type of land cover (barren).

Just a minor thing: There is no Table labeled #3.


Author Response

Response has been provided as a Word Doc file. 

Author Response File: Author Response.docx

Reviewer 2 Report

Thank you for your contribution. I appreciate the discussion and details you give to justify your choices. I have some general comments/questions/suggestions for the revision:

- On lines 61-67, you list five questions. Maybe you could answer them clearly in the conclusion.

- You use a mono-date approach, mainly due to the nature of your data. Do the differences in acquisition dates among your aerial images produce learning/prediction issues?

- You mention that you use ArcGIS and eCognition software. Did you consider open-source alternatives? Could these proprietary choices limit a cloud-based evolution of your processing methodology?

- You mention that you use 6 thematic classes. Did you consider using more detailed classes and thus increasing the number of classes? More thematic detail might resolve your “mixed developed” class accuracy problem, especially in a GEOBIA approach (see Table 5, row UA MD (%)).

- Figure 1: Not enough information in the caption. Why have you selected these sub-areas?

- Adding more visual results could help interpret your numerical results, especially in Sections 3.2 to 3.5. This is not mandatory, but it could help.

- In Subsections 3.3 and 3.4, since you seem to increase the number of samples and variables randomly, did you consider repeating your experiments multiple times to obtain significant results? For example, in Figure 4c, the MD user's accuracy shows a strange effect when the sample size equals 512. With repeated runs, you could also obtain standard deviations to help interpret the robustness of the choices (see the sketch after this list).

- In Subsection 3.5, it is difficult to tell whether your results are significant, as the accuracy differences (in the figures) are very small.

- Publication years are missing in refs. 83, 85, 86, and 99. Could you provide a link for ref. 93?
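Regarding the repeated-experiment suggestion above: a minimal sketch of how per-run accuracies could be averaged over repeated random draws of training samples so that standard deviations can be reported. The data, sample sizes, classifier settings, and number of repeats are placeholders, not those from the study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data standing in for object features and class labels.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

n_repeats = 10
for n_train in (128, 256, 512, 1024):
    accs = []
    for rep in range(n_repeats):
        # Random draw of the training set, repeated to capture variability.
        rng = np.random.default_rng(rep)
        idx = rng.choice(len(X_pool), size=n_train, replace=False)
        rf = RandomForestClassifier(n_estimators=500, random_state=rep)
        rf.fit(X_pool[idx], y_pool[idx])
        accs.append(accuracy_score(y_test, rf.predict(X_test)))
    print(f"n_train={n_train}: OA mean={np.mean(accs):.3f}, std={np.std(accs):.3f}")
```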


Author Response

Response has been provided in the attached Word Doc file. 

Author Response File: Author Response.docx

Reviewer 3 Report

This manuscript contributes to large-area, high spatial resolution land-cover mapping by addressing several technical aspects of concern and making some recommendations. However, some issues remain.

 

1. Object-based accuracy assessments make perfect sense for object-based classifications of high spatial resolution images. Segmentation accuracy (related to image objects' existence and geometry/shape) seemed to be assumed in the manuscript, while the focus was placed upon accuracy in class labeling. We need to recognize that the image objects concerned are artifacts of image segmentation, which is itself the result of human and machine (algorithm) interactions involving parameterization and optimization with a subjective dimension. I would thus suggest that a pixel-based assessment of accuracy be done as a complement: some results may be comparable, others may be rather different, when the two approaches are compared. It would be helpful to discuss the implications for GEOBIA, which is so common nowadays. Sample pixels may be selected (hopefully easily) from the already collected sample objects. Pay attention to sampling design (see point #2 below).

2. Such a pixel-based approach would also be easier to implement in sampling design than an object-based one, due to the differing shapes and sizes of image segments. Accuracy evaluations based on error matrices would also be more straightforward (from pixel counts to area proportions) than those based on image objects (the authors need to double-check whether weighting by object areas, as in the paper, is equivalent to accommodating the inclusion probabilities of sample units when stratified sampling with unequal sampling intensities is adopted; a sketch illustrating this kind of area weighting appears after these comments). The authors should provide more detail about the training and validation sample datasets, in particular the sampling design (Section 2.5).

3. It would be valuable to discuss issues related to accuracy assessment of inherently difficult-to-define or confusing classes (such as barren and impervious) vs. better-defined classes (such as water and forest). Fine (high) spatial resolution images did not seem able to solve “fuzziness” issues in land-cover semantics and labeling. This has implications for accuracy assessment practice, whereby land-cover information extracted from fine spatial resolution images is usually taken as the reference for evaluating the accuracy of classifications based on images of coarser spatial resolution.

4. I wonder why land-cover information of finer thematic resolution was not extracted from such fine spatial resolution images; after all, there are Level I and Level II land-cover classification systems for the NLCD.
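To illustrate the area-weighting question raised in point 2: a minimal sketch of converting a sample-count error matrix into an area-proportion error matrix by weighting each mapped-class stratum by its share of the mapped area, in the spirit of stratified estimation. The counts and area proportions are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical error matrix of sample counts: rows = mapped class, cols = reference class.
counts = np.array([
    [80,  5,  3],
    [ 6, 70, 10],
    [ 2,  8, 60],
], dtype=float)

# Hypothetical proportion of the mapped area occupied by each mapped class (stratum).
area_props = np.array([0.55, 0.30, 0.15])

# Convert each row to proportions of its stratum, then weight by the stratum's area share,
# so cells estimate area proportions rather than raw sample counts.
row_totals = counts.sum(axis=1, keepdims=True)
p = counts / row_totals * area_props[:, None]

overall_acc = np.trace(p)                     # sum of diagonal area proportions
users_acc = np.diag(p) / p.sum(axis=1)        # per mapped class
producers_acc = np.diag(p) / p.sum(axis=0)    # per reference class

print("Overall accuracy:", round(overall_acc, 3))
print("User's accuracies:", np.round(users_acc, 3))
print("Producer's accuracies:", np.round(producers_acc, 3))
```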


Author Response

Response has been provided in the attached Word Doc file.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

Dear authors,


Thanks for answering all my questions and remarks. This is fine for me.

Author Response

We have attached a response document. Thanks. 

Author Response File: Author Response.docx

Reviewer 3 Report

I have no further major comments, as the authors appear to have done a thoughtful revision. The authors should, however, add a sentence or two in the abstract and conclusion emphasizing the large variations in classification accuracies across classes (i.e., classes that are relatively easy to map vs. those that are difficult to map). This is important because an OA of 96.7% may convey a misleading message, given the large confusion between barren, low vegetation, etc.


Author Response

We have attached a response document. Thanks.

Author Response File: Author Response.docx
