Article
Peer-Review Record

Globally Scalable Approach to Estimate Net Ecosystem Exchange Based on Remote Sensing, Meteorological Data, and Direct Measurements of Eddy Covariance Sites

Remote Sens. 2022, 14(21), 5529; https://doi.org/10.3390/rs14215529
by Ruslan Zhuravlev 1,*, Andrey Dara 1,2, André Luís Diniz dos Santos 1,3, Oleg Demidov 1 and George Burba 4
Reviewer 1: Anonymous
Reviewer 3:
Submission received: 16 September 2022 / Revised: 26 October 2022 / Accepted: 27 October 2022 / Published: 2 November 2022

Round 1

Reviewer 1 Report

In general, the authors have carried out laborious calculations with a large amount of data. However, in my opinion, much of the data is duplicated and unnecessary, and I think the authors' assumption is problematic. If existing remote sensing images are used to fill gaps in the time series caused by a lack of remote sensing data, then the only varying inputs are the weather data, and the remote sensing data assigned to neighboring days lose their meaning. The method is not described in enough detail, and the results are not presented well; many of them are merely examples or unnecessary to display (Figure 4). The discussion reiterates the introduction and is not illuminating.

 

NBS should be explained in the abstract so that the audience can understand your manuscript easily and clearly.

 

CO2 should be defined when it first appears; also pay attention to the subscript (the “2” should be subscripted).

 

The paragraphs in the introduction are incoherent; what should have been one paragraph was split into two (L47-62).

 

The closing parenthesis is missing.

 

What are the two approaches? L76.

 

I suggest modifying this style of writing: rather than stating that a certain manuscript was used for something, write that some researchers did something, and then cite the document.

 

“Compared with all observations, the geographically weighted regression using the MODIS Normalized Difference Vegetation Index (NDVI) obtained an R2 of 0.45.” Which variable does this NDVI regression result refer to? GPP?

 

What does GFS mean? What is the unit in Figure 3? Seconds?

 

What did you mean by “one particle, or group of particles” in your study? Is it a carbon atom?

 

Residence time is the time an atom takes from entering to exiting the system; thus, how long does your model run? I am not quite sure how you calculate the residence time.

 

What do the 42 additional features include?

 

Do you use cumulative radiation or average radiation to reflect phenological effects?

 

 

Do you really need so many variables, parameters, or features for training? Although some variables are excluded, it seems to me that many variables are redundant. Perhaps including them improves the R2 of the results, but the underlying mechanism is not clear. It appears that anything obtainable was added to the model for training; at a minimum, the data involved in training should not contain duplicate features.
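As an illustration only (not drawn from the manuscript), one common way to screen for such duplicate features is a pairwise-correlation filter; the sketch below assumes the training features are available as a hypothetical pandas DataFrame and uses an arbitrary 0.95 threshold:

```python
import pandas as pd

def drop_redundant_features(features: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Drop features whose absolute pairwise correlation with a kept feature exceeds the threshold."""
    corr = features.corr().abs()
    cols = list(corr.columns)
    to_drop = set()
    for i, col_i in enumerate(cols):
        if col_i in to_drop:
            continue
        for col_j in cols[i + 1:]:
            if col_j not in to_drop and corr.loc[col_i, col_j] > threshold:
                to_drop.add(col_j)  # keep the first feature of each highly correlated pair
    return features.drop(columns=sorted(to_drop))
```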

 

It is recommended to retype Table 3, which is too messy. The abbreviations of some variables need to be explained.

 

What do the “neighboring days” mean? Are they the days before and after the given day? How many neighboring days are included in total: a week?

 

What is the reference for NDMI? I do not see any description of the NCEP2 data by the authors, yet it is mentioned in Figure 2.

 

The authors should list all FLUXNET sites used, the remote sensing images, and the time period of both in an appendix.

 

The slope, R2, and p-value of the observed versus predicted data should be marked in Figure 5.
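For reference, a minimal Python sketch of how these three statistics could be obtained from paired observed and predicted NEE values; the numbers below are purely hypothetical placeholders, not data from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical placeholder values; replace with the paired observed/predicted NEE series.
observed = np.array([-2.1, -1.4, 0.3, 1.0, -0.5])   # EC-measured NEE
predicted = np.array([-1.8, -1.1, 0.1, 0.8, -0.7])  # model-predicted NEE

# linregress returns slope, intercept, Pearson r, two-sided p-value, and standard error.
fit = stats.linregress(observed, predicted)
print(f"slope = {fit.slope:.3f}, R2 = {fit.rvalue ** 2:.3f}, p = {fit.pvalue:.3g}")
```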

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

I believe the following points should be addressed before the publication of this manuscript:

Please review the acronyms that are not defined in the work. Their extensive use makes the work difficult to read, especially when their definitions cannot be found.

Introduction

- Clearly indicate the objectives of the work at the end of the introduction.

- Also explain the scale of the work, since it is not clear until the methodology section.

Methodology

- Line 219: Given the variety of Landsat sensors used, it is preferable to name the channels by their acronyms (B, G, Red) rather than by their numbers, since as written this would be wrong.
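To illustrate the point about band numbering: the red channel, for instance, is band 3 on Landsat 5 TM and Landsat 7 ETM+ but band 4 on Landsat 8 OLI. A minimal Python sketch (not from the manuscript) of a sensor-aware channel lookup:

```python
# Visible/NIR band numbers differ between Landsat sensors, so channels should be
# addressed by name rather than by raw band number.
LANDSAT_BANDS = {
    "L5_TM":   {"blue": "B1", "green": "B2", "red": "B3", "nir": "B4"},
    "L7_ETM+": {"blue": "B1", "green": "B2", "red": "B3", "nir": "B4"},
    "L8_OLI":  {"blue": "B2", "green": "B3", "red": "B4", "nir": "B5"},
}

def band_id(sensor: str, channel: str) -> str:
    """Return the sensor-specific band identifier for a named channel."""
    return LANDSAT_BANDS[sensor][channel]

print(band_id("L5_TM", "red"), band_id("L8_OLI", "red"))  # B3 B4
```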

Results

- Are there any spatial results on the variations observed in Figures 5 and 6? Please explain these graphs a little more.

It would be convenient to include a Conclusions section, or to combine the Discussion and Conclusions.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The manuscript titled “Globally scalable approach to estimate net ecosystem exchange based on remote sensing, meteorological data, and direct measurements of eddy covariance sites” by Zhuravlev et al. combines machine learning approaches and source area modelling to match eddy covariance measurements with Landsat data and to upscale carbon sequestration globally. I believe that it is a valuable contribution to the journal. However, before acceptance, a few points need to be addressed. The abstract overall could be improved by giving more detail on certain concepts, which as of now appear a bit vague. For example, how is scalability achieved using a validation design based on a separate set of eddy covariance towers?

While the manuscript is overall well written, I would recommend adding some information to assess the full scope of this project. For example, the authors do not mention what processing environment was chosen (I presume Python, due to some library mentions). I would also recommend adding information on how computationally intensive this analysis was, to understand how this process (which seems quite complex) can be used for NBS quantifications in a timely manner. While Figure 2 helps with the overview of the process, I feel that some steps are missing from this graph, particularly regarding the meteorological data, which seem to be processed further according to Section 2.2.2 (i.e., training sample augmentation techniques, weather profiles). I also feel that some key information for the interpretation is missing. First, how were Landsat scene overlap and the differences in Landsat sensors and reflectance filters (from Landsat 5 to 8) handled? I would expect that some reflectance indices would vary based on that (between Landsat 8 and the others). How does the model handle differences in EC density globally, particularly the underrepresentation of EC measurements in certain global locations? The authors briefly mention this in the discussion, but it would be helpful to show some data on this to help with the interpretation. What was the timeframe for this study, and how many EC towers were available for each land cover type and for training and validation? I also think that the discussion could be extended. For example, wetland NEE seems to be underestimated, but I cannot find a discussion of why this result was seen. Finally, I suppose I anticipated a global map of NEE from this study, mainly because the title and abstract use the word scalable.

 

Abstract

 

Line 1: Please define NBS

Line 12: what were these models validated against?

 

Introduction

 

Lines 63-87: I’d recommend adding the author names (et al.) to citations when they are used at the beginning of a sentence, for easier comprehension.

 

Data and Methods

 

Please state the temporal extent of this study and how many towers were available per vegetation type, as well as how many were used for training and validation per IGBP class.

 

Lines 158: It is questionable if daily sums or averages of meteorological data capture the variability in carbon fluxes.

 

Line 173: “original NEE” measurements may be misleading, since half-hourly NEE values are typically averaged fluxes.

 

Line 187: Should this be 9-10 km2 instead of km?

 

Line 194: Which version? Also please define GFS

 

Line 212: does “selected” refer to pixel values, or averaged values for certain areas? 

 

Line 241: Am I understanding this correctly, that remote sensing scenes were “copied” to days with similar meteorological conditions? I’m curious what biases this may introduce, particularly for managed systems like CRO, which may experience a drastic shift in land cover from one day to the next due to harvests, etc.

 

Line 262: Please describe in more detail what “with control of the regularization process” means here.

 

Line 265: I’m a bit puzzled about the “small dataset” reference; to me, the descriptions in the study sound like very extensive datasets, both spatially and temporally. Mentioning the number of data points and features included in the models by land cover type would help.

 

Line 266: Please define SVR

 

Line 277: It looks like a different set of features was selected for each IGBP class for the prediction. It would help to note which features were used for each of the classes. Table 3 shows the model predictors, but it is unclear (and hard to read) whether these were applied to all IGBP classes or varied by class (which the text seems to suggest, i.e., line 307).

 

Line 282: Did you use Python for all computations? Please specify.

 

Line 284: Should this be scikit?

 

Figure 4: This is the first mention of a study period.

 

Results

 

Line 315: I would recommend adding the range in R2 here, instead of averaging all R2 for the different vegetation types, as I am not sure that is appropriate.

 

Line 318: How was the standard deviation estimated between the two datasets (EC NEE and predicted NEE)?

 

Line 320: I would recommend rewording this sentence. I presume this refers to agricultural management, such as harvests, growing season, etc.

 

Figure 4: It is unclear whether these values represent global daily NEE estimates for each land class or if these are averaged values from different locations. (Stating the number of datapoints might be helpful too)

 

Figure 6: Please define observation index. Furthermore, are these averaged values for the different vegetation species, or selected stations?

 

Table 4: I would recommend using “standard deviation of the residuals” here, as SD and RMSE are not necessarily the same.
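To illustrate the distinction, RMSE and the standard deviation of the residuals coincide only when the mean bias is zero; a small Python example with hypothetical residuals:

```python
import numpy as np

residuals = np.array([1.2, 0.8, 1.5, 0.9, 1.1])  # hypothetical predicted-minus-observed values

rmse = np.sqrt(np.mean(residuals ** 2))  # includes the contribution of the mean bias
sd_resid = np.std(residuals, ddof=1)     # spread around the mean bias only

# With a mean bias of about 1.1, RMSE (~1.13) is much larger than the SD of residuals (~0.27).
print(f"RMSE = {rmse:.3f}, SD of residuals = {sd_resid:.3f}")
```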

 

Lines 323-332: This section seems to fit better into the methods section. Further, was monthly NEE estimated as an average from 2/3 (or more) of the data?

 

Table 5: The table header seems to be missing some words, please double check.

 

Discussion

 

Line 251: Similar to the abstract, I don’t fully comprehend how separate sets of eddy covariance stations play into the scalability. Does this mean you trained your model on certain EC towers and validated it on others? If so, how many were used for training and where were they located? Similarly, please mention the number and location of the EC towers used for validation.
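For clarity on what a site-held-out validation of this kind typically looks like, here is a minimal Python sketch using a group-aware split by tower; the data, tower labels, and split ratio are hypothetical and this is not presented as the authors' actual design:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical data: 8 samples from 4 EC towers, 3 predictor features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                                    # predictor features
y = rng.normal(size=8)                                         # NEE targets
site_ids = np.array(["A", "A", "B", "B", "C", "C", "D", "D"])  # tower of origin

# GroupShuffleSplit keeps all samples from a tower on one side of the split, so the
# validation score measures transfer to unseen towers, not to unseen days at known towers.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=42)
train_idx, val_idx = next(splitter.split(X, y, groups=site_ids))

print("training towers:  ", sorted(set(site_ids[train_idx])))
print("validation towers:", sorted(set(site_ids[val_idx])))
```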

 

Figure 7: Please add error bars to the graph.

 

Line 392: What does this sentence refer to, and what is N? That is, what are these different models? From the sentence it appears that this was done outside of this study.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The paper has been greatly improved. I have some minor suggestions; for example, for the square meters in the figures, the "2" should be a superscript.

Author Response

  • The paper has been greatly improved. I have some minor suggestions; for example, for the square meters in the figures, the "2" should be a superscript.

    Response: Thank you for your note; we have updated all the figures and table headers accordingly.

Reviewer 3 Report

The authors have addressed all my comments, hence I would recommend accepting the manuscript.

Author Response

The authors have addressed all my comments, hence I would recommend accepting the manuscript.

Response: Thank you for your helpful comments; they have significantly improved the quality and clarity of our work.
