Article
Peer-Review Record

IrrMapper: A Machine Learning Approach for High Resolution Mapping of Irrigated Agriculture Across the Western U.S.

Remote Sens. 2020, 12(14), 2328; https://doi.org/10.3390/rs12142328
by David Ketchum 1,*, Kelsey Jencso 1, Marco P. Maneta 2,3, Forrest Melton 4,5, Matthew O. Jones 6 and Justin Huntington 7
Reviewer 2: Anonymous
Submission received: 15 May 2020 / Revised: 9 July 2020 / Accepted: 14 July 2020 / Published: 20 July 2020

Round 1

Reviewer 1 Report

General Comments:

The manuscript addresses the development of a system for mapping irrigated area in the western states of the US, using machine learning techniques and multi-source, multi-temporal data.

The topic itself is very interesting and relevant, particularly in a production context where irrigation management is key to farm quality and productivity. The work is also set in a context of growth of the irrigated area. The manuscript emphasizes the existence of public data sources, which is highly valued.

The analysis and strategies for spatial-temporal distribution of irrigated areas are interesting, with control parameters, conventional area analysis and implementation of ML models such as Random Forest for zonal characterization and regional grouping.

The practical approach of the article goes one step beyond already existing models, employs multi-source data in different formats (meteorological, RS from GEE, validation with hand-labeled data), and incorporates algorithms that other authors have tested satisfactorily; congratulations on this.

The conclusions obtained after the development of the IrrMapper platform, both technical and in the study of irrigation trends, match the objective initially proposed and reinforce the need to develop technologies and techniques that allow more accurate and precise knowledge of land use in agricultural production areas.

The writing and structure of the article are optimal. The in-depth discussion of the results is appreciated.

In the introduction, I would like a little more information about the use and modelling of large/medium-scale biosystems and the use of ML, in particular RF techniques, for area detection (irrigation or other purposes).

Similarly, a brief introduction to the availability of RS data in GEE could be interesting.

With this in mind and once the article has been reviewed, I recommend making minor changes to the document (focusing on the introduction and some minor aspects based on the comments below) prior to being accepted for publication.

 

Some specific comments:

 

  • One of the barriers to the use of these ML-based models in production systems is sometimes the lack of scalability of the solutions they offer. Do the authors consider that this system is commercially scalable? Could a similar parameterization be carried out with Sentinel-2 data? This would imply less data volume due to its shorter period of existence. And could it be moved to other countries/regions?
  • What is the biggest barrier to these platforms or developments? Can the lack of ground truth for training and validation be overcome by the addition of hyper-local data?
  • Line 167: The model assumes that there are no changes in non-crop or rainfed areas over such a long period, one in which there has been an increase in technification and in the establishment of irrigation. Could this assumption introduce a bias into the trained model?
  • Figure 4: The timeline in figure 4 is a bit confusing; I understand this is for lack of space. I suggest adding a common X-axis to the columns to better understand the evolution in precipitation.
  • Line 189: Are the data from these areas used because of their greater availability? What percentage of the total area do these 63,406 km2 constitute? A similar question applies to the others; I think it makes more sense to give percentages of the areas studied than absolute values.
  • Line 266: Could the number of false positives obtained be reduced if the classification were not binary and the 4 initial types were maintained?
  • Line 276: This is an obvious statement, but to what do the authors believe this difference may be due?
  • Line 283-286: Could there be some overfitting in the model that is reflected in this trend?
  • Lines 278-279 indicate that the agreement with the NASS data is low (r2 = 0.9). On the other hand, line 330 refers to this data again in a positive way. This is a good fit for such a complex model, so the wording of lines 278-279 should be modified.

Author Response

Thank you for the thoughtful review! I've responded point-by-point below.

Some specific comments:

  • One of the barriers to the use of these ML-based models in production systems is sometimes the lack of scalability of the solutions they offer. Do the authors consider that this system is commercially scalable? Could a similar parameterization be carried out with Sentinel-2 data? This would imply less data volume due to its shorter period of existence. And could it be moved to other countries/regions?

This system is scalable. The same method and inputs could be scaled to the entire United States, probably with less training data. One limit to scaling to the U.S. is the eastern regions generally receive much more precipitation and thus the natural vegetation has a similar spectral signature (i.e., green the entire summer) to irrigated lands, and irrigation is likely less intense in terms of frequency and depth of water applied.

Scaling to a global scale (or to cover other regions) is possible, though input data limited to the U.S. (e.g., gridMET) would need to be replaced with comparable data that has global coverage. The different growing seasons around the globe would also need to be parameterized. Landsat has global coverage, so spectral means are available everywhere. The fundamental limitation of ML-based approaches is that the model must be trained and then used to predict on the same set of parameters. While the trained model is rich in information, it is essentially useless if we cannot provide the same data (Landsat, gridMET, etc.) into the future. This limits the utility of short-lived Earth-observation missions for the purpose of mapping over long periods of time.

Sentinel 2 could be used to undertake a similar study, though as mentioned above, the period of record is limited. This is the reason we did not use Sentinel; we have a lot of training data that covers the period before Sentinel began operation. It may be worth developing more training data during the Sentinel record to carry this approach forward at higher resolution with Sentinel optical data.

  • What is the biggest barrier to these platforms or developments? Can the lack of ground truth for training and validation be overcome by the addition of hyper-local data?

The biggest barrier to our project is bias in the training data. As discussed in Lines 392 - 399, we depended on interpretation of irrigation status by human judgement using remote sensing data. The result is that we tended to select obviously irrigated fields for inclusion in the training data, and omitted fields that may have received partial or infrequent irrigation. This bias in training data development is likely expressed by the model in prediction, where low-intensity irrigation may be inferred as an unirrigated class.

  • Line 167: The model assumes that there are no changes in non-crop or rainfed areas over such a long period, one in which there has been an increase in technification and in the establishment of irrigation. Could this assumption introduce a bias into the trained model?

This is very much a major assumption, though difficult to avoid without inspecting all the unirrigated data. We added “The assumption of static land cover in the unirrigated classes (i.e., dryland, wetland, uncultivated) may also introduce error in the training data where land class has changed during the study period. The assumption is probably best for the uncultivated class (e.g., national forest, roadless areas), and weakest for the dryland class, where conversion to irrigation may occur. We suspect the locations where dryland was converted to irrigated are likely limited in our training data because the geospatial data development occurred recently.” (Line 416).

  • Figure 4: The timeline in figure 4 is a bit confusing; I understand this is for lack of space. I suggest adding a common X-axis to the columns to better understand the evolution in precipitation.

We experimented with adding years to each horizontal axis in the sub-plots of this figure, but found it obscured the data. Instead, we moved the empty ‘time’ plot to the left, and added a note stating “All subplots range from 1986 - 2018, as shown in lower left.” to the caption.

  • Line 189: Are the data from these areas used because of their greater availability? What percentage of the total area do these 63,406 km2 constitute? A similar question applies to the others; I think it makes more sense to give percentages of the areas studied than absolute values.

We used all the data we could find that was appropriate for our purposes. The most obvious gaps in training data are noted in the dryland class, for example, Oregon has extensive dryland agriculture for which we were unable to find geospatial data. 

We added, parenthetically, the percentages of total training data represented by each class, e.g., “The dryland data consists of 38,259 fields covering 63,406 km2 (10.4% of total training data area).” (Line 201). 

We also added  “We used all the appropriate training data we were able to obtain. The four classes of training data together cover 611,500 km2, about 20% of the study region area.”
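As a quick arithmetic check, the quoted figures are consistent (a sketch using only numbers stated in this response):

```python
# Share of total training-data area contributed by the dryland class,
# using the figures quoted in this response.
dryland_km2 = 63_406
total_training_km2 = 611_500

share = dryland_km2 / total_training_km2
print(f"{share:.1%}")  # -> 10.4%
```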

  • Line 266: Could the number of false positives obtained be reduced if it were not a binary classification, and the 4 initial types were maintained?

This is a mistake! I accidentally labeled ‘Uncultivated’ what should be labeled ‘Unirrigated’. This table simply groups the three unirrigated classes into one class; the model is the same. There would therefore be the same number of false positives if it were broken into four classes. The point here is that the unirrigated classes cover such a large area that even a low rate of false positives leads to a large (misclassified) irrigated area.
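The grouping point can be illustrated with a small sketch (hypothetical confusion-matrix counts, not the paper's results): collapsing the three unirrigated classes into one does not change how many unirrigated pixels are predicted as irrigated.

```python
# Hypothetical 4-class confusion matrix (rows = true class, cols = predicted),
# class order: irrigated, dryland, wetland, uncultivated.
cm = [
    [900,  10,   5,   5],  # true irrigated
    [ 20, 850,  15,  15],  # true dryland
    [ 15,  10, 870,   5],  # true wetland
    [ 12,   8,   5, 875],  # true uncultivated
]

# False positives for "irrigated" in the 4-class matrix: any unirrigated
# row (1..3) predicted as column 0.
fp_4class = sum(cm[r][0] for r in range(1, 4))

# Collapse the three unirrigated classes into a single "unirrigated" class.
binary = [
    [cm[0][0], sum(cm[0][1:])],
    [sum(cm[r][0] for r in range(1, 4)),
     sum(cm[r][c] for r in range(1, 4) for c in range(1, 4))],
]
fp_binary = binary[1][0]

print(fp_4class, fp_binary)  # -> 47 47 (identical by construction)
```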

  • Line 276: This is an obvious statement, but to what do the authors believe this difference may be due?

Generally, with the comparison of two quantities that represent a small fraction of a total (e.g., irrigated area in a desert county), the difference between two estimates may be small compared to the total, but the relative difference is likely quite large. So large counties with small irrigated area would likely have a large relative difference in irrigated estimate, even if the error (in terms of absolute magnitude) in each method is low.

For this problem in particular, we suspect the above-mentioned explanation might be relevant in arid and/or cold regions such as northern Arizona, Wyoming, and the Sierra Nevada of California. We added “Large relative differences are expected in counties where both estimates are a small fraction of total area (e.g., the northern counties of Arizona).” (Line 347).
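The arithmetic is easy to illustrate with hypothetical county numbers (not figures from the manuscript): the same absolute disagreement yields very different relative differences depending on how much of the county is irrigated.

```python
def relative_difference(est_a_km2, est_b_km2):
    """Absolute difference of two area estimates, as a fraction of their mean."""
    mean = (est_a_km2 + est_b_km2) / 2
    return abs(est_a_km2 - est_b_km2) / mean

# County with extensive irrigation: a 10 km2 disagreement is about 1%.
large = relative_difference(1000, 1010)
# Arid county with little irrigation: the same 10 km2 disagreement is 40%.
small = relative_difference(20, 30)

print(f"{large:.1%} vs {small:.1%}")  # -> 1.0% vs 40.0%
```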

Other factors that we discussed:

  • NASS only counts revenue generating farms, missing urban/suburban irrigation. Line 348 - 353.
  • IrrMapper bias toward over-classification (false positives). Line 366.
  • Pacific Northwest: overall wetter climate and potentially lower irrigation intensity (i.e., depth of water applied) makes identification of irrigation more difficult. Line 360.
  • Lack of training data along the coasts of Oregon and California. Line 363.
  • False positive classification of dryland agriculture. Line 363.

 

  • Line 283-286: Could there be some overfitting in the model that is reflected in this trend?

We don’t believe that the model suffers from overfitting, because accuracy was very high in cross validation, and variable importance implies the model relies on a relatively small number of parameters. Rather, in this case we believe the overall wetter climate and potentially lower irrigation intensity (i.e., depth of water applied) makes identification of irrigation more difficult, as discussed at line 360. If irrigation intensity is low, the spectral contrast between the irrigated area and adjacent natural vegetation is likely short-lived and therefore less likely to be detected in a 2-month spectral mean.
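The two diagnostics mentioned here can be sketched with scikit-learn on synthetic data (illustrative only; these are not the IrrMapper features or training set):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic problem where only a few of many features carry signal,
# loosely analogous to a model relying on a small subset of its inputs.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=5,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)

# Check 1: high accuracy on held-out folds argues against overfitting.
scores = cross_val_score(rf, X, y, cv=5)
print("mean CV accuracy:", scores.mean())

# Check 2: importances concentrated in a few features imply the model
# depends on a relatively small number of parameters.
rf.fit(X, y)
top10_share = sum(sorted(rf.feature_importances_, reverse=True)[:10])
print("importance mass in top 10 of 50 features:", round(top10_share, 2))
```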

 

  • Lines 278-279 indicate that the agreement with the NASS data is low (r2 = 0.9). On the other hand, line 330 refers to this data again in a positive way. This is a good fit for such a complex model, so the wording of lines 278-279 should be modified.

We modified the language to “...shows good agreement with the NASS…” in Line 290 to make this more consistent.

Reviewer 2 Report

The manuscript presents a tool developed in GEE for mapping irrigated agricultural areas across the western states of the US. The tool is rather interesting, though the authors do not really provide much information regarding any particular novelty in it. Several key pieces are missing throughout the paper, and too much attention is paid to certain parts that could be better spent explaining critical steps such as homogenization of "ground truth" data across years and states, homogenization of data across different path/rows, the features used for the training process, and even the proper setup of the Random Forest classifier. Results are not presented in the best way, and too much focus is given to certain parts that are not as relevant. Proper definition of classes is required, and proper analysis and conclusions are still missing. Comparison to other classifiers in the literature must be done, and better organization and scientific soundness are needed for the whole manuscript. It would be good to consider reducing the length of the paper while being more concise. Further details are provided in the following.

Line 7. Just a curiosity: is this data freely available? Or how did you access it? And how reliable is it?

Lines 36-39. Reading this, I think my previous curiosity becomes a real question regarding how you guarantee the reliability of your samples.

Line 55. When you refer to irrigated areas, does this class include both artificially irrigated and rainfed ones? I find it hard to believe that the spectral response in artificially irrigated and rainfed areas can be so different.

Lines 87-92. I strongly suggest entering a bit more into detail about what the method in reference 50 exactly does. As far as I see, here is the novelty of your work, and it is important to understand to what extent your contribution is novel or not.

Line 111. It would be good to give the full name of each state, instead of just abbreviations.

Figure 1. Could you please add the names of each state in the map?

Lines 130-132. What about differences among different path/rows and possible climatic differences in such a big area?

Line 145. Is this data offered for free? Please add some more information in this regard.

Lines 163-166. How do you guarantee homogeneity across that many years and such different sources of datasets? Any particular problem or procedure? I think this is quite relevant, especially if a reader sees your paper as a potential way to apply it somewhere else in the world.

Lines 167-168. This seems to me a rather strong assumption, especially considering a period of more than 30 years. It is well known in the literature how much agricultural areas have expanded, overtaking many other land covers.

Table 1. So, you did not have information for every single year evaluated, and not even for all the states in a homogeneous way. Once again, how did you integrate all of these datasets? Such information must be provided. Are all of the years mapped at least once?

Line 254. I still would like to understand how you differentiate between irrigated and rainfed crop fields. When you talk about unirrigated lands, what exactly do you refer to?

Table 2. Looking once again at the names of the classes, I consider that a proper definition of each of them is needed; not at this stage, but from the very beginning. I continue to have confusion about artificial and natural irrigation, and up to now it is not really clear how your algorithm is able to differentiate between them, if at all. Please go back to your definitions and better describe the different concepts. Also pay attention to a better description of your features in order to justify how you can really separate these two classes, other than just stating that it is really easy to separate them from a spectral perspective.

Line 270. 132 parameters? Where are these coming from? There is no clear explanation of the features/parameters used for your training process. Please also consider changing the structure of your whole paper a bit. There are too many sub-sections, which do not allow a continuous and clear logic of the whole process. There are also some repetitive parts and some missing parts. The manuscript needs to be reduced a bit.

Figure 7. Why 20 here but 10 in the text? Please be coherent in the information you want to provide.

Line 299. I would like to see some more details regarding the final results. Something interesting to see would be a small portion of the whole area over the 30 years of analyzed data, in order to better show how irrigated areas have increased over time. Or also a plot showing the evolution of the areas over time.

Figure 8. Do you make the analysis only every 5 years?

Figure 9. Not sure if this is really required; at least it does not look very informative. Maybe the way in which you have commented on it is not the best one.

Line 420. First time you talk about "timely" mapping. In fact, one of the questions I have regards the time it takes to do the whole process, from collection and homogenization of data, to selection of training samples, training of the RF, and production of final maps. The area is indeed quite big, and I know GEE has some limitations when you try to access that much data for such a long period. Did you have to do the process at tile level and then join it all together? What is the whole computational time? What are the outputs of your method? How easy or not is it to extrapolate it to other areas? How sensitive is it to the number of training samples? Wouldn't it be easier to consider some other approach with different sorts of features? I think that in the methodology you should add a block scheme where each of the steps carried out is described, and some comments on computational burden should be added in each case.

Lines 428-430. Any ideas on why this difference is bigger in those states?

I think that after such a big job, better conclusions could be drawn. What about the exact novelty of your method? Any comparisons with other state-of-the-art classifiers? Once you have your training samples, it should be easy to train some other classifiers: a neural network, a Support Vector Machine, even a simple unsupervised k-means. I would expect to see such an analysis here.

 

Author Response

Please see attached file.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

The authors have improved the manuscript to a certain level, but not fully. Several of my comments in the first review were simply ignored by the authors, since they did not agree with my comments or justified themselves because the other reviewer had said it was ok. I do not think this is the way to reply to comments from a reviewer who has spent a good amount of their time reviewing the manuscript and trying to contribute to improving the quality of the work.

Many of the suggested comments in my previous review still need to be considered in order to improve the quality of the manuscript and guarantee its publication with proper quality. There is no novelty at all in the presented manuscript, and I would consider it more as a report applying well-known state-of-the-art methods to a big problem in the literature when working on extensive study areas. In this context, the manuscript offers great information, though it is still limited by the number of training samples and their distribution along the whole study area. The authors are encouraged to change the perspective of the manuscript in this manner and present it as a report of a big work. Unless the authors present a novel idea in their work, it cannot be considered a paper.

Regarding your reply to one of my previous comments: "We will address trends in sub-regions and give the necessary detailed explanations on why trends are occurring in our future work". I do not see the need to write a whole new manuscript only to explain this information in detail, when you can perfectly exploit the current manuscript to do so. I see the authors simply trying to skip all of my suggestions, which would really give better value to the current manuscript, by giving justifications that do not necessarily fit well here, especially regarding the organization of the manuscript. I strongly suggest the authors go over all of my previous comments and follow the good practice of implementing at least half of them to improve the quality of the manuscript.

LINE 129. Please list the 132 features. It is not enough to simply say there are 132 or where they come from; specify each of them, even as annex information. This is especially important considering that your figure 7 discusses the 10 most relevant features, and most of them have not even been defined in the whole manuscript.

One last comment regards the decision to use RF instead of any other classifier in the literature. Something that would surely give good value to your work would be a proper analysis of why to select a given classifier for this type of analysis in such a big area. I do not consider it out of the scope of your paper, since basically there is no novelty and no clear scope other than presenting the results of applying well-known methods in the literature to a big area in GEE.

 

Author Response

The authors have improved the manuscript to a certain level, but not fully. Several of my comments in the first review were simply ignored by the authors, since they did not agree with my comments or justified themselves because the other reviewer had said it was ok. I do not think this is the way to reply to comments from a reviewer who has spent a good amount of their time reviewing the manuscript and trying to contribute to improving the quality of the work.

Response: Thank you for your review. We regret that you consider our decision not to implement every suggestion as ignoring your good work on the review. We appreciated the review very much and gave a lot of thought to each point you made. We did make changes for most of the points in the review. We apologize if our response appeared inconsiderate of the time and thought placed in your review. On the contrary, we found your review very informative and either implemented the suggested changes or attempted to respond thoroughly to your concerns in our comments.

 

Many of the suggested comments in my previous review still need to be considered in order to improve the quality of the manuscript and guarantee its publication with proper quality. There is no novelty at all in the presented manuscript, and I would consider it more as a report applying well-known state-of-the-art methods to a big problem in the literature when working on extensive study areas. In this context, the manuscript offers great information, though it is still limited by the number of training samples and their distribution along the whole study area. The authors are encouraged to change the perspective of the manuscript in this manner and present it as a report of a big work. Unless the authors present a novel idea in their work, it cannot be considered a paper.

Response: Thank you. We acknowledge that the Random Forest supervised learning approach is not novel. As we mention in the manuscript, we are using a similar approach to Deines et al. (2017, 2019). However, the scale of our application and the climatic and spatial diversity (and quantity) of the training data are unprecedented for the problem of mapping irrigation. Indeed, maps such as these do not yet exist in the published literature. We believe that novel applications of Random Forest such as ours certainly have a place in Remote Sensing, as many applications of the Random Forest approach have been published here and have demonstrated great scientific value. These include recent publications using Random Forest implementations to map alfalfa yield (Feng et al., 2020), winter wheat (Xu et al., 2020), human population (Qiu et al., 2020), wetland land cover (Wang et al., 2020), successional forest stages (Sothe et al., 2017), agricultural expansion (Eckert et al., 2017), cropland distribution and extent (Forkuor et al., 2017), and tree types (Waser et al., 2017), among many others.

We appreciate the reviewer's attention to detail and recommendation for rigor at every step in the analysis. This is sincerely appreciated. We note that we used 60,000 training samples in our analysis, which took more than 12 months of work to compile through manual, expert review of imagery and data. Based on the results of the analysis and the accuracies achieved, we feel that this is a large and robust number of samples.

We added “To our knowledge, this represents an unprecedented collection of verified irrigated areas.” to line 200.

Regarding your reply to one of my previous comments: "We will address trends in sub-regions and give the necessary detailed explanations on why trends are occurring in our future work". I do not see the need to write a whole new manuscript only to explain this information in detail, when you can perfectly exploit the current manuscript to do so. I see the authors simply trying to skip all of my suggestions, which would really give better value to the current manuscript, by giving justifications that do not necessarily fit well here, especially regarding the organization of the manuscript. I strongly suggest the authors go over all of my previous comments and follow the good practice of implementing at least half of them to improve the quality of the manuscript.

Response: Thank you for your interest in this exciting avenue for further research. We do believe this is deserving of an entire manuscript since our analysis covers 11 states with a diverse range of conditions affecting water supply and irrigation status. We would like to explore the drivers of irrigation trends in terms of economic, climatic, and institutional factors. This will include analysis of covariation of climate and economic variables with irrigated area, change in crop type for each subregion, inspection and discussion of water rights management in the region, documentation of the various federal and local incentives to reduce irrigation, and review of water physical availability in each region. We suspect there is a unique story for each subregion of our study area and that the analysis and discussion will need a lot of space.

LINE 129. Please list the 132 features. It is not enough to simply say there are 132 or where they come from; specify each of them, even as annex information. This is especially important considering that your figure 7 discusses the 10 most relevant features, and most of them have not even been defined in the whole manuscript.

Response: Thank you. We added a table of the 132 features to the supplement.

One last comment regards the decision to use RF instead of any other classifier in the literature. Something that would surely give good value to your work would be a proper analysis of why to select a given classifier for this type of analysis in such a big area. I do not consider it out of the scope of your paper, since basically there is no novelty and no clear scope other than presenting the results of applying well-known methods in the literature to a big area in GEE.

We understand and appreciate this point. We have added sentences on lines 91 - 94 providing additional explanation for our selection of the RF approach. 

“RF has been shown to be a reliable and fast algorithm for remote sensing applications, suited to handling high-dimensional and colinear data, insensitive to overfitting, and explanatory of variable importance (Belgiu and Dragut, 2016).”

Our primary objective in this analysis was to develop a map of irrigation status with an accuracy that was high enough to be useful to the water management community. Given the high accuracy of the results achieved, we feel that selection of the RF approach among other possible approaches is justified. We have no doubt that in the future, others will improve upon our analysis. As additional analyses are performed for other regions around the world, this will facilitate a rigorous evaluation of alternate methods for mapping irrigation status.

 

References:

Eckert, S., Kiteme, B., Njuguna, E. and Zaehringer, J.G., 2017. Agricultural expansion and intensification in the foothills of Mount Kenya: a landscape perspective. Remote Sensing, 9(8), p.784.

Feng, L., Zhang, Z., Ma, Y., Du, Q., Williams, P., Drewry, J. and Luck, B., 2020. Alfalfa Yield Prediction Using UAV-Based Hyperspectral Imagery and Ensemble Learning. Remote Sensing, 12(12), p.2028.

Forkuor, G., Conrad, C., Thiel, M., Zoungrana, B.J. and Tondoh, J.E., 2017. Multiscale remote sensing to map the spatial distribution and extent of cropland in the Sudanian Savanna of West Africa. Remote Sensing, 9(8), p.839.

Qiu, G., Bao, Y., Yang, X., Wang, C., Ye, T., Stein, A. and Jia, P., 2020. Local Population Mapping Using a Random Forest Model Based on Remote and Social Sensing Data: A Case Study in Zhengzhou, China. Remote Sensing, 12(10), p.1618.

Sothe, C., Almeida, C.M.D., Liesenberg, V. and Schimalski, M.B., 2017. Evaluating Sentinel-2 and Landsat-8 data to map successional forest stages in a subtropical forest in Southern Brazil. Remote Sensing, 9(8), p.838.

Wang, S., Zhang, L., Zhang, H., Han, X. and Zhang, L., 2020. Spatial–Temporal Wetland Landcover Changes of Poyang Lake Derived from Landsat and HJ-1A/B Data in the Dry Season from 1973–2019. Remote Sensing, 12(10), p.1595.

Waser, L.T., Ginzler, C. and Rehush, N., 2017. Wall-to-wall tree type mapping from countrywide airborne remote sensing surveys. Remote Sensing, 9(8), p.766.

Xu, F., Li, Z., Zhang, S., Huang, N., Quan, Z., Zhang, W., Liu, X., Jiang, X., Pan, J. and Prishchepov, A.V., 2020. Mapping Winter Wheat with Combinations of Temporally Aggregated Sentinel-2 and Landsat-8 Data in Shandong Province, China. Remote Sensing, 12(12), p.2065.

Submission Date

15 May 2020

Date of this review

29 Jun 2020 18:27:00
