Article

Optimal Temporal Window Selection for Winter Wheat and Rapeseed Mapping with Sentinel-2 Images: A Case Study of Zhongxiang in China

1 School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
2 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
3 Hubei Provincial Engineering Research Center of Natural Resources Remote Sensing Monitoring, Wuhan University, Wuhan 430079, China
4 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(2), 226; https://doi.org/10.3390/rs12020226
Submission received: 12 December 2019 / Revised: 1 January 2020 / Accepted: 4 January 2020 / Published: 9 January 2020

Abstract

Currently, the main remote sensing-based crop mapping methods are based on spectral-temporal features. However, there has been a lack of research on the selection of multi-temporal images, and most methods simply use all the available images acquired during the crop growth cycle. In this study, in order to explore the optimal temporal window for crop mapping with limited remote sensing data, we tested all possible combinations of temporal windows in an exhaustive manner, and comprehensively considered both spatial accuracy and statistical accuracy as evaluation indices. We collected all the available cloud-free Sentinel-2 multi-spectral images for the winter wheat and rapeseed growth periods in the study area in southern China, and used the random forest (RF) method as the classifier to identify the optimal temporal window. The spatial and statistical accuracies of all the results were assessed by using ground survey data and local agricultural census data. The optimal temporal window for the mapping of winter wheat and rapeseed in the study area was obtained by identifying the best-performing set of results. In addition, the variable importance (VI) index was used to evaluate the importance of the different bands for crop mapping. The results of the spatial accuracy, statistical accuracy, and the VI showed that the combinations of images from the later stages of crop growth were more suitable for crop mapping.


1. Introduction

With the rapid development of cities and the expanding population, food security has increasingly become an issue of widespread concern. As an important aspect of land-use and land-cover mapping, crop mapping also plays an important role in watershed modeling, as well as crop modeling [1,2,3]. Gathering cropland information can help us to identify the crucial problems we are facing, such as the shrinkage of cropland area caused by urban sprawl, groundwater overdraft due to cropland irrigation, and land degradation due to over-reclamation [4]. Inaccurate crop mapping can hinder the ability of decision makers to correctly estimate the yields of crop areas, irrigation needs, and scheduling, thus affecting food security [5,6]. Establishing the distribution of crops is thus of vital importance for the implementation of agrarian policy actions [7].
In this context, remote sensing plays a key role in the monitoring of agricultural land use and management. Over the past few decades, several cropland and crop-type remote sensing products have been released, from a 1 km resolution to 30 m or even higher. Generally speaking, for large-scale areas, the mapping resolution of the products is coarse [8,9,10,11], as is the case for the GlobCover 2009 [12], Moderate Resolution Imaging Spectroradiometer (MODIS) Cropland [13], and MCD12Q1 [14] products, which makes it difficult to achieve precise analyses of individual regions. Furthermore, the estimated cropland areas of these products are often quite different from the official statistics, and the spatial positioning accuracy is poor, as it is limited by the mixed pixels [15,16]. For moderate-resolution crop mapping products, such as USDA’s Cropland Data Layer (CDL) [17] or the crop inventory datasets generated by Agriculture and Agri-Food Canada (AAFC), most of these products are generated by supervised classification approaches and rely on comprehensive survey and remote sensing data for training, e.g., Landsat, Sentinel-2, the China–Brazil Earth Resources Satellite program (CBERS), the Indian Remote Sensing (IRS) satellites, and the Disaster Monitoring Constellation (DMC) satellites [17].
Spectral and temporal features are taken as the major theoretical basis for distinguishing crops from other vegetation, and one crop type from another [17,18,19]. There are two main strategies used by the existing crop mapping methods. Single-image-based methods achieve crop mapping by distinguishing the spectral features of the target crop from the background [20]. They are simple to implement, but it is difficult to capture imagery at the best time for distinguishing different crops. In addition, these methods rely on moderate- to high-resolution satellite data with multiple bands, such as Landsat 8, Gaofen-1 (GF-1), and the Advanced Wide Field Sensor (AWiFS). However, these instruments have a long revisit period and are easily affected by cloud and rain, so it is difficult to obtain data from the best period for distinguishing crop types [19,20,21,22,23]. Furthermore, mapping methods based on single images have difficulty in identifying two or more crops with similar spectral profiles when the planting situation is complex [24]. Time-series-based methods make use of the temporal features of crops, and are thus widely used for crop mapping. The different spectral-temporal characteristics of crops in different periods are used as a basis for crop discrimination, and multi-temporal images are of course better than single-temporal images at capturing the spectral-temporal transformations during crop growth. These methods rely on satellites with a high temporal resolution, such as MODIS or the Advanced Very High Resolution Radiometer (AVHRR), but the spatial resolution of these sensors is coarse (generally no finer than 250 m), which is not sufficient to accurately map crop categories at the scale of small farmland parcels [25,26].
Previous works have indicated that time-series data are critical to crop mapping [27,28,29,30,31]; however, moderate-resolution multi-spectral instruments have a long revisit interval. Furthermore, cloud and fog greatly limit the usefulness of optical imagery [32], especially in areas such as our study area of Zhongxiang City in south-central China, which lies in the Asian monsoon region [33] and experiences frequent thick cloud and precipitation in winter. To date, there have been few in-depth investigations of crop mapping in such areas where the imagery is limited.
The purpose of this study was to solve the problem of crop mapping in areas with restricted moderate-resolution cloud-free imagery data. This was done by testing the importance of images acquired in different temporal windows. This work will help to establish which temporal window is most important for crop mapping. As a result, this study could help to reduce the demand for regular-interval long-time-series image data. Furthermore, the results obtained in this paper could help to reduce the redundancy of the remote sensing image data used in the crop mapping process. The major contributions of this study are:
(1)
The variable importance (VI) is calculated from the random forest (RF) framework, using the mean decrease accuracy (MDA) method, to assess the importance of the spectral-temporal features at different image acquisition times.
(2)
An evaluation method is proposed that comprehensively considers both the spatial accuracy and the statistical accuracy, ensuring that the final mapping results are of practical application value.
(3)
Zhongxiang City, which is representative of the typical cloudy and rainy winter weather of south-central China, is taken as the study area, providing a reference for the selection of the optimal temporal window for crop mapping under conditions of limited remote sensing imagery.
The rest of this paper is organized as follows. In Section 2, we describe the study area conditions, plus the remote sensing imagery and reference data used in the study. In Section 3, we introduce the RF classification model, the MDA method based on RF, the combinations of all the acquired Sentinel-2 images, and the criteria for evaluating the performance of each combination. In Section 4, we describe the experiments conducted with all the possible combinations and provide a ranking list of all the results. We then compare the top-10 results with every band’s VI obtained by MDA, as well as the visual effect of each mapping result, to analyze the experimental results. Finally, a summary and conclusion are provided in Section 5.

2. Study Area and Datasets

2.1. Study Area

Zhongxiang City is located in the south-central part of China, between 30°42′–31°36′N and 112°07′–113°01′E, with a total area of 4400 km2, as shown in Figure 1. Zhongxiang City is located in Jianghan Plain, which is one of the main winter wheat and rapeseed producing areas in China, due to the reliable water supply and rich soil. The area features a subtropical monsoon climate, with overcast and rainy weather in winter and early spring, which is similar to the climatic characteristics of most parts of south-central China. The Han River flows through the center of Zhongxiang City, forming a plain along the river. The overall terrain of Zhongxiang City is hilly and mountainous in the east and west, and flat in the north and south. The main cropland area is located in the central zone. Winter wheat and rapeseed are the main crop types in winter in the study area. Winter wheat is sown in late October and harvested in mid-May; rapeseed is usually sown and harvested some days earlier than winter wheat. The farmland here does not feature large-scale mechanized farms, but instead relatively fragmented fields.

2.2. Reference Data

The reference data were obtained from a ground survey undertaken in January 2018, as shown in Figure 2. Investigators recorded the Global Positioning System (GPS) coordinates of the winter wheat and rapeseed sample points, and in-situ pictures were taken at the same time. Google Earth satellite images were used to delineate homogeneous regions centered around the sampling points. Before the sampling, we first estimated the crop planting distribution in the experimental area according to the planting distribution of winter wheat and rapeseed in previous years (http://fgw.hubei.gov.cn/), and we then sampled randomly in the main farmland area. For the mountainous area, the sample distribution was relatively sparse, because there are only a few small areas of cultivated land in this area, compared with the plain area.
As shown in Table 1, a total of 101 winter wheat and rapeseed sample areas were selected, covering 4500 pixels. For the winter wheat and rapeseed, 3314 pixels were used as training samples and 1186 pixels were used as validation samples. Samples of the "others" class were also selected, covering forest, impervious surface, water, idle cropland, and other land covers; for this class, 4883 pixels were used as training samples and 1890 pixels were used as validation samples.
A comparison between the mapping results and the recent agricultural statistics data for Zhongxiang City was undertaken, which provided a valuable addition to the point-based validation. The latest available statistical dataset was for the year 2017. The crop planting areas obtained from the agricultural statistical data are listed in Table 2.

2.3. Sentinel-2 Data Collection and Preprocessing

All the cloud-free Sentinel-2 Multispectral Instrument (MSI) images acquired during the winter wheat and rapeseed growing season (late October to late May) were downloaded from the European Space Agency (ESA) website (https://scihub.copernicus.eu/dhus/#/home). As the imagery source of this study, the 9-band 20 m resolution imagery represents the main spectral information of the MSI instrument, including the green, blue, red, red edge 1–3, near-infrared (NIR), short-wave infrared (SWIR) 1, and SWIR 2 bands [34]. The cloud-free image collection was implemented in two steps: firstly, scenes with a cloud cover percentage of less than 5% were selected, and an additional visual inspection of the selected scenes was then carried out, the principle being that there should be no cloud cover over the potential farmland areas. Obtaining a large number of cloud-free images for this study area over a long time period was very difficult, as many of the images for south-central China are contaminated by cloud. In fact, only six cloud-free Sentinel-2 images could be acquired for the study area, as shown in Figure 3, due to the heavy rain and snow throughout the winter of 2017 and the early spring of 2018.
The Sentinel-2 data were processed from top-of-atmosphere (TOA) reflectance (Level-1C) to bottom-of-atmosphere (BOA) reflectance (Level-2A) using the Sen2Cor atmospheric correction tool, which was developed by Telespazio VEGA Deutschland GmbH on behalf of ESA [35].
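For readers reproducing this preprocessing step, the atmospheric correction can be batch-processed by calling Sen2Cor from Python. The following is a minimal sketch, assuming Sen2Cor is installed and its L2A_Process command is available on the system PATH; the directory path is a hypothetical placeholder, not the layout used in this study.

```python
# Sketch: batch Level-1C -> Level-2A atmospheric correction with Sen2Cor.
# Assumes the Sen2Cor "L2A_Process" command is installed and on the PATH;
# the input directory below is a hypothetical placeholder.
import subprocess
from pathlib import Path

L1C_DIR = Path("data/sentinel2_l1c")  # unzipped .SAFE Level-1C products

for safe_product in sorted(L1C_DIR.glob("S2*_MSIL1C_*.SAFE")):
    # Sen2Cor writes the Level-2A product alongside the input by default.
    subprocess.run(["L2A_Process", str(safe_product)], check=True)
```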

3. Method

In this section, the method of selecting the optimal temporal window for crop mapping is described. In this study, we determined the optimal temporal window by considering all possible combinations of temporal windows. This approach differs from the approach commonly followed in previous studies, where the images of different acquisition dates are selected based on the hypothesis that the peak crop growth period can be considered as the period with the most spectral-temporal discriminative information [29,36,37,38]. First of all, the preprocessing of the cloud-free images was completed and the image scenes were numbered. The images of the different periods were then combined with each other, and the RF model was trained for each combination. In the meantime, we combined all the images together to calculate the VI for all the bands, so that the relative importance of the bands of different periods could be reflected. The spatial accuracy and the statistical accuracy of each crop mapping result were then calculated. The ranking of the performance of each combination was then established according to the accuracy. Finally, the optimal temporal window for the crop mapping was identified, according to the combinations with a good performance and the VI ranking result. The study workflow is summarized in Figure 4.

3.1. Random Forest Classifier

RF has been widely used in previous crop mapping studies because of its robustness [28,39,40,41]. The RF classifier is an ensemble learning algorithm that uses a set of classification and regression trees (CARTs) to make a prediction based on a bagging method [42]. It is not sensitive to the selection of parameters (compared to support vector machines (SVMs) and artificial neural networks) and, in most cases, the default parameters can achieve a desirable performance [43]. Moreover, in the RF framework, the most widely used importance score for a given variable is the MDA, which is the mean decrease in the accuracy of the trees in the forest when the observed values of this variable are randomly permuted in the out-of-bag (OOB) samples, i.e., the samples not selected for training each tree.
The RF adopts the bootstrap approach when selecting the subset of training samples. About two-thirds of the samples are used to generate the trees within the forest, and the remaining one-third, which are called the OOB samples, are used to estimate the generalization error [42]. The number of trees (ntree) is manually defined. Each tree is independently produced without any pruning, and each node is split using a manually defined number of features (mtry), which are selected at random [43]. The generated trees have high variance and low bias because of the random selection of the training subset and features. The classification result is obtained by majority voting with the results of each classifier, and the result with the highest number of votes is the final classification result.
Two parameters need to be tuned: the number of generated trees (ntree) and the number of features considered when splitting a tree node (mtry). According to RF theory, the other parameters, including the maximum depth of the trees, the minimum number of samples required to split an internal node, and the weights of the samples of the different categories in the dataset, do not obviously affect the accuracy of the final results of the classifier [43,44]. It follows from the "strong law of large numbers" [42] that, as ntree increases, the estimation error drops, and there is no overfitting problem. Although more trees generally give better classification results, because of memory limitations and computational cost, an ntree of more than 100 is usually acceptable [45]. In this study, ntree was set to 500. For most studies, RF is insensitive to the value of ntree, and once the error has converged, ntree only has a slight effect on it [41]. The smaller the value of mtry, the smaller the variance of the whole forest, but the bias of a single tree increases. The value of mtry is usually empirically set to the square root of the number of input features [30,46].
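As a concrete illustration, the following is a minimal scikit-learn sketch of the RF configuration described above (ntree = 500, mtry equal to the square root of the number of input features); the training arrays are hypothetical placeholders rather than the actual study data.

```python
# Minimal sketch of the RF configuration described above (scikit-learn).
# X_train has one column per spectral-temporal feature (9 bands x N dates);
# the arrays here are random placeholders, not the study data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X_train = rng.rand(200, 54)          # e.g., 6 dates x 9 bands = 54 features
y_train = rng.randint(0, 3, 200)     # 0 = winter wheat, 1 = rapeseed, 2 = others

rf = RandomForestClassifier(
    n_estimators=500,      # ntree = 500, as in this study
    max_features="sqrt",   # mtry = sqrt(number of input features)
    oob_score=True,        # keep OOB samples for error estimation
    n_jobs=-1,
    random_state=0,
)
rf.fit(X_train, y_train)
print("OOB accuracy estimate:", rf.oob_score_)
```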
We calculated the VI from the RF framework with the MDA method [43,47], which directly measures the impact of each feature on the accuracy of the model. The general idea is to permute the values of each feature and measure how much the permutation decreases the accuracy of the model. For each feature variable $X_j$ (the different bands of the multi-spectral image data), the variable importance $VI(X_j)$ can be calculated by the following equation:

$$VI(X_j) = \frac{1}{n_{tree}} \sum_{t} \left( errOOB_t^{\,j} - errOOB_t \right)$$

where $errOOB_t$ is the error (mean square error, MSE) of the $t$-th tree on its out-of-bag sample set $OOB_t$, $errOOB_t^{\,j}$ denotes the error of the same tree on $OOB_t$ after the values of $X_j$ have been randomly permuted, and the sum runs over all trees $t$ in the forest.
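The following sketch illustrates the idea behind the MDA in a simplified, model-level form: instead of the per-tree OOB errors in the equation above, each feature column of a held-out set is permuted and the resulting drop in overall accuracy is averaged. The rf, X_val, and y_val names are hypothetical placeholders (a fitted classifier and a held-out NumPy feature matrix and label vector).

```python
# Sketch: a model-level approximation of the MDA variable importance.
# Instead of per-tree OOB errors as in the equation above, this permutes each
# feature column on a held-out set and records the drop in accuracy.
import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance_mda(rf, X_val, y_val, n_repeats=10, seed=0):
    rng = np.random.RandomState(seed)
    baseline = accuracy_score(y_val, rf.predict(X_val))
    importances = np.zeros(X_val.shape[1])
    for j in range(X_val.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X_val.copy()
            rng.shuffle(X_perm[:, j])        # permute feature j only
            drops.append(baseline - accuracy_score(y_val, rf.predict(X_perm)))
        importances[j] = np.mean(drops)      # mean decrease in accuracy
    return importances
```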

3.2. Selecting the Optimal Temporal Window

We attempted to obtain acceptable results with as little satellite image data as possible, given that the whole study area is commonly affected by precipitation, cloud, and fog in late winter and early spring. The tradeoff between the number of acquisition dates and the classification performance for winter wheat and rapeseed was the optimization goal.
All the images were grouped into all possible combinations, from one image to six images. The quantity of possible temporal combinations (n Choose m) can be calculated using the following formula:
$$C_n^m = \frac{A_n^m}{m!} = \frac{n!}{m!\,(n-m)!}$$

where $n$ is the number of available images (6 in this study) and $m$ is the number of images in a combination. The total number of multi-temporal combinations $C_{All}$ is then given by:

$$C_{All} = \sum_{m=1}^{n} C_n^m$$

which, for $n = 6$, gives $2^6 - 1 = 63$ combinations.
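As an illustration, the 63 combinations can be enumerated with the image date marks A–F defined in Table 3; this is a minimal sketch of the enumeration, not the exact implementation used in the study.

```python
# Sketch: enumerate all 63 non-empty combinations of the six image dates
# (marks A-F as in Table 3) using itertools.
from itertools import combinations

marks = "ABCDEF"
all_combos = [
    "".join(c)
    for m in range(1, len(marks) + 1)
    for c in combinations(marks, m)
]
print(len(all_combos))      # 63
print(all_combos[:8])       # ['A', 'B', 'C', 'D', 'E', 'F', 'AB', 'AC']
```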
All the combinations were computed, and the results were compared with both the ground survey data and the agricultural statistical data for Zhongxiang City. Since all the available images are considered equally, the difference in performance when utilizing early and late period images for mapping could be used as a basis for determining the most suitable temporal windows for winter wheat and rapeseed mapping in other regions of south-central China.

3.3. Validation

3.3.1. Sample-Based Accuracy Assessment

The confusion matrix based on the ground survey reference data was computed to estimate the accuracy of the classification results. The overall accuracy (OA) and Kappa coefficient (Kappa) calculated from the confusion matrix are the two indicators that quantitatively describe the quality of the classification results. This sample-based accuracy reflects the mapping results and the rationality of the spatial distribution of the crops, but it cannot be directly translated into an assessment of the area estimation. Therefore, it was also necessary to compare the classification maps with the local agricultural statistical data.
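As an illustration, the confusion matrix, OA, and Kappa can be computed with scikit-learn as sketched below; the label vectors shown are hypothetical placeholders for the validation reference labels and the classified labels.

```python
# Sketch: sample-based accuracy assessment with scikit-learn.
# y_val are the ground survey reference labels, y_pred the classified labels;
# both are hypothetical placeholders here.
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

y_val = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 0, 1, 2, 2, 2, 2, 1]

cm = confusion_matrix(y_val, y_pred)
oa = accuracy_score(y_val, y_pred)          # overall accuracy
kappa = cohen_kappa_score(y_val, y_pred)    # Kappa coefficient
print(cm, oa, kappa, sep="\n")
```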

3.3.2. Statistics-Based Area Accuracy Assessment

The accurate assessment of both the spatial distribution and the statistics of the cropland areas was a goal of this study. Because of the difficulty of obtaining ground-truth reference data, depending only on the confusion matrix based on the ground survey reference data may not completely and objectively reflect the error for the study area. We therefore counted the number of pixels classified as winter wheat and rapeseed, respectively, and calculated the residuals between the corresponding areas and the agricultural statistics data for Zhongxiang City. In order to summarize the error between the mapped areas and the statistical areas in a single index that accounts for the relative proportions of the two crop types, we calculated the weighted residual error ratio WRes as follows:

$$\mathrm{WRes} = \alpha \, Res_w + \beta \, Res_r$$

where $Res_w$ is the residual error ratio between the area of winter wheat in the agricultural statistics data and the area calculated from the winter wheat classification results, and $\alpha$ is the proportion of the winter wheat planting area relative to the sum of the two classes; $Res_r$ and $\beta$ are defined in the same way for rapeseed.
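The sketch below illustrates how WRes can be computed from the classified pixel counts and the census areas in Table 2. It assumes a 20 m × 20 m pixel (0.04 ha), that the residual error ratios are taken as absolute relative differences, and that the weights α and β are the census-area proportions of the two crops; the pixel counts shown are hypothetical.

```python
# Sketch: statistics-based area assessment (weighted residual error ratio).
# Pixel counts are hypothetical; census areas are taken from Table 2 (2017).
PIXEL_HA = 0.04  # one 20 m x 20 m Sentinel-2 pixel = 0.04 ha

def wres(n_wheat_px, n_rape_px, wheat_census_ha=37184.00, rape_census_ha=29495.26):
    wheat_ha = n_wheat_px * PIXEL_HA
    rape_ha = n_rape_px * PIXEL_HA
    res_w = abs(wheat_ha - wheat_census_ha) / wheat_census_ha   # residual error ratio
    res_r = abs(rape_ha - rape_census_ha) / rape_census_ha
    total = wheat_census_ha + rape_census_ha
    alpha, beta = wheat_census_ha / total, rape_census_ha / total
    return alpha * res_w + beta * res_r                         # weighted residual error

print(wres(n_wheat_px=930_000, n_rape_px=740_000))
```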

4. Results and Analysis

4.1. Classification Results

The RF classifier was implemented in the scikit-learn v0.20.0 Python package. All the results and analyses were obtained on a desktop computer with four CPU cores of 3.10 GHz and 8 GB of memory. For the goal of selecting the optimal combination, all 63 combinations were processed by RF individually. To facilitate the marking and combination, all the imagery data were represented by different letters, as shown in Table 3. For all the results, we calculated the OA, Kappa coefficient, and the ratio of residual error between the crop area and the agricultural statistics data.
As shown in Figure 5, most of the crop mapping results reflect the regional distribution of winter wheat and rapeseed cultivation in Zhongxiang City, with the large farmland areas concentrated near the river in the middle of the study area, where the land is flat and fertile. The red blocks represent the winter wheat planting areas, most of which are concentrated on land suitable for large-scale irrigation and mechanization. The green blocks represent the rapeseed planting areas. Rapeseed is the other main winter crop in this region, and its spatial distribution is similar to that of winter wheat. However, rapeseed is also planted in fragmented fields in some of the hilly areas, and because it can be used as both a vegetable and an oil crop, it represents an extra income source for the farmers.
As Figure 6 shows, the boundaries between the two crop types are well constrained, and the non-crop areas are also excluded. Furthermore, it is apparent that the harvest time of the rapeseed varies from region to region. For example, the rapeseed in Figure 6a is characterized by a lighter green, whereas in the upper-right corner of Figure 6c the rapeseed appears pale yellow; this is caused by the different harvest times of the two regions, so that Figure 6c contains idle cropland after harvesting.
The five assessment indicators (OA, Kappa, Res_w, Res_r, WRes) were calculated, and the results were ranked based on the WRes index. The weighted residual error ratio was used as the ranking criterion because this index reflects the statistical accuracy of the crop mapping result more comprehensively than a single indicator. The assessment indicator results for all the temporal combinations are shown in Figure 7.
Figure 8a,b show the averages of the different assessment indicators for the different numbers of temporal images. Figure 8c shows the best performance of the different combinations. As shown in Figure 8, all the assessment indices improve, to a certain degree, as the number of temporal images increases. Nevertheless, the best performance reaches a relatively stable state after just three temporal images. As more temporal images are added, the ground surface conditions vary more and more, so that the uncertainty increases and the classification performance reaches a plateau.
The VI was also calculated, as shown in Figure 9, in which the VI index represents the importance of each band for the classification results. The VI shows that the bands of the first temporal phase (period A) obtain good scores. However, period A alone only achieves the 11th-best performance under the WRes indicator when compared with the other multi-temporal combinations, which can be explained by the fact that the rapeseed was planted earlier than the winter wheat, so there were few vegetation characteristics within the winter wheat regions on 30 October. Thus, in terms of a single temporal image, the most distinct discrimination of winter wheat and rapeseed may not occur during the peak growth stage, so the individual circumstances and planting schedules of the study area should be considered.
As the crops grow, the importance of the NIR and SWIR bands gradually surpasses that of the visible bands. This is because, with the growth of the crops, the change patterns of leaf water content in the different crops are different, and the NIR and SWIR bands are very sensitive to the crop water content [48]. This change of ranking begins to emerge in the combinations containing later period data. That is, for the multi-temporal combinations, the performance gets better as the later period images (periods D, E, F) are added.

4.2. Optimal Temporal Window Analysis

Differing from most of the previous studies focusing on feature selection optimization for time-series data, this study mainly focused on the optimization of the selection of the temporal window.
The results demonstrate that more data does not always imply a better performance. As can be seen in Table 4, the best-performing combination is DEF, which means that the combination of spectral-temporal features in the middle and late growing season contains more discriminative information for winter wheat and rapeseed. Next is the six-period ABCDEF combination, as it does make sense that images covering the entire growth cycle can provide sufficient discriminative features. However, in areas with cloudy and rainy climates, such as our study area, cloud-free medium-resolution images covering the entire growth cycle are almost impossible to obtain. The two-period combinations appear next in the rankings, characterized by the inclusion of images from the later growth stages, and some of these combinations perform even better than the four-period images. This phenomenon can be explained as follows. The addition of the early data helps to improve the rapeseed statistical performance, while the addition of the image data of a later stage of growth helps to improve the winter wheat statistical performance. If both early and late data are considered, this tends to offset the performance for winter wheat, which is because rapeseed is sown earlier than winter wheat, and the leaf area of rapeseed is larger than that of winter wheat in the early growth stages, but the vegetative characteristics of winter wheat are not obvious enough in this period.
In terms of the OA and Kappa, in the top 10 ranking, fewer temporal images reduces the spatial accuracy of the mapping result. However, it is worth noting that the OA and Kappa do not change significantly, as these two indicators tend to be “saturated”. The explanation for these findings may rest with the lack of representativeness and adequacy of the samples we collected in the field, and the existence of spatial heterogeneity. This also suggests that statistical accuracy is as important as spatial distribution accuracy for crop mapping [49].
As Figure 8a and Figure 7d show, the residual error ratio of winter wheat does not decrease directly with the increase of the number of multi-temporal images, but first increases and then decreases during the process, so the statistical accuracy of winter wheat seems to be less affected by the increase of the image data. The reason behind this phenomenon is, as mentioned above, that the vegetative characteristics of winter wheat are weak in the remote sensing images acquired in the early growth stage, and the incorporation of these images makes the statistical accuracy for winter wheat fluctuate. As shown in Figure 7d, the images acquired in the middle and late growing stages achieve a better winter wheat statistical accuracy; however, the statistical accuracy fluctuates with the addition of periods A, B, and C. The change of the spectral-temporal characteristics in the early crop growth period is not as great as that in the peak growth period [50]. The frequency of the image mark occurrence in the top 10 ranking was also counted. From Table 5, it can be seen that the images acquired in the middle and late growing stages occur more often. This observation can explain why the combinations containing middle and late stage data perform better. That is, the addition of more middle and late stage image data makes it easier for the classifier to capture the changes between multiple categories.

5. Conclusions

The results of this study revealed that optimizing the temporal window selection for multi-temporal Sentinel-2 images can help us to achieve satisfactory crop-type mapping results. Through the analysis of the 10 best-performing combinations of temporal images, the conclusion can be made that the performance of crop mapping can be improved with the use of data from the middle and later stages of the growth cycle.
Through the comparison with ground reference data and local agricultural statistics data, the spatial accuracy and statistical accuracy both indicated a good performance (with the error less than 6%). The results also indicated that the extraction accuracy for the rapeseed areas was higher than that for winter wheat, as a whole. The main reasons for this phenomenon are the spatial heterogeneity, the differences in planting management models between regions, and the lack of ground samples.
The results of this study will provide important guidance for crop mapping in areas lacking medium-resolution cloud-free data. The optimization of the multi-temporal image data selection can help us to establish which period of the crop growth cycle is more important for crop mapping. As a priori knowledge, this information could reduce our requirement for regular time-series data throughout the growth cycle, which will be valuable for crop mapping in larger areas. In terms of the tradeoff between redundancy and performance, utilizing all the temporal images can result in a sub-optimal solution. In our future work, we will choose the specific periods of data we need to use in a larger area in south-central China, where the cloud-free data are also limited, under the guidance of the results of this study.

Author Contributions

S.M., C.L. and X.H. designed the research; S.M. performed the experiments and wrote the paper; Y.Z. and X.W. provided the ground survey data and advice on the preparation and revision of the work; S.H. coordinated the plan of research activities. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. 41771385 and the China Postdoctoral Foundation.

Acknowledgments

The authors are grateful to the Editor and reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Leh, M.D.K.; Sharpley, A.N.; Singh, G.; Matlock, M.D. Assessing the impact of the MRBI program in a data limited Arkansas watershed using the SWAT model. Agric. Water Manag. 2018, 202, 202–219. [Google Scholar] [CrossRef]
  2. Singh, G.; Saraswat, D.; Pai, N.; Hancock, B. LUU CHECKER: A Web-based Tool to Incorporate Emerging LUs in the SWAT Model. Appl. Eng. Agric. 2019, 35, 723–731. [Google Scholar] [CrossRef]
  3. Jia, Y.; Ge, Y.; Chen, Y.; Li, S.; Heuvelink, G.; Ling, F. Super-Resolution Land Cover Mapping Based on the Convolutional Neural Network. Remote Sens. 2019, 11, 1815. [Google Scholar] [CrossRef] [Green Version]
  4. Thenkabail, P.S. Global Croplands and their Importance for Water and Food Security in the Twenty-first Century: Towards an Ever Green Revolution that Combines a Second Green Revolution with a Blue Revolution. Remote Sens. 2010, 2, 2305–2312. [Google Scholar] [CrossRef] [Green Version]
  5. Nasrallah, A.; Baghdadi, N.; Mhawej, M.; Faour, G.; Darwish, T.; Belhouchette, H.; Darwich, S. A Novel Approach for Mapping Wheat Areas Using High Resolution Sentinel-2 Images. Sensors 2018, 18, 2089. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Skakun, S.; Vermote, E.; Franch, B.; Roger, J.-C.; Kussul, N.; Ju, J.; Masek, J. Winter Wheat Yield Assessment from Landsat 8 and Sentinel-2 Data: Incorporating Surface Reflectance, Through Phenological Fitting, into Regression Yield Models. Remote Sens. 2019, 11, 1768. [Google Scholar] [CrossRef] [Green Version]
  7. Peña-Barragán, J.M.; Ngugi, M.K.; Plant, R.E.; Six, J. Object-based crop identification using multiple vegetation indices, textural features and crop phenology. Remote Sens. Environ. 2011, 115, 1301–1316. [Google Scholar] [CrossRef]
  8. You, L.; Wood, S.; Wood-Sichra, U.; Wu, W. Generating global crop distribution maps: From census to grid. Agric. Syst. 2014, 127, 53–60. [Google Scholar] [CrossRef] [Green Version]
  9. Bontemps, S.; Defourny, P.; Bogaert, E.V.; Arino, O.; Kalogirou, V.; Perez, J.R. GLOBCOVER 2009-Products description and validation report. Foro Mundial De La Salud 2010, 17, 285–287. [Google Scholar]
  10. Bartholomé, E.; Belward, A.S. GLC2000: A new approach to global land cover mapping from Earth observation data. Int. J. Remote Sens. 2005, 26, 1959–1977. [Google Scholar] [CrossRef]
  11. Friedl, M.A.; McIver, D.K.; Hodges, J.C.F.; Zhang, X.Y.; Muchoney, D.; Strahler, A.H.; Woodcock, C.E.; Gopal, S.; Schneider, A.; Cooper, A.; et al. Global land cover mapping from MODIS: Algorithms and early results. Remote Sens. Environ. 2002, 83, 287–302. [Google Scholar] [CrossRef]
  12. Arino, O.; Ramos, J.; Kalogirou, V.; Defourny, P.; Frédéric, A. Globcover 2009. In Proceedings of the Earth Observation for Land-Atmosphere Interaction Science, Frascati, Italy, 1 January 2011; p. 48. [Google Scholar]
  13. Kyle, P.; Hansen, M.; Becker-Reshef, I.; Potapov, P.; Justice, C. Estimating Global Cropland Extent with Multi-year MODIS Data. Remote Sens. 2010, 2, 1844–1863. [Google Scholar] [CrossRef] [Green Version]
  14. Friedl, M.A.; Sulla-Menashe, D.; Tan, B.; Schneider, A.; Ramankutty, N.; Sibley, A.; Huang, X. MODIS Collection 5 global land cover: Algorithm refinements and characterization of new datasets. Remote Sens. Environ. 2010, 114, 168–182. [Google Scholar] [CrossRef]
  15. Zhong, Y.; Luo, C.; Hu, X.; Wei, L.; Wang, X.; Jin, S. Cropland Product Fusion Method Based on the Overall Consistency Difference: A Case Study of China. Remote Sens. 2019, 11, 1065. [Google Scholar] [CrossRef] [Green Version]
  16. Lu, M.; Wu, W.; Zhang, L.; Liao, A.; Peng, S.; Tang, H. A comparative analysis of five global cropland datasets in China. Sci. China Earth Sci. 2016, 59, 2307–2317. [Google Scholar] [CrossRef]
  17. Boryan, C.; Yang, Z.; Mueller, R.; Craig, M. Monitoring US agriculture: The US Department of Agriculture, National Agricultural Statistics Service, Cropland Data Layer Program. Geocarto Int. 2011, 26, 341–358. [Google Scholar] [CrossRef]
  18. Osman, J.; Inglada, J.; Dejoux, J.; Hagolle, O.; Dedieu, G. Crop mapping by supervised classification of high resolution optical image time series using prior knowledge about crop rotation and topography. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium-IGARSS, Melbourne, Australia, 21–26 July 2013; pp. 2832–2835. [Google Scholar]
  19. Foerster, S.; Kaden, K.; Foerster, M.; Itzerott, S. Crop type mapping using spectral–temporal profiles and phenological information. Comput. Electron. Agric. 2012, 89, 30–40. [Google Scholar] [CrossRef] [Green Version]
  20. Zheng, B.; Myint, S.W.; Thenkabail, P.S.; Aggarwal, R.M. A support vector machine to identify irrigated crop types using time-series Landsat NDVI data. Int. J. Appl. Earth Obs. Geoinf. 2015, 34, 103–112. [Google Scholar] [CrossRef]
  21. Zhong, L.; Hu, L.; Yu, L.; Gong, P.; Biging, G.S. Automated mapping of soybean and corn using phenology. ISPRS J. Photogramm. Remote Sens. 2016, 119, 151–164. [Google Scholar] [CrossRef] [Green Version]
  22. Geerken, R.A. An algorithm to classify and monitor seasonal variations in vegetation phenologies and their inter-annual change. ISPRS J. Photogramm. Remote Sens. 2009, 64, 422–431. [Google Scholar] [CrossRef]
  23. Friedl, M.A.; Brodley, C.E.; Strahler, A.H. Maximizing land cover classification accuracies produced by decision trees at continental to global scales. IEEE Trans. Geosci. Remote Sens. 1999, 37, 969–977. [Google Scholar] [CrossRef]
  24. Maus, V.; Camara, G.; Cartaxo, R.; Sanchez, A.; Ramos, F.M.; de Queiroz, G.R. A Time-Weighted Dynamic Time Warping Method for Land-Use and Land-Cover Mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3729–3739. [Google Scholar] [CrossRef]
  25. Löw, F.; Michel, U.; Dech, S.; Conrad, C. Impact of feature selection on the accuracy and spatial uncertainty of per-field crop classification using Support Vector Machines. ISPRS J. Photogramm. Remote Sens. 2013, 85, 102–119. [Google Scholar] [CrossRef]
  26. Tyc, G.; Tulip, J.; Schulten, D.; Krischke, M.; Oxfort, M. The RapidEye mission design. Acta Astronaut. 2005, 56, 213–219. [Google Scholar] [CrossRef]
  27. Petitjean, F.; Inglada, J.; Gancarski, P. Satellite Image Time Series Analysis Under Time Warping. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3081–3095. [Google Scholar] [CrossRef]
  28. Belgiu, M.; Csillik, O. Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sens. Environ. 2018, 204, 509–523. [Google Scholar] [CrossRef]
  29. Cai, Y.; Guan, K.; Peng, J.; Wang, S.; Seifert, C.; Wardlow, B.; Li, Z. A high-performance and in-season classification system of field-level crop types using time-series Landsat data and a machine learning approach. Remote Sens. Environ. 2018, 210, 35–47. [Google Scholar] [CrossRef]
  30. Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443. [Google Scholar] [CrossRef]
  31. Hunt, M.L.; Blackburn, G.A.; Carrasco, L.; Redhead, J.W.; Rowland, C.S. High resolution wheat yield mapping using Sentinel-2. Remote Sens. Environ. 2019, 233. [Google Scholar] [CrossRef]
  32. Eberhardt, D.I.; Schultz, B.; Rizzi, R.; Sanches, D.I.; Formaggio, R.A.; Atzberger, C.; Mello, P.M.; Immitzer, M.; Trabaquini, K.; Foschiera, W.; et al. Cloud Cover Assessment for Operational Crop Monitoring Systems in Tropical Areas. Remote Sens. 2016, 8, 219. [Google Scholar] [CrossRef] [Green Version]
  33. Murakami, T.; Ogawa, S.; Ishitsuka, N.; Kumagai, K.; Saito, G. Crop discrimination with multitemporal SPOT/HRV data in the Saga Plains, Japan. Int. J. Remote Sens. 2001, 22, 1335–1348. [Google Scholar] [CrossRef]
  34. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  35. Main-Knorn, M.; Pflug, B.; Louis, J.; Debaecker, V.; Müller-Wilm, U.; Gascon, F. Sen2Cor for Sentinel-2. In Proceedings of the SPIE 2017, 10427, Image and Signal Processing for Remote Sensing XXIII, Bellingham, WA, USA, 11–13 September 2018. [Google Scholar] [CrossRef] [Green Version]
  36. Van Tricht, K.; Gobin, A.; Gilliams, S.; Piccard, I. Synergistic Use of Radar Sentinel-1 and Optical Sentinel-2 Imagery for Crop Mapping: A Case Study for Belgium. Remote Sens. 2018, 10, 1642. [Google Scholar] [CrossRef] [Green Version]
  37. Hao, P.; Wang, L.; Niu, Z.; Aablikim, A.; Huang, N.; Xu, S.; Chen, F. The Potential of Time Series Merged from Landsat-5 TM and HJ-1 CCD for Crop Classification: A Case Study for Bole and Manas Counties in Xinjiang, China. Remote Sens. 2014, 6, 7610–7631. [Google Scholar] [CrossRef] [Green Version]
  38. Van Niel, T.G.; McVicar, T.R. Determining temporal windows for crop discrimination with remote sensing: A case study in south-eastern Australia. Comput. Electron. Agric. 2004, 45, 91–108. [Google Scholar] [CrossRef]
  39. Griffiths, P.; Nendel, C.; Hostert, P. Intra-annual reflectance composites from Sentinel-2 and Landsat for national-scale crop and land cover mapping. Remote Sens. Environ. 2019, 220, 135–151. [Google Scholar] [CrossRef]
  40. Hao, P.; Wu, M.; Niu, Z.; Wang, L.; Zhan, Y. Estimation of different data compositions for early-season crop type classification. PeerJ 2018, 6, e4834. [Google Scholar] [CrossRef]
  41. Rodriguez-Galiano, V.F.; Chica-Olmo, M.; Abarca-Hernandez, F.; Atkinson, P.M.; Jeganathan, C. Random Forest classification of Mediterranean land cover using multi-seasonal imagery and multi-seasonal texture. Remote Sens. Environ. 2012, 121, 93–107. [Google Scholar] [CrossRef]
  42. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  43. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  44. Hapfelmeier, A.; Ulm, K. A new variable selection approach using Random Forests. Comput. Stat. Data Anal. 2013, 60, 50–69. [Google Scholar] [CrossRef]
  45. Guan, H.; Li, J.; Chapman, M.; Deng, F.; Ji, Z.; Yang, X. Integration of orthoimagery and lidar data for object-based urban thematic mapping using random forests. Int. J. Remote Sens. 2013, 34, 5166–5186. [Google Scholar] [CrossRef]
  46. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random Forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300. [Google Scholar] [CrossRef]
  47. Genuer, R.; Poggi, J.-M.; Tuleau-Malot, C. Variable selection using random forests. Pattern Recog. Lett. 2010, 31, 2225–2236. [Google Scholar] [CrossRef] [Green Version]
  48. Ghulam, A.; Li, Z.-L.; Qin, Q.; Yimit, H.; Wang, J. Estimating crop water stress with ETM+ NIR and SWIR data. Agric. For. Meteorol. 2008, 148, 1679–1695. [Google Scholar] [CrossRef]
  49. Hu, Q.; Sulla-Menashe, D.; Xu, B.; Yin, H.; Tang, H.; Yang, P.; Wu, W. A phenology-based spectral and temporal feature selection method for crop mapping from satellite time series. Int. J. Appl. Earth Obs. Geoinf. 2019, 80, 218–229. [Google Scholar] [CrossRef]
  50. Diao, C. Innovative pheno-network model in estimating crop phenological stages with satellite time series. ISPRS J. Photogramm. Remote Sens. 2019, 153, 96–109. [Google Scholar] [CrossRef]
Figure 1. Zhongxiang City satellite image from 8 April 2018, using a true-color composite of the blue, green, and red bands of Sentinel-2.
Figure 2. The ground survey polygon locations in the study area. The red polygons represent the winter wheat planting areas, and the green polygons represent the rapeseed planting areas. Images p1 and p2 are the Google Earth satellite images corresponding to the locations of the ground survey samples. The in-situ photographs taken during the ground survey in January 2018 are shown in (a1) and (a2) (rapeseed), and (b1) and (b2) (winter wheat).
Figure 3. The available cloud-free image data during the entire growth cycle of winter crops in Zhongxiang City. The images are false-color composites of the green, red, and near-infrared (NIR) bands.
Figure 4. The workflow of optimal temporal window selection.
Figure 5. Zhongxiang City winter wheat and rapeseed mapping results with different combinations of Sentinel-2 images. From left to right, top to bottom, are the results of the weighted residual error (WRes) rankings 1 to 20.
Figure 6. The winter wheat and rapeseed mapping details of the best-performing ranking result: (a,b) are a true-color image and the corresponding result map, as are (c,d). The true-color image was acquired on 8 April 2018. The green and red blocks represent the rapeseed and winter wheat, respectively.
Figure 7. The assessment indicator results for all the temporal combinations: (a) OA; (b) Kappa; (c) Res_r; (d) Res_w; (e) WRes. OA, overall accuracy; Kappa, Kappa coefficient.
Figure 8. (a) Average Res_w, Res_r, and WRes for all the combinations of temporal images; (b) average OA and Kappa for each temporal combination; (c) the best performance in WRes for each temporal combination.
Figure 9. VI derived from the RF classifier, with the MDA method and OOB data. The sequential sorting of the X-axis represents the spectral bands of each temporal Sentinel-2 image, i.e., 1-Blue, 2-Green, 3-Red, 4,5,6-Red-Edge, 7-NIR, 8-SWIR-1, 9-SWIR-2. MDA, mean decrease accuracy; OOB, out-of-bag samples; SWIR, short-wave infrared.
Table 1. Training and validation sample pixels for classification.

Class Name      Training    Validation
Winter wheat    1919        720
Rapeseed        1395        466
Others          4883        1890
Table 2. Crop planting areas obtained from the agricultural statistical data for Zhongxiang City.

Crop Type       2017            2016
Winter wheat    37,184.00 ha    36,184.00 ha
Rapeseed        29,495.26 ha    30,001.53 ha
Table 3. Date marks of the images.

Image Acquisition Date    Mark
30 October 2017           A
9 December 2017           B
24 December 2017          C
3 April 2018              D
8 April 2018              E
18 April 2018             F
Table 4. The assessment metrics for the top 10 combinations.

Ranking    Temporal Images    OA       Kappa    Res_w    Res_r    WRes
1          DEF                0.935    0.914    0.061    0.061    0.060
2          ABCDEF             0.949    0.933    0.121    0.013    0.072
3          ABDEF              0.943    0.929    0.108    0.047    0.081
4          CDF                0.942    0.923    0.147    0.037    0.097
5          BE                 0.935    0.915    0.176    0.002    0.097
6          DF                 0.934    0.914    0.108    0.087    0.098
7          CDEF               0.944    0.927    0.154    0.033    0.099
8          EF                 0.933    0.913    0.004    0.229    0.104
9          ABCF               0.951    0.935    0.144    0.058    0.105
10         AE                 0.928    0.906    0.175    0.025    0.108
Table 5. The frequency of the image marks in the top 10 ranking.

Image Acquisition Date Mark    Frequency of Occurrence
A                              3
B                              4
C                              4
D                              6
E                              7
F                              8
