Article

Feature Comparison and Optimization for 30-M Winter Wheat Mapping Based on Landsat-8 and Sentinel-2 Data Using Random Forest Algorithm

1 Key Laboratory of Digital Earth Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, No. 9 Dengzhuang South Road, Beijing 100094, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Hainan Key Laboratory of Earth Observation, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Sanya 572029, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(5), 535; https://doi.org/10.3390/rs11050535
Submission received: 22 November 2018 / Revised: 26 February 2019 / Accepted: 27 February 2019 / Published: 5 March 2019
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Abstract

Winter wheat cropland is one of the most important agricultural land-cover types affected by the global climate and human activity. Mapping 30-m winter wheat cropland can provide beneficial reference information that is necessary for understanding food security. To date, machine learning algorithms have become an effective tool for the rapid identification of winter wheat at regional scales. Algorithm implementation is based on constructing and selecting many features, which makes feature set optimization an important issue worthy of discussion. In this study, the accurate mapping of winter wheat at 30-m resolution was realized using Landsat-8 Operational Land Imager (OLI) and Sentinel-2 Multispectral Imager (MSI) data and a random forest algorithm. This paper also discusses the optimal combination of features suitable for cropland extraction. The results revealed that: (1) the random forest algorithm provided robust performance using multi-features (MFs), multi-feature subsets (MFSs), and multi-patterns (MPs) as input parameters. The highest accuracy (94%) for winter wheat extraction was achieved across three zones: pure farmland, urban mixed areas, and forest areas. (2) Spectral reflectance and the crop growth period were the most essential features for crop extraction. MFSs combining three to four feature types enabled the high-precision extraction of 30-m winter wheat plots. (3) The extraction accuracy of winter wheat in the three zones with different geographical environments was affected by certain dominant features, including spectral bands (B), spectral indices (S), and time-phase characteristics (D). Therefore, we can improve the winter wheat mapping accuracy of the three regional types by improving the spectral resolution, constructing effective spectral indices, and enriching vegetation information. The results of this paper can help effectively construct feature sets using the random forest algorithm, thus simplifying the feature construction workload and ensuring high-precision extraction results in future winter wheat mapping research.

1. Introduction

Land-use mapping is an important subject in the study of surface eco-physics, including vegetation, soil, buildings, water, and other surface elements. Among them, vegetation is the most sensitive to perceiving surface climate change and physical and chemical states. Moreover, vegetation is most closely related to global and regional food security, planting intensity, crop yields, and other Sustainable Development Goals. Future climate change has increased the likelihood of severe, pervasive, and irreversible consequences for human civilization and agriculture [1]. In this context, the rapid and accurate mapping of crops has gradually become an important means for monitoring and evaluating agricultural development.
Presently, regional and global remote sensing data products for crop and land-cover classification are becoming increasingly abundant (Table 1). Low-resolution data are provided by the Land Use and Global Environment Laboratory (LUGE) of McGill University, produced by the Center for Sustainable Development and Global Environment (SAGE) at the Nelson Institute for Environmental Studies at the University of Wisconsin. The dataset includes the global distribution of three major crops (maize, wheat, and rice). It has a spatial resolution of 5′ × 5′ (approximately 10 km) and is also resampled to a 0.5° × 0.5° grid (http://www.earthstat.org/). There are also three sets of global irrigation area maps. The first is a satellite sensor-based Global Irrigated Area Map that was released by the International Water Management Institute (IWMI) and includes a 10-km irrigation and farmland global land-use/cover (LULC) map (http://waterdata.iwmi.org/Applications/GIAM2000/). The second is a regional low-resolution 500-m irrigation area map for South Asia (http://waterdata.iwmi.org/). Lastly, the dataset contains regional medium-resolution 30-m irrigation maps of the Syr Darya River Basin and the Krishna River Basin in India (http://www.iwmi.cgiar.org/). Other related products include a United States Geological Survey (USGS) interactive map of 30-m global farmland based on Landsat Global Food Security-Support Analysis Data (GFSAD30). The interactive map also includes a 30-m resolution irrigated farmland and rain-fed farmland distribution layer for Australia [2], South Asia, Iran, Afghanistan [3], and other countries. A 250-m resolution crop distribution layer is also available for the United States, Africa [4], Australia [5], South Asia, and other regions, and has also been applied in regional studies [6] (https://geography.wr.usgs.gov/science/croplands/index.html). These mapping products provide strong data support for the study of global and regional crop planting intensity and yield. However, the current mapping products are still inadequate. For instance, these products were developed using the traditional band threshold segmentation method, which is both time-consuming and laborious. They are also confined to low resolutions. Lastly, these products are mostly designed for integrated cropland mapping or coarse classification of rain-fed or irrigated agriculture, and few studies have focused on mapping specific crops [7].
The continuous development of remote sensing data acquisition capacity has gradually led to the adoption of multi-type, high-dimensional remote sensing data characteristics [8,9]. This has led to machine learning methods being combined with mesoscale, multi-source remote sensing data. Different characteristics of remote sensing data can be combined, including spectral features such as the visible green, red, and near-infrared bands [10], spatial features such as texture features derived from the gray-level co-occurrence matrix (GLCM) [11,12], temporal features such as Normalized Difference Vegetation Index (NDVI) or Enhanced Vegetation Index (EVI) time-series data [13,14], and auxiliary features such as slope factors generated from a digital elevation model (DEM) [15], to achieve higher precision than single-feature-variable remote sensing crop recognition [16]. Furthermore, this can help retrieve higher-value, deeper information at larger scales [7,17,18,19], including high-dimensional remote sensing information. These factors enable applications such as crop classification and recognition [20].
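As an illustration of how such feature types can be derived in practice, the following is a minimal sketch (not the authors' code) of computing one spectral feature (NDVI) and two GLCM-based texture features from reflectance arrays. The array names `red` and `nir` and the parameter choices are assumptions for illustration; scikit-image's GLCM utilities are used for the texture measures.

```python
# Minimal sketch: a spectral feature (NDVI) and GLCM texture features
# (contrast, homogeneity) of the kind listed above, computed from NumPy arrays.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # spelled greycomatrix in skimage < 0.19

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, a typical spectral feature."""
    return (nir - red) / (nir + red + 1e-10)

def glcm_texture(band: np.ndarray, levels: int = 32) -> dict:
    """Contrast and homogeneity from a gray-level co-occurrence matrix."""
    # Quantize reflectance to a small number of gray levels before building the GLCM.
    q = np.digitize(band, np.linspace(band.min(), band.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    return {"contrast": graycoprops(glcm, "contrast")[0, 0],
            "homogeneity": graycoprops(glcm, "homogeneity")[0, 0]}

# Example with placeholder reflectance arrays:
rng = np.random.default_rng(0)
red, nir = rng.random((100, 100)), rng.random((100, 100))
print(ndvi(red, nir).mean(), glcm_texture(nir))
```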
Currently, land classification is commonly conducted with machine learning algorithms, which have been widely applied to remote sensing data. The random forest algorithm is a well-known ensemble learning technique based on bagging [21]. Its parameters are selected by combining sample perturbation through probability sampling (e.g., bootstrap sampling) with attribute perturbation in the feature subspace. The algorithm's convergence properties and generalization error also provide good robustness [22,23]. The random forest algorithm has been used extensively in land-cover classification applications [6,17,24,25,26,27,28,29]. The method has several strengths for remote sensing applications, including the ability to handle high volumes of data, an unbiased estimate of the generalization error, robustness to outliers and noise, simple parameter settings, and a low computational cost.
Multi-source remote sensing images produced by the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS) have a wide swath and high temporal resolution. These images are the most advantageous for dynamically monitoring vegetation change at large scales. However, their spatial resolution is often not appropriate for regional-scale vegetation research.
An example of a multispectral sensor is Landsat-8, which has been in operation since 2013. Landsat-8 provides repeated 30-m multispectral imagery with a 16-day revisit rate, and its products remove most atmospheric interference. The satellite's stable spectral, spatial, and temporal resolution is effective for vegetation observation. As a result, it has supported many applications on platforms such as Google Earth Engine [17,30,31,32,33,34], with multiple methods [35] and multi-source satellite images [36,37].
The Sentinel-2 satellite is another valuable resource for vegetation monitoring. Sentinel-2 was launched in 2015 and is a land monitoring optical satellite that provides 10-m resolution optical imagery with a revisit period of five days. The satellite is equipped with the state-of-the-art Multispectral Imager (MSI) instrument, which covers 13 spectral bands. It is also the only satellite with three bands in the red-edge range, which is effective for monitoring vegetation health. Sentinel-2 has provided monitoring information for agriculture and forestry and has become an effective data resource for Earth observation [38,39,40]. The satellite is essential for monitoring land use, crop detection, and grain prediction; it can also help improve food security, as well as provide continuity for the SPOT Earth observation satellites and the Landsat missions. With pre-processing techniques such as careful calibration, Landsat-8 and Sentinel-2 data can be integrated [41,42,43] and compared [44,45,46] to provide frequent cloud-free surface observations of vegetation growth. It has been confirmed that Sentinel-2 data and atmospherically corrected Landsat-8 data are highly correlated in band reflectance, such as in the red and near-infrared bands, and are well matched in the reflectance characteristics of primary surface objects, including vegetation, soil, and water [47,48].
Mapping research is necessary for monitoring and evaluating wheat growth. It is difficult to select suitable features as extraction factors in wheat mapping due to the spectral similarity between different wheat varieties. However, these characteristics are necessary for efficiently and accurately mapping wheat croplands.
This study had the following objectives related to regional land-cover map accuracy. Firstly, the goal was to extract winter wheat crops from integrated Landsat-8 and Sentinel-2 imagery using machine learning (RF) methods. Secondly, the study explored the best extraction features and feature combinations for winter wheat mapping. Lastly, this study compared the factors affecting extraction precision in different geographical scenes.

2. Materials and Methods

2.1. Study Area

The study area (Figure 1) was located at the periphery of Magdeburg city in Sachsen-Anhalt, central Germany, between 51°34′N and 52°15′N and 11°6′E to 12°10′E. The Elbe River flows through the region from south to north. The two banks of the river are widely cultivated, typically with winter wheat.
Germany has a strong agricultural sector and is the world’s third largest exporter and importer of agricultural products. The North German Plain has low temperatures, cold and wet winters, low sunshine, and is suitable for the forage growth that is necessary for the development of animal husbandry. The south consists of mountain valleys with low latitudes and altitudes, high temperatures, and dense river networks. According to the annual report “Understanding Farming—Facts and figures about German farming” released by Germany’s Federal Ministry of Food and Agriculture, Germany has three typical vegetation types, with 34% of the land area designated for agriculture, 31% designated for forest, and 13% designated for grasslands. A total of 47% of the land area is used for farming.
Germany is famous for its “cultivated landscape” with the main crop types being bread cereals, potatoes, sugar beets, fruit, and vegetables. Previous studies on crop classification at regional [13] and national scales [19] in central Germany have yielded high classification accuracy. The area covered by cereal crops is nearly as high as the total arable land. Most of these crops are concentrated in the central and southwestern regions of Germany. Today, cereals and wheat are the most important agricultural products in the German farming sector, which are used as food, feed, and renewable raw materials.

2.2. Satellite Imagery

Fine-resolution satellite data were required to map crop fields due to their small and fragmented parcels [53]. As a result, Landsat-8 data with 30-m bands (bands two to seven) and Sentinel-2 data with 10-m bands (bands two to four and band eight), 20-m shortwave-infrared bands (bands 11 and 12), and 20-m red-edge bands (bands five to seven) were selected to map cropland extent. Five Landsat-8 Operational Land Imager (OLI) images and two Sentinel-2 Multispectral Instrument (MSI) images with minimum cloud cover during the 2016 wheat growth period were captured on 2 May (MSI), 5 May (OLI), 6 June (OLI), 25 August (OLI), 9 September (MSI), 10 September (OLI), and 29 November (OLI). These images are referred to below by the date codes 0502, 0505, 0606, 0825, 0909, 0910, and 1129. Figure 2 displays the phenology of the main crops in the study area [13,54]. The images in Figure 2 were captured during the winter wheat growing season and are displayed as band composites, with zoomed-in views in the first two rows. The 'Early_Age' and 'Mid_Age' stages were defined according to the growth stages of monocotyledonous and dicotyledonous plants released by the Federal Biological Research Centre for Agriculture and Forestry [54]. The 'Early_Age' consists of the principal growth stages of germination, leaf development, and tillering. The 'Mid_Age' contains stem elongation, booting, inflorescence emergence and heading, flowering and anthesis, and the development of fruit.

2.3. Reference Data

The reference data (Mapping Germany's Agricultural Landscape) were released by the European Space Agency (ESA)'s Copernicus Sentinel-2 Monitoring Program on 29 August 2017, with free access via the website. The dataset covers 21 land-cover categories, including 15 specific crop types, between October 2015 and late 2016 at a 30-m resolution. The dataset was used to assist in sketching training sample points: the corresponding sample types were marked at the same coordinate positions on the satellite images based on the crop types in the reference data. The reference data also helped in obtaining validation data in ArcGIS 10.4, where ground truth polygons were manually delineated on a high-resolution map downloaded from Google Earth, guided by the corresponding positions in the reference data. This reference dataset was invaluable for training the RF classifier, as well as for assessing classification accuracy and identifying uncertainty.

2.4. Methods

The workflow for wheat detection and mapping consisted of the following steps (Figure 3). First, data pre-processing was used to resample all images to an integrated spatial resolution of 30 m and to co-register them to the same geographic reference based on the Landsat data. The image sequences were then converted to top-of-atmosphere (TOA) reflectance. Second, multi-feature construction and feature combination were used to build different model setups. Then, wheat land detection and mapping were conducted using the random forest algorithm. Lastly, mapping accuracy assessment and comparison analysis were used to evaluate the different model input setups.
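The TOA conversion step follows the standard USGS rescaling of Landsat-8 OLI digital numbers to reflectance. The following is a minimal sketch of that conversion, not the authors' pre-processing chain; the rescaling factors and sun elevation angle are normally read from the scene's MTL metadata file, and the default values shown here are assumptions for illustration.

```python
# Minimal sketch: Landsat-8 OLI digital numbers to top-of-atmosphere reflectance.
import numpy as np

def landsat8_toa_reflectance(dn: np.ndarray,
                             sun_elevation_deg: float,
                             mult: float = 2.0e-5,    # REFLECTANCE_MULT_BAND_x from MTL (typical value)
                             add: float = -0.1) -> np.ndarray:  # REFLECTANCE_ADD_BAND_x from MTL
    """Rescale DNs to TOA reflectance and correct for the solar elevation angle."""
    rho_prime = mult * dn.astype(np.float64) + add
    return rho_prime / np.sin(np.deg2rad(sun_elevation_deg))

# Example with a placeholder DN array; in practice `dn` would be read from the band file
# and the sun elevation taken from the MTL metadata of the scene.
dn = np.random.default_rng(0).integers(6000, 30000, size=(100, 100))
toa = landsat8_toa_reflectance(dn, sun_elevation_deg=45.3)
```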

2.4.1. Random Forest Algorithm

The random forest algorithm was constructed based on decision trees. Each tree was trained on a randomly drawn sample of the training data (sample perturbation), and at each node a random subset of K attributes was considered (attribute perturbation); the optimal attribute among this subset was selected as the basis for splitting the node. The construction process of the random forest algorithm consisted of the following steps (a minimal training sketch follows the list):
(1) Bagging [55] was used to draw bootstrap samples (with replacement) from the original training set, producing n_tree training sets.
(2) One decision tree model was trained on each of the n_tree training sets.
(3) At each node of a single decision tree, a random subset of K attributes was considered, and the best attribute for the split was selected according to a criterion such as the information gain, information gain ratio, or Gini index [56].
(4) Each tree was split recursively in this way until all the training samples at a node belonged to the same class.
(5) The generated decision trees formed a random forest, and the final classification result was determined by a majority vote over the trees.
The sampling scheme was implemented based on an overarching principle: pure pixels of winter wheat or of other classes were visually identified within their respective pixel boundaries and assigned the correct class label. First, 100 evenly scattered sampling points were randomly generated across the study area in ArcGIS 10.4 to construct an unbiased training sample dataset. Additional sample points were then added manually according to the reference classification map and prior cropland knowledge in order to ensure a sufficient number of samples. Four cases (Figure 4) occurred when the random sampling points were scattered on the remote sensing images: (1) wheat pixels covered the sampling point, (2) non-wheat pixels covered the sampling point, (3) the sampling point was near wheat pixels, and (4) the sampling point was near non-wheat pixels. For the first two cases, the pixel blocks in which the sample points fell were directly marked as training samples for the corresponding categories. For the latter two cases, the pure wheat and non-wheat pixel blocks closest to the sampling point were manually selected. Through this interactive sampling strategy, a total of 1587 wheat pixels and 4975 non-wheat pixels were selected and labeled as one and zero, respectively, to construct the sample dataset for the random forest classifier.
A confusion matrix [57] was used to evaluate the performance and accuracy of the wheat extraction map. It is an effective tool for describing classifier accuracy, consisting of a square array of numbers arranged in rows and columns that expresses the number of sample units assigned to a particular category relative to the verified ground truth class. Google Earth images have been shown to provide reliable validation data for assessing moderate-resolution remote sensing data [58], such as Landsat [59] and MODIS [60,61], as well as regional ground investigation points [62]. Thus, for the validation process, a 2.94-m high-resolution image was downloaded from the Google Earth platform and carefully calibrated to the same coordinate system as the existing data. The ground truth polygons manually delineated from Google Earth served as validation data and were combined with the classification map generated from the remote sensing data to build the confusion matrix. Accuracy indexes comprising the overall accuracy, producer's accuracy, user's accuracy [63], f1-score, and kappa [64] values were calculated to examine classification accuracy.
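The following is a minimal sketch (not the authors' evaluation code) of deriving these accuracy indexes from a confusion matrix with scikit-learn. The arrays `y_true` and `y_pred` are hypothetical placeholders for the rasterized validation labels and the corresponding classified pixels.

```python
# Minimal sketch: confusion matrix and the accuracy indexes listed above.
import numpy as np
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             f1_score, cohen_kappa_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # placeholder validation labels (1 = wheat)
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])   # placeholder classified labels

cm = confusion_matrix(y_true, y_pred)           # rows = reference class, columns = predicted class
overall_accuracy = accuracy_score(y_true, y_pred)
producers_accuracy = cm[1, 1] / cm[1, :].sum()  # correctly mapped wheat / reference wheat (omission-related)
users_accuracy = cm[1, 1] / cm[:, 1].sum()      # correctly mapped wheat / mapped wheat (commission-related)
f1 = f1_score(y_true, y_pred)
kappa = cohen_kappa_score(y_true, y_pred)
print(cm, overall_accuracy, producers_accuracy, users_accuracy, f1, kappa)
```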

2.4.2. Multi-Input Features

Input features are the most important aspect of the classification scheme, and directly affect the final classification accuracy. Many features are used as model inputs in common land mapping problems. These features can be spectral features consisting of original bands or spectral indexes. Such spectral indexes include the Normalized Difference Vegetation Index (NDVI) [65], Enhanced Vegetation Index (EVI) [66], Green Vegetation Index (GVI) [67], Green Normalized Difference Vegetation Index (GNDVI) [68], Leaf Area Index (LAI) [69], Normalized Difference Built-Up Index (NDBI) [70], Normalized Difference Snow Index (NDSI) [71], Modified Normalized Difference Water Index (MNDWI) [72,73], Soil Adjusted Vegetation Index (SAVI) [74], Sum Green Index (SGI) [75], and Simple Ratio Index (SR) [76]. Texture features can also be included; they are constructed by recognizing and extracting pixel grayscale differences within a particular window, such as the angular second moment (ASM) [77], contrast [77], entropy [78], and homogeneity [77]. Other features include compressed-information features such as principal component transformation variables, temporal features that are suitable for objects that change over time, and red-edge features, which derive from the development of satellite sensors for vegetation detection [79].
The feature space in this study was constructed based on the multispectral remote sensing data mentioned above. It included the following feature types (a minimal sketch of how they can be derived from band arrays is given after the list):
(1) Original spectral bands (B), including the blue, green, red, near-infrared (NIR), SWIR1, and SWIR2 bands, which correspond to bands two to seven from Landsat-8 and bands two to four, band eight, and bands 11 to 12 from Sentinel-2, respectively. These bands are useful for Earth observation, and especially for vegetation identification.
(2) Spectral indices (S), including the NDVI, which is sensitive to vegetation; the NDBI, which is helpful for impervious surface recognition; the NDSI, which is highly responsive to snow and cloud; and the MNDWI, which is important for distinguishing water bodies. All of these spectral indices were selected for their effectiveness in extracting specific Earth surface objects.
(3) Principal component analysis (PCA) features (P), which contain the first three components (pca1 to pca3). These components summarize most of the band information.
(4) Differences in NDVI (D) between dates, which reflect the phenological differences among crops.
(5) Red-edge features (R), represented by the red-edge bands and vegetation indices derived from Sentinel-2 data, as red-edge bands are very effective for reflecting vegetation health and growth information.
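The sketch below illustrates, under assumed band names and not as the authors' code, how these five feature types can be derived from co-registered 30-m band arrays; the index formulas follow the cited definitions, and the (D) and (R) features are indicated in comments.

```python
# Minimal sketch: deriving the B, S, P, D, and R feature types from band arrays.
import numpy as np
from sklearn.decomposition import PCA

def norm_diff(a, b):
    """Generic normalized difference used by NDVI, NDBI, MNDWI, etc."""
    return (a - b) / (a + b + 1e-10)

def build_features(bands):
    """`bands` is a dict of 2-D reflectance arrays keyed 'blue', 'green', 'red', 'nir', 'swir1', 'swir2'."""
    feats = dict(bands)                                         # (B) original spectral bands
    feats["NDVI"] = norm_diff(bands["nir"], bands["red"])       # (S) vegetation
    feats["NDBI"] = norm_diff(bands["swir1"], bands["nir"])     # (S) built-up areas
    feats["MNDWI"] = norm_diff(bands["green"], bands["swir1"])  # (S) water; the NDSI uses the same green/SWIR1 pair
    # (P) first three principal components of the stacked bands
    stack = np.stack([bands[k] for k in ("blue", "green", "red", "nir", "swir1", "swir2")], axis=-1)
    pcs = PCA(n_components=3).fit_transform(stack.reshape(-1, stack.shape[-1]))
    for i in range(3):
        feats[f"pca{i + 1}"] = pcs[:, i].reshape(stack.shape[:2])
    return feats

# (D) temporal NDVI differences are taken between dates, e.g.
#     d_ndvi = build_features(bands_0505)["NDVI"] - build_features(bands_0909)["NDVI"]
# (R) red-edge features come from Sentinel-2 only, e.g. a red-edge normalized
#     difference such as norm_diff(s2_band8, s2_band5).

# Example with placeholder reflectance arrays:
rng = np.random.default_rng(0)
demo = {k: rng.random((60, 60)) for k in ("blue", "green", "red", "nir", "swir1", "swir2")}
features = build_features(demo)
```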
A total of 131 multi-features (MFs) (Table 2) were selected as predictor variables and were combined into 30 feature subsets (MFSs) containing single and combined feature types (Figure 5). In each subset, the first 10 variables with the highest importance scores were used as model inputs. The 30 MFSs were further divided into five multi-patterns (MPs), named I, II, III, IV, and V, based on the number of feature types contained in each MFS (Figure 5). For example, pattern I indicated that the features used for classification came from a single feature type (i.e., B, S, P, D, or R), while pattern II indicated that the features came from a combination of two feature types (i.e., BS, BP, BD, BR, SP, SD, SR, PD, PR, or DR).
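The following is a minimal sketch, not the authors' workflow, of how such subsets can be formed: feature types are combined into patterns, and within each subset only the 10 variables with the highest random forest importance scores are retained as model inputs. The names `feature_names` and `groups` are illustrative assumptions.

```python
# Minimal sketch: enumerating feature-type combinations and keeping the top-10
# most important variables within a chosen subset.
from itertools import combinations
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def top10_subset(X, y, feature_names, groups, chosen_types):
    """Return indices of the 10 most important features among the chosen feature types."""
    idx = [i for i, name in enumerate(feature_names) if groups[name] in chosen_types]
    rf = RandomForestClassifier(n_estimators=200, random_state=0, n_jobs=-1)
    rf.fit(X[:, idx], y)                                  # fit on the candidate features only
    order = np.argsort(rf.feature_importances_)[::-1][:10]  # rank by importance, keep 10
    return [idx[i] for i in order]

# Patterns I-V enumerate combinations of one to five of the feature types B, S, P, D, R.
types = ["B", "S", "P", "D", "R"]
patterns = {r: list(combinations(types, r)) for r in range(1, 6)}
# e.g. patterns[2] contains ("B", "S"), ("B", "P"), ..., ("D", "R")
```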

3. Results

3.1. Key Predictor Variables in MFs

Temporal features were the most critical components among the variables involved in the classifier (Table 2). In the single-temporal-feature method, the highest-scoring features were concentrated in the 0502, 0505, 0825, and 0909 periods. The phenology information revealed that June was the peak growing period for winter wheat, but other crops in the same area were too similar to distinguish at that time. May and August through September were the key growth periods in which the more distinctive characteristics of wheat were revealed, while other crops were being sown or harvested. Classification based on these two stages provided the best results for distinguishing between crop objects and improved classification accuracy. Crop phase characteristics were essential in the combination method, and their importance far exceeded that of other features.
The spectral index features were another key factor in crop discrimination. The NDVI was the main factor among the single-spectral-index features. Winter wheat growing in May differed obviously from other crops; these differences could be distinguished using the NDVI and NDSI, resulting in high scores on the chart. Winter wheat plots harvested in August and September were sparse or bare and were easily confused with impervious surfaces such as buildings, for which the NDBI proved valuable in removing buildings and related objects. Apart from these, the MNDWI and SAVI were slightly less important than the previously mentioned features. The NDVI difference information contained a certain degree of spectral information in the combination method; therefore, the other spectral indices were less important than the temporal characteristics.
Band and PCA features were the second most important predictors for wheat detection. Among single-band features, B2, B3, and B4 were the key bands for retrieving vegetation information and distinguishing between vegetation objects. As for PCA features, the first and second principal components were sufficient to contain the ground information in the single-PCA method. The main function of PCA was to reduce dimensionality while retaining the maximum amount of information; thus, it was more important for multi-band data, such as PCA(1,2)_0502 (Sentinel-2 data) and PCA(1,2)_0505 (Landsat-8 data).
The red-edge spectrum is closely related to various physical and chemical parameters of vegetation and is an important indicator for describing plant pigment color and overall health; it is also used for canopy LAI estimation [80], chlorophyll density evaluation [81], and leaf element absorption [82]. Vegetation information was abundant in May, so the corresponding red-edge bands and indices scored higher than in other seasons in the single-red-edge method. Red-edge information also played an important role in the combination method, as observed in the histogram of the relative combined feature types (Figure 5).

3.2. Mapping Accuracy under 30 MFSs and Five MPs in Three Zones

Three different geo-scenarios (Figure 6) were selected in the study area to display the wheat mapping results for comparison with classification results (Figure 7). These scenarios were organized as pure farmland, urban mixed areas, and forested areas, which were marked as Zone 1, Zone 2, and Zone 3. Wheat crops were extracted using 30 multi-feature subsets (MFSs) and five multi-patterns (MPs), as mentioned above. The areas had the following characteristics. The pure farmland consisted of a homogeneous farming plot, simple surface landscape structure, and different objects with the same spectrum. The urban mixed area featured abundant ground objects and notable differences in spectral characteristics. The forested area had high field fragmentation, low planting density, and irregularly shaped farming plots.
Figure 7 shows the classification results for the three regions using the 30 feature subsets (MFSs). The mapping results for each MFS had the same spatial distribution as the ground truth polygons in Figure 6, which reflects the robustness of the random forest algorithm to multiple variables and surface conditions, as represented in Table 3. Table 3 also summarizes the statistical accuracy indexes corresponding to the classification maps in Figure 7. Each classification map was evaluated using six indexes: the overall accuracy (OA), producer's accuracy (PA), user's accuracy (UA), area under the curve (AUC), f1-score, and kappa coefficient [41]. The manually delineated ground truth polygons (shapefile format) from Section 2.3 were converted into raster format at the same 30-m resolution and used as validation samples, totaling 100,445 pixels in Zone 1, 49,968 pixels in Zone 2, and 24,432 pixels in Zone 3.
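The polygon-to-raster conversion step could be implemented as in the following minimal sketch (not the authors' GIS workflow), which burns the ground truth polygons into a raster aligned with the classification map using geopandas and rasterio; the file names are placeholders.

```python
# Minimal sketch: rasterize validation polygons onto the 30-m classification grid.
import geopandas as gpd
import rasterio
from rasterio.features import rasterize

with rasterio.open("classification_map_30m.tif") as src:   # hypothetical classified raster
    transform, shape, crs = src.transform, (src.height, src.width), src.crs
    classified = src.read(1)

polys = gpd.read_file("ground_truth_polygons.shp").to_crs(crs)  # hypothetical wheat polygons
validation = rasterize(((geom, 1) for geom in polys.geometry),
                       out_shape=shape, transform=transform,
                       fill=0, dtype="uint8")                # 1 = wheat, 0 = other

# `validation.ravel()` and `classified.ravel()` can then be passed to the
# confusion-matrix code shown in Section 2.4.1.
```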
The lowest overall accuracy (OA) values in the three zones were 84.84%, 91.14%, and 89.94%, while the highest values were 88.15%, 93.70%, and 94.15%, respectively. The lowest values for producer's accuracy, which is particularly important for land mapping, were 75.45%, 69.69%, and 57.66%, while the highest values were 84.10%, 85.91%, and 76.60%, respectively. In agriculture, this level of precision is adequate for most applications based on crop mapping. The AUC value represents model classification performance. The lowest AUC values were 0.8403, 0.8316, and 0.7650, and the highest values were 0.8743, 0.9058, and 0.8553, respectively. These results suggest that the 30 MFSs used for modeling in this study provided good classification performance in different regions. The kappa coefficient indicates the consistency between the mapping results and the ground truth. The lowest kappa values were 0.6831, 0.7078, and 0.5000, and the highest values were 0.7519, 0.8028, and 0.6492, respectively. These measurements suggest that the mapping results for the three regions were in good agreement with the actual crop distribution.
Figure 8 displays the precision among the multi-patterns (MPs). The single-type features had the lowest classification accuracies, as indicated in Table 3. The classification methods with better accuracy were mostly concentrated in patterns II, III, and IV, which combined two to four feature sources. This result suggests that crop classification should use at least two types of classification features to achieve high node purity. However, classification accuracy does not always increase as classification features are enriched; it may remain stable at a certain level, or even decline slightly.
OA is an example of this effect. The inter-class precision results for Zone 1 and Zone 2 (Figure 9) revealed an initial increase, followed by a decrease, in accuracy. Model accuracy was also affected by bias in the feature selection process, in addition to model robustness and sample limitations. When the model considered only some features to be important, the contribution of the other features to accuracy was minimal, and adding further similar features did not increase the accuracy. In Zone 3, accuracy increased as more feature types were added, because the topography of this zone differed markedly from that of the other two regions. New feature types, such as terrain features, may be necessary to further increase accuracy in Zone 3.

3.3. Accuracy Performance in Three Zones with Multiple Geographical Land Surfaces

Figure 10 displays the accuracy results for each zone. In Zone 1 (Figure 10a), each precision index was concentrated around 86.64%. The overall commission and omission errors were lower than 0.2, with the omission error slightly higher than the commission error. The average kappa coefficient was 0.7197, and the AUC was 0.8563. In Zone 1, the land was homogeneous, the surface spectral structure was similar, and the cropland area was large. All these factors were conducive to achieving high and stable classification accuracy. However, the "different objects with the same spectrum" effect between crops was apparent. German cultivated parcels are small, and the spatial resolution of the imagery was not high enough, which may have limited the Zone 1 classification accuracy.
In Zone 2 (Figure 10b), the OA had improved performance, with an average of 92.62%. The commission and omission errors were relatively low, fluctuating at about 0.2. The average Kappa coefficient was 0.7679, which revealed a high consistency between classification and inspection data. The AUC value was 0.8835, which revealed that the models had good classification ability. Zone 2 contained water, urban areas, and vegetation, and featured a richer terrain compared with other areas. The spectral and temporal discrimination between crops and other terrain was higher and easier to distinguish. The terrain near rivers and cities revealed certain distribution characteristics, which was conducive to high classification accuracy.
The OA for Zone 3 had the best performance compared to Zone 1 and Zone 2, with an average precision level of 92.72% (Figure 10c). However, the other indexes had a lower precision and larger fluctuation. The commission and omission errors were close to 0.4, the kappa coefficient was 0.6062, and the AUC was 0.8176. These results revealed a slight weakness in consistency and model capability compared to Zone 1 and Zone 2. Zone 3 was located at the edge of an uplift area in central Germany. Forests constituted the main vegetation in the region, along with a small amount of farmland. The differences between ground features allowed for an increase in the extraction precision for winter wheat. However, some farm blocks were small, fragmented, and irregular in shape, and reflectivity characteristics were affected by topographic fluctuations to a certain extent. All these factors may have caused high commission and omission errors in the specified zones.

4. Discussion

4.1. Optimum Season Selection for MFs

Vegetation is one of the Earth's features that has an observable cycle. Moreover, vegetation has different growth periods based on its structure and the climate. Crops are a special vegetation type that is affected by natural water and heat conditions as well as anthropogenic processes. The "same object, different spectrum" and "different object, same spectrum" effects were apparent in our observations. Based on this characteristic, crop phenology information is essential for distinguishing between crop features [83]. The most important task for crop extraction in a given region is to fully understand the phenological history of local crops and identify the key seasons and growing periods (Figure 11).
In the study area, the main crops were winter wheat, maize, oilseed rape, and sugar beet. The strongest features for winter wheat were revealed during the June growth period. However, oilseed rape and sugar beet also featured strong growth during this period. As a result, the vegetation features in June had little effect on the differentiation between winter wheat and other crops. However, wheat crops experienced vigorous growth in May, while other crops were in the early growth stages and had weaker spectral features. Thus, May was identified as the best period for distinguishing winter wheat features from other crops. The wheat harvest season is in August–September, during which maize and sugar beets are still growing. Winter wheat can be distinguished from these two crops by examining the differences in growth. Thus, in this study, August and September were also effective periods for identifying wheat crops. Wheat crop features were mostly indistinguishable during December due to the crop being recently sown. This phenology analysis also confirms the seasonal differences in Figure 5.

4.2. Optimal MP Selection from MFSs

The parameter settings for the random forest model are not complicated. Even with the default forest parameter structure, good classification results can be obtained by selecting typical features of the classification target. There are many crop characteristics, but vegetation classification still relies mainly on spectral reflectance and growth periodicity.
In terms of spectral characteristics, the main feature types include: (1) the basic original broad bands, such as the visible green, red, and near-infrared bands; (2) narrow bands carrying key information about vegetation, such as the red-edge bands (band selection mainly depends on the spectral resolution of the satellite sensor); (3) principal component analysis, which produces new bands from a spatial transformation of the original bands and can also reduce their dimensionality; and (4) vegetation indices obtained through band calculations at key positions of the spectral curves of different ground objects, from which a spectral index can be constructed to effectively separate ground objects. Common vegetation indices, constructed from the visible red and near-infrared bands, as well as red-edge vegetation indices based on the red-edge bands, have become effective tools for distinguishing vegetation. However, the phenomena of "different object, same spectrum" and "same object, different spectrum" are still inevitable due to the limited spectral resolution of medium-resolution satellites. Therefore, there are still high commission and omission errors (Table 3).
The rapid revisit period and free acquisition of medium-resolution satellite imagery have enabled the construction of time series datasets such as NDVI stacks and EVI trajectories [84,85]. Many scholars classify ground objects using time series analysis. However, this method requires complete time series coverage of the ground object spectrum, and cloud interference can lead to the partial absence of time series images. Thus, relying on temporal features alone for crop recognition can be unreliable.
These two facts suggest that it is not sufficient to use a single type of feature in crop classification based on machine learning algorithms. Nevertheless, an unlimited number of feature types is also undesirable, since it leads to bias in feature selection and computational redundancy. The crop extraction model should therefore accept MFSs that include at least three or four different feature types (MP-III or MP-IV) covering both spectral reflectance and growth periodicity.

4.3. Factors Affecting Accuracy in Three Zones

The factors influencing the three zones can be inferred from the differences in classification accuracy. OA was used as the evaluation index, the top five accuracies among the 30 MFSs were listed, and the feature types in the corresponding MFSs were counted.
The results for Zone 1 (Table 4) reveal that the band (B) information may have been the main influencing factor. This is mainly because this region is located in a pure farmland environment with homogeneous background information, and the differences between crops were mainly identified according to their spectral heterogeneity. Further improvement in the accuracy of crop extraction for pure farmland areas may depend on improvements in the spectral and spatial resolution of satellite sensors, since the spectral differences between crops are best captured using narrower spectral bands.
The results in Table 4 suggest that the spectral indices (S) were the main factor influencing accuracy in Zone 2, which was located in a mixed urban area. Spectral indices can enhance the spectral information of different types of terrain, providing improved classification performance. For example, the NDVI used in this paper can enhance vegetation information, the NDBI can enhance impervious surface information, the MNDWI can enhance water information, and the NDSI can enhance cloud, snow, and bare soil information. Crop information can be more effectively retrieved from complex mixed backgrounds using these indices. Thus, the crop classification of urban mixed areas should be combined with an assessment of the regional environment, and a spectral index should be selected that can effectively enhance the terrain information and improve classification accuracy.
Zone 3 was located in a forested area featuring a large variety of vegetation. The crops in this area exhibit growth cycles distinct from those of other vegetation types (e.g., forests), which made the time-phase information (D) based on crop growth cycles the most important factor affecting classification accuracy in Zone 3 (Table 4). The area also included hilly terrain, which disturbed the spectral reflectance characteristics of local terrain objects. Principal component information (P), which is based on a spatial transformation and retains important spectral information, was another main factor affecting classification accuracy. The temporal information for crop extraction in the mountainous forest region was based on crop-specific growth information and was constructed to distinguish vegetation types. Topography was also a major factor affecting the reflectance characteristics of ground objects, and requires further consideration to improve classification accuracy in the future.

4.4. Prospects of Object-Based Approaches Compared to Pixel-Based Approaches

With the continuous development of high-spatial-resolution imagery, pixel-based extraction of remote sensing information will face challenges because of the large amount of data. In contrast, object-based information extraction can make full use of the spatial information, geometric structure, and texture of the image, and its advantages gradually stand out when extracting information from high-spatial-resolution remote sensing images. The object-based approach first segments the image into several relatively homogeneous land parcels, and then takes these parcels as the smallest classification units [27]. For example, in prior research, blocks obtained after multi-scale segmentation were taken as the basic classification units, and spectral, texture, shape, and other variables were constructed [86]; the random forest algorithm was then used to achieve better recognition results. In future research, we can further explore object-based random forest feature recognition under multi-source data. The advantage of this method lies in the regularity of the segmented image objects, which can effectively suppress the noise that is easily generated by pixel-based classification. At the same time, this method can make full use of the spatial structure and texture information in the image and can effectively reduce the error rate in fragmented and discontinuous areas of ground objects, thus making the recognition results more accurate. The object-based recognition method can now be combined with machine learning methods to achieve better recognition results.
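To illustrate the object-based idea described above (and not the method used in this paper), the following is a minimal sketch that segments an image into homogeneous objects, averages the per-pixel features within each object, and classifies the objects with a random forest. SLIC stands in here for the multi-scale segmentation cited in [86], and `img`, `pixel_features`, and `object_labels` are hypothetical placeholder arrays.

```python
# Minimal sketch: object-based classification with segmentation + random forest.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
img = rng.random((120, 120, 3))               # placeholder 3-band image
pixel_features = rng.random((120, 120, 10))   # placeholder per-pixel feature stack

segments = slic(img, n_segments=400, compactness=10, start_label=0)  # object map
n_objects = segments.max() + 1

# Mean feature vector per object: the object becomes the smallest classification unit.
object_features = np.zeros((n_objects, pixel_features.shape[-1]))
for obj in range(n_objects):
    object_features[obj] = pixel_features[segments == obj].mean(axis=0)

object_labels = rng.integers(0, 2, n_objects)  # placeholder training labels per object
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(object_features, object_labels)
predicted = rf.predict(object_features)        # per-object classes, mapped back via `segments`
```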

4.5. Advantages and Limitations of Approach in This Article

This paper demonstrates that crop recognition and extraction with high accuracy can be achieved at the regional scale by using a limited and reasonable combination of feature spaces. However, this paper only achieved 30-m winter wheat mapping in a small study area, and the experimental results may therefore have some limitations for large-scale crop extraction research. Nevertheless, the idea of simplifying and optimizing the feature space can, to a certain extent, also serve as a reference for crop extraction at large scales.
Firstly, based on the analysis and discussion of the feature space in this paper, efficient feature variables can be chosen as classifier inputs from the two aspects of spectral and temporal features in future large-scale regional mapping. Secondly, for crop information extraction over large regions, considering operational efficiency and accuracy, it is usually necessary to divide a large area into surface units, calculate the information for each unit separately, and finally integrate the partitioned results into a result for the whole area (a minimal block-wise sketch is given below). The factors influencing the extraction accuracy of the three different regions discussed in Section 4.3 can provide some ideas for future research on block-wise information extraction. For example, when large regions cover farming areas, urban mixed areas, and hilly areas, they can be separated according to their different geomorphological and environmental characteristics. For the separated farming areas, it is helpful to focus on building features based on band information, whereas for urban mixed areas, attention can be paid to features based on spectral indices. This targeted feature-building process can help achieve rapid and accurate information extraction.
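The following is a minimal sketch of the block-wise strategy described above, not the authors' implementation: a large raster is split into tiles, each tile is classified separately, and the results are mosaicked. The `classify_tile` function is a hypothetical per-tile classifier (for example, the random forest from Section 2.4).

```python
# Minimal sketch: tile a large image, classify each tile, and mosaic the results.
import numpy as np

def classify_tile(tile: np.ndarray) -> np.ndarray:
    """Placeholder: return a per-pixel class map for one (rows, cols, bands) tile."""
    return (tile.mean(axis=-1) > 0.5).astype(np.uint8)

def blockwise_classify(image: np.ndarray, tile_size: int = 512) -> np.ndarray:
    rows, cols, _ = image.shape
    out = np.zeros((rows, cols), dtype=np.uint8)
    for r in range(0, rows, tile_size):
        for c in range(0, cols, tile_size):
            tile = image[r:r + tile_size, c:c + tile_size]   # tiles at the edges are smaller
            out[r:r + tile_size, c:c + tile_size] = classify_tile(tile)
    return out

# Example with a placeholder 6-band image:
result = blockwise_classify(np.random.default_rng(0).random((2048, 2048, 6)))
```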
Of course, information extraction at a large regional scale also faces a variety of other problems and challenges. These possible factors remain under consideration during implementation in order to further improve the theory and research methods.

5. Conclusions

In this study, Landsat-8 OLI and Sentinel-2 MSI data were combined for wheat extraction and mapping using the random forest machine learning algorithm. The mapping results for three regions with different background objects suggest that the random forest classifier performed well using multi-features (MFs), multi-feature subsets (MFSs), and multi-patterns (MPs). Comparison analysis was conducted across the MFs, MFSs, MPs, and three spatial zones to find the optimal features and combination modes, as well as the factors affecting zone accuracy. Compared with previous studies on crop classification, which needed to construct a variety of characteristic variables, this paper discusses the construction of the feature space and a feature combination method, and concludes that high accuracy can be achieved without many kinds of features participating in the classification. The results revealed that combinations of three to four feature types, derived from spectral reflectance and growth periodicity, were the most important factor in feature combinations. This provides a positive way to simplify feature construction and improve classification efficiency in the future. Moreover, the regional surface environment should be considered during feature construction. Certain features can enhance the information of the target object and mask other objects; such features (e.g., spectral indices and phenological features) aid in extracting objects from background information. In future research, features can be constructed and selected according to the characteristics of the local crop growth environment, so as to improve classification speed while ensuring high classification accuracy. Additionally, the classification attributes should be reconsidered when new factors affect the reflectance characteristics of crops in the region, such as elevation, texture, or shape. Finally, this paper incorporates the red-edge band features of Sentinel-2 data in the construction of the feature space. The experiments show that using red-edge information as a crop extraction feature is effective, which has scientific value for exploiting the red-edge band in future crop extraction research. In general, this paper discusses the optimization of crop extraction features and puts forward views that provide a useful reference for crop information extraction research.

Author Contributions

For this research article, Y.H., C.W., F.C., H.J., and D.L. conceptualized and conceived the experiments; Y.H. conducted the experiments and wrote the original draft; F.C., H.J., and A.Y. reviewed and edited the paper; C.W., F.C., and H.J. supervised the experiments; F.C., H.J., and A.Y. provided project administration and funding support; and C.W., D.L., and A.Y. provided resources.

Funding

This research and the APC were funded by the National Key R&D Program of China (grant number 2017YFE0100800), the International Partnership Program of the Chinese Academy of Sciences (grant number 131211KYSB20170046), and the National Natural Science Foundation of China (41671505).

Acknowledgments

This work was supported by the National Key R&D Program of China (Grant No. 2017YFE0100800), the International Partnership Program of the Chinese Academy of Sciences (Grant No. 131211KYSB20170046), and the National Natural Science Foundation of China (41671505). Sentinel-2 data was provided by the European Space Agency (ESA) and reference data, Mapping Germany’s Agricultural Landscape, was released by ESA’s Copernicus Sentinel-2 Monitoring Program on 29 August 2017.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Intergovernmental Panel on Climate Change (IPCC). Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change; IPCC: Geneva, Switzerland, 2014; p. 151. [Google Scholar]
  2. Teluguntla, P.; Thenkabail, P.S.; Xiong, J.; Gumma, M.K.; Congalton, R.G.; Oliphant, A.J.; Sankey, T.; Poehnelt, J.; Yadav, K.; Massey, R.; et al. NASA Making Earth System Data Records for Use in Research Environments (MEaSUREs) Global Food Security-support Analysis Data (GFSAD) @ 30-m for Australia, New Zealand, China, and Mongolia: Cropland Extent Product (GFSAD30AUNZCNMOCE); USGS, EROS, NASA EOSDIS Land Processes DAAC: Sioux Falls, SD, USA, 2017.
  3. Gumma, M.K.; Thenkabail, P.S.; Teluguntla, P.; Oliphant, A.J.; Xiong, J.; Congalton, R.G.; Yadav, K.; Phalke, A.; Smith, C. NASA Making Earth System Data Records for Use in Research Environments (MEaSUREs) Global Food Security-support Analysis Data (GFSAD) @ 30-m for South Asia, Afghanistan and Iran: Cropland Extent Product (GFSAD30SAAFGIRCE); USGS, EROS, NASA EOSDIS Land Processes DAAC: Sioux Falls, SD, USA, 2017.
  4. Xiong, J.; Thenkabail, P.S.; Gumma, M.K.; Teluguntla, P.; Poehnelt, J.; Congalton, R.G.; Yadav, K.; Thau, D. Automated cropland mapping of continental Africa using Google Earth Engine cloud computing. ISPRS J. Photogramm. Remote Sens. 2017, 126, 225–244. [Google Scholar] [CrossRef]
  5. Teluguntla, P.; Thenkabail, P.S.; Xiong, J.; Gumma, M.K.; Congalton, R.G.; Oliphant, A.; Poehnelt, J.; Yadav, K.; Rao, M.; Massey, R. Spectral matching techniques (SMTs) and automated cropland classification algorithms (ACCAs) for mapping croplands of Australia using MODIS 250-m time-series (2000–2015) data. Int. J. Digit. Earth 2017, 10, 944–977. [Google Scholar] [CrossRef]
  6. Xiong, J.; Thenkabail, P.; Tilton, J.; Gumma, M.; Teluguntla, P.; Oliphant, A.; Congalton, R.; Yadav, K.; Gorelick, N. Nominal 30-m Cropland Extent Map of Continental Africa by Integrating Pixel-Based and Object-Based Algorithms Using Sentinel-2 and Landsat-8 Data on Google Earth Engine. Remote Sens. 2017, 9, 1065. [Google Scholar] [CrossRef]
  7. Song, X.-P.; Potapov, P.V.; Krylov, A.; King, L.; Di Bella, C.M.; Hudson, A.; Khan, A.; Adusei, B.; Stehman, S.V.; Hansen, M.C. National-scale soybean mapping and area estimation in the United States using medium resolution satellite imagery and field survey. Remote Sens. Environ. 2017, 190, 383–395. [Google Scholar] [CrossRef]
  8. Huang, Y.; Chen, Z.; Yu, T.; Huang, X.; Gu, X. Agricultural remote sensing big data: Management and applications. J. Integr. Agric. 2018, 17, 1915–1931. [Google Scholar] [CrossRef]
  9. Zhu, J.; Shi, Q.; Chen, F.; Shi, X.; Do, Z.; Qin, Q. Research status and development trends of remote sensing big data. J. Image Graph. 2016, 21, 1425–1439. [Google Scholar] [CrossRef]
  10. Waldner, F.; Lambert, M.-J.; Li, W.; Weiss, M.; Demarez, V.; Morin, D.; Marais-Sicre, C.; Hagolle, O.; Baret, F.; Defourny, P. Land Cover and Crop Type Classification along the Season Based on Biophysical Variables Retrieved from Multi-Sensor High-Resolution Time Series. Remote Sens. 2015, 7, 10400–10424. [Google Scholar] [CrossRef]
  11. Rodríguez-Galiano, V.F.; Abarca-Hernández, F.; Ghimire, B.; Chica-Olmo, M.; Atkinson, P.M.; Jeganathan, C. Incorporating Spatial Variability Measures in Land-cover Classification using Random Forest. Procedia Environ. Sci. 2011, 3, 44–49. [Google Scholar] [CrossRef]
  12. Rodriguez-Galiano, V.F.; Chica-Olmo, M.; Abarca-Hernandez, F.; Atkinson, P.M.; Jeganathan, C. Random Forest classification of Mediterranean land cover using multi-seasonal imagery and multi-seasonal texture. Remote Sens. Environ. 2012, 121, 93–107. [Google Scholar] [CrossRef]
  13. Bargiel, D. A new method for crop classification combining time series of radar images and crop phenology information. Remote Sens. Environ. 2017, 198, 369–383. [Google Scholar] [CrossRef]
  14. Hao, P.; Wang, L.; Niu, Z.; Aablikim, A.; Huang, N.; Xu, S.; Chen, F. The Potential of Time Series Merged from Landsat-5 TM and HJ-1 CCD for Crop Classification: A Case Study for Bole and Manas Counties in Xinjiang, China. Remote Sens. 2014, 6, 7610–7631. [Google Scholar] [CrossRef]
  15. Yan, D.; de Beurs, K.M. Mapping the distributions of C3 and C4 grasses in the mixed-grass prairies of southwest Oklahoma using the Random Forest classification algorithm. Int. J. Appl. Earth Obs. Geoinf. 2016, 47, 125–138. [Google Scholar] [CrossRef]
  16. Wang, N.; Li, Q.; Du, X.; Zhang, Y.; Zhao, L.; Wang, H. Identification of main crops based on the univariate feature selection in Subei. J. Remote Sens. 2017, 21, 519–530. [Google Scholar] [CrossRef]
  17. Teluguntla, P.; Thenkabail, P.S.; Oliphant, A.; Xiong, J.; Gumma, M.K.; Congalton, R.G.; Yadav, K.; Huete, A. A 30-m landsat-derived cropland extent product of Australia and China using random forest machine learning algorithm on Google Earth Engine cloud computing platform. ISPRS J. Photogramm. Remote Sens. 2018, 144, 325–340. [Google Scholar] [CrossRef]
  18. Pflugmacher, D.; Rabe, A.; Peters, M.; Hostert, P. Mapping pan-European land cover using Landsat spectral-temporal metrics and the European LUCAS survey. Remote Sens. Environ. 2019, 221, 583–595. [Google Scholar] [CrossRef]
  19. Griffiths, P.; Nendel, C.; Hostert, P. Intra-annual reflectance composites from Sentinel-2 and Landsat for national-scale crop and land cover mapping. Remote Sens. Environ. 2019, 220, 135–151. [Google Scholar] [CrossRef]
  20. Wu, B.; Zhang, M. Remote sensing: Observations to data products. Acta Geogr. Sin. 2017, 72, 2093–2111. [Google Scholar] [CrossRef]
  21. Breiman, L. Random Forest. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  22. Hao, P.; Zhan, Y.; Wang, L.; Niu, Z.; Shakir, M. Feature Selection of Time Series MODIS Data for Early Crop Classification Using Random Forest: A Case Study in Kansas, USA. Remote Sens. 2015, 7, 5347–5369. [Google Scholar] [CrossRef]
  23. Pelletier, C.; Valero, S.; Inglada, J.; Champion, N.; Dedieu, G. Assessing the robustness of Random Forests to map land cover with high resolution satellite image time series over large areas. Remote Sens. Environ. 2016, 187, 156–168. [Google Scholar] [CrossRef]
  24. Gao, J.; Nuyttens, D.; Lootens, P.; He, Y.; Pieters, J.G. Recognising weeds in a maize crop using a random forest machine-learning algorithm and near-infrared snapshot mosaic hyperspectral imagery. Biosyst. Eng. 2018, 170, 39–50. [Google Scholar] [CrossRef]
  25. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random Forests for land cover classification. Pattern. Recogn. Lett. 2006, 27, 294–300. [Google Scholar] [CrossRef]
  26. Jhonnerie, R.; Siregar, V.P.; Nababan, B.; Prasetyo, L.B.; Wouthuyzen, S. Random Forest Classification for Mangrove Land Cover Mapping Using Landsat 5 TM and Alos Palsar Imageries. Procedia Environ. Sci. 2015, 24, 215–221. [Google Scholar] [CrossRef]
  27. Melville, B.; Lucieer, A.; Aryal, J. Object-based random forest classification of Landsat ETM+ and WorldView2 satellite imagery for mapping lowland native grassland communities in Tasmania, Australia. Int. J. Appl. Earth Obs. Geoinf. 2018, 66, 46–55. [Google Scholar] [CrossRef]
  28. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104. [Google Scholar] [CrossRef]
  29. Tatsumi, K.; Yamashiki, Y.; Canales Torres, M.A.; Taipe, C.L.R. Crop classification of upland fields using Random forest of time-series Landsat 7 ETM+ data. Comput. Electron. Agric. 2015, 115, 171–179. [Google Scholar] [CrossRef]
  30. Huang, H.; Chen, Y.; Clinton, N.; Wang, J.; Wang, X.; Liu, C.; Gong, P.; Yang, J.; Bai, Y.; Zheng, Y.; et al. Mapping major land cover dynamics in Beijing using all Landsat images in Google Earth Engine. Remote Sens. Environ. 2017, 202, 166–176. [Google Scholar] [CrossRef]
  31. Zurqani, H.A.; Post, C.J.; Mikhailova, E.A.; Schlautman, M.A.; Sharp, J.L. Geospatial analysis of land use change in the Savannah River Basin using Google Earth Engine. Int. J. Appl. Earth Obs. Geoinf. 2018, 69, 175–185. [Google Scholar] [CrossRef]
  32. Wang, X.; Xiao, X.; Zou, Z.; Chen, B.; Ma, J.; Dong, J.; Doughty, R.B.; Zhong, Q.; Qin, Y.; Dai, S.; et al. Tracking annual changes of coastal tidal flats in China during 1986–2016 through analyses of Landsat images with Google Earth Engine. Remote Sens. Environ. 2018. [Google Scholar] [CrossRef]
  33. Liu, X.; Hu, G.; Chen, Y.; Li, X.; Xu, X.; Li, S.; Pei, F.; Wang, S. High-resolution multi-temporal mapping of global urban land using Landsat images based on the Google Earth Engine Platform. Remote Sens. Environ. 2018, 209, 227–239. [Google Scholar] [CrossRef]
  34. Li, H.; Wan, W.; Fang, Y.; Zhu, S.; Chen, X.; Liu, B.; Hong, Y. A Google Earth Engine-enabled software for efficiently generating high-quality user-ready Landsat mosaic images. Environ. Model. Softw. 2019, 112, 16–22. [Google Scholar] [CrossRef]
  35. Yin, H.; Prishchepov, A.V.; Kuemmerle, T.; Bleyhl, B.; Buchner, J.; Radeloff, V.C. Mapping agricultural land abandonment from spatial and temporal segmentation of Landsat time series. Remote Sens. Environ. 2018, 210, 12–24. [Google Scholar] [CrossRef]
  36. Kussul, N.; Lemoine, G.; Gallego, F.J.; Skakun, S.V.; Lavreniuk, M.; Shelestov, A.Y. Parcel-Based Crop Classification in Ukraine Using Landsat-8 Data and Sentinel-1A Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2500–2508. [Google Scholar] [CrossRef]
  37. Tian, H.; Wu, M.; Wang, L.; Niu, Z. Mapping Early, Middle and Late Rice Extent Using Sentinel-1A and Landsat-8 Data in the Poyang Lake Plain, China. Sensors 2018, 18, 185. [Google Scholar] [CrossRef] [PubMed]
38. Belgiu, M.; Csillik, O. Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sens. Environ. 2018, 204, 509–523. [Google Scholar] [CrossRef]
  39. Frampton, W.J.; Dash, J.; Watmough, G.; Milton, E.J. Evaluating the capabilities of Sentinel-2 for quantitative estimation of biophysical variables in vegetation. ISPRS J. Photogramm. Remote Sens. 2013, 82, 83–92. [Google Scholar] [CrossRef]
  40. Vuolo, F.; Neuwirth, M.; Immitzer, M.; Atzberger, C.; Ng, W.-T. How much does multi-temporal Sentinel-2 data improve crop type classification? Int. J. Appl. Earth Obs. Geoinf. 2018, 72, 122–130. [Google Scholar] [CrossRef]
  41. Lessio, A.; Fissore, V.; Borgogno-Mondino, E. Preliminary Tests and Results Concerning Integration of Sentinel-2 and Landsat-8 OLI for Crop Monitoring. J. Imaging 2017, 3, 49. [Google Scholar] [CrossRef]
  42. Li, Z.; Zhang, H.K.; Roy, D.P.; Yan, L.; Huang, H.; Li, J. Landsat 15-m Panchromatic-Assisted Downscaling (LPAD) of the 30-m Reflective Wavelength Bands to Sentinel-2 20-m Resolution. Remote Sens. 2017, 9, 755. [Google Scholar] [CrossRef]
  43. Skakun, S.; Vermote, E.; Roger, J.C.; Franch, B. Combined use of Landsat-8 and Sentinel-2A images for winter crop mapping and winter wheat yield assessment at regional scale. Aims Geosci. 2017, 3, 163–186. [Google Scholar] [CrossRef]
  44. Clark, M.L. Comparison of simulated hyperspectral HyspIRI and multispectral Landsat 8 and Sentinel-2 imagery for multi-seasonal, regional land-cover mapping. Remote Sens. Environ. 2017, 200, 311–325. [Google Scholar] [CrossRef]
  45. Novelli, A.; Aguilar, M.; Nemmaoui, A.; Aguilar, F.; Tarantino, E. Performance evaluation of object based greenhouse detection from Sentinel-2 MSI and Landsat 8 OLI data: A case study from Almería (Spain). Int. J. Appl. Earth Obs. Geoinf. 2016, 52, 403–411. [Google Scholar] [CrossRef]
  46. Shoko, C.; Mutanga, O. Examining the strength of the newly-launched Sentinel 2 MSI sensor in detecting and discriminating subtle differences between C3 and C4 grass species. ISPRS J. Photogramm. Remote Sens. 2017, 129, 32–40. [Google Scholar] [CrossRef]
  47. Vuolo, F.; Żółtak, M.; Pipitone, C.; Zappa, L.; Wenng, H.; Immitzer, M.; Weiss, M.; Baret, F.; Atzberger, C. Data Service Platform for Sentinel-2 Surface Reflectance and Value-Added Products: System Use and Examples. Remote Sens. 2016, 8, 938. [Google Scholar] [CrossRef]
  48. Chastain, R.; Housman, I.; Goldstein, J.; Finco, M.; Tenneson, K. Empirical cross sensor comparison of Sentinel-2A and 2B MSI, Landsat-8 OLI, and Landsat-7 ETM+ top of atmosphere spectral characteristics over the conterminous United States. Remote Sens. Environ. 2019, 221, 274–285. [Google Scholar] [CrossRef]
  49. Thenkabail, P.S.; Biradar, C.M.; Turral, H.; Noojipady, P.; Li, Y.J.; Vithanage, J.; Dheeravath, V.; Velpuri, M.; Schull, M.; Cai, X.L.; Dutta, R. An Irrigated Area Map of the World (1999) derived from Remote Sensing. IWMI Res. Rep. 2006, 105. [Google Scholar] [CrossRef]
  50. Biradar, C.M.; Thenkabail, P.S.; Noojipady, P.; Li, Y.; Dheeravath, V.; Turral, H.; Velpuri, M.; Gumma, M.K.; Gangalakunta, O.R.P.; Cai, X.L.; et al. A global map of rainfed cropland areas (GMRCA) at the end of last millennium using remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2009, 11, 114–129. [Google Scholar] [CrossRef]
51. Velpuri, N.M.; Thenkabail, P.S.; Gumma, M.K.; Biradar, C.; Dheeravath, V.; Noojipady, P.; Yuanjie, L. Influence of resolution in irrigated area mapping and area estimation. Photogramm. Eng. Remote Sens. 2009, 75, 1383–1396. [Google Scholar] [CrossRef]
  52. Dheeravath, V.; Thenkabail, P.S.; Chandrakantha, G.; Noojipady, P.; Reddy, G.P.O.; Biradar, C.M.; Gumma, M.K.; Velpuri, M. Irrigated areas of India derived using MODIS 500 m time series for the years 2001–2003. ISPRS J. Photogramm. Remote Sens. 2010, 65, 42–59. [Google Scholar] [CrossRef]
  53. Radoux, J.; Chomé, G.; Jacques, D.; Waldner, F.; Bellemans, N.; Matton, N.; Lamarche, C.; d’Andrimont, R.; Defourny, P. Sentinel-2’s Potential for Sub-Pixel Landscape Feature Detection. Remote Sens. 2016, 8, 488. [Google Scholar] [CrossRef]
  54. Meier, U. Growth Stages of Mono-and Dicotyledonous Plants—BBCH Monograph; Federal Biological Research Centre for Agriculture and Forestry: Bonn, Germany, 2001. [Google Scholar]
55. Breiman, L. Bagging Predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef]
56. Dietterich, T.G. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Mach. Learn. 2000, 40, 139–157. [Google Scholar] [CrossRef]
  57. Congalton, R.G. A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
58. Potere, D. Horizontal Positional Accuracy of Google Earth’s High-Resolution Imagery Archive. Sensors 2008, 8, 7973–7981. [Google Scholar] [CrossRef]
  59. Jaafari, S.; Nazarisamani, A. Comparison between Land Use/Land Cover Mapping Through Landsat and Google Earth Imagery. Am. Eurasian J. Agric. Environ. Sci. 2013, 13, 763–768. [Google Scholar] [CrossRef]
  60. Dorais, A.; Cardille, J. Strategies for Incorporating High-Resolution Google Earth Databases to Guide and Validate Classifications: Understanding Deforestation in Borneo. Remote Sens. 2011, 3, 1157–1176. [Google Scholar] [CrossRef]
  61. Cha, S.-Y.; Park, C.-H. The Utilization of Google Earth Images as Reference Data for The Multitemporal Land Cover Classification with MODIS Data of North Korea. Korean J. Remote Sens. 2007, 23, 483–491. [Google Scholar]
  62. Goudarzi, M.A.; Landry, R., Jr. Assessing horizontal positional accuracy of Google Earth imagery in the city of Montreal, Canada. Geod. Cartogr. 2017, 43, 56–65. [Google Scholar] [CrossRef]
63. Story, M.; Congalton, R.G. Accuracy Assessment: A User’s Perspective. Photogramm. Eng. Remote Sens. 1986, 52, 397–399. [Google Scholar]
64. Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar] [CrossRef]
65. Rouse, J.W., Jr.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with ERTS. In Third Earth Resources Technology Satellite-1 Symposium—Volume I: Technical Presentations; NASA: Washington, DC, USA, 1974; pp. 309–371. [Google Scholar]
  66. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the Radiometric and Biophysical Performance of the MODIS Vegetation Indices. Remote Sens. Environ. 2002, 83, 195–213. [Google Scholar] [CrossRef]
  67. Kauth, R.J.; Thomas, G.S. The Tasselled Cap—A Graphic Description of the Spectral-Temporal Development of Agricultural Crops as Seen by Landsat; LARS Symposia Paper 159; Purdue University: West Lafayette, IN, USA, 1976; pp. 40–51. [Google Scholar]
68. Sripada, R.P.; Heiniger, R.W.; White, J.G.; Meijer, A.D. Aerial Color Infrared Photography for Determining Early In-Season Nitrogen Requirements in Corn. Agron. J. 2006, 98, 968–977. [Google Scholar] [CrossRef]
  69. Boegha, E.; Soegaard, H.; Broge, N.; Hasager, C.B.; Jensen, N.O.; Schelde, K.; Thomsen, A. Airborne Multi-spectral Data for Quantifying Leaf Area Index-Nitrogen Concentration and Photosynthetic Efficiency in Agriculture. Remote Sens. Environ. 2002, 81, 179–193. [Google Scholar] [CrossRef]
  70. Zha, Y.; Gao, J.; Ni, S. Use of Normalized Difference Built-Up Index in Automatically Mapping Urban Areas from TM Imagery. Int. J. Remote Sens. 2003, 24, 583–594. [Google Scholar] [CrossRef]
  71. Hall, D.K.; Riggs, G.A.; Salomonson, V.V. Development of Methods for Mapping Global Snow Cover Using Moderate Resolution Imaging Spectroradiometer Data. Remote Sens. Environ. 1995, 54, 127–140. [Google Scholar] [CrossRef]
72. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  73. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033. [Google Scholar] [CrossRef]
  74. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
75. Lobell, D.B.; Asner, G.P. Hyperion Studies of Crop Stress in Mexico. In Proceedings of the 12th Annual JPL Airborne Earth Sci. Workshop, Pasadena, CA, USA, 24–28 February 2003; pp. 1–6. [Google Scholar]
76. Birth, G.S.; McVey, G.R. Measuring the Color of Growing Turf with a Reflectance Spectrophotometer. Agron. J. 1968, 60, 640–643. [Google Scholar]
  77. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  78. Jiang, Q.; Liu, H. Extracting TM Image Information Using Texture Analysis. J. Remote Sens. 2004, 8, 458–464. [Google Scholar]
  79. Fernández-Manso, A.; Fernández-Manso, O.; Quintano, C. SENTINEL-2A red-edge spectral indices suitability for discriminating burn severity. Int. J. Appl. Earth Obs. Geoinf. 2016, 50, 170–175. [Google Scholar] [CrossRef]
  80. Dube, T.; Mutanga, O.; Sibanda, M.; Shoko, C.; Chemura, A. Evaluating the influence of the Red Edge band from RapidEye sensor in quantify. Phys. Chem. Earth 2017, 100, 73–80. [Google Scholar] [CrossRef]
  81. Li, L.; Ren, T.; Ma, Y.; Wei, Q.; Wang, S.; Li, X.; Cong, R.; Liu, S.; Lu, J. Evaluating chlorophyll density in winter oilseed rape ( Brassica napus L.) using canopy hyperspectral red-edge parameters. Comput. Electron. Agric. 2016, 126, 21–31. [Google Scholar] [CrossRef]
  82. Guo, B.-B.; Qi, S.-L.; Heng, Y.-R.; Duan, J.-Z.; Zhang, H.-Y.; Wu, Y.-P.; Feng, W.; Xie, Y.-X.; Zhu, Y.-J. Remotely assessing leaf N uptake in winter wheat based on canopy hyperspectral red-edge absorption. Eur. J. Agron. 2017, 82, 113–124. [Google Scholar] [CrossRef]
  83. Nitze, I.; Barrett, B.; Cawkwell, F. Temporal optimisation of image acquisition for land cover classification with Random Forest and MODIS time-series. Int. J. Appl. Earth Obs. Geoinf. 2015, 34, 136–146. [Google Scholar] [CrossRef]
  84. Smith, V.; Portillo-Quintero, C.; Sanchez-Azofeifa, A.; Hernandez-Stefanoni, J.L. Assessing the accuracy of detected breaks in Landsat time series as predictors of small scale deforestation in tropical dry forests of Mexico and Costa Rica. Remote Sens. Environ. 2019, 221, 707–721. [Google Scholar] [CrossRef]
  85. Zhang, Y.; Ling, F.; Foody, G.M.; Ge, Y.; Boyd, D.S.; Li, X.; Du, Y.; Atkinson, P.M. Mapping annual forest cover by fusing PALSAR/PALSAR-2 and MODIS NDVI during 2007–2016. Remote Sens. Environ. 2019, 224, 74–91. [Google Scholar] [CrossRef]
  86. Schultz, B.; Immitzer, M.; Formaggio, A.; Sanches, I.; Luiz, A.; Atzberger, C. Self-Guided Segmentation and Classification of Multi-Temporal Landsat 8 Images for Crop Type Mapping in Southeastern Brazil. Remote Sens. 2015, 7, 14482–14508. [Google Scholar] [CrossRef]
Figure 1. Location of the study area depicted on a shaded terrain map, together with a visualization of the cultivated land in Landsat-8 imagery and the random sampling points.
Figure 2. Temporal coverage of the Landsat-8 Operational Land Imager (OLI) and Sentinel-2 Multispectral Instrument (MSI) time-series images used in this study, together with the mean crop growth period in the area. The images (near-infrared, red, and green bands as RGB) were captured on 2 May (MSI), 5 May (OLI), 6 June (OLI), 25 August (OLI), 9 September (MSI), 10 September (OLI), and 29 November (OLI), and are labeled 0502, 0505, 0606, 0825, 0909, 0910, and 1129, respectively.
Figure 3. Workflow for winter wheat detection and mapping using the random forest algorithm.
Figure 4. Four cases that occurred when the random sampling points were overlaid on the remote sensing images.
Figure 5. Top 10 variables of the 30 multi-feature subsets (MFSs), ranked by feature importance within each MFS and grouped into five multi-patterns (MPs) named I, II, III, IV, and V. (a) Top 10 winter wheat predictor variables derived from a single feature type; (b) from the combination of two feature types; (c) from the combination of three feature types; (d) from the combination of four feature types; and (e) from the combination of five feature types. Each bar represents the importance score attributed to a predictor in a model run.
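The importance rankings in Figure 5 can, in principle, be reproduced with any random forest implementation that reports impurity-based importance scores. The following minimal sketch assumes scikit-learn, a hypothetical CSV of training samples, and an illustrative feature list for one MFS; it is not the authors' original code or data.

```python
# Minimal sketch (assumed tooling: scikit-learn and pandas); the CSV file, its column
# names, and the example MFS below are illustrative assumptions, not the study's data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

samples = pd.read_csv("training_samples.csv")  # hypothetical table: features plus a 0/1 wheat label

# One multi-feature subset (MFS), here an S + D combination following the naming of Table 2.
mfs = ["ndvi_502", "ndvi_505", "ndvi_606", "savi_825", "mndwi_909",
       "ndvi_502_505", "ndvi_606_825", "ndvi_825_1129"]

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(samples[mfs], samples["winter_wheat"])

# Rank predictors by mean decrease in impurity, as visualized by the bars in Figure 5.
importance = pd.Series(rf.feature_importances_, index=mfs).sort_values(ascending=False)
print(importance.head(10))  # top 10 winter wheat predictor variables for this MFS
```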
Figure 6. Diagram of three different geomorphic regions located in the study area. The region with blue validation polygons is marked as Zone 1, and is located in pure farmland areas. The region with red validation polygons is marked as Zone 2, and is located in urban mixed areas. The region with yellow validation polygons is marked as Zone 3, and is located in forested areas.
Figure 7. Wheat mapping results when using the 30 multi-feature subsets (MFSs). The columns represent the three geomorphic regions (Zone 1, Zone 2, and Zone 3), and the rows represent the 30 MFSs.
Figure 8. Precision results among multi-patterns (MPs) over Zone 1, Zone 2, and Zone 3.
Figure 9. Line chart of precision among multi-patterns (MPs) when using OA as an indicator.
Figure 10. Accuracy results for each zone. (a) Accuracy results for Zone 1; (b) Accuracy results for Zone 2; and (c) Accuracy results for Zone 3.
Figure 11. Phenological calendar of the local crops.
Table 1. Regional and global remote sensing data products for crop and land-cover classification.
Institution | Products | Scale | Resolution
European Space Agency (ESA) (30 m Sentinel-2 and Landsat-8) | Mapping Germany's Agricultural Landscape [19] | Regional | 30 m
McGill University | Cropland and pasture area in 2000 | Global | 10 km
International Water Management Institute (IWMI) | 10-km irrigated and rain-fed cropland [49,50] | Global | 10 km
International Water Management Institute (IWMI) | 10-km global land-use/cover (LULC) map | Global | 10 km
International Water Management Institute (IWMI) | 500-m irrigated area map for South Asia | Regional | 500 m
International Water Management Institute (IWMI) | 30-m irrigated area maps for the Syr Darya river basin in Central Asia and the Krishna River basin in India [51,52] | Regional | 30 m
United States Geological Survey (USGS) (GFSAD30 and Landsat) | 30-m global land cover [2,3] | Global | 30 m
Table 2. The 131 multi-features (MF) derived from Landsat-8 and Sentinel-2 imagery used as predictor variables for the random forest model.
Feature | 0502 | 0505 | 0606 | 0825 | 0909 | 0910 | 1129
Bands | 2 3 4 8 11 12 | 2 3 4 5 6 7 | 2 3 4 5 6 7 | 2 3 4 5 6 7 | 2 3 4 8 11 12 | 2 3 4 5 6 7 | 2 3 4 5 6 7
Spectral Indices | ndvi_502 | ndvi_505 | ndvi_606 | ndvi_825 | ndvi_909 | ndvi_910 | ndvi_1129
 | ndbi_502 | ndbi_505 | ndbi_606 | ndbi_825 | ndbi_909 | ndbi_910 | ndbi_1129
 | ndsi_502 | ndsi_505 | ndsi_606 | ndsi_825 | ndsi_909 | ndsi_910 | ndsi_1129
 | mndwi_502 | mndwi_505 | mndwi_606 | mndwi_825 | mndwi_909 | mndwi_910 | mndwi_1129
 | savi_502 | savi_505 | savi_606 | savi_825 | savi_909 | savi_910 | savi_1129
Principal Component Analysis (PCA) | pca1_502 | pca1_505 | pca1_606 | pca1_825 | pca1_909 | pca1_910 | pca1_1129
 | pca2_502 | pca2_505 | pca2_606 | pca2_825 | pca2_909 | pca2_910 | pca2_1129
 | pca3_502 | pca3_505 | pca3_606 | pca3_825 | pca3_909 | pca3_910 | pca3_1129
Diff_Ndvi | ndvi_502_505 | ndvi_505_606 | ndvi_606_825 | ndvi_825_909 | ndvi_909_910 | ndvi_910_1129 |
 | ndvi_502_606 | ndvi_505_825 | ndvi_606_909 | ndvi_825_910 | ndvi_909_1129 | |
 | ndvi_502_825 | ndvi_505_909 | ndvi_606_910 | ndvi_825_1129 | | |
 | ndvi_502_909 | ndvi_505_910 | ndvi_606_1129 | | | |
 | ndvi_502_910 | ndvi_505_1129 | | | | |
 | ndvi_502_1129 | | | | | |
Red_Edge | 5 6 7 | | | | 5 6 7 | |
 | mre_ndvi_502 | | | | mre_ndvi_909 | |
 | mre_sr_502 | | | | mre_sr_909 | |
 | re_ndvi_502 | | | | re_ndvi_909 | |
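To illustrate how the per-date features in Table 2 are built from the band stacks, the sketch below computes NDVI [65], NDBI [70], and SAVI [74] for single dates and one Diff_Ndvi feature as the NDVI change between two dates. The array names and random example data are assumptions for illustration; in practice the bands would be read from the preprocessed OLI/MSI images.

```python
# Minimal sketch of the per-date feature construction in Table 2, assuming the bands
# have already been loaded as NumPy arrays; variable names and data are illustrative.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index [65]."""
    return (nir - red) / (nir + red + 1e-10)

def ndbi(swir1, nir):
    """Normalized Difference Built-up Index [70]."""
    return (swir1 - nir) / (swir1 + nir + 1e-10)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil adjustment factor L [74]."""
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Hypothetical reflectance arrays for two acquisition dates (0505 and 0825).
red_0505, nir_0505 = np.random.rand(100, 100), np.random.rand(100, 100)
red_0825, nir_0825 = np.random.rand(100, 100), np.random.rand(100, 100)

ndvi_505 = ndvi(nir_0505, red_0505)
ndvi_825 = ndvi(nir_0825, red_0825)

# A Diff_Ndvi feature: the NDVI change between two dates, which carries phenology information.
ndvi_505_825 = ndvi_505 - ndvi_825
```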
Table 3. Wheat mapping accuracy for the 30 multi-feature subsets (MFSs). Red background filling marks the lowest value of each index, and green background filling of different depths marks the three highest accuracies. The measurement unit of overall accuracy (OA), producer's accuracy (PA), and user's accuracy (UA) is %. AUC: area under the curve. B: original spectral bands; S: spectral indices; P: principal component information; D: NDVI differences that reflect phenology information of different crops; R: red-edge feature space.
MP | MFS | Zone 1 (OA, PA, UA, AUC, F-score, Kappa) | Zone 2 (OA, PA, UA, AUC, F-score, Kappa) | Zone 3 (OA, PA, UA, AUC, F-score, Kappa)
I | B | 84.84, 79.99, 81.74, 0.8403, 0.8086, 0.6831 | 92.50, 83.59, 79.71, 0.8915, 0.8161, 0.7690 | 89.94, 64.59, 48.72, 0.7863, 0.5555, 0.5000
I | S | 86.52, 79.47, 85.80, 0.8535, 0.8251, 0.7157 | 91.98, 74.66, 83.33, 0.8547, 0.7876, 0.7384 | 91.67, 57.66, 57.13, 0.7650, 0.5740, 0.5278
I | P | 85.19, 78.98, 83.16, 0.8415, 0.8102, 0.6889 | 91.14, 75.85, 78.86, 0.8540, 0.7733, 0.7183 | 91.11, 75.79, 53.02, 0.8428, 0.6239, 0.5753
I | D | 86.54, 83.15, 83.19, 0.8597, 0.8317, 0.7195 | 92.69, 84.56, 79.91, 0.8964, 0.8217, 0.7758 | 92.06, 65.81, 58.15, 0.8035, 0.6174, 0.5733
I | R | 86.55, 82.09, 83.95, 0.8581, 0.8301, 0.7189 | 92.96, 81.15, 83.08, 0.8852, 0.8210, 0.7772 | 92.94, 69.22, 62.38, 0.8236, 0.6562, 0.6170
I | Avg | 85.93, 80.74, 83.57, 0.8506, 0.8211, 0.7052 | 92.26, 79.96, 80.98, 0.8764, 0.8039, 0.7557 | 91.54, 66.62, 55.88, 0.8042, 0.6054, 0.5587
II | BS | 85.83, 76.89, 86.20, 0.8434, 0.8128, 0.6993 | 93.34, 82.50, 83.79, 0.8927, 0.8314, 0.7899 | 93.63, 60.33, 70.08, 0.7877, 0.6484, 0.6136
II | BP | 86.02, 75.45, 87.89, 0.8426, 0.8120, 0.7017 | 91.27, 69.69, 83.72, 0.8316, 0.7606, 0.7078 | 92.91, 62.51, 63.85, 0.7935, 0.6317, 0.5925
II | BD | 84.97, 79.86, 82.08, 0.8412, 0.8096, 0.6854 | 91.21, 80.00, 76.83, 0.8700, 0.7838, 0.7287 | 91.64, 69.96, 55.59, 0.8197, 0.6195, 0.5733
II | BR | 87.91, 82.99, 86.27, 0.8709, 0.8460, 0.7465 | 92.64, 78.45, 83.58, 0.8731, 0.8093, 0.7638 | 93.17, 72.94, 62.85, 0.8414, 0.6752, 0.6372
II | SP | 87.32, 81.17, 86.32, 0.8629, 0.8366, 0.7331 | 93.39, 85.91, 81.79, 0.9058, 0.8380, 0.7965 | 92.54, 71.92, 59.69, 0.8334, 0.6524, 0.6110
II | SD | 87.30, 80.81, 86.57, 0.8622, 0.8359, 0.7325 | 93.15, 84.37, 81.78, 0.8985, 0.8306, 0.7876 | 92.62, 70.13, 60.42, 0.8259, 0.6491, 0.6082
II | SR | 85.80, 79.98, 83.81, 0.8483, 0.8185, 0.7020 | 92.35, 81.07, 80.62, 0.8811, 0.8085, 0.7607 | 93.32, 65.25, 65.80, 0.8079, 0.6552, 0.6182
II | PD | 85.78, 78.73, 84.65, 0.8461, 0.8159, 0.7003 | 92.44, 80.95, 81.04, 0.8812, 0.8100, 0.7627 | 93.66, 66.01, 67.95, 0.8133, 0.6697, 0.6346
II | PR | 87.65, 84.10, 84.89, 0.8706, 0.8449, 0.7423 | 92.85, 81.14, 82.65, 0.8845, 0.8189, 0.7744 | 92.72, 76.60, 59.85, 0.8553, 0.6720, 0.6317
II | DR | 86.35, 80.75, 84.46, 0.8542, 0.8256, 0.7136 | 92.24, 80.75, 80.38, 0.8793, 0.8057, 0.7572 | 93.06, 66.91, 63.63, 0.8139, 0.6523, 0.6137
II | Avg | 86.49, 80.07, 85.31, 0.8542, 0.8258, 0.7157 | 92.49, 80.49, 81.62, 0.8798, 0.8097, 0.7629 | 92.93, 68.25, 62.97, 0.8192, 0.6525, 0.6134
III | BSP | 88.15, 83.84, 86.16, 0.8743, 0.8499, 0.7519 | 92.57, 76.66, 84.56, 0.8659, 0.8041, 0.7584 | 93.38, 73.25, 63.97, 0.8440, 0.6830, 0.6462
III | BSD | 87.88, 79.89, 88.71, 0.8655, 0.8407, 0.7434 | 93.30, 81.19, 84.53, 0.8875, 0.8282, 0.7866 | 93.65, 57.87, 71.46, 0.7769, 0.6395, 0.6051
III | BSR | 87.35, 81.97, 85.79, 0.8646, 0.8384, 0.7346 | 92.94, 83.75, 81.32, 0.8949, 0.8252, 0.7809 | 92.26, 72.41, 58.22, 0.8340, 0.6454, 0.6025
III | BPD | 87.61, 81.81, 86.51, 0.8665, 0.8409, 0.7396 | 92.35, 79.42, 81.67, 0.8749, 0.8053, 0.7577 | 92.74, 69.03, 61.29, 0.8217, 0.6493, 0.6090
III | BPR | 86.76, 82.15, 84.36, 0.8599, 0.8324, 0.7230 | 92.75, 85.29, 79.73, 0.8995, 0.8241, 0.7786 | 91.90, 72.57, 56.54, 0.8328, 0.6356, 0.5908
III | BDR | 87.86, 81.45, 87.35, 0.8679, 0.8430, 0.7441 | 93.70, 84.35, 84.07, 0.9019, 0.8421, 0.8028 | 93.16, 70.90, 63.29, 0.8323, 0.6688, 0.6308
III | SPD | 85.95, 78.73, 85.04, 0.8475, 0.8176, 0.7036 | 92.30, 79.87, 81.15, 0.8763, 0.8050, 0.7571 | 94.15, 64.26, 72.50, 0.8082, 0.6813, 0.6492
III | SPR | 86.96, 81.41, 85.33, 0.8604, 0.8333, 0.7263 | 93.58, 85.20, 83.02, 0.9043, 0.8409, 0.8007 | 92.12, 74.10, 57.36, 0.8408, 0.6466, 0.6031
III | PDR | 87.45, 80.81, 86.90, 0.8634, 0.8375, 0.7354 | 93.10, 84.07, 81.80, 0.8971, 0.8292, 0.7860 | 92.88, 70.65, 61.72, 0.8296, 0.6588, 0.6193
III | Avg | 87.33, 81.34, 86.24, 0.8633, 0.8371, 0.7336 | 92.95, 82.20, 82.43, 0.8891, 0.8227, 0.7788 | 92.91, 69.45, 62.93, 0.8245, 0.6565, 0.6173
IV | BSPD | 86.53, 81.07, 84.62, 0.8562, 0.8281, 0.7174 | 93.27, 85.87, 81.36, 0.9049, 0.8355, 0.7933 | 91.91, 72.57, 56.56, 0.8328, 0.6357, 0.5910
IV | BSPR | 86.85, 81.98, 84.68, 0.8604, 0.8331, 0.7247 | 93.10, 85.18, 81.13, 0.9013, 0.8310, 0.7878 | 92.51, 69.52, 59.94, 0.8225, 0.6438, 0.6022
IV | BSDR | 86.09, 79.35, 84.91, 0.8497, 0.8204, 0.7071 | 92.25, 79.26, 81.34, 0.8737, 0.8029, 0.7547 | 94.04, 61.02, 73.28, 0.7931, 0.6659, 0.6335
IV | BPDR | 85.78, 78.56, 84.78, 0.8458, 0.8155, 0.7001 | 92.00, 78.86, 80.57, 0.8706, 0.7970, 0.7473 | 94.07, 64.08, 71.90, 0.8069, 0.6776, 0.6451
IV | SPDR | 87.58, 82.66, 85.79, 0.8676, 0.8419, 0.7397 | 93.03, 83.74, 81.70, 0.8954, 0.8270, 0.7834 | 92.53, 70.98, 59.80, 0.8292, 0.6491, 0.6077
IV | Avg | 86.57, 80.72, 84.96, 0.8559, 0.8278, 0.7178 | 92.73, 82.58, 81.22, 0.8892, 0.8187, 0.7733 | 93.01, 67.63, 64.30, 0.8169, 0.6544, 0.6159
V | BSPDR | 85.95, 79.25, 84.66, 0.8483, 0.8186, 0.7041 | 92.20, 79.96, 80.68, 0.8760, 0.8032, 0.7546 | 93.82, 66.58, 68.87, 0.8167, 0.6770, 0.6428
 | Total | 86.64, 80.65, 85.20, 0.8564, 0.8284, 0.7191 | 92.62, 81.25, 81.64, 0.8835, 0.8139, 0.7679 | 92.72, 68.16, 62.11, 0.8176, 0.6464, 0.6062
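All indicators reported in Table 3 can be derived from a binary (wheat versus non-wheat) confusion matrix, following the standard definitions [57,63,64]; only the AUC additionally requires the classifier's class probabilities. The counts in the sketch below are arbitrary illustrative values, not results from this study.

```python
# Minimal sketch of the accuracy indicators used in Table 3, computed from a binary
# confusion matrix; the counts below are arbitrary illustrative values.
import numpy as np

# Rows = reference classes, columns = mapped classes: [[TP, FN], [FP, TN]] for the wheat class.
cm = np.array([[420, 35],
               [28, 517]], dtype=float)
tp, fn, fp, tn = cm[0, 0], cm[0, 1], cm[1, 0], cm[1, 1]
n = cm.sum()

oa = (tp + tn) / n                      # overall accuracy (OA)
pa = tp / (tp + fn)                     # producer's accuracy (PA) for wheat
ua = tp / (tp + fp)                     # user's accuracy (UA) for wheat
f_score = 2 * pa * ua / (pa + ua)       # F-score (harmonic mean of PA and UA)

# Cohen's kappa [64]: observed agreement corrected for the chance agreement p_e.
p_o = oa
p_e = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n**2
kappa = (p_o - p_e) / (1 - p_e)

# AUC is omitted here because it is computed from predicted class probabilities (ROC analysis).
print(f"OA={oa:.2%}, PA={pa:.2%}, UA={ua:.2%}, F-score={f_score:.4f}, kappa={kappa:.4f}")
```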
Table 4. The main factors affecting the accuracy of each zone are indicated by the maximum count, marked in red. The measurement unit of OA is %.
Zone 1
MFS | OA | B | S | P | D | R
BSP | 88.15 | 1 | 6 | 3 | |
BR | 87.91 | 7 | | | | 3
BSD | 87.88 | 5 | 3 | | 2 |
BDR | 87.86 | 6 | | | 2 | 2
PR | 87.65 | | | 7 | | 3
Sum | | 19 | 9 | 10 | 4 | 8
Zone 2
MFS | OA | B | S | P | D | R
BDR | 93.70 | 6 | | | 2 | 2
SPR | 93.58 | | 6 | 2 | | 2
SP | 93.39 | | 7 | 3 | |
BS | 93.34 | 3 | 7 | | |
BSD | 93.30 | 3 | 1 | | 3 |
Sum | | 12 | 21 | 8 | 5 | 4
Zone 3
MFS | OA | B | S | P | D | R
SPD | 94.15 | | 3 | 4 | 3 |
BPDR | 94.07 | 4 | | 2 | 2 | 2
BSDR | 94.04 | 4 | 3 | | 2 | 1
BSPDR | 93.82 | 2 | 2 | 3 | 2 | 1
PD | 93.66 | | | 5 | 5 |
Sum | | 10 | 8 | 14 | 14 | 4
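The counts in Table 4 simply tally how many of each MFS's top 10 predictors belong to each feature type. A small helper such as the hypothetical one below, which assumes the feature-naming convention of Table 2 (band features written here as, e.g., "b5_0825"), could perform this bookkeeping.

```python
# Minimal sketch of the Table 4 bookkeeping: count how many of an MFS's top-10
# predictors fall into each feature type (B, S, P, D, R); names follow Table 2.
from collections import Counter

def feature_type(name: str) -> str:
    """Map an illustrative feature name to its type code."""
    if name.startswith(("mre_", "re_")):
        return "R"   # red-edge features
    if name.startswith("pca"):
        return "P"   # principal components
    if name.startswith("ndvi_") and name.count("_") == 2:
        return "D"   # NDVI difference between two dates
    if name.startswith(("ndvi_", "ndbi_", "ndsi_", "mndwi_", "savi_")):
        return "S"   # spectral indices
    return "B"       # original spectral bands (hypothetical naming, e.g. "b5_0825")

# Hypothetical top-10 predictors of one MFS (for illustration only).
top10 = ["ndvi_502", "ndvi_502_1129", "b5_0825", "pca1_505", "ndvi_505_909",
         "mndwi_909", "b11_0909", "re_ndvi_909", "ndvi_606_910", "savi_910"]

print(Counter(feature_type(f) for f in top10))  # e.g. Counter({'S': 3, 'D': 3, 'B': 2, 'P': 1, 'R': 1})
```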
Back to TopTop