A New Method for Crop Type Mapping at the Regional Scale Using Multi-Source and Multi-Temporal Sentinel Imagery

1 State Key Laboratory of Resources and Environmental Information Systems, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China
2 College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100049, China
3 Institute of Ecological Environment Research, Chinese Research Academy of Environmental Sciences, Beijing 100012, China
4 State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing 100012, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(9), 2466; https://doi.org/10.3390/rs15092466
Submission received: 29 March 2023 / Revised: 29 April 2023 / Accepted: 30 April 2023 / Published: 8 May 2023
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Abstract

Crop type mapping at high resolution is crucial for various purposes related to agriculture and food security, including monitoring crop yields, evaluating the potential effects of natural disasters on agricultural production, and analyzing the potential impacts of climate change on agriculture. However, accurately mapping crop types and their extents over large spatial scales remains a challenge. For accurate crop type mapping at the regional scale, this paper proposes a method based on the combination of multiple single-temporal feature images and a time-series feature image derived from Sentinel-1 (SAR) and Sentinel-2 (optical) satellite imagery on the Google Earth Engine (GEE) platform. Firstly, crop type classification was performed separately using multiple single-temporal feature images and the time-series feature image. Secondly, with the help of information entropy, this study proposes a pixel-scale classification accuracy evaluation metric, the CA-score, which was used to conduct a vote on the classification results of the multiple single-temporal images and the time-series feature image to obtain the final crop type map. A comparative analysis showed that the proposed classification method performs well and can accurately map multiple crop types at a 10 m resolution over large spatial scales. The overall accuracy (OA) and the kappa coefficient (KC) were 84.15% and 0.80, respectively. Compared with the classification results based on the time-series feature image alone, the OA improved by 3.37% and the KC by 0.03. In addition, the CA-score proposed in this study effectively reflects the accuracy of crop identification and can serve as a pixel-scale classification accuracy evaluation metric, providing a more comprehensive visual interpretation of classification accuracy. The proposed method and metric have the potential to be applied to the mapping of larger study areas with more complex land cover types using remote sensing.

1. Introduction

Food security is the basis for a healthy life. However, since 2014, the number of hungry people in the world has been increasing [1]. Climate change and extreme events are important drivers of increases in global hunger, as they directly hinder crop production [1,2,3]. In addition, rapid population growth and complex, changing international relations can also affect food security [4,5]. To address these challenges, we need to take effective measures to ensure food security. It is important for government departments to quickly and accurately obtain information on the types and spatial distributions of crops for developing food policies, adjusting the agricultural structure, and ensuring national and global food security [6,7,8,9]. In addition, crop type mapping also forms the basis for research on crop censuses [6,10], growth monitoring [11], disaster assessment [12], and yield estimation [13,14,15]. Compared with traditional field survey methods, remote sensing has become one of the main means of fast and accurate large-scale crop type mapping due to its long-term dynamic monitoring, wide coverage, and low cost, providing good technical support for extracting crop information [16,17,18,19,20].
Although there have been a large number of studies and achievements in land cover mapping using remote sensing [17,21,22,23], accurately mapping crop types over a large area remains challenging. The biggest bottleneck is the data quality of remote sensing images, including their spatial, spectral, radiometric, and temporal resolution. To improve the accuracy of crop type mapping using remote sensing, it is necessary to optimize the data quality of remote sensing images and to develop and improve algorithms for image interpretation and information extraction. Agricultural planting systems are highly fragmented [24] and dynamic, especially in small-scale agriculture. This requires remote sensing images of high spatial and temporal resolution, with a spatial resolution consistent with the scale of the fields and shorter revisit periods to avoid interference from clouds and other factors, as well as to reveal more detailed changes in crop growth periods and land use [17]. Higher spectral resolution also helps to identify crop types. In the past, regional crop type mapping was mainly carried out using satellite data from the Landsat series and MODIS, but their spatial resolution and revisit periods limited high-precision crop type mapping. The emergence of Sentinel-2 has provided an unprecedented fine-scale crop monitoring capability [25]. Sentinel-2's higher spatial and temporal resolution, as well as its wide spectral range, make it an important tool for crop type mapping. However, the data from a single satellite are often insufficient to achieve complete coverage of a study area, so Landsat data are often used as a supplement to Sentinel-2 data to obtain complete coverage [26,27,28]. Many studies have shown that, compared with a single type of satellite data, the fusion of multi-source remote sensing data provides a wider range of target information from different spatial, spectral, and temporal perspectives and performs better in the monitoring of cultivated land [28,29,30,31]. In current data fusion research, methods combining optical and radar data for cultivated land monitoring are increasingly common. For example, a growing number of studies focus on phenology by combining Sentinel-2 and Landsat data, adding SAR data for enhancement, and constructing suitable phenological time-series feature images as input data [32,33,34,35].
The fusion of multi-source data provides a new approach to large-scale crop type mapping, but several problems remain to be solved. First, owing to the limitations of clouds and satellite revisit periods, it is difficult to synthesize time-series images that fully cover a region for specific dates. Existing research on multi-source image fusion usually composites images within a time window using the median or mean and fills the gaps with images from adjacent time periods, which may cause local differences due to inconsistent observation times and numbers of observations. Furthermore, the images obtained by such fusion are not original observations, and the original information from many single-temporal images of the study area is lost. In addition, many studies simply stack the optical bands and the SAR bands as layers. As the spectral features and time-series images increase, the number of feature parameters input to the classifier rapidly grows to dozens or even hundreds, which not only complicates the computation and reduces processing speed but also means that, with limited samples, too many features may reduce classification accuracy, the so-called "curse of dimensionality" [36].
Other challenges in accurate crop type mapping at the regional scale are the performance of the classification algorithms and the big-data computing power that remote sensing requires [29]. Classification algorithms often directly determine the accuracy of crop type mapping. The classification algorithms currently used for mapping crop types mainly include SVMs (support vector machines), CART (classification and regression trees), RF (random forests), and DL (deep learning) [7,27,37], among which RF is considered one of the most effective, accurate, and robust methods for mapping crop types [30,38,39]. In recent years, some research has shown that DL also performs well in crop type mapping [40,41], but it often relies on a large number of samples to train the classification model. Limited by computational power, the study areas covered by optical and radar data fusion studies for crop type mapping have so far been relatively small on average, with almost half of the studies conducted in areas of less than 1000 km² [18]. Only one study, by Torbick et al. [42], was conducted at the national level, covering all of Myanmar's territory (676,578 km²). In general, relevant research is mainly limited to a few crop types or small research areas, and large-scale mapping of multiple crop types is rarely practiced. This may reflect the challenge of processing massive amounts of data (multi-sensor, high-spatial-resolution, and multi-temporal data). To cope with the era of remote sensing big data, a number of remote sensing cloud computing platforms, led by the GEE (Google Earth Engine), have emerged, making accurate crop type mapping with remote sensing data at large spatial scales possible. At present, much research has used the GEE platform to carry out agricultural remote sensing studies [43,44,45].
In summary, this study aims to develop a new crop classification method that integrates feature-level and decision-level fusion on the GEE platform. By leveraging the powerful data processing capabilities of the GEE platform, the study evaluates the potential of combining multiple single-temporal images and monthly time-series images to overcome issues such as image gaps and fusion errors and to achieve fine crop classification at the regional level. Specifically, it includes the following:
(1) Developing a method for crop type mapping that combines multiple single-temporal feature images and time-series feature images derived from Sentinel-1 (SAR) and Sentinel-2 (optical) satellite imagery, and applying it to the classification of multiple crops at the provincial level;
(2) Constructing a new classification accuracy evaluation metric that can be used to evaluate the accuracy of a pixel-scale classification and provide a more comprehensive visual explanation of classification accuracy.

2. Materials and Methods

2.1. Study Area

Henan is a province in central China that covers a total area of approximately 1.67 × 10⁵ km² (31.38–36.37°N, 110.35–116.65°E; Figure 1). The terrain of Henan is varied, with hills and mountains in the west and the Yellow–Huaihe alluvial plain in the east; the southwestern part is mainly the Nanyang Basin. The plains and basins and the hills and mountains make up 55.7% and 44.3% of the total area, respectively. Most of Henan has a warm temperate climate, with some areas transitioning into the subtropical zone. The annual average temperature ranges from 10.5 to 16.7 °C, the average annual sunshine duration from 1285.7 to 2292.9 h, and the average annual precipitation from 407.7 to 1295.8 mm [46]. The favorable geography and climate provide suitable conditions for agricultural production, with two crops per year as the dominant cropping system. According to the National Bureau of Statistics, Henan's sowing area for grain crops was 1.08 × 10⁵ km² in 2021, accounting for 9.16% of China's total sowing area for grain crops. Grain production was 6.54 × 10⁷ t, accounting for 9.58% of the national total and playing a significant role in the country's overall grain production. According to the Henan Provincial Statistical Yearbook, Henan's autumn harvest crops are mainly maize, peanuts, rice, and soybeans. Maize is widely distributed throughout the province and is the main crop of most prefecture-level cities. Peanuts are also widely distributed throughout the province, mainly in Kaifeng, Nanyang, and Zhumadian. Rice is mainly grown in Xinyang in the south. Soybeans are mainly grown in Zhoukou, Shangqiu, and Xuchang. In addition, sweet potatoes, cotton, tobacco, medicinal herbs (mainly yams), flowers, and different types of fruits and vegetables are grown in Henan Province using greenhouses and piecemeal planting systems; these are collectively referred to as "other crops" in this paper and are concentrated mainly in the central and eastern regions of the province. Maize, peanuts, and soybeans are typically planted from late May to early June and harvested from late September to early November. Rice is transplanted in May and harvested from late September to early October. The growth period of the other crops is generally longer than that of the above crops, with earlier planting and later harvesting.
Due to the diversity of the geography and climate, and the household contract responsibility system, the structure of agriculture is complicated in Henan province. In mountainous and hilly areas, small-scale agriculture predominates, with highly fragmented fields. In flatlands, the originally larger and more regular farmland suitable for mechanical cultivation has also become more fragmented due to the household contract responsibility system. This presents challenges for the fine classification of crops in Henan.

2.2. Datasets

2.2.1. Sentinel Imagery

Sentinel-1 and Sentinel-2 are two of the satellite missions in the Copernicus program, a joint initiative of the European Union (EU) and the European Space Agency (ESA) [47]. Sentinel-2 consists of two satellites and has become the first choice for regional high-precision mapping in recent years due to its high spatial (10 m), temporal (5 d), and spectral (13 bands) resolution. Sentinel-1 also consists of two satellites carrying a C-band (5.4 GHz) synthetic aperture radar; as an active microwave remote sensing mission, it can observe the ground in all weather conditions. GEE offers a variety of products from both missions for free, online. Based on prior knowledge, the period from mid-June to mid-September of each year is the common window for the growth and development of the major autumn crops in the study area. Therefore, 110 Sentinel-1 images (Figure 2) and 1137 Sentinel-2 images (Figure 3) covering Henan Province, China, from 16 June 2021 to 15 September 2021 were used as input data in this study. According to previous studies [48] and our experiments, the "VH" polarization performs better in crop identification than the "VV" polarization, so this study chose the Sentinel-1 GRD (Ground Range Detected) product with "VH" polarization, which has a spatial resolution of 10 m, and mosaicked the ascending and descending track data. Texture features, an important surface and structural attribute of images, can improve the classification accuracy of remote sensing images when used as feature variables [49,50]. In this study, the "VH" band of the Sentinel-1 GRD product was used to generate texture features based on the gray-level co-occurrence matrix (GLCM), with the sliding window set to 3 × 3 after experimentation. The six most commonly used texture features were selected: VH_asm (angular second moment, reflecting the uniformity of the image's gray-scale distribution and the coarseness of the texture), VH_contrast (contrast, reflecting the clarity of the image and the depth of the texture's grooves), VH_corr (correlation, reflecting the local gray-scale correlation of the image), VH_var (variance, measuring the dispersion of the gray-scale distribution), VH_idm (inverse difference moment, reflecting the size of local texture changes in the image), and VH_ent (entropy, expressing the randomness of the image's texture). Each texture feature was added to the original image as a separate band to form the SAR classification feature image set (Table 1). The Sentinel-2 data were the atmospherically corrected Level-2A surface reflectance product; the QA60 band was used for cloud removal, and all bands were resampled to 10 m. The following spectral features were calculated: NDVI (normalized difference vegetation index, which separates vegetation from water and soil), NDWI (normalized difference water index, which highlights water bodies), EVI (enhanced vegetation index, which improves the detection of sparse vegetation), NDBI (normalized difference built-up index, which reflects built-up land), and LSWI (land surface water index, which represents soil moisture changes). Each spectral feature was added to the original image as a separate band to form the optical classification feature image set (Table 1).
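The following is a minimal Earth Engine (Python API) sketch of this feature construction. The dataset IDs and GLCM output band names follow the public GEE catalog, and the index formulas are the standard definitions; the integer scaling applied before glcmTexture() (which requires integer input) and the size parameter used to approximate the 3 × 3 window are assumptions of the sketch, not details given in the paper.

```python
import ee

ee.Initialize()

def add_optical_features(img):
    """Append the five spectral-index bands to a Sentinel-2 L2A image."""
    ndvi = img.normalizedDifference(['B8', 'B4']).rename('NDVI')
    ndwi = img.normalizedDifference(['B3', 'B8']).rename('NDWI')
    ndbi = img.normalizedDifference(['B11', 'B8']).rename('NDBI')
    lswi = img.normalizedDifference(['B8', 'B11']).rename('LSWI')
    evi = img.expression(
        '2.5 * (NIR - RED) / (NIR + 6 * RED - 7.5 * BLUE + 1)',
        {'NIR': img.select('B8').divide(10000),
         'RED': img.select('B4').divide(10000),
         'BLUE': img.select('B2').divide(10000)}).rename('EVI')
    return img.addBands(ee.Image.cat([ndvi, ndwi, evi, ndbi, lswi]))

def add_sar_texture(img):
    """Append the six GLCM texture bands derived from the VH band."""
    # glcmTexture() needs an integer image; scale the dB values first (assumed factor).
    vh_int = img.select('VH').multiply(100).toInt32().rename('VH')
    glcm = vh_int.glcmTexture(size=3)  # neighborhood size approximating the 3 x 3 window
    texture = glcm.select(
        ['VH_asm', 'VH_contrast', 'VH_corr', 'VH_var', 'VH_idm', 'VH_ent'])
    return img.addBands(texture)

# Optical and SAR classification feature image sets for the study period.
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterDate('2021-06-16', '2021-09-16')
      .map(add_optical_features))
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterDate('2021-06-16', '2021-09-16')
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .map(add_sar_texture))
```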

2.2.2. Ground Reference Dataset

We conducted a field survey of the study area in July–August 2021 and recorded the coordinates of the sample points. Guided by statistical prior knowledge from the Henan Statistical Yearbook, the sampling points cover the main production areas of the various crops, and we tried to ensure that the samples of each crop type spanned the entire study area. After subsequent adjustment and screening, a total of 4132 crop samples were used in this study, including maize (1000 points), rice (744 points), peanuts (1270 points), soybeans (498 points), and other crops (620 points). In addition, we used the 2021 10 m resolution global land cover map (ESA WorldCover 10 m v200) [51] released by the European Space Agency to generate non-crop samples and combined it with Google Earth high-resolution imagery for visual interpretation, determining 1000 non-crop samples uniformly covering the entire study area, including tree cover, shrubland, grassland, built-up areas, bare/sparse vegetation, permanent water bodies, herbaceous wetland, etc. Finally, the sample points were randomly divided into two groups: 70% were used to train the classification model, and 30% were used to evaluate the accuracy of the classification results (Table 2).
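A minimal sketch of the 70/30 split follows, assuming the merged samples form an ee.FeatureCollection with an integer 'class' property (the variable and property names are illustrative):

```python
# Randomly split the ground samples into training (70%) and validation (30%) sets.
samples = crop_samples.merge(noncrop_samples).randomColumn('rand', seed=0)
train = samples.filter(ee.Filter.lt('rand', 0.7))   # 70% for training
test = samples.filter(ee.Filter.gte('rand', 0.7))   # 30% for accuracy assessment
```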

2.3. Methods

The regional-scale crop type mapping method we proposed can be divided into three parts, as shown in Figure 4: (1) using the time-series feature image and multiple single-temporal feature images for crop type classification, (2) voting on the two classification results to determine the final crop types, and (3) accuracy evaluation.

2.3.1. Classifier

Random forest (RF) is a machine learning algorithm that combines multiple decision trees using a voting mechanism to make predictions [52,53]. It is an ensemble classifier that has been widely used in the supervised classification of remote sensing images due to its simplicity, speed, robustness, and high classification accuracy [54]. Besides classification, RF can also be used for regression tasks, and one of its main advantages is that it is less prone to overfitting than other machine learning algorithms. Therefore, the RF algorithm was used as the classification method in this study. We set the output mode of the RF to "MULTIPROBABILITY" to achieve a soft output, i.e., an array of the probabilities that each class is correct. Through experiments, the "numberOfTrees" of the RF was set to 50 and the "bagFraction" to 0.8. The JM (Jeffries–Matusita) distance is a widely accepted feature separability measure [45,55] that can characterize the separability of different land cover classes based on the same features, as well as the separability of different features based on the same land cover classes. Feature selection was not conducted in this study: calculation of the JM distance showed that the importance of different classification features varies over the growing season, and recent research has not found that feature selection effectively improves classification accuracy [18,33].
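A minimal sketch of this classifier configuration, using Earth Engine's smileRandomForest implementation, follows; feature_image and the 'class' property are placeholders carried over from the earlier sketches.

```python
# Sample the feature image at the training points.
training = feature_image.sampleRegions(collection=train,
                                       properties=['class'], scale=10)

# RF with 50 trees, bag fraction 0.8, and soft (per-class probability) output.
rf = (ee.Classifier.smileRandomForest(numberOfTrees=50, bagFraction=0.8)
      .setOutputMode('MULTIPROBABILITY')
      .train(features=training, classProperty='class',
             inputProperties=feature_image.bandNames()))

# Each pixel of the result stores a 1-D array of class probabilities.
prob_image = feature_image.classify(rf)
```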

2.3.2. Classification Based on Time-Series Feature Image

In the classification of the time-series feature image, the feature image sets were divided into three time windows (from 16 June 2021 to 15 July 2021; from 16 July 2021 to 15 August 2021; and from 16 August 2021 to 15 September 2021) for fusion (Figure 2 and Figure 3). For each time window, the pixel-by-pixel median of the SAR classification feature image set and of the optical classification feature image set was calculated, yielding a total of six median feature images. The gaps in each median feature image were filled with the median of the median feature images of the adjacent time windows. The six gap-filled median feature images covering the entire study area were then stacked to obtain the time-series feature image. Next, the time-series feature image was sampled according to the coordinates of the training samples, and the classifier was trained using the sampling results. Finally, the time-series feature image was classified using the trained classifier, producing a probability image based on the time-series feature image. Each pixel of the probability image stores the array of the probabilities that each class is correct, and the class corresponding to the maximum value of this array is the class of the pixel.
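A minimal Earth Engine sketch of this compositing step follows, reusing the s1 and s2 feature collections from the earlier sketch; the window dates are those given above, while the band-renaming scheme is an illustrative choice.

```python
windows = [('2021-06-16', '2021-07-16'),
           ('2021-07-16', '2021-08-16'),
           ('2021-08-16', '2021-09-16')]

# One median composite per window for each feature image set.
s1_meds = [s1.filterDate(a, b).median() for a, b in windows]
s2_meds = [s2.filterDate(a, b).median() for a, b in windows]

def fill_gaps(meds, i):
    """Fill gaps in window i with the median of the adjacent windows."""
    neighbors = [meds[j] for j in (i - 1, i + 1) if 0 <= j < len(meds)]
    return meds[i].unmask(ee.ImageCollection(neighbors).median())

stack = []
for i in range(len(windows)):
    # Suffix band names with the window index so the six composites stay distinct.
    stack.append(fill_gaps(s1_meds, i).regexpRename('$', f'_w{i + 1}'))
    stack.append(fill_gaps(s2_meds, i).regexpRename('$', f'_w{i + 1}'))

time_series_image = ee.Image.cat(stack)   # the time-series feature image
```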

2.3.3. Classification Based on Multiple Single-Temporal Feature Images

Compared with single-temporal images, time-series images can enhance image information [29], mainly temporal change information. However, the synthesis process involves operations such as taking the median or mean of the images within a time window and filling gaps with images from adjacent time periods, which inevitably introduces errors due to differences in local observation times, numbers of observations, and the pixel fusion method; the final result therefore cannot accurately reflect the changes in crops within the time period [56]. To address the effects of image fusion, this study used the classification feature image sets from 16 June 2021 to 15 September 2021 to perform single-temporal image classification, maximizing the use of the remote sensing images. The classification of the single-temporal feature images within the time period was an iterative process. First, the single-temporal feature images were selected from the classification feature image set by day, and the images of each day were mosaicked. Next, it was determined whether the training samples falling on the mosaicked image contained all the classes to be classified; if not, the image was discarded. If it contained all the classes, the single-temporal feature image was sampled according to the training samples it contained, and the classifier was trained with the sampling results. Finally, the single-temporal feature image was classified using the trained classifier, producing a probability image corresponding to the single-temporal feature image, in which each pixel stores the array of the probabilities that each class is correct. It is worth noting that there may be gaps in a probability image based on a single-temporal image; these gap pixels were all assigned zero values. The above operations were performed on all single-temporal feature images within the time period, and all the resulting probability images were summed pixel by pixel to form a probability image based on multiple single-temporal images. Applying this procedure to the SAR classification feature image set and to the optical classification feature image set yielded a probability image based on multiple single-temporal SAR feature images and one based on multiple single-temporal optical feature images. Finally, the two probability images were normalized and summed pixel by pixel to obtain the probability image based on the multiple single-temporal SAR and optical classification feature image sets within the time period. Each pixel of this probability image stores the array of the probabilities that each class is correct, and the class corresponding to the maximum value of this array is the class of the pixel.
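A minimal sketch of this loop follows (Python 3.8+). The list of candidate dates, the 'class' property, and the six-class setup are assumptions carried over from the earlier sketches; the class-coverage test and the zero-filling of gap pixels follow the procedure described above.

```python
import functools

CLASS_BANDS = [['maize', 'rice', 'peanut', 'soybean', 'other', 'noncrop']]

def daily_probability(col, date, train):
    """Classify the mosaic of one day's images; None if a class is uncovered."""
    day = ee.Date(date)
    mosaic = col.filterDate(day, day.advance(1, 'day')).mosaic()
    sampled = mosaic.sampleRegions(collection=train,
                                   properties=['class'], scale=10)
    # Discard the date if the samples on the mosaic miss any of the 6 classes.
    if sampled.aggregate_count_distinct('class').getInfo() < 6:
        return None
    rf = (ee.Classifier.smileRandomForest(50, bagFraction=0.8)
          .setOutputMode('MULTIPROBABILITY')
          .train(sampled, 'class', mosaic.bandNames()))
    zero = ee.Image(ee.Array([0] * 6)).rename('classification')
    prob = mosaic.classify(rf).unmask(zero)   # gap pixels -> zero probabilities
    return prob.arrayFlatten(CLASS_BANDS)     # one band per class

def summed_probability(col, dates, train):
    """Pixel-by-pixel sum of all per-date probability images."""
    probs = [p for d in dates
             if (p := daily_probability(col, d, train)) is not None]
    return functools.reduce(lambda a, b: a.add(b), probs)

def normalize(p):
    # Divide every class band by the per-pixel sum of all class bands.
    return p.divide(p.reduce(ee.Reducer.sum()))

# Normalized SAR and optical sums, added pixel by pixel.
p_single = normalize(summed_probability(s1, dates, train)).add(
    normalize(summed_probability(s2, dates, train)))
```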

2.3.4. Voting on the Two Classification Results to Obtain the Final Crop Type Map

After the previous steps of classification, we obtained two crop type probability images, one of which is a probability image based on the time-series feature image, and the other is a probability image based on multiple single-temporal feature images. In order to obtain a crop type map with higher accuracy, we combined the two.
Information entropy is a measure used in information theory to quantify the uncertainty of a random variable, i.e., the expected value of the information it contains. Simply put, the more chaotic the situation, the greater the information entropy, and vice versa. A common application of information entropy is in decision tree learning, where it is used to choose the optimal split attribute. In this study, it was used to measure the determinacy of a classification result. Specifically, if a random variable X has n possible values with corresponding probabilities $P_1, P_2, \ldots, P_n$, and the values are mutually independent, its information entropy can be expressed as [57]
H(X) = -\sum_{i=1}^{n} P_i \log_2 (P_i)    (1)
In order to calculate the information entropy, the probability image must first be standardized as follows:
P_i = \frac{p_i}{\sum_{i=1}^{n} p_i}    (2)
in which $p_i$ is the probability of the i-th class stored in the probability image; the entropy of the probability image was then calculated according to formula (1). Because of the different classification methods, the amount of information contained in the probability image based on multiple single-temporal feature images differs from that in the probability image based on the time-series feature image. The former fuses multiple classification results and therefore contains richer information. This leads to different sensitivities when measuring classification accuracy with the two probability images, so their classification accuracy cannot be compared using the numerical value of information entropy alone. To build a unified evaluation standard, we defined a classification accuracy score (CA-score) that quantitatively measures the identification accuracy of the crop type of each pixel. The CA-score is defined as follows:
CA\text{-}score = f(H)    (3)
in which the information entropy H is treated as the independent variable and the overall accuracy of the classification as the dependent variable of the function f(H). To derive this function, we first used H as a masking threshold, retaining only the areas of the classification image with entropy values less than the threshold; we then calculated the overall accuracy of the retained pixels and finally obtained f(H) by curve fitting. The CA-score was then calculated pixel by pixel for the two classification images using their respective f(H) and added to each classification image as a new band. According to the numerical value of the CA-score, a vote was conducted between the two probability images to determine the value of each pixel, resulting in a fused probability image. The class corresponding to the maximum value of the probability array of each pixel of the fused probability image was taken as the crop type of the pixel, yielding the final crop type map.
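The steps above can be sketched with Earth Engine band and array operations as follows; pb_ts and pb_st stand for the per-class probability-band images of the two classification results (array images can be converted with arrayFlatten(), as in the earlier sketches), and the polynomial coefficients of the fitted f(H) functions are placeholders (their derivation is described in Section 3.1).

```python
import math

def entropy(pb):
    """Per-pixel information entropy of a probability-band image."""
    p = pb.divide(pb.reduce(ee.Reducer.sum()))          # standardize, formula (2)
    p = p.max(1e-10)                                    # guard against log(0)
    plogp = p.multiply(p.log().divide(math.log(2)))     # P_i * log2(P_i)
    return plogp.reduce(ee.Reducer.sum()).multiply(-1)  # formula (1)

# CA-score = f(H), with f(H) fitted separately for each scheme (formula (3)).
ca_ts = entropy(pb_ts).polynomial([0.97, -0.02, -0.03, 0.004])  # placeholder coeffs
ca_st = entropy(pb_st).polynomial([0.99, -0.01, -0.04, 0.005])  # placeholder coeffs

# Pixel-by-pixel vote: keep the probabilities of the higher-scoring result.
use_ts = ca_ts.gte(ca_st)
fused = pb_ts.multiply(use_ts).add(pb_st.multiply(use_ts.Not()))

# The class with the maximum fused probability is the pixel's crop type
# (class values 0-5 are assumed, matching the band order).
crop_map = fused.toArray().arrayArgmax().arrayFlatten([['classification']])
```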
To evaluate the merits of the new indicator and method established in this study, three schemes were developed as follows:
Scheme 1: Classification based on the time-series feature image.
Scheme 2: Classification based on multiple single-temporal feature images.
Scheme 3: Classification by voting the probability images of Scheme 1 and Scheme 2 pixel by pixel based on the CA-score.

2.3.5. Accuracy Assessment

The accuracy of crop type mapping was assessed using the 30% hold-out ground samples. To assess the accuracy of the classification results and provide a comprehensive overview of the performance of the classification algorithm, we calculated five common statistical indicators from the classification confusion matrix: OA (overall accuracy), KC (kappa coefficient), UA (user's accuracy), PA (producer's accuracy), and the F1-score. The F1-score combines precision and recall to measure the consistency between the classified samples and the reference samples and is calculated as their harmonic mean [30]:
F1\text{-}score = \frac{2 \times UA \times PA}{UA + PA}    (4)
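These indicators can be computed directly with Earth Engine's confusion-matrix utilities; a minimal sketch follows, assuming the final map's class band is named 'classification' and the reference property is 'class', as in the earlier sketches.

```python
# Sample the final crop type map at the 30% validation points.
validated = crop_map.sampleRegions(collection=test,
                                   properties=['class'], scale=10)
cm = validated.errorMatrix('class', 'classification')

print('OA:', cm.accuracy().getInfo())
print('KC:', cm.kappa().getInfo())
print('PA:', cm.producersAccuracy().getInfo())   # per-class producer's accuracy
print('UA:', cm.consumersAccuracy().getInfo())   # per-class user's accuracy
print('F1:', cm.fscore(1).getInfo())             # per-class F1-score
```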
In addition, the CA-score can provide a more comprehensive visual interpretation of classification accuracy at the pixel scale. Therefore, we selected a case region from the crop type map, combined with the CA-score images and Sentinel-2 images, to further compare the classification accuracy of different classification methods.

3. Results

3.1. The Relationship between Information Entropy and Classification Accuracy

The relationship between information entropy H and classification accuracy is shown in Figure 5. Whether the classification was based on multiple single-temporal feature images or on the time-series feature image, OA and KC were strongly correlated with information entropy: as H decreased, OA and KC gradually increased, and the coefficients of determination (R²) were all greater than 0.96. This indicates that information entropy is a reliable measure of classification accuracy at the pixel scale and provides a better visual interpretation of classification accuracy. The f(H) of the classification results based on multiple single-temporal feature images and the f(H) of the classification results based on the time-series feature image were obtained through cubic polynomial fitting, and the CA-score was then calculated from the respective f(H).
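Offline, the cubic fit itself can be reproduced with numpy; a minimal sketch follows, in which the threshold and accuracy samples are purely illustrative placeholders rather than values from this study.

```python
import numpy as np

h_thresholds = np.linspace(0.2, 2.2, 15)        # illustrative entropy thresholds
oa_values = 0.95 - 0.12 * h_thresholds ** 1.5   # illustrative OA at each threshold

coeffs = np.polyfit(h_thresholds, oa_values, deg=3)  # cubic polynomial fit
f_of_H = np.poly1d(coeffs)                           # f(H)

# Per-pixel application in Earth Engine: ee.Image.polynomial() expects
# coefficients in ascending order, so numpy's ordering must be reversed, e.g.
#   ca_score = entropy_image.polynomial(list(coeffs[::-1]))
print(f_of_H(0.5))   # CA-score of a pixel whose entropy H is 0.5
```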

3.2. Accuracy of Different Schemes

Table 3 shows the confusion matrix and accuracy of the different schemes for crop type classification. The results were as follows. (1) The proposed combination of the classification results based on the time-series feature image and those based on multiple single-temporal feature images effectively improved the classification accuracy of crop types, with an OA and KC of 84.15% and 0.80, respectively; compared with the classification results based on the time-series feature image, the OA increased by 3.37% and the KC by 0.03 (Scheme 1, Scheme 2, Scheme 3). (2) The accuracy of the classification based on the time-series feature image was slightly higher than that based on multiple single-temporal feature images, mainly reflected in the more accurate identification of soybeans (Scheme 1, Scheme 2). (3) As maize, rice, and peanuts were planted in contiguous areas and have clearly distinct crop community characteristics, they were easy to identify and had higher classification accuracy. The planting areas of soybeans and other crops, however, were smaller and more scattered, making them more prone to omission or commission errors and resulting in relatively lower classification accuracy.

3.3. Major Autumn Crop Type Map for 2021

Figure 6 shows the distribution of crop types in Scheme 3. The spatial distribution of the different crops is consistent with the prior knowledge provided by statistical yearbooks and field surveys. To further compare the spatial details of the crop type maps of the different schemes, we selected a case region and generated Sentinel-2 true-color and false-color images, crop type maps, and CA-score maps for each scheme (Figure 7). As Figure 7 shows, in the classification results based on the time-series feature image, some vacant land was incorrectly identified as cropland. In the results based on multiple single-temporal feature images, the boundaries of different crops were identified more clearly, but some other crops were incorrectly identified as vacant land (non-crop). Combining the two sets of results through the CA-score reduced both omission and commission errors and produced a crop type map that is more in line with reality and closely matches the Sentinel-2 imagery.

3.4. Comparison of Mapping Results and Agricultural Statistical Reports

Although the OA and KC were high, there may still be omission or commission errors. To further verify the accuracy of the crop type map, it was compared with statistical data. As of the submission of this manuscript, the Henan Provincial Statistical Yearbook had not yet been updated with the 2021 statistics, so the agricultural statistics for 2019 and 2020 were used for comparison. The statistics for other crops are annual, so other crops were not compared. Firstly, the areas of the different crops were compared at the provincial level. Secondly, we used the mean of the 2019 and 2020 statistics to conduct a correlation test on the prefecture-level data (Table 4). The coefficients of determination (R²) of maize, rice, peanuts, and soybeans were 0.69, 0.99, 0.98, and 0.30, respectively, and the corresponding RMSE values were 871.3 km², 143.9 km², 286.4 km², and 215.4 km². According to Figure 8, the mapped maize areas in Luoyang, Sanmenxia, Xinyang, Zhoukou, and Zhumadian differed considerably from the statistical data, which directly led to the large RMSE and small R² of maize. Similarly, the soybean planting areas in Nanyang, Sanmenxia, Xinyang, Xuchang, and Zhoukou showed significant differences from the statistical data, resulting in a large RMSE and small R² for soybeans as well. We hypothesized that the possible causes were structural adjustments in planting, the severe floods that occurred in July 2021, or the limited ability of the images to capture this variability. Apart from these cases, the planting areas of the various crops were almost the same as the city-level statistical data in the study area. These results indicate that the mapping results of this study were highly consistent with the statistical data.
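A minimal sketch of such a prefecture-level comparison follows; the area values are illustrative placeholders, and R² is computed here as the squared Pearson correlation, one common formulation of a correlation test (the paper does not state its exact formula).

```python
import numpy as np

# Mapped vs. statistical crop areas per prefecture (km²; illustrative values).
mapped = np.array([1250.0, 980.0, 2210.0, 760.0])
stats = np.array([1180.0, 1020.0, 2150.0, 690.0])

rmse = float(np.sqrt(np.mean((mapped - stats) ** 2)))   # root-mean-square error
r2 = float(np.corrcoef(mapped, stats)[0, 1] ** 2)       # squared Pearson correlation
print(f'RMSE = {rmse:.1f} km², R² = {r2:.2f}')
```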

4. Discussion

4.1. The Potential of Multi-Source and Multi-Temporal Feature Images for Crop Type Mapping

The crop type classification method proposed in this study achieved high-precision crop type mapping at the regional scale by combining multiple single-temporal feature images and the time-series feature image derived from multi-source Sentinel imagery. This is mainly due to the following two factors.
(1) The application of multi-source satellite data. We compared the crop classification accuracy of Scheme 1 and Scheme 2 (developed in Section 2.3.4) using only Sentinel-1 SAR data, only Sentinel-2 optical data, and the integration of the two (Table 5). Classification based solely on optical data was superior to that based solely on SAR data, which is consistent with existing studies [58,59]. However, optical images are often affected by clouds and revisit periods, making it difficult to obtain high-quality observations. SAR has high penetration and is not affected by clouds, allowing the acquisition of high-quality observations. SAR can capture information about the structure of the crop canopy and the plant water content [60,61], can be used for plant growth monitoring [62,63], and has also performed well in crop type classification [33]. Some studies have shown that combining optical and SAR images can improve classification accuracy [33,58], and our results support this. Compared with using imagery from a single satellite, using both types of data enhances the available information and reveals more details of crop growth, not only by combining spectral and texture features but also by enriching the temporal information (optical and SAR observations are acquired at different times). Combining both types of data therefore improves the accuracy of the classification and yields more reliable results.
(2) Classification based on the time-series feature image and multiple single-temporal feature images. Compared with single-temporal images, time-series images can enhance the information content of images by capturing the temporal changes in the images [29]. Using time-series feature images in classification can improve classification accuracy. However, the process of constructing time-series feature images involves image fusion, which inevitably leads to information loss and errors. Single-temporal feature images do not have the information loss and errors caused by image fusion, but the accuracy of classification using only one single-temporal feature image is also not high, due to the limited information it contains. However, combining the classification results of multiple single-temporal feature images based on different times can improve the accuracy of crop classification. With the help of information entropy, the CA-score was defined and calculated, and then it was used to vote on the classification results of the time-series feature image and multiple single-temporal feature images to determine the final classification result, which not only makes full use of time-varying features but also overcomes the errors caused by image fusion.

4.2. Classification Accuracy Index at the Pixel Scale

The accuracy of classification results is uncertain, and information entropy can measure this uncertainty. Figure 9 shows the distributions of the information entropy H and the CA-score for Scheme 1 and Scheme 2 (developed in Section 2.3.4). Table 3 shows the classification accuracy of the different crops, while Figure 6 displays their spatial distribution; together, they reflect the spatial distribution of crop classification accuracy. Furthermore, considering the spatial distributions of the information entropy H and the CA-score depicted in Figure 9, both show good consistency with classification accuracy in numerical magnitude and spatial distribution. This indicates that, for both Scheme 1 and Scheme 2, information entropy can effectively reflect classification accuracy. However, owing to the differences in the data used and the classification methods, the amount of information contained in the results of the two schemes differs. Scheme 2 involves multiple classifications, so the amount of information contained in its result is much larger than that of Scheme 1, and the corresponding information entropy values are also larger. Therefore, the classification accuracy of different schemes cannot be compared using the value of information entropy alone. In this study, the relationship between information entropy and accuracy was established: Figure 5 shows that information entropy had good consistency with OA and KC in evaluating classification accuracy. Additionally, the CA-score was defined to measure classification accuracy at the pixel scale and to compare the classification results of different methods. The results show that the CA-score is a simple and reliable measure of the accuracy of crop recognition and provides a more comprehensive and intuitive explanation of classification accuracy at the pixel level than other metrics. It not only compensates for the limitation of the traditional OA and KC metrics, which rely on limited point data to evaluate surface data, but also facilitates the fusion of land cover classification maps obtained from different data and methods, complementing each other's strengths and producing more accurate results. This study covers a wide geographical range with diverse and complex terrain and climate, various crop types, and intricate planting structures. Despite these challenges, the proposed method and metric exhibit excellent performance, showing strong adaptability and scalability, with great potential for accurate classification and extraction of complex land cover in larger and different regions.

4.3. Uncertainty and Algorithm Improvement

The results suggest that the method proposed in this study can accurately identify crop types at the regional scale, but some uncertainty remains. First, although texture features were considered, salt-and-pepper noise may still be present in the classification results. To improve classification accuracy and eliminate salt-and-pepper noise, some studies have combined object segmentation with pixel-based classification [64,65]; incorporating object segmentation in future work may improve mapping accuracy and reduce the interference inherent in pixel-based classification. Second, when generating the probability images based on multiple single-temporal SAR and optical feature images, the coverage limitations of single-temporal images may leave insufficient training samples overall or for a certain class in a given classification run, which affects the classification results. In future research, the time-series feature image could be classified first, and pixels with a higher CA-score (considered to be correctly classified) could then be selected from the results as additional training samples for the classification based on multiple single-temporal SAR and optical images. Third, the accuracy assessment shows that classification accuracy differs between crops, so calculating the CA-score from the relationship between the OA and information entropy H may not be entirely reliable. If the relationship between classification accuracy and information entropy H were established separately for each crop category, the resulting CA-score might be more accurate. Fourth, in this study, cloud removal for Sentinel-2 images was performed using the QA60 band, which provides only binary masks for thick clouds and cirrus and therefore cannot finely discriminate clouds; while removing cloud interference, it may also inadvertently remove some valid observations, reducing the number of effective observations and thus affecting classification accuracy. There has been a great deal of research on cloud removal algorithms for optical satellite imagery, such as the "S2cloudless" and "InterSSIM" threshold algorithms, which convert cloud probability maps into cloud masks through thresholding [66]; suitable thresholds can be determined for one's own study to minimize cloud commission and omission errors. Finally, the field investigation found that some farmland contains mixed crops, which is undoubtedly a major challenge for accurate identification at a 10 m resolution and will also lead to incorrect classification.
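As an illustration of the thresholding idea mentioned in the fourth point, the following minimal Earth Engine sketch masks Sentinel-2 pixels using the s2cloudless probability product available in GEE; the 40% threshold and the join by system:index are illustrative assumptions, not settings used in this study.

```python
# Cloud probability product paired with Sentinel-2 scenes by granule index.
s2_clouds = ee.ImageCollection('COPERNICUS/S2_CLOUD_PROBABILITY')

def mask_clouds(img):
    """Mask pixels whose s2cloudless probability exceeds an assumed threshold."""
    prob = ee.Image(s2_clouds.filter(
        ee.Filter.eq('system:index', img.get('system:index'))).first())
    return img.updateMask(prob.select('probability').lt(40))

s2_masked = (ee.ImageCollection('COPERNICUS/S2_SR')
             .filterDate('2021-06-16', '2021-09-16')
             .map(mask_clouds))
```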

5. Conclusions

This study proposed a novel method for multi-crop classification at the regional scale using multiple sources of remote sensing data. Firstly, a classification accuracy metric called the CA-score was constructed using information entropy. Then, a novel decision fusion method based on the CA-score was designed to integrate multiple single-temporal feature images and the time-series feature image derived from multi-source Sentinel imagery for crop classification. A comparative analysis showed that the proposed classification method performed excellently and could achieve accurate large-scale mapping of various crops. The OA and KC of the new method were 84.15% and 0.80, respectively, which were 3.37% and 0.03 higher than those obtained from the classification results based on the time-series feature image. In addition, the correlation analysis with OA and KC indicated that the CA-score proposed in this study effectively reflects the accuracy of crop identification and can be used as a pixel-scale evaluation metric, providing a more comprehensive visual interpretation of classification accuracy. It not only makes up for the limitation of the traditional OA and KC metrics, which use limited point data to evaluate surface data, but can also be used to integrate land cover classification maps obtained from different data and methods, complementing their respective advantages to obtain more accurate results. The proposed method and metric have the potential to be applied to the mapping of larger study areas with more complex land cover types using remote sensing.

Author Contributions

Conceptualization, S.F.; methodology, S.F., X.W. and Y.Y.; validation, X.W., S.F. and J.D.; formal analysis, X.W.; measurement, X.W. and Y.Y.; data curation, X.W.; writing—original draft preparation, X.W.; writing—review and editing, S.F. and H.W.; visualization, X.W.; supervision, Y.Y. and J.D.; project administration, S.F.; funding acquisition, S.F. and H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was jointly supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA28050200), the National Natural Science Foundation of China (No. 41971082), the National Key Research and Development Program of China (2019YFC1510505), and the Key Project of Innovation LREIS (KPI009).

Acknowledgments

We are grateful to the anonymous reviewers whose constructive suggestions have improved the quality of this study. We also gratefully acknowledge the Sentinel data and the data analysis services provided by the GEE platform. Additionally, we would like to express our sincere thanks to all the field data collectors.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. World Health Organization. The State of Food Security and Nutrition in the World 2018: Building Climate Resilience for Food Security and Nutrition. Available online: https://www.fao.org/agrifood-economics/publications/detail/en/c/1153252/ (accessed on 28 April 2023).
2. Zhao, C.; Liu, B.; Piao, S.; Wang, X.; Lobell, D.B.; Huang, Y.; Huang, M.; Yao, Y.; Bassu, S.; Ciais, P. Temperature increase reduces global yields of major crops in four independent estimates. Proc. Natl. Acad. Sci. USA 2017, 114, 9326–9331.
3. Shi, W.; Wang, M.; Liu, Y. Crop yield and production responses to climate disasters in China. Sci. Total Environ. 2021, 750, 141147.
4. Waldner, F.; Canto, G.S.; Defourny, P. Automated annual cropland mapping using knowledge-based temporal features. ISPRS J. Photogramm. Remote Sens. 2015, 110, 1–13.
5. Najafova, M. Impact of War between Russia and Ukraine on Food Security; Center of Analysis of International Relations: Azerbaijan, 2022. Available online: https://policycommons.net/artifacts/2329915/impact-of-war-between-russia-and-ukraine-on-food-security/3090540/ (accessed on 1 March 2023).
6. Cai, Y.; Guan, K.; Peng, J.; Wang, S.; Seifert, C.; Wardlow, B.; Li, Z. A high-performance and in-season classification system of field-level crop types using time-series Landsat data and a machine learning approach. Remote Sens. Environ. 2018, 210, 35–47.
7. Shelestov, A.; Lavreniuk, M.; Kussul, N.; Novikov, A.; Skakun, S. Exploring Google Earth Engine platform for big data processing: Classification of multi-temporal satellite imagery for crop mapping. Front. Earth Sci. 2017, 5, 17.
8. Becker-Reshef, I.; Justice, C.; Barker, B.; Humber, M.; Rembold, F.; Bonifacio, R.; Zappacosta, M.; Budde, M.; Magadzire, T.; Shitote, C. Strengthening agricultural decisions in countries at risk of food insecurity: The GEOGLAM Crop Monitor for Early Warning. Remote Sens. Environ. 2020, 237, 111553.
9. Franch, B.; Vermote, E.F.; Skakun, S.; Roger, J.-C.; Becker-Reshef, I.; Murphy, E.; Justice, C. Remote sensing based yield monitoring: Application to winter wheat in United States and Ukraine. Int. J. Appl. Earth Obs. Geoinf. 2019, 76, 112–127.
10. Johnson, D.M.; Mueller, R. Pre- and within-season crop type classification trained with archival land cover information. Remote Sens. Environ. 2021, 264, 112576.
11. Di, Y.; Zhang, G.; You, N.; Yang, T.; Zhang, Q.; Liu, R.; Doughty, R.B.; Zhang, Y. Mapping Croplands in the Granary of the Tibetan Plateau Using All Available Landsat Imagery, A Phenology-Based Approach, and Google Earth Engine. Remote Sens. 2021, 13, 2289.
12. Mutanga, O.; Dube, T.; Galal, O. Remote sensing of crop health for food security in Africa: Potentials and constraints. Remote Sens. Appl. Soc. Environ. 2017, 8, 231–239.
13. Jin, Z.; Azzari, G.; You, C.; Di Tommaso, S.; Aston, S.; Burke, M.; Lobell, D.B. Smallholder maize area and yield mapping at national scales with Google Earth Engine. Remote Sens. Environ. 2019, 228, 115–128.
14. Donohue, R.J.; Lawes, R.A.; Mata, G.; Gobbett, D.; Ouzman, J. Towards a national, remote-sensing-based model for predicting field-scale crop yield. Field Crops Res. 2018, 227, 79–90.
15. Zhuo, W.; Huang, J.; Li, L.; Zhang, X.; Ma, H.; Gao, X.; Huang, H.; Xu, B.; Xiao, X. Assimilating soil moisture retrieved from Sentinel-1 and Sentinel-2 data into WOFOST model to improve winter wheat yield estimation. Remote Sens. 2019, 11, 1618.
16. Bégué, A.; Arvor, D.; Bellon, B.; Betbeder, J.; De Abelleyra, D.; Ferraz, R.P.D.; Lebourgeois, V.; Lelong, C.; Simões, M.; Verón, S.R. Remote sensing and cropping practices: A review. Remote Sens. 2018, 10, 99.
17. Weiss, M.; Jacob, F.; Duveiller, G. Remote sensing for agricultural applications: A meta-review. Remote Sens. Environ. 2020, 236, 111402.
18. Orynbaikyzy, A.; Gessner, U.; Conrad, C. Crop type classification using a combination of optical and radar remote sensing data: A review. Int. J. Remote Sens. 2019, 40, 6553–6595.
19. Seifi Majdar, R.; Ghassemian, H. A probabilistic SVM approach for hyperspectral image classification using spectral and texture features. Int. J. Remote Sens. 2017, 38, 4265–4284.
20. Gao, F.; Anderson, M.C.; Zhang, X.; Yang, Z.; Alfieri, J.G.; Kustas, W.P.; Mueller, R.; Johnson, D.M.; Prueger, J.H. Toward mapping crop progress at field scales through fusion of Landsat and MODIS imagery. Remote Sens. Environ. 2017, 188, 9–25.
21. Gómez, C.; White, J.C.; Wulder, M.A. Optical remotely sensed time series data for land cover classification: A review. ISPRS J. Photogramm. Remote Sens. 2016, 116, 55–72.
22. Bontemps, S.; Defourny, P.; Radoux, J.; Van Bogaert, E.; Lamarche, C.; Achard, F.; Mayaux, P.; Boettcher, M.; Brockmann, C.; Kirches, G. Consistent global land cover maps for climate modelling communities: Current achievements of the ESA's land cover CCI. In Proceedings of the ESA Living Planet Symposium, Edinburgh, UK, 9–13 September 2013; pp. 9–13.
23. Chen, J.; Cao, X.; Peng, S.; Ren, H. Analysis and applications of GlobeLand30: A review. ISPRS Int. J. Geo-Inf. 2017, 6, 230.
24. Burke, M.; Lobell, D.B. Satellite-based assessment of yield variation and its determinants in smallholder African systems. Proc. Natl. Acad. Sci. USA 2017, 114, 2189–2194.
25. Defourny, P.; Bontemps, S.; Bellemans, N.; Cara, C.; Dedieu, G.; Guzzonato, E.; Hagolle, O.; Inglada, J.; Nicola, L.; Rabaute, T. Near real-time agriculture monitoring at national scale at parcel resolution: Performance assessment of the Sen2-Agri automated system in various cropping systems around the world. Remote Sens. Environ. 2019, 221, 551–568.
26. Lin, C.; Zhong, L.; Song, X.-P.; Dong, J.; Lobell, D.B.; Jin, Z. Early- and in-season crop type mapping without current-year ground truth: Generating labels from historical information via a topology-based approach. Remote Sens. Environ. 2022, 274, 112994.
27. Huang, X.; Fu, Y.; Wang, J.; Dong, J.; Zheng, Y.; Pan, B.; Skakun, S.; Yuan, W. High-Resolution Mapping of Winter Cereals in Europe by Time Series Landsat and Sentinel Images for 2016–2020. Remote Sens. 2022, 14, 2120.
28. Joshi, N.; Baumann, M.; Ehammer, A.; Fensholt, R.; Grogan, K.; Hostert, P.; Jepsen, M.R.; Kuemmerle, T.; Meyfroidt, P.; Mitchard, E.T. A review of the application of optical and radar remote sensing data fusion to land use mapping and monitoring. Remote Sens. 2016, 8, 70.
29. Li, C.; Chen, W.; Wang, Y.; Wang, Y.; Ma, C.; Li, Y.; Li, J.; Zhai, W. Mapping Winter Wheat with Optical and SAR Images Based on Google Earth Engine in Henan Province, China. Remote Sens. 2022, 14, 284.
30. Ren, T.; Xu, H.; Cai, X.; Yu, S.; Qi, J. Smallholder crop type mapping and rotation monitoring in mountainous areas with Sentinel-1/2 imagery. Remote Sens. 2022, 14, 566.
31. Rao, P.; Zhou, W.; Bhattarai, N.; Srivastava, A.K.; Singh, B.; Poonia, S.; Lobell, D.B.; Jain, M. Using Sentinel-1, Sentinel-2, and Planet imagery to map crop type of smallholder farms. Remote Sens. 2021, 13, 1870.
32. Blickensdörfer, L.; Schwieder, M.; Pflugmacher, D.; Nendel, C.; Erasmi, S.; Hostert, P. Mapping of crop types and crop sequences with combined time series of Sentinel-1, Sentinel-2 and Landsat 8 data for Germany. Remote Sens. Environ. 2022, 269, 112831.
33. Orynbaikyzy, A.; Gessner, U.; Mack, B.; Conrad, C. Crop type classification using fusion of Sentinel-1 and Sentinel-2 data: Assessing the impact of feature selection, optical data availability, and parcel sizes on the accuracies. Remote Sens. 2020, 12, 2779.
34. Bargiel, D. A new method for crop classification combining time series of radar images and crop phenology information. Remote Sens. Environ. 2017, 198, 369–383.
35. Inglada, J.; Vincent, A.; Arias, M.; Marais-Sicre, C. Improved early crop type identification by joint use of high temporal resolution SAR and optical image time series. Remote Sens. 2016, 8, 362.
36. De Sa, J.M. Pattern Recognition: Concepts, Methods, and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2001.
37. Lary, D.J.; Alavi, A.H.; Gandomi, A.H.; Walker, A.L. Machine learning in geosciences and remote sensing. Geosci. Front. 2016, 7, 3–10.
38. Song, Q.; Hu, Q.; Zhou, Q.; Hovis, C.; Xiang, M.; Tang, H.; Wu, W. In-season crop mapping with GF-1/WFV data by combining object-based image analysis and random forest. Remote Sens. 2017, 9, 1184.
39. Kluger, D.M.; Wang, S.; Lobell, D.B. Two shifts for crop mapping: Leveraging aggregate crop statistics to improve satellite-based maps in new regions. Remote Sens. Environ. 2021, 262, 112488.
40. Wang, Z.; Zhang, H.; He, W.; Zhang, L. Cross-phenological-region crop mapping framework using Sentinel-2 time series imagery: A new perspective for winter crops in China. ISPRS J. Photogramm. Remote Sens. 2022, 193, 200–215.
41. Seydi, S.T.; Amani, M.; Ghorbanian, A. A Dual Attention Convolutional Neural Network for Crop Classification Using Time-Series Sentinel-2 Imagery. Remote Sens. 2022, 14, 498.
42. Torbick, N.; Chowdhury, D.; Salas, W.; Qi, J. Monitoring rice agriculture across Myanmar using time series Sentinel-1 assisted by Landsat-8 and PALSAR-2. Remote Sens. 2017, 9, 119.
43. Teluguntla, P.; Thenkabail, P.S.; Oliphant, A.; Xiong, J.; Gumma, M.K.; Congalton, R.G.; Yadav, K.; Huete, A. A 30-m Landsat-derived cropland extent product of Australia and China using random forest machine learning algorithm on Google Earth Engine cloud computing platform. ISPRS J. Photogramm. Remote Sens. 2018, 144, 325–340.
44. Liu, L.; Xiao, X.; Qin, Y.; Wang, J.; Xu, X.; Hu, Y.; Qiao, Z. Mapping cropping intensity in China using time series Landsat and Sentinel-2 images and Google Earth Engine. Remote Sens. Environ. 2020, 239, 111624.
45. Ni, R.; Tian, J.; Li, X.; Yin, D.; Li, J.; Gong, H.; Zhang, J.; Zhu, L.; Wu, D. An enhanced pixel-based phenological feature for accurate paddy rice mapping with Sentinel-2 imagery in Google Earth Engine. ISPRS J. Photogramm. Remote Sens. 2021, 178, 282–296.
46. Fang, P.; Zhang, X.; Wei, P.; Wang, Y.; Zhang, H.; Liu, F.; Zhao, J. The classification performance and mechanism of machine learning algorithms in winter wheat mapping using Sentinel-2 10 m resolution imagery. Appl. Sci. 2020, 10, 5075.
47. Fritz, S.; See, L.; Rembold, F. Comparison of global and regional land cover maps with statistical information for the agricultural domain in Africa. Int. J. Remote Sens. 2010, 31, 2237–2256.
48. Chen, S.; Useya, J.; Mugiyo, H. Decision-level fusion of Sentinel-1 SAR and Landsat 8 OLI texture features for crop discrimination and classification: Case of Masvingo, Zimbabwe. Heliyon 2020, 6, e05358.
49. Tassi, A.; Vizzari, M. Object-oriented LULC classification in Google Earth Engine combining SNIC, GLCM, and machine learning algorithms. Remote Sens. 2020, 12, 3776.
50. Ghasemi, M.; Karimzadeh, S.; Feizizadeh, B. Urban classification using preserved information of high dimensional textural features of Sentinel-1 images in Tabriz, Iran. Earth Sci. Inform. 2021, 14, 1745–1762.
51. Zanaga, D.; Van De Kerchove, R.; Daems, D.; De Keersmaecker, W.; Brockmann, C.; Kirches, G.; Wevers, J.; Cartus, O.; Santoro, M.; Fritz, S. ESA WorldCover 10 m 2021 v200. 2022. Available online: https://zenodo.org/record/7254221#.ZFCsKXYzY6R (accessed on 1 March 2023).
52. Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K.-i. Assessing the suitability of data from Sentinel-1A and 2A for crop classification. GISci. Remote Sens. 2017, 54, 918–938.
53. Wang, J.; Li, K.; Shao, Y.; Zhang, F.; Wang, Z.; Guo, X.; Qin, Y.; Liu, X. Analysis of combining SAR and optical optimal parameters to classify typhoon-invasion lodged rice: A case study using the random forest method. Sensors 2020, 20, 7346.
  54. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  55. Tolpekin, V.A.; Stein, A. Quantification of the effects of land-cover-class spectral separability on the accuracy of Markov-random-field-based superresolution mapping. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3283–3297. [Google Scholar] [CrossRef]
  56. Felegari, S.; Sharifi, A.; Moravej, K.; Amin, M.; Golchin, A.; Muzirafuti, A.; Tariq, A.; Zhao, N. Integration of Sentinel 1 and Sentinel 2 Satellite Images for Crop Mapping. App. Sci. 2021, 11, 10104. [Google Scholar] [CrossRef]
  57. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  58. Van Tricht, K.; Gobin, A.; Gilliams, S.; Piccard, I. Synergistic use of radar Sentinel-1 and optical Sentinel-2 imagery for crop mapping: A case study for Belgium. Remote Sens. 2018, 10, 1642. [Google Scholar] [CrossRef]
  59. Denize, J.; Hubert-Moy, L.; Betbeder, J.; Corgne, S.; Baudry, J.; Pottier, E. Evaluation of using sentinel-1 and-2 time-series to identify winter land use in agricultural landscapes. Remote Sens. 2018, 11, 37. [Google Scholar] [CrossRef]
  60. Tian, H.; Qin, Y.; Niu, Z.; Wang, L.; Ge, S. Summer Maize Mapping by Compositing Time Series Sentinel-1A Imagery Based on Crop Growth Cycles. J. Indian Soc. Remote. Sens. 2021, 49, 2863–2874. [Google Scholar] [CrossRef]
  61. Xu, L.; Zhang, H.; Wang, C.; Zhang, B.; Liu, M. Crop classification based on temporal information using sentinel-1 SAR time-series data. Remote Sens. 2018, 11, 53. [Google Scholar] [CrossRef]
  62. Zhao, W.; Qu, Y.; Chen, J.; Yuan, Z. Deeply synergistic optical and SAR time series for crop dynamic monitoring. Remote Sens. Environ. 2020, 247, 111952. [Google Scholar] [CrossRef]
  63. Wang, Y.; Fang, S.; Zhao, L.; Huang, X.; Jiang, X. Parcel-based summer maize mapping and phenology estimation combined using Sentinel-2 and time series Sentinel-1 data. Int. J. Appl. Earth. Obs. Geoinf. 2022, 108, 102720. [Google Scholar] [CrossRef]
  64. Luo, H.; Li, M.; Dai, S.; Li, H.; Li, Y.; Hu, Y.; Zheng, Q.; Yu, X.; Fang, J. Combinations of Feature Selection and Machine Learning Algorithms for Object-Oriented Betel Palms and Mango Plantations Classification Based on Gaofen-2 Imagery. Remote Sens. 2022, 14, 1757. [Google Scholar] [CrossRef]
  65. Guo, L.; Zhao, S.; Gao, J.; Zhang, H.; Zou, Y.; Xiao, X. A Novel Workflow for Crop Type Mapping with a Time Series of Synthetic Aperture Radar and Optical Images in the Google Earth Engine. Remote Sens. 2022, 14, 5458. [Google Scholar] [CrossRef]
  66. Skakun, S.; Wevers, J.; Brockmann, C.; Doxani, G.; Aleksandrov, M.; Batič, M.; Frantz, D.; Gascon, F.; Gómez-Chova, L.; Hagolle, O. Cloud Mask Intercomparison eXercise (CMIX): An evaluation of cloud masking algorithms for Landsat 8 and Sentinel-2. Remote Sens. Environ. 2022, 274, 112990. [Google Scholar] [CrossRef]
Figure 1. The geographical location of Henan Province, China, and the distribution of ground crop samples obtained in this study.
Figure 2. The number of Sentinel-1 observations: (a) total number of observations from 16 June 2021 to 15 September 2021; (b) number of observations from 16 June 2021 to 15 July 2021; (c) number of observations from 16 July 2021 to 15 August 2021; (d) number of observations from 16 August 2021 to 15 September 2021.
Figure 3. The number of Sentinel-2 observations after cloud masking: (a) total number of observations from 16 June 2021 to 15 September 2021; (b) number of observations from 16 June 2021 to 15 July 2021; (c) number of observations from 16 July 2021 to 15 August 2021; (d) number of observations from 16 August 2021 to 15 September 2021.
Figure 4. Flowchart of the proposed method for mapping crop types at a regional scale.
Figure 5. The OA and KC of the remaining crop type map as a function of the information entropy H: (a) OA of Scheme 1; (b) KC of Scheme 1; (c) OA of Scheme 2; (d) KC of Scheme 2.
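For reference, the information entropy H on the horizontal axis of Figure 5 is Shannon entropy. As a hedged reading (the exact CA-score definition belongs to the methods section and is not reproduced here), for a pixel whose N candidate classes receive vote proportions p_i across the individual classification results:

$$
H = -\sum_{i=1}^{N} p_i \log_2 p_i, \qquad \sum_{i=1}^{N} p_i = 1 .
$$

H equals 0 when every classification assigns the pixel to the same class (highest confidence) and reaches its maximum of \(\log_2 N\) when the votes are spread uniformly, so a low H indicates a reliably classified pixel.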
Figure 6. Spatial distribution of the major autumn crops in Henan Province in 2021, based on Scheme 3.
Figure 7. Local comparison of the classification results of the different schemes; the specific location is marked with a black box in Figure 6.
Figure 8. Comparison of the prefectural-level areas of different crops between the Scheme 3 mapping results for 2021 and the statistical data from 2019 and 2020.
Figure 9. The distributions of the information entropy H and the CA-score for Scheme 1 and Scheme 2: (a) H of Scheme 1; (b) H of Scheme 2; (c) CA-score of Scheme 1; (d) CA-score of Scheme 2.
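To illustrate the voting step that Figure 9 evaluates, the following is a minimal Python sketch of entropy-based decision fusion. It is a schematic reconstruction in the spirit of the CA-score vote described in the abstract, not the authors' exact procedure; the array shapes, the confidence definition, and the tie-breaking rule are all assumptions.

```python
import numpy as np

def vote_confidence(votes: np.ndarray, n_classes: int) -> np.ndarray:
    """Per-pixel confidence derived from the Shannon entropy of the votes.

    votes: (K, H, W) array of integer label maps from K classifications
           (e.g., several single-temporal results plus the time-series result).
    Returns a (H, W) confidence in [0, 1]: 1 = unanimous, 0 = uniform votes.
    """
    k = votes.shape[0]
    counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)])
    p = counts / k  # per-pixel vote proportions p_i
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -np.where(p > 0, p * np.log2(p), 0.0).sum(axis=0)
    return 1.0 - h / np.log2(n_classes)

def fuse_votes(votes: np.ndarray, n_classes: int) -> np.ndarray:
    """Plain majority vote; ties fall to the lowest class index (assumption)."""
    counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)

# Toy example: 5 classifiers, 6 classes, a 100 x 100 pixel tile.
votes = np.random.randint(0, 6, size=(5, 100, 100))
crop_map = fuse_votes(votes, n_classes=6)
confidence = vote_confidence(votes, n_classes=6)  # pixel-scale accuracy proxy
```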
Table 1. The parameters of the classification feature image set used in this study.

Satellite Data Type | Number | Band | Resolution (m)
Sentinel-1 GRD | 110 | VH | 10
 | | VH_asm | 10
 | | VH_contrast | 10
 | | VH_corr | 10
 | | VH_var | 10
 | | VH_idm | 10
 | | VH_ent | 10
Sentinel-2 Level-2A | 1137 | Blue | 10
 | | Green | 10
 | | Red | 10
 | | Red Edge 1 | 20 (resampled to 10)
 | | Red Edge 2 | 20 (resampled to 10)
 | | Red Edge 3 | 20 (resampled to 10)
 | | NIR | 10
 | | Red Edge 4 | 20 (resampled to 10)
 | | SWIR 1 | 20 (resampled to 10)
 | | SWIR 2 | 20 (resampled to 10)
 | | NDVI | 10
 | | NDWI | 10
 | | EVI | 10
 | | NDBI | 10
 | | LSWI | 10
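To make the composition of this feature set concrete, the following is a minimal sketch in the Earth Engine Python API of how one such single-window feature image could be assembled. It is illustrative only: the boundary asset, date window, cloud threshold, and the VH dB-to-byte scaling range are assumptions rather than the configuration used in this study, and the multi-window compositing behind the 110 SAR and 1137 optical observations is omitted.

```python
import ee

ee.Initialize()

# Study area (assumed boundary source; the paper's own vector data may differ).
henan = (ee.FeatureCollection('FAO/GAUL/2015/level1')
         .filter(ee.Filter.eq('ADM1_NAME', 'Henan Sheng')))

# Sentinel-2 L2A median composite for one time window, scaled to reflectance.
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(henan.geometry())
      .filterDate('2021-06-16', '2021-07-15')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
      .median()
      .multiply(0.0001))

ndvi = s2.normalizedDifference(['B8', 'B4']).rename('NDVI')
ndwi = s2.normalizedDifference(['B3', 'B8']).rename('NDWI')
lswi = s2.normalizedDifference(['B8', 'B11']).rename('LSWI')
ndbi = s2.normalizedDifference(['B11', 'B8']).rename('NDBI')
evi = s2.expression(
    '2.5 * (NIR - RED) / (NIR + 6 * RED - 7.5 * BLUE + 1)',
    {'NIR': s2.select('B8'), 'RED': s2.select('B4'), 'BLUE': s2.select('B2')},
).rename('EVI')

# Sentinel-1 VH composite; glcmTexture needs integer input, so the dB
# backscatter is rescaled to 8 bits (the [-25, 0] dB range is an assumption).
vh = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(henan.geometry())
      .filterDate('2021-06-16', '2021-07-15')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .select('VH')
      .median())
vh8 = vh.unitScale(-25, 0).multiply(255).toByte().rename('VH')
glcm = vh8.glcmTexture(3)

features = ee.Image.cat([
    s2.select(['B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B8', 'B8A', 'B11', 'B12']),
    ndvi, ndwi, evi, ndbi, lswi, vh8,
    glcm.select(['VH_asm', 'VH_contrast', 'VH_corr',
                 'VH_var', 'VH_idm', 'VH_ent']),
])
```

Note that glcmTexture emits bands named VH_asm, VH_contrast, VH_corr, VH_var, VH_idm, and VH_ent, matching the texture features listed in Table 1.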
Table 2. The number of ground reference samples divided into training and validation.

Crop Type | Training | Validation | Total
Maize | 717 | 283 | 1000
Rice | 539 | 205 | 744
Peanuts | 888 | 382 | 1270
Soybeans | 340 | 159 | 499
Other Crops | 441 | 179 | 620
Others | 675 | 325 | 1000
Total | 3600 | 1533 | 5133
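Table 2 corresponds to an approximately 70/30 stratified split (3600 of 5133 samples for training). A minimal sketch of how such a split could be reproduced, assuming the samples sit in a CSV with a hypothetical crop_type column (the paper's actual sample format is not shown):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file and column names, for illustration only.
samples = pd.read_csv('ground_samples.csv')  # columns: lon, lat, crop_type

train, valid = train_test_split(
    samples,
    train_size=0.7,                  # ~3600 of 5133 samples, as in Table 2
    stratify=samples['crop_type'],   # preserve per-class proportions
    random_state=42,
)
print(train['crop_type'].value_counts())
print(valid['crop_type'].value_counts())
```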
Table 3. Accuracy of crop type classification of different schemes.

Class | Scheme 1 PA / UA / F1-Score (%) | Scheme 2 PA / UA / F1-Score (%) | Scheme 3 PA / UA / F1-Score (%)
Maize | 81.63 / 69.79 / 75.24 | 85.87 / 69.43 / 76.78 | 84.45 / 74.22 / 79.01
Rice | 89.76 / 92.93 / 91.32 | 88.78 / 87.50 / 88.14 | 91.22 / 89.47 / 90.34
Peanut | 88.22 / 82.80 / 85.42 | 90.31 / 81.75 / 85.82 | 90.31 / 86.25 / 88.24
Soybean | 54.09 / 86.00 / 66.41 | 44.03 / 97.22 / 60.61 | 62.26 / 92.52 / 74.44
Other Crops | 57.54 / 64.38 / 60.77 | 56.42 / 66.45 / 61.03 | 62.01 / 70.70 / 66.07
Others | 94.46 / 91.10 / 92.75 | 91.69 / 90.58 / 91.13 | 95.08 / 91.42 / 93.21
OA (%) | 81.41 | 80.82 | 84.15
KC | 0.77 | 0.76 | 0.80
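The metrics in Table 3 follow standard definitions: PA (producer's accuracy) is per-class recall, UA (user's accuracy) is per-class precision, the F1-score is their harmonic mean, OA is the proportion of correctly classified validation samples, and KC is Cohen's kappa. A minimal numpy sketch, assuming a confusion matrix with rows as reference labels and columns as predictions:

```python
import numpy as np

def accuracy_metrics(cm: np.ndarray):
    """cm: square confusion matrix, rows = reference, columns = predicted."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    pa = diag / cm.sum(axis=1)   # producer's accuracy (per-class recall)
    ua = diag / cm.sum(axis=0)   # user's accuracy (per-class precision)
    f1 = 2 * pa * ua / (pa + ua)
    oa = diag.sum() / n          # overall accuracy
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2  # chance agreement
    kc = (oa - pe) / (1 - pe)    # Cohen's kappa coefficient
    return pa, ua, f1, oa, kc
```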
Table 4. Mapping results of Scheme 3 in 2021 compared to the agricultural statistical data from 2019 and 2020.

Crop Type | Province-Level Statistical Area, 2019 (km2) | Province-Level Statistical Area, 2020 (km2) | Province-Level Mapped Area, 2021 (km2) | Prefectural-Level RMSE (km2) | Prefectural-Level R2
Maize | 36,923.7 | 37,763.8 | 37,379.9 | 871.3 | 0.69
Rice | 5997.8 | 5752.4 | 8275.9 | 143.9 | 0.99
Peanut | 12,230.7 | 12,647.2 | 16,749.0 | 286.4 | 0.98
Soybean | 4239.9 | 3973.4 | 3777.8 | 215.4 | 0.30
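The RMSE and R2 columns in Table 4 summarize prefectural-level agreement between the mapped and statistical areas. A minimal sketch of these two statistics, assuming R2 is the squared Pearson correlation (the paper may instead use the regression coefficient of determination):

```python
import numpy as np

def area_agreement(stat_km2, mapped_km2):
    """Prefectural-level agreement between statistical and mapped areas."""
    s, m = np.asarray(stat_km2, float), np.asarray(mapped_km2, float)
    rmse = np.sqrt(np.mean((m - s) ** 2))
    r2 = np.corrcoef(s, m)[0, 1] ** 2  # squared Pearson correlation
    return rmse, r2
```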
Table 5. Accuracy of crop type classification based on Sentinel-1 SAR data only, Sentinel-2 optical data only, and the integration of optical and SAR data.

Scheme | Optical Data OA (%) / KC | SAR Data OA (%) / KC | Optical and SAR Data OA (%) / KC
Scheme 1 | 77.49 / 0.72 | 52.11 / 0.42 | 81.41 / 0.77
Scheme 2 | 78.72 / 0.74 | 44.19 / 0.30 | 80.82 / 0.76