Article

Synergy of Sentinel-1 and Sentinel-2 Imagery for Crop Classification Based on DC-CNN

1 School of Computer and Information Engineering, Henan University, Kaifeng 475004, China
2 Henan Province Engineering Research Center of Spatial Information Processing, Kaifeng 475004, China
3 Henan Key Laboratory of Big Data Analysis and Processing, Kaifeng 475004, China
4 College of Agriculture, Henan University, Kaifeng 475004, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(11), 2727; https://doi.org/10.3390/rs15112727
Submission received: 4 May 2023 / Revised: 21 May 2023 / Accepted: 23 May 2023 / Published: 24 May 2023

Abstract
Over the years, remote sensing technology has become an important means of obtaining accurate agricultural production information, such as crop type distribution, owing to its advantages of large coverage and a short observation period. The cooperative use of multi-source remote sensing imagery has become a new development trend in the field of crop classification. In this paper, the polarimetric components of Sentinel-1 (S-1), obtained by a new model-based decomposition method adapted to dual-polarized SAR data, were introduced into crop classification for the first time. Furthermore, a Dual-Channel Convolutional Neural Network (DC-CNN) with feature extraction, feature fusion, and encoder-decoder modules was constructed for crop classification based on S-1 and Sentinel-2 (S-2). The two branches can learn from each other by sharing parameters so as to effectively integrate the features extracted from multi-source data and obtain a high-precision crop classification map. In the proposed method, the backscattering components (VV, VH) and polarimetric components (volume scattering, remaining scattering) were first obtained from S-1, and the multispectral features were extracted from S-2. Four candidate combinations of multi-source features were formed from the above features, and the optimal one was determined experimentally. The features of the optimal combination were then input into the corresponding network branches. In the feature extraction module, the features with strong collaboration ability in multi-source data were learned by parameter sharing, and they were deeply fused in the feature fusion module and encoder-decoder module to obtain more accurate classification results. The experimental results showed that the polarimetric components, which increased the difference between crop categories and reduced the misclassification rate, played an important role in crop classification. Among the four candidate feature combinations, the combination of S-1 and S-2 features achieved higher classification accuracy than a single data source, and the classification accuracy was highest when the two polarimetric components were used simultaneously. On the basis of the optimal combination of features, the effectiveness of the proposed method was verified. The classification accuracy of DC-CNN reached 98.40%, with Kappa scoring 0.98 and Macro-F1 scoring 0.98, compared to 2D-CNN (OA 94.87%, Kappa 0.92, Macro-F1 0.95), FCN (OA 96.27%, Kappa 0.94, Macro-F1 0.96), and SegNet (OA 96.90%, Kappa 0.95, Macro-F1 0.97). The results of this study demonstrate that the proposed method has significant potential for crop classification.


1. Introduction

Agriculture is the foundation of the national economy and the basic condition for ensuring social development [1]. It is of great significance to obtain the spatial distribution information of crops in time for phenology monitoring, yield estimation, disaster assessment, soil moisture inversion, and other fields [2,3,4,5]. Remote sensing (RS) technology, as a macro-real-time, large-scale Earth observation technology, has an obvious advantage over traditional manual statistical methods in identifying crop categories [6]. Therefore, it is widely used by scholars in the study of crop classification [7].
Multispectral (MS) images and Synthetic Aperture Radar (SAR) images are two effective RS data sources. MS images obtained by optical sensors contain rich spectral information, which can identify the biochemical characteristics of different crops [8]. In 2017, Sonobe et al. [9] classified crops in Hokkaido, Japan, and demonstrated that the final accuracy could reach 94.5% when using data obtained from the Operational Land Imager of Landsat-8. In [10], the sown area and spatial distribution of the main crops were extracted using MODIS data in Hebei Province, which verified the feasibility of extracting provincial crop planting information. Tatsumi et al. [11] used Landsat-7 data to study crop classification in the Ica region, Peru. The difference in the spectral characteristics of different crops in MS images is the basis of crop classification. However, MS imagery is weather-dependent and is affected by clouds, rain, and fog, which hinders its performance when classifying crops [12].
SAR, an active microwave sensor without limitations caused by weather conditions due to its penetration capability, can provide different information from MS images for crop classification [13,14]. In [15], rice was identified based on RADARSAT-2 data with the threshold set according to the ratio of HH to VV, and the identification accuracy reached 92.64%. In order to improve the classification accuracy, Xiang et al. [16] combined Sentinel-1 (S-1) data with elevation and slope information to extract ground feature information in the study area. Based on the polarimetric components of different crops in PolSAR data, Guo et al. [17] put forward a new parameter to realize crop classification. As can be seen from the above studies, SAR images can achieve high accuracy in crop classification, but due to the lack of spectral features and sensitivity to surface parameters, the applicability of SAR in crop classification is limited, and it is often combined with other auxiliary data.
As there is a very complicated nonlinear relationship between environmental elements, a single type of RS data cannot fully and accurately reflect the comprehensive information on the ground [18]. The diversity of local, regional, and global agricultural landscapes and their site-specific challenges have been reflected in many studies, including spectral similarity, crop diversity, weather conditions, and farming systems [19,20,21,22]. In recent years, the information provided by multi-source RS data has been complementary and cooperative [23,24]. The collaborative application of multi-source RS can reduce or eliminate the problems of target features, such as ambiguity, incompleteness, and uncertainty [25]. Multi-source RS image fusion can make use of complementary information from different sources to achieve accurate and comprehensive crop classification [21,24,26,27,28,29].
Recently, several studies have used MS and SAR data for crop classification. In particular, S-1 and Sentinel-2 (S-2) have similar spatial resolutions, which has made their synergistic use a research hotspot [30,31,32,33,34]. In [35], wheat and oilseed rape were monitored based on the spectral information of optical data and the backscattering coefficient of SAR, and the accuracy of S-1 and S-2 data combined could reach 92%. Sun et al. combined the spectral information in optical data with the backscattering coefficient of SAR data and used a machine learning algorithm to classify crops; the results showed that VV and VH were effective in the classification of wheat and other crops, and the classification accuracy could reach 93% [36]. Ghassemi et al. used optical features from S-2 together with composites from S-1 to produce a full-coverage map with appropriate OA [37]. However, the polarimetric components have not been fully utilized in the task of crop classification. There are many polarimetric decomposition methods for obtaining polarimetric information from SAR images, such as H/A/α decomposition [38,39] and Freeman decomposition [40], but most of them were proposed for quad-pol SAR data and are not suitable for S-1 data. On the other hand, with the continuous development of deep learning technology, it is widely used in the field of classification [41,42], and some scholars have used deep learning to identify crops from multi-source RS images [43,44,45,46]. Kussul et al. classified corn and wheat in Ukraine and found that the classification result of a convolutional neural network (CNN) was better than that of RF [47]. In [48], CNN, RF, SVM, and other classifiers were used to classify soybeans and wheat in an agricultural region of Canada; the results showed that CNN had the best classification result, with an overall accuracy of 96.72%. However, the ability of such models to process multi-source RS data is still limited [49]; for example, insufficient integration of the complementary information between multi-source data is prone to producing redundant input. Moreover, most studies analyze crop phenology changes based on time-series remote sensing data to achieve classification. For some areas affected by weather conditions, it is difficult to obtain time-series images with quality assurance, and the effectiveness of single-date, multi-source remote sensing data for improving crop classification accuracy needs to be verified.
To address these issues, this paper adopted the polarimetric components of S-1 images, which were extracted using the newly developed model-based decomposition method [50]. Meanwhile, a Dual-Channel CNN (DC-CNN) based on a combination of the features of single-date S-1 and S-2 was constructed by extracting and fusing the multi-source features. On this basis, the best combination of the backscattering components, polarimetric components, and MS features was analyzed. The main contributions of this paper are as follows:
  • To properly exploit the polarimetric content of S-1 SAR data in crop classification, the outputs of the new polarimetric decomposition conceived by Mascolo et al. in [50], which is adapted for dual-polarimetric SAR data, were extracted from VH-VV S-1 observations. These, along with the VH and VV backscattering coefficients, were combined with MS features, and the best combination strategy was analyzed.
  • A dual-channel CNN model, namely DC-CNN, with shared parameters based on multi-source RS data was constructed. Specifically, the features obtained from S-1 and S-2 data were fed into two CNN channels for independent learning, and they were transformed into high-dimensional feature expressions. Furthermore, the sharing of parameters in the convolution layer made the two branches learn cooperatively. The correlation of multi-source features was maximized while maintaining the unique features of each data source.
The rest of this paper is arranged as follows: In Section 2, the study area and adopted data are described, and the crop classification method is presented in detail. The results of crop classification are shown in Section 3. In Section 4, the results are discussed and analyzed. Section 5 presents the final conclusions.

2. Materials and Methods

2.1. Study Area

Tongxiang (120°17′E–120°39′E, 30°28′N–30°47′N) is located in the Hangjiahu Plain in northern Zhejiang Province, China, as shown in Figure 1. The land is flat and fertile, which makes it suitable for the cultivation of wheat, rice, and oilseed rape. Wheat and oilseed rape are sown in late November and harvested in June and July of the following year. Due to the abundant rain in this study area, the availability of optical data is greatly affected. Therefore, this study adopted single-date imagery to achieve high-precision crop distribution mapping.

2.2. Data and Preprocessing

2.2.1. Sentinel-1A SAR Data and Preprocessing

Sentinel-1A (S-1A), a SAR satellite, was launched by the European Space Agency (ESA) on 3 April 2014. It is able to operate day and night with a high spatial resolution of 5 m × 20 m in Interferometric Wide (IW) swath mode. The sensor carried by the S-1A can provide large-scale images of 250 km × 250 km.
In this study, an IW-mode Single Look Complex (SLC) S-1A image acquired on 6 May 2021 was downloaded from the Copernicus Open Access Hub (https://scihub.copernicus.eu/). The detailed parameters are shown in Table 1. The main preprocessing steps were as follows: (1) Orbit correction. Using precise satellite orbit data to correct the orbit information effectively removes systematic errors caused by orbit errors. (2) Thermal noise removal. To reduce the influence of noise in the SAR image, a noise removal technique [54] was applied. (3) Radiometric calibration. All kinds of distortions associated with the radiant brightness in the image data were eliminated as much as possible. (4) Debursting. The bursts containing valid signal were merged. (5) Generation of the polarimetric covariance matrix C2 from the complex bands obtained in step (3). (6) Multi-looking. The number of range looks and azimuth looks was 4 and 1, respectively. (7) Speckle filtering. (8) Range-Doppler terrain correction. (9) Conversion of the σ⁰ band into the common dB scale. (10) Extraction of the study area. After preprocessing the S-1 data, the backscattering coefficient σ⁰ was obtained, and the saved C2 matrix was exported for polarimetric decomposition.
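Three of the numbered steps above (C2 generation, multi-looking, and dB conversion) can be illustrated with a small NumPy sketch. The array names and the 4 × 1 multi-look factors follow the description above; everything else (reading the SLC, filtering, terrain correction) is assumed to be handled by the SAR toolbox and is not shown.

```python
import numpy as np

def c2_from_slc(s_vh: np.ndarray, s_vv: np.ndarray):
    """Form the 2x2 polarimetric covariance elements from complex SLC bands (step 5)."""
    c11 = np.abs(s_vh) ** 2          # <|S_VH|^2>
    c22 = np.abs(s_vv) ** 2          # <|S_VV|^2>
    c12 = s_vh * np.conj(s_vv)       # <S_VH S_VV*>
    return c11, c12, c22

def multilook(band: np.ndarray, range_looks: int = 4, azimuth_looks: int = 1):
    """Average over azimuth_looks x range_looks windows (step 6, azimuth = rows, range = cols)."""
    rows = (band.shape[0] // azimuth_looks) * azimuth_looks
    cols = (band.shape[1] // range_looks) * range_looks
    blocks = band[:rows, :cols].reshape(rows // azimuth_looks, azimuth_looks,
                                        cols // range_looks, range_looks)
    return blocks.mean(axis=(1, 3))

def to_db(sigma0: np.ndarray, eps: float = 1e-10):
    """Convert the calibrated sigma0 band to the dB scale (step 9)."""
    return 10.0 * np.log10(sigma0 + eps)
```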

2.2.2. Sentinel-2B Data and Preprocessing

Sentinel-2 (S-2) is a multi-spectral imaging mission that consists of two satellites, 2A and 2B. Each satellite has a revisit period of 10 days, and together the two satellites provide a combined temporal resolution of 5 days.
The study area is located in the south of China, which has a subtropical monsoon climate. Due to the influence of weather, the quality of the optical data was seriously affected during the critical period of crop growth (April 2021 to June 2021). The weather statistics are shown in Figure 2; the data were obtained from http://www.weather.com.cn/ and https://weather.cma.cn (accessed on 11 April 2022). It can be seen that only 14 days from April to June were sunny, while the other days were cloudy, overcast, or rainy.
S-2 images will be affected by weather factors such as clouds, rain, and fog. In order to select effective S-2 images in the study area, the cloud cover percentage of S-2 images was counted (obtained from https://scihub.copernicus.eu/dhus/#/home (accessed on 11 April 2022)), as shown in Table 2.
As can be seen from Table 2, most S-2 images had a high cloud cover percentage and could not be used for crop classification. Only three S-2B images acquired between 3 April and 27 June 2021 were suitable for crop classification. As the study area is large, a single S-2B scene cannot completely cover the whole study area. Therefore, the S-2B images taken on 3 May 2021, which had a low cloud cover percentage, were chosen for crop classification.
In this study, cloud-free Level-2A (atmospherically corrected) S-2B data acquired on 3 May 2021 were used. The S-2 preprocessing steps included: (1) Resampling. All bands were resampled to a resolution of 10 m. (2) Layer stacking. The resampled bands were combined into an MS image. (3) Mosaicking. Two images were stitched together to obtain an image covering the whole study area. (4) Extraction of the study area. The spectral bands (Table 3) were extracted for further crop classification.
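As a hedged illustration of steps (1) and (2), the following sketch resamples a set of S-2 band files to the 10 m grid and stacks them into one multispectral array with rasterio. The file names and the assumption that Band 2 already defines the 10 m target grid are placeholders made for the example.

```python
import numpy as np
import rasterio
from rasterio.enums import Resampling

# Hypothetical band files; Level-2A products store each band in a separate file.
band_paths = ["B02.jp2", "B03.jp2", "B04.jp2", "B05.jp2", "B06.jp2",
              "B07.jp2", "B08.jp2", "B8A.jp2", "B11.jp2", "B12.jp2"]

def load_band_10m(path: str, target_height: int, target_width: int) -> np.ndarray:
    """Read one band and resample it to the 10 m target grid (bilinear)."""
    with rasterio.open(path) as src:
        return src.read(1, out_shape=(target_height, target_width),
                        resampling=Resampling.bilinear)

with rasterio.open(band_paths[0]) as ref:   # assume B02 is already at 10 m
    height, width = ref.height, ref.width

# Layer stacking: (bands, rows, cols) multispectral cube.
ms_image = np.stack([load_band_10m(p, height, width) for p in band_paths])
```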

2.2.3. Ground-Truth Data and Preprocessing

The vector data for the farmland were obtained by manual measurement and local government statistics. A reliable farmland boundary is beneficial to the final mapping of crop distribution. A field investigation was conducted from 10 May to 12 May 2021 to collect crop samples in the study area. The farmland we studied consisted of government-planned fields provided by the local government. During the study period, the planting status of the farmland could be completely divided into three categories: oilseed rape, wheat, and bare land. There were also economic crops such as mulberry in Tongxiang city, but they were not involved in our experiment. In order to ensure the uniform distribution and quantity of samples, three experts interpreted and supplemented the samples according to a 3.8-m high-resolution remote sensing image (a GF-1 optical image from 1 May 2021) and Google images (a Landsat-8 optical image from 29 April 2021). The expert-interpreted samples accounted for about 10% of the total sample. Finally, 305 plots were obtained as samples (19,540 pixels), including 101 oilseed rape plots (6038 pixels), 103 wheat plots (7418 pixels), and 101 bare land plots (6084 pixels). The sample distribution is shown in Figure 3.
The samples were randomly divided into three parts: the training set was 60% (11,713 pixels), the validation set was 20% (3906 pixels), and the testing set was 20% (3921 pixels). In particular, the validation set was used to adjust the hyperparameters to prevent overfitting of the model. The testing set did not participate in the training process and was used to evaluate the performance of the model independently. The detailed parameters of the samples are described in Table 4.
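The 60/20/20 split described above can be reproduced, for example, with scikit-learn. The feature and label arrays below are placeholders; splitting by plot rather than by pixel would be an equally valid reading of the text, and class-stratified sampling is used here as a design choice to keep the three classes balanced across sets.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(19540, 14))     # placeholder features (e.g., 10 MS bands + VV, VH, m_s, m_v)
y = rng.integers(0, 3, size=19540)   # placeholder labels: 0 = oilseed rape, 1 = wheat, 2 = bare land

# 60% training, then split the remaining 40% evenly into validation and testing (20% each).
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.60, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=0)
```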

2.3. Crop Type Classification

2.3.1. Overview

The flow chart of the proposed method is shown in Figure 4. There were three steps in this crop classification method; data acquisition and preprocessing were completed beforehand. In step 1, on the one hand, the MS image was obtained from the S-2 data; on the other hand, the VH and VV backscattering components and the polarimetric components from the model-based decomposition in [50] were obtained from the S-1 data. The above features were grouped into different combinations. In step 2, the DC-CNN model was constructed, which included three modules: a feature extraction module, a feature fusion module, and an encoder-decoder module. In step 3, the optimal feature combination was obtained by analyzing and evaluating the classification results of the different feature combinations, and the final crop distribution map was acquired based on the trained DC-CNN model. In order to measure the accuracy of the classification results, qualitative and quantitative evaluations were carried out. The qualitative evaluation was used to assess the classification results intuitively [51]: the classification results were visually interpreted and verified against the samples obtained from the ground-truth data. To quantitatively evaluate the accuracy of each combination, several metrics (i.e., Macro-F1, Overall Accuracy (OA), and Kappa) were considered [52,53].

2.3.2. Polarimetric Decomposition and Feature Combination

The abundant polarimetric information extracted from SAR data has great potential for the classification of crop types [54]. However, there are few polarimetric decomposition methods applicable to S-1. Mascolo et al. proposed a novel model-based decomposition in [50] that is adapted for S-1 and by which any Stokes vector can be decomposed into a partially polarized and a completely polarized wave component. In this study, this decomposition was applied to exploit the polarimetric components of S-1, which were introduced into crop classification for the first time. The specific decomposition process is as follows:
The Sentinel-1 dual-polarimetric SAR data can be represented by the following polarimetric covariance matrix [55,56]:
$$C_{2\times2} = \begin{bmatrix} c_{11} & c_{12} \\ c_{12}^{*} & c_{22} \end{bmatrix} = \begin{bmatrix} \left\langle \left|S_{VH}\right|^{2} \right\rangle & \left\langle S_{VH} S_{VV}^{*} \right\rangle \\ \left\langle S_{VV} S_{VH}^{*} \right\rangle & \left\langle \left|S_{VV}\right|^{2} \right\rangle \end{bmatrix} \quad (1)$$
where $\langle \cdot \rangle$ denotes multilooking and/or speckle filtering.
The random dipole cloud model, which was widely used in quad-pol decompositions [57,58], was used for the dual-pol decomposition.
Transforming $C_{2\times2}$ into the Stokes vector:
$$\underline{S} = \begin{bmatrix} s_1 \\ s_2 \\ s_3 \\ s_4 \end{bmatrix} = \begin{bmatrix} c_{11} + c_{22} \\ c_{11} - c_{22} \\ 2\,\mathrm{Re}(c_{12}) \\ 2\,\mathrm{Im}(c_{12}) \end{bmatrix} \quad (2)$$
where $\mathrm{Re}(c_{12})$ and $\mathrm{Im}(c_{12})$ represent the real part and the imaginary part of $c_{12}$, respectively.
The Stokes vector $\underline{S}$ can be decomposed according to the model-based decomposition in [50]:
$$\underline{S} = m_v\,\underline{s}_v + m_s\,\underline{s}_p = m_v \begin{bmatrix} 1 \\ \pm 0.5 \\ 0 \\ 0 \end{bmatrix} + m_s \begin{bmatrix} 1 \\ \cos 2\alpha \\ \sin 2\alpha \cos\delta \\ \sin 2\alpha \sin\delta \end{bmatrix} \quad (3)$$
where, on the left side, $\underline{s}_v$ and $\underline{s}_p$ represent the partially polarized and the completely polarized components, respectively, with $m_v$ and $m_s$ being the corresponding powers. Note that, on the right side, the volume term is modeled according to the random cloud of dipoles model: $m_v$ is the power of the partially polarized volume term (also referred to as volume scattering), and $m_s$ is the power of the polarized term (also referred to as surface scattering). The $\alpha$ angle measures the separation between the transmitted and received waves [50], and $\delta$ is the cross-polarized phase.
For different crop types in the early stages of planting, surface scattering is dominant. With the growth and development of crops, the proportion of volume scattering increases gradually. For leafy crops, the proportion of volume scattering is higher than that of surface scattering. The characteristics of different crop canopies will affect the specific value of volume scattering [59,60].
Therefore, as shown in [50], $m_v$ can be obtained by solving the quadratic in Equation (4), where the coefficients $a$, $b$, and $c$ are calculated from the random dipole cloud model, with $G = \mathrm{diag}(1, -1, -1, -1)$:
$$a\,m_v^{2} + b\,m_v + c = 0, \quad \begin{cases} a = \underline{s}_v^{T} G\,\underline{s}_v = 0.75 \\ b = -2\,\underline{S}^{T} G\,\underline{s}_v = -2\left( s_1 \mp 0.5\,s_2 \right) \\ c = \underline{S}^{T} G\,\underline{S} = s_1^{2} - s_2^{2} - s_3^{2} - s_4^{2} \end{cases} \quad (4)$$
Only one root of the quadratic equation satisfies energy conservation ($m_v \leq s_1$), which provides a unique solution for $m_v$.
The $m_s$ can then be calculated as in Equation (5), which avoids the negative eigenvalue issues that hinder quad-pol model-based approaches [61]:
$$m_s = s_1 - m_v \quad (5)$$
The backscattering components VV/VH were obtained from the preprocessed S-1 data, and $m_s$/$m_v$ were calculated using the model-based decomposition. The MS features (Bands 2, 3, 4, 5, 6, 7, 8, 8A, 11, and 12) were acquired from the preprocessed and band-selected S-2 data. Table 5 describes in detail the different combinations of the extracted features, which were used as different input data sets in the subsequent experiments.
Combinations A to C used only the different components extracted from S-1, which made it convenient to analyze the performance of the polarimetric components $m_s$ and $m_v$ in crop classification with S-1 data alone. Combinations D to H were different combinations of the multi-source features. Combinations F and G were set to evaluate the contribution of the polarimetric features $m_s$ and $m_v$, respectively, to multi-source crop classification.

2.3.3. Framework of DC-CNN

CNNs, which are widely used in deep learning approaches for crop classification [62,63], are helpful for effectively discovering salient features in data. However, the characteristics of crops in multi-source RS images are different. In order to effectively use the features extracted from S-1 and S-2, DC-CNN was constructed to investigate the effects of S-1 and S-2 on crop classification. Because the characteristics of crops in S-1 and S-2 images differ, the features obtained from S-1 were used as the input for one branch, and the spectral information obtained from S-2 was used as the input for the other. Each of the streams learns sensor-specific representations; through parameter sharing, representations can also be learned from the other branch in order to achieve the best classification effect.
As shown in Figure 5, the structure of the DC-CNN included a feature extraction module, a feature fusion module, and an encoder-decoder module and was connected to a SoftMax layer for classification. The feature extraction and feature fusion modules realized feature learning and achieved the purpose of the deep fusion of multi-source features.
Because the characteristics of crops in S-1A and S-2B are different, two convolution kernels were set in the feature extraction module. The feature extraction process is given in Equation (6):
$$Z_{S,i}^{l} = \begin{cases} f\left(w_{S}^{l} \otimes X_{S,i} + b_{S}^{l}\right), & l = 1 \\ f\left(w_{S}^{l} \otimes Z_{S,i}^{l-1} + b_{S}^{l}\right), & l = 2, \ldots, n \end{cases} \quad (6)$$
where $X_S$ represents the source data in the two channels and $S = 1, 2$; $Z_{S,i}^{l}$ is the output feature after the $l$-th convolutional layer; $f(\cdot)$ is the activation function in the model structure; $X_{S,i}$ represents the $i$-th pixel in the $S$-th channel; $w_S^{l}$ is the weight of the $l$-th convolutional layer; $b_S^{l}$ is the bias of the convolutional kernel; $\otimes$ represents the convolution operation; and $n$ is the number of layers in the CNN.
The Batch Normalization (BN) layer was set after the convolutional layer to speed up the training process. It was followed by the pooling layer, which uses 2 × 2 max pooling to reduce the data variance and computational complexity. Furthermore, ReLU [63] was selected as the activation function, i.e., $f(x) = \max(0, x)$, which maintains a nonlinear mapping relationship and avoids the problem of gradient disappearance by setting negative values to zero.
The convolutional part consists of two layers, in which the two branches share the parameters $w$ and $b$ in the second layer. At this point, the information in the two branches of the network influences each other, and the loss functions of the two branches jointly determine the gradient update in back propagation, so that more discriminative features are learned. Following that, the features extracted from the two channels were spliced and used as the input of the encoder. The spliced feature is denoted as $Z_{1,i}^{n} + Z_{2,i}^{n}$.
$$F_i^{l}\left(Z_1, Z_2\right) = f\left(w_{S}^{l} \otimes \left(Z_{1,i}^{n} + Z_{2,i}^{n}\right) + b_{S}^{l}\right), \quad l = n+1, \ldots, m \quad (7)$$
The fused feature $F_i^{l}(Z_1, Z_2)$ was used as the input of the decoder. The feature map was mapped back to the spatial dimensions of the original image through the decoder module, and the SoftMax classifier was used to obtain the final classification result.
The loss function adopted the categorical cross-entropy (CCE) commonly used in multi-classification [64].
$$\mathrm{CCE}\left(y, \hat{y}\right) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{C} y_{i,j} \cdot \log\left(p\left(\hat{y}_{i,j}\right)\right) \quad (8)$$
where $N$ represents the number of samples and $C$ is the number of classes; $\hat{y}$ represents the predicted class, with $y$ being the training label.
The detailed parameters of DC-CNN are shown in Table 6.
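To make the dual-branch structure concrete, the following Keras sketch builds two input branches with branch-specific first convolutions, a second convolutional layer whose weights are shared between the branches, concatenation (splicing) of the two feature streams, and a small encoder-decoder head ending in a per-pixel SoftMax. The patch size, channel counts, and filter numbers are placeholders assumed for the example; the actual values are those listed in Table 6.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

PATCH = 32             # assumed patch size
N_SAR, N_OPT = 4, 10   # assumed S-1 features (VV, VH, m_s, m_v) and S-2 bands
N_CLASSES = 3          # oilseed rape, wheat, bare land

def branch_block(x, conv_private, conv_shared):
    """Branch-specific first conv, then the shared conv, each with BN + ReLU + max pooling."""
    x = layers.ReLU()(layers.BatchNormalization()(conv_private(x)))
    x = layers.MaxPooling2D(2)(x)
    x = layers.ReLU()(layers.BatchNormalization()(conv_shared(x)))   # shared weights
    x = layers.MaxPooling2D(2)(x)
    return x

sar_in = layers.Input((PATCH, PATCH, N_SAR), name="s1_features")
opt_in = layers.Input((PATCH, PATCH, N_OPT), name="s2_features")

conv_shared = layers.Conv2D(64, 3, padding="same", name="shared_conv")  # one layer instance, reused
sar_feat = branch_block(sar_in, layers.Conv2D(32, 3, padding="same"), conv_shared)
opt_feat = branch_block(opt_in, layers.Conv2D(32, 3, padding="same"), conv_shared)

# Feature fusion: splice the two streams, then a light encoder-decoder head.
x = layers.Concatenate()([sar_feat, opt_feat])
x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)          # encoder
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)           # decoder
x = layers.UpSampling2D(2)(x)
out = layers.Conv2D(N_CLASSES, 1, activation="softmax")(x)               # per-pixel SoftMax

dc_cnn = Model([sar_in, opt_in], out, name="dc_cnn")
dc_cnn.summary()
```

Reusing the single `conv_shared` layer instance in both branches is what implements the parameter sharing described above: both branches contribute gradients to the same weights during back propagation.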

2.3.4. Model Accuracy Evaluation

In order to select the best combination of multi-source features and measure the effectiveness of the DC-CNN model, qualitative and quantitative evaluations were conducted.
For the qualitative evaluation, three regions were selected and compared with the visual interpretation map; each region contained three types of farmland to be classified. Examples of these regions are shown in Figure 6.
For the quantitative evaluation, Macro-F1, Overall Accuracy (OA), and Kappa were employed, as given in Equations (9), (13), and (14), respectively.
$$\mathrm{Macro\text{-}F1} = \mathrm{Average}\left(F1_{score}\right) \quad (9)$$
Here, $F1_{score}$ can be expressed as in Equation (10):
$$F1_{score} = \frac{2 \times Precision \times Recall}{Precision + Recall} \quad (10)$$
$$Precision = \frac{TP}{TP + FP} \quad (11)$$
$$Recall = \frac{TP}{TP + FN} \quad (12)$$
$$\mathrm{OA} = \frac{TP + TN}{TP + TN + FP + FN} \quad (13)$$
$$\mathrm{Kappa} = \frac{\mathrm{OA} - p_e}{1 - p_e} \quad (14)$$
$$p_e = \frac{(TP + FN) \times (FP + TP) + (FP + TN) \times (FN + TN)}{(FP + TN + TP + FN)^{2}} \quad (15)$$
where $Precision$ and $Recall$ were calculated from the confusion matrix.
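These metrics can be computed directly from the reference and predicted labels, for example with scikit-learn; the macro-averaged F1 corresponds to Equation (9), and the label arrays below are placeholders.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix, f1_score

# Placeholder reference and predicted labels for the three classes.
y_true = [0, 0, 1, 1, 2, 2, 1, 0, 2]
y_pred = [0, 0, 1, 2, 2, 2, 1, 0, 1]

oa = accuracy_score(y_true, y_pred)                   # Overall Accuracy, Eq. (13)
kappa = cohen_kappa_score(y_true, y_pred)             # Kappa, Eqs. (14)-(15)
macro_f1 = f1_score(y_true, y_pred, average="macro")  # Macro-F1, Eqs. (9)-(12)
cm = confusion_matrix(y_true, y_pred)                 # per-class confusion matrix

print(f"OA={oa:.4f}, Kappa={kappa:.4f}, Macro-F1={macro_f1:.4f}")
print(cm)
```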

3. Classification Results

3.1. Implementation Details

All the experiments were completed in the TensorFlow environment with a GeForce RTX 3080Ti. The weight decay was set to 0.004, and the learning rate was set to 0.001. DC-CNN was trained with the Adam optimizer [65,66] for 100 epochs to obtain the final classification result, and the batch size was 32. The hardware and software configurations are presented in Table 7.
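Assuming the `dc_cnn` sketch from Section 2.3.3, the training setup described here could look as follows. The loss is the categorical cross-entropy of Equation (8), the data arrays are small placeholders, and applying the 0.004 weight decay through kernel regularizers (or a decoupled-weight-decay optimizer) is one possible reading of the configuration; it is omitted from this sketch.

```python
import numpy as np
import tensorflow as tf

# Placeholder tensors matching the input shapes of the dc_cnn sketch above.
x_s1 = np.zeros((8, 32, 32, 4), dtype="float32")
x_s2 = np.zeros((8, 32, 32, 10), dtype="float32")
y_onehot = np.zeros((8, 32, 32, 3), dtype="float32")

dc_cnn.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # lr = 0.001
               loss="categorical_crossentropy",                          # CCE, Eq. (8)
               metrics=["accuracy"])

history = dc_cnn.fit([x_s1, x_s2], y_onehot,
                     validation_data=([x_s1, x_s2], y_onehot),
                     epochs=100, batch_size=32)
```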

3.2. Comparison of Feature Combinations

3.2.1. Polarimetric Components in a SAR-Only Image

Figure 7 shows the local comparison of the classification results of three different feature combinations based on DC-CNN, in which the features were extracted from S-1. Obviously, when Combination A was used as the input data, the classification result was not ideal: an OA of 76% with a Kappa of 0.462 was achieved. Among the three types, the classification of bare land was better than that of the other two types, and there was serious confusion between wheat and oilseed rape, as shown in the oval area in Figure 7a(1)–a(3), where a large amount of wheat was mistaken for oilseed rape. After the polarimetric components were introduced into the classification task (Combinations B and C), the confusion was obviously reduced.
According to Table 8, when Combination C was used as the input data set, the accuracy of various classes was significantly improved (Figure 7c(1)–c(3)). Compared with Combination A, the overall accuracy of Combination C increased by about 4.5%, and Kappa reached 0.628. These results indicated that the polarimetric component had a positive effect on crop classification and could effectively improve classification accuracy. It should be noted that the sample set for testing was independent of the training sample and did not participate in the model training process.

3.2.2. Polarimetric Components in SAR-Optical Images

It can be seen from Table 8 that the polarimetric components played a positive role in crop classification tasks based on SAR data alone. In order to explore their potential in multi-source crop classification tasks, Combinations D, E, F, G, and H in Table 5 were compared. As shown in Figure 8, the misclassification in Combination E (e(1)–e(3)), Combination F (f(1)–f(3)), and Combination G (g(1)–g(3)) was obviously reduced compared with the classification result of Combination D (d(1)–d(3)). The only difference between Combinations D, F, and G was that a polarimetric component was added to Combinations F and G. When all the features of S-1 and S-2 were used (h(1)–h(3)), the classification results were more accurate for both the interior and the edges of the farmland.
The predicted results of the model with different feature combinations were compared with the ground-truth map, and a quantitative evaluation was then performed. The specific accuracy results are shown in Table 9.
To visually observe the differences in accuracy between the different inputs, a visual comparison of the accuracy evaluation indicators is presented in Figure 9. The accuracy of Combination E can reach more than 90%, which indicates the efficiency of crop classification by combining only the polarimetric components and MS features. Apparently, compared with Combination D, the classification accuracy of Combinations F, G, and H was obviously improved. That is, the addition of $m_s$ and $m_v$ can increase the discrimination between crops and reduce misclassification. Especially when both $m_s$ and $m_v$ were added at the same time, the classification result was the best, and the accuracy could reach 98%. This showed that the scattering characteristics reflected by the polarimetric information played an effective role in crop classification. Combination H was therefore used as the optimal feature combination in the subsequent experiments.
In order to verify the effect of the polarimetric features $m_s$ and $m_v$ separated from the SAR data on improving classification accuracy, the feature group without polarimetric features (Combination D) and the feature group with polarimetric features (Combination H) were analyzed visually, as shown in Figure 10. The multi-dimensional features were mapped to a two-dimensional space by the Principal Component Analysis (PCA) method. In Figure 10, the horizontal and vertical coordinates are the first and second principal components, which provide most of the information about the data.
There are three different colored dots in the figure, which represent the three kinds of samples. It can be seen that when only the MS features of S-2 and the intensity features (VV and VH) of S-1 were combined, confusion between the three categories was evident. In particular, oilseed rape was confused with bare land and wheat to different degrees. The confusion between the three crop types was reduced when the polarimetric components ($m_s$ and $m_v$) were added; in particular, the differentiation between the bare land and oilseed rape samples increased markedly. More specifically, in areas comprising bare land without sowing, the scattering characteristics mainly reflect surface scattering, so $m_s$ is always larger than $m_v$. Oilseed rape and wheat were mature during the study period, and thus $m_v$ was dominant, meaning that they could be clearly distinguished from bare land. The differences in plant height and canopy between oilseed rape and wheat mean that their $m_v$ values differ to some extent. The results showed that the polarimetric components can enhance the differentiation between samples, which is conducive to improving classification accuracy.
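The PCA projection used for Figure 10 can be reproduced along the following lines; the feature matrix and class labels are placeholders, and the scatter plot simply colors the first two principal components by class.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.normal(size=(600, 14))   # placeholder multi-source feature vectors
labels = rng.integers(0, 3, size=600)   # 0 = oilseed rape, 1 = wheat, 2 = bare land

# Project the multi-dimensional features onto the first two principal components.
pc = PCA(n_components=2).fit_transform(features)

for cls, name in enumerate(["oilseed rape", "wheat", "bare land"]):
    mask = labels == cls
    plt.scatter(pc[mask, 0], pc[mask, 1], s=5, label=name)
plt.xlabel("PC 1"); plt.ylabel("PC 2"); plt.legend(); plt.show()
```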

3.3. Accuracy Comparison with Other Classifiers

Figure 11 shows the classification results of the DC-CNN model when using Combination D. This classification map clearly identified the three crop categories: oilseed rape, wheat, and bare land. The results showed that the fields in the southwest of Tongxiang were complete and wide and that wheat was mainly planted there. Wheat and oilseed rape fields were spread throughout the region, and some bare land was scattered among them.
In order to prove the effectiveness of the proposed method for crop classification with multi-source remote sensing data, three deep learning methods were selected for comparison: the traditional 2D-CNN, the CNN-based FCN, and SegNet [42,43,44,45]. For fairness of comparison, the input data set of all methods was Combination G, and the training, validation, and test samples used in classification were consistent.

3.3.1. Qualitative Evaluation

In the qualitative evaluation, three areas were selected for comparison with the ground truth in order to visually compare the classification results. As shown in Figure 12, the 2D-CNN, FCN, and SegNet methods all produced many outliers and misclassified crops. In the rectangular areas in Figure 12a(1)–c(1), oilseed rape was misclassified as bare land to different degrees. The classification results obtained by the proposed method were the closest to the actual situation when the training samples, test samples, and input data were all the same. This indicated that the proposed method can effectively utilize the multi-dimensional information of multi-source images and achieve a good effect.

3.3.2. Quantitative Evaluation

To verify the validity of the DC-CNN method, the 2D-CNN, FCN, and SegNet classifiers were used for comparison. All these classifiers were quantitatively evaluated, and confusion matrices were created for them (Figure 13). The results are shown in Table 10.
According to the indicators in Table 10, the accuracy of the DC-CNN model was the highest. The Macro-F1 was 0.9840, the OA was 0.9840, and the Kappa was 0.9760; the accuracy metrics of the other models were slightly lower than those of the DC-CNN. FCN and SegNet acquired Macro-F1 of 0.9630 and 0.9693, respectively; OA of 0.9627 and 0.9690, respectively; and Kappa scored 0.9440 and 0.9534, respectively. Among the models tested, the 2D-CNN model exhibited the worst classification performance, with three indicators of 0.9467, 0.9487, and 0.9230, respectively. The Macro-F1 score of the proposed method was increased by about 4%, the OA was increased by 5%, and the Kappa was increased by around 5% compared with the 2D-CNN. For the oilseed rape and wheat categories, which were easily confused, the accuracy was improved by about 5% and 2%, respectively, and the accuracy of bare land identification was improved by about 3% compared with the traditional CNN. This experiment proved that the ability of the DC-CNN classifier to classify crops using multi-source data was improved to some extent when compared with other existing classifiers.

4. Discussion

Compared with a single data source, the advantages of using multi-source data synergistically for classification have been confirmed in many studies [30,31,32,33]. In fact, we also evaluated the effects of using only SAR data, only optical data, and the SAR-optical combination on crop classification. The results are shown in Table 11.
The classification accuracy obtained with only optical data was about 5% higher than that obtained with only SAR data, and the target type can be better reflected in optical data, which is the same conclusion as in [67,68,69,70,71]. The cooperative use of SAR and optical data provided multi-dimensional information, including the physical scattering mechanisms from SAR and the MS information from optical data, and the classification result was better than that of a single sensor. As shown in Table 11, compared with S-1 and S-2 alone, the OA of the synergy between S-1 and S-2 improved by 15% and 10%, respectively. Reference [72] also showed that, compared to the sole use of S-1 or S-2 data, combining S-1 and S-2 can improve the OA by 3% and 10%, respectively. The classification result in this paper was more accurate (OA improved by almost 5%) than that in the literature [45]. On the one hand, there may be relatively few crop species in the study area; on the other hand, the importance of SAR polarimetric information has been frequently discussed in many crop classification studies, such as for wheat and oilseed rape [73,74,75]. The experiments in this paper showed that the polarimetric components obtained by the new decomposition method [50] made a more outstanding contribution to classification than VV/VH. Polarimetric components are highly sensitive to crop canopy structure. Since the vertical structure of the canopy at wheat maturity is obviously different from that of oilseed rape, the polarimetric components performed well in the classification task. As can be seen from Table 10, the final classification results obtained by the proposed method were more accurate than those obtained by the other comparison methods. Compared with 2D-CNN, FCN, and SegNet, the Macro-F1, OA, and Kappa improved by 2–4%, 2–4%, and 2–5%, respectively. However, in the early growth stages of wheat and oilseed rape, due to their similar physical characteristics, the contribution of the polarimetric components will be affected to some extent. In the future, other features that can effectively distinguish wheat from oilseed rape will be used for classification.
Not only are feature quality and correlation important for crop classification with multi-source data, but the classification model also affects the classification results. The effectiveness of 2D-CNN, FCN, and SegNet in crop classification has been proven [43,44,45,46], and good classification results were also obtained in this experiment, with accuracies of over 90%. However, under the condition of using the same input data set, DC-CNN obtained more accurate classification results. The proposed method can effectively utilize and fuse the features in multi-source data, integrating the physical and structural properties of the target surface contained in SAR and the spectral information contained in optical data. Meanwhile, 2D-CNN, FCN, and SegNet made insufficient use of the features and lost feature information. Nevertheless, the training samples in the proposed method were collected by manual investigation, which required considerable manpower and material costs. Therefore, in the future, it will be important to design an accurate crop classification method that requires only a small number of samples.

5. Conclusions

In this paper, DC-CNN was constructed to classify crops more accurately based on S-1 and S-2 data. Advanced features were extracted using the feature extraction and feature fusion modules of the model, and the deep fusion of multi-source features was realized. Moreover, the influence on crop classification of the polarimetric information obtained from dual-pol SAR data through model-based decomposition was examined for the first time. Experiments showed that the classification of farmland types was best when the polarimetric components, backscattering coefficients, and spectral information were simultaneously used as model inputs. When the polarimetric information was utilized, the difference between sample categories was augmented and the probability of correct classification was elevated. The results showed that the polarimetric information in SAR images can play an important role in crop classification. Meanwhile, the advantages of the proposed method, which fuses multi-source data, were verified by comparing its classification results with those of other common methods: using single-date S-1 and S-2 imagery for crop classification, the results of the proposed method were more accurate than those of the other methods. The method proposed in this paper also provides an idea for areas with similar weather conditions, namely, using limited optical and SAR images to classify crops.
Along with the development and application of multi-source RS for crop classification, which data to use and how to use multi-source data effectively are key issues. The results of this paper clearly showed the effectiveness and superiority of the proposed method for SAR-optical crop classification. However, its applicability needs to be further developed to adapt to classification tasks in complex environments with more modalities. Future research will focus on whether other features that can be extracted from multi-source data, such as vegetation indices extracted from optical data and texture information extracted from SAR data, can be used to improve classification accuracy. Another direction is to extend the model so that it is suitable for classification tasks under more data conditions, such as the fusion of more than two modalities or the collaboration of multi-source and multi-temporal data.

Author Contributions

Conceptualization, K.Z., D.Y., and N.L.; data curation, H.Y. and N.L.; investigation, K.Z., D.Y., J.Z., and N.L.; methodology, K.Z., H.Y., and J.Z.; supervision, D.Y. and N.L.; writing—original draft, K.Z. and D.Y.; writing—review and editing, H.Y., J.Z., and N.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (42101386), the Plan of Science and Technology of Henan Province (212102210093, 222102110439, 232102211043), the College Key Research Project of Henan Province (22A520021), the Plan of Science and Technology of Kaifeng City (2102005), the Key Laboratory of Natural Resources Monitoring and Regulation in Southern Hilly Region, Ministry of Natural Resources of the People’s Republic of China (NRMSSHR2022Z01) and the Key Laboratory of Land Satellite Remote Sensing Application, Ministry of Natural Resources of the People’s Republic of China (KLSMNR-202302).

Data Availability Statement

The authors would like to thank the ESA for providing the research data at https://scihub.copernicus.eu/dhus/#/home (accessed on 11 April 2022).

Acknowledgments

The authors would like to thank the ESA for providing the Sentinel-1A SAR data and Sentinel-2 multispectral data for agriculture applications.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, Z.; Che, G.; Zhang, T. A CNN-Transformer Hybrid Approach for Crop Classification Using Multitemporal Multisensor Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 847–858. [Google Scholar] [CrossRef]
  2. Yang, S.; Zhang, Q.; Yuan, X.; Chen, Q.; Liu, X. Super pixel-based Classification Using Semantic Information for Polarimetric SAR Imagery. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 3700–3703. [Google Scholar]
  3. Xie, Y.; Huang, J. Integration of a Crop Growth Model and Deep Learning Methods to Improve Satellite-Based Yield Estimation of Winter Wheat in Henan Province, China. Remote Sens. 2021, 13, 4372. [Google Scholar] [CrossRef]
  4. Ezzahar, J.; Ouaadi, N.; Zribi, M.; Elfarkh, J.; Aouade, G.; Khabba, S.; Er-Raki, S.; Chehbouni, A.; Jarlan, L. Evaluation of Backscattering Models and Support Vector Machine for the Retrieval of Bare Soil Moisture from Sentinel-1 Data. Remote Sens. 2020, 12, 72. [Google Scholar] [CrossRef]
  5. Martos, V.; Ahmad, A.; Cartujo, P.; Ordoñez, J. Ensuring Agricultural Sustainability through Remote Sensing in the Era of Agriculture. Appl. Sci. 2021, 11, 5911. [Google Scholar] [CrossRef]
  6. Xie, Q.; Lai, K.; Wang, J.; Lopez-Sanchez, J.M.; Shang, J.; Liao, C.; Zhu, J.; Fu, H.; Peng, X. Crop Monitoring and Classification Using Polarimetric RADARSAT-2 Time-Series Data Across Growing Season: A Case Study in Southwestern Ontario, Canada. Remote Sens. 2021, 13, 1394. [Google Scholar] [CrossRef]
  7. Seifi Majdar, R.; Ghassemian, H. A Probabilistic SVM Approach for Hyperspectral Image Classification Using Spectral and Texture Features. Int. J. Remote Sens. 2017, 38, 4265–4284. [Google Scholar] [CrossRef]
  8. Gao, Z.; Guo, D.; Ryu, D.; Western, A.W. Enhancing the Accuracy and Temporal Transferability of Irrigated Cropping Field Classification Using Optical Remote Sensing Imagery. Remote Sens. 2022, 14, 997. [Google Scholar] [CrossRef]
  9. Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K.I. Mapping Crop Cover Using Multi-temporal Landsat 8 OLI Imagery. Int. J. Remote Sens. 2017, 38, 4348–4361. [Google Scholar] [CrossRef]
  10. Chen, J.; Liu, Y.H.; Yu, Z.R. Planting Information Extraction of Winter Wheat Based on the Time-Series MODIS-EVI. Chin. Agric. Sci. Bulletin 2011, 27, 446–450. [Google Scholar]
  11. Tatsumi, K.; Yamashiki, Y.; Torres, M.A.C.; Taipe, C.L.R. Crop Classification of Upland Fields using Random Forest of Time-series Landsat 7 ETM+ data. Comput. Electron. Agric. 2015, 115, 171–179. [Google Scholar] [CrossRef]
  12. Chabalala, Y.; Adam, E.; Ali, K.A. Machine Learning Classification of Fused Sentinel-1 and Sentinel-2 Image Data towards Mapping Fruit Plantations in Highly Heterogenous Landscapes. Remote Sens. 2022, 14, 2621. [Google Scholar] [CrossRef]
  13. Ma, X.; Huang, Z.; Zhu, S.; Fang, W.; Wu, Y. Rice Planting Area Identification Based on Multi-Temporal Sentinel-1 SAR Images and an Attention U-Net Model. Remote Sens. 2022, 14, 4573. [Google Scholar] [CrossRef]
  14. Guo, Z.; Qi, W.; Huang, Y.; Zhao, J.; Yang, H.; Koo, V.-C.; Li, N. Identification of Crop Type Based on C-AENN Using Time Series Sentinel-1A SAR Data. Remote Sens. 2022, 14, 1379. [Google Scholar] [CrossRef]
  15. Cable, J.W.; Kovacs, J.M.; Jiao, X.; Shang, J. Agricultural Monitoring in Northeastern Ontario, Canada, Using Multi-Temporal Polarimetric RADARSAT-2 Data. Remote Sens. 2014, 6, 2343–2371. [Google Scholar] [CrossRef]
  16. Xiang, H.; Luo, H.; Liu, G.; Yang, R.; Lei, X. Land Cover Classification in Mountain Areas Based on Sentinel-1A Polarimetric SAR Data and Object-Oriented Method. J. Nat. Resour. 2017, 32, 2136–3148. [Google Scholar]
  17. Guo, J.; Wei, P.; Zhou, Z.; Bao, S. Crop Classification Method with Differential Characteristics Based on Multi-temporal PolSAR Images. Trans. Chin. Soc. Agric. Mach. 2017, 48, 174–182. [Google Scholar]
  18. Xie, G.; Niculescu, S. Mapping Crop Types Using Sentinel-2 Data Machine Learning and Monitoring Crop Phenology with Sentinel-1 Backscatter Time Series in Pays de Brest, Brittany, France. Remote Sens. 2022, 14, 4437. [Google Scholar] [CrossRef]
  19. Orynbaikyzy, A.; Gessner, U.; Conrad, C. Crop Type Classification Using a Combination of Optical and Radar Remote Sensing Data: A Review. Int. J. Remote Sens. 2019, 40, 6553–6595. [Google Scholar] [CrossRef]
  20. Snevajs, H.; Charvat, K.; Onckelet, V.; Kvapil, J.; Zadrazil, F.; Kubickova, H.; Seidlova, J.; Batrlova, I. Crop Detection Using Time Series of Sentinel-2 and Sentinel-1 and Existing Land Parcel Information Systems. Remote Sens. 2022, 14, 1095. [Google Scholar] [CrossRef]
  21. Conrad, C.; Fritsch, S.; Zeidler, J.; Rücker, G.; Dech, S. Per-Field Irrigated Crop Classification in Arid Central Asia Using SPOT and ASTER Data. Remote Sens. 2010, 2, 1035–1056. [Google Scholar] [CrossRef]
  22. Zhang, L.P.; Shen, H.F. Progress and future of remote sensing data fusion. J. Remote Sens. 2016, 20, 1050–1061. [Google Scholar]
  23. West, R.D.; Yocky, D.A.; Vander Laan, J.; Anderson, D.Z.; Redman, B.J. Data Fusion of Very High Resolution Hyperspectral and Polarimetric SAR Imagery for Terrain Classification. Technical Report. 2021. Available online: https://www.osti.gov/biblio/1813672 (accessed on 10 July 2021).
  24. Jia, K.; Li, Q.; Tian, Y.; Wu, B.; Zhang, F.; Meng, J. Crop Classification Using Multi-configuration SAR Data in the North China Plain. Int. J. Remote Sens. 2012, 33, 170–183. [Google Scholar] [CrossRef]
  25. Orynbaikyzy, A.; Gessner, U.; Conrad, C. Spatial Transferability of Random Forest Models for Crop Type Classification Using Sentinel-1 and Sentinel-2. Remote Sens. 2022, 14, 1493. [Google Scholar] [CrossRef]
  26. Mcnairn, H.; Champagne, C.; Shang, J.; Holmstrom, D.; Reichert, G. Integration of Optical and Synthetic Aperture Radar (SAR) Imagery for Delivering Operational Annual Crop Inventories. Isprs J. Photogramm. Remote Sens. 2009, 64, 434–449. [Google Scholar] [CrossRef]
  27. Ren, T.; Xu, H.; Cai, X.; Yu, S.; Qi, J. Smallholder Crop Type Mapping and Rotation Monitoring in Mountainous Areas with Sentinel-1/2 Imagery. Remote Sens. 2022, 14, 566. [Google Scholar] [CrossRef]
  28. Valero, S.; Arnaud, L.; Planells, M.; Ceschia, E. Synergy of Sentinel-1 and Sentinel-2 Imagery for Early Seasonal Agricultural Crop Mapping. Remote Sens. 2021, 13, 4891. [Google Scholar] [CrossRef]
  29. Veloso, A.; Mermoz, S.; Bouvet, A.; Le Toan, T.; Planells, M.; Dejoux, J.-F.; Ceschia, E. Understanding the temporal behavior of crops using Sentinel-1 and Sentinel-2-like data for agricultural applications. Remote Sens. Environ. 2017, 199, 415–426. [Google Scholar] [CrossRef]
  30. Zhao, H.; Chen, Z.; Jiang, H.; Jing, W.; Sun, L.; Feng, M. Evaluation of Three Deep Learning Models for Early Crop Classification Using Sentinel-1A Imagery Time Series—A Case Study in Zhanjiang, China. Remote Sens. 2019, 11, 2673. [Google Scholar] [CrossRef]
  31. Steinhausen, M.J.; Wagner, P.D.; Narasimhan, B.; Waske, B. Combining Sentinel-1 and Sentinel-2 data for improved land use and land over mapping of monsoon regions. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 595–604. [Google Scholar]
  32. Lechner, M.; Dostálová, A.; Hollaus, M.; Atzberger, C.; Immitzer, M. Combination of Sentinel-1 and Sentinel-2 Data for Tree Species Classification in a Central European Biosphere Reserve. Remote Sens. 2022, 14, 2687. [Google Scholar] [CrossRef]
  33. Cai, Y.T.; Lin, H.; Zhang, M. Mapping paddy rice by the object-based random forest method using time series Sentinel-1/Sentinel-2 data. Adv. Space Res. 2019, 64, 2233–2244. [Google Scholar] [CrossRef]
  34. Tricht, K.V.; Gobin, A.; Gilliams, S.; Piccard, I. Synergistic use of radar sentinel-1 and optical sentinel-2 imagery for crop mapping: A case study for belgium. Remote Sens. 2018, 10, 1642. [Google Scholar] [CrossRef]
  35. Mercier, A.; Betbeder, J.; Baudry, J.; Le Roux, L.; Spicher, F.; Lacoux, J.; Roger, D.; Hubert-Moy, L. Evaluation of Sentinel-1 & 2 time series for predicting wheat and rapeseed phenological stages. ISPRS J. Photogramm. Remote Sens. 2020, 163, 231–256. [Google Scholar]
  36. Sun, C.; Bian, Y.; Zhou, T.; Pan, J. Using of Multi-Source and Multi-Temporal Remote Sensing Data Improves Crop-Type Mapping in the Subtropical Agriculture Region. Sensors 2019, 19, 2401. [Google Scholar] [CrossRef] [PubMed]
  37. Ghassemi, B.; Immitzer, M.; Atzberger, C.; Vuolo, F. Evaluation of Accuracy Enhancement in European-Wide Crop Type Mapping by Combining Optical and Microwave Time Series. Land 2022, 11, 1397. [Google Scholar] [CrossRef]
  38. Zhang, L.; Duan, B.; Zou, B. Research development on target decomposition method of polarimetric SAR image. J. Electron. Inf. Technol. 2016, 38, 3289–3297. [Google Scholar]
  39. Cloude, S.R. Target decomposition theorems in radar scattering. Electron. Lett. 1985, 21, 22–24. [Google Scholar] [CrossRef]
  40. Freeman, A. Fitting a two-component scattering model to polarimetric SAR data from forests. IEEE Trans. Geosci. Remote Sens. 2007, 45, 2583–2592. [Google Scholar] [CrossRef]
  41. Geng, J.; Wang, H.; Fan, J.; Ma, X. SAR Image Classification via Deep Recurrent Encoding Neural Networks. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2255–2269. [Google Scholar] [CrossRef]
Figure 1. Location map and RS images of Tongxiang. (a) Pseudo-color image of S-1, composed of dB(m_s), dB(m_v), and dB(m_s)/dB(m_v); (b) pseudo-color image of S-2, composed of bands B8 (near infrared), B4 (red), and B3 (green).
Figure 2. Weather conditions in the study area from April to June.
Figure 3. Distribution map of crop samples from field surveys and field photos of major crops corresponding to image acquisition dates. (a) Photo of wheat; (b) photo of bare land; and (c) photo of oilseed rape.
Figure 4. Flow chart of the proposed method.
Figure 5. Structure of the DC-CNN model.
Figure 6. Selected typical areas (the samples not included in the training process were used for testing). (a) Locations of selected typical areas; (b) Ground-truth maps of selected typical areas.
Figure 7. Classification results of the three local regions obtained with different feature combinations as input data sets. a(1)–a(3) Results based on Combination A; b(1)–b(3) results based on Combination B; c(1)–c(3) results based on Combination C. Combination A: S-1 (VV, VH); Combination B: S-1 (m_s, m_v); Combination C: S-1 (VV, VH, m_s, m_v).
Figure 8. Classification results of the three local regions obtained with different feature combinations as input data sets. d(1)–d(3) Results based on Combination D; e(1)–e(3) results based on Combination E; f(1)–f(3) results based on Combination F; g(1)–g(3) results based on Combination G; h(1)–h(3) results based on Combination H. Combination D: S-1 (VV, VH) + S-2 (MS); Combination E: S-1 (m_s, m_v) + S-2 (MS); Combination F: S-1 (VV, VH, m_s) + S-2 (MS); Combination G: S-1 (VV, VH, m_v) + S-2 (MS); Combination H: S-1 (VV, VH, m_s, m_v) + S-2 (MS).
Figure 9. Visual comparison of the evaluation indicators.
Figure 10. Feature visualization results. (a) Combination D (S-1 (VV, VH) + S-2 (MS)); (b) Combination H (S-1 (VV, VH, m_s, m_v) + S-2 (MS)).
Figure 11. Classification results of the main crops in the study area obtained by DC-CNN using Combination G. Background: true-color composite of S-2.
Figure 12. Classification results of the three local regions obtained by different methods (the input data sets are all Combination G). a(1)–a(3) Results of 2D-CNN; b(1)–b(3) results of FCN; c(1)–c(3) results of SegNet; d(1)–d(3) results of DC-CNN.
Figure 13. Confusion matrices for models generated by different classifiers (the input data sets are all Combination G). (a) 2D-CNN; (b) FCN; (c) SegNet; and (d) DC-CNN.
Table 1. S-1A image used in the study.
S-1A Parameters | Value
Product type | SLC
Imaging mode | IW
Polarization | VV, VH
Pixel size | 10 m × 10 m
Pass direction | Ascending
Wave band | C
Dates | 2021-05-06
Table 2. Cloud cover data from S-2 images during the critical period of crop growth from April 2021 to June 2021.
S-2 Satellite | Date | Cloud Cover (%) | S-2 Satellite | Date | Cloud Cover (%)
A | 8 April 2021 | 99.94 | B | 3 April 2021 | 99.89
A | 8 April 2021 | 71.47 | B | 3 April 2021 | 99.97
A | 18 April 2021 | 87.58 | B | 13 April 2021 | 53.41
A | 18 April 2021 | 56.11 | B | 13 April 2021 | 88.98
A | 28 April 2021 | 98.98 | B | 23 April 2021 | 98.98
A | 28 April 2021 | 94.74 | B | 23 April 2021 | 98.46
A | 8 May 2021 | 99.99 | B | 3 May 2021 | 11.42
A | 8 May 2021 | 99.89 | B | 3 May 2021 | 19.96
A | 18 May 2021 | 99.77 | B | 13 May 2021 | 99.33
A | 18 May 2021 | 99.57 | B | 13 May 2021 | 98.58
A | 28 May 2021 | 100 | B | 23 May 2021 | 67.47
A | 28 May 2021 | 94.12 | B | 23 May 2021 | 88.47
A | 7 June 2021 | 93.43 | B | 2 June 2021 | 97.03
A | 7 June 2021 | 96.48 | B | 2 June 2021 | 99.29
A | 17 June 2021 | 93.91 | B | 12 June 2021 | 83.16
A | 17 June 2021 | 99.64 | B | 12 June 2021 | 90.27
A | 27 June 2021 | 97.58 | B | 22 June 2021 | 82.89
A | 27 June 2021 | 94.57 | B | 22 June 2021 | 9.47
Table 3. S-2B image used in the study.
S-2B Parameters | Spatial Resolution (m) | Spectral Description
Band 2 | 10 | Blue
Band 3 | 10 | Green
Band 4 | 10 | Red
Band 5 | 20 | Vegetation red edge
Band 6 | 20 | Vegetation red edge
Band 7 | 20 | Vegetation red edge
Band 8 | 10 | Near infrared
Band 8A | 20 | Vegetation red edge
Band 11 | 20 | Short-wave infrared
Band 12 | 20 | Short-wave infrared
Dates | 2021-05-03
Processing level | Level-2A
Table 4. Ground-truth data and sample allocation of crop types.
Label | Type | Number of Fields | Total Number of Pixels | Training Samples | Validation Samples | Testing Samples
1 | Oilseed rape | 101 | 6038 | 3601 | 1207 | 1230
2 | Wheat | 103 | 7418 | 4458 | 1483 | 1477
3 | Bare land | 101 | 6084 | 3654 | 1216 | 1214
Total | - | 305 | 19,540 | 11,713 | 3906 | 3921
Table 5. Candidate combinations of S-1 and S-2 features used in the experiments.
Combination | Abbreviation | Comment
A | S-1 (VV, VH) | Only the intensity components VV and VH of S-1
B | S-1 (m_s, m_v) | Only the polarimetric components m_s and m_v of S-1
C | S-1 (VV, VH, m_s, m_v) | The intensity components VV, VH and the polarimetric components m_s, m_v of S-1
D | S-1 (VV, VH) + S-2 (MS) | The intensity components VV, VH of S-1 + MS of S-2
E | S-1 (m_s, m_v) + S-2 (MS) | The polarimetric components m_s, m_v of S-1 + MS of S-2
F | S-1 (VV, VH, m_s) + S-2 (MS) | The intensity components VV, VH and the polarimetric component m_s of S-1 + MS of S-2
G | S-1 (VV, VH, m_v) + S-2 (MS) | The intensity components VV, VH and the polarimetric component m_v of S-1 + MS of S-2
H | S-1 (VV, VH, m_s, m_v) + S-2 (MS) | The intensity components VV, VH and the polarimetric components m_s, m_v of S-1 + MS of S-2
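As a minimal sketch (not the authors' code) of how the candidate combinations in Table 5 could be assembled, the snippet below stacks co-registered layers along the channel axis; the array names (vv, vh, m_s, m_v, s2_ms) and the image size are hypothetical placeholders.

```python
# Illustrative assembly of the Table 5 feature combinations as channel stacks.
import numpy as np

def stack(*layers):
    """Stack 2-D bands and 3-D band groups along the channel axis."""
    expanded = [l[..., np.newaxis] if l.ndim == 2 else l for l in layers]
    return np.concatenate(expanded, axis=-1)

H, W = 256, 256
vv, vh, m_s, m_v = (np.zeros((H, W), dtype=np.float32) for _ in range(4))
s2_ms = np.zeros((H, W, 10), dtype=np.float32)   # ten S-2 bands (Table 3)

combinations = {
    "A": stack(vv, vh),                    # S-1 intensity only
    "B": stack(m_s, m_v),                  # S-1 polarimetric only
    "C": stack(vv, vh, m_s, m_v),          # all S-1 features
    "D": stack(vv, vh, s2_ms),             # S-1 intensity + S-2 MS
    "E": stack(m_s, m_v, s2_ms),           # S-1 polarimetric + S-2 MS
    "F": stack(vv, vh, m_s, s2_ms),
    "G": stack(vv, vh, m_v, s2_ms),
    "H": stack(vv, vh, m_s, m_v, s2_ms),   # all S-1 features + S-2 MS
}
print({k: v.shape for k, v in combinations.items()})
```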
Table 6. Parameters of the DC-CNN.
S-1 branch | Output shape | S-2 branch | Output shape
Conv: 3 × 3 × 16 | (n, 7, 7, 16) | Conv: 3 × 3 × 16 | (n, 7, 7, 16)
BN | (n, 7, 7, 16) | BN | (n, 7, 7, 16)
ReLU | (n, 7, 7, 16) | ReLU | (n, 7, 7, 16)
Max-Pooling: 2 × 2 | (n, 4, 4, 16) | Max-Pooling: 2 × 2 | (n, 4, 4, 16)
Conv: 3 × 3 × 32 | (n, 4, 4, 32) | Conv: 3 × 3 × 32 | (n, 4, 4, 32)
BN | (n, 4, 4, 32) | BN | (n, 4, 4, 32)
ReLU | (n, 4, 4, 32) | ReLU | (n, 4, 4, 32)
Max-Pooling: 2 × 2 | (n, 2, 2, 32) | Max-Pooling: 2 × 2 | (n, 2, 2, 32)
Flatten | 128 | Flatten | 128

Layer | Parameters | Output shape
Joint Layer | - | (n, 256)
Encoder1 | 128, activation = ReLU | (n, 128)
Encoder2 | 64, activation = ReLU | (n, 64)
Encoder3 | 32, activation = ReLU | (n, 32)
Compressed features | 16, activation = ReLU | (n, 16)
Decoder1 | 32, activation = ReLU | (n, 32)
Decoder2 | 64, activation = ReLU | (n, 64)
Decoder3 | 128, activation = ReLU | (n, 128)
Classification | SoftMax | (n, 3)
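The layer table can be read directly as a network definition. The Keras sketch below mirrors Table 6 (two convolutional branches on 7 × 7 patches, a joint layer, and a dense encoder-decoder ending in a three-class SoftMax). It is a simplified illustration rather than the authors' implementation: the cross-branch parameter sharing described in the paper is not reproduced, and the input channel counts (3 for S-1, as in Combination G, and 10 for S-2) are assumptions.

```python
# Simplified sketch of the DC-CNN layer table (Table 6); not the released code.
from tensorflow.keras import layers, models

def conv_branch(inp):
    """Conv-BN-ReLU-MaxPool twice, then Flatten, as listed for each branch."""
    x = layers.Conv2D(16, 3, padding="same")(inp)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.MaxPooling2D(2, padding="same")(x)   # 7x7 -> 4x4
    x = layers.Conv2D(32, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.MaxPooling2D(2, padding="same")(x)   # 4x4 -> 2x2
    return layers.Flatten()(x)                      # 2 * 2 * 32 = 128

s1_in = layers.Input(shape=(7, 7, 3), name="s1_patch")    # e.g. VV, VH, m_v
s2_in = layers.Input(shape=(7, 7, 10), name="s2_patch")   # ten S-2 bands

joint = layers.Concatenate()([conv_branch(s1_in), conv_branch(s2_in)])  # (n, 256)
x = layers.Dense(128, activation="relu")(joint)
x = layers.Dense(64, activation="relu")(x)
x = layers.Dense(32, activation="relu")(x)
x = layers.Dense(16, activation="relu")(x)   # compressed features
x = layers.Dense(32, activation="relu")(x)
x = layers.Dense(64, activation="relu")(x)
x = layers.Dense(128, activation="relu")(x)
out = layers.Dense(3, activation="softmax")(x)

model = models.Model([s1_in, s2_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```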
Table 7. Hardware and software configurations of the experiments.
Configuration | Version
GPU | GeForce RTX 3080 Ti
Memory | 64 GB
Language | Python 3.8.3
Framework | TensorFlow 1.14.0
Table 8. Precision, Recall, F1-score, Macro-F1, OA, and Kappa coefficients corresponding to the crop classification results in different Combinations, using only S-1 data. Combination A: S-1 (VV, VH); Combination B: S-1 ( m s , m v ); Combination C: S-1 (VV, VH, m s , m v ).
Combination | Metric | Oilseed Rape | Wheat | Bare Land | Macro-F1 | OA | Kappa
Combination A | Precision | 0.783 | 0.694 | 0.806 | 0.7603 | 0.7609 | 0.462
Combination A | Recall | 0.715 | 0.771 | 0.802
Combination A | F1-score | 0.747 | 0.730 | 0.804
Combination B | Precision | 0.832 | 0.752 | 0.835 | 0.806 | 0.8061 | 0.572
Combination B | Recall | 0.765 | 0.825 | 0.834
Combination B | F1-score | 0.797 | 0.787 | 0.834
Combination C | Precision | 0.855 | 0.780 | 0.843 | 0.828 | 0.8312 | 0.628
Combination C | Recall | 0.792 | 0.854 | 0.852
Combination C | F1-score | 0.822 | 0.815 | 0.847
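As a reference for how scores of this kind can be reproduced, the snippet below computes per-class Precision, Recall, and F1-score together with Macro-F1, OA, and the Kappa coefficient with scikit-learn; y_true and y_pred are illustrative placeholders rather than the study's actual label maps.

```python
# Hedged example of computing the evaluation indicators used in Tables 8-10.
import numpy as np
from sklearn.metrics import (precision_recall_fscore_support, f1_score,
                             accuracy_score, cohen_kappa_score)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=3921)          # e.g. the test pixels of Table 4
y_pred = np.where(rng.random(3921) < 0.95, y_true, rng.integers(0, 3, size=3921))

precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=[0, 1, 2])
macro_f1 = f1_score(y_true, y_pred, average="macro")
oa = accuracy_score(y_true, y_pred)             # overall accuracy
kappa = cohen_kappa_score(y_true, y_pred)

for name, p, r, f in zip(["Oilseed rape", "Wheat", "Bare land"], precision, recall, f1):
    print(f"{name}: Precision={p:.3f} Recall={r:.3f} F1={f:.3f}")
print(f"Macro-F1={macro_f1:.4f}  OA={oa:.4f}  Kappa={kappa:.4f}")
```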
Table 9. Precision, Recall, F1-score, Macro-F1, OA, and Kappa coefficients corresponding to the crop classification accuracy in different Combinations.
Combination | Metric | Oilseed Rape | Wheat | Bare Land | Macro-F1 | OA | Kappa
Combination D | Precision | 0.909 | 0.889 | 0.911 | 0.9030 | 0.9030 | 0.8545
Combination D | Recall | 0.867 | 0.905 | 0.937
Combination D | F1-score | 0.888 | 0.897 | 0.924
Combination E | Precision | 0.918 | 0.941 | 0.943 | 0.9340 | 0.9340 | 0.9010
Combination E | Recall | 0.919 | 0.937 | 0.946
Combination E | F1-score | 0.919 | 0.939 | 0.945
Combination F | Precision | 0.940 | 0.933 | 0.956 | 0.9433 | 0.9430 | 0.9145
Combination F | Recall | 0.928 | 0.946 | 0.955
Combination F | F1-score | 0.934 | 0.940 | 0.956
Combination G | Precision | 0.967 | 0.972 | 0.972 | 0.9707 | 0.9703 | 0.9554
Combination G | Recall | 0.961 | 0.967 | 0.983
Combination G | F1-score | 0.964 | 0.970 | 0.978
Combination H | Precision | 0.976 | 0.990 | 0.986 | 0.9840 | 0.9840 | 0.9760
Combination H | Recall | 0.986 | 0.974 | 0.992
Combination H | F1-score | 0.981 | 0.982 | 0.989
Table 10. Precisions, Recalls, F1-scores, Macro-F1, OA, and Kappa coefficients corresponding to the crop classification accuracy in different methods (the input data sets are all Combination G).
Method | Metric | Oilseed Rape | Wheat | Bare Land | Macro-F1 | OA | Kappa
2D-CNN | Precision | 0.940 | 0.934 | 0.958 | 0.9467 | 0.9487 | 0.9230
2D-CNN | Recall | 0.933 | 0.955 | 0.958
2D-CNN | F1-score | 0.937 | 0.945 | 0.958
FCN | Precision | 0.963 | 0.953 | 0.972 | 0.9630 | 0.9627 | 0.9440
FCN | Recall | 0.954 | 0.963 | 0.971
FCN | F1-score | 0.959 | 0.958 | 0.972
SegNet | Precision | 0.965 | 0.978 | 0.964 | 0.9693 | 0.9690 | 0.9534
SegNet | Recall | 0.966 | 0.959 | 0.982
SegNet | F1-score | 0.966 | 0.969 | 0.973
DC-CNN | Precision | 0.976 | 0.990 | 0.986 | 0.9840 | 0.9840 | 0.9760
DC-CNN | Recall | 0.986 | 0.974 | 0.992
DC-CNN | F1-score | 0.981 | 0.982 | 0.989
Table 11. Overall classification accuracy, using only S-1, only S-2, and both combined.
Input features | Macro-F1 | OA | Kappa
S-1 (VV, VH, m_s, m_v) | 0.8280 | 0.8312 | 0.6280
S-2 (MS) | 0.9430 | 0.8854 | 0.7403
S-1 (VV, VH, m_s, m_v) + S-2 (MS) | 0.9840 | 0.9840 | 0.9760