Article

Crop Classification Using MSCDN Classifier and Sparse Auto-Encoders with Non-Negativity Constraints for Multi-Temporal, Quad-Pol SAR Data

1 School of Electronic Engineering, Xidian University, Xi’an 710071, China
2 Research Institute of Advanced Remote Sensing Technology, Xidian University, Xi’an 710071, China
3 College of Mechanical and Electronic Engineering, Northwest A & F University, Yangling 712100, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(14), 2749; https://doi.org/10.3390/rs13142749
Submission received: 27 May 2021 / Revised: 6 July 2021 / Accepted: 6 July 2021 / Published: 13 July 2021

Abstract

Accurate and reliable crop classification information is a significant data source for agricultural monitoring and food security evaluation research. It is well known that polarimetric synthetic aperture radar (PolSAR) data provide ample information for crop classification. Moreover, multi-temporal PolSAR data can further increase classification accuracy, since crops show different external forms as they grow. In this paper, we distinguish crop types using multi-temporal PolSAR data. First, because the excessive number of scattering parameters in multi-temporal PolSAR data causes a “dimension disaster”, a sparse auto-encoder with a non-negativity constraint (NC-SAE) was employed to compress the data, yielding efficient features for accurate classification. Second, a novel crop discrimination network with multi-scale features (MSCDN) was constructed to improve classification performance; it proved superior to the popular convolutional neural network (CNN) and support vector machine (SVM) classifiers. The performance of the proposed method was evaluated and compared with traditional methods using simulated Sentinel-1 data provided by the European Space Agency (ESA). The final classification of the proposed method reached an overall accuracy of 99.33% and a kappa coefficient of 99.19%, almost 5% and 6% higher, respectively, than the CNN method. The classification results indicate that the proposed methodology is promising for practical use in agricultural applications.


1. Introduction

Crop classification plays an important role in remote sensing monitoring of agricultural conditions and is a prerequisite for further monitoring of crop growth and yields [1,2]. Once the categories, areas and spatial distribution of crops have been acquired in a timely and accurate manner, they can provide a scientific basis for rational adjustment of the agricultural structure. Therefore, crop classification is of great significance for guiding agricultural production, rationally distributing farming resources and safeguarding national food security [3,4,5].
With the continuous advancement of remote sensing technology and theory, remote sensing has been extensively applied in agricultural fields such as crop censuses, growth monitoring, yield prediction and disaster assessment [6,7,8,9]. Over the past several years, optical remote sensing has been widely applied to crop classification owing to its objectivity, accuracy, wide monitoring range and low cost [10]. For example, Tatsumi et al. adopted a random forest classifier to classify eight crop classes in southern Peru from time-series Landsat 7 ETM+ data; the final overall accuracy and kappa coefficient were 81% and 0.70, respectively [11]. However, optical remote sensing data are susceptible to cloud and shadow interference during collection, so it is difficult to obtain effective, continuous optical data during the critical periods of crop morphological change. In addition, optical remote sensing data only reflect the spectral signature of the target surface; for the wide variety of ground objects, the phenomenon of “same object with different spectra and different objects with the same spectrum” arises. Therefore, crop classification accuracy based on optical remote sensing data is limited to a certain extent. Unlike optical remote sensing, PolSAR is an active microwave remote sensing technology whose operation is not restricted by weather or climate. Meanwhile, besides the signature of the target surface, SAR data provide additional signatures of the target owing to its penetrability. Therefore, increasing attention has been paid to crop classification research with PolSAR data [12,13]. Nevertheless, constrained by the level of radar technology, the majority of crop classification research has used single-temporal PolSAR data. A single-temporal PolSAR image offers only limited information for crop identification, making it very difficult to distinguish crop categories that show the same external appearance in a given period, especially during the sowing period [14]. It is therefore necessary to collect multi-temporal PolSAR data to further improve crop classification accuracy.
In the past two decades, an increasing number of satellite-borne SAR systems have been launched successfully and operate on orbit, making it possible to acquire multi-temporal remote sensing data for a desired target [15,16,17]. At present, several representative systems are available for civilian applications, such as the L-band Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) [18]; the C-band Sentinel-1 [19,20], GF-3, RADARSAT-2 and RADARSAT Constellation Mission (RCM) [21]; and the X-band Constellation of Small Satellites for Mediterranean basin observation (COSMO-SkyMed) and COSMO-SkyMed Second Generation (CSG) [22]. Through these on-orbit SAR systems, multi-temporal PolSAR images of the same area can be readily acquired for crop surveillance and other related applications. Such data capture the different scattering characteristics of crops in different growth periods, which greatly improves crop classification accuracy [23,24,25].
Recently, a number of classification algorithms for PolSAR data have been presented in the literature, which can be roughly divided into three categories:
(1) Algorithms based on statistical models [26]. For example, Lee et al. proposed a classical classifier based on the complex Wishart distribution [27];
(2) Algorithms based on polarimetric scattering mechanisms [28]. Pixels with the same physical meaning are classified using the polarization scattering parameters obtained by coherent and incoherent decomposition algorithms (such as Pauli decomposition [29], Freeman decomposition [30], etc.) [31,32,33,34,35];
(3) Classification schemes based on machine learning [36], e.g., the support vector machine (SVM) [37] and various neural networks [38]. For instance, Zeyada et al. used the SVM to classify four crops (rice, maize, grape and cotton) in the Nile Delta, Egypt [39].
With the collection of multi-temporal PolSAR data, various classification algorithms based on time-series information have also been developed. For example, the long short-term memory (LSTM) network has been exploited to recognize and classify multi-temporal PolSAR images [40]. Zhong et al. classified the summer crops in Yolo County, California using the LSTM algorithm with Landsat Enhanced Vegetation Index (EVI) time series [25]. It can be seen that the research and application of multi-temporal PolSAR data are constantly progressing. The performance of the LSTM network mainly depends on its input features, so a large number of decomposition algorithms have been developed to extract polarization scattering characteristics [41,42,43,44]. However, the direct use of polarization features results in the so-called “dimension disaster” problem for the various classifiers. Therefore, dimension reduction of the extracted multi-temporal features has become a significant task.
Some methods, such as principal component analysis (PCA) [45] and locally linear embedding (LLE) [46], are popular for feature compression to solve the “dimension disaster” problem. For instance, the PCA method provides the optimal linear solution for data compression in the sense of the minimum mean square error (MMSE) [47]. The advantage of PCA lies in the fast restoration of the original data by subspace projection at the cost of minimum error. However, there is no guarantee that the principal components extracted by PCA provide the most relevant information for crop type discrimination. Independent component analysis (ICA) is a generalization of PCA that can obtain independent components; Bartlett et al. adopted ICA to recognize face images from the FERET face database [48]. Tensor decomposition is often used to extract elementary features from image data; Dehghanpoor et al. used a tensor decomposition method to achieve feature learning on satellite imagery [49]. Non-negative matrix factorization (NMF) is based on non-negativity constraints, which allows learning parts-based representations of objects; Ren et al. applied NMF for dimensionality reduction as a preprocessing step in remote sensing image classification [50]. However, these methods are not well suited to dimensionality reduction of crop PolSAR data. Additionally, the LLE method can automatically extract low-dimensional nonlinear features from high-dimensional data, but it is very sensitive to outliers [51]. In recent years, with the development of deep learning, the convolutional neural network (CNN) has gradually been applied to remote sensing data analysis [52]. Some successful network structures (e.g., the auto-encoder [53,54] and the sparse auto-encoder (SAE) [17,55]) have been presented, yielding excellent performance in feature compression and image classification. However, the sparsity of the SAE network has not been fully exploited to extract efficient features for classification, and existing CNN-based classifiers do not utilize the multi-scale features of the compressed data. Because of these disadvantages, crop classification performance still cannot reach a level suitable for practical use.
Therefore, the main purpose of this study is to propose a new method that improves crop classification performance for better application in agricultural monitoring. First, we adopted various coherent and incoherent scattering decomposition algorithms to extract particular parameters from multi-temporal PolSAR data. Second, a sparse auto-encoder network with a non-negativity constraint (NC-SAE) was built to perform feature dimension reduction, which extracts polarimetric features more efficiently. Finally, a classifier based on a crop discrimination network with multi-scale features (MSCDN) was proposed to implement the crop classification, which greatly enhances classification accuracy. The main contributions of this paper are the NC-SAE for data compression and the MSCDN for crop discrimination.
The remainder of this paper is organized as follows. Section 2 is devoted to our methodology, including the structure of PolSAR data, polarimetric feature decomposition and dimension reduction with the proposed NC-SAE network, as well as the architecture of the proposed MSCDN classifier. In Section 3, the crop classification results of the proposed method are evaluated and compared with traditional methods using simulated Sentinel-1 data. Section 4 discusses the results, and Section 5 concludes the study.

2. Methodology

To use multi-temporal PolSAR data to classify crops, the NC-SAE neural network was employed to compress the data, and a novel crop discrimination network with multi-scale features (MSCDN) was then constructed to perform the classification. The flowchart of the whole method is shown in Figure 1; it mainly includes three steps: polarization feature decomposition, feature compression and crop classification.

2.1. PolSAR Data Structure

The quad-pol SAR receives target backscattering signals and measures the amplitudes and phases of four combinations: HH, HV, VH and VV, where H denotes horizontal polarization and V vertical polarization. For each pixel, a 2 × 2 complex matrix S collecting the scattering information can be obtained; these complex numbers relate the incident and scattered electric fields. The scattering matrix S usually reads
$$\mathbf{S} = \begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix}, \tag{1}$$
where $S_{VH}$ denotes the scattering coefficient for vertical transmitting and horizontal receiving polarization, and the other elements have similar definitions.
The target feature vector can be readily obtained by vectorizing the scattering matrix. The reciprocal backscattering assumption is commonly exploited, so that $S_{HV}$ is approximately equal to $S_{VH}$ and the polarimetric scattering matrix can be rewritten as the lexicographic scattering vector
$$\mathbf{h} = \begin{bmatrix} S_{HH} & \sqrt{2}\,S_{HV} & S_{VV} \end{bmatrix}^{T}, \tag{2}$$
where the superscript $T$ denotes the transpose of a vector. The scale factor $\sqrt{2}$ on $S_{HV}$ ensures consistency in the span computation. A polarimetric covariance matrix $\mathbf{C}$ can then be constructed as
$$\mathbf{C} = \mathbf{h}\mathbf{h}^{*T} = \begin{bmatrix} |S_{HH}|^{2} & \sqrt{2}\,S_{HH}S_{HV}^{*} & S_{HH}S_{VV}^{*} \\ \sqrt{2}\,S_{HV}S_{HH}^{*} & 2|S_{HV}|^{2} & \sqrt{2}\,S_{HV}S_{VV}^{*} \\ S_{VV}S_{HH}^{*} & \sqrt{2}\,S_{VV}S_{HV}^{*} & |S_{VV}|^{2} \end{bmatrix}, \tag{3}$$
where the superscript $*$ denotes the complex conjugate. Alternatively, the Pauli scattering vector is defined as
$$\mathbf{k} = \frac{1}{\sqrt{2}} \begin{bmatrix} S_{HH}+S_{VV} & S_{HH}-S_{VV} & 2S_{HV} \end{bmatrix}^{T}. \tag{4}$$
Using the vector $\mathbf{k}$, a coherency matrix $\mathbf{T}$ can be constructed as
$$\mathbf{T} = \frac{1}{M} \sum_{m=1}^{M} \mathbf{k}_{m}\mathbf{k}_{m}^{H}, \tag{5}$$
where $M$ denotes the number of looks and the superscript $H$ denotes the conjugate transpose. The coherency matrix $\mathbf{T}$ is usually spatially averaged to reduce the inherent speckle noise in the SAR data; this averaging preserves the phase information between the polarization channels.
The covariance matrix $\mathbf{C}$ has been proved to follow a complex Wishart distribution, while the coherency matrix $\mathbf{T}$ contains equivalent information about the same PolSAR data. They can easily be converted into each other by the bilinear transformation
$$\mathbf{T} = \mathbf{N}\mathbf{C}\mathbf{N}^{T}, \tag{6}$$
where $\mathbf{N}$ is the constant matrix
$$\mathbf{N} = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 0 & 1 \\ 1 & 0 & -1 \\ 0 & \sqrt{2} & 0 \end{bmatrix}. \tag{7}$$
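To make these constructions concrete, the following minimal NumPy sketch (an illustration, not the authors' code) builds $\mathbf{h}$, $\mathbf{C}$, $\mathbf{k}$ and $\mathbf{T}$ for a single pixel from made-up scattering coefficients and verifies the bilinear relation of Equations (6) and (7):

```python
# A minimal NumPy sketch (an illustration, not the authors' code) that builds
# h, C, k and T for one pixel from made-up scattering coefficients and checks
# the bilinear relation of Equations (6) and (7).
import numpy as np

# Hypothetical complex scattering coefficients (HV = VH under reciprocity).
S_hh, S_hv, S_vv = 0.8 + 0.1j, 0.1 - 0.05j, 0.6 - 0.2j

# Lexicographic scattering vector h, Eq. (2); sqrt(2) keeps the span consistent.
h = np.array([S_hh, np.sqrt(2) * S_hv, S_vv])
C = np.outer(h, h.conj())                     # single-look covariance, Eq. (3)

# Pauli scattering vector k, Eq. (4), and a single-look coherency matrix.
k = np.array([S_hh + S_vv, S_hh - S_vv, 2 * S_hv]) / np.sqrt(2)
T = np.outer(k, k.conj())

# Constant transformation matrix N of Eq. (7); it maps h to k, i.e., k = N h.
N = np.array([[1.0, 0.0, 1.0],
              [1.0, 0.0, -1.0],
              [0.0, np.sqrt(2), 0.0]]) / np.sqrt(2)

assert np.allclose(k, N @ h)
assert np.allclose(T, N @ C @ N.T)            # Eq. (6); N is real, so N^H = N^T
```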

2.2. Polarization Decomposition and Feature Extraction

Processing and analyzing PolSAR data can effectively extract polarization scattering features, which further enable classification, detection and identification with quad-pol SAR data. Therefore, polarization decomposition is usually adopted to obtain multi-dimensional features from PolSAR data. Here, we consider 36-dimensional polarimetric scattering features derived from a single-temporal PolSAR image using various methods. Some of these features can be obtained directly from the measured data, while others are computed with incoherent decompositions (i.e., Freeman decomposition [32], Yamaguchi decomposition [33], Cloude decomposition [34] and Huynen decomposition [35]) and the null angle parameters [52]. The 36-dimensional scattering features obtained from a single-temporal PolSAR image are summarized in Table 1; higher-dimensional scattering features are then obtained by stacking the features of multiple temporal PolSAR images, as sketched below. The resulting features involve all the potential information of the original PolSAR data.
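A hedged sketch of this multi-temporal feature stacking follows; extract_features_36 is a hypothetical placeholder for the Table 1 decompositions, not an actual library routine:

```python
# A hedged sketch of the multi-temporal feature stacking described above.
# extract_features_36 is a hypothetical placeholder for the Table 1
# decompositions, not an actual library routine.
import numpy as np

def extract_features_36(polsar_image):
    """Placeholder: would return an (H, W, 36) array of Table 1 features."""
    H, W = polsar_image.shape[:2]
    return np.zeros((H, W, 36))

# Seven acquisition dates -> 7 x 36 = 252 features per pixel, the NC-SAE input.
images = [np.zeros((100, 100, 3, 3), dtype=complex) for _ in range(7)]
features = np.concatenate([extract_features_36(img) for img in images], axis=-1)
print(features.shape)  # (100, 100, 252)
```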

2.3. Feature Compression

Directly classifying crops with the high-dimensional features above is cumbersome: it involves complicated computations and a large amount of memory to store the features, and such an enormous feature set suffers from the dimensionality disaster. Therefore, to make full use of the wealth of multi-temporal PolSAR data, dimension reduction of the resulting features is indispensable. In the past few years, the auto-encoder and sparse auto-encoder have attracted increasing attention and have commonly been used to compress high-dimensional data [17,55,56,57]. Here, a sparse auto-encoder with a non-negativity constraint is proposed to further improve the sparsity of the auto-encoder.

2.3.1. Auto-Encoder

An auto-encoder (AE) is an unsupervised neural network for data representation whose aim is to make the output values approximately equal to the inputs. The basic structure of a single-layer AE network consists of three parts: encoder, activation and decoder, as shown in Figure 2, where the input layer ($\mathbf{x}$), hidden layer ($\mathbf{y}$) and output layer ($\mathbf{z}$) have $n$, $m$ and $n$ neurons, respectively. The hidden layer implements the encoding of the input data, while the output layer performs the decoding operation.
The weighted input $a$ of each neuron in the encoder is defined as
$$a_{j}^{(1)} = \sum_{i=1}^{n} w_{ji}^{(1)} x_{i} + b_{j}^{(1)}, \quad j = 1, \ldots, m, \tag{8}$$
where $w_{ji}^{(1)}$ represents the encoder weight coefficient and $b_{j}^{(1)}$ is the bias of neuron $j$. The encoder output $\mathbf{y}$ can then be written as the nonlinear activation of the weighted input $\mathbf{a}$:
$$y_{j} = f\left( a_{j}^{(1)} \right), \quad j = 1, \ldots, m, \tag{9}$$
where $f(\cdot)$ is a sigmoid function, usually chosen as the logsig function
$$f(x) = \frac{1}{1 + e^{-x}}. \tag{10}$$
If $m < n$, the output $\mathbf{y}$ can be viewed as a compressed representation of the input $\mathbf{x}$, so the encoder plays the role of data compression. The decoder is the reverse process: it reconstructs the original data from the compressed data $\mathbf{y}$, i.e., the output $\mathbf{z}$ represents an estimate of the input $\mathbf{x}$. The weighted input of the decoder is defined as
$$a_{i}^{(2)} = \sum_{j=1}^{m} w_{ij}^{(2)} y_{j} + b_{i}^{(2)}, \quad i = 1, \ldots, n, \tag{11}$$
where $w_{ij}^{(2)}$ is the decoder weight coefficient and $b_{i}^{(2)}$ is the bias of neuron $i$. The decoder output reads
$$z_{i} = g\left( a_{i}^{(2)} \right). \tag{12}$$
Here, $g(\cdot)$ is the sigmoid function of the decoder neurons, commonly chosen to be the same as $f(\cdot)$.
The training of the AE is based on minimizing a cost function to obtain the optimal weight coefficients and biases. The cost function measures the error between the input $\mathbf{x}$ and its reconstruction $\mathbf{z}$ at the output, and can be written as
$$J_{mse} = \frac{1}{2Q} \sum_{q=1}^{Q} \sum_{i=1}^{n} \left[ x_{i}(q) - z_{i}(q) \right]^{2}, \tag{13}$$
where $Q$ is the number of samples. Furthermore, a weight-decay term is usually incorporated into the cost function to regulate the degree of weight attenuation, which helps to avoid overfitting and remarkably improves the generalization capacity of the network. Hence, the overall cost function of the AE commonly reads
$$E_{AE} = J_{mse} + \lambda \Omega_{w}, \tag{14}$$
where $\Omega_{w}$ is a regularization term on the weights and $\lambda$ is its coefficient. The most commonly used restriction is the $L_{2}$ regularization term, defined as
$$\Omega_{w} = \frac{1}{2} \sum_{l=1}^{L} \sum_{j=1}^{m} \sum_{i=1}^{n} \left( w_{ji}^{(l)} \right)^{2}, \tag{15}$$
where $L = 2$ is the number of layers. The weight coefficients and biases are trained by the steepest descent algorithm via the classical error back-propagation scheme.
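The following minimal NumPy sketch illustrates the AE forward pass and cost of Equations (8)-(15); the dimensions, data and coefficients below are illustrative placeholders, not values from the paper:

```python
# A minimal NumPy sketch of the single-layer AE forward pass and cost,
# Equations (8)-(15); the dimensions, data and coefficients below are
# illustrative placeholders, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, m, Q = 252, 9, 64                          # input dim, hidden dim, samples
X = rng.random((n, Q))                        # Q input samples as columns, in [0, 1)
W1, b1 = rng.normal(0.0, 0.1, (m, n)), np.zeros((m, 1))
W2, b2 = rng.normal(0.0, 0.1, (n, m)), np.zeros((n, 1))

def logsig(x):
    """Sigmoid activation of Equation (10)."""
    return 1.0 / (1.0 + np.exp(-x))

A1 = W1 @ X + b1                              # encoder weighted input, Eq. (8)
Y = logsig(A1)                                # compressed representation, Eq. (9)
A2 = W2 @ Y + b2                              # decoder weighted input, Eq. (11)
Z = logsig(A2)                                # reconstruction of X, Eq. (12)

J_mse = np.sum((X - Z) ** 2) / (2 * Q)        # reconstruction cost, Eq. (13)
Omega_w = 0.5 * (np.sum(W1 ** 2) + np.sum(W2 ** 2))   # L2 term, Eq. (15)
lam = 0.1                                     # assumed weight-decay coefficient
E_AE = J_mse + lam * Omega_w                  # overall AE cost, Eq. (14)
print(f"J_mse = {J_mse:.4f}, E_AE = {E_AE:.4f}")
```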

2.3.2. Sparse Auto-Encoder with Non-Negativity Constraint

A sparse auto-encoder (SAE) is derived from the auto-encoder (AE). The SAE is obtained by enforcing a sparsity constraint on the output of the hidden layer, which realizes inhibitory effects and yields a fast convergence speed when training with the back-propagation algorithm [17,55]. Hence, the cost function of the SAE is given by
$$E_{SAE} = J_{mse} + \lambda \Omega_{w} + \beta \Omega_{s}, \tag{16}$$
where $\beta$ is the coefficient of the sparsity regularization term $\Omega_{s}$, which is usually represented by the Kullback–Leibler (KL) divergence [17,55].
Part-based representations of input data usually exhibit excellent performance for pattern classification. A sparse representation scheme breaks the input data into parts, and the original input data can be readily reconstructed by combining the parts additively when necessary. The input of each layer of an auto-encoder can be divided into parts by enforcing the weight coefficients of both the encoder and the decoder to be positive [56]. To achieve a better reconstruction performance, we propose the sparse auto-encoder with non-negativity constraint (NC-SAE): the network decomposes the input into parts in the encoder via (8) and (9), and combines them additively in the decoder via (11) and (12). This is achieved by replacing the regularization term (15) in the cost function (16) with a new non-negativity constraint
$$\Phi_{w} = \frac{1}{2} \sum_{l=1}^{L} \sum_{j=1}^{m} \sum_{i=1}^{n} \phi\left( w_{ji}^{(l)} \right), \tag{17}$$
where
$$\phi\left( w_{ji}^{(l)} \right) = \begin{cases} \left( w_{ji}^{(l)} \right)^{2}, & w_{ji}^{(l)} < 0 \\ 0, & w_{ji}^{(l)} \geq 0. \end{cases} \tag{18}$$
The proposed cost function for the NC-SAE is therefore defined as
$$E = J_{mse} + \alpha \Phi_{w} + \beta \Omega_{s}, \tag{19}$$
where $\alpha \geq 0$ is the parameter of the non-negativity constraint. By minimizing the cost function (19), the number of non-negative weights in each layer and the sparsity of the hidden-layer activations are both increased, while the overall average reconstruction error is reduced.
Further, the steepest descent method is used to update the weights and biases of (19) as follows:
$$w(k+1) = w(k) - \eta \frac{\partial E}{\partial w(k)}, \qquad b(k+1) = b(k) - \eta \frac{\partial E}{\partial b(k)}, \tag{20}$$
where $k$ is the iteration number and $\eta$ denotes the learning rate. We adopt the error back-propagation algorithm to compute the partial derivatives in (20). The partial derivative of the cost function with respect to the decoder weights reads
$$\frac{\partial E}{\partial w_{ij}^{(2)}} = \frac{\partial J_{mse}}{\partial w_{ij}^{(2)}} + \alpha \frac{\partial \Phi_{w}}{\partial w_{ij}^{(2)}} + \beta \frac{\partial \Omega_{s}}{\partial w_{ij}^{(2)}}. \tag{21}$$
The partial derivatives in (21) are straightforward:
$$\frac{\partial J_{mse}}{\partial w_{ij}^{(2)}} = \frac{\partial J_{mse}}{\partial a_{i}^{(2)}} \frac{\partial a_{i}^{(2)}}{\partial w_{ij}^{(2)}} = \frac{\partial J_{mse}}{\partial a_{i}^{(2)}} \, y_{j}, \tag{22}$$
$$\frac{\partial \Phi_{w}}{\partial w_{ij}^{(2)}} = r\left( w_{ij}^{(2)} \right), \tag{23}$$
$$\frac{\partial \Omega_{s}}{\partial w_{ij}^{(2)}} = 0, \tag{24}$$
where $r(w)$ is given by
$$r\left( w_{ij}^{(l)} \right) = \begin{cases} w_{ij}^{(l)}, & w_{ij}^{(l)} < 0 \\ 0, & \text{otherwise}. \end{cases} \tag{25}$$
To clarify the computation of the derivatives, we define the neuronal error $\delta$ as the derivative of the cost function with respect to the weighted input of each neuron, i.e., $\delta \equiv \partial E / \partial a$. Then $\delta_{i}^{(2)}$ can be calculated using the chain rule:
$$\delta_{i}^{(2)} = \frac{\partial J_{mse}}{\partial z_{i}} \frac{\partial z_{i}}{\partial a_{i}^{(2)}} = \frac{1}{Q} \sum_{q=1}^{Q} \left[ z_{i}(q) - x_{i}(q) \right] f'\!\left[ a_{i}^{(2)}(q) \right], \quad i = 1, \ldots, n. \tag{26}$$
Similarly, the neuronal error $\delta_{i}^{(1)}$ of the encoder is computed as
$$\delta_{i}^{(1)} = \sum_{j=1}^{n} \frac{\partial J_{mse}}{\partial a_{j}^{(2)}} \frac{\partial a_{j}^{(2)}}{\partial y_{i}} \frac{\partial y_{i}}{\partial a_{i}^{(1)}} + \beta \frac{\partial \Omega_{s}}{\partial y_{i}} \frac{\partial y_{i}}{\partial a_{i}^{(1)}} = f'\!\left( a_{i}^{(1)} \right) \sum_{j=1}^{n} \delta_{j}^{(2)} w_{ji}^{(2)} + \frac{\beta}{Q} \sum_{q=1}^{Q} f'\!\left[ a_{i}^{(1)}(q) \right] \left( \frac{1-\rho}{1-\bar{\rho}_{i}} - \frac{\rho}{\bar{\rho}_{i}} \right), \quad i = 1, \ldots, m, \tag{27}$$
where $\rho$ is the target sparsity and $\bar{\rho}_{i}$ is the average activation of hidden neuron $i$.
Substituting Equations (22)–(26) into (21) leads to
$$\frac{\partial E}{\partial w_{ij}^{(2)}} = \delta_{i}^{(2)} y_{j} + \alpha \, r\left( w_{ij}^{(2)} \right). \tag{28}$$
Similarly, the partial derivative of the cost function with respect to the encoder weights reads
$$\frac{\partial E}{\partial w_{ij}^{(1)}} = \delta_{i}^{(1)} x_{j} + \alpha \, r\left( w_{ij}^{(1)} \right). \tag{29}$$
The partial derivatives with respect to the biases of the encoder and decoder are computed in compact form as
$$\frac{\partial E}{\partial \mathbf{b}^{(l)}} = \boldsymbol{\delta}^{(l)}, \quad l = 1, 2. \tag{30}$$
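Continuing the AE sketch above, one NC-SAE steepest-descent step implementing Equations (17)-(30) might look as follows; β and ρ follow the empirical values quoted in Section 4.1, while α and η are assumed values:

```python
# A minimal sketch of one NC-SAE steepest-descent update, Equations (17)-(30),
# reusing X, Y, Z, W1, W2, b1, b2 and Q from the AE sketch above. beta and rho
# follow the empirical values quoted in Section 4.1; alpha and eta are assumed.
alpha, beta, eta, rho = 0.5, 2.5, 0.1, 0.45

def r(W):
    # Derivative of the non-negativity penalty phi, Eqs. (18) and (25):
    # only negative weights are penalized.
    return np.where(W < 0, W, 0.0)

rho_bar = Y.mean(axis=1, keepdims=True)        # average hidden activations

# Decoder neuronal error, Eq. (26); for logsig, f'(a) = z (1 - z).
delta2 = (Z - X) * Z * (1.0 - Z) / Q
# Encoder neuronal error, Eq. (27), including the KL-divergence gradient.
kl_grad = ((1.0 - rho) / (1.0 - rho_bar) - rho / rho_bar) / Q
delta1 = (W2.T @ delta2 + beta * kl_grad) * Y * (1.0 - Y)

# Gradients (Eqs. 28-30) and the update rule of Eq. (20).
W2 -= eta * (delta2 @ Y.T + alpha * r(W2))
W1 -= eta * (delta1 @ X.T + alpha * r(W1))
b2 -= eta * delta2.sum(axis=1, keepdims=True)
b1 -= eta * delta1.sum(axis=1, keepdims=True)
```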

2.4. The Crop Discrimination Network with Multi-Scale Features (MSCDN)

In the deep learning field, the convolutional neural network (CNN) has become increasingly powerful at handling complicated classification and recognition problems. Recently, CNNs have been widely adopted in remote sensing, for example in image classification, target detection and semantic segmentation. However, most classical CNNs use only a single convolution kernel size to extract feature maps; the resulting single-scale feature map in each convolutional layer makes it difficult to distinguish similar crops, so the overall crop classification performance degrades. As in our previous work [17], the poor overall performance is attributable to the minority crop categories that possess similar polarimetric scattering characteristics. Therefore, in this paper a new multi-scale deep neural network called MSCDN is proposed to further improve classification accuracy. The MSCDN not only extracts features at different scales by using multiple kernels in some convolution layers, but also captures the tiny distinctions between feature maps of different scales.
The architecture of the proposed MSCDN classifier is shown in Figure 3. The MSCDN mainly contains three parts: multi-scale feature extraction, feature fusion and classification. First, multiple convolutional layers, with multiple kernels within certain layers, extract feature maps at different scales. Second, the feature information of these diverse scales is fused together to feed the classification layers. Finally, a softmax layer performs the classification.
As shown in Figure 3, the MSCDN comprises seven convolutional layers, two max-pooling layers, four fully connected layers, one concat layer and a softmax classifier. Rectified Linear Unit (ReLU) and Batch Normalization (BN) layers are successively connected after Conv_1 to Conv_5. The ReLU layer avoids the problems of gradient explosion and gradient vanishing, improving the efficiency of gradient descent and back propagation. The BN layer normalizes each batch of internal data so that its output follows a distribution with zero mean and unit variance, which accelerates convergence. The branches Conv_6 and Conv_7 reduce the depth of the output feature maps from Conv_3 and Conv_4, decreasing the computational complexity. The detailed parameters of the convolution kernel for each layer and the other parameters of the MSCDN structure are listed in Table 2; a sketch of this topology follows.
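The following hedged Keras sketch reproduces the MSCDN topology as we read it from Figure 3 and Table 2; the wiring of the concat layer (which branch outputs are fused) and the fully connected activations are our assumptions, and M = 15 (14 crops plus the unknown class) is illustrative:

```python
# A hedged Keras sketch of the MSCDN topology as we read Figure 3 and Table 2.
# The wiring of the concat layer (which branch outputs are fused) and the FC
# activations are our assumptions; M = 15 (14 crops + unknown) is illustrative.
from tensorflow.keras import layers, Model

def conv_relu_bn(x, filters, kernel, stride):
    """Conv followed by ReLU then BN, as stated for Conv_1 to Conv_5."""
    x = layers.Conv2D(filters, kernel, strides=stride, padding="same")(x)
    x = layers.ReLU()(x)
    return layers.BatchNormalization()(x)

M = 15
inputs = layers.Input(shape=(35, 35, 9))
x = conv_relu_bn(inputs, 64, 5, 2)             # Conv_1 -> 18 x 18 x 64
x = conv_relu_bn(x, 128, 3, 1)                 # Conv_2 -> 18 x 18 x 128
x = layers.MaxPooling2D(2, strides=2)(x)       # Maxpool_1 -> 9 x 9 x 128
c3 = conv_relu_bn(x, 256, 3, 1)                # Conv_3 -> 9 x 9 x 256
c4 = conv_relu_bn(c3, 128, 3, 2)               # Conv_4 -> 5 x 5 x 128
c5 = conv_relu_bn(c4, 128, 3, 1)               # Conv_5 -> 5 x 5 x 128

# 1 x 1 branches reduce the depth of the multi-scale maps before fusion.
b6 = layers.Conv2D(32, 1, padding="same")(c3)  # Conv_6 -> 9 x 9 x 32
b7 = layers.Conv2D(64, 1, padding="same")(c4)  # Conv_7 -> 5 x 5 x 64
p2 = layers.MaxPooling2D(2, strides=2)(c5)     # Maxpool_2 -> 2 x 2 x 128

# Concat layer: flatten and fuse the three scales, then classify.
fused = layers.Concatenate()([layers.Flatten()(b6),
                              layers.Flatten()(b7),
                              layers.Flatten()(p2)])
for _ in range(3):                             # Fc_1 to Fc_3 (ReLU assumed)
    fused = layers.Dense(256, activation="relu")(fused)
outputs = layers.Dense(M, activation="softmax")(fused)  # Fc_4 + softmax
model = Model(inputs, outputs)
model.summary()
```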

3. Experiments and Result Analysis

3.1. PolSAR Data

An experimental site established by the European Space Agency (ESA) was used to evaluate the performance of the proposed method. The experimental area is an approximately 14 km × 19 km rectangular region located near the town of Indian Head (103°66′87.3″ W, 50°53′18.1″ N) in southeastern Saskatchewan, Canada. This area contains 14 classes of crops and an ‘unknown’ class comprising urban areas, transport corridors and areas of natural vegetation. The number of pixels and total area for each crop type are summarized in Table 3. Location maps from Google Earth and the ground truth map of the study area are shown in Figure 4.
The experimental PolSAR data sets were simulated by ESA with Sentinel-1 system parameters from real RADARSAT-2 data before the launch of the actual Sentinel-1 systems [58]. The real RADARSAT-2 datasets were collected on 21 April, 15 May, 8 June, 2 July, 26 July, 19 August and 12 September 2009. The multi-temporal PolSAR data of these 7 dates cover almost the entire growth cycle of the major crops in the experimental area, from sowing to harvesting. Polarization decomposition of each single-temporal PolSAR image yields 36-dimensional features, so 252-dimensional features were acquired from the 7 time-series PolSAR images.

3.2. Evaluation Criteria

To evaluate the performance of the different classification methods, the recall rate, overall accuracy (OA), validation accuracy (VA) and kappa coefficient (Kappa) are used for comparison.
The overall accuracy is defined as
$$OA = \frac{M}{N}, \tag{31}$$
where $M$ is the total number of correctly classified pixels and $N$ is the total number of pixels. Similarly, VA is the proportion of validation samples that are correctly classified among all validation samples. The recall rate is written as
$$Recall = \frac{X}{Y}, \tag{32}$$
where $X$ is the number of correctly classified samples of a certain class and $Y$ is the total number of samples of this class.
The kappa coefficient arises from the consistency test and is commonly used to evaluate classification performance; it measures the consistency between the predicted output and the ground truth. Here, we use the kappa coefficient to evaluate the overall classification accuracy of the model. Unlike OA and recall, which only involve correctly predicted samples, the kappa coefficient accounts for the missing and misclassified samples located off the diagonal of the confusion matrix. The kappa coefficient is calculated as
$$Kappa = \frac{OA - P}{1 - P}, \qquad P = \frac{1}{N^{2}} \sum_{i}^{M} s_{i:} \, s_{:i}, \tag{33}$$
where $N$ is the total number of samples, $M$ is the number of crop categories, and $s_{i:}$ and $s_{:i}$ are the sums of the $i$-th row and $i$-th column elements of the confusion matrix, respectively.
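A minimal sketch of computing these criteria from a confusion matrix follows; the matrix values are a toy example, not results from the paper:

```python
# A minimal sketch computing OA, per-class recall, and the kappa coefficient,
# Equations (31)-(33), from a confusion matrix; the matrix is a toy example.
import numpy as np

conf = np.array([[50,  2,  1],                 # rows: ground truth
                 [ 3, 40,  4],                 # columns: predictions
                 [ 1,  2, 60]])

N = conf.sum()                                 # total number of samples
OA = np.trace(conf) / N                        # Eq. (31)
recall = np.diag(conf) / conf.sum(axis=1)      # Eq. (32), one value per class
P = (conf.sum(axis=1) * conf.sum(axis=0)).sum() / N ** 2
kappa = (OA - P) / (1 - P)                     # Eq. (33)
print(f"OA = {OA:.4f}, kappa = {kappa:.4f}, recall = {np.round(recall, 3)}")
```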

3.3. Results and Analysis

We now compare our method with other data compression schemes and classifiers. First, 9-dimensional compressed features were derived from the original 252-dimensional multi-temporal features using LLE, PCA, the stacked sparse auto-encoder (S-SAE) and the proposed NC-SAE. The compressed 9-dimensional features were then fed into the SVM, CNN and the proposed MSCDN classifiers. The ratio of training samples for each classifier was 1%.

3.3.1. Comparison of the Dimensionality Reduction Methods

First, for the dimensionality reduction, the reconstruction error curves of the SAE and NC-SAE during training are shown in Figure 5. The reconstruction error of the NC-SAE is slightly lower than that of the SAE. Moreover, the standard deviations within each crop class were calculated and plotted in Figure 6A,B for different categories. The six main crops with relatively large cultivated areas (lentil, spring wheat, field pea, canola, barley and flax), shown in Figure 6a, were chosen to evaluate the standard deviation. We also chose six easily confused crops (durum wheat, oat, chemical fallow, mixed hay, barley and mixed pasture), shown in Figure 6b, for performance evaluation. The standard deviation of the proposed NC-SAE is the smallest; therefore, a better crop classification performance is expected when using the features extracted by the NC-SAE.
Additionally, using the CNN classifier, the OA, VA, Kappa and CPU time performances of the different dimension reduction methods are listed in Table 4, and the predicted results of the classifier and the corresponding error maps are illustrated in Figure 7. In this experiment, the input size of the CNN classifier was set to 15 × 15.
The dimensionality reduction methods S-SAE and NC-SAE are both superior to PCA and LLE: with the CNN classifier, the OA and Kappa of S-SAE and NC-SAE are approximately 6–8% higher than those of PCA and LLE. The performances of S-SAE and NC-SAE are nearly equal; however, these two neural networks have different structures. The proposed NC-SAE is a single-layer network, while the S-SAE uses three auto-encoders to perform the feature compression sequentially. Comparing the CPU time required to compute the compressed features, the NC-SAE takes about one tenth of the time needed by the S-SAE.

3.3.2. Comparison of the Classifier with Different Classification Methods

In this section, we compare the classification performance when the 9-dimensional features extracted by the NC-SAE are fed into the SVM, CNN and MSCDN classifiers. The classification results and error maps for these classifiers are shown in Figure 8. The proposed MSCDN classifier clearly exhibits the best performance. To provide insight into this result, Table 5 shows the OA of the different classifiers, along with the recall rates for each crop. The OA of the MSCDN is 24% and 5% higher than that of the SVM and CNN, respectively. Observing the per-crop recall rates in Table 5, the poorer OA of the SVM and CNN is mainly due to the low recall rates of several individual crops (Duw: durum wheat, Mip: mixed pasture, Mih: mixed hay, and Chf: chemical fallow). Further analysis of these crop categories in Table 3 shows that they are easily confused with others because they share the same growth cycle or similar external morphologies. For example, Duw (durum wheat) is similar to Spw (spring wheat) in external morphology, and Mip (mixed pasture) is easily confused with Gra (grass) and Mih (mixed hay). We conjecture that the poorer OA of the SVM and CNN arises from the less distinguishable features extracted by their network architectures.
From the above analysis, accurate classification of these easily confused crops is the key to enhancing the overall accuracy. To better understand the improvement brought by the MSCDN classifier, the confusion matrices of the crops Duw, Mip, Mih and Chf for the CNN and MSCDN are shown in Table 6. Compared with the CNN, the MSCDN greatly improves the recall rates of these easily confused crops, whose average recall rate increases by more than 31%. This is not surprising, because the MSCDN is a multi-scale neural network whose architecture extracts features at different scales by using multiple kernels in the convolution layers, and it is thus able to capture the tiny distinctions between feature maps. Moreover, these easily confused crops account for only a very small share of our data (7.3% of all samples); if they occupied a higher proportion, an even greater improvement in OA for the MSCDN could be foreseen.

3.3.3. The Performance for the Different Size of Input Sample

The input sample size of the classifiers also affects the crop classification performance. After compressing the data with the NC-SAE, Table 7 gives the classification results of the MSCDN classifier for different sample sizes; the corresponding training curves are shown in Figure 9. First, we set the input sample size of the MSCDN classifier to 15 × 15. In this scenario, slight over-fitting was observed when training the MSCDN, as shown in Figure 9a. This problem was solved by increasing the input sample size. Figure 9b shows the training curve for input samples of size 35 × 35; the over-fitting is completely eliminated by expanding the input size. As shown in Table 7, the OA and Kappa increase by 4.12% and 4.98%, ultimately reaching 99.33% and 99.19%, respectively, when the input size is increased from 15 × 15 to 35 × 35. When the input size is further expanded to 55 × 55, the OA and Kappa rise by only about 0.3% relative to 35 × 35, while a considerable extra computational burden is incurred. Therefore, a moderate input size of 35 × 35 is recommended for real applications.
The same conclusion holds for the CNN classifier; Table 8 further demonstrates the effect of different input sample sizes on its classification results. In addition, comparing Table 7 and Table 8 shows that the classification performance of the MSCDN is always better than that of the CNN for the same sample size.

3.3.4. Comparison of Overall Processing Procedures

The overall processing procedures and their performance evaluation are listed in Table 9. For the traditional SVM and CNN classifiers and the proposed MSCDN, the data compression methods PCA, LLE, S-SAE and NC-SAE were used to obtain the compressed 9-dimensional features. In contrast, the LSTM of Zhong et al. [25] directly performs classification on the 36 × 7 feature maps of a single pixel. Although the LSTM method avoids the feature compression procedure, its classification accuracy is poor, whereas the combination of a data compressor and a trained classifier achieves remarkable crop classification performance. From Table 9, we conclude that: (1) the combination of the proposed NC-SAE and MSCDN obtains the best performance; and (2) as the input size of the CNN and MSCDN is expanded, their classification accuracy increases remarkably. It is worth noting, however, that over-fitting appears in NC-SAE + MSCDN for the 15 × 15 sample case, as shown in Figure 9, so its classification accuracy is somewhat inferior to its competitors in that setting.

4. Discussion

An increasing number of experiments and analyses show that crop classification performance can be improved remarkably with multi-temporal quad-pol SAR data. Nowadays, the large number of spaceborne SAR systems in Earth orbit shortens the revisit period of the satellite constellations and yields a growing amount of real data, offering a tremendous opportunity for multi-temporal data analysis. Additionally, the wide application of neural networks in remote sensing has demonstrated great capability. Motivated by these two observations, this paper divides crop classification into two steps: dimension reduction with the NC-SAE, followed by classification with the MSCDN. The experimental results of Section 3 are summarized and discussed below.

4.1. The Effect of NC-SAE

In this paper, the NC-SAE was used to reduce the dimension of the features obtained by polarimetric decomposition. The experimental results in Section 3.3.1 show that the NC-SAE achieves the best performance among the compared methods. Compared with the traditional dimension reduction methods PCA and LLE, the classification accuracy using the NC-SAE compressed features improves by more than 6%, while it is nearly the same as that of the S-SAE. However, the S-SAE has an intricate structure with three hidden layers and more nodes per layer, whereas the NC-SAE has a simple structure with only one hidden layer of 9 nodes. The hyper-parameters $\lambda$, $\beta$ and $\rho$ of the NC-SAE were set to 0.1, 2.5 and 0.45, respectively, directly inherited from the empirical values of the S-SAE. Therefore, the NC-SAE is a computationally cheaper alternative to the S-SAE.

4.2. The Effect of MSCDN Classifier

The MSCDN was employed to classify the features extracted by the NC-SAE dimension reduction method, with its configuration parameters determined empirically. The MSCDN differs from a classical CNN in its concatenated multi-scale features extracted by multiple kernels of different sizes. Slight over-fitting was observed in the training process of the MSCDN when the input size was set to 15 × 15 × M, where M is the dimension of the input features; this problem was readily resolved by expanding the input size to 35 × 35 × M. Moreover, the classification accuracy is greatly improved compared with the other classifiers. In general, the MSCDN classifier combined with the NC-SAE feature compression method achieves the best performance, and its overall accuracy is about 5% higher than in our previous work [17].

4.3. Future Work

First, the slight over-fitting observed when training the MSCDN network might be resolved by adding a dropout layer to the MSCDN. Second, this study used a two-stage processing pipeline for crop classification (feature compression followed by classification). A more elegant single network that implements crop classification directly from multi-temporal quad-pol SAR data can be foreseen, further simplifying the network and reducing the computational burden.

5. Conclusions

In this paper, we proposed a novel classification method, namely the MSCDN, for multi-temporal PolSAR data classification. To solve the dimension disaster problem, we first constructed a sparse auto-encoder with non-negativity constraints (NC-SAE), whose improved sparsity reduces the dimension of the scattering features extracted from multi-temporal PolSAR images. The simulated multi-temporal Sentinel-1 data provided by ESA and the established ground truth map of the experimental site were used to evaluate the performance of the proposed methodology. Comparing the classification results, the OA of the MSCDN classifier is approximately 20% higher than that of the traditional SVM, but only 1–2% higher than the CNN. The advantage of the MSCDN over the CNN classifier may seem trivial, but this improvement comes from the better accuracy on easily confused crops with very few samples, so the overall improvement is necessarily limited. If the proposed method were applied to classify easily confused crops with a higher proportion of samples, a remarkable improvement in OA performance could be anticipated.

Author Contributions

W.-T.Z. proposed the methodology. M.W. designed the experiments and performed the programming work. J.G. contributed extensively to the manuscript writing and revision. S.-T.L. supervised the study. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62071350) and the Key R & D projects of Shaanxi Province (2020GY-162).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in the article.

Acknowledgments

We thank the European Space Agency (ESA) for providing the simulated multi-temporal PolSAR data sets and ground truth information under the Proposal of C1F.21329: Preparation of Exploiting the S1A data for land, agriculture and forest applications.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kolotii, A.; Kussul, N.; Shelestov, A.; Skakun, S.; Yailymov, B.; Basarab, R.; Lavreniuk, M.; Oliinyk, T.; Ostapenko, V. Comparison of biophysical and satellite predictors for wheat yield forecasting in Ukraine. In Proceedings of the 2015 36th International Symposium on Remote Sensing of Environment, Berlin, Germany, 11–15 May 2015. [Google Scholar] [CrossRef] [Green Version]
  2. Boryan, C.; Yang, Z.; Mueller, R.; Craig, M. Monitoring US agriculture: The US Department of Agriculture, National Agricultural Statistics Service, Cropland Data Layer Program. Geocarto Int. 2011, 26, 341–358. [Google Scholar] [CrossRef]
  3. Kavitha, A.; Srikrishna, A.; Satyanarayana, C. Crop image classification using spherical contact distributions from remote sensing images. J. King Saud Univ. Comput. Inf. Sci. 2019. [Google Scholar] [CrossRef]
  4. Tyczewska, A.; Woźniak, E.; Gracz, J.; Kuczyński, J.; Twardowski, T. Towards Food Security: Current State and Future Prospects of Agrobiotechnology. Trends Biotechnol. 2018, 36, 1219–1229. [Google Scholar] [CrossRef] [PubMed]
  5. Thenkabail, P.S.; Knox, J.W.; Ozdogan, M.; Gumma, M.K.; Congalton, R.G.; Wu, Z.T.; Milesi, C.; Finkral, A.; Marshall, M.; Mariotto, I.; et al. Assessing future risks to agricultural productivity, water resources and food security: How can remote sensing help? Photogramm. Eng. Remote Sens. 2016, 82, 773–782. [Google Scholar]
  6. Orynbaikyzy, A.; Gessner, U.; Conrad, C. Crop type classification using a combination of optical and radar remote sensing data: A review. Int. J. Remote. Sens. 2019, 40, 6553–6595. [Google Scholar] [CrossRef]
  7. Wang, Y.; Xu, X.; Huang, L.; Yang, G.; Fan, L.; Wei, P.; Chen, G. An Improved CASA Model for Estimating Winter Wheat Yield from Remote Sensing Images. Remote Sens. 2019, 11, 1088. [Google Scholar] [CrossRef] [Green Version]
  8. Xiao, J.; Xu, L. Monitoring impact of heavy metal on wheat leaves from sewage irrigation by hyperspectral remote sensing. In Proceedings of the 2010 Second IITA International Conference on Geoscience and Remote Sensing, Qingdao, China, 28–31 August 2010; Volume 1, pp. 298–301. [Google Scholar]
  9. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782. [Google Scholar] [CrossRef]
  10. Zhang, S.; Lei, Y.; Wang, L.; Li, H.; Zhao, H. Crop classification using MODIS NDVI data denoised by wavelet: A case study in Hebei Plain, China. Chin. Geogr. Sci. 2011, 21, 322–333. [Google Scholar] [CrossRef]
  11. Tatsumi, K.; Yamashiki, Y.; Torres, M.A.C.; Taipe, C.L.R. Crop classification of upland fields using Random forest of time-series Landsat 7 ETM + data. Comput. Electron. Agric. 2015, 115, 171–179. [Google Scholar] [CrossRef]
  12. Li, Y.; Chen, Y.; Liu, G.; Jiao, L. A Novel Deep Fully Convolutional Network for PolSAR Image Classification. Remote Sens. 2018, 10, 1984. [Google Scholar] [CrossRef] [Green Version]
  13. Sabry, R. Terrain and Surface Modeling Using Polarimetric SAR Data Features. IEEE Trans. Geosci. Remote Sens. 2015, 54, 1170–1184. [Google Scholar] [CrossRef]
  14. Skriver, H.; Mattia, F.; Satalino, G.; Balenzano, A.; Pauwels, V.; Verhoest, N.E.C.; Davidson, M. Crop Classification Using Short-Revisit Multitemporal SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 423–431. [Google Scholar] [CrossRef]
  15. Jafari, M.; Maghsoudi, Y.; Zoej, M.J.V. A New Method for Land Cover Characterization and Classification of Polarimetric SAR Data Using Polarimetric Signatures. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3595–3607. [Google Scholar] [CrossRef]
  16. Li, Z.; Chen, G.; Zhang, T. Temporal Attention Networks for Multitemporal Multisensor Crop Classification. IEEE Access 2019, 7, 134677–134690. [Google Scholar] [CrossRef]
  17. Guo, J.; Li, H.; Ning, J.; Han, W.; Zhang, W.-T.; Zhou, Z.S. Feature Dimension Reduction Using Stacked Sparse Auto-Encoders for Crop Classification with Multi-Temporal, Quad-Pol SAR Data. Remote Sens. 2020, 12, 321. [Google Scholar] [CrossRef] [Green Version]
  18. Li, H.; Zhang, C.; Zhang, S.; Atkinson, P.M. Crop classification from full-year fully-polarimetric L-band UAVSAR time-series using the Random Forest algorithm. Int. J. Appl. Earth Obs. Geoinf. 2020, 87, 102032. [Google Scholar] [CrossRef]
  19. Whelen, T.; Siqueira, P. Time-series classification of Sentinel-1 agricultural data over North Dakota. Remote Sens. Lett. 2018, 9, 411–420. [Google Scholar] [CrossRef]
  20. Veloso, A.; Mermoz, S.; Bouvet, A.; Le Toan, T.; Planells, M.; Dejoux, J.-F.; Ceschia, E. Understanding the temporal behavior of crops using Sentinel-1 and Sentinel-2-like data for agricultural applications. Remote Sens. Environ. 2017, 199, 415–426. [Google Scholar] [CrossRef]
  21. White, L.; Millard, K.; Banks, S.; Richardson, M.; Pasher, J.; Duffe, J. Moving to the RADARSAT constellation mission: Comparing synthesized compact polarimetry and dual polarimetry data with fully polarimetric RADARSAT-2 data for image classification of peatlands. Remote Sens. 2017, 9, 573. [Google Scholar] [CrossRef] [Green Version]
  22. Mattia, F.; Satalino, G.; Balenzano, A.; D’Urso, G.; Capodici, F.; Iacobellis, V.; Milella, P.; Gioia, A.; Rinaldi, M.; Ruggieri, S.; et al. Time series of COSMO-SkyMed data for landcover classification and surface parameter retrieval over agricultural sites. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 6511–6514. [Google Scholar]
  23. Usman, M.; Liedl, R.; Shahid, M.A.; Abbas, A. Land use/land cover classification and its change detection using multi-temporal MODIS NDVI data. J. Geogr. Sci. 2015, 25, 1479–1506. [Google Scholar] [CrossRef]
  24. Zhang, X.-W.; Liu, J.-F.; Qin, Z.; Qin, F. Winter wheat identification by integrating spectral and temporal information derived from multi-resolution remote sensing data. J. Integr. Agric. 2019, 18, 2628–2643. [Google Scholar] [CrossRef]
  25. Zhong, L.H.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443. [Google Scholar] [CrossRef]
  26. Doualk, A. Application of Statistical Methods and GIS for Downscaling and Mapping Crop Statistics Using Hypertemporal Remote Sensing. J. Stat. Sci. Appl. 2014, 2, 93–101. [Google Scholar]
  27. Lee, J.S.; Grunes, M.R.; Kwok, R. Classification of multi-look polarimetric SAR imagery based on complex Wishart distribution. Int. J. Remote Sens. 1994, 15, 2299–2311. [Google Scholar] [CrossRef]
  28. Maghsoudi, Y.; Collins, M.; Leckie, D.G. Polarimetric classification of Boreal forest using nonparametric feature selection and multiple classifiers. Int. J. Appl. Earth Obs. Geoinf. 2012, 19, 139–150. [Google Scholar] [CrossRef]
  29. Demirci, S.; Kirik, O.; Ozdemir, C. Interpretation and Analysis of Target Scattering From Fully-Polarized ISAR Images Using Pauli Decomposition Scheme for Target Recognition. IEEE Access 2020, 8, 155926–155938. [Google Scholar] [CrossRef]
  30. Nurtyawan, R.; Saepuloh, A.; Harto, A.B.; Wikantika, K.; Kondoh, A. Satellite Imagery for Classification of Rice Growth Phase Using Freeman Decomposition in Indramayu, West Java, Indonesia. HAYATI J. Biosci. 2018, 25, 126–137. [Google Scholar]
  31. Cloude, S.; Pottier, E. A review of target decomposition theorems in radar polarimetry. IEEE Trans. Geosci. Remote Sens. 1996, 34, 498–518. [Google Scholar] [CrossRef]
  32. Freeman, A.; Durden, S. A three-component scattering model for polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 963–973. [Google Scholar] [CrossRef] [Green Version]
  33. Yamaguchi, Y.; Moriyama, T.; Ishido, M.; Yamada, H. Four-component scattering model for polarimetric SAR image decomposition. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1699–1706. [Google Scholar] [CrossRef]
  34. Cloude, S.; Pottier, E. An entropy based classification scheme for land applications of polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 1997, 35, 68–78. [Google Scholar] [CrossRef]
  35. Huynen, J.R. Phenomenological Theory of Radar Targets; Technical University: Delft, The Netherlands, 1978; pp. 653–712. [Google Scholar]
  36. Wen, Y.; Shang, S.; Rahman, K.U. Pre-Constrained Machine Learning Method for Multi-Year Mapping of Three Major Crops in a Large Irrigation District. Remote Sens. 2019, 11, 242. [Google Scholar] [CrossRef] [Green Version]
  37. Son, N.T.; Chen, C.F.; Chen, C.R.; Minh, V.Q. Assessment of Sentinel-1A data for rice crop classification using random forests and support vector machines. Geocarto Int. 2018, 33, 587–601. [Google Scholar]
  38. Picon, A.; Seitz, M.; Alvarez-Gila, A.; Mohnke, P.; Ortiz-Barredo, A.; Echazarra, J. Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions. Comput. Electron. Agric. 2019, 167, 105093. [Google Scholar] [CrossRef]
  39. Zeyada, H.H.; Ezz, M.; Nasr, A.; Shokr, M.; Harb, H.M. Evaluation of the discrimination capability of full polarimetric SAR data for crop classification. Int. J. Remote Sens. 2016, 37, 2585–2603. [Google Scholar] [CrossRef]
  40. Zhou, Y.; Luo, J.; Feng, L.; Yang, Y.; Chen, Y.; Wu, W. Long-short-term-memory-based crop classification using high-resolution optical images and multi-temporal SAR data. GIScience Remote Sens. 2019, 56, 1170–1191. [Google Scholar] [CrossRef]
  41. Yang, C.; Hou, B.; Ren, B.; Hu, Y.; Jiao, L. CNN-Based Polarimetric Decomposition Feature Selection for PolSAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8796–8812. [Google Scholar] [CrossRef]
  42. Guo, J.; Wei, P.; Liu, J.; Jin, B.; Su, B.; Zhou, Z. Crop Classification Based on Differential Characteristics of H/α Scattering Parameters for Multitemporal Quad- and Dual-Polarization SAR Images. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6111–6123. [Google Scholar] [CrossRef]
  43. Ustuner, M.; Balik Sanli, F. Polarimetric Target Decompositions and Light Gradient Boosting Machine for Crop Classification: A Comparative Evaluation. ISPRS Int. J. Geo-Inf. 2019, 8, 97. [Google Scholar] [CrossRef] [Green Version]
  44. Chen, D.; Peethambaran, J.; Zhang, Z. A supervoxel-based vegetation classification via decomposition and modelling of full-waveform airborne laser scanning data. Int. J. Remote Sens. 2018, 39, 2937–2968. [Google Scholar] [CrossRef]
  45. Jolliffe, I.T. Principal Component Analysis; Springer: Heidelberg/Berlin, Germany, 2002; pp. 1094–1096. [Google Scholar]
  46. Min, X.P.; Wang, H.; Yang, Z.W.; Ge, S.X.; Zhang, J.; Shao, N.X. Relevant Component Locally Linear Embedding Dimensionality Reduction for Gene Expression Data Analysis. Metall. Min. Ind. 2015, 4, 186–194. [Google Scholar]
  47. Báscones, D.; González, C.; Mozos, D. Hyperspectral Image Compression Using Vector Quantization, PCA and JPEG2000. Remote Sens. 2018, 10, 907. [Google Scholar] [CrossRef] [Green Version]
  48. Bartlett, M.S.; Movellan, J.R.; Sejnowski, T.J. Face recognition by independent component analysis. IEEE Trans. Neural Netw. 2002, 13, 1450–1464. [Google Scholar] [CrossRef] [PubMed]
  49. Dehghanpoor, G.; Frachetti, M.; Juba, B. A Tensor Decomposition Method for Unsupervised Feature Learning on Satellite Imagery. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1679–1682. [Google Scholar] [CrossRef]
  50. Ren, J.; Yu, X.; Hao, B. Classification of landsat TM image based on non negative matrix factorization. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 405–408. [Google Scholar] [CrossRef]
  51. Roweis, S.T.; Saul, L.K. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef] [Green Version]
  52. Chen, S.-W.; Tao, C.-S. PolSAR Image Classification Using Polarimetric-Feature-Driven Deep Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2018, 15, 627–631. [Google Scholar] [CrossRef]
  53. Xie, G.-S.; Zhang, X.-Y.; Liu, C.-L. Efficient Feature Coding Based on Auto-encoder Network for Image Classification. In Proceedings of the Asian Conference on Computer Vision—ACCV 2014, Singapore, 16 April 2015; pp. 628–642. [Google Scholar]
  54. Kim, H.; Hirose, A. Unsupervised Fine Land Classification Using Quaternion Autoencoder-Based Polarization Feature Extraction and Self-Organizing Mapping. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1839–1851. [Google Scholar] [CrossRef]
  55. Ren, K.; Ye, H.; Gu, G.; Chen, Q. Pulses Classification Based on Sparse Auto-Encoders Neural Networks. IEEE Access 2019, 7, 92651–92660. [Google Scholar] [CrossRef]
  56. Babajide, O.A.; Ayinde, E.H.; Jacek, M.Z. Visualizing and Understanding Nonnegativity Constrained Sparse Autoencoder in Deep Learning. In Proceedings of the International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland, 29 May 2016; pp. 3–14. [Google Scholar]
  57. Huang, L.; Xia, W.; Zhang, B.; Qiu, B.; Gao, X. MSFCN-multiple supervised fully convolutional networks for the osteosarcoma segmentation of CT images. Comput. Methods Progr. Biomed. 2017, 143, 67–74. [Google Scholar] [CrossRef]
  58. Caves, R.; Davidson, G.; Padda, J.; Ma, A. AgriSAR 2009 Final Report: Vol 1 Executive Summary, Data Acquisition, Data Simulation; Tech. Rep. 22689/09; ESA: Paris, France, 2011. [Google Scholar]
Figure 1. Flowchart of the whole study method.
Figure 2. Single-layer AE neural network structure.
Figure 3. MSCDN classifier architecture.
Figure 4. Location maps and ground truth map of the experimental site.
Figure 5. The reconstruction error curves of NC-SAE and SAE.
Figure 6. The selected crops and their standard deviations of the compressed data. (a,b) are the distribution maps of the six selected crops: (a) the six crops with the largest cultivated area; (b) the six crops that are difficult to discriminate. (A,B) illustrate the standard deviations within the selected crops in (a,b), respectively.
Figure 7. Classification results and error maps: (A–D) are the classification result maps of different dimension reduction methods: (A) LLE + CNN, (B) PCA + CNN, (C) S-SAE + CNN, (D) NC-SAE + CNN; (a–d) are the error maps of (A–D).
Figure 8. Classification results for different classifiers: (A) SVM, (B) CNN, (C) MSCDN; (a–c) the error maps of (A–C).
Figure 9. Training curves with sample sizes of (a) 15 × 15 and (b) 35 × 35.
Table 1. The 36-dimensional decomposition features of single-temporal PolSAR data.

| Feature Extraction Scheme | Feature | Formula | Dimension |
|---|---|---|---|
| Features based on measured data | Polarization intensities | $\vert S_{HH} \vert, \vert S_{HV} \vert, \vert S_{VV} \vert$ | 3 |
| | Amplitude of HH-VV correlation | $\vert S_{HH} S_{VV}^{*} \vert / \sqrt{\vert S_{HH} \vert^{2} \cdot \vert S_{VV} \vert^{2}}$ | 1 |
| | Phase difference of HH-VV | $\arctan\left( \mathrm{Im}(S_{HH} S_{VV}^{*}) / \mathrm{Re}(S_{HH} S_{VV}^{*}) \right)$ | 1 |
| | Cross-polarized ratio (to VV) | $10 \log_{10}\left( 2 \vert S_{HV} \vert^{2} / \vert S_{VV} \vert^{2} \right)$ | 1 |
| | Co-polarized ratio | $10 \log_{10}\left( \vert S_{VV} \vert^{2} / \vert S_{HH} \vert^{2} \right)$ | 1 |
| | Cross-polarized ratio (to HH) | $10 \log_{10}\left( 2 \vert S_{HV} \vert^{2} / \vert S_{HH} \vert^{2} \right)$ | 1 |
| | Degrees of polarization | $\vert S_{VV} \vert^{2} / \vert S_{HH} \vert^{2}$, $2 \vert S_{HV} \vert^{2} / (\vert S_{HH} \vert^{2} + \vert S_{VV} \vert^{2})$ | 2 |
| Incoherent decomposition | Freeman decomposition | -- | 5 |
| | Yamaguchi decomposition | -- | 7 |
| | Cloude decomposition | -- | 3 |
| | Huynen decomposition | -- | 9 |
| Other decomposition | Null angle parameters | -- | 2 |
| Sum | | | 36 |
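As an illustration of the measured-data features in Table 1, the NumPy sketch below computes the intensity, correlation, phase-difference, and channel-ratio features per pixel from the three scattering-matrix channels. It assumes complex-valued inputs that have already been spatially averaged (multilooked); the function name and the small epsilon guard are ours, not part of the paper.

```python
import numpy as np

def measured_features(s_hh, s_hv, s_vv, eps=1e-12):
    """Per-pixel features from Table 1; inputs are complex ndarrays of equal shape."""
    p_hh, p_hv, p_vv = np.abs(s_hh) ** 2, np.abs(s_hv) ** 2, np.abs(s_vv) ** 2
    return {
        "intensities": (np.abs(s_hh), np.abs(s_hv), np.abs(s_vv)),
        # amplitude of the HH-VV correlation coefficient
        "hh_vv_corr_amp": np.abs(s_hh * np.conj(s_vv)) / np.sqrt(p_hh * p_vv + eps),
        # phase difference atan(Im/Re); np.angle uses atan2, which also fixes the quadrant
        "hh_vv_phase_diff": np.angle(s_hh * np.conj(s_vv)),
        "xpol_ratio_vv_db": 10 * np.log10(2 * p_hv / (p_vv + eps) + eps),
        "copol_ratio_db": 10 * np.log10(p_vv / (p_hh + eps) + eps),
        "xpol_ratio_hh_db": 10 * np.log10(2 * p_hv / (p_hh + eps) + eps),
        "degrees_of_polarization": (p_vv / (p_hh + eps),
                                    2 * p_hv / (p_hh + p_vv + eps)),
    }
```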
Table 2. Detailed configuration of the MSCDN network architecture.

| Layer | Kernel Size | Number | Stride | Padding | Output Size |
|---|---|---|---|---|---|
| Input | -- | -- | -- | -- | 35 × 35 × 9 |
| Conv_1 | 5 × 5 × 9 | 64 | [2,2] | same | 18 × 18 × 64 |
| Conv_2 | 3 × 3 × 64 | 128 | [1,1] | same | 18 × 18 × 128 |
| Maxpool_1 | 2 × 2 × 128 | -- | [2,2] | [0,0,0,0] | 9 × 9 × 128 |
| Conv_3 | 3 × 3 × 128 | 256 | [1,1] | same | 9 × 9 × 256 |
| Conv_4 | 3 × 3 × 256 | 128 | [2,2] | same | 5 × 5 × 128 |
| Conv_5 | 3 × 3 × 128 | 128 | [1,1] | same | 5 × 5 × 128 |
| Conv_6 | 1 × 1 × 256 | 32 | [1,1] | same | 9 × 9 × 32 |
| Conv_7 | 1 × 1 × 128 | 64 | [1,1] | same | 5 × 5 × 64 |
| Maxpool_2 | 2 × 2 × 128 | -- | [2,2] | [0,0,0,0] | 2 × 2 × 128 |
| Fc_1,2,3 | -- | 256 | -- | -- | 1 × 1 × 256 |
| Fc_4 | -- | M | -- | -- | 1 × 1 × M |
| Softmax | softmax | -- | -- | -- | 1 × 1 × M |

M denotes the number of categories of crops.
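Read as a computation graph, Table 2 describes a trunk (Conv_1 through Conv_5) with three multi-scale taps: Conv_6 takes the 9 × 9 × 256 output of Conv_3, while Conv_7 and Maxpool_2 take the 5 × 5 × 128 output of Conv_5, and the three branch outputs feed the fully connected layers. The PyTorch sketch below reproduces exactly the shapes in the table; the branch wiring, the flatten-and-concatenate fusion before Fc_1, and the ReLU activations are our reading of the table, not a verbatim reimplementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MSCDN(nn.Module):
    """Multi-scale crop discrimination network matching the shapes in Table 2."""
    def __init__(self, m: int):
        super().__init__()
        self.conv1 = nn.Conv2d(9, 64, 5, stride=2, padding=2)     # 35 -> 18
        self.conv2 = nn.Conv2d(64, 128, 3, stride=1, padding=1)   # 18 -> 18
        self.pool1 = nn.MaxPool2d(2, stride=2)                    # 18 -> 9
        self.conv3 = nn.Conv2d(128, 256, 3, stride=1, padding=1)  # 9 -> 9
        self.conv4 = nn.Conv2d(256, 128, 3, stride=2, padding=1)  # 9 -> 5
        self.conv5 = nn.Conv2d(128, 128, 3, stride=1, padding=1)  # 5 -> 5
        self.conv6 = nn.Conv2d(256, 32, 1)                        # branch 1: 9x9x32
        self.conv7 = nn.Conv2d(128, 64, 1)                        # branch 2: 5x5x64
        self.pool2 = nn.MaxPool2d(2, stride=2)                    # branch 3: 2x2x128
        fused = 9 * 9 * 32 + 5 * 5 * 64 + 2 * 2 * 128             # 4704 features
        self.fc = nn.Sequential(
            nn.Linear(fused, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, m),                                    # Fc_4 logits
        )

    def forward(self, x):                                  # x: (batch, 9, 35, 35)
        x = self.pool1(F.relu(self.conv2(F.relu(self.conv1(x)))))
        a = F.relu(self.conv3(x))                          # 9x9x256
        b = F.relu(self.conv5(F.relu(self.conv4(a))))      # 5x5x128
        s1 = F.relu(self.conv6(a)).flatten(1)              # coarse-scale features
        s2 = F.relu(self.conv7(b)).flatten(1)              # mid-scale features
        s3 = self.pool2(b).flatten(1)                      # pooled fine features
        return self.fc(torch.cat([s1, s2, s3], dim=1))
```

Training the logits with `nn.CrossEntropyLoss` reproduces the Softmax layer of Table 2, since that loss applies softmax internally.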
Table 3. Crop type and area statistics of the study area.

| Crop Type | Crop Code | Number of Pixels | Total Crop Area (%) |
|---|---|---|---|
| Unknown | Unk | 1,323,612 | 13,236 ha (39.12%) |
| Lentil | Len | 217,186 | 2172 ha (6.42%) |
| Durum Wheat | Duw | 101,299 | 1013 ha (2.99%) |
| Spring Wheat | Spw | 577,109 | 5771 ha (17.05%) |
| Field Pea | Fip | 255,108 | 2551 ha (7.54%) |
| Oat | Oat | 70,643 | 706 ha (2.09%) |
| Canola | Can | 459,096 | 4591 ha (13.57%) |
| Grass | Gra | 23,452 | 235 ha (0.69%) |
| Mixed Pasture | Mip | 15,799 | 158 ha (0.47%) |
| Mixed Hay | Mih | 28,756 | 288 ha (0.85%) |
| Barley | Bar | 108,133 | 1081 ha (3.20%) |
| Summer fallow | Suf | 22,445 | 224 ha (0.66%) |
| Flax | Fla | 131,296 | 1313 ha (3.88%) |
| Canary seed | Cas | 47,202 | 472 ha (1.39%) |
| Chemical fallow | Chf | 2682 | 27 ha (0.08%) |
| Total | | 3,383,818 | 33,838 ha (100%) |
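The area column in Table 3 follows directly from the pixel counts: every row satisfies hectares = pixels / 100, which corresponds to a pixel footprint of about 100 m² (a 10 m grid, consistent with the simulated Sentinel-1 product; the exact spacing is our assumption). A one-line check:

```python
def pixels_to_hectares(n_pixels: int, pixel_spacing_m: float = 10.0) -> float:
    """Area in hectares, assuming square pixels (1 ha = 10,000 m^2); 10 m spacing is assumed."""
    return n_pixels * pixel_spacing_m ** 2 / 10_000

print(pixels_to_hectares(1_323_612))  # ~13,236 ha, matching the Unknown row of Table 3
```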
Table 4. Comparison of the classification performance using the CNN classifier under various data compression schemes.

| Method | VA (%) | OA (%) | Kappa (%) | CPU Time |
|---|---|---|---|---|
| PCA | 80.63 | 87.92 | 85.23 | 0.4521 s |
| LLE | 81.18 | 88.03 | 85.41 | 16.8427 s |
| S-SAE | 91.29 | 94.24 | 93.03 | 5.5383 s |
| NC-SAE | 90.26 | 94.23 | 93.05 | 0.5193 s |
Table 5. The recall rates and OA of crop classification for different classifiers after NC-SAE dimensionality reduction.

| Method | Len | Duw | Spw | Fip | Oat | Can | Gra | Mip | Mih | Bar | Suf | Fla | Cas | Chf | OA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SVM | 91.4 | 1.8 | 98.3 | 93.9 | 2.3 | 95.3 | 69.5 | 8.5 | 42.1 | 3.0 | 45.4 | 40.8 | 5.8 | 0 | 75.22 |
| CNN | 97.8 | 69.2 | 96.5 | 99.7 | 80.5 | 99.8 | 89.4 | 69.0 | 76.3 | 84.6 | 92.2 | 93.3 | 90.2 | 42.9 | 94.23 |
| MSCDN | 99.6 | 98.4 | 99.6 | 99.8 | 97.9 | 99.9 | 98.6 | 97.3 | 96.4 | 98.1 | 98.9 | 98.5 | 99.4 | 89.9 | 99.33 |

Note: All entries are recall rates (%) for the 14 crop types; MSCDN gives the highest recall rate for every crop type and the highest OA.
Table 6. The partial confusion matrix of crop classification for CNN and MSCDN. Ground-truth classes (Duw, Mip, Mih, Chf) are in columns; entries are the percentages of ground-truth pixels assigned to each classified label.

| Classified Label | CNN Duw | CNN Mip | CNN Mih | CNN Chf | MSCDN Duw | MSCDN Mip | MSCDN Mih | MSCDN Chf |
|---|---|---|---|---|---|---|---|---|
| Len | 0.58 | 1.93 | 0.79 | 55.9 | 0.09 | 0.04 | 0.26 | 8.98 |
| Duw | 69.2 | 1.80 | 0.01 | 0 | 98.4 | 0.65 | 0 | 0 |
| Spw | 22.9 | 1.31 | 1.75 | 0 | 0.78 | 0.14 | 0.38 | 0 |
| Fip | 0.25 | 0.07 | 0.13 | 0.33 | 0.18 | 0.04 | 0.08 | 0.67 |
| Oat | 1.18 | 0.02 | 0.36 | 0 | 0 | 0.61 | 0.01 | 0 |
| Can | 0.59 | 0 | 0.53 | 0 | 0.22 | 0 | 0.46 | 0 |
| Gra | 0 | 9.03 | 11.7 | 0.07 | 0 | 0 | 2.10 | 0.26 |
| Mip | 0.01 | 69.0 | 4.84 | 0 | 0.01 | 97.3 | 0.05 | 0 |
| Mih | 0.07 | 3.25 | 76.3 | 0 | 0 | 0.73 | 96.3 | 0 |
| Bar | 3.98 | 0.06 | 0.21 | 0 | 0.21 | 0.07 | 0 | 0 |
| Suf | 0 | 9.54 | 0 | 0 | 0.04 | 0 | 0.23 | 0 |
| Fla | 0.92 | 3.99 | 3.30 | 0.70 | 0.01 | 0.33 | 0.01 | 0.11 |
| Cas | 0.21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Chf | 0 | 0 | 0 | 42.9 | 0 | 0 | 0 | 89.9 |

Note: The entries where the classified label matches the ground-truth class give the accuracies of the easily confused crops.
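The recall rates in Table 5 and the OA and kappa figures in Tables 7–9 all derive from a confusion matrix like Table 6. A minimal NumPy sketch, assuming the same orientation as Table 6 (ground truth in columns, predicted labels in rows):

```python
import numpy as np

def classification_metrics(cm):
    """cm[i, j] = number (or share) of ground-truth class j pixels predicted as class i."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    recall = np.diag(cm) / cm.sum(axis=0)                   # per-class recall (Table 5)
    oa = np.trace(cm) / n                                   # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    kappa = (oa - pe) / (1 - pe)                            # Cohen's kappa
    return recall, oa, kappa
```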
Table 7. The classification performance of MSCDN with different sample sizes.

| Input Sample Size | VA (%) | OA (%) | Kappa (%) |
|---|---|---|---|
| 15 × 15 | 91.07 | 95.21 | 94.21 |
| 35 × 35 | 98.56 | 99.33 | 99.19 |
| 55 × 55 | 99.10 | 99.47 | 99.37 |

Note: The 55 × 55 sample size gives the best performance.
Table 8. The classification performance of CNN with different sample sizes.

| Input Sample Size | VA (%) | OA (%) | Kappa (%) |
|---|---|---|---|
| 15 × 15 | 90.26 | 94.23 | 93.05 |
| 35 × 35 | 96.53 | 97.91 | 97.48 |
| 55 × 55 | 97.07 | 98.29 | 97.94 |
Table 9. Classification accuracy with different methods.

| Method | Input Sample Size | VA (%) | OA (%) | Kappa (%) |
|---|---|---|---|---|
| LSTM | -- | 73.10 | 76.43 | 70.15 |
| LLE + SVM | -- | -- | 65.51 | 55.48 |
| S-SAE + SVM | -- | -- | 78.48 | 72.92 |
| PCA + CNN | 15 × 15 | 80.63 | 87.92 | 85.23 |
| PCA + CNN | 35 × 35 | 93.55 | 96.10 | 95.30 |
| S-SAE + CNN | 15 × 15 | 91.29 | 94.24 | 93.03 |
| S-SAE + CNN | 35 × 35 | 96.81 | 98.25 | 97.90 |
| S-SAE + MSCDN | 15 × 15 | 92.05 | 96.11 | 95.32 |
| S-SAE + MSCDN | 35 × 35 | 98.12 | 99.06 | 98.87 |
| NC-SAE + CNN | 15 × 15 | 90.26 | 94.23 | 93.05 |
| NC-SAE + CNN | 35 × 35 | 96.53 | 97.91 | 97.48 |
| NC-SAE + MSCDN | 15 × 15 | 91.07 | 95.21 | 94.21 |
| NC-SAE + MSCDN | 35 × 35 | 98.56 | 99.33 | 99.19 |

Note: NC-SAE + MSCDN with a 35 × 35 sample size gives the best performance among the compared methods.