Article

A Deep Learning Classification Scheme for PolSAR Image Based on Polarimetric Features

1
College of Electronic Science and Engineering, National University of Defense Technology (NUDT), Changsha 410073, China
2
College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
3
National Satellite Ocean Application Service, Beijing 100081, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(10), 1676; https://doi.org/10.3390/rs16101676
Submission received: 2 April 2024 / Revised: 2 May 2024 / Accepted: 6 May 2024 / Published: 9 May 2024
(This article belongs to the Special Issue Remote Sensing Image Classification and Semantic Segmentation)

Abstract:
Polarimetric features extracted from polarimetric synthetic aperture radar (PolSAR) images contain abundant back-scattering information about objects. Utilizing this information for PolSAR image classification can improve accuracy and enhance object monitoring. In this paper, a deep learning classification method for PolSAR based on polarimetric channel power features is proposed. The distinctive characteristic of this method is that the polarimetric features input into the deep learning network are the power values of polarimetric channels and contain complete polarimetric information. Two other data input schemes are designed for comparison with the proposed method. The neural network uses the extracted polarimetric features to classify images, and classification accuracy analysis is employed to assess the strengths and weaknesses of the power-based scheme. Notably, the polarimetric features of the proposed data input scheme are derived through rigorous mathematical deduction, and each polarimetric feature has a clear physical meaning. Tests of the different data input schemes on Gaofen-3 (GF-3) PolSAR images show that the proposed method outperforms existing methods and improves classification accuracy to a certain extent, validating its effectiveness in large-scale area classification.

1. Introduction

Polarimetric synthetic aperture radar (PolSAR) is able to acquire comprehensive polarization information of land targets, and it can actively detect targets in all-weather and all-day conditions. Compared to single- and dual-polarized images, PolSAR images contain a significant amount of back-scattering information about the objects [1]. Currently, PolSAR classification methods mainly include polarization feature-based approaches, statistical distribution characteristics of PolSAR data, and deep learning classification methods [2,3,4,5,6,7].
A large number of scholars have conducted in-depth research on PolSAR classification and achieved good results. Feature-based methods mainly adopt polarimetric decomposition to extract the polarimetric scattering information of targets and then classify them based on these features. Cloude et al. have undertaken extensive work on PolSAR classification [8,9]. C. Lardeux et al. [10] used a support vector machine (SVM) classifier to extract polarization features from PolSAR images of different frequencies and performed classification using these features. Dickinson et al. [11] classified targets in multiple scenarios using polarization decomposition. Yin et al. [12] addressed the insufficient extraction of temporal polarimetric-spatial features in existing models by using a Vision Transformer 3D attention module to classify multi-temporal PolSAR images. Similarly, Wang et al. [13] also used ViT networks to achieve effective classification of PolSAR images. Hua et al. [14] proposed using a 3D residual module to extract information from PolSAR images. These methods combine the extracted polarization features with deep learning to achieve the classification of PolSAR images.
The classification method based on statistical features of PolSAR images mainly utilizes the difference in statistical characteristics of target objects to classify different targets in the images. Lee et al. [15] used polarization decomposition and unsupervised classification based on a complex Wishart classifier to classify PolSAR images. Silva et al. [16] used the minimum stochastic distance and the Wishart distribution to segment the targets in PolSAR images. Chen et al. [17] used polarimetric similarity and maximum–minimum scattering features to improve classification accuracy. Wu et al. [18] used a domain-based Wishart MRF method to classify PolSAR images and produced good results compared with other methods. Dong et al. [19] proposed a copula-based joint statistical model to extract polarization features and used it for PolSAR image classification. Statistical methods can analyze land features in the data dimension and achieve image classification, but many parameters still need to be manually determined in advance, which imposes a significant workload on research.
Although the above methods have achieved good results, they are all based on pixel-level classification, ignoring the relationship between the classified pixels and their neighborhoods. Liu et al. [20] used the information from the center pixel as well as the surrounding neighborhood pixels, combined them into superpixels, and used them as the smallest classification unit to classify PolSAR images, resulting in better classification outcomes.
Many researchers have studied the polarization decomposition method, and some famous algorithms include Pauli decomposition [8], SDH decomposition [21], Freeman decomposition [22], Yamaguchi decomposition [23], and reflection symmetrical decomposition (RSD) [24]. Van Zyl decomposition [25], H/A/Alpha decomposition [9], Huynen decomposition [26], Cameron decomposition [27], and Krogager decomposition [21] are also commonly used. These polarization decomposition algorithms have been applied to PolSAR land cover classification by relevant researchers. Nie et al. [28] utilized 12 polarization features obtained from Freeman–Durden decomposition, Van Zyl decomposition, and H/A/Alpha decomposition and achieved good classification results on limited samples using an enhanced learning framework. Wang et al. [2] applied the Freeman–Durden decomposition method and used a feature fusion strategy to classify PolSAR images of the Flevoland region. Ren et al. [29] utilized polarization scattering features obtained from the T-matrix, Pauli decomposition, H/A/Alpha decomposition, and Freeman decomposition. Zhang et al. [30] applied the RSD method to extract polarimetric features from Gaofen-3 images and obtained good results. Quan [31] proposed two polarimetric features—the scattering contribution combiner (SCC) and the scattering contribution angle (SCA)—for unified scattering characterization of man-made targets; the method achieved the physical optimization of scattering modeling. He also proposed a fine polarimetric decomposition method and derived several products to finely model the scattering mechanisms of urban buildings, which can also be used for effective surveillance [32].
Deep learning methods extract information about land targets through a certain number of network layers and utilize deep-level features extracted from the targets to classify objects in the image. Compared to traditional classification methods or machine learning, deep learning can more fully exploit the scattering characteristics inherent in land targets. In PolSAR data analysis, deep belief networks [33], stacked autoencoders [34], generative adversarial networks [35], convolutional neural networks [36,37], and deep stacked networks have achieved tremendous success [38,39,40]. Deep learning is a hierarchical learning method, and features extracted through this method are more discriminative [41]. Therefore, it demonstrates excellent performance in PolSAR image classification and target detection [42,43,44,45,46,47,48,49]. It has also led scholars to use various convolutional neural networks for the classification and information extraction of PolSAR images [50,51,52,53,54]. Liu et al. [55] proposed an active complex-valued convolutional wavelet neural network combined with Markov random fields to classify PolSAR images, extracting information from multiple perspectives and achieving high-precision image classification. Yang et al. [56] proposed a polarization orientation angle composite sequence network, which extracts phase information from non-diagonal elements through real- and complex-valued convolutional long short-term memory networks; its performance is better than that of existing convolutional neural networks based on real or complex numbers. Chu et al. [57] proposed a two-layer multi-objective superpixel segmentation network, with one layer used to optimize network parameters and the other used to refine segmentation results; it achieves excellent segmentation without requiring prior information. These studies all demonstrate that the application of deep learning in the field of PolSAR is very successful.
The advantages of deep learning in extracting deep features from images and automatically learning parameters drive us to use convolutional neural networks in this paper.
The PolSAR image contains multiple polarimetric characteristics and raw information about objects. Adopting an appropriate polarimetric decomposition method could extract features that represent objects, which benefits subsequent neural networks in classifying those features. Through existing research, it has been found that the most commonly used data input scheme is the 6-parameter data input scheme [58,59,60]. This method uses the total power of polarization, the two main diagonal elements of the polarimetric coherence matrix (T-matrix), and the correlation coefficient between the non-main diagonal elements of the matrix. Although this data input scheme has achieved good classification of objects using improved neural networks, some parameters do not have clear physical meanings at the polarimetric feature level, and from the perspective of polarimetric information content, it is not complete. This prompts us to seek a data input scheme that can have physical interpretability and a more complete utilization of polarimetric information at the polarimetric feature level.
This article presents a PolSAR deep learning classification method based on the power values of polarimetric channels. It mainly utilizes horizontal, vertical, left-handed, and right-handed polarization, as well as other equivalent power values of different polarimetric channels, as input schemes for the neural network. This data input scheme is essentially a combination of polarimetric powers. The channels are equivalent to each other and represent power values under different polarization observations, and their addition and subtraction operations have clear physical meanings. Three polarimetric data input schemes were used, and then these polarimetric features were input into the neural network model to classify objects.
The main goals of this study were, therefore, (1) to provide a method for PolSAR image classification based on polarimetric features through deep learning neural networks; (2) to examine the power of classical CNNs for the classification of ground objects with similar back-scattering; (3) to investigate the generalization capacity of existing CNNs for the classification of different satellite imagery; (4) to explore polarimetric features that are helpful for wetland classification and provide comparisons with different data input schemes; and (5) to compare the performance and efficiency of the other two schemes. Thus, this study contributes to the CNN classification tools for complex land cover mapping using polarimetric data based on polarimetric features.

2. Method

This paper proposes a deep learning classification scheme for PolSAR images based on polarimetric features, which mainly includes data preprocessing, polarization decomposition, polarization feature normalization, a data input scheme, and neural network classification.

2.1. Polarization Decomposition Method Based on Polarimetric Scattering Features

Target decomposition is a primary approach in polarimetric SAR data processing, which essentially represents pixels as weighted sums of several scattering mechanisms. In 1998, scholars Anthony Freeman and Stephen L. Durden proposed the first model-based, non-coherent polarimetric decomposition algorithm [22], hereinafter referred to as the Freeman decomposition. The initial purpose of the Freeman decomposition was to facilitate viewers of multi-view SAR images in intuitively distinguishing the major scattering mechanisms of objects.
The Freeman decomposition is entirely based on the back-scattering data observed by radar, and its decomposed components have corresponding physical meanings. Therefore, it later became known as the first model-based, non-coherent polarimetric decomposition algorithm. The introduction of the Freeman decomposition was pioneering at that time. After the proposal of the Freeman decomposition, as scholars extensively utilized and further researched it, they found three main issues with the decomposition method: the overestimation of volume scattering components; the presence of negative power components in the results; and the loss of polarization information. Through research, it was discovered that these three problems are not completely independent. For example, the overestimation of volume scattering components is one of the reasons for the existence of negative power values in subsequent surface scattering and double bounce components, and the loss of polarization information is also one of the reasons for the inappropriate estimation of power values of the volume scattering component.
In 2005, Yamaguchi et al. proposed the second model-based, non-coherent polarimetric decomposition algorithm [23]. This algorithm includes four scattering components, hereinafter referred to as the Yamaguchi algorithm. The Yamaguchi decomposition introduced helical scattering as the fourth scattering component, breaking the reflection symmetry assumption of the Freeman decomposition. This expansion made the algorithm applicable to a wider range of scenarios and achieved better experimental results in urban area analysis. The improved volume scattering model proposed by Yamaguchi opened up the research direction in enhancing the performance of model-based, non-coherent polarimetric decomposition algorithms through improving the scattering model. Both of the above points were pioneering work. However, the Yamaguchi algorithm did not provide a theoretical basis for selecting helical scattering as the fourth component, and according to their paper, the selection of helical scattering was based more on the comparison and preference of multiple basic scattering objects. The main innovative aspect of the Yamaguchi decomposition focused on the scattering model itself, while no improvements were made to the decomposition algorithm itself. It still followed the processing method of the Freeman decomposition. Although the algorithm showed better experimental results, issues such as the overestimation of volume scattering, negative power components, and the loss of polarization information still persisted [24].
Compared to classical polarization decomposition methods such as Freeman decomposition and Yamaguchi decomposition, the reflection symmetric decomposition [24,61] has the advantage of obtaining polarization components with non-negative power values; the decomposed results can completely reconstruct the original polarimetric coherency matrix, and the decomposition aligns strictly with the theoretical models of volume scattering, surface scattering, and double-bounce scattering. Therefore, in this paper, we chose this method to extract the polarization features of targets from PolSAR images. The reflection symmetric decomposition (RSD) is a model-based incoherent polarization decomposition method that decomposes the polarimetric coherency matrix (T) into polarization features such as the power of the surface scattering component (PS), the power of the double-bounce scattering component (PD), and the power of the volume scattering component (PV). The value range of these three components is [0, +∞).

2.2. Vertical, Horizontal, Left-Handed Circular, Right-Handed Circular Polarization Methods

Currently, radar antennas primarily use two types of polarization bases: linear polarization and circular polarization. Typical linear polarization methods include horizontal polarization (H) and vertical polarization (V), and circular polarization methods include left-handed circular polarization (L) and right-handed circular polarization (R).
When a polarimetric radar uses linear polarization bases, this method first transmits horizontally polarized electromagnetic waves and uses horizontal and vertical antennas for reception. It then transmits vertically polarized electromagnetic waves and uses horizontal and vertical antennas for reception again. In the case of a single-station radar, the back-scattering alignment convention (BSA) is usually used, and the transmitting and receiving antennas use the same coordinate system. In this coordinate system, the Z-axis points towards the target, the X-axis is horizontal to the ground, and the Y-axis, along with the X-Z plane, forms a right-handed coordinate system pointing towards the sky. This coordinate system corresponds well to the horizontal (H) and vertical (V) polarization bases. In this case, the Sinclair scattering matrix can be abbreviated as:
$$S = \begin{bmatrix} S_{\mathrm{HH}} & S_{\mathrm{HV}} \\ S_{\mathrm{VH}} & S_{\mathrm{VV}} \end{bmatrix}$$
Upon satisfying the reciprocity theorem, the polarization coherency matrix T is derived after multi-look processing, which suppresses coherent speckle noise:
$$T = \langle \mathbf{k}\mathbf{k}^{H} \rangle = \begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{12}^{*} & T_{22} & T_{23} \\ T_{13}^{*} & T_{23}^{*} & T_{33} \end{bmatrix}$$
Among them,
$$\mathbf{k} = \frac{1}{\sqrt{2}} \begin{bmatrix} S_{\mathrm{HH}} + S_{\mathrm{VV}} & S_{\mathrm{HH}} - S_{\mathrm{VV}} & S_{\mathrm{HV}} + S_{\mathrm{VH}} \end{bmatrix}^{T}$$
where k represents the scattering vector of the back-scattering S-matrix in the Pauli basis, the superscript H denotes the Hermitian (conjugate) transpose, and ⟨·⟩ denotes the ensemble average. The T-matrix is a positive semi-definite Hermitian matrix, which can be represented as a 9-dimensional real vector [T11, T22, T33, Re(T12), Re(T13), Re(T23), Im(T12), Im(T13), Im(T23)]. Tij represents the element in the i-th row and j-th column of the T-matrix, and Re(Tij) and Im(Tij) represent the real and imaginary parts of Tij, respectively.
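The construction of the multi-look T-matrix and its 9-dimensional real representation can be sketched as follows (a minimal numpy sketch; the function names are our own illustration, not the paper's code):

```python
import numpy as np

def pauli_scattering_vector(S_HH, S_HV, S_VH, S_VV):
    """Pauli scattering vector k of the Sinclair matrix."""
    return np.array([S_HH + S_VV, S_HH - S_VV, S_HV + S_VH]) / np.sqrt(2)

def coherency_matrix(k_samples):
    """Multi-look T-matrix: ensemble average of k k^H over neighboring looks."""
    return np.mean([np.outer(k, k.conj()) for k in k_samples], axis=0)

def t_matrix_to_real_vector(T):
    """9-dimensional real representation of the Hermitian T-matrix."""
    return np.array([T[0, 0].real, T[1, 1].real, T[2, 2].real,
                     T[0, 1].real, T[0, 2].real, T[1, 2].real,
                     T[0, 1].imag, T[0, 2].imag, T[1, 2].imag])
```

Because T is Hermitian, these nine real numbers carry all of its information, which is why they are a common network input.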
The Sinclair matrix can be vectorized using the Pauli basis ψ P , which can be expressed as follows:
$$\psi_{P} = \left\{ \sqrt{2}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\ \sqrt{2}\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix},\ \sqrt{2}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},\ \sqrt{2}\begin{bmatrix} 0 & -j \\ j & 0 \end{bmatrix} \right\}$$
The four-dimensional scattering vector of the Sinclair matrix under the Pauli basis ψ_P is:
$$K_{4P} = \frac{1}{2} \begin{bmatrix} S_{\mathrm{HH}} + S_{\mathrm{VV}} & S_{\mathrm{HH}} - S_{\mathrm{VV}} & S_{\mathrm{HV}} + S_{\mathrm{VH}} & j\left(S_{\mathrm{HV}} - S_{\mathrm{VH}}\right) \end{bmatrix}^{T}$$
For a monostatic radar, under the condition of satisfying the reciprocity theorem (S_HV = S_VH), the above equation reduces to:
$$K_{P} = \frac{1}{\sqrt{2}} \begin{bmatrix} S_{\mathrm{HH}} + S_{\mathrm{VV}} & S_{\mathrm{HH}} - S_{\mathrm{VV}} & 2S_{\mathrm{HV}} \end{bmatrix}^{T}$$
Therefore, for monostatic PolSAR data, the polarimetric scattering characteristics of the target in the Pauli basis are represented by the polarimetric coherency matrix as follows:
$$T_{3\times 3} = \langle K_{P} K_{P}^{H} \rangle = \frac{1}{2} \begin{bmatrix} \langle |S_{\mathrm{HH}}+S_{\mathrm{VV}}|^{2} \rangle & \langle (S_{\mathrm{HH}}+S_{\mathrm{VV}})(S_{\mathrm{HH}}-S_{\mathrm{VV}})^{*} \rangle & 2\langle (S_{\mathrm{HH}}+S_{\mathrm{VV}})S_{\mathrm{HV}}^{*} \rangle \\ \langle (S_{\mathrm{HH}}-S_{\mathrm{VV}})(S_{\mathrm{HH}}+S_{\mathrm{VV}})^{*} \rangle & \langle |S_{\mathrm{HH}}-S_{\mathrm{VV}}|^{2} \rangle & 2\langle (S_{\mathrm{HH}}-S_{\mathrm{VV}})S_{\mathrm{HV}}^{*} \rangle \\ 2\langle S_{\mathrm{HV}}(S_{\mathrm{HH}}+S_{\mathrm{VV}})^{*} \rangle & 2\langle S_{\mathrm{HV}}(S_{\mathrm{HH}}-S_{\mathrm{VV}})^{*} \rangle & 4\langle |S_{\mathrm{HV}}|^{2} \rangle \end{bmatrix}$$
In the equation, * denotes conjugation, the superscript H denotes the conjugate transpose, and ⟨·⟩ denotes ensemble averaging. Thus,
$$\frac{T_{11}+T_{22}}{2} + \mathrm{Re}(T_{12}) = \langle |S_{\mathrm{HH}}|^{2} \rangle = H(T_{12})$$
$$\frac{T_{11}+T_{22}}{2} - \mathrm{Re}(T_{12}) = \langle |S_{\mathrm{VV}}|^{2} \rangle = V(T_{12})$$
In other words, the real part of the T12 element can be represented by the power values of the horizontal and vertical polarization channels. In the equations above, H(·) and V(·) are the channel notations used in this paper. Similarly, for T13 and T23, the following four channel representations can be obtained.
$$\frac{T_{11}+T_{33}}{2} + \mathrm{Re}(T_{13}) = H(T_{13})$$
$$\frac{T_{11}+T_{33}}{2} - \mathrm{Re}(T_{13}) = V(T_{13})$$
$$\frac{T_{22}+T_{33}}{2} + \mathrm{Re}(T_{23}) = H(T_{23})$$
$$\frac{T_{22}+T_{33}}{2} - \mathrm{Re}(T_{23}) = V(T_{23})$$
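The six real-part channel relations above can be sketched in code (a minimal numpy sketch under the paper's H(·)/V(·) notation; the function name is our own):

```python
import numpy as np

def hv_channel_powers(T):
    """Equivalent horizontal/vertical channel powers from the T-matrix.

    Implements H(Tij)/V(Tij) = (Tii + Tjj)/2 +/- Re(Tij) for the three
    off-diagonal elements; for a positive semi-definite T all six values
    are non-negative, consistent with their reading as channel powers.
    """
    pairs = {(0, 1): 'T12', (0, 2): 'T13', (1, 2): 'T23'}
    out = {}
    for (i, j), name in pairs.items():
        mean = (T[i, i].real + T[j, j].real) / 2
        out['H(%s)' % name] = mean + T[i, j].real
        out['V(%s)' % name] = mean - T[i, j].real
    return out
```

Note that H(Tij) + V(Tij) recovers Tii + Tjj and H(Tij) − V(Tij) recovers 2 Re(Tij), so no information is lost.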
Thus, by equation substitution, the real-part elements of the T-matrix are equivalently represented by horizontal and vertical polarization power components. Similarly, we seek polarization powers that can represent the imaginary-part elements of the T-matrix. Under a circular polarization basis, the scattering relation can be written as follows:
$$\begin{bmatrix} E_{\mathrm{LS}} \\ E_{\mathrm{RS}} \end{bmatrix} = \begin{bmatrix} S_{\mathrm{LL}} & S_{\mathrm{LR}} \\ S_{\mathrm{RL}} & S_{\mathrm{RR}} \end{bmatrix} \begin{bmatrix} E_{\mathrm{LI}} \\ E_{\mathrm{RI}} \end{bmatrix}$$
For a single-station radar, under the condition of satisfying the reciprocity theorem (SLR = SRL), electromagnetic waves can be converted between a linear polarization basis and a circular polarization basis [62]. This enables the conversion of the scattering matrix between the linear polarization basis and circular polarization basis as well. The specific derivation process can be found in [63], and here only the results are given as follows:
$$\begin{bmatrix} S_{\mathrm{LL}} \\ \sqrt{2} S_{\mathrm{LR}} \\ S_{\mathrm{RR}} \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & \frac{j}{\sqrt{2}} & -\frac{1}{2} \\ \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\ -\frac{1}{2} & \frac{j}{\sqrt{2}} & \frac{1}{2} \end{bmatrix} \begin{bmatrix} S_{\mathrm{HH}} \\ \sqrt{2} S_{\mathrm{HV}} \\ S_{\mathrm{VV}} \end{bmatrix}$$
Based on Formulas (6) and (14), the corresponding transformation formula between circular polarization basis and Pauli vector is as follows:
$$\begin{bmatrix} S_{\mathrm{LL}} \\ \sqrt{2} S_{\mathrm{LR}} \\ S_{\mathrm{RR}} \end{bmatrix} = \begin{bmatrix} 0 & \frac{1}{\sqrt{2}} & \frac{j}{\sqrt{2}} \\ 1 & 0 & 0 \\ 0 & -\frac{1}{\sqrt{2}} & \frac{j}{\sqrt{2}} \end{bmatrix} K_{P}$$
Then,
$$\begin{bmatrix} S_{\mathrm{LL}} \\ S_{\mathrm{RR}} \end{bmatrix} = \frac{1}{2} \begin{bmatrix} \left(S_{\mathrm{HH}} - S_{\mathrm{VV}}\right) + j 2 S_{\mathrm{HV}} \\ -\left(S_{\mathrm{HH}} - S_{\mathrm{VV}}\right) + j 2 S_{\mathrm{HV}} \end{bmatrix}$$
By rearranging Equations (16) and (17), we can obtain the following form:
$$\begin{bmatrix} S_{\mathrm{HH}} & S_{\mathrm{HV}} \\ S_{\mathrm{VH}} & S_{\mathrm{VV}} \end{bmatrix} = S_{\mathrm{LR}} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \frac{S_{\mathrm{RR}}}{2} \begin{bmatrix} -1 & -j \\ -j & 1 \end{bmatrix} + \frac{S_{\mathrm{LL}}}{2} \begin{bmatrix} 1 & -j \\ -j & -1 \end{bmatrix}$$
$$K_{P} = S_{\mathrm{LR}} \begin{bmatrix} \sqrt{2} \\ 0 \\ 0 \end{bmatrix} + \frac{S_{\mathrm{RR}}}{\sqrt{2}} \begin{bmatrix} 0 \\ -1 \\ -j \end{bmatrix} + \frac{S_{\mathrm{LL}}}{\sqrt{2}} \begin{bmatrix} 0 \\ 1 \\ -j \end{bmatrix}$$
From the above equation, it can be inferred that the transformation from horizontal and vertical polarization to circular polarization can also be considered as a process of decomposing the scattering matrix into certain correlation terms. This means that the Sinclair matrix could be decomposed into components such as plane waves, left-handed helices, and right-handed helices, and SLR, SRR, SLL correspond to the phase and power levels of each constituent.
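The linear-to-circular conversion and its inverse, as derived above, can be sketched as follows (a minimal sketch; sign conventions follow the relations given here, and the function names are ours):

```python
def linear_to_circular(S_HH, S_HV, S_VV):
    """Circular-basis elements from the linear basis (monostatic, S_HV = S_VH),
    following the conversion derived above."""
    S_LL = (S_HH - S_VV) / 2 + 1j * S_HV
    S_RR = -(S_HH - S_VV) / 2 + 1j * S_HV
    S_LR = (S_HH + S_VV) / 2
    return S_LL, S_LR, S_RR

def circular_to_linear(S_LL, S_LR, S_RR):
    """Inverse relation: plane-wave (sphere) plus left/right helix components."""
    S_HH = S_LR + (S_LL - S_RR) / 2
    S_VV = S_LR - (S_LL - S_RR) / 2
    S_HV = -1j * (S_LL + S_RR) / 2
    return S_HH, S_HV, S_VV
```

The round trip is exact, reflecting that the change of polarization basis loses no information.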
Therefore, the following equations can be inferred:
$$\frac{T_{11}+T_{22}}{2} + \mathrm{Im}(T_{12}) = \langle |S_{\mathrm{LL}}|^{2} \rangle = L(T_{12})$$
$$\frac{T_{11}+T_{22}}{2} - \mathrm{Im}(T_{12}) = \langle |S_{\mathrm{RR}}|^{2} \rangle = R(T_{12})$$
Thus, the imaginary part of the T12 element can be represented by the power values of the left-handed and right-handed circular polarization channels, where L(·) and R(·) are also channel notations used in this article. Similarly, for T13 and T23, four channel representations can be obtained as follows:
$$\frac{T_{11}+T_{33}}{2} + \mathrm{Im}(T_{13}) = L(T_{13})$$
$$\frac{T_{11}+T_{33}}{2} - \mathrm{Im}(T_{13}) = R(T_{13})$$
$$\frac{T_{22}+T_{33}}{2} + \mathrm{Im}(T_{23}) = L(T_{23})$$
$$\frac{T_{22}+T_{33}}{2} - \mathrm{Im}(T_{23}) = R(T_{23})$$
Thus, by equation substitution, the imaginary-part elements of the T-matrix are equivalently represented by left-handed and right-handed circular polarization power components.
From the above derivation, it can be seen that the new classification scheme uses the power values of the horizontal, vertical, left-handed, and right-handed polarization channels. The remaining channels likewise represent power values of particular polarization channels; that is, the elements of the T-matrix are equivalently represented using polarization power features, and each input element has a clear physical meaning. Moreover, the combination of these channels can fully invert all elements of the T-matrix, making the scheme complete from the perspective of polarization information.
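The complete set of twelve equivalent channel powers can be sketched as follows; given a valid coherency matrix T, the off-diagonal elements are recoverable from the channel differences, illustrating the "fully invertible" claim (the function name is our own):

```python
import numpy as np

def channel_power_features(T):
    """Twelve equivalent polarimetric channel powers from the T-matrix:
    H/V channels carry Re(Tij), L/R channels carry Im(Tij)."""
    feats = {}
    for (i, j), name in {(0, 1): 'T12', (0, 2): 'T13', (1, 2): 'T23'}.items():
        mean = (T[i, i].real + T[j, j].real) / 2
        feats['H(%s)' % name] = mean + T[i, j].real
        feats['V(%s)' % name] = mean - T[i, j].real
        feats['L(%s)' % name] = mean + T[i, j].imag
        feats['R(%s)' % name] = mean - T[i, j].imag
    return feats
```

Since Re(Tij) = (H − V)/2 and Im(Tij) = (L − R)/2, these channels together with the diagonal powers reconstruct every element of T.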

2.3. Input Feature Normalization and Design of Three Schemes

Before inputting these polarimetric features into the neural network, it is necessary to normalize these physical quantities to meet the requirements of the network input. In the T-matrix, the total polarized power (Span) is converted into a quantity in units of dB. The polarized power parameters T11, T22, T33, PS, PD, and PV are each divided by Span to achieve normalization.
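The normalization step can be sketched as follows (a minimal sketch; the function name and argument layout are our own assumptions):

```python
import numpy as np

def normalize_power_features(T, PS, PD, PV):
    """Normalization before network input (a sketch): total power Span
    converted to dB, all other power features divided by Span."""
    span = T[0, 0].real + T[1, 1].real + T[2, 2].real
    P0 = 10 * np.log10(span)                      # total power in dB
    scaled = [x / span for x in (T[0, 0].real, T[1, 1].real, T[2, 2].real,
                                 PS, PD, PV)]
    return P0, scaled
```

After this step, the diagonal-power fractions sum to one and the decomposition powers lie in [0, 1].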
Based on the existing literature and the corresponding polarized power values, this paper designs three deep learning polarization data input schemes. First, the reflection symmetry decomposition powers PS, PD, and PV, together with the normalized P0 (10log10(Span)), were used as data input Scheme 1. The elements of Scheme 1 are all polarization power features and contain the main polarization information of the terrain objects; therefore, this input scheme was used as the baseline. Then, according to references [58,59,60], the correlation coefficients between channels T12, T13, T23, as well as the non-normalized P0 (NonP0) of the T-matrix, were used as research Scheme 2, where the correlation coefficients between channels are defined by Formulas (26)–(28).
$$coe_{T12} = \frac{|T_{12}|}{\sqrt{T_{11} T_{22}}}$$
$$coe_{T13} = \frac{|T_{13}|}{\sqrt{T_{11} T_{33}}}$$
$$coe_{T23} = \frac{|T_{23}|}{\sqrt{T_{22} T_{33}}}$$
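Formulas (26)–(28) can be sketched in code (a minimal numpy sketch; the modulus of Tij is assumed, which keeps each coefficient in [0, 1] for a valid positive semi-definite T-matrix):

```python
import numpy as np

def correlation_coefficients(T):
    """Inter-channel correlation coefficients coe_T12, coe_T13, coe_T23
    from the off-diagonal and diagonal elements of the T-matrix."""
    coe = {}
    for (i, j), name in {(0, 1): 'coeT12', (0, 2): 'coeT13',
                         (1, 2): 'coeT23'}.items():
        coe[name] = abs(T[i, j]) / np.sqrt(T[i, i].real * T[j, j].real)
    return coe
```

The bound |Tij| ≤ √(Tii Tjj) follows from the Cauchy–Schwarz inequality for positive semi-definite matrices, which is why these features need no further scaling.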
In this data input scheme, except for NonP0, the value range of the other five parameters is between 0 and 1. Finally, a total of 16 parameters, including P0, T11, T22, T33, H(T12), H(T13), H(T23), L(T12), L(T13), L(T23), V(T12), V(T13), V(T23), R(T12), R(T13), and R(T23), were used as data input Scheme 3, where P0 had been normalized. The other channels were normalized by dividing them by P0, as they represent the power values of specific polarization channels. The three data input schemes are shown in Table 1.
At the same time, the data distribution of the three research schemes was also statistically analyzed. Except for the NonP0 polarization feature of Scheme 2, the polarization characteristics of the other schemes were distributed in the range [0, 1]. In order to have a visual understanding of the polarization feature distribution of each data input scheme, we conducted a histogram analysis of the polarization feature distribution, as shown in Figure 1, Figure 2 and Figure 3.
Through the experiment, we obtained the distribution histograms of P0 and the classified features of the four experimental images. We found that P0 was distributed in [−30 dB, 0 dB], and the distribution of the ground objects also fell within [−30 dB, 0 dB]. Therefore, the selected value range was appropriate, and no new categories were introduced.
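The histogram analysis of P0 over the stated [−30 dB, 0 dB] range can be sketched on synthetic data as follows (the function name and bin count are our own choices):

```python
import numpy as np

def p0_histogram(span_image, bins=60, lo=-30.0, hi=0.0):
    """Histogram of normalized total power P0 = 10*log10(Span) over the
    [-30 dB, 0 dB] range used in the distribution analysis."""
    P0 = 10 * np.log10(np.clip(span_image, 1e-10, None))
    counts, edges = np.histogram(P0, bins=bins, range=(lo, hi))
    return counts, edges
```

Plotting `counts` against the bin `edges` reproduces the kind of distribution figures described for Figures 1–3.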

2.4. Experiment and Pre-Processing

After obtaining the high-resolution Gaofen-3 Level 1A QPSI data, radiometric calibration was first required; the calibration method can be found in the Gaofen-3 user manual [64]. Due to inherent speckle noise in the data, an appropriate filtering method was necessary to suppress the speckle and reduce its impact on subsequent classification. Compared to traditional filtering methods, the non-local means filtering method [65] considers the influence of neighboring pixels, making it more effective; therefore, this paper selected this method to denoise the PolSAR images. The polarization coherency matrix data and all polarimetric feature parameters of the reflection symmetry decomposition were obtained by processing the data with the polarization decomposition production algorithm described in references [24,62].

2.5. Classification Process of Polarization Scattering Characteristics Using Deep Learning

In this paper, based on the scattering mechanism, the polarization characteristics were organized into three different input schemes. These three schemes were then input into a network model to extract the features of the objects, and finally a Softmax classifier at the end of the network produced the classification results. Figure 4 is a flowchart of the entire experimental process, in which the different colored CNN architectures represent the extracted polarization features of the different schemes. Experiments showed that a network window size of 64 × 64 gave the best classification results for all schemes, so samples of this size were selected. The sample dimensions input into the neural network were 64 × 64 × 4, 64 × 64 × 6, and 64 × 64 × 16, respectively.
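The sampling of 64 × 64 × C patches from the polarimetric feature cube can be sketched as follows (a minimal sketch; the edge-clamping behavior is our own assumption about boundary handling):

```python
import numpy as np

def extract_patch(feature_cube, row, col, win=64):
    """Cut a win x win x C sample centered near (row, col) from an
    H x W x C polarimetric feature cube, clamping at image borders."""
    half = win // 2
    r0 = int(np.clip(row - half, 0, feature_cube.shape[0] - win))
    c0 = int(np.clip(col - half, 0, feature_cube.shape[1] - win))
    return feature_cube[r0:r0 + win, c0:c0 + win, :]
```

With C = 4, 6, or 16 channels, this yields exactly the 64 × 64 × 4, 64 × 64 × 6, and 64 × 64 × 16 inputs described above.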
The entire process of this algorithm is shown as follows (Algorithm 1).
Algorithm 1: A deep learning classification scheme for PolSAR images based on polarimetric features
Input: GF-3 PolSAR images.
Output: Predicted labels Ytest = {y1, y2, …, ym}
1: Preprocess the GF-3 PolSAR images.
2: Perform polarimetric decomposition.
3: Extract polarimetric features.
4: Normalize the features.
5: Construct the three data input schemes based on previous studies and scattering mechanisms.
6: Randomly select a proportion of training samples (Patch_Xtrain: {Patch_x1, Patch_x2, …, Patch_xn}); the remaining labeled samples are used as validation samples.
7: Input Patch_xi into the CNN:
   for i < N do
      train for one iteration
      if good fitting then
         save the model and break
      else if over-fitting or under-fitting then
         adjust parameters, e.g., learning rate and bias
   end for
8: Predict labels: Y = Softmax(Patch_Xtrain)
9: Input the test images into the model and predict the patches of all pixels.
10: Evaluate the method, i.e., compute OA, AA, and the Kappa coefficient.
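Steps 6–8 of Algorithm 1 can be sketched end-to-end with a linear softmax model standing in for the CNN (numpy only; the actual pipeline trains a CNN such as AlexNet, so this is only an illustrative stand-in):

```python
import numpy as np

def train_softmax(Xtrain, ytrain, n_classes, lr=0.1, epochs=200):
    """Train a linear softmax classifier on flattened patches
    (a stand-in for the CNN training loop of Algorithm 1)."""
    rng = np.random.default_rng(0)
    X = Xtrain.reshape(len(Xtrain), -1)
    W = rng.normal(0, 0.01, (X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[ytrain]                 # one-hot labels
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        P = np.exp(logits); P /= P.sum(axis=1, keepdims=True)
        grad = (P - Y) / len(X)                   # cross-entropy gradient
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(model, X):
    """Step 8: argmax over softmax scores for a batch of patches."""
    W, b = model
    return np.argmax(X.reshape(len(X), -1) @ W + b, axis=1)
```

Replacing `train_softmax` with a convolutional network gives the actual pipeline; the surrounding data flow is unchanged.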

3. Experimental and Result Analysis

In this section, high-resolution PolSAR images of the Yellow River Delta area, which have undergone field surveys, were used to verify the effectiveness of the proposed approach. All experiments were conducted on an i7-10700 CPU (Intel, Santa Clara, CA, USA) and an RTX 3060 Ti GPU (NVIDIA, Santa Clara, CA, USA).

3.1. Study Area and Dataset

GF-3 has a quad-polarized instrument with different imaging modes. In this article, we used high-resolution QPSI-mode PolSAR images (spatial resolution of 8 m) of the Yellow River Delta area (Shandong, China) for the experiment (displayed in Figure 5). Several typical cover types occur in this area, such as nearshore water, seawater, Spartina alterniflora, Tamarix, reed, tidal flat, and Suaeda salsa. Restricted by the available historical data, we chose four images of this area. The training and validation sets were selected from three different images taken during the same quarter in this region, specifically on 14 September 2021 and 13 October 2021. The test image was taken on 12 October 2017. We used unmanned aerial vehicle (UAV) images (displayed in Figure 6), which were shot in September 2021, combined with empirical knowledge, to label targets and guarantee the accuracy of the labeled training datasets.
In this study, we classified seven classes according to the survey results: nearshore water; seawater; Spartina alterniflora; Tamarix; reed; tidal flat; and Suaeda salsa. We labeled these targets from 1 to 7, respectively. We randomly selected 800 samples of each category for training and 200 for validation. The details are shown in Table 2.

3.2. Classification Results of the Yellow River Delta on AlexNet

In order to quantitatively evaluate the accuracy of the three data input schemes and avoid random results, five independent experiments were conducted on AlexNet for each data input scheme. The overall accuracy was calculated for each experiment (the results were arranged in descending order, with the highest overall accuracy taken as the maximum), along with the average overall accuracy across the five experiments and the Kappa coefficient. Both the per-class accuracies and the Kappa coefficient were calculated from the run with the highest overall accuracy.
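The evaluation metrics (OA, AA, and the Kappa coefficient) can be sketched from a confusion matrix as follows (a minimal numpy sketch; the function name is ours, and it assumes every class appears in the reference labels):

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Overall accuracy (OA), average accuracy (AA), and Kappa coefficient
    from a confusion matrix; assumes all classes occur in y_true."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    oa = np.trace(cm) / n                          # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))     # mean per-class accuracy
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```

The Kappa coefficient discounts chance agreement `pe`, which is why it can be noticeably lower than OA when class proportions are imbalanced.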
The classification results show that, with Scheme 3 as the polarimetric data input, both the highest overall accuracy and the average overall accuracy exceeded those of the other two schemes, at 86.11% and 78.08%, respectively. We attribute this to Scheme 3 containing more polarization information than the other two schemes. The overall accuracy and Kappa coefficient also remained relatively high across the five independent experiments, indicating the stronger robustness of Scheme 3.
It is worth noting that for the tidal flat, Scheme 2 and Scheme 3 performed poorly, with accuracies of 49.3% and 44.6%, respectively. This indicates that neither scheme contains polarization parameters that effectively represent the scattering characteristics of the tidal flat, resulting in low classification accuracy for this class. We also conjecture that because tidal flats are unique ecosystems whose water cover changes intermittently with the tidal phase, this variability may contribute to the low accuracy. The table also shows that the accuracy for tamarix in Scheme 1 was only 40.1%, lower than the other two schemes, whereas Scheme 2 and Scheme 3 both reached 100%. This indicates that our proposed method can extract more information from PolSAR images, which benefits the overall land use classification accuracy.
For Scheme 2, both the tidal flat and suaeda salsa had low classification accuracies, 49.3% and 50.8%, respectively. For Scheme 1, the classification accuracy of every land cover class was lower than in the other two schemes because of the limited number of polarimetric features. This indicates that neither scheme fully captured the polarization information of the classified features; in other words, the polarimetric characterization in these two data input schemes was incomplete.
In Scheme 3, the polarimetric features input into the deep learning network were the power values of the polarization channels; these channels are equivalent and together carry the complete polarization information. Apart from the tidal flat, high classification accuracies were achieved for the other six land cover types, indicating that the equivalent polarization power values can effectively distinguish most land cover types. Strictly speaking, however, the polarization information in this scheme still cannot fully differentiate all classes, and the overall classification accuracy needs further improvement. The per-class and overall accuracies of the three schemes are shown in Table 3.
We also classified the entire test image, and the result of Scheme 3 was clearly better than the other two schemes, both in the number of correctly classified objects and in the separation between different categories.
The classification maps show that spartina alterniflora, tamarix, and reed were easily distinguished in Scheme 3, whereas the other two schemes exhibited some misclassification, indicating that the polarization information they carry cannot separate these wetland vegetation types as well.
The classification results of the entire image are shown in Figure 7.

3.3. Classification Results of the Yellow River Delta on VGG16

To further verify the above conclusion, we also conducted comparative experiments on the three schemes with VGG16. Similar to the results on AlexNet, some land cover classes still had low classification accuracy. In Scheme 1, the reed accuracy was 26.1% and the tamarix accuracy was 40.2%; in Scheme 2, the tidal flat accuracy was only 28.5%; and in Scheme 3, the reed accuracy was 44.7% and the tidal flat accuracy was 58.3%. This indicates that none of the three schemes could fully separate the selected classes, but in terms of overall accuracy and Kappa coefficient, Scheme 3 still outperformed the other two schemes and carried relatively complete polarimetric features.
Overall, Scheme 3 showed better classification performance than the other two schemes. Except for reed and tidal flat, the classification accuracy of the other five land cover types remained consistently high: tamarix reached 100%, and the remaining classes were all above 94.1%. The highest and average overall classification accuracies were also superior to the other two schemes, indicating that Scheme 3 carried relatively complete polarization information into the network model.
Scheme 2 slightly underperformed Scheme 3 on suaeda salsa, with an accuracy of only 66.2%, and in particular struggled to classify tidal flats, with an accuracy of only 28.5%. The polarization information contained in Scheme 2 cannot effectively characterize these two classes, resulting in relatively low accuracy. In contrast, Scheme 1, which includes surface scattering and volume scattering components, effectively characterized the scattering of tidal flat and suaeda salsa; for these two classes, Scheme 1 therefore performed better than Scheme 2.
For Scheme 1, the overall land cover classification accuracy was lower than that of the other two schemes, mainly because the limited number of polarimetric feature parameters failed to adequately represent the classified area in the input to the network model. The specific classification accuracies are shown in Table 4.
Similarly, we classified the entire image; the results of the three data input schemes on VGG16 are shown in Figure 8. From Figure 8, it can be observed that the classification result of Scheme 3 was again the best among the three schemes.
When using VGG16 for classification, the results of each scheme were overall clearer than with AlexNet, and the clustering of each class was better.

4. Conclusions

In this paper, a deep learning classification scheme for PolSAR images using polarimetric scattering features was proposed and derived through rigorous mathematics. The scheme uses a combination of polarimetric power features, so that each channel represents a power value, is equivalent to the other channels, and has a clear physical and mathematical meaning. Experimental results demonstrated that, compared with the 6-parameter and 4-parameter data input schemes, the proposed scheme carries more complete information and achieves higher classification accuracy. The scheme was validated on the GF-3 dataset and showed a clear performance improvement. However, for certain land cover classes the approach still lacked sufficient accuracy, indicating that the information it carries is not yet fully comprehensive.
In future work, more comprehensive data input schemes will be explored.

Author Contributions

Methodology, W.A.; Formal analysis, S.Z.; Data curation, L.C.; Writing—original draft, S.Z.; Supervision, Z.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant numbers 2021YFC2803304 and 2022YFB3902404.

Data Availability Statement

Data were obtained from China Ocean Satellite Data Service System and are available at https://osdds.nsoas.org.cn/ (accessed on 30 March 2024) with the identity of protocol users.

Acknowledgments

We would like to thank the National Satellite Ocean Application Center for providing the Gaofen-3 data. Currently, these data are available to the public, and the data access address is https://osdds.nsoas.org.cn/ (accessed on 30 March 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lee, J.S.; Grunes, M.R.; Pottier, E. Quantitative comparison of classification capability: Fully polarimetric versus dual and single-polarization SAR. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2343–2351. [Google Scholar]
  2. Wang, Y.; Cheng, J.; Zhou, Y.; Zhang, F.; Yin, Q. A multichannel fusion convolutional neural network based on scattering mechanism for PolSAR image classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4007805. [Google Scholar] [CrossRef]
  3. Wang, X.; Cao, Z.; Cui, Z.; Liu, N.; Pi, Y. PolSAR image classification based on deep polarimetric feature and contextual information. J. Appl. Remote Sens. 2019, 13, 034529. [Google Scholar] [CrossRef]
  4. Dong, H.; Zhang, L.; Lu, D.; Zou, B. Attention-based polarimetric feature selection convolutional network for PolSAR image classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4001705. [Google Scholar] [CrossRef]
  5. Lonnqvist, A.; Rauste, Y.; Molinier, M.; Hame, T. Polarimetric SAR data in land cover mapping in boreal zone. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3652–3662. [Google Scholar] [CrossRef]
  6. McNairn, H.; Shang, J.; Jiao, X.; Champagne, C. The contribution of ALOS PALSAR multipolarization and polarimetric data to crop classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3981–3992. [Google Scholar] [CrossRef]
  7. Qi, Z.; Yeh, A.G.-O.; Li, X.; Lin, Z. A novel algorithm for land use and land cover classification using RADARSAT-2 polarimetric SAR data. Remote Sens. Environ. 2012, 118, 21–39. [Google Scholar] [CrossRef]
  8. Cloude, S.R.; Pottier, E. A review of target decomposition theorems in radar polarimetry. IEEE Trans. Geosci. Remote Sens. 1996, 34, 498–518. [Google Scholar] [CrossRef]
  9. Cloude, S.R.; Pottier, E. An entropy based classification scheme for land applications of polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 1997, 35, 68–78. [Google Scholar] [CrossRef]
  10. Lardeux, C.; Frison, P.L.; Tison, C.; Souyris, J.C.; Stoll, B.; Fruneau, B.; Rudant, J.P. Support vector machine for multifrequency SAR polarimetric data classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 4143–4152. [Google Scholar] [CrossRef]
  11. Dickinson, C.; Siqueira, P.; Clewley, D.; Lucas, R. Classification of forest composition using polarimetric decomposition in multiple landscapes. Remote Sens. Environ. 2013, 131, 206–214. [Google Scholar] [CrossRef]
  12. Yin, Q.; Lin, Z.; Hu, W.; López-Martínez, C.; Ni, J.; Zhang, F. Crop Classification of Multitemporal PolSAR Based on 3-D Attention Module with ViT. IEEE Geosci. Remote Sens. Lett. 2023, 20, 4005405. [Google Scholar] [CrossRef]
  13. Wang, W.; Wang, J.; Lu, B.; Liu, B.; Zhang, Y.; Wang, C. MCPT: Mixed Convolutional Parallel Transformer for Polarimetric SAR Image Classification. Remote Sens. 2023, 15, 2936. [Google Scholar] [CrossRef]
  14. Hua, W.; Zhang, Y.; Zhang, C.; Jin, X. PolSAR Image Classification Based on Relation Network with SWANet. Remote Sens. 2023, 15, 2025. [Google Scholar] [CrossRef]
  15. Lee, J.S.; Grunes, M.R.; Ainsworth, T.L.; Du, L.J.; Schuler, D.L.; Cloude, S.R. Unsupervised classification using polarimetric decomposition and the complex Wishart classifier. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2249–2258. [Google Scholar]
  16. Silva, W.B.; Freitas, C.C.; Sant’Anna, S.J.S.; Frery, A.C. Classification of segments in PolSAR imagery by minimum stochastic distances between Wishart distributions. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2013, 6, 1263–1273. [Google Scholar] [CrossRef]
  17. Chen, Q.; Kuang, G.Y.; Li, J.; Sui, L.C.; Li, D.G. Unsupervised land cover/land use classification using PolSAR imagery based on scattering similarity. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1817–1825. [Google Scholar] [CrossRef]
  18. Wu, Y.H.; Ji, K.F.; Yu, W.X.; Su, Y. Region-based classification of polarimetric SAR images using Wishart MRF. IEEE Geosci. Remote Sens. Lett. 2008, 5, 668–672. [Google Scholar] [CrossRef]
  19. Dong, H.; Xu, X.; Sui, H.; Xu, F.; Liu, J. Copula-Based Joint Statistical Model for Polarimetric Features and Its Application in PolSAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5777–5789. [Google Scholar] [CrossRef]
  20. Liu, B.; Hu, H.; Wang, H.; Wang, K.; Liu, X.; Yu, W. Superpixel-based classification with an adaptive number of classes for polarimetric SAR images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 907–924. [Google Scholar] [CrossRef]
  21. Krogager, E. New decomposition of the radar target scattering matrix. Electron. Lett. 1990, 26, 1525–1527. [Google Scholar] [CrossRef]
  22. Freeman, A.; Durden, S.L. A three-component scattering model for polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 963–973. [Google Scholar] [CrossRef]
  23. Yamaguchi, Y.; Moriyama, T.; Ishido, M.; Yamada, H. Four-component scattering model for polarimetric SAR image decomposition. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1699–1706. [Google Scholar] [CrossRef]
  24. An, W.T.; Lin, M.S. A reflection symmetry approximation of multi-look polarimetric SAR data and its application to Freeman-Durden decomposition. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3649–3660. [Google Scholar] [CrossRef]
  25. van Zyl, J.J.; Arii, M.; Kim, Y. Model-based decomposition of polarimetric SAR covariance matrices constrained for nonnegative eigenvalues. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3452–3459. [Google Scholar] [CrossRef]
  26. Huynen, J.R. Physical reality of radar targets. Proc. SPIE 1993, 1748, 86–96. [Google Scholar]
  27. Cameron, W.L.; Leung, L.K. Feature motivated polarization scattering matrix decomposition. In Proceedings of the IEEE International Conference on Radar, Arlington, VA, USA, 7–10 May 1990. [Google Scholar]
  28. Nie, W.; Huang, K.; Yang, J.; Li, P. A deep reinforcement learning-based framework for PolSAR imagery classification. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 4403615. [Google Scholar] [CrossRef]
  29. Ren, B.; Zhao, Y.; Hou, B.; Chanussot, J.; Jiao, L. A mutual information-based self-supervised learning model for PolSAR land cover classification. IEEE Trans. Geosci. Remote. Sens. 2021, 59, 9224–9237. [Google Scholar] [CrossRef]
  30. Zhang, S.; An, W.; Zhang, Y.; Cui, L.; Xie, C. Wetlands Classification Using Quad-Polarimetric Synthetic Aperture Radar through Convolutional Neural Networks Based on Polarimetric Features. Remote. Sens. 2022, 14, 5133. [Google Scholar] [CrossRef]
  31. Quan, S.; Qin, Y.; Xiang, D.; Wang, W.; Wang, X. Polarimetric Decomposition-Based Unified Manmade Target Scattering Characterization With Mathematical Programming Strategies. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 1–18. [Google Scholar] [CrossRef]
  32. Quan, S.; Zhang, T.; Wang, W.; Kuang, G.; Wang, X.; Zeng, B. Exploring Fine Polarimetric Decomposition Technique for Built-Up Area Monitoring. IEEE Trans. Geosci. Remote. Sens. 2023, 61, 1–19. [Google Scholar] [CrossRef]
  33. Hinton, G.E.; Osindero, S.; Teh, Y.-W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef] [PubMed]
  34. Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.-A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408. [Google Scholar]
  35. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2014, 63, 139–144. [Google Scholar] [CrossRef]
  36. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  37. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  38. Jiao, L.; Liu, F. Wishart deep stacking network for fast POLSAR image classification. IEEE Trans. Image Process. 2016, 25, 3273–3286. [Google Scholar] [CrossRef] [PubMed]
  39. Liu, F.; Jiao, L.; Tang, X. Task-oriented GAN for PolSAR image classification and clustering. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2707–2719. [Google Scholar] [CrossRef] [PubMed]
  40. Guo, Y.; Wang, S.; Gao, C.; Shi, D.; Zhang, D.; Hou, B. Wishart RBM based DBN for polarimetric synthetic radar data classification. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015. [Google Scholar]
  41. Shao, Z.; Zhang, L.; Wang, L. Stacked sparse autoencoder modeling using the synergy of airborne LiDAR and satellite optical and SAR data to map forest above-ground biomass. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2017, 10, 5569–5582. [Google Scholar] [CrossRef]
  42. Zhang, L.; Ma, W.; Zhang, D. Stacked sparse autoencoder in PolSAR data classification using local spatial information. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1359–1363. [Google Scholar] [CrossRef]
  43. Yu, Y.; Li, J.; Guan, H.; Wang, C. Automated detection of three-dimensional cars in mobile laser scanning point clouds using DBM-Hough-forests. IEEE Trans. Geosci. Remote. Sens. 2016, 54, 4130–4142. [Google Scholar] [CrossRef]
  44. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  45. Zhang, L.; Shi, Z.; Wu, J. A Hierarchical oil tank detector with deep surrounding features for high-resolution optical satellite imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2015, 8, 4895–4909. [Google Scholar] [CrossRef]
  46. Liang, H.; Li, Q. Hyperspectral imagery classification using sparse representations of convolutional neural network features. Remote Sens. 2016, 8, 99. [Google Scholar] [CrossRef]
  47. Yu, Y.; Li, J.; Guan, H.; Jia, F.; Wang, C. Learning hierarchical features for automated extraction of road markings from 3-D mobile LiDAR point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2014, 8, 709–726. [Google Scholar] [CrossRef]
  48. Xie, H.; Wang, S.; Liu, K.; Lin, S.; Hou, B. Multilayer feature learning for polarimetric synthetic radar data classification. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 2818–2821. [Google Scholar]
  49. Chen, X.; Hou, Z.; Dong, Z.; He, Z. Performance analysis of wavenumber domain algorithms for highly squinted SAR. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2023, 16, 1563–1575. [Google Scholar] [CrossRef]
  50. Dong, H.; Zhang, L.; Zou, B. Exploring vision transformers for polarimetric SAR image classification. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 5219715. [Google Scholar] [CrossRef]
  51. Deng, P.; Xu, K.; Huang, H. When CNNs meet vision transformer: A joint framework for remote sensing scene classification. IEEE Geosci. Remote. Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  52. Ren, S.; Zhou, F.; Bruzzone, L. Transfer-Aware Graph U-Net with Cross-Level Interactions for PolSAR Image Semantic Segmentation. Remote. Sens. 2024, 16, 1428. [Google Scholar] [CrossRef]
  53. Wang, Y.; Zhang, W.; Chen, W.; Chen, C. BSDSNet: Dual-Stream Feature Extraction Network Based on Segment Anything Model for Synthetic Aperture Radar Land Cover Classification. Remote. Sens. 2024, 16, 1150. [Google Scholar] [CrossRef]
  54. Shi, J.; Nie, M.; Ji, S.; Shi, C.; Liu, H.; Jin, H. Polarimetric Synthetic Aperture Radar Image Classification Based on Double-Channel Convolution Network and Edge-Preserving Markov Random Field. Remote. Sens. 2023, 15, 5458. [Google Scholar] [CrossRef]
  55. Liu, L.; Li, Y. PolSAR Image Classification with Active Complex-Valued Convolutional-Wavelet Neural Network and Markov Random Fields. Remote. Sens. 2024, 16, 1094. [Google Scholar] [CrossRef]
  56. Yang, R.; Xu, X.; Gui, R.; Xu, Z.; Pu, F. Composite Sequential Network With POA Attention for PolSAR Image Analysis. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 5209915. [Google Scholar] [CrossRef]
  57. Chu, B.; Zhang, M.; Ma, K.; Liu, L.; Wan, J.; Chen, J.; Chen, J.; Zeng, H. Multiobjective Evolutionary Superpixel Segmentation for PolSAR Image Classification. Remote. Sens. 2024, 16, 854. [Google Scholar] [CrossRef]
  58. Ai, J.; Wang, F.; Mao, Y.; Luo, Q.; Yao, B.; Yan, H.; Xing, M.; Wu, Y. A fine PolSAR terrain classification algorithm using the texture feature fusion-based improved convolutional autoencoder. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 5218714. [Google Scholar] [CrossRef]
  59. Zhou, Y.; Wang, H.; Xu, F.; Jin, Y.-Q. Polarimetric SAR image classifification using deep convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1935–1939. [Google Scholar] [CrossRef]
  60. Chen, S.-W.; Tao, C.-S. PolSAR image classifification using polarimetric-feature-driven deep convolutional neural network. IEEE Geosci. Remote Sens. Lett. 2018, 15, 627–631. [Google Scholar] [CrossRef]
  61. An, W.; Lin, M.; Yang, H. Modified reflection symmetry decomposition and a new polarimetric product of GF-3. IEEE Geosci. Remote. Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  62. An, W. Polarimetric Decomposition and Scattering Characteristic Extraction of Polarimetric SAR. Ph.D. Thesis, Tsinghua University, Beijing, China, 2010. [Google Scholar]
  63. Yang, J. On Theoretical Problems in Radar Polarimetry. Ph.D. Thesis, Niigata University, Niigata, Japan, 1999. [Google Scholar]
  64. User Manual of Gaofen-3 Satellite Products, China Resources Satellite Application Center. 2016. Available online: https://osdds.nsoas.org.cn/ (accessed on 30 March 2024).
  65. Chen, J.; Chen, Y.; An, W.; Cui, Y.; Yang, J. Nonlocal filtering for polarimetric SAR data: A pretest approach. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1744–1754. [Google Scholar] [CrossRef]
Figure 1. Histogram of polarization feature distribution for data input Scheme 1. (a) P0; (b) PS; (c) PD; (d) PV.
Figure 2. Histogram of polarization feature distribution for data input Scheme 2. (a) NonP0; (b) T22; (c) T33; (d) coeT12; (e) coeT13; (f) coeT23.
Figure 3. Histogram of polarization feature distribution for data input Scheme 3. (a) P0, (b) T11, (c) T22, (d) T33, (e) H(T12), (f) H(T13), (g) H(T23), (h) L(T12), (i) L(T13), (j) L(T23), (k) V(T12), (l) V(T13), (m) V(T23), (n) R(T12), (o) R(T13), (p) R(T23).
Figure 4. Data processing flowchart.
Figure 5. Study area location and its corresponding image.
Figure 6. UAV images of ground objects. (a) Nearshore water; (b) Seawater; (c) Spartina alterniflora; (d) Suaeda salsa; (e) Tamarix; (f) Reed; (g) Tidal flat.
Figure 7. The classification results of the three research schemes on AlexNet. GF-3 Data (a) Pauli pseudo-color map. (b) Ground truth map. Classification results of (c) Scheme 1, (d) Scheme 2, (e) Scheme 3.
Figure 8. Classification results of three research schemes on VGG16. GF-3 Data. (a) Pauli pseudo-color map; (b) Ground truth map. Classification results of (c) Scheme 1, (d) Scheme 2, (e) Scheme 3.
Table 1. List of three polarization data input schemes.

| Scheme | Parameters | Polarization Features |
|---|---|---|
| 1 | 4 | P0, PS, PD, PV |
| 2 | 6 | NonP0, T22, T33, coeT12, coeT13, coeT23 |
| 3 | 16 | P0, T11, T22, T33, H(T12), H(T13), H(T23), L(T12), L(T13), L(T23), V(T12), V(T13), V(T23), R(T12), R(T13), R(T23) |
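Whatever the scheme, the extracted per-pixel feature maps are stacked into a multi-channel image before being fed to the network. A hypothetical sketch (the function name, array layout, and dictionary keys are assumptions for illustration; the feature values themselves come from the paper's derivation):

```python
import numpy as np

SCHEME_1 = ["P0", "PS", "PD", "PV"]  # 4-parameter input (Table 1)

def stack_scheme(feature_maps, names):
    """Stack named per-pixel polarimetric feature maps (each H x W)
    into a channel-first array suitable as CNN input."""
    return np.stack([feature_maps[n] for n in names], axis=0)

# Placeholder 2x2 maps; real values are computed from the PolSAR data.
h, w = 2, 2
feats = {n: np.zeros((h, w)) for n in SCHEME_1}
x = stack_scheme(feats, SCHEME_1)  # shape (4, h, w)
```

The 6- and 16-channel inputs of Schemes 2 and 3 are built the same way, only with the longer feature lists from Table 1.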
Table 2. Sample distribution.

| Images | Nearshore Water | Seawater | Spartina Alterniflora | Tamarix | Reed | Tidal Flat | Suaeda Salsa |
|---|---|---|---|---|---|---|---|
| 20210914_1 | 500 | 400 | 1000 | 500 | 500 | 500 | 500 |
| 20210914_2 | 500 | 200 | 0 | 0 | 0 | 500 | 0 |
| 20211013 | 0 | 400 | 0 | 500 | 500 | 0 | 500 |
| Total | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 |
Table 3. The classification accuracy of the three polarization input schemes on AlexNet.

| Classification Accuracy (%) | Scheme 1 | Scheme 2 | Scheme 3 |
|---|---|---|---|
| Nearshore water | 83.4 | 96.8 | 100 |
| Seawater | 98.7 | 96.9 | 99.60 |
| Spartina alterniflora | 87.0 | 96.8 | 93.3 |
| Tamarix | 40.1 | 100 | 100 |
| Reed | 50.4 | 94.5 | 68.50 |
| Tidal flat | 61.8 | 49.3 | 44.6 |
| Suaeda salsa | 98.2 | 50.8 | 96.8 |
| Overall accuracy, run 1 (highest) | 74.23 | 83.59 | 86.11 |
| Overall accuracy, run 2 | 71.36 | 81.41 | 81.53 |
| Overall accuracy, run 3 | 70.41 | 77.83 | 77.04 |
| Overall accuracy, run 4 | 68 | 73.66 | 73.73 |
| Overall accuracy, run 5 | 67.84 | 68.87 | 71.99 |
| Average overall accuracy | 70.368 | 77.072 | 78.08 |
| Kappa coefficient | 0.6993 | 0.8085 | 0.8380 |
Table 4. The classification accuracy of the three polarization input schemes on VGG16.

| Classification Accuracy (%) | Scheme 1 | Scheme 2 | Scheme 3 |
|---|---|---|---|
| Nearshore water | 89.3 | 95.7 | 95.6 |
| Seawater | 99.4 | 97.7 | 99.7 |
| Spartina alterniflora | 87.6 | 96.6 | 95.9 |
| Tamarix | 40.2 | 98.5 | 100 |
| Reed | 26.1 | 93.8 | 44.7 |
| Tidal flat | 73.2 | 28.5 | 58.3 |
| Suaeda salsa | 100 | 66.2 | 94.1 |
| Overall accuracy, run 1 (highest) | 73.69 | 82.43 | 84.04 |
| Overall accuracy, run 2 | 72.8 | 82.21 | 83.57 |
| Overall accuracy, run 3 | 69.7 | 81.44 | 82.07 |
| Overall accuracy, run 4 | 68.66 | 79.44 | 81.54 |
| Overall accuracy, run 5 | 67.6 | 77.53 | 80.11 |
| Average overall accuracy | 70.49 | 80.61 | 82.266 |
| Kappa coefficient | 0.6930 | 0.7950 | 0.8138 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zhang, S.; Cui, L.; Dong, Z.; An, W. A Deep Learning Classification Scheme for PolSAR Image Based on Polarimetric Features. Remote Sens. 2024, 16, 1676. https://doi.org/10.3390/rs16101676

