Article

Hyperspectral Image Classification Based on Fusing S3-PCA, 2D-SSA and Random Patch Network

1 School of Computer, China West Normal University, Nanchong 637002, China
2 School of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China
3 The State Key Laboratory of Traction Power, Southwest Jiaotong University, Chengdu 610031, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(13), 3402; https://doi.org/10.3390/rs15133402
Submission received: 2 June 2023 / Revised: 29 June 2023 / Accepted: 3 July 2023 / Published: 4 July 2023

Abstract
Recently, the rapid development of deep learning has greatly improved the performance of image classification. However, a central problem in hyperspectral image (HSI) classification is spectral uncertainty, where spectral features alone cannot accurately and robustly identify a pixel point in a hyperspectral image. This paper presents a novel HSI classification network called MS-RPNet, i.e., multiscale superpixelwise RPNet, which combines superpixel-based S3-PCA with two-dimensional singular spectrum analysis (2D-SSA) based on the Random Patches Network (RPNet). The proposed framework not only takes advantage of the data-driven method, but also applies S3-PCA to efficiently exploit both global and local spectral knowledge at the superpixel level. Meanwhile, 2D-SSA is used for noise removal and spatial feature extraction. Then, the final features are obtained by random patch convolution and the other steps of the RPNet cascade structure. The layered extraction superimposes the spatial difference information into multi-scale spatial features, which complements the features of various land covers. Finally, the fused features are classified by SVM to obtain the classification results. The experimental results on several HSI datasets demonstrate the effectiveness and efficiency of MS-RPNet, which outperforms several current state-of-the-art methods.


1. Introduction

Hyperspectral images (HSIs) include tens to hundreds of wavelength bands with rich spectral and spatial information, which can reflect the material properties of land covers from different perspectives [1]; therefore, HSI has been increasingly used in major fields such as environmental monitoring [2], mineral exploration and analysis [3], and land classification [4]. However, the Hughes phenomenon appears due to the high dimensionality of HSI data and the limited number of labeled samples [5]. Beyond the useful spectral and spatial information, HSI data also contain redundancy and noise caused by environmental interference, sensor constraints and atmospheric effects. Therefore, how to effectively extract features and utilize the rich spectral information to achieve accurate classification results is a key issue in hyperspectral image classification.
Because of the highly redundant characteristics of HSI spectral bands, spectral feature extraction and dimension reduction are important prerequisites to attain a high-precision classification. In general, the dimensions of HSI data can be reduced in two ways: feature selection [6,7,8,9] and feature extraction [10,11]. Some classical statistical feature-extraction techniques have been developed in recent decades, such as principal component analysis (PCA) [12], linear discriminant analysis (LDA) [13], and maximum noise fraction (MNF) [14]. Although PCA has been widely applied in the field of unsupervised dimensionality reduction, it is often unable to extract useful local spectral information. There have been several improvements to PCA, such as structured covariance-PCA (SC-PCA) [15], segmented-PCA (SPCA) [16] and fold-PCA (FPCA) [17], which not only reduce the computational burden and memory usage, but also incorporate local spectral features. Meanwhile, only the basic spectral information of HSI is considered in many traditional classification methods, ignoring the spatial-domain information between pixels, which easily leads to unsmooth hyperspectral classification results [18]. Recently, superpixel segmentation has been gradually applied to the classification of hyperspectral images. Superpixel segmentation can be regarded as the process of dividing a spatial image into several homogeneous regions, which provides an effective way to exploit the spatial structure of HSI and can yield better results. Jiang et al. [19] proposed the superpixel principal component analysis (SuperPCA) approach, which applies principal component analysis to the homogeneous regions obtained from superpixel segmentation. Zhang et al. [20] proposed the S3-PCA method based on SuperPCA, which uses the local reconstruction of superpixels to filter the HSI and combines global PCA and local PCA to obtain global–local features.
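As a concrete illustration of the PCA-based spectral reduction discussed above, the following is a minimal numpy sketch (generic, not the authors' code; all function and variable names are ours):

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Reduce the spectral dimension of an HSI cube (H x W x B) with PCA.

    Returns an H x W x n_components cube of principal-component scores.
    """
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)  # pixels x bands
    X -= X.mean(axis=0)                         # centre each band
    # Eigen-decomposition of the B x B band covariance matrix.
    cov = X.T @ X / (X.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalue order
    top = eigvecs[:, ::-1][:, :n_components]    # leading components
    return (X @ top).reshape(h, w, n_components)

# Toy cube: 8 x 8 pixels, 20 highly correlated bands.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 8, 1))
cube = base * np.linspace(1, 2, 20) + 0.01 * rng.normal(size=(8, 8, 20))
reduced = pca_reduce(cube, 3)
print(reduced.shape)  # (8, 8, 3)
```

Because the toy bands are nearly proportional to one another, almost all of the variance is captured by the first component, mirroring the redundancy of real HSI bands described above.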
As the spatial resolution increases [21], the spectral separability of classes often decreases. Specifically, the rich information contained in high-resolution images may increase intra-class variability and decrease inter-class variability [22], which affects the classification accuracy. The spatial distribution in HSI is regular and contains abundant textural information, which can be combined with the spectral information extracted by the above methods to enhance the classification performance [23]. For the extraction of different types of spatial features, scholars have proposed morphological attribute profiles (MAPs) [24], extended MPs (EMPs) [25], extended MAPs (EMAPs) [26], and many other morphological profile (MP) extension methods. Subsequently, a technique named singular spectrum analysis (SSA) [27] was shown to facilitate HSI feature extraction and was successfully applied to the one-dimensional spectral domain (one-dimensional singular spectrum analysis, 1D-SSA); 1D-SSA based on singular value decomposition (SVD) outperforms other techniques in terms of classification [27]. Compared with PCA, SSA can preserve more spectral information, enabling more effective class separation. SSA can also be used in combination with other HSI classification methods, such as the Curvelet transform [28]. Although 1D-SSA can be applied to HSI analysis and can increase the classification precision [27], it only considers spectral correlation and ignores the relationships between pixels. As spatial properties can also improve the classification accuracy, Zabalza et al. [29] extended SSA to two dimensions to obtain 2D-SSA, which can effectively eliminate noise and improve the classification accuracy. However, PCA itself is less effective without being combined with spatial information, so Yan et al. [30] proposed a framework fusing PCA and 2D-SSA to extract features, which effectively fuses spectral and spatial features and achieves good classification results, even with small samples.
Recently, a number of deep learning approaches have been applied in the field of hyperspectral image classification, and typical deep neural network models include convolutional neural networks (CNNs) [31], stacked autoencoders (SAEs) [32], and deep belief networks (DBNs) [33]. Although these methods improve classification by pretraining networks, parameter fine-tuning and adaptation remain key challenges. Some new attention models have been proposed for HSI restoration and denoising tasks. For instance, in [34], a variational network for HSI-MSI fusion was proposed, which contains a degradation model and a data prior. The authors of [35] proposed a well-designed end-to-end deep learning framework for joint denoising and classification. In addition to attention models, since an HSI can be represented as a 3D tensor, tensor-based models are also applied to extract features and classify HSI. For instance, in [36], a novel multilayer sparsity-based tensor decomposition (MLSTD) was applied for low-rank tensor completion (LRTC), which aims to reveal the complexity of hierarchical knowledge with implicit sparsity attributes. Based on low-rank tensor completion, Zeng et al. [37] developed a new multimodal core tensor factorization (MCTF) method, which is expected to restore the data from few samples. Recently, an unsupervised deep tensor network (UDTN) [38] for HSI-MSI fusion was proposed, which integrates deep learning and tensor theory.
Additionally, a number of new approaches employ hierarchical feature extraction. For example, Chan et al. [39] proposed a hybrid PCA that extracts features based on hierarchical learning and logistic regression for scene classification. Specifically, PCA is employed to learn convolutional kernels from a set of patches, which are then used to extract convolutional features from different layers. Moreover, Xu et al. [40] proposed the Random Patches Network (RPNet), where random patches obtained from images are directly used as convolutional kernels without any training. It is not only multi-scale, but also effectively addresses the information loss problem when extracting hierarchical features. Other backbone networks, e.g., GANs, CapsNet and GCNs, undeniably perform well in learning spectral representations [41], but the insufficient utilization of spectral information is still a key issue. Transformers are a current state-of-the-art architecture that adopts a self-attention mechanism; however, they perform poorly in capturing local contextual relationships. Thus, Hong et al. [41] developed a novel transformer-based network architecture called SpectralFormer, which designs two modules enabling high-performance HSI classification. In addition, some other methods have also been proposed in recent years [42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57].
Since RPNet is primarily concerned with the extraction of deep spatial features, an improved framework is presented in this paper. Firstly, the PCA in the original network is replaced with superpixel-based S3-PCA, because PCA alone is less effective without spatial information: the data-preprocessing stage of the S3-PCA algorithm [20] uses nearest-neighbor pixels in the same superpixel block to reconstruct the data for each pixel, and then performs principal component analysis on each region and on the whole image to obtain local and global information. Secondly, the noise and the loss of spatial information during HSI acquisition affect the model accuracy to a certain extent; by additionally applying 2D-SSA, the noise can be eliminated and the spectral and spatial features combined effectively, thus improving classification accuracy. Finally, a mature SVM classifier is used to verify the robustness and anti-overfitting ability of the classification model under small-sample conditions. Therefore, a fusion of S3-PCA, 2D-SSA and RPNet is proposed, fully combining the advantages of the three algorithms.

2. Methods

2.1. Spectral–Spatial and SuperPCA (S3-PCA)

The conventional dimensionality-reduction methods in feature extraction usually perform global principal component analysis on the whole HSI, ignoring local features. However, hyperspectral images contain many homogeneous regions, and pixels of the same category are often located within homogeneous regions, so conventional dimensionality-reduction methods tend to ignore the differences between non-homogeneous regions. Inspired by this, Jiang et al. proposed a "divide and conquer" dimensionality-reduction method, SuperPCA [19]. However, SuperPCA and its variants only focus on local spatial information and ignore the overall structure, which results in less accurate feature extraction. Therefore, Zhang et al. proposed the S3-PCA approach based on SuperPCA: firstly, ERS superpixel segmentation is performed on the hyperspectral image X ∈ RM × N × B to obtain homogeneous regions Xk (1 ≤ k ≤ y, where y is the number of superpixels), and local spatial reconstruction is performed for each pixel in each superpixel block Xk. Then, the global PCA-based features, Hg, and the SuperPCA-based local features, Hl, are concatenated to obtain new features, H. Finally, the enlarged features are subjected to principal component analysis again to reduce their feature dimensionality.
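The global-plus-local structure described above can be sketched as follows. This is a simplified illustration in numpy (names are ours, and the superpixel-based local reconstruction filtering of [20] is omitted for brevity — only the global PCA, per-superpixel PCA, concatenation, and final PCA steps are shown):

```python
import numpy as np

def pca_scores(X, p):
    """Centre X (pixels x features) and project onto its top-p principal axes."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centred data are the PCA axes.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:p].T

def s3_pca(pixels, labels, p):
    """Sketch of S3-PCA: global PCA plus per-superpixel local PCA,
    concatenated and reduced again by PCA.

    pixels: N x B spectra; labels: N superpixel ids; p: components kept.
    Assumes each superpixel contains at least p pixels.
    """
    h_global = pca_scores(pixels, p)          # Hg: global features
    h_local = np.zeros_like(h_global)         # Hl: local features
    for k in np.unique(labels):
        idx = labels == k
        # Local PCA inside one homogeneous region.
        h_local[idx] = pca_scores(pixels[idx], p)
    h = np.hstack([h_global, h_local])        # concatenated global-local H
    return pca_scores(h, p)                   # final reduction back to p dims

# Toy example: 100 pixels, 12 bands, two superpixels of 50 pixels each.
rng = np.random.default_rng(1)
pixels = rng.normal(size=(100, 12))
labels = np.repeat([0, 1], 50)
features = s3_pca(pixels, labels, 4)
print(features.shape)  # (100, 4)
```

The final PCA step keeps the fused feature dimension at p, so the downstream classifier sees the same input size as with plain PCA while the features carry both global and region-specific structure.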

2.2. Two-Dimensional Singular Spectrum Analysis (2D-SSA)

(1) 2D embedding: a 2D image $M^{2D}$ of size $N_m \times N_n$ is written as the matrix [29]:

$$M^{2D} = \begin{pmatrix} p_{1,1} & p_{1,2} & \cdots & p_{1,N_n} \\ p_{2,1} & p_{2,2} & \cdots & p_{2,N_n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{N_m,1} & p_{N_m,2} & \cdots & p_{N_m,N_n} \end{pmatrix}$$

A two-dimensional window of size $L_m \times L_n$ is defined, where $L_m \in (1, N_m)$, $L_n \in (1, N_n)$, and the window at reference point $(i,j)$ is [29]:

$$W_{i,j} = \begin{pmatrix} p_{i,j} & p_{i,j+1} & \cdots & p_{i,j+L_n-1} \\ p_{i+1,j} & p_{i+1,j+1} & \cdots & p_{i+1,j+L_n-1} \\ \vdots & \vdots & \ddots & \vdots \\ p_{i+L_m-1,j} & p_{i+L_m-1,j+1} & \cdots & p_{i+L_m-1,j+L_n-1} \end{pmatrix}$$

The reference point ranges over $i \in [1, N_m - L_m + 1]$, $j \in [1, N_n - L_n + 1]$. For a given pixel coordinate $(i,j)$, the window is rearranged into a column vector $v^{col}_{i,j} = (p_{i,j}, p_{i,j+1}, \ldots, p_{i,j+L_n-1}, p_{i+1,j}, \ldots, p_{i+L_m-1,j+L_n-1})^T \in \mathbb{R}^{L_m L_n \times 1}$.

There are $(N_m - L_m + 1)(N_n - L_n + 1)$ possible window positions, which means the trajectory matrix of the image $M^{2D}$ is $X^{2D} \in \mathbb{R}^{L_m L_n \times (N_m - L_m + 1)(N_n - L_n + 1)}$; more specifically, $X^{2D} = (v^{col}_{1,1}, v^{col}_{1,2}, \ldots, v^{col}_{1,N_n-L_n+1}, v^{col}_{2,1}, \ldots, v^{col}_{N_m-L_m+1,N_n-L_n+1})$. The trajectory matrix $X^{2D}$ is called Hankel-by-Hankel (HbH) and is expressed as follows [29]:

$$X^{2D} = \begin{pmatrix} H_1 & H_2 & \cdots & H_{N_m-L_m+1} \\ H_2 & H_3 & \cdots & H_{N_m-L_m+2} \\ \vdots & \vdots & \ddots & \vdots \\ H_{L_m} & H_{L_m+1} & \cdots & H_{N_m} \end{pmatrix}$$

$$H_t = \begin{pmatrix} p_{t,1} & p_{t,2} & \cdots & p_{t,N_n-L_n+1} \\ p_{t,2} & p_{t,3} & \cdots & p_{t,N_n-L_n+2} \\ \vdots & \vdots & \ddots & \vdots \\ p_{t,L_n} & p_{t,L_n+1} & \cdots & p_{t,N_n} \end{pmatrix} \in \mathbb{R}^{L_n \times (N_n - L_n + 1)}$$

The HbH matrix $X^{2D}$ is a Hankel matrix of blocks, and each block $H_t$ is itself a Hankel matrix.
(2) SVD and grouping: the same steps as in 1D-SSA are used, except that the dimensions of the matrices involved become two-dimensional. Specifically, $K^{2D} = (N_m - L_m + 1)(N_n - L_n + 1)$ and $L^{2D} = L_m \times L_n$.
(3) Diagonal averaging: the grouped matrices $X^{2D}_m$ obtained in 2D-SSA may not be HbH. Consequently, they need to be projected back to HbH form by the two-step diagonal averaging shown in (5), applied first within each block and then between blocks [29]. For a matrix $(a_{ij})$ of size $L \times K$, with $L^* = \min(L, K)$, $K^* = \max(L, K)$ and $N = L + K - 1$:

$$y_v = \begin{cases} \dfrac{1}{v} \sum_{j=1}^{v} a_{j,v-j+1}, & 1 \le v < L^* \\[4pt] \dfrac{1}{L^*} \sum_{j=1}^{L^*} a_{j,v-j+1}, & L^* \le v \le K^* \\[4pt] \dfrac{1}{N-v+1} \sum_{j=v-K^*+1}^{N-K^*+1} a_{j,v-j+1}, & K^* < v \le N \end{cases}$$

The grouped matrix $X^{2D}_\omega$ is thereby transformed into a two-dimensional signal $Z^{2D}_\omega \in \mathbb{R}^{N_m \times N_n}$, which can be expressed as [29]:

$$Z^{2D}_\omega = \begin{pmatrix} z^{\omega}_{1,1} & z^{\omega}_{1,2} & \cdots & z^{\omega}_{1,N_n} \\ z^{\omega}_{2,1} & z^{\omega}_{2,2} & \cdots & z^{\omega}_{2,N_n} \\ \vdots & \vdots & \ddots & \vdots \\ z^{\omega}_{N_m,1} & z^{\omega}_{N_m,2} & \cdots & z^{\omega}_{N_m,N_n} \end{pmatrix}$$

Then, the original 2D image is reconstructed as $M^{2D} = Z^{2D}_1 + Z^{2D}_2 + \cdots + Z^{2D}_W = \sum_{\omega=1}^{W} Z^{2D}_\omega$.
In 2D-SSA, primary spatial trend information is included in the first decomposed component, and is therefore used for classification instead of the original image [11,19]. Like SSA, the original image is represented by a fixed number of components (EVG = 1), and the only parameter that influences performance is the window size L m × L n when embedding.
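The embed–SVD–reconstruct procedure above, restricted to the first component (EVG = 1) as used in this paper, can be sketched in a few lines of numpy. This is an illustrative implementation under our own naming, not the authors' code; the per-pixel averaging over overlapping windows is equivalent to the two-step diagonal averaging:

```python
import numpy as np

def ssa2d_first_component(img, lm, ln):
    """Sketch of 2D-SSA keeping only the first decomposed component.

    Builds the trajectory matrix from all lm x ln windows, takes the rank-1
    SVD term, and maps it back to an image by averaging the contributions of
    overlapping windows (equivalent to two-step diagonal averaging).
    """
    nm, nn = img.shape
    rows, cols = nm - lm + 1, nn - ln + 1
    # Each column of X is one vectorised lm x ln window.
    X = np.empty((lm * ln, rows * cols))
    for i in range(rows):
        for j in range(cols):
            X[:, i * cols + j] = img[i:i + lm, j:j + ln].ravel()
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    X1 = s[0] * np.outer(u[:, 0], vt[0])  # first elementary matrix
    # Hankelisation: average every window's contribution per pixel.
    out = np.zeros_like(img, dtype=np.float64)
    cnt = np.zeros_like(out)
    for i in range(rows):
        for j in range(cols):
            out[i:i + lm, j:j + ln] += X1[:, i * cols + j].reshape(lm, ln)
            cnt[i:i + lm, j:j + ln] += 1
    return out / cnt

# A smooth spatial trend plus noise: the first component recovers the trend.
rng = np.random.default_rng(2)
trend = np.outer(np.linspace(1, 3, 30), np.linspace(1, 3, 30))
noisy = trend + 0.05 * rng.normal(size=trend.shape)
denoised = ssa2d_first_component(noisy, 5, 5)
print(denoised.shape)  # (30, 30)
```

As the section notes, the only tuning parameter is the embedding window size $L_m \times L_n$; larger windows capture broader spatial trends at higher computational cost.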

2.3. Random Patches Network (RPNet)

RPNet [40] is an HSI classification model rooted in deep learning, which uses random patches as convolution kernels within a cascade structure. Firstly, the hyperspectral data X ∈ R^(rc×n) are processed by principal component analysis and downscaled to obtain X_p ∈ R^(rc×p); a whitening operation is then performed on X_p to make the variance in different bands similar and to reduce the correlation between bands [58]. Next, k pixels are randomly selected from X_Whiten ∈ R^(rc×p) to obtain k random patches of size w × w × p, which are convolved with X_Whiten to obtain k feature maps. This method combines shallow and deep features, which effectively mitigates the loss of information in the hierarchical feature-extraction process.

2.4. Proposed MS-RPNet Model

A central problem in hyperspectral image classification is spectral uncertainty, where spectral features alone cannot accurately and robustly identify a pixel point in a hyperspectral image. This motivates recent spectral–spatial classification methods to additionally consider spatial information, reducing the effect of spectral uncertainty as well as the noise-induced intra-class variation and inter-class similarity. This paper introduces a novel model called MS-RPNet (Figure 1), which uses 2D-SSA for noise removal and spatial feature extraction. Then, global and local features are separately extracted using superpixel-based S3-PCA, and the final features are obtained by random patch convolution and the other steps of the RPNet cascade structure. The layered extraction superimposes the spatial difference information into multi-scale spatial features, which complements the features of various land covers. Finally, the fused features are classified by SVM to obtain the classification results.

2.4.1. S3-PCA Domain Feature Extraction and Fusion with 2D-SSA

Let a hyperspectral cube be D ∈ R^(Dx×Dy×Dλ), where x_n = [x_n1, x_n2, …, x_nDλ]^T, n ∈ [1, N], is each pixel's spectral vector and the total number of pixels is N = DxDy. To avoid spectral distortion, the effective features of the homogeneous regions are extracted by the superpixel segmentation technique. To reduce the computational effort of superpixel segmentation, the first principal component of the HSI to be classified, denoted PC1 ∈ R^(Dx×Dy), is first extracted by PCA, and then the entropy rate superpixel segmentation (ERS) [31] algorithm is applied to the first principal component to generate homogeneous region blocks PC1 = ∪(k=1..S) Bk (with Bk ∩ Bg = ∅ for k ≠ g), where S is the superpixel number and Bk is the kth superpixel block. 2D-SSA is applied for noise removal and spatial feature extraction: first, an embedding window L ∈ R^(Lx×Ly) is created; then, the trajectory matrix T ∈ R^(m×n) is built, where m = Lx × Ly and n = (Dx − Lx + 1)(Dy − Ly + 1). For simplicity, we usually set Lx = Ly, and then SVD, grouping and diagonal averaging are used to obtain the reconstructed image Z. Based on the homogeneous regions formed by segmentation, the reconstructed image Z is processed by the superpixel-based S3-PCA method to reduce the data dimensionality and obtain global–local spatial–spectral features H = [Hg, Hl] ∈ R^(DxDy×p) (where p is the number of principal components). The combined application of 2D-SSA and S3-PCA is thus a useful way to suppress noise and strengthen the spectral–spatial discrimination. Compared with the original HSI, the processed image has richer spectral diversity features and lower feature dimensionality.

2.4.2. Convolution with Random Patches

A whitening operation is performed on the reduced-dimensional data H, which makes the variance in different bands similar and reduces the correlation between bands [39]. Then, k pixels are randomly selected from the whitened data, and a patch of size w × w × p is taken around each pixel, i.e., k random patches are obtained. For pixels at the edges, the vacant neighboring pixels are filled by mirroring. These k random patches P_1, P_2, …, P_k are used as convolution kernels, and the convolution between H_Whitening and the random patches yields k feature maps: I_i = Σ(j=1..p) H_Whitening(j) * P_i(j), i = 1, 2, …, k, where * denotes the 2D convolution operation. An activation function is applied to improve the sparsity of the features: f(I) = max(0, I − M), where M denotes the mean vector m_2 ∈ R^(DxDy×1) replicated k times into a matrix in two-dimensional space. Eventually, the features of the first layer are expressed as Z^(1) = f(I) ∈ R^(DxDy×k). Assuming Z^(l−1) ∈ R^(DxDy×k) are the features of layer (l − 1), they are input again to extract the lth-layer features Z^(l), from which the features of the different layers can be obtained. Finally, all the features are passed through an SVM classifier (with RBF kernel) to predict the category labels and obtain the classification result map. The overall procedure is given in Algorithm 1.
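One layer of the random-patch convolution with the mean-shifted ReLU described above can be sketched as follows. This is our own simplified numpy illustration (it assumes the input cube H has already been reduced and whitened, uses square odd-sized patches, and handles edges by mirror padding as the text describes):

```python
import numpy as np

def random_patch_features(H, k, w, rng):
    """Sketch of one RPNet-style layer: k random w x w x p patches drawn
    from H (rows x cols x p) act as fixed convolution kernels over H.
    """
    r, c, p = H.shape
    pad = w // 2
    # Mirror padding so edge pixels have full neighbourhoods.
    Hp = np.pad(H, ((pad, pad), (pad, pad), (0, 0)), mode='reflect')
    # Centres of the k random patches (in padded coordinates).
    centres = rng.integers(0, [r, c], size=(k, 2)) + pad
    maps = np.empty((r, c, k))
    for n, (ci, cj) in enumerate(centres):
        P = Hp[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1, :]  # w x w x p
        out = np.zeros((r, c))
        for i in range(r):
            for j in range(c):
                # Correlate the patch with the local window, summed over bands.
                out[i, j] = np.sum(Hp[i:i + w, j:j + w, :] * P)
        maps[:, :, n] = out
    # Activation f(I) = max(0, I - M), with M the per-map mean.
    return np.maximum(0.0, maps - maps.mean(axis=(0, 1)))

rng = np.random.default_rng(3)
H = rng.normal(size=(16, 16, 4))      # stand-in for whitened S3-PCA features
Z1 = random_patch_features(H, k=5, w=3, rng=rng)
print(Z1.shape)  # (16, 16, 5)
```

Feeding Z1 back into the same function would produce the second-layer features, which is the cascade structure the section describes.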
Algorithm 1. The proposed hyperspectral image classification algorithm.
Input: HSI image D, principal component number (PC_num), superpixel number (Pixel_num), layer number (Layernum).
The first layer:
(1) Apply 2D-SSA to D for spatial feature extraction and noise removal, obtaining the reconstructed image Z.
(2) Apply a first principal component analysis to Z to obtain PC1.
(3) Apply the superpixel segmentation algorithm ERS to PC1 to divide it into multiple homogeneous regions.
(4) Apply S3-PCA to obtain the global–local spectral–spatial feature H.
(5) Extract k random patches and perform the convolution operation to obtain convolutional features.
The other layers L (L ≤ Layernum):
(6) Update the matrix while repeating steps 2–5 to obtain the features C_L of each layer.
(7) Combine C_1~C_L with the raw spectral data to form the final features and normalize them.
(8) Obtain the final classification result by SVM.
Output: Classification accuracy and classification results.

3. Experiments

To test the feasibility and validity of this approach, we chose three datasets as test cases, and PCA, SuperPCA, S3-PCA, PCA-2D-SSA, SuperPCA-2D-SSA, RPNet-5 [40], S3-PCA-RPNet, DMLSR [59], LeNet [40] and SSFTT [60] were used as the control groups. OA is the percentage of correctly classified pixels, AA is the average of the per-class accuracies, and the Kappa coefficient, computed from the confusion matrix, measures the agreement between the classification result and the ground truth corrected for chance. The experimental environment was Windows 10 with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz, 8 GB of RAM, and an NVIDIA GeForce MX150 graphics card.
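The three metrics can be computed directly from the confusion matrix; a minimal sketch (our own helper, not tied to any particular toolkit):

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Compute OA, AA and the Kappa coefficient from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                               # rows: truth, cols: prediction
    total = cm.sum()
    oa = np.trace(cm) / total                       # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))      # mean per-class accuracy
    # Expected agreement by chance, from row and column marginals.
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

# Toy 3-class example.
y_true = [0, 0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 0, 1, 1, 1, 2, 2, 0]
oa, aa, kappa = classification_metrics(y_true, y_pred, 3)
print(round(oa, 3), round(aa, 3))  # 0.75 0.778
```

Kappa is bounded above by OA and penalizes classifiers whose apparent accuracy could largely arise from imbalanced class priors.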

3.1. Introduction of Datasets

All experiments were conducted on the Indian Pines dataset, the KSC dataset and the Pavia University dataset. The real feature distribution and the first principal component of each original HSI are shown in Figure 2, Figure 3 and Figure 4. The Indian Pines dataset uses 200 bands after removing the bands covering the water-absorption region, and contains 16 types of land cover. As for the KSC dataset, discriminating land cover by its environment is difficult due to the similarity of the spectral signatures of certain vegetation types; this is why the legend regions in the ground-truth map of Figure 4b appear fuzzier than those in Figure 2b and Figure 3b. More specific information can be found in Table 1; the training and test sets are shown in Table 2, Table 3 and Table 4.

3.2. Parameter Analysis

In this experiment, several parameters influence the classification results to different degrees. The random patch size w and number k of the RPNet model were selected according to the literature [40], i.e., w = 21 and k = 20, with L_x × L_y fixed at 10 × 10; the remaining parameter settings are described in the following section. Each experiment was repeated 10 times, and the mean was taken as the end result.
(1)
Analyze the effect of the parameter PC_num (number of principal components) on the experiment. PC_num was varied over 9 cases (PC_num = 1, 2, 3, 4, 5, 6, 7, 8, 9), and the resulting classification accuracy is plotted in Figure 5. For the Indian Pines dataset, the variation in PC_num has little effect on the overall classification accuracy, and the principal component dimension is taken as 7. For the Pavia University dataset, the change in the principal component dimension also has a small impact on the overall accuracy; since a low-dimensional matrix is more beneficial to the subsequent computation of the model, PC_num = 5. For the KSC dataset, the change in the principal component dimension causes the overall accuracy to fluctuate, with the curve tending to increase and then level off, and PC_num = 8.
(2)
Analyze the effect of the parameter Pixel_num (superpixel number) on the experiment. PC_num is fixed at the optimal value found in (1). The superpixel segmentation maps obtained for six values of Pixel_num (Pixel_num = 25, 50, 75, 100, 125, 150) are shown in Figure 6, Figure 7 and Figure 8. The number of superpixels determines the granularity of the segmentation and thus the classification results. A larger number of superpixels produces finer-grained segmentation that better preserves detailed image information but may retain redundant information, while a smaller number produces coarser segmentation that may lose some details; the choice therefore depends on the specific application requirements and image characteristics. From Figure 5, it is obvious that different superpixel numbers make a greater difference to the classification accuracy for the first two datasets, which further indicates that introducing superpixel segmentation helps to improve classification accuracy. For the Pavia University dataset, the effect of the number of superpixels on the overall accuracy is not significant. Since increasing the number of superpixels increases the computational complexity of the algorithm, the parameter is set to Pixel_num = 75, which achieves a high overall accuracy on the validation set of each dataset.
(3)
Analyze the effect of the parameter Layernum on the experiment. PC_num and Pixel_num are fixed at the optimal values found in (1) and (2). The classification results are shown in Figure 9, Figure 10 and Figure 11. The overall accuracy gradually increases and then stabilizes as the layer depth increases, which indicates that the random patches extracted from the HSI contain useful information. However, an architecture that is too deep not only fails to improve the accuracy, but also increases the computational complexity. According to these results, the number of layers is taken as 3, 3 and 5 for the three datasets, respectively.

4. Discussion

This section compares the improvements proposed in this paper with PCA, SuperPCA, S3-PCA, PCA-2D-SSA, SuperPCA-2D-SSA, RPNet-5 [40], S3-PCA-RPNet, DMLSR [59], LeNet [40] and SSFTT [60], assessing the classification accuracy obtained with support vector machines on the three datasets. The classification accuracies are plotted in Figure 12, and the classification maps are shown in Figure 13, Figure 14 and Figure 15. From the control results in Table 5, Table 6 and Table 7, PCA-2D-SSA and SuperPCA-2D-SSA consistently provide better classification results than PCA and SuperPCA alone, while S3-PCA achieves higher classification accuracy than PCA and SuperPCA thanks to its superpixel-based local reconstruction. This motivates adding S3-PCA to the RPNet model to achieve higher classification accuracy. In contrast, the lack of spatial information leads PCA and SuperPCA to lower classification accuracy among the benchmark methods. The classification maps in Figure 13, Figure 14 and Figure 15 show that misclassification and noise occur when only spectral features are used, indicating that the combination of spatial and spectral features is of great importance to HSI classification. Therefore, the S3-PCA-2D-SSA strategy used in this paper can fuse the advantages of both, and consistently obtains higher accuracy by utilizing local–global spectral–spatial features while suppressing data noise. Compared with the original RPNet, the addition of S3-PCA-2D-SSA achieves the best OA on all three datasets.
For the Indian Pines dataset, the OA is improved by 1.66%; for the Pavia University dataset, the OA reaches 99.76%; and for the KSC dataset, the OA is improved by 1.68%, owing to the introduction of superpixels, while 2D-SSA allows the network not only to extract shallow and deep features, but also to effectively utilize spectral–spatial features, reducing information redundancy and loss. In the experiments, LeNet [40] and SSFTT [60] are highly accurate competitive approaches on the Indian Pines and Pavia University datasets. However, their shortcoming is the over-smoothing phenomenon, especially on the KSC dataset, since it is difficult to distinguish land covers with the similar spectral characteristics of some vegetation types. Therefore, the proposed model outperforms the others in terms of validity. Finally, regarding the complexity of the model, it is undeniable that its time complexity is higher than that of the other control-group algorithms, but its classification performance is better. It is worth noting that superpixel-based analysis and global and local feature extraction take up part of the execution time. Thus, we will research how to adaptively obtain the dimensionality-reduction and superpixel-number parameters, and also attempt to establish a lightweight network that reduces the network complexity while maintaining its performance.

5. Conclusions

In this paper, a fusion algorithm based on S3-PCA, 2D-SSA and RPNet is presented, in which global and local spectral features are sufficiently and separately extracted using superpixel-based S3-PCA, while noise removal and spatial feature extraction are carried out by 2D-SSA. Then, the spectral–spatial features are integrated into the cascade structure of RPNet to achieve shallow and deep convolution and remove redundant fusion information. The layered extraction superimposes the spatial difference information into multi-scale spatial features, which complements the features of various land covers. It is experimentally verified that the improved method has a higher overall classification accuracy than the related comparison methods on the three open-source datasets. However, it should be noted that the S3-PCA dimensionality and superpixel-number parameters are tuned through a large number of experiments, which increases the computational cost. Thus, how to adaptively obtain these parameters and establish a lightweight network will be explored to reduce the computational cost.

Author Contributions

Conceptualization, T.W. and H.C.; methodology, T.W. and H.C.; software, T.C.; validation, T.W.; resources, H.C.; data curation, T.W.; writing—original draft preparation, T.W. and H.C.; writing—review and editing, T.C. and W.D.; visualization, T.W.; supervision, T.W.; project administration, T.C.; funding acquisition, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Sichuan Province under Grant 2022NSFSC0536, the Project of Wenzhou Key Laboratory Foundation, China under Grant 2021HZSY0071, the Doctoral Initiation Program of China West Normal University under Grant 22kE018.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Camps-Valls, G.; Tuia, D.; Bruzzone, L.; Benediktsson, J.A. Advances in Hyperspectral Image Classification: Earth Monitoring with Statistical Learning Methods. IEEE Signal Process. Mag. 2013, 31, 45–54.
  2. Matthews, M.W.; Bernard, S.; Evers-King, H.; Lain, L.R. Distinguishing cyanobacteria from algae in optically complex inland waters using a hyperspectral radiative transfer inversion algorithm. Remote Sens. Environ. 2020, 248, 111981.
  3. Carrino, T.A.; Crósta, A.P.; Toledo, C.L.B.; Silva, A.M. Hyperspectral remote sensing applied to mineral exploration in southern Peru: A multiple data integration approach in the Chapi Chiara gold prospect. Int. J. Appl. Earth Obs. Geoinf. 2018, 64, 287–300.
  4. Tong, X.Y.; Xia, G.S.; Lu, Q.; Shen, H.; Li, S.; You, S.; Zhang, L. Land-cover classification with high-resolution remote sensing images using transferable deep models. Remote Sens. Environ. 2020, 237, 111322.
  5. Ma, W.; Gong, C.; Hu, Y.; Meng, P.; Xu, F. The Hughes phenomenon in hyperspectral classification based on the ground spectrum of grasslands in the region around Qinghai Lake. In International Symposium on Photoelectronic Detection and Imaging 2013: Imaging Spectrometer Technologies and Applications; SPIE: Bellingham, WA, USA, 2013; Volume 8910, pp. 363–373.
  6. Du, Q.; Yang, H. Similarity-Based Unsupervised Band Selection for Hyperspectral Image Analysis. IEEE Geosci. Remote Sens. Lett. 2008, 5, 564–568.
  7. Yang, H.; Du, Q.; Chen, G. Unsupervised hyperspectral band selection using graphics processing units. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 660–668.
  8. Wang, Q.; Lin, J.; Yuan, Y. Salient Band Selection for Hyperspectral Image Classification via Manifold Ranking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1279–1289.
  9. Jia, S.; Tang, G.; Zhu, J.; Li, Q. A Novel Ranking-Based Clustering Approach for Hyperspectral Band Selection. IEEE Trans. Geosci. Remote Sens. 2015, 54, 88–102.
  10. Bruce, L.; Koger, C.; Li, J. Dimensionality reduction of hyperspectral data using discrete wavelet transform feature extraction. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2331–2338.
  11. Zhao, W.; Du, S. Spectral–Spatial Feature Extraction for Hyperspectral Image Classification: A Dimension Reduction and Deep Learning Approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554.
  12. Prasad, S.; Bruce, L.M. Limitations of Principal Components Analysis for Hyperspectral Target Recognition. IEEE Geosci. Remote Sens. Lett. 2008, 5, 625–629.
  13. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of Hyperspectral Images With Regularized Linear Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873.
  14. Lixin, G.; Weixin, X.; Jihong, P. Segmented minimum noise fraction transformation for efficient feature extraction of hyperspectral images. Pattern Recognit. 2015, 48, 3216–3226.
  15. Zabalza, J.; Ren, J.; Ren, J.; Liu, Z.; Marshall, S. Structured covariance principal component analysis for real-time onsite feature extraction and dimensionality reduction in hyperspectral imaging. Appl. Opt. 2014, 53, 4440–4449.
  16. Jia, X.; Richards, J.A. Segmented principal components transformation for efficient hyperspectral remote-sensing image display and classification. IEEE Trans. Geosci. Remote Sens. 1999, 37, 538–542.
  17. Zabalza, J.; Ren, J.; Yang, M.; Zhang, Y.; Wang, J.; Marshall, S.; Han, J. Novel Folded-PCA for improved feature extraction and data reduction with hyperspectral imaging and SAR in remote sensing. ISPRS J. Photogramm. Remote Sens. 2014, 93, 112–122.
  18. Zhang, H.; Li, J.; Huang, Y.; Zhang, L. A Nonlocal Weighted Joint Sparse Representation Classification Method for Hyperspectral Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 2056–2065.
  19. Jiang, J.; Ma, J.; Chen, C.; Wang, Z.; Cai, Z.; Wang, L. SuperPCA: A Superpixelwise PCA Approach for Unsupervised Feature Extraction of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4581–4593.
  20. Zhang, X.; Jiang, X.; Jiang, J.; Zhang, Y.; Liu, X.; Cai, Z. Spectral–Spatial and Superpixelwise PCA for Unsupervised Feature Extraction of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5502210.
  21. Donoho, D.L. High-dimensional data analysis: The curses and blessings of dimensionality. AMS Math Chall. Lect. 2000, 1, 32.
  22. Zhang, L.; Zhang, L.; Tao, D.; Huang, X. On combining multiple features for hyperspectral remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2011, 50, 879–893.
  23. Fauvel, M.; Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J.; Tilton, J.C. Advances in spectral-spatial classification of hyperspectral images. Proc. IEEE 2013, 101, 652–675.
  24. Bao, R.; Xia, J.; Mura, M.D.; Du, P.; Chanussot, J.; Ren, J. Combining Morphological Attribute Profiles via an Ensemble Method for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2016, 13, 359–363.
  25. Gu, Y.; Liu, T.; Jia, X.; Benediktsson, J.A.; Chanussot, J. Nonlinear Multiple Kernel Learning With Multiple-Structure-Element Extended Morphological Profiles for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3235–3247.
  26. Xia, J.; Mura, M.D.; Chanussot, J.; Du, P.; He, X. Random Subspace Ensembles for Hyperspectral Image Classification With Extended Morphological Attribute Profiles. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4768–4786.
  27. Zabalza, J.; Ren, J.; Wang, Z.; Marshall, S.; Wang, J. Singular Spectrum Analysis for Effective Feature Extraction in Hyperspectral Imaging. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1886–1890.
  28. Qiao, T.; Ren, J.; Wang, Z.; Zabalza, J.; Sun, M.; Zhao, H.; Li, S.; Benediktsson, J.A.; Dai, Q.; Marshall, S. Effective Denoising and Classification of Hyperspectral Images Using Curvelet Transform and Singular Spectrum Analysis. IEEE Trans. Geosci. Remote Sens. 2016, 55, 119–133.
  29. Zabalza, J.; Ren, J.; Zheng, J.; Han, J.; Zhao, H.; Li, S.; Marshall, S. Novel Two-Dimensional Singular Spectrum Analysis for Effective Feature Extraction and Data Classification in Hyperspectral Imaging. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4418–4433.
  30. Yan, Y.; Ren, J.; Liu, Q.; Zhao, H.; Sun, H.; Zabalza, J. PCA-Domain Fused Singular Spectral Analysis for Fast and Noise-Robust Spectral–Spatial Feature Mining in Hyperspectral Classification. IEEE Geosci. Remote Sens. Lett. 2021, 20, 5505405.
  31. Fukushima, K. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern. 1980, 36, 193–202.
  32. Bengio, Y.; Lamblin, P.; Popovici, D.; Larochelle, H. Greedy layer-wise training of deep networks. In Proceedings of the Advances in Neural Information Processing Systems 19 (NIPS 2006), Vancouver, BC, Canada, 4–7 December 2006.
  33. Hinton, G.E.; Osindero, S.; Teh, Y.-W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554.
  34. Yang, J.; Xiao, L.; Zhao, Y.Q.; Chan, J.C. Variational regularization network with attentive deep prior for hyperspectral–multispectral image fusion. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5508817.
  35. Li, X.; Ding, M.; Gu, Y.; Pižurica, A. An End-to-End Framework for Joint Denoising and Classification of Hyperspectral Images. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–15.
  36. Xue, J.; Zhao, Y.; Huang, S.; Liao, W.; Chan, J.C.-W.; Kong, S.G. Multilayer Sparsity-Based Tensor Decomposition for Low-Rank Tensor Completion. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6916–6930.
  37. Zeng, H.; Xue, J.; Luong, H.Q.; Philips, W. Multimodal Core Tensor Factorization and Its Applications to Low-Rank Tensor Completion. IEEE Trans. Multimed. 2022, 1–15.
  38. Yang, J.; Xiao, L.; Zhao, Y.-Q.; Chan, J.C.-W. Unsupervised Deep Tensor Network for Hyperspectral–Multispectral Image Fusion. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–15.
  39. Chen, Y.; Zhao, X.; Jia, X. Spectral–Spatial Classification of Hyperspectral Data Based on Deep Belief Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392.
  40. Xu, Y.; Du, B.; Zhang, F.; Zhang, L. Hyperspectral image classification via a random patches network. ISPRS J. Photogramm. Remote Sens. 2018, 142, 344–357.
  41. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking Hyperspectral Image Classification With Transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5518615.
  42. Zhou, X.; Cai, X.; Zhang, H.; Zhang, Z.; Jin, T.; Chen, H.; Deng, W. Multi-strategy competitive-cooperative co-evolutionary algorithm and its application. Inf. Sci. 2023, 635, 328–344.
  43. Li, M.; Zhang, W.; Hu, B.; Kang, J.; Wang, Y.; Lu, S. Automatic Assessment of Depression and Anxiety through Encoding Pupil-wave from HCI in VR Scenes. ACM Trans. Multimed. Comput. Commun. Appl. 2022.
  44. Sun, Q.; Zhang, M.; Zhou, L.; Garme, K.; Burman, M. A machine learning-based method for prediction of ship performance in ice: Part I. ice resistance. Mar. Struct. 2022, 83, 103181.
  45. Duan, Z.; Song, P.; Yang, C.; Deng, L.; Jiang, Y.; Deng, F.; Jiang, X.; Chen, Y.; Yang, G.; Ma, Y.; et al. The impact of hyperglycaemic crisis episodes on long-term outcomes for inpatients presenting with acute organ injury: A prospective, multicentre follow-up study. Front. Endocrinol. 2022, 13, 1057089.
  46. Xie, C.; Zhou, L.; Ding, S.; Liu, R.; Zheng, S. Experimental and numerical investigation on self-propulsion performance of polar merchant ship in brash ice channel. Ocean Eng. 2023, 269, 113424.
  47. Chen, T.; Song, P.; He, M.; Rui, S.; Duan, X.; Ma, Y.; Armstrong, D.G.; Deng, W. Sphingosine-1-phosphate derived from PRP-Exos promotes angiogenesis in diabetic wound healing via the S1PR1/AKT/FN1 signalling pathway. Burn. Trauma 2023, 11, tkad003.
  48. Ren, Z.; Zhen, X.; Jiang, Z.; Gao, Z.; Li, Y.; Shi, W. Underactuated control and analysis of single blade installation using a jackup installation vessel and active tugger line force control. Mar. Struct. 2023, 88, 103338.
  49. Chen, M.; Shao, H.; Dou, H.; Li, W.; Liu, B. Data Augmentation and Intelligent Fault Diagnosis of Planetary Gearbox Using ILoFGAN Under Extremely Limited Samples. IEEE Trans. Reliab. 2022, 1–9.
  50. Song, Y.; Zhao, G.; Zhang, B.; Chen, H.; Deng, W.; Deng, W. An enhanced distributed differential evolution algorithm for portfolio optimization problems. Eng. Appl. Artif. Intell. 2023, 121, 106004.
  51. Yan, S.; Shao, H.; Min, Z.; Peng, J.; Cai, B.; Liu, B. FGDAE: A new machinery anomaly detection method towards complex operating conditions. Reliab. Eng. Syst. Saf. 2023, 236, 109319.
  52. Li, M.; Zhang, J.; Song, J.; Li, Z.; Lu, S. A Clinical-Oriented Non-Severe Depression Diagnosis Method Based on Cognitive Behavior of Emotional Conflict. IEEE Trans. Comput. Soc. Syst. 2022, 10, 131–141.
  53. Cai, J.; Ding, S.; Zhang, Q.; Liu, R.; Zeng, D.; Zhou, L. Broken ice circumferential crack estimation via image techniques. Ocean Eng. 2022, 259, 111735.
  54. Yu, Y.; Tang, K.; Liu, Y. A Fine-Tuning Based Approach for Daily Activity Recognition between Smart Homes. Appl. Sci. 2023, 13, 5706.
  55. Lin, J.; Shao, H.; Zhou, X.; Cai, B.; Liu, B. Generalized MAML for few-shot cross-domain fault diagnosis of bearing driven by heterogeneous signals. Expert Syst. Appl. 2023, 230, 120696.
  56. Huang, C.; Zhou, X.; Ran, X.; Wang, J.; Chen, H.; Deng, W. Adaptive cylinder vector particle swarm optimization with differential evolution for UAV path planning. Eng. Appl. Artif. Intell. 2023, 121, 105942.
  57. Chen, X.; Shao, H.; Xiao, Y.; Yan, S.; Cai, B.; Liu, B. Collaborative fault diagnosis of rotating machinery via dual adversarial guided unsupervised multi-domain adaptation network. Mech. Syst. Signal Process. 2023, 198, 1104270.
  58. Caywood, M.S.; Willmore, B.; Tolhurst, D.J. Independent Components of Color Natural Scenes Resemble V1 Neurons in Their Spatial and Color Tuning. J. Neurophysiol. 2004, 91, 2859–2873.
  59. Zhang, Y.; Li, W.; Li, H.-C.; Tao, R.; Du, Q. Discriminative Marginalized Least-Squares Regression for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3148–3161.
  60. Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral–spatial feature tokenization transformer for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5522214.
Figure 1. The flow chart of the proposed MS-RPNet model.
Figure 2. Indian Pines dataset: (a) ground-truth map; (b) the first principal component; (c) legend.
Figure 3. Pavia University dataset: (a) ground-truth map; (b) the first principal component; (c) legend.
Figure 4. KSC dataset: (a) ground-truth map; (b) the first principal component; (c) legend.
Figure 5. The overall accuracy with different parameters: (a) number of principal components; (b) number of superpixels; (c) number of layers.
Figure 6. Segmentation maps with different numbers of superpixels for the Indian Pines dataset: (a) pixel_num = 25; (b) pixel_num = 50; (c) pixel_num = 75; (d) pixel_num = 100; (e) pixel_num = 125; (f) pixel_num = 150.
Figure 7. Segmentation maps with different numbers of superpixels for the Pavia University dataset: (a) pixel_num = 25; (b) pixel_num = 50; (c) pixel_num = 75; (d) pixel_num = 100; (e) pixel_num = 125; (f) pixel_num = 150.
Figure 8. Segmentation maps with different numbers of superpixels for the KSC dataset: (a) pixel_num = 25; (b) pixel_num = 50; (c) pixel_num = 75; (d) pixel_num = 100; (e) pixel_num = 125; (f) pixel_num = 150.
Figure 9. The classification maps with different numbers of layers for the Indian Pines dataset: (a) layernum = 1; (b) layernum = 3; (c) layernum = 5; (d) layernum = 7; (e) layernum = 9; (f) layernum = 11.
Figure 10. The classification maps with different numbers of layers for the Pavia University dataset: (a) layernum = 1; (b) layernum = 3; (c) layernum = 5; (d) layernum = 7; (e) layernum = 9; (f) layernum = 11.
Figure 11. The classification maps with different numbers of layers for the KSC dataset: (a) layernum = 1; (b) layernum = 3; (c) layernum = 5; (d) layernum = 7; (e) layernum = 9; (f) layernum = 11.
Figure 12. The comparison of different methods’ accuracy: (a) Indian Pines dataset; (b) Pavia University dataset; (c) KSC dataset.
Figure 13. The classification maps with different methods for the Indian Pines dataset: (a) truth; (b) PCA; (c) SuperPCA; (d) S3-PCA; (e) PCA-2D-SSA; (f) SuperPCA-2D-SSA; (g) S3-PCA-RPNet; (h) RPNet-5; (i) DMLSR; (j) LeNet; (k) SSFTT; (l) Proposed.
Figure 14. The classification maps with different methods for the Pavia University dataset: (a) truth; (b) PCA; (c) SuperPCA; (d) S3-PCA; (e) PCA-2D-SSA; (f) SuperPCA-2D-SSA; (g) S3-PCA-RPNet; (h) RPNet-5; (i) DMLSR; (j) LeNet; (k) SSFTT; (l) Proposed.
Figure 15. The classification maps with different methods for the KSC dataset: (a) truth; (b) PCA; (c) SuperPCA; (d) S3-PCA; (e) PCA-2D-SSA; (f) SuperPCA-2D-SSA; (g) S3-PCA-RPNet; (h) RPNet-5; (i) DMLSR; (j) LeNet; (k) SSFTT; (l) Proposed.
Table 1. Information about the three data sets.

| Related Information | Indian Pines | Pavia University | KSC |
|---|---|---|---|
| Sensor | AVIRIS | ROSIS | AVIRIS |
| Size (pixels) | 145 × 145 | 610 × 340 | 512 × 614 |
| Bands | 200 | 103 | 176 |
| Classes | 16 | 9 | 13 |
| Spatial resolution (m) | 20 | 1.3 | 18 |
| Spectral wavelength (µm) | 0.4–2.45 | 0.43–0.86 | 0.4–2.5 |
Table 2. Number of training and test samples used in the Indian Pines dataset.

| Class Number | Class Name | Training | Test |
|---|---|---|---|
| 1 | Alfalfa | 30 | 16 |
| 2 | Corn–notill | 150 | 1278 |
| 3 | Corn–mintill | 150 | 680 |
| 4 | Corn | 100 | 137 |
| 5 | Grass–pasture | 150 | 333 |
| 6 | Grass–trees | 150 | 580 |
| 7 | Grass–pasture–mowed | 20 | 8 |
| 8 | Hay–windrowed | 150 | 328 |
| 9 | Oats | 15 | 5 |
| 10 | Soybean–notill | 150 | 822 |
| 11 | Soybean–mintill | 150 | 2305 |
| 12 | Soybean–clean | 150 | 443 |
| 13 | Wheat | 150 | 55 |
| 14 | Woods | 150 | 1115 |
| 15 | Buildings–Grass–Trees–Drives | 50 | 336 |
| 16 | Stone–Steel–Towers | 50 | 43 |
| Total | | 1765 | 8484 |
Table 3. Number of training and test samples used in the Pavia University dataset.

| Class Number | Class Name | Training | Test |
|---|---|---|---|
| 1 | Asphalt | 548 | 6083 |
| 2 | Meadows | 540 | 18,109 |
| 3 | Gravel | 392 | 1707 |
| 4 | Trees | 542 | 2522 |
| 5 | Metal sheets | 256 | 1089 |
| 6 | Bare soil | 532 | 4497 |
| 7 | Bitumen | 375 | 955 |
| 8 | Bricks | 514 | 3168 |
| 9 | Shadows | 231 | 716 |
| Total | | 3930 | 38,846 |
Table 4. Number of training and test samples used in the KSC dataset.

| Class Number | Class Name | Training | Test |
|---|---|---|---|
| 1 | Scrub | 33 | 728 |
| 2 | Willow swamp | 23 | 220 |
| 3 | CP hammock | 24 | 232 |
| 4 | CP/Oak | 24 | 228 |
| 5 | Slash pine | 15 | 146 |
| 6 | Oak/Broadleaf | 22 | 207 |
| 7 | Hardwood swamp | 9 | 96 |
| 8 | Graminoid marsh | 38 | 393 |
| 9 | Spartina marsh | 51 | 469 |
| 10 | Cattail marsh | 39 | 365 |
| 11 | Salt marsh | 41 | 378 |
| 12 | Mud flats | 49 | 454 |
| 13 | Water | 91 | 836 |
| Total | | 459 | 4752 |
Table 5. Classification result of the Indian Pines dataset.

| Class | PCA | SuperPCA | S3-PCA | PCA-2D-SSA | SuperPCA-2D-SSA | RPNet-5 | S3-PCA-RPNet | DMLSR | LeNet | SSFTT | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 87.50 | 87.50 | 100 | 93.75 | 100 | 100 | 100 | 82.14 | 100 | 95.12 | 87.50 |
| 2 | 75.04 | 89.36 | 92.49 | 87.87 | 94.91 | 96.48 | 97.97 | 85.98 | 92.78 | 97.57 | 97.10 |
| 3 | 79.85 | 93.24 | 96.18 | 93.53 | 88.09 | 98.38 | 97.94 | 82.93 | 97.98 | 96.94 | 98.53 |
| 4 | 78.10 | 92.70 | 97.81 | 94.89 | 97.08 | 97.82 | 100 | 77.62 | 99.48 | 97.10 | 100 |
| 5 | 94.59 | 98.80 | 99.70 | 99.40 | 97.30 | 99.40 | 98.80 | 93.77 | 98.31 | 99.04 | 99.10 |
| 6 | 97.07 | 99.66 | 100 | 98.97 | 100 | 99.66 | 99.83 | 99.32 | 98.87 | 99.20 | 99.83 |
| 7 | 100 | 100 | 87.50 | 87.50 | 87.50 | 100 | 87.50 | 93.75 | 100 | 95.65 | 100 |
| 8 | 99.09 | 100 | 100 | 100 | 99.70 | 100 | 100 | 99.66 | 99.74 | 99.51 | 100 |
| 9 | 100 | 100 | 100 | 100 | 80.00 | 100 | 100 | 83.33 | 100 | 92.86 | 100 |
| 10 | 80.41 | 94.89 | 90.63 | 93.55 | 93.43 | 93.43 | 98.66 | 87.84 | 94.35 | 97.56 | 99.51 |
| 11 | 70.07 | 91.02 | 91.71 | 84.21 | 86.20 | 95.70 | 97.74 | 88.53 | 93.45 | 96.88 | 98.05 |
| 12 | 86.23 | 95.49 | 96.39 | 94.36 | 95.26 | 99.32 | 99.10 | 91.83 | 98.26 | 98.97 | 99.32 |
| 13 | 98.18 | 100 | 100 | 100 | 100 | 100 | 100 | 99.19 | 100 | 100 | 100 |
| 14 | 91.93 | 99.55 | 98.83 | 95.87 | 97.76 | 99.91 | 99.91 | 94.33 | 99.05 | 99.91 | 99.91 |
| 15 | 58.63 | 94.35 | 94.64 | 90.77 | 86.31 | 96.13 | 91.67 | 72.41 | 98.13 | 98.78 | 97.32 |
| 16 | 97.69 | 97.67 | 97.67 | 100 | 90.70 | 100 | 95.35 | 94.55 | 99.00 | 87.50 | 97.67 |
| OA (%) | 80.33 | 93.58 | 94.78 | 91.34 | 92.57 | 97.21 | 98.30 | 89.49 | 96.67 | 97.94 | 98.87 |
| Kappa (%) | 77.37 | 92.57 | 93.94 | 89.99 | 91.41 | 96.87 | 98.02 | 88.02 | 95.84 | 97.65 | 98.05 |
Table 6. Classification result of the Pavia University dataset.

| Class | PCA | SuperPCA | S3-PCA | PCA-2D-SSA | SuperPCA-2D-SSA | RPNet-5 | S3-PCA-RPNet | DMLSR | LeNet | SSFTT | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 90.79 | 93.69 | 97.32 | 93.84 | 98.75 | 98.29 | 98.83 | 86.51 | 98.32 | 99.78 | 99.64 |
| 2 | 92.94 | 97.58 | 97.97 | 96.36 | 98.35 | 99.37 | 99.32 | 88.31 | 97.05 | 99.99 | 99.83 |
| 3 | 84.07 | 94.49 | 95.84 | 94.32 | 99.82 | 99.36 | 99.18 | 78.53 | 98.00 | 99.90 | 99.82 |
| 4 | 98.06 | 98.02 | 97.22 | 98.89 | 99.33 | 99.41 | 98.69 | 96.76 | 99.00 | 98.77 | 99.37 |
| 5 | 99.54 | 99.82 | 99.45 | 99.45 | 99.54 | 100 | 99.82 | 100 | 99.73 | 100 | 99.63 |
| 6 | 94.62 | 97.80 | 97.98 | 96.22 | 99.69 | 99.91 | 99.29 | 85.65 | 97.51 | 99.87 | 99.91 |
| 7 | 93.30 | 95.92 | 98.53 | 96.54 | 99.90 | 99.79 | 99.16 | 92.61 | 99.31 | 99.84 | 99.79 |
| 8 | 87.31 | 95.04 | 96.62 | 93.75 | 99.34 | 99.34 | 99.34 | 80.54 | 98.05 | 98.02 | 99.21 |
| 9 | 100 | 99.86 | 99.86 | 100 | 100 | 100 | 99.86 | 100 | 99.88 | 98.43 | 99.86 |
| OA (%) | 92.60 | 96.74 | 97.71 | 95.97 | 98.88 | 99.27 | 99.21 | 87.54 | 98.34 | 99.64 | 99.76 |
| Kappa (%) | 90.02 | 95.66 | 96.87 | 94.52 | 98.77 | 99.18 | 98.93 | 80.15 | 97.52 | 99.52 | 99.68 |
Table 7. Classification result of the KSC dataset.

| Class | PCA | SuperPCA | S3-PCA | PCA-2D-SSA | SuperPCA-2D-SSA | RPNet-5 | S3-PCA-RPNet | DMLSR | LeNet | SSFTT | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 90.80 | 91.07 | 90.38 | 94.92 | 92.72 | 97.12 | 96.84 | 89.35 | 91.74 | 53.04 | 99.73 |
| 2 | 82.27 | 90.45 | 98.18 | 97.73 | 98.64 | 98.64 | 94.55 | 92.89 | 90.09 | 60.66 | 100 |
| 3 | 88.79 | 95.26 | 93.10 | 94.40 | 97.41 | 99.14 | 95.26 | 93.75 | 86.34 | 33.33 | 97.84 |
| 4 | 67.98 | 78.07 | 85.53 | 90.35 | 92.54 | 90.79 | 95.61 | 75.98 | 76.80 | 39.33 | 92.98 |
| 5 | 63.70 | 58.90 | 80.82 | 84.25 | 90.41 | 98.63 | 93.15 | 79.39 | 92.40 | 93.94 | 97.26 |
| 6 | 69.57 | 57.00 | 87.44 | 61.35 | 90.82 | 99.03 | 87.44 | 77.30 | 90.34 | 0 | 99.52 |
| 7 | 92.71 | 100 | 96.88 | 80.21 | 100 | 100 | 92.71 | 78.16 | 90.94 | 50.00 | 96.88 |
| 8 | 90.59 | 96.95 | 99.24 | 98.47 | 92.11 | 98.22 | 97.96 | 93.24 | 94.35 | 57.54 | 98.73 |
| 9 | 97.87 | 97.01 | 97.44 | 98.72 | 98.29 | 100 | 100 | 99.28 | 97.85 | 85.92 | 100 |
| 10 | 89.32 | 86.03 | 96.16 | 98.08 | 97.81 | 98.63 | 96.71 | 98.77 | 99.48 | 66.75 | 98.90 |
| 11 | 97.35 | 93.92 | 97.35 | 98.41 | 94.18 | 98.94 | 99.74 | 99.11 | 99.89 | 94.34 | 99.47 |
| 12 | 98.46 | 95.37 | 94.05 | 99.78 | 96.92 | 100 | 99.56 | 90.86 | 98.55 | 79.34 | 100 |
| 13 | 99.28 | 99.88 | 99.88 | 99.88 | 99.88 | 99.28 | 100 | 99.60 | 100 | 93.72 | 99.76 |
| OA (%) | 90.72 | 91.98 | 94.80 | 95.24 | 95.88 | 98.46 | 97.43 | 92.73 | 95.29 | 70.51 | 99.11 |
| Kappa (%) | 89.78 | 91.07 | 94.22 | 94.70 | 95.41 | 98.29 | 96.25 | 91.90 | 94.97 | 53.91 | 99.02 |

Share and Cite

Chen, H.; Wang, T.; Chen, T.; Deng, W. Hyperspectral Image Classification Based on Fusing S3-PCA, 2D-SSA and Random Patch Network. Remote Sens. 2023, 15, 3402. https://doi.org/10.3390/rs15133402
