Article

A Collaborative Superpixelwise Autoencoder for Unsupervised Dimension Reduction in Hyperspectral Images

1 School of Computer Science, Shaanxi Normal University, Xi’an 710062, China
2 Space Engineering University, Beijing 101416, China
3 School of Automation, Beijing Institute of Technology, Beijing 100811, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(17), 4211; https://doi.org/10.3390/rs15174211
Submission received: 21 July 2023 / Revised: 23 August 2023 / Accepted: 24 August 2023 / Published: 27 August 2023
(This article belongs to the Special Issue Hyperspectral Remote Sensing Imaging and Processing)

Abstract: The dimension reduction (DR) technique plays an important role in hyperspectral image (HSI) processing. Among various DR methods, superpixel-based approaches offer flexibility in capturing spectral–spatial information and have shown great potential in HSI tasks. Superpixel-based methods divide the samples into groups and apply the DR technique within each group. Nevertheless, we find that these methods can increase intra-class disparity by neglecting the fact that samples from the same class may reside in different superpixels, resulting in performance decay. To address this problem, a novel unsupervised DR method named the Collaborative superpixelwise Auto-Encoder (ColAE) is proposed in this paper. ColAE begins by segmenting the HSI into homogeneous regions using a superpixel-based method. Then, a set of Auto-Encoders (AEs) is applied to the samples within each superpixel. To reduce intra-class disparity, a manifold loss is introduced to restrict samples from the same class, even if located in different superpixels, to have similar representations in the code space. In this way, compact and discriminative spectral–spatial features are obtained. Experimental results on three HSI data sets demonstrate the promising performance of ColAE compared to existing state-of-the-art methods.


1. Introduction

A hyperspectral image (HSI) records the light reflected from an object’s surface at hundreds of different wavelengths, enabling the detection of subtle variations in the color, texture, and shape of objects within a scene. It provides valuable information about specific materials and their properties. Due to this powerful ability to capture both spectral and spatial information, HSI has been widely used in many fields, such as agriculture and environmental monitoring. HSI classification, which involves accurately assigning a label to each pixel to identify ground classes such as trees, buildings, or grassland, is a crucial task in hyperspectral technology applications and a highly active research area in remote sensing.
The abundance of spectral information in HSI enables accurate classification based on spectral signatures. However, it also introduces challenges due to the high dimensionality of each pixel. These challenges include (1) redundant and noisy information in high-dimensional data, (2) the “curse-of-dimensionality” problem in machine learning, which arises with an increasing number of features, and (3) the higher computational and storage requirements associated with high-dimensional data. These challenges can degrade the performance of subsequent HSI processing steps. To address these issues, dimension reduction (DR) techniques are employed to obtain a compact representation with significantly fewer dimensions, which is beneficial for subsequent procedures.
Band selection [1] and feature extraction [2] are two families of popular DR techniques for HSI classification. Band selection methods reduce dimensionality by selecting a small subset of hyperspectral bands that retain the wavelength information. However, they often struggle to find the optimal subset of bands, according to [3]. In this paper, our focus is on feature extraction DR methods for HSI classification. These methods aim to find a compact representation of the data in a transformed feature space, effectively addressing the limitations of band selection approaches.
Over the past few decades, numerous feature extraction DR methods have been developed, which can be categorized as supervised, unsupervised, or semi-supervised. Supervised DR methods utilize the labels of the samples during the training process. For instance, Schwaller et al. first applied Linear Discriminant Analysis (LDA), a well-known DR method in machine learning, to HSI classification [4]. Studies [5,6,7] proposed methods to address the limited-training-sample problem in HSIs, studies [8,9,10] tackled the nonlinear separability problem in LDA, and studies [11,12] jointly combined LDA and sparse learning to capture the underlying structure of HSI samples. Unsupervised DR methods, in contrast, do not require label information during training. Lim et al. applied Principal Component Analysis (PCA) to HSI and observed that most of the energy was concentrated in a few eigenvalues [13]. Studies [14,15] then utilized PCA for efficient feature extraction in the HSI classification task, and studies [16,17,18] employed local manifold models to capture the geometric structure of the data. Semi-supervised methods make use of both labeled and unlabeled samples for model training; examples include studies [19,20,21]. In recent years, Deep Learning (DL) has gained popularity in various applications, including HSI classification tasks [22,23,24]. While DL methods have shown promise, their performance in unsupervised settings, where label information is not utilized, may not meet the requirements of real-world applications. Hence, this paper focuses on the unsupervised setting in the context of HSI classification.
In recent years, there has been growing interest in utilizing both the spatial and spectral information of HSI, a typical multi-channel image whose spatial domain also contains rich information, to extract more discriminative features. These methods can be broadly categorized into pixel neighbor-based and superpixel-based approaches. Pixel neighbor-based methods consider local pixel patches to incorporate spatial information. For example, He et al. [25] applied LPNPE to the spatial neighbors of each pixel to capture spatial relationships; Fang et al. [26] computed the local covariance matrix of a pixel using its spatial neighbors and used it as a representation for classification; Li et al. [27] used a spatial window of size s × s to formulate a local neighbor space and defined a new distance measure between samples; and Chen et al. [22] directly flattened each sample together with its neighbors and employed a stacked Auto-Encoder (AE) to extract spectral–spatial features. Superpixel-based methods divide the HSI into homogeneous regions (superpixels) and apply DR methods to each region separately. Studies [28,29] used PCA to extract features from each superpixel, Zhang et al. [30] re-weighted the pixels belonging to the same superpixel and evaluated sparse representation for classification, and Zhang et al. [31] employed kernel PCA on the samples within each superpixel and boosted the results from multi-scale segmentation to improve performance. Compared to pixel neighbor-based methods, superpixel-based methods follow a “divide-and-conquer” approach, offering more flexibility. In this paper, we focus on the superpixel-based approach, due to its flexibility and its potential for improved performance from leveraging both spatial and spectral information in HSI.
The existing superpixel-based methods, such as SuperPCA [28], S³PCA [29], and S-RAE [32], extract features from each superpixel region individually. While these methods provide a feature extractor for each superpixel region, they often neglect the relationship between samples from different superpixel regions. This can be problematic because samples from the same category may be located in different regions, leading to a loss of intra-class structure in the data. To illustrate this issue, consider an example using samples from the woods category of the Indian Pines data set. Measuring the disparity of multi-dimensional data remains a challenging problem that lacks a definitive solution; thus, t-SNE [33] is applied to the samples for visualization to assess the disparity problem. In the original space (Figure 1a), the samples from the woods category are located close to each other, indicating a high level of intra-class consistency. However, after applying SuperPCA (Figure 1b), the intra-class consistency of the data is completely destroyed. The loss of intra-class structure can have negative consequences for subsequent HSI tasks. Therefore, there is a need for methods that not only capture features that maintain the structure within individual superpixel regions, but also preserve the relationships between samples from different superpixel regions.
To solve the aforementioned problem, we propose a novel unsupervised DR method that considers the relationship between samples from different superpixels. To be more specific, Entropy Rate Segmentation (ERS) [34] is first adopted to generate a 2D superpixel map. Then, Locally Linear Embedding (LLE) is applied to capture the underlying manifold structure of the mean vectors of the superpixels. A Collaborative superpixelwise Auto-Encoder (ColAE) model is proposed to learn compact representations, which preserve the structure of the data within each superpixel by minimizing the reconstruction error, while maintaining the learned manifold structure among superpixels by minimizing a graph loss. The representations are finally fed into a Support Vector Machine (SVM) to determine their categories. To evaluate the effectiveness of the proposed ColAE, experiments are conducted on three hyperspectral data sets. We compare our method with state-of-the-art DR techniques; the results validate that the proposed ColAE improves the classification performance of the extracted features.
The remainder of this paper is organized as follows. Section 2 provides a review of several related works. In Section 3, the details of our proposed method are presented. The experimental setup, comparison results, result analysis, and the influence of the parameters are presented in Section 4. Section 5 finally concludes the paper and discusses potential future research directions.

2. Related Works

In this section, we briefly review entropy rate superpixel segmentation, locally linear embedding, and the Auto-Encoder model.

2.1. Entropy Rate Superpixel Segmentation Model

In computer vision, superpixels are defined as compact regions consisting of adjacent pixels with similar characteristics, such as color, brightness, and texture. In HSI, where each pixel represents a distinct spectral signature, samples belonging to the same category also tend to exhibit spatial similarities. Consequently, existing superpixel segmentation methods can be effectively employed to partition an HSI into a collection of homogeneous regions. By considering both spectral and spatial characteristics, superpixel segmentation enables the grouping of pixels with shared properties, facilitating the extraction of meaningful features.
ERS [34] is adopted in our method due to its promising performance in HSI classification tasks [28,29], as well as its inherent capabilities in adaptive region generation and texture preservation. ERS is a graph-based method: given a graph $G = (V, E)$ for an HSI, where the vertex set $V$ denotes the pixel set and the edge set $E$ encodes the pairwise similarities, ERS chooses a subset of edges $A \subseteq E$ so that the resulting graph $G^* = (V, A)$ contains exactly $K$ connected subgraphs. The objective function of ERS is

$$A^* = \arg\max_{A} \; H(A) + \alpha B(A), \quad \mathrm{s.t.}\; A \subseteq E, \qquad (1)$$

where $H(A)$ is an entropy rate term, which favors homogeneous and compact clusters, $B(A)$ is a balancing term, which encourages clusters of similar size, and $\alpha$ is a weight that tunes the contributions of $H(A)$ and $B(A)$. A greedy algorithm is used to solve the problem in (1).
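To make the greedy step concrete, the following Python sketch illustrates the idea under loud simplifications: it is not the reference ERS implementation, the exact entropy rate and balancing terms are replaced by an edge-similarity score with a size-balance bonus, and a precomputed edge list with similarities is assumed.

```python
# A simplified greedy merge in the spirit of ERS (illustrative only):
# the exact H(A) and B(A) terms are replaced by an edge-similarity score
# plus a size-balance bonus, and a union-find tracks the components.
import numpy as np


class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.components = n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.components -= 1


def greedy_superpixels(n_pixels, edges, K, alpha=0.5):
    """edges: list of (similarity, i, j) tuples; returns a label per pixel."""
    uf = UnionFind(n_pixels)
    while uf.components > K:
        best_score, best_edge = -np.inf, None
        for sim, i, j in edges:
            ri, rj = uf.find(i), uf.find(j)
            if ri == rj:
                continue  # edge is internal to a region; no marginal gain
            # Simplified stand-in for the marginal gain of H(A) + alpha*B(A):
            # prefer similar pixels and penalize creating oversized regions.
            score = sim - alpha * np.log(uf.size[ri] + uf.size[rj])
            if score > best_score:
                best_score, best_edge = score, (i, j)
        if best_edge is None:
            break  # graph has fewer connected components than K
        uf.union(*best_edge)
    return np.array([uf.find(p) for p in range(n_pixels)])
```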

2.2. Locally Linear Embedding Model

Researchers in machine learning have found that real-world data may not follow a Gaussian distribution but instead reside on a manifold. Locally linear embedding (LLE) [35], an algorithm insensitive to global variations and characterized by parameter flexibility, was proposed to preserve the manifold structure of the data in a low-dimensional space. Denote $n$ samples in $d$-dimensional space as $X = \{x_1, x_2, \ldots, x_n\}$; LLE first finds the $K$-nearest neighbors of each sample, where $K$ is the number of nearest neighbors and $K \ll n$. LLE assumes that the samples within a small neighborhood are linearly distributed, and the manifold structure of the data is then captured by minimizing the reconstruction error
$$\varepsilon(W) = \sum_i \left\| x_i - \sum_{x_j \in N_i} w_{ij} x_j \right\|^2, \quad \mathrm{s.t.}\; \sum_j w_{ij} = 1, \qquad (2)$$
where $N_i$ stands for the set of $K$-nearest neighbors of $x_i$, and $w_{ij} = 0$ if $x_j \notin N_i$. Problem (2) can be solved as a least-squares problem [35].
With the weighting matrix $W$, LLE maps each $x_i$ onto an $l$-dimensional representation $y_i$ by minimizing the following cost function:
$$\Phi(Y) = \sum_i \left\| y_i - \sum_j w_{ij} y_j \right\|^2. \qquad (3)$$
The problem in (3) is equivalent to

$$\Phi(Y) = \mathrm{Tr}\left( Y^T (I - W)^T (I - W) Y \right), \qquad (4)$$

which can be solved by finding the $l$ eigenvectors of $Z = (I - W)^T (I - W)$ corresponding to the $l$ smallest eigenvalues. Because the eigenvector of the smallest eigenvalue (the constant vector) is not stable, LLE discards it and uses the eigenvectors starting from the second smallest eigenvalue.
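As a concrete illustration, the following is a minimal NumPy sketch of LLE under the formulation above; the regularization of the local Gram matrix is a standard numerical safeguard and an implementation choice, not part of Equations (2)–(4).

```python
# A minimal NumPy sketch of LLE following Equations (2)-(4): constrained
# least squares for the reconstruction weights, then the bottom eigenvectors
# of (I - W)^T (I - W), skipping the constant eigenvector as noted above.
import numpy as np


def lle(X, K=10, l=2, reg=1e-3):
    """X: (n, d) data matrix; returns Y: (n, l) embedding."""
    n = X.shape[0]
    # K nearest neighbors per sample (column 0 of argsort is the point itself).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    neighbors = np.argsort(d2, axis=1)[:, 1:K + 1]

    W = np.zeros((n, n))
    for i in range(n):
        Z = X[neighbors[i]] - X[i]              # neighbors centered on x_i
        G = Z @ Z.T                             # local Gram matrix
        G += reg * np.trace(G) * np.eye(K)      # regularize for stability
        w = np.linalg.solve(G, np.ones(K))
        W[i, neighbors[i]] = w / w.sum()        # enforce sum_j w_ij = 1

    # Embedding: bottom eigenvectors of M = (I - W)^T (I - W), Equation (4).
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, 1:l + 1]                  # skip the constant eigenvector
```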

2.3. Auto-Encoder Model

Auto-Encoder (AE) [36] is a well-known neural network architecture used for various tasks. It consists of two main parts: an encoder and a decoder. In a shallow AE, as illustrated in Figure 2a, the encoder takes an input vector $a_i \in \mathbb{R}^d$ and maps it to a lower-dimensional code $f_i \in \mathbb{R}^l$ by $f_i = f(W^{(1)} a_i + b^{(1)})$, where $f(\cdot)$ is an activation function. The decoder then reconstructs the input vector $a_i$ from the code $f_i$ by $\hat{a}_i = g(W^{(2)} f_i + b^{(2)})$, where $g(\cdot)$ is another activation function. The commonly used activation functions for the encoder and decoder are the nonlinear Tanh and Sigmoid functions.
The parameters of the AE, denoted as $\Theta = \{W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)}\}$, are learned during the training process. These parameters, including the weights $\{W^{(1)}, W^{(2)}\}$ and biases $\{b^{(1)}, b^{(2)}\}$, can be optimized by minimizing the reconstruction error $R(\Theta)$, defined as

$$R(\Theta) = \sum_i \| a_i - \hat{a}_i \|^2, \qquad (5)$$

which sums the squared differences between the input vectors $a_i$ and their reconstructions $\hat{a}_i$ over all the samples. To perform the optimization, the backpropagation algorithm (BP) and stochastic gradient descent are commonly used.
Once the AE is trained, the encoder has learned to map the input vector $a_i \in \mathbb{R}^d$ to a new, lower-dimensional representation $f_i \in \mathbb{R}^l$, where $l$ is typically chosen to be smaller than $d$.
The shallow AE, with only one encoder layer and one decoder layer, has a limited capacity to learn complex and high-level representations. To overcome this limitation, deep AEs were proposed, which have multiple encoder and decoder layers. Increasing the number of layers in the AE architecture enhances its learning ability and allows for the extraction of more intricate features. A deep AE example is presented in Figure 2b. In a deep AE model, the input passes through a series of hidden layers in the encoder, where each layer applies a non-linear transformation to capture different levels of relevant features. The final hidden layer produces the encoded representation $f_i$. The encoder can be expressed mathematically as:

$$f_i = f^{(m)}\left( W^{(m)} f^{(m-1)}\left( W^{(m-1)} (\cdots) + b^{(m-1)} \right) + b^{(m)} \right), \qquad (6)$$
where m represents the depth of the encoder. By adding more layers, the deep AE can learn increasingly complex representations of the input data, helping to capture intricate patterns and structures. This results in improved generalization capabilities and potential efficiency gains compared to shallow AEs. The additional layers allow for a more hierarchical and abstract representation of the data, enabling the model to discover more meaningful and discriminative features.
In a deep AE, the decoder takes the code $f_i$ and passes it through a series of hidden layers. The final output layer of the decoder produces the reconstructed data $\hat{a}_i$. The parameters $\Theta$ of a deep AE include the weights $\{W^{(1)}, W^{(2)}, \ldots, W^{(2m)}\}$ and biases $\{b^{(1)}, b^{(2)}, \ldots, b^{(2m)}\}$. To optimize the parameters $\Theta$, the aim is to minimize the reconstruction error $R(\Theta)$ defined in Equation (5). There are two methods to optimize $\Theta$ through $R(\Theta)$. The first method trains $m$ shallow AEs individually and then stacks them together to form a deep AE [36]. Each shallow AE is trained layer by layer, where the output of one layer is used as the input of the next. This approach is also known as the Stacked AE (SAE). By pretraining the shallow AEs and fine-tuning the entire deep AE, this method allows for the gradual learning of increasingly complex representations. The second method is to initialize the parameters $\Theta$ and then use BP and stochastic gradient descent to iteratively optimize the parameters. This method is known as end-to-end training. In the early stages of deep learning, training deep AEs in this way was challenging because gradients could not propagate effectively to the bottom layers. However, with the development of more effective initialization strategies, such as He initialization [37] and Xavier initialization [38], this issue has been largely mitigated, and it is now possible to directly train deep networks.
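For illustration, the following is a minimal PyTorch sketch of an end-to-end-trained deep AE with Xavier initialization; the hidden width (128) and input dimensionality are illustrative assumptions, while the Tanh/linear placement mirrors the activation pattern described for Equation (6) in Section 4.2.

```python
# A minimal end-to-end deep AE in PyTorch with Xavier initialization.
import torch
import torch.nn as nn


class DeepAE(nn.Module):
    def __init__(self, d=200, l=30):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d, 128), nn.Tanh(),   # layer m=1: Tanh
            nn.Linear(128, l),              # layer m=2: linear
        )
        self.decoder = nn.Sequential(
            nn.Linear(l, 128),              # layer m=3: linear
            nn.Linear(128, d), nn.Tanh(),   # layer m=4: Tanh
        )
        for mod in self.modules():          # Xavier initialization [38]
            if isinstance(mod, nn.Linear):
                nn.init.xavier_uniform_(mod.weight)
                nn.init.zeros_(mod.bias)

    def forward(self, a):
        f = self.encoder(a)                 # code f_i
        return self.decoder(f), f           # reconstruction a_hat_i and code


def train_ae(model, A, epochs=200, lr=1e-3):
    """Minimize the reconstruction error of Equation (5) on data A: (n, d)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        a_hat, _ = model(A)
        loss = ((A - a_hat) ** 2).sum(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```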

3. Collaborative Superpixelwise Auto-Encoder

In this section, we present the details of ColAE, a method designed to extract spectral–spatial features from HSI. ColAE consists of two key steps: superpixel segmentation and collaborative AE learning, as depicted in Figure 3. During the superpixel segmentation step, the ERS-based superpixel method is employed to partition the HSI into homogeneous regions, creating compact and meaningful regions by grouping pixels with similar characteristics. In the collaborative learning step, LLE is first adopted to learn the underlying manifold structure among samples from different superpixels, which allows us to capture the global structure of the HSI. Next, AE models are applied to each superpixel independently. These models seek representations that minimize the local reconstruction error for samples within the same superpixel, while simultaneously minimizing the manifold reconstruction error for samples from different superpixels. In this way, the AE models exchange information among different superpixel regions through collaborative learning, which encourages similar samples from different superpixels to have similar representations in the code space and thus alleviates intra-class disparity. Figure 1c provides empirical evidence supporting this claim.
In this paper, HSI data are denoted by $X \in \mathbb{R}^{B \times W \times H}$, where $B$, $W$, and $H$ represent the number of spectral bands, the width, and the height, respectively. To process the 3D data $X$, we flatten it into a 2D form, denoted as $X_2 = [x_1, x_2, \ldots, x_N] \in \mathbb{R}^{B \times N}$ ($N = W \times H$). Each column $x_i = [x_{i1}, x_{i2}, \ldots, x_{iB}]^T$ represents a pixel in the HSI.

3.1. Superpixel Segmentation

Traditional spectral–spatial methods often use fixed-size spatial windows to incorporate spectral and spatial information. However, these methods do not fully explore the spatial information available in the image. Superpixel segmentation, on the other hand, divides the image into homogeneous regions based on appearance information, thereby exploiting spatial structures more effectively. This is why we employ superpixel segmentation in our proposed work.
The Entropy Rate Segmentation (ERS) algorithm efficiently segments grayscale (one-channel) or color (three-channel) images into superpixel regions. However, an HSI typically consists of hundreds of spectral bands. To address this, we first reduce the dimensionality of the HSI data to one channel using PCA before applying ERS.
PCA allows us to reduce an HSI, denoted as $X$, through its 2D form $X_2$. The covariance matrix of the data is calculated as $C = \frac{1}{N} \sum_i (x_i - \mu)(x_i - \mu)^T$, where $\mu = \frac{1}{N} \sum_i x_i$ is the mean vector of all samples. The eigenvector $v_1$ corresponding to the largest eigenvalue of $C$ forms the projection matrix $V = [v_1]$ for the grayscale image. Next, the one-dimensional 2D data $Y_2$ is obtained by the transformation $Y_2 = V^T X_2$. Finally, $Y_2$ is reshaped into a grayscale image, upon which ERS is performed to obtain the superpixel segmentation.
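The preprocessing above can be sketched in a few lines; the snippet below uses Scikit-learn's PCA to produce the first-principal-component image, and the final `ers` call is only a placeholder for an ERS binding (such as the implementation linked in Section 4.2), not a real Python API.

```python
# A sketch of Section 3.1: project every pixel onto the first principal
# component and reshape the result into a one-channel image for ERS.
import numpy as np
from sklearn.decomposition import PCA


def hsi_to_first_pc_image(X):
    """X: (B, W, H) hyperspectral cube -> (W, H) grayscale uint8 image."""
    B, W, H = X.shape
    X2 = X.reshape(B, W * H).T                    # pixels as rows: (N, B)
    pc1 = PCA(n_components=1).fit_transform(X2)   # Y2 = V^T X2, one channel
    img = pc1.reshape(W, H)
    # Rescale to [0, 255] so image-based segmentation code can consume it.
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return (img * 255).astype(np.uint8)

# labels = ers(hsi_to_first_pc_image(X), n_superpixels=J)  # hypothetical ERS call
```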

3.2. Collaborative AEs

After performing superpixel segmentation on the HSI, the resulting 2D representation can be expressed as $X_2 = \{X_1, X_2, \ldots, X_J\}$, where $X_i = \{x_1^i, x_2^i, \ldots, x_{N_i}^i\}$ represents the samples in the $i$-th superpixel and $N_i$ denotes the number of samples in that superpixel.
To capture the underlying manifold structure of the data, samples from different superpixels are used. Then, an AE model is proposed to preserve this manifold structure among superpixels while simultaneously minimizing the reconstruction error within each superpixel. By jointly considering the manifold structure and the within-superpixel reconstruction, our proposed ColAE allows for the efficient extraction of spectral–spatial features while ensuring the preservation of important relationships between superpixels.

3.2.1. Learning the Manifold Structure among Superpixels

In order to preserve the relations among samples from different superpixels, it is crucial to define and obtain such relations. The manifold structure is commonly employed to model the underlying geometric structure of high-dimensional data, which aligns with our requirements. In our method, we adopt LLE, a classical and efficient manifold learning technique, to capture the manifold structure.
Samples within the same superpixel exhibit similarity; hence, the manifold structure is measured using only the mean vectors of each superpixel. The mean vector is calculated by
$$\mu_i = \frac{1}{N_i} \sum_j x_j^i. \qquad (7)$$
With the mean vectors $\{\mu_1, \mu_2, \ldots, \mu_J\}$, the weighting matrix $W$ can be obtained by minimizing the reconstruction error in Equation (2), where $J$ is the number of superpixels. Denoting the representations of the $i$-th superpixel in the code space as $Y_i = \{y_1^i, y_2^i, \ldots, y_{N_i}^i\}$, the manifold loss over the current code is

$$L(Y) = \sum_i \left\| \frac{1}{N_i} \sum_j y_j^i - \sum_k w_{ik} \frac{1}{N_k} \sum_j y_j^k \right\|^2, \qquad (8)$$

where $M_Y = \left[\frac{1}{N_1}\sum_j y_j^1, \frac{1}{N_2}\sum_j y_j^2, \ldots, \frac{1}{N_J}\sum_j y_j^J\right]$ collects the mean vectors in the code space. The lower the value of $L(Y)$, the better the code preserves the manifold structure.
It should be noted that $K$, the number of nearest neighbors in the LLE algorithm, needs to be predefined when calculating the weighting matrix $W$; it is commonly chosen such that $K \ll J$.
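As an illustration, the manifold loss in Equation (8) can be computed compactly once the per-superpixel codes and the LLE weights are available; the following PyTorch sketch assumes the codes are given as a list of tensors (one per superpixel) and that W is the J × J weight matrix from Equation (2).

```python
# A sketch of the manifold loss in Equation (8): the code-space mean of each
# superpixel should be reconstructed by the LLE-weighted combination of the
# other superpixels' code-space means.
import torch


def manifold_loss(codes, W):
    """codes: list of J tensors, codes[i] of shape (N_i, l);
    W: (J, J) LLE weight tensor whose rows sum to one."""
    M = torch.stack([c.mean(dim=0) for c in codes])  # (J, l) code-space means
    reconstruction = W @ M                           # weighted neighbor means
    return ((M - reconstruction) ** 2).sum()         # Equation (8)
```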

3.2.2. AE Model with Manifold Constraints

Following previous works [28,29,32], we employ a single AE for each superpixel. In this way, multiple AEs are used to efficiently capture the local structure within each superpixel, and low-dimensional representations of a given HSI can be obtained. The loss function of the $i$-th AE is defined as

$$R(\Theta^i) = \sum_j \| x_j^i - \hat{x}_j^i \|^2, \qquad (9)$$

where $\hat{x}_j^i$ is the output of the $i$-th deep AE for the $j$-th sample $x_j^i$ in the $i$-th superpixel. The parameters of this AE are denoted as $\Theta^i = \{W_i^{(1)}, b_i^{(1)}, \ldots, W_i^{(m)}, b_i^{(m)}, \ldots, W_i^{(2m)}, b_i^{(2m)}\}$, where $m$ is the number of layers in the encoder. The reconstruction error over all samples can be expressed as

$$R(\Theta) = \sum_i \sum_j \| x_j^i - \hat{x}_j^i \|^2, \qquad (10)$$

where $\Theta = \{\Theta^1, \Theta^2, \ldots, \Theta^J\}$ represents the parameters of all AEs.
To preserve the relations among superpixels, the manifold loss in Equation (8) is added to Equation (10), resulting in the following loss function:

$$R(\Theta) = \sum_i \sum_j \| x_j^i - \hat{x}_j^i \|^2 + \eta \sum_i \left\| \frac{1}{N_i} \sum_j y_j^i - \sum_k w_{ik} \frac{1}{N_k} \sum_j y_j^k \right\|^2. \qquad (11)$$

In Equation (11), the first term preserves the structure within each superpixel, while the second term maintains the structure between superpixels; the parameter $\eta$ balances the two terms. By incorporating both terms, the proposed ColAE ensures that each AE preserves the structure of the data within its assigned superpixel while exchanging information between superpixels through the manifold structure. In this way, the AEs of the superpixels are learned collaboratively. It should be noted that, in Equation (11), the first term involves all the parameters in $\Theta$, while the second term only involves the parameters of the encoder parts.
To find the parameters that best fit the data, we first initialize each AE using the Xavier method [38]. Then, we backpropagate the gradient of the loss in Equation (11) to each layer. Since the number of samples in a superpixel is not large, we feed all the samples of each superpixel in a single batch to calculate the loss. After hundreds of iterations, the value of $R(\Theta)$ converges to a small value.
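The following PyTorch sketch summarizes this collaborative training step, reusing the `DeepAE` and `manifold_loss` sketches from the previous sections; the superpixel sample tensors, the LLE weight matrix, and the hyperparameter defaults are assumptions for illustration.

```python
# A sketch of jointly optimizing Equation (11): one AE per superpixel, all
# parameters updated together so the manifold term couples the regions.
import torch


def train_colae(superpixels, W, d, l, eta=0.75, T=300, lr=1e-3):
    """superpixels: list of J tensors X_i of shape (N_i, d);
    W: (J, J) LLE weight tensor from Equation (2)."""
    aes = [DeepAE(d, l) for _ in superpixels]
    params = [p for ae in aes for p in ae.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(T):
        recon, codes = 0.0, []
        for ae, Xi in zip(aes, superpixels):
            x_hat, f = ae(Xi)                        # all samples of a
            recon = recon + ((Xi - x_hat) ** 2).sum()  # superpixel in one batch
            codes.append(f)
        loss = recon + eta * manifold_loss(codes, W)   # Equation (11)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return aes
```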

3.3. Computational Analysis of ColAE

The procedure of the proposed ColAE is outlined in Algorithm 1. Its time complexity can be analyzed as follows. The superpixel segmentation step has a time complexity of $O(\max(B^3, B^2 N) + N \log N)$. The manifold structure modeling procedure has a time complexity of $O(KJ)$, and the calculation of the loss in Equation (10) has a time complexity of $O(N B d_1)$, where $d_1$ is the dimensionality of the first hidden representation $h^{(1)}$. The gradient descent method used to optimize the parameters $\Theta$ has a time complexity of $O(T N B d_1)$. In HSI, the number of bands $B$ is typically much smaller than the number of samples $N$; additionally, $K$ and $J$ are much smaller than $N$. Therefore, the overall time complexity of ColAE is $O(T N B d_1)$.
Algorithm 1 Procedures of ColAE.
Input: An HSI $X \in \mathbb{R}^{B \times W \times H}$, the number of superpixels $J$, the number of nearest neighbors $K$ in LLE, the balancing weight $\eta$, the dimensionality $L$ of the code, and the number of iterations $T$.
Output: The output $Y \in \mathbb{R}^{L \times W \times H}$.
1: Reshape $X$ into the 2D form $X_2 \in \mathbb{R}^{B \times N}$. Use PCA to reduce the dimensionality of $X_2$ to 1, and reshape the result into a grayscale image;
2: Apply the ERS algorithm to segment the image into $J$ non-overlapping regions;
3: Use Equation (7) to compute the mean vector $\mu_i$ of each superpixel. Then, calculate the weights for the mean vectors according to Equation (2);
4: Use Xavier initialization to initialize the parameters $\Theta^{(0)}$;
5: for $t = 0$ to $T$ do
6:    Calculate the loss $R(\Theta^{(t)})$ by Equation (11);
7:    Calculate the gradient $g^{(t)}$ using an existing optimizer, and update the parameters by $\Theta^{(t+1)} = \Theta^{(t)} - \alpha g^{(t)}$;
8: end for
9: Compute the code using $\Theta^{(T)}$, then reshape the code into $Y \in \mathbb{R}^{L \times W \times H}$.
10: return $Y$.

4. Experimental Results

In this section, to validate the performance of the proposed ColAE, we carry out extensive experiments on several HSIs in comparison with state-of-the-art methods.

4.1. Data Sets

Three HSI data sets are used to evaluate the ColAE in our experiments, which are Indian Pines, the University of Pavia, and Salinas. The details of each data set are as follows.
(1) Indian Pines. The Indian Pines data set was collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over an agricultural area in Indiana, USA. It consists of 145 × 145 pixels and 224 spectral bands, covering wavelengths from 400 to 2500 nm. In this paper, 24 bands covering water-absorption regions are removed, and a total of 200 bands are used. The data set contains 16 different classes, including various crops, bare soil, and human-made structures. Approximately 10,249 labeled samples are available from the ground-truth map.
(2) University of Pavia. The University of Pavia data set was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor over an urban area around the University of Pavia, Italy. It consists of 610 × 340 pixels and 115 spectral bands, covering wavelengths from 430 to 860 nm. A total of 12 noisy and water-absorption bands are removed, and 103 bands are preserved. The data set contains nine different classes, including meadows, trees, bare soil, and several human-made materials. Approximately 42,776 labeled samples are available from the ground-truth map.
(3) Salinas. The Salinas data set was collected by the AVIRIS sensor over an agricultural area in Salinas Valley, California, USA. It consists of 512 × 217 pixels and 224 spectral bands, covering wavelengths from 400 to 2500 nm. A total of 20 noisy and water-absorption bands are removed, and 204 bands are used in our experiments. The data set contains 16 different classes, including various crops, bare soil, and human-made structures. A total of 53,129 labeled samples are used in our experiments.
Table 1 lists the number of samples per class for the three datasets. All these datasets are available (https://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes, accessed on 12 July 2021) from the Internet.

4.2. Experimental Setup

In the experiments, we evaluate the learned spectral–spatial features via their classification performance. We use several well-known handcrafted features, including PCA, LPP [39], and KPCA [40], and their extensions to superpixel-based methods: SuperPCA [28], SuperLPP [41], and SuperKPCA [31]. Deep learning-based features are also compared, including AE [36], its superpixel extension SuperAE, CAE [42], and ContrastNet [43].
Three metrics are used to evaluate the performance of the different dimension reduction methods: overall accuracy (OA), average accuracy (AA), and the kappa coefficient. The HSIs are used in their original form without any further preprocessing. We apply the DR algorithms to the HSI and then feed their outputs into an SVM to determine the categories of the samples. The RBF kernel is used to boost the performance of the SVM on non-linearly distributed data, and the parameters of the RBF are determined by a grid search, as was performed in [28]. Our experiments are conducted on a Windows 10 64-bit platform, with an Intel Core i5-12400F CPU (2.5 GHz) and 32 GB of memory. The proposed approaches are implemented mainly using Python 3.6, Pytorch 1.8.0, Scikit-learn 1.2.1 (Sklearn), and Shogun (https://github.com/shogun-toolbox/shogun, accessed on 8 December 2020), a well-known machine learning toolbox that provides interfaces for Matlab, R, Python, and other languages, offering a convenient way to implement various machine learning algorithms.
To test the proposed method, the 10 random splits provided in [28] (https://github.com/junjun-jiang/SuperPCA/tree/master/datasets, accessed on 31 October 2020) are used for training and testing. For each class in the three data sets, T = 3, 5, 7, 10, 15, 20 samples are selected to train the SVM, and the rest of the samples are used as the testing set, where T denotes the number of training samples. For the classes that possess too few samples, such as Grass-pasture-mowed and Oats in Indian Pines, we select at most half of the total samples in them. PCA, KPCA, and the SVM are implemented with the Sklearn library. KPCA utilizes the RBF kernel, and its best parameter is determined through a grid search based on the reconstruction error of the pre-image [44]. Moreover, LPP is implemented using the Shogun library, and the optimal number of nearest neighbors (K) and the τ of the heat kernel are also determined by grid search. The implementations of CAE and ContrastNet are available (https://github.com/jjwwczy/ContrastNet-Unsupervised-Feature-Learning-by-Autoencoder-and-Prototypical-Contrastive-Learning, accessed on 8 March 2034) online. In our experiments, the architectures of AE and ColAE remain consistent and are listed in Table 2. For Equation (6), Tanh is used as the activation function when m = 1, 4, and the linear function is used when m = 2 and m = 3. Furthermore, Xavier initialization is employed to initialize the parameters of both AE and ColAE. The ERS implementation is also available (https://github.com/mingyuliutw/EntropyRateSuperpixel, accessed on 19 August 2015) online. SuperPCA, SuperKPCA, SuperLPP, and SuperAE are applied based on the superpixel results obtained from ERS, according to their definitions as mentioned.
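For completeness, the classification stage can be sketched as follows; the parameter grid shown here is an assumption for illustration, not the exact grid used in our experiments.

```python
# A sketch of the SVM stage: fit an RBF-kernel SVM on the extracted codes,
# tuning C and gamma by cross-validated grid search, as in [28].
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC


def fit_svm(train_codes, train_labels):
    grid = {"C": [1, 10, 100, 1000], "gamma": [0.01, 0.1, 1, 10]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=3)
    search.fit(train_codes, train_labels)
    return search.best_estimator_
```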

4.3. Comparisons with Other Algorithms

Table 3 presents the performance of the features produced by the 13 methods on the three data sets with varying numbers of training samples when L = 30, where L is the dimensionality of the low-dimensional representation. The best classification results in each setting are highlighted in bold. It is worth noting that KPCA consumes too much memory to be executed on the University of Pavia and Salinas data sets. From the results in Table 3, several observations can be made.
1. In nearly all tested scenarios, the proposed ColAE surpasses the other approaches. Note that, on the Indian Pines data set, SuperPCA achieves better average accuracy (AA) than ColAE; however, when evaluated by overall accuracy (OA) and kappa, ColAE outperforms SuperPCA. Upon further analysis of the classification outcomes, we observe that ColAE consistently performs better on categories with larger sample sizes, while its performance diminishes on categories with fewer samples, as illustrated in Table 4, Table 5 and Table 6. This phenomenon arises mainly because ColAE utilizes LLE to model the manifold structure between superpixels: LLE relies on K-nearest neighbors, with K set much smaller than the number of superpixels, to capture the local structure of the data. Categories with only a few samples tend to be confined to a limited number of superpixels; consequently, when modeling the manifold structure, LLE might incorrectly associate these small categories with others, leading to lower classification accuracy for categories with a small sample size. In contrast, when a category has sufficient samples, its samples are spread over a set of superpixels whose number typically surpasses K, so the inherent structure can be effectively modeled and preserved by ColAE. In this way, the disparity problem is well addressed, leading to higher classification accuracy.
2. ColAE consistently outperforms SuperAE, which shows that the proposed regularization term in Equation (11) can efficiently solve the class disparity problem caused by superpixel-based methods. To validate this finding, we randomly select one split from each data set and map the classification results onto the corresponding images, as shown in Figure 4, Figure 5 and Figure 6. A comparison between Figure 4n,o reveals that ColAE improves the accuracy mainly by correctly classifying the large regions of Soybean-min-till, indicated in pink. Notably, based on the superpixel segmentation, SuperAE misclassifies samples belonging to the Soybean-min-till class within a superpixel as Soybean-not-till (indicated in blue). In contrast, ColAE successfully minimizes the misclassification rate within the same region, highlighting the efficiency of the proposed graph-regularization term in Equation (11).
3. Features obtained solely from the spectral domain perform significantly worse than those obtained from the spectral–spatial domain, substantiating the importance of incorporating spatial information for classification. Both SuperAE and ColAE outperform ContrastNet and CAE, despite the network architectures of ContrastNet and CAE being more complex than those of SuperAE and ColAE. This outcome validates the superiority of superpixel-based methods. Additionally, superpixel-based methods consume far fewer computational resources. Because the unsupervised methods process all the data with the DR models before splitting them into training and testing sets, KPCA consumes 207,400 × 207,400 × 4 bytes ≈ 160 GB of memory for the University of Pavia and 111,104 × 111,104 × 4 bytes ≈ 46 GB for Salinas in single-precision floating point when constructing the kernel matrix. SuperKPCA consumes significantly less memory than traditional KPCA, further emphasizing the flexibility of superpixel-based approaches.
4. SuperPCA demonstrates surprisingly strong performance across all settings, which indicates that the underlying data structure within a superpixel is relatively simple. This finding justifies our use of an AE with only two layers in both the encoder and the decoder. The superior performance of both SuperAE and ColAE compared to SuperPCA further emphasizes their enhanced generalization ability. Additionally, it is worth noting that SuperKPCA and SuperLPP do not consistently outperform SuperPCA. We attribute this to the fact that the grid search strategy employed for parameter tuning requires the best parameters to be included in the search space. However, as the data distribution varies from one superpixel to another, it is challenging to tune the parameters of SuperKPCA and SuperLPP to achieve optimal performance.
5. The performance of PCA on the three data sets is comparable to that of the raw feature. The proportion of retained principal components in PCA is 99.25% for Indian Pines, 99.96% for the University of Pavia, and 99.99% for Salinas. These results indicate that PCA removes components carrying little valuable information, resulting in little accuracy loss. LPP and KPCA outperform PCA due to the inherent complexity of the underlying data structure in the HSIs: LPP and KPCA can preserve the nonlinear structure of the data, thus yielding improved classification performance. Interestingly, AE performs slightly worse than the raw feature and PCA. This can be attributed to the limited capacity of a two-layer encoder with only a single nonlinear function to capture the intricate data structure; neural networks with more complex architectures could improve the accuracy of the AE. It is important to highlight that we intentionally maintained a uniform architecture across AE, SuperAE, and ColAE, aiming to discern the influence of the superpixel-based technique and the regularization term introduced in Equation (11). Consequently, we did not design a distinct structure for AE in our experimental setup.

4.4. Parameter Analyses

In the proposed ColAE, several parameters need to be predefined: the number of superpixels J, the number of nearest neighbors K in the LLE, the balancing weight η, and the dimensionality L of the code. In fact, J is intertwined with K, where K is usually far smaller than J. To capture the relationship between these parameters, a ratio R can be introduced, which connects K and J as K = round(J × R), ensuring that K is an integer. In this way, the choice of K is directly proportional to the number of superpixels J by a factor determined by R. η is also influenced by J and K, since it is impacted by the number of samples within a superpixel, which in turn affects the loss values of the terms in Equation (11). Therefore, our analysis starts with a discussion of L and then examines K, J, and η by considering their interconnected relationship.

4.4.1. The Effect of the Dimensionality of the Code

In our experiments, we set J = 100 for Indian Pines and Salinas, and J = 20 for the University of Pavia, and use a fixed ratio R = 0.2 to determine the value of K. Furthermore, we choose η = 0.75. To investigate the effect of the dimensionality of the code L, we vary L from 5 to 50 with an interval of 5 and examine the resulting overall classification accuracies with T = 20 for the SVM. For the Indian Pines data set, the highest overall accuracy (OA) of 89.98% was achieved at L = 45, while the lowest OA of 45.34% was observed at L = 5. Similar trends were discerned for the University of Pavia data set, where the OA ranged from 84.01% to 95.30%, and for the Salinas data set, where the OA varied between 86.09% and 98.14%. These results are illustrated in Figure 7.
It is evident from the figure that when L = 5, the OAs are low across all three data sets, as expected: a small number of features cannot carry sufficient discriminative information for effective classification. As L increases, the OAs steadily improve until a relatively large value of L is reached, after which the growth of the OAs becomes slow, indicating that the available discriminative information is already well utilized. A larger L then only increases the complexity and computational requirements of the classifier without yielding significant performance gains. Based on these observations, we choose L = 30 for all subsequent experiments.

4.4.2. The Effects of the Number of Superpixels, Number of Nearest Neighbors, and Balance Weight

We set the dimensionality of the code L to 30 and varied the balance weight η within the range [0.5, 0.75, 1, 1.25], the number of superpixels J over the set [20, 50, 70, 100, 120, 150], and the ratio R between K and J over the set [0.1, 0.2, 0.3, 0.4] to evaluate the performance of ColAE on the three data sets. Across these parameter configurations, the OA spanned from 85.39% to 89.37% for Indian Pines, from 88.25% to 95.06% for the University of Pavia, and from 94.01% to 97.38% for Salinas. The classification accuracies obtained from these experiments are presented in Figure 8.
A notable observation is that the number of superpixels J is the primary factor influencing the performance of ColAE. On Indian Pines, the classification accuracy of ColAE first increases and then decreases as J grows, which may be attributed to the rich texture information present in this data set: too few superpixels can cause samples of different classes to be merged together, while an excessive number of superpixels may leave too few samples in each superpixel, limiting the learning capability of the AEs within the ColAE framework. Conversely, for the University of Pavia and Salinas data sets, the classification accuracy declines if J is set too large. This trend can be attributed to the samples being spatially clustered in these data sets, where a small number of superpixels is sufficient for effective segmentation. Furthermore, it is worth noting that ColAE is robust to the number of nearest neighbors K and the balance weight η, making it readily adaptable to other data sets.

4.4.3. Execution Time

In this work, all the experiments are conducted on a desktop, and the implemented code uses the CPU for execution. The running times of nine DR methods on the three data sets are presented in Table 7. Compared with the training time, projecting the samples onto the low-dimensional space demands minimal computational time once the model has been trained, so we only list the training time in this section. It is important to note that the implementations of CAE and ContrastNet use the GPU to accelerate the training process; to ensure a fair comparison of computational times across methods, the running times of CAE and ContrastNet are therefore not included. The number of samples to be processed is 145 × 145 = 21,025 for Indian Pines, 610 × 340 = 207,400 for the University of Pavia, and 512 × 217 = 111,104 for Salinas.
As indicated in Table 7, PCA exhibits the lowest computational time due to its parameter-free nature. On the other hand, KPCA and SuperKPCA consume the most time, since the parameter τ needs to be tuned and both methods construct a dense kernel matrix of size N × N; the grid search strategy employed for parameter tuning further increases their computational burden. In contrast, LPP and SuperLPP also use grid search for parameter tuning, but they only construct a sparse neighborhood matrix with K nonzero entries per row, significantly reducing the computational burden. The proposed ColAE requires a computational time similar to that of SuperAE, although ColAE involves the additional step of constructing a manifold graph matrix; however, this graph matrix is relatively small, being of size J × J. It should be noted that, while all samples within a superpixel are fed to the optimizer at once in SuperAE and ColAE, the batch size for AE is set to 256; hence, the computational time of AE is longer than that of SuperAE and ColAE. Furthermore, the computational times of AE, SuperAE, and ColAE can be greatly reduced when a GPU is employed for parallel computation.

5. Conclusions

In this paper, we have shown that existing superpixel-based DR methods may disrupt the intra-class structure of the data. To solve this problem, an unsupervised spectral–spatial DR method called ColAE is proposed. In ColAE, the HSI is first segmented into superpixels, and an LLE graph is then constructed to model the similarities between the mean vectors of the superpixels. A set of AEs is applied to the samples within each superpixel, with the LLE graph employed to reduce the intra-class disparity of the representations in the code space. Experimental results on three HSI data sets validate the effectiveness of the proposed ColAE in addressing the challenges of superpixel-based DR methods.
It should be noted that ColAE can be extended to a multiscale superpixel version, which is expected to yield higher classification accuracy. Additionally, exploring other manifold learning-based graphs to model the relationships between superpixels will be a focal point of future research efforts.

Author Contributions

Conceptualization, C.Y.; methodology, C.Y.; software, C.Y., L.Z. and L.F.; validation, M.M. and Z.G.; formal analysis, C.Y.; investigation, C.Y.; writing—original draft preparation, C.Y.; writing—review and editing, M.M., Z.G. and F.Y.; visualization, C.Y.; project administration, C.Y.; funding acquisition, C.Y. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the Fundamental Research Funds for Central Universities under Grant 1301032207; in part by the Regional Innovation Guidance Project of Shaanxi under grant 2022QFY0105; and in part by the Key Research and Development Program in Shaanxi Province under grant 2023-YBGY241.

Data Availability Statement

The data presented in this study are available on request from the corresponding authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sun, W.; Du, Q. Hyperspectral band selection: A review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 118–139. [Google Scholar] [CrossRef]
  2. Rasti, B.; Hong, D.; Hang, R.; Ghamisi, P.; Kang, X.; Chanussot, J.; Benediktsson, J.A. Feature extraction for hyperspectral imagery: The evolution from shallow to deep: Overview and toolbox. IEEE Geosci. Remote Sens. Mag. 2020, 8, 60–88. [Google Scholar] [CrossRef]
  3. Jia, X.; Kuo, B.C.; Crawford, M.M. Feature mining for hyperspectral image classification. Proc. IEEE 2013, 101, 676–697. [Google Scholar] [CrossRef]
  4. Schwaller, M.R. A geobotanical investigation based on linear discriminant and profile analyses of airborne thematic mapper simulator data. Remote Sens. Environ. 1987, 23, 23–34. [Google Scholar] [CrossRef]
  5. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of hyperspectral images with regularized linear discriminant analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873. [Google Scholar] [CrossRef]
  6. Du, Q. Modified Fisher’s linear discriminant analysis for hyperspectral imagery. IEEE Geosci. Remote Sens. Lett. 2007, 4, 503–507. [Google Scholar] [CrossRef]
  7. Fabiyi, S.D.; Murray, P.; Zabalza, J.; Ren, J. Folded LDA: Extending the linear discriminant analysis algorithm for feature extraction and data reduction in hyperspectral remote sensing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 12312–12331. [Google Scholar] [CrossRef]
  8. Li, W.; Prasad, S.; Fowler, J.E.; Bruce, L.M. Locality-preserving discriminant analysis in kernel-induced feature spaces for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2011, 8, 894–898. [Google Scholar] [CrossRef]
  9. Chen, M.; Wang, Q.; Li, X. Discriminant analysis with graph learning for hyperspectral image classification. Remote Sens. 2018, 10, 836. [Google Scholar] [CrossRef]
  10. Luo, F.; Zhang, L.; Du, B.; Zhang, L. Dimensionality reduction with enhanced hybrid-graph discriminant learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5336–5353. [Google Scholar] [CrossRef]
  11. Ly, N.H.; Du, Q.; Fowler, J.E. Sparse graph-based discriminant analysis for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 3872–3884. [Google Scholar]
  12. Luo, F.; Zhang, L.; Zhou, X.; Guo, T.; Cheng, Y.; Yin, T. Sparse-adaptive hypergraph discriminant analysis for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1082–1086. [Google Scholar] [CrossRef]
  13. Lim, S.; Sohn, K.H.; Lee, C. Principal component analysis for compression of hyperspectral images. In Proceedings of the IGARSS 2001, Sydney, NSW, Australia, 9–13 July 2001; Volume 1, pp. 97–99. [Google Scholar]
  14. Rodarmel, C.; Shan, J. Principal component analysis for hyperspectral image classification. Surv. Land Inf. Sci. 2002, 62, 115–122. [Google Scholar]
  15. Machidon, A.L.; Del Frate, F.; Picchiani, M.; Machidon, O.M.; Ogrutan, P.L. Geometrical approximated principal component analysis for hyperspectral image analysis. Remote Sens. 2020, 12, 1698. [Google Scholar] [CrossRef]
  16. Ma, L.; Crawford, M.M.; Tian, J. Local manifold learning-based k-nearest-neighbor for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4099–4109. [Google Scholar] [CrossRef]
  17. Hong, D.; Yokoya, N.; Zhu, X.X. Learning a robust local manifold representation for hyperspectral dimensionality reduction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 2960–2975. [Google Scholar] [CrossRef]
  18. Yu, W.; Zhang, M.; Shen, Y. Learning a local manifold representation based on improved neighborhood rough set and LLE for hyperspectral dimensionality reduction. Signal Process. 2019, 164, 20–29. [Google Scholar] [CrossRef]
  19. Camps-Valls, G.; Marsheva, T.V.B.; Zhou, D. Semi-supervised graph-based hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3044–3054. [Google Scholar] [CrossRef]
  20. Liao, W.; Pizurica, A.; Scheunders, P.; Philips, W.; Pi, Y. Semisupervised local discriminant analysis for feature extraction in hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2012, 51, 184–198. [Google Scholar] [CrossRef]
  21. Shao, Z.; Zhang, L. Sparse dimensionality reduction of hyperspectral image based on semi-supervised local Fisher discriminant analysis. Int. J. Appl. Earth Obs. Geoinf. 2014, 31, 122–129. [Google Scholar] [CrossRef]
  22. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
Figure 1. Visualization of the representations of the woods class in the Indian Pines data set. (a) Samples in the original space; (b) representations obtained by SuperPCA; (c) representations obtained by the proposed ColAE.
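The 2-D views in Figure 1 are produced by embedding the high-dimensional representations into the plane. Below is a minimal sketch of how such a visualization can be generated, assuming a t-SNE embedding; the function name, plotting details, and random stand-in data are illustrative, not the authors' script.

```python
# Hedged sketch: embed (n_samples, n_dims) representations of one class
# (e.g., "woods") into 2-D with t-SNE and scatter-plot the result.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_representations(feats: np.ndarray, title: str) -> None:
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(feats)
    plt.figure()
    plt.scatter(emb[:, 0], emb[:, 1], s=4)
    plt.title(title)
    plt.show()

# Random stand-in for the raw spectra; the real inputs would be the
# original pixels, the SuperPCA codes, and the ColAE codes, respectively.
rng = np.random.default_rng(0)
plot_representations(rng.normal(size=(500, 200)), "original space")
```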
Figure 2. Illustration of the AE. (a) A shallow AE; (b) a deep AE.
Figure 3. The stages in ColAE. The loss within AEs is the first term in Equation (11), which sums the reconstruction losses within each individual superpixel; the loss between AEs is the second term in Equation (11), which maintains the manifold structure between superpixels.
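For concreteness, the two-term objective that the Figure 3 caption describes can be sketched in code. The following is a minimal PyTorch illustration, not the authors' implementation: Equation (11) itself is not reproduced in this excerpt, and the `encode`/`decode` methods, the pairwise similarity matrix `W`, the Laplacian-style form of the manifold term, and the trade-off weight `eta` are all assumptions made for the sketch.

```python
import torch

def colae_loss(aes, superpixels, W, eta=0.1):
    """aes: one autoencoder per superpixel (assumed to expose encode/decode);
    superpixels: list of (n_i, D) tensors; W: (N, N) sample similarities."""
    codes, recon_loss = [], 0.0
    for ae, x in zip(aes, superpixels):
        z = ae.encode(x)
        # "loss within AEs": reconstruction error inside each superpixel
        recon_loss = recon_loss + torch.mean((ae.decode(z) - x) ** 2)
        codes.append(z)
    z_all = torch.cat(codes, dim=0)
    # "loss between AEs": samples with large similarity W_ij are pushed to
    # have nearby codes, even when they lie in different superpixels
    d2 = torch.cdist(z_all, z_all) ** 2
    manifold_loss = (W * d2).sum() / W.sum()
    return recon_loss + eta * manifold_loss
```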
Figure 4. Classification maps produced by different algorithms for the Indian Pines data set. (a) False-color composite of bands 29, 19, and 9. (b) Ground-truth map; the black area denotes background pixels. (c) Segmentation with 100 superpixels. (d) Raw feature (OA = 57.72%). (e) PCA (OA = 57.20%). (f) LPP (OA = 69.04%). (g) KPCA (OA = 57.27%). (h) AE (OA = 56.71%). (i) SuperPCA (OA = 83.65%). (j) SuperLPP (OA = 83.34%). (k) SuperKPCA (OA = 85.22%). (l) ContrastNet (OA = 79.98%). (m) CAE (OA = 82.15%). (n) SuperAE (OA = 84.68%). (o) ColAE (OA = 87.24%).
Figure 5. Classification maps produced by different algorithms for the University of Pavia data set. (a) False-color composite of bands 60, 30, and 2. (b) Ground-truth map; the black area denotes background pixels. (c) Segmentation with 20 superpixels. (d) Raw feature (OA = 82.66%). (e) PCA (OA = 82.81%). (f) LPP (OA = 68.70%). (g) AE (OA = 81.81%). (h) SuperPCA (OA = 95.83%). (i) SuperLPP (OA = 87.70%). (j) SuperKPCA (OA = 94.22%). (k) ContrastNet (OA = 95.32%). (l) CAE (OA = 94.36%). (m) SuperAE (OA = 96.73%). (n) ColAE (OA = 96.78%).
Figure 6. Classification maps produced by different algorithms for the Salinas data set. (a) False-color composite of bands 50, 30, and 20. (b) Ground-truth map; the black area denotes background pixels. (c) Segmentation with 100 superpixels. (d) Raw feature (OA = 87.30%). (e) PCA (OA = 86.74%). (f) LPP (OA = 87.86%). (g) AE (OA = 87.89%). (h) SuperPCA (OA = 94.25%). (i) SuperLPP (OA = 94.12%). (j) SuperKPCA (OA = 93.89%). (k) ContrastNet (OA = 93.38%). (l) CAE (OA = 95.27%). (m) SuperAE (OA = 97.43%). (n) ColAE (OA = 97.67%).
Figure 7. OA versus the reduced dimension L on the Indian Pines, University of Pavia, and Salinas data sets.
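A curve such as the one in Figure 7 is obtained by sweeping the code dimension L and measuring the overall accuracy (OA) at each setting. The sketch below shows the mechanics of that sweep; PCA stands in for the trained model purely to keep the example short and runnable, and the SVM classifier, split ratio, and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))    # stand-in spectra: 1000 pixels, 200 bands
y = rng.integers(0, 16, size=1000)  # stand-in labels for 16 classes

for L in (5, 10, 20, 30):
    Z = PCA(n_components=L).fit_transform(X)  # reduce to L dimensions
    Ztr, Zte, ytr, yte = train_test_split(Z, y, train_size=0.1, random_state=0)
    oa = accuracy_score(yte, SVC().fit(Ztr, ytr).predict(Zte))
    print(f"L={L}: OA={oa:.4f}")
```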
Figure 8. OAs under different parameter settings. (a–d) OAs obtained with different values of J and η when R is set to 0.1, 0.2, 0.3, and 0.4, respectively, on the Indian Pines data set. (e–h) OAs for the University of Pavia data set under the same parameter settings. (i–l) OAs for the Salinas data set under the same parameter settings.
Table 1. Number of samples in the Indian Pines, University of Pavia, and Salinas images.

Class | Indian Pines | Samples | University of Pavia | Samples | Salinas | Samples
c1 | Alfalfa | 46 | Asphalt | 6631 | Broccoli green weeds 1 | 2009
c2 | Corn-notill | 1428 | Meadows | 18,649 | Broccoli green weeds 2 | 3726
c3 | Corn-mintill | 830 | Gravel | 2099 | Fallow | 1976
c4 | Corn | 237 | Trees | 3064 | Fallow rough plow | 1394
c5 | Grass-pasture | 483 | Metal sheets | 1345 | Fallow smooth | 2678
c6 | Grass-trees | 730 | Bare soil | 5029 | Stubble | 3959
c7 | Grass-pasture-mowed | 28 | Bitumen | 1330 | Celery | 2579
c8 | Hay-windrowed | 478 | Bricks | 3682 | Grapes untrained | 11,271
c9 | Oats | 20 | Shadows | 947 | Soil vineyard develop | 6203
c10 | Soybean-notill | 972 | - | - | Corn senesced green weeds | 3278
c11 | Soybean-mintill | 2455 | - | - | Lettuce romaine 4wk | 1068
c12 | Soybean-clean | 593 | - | - | Lettuce romaine 5wk | 1927
c13 | Wheat | 205 | - | - | Lettuce romaine 6wk | 916
c14 | Woods | 1265 | - | - | Lettuce romaine 7wk | 1070
c15 | Buildings-Grass-Trees-Drives | 386 | - | - | Vineyard untrained | 7268
c16 | Stone-Steel-Towers | 93 | - | - | Vineyard vertical trellis | 1807
Total | | 10,249 | | 42,776 | | 54,129
Table 2. The architecture of the AE used in the experiments. Shapes are given in PyTorch style, where −1 denotes the batch size.

Layer | Indian Pines | University of Pavia | Salinas
Input | [−1, 200] | [−1, 103] | [−1, 203]
Linear | [−1, 100] | [−1, 75] | [−1, 100]
Tanh | [−1, 100] | [−1, 75] | [−1, 100]
Linear (code) | [−1, L] | [−1, L] | [−1, L]
Linear | [−1, 100] | [−1, 75] | [−1, 100]
Tanh | [−1, 100] | [−1, 75] | [−1, 100]
Linear | [−1, 200] | [−1, 103] | [−1, 203]
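To make the Table 2 layout concrete, a minimal PyTorch module with the Indian Pines shapes (200 input bands, 100-unit Tanh hidden layers, an L-dimensional code, and a mirrored decoder) might look as follows. Only the layer sizes come from Table 2; the class name, default code dimension, and forward signature are illustrative.

```python
import torch.nn as nn

class SuperpixelAE(nn.Module):
    def __init__(self, n_bands: int = 200, hidden: int = 100, code_dim: int = 30):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.Tanh(),
            nn.Linear(hidden, code_dim),   # the L-dimensional code layer
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_bands),    # reconstruct the input spectrum
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z
```

For the University of Pavia and Salinas images, the same structure applies with (103, 75) and (203, 100) in place of (200, 100).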
Table 3. Classification performance of the 12 methods on the Indian Pines, University of Pavia, and Salinas images. T.N.s/C denotes the number of training samples from each class.

Data Set | T.N.s/C | Metric | Raw | PCA | LPP | KPCA | AE | SuperPCA | SuperLPP | SuperKPCA | ContrastNet | CAE | SuperAE | ColAE
Indian Pines | 3 | OA(%) | 40.89 | 40.89 | 45.01 | 40.81 | 40.37 | 54.55 | 58.28 | 48.28 | 55.20 | 54.50 | 67.78 | 68.81
| | AA(%) | 44.30 | 44.21 | 45.60 | 43.97 | 44.00 | 74.69 | 71.32 | 53.78 | 55.31 | 54.06 | 70.15 | 66.41
| | kappa | 0.3455 | 0.3455 | 0.3870 | 0.3451 | 0.3404 | 0.4837 | 0.5276 | 0.4415 | 0.4977 | 0.4942 | 0.6397 | 0.6518
| 5 | OA(%) | 47.41 | 46.98 | 53.56 | 47.52 | 47.72 | 69.84 | 65.86 | 64.30 | 67.88 | 64.28 | 77.20 | 77.72
| | AA(%) | 48.60 | 48.38 | 52.23 | 48.43 | 48.65 | 80.91 | 76.43 | 61.22 | 60.73 | 60.81 | 77.49 | 74.79
| | kappa | 0.4156 | 0.4115 | 0.4818 | 0.4158 | 0.4190 | 0.6560 | 0.6149 | 0.6061 | 0.6364 | 0.5992 | 0.7429 | 0.7493
| 7 | OA(%) | 51.38 | 50.84 | 58.47 | 51.46 | 50.65 | 77.01 | 75.00 | 77.62 | 73.36 | 70.20 | 81.34 | 82.03
| | AA(%) | 51.53 | 50.77 | 55.71 | 50.92 | 50.54 | 86.13 | 81.14 | 90.35 | 66.80 | 65.16 | 80.78 | 80.18
| | kappa | 0.4578 | 0.4516 | 0.5351 | 0.4566 | 0.4509 | 0.7378 | 0.7178 | 0.7364 | 0.6995 | 0.6651 | 0.7892 | 0.7969
| 10 | OA(%) | 54.68 | 53.98 | 61.31 | 54.44 | 53.71 | 83.19 | 83.80 | 73.91 | 76.60 | 75.83 | 85.09 | 85.10
| | AA(%) | 54.00 | 53.46 | 58.90 | 53.66 | 52.98 | 85.31 | 80.25 | 87.48 | 70.11 | 69.57 | 82.84 | 81.96
| | kappa | 0.4943 | 0.4867 | 0.5669 | 0.4908 | 0.4840 | 0.8084 | 0.8092 | 0.7055 | 0.7369 | 0.7278 | 0.8311 | 0.8312
| 15 | OA(%) | 58.83 | 57.60 | 64.56 | 58.29 | 56.86 | 87.81 | 86.23 | 87.82 | 80.02 | 80.96 | 87.69 | 88.02
| | AA(%) | 56.67 | 55.70 | 60.88 | 55.80 | 54.66 | 86.81 | 80.64 | 89.99 | 70.17 | 73.07 | 83.38 | 82.04
| | kappa | 0.5401 | 0.5267 | 0.6034 | 0.5328 | 0.5190 | 0.8611 | 0.8442 | 0.8620 | 0.7803 | 0.7852 | 0.8603 | 0.8640
| 20 | OA(%) | 61.57 | 60.53 | 67.26 | 61.26 | 59.83 | 89.13 | 88.24 | 87.93 | 84.44 | 84.46 | 89.18 | 89.20
| | AA(%) | 57.39 | 56.48 | 60.89 | 56.98 | 56.35 | 85.17 | 83.61 | 89.66 | 75.13 | 74.89 | 81.28 | 80.98
| | kappa | 0.5694 | 0.5578 | 0.6326 | 0.5654 | 0.5503 | 0.8765 | 0.8726 | 0.8631 | 0.8237 | 0.8241 | 0.8771 | 0.8773
University of Pavia | 3 | OA(%) | 60.50 | 60.55 | 54.40 | - | 61.03 | 78.48 | 67.41 | 81.83 | 79.71 | 70.52 | 83.66 | 84.04
| | AA(%) | 64.73 | 64.62 | 56.80 | - | 65.25 | 73.94 | 72.72 | 73.99 | 81.67 | 74.61 | 83.54 | 84.01
| | kappa | 0.5154 | 0.5157 | 0.4341 | - | 0.5203 | 0.7222 | 0.5736 | 0.7615 | 0.7333 | 0.6239 | 0.7911 | 0.7957
| 5 | OA(%) | 65.77 | 65.73 | 58.22 | - | 65.03 | 82.02 | 71.49 | 85.06 | 83.49 | 78.89 | 87.21 | 87.40
| | AA(%) | 68.53 | 68.49 | 59.97 | - | 68.56 | 78.94 | 75.25 | 80.49 | 85.11 | 81.10 | 86.40 | 86.70
| | kappa | 0.5731 | 0.5727 | 0.4788 | - | 0.5671 | 0.7675 | 0.6297 | 0.8061 | 0.7813 | 0.7300 | 0.8366 | 0.8390
| 7 | OA(%) | 70.36 | 70.34 | 60.02 | - | 69.01 | 84.40 | 74.98 | 86.92 | 87.81 | 84.75 | 88.83 | 89.43
| | AA(%) | 72.03 | 71.92 | 61.92 | - | 70.70 | 82.89 | 79.18 | 83.32 | 86.66 | 84.79 | 87.24 | 87.65
| | kappa | 0.6253 | 0.6247 | 0.5016 | - | 0.6107 | 0.7988 | 0.6714 | 0.8305 | 0.8393 | 0.8033 | 0.8564 | 0.8638
| 10 | OA(%) | 72.66 | 72.48 | 63.43 | - | 71.46 | 89.01 | 80.24 | 91.09 | 91.95 | 88.83 | 92.53 | 92.74
| | AA(%) | 74.12 | 73.95 | 64.54 | - | 72.85 | 87.22 | 83.33 | 89.87 | 90.37 | 87.97 | 90.94 | 91.10
| | kappa | 0.6553 | 0.6532 | 0.5450 | - | 0.6414 | 0.8577 | 0.7387 | 0.8836 | 0.8939 | 0.8545 | 0.9031 | 0.9057
| 15 | OA(%) | 77.90 | 78.26 | 65.48 | - | 76.26 | 91.86 | 81.26 | 92.30 | 94.38 | 92.03 | 94.76 | 94.93
| | AA(%) | 77.03 | 77.13 | 66.57 | - | 75.32 | 89.56 | 83.74 | 91.29 | 92.70 | 90.55 | 93.10 | 93.29
| | kappa | 0.7169 | 0.7210 | 0.5734 | - | 0.6975 | 0.8938 | 0.7549 | 0.8982 | 0.9257 | 0.8957 | 0.9314 | 0.9337
| 20 | OA(%) | 80.57 | 80.66 | 70.13 | - | 79.35 | 92.60 | 82.48 | 91.37 | 95.01 | 94.08 | 95.16 | 95.39
| | AA(%) | 78.84 | 78.84 | 69.32 | - | 77.40 | 90.79 | 85.38 | 89.52 | 93.34 | 92.49 | 93.29 | 93.61
| | kappa | 0.7497 | 0.7512 | 0.6254 | - | 0.7346 | 0.9034 | 0.7714 | 0.8865 | 0.9343 | 0.9222 | 0.9365 | 0.9396
Salinas | 3 | OA(%) | 79.13 | 79.15 | 78.22 | - | 80.86 | 70.21 | 75.30 | 76.84 | 80.21 | 80.84 | 88.14 | 89.46
| | AA(%) | 83.48 | 83.48 | 83.38 | - | 86.34 | 73.75 | 79.16 | 89.20 | 81.93 | 84.38 | 90.91 | 92.43
| | kappa | 0.7687 | 0.7688 | 0.7598 | - | 0.7877 | 0.6729 | 0.7217 | 0.7435 | 0.7808 | 0.7874 | 0.8681 | 0.8828
| 5 | OA(%) | 81.13 | 81.09 | 82.21 | - | 82.48 | 80.67 | 80.97 | 80.46 | 84.98 | 87.12 | 90.97 | 91.97
| | AA(%) | 85.86 | 85.88 | 87.55 | - | 87.96 | 84.59 | 87.58 | 78.96 | 86.78 | 89.04 | 94.29 | 94.77
| | kappa | 0.7906 | 0.7901 | 0.8035 | - | 0.8056 | 0.7859 | 0.7871 | 0.7835 | 0.8330 | 0.8570 | 0.8997 | 0.9108
| 7 | OA(%) | 83.68 | 83.66 | 83.58 | - | 84.63 | 88.20 | 90.21 | 87.46 | 87.18 | 89.92 | 93.25 | 94.01
| | AA(%) | 87.79 | 87.74 | 88.09 | - | 89.47 | 90.75 | 93.69 | 90.28 | 88.62 | 91.18 | 95.94 | 96.21
| | kappa | 0.8188 | 0.8186 | 0.8176 | - | 0.8293 | 0.8692 | 0.8906 | 0.8602 | 0.8576 | 0.8883 | 0.9251 | 0.9334
| 10 | OA(%) | 85.45 | 85.27 | 84.71 | - | 85.94 | 91.38 | 90.59 | 89.58 | 88.71 | 91.98 | 94.53 | 94.83
| | AA(%) | 89.15 | 89.09 | 89.30 | - | 90.34 | 94.45 | 93.99 | 93.03 | 90.26 | 92.91 | 96.51 | 96.61
| | kappa | 0.8382 | 0.8362 | 0.8305 | - | 0.8437 | 0.9036 | 0.8948 | 0.8892 | 0.8747 | 0.9109 | 0.9392 | 0.9426
| 15 | OA(%) | 86.89 | 86.77 | 86.04 | - | 87.28 | 95.26 | 92.69 | 92.36 | 91.59 | 94.10 | 96.06 | 96.14
| | AA(%) | 90.63 | 90.55 | 90.68 | - | 91.45 | 96.10 | 94.32 | 94.66 | 92.48 | 94.69 | 97.18 | 97.27
| | kappa | 0.8543 | 0.8530 | 0.8450 | - | 0.8587 | 0.9471 | 0.9174 | 0.9146 | 0.9066 | 0.9345 | 0.9562 | 0.9571
| 20 | OA(%) | 88.14 | 88.16 | 88.39 | - | 88.16 | 97.06 | 94.62 | 94.25 | 92.84 | 95.52 | 97.06 | 97.20
| | AA(%) | 91.44 | 91.48 | 91.80 | - | 92.09 | 96.89 | 93.43 | 94.38 | 93.70 | 95.91 | 97.55 | 97.63
| | kappa | 0.8680 | 0.8682 | 0.8809 | - | 0.8666 | 0.9633 | 0.9403 | 0.9359 | 0.9204 | 0.9503 | 0.9673 | 0.9687
Table 4. Classification results for each class in Indian Pines when 15 training samples per class are used.

Class | Raw | PCA | LPP | KPCA | AE | SuperPCA | SuperLPP | SuperKPCA | ContrastNet | CAE | SuperAE | ColAE
c1 | 34.70 | 35.91 | 40.57 | 31.72 | 31.17 | 100.00 | 100.00 | 100.00 | 53.45 | 64.65 | 100.00 | 98.81
c2 | 47.50 | 45.38 | 54.40 | 46.80 | 44.93 | 78.82 | 75.48 | 58.02 | 78.22 | 72.38 | 78.10 | 78.89
c3 | 36.87 | 34.91 | 42.88 | 36.10 | 38.18 | 91.56 | 99.53 | 96.23 | 68.63 | 75.18 | 87.41 | 83.61
c4 | 32.52 | 30.66 | 40.47 | 31.64 | 28.79 | 82.44 | 69.10 | 86.56 | 48.66 | 58.29 | 69.31 | 65.46
c5 | 66.76 | 65.47 | 67.28 | 66.76 | 63.60 | 98.55 | 97.01 | 99.76 | 91.74 | 85.49 | 97.31 | 96.12
c6 | 89.97 | 89.40 | 88.21 | 89.38 | 88.07 | 99.80 | 99.43 | 100.00 | 93.11 | 90.27 | 99.76 | 99.89
c7 | 22.17 | 20.98 | 23.44 | 21.52 | 20.77 | 50.80 | 39.39 | 34.21 | 26.00 | 45.89 | 59.74 | 51.95
c8 | 98.39 | 98.32 | 96.91 | 98.33 | 98.41 | 98.57 | 98.72 | 100.00 | 97.68 | 94.95 | 99.98 | 100.00
c9 | 8.25 | 7.54 | 14.74 | 7.51 | 5.97 | 45.59 | 10.20 | 100.00 | 22.73 | 16.52 | 35.19 | 21.74
c10 | 50.52 | 48.54 | 59.45 | 47.62 | 48.85 | 90.91 | 95.05 | 95.83 | 79.52 | 80.00 | 83.84 | 85.78
c11 | 70.36 | 72.25 | 80.06 | 70.02 | 73.29 | 87.94 | 95.61 | 98.71 | 87.54 | 89.22 | 93.54 | 94.20
c12 | 43.09 | 39.19 | 54.22 | 41.98 | 33.23 | 85.07 | 87.76 | 88.77 | 68.76 | 72.79 | 70.46 | 76.54
c13 | 82.80 | 81.65 | 83.30 | 80.81 | 82.05 | 100.00 | 100.00 | 100.00 | 81.66 | 84.36 | 99.79 | 100.00
c14 | 92.84 | 92.56 | 93.84 | 92.36 | 92.28 | 92.49 | 71.27 | 98.42 | 93.62 | 94.81 | 98.56 | 98.56
c15 | 36.92 | 35.50 | 42.04 | 37.32 | 33.08 | 97.51 | 91.96 | 99.46 | 66.97 | 71.13 | 92.72 | 92.18
c16 | 93.10 | 92.98 | 92.32 | 93.00 | 91.87 | 88.96 | 59.69 | 83.87 | 64.46 | 73.20 | 68.31 | 68.88
AA | 56.67 | 55.70 | 60.88 | 55.80 | 54.66 | 86.81 | 80.64 | 89.99 | 70.17 | 73.07 | 83.38 | 82.04
OA | 58.83 | 57.60 | 64.56 | 58.29 | 56.86 | 87.81 | 86.23 | 87.82 | 80.20 | 80.96 | 87.69 | 88.02
Table 5. Classification results for each class in the University of Pavia when 20 training samples per class are used.

Class | Raw | PCA | LPP | AE | SuperPCA | SuperLPP | SuperKPCA | ContrastNet | CAE | SuperAE | ColAE
c1 | 93.94 | 93.93 | 92.52 | 93.55 | 92.42 | 93.51 | 89.13 | 96.12 | 95.21 | 97.53 | 97.68
c2 | 91.48 | 90.93 | 88.97 | 91.67 | 98.33 | 86.62 | 97.58 | 99.08 | 98.87 | 98.88 | 98.85
c3 | 59.62 | 59.37 | 46.08 | 55.17 | 94.71 | 89.98 | 98.53 | 93.16 | 90.39 | 93.60 | 93.63
c4 | 70.22 | 69.52 | 65.85 | 69.50 | 69.54 | 80.28 | 76.85 | 84.02 | 81.28 | 77.95 | 78.38
c5 | 95.80 | 96.01 | 74.25 | 96.43 | 98.00 | 96.95 | 96.61 | 99.98 | 98.94 | 99.86 | 99.86
c6 | 62.08 | 63.36 | 44.53 | 62.07 | 99.02 | 54.54 | 98.56 | 96.53 | 95.06 | 98.00 | 99.16
c7 | 57.60 | 57.69 | 45.10 | 51.67 | 78.53 | 75.68 | 77.10 | 89.19 | 90.38 | 80.96 | 82.28
c8 | 57.60 | 57.69 | 45.10 | 51.67 | 78.53 | 75.68 | 71.89 | 83.61 | 84.28 | 80.96 | 82.28
c9 | 99.94 | 99.95 | 99.99 | 99.95 | 99.81 | 99.46 | 99.43 | 98.30 | 97.98 | 99.84 | 99.88
AA | 78.84 | 78.84 | 69.32 | 77.40 | 90.79 | 85.38 | 89.52 | 93.34 | 92.49 | 93.30 | 93.61
OA | 80.57 | 80.66 | 70.13 | 79.35 | 92.60 | 82.48 | 91.37 | 95.02 | 94.08 | 95.16 | 95.39
Table 6. Classification results for each class in Salinas when 20 training samples per class are used.

Class | Raw | PCA | LPP | AE | SuperPCA | SuperLPP | SuperKPCA | ContrastNet | CAE | SuperAE | ColAE
c1 | 97.70 | 97.70 | 99.38 | 98.74 | 99.97 | 100.00 | 100.00 | 97.07 | 98.84 | 100.00 | 100.00
c2 | 98.46 | 98.43 | 98.37 | 99.07 | 99.85 | 99.17 | 100.00 | 96.31 | 99.00 | 99.88 | 99.88
c3 | 91.23 | 91.16 | 89.20 | 91.73 | 98.52 | 98.23 | 96.05 | 95.95 | 96.08 | 99.99 | 99.79
c4 | 96.92 | 96.93 | 98.76 | 97.25 | 96.70 | 95.86 | 97.23 | 93.71 | 82.88 | 96.57 | 96.57
c5 | 96.71 | 96.69 | 95.07 | 96.58 | 95.67 | 77.29 | 95.76 | 91.82 | 98.68 | 98.29 | 98.14
c6 | 99.78 | 99.78 | 99.72 | 99.95 | 99.18 | 100.00 | 99.82 | 99.68 | 99.72 | 100.00 | 100.00
c7 | 98.41 | 98.35 | 98.66 | 99.03 | 99.70 | 99.68 | 99.83 | 94.04 | 99.39 | 99.76 | 99.80
c8 | 78.45 | 77.98 | 78.25 | 78.54 | 98.33 | 99.92 | 91.29 | 90.08 | 95.41 | 98.04 | 97.18
c9 | 98.89 | 98.84 | 99.24 | 99.23 | 98.05 | 98.04 | 90.32 | 98.61 | 99.67 | 99.13 | 99.12
c10 | 84.89 | 85.56 | 81.05 | 86.07 | 94.58 | 88.31 | 88.43 | 95.29 | 95.48 | 91.68 | 91.85
c11 | 78.10 | 77.99 | 75.48 | 80.91 | 88.41 | 62.57 | 89.76 | 77.67 | 89.15 | 98.79 | 98.68
c12 | 95.79 | 95.82 | 96.70 | 96.69 | 94.39 | 97.79 | 83.71 | 96.75 | 97.50 | 98.79 | 98.82
c13 | 94.76 | 94.65 | 98.99 | 96.43 | 98.21 | 99.55 | 96.10 | 93.98 | 98.56 | 98.63 | 98.12
c14 | 86.24 | 86.06 | 92.66 | 89.05 | 90.93 | 88.71 | 85.20 | 97.68 | 97.78 | 91.63 | 92.63
c15 | 68.46 | 69.54 | 69.06 | 66.18 | 98.57 | 90.82 | 99.87 | 84.75 | 87.41 | 90.34 | 92.32
c16 | 98.22 | 98.28 | 98.16 | 98.01 | 99.15 | 98.99 | 96.77 | 95.85 | 98.99 | 99.29 | 99.25
AA | 91.44 | 91.48 | 91.80 | 92.09 | 96.89 | 93.43 | 94.38 | 93.70 | 95.91 | 97.55 | 97.63
OA | 88.14 | 88.16 | 88.39 | 88.16 | 97.06 | 94.62 | 94.25 | 92.84 | 95.52 | 97.06 | 97.20
Table 7. Training time (in seconds) of nine DR methods on the three HSI data sets.

Data Set | PCA | LPP | KPCA | AE | SuperPCA | SuperLPP | SuperKPCA | SuperAE | ColAE
Indian Pines | 0.09 | 85.12 | 628.87 | 53.23 | 1.03 | 90.64 | 524.66 | 58.34 | 58.45
University of Pavia | 0.91 | 104.21 | - | 214.12 | 1.22 | 277.22 | 1245.12 | 158.58 | 160.10
Salinas | 0.73 | 102.12 | - | 198.72 | 1.47 | 232.98 | 1862.23 | 132.43 | 134.22