Article

Spectral-Locational-Spatial Manifold Learning for Hyperspectral Images Dimensionality Reduction

1 School of Electronics and Information, Northwestern Polytechnical University, 127 West Youyi Road, Xi'an 710072, China
2 Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an 710071, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(14), 2752; https://doi.org/10.3390/rs13142752
Submission received: 9 June 2021 / Revised: 6 July 2021 / Accepted: 8 July 2021 / Published: 13 July 2021

Abstract:
Dimensionality reduction (DR) plays an important role in hyperspectral image (HSI) classification. Unsupervised DR (uDR) is more practical due to the difficulty of obtaining class labels and their scarcity for HSIs. However, many existing uDR algorithms lack a comprehensive exploration of spectral-locational-spatial (SLS) information, which is of great significance for uDR in view of the complex intrinsic structure of HSIs. To address this issue, two uDR methods called SLS structure preserving projection (SLSSPP) and SLS reconstruction preserving embedding (SLSRPE) are proposed. Firstly, to facilitate the extraction of SLS information, a weighted spectral-locational (wSL) datum is generated to break the locality of spatial information extraction. Then, a new SLS distance (SLSD) excavating the SLS relationships among samples is designed to select effective SLS neighbors. In SLSSPP, a new uDR model that includes an SLS adjacency graph based on SLSD and a cluster centroid adjacency graph based on wSL data is proposed, which compresses intraclass samples and approximately separates interclass samples in an unsupervised manner. Meanwhile, in SLSRPE, to preserve the SLS relationship among target pixels and their nearest neighbors, a new SLS reconstruction weight is defined to obtain a more discriminative projection. Experimental results on the Indian Pines, Pavia University and Salinas datasets demonstrate that, with KNN and SVM classifiers under different classification conditions, the classification accuracies of SLSSPP and SLSRPE are approximately 4.88%, 4.15%, 2.51% and 2.30%, 5.31%, 2.41% higher than those of state-of-the-art DR algorithms.

Graphical Abstract

1. Introduction

Hyperspectral images (HSIs) with high spectral resolution and fine spatial resolution are easily accessible on account of advanced sensor technology, which have been intensively studied and widely applied in many fields, such as environmental monitoring [1], precision agriculture [2], urban planning [3], and Earth observation [4]. HSIs contain a large number of consecutive narrow spectral bands, which provide rich information for classification [5]. However, these bands have a strong correlation that results in massive redundant information in HSIs [6]. In addition, the high dimensionality and limited training samples of HSIs lead to the Hughes phenomenon [7]. Accordingly, dimensionality reduction (DR) plays an important role in addressing the aforementioned issue [8,9].
Many DR methods have been designed to transform the original features into a new low-dimensional space for HSI, most of which can be divided into supervised and unsupervised ones [10,11]. The supervised methods need the support of class labels to obtain the discriminant projection [9]. For instance, linear discriminant analysis (LDA) [12] utilizes the a priori class labels to separate the interclass samples and compact the intraclass samples. Nonparametric weighted feature extraction (NWFE) [13] calculates the weighted means and constructs nonparametric between-class and within-class scatter matrices by setting different weights on each sample. Regularized local discriminant embedding (RLDE) [14] constructs a similar graph of intraclass samples and a penalty graph of interclass samples, while adding two regularized terms to preserve the data diversity and address the singularity with limited training samples. To sum up, supervised methods usually aim to compact the homogeneity of intraclass samples and separate the heterogeneity of interclass samples by means of class labels, which is beneficial to improve the separability and classification performance of low-dimensional embedding. However, in practice, the collection of class labels of HSIs requires field exploration and verification by experts, which is expensive and time-consuming. This leads to the inability to obtain class labels in many cases, especially for HSIs covering land [15]. Therefore, in view of the difficulty of obtaining class labels and their scarcity, a superior unsupervised DR (uDR) method with high separability possesses more practical value.
To explore the intrinsic structure, manifold learning (ML) has been widely applied for the uDR of HSIs, such as isometric mapping (ISOMAP) [16], local linear embedding (LLE) [17] and Laplacian eigenmaps (LE) [18]. ISOMAP preserves the geodesic distances between points in low-dimensional space. LLE applies local neighbor reconstruction to preserve the local linear relationship. LE constructs a similarity graph for presenting the inherent nonlinear manifold structure. To address the out-of-sample problem of LE and LLE, locality preserving projection (LPP) [19,20] and neighborhood preserving embedding (NPE) [21] were proposed. However, these classic unsupervised ML methods only consider the spectral information and neglect the spatial information that has been shown to be of great importance for HSIs [22,23].
In recent years, many spectral-spatial DR methods have been proposed to fuse spatial correlation and spectral information for improving the classification performance [24,25]. Among them, two strategies for exploring spectral-spatial information can be summarized. One common strategy is preserving the spatial local pixel neighborhood structures, such as discriminative spectral-spatial margin (DSSM) [26], spatial-domain local pixel NPE (LPNPE) [14] and spatial-spectral local discriminant projection (SSLDP) [27]. DSSM finds spatial-spectral neighbors and preserves the local spatial-spectral relationship of HSIs. LPNPE remains the original spatial neighborhood relationship via minimizing the local pixel neighborhood preserving scatter. SSLDP designs two weighted matrices within neighborhood scatter to reveal the similarity of spatial neighbors. Another widely used strategy is to replace the common spectral distance with spatial or spatial-spectral combined distance, such as image patches distance (IPD) [28] and spatial coherence distance (SCD) [29]. IPD maps the distances between two image patches in HSIs as the spatial-spectral similarity measure. SCD utilizes the spatial coherence to measure the pairwise similarity between two local spatial patches in HSI. More recently, Hong et al. [24] proposed the spatial-spectral combined distance (SSCD) to fuse the spatial structure and spectral information for selecting effective spatial-spectral neighbors. Although these spectral-spatial methods use different ways to reveal the spatial intrinsic structure of HSIs, they still have two drawbacks: (1) the exploration of spatial information is merely based on the fixed spatial neighborhood window (or image patch), which may be constrained by the complex distribution of ground objects in HSIs; (2) they only consider the spectral information of local spatial neighborhood but ignore the importance of location coordinates.
In HSIs, rich information provided by high spectral resolution may increase the intraclass variation and decrease the interclass variation, while leading to lower interpretation accuracies. Moreover, different objects may share similar spectral properties (e.g., similar construction materials for both parking lots and roofs in the city area) which make it impossible to classify HSIs by only using spectral information [22]. In this case, location information, as one of the attributes of pixels, can play an important role in classification. The closer the pixels are in location, the more probable it is that they come from the same class and vice versa, especially for HSIs covering land. The contribution of location coordinates to DR and classification has been demonstrated in several existing studies. Kim et al. [30] directly combined the spatial proximity and the spectral similarity through a kernel PCA framework. Hou et al. [31] constructed the joint spatial-pixel characteristics distance instead of the traditional Euclidean distance. Li et al. [32] proposed a new distance metric by combining the spectral feature and spatial coordinate. However, these methods ignore the contribution of the spectral information in the local spatial neighborhood that can improve the robustness of the classifier against noise pixels, since pixels within a small spatial neighborhood usually present similar spectral characteristics.
In short, the methods mentioned above either neglect the location coordinates or the local spatial neighborhood characteristics and lack a comprehensive exploration of spectral-locational-spatial (SLS) information. To address this issue, two unsupervised SLS manifold learning (uSLSML) methods were proposed for uDR of HSIs, called SLS structure preserving projection (SLSSPP) and SLS reconstruction preserving embedding (SLSRPE). SLSSPP aims to preserve the SLS neighbor structure of data, while SLSRPE is designed to maintain the SLS manifold structure of HSIs.
The main contributions of this paper are listed below:
  • To facilitate the extraction of SLS information, a weighted spectral-locational (wSL) datum is generated with a parameter that balances the spectral and locational effects, so that the spectral information and location coordinates complement each other. Moreover, to discover SLS relationships among pixels, a new distance measurement, the SLS distance (SLSD), which fuses spectral-locational information with the local spatial neighborhood, is proposed for HSIs; it excels at finding nearest neighbors of the same class.
  • In order to improve the separability of low-dimensional embeddings, SLSSPP constructs a new uDR model that compresses adjacent samples and separates cluster centroids, thereby approximately compressing intraclass samples and separating interclass samples without any class labels. In particular, the SLS adjacency graph is constructed based on SLSD instead of the original spectral distance, and the cluster index in the centroid adjacency graph is generated from wSL data, which allows SLS information to be integrated into the projection and improves the identifiability of low-dimensional embeddings.
  • Conventional reconstruction weights are calculated only from spectral information, which cannot truly reflect the relationships among samples, because HSIs inevitably contain noise and high dimensionality, and even different objects may have similar spectral properties. To address this issue, SLSRPE redefines the reconstruction weights based on wSL data, considering not only the spectral-locational information but also the local spatial neighborhood, which allows SLS information to be integrated into the projection for more efficient manifold reconstruction.
This paper is organized as follows. In Section 2, we briefly introduce the related works. The proposed SLSD, SLSSPP and SLSRPE are described in detail in Section 3. Section 4 presents the experimental results on three datasets that demonstrate the superiority of the proposed DR methods. The conclusion is presented in Section 5.

2. Related Works

In this section, we briefly review the related works, LPP and NPE. Suppose that an HSI dataset consists of $D$ bands and $m$ pixels; it can be defined as $X = [x_1, \dots, x_i, \dots, x_m] \in \mathbb{R}^{D \times m}$. $l_{x_i} \in \{1, 2, \dots, c\}$ denotes the class label of $x_i$, where $c$ is the number of land-cover types. The low-dimensional embedding dataset is defined as $Y = [y_1, \dots, y_i, \dots, y_m] \in \mathbb{R}^{d \times m}$, in which $d$ denotes the embedding dimensionality and $d < D$. For linear DR methods, $Y$ is obtained as $Y = V^T X$ with the projection matrix $V \in \mathbb{R}^{D \times d}$.

2.1. Locality Preserving Projection

Locality preserving projection (LPP) is a linear approximation of the nonlinear Laplacian eigenmaps [33]. LPP [19] expects the low-dimensional representation to preserve the local geometric structure of the original high-dimensional space. The first step in LPP is to construct an adjacency graph, which aims to make related nodes (nodes connected in the adjacency graph) as close as possible in the low-dimensional space. An edge is put between nodes $i$ and $j$ if $x_i$ and $x_j$ are close. Then, LPP weights the edges, and the weight is defined as
$$
w_{ij} =
\begin{cases}
\exp\!\left(-\dfrac{\|x_i - x_j\|^2}{t}\right) \ \text{or} \ 1, & x_i \in N_k(x_j) \ \text{or} \ x_j \in N_k(x_i) \\
0, & \text{otherwise}
\end{cases}
\tag{1}
$$
where $N_k(x_j)$ denotes the $k$ nearest neighbors of $x_j$ and $t$ is a parameter. The optimization problem of LPP is defined as
$$
\min_V \sum_{i,j}^{m} \frac{1}{2} \left\| y_i - y_j \right\|^2 w_{ij} = \min_V \operatorname{tr}\!\left( V^T X L X^T V \right)
\tag{2}
$$
where $L = D - W$ is the Laplacian matrix and $D$ is a diagonal matrix whose entries are the column (or row, since $W$ is symmetric) sums of $W$, $D_{ii} = \sum_j w_{ji}$. The optimization problem in Equation (2) can be solved through the following generalized eigenvalue problem:
$$
X L X^T V = \lambda X D X^T V
\tag{3}
$$
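To make the LPP pipeline concrete (heat-kernel graph, Laplacian, generalized eigenproblem of Equation (3)), here is a minimal NumPy/SciPy sketch. The function name `lpp`, the parameter defaults, and the small ridge added for numerical stability are our own illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, k=5, t=1.0, d=2):
    """Minimal LPP sketch. X is (D, m); returns a projection V of shape (D, d)."""
    D, m = X.shape
    # Pairwise squared Euclidean distances between columns of X.
    sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    # k nearest neighbours of each sample (index 0 is the sample itself).
    idx = np.argsort(sq, axis=1)[:, 1:k + 1]
    W = np.zeros((m, m))
    for i in range(m):
        for j in idx[i]:
            w = np.exp(-sq[i, j] / t)   # heat-kernel weight, Eq. (1)
            W[i, j] = W[j, i] = w       # edge if either sample is a neighbour
    Dm = np.diag(W.sum(axis=1))
    L = Dm - W                          # graph Laplacian, L = D - W
    A = X @ L @ X.T
    B = X @ Dm @ X.T + 1e-8 * np.eye(D)  # ridge: X D X^T may be singular
    # Generalized eigenproblem X L X^T v = lambda X D X^T v, Eq. (3).
    _, vecs = eigh(A, B)
    return vecs[:, :d]                  # eigenvectors of the d smallest eigenvalues

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 40))           # 10 bands, 40 pixels (toy data)
V = lpp(X, k=5, t=1.0, d=2)
print(V.shape)  # (10, 2)
```

The low-dimensional embedding is then `Y = V.T @ X`, of shape `(d, m)`.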

2.2. Neighborhood Preserving Embedding

Neighborhood preserving embedding (NPE) is a linear approximation to locally linear embedding (LLE) [34] and aims to preserve the local manifold structure [21]. Similar to LPP, the first step in NPE is to construct an adjacency graph. Then, it computes the weight matrix $W$. If there is no edge between nodes $i$ and $j$, the weight $w_{ij} = 0$. Otherwise, $w_{ij}$ can be calculated by minimizing the following reconstruction error function:
$$
\min_{w_{ij}} \sum_{i=1}^{m} \left\| x_i - \sum_{j=1}^{m} x_j w_{ij} \right\|^2, \quad \text{s.t.} \ \sum_{j=1}^{m} w_{ij} = 1.
\tag{4}
$$
To preserve the local manifold structure on high-dimensional data, NPE assumes that the low-dimensional embedding y i can be approximated by the linear combination of its corresponding neighbors. The optimization problem of NPE is defined as
$$
\min_V \sum_{i=1}^{m} \left\| y_i - \sum_{j=1}^{m} y_j w_{ij} \right\|^2 = \min_V \operatorname{tr}\!\left( V^T X M X^T V \right)
\tag{5}
$$
where $M = (I - W)(I - W)^T$ and $I = \operatorname{diag}(1, \dots, 1)$. The optimization problem in Equation (5) can be solved through the following generalized eigenvalue problem:
$$
X M X^T V = \lambda X X^T V
\tag{6}
$$
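The sum-to-one reconstruction weights of Equation (4) have the standard LLE closed form: for each sample, solve a small linear system on the local Gram matrix of its centred neighbours. The sketch below illustrates only this weight step; the function name `npe_weights` and the regularization constant are our own assumptions.

```python
import numpy as np

def npe_weights(X, k=5):
    """Sketch of the NPE/LLE reconstruction weights, Eq. (4).
    X is (D, m); returns an (m, m) sparse-in-spirit weight matrix whose
    rows sum to 1 over each sample's k nearest neighbours."""
    D, m = X.shape
    sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    idx = np.argsort(sq, axis=1)[:, 1:k + 1]
    W = np.zeros((m, m))
    for i in range(m):
        Z = X[:, idx[i]] - X[:, [i]]          # neighbours centred on x_i
        G = Z.T @ Z                           # local Gram matrix
        G += 1e-6 * np.trace(G) * np.eye(k)   # regularise if near-singular
        w = np.linalg.solve(G, np.ones(k))    # G w = 1 (Lagrange solution)
        W[i, idx[i]] = w / w.sum()            # enforce sum-to-one constraint
    return W

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 30))
W = npe_weights(X, k=5)
print(np.allclose(W.sum(axis=1), 1.0))  # True
```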

3. Methodology

In this section, we introduce the proposed SLSD and the two uSLSML methods, SLSSPP and SLSRPE, in detail. Their flowcharts are shown in Figure 1 and Figure 2. Figure 1 shows the calculation process of SLSSPP, where the first step is to generate wSL data from the location coordinates and spectral bands of each pixel, which is the key to breaking the locality of spatial information extraction. Then, SLSD is computed based on the wSL data, which are also clustered to generate a clustering index. Based on the SLSD, SLSSPP finds the k nearest neighbors and computes the weight matrix to construct the SLS adjacency graph. Meanwhile, according to the clustering index, the cluster centroids are computed from the raw spectral data. On the basis of the cluster centroids, the weight matrix is calculated to construct the cluster centroid adjacency graph. Eventually, based on the SLS adjacency graph and the cluster centroid adjacency graph, the SLSSPP model is built to obtain an optimal projection matrix from the raw spectral data.
Figure 2 shows the calculation process of SLSRPE, whose first step is also to generate wSL data from the location coordinates and spectral bands of each pixel. Based on the wSL data, the second step of SLSRPE is to construct the redefined reconstruction error function and compute the redefined reconstruction weight matrix, while computing SLSD to find the k nearest neighbors. Then, according to the redefined reconstruction weight matrix and the k nearest neighbors, the adjacency graph is constructed. Finally, the SLSRPE model is built to obtain an optimal projection matrix, which is used to transform the original high-dimensional data into the low-dimensional space. In the end, the low-dimensional features are classified by classifiers.

3.1. Spectral-Locational-Spatial Distance

In fact, SLSSPP and SLSRPE are two graph embedding methods. For an HSI dataset $X = [x_1, \dots, x_i, \dots, x_m]$ with $m$ pixels, its adjacency graph $G$ has $m$ nodes. In general, we put an edge between nodes $i$ and $j$ in $G$ if $x_i$ and $x_j$ are close (that is, $x_i$ is a nearest neighbor of $x_j$, or $x_j$ is a nearest neighbor of $x_i$). As a rule, we expect that samples of the same class are connected when constructing $G$, since the connected samples are usually required to gather or to maintain a manifold structure. However, if a mass of connected samples belong to different classes, the classification performance of the low-dimensional features will inevitably be reduced. Accordingly, how to explore the relationships among samples and find nearest neighbors of the same class is the key to unsupervised manifold learning. In this subsection, we propose a distance calculation method, SLSD, to measure the similarity among samples.
With the recognition of the importance of spatial information in HSI, many spectral-spatial DR methods design different spectral-spatial distance to replace the raw spectral distance but ignore the location information of pixels. Figure 3 shows the comparison of spectral bands of pixels with different locational relationships. A and B display the spectral curves of pixels that are in the same class and close to each other in location, while their spectral bands are quite different. At the same time, although the two pixels in C or D are of different kinds and located far away, their spectral curves are almost identical. In both cases, it is difficult to determine the correct pixel relationship between them based on spectral information alone. In fact, location information can alleviate this problem well since pixels that are closer to each other in location are more likely to belong to the same class, especially for HSIs covering land.
In this paper, we regard the location information as one of the attributes of pixels and utilize it to break the locality of spatial information extraction and capture more spatial information. For an HSI dataset $X = [x_1, \dots, x_i, \dots, x_m] \in \mathbb{R}^{D \times m}$, the location information can be denoted as $C = [c_1, \dots, c_i, \dots, c_m] \in \mathbb{R}^{2 \times m}$, where $c_i = [p_i, q_i]^T$ is the coordinate of the pixel $x_i$. To fuse the spectral and locational information of pixels in HSIs, a spectral-locational dataset is constructed as follows:
$$
X_C = \begin{bmatrix} C \\ X \end{bmatrix} = \begin{bmatrix} c_1, \dots, c_i, \dots, c_m \\ x_1, \dots, x_i, \dots, x_m \end{bmatrix} \in \mathbb{R}^{(D+2) \times m}.
\tag{7}
$$
However, due to differences in image size and the complexity of the homogeneous-region distribution in HSIs, this simple combination of spectrum and location is not reasonable. In order to balance the effects of location and spectrum on the relationships among samples, the weighted spectral-locational (wSL) data $X_C = [x_{C1}, \dots, x_{Ci}, \dots, x_{Cm}]$ are redefined as
$$
X_C = \begin{bmatrix} \beta C \\ (1-\beta) X \end{bmatrix} = \begin{bmatrix} \beta c_1, \dots, \beta c_m \\ (1-\beta) x_1, \dots, (1-\beta) x_m \end{bmatrix}
\tag{8}
$$
and $x_{Ci} = \begin{bmatrix} \beta c_i \\ (1-\beta) x_i \end{bmatrix}$, where $\beta$ is a spectral-locational trade-off parameter. It needs to be emphasized that $X_C$ is only used to calculate the relationships among pixels; it is not the data to be reduced. There is therefore no need to discuss the rationality of the physical meaning of $X_C$. In addition, $X_C$, carrying location information, also breaks the locality of spatial information extraction.
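Building the wSL data of Equation (8) amounts to stacking scaled pixel coordinates on top of scaled spectra. A minimal sketch, assuming the HSI is stored as an `(H, W, D)` cube and pixels are flattened in row-major order (the function name `wsl_data` is ours):

```python
import numpy as np

def wsl_data(cube, beta=0.5):
    """Build weighted spectral-locational (wSL) data X_C, Eq. (8).
    cube: (H, W, D) hyperspectral image. Returns a ((D+2), H*W) array
    stacking beta-weighted coordinates over (1-beta)-weighted spectra."""
    H, W, D = cube.shape
    p, q = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    C = np.stack([p.ravel(), q.ravel()])          # 2 x m pixel coordinates
    X = cube.reshape(-1, D).T                     # D x m spectra
    return np.vstack([beta * C, (1 - beta) * X])  # (D+2) x m, Eq. (8)

cube = np.random.default_rng(2).normal(size=(5, 6, 4))  # toy 5x6 image, 4 bands
XC = wsl_data(cube, beta=0.3)
print(XC.shape)  # (6, 30)
```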
We define the local neighborhood space of $x_{Ci}$ as $\Omega(x_{Ci})$, the set of $s^2$ pixels $x_{Ci}^r$ ($r = 1, \dots, s^2$) in an $s \times s$ spatial window, which is formulated as
$$
\Omega(x_{Ci}) = \left\{ x_{Ci}(p, q) \ \middle|\ p \in \left[ p_i - \tfrac{s-1}{2},\ p_i + \tfrac{s-1}{2} \right],\ q \in \left[ q_i - \tfrac{s-1}{2},\ q_i + \tfrac{s-1}{2} \right] \right\}.
\tag{9}
$$
Actually, the primary responsibility of SLSD is to find effective nearest neighbors. To search for highly credible neighbors, SLSD uses a local neighborhood space instead of a single central sample. Accordingly, the SLSD of samples $x_i$ and $x_j$ is defined as
$$
d_{\mathrm{SLSD}}(x_i, x_j) = d\big(\Omega(x_{Ci}), x_{Cj}\big),
\tag{10}
$$
where $x_i$ is one of the neighbors of the target sample $x_j$. $d\big(\Omega(x_{Ci}), x_{Cj}\big)$ is the distance between $\Omega(x_{Ci})$ and $x_{Cj}$, defined as follows:
$$
d\big(\Omega(x_{Ci}), x_{Cj}\big) = \frac{\sum_{r=1}^{s^2} t_i^r \left\| x_{Cj} - x_{Ci}^r \right\|}{\sum_{r=1}^{s^2} t_i^r}, \quad x_{Ci}^r \in \Omega(x_{Ci}),
\tag{11}
$$
in which $t_i^r$ is calculated by
$$
t_i^r = \exp\!\left( -\gamma \left\| x_{Ci} - x_{Ci}^r \right\| \right), \quad x_{Ci}^r \in \Omega(x_{Ci}).
\tag{12}
$$
$\gamma$ is a constant, empirically set to 0.2 in the experiments. The window parameter $s$ is the size of the local spatial neighborhood $\Omega(x_{Ci})$. $x_{Ci}^r$ is a pixel in $\Omega(x_{Ci})$ surrounding $x_{Ci}$, and $t_i^r$ is the weight of $x_{Ci}^r$. The more similar $x_{Ci}^r$ is to $x_{Ci}$, the larger the value of $t_i^r$, and the more important the distance between $x_{Ci}^r$ and $x_{Cj}$ becomes. The spectral-locational trade-off parameter $\beta$ adjusts the influence of location information on the distance. When $\beta = 0$, SLSD is a spectral-spatial distance based only on the spectral domain. When $\beta = 1$, SLSD is a locational-spatial distance based only on coordinates. By choosing an appropriate $\beta$ value, we can excavate the realistic relationships among the samples as far as possible. Furthermore, this makes the neighbor samples more likely to fall into the same class as the target sample. To sum up, SLSD not only extracts local spatial neighborhood information, but also explores global spatial relations in HSIs based on location information.
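Equations (10)-(12) can be sketched directly: the distance from target $x_j$ to neighbor candidate $x_i$ is a similarity-weighted mean of distances from $x_{Cj}$ to every pixel in the window around $x_i$. The sketch below assumes row-major flattening of the image grid and clips the window at the image border; the function name `slsd` is our own.

```python
import numpy as np

def slsd(XC, shape, i, j, s=3, gamma=0.2):
    """Sketch of the SLS distance, Eqs. (10)-(12).
    XC: wSL data of shape ((D+2), m); shape: (H, W) image grid;
    i, j: flat pixel indices. Returns d_SLSD(x_i, x_j)."""
    H, W = shape
    pi, qi = divmod(i, W)                 # grid coordinates of x_i
    h = (s - 1) // 2
    num = den = 0.0
    for p in range(max(0, pi - h), min(H, pi + h + 1)):
        for q in range(max(0, qi - h), min(W, qi + h + 1)):
            r = p * W + q                 # window pixel x_Ci^r
            # weight t_i^r, Eq. (12)
            t = np.exp(-gamma * np.linalg.norm(XC[:, i] - XC[:, r]))
            # weighted distance term, Eq. (11)
            num += t * np.linalg.norm(XC[:, j] - XC[:, r])
            den += t
    return num / den

XC = np.random.default_rng(3).normal(size=(6, 20))  # toy wSL data, 4x5 grid
d = slsd(XC, (4, 5), i=7, j=12, s=3)
print(d >= 0)  # True
```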
To illustrate the effectiveness of SLSD, we compared it with the SD and SSCD proposed in [24]. SD is the simple spectral distance, and SSCD is a spectral-spatial distance without location information. Table 1 shows the number of different-class samples among the top 10 nearest neighbors, counted over all samples in the three datasets. For a fair comparison, SSCD and SLSD have the same spatial window parameters for the three datasets. From Table 1, these counts are ordered as SLSD < SSCD < SD. This means that not only the local spatial neighborhood but also the location information is quite valuable for exploring the relationships among samples. In addition, Table 1 also shows that SLSD rarely admits different-class samples into the top 10 nearest neighbors. This means that the neighbors obtained by SLSD mostly belong to the same class as the target sample, which indicates that SLSD is excellent for correctly determining pixel relationships in HSIs.

3.2. Spectral-Locational-Spatial Structure Preserving Projection

The core idea of many supervised DR algorithms for obtaining discriminant projections is to shorten the intraclass distance and expand the interclass distance in the low-dimensional space [14,27]. With sufficient class labels, this approach can indeed achieve excellent DR for classification. However, it is quite difficult to obtain class labels for HSIs covering land. In this paper, SLSSPP is proposed to approach this concept without any class labels. On account of SLSD, SLSSPP can achieve the goal of shortening the intraclass distance in an unsupervised manner, since most of the nearest neighbors belong to the same class as the target sample. Meanwhile, in SLSSPP, expanding the interclass distance is simulated by maximizing the distance among the cluster centroids based on wSL data.
With SLSD, the SLS adjacency graph $G^{\mathrm{SLS}} = \{X, W^{\mathrm{SLS}}\}$ can be constructed, where $X$ is the vertex set of the graph and $W^{\mathrm{SLS}}$ is the weight matrix. In graph $G^{\mathrm{SLS}}$, if $x_j$ belongs to the $k$ nearest neighbors $N_k^{\mathrm{SLS}}(x_i)$ of $x_i$ based on SLSD, an edge is connected between them. In this paper, if $x_i \in N_k^{\mathrm{SLS}}(x_j)$ or $x_j \in N_k^{\mathrm{SLS}}(x_i)$, the weight of the edge is defined as
$$
w_{ij}^{\mathrm{SLS}} = \exp\!\left( -\frac{d_{\mathrm{SLSD}}(x_i, x_j)^2}{2 t_i^2} \right),
\tag{13}
$$
where:
$$
t_i = \frac{1}{k} \sum_{x_j \in N_k^{\mathrm{SLS}}(x_i)} d_{\mathrm{SLSD}}(x_i, x_j).
\tag{14}
$$
Otherwise, $w_{ij}^{\mathrm{SLS}} = 0$. $d_{\mathrm{SLSD}}(x_i, x_j)$ is the SLSD between $x_i$ and $x_j$. Due to the superior performance of SLSD in representing the relationships among samples in HSIs, $W^{\mathrm{SLS}}$ based on SLSD lets the low-dimensional space keep a more realistic structure of the raw space. Since $k \ll m$, $W^{\mathrm{SLS}}$ is a sparse matrix.
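Given a precomputed matrix of SLSD values, the adaptive-bandwidth weights of Equations (13)-(14) can be sketched as follows. Note that SLSD is not symmetric in general (the window is taken around one of the two pixels), but the graph is symmetrized as in the "either is a neighbor" rule above; the function name `sls_weights` is our own.

```python
import numpy as np

def sls_weights(D_sls, k=5):
    """Sketch of the SLS adjacency weights, Eqs. (13)-(14).
    D_sls: (m, m) matrix of precomputed SLSD values (assumed available).
    t_i is the mean SLSD of x_i to its k nearest neighbours."""
    m = D_sls.shape[0]
    idx = np.argsort(D_sls, axis=1)[:, 1:k + 1]   # skip self at index 0
    W = np.zeros((m, m))
    for i in range(m):
        t_i = D_sls[i, idx[i]].mean()             # adaptive bandwidth, Eq. (14)
        for j in idx[i]:
            w = np.exp(-D_sls[i, j] ** 2 / (2 * t_i ** 2))  # Eq. (13)
            W[i, j] = max(W[i, j], w)
            W[j, i] = W[i, j]                     # symmetrize the graph
    return W

rng = np.random.default_rng(4)
A = rng.random((12, 12))
D_sls = (A + A.T) / 2                             # toy symmetric distance matrix
np.fill_diagonal(D_sls, 0)
W = sls_weights(D_sls, k=4)
print(np.allclose(W, W.T))  # True
```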
In order to shorten the distance among each sample and its k nearest neighbors in embedding space, the optimization problem is defined as
$$
\begin{aligned}
\min_V \sum_{i,j}^{m} \frac{1}{2} \left\| V^T x_i - V^T x_j \right\|^2 w_{ij}^{\mathrm{SLS}}
&= \sum_{i}^{m} V^T x_i D_{ii}^{\mathrm{SLS}} x_i^T V - \sum_{i,j}^{m} V^T x_i w_{ij}^{\mathrm{SLS}} x_j^T V \\
&= \operatorname{tr}\!\left( V^T X \left( D^{\mathrm{SLS}} - W^{\mathrm{SLS}} \right) X^T V \right) \\
&= \operatorname{tr}\!\left( V^T X L^{\mathrm{SLS}} X^T V \right),
\end{aligned}
\tag{15}
$$
in which $W^{\mathrm{SLS}}$ is a symmetric matrix and $D^{\mathrm{SLS}}$ is a diagonal matrix whose entries are the column or row sums of $W^{\mathrm{SLS}}$, $D_{ii}^{\mathrm{SLS}} = \sum_j w_{ij}^{\mathrm{SLS}}$. $L^{\mathrm{SLS}} = D^{\mathrm{SLS}} - W^{\mathrm{SLS}}$ is the Laplacian matrix. In addition, it can be further reduced to:
$$
\min_V V^T X L^{\mathrm{SLS}} X^T V.
\tag{16}
$$
To indirectly expand the interclass distance, we maximize the distance among cluster centroids. Table 2 shows the number of heterogeneous samples in the same cluster when the three datasets are divided into 35 clusters. Compared with the raw spectral data $X$, the wSL data $X_C$ have better clustering performance. This means that $X_C$ should be used to compute the cluster index that guides the low-dimensional features. To facilitate implementation and calculation, we adopt the K-means algorithm to cluster $X_C$. Assuming $X_C$ is divided into $k_m$ clusters, the index $F = [f_1, \dots, f_i, \dots, f_{k_m}]$ of the $k_m$ clusters can be obtained as follows:
$$
F = \mathrm{Kmeans}(X_C, k_m),
\tag{17}
$$
where $\mathrm{Kmeans}(\cdot)$ is the K-means algorithm and $f_i$ is the set of sample indices belonging to the $i$-th cluster. According to the index $F$, the cluster centroids $U = [u_1, \dots, u_i, \dots, u_{k_m}]$ of $X$ are calculated as
$$
u_i = \frac{1}{|f_i|} \sum_{j \in f_i} x_j.
\tag{18}
$$
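Equations (17)-(18) cluster the wSL data but take the centroids over the raw spectra. A plain-NumPy K-means is used below purely as a stand-in (the paper only specifies "K-means"; the function name `kmeans_centroids`, the iteration count, and the empty-cluster fallback are our own choices):

```python
import numpy as np

def kmeans_centroids(XC, X, km=4, iters=20, seed=0):
    """Sketch of Eqs. (17)-(18): cluster the wSL data XC with K-means,
    then compute centroids U of the raw spectral data X from the
    resulting cluster index."""
    rng = np.random.default_rng(seed)
    m = XC.shape[1]
    centers = XC[:, rng.choice(m, km, replace=False)]   # random init
    for _ in range(iters):
        # squared distances of every sample to every center: (km, m)
        d = ((XC[:, None, :] - centers[:, :, None]) ** 2).sum(axis=0)
        labels = d.argmin(axis=0)                       # cluster index F
        for c in range(km):
            if (labels == c).any():
                centers[:, c] = XC[:, labels == c].mean(axis=1)
    # centroid of the RAW spectra per cluster, Eq. (18)
    U = np.stack([X[:, labels == c].mean(axis=1) if (labels == c).any()
                  else np.zeros(X.shape[0]) for c in range(km)], axis=1)
    return labels, U

rng = np.random.default_rng(5)
XC = rng.normal(size=(6, 50))   # toy wSL data
X = rng.normal(size=(4, 50))    # toy raw spectra
labels, U = kmeans_centroids(XC, X, km=4)
print(U.shape)  # (4, 4)
```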
The optimization problem of cluster centroid distance maximization is defined as
$$
\begin{aligned}
\max_V \sum_{i,j}^{k_m} \frac{1}{2} \left\| V^T u_i - V^T u_j \right\|^2 w_{ij}^{C}
&= \sum_{i}^{k_m} V^T u_i D_{ii}^{C} u_i^T V - \sum_{i,j}^{k_m} V^T u_i w_{ij}^{C} u_j^T V \\
&= \operatorname{tr}\!\left( V^T U \left( D^{C} - W^{C} \right) U^T V \right)
 = \operatorname{tr}\!\left( V^T U L^{C} U^T V \right).
\end{aligned}
\tag{19}
$$
Here, W C is the weight matrix of the cluster centroid, which is defined as
$$
w_{ij}^{C} = \left( 1 + \exp\!\left( -\frac{d_E(u_i, u_j)^2}{2 t_i^2} \right) \right)^{-1}, \quad
t_i = \frac{1}{k_m} \sum_{j}^{k_m} d_E(u_i, u_j), \quad
u_i = \frac{1}{|f_i|} \sum_{j \in f_i} x_j.
\tag{20}
$$
$d_E(\cdot)$ is the Euclidean distance function. The definition of $W^C$ means that the greater the distance between cluster centroids $u_i$ and $u_j$, the greater the weight $w_{ij}^C$, and thus the greater the degree of separation between clusters $i$ and $j$ in the low-dimensional space, and vice versa. $W^C$ is a symmetric matrix and $D_{ii}^C = \sum_j w_{ij}^C$. This optimization problem can be further reduced to:
$$
\max_V V^T U L^{C} U^T V.
\tag{21}
$$
Aiming to simultaneously minimize the distance among each sample and its k nearest neighbors and maximize the distance among cluster centroids to obtain the discriminant low-dimensional representations, the DR model of SLSSPP is defined as
$$
J(V) = \max_V \frac{V^T U L^{C} U^T V}{V^T X L^{\mathrm{SLS}} X^T V}.
\tag{22}
$$
This is equivalent to the following optimization problem:
$$
\max_V V^T U L^{C} U^T V \quad \text{s.t.} \ V^T X L^{\mathrm{SLS}} X^T V = Z,
\tag{23}
$$
where $Z$ is a non-zero constant matrix. Using Lagrangian multipliers, the optimal solution can be obtained through the following generalized eigenvalue problem:
$$
U L^{C} U^T V = \lambda X L^{\mathrm{SLS}} X^T V
\ \Longrightarrow \
\left( X L^{\mathrm{SLS}} X^T \right)^{-1} U L^{C} U^T V = \lambda V,
\tag{24}
$$
where $\lambda$ is the eigenvalue of Equation (24). With the eigenvectors $v_1, v_2, \dots, v_d$ corresponding to the $d$ largest eigenvalues, the optimal projection matrix can be represented as
$$
V = [v_1, v_2, \dots, v_d] \in \mathbb{R}^{D \times d}.
\tag{25}
$$
Finally, the low-dimensional embedding dataset is given by $Y = V^T X \in \mathbb{R}^{d \times m}$.
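Given the two Laplacians, the SLSSPP solve of Equations (23)-(25) is a symmetric-definite generalized eigenproblem. A minimal SciPy sketch, with a small ridge added because $X L^{\mathrm{SLS}} X^T$ may be singular in practice (the function names and the ridge are our own assumptions):

```python
import numpy as np
from scipy.linalg import eigh

def laplacian(W):
    """Graph Laplacian L = D - W of a symmetric weight matrix."""
    return np.diag(W.sum(axis=1)) - W

def slsspp_projection(X, L_sls, U, L_c, d=2, ridge=1e-6):
    """Sketch of the SLSSPP solve, Eqs. (23)-(25): generalized eigenproblem
    U L^C U^T v = lambda X L^SLS X^T v; keep the eigenvectors of the d
    LARGEST eigenvalues."""
    A = U @ L_c @ U.T
    B = X @ L_sls @ X.T + ridge * np.eye(X.shape[0])  # ridge for stability
    _, vecs = eigh(A, B)                              # ascending eigenvalues
    return vecs[:, -d:][:, ::-1]                      # d largest, largest first

rng = np.random.default_rng(6)
X = rng.normal(size=(8, 40))            # 8 bands, 40 pixels (toy data)
U = rng.normal(size=(8, 5))             # 5 toy cluster centroids
Ws = rng.random((40, 40)); Ws = (Ws + Ws.T) / 2   # toy W^SLS
Wc = rng.random((5, 5));   Wc = (Wc + Wc.T) / 2   # toy W^C
V = slsspp_projection(X, laplacian(Ws), U, laplacian(Wc), d=3)
print((V.T @ X).shape)  # (3, 40)
```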
The detailed procedure of SLSSPP is given in Algorithm 1. In general, SLSSPP makes two contributions: (1) it takes advantage of SLSD to search for nearest neighbors and construct the adjacency graph; and (2) a cluster centroid adjacency graph based on wSL data is constructed. In order to demonstrate their superiority in dimensionality reduction, comparative experiments on the three datasets were carried out, and the classification overall accuracies (OAs) are shown in Table 3, where $n_i$ is the number of training samples per class used for the classifiers. LPP_Cluster represents the combination of LPP and the cluster centroid adjacency graph, while LPP_SLSD represents traditional LPP using SLSD to find nearest neighbors and construct the adjacency graph. From Table 3, both LPP_Cluster and LPP_SLSD outperform traditional LPP under different training conditions for the two classifiers, which means that both designs in SLSSPP are valid. In fact, based on these two designs, the SLSSPP model in Equation (22) can indirectly reduce the intraclass distance and increase the interclass distance in an unsupervised manner, which not only preserves the neighborhood structure of HSIs, but also effectively enhances the separability of low-dimensional embeddings. Table 3 also shows that SLSSPP achieves better classification performance than LPP_Cluster and LPP_SLSD, which indicates that the proposed SLSSPP is quite meaningful.
Algorithm 1 SLSSPP
Input: A $D$-dimensional HSI dataset $X = [x_1, \dots, x_i, \dots, x_m] \in \mathbb{R}^{D \times m}$, nearest neighbor number $k > 0$, spatial window size $s > 0$, embedding dimension $d$ ($d < D$) and trade-off parameter $0 \le \beta \le 1$.
1. Obtain the location information $C$ and generate the wSL data $X_C$ as in Equation (8).
2. Compute the SLSD $d_{\mathrm{SLSD}}(x_i, x_j)$ among samples by Equations (10)-(12).
3. Find the $k$ nearest neighbors $N_k^{\mathrm{SLS}}(x_i)$ of each sample $x_i$ based on SLSD.
4. Compute the weight matrix $W^{\mathrm{SLS}}$ of the adjacency graph by Equations (13) and (14).
5. Compute the cluster index $F$ by Equation (17) and the cluster centroids $U$ by Equation (18).
6. Compute the weight matrix of the cluster centroids $W^C$ by Equation (20).
7. Construct the DR model $J(V)$ as in Equation (22) and solve the generalized eigenvalue problem in Equation (24).
8. Obtain the projection matrix from the eigenvectors corresponding to the $d$ largest eigenvalues: $V = [v_1, \dots, v_d] \in \mathbb{R}^{D \times d}$.
Output: $Y = V^T X \in \mathbb{R}^{d \times m}$

3.3. Spectral-Locational-Spatial Reconstruction Preserving Embedding

As described in Section 2.2, NPE constructs the reconstruction error function based only on spectral information, which is unreliable not only due to the spectral redundancy and noise in HSIs, but also because different objects may share similar spectral properties. To address this problem, SLSRPE redefines an SLS reconstruction error function, based on the wSL data instead of the raw spectral data, to compute the SLS reconstruction weights. In addition, the SLS reconstruction weights also consider the contribution of the local spatial neighborhood.
Based on SLSD, the adjacency graph $G^{\mathrm{RPE}} = \{X, W^{\mathrm{RPE}}\}$ is constructed, where $W^{\mathrm{RPE}}$ is the reconstruction weight matrix. The superiority of SLSD in finding nearest neighbors shows that SLS information is quite beneficial for characterizing the relationships among samples. As a result, SLS information should be used to construct the reconstruction error function and calculate the reconstruction weights. In fact, the closer the SLS relationship, the more probable it is that the selected neighbor has the same class as the target sample, and the greater its reconstruction weight should be. In this way, the reconstruction error function for the optimal weights is redefined as follows:
$$
\min_{w_{ij}^{\mathrm{RPE}}} \sum_{i=1}^{m} \left\| \sum_{j=1}^{k} w_{ij}^{\mathrm{RPE}} \, \frac{\sum_{r=1}^{s^2} t_j^r \left( x_{Ci} - x_{Cj}^r \right)}{\sum_{r=1}^{s^2} t_j^r} \right\|^2
\quad \text{s.t.} \ \sum_{j=1}^{m} w_{ij}^{\mathrm{RPE}} = 1, \quad w_{ij}^{\mathrm{RPE}} = 0 \ \text{if} \ x_j \notin N_k^{\mathrm{SLS}}(x_i),
\tag{26}
$$
in which $x_{Cj}^r \in \Omega(x_{Cj})$ is a local spatial neighbor of $x_{Cj}$ and $t_j^r = \exp\!\left(-2\, d_{\mathrm{SLSD}}(x_j, x_j^r)^2\right)$, where $d_{\mathrm{SLSD}}(x_j, x_j^r)$ is the SLSD between $x_j$ and $x_j^r$. The more similar $x_j^r$ is to $x_j$, the greater the contribution $x_j^r$ makes to the relationship between $x_j$ and $x_i$, which improves the robustness of the reconstruction weights to noisy samples. By solving the reconstruction error function, we obtain the reconstruction weight matrix $W^{\mathrm{RPE}}$.
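For one target sample, this constrained least-squares problem has the same closed form as the classic LLE weight solve: build the local Gram matrix of the SLS combined measures and solve against a vector of ones. A minimal sketch, assuming the $k$ SLS measures of one sample are already stacked as the columns of a matrix `H_i` (the function name and the regularization are our own):

```python
import numpy as np

def sls_recon_weights(H_i):
    """Sketch of one sample's SLS reconstruction weights.
    H_i: (D', k) matrix whose column j holds the SLS combined measure
    for the j-th SLS neighbour. Returns k weights summing to 1."""
    k = H_i.shape[1]
    Z = H_i.T @ H_i                        # local Gram matrix z_i
    Z += 1e-6 * np.trace(Z) * np.eye(k)    # regularise near-singular Z
    w = np.linalg.solve(Z, np.ones(k))     # Lagrange-multiplier solution
    return w / w.sum()                     # enforce sum-to-one constraint

H_i = np.random.default_rng(7).normal(size=(6, 4))
w = sls_recon_weights(H_i)
print(np.isclose(w.sum(), 1.0))  # True
```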
Suppose that $x_{ij}$ is the $j$th nearest neighbor of $x_i$ based on SLSD, $k$ is the number of selected nearest neighbors, and $x_{ijr}^C$ is the $r$th local spatial neighbor of $x_{ij}^C$. For ease of notation,
$$h_{ij} = \frac{\sum_{r=1}^{s^2} t_{ijr} \left( x_i^C - x_{ijr}^C \right)}{\sum_{r=1}^{s^2} t_{ijr}}$$
indicates the SLS combined measure between $x_i$ and $x_{ij}$. The reconstruction error function can then be simplified to:
$$\min \sum_{i=1}^{m} \left\| \sum_{j=1}^{k} w_{ij}^{RPE} h_{ij} \right\|^2 = \min \sum_{i=1}^{m} (w_i^{RPE})^T z_i \, w_i^{RPE}$$
where $z_i = [h_{i1}, \ldots, h_{ij}, \ldots, h_{ik}]^T [h_{i1}, \ldots, h_{ij}, \ldots, h_{ik}]$ and $w_i^{RPE} = [w_{i1}^{RPE}, \ldots, w_{ij}^{RPE}, \ldots, w_{ik}^{RPE}]^T$. Then, the reconstruction error function can be expressed as the following optimization problem:
$$\min_{w_{ij}^{RPE}} \sum_{i=1}^{m} (w_i^{RPE})^T z_i \, w_i^{RPE} \quad \text{s.t.} \;\; \sum_{j=1}^{k} w_{ij}^{RPE} = 1.$$
With the Lagrange multiplier method, $w_{ij}^{RPE}$ is given as follows:
$$w_{ij}^{RPE} = \frac{\sum_{l=1}^{k} (z_i^{jl})^{-1}}{\sum_{p,q=1}^{k} (z_i^{pq})^{-1}},$$
where $z_i^{jl} = (h_{ij})^T h_{il}$ and $z_i^{pq} = (h_{ip})^T h_{iq}$. $w_{ij}^{RPE}$ is the reconstruction coefficient of $x_j$ with respect to $x_i$. In fact, $k \ll m$, so the reconstruction matrix $W^{RPE}$ is sparse. According to $W^{RPE}$, SLSRPE maintains the reconstruction relationships between target samples and their nearest neighbors in the low-dimensional space. The DR model of SLSRPE is defined as
$$\min_V \sum_{i=1}^{m} \left\| V^T x_i - \sum_{j=1}^{m} w_{ij}^{RPE} V^T x_j \right\|^2 \quad \text{s.t.} \;\; \sum_{i=1}^{m} V^T x_i = 0, \;\; \frac{1}{m} V^T X X^T V = I,$$
which can be reduced to
$$\min_V \sum_{i=1}^{m} \left\| V^T x_i - \sum_{j=1}^{m} w_{ij}^{RPE} V^T x_j \right\|^2 = \min_V \operatorname{tr}\!\left( V^T X (I - W^{RPE})(I - W^{RPE})^T X^T V \right) = \min_V \operatorname{tr}\!\left( V^T X M^{RPE} X^T V \right),$$
where $M^{RPE} = (I - W^{RPE})(I - W^{RPE})^T$ and $I$ is the identity matrix. Equation (32) can be solved with the Lagrange multiplier method and transformed into the following form:
$$X M^{RPE} X^T V = \lambda X X^T V \;\;\Longrightarrow\;\; (X X^T)^{-1} X M^{RPE} X^T V = \lambda V,$$
where $\lambda$ is an eigenvalue of Equation (33). With the eigenvectors $v_1, v_2, \ldots, v_d$ corresponding to the $d$ smallest eigenvalues, the optimal projection matrix is $V = [v_1, v_2, \ldots, v_d] \in \mathbb{R}^{D \times d}$. Finally, the low-dimensional embedding is given by $Y = V^T X \in \mathbb{R}^{d \times m}$.
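This final eigensolution step can be sketched with SciPy's symmetric generalized eigensolver; the small ridge added to $XX^T$ is a numerical-stability assumption rather than part of the paper's formulation:

```python
import numpy as np
from scipy.linalg import eigh

def slsrpe_projection(X, W, d):
    """Solve X M X^T v = lambda X X^T v and keep the d smallest eigenpairs.

    X : (D, m) data matrix; W : (m, m) reconstruction weight matrix;
    d : embedding dimension.  Returns V of shape (D, d).
    """
    m = X.shape[1]
    IW = np.eye(m) - W
    M = IW @ IW.T                               # M^RPE = (I - W)(I - W)^T
    A = X @ M @ X.T
    B = X @ X.T + 1e-8 * np.eye(X.shape[0])     # small ridge (assumption) for stability
    vals, vecs = eigh(A, B)                     # eigenvalues sorted ascending
    return vecs[:, :d]                          # eigenvectors of the d smallest eigenvalues
```

The low-dimensional embedding then follows as `Y = V.T @ X`.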
The detailed procedure of the presented SLSRPE approach is given in Algorithm 2. In contrast to traditional NPE, SLSRPE makes two contributions: (1) it takes advantage of SLSD to search for nearest neighbors, and (2) it redefines the reconstruction error function to calculate the reconstruction weight matrix. Both involve an integrated exploration of spectral-locational-spatial information. To demonstrate their effectiveness separately, we conducted experiments on three datasets, and the results are shown in Table 4. NPE_SLS tests our proposed reconstruction weights that contain SLS information, while NPE_SLSD indicates that traditional NPE uses SLSD to search for nearest neighbors. From Table 4, both NPE_SLS and NPE_SLSD are superior to traditional NPE, and NPE_SLS outperforms NPE_SLSD. This means that the two points proposed in SLSRPE are meaningful, and that the reconstruction weights we redefine to include SLS information are valuable. Accordingly, SLSRPE explores reconstruction relationships among samples not only in the spectral domain, but also based on location and the local spatial neighborhood. SLSRPE makes full use of the spectral-locational-spatial information of HSIs to obtain discriminative features that improve classification performance. In fact, Table 4 also shows that SLSRPE is superior to both NPE_SLS and NPE_SLSD.
Algorithm 2 SLSRPE
Input: A $D$-dimensional HSI dataset $X = [x_1, \ldots, x_i, \ldots, x_m] \in \mathbb{R}^{D \times m}$, nearest neighbor number $k > 0$, spatial window size $s > 0$, embedding dimension $d$ ($d < D$) and trade-off parameter $0 \le \beta \le 1$.
1: Obtain the location information $C$ and generate the wSL data $X^C$ as in Equation (8).
2: Compute the SLSD $d_{SLSD}(x_i, x_j)$ by Equations (10)–(12).
3: Find the $k$ nearest neighbors $N_k^{SLS}(x_i)$ of each sample $x_i$ based on SLSD.
4: Construct the reconstruction error function as in Equation (26).
5: Compute the reconstruction weights $w_{ij}^{RPE}$ by Equation (30).
6: Solve the generalized eigenvalue problem in Equation (33).
7: Obtain the projection matrix whose columns are the eigenvectors corresponding to the $d$ smallest eigenvalues: $V = [v_1, \ldots, v_d] \in \mathbb{R}^{D \times d}$.
Output: $Y = V^T X \in \mathbb{R}^{d \times m}$

4. Experiments

4.1. Description of Datasets

The first dataset covers the University of Pavia, Northern Italy; it was acquired by the ROSIS sensor and is called the PaviaU dataset. Its spectral range is 0.4–0.82 μm. After removing 12 noisy bands from the original 115 spectral bands, 103 bands were employed in this paper. The spatial resolution is 1.3 m and each band has 610 × 340 pixels. This dataset consists of nine ground-truth classes with 42,776 pixels and a background with 164,624 pixels. Figure 4a,b show the color image and the labeled image with nine classes.
The second dataset, the Salinas dataset, covering Salinas Valley, CA, was acquired by the AVIRIS sensor in 1998; its spatial resolution is 3.7 m. There are 224 original bands with a spectral range from 0.4 to 2.45 μm. Each band has 512 × 217 pixels, including 16 ground-truth classes with 56,975 pixels and a background with 54,129 pixels. The color image and the labeled image with 16 classes are shown in Figure 4c,d.
The third dataset, the Indian Pines dataset, covering the Indian Pines region, northwest Indiana, USA, was acquired by the AVIRIS sensor in 1992. The spatial resolution of this image is 20 m. It has 220 original spectral bands in the 0.4–2.5 μm spectral region, and each band contains 145 × 145 pixels. Owing to noise and water absorption, bands 104–108, 150–163 and 220 were discarded, and the remaining 200 bands are used in this dataset. This dataset contains a background with 10,776 pixels and 16 ground-truth classes with 10,249 pixels. The number of pixels per class ranges from 20 to 2455. The color image and the labeled image with 16 classes are shown in Figure 5.

4.2. Experimental Setup

In order to verify the superiority of the two uSLSML methods, seven state-of-the-art DR algorithms were selected for comparison: NPE [21], LPP [20], regularized local discriminant embedding (RLDE) [14], LPNPE [14], spatial and spectral RLDE (SSRLDE) [14], SSMRPE [24], and SSLDP [27]. The former three are spectral-based DR methods, while the latter four make use of both spatial and spectral information for the DR of HSIs. In addition, the raw spectral feature of HSIs is also used for comparison. SSRLDE has single-scale and multi-scale versions; we select its single-scale version because SLSSPP and SLSRPE are two single-scale models. SSMRPE [24] proposes a new SSCD to construct the spatial-spectral adjacency graph to reveal the intrinsic manifold structure of HSIs. Among the comparison methods, RLDE, SSRLDE [14], and SSLDP [27] are supervised and require class labels to implement DR, while the others are unsupervised.
Two classifiers, support vector machines (SVMs) and k nearest neighbors (KNNs), are applied to classify the low-dimensional features. In this paper, we utilized the one-nearest-neighbor rule and the LibSVM toolbox with a radial basis function kernel. In all experiments, we randomly divided each HSI dataset into training and test sets. It should be emphasized that the training set, which trains both the DR model and the classifier for supervised algorithms, is only used to train the classifier for unsupervised algorithms; for unsupervised methods, all samples in an HSI dataset are utilized to train the DR model. The test set was projected into the low-dimensional space for classification. The overall accuracy (OA), the average accuracy (AA), and the Kappa coefficient κ are used to evaluate classification performance.
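For reference, the three evaluation metrics can be computed from a confusion matrix as in this plain NumPy sketch (not the authors' code):

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Compute OA, AA and the Kappa coefficient from label vectors."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                  # confusion matrix
    total = cm.sum()
    oa = np.trace(cm) / total                          # overall accuracy
    per_class = np.diag(cm) / cm.sum(axis=1)           # per-class accuracy
    aa = per_class.mean()                              # average accuracy
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / total**2  # expected chance agreement
    kappa = (oa - pe) / (1 - pe)                       # Kappa coefficient
    return oa, aa, kappa
```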
To achieve good classification results, we optimized the parameters of the various algorithms. For LPP [20] and NPE [21], the number of nearest neighbors k was set to 7 on the Indian Pines dataset and 25 on the PaviaU and Salinas datasets. For the other comparison algorithms, we adopted the optimal parameters reported in their source literature. For RLDE, LPNPE and SSRLDE [14], the number of intraclass and interclass neighbors is 5 for both, the heat kernel parameter is 0.5, and the parameters α, β and the neighborhood scale are set to 0.1, 0.1, 11 on the Indian Pines and Salinas datasets, and 0.2, 0.3, 7 on the PaviaU dataset. For SSMRPE [24], the spatial window size and neighbor number are set to 7, 10 on the Indian Pines dataset, 13, 20 on the PaviaU dataset, and 15, 15 on the Salinas dataset. For SSLDP [27], the intraclass and interclass neighbor numbers, spatial neighborhood scale and trade-off parameter are set to 7, 63, 15, 0.6 on the Indian Pines and Salinas datasets, and 6, 66, 19, 0.4 on the PaviaU dataset. To reduce the influence of noise in HSIs, weighted mean filtering with a 5 × 5 window is used to preprocess the pixels. In addition, each experiment in this paper was repeated 10 times under each condition to reduce random experimental error.
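The 5 × 5 weighted mean filtering can be sketched as a per-band 2-D convolution; since the paper does not specify the window weights, the Gaussian weights below are an assumption:

```python
import numpy as np
from scipy.ndimage import convolve

def weighted_mean_filter(cube, size=5, sigma=1.0):
    """Per-band weighted mean filtering of an (H, W, D) HSI cube.

    The Gaussian weights (and sigma) are an assumption; the paper only
    specifies a 5 x 5 weighted mean window.
    """
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()                                   # normalized weight window
    out = np.empty(cube.shape, dtype=float)
    for b in range(cube.shape[2]):                 # filter each band independently
        out[:, :, b] = convolve(cube[:, :, b].astype(float), g, mode="nearest")
    return out
```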

4.3. Parameters Analysis

The two proposed uSLSML methods both have three parameters: nearest neighbor number k, spatial window size s and spectral-locational trade-off parameter β . In order to analyze the influence of three parameters on the DR results, we conduct parameter tuning experiments on three HSI datasets. Thirty samples in each class are randomly selected as the training set and the remaining samples are the testing set for two classifiers. For ease of analysis, Figure 6 and Figure 7 show the classification OAs with different parameters on Indian Pines and PaviaU datasets. In experiments, the parameter values are set to: k = { 1 , 2 , , 30 } , s = { 1 , 3 , , 15 } , β = { 0 , 0.05 , 0.1 , , 1 } . The fixed parameters of SLSSPP and SLSRPE are set to k = 15 , s = 5 , β = 0.4 on Indian Pines dataset, k = 20 , s = 3 , β = 0.01 on PaviaU dataset to analyze the other two parameters.
We first analyzed the effect of the SLSSPP parameters on classification. From Figure 6 and Figure 7, the classification OA increases slightly with increasing k and s when β is fixed, while the OA increases significantly with increasing β when k and s are fixed on both datasets. In particular, increasing β from zero brings a significant improvement to classification, which proves that the location information is quite beneficial to DR for classification. However, the OAs change little as β continues to grow, because the magnitude of the location information is much larger than that of the spectral information owing to the normalization of the spectral bands.
For the proposed SLSRPE method, the OAs increase as s increases on both datasets when β or k is fixed, especially for the KNN classifier, since a large spatial neighborhood is beneficial for characterizing the spatial relationships between samples. This means that the spatial neighborhood added to the reconstruction weights of SLSRPE is helpful for DR for classification. At the same time, the OAs improve with increasing β on both datasets, on account of the importance of location information to DR for classification. It is worth noting that, compared with KNN, SVM is more robust to the parameters owing to the advantages of SVM model training.
In fact, when k is greater than 15, its influence on uSLSML tends to be stable; for a new HSI dataset, k can be set between 15 and 30. The setting of s depends on the smoothness of the image: if the homogeneous pixels of the HSI are relatively clustered, s can take a larger value, and vice versa. Actually, increasing s beyond 5 does not significantly improve uSLSML. Therefore, to ensure low computational complexity and effective dimensionality reduction, s can be set to 5–15. The value of β is obviously influenced by the size of the image: if the image size is large, the value of β should be small, and vice versa; β usually ranges from 0.01 to 1. In the following experiments, the parameters of SLSSPP are set to k = 28, s = 11, β = 0.7 for the Indian Pines dataset, k = 25, s = 13, β = 0.05 for the PaviaU dataset, and k = 26, s = 15, β = 0.03 for the Salinas dataset. The parameters of SLSRPE are set to k = 9, s = 9, β = 1 for the Indian Pines dataset, k = 26, s = 15, β = 0.02 for the PaviaU dataset, and k = 14, s = 15, β = 0.03 for the Salinas dataset.

4.4. Dimension Analysis

In order to analyze the impact of the embedding dimension d of each DR algorithm on classification performance, thirty samples from each class were randomly selected as the training set, and the rest as the test set. If the number of samples in a class is less than 60, half of the samples in that class are used as the training set. Figure 8 gives the OAs with different embedding dimensions for the various DR algorithms on the three datasets. The embedding dimension d is tuned from 2 to 40 with an interval of 2.
As can be seen from Figure 8, the OAs in the low-dimensional space are mostly higher than those in the raw space, which proves that DR is necessary for classification. Meanwhile, the OAs of each algorithm gradually increase with the embedding dimension, because higher-dimensional embedding features contain more discriminative information that is helpful for classification. However, as the dimension continues to grow, the OAs tend to stabilize or even decrease slightly. The reason is that the discriminative information of the embedding space gradually approaches saturation and the Hughes phenomenon occurs owing to the small number of training samples for the classifiers. In addition, it is obvious that the classification performance of the DR algorithms fusing spatial and spectral information (LPNPE, SSRLDE [14], SSMRPE [24], SSLDP [27], SLSSPP and SLSRPE) is generally higher than that of the spectral-based algorithms (LPP [20], NPE [21] and RLDE [14]), which effectively testifies that spatial information is beneficial to DR for classification. It is worth noting that, compared with the other DR algorithms, SLSSPP and SLSRPE achieve the best classification performance in almost all embedding dimensions on the three datasets, because they take full advantage of the spectral-locational-spatial information in HSIs for DR. To ensure that each algorithm achieves optimal performance, we set the embedding dimension d = 30 on all three datasets in the following experiments.

4.5. Classification Result

In practical applications, the classification accuracy of a DR algorithm is sensitive to the size of the training set. To explore the classification performance of the DR algorithms under different training conditions, we randomly selected n_i (n_i = 5, 10, 20, 30, 40, 60) samples from each class for training, and the others for testing. If the number of samples in a class is less than 2n_i, half of the samples in that class are randomly selected for training. Table 5 shows the classification OAs of the embedding features of the different DR algorithms on the three datasets using the KNN and SVM classifiers under different training conditions.
As shown in Table 5, for all three datasets, the larger the number of training samples, the higher the OA, since a large number of labeled training samples enables a supervised DR algorithm and the classifier to obtain more discriminative information. Among the comparison algorithms, the spectral-spatial algorithms (LPNPE, SSRLDE [14], SSMRPE [24], and SSLDP [27]) are superior to the spectral-based algorithms (LPP [20], NPE [21], and RLDE [14]), and the supervised spectral-spatial algorithms (SSRLDE [14] and SSLDP [27]) are better than the unsupervised one (LPNPE [14]). These results demonstrate once again that label and spatial information are advantageous to DR for classification.
As mentioned in Section 1, obtaining class labels is time-consuming, expensive, and difficult. Thus, the sensitivity of a DR algorithm's classification performance to the number of labeled training samples can also be used to evaluate it; naturally, we expect a DR algorithm to achieve good classification performance with few labeled training samples. From Table 5, when n_i = 5, SLSRPE on the Indian Pines and PaviaU datasets and SLSSPP on the Salinas dataset achieve the best classification performance in this experiment, as expected. In addition, the proposed uSLSML methods achieved better classification results than the other algorithms under almost all training conditions of this experiment. This is because uSLSML presents a new SLSD to extract SLS information for selecting effective neighbors, constructs an SLS adjacency graph and a cluster centroid adjacency graph in SLSSPP to enhance the separability of embedded features, and redefines the reconstruction weights in SLSRPE to mine the SLS reconstruction relationships among samples and thus discover the intrinsic manifold structure of HSIs.
In order to explore the classification accuracy of the different DR algorithms on each class, we classified the embedding features of the different DR algorithms with the KNN and SVM classifiers on the three datasets. Table 6, Table 7 and Table 8 list the classification accuracy of each class, the OA, the AA, and the Kappa coefficient. The classification maps of the different approaches on the three datasets are displayed in Figure 9, Figure 10 and Figure 11.
From Table 6, Table 7 and Table 8, the spatial-spectral combined methods are consistently superior to the spectral-based methods, and the supervised spatial-spectral algorithms slightly outperform the unsupervised ones. This means that, compared with the label information, the spatial information is more conducive to improving the representation of the embedded features in this experiment. SLSRPE and SSMRPE [24] are two improved versions of NPE [21], both of which are dedicated to maintaining the local manifold structure of the data. Table 6, Table 7 and Table 8 show that their improvements are effective, and SLSRPE is more outstanding than SSMRPE [24]: the proposed SLSD can find more neighbor samples from the same class than the SSCD of SSMRPE [24], and, more importantly, SLSRPE adds SLS information to the reconstruction weights to reveal the intrinsic manifold structure of HSIs. This experiment also testifies that SLSSPP is far superior to LPP, which is attributed to the proposed SLSD and the new DR model with an SLS adjacency graph and a cluster centroid adjacency graph.
It is worth mentioning that SLSRPE and SLSSPP even outperform the supervised spectral-spatial algorithms SSRLDE [14] and SSLDP [27], which are two graph-based methods. For supervised graph-based methods, the supervised information is usually embedded in the adjacency graph; the above result thus implicitly proves the excellence of the extracted SLS information stored in the adjacency graphs of uSLSML.
Specifically, SLSSPP achieves the best classification results in 9 and 10 classes on the Indian Pines dataset, 5 and 3 classes on the PaviaU dataset, and 8 and 10 classes on the Salinas dataset for the KNN and SVM classifiers, respectively. SLSRPE achieves the best classification results in 9 and 7 classes on the Indian Pines dataset, 4 and 3 classes on the PaviaU dataset, and 9 and 6 classes on the Salinas dataset for the KNN and SVM classifiers, respectively. In terms of OA, SLSSPP and SLSRPE are more suitable for the KNN classifier because both algorithms are distance-based. In general, SLSSPP and SLSRPE outperform the other comparison algorithms in this experiment, owing to their full exploration of the spectral-locational-spatial information of HSIs.
According to the classification maps in Figure 9, Figure 10 and Figure 11, SLSSPP and SLSRPE produce smoother classification maps and fewer misclassified pixels compared with the other DR methods, especially in classes such as Corn-notill and Soybean-mintill for the Indian Pines dataset, Asphalt, Meadows, and Gravel for the Pavia University dataset, and Grapes-untrained and Vinyard-untrained for the Salinas dataset. These maps illustrate that the comprehensive exploration of SLS information, which is ignored by the comparison algorithms but absorbed by SLSSPP and SLSRPE, is very helpful for the low-dimensional representation of HSIs.

5. Concluding Remarks

In this paper, we propose two unsupervised DR algorithms, SLSSPP and SLSRPE, to learn the low-dimensional embeddings for HSI classification based on the spectral-locational-spatial information and manifold learning theory. A wSL datum is generated to facilitate the extraction of SLS information. A new SLSD is designed to search the proper nearest neighbors most probably belonging to the class of target samples. Then, SLSSPP constructs a DR model with an SLS adjacency graph based on SLSD and a cluster centroid adjacency graph based on wSL data to preserve SLS structure in HSIs, which compresses the nearest neighbor distance and expands the distance among clustering centroids to enhance the separability of embedding features. SLSRPE constructs an adjacency graph based on the redefined reconstruction weights with SLS information, which maintains the intrinsic manifold structure to extract the discriminant projection. As a result, two uSLSML methods can extract two discriminative low-dimensional features which can effectively improve the classification performance.
Extensive experiments on the Indian Pines, PaviaU and Salinas datasets demonstrated that the points we proposed are effective and the proposed uSLSML algorithms perform much better than some state-of-the-art DR methods in classification. Compared with LPP, the average improvements of OA are about 3.50%, 2.44%, 2.05% by the cluster centroid adjacency graph, 8.24%, 6.55%, 3.09% by SLSD, and 9.04%, 8.67%, 3.26% by SLSSPP on three datasets, while compared with NPE, the improvements are about 5.31%, 12.25%, and 5.05% by redefined reconstruction weights with SLS information, 4.38%, 3.75%, 1.75% by SLSD, 9.66%, 13.27%, 5.72% by SLSRPE.
This work only considers the neighbor samples and ignores the target samples when exploring the local spatial neighborhood information. Thus, our future work will focus on solving this problem while reducing the computational complexity.

Author Contributions

Conceptualization, N.L. and J.S.; methodology, N.L.; software, N.L.; validation, N.L., J.S. and T.W.; formal analysis, N.L.; investigation, N.L.; resources, D.Z. and M.G.; data curation, N.L.; writing—original draft preparation, N.L.; writing—review and editing, N.L.; visualization, N.L.; supervision, D.Z. and M.G.; project administration, D.Z.; funding acquisition, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (Grant No. 62076204), the Natural Science Foundation of Shaanxi Province under Grant nos. 2018JQ6003 and 2018JQ6030, and the China Postdoctoral Science Foundation (Grant nos. 2017M613204 and 2017M623246).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HSIs: hyperspectral images
uDR: unsupervised dimensionality reduction
DR: dimensionality reduction
ML: manifold learning
SLS: spectral-locational-spatial
uSLSML: unsupervised SLS manifold learning
SLSSPP: SLS structure preserving projection
SLSRPE: SLS reconstruction preserving embedding
wSL: weighted spectral-locational
SLSD: spectral-locational-spatial distance
SD: spectral distance
SSCD: spatial-spectral combined distance
IPD: image patches distance
SCD: spatial coherence distance
LPP_Cluster: LPP with cluster centroid adjacency graph
LPP_SLSD: LPP with SLSD
NPE_SLS: NPE with the redefined reconstruction weights
NPE_SLSD: NPE with SLSD
SVM: support vector machines
KNN: k nearest neighbors
OA: overall accuracy
AA: average accuracy
κ: Kappa coefficient

References

1. Stuart, M.B.; Mcgonigle, A.J.S.; Willmott, J.R. Hyperspectral Imaging in Environmental Monitoring: A Review of Recent Developments and Technological Advances in Compact Field Deployable Systems. Sensors 2019, 19, 3071.
2. Zarcotejada, P.J.; Gonzalezdugo, M.V.; Fereres, E. Seasonal stability of chlorophyll fluorescence quantified from airborne hyperspectral imagery as an indicator of net photosynthesis in the context of precision agriculture. Remote Sens. Environ. 2016, 179, 89–103.
3. Zhou, Y.; Wetherley, E.B.; Gader, P.D. Unmixing urban hyperspectral imagery using probability distributions to represent endmember variability. Remote Sens. Environ. 2020, 246, 111857.
4. Pandey, P.C.; Anand, A.; Srivastava, P.K. Spatial distribution of mangrove forest species and biomass assessment using field inventory and earth observation hyperspectral data. Biodivers. Conserv. 2019, 28, 2143–2162.
5. Huang, H.; Li, Z.; He, H.; Duan, Y.; Yang, S. Self-adaptive manifold discriminant analysis for feature extraction from hyperspectral imagery. Pattern Recognit. 2020, 107, 107487.
6. Zhang, L.; Luo, F. Review on graph learning for dimensionality reduction of hyperspectral image. Geo-spatial Inf. Sci. 2020, 23, 98–106.
7. Chen, W.; Yang, Z.; Ren, J.; Cao, J.; Cai, N.; Zhao, H.; Yuen, P. MIMN-DPP: Maximum-information and minimum-noise determinantal point processes for unsupervised hyperspectral band selection. Pattern Recognit. 2020, 102, 107213.
8. Dong, Y.; Du, B.; Zhang, L.; Zhang, L. Dimensionality reduction and classification of hyperspectral images using ensemble discriminative local metric learning. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2509–2524.
9. Wu, H.; Prasad, S. Semi-supervised dimensionality reduction of hyperspectral imagery using pseudo-labels. Pattern Recognit. 2018, 74, 212–224.
10. Zhang, M.; Ma, J.; Gong, M. Unsupervised hyperspectral band selection by fuzzy clustering with particle swarm optimization. IEEE Geosci. Remote Sens. Lett. 2017, 14, 773–777.
11. Zhang, M.; Gong, M.; Mao, Y.; Li, J.; Wu, Y. Unsupervised feature extraction in hyperspectral images based on wasserstein generative adversarial network. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2669–2688.
12. Du, Q.; Ren, H. Real-time constrained linear discriminant analysis to target detection and classification in hyperspectral imagery. Pattern Recognit. 2003, 36, 1–12.
13. Kuo, B.C.; Li, C.H.; Yang, J.M. Kernel nonparametric weighted feature extraction for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1139–1155.
14. Zhou, Y.; Peng, J.; Chen, C.P. Dimension reduction using spatial and spectral regularized local discriminant embedding for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1082–1095.
15. Feng, J.; Jiao, L.; Liu, F.; Sun, T.; Zhang, X. Unsupervised feature selection based on maximum information and minimum redundancy for hyperspectral images. Pattern Recognit. 2016, 51, 295–309.
16. Li, W.; Zhang, L.; Zhang, L.; Du, B. GPU parallel implementation of isometric mapping for hyperspectral classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1532–1536.
17. Kim, D.H.; Finkel, L.H. Hyperspectral image processing using locally linear embedding. In Proceedings of the First International IEEE EMBS Conference on Neural Engineering, Capri, Italy, 20–22 March 2003; pp. 316–319.
18. Belkin, M.; Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 2003, 15, 1373–1396.
19. He, X.; Niyogi, P. Locality preserving projections. Adv. Neural Inf. Process. Syst. 2004, 16, 153–160.
20. Wang, Z.; He, B. Locality perserving projections algorithm for hyperspectral image dimensionality reduction. In Proceedings of the 19th International Conference on Geoinformatics, Shanghai, China, 24–26 June 2011; pp. 1–4.
21. He, X.; Cai, D.; Yan, S.; Zhang, H.J. Neighborhood preserving embedding. In Proceedings of the Tenth IEEE ICCV, Beijing, China, 17–21 October 2005; Volume 2, pp. 1208–1213.
22. Zhao, W.; Du, S. Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554.
23. Zheng, X.; Yuan, Y.; Lu, X. Dimensionality reduction by spatial–spectral preservation in selected bands. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5185–5197.
24. Huang, H.; Shi, G.; He, H.; Duan, Y.; Luo, F. Dimensionality reduction of hyperspectral imagery based on spatial-spectral manifold learning. IEEE Trans. Cybern. 2019, 50, 2604–2616.
25. Shi, G.; Huang, H.; Liu, J.; Li, Z.; Wang, L. Spatial-Spectral Multiple Manifold Discriminant Analysis for Dimensionality Reduction of Hyperspectral Imagery. Remote Sens. 2019, 11, 2414.
26. Feng, Z.; Yang, S.; Wang, S.; Jiao, L. Discriminative spectral–spatial margin-based semisupervised dimensionality reduction of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2014, 12, 224–228.
27. Huang, H.; Duan, Y.; He, H.; Shi, G.; Luo, F. Spatial-spectral local discriminant projection for dimensionality reduction of hyperspectral image. ISPRS J. Photogramm. Remote Sens. 2019, 156, 77–93.
28. Pu, H.; Chen, Z.; Wang, B.; Jiang, G.M. A novel spatial–spectral similarity measure for dimensionality reduction and classification of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7008–7022.
29. Feng, W.; Mingyi, H.; Shaohui, M. Hyperspectral data feature extraction using spatial coherence based neighborhood preserving embedding. Infrared Laser Eng. 2012, 41, 1249–1254.
30. Kim, W.; Crawford, M.M.; Lee, S. Integrating spatial proximity with manifold learning for hyperspectral data. Korean J. Remote Sens. 2010, 26, 693–703.
31. Hou, B.; Zhang, X.; Ye, Q.; Zheng, Y. A novel method for hyperspectral image classification based on Laplacian eigenmap pixels distribution-flow. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 1602–1618.
32. Li, D.; Wang, X.; Cheng, Y. Spatial-spectral neighbour graph for dimensionality reduction of hyperspectral image classification. Int. J. Remote Sens. 2019, 40, 4361–4383.
33. Belkin, M.; Niyogi, P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems; Carnegie Mellon's School of Computer Science: Pittsburgh, PA, USA, 2002; pp. 585–591.
34. Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326.
Figure 1. Flowchart of the proposed SLSSPP method.
Figure 2. Flowchart of the proposed SLSRPE method.
Figure 3. Comparison of spectral bands of pixels with different locational relationships (A–D).
Figure 4. (a,b) show the Pavia University dataset; (c,d) show the Salinas dataset.
Figure 5. Indian Pines dataset.
Figure 6. The classification OAs with respect to different parameters of SLSSPP and SLSRPE on the Indian Pines dataset from two classifiers, KNN and SVM.
Figure 7. The classification OAs with respect to different parameters of SLSSPP and SLSRPE on the Indian Pines dataset from two classifiers, KNN and SVM.
Figure 8. The classification OAs with different embedding dimensions d for various DR algorithms on three datasets.
Figure 9. Classification maps of different DR methods on the Indian Pines dataset: (a–j) are for the KNN classifier; and (k–t) are for the SVM classifier.
Figure 10. Classification maps of different DR methods on the Pavia University dataset: (a–j) are for the KNN classifier; and (k–t) are for the SVM classifier.
Figure 11. Classification maps of different DR methods on the Salinas dataset: (a–j) are for the KNN classifier; and (k–t) are for the SVM classifier.
Table 1. The number of samples with different classes in the top 10 nearest neighbors of all class samples in the three datasets.

| Class | Indian Pines SD | Indian Pines SSCD | Indian Pines SLSD | Pavia University SD | Pavia University SSCD | Pavia University SLSD | Salinas SD | Salinas SSCD | Salinas SLSD |
|---|---|---|---|---|---|---|---|---|---|
| C1 | 10 | 10 | 0 | 1120 | 390 | 0 | 0 | 0 | 0 |
| C2 | 310 | 170 | 10 | 210 | 20 | 0 | 0 | 10 | 0 |
| C3 | 220 | 130 | 0 | 1160 | 270 | 0 | 0 | 0 | 0 |
| C4 | 140 | 100 | 0 | 720 | 2260 | 0 | 120 | 130 | 0 |
| C5 | 20 | 20 | 0 | 0 | 180 | 10 | 90 | 90 | 40 |
| C6 | 0 | 20 | 0 | 810 | 220 | 0 | 0 | 150 | 250 |
| C7 | 0 | 0 | 0 | 620 | 100 | 0 | 30 | 40 | 0 |
| C8 | 0 | 0 | 0 | 1520 | 410 | 0 | 2400 | 740 | 0 |
| C9 | 0 | 0 | 0 | 0 | 70 | 0 | 0 | 0 | 10 |
| C10 | 50 | 260 | 0 | | | | 80 | 150 | 0 |
| C11 | 310 | 160 | 20 | | | | 10 | 90 | 80 |
| C12 | 250 | 110 | 0 | | | | 0 | 10 | 10 |
| C13 | 10 | 10 | 0 | | | | 40 | 20 | 0 |
| C14 | 70 | 10 | 0 | | | | 30 | 330 | 130 |
| C15 | 40 | 0 | 0 | | | | 2220 | 830 | 80 |
| C16 | 0 | 60 | 0 | | | | 0 | 0 | 20 |
| Total | 1430 | 1060 | 30 | 6160 | 3920 | 10 | 5020 | 2590 | 620 |
Table 2. The number of heterogeneous samples in the same cluster when the three datasets are divided into 35 clusters.

| | Indian Pines | Pavia University | Salinas |
|---|---|---|---|
| Spectral-locational data | 2063 | 4055 | 3333 |
| Raw spectral data | 4054 | 9936 | 8164 |
Table 3. Classification OAs of the embedding features (dim = 30) of different algorithms under different training conditions of the two classifiers.

| Dataset | Method | n_i = 5 KNN | n_i = 5 SVM | n_i = 10 KNN | n_i = 10 SVM | n_i = 20 KNN | n_i = 20 SVM | n_i = 30 KNN | n_i = 30 SVM | n_i = 40 KNN | n_i = 40 SVM | n_i = 60 KNN | n_i = 60 SVM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Indian | LPP | 69.4 | 72.2 | 76.8 | 80.0 | 85.3 | 86.7 | 88.2 | 90.3 | 90.2 | 92.3 | 92.9 | 93.7 |
| | LPP_Cluster | 73.6 | 76.4 | 83.1 | 84.2 | 89.4 | 91.5 | 91.3 | 92.8 | 93.2 | 94.1 | 95.0 | 95.5 |
| | LPP_SLSD | 82.1 | 82.9 | 89.9 | 89.6 | 94.4 | 94.5 | 96.3 | 96.4 | 97.8 | 96.8 | 98.3 | 97.9 |
| | SLSSPP | 84.1 | 82.7 | 93.1 | 92.0 | 96.0 | 95.2 | 96.7 | 96.1 | 98.0 | 96.5 | 98.3 | 97.8 |
| Pavia U | LPP | 67.2 | 68.6 | 74.3 | 81.0 | 83.6 | 88.6 | 86.9 | 91.5 | 88.8 | 93.7 | 90.9 | 94.3 |
| | LPP_Cluster | 68.6 | 79.5 | 77.7 | 88.8 | 83.6 | 92.6 | 85.6 | 93.7 | 87.7 | 95.4 | 89.5 | 96.0 |
| | LPP_SLSD | 82.6 | 72.9 | 91.1 | 82.8 | 93.5 | 91.3 | 95.4 | 94.0 | 96.6 | 94.7 | 97.0 | 96.2 |
| | SLSSPP | 82.6 | 86.7 | 90.6 | 89.6 | 93.9 | 93.4 | 95.8 | 95.3 | 96.3 | 95.3 | 97.5 | 96.5 |
| Salinas | LPP | 89.2 | 88.3 | 91.1 | 90.8 | 92.8 | 93.1 | 94.0 | 94.1 | 94.8 | 94.7 | 95.2 | 95.6 |
| | LPP_Cluster | 91.7 | 92.6 | 93.2 | 93.5 | 95.2 | 95.1 | 95.7 | 96.1 | 95.8 | 95.5 | 96.7 | 97.3 |
| | LPP_SLSD | 93.6 | 92.8 | 94.7 | 93.9 | 96.6 | 95.8 | 97.3 | 96.1 | 97.8 | 96.3 | 98.3 | 97.6 |
| | SLSSPP | 93.5 | 94.6 | 94.7 | 94.4 | 96.7 | 95.7 | 97.1 | 96.1 | 97.7 | 97.0 | 98.3 | 97.1 |
Table 4. Classification OAs of the embedding features (dim = 30) of different algorithms under different training conditions of the two classifiers.

| Dataset | Method | n_i = 5 KNN | n_i = 5 SVM | n_i = 10 KNN | n_i = 10 SVM | n_i = 20 KNN | n_i = 20 SVM | n_i = 30 KNN | n_i = 30 SVM | n_i = 40 KNN | n_i = 40 SVM | n_i = 60 KNN | n_i = 60 SVM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Indian | NPE | 69.4 | 72.6 | 77.3 | 79.9 | 85.2 | 86.3 | 88.3 | 89.9 | 90.0 | 91.9 | 93.0 | 93.7 |
| | NPE_SLS | 78.9 | 78.9 | 85.7 | 86.6 | 90.5 | 91.8 | 93.1 | 93.2 | 95.0 | 95.1 | 96.2 | 96.3 |
| | NPE_SLSD | 77.2 | 78.4 | 83.8 | 87.1 | 89.4 | 91.0 | 91.6 | 92.7 | 93.4 | 94.5 | 95.1 | 95.9 |
| | SLSRPE | 88.1 | 86.4 | 92.0 | 91.5 | 97.3 | 94.1 | 97.1 | 95.5 | 98.4 | 96.8 | 98.9 | 97.4 |
| Pavia U | NPE | 58.2 | 66.5 | 63.7 | 79.6 | 73.4 | 86.8 | 76.7 | 90.2 | 80.8 | 93.3 | 83.8 | 93.9 |
| | NPE_SLS | 85.6 | 74.4 | 91.4 | 83.3 | 94.6 | 89.0 | 96.0 | 93.5 | 96.5 | 95.9 | 97.2 | 96.5 |
| | NPE_SLSD | 64.4 | 72.3 | 72.6 | 80.9 | 78.2 | 88.9 | 82.0 | 92.7 | 84.3 | 93.4 | 87.0 | 95.2 |
| | SLSRPE | 87.7 | 77.4 | 92.4 | 85.9 | 96.2 | 90.7 | 96.3 | 93.6 | 96.9 | 95.1 | 97.7 | 96.3 |
| Salinas | NPE | 81.6 | 85.5 | 85.2 | 89.9 | 87.3 | 91.3 | 88.0 | 92.9 | 88.5 | 94.4 | 90.1 | 95.0 |
| | NPE_SLS | 90.8 | 89.0 | 93.3 | 92.4 | 94.4 | 94.1 | 96.1 | 95.0 | 96.4 | 95.3 | 97.3 | 96.3 |
| | NPE_SLSD | 83.8 | 89.9 | 86.5 | 91.3 | 89.9 | 92.9 | 89.4 | 95.1 | 89.9 | 95.5 | 90.4 | 96.1 |
| | SLSRPE | 91.4 | 90.2 | 95.9 | 92.3 | 96.1 | 93.9 | 97.0 | 94.7 | 97.3 | 95.1 | 98.1 | 96.4 |
Table 5. Classification OAs of the embedding features (dim = 30) of different DR algorithms on three datasets using KNN and SVM classifiers under different training conditions.

| Dataset | Method | KNN 5 | KNN 10 | KNN 20 | KNN 30 | KNN 40 | KNN 60 | SVM 5 | SVM 10 | SVM 20 | SVM 30 | SVM 40 | SVM 60 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Indian | RAW | 54.3 | 64.3 | 72.5 | 77.2 | 79.7 | 83.4 | 58.2 | 70.6 | 81.2 | 85.4 | 88.4 | 91.3 |
| | NPE | 69.4 | 77.3 | 85.2 | 88.3 | 90.0 | 93.0 | 72.6 | 79.9 | 86.3 | 89.9 | 91.9 | 93.7 |
| | LPP | 69.4 | 76.8 | 85.3 | 88.2 | 90.2 | 92.9 | 72.2 | 80.0 | 86.7 | 90.3 | 92.3 | 93.7 |
| | RLDE | 68.4 | 78.4 | 86.5 | 89.4 | 91.4 | 93.4 | 67.8 | 78.3 | 85.9 | 90.0 | 91.6 | 93.9 |
| | LPNPE | 77.7 | 85.0 | 92.4 | 95.1 | 95.4 | 96.9 | 80.5 | 87.1 | 92.6 | 94.7 | 96.1 | 96.9 |
| | SSRLDE | 78.4 | 83.6 | 88.2 | 91.3 | 93.3 | 95.0 | 78.3 | 82.9 | 88.6 | 91.0 | 92.7 | 95.1 |
| | SSMRPE | 72.7 | 81.8 | 88.0 | 90.4 | 92.9 | 95.0 | 73.9 | 81.8 | 88.7 | 91.7 | 93.0 | 94.9 |
| | SSLDP | 72.0 | 81.8 | 88.4 | 93.0 | 94.2 | 96.2 | 73.8 | 81.2 | 86.9 | 91.7 | 92.5 | 94.5 |
| | SLSSPP | 84.1 | 90.6 | 96.0 | 96.7 | 98.0 | 98.3 | 82.7 | 92.0 | 95.2 | 96.1 | 96.5 | 97.8 |
| | SLSRPE | 88.1 | 92.0 | 97.3 | 97.1 | 98.4 | 98.9 | 86.4 | 91.5 | 94.1 | 95.5 | 96.8 | 97.4 |
| Pavia U | RAW | 59.7 | 64.7 | 71.6 | 75.0 | 77.0 | 80.9 | 70.0 | 77.0 | 84.6 | 89.2 | 92.0 | 93.1 |
| | NPE | 58.2 | 63.7 | 73.4 | 76.7 | 80.8 | 83.8 | 66.5 | 79.6 | 86.8 | 90.2 | 93.3 | 93.9 |
| | LPP | 67.2 | 74.3 | 83.6 | 86.9 | 88.8 | 90.9 | 68.6 | 81.0 | 88.6 | 91.5 | 93.7 | 94.3 |
| | RLDE | 71.6 | 79.5 | 84.9 | 86.9 | 88.5 | 90.4 | 69.4 | 77.8 | 86.3 | 88.6 | 91.4 | 93.8 |
| | LPNPE | 61.3 | 71.6 | 77.5 | 81.7 | 84.2 | 86.8 | 66.7 | 80.4 | 88.0 | 90.2 | 92.6 | 93.3 |
| | SSRLDE | 78.1 | 84.7 | 89.3 | 91.3 | 93.1 | 94.6 | 69.4 | 78.6 | 86.1 | 89.2 | 90.9 | 93.0 |
| | SSMRPE | 82.0 | 86.0 | 90.5 | 93.6 | 94.3 | 95.9 | 74.8 | 82.6 | 88.4 | 91.1 | 92.8 | 95.2 |
| | SSLDP | 70.5 | 83.1 | 87.7 | 91.3 | 86.6 | 92.9 | 69.9 | 80.0 | 86.8 | 90.1 | 91.5 | 94.1 |
| | SLSSPP | 82.6 | 90.6 | 93.9 | 95.8 | 96.3 | 97.6 | 86.7 | 89.6 | 93.4 | 95.3 | 95.3 | 96.5 |
| | SLSRPE | 87.7 | 92.4 | 96.2 | 96.3 | 96.9 | 97.7 | 77.4 | 85.9 | 90.7 | 93.6 | 95.1 | 96.3 |
| Salinas | RAW | 83.3 | 86.2 | 88.8 | 89.1 | 90.4 | 91.0 | 86.0 | 88.6 | 90.9 | 92.4 | 93.1 | 94.2 |
| | NPE | 81.6 | 85.2 | 87.3 | 88.0 | 88.5 | 90.1 | 85.5 | 89.9 | 91.3 | 92.9 | 94.4 | 95.0 |
| | LPP | 89.2 | 91.1 | 92.8 | 94.0 | 94.8 | 95.2 | 88.4 | 90.8 | 93.1 | 94.1 | 94.7 | 95.6 |
| | RLDE | 88.8 | 90.0 | 92.7 | 94.3 | 94.6 | 95.7 | 84.7 | 86.5 | 89.3 | 90.9 | 91.9 | 92.8 |
| | LPNPE | 86.1 | 88.5 | 90.4 | 91.6 | 92.1 | 93.2 | 85.3 | 88.2 | 90.5 | 91.8 | 92.5 | 93.7 |
| | SSRLDE | 86.9 | 92.3 | 89.7 | 95.8 | 96.3 | 97.0 | 80.5 | 88.8 | 90.8 | 92.4 | 93.1 | 94.3 |
| | SSMRPE | 89.6 | 91.3 | 93.5 | 94.1 | 94.9 | 96.1 | 89.3 | 91.6 | 94.0 | 94.5 | 95.1 | 95.9 |
| | SSLDP | 90.6 | 92.8 | 92.8 | 95.1 | 95.6 | 96.0 | 89.5 | 90.0 | 91.9 | 92.4 | 92.7 | 93.5 |
| | SLSSPP | 93.5 | 94.7 | 96.7 | 97.1 | 97.7 | 98.3 | 94.6 | 94.4 | 95.7 | 96.1 | 97.0 | 97.1 |
| | SLSRPE | 91.4 | 95.9 | 96.1 | 97.0 | 97.3 | 98.1 | 90.2 | 92.3 | 93.9 | 94.7 | 95.1 | 96.4 |
Table 6. Classification accuracy of the embedding features on each class with SVM and KNN classifiers on the Salinas dataset.

| Class | Classifier | RAW | NPE | LPP | RLDE | LPNPE | SSRLDE | SSMRPE | SSLDP | SLSSPP | SLSRPE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| C1 | KNN | 99.1 | 99.8 | 99.1 | 99.9 | 100 | 100 | 100 | 100 | 100 | 100 |
| | SVM | 99.5 | 99.7 | 99.8 | 100 | 100 | 100 | 100 | 100 | 99.6 | 100 |
| C2 | KNN | 97.8 | 99.3 | 98.5 | 99.9 | 100 | 100 | 100 | 100 | 100 | 100 |
| | SVM | 99.5 | 100 | 99.8 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| C3 | KNN | 96.8 | 93.9 | 99.7 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| | SVM | 96.8 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| C4 | KNN | 99.3 | 98.4 | 99.2 | 98.5 | 98.8 | 97.9 | 98.4 | 98.9 | 99.7 | 99.3 |
| | SVM | 99.5 | 98.8 | 99.5 | 98.8 | 98.5 | 96.3 | 98.5 | 99.3 | 99.1 | 99.4 |
| C5 | KNN | 96.0 | 96.0 | 96.6 | 99.5 | 92.4 | 99.9 | 99.8 | 99.2 | 98.9 | 99.2 |
| | SVM | 96.6 | 99.3 | 99.6 | 97.7 | 96.2 | 99.8 | 98.9 | 99.2 | 99.0 | 99.7 |
| C6 | KNN | 99.3 | 99.2 | 99.7 | 99.8 | 100 | 100 | 100 | 100 | 100 | 99.9 |
| | SVM | 99.8 | 99.9 | 99.8 | 98.7 | 99.7 | 99.6 | 100 | 100 | 100 | 99.5 |
| C7 | KNN | 99.6 | 99.1 | 98.6 | 99.6 | 100 | 99.9 | 99.9 | 100 | 100 | 99.9 |
| | SVM | 100 | 99.4 | 99.9 | 99.6 | 99.9 | 99.9 | 99.8 | 99.9 | 100 | 100 |
| C8 | KNN | 69.6 | 74.9 | 72.2 | 85.0 | 76.6 | 89.5 | 84.8 | 85.6 | 86.5 | 93.6 |
| | SVM | 83.5 | 81.3 | 80.2 | 86.0 | 79.6 | 81.9 | 76.2 | 84.6 | 90.8 | 87.8 |
| C9 | KNN | 98.2 | 97.6 | 98.3 | 100 | 99.9 | 100 | 100 | 100 | 100 | 100 |
| | SVM | 97.1 | 100 | 100 | 100 | 100 | 98.0 | 100 | 100 | 100 | 99.9 |
| C10 | KNN | 91.7 | 84.6 | 95.6 | 97.8 | 95.8 | 95.4 | 97.8 | 97.8 | 99.1 | 97.0 |
| | SVM | 91.8 | 95.0 | 99.4 | 96.0 | 97.4 | 93.8 | 96.8 | 96.2 | 99.0 | 97.7 |
| C11 | KNN | 97.4 | 97.1 | 97.9 | 100 | 99.7 | 99.8 | 100 | 100 | 99.6 | 100 |
| | SVM | 94.0 | 99.8 | 100 | 99.3 | 100 | 99.6 | 98.9 | 100 | 99.8 | 99.1 |
| C12 | KNN | 98.1 | 100 | 100 | 100 | 99.9 | 100 | 100 | 100 | 100 | 100 |
| | SVM | 99.9 | 99.9 | 100 | 100 | 100 | 99.9 | 99.9 | 100 | 100 | 100 |
| C13 | KNN | 99.7 | 98.4 | 98.6 | 99.9 | 99.5 | 100 | 99.9 | 100 | 99.9 | 100 |
| | SVM | 99.8 | 99.1 | 99.9 | 99.5 | 98.4 | 99.7 | 99.7 | 99.3 | 99.1 | 98.1 |
| C14 | KNN | 93.8 | 93.1 | 95.1 | 97.6 | 97.3 | 91.5 | 95.9 | 98.9 | 98.6 | 99.1 |
| | SVM | 92.4 | 99.3 | 97.9 | 99.3 | 97.6 | 95.5 | 96.5 | 99.8 | 98.7 | 99.6 |
| C15 | KNN | 74.0 | 74.1 | 80.1 | 89.5 | 77.7 | 94.8 | 92.7 | 89.5 | 92.2 | 91.0 |
| | SVM | 83.0 | 83.8 | 91.3 | 66.0 | 76.4 | 80.3 | 76.2 | 91.9 | 90.5 | 85.4 |
| C16 | KNN | 97.6 | 97.2 | 99.2 | 98.4 | 99.4 | 99.2 | 98.6 | 99.5 | 100 | 98.6 |
| | SVM | 98.4 | 98.6 | 99.5 | 97.3 | 99.0 | 98.6 | 98.0 | 99.0 | 99.0 | 99.3 |
| OA | KNN | 88.5 | 89.1 | 90.4 | 95.1 | 91.3 | 96.6 | 95.5 | 95.3 | 95.9 | 97.1 |
| | SVM | 92.7 | 93.4 | 94.5 | 91.8 | 92.0 | 92.7 | 91.3 | 95.3 | 96.6 | 95.2 |
| AA | KNN | 94.2 | 93.9 | 95.5 | 97.8 | 96.1 | 98.0 | 98.0 | 98.1 | 98.4 | 98.6 |
| | SVM | 95.7 | 97.1 | 97.9 | 96.1 | 96.4 | 96.4 | 96.2 | 98.1 | 98.4 | 97.8 |
| κ | KNN | 87.2 | 87.9 | 89.3 | 94.6 | 90.4 | 96.2 | 95.0 | 94.8 | 95.5 | 96.8 |
| | SVM | 91.9 | 92.7 | 93.9 | 90.9 | 91.1 | 91.8 | 90.4 | 94.8 | 96.2 | 94.6 |
Table 7. Classification accuracy of the embedding features for each class with SVM and KNN classifiers on the Indian Pines dataset.

| Class | Classifier | RAW | NPE | LPP | RLDE | LPNPE | SSRLDE | SSMRPE | SSLDP | SLSSPP | SLSRPE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| C1 | KNN | 95.7 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| | SVM | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| C2 | KNN | 66.5 | 67.0 | 78.9 | 85.8 | 83.9 | 91.6 | 78.8 | 93.4 | 94.8 | 96.3 |
| | SVM | 85.8 | 89.2 | 93.6 | 91.1 | 94.3 | 93.3 | 88.6 | 94.8 | 95.4 | 94.5 |
| C3 | KNN | 76.3 | 73.1 | 82.3 | 91.0 | 96.8 | 92.8 | 89.4 | 92.1 | 96.0 | 98.6 |
| | SVM | 87.3 | 93.4 | 92.1 | 89.6 | 97.4 | 86.8 | 94.9 | 87.8 | 97.1 | 98.5 |
| C4 | KNN | 88.9 | 96.1 | 98.1 | 99.0 | 99.0 | 97.6 | 97.1 | 100 | 99.5 | 99.5 |
| | SVM | 90.8 | 97.1 | 97.1 | 99.5 | 99.0 | 100 | 94.2 | 100 | 98.1 | 99.5 |
| C5 | KNN | 89.8 | 90.7 | 95.4 | 99.1 | 93.8 | 99.8 | 93.2 | 96.0 | 99.3 | 96.0 |
| | SVM | 94.5 | 96.0 | 94.7 | 97.8 | 98.2 | 98.5 | 94.7 | 96.5 | 98.1 | 95.4 |
| C6 | KNN | 96.7 | 97.7 | 97.9 | 99.1 | 99.9 | 98.7 | 99.4 | 99.7 | 100 | 100 |
| | SVM | 99.3 | 100 | 100 | 98.0 | 100 | 99.6 | 100 | 99.1 | 99.1 | 99.6 |
| C7 | KNN | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| | SVM | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| C8 | KNN | 93.8 | 94.0 | 96.9 | 99.6 | 100 | 98.9 | 97.8 | 100 | 100 | 100 |
| | SVM | 93.8 | 100 | 100 | 99.3 | 100 | 98.0 | 100 | 100 | 99.6 | 100 |
| C9 | KNN | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| | SVM | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| C10 | KNN | 83.0 | 82.6 | 91.3 | 90.6 | 95.3 | 91.8 | 90.8 | 92.6 | 97.1 | 94.4 |
| | SVM | 80.3 | 93.1 | 88.6 | 89.0 | 96.6 | 92.0 | 91.9 | 87.9 | 96.6 | 89.2 |
| C11 | KNN | 65.1 | 70.6 | 75.5 | 87.1 | 91.2 | 89.3 | 85.3 | 93.0 | 94.4 | 97.5 |
| | SVM | 73.2 | 67.8 | 83.3 | 82.6 | 80.6 | 85.2 | 74.4 | 87.3 | 91.5 | 93.2 |
| C12 | KNN | 67.7 | 75.7 | 72.6 | 92.5 | 95.9 | 93.6 | 92.4 | 93.3 | 98.9 | 92.7 |
| | SVM | 82.4 | 96.4 | 94.5 | 95.7 | 98.6 | 96.3 | 96.4 | 93.3 | 94.1 | 97.2 |
| C13 | KNN | 99.4 | 97.7 | 98.3 | 98.9 | 99.4 | 98.3 | 99.4 | 99.4 | 100 | 100 |
| | SVM | 97.7 | 98.9 | 100 | 99.4 | 99.4 | 99.4 | 99.4 | 99.4 | 100 | 100 |
| C14 | KNN | 91.7 | 95.0 | 90.4 | 95.1 | 99.8 | 93.8 | 98.7 | 96.1 | 99.7 | 98.7 |
| | SVM | 93.3 | 93.9 | 93.8 | 96.4 | 97.5 | 97.5 | 95.6 | 96.6 | 98.5 | 98.4 |
| C15 | KNN | 82.9 | 81.5 | 93.3 | 99.2 | 97.8 | 99.4 | 88.2 | 100 | 99.7 | 100 |
| | SVM | 91.0 | 95.5 | 98.9 | 99.2 | 99.7 | 96.3 | 97.5 | 99.7 | 99.7 | 99.4 |
| C16 | KNN | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| | SVM | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| OA | KNN | 78.3 | 80.5 | 85.1 | 91.9 | 94.2 | 93.4 | 90.0 | 94.9 | 97.1 | 97.4 |
| | SVM | 85.4 | 87.9 | 91.8 | 91.5 | 93.3 | 92.7 | 89.6 | 92.9 | 96.0 | 95.7 |
| AA | KNN | 87.3 | 88.9 | 91.9 | 96.1 | 97.1 | 96.6 | 94.4 | 97.2 | 98.7 | 98.4 |
| | SVM | 91.8 | 95.1 | 96.0 | 96.1 | 97.6 | 96.4 | 95.5 | 96.4 | 98.1 | 97.8 |
| κ | KNN | 75.5 | 77.9 | 83.1 | 90.8 | 93.3 | 92.4 | 88.7 | 94.2 | 96.7 | 97.0 |
| | SVM | 83.4 | 86.3 | 90.6 | 90.3 | 92.4 | 91.6 | 88.2 | 91.9 | 95.4 | 95.1 |
Table 8. Classification accuracy of the embedding features on each class with SVM and KNN classifiers on the Pavia University dataset.

| Class | Classifier | RAW | NPE | LPP | RLDE | LPNPE | SSRLDE | SSMRPE | SSLDP | SLSSPP | SLSRPE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| C1 | KNN | 73.4 | 74.2 | 83.3 | 90.8 | 70.3 | 92.7 | 84.9 | 88.0 | 92.8 | 96.0 |
| | SVM | 87.0 | 94.5 | 94.4 | 91.5 | 90.7 | 91.9 | 90.8 | 91.4 | 88.7 | 91.4 |
| C2 | KNN | 67.2 | 61.1 | 86.9 | 88.2 | 80.4 | 91.1 | 96.1 | 95.3 | 97.2 | 98.1 |
| | SVM | 89.6 | 88.4 | 86.0 | 95.6 | 94.8 | 95.6 | 94.4 | 92.0 | 97.1 | 94.8 |
| C3 | KNN | 72.1 | 75.1 | 88.9 | 74.2 | 81.5 | 79.4 | 80.4 | 89.1 | 88.9 | 95.7 |
| | SVM | 85.5 | 81.6 | 90.0 | 80.3 | 86.9 | 76.9 | 75.2 | 89.5 | 91.0 | 88.5 |
| C4 | KNN | 86.1 | 87.5 | 94.0 | 93.9 | 87.5 | 95.0 | 92.0 | 94.1 | 90.0 | 96.5 |
| | SVM | 94.2 | 93.3 | 94.3 | 93.7 | 94.2 | 93.6 | 96.1 | 93.3 | 94.0 | 92.8 |
| C5 | KNN | 99.8 | 99.7 | 100 | 100 | 100 | 100 | 100 | 99.8 | 100 | 100 |
| | SVM | 99.6 | 100 | 100 | 99.9 | 100 | 99.9 | 99.9 | 100 | 100 | 100 |
| C6 | KNN | 82.1 | 75.1 | 90.7 | 92.2 | 92.3 | 95.1 | 97.7 | 99.4 | 99.8 | 99.9 |
| | SVM | 94.9 | 94.9 | 94.0 | 92.0 | 94.9 | 90.9 | 93.8 | 92.8 | 97.1 | 98.0 |
| C7 | KNN | 85.8 | 69.5 | 93.6 | 95.1 | 97.2 | 97.8 | 99.2 | 98.8 | 99.7 | 96.2 |
| | SVM | 96.7 | 98.5 | 95.9 | 97.4 | 98.5 | 98.2 | 96.8 | 99.2 | 99.4 | 97.4 |
| C8 | KNN | 75.7 | 77.9 | 79.4 | 90.9 | 63.1 | 93.4 | 87.5 | 87.7 | 89.4 | 83.4 |
| | SVM | 86.8 | 83.3 | 82.1 | 69.7 | 71.8 | 75.8 | 85.4 | 85.6 | 89.2 | 73.4 |
| C9 | KNN | 99.8 | 99.8 | 98.8 | 99.8 | 100 | 99.9 | 100 | 99.9 | 99.9 | 99.9 |
| | SVM | 99.8 | 99.9 | 99.9 | 100 | 99.7 | 99.9 | 99.8 | 100 | 99.8 | 100 |
| OA | KNN | 74.5 | 71.1 | 87.6 | 89.9 | 80.8 | 92.4 | 93.1 | 93.9 | 95.4 | 96.5 |
| | SVM | 90.5 | 90.6 | 89.8 | 91.7 | 92.2 | 92.0 | 92.5 | 92.1 | 94.9 | 92.7 |
| AA | KNN | 82.4 | 80.0 | 90.6 | 91.7 | 85.8 | 93.8 | 93.1 | 94.7 | 95.3 | 96.2 |
| | SVM | 92.7 | 92.7 | 93.0 | 91.1 | 92.4 | 91.4 | 92.5 | 93.8 | 95.2 | 92.9 |
| κ | KNN | 67.9 | 63.9 | 83.9 | 86.8 | 75.5 | 90.1 | 90.8 | 92.0 | 94.0 | 95.4 |
| | SVM | 87.6 | 87.7 | 86.7 | 89.1 | 89.7 | 89.5 | 90.2 | 89.6 | 93.2 | 90.4 |
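Tables 6, 7 and 8 summarize the per-class results with overall accuracy (OA), average accuracy (AA) and the kappa coefficient κ. As a reference for how these three summary metrics follow from a confusion matrix, the sketch below computes them from predicted and ground-truth label arrays; the function name and array layout are illustrative, not taken from the paper.

```python
import numpy as np

def classification_scores(y_true, y_pred, num_classes):
    """Return OA, AA and Cohen's kappa from integer label arrays."""
    # Build the confusion matrix: rows = reference class, cols = predicted class.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    oa = np.trace(cm) / n                       # fraction of correctly labelled samples
    per_class = np.diag(cm) / cm.sum(axis=1)    # per-class accuracy (recall)
    aa = per_class.mean()                       # unweighted mean over classes
    # Chance agreement p_e from the row/column marginals, then kappa.
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```

The tables report these quantities as percentages; multiplying the returned values by 100 reproduces that convention.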
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Li, N.; Zhou, D.; Shi, J.; Wu, T.; Gong, M. Spectral-Locational-Spatial Manifold Learning for Hyperspectral Images Dimensionality Reduction. Remote Sens. 2021, 13, 2752. https://doi.org/10.3390/rs13142752
