Article

Dimensionality Reduction of Hyperspectral Images Based on Improved Spatial–Spectral Weight Manifold Embedding

1
School of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China
2
School of Mechanical Engineering, Hebei University of Technology, Tianjin 300401, China
*
Author to whom correspondence should be addressed.
Sensors 2020, 20(16), 4413; https://doi.org/10.3390/s20164413
Submission received: 17 June 2020 / Revised: 26 July 2020 / Accepted: 30 July 2020 / Published: 7 August 2020
(This article belongs to the Section Remote Sensors)

Abstract

Due to the spectral complexity and high dimensionality of hyperspectral images (HSIs), HSI processing is susceptible to the curse of dimensionality, and classification accuracy on the ground-truth classes is often unsatisfactory. To overcome the curse of dimensionality and improve classification accuracy, an improved spatial–spectral weight manifold embedding (ISS-WME) algorithm, which exploits both the intrinsic manifold structure of hyperspectral data and their local neighbors, is proposed in this study. The manifold structure is constructed from a structural weight matrix and a distance weight matrix. The structural weight matrix is composed of within-class and between-class coefficient representation matrices, obtained using the collaborative representation method, while the distance weight matrix integrates the spatial and spectral information of HSIs. The ISS-WME algorithm thus describes the whole structure of the data through a weight matrix that combines the within-class and between-class matrices with the spatial–spectral information of HSIs, and the nearest neighbor samples of the data are kept unchanged when embedding into the low-dimensional space. To verify the classification performance of the ISS-WME algorithm, experiments were conducted on three classical data sets, namely Indian Pines, Pavia University, and Salinas scene. Six dimensionality reduction (DR) methods were used for comparison, with different classifiers such as k-nearest neighbor (KNN) and support vector machine (SVM). The experimental results show that the ISS-WME algorithm represents the HSI structure better than the other methods and effectively improves the classification accuracy of HSIs.

1. Introduction

With the development of science and technology, hyperspectral images (HSIs) have become a main research direction in the field of modern remote sensing technology. HSIs have a large number of spectral bands, which provide detailed spectral information about objects [1,2]. However, due to the strong correlation between adjacent spectra, there is much redundant information in HSIs, which takes up large storage space and requires much computation time. Moreover, when classifying HSIs, classification accuracy is subject to the curse of dimensionality [3]. In order to improve classification accuracy, a dimensionality reduction (DR) method is a necessary and feasible preprocessing measure for HSI [4,5].
A DR method aims at extracting important features of images, mapping high-dimensional data to low-dimensional space, and using the data in low-dimensional space to describe high-dimensional features [5]. In recent years, scholars have put forward many DR methods, which can be divided into the following two categories: linear dimensionality reduction (LDR) algorithms and manifold dimensionality reduction (MDR) algorithms [6]. The former includes principal component analysis (PCA) [7], linear discriminant analysis (LDA) [8], and independent component analysis (ICA) [9], and so on. These methods project images to the low-dimensional space by linear transformation and find the optimal transformation projection. However, because ground-truth features reflected in HSI are often nonlinear topological structures, important features of the images are lost if only an LDR method is used. Therefore, MDR algorithms are gradually appearing, including local linear embedding (LLE) [10], local preserving project (LPP) [11], Laplacian eigenmaps (LE) [12], and so on. By learning the intrinsic geometric structure of data, manifold learning [13] can obtain the potential manifold structure of the high-dimensional data to achieve the goal of dimensionality reduction.
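As a minimal illustration of the LDR family just mentioned, the sketch below implements PCA with plain NumPy on synthetic pixel data; the data, dimensions, and function names are hypothetical, and real HSI pipelines would typically rely on library implementations:

```python
import numpy as np

def pca(X, d):
    """Project N pixels with D spectral bands (X: (N, D)) onto the
    top-d principal components via eigendecomposition of the
    sample covariance matrix."""
    Xc = X - X.mean(axis=0)                # center each band
    cov = Xc.T @ Xc / (X.shape[0] - 1)     # (D, D) covariance matrix
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    P = vecs[:, ::-1][:, :d]               # top-d eigenvectors as projection
    return Xc @ P                          # (N, d) low-dimensional embedding

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))             # toy data: 100 pixels, 20 bands
Y = pca(X, 3)
print(Y.shape)                             # (100, 3)
```

Manifold methods such as LLE and LE, discussed next, replace the single global linear projection above with neighborhood-level reconstructions.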
The purpose of the MDR method in HSI is to find the manifold structure in the high-dimensional space. LLE [14] obtains the reconstruction weights by characterizing the local adjacent samples of the data and keeps the neighborhood relationships in the local range unchanged when mapping to the low-dimensional space. However, the LLE algorithm only determines the neighbor relationships between points and cannot describe the structural features of the data. Moreover, the linear neighbor representation weight matrix differs between samples, so the algorithm must be re-run whenever the samples change, which is time consuming and considerably inefficient. Wu et al. [15] proposed an improved weighted local linear embedding (WLE-LLE) algorithm, which constructs the weight matrix by calculating the Euclidean and geodesic distances between samples, and merges the LLE and LE algorithms into a new objective function to effectively represent the topological structure of the data. Huang et al. [16] proposed a sparse discriminant manifold embedding (SDME) algorithm, which forms a dimensionality reduction framework based on graph embedding and sparse representation to make full use of prior label information. Xu et al. [17] proposed a superpixel-based spatial–spectral dimension reduction (SSDR) algorithm that integrates the similarity between space and spectrum: the mapping matrix of the spatial domain is found by using superpixel segmentation to explore spatial similarity, pixels with the same label construct a label-guided graph to explore spectral similarity, and integrating the labels and spatial information contributes to learning a discriminant projection matrix. Wu et al. [18] proposed a correlation coefficient-based supervised locally linear embedding (SC2SLLE) algorithm, which introduces the Spearman correlation coefficient to determine appropriate nearest neighbor points and increases the discriminability of the embedded data on the basis of the supervised LLE method. Zhang et al. [19] proposed a SLIC (Simple Linear Iterative Clustering) superpixel-based Schroedinger eigenmaps (SSSE) algorithm, which uses SLIC segmentation to obtain spatial information for superpixels of different scales and sizes, and applies the SE method to obtain the low-dimensional data. Hong et al. [20] proposed a robust local manifold representation (RLMR) algorithm based on LLE, which learns a novel manifold representation and combines it with spatial–spectral information to improve the robustness of the algorithm.
In this paper, an improved spatial–spectral weight manifold embedding (ISS-WME) algorithm is proposed, which combines spatial–spectral information and manifold structure to extract the features of HSI. First, the spatial–spectral information of HSI is extracted with a Gaussian variant function, and the product of the spatial distance matrix and the spectral distance matrix is taken as the distance weight matrix. Then, the collaborative representation method is used to express the characteristics of the HSI structure: after projection, samples from the same class lie as close as possible to the same hyperplane, while samples from different classes are as far apart as possible. The structural weight matrix is obtained by combining the within-class and between-class representation matrices, and the product of the distance weight matrix and the structural weight matrix is used as the new weight matrix. When data are mapped from the high-dimensional manifold space to the low-dimensional space, outliers easily appear if only the structural distribution between the data points is considered, and sparseness problems easily arise if only the nearest neighbor relationships are kept unchanged during the projection. To overcome both problems, the structure and the neighbor sample relationships are taken into account together in this paper. Finally, the model can be solved efficiently by computing the minimum eigenvalues of a generalized eigenvalue problem to obtain a projection matrix. The main contributions of the proposed algorithm are as follows:
  • A new weight matrix is constructed to describe the structure between samples, in which the product of the spatial–spectral distance weight matrix and the structural weight matrix is taken as the new data weight matrix. Compared with previous weight matrices, which only consider spectral distance or spatial distance, the new weight matrix integrates the spatial–spectral information and the structural characteristics of the data.
  • The model not only keeps the manifold structure invariant, but also preserves the nearest neighbor relationships of the samples, when the high-dimensional data are projected into the low-dimensional space.
This paper is arranged as follows. Section 2 briefly summarizes the LLE and LE methods and reviews the related works of these models. Section 3 provides the detailed description and the solving process of ISS-WME. Section 4 compares the performance of the proposed method and other DR methods with respect to three public data sets. Finally, the conclusions and perspectives are provided in Section 5.

2. Related Works

2.1. Local Linear Embedding

Given the data set $X = [x_1, \dots, x_N] \in \mathbb{R}^{D \times N}$, where $x_i \in \mathbb{R}^D$ denotes the $i$th sample with $D$-dimensional features and $N$ is the number of samples, we assume that the $D$-dimensional sample $x_i$ is projected into a $d$-dimensional space $M$, $d \ll D$. Therefore, the low-dimensional coordinates of the transformed data are $Y = [y_1, \dots, y_N] \in \mathbb{R}^{d \times N}$, where $y_i \in \mathbb{R}^d$. The core of the LLE algorithm is to keep $x_i$ and its local neighbor samples unchanged after DR; a point and its local neighbor points are regarded as belonging to the same class. Under the principle of minimizing the reconstruction error, the sample $x_i$ can be linearly represented by its neighbor samples. The reconstruction weight matrix connects the original space with the low-dimensional embedding space: the reconstruction weights between each sample and its nearest neighbor samples are kept unchanged, and the embedding in the low-dimensional space is obtained by minimizing the reconstruction error. Therefore, the weight coefficient matrix describing the relationship between $x_i$ and its local neighbors can be obtained by solving the following optimization problem [21]:
$$\min \sum_{i=1}^{N} \Big\| x_i - \sum_{j=1}^{k} w_{ij} x_j \Big\|^2 \quad \text{s.t.} \quad \sum_{j=1}^{k} w_{ij} = 1 \quad (1)$$
In Equation (1), $x_j\ (j = 1, \dots, k)$ is one of the $k$ samples closest to $x_i\ (i = 1, \dots, N)$, and $w_{ij}$ stands for the weight of the neighbor relationship between samples $x_i$ and $x_j$; if they are not neighbors, then $w_{ij} = 0$. When the $D$-dimensional samples are projected into the $d$-dimensional space, it is desirable to maintain the same linear relationship:
$$\min \sum_{i=1}^{N} \Big\| y_i - \sum_{j=1}^{k} w_{ij} y_j \Big\|^2 \quad \text{s.t.} \quad \frac{1}{N}\sum_{i=1}^{N} y_i y_i^{T} = I,\ \ \sum_{i=1}^{N} y_i = 0 \quad (2)$$
where $I$ is the identity matrix and $y_i = Y I_i$, with $I_i$ the $i$th column of $I$. Letting $M = (I - W)^{T}(I - W)$, Equation (2) can be rewritten as the following problem:
$$\mathop{\arg\min}_{Y} \sum_{i=1}^{N} \| Y I_i - Y w_i \|^2 = \mathop{\arg\min}_{Y} \mathrm{tr}\!\left( Y M Y^{T} \right) \quad (3)$$
Using the method of Lagrangian multiplier, Equation (3) can be easily solved by the generalized eigenvalue decomposition approach as follows:
$$M Y^{T} = \lambda Y^{T} \quad (4)$$
Then, we obtain the eigenvectors corresponding to the $d$ smallest non-zero eigenvalues, and the low-dimensional embedding matrix can be represented as $Y = [y_1, \dots, y_d]$.
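The LLE procedure of Equations (1)–(4) can be sketched as follows. This is a simplified NumPy illustration on synthetic data; the small ridge term `reg` added to the local Gram matrices is an assumption for numerical stability, not part of the formulation above:

```python
import numpy as np

def lle(X, k=5, d=2, reg=1e-3):
    """X: (N, D). Returns an (N, d) embedding following Equations (1)-(4)."""
    N = X.shape[0]
    # pairwise distances -> k nearest neighbors (excluding the point itself)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    nbrs = np.argsort(dist, axis=1)[:, 1:k + 1]
    W = np.zeros((N, N))
    for i in range(N):
        Z = X[nbrs[i]] - X[i]                          # (k, D) local differences
        G = Z @ Z.T + reg * np.trace(Z @ Z.T) * np.eye(k)  # regularized Gram matrix
        w = np.linalg.solve(G, np.ones(k))             # min ||x_i - sum_j w_ij x_j||^2
        W[i, nbrs[i]] = w / w.sum()                    # enforce sum_j w_ij = 1
    M = (np.eye(N) - W).T @ (np.eye(N) - W)            # M = (I - W)^T (I - W)
    vals, vecs = np.linalg.eigh(M)                     # ascending eigenvalues
    return vecs[:, 1:d + 1]                            # skip the constant eigenvector

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))
Y = lle(X, k=6, d=2)
print(Y.shape)                                         # (60, 2)
```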
The LLE algorithm [22] successfully maintains the local neighbor geometric structure and is fast to compute. However, as the data dimensionality and data size increase, it suffers from severe sparsity, poor noise robustness, and other problems.

2.2. Laplacian Eigenmaps

Given the data set $X = [x_1, \dots, x_N] \in \mathbb{R}^{D \times N}$, the KNN method is used to find the $k$ nearest neighbors of each sample $x_i$, forming an overall data structure matrix, where $x_j$ is the $j$th nearest sample of $x_i$. Each pair of neighbors is then assigned the weight $h_{ij} = \exp(-\| x_i - x_j \|^2)$. Let $Y = [y_1, \dots, y_N] \in \mathbb{R}^{d \times N}$ denote the low-dimensional embedding of data set $X$, with $Y = P^{T} X$. Then, $Y$ can be solved by constructing the following optimization problem [23]:
$$\min \sum_{i,j=1}^{N} \| y_i - y_j \|^2 h_{ij} \quad \text{s.t.} \quad \sum_{i=1}^{N} y_i = 0,\ \ \sum_{i=1}^{N} y_i y_i^{T} = I \quad (5)$$
The constraints in Equation (5) ensure that it has a solution, and it can be solved using the generalized eigenvalue decomposition approach as follows:
$$L y = \lambda D y \quad (6)$$
where $D_{ii} = \sum_{j} h_{ij}$ is a diagonal matrix and $L = D - H$ is the Laplacian matrix, with $H$ the weight matrix made up of $h_{ij}$. The embedding samples in the $d$-dimensional space are constructed from the eigenvectors corresponding to the $d$ minimum eigenvalues.
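A compact sketch of the LE procedure of Equations (5)–(6), using a heat-kernel weight restricted to the k nearest neighbors; the bandwidth choice (median squared distance) and the graph symmetrization are illustrative assumptions, not part of the formulation above:

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, k=5, d=2):
    """X: (N, D). Heat-kernel weights on a k-NN graph, then the
    generalized eigenproblem L y = lambda D y of Equation (6)."""
    N = X.shape[0]
    dist2 = ((X[:, None] - X[None, :]) ** 2).sum(axis=2)
    t = np.median(dist2)                          # illustrative bandwidth
    nbrs = np.argsort(dist2, axis=1)[:, 1:k + 1]
    H = np.zeros((N, N))
    for i in range(N):
        H[i, nbrs[i]] = np.exp(-dist2[i, nbrs[i]] / t)
    H = np.maximum(H, H.T)                        # symmetrize the neighbor graph
    D = np.diag(H.sum(axis=1))                    # D_ii = sum_j h_ij
    L = D - H                                     # Laplacian matrix L = D - H
    vals, vecs = eigh(L, D)                       # generalized eigenproblem
    return vecs[:, 1:d + 1]                       # drop the trivial eigenvector

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 8))
Y = laplacian_eigenmaps(X, k=6, d=2)
print(Y.shape)                                    # (50, 2)
```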
The LE algorithm [24] introduces the graph theory to achieve the purpose of DR methods. Nevertheless, due to the inaccurate weight matrix in the LE algorithm, the traditional LE algorithm cannot accurately describe the structure for complex hyperspectral data, resulting in the fact that the data in the low-dimensional space cannot fully express the original data features.

3. Improved Spatial–Spectral Weight Manifold Embedding

To solve the large sparsity, inexact weights, and other problems of the LLE and LE [25] algorithms, an improved spatial–spectral weight manifold embedding (ISS-WME) algorithm is proposed in this paper. It combines spatial–spectral information with high-dimensional manifold structure information to construct a weight matrix corresponding to the HSI structure. Considering the multi-manifold structure of HSI, combining its structure with the nearest neighbor samples keeps the data neighbor relationships invariant without breaking the original structure when embedding into the low-dimensional space. In this regard, Section 3.1 analyzes how to construct a weight matrix that is more consistent with the sample structure, and Section 3.2 describes the final optimization objective function.

3.1. Spatial–Spectral Weight Setting

Through experimental study, researchers have found that classification accuracy can be improved by incorporating spatial information into the analysis of HSI. Hence, the ISS-WME method is based on both spatial and spectral information. It uses a variant of the Gaussian function to represent the spectral and spatial distances, respectively. Given the HSI data set $X = [x^f, x^p]$, where $x^f$ is the spectral reflectance of a pixel and $x^p$ is the spatial coordinates of a pixel, to construct $D_{ij}$ we consider each pair of samples $x_i = [x_i^f, x_i^p]$ and $x_j = [x_j^f, x_j^p]$, where $i, j = 1, \dots, N$. The spectral distance matrix and the spatial distance matrix are then represented, respectively, as follows:
$$D_{ij}^{f} = 1 - \exp\!\left( -\frac{\| x_i^{f} - x_j^{f} \|^2}{\sigma_f^2} \right), \qquad D_{ij}^{p} = 1 - \exp\!\left( -\frac{\| x_i^{p} - x_j^{p} \|^2}{\sigma_p^2} \right) \quad (7)$$
Therefore, the spatial–spectral distance weight matrix is as follows:
$$D_{ij} = D_{ij}^{f} \cdot D_{ij}^{p} \quad (8)$$
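Equations (7) and (8) can be sketched directly in NumPy; the toy reflectances and coordinates below are hypothetical, and the $\sigma$ values follow those used later in the experimental settings:

```python
import numpy as np

def spatial_spectral_weights(Xf, Xp, sigma_f=0.1, sigma_p=100.0):
    """Xf: (N, B) spectral reflectances, Xp: (N, 2) pixel coordinates.
    Returns the spatial-spectral distance weight matrix D of Equation (8)."""
    df2 = ((Xf[:, None] - Xf[None, :]) ** 2).sum(axis=2)   # spectral sq. distances
    dp2 = ((Xp[:, None] - Xp[None, :]) ** 2).sum(axis=2)   # spatial sq. distances
    Df = 1.0 - np.exp(-df2 / sigma_f ** 2)                 # spectral weight, Eq. (7)
    Dp = 1.0 - np.exp(-dp2 / sigma_p ** 2)                 # spatial weight, Eq. (7)
    return Df * Dp                                         # elementwise product, Eq. (8)

rng = np.random.default_rng(3)
Xf = rng.uniform(size=(40, 16))                            # toy reflectances
Xp = rng.uniform(0, 100, size=(40, 2))                     # toy pixel coordinates
D = spatial_spectral_weights(Xf, Xp)
print(D.shape)                                             # (40, 40)
```

Note that $D$ is symmetric with a zero diagonal, since both Gaussian terms equal 1 for identical samples.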
In HSI, adjacent pixels in the same homogenous region usually belong to the same class, so any sample in a class can be linearly represented by its homogeneous neighbor samples. Similarly, the center of the whole data set can be represented by the centers of the different classes [26]. The HSI should maintain this characteristic after DR. We therefore obtain a within-class representation coefficient matrix by minimizing the error of the collaborative representation model. To prevent overfitting, regularization constraints are added to the optimization model. The objective function of the within-class collaborative representation model is as follows:
$$\sum_{k=1}^{c} \sum_{\substack{i=1 \\ i \notin \tau_k}}^{l_k} \left( \big\| P^{T} x_i - P^{T} X_k \theta_k^{w} \big\|_2^2 + \lambda \big\| \theta_k^{w} - \overline{\theta_k^{w}} \big\|_2^2 \right) \quad (9)$$
In Equation (9), $l_k$ is the number of samples in the $k$th class, $\tau_k$ is the set of samples outside the $k$th class, and $X_k$ is the set of samples from the same class as $x_i$, excluding $x_i$ itself. $\theta_k^{w}$ is the within-class linear representation coefficient matrix of the $k$th class, the within-class mean coefficient matrix is $\overline{\theta_k^{w}} = \left[ \frac{1}{n-1}, \dots, \frac{1}{n-1} \right] \in \mathbb{R}^{(n-1) \times 1}$, and $\theta^{w}$ denotes all the within-class linear representation coefficients $[\theta_k^{w} \,|\, k = 1, \dots, c]$.
Likewise, the objective function of the between-class representation coefficient matrix is as follows:
$$\sum_{k=1}^{c} \left( \big\| P^{T} \bar{x} - P^{T} \bar{X}_k \theta_k^{b} \big\|_2^2 + \lambda \big\| \theta_k^{b} - \overline{\theta_k^{b}} \big\|_2^2 \right) \quad (10)$$
In Equation (10), $\bar{x}$ is the mean of all samples, and $\bar{X}_k$ is the set of class-center samples. $\theta_k^{b}$ denotes the between-class representation coefficient matrix of the $k$th class, the between-class mean coefficient matrix is $\overline{\theta_k^{b}} = \left[ \frac{1}{n-1}, \dots, \frac{1}{n-1} \right] \in \mathbb{R}^{(n-1) \times 1}$, and $\theta^{b}$ denotes all the between-class representation coefficients $[\theta_k^{b} \,|\, k = 1, \dots, c]$.
The within-class representation matrix $\theta_k^{w}$ is obtained by minimizing Equation (9), i.e., by setting the derivative of the objective function with respect to the within-class representation coefficients to zero:
$$\frac{\partial f}{\partial \theta_k^{w}} = -2 Y_k^{T} \left( y_i - Y_k \theta_k^{w} \right) + 2 \lambda \left( \theta_k^{w} - \overline{\theta_k^{w}} \right) = 0 \quad (11)$$
Therefore, the within-class coefficient matrix is as follows:
$$\theta_k^{w} = \left( Y_k^{T} Y_k + \lambda I \right)^{-1} \left( Y_k^{T} y_i + \lambda \overline{\theta_k^{w}} \right) \quad (12)$$
In the same way, setting the derivative of the objective function with respect to the between-class representation coefficients to zero gives the between-class coefficient matrix:
$$\theta_k^{b} = \left( \bar{Y}_k^{T} \bar{Y}_k + \lambda I \right)^{-1} \left( \bar{Y}_k^{T} \bar{y} + \lambda \overline{\theta_k^{b}} \right) \quad (13)$$
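The closed-form solutions of Equations (12) and (13) are ridge-regression-like updates. A minimal sketch for one class dictionary, with illustrative names and random data:

```python
import numpy as np

def collab_coeffs(Yk, yi, lam=0.1):
    """Collaborative representation of a sample over a class dictionary,
    shrunk toward the mean coefficient vector, as in Equation (12).
    Yk: (d, n-1) class samples as columns; yi: (d,) target sample."""
    n1 = Yk.shape[1]
    theta_bar = np.full(n1, 1.0 / n1)             # mean coefficient vector
    A = Yk.T @ Yk + lam * np.eye(n1)              # regularized Gram matrix
    return np.linalg.solve(A, Yk.T @ yi + lam * theta_bar)

rng = np.random.default_rng(4)
Yk = rng.normal(size=(10, 7))                     # toy class dictionary
yi = rng.normal(size=10)                          # toy target sample
theta = collab_coeffs(Yk, yi)
print(theta.shape)                                # (7,)
```

The same routine applied to the class-center dictionary and overall mean gives the between-class coefficients of Equation (13); as $\lambda \to \infty$, the solution shrinks toward the mean coefficient vector.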

3.2. ISS-WME Model

Given the HSI data set $X = [x_1, \dots, x_N]$, $x_i \in \mathbb{R}^{D}$, we assume that a projection matrix $P \in \mathbb{R}^{D \times d}$ projects the data $X$ into the low-dimensional space. $Y = [y_1, \dots, y_N]$, $y_i \in \mathbb{R}^{d}$, represents the samples in the low-dimensional space, with $Y = P^{T} X$. As proposed by Wu [15], both distance and structural factors are taken into account in this paper. We regard the spatial–spectral matrix as the distance weight $W_{ij}^{D}$ and the coefficient matrices as the structure weight $W_{ij}^{S}$; $W_{ij}^{D}$ and $W_{ij}^{S}$ then constitute the new weight matrix between samples, as follows:
$$W_{ij} = W_{ij}^{D} \cdot W_{ij}^{S}, \qquad W_{ij}^{D} = D_{ij}, \qquad W_{ij}^{S} = \beta\, \theta^{w} + (1 - \beta)\, \theta^{b} \quad (14)$$
where β represents the proportion of the within-class matrix and the between-class matrix in the structure weight.
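Equation (14) amounts to an elementwise blend-and-multiply of precomputed matrices. A minimal sketch, assuming the distance weight and the two representation matrices are already available as dense arrays (all names and data below are illustrative):

```python
import numpy as np

def combined_weight(D, Theta_w, Theta_b, beta=0.5):
    """Equation (14): blend within/between-class representation matrices
    into the structure weight W^S, then multiply elementwise by the
    spatial-spectral distance weight W^D = D."""
    Ws = beta * Theta_w + (1 - beta) * Theta_b   # structure weight W^S
    return D * Ws                                # W_ij = W^D_ij * W^S_ij

rng = np.random.default_rng(6)
D = rng.uniform(size=(20, 20))                   # toy distance weight, Eq. (8)
Tw = rng.uniform(size=(20, 20))                  # toy within-class matrix
Tb = rng.uniform(size=(20, 20))                  # toy between-class matrix
W = combined_weight(D, Tw, Tb, beta=0.5)
print(W.shape)                                   # (20, 20)
```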
Furthermore, mapping the high-dimensional data into the low-dimensional space should not only leave the local manifold structure unchanged, but also keep the local neighbor relationships invariant. Introducing the weight of Equation (14) to increase the robustness of the model, the improved weight manifold embedding optimization problem is as follows:
$$\min \frac{1}{2} \sum_{i,j=1}^{N} \| y_i - y_j \|^2 W_{ij} + \alpha \sum_{i=1}^{N} \Big\| y_i - \sum_{j=1}^{k} G_{ij} y_j \Big\|^2 \quad \text{s.t.} \quad \sum_{i=1}^{N} y_i y_i^{T} = I,\ \ \sum_{i=1}^{N} y_i = 0,\ \ \sum_{j=1}^{k} G_{ij} = 1 \quad (15)$$
where $\alpha$ is a compromise parameter, $W_{ij}$ is the spatial–spectral weight matrix in Equation (14), and $G_{ij}$ is the weight matrix representing the nearest neighbor relationship: if $x_i$ and $x_j$ are neighbors, $G_{ij} = d_G(i, j)$, the geodesic distance; otherwise, $G_{ij} = 0$. According to Equations (2) and (5), the optimization problem (15) is equivalent to the following:
$$\begin{aligned} & \min \frac{1}{2} \sum_{i,j=1}^{N} \| P^{T} x_i - P^{T} x_j \|^2 W_{ij} + \alpha \sum_{i=1}^{N} \Big\| P^{T} x_i - P^{T} \sum_{j=1}^{k} G_{ij} x_j \Big\|^2 \\ &= \sum_{i,j=1}^{N} P^{T} \left( x_i W_{ij} x_i^{T} - x_i W_{ij} x_j^{T} \right) P + \alpha \sum_{i=1}^{N} \big\| P^{T} \left( X I_i - X G_i \right) \big\|^2 \\ &= \mathrm{tr}\!\left( P^{T} X \left( D' - W \right) X^{T} P \right) + \alpha\, \mathrm{tr}\!\left( P^{T} X \left( I - G \right)^{T} \left( I - G \right) X^{T} P \right) \\ &= \mathrm{tr}\!\left( P^{T} X L X^{T} P \right) + \alpha\, \mathrm{tr}\!\left( P^{T} X M X^{T} P \right) = \mathrm{tr}\!\left( P^{T} X B X^{T} P \right) \end{aligned} \quad (16)$$
where $L = D' - W$ is the Laplacian matrix, $D'_{ii} = \sum_{j=1}^{N} W_{ij}$ is a diagonal matrix, $W = [W_{ij}]_{N \times N}$ is a symmetric matrix, $M = (I - G)^{T}(I - G)$, and $B = L + \alpha M$. Finally, the objective function leads to the following optimization problem:
$$\min\ \mathrm{tr}\!\left( P^{T} X B X^{T} P \right) \quad \text{s.t.} \quad P^{T} X D' X^{T} P = I \quad (17)$$
Using the method of Lagrange multipliers, the optimization problem takes the following form:
$$X B X^{T} p_d = \lambda X D' X^{T} p_d \quad (18)$$
where $p_d$ is a generalized eigenvector of Equation (18), with eigenvalues ordered as $\lambda_1 \leq \dots \leq \lambda_d$. We can then learn the projection matrix $P = [p_1, \dots, p_d]$. In summary, Algorithm 1 is as follows:
Algorithm 1 Process of the ISS-WME Algorithm
Input: HSI data set $X = [x_1, \dots, x_N] \in \mathbb{R}^{D \times N}$ with $x_i = (x_i^f, x_i^p)$, target dimensionality $d \ll D$, and number of nearest neighbors $k$.
1: Segment the HSI into superpixels using the SLIC segmentation method and randomly select training samples (for Pavia University, the training samples are 2%, 4%, 6%, 8%, 10%), ensuring that the number of superpixels matches the number of training samples; then use Equations (7) and (8) to calculate the spatial–spectral distance matrix between superpixels.
2: Use Equations (12) and (13) to obtain the structure representation matrices between training samples. The product of the two types of matrices is taken as the new weight matrix, Equation (14).
3: According to the local manifold structure and nearest neighbor relationship of the samples, the objective function of Equation (16) is constructed.
4: Solve the generalized eigenvalue problem of Equation (18) to obtain the corresponding eigenvectors.
5: Learn a projection matrix P.
Output: The data in the low-dimensional space, $Y = P^{T} X$.
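Step 4 of Algorithm 1, solving the generalized eigenvalue problem of Equation (18), can be sketched with SciPy as follows. The small ridge term added to the right-hand matrix is an assumption to keep it positive definite, and B is built here as a simple Laplacian-style matrix purely for the demonstration:

```python
import numpy as np
from scipy.linalg import eigh

def solve_projection(X, B, Dp, d, eps=1e-8):
    """Solve X B X^T p = lambda X D' X^T p (Equation (18)) and return the
    eigenvectors of the d smallest eigenvalues as the projection P.
    X: (D, N) data; B, Dp: (N, N) matrices from Equation (16)."""
    A = X @ B @ X.T                              # left-hand matrix
    C = X @ Dp @ X.T + eps * np.eye(X.shape[0])  # ridge keeps C positive definite
    vals, vecs = eigh(A, C)                      # eigenvalues in ascending order
    return vecs[:, :d]                           # P = [p_1, ..., p_d]

rng = np.random.default_rng(5)
X = rng.normal(size=(12, 30))                    # toy data: D=12 bands, N=30 samples
W = rng.uniform(size=(30, 30)); W = (W + W.T) / 2
Dp = np.diag(W.sum(axis=1))                      # D'_ii = sum_j W_ij
B = Dp - W                                       # Laplacian-style B for the demo
P = solve_projection(X, B, Dp, d=3)
print(P.shape)                                   # (12, 3)
```

The low-dimensional data then follow as `Y = P.T @ X`, matching the output line of Algorithm 1.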

4. Experiments and Discussion

In order to verify the effectiveness of the proposed ISS-WME algorithm, we conducted experiments on three commonly used HSI data sets, namely Indian Pines, Pavia University, and Salinas scene. We considered the overall accuracy (OA) [19], classification accuracy (CA), average accuracy (AA), and kappa coefficient (kappa) [27] of the classification results as evaluation values. We compared the ISS-WME algorithm with six other representative DR algorithms, i.e., PCA, Isomap [28], LLE, LE, SSSE, and WLE-LLE. We used two commonly used classifiers, i.e., the Euclidean distance-based k-nearest neighbor (KNN) algorithm [29] and the support vector machine (SVM), to classify the low-dimensional data. We performed the experiments using MATLAB on a computer with an Intel Core CPU at 2.59 GHz and 8 GB of RAM.

4.1. Data Sets and Parameter Setting

4.1.1. Data Sets

The Indian Pines, Pavia University, and Salinas scene data sets were subjected to experiments in the paper.
The Indian Pines data set [30,31] and Salinas scene data set [2,30] were the scenes gathered by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. Indian Pines consisted of 145 × 145 pixels and 220 spectral bands. However, several spectral bands with noise and water absorption phenomena were removed from the data set, leaving a total of 200 radiance channels to be used in the experiments. Salinas had 512 × 217 pixels and 204 spectral bands.
The Pavia University data set [30,32] was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor. Its size was 610 × 340 pixels. Some channels were removed due to noise and the remaining number of spectral bands was 103.

4.1.2. Experimental Parameter Settings

For this paper, six different DR algorithms were compared with the proposed ISS-WME method. These comparison algorithms are described as follows. PCA, Isomap, LLE, and LE are four classical DR algorithms; the SSSE algorithm combines spectral and spatial information, and WLE-LLE combines spectral and structural information. In addition, for the LE, LLE, WLE-LLE, and SSSE algorithms, the number of nearest neighbor samples must be set in the experiment; to compare and analyze the classification results, the number of nearest neighbor samples was set to 15 in all experiments. The SSSE and ISS-WME algorithms also require spatial and spectral information to be computed, so we set the parameters as $(\sigma_f, \sigma_p) = (0.1, 100)$.
In each experiment, each data set was divided into training and testing samples. We used the different DR algorithms to learn a projection matrix on the training samples, and then utilized the learned embedding matrix to project the testing samples into the low-dimensional space. Finally, we used a KNN or SVM classifier to classify the data in the low-dimensional space. Moreover, to reduce systematic error, each experiment was repeated 10 times and the average result was reported with the associated standard deviation. We used OA, CA, AA, and kappa to evaluate the performance of the different algorithms. In the Indian Pines experiment, the parameters were set to $(\beta, \alpha) = (0.5, 0.2)$; in the Pavia University experiment, to $(\beta, \alpha) = (0.5, 0.1)$; and in the Salinas scene experiment, to $(\beta, \alpha) = (0.5, 0.2)$.
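The evaluation metrics used above (OA, AA, and the kappa coefficient) can all be computed from a confusion matrix; the sketch below uses a tiny hypothetical label set:

```python
import numpy as np

def evaluate(y_true, y_pred, n_classes):
    """Compute OA, AA, and Cohen's kappa from predicted vs. true labels."""
    C = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1                                   # confusion matrix
    n = C.sum()
    oa = np.trace(C) / n                               # overall accuracy
    aa = np.mean(np.diag(C) / C.sum(axis=1))           # mean per-class accuracy
    pe = (C.sum(axis=0) @ C.sum(axis=1)) / n ** 2      # chance agreement
    kappa = (oa - pe) / (1 - pe)                       # kappa coefficient
    return oa, aa, kappa

y_true = np.array([0, 0, 1, 1, 2, 2])                  # toy labels
y_pred = np.array([0, 0, 1, 0, 2, 2])                  # one misclassification
oa, aa, kappa = evaluate(y_true, y_pred, 3)
print(round(oa, 3), round(aa, 3), round(kappa, 3))     # 0.833 0.833 0.75
```

The per-class diagonal entries of the confusion matrix divided by the row sums give the CA values reported in the tables.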

4.2. Results for the Indian Pines Data Set

To fully assess the performance of the ISS-WME method, experiments were carried out under the conditions of different numbers of training samples, different embedding dimensions, and different DR methods. We randomly selected n% (n = 10, 20, 30, 40, 50) of the samples from each class as the training sample set, and the rest formed the testing sample set. We also varied the hyperspectral dimensionality (HD) of the low-dimensional embedding from 10 to 50. The results of the proposed ISS-WME method were compared with those of the other DR methods.
Figure 1 and Figure 2 show the OA of the KNN and SVM classifiers on different embedding dimensions using different DR methods. Specifically, (a)–(e) represent different training sample sets. The OA of Indian Pines with different training samples directly classified by a KNN or SVM classifier was used as the baseline. Compared with the other dimensionality reduction methods, ISS-WME and WLE-LLE achieved the best and the second-best overall accuracy, respectively, under different dimensions and different training samples. Comparing Figure 1 and Figure 2, the overall accuracy of SVM is higher than that of KNN.
As can be seen in Figure 1 and Figure 2, the OA decreases as the dimension increases. In Figure 1, it can be observed that, for the KNN classifier, the proposed ISS-WME method obtains classification results similar to those of WLE-LLE in almost all embedding dimensions, and achieves the best classification result at hyperspectral dimensionality (HD) = 50. Figure 2c shows the OA over HD for 30% of the Indian Pines data used as the training set. Compared with RAW, PCA, Isomap, LLE, LE, SSSE, and WLE-LLE, when HD = 50, ISS-WME increases the OA by 12.01%, 8.2%, 7.98%, 5.28%, 4.15%, and 2.69%, respectively. To further illustrate the classification results of the DR algorithms, the comparison results for 50% of the Indian Pines data trained with the SVM classifier at HD = 20, which gave the best overall accuracy, are presented visually in Figure 3. It includes (a) the false-color image, (b) the corresponding ground-truth map, and (c)–(j) the classification maps of the different DR methods. It can be observed that the proposed ISS-WME algorithm performs better on the land-cover classes than the other compared DR methods.
In order to further describe the comparison results, the quantitative comparison of classification accuracy using SVM classifiers under HD = 20 for different DR methods is summarized in Table 1. The results include the OA and kappa coefficient for each method, and each result is the average of the results of 10 runs with the associated standard deviation. As can be seen in Table 1, in most cases, the classification results (OA and kappa) generated by ISS-WME are the best.
Table 2 provides the training and testing sample numbers of each class in the Indian Pines data set, as well as the classification results of the SVM classifier using different DR methods. In contrast to Table 1, Table 2 shows the evaluation index CA, where the best results are shown in bold. It can also be seen in Table 2 that the ISS-WME method achieves the best accuracy in 10 classes of samples.

4.3. Results for the Pavia University Data Set

In order to fully assess the performance of ISS-WME, experiments were carried out under the conditions of different numbers of training samples, different embedding dimensions, and different DR methods. We randomly selected n% (n = 2, 4, 6, 8, 10) of the samples from each class as the training set, and the rest formed the testing set. We also varied the hyperspectral dimensionality (HD) of the low-dimensional embedding from 10 to 50. The results of the proposed ISS-WME method were compared with those of the other DR methods.
Figure 4 and Figure 5 show the OA of the KNN and SVM classifiers on different embedding dimensions using different DR methods. Specifically, (a)–(e) represent different training sets. The OA obtained by directly classifying the raw data with each classifier was used as the baseline. Compared with the six other algorithms, ISS-WME achieved the best OA in almost all cases with different embedding dimensions and different numbers of training samples. As can be seen in Figure 5, the image classification accuracies are more or less susceptible to distortion as the embedding dimension increases; no matter which DR algorithm is adopted, the curse of dimensionality occurs to a certain extent. Comparing Figure 4 and Figure 5, the distortion is more serious when using the SVM classifier.
As can be seen in Figure 4, the OA of the different DR methods is relatively stable with the increase in training set size when KNN is used as the classifier. Figure 5e shows the impact of the hyperspectral dimensionality (HD) on the OA for 10% of the Pavia University data used as the training set. Compared with RAW, PCA, Isomap, LLE, LE, SSSE, and WLE-LLE, when HD = 50, ISS-WME increases the OA by 0.13%, 0.55%, 1.24%, 3.34%, 0.64%, and 0.31%, respectively. In order to further illustrate the classification results of the DR algorithms, the classification result maps for 10% of the Pavia University data trained with the SVM classifier at HD = 20 are presented visually in Figure 6, including (a) the false-color image, (b) the corresponding ground-truth map, and (c)–(j) the classification maps of the different DR methods. It can be observed that the proposed ISS-WME algorithm performs better than the other compared DR methods in most land-cover classes.
To further describe the comparison results, the quantitative comparison of the OA of the different DR methods at HD = 20 is summarized in Table 3. The results include the overall accuracy and kappa coefficient of each method, and each result is an average of the results of 10 runs with the associated standard deviation. As can be seen in Table 3, the classification results (OA and kappa) produced by ISS-WME are the best in most cases. In addition, it can be seen in Table 4 that ISS-WME obtained the best classification accuracy for six classes, with the best results shown in bold.
Table 4 provides the number of training and testing samples for each class in the Pavia University data set, as well as the classification results under the SVM classifier using different dimensionality reduction methods. In contrast to Table 3, the classification accuracy (CA) is displayed in Table 4, where the best results are shown in bold. Moreover, it can be seen in Table 4 that the ISS-WME method achieves the best accuracy in six classes of samples.

4.4. Results for the Salinas Scene Data Set

To describe the comparison results, the quantitative comparison of the OA of the different DR methods at HD = 20 is summarized in Table 5. The results include the overall accuracy and kappa coefficient of each method, and each result is an average of the results of 10 runs with the associated standard deviation. As can be seen in Table 5, the classification results (OA and kappa) produced by ISS-WME are the best in most cases. In addition, it can be seen in Table 6 that ISS-WME obtained the best classification accuracy for 12 classes, with the best results shown in bold; for three classes, the results are the same as those of the WLE-LLE algorithm.
Table 5 provides the number of training and test samples for each class in the Salinas scene data set in the experiment, as well as the classification results under the SVM classifier using different dimensionality reduction methods. Compared with Table 5, the classification accuracy (CA) is displayed in Table 6, where the best results are shown in bold numbers. And the visual representation of different dimensional reduction methods of Salinas data set is supplemented in the Appendix A.
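Each table entry is reported as the mean of 10 runs together with its standard deviation. A minimal sketch of such a repeated random-split protocol (an assumed procedure for illustration, not the authors' code) is:

```python
import random
import statistics

def repeated_evaluation(n_samples, train_fraction, evaluate, runs=10, seed=0):
    """Report the mean and sample standard deviation of an accuracy score
    over several random train/test splits, as done for the tables above.
    `evaluate(train_idx, test_idx)` is any user-supplied scoring callback."""
    rng = random.Random(seed)
    scores = []
    for _ in range(runs):
        idx = list(range(n_samples))
        rng.shuffle(idx)                      # new random split each run
        cut = int(n_samples * train_fraction)
        train_idx, test_idx = idx[:cut], idx[cut:]
        scores.append(evaluate(train_idx, test_idx))
    return statistics.mean(scores), statistics.stdev(scores)
```

In the paper's setting, `evaluate` would reduce the dimensionality of the training pixels, fit the KNN or SVM classifier, and return the OA on the held-out pixels.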

5. Conclusions

In this paper, a dimensionality reduction method combining the manifold structure of high-dimensional data with a linear nearest-neighbor relationship was proposed. The method aims to keep the nearest-neighbor relationships of the data unchanged when the high-dimensional data are projected into the low-dimensional space. Furthermore, the manifold structure of the data combines the spatial–spectral distance with structural features. To fully verify the superiority of the proposed method, the data obtained by the ISS-WME method and by six other dimensionality reduction methods were classified with two common classifiers. The results of several experiments show that the ISS-WME algorithm improves the ground-object recognition ability of hyperspectral data, and the OA and kappa coefficients support this conclusion. In future work, label information will be exploited within a semi-supervised learning framework to further improve the classification performance.
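For reference, the simplest baseline compared above, PCA for dimensionality reduction followed by a nearest-neighbor classifier, can be sketched as below. This is an illustrative stand-in using a plain SVD-based PCA and brute-force KNN, not the ISS-WME implementation:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project data onto its leading principal components (the PCA
    baseline), via SVD of the mean-centred data matrix."""
    Xc = X - X.mean(axis=0)
    # rows of Vt are principal directions; SVD avoids forming the covariance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def knn_predict(train_X, train_y, test_X, k=1):
    """Classify each test pixel by majority vote among its k nearest
    training pixels (Euclidean distance, brute force)."""
    preds = []
    for x in test_X:
        dists = np.linalg.norm(train_X - x, axis=1)
        votes = list(train_y[np.argsort(dists)[:k]])
        preds.append(max(set(votes), key=votes.count))
    return np.array(preds)
```

In the experiments above, each HSI is first flattened to a pixels-by-bands matrix, reduced to HD components, and then classified per pixel.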

Author Contributions

Conceptualization, H.L., K.X., and E.O.; Methodology, H.L., and K.X.; Project administration, K.X., T.L., and J.M.; Software, H.L.; Supervision, K.X., T.L., and J.M.; Writing—original draft, H.L.; Writing—review and editing, H.L., K.X., and E.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. U1813222), the Tianjin Natural Science Foundation (No. 18JCYBJC16500), the Key Research and Development Project from Hebei Province (No. 19210404D), and the Hebei Natural Science Foundation (No. F2020202045).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. OA obtained by using an SVM classifier, with respect to (a–e) different sizes of training sets (2%, 4%, 6%, 8%, 10%) and different HD (from 10 to 50) for the Salinas scene data set.
Figure A2. Classification maps of the different DR methods using an SVM classifier for the Salinas scene data in HD = 20: (a) false-color image (R:42, G:16, B:11), (b) ground-truth map, (c) original (SVM), (d) PCA, (e) Isomap, (f) LLE, (g) LE, (h) SSSE, (i) WLE-LLE, and (j) ISS-WME; (k) representation of different classes.

Figure 1. OA obtained by using a k-nearest neighbor (KNN) classifier, with respect to (a–e) different sizes of training sets (10%, 20%, 30%, 40%, 50%) and different dimensions (from 10 to 50) for the Indian Pines data set.
Figure 2. OA obtained by using a support vector machine (SVM) classifier, with respect to (a–e) different sizes of training sets (10%, 20%, 30%, 40%, 50%) and different hyperspectral dimensionality (HD) (from 10 to 50) for the Indian Pines data set.
Figure 3. Classification maps of an SVM classifier using different dimensionality reduction (DR) algorithms for the Indian Pines data in HD = 20: (a) false-color image (R:57, G:27, B:17); (b) ground-truth map; (c) original (SVM); (d) principal component analysis (PCA); (e) Isomap; (f) local linear embedding (LLE); (g) Laplacian eigenmaps (LE); (h) SLIC superpixel-based Schroedinger eigenmaps (SSSE); (i) weighted local linear embedding (WLE-LLE); (j) improved spatial–spectral weight manifold embedding (ISS-WME); (k) representation of different classes.
Figure 4. OA obtained by using a KNN classifier, with respect to (a–e) different sizes of training sets (2%, 4%, 6%, 8%, 10%) and different HD (from 10 to 50) for the Pavia University data set.
Figure 5. OA with respect to (a–e) different sizes of training sets (2%, 4%, 6%, 8%, 10%) and different HD (from 10 to 50) for the Pavia University data set, combined with the SVM classifier.
Figure 6. SVM classification maps of the different methods with the Pavia University data set in HD = 20: (a) false-color image (R:102, G:56, B:31), (b) ground-truth map, (c) original (SVM), (d) PCA, (e) Isomap, (f) LLE, (g) LE, (h) SSSE, (i) WLE-LLE, and (j) ISS-WME; (k) representation of different classes.
Table 1. Results of the different DR methods for the Indian Pines data set (OA% ± ASD%).
| Samples | Classifier | Index | RAW | PCA | Isomap | LLE | LE | SSSE | WLE-LLE | ISS-WME |
|---|---|---|---|---|---|---|---|---|---|---|
| 10% | KNN | OA | 49.44 ± 1.94 | 61.42 ± 1.35 | 65.23 ± 1.62 | 60.85 ± 1.30 | 64.82 ± 1.47 | 59.35 ± 1.96 | 65.86 ± 1.39 | 66.46 ± 1.90 |
| | | Kappa | 32.79 ± 1.65 | 44.98 ± 1.15 | 49.26 ± 1.59 | 44.21 ± 1.59 | 49.42 ± 1.89 | 42.95 ± 2.11 | 48.31 ± 1.56 | 48.86 ± 1.85 |
| | SVM | OA | 49.82 ± 1.37 | 68.40 ± 1.14 | 64.12 ± 1.62 | 65.93 ± 1.71 | 71.77 ± 1.61 | 68.17 ± 1.10 | 75.03 ± 1.26 | 75.38 ± 1.47 |
| | | Kappa | 33.19 ± 1.35 | 52.82 ± 1.11 | 48.26 ± 1.65 | 51.94 ± 1.23 | 56.19 ± 1.51 | 52.93 ± 1.03 | 59.71 ± 1.35 | 62.08 ± 1.56 |
| 20% | KNN | OA | 51.97 ± 1.17 | 66.00 ± 1.48 | 68.42 ± 1.35 | 66.00 ± 1.20 | 69.72 ± 1.35 | 66.03 ± 1.74 | 68.60 ± 1.43 | 69.56 ± 1.71 |
| | | Kappa | 32.56 ± 1.52 | 50.10 ± 1.59 | 52.90 ± 1.25 | 50.20 ± 1.23 | 54.37 ± 1.36 | 50.44 ± 1.71 | 53.42 ± 1.41 | 54.27 ± 1.75 |
| | SVM | OA | 51.86 ± 1.59 | 72.34 ± 1.47 | 71.36 ± 1.62 | 71.42 ± 1.27 | 75.01 ± 1.75 | 73.22 ± 1.45 | 77.80 ± 1.93 | 81.25 ± 1.51 |
| | | Kappa | 35.08 ± 1.65 | 57.15 ± 1.42 | 55.74 ± 1.79 | 56.15 ± 1.39 | 58.56 ± 1.67 | 58.13 ± 1.53 | 62.83 ± 1.92 | 66.29 ± 1.42 |
| 30% | KNN | OA | 54.19 ± 1.28 | 68.13 ± 1.64 | 70.83 ± 1.53 | 68.43 ± 1.66 | 72.30 ± 1.32 | 67.91 ± 1.03 | 72.46 ± 1.35 | 73.02 ± 1.43 |
| | | Kappa | 37.55 ± 1.36 | 52.60 ± 1.74 | 55.54 ± 1.56 | 52.94 ± 1.51 | 57.29 ± 1.22 | 52.48 ± 1.87 | 57.34 ± 1.18 | 57.84 ± 1.55 |
| | SVM | OA | 53.11 ± 1.35 | 74.82 ± 1.69 | 74.72 ± 1.33 | 74.37 ± 1.48 | 77.83 ± 1.54 | 77.54 ± 1.33 | 80.48 ± 1.76 | 83.83 ± 1.73 |
| | | Kappa | 36.51 ± 1.63 | 59.63 ± 1.66 | 59.45 ± 1.34 | 59.07 ± 1.56 | 63.60 ± 1.55 | 60.53 ± 1.19 | 65.57 ± 1.71 | 69.73 ± 1.71 |
| 40% | KNN | OA | 54.67 ± 1.62 | 70.03 ± 1.13 | 73.33 ± 1.84 | 75.91 ± 1.47 | 73.94 ± 1.20 | 68.92 ± 1.14 | 73.94 ± 1.79 | 74.08 ± 1.44 |
| | | Kappa | 38.36 ± 1.14 | 54.64 ± 1.10 | 58.26 ± 1.79 | 60.86 ± 1.43 | 59.05 ± 1.15 | 53.58 ± 1.35 | 58.99 ± 1.77 | 59.25 ± 1.50 |
| | SVM | OA | 54.28 ± 1.81 | 76.07 ± 1.44 | 75.91 ± 1.37 | 75.98 ± 1.21 | 80.02 ± 1.65 | 76.31 ± 0.96 | 82.07 ± 1.53 | 84.80 ± 1.80 |
| | | Kappa | 37.72 ± 1.71 | 61.13 ± 1.46 | 60.94 ± 1.42 | 60.86 ± 1.23 | 63.00 ± 1.63 | 61.45 ± 0.93 | 67.35 ± 1.57 | 70.36 ± 2.34 |
| 50% | KNN | OA | 55.41 ± 1.50 | 70.38 ± 1.44 | 73.67 ± 1.29 | 71.29 ± 1.47 | 75.10 ± 1.56 | 69.48 ± 1.51 | 75.22 ± 1.47 | 74.56 ± 1.36 |
| | | Kappa | 39.14 ± 1.44 | 55.19 ± 1.40 | 58.71 ± 1.24 | 55.93 ± 1.41 | 60.31 ± 1.44 | 54.28 ± 1.50 | 60.27 ± 1.48 | 59.73 ± 1.24 |
| | SVM | OA | 54.77 ± 1.39 | 76.65 ± 1.76 | 76.93 ± 1.38 | 76.67 ± 1.27 | 81.84 ± 1.24 | 78.46 ± 1.59 | 82.54 ± 1.27 | 84.71 ± 0.93 |
| | | Kappa | 38.12 ± 1.35 | 61.75 ± 1.76 | 61.89 ± 1.49 | 61.63 ± 1.34 | 66.89 ± 1.21 | 63.51 ± 1.77 | 67.74 ± 1.37 | 70.11 ± 1.06 |
Table 2. Results of each class of samples in different DR methods for the Indian Pines data set (HD = 20).
All accuracies are for the DR + SVM classifier (%).

| Class | Train | Test | RAW | PCA | Isomap | LLE | LE | SSSE | WLE-LLE | ISS-WME |
|---|---|---|---|---|---|---|---|---|---|---|
| Alfalfa | 23 | 23 | 13.04 | 52.17 | 26.09 | 30.77 | 40.58 | 52.17 | 56.52 | 86.96 |
| Corn-N | 714 | 714 | 38.42 | 69.37 | 69.42 | 40.06 | 70.07 | 65.92 | 77.08 | 72.17 |
| Corn-M | 415 | 415 | 25.06 | 48.76 | 49.96 | 44.34 | 62.33 | 58.47 | 62.25 | 56.47 |
| Corn | 119 | 118 | 14.41 | 77.11 | 38.14 | 26.27 | 47.74 | 46.05 | 62.15 | 53.95 |
| Grass-P | 242 | 241 | 59.06 | 90.87 | 89.21 | 62.38 | 89.76 | 89.35 | 94.47 | 94.65 |
| Grass-T | 365 | 365 | 86.48 | 97.81 | 97.63 | 97.90 | 97.44 | 97.63 | 98.26 | 98.86 |
| Grass-P-M | 14 | 14 | 35.71 | 76.19 | 52.38 | 50.28 | 90.48 | 76.19 | 83.33 | 83.81 |
| Hay-W | 239 | 239 | 88.70 | 99.86 | 99.72 | 97.13 | 99.44 | 98.61 | 99.72 | 99.68 |
| Oats | 10 | 10 | 11.24 | 43.33 | 60.00 | 30.00 | 80.00 | 80.00 | 70.00 | 86.67 |
| Soybean-N | 486 | 486 | 25.17 | 63.51 | 69.82 | 94.24 | 78.26 | 79.08 | 75.03 | 75.17 |
| Soybean-M | 1228 | 1227 | 71.15 | 83.32 | 81.83 | 73.62 | 87.48 | 83.46 | 86.66 | 79.52 |
| Soybean-C | 297 | 296 | 56.41 | 64.75 | 57.32 | 52.70 | 67.91 | 60.47 | 73.99 | 74.07 |
| Wheat | 103 | 102 | 74.51 | 94.12 | 98.69 | 75.21 | 97.06 | 97.39 | 99.67 | 99.87 |
| Woods | 633 | 632 | 94.57 | 97.68 | 97.63 | 86.71 | 97.31 | 96.78 | 97.42 | 97.66 |
| Buildings-G-T-D | 193 | 193 | 29.02 | 45.77 | 45.60 | 34.20 | 43.52 | 44.39 | 51.81 | 52.85 |
| Stone-S-T | 47 | 46 | 91.30 | 90.58 | 94.20 | 88.70 | 86.96 | 92.03 | 96.38 | 98.41 |
| OA | | | 54.77 | 76.65 | 76.93 | 76.67 | 81.84 | 78.46 | 82.54 | 84.71 |
| AA | | | 54.04 | 74.70 | 70.47 | 61.53 | 77.27 | 76.12 | 80.30 | 81.92 |
| kappa | | | 38.12 | 61.75 | 61.89 | 61.63 | 66.89 | 63.51 | 67.74 | 70.11 |
Table 3. Results of the different DR methods for the Pavia University data set.
| Samples | Classifier | Index | RAW | PCA | Isomap | LLE | LE | SSSE | WLE-LLE | ISS-WME |
|---|---|---|---|---|---|---|---|---|---|---|
| 2% | KNN | OA | 61.55 ± 1.64 | 69.75 ± 0.90 | 68.29 ± 2.20 | 63.25 ± 1.81 | 73.92 ± 1.27 | 73.70 ± 1.10 | 75.65 ± 1.47 | 75.84 ± 1.27 |
| | | Kappa | 44.23 ± 1.23 | 57.78 ± 1.49 | 55.49 ± 3.13 | 47.71 ± 2.94 | 63.99 ± 1.17 | 63.31 ± 1.52 | 66.12 ± 1.29 | 66.38 ± 1.42 |
| | SVM | OA | 58.42 ± 1.13 | 79.71 ± 1.18 | 71.84 ± 3.22 | 79.05 ± 1.42 | 77.70 ± 1.82 | 78.30 ± 0.88 | 82.86 ± 0.83 | 84.17 ± 0.87 |
| | | Kappa | 44.61 ± 2.96 | 71.99 ± 1.60 | 60.66 ± 4.61 | 70.84 ± 1.71 | 69.24 ± 2.66 | 69.91 ± 1.23 | 76.65 ± 1.13 | 78.32 ± 1.19 |
| 4% | KNN | OA | 61.83 ± 1.46 | 76.32 ± 1.23 | 72.89 ± 1.76 | 68.69 ± 1.76 | 78.44 ± 1.70 | 72.45 ± 1.33 | 78.99 ± 1.26 | 80.64 ± 1.51 |
| | | Kappa | 45.04 ± 1.06 | 67.18 ± 1.48 | 62.40 ± 1.25 | 56.39 ± 1.45 | 70.09 ± 1.84 | 61.89 ± 1.27 | 70.09 ± 1.45 | 73.34 ± 1.59 |
| | SVM | OA | 59.62 ± 1.51 | 82.60 ± 1.74 | 73.84 ± 3.43 | 82.52 ± 1.04 | 81.71 ± 1.21 | 76.63 ± 1.28 | 85.53 ± 1.16 | 85.96 ± 0.99 |
| | | Kappa | 42.32 ± 1.19 | 76.07 ± 2.51 | 63.60 ± 5.03 | 76.12 ± 1.44 | 74.96 ± 1.25 | 67.80 ± 1.74 | 80.34 ± 1.29 | 80.95 ± 1.16 |
| 6% | KNN | OA | 71.73 ± 1.39 | 79.08 ± 1.18 | 72.35 ± 1.83 | 71.35 ± 1.37 | 80.48 ± 1.29 | 71.74 ± 2.39 | 80.51 ± 1.50 | 82.38 ± 1.44 |
| | | Kappa | 63.93 ± 1.80 | 71.16 ± 1.63 | 61.88 ± 1.96 | 59.82 ± 2.44 | 73.03 ± 1.42 | 60.56 ± 3.58 | 73.18 ± 1.58 | 75.76 ± 1.64 |
| | SVM | OA | 71.81 ± 1.27 | 84.98 ± 1.84 | 75.06 ± 1.86 | 85.32 ± 1.32 | 83.45 ± 1.46 | 78.05 ± 2.06 | 85.93 ± 1.69 | 87.13 ± 1.49 |
| | | Kappa | 65.34 ± 1.61 | 79.49 ± 1.20 | 65.34 ± 2.78 | 77.38 ± 1.49 | 77.52 ± 1.71 | 69.81 ± 2.94 | 80.92 ± 1.93 | 82.56 ± 1.68 |
| 8% | KNN | OA | 71.80 ± 1.51 | 80.45 ± 1.36 | 73.72 ± 1.89 | 73.81 ± 1.23 | 81.65 ± 1.16 | 76.63 ± 2.25 | 81.54 ± 1.51 | 83.09 ± 1.19 |
| | | Kappa | 65.14 ± 1.83 | 73.11 ± 1.54 | 63.93 ± 2.61 | 63.96 ± 1.38 | 74.74 ± 1.23 | 62.39 ± 2.55 | 74.47 ± 1.74 | 76.79 ± 1.28 |
| | SVM | OA | 70.14 ± 1.22 | 85.52 ± 0.87 | 77.23 ± 2.55 | 84.56 ± 1.21 | 84.55 ± 1.30 | 79.03 ± 1.43 | 86.75 ± 1.30 | 86.91 ± 1.40 |
| | | Kappa | 61.92 ± 1.46 | 80.31 ± 1.23 | 68.58 ± 3.72 | 78.96 ± 1.29 | 79.06 ± 1.42 | 71.17 ± 1.64 | 82.08 ± 1.43 | 80.20 ± 1.55 |
| 10% | KNN | OA | 71.96 ± 1.18 | 81.36 ± 1.47 | 75.48 ± 1.56 | 74.15 ± 0.95 | 81.74 ± 0.72 | 73.11 ± 1.73 | 82.62 ± 1.46 | 83.83 ± 0.59 |
| | | Kappa | 65.60 ± 1.46 | 74.21 ± 1.00 | 66.23 ± 2.24 | 64.20 ± 1.43 | 74.85 ± 1.02 | 62.48 ± 2.84 | 76.11 ± 1.69 | 77.85 ± 0.86 |
| | SVM | OA | 70.99 ± 1.31 | 85.75 ± 1.29 | 76.42 ± 0.65 | 75.02 ± 0.73 | 85.07 ± 1.18 | 79.24 ± 1.00 | 86.15 ± 0.24 | 86.98 ± 1.12 |
| | | Kappa | 63.95 ± 1.29 | 80.68 ± 1.77 | 67.42 ± 1.19 | 59.69 ± 0.86 | 79.80 ± 1.31 | 71.46 ± 1.40 | 82.65 ± 0.93 | 82.37 ± 1.12 |
Table 4. Results of each class of samples in different DR methods for the Pavia University data set (HD = 20).
All accuracies are for the DR + SVM classifier (%).

| Class | Train | Test | RAW | PCA | Isomap | LLE | LE | SSSE | WLE-LLE | ISS-WME |
|---|---|---|---|---|---|---|---|---|---|---|
| Asphalt | 657 | 6565 | 62.96 | 88.41 | 74.41 | 86.10 | 95.19 | 81.98 | 84.93 | 88.03 |
| Meadows | 1846 | 18463 | 91.90 | 97.72 | 95.76 | 96.66 | 98.78 | 96.19 | 96.50 | 97.42 |
| Gravel | 208 | 2078 | 45.72 | 50.71 | 47.89 | 59.04 | 80.93 | 52.38 | 64.55 | 84.23 |
| Trees | 303 | 3033 | 39.96 | 84.75 | 79.55 | 86.34 | 87.00 | 75.25 | 89.90 | 89.65 |
| Metal sheets | 133 | 1332 | 98.51 | 99.61 | 99.94 | 99.72 | 100.00 | 99.23 | 100.00 | 100.00 |
| Bare Soil | 498 | 4979 | 46.54 | 66.36 | 80.77 | 48.25 | 87.39 | 58.26 | 65.56 | 87.44 |
| Bitumen | 132 | 1317 | 45.02 | 49.72 | 42.47 | 65.27 | 79.88 | 66.16 | 74.02 | 80.38 |
| Bricks | 365 | 3645 | 54.43 | 84.45 | 75.44 | 83.27 | 84.25 | 80.57 | 82.78 | 86.38 |
| Shadows | 94 | 937 | 46.57 | 99.37 | 62.50 | 99.68 | 95.88 | 99.84 | 93.38 | 100.00 |
| OA | | | 70.99 | 85.75 | 76.42 | 75.02 | 85.07 | 79.24 | 86.15 | 86.98 |
| AA | | | 59.07 | 80.12 | 73.19 | 80.48 | 89.92 | 78.87 | 83.51 | 90.39 |
| kappa | | | 63.95 | 80.68 | 67.42 | 59.69 | 79.80 | 71.46 | 82.65 | 82.37 |
Table 5. Results of the different DR methods for the Salinas scene data set (HD = 20).
| Samples | Classifier | Index | RAW | PCA | Isomap | LLE | LE | SSSE | WLE-LLE | ISS-WME |
|---|---|---|---|---|---|---|---|---|---|---|
| 2% | KNN | OA | 75.23 ± 1.64 | 75.98 ± 2.90 | 76.52 ± 2.20 | 82.56 ± 2.81 | 78.56 ± 1.27 | 80.12 ± 1.10 | 81.53 ± 1.47 | 82.67 ± 1.43 |
| | | Kappa | 59.89 ± 1.23 | 65.12 ± 1.49 | 63.29 ± 3.13 | 72.69 ± 2.94 | 65.23 ± 1.17 | 69.96 ± 1.22 | 70.69 ± 2.29 | 69.12 ± 1.96 |
| | SVM | OA | 63.34 ± 1.13 | 75.12 ± 2.18 | 78.22 ± 3.22 | 89.23 ± 2.42 | 86.23 ± 1.72 | 86.23 ± 1.88 | 88.93 ± 0.83 | 88.53 ± 1.23 |
| | | Kappa | 50.96 ± 2.96 | 64.36 ± 1.60 | 69.35 ± 4.61 | 79.96 ± 1.71 | 75.69 ± 2.66 | 74.15 ± 1.03 | 80.63 ± 1.53 | 75.63 ± 1.56 |
| 4% | KNN | OA | 78.20 ± 1.86 | 76.52 ± 1.23 | 75.89 ± 1.66 | 83.63 ± 1.76 | 79.23 ± 1.70 | 81.23 ± 1.33 | 83.56 ± 1.16 | 83.84 ± 1.51 |
| | | Kappa | 60.93 ± 2.06 | 64.78 ± 1.48 | 63.25 ± 1.85 | 72.56 ± 1.45 | 66.36 ± 1.84 | 70.36 ± 1.27 | 72.12 ± 2.05 | 70.34 ± 1.09 |
| | SVM | OA | 66.47 ± 1.51 | 77.25 ± 1.54 | 79.94 ± 2.43 | 89.38 ± 2.94 | 88.23 ± 1.71 | 89.23 ± 1.88 | 88.99 ± 1.06 | 90.22 ± 0.99 |
| | | Kappa | 55.63 ± 2.19 | 65.23 ± 1.51 | 67.89 ± 4.03 | 78.96 ± 3.44 | 75.63 ± 1.85 | 75.63 ± 1.74 | 80.34 ± 1.29 | 80.95 ± 1.16 |
| 6% | KNN | OA | 78.91 ± 1.39 | 76.23 ± 1.18 | 78.59 ± 1.83 | 85.26 ± 2.37 | 80.23 ± 2.29 | 83.23 ± 2.39 | 85.13 ± 1.23 | 85.02 ± 1.64 |
| | | Kappa | 62.36 ± 2.80 | 63.63 ± 2.63 | 62.56 ± 1.96 | 74.23 ± 2.44 | 69.36 ± 2.42 | 70.32 ± 3.28 | 73.25 ± 1.78 | 72.76 ± 1.64 |
| | SVM | OA | 68.01 ± 1.27 | 77.56 ± 1.84 | 80.49 ± 1.86 | 91.61 ± 2.32 | 89.56 ± 1.86 | 90.12 ± 2.26 | 89.96 ± 1.29 | 91.90 ± 1.29 |
| | | Kappa | 59.13 ± 1.61 | 68.96 ± 1.20 | 72.06 ± 1.78 | 80.65 ± 2.49 | 77.96 ± 1.71 | 76.12 ± 2.48 | 78.92 ± 1.63 | 82.56 ± 1.68 |
| 8% | KNN | OA | 78.82 ± 1.51 | 76.63 ± 1.36 | 81.56 ± 1.89 | 85.96 ± 1.23 | 81.63 ± 2.16 | 84.63 ± 2.25 | 85.17 ± 1.31 | 86.23 ± 1.19 |
| | | Kappa | 65.34 ± 1.83 | 63.59 ± 1.54 | 65.75 ± 2.61 | 75.26 ± 1.38 | 70.23 ± 1.23 | 72.12 ± 2.55 | 73.69 ± 1.54 | 73.79 ± 1.28 |
| | SVM | OA | 68.96 ± 2.22 | 77.96 ± 1.87 | 81.20 ± 2.55 | 91.92 ± 3.21 | 90.05 ± 1.30 | 88.96 ± 1.73 | 89.69 ± 1.30 | 91.16 ± 1.04 |
| | | Kappa | 57.69 ± 2.46 | 65.36 ± 2.23 | 73.96 ± 3.72 | 79.86 ± 3.29 | 81.02 ± 1.42 | 78.02 ± 1.64 | 76.08 ± 1.23 | 80.20 ± 1.55 |
| 10% | KNN | OA | 80.21 ± 1.18 | 77.69 ± 1.47 | 84.72 ± 1.56 | 86.95 ± 0.95 | 81.23 ± 1.72 | 85.23 ± 1.73 | 86.33 ± 1.46 | 86.78 ± 1.72 |
| | | Kappa | 68.23 ± 1.46 | 65.26 ± 2.00 | 69.89 ± 2.24 | 67.36 ± 1.43 | 71.53 ± 2.02 | 72.36 ± 2.84 | 76.11 ± 1.69 | 72.19 ± 1.02 |
| | SVM | OA | 69.13 ± 1.21 | 79.02 ± 2.29 | 80.99 ± 0.65 | 92.16 ± 2.73 | 89.92 ± 2.18 | 90.13 ± 1.20 | 90.57 ± 1.24 | 92.19 ± 1.02 |
| | | Kappa | 58.63 ± 1.09 | 68.32 ± 2.77 | 79.63 ± 2.19 | 81.96 ± 2.86 | 78.69 ± 1.91 | 76.98 ± 1.40 | 82.65 ± 1.93 | 84.23 ± 1.62 |
Table 6. Results of each class of samples in different DR methods for the Salinas scene data set (HD = 20).
All accuracies are for the DR + SVM classifier (%).

| Class | Train | Test | RAW | PCA | Isomap | LLE | LE | SSSE | WLE-LLE | ISS-WME |
|---|---|---|---|---|---|---|---|---|---|---|
| Brocoli_green_weeds_1 | 201 | 1808 | 91.26 | 93.14 | 99.34 | 96.68 | 99.23 | 98.23 | 99.56 | 98.01 |
| Brocoli_green_weeds_2 | 373 | 3353 | 99.22 | 99.28 | 99.88 | 95.53 | 99.64 | 91.65 | 99.88 | 99.88 |
| Fallow | 198 | 1778 | 61.75 | 81.33 | 94.60 | 93.59 | 99.52 | 96.29 | 99.78 | 93.36 |
| Fallow_rough_plow | 139 | 1255 | 96.49 | 97.29 | 97.63 | 95.37 | 98.34 | 97.61 | 99.36 | 99.20 |
| Fallow_smooth | 268 | 2410 | 80.00 | 83.24 | 97.42 | 83.65 | 99.88 | 89.46 | 98.34 | 98.34 |
| Stubble | 396 | 3563 | 95.29 | 96.07 | 94.50 | 86.59 | 99.94 | 94.39 | 100.00 | 99.94 |
| Celery | 358 | 3221 | 97.21 | 89.67 | 90.18 | 88.33 | 88.62 | 98.63 | 99.44 | 99.75 |
| Grapes_untrained | 113 | 11158 | 75.75 | 83.28 | 88.86 | 83.59 | 97.82 | 87.38 | 84.27 | 99.48 |
| Soil_vinyard_develop | 620 | 5583 | 98.64 | 90.69 | 99.25 | 97.13 | 94.58 | 94.75 | 99.86 | 99.89 |
| Corn_senesced_green_weed | 328 | 2950 | 83.12 | 84.61 | 93.42 | 95.25 | 96.20 | 98.31 | 99.17 | 99.17 |
| Lettuce_romaine_4wk | 107 | 961 | 10.19 | 81.50 | 97.30 | 83.58 | 92.77 | 89.81 | 91.89 | 99.77 |
| Lettuce_romaine_5wk | 193 | 1734 | 90.26 | 92.17 | 99.88 | 91.47 | 98.79 | 94.23 | 96.77 | 99.77 |
| Lettuce_romaine_6wk | 92 | 824 | 94.90 | 97.57 | 98.06 | 92.73 | 90.23 | 97.09 | 98.06 | 98.79 |
| Lettuce_romaine_7wk | 107 | 963 | 78.17 | 99.38 | 97.77 | 89.81 | 58.45 | 94.39 | 92.52 | 92.72 |
| Vinyard_untrained | 727 | 6541 | 41.36 | 54.32 | 64.72 | 56.23 | 58.45 | 66.73 | 57.11 | 67.25 |
| Vinyard_vertical_trellis | 181 | 1626 | 87.71 | 98.40 | 98.65 | 98.53 | 97.26 | 97.54 | 96.65 | 98.77 |
| OA | | | 69.13 | 79.02 | 80.99 | 92.16 | 89.92 | 90.13 | 90.57 | 92.19 |
| AA | | | 80.08 | 88.87 | 94.47 | 89.25 | 91.86 | 92.91 | 94.54 | 96.51 |
| kappa | | | 58.63 | 68.32 | 79.63 | 81.96 | 78.69 | 76.98 | 82.65 | 84.23 |

Liu, H.; Xia, K.; Li, T.; Ma, J.; Owoola, E. Dimensionality Reduction of Hyperspectral Images Based on Improved Spatial–Spectral Weight Manifold Embedding. Sensors 2020, 20, 4413. https://doi.org/10.3390/s20164413
