Article

Gaussian Process Graph-Based Discriminant Analysis for Hyperspectral Images Classification

1 School of Computer Science, China University of Geosciences, Wuhan 430074, China
2 Discipline of Business Analytics, The University of Sydney Business School, The University of Sydney, Sydney 2006, Australia
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(19), 2288; https://doi.org/10.3390/rs11192288
Submission received: 21 August 2019 / Revised: 24 September 2019 / Accepted: 26 September 2019 / Published: 30 September 2019
(This article belongs to the Section Remote Sensing Image Processing)

Abstract: Dimensionality Reduction (DR) models are highly useful for tackling classification tasks for Hyperspectral Images (HSIs). They mainly address two issues: the curse of dimensionality with respect to spectral features, and the limited number of labeled training samples. Among these DR techniques, the Graph-Embedding Discriminant Analysis (GEDA) framework has demonstrated its effectiveness for HSI feature extraction. However, most existing GEDA-based DR methods rely heavily on manual parameter tuning to obtain the optimal model, which is troublesome and inefficient. Motivated by the nonparametric Gaussian Process (GP) model, we propose a novel supervised DR algorithm, namely Gaussian Process Graph-based Discriminant Analysis (GPGDA). Our algorithm takes full advantage of the covariance matrix in GP to construct the graph similarity matrix in the GEDA framework. In this way, superior performance can be provided while the model parameters are tuned automatically. Experiments on three real HSI datasets demonstrate that the proposed GPGDA outperforms some classic and state-of-the-art DR methods.

Graphical Abstract

1. Introduction

Hyperspectral images (HSIs) record a large number of different reflections of electromagnetic waves, ranging from visible light to near-infrared or even far-infrared [1,2]. This characteristic allows various ground objects to be discriminated based on the abundant information in HSIs. Consequently, HSIs are widely used in astronomy [3], agriculture [4], biomedical imaging [5], geosciences [6] and military surveillance [7]. However, the abundant features in HSIs can also lead to significant redundancy. When traditional classification algorithms are used to distinguish the class/object of each pixel in an HSI, the curse of dimensionality, the so-called “Hughes Phenomenon”, occurs [8]. Chang et al. found that up to 94% of the spectral bands can be discarded without affecting the classification accuracy [9]. Therefore, Dimensionality Reduction (DR), a pre-processing procedure that tries to discover low-dimensional latent features from high-dimensional HSIs, plays a vital role in HSI data analysis and classification.
In general, DR methods can be divided into two categories: feature selection and feature extraction. The former attempts to select a small subset of bands from the original bands based on some criteria, while the latter tries to find a low-dimensional subspace embedded in the high-dimensional observations [10]. As reviewed in [11], discovering optimal bands from the large number of possible feature combinations by feature selection can be suboptimal, so in this paper we focus only on feature-extraction-based DR methods for HSIs rather than feature selection.
A variety of feature-extraction-based DR models have been introduced for HSI data analysis over the past decades. They can be roughly divided into two categories: unsupervised and supervised DR techniques. Unsupervised DR methods try to find low-dimensional representations that preserve the intrinsic structure of the high-dimensional observations without using labels, while supervised methods make use of the available labels to find low-dimensional and discriminant features. The most representative unsupervised DR algorithm is arguably Principal Component Analysis (PCA), which uses a linear model to project the observed data into a low-dimensional space with maximal variance [12,13]. Based on PCA, various extensions have been proposed, such as Probabilistic PCA (PPCA), Robust PCA (RPCA), Sparse PCA, Tensor PCA, etc. [11,14,15,16,17,18]. However, the aforementioned algorithms are linear DR methods. When dealing with nonlinear structures embedded in high-dimensional HSI data, PCA and its linear extensions may be unable to provide satisfactory performance. Therefore, many nonlinear DR methods have been introduced, among which manifold-learning-based DR models have been widely employed in HSI data analysis [19,20,21].
Representative manifold-learning-based DR algorithms include Isometric Mapping (ISOMAP) [20], Locally Linear Embedding (LLE) [19], Laplacian Eigenmaps (LE) [22], Local Tangent Space Alignment (LTSA) [23], etc. The idea behind these algorithms is to assume that the data lie along a low-dimensional manifold embedded in a high-dimensional Euclidean space and to uncover this manifold structure [24] with different criteria. For example, ISOMAP, an extension of Multi-dimensional Scaling (MDS) [25], seeks a low-dimensional embedding that preserves the geodesic distances between all pairs of points. In LLE, each sample is reconstructed by a linear combination of its neighbors, and the corresponding low-dimensional representations that preserve this linear reconstruction relationship of the original space are then solved for. LE utilizes a similarity graph to preserve the neighbor relationships of pairwise points in the low-dimensional space. LTSA models the local geometry via the tangent space to learn the low-dimensional embedding. However, most manifold learning models encounter the so-called out-of-sample problem [26], meaning that it is difficult to find the low-dimensional representation corresponding to a new testing sample. An effective solution to this problem is to add a linear mapping that projects the observed samples to the low-dimensional subspace. For instance, Locality Preserving Projections (LPP) [27], Neighborhood Preserving Embedding (NPE) [28] and Linear Local Tangent Space Alignment (LLTSA) [29] are the linear extensions of LE, LLE and LTSA, respectively. In [30], a Graph Embedding (GE) framework was proposed to unify these manifold learning methods on the basis of geometry theory. Recently, representation-based algorithms have also been introduced into the GE framework to construct various similarity graphs [31]. For example, Sparse Representation (SR), Collaborative Representation (CR) and Low Rank Representation (LRR) [32] are utilized to constitute the sparse graph ($\ell_1$ graph), collaborative graph ($\ell_2$ graph) and low-rank graph, leading to Sparsity Preserving Projection (SPP) [33], Collaborative Representation based Projection (CRP) [34] and Low Rank Preserving Projections (LRPP) [35], respectively.
Nevertheless, the aforementioned algorithms are all unsupervised DR models, which means that the extra labels available in HSI data are not utilized. To take advantage of this label information, the unsupervised DR models can be extended to supervised versions, which improves the discriminative power of DR models [24]. In this line, Linear Discriminant Analysis (LDA), the most well-known supervised DR model, attempts to improve class-separability by maximizing the distance between heterogeneous samples and minimizing the distance between homogeneous samples [36]. However, LDA can only extract up to $c-1$ features, with $c$ being the number of label classes. Thus, Nonparametric Weighted Feature Extraction (NWFE) was proposed to tackle this problem by using weighted means to calculate nonparametric scatter matrices, from which more than $c-1$ features can be obtained [37]. Other related works, including Regularized LDA (RLDA) [38], Modified Fisher’s LDA (MFLDA) [39] and Supervised PPCA (SPPCA) [11,40], have been introduced for supervised HSI feature extraction.
Apparently, the above supervised linear DR models may fail to discover the nonlinear geometric structure in HSI data, resulting in unsatisfactory performance. Therefore, many supervised nonlinear DR methods have been introduced to find the complex geometric structure embedded in high-dimensional data. For example, Local Fisher Discriminant Analysis (LFDA) effectively combines the advantages of Fisher Discriminant Analysis (FDA) [41] and LPP by maximizing the between-class separability and minimizing the within-class distance simultaneously [42,43]. Local Discriminant Embedding (LDE) extends the concept of LDA to perform local discrimination [44,45]. Low-rank Discriminant Embedding (LRDE) learns the latent embedding space by maximizing the empirical likelihood and preserving the geometric structure [46]. Other related techniques include the Discriminative Gaussian Process Latent Variable Model (DGPLVM) [47], Locally Weighted Discriminant Analysis (LWDA) [48], Multi-Feature Manifold Discriminant Analysis (MFMDA) [49], etc. Similarly, representation-based algorithms have also been introduced into the supervised DR framework, such as Sparse Graph-based Discriminant Analysis (SGDA) [50], Weighted Sparse Graph-based Discriminant Analysis (WSGDA) [51], Collaborative Graph-based Discriminant Analysis (CGDA) [52], Laplacian regularized CGDA (LapCGDA) [53], Discriminant Analysis with Graph Learning (DAGL) [54], Graph-based Discriminant Analysis with Spectral Similarity (GDA-SS) [55], Local Geometric Structure Fisher Analysis (LGSFA) [56], Sparse and Low-Rank Graph-based Discriminant Analysis (SLGDA) [57], Kernel CGDA (KCGDA) [53], Laplacian Regularized Spatial-aware CGDA (LapSaCGDA) [58], etc. A good survey of these discriminant analysis models can be found in [31].
Although graph-embedding-based DR methods are effective for extracting discriminative spectral features from HSI data, these models are significantly affected by two factors: the similarity graphs and the model parameters. The similarity graph is the key ingredient of all graph embedding models, while the performance of these models largely relies on manual settings of the model parameters, which is time-consuming and inefficient. Motivated by the nonparametric Gaussian Process (GP) model [59], in this paper we construct the similarity graphs with a GP. A Gaussian process is a continuous stochastic process that defines a probability distribution over functions. With various covariance/kernel functions, a GP can nonparametrically model complex and nonlinear mappings. Furthermore, all parameters of the covariance function, typically termed hyperparameters, can be learned automatically in a GP. Inspired by these benefits, we learn the similarity matrix in the graph embedding framework with a GP. Specifically, the learned covariance matrix in the GP is taken as the similarity graph, giving rise to Gaussian Process Graph-based Discriminant Analysis (GPGDA), which learns more effective similarity graphs and avoids manual parameter tuning compared to existing algorithms. Experimental results on three HSI datasets demonstrate that the proposed GPGDA can effectively improve the classification accuracy without time-consuming model parameter tuning.
The rest of the paper is organized as follows. In Section 2, we briefly review the related works, including the Gaussian Process (GP) and the Graph-Embedding Discriminant Analysis (GEDA) framework. The proposed Gaussian Process Graph-based Discriminant Analysis (GPGDA) is introduced in Section 3. Then, three HSI datasets are used to evaluate the effectiveness of the proposed GPGDA in Section 4. Finally, the results are discussed in Section 5 and a brief summary is given in Section 6.

2. Related Works

In this section, we briefly review the Gaussian Process and the Graph-Embedding Discriminant Analysis framework. For the sake of consistency, we use the following notation throughout this paper: $X = [x_1, \ldots, x_N]^T \in \mathbb{R}^{N \times D}$ are the original high-dimensional data with each sample $x_n \in \mathbb{R}^D$; $Y = [y_1, \ldots, y_N]^T \in \mathbb{R}^{N \times 1}$ are the outputs, where each $y_n \in \mathbb{R}$ (real values for regression tasks and discrete labels in $\{1, 2, \ldots, C\}$ for classification tasks); $Z = [z_1, \ldots, z_N] \in \mathbb{R}^{d \times N}$ are the projected low-dimensional data with dimension $d \ll D$, and each $z_n$ corresponds to $x_n$ and $y_n$. For convenience, in the graph-embedding formulations below we also treat $X$ as a $D \times N$ matrix whose columns are samples, $Y$ as an $N \times 1$ vector and $Z$ as a $d \times N$ matrix.

2.1. Gaussian Process

The Gaussian Process is typically used in Gaussian Process Regression (GPR), where we assume that each output sample $y_n$ is generated from an unknown function $f$ with independent and identically distributed Gaussian noise $\epsilon \sim \mathcal{N}(0, \sigma^2)$, i.e., $y = f(x) + \epsilon$. A Gaussian Process prior is placed over the latent function $f$, i.e., $f \sim \mathcal{N}(f \mid 0, K_{XX})$, with the covariance matrix defined by the positive-semidefinite kernel function $K_{XX} = k(X, X \mid \theta)$. Here, $\theta$ are the parameters of the kernel function, typically termed hyperparameters. The choice of kernel function and its hyperparameter settings determine the behavior of the GP and are therefore fairly significant. By Bayes' theorem, the latent function $f$ can be marginalized analytically, $P(Y \mid X, \theta) = \mathcal{N}(Y \mid 0, K_{XX} + \sigma^2 I)$. Generally, the Gaussian noise parameter $\sigma^2$ can be merged into the covariance function as $K_Y = K_{XX} + \sigma^2 I$. Thus, the hyperparameters of the kernel function can be optimized by maximizing the log marginal likelihood
$$\log P(Y \mid X, \theta) = -\tfrac{1}{2} Y^T K_Y^{-1} Y - \tfrac{1}{2} \log |K_Y| - \tfrac{N}{2} \log 2\pi. \qquad (1)$$
Considering a test point $(x_*, y_*)$, the predictive distribution for the new input $x_*$ can be calculated as follows:
$$f_* \mid x_*, X, Y \sim \mathcal{N}\big(K_{x_* X}(K_{XX} + \sigma^2 I)^{-1} Y,\; K_{x_* x_*} - K_{x_* X}(K_{XX} + \sigma^2 I)^{-1} K_{X x_*}\big), \qquad (2)$$
where the $K$'s are the matrices of covariance/kernel function values evaluated at the corresponding points $X$ and/or $x_*$.
For classification tasks with discrete outputs, an activation function such as the sigmoid $\tau(x) = 1/(1 + \exp(-x))$ is typically introduced as the likelihood model, $p(y_n = 1 \mid x_n) = \tau(f(x_n))$, in Gaussian Process Classification (GPC) for binary classification. When making predictions, the predictive distributions over $f_* = f(x_*)$ and $y_*$ for a new test point $x_*$ are
$$p(f_* \mid x_*, X, Y) = \int p(f_* \mid x_*, X, f)\, p(f \mid X, Y)\, df, \qquad p(y_* = 1 \mid x_*, X, Y) = \int p(y_* \mid f_*)\, p(f_* \mid x_*, X, Y)\, df_*. \qquad (3)$$
It is worth noting that these two integrals are analytically intractable due to the non-Gaussianity of the logistic function $\tau(f(x_n))$, so the exact posterior cannot be obtained in GPC. In such cases, approximation techniques such as the Laplace Approximation (LA) and Expectation Propagation (EP) are adopted to acquire an approximate GP posterior and conduct model optimization.
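For readers who prefer a concrete view of how the hyperparameters are learned, the following sketch implements Equation (1) and its maximization in Python with NumPy/SciPy. It is a minimal illustration, not the toolbox used in this paper; the squared-exponential kernel and the function names (`rbf_kernel`, `fit_gpr`) are our own choices for exposition.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import cho_factor, cho_solve


def rbf_kernel(X1, X2, log_ell, log_sf2):
    """Squared-exponential kernel k(x, x') = sf2 * exp(-||x - x'||^2 / (2 * ell^2))."""
    ell, sf2 = np.exp(log_ell), np.exp(log_sf2)
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return sf2 * np.exp(-0.5 * np.maximum(d2, 0.0) / ell**2)


def neg_log_marginal_likelihood(params, X, y):
    """Negative of Equation (1): 0.5*y^T K_Y^{-1} y + 0.5*log|K_Y| + (N/2)*log(2*pi)."""
    log_ell, log_sf2, log_sn2 = params
    N = X.shape[0]
    K_Y = rbf_kernel(X, X, log_ell, log_sf2) + np.exp(log_sn2) * np.eye(N)
    L, lower = cho_factor(K_Y, lower=True)
    alpha = cho_solve((L, lower), y)
    # log|K_Y| = 2 * sum(log(diag(L))) for the Cholesky factor L
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * N * np.log(2.0 * np.pi)


def fit_gpr(X, y):
    """Learn (log length-scale, log signal variance, log noise variance) automatically
    by maximizing the log marginal likelihood, i.e., minimizing its negative."""
    res = minimize(neg_log_marginal_likelihood, x0=np.zeros(3), args=(X, y),
                   method="L-BFGS-B")
    return res.x
```

In practice, any positive-semidefinite kernel with differentiable hyperparameters can be plugged in, which is exactly the flexibility that GPGDA exploits in Section 3.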

2.2. Graph-Embedding Discriminant Analysis

Many DR approaches have been proposed recently for HSI feature extraction and classification, among which the Graph-Embedding Discriminant Analysis methods have shown promising performance [31]. Typically, Graph-Embedding Discriminant Analysis (GEDA) models try to find the projection matrix $P$ in the mapping $z_n = P^T x_n$ by preserving the similarities between samples in the original observation space. The objective function of GEDA can be written as
$$\tilde{P} = \operatorname*{argmin}_P \sum_{ij} \| z_i - z_j \|^2 W_{ij} = \operatorname*{argmin}_P \sum_{ij} \| P^T x_i - P^T x_j \|^2 W_{ij} = \operatorname*{argmin}_P \operatorname{trace}(P^T X L X^T P), \quad \text{s.t. } P^T X L_p X^T P = I, \qquad (4)$$
where the similarity matrix $W$ is an undirected intrinsic graph with each element $W_{ij}$ describing the similarity between samples $x_i$ and $x_j$, $L = T - W$ is the Laplacian matrix of the graph $W$, $T$ is the diagonal matrix with $T_{ii} = \sum_{j=1}^N W_{ij}$, and $L_p$ is the constraint matrix defined to obtain a non-trivial solution of the objective function. Typically, $L_p$ is the diagonal matrix $T$ for scale normalization, but it may also be the Laplacian matrix of a penalty graph $W_p$. The intrinsic graph $W$ characterizes intraclass compactness, while the penalty graph $W_p$ describes interclass separability. By simply re-formulating the objective function, we obtain
$$\tilde{P} = \operatorname*{argmin}_P \frac{|P^T X L X^T P|}{|P^T X L_p X^T P|}. \qquad (5)$$
When $L_p$ is the Laplacian matrix of the penalty graph $W_p$, Equation (5) corresponds to Marginal Fisher Analysis (MFA) [60]. The solution of Equation (5) can be obtained by solving the generalized eigenvalue decomposition problem
$$X L X^T P = \lambda X L_p X^T P, \qquad (6)$$
where $P \in \mathbb{R}^{D \times d}$ is constructed from the eigenvectors corresponding to the $d$ smallest eigenvalues. As can be seen from the above formulations, the most significant step of GEDA is building the intrinsic graph.
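As a concrete illustration of Equations (4)–(6), the short Python sketch below assembles the graph Laplacians and solves the generalized eigenvalue problem with SciPy. It is only a schematic implementation under our own naming (`geda_projection`); the small ridge term added to the constraint matrix is a common numerical safeguard rather than part of the GEDA formulation.

```python
import numpy as np
from scipy.linalg import eigh


def geda_projection(X, W, W_p=None, d=30, reg=1e-6):
    """Solve the GEDA generalized eigenproblem X L X^T p = lambda X L_p X^T p of Equation (6).

    X is D x N (columns are samples) and W is the N x N intrinsic graph; when no penalty
    graph is supplied, L_p defaults to the degree matrix T used for scale normalization."""
    T = np.diag(W.sum(axis=1))
    L = T - W                                       # Laplacian of the intrinsic graph
    if W_p is None:
        L_p = T
    else:
        L_p = np.diag(W_p.sum(axis=1)) - W_p        # Laplacian of the penalty graph
    A = X @ L @ X.T
    B = X @ L_p @ X.T + reg * np.eye(X.shape[0])    # ridge keeps B positive definite
    eigvals, eigvecs = eigh(A, B)                   # eigenvalues returned in ascending order
    return eigvecs[:, :d]                           # P in R^{D x d}: d smallest eigenvalues
```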
A popular approach to estimating the similarity between samples $x_i$ and $x_j$ is the heat kernel, which is utilized in the unsupervised LPP:
$$W_{ij} = e^{-\frac{\| x_i - x_j \|^2}{r}}, \qquad (7)$$
where $r > 0$ denotes the local scaling of the data samples. Different from unsupervised LPP, which estimates the similarities between all vertices, supervised discriminant analysis methods build the affinity matrix with label information, which further improves the discriminative power. Thus, the similarity matrix $W$ typically becomes a block-diagonal matrix
$$W = \begin{bmatrix} W^{1} & & & \\ & W^{2} & & \\ & & \ddots & \\ & & & W^{C} \end{bmatrix}, \qquad (8)$$
where $W^{l}$ ($l = 1, \ldots, C$) is the affinity matrix of size $n_l \times n_l$ built only from samples of the $l$th class.
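The supervised block-diagonal graph of Equations (7) and (8) can be formed directly from the labels. The following sketch (the function names and the choice of placing each class block at its original sample indices are ours) illustrates the construction; it is equivalent to Equation (8) up to a reordering of the samples by class.

```python
import numpy as np


def heat_kernel_graph(X, r):
    """Heat-kernel similarities W_ij = exp(-||x_i - x_j||^2 / r) of Equation (7).

    X is D x n with samples as columns."""
    d2 = np.sum(X**2, 0)[:, None] + np.sum(X**2, 0)[None, :] - 2.0 * X.T @ X
    return np.exp(-np.maximum(d2, 0.0) / r)


def supervised_block_graph(X, y, r):
    """Block-diagonal intrinsic graph of Equation (8): only same-class similarities are kept."""
    N = X.shape[1]
    W = np.zeros((N, N))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        W[np.ix_(idx, idx)] = heat_kernel_graph(X[:, idx], r)
    return W
```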
Recently, representation-based algorithms have been introduced to construct the within-class similarity matrices. For example, the sparse representation coefficients ($\ell_1$ norm) are used to construct the similarity matrix $W^l = [w_1^l; w_2^l; \ldots; w_{n_l}^l] \in \mathbb{R}^{n_l \times n_l}$ in SGDA [50], where $n_l$ is the number of training samples from the $l$th class. To reduce the computational complexity of SGDA, collaborative representation ($\ell_2$ norm) is used instead of sparse representation in CGDA [52]. Similarly, SLGDA [57], LapCGDA [53] and LapSaCGDA [58] were developed recently with different objective functions to construct the similarity matrices as follows:
$$\begin{aligned} \text{SGDA:} \quad & \operatorname*{argmin}_{w_n^l} \| x_n^l - X_{-n}^l w_n^l \|_2^2 + \alpha \| w_n^l \|_1, \\ \text{CGDA:} \quad & \operatorname*{argmin}_{w_n^l} \| x_n^l - X_{-n}^l w_n^l \|_2^2 + \alpha \| w_n^l \|_2^2, \\ \text{SLGDA:} \quad & \operatorname*{argmin}_{W^l} \| X^l - X^l W^l \|_2^2 + \alpha \| W^l \|_* + \beta \| W^l \|_1, \\ \text{LapCGDA:} \quad & \operatorname*{argmin}_{w_n^l} \| x_n^l - X_{-n}^l w_n^l \|_2^2 + \alpha \| w_n^l \|_2^2 + \beta\, (w_n^l)^T H_n w_n^l, \\ \text{LapSaCGDA:} \quad & \operatorname*{argmin}_{w_n^l} \| x_n^l - X_{-n}^l w_n^l \|_2^2 + \alpha \| \Gamma w_n^l \|_2^2 + \beta \| \operatorname{diag}(s_n) w_n^l \|_2^2 + \gamma\, (w_n^l)^T H_n w_n^l, \end{aligned} \qquad (9)$$
where $x_n^l$ is a training sample from the $l$th class, $X^l$ denotes all training samples from the $l$th class, $X_{-n}^l$ is $X^l$ excluding $x_n^l$, $\| \cdot \|_*$ in SLGDA indicates the nuclear norm, $H_n$ in LapCGDA and LapSaCGDA is the Laplacian matrix constructed by Equation (7), and $\Gamma$ and $s_n$ are defined by $\Gamma_{ii} = \| x_n^l - x_i^l \|_2$ and $s_n = [\operatorname{dist}((p_n, q_n), (p_i, q_i))]^T$ with the pixel coordinates $(p_i, q_i)$ of the samples in the $l$th class ($i = 1, 2, \ldots, n_l$), respectively.
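To make the contrast with the GP-based graph of Section 3 concrete, the sketch below computes the CGDA row of Equation (9), whose $\ell_2$-regularized problem admits the familiar ridge-regression closed form. The function name and the assumption that the penalty is the squared $\ell_2$ norm follow the published CGDA formulation; this is an illustrative implementation, not the authors' code.

```python
import numpy as np


def cgda_weights(X_l, alpha):
    """Collaborative-representation weights for one class (the CGDA row of Equation (9)).

    Each column x_n of X_l (D x n_l) is represented by the remaining columns under a
    squared-l2 penalty, which has the ridge closed form w = (A^T A + alpha*I)^{-1} A^T x_n."""
    n_l = X_l.shape[1]
    W_l = np.zeros((n_l, n_l))
    for n in range(n_l):
        A = np.delete(X_l, n, axis=1)                        # X_l with x_n removed
        w = np.linalg.solve(A.T @ A + alpha * np.eye(n_l - 1), A.T @ X_l[:, n])
        W_l[n, np.arange(n_l) != n] = w                      # zero self-representation weight
    return W_l
```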

3. The Proposed Method

To effectively learn the similarity graphs in the Graph-Embedding Discriminant Analysis (GEDA) framework without time-consuming parameter tuning, the Gaussian Process Graph-based Discriminant Analysis (GPGDA) method is proposed. The GPGDA method makes use of the nonparametric and nonlinear GP model to learn the similarity/affinity matrix adaptively.
The flowchart of the proposed GPGDA method is shown in Figure 1. Firstly, the HSI data are randomly divided into training and testing sets. Then, we construct the block-diagonal similarity matrix. Inspired by CGDA, we use GPR to model the training samples of each class. Specifically, when we handle the training data of the $l$th class, their labels are manually set to 1 while the rest of the training data are labeled 0, which implicitly enforces interclass separability when learning the similarity graph of each class. Thus, the learned similarity matrix should be more effective than those from CGDA, KCGDA and other related algorithms. Subsequently, the similarity matrices of all classes are reassembled into a block-diagonal matrix, resulting in a complete similarity matrix for the GEDA framework. Finally, the projection matrix is acquired by solving the generalized eigenvalue decomposition problem, and the dimension-reduced testing data are obtained accordingly. To further evaluate the proposed method, the dimension-reduced training and testing data are fed to different classifiers to obtain the prediction results.
From Section 2.1, we can see that the covariance/kernel function is vital to a GP, since the corresponding kernel matrix measures the similarities between all pairs of samples. In view of this, the kernel matrix in a GP can be used to represent the similarity graph in the GEDA framework. Because we want the intrinsic graph to reflect class-label information, it is ultimately expressed in the block-diagonal form of Equation (8). Here, we straightforwardly use GPR rather than GPC to model the high-dimensional training data with discrete labels, because the time-consuming approximation methods in GPC would increase the model complexity; moreover, GPR is sufficient to model the class-specific training data.
Given a dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$, where $x_i$ is a high-dimensional HSI sample and $y_i$ is the corresponding label, we first set the labels of the training samples from the $l$th class to 1 and those of the remaining training samples to 0 in order to learn the similarity matrix efficiently. The new binary labels are denoted by $T = \{t_i\}_{i=1}^N$ ($t_i \in \{0, 1\}$). Then, we model the mapping from $x_i$ to $t_i$ with nonlinear GPR,
$$P(T \mid X, \theta) = \mathcal{N}(T \mid 0, K_T). \qquad (10)$$
The optimal hyperparameters $\theta$ of the chosen kernel function can be estimated automatically by optimizing the GPR objective in Equation (1) with gradient-based optimization algorithms. With the optimal hyperparameters, we obtain the corresponding kernel matrix as follows:
$$K_T = k\big([X^l, \bar{X}], [X^l, \bar{X}] \mid \theta\big), \qquad (11)$$
where $X^l = \{x_n^l\}_{n=1}^{n_l}$ are the $n_l$ training samples from the $l$th class and $\bar{X}$ denotes the training samples from the other categories.
At this point, we only care about the training samples from the $l$th class, which have been labeled 1, so we take the $n_l \times n_l$ block of the kernel matrix corresponding to these samples to form the similarity matrix $W^l$. Once all the class-specific similarity matrices $W^l$ ($l = 1, \ldots, C$) have been obtained by repeating this GPR procedure, the block-diagonal matrix $W$ in Equation (8) can be constructed simply as
$$W = \operatorname{diag}(W^1, W^2, \ldots, W^C) \quad \text{with} \quad W^l = \begin{bmatrix} k(x_1^l, x_1^l) & \cdots & k(x_1^l, x_{n_l}^l) \\ \vdots & \ddots & \vdots \\ k(x_{n_l}^l, x_1^l) & \cdots & k(x_{n_l}^l, x_{n_l}^l) \end{bmatrix}. \qquad (12)$$
Finally, based on the GEDA framework, the optimal projection matrix $P$ is easily obtained by solving the eigenvalue decomposition in Equation (6).
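A minimal sketch of this graph construction is given below, using scikit-learn's GP regressor as a stand-in for the FGPLVM toolbox employed in the paper; the RBF kernel, the small jitter `alpha`, and the placement of each class block at its original sample indices are illustrative choices rather than the exact configuration used in the experiments.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF


def gpgda_similarity(X, y):
    """Build the block-diagonal similarity matrix of Equation (12) from class-wise GP kernels.

    X holds samples as rows (N x D). For each class l, the class samples are labeled 1 and
    all other training samples 0, a GP regressor is fitted (hyperparameters learned by
    marginal-likelihood maximization), and the learned kernel evaluated on the class
    samples forms the block W^l."""
    X = np.asarray(X, dtype=float)
    N = X.shape[0]
    W = np.zeros((N, N))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        t = (y == c).astype(float)                       # binary targets of Equation (10)
        gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-3)
        gpr.fit(X, t)
        W[np.ix_(idx, idx)] = gpr.kernel_(X[idx])        # n_l x n_l block of the learned kernel
    return W
```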
The complete GPGDA algorithm is outlined in Algorithm 1. To boost the performance of the models, we preprocess the HSIs data by a simple average filtering initially.
Algorithm 1 GPGDA for HSIs dimensionality reduction and classification.
Input: High-dimensional training samples $X \in \mathbb{R}^{D \times N}$, training ground truth $y \in \mathbb{R}^N$, pre-fixed latent dimensionality $d$, testing pixels $X_* \in \mathbb{R}^{D \times M}$, and testing ground truth $y_* \in \mathbb{R}^M$.
Output: $s = \{Acc, P\}$.
1: Preprocess all the training and testing data by average filtering;
2: Estimate the hyperparameter set $\theta$ of the kernel function by Equation (1);
3: Evaluate the similarity matrix $W$ by Equations (11) and (12);
4: Evaluate the optimal projection matrix $P$ by solving the eigenvalue decomposition in Equation (6);
5: Evaluate the low-dimensional features of all the training and testing data by $z_n = P^T x_n$;
6: Perform KNN/SVM in the low-dimensional feature space and return the classification accuracy $Acc$;
7: return $s$.
As for the model complexity, since only small-scale training data are considered, we do not use approximation methods such as the Fully Independent Training Conditional (FITC) model [59] for GPR. Therefore, the time complexity of the proposed GPGDA is $O(C n_l^3)$, where $C$ is the number of discrete classes and $n_l$ is the maximum number of samples in each class. By comparison, other discriminant-analysis-based methods such as CGDA and LapCGDA are also $O(C n_l^3)$ because they involve matrix inversion operations. Thus, the proposed GPGDA does not increase the model complexity theoretically.
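Putting the pieces together, the following sketch mirrors Algorithm 1 (with the average filtering step omitted) by reusing the `gpgda_similarity` and `geda_projection` helpers sketched above; the scikit-learn classifiers stand in for the KNN/SVM implementations used in the experiments.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC


def gpgda_pipeline(X_train, y_train, X_test, d=30):
    """End-to-end GPGDA sketch following Algorithm 1 (average filtering omitted).

    X_train and X_test hold samples as rows; gpgda_similarity() and geda_projection()
    are the helper functions sketched earlier in this section."""
    W = gpgda_similarity(X_train, y_train)           # block-diagonal intrinsic graph (steps 2-3)
    P = geda_projection(X_train.T, W, d=d)           # D x d projection from Equation (6) (step 4)
    Z_train, Z_test = X_train @ P, X_test @ P        # z_n = P^T x_n for every pixel (step 5)
    svm = SVC(kernel="rbf", gamma="scale").fit(Z_train, y_train)
    knn = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
    return svm.predict(Z_test), knn.predict(Z_test)  # step 6: SVM and KNN predictions
```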

4. Experiments

We validated the effectiveness of the proposed GPGDA for HSI feature extraction and classification by comparing it with SPPCA, NWFE, DGPLVM, SLGDA, LapCGDA, KCGDA and LGSFA on three typical HSI datasets. In addition, the traditional Support Vector Machine (SVM) and a Convolutional Neural Network (CNN) [61] were applied in the original high-dimensional spectral feature space for comparison. K-Nearest Neighbors (KNN) with Euclidean distance and SVM with a Radial Basis Function (RBF) kernel were adopted as classifiers in the learned low-dimensional space to verify all the DR models in terms of the classification Overall Accuracy (OA), the classification Average Accuracy (AA) and the Kappa Coefficient (KC). The parameter $K$ in KNN was set to 5. The optimal kernel parameters of the SVM were selected by grid search within the set $\{10^{-6}, 10^{-5}, \ldots, 10^{4}\}$. The architecture of the CNN for each dataset is shown in Table 1, based on the experimental settings in [61]. For a fair comparison, all the data were preprocessed by average filtering with a 7 × 7 spatial window, which is a simple and efficient method for smoothing HSIs. Experiments verifying the effect of different window sizes were also conducted; please refer to the Supplementary Materials for details.
Firstly, the most suitable kernel for GPGDA was chosen from the 18 kernels in the fast Gaussian process latent variable model toolbox (FGPLVM) (http://inverseprobability.com/fgplvm/index.html). The corresponding hyperparameters of each kernel can be learned automatically in the proposed GPGDA. The regularization parameters such as $\alpha$ and $\beta$ for DGPLVM, SLGDA, LapCGDA and KCGDA were selected by grid search within the set $\{10^{-6}, 10^{-5}, \ldots, 10^{4}\}$; Table 2 displays the best parameter values for these four DR models. Then, after obtaining the optimal kernel and its corresponding hyperparameters, we compared and chose the best dimensionality of each DR model in the range 1–30 in terms of the classification accuracy of SVM in the projected low-dimensional space. Finally, we further compared all the DR models at the selected optimal dimension when different numbers of training samples were chosen. All the experiments were repeated ten times and the average results are reported with their standard deviations (STD). All methods were tested in MATLAB R2017a on a Linux PC with an Intel Xeon CPU at 2.50 GHz and 64 GB of memory.
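For completeness, the per-band spatial average filtering used to preprocess the data can be written in a few lines; the sketch below assumes the HSI is stored as an H × W × B cube and uses SciPy's uniform filter, which is our choice of implementation rather than the one used in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def average_filter_hsi(cube, window=7):
    """Smooth each spectral band of an HSI cube (H x W x B) with a window x window mean filter.

    A simple stand-in for the spatial average filtering used to preprocess the data."""
    cube = np.asarray(cube, dtype=float)
    out = np.empty_like(cube)
    for b in range(cube.shape[2]):
        out[:, :, b] = uniform_filter(cube[:, :, b], size=window, mode="reflect")
    return out
```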

4.1. Data Description

Three popular HSIs datasets were selected in our experiments (http://www.ehu.es/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes).
The Indian Pines scene (IndianPines) was captured by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over Northwest Indiana in 1992. It contains 145 × 145 pixels and 200 spectral reflectance bands after discarding 24 bands affected by water absorption. Sixteen ground truth classes are discriminated in this dataset, and the False Color Composition (FCC) and Ground Truth (GT) are shown in Figure 2.
The University of Pavia scene (PaviaU) was gathered by the Reflective Optics System Imaging Spectrometer (ROSIS-3) sensor over the University of Pavia, Italy in 2002. There are 610 × 340 valid pixels with 103 spectral bands after removing the samples of the original scene that contain no information. Nine ground truth classes are considered in this dataset, and the false color composition and ground truth are shown in Figure 3.
The Salinas scene (Salinas) was collected by the AVIRIS sensor over Salinas Valley, California in 1998, and consists of 512 × 217 pixels with 224 spectral bands. Similar to the Indian Pines scene, 20 bands affected by water absorption and atmospheric effects are discarded, leaving 204 spectral bands. Sixteen ground truth classes are labeled in this dataset, and the false color composition and ground truth are shown in Figure 4.

4.2. Sensitivity Analysis for Kernel

In this section, we mainly analyze the impact of all 18 kernels in terms of OAs, because the type of kernel is the only thing that has to be manually selected in our proposed model. We randomly chose 30 samples from each class as training data and used the remainder as testing data, with the number of reduced dimensions set to 30. Table 3 reports the OAs of the proposed GPGDA with different kernels on the three HSI datasets, where we can see that many kernels (up to 10 per dataset) provide satisfactory results in terms of the OAs shown in bold. Thus, choosing the best kernel function is not a major problem for our model. For the Indian Pines, University of Pavia and Salinas scenes, we recommend ten kernels for each dataset with respect to the following experimental results. As for other HSI datasets not studied in this paper, the kernels “dexp” and “lin” can be considered first in the proposed GPGDA, as they provide high accuracy for all three datasets.

4.3. Experiments on the Indian Pines Data

Initially, an optimal kernel was selected for the proposed GPGDA based on the KNN and SVM classification results. The corresponding hyperparameters of each kernel were learned via the empirical Bayesian approach from the training data. In this experiment, 30 samples were randomly chosen from each class as training data; when the number of samples in a certain category was less than 30, 60% of the samples were chosen. For a fair comparison, the remaining data were split into a verification set (50%) and a test set (50%). The optimal number of reduced dimensions was chosen based on the verification set, and the results reported in Figure 5 are based on the test set.
Figure 5a,b depicts the KNN- and SVM-based classification results of the proposed GPGDA with ten kernel functions. We do not show the results of all 18 kernel functions here, because plotting 18 curves simultaneously would be too cluttered. We only show the ten kernels poly, lin, mlpard, polyard, linard, matern52, rbfard2, sqexp, dexp and gibbs [59], which yield better classification results than the others. Although there are some differences among the ten kernels in terms of the KNN classification results, the SVM results are similar, which means that any of the ten kernels can be used effectively. Since the ultimate goal of dimensionality reduction is classification, we select the appropriate kernel based on the SVM results, which are usually higher than the KNN results. According to Figure 5b, the kernel dexp is selected for the following experiments.
After setting the optimal kernel for GPGDA, we conducted further experiments to choose the best dimensionality of the projection space in terms of the OAs based on SVM. The optimal dimensionality was selected from the range 1–30. For fairness, the optimal regularization parameters of the other DR models were set beforehand, as shown in Table 2. As shown in Figure 5c, the optimal dimensionality for each DR model on the Indian Pines data is 30. It is also worth noting that the OA of GPGDA surpasses the other DR models at most dimensions, meaning that the learned low-dimensional features are highly discriminative. Table 4 lists the average classification accuracy of each class, the AAs, OAs and KCs, as well as their STDs, for the eight DR models when the dimensionality is 30 and the classifier is SVM. Table 4 shows that the proposed GPGDA outperforms the traditional CNN, SVM and the other models in terms of AA, OA and KC. Accordingly, Figure 6 shows that the classification maps from the proposed method are more accurate than those of the other approaches.
Finally, to further verify the discriminating power of the proposed method, more classification experiments were conducted with different numbers of training samples: 10–60 samples were randomly chosen from each class, and the remainder were used as testing samples. For classes with fewer samples, no more than 60% of the samples were randomly chosen. Table 5 shows that the OAs, AAs and KCs of all methods improve as the number of training samples increases. In addition, when the classifier is KNN, LGSFA outperforms the other DR models, followed by GPGDA. This is because LGSFA takes the intraclass neighbor reconstruction relationship of each training pixel into consideration, thus enhancing the class-separability of the projected low-dimensional testing data. The local geometric structure is not utilized in GPGDA, so it could be added in our future work. As for the SVM classification accuracy, GPGDA achieves better OAs than the other DR techniques. In general, the proposed GPGDA is capable of obtaining more discriminative features.

4.4. Experiments on the University of Pavia Data

To further demonstrate the effectiveness of the proposed algorithm, we conducted experiments on the University of Pavia data. Similarly, we first selected the optimal kernel and learned its corresponding hyperparameters for the proposed GPGDA. In this experiment, 30 training samples were randomly picked from each class, while the remaining data were split into a verification set (50%) and a test set (50%). The optimal number of reduced dimensions was picked based on the verification set, and the results reported in Figure 7 are based on the test set.
Figure 7a,b illustrates the OAs based on KNN and SVM of the proposed GPGDA with ten kernel functions. Considering the display effects, only ten well-performing kernels (rbf, mlp, lin, rgfard, mlpard, linard, dexp, gaussian, gg and gibbs) are shown. Similarly, there are some differences among the ten kernels in terms of the OAs based on KNN, but the classification results from SVM almost coincide with each other. Thus, choosing an arbitrary kernel from these ten kernel functions has little impact on the SVM classification results, indicating that selecting the best kernel function is not a troublesome problem for our model. Since the OAs of dexp are the highest in both Figure 7a,b, we chose the kernel dexp for the subsequent experiments.
Once the optimal kernel for GPGDA was chosen, we selected the best dimensionality of the projection space in terms of the SVM results acquired in the low-dimensional space. All experiments were conducted for low dimensions from 1 to 30. To be fair, the optimal regularization parameters of the other DR models were determined via parameter sensitivity experiments, as shown in Table 2. Figure 7c shows that the proposed GPGDA is superior to the other DR models in almost all low-dimensional projection spaces. Moreover, the optimal dimensionality for each DR model on the University of Pavia data is 30. Table 6 displays the AAs, OAs, KCs and the detailed classification accuracy for each class based on SVM when the dimensionality is 30. Table 6 shows that GPGDA surpasses the traditional CNN, SVM and the other DR algorithms in terms of AA, OA and KC. Accordingly, the classification maps in Figure 8 lead to a similar conclusion.
Finally, based on the optimal projection dimensionality, we further compared the discriminating power of the eight DR approaches by randomly selecting different numbers of pixels as training data. Here, 10–60 training samples were randomly chosen from each class, and the remainder were used as testing samples. Table 7 shows that the OAs, AAs and KCs of all methods rise as the number of training samples increases. Specifically, LGSFA achieves OAs based on KNN comparable to those of the proposed GPGDA, while GPGDA surpasses LGSFA and the other methods in terms of the SVM classification results. This illustrates that GPGDA exceeds LGSFA, which preserves the local geometric structure, in extracting discriminative features from HSI data.

4.5. Experiments on the Salinas Data

Another challenging HSI dataset is the Salinas data. As before, we first chose the optimal kernel for the proposed GPGDA, whose hyperparameters can be learned automatically via the empirical Bayesian approach. Thirty training samples were randomly picked from each class in this experiment, while the remaining data were split into a verification set (50%) and a test set (50%). The optimal number of reduced dimensions was chosen based on the verification set, and the results reported in Figure 9 are based on the test set.
Figure 9a,b shows the KNN and SVM classification accuracy of the proposed GPGDA, respectively. For the sake of simplicity, the ten better-performing kernels are shown: lin, polyard, linard, matern32, matern52, rbfard2, sqexp, dexp, gg and gibbs. As in the experimental results for the Indian Pines and University of Pavia datasets, only slight differences are observed in Figure 9a,b, indicating that any of the ten kernels can be used effectively. Since the SVM classification result of rbfard2 is slightly higher than the others, the kernel rbfard2 is finally picked.
Having chosen the optimal kernel for GPGDA, the best dimensionality of the projection space can be obtained, as shown in Figure 9c, which reports the SVM results in the low-dimensional projection space. All experiments were conducted for low-dimensional spaces ranging from 1 to 30. To be fair, the best regularization parameters of the other DR models were determined in advance; the optimal parameter values of the other DR models on the Salinas dataset are displayed in Table 2. Once again, the optimal dimensionality for all the DR models on the Salinas data is 30. Table 8 shows that GPGDA outperforms the other methods in terms of OA and KC. Accordingly, Figure 10 demonstrates that the classification map of GPGDA is more accurate than those of the other methods.
Finally, based on the optimal projection dimensionality, we also conducted experiments with different amounts of training data: 10–60 samples were randomly picked from all the labeled data, and the rest were used as testing data. Table 9 shows that the OAs, AAs and KCs of all methods keep an upward tendency as the number of training samples increases. Moreover, the OAs of GPGDA based on SVM are higher than those of the other DR models, while LGSFA is superior to GPGDA in terms of the KNN classification results because it preserves the local geometric structure. In general, the experimental results in Table 9 corroborate the discriminating power of the proposed GPGDA.

5. Discussion

Based on the above experimental results, we offer the following discussion.
(i)
The proposed GPGDA outperforms SPPCA, NWFE, DGPLVM, SLGDA, LapCGDA, KCGDA and LGSFA in terms of OA, AA and KC based on SVM. As for the KNN classification results, LGSFA always takes first place, but GPGDA is superior to the other methods such as DGPLVM, SLGDA and LapCGDA. A possible explanation is that, although the similarity graph learned by GPGDA is more representative than those of the other models, LGSFA preserves the intraclass neighbor reconstruction relationship in its objective function, which accounts for the local manifolds. However, compared to the other DR models, whose regularization parameters need to be manually tuned via parameter sensitivity experiments, the only parameters in GPGDA, the kernel hyperparameters, can be learned automatically by gradient-based optimization algorithms. Furthermore, the procedure of dividing the multi-class data into two classes enforces interclass separability when learning the kernel matrix from the training data of each class.
(ii)
KCGDA is a kernel-based discriminant analysis method that combines the advantages of kernels and the GEDA framework. However, it is quite difficult to set the optimal kernel parameters for KCGDA; sometimes KCGDA is even inferior to conventional CGDA because of unsuitable parameters. DGPLVM is a GP-based supervised DR model but, unlike GPGDA, it still has regularization parameters to be tuned, so parameter sensitivity experiments have to be carried out. By contrast, the proposed GPGDA outperforms both models in terms of classification accuracy and automatic parameter tuning.
(iii)
The time complexity of GPGDA is $O(C n_l^3)$, which is the same as that of other discriminant-analysis-based methods such as CGDA and LapCGDA. However, traditional discriminant-analysis-based methods reach their closed-form solutions directly, while the GPR in the proposed GPGDA is optimized by gradient-based optimization algorithms, which can be more time-consuming. Nevertheless, when the number of training samples is small, which is the typical case in HSI classification, the training time remains short. In Table 4, Table 6 and Table 8, we report the running times of extracting the dimensionality-reduced features with different algorithms on the three HSI datasets. It should be noted that, although the proposed GPGDA needs more running time than most of the contrastive DR models, the hyperparameters of GPGDA are learned automatically, so the time spent on the parameter sensitivity experiments required by the other DR models is saved. Furthermore, if parallel computation is adopted to calculate the class-specific similarity matrices, the experiment time will be further reduced.
(iv)
Deep-learning-based methods such as Convolutional Neural Networks (CNNs) can take spatial information into account while extracting spectral information from HSIs. Their performance can be sensitive to the depth and width of the network, and more layers mean more parameters that need to be learned. Due to the large number of learnable parameters, sufficient training samples are needed for CNNs to avoid overfitting. Unfortunately, the lack of labeled training samples is a common bottleneck in HSI classification tasks, which can degrade the performance of CNNs. By contrast, the only thing we need to select in the proposed nonparametric GPGDA model is the type of kernel, and GPGDA can significantly outperform CNNs, especially in the small-sample-size scenario.

6. Conclusions

This paper introduces GPGDA, a novel supervised DR technique for HSI data based on the GEDA framework. The proposed GPGDA utilizes the kernel function in GP to calculate all the within-class matrices and then constructs the block-diagonal intrinsic graph in the GEDA framework. Once the intrinsic graph is obtained, the optimal projection matrix can be evaluated based on the GEDA framework. Various experimental results illustrate that discriminative information for classification can be effectively extracted by the proposed DR method. Our future work will focus on how to introduce the local geometric manifold structure of HSI data into the GPGDA algorithm.

Supplementary Materials

The following are available online at https://www.mdpi.com/2072-4292/11/19/2288/s1.

Author Contributions

X.S. carried out the experiments and wrote the paper. X.J. was mainly responsible for mathematical modeling and experimental design. J.G. contributed to some ideas of this paper and revised the paper. Z.C. reviewed and edited the draft.

Funding

This work was supported by the National Natural Science Foundation of China under Grants 61402424, 61773355, and 61403351.

Acknowledgments

The authors would like to thank Wei Li and Fulin Luo for sharing the MATLAB codes of SLGDA, LapCGDA and LGSFA for comparison purposes.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Solomon, J.; Rock, B. Imaging spectrometry for earth remote sensing. Science 1985, 228, 1147–1152. [Google Scholar]
  2. Vane, G.; Duval, J.; Wellman, J. Imaging spectroscopy of the Earth and other solar system bodies. Remote Geochem. Anal. Elem. Mineral. Compos. 1993, 108, 121–166. [Google Scholar]
  3. Hege, E.K.; O’Connell, D.; Johnson, W.; Basty, S.; Dereniak, E.L. Hyperspectral imaging for astronomy and space surviellance. Opt. Sci. Technol. 2004, 5159, 380–391. [Google Scholar]
  4. Lacar, F.; Lewis, M.; Grierson, I. Use of hyperspectral imagery for mapping grape varieties in the Barossa Valley, South Australia. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Sydney, Australia, 9–13 July 2001; pp. 2875–2877. [Google Scholar]
  5. Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 010901. [Google Scholar] [CrossRef] [PubMed]
  6. Kruse, F.A.; Boardman, J.W.; Huntington, J.F. Comparison of airborne hyperspectral data and EO-1 Hyperion for mineral mapping. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1388–1400. [Google Scholar] [CrossRef]
  7. Yuen, P.W.; Richardson, M. An introduction to hyperspectral imaging and its application for security, surveillance and target acquisition. Imaging Sci. J. 2010, 58, 241–253. [Google Scholar] [CrossRef]
  8. Hughes, G. On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63. [Google Scholar] [CrossRef]
  9. Chang, C.I. Hyperspectral Data Exploitation: Theory and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar]
  10. Jia, X.; Richards, J.A. Segmented principal components transformation for efficient hyperspectral remote-sensing image display and classification. IEEE Trans. Geosci. Remote Sens. 1999, 37, 538–542. [Google Scholar] [Green Version]
  11. Xia, J.; Chanussot, J.; Du, P.; He, X. (Semi-) supervised probabilistic principal component analysis for hyperspectral remote sensing image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2224–2236. [Google Scholar] [CrossRef]
  12. Wold, S.; Esbensen, K.; Geladi, P. Principal component analysis. Chemom. Intell. Lab. Syst. 1987, 2, 37–52. [Google Scholar] [CrossRef]
  13. Rodarmel, C.; Shan, J. Principal component analysis for hyperspectral image classification. Surv. Land Inf. Sci. 2002, 62, 115. [Google Scholar]
  14. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  15. Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM 2011, 58, 11. [Google Scholar] [CrossRef]
  16. Zou, H.; Hastie, T.; Tibshirani, R. Sparse principal component analysis. J. Comput. Graph. Stat. 2006, 15, 265–286. [Google Scholar] [CrossRef]
  17. Kutluk, S.; Kayabol, K.; Akan, A. Classification of Hyperspectral Images using Mixture of Probabilistic PCA Models. In Proceedings of the 24th European Signal Processing Conference, Budapest, Hungary, 29 August–2 September 2016; pp. 1568–1572. [Google Scholar]
  18. Ren, Y.; Liao, L.; Maybank, S.J.; Zhang, Y.; Liu, X. Hyperspectral image spectral-spatial feature extraction via tensor principal component analysis. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1431–1435. [Google Scholar] [CrossRef]
  19. Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef] [PubMed]
  20. Tenenbaum, J.B.; De Silva, V.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290, 2319–2323. [Google Scholar] [CrossRef] [PubMed]
  21. He, J.; Zhang, L.; Wang, Q.; Li, Z. Using diffusion geometric coordinates for hyperspectral imagery representation. IEEE Geosci. Remote Sens. Lett. 2009, 6, 767–771. [Google Scholar]
  22. Belkin, M.; Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 2003, 15, 1373–1396. [Google Scholar] [CrossRef]
  23. Zhang, Z.; Zha, H. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. SIAM J. Sci. Comput. 2004, 26, 313–338. [Google Scholar] [CrossRef]
  24. Lunga, D.; Prasad, S.; Crawford, M.M.; Ersoy, O. Manifold-learning-based feature extraction for classification of hyperspectral data: A review of advances in manifold learning. IEEE Signal Process. Mag. 2014, 31, 55–66. [Google Scholar] [CrossRef]
  25. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  26. Bengio, Y.; Paiement, J.F.; Vincent, P.; Delalleau, O.; Roux, N.L.; Ouimet, M. Out-of-sample Extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering. In Proceedings of the 16th International Conference on Neural Information Processing Systems, Whistler, BC, Canada, 9–11 December 2003; pp. 177–184. [Google Scholar]
  27. He, X.; Niyogi, P. Locality preserving projections. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2004; pp. 153–160. [Google Scholar]
  28. He, X.; Cai, D.; Yan, S.; Zhang, H.J. Neighborhood preserving embedding. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–21 October 2005; pp. 1208–1213. [Google Scholar]
  29. Zhang, T.; Yang, J.; Zhao, D.; Ge, X. Linear local tangent space alignment and application to face recognition. Neurocomputing 2007, 70, 1547–1553. [Google Scholar] [CrossRef]
  30. Yan, S.; Xu, D.; Zhang, B.; Zhang, H.J.; Yang, Q.; Lin, S. Graph embedding and extensions: A general framework for dimensionality reduction. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 40–51. [Google Scholar] [CrossRef] [PubMed]
  31. Li, W.; Feng, F.; Li, H.; Du, Q. Discriminant analysis-based dimension reduction for hyperspectral image classification: A survey of the most recent advances and an experimental comparison of different techniques. IEEE Geosci. Remote Sens. Mag. 2018, 6, 15–34. [Google Scholar] [CrossRef]
  32. Li, W.; Du, Q. A survey on representation-based classification and detection in hyperspectral remote sensing imagery. Pattern Recognit. Lett. 2016, 83, 115–123. [Google Scholar] [CrossRef]
  33. Qiao, L.; Chen, S.; Tan, X. Sparsity preserving projections with applications to face recognition. Pattern Recognit. 2010, 43, 331–341. [Google Scholar] [CrossRef] [Green Version]
  34. Yang, W.; Wang, Z.; Sun, C. A collaborative representation based projections method for feature extraction. Pattern Recognit. 2015, 48, 20–27. [Google Scholar] [CrossRef]
  35. Lu, Y.; Lai, Z.; Xu, Y.; Li, X.; Zhang, D.; Yuan, C. Low-rank preserving projections. IEEE Trans. Cybern. 2015, 46, 1900–1913. [Google Scholar] [CrossRef] [PubMed]
  36. Friedman, J.H. Regularized discriminant analysis. J. Am. Stat. Assoc. 1989, 84, 165–175. [Google Scholar] [CrossRef]
  37. Kuo, B.C.; Landgrebe, D.A. Nonparametric weighted feature extraction for classification. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1096–1105. [Google Scholar]
  38. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of hyperspectral images with regularized linear discriminant analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873. [Google Scholar] [CrossRef]
  39. Du, Q. Modified Fisher’s linear discriminant analysis for hyperspectral imagery. IEEE Geosci. Remote Sens. Lett. 2007, 4, 503–507. [Google Scholar] [CrossRef]
  40. Yu, S.; Yu, K.; Tresp, V.; Kriegel, H.P.; Wu, M. Supervised probabilistic principal component analysis. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Philadelphia, PA, USA, 20–23 August 2006; pp. 464–473. [Google Scholar]
  41. Mika, S.; Ratsch, G.; Weston, J.; Scholkopf, B.; Mullers, K.R. Fisher discriminant analysis with kernels. In Proceedings of the Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop, Madison, WI, USA, 25 August 1999; pp. 41–48. [Google Scholar]
  42. Sugiyama, M. Dimensionality reduction of multimodal labeled data by local fisher discriminant analysis. J. Mach. Learn. Res. 2007, 8, 1027–1061. [Google Scholar]
  43. Li, W.; Prasad, S.; Fowler, J.E.; Bruce, L.M. Locality-preserving dimensionality reduction and classification for hyperspectral image analysis. IEEE Trans. Geosci. Remote Sens. 2012, 50, 1185–1198. [Google Scholar] [CrossRef]
  44. Chen, H.T.; Chang, H.W.; Liu, T.L. Local discriminant embedding and its variants. In Proceedings of the Computer Vision and Pattern Recognition, 2005. CVPR 2005, San Diego, CA, USA, 20–25 June 2005; pp. 846–853. [Google Scholar]
  45. Zhao, W.; Du, S. Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
  46. Li, J.; Wu, Y.; Zhao, J.; Lu, K. Low-rank discriminant embedding for multiview learning. IEEE Trans. Cybern. 2016, 47, 3516–3529. [Google Scholar] [CrossRef] [PubMed]
  47. Urtasun, R.; Darrell, T. Discriminative Gaussian process latent variable model for classification. In Proceedings of the 24th International Conference on Machine Learning, Corvalis, OR, USA, 20–24 June 2007; pp. 927–934. [Google Scholar]
  48. Li, X.; Zhang, L.; You, J. Locally Weighted Discriminant Analysis for Hyperspectral Image Classification. Remote Sens. 2019, 11, 109. [Google Scholar] [CrossRef]
  49. Huang, H.; Li, Z.; Pan, Y. Multi-Feature Manifold Discriminant Analysis for Hyperspectral Image Classification. Remote Sens. 2019, 11, 651. [Google Scholar] [CrossRef]
  50. Ly, N.H.; Du, Q.; Fowler, J.E. Sparse graph-based discriminant analysis for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 3872–3884. [Google Scholar]
  51. He, W.; Zhang, H.; Zhang, L.; Philips, W.; Liao, W. Weighted sparse graph based dimensionality reduction for hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2016, 13, 686–690. [Google Scholar] [CrossRef]
  52. Ly, N.H.; Du, Q.; Fowler, J.E. Collaborative graph-based discriminant analysis for hyperspectral imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2688–2696. [Google Scholar] [CrossRef]
  53. Li, W.; Du, Q. Laplacian regularized collaborative graph for discriminant analysis of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7066–7076. [Google Scholar] [CrossRef]
  54. Chen, M.; Wang, Q.; Li, X. Discriminant analysis with graph learning for hyperspectral image classification. Remote Sens. 2018, 10, 836. [Google Scholar] [CrossRef]
  55. Feng, F.; Li, W.; Du, Q.; Zhang, B. Dimensionality reduction of hyperspectral image with graph-based discriminant analysis considering spectral similarity. Remote Sens. 2017, 9, 323. [Google Scholar] [CrossRef]
  56. Luo, F.; Huang, H.; Yang, Y.; Lv, Z. Dimensionality reduction of hyperspectral images with local geometric structure Fisher analysis. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 52–55. [Google Scholar]
  57. Li, W.; Liu, J.; Du, Q. Sparse and low-rank graph for discriminant analysis of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4094–4105. [Google Scholar] [CrossRef]
  58. Jiang, X.; Song, X.; Zhang, Y.; Jiang, J.; Gao, J.; Cai, Z. Laplacian regularized spatial-aware collaborative graph for discriminant analysis of hyperspectral imagery. Remote Sens. 2019, 11, 29. [Google Scholar] [CrossRef]
  59. Rasmussen, C.E. Gaussian processes in machine learning. In Summer School on Machine Learning; Springer: Berlin/Heidelberg, Germany, 2003; pp. 63–71. [Google Scholar]
  60. Xu, D.; Yan, S.; Tao, D.; Lin, S.; Zhang, H.J. Marginal fisher analysis and its variants for human gait recognition and content-based image retrieval. IEEE Trans. Image Process. 2007, 16, 2811–2821. [Google Scholar] [CrossRef] [PubMed]
  61. Chen, Y.; Zhu, L.; Ghamisi, P.; Jia, X.; Li, G.; Tang, L. Hyperspectral Images Classification With Gabor Filtering and Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2355–2359. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the proposed GPGDA method for HSIs feature extraction and classification.
Figure 2. The false color composition and ground truth of Indian Pines data with numbers of samples for each class in brackets.
Figure 3. False-color composite and ground truth of the University of Pavia data, with the number of samples in each class given in brackets.
Figure 4. False-color composite and ground truth of the Salinas data, with the number of samples in each class given in brackets.
Figure 5. Classification accuracy w.r.t. different kernels and different dimensionality of the projection space on Indian Pines data: (a) KNN classification results of GPGDA based on different kernels; (b) SVM classification results of GPGDA based on different kernels; and (c) SVM classification results of all the DR methods based on different dimensions.
Figure 6. Classification maps of CNN and different DR models based on SVM on the Indian Pines data: (a) Training GT; (b) Testing GT; (c) CNN (OA = 89.1%); (d) SVM (OA = 86.0%); (e) SPPCA (OA = 83.5%); (f) NWFE (OA = 87.6%); (g) DGPLVM (OA = 88.5%); (h) SLGDA (OA = 89.4%); (i) LapCGDA (OA = 89.4%); (j) KCGDA (OA = 87.9%); (k) LGSFA (OA = 89.8%); and (l) GPGDA (OA = 91.7%).
Figure 7. Classification accuracy w.r.t. different kernels and different dimensionality of the projection space on University of Pavia data: (a) KNN classification results of GPGDA based on different kernels; (b) SVM classification results of GPGDA based on different kernels; and (c) SVM classification results of all the DR methods based on different dimensions.
Figure 8. Classification maps of CNN and different DR models based on SVM on the University of Pavia data: (a) Training GT; (b) Testing GT; (c) CNN (OA = 90.4%); (d) SVM (OA = 91.2%); (e) SPPCA (OA = 92.5%); (f) NWFE (OA = 91.3%); (g) DGPLVM (OA = 92.6%); (h) SLGDA (OA = 91.4%); (i) LapCGDA (OA = 91.8%); (j) KCGDA (OA = 92.4%); (k) LGSFA (OA = 91.2%); and (l) GPGDA (OA = 93.9%).
Figure 9. Classification accuracy w.r.t. different kernels and different dimensionality of the projection space on Salinas data: (a) KNN classification results of GPGDA based on different kernels; (b) SVM classification results of GPGDA based on different kernels; and (c) SVM classification results of all the DR methods based on different dimensions.
Figure 10. Classification maps of CNN and different DR models based on SVM on the Salinas data: (a) Training GT; (b) Testing GT; (c) CNN (OA = 84.2%); (d) SVM (OA = 93.2%); (e) SPPCA (OA = 94.7%); (f) NWFE (OA = 94.0%); (g) DGPLVM (OA = 94.1%); (h) SLGDA (OA = 94.1%); (i) LapCGDA (OA = 94.2%); (j) KCGDA (OA = 94.1%); (k) LGSFA (OA = 94.2%); and (l) GPGDA (OA = 95.6%).
Table 1. Architecture of the CNN.
| No. | Convolution | Batch Normalization | Rectified Linear Unit | Pooling | Stride | Dropout |
|---|---|---|---|---|---|---|
| 1 | 1 × 1 × 32 | YES | YES | 2 × 2 | 2 | NO |
| 2 | 5 × 5 × 48 | YES | YES | 2 × 2 | 2 | 50% |
| 3 | 4 × 4 × 64 | NO | YES | 2 × 2 | 2 | 50% |
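To make the tabular description concrete, the following is a minimal PyTorch sketch of a three-layer CNN matching Table 1. The number of input bands (200), the patch size, the padding, the global average pooling and the final linear layer are illustrative assumptions, not details reported in the paper.

```python
# Illustrative sketch of the CNN summarized in Table 1 (PyTorch).
# The input band count (200) and the patch size are assumptions for demonstration.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, in_bands=200, num_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            # Block 1: 1x1 conv, 32 filters, BN + ReLU, 2x2 max-pool (stride 2), no dropout
            nn.Conv2d(in_bands, 32, kernel_size=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            # Block 2: 5x5 conv, 48 filters, BN + ReLU, pool, 50% dropout
            nn.Conv2d(32, 48, kernel_size=5, padding=2),
            nn.BatchNorm2d(48),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Dropout(p=0.5),
            # Block 3: 4x4 conv, 64 filters, ReLU (no BN), pool, 50% dropout
            nn.Conv2d(48, 64, kernel_size=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Dropout(p=0.5),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        f = self.features(x)        # (B, 64, h, w)
        f = f.mean(dim=(2, 3))      # global average pooling over the spatial dims
        return self.classifier(f)

# Example: a batch of 4 patches of size 8x8 with 200 spectral bands.
logits = PatchCNN()(torch.randn(4, 200, 8, 8))
print(logits.shape)  # torch.Size([4, 16])
```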
Table 2. Optimal parameter settings for DGPLVM, SLGDA, LapCGDA and KCGDA on the three HSI datasets.
| Model | Indian Pines α | Indian Pines β | PaviaU α | PaviaU β | Salinas α | Salinas β |
|---|---|---|---|---|---|---|
| DGPLVM | 1 | – | 100 | – | 100 | – |
| SLGDA | 100 | 1 | 10 | 10 | 10 | 10 |
| LapCGDA | 0.01 | 100 | 0.1 | 100 | 0.01 | 1 |
| KCGDA | 0.1 | – | 0.0001 | – | 100 | – |
Table 3. Classification accuracy based on the proposed GPGDA with 18 kernels on the Indian Pines, University of Pavia and Salinas datasets.
| Kernel | Indian Pines KNN | Indian Pines SVM | PaviaU KNN | PaviaU SVM | Salinas KNN | Salinas SVM |
|---|---|---|---|---|---|---|
| rbf | 78.23 ± 1.99 | 89.66 ± 0.55 | 83.48 ± 1.42 | 93.98 ± 0.36 | 91.94 ± 1.14 | 94.97 ± 0.35 |
| mlp | 83.73 ± 2.97 | 89.18 ± 1.99 | 85.87 ± 1.22 | 93.68 ± 0.59 | 91.09 ± 1.54 | 94.62 ± 0.98 |
| poly | 83.27 ± 0.94 | 91.70 ± 0.69 | 84.07 ± 6.17 | 89.71 ± 2.52 | 92.55 ± 0.78 | 94.91 ± 0.72 |
| lin | 84.01 ± 0.60 | 91.32 ± 0.82 | 86.61 ± 1.83 | 93.84 ± 0.75 | 92.21 ± 0.51 | 95.06 ± 0.40 |
| rbfard | 79.13 ± 3.17 | 89.10 ± 0.97 | 81.26 ± 3.43 | 93.32 ± 0.87 | 90.41 ± 0.49 | 94.66 ± 0.33 |
| mlpard | 84.63 ± 1.83 | 90.54 ± 0.59 | 85.57 ± 1.74 | 93.67 ± 0.57 | 91.27 ± 1.52 | 94.49 ± 1.12 |
| polyard | 83.38 ± 1.71 | 91.51 ± 0.67 | 81.35 ± 7.84 | 87.57 ± 6.58 | 92.56 ± 0.75 | 94.96 ± 0.68 |
| linard | 85.15 ± 0.70 | 91.17 ± 1.00 | 87.29 ± 1.56 | 93.77 ± 0.54 | 92.28 ± 0.58 | 95.13 ± 0.48 |
| matern32 | 79.95 ± 1.51 | 89.53 ± 0.73 | 75.65 ± 20.74 | 82.82 ± 23.13 | 92.37 ± 1.58 | 95.30 ± 0.56 |
| matern52 | 80.60 ± 1.87 | 90.34 ± 0.50 | 81.47 ± 2.67 | 92.79 ± 0.75 | 92.80 ± 1.06 | 95.32 ± 0.25 |
| rbfard2 | 79.95 ± 0.58 | 90.27 ± 0.62 | 77.34 ± 4.65 | 91.20 ± 0.99 | 92.87 ± 1.19 | 95.58 ± 0.06 |
| sqexp | 81.28 ± 2.62 | 90.19 ± 0.94 | 82.80 ± 3.13 | 93.20 ± 1.00 | 92.76 ± 1.00 | 95.39 ± 0.51 |
| tensor | 63.19 ± 2.05 | 85.54 ± 0.34 | 67.01 ± 0.86 | 90.96 ± 1.94 | 87.36 ± 0.57 | 93.22 ± 0.81 |
| dexp | 83.34 ± 1.42 | 91.71 ± 0.58 | 87.97 ± 1.41 | 93.88 ± 0.60 | 91.82 ± 1.86 | 95.18 ± 1.06 |
| exp | 64.03 ± 1.99 | 85.84 ± 0.32 | 67.57 ± 0.83 | 91.05 ± 1.87 | 87.52 ± 0.62 | 93.24 ± 0.83 |
| gaussian | 78.23 ± 1.99 | 89.66 ± 0.55 | 83.48 ± 1.42 | 93.98 ± 0.36 | 91.94 ± 1.14 | 94.97 ± 0.35 |
| gg | 78.48 ± 2.10 | 89.60 ± 0.60 | 82.85 ± 2.04 | 93.43 ± 1.09 | 92.43 ± 1.48 | 95.11 ± 0.37 |
| gibbs | 80.96 ± 2.48 | 90.24 ± 1.41 | 83.89 ± 3.70 | 93.41 ± 0.67 | 93.01 ± 0.80 | 95.36 ± 0.54 |
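Most of the kernel names in Table 3 (e.g., rbf/gaussian, matern32, matern52, lin, poly and their ARD variants) are standard Gaussian process covariance functions. As a point of reference, the sketch below computes an RBF (squared-exponential) covariance matrix in NumPy; the lengthscale and variance values are placeholders, not the hyperparameters learned in the experiments.

```python
# Minimal sketch of the RBF (squared-exponential) covariance, one of the
# standard GP kernels listed in Table 3. Hyperparameter values are placeholders;
# in a GP they would be optimized by maximizing the marginal likelihood.
import numpy as np

def rbf_covariance(X, Z=None, lengthscale=1.0, variance=1.0):
    """K[i, j] = variance * exp(-||x_i - z_j||^2 / (2 * lengthscale^2))."""
    Z = X if Z is None else Z
    sq_dist = (
        np.sum(X**2, axis=1, keepdims=True)
        - 2.0 * X @ Z.T
        + np.sum(Z**2, axis=1)
    )
    return variance * np.exp(-0.5 * sq_dist / lengthscale**2)

# Toy example: 5 "pixels" with 4 spectral bands each.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
K = rbf_covariance(X, lengthscale=2.0)
print(K.shape)               # (5, 5)
print(np.allclose(K, K.T))   # True: a valid covariance matrix is symmetric
```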
Table 4. Classification results of CNN and different dimensionality reduction methods based on SVM on the Indian Pines data.
| Class | Train | Test | CNN | SVM | SPPCA | NWFE | DGPLVM | SLGDA | LapCGDA | KCGDA | LGSFA | GPGDA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 28 | 18 | 61.5 ± 14.5 | 54.1 ± 9.7 | 57.7 ± 16.7 | 65.8 ± 11.4 | 87.3 ± 11.7 | 93.0 ± 6.1 | 71.4 ± 14.8 | 65.9 ± 13.1 | 80.1 ± 13.7 | 83.8 ± 12.9 |
| 2 | 30 | 1398 | 91.1 ± 4.2 | 86.3 ± 5.2 | 77.9 ± 3.5 | 87.3 ± 4.1 | 86.4 ± 3.5 | 87.1 ± 4.8 | 90.8 ± 4.1 | 87.4 ± 4.4 | 89.6 ± 3.4 | 92.5 ± 3.8 |
| 3 | 30 | 800 | 81.0 ± 7.3 | 75.4 ± 5.5 | 82.1 ± 6.0 | 81.7 ± 4.8 | 84.1 ± 5.5 | 84.2 ± 3.4 | 82.9 ± 5.9 | 81.0 ± 3.6 | 81.1 ± 4.5 | 88.5 ± 4.3 |
| 4 | 30 | 207 | 87.0 ± 8.9 | 69.4 ± 7.2 | 75.7 ± 7.6 | 75.3 ± 7.3 | 81.6 ± 6.0 | 83.0 ± 9.0 | 81.4 ± 8.5 | 78.3 ± 8.5 | 85.5 ± 6.1 | 84.9 ± 8.1 |
| 5 | 30 | 453 | 95.4 ± 4.4 | 86.6 ± 8.1 | 83.5 ± 6.3 | 89.1 ± 6.5 | 92.7 ± 5.4 | 88.0 ± 7.9 | 91.5 ± 5.1 | 90.8 ± 5.8 | 95.1 ± 3.9 | 95.8 ± 3.0 |
| 6 | 30 | 700 | 87.3 ± 5.3 | 94.8 ± 2.9 | 98.3 ± 1.3 | 97.2 ± 1.9 | 98.0 ± 1.0 | 96.8 ± 1.3 | 98.6 ± 1.0 | 97.9 ± 1.2 | 97.1 ± 1.4 | 98.9 ± 0.9 |
| 7 | 17 | 11 | 80.7 ± 16.5 | 47.3 ± 22.6 | 61.5 ± 15.8 | 64.0 ± 32.8 | 82.9 ± 19.5 | 90.2 ± 8.5 | 72.1 ± 28.9 | 62.7 ± 31.4 | 84.6 ± 19.2 | 85.1 ± 18.4 |
| 8 | 30 | 448 | 94.9 ± 4.1 | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 | 99.6 ± 0.7 | 98.6 ± 4.3 | 100.0 ± 0.0 | 100.0 ± 0.0 | 99.8 ± 0.4 | 100.0 ± 0.1 |
| 9 | 12 | 8 | 47.8 ± 26.7 | 12.5 ± 2.5 | 48.7 ± 22.4 | 17.1 ± 4.6 | 61.1 ± 18.3 | 60.6 ± 22.7 | 38.0 ± 11.3 | 26.7 ± 8.2 | 34.5 ± 19.1 | 62.0 ± 13.4 |
| 10 | 30 | 942 | 84.1 ± 5.2 | 76.6 ± 5.5 | 75.1 ± 5.4 | 77.5 ± 6.5 | 77.8 ± 4.6 | 81.5 ± 6.8 | 78.1 ± 4.4 | 76.5 ± 5.5 | 80.8 ± 6.1 | 79.3 ± 5.2 |
| 11 | 30 | 2425 | 93.7 ± 2.9 | 92.5 ± 1.6 | 91.6 ± 2.6 | 94.0 ± 2.3 | 92.0 ± 3.0 | 93.8 ± 2.9 | 94.2 ± 1.9 | 92.8 ± 2.1 | 94.7 ± 1.3 | 94.8 ± 1.3 |
| 12 | 30 | 563 | 81.5 ± 5.7 | 78.6 ± 3.0 | 60.0 ± 5.1 | 81.6 ± 6.2 | 74.2 ± 7.0 | 84.2 ± 4.6 | 82.1 ± 5.5 | 81.0 ± 3.5 | 83.9 ± 5.9 | 83.9 ± 6.9 |
| 13 | 30 | 175 | 93.5 ± 5.7 | 94.1 ± 3.4 | 99.2 ± 1.3 | 98.2 ± 1.3 | 99.3 ± 1.5 | 98.4 ± 2.8 | 99.3 ± 1.0 | 99.0 ± 1.7 | 96.6 ± 2.6 | 99.6 ± 0.7 |
| 14 | 30 | 1235 | 97.1 ± 2.0 | 99.1 ± 0.5 | 98.6 ± 0.5 | 99.0 ± 0.8 | 98.6 ± 0.5 | 99.1 ± 0.7 | 98.9 ± 1.0 | 99.1 ± 0.7 | 98.9 ± 0.6 | 98.6 ± 0.6 |
| 15 | 30 | 356 | 66.0 ± 13.4 | 76.9 ± 3.7 | 74.6 ± 6.9 | 84.0 ± 4.4 | 83.5 ± 6.6 | 86.6 ± 4.5 | 85.4 ± 5.6 | 83.2 ± 6.3 | 89.7 ± 2.9 | 85.9 ± 5.5 |
| 16 | 30 | 63 | 74.3 ± 10.9 | 82.8 ± 8.0 | 89.2 ± 6.4 | 85.6 ± 8.3 | 89.3 ± 7.1 | 85.2 ± 10.4 | 86.6 ± 9.9 | 87.7 ± 8.2 | 79.1 ± 8.8 | 86.5 ± 8.7 |
| AA (%) | – | – | 82.2 ± 3.0 | 76.7 ± 1.7 | 79.6 ± 2.4 | 81.1 ± 2.6 | 86.8 ± 2.1 | 88.1 ± 2.6 | 84.5 ± 2.7 | 81.9 ± 2.8 | 85.7 ± 2.3 | 88.8 ± 1.9 |
| OA (%) | – | – | 89.2 ± 0.9 | 86.0 ± 1.0 | 83.5 ± 1.7 | 87.6 ± 1.6 | 88.5 ± 1.4 | 89.4 ± 1.3 | 89.4 ± 1.4 | 87.9 ± 1.2 | 89.8 ± 1.2 | 91.7 ± 0.6 |
| KC | – | – | 0.87 ± 0.01 | 0.77 ± 0.02 | 0.80 ± 0.02 | 0.81 ± 0.03 | 0.87 ± 0.02 | 0.88 ± 0.03 | 0.84 ± 0.03 | 0.82 ± 0.03 | 0.86 ± 0.02 | 0.89 ± 0.01 |
| Runtime (in seconds) | – | – | 38.41 | 1.31 | 0.17 | 18.18 | 848.85 | 24.42 | 2.48 | 3.18 | 4.35 | 112.32 |
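The AA, OA and KC rows in Tables 4, 6 and 8 denote average (per-class) accuracy, overall accuracy and the kappa coefficient. The following NumPy sketch shows how these summaries are typically derived from a confusion matrix; the labels in the example are synthetic.

```python
# Sketch of the three summary metrics reported in Tables 4, 6 and 8:
# overall accuracy (OA), average accuracy (AA) and the kappa coefficient (KC).
import numpy as np

def classification_summaries(y_true, y_pred, num_classes):
    # Confusion matrix C[i, j]: number of samples of class i predicted as class j.
    C = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(C, (y_true, y_pred), 1)

    n = C.sum()
    oa = np.trace(C) / n                               # overall accuracy
    aa = np.mean(np.diag(C) / C.sum(axis=1))           # mean of per-class accuracies
    pe = np.sum(C.sum(axis=0) * C.sum(axis=1)) / n**2  # chance agreement
    kc = (oa - pe) / (1.0 - pe)                        # Cohen's kappa
    return oa, aa, kc

# Synthetic example with 3 classes.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 1])
print(classification_summaries(y_true, y_pred, num_classes=3))
```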
Table 5. Classification results with different amounts of training data on the Indian Pines data (OA ± STD (%)).
| Classifier | DR Model | n_l = 10 | n_l = 20 | n_l = 30 | n_l = 40 | n_l = 50 | n_l = 60 |
|---|---|---|---|---|---|---|---|
| NN | SPPCA | 54.26 ± 2.73 | 66.04 ± 1.42 | 70.77 ± 1.30 | 76.35 ± 1.30 | 78.98 ± 1.43 | 81.18 ± 0.89 |
| | NWFE | 62.62 ± 2.14 | 69.50 ± 1.75 | 74.89 ± 1.57 | 77.93 ± 1.55 | 79.88 ± 0.84 | 81.30 ± 1.06 |
| | DGPLVM | 68.09 ± 1.85 | 74.75 ± 2.14 | 78.40 ± 1.97 | 81.97 ± 1.50 | 84.86 ± 0.97 | 87.92 ± 1.16 |
| | SLGDA | 67.66 ± 1.49 | 76.75 ± 1.33 | 80.89 ± 1.88 | 84.59 ± 1.37 | 86.45 ± 1.06 | 88.28 ± 0.98 |
| | LapCGDA | 62.28 ± 31.48 | 67.31 ± 23.68 | 78.85 ± 1.79 | 81.09 ± 1.35 | 82.89 ± 1.22 | 83.90 ± 1.44 |
| | KCGDA | 57.35 ± 2.77 | 68.10 ± 1.86 | 72.79 ± 2.44 | 76.41 ± 2.04 | 79.33 ± 1.51 | 81.17 ± 1.09 |
| | LGSFA | 74.08 ± 1.86 | 84.34 ± 1.62 | 90.12 ± 1.28 | 92.63 ± 1.17 | 93.88 ± 0.99 | 95.15 ± 0.75 |
| | GPGDA | 66.78 ± 2.22 | 77.26 ± 2.09 | 82.50 ± 1.68 | 85.43 ± 1.63 | 88.09 ± 1.51 | 90.09 ± 0.86 |
| SVM | SPPCA | 71.80 ± 1.71 | 79.26 ± 1.51 | 83.53 ± 1.68 | 85.89 ± 1.12 | 87.80 ± 0.91 | 89.30 ± 0.73 |
| | NWFE | 76.50 ± 1.01 | 83.35 ± 1.49 | 87.62 ± 1.62 | 89.90 ± 0.82 | 91.33 ± 1.08 | 92.37 ± 0.83 |
| | DGPLVM | 76.70 ± 1.49 | 84.77 ± 1.03 | 88.52 ± 1.41 | 91.58 ± 0.75 | 93.10 ± 0.42 | 94.55 ± 0.46 |
| | SLGDA | 77.11 ± 1.29 | 85.17 ± 1.39 | 88.96 ± 0.93 | 91.23 ± 0.94 | 92.71 ± 0.65 | 94.17 ± 0.62 |
| | LapCGDA | 66.39 ± 37.19 | 77.18 ± 26.91 | 89.44 ± 1.36 | 91.04 ± 0.92 | 92.50 ± 1.06 | 93.61 ± 0.77 |
| | KCGDA | 74.74 ± 1.49 | 83.34 ± 1.04 | 87.85 ± 1.16 | 90.05 ± 0.88 | 91.89 ± 0.65 | 92.69 ± 0.60 |
| | LGSFA | 74.17 ± 1.94 | 84.05 ± 1.94 | 89.75 ± 1.16 | 92.59 ± 1.15 | 93.97 ± 0.94 | 95.37 ± 0.50 |
| | GPGDA | 79.87 ± 1.36 | 87.15 ± 1.21 | 91.71 ± 0.58 | 93.37 ± 0.68 | 94.80 ± 0.47 | 95.52 ± 0.34 |
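Tables 5, 7 and 9 vary the number of labeled training samples n_l drawn from each class. A minimal sketch of such per-class (stratified) sampling is given below; the function name, the random seed and the rule for classes with fewer than n_l samples are illustrative assumptions rather than the exact protocol of the paper.

```python
# Sketch of drawing n_l labeled training pixels per class, as in the
# experiments of Tables 5, 7 and 9. Array and function names are illustrative.
import numpy as np

def split_per_class(labels, n_l, seed=0):
    """Return train/test index arrays with (up to) n_l training samples per class."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        k = min(n_l, len(idx) - 1)   # keep at least one test sample per class
        train_idx.append(idx[:k])
        test_idx.append(idx[k:])
    return np.concatenate(train_idx), np.concatenate(test_idx)

# Toy example: 100 labeled pixels spread over 4 classes, n_l = 10.
labels = np.repeat(np.arange(4), 25)
train_idx, test_idx = split_per_class(labels, n_l=10)
print(train_idx.size, test_idx.size)  # 40 60
```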
Table 6. Classification results of CNN and different DR methods based on SVM on the University of Pavia data.
| Class | Train | Test | CNN | SVM | SPPCA | NWFE | DGPLVM | SLGDA | LapCGDA | KCGDA | LGSFA | GPGDA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 30 | 6601 | 97.1 ± 2.2 | 92.2 ± 1.9 | 93.7 ± 2.5 | 93.0 ± 1.7 | 94.2 ± 1.1 | 92.8 ± 3.3 | 93.5 ± 1.6 | 92.7 ± 1.7 | 91.6 ± 1.9 | 94.3 ± 2.1 |
| 2 | 30 | 18619 | 96.1 ± 1.6 | 98.2 ± 0.7 | 97.9 ± 0.8 | 98.4 ± 0.6 | 98.1 ± 0.6 | 98.3 ± 0.5 | 98.7 ± 0.5 | 98.1 ± 1.0 | 97.4 ± 0.8 | 98.6 ± 0.4 |
| 3 | 30 | 2069 | 76.3 ± 9.3 | 84.5 ± 6.3 | 80.1 ± 7.9 | 85.1 ± 5.0 | 81.4 ± 8.2 | 81.8 ± 6.5 | 86.5 ± 5.0 | 85.1 ± 6.4 | 74.6 ± 6.0 | 86.5 ± 3.6 |
| 4 | 30 | 3034 | 97.3 ± 1.2 | 84.0 ± 7.7 | 90.1 ± 8.4 | 84.0 ± 9.9 | 85.3 ± 8.6 | 88.6 ± 4.5 | 86.6 ± 7.0 | 84.9 ± 7.4 | 93.1 ± 5.0 | 93.5 ± 1.3 |
| 5 | 30 | 1315 | 99.5 ± 0.5 | 98.7 ± 1.4 | 99.8 ± 0.2 | 99.8 ± 0.2 | 99.8 ± 0.1 | 98.9 ± 2.9 | 99.2 ± 1.9 | 99.3 ± 1.0 | 99.9 ± 0.1 | 98.7 ± 2.4 |
| 6 | 30 | 4999 | 76.7 ± 9.2 | 84.2 ± 4.4 | 82.5 ± 2.7 | 87.3 ± 4.4 | 88.9 ± 3.9 | 86.3 ± 5.0 | 87.5 ± 4.2 | 85.0 ± 4.8 | 84.4 ± 4.8 | 88.9 ± 5.1 |
| 7 | 30 | 1300 | 79.8 ± 8.4 | 79.2 ± 5.5 | 92.0 ± 6.2 | 82.1 ± 5.6 | 76.2 ± 2.2 | 83.5 ± 7.9 | 83.4 ± 6.7 | 80.0 ± 3.8 | 92.7 ± 4.9 | 87.6 ± 5.2 |
| 8 | 30 | 3652 | 89.8 ± 4.5 | 81.6 ± 3.5 | 84.4 ± 3.9 | 82.9 ± 4.3 | 82.7 ± 2.9 | 83.6 ± 3.2 | 84.6 ± 3.0 | 81.6 ± 3.4 | 78.9 ± 3.6 | 84.2 ± 3.2 |
| 9 | 30 | 917 | 95.5 ± 4.5 | 90.5 ± 5.5 | 87.0 ± 1.8 | 90.1 ± 5.7 | 88.8 ± 5.9 | 87.2 ± 7.4 | 89.0 ± 5.2 | 91.3 ± 5.3 | 92.1 ± 4.1 | 90.7 ± 4.1 |
| AA (%) | – | – | 89.8 ± 0.9 | 88.1 ± 1.1 | 89.7 ± 0.8 | 89.2 ± 0.8 | 88.4 ± 1.0 | 89.0 ± 1.7 | 89.9 ± 1.3 | 88.7 ± 0.8 | 89.4 ± 1.3 | 91.5 ± 0.6 |
| OA (%) | – | – | 90.4 ± 1.3 | 91.2 ± 0.9 | 92.5 ± 1.0 | 91.3 ± 1.18 | 92.6 ± 0.8 | 91.4 ± 1.5 | 91.8 ± 1.2 | 92.4 ± 0.8 | 91.2 ± 1.0 | 93.9 ± 0.6 |
| KC | – | – | 0.90 ± 0.01 | 0.88 ± 0.01 | 0.90 ± 0.01 | 0.89 ± 0.01 | 0.88 ± 0.01 | 0.89 ± 0.02 | 0.90 ± 0.01 | 0.89 ± 0.01 | 0.89 ± 0.01 | 0.91 ± 0.01 |
| Runtime (in seconds) | – | – | 23.61 | 1.82 | 0.22 | 3.45 | 483.93 | 19.37 | 1.4 | 0.9 | 1.81 | 62.82 |
Table 7. Classification results with different amounts of training data on the University of Pavia data (OA ± STD (%)).
| Classifier | DR Model | n_l = 10 | n_l = 20 | n_l = 30 | n_l = 40 | n_l = 50 | n_l = 60 |
|---|---|---|---|---|---|---|---|
| NN | SPPCA | 63.69 ± 2.03 | 69.58 ± 1.79 | 73.73 ± 2.60 | 76.67 ± 3.27 | 79.91 ± 2.13 | 81.53 ± 2.57 |
| | NWFE | 63.59 ± 3.10 | 69.18 ± 3.34 | 73.53 ± 2.16 | 75.87 ± 2.55 | 79.51 ± 2.58 | 80.93 ± 2.25 |
| | DGPLVM | 69.98 ± 2.87 | 77.98 ± 3.07 | 82.93 ± 2.91 | 85.61 ± 1.62 | 87.63 ± 1.68 | 88.96 ± 1.11 |
| | SLGDA | 65.96 ± 4.16 | 78.56 ± 7.05 | 83.93 ± 2.98 | 86.45 ± 1.32 | 88.57 ± 1.98 | 88.73 ± 2.36 |
| | LapCGDA | 66.75 ± 5.45 | 71.74 ± 4.98 | 74.72 ± 3.36 | 78.65 ± 3.75 | 82.26 ± 3.03 | 84.16 ± 1.87 |
| | KCGDA | 58.76 ± 3.43 | 68.53 ± 2.49 | 72.91 ± 2.45 | 76.80 ± 2.22 | 80.48 ± 2.49 | 82.55 ± 2.01 |
| | LGSFA | 63.65 ± 4.59 | 81.80 ± 3.68 | 88.11 ± 1.13 | 89.57 ± 1.32 | 92.05 ± 1.26 | 92.98 ± 0.99 |
| | GPGDA | 65.03 ± 4.41 | 79.46 ± 2.16 | 87.97 ± 1.83 | 89.31 ± 2.17 | 91.95 ± 1.40 | 93.42 ± 0.47 |
| SVM | SPPCA | 85.34 ± 3.71 | 90.39 ± 2.46 | 92.51 ± 1.01 | 93.33 ± 1.06 | 93.96 ± 1.00 | 94.60 ± 0.75 |
| | NWFE | 85.06 ± 2.81 | 89.65 ± 1.94 | 91.30 ± 1.18 | 92.55 ± 1.04 | 93.56 ± 0.56 | 94.24 ± 0.75 |
| | DGPLVM | 84.24 ± 3.63 | 90.16 ± 1.21 | 92.63 ± 0.76 | 93.58 ± 0.71 | 94.67 ± 0.62 | 95.58 ± 0.59 |
| | SLGDA | 84.72 ± 3.03 | 89.03 ± 2.32 | 91.39 ± 1.53 | 92.19 ± 1.37 | 93.55 ± 1.10 | 93.65 ± 0.71 |
| | LapCGDA | 85.26 ± 2.81 | 90.47 ± 1.78 | 91.80 ± 1.24 | 92.94 ± 0.91 | 94.11 ± 0.44 | 94.84 ± 0.49 |
| | KCGDA | 83.53 ± 4.34 | 90.08 ± 2.13 | 92.40 ± 0.79 | 93.16 ± 0.86 | 94.10 ± 0.64 | 94.73 ± 0.60 |
| | LGSFA | 78.18 ± 3.57 | 85.76 ± 2.09 | 91.15 ± 1.04 | 92.84 ± 1.18 | 94.12 ± 0.86 | 95.32 ± 0.34 |
| | GPGDA | 87.78 ± 1.14 | 91.71 ± 0.28 | 93.88 ± 0.60 | 93.88 ± 1.45 | 95.52 ± 0.62 | 95.98 ± 0.52 |
Table 8. Classification results of CNN and different DR methods based on SVM on the Salinas data.
| Class | Train | Test | CNN | SVM | SPPCA | NWFE | DGPLVM | SLGDA | LapCGDA | KCGDA | LGSFA | GPGDA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 30 | 1979 | 91.2 ± 5.5 | 99.0 ± 1.4 | 99.7 ± 0.9 | 99.5 ± 1.2 | 99.8 ± 0.4 | 100.0 ± 0.0 | 99.9 ± 0.3 | 99.8 ± 0.6 | 100.0 ± 0.0 | 99.9 ± 0.1 |
| 2 | 30 | 3696 | 99.8 ± 0.5 | 99.2 ± 1.0 | 100.0 ± 0.0 | 99.3 ± 0.9 | 99.6 ± 0.3 | 99.7 ± 0.3 | 99.6 ± 0.6 | 99.6 ± 0.6 | 99.9 ± 0.2 | 99.7 ± 0.4 |
| 3 | 30 | 1946 | 99.6 ± 0.6 | 95.6 ± 1.8 | 98.6 ± 0.8 | 96.9 ± 2.6 | 99.9 ± 0.2 | 99.6 ± 0.5 | 98.0 ± 1.2 | 97.2 ± 1.1 | 99.2 ± 1.3 | 99.0 ± 0.8 |
| 4 | 30 | 1364 | 93.2 ± 3.1 | 94.1 ± 2.6 | 96.5 ± 1.8 | 97.6 ± 1.2 | 97.3 ± 1.6 | 97.8 ± 0.8 | 95.7 ± 2.2 | 96.7 ± 1.5 | 97.1 ± 1.4 | 98.1 ± 1.6 |
| 5 | 30 | 2648 | 98.6 ± 1.6 | 98.0 ± 0.7 | 98.9 ± 0.6 | 98.0 ± 0.9 | 98.7 ± 0.6 | 99.3 ± 0.4 | 98.8 ± 0.6 | 98.7 ± 0.7 | 99.2 ± 0.5 | 99.0 ± 0.3 |
| 6 | 30 | 3929 | 99.4 ± 0.9 | 100.0 ± 0.0 | 100.0 ± 0.0 | 99.7 ± 1.1 | 99.9 ± 0.1 | 99.9 ± 0.2 | 100.0 ± 0.0 | 100.0 ± 0.0 | 99.9 ± 0.4 | 100.0 ± 0.0 |
| 7 | 30 | 3549 | 98.9 ± 2.2 | 98.4 ± 0.9 | 99.5 ± 0.7 | 99.1 ± 1.0 | 99.8 ± 0.3 | 99.7 ± 0.5 | 99.1 ± 0.9 | 99.1 ± 0.9 | 100.0 ± 0.0 | 99.5 ± 0.7 |
| 8 | 30 | 11241 | 84.7 ± 10.4 | 90.1 ± 2.7 | 90.9 ± 1.9 | 91.6 ± 1.8 | 90.8 ± 2.2 | 91.7 ± 1.8 | 91.4 ± 1.9 | 91.9 ± 2.4 | 86.9 ± 2.6 | 93.9 ± 1.8 |
| 9 | 30 | 6173 | 99.7 ± 0.4 | 99.5 ± 0.4 | 99.4 ± 0.1 | 99.4 ± 0.3 | 99.6 ± 0.1 | 99.6 ± 0.1 | 99.6 ± 0.3 | 99.4 ± 0.4 | 99.6 ± 0.2 | 99.3 ± 0.5 |
| 10 | 30 | 3248 | 92.5 ± 5.1 | 89.7 ± 5.2 | 95.0 ± 2.6 | 92.0 ± 4.6 | 93.5 ± 2.8 | 90.2 ± 7.9 | 93.3 ± 2.9 | 92.2 ± 4.6 | 95.4 ± 2.0 | 92.2 ± 5.7 |
| 11 | 30 | 1038 | 95.4 ± 4.6 | 92.4 ± 2.6 | 98.2 ± 1.6 | 97.0 ± 3.8 | 99.0 ± 2.6 | 98.9 ± 1.8 | 97.8 ± 2.3 | 97.7 ± 2.9 | 99.9 ± 0.2 | 99.2 ± 1.1 |
| 12 | 30 | 1897 | 99.0 ± 1.8 | 95.8 ± 1.8 | 96.6 ± 4.8 | 96.8 ± 2.0 | 96.7 ± 4.8 | 98.5 ± 1.3 | 97.7 ± 1.9 | 97.2 ± 1.6 | 99.8 ± 0.3 | 98.3 ± 1.4 |
| 13 | 30 | 886 | 93.8 ± 7.0 | 94.9 ± 4.5 | 96.7 ± 2.9 | 97.8 ± 2.6 | 94.7 ± 4.9 | 96.8 ± 3.1 | 97.6 ± 3.2 | 97.5 ± 2.6 | 99.2 ± 1.3 | 99.0 ± 0.9 |
| 14 | 30 | 1040 | 95.2 ± 6.2 | 88.4 ± 7.5 | 95.9 ± 3.4 | 96.1 ± 2.0 | 95.5 ± 5.3 | 97.6 ± 2.3 | 97.5 ± 1.9 | 95.8 ± 4.1 | 99.1 ± 0.8 | 98.2 ± 1.0 |
| 15 | 30 | 7238 | 49.2 ± 10.4 | 81.8 ± 2.2 | 81.2 ± 4.8 | 82.9 ± 2.9 | 80.3 ± 4.0 | 81.7 ± 3.9 | 83.2 ± 2.7 | 82.6 ± 2.9 | 80.1 ± 2.8 | 85.5 ± 2.8 |
| 16 | 30 | 1777 | 99.2 ± 1.7 | 92.9 ± 6.5 | 96.2 ± 6.8 | 93.7 ± 7.2 | 95.0 ± 5.0 | 96.8 ± 3.8 | 94.4 ± 3.5 | 94.1 ± 7.3 | 99.9 ± 0.2 | 95.2 ± 7.5 |
| AA (%) | – | – | 93.1 ± 1.8 | 94.4 ± 0.8 | 96.5 ± 0.4 | 96.1 ± 0.5 | 96.2 ± 0.6 | 96.8 ± 0.6 | 96.5 ± 0.3 | 96.2 ± 0.5 | 97.2 ± 0.3 | 97.0 ± 0.4 |
| OA (%) | – | – | 84.2 ± 1.6 | 93.2 ± 0.6 | 94.7 ± 0.3 | 94.0 ± 0.6 | 94.1 ± 0.8 | 94.1 ± 0.8 | 94.2 ± 0.8 | 94.1 ± 0.5 | 94.2 ± 0.6 | 95.6 ± 0.1 |
| KC | – | – | 0.83 ± 0.02 | 0.94 ± 0.01 | 0.96 ± 0.00 | 0.96 ± 0.00 | 0.96 ± 0.01 | 0.97 ± 0.01 | 0.96 ± 0.00 | 0.96 ± 0.00 | 0.97 ± 0.00 | 0.97 ± 0.00 |
| Runtime (in seconds) | – | – | 41.24 | 2.05 | 0.41 | 20.87 | 985.72 | 25.37 | 2.28 | 3.39 | 4.77 | 233.12 |
Table 9. Classification results with different amounts of training data on the Salinas data (OA ± STD (%)).
| Classifier | DR Model | n_l = 10 | n_l = 20 | n_l = 30 | n_l = 40 | n_l = 50 | n_l = 60 |
|---|---|---|---|---|---|---|---|
| NN | SPPCA | 86.34 ± 3.67 | 89.47 ± 5.90 | 91.09 ± 4.45 | 92.08 ± 3.35 | 91.71 ± 3.34 | 92.74 ± 2.03 |
| | NWFE | 86.14 ± 2.13 | 88.57 ± 1.81 | 90.49 ± 0.74 | 91.08 ± 0.61 | 91.61 ± 0.32 | 92.24 ± 0.58 |
| | DGPLVM | 86.65 ± 1.86 | 89.25 ± 1.56 | 90.16 ± 0.89 | 91.94 ± 0.58 | 92.50 ± 0.62 | 92.98 ± 0.51 |
| | SLGDA | 85.45 ± 1.84 | 89.53 ± 1.85 | 91.60 ± 0.94 | 92.70 ± 0.62 | 93.05 ± 0.93 | 93.67 ± 0.71 |
| | LapCGDA | 86.34 ± 1.94 | 88.39 ± 2.25 | 91.02 ± 0.98 | 91.76 ± 1.10 | 91.83 ± 1.18 | 92.71 ± 1.48 |
| | KCGDA | 82.97 ± 1.56 | 86.60 ± 1.37 | 89.25 ± 0.62 | 90.16 ± 0.83 | 91.12 ± 0.63 | 91.94 ± 0.39 |
| | LGSFA | 90.33 ± 2.28 | 92.56 ± 1.53 | 94.33 ± 0.49 | 95.16 ± 0.55 | 95.34 ± 0.51 | 95.76 ± 0.51 |
| | GPGDA | 85.20 ± 2.05 | 89.83 ± 2.54 | 92.86 ± 1.20 | 93.56 ± 0.47 | 93.86 ± 0.72 | 94.52 ± 0.30 |
| SVM | SPPCA | 90.02 ± 1.18 | 92.92 ± 0.96 | 94.70 ± 0.34 | 95.06 ± 0.53 | 95.45 ± 0.37 | 95.91 ± 0.37 |
| | NWFE | 90.81 ± 0.74 | 92.85 ± 1.01 | 93.99 ± 0.56 | 94.62 ± 0.56 | 94.91 ± 0.46 | 95.31 ± 0.67 |
| | DGPLVM | 90.61 ± 1.41 | 93.01 ± 1.02 | 94.12 ± 0.83 | 95.45 ± 0.54 | 95.74 ± 0.42 | 96.13 ± 0.32 |
| | SLGDA | 90.72 ± 1.24 | 92.90 ± 1.25 | 94.13 ± 0.78 | 95.25 ± 0.60 | 95.64 ± 0.45 | 96.02 ± 0.34 |
| | LapCGDA | 91.30 ± 1.34 | 92.66 ± 1.25 | 94.15 ± 0.79 | 94.98 ± 0.87 | 95.43 ± 0.91 | 95.65 ± 0.91 |
| | KCGDA | 90.61 ± 1.32 | 92.79 ± 1.02 | 94.07 ± 0.54 | 94.85 ± 0.42 | 95.23 ± 0.48 | 95.60 ± 0.39 |
| | LGSFA | 91.49 ± 1.46 | 92.46 ± 1.37 | 94.17 ± 0.64 | 95.06 ± 0.56 | 95.48 ± 0.54 | 96.08 ± 0.43 |
| | GPGDA | 90.83 ± 2.18 | 93.23 ± 1.04 | 95.58 ± 0.06 | 96.07 ± 0.20 | 96.36 ± 0.37 | 96.77 ± 0.40 |
