Article

Manifold-Based Multi-Deep Belief Network for Feature Extraction of Hyperspectral Image

Key Laboratory on Opto-Electronic Technique and Systems, Ministry of Education, Chongqing University, Chongqing 400044, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(6), 1484; https://doi.org/10.3390/rs14061484
Submission received: 27 January 2022 / Revised: 13 March 2022 / Accepted: 14 March 2022 / Published: 19 March 2022
(This article belongs to the Special Issue Information Retrieval from Remote Sensing Images)

Abstract
Deep belief networks (DBNs) have been widely applied in hyperspectral imagery (HSI) processing. However, the original DBN model fails to explore the prior knowledge of training samples, which limits the discriminant capability of the extracted features for classification. In this paper, we propose a new deep learning method, termed manifold-based multi-DBN (MMDBN), to obtain deep manifold features of HSI. MMDBN designs a hierarchical initialization method that initializes the network by the local geometric structure hidden in the data. On this basis, a multi-DBN structure is built to learn deep features in each land-cover class, and it serves as the front end of the whole model. Then, a discrimination manifold layer is developed to improve the discriminability of the extracted deep features. To discover the manifold structure contained in HSI, an intrinsic graph and a penalty graph are constructed in this layer by using the label information of training samples. After that, the deep manifold features can be obtained for classification. MMDBN not only effectively extracts the deep features from each class in HSI, but also maximizes the margins between different manifolds in the low-dimensional embedding space. Experimental results on the Indian Pines, Salinas, and Botswana datasets reach 78.25%, 90.48%, and 97.35%, respectively, indicating that MMDBN achieves better classification performance compared with some state-of-the-art methods.

1. Introduction

Hyperspectral sensors can capture images in hundreds of narrow and continuous bands, which provide fine spectral details of different ground objects, and hyperspectral imagery (HSI) has been widely used in various research fields, such as environmental science, geological science, urban planning, and precision agriculture [1,2,3,4]. In recent years, the analysis of HSI data has gained substantial attention and become an increasingly active research topic [5,6,7,8]. In real applications, classifying each pixel in hyperspectral data is a key task [9,10,11]. However, the high dimensionality of HSI data generally leads to the so-called curse of dimensionality. Dimensionality reduction (DR) is an effective tool to tackle this problem: it seeks a transformation from the high-dimensional space to a low-dimensional space that yields embedding features [12,13].
In general, DR methods can be categorized into feature selection (FS) and feature extraction (FE) [14,15]. The former searches for a subset of the original variables to remove irrelevant or redundant features, while the latter projects data into a low-dimensional embedding space while preserving most of the intrinsic information [16,17,18,19]. This paper mainly focuses on FE algorithms for the dimensionality reduction of HSI.
FE methods can be divided into unsupervised, supervised, and semi-supervised ones [20,21]. Unsupervised FE algorithms learn low-dimensional features without exploring the label information of the training set [22,23]. A number of unsupervised approaches have been proposed, including principal component analysis (PCA) [24], locality preserving projections (LPP) [25,26], locally linear embedding (LLE) [27], neighborhood preserving embedding (NPE) [28], Laplacian eigenmaps (LE) [29], and local tangent space alignment (LTSA) [30]. However, their unsupervised nature limits the discriminant capability of the embedding features for classification [31]. Supervised FE methods exploit the prior knowledge of training samples to obtain discriminant features in a low-dimensional space for classification [32,33]; examples include linear discriminant analysis (LDA) [34], locality sensitive discriminant analysis (LSDA) [35], local geometric structure Fisher analysis (LGSFA) [36], and marginal Fisher analysis (MFA) [37]. Although the aforementioned FE approaches may achieve good performance in some scenes, they heavily depend on shallow descriptors, which limits their applicability in difficult scenes. Shallow features usually cannot model the nonlinear relationship between the collected spectral information and the corresponding land covers [38,39]. Therefore, extracting deep discriminant features is of great significance for HSI classification.
Recently, deep learning (DL) has been explored as an effective FE strategy to address nonlinear problems, and it has shown its advantages in different fields such as natural language processing and computer vision [40,41,42,43]. Motivated by these encouraging applications, DL has been introduced into the classification of HSI [44,45,46,47]. Compared with traditional FE methods based on shallow descriptors, DL techniques can obtain discriminant information from the original spectral features through hierarchical layers [48,49,50,51]. Chen et al. [52] designed a stacked autoencoder (SAE)-based method that classifies hyperspectral data by feeding the spectral information directly into the DL model, after which the learned features are classified by logistic regression. Chen et al. [53] extracted the deep features of HSI using a convolutional neural network (CNN) and obtained high classification performance through spatial–spectral feature extraction. Li et al. [54] developed the manifold-based maximization margin discriminant network (M3DNet) to enhance the feature extraction ability of DL models. Although the aforementioned deep models effectively explore deep features to enhance classification performance, they fail to consider the intrinsic manifold structure of HSI when constructing the network models, which limits the discriminant ability of the extracted features.
To address the above issues, a novel FE method termed manifold-based multi-deep belief network (MMDBN) is proposed by fusing deep networks and manifold learning. MMDBN develops a new network initialization method based on the local geometric structure among samples and then builds a multi-DBN model by training multiple DBNs, one per class, to learn the intrinsic information of different classes. After that, the deep features extracted from the multi-DBN model are exploited to construct a discrimination manifold layer. In this layer, a penalty graph and an intrinsic graph are explored to reveal the manifold structure of the deep features of the HSI data, which further enhances the interclass separability and intraclass compactness in the low-dimensional embedding space.
The contributions of the proposed approach are summarized as follows:
  • A hierarchical initialization strategy is designed to utilize the local geometric structure to initialize the network;
  • A multi-DBN structure is proposed to learn deep features from samples in each class, and the extracted abstract features are conducive to representing the deep information in hyperspectral data;
  • A discrimination manifold layer is constructed by using the prior knowledge of training samples, which reveals the intrinsic manifold structure of the deep features and improves the discriminant capability of the embedding features.
The remainder of this paper is organized as follows: a brief description of the RBM, DBN, and graph embedding framework is presented in Section 2. Section 3 describes the proposed algorithm in detail. Section 4 gives experimental results to demonstrate the effectiveness of MMDBN. We summarize this paper and provide recommendations for future work in Section 5.

2. Related Works

Let us denote a hyperspectral dataset by $X = [x_1, x_2, x_3, \ldots, x_N] \in \mathbb{R}^{D \times N}$, where $N$ is the number of pixels and $D$ indicates the number of spectral bands. The class label of $x_i$ is denoted by $l(x_i) \in \{1, 2, \ldots, c\}$, where $c$ is the number of land-cover classes. The purpose of FE is to learn a low-dimensional representation $Z = [z_1, z_2, z_3, \ldots, z_N] \in \mathbb{R}^{d \times N}$, where $d$ ($d \ll D$) is the dimension of the embedding features.

2.1. Restricted Boltzmann Machine (RBM)

An RBM consists of a visible layer and a hidden layer. The visible layer is responsible for the input, and the hidden layer learns high-level semantic features from the input data. The visible and hidden units are binary variables whose states are 1 or 0. The whole network is a bipartite graph: connecting edges exist only between the visible units $v = [v_1, \ldots, v_m, \ldots, v_k]$ and the hidden units $h = [h_1, \ldots, h_n, \ldots, h_k]$, and there is no edge inside the visible layer or the hidden layer. Figure 1 displays the network structure of the RBM.
As an energy-based model, the joint configuration energy of visible unit v and hidden unit h for RBM is defined as follows:
$$E(v, h; \theta) = -\sum_{m} b_m v_m - \sum_{n} a_n h_n - \sum_{m}\sum_{n} w_{mn} v_m h_n \tag{1}$$
where $\theta = (w_{mn}, a_n, b_m)$ denotes the parameters of the RBM, $a_n$ and $b_m$ are the biases of the hidden and visible units, respectively, and $w_{mn}$ is the weight between visible unit $v_m$ and hidden unit $h_n$.
The joint probability distribution of v and h is calculated by
$$P(v, h; \theta) = \frac{1}{Z(\theta)}\, e^{-E(v, h; \theta)} \tag{2}$$
in which $Z(\theta) = \sum_{v}\sum_{h} e^{-E(v, h; \theta)}$ is the normalization factor.
The conditional probabilities of $h$ and $v$ are given as
$$P(h_n = 1 \mid v) = g\!\left( \sum_{m} w_{mn} v_m + a_n \right) \tag{3}$$
$$P(v_m = 1 \mid h) = g\!\left( \sum_{n} w_{mn} h_n + b_m \right) \tag{4}$$
where $g(x) = 1 / (1 + \exp(-x))$ is the logistic function.
The RBM model is trained iteratively, and the parameters $\theta = (w_{mn}, a_n, b_m)$ can be obtained through the following gradient-based update:
$$\theta = \theta + \eta \times \frac{\partial \ln \left[ \prod_{m=1}^{k} p(v \mid \theta) \right]}{\partial \theta} \tag{5}$$
where $\eta$ is the learning rate. For high-dimensional data, it is difficult for gradient-based updates to evaluate the model expectation. However, the training efficiency of the RBM can be greatly improved by using the contrastive divergence (CD) algorithm [55]:
$$\langle v_m h_n \rangle_{\mathrm{data}} - \langle v_m h_n \rangle_{\mathrm{rec}} \approx \frac{\partial \ln p(v \mid \theta)}{\partial w_{mn}} \tag{6}$$
where $\langle \cdot \rangle_{\mathrm{data}}$ indicates the expectation over the training data, and $\langle \cdot \rangle_{\mathrm{rec}}$ represents the expectation under the reconstructed model. Then, the update rules for the RBM weights and biases are defined as follows:
$$\Delta w_{mn} = \eta \left( \langle v_m h_n \rangle_{\mathrm{data}} - \langle v_m h_n \rangle_{\mathrm{rec}} \right) \tag{7}$$
$$\Delta a_n = \eta \left( \langle h_n \rangle_{\mathrm{data}} - \langle h_n \rangle_{\mathrm{rec}} \right) \tag{8}$$
$$\Delta b_m = \eta \left( \langle v_m \rangle_{\mathrm{data}} - \langle v_m \rangle_{\mathrm{rec}} \right) \tag{9}$$
After that, the parameters of the RBM can be adjusted to appropriate values while avoiding poor local optima. The RBM has a strong feature-learning ability and can be used for information extraction. However, its performance for FE is limited when it is applied to complex nonlinear data.
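To make the CD training above concrete, the following is a minimal sketch of one CD-1 update for a binary RBM in NumPy, assuming mini-batch inputs scaled to [0, 1]; the function and variable names are illustrative and not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    # Logistic function g(x) = 1 / (1 + exp(-x)).
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(V, W, a, b, eta=0.1, rng=np.random.default_rng(0)):
    """One CD-1 update. V: (batch, M) binary data; W: (M, N) weights;
    a: (N,) hidden biases; b: (M,) visible biases."""
    ph_data = sigmoid(V @ W + a)                      # P(h=1|v), Equation (3)
    h = (rng.random(ph_data.shape) < ph_data).astype(float)
    pv_rec = sigmoid(h @ W.T + b)                     # P(v=1|h), Equation (4)
    ph_rec = sigmoid(pv_rec @ W + a)
    # Update rules of Equations (7)-(9), with batch means as expectations.
    W += eta * (V.T @ ph_data - pv_rec.T @ ph_rec) / len(V)
    a += eta * (ph_data - ph_rec).mean(axis=0)
    b += eta * (V - pv_rec).mean(axis=0)
    return W, a, b
```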

2.2. Deep Belief Network (DBN)

To improve the representation ability of a single RBM, a DBN model is established by stacking multiple RBMs together. Thus, the DBN can explore a deep hierarchical representation of training samples. Figure 2 shows the structure of DBN.
As shown in Figure 2, every two adjacent layers of the DBN can be considered a single RBM. Each RBM is trained by greedy layer-wise unsupervised learning, and an RBM does not consider other RBMs during its learning process [56].
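A minimal sketch of this greedy layer-wise scheme is given below, reusing `sigmoid` and `cd1_step` from the RBM sketch above; the layer sizes and epoch count are illustrative assumptions.

```python
import numpy as np

def train_dbn(X, layer_sizes=(60, 60, 60), epochs=50, rng=np.random.default_rng(0)):
    """Greedy layer-wise pre-training: each RBM is fit on the hidden
    activations of the previous one and never revisited. X: (batch, D)."""
    params, H = [], X
    for n_hidden in layer_sizes:
        W = 0.01 * rng.standard_normal((H.shape[1], n_hidden))
        a, b = np.zeros(n_hidden), np.zeros(H.shape[1])
        for _ in range(epochs):
            W, a, b = cd1_step(H, W, a, b)
        params.append((W, a, b))
        H = sigmoid(H @ W + a)   # propagate activation probabilities upward
    return params, H             # H: top-layer representation of X
```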

2.3. Graph Embedding (GE)

The GE framework was designed to unify most classical DR approaches [37]. GE captures the desirable geometrical or statistical properties through an intrinsic graph $G(X, W)$ and avoids undesirable characteristics through a penalty graph $G^p(X, W^p)$, where $X$ represents the vertex set of each graph. Both $G$ and $G^p$ are undirected weighted graphs; $W$ and $W^p$ are their weight matrices, in which $w_{ij}$ measures the similarity between vertices $x_i$ and $x_j$ in the intrinsic graph, and $w_{ij}^p$ measures the dissimilarity between vertices $x_i$ and $x_j$ in the penalty graph.
The similarity relationship between vertex pairs should be preserved in low-dimensional embedding space, and the objective function can be designed as
$$\arg\min_{Y^T H Y = h}\ \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \left\| y_i - y_j \right\|^2 w_{ij} = \operatorname{tr}\!\left( Y^T (D - W) Y \right) = \operatorname{tr}\!\left( Y^T L Y \right) \tag{10}$$
where $h$ is a constant, $D$ is a diagonal matrix with $D_{ii} = \sum_{j=1}^{N} w_{ij}$, and $L$ and $H$ are the Laplacian matrices of graphs $G$ and $G^p$, respectively. $H$ is a constraint matrix for scale normalization, i.e., $H = L^p = D^p - W^p$ with $D_{ii}^p = \sum_{j=1}^{N} w_{ij}^p$.
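As a small numerical check of the identity in Equation (10), the following sketch verifies on random, hypothetical data that the weighted pairwise distances equal $\operatorname{tr}(Y^T L Y)$ with $L = D - W$.

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((6, 2))            # rows y_i of a 2-D embedding
W = rng.random((6, 6))
W = (W + W.T) / 2                          # symmetric similarity weights w_ij
np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W             # graph Laplacian L = D - W
lhs = 0.5 * sum(W[i, j] * np.sum((Y[i] - Y[j]) ** 2)
                for i in range(6) for j in range(6))
rhs = np.trace(Y.T @ L @ Y)                # tr(Y^T L Y)
assert np.isclose(lhs, rhs)
```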

3. Proposed Method

In this section, a manifold-based multi-deep belief network (MMDBN) is proposed to extract deep discriminant features for HSI classification. $X = [X_1, \ldots, X_i, \ldots, X_c]$ represents the hyperspectral dataset, where $X_i = [x_i^1, \ldots, x_i^j, \ldots, x_i^{N_i}] \in \mathbb{R}^{D \times N_i}$ contains the samples from the $i$-th class and $N_i$ is the number of samples in it. $Y = [Y_1, \ldots, Y_i, \ldots, Y_c]$ represents the deep features extracted by the multi-DBN structure, and $Y_i = [y_i^1, \ldots, y_i^j, \ldots, y_i^{N_i}] \in \mathbb{R}^{D \times N_i}$ represents the features of the corresponding class, where $D$ is the dimension of the deep features. The output of MMDBN can be denoted as $z_i^j = A^T y_i^j$, where $A \in \mathbb{R}^{D \times d}$ is the projection matrix and $d$ is the dimension of the low-dimensional features.
At first, MMDBN develops a local geometric structure-based initialization method and constructs a multi-DBN structure that trains one DBN per class; the deep features of each class are then extracted with the corresponding DBN model. To further analyze the deep features extracted from the $(L-1)$-th layer, we design a discrimination manifold layer as the last layer of the whole network. In this layer, the label information of each pixel is introduced as prior knowledge to discover the manifold structure contained within the hyperspectral data; the layer separates intermanifold samples while compacting intramanifold samples, which increases the margins among different manifolds. Figure 3 displays the process of MMDBN.

3.1. Local Geometric Structure-Based Network Initialization

Different from the traditional DBN, which is initialized randomly, MMDBN develops a hierarchical initialization strategy based on the manifold structure in HSI.
Assume a DBN model consists of $L$ layers. For the $i$-th labeled training sample at the $l$-th ($1 \le l \le L$) layer, the output and input are $h_i^l$ and $v_i^l$, respectively. MMDBN builds a neighbor graph $G_n$ in each layer, where $v_i^l$ is connected to samples from the same class, and the weights $s_{ij}^l$ are represented as
$$s_{ij}^{l} = \exp\!\left( -\frac{\left\| v_i^l - v_j^l \right\|^2}{2 (t_i)^2} \right) \tag{11}$$
where $s_{ij}^l$ is determined by the spectral Euclidean distance between the $j$-th and $i$-th training samples, and $t_i = \frac{1}{n} \sum_{j=1}^{n} \left\| v_i^l - v_j^l \right\|$ is the heat kernel parameter.
Given that $v_i^l$ and $v_j^l$ are neighboring points, we expect this relationship to be maintained between $h_i^l$ and $h_j^l$. The corresponding objective function is represented by
$$R(M^l) = \min\left( \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| M^l v_i^l - M^l v_j^l \right\|^2 s_{ij}^l \right) \tag{12}$$
where $M^l$ is the network parameter matrix of layer $l$.
By some algebraic operations, Equation (12) can be reformulated as
$$\begin{aligned} \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| M^l v_i^l - M^l v_j^l \right\|^2 s_{ij}^l &= \frac{1}{2} \operatorname{tr}\!\left( \sum_{i=1}^{N} \sum_{j=1}^{N} \left( M^l v_i^l s_{ij}^l (v_i^l)^T (M^l)^T - 2 M^l v_i^l s_{ij}^l (v_j^l)^T (M^l)^T + M^l v_j^l s_{ij}^l (v_j^l)^T (M^l)^T \right) \right) \\ &= \operatorname{tr}\!\left( M^l V^l (D^l - S^l) (V^l)^T (M^l)^T \right) = \operatorname{tr}\!\left( M^l V^l L^l (V^l)^T (M^l)^T \right) \end{aligned} \tag{13}$$
in which $S^l = [s_{ij}^l]_{i,j=1}^{N}$ and $D^l = \operatorname{diag}\!\left( \big[ \sum_{j=1}^{N} s_{ij}^l \big]_{i=1}^{N} \right)$.
To reduce the influence of scaling factors in the projection, the constraint $M^l V^l (V^l)^T (M^l)^T = I$ is imposed, yielding the following optimization problem:
$$\min\ \operatorname{tr}\!\left( M^l V^l L^l (V^l)^T (M^l)^T \right) \quad \text{s.t.} \quad M^l V^l (V^l)^T (M^l)^T = I \tag{14}$$
By introducing the Lagrange multiplier method, the optimization problem in Equation (14) is transformed into a generalized eigenvalue problem:
$$V^l L^l (V^l)^T (M^l)^T = \lambda V^l (V^l)^T (M^l)^T \tag{15}$$
where $M^l$ consists of the eigenvectors corresponding to the $d$ smallest eigenvalues of Equation (15).
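The following is a minimal sketch of this initialization step, assuming NumPy/SciPy: it builds the within-class heat-kernel graph of Equation (11) (symmetrized so that the Laplacian is well defined), forms the matrices of Equation (15), and solves the generalized eigenvalue problem with `scipy.linalg.eigh`; the small ridge added to the constraint matrix is an assumption for numerical stability.

```python
import numpy as np
from scipy.linalg import eigh

def init_layer(V, labels, d):
    """V: (D, N) inputs of layer l as columns; labels: (N,) integer classes.
    Returns M^l of shape (d, D), rows = eigenvectors of Equation (15)."""
    dist2 = np.sum((V[:, :, None] - V[:, None, :]) ** 2, axis=0)   # (N, N)
    t = np.sqrt(dist2).mean(axis=1)                                # heat kernels t_i
    S = np.exp(-dist2 / (2 * t[:, None] ** 2))                     # Equation (11)
    S[labels[:, None] != labels[None, :]] = 0                      # same-class edges only
    S = (S + S.T) / 2                                              # symmetrize the graph
    L = np.diag(S.sum(axis=1)) - S                                 # L^l = D^l - S^l
    A = V @ L @ V.T
    B = V @ V.T + 1e-6 * np.eye(V.shape[0])                        # regularized constraint
    vals, vecs = eigh(A, B)                                        # ascending eigenvalues
    return vecs[:, :d].T                                           # d smallest -> rows of M^l
```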

3.2. Multi-DBN Structure

In order to extract deep features for each class in the hyperspectral image, we design a multi-DBN structure to fully extract the information in each class. As illustrated in Figure 3, the 1st to $(L-1)$-th layers of MMDBN belong to the multi-DBN structure. According to the properties of HSI, a Gaussian distribution is introduced to model the input data, which yields a real-valued RBM instead of a binary RBM [57,58]. The energy function and conditional probability distributions are defined as follows:
$$E(v, h; \theta) = \sum_{m} \frac{(v_m^i - b_m^i)^2}{2 (\sigma_m^i)^2} - \sum_{n} a_n^i h_n^i - \sum_{m}\sum_{n} w_{mn}^i \frac{v_m^i}{\sigma_m^i} h_n^i \tag{16}$$
$$P(h_n^i \mid v^i; \theta) = g\!\left( \sum_{m} w_{mn}^i \frac{v_m^i}{\sigma_m^i} + a_n^i \right) \tag{17}$$
$$P(v_m^i \mid h^i; \theta) = \mathcal{N}\!\left( b_m^i + \sigma_m^i \sum_{n} w_{mn}^i h_n^i,\ (\sigma_m^i)^2 \right) \tag{18}$$
where $\sigma^i$ is the standard deviation of the Gaussian visible units, and $\mathcal{N}(\mu, \sigma^2)$ is the Gaussian distribution with mean $\mu$ and variance $\sigma^2$.
The multi-DBN structure extracts features from the perspective of deep learning, and the features obtained from the $(L-1)$-th layer contain deep abstract information of the hyperspectral data. To improve the discriminative capability of the extracted features, MMDBN explores the manifold structure in HSI using label information.
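A minimal sketch of the Gaussian-Bernoulli conditionals in Equations (17) and (18) is given below, under the common simplifying assumption that each band is standardized so that $\sigma_m^i = 1$; it reuses `sigmoid` from the RBM sketch in Section 2.1, and the names are illustrative.

```python
import numpy as np

def grbm_step(V, W, a, b, rng=np.random.default_rng(0)):
    """V: (batch, M) standardized real-valued spectra (zero mean, unit
    variance per band, so sigma = 1). Returns hidden probabilities and a
    Gaussian reconstruction of the visible layer."""
    ph = sigmoid(V @ W + a)                              # Equation (17)
    h = (rng.random(ph.shape) < ph).astype(float)
    v_rec = h @ W.T + b + rng.standard_normal(V.shape)   # sample of Equation (18)
    return ph, v_rec
```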

3.3. Discrimination Manifold Layer

The discrimination manifold layer allows the proposed method to discover the manifold structure in the deep features, so that the extracted features maintain large margins between different manifolds in the low-dimensional embedding space. Figure 4 illustrates the discrimination manifold layer.
The motivation for designing the discrimination manifold layer is to preserve the local geometric neighbor relations and label information in the low-dimensional embedding space. To achieve this goal, it constructs an intra-DBN graph $G_w$ and an inter-DBN graph $G_b$ to explore the discriminant manifold structure of the deep features. The weight between $y_i$ and $y_j$ in $G_w$ is represented by
$$w_{ij}^{w} = \begin{cases} \exp\!\left( -\dfrac{\| y_i - y_j \|^2}{2 (t_i)^2} \right), & y_i \in N_w(y_j) \ \text{or} \ y_j \in N_w(y_i) \\ 0, & \text{otherwise} \end{cases} \tag{19}$$
For graph $G_b$, the weight is defined as
$$w_{ij}^{b} = \begin{cases} \exp\!\left( -\dfrac{\| y_i - y_j \|^2}{2 (t_i)^2} \right), & y_i \in N_b(y_j) \ \text{or} \ y_j \in N_b(y_i) \\ 0, & \text{otherwise} \end{cases} \tag{20}$$
where $N_w(y_i)$ denotes the $k_w$ intra-DBN neighbors of $y_i$, and $N_b(y_i)$ denotes the $k_b$ inter-DBN neighbors of $y_i$.
The purpose of the discrimination manifold layer is to separate deep features extracted from different DBNs and compact features learned from the same DBN. The objective functions are represented as
$$J_1(A) = \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| A^T y_i - A^T y_j \right\|^2 w_{ij}^{w} \tag{21}$$
$$J_2(A) = \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| A^T y_i - A^T y_j \right\|^2 w_{ij}^{b} \tag{22}$$
With some mathematical operations, Equations (21) and (22) can be reduced to
$$\sum_{i=1}^{N} \sum_{j=1}^{N} \left\| A^T y_i - A^T y_j \right\|^2 w_{ij}^{w} = \operatorname{tr}\!\left( A^T Y (D^w - W^w) Y^T A \right) = \operatorname{tr}\!\left( A^T Y L_1 Y^T A \right) \tag{23}$$
$$\sum_{i=1}^{N} \sum_{j=1}^{N} \left\| A^T y_i - A^T y_j \right\|^2 w_{ij}^{b} = \operatorname{tr}\!\left( A^T Y (D^b - W^b) Y^T A \right) = \operatorname{tr}\!\left( A^T Y L_2 Y^T A \right) \tag{24}$$
where $W^w = [w_{ij}^w]_{i,j=1}^{N}$, $D^w = \operatorname{diag}\!\left( \big[ \sum_{j=1}^{N} w_{ij}^w \big]_{i=1}^{N} \right)$, $W^b = [w_{ij}^b]_{i,j=1}^{N}$, and $D^b = \operatorname{diag}\!\left( \big[ \sum_{j=1}^{N} w_{ij}^b \big]_{i=1}^{N} \right)$.
As discussed above, the discrimination manifold layer not only preserves the local geometric structure of HSI, but also maximizes the margins between different manifolds. Therefore, it possesses discriminant capability in the low-dimensional space, and optimizing the following objective functions is an acceptable criterion for selecting an appropriate projection matrix:
$$\left\{ \begin{aligned} &\min\ \operatorname{tr}\!\left( A^T Y L_1 Y^T A \right) \\ &\max\ \operatorname{tr}\!\left( A^T Y L_2 Y^T A \right) \end{aligned} \right. \tag{25}$$
The optimization problem of the multi-objective function in Equation (25) can be equivalent to
$$J(A) = \min \frac{\operatorname{tr}\!\left( A^T Y L_1 Y^T A \right)}{\operatorname{tr}\!\left( A^T Y L_2 Y^T A \right)} \tag{26}$$
Then, using the Lagrange multiplier method, the solution can be formulated in the following form:
$$\frac{\partial}{\partial A} \operatorname{tr}\!\left( A^T Y L_1 Y^T A - \lambda \left( A^T Y L_2 Y^T A \right) \right) = 0 \tag{27}$$
Based on the above mathematical transformation, Equation (27) can be further simplified as
$$Y L_1 Y^T A = \lambda\, Y L_2 Y^T A \tag{28}$$
where the optimal projection matrix $A = [a_1, a_2, \ldots, a_d]$ consists of the $d$ eigenvectors corresponding to the $d$ smallest eigenvalues of Equation (28). The low-dimensional embedding features are given as
$$Z = A^T Y \in \mathbb{R}^{d \times N} \tag{29}$$
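Putting Equations (19)–(29) together, the following is a minimal sketch of the discrimination manifold layer, assuming NumPy/SciPy: it builds the intra-DBN and inter-DBN neighbor graphs, forms both Laplacians, and solves the generalized eigenproblem of Equation (28); the symmetrization rule and the small ridge on $Y L_2 Y^T$ are implementation assumptions, and the defaults $k_w = 10$ and $k_b = 100$ follow the parameter analysis in Section 4.3.2.

```python
import numpy as np
from scipy.linalg import eigh

def manifold_layer(Y, labels, d, kw=10, kb=100):
    """Y: (D, N) deep features as columns; labels: (N,) integer classes.
    Returns the projection A (D, d); the embedding is Z = A.T @ Y."""
    N = Y.shape[1]
    dist2 = np.sum((Y[:, :, None] - Y[:, None, :]) ** 2, axis=0)
    t = np.sqrt(dist2).mean(axis=1)                    # heat kernels t_i
    same = labels[:, None] == labels[None, :]
    Ww, Wb = np.zeros((N, N)), np.zeros((N, N))
    for i in range(N):
        order = np.argsort(dist2[i])
        intra = [j for j in order if same[i, j] and j != i][:kw]
        inter = [j for j in order if not same[i, j]][:kb]
        Ww[i, intra] = np.exp(-dist2[i, intra] / (2 * t[i] ** 2))  # Equation (19)
        Wb[i, inter] = np.exp(-dist2[i, inter] / (2 * t[i] ** 2))  # Equation (20)
    Ww, Wb = np.maximum(Ww, Ww.T), np.maximum(Wb, Wb.T)            # the "or" rule
    L1 = np.diag(Ww.sum(axis=1)) - Ww
    L2 = np.diag(Wb.sum(axis=1)) - Wb
    # Generalized eigenproblem of Equation (28); keep d smallest eigenvectors.
    vals, vecs = eigh(Y @ L1 @ Y.T, Y @ L2 @ Y.T + 1e-6 * np.eye(Y.shape[0]))
    return vecs[:, :d]
```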

4. Experimental Results and Analysis

In this section, three real HSI datasets, Indian Pines, Salinas, and Botswana, are introduced to evaluate the effectiveness of the proposed MMDBN.

4.1. Experiment Datasets

Indian Pines dataset [59]: This dataset is a scene over Northwest Indiana acquired by the AVIRIS sensor. It contains 145 × 145 pixels with 220 bands, and its spatial resolution is 20 m. After removing the bands affected by water vapor and atmospheric effects, 200 radiance channels remain. It has 16 classes of land cover in total, and its false-color image and ground truth with detailed class information are given in Figure 5. Brackets list the sample size of each class.
Salinas dataset [59]: The second dataset was captured over Salinas Valley, Southern California, by the AVIRIS sensor. The set is composed of 204 spectral channels after 20 bands were removed due to noise and water absorption. The spatial size of this dataset is 512 × 217 pixels, and its geometric resolution is 3.7 m. There are sixteen land-cover types, and the ground truth with the corresponding classes is displayed in Figure 6.
Botswana dataset [59]: This dataset is a scene over the Okavango Delta, Botswana, in southern Africa, collected by the Hyperion sensor on the NASA EO-1 satellite on 31 May 2001. The size of the image is 1476 × 256 pixels, and the spatial resolution is 30 m. A total of 145 spectral bands are utilized in the experiments after removing the channels seriously affected by noise and water absorption. Figure 7 exhibits its false-color image and ground truth.

4.2. Experimental Setup

In all experiments, each HSI dataset was randomly divided into a training set and a test set. For small classes in the Indian Pines dataset, such as Grass/pasture-mowed, Oats, and Alfalfa, we set 10 training samples per class. A low-dimensional embedding space was constructed by each DR method from the training samples, and the test set was mapped into this embedding space. Then, the K-nearest neighbor (KNN) classifier with Euclidean distance was employed for classification. After that, the kappa coefficient (Kappa), average accuracy (AA), overall accuracy (OA), and classification accuracy of each class (CA) were used to investigate the performance of the different DR approaches. Under each condition, the experiment was repeated 10 times to obtain results with mean and standard deviation (STD). The experiments in this paper were completed on a computer with MATLAB 2014b, 32 GB memory, and an i7-7800X CPU. The MATLAB deep learning toolbox was used to develop the code of MMDBN.
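For reproducibility, the following is a hedged sketch of this evaluation protocol, with scikit-learn's `KNeighborsClassifier` standing in for the KNN classifier with Euclidean distance (the neighborhood size is an assumption, as the paper does not state it), together with the standard confusion-matrix definitions of OA, AA, and Kappa.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def evaluate(Z_train, y_train, Z_test, y_test, c):
    """Z_*: (n_samples, d) embeddings; y_*: integer labels in [0, c)."""
    clf = KNeighborsClassifier(n_neighbors=1).fit(Z_train, y_train)
    pred = clf.predict(Z_test)
    cm = np.zeros((c, c))
    for t, p in zip(y_test, pred):                 # confusion matrix
        cm[t, p] += 1
    oa = np.trace(cm) / cm.sum()                   # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))     # mean per-class accuracy
    pe = cm.sum(axis=0) @ cm.sum(axis=1) / cm.sum() ** 2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```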

4.3. Parameter Sensitivity Analysis

To evaluate the influence of the parameters on the performance of MMDBN, 10%, 2%, and 10% of the samples in the Indian Pines, Salinas, and Botswana datasets, respectively, were randomly selected from each land-cover class for training, and the remaining samples were used for testing.

4.3.1. Evaluation of the Model with Embedding Dimension

In this subsection, a series of experiments was designed to analyze the influence of the embedding dimension on all FE algorithms. Figure 8 displays the OAs with different embedding dimensions.
As shown in Figure 8, the OAs of all algorithms first improve as the embedding dimension increases, because more information becomes available for classification. However, the classification results of most algorithms tend to stabilize or even decline when the dimension reaches a certain level, because the valuable information contained in the embedding features is close to saturation. Compared with the other approaches, MMDBN achieved the best performance on all datasets, because it fuses the multi-DBN structure and the discrimination manifold layer to extract deep manifold features with good discriminant ability for HSI classification. Based on the above analysis, we chose 30 as the feature dimension to achieve satisfactory performance. For LDA, the embedding dimension was set to $c-1$, in which $c$ is the number of classes in the HSI dataset.

4.3.2. Evaluation of the Model with Different Value of Neighbors in Discrimination Manifold Layer

This subsection investigates how the numbers of interclass and intraclass neighbors affect the classification performance of MMDBN. In the experiments, the parameters $k_b$ and $k_w$ are tuned over $\{20, 40, 60, \ldots, 200\}$ and $\{5, 10, 15, \ldots, 50\}$, respectively. The OAs with different values of $k_w$ and $k_b$ are shown in Figure 9.
From Figure 9, the OA improves and then stabilizes as $k_b$ increases, because a large number of interclass neighbor points is conducive to exploring the discriminative structure that maximizes the manifold margins of the HSI data. Meanwhile, an appropriate size of $k_w$ can discover the local manifold structure and compact samples from the same class. For the three HSI datasets, we set the parameters $k_w$ and $k_b$ to 10 and 100, respectively.

4.3.3. Evaluation of the Model with Different Number of Model Layers

To analyze the impact of the number of layers on the performance of MMDBN, the experiments were repeated ten times to obtain the OA with mean and standard deviation under each condition; the relationship between the number of layers and the OAs on the three HSI datasets is displayed in Figure 10.
As illustrated in Figure 10, the number of hidden layers within the multi-DBN structure plays a significant role in the feature extraction of hyperspectral data. It can be easily observed that the proposed approach achieves better classification performance when the value of L is 3 or 4. This is because the number of parameters in MMDBN increases dramatically with the number of layers, which easily leads to overfitting on limited training data. To obtain better classification results, the number of layers in the MMDBN algorithm was set to 4 for the three datasets.

4.3.4. Evaluation of the Model with Different Number of Nodes

To set a proper number of hidden nodes for the proposed model, we investigated the performance of MMDBN with different numbers of nodes, where the number of nodes within each layer was tuned over $\{30, 60, 90, \ldots, 300\}$. Figure 11 shows the relationship between the number of nodes in each layer and the classification accuracies.
According to Figure 11, the number of nodes has a considerable impact on the classification results of MMDBN. It can be easily observed that the OA first rises and then falls. This indicates that too many nodes have a negative effect on classification: a large number of nodes makes the model redundant or even causes overfitting when only limited training samples are available. Based on the above analysis, the optimal number of nodes is 60 for the three HSI datasets.

4.4. Comparisons with Other State-of-the-Art DR Methods

To evaluate the effectiveness of MMDBN, we compared it with several state-of-the-art DR approaches, including Baseline, PCA [24], LDA [34], LPP [26], NPE [28], LGSFA [36], and MFA [37], where Baseline denotes that the test set is classified with the KNN classifier without any DR. Cross-validation was adopted to obtain the optimal parameters for all methods. The number of neighbor points for LPP and NPE was set to 9. The numbers of intraclass and interclass neighbor points for LGSFA and MFA were set to 9 and 180, respectively.
To analyze the classification performance of each method with different sizes of the training set, we selected $n_i$ ($n_i$ = 10, 20, 30, 40, 50, 60, 80, 100) samples from each class for training, and the remaining samples were used for testing. Table 1, Table 2 and Table 3 report the mean OA with STD for the different DR approaches on the three HSI datasets.
As shown in Table 1, Table 2 and Table 3, the OAs of all methods rise as the size of the training set increases. The supervised learning methods outperform the unsupervised ones in most conditions; the reason is that the prior knowledge of the training data helps improve the discriminative ability of the extracted features. The proposed approach produces the best classification results among all DR methods, especially when only a few training samples are available. This is because MMDBN not only extracts deep abstract features, but also uses a discrimination manifold layer to reveal the manifold structure within HSI.
To investigate the classification performance of MMDBN on different types of land cover, 10%, 2%, and 10% of the samples in each class were randomly selected from the three datasets, respectively, for training. Table 4, Table 5 and Table 6 list the classification accuracy of each class for the different methods on the HSI datasets, and the corresponding classification maps are shown in Figure 12, Figure 13 and Figure 14, respectively.
As illustrated in Table 4, Table 5 and Table 6, the proposed approach achieves better classification results on most classes of the three datasets, especially on the Indian Pines dataset. Compared with methods without a multi-manifold model, MMDBN needs a longer FE time because it learns the intrinsic information of each type of land cover by constructing a multi-DBN structure. However, this is acceptable owing to the competitive results of MMDBN. As displayed in Figure 12, Figure 13 and Figure 14, the proposed approach clearly generates more homogeneous regions in the classification maps than the comparison methods. These results show that MMDBN extracts the deep discriminant features of each class and maximizes the manifold margins among different classes.

4.5. Comparisons with Some Deep Learning Methods

To further compare the performance of MMDBN with deep learning models, ANN [43], CNN [53], SAE [52], M3DNet [54], and MDBN were introduced as compared algorithms. MDBN denotes the multi-DBN structure without the discrimination manifold layer. We set the parameters of each model empirically to achieve the best performance under each condition.
In the experiments, the training set contained $n_i$ ($n_i$ = 20, 40, 60, 80, 100) samples randomly selected from each class, and the test set consisted of the remaining samples. The average OA with STD for the different methods is given in Table 7.
From Table 7, the classification accuracies of the different deep learning models on the three datasets improve as the number of training samples increases, because a larger training set provides richer discriminant information for the extracted features. Different from traditional deep learning methods, MMDBN initializes the network by exploring the local geometric structure. Meanwhile, it designs a multi-DBN structure to learn the intrinsic information of different classes and then improves the separability of the features by constructing a discrimination manifold layer. As a result, the proposed approach achieves the best results in most cases. Compared with MDBN, the competitive performance of MMDBN shows the effectiveness of the discrimination manifold layer, which compacts samples from the same class and separates samples from different classes; this is conducive to extracting discriminant features for land-cover classification.
To investigate the running time of the different deep learning methods, we randomly selected 10%, 2%, and 10% of the samples per class from the three datasets, respectively, for training, and the rest of the data were used as the test set. Table 8 lists the Kappas, OAs, and running times of the different methods on the three datasets.
The running times of MDBN and MMDBN in the training phase are significantly shorter than those of the other deep learning models. In the test phase, however, the subsequent classification process of MMDBN needs to find the k-nearest neighbors of the extracted features, so it takes longer than some comparison methods.

5. Conclusions

The deep belief network (DBN) model is an unsupervised deep learning method, and it fails to discover the manifold structure in HSI. This paper proposed a novel FE method called MMDBN, which combines manifold learning and deep learning to address this issue. MMDBN extracts the deep abstract features contained in various land covers by constructing a multi-DBN structure. Then, under the GE framework, it designs a discrimination manifold layer for supervised learning, which separates interclass samples while compacting intraclass samples. As a result, the proposed approach effectively extracts discriminant features and significantly improves the classification accuracy for hyperspectral data. Experimental results on the Indian Pines, Salinas, and Botswana HSI datasets show that the proposed MMDBN has a better feature extraction ability than some state-of-the-art DR methods.
In the future, we are interested in exploring the spatial information of hyperspectral data to address the limitation that MMDBN only considers spectral information, and in designing spectral–spatial deep manifold networks to further improve the classification performance of the MMDBN model.

Author Contributions

Conceptualization, Z.L. and H.H.; methodology, Z.L.; software, Z.L.; validation, G.S., H.H., and Z.Z.; formal analysis, Z.L.; investigation, Z.Z.; resources, H.H.; data curation, H.H.; writing—original draft preparation, Z.L.; writing—review and editing, H.H.; visualization, G.S.; supervision, H.H.; project administration, H.H.; funding acquisition, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Science Foundation of China under Grant 42071302, the Basic and Frontier Research Programmes of Chongqing under Grant cstc2018jcyjAX0093, and the graduate research and innovation foundation of Chongqing under Grant CYS18035.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers and associate editor for their valuable comments and suggestions to improve the quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhong, Z.L.; Li, J.; Clausi, D.A.; Wong, A. Generative Adversarial Networks and Conditional Random Fields for Hyperspectral Image Classification. IEEE Trans. Cybern. 2020, 50, 3318–3329.
  2. Luo, F.L.; Zhang, L.P.; Du, B. Dimensionality Reduction with Enhanced Hybrid-graph Discriminant Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5336–5353.
  3. Zhong, Y.F.; Wang, X.Y.; Xu, Y.; Wang, S.Y.; Jia, T.Y.; Hu, X.; Zhao, J.; Wei, L.F.; Zhang, L.P. Mini-UAV-Borne Hyperspectral Remote Sensing: From Observation and Processing to Applications. IEEE Geosci. Remote Sens. Mag. 2018, 6, 46–62.
  4. Kuras, A.; Brell, M.; Rizzi, J.; Burud, I. Hyperspectral and Lidar Data Applied to the Urban Land Cover Machine Learning and Neural-Network-Based Classification: A Review. Remote Sens. 2021, 13, 3393.
  5. Vali, A.; Comai, S.; Matteucci, M. Deep Learning for Land Use and Land Cover Classification Based on Hyperspectral and Multispectral Earth Observation Data: A Review. Remote Sens. 2020, 12, 2495.
  6. Peng, J.T.; Li, L.Q.; Tang, Y.Y. Maximum Likelihood Estimation-Based Joint Sparse Representation for the Classification of Hyperspectral Remote Sensing Images. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1790–1802.
  7. Gao, H.M.; Yang, Y.; Li, C.M.; Gao, L.R.; Zhang, B. Multiscale Residual Network With Mixed Depthwise Convolution for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3396–3408.
  8. Zhang, X.R.; Gao, Z.Y.; Jiao, L.C.; Zhou, H.Y. Multifeature Hyperspectral Image Classification With Local and Nonlocal Spatial Information via Markov Random Field in Semantic Space. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1409–1424.
  9. Su, H.J.; Zhao, B.; Du, Q.; Du, P.J. Kernel Collaborative Representation With Local Correlation Features for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1230–1241.
  10. Hong, D.F.; Yokoya, N.; Chanussot, J.; Zhu, X.X. An Augmented Linear Mixing Model to Address Spectral Variability for Hyperspectral Unmixing. IEEE Trans. Image Process. 2019, 28, 1923–1938.
  11. Jiao, C.Z.; Chen, C.; McGarvey, R.G.; Bohlman, S.; Jiao, L.C. Multiple Instance Hybrid Estimator for Hyperspectral Target Characterization and Sub-pixel Target Detection. ISPRS J. Photogramm. Remote Sens. 2018, 146, 235–250.
  12. Luo, F.L.; Du, B.; Zhang, L.P.; Zhang, L.F.; Tao, D.C. Feature Learning Using Spatial-Spectral Hypergraph Discriminant Analysis for Hyperspectral Image. IEEE Trans. Cybern. 2019, 49, 2406–2419.
  13. Huang, H.; Shi, G.Y.; He, H.B.; Duan, Y.L.; Luo, F.L. Dimensionality Reduction of Hyperspectral Imagery Based on Spatial-spectral Manifold Learning. IEEE Trans. Cybern. 2020, 50, 2604–2616.
  14. Luo, F.L.; Zhang, L.P.; Zhou, X.C.; Guo, T.; Cheng, Y.X.; Yin, T.L. Sparse-Adaptive Hypergraph Discriminant Analysis for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1082–1086.
  15. Li, Z.Y.; Huang, H.; Zhang, Z. Deep Manifold Reconstruction Neural Network for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 5502105.
  16. Zhang, L.F.; Zhang, Q.; Du, B.; Huang, X.; Tang, Y.Y.; Tao, D.C. Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images. IEEE Trans. Cybern. 2018, 48, 16–28.
  17. Feng, J.; Jiao, L.C.; Liu, F.; Sun, T.; Zhang, X.R. Unsupervised feature selection based on maximum information and minimum redundancy for hyperspectral images. Pattern Recognit. 2016, 51, 295–309.
  18. Zhu, Z.; Jia, S.; He, S.; Sun, Y.W.; Ji, Z.; Shen, L.L. Three-dimensional Gabor feature extraction for hyperspectral imagery classification using a memetic framework. Inf. Sci. 2015, 298, 274–287.
  19. Su, H.J.; Zhao, B.; Du, Q.; Du, P.J.; Xue, Z.H. Multi-feature dictionary learning for collaborative representation classification of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2467–2484.
  20. Zabalza, J.; Ren, J.C.; Zheng, J.B.; Han, J.W.; Zhao, H.M.; Li, S.T.; Marshall, S. Novel Two-Dimensional Singular Spectrum Analysis for Effective Feature Extraction and Data Classification in Hyperspectral Imaging. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4418–4433.
  21. Hong, D.F.; Yokoya, N.; Ge, N.; Chanussot, J.; Zhu, X.X. Learnable Manifold Alignment (LeMA): A Semi-supervised Cross-modality Learning Framework for Land Cover and Land Use Classification. ISPRS J. Photogramm. Remote Sens. 2019, 147, 193–205.
  22. Zhang, M.Y.; Gong, M.G.; Mao, Y.S.; Li, J.; Wu, Y. Unsupervised Feature Extraction in Hyperspectral Images Based on Wasserstein Generative Adversarial Network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2669–2688.
  23. Wei, X.L.; Li, W.; Zhang, M.M.; Li, Q.L. Medical Hyperspectral Image Classification Based on End-to-End Fusion. IEEE Trans. Instrum. Meas. 2019, 68, 4481–4492.
  24. Seghouane, A.; Iqbal, A. The adaptive block sparse PCA and its application to multi-subject FMRI data analysis using sparse mCCA. Signal Process. 2018, 153, 311–320.
  25. Shi, G.Y.; Huang, H.; Liu, J.M.; Li, Z.Y.; Wang, L.H. Spatial-Spectral Multiple Manifold Discriminant Analysis for Dimensionality Reduction of Hyperspectral Imagery. Remote Sens. 2018, 11, 2414.
  26. Deng, Y.J.; Li, H.C.; Pan, L.; Shao, L.Y.; Du, Q.; Emery, W.J. Modified Tensor Locality Preserving Projection for Dimensionality Reduction of Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 277–281.
  27. Roweis, S.T.; Saul, L.K. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science 2000, 290, 2323–2326.
  28. Lu, G.F.; Jin, Z.; Zou, J. Face recognition using discriminant sparsity neighborhood preserving embedding. Knowl.-Based Syst. 2012, 31, 119–127.
  29. Li, W.; Zhang, L.P.; Zhang, L.F.; Du, B. GPU Parallel Implementation of Isometric Mapping for Hyperspectral Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1532–1539.
  30. Su, Z.Q.; Tang, B.P.; Liu, Z.R.; Qin, Y. Multi-fault diagnosis for rotating machinery based on orthogonal supervised linear local tangent space alignment and least square support vector machine. Neurocomputing 2015, 157, 208–222.
  31. Zhong, P.; Gong, Z.Q.; Li, S.T. Learning to Diversify Deep Belief Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3516–3530.
  32. Datta, A.; Ghosh, S.; Ghosh, A. Supervised Feature Extraction of Hyperspectral Images Using Partitioned Maximum Margin Criterion. IEEE Geosci. Remote Sens. Lett. 2017, 14, 82–86.
  33. Condessa, F.; Dias, J.B.; Kovacevic, J. Supervised Hyperspectral Image Classification With Rejection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2321–2332.
  34. Li, W.; Du, Q. Laplacian Regularized Collaborative Graph for Discriminant Analysis of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2016, 11, 7066–7076.
  35. Yu, H.Y.; Gao, L.R.; Li, W.; Du, Q.; Zhang, B. Locality Sensitive Discriminant Analysis for Group Sparse Representation-Based Hyperspectral Imagery Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1358–1362.
  36. Luo, F.L.; Huang, H.; Duan, Y.L.; Liu, J.M.; Liao, Y.H. Local geometric structure feature for dimensionality reduction of hyperspectral imagery. Remote Sens. 2017, 9, 790.
  37. Yan, S.C.; Xu, D.; Zhang, B.Y.; Zhang, H.J.; Yang, Q.; Lin, S. Graph embedding and extensions: A general framework for dimensionality reduction. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 40–51.
  38. Jiao, L.C.; Liang, M.M.; Chen, H.; Yang, S.Y.; Liu, H.Y.; Cao, X.H. Deep Fully Convolutional Network-Based Spatial Distribution Prediction for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5585–5599.
  39. Xu, Y.H.; Du, B.; Zhang, F.; Zhang, L.P. Hyperspectral image classification via a random patches network. ISPRS J. Photogramm. Remote Sens. 2018, 142, 344–357.
  40. Song, W.W.; Li, S.T.; Fang, L.Y.; Lu, T. Hyperspectral Image Classification With Deep Feature Fusion Network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3173–3184.
  41. Li, Z.Y.; Huang, H.; Zhang, Z.; Pan, Y.S. Manifold Learning-Based Semisupervised Neural Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5508712.
  42. Li, S.T.; Song, W.W.; Fang, L.Y.; Chen, Y.S.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
  43. Zhang, L.; Li, H.; Kong, X.G. Evolving feedforward artificial neural networks using a two-stage approach. Neurocomputing 2019, 360, 25–36.
  44. Li, W.; Wu, G.D.; Du, Q. Transferred Deep Learning for Anomaly Detection in Hyperspectral Imagery. IEEE Geosci. Remote Sens. Lett. 2017, 14, 597–601.
  45. Zhong, Z.L.; Li, J.; Luo, Z.M.; Chapman, M. Spectral–spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2018, 56, 847–858.
  46. Wang, L.Z.; Zhang, J.B.; Liu, P.; Choo, K.K.R.; Huang, F. Spectral–spatial multi-feature-based deep learning for hyperspectral remote sensing image classification. Soft Comput. 2017, 21, 213–221.
  47. Hong, D.F.; Gao, L.R.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5966–5978.
  48. Pan, B.; Shi, Z.W.; Xu, X. MugNet: Deep learning for hyperspectral image classification using limited samples. ISPRS J. Photogramm. Remote Sens. 2018, 145, 108–119.
  49. Mou, L.C.; Ghamisi, P.; Zhu, X.X. Deep Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655.
  50. Rasti, B.; Hong, D.F.; Hang, R.L.; Ghamisi, P.; Kang, X.D.; Benediktsson, J.A. Feature Extraction for Hyperspectral Imagery: The Evolution from Shallow to Deep: Overview and Toolbox. IEEE Geosci. Remote Sens. Mag. 2020, 8, 60–88.
  51. Zhang, M.M.; Li, W.; Du, Q.; Gao, L.R.; Zhang, B. Feature Extraction for Classification of Hyperspectral and LiDAR Data Using Patch-to-Patch CNN. IEEE Trans. Cybern. 2020, 50, 100–111.
  52. Chen, Y.S.; Lin, Z.H.; Zhao, X.; Wang, G.; Gu, Y.F. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107.
  53. Chen, Y.S.; Jiang, H.L.; Li, C.Y.; Jia, X.P.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
  54. Li, Z.Y.; Huang, H.; Li, Y.; Pan, Y.S. M3DNet: A manifold-based discriminant feature learning network for hyperspectral imagery. Expert Syst. Appl. 2020, 144, 113089.
  55. Chen, D.D.; Lv, J.C.; Yi, Z. Graph Regularized Restricted Boltzmann Machine. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 2651–2659.
  56. Zhou, S.S.; Chen, Q.C.; Wang, X.L. Fuzzy deep belief networks for semi-supervised sentiment classification. Neurocomputing 2013, 131, 312–322.
  57. Bengio, Y.; Courville, A.; Vincent, P. Representation Learning: A Review and New Perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
  58. Salakhutdinov, R.; Hinton, G. Using deep belief nets to learn covariance kernels for Gaussian processes. Proc. Adv. Neural Inf. Process. Syst. 2008, 20, 1249–1256.
  59. Hyperspectral Data Set. Available online: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes (accessed on 26 January 2022).
Figure 1. The network structure of RBM.
Figure 2. The structure of DBN.
Figure 3. Process of the proposed multi-DBN-manifold structure.
Figure 4. Schematic diagram of the discrimination manifold layer.
Figure 5. Indian Pines hyperspectral image. (a) False-color image. (b) Ground-truth map.
Figure 6. Salinas hyperspectral image. (a) False-color image. (b) Ground-truth map.
Figure 7. Botswana hyperspectral image. (a) False-color image. (b) Ground-truth map.
Figure 8. Classification results of MMDBN with different dimensions on the (a) Indian Pines, (b) Salinas, and (c) Botswana datasets.
Figure 9. Classification results of MMDBN with different values of $k_w$ and $k_b$ on (a) Indian Pines, (b) Salinas, and (c) Botswana datasets.
Figure 10. Classification accuracies with numbers of layers on MMDBN.
Figure 11. Classification accuracies with numbers of nodes on MMDBN.
Figure 12. Classification results of different approaches for the Indian Pines dataset. (a) Ground truth; (b) Training samples; (c) Baseline; (d) PCA; (e) LDA; (f) NPE; (g) LPP; (h) MFA; (i) LGSFA; (j) MMDBN.
Figure 13. Classification results of different approaches for the Salinas dataset. (a) Ground truth; (b) Training samples; (c) Baseline; (d) PCA; (e) LDA; (f) NPE; (g) LPP; (h) MFA; (i) LGSFA; (j) MMDBN.
Figure 14. Classification results of different approaches for the Botswana dataset. (a) Ground truth; (b) Training samples; (c) Baseline; (d) PCA; (e) LDA; (f) NPE; (g) LPP; (h) MFA; (i) LGSFA; (j) MMDBN.
Table 1. The classification results of different DR algorithms for the Indian Pines dataset (overall accuracy ± STD (%)).

| Algorithm | $n_i$ = 10 | $n_i$ = 20 | $n_i$ = 30 | $n_i$ = 40 | $n_i$ = 50 | $n_i$ = 60 | $n_i$ = 80 | $n_i$ = 100 |
|---|---|---|---|---|---|---|---|---|
| Baseline | 49.98 ± 1.91 | 55.49 ± 1.78 | 57.50 ± 2.12 | 58.99 ± 0.26 | 61.07 ± 0.50 | 61.51 ± 1.71 | 62.62 ± 0.91 | 63.74 ± 0.87 |
| PCA | 50.62 ± 2.41 | 55.04 ± 1.93 | 57.26 ± 0.82 | 58.70 ± 0.95 | 60.26 ± 0.20 | 61.89 ± 1.38 | 62.09 ± 1.01 | 63.45 ± 0.88 |
| LDA | 50.15 ± 2.22 | 51.18 ± 0.79 | 58.01 ± 1.16 | 62.24 ± 1.13 | 65.23 ± 1.03 | 66.44 ± 1.16 | 69.30 ± 0.94 | 71.01 ± 1.18 |
| NPE | 49.10 ± 2.02 | 53.09 ± 2.62 | 53.58 ± 0.90 | 56.99 ± 1.03 | 58.48 ± 0.43 | 59.98 ± 1.48 | 60.42 ± 0.65 | 61.72 ± 0.92 |
| LPP | 26.98 ± 6.92 | 45.73 ± 1.77 | 55.57 ± 1.07 | 57.18 ± 0.85 | 58.99 ± 0.43 | 60.68 ± 1.27 | 62.18 ± 0.67 | 64.04 ± 0.80 |
| MFA | 48.48 ± 1.77 | 54.11 ± 2.72 | 57.25 ± 1.24 | 58.50 ± 0.69 | 60.20 ± 0.85 | 62.49 ± 1.22 | 62.76 ± 1.22 | 65.67 ± 0.96 |
| LGSFA | 45.49 ± 3.47 | 56.71 ± 1.22 | 62.61 ± 1.33 | 67.18 ± 0.58 | 70.11 ± 1.61 | 71.84 ± 0.64 | 74.23 ± 0.98 | 75.63 ± 0.76 |
| MMDBN | 55.95 ± 3.65 | 64.06 ± 2.37 | 67.80 ± 2.21 | 71.21 ± 1.12 | 73.47 ± 0.92 | 75.30 ± 1.14 | 77.35 ± 0.69 | 78.25 ± 1.39 |
Table 2. The classification results of different DR algorithms for the Salinas dataset (overall accuracy ± STD (%)).

| Algorithm | $n_i$ = 10 | $n_i$ = 20 | $n_i$ = 30 | $n_i$ = 40 | $n_i$ = 50 | $n_i$ = 60 | $n_i$ = 80 | $n_i$ = 100 |
|---|---|---|---|---|---|---|---|---|
| Baseline | 79.56 ± 2.45 | 81.91 ± 0.46 | 83.18 ± 1.07 | 83.42 ± 0.62 | 84.73 ± 0.90 | 84.81 ± 0.49 | 84.91 ± 1.01 | 85.20 ± 0.43 |
| PCA | 79.19 ± 2.39 | 82.89 ± 0.60 | 83.44 ± 0.85 | 83.82 ± 0.71 | 84.07 ± 0.77 | 84.37 ± 0.83 | 84.70 ± 0.78 | 84.93 ± 0.30 |
| LDA | 83.03 ± 1.72 | 84.06 ± 1.23 | 85.48 ± 0.94 | 87.62 ± 0.70 | 88.02 ± 1.02 | 88.85 ± 0.50 | 88.92 ± 0.89 | 89.09 ± 0.65 |
| NPE | 58.35 ± 3.17 | 81.28 ± 1.54 | 85.14 ± 1.06 | 85.43 ± 0.79 | 85.89 ± 0.98 | 86.15 ± 1.03 | 86.57 ± 1.11 | 86.68 ± 0.48 |
| LPP | 77.15 ± 2.60 | 81.83 ± 0.75 | 82.32 ± 0.66 | 83.05 ± 0.66 | 83.52 ± 0.91 | 83.91 ± 0.77 | 84.17 ± 0.81 | 84.50 ± 0.27 |
| MFA | 84.34 ± 1.93 | 86.57 ± 1.03 | 86.66 ± 0.37 | 86.91 ± 0.71 | 87.31 ± 0.81 | 87.58 ± 0.60 | 87.92 ± 1.44 | 88.66 ± 0.60 |
| LGSFA | 86.64 ± 1.01 | 87.20 ± 0.86 | 87.41 ± 0.49 | 87.89 ± 0.91 | 88.24 ± 0.88 | 88.36 ± 0.73 | 88.51 ± 0.87 | 89.62 ± 0.45 |
| MMDBN | 86.82 ± 2.10 | 88.44 ± 0.84 | 89.02 ± 0.81 | 89.31 ± 0.94 | 89.61 ± 0.77 | 89.95 ± 0.57 | 90.11 ± 0.52 | 90.48 ± 0.38 |
Table 3. The classification results of different DR algorithms for the Botswana dataset (overall accuracy ± STD (%)).

| Algorithm | $n_i$ = 10 | $n_i$ = 20 | $n_i$ = 30 | $n_i$ = 40 | $n_i$ = 50 | $n_i$ = 60 | $n_i$ = 80 | $n_i$ = 100 |
|---|---|---|---|---|---|---|---|---|
| Baseline | 82.91 ± 0.98 | 86.71 ± 0.71 | 87.87 ± 0.64 | 88.83 ± 0.35 | 90.03 ± 0.64 | 90.36 ± 0.44 | 91.21 ± 0.72 | 91.49 ± 0.48 |
| PCA | 81.14 ± 1.50 | 84.86 ± 0.66 | 87.26 ± 0.86 | 87.79 ± 0.89 | 88.00 ± 0.78 | 88.78 ± 0.40 | 89.12 ± 0.33 | 90.01 ± 0.55 |
| LDA | 24.67 ± 6.19 | 83.19 ± 1.33 | 90.38 ± 0.56 | 91.75 ± 0.78 | 92.64 ± 0.32 | 93.28 ± 0.40 | 93.76 ± 0.67 | 94.40 ± 0.58 |
| NPE | 78.17 ± 1.41 | 83.62 ± 1.03 | 86.12 ± 0.62 | 86.95 ± 1.06 | 87.51 ± 0.58 | 88.40 ± 0.39 | 88.74 ± 0.18 | 89.72 ± 0.65 |
| LPP | 21.40 ± 0.50 | 79.21 ± 2.31 | 85.41 ± 0.65 | 87.47 ± 0.15 | 88.34 ± 1.05 | 88.79 ± 0.82 | 89.12 ± 0.61 | 90.10 ± 0.88 |
| MFA | 88.05 ± 1.44 | 90.17 ± 0.30 | 91.65 ± 0.29 | 92.41 ± 0.85 | 92.74 ± 0.93 | 93.61 ± 0.77 | 94.32 ± 0.41 | 95.25 ± 0.31 |
| LGSFA | 78.75 ± 3.49 | 88.16 ± 0.77 | 91.36 ± 0.71 | 93.30 ± 0.68 | 93.58 ± 0.43 | 94.13 ± 0.63 | 95.47 ± 0.52 | 96.00 ± 0.43 |
| MMDBN | 87.12 ± 1.69 | 91.55 ± 1.04 | 93.16 ± 0.51 | 94.67 ± 0.39 | 95.70 ± 0.30 | 95.91 ± 0.40 | 96.35 ± 0.52 | 97.35 ± 0.28 |
Table 4. Classification results of each class samples via different DR methods for Indian Pines dataset (%).
Table 4. Classification results of each class samples via different DR methods for Indian Pines dataset (%).
Class | Train | Test | Baseline | PCA | LDA | NPE | LPP | MFA | LGSFA | MMDBN
1 | 10 | 36 | 61.11 ± 16.97 | 61.11 ± 16.97 | 48.33 ± 15.23 | 60.00 ± 20.00 | 49.44 ± 16.63 | 43.06 ± 14.06 | 53.61 ± 7.17 | 68.61 ± 8.27
2 | 143 | 1285 | 53.32 ± 2.23 | 53.18 ± 2.28 | 66.80 ± 1.57 | 50.87 ± 1.98 | 55.69 ± 2.50 | 55.27 ± 1.76 | 63.53 ± 2.46 | 72.94 ± 1.75
3 | 83 | 747 | 50.15 ± 2.41 | 49.95 ± 2.38 | 55.91 ± 1.91 | 46.78 ± 2.86 | 52.65 ± 2.70 | 49.91 ± 2.33 | 54.92 ± 2.27 | 68.08 ± 3.34
4 | 24 | 213 | 33.36 ± 5.54 | 33.46 ± 5.52 | 39.68 ± 7.74 | 32.63 ± 5.06 | 36.64 ± 4.15 | 31.89 ± 4.96 | 31.98 ± 5.22 | 49.17 ± 4.33
5 | 49 | 434 | 82.63 ± 2.39 | 82.54 ± 2.31 | 86.88 ± 0.81 | 78.94 ± 3.27 | 82.45 ± 2.29 | 81.18 ± 1.65 | 85.68 ± 1.71 | 90.14 ± 1.74
6 | 73 | 657 | 89.55 ± 2.30 | 89.46 ± 2.33 | 91.56 ± 2.78 | 86.44 ± 2.72 | 88.41 ± 2.44 | 89.44 ± 3.05 | 90.71 ± 2.88 | 94.33 ± 2.95
7 | 10 | 18 | 89.44 ± 4.86 | 89.44 ± 4.84 | 91.67 ± 7.52 | 90.00 ± 4.38 | 88.89 ± 5.86 | 88.89 ± 6.42 | 89.44 ± 7.61 | 91.11 ± 4.69
8 | 48 | 430 | 91.96 ± 2.00 | 91.89 ± 1.94 | 94.39 ± 2.82 | 91.45 ± 2.31 | 88.32 ± 1.36 | 90.70 ± 2.01 | 91.64 ± 2.22 | 97.39 ± 1.18
9 | 10 | 10 | 71.00 ± 18.53 | 70.00 ± 19.44 | 77.00 ± 12.52 | 61.00 ± 13.70 | 72.00 ± 14.76 | 72.00 ± 12.29 | 77.00 ± 14.94 | 82.00 ± 11.98
10 | 98 | 874 | 64.97 ± 4.34 | 64.80 ± 4.31 | 71.53 ± 3.15 | 62.67 ± 4.15 | 66.09 ± 3.23 | 64.04 ± 3.09 | 68.73 ± 3.78 | 74.75 ± 3.21
11 | 246 | 2209 | 69.82 ± 1.52 | 69.79 ± 1.47 | 75.13 ± 1.10 | 67.20 ± 1.79 | 71.47 ± 1.96 | 71.66 ± 1.50 | 74.17 ± 1.96 | 82.13 ± 1.41
12 | 60 | 533 | 41.69 ± 2.29 | 41.50 ± 2.32 | 60.73 ± 2.60 | 39.10 ± 2.98 | 43.11 ± 2.92 | 44.92 ± 2.15 | 54.22 ± 2.47 | 74.95 ± 4.59
13 | 21 | 184 | 87.14 ± 6.00 | 87.19 ± 5.99 | 96.27 ± 2.75 | 81.35 ± 8.06 | 85.20 ± 7.49 | 88.60 ± 7.11 | 92.06 ± 6.45 | 97.51 ± 2.28
14 | 127 | 1138 | 89.75 ± 1.89 | 89.70 ± 1.87 | 92.88 ± 2.25 | 89.52 ± 2.15 | 89.81 ± 2.02 | 89.18 ± 2.42 | 91.97 ± 2.26 | 94.02 ± 1.72
15 | 39 | 347 | 35.98 ± 2.93 | 35.81 ± 2.78 | 52.46 ± 3.44 | 30.38 ± 3.02 | 34.16 ± 3.99 | 37.63 ± 5.32 | 43.44 ± 3.74 | 55.58 ± 3.48
16 | 10 | 83 | 88.19 ± 3.25 | 88.18 ± 3.25 | 85.30 ± 3.00 | 87.71 ± 3.40 | 85.30 ± 3.05 | 83.74 ± 3.37 | 84.70 ± 2.73 | 88.43 ± 4.79
OA | - | - | 67.71 ± 0.48 | 67.61 ± 0.48 | 74.80 ± 0.87 | 65.28 ± 0.63 | 68.47 ± 0.91 | 68.25 ± 0.63 | 72.43 ± 0.93 | 81.56 ± 0.87
AA | - | - | 68.75 ± 1.76 | 68.63 ± 1.87 | 74.16 ± 1.96 | 66.00 ± 1.93 | 68.10 ± 1.96 | 67.63 ± 1.91 | 71.74 ± 1.65 | 80.07 ± 1.66
Kappa | - | - | 0.63 ± 0.01 | 0.63 ± 0.01 | 0.71 ± 0.01 | 0.60 ± 0.01 | 0.64 ± 0.01 | 0.64 ± 0.01 | 0.69 ± 0.01 | 0.78 ± 0.01
FE time (s) | - | - | - | 0.03 | 0.03 | 0.31 | 0.11 | 0.13 | 13.08 | 26.08
Classification time (s) | - | - | 0.41 | 0.36 | 0.32 | 0.33 | 0.32 | 0.31 | 0.62 | 0.38
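The OA, AA, and Kappa rows in Tables 4-6 are the standard confusion-matrix summaries: OA is the fraction of correctly classified test pixels, AA is the mean of the per-class accuracies, and Kappa is the chance-corrected agreement. A minimal sketch assuming these usual definitions (the helper name `summarize` is ours, not from the paper):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def summarize(y_true, y_pred):
    """OA, AA (mean per-class accuracy), and Cohen's kappa from predictions."""
    cm = confusion_matrix(y_true, y_pred).astype(float)
    n = cm.sum()
    oa = np.trace(cm) / n                        # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))   # average of per-class recalls
    pe = cm.sum(axis=0) @ cm.sum(axis=1) / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return 100 * oa, 100 * aa, kappa
```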
Table 5. Classification results of each class via different DR methods (KNN classifier) for the Salinas dataset (%).
Class | Train | Test | Baseline | PCA | LDA | NPE | LPP | MFA | LGSFA | MMDBN
1 | 41 | 1968 | 97.27 ± 0.58 | 97.27 ± 0.58 | 99.58 ± 0.55 | 96.70 ± 0.69 | 98.81 ± 0.42 | 98.45 ± 0.63 | 99.67 ± 0.26 | 99.22 ± 0.35
2 | 75 | 3651 | 99.37 ± 0.30 | 99.37 ± 0.30 | 99.92 ± 0.04 | 99.25 ± 0.32 | 99.60 ± 0.21 | 99.67 ± 0.31 | 99.83 ± 0.18 | 99.90 ± 0.10
3 | 40 | 1936 | 95.77 ± 2.40 | 95.76 ± 2.40 | 98.14 ± 1.17 | 94.84 ± 2.45 | 98.19 ± 1.12 | 96.25 ± 1.90 | 97.83 ± 1.16 | 98.81 ± 1.17
4 | 28 | 1366 | 99.23 ± 0.86 | 99.23 ± 0.86 | 99.43 ± 0.31 | 99.11 ± 0.95 | 99.44 ± 0.51 | 99.63 ± 0.22 | 99.55 ± 0.21 | 99.48 ± 0.38
5 | 54 | 2624 | 99.22 ± 0.86 | 96.21 ± 0.84 | 98.19 ± 0.46 | 95.35 ± 0.70 | 97.10 ± 0.85 | 96.50 ± 1.69 | 98.10 ± 0.56 | 98.35 ± 0.59
6 | 80 | 3879 | 99.59 ± 0.19 | 99.59 ± 0.19 | 99.87 ± 0.12 | 99.46 ± 0.22 | 99.69 ± 0.14 | 99.78 ± 0.09 | 99.88 ± 0.08 | 99.89 ± 0.08
7 | 72 | 3507 | 99.12 ± 0.20 | 99.12 ± 0.20 | 99.75 ± 0.15 | 98.98 ± 0.24 | 99.25 ± 0.22 | 99.57 ± 0.20 | 99.82 ± 0.08 | 99.63 ± 0.19
8 | 226 | 11,045 | 71.64 ± 0.82 | 71.62 ± 0.79 | 76.53 ± 1.96 | 69.91 ± 1.35 | 73.86 ± 1.46 | 75.81 ± 2.59 | 80.05 ± 2.23 | 83.53 ± 1.55
9 | 125 | 6078 | 98.13 ± 0.45 | 98.14 ± 0.45 | 99.81 ± 0.22 | 97.64 ± 0.60 | 99.45 ± 0.20 | 99.60 ± 0.21 | 99.88 ± 0.12 | 99.80 ± 0.16
10 | 66 | 3212 | 86.17 ± 3.36 | 86.18 ± 3.36 | 95.85 ± 0.67 | 84.47 ± 3.38 | 91.92 ± 2.23 | 91.96 ± 2.48 | 94.99 ± 0.72 | 95.75 ± 0.94
11 | 22 | 1046 | 89.00 ± 4.50 | 88.97 ± 4.49 | 95.25 ± 2.76 | 88.18 ± 4.18 | 94.69 ± 2.45 | 92.33 ± 3.49 | 96.04 ± 2.76 | 95.98 ± 2.63
12 | 39 | 1888 | 99.06 ± 1.12 | 99.06 ± 1.12 | 99.56 ± 0.41 | 98.81 ± 1.19 | 99.62 ± 0.50 | 99.69 ± 0.22 | 99.95 ± 0.10 | 99.99 ± 0.02
13 | 19 | 897 | 97.77 ± 0.86 | 97.77 ± 0.86 | 98.76 ± 0.92 | 97.51 ± 0.86 | 98.20 ± 0.74 | 98.37 ± 0.63 | 98.97 ± 0.60 | 98.61 ± 0.77
14 | 22 | 1048 | 90.57 ± 2.13 | 90.57 ± 2.13 | 94.40 ± 1.82 | 90.01 ± 2.20 | 92.44 ± 2.33 | 93.09 ± 2.34 | 94.28 ± 1.75 | 94.71 ± 1.96
15 | 146 | 7122 | 59.91 ± 1.59 | 59.87 ± 1.55 | 67.17 ± 2.02 | 57.73 ± 1.70 | 65.08 ± 1.84 | 66.56 ± 2.81 | 67.82 ± 2.69 | 63.60 ± 2.55
16 | 10 | 83 | 93.97 ± 3.00 | 93.96 ± 3.00 | 98.60 ± 0.60 | 91.72 ± 3.21 | 96.96 ± 1.95 | 98.06 ± 0.84 | 98.64 ± 0.47 | 98.89 ± 0.35
OA | - | - | 83.39 ± 0.29 | 86.38 ± 0.28 | 89.93 ± 0.30 | 85.34 ± 0.41 | 88.55 ± 0.36 | 89.11 ± 0.60 | 90.73 ± 0.31 | 91.96 ± 0.32
AA | - | - | 92.05 ± 0.46 | 92.04 ± 0.46 | 95.05 ± 0.19 | 91.23 ± 0.50 | 94.02 ± 0.34 | 94.08 ± 0.38 | 95.32 ± 0.14 | 95.40 ± 0.16
Kappa | - | - | 0.85 ± 0.00 | 0.85 ± 0.00 | 0.89 ± 0.00 | 0.84 ± 0.00 | 0.87 ± 0.00 | 0.88 ± 0.01 | 0.90 ± 0.00 | 0.90 ± 0.00
FE time (s) | - | - | - | 0.05 | 0.03 | 0.26 | 0.13 | 0.15 | 3.91 | 19.49
Classification time (s) | - | - | 2.51 | 2.38 | 2.31 | 2.36 | 2.34 | 2.29 | 3.38 | 5.39
Table 6. Classification results of each class via different DR methods (KNN classifier) for the Botswana dataset (%).
Class | Train | Test | Baseline | PCA | LDA | NPE | LPP | MFA | LGSFA | MMDBN
1 | 27 | 243 | 99.91 ± 0.18 | 99.21 ± 0.18 | 100.00 ± 0.00 | 99.88 ± 0.28 | 99.96 ± 0.13 | 99.83 ± 0.53 | 99.88 ± 0.28 | 99.92 ± 0.26
2 | 11 | 90 | 86.70 ± 8.03 | 86.81 ± 7.98 | 96.15 ± 2.50 | 81.99 ± 9.75 | 82.86 ± 6.78 | 93.96 ± 5.75 | 95.60 ± 4.63 | 95.28 ± 4.86
3 | 26 | 225 | 94.97 ± 2.31 | 94.98 ± 2.31 | 97.74 ± 1.52 | 91.67 ± 3.84 | 96.24 ± 3.35 | 98.46 ± 0.68 | 98.41 ± 1.61 | 98.37 ± 1.40
4 | 22 | 193 | 93.28 ± 2.72 | 93.28 ± 2.72 | 96.94 ± 2.60 | 91.23 ± 2.34 | 96.41 ± 1.73 | 97.85 ± 1.25 | 97.95 ± 1.74 | 98.05 ± 1.49
5 | 27 | 242 | 75.94 ± 4.08 | 75.86 ± 4.10 | 85.52 ± 5.47 | 74.44 ± 3.92 | 75.36 ± 5.62 | 79.83 ± 5.17 | 85.02 ± 5.67 | 88.66 ± 4.76
6 | 27 | 242 | 59.54 ± 4.60 | 59.62 ± 4.55 | 75.73 ± 5.08 | 55.15 ± 3.07 | 59.33 ± 5.90 | 72.76 ± 5.54 | 75.02 ± 4.59 | 79.50 ± 4.68
7 | 26 | 233 | 97.03 ± 0.94 | 96.99 ± 1.02 | 98.87 ± 0.69 | 96.11 ± 1.34 | 97.90 ± 0.74 | 97.77 ± 1.26 | 98.60 ± 0.54 | 99.04 ± 0.54
8 | 21 | 182 | 94.10 ± 3.78 | 94.21 ± 3.72 | 94.10 ± 3.02 | 92.46 ± 3.65 | 91.42 ± 4.17 | 98.09 ± 1.79 | 93.66 ± 3.99 | 96.94 ± 2.22
9 | 32 | 282 | 77.47 ± 3.36 | 77.57 ± 3.41 | 83.45 ± 5.10 | 73.98 ± 3.65 | 74.12 ± 5.19 | 84.97 ± 2.88 | 86.66 ± 6.09 | 88.45 ± 3.38
10 | 25 | 223 | 84.04 ± 3.98 | 83.99 ± 3.88 | 90.18 ± 4.36 | 81.51 ± 4.82 | 83.62 ± 4.85 | 88.58 ± 5.01 | 89.77 ± 3.45 | 91.46 ± 3.59
11 | 31 | 274 | 89.49 ± 3.99 | 89.42 ± 3.94 | 90.66 ± 3.07 | 88.62 ± 4.13 | 87.16 ± 3.94 | 90.15 ± 4.61 | 91.20 ± 4.03 | 92.95 ± 3.77
12 | 19 | 162 | 92.24 ± 2.87 | 92.30 ± 2.89 | 88.51 ± 3.75 | 91.80 ± 2.11 | 91.93 ± 2.90 | 91.62 ± 3.06 | 91.93 ± 4.33 | 93.98 ± 2.17
13 | 27 | 241 | 86.01 ± 4.76 | 86.01 ± 4.75 | 88.91 ± 4.02 | 83.40 ± 5.77 | 86.72 ± 4.38 | 90.71 ± 3.19 | 89.50 ± 2.93 | 91.77 ± 2.77
14 | 10 | 85 | 98.00 ± 1.84 | 98.00 ± 1.84 | 93.18 ± 5.68 | 98.12 ± 1.86 | 96.94 ± 1.14 | 97.06 ± 2.17 | 94.71 ± 4.38 | 95.77 ± 3.10
OA | - | - | 86.77 ± 1.84 | 86.78 ± 0.70 | 90.85 ± 0.58 | 84.71 ± 0.66 | 86.22 ± 0.57 | 90.75 ± 0.85 | 91.41 ± 0.50 | 94.01 ± 0.72
AA | - | - | 87.77 ± 0.90 | 87.78 ± 0.89 | 91.43 ± 0.83 | 85.74 ± 0.81 | 87.14 ± 0.70 | 91.55 ± 0.83 | 91.99 ± 0.70 | 93.58 ± 0.78
Kappa | - | - | 0.86 ± 0.01 | 0.86 ± 0.01 | 0.91 ± 0.01 | 0.83 ± 0.01 | 0.85 ± 0.01 | 0.90 ± 0.01 | 0.91 ± 0.01 | 0.92 ± 0.01
FE time (s) | - | - | - | 0.01 | 0.03 | 0.06 | 0.05 | 0.35 | 3.13 | 3.70
Classification time (s) | - | - | 0.06 | 0.07 | 0.03 | 0.04 | 0.04 | 0.03 | 0.09 | 0.04
Table 7. The classification results for different deep learning methods (overall accuracy ± Std (%)).
Dataset | Algorithm | n_i = 20 | n_i = 40 | n_i = 60 | n_i = 80 | n_i = 100
Indian Pines | ANN | 49.06 ± 4.38 | 56.79 ± 4.11 | 60.86 ± 5.82 | 81.78 ± 5.63 | 66.21 ± 4.07
Indian Pines | SAE | 56.06 ± 2.37 | 65.22 ± 1.27 | 68.99 ± 1.85 | 71.24 ± 1.74 | 73.52 ± 1.01
Indian Pines | CNN | 60.47 ± 2.11 | 68.19 ± 3.31 | 74.26 ± 2.82 | 76.76 ± 3.13 | 78.11 ± 2.52
Indian Pines | M³DNet | 53.01 ± 2.59 | 60.89 ± 3.05 | 64.70 ± 4.15 | 65.39 ± 4.14 | 68.94 ± 2.74
Indian Pines | MDBN | 56.05 ± 1.66 | 60.04 ± 1.14 | 63.96 ± 0.94 | 64.58 ± 0.87 | 65.02 ± 0.85
Indian Pines | MMDBN | 64.06 ± 2.37 | 71.21 ± 1.12 | 75.30 ± 1.14 | 77.35 ± 0.69 | 78.25 ± 1.39
Salinas | ANN | 83.12 ± 1.60 | 85.77 ± 1.16 | 87.03 ± 1.25 | 88.22 ± 1.04 | 88.25 ± 1.11
Salinas | SAE | 83.95 ± 0.98 | 86.42 ± 1.05 | 87.82 ± 0.47 | 88.77 ± 0.79 | 88.74 ± 0.80
Salinas | CNN | 78.55 ± 3.05 | 85.16 ± 2.32 | 86.03 ± 2.27 | 84.88 ± 4.71 | 89.02 ± 3.00
Salinas | M³DNet | 83.96 ± 1.40 | 86.39 ± 1.16 | 87.50 ± 1.11 | 88.13 ± 1.75 | 88.53 ± 0.89
Salinas | MDBN | 82.80 ± 1.44 | 83.23 ± 1.23 | 86.15 ± 0.55 | 86.41 ± 0.52 | 85.84 ± 0.67
Salinas | MMDBN | 88.38 ± 1.66 | 89.31 ± 0.94 | 89.95 ± 0.57 | 90.11 ± 0.52 | 90.48 ± 0.38
Botswana | ANN | 86.95 ± 1.62 | 88.53 ± 1.01 | 89.54 ± 1.28 | 91.00 ± 1.73 | 91.53 ± 1.04
Botswana | SAE | 88.51 ± 1.61 | 91.53 ± 0.51 | 92.92 ± 0.90 | 93.64 ± 0.74 | 94.38 ± 0.54
Botswana | CNN | 87.42 ± 4.82 | 92.66 ± 3.65 | 94.04 ± 3.50 | 96.45 ± 1.93 | 97.14 ± 2.17
Botswana | M³DNet | 88.60 ± 1.66 | 90.11 ± 1.04 | 91.46 ± 1.00 | 92.69 ± 1.15 | 92.85 ± 0.73
Botswana | MDBN | 89.21 ± 0.79 | 91.58 ± 0.59 | 93.39 ± 0.65 | 94.02 ± 0.55 | 94.33 ± 0.67
Botswana | MMDBN | 91.55 ± 1.04 | 94.67 ± 0.39 | 95.91 ± 0.40 | 96.35 ± 0.52 | 97.35 ± 0.28
Table 8. Classification accuracy (%) and running time (seconds) of different deep learning models.
Dataset | Algorithm | OA (%) | Kappa (%) | Train time (s) | Test time (s)
Indian Pines | ANN | 66.11 | 60.65 | 301.25 | 0.06
Indian Pines | SAE | 73.93 | 69.66 | 288.26 | 0.04
Indian Pines | CNN | 80.41 | 77.70 | 568.03 | 5.51
Indian Pines | M³DNet | 68.82 | 63.93 | 635.21 | 0.04
Indian Pines | MDBN | 69.06 | 64.67 | 13.08 | 0.62
Indian Pines | MMDBN | 81.50 | 78.61 | 26.08 | 0.38
Salinas | ANN | 87.84 | 86.45 | 222.69 | 0.18
Salinas | SAE | 89.62 | 88.44 | 197.23 | 0.14
Salinas | CNN | 90.98 | 89.96 | 328.66 | 43.05
Salinas | M³DNet | 88.03 | 88.03 | 429.70 | 0.18
Salinas | MDBN | 87.45 | 86.03 | 3.91 | 3.38
Salinas | MMDBN | 91.79 | 90.85 | 19.49 | 5.39
Botswana | ANN | 86.52 | 85.38 | 36.11 | 0.03
Botswana | SAE | 90.75 | 89.98 | 28.03 | 0.01
Botswana | CNN | 92.73 | 92.11 | 125.05 | 2.03
Botswana | M³DNet | 88.90 | 87.97 | 63.99 | 0.01
Botswana | MDBN | 90.34 | 89.53 | 3.13 | 0.09
Botswana | MMDBN | 94.06 | 93.56 | 3.70 | 0.04
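The train and test times in Table 8 are wall-clock measurements. A minimal harness sketch, assuming each compared model exposes scikit-learn-style fit/predict methods (an illustrative assumption, not the authors' actual setup):

```python
import time

def timed_fit_predict(model, X_train, y_train, X_test):
    """Return predictions plus wall-clock training and testing time in seconds."""
    t0 = time.perf_counter()
    model.fit(X_train, y_train)
    train_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    y_pred = model.predict(X_test)
    test_s = time.perf_counter() - t0
    return y_pred, train_s, test_s
```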