Article

HybridGBN-SR: A Deep 3D/2D Genome Graph-Based Network for Hyperspectral Image Classification

Haron C. Tinega, Enqing Chen, Long Ma, Divinah O. Nyasaka and Richard M. Mariita
1 School of Information Engineering, Zhengzhou University, Zhengzhou 450001, China
2 The Kenya Forest Service, Nairobi P.O. Box 30513-00100, Kenya
3 Microbial BioSolutions, Troy, NY 12180, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(6), 1332; https://doi.org/10.3390/rs14061332
Submission received: 22 February 2022 / Revised: 6 March 2022 / Accepted: 7 March 2022 / Published: 9 March 2022
(This article belongs to the Special Issue Machine Learning for Remote Sensing Image/Signal Processing)

Abstract

The successful application of deep learning approaches in remote sensing image classification requires large hyperspectral image (HSI) datasets to learn discriminative spectral–spatial features simultaneously. To date, the HSI datasets available for image classification are relatively small for training deep learning methods. This study proposes a deep 3D/2D genome graph-based network (abbreviated as HybridGBN-SR) that is computationally efficient and not prone to overfitting even with extremely few training samples. At the feature extraction level, the HybridGBN-SR utilizes three-dimensional (3D) and two-dimensional (2D) Genoblocks that can be trained with very few samples while improving HSI classification accuracy. The design of a Genoblock is based on a biological genome graph. The experimental results show that our model achieves better classification accuracy than the compared state-of-the-art methods over three publicly available HSI benchmark datasets: Indian Pines (IP), University of Pavia (UP), and Salinas Scene (SA). For instance, using only 5% of the labeled data for training in IP, and 1% in UP and SA, the overall classification accuracy of the proposed HybridGBN-SR is 97.42%, 97.85%, and 99.34%, respectively, which is better than the compared state-of-the-art methods.


1. Introduction

Remote sensing works by moving a vision system (satellite or aircraft) across the Earth’s surface at various spatial resolutions and in different spectral bands of the electromagnetic spectrum to capture hyperspectral images (HSI) [1]. The vision system uses both imaging and spectroscopic methods to spatially locate specific components within the image scene under investigation based on their spectral features. The collected HSI data form a three-dimensional data structure, with the x and y axes capturing the spatial dimensions of the image and the z-axis representing the spectral bands. Consequently, each pixel in the x–y spatial domain carries a label representing the physical land cover at the target location [2].
For feature extraction and classification purposes, the voluminous spectral–spatial cues present in an HSI image enable a detailed representation of the analyzed samples. However, they contain high spectral redundancy arising from significant interclass similarity and intraclass variability due to changes in atmospheric, illumination, temporal, and environmental conditions, leading to data handling, storage, and analysis challenges [2]. For instance, an HSI system with a spatial resolution of 145 × 145 pixels will produce an image with 21,025 pixels for one spectral band. If the data contain 200 spectral bands, then a single image produces over 4 million (145 × 145 × 200) data points. To overcome the challenges of spectral redundancy, most HSI classification methods first employ dimensionality reduction to address the curse of dimensionality introduced by the spectral bands before extracting discriminative features from the resulting HSI data cube [3,4]. The dimensionality reduction methods employed include, but are not limited to, independent component analysis (ICA) [5], linear discriminant analysis (LDA) [6], and principal component analysis (PCA) [7]. Of these methods, PCA has become a popular dimensionality reduction method in hyperspectral imaging [7,8,9,10,11,12]. Therefore, this paper uses PCA for dimensionality reduction.
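For illustration, the following is a minimal sketch, not the authors' code, of PCA-based spectral reduction on an HSI cube using scikit-learn; the array name `cube` and the choice of 30 retained components are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_spectral_bands(cube: np.ndarray, n_components: int = 30) -> np.ndarray:
    """Reduce an (H, W, B) HSI cube to (H, W, n_components) along the spectral axis."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)                 # treat every pixel as a B-dimensional sample
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)

# Example: an Indian Pines-sized cube, 145 x 145 x 200 -> 145 x 145 x 30
cube = np.random.rand(145, 145, 200).astype(np.float32)  # placeholder data
print(reduce_spectral_bands(cube).shape)                  # (145, 145, 30)
```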
This paper uses a convolutional neural network (CNN) for feature learning and extraction. Over the years, CNNs have replaced rule-based methods because of their ability to extract reliable and effective features; a CNN aims to extract highly discriminative features from input data [1]. Early feature extraction and learning experiments extracted spectral and spatial features separately, resulting in unsatisfactory classification results. Recent studies have recorded improved HSI classification accuracy when spectral–spatial features are extracted simultaneously, shifting the focus to models that use 3D convolutions in their network structure. Several researchers, such as Chen et al. [3] and Li et al. [8], processed spectral–spatial features simultaneously using 3D-CNN models that take cubes of spatial size 7 × 7 and 5 × 5, respectively. Since then, numerous authors have implemented the 3D-CNN method purposely to extract deep spectral–spatial information concurrently [9,10]. Although the joint extraction of spectral–spatial features using 3D-CNNs achieves better classification accuracy, 3D-CNNs are too computationally expensive to be employed on their own in HSI analysis, and their precision decreases as the network deepens [11,12]. Several approaches have been proposed to address this challenge and develop deep, lightweight models that simultaneously process spectral–spatial cues for HSI classification. For instance, Roy et al. [13] replaced some 3D-CNN layers with low-cost 2D-CNN layers in the network structure to develop a hybrid model that achieved state-of-the-art accuracies across all the HSI experimental datasets. Garifulla et al. [14] replaced the fully connected (FC) layer with global average pooling to reduce the network parameters and improve inference speed [15]. Other researchers have used atrous (dilated) or depthwise separable convolutions instead of conventional convolutions in their network design to create lightweight models [16,17].
This paper extends the work of designing deep HSI classification models by proposing an optimal HybridGBN model variant, abbreviated as HybridGBN-SR, that trains on very few labeled samples while increasing classification accuracy. Unlike methods that focus only on accuracy or only on speed, our network emphasizes the trade-off between the two: the proposed model reduces computation time while maintaining high classification accuracy. The contributions of this paper are as follows. First, the proposed HybridGBN model variants utilize Genoblocks (a concept borrowed from biological genome graphs) in their network design. The Genoblocks contain identical and non-identical residual connections to enhance the feature learning of the HSI model even with very few training samples. Second, we further demonstrate the potential of residual learning in discriminative feature extraction by reinforcing the Genoblocks with various residual connectors, resulting in the HybridGBN variants: HybridGBN-Vanilla, HybridGBN-SR, and HybridGBN-SSR. Lastly, we further the research on developing 3D/2D hybrid models for remote sensing image classification to reduce model complexity.
The rest of this paper is organized as follows: Section 2 describes the proposed network; Section 3 contains the experimental setup and results discussion; Section 4 contains the conclusion of this research.

2. The Context of the Proposed Model

The proposed model seeks to extend research on deep models for HSI classification that can train on extremely few training samples while achieving high classification accuracy. The framework of the proposed model consists of a preprocessing step and a feature learning and classification step, as shown in Figure 1.
In the data preprocessing step, as illustrated in Figure 1, the dimensionality of the original HSI data cube is reduced using the PCA method, and overlapping 3D patches are extracted using the neighborhood extraction approach. The extracted patches are then fed into the feature learning and classification step. First, the bottom Geno3Dblock performs 3D convolutions on the input data. Its output is then reshaped and fed into the second (top) Geno2Dblock, which performs 2D convolutions to extract more discriminative features. This approach was inspired by Roy et al. [13], who developed HybridSN using a bottom-heavy design in which a 3D-CNN is employed at the bottom, followed by a spatial 2D-CNN at the top. Roy et al. argue that the 3D-CNN at the bottom of the architecture facilitates the joint spectral–spatial feature representation, while the 2D-CNN in the top layers learns more abstract spatial representations. We then vectorize the feature maps of the last layer using global average pooling (GAP) [14] before forwarding them to the FC layers and then to the softmax layer for classification. A minimal sketch of this layout is given below.
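As an illustration only, the Keras sketch below mirrors this bottom-heavy 3D-to-2D layout; it is not the exact HybridGBN-SR architecture (the Genoblocks additionally contain residual connections and multi-scale kernels), and the layer counts, kernel shapes, and filter numbers are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_hybrid_sketch(patch=19, bands=30, n_classes=16):
    inputs = layers.Input(shape=(patch, patch, bands, 1))
    # Bottom block: joint spectral-spatial 3D convolutions
    x = layers.Conv3D(8, (3, 3, 7), padding='same', activation='relu')(inputs)
    x = layers.Conv3D(16, (3, 3, 5), padding='same', activation='relu')(x)
    # Reshape: fold the spectral axis into the channel axis for 2D processing
    s = x.shape                                        # (None, patch, patch, bands, 16)
    x = layers.Reshape((s[1], s[2], s[3] * s[4]))(x)
    # Top block: more abstract spatial 2D convolutions
    x = layers.Conv2D(64, (3, 3), padding='same', activation='relu')(x)
    # Global average pooling in place of flattening, then FC layers and softmax
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(128, activation='relu')(x)
    outputs = layers.Dense(n_classes, activation='softmax')(x)
    return models.Model(inputs, outputs)

build_hybrid_sketch().summary()
```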

2.1. HSI Data Preprocessing Step

Assume a raw HSI data cube $H$ with spatial dimensionality $k \in \mathbb{R}^{s \times s}$ and $b$ spectral bands. The HSI data cube $H$ can be viewed as a two-dimensional matrix $k \times b$, with each pixel composed of $b$ spectral bands and associated with a one-hot label vector $V = (v_1, v_2, \ldots, v_c) \in \mathbb{R}^{1 \times 1 \times c}$, where $c$ denotes the number of class categories in each dataset. We apply the PCA method to reduce data redundancy along the spectral dimension $b$ of the original HSI data cube $H$. The resulting HSI data cube $I$ contains fewer spectral bands $w$, such that $w < b$, while maintaining the spatial dimension $k$. We begin the PCA process by computing the covariance matrix as the product of the preprocessed data matrix and its transpose (see line 2 of Algorithm 1). This step determines the variance of the input data variables from the mean with respect to each other in order to discern their correlation [18].
Algorithm 1: Spectral Data Reduction and Neighborhood Extraction
1. Input: $H(k \times b)$ HSI data matrix, with $k$ pixels and $b$ bands.
2. Compute the covariance matrix $Q = \frac{1}{b} H^{T} H$.
3. Compute the eigenvalues and eigenvectors of $Q$.
4. Sort the eigenvectors in decreasing order of their eigenvalues to obtain $D$, $E$, $F$, and normalize the columns to unity.
5. Make the diagonal entries of $D$ and $F$ non-negative.
6. Choose a value $w$ such that $w < b$.
7. Construct the transform matrix $I(k \times w)$ from the selected $w$ eigenvectors.
8. Transform $H(k \times b)$ to $I(k \times w)$ in eigenspace to express the data in terms of the eigenvectors, reduced from $b$ to $w$. This gives a new set of basis vectors and a reduced $w$-dimensional subspace of the original $b$-dimensional space in which the data reside.
9. The reduced HSI data cube $I$ has dimensionality $s \times s \times w$, where $w < b$.
10. Perform neighborhood extraction on the new data cube $I \in \mathbb{R}^{s \times s \times w}$.
11. Output: $G$ small overlapping 3D patches of spatial dimension $p \times p$ and depth $q$.
The next step extracts the eigenvectors and eigenvalues of the covariance matrix to identify the principal components (see line 3). Each eigenvector has an associated eigenvalue, which indicates the variance captured by that principal component. The number of eigenvectors equals the number of eigenvalues, which in turn equals the number of spectral bands $b$ in the raw HSI data cube. Here, dimensionality reduction is attributed to the non-zero eigenvalues of the data matrix $H$ of dimensionality $k \times b$.
The data matrix $H(k \times b)$ is decomposed using singular value decomposition (SVD) into $H = DEF^{T}$, where $D(k \times k)$ is the matrix of eigenvectors of the covariance matrix $HH^{T}$, $E(k \times b)$ is a diagonal matrix with the eigenvalues as its main diagonal entries, and $F(b \times b)$ is the matrix of eigenvectors of the covariance matrix $H^{T}H$. The total size of this decomposition of $H$ is $k \times k + k \times b + b \times b$, which is larger than $k \times b$, the size of $H$. Organizing the information in principal components enables dimensionality reduction of the spectral bands without losing valuable information. Therefore, the goal of PCA is to find an integer $w$ smaller than $b$ and use the first $w$ columns of $D$ while restricting $E$ to the first $w$ eigenvalues to achieve the dimensionality reduction (see line 6).
The computed eigenvectors are ranked in descending order of their eigenvalues to find the principal components in order of significance. If we choose to keep $w$ components (eigenvectors) out of $b$ and discard the rest, we obtain a data matrix $I(k \times w)$, which can form a feature vector. A feature vector is therefore a matrix whose columns are the eigenvectors of the components that we retain. In this way, we have reduced the spectral dimensionality from $b$ to $w$ to form the matrix $I$ of dimensions $k \times w$. Finally, we use this $k \times w$ eigenvector matrix to transform the samples to the new subspace. Applying PCA as a data reduction method (see Figure 2) reduces dimensionality in the new space, not the original space [11].
The new data cube $I \in \mathbb{R}^{s \times s \times w}$ is divided into $G$ small overlapping 3D patches of spatial dimension $p \times p$ and depth $q$, as shown in Figure 2. The label of the central pixel determines the ground-truth label of the patch at spatial location $(x, y)$.
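A minimal sketch, assumed rather than taken from the authors' implementation, of this neighborhood extraction step is shown below; the ground-truth convention (0 = unlabeled) and the zero padding at the borders are assumptions.

```python
import numpy as np

def extract_patches(I: np.ndarray, gt: np.ndarray, p: int = 19):
    """Split a reduced (s, s, w) cube into overlapping p x p x w patches,
    each labeled by its central pixel; gt uses 0 for unlabeled pixels."""
    margin = p // 2
    padded = np.pad(I, ((margin, margin), (margin, margin), (0, 0)), mode='constant')
    patches, labels = [], []
    for x in range(I.shape[0]):
        for y in range(I.shape[1]):
            if gt[x, y] == 0:                 # skip unlabeled pixels
                continue
            patches.append(padded[x:x + p, y:y + p, :])
            labels.append(gt[x, y] - 1)       # the central pixel decides the patch label
    return np.stack(patches), np.array(labels)

I = np.random.rand(145, 145, 30).astype(np.float32)   # placeholder reduced cube
gt = np.random.randint(0, 17, size=(145, 145))         # placeholder ground truth
X, y = extract_patches(I, gt)
print(X.shape, y.shape)                                 # (num_labeled, 19, 19, 30), (num_labeled,)
```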

2.2. Feature Extraction and Classification Step

This is the second step in our model design, as shown in Figure 1. We propose using a biological genome graph in feature extraction and classification and replacing 3D convolutions at the top of the network with low-cost 2D convolutions.
According to Manolov et al. [19], a tetraploid genome shown as variegated blocks (see Figure 3a) can be intertwined to form a complex pattern of the assembly graph without repeats or sequencing error (see Figure 3b). Graph genomics uses graph-based alignment, which can correctly position all reads on the genome, as opposed to linear alignment, which is reference-based and cannot align all reads or use all of the available genome data. A graph genome is constructed from a population of genome sequences, such that each haploid genome in the population is represented by a sequence path through the graph [20]. Schatz et al. [21] and Rakocevic et al. [22] experimentally demonstrated that a graph genome can improve the volume of aligned reads, resolve haplotypes, and create a more accurate depiction of population diversity [18,20]. In this perspective, we propose HybridGBN models that utilize Genoblocks, a concept borrowed from genomics, in their network design.

2.2.1. The Architecture of Genoblocks

The Genoblocks use CNNs in their design. The CNNs have three parts: the input, hidden, and output layers. The role of the hidden (convolutional) layer is to perform the convolution operation, i.e., transforming the input received into some form and passing it to the next layer without losing its characteristics.
Mathematically, an individual neuron is computed by striding a weight filter $T$ with bias $n$ over a vector of inputs $E$ to produce an output feature map $m$. The term stride in a CNN refers to the number of pixels (an integer) by which the filter window shifts (left to right and top to bottom) after each operation until all pixels are convolved. Mathematically, this can be expressed as
$m = f(TE + n)$
where $f(\cdot)$ is a nonlinear function used as an activation function to introduce nonlinearity.
We use the rectified linear unit (ReLU) function since it is more efficient than the sigmoid function in terms of training convergence [23]. The ReLU function is defined as
$f(x) = \max(0, x)$
Research in computer vision has shown that network depth offers a greater advantage than network width in terms of feature learning and fitting [15,24]. Successful training of deep networks with small samples can be realized through residual connections [11]. Mou et al. [24] and Zhong et al. [10] used residual learning (RL) models to extract additional discriminative characteristics for HSI classification [11] and to effectively address the degradation problem common in deep networks. Hence, the strength of the Genoblocks lies in their use of residual connections and multi-scale kernels, which extract abundant contextual features to attain a high rate of generalizability [25]. A vanilla Genoblock, shown in Figure 4, utilizes identical and non-identical residual connections to recover features lost during convolution, as sketched below.
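The hedged Keras sketch below conveys the residual idea in the spirit of a Genoblock: a multi-scale main path summed with an identical (identity) skip and a non-identical (transformed) skip. The kernel sizes, filter counts, and the 1 × 1 × 1 projection are assumptions and do not reproduce the published Genoblock configuration.

```python
from tensorflow.keras import layers

def residual_geno_sketch(x, filters=16):
    # Main path with multi-scale 3D kernels
    main = layers.Conv3D(filters, (3, 3, 3), padding='same', activation='relu')(x)
    main = layers.Conv3D(filters, (5, 5, 5), padding='same', activation='relu')(main)
    # Identical skip: pass the input through unchanged (projected only if channels differ)
    identical = x if x.shape[-1] == filters else layers.Conv3D(filters, (1, 1, 1), padding='same')(x)
    # Non-identical skip: a transformed copy of the input (here a spectral 1 x 1 x 7 convolution)
    non_identical = layers.Conv3D(filters, (1, 1, 7), padding='same')(x)
    out = layers.Add()([main, identical, non_identical])
    return layers.Activation('relu')(out)

inputs = layers.Input(shape=(19, 19, 30, 1))
print(residual_geno_sketch(inputs).shape)   # (None, 19, 19, 30, 16)
```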

2.2.2. The Genoblock Variants

The Geno3Dblock is the first (bottom) block in the structure of the proposed HybridGBN model. Figure 4 illustrates the basic Genoblock from which we created the three variants: Geno3Dblock-Vanilla (see Figure 5), Geno3Dblock-SR (see Figure 6), and Geno3Dblock-SSR (see Figure 7). We use ReLU as the activation function for each convolution layer. Therefore, the activation value of these Geno3Dblock variants at spectral–spatial position $(x, y, z)$ in the $j$th feature map of the $i$th layer is denoted as $v_{i,j}^{x,y,z}$ and is given by
$v_{i,j}^{x,y,z} = f\left(n_{i,j} + (T \ast E)_{i,j}\right)$
where the parameter $n_{i,j}$ is the bias for the $j$th feature map of the $i$th layer, $T$ is the kernel function with the learned weights, $E$ is the input to the layer, and $\ast$ denotes the convolution operator.
Similarly, the convolution operation $(T \ast E)_{i,j}$ is given by
$(T \ast E)_{i,j} = \sum_{m=1}^{M} \sum_{r=0}^{R_i - 1} \sum_{q=0}^{Q_i - 1} \sum_{p=0}^{P_i - 1} w_{i,j,m}^{r,q,p} \times v_{(i-1),m}^{(x+r),(y+q),(z+p)}$
Parameters $R_i$, $Q_i$, and $P_i$ denote the kernel width, height, and depth, respectively. $M$ is the total number of feature maps in the $(i-1)$th layer connected to the current feature map, and $w_{i,j,m}^{r,q,p}$ is the weight at position $(r, q, p)$ of the kernel connected to the $m$th feature map in the previous layer.
We apply padding $P$ to facilitate the use of identical residual connections, which require the input image dimensions to be preserved. In a CNN, padding refers to the number of pixels added to the border of an image when the kernel processes it, to avoid shrinking. Padding is vital in image processing with CNNs as it extends the image area, which helps the kernel produce more accurate image analyses. Padding can be performed by either replicating the edge of the original image or by zero padding; zero padding is a popular technique that pads the input volume with zeros. The dimensions of the zero-padded output image $O$ of any given layer are given by
$O = \left[\frac{G - F + 2P}{S} + 1\right] \times \left[\frac{H - F + 2P}{S} + 1\right] \times \left[D_y\right]$
where $O$ is the output dimension, $G$ is the width of the input, $H$ is the height of the input, $F$ is the filter size, $P$ is the padding, $S$ is the stride, and $D_y$ is the depth of the output image.
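As a quick check of this relation, the small helper below evaluates the output size for one layer; the example values (a 19 × 19 input, a 3 × 3 kernel, padding 1, stride 1, 64 output channels) are illustrative assumptions.

```python
def conv_output_dims(G, H, F, P, S, D_y):
    """Output width, height, and depth of a zero-padded convolution layer."""
    out_w = (G - F + 2 * P) // S + 1     # [(G - F + 2P)/S + 1]
    out_h = (H - F + 2 * P) // S + 1     # [(H - F + 2P)/S + 1]
    return out_w, out_h, D_y

# A 19 x 19 input with a 3 x 3 kernel, P = 1, S = 1 keeps the 19 x 19 spatial size
print(conv_output_dims(G=19, H=19, F=3, P=1, S=1, D_y=64))   # (19, 19, 64)
```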
The Geno3Dblock-Vanilla: This block utilizes multi-scale kernels (i.e., 3 × 3 × 3 and 5 × 5 × 5) to extract multi-scale features from the image map. The structure of the Geno3Dblock-Vanilla is shown in Figure 5. It is the basic building block used to develop the HybridGBN-Vanilla model.
The Geno3Dblock-SR: This block adds an extra spatial residual (SR) connection to the basic building block (Geno3Dblock-Vanilla), as shown in Figure 6. This block is used in the development of the HybridGBN-SR model.
The Geno3Dblock-SSR: Here, the spatial residual (SR) connection in Geno3Dblock-SR is replaced with a spectral–spatial residual (SSR) connector, as shown in Figure 7. We utilized this block in the development of the HybridGBN-SSR model.
The output of the above Geno3Dblocks is reshaped before being passed to the top Geno2Dblock for further feature learning. Reshaping is the deformation of 3D features into 2D features to reduce the model's operational cost. For instance, a convolutional layer with 64 feature maps of size 3 × 3 × 3 can be reshaped into 192 2D feature maps of size 3 × 3, as shown in Figure 8 and in the short sketch below.
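A short TensorFlow sketch of this deformation, using the 64-map example from the text, is given below; the batch size of 1 is an assumption.

```python
import tensorflow as tf

x = tf.random.normal((1, 3, 3, 3, 64))        # (batch, height, width, depth, channels)
reshaped = tf.reshape(x, (1, 3, 3, 3 * 64))   # fold the depth axis into the channels
print(reshaped.shape)                          # (1, 3, 3, 192): 192 2D maps of size 3 x 3
```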
Geno2Dblock: To reduce the model complexity, we developed the Geno2Dblock shown in Figure 9, which is used in the second (top) block of the proposed HybridGBN model variants to learn more discriminative spatial features. It utilizes maxpooling2D and dilated convolution arranged in parallel to capture the context information and multi-scale features [26].
The activation value of the 2D convolution in the Geno2Dblock at spatial position $(x, y)$ in the $j$th feature map of the $i$th layer is given by
$v_{i,j}^{x,y} = f\left(n_{i,j} + (T \ast E)_{i,j}\right)$
The convolution operation $(T \ast E)_{i,j}$ for a 2D layer at spatial position $(x, y)$ in the $j$th feature map can be expanded as
$(T \ast E)_{i,j} = \sum_{m=1}^{M} \sum_{r=0}^{R_i - 1} \sum_{q=0}^{Q_i - 1} w_{i,j,m}^{r,q} \times v_{(i-1),m}^{(x+r),(y+q)}$
where $w_{i,j,m}^{r,q}$ is the weight at spatial position $(r, q)$ of the kernel connected to the $m$th feature map of the previous layer.
We utilized MaxPooling2D in the Geno2Dblock to help recover features lost during the convolution process and to control overfitting. The MaxPooling2D function partitions the input feature map into a set of rectangles and outputs the maximum value of each sub-region. Mathematically, the general pooling function can be written as
$cZ_{l}^{k} = g_{p}\left(cF_{l}^{k}\right)$
where $cZ_{l}^{k}$ represents the pooled feature map of the $l$th layer for the $k$th input feature map $cF_{l}^{k}$, and $g_{p}(\cdot)$ defines the type of pooling operation. In this research, $g_{p}(\cdot)$ is MaxPooling2D.
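A hedged Keras sketch of this parallel arrangement, a dilated 2D convolution branch alongside a stride-1 MaxPooling2D branch whose outputs are concatenated, is shown below; the filter count, dilation rate, and pooling window are assumptions rather than the published Geno2Dblock settings.

```python
from tensorflow.keras import layers

def geno2d_sketch(x, filters=64):
    # Dilated (atrous) convolution branch: enlarges the receptive field to capture context
    dilated = layers.Conv2D(filters, (3, 3), dilation_rate=2,
                            padding='same', activation='relu')(x)
    # MaxPooling2D branch: stride 1 with 'same' padding keeps the spatial size
    pooled = layers.MaxPooling2D(pool_size=(3, 3), strides=1, padding='same')(x)
    # Concatenate the two branches along the channel axis
    return layers.Concatenate()([dilated, pooled])

inputs = layers.Input(shape=(19, 19, 192))
print(geno2d_sketch(inputs).shape)   # (None, 19, 19, 256): 64 conv maps + 192 pooled maps
```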
In place of flattening, we used global average pooling (GAP) to reduce the number of network parameters and effectively prevent the model from overfitting. GAP achieves this by reducing each $h \times w$ feature map to a single number, the average of all $hw$ values. We then used two FC layers to learn further discriminative features.
The output of the FC layers is then passed to the softmax layer to perform classification. The softmax function is a probabilistic function that uses a probability score to measure the correlation between the output and the reference values. Therefore, the probability that a given input belongs to class label $c$ of the HSI dataset is given by
$p(y_i) = \frac{e^{y_i}}{\sum_{j=1}^{C} e^{y_j}} \quad \text{for } i = 1, \ldots, C, \text{ and } y = (y_1, \ldots, y_C) \in \mathbb{R}^{C}$
where $y_i$, $i = 1, \ldots, C$, are the values of the input vector to the softmax function, and $p(y_i)$ is the output class-membership distribution, with $i$ the index of the test pixel.
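A small NumPy illustration of this class-membership computation (the logits are arbitrary example values):

```python
import numpy as np

def softmax(y: np.ndarray) -> np.ndarray:
    e = np.exp(y - y.max())       # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits))            # approx. [0.659, 0.242, 0.099]; the entries sum to 1
```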
The number of kernels of the last layer is set to equal the number of classes defined in the HSI dataset under study.
We can treat the whole procedure of training the HybridGBN model variants as optimizing the parameters to minimize a multiclass loss function between the network outputs and the ground truth values of the training dataset. The network is fine-tuned through backpropagation. The loss function $L$ is given by
$L = \sum_{i=1}^{N} \sum_{c=1}^{C} \max\left(0, 1 - p(y_i)\right)$
Finally, the prediction label $\hat{y}_i$ is decided by taking the argmin of the loss function:
$\hat{y}_i = \arg\min_{c} L_{c}$

3. Experimental Results and Discussion

In this section, we report the quantitative and qualitative results of the proposed HybridGBN model variants in comparison with other state-of-the-art methods, namely 2D-CNN, M3D-DCNN, HybridSN, R-HybridSN, and SSRN, over the selected publicly available HSI datasets: Indian Pines (IP), University of Pavia (UP), and Salinas Scene (SA). This section is divided into Section 3.1, Section 3.2, Section 3.3, Section 3.4, Section 3.5 and Section 3.6, which describe the experimental datasets, the experimental setup, the evaluation criteria, the experimental results and discussions on very small training sample data, varying training sample data, and the time complexity of the selected models over the IP, UP, and SA datasets.

3.1. Experimental Datasets

The IP dataset was collected by the airborne visible/infrared imaging spectrometer (AVIRIS) sensor flying over the IP test site in Northwestern Indiana. The original image is 145 × 145 × 220 in dimension. After discarding 20 spectral bands due to water absorption, the data used in this experiment have a size of 145 × 145 × 200. The ground truth of the IP scene consists of 16 class labels that are not mutually exclusive [27].
The UP dataset was collected by the Reflective Optics System Imaging Spectrometer (ROSIS) flying over Pavia, northern Italy. The spectral–spatial dimension of the original HSI image is 610 × 340 × 115. We reduced these dimensions to 610 × 340 × 103 by eliminating 12 noisy bands [1,18]. The UP dataset has nine classes; except for one class (Shadows), each class has more than 1000 labeled pixels.
The SA dataset was captured by the AVIRIS sensor flying over the Salinas Valley, California. The original size of the HSI image is 512 × 217 × 224 . After eliminating 20 bands covering the water absorption region, the resultant image size used in this experiment is 512 × 217 × 204 . The land cover has been categorized into 16 class labels [27].

3.2. Experimental Setup

All experiments were conducted online using Google Colab. We split our datasets into training and testing sets and report the results as the average of seven runs. Moreover, we applied the grid search method to select the best optimizer, learning rate, dropout, and number of epochs for the proposed method. For all datasets, we chose the Adam optimizer, with learning rates of 0.0005, 0.0007, and 0.001 for the IP, UP, and SA datasets, respectively. The optimal dropout for IP, UP, and SA was 0.35, 0.5, and 0.4, respectively, while the optimal number of epochs for IP, UP, and SA was 100, 150, and 100. Using HybridGBN-Vanilla as the basic building block, we varied the spatial window size over the IP, UP, and SA datasets to obtain the optimal window size. Considering the overall accuracy (OA), average accuracy (AA), and Kappa coefficient (Kappa), the optimal spatial window size of the HybridGBN-Vanilla over the IP, UP, and SA datasets is 19 × 19, 15 × 15, and 23 × 23, respectively. Therefore, the dimensions of the overlapping 3D patches of the input volume are set to 19 × 19 × 30, 15 × 15 × 15, and 23 × 23 × 15, respectively. We used the same window sizes for the other HybridGBN variants (HybridGBN-SSR and HybridGBN-SR) for a fair comparison.
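For illustration, the sketch below shows how the reported IP configuration (Adam, learning rate 0.0005, dropout 0.35, 100 epochs) could be applied in Keras. The tiny stand-in model, the random data, the batch size, and the cross-entropy loss are placeholders and assumptions, not the published HybridGBN-SR setup.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

n_classes = 16
X_train = np.random.rand(32, 19, 19, 30, 1).astype('float32')                 # fake 19 x 19 x 30 patches
y_train = tf.keras.utils.to_categorical(np.random.randint(0, n_classes, 32), n_classes)

model = models.Sequential([
    layers.Input(shape=(19, 19, 30, 1)),
    layers.Conv3D(8, (3, 3, 3), padding='same', activation='relu'),            # stand-in feature extractor
    layers.GlobalAveragePooling3D(),
    layers.Dropout(0.35),                                                       # optimal dropout reported for IP
    layers.Dense(n_classes, activation='softmax'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),         # reported IP learning rate
              loss='categorical_crossentropy',                                  # assumption; see the loss in Section 2.2.2
              metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=16, epochs=2, verbose=0)                 # the paper reports 100 epochs for IP
```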

3.3. Evaluation Criteria

To assess the performance of the proposed HSI models, we use the Kappa, OA, and AA evaluation measures.
The OA represents the percentage of correctly classified samples, with 100% accuracy being a perfect classification where all samples were classified correctly. It is given by
$\mathrm{OA} = \frac{\text{Correctly classified samples}}{\text{Total number of samples}}$
where correctly classified samples are cases where the predicted results are the same as the actual ground truth label.
The AA is the mean of the per-class classification accuracies, and it is given by
$\mathrm{AA} = \frac{1}{c} \sum_{i=1}^{c} x_{i}$
where $c$ is the number of classes, and $x_{i}$ is the percentage of correctly classified pixels in class $i$.
The Kappa provides information on what percentage of the classification map concurs with the ground truth map, and it is given by
$\mathrm{Kappa} = \frac{P_{o} - P_{e}}{1 - P_{e}}$
$P_{o}$ denotes the observed agreement, which is the model classification accuracy, and $P_{e}$ symbolizes the expected agreement between the model classification map and the ground truth map by chance probability. When the Kappa value is 1, it indicates perfect agreement, while 0 indicates agreement by chance.
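A minimal sketch of computing OA, AA, and Kappa from predicted and reference labels with scikit-learn; the small `y_true`/`y_pred` arrays are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

def evaluate(y_true, y_pred):
    oa = accuracy_score(y_true, y_pred)              # overall accuracy
    cm = confusion_matrix(y_true, y_pred)
    per_class = np.diag(cm) / cm.sum(axis=1)         # per-class accuracies
    aa = per_class.mean()                            # average accuracy
    kappa = cohen_kappa_score(y_true, y_pred)        # agreement beyond chance
    return oa, aa, kappa

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 2])
print(evaluate(y_true, y_pred))                      # OA 0.833, AA 0.833, Kappa 0.75
```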

3.4. Experimental Results and Discussions on Very Small Training Sample Data

This section aims to show the robustness of the models on very little training sample data, i.e., 5% for the IP dataset and 1% for the UP and SA datasets. We use the remaining portion of the sample data for testing.

3.4.1. Distribution of the Training and Testing Sample Data over IP, UP, and SA Datasets on Very Little Sample Data

Table 1, Table 2 and Table 3 provide the detailed distribution of the training and testing samples of IP, UP, and SA datasets.
From Table 1, we can observe that the IP dataset is unbalanced, with some classes having only one or two training samples when a minimal training proportion of 5% is chosen. Table 2 shows that the UP dataset is relatively balanced, with most classes well represented even at a minimal training proportion of 1%; hence, we expect the classifiers to achieve better classification accuracies on it than on the IP dataset. We can see in Table 3 that all classes are well represented at 1% training sample data for the SA dataset. Therefore, we conclude that the IP dataset is the most unbalanced, followed by the UP and SA datasets.

3.4.2. The Performance of Selected Models over IP, UP, and SA Datasets Using Very Limited Training Sample Data

This subsection presents per class accuracy, the Kappa, OA, and AA of the compared methods in an extreme condition of very small sample data over IP, UP, and SA datasets, as shown in Table 4, Table 5 and Table 6.
We observe in Table 4 and Table 6 that the Kappa, OA, and AA of the proposed HybridGBN variants (HybridGBN-Vanilla, HybridGBN-SR, and HybridGBN-SSR) are higher than the compared state-of-the-art methods such as the 2D-CNN, M3D-DCNN, SSRN, R-HybridSN, and HybridSN for IP and SA datasets. However, only the proposed HybridGBN-SR method records superior performance in the UP dataset over all the compared models.
Across all the datasets, the M3D-DCNN recorded the lowest classification accuracy because it mainly uses multi-scale 3D dense blocks in its network structure, which are prone to overfitting when subjected to limited training sample data. We note that the accuracy of 2D-CNN is slightly higher than that of M3D-DCNN due to its ability to extract more discriminative spatial features vital to HSI classification. The HybridSN [13], on the other hand, utilizes both 3D-CNNs and a 2D-CNN in its network structure. The SSRN yields better classification accuracy than 2D-CNN and M3D-DCNN on all datasets, and it outperforms HybridSN on the UP dataset because it uses skip connections to extract deep features, effectively addressing the degradation problem of deep networks. The R-HybridSN [28] achieved better classification performance than all the previously mentioned methods because it utilizes non-identical multi-scale residual connections in its network structure.
Comparing the HybridGBN variants, we observe that the classification performance of HybridGBN-SR and HybridGBN-SSR is better than that of HybridGBN-Vanilla. Hence, adding a non-identical residual connection to the basic building block can significantly improve the classification accuracy across all the experimental datasets. For example, in Table 4, we can observe that the proposed HybridGBN-SR improves the OA and AA of HybridGBN-SSR by +0.1% and +0.58%, respectively, and those of HybridGBN-Vanilla by +0.27% and +1%, respectively, on the IP dataset. On the UP dataset (see Table 5), the proposed HybridGBN-SR increases the OA and AA of HybridGBN-SSR by +0.83% and +1.4%, and those of HybridGBN-Vanilla by +0.57% and +0.84%, respectively. On SA (see Table 6), the OA and AA of the proposed HybridGBN-SR are higher than those of HybridGBN-SSR by +0.18% and +0.57%, and higher than those of HybridGBN-Vanilla by +0.43% and +1.14%, respectively. The improvement in average accuracy is more pronounced than that in overall accuracy because the HybridGBN-SR performs better on classes with meager training sample data. For instance, for class 7 of the IP dataset, the HybridGBN-SR improves the per-class accuracy of 2D-CNN, M3D-DCNN, SSRN, HybridSN, and R-HybridSN by +89.1%, +80.95%, +99.47%, +30.95%, and +3.91%, respectively. The difference in performance between HybridGBN-SSR and HybridGBN-SR can be attributed to the additional residual connection at the bottom block of the network. In HybridGBN-SSR, the additional residual connection simultaneously extracts spatial–spectral features. In HybridGBN-SR, by contrast, the additional residual connection extracts spatial features only while preserving raw spectral features, resulting in the convolution of high-level spatial features with low-level spectral features in the top network layers. To the best of our knowledge, this leads to the extraction of more discriminative features and, hence, increases classification accuracy.
In comparison with the other state-of-the-art methods, the HybridGBN-SR improves the overall accuracy of 2D-CNN, M3D-DCNN, SSRN, HybridSN, and R-HybridSN by +21.95%, +28.54%, +4.03%, +3.18%, and +0.96% on the IP dataset (see Table 4); by +6.72%, +13.22%, +0.18%, +2.76%, and +1.26% on the UP dataset (see Table 5); and by +5.79%, +11.32%, +2.4%, +0.62%, and +1.09% on the SA dataset (see Table 6), respectively. This trend is more pronounced in classes with less than 1% training sample points. In this perspective, we propose the HybridGBN-SR, which can learn more discriminative features from extremely small training data samples and in unbalanced datasets.

3.4.3. Training Accuracy and Loss Graph of the Selected Models on Very Limited Sample Data

We can observe in Figure 10, Figure 11 and Figure 12 that the proposed HybridGBN-SR converges better than SSRN and HybridSN but worse than R-HybridSN over the IP and UP datasets. The training and loss graphs of HybridGBN-SR on the SA dataset are comparable to those of R-HybridSN and HybridSN, indicating its competitiveness over the SA dataset (see Figure 12).

3.4.4. Confusion Matrix

This subsection further demonstrates the competitiveness of the proposed HybridGBN-SR using the confusion matrix over the IP, UP, and SA datasets.
With a closer look at the confusion matrices in Figure 13, Figure 14 and Figure 15, we can observe that most of the sample data of the proposed HybridGBN-SR lie on the diagonal even with limited training data, compared to SSRN, HybridSN, and R-HybridSN, over the IP, UP, and SA datasets. Therefore, the proposed model correctly classified most sample data, demonstrating its robustness with small training data.

3.4.5. Classification Diagrams

We observe in Figure 16, Figure 17 and Figure 18 that the SSRN, HybridSN, and R-HybridSN produce more noisy scattered points in their classification maps than the proposed HybridGBN-SR method over the IP, UP, and SA datasets. Therefore, compared with these models, the proposed method removes the noisy scattered points and leads to smoother classification results without blurring the boundaries when subjected to less training sample data.

3.5. Varying Training Sample Data

To further compare the performance of the proposed HybridGBN-SR with the selected state-of-the-art models, we varied the training sample data. We randomly trained the models on 2%, 5%, 8%, 10%, and 20% of the IP dataset, and on 0.4%, 0.8%, 1%, 2%, and 5% of the UP and SA datasets, and then tested the models on the remaining data portion. The purpose of these experiments was to observe the variation trend and sensitivity of OA with the changing amount of training samples for the proposed HybridGBN-SR model compared with the selected state-of-the-art methods over the IP, UP, and SA datasets. The results are summarized in Table 7, Table 8 and Table 9.
We observe in Table 7, Table 8 and Table 9 that the proposed HybridGBN-SR model has better overall accuracy than the state-of-the-art models in almost all the training sample data splits. Figure 19 illustrates that as the training sample data are reduced, the classification accuracy gap between the proposed HybridGBN-SR model and the selected state-of-the-art models widens, demonstrating different reduction speeds among the compared models. For instance, in the IP dataset (see Table 7 and Figure 19a), at 8% training sample data, the HybridGBN-SR improves the overall accuracy of 2D-CNN, M3D-DCNN, SSRN, HybridSN, and R-HybridSN by +15.88%, +20.27%, +1.98%, +1.94%, and +0.19%, respectively. In comparison, at 2% training sample data, the HybridGBN-SR improves the overall accuracy of 2D-CNN, M3D-DCNN, SSRN, HybridSN, and R-HybridSN by +24.31%, +29.16%, +7.14%, +8.3%, and +4.77%, respectively.
In the SA dataset (see Table 9 and Figure 19c), at 5% and 0.4% training sample data, the proposed HybridGBN-SR model increases the overall accuracy (OA) of the second-best model (HybridSN) by +0.11% and +0.92%, respectively. This shows an increase in the performance gap between our model and the other models as the training sample data drastically reduce. The same trend is observed in the UP dataset (see Table 8 and Figure 19b). Therefore, we can conclude that the robustness of the proposed HybridGBN-SR model is more pronounced as the training proportion decreases across all the experimental datasets. This implies that the proposed HybridGBN-SR model can extract sufficient discriminative features even with minimal training sample data. We attribute this to the genomic residual connections in the design of the HybridGBN-SR model.

3.6. The Time Complexity of the Selected Models over IP, UP, and SA Datasets

Table 10 summarizes the training and testing time in seconds of SSRN, HybridSN, R-HybridSN, and the proposed HybridGBN over the IP dataset on 5% training and 95% testing sample data, and over UP and SA datasets on 1% training and 99% testing sample data.
The training and testing times (in seconds) shown in Table 10 indicate that the proposed HybridGBN-SR model is faster than SSRN, comparable with R-HybridSN, and slower than HybridSN over the three datasets. However, we note that the training and testing time of a deep learning model is related to the experimental environment, model structure, number of training epochs, amount of training samples, patch size, etc. For instance, the HybridSN model trains and tests faster than the other models due to its simple network structure. The SSRN is the slowest in training and testing because it contains a deep network structure and takes a long time (many epochs) to learn. Lastly, the speed difference between HybridGBN-SR and R-HybridSN can be attributed to the network optimization parameters.

4. Conclusions

This work furthers the development of deep networks for HSI classification. We propose a deep 3D/2D genome graph-based network (abbreviated as HybridGBN-SR) that extracts discriminative spectral–spatial features from very few training samples. In its network design, the proposed HybridGBN-SR model uses Genoblocks, a concept borrowed from biological genome graphs. The Genoblocks innovatively utilize multi-scale kernels and identical and non-identical residual connections to extract the abundant contextual features vital to attaining a high generalizability rate. The residual connections promote the backpropagation of gradients to extract more discriminative features and prevent overfitting, leading to high classification accuracy. The proposed HybridGBN-SR model achieves reduced computational cost by replacing the Geno3Dblock with the low-cost Geno2Dblock at the top of the network structure. The Geno2Dblock contains dilated 2D convolutions to extract further discriminative HSI features, resulting in increased computational efficiency while maintaining classification accuracy. The robustness of the proposed HybridGBN-SR model is evidenced by its better convergence than SSRN and HybridSN across all the datasets and its ability to achieve better classification accuracy with a small number of training samples compared to state-of-the-art methods such as SSRN, HybridSN, and R-HybridSN over the IP, UP, and SA datasets.

Author Contributions

Conceptualization, H.C.T., E.C., R.M.M. and D.O.N.; software, H.C.T. and D.O.N.; resources, E.C.; writing—original draft preparation, H.C.T., D.O.N. and R.M.M.; writing—review and editing, H.C.T., E.C., L.M., D.O.N. and R.M.M.; supervision, E.C. and L.M.; funding acquisition, E.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants U1804152 and 62101503.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All datasets used in this research are openly accessible online (http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes (accessed on 15 February 2022)).

Acknowledgments

The authors express gratitude to http://www.ehu.eus (accessed on 15 February 2022) for publicly providing the original hyperspectral images to advance remote sensing research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709. [Google Scholar] [CrossRef] [Green Version]
  2. Khan, M.J.; Khan, H.S.; Yousaf, A.; Khurshid, K.; Abbas, A. Modern Trends in Hyperspectral Image Analysis: A Review. IEEE Access 2018, 6, 14118–14129. [Google Scholar] [CrossRef]
  3. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef] [Green Version]
  4. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962. [Google Scholar]
  5. Villa, A.; Benediktsson, J.A.; Chanussot, J.; Jutten, C. Hyperspectral Image Classification With Independent Component Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4865–4876. [Google Scholar] [CrossRef] [Green Version]
  6. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of Hyperspectral Images With Regularized Linear Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873. [Google Scholar] [CrossRef]
  7. Licciardi, G.; Marpu, P.R.; Chanussot, J.; Benediktsson, J.A. Linear Versus Nonlinear PCA for the Classification of Hyperspectral Data Based on the Extended Morphological Profiles. IEEE Geosci. Remote Sens. Lett. 2011, 9, 447–451. [Google Scholar] [CrossRef] [Green Version]
  8. Lin, Z.; Chen, Y.; Zhao, X.; Wang, G. Spectral-spatial classification of hyperspectral image using autoencoders. In Proceedings of the 2013 9th International Conference on Information, Communications & Signal Processing, Tainan, Taiwan, 10–13 December 2013; pp. 1–5. [Google Scholar] [CrossRef] [Green Version]
  9. Ben Hamida, A.; Benoit, A.; Lambert, P.; Ben Amar, C. 3-D Deep Learning Approach for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4420–4434. [Google Scholar] [CrossRef] [Green Version]
  10. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858. [Google Scholar] [CrossRef]
  11. Nyabuga, D.O.; Song, J.; Liu, G.; Adjeisah, M. A 3D-2D Convolutional Neural Network and Transfer Learning for Hyperspectral Image Classification. Comput. Intell. Neurosci. 2021, 2021, 1759111. [Google Scholar] [CrossRef]
  12. Qiu, Z.; Yao, T.; Mei, T. Learning Spatio-Temporal Representation with Pseudo-3D Residual Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5534–5542. [Google Scholar] [CrossRef] [Green Version]
  13. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar] [CrossRef] [Green Version]
  14. Garifulla, M.; Shin, J.; Kim, C.; Kim, W.H.; Kim, H.J.; Kim, J.; Hong, S. A Case Study of Quantizing Convolutional Neural Networks for Fast Disease Diagnosis on Portable Medical Devices. Sensors 2021, 22, 219. [Google Scholar] [CrossRef] [PubMed]
  15. Chen, L.; Li, S.; Bai, Q.; Yang, J.; Jiang, S.; Miao, Y. Review of Image Classification Algorithms Based on Convolutional Neural Networks. Remote Sens. 2021, 13, 4712. [Google Scholar] [CrossRef]
  16. Tan, M.; Le, Q.V. MixConv: Mixed depthwise convolutional kernels. arXiv 2019, arXiv:1907.09595. [Google Scholar]
  17. Tan, M.; Le, Q.V. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019. [Google Scholar]
  18. Tinega, H.; Chen, E.; Ma, L.; Mariita, R.M.; Nyasaka, D. Hyperspectral Image Classification Using Deep Genome Graph-Based Approach. Sensors 2021, 21, 6467. [Google Scholar] [CrossRef] [PubMed]
  19. Manolov, A.; Konanov, D.; Fedorov, D.; Osmolovsky, I.; Vereshchagin, R.; Ilina, E. Genome Complexity Browser: Visualization and quantification of genome variability. PLoS Comput. Biol. 2020, 16, e1008222. [Google Scholar] [CrossRef]
  20. Yang, X.; Lee, W.-P.; Ye, K.; Lee, C. One reference genome is not enough. Genome Biol. 2019, 20, 104. [Google Scholar] [CrossRef] [Green Version]
  21. Schatz, M.C.; Witkowski, J.; McCombie, W.R. Current challenges in de novo plant genome sequencing and assembly. Genome Biol. 2012, 13, 243. [Google Scholar] [CrossRef]
  22. Rakocevic, G.; Semenyuk, V.; Lee, W.-P.; Spencer, J.; Browning, J.; Johnson, I.J.; Arsenijevic, V.; Nadj, J.; Ghose, K.; Suciu, M.C.; et al. Fast and accurate genomic analyses using genome graphs. Nat. Genet. 2019, 51, 354–362. [Google Scholar] [CrossRef]
  23. Cheng, G.; Xie, X.; Han, J.; Guo, L.; Xia, G.-S. Remote Sensing Image Scene Classification Meets Deep Learning: Challenges, Methods, Benchmarks, and Opportunities. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3735–3756. [Google Scholar] [CrossRef]
  24. Mou, L.; Ghamisi, P.; Zhu, X.X. Unsupervised Spectral–Spatial Feature Learning via Deep Residual Conv–Deconv Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 56, 391–406. [Google Scholar] [CrossRef] [Green Version]
  25. He, M.; Li, B.; Chen, H. Multi-scale 3D deep convolutional neural network for hyperspectral image classification. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3904–3908. [Google Scholar]
  26. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Liu, G.; Qi, L.; Tie, Y.; Ma, L. Hyperspectral Image Classification Using Kernel Fused Representation via a Spatial-Spectral Composite Kernel With Ideal Regularization. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1422–1426. [Google Scholar] [CrossRef]
  28. Feng, F.; Wang, S.; Wang, C.; Zhang, J. Learning Deep Hierarchical Spatial–Spectral Features for Hyperspectral Image Classification Based on Residual 3D-2D CNN. Sensors 2019, 19, 5276. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. The framework of the proposed HybridGBN Model Variants.
Figure 2. Preprocessing of raw HSI data cube.
Figure 3. (a) Tetraploid genome; (b) assembly graph.
Figure 4. Identity and non-identity residual network in a vanilla Genoblock.
Figure 5. Framework of Geno3Dblock-Vanilla.
Figure 6. Framework of Geno3Dblock-SR.
Figure 7. Framework of Geno3Dblock-SSR.
Figure 8. Framework for 3D to 2D feature deformation.
Figure 9. The Framework of Geno2Dblock.
Figure 10. Training graphs for R-HybridSN, HybridSN, SSRN, and HybridGBN-SR for each epoch over IP dataset: (a) The training accuracy graph; (b) The loss convergence graph.
Figure 11. Training graphs for R-HybridSN, HybridSN, SSRN, and HybridGBN-SR for each epoch over UP dataset: (a) The training accuracy graph; (b) The loss convergence graph.
Figure 12. Training graphs for R-HybridSN, HybridSN, SSRN, and HybridGBN-SR for each epoch over SA dataset: (a) The training accuracy graph; (b) The loss convergence graph.
Figure 13. The confusion matrix of IP dataset: (a) R-HybridSN; (b) HybridSN; (c) SSRN; (d) HybridGBN-SR.
Figure 14. The confusion matrix of UP dataset: (a) R-HybridSN; (b) HybridSN; (c) SSRN; (d) HybridGBN-SR.
Figure 15. The confusion matrix of SA dataset: (a) R-HybridSN; (b) HybridSN; (c) SSRN; (d) HybridGBN-SR.
Figure 16. Classification maps of IP dataset: (a) Ground truth; (b) R-HybridSN; (c) HybridSN; (d) SSRN; (e) HybridGBN-SR.
Figure 17. Classification maps of UP dataset: (a) Ground truth; (b) R-HybridSN; (c) HybridSN; (d) SSRN; (e) HybridGBN-SR.
Figure 18. Classification maps of SA dataset: (a) Ground truth; (b) R-HybridSN; (c) HybridSN; (d) SSRN; (e) HybridGBN-SR.
Figure 19. Varying training sample data for (a) IP; (b) UP; (c) SA datasets.
Table 1. Per Class information for IP dataset.
Class No | Class Label | Total Samples (Pixels) | Total Samples (%) | Training | Testing
1 | Alfalfa | 46 | 0.45 | 2 | 44
2 | Corn-notill | 1428 | 13.93 | 71 | 1357
3 | Corn-mintill | 830 | 8.1 | 41 | 789
4 | Corn | 237 | 2.31 | 12 | 225
5 | Grass-pasture | 483 | 4.71 | 24 | 459
6 | Grass-trees | 730 | 7.12 | 37 | 693
7 | Grass-pasture-mowed | 28 | 0.27 | 1 | 27
8 | Hay-windrowed | 478 | 4.66 | 24 | 454
9 | Oats | 20 | 0.2 | 1 | 19
10 | Soybean-notill | 972 | 9.48 | 49 | 923
11 | Soybean-mintill | 2455 | 23.95 | 123 | 2332
12 | Soybean-clean | 593 | 5.79 | 30 | 563
13 | Wheat | 205 | 2 | 10 | 195
14 | Woods | 1265 | 12.34 | 63 | 1202
15 | Buildings-Grass-Trees-Drives | 386 | 3.77 | 19 | 367
16 | Stone-Steel-Towers | 93 | 0.91 | 5 | 88
Table 2. Per Class information for the UP dataset.
Class No | Class Label | Total Samples (Pixels) | Total Samples (%) | Training | Testing
1 | Asphalt | 6631 | 15.5 | 66 | 6565
2 | Meadows | 18,649 | 43.6 | 186 | 18,463
3 | Gravel | 2099 | 4.9 | 21 | 2078
4 | Trees | 3064 | 7.16 | 31 | 3033
5 | Painted | 1345 | 3.14 | 13 | 1332
6 | Bare | 5029 | 11.76 | 50 | 4979
7 | Bitumen | 1330 | 3.11 | 13 | 1317
8 | Self-Blocking | 3682 | 8.61 | 37 | 3645
9 | Shadows | 947 | 2.21 | 10 | 937
Table 3. Per Class information for the SA dataset.
Class No | Class Label | Total Samples (Pixels) | Total Samples (%) | Training | Testing
1 | Broccoli_green_weeds_1 | 2009 | 3.71 | 20 | 1989
2 | Broccoli_green_weeds_2 | 3726 | 6.88 | 37 | 3689
3 | Fallow | 1976 | 3.65 | 20 | 1956
4 | Fallow_rough_plow | 1394 | 2.58 | 14 | 1380
5 | Fallow_smooth | 2678 | 4.95 | 27 | 2651
6 | Stubble | 3959 | 7.31 | 39 | 3920
7 | Celery | 3579 | 6.61 | 36 | 3543
8 | Grapes_untrained | 11,271 | 20.82 | 113 | 11,158
9 | Soil_vineyard_develop | 6203 | 11.46 | 62 | 6141
10 | Corn_senesced_green_weeds | 3278 | 6.06 | 33 | 3245
11 | Lettuce_romaine_4wk | 1068 | 1.97 | 11 | 1057
12 | Lettuce_romaine_5wk | 1927 | 3.56 | 19 | 1908
13 | Lettuce_romaine_6wk | 916 | 1.69 | 9 | 907
14 | Lettuce_romaine_7wk | 1070 | 1.98 | 11 | 1059
15 | Vineyard_untrained | 7268 | 13.43 | 72 | 7196
16 | Vineyard_vertical_trellis | 1807 | 3.34 | 18 | 1789
Table 4. The Kappa, OA, and AA results in percentage of the compared models at 5% training sample data over the IP dataset.
Class | 2D-CNN | M3D-DCNN | HybridSN | R-HybridSN | SSRN | HybridGBN-Vanilla | HybridGBN-SSR | HybridGBN-SR
1 | 7.95 | 27.5 | 61.82 | 45 | 12.99 | 84.68 | 81.13 | 83.12
2 | 70.69 | 59.15 | 92.25 | 95.45 | 93.04 | 94.76 | 95.96 | 95.55
3 | 52.84 | 45.07 | 92.97 | 97.36 | 93.72 | 98.89 | 98.58 | 99.51
4 | 27.51 | 38.49 | 78.22 | 94.8 | 72.38 | 93.44 | 93.7 | 96.38
5 | 90.44 | 70.33 | 96.6 | 98.85 | 98.16 | 99.69 | 99.72 | 99.6
6 | 98.59 | 97.2 | 98.11 | 99.32 | 99.86 | 99.07 | 98.99 | 99.09
7 | 10.37 | 18.52 | 68.52 | 95.56 | 0 | 98.37 | 95.77 | 99.47
8 | 99.96 | 98.04 | 99.96 | 100 | 99.94 | 100 | 100 | 100
9 | 16.32 | 25.79 | 83.68 | 65.26 | 0 | 64.66 | 76.69 | 78.2
10 | 67.84 | 55.85 | 96.12 | 95.9 | 91.01 | 97.83 | 97.29 | 96.61
11 | 78.16 | 76.2 | 96.66 | 98.09 | 95.63 | 97.89 | 98.36 | 98.21
12 | 42.01 | 33.89 | 85.44 | 89.15 | 87.9 | 90.57 | 91.29 | 92.46
13 | 98.97 | 91.23 | 94.97 | 99.74 | 98.53 | 97.05 | 98.53 | 98.1
14 | 97.65 | 94.68 | 99.34 | 99.26 | 99.82 | 99.56 | 99.3 | 99.73
15 | 62.62 | 42.37 | 82.92 | 87.66 | 82.09 | 94.36 | 92.76 | 92.41
16 | 76.02 | 49.32 | 80 | 88.18 | 82.31 | 91.52 | 91.06 | 89.94
Kappa | 0.718 ± 0.01 | 0.642 ± 0.045 | 0.934 ± 0.012 | 0.96 ± 0.004 | 0.923 ± 0.49 | 0.968 ± 0.43 | 0.97 ± 0.4 | 0.971 ± 0.25
OA (%) | 75.47 ± 0.81 | 68.88 ± 3.77 | 94.24 ± 1.01 | 96.46 ± 0.33 | 93.39 ± 0.43 | 97.15 ± 0.38 | 97.32 ± 0.35 | 97.42 ± 0.22
AA (%) | 62.37 ± 1.64 | 57.73 ± 6.52 | 87.97 ± 1.93 | 90.6 ± 1.53 | 75.28 ± 1.25 | 93.9 ± 1.11 | 94.32 ± 1.89 | 94.9 ± 2.4
Table 5. The Kappa, OA, and AA results in percentage of the compared models at 1% training sample data over the UP dataset.
Class | 2D-CNN | M3D-DCNN | HybridSN | R-HybridSN | SSRN | HybridGBN-Vanilla | HybridGBN-SSR | HybridGBN-SR
1 | 96.88 | 90.56 | 95.72 | 96.94 | 98.76 | 97.54 | 98.02 | 98.13
2 | 99.01 | 89.47 | 99.68 | 99.69 | 99.91 | 99.65 | 99.81 | 99.55
3 | 75.08 | 59.11 | 84.38 | 87.17 | 85.72 | 90.77 | 90.67 | 93.81
4 | 87.74 | 93.25 | 87.7 | 89.15 | 94.85 | 90.23 | 87.49 | 91.07
5 | 98.17 | 93.66 | 98.99 | 99.51 | 99.76 | 99.75 | 99.5 | 99.09
6 | 75.51 | 69.63 | 96.82 | 98.44 | 96.11 | 97.55 | 97.49 | 98.85
7 | 61.32 | 65.71 | 84.42 | 95.82 | 95.98 | 99.29 | 95.75 | 99.44
8 | 80.61 | 78.35 | 89.18 | 93.28 | 94.96 | 93.8 | 92.22 | 95.82
9 | 97.97 | 94.41 | 71.71 | 77.82 | 99.89 | 91.65 | 94.22 | 92.07
Kappa | 0.881 ± 0.008 | 0.798 ± 0.016 | 0.935 ± 0.011 | 0.955 ± 0.007 | 0.97 ± 0.54 | 0.964 ± 0.56 | 0.960 ± 0.82 | 0.972 ± 0.53
OA (%) | 91.13 ± 0.55 | 84.63 ± 1.21 | 95.09 ± 0.8 | 96.59 ± 0.5 | 97.67 ± 0.4 | 97.28 ± 0.42 | 97.02 ± 0.61 | 97.85 ± 0.4
AA (%) | 85.81 ± 1.48 | 81.57 ± 1.79 | 89.84 ± 1.93 | 93.09 ± 1.2 | 96.22 ± 0.82 | 95.58 ± 0.79 | 95.02 ± 1.26 | 96.42 ± 0.54
Table 6. The Kappa, OA, and AA results in percentage of the compared models at 1% training sample data over the SA dataset.
Class | 2D-CNN | M3D-DCNN | HybridSN | R-HybridSN | SSRN | HybridGBN-Vanilla | HybridGBN-SSR | HybridGBN-SR
1 | 99.97 | 94.88 | 99.99 | 100 | 100 | 100 | 100 | 100
2 | 99.86 | 99.61 | 100 | 99.97 | 100 | 100 | 100 | 100
3 | 99.43 | 91.89 | 99.82 | 99.49 | 99.96 | 99.92 | 99.97 | 100
4 | 98.83 | 98.33 | 98.38 | 98.72 | 99.72 | 99.23 | 97.64 | 99.67
5 | 96.77 | 98.83 | 99.26 | 98.43 | 98.73 | 98.43 | 98.66 | 99
6 | 99.79 | 98.09 | 99.93 | 99.9 | 100 | 99.89 | 99.89 | 99.71
7 | 99.33 | 97.67 | 99.95 | 99.96 | 99.99 | 99.95 | 99.94 | 100
8 | 87.39 | 82.4 | 97.77 | 98.23 | 95.06 | 99.75 | 99.64 | 99.69
9 | 99.97 | 98.14 | 99.99 | 99.99 | 100 | 100 | 100 | 100
10 | 93.98 | 87.6 | 98.36 | 97.9 | 98.33 | 98.78 | 98.78 | 99
11 | 89.62 | 86.72 | 96.06 | 96.46 | 97.42 | 99.53 | 98.72 | 99.18
12 | 99.99 | 96.99 | 97.44 | 99.09 | 100 | 99.98 | 99.59 | 99.69
13 | 98.52 | 97.14 | 97.42 | 82.82 | 93.02 | 82.89 | 85.42 | 93.89
14 | 97.64 | 91.78 | 99.52 | 97.25 | 95.62 | 94.94 | 98.03 | 95.71
15 | 79.46 | 64.42 | 97.06 | 95.12 | 88.18 | 97.2 | 98.31 | 98.21
16 | 95.71 | 78.14 | 100 | 99.71 | 99.49 | 99.98 | 99.98 | 99.96
Kappa | 0.928 ± 0.003 | 0.867 ± 0.002 | 0.985 ± 0.007 | 0.98 ± 0.004 | 0.966 ± 0.61 | 0.989 ± 0.61 | 0.991 ± 0.34 | 0.993 ± 0.16
OA (%) | 93.55 ± 0.26 | 88.02 ± 1.35 | 98.72 ± 0.59 | 98.25 ± 0.4 | 96.94 ± 0.55 | 98.91 ± 0.55 | 99.16 ± 0.31 | 99.34 ± 0.14
AA (%) | 96.02 ± 0.42 | 91.41 ± 0.81 | 98.81 ± 0.5 | 97.69 ± 0.69 | 97.84 ± 0.52 | 97.84 ± 1 | 98.41 ± 1.06 | 98.98 ± 0.32
Table 7. The effect of varying the training sample data for the SSRN, HybridSN, R-HybridSN, and HybridGBN-SR models on the overall accuracy (OA) over the IP dataset.
Model | 20% | 10% | 8% | 5% | 2% (training sample data in percentage)
2D-CNN | 91.23 ± 0.21 | 83.86 ± 1 | 82.43 ± 0.62 | 75.47 ± 0.81 | 67.13 ± 1.12
M3D-DCNN | 90.03 ± 2.18 | 80.1 ± 4.56 | 78.04 ± 2.13 | 68.88 ± 3.77 | 62.28 ± 3.18
HybridSN | 99.3 ± 0.18 | 97.66 ± 0.23 | 96.37 ± 1.19 | 94.24 ± 1.01 | 83.14 ± 1.6
R-HybridSN | 99.52 ± 0.16 | 98.44 ± 0.44 | 98.12 ± 0.35 | 96.46 ± 0.33 | 86.67 ± 1.02
SSRN | 98.91 ± 0.12 | 97.25 ± 0.35 | 96.33 ± 0.41 | 93.39 ± 0.43 | 84.3 ± 1.61
HybridGBN-SR | 99.3 ± 0.2 | 98.62 ± 0.22 | 98.31 ± 0.26 | 97.42 ± 0.22 | 91.44 ± 0.39
Table 8. The effect of varying the training sample data for the SSRN, HybridSN, R-HybridSN, and HybridGBN-SR models on the overall accuracy (OA) over the UP dataset.
Model | 5% | 2% | 1% | 0.80% | 0.40% (training sample data)
2D-CNN | 96.59 ± 0.21 | 94.5 ± 0.4 | 91.82 ± 0.56 | 89.98 ± 0.38 | 85.27 ± 0.90
M3D-DCNN | 92.8 ± 0.95 | 89.27 ± 1.35 | 87.19 ± 1.71 | 82.75 ± 2.84 | 76.53 ± 3.94
HybridSN | 99.45 ± 0.09 | 97.86 ± 0.56 | 95.86 ± 0.93 | 93.3 ± 1.41 | 85.95 ± 1.58
SSRN | 99.57 ± 0.13 | 99.07 ± 0.17 | 97.67 ± 0.4 | 97.12 ± 0.28 | 93.41 ± 0.77
R-HybridSN | 99.47 ± 0.14 | 98.47 ± 0.27 | 96.4 ± 1.66 | 95.64 ± 0.52 | 91.60 ± 1.12
HybridGBN-SR | 99.54 ± 0.07 | 99.13 ± 0.17 | 97.85 ± 0.4 | 97.33 ± 0.45 | 94.14 ± 0.61
Table 9. The effect of varying the training sample data for the SSRN, HybridSN, R-HybridSN, and HybridGBN-SR models on the overall accuracy (OA) over the SA dataset.
Model | 5% | 2% | 1.00% | 0.80% | 0.40% (training sample data)
2D-CNN | 96.63 ± 0.24 | 94.67 ± 0.15 | 93.55 ± 0.26 | 93.03 ± 0.26 | 91.38 ± 0.44
M3D-DCNN | 92.65 ± 0.49 | 90.17 ± 0.56 | 88.02 ± 1.35 | 86.82 ± 1.18 | 83.42 ± 1.6
HybridSN | 99.83 ± 0.1 | 99.57 ± 0.25 | 98.72 ± 0.59 | 97.78 ± 0.78 | 94.88 ± 0.9
R-HybridSN | 99.82 ± 0.04 | 99.36 ± 0.14 | 98.25 ± 0.4 | 96.97 ± 0.57 | 94.33 ± 0.48
SSRN | 98.7 ± 0.51 | 98.02 ± 0.16 | 96.94 ± 0.55 | 96.87 ± 0.29 | 93.64 ± 0.22
HybridGBN-SR | 99.94 ± 0.02 | 99.72 ± 0.11 | 99.34 ± 0.14 | 98.37 ± 0.43 | 95.8 ± 1.19
Table 10. The training and testing time in seconds over IP, UP, and SA datasets using SSRN, HybridSN, R-HybridSN, and HybridGBN-SR.
Dataset | SSRN Train | SSRN Test | HybridSN Train | HybridSN Test | R-HybridSN Train | R-HybridSN Test | HybridGBN-SR Train | HybridGBN-SR Test
IP | 91.1 | 2.6 | 31.9 | 3.2 | 23.1 | 2.6 | 43.6 | 3.4
UP | 108.9 | 7.3 | 12.4 | 6.9 | 30.1 | 9.4 | 21.3 | 6.2
SA | 122.6 | 12.3 | 13.2 | 8.9 | 16.4 | 12.3 | 30.2 | 12.9
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

