Article

Triple Graph Convolutional Network for Hyperspectral Image Feature Fusion and Classification

1 Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran P.O. Box 14115-111, Iran
2 Remote Sensing Technology Institute, German Aerospace Center (DLR), Muenchener Strasse 20, 82234 Wessling, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(9), 1623; https://doi.org/10.3390/rs17091623
Submission received: 19 March 2025 / Revised: 16 April 2025 / Accepted: 30 April 2025 / Published: 3 May 2025

Abstract

Most graph-based networks utilize superpixel generation methods as a preprocessing step, considering superpixels as graph nodes. For hyperspectral images with high variability in spectral features, considering an image region as a graph node may degrade the class discrimination ability of networks for pixel-based classification. Moreover, most graph-based networks focus on global feature extraction, while both local and global information are important for pixel-based classification. To deal with these challenges, superpixel-based graphs are set aside in this work, and a Graph-based Feature Fusion (GF2) method relying on three different graphs is proposed instead. A local patch is considered around each pixel under test and, at the same time, global anchors with the highest informational content are selected from the entire scene. While the first graph explores relationships between neighboring pixels in the local patch and the global anchors, the second and third graphs use the global anchors and the pixels of the local patch as nodes, respectively. These graphs are processed using graph convolutional networks, and their results are fused using a cross-attention mechanism. Experiments on three hyperspectral benchmark datasets show that the GF2 network achieves high classification performance compared to state-of-the-art methods, while requiring a reasonable number of learnable parameters.

1. Introduction

Hyperspectral images consisting of up to hundreds of contiguous and narrow spectral channels enable identifying different materials on the ground based on their spectral signatures [1]. They have been successfully used in different applications, such as land cover classification, target detection [2], vegetation monitoring, agriculture [3], urban mapping, and mineral identification [4,5,6].
Among the hyperspectral image classification methods proposed over the years, early approaches feed the spectral features to soft or hard classifiers, such as maximum likelihood [7], support vector machine (SVM) [8], nearest neighbor [9], random forest [10], and spectral angle [11]. These methods are not particularly robust to the intraclass spectral variability in hyperspectral images, usually yielding salt and pepper noise in the generated classification maps [12]. To address this problem, contextual information such as shape and texture has been coupled with spectral features in the literature, e.g., through the gray level co-occurrence matrix (GLCM) [13], Gabor filters [14], morphological profiles [15], and Markov random fields [16]. Object- or segmentation-based methods [17] have also been suggested for considering spatial correlations among pixels, where a unified label is assigned to all pixels of a superpixel (object). For example, simple linear iterative clustering (SLIC) is a well-known segmentation algorithm that takes spatial information into account with a limited computational burden [18,19].
Although spectral-spatial feature extraction and the feeding of extracted features to an appropriate classifier can improve the classification accuracy, the two steps of feature extraction and classification are usually carried out separately, and the extracted features may not sufficiently fit the chosen classifier when their training processes are completed independently. Furthermore, high-order semantic features cannot be well recognized by the mentioned methods. Deep learning opened new perspectives in remote sensing, with automatic feature extraction and classification through the end-to-end training of a unified framework, providing reliable results for both data processing and decision making [20,21].
Convolutional neural networks (CNNs) with hierarchical feature extraction have shown high ability in extracting high-level features and semantic information [22,23]. While two-dimensional CNNs (2DCNN) [24] mostly extract spatial information in subsequent layers, three-dimensional CNNs (3DCNN) [25] have shown superior performance for simultaneous spectral-spatial feature extraction and classification of hyperspectral images, due to the three-dimensional nature of a hyperspectral dataset.
Although CNNs have a high ability in local feature extraction from neighborhood regions, due to their limited receptive fields, they do not consider middle and long-range dependencies. Therefore, they may fail at capturing global information in the image. Transformers utilizing self-attention mechanisms have been introduced to solve this disadvantage [26,27]. However, CNNs and transformers are appropriate networks for regular data in Euclidean space, i.e., data structured in a traditional grid-like format, such as images arranged by rows and columns [28]. Moreover, CNNs apply convolutional operations on fixed square image regions and cannot adapt to more irregular or varied shapes and sizes of regions within the image. In some cases, these may not be flexible enough for regions with different geometric properties. As an extension of CNN for non-gridded data, graph convolutional networks (GCN) [29,30] aggregate contextual relations and propagate them across graph nodes. As a result, GCNs have a higher ability in processing irregular data in a non-Euclidean space so that the neighborhood considered can be adapted to non-homogeneous or complex regions, such as target boundaries in hyperspectral images.
For the deployment of multiscale information in GCNs, multiple graphs with different neighborhood scales are considered in the multiscale dynamic GCN (MDGCN) [31]. This approach introduces a dynamic and multiscale graph convolution operation instead of using a predefined fixed graph, with the fused feature embeddings updating the similarity measured between pixels, and using superpixels as graph nodes for complexity reduction. However, MDGCN performs graph convolution separately at different spatial scales and is limited by neighborhoods having a fixed size. To consider the interaction of multiscale information, the dual interactive GCN (DIGCN) relies on dual GCN branches, where the edge information of one branch is refined by the other one [32].
Since the receptive field of GCN is often limited to a fairly small region, the context-aware dynamic GCN (CAD-GCN) [33] captures long-range contextual relations through successive graph convolutions, simultaneously refining the graph edges and connective relationships among image regions.
Existing GCN models usually rely on predefined receptive fields, which may limit their ability to adaptively select the most significant neighborhood for a specific location. To deal with this issue, the dynamic adaptive sampling GCN (DAS-GCN) [34] dynamically obtains the receptive field through adaptive sampling. DAS-GCN discovers the most meaningful receptive field adaptively and simultaneously adjusts the edge adjacency weights after implementing each adaptive sampling. After each iteration, the graph is updated and refined dynamically.
Existing GCNs usually utilize superpixel segmentation as a pre-processing step in order to reduce computational complexity. However, a superpixel may contain pixels with different labels. Moreover, the spectral-spatial features in the local regions of a superpixel may be ignored. To handle these hindrances, the end-to-end mixhop superpixel-based GCN (EMS-GCN) introduces a differentiable superpixel segmentation algorithm, which is able to refine the superpixel boundaries with the network training [12]. Subsequently, the constructed superpixel graph is given to a mixhop superpixel GCN where long-range dependencies among superpixels are explored.
Due to the limited availability of labeled samples, supervised information is not usually sufficient. To improve the feature representation of GCN, the contrastive GCN (ConGCN) explores supervision signals from both spectral and spatial information using contrastive learning [35]. ConGCN utilizes a semi-supervised contrastive loss function for maximizing the agreement among different views of the same node or nodes related to the same category, adopting a generative loss function which benefits from considering graph topology.
Most of the existing graphs are manually constructed and updated. The automatic GCN (Auto-GCN) instead models the interaction of high-order tensors [36]. Auto-GCN uses the representation learning abilities of CNNs, embedding a semi-supervised Siamese network into a GCN to yield dynamic updating and automatic learning of the graph. A Confucius tri-learning paradigm, inspired by the remarks of Confucius, is introduced in [37]. To this end, three models are trained together: two classifiers and one generator. While each of the two classifiers can learn from good examples provided by the other, they can also learn from bad examples provided by the generator. This approach is useful for classification tasks with limited training samples, because the labeled data are augmented with good examples, and the discrimination ability of the classifier against fake targets is enhanced using bad examples.
As mentioned, most of the graph-based convolutional networks first apply a pixel-to-region assignment by performing a superpixel segmentation method such as SLIC, and consider the obtained image regions as graph nodes. Thus, the constructed graph explores relationships among different regions of the image. However, due to the high spectral variability of hyperspectral images in different areas of an acquired scene, considering a global graph for spatial information propagation and contextual feature aggregation may not be efficient for pixel-based classification. Moreover, constructing a global graph from superpixels of the whole scene may yield a large graph with high complexity. To deal with these issues, a simple and light triple graph-based network, fusing both local and global information, is introduced in this work, which is not based on superpixel generation.
The proposed graph-based feature fusion (GF2) network is composed of three types of graphs for pixel-based classification. To define the graphs, on the one hand, a local patch around each pixel is considered. On the other hand, the centroids of the clusters obtained from an unsupervised clustering of the whole hyperspectral image, with the highest local entropy points selected as seeds, are considered as anchors. The first graph explores relationships between the pixels of the local patch and the global anchors, and is therefore a local-global graph. The second graph finds relationships among the anchors and is therefore global. Finally, the third graph is local and computes the relationships among the pixels within the considered image patch.
The graphs are processed using individual GCNs. The outputs of the first two graphs are multiplied and fused with the third graph through a cross-attention mechanism. The fused local-global features are finally used for hyperspectral image classification. The experimental results show the efficiency of the proposed GF2 method compared to several state-of-the-art algorithms. The remainder of the paper is organized as follows: Section 2 describes the proposed network in detail. Section 3 presents the experimental results and an ablation study, with comparisons with benchmark methods, including several graph-based networks. Finally, Section 4 concludes the paper and outlines future lines of work.

2. Method

A graph-based feature fusion (GF2) network is proposed for hyperspectral image classification. To improve the network learning process, the dimensionality of the hyperspectral image is first reduced from $b$ to $d$ ($d < b$) by applying the principal component analysis (PCA) transform [38]. An alternative would be the application of the Minimum Noise Fraction (MNF) transform [39]. For each pixel under test, three small graphs are considered.
Let us consider a $p \times p$ patch around each given pixel, where $n = p^2$ is the number of pixels within the patch. In parallel, $m$ anchors are selected from the entire scene to derive a global characterization of the image. The number of anchors should be larger than the number of semantic classes in the image, in order to account for intra-class spectral variability, to cover spectral classes not represented by the available semantic labels, and to convey information related to classes with multiple clusters. To this end, we set $m = 1.5\,n_c$, where $n_c$ is the number of semantic classes (the coefficient 1.5 is a fixed value that showed empirically good results across the different datasets in our experiments, while keeping the size of the global graph small).
The anchors are chosen as the centroids produced by a K-means clustering applied to the hyperspectral image with $m$ clusters. Instead of randomly initializing the cluster centroids, we select the points having the highest entropy values, since clustering algorithms such as K-means are sensitive to the initialization step. The assumption is that the global anchors should be representative of the structural distribution of the entire dataset, serving as graph nodes to model relationships across the image; they should therefore be informative, diverse, and spatially distributed. Because high-entropy points lie in informative regions such as boundaries, transitions, or complex mixtures, selecting them as initial seeds helps capture the structure and variability of the image, which meaningfully improves the quality of the clusters. Conversely, pixels in homogeneous regions tend to have similar spectra, i.e., they are characterized by a low entropy. To this end, the entropy of a $9 \times 9$ neighborhood around each pixel is computed. This is done for each principal component, and the average entropy over all bands is assigned to the central pixel. For each given pixel, the entropy is derived as:
$$E = -\frac{1}{d} \sum_{j=1}^{d} \sum_{i \in L} p_{ij} \log_2 p_{ij}$$
where $p_{ij}$ is the normalized histogram obtained from the image, $L$ is the neighborhood window, and $d$ is the number of dimensions after the PCA rotation. The computed cluster centers are considered to have the highest informational content and are selected as the $m$ anchors carrying the global representation of the image.
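To make the anchor selection concrete, the following minimal Python sketch computes the average local entropy over the principal components and seeds K-means with the highest-entropy pixels. It assumes numpy and scikit-learn; the helper names, the 64-bin histogram, and the brute-force window loop are our own illustrative choices and are not specified in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def local_entropy(pca_cube, win=9, n_bins=64):
    """Average local entropy over the d principal components.

    pca_cube: (H, W, d) PCA-reduced image. Returns an (H, W) entropy map."""
    H, W, d = pca_cube.shape
    r = win // 2
    ent = np.zeros((H, W))
    for b in range(d):
        band = pca_cube[:, :, b]
        lo, hi = band.min(), band.max()
        for i in range(H):
            for j in range(W):
                patch = band[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
                hist, _ = np.histogram(patch, bins=n_bins, range=(lo, hi))
                p = hist / hist.sum()
                p = p[p > 0]
                ent[i, j] += -(p * np.log2(p)).sum()   # entropy of the window in band b
    return ent / d

def select_anchors(pca_cube, n_classes):
    """K-means anchors seeded with the m highest-entropy pixels (m = 1.5 * n_c)."""
    H, W, d = pca_cube.shape
    m = int(round(1.5 * n_classes))
    ent = local_entropy(pca_cube)
    flat = pca_cube.reshape(-1, d)
    seed_idx = np.argsort(ent.ravel())[-m:]            # m highest-entropy pixels as seeds
    km = KMeans(n_clusters=m, init=flat[seed_idx], n_init=1).fit(flat)
    return km.cluster_centers_                          # (m, d) global anchors
```

For example, `anchors = select_anchors(pca_cube, n_classes)` would return the $m$ anchor spectra used by the global graphs described below.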
The block diagram of the proposed GF2 network is shown in Figure 1. It depicts three graph convolutional networks (GCNs), each taking two inputs: a feature matrix $X$ and an adjacency matrix $A$. The inner structure of the suggested GCN block is shown in Figure 2. The cross product in Figure 1 and Figure 2 represents the matrix product. In each GCN, the two inputs, the feature matrix $X$ and the adjacency matrix $A$, are multiplied. The result is passed through a convolutional layer containing one filter of size $3 \times 3$, i.e., 1@conv3×3, followed by a Leaky ReLU layer as a nonlinear activation function. The output of the first Leaky ReLU is added to the feature matrix $X$ through a residual connection, and the result is multiplied by the adjacency matrix $A$. This process is repeated three times, with the difference that in the second and third repetitions, instead of $X$, the output of the previous addition layer is added to the output of the Leaky ReLU layer through the residual connection. The introduced GCN model is therefore a series of operations of the following form:
$$Z^{(l+1)} = \sigma_l\!\left(A\,Z^{(l)}\,W^{(l)}\right) + Z^{(l)}$$
where $\sigma_l$ is the activation function (Leaky ReLU here), $Z^{(l)}$ is the feature matrix in layer $l$ (with $Z^{(1)} = X$), and $W^{(l)}$ contains the learnable parameters of the convolutional filter. The output of the last multiplication operation is passed through a 1@conv3×3 layer to provide the feature matrix of the graph, $G$. In the remainder of this section, the three constructed graphs and their fusion are detailed.
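As an illustration, the sketch below (assuming numpy and scipy) implements the recursion above, modeling each 1@conv3×3 as a 2-D convolution applied to the feature matrix treated as a single-channel image; interpreting the learnable $W^{(l)}$ as plain 3×3 kernels is our reading of Figure 2, not code from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def gcn_block(X, A, kernels, out_kernel):
    """One GCN block of Figure 2: Z <- LeakyReLU(conv3x3(A Z)) + Z, repeated for the
    three 3x3 kernels, followed by a final 1@conv3x3 applied to A Z.

    X: feature matrix, A: adjacency matrix, kernels: list of three (3, 3) arrays,
    out_kernel: (3, 3) array; in the real network all kernels are learned."""
    Z = X
    for W in kernels:
        Z = leaky_relu(convolve2d(A @ Z, W, mode='same')) + Z  # residual connection
    return convolve2d(A @ Z, out_kernel, mode='same')          # graph feature matrix G
```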

2.1. Graph 1

The first graph explores the relationships between the local pixels in a $p \times p$ patch and the $m$ global anchors, considering the feature matrix $X_1 \in \mathbb{R}^{n \times m}$ and the adjacency matrix $A_1 \in \mathbb{R}^{n \times n}$. The elements of the feature matrix are obtained as follows:
$$[X_1]_{ij} = \left\lVert \mathbf{p}_i - \mathbf{q}_j \right\rVert_2, \quad i = 1, 2, \dots, n; \; j = 1, 2, \dots, m$$
where $[X_1]_{ij}$ is the element $(i, j)$ of matrix $X_1$, $\mathbf{p}_i \in \mathbb{R}^{d \times 1}$, $i = 1, 2, \dots, n$, is the $i$th pixel in the local patch, and $\mathbf{q}_j \in \mathbb{R}^{d \times 1}$, $j = 1, 2, \dots, m$, is the $j$th global anchor. So, the feature matrix $X_1$ contains the differences between the pixels in a local neighborhood and the global anchors.
The inverse distance between the feature vectors of each pair of nodes creates stronger connections between close nodes in the adjacency matrix. However, a plain inverse distance is highly sensitive to small distances, as small changes in distance may result in large changes in the computed adjacency weight. To deal with this issue, logarithmic scaling is used. The log transformation compresses large values and expands small values, mitigating the extreme influence of very small distances in inverse-distance schemes. This enhances the global connectivity by avoiding fragmented graphs with disconnected or weakly connected components. This step also makes the weights more uniform and smooths the eigenvalue spectrum of the adjacency matrix, enhancing training stability in the graph neural network. The elements of the adjacency matrix are computed as:
$$[A_1]_{ij} = \log\!\left(\frac{1}{\left\lVert \mathbf{x}_{1i} - \mathbf{x}_{1j} \right\rVert_2 + 1} + 1\right)$$
where $\lVert \cdot \rVert_2$ denotes the Euclidean norm of its argument, $\mathbf{x}_{1i} \in \mathbb{R}^{m \times 1}$ is the $i$th row of $X_1$, and $[A_1]_{ij}$ is the element $(i, j)$ of matrix $A_1$. The 1 in the denominator is added to avoid infinite values in the fraction. Therefore, in $A_1$, the difference between each pair of rows of the feature matrix $X_1$ is computed; $\mathbf{x}_{1i}$ and $\mathbf{x}_{1j}$ contain the differences between the $i$th and $j$th neighboring pixels within the local patch and the $m$ global anchors, respectively. If the two vectors $\mathbf{x}_{1i}$ and $\mathbf{x}_{1j}$ are similar, the pixels at positions $i$ and $j$ inside the local patch have similar relationships with the $m$ anchors, i.e., with the global representation of the image, and are therefore more likely to belong to the same cluster or class.
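The construction of graph 1 can be sketched in a few lines of numpy; `log_adjacency` is a hypothetical helper implementing the log-scaled inverse-distance weights of the equation above.

```python
import numpy as np

def log_adjacency(F):
    """A_ij = log(1 / (||f_i - f_j||_2 + 1) + 1) for the rows f_i of F."""
    D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)  # pairwise row distances
    return np.log(1.0 / (D + 1.0) + 1.0)

def build_graph1(patch_pixels, anchors):
    """patch_pixels: (n, d) pixels of the p x p patch; anchors: (m, d) global anchors.

    Returns the feature matrix X1 (n, m) of pixel-anchor distances and A1 (n, n)."""
    X1 = np.linalg.norm(patch_pixels[:, None, :] - anchors[None, :, :], axis=-1)
    A1 = log_adjacency(X1)
    return X1, A1
```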

2.2. Graph 2

The second graph is global and contains the relations among the $m$ anchors. The feature matrix of graph 2, indicated by $X_2 \in \mathbb{R}^{m \times d}$, is given by:
$$X_2 = \left[\mathbf{q}_1 \; \mathbf{q}_2 \; \cdots \; \mathbf{q}_m\right]^T$$
where $\mathbf{q}_i \in \mathbb{R}^{d \times 1}$, $i = 1, \dots, m$, is the feature vector of the $i$th anchor containing the $d$ spectral features of the hyperspectral image, and $(\cdot)^T$ denotes the transpose operation. The elements of the adjacency matrix of graph 2, $A_2 \in \mathbb{R}^{m \times m}$, are derived as:
$$[A_2]_{ij} = \log\!\left(\frac{1}{\left\lVert \mathbf{x}_{2i} - \mathbf{x}_{2j} \right\rVert_2 + 1} + 1\right)$$
where $\mathbf{x}_{2i} \in \mathbb{R}^{d \times 1}$, $i = 1, \dots, m$, is the $i$th row of $X_2$, containing the spectral features of the $i$th anchor.
As graph 2 is global and does not contain information on the local patch centered at a given pixel, the feature matrix $X_2$ and the adjacency matrix $A_2$ are the same for all pixels in the image.

2.3. Graph 3

The third graph is local and considers relationships between the $n$ pixels within a local patch. The feature matrix, $X_3 \in \mathbb{R}^{n \times d}$, is given by:
$$X_3 = \left[\mathbf{p}_1 \; \mathbf{p}_2 \; \cdots \; \mathbf{p}_n\right]^T$$
where $\mathbf{p}_i \in \mathbb{R}^{d \times 1}$, $i = 1, 2, \dots, n$, is the $i$th pixel within the local patch centered around the given pixel. The elements of the adjacency matrix of graph 3, $A_3 \in \mathbb{R}^{n \times n}$, are derived as:
$$[A_3]_{ij} = \log\!\left(\frac{1}{\left\lVert \mathbf{x}_{3i} - \mathbf{x}_{3j} \right\rVert_2 + 1} + 1\right)$$
where $\mathbf{x}_{3i} \in \mathbb{R}^{d \times 1}$, $i = 1, \dots, n$, is the $i$th row of $X_3$, i.e., $\mathbf{x}_{3i} = \mathbf{p}_i$, the $i$th pixel within the local patch, and $A_3$ quantifies the similarities between pixels in the local patch.
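For completeness, graphs 2 and 3 follow the same pattern as graph 1 and can reuse the `log_adjacency` helper sketched after Section 2.1; the function names are again ours.

```python
def build_graph2(anchors):
    """Graph 2 (global): X2 = anchors (m, d), A2 (m, m); identical for every pixel."""
    return anchors, log_adjacency(anchors)

def build_graph3(patch_pixels):
    """Graph 3 (local): X3 = patch pixels (n, d), A3 (n, n)."""
    return patch_pixels, log_adjacency(patch_pixels)
```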

2.4. Fusion of Graphs

The variables $X_1$ to $X_3$ and $A_1$ to $A_3$ are the inputs of the GCN blocks GCN1 to GCN3, whose output feature matrices are $G_1 \in \mathbb{R}^{n \times m}$, $G_2 \in \mathbb{R}^{m \times d}$, and $G_3 \in \mathbb{R}^{n \times d}$, respectively. To combine the information of the feature maps of the three graphs, the features of the first two graphs are multiplied as follows:
$$G_{12} = G_1 \times G_2$$
where $G_{12} \in \mathbb{R}^{n \times d}$ contains the combined information of graphs 1 and 2. Therefore, the result of this multiplication is related to the dependencies of the $n$ pixels based on their similarities (or differences) with respect to the global representatives (anchors). Subsequently, the two feature maps $G_{12}$ and $G_3$ should be combined. To this end, a cross-attention mechanism is suggested, as follows.
The query component $Q \in \mathbb{R}^{n \times f}$ is obtained from $G_{12}$, while the key component $K \in \mathbb{R}^{n \times f}$ and the value component $V \in \mathbb{R}^{n \times f}$ are computed from $G_3$. Here, $f$ is the number of features in the projected feature space obtained by applying the projection matrices $W_Q \in \mathbb{R}^{d \times f}$, $W_K \in \mathbb{R}^{d \times f}$, and $W_V \in \mathbb{R}^{d \times f}$ (here we consider $f = d$). The obtained components are:
$$Q = G_{12} W_Q$$
$$K = G_3 W_K$$
$$V = G_3 W_V$$
Note that $W_Q$, $W_K$, and $W_V$ are learnable parameters. The attended feature maps are derived as:
$$G_{fused} = V \, \mathrm{softmax}\!\left(\frac{1}{\sqrt{f}}\, Q^{T} K\right)$$
where $G_{fused}$ is the fusion result of the feature maps $G_{12}$ and $G_3$, which contains information from all three graphs. In other words, the similarity between the query component from $G_{12}$ and the key component from $G_3$ is computed through their scaled product, $\frac{1}{\sqrt{f}} Q^T K$, and normalized by the softmax operation. Then, multiplication of the normalized weights with the value component $V$ from $G_3$ yields the weighted feature maps of graph 3 as the fusion result $G_{fused}$.
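The fusion step can be summarized by the following numpy sketch; the softmax axis and the $1/\sqrt{f}$ scaling follow the standard scaled dot-product convention (an assumption on our side), and the projection matrices are taken as learned elsewhere.

```python
import numpy as np

def softmax(Z, axis=-1):
    Z = Z - Z.max(axis=axis, keepdims=True)   # numerical stabilization
    E = np.exp(Z)
    return E / E.sum(axis=axis, keepdims=True)

def cross_attention_fusion(G12, G3, W_Q, W_K, W_V):
    """Cross-attention fusion of Section 2.4.

    G12, G3: (n, d) feature maps; W_Q, W_K, W_V: (d, f) learned projections (f = d).
    Returns G_fused (n, f) = V softmax(Q^T K / sqrt(f))."""
    Q = G12 @ W_Q                                   # queries from the local-global branch
    K = G3 @ W_K                                    # keys from the local branch
    V = G3 @ W_V                                    # values from the local branch
    f = Q.shape[1]
    attn = softmax(Q.T @ K / np.sqrt(f), axis=-1)   # (f, f) normalized similarity
    return V @ attn                                 # (n, f) fused feature map
```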

2.5. Output

After the fusion of the feature maps of the three graphs through the cross-attention mechanism, the feature matrix of graph 3, $X_3 \in \mathbb{R}^{n \times d}$, is added to $G_{fused} \in \mathbb{R}^{n \times f}$, where $f = d$ is enforced to enable the addition. The result, i.e., $X_3 + G_{fused}$, is fed into the output block, which consists of a 16@conv3×3 layer, followed by a ReLU activation function and dropout with a dropping probability of 0.2. Finally, the last part of the network is composed of a fully connected (FC) layer with $n_c$ neurons, where $n_c$ is the number of classes, a softmax, and a classification layer. The output label is assigned to the central pixel of the local patch given in input to the network.
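A PyTorch-style sketch of the output head described above is given below, assuming the fused $n \times d$ map $X_3 + G_{fused}$ is treated as a single-channel image; layer choices beyond those stated in the text (e.g., the flattening before the FC layer) are our assumptions.

```python
import torch.nn as nn

def output_block(n, d, n_classes):
    """Output head of Section 2.5: 16@conv3x3 -> ReLU -> dropout(0.2) -> FC(n_c) -> softmax.

    Expects inputs of shape (batch, 1, n, d)."""
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 16 @ conv 3x3 with "same" padding
        nn.ReLU(),
        nn.Dropout(p=0.2),
        nn.Flatten(),
        nn.Linear(16 * n * d, n_classes),            # FC layer with n_c neurons
        nn.Softmax(dim=1),
    )
```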

3. Results

3.1. Datasets and Parameter Settings

Three hyperspectral datasets are used in this section [40]. The Indian Pines dataset was collected in Northwestern Indiana in 1992 by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). This hyperspectral image comprises 200 spectral channels in the spectral range of 0.4 to 2.5 μm, after the removal of 20 water absorption bands. The Indian Pines dataset has a nominal spectral resolution of 10 nm, a spatial resolution of 20 m, 145 × 145 pixels, and 16 labeled agricultural-forest classes.
The University of Pavia dataset was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) in 2001. After the removal of noisy bands, the number of spectral channels is 103, in the spectral range of 0.43 to 0.86 μm. The University of Pavia dataset has a spectral resolution of 4 nm, a spatial resolution of 1.3 m per pixel, 610 × 340 pixels, and nine labeled semantic classes.
The Salinas dataset was collected over the Salinas Valley, California, in 1998 by AVIRIS. It has 204 spectral channels in the range of 0.4–2.5 μm, after the removal of water absorption bands. This image has a nominal spectral resolution of 10 nm, a ground sampling distance of 3.7 m, 512 × 217 pixels, and 16 classes. The ground truth maps (GTM) of the three datasets, with the associated legends, are shown in Figure 3, Figure 4 and Figure 5.
In all datasets, 30 labeled pixels in each class are randomly chosen as training samples for classes containing more than 30 pixels, while 15 labeled pixels are considered otherwise. From the remaining samples, 5% are randomly chosen as validation data and the remaining are used as test samples.
The experiments are implemented on a laptop with an Intel Core i5 CPU and 32 GB of RAM using MATLAB R2022b. The proposed network and those of all competitors are trained with the Adam optimizer, an initial learning rate of 0.001, and 50 epochs. For the Indian Pines and Pavia datasets, the mini-batch size is set to 64, while for Salinas it is set to 16.
We conducted preliminary experiments in order to set the spatial patch size $p \times p$, considering the values $5 \times 5$, $7 \times 7$, $9 \times 9$, and $11 \times 11$. Although a larger patch contains additional information, this information may be unrelated to the central pixel, which can degrade the classification results. For the Indian Pines and Pavia datasets, a patch size of $7 \times 7$ provides the highest accuracy, while for Salinas this happens for a size of $5 \times 5$. Since more computational resources are required when increasing the patch size, we finally adopt a common value of $5 \times 5$ for all cases.
We select the number of PCs so that they contain 99% of the total variance in each dataset. Guided filtering [41,42] is applied for post-processing the classification outputs of all methods (including, aside from the proposed method, the competitors SVM, SLIC-SVM, 2DCNN, Res2DCNN, 3DCNN, and Res3DCNN, which are introduced and discussed in the next sections). The first three PCs of the hyperspectral image are used as a color guidance image. The filtering size $2r + 1$ and the regularization parameter $\varepsilon$ of the guided filter are set as follows: $r = 5$ and $\varepsilon = 10^{3}$ for Indian Pines, $r = 17$ and $\varepsilon = 10^{6}$ for Pavia, and $r = 15$ and $\varepsilon = 10^{3}$ for Salinas.

3.2. Ablation Study

To assess the performance of each part of the proposed network, the following cases are compared:
  • G12: in this case, the feature map $G_{12}$, i.e., the product of the outputs of the GCN1 and GCN2 blocks (graphs 1 and 2), is fed into the output block for classification.
  • G3: in this case, the feature map $G_3$, i.e., the output of the GCN3 block (graph 3), is fed into the output block for classification.
  • G12 + G3: in this case, the feature maps $G_{12}$ and $G_3$ are fused through the addition operation, and the result is fed into the output block.
  • GF2: this is the full proposed method, where the feature maps $G_{12}$ and $G_3$ are fused through the cross-attention mechanism, and the result is fed into the output block.
In Table 1, the classification results for the four cases listed above are reported for the Indian Pines dataset. In addition to the accuracy (Acc.) and reliability (Rel.) [43] of each class, the average class accuracy, the average class reliability, the overall accuracy, and the Kappa coefficient [44] are reported. To assess whether the difference between each pair of classifiers is statistically significant, McNemar's test [45] is computed, and the obtained Z-scores are reported in Table 2. In the results, GF2 ranks first, with a significant difference with respect to the other configurations. After that, G3 provides the highest accuracy. The performance of G12 and G12 + G3 is similar, and the difference of G12 + G3 with respect to G12 is not statistically significant, according to the corresponding Z-score (Z = 0.29). The GTM and the classification maps of the different cases for the Indian Pines dataset are shown in Figure 6.
The classification results and the obtained Z-scores for the University of Pavia dataset are reported in Table 3 and Table 4, respectively. Also in this case, GF2 ranks first, with a significant difference with respect to the other configurations. Next, G3 and G12 provide the best results, in this order, while G12 + G3 yields the weakest classification results. The classification maps for the different configurations are shown in Figure 7.
The classification and McNemar’s test results for the Salinas dataset are reported in Table 5 and Table 6, respectively, and the corresponding classification maps are shown in Figure 8. As in the previous cases, GF2 ranks first, followed by G3 and G12 with a small difference between them, with a McNemar’s score of |Z| = 1.76 < 1.96, suggesting that the difference between these classifiers is not statistically meaningful.
The following conclusions can be drawn from the reported results:
(1)
Although the performances of G12 and G3 are close, G3 provides slightly better classification results. As illustrated, G3 is the result of graph 3, which contains the relationships among pixels within a local patch, while G12 is the result of multiplying graphs 1 and 2, which quantify the similarities of the global anchors with each pixel of the local patch and with the other anchors, respectively. This shows that the local features within a neighborhood carry more discriminative information than the relationships between the local neighborhood and the global anchors selected from the entire scene.
(2)
Adding G12 and G3, i.e., G12+G3, generally results in weaker performance with respect to each of them taken separately. This implicitly means that the addition operation degrades the discriminative features of G12 and G3.
(3)
GF2, which combines the feature maps G12 and G3 through the cross-attention mechanism, yields the best classification results, with a significant difference with respect to the other configurations.

3.3. Comparison with Other Methods

In this section, the performance of the proposed GF2 network is compared with the following classifiers:
-
SVM: a pixel-based hard classifier where a third-order polynomial is used as a kernel function. In this paper, we use the LIBSVM implementation [46] with default parameters.
-
SLIC-SVM: a superpixel-based classifier. At first, the SLIC algorithm is applied to the first PC of the hyperspectral image, normalized to [0, 1], to provide a segmentation mask. The obtained mask is then applied to all PCs to provide the superpixels. The mean of the feature vectors in each superpixel is assigned to all pixels of that superpixel, and the superpixels are classified using the SVM classifier with the same parameters used for the classical SVM. In each dataset, the number of superpixels is set to $\left[\left(N/p\right)^{1/2}\right]$ [47], where $[\cdot]$ denotes rounding to the nearest integer, $N$ is the total number of pixels in the image, and $p$ is the spatial patch size.
-
2DCNN: a network composed of four convolutional layers, each of which contains 16@conv3×3 filters with "same" padding. Each layer is followed by batch normalization (BN) and a ReLU activation function. Moreover, after the second and fourth ReLU layers, a dropout layer with a dropping probability of 0.2 is used. The final part of the network is composed of FC, softmax, and classification layers.
-
3DCNN: the structure of this network is the same as 2DCNN, with the difference that the two-dimensional convolutional layers (16@conv3×3) are replaced by three-dimensional convolutional layers (16@conv3×3×3).
-
Residual 2DCNN (Res2DCNN): the layers of this network are the same as 2DCNN, with the difference that three addition (add) layers are used after the first, second, and third ReLU activation layers. The input is passed through a 16@conv1×1 layer and fed into the first addition (add1) layer through the residual connection. The output of add1 is fed into the second addition (add2) layer, and the output of add2 is in turn fed into the third addition (add3) layer through skip connections.
-
Residual 3DCNN (Res3DCNN): this network is the same as Res2DCNN, with the difference that it contains three-dimensional convolutional layers instead of two-dimensional ones.
The classification results and the associated Z-scores obtained by McNemar's test for Indian Pines are reported in Table 7 and Table 8, respectively, with the corresponding classification maps shown in Figure 9. In general, GF2 provides the best classification results, with a statistically significant difference with respect to all competitors except 3DCNN. Here, the fusion of the local features of the neighborhood patches with the global information of the anchors results in a higher discrimination ability and more accurate classification maps.
3DCNN and Res3DCNN rank respectively second and third, with a significant difference with respect to the other methods. The 3D convolutional layers simultaneously extract hierarchically spatial and spectral features from the three-dimensional image patch in input, leading to a separation of the different classes with high accuracy and reliability.
Following up, SLIC-SVM yields a satisfactory performance. On the one hand, the use of SLIC for providing superpixels considers spatial features in the neighborhood regions, and leads to smoothed classification maps with reduced noise. On the other hand, the use of SVM as a classifier with low sensitivity to the training set size can lead to highly accurate classification results.
Results from 2DCNN, Res2DCNN, and SVM are ranked next. Here, the 2D convolutional networks apply 2D filters that explore the spatial features while largely ignoring the spectral information of the images, resulting in weaker performance compared to 3D filters. Conversely, the pixel-based SVM classifier considers only spectral features, and ignoring the spatial information yields the worst classification results.
The classification results, Z-scores and classification maps obtained for the University of Pavia dataset are reported in Table 9 and Table 10 and Figure 10, respectively. The proposed GF2 generally yields the highest classification accuracy with a statistically significant difference with respect to other methods. SVM, which only uses spectral features, provides here better classification results compared to 2D convolutional networks, which explore the spatial features. This suggests that in this dataset, the spectral information is more relevant than the spatial information.
The classification accuracies and Z-scores for the Salinas dataset are reported in Table 11 and Table 12, respectively, and the corresponding classification maps are shown in Figure 11. Also here, GF2 ranks first, with a significant difference with respect to all competitors. With a slight difference and a low Z-score of |Z| = 1.78 < 1.96, the results of 3DCNN and Res3DCNN are not statistically distinguishable from each other, and these networks rank as the next best methods. The worst result is obtained by Res2DCNN.
The numbers of learnable parameters of the different networks are reported in Table 13. Here, 2DCNN and its residual version are, in these terms, the lightest networks, having the lowest number of learnable parameters. However, 2DCNN and Res2DCNN cannot achieve accurate classification results across the different datasets. GF2, with about 163k learnable parameters, is still a relatively light network, imposing a reasonable computational burden. The 3DCNN and its residual version, Res3DCNN, have about 181k and 188k learnable parameters, respectively, and, in spite of this, underperform with respect to GF2. Therefore, the proposed GF2 is a good candidate for hyperspectral image classification from both the classification accuracy and the computational complexity points of view.

4. Discussion

In this section, the performance of GF2 is compared with that of several other graph-based neural networks. Table 14 reports the overall accuracy (OA) obtained by the different methods on the considered datasets, along with the running time (in seconds) as reported in the respective references. In all cases, 30 training samples per class are used, or 15 for classes with fewer than 30 labeled samples. For each competitor, we report the highest accuracy achieved with the best parameter settings reported in the corresponding publication. Because the different definitions of the graphs lead to different types of input data, using the same hyperparameters for all methods is not appropriate. For example, in Auto-GCN, rectangular regions of the image are considered as graph nodes, while in the proposed GF2 the graph nodes are local pixels or global anchors. Thus, considering the same mini-batch size for these two methods is not reasonable, given the different scale of the graph nodes, and the input size of the networks should be set differently. A brief description of the benchmark methods is given next.
In the automatic GCN (Auto-GCN), both graph design and its learning are carried out by neural networks. A semi-supervised Siamese network is used to construct the high-order tensor graph, with an intersection over union (IoU) based metric introduced for relabeling the dataset. The GCNs, the Siamese network and classification network are jointly trained, which results in a meaningful graph representation.
In the contrastive GCN (ConGCN), a semi-supervised contrastive loss is designed to jointly extract supervision information from the scarce labeled data and the abundant unlabeled data. ConGCN uses the semi-supervised contrastive loss to exploit supervision signals from the spectral domain and a graph generative loss to explore the spatial relations of the hyperspectral image, and it simultaneously performs hierarchical and localized graph convolutions to extract both global and local contextual information. Moreover, an adaptive graph augmentation is suggested to improve the performance of contrastive learning.
The mixhop superpixel-based GCN (EMS-GCN) is an end-to-end superpixel-based GCN method. It utilizes a multiscale spectral-spatial CNN for feature extraction and an adaptive clustering distance for introducing an improved learning-based superpixel algorithm. In EMS-GCN, a mixhop superpixel-based GCN module is introduced for adaptive integration of local and long-range superpixel representation.
The context-aware dynamic GCN (CAD-GCN) does not limit the receptive field of GCN to a small region. Instead, it captures long-range dependencies through translating the hyperspectral image into a region-induced graph and encoding the contextual relations among different regions. Subsequently, CAD-GCN iteratively adapts the graph to refine the contextual relations among image regions.
The dynamic adaptive sampling GCN (DAS-GCN) dynamically refines the receptive field of each given node and corresponding connections through successively applying two complementary components in each round of the adaptive sampling. As a result, it exploits both spectral and spatial information from local neighbors and far image elements.
The multiscale dynamic GCN (MDGCN) utilizes a dynamic graph convolution instead of a fixed graph, which can be refined during the convolution process of the GCN. In MDGCN, multiple graphs with different local scales are constructed to provide spatial information at different scales, with varied receptive fields. To mitigate the computational burden, the total number of image elements to process is reduced by grouping homogeneous areas into superpixels, each of which is treated as a graph node.
In the dual interactive GCN (DIGCN), the interaction of multiscale spatial information is used to refine the input graph. To this end, the edge information contained in one GCN branch can be refined by the feature representation from the other branch, which allows the method to benefit from multiscale spatial information. Moreover, the generated region representation is enhanced by learning a discriminative region-induced graph.
For the Indian Pines dataset, Auto-GCN, ConGCN, and the proposed GF2 rank first to third, respectively, with slight differences. The lowest overall accuracies are obtained by CAD-GCN and DIGCN. For the University of Pavia dataset, EMS-GCN and GF2 rank first and second, respectively, and provide highly accurate classification results with a significant difference with respect to the other methods. For the Salinas dataset, GF2, Auto-GCN, and ConGCN rank first to third, respectively. Although Auto-GCN and ConGCN are among the best methods for the Indian Pines and Salinas datasets, they do not work as well for the University of Pavia dataset. The proposed GF2 network instead exhibits high accuracy across all datasets, showing a robust behavior both for the University of Pavia scene and for the images dominated by agricultural fields (Indian Pines and Salinas), which are also characterized by different spatial resolutions.
From the running time point of view, DAS-GCN is the slowest method for all datasets. Despite that, it is not among the most accurate methods. For the Indian Pines dataset, EMS-GCN, GF2, and DIGCN are the fastest methods. For the University of Pavia dataset, GF2 has the lowest running time. After that, EMS-GCN and CAD-GCN are the fastest methods. For the Salinas dataset, the lowest running time is reported for Auto-GCN.
While the proposed GF2 network is trained for 50 epochs, Auto-GCN is trained for 150 epochs. Auto-GCN has a relatively high complexity in the training phase for the following reasons: (1) Auto-GCN performs multitask learning, with collaborative training of the Siamese network, the GCNs, and the hyperspectral image classifier. (2) To compute the node similarities, the dual-input Siamese network is trained in a semi-supervised manner. (3) A pre-training phase is performed in Auto-GCN, such that the parameters of its feature extractor are initialized with those of the feature extractor trained in the Siamese network during the offline training. However, the reported running time is the computation time of the test phase, i.e., the prediction time of the classification map. In Auto-GCN, the graph nodes are image regions, while in GF2 they represent local pixels or global anchors. So, in datasets such as Salinas, which have a higher number of pixels, the prediction time of GF2, which has pixel-based nodes, is higher than that of Auto-GCN, which has region-based nodes.

5. Conclusions

While most graph-based networks use a superpixel algorithm to define the graph nodes of a hyperspectral image, the network proposed in this work utilizes the pixels within a local patch as nodes of local graphs, and anchors selected from the whole image based on the highest entropy values as nodes of a global graph. Three graphs are composed, their processed features are fused, and the result is finally used for pixel-based classification. An ablation study is carried out to assess the contribution of each graph. The proposed GF2 network is compared with the classic SVM classifier, the superpixel-based SLIC-SVM, pixel-based convolutional networks (2DCNN, 3DCNN, and their residual versions), and several graph convolutional networks. The experiments on three hyperspectral datasets show a high classification accuracy for GF2. Moreover, GF2 is a relatively simple method, characterized by a relatively low number of learnable parameters. Therefore, GF2 can be a robust candidate for hyperspectral image classification. However, GF2 is fully supervised and therefore cannot exploit unlabeled samples for classification, except for the global information estimated from the anchors. The extension of GF2 to a semi-supervised workflow will be the subject of our future work. Due to the definition of individual graphs for each pixel under test, the prediction time increases for large datasets. Especially in that case, the global anchors should be selected from a sub-region of the scene to increase the correlation between local and global spectral-spatial features. These aspects will be studied in future works.

Author Contributions

Conceptualization, M.I. and D.C.; methodology, M.I.; software, M.I.; validation, M.I.; investigation, M.I. and D.C.; review and editing, M.I. and D.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data is used in this paper. The datasets used for the experiments are benchmark datasets.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lopez, S.; Vladimirova, T.; Gonzalez, C.; Resano, J.; Mozos, D.; Plaza, A. The Promise of Reconfigurable Computing for Hyperspectral Imaging Onboard Systems: A Review and Trends. Proc. IEEE 2013, 101, 698–722. [Google Scholar] [CrossRef]
  2. Manolakis, D.; Shaw, G. Detection algorithms for hyperspectral imaging applications. IEEE Signal Process. Mag. 2002, 19, 29–43. [Google Scholar] [CrossRef]
  3. Lu, B.; Dao, P.D.; Liu, J.; He, Y.; Shang, J. Recent Advances of Hyperspectral Imaging Technology and Applications in Agriculture. Remote Sens. 2020, 12, 2659. [Google Scholar] [CrossRef]
  4. Yang, J.; Lee, Y.K.; Chi, J. Spectral unmixing-based Arctic plant species analysis using a spectral library and terrestrial hyperspectral Imagery: A case study in Adventdalen, Svalbard. Int. J. Appl. Earth Obs. Geoinf. 2023, 125, 103583. [Google Scholar] [CrossRef]
  5. Imani, M. Attribute profile based target detection using collaborative and sparse representation. Neurocomputing 2018, 313, 364–376. [Google Scholar] [CrossRef]
  6. Peyghambari, S.; Zhang, Y. Hyperspectral remote sensing in lithological mapping, mineral exploration, and environmental geology: An updated review. J. Appl. Remote Sens. 2021, 15, 031501. [Google Scholar] [CrossRef]
  7. Özdemir, O.B.; Çetin, Y.Y. Improvements on hyperspectral classification algorithms. In Proceedings of the 2013 5th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Gainesville, FL, USA, 26–28 June 2013; pp. 1–4. [Google Scholar]
  8. Tan, K.; Zhang, J.; Du, Q.; Wang, X. GPU Parallel Implementation of Support Vector Machines for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 4647–4656. [Google Scholar] [CrossRef]
  9. Jin, S.; Zhang, F.; Zheng, Y.; Zhou, L.; Zuo, X.; Zhang, Z.; Zhao, W.; Zhang, W.; Pan, X. CSKNN: Cost-sensitive K-Nearest Neighbor using hyperspectral imaging for identification of wheat varieties. Comput. Electr. Eng. 2023, 111, 108896. [Google Scholar] [CrossRef]
  10. Kandpal, K.C.; Kumar, A. Identification and Classification of medicinal plants of the Indian Himalayan region using Hyperspectral remote sensing and random forest techniques. In Proceedings of the 2022 IEEE Mediterranean and Middle-East Geoscience and Remote Sensing Symposium (M2GARSS), Istanbul, Turkey, 7–9 March 2022; pp. 177–180. [Google Scholar]
  11. Christovam, L.E.; Pessoa, G.G.; Shimabukuro, M.H.; Galo, M.L.B.T. Land use and land cover classification using hyperspectral imagery: Evaluating the performance of spectral angle mapper, support vector machine and random forest. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 1841–1847. [Google Scholar] [CrossRef]
  12. Zhang, H.; Zou, J.; Zhang, L. EMS-GCN: An End-to-End Mixhop Superpixel-Based Graph Convolutional Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5526116. [Google Scholar] [CrossRef]
  13. Imani, M.; Ghassemian, H. GLCM, Gabor, and Morphology Profiles Fusion for Hyperspectral Image Classification. In Proceedings of the 2016 24th Iranian Conference on Electrical Engineering (ICEE), Shiraz, Iran, 10–12 May 2016; pp. 460–465. [Google Scholar]
  14. Zhu, Z.; Jia, S.; He, S.; Sun, Y.; Ji, Z.; Shen, L. Three-dimensional Gabor feature extraction for hyperspectral imagery classification using a memetic framework. Inf. Sci. 2015, 298, 274–287. [Google Scholar] [CrossRef]
  15. Hou, B.; Huang, T.; Jiao, L. Spectral–Spatial Classification of Hyperspectral Data Using 3-D Morphological Profile. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2364–2368. [Google Scholar] [CrossRef]
  16. Cao, X.; Xu, L.; Meng, D.; Zhao, Q.; Xu, Z. Integration of 3-dimensional discrete wavelet transform and Markov random field for hyperspectral image classification. Neurocomputing 2017, 226, 90–100. [Google Scholar] [CrossRef]
  17. Zhang, K.; Deng, J.; Zhou, C.; Liu, J.; Lv, X.; Wang, Y.; Sun, E.; Liu, Y.; Ma, Z.; Shang, J. Using UAV hyperspectral imagery and deep learning for Object-Based quantitative inversion of Zanthoxylum rust disease index. Int. J. Appl. Earth Obs. Geoinf. 2024, 135, 104262. [Google Scholar] [CrossRef]
  18. Xu, X.; Li, J.; Wu, C.; Plaza, A. Regional clustering-based spatial preprocessing for hyperspectral unmixing. Remote Sens. Environ. 2018, 204, 333–346. [Google Scholar] [CrossRef]
  19. Liu, Y.; Zhao, X.; Song, Z.; Yu, J.; Jiang, D.; Zhang, Y.; Chang, Q. Detection of apple mosaic based on hyperspectral imaging and three-dimensional Gabor. Comput. Electron. Agric. 2024, 222, 109051. [Google Scholar] [CrossRef]
  20. Liu, Y.; Wang, J.; Li, W.; Li, F.; Fang, Y.; Meng, X. A Stable Method for Estimating the Derivatives of Potential Field Data Based on Deep Learning. IEEE Geosci. Remote Sens. Lett. 2025, 22, 7501205. [Google Scholar] [CrossRef]
  21. Picon, A.; Galan, P.; Bereciartua-Perez, A.; Benito-Del-Valle, L. On the analysis of adapting deep learning methods to hyperspectral imaging. Use case for WEEE recycling and dataset. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2024, 330, 125665. [Google Scholar] [CrossRef]
  22. Imani, M. Low frequency and radar’s physical based features for improvement of convolutional neural networks for PolSAR image classification. Egypt. J. Remote Sens. Space Sci. 2022, 25, 55–62. [Google Scholar] [CrossRef]
  23. Tang, X.; Zhang, K.; Zhou, X.; Zeng, L.; Huang, S. Enhancing Binary Convolutional Neural Networks for Hyperspectral Image Classification. Remote Sens. 2024, 16, 4398. [Google Scholar] [CrossRef]
  24. Liu, X.; Wang, H.; Meng, Y.; Fu, M. Classification of Hyperspectral Image by CNN Based on Shadow Area Enhancement Through Dynamic Stochastic Resonance. IEEE Access 2019, 7, 134862–134870. [Google Scholar] [CrossRef]
  25. Praveen, B.; Menon, V. Study of Spatial–Spectral Feature Extraction Frameworks with 3-D Convolutional Neural Network for Robust Hyperspectral Imagery Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 1717–1727. [Google Scholar] [CrossRef]
  26. Imani, M.; Cerra, D. Phase space deep neural network with Saliency-based attention for hyperspectral target detection. Adv. Space Res. 2024, 75, 3565–3588. [Google Scholar] [CrossRef]
  27. Ma, Y.; Lan, Y.; Xie, Y.; Yu, L.; Chen, C.; Wu, Y.; Dai, X. A Spatial–Spectral Transformer for Hyperspectral Image Classification Based on Global Dependencies of Multi-Scale Features. Remote Sens. 2024, 16, 404. [Google Scholar] [CrossRef]
  28. Yu, C.; Zhou, S.; Song, M.; Gong, B.; Zhao, E.; Chang, C.-I. Unsupervised Hyperspectral Band Selection via Hybrid Graph Convolutional Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5530515. [Google Scholar] [CrossRef]
  29. Shang, R.; Zhu, K.; Chang, H.; Zhang, W.; Feng, J.; Xu, S. Hyperspectral image classification based on mixed similarity graph convolutional network and pixel refinement. Appl. Soft Comput. 2025, 170, 112657. [Google Scholar] [CrossRef]
  30. Cao, H.; Cao, J.; Chu, Y.; Wang, Y.; Liu, G.; Li, P. Global-local manifold embedding broad graph convolutional network for hyperspectral image classification. Neurocomputing 2024, 602, 128271. [Google Scholar] [CrossRef]
  31. Wan, S.; Gong, C.; Zhong, P.; Du, B.; Zhang, L.; Yang, J. Multiscale Dynamic Graph Convolutional Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3162–3177. [Google Scholar] [CrossRef]
  32. Wan, S.; Pan, S.; Zhong, P.; Chang, X.; Yang, J.; Gong, C. Dual Interactive Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5510214. [Google Scholar] [CrossRef]
  33. Wan, S.; Gong, C.; Zhong, P.; Pan, S.; Li, G.; Yang, J. Hyperspectral Image Classification With Context-Aware Dynamic Graph Convolutional Network. IEEE Trans. Geosci. Remote Sens. 2020, 59, 597–612. [Google Scholar] [CrossRef]
  34. Ding, Y.; Feng, J.; Chong, Y.; Pan, S.; Sun, X. Adaptive Sampling Toward a Dynamic Graph Convolutional Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5524117. [Google Scholar] [CrossRef]
  35. Yu, W.; Wan, S.; Li, G.; Yang, J.; Gong, C. Hyperspectral Image Classification With Contrastive Graph Convolutional Network. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5503015. [Google Scholar] [CrossRef]
  36. Chen, J.; Jiao, L.; Liu, X.; Li, L.; Liu, F.; Yang, S. Automatic Graph Learning Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5520716. [Google Scholar] [CrossRef]
  37. Ren, P.; Han, Z.; Yu, Z.; Zhang, B. Confucius tri-learning: A paradigm of learning from both good examples and bad examples. Pattern Recognit. 2025, 163, 111481. [Google Scholar] [CrossRef]
  38. Kang, X.; Duan, P.; Li, S. Hyperspectral image visualization with edge-preserving filtering and principal component analysis. Inf. Fusion 2020, 57, 130–143. [Google Scholar] [CrossRef]
  39. Yang, M.-D.; Huang, K.-S.; Yang, Y.F.; Lu, L.-Y.; Feng, Z.-Y.; Tsai, H.P. Hyperspectral Image Classification Using Fast and Adaptive Bidimensional Empirical Mode Decomposition with Minimum Noise Fraction. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1950–1954. [Google Scholar] [CrossRef]
  40. Zuo, X. Hyperspectral Data. Available online: https://ieee-dataport.org/documents/hyperspectral-data (accessed on 29 April 2025).
  41. Kang, X.; Li, S.; Benediktsson, J.A. Spectral–Spatial Hyperspectral Image Classification With Edge-Preserving Filtering. IEEE Trans. Geosci. Remote Sens. 2013, 52, 2666–2677. [Google Scholar] [CrossRef]
  42. Imani, M. A random patches based edge preserving network for land cover classification using Polarimetric Synthetic Aperture Radar images. Int. J. Remote Sens. 2021, 42, 4942–4960. [Google Scholar] [CrossRef]
  43. Imani, M.; Ghassemian, H. Binary coding based feature extraction in remote sensing high dimensional data. Inf. Sci. 2016, 342, 191–208. [Google Scholar] [CrossRef]
  44. Cohen, J. A Coefficient of Agreement for Nominal Scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar] [CrossRef]
  45. Foody, G.M. Thematic Map Comparison: Evaluating the Statistical Significance of Differences in Classification Accuracy. Photogramm. Eng. Remote Sens. 2004, 70, 627–633. [Google Scholar] [CrossRef]
  46. Chang, C.-C.; Lin, C.-J. LIBSVM—A Library for Support Vector Machines. 2008. Available online: http://www.csie.ntu.edu.tw/~cjlin/libsvm (accessed on 29 April 2025).
  47. Imani, M. Attention based network for fusion of polarimetric and contextual features for polarimetric synthetic aperture radar image classification. Eng. Appl. Artif. Intell. 2024, 139, 109665. [Google Scholar] [CrossRef]
Figure 1. Block diagram of the proposed GF2 network (cross product represents the matrix product and plus symbol means the elementwise addition).
Figure 1. Block diagram of the proposed GF2 network (cross product represents the matrix product and plus symbol means the elementwise addition).
Remotesensing 17 01623 g001
Figure 2. The GCN block (cross product represents the matrix product and plus symbol means the elementwise addition).
Figure 2. The GCN block (cross product represents the matrix product and plus symbol means the elementwise addition).
Remotesensing 17 01623 g002
Figure 3. GTM and legend for the Indian Pines dataset.
Figure 3. GTM and legend for the Indian Pines dataset.
Remotesensing 17 01623 g003
Figure 4. GTM and legend for the University of Pavia dataset.
Figure 4. GTM and legend for the University of Pavia dataset.
Remotesensing 17 01623 g004
Figure 5. GTM and legend for the Salinas dataset.
Figure 5. GTM and legend for the Salinas dataset.
Remotesensing 17 01623 g005
Figure 6. Classification maps for different network configurations for the Indian Pines dataset.
Figure 6. Classification maps for different network configurations for the Indian Pines dataset.
Remotesensing 17 01623 g006
Figure 7. Classification maps for different network configurations for the University of Pavia dataset.
Figure 8. Classification maps for different network configurations for the Salinas dataset.
Figure 9. Classification maps of different methods for the Indian Pines dataset.
Figure 10. Classification maps of different methods for the University of Pavia dataset.
Figure 11. Classification maps of different methods for the Salinas dataset.
Table 1. Classification results for different network configurations for the Indian Pines dataset (the highest values in the last two rows are highlighted in bold).

No | Name of Class | # Samples | G12 Acc. | G12 Rel. | G3 Acc. | G3 Rel. | G12 + G3 Acc. | G12 + G3 Rel. | GF2 Acc. | GF2 Rel.
1 | Alfalfa | 46 | 100.00 | 98.18 | 100.00 | 98.18 | 100.00 | 94.74 | 100.00 | 100.00
2 | Corn-notill | 1428 | 92.61 | 93.19 | 94.63 | 90.05 | 92.68 | 92.81 | 95.26 | 95.99
3 | Corn-mintill | 830 | 90.05 | 90.48 | 89.93 | 94.70 | 89.93 | 93.40 | 89.57 | 98.42
4 | Corn | 237 | 100.00 | 70.91 | 100.00 | 75.73 | 100.00 | 73.12 | 100.00 | 74.05
5 | Grass-pasture | 483 | 95.17 | 90.44 | 94.97 | 83.99 | 94.37 | 96.30 | 96.58 | 93.75
6 | Grass-trees | 730 | 99.06 | 99.73 | 98.66 | 99.86 | 98.93 | 100.00 | 100.00 | 98.55
7 | Grass-pasture-mowed | 28 | 96.15 | 80.65 | 96.15 | 80.65 | 96.15 | 80.65 | 96.15 | 80.65
8 | Hay-windrowed | 478 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
9 | Oats | 20 | 100.00 | 100.00 | 95.00 | 100.00 | 100.00 | 100.00 | 45.00 | 100.00
10 | Soybean-notill | 972 | 95.25 | 85.29 | 93.80 | 90.80 | 92.56 | 89.33 | 96.07 | 91.81
11 | Soybean-mintill | 2455 | 89.10 | 96.32 | 92.18 | 97.77 | 92.95 | 95.82 | 95.75 | 97.36
12 | Soybean-clean | 593 | 96.42 | 97.85 | 97.39 | 98.52 | 97.07 | 96.91 | 97.72 | 97.40
13 | Wheat | 205 | 100.00 | 98.15 | 100.00 | 97.25 | 100.00 | 98.60 | 100.00 | 100.00
14 | Woods | 1265 | 95.36 | 99.36 | 93.28 | 99.51 | 90.49 | 99.41 | 97.60 | 99.76
15 | Buildings-Grass-Trees-Drives | 386 | 100.00 | 96.20 | 100.00 | 95.00 | 100.00 | 78.51 | 100.00 | 99.48
16 | Stone-Steel-Towers | 93 | 100.00 | 95.96 | 100.00 | 95.96 | 100.00 | 95.96 | 100.00 | 95.96
Average accuracy and average reliability (%) | 96.82 | 93.29 | 96.62 | 93.62 | 96.57 | 92.85 | 94.36 | 95.20
Overall accuracy (%) | 94.04 | 94.66 | 94.09 | 96.41
Kappa coefficient (%) | 93.24 | 93.93 | 93.28 | 95.92
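The per-class and summary measures reported in Tables 1, 3, 5, 7, 9 and 11 are all derived from a confusion matrix. As an illustrative sketch only (assuming that "Acc." denotes the per-class producer's accuracy and "Rel." the per-class user's accuracy, which is the usual convention), the overall accuracy, average accuracy, average reliability and kappa coefficient [44] can be computed as follows; the function name and the toy matrix are hypothetical.

```python
import numpy as np

def summarize_confusion(conf):
    """Summary measures from a confusion matrix (rows: reference labels, columns: predictions)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    diag = np.diag(conf)
    acc_per_class = diag / conf.sum(axis=1)   # producer's accuracy ("Acc.")
    rel_per_class = diag / conf.sum(axis=0)   # user's accuracy ("Rel.")
    oa = diag.sum() / n                       # overall accuracy
    aa = acc_per_class.mean()                 # average accuracy
    ar = rel_per_class.mean()                 # average reliability
    pe = (conf.sum(axis=1) * conf.sum(axis=0)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)            # Cohen's kappa [44]
    return acc_per_class, rel_per_class, oa, aa, ar, kappa

# toy three-class example
cm = np.array([[50, 2, 1],
               [3, 40, 2],
               [0, 1, 45]])
print(summarize_confusion(cm))
```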
Table 2. Z-scores for different network configurations for the Indian Pines dataset.

Configuration | G12 | G3 | G12 + G3 | GF2
G12 | 0 | −4.08 | −0.29 | −104.14
G3 | 4.08 | 0 | 3.85 | −103.52
G12 + G3 | 0.29 | −3.85 | 0 | −103.99
GF2 | 104.14 | 103.52 | 103.99 | 0
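Pairwise Z-scores of this kind are commonly used to test whether the difference in accuracy between two classification results is statistically significant, with |z| > 1.96 indicating significance at the 5% level [45]. As a generic, hedged illustration only (this simplified test is not the procedure that produced the specific values in Table 2), the familiar two-proportion z-test discussed by Foody [45] can be sketched as follows, using the overall accuracies of Table 1 and the 10,249 labeled Indian Pines pixels as hypothetical inputs.

```python
import math

def accuracy_z_score(p1, n1, p2, n2):
    """Two-proportion z-test for the difference between two overall accuracies
    estimated on independent test samples (after Foody [45])."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# hypothetical example: GF2 (96.41%) versus G12 (94.04%), 10,249 labeled pixels each
z = accuracy_z_score(0.9641, 10249, 0.9404, 10249)
print(round(z, 2))  # |z| > 1.96 indicates a significant difference at the 5% level
```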
Table 3. Classification results for different network configurations for the University of Pavia dataset (the highest values in the last two rows are highlighted in bold).

No | Name of Class | # Samples | G12 Acc. | G12 Rel. | G3 Acc. | G3 Rel. | G12 + G3 Acc. | G12 + G3 Rel. | GF2 Acc. | GF2 Rel.
1 | Asphalt | 6631 | 92.19 | 99.64 | 91.84 | 99.28 | 86.77 | 99.57 | 94.65 | 99.41
2 | Meadows | 18,649 | 95.31 | 98.85 | 97.17 | 99.36 | 94.85 | 99.47 | 99.83 | 98.74
3 | Gravel | 2099 | 100.00 | 97.13 | 100.00 | 96.64 | 84.85 | 98.13 | 100.00 | 97.95
4 | Trees | 3064 | 96.83 | 95.37 | 94.55 | 88.78 | 96.25 | 98.43 | 93.24 | 98.96
5 | Painted metal sheets | 1345 | 99.93 | 99.85 | 99.93 | 99.93 | 99.93 | 99.48 | 99.93 | 99.78
6 | Bare Soil | 5029 | 100.00 | 90.13 | 100.00 | 98.18 | 99.96 | 84.57 | 99.60 | 99.80
7 | Bitumen | 1330 | 100.00 | 91.16 | 100.00 | 90.05 | 100.00 | 84.28 | 100.00 | 97.08
8 | Self-Blocking Bricks | 3682 | 99.78 | 89.02 | 99.73 | 87.95 | 99.62 | 78.18 | 99.43 | 91.64
9 | Shadows | 947 | 87.54 | 94.63 | 89.65 | 99.88 | 88.49 | 99.88 | 89.33 | 99.88
Average accuracy and average reliability (%) | 96.84 | 95.09 | 96.99 | 95.56 | 94.52 | 93.55 | 97.33 | 98.14
Overall accuracy (%) | 96.22 | 96.86 | 94.40 | 98.28
Kappa coefficient (%) | 95.03 | 95.86 | 92.65 | 97.71
Table 4. Z-scores for different network configurations for the University of Pavia dataset.

Configuration | G12 | G3 | G12 + G3 | GF2
G12 | 0 | −7.66 | 18.46 | −22.57
G3 | 7.66 | 0 | 22.18 | −17.46
G12 + G3 | −18.46 | −22.18 | 0 | −35.68
GF2 | 22.57 | 17.46 | 35.68 | 0
Table 5. Classification results for different network configurations for the Salinas dataset (the highest values in the last two rows are highlighted in bold).

No | Name of Class | # Samples | G12 Acc. | G12 Rel. | G3 Acc. | G3 Rel. | G12 + G3 Acc. | G12 + G3 Rel. | GF2 Acc. | GF2 Rel.
1 | Brocoli_green_weeds_1 | 2009 | 99.85 | 100.00 | 99.85 | 99.95 | 100.00 | 100.00 | 100.00 | 99.90
2 | Brocoli_green_weeds_2 | 3726 | 100.00 | 99.92 | 100.00 | 99.92 | 100.00 | 99.57 | 100.00 | 100.00
3 | Fallow | 1976 | 100.00 | 100.00 | 100.00 | 99.60 | 99.95 | 100.00 | 100.00 | 100.00
4 | Fallow_rough_plow | 1394 | 97.78 | 99.27 | 98.64 | 99.13 | 99.00 | 98.71 | 99.64 | 99.07
5 | Fallow_smooth | 2678 | 99.66 | 97.02 | 99.37 | 97.62 | 99.40 | 97.94 | 99.63 | 99.03
6 | Stubble | 3959 | 99.82 | 100.00 | 99.82 | 100.00 | 99.82 | 100.00 | 99.80 | 100.00
7 | Celery | 3579 | 100.00 | 99.83 | 100.00 | 99.83 | 100.00 | 99.83 | 100.00 | 99.83
8 | Grapes_untrained | 11,271 | 99.07 | 99.11 | 98.76 | 99.60 | 98.38 | 98.37 | 99.41 | 99.92
9 | Soil_vineyard_develop | 6203 | 100.00 | 99.60 | 100.00 | 99.82 | 100.00 | 99.87 | 100.00 | 99.97
10 | Corn_senesced_green_weeds | 3278 | 97.38 | 96.20 | 98.47 | 96.91 | 99.05 | 95.92 | 99.24 | 96.64
11 | Lettuce_romaine_4weeks | 1068 | 100.00 | 100.00 | 99.91 | 95.95 | 100.00 | 99.53 | 98.88 | 100.00
12 | Lettuce_romaine_5 weeks | 1927 | 100.00 | 99.64 | 99.90 | 99.74 | 99.84 | 99.95 | 100.00 | 99.28
13 | Lettuce_romaine_6 weeks | 916 | 98.91 | 92.73 | 99.34 | 90.55 | 99.78 | 88.22 | 97.82 | 100.00
14 | Lettuce_romaine_7 weeks | 1070 | 88.88 | 93.69 | 86.07 | 95.84 | 82.99 | 97.37 | 93.46 | 96.62
15 | Vineyard_untrained | 7268 | 97.98 | 99.68 | 98.73 | 99.29 | 96.89 | 98.78 | 99.57 | 99.63
16 | Vineyard_vertical_trellis | 1807 | 100.00 | 100.00 | 99.78 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
Average accuracy and average reliability (%) | 98.71 | 98.54 | 98.67 | 98.36 | 98.44 | 98.38 | 99.22 | 99.37
Overall accuracy (%) | 99.04 | 99.09 | 98.77 | 99.54
Kappa coefficient (%) | 98.94 | 98.99 | 98.63 | 99.49
Table 6. Z-scores for different network configurations in the Salinas dataset.

Configuration | G12 | G3 | G12 + G3 | GF2
G12 | 0 | −1.76 | 7.76 | −239.05
G3 | 1.76 | 0 | 9.59 | −239.04
G12 + G3 | −7.76 | −9.59 | 0 | −239.40
GF2 | 239.05 | 239.04 | 239.40 | 0
Table 7. Classification results of different methods for the Indian Pines dataset (the highest values of the classification measures in each row are highlighted in bold).

No | Name of Class | # Samples | SVM | SLIC-SVM | 2DCNN | 3DCNN | Res2DCNN | Res3DCNN | GF2
1 | Alfalfa | 46 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
2 | Corn-notill | 1428 | 85.36 | 93.58 | 84.73 | 95.75 | 71.83 | 94.14 | 95.26
3 | Corn-mintill | 830 | 74.22 | 94.24 | 71.70 | 89.69 | 62.59 | 89.81 | 89.57
4 | Corn | 237 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
5 | Grass-pasture | 483 | 94.97 | 98.59 | 79.68 | 93.76 | 89.13 | 97.18 | 96.58
6 | Grass-trees | 730 | 99.06 | 98.39 | 100.00 | 100.00 | 99.87 | 100.00 | 100.00
7 | Grass-pasture-mowed | 28 | 96.15 | 88.46 | 96.15 | 96.15 | 100.00 | 96.15 | 96.15
8 | Hay-windrowed | 478 | 100.00 | 100.00 | 99.80 | 100.00 | 100.00 | 100.00 | 100.00
9 | Oats | 20 | 95.00 | 95.00 | 50.00 | 50.00 | 50.00 | 45.00 | 45.00
10 | Soybean-notill | 972 | 96.07 | 93.90 | 95.66 | 97.62 | 91.94 | 95.66 | 96.07
11 | Soybean-mintill | 2455 | 64.95 | 89.10 | 96.35 | 93.76 | 85.90 | 93.11 | 95.75
12 | Soybean-clean | 593 | 85.02 | 91.21 | 92.02 | 97.88 | 93.65 | 97.72 | 97.72
13 | Wheat | 205 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
14 | Woods | 1265 | 79.98 | 94.51 | 90.96 | 99.92 | 96.75 | 98.84 | 97.60
15 | Buildings-Grass-Trees-Drives | 386 | 98.68 | 97.63 | 99.74 | 100.00 | 100.00 | 100.00 | 100.00
16 | Stone-Steel-Towers | 93 | 100.00 | 100.00 | 98.95 | 100.00 | 98.95 | 100.00 | 100.00
Average accuracy (%) | 91.84 | 95.91 | 90.98 | 94.66 | 90.04 | 94.23 | 94.36
Overall accuracy (%) | 83.43 | 93.97 | 91.63 | 96.33 | 87.57 | 95.79 | 96.41
Kappa coefficient (%) | 81.39 | 93.16 | 90.47 | 95.83 | 85.92 | 95.22 | 95.92
Table 8. Z-scores of different methods for the Indian Pines dataset.

Method | SVM | SLIC-SVM | 2DCNN | 3DCNN | Res2DCNN | Res3DCNN | GF2
SVM | 0 | −28.75 | −20.20 | −34.14 | −9.50 | −32.77 | −34.12
SLIC-SVM | 28.75 | 0 | 7.48 | −10.01 | 18.47 | −8.07 | −10.50
2DCNN | 20.20 | −7.48 | 0 | −18.11 | 11.69 | −15.86 | −19.05
3DCNN | 34.14 | 10.01 | 18.11 | 0 | 28.23 | 4.40 | −0.68
Res2DCNN | 9.50 | −18.47 | −11.69 | −28.23 | 0 | −27.07 | −28.21
Res3DCNN | 32.77 | 8.07 | 15.86 | −4.40 | 27.07 | 0 | −4.94
GF2 | 34.12 | 10.50 | 19.05 | 0.68 | 28.21 | 4.94 | 0
Table 9. Classification results of different methods for the University of Pavia dataset (the highest values of the classification measures in each row are highlighted in bold).

No | Name of Class | # Samples | SVM | SLIC-SVM | 2DCNN | 3DCNN | Res2DCNN | Res3DCNN | GF2
1 | Asphalt | 6631 | 91.52 | 93.15 | 85.22 | 93.62 | 92.31 | 95.37 | 94.65
2 | Meadows | 18,649 | 87.31 | 95.92 | 92.19 | 97.66 | 91.74 | 96.07 | 99.83
3 | Gravel | 2099 | 81.09 | 87.33 | 81.47 | 99.81 | 99.90 | 99.81 | 100.00
4 | Trees | 3064 | 88.41 | 93.73 | 98.76 | 97.26 | 95.33 | 97.52 | 93.24
5 | Painted metal sheets | 1345 | 99.48 | 99.93 | 99.48 | 99.85 | 99.93 | 99.93 | 99.93
6 | Bare Soil | 5029 | 96.06 | 99.03 | 99.92 | 99.98 | 97.87 | 99.98 | 99.60
7 | Bitumen | 1330 | 98.05 | 99.10 | 99.85 | 99.70 | 99.55 | 99.85 | 100.00
8 | Self-Blocking Bricks | 3682 | 98.59 | 99.48 | 80.45 | 94.02 | 19.93 | 92.26 | 99.43
9 | Shadows | 947 | 98.94 | 98.94 | 22.18 | 71.70 | 51.74 | 68.85 | 89.33
Average accuracy (%) | 93.27 | 96.29 | 84.39 | 94.84 | 83.15 | 94.40 | 97.33
Overall accuracy (%) | 90.71 | 95.88 | 89.87 | 96.63 | 86.64 | 96.02 | 98.28
Kappa coefficient (%) | 87.93 | 94.57 | 86.79 | 95.55 | 82.53 | 94.76 | 97.71
Table 10. Z-scores of different methods for the University of Pavia image.

Method | SVM | SLIC-SVM | 2DCNN | 3DCNN | Res2DCNN | Res3DCNN | GF2
SVM | 0 | −41.35 | 4.47 | −35.79 | 19.36 | −31.04 | −42.96
SLIC-SVM | 41.35 | 0 | 35.99 | −6.02 | 50.62 | −1.06 | −14.24
2DCNN | −4.47 | −35.99 | 0 | −47.37 | 18.57 | −45.19 | −53.77
3DCNN | 35.79 | 6.02 | 47.37 | 0 | 61.94 | 8.33 | −11.85
Res2DCNN | −19.36 | −50.62 | −18.57 | −61.94 | 0 | −59.12 | −63.17
Res3DCNN | 31.04 | 1.06 | 45.19 | −8.33 | 59.12 | 0 | −17.60
GF2 | 42.96 | 14.24 | 53.77 | 11.85 | 63.17 | 17.60 | 0
Table 11. Classification results of the different methods for the Salinas dataset (the highest values of the classification measures in each row are highlighted in bold).

No | Name of Class | # Samples | SVM | SLIC-SVM | 2DCNN | 3DCNN | Res2DCNN | Res3DCNN | GF2
1 | Brocoli_green_weeds_1 | 2009 | 100.00 | 100.00 | 99.85 | 100.00 | 99.10 | 100.00 | 100.00
2 | Brocoli_green_weeds_2 | 3726 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
3 | Fallow | 1976 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
4 | Fallow_rough_plow | 1394 | 100.00 | 100.00 | 99.64 | 98.21 | 93.69 | 99.86 | 99.64
5 | Fallow_smooth | 2678 | 98.58 | 98.81 | 99.37 | 99.63 | 99.51 | 99.44 | 99.63
6 | Stubble | 3959 | 99.87 | 99.90 | 99.90 | 99.80 | 99.85 | 99.87 | 99.80
7 | Celery | 3579 | 100.00 | 100.00 | 100.00 | 100.00 | 99.97 | 100.00 | 100.00
8 | Grapes_untrained | 11,271 | 92.24 | 86.08 | 81.40 | 99.09 | 32.09 | 99.28 | 99.41
9 | Soil_vineyard_develop | 6203 | 100.00 | 100.00 | 100.00 | 100.00 | 5.71 | 100.00 | 100.00
10 | Corn_senesced_green_weeds | 3278 | 97.96 | 98.44 | 99.18 | 99.27 | 99.27 | 99.24 | 99.24
11 | Lettuce_romaine_4weeks | 1068 | 97.85 | 98.88 | 97.47 | 98.97 | 98.88 | 98.97 | 98.88
12 | Lettuce_romaine_5 weeks | 1927 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
13 | Lettuce_romaine_6 weeks | 916 | 96.83 | 96.72 | 39.63 | 98.36 | 61.46 | 98.25 | 97.82
14 | Lettuce_romaine_7 weeks | 1070 | 96.07 | 94.67 | 94.39 | 91.96 | 94.11 | 86.26 | 93.46
15 | Vineyard_untrained | 7268 | 79.10 | 96.45 | 99.88 | 99.60 | 99.74 | 99.57 | 99.57
16 | Vineyard_vertical_trellis | 1807 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
Average accuracy (%) | 97.41 | 98.12 | 94.42 | 99.06 | 86.46 | 98.80 | 99.22
Overall accuracy (%) | 95.20 | 96.28 | 94.83 | 99.43 | 73.95 | 99.38 | 99.54
Kappa coefficient (%) | 94.65 | 95.87 | 94.26 | 99.36 | 71.59 | 99.32 | 99.49
Table 12. Z-scores of the different methods for the Salinas dataset.

Method | SVM | SLIC-SVM | 2DCNN | 3DCNN | Res2DCNN | Res3DCNN | GF2
SVM | 0 | −12.44 | 3.09 | −45.86 | 94.01 | −44.79 | −47.24
SLIC-SVM | 12.44 | 0 | 13.24 | −39.66 | 103.12 | −38.36 | −41.21
2DCNN | −3.09 | −13.24 | 0 | −48.42 | 103.98 | −47.43 | −49.67
3DCNN | 45.86 | 39.66 | 48.42 | 0 | 117.05 | 1.78 | −6.16
Res2DCNN | −94.01 | −103.12 | −103.98 | −117.05 | 0 | −116.41 | −117.44
Res3DCNN | 44.79 | 38.36 | 47.43 | −1.78 | 116.41 | 0 | −6.89
GF2 | 47.24 | 41.21 | 49.67 | 6.16 | 117.44 | 6.89 | 0
Table 13. The number of learnable parameters for different networks.

Network | 2DCNN | 3DCNN | Res2DCNN | Res3DCNN | GF2
No. of learnable parameters | 17.1k | 181.3k | 16.2k | 187.8k | 162.6k
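The counts in Table 13 refer to trainable weights. As a minimal sketch (assuming a PyTorch implementation; the toy model below is a placeholder and not the GF2 architecture), such counts can be obtained by summing the sizes of all parameters that require gradients:

```python
import torch.nn as nn

def count_learnable_parameters(model: nn.Module) -> int:
    """Total number of trainable parameters in a PyTorch module."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# placeholder model for illustration only (not the GF2 network)
toy = nn.Sequential(
    nn.Conv2d(200, 64, kernel_size=3),  # e.g., 200 spectral bands in, 64 feature maps out
    nn.ReLU(),
    nn.Conv2d(64, 16, kernel_size=1),   # e.g., 16 output classes
)
print(f"{count_learnable_parameters(toy) / 1e3:.1f}k learnable parameters")
```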
Table 14. Comparison of GF2 with other graph-based networks.

Dataset / Measure | GF2 | Auto-GCN | Con-GCN | EMS-GCN | DAS-GCN | CAD-GCN | MDGCN | DIGCN
Indian Pines, OA (%) | 96.41 | 96.98 | 96.74 | 95.87 | 95.63 | 94.13 | 93.47 | 94.16
Indian Pines, Time (s) | 49.87 | 60 | - | 23.02 | 860629553 (DAS-GCN to DIGCN)
Pavia, OA (%) | 98.28 | 95.55 | 95.97 | 98.47 | 96.40 | 92.91 | 95.68 | 93.24
Pavia, Time (s) | 56.32 | 102 | - | 71.07 | 39273244187 (DAS-GCN to DIGCN)
Salinas, OA (%) | 99.54 | 99.43 | 99.25 | - | 99.08 | 98.28 | - | 97.61
Salinas, Time (s) | 1160.76 | 218 | - | - | 3410826 (DAS-GCN to CAD-GCN) | - | 616
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
