Article

SSGTN: Spectral–Spatial Graph Transformer Network for Hyperspectral Image Classification

1 School of Electronic and Communication Engineering, Guangzhou University, Guangzhou 510182, China
2 School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou 510182, China
3 School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(2), 199; https://doi.org/10.3390/rs18020199
Submission received: 1 December 2025 / Revised: 31 December 2025 / Accepted: 5 January 2026 / Published: 7 January 2026

Highlights

What are the main findings?
  • We propose a Spectral–Spatial Graph Transformer Network (SSGTN), a dual-branch framework for hyperspectral image classification that combines local feature extraction with global context reasoning.
  • The framework builds region-level graphs from superpixels, applies lightweight spectral denoising, and introduces a parameter-free Spectral–Spatial Shift Module (SSSM) to strengthen spectral–spatial feature interaction.
What are the implications of the main findings?
  • With only 1% training samples, the proposed method achieves state-of-the-art performance on three benchmark datasets (Indian Pines, WHU-Hi-LongKou, and Houston2018).
  • The results suggest that combining region-level structural modeling with global reasoning is an effective and efficient strategy for hyperspectral remote sensing under scarce labels, and may benefit other spectral–spatial learning problems.

Abstract

Hyperspectral image (HSI) classification is fundamental to a wide range of remote sensing applications, such as precision agriculture, environmental monitoring, and urban planning, because HSIs provide rich spectral signatures that enable the discrimination of subtle material differences. Deep learning approaches, including Convolutional Neural Networks (CNNs), Graph Convolutional Networks (GCNs), and Transformers, have achieved strong performance in learning spatial–spectral representations. However, these models often face difficulties in jointly modeling long-range dependencies, fine-grained local structures, and non-Euclidean spatial relationships, particularly when labeled training data are scarce. This paper proposes a Spectral–Spatial Graph Transformer Network (SSGTN), a dual-branch architecture that integrates superpixel-based graph modeling with Transformer-based global reasoning. SSGTN consists of four key components, namely (1) an LDA-SLIC superpixel graph construction module that preserves discriminative spectral–spatial structures while reducing computational complexity, (2) a lightweight spectral denoising module based on 1 × 1 convolutions and batch normalization to suppress redundant and noisy bands, (3) a Spectral–Spatial Shift Module (SSSM) that enables efficient multi-scale feature fusion through channel-wise and spatial-wise shift operations, and (4) a dual-branch GCN-Transformer block that jointly models local graph topology and global spectral–spatial dependencies. Extensive experiments on three public HSI datasets (Indian Pines, WHU-Hi-LongKou, and Houston2018) under limited supervision (1% training samples) demonstrate that SSGTN consistently outperforms state-of-the-art CNN-, Transformer-, Mamba-, and GCN-based methods in overall accuracy (OA), average accuracy (AA), and the kappa (κ) coefficient. The proposed framework provides an effective baseline for HSI classification under limited supervision and highlights the benefits of integrating graph-based structural priors with global contextual modeling.

1. Introduction

Hyperspectral images (HSIs) capture the continuous reflectance spectrum of surface materials across hundreds of narrow and contiguous spectral bands. As a result, an HSI can be represented as a three-dimensional data cube that combines spatial information (H × W pixels) with rich spectral signatures (B bands). This dense spectral resolution enables the discrimination of subtle material differences that are difficult to observe using conventional RGB or multispectral sensors. Therefore, HSIs are valuable for precision agriculture [1], environmental monitoring [2], mineral exploration [3], and urban studies [4]. Hyperspectral image classification (HSIC) is a central task in these applications, in which each pixel is assigned a semantic land-cover or material label to support large-scale mapping and automated decision support.
Early HSIC methods largely relied on handcrafted feature engineering and shallow classifiers. Techniques such as band selection [5], spectral derivatives [6], and linear dimensionality reduction (e.g., PCA and LDA) were commonly used to alleviate spectral redundancy and the curse of dimensionality. Spatial context was often incorporated through morphological profiles [7] or heuristic filtering. Although these approaches are interpretable, they have limited adaptability to complex nonlinear spectral mixing patterns, depend heavily on domain expertise, and often generalize poorly across diverse scenes [8]. These limitations are particularly pronounced in high-dimensional, spatially heterogeneous, and label-scarce environments. Beyond classification-oriented pipelines, non-deep-learning hyperspectral image analysis has also explored noise-aware weighting and outlier removal to improve the robustness of spectral–spatial criteria for object-based processing and scale selection [9].
The advent of deep learning has dramatically reshaped the HSIC landscape. Convolutional Neural Networks (CNNs) have become a dominant paradigm in this area. In particular, 2D CNNs [10] extract spatial textures from spectral bands, and 3D CNNs [11] jointly model spectral–spatial dependencies. Hybrid architectures such as HybridSN [12] further improve efficiency by combining 2D and 3D convolutions. Despite their success, CNNs are limited by local receptive fields and fixed grid processing, which restrict their ability to capture long-range dependencies and adapt to irregular object boundaries. To mitigate these limitations, Transformers have been introduced and use self-attention to model global contextual relationships across both spatial and spectral dimensions [13,14]. More recently, state–space models (SSMs) such as Mamba [15] have emerged as efficient alternatives to Transformers by offering linear complexity with global receptive fields. However, these sequence-based models can be sensitive to spectral noise, may not preserve fine-grained local structures, and do not explicitly model irregular spatial relationships.
In parallel, graph neural networks (GNNs) have gained traction due to their ability to model non-Euclidean relationships among pixels or superpixels. Early graph convolutional networks (GCNs) [16] demonstrated promising results in semi-supervised HSIC by propagating node features over adjacency graphs. Subsequent efforts introduced multi-scale GCNs [17], cross-attention GCNs [18], and object-based graph constructions [19] to enhance feature aggregation and boundary preservation. More recently, hybrid graph state–space models such as Graph Mamba [20] have bridged graph structural learning with sequence modeling. Nevertheless, graph-based approaches still face several fundamental challenges. First, graph convolutions primarily capture local connectivity and may fail to model long-range dependencies effectively. Second, graph construction is often heuristic and scene-dependent. Third, many models lack dynamic multi-scale fusion mechanisms and do not fully integrate spatial and spectral cues. Finally, spectral noise and redundancy can degrade input quality and reduce robustness.
To address these limitations, we propose the Spectral–Spatial Graph Transformer Network (SSGTN), a unified dual-branch architecture that integrates graph-based structural modeling with Transformer-based global reasoning. The proposed framework includes four key components. First, an LDA-SLIC superpixel graph construction module combines linear discriminant analysis (LDA) for spectral compaction with Simple Linear Iterative Clustering (SLIC) for spatially homogeneous region segmentation to obtain a structurally informed and computationally efficient graph representation. Second, a lightweight spectral denoising module based on 1 × 1 convolutions and batch normalization suppresses redundant and noisy spectral bands while preserving discriminative features. Third, a Spectral–Spatial Shift Module (SSSM) performs cyclic shifts along spectral, height, and width dimensions to enable efficient multi-scale feature interaction without introducing additional parameters. Fourth, a dual-branch GCN-Transformer block jointly models local graph topology and global dependencies, where a spatial Transformer guided by GCNs captures long-range spatial information and a spectral Transformer models cross-band correlations; the two branches are fused through a residual graph convolution.
The main contributions of this work are summarized as follows:
(1)
We propose a novel dual-branch graph–Transformer hybrid architecture that jointly models local graph structures and global spectral–spatial dependencies, effectively overcoming the limitations of conventional single-paradigm models.
(2)
We design a dynamic Spectral–Spatial Shift Module that enables efficient multi-dimensional feature fusion through parameter-free shift operations, enhancing the model’s ability to capture contextual interactions across scales.
(3)
We develop a superpixel-driven graph construction strategy using LDA-SLIC, which adaptively captures spatial homogeneity and spectral discriminability while maintaining computational efficiency via sparse graph representations.
(4)
We introduce a spectral denoising module that refines input representations through lightweight convolutions and normalization, improving robustness to spectral noise and redundancy.
(5)
We conduct comprehensive experiments and ablation studies across multiple datasets and training regimes, validating the superiority, generality, and interpretability of SSGTN in HSI classification under limited supervision.
The remainder of this paper is organized as follows. Section 2 introduces related work in hyperspectral remote sensing image classification. Section 3 presents the proposed SSGTN architecture. Section 4 reports experimental results on three benchmark hyperspectral datasets. Section 5 discusses the strengths and limitations of the proposed method. Finally, Section 6 concludes the paper and outlines future research directions.

2. Related Work

In this section, we systematically review the evolution of deep learning-based hyperspectral image classification (HSIC) methods, which can be broadly categorized into convolutional, attention-based, and graph-based approaches. We highlight the strengths and limitations of each paradigm, paving the way to introduce our proposed Spectral–Spatial Graph Transformer Network (SSGTN).

2.1. CNN-Based Hyperspectral Image Classification Methods

Convolutional Neural Networks have become a cornerstone in HSIC due to their strong ability to extract spatially structured features [12,21,22,23,24,25,26,27,28,29]. Early work by Hu et al. [10] demonstrated that 2D CNNs can effectively leverage local spatial textures within HSI patches, significantly improving classification accuracy over purely spectral methods. To better model the spectral–spatial dependencies inherent in HSIs, Li et al. [11] extended CNNs to three dimensions and proposed 3D CNNs that jointly process spectral cubes. Further innovations led to hybrid architectures, such as the synergistic 2D/3D CNN by Yang et al. [30], which integrates spectral–spatial fusion through 3D convolutions and uses complementary 2D spatial context modeling to balance accuracy and computational efficiency. Overall, these developments reflect a progression from purely spatial 2D CNNs to more advanced 3D and hybrid architectures for comprehensive spectral–spatial integration.
Despite these advances, CNN-based methods exhibit several intrinsic limitations. Standard 2D-CNNs often disrupt spectral continuity by treating bands independently, leading to potential misclassification of spectrally similar materials. While 3D-CNNs can preserve spectral–spatial coherence, they dramatically increase model size and computational burden, creating scalability issues for high-dimensional HSIs. Moreover, the fixed receptive fields and inherently local inductive biases of convolutional kernels restrict their ability to capture long-range dependencies and multi-scale contextual information. These limitations hinder the generalization of CNN-based models in heterogeneous environments and motivate the exploration of more flexible architectures beyond convolution.

2.2. Attention-Based Hyperspectral Image Classification Methods

To overcome the locality bias of convolutions, attention-based architectures have been introduced into HSIC to model long-range spatial–spectral dependencies [13,15,31,32,33,34,35,36,37,38]. Representative examples include Transformers and, more recently, state–space models such as Mamba. For instance, Hong et al. [14] proposed SpectralFormer to strengthen inter-band relationships via self-attention, yielding competitive gains over convolutional baselines. Gu et al. [39] designed a multi-scale lightweight Transformer to reduce computational cost while preserving global modeling capacity. On the state–space side, He et al. [15] introduced 3DSS-Mamba, which organizes spectral–spatial tokens for efficient long-range dependency modeling. CenterMamba [38] adopts a center-scan strategy to enhance semantic representation with linear-complexity sequence processing. These designs provide two complementary approaches to scalable global spatial–spectral representation learning in HSIC.
Notwithstanding their progress, attention-based approaches still face several limitations. First, Transformer models can be computationally demanding and may struggle to reconcile global dependency modeling with fine-grained local detail, especially under high spectral dimensionality and limited labels. Second, many Transformer pipelines rely on fixed tokenization or single-scale processing, leading to insufficient dynamic multi-scale adaptation across heterogeneous scenes. Third, both Transformers and Mamba variants can be sensitive to spectral redundancy and noise, benefiting from explicit denoising or channel re-weighting to stabilize training. Finally, while Mamba/SSM models offer efficiency gains, they may suffer from slow convergence and hyper-parameter sensitivity, and by design, they do not explicitly account for irregular spatial relations. These shortcomings have spurred increasing interest in graph-based architectures, which provide a more flexible representation for non-Euclidean spatial–spectral structures.

2.3. Graph-Based Hyperspectral Image Classification Methods

Graph-based methods have recently emerged as powerful tools for HSIC because they can represent spatial–spectral relations on irregular and non-Euclidean domains [16,19,40,41,42,43,44,45,46,47]. Early studies demonstrated that graph convolutional networks can capture contextual dependencies through message passing over pixels or superpixels [16]. Subsequent advances introduced more adaptive designs. For example, Wan et al. [17] proposed a multi-scale dynamic GCN that aggregates information across spatial neighborhoods, while Yang et al. [18] introduced a cross-attention-driven spatial–spectral GCN to better integrate heterogeneous features. More recently, object-based strategies such as MOB-GCN [19] have further emphasized multi-scale structural cues, improving boundary delineation and robustness to noise.
Building on these advances, researchers have extended attention mechanisms to graph formulations. Zheng et al. [48] proposed a graph Transformer that fuses spatial–spectral features via self-attention to enhance long-range dependency modeling. In parallel, Ahmad et al. [20] introduced a hybrid Graph Mamba model that tokenizes hyperspectral data into graph representations and leverages state–space modeling to balance efficiency and global context capture.
Although graph-based methods have significantly advanced hyperspectral image classification, they remain constrained by several factors. First, neighborhood aggregation in graph convolutions primarily captures local connectivity, limiting their ability to capture complex long-range dependencies. Second, graph construction is often heuristic and scene-dependent, reducing adaptability across diverse scenes. Third, most models process features at fixed scales, hindering their adaptability to heterogeneous spatial–spectral patterns. Fourth, spatial and spectral cues are not always effectively integrated, leading to suboptimal joint representations. Finally, redundant or noisy bands degrade input quality and reduce classification robustness, particularly under scarce supervision. These challenges highlight the need for a more integrated approach that combines the strengths of graph structural learning with dynamic multi-scale fusion and global dependency modeling.
The proposed SSGTN is designed to address the aforementioned limitations in a unified framework. Unlike CNNs, SSGTN captures long-range dependencies via Transformer blocks while preserving local structure through graph convolutions. In contrast to pure Transformers, it incorporates an LDA-SLIC superpixel graph to model non-Euclidean spatial relationships and employs a spectral denoising module to enhance input representations. Compared to existing graph-based methods, SSGTN introduces a novel Spectral–Spatial Shift Module for dynamic multi-scale feature fusion and a dual-branch GCN-Transformer architecture to jointly model local topology and global dependencies. By synergistically integrating adaptive graph priors, spectral purification, shift-based feature interaction, and Transformer-based global reasoning, SSGTN achieves expressive and efficient hyperspectral representation learning under high-dimensional and structurally complex conditions, particularly under limited supervision.

3. Materials and Methods

The overall architecture of the proposed SSGTN is depicted in Figure 1. This hybrid framework synergistically integrates convolutional operations for local feature extraction, graph convolutions for topological modeling, and Transformer blocks for global dependency learning. The network comprises four meticulously designed components: (1) LDA-SLIC superpixel segmentation with graph construction (inspired by the superpixel-based graph modeling paradigm in CEGCN [42]), (2) spectral denoising module, (3) Spectral–Spatial Shift Module, and (4) dual-branch spectral–spatial GCN-Transformer module. Each component addresses specific challenges in hyperspectral image classification while maintaining computational efficiency.

3.1. LDA-SLIC Superpixel Segmentation Module

The LDA-SLIC module integrates Linear Discriminant Analysis (LDA) for spectral dimensionality reduction with Simple Linear Iterative Clustering (SLIC) for spatial superpixel segmentation, providing a compact, class-discriminative representation while producing spatially homogeneous regions that serve as graph nodes.

3.1.1. LDA-Based Spectral Dimensionality Reduction

Given an input hyperspectral image $\mathbf{X} \in \mathbb{R}^{H \times W \times B}$ with sparse supervision $\mathbf{Y} \in \mathbb{R}^{H \times W}$, LDA projects the spectral data into a lower-dimensional subspace by maximizing inter-class separability:
$$\mathbf{X}_{\mathrm{LDA}} = \mathbf{X}\mathbf{W}_{\mathrm{LDA}} \in \mathbb{R}^{H \times W \times C},$$
where $\mathbf{W}_{\mathrm{LDA}} \in \mathbb{R}^{B \times C}$ denotes the projection matrix learned by maximizing the Fisher criterion [49] $J(\mathbf{W}) = \frac{\mathbf{W}^{\top}\mathbf{S}_B\mathbf{W}}{\mathbf{W}^{\top}\mathbf{S}_W\mathbf{W}}$, with $\mathbf{S}_B$ and $\mathbf{S}_W$ representing the between-class and within-class scatter matrices, respectively. In practice, LDA is fitted only on the labeled training pixels (1% of the image in our low-label setting), while all unlabeled, validation, and test pixels are treated as background and excluded from the optimization, preventing any information leakage from the test set. The resulting projection reduces the spectral dimensionality from $B$ to at most $C-1$ class-discriminative components, which are then applied to the entire cube. Because LDA is a shallow linear transformation and the main model capacity resides in the subsequent CNN-GCN-Transformer modules, this supervised pre-processing acts as a light-weight spectral pre-conditioning step rather than a deep classifier, empirically yielding stable performance across different random training splits even under limited supervision.
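For concreteness, the following minimal Python sketch illustrates this supervised pre-conditioning step, assuming scikit-learn's LinearDiscriminantAnalysis and a label map in which 0 marks unlabeled background; the function and variable names are illustrative rather than taken from the released implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lda_reduce(cube: np.ndarray, labels: np.ndarray, train_mask: np.ndarray) -> np.ndarray:
    """Fit LDA on labeled training pixels only and project the full cube.

    cube:       (H, W, B) hyperspectral image
    labels:     (H, W) class map, 0 = unlabeled/background
    train_mask: (H, W) boolean mask of the ~1% training pixels
    Returns an (H, W, C-1) cube of class-discriminative components.
    """
    H, W, B = cube.shape
    flat = cube.reshape(-1, B)

    # Only labeled training pixels enter the Fisher-criterion optimization,
    # so validation and test pixels contribute nothing to the projection.
    fit_idx = (train_mask & (labels > 0)).reshape(-1)
    lda = LinearDiscriminantAnalysis()  # keeps at most C-1 components by default
    lda.fit(flat[fit_idx], labels.reshape(-1)[fit_idx])

    # The learned linear projection is then applied to every pixel of the cube.
    return lda.transform(flat).reshape(H, W, -1)
```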

3.1.2. SLIC Superpixel Segmentation

The dimension-reduced representation $\mathbf{X}_{\mathrm{LDA}}$ is subsequently partitioned into superpixels using the Simple Linear Iterative Clustering (SLIC) algorithm [50]. SLIC operates by iteratively optimizing a composite distance metric in the joint spectral–spatial domain:
$$D = \left\| \mathbf{x}_{\mathrm{spectral}}(i) - \mathbf{x}_{\mathrm{spectral}}(j) \right\|_2^2 + \lambda \left\| \mathbf{x}_{\mathrm{spatial}}(i) - \mathbf{x}_{\mathrm{spatial}}(j) \right\|_2^2,$$
where $\lambda = (m/S)^2$ controls the trade-off between spectral similarity and spatial proximity, $m$ is the compactness parameter, and $S = \sqrt{HW/K}$ is the nominal grid interval associated with the target number of superpixels $K$. The scale (or equivalently $K$) determines the expected number of pixels per superpixel and hence the granularity of the graph, while the compactness parameter governs whether regions adhere more closely to spectral boundaries (small $m$) or favor smoother, more spatially regular shapes (large $m$). We deliberately choose a moderately fine scale and balanced compactness so that mixed pixels and small objects are not overly merged, and each superpixel remains approximately homogeneous in the LDA space. It is important to note that the superpixels are used to define region-level nodes and the sparsity pattern of the graph, but final predictions are produced at the pixel level by fusing the graph branch with a parallel CNN branch. In particular, the CNN and SSSM modules operate directly on the full-resolution pixel grid and preserve fine-grained local details, while the dual-branch GCN-Transformer stack captures long-range and nonlinear spectral–spatial dependencies on the superpixel graph. As a result, LDA-SLIC provides a stable structural prior that is complemented and refined by deeper, nonlinear feature learning in the downstream network.
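A compact sketch of the segmentation step is given below, assuming scikit-image's slic; the number of segments and the compactness value are illustrative and are tuned per dataset in practice (see Section 4.2.2).

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_segment(x_lda: np.ndarray, n_segments: int = 300,
                       compactness: float = 0.05) -> np.ndarray:
    """Partition the LDA-reduced cube into spatially compact superpixels.

    x_lda: (H, W, C) dimension-reduced image. Values are rescaled to [0, 1]
    so the spatial compactness term is comparable to the spectral distance.
    Returns an (H, W) integer map of superpixel labels starting at 0.
    """
    x = (x_lda - x_lda.min()) / (x_lda.max() - x_lda.min() + 1e-12)
    segments = slic(x, n_segments=n_segments, compactness=compactness,
                    channel_axis=-1, start_label=0)
    return segments
```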

3.1.3. Graph Representation Learning

To capture region-level interactions while keeping computation tractable, we construct a superpixel graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where each node $v_i$ corresponds to a superpixel region $R_i$, and edges are defined only between spatially adjacent regions. The node feature matrix $\mathbf{S} \in \mathbb{R}^{K \times C}$ is computed by averaging the LDA-projected hyperspectral features within each region. This step reduces the spectral dimensionality to a compact, class-discriminative space and substantially lowers the memory requirements of subsequent graph operations. Based on these region features, we construct an initial adjacency matrix using a Gaussian kernel constrained by spatial neighborhood:
$$A_{i,j} = \exp\!\left(-\gamma \left\| \mathbf{S}_i - \mathbf{S}_j \right\|_2^2\right) \cdot \mathbb{I}\left[\, j \in \mathcal{N}(i)\,\right].$$
Since each superpixel interacts only with its directly bordering regions, the resulting adjacency is naturally sparse, and the number of non-zero entries scales linearly with the number of superpixels. Graph operations therefore scale with $\mathcal{O}(K\tilde{d})$ rather than $\mathcal{O}(K^2)$, where $\tilde{d}$ denotes the small node degree induced by the superpixel topology.
The kernel bandwidth is determined in a simple data-dependent way: $\sigma$ is estimated as the median distance between each superpixel and its spatial neighbors, yielding $\gamma = 1/\sigma^2$. In practice, the Gaussian-weighted adjacency serves as a structural prior and sparsity pattern rather than the final propagation operator. Within the GCN layer, we refine the edge weights by first projecting batch-normalized node features through a linear mapping to obtain feature embeddings, computing a learned affinity matrix via a sigmoid of the pairwise inner products, and then applying the sparsity mask from $\mathbf{A}$. A row-wise softmax yields a stochastic propagation matrix $\tilde{\mathbf{A}}$ that preserves the superpixel topology while allowing data-driven adjustment of edge strengths.
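The following NumPy sketch illustrates the Gaussian-kernel construction with the median-based bandwidth; the neighbor lists are assumed to be precomputed from the superpixel map, and all names are illustrative.

```python
import numpy as np

def build_adjacency(node_feats: np.ndarray, neighbors: dict) -> np.ndarray:
    """Gaussian-kernel adjacency restricted to spatially adjacent superpixels.

    node_feats: (K, C) mean LDA features per superpixel
    neighbors:  maps each node index to the set of bordering node indices
    """
    K = node_feats.shape[0]
    A = np.zeros((K, K), dtype=np.float32)

    # The median neighbor distance fixes the bandwidth: gamma = 1 / sigma^2.
    dists = [np.linalg.norm(node_feats[i] - node_feats[j])
             for i in range(K) for j in neighbors[i]]
    sigma = np.median(dists) + 1e-12
    gamma = 1.0 / sigma ** 2

    # Non-zero weights exist only between directly bordering regions.
    for i in range(K):
        for j in neighbors[i]:
            A[i, j] = np.exp(-gamma * np.sum((node_feats[i] - node_feats[j]) ** 2))
    return A
```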

3.2. Spectral Denoising Module

The spectral denoising module is designed as a lightweight feature refinement cascade to suppress spectral noise while preserving discriminative band correlations. Given an input hyperspectral cube $\mathbf{X} \in \mathbb{R}^{H \times W \times B}$, the module applies two sequential stages, each consisting of batch normalization (BN) and a 1 × 1 convolution for spectral filtering and dimensionality reduction:
$$\mathbf{F}_1 = \mathrm{Conv}_{1\times1}\big(\mathrm{BN}(\mathbf{X})\big), \qquad \mathbf{X}_{\mathrm{denoised}} = \mathrm{Conv}_{1\times1}\big(\mathrm{BN}(\mathbf{F}_1)\big),$$
where $\mathrm{BN}(\cdot)$ denotes channel-wise batch normalization with learnable affine parameters, and $\mathrm{Conv}_{1\times1}(\cdot)$ represents a 1 × 1 convolution that reduces the spectral dimensionality ($B \rightarrow D$) while acting as an adaptive spectral filter. This design effectively mitigates spectral noise and redundancy while maintaining computational efficiency.
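A minimal PyTorch sketch of this cascade, treating spectral bands as channels, is shown below; the channel widths are illustrative.

```python
import torch
import torch.nn as nn

class SpectralDenoise(nn.Module):
    """Two-stage BN + 1x1 convolution cascade for spectral filtering (B -> D)."""

    def __init__(self, in_bands: int, mid_ch: int, out_ch: int):
        super().__init__()
        self.stage1 = nn.Sequential(nn.BatchNorm2d(in_bands),
                                    nn.Conv2d(in_bands, mid_ch, kernel_size=1))
        self.stage2 = nn.Sequential(nn.BatchNorm2d(mid_ch),
                                    nn.Conv2d(mid_ch, out_ch, kernel_size=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, B, H, W) -- spectral bands are treated as channels.
        return self.stage2(self.stage1(x))
```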

3.3. SSSM Residual Convolution Module

The SSSM module (as shown in Figure 2) enables efficient multi-dimensional feature interaction through parameter-free shift operations. For an input tensor $\mathbf{X} \in \mathbb{R}^{H \times W \times B}$, we define cyclic shift operators along three dimensions:
$$\big[\mathrm{Shift}_B(\mathbf{X})\big]_{h,w,b} = \mathbf{X}_{h,\,w,\,(b+1)\bmod B},$$
$$\big[\mathrm{Shift}_H(\mathbf{X})\big]_{h,w,b} = \mathbf{X}_{(h+1)\bmod H,\,w,\,b},$$
$$\big[\mathrm{Shift}_W(\mathbf{X})\big]_{h,w,b} = \mathbf{X}_{h,\,(w+1)\bmod W,\,b}.$$
These shifted features are concatenated with the original input along the channel dimension:
$$\mathbf{X}_{\mathrm{concat}} = \mathrm{Concat}\big(\mathbf{X}, \mathrm{Shift}_B(\mathbf{X}), \mathrm{Shift}_H(\mathbf{X}), \mathrm{Shift}_W(\mathbf{X})\big),$$
$$\mathbf{X}_2 = \mathrm{Conv}_{3\times3}\big(\mathrm{Conv}_{1\times1}(\mathbf{X}_{\mathrm{concat}})\big).$$
The concatenated features are then processed by a 1 × 1 convolution followed by a 3 × 3 convolution for effective feature fusion and extraction, producing $\mathbf{X}_2$. A residual connection is employed to enhance representation capacity and alleviate gradient vanishing as follows:
$$\mathbf{Y} = \mathbf{X}_2 + \mathbf{X}.$$
This design facilitates comprehensive spectral–spatial interaction without introducing additional parameters, making it computationally efficient for high-dimensional HSI data.
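The following PyTorch sketch illustrates the SSSM computation. The cyclic shifts themselves are parameter-free torch.roll calls; the 1 × 1 and 3 × 3 fusion convolutions are assumed here to preserve the channel count so that the residual addition is well defined, which may differ from the exact channel configuration in the released code.

```python
import torch
import torch.nn as nn

class SSSM(nn.Module):
    """Spectral-Spatial Shift Module: cyclic shifts + conv fusion + residual."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv mixes the 4x-concatenated channels back to `channels`,
        # then a 3x3 conv extracts local spatial context; the channel count
        # is preserved so that the residual addition below is valid.
        self.fuse = nn.Sequential(
            nn.Conv2d(4 * channels, channels, kernel_size=1),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -- spectral channels along dim=1.
        shift_b = torch.roll(x, shifts=1, dims=1)  # spectral (band) shift
        shift_h = torch.roll(x, shifts=1, dims=2)  # vertical shift
        shift_w = torch.roll(x, shifts=1, dims=3)  # horizontal shift
        x_cat = torch.cat([x, shift_b, shift_h, shift_w], dim=1)
        return self.fuse(x_cat) + x  # residual connection
```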

3.4. Dual-Branch Spectral–Spatial GCN-Transformer

The dual-branch module constitutes the core of SSGTN (as shown in Figure 3), designed to jointly model local graph structures and global dependencies. We initialize the node embeddings as $\mathbf{X}^{(0)} = \mathbf{S} + \tilde{\mathbf{E}}$, where $\tilde{\mathbf{E}}$ denotes learnable positional encodings. The superpixel-level features $\mathbf{S}$ are obtained by aggregating pixel representations after the spectral denoising and SSSM modules using the normalized assignment matrix $\mathbf{Q}$:
$$\mathbf{S} = \mathbf{Q}\mathbf{X}_{\mathrm{pix}},$$
which averages pixel features within each superpixel to form node descriptors for the dual-branch graph–Transformer module, as shown in Algorithm 1.
Algorithm 1 Dual-Branch Spectral–Spatial GCN-Transformer (SSGTB)
Require: Superpixel node features $\mathbf{S}$; adjacency matrix $\mathbf{A}$; positional encodings $\tilde{\mathbf{E}}$; layer number $L$; dropout rate $p$
Ensure: Final node features $\mathbf{Z}_g$
1: $\mathbf{X}^{(0)} \leftarrow \mathbf{S} + \tilde{\mathbf{E}}$
2: for $\ell = 1$ to $L$ do
3:   $\mathbf{H}_{\mathrm{spa}}^{(\ell)} \leftarrow \sigma\big(\hat{\mathbf{A}}\mathbf{X}^{(\ell-1)}\mathbf{W}_{\mathrm{gcn}}\big)$
4:   $\mathbf{Z}_{\mathrm{spa}}^{(\ell)} \leftarrow \mathrm{LN}\big(\mathbf{X}^{(\ell-1)} + \mathrm{MHSA}(\mathbf{H}_{\mathrm{spa}}^{(\ell)})\big)$
5:   $\mathbf{H}_{\mathrm{spa}}^{(\ell)} \leftarrow \mathrm{LN}\big(\mathbf{Z}_{\mathrm{spa}}^{(\ell)} + \mathrm{FFN}(\mathbf{Z}_{\mathrm{spa}}^{(\ell)})\big)$
6:   $\mathbf{H}_{\mathrm{spec}}^{(\ell)} \leftarrow \mathrm{GCN}\big(\mathbf{X}^{(\ell-1)}, \mathbf{A}\big)$
7:   $\mathbf{Z}_{\mathrm{spec}}^{(\ell)} \leftarrow \mathrm{LN}\big(\mathbf{X}^{(\ell-1)} + \mathrm{MHSA}(\mathbf{H}_{\mathrm{spec}}^{(\ell)})\big)$
8:   $\mathbf{H}_{\mathrm{spec}}^{(\ell)} \leftarrow \mathrm{LN}\big(\mathbf{Z}_{\mathrm{spec}}^{(\ell)} + \mathrm{FFN}(\mathbf{Z}_{\mathrm{spec}}^{(\ell)})\big)$
9:   $\mathbf{U}^{(\ell)} \leftarrow \mathrm{Dropout}_p\big(\mathbf{W}_f^{(\ell)}[\mathbf{H}_{\mathrm{spa}}^{(\ell)} \,\|\, \mathbf{H}_{\mathrm{spec}}^{(\ell)}]\big)$
10:  $\mathbf{X}^{(\ell)} \leftarrow \sigma\big(\mathrm{GCN}(\mathbf{U}^{(\ell)}, \mathbf{A})\big)$
11: end for
12: $\mathbf{Z}_g \leftarrow \mathbf{X}^{(L)}$
13: return $\mathbf{Z}_g$
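A simplified PyTorch sketch of one SSGTB layer following Algorithm 1 is given below. It assumes non-batched node features, a dense normalized adjacency, and a plain single-weight GCN step; the attention configuration, normalization placement, and activation choices in the authors' implementation may differ.

```python
import torch
import torch.nn as nn

class SimpleGCN(nn.Module):
    """One propagation step sigma(A_hat @ X @ W) over the superpixel graph."""

    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # x: (K, dim) node features; a_hat: (K, K) normalized adjacency.
        return torch.relu(a_hat @ self.lin(x))

class SSGTBLayer(nn.Module):
    """One dual-branch layer of Algorithm 1 (spatial and spectral paths + fusion)."""

    def __init__(self, dim: int, heads: int = 4, p: float = 0.1):
        super().__init__()
        # dim must be divisible by heads for multi-head attention.
        self.gcn_spa = SimpleGCN(dim)
        self.gcn_spec = SimpleGCN(dim)
        self.attn_spa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_spec = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn_spa = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                     nn.Linear(4 * dim, dim))
        self.ffn_spec = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                      nn.Linear(4 * dim, dim))
        self.ln = nn.ModuleList([nn.LayerNorm(dim) for _ in range(4)])
        self.fuse = nn.Linear(2 * dim, dim)
        self.drop = nn.Dropout(p)
        self.gcn_out = SimpleGCN(dim)

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # Spatial path: graph context, then attention and FFN with residuals.
        h_spa = self.gcn_spa(x, a_hat)
        z, _ = self.attn_spa(h_spa[None], h_spa[None], h_spa[None])
        z_spa = self.ln[0](x + z[0])
        h_spa = self.ln[1](z_spa + self.ffn_spa(z_spa))

        # Spectral path: identical structure with its own GCN/attention weights.
        h_spec = self.gcn_spec(x, a_hat)
        z, _ = self.attn_spec(h_spec[None], h_spec[None], h_spec[None])
        z_spec = self.ln[2](x + z[0])
        h_spec = self.ln[3](z_spec + self.ffn_spec(z_spec))

        # Fusion: concatenate, project, dropout, then residual graph convolution.
        u = self.drop(self.fuse(torch.cat([h_spa, h_spec], dim=-1)))
        return self.gcn_out(u, a_hat)
```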

3.4.1. Spatial Path

The spatial path captures long-range spatial dependencies through graph-contextualized self-attention:
$$\mathbf{H}_{\mathrm{spa}}^{(\ell)} = \mathrm{GCN}\big(\mathbf{X}^{(\ell-1)}, \mathbf{A}\big) = \sigma\big(\hat{\mathbf{A}}\mathbf{X}^{(\ell-1)}\mathbf{W}_{\mathrm{gcn}}\big),$$
$$\mathbf{Z}_{\mathrm{spa}}^{(\ell)} = \mathrm{LN}\big(\mathbf{X}^{(\ell-1)} + \mathrm{MHSA}(\mathbf{H}_{\mathrm{spa}}^{(\ell)})\big),$$
$$\mathbf{H}_{\mathrm{spa}}^{(\ell)} = \mathrm{LN}\big(\mathbf{Z}_{\mathrm{spa}}^{(\ell)} + \mathrm{FFN}(\mathbf{Z}_{\mathrm{spa}}^{(\ell)})\big),$$
where $\hat{\mathbf{A}} = \tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}$ denotes the normalized adjacency matrix with self-loops [51], MHSA represents multi-head self-attention [52], LN is layer normalization, and FFN is a position-wise feed-forward network.

3.4.2. Spectral Path

The spectral path focuses on cross-band correlations through spectral attention mechanisms:
$$\mathbf{H}_{\mathrm{spec}}^{(\ell)} = \mathrm{GCN}\big(\mathbf{X}^{(\ell-1)}, \mathbf{A}\big),$$
$$\mathbf{Z}_{\mathrm{spec}}^{(\ell)} = \mathrm{LN}\big(\mathbf{X}^{(\ell-1)} + \mathrm{MHSA}(\mathbf{H}_{\mathrm{spec}}^{(\ell)})\big),$$
$$\mathbf{H}_{\mathrm{spec}}^{(\ell)} = \mathrm{LN}\big(\mathbf{Z}_{\mathrm{spec}}^{(\ell)} + \mathrm{FFN}(\mathbf{Z}_{\mathrm{spec}}^{(\ell)})\big).$$

3.4.3. Feature Fusion and Propagation

The outputs from both paths are integrated through concatenation and graph-convolutional fusion:
$$\mathbf{U}^{(\ell)} = \mathrm{Dropout}\big(\mathbf{W}_f^{(\ell)}[\mathbf{H}_{\mathrm{spa}}^{(\ell)} \,\|\, \mathbf{H}_{\mathrm{spec}}^{(\ell)}]\big),$$
$$\mathbf{X}^{(\ell)} = \sigma\big(\mathrm{GCN}(\mathbf{U}^{(\ell)}, \mathbf{A})\big).$$
After $L$ layers, the final node representations $\mathbf{Z}_g = \mathbf{X}^{(L)}$ are projected back to pixel space using the assignment matrix $\mathbf{Q}$ for classification:
$$\mathbf{Z}_{\mathrm{pixel}} = \mathbf{Q}^{\top}\mathbf{Z}_g.$$
This architecture enables SSGTN to capture complementary information: graph convolutions encode local structural priors, while self-attention mechanisms model long-range spatial and spectral dependencies, resulting in a comprehensive representation for hyperspectral image classification.
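As a usage illustration, the sketch below shows the assignment-matrix bookkeeping that links the pixel and superpixel domains: mean pooling into node descriptors before the graph branch and broadcasting the refined node features back to member pixels for per-pixel classification. The exact normalization used for the back-projection in the released code may differ.

```python
import torch
import torch.nn.functional as F

def superpixel_assignment(segments: torch.Tensor, num_superpixels: int) -> torch.Tensor:
    """Row-normalized assignment matrix Q of shape (K, N).

    segments: (H, W) long tensor of superpixel indices in [0, K).
    """
    seg_flat = segments.reshape(-1)                          # (N,)
    one_hot = F.one_hot(seg_flat, num_superpixels).float()   # (N, K) membership
    q = one_hot.t()                                          # (K, N)
    return q / q.sum(dim=1, keepdim=True).clamp(min=1.0)     # each row averages a region

def aggregate_nodes(q: torch.Tensor, x_pix: torch.Tensor) -> torch.Tensor:
    """S = Q X_pix: mean-pool flattened pixel features (N, d) into (K, d) node descriptors."""
    return q @ x_pix

def project_to_pixels(z_nodes: torch.Tensor, segments: torch.Tensor) -> torch.Tensor:
    """Give every pixel the refined feature of the superpixel it belongs to,
    i.e., broadcast the (K, d) node features back to an (N, d) pixel map."""
    return z_nodes[segments.reshape(-1)]
```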

4. Results

To comprehensively evaluate the performance of the proposed SSGTN framework, we conduct extensive experiments on three benchmark hyperspectral datasets with diverse spatial resolutions, spectral configurations, and land-cover characteristics.

4.1. Datasets

4.1.1. Indian Pines

The Indian Pines dataset was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over an agricultural area in Northwestern Indiana. The dataset comprises 145 × 145 pixels with 224 spectral bands in the wavelength range of 400–2500 nm. After removing noisy and water-absorption bands, 200 spectral bands are retained for analysis. The spatial resolution is 20 m per pixel, and the scene contains 16 distinct land-cover classes, primarily consisting of various crop types, forests, and natural vegetation. This dataset presents challenges due to its moderate spatial resolution and significant spectral similarities between different crop species.

4.1.2. WHU-Hi-LongKou

The WHU-Hi-LongKou dataset was collected by a Headwall Nano-Hyperspec imaging sensor mounted on a DJI Matrice 600 Pro UAV platform. The imagery covers 550 × 400 pixels with 270 spectral bands spanning the visible to near-infrared spectrum (400–1000 nm). With a high spatial resolution of 0.463 m, this dataset captures detailed agricultural patterns in Longkou, Hubei, China. It encompasses nine land-cover classes including various crop types (corn, cotton, and rice) and water bodies, totaling 204,542 labeled pixels. The high spatial resolution and rich spectral information make this dataset suitable for evaluating fine-grained classification performance.

4.1.3. Houston2018

The Houston2018 dataset was provided as part of the 2018 IEEE GRSS Data Fusion Contest, covering an urban–rural area in Houston, Texas. The dataset consists of 601 × 2384 pixels with 50 spectral bands in the 380–1050 nm range at 2.5 m spatial resolution. It includes 20 land-cover classes with significant class imbalance, ranging from abundant categories like healthy grass (9799 samples) to rare classes such as synthetic turf (684 samples). The total of 504,856 labeled pixels and the complex urban landscape make this dataset particularly challenging for classification algorithms.

4.2. Experimental Setup

4.2.1. Evaluation Metrics

We employ four standard evaluation metrics to comprehensively assess classification performance from different perspectives. Overall Accuracy (OA) measures the global classification correctness:
$$\mathrm{OA} = \frac{1}{N}\sum_{i=1}^{N} \mathbb{I}(y_i = \hat{y}_i),$$
where $N$ is the total number of test samples, $y_i$ and $\hat{y}_i$ denote the true and predicted labels, respectively, and $\mathbb{I}(\cdot)$ is the indicator function.
Average Accuracy (AA) computes the mean of per-class accuracies, providing a balanced performance assessment across classes:
$$\mathrm{AA} = \frac{1}{C}\sum_{i=1}^{C} \frac{\mathrm{TP}_i}{\mathrm{TP}_i + \mathrm{FN}_i},$$
where $C$ is the number of classes, and $\mathrm{TP}_i$ and $\mathrm{FN}_i$ represent the true positives and false negatives for class $i$.
The Kappa coefficient ($\kappa$) quantifies classification agreement beyond chance:
$$\kappa = \frac{p_o - p_e}{1 - p_e},$$
where $p_o$ is the observed agreement (equal to OA) and $p_e$ is the agreement expected by chance, estimated from the marginal class distributions of the confusion matrix.
Per-class accuracy provides detailed insights into individual class performance:
$$\mathrm{acc}_i = \frac{\mathrm{TP}_i}{\mathrm{TP}_i + \mathrm{FN}_i}.$$
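For reference, all four metrics can be computed from a single confusion matrix, as in the NumPy sketch below (function and variable names are illustrative).

```python
import numpy as np

def classification_metrics(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int):
    """Compute OA, AA, per-class accuracy, and the kappa coefficient
    from integer label vectors over the test pixels."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)                          # confusion matrix

    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)     # TP / (TP + FN)
    oa = np.diag(cm).sum() / cm.sum()                           # overall accuracy
    aa = per_class.mean()                                       # average accuracy

    p_o = oa                                                    # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / cm.sum() ** 2  # chance agreement
    kappa = (p_o - p_e) / (1.0 - p_e)
    return oa, aa, per_class, kappa
```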

4.2.2. Implementation Details

All experiments are conducted on a computational platform equipped with an NVIDIA GeForce RTX 4090 GPU and Intel Xeon Silver 4310 CPU. To rigorously evaluate model performance under limited supervision, we adopt only 1% of labeled pixels per class (minimum one sample per class) for training, with an additional 1% for validation and the remaining 98% for testing. The validation set is used exclusively for early stopping and checkpoint selection (patience = 100), and the checkpoint with the best validation performance is used for final testing. The model is implemented using PyTorch 2.9.1 and trained for 600 epochs with the Adam optimizer, employing an initial learning rate of $5 \times 10^{-4}$ with cosine annealing scheduling. No external pretraining, data augmentation, or other additional training techniques are used, and no extra pre-processing beyond what is described in this manuscript is applied. The superpixel graph construction uses K = 300 superpixels for Indian Pines and WHU-Hi-LongKou, and K = 500 for Houston2018, with compactness parameter m = 0.05 unless otherwise specified.
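The optimization protocol described above can be summarized by the following illustrative sketch (Adam with an initial learning rate of 5 × 10⁻⁴, cosine annealing over 600 epochs, and early stopping with patience 100); the per-epoch training and validation routines are assumed to be supplied by the surrounding pipeline.

```python
import torch
from typing import Callable

def train_ssgtn(model: torch.nn.Module,
                train_one_epoch: Callable[[torch.nn.Module, torch.optim.Optimizer], None],
                evaluate_val: Callable[[torch.nn.Module], float],
                epochs: int = 600, patience: int = 100) -> None:
    """Adam + cosine annealing with early stopping on validation accuracy."""
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

    best_val, wait = 0.0, 0
    for _ in range(epochs):
        train_one_epoch(model, optimizer)
        scheduler.step()
        val_acc = evaluate_val(model)
        if val_acc > best_val:            # keep the checkpoint that is best on validation
            best_val, wait = val_acc, 0
            torch.save(model.state_dict(), "best_ssgtn.pt")
        else:
            wait += 1
            if wait >= patience:          # early stopping (patience = 100 epochs)
                break
```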

4.2.3. Benchmark Methods

We compare SSGTN against twelve state-of-the-art methods spanning four representative paradigms:
  • CNN-based methods: CNN-2D [10] extracts spatial features through cascaded 2D convolutions; SSRN [12] employs 3D convolutional layers for joint spectral–spatial modeling; HybridSN [12] integrates 2D and 3D convolutions for hierarchical feature extraction.
  • Transformer-based methods: SSFTT [31] utilizes spectral–spatial feature tokenization for sequence modeling; MorphFormer [33] incorporates morphological operations with Transformer architecture.
  • Mamba-based methods: Mamba-HSI adapts state–space models for hyperspectral imaging; MFormer [15] combines Mamba with Transformer components for long-range dependency modeling.
  • GCN-based methods: GCN [16] operates on superpixel-based graphs; CEGCN [42] enhances graph representations with CNN features; GraphMamba [20] integrates graph structures with state–space models.
For a fair comparison, all baseline methods are trained and evaluated using the training protocol as described in the previous subsection, while method-specific hyperparameters are set following the recommendations in the original papers or official repositories, and all other common settings are kept identical across methods.

4.3. Ablation Studies

4.3.1. Impact of Spectral–Spatial Graph Parameters

In the LDA-SLIC superpixel construction, the scale and compactness parameters jointly determine the granularity and regularity of the induced superpixel graph. The parameter study in Table 1 and Table 2 reveals a consistent trade-off: overly fine segmentation tends to generate small, fragmented regions whose descriptors are more vulnerable to spectral noise and mixed pixels, whereas overly coarse segmentation increases region heterogeneity and blurs class boundaries, weakening the homogeneity assumption underlying region-level graph reasoning. The compactness parameter further mediates boundary adherence versus spatial regularity; moderate compactness typically preserves meaningful object contours while avoiding irregular, elongated superpixels that may distort adjacency relations. Notably, the sensitivity to these parameters is dataset-dependent: scenes with clearer object extents and higher spatial resolution exhibit a wider feasible range, while coarse-resolution agricultural scenes with high inter-class spectral similarity require a more careful balance to reduce mixed superpixels. Overall, these observations motivate selecting a moderately fine graph granularity together with boundary-aware superpixels to best support subsequent graph propagation and global reasoning.

4.3.2. Component-Wise Analysis

We conduct component-wise ablation studies on the Houston2018 dataset to examine the contribution of each module in SSGTN (Table 3). Overall, the results indicate that the modules address different failure modes in low-label HSIC and thus provide complementary benefits. The Spectral Denoising Module (SDM) improves the reliability of region descriptors and affinity estimation by suppressing band redundancy and sensor noise prior to graph construction, which helps mitigate error propagation during subsequent message passing. The Spectral–Spatial Shift Module (SSSM) introduces an explicit yet parameter-free spectral–spatial interaction prior on the pixel grid before region aggregation, enhancing robustness to mixed pixels and local boundary perturbations. Building upon these strengthened low-level representations, the Spatial Transformer Branch captures long-range spatial context over the superpixel topology, whereas the Spectral Transformer Branch emphasizes non-local cross-band correlations that are not fully recovered by local propagation alone; their joint use therefore integrates spatial continuity with spectral correlation into a unified representation. Importantly, the additional overhead remains controlled because SSSM introduces no learnable parameters and attention operates on a compact superpixel graph rather than the full pixel lattice.

4.3.3. Training Ratio Analysis

To assess robustness under limited supervision, we vary the training sample ratio from 1% to 5% across all datasets (Table 4). The results highlight strong label efficiency: SSGTN benefits noticeably from small increases in supervision and exhibits diminishing returns as the annotation budget grows, which is consistent with the model’s ability to exploit structural priors and global context when labels are scarce. The improvement pattern also differs across datasets. For WHU-Hi-LongKou, the accuracy saturates rapidly due to clearer object extents and higher spatial resolution, suggesting that structural regularity and strong spatial cues allow effective learning with very few labels. In contrast, Indian Pines and Houston2018 require more supervision to stabilize class-balanced performance, which can be attributed to stronger spectral confusion, mixed pixels, and class imbalance that make minority categories harder to learn. Overall, this analysis indicates that SSGTN is particularly suitable for low-label settings, while further gains in highly imbalanced scenes are more likely to be constrained by data scarcity in rare classes rather than representation capacity alone.

4.3.4. Computational Complexity

To complement the performance-oriented ablations, we further report a per-image complexity comparison on the WHU-Hi-LongKou dataset in terms of FLOPs and model parameters (Table 5). A key observation is that the FLOPs are dominated more by the token/region granularity than by the parameter count. Patch-based 3D CNN/Transformer pipelines typically operate on dense spectral–spatial patches or long token sequences on the full pixel lattice, which leads to $10^3$–$10^4$ GFLOPs per image. In contrast, superpixel-graph methods (GCN, CEGCN, and SSGTN) compress the image into a much smaller set of region nodes and perform message passing/attention on a sparse adjacency, substantially reducing computation while still leveraging contextual aggregation. Importantly, G-Mamba does not adopt superpixel segmentation and thus incurs high FLOPs despite being categorized as a graph/SSM hybrid, since its global modeling is still carried out at a much finer granularity. Meanwhile, the notably low FLOPs of MambaHSI are largely attributed to its tile-based processing strategy, which reduces the effective sequence length and computation per forward pass compared to patch-based tokenization used by several Transformer-style baselines. We note that peak GPU memory consumption is also strongly implementation-dependent (e.g., batch size, precision, and activation storage), and therefore we report FLOPs/parameters as hardware-agnostic indicators while providing qualitative discussion of memory trends through token/region granularity. Overall, SSGTN provides a favorable computation–capacity trade-off with a clear advantage in FLOPs, while its parameter size is comparatively larger and remains a noticeable drawback.

4.4. Experimental Results

4.4.1. Comparative Performance Analysis

Table 6, Table 7 and Table 8 and Figure 4, Figure 5 and Figure 6 indicate that SSGTN provides consistent improvements across three representative benchmarks under the 1% per-class protocol. We additionally report a classical support vector machine (SVM) baseline as a non-deep-learning reference. Across all three datasets, SVM yields noticeably lower OA and κ compared with deep models, and its large variance on Indian Pines indicates limited robustness under scarce supervision and strong spectral confusion. With Indian Pines, the dominant challenge is the combination of coarse spatial resolution, strong spectral similarity among crop types, and extreme class imbalance. In this regime, methods that rely heavily on local neighborhoods tend to be more affected by mixed pixels and scarce supervision for minority categories. The region-level structural prior and global reasoning in SSGTN improve spatial consistency and overall robustness, while class-balanced performance remains constrained by the lack of samples in rare classes. In WHU-Hi-LongKou, the scene exhibits clearer object extents and higher spatial resolution, and superpixel regions align well with meaningful structures. Consequently, region-level graph modeling becomes highly effective and most competitive methods approach saturation, with SSGTN maintaining clean boundaries and fewer spurious fragments in the prediction maps. In Houston2018, the urban environment introduces high intra-class variability and pronounced imbalance. In this setting, combining graph-regularized spatial coherence with global contextual modeling is particularly beneficial for complex man-made structures, whereas residual errors are mainly concentrated in rare categories, suggesting that data imbalance is a primary bottleneck for further gains.

4.4.2. Qualitative Results and Visual Analysis

Classification maps in Figure 4, Figure 5 and Figure 6 are examined from three complementary perspectives, i.e., intra-region homogeneity, boundary adherence, and robustness to sparse supervision. Compared with CNN-based baselines, which tend to produce fragmented predictions around mixed pixels, and purely global models, which may introduce scattered misclassifications when local structure is ambiguous, SSGTN yields spatially coherent regions while preserving class transitions at object boundaries. This behavior is consistent with the intended role of the superpixel graph in enforcing structural regularity and the Transformer branch in compensating for long-range contextual dependencies, thereby mitigating both salt-and-pepper noise and excessive over-smoothing.
On Indian Pines (Figure 4), where small agricultural parcels and spectrally similar crop types often induce local confusion, SSGTN reduces isolated mislabeled pixels and maintains more continuous field patterns, indicating improved handling of mixed pixels under scarce labels. On WHU-Hi-LongKou (Figure 5), the dominant challenge is aligning predictions with elongated field boundaries; SSGTN better follows these boundaries and suppresses cross-field label leakage, reflecting effective region-level regularization. On the urban Houston2018 scene (Figure 6), fine-grained man-made structures and strong material similarity can cause boundary blur or sporadic errors; SSGTN preserves sharper transitions between adjacent classes and produces cleaner structural layouts, suggesting that combining graph-based locality with attention-driven context is beneficial for discriminating spectrally close categories.

5. Discussion

The experimental results demonstrate that SSGTN effectively addresses key challenges in hyperspectral image classification under limited labeled data through its novel architectural design. The dual-branch framework successfully leverages complementary strengths: the graph-based branch preserves discriminative spectral patterns in homogeneous regions, while the Transformer branch captures long-range dependencies essential for complex landscapes.
However, SSGTN exhibits limitations in handling severely underrepresented classes, as evidenced by the poor performance on Class 12 in Houston2018. This limitation stems from graph sparsity in rare classes and attention bias toward dominant categories. Future work should explore topology-aware graph sampling and attention regularization to improve minority class representation.
Compared to CNN-based methods, SSGTN achieves superior spatial coherence through graph-structured regularization. Relative to pure GCN approaches, the Transformer branch mitigates oversmoothing in heterogeneous scenes. The computational complexity of joint graph-attention learning necessitates careful hardware considerations for large-scale deployments, though the sparse graph construction provides significant efficiency gains.
The consistent performance advantage across diverse datasets and low-label training regimes demonstrates that SSGTN is a robust and generalizable framework for hyperspectral image classification, particularly in practical scenarios where labeled data are limited and computational efficiency is critical.

6. Conclusions

This paper has presented the SSGTN, a novel dual-branch architecture that advances hyperspectral image classification under limited supervision. The proposed framework integrates four key innovations: an LDA-SLIC superpixel graph construction module that preserves discriminative spectral–spatial features, a spectral denoising module for noise suppression, a parameter-efficient Spectral–Spatial Shift Module for multi-dimensional feature interaction, and a dual-branch GCN-Transformer that jointly models local topology and global dependencies. Extensive experiments on three benchmark datasets demonstrate that SSGTN consistently outperforms state-of-the-art methods under limited supervision conditions while maintaining computational efficiency through sparse graph representations and optimized module design.
Future research will focus on enhancing SSGTN’s capability to handle severe class imbalance through advanced graph sampling strategies and attention regularization techniques. We will also explore self-supervised pretraining approaches to further improve sample efficiency and investigate adaptive graph construction mechanisms for better boundary preservation in heterogeneous landscapes. These developments aim to extend the framework’s applicability to more complex real-world scenarios while addressing current limitations in computational scalability and minority class representation.

Author Contributions

Conceptualization, H.S. and Z.L.; methodology, H.S.; investigation, H.S., Z.L., Y.M., G.Z. and X.D.; editing, H.S. and Z.L.; writing—review, H.S., Z.L., Y.M., G.Z. and X.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was jointly supported by the National Natural Science Foundation of China (NSFC) under Grant 62403155; the Guangzhou Basic and Applied Basic Research Topics under Grant 2024A04J2081; the Guangdong Basic and Applied Basic Research Fund under Grant 2023A1515011850; the Guangzhou Science and Technology Planning Project under Grant 2024A03J0460; and the College Students’ Science and Technology Innovation Cultivation Project of Guangdong Province, China under Grant pdjh2024a293.

Data Availability Statement

The hyperspectral datasets used in this study are publicly available. The Indian Pines dataset is available at https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes#Indian_Pines, accessed on 28 November 2025. The WHU-Hi-LongKou dataset is part of the WHU-Hi hyperspectral benchmark and is available at https://rsidea.whu.edu.cn/e-resource_WHUHi_sharing.htm, accessed on 28 November 2025. The Houston2018 dataset was released as part of the 2018 IEEE GRSS Data Fusion Contest, jointly provided by the National Center for Airborne Laser Mapping (NCALM) and the University of Houston; detailed information and access are available at https://machinelearning.ee.uh.edu/2018-ieee-grss-data-fusion-challenge-fusion-of-multispectral-lidar-and-hyperspectral-data/, accessed on 28 November 2025.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Mahlein, A.K. Plant disease detection by imaging sensors–parallels and specific demands for precision agriculture and plant phenotyping. Plant Dis. 2016, 100, 241–251. [Google Scholar] [CrossRef] [PubMed]
  2. Tan, K.; Ma, W.; Chen, L.; Wang, H.; Du, Q.; Du, P.; Yan, B.; Liu, R.; Li, H. Estimating the distribution trend of soil heavy metals in mining area from HyMap airborne hyperspectral imagery based on ensemble learning. J. Hazard. Mater. 2021, 401, 123288. [Google Scholar] [CrossRef]
  3. Van der Meer, F.D.; Van der Werff, H.M.; Van Ruitenbeek, F.J.; Hecker, C.A.; Bakker, W.H.; Noomen, M.F.; Van Der Meijde, M.; Carranza, E.J.M.; De Smeth, J.B.; Woldai, T. Multi- and hyperspectral geologic remote sensing: A review. Int. J. Appl. Earth Obs. Geoinf. 2012, 14, 112–128. [Google Scholar] [CrossRef]
  4. Wu, Y.; Wang, Y.; Zhang, D. Design and Analysis of Spaceborne Hyperspectral Imaging System for Coastal Studies. Remote Sens. 2025, 17, 986. [Google Scholar] [CrossRef]
  5. Xu, J.L.; Esquerre, C.; Sun, D.W. Methods for performing dimensionality reduction in hyperspectral image classification. J. Near Infrared Spectrosc. 2018, 26, 61–75. [Google Scholar] [CrossRef]
  6. Ye, Z.; He, M.; Fowler, J.E.; Du, Q. Hyperspectral image classification based on spectra derivative features and locality preserving analysis. In Proceedings of the 2014 IEEE China Summit & International Conference on Signal and Information Processing (Chinasip), Xi’an, China, 9–13 July 2014; pp. 138–142. [Google Scholar]
  7. Tan, K.; Li, E.; Du, Q.; Du, P. Hyperspectral image classification using band selection and morphological profiles. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 40–48. [Google Scholar] [CrossRef]
  8. Hao, T.; Zhang, Z.; Crabbe, M.J.C. Few-Shot Hyperspectral Remote Sensing Image Classification via an Ensemble of Meta-Optimizers with Update Integration. Remote Sens. 2024, 16, 2988. [Google Scholar] [CrossRef]
  9. Dao, P.D.; Mantripragada, K.; He, Y.; Qureshi, F.Z. Improving hyperspectral image segmentation by applying inverse noise weighting and outlier removal for optimal scale selection. ISPRS J. Photogramm. Remote Sens. 2021, 171, 348–366. [Google Scholar] [CrossRef]
  10. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef]
  11. Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef]
  12. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281. [Google Scholar] [CrossRef]
  13. He, X.; Chen, Y.; Lin, Z. Spatial-spectral transformer for hyperspectral image classification. Remote Sens. 2021, 13, 498. [Google Scholar] [CrossRef]
  14. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking hyperspectral image classification with transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5518615. [Google Scholar] [CrossRef]
  15. He, Y.; Tu, B.; Liu, B.; Li, J.; Plaza, A. 3DSS-Mamba: 3D-spectral-spatial mamba for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5534216. [Google Scholar] [CrossRef]
  16. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph convolutional networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5966–5978. [Google Scholar] [CrossRef]
  17. Wan, S.; Gong, C.; Zhong, P.; Du, B.; Zhang, L.; Yang, J. Multiscale dynamic graph convolutional network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3162–3177. [Google Scholar] [CrossRef]
Figure 1. Overall architecture of the proposed SSGTN. The framework integrates four key components: (1) LDA-SLIC superpixel segmentation for graph construction, (2) spectral denoising module for noise suppression, (3) Spectral–Spatial Shift Module for multi-scale feature fusion, and (4) dual-branch GCN-Transformer module for joint local and global dependency modeling.
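The graph-construction step in Figure 1 can be made concrete with a short sketch: each superpixel becomes a graph node carrying its mean spectrum, and regions that touch in the image are connected by an edge. The code below is a minimal illustration under those assumptions, not the authors' implementation; the function name and the 4-neighbourhood rule are choices made for the example.

```python
# Minimal sketch (not the authors' code): region-level graph from a superpixel map.
import numpy as np

def superpixel_graph(labels, cube):
    """labels: (H, W) superpixel ids starting at 0; cube: (H, W, B) reflectance.
    Returns node features (N, B) as region-mean spectra and a binary adjacency (N, N)."""
    n = int(labels.max()) + 1
    feats = np.stack([cube[labels == k].mean(axis=0) for k in range(n)])
    adj = np.zeros((n, n), dtype=np.float32)
    # 4-neighbourhood: superpixels touching horizontally or vertically share an edge.
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        pairs = np.stack([a.ravel(), b.ravel()], axis=1)
        pairs = pairs[pairs[:, 0] != pairs[:, 1]]
        adj[pairs[:, 0], pairs[:, 1]] = adj[pairs[:, 1], pairs[:, 0]] = 1.0
    return feats, adj

# Toy usage: a 6x6 image split into four 3x3 superpixels over a 10-band cube.
labels = np.repeat(np.repeat(np.arange(4).reshape(2, 2), 3, axis=0), 3, axis=1)
feats, adj = superpixel_graph(labels, np.random.rand(6, 6, 10))
print(feats.shape, int(adj.sum()))  # (4, 10) and 8 nonzero adjacency entries (4 undirected edges)
```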
Figure 2. Detailed structure of the Spectral–Spatial Shift Module. The module performs cyclic shifts along spectral (S), height (H), and width (W) dimensions, followed by concatenation and convolutional fusion. The residual connection preserves gradient flow and stabilizes training. Shift operations enable efficient cross-dimensional interaction without introducing additional parameters.
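As a rough illustration of the shift mechanism described in Figure 2, the sketch below applies parameter-free cyclic shifts along the spectral, height, and width axes of a feature map, concatenates the shifted copies, and fuses them with a 1 × 1 convolution plus a residual connection. The tensor layout, shift size, and fusion kernel are assumptions made for the example, not the exact SSSM configuration.

```python
# Hedged sketch of a spectral-spatial shift block in PyTorch (illustrative only).
import torch
import torch.nn as nn

class SpectralSpatialShift(nn.Module):
    def __init__(self, channels: int, shift: int = 1):
        super().__init__()
        self.shift = shift
        # A 1x1 convolution fuses the concatenated shifted copies back to `channels`.
        self.fuse = nn.Conv2d(4 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.shift
        # Parameter-free cyclic shifts along spectral (dim=1), height (dim=2), width (dim=3).
        x_spec = torch.roll(x, shifts=s, dims=1)
        x_h = torch.roll(x, shifts=s, dims=2)
        x_w = torch.roll(x, shifts=s, dims=3)
        fused = self.fuse(torch.cat([x, x_spec, x_h, x_w], dim=1))
        return fused + x  # residual connection keeps gradient flow stable

# Toy usage: 2 samples, 16 "spectral" channels, 9x9 patch.
block = SpectralSpatialShift(channels=16)
print(block(torch.randn(2, 16, 9, 9)).shape)  # torch.Size([2, 16, 9, 9])
```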
Figure 3. Architecture of the dual-branch spectral–spatial Graph Transformer block. The spatial path (top) processes graph-convolved features through multi-head self-attention to capture long-range spatial dependencies. The spectral path (bottom) employs similar mechanisms for cross-band correlation modeling. Both branches are fused via concatenation and graph convolution, with layer normalization and residual connections applied throughout.
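The dual-branch block in Figure 3 combines graph convolution with multi-head self-attention. The sketch below shows one plausible way to wire such a block in PyTorch; the adjacency normalisation, head count, and fusion layer are illustrative assumptions, and for brevity both branches here operate on the same superpixel-level node features rather than separate spatial and band tokenisations.

```python
# Minimal sketch of a GCN + multi-head attention block (illustrative, not the exact SSGTN block).
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Plain GCN layer: symmetric-normalised adjacency times features, then a linear map."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        d = a.sum(-1).clamp(min=1.0).rsqrt()                  # D^{-1/2}
        return self.lin((d.unsqueeze(1) * a * d.unsqueeze(0)) @ x)

class DualBranchBlock(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.gcn_spa, self.gcn_spe = GraphConv(dim, dim), GraphConv(dim, dim)
        self.attn_spa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_spe = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.fuse = GraphConv(2 * dim, dim)

    def forward(self, x, adj):
        # x: (N, dim) superpixel node features; adj: (N, N) region adjacency.
        spa = self.norm(self.gcn_spa(x, adj))
        spa, _ = self.attn_spa(spa[None], spa[None], spa[None])  # long-range spatial context
        spe = self.norm(self.gcn_spe(x, adj))
        spe, _ = self.attn_spe(spe[None], spe[None], spe[None])  # spectral-path analogue
        fused = torch.cat([spa[0], spe[0]], dim=-1)               # concatenate both branches
        return x + self.fuse(fused, adj)                          # graph-conv fusion + residual

# Toy usage: 50 nodes, 32-dim features, random symmetric adjacency.
adj = (torch.rand(50, 50) > 0.9).float()
adj = torch.maximum(adj, adj.T)
print(DualBranchBlock(32)(torch.randn(50, 32), adj).shape)  # torch.Size([50, 32])
```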
Figure 4. Classification maps of different models on the Indian Pines dataset with 1% samples per class as the training set. (a) Ground truth; (b) CNN-2D; (c) SSRN; (d) HybridSN; (e) SSFTT; (f) MorphFormer; (g) MambaHSI; (h) MFormer; (i) GCN; (j) CEGCN; (k) Graph-Mamba; (l) SSGTN (ours).
Figure 5. Classification maps of different models on the WHU-Hi-LongKou dataset with 1% samples per class as the training set. (a) Ground truth; (b) CNN-2D; (c) SSRN; (d) HybridSN; (e) SSFTT; (f) MorphFormer; (g) MambaHSI; (h) MFormer; (i) GCN; (j) CEGCN; (k) Graph-Mamba; (l) SSGTN (ours).
Figure 6. Classification maps of different models on the Houston2018 dataset with 1% samples per class as the training set. (a) Ground truth; (b) CNN-2D; (c) SSRN; (d) HybridSN; (e) SSFTT; (f) MorphFormer; (g) MambaHSI; (h) MFormer; (i) GCN; (j) CEGCN; (k) Graph-Mamba; (l) SSGTN (ours).
Table 1. Comprehensive parameter analysis on the Indian Pines dataset: performance metrics (OA, AA, κ in %) across different combinations of superpixel scale and compactness parameters with 1% training ratio. Optimal performance is achieved at scale = 30 and compactness = 0.05. Each cell lists OA / AA / κ.
Compactness | Scale 5 | Scale 10 | Scale 30 | Scale 60 | Scale 100
0.05 | 84.56 / 78.61 / 82.37 | 86.70 / 80.08 / 84.83 | 88.75 / 79.47 / 87.16 | 87.54 / 77.63 / 85.77 | 85.79 / 79.45 / 83.78
0.10 | 83.30 / 76.07 / 80.92 | 87.12 / 80.22 / 85.31 | 85.75 / 75.50 / 83.78 | 87.34 / 79.57 / 85.56 | 86.37 / 74.95 / 84.45
0.50 | 85.02 / 75.97 / 82.85 | 84.79 / 75.45 / 82.64 | 84.55 / 76.74 / 82.36 | 81.93 / 71.43 / 79.40 | 80.31 / 71.92 / 77.52
1.00 | 83.67 / 72.62 / 81.32 | 85.38 / 77.12 / 83.35 | 85.72 / 76.70 / 83.75 | 83.01 / 71.23 / 80.65 | 78.99 / 63.10 / 76.06
5.00 | 83.39 / 75.58 / 81.07 | 85.75 / 78.09 / 83.75 | 84.22 / 74.73 / 81.95 | 83.88 / 75.37 / 81.64 | 80.01 / 70.20 / 77.10
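Tables 1 and 2 vary the superpixel scale and the SLIC compactness. For readers who want to reproduce this kind of sweep, the sketch below pairs a class-supervised LDA projection with scikit-image's slic; deriving the number of segments from the scale (average pixels per region) and the normalisation of the projected bands are assumptions for the example rather than the paper's exact LDA-SLIC procedure.

```python
# Hedged sketch of an LDA projection followed by SLIC superpixel segmentation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from skimage.segmentation import slic

def lda_slic(cube, labels, scale=30, compactness=0.05, n_components=3):
    """cube: (H, W, B) HSI; labels: (H, W) integer ground truth, 0 = unlabeled."""
    H, W, B = cube.shape
    mask = labels.ravel() > 0
    lda = LinearDiscriminantAnalysis(n_components=n_components)
    lda.fit(cube.reshape(-1, B)[mask], labels.ravel()[mask])
    proj = lda.transform(cube.reshape(-1, B)).reshape(H, W, n_components)
    proj = (proj - proj.min()) / (np.ptp(proj) + 1e-8)   # normalise to [0, 1] for SLIC
    n_segments = max(1, (H * W) // scale)                # roughly one region per `scale` pixels
    return slic(proj, n_segments=n_segments, compactness=compactness, start_label=0)

# Toy usage: random cube with 5 fake classes.
cube = np.random.rand(60, 60, 40)
gt = np.random.randint(0, 6, size=(60, 60))
seg = lda_slic(cube, gt)
print(seg.shape, seg.max() + 1)  # (60, 60) and the number of superpixels
```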
Table 2. Parameter sensitivity analysis on the WHU-Hi-LongKou dataset: performance metrics (OA, AA, κ in %) across scale and compactness configurations. The model maintains robust performance with OA consistently above 99.4%. Each cell lists OA / AA / κ.
Compactness | Scale 30 | Scale 60 | Scale 100 | Scale 200 | Scale 300
0.1 | 99.48 / 98.73 / 99.31 | 99.57 / 98.95 / 99.44 | 99.64 / 99.07 / 99.52 | 99.64 / 98.99 / 99.53 | 99.69 / 99.23 / 99.59
0.5 | 99.41 / 98.44 / 99.23 | 99.47 / 98.80 / 99.30 | 99.51 / 98.73 / 99.36 | 99.49 / 98.71 / 99.33 | 99.62 / 97.80 / 99.50
1.0 | 99.48 / 98.44 / 99.32 | 99.41 / 98.54 / 99.22 | 99.48 / 98.66 / 99.31 | 99.54 / 98.64 / 99.39 | 99.39 / 98.46 / 99.20
5.0 | 99.49 / 98.81 / 99.33 | 99.43 / 98.62 / 99.25 | 99.47 / 98.59 / 99.30 | 99.60 / 98.98 / 99.48 | 99.53 / 98.67 / 99.38
10 | 99.44 / 98.48 / 99.26 | 99.48 / 98.56 / 99.32 | 99.46 / 98.72 / 99.28 | 99.60 / 98.98 / 99.47 | 99.65 / 98.90 / 99.54
Table 3. Comprehensive ablation study on the Houston2018 dataset evaluating individual contributions of Spectral Denoising Module (SDM), Spectral–Spatial Shift Module (SSSM), Spatial Transformer Branch (SpaT), and Spectral Transformer Branch (SpeT) (mean ± std over five seeds).
Module configuration (SDM, SSSM, SpaT, SpeT; × = removed) | κ (%) | OA (%) | AA (%)
×××× | 87.30 ± 0.25 | 90.26 ± 0.20 | 83.14 ± 0.99
××× | 87.26 ± 0.22 | 90.23 ± 0.16 | 82.16 ± 1.37
××× | 87.16 ± 0.33 | 90.15 ± 0.25 | 83.01 ± 1.37
×× | 87.34 ± 0.41 | 90.30 ± 0.32 | 82.91 ± 0.84
××× | 90.11 ± 0.27 | 92.40 ± 0.21 | 84.00 ± 1.03
×× | 88.86 ± 0.10 | 91.42 ± 0.07 | 84.84 ± 1.27
×× | 89.73 ± 0.29 | 92.11 ± 0.23 | 83.14 ± 1.22
× | 91.78 ± 0.44 | 93.69 ± 0.34 | 87.25 ± 0.89
××× | 89.57 ± 0.26 | 91.99 ± 0.19 | 84.44 ± 1.36
×× | 89.55 ± 0.26 | 91.98 ± 0.20 | 84.03 ± 2.56
×× | 89.77 ± 0.43 | 92.14 ± 0.33 | 84.58 ± 1.57
× | 89.93 ± 0.17 | 92.26 ± 0.13 | 84.87 ± 2.57
×× | 92.02 ± 0.26 | 93.87 ± 0.20 | 87.40 ± 0.94
× | 89.91 ± 0.35 | 92.25 ± 0.27 | 84.77 ± 3.13
× | 90.98 ± 0.24 | 93.07 ± 0.18 | 86.21 ± 1.86
none removed (full model) | 92.55 ± 0.25 | 94.28 ± 0.19 | 88.14 ± 1.06
Table 4. OA, AA, and kappa (%) of SSGTN on three datasets under different training ratios (mean ± std over five seeds).
Dataset | Metric | 0.01 | 0.02 | 0.03 | 0.04 | 0.05
Indian Pines | OA | 87.12 | 92.33 | 93.67 | 96.44 | 95.96
Indian Pines | AA | 80.22 | 84.07 | 85.63 | 91.63 | 87.19
Indian Pines | κ | 85.31 | 91.25 | 92.78 | 95.93 | 95.39
WHU-Hi-LongKou | OA | 99.39 | 99.72 | 99.74 | 99.81 | 99.85
WHU-Hi-LongKou | AA | 98.62 | 99.30 | 99.24 | 99.41 | 99.58
WHU-Hi-LongKou | κ | 99.20 | 99.63 | 99.65 | 99.75 | 99.81
Houston2018 | OA | 94.24 | 95.94 | 96.52 | 96.99 | 97.38
Houston2018 | AA | 87.73 | 91.04 | 92.54 | 93.22 | 94.17
Houston2018 | κ | 92.50 | 94.71 | 95.47 | 96.08 | 96.59
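The training ratios in Table 4 correspond to per-class stratified sampling of labeled pixels. A minimal sketch of such a split is given below; the guard that keeps at least one training sample per class is an assumption for very small classes.

```python
# Hedged sketch of a per-class stratified train/test split at a given ratio.
import numpy as np

def stratified_split(labels, ratio=0.01, seed=0):
    """labels: (H, W) ground truth, 0 = unlabeled. Returns flat train/test index arrays."""
    rng = np.random.default_rng(seed)
    flat = labels.ravel()
    train_idx, test_idx = [], []
    for c in np.unique(flat):
        if c == 0:          # skip unlabeled pixels
            continue
        idx = np.flatnonzero(flat == c)
        rng.shuffle(idx)
        n_train = max(1, int(round(ratio * idx.size)))  # at least one sample per class
        train_idx.append(idx[:n_train])
        test_idx.append(idx[n_train:])
    return np.concatenate(train_idx), np.concatenate(test_idx)

# Toy usage on a random 100x100 label map with 9 classes.
gt = np.random.randint(0, 10, size=(100, 100))
tr, te = stratified_split(gt, ratio=0.01)
print(tr.size, te.size)
```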
Table 5. Per-image complexity comparison on the WHU-Hi-LongKou dataset. FLOPs are reported in G, and parameters are reported in MB.
Methods | FLOPs (G) | Param (MB)
CNN-2D | 4059.60 | 1.56
SSRN | 8705.63 | 0.34
HybridSN | 1686.79 | 1.40
SSFTT | 1578.37 | 0.61
MorphFormer | 1356.35 | 0.56
MambaHSI | 12.08 | 0.51
MFormer | 4979.38 | 1.19
GCN | 7.86 | 0.14
CEGCN | 66.99 | 0.70
G-Mamba | 3008.70 | 1.30
SSGTN | 60.89 | 1.79
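For context on Table 5, trainable parameters of a PyTorch model can be counted directly with numel(), as sketched below; FLOPs additionally require a profiler (for example thop or fvcore), which is only mentioned here rather than assumed. Whether the table's "MB" column denotes megabytes or millions of parameters, both are the same count scaled by a constant.

```python
# Minimal sketch: counting trainable parameters of a PyTorch model.
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    """Total number of trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Toy usage with a small stand-in network (not SSGTN).
toy = nn.Sequential(nn.Conv2d(64, 32, kernel_size=1), nn.BatchNorm2d(32))
n = count_params(toy)
print(n / 1e6, "M parameters;", n * 4 / 1e6, "MB as float32")
```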
Table 6. Results on the Indian Pines dataset with 1% training samples (mean ± std over five seeds). Competing methods span hand-crafted, CNN-based, Transformer-based, Mamba-based, and GCN-based families.
Class No. | SVM | CNN-2D | SSRN | HybridSN | SSFTT | MorphFormer | MambaHSI | MFormer | GCN | CEGCN | G-Mamba | SSGTN
1 | 38.18 ± 14.26 | 35.41 ± 2.23 | 16.52 ± 1.39 | 56.36 ± 11.54 | 31.82 ± 31.53 | 62.27 ± 38.22 | 78.18 ± 12.16 | 85.33 ± 9.59 | 55.97 ± 4.56 | 28.29 ± 1.49 | 92.92 ± 6.08 | 39.74 ± 18.95
2 | 54.61 ± 5.93 | 76.94 ± 0.59 | 73.59 ± 0.75 | 56.89 ± 9.44 | 79.20 ± 14.88 | 85.38 ± 4.33 | 85.79 ± 3.55 | 73.86 ± 9.52 | 75.95 ± 1.25 | 79.29 ± 0.75 | 73.15 ± 7.69 | 85.11 ± 6.54
3 | 36.53 ± 12.81 | 47.47 ± 1.22 | 34.29 ± 1.58 | 65.31 ± 3.48 | 69.90 ± 12.02 | 66.28 ± 11.57 | 77.98 ± 6.96 | 78.69 ± 6.40 | 73.95 ± 1.05 | 81.58 ± 0.87 | 68.49 ± 11.31 | 75.82 ± 10.06
4 | 34.03 ± 8.06 | 35.45 ± 0.89 | 14.92 ± 1.33 | 27.64 ± 8.29 | 70.39 ± 23.59 | 73.25 ± 19.97 | 89.96 ± 7.19 | 65.02 ± 11.91 | 90.96 ± 0.89 | 69.03 ± 2.83 | 52.91 ± 6.09 | 83.67 ± 10.17
5 | 79.32 ± 4.02 | 54.76 ± 1.67 | 54.81 ± 2.29 | 42.45 ± 5.63 | 67.70 ± 11.09 | 72.98 ± 7.15 | 80.04 ± 6.88 | 86.57 ± 3.67 | 81.38 ± 0.92 | 68.35 ± 2.76 | 79.57 ± 8.21 | 89.47 ± 3.53
6 | 84.96 ± 4.97 | 98.29 ± 0.11 | 97.87 ± 0.22 | 90.17 ± 1.74 | 91.99 ± 6.31 | 90.64 ± 5.29 | 90.59 ± 5.05 | 96.71 ± 1.76 | 91.04 ± 0.53 | 95.81 ± 0.13 | 94.96 ± 2.62 | 97.12 ± 1.17
7 | 86.15 ± 6.25 | 78.29 ± 0.85 | 40.25 ± 2.01 | 66.15 ± 12.02 | 86.92 ± 18.76 | 91.54 ± 6.88 | 96.15 ± 4.21 | 0.00 ± 0.00 | 98.52 ± 0.18 | 77.69 ± 1.82 | 100.00 ± 0.00 | 72.59 ± 20.23
8 | 89.49 ± 7.05 | 78.04 ± 2.18 | 91.79 ± 0.56 | 99.06 ± 1.88 | 98.12 ± 3.75 | 99.83 ± 0.38 | 99.36 ± 1.28 | 97.63 ± 4.00 | 99.70 ± 0.02 | 99.74 ± 0.02 | 91.30 ± 7.57 | 98.84 ± 1.04
9 | 34.44 ± 12.86 | 42.75 ± 1.15 | 14.97 ± 1.04 | 86.67 ± 10.30 | 35.56 ± 35.92 | 31.11 ± 16.01 | 76.67 ± 19.37 | 0.00 ± 0.00 | 57.66 ± 3.65 | 21.17 ± 1.66 | 98.82 ± 2.35 | 40.00 ± 21.73
10 | 49.71 ± 7.81 | 63.94 ± 0.68 | 68.75 ± 0.82 | 66.68 ± 7.53 | 69.14 ± 13.19 | 74.35 ± 3.48 | 78.78 ± 3.87 | 76.69 ± 4.76 | 74.22 ± 0.85 | 79.41 ± 0.73 | 66.78 ± 3.84 | 80.72 ± 11.00
11 | 71.92 ± 3.87 | 85.54 ± 0.57 | 90.66 ± 0.54 | 87.98 ± 2.82 | 86.25 ± 5.44 | 85.96 ± 4.54 | 90.71 ± 4.02 | 87.59 ± 1.60 | 91.17 ± 0.73 | 90.39 ± 0.69 | 83.96 ± 5.89 | 90.69 ± 6.60
12 | 24.72 ± 9.33 | 37.03 ± 0.94 | 26.48 ± 1.61 | 45.99 ± 9.78 | 42.62 ± 17.18 | 44.99 ± 16.78 | 71.57 ± 7.84 | 57.41 ± 13.61 | 80.21 ± 0.68 | 65.81 ± 1.33 | 56.84 ± 11.19 | 70.23 ± 4.52
13 | 91.66 ± 2.47 | 98.60 ± 0.06 | 96.72 ± 0.66 | 65.07 ± 4.54 | 80.50 ± 18.87 | 97.99 ± 2.73 | 99.50 ± 0.64 | 92.32 ± 5.86 | 92.52 ± 0.52 | 97.01 ± 0.45 | 96.01 ± 3.05 | 99.00 ± 1.37
14 | 91.19 ± 1.30 | 98.99 ± 0.04 | 99.87 ± 0.02 | 89.99 ± 3.03 | 92.56 ± 5.80 | 95.00 ± 3.37 | 95.58 ± 2.39 | 98.40 ± 0.75 | 94.54 ± 0.39 | 98.76 ± 0.09 | 98.71 ± 0.89 | 98.66 ± 0.86
15 | 25.08 ± 4.14 | 46.89 ± 1.03 | 30.28 ± 0.83 | 39.42 ± 1.33 | 61.59 ± 16.17 | 75.56 ± 11.33 | 84.07 ± 5.46 | 71.41 ± 10.03 | 90.46 ± 1.13 | 75.97 ± 2.14 | 42.94 ± 8.56 | 74.26 ± 12.63
16 | 75.38 ± 12.60 | 45.12 ± 2.91 | 29.16 ± 2.39 | 21.10 ± 9.84 | 72.97 ± 35.01 | 83.08 ± 27.69 | 76.26 ± 13.59 | 80.65 ± 14.85 | 37.36 ± 0.31 | 26.01 ± 1.11 | 98.88 ± 1.41 | 79.66 ± 16.48
OA (%) | 63.88 ± 1.80 | 74.15 ± 0.27 | 72.89 ± 0.26 | 71.64 ± 1.19 | 78.80 ± 3.39 | 81.64 ± 1.12 | 86.96 ± 0.75 | 83.06 ± 1.28 | 85.07 ± 0.31 | 84.55 ± 0.18 | 78.33 ± 2.39 | 87.12 ± 2.91
AA (%) | 60.46 ± 2.03 | 63.97 ± 0.32 | 55.06 ± 0.35 | 62.93 ± 2.10 | 71.08 ± 7.82 | 76.89 ± 4.15 | 85.70 ± 2.17 | 71.77 ± 1.32 | 80.35 ± 0.71 | 72.15 ± 0.45 | 81.02 ± 1.27 | 80.22 ± 3.74
κ (%) | 58.37 ± 2.29 | 70.05 ± 0.31 | 68.29 ± 0.32 | 67.31 ± 1.62 | 75.75 ± 3.99 | 79.01 ± 1.29 | 85.10 ± 0.89 | 80.59 ± 1.44 | 82.99 ± 0.35 | 82.32 ± 0.21 | 75.86 ± 2.69 | 85.31 ± 3.25
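Tables 6–8 report overall accuracy (OA), average accuracy (AA), and Cohen's kappa. The sketch below computes the three metrics from a confusion matrix; treating the per-class rows as class-wise recall follows common practice and is stated here as an assumption rather than taken from the paper.

```python
# Minimal sketch: OA, AA, and Cohen's kappa from predicted and true labels.
import numpy as np

def oa_aa_kappa(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)                          # confusion matrix (rows = truth)
    total = cm.sum()
    oa = np.trace(cm) / total                                   # overall accuracy
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)     # class-wise recall
    aa = per_class.mean()                                       # average accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa * 100, aa * 100, kappa * 100

# Toy usage with random predictions over 16 classes.
rng = np.random.default_rng(0)
y = rng.integers(0, 16, size=5000)
p = np.where(rng.random(5000) < 0.8, y, rng.integers(0, 16, size=5000))
print(oa_aa_kappa(y, p, 16))
```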
Table 7. Results on the WHU-Hi-LongKou dataset with 1% training samples (mean ± std over five seeds).
Class No. | SVM | CNN-2D | SSRN | HybridSN | SSFTT | MorphFormer | MambaHSI | MFormer | GCN | CEGCN | G-Mamba | SSGTN
1 | 98.07 ± 0.45 | 99.82 ± 0.04 | 99.88 ± 0.04 | 99.91 ± 0.08 | 99.88 ± 0.14 | 99.86 ± 0.08 | 99.74 ± 0.17 | 99.94 ± 0.04 | 97.39 ± 0.38 | 99.23 ± 0.96 | 99.95 ± 0.04 | 99.76 ± 0.23
2 | 74.66 ± 5.69 | 98.05 ± 0.61 | 98.16 ± 0.81 | 99.62 ± 0.18 | 99.47 ± 0.12 | 99.24 ± 0.24 | 98.83 ± 0.81 | 99.62 ± 0.31 | 97.64 ± 1.37 | 99.77 ± 0.42 | 99.81 ± 0.14 | 99.62 ± 0.43
3 | 73.62 ± 7.77 | 96.29 ± 0.88 | 97.33 ± 0.29 | 99.39 ± 0.44 | 98.65 ± 0.65 | 98.59 ± 1.08 | 98.24 ± 0.91 | 97.11 ± 1.87 | 91.13 ± 2.33 | 96.63 ± 1.30 | 98.91 ± 1.07 | 97.93 ± 1.28
4 | 96.15 ± 0.39 | 99.67 ± 0.04 | 99.73 ± 0.11 | 99.64 ± 0.06 | 99.75 ± 0.06 | 99.66 ± 0.08 | 99.56 ± 0.17 | 99.76 ± 0.10 | 97.49 ± 0.66 | 99.66 ± 1.49 | 99.53 ± 0.23 | 99.88 ± 0.32
5 | 57.78 ± 9.29 | 94.59 ± 1.50 | 92.93 ± 1.02 | 95.32 ± 1.59 | 96.41 ± 2.95 | 97.73 ± 2.10 | 96.52 ± 0.69 | 97.98 ± 0.84 | 98.58 ± 0.01 | 96.63 ± 1.23 | 98.72 ± 0.36 | 99.96 ± 0.01
6 | 94.55 ± 1.93 | 99.76 ± 0.34 | 99.67 ± 0.46 | 99.84 ± 0.05 | 99.87 ± 0.15 | 99.49 ± 0.41 | 99.34 ± 0.40 | 99.68 ± 0.16 | 96.59 ± 1.86 | 99.86 ± 0.11 | 96.41 ± 1.47 | 97.78 ± 0.79
7 | 99.96 ± 0.02 | 99.98 ± 0.01 | 99.98 ± 0.01 | 99.96 ± 0.01 | 99.89 ± 0.11 | 99.93 ± 0.05 | 99.85 ± 0.07 | 99.91 ± 0.04 | 98.96 ± 0.17 | 99.91 ± 0.39 | 96.41 ± 0.61 | 98.42 ± 0.73
8 | 79.88 ± 4.74 | 96.52 ± 0.82 | 96.19 ± 0.96 | 93.84 ± 1.47 | 94.52 ± 2.54 | 94.66 ± 2.26 | 90.25 ± 1.91 | 95.64 ± 1.34 | 79.67 ± 6.13 | 97.33 ± 0.99 | 99.59 ± 0.07 | 97.93 ± 0.54
9 | 63.54 ± 3.12 | 94.89 ± 0.92 | 94.43 ± 1.84 | 95.38 ± 0.44 | 97.04 ± 1.23 | 95.18 ± 1.18 | 86.44 ± 4.44 | 95.44 ± 0.96 | 55.34 ± 5.20 | 98.44 ± 1.00 | 98.91 ± 0.23 | 97.68 ± 0.53
OA (%) | 94.24 ± 0.57 | 99.28 ± 0.09 | 99.33 ± 0.08 | 99.39 ± 0.10 | 99.48 ± 0.11 | 99.41 ± 0.13 | 98.90 ± 0.10 | 99.50 ± 0.05 | 96.14 ± 0.11 | 99.47 ± 0.02 | 99.59 ± 0.07 | 99.62 ± 0.10
AA (%) | 82.02 ± 2.47 | 97.39 ± 0.51 | 97.56 ± 0.11 | 98.07 ± 0.39 | 98.39 ± 0.42 | 98.26 ± 0.42 | 96.53 ± 0.51 | 98.34 ± 0.26 | 90.31 ± 0.80 | 98.61 ± 1.19 | 98.91 ± 0.23 | 98.97 ± 0.12
κ (%) | 92.40 ± 0.77 | 99.06 ± 0.13 | 99.12 ± 0.10 | 99.20 ± 0.14 | 99.31 ± 0.15 | 99.22 ± 0.17 | 98.55 ± 0.14 | 99.34 ± 0.07 | 94.92 ± 0.15 | 99.30 ± 0.02 | 99.46 ± 0.09 | 99.51 ± 0.11
Table 8. Results on the Houston2018 dataset with 1% training samples (mean ± std over five seeds).
Class No. | SVM | CNN-2D | SSRN | HybridSN | SSFTT | MorphFormer | MambaHSI | MFormer | GCN | CEGCN | G-Mamba | SSGTN
1 | 96.94 ± 0.96 | 89.74 ± 0.40 | 83.87 ± 3.72 | 76.84 ± 5.61 | 79.26 ± 8.74 | 78.55 ± 1.33 | 79.22 ± 3.41 | 73.53 ± 9.66 | 61.07 ± 4.24 | 85.13 ± 1.86 | 84.20 ± 4.16 | 86.47 ± 0.94
2 | 95.20 ± 0.47 | 95.19 ± 0.41 | 93.57 ± 2.33 | 91.88 ± 1.37 | 90.99 ± 3.97 | 92.22 ± 1.05 | 91.67 ± 0.47 | 91.74 ± 1.50 | 86.39 ± 1.33 | 94.91 ± 0.70 | 84.20 ± 4.16 | 94.04 ± 0.65
3 | 100.00 ± 0.00 | 96.61 ± 2.76 | 99.58 ± 0.51 | 86.15 ± 1.71 | 99.10 ± 0.64 | 97.88 ± 4.08 | 100.00 ± 0.00 | 88.77 ± 4.21 | 99.38 ± 1.25 | 98.78 ± 0.78 | 96.72 ± 3.69 | 99.55 ± 0.40
4 | 95.67 ± 0.75 | 96.91 ± 0.26 | 94.59 ± 1.87 | 96.13 ± 1.55 | 96.34 ± 1.11 | 96.41 ± 1.33 | 94.06 ± 0.47 | 92.28 ± 1.61 | 77.50 ± 1.53 | 96.46 ± 0.75 | 95.58 ± 0.89 | 96.30 ± 0.31
5 | 86.74 ± 2.86 | 83.19 ± 0.99 | 93.52 ± 2.06 | 71.73 ± 2.83 | 85.45 ± 4.87 | 79.77 ± 6.74 | 80.48 ± 3.94 | 77.33 ± 5.33 | 58.65 ± 2.27 | 78.49 ± 4.13 | 83.34 ± 3.12 | 81.81 ± 3.88
6 | 96.43 ± 0.92 | 96.02 ± 1.24 | 98.78 ± 0.86 | 96.26 ± 1.07 | 97.36 ± 2.98 | 96.43 ± 2.80 | 97.30 ± 3.13 | 94.55 ± 6.85 | 98.21 ± 2.69 | 97.11 ± 2.37 | 98.96 ± 0.22 | 98.70 ± 1.41
7 | 85.92 ± 10.81 | 83.08 ± 9.47 | 99.44 ± 1.12 | 70.00 ± 10.22 | 62.31 ± 5.29 | 76.23 ± 15.11 | 73.38 ± 8.34 | 84.03 ± 12.37 | 26.06 ± 24.51 | 56.64 ± 21.23 | 87.83 ± 11.94 | 80.43 ± 9.24
8 | 91.50 ± 0.94 | 91.08 ± 0.88 | 94.19 ± 1.62 | 90.93 ± 1.23 | 94.72 ± 1.31 | 96.29 ± 0.89 | 97.29 ± 0.32 | 90.34 ± 2.16 | 97.41 ± 0.63 | 97.58 ± 0.83 | 91.54 ± 1.08 | 98.68 ± 0.19
9 | 84.26 ± 2.38 | 97.45 ± 0.07 | 97.55 ± 0.49 | 98.57 ± 0.14 | 98.14 ± 0.52 | 97.58 ± 0.39 | 98.73 ± 0.10 | 97.91 ± 0.44 | 97.27 ± 0.19 | 98.70 ± 0.12 | 97.84 ± 0.23 | 98.66 ± 0.16
10 | 58.85 ± 2.45 | 73.79 ± 1.03 | 83.42 ± 2.76 | 78.36 ± 1.98 | 78.87 ± 4.46 | 80.40 ± 4.35 | 81.71 ± 1.18 | 75.36 ± 2.43 | 76.32 ± 1.06 | 83.75 ± 1.70 | 75.86 ± 1.45 | 84.92 ± 1.09
11 | 65.17 ± 1.21 | 69.46 ± 0.81 | 74.97 ± 2.25 | 62.00 ± 2.98 | 71.01 ± 2.23 | 67.71 ± 4.62 | 69.75 ± 1.86 | 64.92 ± 3.38 | 49.76 ± 1.52 | 75.53 ± 2.53 | 72.15 ± 1.45 | 75.59 ± 3.11
12 | 39.31 ± 3.57 | 5.90 ± 4.75 | 37.81 ± 5.42 | 11.65 ± 2.21 | 5.77 ± 7.66 | 3.55 ± 3.54 | 16.70 ± 3.86 | 10.27 ± 2.62 | 0.85 ± 1.66 | 8.19 ± 3.95 | 11.79 ± 4.34 | 16.37 ± 6.35
13 | 64.53 ± 3.12 | 83.88 ± 1.05 | 84.01 ± 6.09 | 91.07 ± 0.65 | 91.05 ± 2.11 | 88.90 ± 4.99 | 92.95 ± 0.51 | 85.60 ± 2.58 | 93.62 ± 0.57 | 93.83 ± 1.67 | 85.75 ± 0.67 | 93.81 ± 0.83
14 | 91.24 ± 0.66 | 88.69 ± 0.78 | 89.63 ± 1.95 | 92.93 ± 1.70 | 93.13 ± 3.96 | 95.78 ± 1.46 | 94.59 ± 1.10 | 86.99 ± 9.79 | 92.44 ± 2.17 | 95.12 ± 0.82 | 89.04 ± 1.48 | 96.05 ± 1.56
15 | 98.00 ± 0.79 | 98.15 ± 0.65 | 98.19 ± 0.93 | 96.11 ± 2.17 | 98.85 ± 0.89 | 97.65 ± 1.72 | 98.82 ± 0.44 | 98.48 ± 0.80 | 91.74 ± 2.17 | 99.08 ± 0.62 | 98.95 ± 0.64 | 98.20 ± 1.85
16 | 92.99 ± 0.46 | 91.97 ± 0.85 | 94.49 ± 1.65 | 93.87 ± 1.05 | 95.35 ± 2.19 | 92.94 ± 3.48 | 93.50 ± 0.85 | 93.76 ± 4.08 | 75.03 ± 6.48 | 96.58 ± 0.60 | 94.36 ± 0.36 | 96.64 ± 1.67
17 | 89.44 ± 3.75 | 57.53 ± 27.67 | 0.00 ± 0.00 | 1.54 ± 2.44 | 45.63 ± 45.16 | 23.52 ± 28.44 | 58.31 ± 18.19 | 22.07 ± 16.53 | 0.00 ± 0.00 | 0.00 ± 0.00 | 88.03 ± 5.24 | 65.26 ± 26.31
18 | 82.23 ± 2.38 | 84.25 ± 2.06 | 94.68 ± 2.42 | 96.79 ± 0.92 | 87.07 ± 3.58 | 90.91 ± 4.00 | 87.99 ± 1.41 | 87.43 ± 4.26 | 58.09 ± 12.37 | 93.74 ± 2.89 | 88.55 ± 2.05 | 93.73 ± 1.65
19 | 86.55 ± 3.84 | 92.29 ± 0.94 | 96.56 ± 2.46 | 96.94 ± 0.83 | 93.63 ± 2.61 | 91.58 ± 6.01 | 95.50 ± 1.83 | 94.09 ± 2.59 | 84.32 ± 2.22 | 98.27 ± 0.42 | 92.66 ± 0.75 | 99.65 ± 0.32
20 | 98.23 ± 0.40 | 97.54 ± 0.44 | 99.18 ± 0.42 | 98.71 ± 0.65 | 98.98 ± 0.96 | 98.76 ± 1.97 | 98.29 ± 1.29 | 98.69 ± 0.90 | 99.90 ± 0.10 | 99.89 ± 0.09 | 99.32 ± 0.29 | 99.75 ± 0.21
OA (%) | 81.41 ± 1.48 | 90.40 ± 0.27 | 92.14 ± 0.49 | 91.17 ± 0.25 | 91.99 ± 0.48 | 91.59 ± 0.70 | 92.79 ± 0.09 | 89.89 ± 0.39 | 87.89 ± 0.12 | 93.97 ± 0.11 | 91.02 ± 0.15 | 94.24 ± 0.16
AA (%) | 84.96 ± 0.67 | 83.64 ± 1.41 | 85.40 ± 0.15 | 79.97 ± 1.20 | 83.15 ± 2.24 | 82.15 ± 2.80 | 85.01 ± 0.97 | 80.41 ± 0.98 | 71.20 ± 1.11 | 82.39 ± 1.20 | 86.24 ± 1.01 | 87.73 ± 1.97
κ (%) | 76.58 ± 1.74 | 87.46 ± 0.36 | 89.78 ± 0.61 | 88.46 ± 0.33 | 89.57 ± 0.61 | 89.07 ± 0.91 | 90.57 ± 0.12 | 86.79 ± 0.50 | 84.19 ± 0.15 | 92.14 ± 0.14 | 88.31 ± 0.20 | 92.50 ± 0.21
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
