Article

SGFNet: Redundancy-Reduced Spectral–Spatial Fusion Network for Hyperspectral Image Classification

1 Faculty of Innovation and Engineering, Macau University of Science and Technology, Taipa 999078, Macau
2 School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2025, 27(10), 995; https://doi.org/10.3390/e27100995
Submission received: 20 August 2025 / Revised: 16 September 2025 / Accepted: 22 September 2025 / Published: 24 September 2025
(This article belongs to the Section Multidisciplinary Applications)

Abstract

Hyperspectral image classification (HSIC) involves analyzing high-dimensional data that contain substantial spectral redundancy and spatial noise, which increases the entropy and uncertainty of feature representations. Reducing such redundancy while retaining informative content in spectral–spatial interactions remains a fundamental challenge for building efficient and accurate HSIC models. Traditional deep learning methods often rely on redundant modules or lack sufficient spectral–spatial coupling, limiting their ability to fully exploit the information content of hyperspectral data. To address these challenges, we propose SGFNet, which is a spectral-guided fusion network designed from an information–theoretic perspective to reduce feature redundancy and uncertainty. First, we designed a Spectral-Aware Filtering Module (SAFM) that suppresses noisy spectral components and reduces redundant entropy, encoding the raw pixel-wise spectrum into a compact spectral representation accessible to all encoder blocks. Second, we introduced a Spectral–Spatial Adaptive Fusion (SSAF) module, which strengthens spectral–spatial interactions and enhances the discriminative information in the fused features. Finally, we developed a Spectral Guidance Gated CNN (SGGC), which is a lightweight gated convolutional module that uses spectral guidance to more effectively extract spatial representations while avoiding unnecessary sequence modeling overhead. We conducted extensive experiments on four widely used hyperspectral benchmarks and compared SGFNet with eight state-of-the-art models. The results demonstrate that SGFNet consistently achieves superior performance across multiple metrics. From an information–theoretic perspective, SGFNet implicitly balances redundancy reduction and information preservation, providing an efficient and effective solution for HSIC.

1. Introduction

Each pixel in a hyperspectral image (HSI) contains continuous spectral information, giving hyperspectral imaging techniques stronger object recognition capabilities than traditional RGB methods [1]. In recent years, hyperspectral imaging has been widely applied in fields such as crop identification [2], mineral exploration [3], and environmental monitoring [4]. Among the various tasks based on HSI processing, hyperspectral image classification (HSIC) is one of the core research directions [5,6,7]. The goal of HSIC is to assign a land-cover category label to each pixel in the scene. Due to its significant practical application value, HSIC has long been a prominent research topic in the field of remote sensing [8,9,10].
Early HSIC work applied supervised machine learning methods such as the k-nearest-neighbor (KNN) algorithm [11] and the support vector machine (SVM) [12]. However, these algorithms rely on manual feature extraction, which makes it difficult to automatically learn complex spectral and spatial features and weakens their ability to model spatial information. With the emergence of deep learning in recent years, these limitations have been largely overcome. Deep learning can automatically learn complex spectral features and spatial structures and mine deeper nonlinear relationships, thus achieving better results in HSIC, and it has therefore become the focus of modern HSIC research [13,14,15].
In research on applying deep learning to HSIC, many convolutional neural network (CNN) models have been used because of their excellent ability for local feature extraction [16]. A CNN can effectively capture the local spatial correlation in an image through convolution operations. With the help of multi-layer convolution and pooling operations, it can gradually extract abstract features from low to high levels, showing outstanding advantages in the spatial feature processing of HSIs [17,18,19]. At an early stage, Chen et al. [20] used a regularized deep feature extraction (FE) method based on a CNN, which utilized multiple convolutional and pooling layers to extract nonlinear, discriminative, and invariant deep features of HSIs, effectively extracting spectral–spatial features while avoiding overfitting. To address the excessive computational complexity of 3D-CNNs and the inability of 2D-CNNs to fully exploit spectral information, Roy et al. [21] proposed the Hybrid Spectral Convolutional Neural Network (HybridSN), which combines 3D and 2D convolutions and balances spectral–spatial feature extraction with computational efficiency by first extracting spectral–spatial features and then refining the spatial features. Chen et al. [22] were the first to propose automatically designed CNNs (Auto-CNNs) for HSIC, using automatically searched 1D and 3D CNN architectures as spectral and spectral–spatial classifiers, respectively, to overcome the problem that manually designed deep learning architectures may not adapt well to a specific dataset.
Although CNNs have many advantages in HSIC, they still have limitations. Because the receptive field of the convolution is limited, a CNN has difficulty capturing long-range dependencies and global context in an HSI, and its treatment of spectral features is relatively simplistic [23]. Therefore, many scholars have turned to the Transformer model to overcome these shortcomings. Hong et al. [24] introduced the Transformer into HSIC for the first time and proposed a novel network called SpectralFormer. It uses neighboring bands in the HSI to learn local spectral sequence information and generates groupwise spectral embeddings, which are connected by cross-layer skip connections to transfer memory-like components across layers, thereby better mining the sequential attributes of spectral features and reducing the loss of key information. To address the problem that most deep learning-based HSIC methods either destroy spectral information when extracting spatial features or can only extract sequential spectral features over short-range contexts, Ayas et al. [25] proposed a network architecture called SpectralSWIN, which uses the proposed Swin-Spectral Module to combine sliding-window self-attention with grouped convolution along the spectral dimension, achieving hierarchical extraction of spectral–spatial features. Meanwhile, to address the excessive dimensionality and spectral redundancy of HSI, the difficulty of appropriately combining spatial and spectral information, and the failure of existing methods to fully exploit first-order derivatives and frequency-domain information, Fu et al. [26] proposed the Differential-Frequency Attention-based Band Selection Transformer (DFAST), which uses a DFASEmbeddings module containing a multi-branch structure, 3D convolution, a spectral–spatial attention mechanism, and a cascaded Transformer encoder.
Although Transformers demonstrate remarkable advantages in capturing global dependencies for HSI, they still suffer from high computational overhead, particularly when handling long sequences, since the self-attention mechanism scales quadratically with sequence length [27]. Recently, Mamba [28], an emerging sequence modeling paradigm, has offered a new direction for HSIC research. Building on this, Sun et al. [29] proposed the Hyperspectral Spatial Mamba (HyperSMamba) model, which integrates the Spatial Mamba (MS-Mamba) encoder with an Adaptive Fusion Attention Module (AFAttention). This design not only alleviates the quadratic complexity bottleneck of Transformer-based self-attention but also mitigates the excessive computational burden in prior Mamba-based approaches, which is caused by selective scanning strategies.
Although the Mamba model offers lower computational complexity than the Transformer, the models above show that the Mamba architecture is primarily built on the state space model (SSM) [30,31,32]. However, Yu et al. [33] found that the SSM introduces considerable redundancy in image classification tasks, which negatively impacts classification accuracy. They therefore proposed the MambaOut model, which removes the SSM and significantly reduces the required computation while improving performance. Additionally, many deep learning models for HSIC reduce the computational burden by performing dimensionality reduction via principal component analysis (PCA) [34]. However, this also introduces the problem of partial spectral feature loss [35].
Therefore, to address the challenges in HSIC, we explored whether key spectral information could be extracted through independent modules after PCA dimensionality reduction to compensate for spectral information loss. Could a structure similar to Mamba be designed to achieve an efficient fusion of spectral and spatial features while enhancing computational efficiency? Thus, we propose the Spectral-Guided Fusion Network (SGFNet). Its core component is the Spectral-Guided Gated Convolutional Network (SGGC), which draws inspiration from the essence of the Mamba model while eliminating its redundant State Space Mechanism (SSM). To precisely capture the complex interplay between spectral and spatial information, we first process guiding spectral features through a Spectral-Aware Filtering Module (SAFM). This module encodes raw spectral sequences into a globally shared representation while amplifying critical spectral information. These spectral features are then fed into a redesigned Mamba architecture to compensate for key spectral information lost during PCA processing. To effectively fuse one-dimensional spectral features with two-dimensional spatial features, we introduce the Spectral–Spatial Adaptive Fusion (SSAF) module, enabling the efficient integration of spectral representations with global spatial representations. The fused features are applied to the gating mechanism of the SGGC, enabling spectral features to more effectively guide spatial feature extraction. Through rigorous comparative experiments, the proposed SGFNet demonstrates outstanding classification performance. In summary, our contributions include the following:
  • We propose SGFNet, which is an innovative spectral-guided MambaOut network for hyperspectral image classification (HSIC). From an information–theoretic perspective, hyperspectral data often exhibit high spectral redundancy, resulting in high entropy and making it challenging to extract discriminative features. SGFNet explores the feasibility of the MambaOut architecture in HSIC and demonstrates that the state–space mechanism (SSM) is not indispensable. By leveraging low-entropy spectral priors to guide spatial feature extraction, SGFNet enhances informative patterns while significantly reducing the number of parameters and FLOPs.
  • We design a Spectral-Aware Filtering Module (SAFM) that effectively suppresses redundant spectral responses while retaining informative spectral components. This process can reduce the entropy of raw hyperspectral data to provide reliable and high-information support for subsequent modules.
  • We propose the Spectral-Guided Gated CNN (SGGC), which is a Mamba-inspired structure without the SSM. Within SGGC, we introduce the Spectral–Spatial Adaptive Fusion (SSAF) module, which aggregates one-dimensional spectral features with two-dimensional spatial features and feeds the fused, low-entropy representations into the gating mechanism, effectively guiding spatial feature extraction with reduced uncertainty through spectral information.
  • We conducted rigorous and fair comparative experiments. The experimental results show that SGFNet achieved the highest overall accuracy (OA), average accuracy (AA), and Kappa coefficient when compared with eight state-of-the-art algorithms on four benchmark datasets. These results demonstrate the effectiveness of our entropy-aware design in improving HSIC performance.
The remainder of this paper is organized as follows. In Section 2, we introduce related work. In Section 3, we describe in detail the various components of the proposed SGFNet. In Section 4, we conduct detailed experiments and analyses. Section 5 discusses and compares the effects of different parameters from multiple perspectives. Finally, Section 6 summarizes the overall model and outlines future development directions.

2. Related Work

MambaOut

MambaOut is an architecture that removes the state-space model (SSM); experiments demonstrate its superiority over visual Mamba models in the ImageNet image classification task. Its core module is the Gated Convolutional Neural Network (Gated CNN). Figure 1 compares the Mamba block with the Gated CNN block: the overall architecture incorporates multiple linear transformations, convolutional operations, and a gating mechanism. When the gray dashed region is included, the structure corresponds to the complete Mamba architecture; removing this component yields the Gated CNN architecture. The gating mechanism efficiently captures local spatial correlations while suppressing redundant bands. Consequently, MambaOut achieves a balance between feature extraction and computational efficiency. Assuming the input is X, the specific process is as follows:
$$X' = \mathrm{Norm}(X)$$
$$X_{\mathrm{CNN}} = \big(\mathrm{TokenMixer}(X'W_1) \odot \sigma(X'W_2)\big)W_3 + X$$
where $\mathrm{Norm}(\cdot)$ denotes the normalization operation, $X'$ is the normalized intermediate variable, and $X_{\mathrm{CNN}}$ denotes the output of the Gated CNN block. $W_1$, $W_2$, and $W_3$ are learnable weight matrices of linear projections, $\sigma(\cdot)$ is the activation function, $\odot$ denotes element-wise multiplication, and $\mathrm{TokenMixer}(\cdot)$ is the convolution operation.
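As a concrete illustration, the following is a minimal PyTorch sketch of the Gated CNN block described by the equations above, assuming a depthwise convolution as the TokenMixer and channels-last inputs; the class name, expansion ratio, and kernel size are our own illustrative choices rather than the exact settings of MambaOut or SGFNet.

```python
import torch
import torch.nn as nn

class GatedCNNBlock(nn.Module):
    """Minimal Gated CNN block in the spirit of MambaOut (no SSM).

    Input/output shape: (B, H, W, C), channels-last.
    """
    def __init__(self, dim: int, expand: float = 1.5, kernel_size: int = 7):
        super().__init__()
        hidden = int(expand * dim)
        self.norm = nn.LayerNorm(dim)
        self.fc1 = nn.Linear(dim, hidden)       # W1: token-mixer branch
        self.fc2 = nn.Linear(dim, hidden)       # W2: gating branch
        self.fc3 = nn.Linear(hidden, dim)       # W3: output projection
        # Depthwise convolution plays the role of the TokenMixer.
        self.token_mixer = nn.Conv2d(hidden, hidden, kernel_size,
                                     padding=kernel_size // 2, groups=hidden)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x = self.norm(x)                                      # X' = Norm(X)
        mixed = self.fc1(x).permute(0, 3, 1, 2)               # (B, h, H, W)
        mixed = self.token_mixer(mixed).permute(0, 2, 3, 1)   # TokenMixer(X' W1)
        gate = self.act(self.fc2(x))                          # sigma(X' W2)
        return self.fc3(mixed * gate) + shortcut              # (...) W3 + X


# Usage: a 4x16x16 patch grid with 64 channels (channels-last).
block = GatedCNNBlock(dim=64)
print(block(torch.randn(4, 16, 16, 64)).shape)  # torch.Size([4, 16, 16, 64])
```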

3. Methods

3.1. Spectral-Aware Filter Module

The SAFM is designed to extract more discriminative spectral features from the HSI and to select important spectral frequency components in the frequency domain. From an information–theoretic perspective, SAFM effectively reduces redundant spectral responses, lowering the entropy of the spectral representation while preserving critical discriminative information. By suppressing high-entropy (noisy or redundant) components and retaining informative low-entropy features, SAFM provides a more compact and informative spectral encoding for subsequent modules. The process is as follows: first, we assume that the original spectral feature is $S_{in}$, and then we apply the Fast Fourier Transform (FFT) to obtain its frequency-domain representation:
$$S_{in}^{f} = \mathcal{F}(S_{in})$$
Next, we modulate the frequency components using a quantization matrix W and transform back to the time domain:
$$S_{1} = \mathcal{F}^{-1}\big(W \odot S_{in}^{f}\big)$$
Finally, the filtered spectral feature S 1 is passed through a multi-layer perceptron (MLP) to produce the output S o u t :
$$S_{out} = \sigma_{\mathrm{out}}\Big(W_{L+1}\,\sigma\big(W_{L}\cdots\sigma(W_{1}S_{1}+b_{1})\cdots+b_{L}\big)+b_{L+1}\Big)$$
where $W_{k}$ and $b_{k}$ are the weight matrix and bias vector of the $k$-th layer, $\sigma(\cdot)$ is the hidden-layer activation function, and $\sigma_{\mathrm{out}}(\cdot)$ is the output-layer activation function.
From an information–theoretic perspective, the SAFM effectively reduces redundant or noisy spectral components—corresponding to high-entropy information in the frequency domain—while retaining low-entropy, informative spectral features. The quantization matrix W acts to suppress high-entropy components, and the subsequent MLP integrates the filtered spectral features into a compact and discriminative representation suitable for downstream spectral–spatial processing.
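To make the data flow concrete, here is a minimal PyTorch sketch of the filter-then-MLP pipeline above, assuming a learnable complex-valued filter over the rFFT bins of the pixel spectrum and a two-layer MLP; the hidden width, the sigmoid output activation, and the class name are illustrative assumptions, not the exact SAFM configuration.

```python
import torch
import torch.nn as nn

class SpectralAwareFilterModule(nn.Module):
    """Sketch of SAFM: filter the pixel spectrum in the frequency domain,
    then map it to a compact representation with a small MLP.

    Input:  s_in of shape (B, num_bands)  -- raw pixel-wise spectrum
    Output: s_out of shape (B, out_dim)   -- shared spectral encoding
    """
    def __init__(self, num_bands: int, out_dim: int, hidden: int = 128):
        super().__init__()
        # Learnable complex-valued filter W over the rFFT frequency bins.
        n_freq = num_bands // 2 + 1
        self.filter = nn.Parameter(torch.ones(n_freq, dtype=torch.cfloat))
        self.mlp = nn.Sequential(
            nn.Linear(num_bands, hidden),
            nn.GELU(),
            nn.Linear(hidden, out_dim),
            nn.Sigmoid(),   # sigma_out: bounded spectral guidance (assumption)
        )

    def forward(self, s_in: torch.Tensor) -> torch.Tensor:
        s_freq = torch.fft.rfft(s_in, dim=-1)                   # F(S_in)
        s_filtered = torch.fft.irfft(self.filter * s_freq,
                                     n=s_in.shape[-1], dim=-1)  # F^-1(W ⊙ S_in^f)
        return self.mlp(s_filtered)                             # MLP(S_1)


# Usage: 8 pixels with 103 spectral bands encoded to a 64-d representation.
safm = SpectralAwareFilterModule(num_bands=103, out_dim=64)
print(safm(torch.randn(8, 103)).shape)  # torch.Size([8, 64])
```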

3.2. Spectral–Spatial Adaptive Fusion Module

The precise fusion of spatial and spectral features is crucial in HSIC tasks. However, existing models for HSIC often employ direct addition or concatenation methods for feature fusion, which struggle to capture the spatial–spectral correlations between different regions accurately. Therefore, we propose a solution called the SSAF module, which can dynamically generate spectral–spatial weights and perform subsequent fusion processes to integrate two-dimensional spatial features and one-dimensional spectral features accurately; the process is illustrated in Figure 2c. Assuming that SSAF takes spatial features x and spectral features s as inputs, its output is the fusion feature f. The specific derivation process is as follows:
$$x' = \mathrm{PWConv}\big(\mathrm{AdaptiveAvgPool}(x)\big)$$
Subsequently, $x'$ and $s$ are passed in parallel through a shared weight-generation path:
$$w_1 = \sigma\Big(\mathrm{Linear}_{h\rightarrow C}\big(\mathrm{LayerNorm}(\mathrm{GELU}(\mathrm{Linear}_{C\rightarrow h}(x')))\big)\Big), \quad w_2 = \sigma\Big(\mathrm{Linear}_{h\rightarrow C}\big(\mathrm{LayerNorm}(\mathrm{GELU}(\mathrm{Linear}_{C\rightarrow h}(s)))\big)\Big)$$
where $h = C/4$ is the hidden dimension after dimensionality reduction and $\sigma$ is the activation function; a lightweight weight computation is thus achieved through a reduce–activate–expand path. The weights $w_1$ and $w_2$ represent the importance of the spatial and spectral features for the current sample. Finally, the generated weights are used to perform a weighted fusion of the input features:
$$f = \mathrm{LayerNorm}(w_1 \odot x' + w_2 \odot s)$$
where symbol ⊙ denotes element-wise multiplication.
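The following is a minimal PyTorch sketch of this fusion scheme, assuming the pooled-and-projected spatial vector and the spectral vector share one reduce–activate–expand weight path with a sigmoid gate; the reduction ratio of 4 follows $h = C/4$ above, while the remaining details are our assumptions.

```python
import torch
import torch.nn as nn

class SSAF(nn.Module):
    """Sketch of the Spectral–Spatial Adaptive Fusion module.

    x: spatial features, (B, C, H, W);  s: spectral features, (B, C).
    Returns f: fused features, (B, C).
    """
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction                   # h = C / 4
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.pwconv = nn.Conv2d(channels, channels, kernel_size=1)
        # Shared reduce -> GELU -> LayerNorm -> expand -> sigmoid path.
        self.shared = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.GELU(),
            nn.LayerNorm(hidden),
            nn.Linear(hidden, channels),
            nn.Sigmoid(),
        )
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        x_vec = self.pwconv(self.pool(x)).flatten(1)   # x' in R^{B x C}
        w1 = self.shared(x_vec)                        # spatial importance
        w2 = self.shared(s)                            # spectral importance
        return self.norm(w1 * x_vec + w2 * s)          # LayerNorm(w1⊙x' + w2⊙s)


# Usage
ssaf = SSAF(channels=64)
print(ssaf(torch.randn(2, 64, 13, 13), torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```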

3.3. Spectral Guidance Gated CNN Module

The SGGC module has the structure shown in Figure 2b. SGGC consists of three parts: a main branch for spatial feature extraction, a gating branch centered on SSAF, and a residual connection. We denote the input spatial features as $x \in \mathbb{R}^{B\times C\times H\times W}$ and the input spectral features (produced by the SAFM) as $s \in \mathbb{R}^{B\times C}$. First, the original features are retained as a residual shortcut to mitigate the gradient vanishing problem in deep networks. The specific process of this module is as follows.
$$\mathrm{shortcut} = x$$
Then, to satisfy the dimension requirement of LayerNorm, the dimension order of the spatial features is adjusted so that the channel dimension is moved to the last position:
$$x_{\mathrm{trans1}} = \mathrm{rearrange}(x,\; b\,c\,h\,w \rightarrow b\,h\,w\,c) \in \mathbb{R}^{B\times H\times W\times C}$$
The channel features at each spatial location are also normalized to stabilize the fluctuations in the distribution due to intensity differences in the hyperspectral bands, and so the following is obtained:
$$x_{\mathrm{norm}}[b,h,w,c] = \frac{x_{\mathrm{trans1}}[b,h,w,c] - \mu_{b,h,w}}{\sqrt{\sigma_{b,h,w}^{2} + \epsilon}}$$
where $\mu_{b,h,w}$ is the channel mean, $\sigma_{b,h,w}^{2}$ is the channel variance, and $\epsilon$ is a small constant that prevents division by zero. The main branch first projects the normalized features to the hidden dimension $h = \mathrm{expand} \times C$ (with $\mathrm{expand} = 0.5$) through a linear layer:
$$c_{1} = W_{\mathrm{in}} \cdot x_{\mathrm{norm}} + b_{\mathrm{in}} \in \mathbb{R}^{B\times H\times W\times h}$$
where $W_{\mathrm{in}} \in \mathbb{R}^{h\times C}$ is the weight matrix and $b_{\mathrm{in}} \in \mathbb{R}^{h}$ is the bias. A $3\times 3$ depthwise convolution ($\mathrm{DWConv}$) is then applied:
$$c_{2} = \mathrm{DWConv}_{3\times 3}(c_{1})$$
Finally, the dimensions are rearranged back to fit the subsequent gating operation:
$$c = \mathrm{rearrange}(c_{2},\; b\,c\,h\,w \rightarrow b\,h\,w\,c) \in \mathbb{R}^{B\times H\times W\times h}$$
The second part is a gate-controlled branch, which aims to fuse spatial and spectral features to generate joint features. The specific steps are as follows:
$$g_{\mathrm{fusion}} = \mathrm{SSAF}(x_{\mathrm{norm}}, s) = \mathrm{LayerNorm}(w_{1} \odot x_{\mathrm{norm}} + w_{2} \odot s) \in \mathbb{R}^{B\times C}$$
From Equation (7), $w_{1}, w_{2} \in \mathbb{R}^{B\times C}$ are the dynamic weights generated by the SSAF, which adaptively balance the spatial and spectral features. The fused features are then projected to the hidden dimension $h$ and broadcast over the spatial dimensions to match the main-branch features:
$$g = W_{g} \cdot g_{\mathrm{fusion}} + b_{g} \in \mathbb{R}^{B\times h}$$
Finally, the gating signal $g$ is activated by $\mathrm{GELU}$ and used to modulate the main-branch feature $c$ element by element, and the result is passed through the output projection and the residual connection:
$$x_{\mathrm{gate}} = \mathrm{GELU}(g) \odot c \in \mathbb{R}^{B\times H\times W\times h}, \qquad x_{\mathrm{out}} = \mathrm{GELU}\big(W_{\mathrm{out}} \cdot x_{\mathrm{gate}} + b_{\mathrm{out}}\big) \in \mathbb{R}^{B\times H\times W\times C}$$
where $W_{\mathrm{out}} \in \mathbb{R}^{C\times h}$ is the dimensionality-reduction matrix, and the final output is obtained as follows:
$$y = \mathrm{rearrange}(x_{\mathrm{out}},\; b\,h\,w\,c \rightarrow b\,c\,h\,w) + \mathrm{shortcut}$$
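Putting the two branches together, the following is a minimal PyTorch sketch of an SGGC block under the shapes used above; it reuses the SSAF class sketched in Section 3.2, and the expansion ratio of 0.5 follows the text while the remaining hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
# Reuses the SSAF class sketched in Section 3.2.

class SGGC(nn.Module):
    """Sketch of the Spectral-Guided Gated CNN block.

    x: spatial features, (B, C, H, W);  s: spectral guidance, (B, C).
    """
    def __init__(self, channels: int, expand: float = 0.5):
        super().__init__()
        hidden = int(expand * channels)                  # h = expand * C
        self.norm = nn.LayerNorm(channels)
        self.fc_in = nn.Linear(channels, hidden)         # W_in
        self.dwconv = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.ssaf = SSAF(channels)                       # gating-branch fusion
        self.fc_gate = nn.Linear(channels, hidden)       # W_g
        self.fc_out = nn.Linear(hidden, channels)        # W_out
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x_cl = x.permute(0, 2, 3, 1)                     # (B, H, W, C), channels-last
        x_norm = self.norm(x_cl)

        # Main branch: project, depthwise conv, back to channels-last.
        c = self.fc_in(x_norm).permute(0, 3, 1, 2)       # (B, h, H, W)
        c = self.dwconv(c).permute(0, 2, 3, 1)           # (B, H, W, h)

        # Gating branch: fuse spatial and spectral features, project to h.
        g = self.fc_gate(self.ssaf(x_norm.permute(0, 3, 1, 2), s))  # (B, h)
        x_gate = self.act(g)[:, None, None, :] * c       # broadcast over H, W

        x_out = self.act(self.fc_out(x_gate))            # (B, H, W, C)
        return x_out.permute(0, 3, 1, 2) + shortcut      # back to (B, C, H, W)


# Usage
sggc = SGGC(channels=64)
print(sggc(torch.randn(2, 64, 13, 13), torch.randn(2, 64)).shape)  # torch.Size([2, 64, 13, 13])
```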

3.4. SGFNet Overview

The core of SGFNet is a multi-stage pipeline of spectral-guided gated convolution and downsampling that gradually extracts spatial–spectral features, while a shared spectral encoding injects spectral information into every stage; category predictions are finally produced by a classification head. The detailed process is shown in Figure 2a. We assume that the input raw spatial feature tensor is $X_{\mathrm{raw}} \in \mathbb{R}^{B\times C_{\mathrm{raw}}\times H\times W}$, the spectral features after SAFM filtering are $S_{\mathrm{safm}}$, and the final prediction is $\hat{Y}$. To reduce the dimensionality of the spatial features, PCA is first applied to $X_{\mathrm{raw}}$:
$$X = \mathrm{PCA}(X_{\mathrm{raw}}) = X_{\mathrm{raw}} \cdot P \in \mathbb{R}^{B\times C_{\mathrm{in}}\times H\times W}$$
where $C_{\mathrm{in}}$ denotes the number of channels after dimensionality reduction and $P \in \mathbb{R}^{C_{\mathrm{raw}}\times C_{\mathrm{in}}}$ denotes the principal component analysis (PCA) projection matrix. Subsequently, we feed the PCA-reduced spatial features $X$ into the embedding layer, whose details are shown in Figure 3a, to align the spatial feature dimension with the spectral feature dimension:
$$X_{\mathrm{emb}} = \mathrm{EmbeddingLayer}(X) \in \mathbb{R}^{B\times C_{\mathrm{emb}}\times H\times W}$$
Then, setting $x_{0} = X_{\mathrm{emb}}$, the features $x_{i}\;(i = 0, 1, \ldots, n-1)$ undergo feature extraction and dimension reduction through $n$ consecutive SGGC modules and downsampling layers, gradually integrating the spatial and spectral features. The details of the downsampling layer are shown in Figure 3b. Thus, we obtain the following:
$$x_{1} = \mathrm{DownsampleLayer}\big(\mathrm{SGGC}(x_{0}, S_{\mathrm{safm}})\big), \quad \ldots, \quad x_{n} = \mathrm{DownsampleLayer}\big(\mathrm{SGGC}(x_{n-1}, S_{\mathrm{safm}})\big)$$
Finally, the spatial feature $x_{n}$ is processed by the classification head to generate the category probabilities:
$$x_{\mathrm{pool}} = \mathrm{AdaptiveAvgPool2d}(x_{n}), \qquad \hat{Y} = \mathrm{Linear}\big(\mathrm{Flatten}(x_{\mathrm{pool}})\big)$$
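For reference, a compact end-to-end sketch of this pipeline in PyTorch is given below; it reuses the SAFM and SGGC sketches from the previous subsections, assumes PCA has already been applied to the input cube, and uses a 3 × 3 convolution for embedding and a strided convolution for downsampling as placeholder implementations of the layers in Figure 3.

```python
import torch
import torch.nn as nn
# Reuses SpectralAwareFilterModule (Section 3.1) and SGGC (Section 3.3).

class SGFNetSketch(nn.Module):
    """Minimal sketch of the SGFNet pipeline:
    embedding -> n stages of (SGGC + downsampling) -> pooled classification head.
    PCA reduction of the raw cube is assumed to happen before the network.
    """
    def __init__(self, in_channels: int, num_bands: int, num_classes: int,
                 embed_dim: int = 64, num_stages: int = 3):
        super().__init__()
        self.safm = SpectralAwareFilterModule(num_bands, embed_dim)
        self.embed = nn.Conv2d(in_channels, embed_dim, kernel_size=3, padding=1)
        self.stages = nn.ModuleList([SGGC(embed_dim) for _ in range(num_stages)])
        self.down = nn.ModuleList([
            nn.Conv2d(embed_dim, embed_dim, kernel_size=3, stride=2, padding=1)
            for _ in range(num_stages)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(embed_dim, num_classes))

    def forward(self, x_pca: torch.Tensor, spectrum: torch.Tensor) -> torch.Tensor:
        s = self.safm(spectrum)              # shared spectral encoding S_safm
        x = self.embed(x_pca)                # X_emb
        for sggc, down in zip(self.stages, self.down):
            x = down(sggc(x, s))             # x_{i+1} = Downsample(SGGC(x_i, S_safm))
        return self.head(x)                  # class logits Y_hat


# Usage: 13x13 patches with 30 PCA channels, 103-band center-pixel spectra, 9 classes.
net = SGFNetSketch(in_channels=30, num_bands=103, num_classes=9)
print(net(torch.randn(4, 30, 13, 13), torch.randn(4, 103)).shape)  # torch.Size([4, 9])
```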
The specific procedural steps of the model can be obtained from Algorithm 1.
Algorithm 1: Pseudo-Procedure of the Proposed SGFNet

4. Results

This section provides a detailed introduction to the datasets used in the experiments, the parameter settings, the model comparison results, and the visualization analysis. We compared the proposed model with eight state-of-the-art models commonly used in the HSIC field to evaluate its effectiveness. All tests used the same hyperparameter settings and experimental conditions as reported in the original studies of the compared models to ensure fairness.

4.1. Data Description

To comprehensively evaluate the performance of the proposed model, we conducted comparative experiments using four well-known hyperspectral datasets: the Augsburg dataset (AU), the Houston 2013 dataset (HU2013), the Pavia University dataset (PU), and the WHU-Hi-LongKou dataset (LK). The following subsections will provide detailed information about each dataset.

4.1.1. Augsburg Dataset

The Augsburg dataset collection utilized three dedicated systems: the HySpex sensor for hyperspectral imaging, the C-band synthetic aperture radar (SAR) sensor installed on the Sentinel-1 satellite, and the DLR-3K system for digital elevation model (DEM) data. The Augsburg dataset is divided into seven categories based on land cover types. The false-color image and the ground truth image of the dataset are shown in Figure 4.

4.1.2. Houston2013 Dataset

The Houston 2013 dataset was acquired by the ITRES CASI-1500 sensor over the University of Houston campus and its surroundings in Texas, USA. It has a spatial resolution of 2.5 m, covers a spectral range of 380–1050 nm with 144 bands, and has an image size of 349 × 1905 pixels, covering 15 land cover types. The false-color image and the ground truth image of the dataset are shown in Figure 5.

4.1.3. Pavia University Dataset

The Pavia University dataset was acquired using the Reflective Optics System Imaging Spectrometer (ROSIS-3) sensor over Pavia, Italy. It features images with a 610 × 340 pixel size across 115 spectral bands. The dataset includes 42,776 annotated samples representing nine land cover categories: asphalt, meadows, gravel, trees, metal sheets, bare soil, bitumen, bricks, and shadows. The false-color image and the ground truth image of the dataset are shown in Figure 6.

4.1.4. WHU-Hi-LongKou Dataset

The WHU-Hi-LongKou dataset was collected in Wuhan, Hubei Province, using a Headwall Nano-Hyperspec imaging sensor. The study area covers a typical agricultural region with nine land cover types. The dataset contains images with a 550 × 400 pixel size, comprising 270 spectral bands covering a wavelength range of 400 to 1000 nm. The false-color image and the ground truth image of the dataset are shown in Figure 7.

4.2. Experimental Settings

All algorithmic experiments in this article were implemented using Python 3.12.3 and PyTorch 2.5.1 and were trained on a computer equipped with an RTX 4060 Ti 16 GB GPU (NVIDIA, Santa Clara, CA, USA) and an Intel Core i5-13600KF CPU (Intel, Santa Clara, CA, USA).
For the AU, HU2013, PU, and LK datasets, 30 training samples per class are used, with the remaining samples forming the test set. Table 1, Table 2, Table 3 and Table 4 present the distribution of training and test samples for each land cover class within each dataset along with the total number of samples. The hyperparameters are set as follows: the initial learning rate is 0.03, the total number of training epochs for all four datasets is 200, and the learning rate is halved every 50 epochs. We use the Adam optimizer. Additionally, to eliminate the interference of random factors on the stability of the results, we conduct independent repeated experiments with ten randomly generated seeds to fully validate the robustness of the model performance.
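A minimal sketch of this training setup is shown below; `model`, `train_loader`, and `device` are placeholders for the actual experiment objects, and the cross-entropy loss is our assumption since the paper does not state the loss function explicitly.

```python
import torch
import torch.nn as nn

def train(model, train_loader, device, epochs: int = 200):
    """Training loop matching the stated schedule: Adam, lr 0.03, 200 epochs,
    learning rate halved every 50 epochs."""
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.03)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)
    for epoch in range(epochs):
        model.train()
        for patches, spectra, labels in train_loader:
            patches, spectra, labels = (patches.to(device), spectra.to(device),
                                        labels.to(device))
            optimizer.zero_grad()
            loss = criterion(model(patches, spectra), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()   # halve the learning rate every 50 epochs
```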
To more thoroughly analyze and compare the classification performance of the proposed model, six commonly used evaluation metrics were selected: per-class classification accuracy, overall accuracy (OA), average accuracy (AA), the Kappa coefficient (Kappa), the number of parameters, and FLOPs. Additionally, to compare the classification performance of each model more intuitively, classification maps were generated for each comparison model. Finally, all experimental results are reported as the average of ten independent runs to allow more reliable comparisons.
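For completeness, the accuracy metrics can be computed from the confusion matrix as in the following sketch (the helper name and the example labels are ours).

```python
import numpy as np

def classification_metrics(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int):
    """Compute OA, AA, and the Kappa coefficient from label vectors."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    oa = np.trace(cm) / total                                 # overall accuracy
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)   # per-class accuracy
    aa = per_class.mean()                                     # average accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa


# Example
oa, aa, kappa = classification_metrics(np.array([0, 1, 2, 2, 1]),
                                       np.array([0, 1, 2, 1, 1]), num_classes=3)
print(f"OA={oa:.3f}, AA={aa:.3f}, Kappa={kappa:.3f}")
```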

4.3. Experimental Results and Analysis

We selected eight models from four different categories for comparison: the CNN-based SPRN [36] and CLOLN [37], the Transformer-based SSFTT [38] and GSC-ViT [39], the GCN-based FDGC [40] and WFCG [41], and the Mamba-based MambaHSI [42] and IGroupSS-Mamba [43]. All experiments were conducted under the parameter settings specified in the original papers, using the original hyperparameters to ensure the fairest and most accurate comparison. The reported results are the mean and standard deviation of ten runs, with the detailed comparisons presented in Table 5, Table 6, Table 7 and Table 8, which also list the parameters (K) and FLOPs (M) of each model. As shown in the tables, our proposed model achieves the highest OA, AA, and Kappa across all datasets. The classification maps are visualized in Figure 8, Figure 9, Figure 10 and Figure 11. In the following subsections, we analyze the classification performance of the different models on the selected datasets.

4.3.1. Results and Analysis on the Augsburg Dataset

As shown in Table 5, the proposed model achieves the highest performance on OA, AA, and Kappa with respective values of 90.14%, 83.87%, and 86.26%. Compared to the second-best model, IGroupSS-Mamba, our model outperforms it by 2.69% on OA, 1.11% on AA, and 3.53% on Kappa. Meanwhile, we can observe that the GCN-based model FDGC did not achieve good results with an OA of only 77.37%. This may be because GCN relies on iterative learning from neighboring nodes, and when the number of selected samples is small, it is difficult to fully reflect the actual graph structure, which affects feature learning and classification performance. However, the WFCG model achieved better results than the FDGC model. This may be because modifying the traditional GCN to a Graph Attention Network (GAT) enables it to adaptively learn the attention weights between nodes, effectively capturing the complex local dependencies between pixels and thereby improving classification accuracy. Additionally, the SSFTT model based on Transformers, which relies on self-attention mechanisms, can model pixel relationships across the entire domain, outperforming CNN-based models such as SPRN and CLOLN. While the CNN-based model CLOLN does not lead in accuracy, it has the lowest number of parameters and FLOPs, which can be attributed to the CNN’s local perception and weight-sharing design, resulting in significantly fewer parameters and FLOPs compared to fully connected networks. Regarding parameter count and FLOPs, the proposed model also holds a leading position with a classification accuracy superior to that of other models. From Figure 8, we can see that compared to other comparison models, our proposed model yields smoother classification results and the least noise.

4.3.2. Results and Analysis on the Houston2013 Dataset

As shown in Table 6, FDGC achieves higher accuracy on this dataset than on the AU dataset, which again indicates that GCN-based models depend on the propagation characteristics of the graph structure and, in particular, on the number of samples. Meanwhile, the Mamba-based model IGroupSS-Mamba still achieves the second-best performance, confirming the effectiveness of the Mamba module relative to the other models. We can also see that while the WFCG model achieves an intermediate level of classification accuracy, it requires a massive number of FLOPs, reaching 92,600.4 M, reflecting the sharp increase in the computation required by GAT as the dataset grows. CLOLN continues to maintain the lowest parameter count and FLOPs. Our proposed model still achieves the highest classification accuracy, with the OA reaching 96.63%, the AA reaching 97.11%, and Kappa reaching 96.36%. As shown in Figure 9, our model achieves the best classification performance and remains highly accurate even for small targets such as trees. However, SSFTT exhibits a higher rate of classification errors for similar targets such as Parking lot 1 and Parking lot 2.

4.3.3. Results and Analysis on the Pavia University Dataset

As shown in Table 7, the proposed model maintains the best accuracy, with OA, AA, and Kappa all exceeding 98%. At the same time, the parameters and FLOPs of the proposed model are second only to those of the CNN-based model CLOLN, so it attains high accuracy while maintaining low computational complexity. We found that the classification accuracy of IGroupSS-Mamba, which consistently ranked second on the AU and HU2013 datasets, decreases here. This may be because when the dataset contains more complex mixtures of land covers or larger differences in spatial resolution, feature extraction becomes more difficult, and the interval-group spatial–spectral blocks of IGroupSS-Mamba cannot effectively capture the spatial–spectral context, leading to a decline in performance. Additionally, we observed that the Mamba-based models outperform the Transformer-based ones. This may be because Mamba models, which use a selective state space sequence modeling mechanism, can better capture long-range dependencies in long sequences than the self-attention mechanism of Transformers, thereby achieving higher accuracy. As shown in Figure 10, our proposed model produces distinct boundaries at object edges, achieving the best classification performance. Meanwhile, as shown in Table 9, our proposed model requires a training time of 9.51 s and a testing time of 0.48 s, the second best among all comparison models. This demonstrates that our model attains optimal results within a shorter training cycle, further validating its superiority.

4.3.4. Results and Analysis on the WHU-Hi-LongKou Dataset

As shown in Table 8, even with the LK dataset, which consists of large images but a small number of training samples, our model outperforms other models. Its OA, AA, and Kappa values are as high as 98.51%, 98.60%, and 98.04%, respectively. Additionally, we can observe that our model achieves the highest classification accuracy across six categories. Furthermore, the OA accuracy of the Mamba-based model exceeds 97%, further validating the advantage of the Mamba model over other models in large-scale classification tasks. Finally, as shown in Figure 11, the GSC-VIT model exhibits a high error rate in the broadleaf soybean category with significant noise evident in the visualization. Additionally, we can observe that other models exhibit numerous classification errors along the edges in the corn category. In contrast, our proposed model yields the smoothest and most accurate results, further confirming its superiority.

5. Discussion

5.1. Impact of the Patch Size

The range of patch sizes determines the strength of the spatial context information captured. To quantitatively assess the impact of different patch sizes on the proposed model’s performance, this experiment fixed other parameters and only altered the spatial dimensions of the input patches, setting them to 9, 11, 13, 15, and 17. The changes in OA were tested on the AU, HU2013, PU, and LK datasets.
As shown in Figure 12a, the different characteristics of the datasets also dominate the differences in their classification accuracy. The PU and LK datasets achieve higher accuracy at larger patch sizes, so increasing the patch size can provide more spatial context to improve accuracy. The HU2013 dataset, on the other hand, exhibits relatively stable accuracy at medium patch sizes, while excessively large or small patch sizes tend to lead to a decline in accuracy. The AU dataset exhibits relatively stable accuracy at smaller patch sizes with accuracy gradually decreasing as the patch size increases. These results suggest that appropriate patch sizing balances spatial context with feature redundancy.

5.2. Impact of PCA on the Results

To quantify the impact of the PCA ratio on the performance of the proposed model, this experiment fixed the network structure and set five different PCA ratios: 1/10, 1/8, 1/6, 1/4, and 1/2. For each PCA ratio, the changes in OA were analyzed across four datasets.
As shown in Figure 12b, a PCA ratio of 1/8 is the relatively optimal setting: PU and LK maintain peak accuracy, HU2013 reaches its maximum of 96.63%, and the AU error bar is the shortest, indicating the best stability. This ratio removes redundant noise from AU and HU2013 while retaining the key spectral features of PU and LK. In other words, this setting effectively removes redundant spectral information while retaining the essential discriminative features, enhancing the efficiency of the spectral representation and improving downstream classification.

5.3. The Impact of Embedding Dimensions

In HSIC, the embedding dimension (Embedding) determines the model’s ability to encode spectral–spatial features. To investigate its impact on performance, five groups of embedding dimensions were set: 32, 64, 96, 128, and 160. The OA, AA, and Kappa changes were analyzed across the AU, HU2013, PU, and LK datasets.
As shown in Figure 13, at lower dimensions (32–64), the OA, AA, and Kappa values of all datasets typically increase. This is because at lower dimensions, the feature encoding capacity is limited, while increasing the dimension can capture more spectral–spatial correlation information, thereby improving the discriminative power of classification. Most datasets reach their performance peak at medium dimensions (64–128). This indicates that this dimension range effectively balances rich feature expression with avoiding introducing excessive redundancy (which may lead to overfitting). At higher dimensions (128–160), simpler datasets, such as PU and LK, maintain stable or slightly improved performance, as their features are easier to distinguish. However, model performance growth slows or fluctuates for complex datasets like AU and HU2013, possibly due to overly strong feature expression, which introduces noise.

5.4. Ablation Experiment

5.4.1. The Impact of Different Modules

To validate the effectiveness of the proposed modules, we designed and conducted a series of ablation experiments involving the SGGC, SAFM, and SSAF modules. To ensure a fair comparison, in the ablation variants the SAFM module was replaced with a standard MLP, and SSAF was replaced with a Sum operation that directly adds the spatial and spectral features. The experiments were conducted on the PU dataset, where 30 samples per class were used for training and the remaining samples were used for testing. The evaluation metrics were OA, AA, and Kappa. As shown in Table 10, integrating all of the proposed modules achieves the best performance, demonstrating the synergistic effect of reduced redundancy and improved spectral–spatial feature representation.

5.4.2. The Impact of Different Encoder Block Numbers

To determine the effect of the number of encoder blocks on SGFNet's performance, we varied the number of encoder blocks from 1 to 5. These experiments were conducted on the PU dataset with the number of training samples per class fixed at 30 and the remaining samples allocated to the test set. The performance evaluation metrics were OA, AA, and Kappa, and the model parameters (K) and FLOPs (M) were also recorded.
As shown in Table 11, the experimental results exhibit a clear performance trend: as the number of encoder blocks increases, model performance gradually improves until it reaches a peak at three encoder blocks, after which further increases in the number of encoder blocks result in diminishing returns. When the number of encoders is set to three, the model achieves optimal performance across all metrics. It also has a relatively balanced computational overhead, achieving a balance between predictive capability and computational efficiency.

5.5. Feature Visualization

To further demonstrate the capability of our proposed SGFNet in learning feature distributions, t-SNE [44] is employed as a dimension reduction tool to visually examine the distribution of the learned features. As shown in Figure 14, the t-SNE projection results vividly demonstrate that our model exhibits excellent feature separation capabilities across all four datasets. It can be clearly observed that features belonging to different categories are distinctly separated into mutually isolated clusters in the low-dimensional embedding space. In contrast, features within the same category remain highly compact with minimal internal variability. This indicates that the model effectively extracts discriminative spectral–spatial features while avoiding redundant or noisy representations.
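As a pointer for reproducing this kind of figure, the sketch below projects learned features to 2-D with scikit-learn's t-SNE and colors them by class; the perplexity, the color map, and the choice of taking features from the pooled layer before the classifier are our assumptions.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(features: np.ndarray, labels: np.ndarray, out_path: str = "tsne.png"):
    """Project learned features (N, D) to 2-D with t-SNE and color by class label."""
    emb = TSNE(n_components=2, perplexity=30, init="pca",
               random_state=0).fit_transform(features)
    plt.figure(figsize=(6, 6))
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=3, cmap="tab10")
    plt.axis("off")
    plt.savefig(out_path, dpi=300, bbox_inches="tight")

# `features` would typically be the pooled representation collected over the
# test set just before the final linear classifier.
```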

6. Conclusions

In this paper, we propose SGFNet for HSIC, which is a network in which spectral features continuously guide the extraction of spatial features. SGFNet integrates three core components: the Spectral-Aware Filtering Module (SAFM) for enhancing key spectral information, the Spectral–Spatial Adaptive Fusion (SSAF) module for the dynamic integration of spectral and spatial features, and the Spectral Guidance Gated CNN (SGGC)—a Mamba-inspired architecture without the SSM mechanism—augmented with SSAF and spectral guidance to enable effective spatial feature extraction while maintaining a lightweight design. We investigated the feasibility of applying the MambaOut architecture to HSIC and conducted extensive experiments using four publicly available datasets. The results demonstrate that SGFNet consistently outperforms eight state-of-the-art methods across multiple evaluation metrics, highlighting the effectiveness of the proposed modules and supporting the design choice of removing the SSM mechanism. From an information–theoretic perspective, SGFNet effectively reduces redundant spectral information, which can be interpreted as lowering the entropy of the feature representations while preserving discriminative information critical for classification. This balance between redundancy reduction and information preservation contributes to the model’s superior performance and efficient representation of spectral–spatial features. In future work, we plan to extend SGFNet to other hyperspectral tasks, such as object detection and change detection, and to develop lightweight variants suitable for deployment in resource-constrained environments.

Author Contributions

Conceptualization, B.W. and C.C.; methodology, B.W. and C.C.; software, B.W. and C.C.; validation, B.W. and C.C.; formal analysis, B.W., C.C. and D.K.; investigation, B.W. and C.C.; resources, D.K.; data curation, D.K.; writing—original draft preparation, B.W. and C.C.; writing—review and editing, D.K.; visualization, B.W. and C.C.; supervision, D.K.; funding acquisition, B.W. and D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant number 12090020.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ma, W.K.; Bioucas-Dias, J.M.; Chanussot, J.; Gader, P. Signal and Image Processing in Hyperspectral Remote Sensing [From the Guest Editors]. IEEE Signal Process. Mag. 2014, 31, 22–23. [Google Scholar] [CrossRef]
  2. Vairavan, C.; Kamble, B.M.; Durgude, A.G.; Ingle, S.R.; Pugazenthi, K. Hyperspectral Imaging of Soil and Crop: A Review. J. Exp. Agric. Int. 2024, 46, 48–61. [Google Scholar] [CrossRef]
  3. Peyghambari, S.; Zhang, Y. Hyperspectral remote sensing in lithological mapping, mineral exploration, and environmental geology: An updated review. J. Appl. Remote Sens. 2021, 15, 031501. [Google Scholar] [CrossRef]
  4. Rajabi, R.; Zehtabian, A.; Singh, K.D.; Tabatabaeenejad, A.; Ghamisi, P.; Homayouni, S. Hyperspectral imaging in environmental monitoring and analysis. Front. Environ. Sci. 2024, 11, 1353447. [Google Scholar] [CrossRef]
  5. Ahmad, M.; Shabbir, S.; Roy, S.K.; Hong, D.; Wu, X.; Yao, J.; Khan, A.M.; Mazzara, M.; Distefano, S.; Chanussot, J. Hyperspectral image classification—Traditional to deep models: A survey for future prospects. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 15, 968–999. [Google Scholar] [CrossRef]
  6. Datta, D.; Mallick, P.K.; Bhoi, A.K.; Ijaz, M.F.; Shafi, J.; Choi, J. Hyperspectral image classification: Potentials, challenges, and future directions. Comput. Intell. Neurosci. 2022, 2022, 3854635. [Google Scholar] [CrossRef]
  7. Khan, M.J.; Khan, H.S.; Yousaf, A.; Khurshid, K.; Abbas, A. Modern trends in hyperspectral image analysis: A review. IEEE Access 2018, 6, 14118–14129. [Google Scholar] [CrossRef]
  8. Ullah, F.; Ullah, I.; Khan, R.U.; Khan, S.; Khan, K.; Pau, G. Conventional to deep ensemble methods for hyperspectral image classification: A comprehensive survey. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 3878–3916. [Google Scholar] [CrossRef]
  9. Zhang, Z.; Huang, L.; Wang, Q.; Jiang, L.; Qi, Y.; Wang, S.; Shen, T.; Tang, B.-H.; Gu, Y. UAV hyperspectral remote sensing image classification: A systematic review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 18, 3099–3124. [Google Scholar] [CrossRef]
  10. Shaik, R.U.; Periasamy, S.; Zeng, W. Potential assessment of PRISMA hyperspectral imagery for remote sensing applications. Remote Sens. 2023, 15, 1378. [Google Scholar] [CrossRef]
  11. Huang, K.; Li, S.; Kang, X.; Fang, L. Spectral–spatial hyperspectral image classification based on KNN. Sens. Imaging 2016, 17, 1. [Google Scholar] [CrossRef]
  12. Mercier, G.; Lennon, M. Support vector machines for hyperspectral image classification with spectral-based kernels. In Proceedings of the IGARSS 2003, 2003 IEEE International Geoscience and Remote Sensing Symposium, Proceedings (IEEE Cat. No. 03CH37477), Toulouse, France, 21–25 July 2003; pp. 288–290. [Google Scholar]
  13. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep learning for hyperspectral image classification: An overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709. [Google Scholar] [CrossRef]
  14. Grewal, R.; Singh Kasana, S.; Kasana, G. Machine learning and deep learning techniques for spectral spatial classification of hyperspectral images: A comprehensive survey. Electronics 2023, 12, 488. [Google Scholar] [CrossRef]
  15. Manian, V.; Alfaro-Mejía, E.; Tokars, R.P. Hyperspectral image labeling and classification using an ensemble semi-supervised machine learning approach. Sensors 2022, 22, 1623. [Google Scholar] [CrossRef] [PubMed]
  16. Bera, S.; Shrivastava, V.K.; Satapathy, S.C. Advances in Hyperspectral Image Classification Based on Convolutional Neural Networks: A Review. CMES-Comput. Model. Eng. Sci. 2022, 133, 219–250. [Google Scholar] [CrossRef]
  17. Taye, M.M. Theoretical understanding of convolutional neural network: Concepts, architectures, applications, future directions. Computation 2023, 11, 52. [Google Scholar] [CrossRef]
  18. Zhang, M.; Li, W.; Du, Q. Diverse region-based CNN for hyperspectral image classification. IEEE Trans. Image Process. 2018, 27, 2623–2634. [Google Scholar] [CrossRef]
  19. Lee, H.; Kwon, H. Going deeper with contextual CNN for hyperspectral image classification. IEEE Trans. Image Process. 2017, 26, 4843–4855. [Google Scholar] [CrossRef]
  20. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  21. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar] [CrossRef]
  22. Chen, Y.; Zhu, K.; Zhu, L.; He, X.; Ghamisi, P.; Benediktsson, J.A. Automatic Design of Convolutional Neural Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7048–7066. [Google Scholar] [CrossRef]
  23. He, X.; Chen, Y.; Lin, Z. Spatial-Spectral Transformer for Hyperspectral Image Classification. Remote Sens. 2021, 13, 498. [Google Scholar] [CrossRef]
  24. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking Hyperspectral Image Classification with Transformers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  25. Ayas, S.; Tunc-Gormus, E. SpectralSWIN: A spectral-swin transformer network for hyperspectral image classification. Int. J. Remote Sens. 2022, 43, 4025–4044. [Google Scholar] [CrossRef]
  26. Fu, D.; Zeng, Y.; Zhao, J. DFAST: A Differential-Frequency Attention-Based Band Selection Transformer for Hyperspectral Image Classification. Remote Sens. 2025, 17, 2488. [Google Scholar] [CrossRef]
  27. Zhang, G.; Abdulla, W. Transformers Meet Hyperspectral Imaging: A Comprehensive Study of Models, Challenges and Open Problems. arXiv 2025, arXiv:2506.08596. [Google Scholar] [CrossRef]
  28. Gu, A.; Dao, T. Mamba: Linear-time sequence modeling with selective state spaces. arXiv 2023, arXiv:2312.00752. [Google Scholar] [CrossRef]
  29. Sun, M.; Wang, L.; Jiang, S.; Cheng, S.; Tang, L. HyperSMamba: A Lightweight Mamba for Efficient Hyperspectral Image Classification. Remote Sens. 2025, 17, 2008. [Google Scholar] [CrossRef]
  30. Wang, X.; Wang, S.; Ding, Y.; Li, Y.; Wu, W.; Rong, Y.; Kong, W.; Huang, J.; Li, S.; Yang, H. State space model for new-generation network alternative to transformers: A survey. arXiv 2024, arXiv:2404.09516. [Google Scholar] [CrossRef]
  31. Zhang, H.; Zhu, Y.; Wang, D.; Zhang, L.; Chen, T.; Wang, Z.; Ye, Z. A survey on visual mamba. Appl. Sci. 2024, 14, 5683. [Google Scholar] [CrossRef]
  32. Lv, X.; Sun, Y.; Zhang, K.; Qu, S.; Zhu, X.; Fan, Y.; Wu, Y.; Hua, E.; Long, X.; Ding, N. Technologies on Effectiveness and Efficiency: A Survey of State Spaces Models. arXiv 2025, arXiv:2503.11224. [Google Scholar] [CrossRef]
  33. Yu, W.; Wang, X. Mambaout: Do we really need mamba for vision? In Proceedings of the Computer Vision and Pattern Recognition Conference, Nashville, TN, USA, 10–17 June 2025; pp. 4484–4496. [Google Scholar]
  34. Maćkiewicz, A.; Ratajczak, W. Principal components analysis (PCA). Comput. Geosci. 1993, 19, 303–342. [Google Scholar] [CrossRef]
  35. Uddin, M.P.; Mamun, M.A.; Hossain, M.A. PCA-based feature reduction for hyperspectral remote sensing image classification. Iete Tech. Rev. 2021, 38, 377–396. [Google Scholar] [CrossRef]
  36. Zhang, X.; Shang, S.; Tang, X.; Feng, J.; Jiao, L. Spectral Partitioning Residual Network with Spatial Attention Mechanism for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  37. Li, C.; Rasti, B.; Tang, X.; Duan, P.; Li, J.; Peng, Y. Channel-layer-oriented lightweight spectral–spatial network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–14. [Google Scholar] [CrossRef]
  38. Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral–spatial feature tokenization transformer for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  39. Zhao, Z.; Xu, X.; Li, S.; Plaza, A. Hyperspectral image classification using groupwise separable convolutional vision transformer network. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–17. [Google Scholar] [CrossRef]
  40. Liu, Q.; Dong, Y.; Zhang, Y.; Luo, H. A Fast Dynamic Graph Convolutional Network and CNN Parallel Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  41. Dong, Y.; Liu, Q.; Du, B.; Zhang, L. Weighted feature fusion of convolutional neural network and graph attention network for hyperspectral image classification. IEEE Trans. Image Process. 2022, 31, 1559–1572. [Google Scholar] [CrossRef]
  42. Li, Y.; Luo, Y.; Zhang, L.; Wang, Z.; Du, B. MambaHSI: Spatial–Spectral Mamba for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–16. [Google Scholar] [CrossRef]
  43. He, Y.; Tu, B.; Jiang, P.; Liu, B.; Li, J.; Plaza, A. IGroupSS-Mamba: Interval group spatial-spectral mamba for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5538817. [Google Scholar] [CrossRef]
  44. Maaten, L.v.d.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
Figure 1. Architecture with shadow regions (Mamba), architecture without shadow regions (Gated CNN).
Figure 2. Detailed flowchart of SGFNet. (a): The upper part is the main branch of SGFNet, and the lower part is the Spectral-Aware Filtering Module (SAFM). (b): Spectral-Guided Gated CNN Module (SGGC). (c): Spectral–Spatial Adaptive Fusion Module (SSAF).
Figure 3. Embedding and downsampling details diagram. The asterisk ‘*’ indicates multiplicity. (a): Embedding layer, (b): downsampling layer.
Figure 4. Augsburg dataset. (a) False color image. (b) Ground truth.
Figure 5. Houston2013 dataset. (a) False color image. (b) Ground truth.
Figure 6. Pavia University dataset. (a) False color image. (b) Ground truth.
Figure 7. WHU-Hi-LongKou dataset. (a) False color image. (b) Ground truth.
Figure 8. Visualization results of different model classifications in the Augsburg dataset. (a) Ground truth. (b) SPRN. (c) CLOLN. (d) FDGC. (e) WFCG. (f) SSFTT. (g) GSC-VIT. (h) MambaHSI. (i) IGroupSS-Mamba. (j) Ours. The color markers below indicate category correspondences.
Figure 9. Visualization results of different model classifications in the Houston2013 dataset. (a) Ground truth. (b) SPRN. (c) CLOLN. (d) FDGC. (e) WFCG. (f) SSFTT. (g) GSC-VIT. (h) MambaHSI. (i) IGroupSS-Mamba. (j) Ours. The color markers below indicate category correspondences.
Figure 10. Visualization results of different model classifications in the Pavia University dataset. (a) Ground truth. (b) SPRN. (c) CLOLN. (d) FDGC. (e) WFCG. (f) SSFTT. (g) GSC-VIT. (h) MambaHSI. (i) IGroupSS-Mamba. (j) Ours. The color markers below indicate category correspondences.
Figure 11. Visualization results of different model classifications in the WHU-Hi-LongKou dataset. (a) Ground truth. (b) SPRN. (c) CLOLN. (d) FDGC. (e) WFCG. (f) SSFTT. (g) GSC-VIT. (h) MambaHSI. (i) IGroupSS-Mamba. (j) Ours. The color markers below indicate category correspondences.
Figure 12. The impact of patch size and PCA on model performance. (a): Patch size, (b): PCA.
Figure 13. The impact of different embedding dimensions on four datasets. (a): Augsburg dataset, (b): Houston2013 dataset, (c): Pavia University dataset, (d): WHU-Hi-LongKou dataset.
Figure 14. t-SNE visualization on the four datasets.
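The t-SNE plots in Figure 14 visualize how well the learned features separate the classes. A generic sketch of such a visualization with scikit-learn, assuming the (N, D) feature matrix has already been extracted from the trained network (the extraction step itself is not shown), is:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features: np.ndarray, labels: np.ndarray, out_path: str) -> None:
    """Project (N, D) features to 2-D with t-SNE and color the points by class label."""
    emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(features)
    plt.figure(figsize=(5, 5))
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=3, cmap="tab20")
    plt.axis("off")
    plt.savefig(out_path, dpi=300, bbox_inches="tight")
    plt.close()
```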
Table 1. The land cover categories and the dataset division for each category of the Augsburg dataset.
Class | Category | Training | Testing | Total
1 | Forest | 30 | 13,477 | 13,507
2 | Residential area | 30 | 30,299 | 30,329
3 | Industrial area | 30 | 3821 | 3851
4 | Low plants | 30 | 26,827 | 26,857
5 | Allotment | 30 | 545 | 575
6 | Commercial area | 30 | 1615 | 1645
7 | Water | 30 | 1500 | 1530
Total |  | 210 | 78,084 | 78,294
Table 2. The land cover categories and the dataset division for each category of the Houston2013 dataset.
Class | Category | Training | Testing | Total
1 | Healthy grass | 30 | 1221 | 1251
2 | Stressed grass | 30 | 1224 | 1254
3 | Synthetic grass | 30 | 667 | 697
4 | Tree | 30 | 1214 | 1244
5 | Soil | 30 | 1212 | 1242
6 | Water | 30 | 295 | 325
7 | Residential | 30 | 1238 | 1268
8 | Commercial | 30 | 1214 | 1244
9 | Road | 30 | 1222 | 1252
10 | Highway | 30 | 1197 | 1227
11 | Railway | 30 | 1205 | 1235
12 | Parking lot 1 | 30 | 1203 | 1233
13 | Parking lot 2 | 30 | 439 | 469
14 | Tennis court | 30 | 398 | 428
15 | Running track | 30 | 630 | 660
Total |  | 450 | 14,579 | 15,029
Table 3. The land cover categories and the dataset division for each category of the Pavia University dataset.
Class | Category | Training | Testing | Total
1 | Asphalt | 30 | 6601 | 6631
2 | Meadows | 30 | 18,619 | 18,649
3 | Gravel | 30 | 2069 | 2099
4 | Trees | 30 | 3034 | 3064
5 | Metal Sheets | 30 | 1315 | 1345
6 | Bare-soil | 30 | 4999 | 5029
7 | Bitumen | 30 | 1300 | 1330
8 | Bricks | 30 | 3652 | 3682
9 | Shadows | 30 | 917 | 947
Total |  | 270 | 42,506 | 42,776
Table 4. The land cover categories and the dataset division for each category of the WHU-Hi-LongKou dataset.
Class | Category | Training | Testing | Total
1 | Corn | 30 | 34,481 | 34,511
2 | Cotton | 30 | 8344 | 8374
3 | Sesame | 30 | 3001 | 3031
4 | Broad-leaf soybean | 30 | 63,182 | 63,212
5 | Narrow-leaf soybean | 30 | 4121 | 4151
6 | Rice | 30 | 11,824 | 11,854
7 | Water | 30 | 67,026 | 67,056
8 | Roads and houses | 30 | 7094 | 7124
9 | Mixed weed | 30 | 5199 | 5229
Total |  | 270 | 204,272 | 204,542
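Tables 1–4 follow the same protocol: 30 labeled pixels per class are randomly drawn for training and the remaining labeled pixels are used for testing. A minimal sketch of such a per-class split, assuming a ground-truth map in which 0 marks unlabeled pixels (the function name is illustrative, not the authors' code), is:

```python
import numpy as np

def split_per_class(gt: np.ndarray, n_train: int = 30, seed: int = 0):
    """Randomly pick n_train pixels per class from an (H, W) ground-truth map.

    Returns two boolean masks (train, test) with the same shape as gt;
    label 0 is treated as unlabeled and excluded from both.
    """
    rng = np.random.default_rng(seed)
    train_mask = np.zeros(gt.shape, dtype=bool)
    test_mask = np.zeros(gt.shape, dtype=bool)
    for cls in np.unique(gt):
        if cls == 0:
            continue
        rows, cols = np.nonzero(gt == cls)
        order = rng.permutation(len(rows))
        train_idx, test_idx = order[:n_train], order[n_train:]
        train_mask[rows[train_idx], cols[train_idx]] = True
        test_mask[rows[test_idx], cols[test_idx]] = True
    return train_mask, test_mask
```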
Table 5. Quantitative results (ACC% ± STD%) on the Augsburg dataset. Best in bold.
Class | SPRN (CNN) | CLOLN (CNN) | FDGC (GCN) | WFCG (GCN) | SSFTT (Transformer) | GSC-ViT (Transformer) | MambaHSI (Mamba) | IGroupSS-Mamba (Mamba) | SGFNet (Ours)
Forest | 94.05 ± 3.42 | 92.06 ± 6.36 | 87.84 ± 3.15 | 93.67 ± 2.93 | 95.56 ± 1.89 | 93.32 ± 3.73 | 95.51 ± 1.43 | 93.88 ± 2.89 | 97.57 ± 1.38
Residential area | 86.47 ± 3.33 | 91.80 ± 7.22 | 73.77 ± 7.64 | 86.08 ± 3.10 | 86.39 ± 5.81 | 83.05 ± 6.35 | 84.02 ± 5.69 | 84.18 ± 3.50 | 87.57 ± 3.48
Industrial area | 65.49 ± 5.43 | 71.61 ± 16.32 | 66.59 ± 7.68 | 70.22 ± 7.42 | 68.49 ± 4.04 | 62.22 ± 14.32 | 62.69 ± 8.93 | 69.99 ± 7.44 | 69.34 ± 7.13
Low plants | 86.16 ± 4.05 | 96.92 ± 2.17 | 78.58 ± 6.55 | 88.80 ± 3.75 | 88.42 ± 5.31 | 86.37 ± 4.59 | 79.2 ± 5.77 | 92.04 ± 2.48 | 94.39 ± 2.30
Allotment | 93.10 ± 3.10 | 15.07 ± 7.58 | 84.42 ± 6.01 | 97.27 ± 1.77 | 91.03 ± 3.13 | 92.36 ± 4.67 | 89.52 ± 4.06 | 94.70 ± 2.42 | 96.50 ± 1.74
Commercial area | 66.71 ± 4.99 | 33.64 ± 12.04 | 69.58 ± 5.89 | 67.95 ± 7.07 | 63.42 ± 11.28 | 58.91 ± 13.82 | 64.76 ± 6.53 | 66.25 ± 7.59 | 66.69 ± 5.73
Water | 73.44 ± 3.80 | 33.87 ± 10.07 | 68.44 ± 4.34 | 80.21 ± 2.60 | 68.97 ± 5.65 | 72.88 ± 5.01 | 71.53 ± 3.49 | 78.28 ± 2.98 | 75.03 ± 4.41
OA (%) | 86.03 ± 2.29 | 81.24 ± 6.09 | 77.37 ± 4.12 | 87.14 ± 1.96 | 87.02 ± 1.00 | 84.31 ± 3.21 | 82.7 ± 2.71 | 87.45 ± 0.96 | 90.14 ± 0.99
AA (%) | 80.77 ± 1.70 | 62.14 ± 4.14 | 75.55 ± 2.15 | 83.46 ± 1.09 | 80.33 ± 1.86 | 78.44 ± 1.43 | 78.18 ± 1.09 | 82.76 ± 1.23 | 83.87 ± 0.89
Kappa (%) | 80.80 ± 2.97 | 74.52 ± 6.96 | 69.89 ± 4.86 | 82.17 ± 2.63 | 82.00 ± 1.24 | 78.52 ± 4.05 | 68.33 ± 8.98 | 82.73 ± 1.22 | 86.26 ± 1.32
Params (K) | 181.37 | 5.3 | 1855.61 | 75.83 | 148.42 | 97.48 | 405.51 | 139.49 | 108.75
FLOPs (M) | 11.75 | 1.57 | 27.98 | 23,761.22 | 22.8 | 6.2 | 38,160.15 | 10.34 | 3.72
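The OA, AA, and Kappa values reported in Tables 5–8 follow their standard confusion-matrix definitions: OA is the fraction of correctly classified test pixels, AA is the mean of the per-class accuracies, and Kappa measures agreement beyond chance. A small NumPy sketch of these definitions (not the authors' evaluation script) is:

```python
import numpy as np

def classification_metrics(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int):
    """Return (OA, AA, Kappa) in percent from integer label vectors."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    oa = np.trace(cm) / total                                  # overall accuracy
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)    # guard against empty classes
    aa = per_class.mean()                                      # average accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)                               # Cohen's kappa
    return 100 * oa, 100 * aa, 100 * kappa
```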
Table 6. Quantitative results (ACC% ± STD%) on the Houston2013 dataset. Best in bold.
Class | SPRN (CNN) | CLOLN (CNN) | FDGC (GCN) | WFCG (GCN) | SSFTT (Transformer) | GSC-ViT (Transformer) | MambaHSI (Mamba) | IGroupSS-Mamba (Mamba) | SGFNet (Ours)
Healthy Grass | 97.26 ± 3.93 | 88.24 ± 9.00 | 91.48 ± 3.66 | 93.85 ± 5.44 | 90.74 ± 6.77 | 88.32 ± 19.51 | 95.20 ± 4.66 | 99.05 ± 0.66 | 96.21 ± 2.73
Stressed Grass | 96.16 ± 3.63 | 93.41 ± 5.99 | 92.03 ± 3.52 | 96.74 ± 4.29 | 91.96 ± 13.40 | 93.50 ± 5.28 | 98.29 ± 1.26 | 99.16 ± 0.90 | 98.60 ± 1.33
Synthetic Grass | 99.27 ± 0.63 | 95.21 ± 9.84 | 99.27 ± 0.60 | 99.93 ± 0.10 | 99.50 ± 0.67 | 99.16 ± 1.32 | 99.74 ± 0.46 | 99.73 ± 0.06 | 99.99 ± 0.04
Tree | 95.07 ± 2.65 | 97.44 ± 3.80 | 90.84 ± 5.25 | 96.46 ± 2.34 | 91.50 ± 5.64 | 93.50 ± 2.15 | 97.26 ± 2.47 | 98.57 ± 1.11 | 99.28 ± 1.71
Soil | 99.98 ± 0.05 | 97.07 ± 2.86 | 99.64 ± 0.58 | 99.93 ± 0.14 | 99.59 ± 0.66 | 98.81 ± 1.93 | 99.54 ± 0.76 | 99.99 ± 0.02 | 100.00 ± 0.00
Water | 98.75 ± 1.07 | 94.85 ± 4.72 | 96.36 ± 4.37 | 96.61 ± 4.65 | 95.48 ± 6.23 | 97.56 ± 3.38 | 97.47 ± 2.33 | 100.00 ± 0.00 | 98.71 ± 3.86
Residential | 93.23 ± 2.58 | 94.02 ± 2.94 | 84.64 ± 5.10 | 93.63 ± 6.75 | 84.99 ± 7.97 | 87.50 ± 6.28 | 93.03 ± 2.25 | 94.26 ± 3.86 | 96.03 ± 2.79
Commercial | 82.63 ± 4.75 | 93.80 ± 3.94 | 83.25 ± 4.96 | 87.93 ± 4.27 | 82.21 ± 7.55 | 78.39 ± 6.59 | 81.35 ± 3.98 | 88.54 ± 1.75 | 88.50 ± 4.75
Road | 88.96 ± 3.32 | 88.32 ± 2.83 | 88.02 ± 3.93 | 87.72 ± 6.49 | 84.31 ± 4.78 | 81.96 ± 3.99 | 90.03 ± 3.40 | 84.04 ± 7.21 | 91.42 ± 3.90
Highway | 94.82 ± 4.24 | 76.55 ± 6.96 | 96.35 ± 1.70 | 90.44 ± 10.33 | 92.79 ± 6.77 | 92.64 ± 7.93 | 96.55 ± 1.49 | 92.13 ± 3.15 | 97.51 ± 2.66
Railway | 91.61 ± 2.86 | 91.57 ± 4.40 | 95.86 ± 2.82 | 93.13 ± 5.63 | 92.69 ± 9.60 | 79.84 ± 8.92 | 92.74 ± 2.18 | 96.37 ± 1.86 | 98.31 ± 2.52
Parking Lot 1 | 91.28 ± 4.23 | 83.37 ± 6.17 | 88.97 ± 4.39 | 92.31 ± 3.08 | 84.53 ± 12.86 | 84.66 ± 9.62 | 91.19 ± 3.68 | 96.55 ± 1.51 | 95.15 ± 2.85
Parking Lot 2 | 92.23 ± 4.09 | 89.11 ± 4.64 | 95.09 ± 3.64 | 93.99 ± 3.92 | 89.82 ± 11.24 | 94.72 ± 2.82 | 97.67 ± 2.04 | 97.60 ± 1.38 | 97.02 ± 2.23
Tennis Court | 100.00 ± 0.00 | 97.98 ± 2.63 | 99.66 ± 0.56 | 100.00 ± 0.00 | 100.00 ± 0.00 | 99.87 ± 0.30 | 100 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00
Running Track | 99.73 ± 0.55 | 96.08 ± 3.96 | 98.70 ± 1.78 | 100.00 ± 0.00 | 99.86 ± 0.14 | 99.78 ± 0.32 | 100 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00
OA (%) | 93.95 ± 0.79 | 90.31 ± 1.73 | 92.34 ± 0.83 | 94.02 ± 1.50 | 91.39 ± 3.23 | 89.67 ± 2.71 | 94.46 ± 0.83 | 95.64 ± 0.72 | 96.63 ± 0.47
AA (%) | 94.73 ± 0.66 | 91.80 ± 1.46 | 93.45 ± 0.96 | 94.83 ± 1.49 | 92.55 ± 3.14 | 91.35 ± 2.20 | 95.34 ± 0.78 | 96.40 ± 0.61 | 97.11 ± 0.46
Kappa (%) | 93.45 ± 0.85 | 89.52 ± 1.87 | 91.61 ± 1.02 | 93.59 ± 1.64 | 90.62 ± 3.56 | 88.83 ± 2.92 | 94.21 ± 1.93 | 95.28 ± 0.78 | 96.36 ± 0.51
Params (K) | 182.35 | 4.85 | 2383.68 | 71.67 | 148.42 | 88.78 | 401.94 | 340.07 | 105.21
FLOPs (M) | 11.71 | 1.37 | 29.46 | 92,600.4 | 22.8 | 5.64 | 27,681.67 | 31.41 | 5.34
Table 7. Quantitative results (ACC% ± STD%) on the Pavia University dataset. Best in bold.
Class | SPRN (CNN) | CLOLN (CNN) | FDGC (GCN) | WFCG (GCN) | SSFTT (Transformer) | GSC-ViT (Transformer) | MambaHSI (Mamba) | IGroupSS-Mamba (Mamba) | SGFNet (Ours)
Asphalt | 92.82 ± 3.00 | 98.27 ± 1.00 | 88.35 ± 6.60 | 98.68 ± 1.05 | 88.01 ± 7.66 | 97.10 ± 1.64 | 94.35 ± 1.73 | 96.98 ± 1.27 | 97.21 ± 2.88
Meadows | 85.90 ± 6.04 | 98.98 ± 0.76 | 94.52 ± 2.92 | 96.28 ± 2.02 | 96.34 ± 3.12 | 97.68 ± 1.32 | 96.43 ± 1.92 | 93.11 ± 1.60 | 98.56 ± 1.14
Gravel | 91.11 ± 6.57 | 87.83 ± 7.54 | 88.81 ± 2.95 | 99.70 ± 0.63 | 89.44 ± 9.31 | 86.46 ± 7.42 | 93.39 ± 5.35 | 95.00 ± 1.77 | 97.93 ± 1.79
Trees | 94.94 ± 1.68 | 97.80 ± 1.46 | 85.60 ± 4.60 | 94.85 ± 3.32 | 92.79 ± 4.86 | 91.83 ± 9.94 | 89.75 ± 3.60 | 96.48 ± 0.48 | 98.73 ± 0.89
Metal sheets | 99.76 ± 0.26 | 99.68 ± 0.59 | 97.28 ± 3.16 | 100.00 ± 0.00 | 99.79 ± 0.22 | 99.45 ± 0.53 | 99.99 ± 0.02 | 99.96 ± 0.06 | 99.93 ± 0.13
Bare soil | 93.79 ± 4.59 | 85.27 ± 7.50 | 97.85 ± 2.17 | 99.01 ± 0.83 | 97.14 ± 3.34 | 79.65 ± 9.24 | 98.60 ± 0.93 | 99.27 ± 1.28 | 99.00 ± 0.86
Bitumen | 97.65 ± 3.46 | 85.92 ± 7.00 | 95.54 ± 10.96 | 99.82 ± 0.27 | 98.81 ± 2.30 | 86.63 ± 10.10 | 96.50 ± 3.13 | 99.85 ± 0.19 | 99.96 ± 0.05
Bricks | 91.45 ± 2.81 | 90.93 ± 4.12 | 91.42 ± 5.07 | 98.59 ± 1.11 | 89.11 ± 5.48 | 89.50 ± 4.70 | 94.40 ± 2.93 | 92.93 ± 4.89 | 98.89 ± 0.61
Shadows | 99.33 ± 0.61 | 97.12 ± 2.03 | 88.33 ± 5.31 | 99.01 ± 1.57 | 98.24 ± 0.77 | 98.71 ± 1.10 | 99.36 ± 0.62 | 98.65 ± 0.61 | 99.05 ± 0.54
OA (%) | 90.36 ± 3.02 | 94.97 ± 1.43 | 92.76 ± 2.26 | 97.52 ± 1.03 | 94.11 ± 1.59 | 92.65 ± 2.30 | 95.74 ± 0.90 | 95.29 ± 0.81 | 98.51 ± 0.80
AA (%) | 94.08 ± 1.31 | 93.53 ± 1.59 | 91.97 ± 3.00 | 98.44 ± 0.54 | 94.22 ± 2.00 | 91.89 ± 1.73 | 95.86 ± 1.11 | 96.92 ± 0.40 | 98.81 ± 0.48
Kappa (%) | 87.53 ± 3.81 | 93.38 ± 1.84 | 90.49 ± 2.93 | 96.74 ± 1.34 | 92.14 ± 2.33 | 90.38 ± 2.95 | 95.00 ± 2.24 | 93.84 ± 1.04 | 98.03 ± 1.05
Params (K) | 179.02 | 4.1 | 1987.61 | 65.95 | 148.42 | 77.9 | 412.24 | 139.55 | 48.47
FLOPs (M) | 11.55 | 1.15 | 28.34 | 26,502.92 | 22.8 | 4.96 | 25,746.48 | 10.34 | 5.74
Table 8. Quantitative results (ACC% ± STD%) on the WHU-Hi-LongKou dataset. Best in bold.
Class | SPRN (CNN) | CLOLN (CNN) | FDGC (GCN) | WFCG (GCN) | SSFTT (Transformer) | GSC-ViT (Transformer) | MambaHSI (Mamba) | IGroupSS-Mamba (Mamba) | SGFNet (Ours)
Corn | 98.90 ± 1.69 | 96.38 ± 4.65 | 97.14 ± 2.11 | 99.28 ± 0.53 | 99.21 ± 0.57 | 97.2 ± 2.85 | 99.32 ± 0.54 | 99.21 ± 0.40 | 99.31 ± 0.50
Cotton | 97.96 ± 1.86 | 93.69 ± 13.25 | 95.70 ± 2.59 | 97.26 ± 0.91 | 98.38 ± 1.37 | 95.73 ± 2.67 | 97.65 ± 2.2 | 98.21 ± 0.77 | 98.78 ± 0.90
Sesame | 99.28 ± 0.49 | 70.09 ± 31.51 | 98.74 ± 0.90 | 99.08 ± 0.51 | 99.39 ± 0.83 | 91.94 ± 14.93 | 99.28 ± 1.13 | 99.41 ± 0.41 | 99.41 ± 0.53
Broad-leaf soybean | 95.75 ± 1.41 | 99.57 ± 0.28 | 92.34 ± 2.53 | 93.74 ± 1.41 | 95.85 ± 2.22 | 87.64 ± 10.14 | 93.52 ± 2.01 | 95.84 ± 1.35 | 96.58 ± 1.13
Narrow-leaf soybean | 95.39 ± 4.56 | 68.34 ± 22.10 | 98.57 ± 1.32 | 99.36 ± 0.84 | 98.57 ± 1.32 | 97.74 ± 1.67 | 98.07 ± 2.3 | 98.81 ± 0.47 | 99.67 ± 0.35
Rice | 99.04 ± 0.82 | 97.66 ± 1.53 | 94.84 ± 4.75 | 97.36 ± 2.78 | 97.82 ± 1.89 | 98.00 ± 3.34 | 98.75 ± 0.7 | 99.38 ± 0.27 | 99.25 ± 0.36
Water | 99.10 ± 0.79 | 99.47 ± 0.46 | 96.07 ± 1.66 | 98.79 ± 0.69 | 98.18 ± 1.01 | 99.52 ± 0.22 | 99.73 ± 0.17 | 99.02 ± 0.68 | 99.84 ± 0.15
Roads and houses | 94.46 ± 2.56 | 77.41 ± 15.33 | 86.86 ± 4.32 | 97.18 ± 2.22 | 92.61 ± 5.81 | 93.70 ± 2.84 | 91.61 ± 3.34 | 96.27 ± 0.61 | 97.66 ± 1.39
Mixed weed | 92.69 ± 3.06 | 73.30 ± 18.70 | 89.57 ± 7.44 | 96.55 ± 3.02 | 91.52 ± 8.22 | 93.25 ± 3.78 | 94.32 ± 4.61 | 94.12 ± 4.82 | 96.93 ± 3.84
OA (%) | 97.59 ± 0.49 | 93.78 ± 1.94 | 94.62 ± 0.98 | 97.07 ± 0.64 | 97.28 ± 0.59 | 94.71 ± 3.06 | 97.14 ± 0.61 | 97.84 ± 0.50 | 98.51 ± 0.37
AA (%) | 96.95 ± 0.57 | 86.22 ± 3.66 | 94.43 ± 1.43 | 97.62 ± 0.45 | 96.61 ± 0.72 | 94.97 ± 1.55 | 96.92 ± 0.53 | 97.81 ± 0.52 | 98.60 ± 0.42
Kappa (%) | 96.84 ± 0.63 | 91.96 ± 2.41 | 93.02 ± 1.26 | 96.18 ± 0.82 | 96.45 ± 1.07 | 93.19 ± 3.8 | 96.25 ± 11.25 | 97.18 ± 0.64 | 98.04 ± 0.48
Params (K) | 184.89 | 6.77 | 1987.61 | 87.66 | 148.42 | 173.06 | 433.62 | 139.55 | 112.11
FLOPs (M) | 11.96 | 2.06 | 28.34 | 37,681.55 | 22.8 | 10.25 | 32,012.85 | 10.34 | 9.54
Table 9. Comparison of runtime on the Pavia University dataset. T_tr (s) indicates the training time, while T_te (s) indicates the time required to test the entire HSI.
Metric | SPRN | CLOLN | FDGC | WFCG | SSFTT | GSC-ViT | MambaHSI | IGroupSS-Mamba | Ours
T_tr (s) | 9.74 | 9.81 | 5.65 | 328.11 | 1.26 | 13.66 | 384.12 | 52.41 | 9.51
T_te (s) | 7.06 | 1.82 | 7.49 | 1.71 | 3.25 | 4.89 | 0.04 | 27.27 | 0.48
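The runtimes in Table 9 are hardware-dependent; comparable T_tr and T_te figures can be obtained by wrapping the training loop and the full-image inference pass with a wall-clock timer. The sketch below assumes hypothetical train_model and predict_full_image helpers and is not the paper's benchmarking code:

```python
import time
import torch

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds); synchronizes CUDA so GPU work is fully counted."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return result, time.perf_counter() - start

# Hypothetical usage:
#   _, t_tr = timed(train_model, model, train_loader, epochs=100)   # training time
#   _, t_te = timed(predict_full_image, model, hsi_cube)            # test time on the whole HSI
```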
Table 10. Ablation experiment results. Bold indicates the best results; the symbols in the first row denote module names; ✓ and × indicate whether the module is included.
SGGC / MLP / SAFM / Sum / SSAF | OA (%) | AA (%) | Kappa (%)
×××× | 97.01 ± 1.48 | 97.26 ± 1.02 | 96.05 ± 1.93
×× | 97.59 ± 0.74 | 98.08 ± 0.54 | 96.82 ± 0.97
×× | 97.74 ± 0.73 | 98.19 ± 0.75 | 97.02 ± 0.95
×× | 98.15 ± 0.53 | 98.37 ± 0.73 | 97.55 ± 0.70
×× | 98.51 ± 0.80 | 98.81 ± 0.48 | 98.03 ± 1.05
Table 11. Model performance under different numbers of encoder blocks.
Number | OA (%) | AA (%) | Kappa (%) | Params (K) | FLOPs (M)
1 | 97.28 ± 0.88 | 97.77 ± 0.78 | 96.41 ± 1.16 | 20.73 | 3.64
2 | 98.23 ± 0.66 | 98.44 ± 0.76 | 97.65 ± 0.88 | 34.6 | 4.52
3 | 98.51 ± 0.80 | 98.81 ± 0.48 | 98.03 ± 1.05 | 48.47 | 5.74
4 | 98.36 ± 0.53 | 98.70 ± 0.32 | 97.82 ± 0.70 | 62.34 | 6.64
5 | 97.94 ± 1.06 | 98.41 ± 0.53 | 97.28 ± 1.39 | 76.22 | 8.06
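The parameter counts in Table 11 grow roughly linearly with the number of encoder blocks (about 13.9 K per additional block). Counting parameters is straightforward for any PyTorch module; FLOPs are estimated below with the third-party thop profiler as one possible option, which may differ from the tool used in the paper:

```python
import torch
from thop import profile  # third-party FLOPs/MACs profiler; one option among several

def count_params_k(model: torch.nn.Module) -> float:
    """Trainable parameters, reported in thousands (K)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e3

def count_flops_m(model: torch.nn.Module, dummy_input: torch.Tensor) -> float:
    """Approximate multiply-accumulate count for one forward pass, in millions (M)."""
    macs, _ = profile(model, inputs=(dummy_input,), verbose=False)
    return macs / 1e6
```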