Article

Double-Attention Context Interactive Network for Hyperspectral Image Classification

1 School of Communication and Electronic Engineering, Shandong Normal University, Jinan 250358, China
2 Shandong Provincial Engineering and Technical Center of Light Manipulations & Shandong Provincial Key Laboratory of Optics and Photonic Device, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
3 Shandong Key Laboratory of Medical Physics and Image Processing, School of Communication and Electronic Engineering, Shandong Normal University, Jinan 250358, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2026, 18(7), 1059; https://doi.org/10.3390/rs18071059
Submission received: 30 January 2026 / Revised: 27 March 2026 / Accepted: 30 March 2026 / Published: 2 April 2026

Highlights

What are the main findings?
  • We propose a Double-Attention Context Interactive Network (DACINet) that strengthens long-range contextual interaction and 3D spectral–spatial feature learning for hyperspectral image classification, via a Context Interaction Fusion Module (CIFM) and Channel–Spatial Double-Attention (CSDA).
  • Extensive experiments on Indian Pines, Pavia University, and Salinas datasets demonstrate that the DACINet achieves superior classification accuracy and robustness compared to state-of-the-art methods.
What are the implications of the main findings?
  • By explicitly modeling long-range context interaction across distant spectral bands and introducing Channel–Spatial Double-Attention, the framework alleviates the limitations of local receptive fields in conventional CNNs, yielding more discriminative and robust hyperspectral representations under limited training samples and class imbalance.
  • By further coupling context-interactive feature fusion with a 2D–3D hybrid convolutional design, the proposed CNN-based solution enhances spectral–spatial integration and boundary/detail sensitivity, improving overall classification reliability in practical hyperspectral image classification settings.

Abstract

Convolution remains the dominant approach for hyperspectral image classification, since it jointly exploits spatial and spectral characteristics. However, convolution relies on local perceptual computation and overlooks the discriminative value of contextual associations for classification. In this paper, we propose a Double-Attention Context Interactive Network (DACINet) for hyperspectral image classification. Specifically, a Context Interaction Fusion Module (CIFM) is designed to enhance long-range contextual dependencies. By stacking multiple 3D convolutional layers, the module progressively enlarges its receptive field, while cross-layer residual connections facilitate the integration of features from different contextual scales, thereby strengthening the model’s ability to capture complex relationships within the hyperspectral data. Then, a 3D Channel–Spatial Double-Attention (CSDA) mechanism is proposed to enhance two-dimensional spatial features and one-dimensional spectral features in parallel and to fuse the enhanced features. Furthermore, we construct a hybrid convolutional layer that combines 2D and 3D convolutions to further enhance spectral bands on the basis of three-dimensional feature understanding. Extensive experiments on the widely used IP, UP, SA, and HU datasets show that the proposed DACINet achieves superior classification accuracy, reaching Overall Accuracies of 96.78%, 97.77%, 99.53%, and 86.67%, respectively, outperforming other state-of-the-art models.

1. Introduction

Hyperspectral images (HSIs) are common 3D remote sensing data, which contain both spectral information and spatial information of surface objects. Hyperspectral image classification (HSIC) aims to assign a unique semantic label to each HSI pixel based on the given land cover category [1,2]. It is widely used in the fields of resource investigations, environmental monitoring and agricultural production [3]. In the early days, machine learning methods such as SVMs were commonly adopted to extract effective discriminant features from HSIs, and different classifiers were then used to classify each pixel [4].
Benefiting from advances in deep learning [5,6,7], hyperspectral image processing has achieved remarkable progress. CNN-based methods were the first to be introduced into hyperspectral image classification. Hu et al. [8] proposed a five-layer 1D-CNN, which better extracts the spectral features of hyperspectral images. Fang et al. [9] employed discriminative band selection for pipeline hyperspectral images to better extract spatial features using 2D convolution. Subsequently, many new frameworks achieved remarkable results in hyperspectral classification. He et al. [10] employed a hybrid Mamba-Transformer framework to explore the multiscale properties of hyperspectral data effectively. Liang et al. [11] achieved efficient global dependency modeling using a double-branch Mamba-like linear attention mechanism. Zhang et al. [12] proposed the Center-Scan Mamba network to model spatial–spectral long-range dependency with linear complexity. Although new architectures such as Transformers and Mamba demonstrate great potential in hyperspectral image classification, the CNN and its variants remain the most widely used and mature solution due to their strong feature extraction capability and low computational complexity.
Recently, CNN variants have once again become the research focus in hyperspectral image classification. Ahmad et al. [13] proposed a 3D CNN to leverage spectral–spatial feature maps to enhance HSIC performance. Ghaderizadeh et al. [14] proposed a hybrid 3D–2D CNN, which employs a 3D fast learning block followed by a 2D CNN to extract spectral–spatial features. Alkhatib et al. [15] classified HSIs using a multiscale 3D CNN and three-branch feature fusion. Gündüz et al. [16] introduced a dual attention mechanism into both spectral and spatial modules to enhance the model’s discriminative ability. These variants achieve excellent classification results. However, 1D networks focus on spectral information, 2D networks focus on spatial information, and 3D networks are limited to local perception and ignore contextual correlations. This separation between spectral and spatial processing leads to insufficient global perception, making it difficult for the model to fully exploit the inherent unity of spectral and spatial information in HSIs. Recent efforts such as the Online Spectral Information Compensation Network (OSICN) [17] address this by extracting multiscale spatial features through a multi-branch network and progressively compensating spectral information. However, the OSICN primarily focuses on spatial scale diversity rather than explicitly modeling contextual interactions across spectral bands.
To alleviate the above problems, we propose a novel Double-Attention Context Interactive Network (DACINet) for modeling the contextual interaction of spectral and spatial features simultaneously. The DACINet mainly consists of a Context Interaction Fusion Module (CIFM), Channel–Spatial Double-Attention (CSDA), and a 2D–3D hybrid convolutional layer. CIFM can capture long-range correlations between spectral bands by incorporating cross-layer residual connections into the 3D convolutional network. Then, the CSDA enhances two-dimensional spatial features and one-dimensional spectral features, respectively, and fuses the enhanced features. In addition, a hybrid convolution layer that combines 2D and 3D convolution is introduced to further enhance spectral bands based on 3D feature extraction. To validate the effectiveness and robustness of our proposed model, we conduct extensive experiments on four challenging benchmark datasets: Indian Pines (IP), Pavia University (UP), Salinas (SA), and the 2013 University of Houston (HU) dataset.
The main contributions of this paper are as follows:
  • A novel DACINet is proposed to improve HSIC performance by enhancing contextual interaction and leveraging 3D spectral–spatial features.
  • A CIFM is proposed to capture contextual interaction features, while CSDA is designed to suppress irrelevant information and to enhance spectral bands and spatial information.
  • A hybrid convolution layer combining 2D and 3D convolutions is proposed to further strengthen the representation of spectral–spatial features.

2. Related Work

In this section, we review two topics most relevant to our work: deep learning-based hyperspectral image classification and attention-based hyperspectral image classification.

2.1. Deep Learning-Based Hyperspectral Image Classification

Convolutional Neural Networks (CNNs) [18], especially 3D CNNs [19,20], have become the main approach for hyperspectral image classification, owing to their robust feature extraction capabilities. Ghaderizadeh et al. [14] extracted spectral–spatial features using 3D convolutions and further optimized them through 2D convolutions. To address the limitation of fixed receptive fields in capturing long-range contextual dependencies, Roy et al. [21] introduced an adaptive receptive field 3D residual module, dynamically adjusting convolution kernels to enhance feature representation. To overcome the Euclidean space constraints of CNNs and enhance the modeling of non-Euclidean geometric relationships, Graph Neural Networks (GNNs) [22] have been introduced into hyperspectral classification. SSGRAM [23] proposed a 3D spectral–spatial feature network combined with a Graph Attention Feature Processor. NESSGGCN [24] constructed a Gated GCN-CNN Non-Euclidean Spectral–Spatial Feature Mining Network to simultaneously extract features from non-Euclidean and Euclidean spaces. Recently, various novel deep learning architectures have been proposed for HSI classification. Yang et al. [25] proposed an Enhanced Multiscale Feature Fusion Network (EMFFN) that extracts multiscale features through a three-stage parallel multi-path architecture. For cross-scene classification, Liu et al. [26] proposed a Dual Classification Head Self-Training Network (DHSNet) that alleviates domain discrepancies through dual-head consistency learning. Wang et al. presented HyperSIGMA [27], the first billion-parameter foundation model for HSIs, which unifies multiple interpretation tasks via a sparse sampling attention mechanism. Unsupervised clustering is a fundamental task for HSI analysis when labeled samples are unavailable. Zhang et al. [28] proposed Elastic Graph Fusion Subspace Clustering (EGFSC) for large-scale HSI clustering via superpixel-level learning and dual-graph fusion. Huang et al. [29] introduced a structural prior-guided subspace clustering method incorporating local/non-local spatial priors and cluster structure priors. Jiang et al. [30] proposed Structured Anchor Projected Clustering with linear time complexity through anchor generation and dual-graph learning. These unsupervised methods provide powerful tools for mining the intrinsic structures of HSIs. The structural information captured by these methods, such as superpixel-level organization and graph-based relationships, offers valuable insights that can inform the design of feature extraction modules in supervised classification frameworks like our DACINet.

2.2. Attention-Based Hyperspectral Image Classification

Attention mechanisms [31] were initially proposed in the field of natural language processing (NLP). Following the success of the Vision Transformer (ViT) in image classification, Chen et al. [32] pioneered its application to HSI classification, demonstrating its potential for capturing global context. To better harness the complementary strengths of different architectures, hybrid attention designs have since emerged. Arshad et al. [33] introduced HAT to integrate the local representational capabilities of 3D and 2D CNNs within a Transformer framework. Zhao et al. [34] combined group-wise separable convolution with spectral calibration attention, effectively reorganizing and weighting features in the spectral dimension. Similarly, Jing et al. [35] introduced dynamic convolution and spatial–spectral attention mechanisms to dynamically extract and integrate multi-level semantic features. Although the aforementioned methods have made significant progress in the design and application of attention mechanisms, most approaches treat spectral and spatial attention modules as separate units, whether sequential or parallel, thus failing to establish a unified dual-path interactive framework. For instance, the DBDA network [36] applies channel-wise and spatial-wise attention in two independent branches. Although effective, this design inherently decouples spectral and spatial attention, limiting their interactive fusion. In contrast, the CSDA module proposed in this paper operates within a unified 3D feature space, enabling parallel enhancement and interactive fusion of spectral and spatial features. This interactive design distinguishes CSDA from existing decoupled attention mechanisms and forms the core innovation of the proposed DACINet.

3. Proposed Method

In this section, we present the proposed Double-Attention Context Interactive Network (DACINet) in detail.
The diagram of the DACINet is illustrated in Figure 1. A hyperspectral image captures tens to hundreds of continuous spectral bands of a target region simultaneously, with each band containing a large amount of pixel information. In the DACINet, we first apply PCA to project the high-dimensional data into a low-dimensional space, which reduces dimensionality and eliminates redundant information. The reduced-dimensionality features are then input into the CIFM to model contextual interactions between spectral and spatial features simultaneously. Next, CSDA enhances spatial and spectral context interaction features, respectively, to retain more discriminative information. A hybrid convolutional layer combining 2D and 3D convolutions further enhances the discriminative power of spectral features. Finally, the resulting feature maps are classified by a fully connected layer.

3.1. PCA

To address the redundancy of hyperspectral image data, we employ principal component analysis (PCA) to reduce the dimensionality of the hyperspectral data. The mapping process of PCA is as follows:

$$\begin{pmatrix} y_1^i \\ y_2^i \\ \vdots \\ y_n^i \end{pmatrix} = \begin{pmatrix} u_1^T \cdot (x_1^i, x_2^i, \ldots, x_n^i) \\ u_2^T \cdot (x_1^i, x_2^i, \ldots, x_n^i) \\ \vdots \\ u_n^T \cdot (x_1^i, x_2^i, \ldots, x_n^i) \end{pmatrix}, \quad (1)$$

where $X^i = (x_1^i, x_2^i, \ldots, x_n^i)^T \in \mathbb{R}^{H \times W \times B}$ is the hyperspectral image and $u_i^T$ is the corresponding eigenvector. The feature after dimensionality reduction is represented as $Y^i = (y_1^i, y_2^i, \ldots, y_n^i)^T \in \mathbb{R}^{H \times W \times B}$. PCA maps high-dimensional data into a low-dimensional space through a linear projection, maximizing the retention of effective discriminative information in the projected dimensions.
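The projection above can be illustrated with a minimal NumPy sketch on synthetic data; `pca_reduce` is a hypothetical helper for this example, not the paper's code:

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Project an (H, W, B) hyperspectral cube onto its top principal
    components along the spectral axis, as in Eq. (1)."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)   # pixels x bands
    X = X - X.mean(axis=0)                       # center each band
    # Eigenvectors of the band covariance matrix give the projections u_i
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # sort by descending variance
    U = eigvecs[:, order[:n_components]]         # B x n_components
    Y = X @ U                                    # y_i = u_i^T x
    return Y.reshape(H, W, n_components)

cube = np.random.default_rng(0).normal(size=(8, 8, 20))
reduced = pca_reduce(cube, 5)
print(reduced.shape)  # (8, 8, 5)
```

The first retained component carries the largest projected variance, so the leading bands of the reduced cube are the most informative.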

3.2. The Context Interaction Fusion Module (CIFM)

In hyperspectral images, homogeneous regions with different semantic and geometric properties but extremely similar appearance pose a significant challenge to the classification process. The long-distance contextual relationship between the image bands can provide effective discriminative constraints for such regions. To capture this contextual information, we design a Context Interaction Fusion Module (CIFM) consisting of stacked 3D convolutional residual blocks with cross-layer connections, as shown in Figure 2. While each individual block follows the standard residual 3D CNN design, the CIFM as a whole serves as a dedicated context encoder strategically positioned before the CSDA module to provide richly contextualized features for subsequent dual attention enhancement.
In the CIFM, long-distance correlations between hyperspectral image bands are modeled through cross-layer residual connections. The input feature $Y \in \mathbb{R}^{H \times W \times B}$ is processed by 3D convolution, and the resulting feature is:

$$Y_j^l = \mathrm{3DConv}(Y) = W_{l+1}^{7 \times 3 \times 3} * Y + b_{l+1}, \quad (2)$$

$$Y' = R\Big( \sum_{j=1}^{x_l} F_{bn}(Y_j^l) W_i^{l+1} + b_i^{l+1} \Big), \quad (3)$$

$$F_{bn}(Y_j^l) = \frac{Y_j^l - E(Y_j^l)}{\sqrt{\mathrm{Var}(Y_j^l) + \epsilon}} \times \gamma + \beta, \quad (4)$$

where $Y_j^l$ denotes the output features of the convolution layer, $W_{l+1}^{7 \times 3 \times 3}$ is the learnable weight with kernel size $(7 \times 3 \times 3)$, and $b_{l+1}$ is the bias. $R(\cdot)$ represents the non-linear ReLU activation function. $F_{bn}(Y_j^l)$ denotes batch normalization, $E(Y_j^l)$ and $\mathrm{Var}(Y_j^l)$ are the batch mean and variance of the input features, and $\epsilon$, $\gamma$, and $\beta$ represent the stability constant, scaling factor, and offset, respectively. The CIFM uses a double-layer 3D convolution: the features are processed again according to Equations (2)–(4), the output features after the two weighted layers are labeled $Y_j^{l+1}$, and two additional rounds of weighting are then performed:
$$Y_j^l = W_{l+1}^{1 \times 1 \times 1} * Y' + b_{l+1}, \quad (5)$$

$$Y_j^{l+1} = \sum_{j=1}^{x_l} \big( F_{bn}(Y_j^l) W_i^{l+1} + b_i^{l+1} \big), \quad (6)$$

where $Y_j^{l+1}$ denotes the output features after the two consecutive weighting layers. The features are then joined through a cross-layer residual connection:

$$Y_1 = Y + Y_j^{l+1}, \quad (7)$$

where $Y_1$ is the output of the first 3D residual block. The final output feature $Y_f$ of the CIFM is obtained after $f$ repetitions of the same residual block. These cross-layer connections capture long-distance information associations, enhancing the context modeling capability of the 3D convolutions.
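To make the block structure concrete, the following NumPy sketch implements one residual block in the spirit of Eqs. (2)–(7): a 7×3×3 3D convolution, batch normalization, ReLU, a second convolution, and the cross-layer residual join. The weights are random and single-channel, and `conv3d_same` and `cifm_block` are illustrative helpers (naive loops for clarity), not the paper's implementation:

```python
import numpy as np

def conv3d_same(x, w, b=0.0):
    """Naive single-channel 3D convolution with zero padding ('same')."""
    kd, kh, kw = w.shape
    pd, ph, pw = kd // 2, kh // 2, kw // 2
    xp = np.pad(x, ((pd, pd), (ph, ph), (pw, pw)))
    out = np.empty_like(x, dtype=np.float64)
    D, H, W = x.shape
    for d in range(D):
        for i in range(H):
            for j in range(W):
                out[d, i, j] = np.sum(xp[d:d + kd, i:i + kh, j:j + kw] * w) + b
    return out

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Eq. (4): normalize by batch statistics, then scale and shift
    return (x - x.mean()) / np.sqrt(x.var() + eps) * gamma + beta

def cifm_block(y, w1, w2):
    """One 3D residual block: conv -> BN -> ReLU -> conv -> BN,
    plus the cross-layer residual join of Eq. (7)."""
    h = np.maximum(batch_norm(conv3d_same(y, w1)), 0.0)  # ReLU
    h = batch_norm(conv3d_same(h, w2))
    return y + h                                         # residual join

rng = np.random.default_rng(1)
y = rng.normal(size=(20, 9, 9))        # bands x height x width
w1 = rng.normal(size=(7, 3, 3)) * 0.1  # 7x3x3 kernel size, as in the paper
w2 = rng.normal(size=(7, 3, 3)) * 0.1
y1 = cifm_block(y, w1, w2)
print(y1.shape)  # (20, 9, 9)
```

Stacking several such blocks enlarges the receptive field while the residual joins keep earlier contextual scales available to later layers.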

3.3. The Channel–Spatial Double-Attention (CSDA)

Hyperspectral images contain both spatial information and spectral band information. Effective integration of these two modalities can significantly improve classification performance. Considering the 3D spectral–spatial characteristics of hyperspectral data, we propose a Channel–Spatial Double-Attention (CSDA) mechanism based on pooling and feature fusion, as shown in Figure 3.
CSDA consists of two parallel branches: a channel attention mechanism and a spatial attention mechanism. Different from the original CBAM, which processes these two attentions sequentially, our CSDA processes them independently and in parallel.
Given the input feature map Y f from the CIFM, the channel and spatial attention branches are computed as follows:
$$f_1^{max} = F_{max}(Y_f), \qquad f_1^{avg} = F_{avg}(Y_f), \quad (8)$$

$$f_2^{max} = \mathrm{Matr}(\mathrm{Max}(Y_f)), \qquad f_2^{avg} = \mathrm{Matr}(\mathrm{Mean}(Y_f)), \quad (9)$$

where $f_1^{max}$, $f_1^{avg}$, $f_2^{max}$, and $f_2^{avg}$ denote the pooled feature maps from each operation, $F_{max}(\cdot)$ and $F_{avg}(\cdot)$ are the maximum pooling and global average pooling operations, and $\mathrm{Matr}(\cdot)$ denotes a matrix transformation operation.

Then, $f_1^{max}$ and $f_1^{avg}$ are passed through an MLP for feature fusion:

$$Z_1^{max} = W_{1 \times 1}^{max} f_1^{max} + b_{l+1}, \qquad Z_1^{avg} = W_{1 \times 1}^{avg} f_1^{avg} + b_{l+1}, \qquad Z = Z_1^{max} + Z_1^{avg}. \quad (10)$$
The key to explicitly modeling spectral inter-band correlations lies within this shared MLP. Since its layers are fully connected, the MLP learns complex, non-linear interdependencies between all spectral bands. By considering the context provided by all bands simultaneously, it determines the relative importance of each band and produces an attention map Z that encodes these cross-band relationships.
For the spectral channel, the output features are obtained by element-wise multiplication of the attention map with the input:
$$F_{Cam} = Z \odot Y_f. \quad (11)$$
For the spatial channel, f 2 m a x and f 2 a v g are concatenated as:
$$U_1 = \mathrm{Cat}(f_2^{max}, f_2^{avg}), \quad (12)$$

which is then passed through a convolution layer to obtain the weight matrix:

$$U = W_{l+1}^{7 \times 7} * U_1 + b_{l+1}. \quad (13)$$

The output features are likewise obtained by element-wise multiplication of the weight matrix with the input:

$$F_{Sam} = U \odot Y_f. \quad (14)$$
The outputs of the two branches are added to obtain the final output of CSDA:
$$F = F_{Cam} + F_{Sam}. \quad (15)$$
By processing channel and spatial attention in parallel, CSDA ensures that both dimensions are enhanced independently before fusion, preserving the unique characteristics of spectral and spatial information. The dual-branch design effectively captures regional correlations in both domains, assigning higher weights to informative regions while suppressing irrelevant ones, thereby improving the discriminative power of the learned features for HSI classification.
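The parallel dual-branch computation of Eqs. (8)–(15) can be sketched as follows. This is a simplified NumPy illustration on a (C, H, W) tensor with random, untrained weights; a sigmoid gate on each attention map is an assumption borrowed from CBAM-style modules, and naive loops stand in for optimized convolutions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_same(x, w):
    """Naive 2D convolution; x: (Cin, H, W), w: (Cin, k, k) -> (H, W)."""
    cin, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[:, i:i + k, j:j + k] * w)
    return out

def csda(yf, mlp_w1, mlp_w2, conv_w):
    """Parallel Channel-Spatial Double-Attention on a (C, H, W) tensor."""
    # Channel branch: global max/avg pooling plus a shared MLP (Eqs. 8, 10)
    f1_max = yf.max(axis=(1, 2))
    f1_avg = yf.mean(axis=(1, 2))
    z = sigmoid(mlp_w2 @ np.maximum(mlp_w1 @ f1_max, 0.0)
                + mlp_w2 @ np.maximum(mlp_w1 @ f1_avg, 0.0))
    f_cam = z[:, None, None] * yf                     # Eq. (11)
    # Spatial branch: channel-wise max/mean, concat, 7x7 conv (Eqs. 9, 12, 13)
    u1 = np.stack([yf.max(axis=0), yf.mean(axis=0)])  # shape (2, H, W)
    u = sigmoid(conv2d_same(u1, conv_w))
    f_sam = u[None, :, :] * yf                        # Eq. (14)
    return f_cam + f_sam                              # Eq. (15)

rng = np.random.default_rng(2)
yf = rng.normal(size=(6, 9, 9))          # C x H x W feature map from the CIFM
mlp_w1 = rng.normal(size=(3, 6)) * 0.1   # bottleneck MLP: 6 -> 3 -> 6
mlp_w2 = rng.normal(size=(6, 3)) * 0.1
conv_w = rng.normal(size=(2, 7, 7)) * 0.1
out = csda(yf, mlp_w1, mlp_w2, conv_w)
print(out.shape)  # (6, 9, 9)
```

Note that the two branches never see each other's attention maps: each gates the original input $Y_f$ independently, and only the gated outputs are summed.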

3.4. The Hybrid Convolutional Layer

Multilayer 3D convolution is usually used to process hyperspectral images. In practice, the 1D spectrum and the 2D space are characterized independently. The use of 3D convolution increases the complexity of the model and prolongs the processing time. In the proposed DACINet, we also design a hybrid convolutional layer based on 2D and 3D convolutions. The layer consists of two 2D convolution layers and two 3D convolution layers. The size of the 3D convolutional layer kernel is ( 7 , 3 , 3 ) , and the size of the 2D convolutional kernel is ( 3 , 3 ) . The hybrid layer begins with 3D convolutions because they are effective in capturing joint spectral–spatial features directly from the hyperspectral data, building a solid foundation for later processing. Given the feature map F output by the CSDA module, it is first processed by two consecutive 3D convolutional layers:
$$F' = \mathrm{3DConv}(\mathrm{3DConv}(F)). \quad (16)$$
The output F′ is a 4D tensor, with dimensions corresponding to feature channels, spectral bands, height, and width. This 4D tensor is then reshaped into a 3D tensor by merging the channel and spectral dimensions, resulting in a feature map of the shape (channels × bands) × height × width. This reshaped tensor is then fed into two consecutive 2D convolutional layers:
$$F'' = \mathrm{2DConv}(\mathrm{2DConv}(F')). \quad (17)$$
Through this reshaping, each channel in the resulting feature map encodes a specific combination of feature type and spectral band, allowing the 2D convolutions to learn cross-band interactions by combining these channels. This lets the model refine spatial patterns while implicitly learning how different spectral bands interact, without the extra cost of additional 3D layers. After feature extraction, the output is passed through batch normalization and ReLU activation, then flattened and fed into a fully connected layer for classification. In this design, 3D convolutions extract features simultaneously in spectral and spatial directions, while 2D convolutions focus more on spatial feature refinement. When combined, they enable multi-level feature learning and improve the model’s generalization ability and classification accuracy.
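The reshaping step described above can be made concrete with a tiny NumPy example (illustrative shapes only):

```python
import numpy as np

# Feature map after the two 3D convolutions: (channels, bands, height, width)
f = np.arange(2 * 4 * 3 * 3, dtype=np.float32).reshape(2, 4, 3, 3)

# Merge the channel and spectral axes: each resulting 2D channel is one
# (feature type, band) combination, ready for the 2D convolutions
f2d = f.reshape(2 * 4, 3, 3)
print(f2d.shape)  # (8, 3, 3)

# Channel k of the reshaped tensor corresponds to feature c = k // 4
# and band b = k % 4 under row-major reshaping
assert np.array_equal(f2d[5], f[1, 1])
```

Because every 2D channel now encodes a specific (feature, band) pair, a subsequent 2D convolution that mixes channels implicitly mixes spectral bands as well.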

3.5. Classification

Finally, a fully connected layer followed by a softmax classifier is used for HSI classification. The class probability of each pixel is obtained through the softmax layer, and the cross-entropy loss $L$ is then calculated:

$$L = -\frac{1}{M} \sum_{m=1}^{M} \sum_{k=1}^{K} y_k^m \log(\hat{y}_k^m), \quad (18)$$

where $M$ is the number of samples in a mini-batch, $K$ is the total number of categories, and $y_k^m$ and $\hat{y}_k^m$ represent the actual and predicted labels of sample $m$ for class $k$, respectively. This loss tends to converge quickly on many kinds of problems, allowing the optimal solution to be found efficiently.
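As a small, self-contained illustration on synthetic logits (not the paper's code), the softmax and cross-entropy computation can be sketched in NumPy:

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    """Mean cross-entropy; probs: (M, K) softmax outputs, labels: (M,) class ids."""
    M = probs.shape[0]
    # With one-hot targets, the double sum over m and k reduces to the
    # negative log-probability assigned to the true class of each sample
    return float(-np.mean(np.log(probs[np.arange(M), labels] + 1e-12)))

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 3.0, 0.3]])
labels = np.array([0, 1])
loss = cross_entropy(softmax(logits), labels)
print(round(loss, 4))  # ~0.2186
```

Confident, correct predictions drive the per-sample terms toward zero, which is why the loss decreases quickly once the classifier separates the classes.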

4. Experiments

4.1. Datasets

In the experiments, we adopt a stratified random sampling strategy, randomly selecting 5% of the labeled samples per class for the IP dataset and 1% for the UP, SA, and Houston2013 datasets as training samples; the remaining samples are used for testing. Indian Pines (IP): The Indian Pines dataset is one of the earliest benchmark datasets for HSIC, acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) in 1992 over the Indian Pines test site in northwestern Indiana, USA. Its spatial dimension is 145 × 145 pixels, its spectral dimension comprises 200 spectral bands, and it contains 16 target categories. Table 1 presents the category names, number of categories of the Indian Pines dataset, as well as the corresponding color annotations for each category in the visualization of classification results.
Pavia University (UP): The Pavia University dataset was acquired in 2001 over the University of Pavia campus, Italy, using the Reflective Optics System Imaging Spectrometer (ROSIS). The image comprises 610 × 340 pixels, with 115 spectral bands covering the wavelength range of 0.43–0.86 µm. After the removal of 12 noisy bands, the remaining 103 bands are commonly used for analysis. Table 2 presents the category names, number of categories of the Pavia University dataset, as well as the corresponding color annotations for each category in the visualization of classification results.
Salinas (SA): The Salinas hyperspectral dataset was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the Salinas Valley, CA, USA. The image has dimensions of 512 × 217 pixels (totaling 111,104 pixels) and originally comprises 220 spectral bands covering a wavelength range of approximately 0.4–2.5 µm. The scene is annotated into 16 distinct land cover classes. Table 3 presents the category names, number of categories of the Salinas dataset, as well as the corresponding color annotations for each category in the visualization of classification results.
University of Houston 13 (HU): The HU dataset consists of 349 × 1905 pixels with 144 spectral channels ranging from 364 to 1046 nm and a spatial resolution of 2.5 m/pixel. In addition, the ground truth reference was subdivided into spatially disjoint subsets for training and testing, including 15 mutually exclusive urban land cover classes with 15,029 labeled pixels. Table 4 presents the category names, number of categories of the Houston 13 dataset, as well as the corresponding color annotations for each category in the visualization of classification results.

4.2. Evaluation Metrics

To quantitatively compare the classification performance of different methods and modules from various aspects, the following experiments adopt four commonly used evaluation metrics: Overall Accuracy (OA), Average Accuracy (AA), the Kappa coefficient (Kappa), and the Accuracy of Each Class (AEC).
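For reference, OA, AA, and the Kappa coefficient can all be derived from the confusion matrix, as in the following NumPy sketch on toy labels (`classification_metrics` is a hypothetical helper for illustration):

```python
import numpy as np

def classification_metrics(y_true, y_pred, num_classes):
    """Compute OA, AA, and Cohen's kappa from reference and predicted labels."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                    # confusion matrix
    n = cm.sum()
    oa = np.trace(cm) / n                                # overall accuracy
    per_class = np.diag(cm) / cm.sum(axis=1)             # accuracy of each class
    aa = per_class.mean()                                # average accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 0])
oa, aa, kappa = classification_metrics(y_true, y_pred, 3)
print(round(oa, 4), round(aa, 4), round(kappa, 4))  # 0.75 0.7778 0.6279
```

OA weights every test pixel equally, AA weights every class equally (so it is more sensitive to minority classes), and Kappa discounts the agreement expected by chance.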

4.3. Implementations Details

All experiments are conducted on an NVIDIA T400 GPU (NVIDIA Corporation, Santa Clara, CA, USA) under 64-bit Windows 11, with Python 3.8.16 and PyTorch 2.2.1. For model training, the adaptive moment estimation (Adam) optimizer is employed with an initial learning rate of 0.001 for 100 epochs. To better compare the different networks, the best results are highlighted in bold.

4.4. Comparison with State-of-the-Art Methods

The proposed model is compared with representative baselines spanning different methodological paradigms in HSI classification. These include: traditional machine learning methods RF [37] and SVM [38] to establish fundamental benchmarks; early deep learning models like Context [39] to trace the evolution from CNNs to advanced architectures; hybrid 2D/3D CNNs, such as HybridN [40] and CVSSN [41], as direct competitors sharing a similar hybrid convolutional design; attention-based networks RSSAN [42], SSTN [43], and SSAtt [44] to evaluate our CSDA module against prominent attention mechanisms; and graph convolutional networks like F-GCN [45] to assess performance across different paradigms for modeling non-Euclidean data relationships.
Table 5 shows the experimental results of different methods on the IP dataset. Among the 16 object categories, the proposed method achieves the best classification performance on nine, and its comprehensive performance is the best, with an OA of 96.78%, an AA of 90.60%, and a Kappa of 96.32%. Compared with the second-best HybridN model, our DACINet improves OA, AA, and Kappa by 1.54%, 0.04%, and 4.02%, respectively. It is noteworthy that, beyond mean accuracy, DACINet often exhibits a lower standard deviation across multiple runs than other methods (as shown in Table 5, Table 6, Table 7 and Table 8), indicating more stable and reliable classification performance, which is a significant advantage in practical applications. To quantify the impact of limited training samples on classification performance, we analyze the relationship between per-class sample size and accuracy. As shown in Table 1, classes 1 (Alfalfa), 7 (Grass/pasture-mowed), and 9 (Oats) have the fewest samples in the IP dataset, with only 46, 28, and 20 samples, respectively. Correspondingly, Table 5 reveals that these three classes consistently achieve the lowest accuracies across all compared methods. Even with our proposed DACINet, the accuracies on these minority classes are only 70.75%, 74.42%, and 73.33%, respectively. In contrast, classes with abundant training samples, such as class 2 (Corn-notill, 1428 samples) and class 11 (Soybean-mintill, 2455 samples), achieve accuracies exceeding 95% with the DACINet. This clear positive correlation between training sample size and classification accuracy demonstrates that limited sample availability is the primary factor behind the suboptimal performance on these minority classes.
Table 6 and Table 7 present the experimental results of different methods on the UP and SA datasets. For the UP dataset, Table 6 shows that our proposed method achieves the best classification performance in six of the nine land cover classes. Although the UP dataset contains relatively few land cover types, the proposed DACINet reaches over 90% accuracy in eight classes, and over 99% in the second and fifth classes. Kappa mainly measures the consistency between the final classification result and the actual observations, reflecting the stability of the model against random classification. Our method attains an OA of 97.77%, an AA of 96.72%, and a Kappa of 97.04%. Compared with the HybridN model, which ranked second in comprehensive performance, the proposed method improves OA by three percentage points, AA by 6.08%, and Kappa by 4.01%. Our proposed model thus achieves the best overall classification performance, average per-class performance, and stability on the entire dataset.
In Table 7, it can be seen that on the SA dataset the OA of our proposed DACINet model reaches 99.53%, an increase of 1.58% over the 97.95% OA of the second-best HybridN network, while the OA of the other methods generally fluctuates around 90%. The AA index evaluates the average per-class classification performance; according to the experimental results, every model achieves an AA above 90% on the SA dataset. The AA of the DACINet model reaches 99.58%, a significant improvement over SSTN and SSAtt, and 2.12% higher than the HybridN network. Kappa is also an important indicator of classification effectiveness; for the proposed method it reaches 99.48%, and among the 16 land cover categories our method achieves the best classification performance in 10. This demonstrates that the DACINet completes hyperspectral image classification more accurately and stably.
Table 8 presents the experimental results of the different methods on the HU dataset. Among the 15 land cover categories, the proposed DACINet achieves the best classification performance in nine, demonstrating its strong feature discrimination capability. Notably, on categories with complex textural characteristics, such as class 5 (Grass) and class 14 (Tennis Court), our method achieves accuracies of 97.55% and 96.32%, respectively, significantly outperforming the other methods. In terms of comprehensive evaluation metrics, our method achieves the best results across all three key indicators, with OA, AA, and Kappa reaching 86.67%, 89.20%, and 87.12%, respectively. Compared with the suboptimal A2S2K model, the DACINet improves OA, AA, and Kappa by 1.72%, 3.85%, and 4.03%, respectively. Even on categories with low sample discriminability, such as class 10 (Coastal) and class 13 (Parking Lot), our method maintains relatively stable classification performance, benefiting from the effective spectral–spatial feature extraction of the CSDA module. Overall, on this challenging dataset, the DACINet demonstrates excellent classification performance and good generalization, further verifying the effectiveness and robustness of the proposed framework.

4.5. Ablation Studies

Ablation of different modules. In this section, ablation experiments are conducted to verify the contributions of the different modules in the DACINet. The CIFM, the CSDA, and the hybrid convolution layer are added to the backbone network step by step. The results are shown in Table 9, where √ indicates that the module is used and × indicates that it is not. The classification results using only the CIFM are the worst, with an OA of 85.20% and a Kappa of 92.90 on the IP dataset and an OA of 90.40% and a Kappa of 87.14 on the UP dataset, because the CIFM focuses on fusion and lacks a mechanism to judge feature validity. When the CIFM is combined with the hybrid convolution layer, the classification accuracy and stability improve on all three datasets, which indicates that feature extraction after context interaction fusion is effective. The DACINet with all modules obtains the best OA and Kappa on the three datasets, specifically an OA of 99.53% and a Kappa of 99.48 on the SA dataset. This indicates that each module plays an effective role, and together they improve hyperspectral classification.
Effectiveness of CSDA. Table 10 compares the proposed CSDA with CBAM, where √ indicates that the module is used and × indicates that it is not. Compared with the baseline without any attention mechanism, CBAM does not effectively improve performance on the UP and SA datasets. On the IP dataset, however, the OA increases by 1.85% and Kappa increases by 2.11, a significant improvement; this is because the sample distribution of the IP dataset is imbalanced, and the attention mechanism effectively screens the discriminative information. The proposed CSDA improves performance on all three datasets and outperforms CBAM. This indicates that screening spectral and spatial features through two parallel branches effectively alleviates sample imbalance while also improving classification performance in common scenarios.
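To make the parallel design concrete, the following is a minimal PyTorch sketch of a dual-branch attention of this kind: a channel gate and a spatial gate are computed independently from the same 3D feature map and fused by addition. The class name and layer sizes are our own illustrations, not the paper's exact CSDA configuration.

```python
import torch
import torch.nn as nn

class ParallelDualAttention(nn.Module):
    """Hedged sketch: channel and spatial attention computed in parallel
    from one 3D feature map and fused additively (sizes illustrative)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        # channel branch: global average pool -> bottleneck MLP -> sigmoid gate
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # spatial branch: channel-mean map -> spatial conv -> sigmoid gate
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(1, 1, kernel_size=(1, 7, 7), padding=(0, 3, 3)),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, C, D, H, W)
        ch = x * self.channel_gate(x)                # channel-refined features
        sp = x * self.spatial_gate(x.mean(dim=1, keepdim=True))
        return ch + sp                               # additive fusion

x = torch.randn(2, 16, 8, 9, 9)
y = ParallelDualAttention(16)(x)
print(tuple(y.shape))  # (2, 16, 8, 9, 9)
```

Unlike the sequential channel-then-spatial pipeline of CBAM, neither branch here conditions the other, so each gate sees the unbiased input features.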
Robustness verification. To verify the robustness of the proposed model, we conduct experiments on the three datasets with different numbers of training samples. The results on the IP dataset are shown in Figure 4. Both OA and AA increase as the percentage of training samples increases. The deep learning models significantly outperform the machine learning models (RF, SVM) across all sample proportions. HybridN obtains lower OA and AA scores on the IP dataset in the few-shot setting, especially when fewer than 5% of the samples are used. Overall, the proposed DACINet achieves the best OA and AA scores across all sample proportions, which indicates that it is more robust than the other methods. To further verify the impact of sample size on classification, we calculated the confusion matrix for the nine categories of the UP dataset, as shown in Figure 5. The classification accuracy of categories with fewer samples (such as Shadows) is significantly lower than that of categories with more samples (such as Meadows). This confirms the contribution of sample size to classification accuracy; small-sample classification still needs to be strengthened.
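A per-class analysis of the kind shown in Figure 5 reduces to a plain confusion-matrix computation. The NumPy sketch below (function names are our own) shows how per-class accuracy is read off the matrix; classes with few samples yield noisier, often lower values.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Count matrix: rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_accuracy(cm):
    """Diagonal over row sums (per-class recall); guards empty classes."""
    return np.diag(cm) / np.maximum(cm.sum(axis=1), 1)

y_true = np.array([0, 0, 0, 1, 1, 2])
y_pred = np.array([0, 0, 1, 1, 1, 0])
cm = confusion_matrix(y_true, y_pred, 3)
print(per_class_accuracy(cm))  # per-class recall: ~[0.667, 1.0, 0.0]
```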
Analysis of Band Dimension. Dimensionality reduction affects the final classification performance of the model. We therefore conducted experiments on the three datasets, varying the number of retained band dimensions while keeping all other conditions unchanged. The experimental results are shown in Figure 6. The three datasets behave differently under different band dimensions. For the IP dataset, the three evaluation metrics (OA, AA, and Kappa) are highest when the reduced dimension is set to 36, and the performance is worst when it is set to 38. Because the sample distribution of the IP dataset is imbalanced, the OA and Kappa values differ little while the AA value fluctuates significantly. For the UP dataset, performance is best when the reduced band number is set to 13; as the number of bands increases, the performance gradually decreases, with the largest drop, and the worst classification, at 19. For the SA dataset, the three metrics first increase and then decrease as the number of bands increases, reaching the optimum at 17. Based on these results, the band parameters of the IP, UP, and SA datasets are set to 36, 13, and 17, respectively.
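These experiments rest on a standard PCA projection of the spectral axis. A minimal NumPy sketch, assuming an (H, W, B) cube layout (the function name is our own):

```python
import numpy as np

def pca_reduce_bands(cube, n_components):
    """Project an (H, W, B) hyperspectral cube onto its top principal
    spectral components, returning an (H, W, n_components) cube."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(np.float64)
    flat -= flat.mean(axis=0)                      # center each band
    cov = np.cov(flat, rowvar=False)               # (B, B) band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return (flat @ top).reshape(h, w, n_components)

cube = np.random.rand(17, 17, 200)                 # IP-like: 200 raw bands
reduced = pca_reduce_bands(cube, 36)               # the paper's IP setting
print(reduced.shape)  # (17, 17, 36)
```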
Selection of the Convolution Kernel. The size of the convolution kernel directly determines how much information from adjacent pixels is captured. To explore the spatial-dimension requirements of the different datasets, this paper also conducts comparative experiments with different convolution kernels. A small 3 × 3 kernel in the spatial dimension is sufficient to extract effective spatial features with relatively few parameters. In the spectral dimension, experiments are conducted on the UP, IP, SA, and HU datasets with kernel lengths ranging from 1 to 13 in steps of two. OA and AA are used as the evaluation metrics for the four datasets. In the experiments, when the length of the original spectral bands is less than the length of the vector to be mapped and filled, the triangular principle is adopted, that is, the filling is performed in two steps. The experimental results are shown in Figure 7. On all four datasets, the accuracy first increases with the spatial window and then levels off; when the spectral kernel size is less than seven, the performance gradually improves, and beyond seven the classification accuracy stabilizes. Considering the overall computational cost, the convolution kernel is set to 7 × 3 × 3 in all subsequent experiments.
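For reference, the effect of a 7 × 3 × 3 kernel on patch dimensions follows from the standard convolution output-size formula. A small sketch using IP-style settings (36 bands after reduction, a 17 × 17 window) and assuming spatial padding of 1 and no spectral padding:

```python
def conv_out_len(n, k, pad=0, stride=1):
    """Output length along one axis for a convolution."""
    return (n + 2 * pad - k) // stride + 1

# A 7x3x3 kernel: 7 along the spectral axis, 3x3 in space.
bands_out = conv_out_len(36, 7)          # spectral axis shrinks to 30
h_out = conv_out_len(17, 3, pad=1)       # spatial size preserved at 17
print(bands_out, h_out)  # 30 17
```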
Analysis of the Complexity. Table 11 compares the complexity of classification networks that use 3D convolution kernels on the IP and UP datasets, focusing on the number of parameters and floating-point operations (FLOPs). When only a 3D CNN is used, the number of parameters is small, but the FLOPs are high. The A2S2K network, whose backbone is a residual network, has the fewest parameters but a relatively large computational load. The proposed method ranks second in both parameters and FLOPs, achieving the best overall trade-off. The proposed hybrid convolution layer integrates spatial and spectral information: the 3D convolution captures features in the spectral and spatial dimensions simultaneously, and a 2D convolution is introduced to enhance spatial feature extraction. Compared with using only 3D convolution, this design significantly reduces the total number of parameters and FLOPs while maintaining the model's performance.
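Parameter and FLOP figures in such comparisons follow directly from the convolution cost formulas. The sketch below uses our own helper functions with illustrative layer sizes, not the entries of Table 11:

```python
def conv3d_cost(c_in, c_out, k, out_shape):
    """Parameters and multiply-accumulates (MACs) of one 3D convolution.
    k = (kd, kh, kw); out_shape = (D, H, W) of the output feature map."""
    weights = c_in * k[0] * k[1] * k[2]
    params = c_out * (weights + 1)                  # +1 for the bias
    macs = c_out * weights * out_shape[0] * out_shape[1] * out_shape[2]
    return params, macs

def conv2d_cost(c_in, c_out, k, out_shape):
    weights = c_in * k[0] * k[1]
    return c_out * (weights + 1), c_out * weights * out_shape[0] * out_shape[1]

# Illustrative numbers: one 7x3x3 3D layer producing a 30x17x17 map.
p, m = conv3d_cost(1, 8, (7, 3, 3), (30, 17, 17))
print(p, m)  # 512 4369680
```

The per-layer MAC count scales with the full spectral depth of the output map, which is why stacked 3D layers dominate the FLOP budget even when their parameter counts stay small.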
Analysis of Input Spatial Size. The spatial size of the input also affects the final classification performance. On the three datasets, with the optimal band parameters fixed and all other conditions held constant, we varied the input spatial size to find the most suitable setting for each dataset. The experimental results are shown in Figure 8. The three datasets behave differently under different input sizes. For the IP and UP datasets, the fluctuations of the three evaluation metrics (OA, AA, and Kappa) are consistent: as the input size increases, they first decrease, then increase, and then decrease again, and both datasets achieve the best classification at an input size of 17 × 17. For the SA dataset, the three metrics generally first increase and then decrease as the input size grows, with the best classification at 27 × 27. Therefore, based on the experimental results, we set the input size to 17 × 17 for the IP and UP datasets and to 27 × 27 for the SA dataset.
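These experiments amount to extracting a square spatial neighborhood around each labeled pixel. A minimal sketch, assuming an (H, W, B) cube and reflect padding at the image borders (a common choice, not necessarily the paper's):

```python
import numpy as np

def extract_patch(cube, row, col, size):
    """Cut a size x size spatial neighborhood centered on (row, col);
    reflect padding lets border pixels still yield full-size patches."""
    pad = size // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    return padded[row:row + size, col:col + size, :]

cube = np.random.rand(100, 120, 36)
patch = extract_patch(cube, 0, 0, 17)    # a corner pixel gets a full patch
print(patch.shape)  # (17, 17, 36)
```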
Convergence Analysis of the DACINet. To verify the rationality of the epoch setting, this section visualizes the training convergence on the Indian Pines and Salinas datasets. The experimental results are shown in Figure 9. The IP dataset converges relatively slowly, converging completely at around epoch 30, but its loss and accuracy curves are relatively stable. The SA dataset converges faster, reaching a stable state by epoch 10, although its loss curve fluctuates noticeably while the accuracy curve remains stable. This confirms that our epoch setting is reasonable and that the proposed model has good stability and generalization ability.

4.6. Visualization Analysis

To show the effectiveness of the DACINet more intuitively, the classification results of the proposed DACINet and other representative methods on the IP dataset are visualized in Figure 10; the colors follow the scheme described in Table 1. Small regions in the figure correspond to land cover classes with few samples, for which fewer training samples are selected during classification. As observed, the SSTN and SSAtt methods make more errors on classes with fewer samples. The F-GCN method improves on the other methods but tends to produce errors at the boundary between two categories. Clearly, the DACINet has the best overall classification performance, with significantly improved boundary accuracy. The classification visualizations for the UP, SA, and HU datasets are presented in Figure 11, Figure 12 and Figure 13; their colors follow the schemes described in Table 2, Table 3 and Table 4, respectively.
The classification performance of the RF and SVM machine learning models is unstable across the four datasets. For the IP dataset, SSTN and SSAtt make more errors on land cover classes with few samples. The HybridN method improves on the other methods but is prone to incorrect results at the boundary between two categories. For the UP dataset, SSTN, SSAtt, and HybridN show large errors on the class rendered in sky blue. Since the UP dataset was captured over a university area, the land cover is often distributed in narrow, elongated strips, which are prone to errors at the edges; the proposed DACINet effectively alleviates these classification errors on the UP dataset. For the SA dataset, the visualization results of RF, SSAtt, and HybridN contain obvious errors, whereas the DACINet, while retaining the advantages of the other methods, significantly improves the classification accuracy of each category. For the HU dataset, classification is more challenging because of the complex urban scenes with diverse land cover categories. As shown in the visualizations, the traditional machine learning methods RF and SVM produce substantial misclassifications, particularly among spectrally similar categories such as Grass-healthy, Grass-stressed, and Grass-synth. The Context and RSSAN methods show some improvement but still struggle with categories such as Parking-lot1 and Parking-lot2, which have irregular shapes and scattered distributions. SSTN and SSAtt perform better in homogeneous regions such as Water and Tree, yet they generate noticeable errors in complex categories such as Residential and Commercial, where mixed pixels are prevalent.
A2S2K, one of the advanced methods, achieves relatively good results but still fails to accurately classify challenging categories such as Tennis-court and Running-track, which have limited samples. In contrast, the proposed DACINet significantly reduces misclassifications across all categories, producing the most accurate and complete classification maps. The experimental results show that the DACINet handles the classification task on the four datasets better than the compared methods. By assigning greater weight to spectral–spatial features via the dual attention branches, the context interaction facilitates better feature fusion; combined with the hybrid network, this alleviates classification errors at object boundaries and edge blurring, thereby improving the overall classification performance of the model.

5. Discussion

The DACINet was designed to address a central challenge in HSI classification: jointly modeling spectral and spatial information while capturing long-range contextual dependencies. Standard CNNs, constrained by local receptive fields, struggle to aggregate information from distant pixels or spectral bands. Our framework tackles this by integrating contextual modeling, dual attention enhancement, and a hybrid convolutional strategy. The result is a model that learns features that are both spectrally discriminative and spatially coherent, leading to its consistent performance across diverse datasets.
The CIFM extends standard 3D CNNs by stacking layers with cross-layer residual connections. This design progressively expands the effective receptive field, allowing the model to draw information from larger spectral–spatial neighborhoods. The residual connections then integrate features across these different scales, a capability crucial for distinguishing classes that are spectrally similar but differ in their spatial context. The innovation of the CSDA module lies in its parallel and interactive computation. Unlike sequential mechanisms like CBAM, where channel attention biases spatial processing, or decoupled approaches like DBDA that lack cross-branch interaction, CSDA computes both attention maps from the same 3D feature map and fuses them via addition. This enables complementary learning between spectral and spatial features, aligning with the intrinsic structure of HSI data. The hybrid convolutional layer balances performance and efficiency through its sequential 3D-to-2D design. Initial 3D layers build a rich spectral–spatial representation, while the subsequent reshape encodes spectral information into the channel dimension, allowing lighter 2D convolutions to refine spatial patterns. This explains the high accuracy and low complexity of the DACINet, which achieves far fewer FLOPs than other attention-based methods while maintaining strong performance.
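The 3D-to-2D hand-off described above can be sketched as follows; the layer widths are our own illustrations, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class Hybrid3Dto2D(nn.Module):
    """Sketch of the hybrid idea: a 3D conv builds a joint spectral-spatial
    representation, then the spectral axis is folded into the channel
    dimension so a cheaper 2D conv refines spatial patterns."""
    def __init__(self, bands):
        super().__init__()
        self.conv3d = nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(0, 1, 1))
        d_out = bands - 7 + 1                        # spectral length after 3D conv
        self.conv2d = nn.Conv2d(8 * d_out, 64, kernel_size=3, padding=1)

    def forward(self, x):                            # x: (B, 1, bands, H, W)
        x = torch.relu(self.conv3d(x))               # (B, 8, d_out, H, W)
        b, c, d, h, w = x.shape
        x = x.reshape(b, c * d, h, w)                # spectral info -> channels
        return torch.relu(self.conv2d(x))            # (B, 64, H, W)

y = Hybrid3Dto2D(13)(torch.randn(2, 1, 13, 17, 17))  # UP-like: 13 bands
print(tuple(y.shape))  # (2, 64, 17, 17)
```

After the reshape, the 2D convolution slides only over the two spatial axes, so the expensive per-band spatial filtering of a deep 3D stack is avoided.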
Despite its robust overall performance, the accuracy of the DACINet remains constrained in categories with extremely few training samples. This limitation arises because, like most deep learning frameworks, the model struggles to learn sufficiently discriminative representations from only a handful of examples—especially in datasets with severe class imbalance. A natural direction for future work is therefore to integrate few-shot learning strategies into the framework. One promising approach involves embedding meta-learning into the DACINet, where the model is trained across many episodes to learn a more generalizable metric space. This would enable the network to compare new samples against a small support set and make predictions based on feature similarity, rather than relying solely on class statistics learned from abundant data.
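As an illustration of the metric-space idea, a minimal prototypical-style classifier (our own sketch, not part of the DACINet) compares query embeddings against per-class prototypes computed from a small support set:

```python
import numpy as np

def prototype_predict(support_feats, support_labels, query_feats):
    """Metric-based few-shot sketch: each class prototype is the mean of
    its support embeddings; queries take the nearest prototype's label."""
    classes = np.unique(support_labels)
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in classes])
    # squared Euclidean distance from every query to every prototype
    d2 = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]

support = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 1.1], [1.1, 1.0]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.05, 0.05], [0.9, 1.0]])
print(prototype_predict(support, labels, queries))  # [0 1]
```

In a meta-learning setup, the embedding network would be trained over many such episodes so that this nearest-prototype rule generalizes to classes with only a handful of labeled pixels.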

6. Conclusions

This paper proposes a Double-Attention Context Interactive Network (DACINet) for hyperspectral image classification. The DACINet is mainly composed of a CIFM, a CSDA mechanism, and a hybrid convolutional layer. The CIFM captures correlations between long-distance image bands through cross-layer residual connections to enhance contextual interaction. The CSDA is a new dual-branch attention mechanism for 3D features, which enhances 2D spatial and 1D spectral features, respectively, and fuses them to strengthen 3D associations. The hybrid convolutional layer combines 2D and 3D convolution to further enhance the discriminability of spectral information. Experiments are carried out on the IP, UP, SA, and HU datasets to verify the performance of the proposed DACINet. The results show that the proposed DACINet is superior to other state-of-the-art methods. Ablation studies validate the effectiveness of each core component, while visualization analysis reveals the model's spectral–spatial feature modeling capability. In the future, more attention will be paid to few-shot hyperspectral image classification.

Author Contributions

Investigation, M.W.; Resources, Y.Z.; Writing—original draft, N.H.; Writing—review & editing, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by grants from the National Natural Science Foundation of China (42271093), the Natural Science Foundation of Shandong Province (ZR2024QF060, ZR2025QC1571), and the Science and Technology Support Plan for Youth Innovation of Colleges and Universities of Shandong Province of China (2025KJH134).

Data Availability Statement

The data presented in this study are available in public hyperspectral remote sensing scene repositories. These data were derived from the following resources available in the public domain: 1. Indian Pines (IP) dataset: https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes#Indian_Pines (accessed on 30 July 2025) (alternative domestic mirror: https://opendatalab.org.cn/OpenDataLab/Indian_Pines (accessed on 30 July 2025)); 2. Pavia University (UP) dataset: https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes#Pavia_University (accessed on 30 July 2025) (alternative domestic mirror: https://hf-mirror.com/datasets/danaroth/pavia (accessed on 30 July 2025)); 3. Salinas (SA) dataset: https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes#Salinas (accessed on 28 July 2025) (alternative domestic mirror: https://opendatalab.org.cn/OpenDataLab/Salinas (accessed on 28 July 2025)); and 4. Houston 2013 (HU) dataset: https://www.grss-ieee.org/community/technical-committees/2013-ieee-grss-data-fusion-contest/ (accessed on 10 February 2026) (alternative domestic mirror: https://drive.uc.cn/s/3fe4f55a213f4?public=1 (accessed on 10 February 2026)). All datasets used in the experiments are publicly accessible without restrictions. The original data files, including hyperspectral images and ground truth labels, can be downloaded directly from the provided official URLs for research reproduction.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Imani, M.; Ghassemian, H. An overview on spectral and spatial information fusion for hyperspectral image classification: Current trends and challenges. Inf. Fusion 2020, 59, 59–83. [Google Scholar] [CrossRef]
  2. Wang, X.; Liu, J.; Chi, W.; Wang, W.; Ni, Y. Advances in Hyperspectral Image Classification Methods with Small Samples: A Review. Remote Sens. 2023, 15, 3795. [Google Scholar] [CrossRef]
  3. Zhang, Y.; Li, W.; Zhang, M.; Qu, Y.; Tao, R.; Qi, H. Topological structure and semantic information transfer network for cross-scene hyperspectral image classification. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 2817–2830. [Google Scholar]
  4. Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
  5. Xue, Z.; Zhou, Y.; Du, P. S3Net: Spectral–spatial Siamese network for few-shot hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5531219. [Google Scholar] [CrossRef]
  6. Hu, L.; He, W.; Zhang, L.; Zhang, H. Cross-Domain Meta-Learning under Dual Adjustment Mode for Few-Shot Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5526416. [Google Scholar] [CrossRef]
  7. Wang, Z.; Zhao, S.; Zhao, G.; Song, X. Dual-Branch Domain Adaptation Few-Shot Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5506116. [Google Scholar] [CrossRef]
  8. Thoreau, R.; Achard, V.; Risser, L.; Berthelot, B.; Briottet, X. Active Learning for Hyperspectral Image Classification: A comparative review. IEEE Geosci. Remote Sens. Mag. 2022, 10, 256–278. [Google Scholar] [CrossRef]
  9. Fang, L.; Liu, G.; Li, S.; Ghamisi, P.; Benediktsson, J.A. Hyperspectral image classification with squeeze multibias network. IEEE Trans. Geosci. Remote Sens. 2018, 57, 1291–1301. [Google Scholar] [CrossRef]
  10. He, Y.; Tu, B.; Liu, B.; Li, J.; Plaza, A. HSI-MFormer: Integrating Mamba and Transformer Experts for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5621916. [Google Scholar] [CrossRef]
  11. Liang, L.; Xie, P.; Zhang, Y.; Li, J.; Zhang, Z.; Li, J.; Plaza, A. DBMLLA: Double-branch Mamba-like linear attention network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5524315. [Google Scholar] [CrossRef]
  12. Zhang, T.; Xuan, C.; Cheng, F.; Tang, Z.; Gao, X.; Song, Y. CenterMamba: Enhancing semantic representation with center-scan mamba network for hyperspectral image classification. Expert Syst. Appl. 2025, 287, 127985. [Google Scholar] [CrossRef]
  13. Ahmad, M.; Khan, A.M.; Mazzara, M.; Distefano, S.; Ali, M.; Sarfraz, M.S. A Fast and Compact 3-D CNN for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 5502205. [Google Scholar] [CrossRef]
  14. Ghaderizadeh, S.; Abbasi-Moghadam, D.; Sharifi, A.; Zhao, N.; Tariq, A. Hyperspectral Image Classification Using a Hybrid 3D-2D Convolutional Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7570–7588. [Google Scholar] [CrossRef]
  15. Alkhatib, M.Q.; Al-Saad, M.; Aburaed, N.; Almansoori, S.; Zabalza, J.; Marshall, S.; Al-Ahmad, H. Tri-CNN: A Three Branch Model for Hyperspectral Image Classification. Remote Sens. 2023, 15, 316. [Google Scholar] [CrossRef]
  16. Gündüz, A.; Orman, Z. Hyperspectral image classification using a hybrid RNN-CNN with enhanced attention mechanisms. J. Indian Soc. Remote Sens. 2025, 53, 613–629. [Google Scholar] [CrossRef]
  17. Yang, J.; Du, B.; Xu, Y.; Zhang, L. Can Spectral Information Work While Extracting Spatial Distribution?—An Online Spectral Information Compensation Network for HSI Classification. IEEE Trans. Image Process. 2023, 32, 2360–2373. [Google Scholar] [CrossRef]
  18. Şakaci, S.A.; Urhan, O. Spectral-Spatial Classification of Hyperspectral Imagery with Convolutional Neural Network. In Proceedings of the 2020 Innovations in Intelligent Systems and Applications Conference (ASYU), Istanbul, Turkey, 15–17 October 2020; pp. 1–4. [Google Scholar] [CrossRef]
  19. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  20. He, M.; Li, B.; Chen, H. Multi-scale 3D deep convolutional neural network for hyperspectral image classification. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3904–3908. [Google Scholar] [CrossRef]
  21. Roy, S.K.; Manna, S.; Song, T.; Bruzzone, L. Attention-Based Adaptive Spectral–Spatial Kernel ResNet for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7831–7843. [Google Scholar] [CrossRef]
  22. Krichen, M. Generative Adversarial Networks. In Proceedings of the 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), Delhi, India, 6–8 July 2023; pp. 1–7. [Google Scholar] [CrossRef]
  23. Paul, B.; Fattah, S.A.; Rajib, A.; Saquib, M. SSGRAM: 3-D Spectral-Spatial Feature Network Enhanced by Graph Attention Map for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5516715. [Google Scholar] [CrossRef]
  24. Zhang, Z.; Huang, L.; Tang, B.H.; Wang, Q.; Ge, Z.; Jiang, L. Non-Euclidean Spectral-Spatial feature mining network with Gated GCN-CNN for hyperspectral image classification. Expert Syst. Appl. 2025, 272, 126811. [Google Scholar] [CrossRef]
  25. Yang, J.; Wu, C.; Du, B.; Zhang, L. Enhanced Multiscale Feature Fusion Network for HSI Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10328–10347. [Google Scholar] [CrossRef]
  26. Liu, R.; Liang, J.; Yang, J.; Hu, M.; He, J.; Zhu, P.; Zhang, L. DHSNet: Dual Classification Head Self-Training Network for Cross-Scene Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5534515. [Google Scholar] [CrossRef]
  27. Wang, D.; Hu, M.; Jin, Y.; Miao, Y.; Yang, J.; Xu, Y.; Qin, X.; Ma, J.; Sun, L.; Li, C.; et al. HyperSIGMA: Hyperspectral Intelligence Comprehension Foundation Model. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 6427–6444. [Google Scholar] [CrossRef]
  28. Zhang, Y.; Wang, X.; Jiang, X.; Zhang, L.; Du, B. Elastic Graph Fusion Subspace Clustering for Large Hyperspectral Image. IEEE Trans. Circuits Syst. Video Technol. 2025, 35, 6300–6312. [Google Scholar] [CrossRef]
  29. Huang, S.; Zeng, H.; Chen, H.; Zhang, H. Spatial and Cluster Structural Prior-Guided Subspace Clustering for Hyperspectral Image. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5511115. [Google Scholar] [CrossRef]
  30. Jiang, G.; Zhang, Y.; Wang, X.; Jiang, X.; Zhang, L. Structured anchor learning for large-scale hyperspectral image projected clustering. IEEE Trans. Circuits Syst. Video Technol. 2024, 35, 2328–2340. [Google Scholar] [CrossRef]
  31. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2023, arXiv:1706.03762. [Google Scholar] [CrossRef]
  32. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. [Google Scholar] [CrossRef]
  33. Arshad, T.; Zhang, J. Hierarchical Attention Transformer for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2024, 21, 5504605. [Google Scholar] [CrossRef]
  34. Zhao, Z.; Xu, X.; Li, S.; Plaza, A. Hyperspectral Image Classification Using Groupwise Separable Convolutional Vision Transformer Network. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5511817. [Google Scholar] [CrossRef]
  35. Jing, C.; Sun, G.; Zhang, A.; Fu, H.; Cheng, J.; Shi, Z. A Dynamic Attention Unet Network for Hyperspectral Image Classification. In Proceedings of the IGARSS 2025—2025 IEEE International Geoscience and Remote Sensing Symposium, Brisbane, Australia, 3–8 August 2025; pp. 8458–8461. [Google Scholar] [CrossRef]
  36. Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of hyperspectral image based on double-branch dual-attention mechanism network. Remote Sens. 2020, 12, 582. [Google Scholar] [CrossRef]
  37. Nhaila, H.; Elmaizi, A.; Sarhrouni, E.; Hammouch, A. Supervised classification methods applied to airborne hyperspectral images: Comparative study using mutual information. Procedia Comput. Sci. 2019, 148, 97–106. [Google Scholar] [CrossRef]
  38. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  39. Lee, H.; Kwon, H. Going deeper with contextual CNN for hyperspectral image classification. IEEE Trans. Image Process. 2017, 26, 4843–4855. [Google Scholar] [CrossRef] [PubMed]
  40. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281. [Google Scholar]
  41. Li, M.; Liu, Y.; Xue, G.; Huang, Y.; Yang, G. Exploring the relationship between center and neighborhoods: Central vector oriented self-similarity network for hyperspectral image classification. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 1979–1993. [Google Scholar] [CrossRef]
  42. Zhu, M.; Jiao, L.; Liu, F.; Yang, S.; Wang, J. Residual spectral–spatial attention network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 449–462. [Google Scholar]
  43. Zhong, Z.; Li, Y.; Ma, L.; Li, J.; Zheng, W.S. Spectral–spatial transformer network for hyperspectral image classification: A factorized architecture search framework. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5514715. [Google Scholar]
  44. Hang, R.; Li, Z.; Liu, Q.; Ghamisi, P.; Bhattacharyya, S.S. Hyperspectral image classification with attention-aided CNNs. IEEE Trans. Geosci. Remote Sens. 2020, 59, 2281–2293. [Google Scholar] [CrossRef]
  45. Xu, J.; Li, K.; Li, Z.; Chong, Q.; Xing, H.; Xing, Q.; Ni, M. Fuzzy graph convolutional network for hyperspectral image classification. Eng. Appl. Artif. Intell. 2024, 127, 107280. [Google Scholar] [CrossRef]
Figure 1. The structure of the proposed DACINet. The process begins with PCA for dimensionality reduction of the input HSI cube. The resulting patches are fed into the CIFM, which employs stacked 3D convolutions with cross-layer residual connections to capture long-range contextual information. The feature maps then enter the CSDA, which enhances channel (spectral) and spatial features in parallel before fusing them. Subsequently, a hybrid convolutional layer, comprising sequential 3D and 2D convolutions, further refines the spectral–spatial representation. Finally, a fully connected layer produces the pixel-wise classification results.
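The PCA preprocessing step named in the caption can be illustrated with a minimal NumPy sketch (an assumption-level illustration, not the authors' implementation; the 145 × 145 × 200 cube merely mimics the Indian Pines dimensions):

```python
import numpy as np

def pca_reduce(cube: np.ndarray, n_components: int) -> np.ndarray:
    """Project an (H, W, B) hyperspectral cube onto its top principal
    spectral components, returning an (H, W, n_components) cube."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)
    x -= x.mean(axis=0)                          # center each band
    cov = np.cov(x, rowvar=False)                # (B, B) band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return (x @ top).reshape(h, w, n_components)

cube = np.random.rand(145, 145, 200)             # IP-sized cube: 200 spectral bands
print(pca_reduce(cube, 30).shape)                # (145, 145, 30)
```

Patches fed to the CIFM are then cropped from the reduced cube rather than from all original bands, which keeps the subsequent 3D convolutions tractable.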
Figure 2. The structure of the CIFM.
Figure 3. The structure of CSDA.
Figure 4. Robustness verification of the proposed model on the IP, UP, and SA datasets. The x-axis represents the percentage of labeled samples (from each class) used for training. The y-axis represents the classification accuracy (Overall Accuracy, OA, or Average Accuracy, AA).
Figure 5. The confusion matrix for 9 categories in the UP dataset.
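The OA, AA, and Kappa figures reported throughout are all derived from such a confusion matrix. As a generic sketch of the standard definitions (not code from the paper), with rows taken as true classes:

```python
import numpy as np

def oa_aa_kappa(cm: np.ndarray):
    """Overall Accuracy, Average Accuracy, and Cohen's kappa
    from a (C, C) confusion matrix (rows = true classes)."""
    n = cm.sum()
    oa = np.trace(cm) / n                                # fraction correct
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))           # mean per-class recall
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

cm = np.array([[50, 2],
               [5, 43]])
oa, aa, kappa = oa_aa_kappa(cm)   # OA = 0.930, AA ~ 0.929, Kappa ~ 0.859
```

Kappa discounts the agreement expected by chance (pe), which is why it drops faster than OA when a classifier over-predicts the majority class.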
Figure 6. The OA, AA, and Kappa values obtained with different numbers of spectral bands on the IP, UP, and SA datasets.
Figure 7. Classification performance with different kernel sizes on the IP, UP, SA, and HU datasets.
Figure 8. The OA, AA, and Kappa values under different input spatial sizes on the IP, UP, and SA datasets. The x-axis values represent the spatial dimension of input patches.
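Varying the input spatial size amounts to changing the window cropped around each labeled pixel. A common way to extract such patches, sketched here with edge padding as an assumed border policy, is:

```python
import numpy as np

def extract_patches(cube: np.ndarray, coords, s: int) -> np.ndarray:
    """Cut s x s spatial patches (s odd) centered on the given (row, col)
    pixels; edge padding gives border pixels full-sized windows too."""
    r = s // 2
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="edge")
    return np.stack([padded[i:i + s, j:j + s, :] for i, j in coords])

cube = np.random.rand(10, 12, 30)                     # toy cube, 30 bands
patches = extract_patches(cube, [(0, 0), (5, 7)], 9)
print(patches.shape)                                  # (2, 9, 9, 30)
```

Larger windows supply more spatial context but mix in more neighboring classes, which matches the accuracy plateau-then-drop behavior typically seen in such experiments.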
Figure 9. Convergence analysis of the proposed DACINet. The training loss and accuracy curves on (a) the IP dataset and (b) the SA dataset.
Figure 10. Visualization of the classification results of different models on the IP dataset.
Figure 11. Visualization of the classification results of different models on the UP dataset.
Figure 12. Visualization of the classification results of different models on the SA dataset.
Figure 13. Visualization of the classification results of different models on the HU dataset.
Table 1. Category information of the Indian Pines dataset.
Number | Category | Total Samples
1 | Alfalfa | 46
2 | Corn-notill | 1428
3 | Corn-mintill | 830
4 | Corn | 237
5 | Grass/pasture | 483
6 | Grass/trees | 730
7 | Grass/pasture-mowed | 28
8 | Hay-windrowed | 478
9 | Oats | 20
10 | Soybean-notill | 972
11 | Soybean-mintill | 2455
12 | Soybean-clean | 593
13 | Wheat | 205
14 | Woods | 1265
15 | Buildings-Grass-Trees-Drives | 386
16 | Stone-steel-towers | 93
Total |  | 10,249
Table 2. Category information of the Pavia University dataset.
Number | Category | Total Samples
1 | Asphalt | 6631
2 | Meadows | 18,649
3 | Gravel | 2099
4 | Trees | 3064
5 | Painted Metal Sheets | 1345
6 | Bare Soil | 5029
7 | Bitumen | 1330
8 | Self-Blocking Bricks | 3682
9 | Shadows | 947
Total |  | 42,776
Table 3. Category information of the Salinas dataset.
Number | Category | Total Samples
1 | Broccoli-green-weeds_1 | 2009
2 | Broccoli-green-weeds_2 | 3726
3 | Fallow | 1976
4 | Fallow-rough-plow | 1394
5 | Fallow-smooth | 2678
6 | Stubble | 3959
7 | Celery | 3579
8 | Grapes-untrained | 11,271
9 | Soil-vinyard-develop | 6203
10 | Corn-senesced-green-weeds | 3278
11 | Lettuce-romaine-4wk | 1068
12 | Lettuce-romaine-5wk | 1927
13 | Lettuce-romaine-6wk | 916
14 | Lettuce-romaine-7wk | 1070
15 | Vinyard-untrained | 7268
16 | Vinyard-vertical-trellis | 1807
Total |  | 54,129
Table 4. Category information of the Houston 2013 (HU) dataset.
Number | Category | Total Samples
1 | Grass-healthy | 1238
2 | Grass-stressed | 1241
3 | Grass-synth | 690
4 | Tree | 1231
5 | Soil | 1229
6 | Water | 321
7 | Residential | 1255
8 | Commercial | 1231
9 | Road | 1239
10 | Highway | 1214
11 | Railway | 1222
12 | Parking-lot1 | 1220
13 | Parking-lot2 | 464
14 | Tennis-court | 423
15 | Running-track | 653
Total |  | 14,871
Table 5. Classification results (%) of various HSIC methods on IP dataset with 5% labeled samples per class. The best result for each class/metric is highlighted in bold.
Class | RF | SVM | Context | RSSAN | SSTN | SSAtt | HybridN | CVSSN | F-GCN | Ours
1 | 87.06 ± 5.63 | 42.86 ± 4.78 | 57.89 ± 5.27 | 88.89 ± 2.87 | 77.77 ± 3.98 | 90.93 ± 4.13 | 79.86 ± 6.24 | 64.71 ± 1.96 | 93.83 ± 3.24 | 70.75 ± 3.65
2 | 66.45 ± 6.12 | 66.93 ± 5.36 | 63.08 ± 5.32 | 62.27 ± 3.45 | 85.35 ± 4.53 | 83.72 ± 7.34 | 74.16 ± 2.19 | 93.33 ± 3.42 | 78.95 ± 6.25 | 85.66 ± 4.32
3 | 70.92 ± 3.42 | 75.45 ± 3.54 | 47.28 ± 2.65 | 60.00 ± 5.43 | 93.40 ± 4.65 | 81.05 ± 3.25 | 93.60 ± 1.78 | 96.24 ± 2.53 | 78.63 ± 3.32 | 96.41 ± 3.21
4 | 43.38 ± 2.65 | 48.56 ± 4.23 | 46.06 ± 1.68 | 82.72 ± 3.24 | 98.95 ± 2.43 | 88.94 ± 2.67 | 89.37 ± 1.32 | 99.98 ± 0.45 | 96.62 ± 2.13 | 96.07 ± 3.21
5 | 72.57 ± 6.78 | 85.49 ± 5.34 | 81.93 ± 5.67 | 79.41 ± 7.48 | 84.07 ± 4.38 | 87.57 ± 4.98 | 89.31 ± 3.78 | 92.22 ± 3.95 | 76.84 ± 8.12 | 97.55 ± 2.45
6 | 81.03 ± 3.25 | 84.48 ± 2.23 | 76.09 ± 1.98 | 91.33 ± 3.14 | 96.13 ± 3.21 | 99.22 ± 1.13 | 99.67 ± 1.02 | 98.56 ± 2.32 | 95.79 ± 2.11 | 99.14 ± 1.10
7 | 58.33 ± 10.11 | 80.00 ± 11.23 | 99.28 ± 3.54 | 99.52 ± 2.21 | 35.71 ± 12.23 | 72.00 ± 11.43 | 71.11 ± 9.87 | 94.73 ± 7.32 | 96.43 ± 3.21 | 74.42 ± 5.43
8 | 86.24 ± 3.12 | 88.82 ± 2.28 | 88.13 ± 5.45 | 93.64 ± 3.45 | 94.93 ± 4.52 | 94.57 ± 4.87 | 93.32 ± 5.43 | 92.26 ± 3.78 | 100.00 ± 2.31 | 97.09 ± 3.54
9 | 99.19 ± 2.33 | 60.00 ± 6.49 | 84.62 ± 5.46 | 55.56 ± 8.98 | 42.86 ± 7.56 | 93.33 ± 4.65 | 72.22 ± 3.21 | 80.00 ± 6.65 | 99.03 ± 2.31 | 73.33 ± 5.63
10 | 64.48 ± 4.32 | 76.44 ± 6.56 | 66.22 ± 7.66 | 59.84 ± 5.46 | 90.92 ± 5.34 | 85.23 ± 3.12 | 85.85 ± 7.65 | 93.46 ± 3.65 | 78.91 ± 5.43 | 93.85 ± 4.23
11 | 63.18 ± 3.44 | 69.17 ± 5.43 | 73.27 ± 4.53 | 72.88 ± 5.74 | 96.04 ± 2.35 | 90.25 ± 4.34 | 96.46 ± 1.29 | 97.84 ± 5.43 | 78.84 ± 6.12 | 96.12 ± 3.67
12 | 57.14 ± 9.49 | 67.51 ± 5.46 | 58.06 ± 8.87 | 55.46 ± 8.56 | 78.65 ± 5.69 | 67.91 ± 7.88 | 89.64 ± 4.57 | 85.85 ± 6.12 | 85.50 ± 5.32 | 95.92 ± 4.17
13 | 88.83 ± 5.34 | 92.67 ± 4.35 | 71.62 ± 6.54 | 86.11 ± 5.87 | 97.30 ± 2.89 | 97.00 ± 5.12 | 98.72 ± 4.30 | 95.42 ± 3.46 | 100.00 ± 2.17 | 98.73 ± 4.12
14 | 83.85 ± 1.65 | 84.88 ± 4.32 | 90.99 ± 3.19 | 88.77 ± 5.43 | 96.10 ± 3.10 | 96.54 ± 4.32 | 96.87 ± 2.90 | 97.62 ± 4.87 | 94.86 ± 5.29 | 97.92 ± 3.82
15 | 66.67 ± 6.54 | 80.15 ± 4.21 | 63.66 ± 4.20 | 72.49 ± 1.89 | 90.02 ± 4.12 | 88.83 ± 4.76 | 78.63 ± 3.21 | 95.86 ± 2.69 | 95.34 ± 4.01 | 93.31 ± 3.54
16 | 98.41 ± 2.10 | 98.25 ± 2.98 | 97.78 ± 4.12 | 60.53 ± 3.02 | 98.82 ± 3.61 | 93.33 ± 4.08 | 42.74 ± 5.12 | 81.90 ± 2.89 | 93.24 ± 3.06 | 94.12 ± 4.32
OA | 70.42 ± 3.04 | 75.28 ± 4.23 | 71.18 ± 4.57 | 73.18 ± 4.34 | 91.32 ± 5.41 | 88.35 ± 3.21 | 93.95 ± 2.87 | 95.22 ± 5.34 | 95.24 ± 3.87 | 96.78 ± 3.24
AA | 75.19 ± 2.65 | 75.21 ± 4.32 | 72.92 ± 3.12 | 75.62 ± 5.01 | 84.81 ± 5.65 | 88.72 ± 4.69 | 84.35 ± 3.20 | 90.50 ± 2.19 | 90.56 ± 2.97 | 90.60 ± 3.05
Kappa | 65.71 ± 3.02 | 71.45 ± 3.21 | 66.98 ± 4.32 | 69.23 ± 3.76 | 90.13 ± 2.65 | 86.71 ± 1.78 | 93.09 ± 4.03 | 94.55 ± 3.78 | 92.30 ± 3.21 | 96.32 ± 3.58
Table 6. Classification results (%) of various HSIC methods on UP dataset with 1% labeled samples per class. The best result for each class/metric is highlighted in bold.
Class | RF | SVM | Context | RSSAN | SSTN | SSAtt | HybridN | CVSSN | Ours
1 | 79.58 ± 3.87 | 78.28 ± 2.56 | 89.65 ± 2.84 | 86.19 ± 4.23 | 89.66 ± 3.08 | 87.96 ± 2.19 | 93.16 ± 3.18 | 95.45 ± 2.76 | 96.84 ± 2.85
2 | 84.04 ± 1.98 | 84.47 ± 2.10 | 94.30 ± 1.28 | 96.92 ± 2.09 | 97.46 ± 3.21 | 98.55 ± 3.98 | 98.72 ± 2.76 | 98.71 ± 4.01 | 99.45 ± 1.65
3 | 55.06 ± 4.70 | 76.82 ± 3.09 | 67.47 ± 5.31 | 59.67 ± 5.42 | 74.78 ± 6.23 | 74.32 ± 2.65 | 84.95 ± 3.67 | 86.35 ± 2.37 | 88.69 ± 3.20
4 | 90.87 ± 4.32 | 91.73 ± 3.41 | 91.75 ± 4.79 | 99.09 ± 2.12 | 91.09 ± 3.56 | 98.39 ± 4.28 | 93.11 ± 1.67 | 96.77 ± 3.23 | 97.33 ± 4.11
5 | 95.00 ± 3.11 | 93.97 ± 3.33 | 99.98 ± 2.04 | 90.14 ± 3.97 | 98.59 ± 4.22 | 98.08 ± 3.65 | 95.63 ± 2.17 | 96.41 ± 5.04 | 99.77 ± 0.78
6 | 75.77 ± 1.67 | 94.05 ± 4.31 | 80.84 ± 3.44 | 86.04 ± 2.17 | 96.82 ± 2.04 | 96.41 ± 4.13 | 97.06 ± 3.22 | 96.36 ± 4.28 | 98.26 ± 2.65
7 | 73.85 ± 3.33 | 69.68 ± 5.67 | 81.13 ± 6.53 | 71.05 ± 3.76 | 99.75 ± 1.54 | 94.87 ± 3.47 | 92.42 ± 2.38 | 90.39 ± 3.42 | 98.13 ± 3.63
8 | 71.54 ± 3.24 | 72.87 ± 3.56 | 80.41 ± 4.44 | 76.00 ± 5.05 | 89.06 ± 3.27 | 86.13 ± 4.12 | 85.86 ± 6.55 | 89.04 ± 4.29 | 90.80 ± 5.08
9 | 99.46 ± 2.22 | 99.89 ± 1.87 | 86.31 ± 2.35 | 80.86 ± 5.68 | 91.65 ± 3.18 | 95.81 ± 5.23 | 77.95 ± 4.56 | 97.37 ± 2.33 | 97.25 ± 4.55
OA | 81.71 ± 4.33 | 83.43 ± 1.21 | 88.86 ± 3.24 | 89.09 ± 4.01 | 93.64 ± 2.87 | 94.08 ± 1.66 | 94.77 ± 2.21 | 95.93 ± 3.06 | 97.77 ± 2.70
AA | 80.58 ± 2.21 | 84.64 ± 3.24 | 85.72 ± 4.10 | 82.88 ± 3.17 | 92.10 ± 3.29 | 92.28 ± 4.19 | 90.64 ± 3.95 | 94.09 ± 2.79 | 96.72 ± 2.64
Kappa | 75.02 ± 2.98 | 77.24 ± 4.19 | 85.18 ± 2.39 | 85.49 ± 1.89 | 91.56 ± 3.25 | 92.13 ± 2.13 | 93.03 ± 1.65 | 94.60 ± 3.10 | 97.04 ± 1.23
Table 7. Classification results (%) of various HSIC methods on SA dataset with 1% labeled samples per class. The best result for each class/metric is highlighted in bold.
Class | RF | SVM | Context | RSSAN | SSTN | SSAtt | HybridN | CVSSN | Ours
1 | 99.68 ± 4.09 | 99.78 ± 2.19 | 97.02 ± 3.21 | 99.95 ± 3.67 | 95.48 ± 2.79 | 99.98 ± 1.79 | 96.39 ± 5.04 | 96.27 ± 3.78 | 100.00 ± 1.64
2 | 98.75 ± 4.32 | 98.86 ± 3.08 | 98.15 ± 5.34 | 99.86 ± 2.79 | 97.55 ± 4.08 | 99.08 ± 2.18 | 99.76 ± 4.37 | 99.98 ± 3.56 | 99.94 ± 2.19
3 | 84.90 ± 7.64 | 87.95 ± 7.65 | 92.67 ± 3.89 | 90.73 ± 5.43 | 93.43 ± 5.22 | 93.22 ± 6.34 | 99.31 ± 3.87 | 95.46 ± 4.28 | 99.23 ± 2.86
4 | 97.16 ± 5.20 | 97.36 ± 4.39 | 96.86 ± 8.02 | 91.26 ± 7.04 | 96.27 ± 4.67 | 98.20 ± 2.59 | 97.34 ± 5.33 | 95.83 ± 3.27 | 96.18 ± 5.33
5 | 93.95 ± 6.33 | 95.49 ± 5.44 | 97.87 ± 4.75 | 99.05 ± 0.87 | 99.59 ± 0.42 | 96.92 ± 3.44 | 98.11 ± 2.12 | 99.17 ± 0.65 | 97.25 ± 2.19
6 | 99.59 ± 0.46 | 99.91 ± 0.10 | 98.44 ± 2.10 | 99.64 ± 1.01 | 99.95 ± 0.87 | 100.00 ± 0.42 | 99.01 ± 1.45 | 99.99 ± 1.01 | 99.92 ± 0.45
7 | 97.77 ± 2.11 | 97.75 ± 2.01 | 97.68 ± 3.21 | 97.19 ± 3.24 | 99.91 ± 1.02 | 98.22 ± 2.54 | 99.96 ± 0.78 | 98.44 ± 2.13 | 100.00 ± 0.29
8 | 72.04 ± 3.45 | 72.34 ± 5.65 | 85.63 ± 5.43 | 84.12 ± 2.98 | 87.10 ± 5.44 | 85.34 ± 4.66 | 97.19 ± 3.77 | 92.81 ± 5.67 | 99.75 ± 1.32
9 | 95.85 ± 4.35 | 98.47 ± 3.22 | 99.00 ± 2.10 | 98.59 ± 3.21 | 99.79 ± 1.02 | 99.95 ± 0.12 | 99.46 ± 0.79 | 99.46 ± 1.22 | 99.80 ± 1.22
10 | 82.79 ± 6.54 | 89.33 ± 4.58 | 91.71 ± 5.47 | 91.68 ± 7.12 | 95.41 ± 3.40 | 94.51 ± 3.29 | 98.44 ± 4.39 | 97.85 ± 3.21 | 99.52 ± 1.23
11 | 94.63 ± 2.30 | 90.14 ± 5.43 | 93.35 ± 3.89 | 96.93 ± 3.98 | 80.22 ± 5.43 | 94.66 ± 4.10 | 93.48 ± 4.29 | 99.41 ± 1.67 | 99.72 ± 2.10
12 | 95.08 ± 3.09 | 95.68 ± 5.67 | 98.10 ± 3.08 | 94.10 ± 5.30 | 97.21 ± 3.12 | 97.39 ± 4.33 | 98.03 ± 4.02 | 99.86 ± 1.02 | 98.92 ± 2.06
13 | 92.13 ± 6.22 | 92.84 ± 8.11 | 98.96 ± 4.52 | 99.65 ± 2.10 | 99.89 ± 0.23 | 97.88 ± 3.44 | 95.87 ± 4.32 | 99.88 ± 1.23 | 98.67 ± 3.22
14 | 91.86 ± 8.33 | 95.68 ± 5.54 | 98.10 ± 4.55 | 94.10 ± 6.43 | 97.21 ± 2.35 | 97.39 ± 5.41 | 98.03 ± 4.29 | 99.86 ± 2.45 | 98.92 ± 4.22
15 | 68.15 ± 5.66 | 74.12 ± 7.44 | 81.53 ± 3.49 | 79.95 ± 8.98 | 83.53 ± 4.30 | 80.47 ± 4.87 | 95.98 ± 5.66 | 89.69 ± 2.33 | 99.39 ± 1.22
16 | 94.38 ± 3.22 | 98.71 ± 4.23 | 97.59 ± 4.33 | 99.56 ± 1.23 | 98.18 ± 3.22 | 99.76 ± 1.29 | 98.89 ± 4.32 | 99.12 ± 1.27 | 100.00 ± 1.02
OA | 86.51 ± 3.21 | 88.30 ± 3.20 | 92.58 ± 4.32 | 92.28 ± 5.34 | 93.46 ± 2.37 | 93.15 ± 3.24 | 97.95 ± 4.22 | 96.24 ± 4.55 | 99.53 ± 2.87
AA | 91.17 ± 3.22 | 92.77 ± 5.44 | 95.18 ± 2.66 | 95.12 ± 3.42 | 95.17 ± 3.41 | 95.79 ± 3.12 | 97.64 ± 3.11 | 97.48 ± 3.09 | 99.58 ± 1.03
Kappa | 84.95 ± 6.54 | 86.93 ± 3.90 | 91.74 ± 5.34 | 91.40 ± 4.26 | 92.71 ± 5.43 | 92.37 ± 3.89 | 97.72 ± 4.23 | 95.81 ± 2.98 | 99.48 ± 2.10
Table 8. Classification results (%) of various HSIC methods on HU dataset with 1% labeled samples per class. The best result for each class/metric is highlighted in bold.
Class | RF | SVM | Context | RSSAN | SSTN | SSAtt | A2S2K | CVSSN | Ours
1 | 89.06 ± 4.63 | 82.86 ± 3.78 | 77.89 ± 3.27 | 90.89 ± 5.87 | 77.98 ± 4.98 | 91.33 ± 3.13 | 86.56 ± 2.24 | 78.59 ± 4.16 | 92.95 ± 2.24
2 | 85.76 ± 5.23 | 84.89 ± 6.16 | 65.28 ± 6.23 | 74.17 ± 2.87 | 85.55 ± 3.23 | 88.25 ± 8.49 | 89.16 ± 5.10 | 90.13 ± 3.22 | 91.95 ± 4.15
3 | 90.02 ± 8.03 | 92.45 ± 3.04 | 78.18 ± 9.61 | 82.02 ± 10.13 | 84.40 ± 5.50 | 82.15 ± 3.15 | 87.27 ± 7.18 | 96.14 ± 3.50 | 88.83 ± 5.23
4 | 90.43 ± 4.65 | 92.36 ± 6.53 | 84.26 ± 5.83 | 88.78 ± 7.14 | 86.85 ± 5.23 | 90.14 ± 4.70 | 93.01 ± 6.87 | 92.18 ± 4.65 | 95.22 ± 4.23
5 | 72.57 ± 6.78 | 85.49 ± 5.34 | 81.93 ± 5.67 | 79.41 ± 7.48 | 84.07 ± 4.38 | 87.57 ± 4.98 | 89.31 ± 3.78 | 92.22 ± 3.95 | 97.55 ± 2.45
6 | 95.03 ± 6.85 | 94.48 ± 7.13 | 76.87 ± 8.91 | 83.32 ± 7.24 | 85.43 ± 4.76 | 89.22 ± 5.13 | 91.02 ± 4.22 | 94.16 ± 3.23 | 93.23 ± 6.80
7 | 67.43 ± 8.91 | 67.40 ± 11.43 | 78.87 ± 6.54 | 79.34 ± 7.16 | 74.87 ± 9.23 | 78.13 ± 7.93 | 76.11 ± 6.37 | 80.13 ± 8.32 | 79.42 ± 7.43
8 | 73.74 ± 11.12 | 74.08 ± 8.28 | 65.13 ± 13.45 | 64.32 ± 11.45 | 76.83 ± 12.52 | 83.47 ± 8.87 | 83.32 ± 6.39 | 84.26 ± 8.78 | 89.68 ± 9.41
9 | 67.19 ± 7.93 | 65.38 ± 9.49 | 72.62 ± 10.46 | 73.56 ± 9.98 | 81.86 ± 7.56 | 81.03 ± 7.65 | 85.22 ± 8.21 | 87.10 ± 6.65 | 89.13 ± 4.31
10 | 59.48 ± 13.32 | 57.44 ± 12.56 | 54.01 ± 14.66 | 57.84 ± 14.46 | 59.42 ± 9.34 | 69.93 ± 8.92 | 85.15 ± 9.65 | 78.46 ± 10.65 | 78.65 ± 11.23
11 | 53.18 ± 8.44 | 63.17 ± 7.43 | 68.27 ± 8.53 | 65.88 ± 6.64 | 56.44 ± 12.35 | 71.25 ± 6.34 | 76.46 ± 6.29 | 80.84 ± 5.43 | 85.32 ± 7.67
12 | 52.64 ± 9.59 | 57.51 ± 8.46 | 55.56 ± 8.67 | 58.36 ± 9.56 | 64.15 ± 7.69 | 73.91 ± 9.88 | 74.14 ± 6.57 | 80.15 ± 9.12 | 79.42 ± 8.17
13 | 48.83 ± 13.34 | 50.67 ± 9.35 | 80.12 ± 7.54 | 79.41 ± 6.87 | 83.30 ± 6.89 | 90.10 ± 7.12 | 92.72 ± 5.30 | 91.42 ± 6.46 | 94.12 ± 7.17
14 | 80.85 ± 5.65 | 84.74 ± 7.32 | 79.59 ± 6.19 | 87.13 ± 8.43 | 82.45 ± 5.10 | 78.54 ± 9.32 | 88.17 ± 5.78 | 85.62 ± 4.37 | 96.32 ± 6.82
15 | 96.27 ± 4.54 | 99.85 ± 0.71 | 84.26 ± 6.20 | 78.59 ± 6.89 | 93.22 ± 4.12 | 89.83 ± 4.76 | 97.63 ± 3.21 | 92.16 ± 3.69 | 94.34 ± 2.01
OA | 71.82 ± 3.04 | 75.78 ± 2.23 | 69.48 ± 4.17 | 70.18 ± 4.24 | 74.32 ± 4.41 | 80.35 ± 3.21 | 84.95 ± 4.17 | 83.24 ± 3.32 | 86.67 ± 4.24
AA | 75.79 ± 2.55 | 78.11 ± 3.42 | 70.12 ± 4.24 | 75.62 ± 4.21 | 79.81 ± 5.65 | 81.72 ± 4.29 | 85.35 ± 3.23 | 84.98 ± 4.37 | 89.20 ± 4.15
Kappa | 69.11 ± 5.12 | 73.41 ± 5.21 | 67.28 ± 7.32 | 68.54 ± 5.76 | 74.13 ± 5.65 | 78.91 ± 4.78 | 83.09 ± 3.67 | 85.30 ± 3.34 | 87.12 ± 5.18
Table 9. Ablation results of different modules. Bold values indicate the best results among all compared methods.
CIFM | CSDA | HCL | Metric | IP | UP | SA
✓ | × | × | OA (%) | 93.17 | 97.05 | 97.47
✓ | × | × | Kappa | 93.63 | 96.07 | 98.41
× | ✓ | × | OA (%) | 85.20 | 90.40 | 97.71
× | ✓ | × | Kappa | 82.90 | 87.14 | 97.58
✓ | × | ✓ | OA (%) | 94.76 | 97.68 | 98.56
✓ | × | ✓ | Kappa | 94.02 | 96.93 | 98.51
✓ | ✓ | ✓ | OA (%) | 96.78 | 97.77 | 99.53
✓ | ✓ | ✓ | Kappa | 96.32 | 97.04 | 99.48
Table 10. Ablation results of different attention. Bold values indicate the best results among all compared methods.
CBAM | CSDA | Metric | IP | UP | SA
× | × | OA (%) | 94.76 | 97.68 | 98.56
× | × | Kappa | 94.02 | 96.93 | 98.51
✓ | × | OA (%) | 96.61 | 97.28 | 98.90
✓ | × | Kappa | 96.13 | 96.38 | 98.77
× | ✓ | OA (%) | 96.78 | 97.77 | 99.53
× | ✓ | Kappa | 96.32 | 97.04 | 99.48
Table 11. Comparison of Params and FLOPs for different methods. Bold values indicate the best results among all compared methods.
Dataset | Metric | Context | SSAN | 3D CNN | A2S2K | Ours
IP | Params (M) | 1.211 | 87.020 | 0.975 | 0.371 | 0.578
IP | FLOPs (M) | 84.797062.9630.284170.4550.98
UP | Params (M) | 0.703 | 25.024 | 4.657 | 0.221 | 0.631
UP | FLOPs (M) | 49.521982.812.29887.3043.561
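Parameter counts like those in Table 11 can be sanity-checked with the standard closed-form expressions for convolution layers; the helpers below are a generic sketch, and the example layer shapes are illustrative rather than the actual DACINet configuration:

```python
def conv3d_params(c_in: int, c_out: int, k: int, bias: bool = True) -> int:
    """Parameters of a 3D conv with a cubic k x k x k kernel:
    c_out * c_in * k^3 weights plus one bias per output channel."""
    return c_out * (c_in * k ** 3 + (1 if bias else 0))

def conv2d_params(c_in: int, c_out: int, k: int, bias: bool = True) -> int:
    """Parameters of a 2D conv with a square k x k kernel."""
    return c_out * (c_in * k ** 2 + (1 if bias else 0))

print(conv3d_params(1, 8, 3))     # 8 * (1 * 27 + 1) = 224
print(conv2d_params(64, 128, 3))  # 128 * (64 * 9 + 1) = 73856
```

FLOPs multiply the same weight counts by the spatial (and, for 3D convolutions, spectral) extent of each output feature map, which is why convolution-heavy models report FLOPs far above their parameter counts.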
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Hu, N.; Wang, Z.; Wang, M.; Zhao, Y. Double-Attention Context Interactive Network for Hyperspectral Image Classification. Remote Sens. 2026, 18, 1059. https://doi.org/10.3390/rs18071059