Article

MSDCA: A Multi-Scale Dual-Branch Network with Enhanced Cross-Attention for Hyperspectral Image Classification

1 School of Computer Science, Qinghai Normal University, Xining 810016, China
2 Institute of Tibetan Plateau Research, Chinese Academy of Sciences, Xining 810016, China
3 Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing 210044, China
4 College of Information Science and Technology & Artificial Intelligence, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(13), 2198; https://doi.org/10.3390/rs17132198
Submission received: 26 May 2025 / Revised: 17 June 2025 / Accepted: 23 June 2025 / Published: 26 June 2025

Abstract

The high dimensionality of hyperspectral data, coupled with limited labeled samples and complex scene structures, makes spatial–spectral feature learning particularly challenging. To address these limitations, we propose a dual-branch deep learning framework named MSDCA, which performs spatial–spectral joint modeling under limited supervision. First, a multiscale 3D spatial–spectral feature extraction module (3D-SSF) employs parallel 3D convolutional branches with diverse kernel sizes and dilation rates, enabling hierarchical modeling of spatial–spectral representations from large-scale patches and effectively capturing both fine-grained textures and global context. Second, a multi-branch directional feature module (MBDFM) enhances the network’s sensitivity to directional patterns and long-range spatial relationships. It achieves this by applying axis-aware depthwise separable convolutions along both horizontal and vertical axes, thereby significantly improving the representation of spatial features. Finally, the enhanced cross-attention Transformer encoder (ECATE) integrates a dual-branch fusion strategy, where a cross-attention stream learns semantic dependencies across multi-scale tokens, and a residual path ensures the preservation of structural integrity. The fused features are further refined through lightweight channel and spatial attention modules. This adaptive alignment process enhances the discriminative power of heterogeneous spatial–spectral features. The experimental results on three widely used benchmark datasets demonstrate that the proposed method consistently outperforms state-of-the-art approaches in terms of classification accuracy and robustness. Notably, the framework is particularly effective for small-sample classes and complex boundary regions, while maintaining high computational efficiency.

1. Introduction

Hyperspectral image (HSI) classification remains a key research area within the broader field of hyperspectral data analysis, consistently attracting considerable attention from scholars and researchers [1]. This domain focuses on the systematic extraction and interpretation of the rich spectral information embedded in hyperspectral images. By leveraging the distinct spectral signatures of various objects across numerous spectral bands, it facilitates highly accurate pixel-wise classification [2]. The precision achieved in hyperspectral classification plays a crucial role, not only in advancing the capabilities of hyperspectral remote sensing but also in providing valuable contributions to numerous application areas, including precision agriculture [3], medical imaging [4], mineral exploration [5], food safety monitoring [6], and military surveillance [7]. As highlighted in [8], artificial intelligence-based methods for remote sensing data analysis still face considerable obstacles, including inconsistent sample distributions, high inter-class similarity, and complex contextual dependencies. Moreover, the high dimensionality of HSI data, coupled with limited labeled samples and complex scene structures, poses significant challenges to accurate classification. These factors often hinder the performance of conventional methods, especially in scenarios with spatial heterogeneity or label scarcity.
Early hyperspectral image classification methods predominantly relied on spectral similarity measures, such as the Bhattacharyya distance [9], spectral angle matching (SAM) [10], and the k-nearest neighbor (k-NN) algorithm [11]. Although these approaches are conceptually simple and easy to interpret, they often suffer from classification errors caused by spectral variability, that is, the “same object, different spectra” and “different objects, same spectrum” phenomena. The adoption of machine learning techniques marked a significant step forward in improving classification performance. For instance, dimensionality reduction techniques such as principal component analysis (PCA) [12] and linear discriminant analysis (LDA) [13] have proven highly effective in simplifying the high-dimensional nature of hyperspectral imagery. In addition, classification algorithms such as support vector machines (SVM) [14] and random forests (RF) [15] have been widely adopted to improve the stability and reliability of classification results. While dimensionality reduction and SVM-based techniques have been widely used, their limited ability to model high-order structural information restricts performance. In this regard, nonlocal low-rank tensor modeling has emerged as a promising direction [16], providing enhanced representation power through structured feature decomposition and nonlocal context encoding. These insights underscore the need for models that not only leverage spatial–spectral fusion but also integrate multi-scale abstraction and inter-feature relational modeling.
To better capture the spatial and spectral information in hyperspectral images, recent research has increasingly adopted deep learning approaches for classification tasks. Although neural networks have existed since the mid-20th century, the surge in computational capabilities and data availability in the early 2000s significantly propelled the progress of deep models. The emergence of deep belief networks (DBNs) [17] in 2006 addressed training difficulties in deep structures, and the success of AlexNet in the 2012 ImageNet challenge marked a turning point for deep learning in visual recognition. In the context of hyperspectral image analysis, deep models have gained prominence for their ability to autonomously and hierarchically extract representative features. For example, Yang et al. introduced a semi-supervised adversarial autoencoder, which leveraged stacked autoencoders [18] to learn more abstract spectral representations, thus enhancing classification accuracy. Building on the proven effectiveness of convolutional neural networks (CNNs) in visual data processing, Hu et al. developed a one-dimensional CNN (1D-CNN) [19] specifically designed for hyperspectral image analysis. This model captures spectral characteristics by passing the input data through a sequence of layers, including convolution, pooling, dense connections, and final classification layers. This development had a profound influence on subsequent CNN-based classification techniques. Zhao et al. later presented a two-dimensional CNN (2D-CNN) [20], which extracted spatial features by reducing the spectral dimensions. However, both of these methods focused primarily on extracting features in a single dimension. Recognizing the inherently three-dimensional structure of hyperspectral imagery, scholars such as Chen [21] and Li [22] pioneered network architectures based on 3D convolutional neural networks (3D-CNNs), allowing the simultaneous and effective extraction of spatial and spectral information. To further improve the classification accuracy, Zhang et al. proposed a hybrid approach that combined 1D-CNN to extract spectral information with 2D-CNN to capture spatial–spectral features. The outputs of these two streams were fused using various strategies, including direct addition, feature concatenation, and their weighted counterparts [23]. Roy et al. subsequently improved this framework by merging the strengths of 2D-CNN and 3D-CNN into a hierarchical architecture [24], which reduced computational complexity while simultaneously increasing classification accuracy by enabling more comprehensive spatial–spectral feature extraction.
In addition to exploring diverse CNN-based architectures for hyperspectral image classification, recent studies have increasingly introduced integrated methods that combine convolutional networks with traditional machine learning techniques and advanced computational strategies. For example, Cao et al. enhanced classification accuracy by combining CNN with Markov random fields for spatial smoothing [25]. Liang et al. extracted compact subspace features from CNN high-level outputs using sparse representation methods to improve spatial–spectral characterization [26]. Zhong et al. proposed the spatial–spectral residual network [27], which employed residual modules in both spectral and spatial branches to enhance feature extraction. This architecture improved the network’s ability to capture complex patterns, leading to more reliable classification outcomes. Attention mechanisms also emerged as a key component in CNN-based hyperspectral models, helping to address the unique spatial–spectral structure of hyperspectral data. He et al. noted that, unlike RGB images, hyperspectral imagery contained a distinct structure combining 2D spatial and spectral dimensions. To address this, they proposed the M3D-CNN [28], a multi-scale 3D convolutional model that jointly learned spatial and spectral features. Building on this direction, Hang et al. proposed a dual-branch attention-integrated CNN [29], which separated spectral and spatial processing to enhance the feature distinctiveness. Wang et al. introduced Cubic-CNN [30], an end-to-end model that incorporated dimensionality reduction while preserving both global and local characteristics. To avoid the limitations of sequential processing, Hang et al. further developed a multi-attention dual-branch architecture [31], enabling efficient parallel learning of spectral and spatial information. Ben Hamida et al. expanded this paradigm by proposing the 3D-DLA [32], which used 3D convolutions for simultaneous spatial–spectral modeling. Beyond CNN-based models, recent research also explored novel architectures such as generative adversarial networks (GANs) [33], capsule networks [34], and graph convolutional networks (GCNs) [35], aiming to overcome the limitations of traditional approaches. These models provided promising alternatives for enhancing classification performance in complex hyperspectral scenes.
While convolutional neural networks (CNNs) had traditionally served as a foundational tool for hyperspectral image analysis, the emergence of transformer-based models [36] introduced novel perspectives and opportunities for advancement in this domain. Originally proposed by Vaswani et al., the transformer architecture was subsequently adapted for a range of computer vision applications. With developments such as the vision transformer (ViT) [37], transformers gained substantial traction in tasks including image classification, object recognition, and semantic understanding. Unlike CNNs, which were constrained by fixed receptive fields, transformer models used self-attention to capture relationships between globally distributed features across an image. This advantage positioned transformers as a powerful alternative to traditional CNN-based approaches, offering improved classification accuracy in hyperspectral imagery.
Even though modeling joint spatial–spectral dependencies in hyperspectral images remains a persistent challenge, recent advances in Transformer-based architectures have introduced promising solutions. He et al. [38] proposed the spatial–spectral transformer, which incorporated dense connectivity to enhance inter-band correlations and improve classification accuracies. Qing et al. [39] developed a transformer-based network using continuous spectral attention for fine-grained spectral dependency learning. Hong et al. [40] introduced SpectralFormer, which employed grouped spectral embedding (GSE) for localized spectral feature extraction and cross-layer adaptive fusion (CAF) for dynamic inter-layer information exchange. To improve semantic representation, Sun et al. [41] proposed SSFTT, which transformed low-level features into semantic tokens and integrated CNNs with transformers for improved spatial–spectral representation. Further innovations included SPRLT-Net by Xue et al. [42], which used recursive local attention to capture fine-grained spatial relationships, and GPE by Mei et al. [43], which directed attention to localized spatial–spectral regions. Fang et al. [44] presented MAR-LT, a lightweight attention-enhanced convolutional framework, while Roy et al. [45] introduced MorphFormer, which combined morphological operations with attention modules to improve geometric feature extraction. These developments collectively highlighted the increasing potential of transformer-based models to advance hyperspectral image classification.
Current HSI classification methods have demonstrated promising performance in capturing spatial and spectral features. However, most rely on fixed-size sampling patches, which limits their ability to model multi-scale contextual information, particularly in complex or heterogeneous scenes. Additionally, generating high-quality pixel-level annotations is labor-intensive and costly, resulting in limited labeled data. This scarcity of supervision, coupled with rigid sampling strategies, constrains model adaptability and classification accuracy. To address these issues, we propose a dual-branch deep learning framework for low-supervision hyperspectral classification. The model integrates multi-scale 3D spatial–spectral convolutions and directional 2D feature encoding to capture both global context and fine-grained spatial structures. A cross-attention transformer module is further introduced to enable semantic alignment and adaptive fusion across heterogeneous branches. This design improves feature discrimination, particularly for small-sample categories and boundary-region pixels, while maintaining high classification accuracy under limited supervision. Extensive experiments on benchmark HSI datasets demonstrate the effectiveness and robustness of the proposed method.
The design of the multi-scale dual-branch network with enhanced cross-attention (MSDCA) incorporates the following three major contributions:
  • The multi-scale 3D spatial–spectral feature extraction module (3D-SSF) is designed to learn hierarchical spatial–spectral representations under complex scene conditions using three parallel 3D convolutional branches with varying kernel sizes and dilation rates. It enables the extraction of both fine-grained local features and broad contextual information, supporting effective multi-scale abstraction in heterogeneous land-cover areas.
  • The multi-branch directional feature module (MBDFM) is developed to capture axis-specific spatial patterns by applying depthwise separable convolutions along multiple orientations. Through parallel branches with horizontal, vertical, and square-shaped kernels, this module enhances directional sensitivity and long-range spatial structure modeling. As a lightweight 2D complement to the 3D branch, MBDFM improves spatial feature precision without increasing computational complexity.
  • An enhanced cross-attention transformer encoder (ECATE) is proposed to perform semantic alignment and adaptive fusion across multi-scale features. It employs a dual-path fusion strategy involving cross-attention between branch-specific tokens and residual-based structural preservation. The fused features are further refined by efficient channel attention and spatial attention, enabling more discriminative representation under complex or weakly supervised scenarios.

2. Materials and Methods

In this section, we elaborate on the proposed MSDCA framework, which is composed of a multi-scale dual-branch architecture, directional feature modeling, and transformer-based semantic fusion. Figure 1 presents the architecture of the proposed MSDCA network.
The overall information flow in MSDCA proceeds as follows: First, the input hyperspectral data undergo PCA-based dimensionality reduction and patch extraction at two different spatial scales. These patches are then fed into a dual-branch feature extraction structure, where the larger-patch branch applies the multi-scale 3D spatial–spectral feature module (3D-SSF), and the smaller-patch branch uses a lightweight convolution to preserve fine-grained spatial details. Both outputs are subsequently passed through the multi-branch directional feature module (MBDFM) to capture directional spatial patterns. The resulting features from each branch are then tokenized and input into the enhanced cross-attention transformer encoder (ECATE), where cross-attention is applied to enable deep semantic interaction between the two streams. Finally, the fused tokens are passed through channel and spatial attention modules and projected into classification scores via a fully connected head.

2.1. HSI Data Preprocessing

Hyperspectral images (HSIs) capture rich spectral information across a large number of bands. However, the resulting high-dimensional data present significant processing challenges. To address this, data preprocessing becomes crucial, as it helps extract meaningful features while reducing computational complexity. Among various preprocessing methods, principal component analysis (PCA) is a commonly used and effective technique for reducing the dimensionality of HSI data. Consider the original hyperspectral image $I \in \mathbb{R}^{M \times N \times L}$, where M and N represent the spatial dimensions (height and width), and L is the number of spectral bands. PCA is then applied along the spectral dimension to reduce the number of bands from L to l, resulting in a transformed image $I_{\mathrm{PCA}} \in \mathbb{R}^{M \times N \times l}$. This dimensionality reduction effectively removes redundant spectral information while maintaining full spatial resolution, ensuring that the essential spatial structure of the image remains intact.
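As a concrete illustration of this step, the following minimal sketch (not the authors' released code) applies PCA along the spectral axis of an HSI cube; the variable names and the choice of scikit-learn are assumptions made for readability.

```python
# Minimal sketch of spectral PCA for an HSI cube of shape (M, N, L) -> (M, N, l).
import numpy as np
from sklearn.decomposition import PCA

def reduce_bands(img: np.ndarray, n_components: int = 30) -> np.ndarray:
    """Apply PCA along the spectral dimension while keeping the spatial layout."""
    M, N, L = img.shape
    flat = img.reshape(-1, L)                   # treat every pixel as one L-dim sample
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(flat)           # (M*N, l)
    return reduced.reshape(M, N, n_components)  # spatial structure remains intact
```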
In hyperspectral classification, the sample size is often limited. To make the most of the available training data and enhance the model’s generalization capability, patches of varying sizes are used for feature extraction. To acquire information across multiple scales, the model extracts two patches of different dimensions, $L_1^p \in \mathbb{R}^{s_1 \times s_1 \times l}$ and $L_2^p \in \mathbb{R}^{s_2 \times s_2 \times l}$, where $s_1 \times s_1$ and $s_2 \times s_2$ denote the respective window sizes. The label assigned to each patch corresponds to the label of its central pixel. During patch extraction, a padding operation is applied to the image boundaries to ensure that edge pixels are adequately handled. Once all 3D patches are generated, those with label values of 0 are discarded. The two extracted variables are then combined to construct a dataset, which serves as input to the network. The generated data samples corresponding to each pixel are stored in set A. Following a given sampling rate, the dataset is randomly divided into training and testing subsets, both of which retain the corresponding ground-truth labels $Y \in \mathbb{R}^{M \times N}$.
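The patch-extraction and splitting procedure described above can be sketched as follows. This is a hedged illustration: the window sizes, the reflect padding mode, and the 1% sampling ratio are assumptions drawn from the experimental settings reported later, not from released code.

```python
# Sketch of dual-scale patch extraction with center-pixel labels and a random split.
import numpy as np

def extract_patches(img_pca, labels, s1=13, s2=7):
    """Return (large_patch, small_patch, label) triplets for labeled pixels only."""
    M, N, _ = img_pca.shape
    p1, p2 = s1 // 2, s2 // 2
    padded = np.pad(img_pca, ((p1, p1), (p1, p1), (0, 0)), mode="reflect")
    samples = []
    for i in range(M):
        for j in range(N):
            y = labels[i, j]
            if y == 0:                                # discard unlabeled pixels
                continue
            big = padded[i:i + s1, j:j + s1, :]       # s1 x s1 x l window around (i, j)
            off = p1 - p2
            small = big[off:off + s2, off:off + s2]   # centered s2 x s2 x l window
            samples.append((big, small, y - 1))       # label taken from the center pixel
    return samples

def split_samples(samples, train_ratio=0.01, seed=0):
    """Randomly divide set A into training and test subsets (roughly 1:99)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = max(1, int(train_ratio * len(samples)))
    return [samples[k] for k in idx[:n_train]], [samples[k] for k in idx[n_train:]]
```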

2.2. Double-Branch Multi-Level Spatial–Spectral Feature Extraction Module

As illustrated in Figure 1, the model adopts a dual-branch architecture, where each branch is dedicated to extracting features from image patches of different sizes. The branch processing larger patches, represented as $L_1^p$, is designed to capture global features, while the branch handling smaller patches, denoted as $L_2^p$, emphasizes the modeling of local features. This structure allows the network to effectively learn both global context and fine-grained spatial details by leveraging multi-scale input cubes.
In the first branch, corresponding to the larger patch size $s_1 \times s_1$, a multi-scale 3D spectral–spatial joint feature extraction module (3D-SSF) is constructed to extract features using convolutional kernels with varying dilation rates. Once the input feature cube is processed through the 3D-SSF module, three parallel convolutional paths are applied. The first path utilizes eight convolution kernels of size $3 \times 3 \times 3$ with a dilation rate of 2 to capture features within a medium receptive field. The second path employs eight kernels of size $5 \times 5 \times 5$, also with a dilation rate of 2, to extract information from a broader spatial context. To obtain global receptive field features, the third path applies eight convolution kernels of size $7 \times 7 \times 7$. The outputs from these three paths are then fused with their corresponding inputs through element-wise addition, resulting in the final multi-scale 3D feature representation denoted as $L_1^{3D}$. DConv indicates the use of dilated convolution:
$L_1^{3D} = \mathrm{DConv3D}_{(3 \times 3 \times 3)}(L_1^p) \oplus \mathrm{DConv3D}_{(5 \times 5 \times 5)}(L_1^p) \oplus \mathrm{3DConv}_{(7 \times 7 \times 7)}(L_1^p) \oplus L_1^p$
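A minimal PyTorch sketch of the 3D-SSF idea is given below. The channel count, the ReLU activation, and the "same" padding values are assumptions chosen so that the three branch outputs and the input can be added element-wise; the paper itself only specifies the kernel sizes, dilation rates, and the number of kernels (eight).

```python
# Sketch of the multi-scale 3D spatial-spectral feature extraction (3D-SSF) branch.
import torch
import torch.nn as nn

class SSF3D(nn.Module):
    def __init__(self, in_ch=1, out_ch=8):
        super().__init__()
        # "same" padding per axis: dilation * (kernel_size - 1) // 2
        self.branch3 = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=2, dilation=2)
        self.branch5 = nn.Conv3d(in_ch, out_ch, kernel_size=5, padding=4, dilation=2)
        self.branch7 = nn.Conv3d(in_ch, out_ch, kernel_size=7, padding=3, dilation=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                     # x: (B, 1, l, s1, s1)
        out = self.branch3(x) + self.branch5(x) + self.branch7(x)
        return self.act(out + x)              # element-wise fusion with the input (broadcast over channels)

patch = torch.randn(4, 1, 30, 13, 13)         # a batch of large-scale patches
print(SSF3D()(patch).shape)                   # torch.Size([4, 8, 30, 13, 13])
```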
In the second branch, which corresponds to the smaller patch size $s_2 \times s_2$, a 3D convolutional kernel of size $1 \times 3 \times 3$ is employed. In this configuration, the kernel maintains a fixed size of 1 along the spectral axis, while a size of 3 is used in the spatial dimensions. This design allows the receptive field to expand solely in the spatial domain, preserving the spectral dimension without alteration. As a result, local spectral features are maintained while spatial context is effectively captured:
$L_2^{3D} = \mathrm{3DConv}_{(1 \times 3 \times 3)}(L_2^p)$
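For completeness, the small-patch branch reduces to a single 3D convolution whose kernel spans only the spatial axes; the output channel count below is an assumption chosen to mirror the first branch.

```python
# Small-patch branch: the (1, 3, 3) kernel widens the receptive field spatially
# while leaving the spectral dimension untouched.
import torch.nn as nn

branch2 = nn.Conv3d(in_channels=1, out_channels=8,
                    kernel_size=(1, 3, 3), padding=(0, 1, 1))
# input (B, 1, l, s2, s2) -> output (B, 8, l, s2, s2)
```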
After completing 3D feature extraction, a multi-branch directional feature module (MBDFM) is introduced to further enhance spatial feature representation, capture spatial dependencies across multiple scales, and reduce computational cost. This module adopts a multi-branch parallel structure and leverages depthwise separable convolutions (DWConv) to extract spatial features at different scales. Residual connections are also incorporated to improve the expressiveness of the features and maintain the stability of the gradient flow. The MBDFM consists of three parallel branches, each tailored to capture features at a specific scale. The first branch applies a $3 \times 3$ DWConv to extract local textures and short-range spatial dependencies. The second branch uses a $1 \times 11$ DWConv to model long-range dependencies along the horizontal direction. Finally, the third branch utilizes an $11 \times 1$ DWConv to capture long-range dependencies along the vertical direction. The outputs from the three branches are concatenated to form a unified multi-scale joint feature representation. To further improve feature representation, residual connections are applied to the fused features, leading to the final output representations:
$F_1 = \mathrm{Concat}\big(\mathrm{DWConv}_{(3 \times 3)}(L_1^{3D}),\ \mathrm{DWConv}_{(1 \times 11)}(L_1^{3D}),\ \mathrm{DWConv}_{(11 \times 1)}(L_1^{3D})\big) \oplus L_1^{3D}$
$F_2 = \mathrm{Concat}\big(\mathrm{DWConv}_{(3 \times 3)}(L_2^{3D}),\ \mathrm{DWConv}_{(1 \times 11)}(L_2^{3D}),\ \mathrm{DWConv}_{(11 \times 1)}(L_2^{3D})\big) \oplus L_2^{3D}$
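The following sketch illustrates the MBDFM structure. It uses plain depthwise convolutions for the three directional branches and adds a pointwise 1x1 projection so that the concatenated output can be added back to the input; that projection is my assumption to keep shapes compatible and is not stated explicitly in the text. The module operates on 2D maps, so the 3D outputs are assumed to have been reshaped to (B, C, H, W) with C = 8 * l beforehand.

```python
# Sketch of the multi-branch directional feature module (MBDFM).
import torch
import torch.nn as nn

class MBDFM(nn.Module):
    def __init__(self, ch):
        super().__init__()
        def dw(kernel, pad):                       # depthwise convolution helper
            return nn.Conv2d(ch, ch, kernel, padding=pad, groups=ch)
        self.local = dw(3, 1)                      # local texture / short-range context
        self.horiz = dw((1, 11), (0, 5))           # long-range horizontal dependencies
        self.vert  = dw((11, 1), (5, 0))           # long-range vertical dependencies
        self.point = nn.Conv2d(3 * ch, ch, 1)      # assumed pointwise fusion before the residual

    def forward(self, x):                          # x: (B, C, H, W)
        y = torch.cat([self.local(x), self.horiz(x), self.vert(x)], dim=1)
        return self.point(y) + x                   # residual connection
```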

2.3. Enhanced Cross-Multi-Attention Transformer Encoder Module

Once the multi-scale 2D feature information is extracted from the dual-branch multi-level spatial–spectral feature extraction module, a tokenization step is performed to improve compatibility with the transformer framework. The extracted features $F_1$ and $F_2$ are first flattened into sequences and then projected through linear transformations to obtain token representations, which are subsequently weighted using the softmax function.
Then, the learnable classification token $T_{\mathrm{cls}}$ is added as an additional feature, followed by the positional encoding $PE_{\mathrm{pos}}$, resulting in the final token sequences:
$T_1 = [T_{f_1}; T_{\mathrm{cls}}] \oplus PE_{\mathrm{pos}}, \qquad T_2 = [T_{f_2}; T_{\mathrm{cls}}] \oplus PE_{\mathrm{pos}}$
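A hedged sketch of this tokenization step is shown below; the embedding width and the learnable (rather than fixed) positional encoding are assumptions, and the softmax-based feature weighting mentioned above is omitted for brevity.

```python
# Sketch of tokenization: flatten feature maps, project to tokens, append a CLS
# token, and add a positional encoding.
import torch
import torch.nn as nn

class Tokenizer(nn.Module):
    def __init__(self, in_ch, n_tokens, d=64):
        super().__init__()
        self.proj = nn.Linear(in_ch, d)                    # per-position linear projection
        self.cls = nn.Parameter(torch.zeros(1, 1, d))      # learnable classification token
        self.pos = nn.Parameter(torch.zeros(1, n_tokens + 1, d))

    def forward(self, f):                                  # f: (B, C, H, W)
        t = f.flatten(2).transpose(1, 2)                   # (B, H*W, C) token sequence
        t = self.proj(t)                                   # (B, H*W, d)
        cls = self.cls.expand(t.size(0), -1, -1)
        return torch.cat([t, cls], dim=1) + self.pos       # (B, H*W + 1, d)
```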
However, these tokens still represent localized features derived from different layers and branches of the network. To enhance interdependence and interaction among these tokens, an enhanced cross-multi-attention feature fusion module is introduced, as illustrated in Figure 2.
Following the notation introduced in the previous module, the extracted tokens $T_1$ and $T_2$ are processed through convolutional layers with different kernel sizes to generate their corresponding attention tensors. For $T_1$, a $3 \times 3$ 2D convolution with padding of 1 is first applied to obtain $Q_{T_1}$. Then, a $5 \times 5$ dilated convolution with padding of 2 is used to produce $K_{T_1}$. Finally, a $3 \times 3$ dilated convolution with a dilation rate of 2 is used to obtain $V_{T_1}$. In this case, the convolution kernel is a $3 \times 3$ matrix, but the dilation rate of 2 means that the kernel elements are spaced one pixel apart. This increases the receptive field of the convolution operation, allowing the model to capture a wider context in the input without increasing the number of parameters. Similarly, $T_2$ is first passed through a $3 \times 3$ convolution with padding of 1 to generate $Q_{T_2}$, followed by a $3 \times 3$ convolution with a dilation factor of 2 to obtain $K_{T_2}$. Lastly, a $5 \times 5$ convolution is performed to obtain $V_{T_2}$. Through this process, $T_1$ is enriched by integrating information from $V_{T_2}$, and $T_2$ is refined using $V_{T_1}$, thereby enabling complementary fusion of global semantic information.
Subsequently, deep interactions between the two branches are facilitated using the cross-attention mechanism, resulting in the generation of cross-features $A_{T_1}$ and $A_{T_2}$:
$A_{T_1} = \mathrm{Softmax}\!\left(\dfrac{Q_{T_1} K_{T_1}^{\top}}{\sqrt{d_k}}\right) V_{T_2}$
$A_{T_2} = \mathrm{Softmax}\!\left(\dfrac{Q_{T_2} K_{T_2}^{\top}}{\sqrt{d_k}}\right) V_{T_1}$
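The core of the cross-attention exchange in the two equations above can be sketched as follows. For brevity, the convolutional Q/K/V generators described earlier are replaced here with linear projections, and both branches are assumed to produce token sequences of equal length; neither simplification is part of the original design.

```python
# Sketch of the cross-attention exchange: each branch forms queries and keys from
# its own tokens but attends over the other branch's values.
import math
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.q1, self.k1, self.v1 = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
        self.q2, self.k2, self.v2 = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
        self.scale = 1.0 / math.sqrt(d)

    def forward(self, t1, t2):                     # t1, t2: (B, N, d)
        a1 = torch.softmax(self.q1(t1) @ self.k1(t1).transpose(1, 2) * self.scale, dim=-1)
        a2 = torch.softmax(self.q2(t2) @ self.k2(t2).transpose(1, 2) * self.scale, dim=-1)
        A_T1 = a1 @ self.v2(t2)                    # branch 1 attends over branch 2 values
        A_T2 = a2 @ self.v1(t1)                    # branch 2 attends over branch 1 values
        return A_T1, A_T2
```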
To enable effective interaction between $A_{T_1}$ and $A_{T_2}$, we concatenate and normalize them before feeding them into an MLP to derive $A_{\mathrm{cross}}$:
$A_{\mathrm{cross}} = \mathrm{MLP}\big(\mathrm{LN}(\mathrm{Concat}(A_{T_1}, A_{T_2}))\big)$
To capture local interactions between $T_1$ and $T_2$, we employ a 2D convolution on their concatenated representation, resulting in the fused output $A_{\mathrm{fused}}$:
$A_{\mathrm{fused}} = \mathrm{2DConv}\big(\mathrm{Concat}(T_1, T_2)\big)$
During the fusion stage, we apply a grouped $1 \times 1$ convolution to both $A_{\mathrm{cross}}$ and $A_{\mathrm{fused}}$, followed by BatchNorm and ReLU, to perform a lightweight channel-wise transformation. This operation reduces computational complexity while maintaining inter-channel dependencies. To further enhance the quality of the fused features, the module integrates both the efficient channel attention (ECA) and spatial attention (SA) mechanisms, as depicted in Figure 3. Specifically, ECA computes channel-wise weights by applying adaptive average pooling followed by a one-dimensional convolution, enabling dynamic reweighting of channel responses. In parallel, SA emphasizes spatially informative regions to complement the channel-wise enhancement. The final fused feature, denoted as $\hat{T}$, is subsequently used as the input to the classifier. The complete process can be formulated as
$A_{\mathrm{ECA}} = \sigma\big(\mathrm{2DConv}(\mathrm{2DGroupConv}(A_{\mathrm{cross}}) \oplus \mathrm{2DGroupConv}(A_{\mathrm{fused}}))\big) \otimes A_{\mathrm{fused}} \oplus A_{\mathrm{fused}}$
$\hat{T} = \mathrm{SA}(\mathrm{ECA}(A_{\mathrm{ECA}}))$
where $\sigma$ is the sigmoid activation function, $\mathrm{2DGroupConv}(\cdot)$ is a 2D grouped convolution that processes grouped channels separately to improve efficiency, $\oplus$ denotes element-wise addition, and $\otimes$ denotes element-wise multiplication.
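The refinement stage can be approximated by the sketch below, which combines a grouped 1x1 transformation of the two fusion streams with ECA-style channel reweighting and a CBAM-style spatial gate. The ECA kernel size, the group count, and the use of mean/max descriptors for spatial attention are assumptions; only the overall grouped-convolution + ECA + SA structure is taken from the text.

```python
# Sketch of the grouped-convolution + ECA + spatial-attention refinement.
import torch
import torch.nn as nn

class ECASpatialRefine(nn.Module):
    def __init__(self, ch, groups=4):
        super().__init__()
        self.g_cross = nn.Sequential(nn.Conv2d(ch, ch, 1, groups=groups),
                                     nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
        self.g_fused = nn.Sequential(nn.Conv2d(ch, ch, 1, groups=groups),
                                     nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
        self.eca = nn.Conv1d(1, 1, kernel_size=3, padding=1, bias=False)  # ECA: 1D conv across channels
        self.sa = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)   # spatial attention map

    def forward(self, a_cross, a_fused):                  # both: (B, C, H, W)
        z = self.g_cross(a_cross) + self.g_fused(a_fused)
        # ECA: global average pooling -> 1D conv over the channel axis -> sigmoid gate
        w = self.eca(z.mean(dim=(2, 3)).unsqueeze(1)).sigmoid()
        y = w.transpose(1, 2).unsqueeze(-1) * a_fused + a_fused  # reweight plus residual
        # Spatial attention: mean/max channel descriptors -> 7x7 conv -> sigmoid gate
        s = self.sa(torch.cat([y.mean(1, keepdim=True), y.max(1, keepdim=True)[0]], dim=1))
        return y * s.sigmoid()                            # refined feature fed to the classifier
```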
The complete procedure of the proposed MSDCA is outlined in Algorithm 1.
Algorithm 1 MSDCA Model.
Input: The input HSI data $I \in \mathbb{R}^{M \times N \times L}$ with ground-truth labels $Y \in \mathbb{R}^{M \times N}$. PCA parameter $l = 30$. Extract two sets of feature cubes, $L_1^p \in \mathbb{R}^{s_1 \times s_1 \times l}$ and $L_2^p \in \mathbb{R}^{s_2 \times s_2 \times l}$, where $s_1 > s_2$. The dataset is divided into training and test subsets at a ratio of 1:99.
Output: Predicted labels for the test dataset.
1: Set the batch size to 64, the learning rate to $5 \times 10^{-4}$, and the number of training epochs to $\varepsilon = 500$.
2: Represent the dataset after PCA dimensionality reduction as $I_{\mathrm{PCA}} \in \mathbb{R}^{M \times N \times l}$.
3: Extract the data $L_1^p$ and $L_2^p$ and place them into a set A. Then, divide the set A into training and test subsets using a sampling ratio of 1:99.
4: for $i = 1$ to $\varepsilon$ do
5:    Process $L_1^p$ and $L_2^p$ with the dual-branch multi-level spatial–spectral feature extraction module to generate features $F_1$ and $F_2$.
6:    Obtain the token sequences $T_1$ and $T_2$ from $F_1$ and $F_2$ through Equation (5).
7:    Pass the tokens $T_1$ and $T_2$ through the ECATE to obtain the fused spectral–spatial enhanced feature $\hat{T}$ through Equation (11).
8:    Pass the learnable classification token through a linear layer and compute the classification probabilities using the Softmax function.
9: end for
10: Apply the trained model to the test dataset to generate predicted labels.
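A condensed sketch of this training loop, assuming the dual-scale patch samples have been wrapped into standard PyTorch datasets and that `MSDCA` is the assembled dual-branch model, is given below; the Adam optimizer is an assumption, and the dynamic learning-rate schedule mentioned in Section 3.4 is omitted for brevity.

```python
# Sketch of Algorithm 1: batch size 64, learning rate 5e-4, 500 epochs, 1:99 split.
import torch
from torch.utils.data import DataLoader

def train_msdca(model, train_set, test_set, device="cuda", epochs=500):
    loader = DataLoader(train_set, batch_size=64, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=5e-4)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.to(device)
    for _ in range(epochs):
        model.train()
        for big, small, y in loader:                   # dual-scale patches and labels
            big, small, y = big.to(device), small.to(device), y.to(device)
            loss = loss_fn(model(big, small), y)       # forward pass through both branches
            opt.zero_grad()
            loss.backward()
            opt.step()
    model.eval()                                       # predict labels for the test subset
    with torch.no_grad():
        preds = [model(b.to(device), s.to(device)).argmax(1).cpu()
                 for b, s, _ in DataLoader(test_set, batch_size=64)]
    return torch.cat(preds)
```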

2.4. Classifier Head

For the HSI classification task, the fully connected classifier head (FCCH) is employed to produce the final classification output. Specifically, a learnable classification token (CLS token) is extracted from the enhanced cross-multi-attention transformer encoder module, which encapsulates a global representation of the entire input. This token is then passed through the multilayer perceptron (MLP) classification head to generate the final prediction. The MLP head consists of several fully connected (FC) layers. Each of these layers is followed by layer normalization (LN), thus generating a prediction vector $I \in \mathbb{R}^{1 \times C}$, where C denotes the total number of categories. To interpret these outputs probabilistically, a Softmax function is applied across the vector elements, converting the raw logits into a categorical probability distribution that satisfies the normalization condition $\sum_{i=1}^{C} p_i = 1$. The class label assigned to each input sample is then determined by identifying the index corresponding to the highest probability in this distribution.
The entire classification process can be formulated as follows:
$I = \mathrm{Softmax}\big(\mathrm{FC}(\mathrm{LN}(\hat{T}_{\mathrm{cls}}))\big)$
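A minimal sketch of this head is shown below; during training, the softmax is typically folded into the cross-entropy loss, so the explicit call here mirrors the equation rather than a training-time necessity.

```python
# Sketch of the classifier head: LayerNorm + fully connected layer + softmax on the CLS token.
import torch.nn as nn

class ClassifierHead(nn.Module):
    def __init__(self, d, num_classes):
        super().__init__()
        self.norm = nn.LayerNorm(d)
        self.fc = nn.Linear(d, num_classes)

    def forward(self, t_cls):                  # t_cls: (B, d) CLS token from the ECATE
        return self.fc(self.norm(t_cls)).softmax(dim=-1)
```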

3. Results

In this section, we evaluate the effectiveness of the proposed methods through extensive experiments conducted on three publicly available datasets. First, we provide an overview of the three datasets used in the study. Then, the experimental setup is detailed, followed by a parametric analysis. Next, we present the classification results along with an evaluation of our method and comparative approaches. Finally, ablation experiments are systematically conducted to examine how each module influences the model’s performance, allowing for a clearer understanding of their individual contributions.
Table 1 presents the sample distribution for the training and test sets, detailing the specific data for each category division.

3.1. Data Description

Our proposed model is validated on three publicly available datasets, which are described in the following.
(1) Houston2013 dataset: The Houston 2013 dataset is collaboratively provided by the University of Houston research group and the U.S. National Mapping Center. It comprises 15 land cover classes and includes 144 spectral bands spanning a wavelength range of 0.38∼1.05 μm. The dataset consists of 349 × 1905 pixels with a spatial resolution of 2.5 m/pixel. Pseudo-color images and ground-truth classification maps are illustrated in Figure 4a,b.
(2) Pavia University: The Pavia University dataset was acquired in 2001 over the University of Pavia in northern Italy using the Reflectance Optical System Imaging Spectrometer (ROSIS) sensor. The original dataset contains 115 spectral bands covering a wavelength range from 0.43 to 0.86 micrometers. The image has spatial dimensions of 610 × 340 pixels and a spatial resolution of 1.3 m/pixel, and it includes nine land cover categories. To reduce the impact of noise, 12 noisy bands were removed during the experimental process. Pseudo-color images and the corresponding ground truth classification maps are shown in Figure 5a,b.
(3) Trento dataset: The Trento dataset was acquired using the AISA Eagle hyperspectral imaging sensor over a rural area located south of Trento, Italy. It consists of 63 spectral bands covering a wavelength range from 0.42∼0.99 μm, with an image size of 600 × 166 pixels used for classification. The spatial resolution is 1 m/pixel, and the dataset includes six land cover categories. Figure 6a,b display the pseudo-color composite of the hyperspectral data and the corresponding ground truth classification map, respectively.

3.2. Parameter Analysis

Key hyperparameters of the proposed model, namely, the batch size and the sizes of the first and second cubic patches, were systematically analyzed through experimental evaluation. The results, shown in Figure 7, Figure 8 and Figure 9, offer insight into their optimal configurations.
(1) Batch Size: Batch size plays a vital role in shaping the behavior and performance of deep learning models. It affects not only how efficiently the network is trained and how much memory is utilized but also influences the model’s predictive accuracy and ability to generalize to unknown data. Employing a larger batch size typically speeds up convergence due to more stable gradient estimates but comes at the cost of increased memory usage. Conversely, smaller batch sizes, while less demanding computationally, can introduce higher variance in gradient updates, potentially leading to overfitting or unstable training dynamics. Given these trade-offs, determining the most suitable batch size requires balancing hardware limitations with the needs of the specific learning task. To explore this balance, we conducted a series of experiments using batch sizes selected from the set {16, 32, 64, 128, 256}, ensuring that all other hyperparameters remained fixed. The experimental findings revealed that a batch size of 64 yielded the most favorable classification results in the evaluation metrics, indicating its effectiveness under the given model and dataset conditions.
(2) Patch Size: In hyperspectral image (HSI) classification, the size of the input patch plays a critical role in determining the model performance. Larger patches tend to capture more extensive spatial contexts, which can enrich feature representation and support more informed predictions. However, this advantage comes at the cost of increased memory usage and computational complexity. In contrast, smaller patches highlight fine-grained local structures but may fail to preserve essential spatial dependencies among neighboring pixels. To balance these factors, the proposed model adopts a dual-branch architecture that leverages feature representations at multiple spatial scales. Specifically, one branch processes a relatively larger patch to gather broader contextual cues, while the other focuses on a smaller patch to retain local detail. This multiscale strategy makes patch size a key variable that influences classification accuracy.
To identify the most effective patch configurations, we conducted controlled experiments by independently adjusting the patch size for each branch while keeping all other hyperparameters constant. For the first branch, patch sizes were selected from the set {9, 11, 13, 15, 17}, and for the second branch, from {3, 5, 7, 9, 11}. The evaluation results indicate that the highest classification accuracy is achieved when the first branch uses a patch size of 13 and the second branch operates on a patch size of 7.

3.3. Classification Results and Analysis

To validate the effectiveness of our proposed model, we conduct a series of comparative experiments against several widely used baseline methods, including SVM [14], 1D-CNN [19], 3D-CNN [21], M3D-CNN [28], 3D-DLA [32], SSFTT [41], MorphFormer [45], and TNAAC [46]. For each baseline, we adhere to the original implementations by preserving their network configurations and training protocols as documented in their respective publications. Moreover, to ensure a fair comparison, all models are trained and evaluated using the same number of samples, which are randomly selected according to the ratios listed in Table 1. A visual summary of the classification results across multiple datasets is provided in Figure 10, allowing for an intuitive performance comparison. The experimental findings clearly indicate that our MSDCA model achieves superior classification accuracy across the board, outperforming all other approaches in a consistent manner.
(1) Quantitative Results and Analysis: The results of the experiments are presented in Table 2, Table 3 and Table 4, where the best-performing scores are distinctly marked for clarity. These evaluations were conducted on three widely used hyperspectral image datasets: Houston2013, Pavia University, and Trento. Performance was assessed using several standard classification indicators, including overall accuracy (OA), average accuracy (AA), the Kappa coefficient, and classification accuracy for each individual category. Taking the Houston2013 dataset as a representative example, MSDCA achieves the highest accuracy in categories such as “StressedGrass”, “Water”, “Residential”, “Road”, “ParkingLot1”, “TennisCourt”, and “RunningTrack”. Even in classes such as “HealthyGrass”, “SyntheticGrass”, and “Soil”, where MSDCA does not produce the highest accuracy, it still achieves highly competitive results. In contrast, traditional approaches such as SVM and 1D-CNN demonstrate inferior performance in specific categories. This clearly highlights that, especially in limited-sample scenarios, MSDCA effectively captures multiscale features and fully leverages the spatial–spectral characteristics of HSI data, thus significantly boosting classification performance. Moreover, our method achieves superior results in the Houston2013 dataset, mainly due to the discrete and localized distribution of sample points within this dataset. In contrast, the other two datasets typically contain broader regions of homogeneous classes. Consequently, our model exhibits a semantic modeling advantage in the Houston2013 dataset, thus providing improved classification accuracy.
Our method achieves the best precision in classifying categories including “Asphalt”, “Tree”, “MetalSheets”, and “Shadows” in the Pavia University dataset, as well as “AppleTrees”, “Woods”, “Vineyard”, and “Roads” in the Trento dataset. Conversely, categories such as “Meadows”, “Bitumen”, and “Bricks” in the Pavia University dataset, and “Building” and “Ground” in the Trento dataset, demonstrate moderate classification performance. As indicated in Table 3, the TNCCA model achieves similar or slightly superior accuracy compared to MSDCA for certain individual classes. This is largely attributed to the relatively limited number of samples within these specific categories. Moreover, due to the percentage-based random sampling approach employed, these categories inherently possess fewer training instances, resulting in notable class imbalance.
(2) Visual Evaluation and Analysis: Figure 11, Figure 12 and Figure 13 provide visual representations of the classification results through corresponding classification maps. These visualizations, when compared with the spatial distribution of noise observed in the original hyperspectral images, offer intuitive insight into the performance differences among competing methods. From these comparisons, it becomes evident that our proposed approach yields more precise and coherent classification outcomes, effectively reducing misclassification in complex or noisy regions, and consistently surpassing the performance of other evaluated models.
Visually, the classification maps generated by the MSDCA model display clear spatial delineation and minimal background interference. In contrast, the outputs produced by alternative methods generally suffer from less distinct boundaries and a higher presence of classification noise. Taking the Houston2013 dataset as a representative case, the map predicted by our model exhibits strong visual agreement with the ground truth annotations. However, baseline models such as SVM, 1D-CNN, 3D-CNN, M3D-CNN, and 3D-DLA tend to generate more frequent misclassifications and noise-induced artifacts, particularly in complex regions. A closer examination of the zoomed-in sections reveals that MSDCA achieves superior performance in accurately identifying specific land cover types, such as “ParkingLot1”, “StressedGrass”, and “Road”. These improvements are evident in the clarity and consistency of the classified regions. The improved performance of MSDCA on the Houston2013 dataset stems from both the structural complexity of the data and the architectural design of our model. The presence of spectrally similar yet spatially fragmented classes, such as “ParkingLot1” and “StressedGrass”, challenges traditional models. MSDCA addresses this through its dual-branch structure, which combines local spatial–spectral detail extraction with global contextual modeling. Furthermore, directional encoding via MBDFM and semantic fusion through ECATE enhance the model’s ability to handle intricate spatial layouts, resulting in clearer boundaries and reduced misclassification, as illustrated in Figure 12 and Table 2. Likewise, in the Pavia University dataset, the model demonstrates its ability to distinguish the “Bare Soil” category with high precision. While competing approaches often misclassify this region, which introduces erroneous green patches, our method substantially reduces both classification errors and visual noise. These observations underscore the enhanced robustness and spatial awareness of the MSDCA framework across diverse scenarios.
Ultimately, the experimental results confirm that our model consistently outperforms alternative techniques, showcasing its robustness and efficiency in feature representation, even in scenarios with limited samples.

3.4. Analysis of Inference Speed

To assess the inference performance of the MSDCA framework, we recorded both training and evaluation durations across the benchmark datasets. The results indicate that the proposed model exhibits notable computational efficiency, managing to complete 500 training epochs within a compact overall runtime. It is worth noting that model evaluation was conducted after each training epoch to ensure consistent performance monitoring. This design resulted in a cumulative testing time that exceeded the total training time. To further accelerate model convergence, a dynamic learning rate adjustment strategy was implemented throughout the training process. Among the datasets, Pavia University, distinguished by its high spatial and spectral resolution, required the longest total training time of approximately 1.364 min. Nevertheless, the per-epoch training duration remained low, averaging around 0.163 s. The training processes for the remaining datasets were even more time-efficient. As reflected in Table 5, these findings confirm that MSDCA not only delivers top-tier classification accuracy but also achieves fast convergence and strong runtime performance, validating its practicality for real-world classification tasks.

3.5. Ablation Analysis

To better understand the individual impact of core components within our architecture, we performed an ablation analysis on the Houston2013 dataset. This investigation targeted four primary modules: the 3D spatial–spectral feature extraction module (3D-SSF), the multi-branch directional feature module (MBDFM), the cross-attention mechanism (CA), and the enhanced cross-attention transformer encoder (ECATE). We systematically tested five different architectural variants, each comprising different combinations of the above modules. Their performances were assessed using three standard metrics: overall accuracy (OA), average accuracy (AA), and the Kappa coefficient ( κ ). A comprehensive summary of these experimental results can be found in Table 6, offering clear insights into the contribution of each module to overall classification effectiveness.
To evaluate the effectiveness of each component in our model, we conducted ablation studies by selectively disabling modules or replacing them with simpler alternatives. Case 1 is a baseline model using standard 3D and 2D convolutions without any cross-attention or directional encoding modules. Under this configuration, the model achieved an OA of 87.85%, an AA of 88.56%, and a κ coefficient of 86.87%. Case 2 incorporates the cross-attention (CA) mechanism to replace the vanilla transformer encoder, enabling semantic interactions between dual-branch features. The experimental findings reveal that this substitution increased OA to 90.59%, AA to 91.36%, and κ to 89.83%, highlighting the role of CA in strengthening feature interactions. Case 3 adds the multi-branch directional feature module (MBDFM) in place of conventional 2D convolutions to enhance spatial pattern modeling. This led to an increase in OA to 91.89%, AA to 92.50%, and κ to 91.23%. These results validate the effectiveness of MBDFM in improving classification performance. Case 4 introduces the enhanced cross-attention transformer encoder (ECATE) to better align multi-scale tokens and capture inter-branch dependencies. This enhancement further increased OA, AA, and κ to 93.10%, 93.08%, and 92.54%, respectively, reinforcing the significance of ECATE in feature fusion. Case 5 represents the full MSDCA architecture, which integrates 3D-SSF, MBDFM, CA, and ECATE. It achieved the highest classification performance, with an OA of 93.94%, an AA of 94.40%, and a κ coefficient of 93.45%. These results highlighted the synergistic effect of different modules, significantly enhancing feature extraction and classification performance. In conclusion, this study demonstrates the positive contributions of 3D-SSF, MBDFM, and ECATE in improving classification accuracy within the network.

4. Discussion

This study proposed a novel multi-scale dual-branch cross-attention (MSDCA) framework for hyperspectral image classification, aiming to enhance both spatial–spectral feature representation and inter-channel dependency modeling. Unlike conventional CNN-based architectures that rely solely on fixed receptive fields or stacked convolutions, the MSDCA introduces a collaborative mechanism that adaptively integrates hierarchical feature maps through multi-scale fusion and selectively emphasizes informative features via cross-attention guidance. This design enables the model to better extract discriminative patterns in complex hyperspectral data.
From the classification results across the Houston2013, Trento, and Pavia University datasets, MSDCA consistently achieved superior performance compared to state-of-the-art methods, with notable improvements in Overall Accuracy (OA), Average Accuracy (AA), and Kappa coefficient. Particularly in challenging categories such as “Stressed Grass”, “Residential”, and “Parking Lot 2”, which are often prone to misclassification due to intra-class variability or sample imbalance, MSDCA demonstrated remarkable robustness and sensitivity. These results validate the hypothesis that combining multi-scale spatial–spectral features with a dynamic dual-branch attention structure contributes to stronger class discrimination and higher model reliability.
It is worth noting that MSDCA not only excels in major classes with ample training samples but also maintains stable accuracy in minor categories. This suggests that the architecture effectively mitigates the influence of sample imbalance, likely due to the improved context aggregation and channel-wise attention interaction. Furthermore, the model achieves high performance with relatively modest architectural complexity, benefiting from the use of grouped convolutions and a lightweight attention module, which preserves computational efficiency.
In addition, the ablation study illustrates the indispensable role of each key component within the MSDCA framework. Removing the 3D spatial–spectral feature extraction (3D-SSF) module led to a significant degradation in performance across all datasets, confirming its effectiveness in capturing joint spatial and spectral representations. The multi-branch directional feature module (MBDFM) also proved critical, as it enables multi-scale, direction-aware context aggregation, allowing the model to better handle heterogeneous land cover distributions. Furthermore, the enhanced cross-attention transformer encoder (ECATE) substantially contributes to the discriminability of deep features by adaptively emphasizing informative channels and spatial regions while aligning tokens across the two branches. These findings indicate that the modules are not functionally redundant but rather complementary, and that their integration collectively strengthens both the representational power and generalization ability of the proposed architecture.
Nonetheless, the proposed MSDCA framework also has its limitations. First, although the dual-branch structure enhances feature interactions, the attention mechanism is inherently data-driven and lacks interpretability, making it difficult to trace specific feature contributions. Second, the model’s performance still relies on sufficient annotated data; in scenarios with extremely limited labels, its advantage may be diminished. Future work could explore integrating self-supervised learning strategies or knowledge distillation techniques to further reduce data dependency. Moreover, introducing explainable attention visualization tools or uncertainty quantification mechanisms may enhance model transparency and promote its practical deployment in remote sensing tasks.

5. Conclusions

In this paper, we propose an efficient dual-branch deep learning framework for HSI classification under limited supervision. By integrating multi-scale 3D spatial–spectral feature extraction and directional 2D spatial encoding, the network effectively captures both global and local structures. Furthermore, the enhanced cross-attention transformer module (ECATE) enables deep semantic alignment and adaptive feature fusion across heterogeneous branches. On the benchmark datasets, including Houston2013, Pavia University, and Trento, our method consistently demonstrates superior performance in overall accuracy, class-level robustness, and computational efficiency. The model is particularly effective in classifying small-sample categories and boundary-region pixels. In future work, we aim to extend this framework by incorporating uncertainty-guided pseudo-labeling and region-level consistency constraints, further reducing the reliance on labeled data in extreme low-supervision settings.
Despite its strong performance, the current model still relies on fixed patch sizes and pre-defined receptive field settings, which may limit adaptability to varying spatial distributions. In future work, we also plan to investigate dynamic patch sampling and adaptive receptive field strategies to improve structural flexibility.

Author Contributions

Conceptualization, N.J., S.G. and L.S.; methodology, N.J., Y.Z. and L.S.; software, N.J. and L.S.; validation, N.J. and S.G.; investigation, N.J. and Y.Z.; writing—original draft preparation, N.J.; writing—review and editing, L.S., S.G. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Key Research and Development and Transformation Plan Projects of Qinghai Province with Grant No. 2025-QY-215.

Data Availability Statement

The data presented in this study are available in the article.

Acknowledgments

The authors thank the anonymous reviewers and the editors for their insightful comments and helpful suggestions that helped improve the quality of our manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Guerri, M.F.; Distante, C.; Spagnolo, P.; Bougourzi, F.; Taleb-Ahmed, A. Deep learning techniques for hyperspectral image analysis in agriculture: A review. ISPRS Open J. Photogramm. Remote Sens. 2024, 12, 100062. [Google Scholar] [CrossRef]
  2. Safari, K.; Prasad, S.; Labate, D. A multiscale deep learning approach for high-resolution hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2020, 18, 167–171. [Google Scholar] [CrossRef]
  3. Alanazi, A.; Wahab, N.H.A.; Al-Rimy, B.A.S. Hyperspectral Imaging for Remote Sensing and Agriculture: A Comparative Study of Transformer-based Models. In Proceedings of the 2024 IEEE 14th Symposium on Computer Applications & Industrial Electronics (ISCAIE), Penang, Malaysia, 24–25 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 129–136. [Google Scholar]
  4. Noor, S.S.M.; Michael, K.; Marshall, S.; Ren, J.; Tschannerl, J.; Kao, F.J. The properties of the cornea based on hyperspectral imaging: Optical biomedical engineering perspective. In Proceedings of the 2016 International Conference on Systems, Signals and Image Processing (IWSSIP), Bratislava, Slovakia, 4 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–4. [Google Scholar]
  5. Yadav, P.P.; Shetty, A.; Raghavendra, B.; Narasimhadhan, A. 1-D CNN for Mineral Classification using Hyperspectral Data. In Proceedings of the 2023 IEEE India Geoscience and Remote Sensing Symposium (InGARSS), Bangalore, India, 10–13 December 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–4. [Google Scholar]
  6. Fong, A.; Shu, G.; McDonogh, B. Farm to table: Applications for new hyperspectral imaging technologies in precision agriculture, food quality and safety. In Proceedings of the CLEO: Applications and Technology, San Jose, CA, USA, 10–15 May 2020; Optica Publishing Group: Washington, DC, USA, 2020. AW3K–2. [Google Scholar]
  7. Ardouin, J.P.; Lévesque, J.; Rea, T.A. A demonstration of hyperspectral image exploitation for military applications. In Proceedings of the 2007 10th International Conference on Information Fusion, Quebec, QC, Canada, 9–12 July 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 1–8. [Google Scholar]
  8. Zhang, L.; Zhang, L. Artificial intelligence for remote sensing data analysis: A review of challenges and opportunities. IEEE Geosci. Remote Sens. Mag. 2022, 10, 270–294. [Google Scholar] [CrossRef]
  9. Hsieh, T.H.; Kiang, J.F. Comparison of CNN algorithms on hyperspectral image classification in agricultural lands. Sensors 2020, 20, 1734. [Google Scholar] [CrossRef] [PubMed]
  10. Keshava, N. Distance metrics and band selection in hyperspectral processing with applications to material identification and spectral libraries. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1552–1565. [Google Scholar] [CrossRef]
  11. Xu, Q.; Liang, Y.; Wang, D.; Luo, B. Hyperspectral image classification based on SE-Res2Net and multi-scale spatial spectral fusion attention mechanism. J. Comput.-Aided Des. Comput. Graph. 2021, 33, 1726–1734. [Google Scholar] [CrossRef]
  12. Licciardi, G.; Marpu, P.R.; Chanussot, J.; Benediktsson, J.A. Linear versus nonlinear PCA for the classification of hyperspectral data based on the extended morphological profiles. IEEE Geosci. Remote Sens. Lett. 2012, 9, 447–451. [Google Scholar] [CrossRef]
  13. Ye, Q.; Yang, J.; Liu, F.; Zhao, C.; Ye, N.; Yin, T. L1-norm distance linear discriminant analysis based on an effective iterative algorithm. IEEE Trans. Circuits Syst. Video Technol. 2016, 28, 114–129. [Google Scholar] [CrossRef]
  14. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  15. Ham, J.; Chen, Y.; Crawford, M.M.; Ghosh, J. Investigation of the random forest framework for classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 492–501. [Google Scholar] [CrossRef]
  16. Zhang, L.; Song, L.; Du, B.; Zhang, Y. Nonlocal low-rank tensor completion for visual data. IEEE Trans. Cybern. 2019, 51, 673–685. [Google Scholar] [CrossRef] [PubMed]
  17. Zhong, P.; Gong, Z.; Li, S.; Schönlieb, C.B. Learning to diversify deep belief networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3516–3530. [Google Scholar] [CrossRef]
  18. Yang, X.; Chen, J.; Chen, Z. Classification of alteration zones based on drill core hyperspectral data using semi-supervised adversarial autoencoder: A case study in Pulang Porphyry Copper Deposit, China. Remote Sens. 2023, 15, 1059. [Google Scholar] [CrossRef]
  19. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef]
  20. Zhao, W.; Du, S. Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
  21. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  22. Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef]
  23. Zhang, Y.; Huynh, C.P.; Ngan, K.N. Feature fusion with predictive weighting for spectral image classification and segmentation. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6792–6807. [Google Scholar] [CrossRef]
  24. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281. [Google Scholar] [CrossRef]
  25. Cao, X.; Zhou, F.; Xu, L.; Meng, D.; Xu, Z.; Paisley, J. Hyperspectral image classification with Markov random fields and a convolutional neural network. IEEE Trans. Image Process. 2018, 27, 2354–2367. [Google Scholar] [CrossRef]
  26. Liang, H.; Li, Q. Hyperspectral imagery classification using sparse representations of convolutional neural network features. Remote Sens. 2016, 8, 99. [Google Scholar] [CrossRef]
  27. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858. [Google Scholar] [CrossRef]
  28. He, M.; Li, B.; Chen, H. Multi-scale 3D deep convolutional neural network for hyperspectral image classification. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 3904–3908. [Google Scholar]
  29. Hang, R.; Li, Z.; Liu, Q.; Ghamisi, P.; Bhattacharyya, S.S. Hyperspectral image classification with attention-aided CNNs. IEEE Trans. Geosci. Remote Sens. 2020, 59, 2281–2293. [Google Scholar] [CrossRef]
  30. Wang, J.; Song, X.; Sun, L.; Huang, W.; Wang, J. A novel cubic convolutional neural network for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4133–4148. [Google Scholar] [CrossRef]
  31. Ma, W.; Yang, Q.; Wu, Y.; Zhao, W.; Zhang, X. Double-branch multi-attention mechanism network for hyperspectral image classification. Remote Sens. 2019, 11, 1307. [Google Scholar] [CrossRef]
  32. Hamida, A.B.; Benoit, A.; Lambert, P.; Amar, C.B. 3-D deep learning approach for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4420–4434. [Google Scholar] [CrossRef]
  33. Zhan, Y.; Hu, D.; Wang, Y.; Yu, X. Semisupervised hyperspectral image classification based on generative adversarial networks. IEEE Geosci. Remote Sens. Lett. 2017, 15, 212–216. [Google Scholar] [CrossRef]
  34. Paoletti, M.E.; Haut, J.M.; Fernandez-Beltran, R.; Plaza, J.; Plaza, A.; Li, J.; Pla, F. Capsule networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2145–2160. [Google Scholar] [CrossRef]
  35. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph convolutional networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5966–5978. [Google Scholar] [CrossRef]
  36. Vaswani, A. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  37. Dosovitskiy, A. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  38. He, X.; Chen, Y.; Lin, Z. Spatial-spectral transformer for hyperspectral image classification. Remote Sens. 2021, 13, 498. [Google Scholar] [CrossRef]
  39. Qing, Y.; Liu, W.; Feng, L.; Gao, W. Improved transformer net for hyperspectral image classification. Remote Sens. 2021, 13, 2216. [Google Scholar] [CrossRef]
  40. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking hyperspectral image classification with transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5518615. [Google Scholar] [CrossRef]
  41. Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral–spatial feature tokenization transformer for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5522214. [Google Scholar] [CrossRef]
  42. Xue, Z.; Xu, Q.; Zhang, M. Local transformer with spatial partition restore for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 4307–4325. [Google Scholar] [CrossRef]
  43. Mei, S.; Song, C.; Ma, M.; Xu, F. Hyperspectral image classification using group-aware hierarchical transformer. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5539014. [Google Scholar] [CrossRef]
  44. Fang, Y.; Ye, Q.; Sun, L.; Zheng, Y.; Wu, Z. Multiattention joint convolution feature representation with lightweight transformer for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5513814. [Google Scholar] [CrossRef]
  45. Roy, S.K.; Deria, A.; Shah, C.; Haut, J.M.; Du, Q.; Plaza, A. Spectral–spatial morphological attention transformer for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5503615. [Google Scholar] [CrossRef]
  46. Wang, X.; Sun, L.; Lu, C.; Li, B. A novel transformer network with a CNN-enhanced cross-attention mechanism for hyperspectral image classification. Remote Sens. 2024, 16, 1180. [Google Scholar] [CrossRef]
Figure 1. An overview of MSDCA, showing the dual-branch processing paths, tokenization of extracted features, and integration through the enhanced cross-attention transformer encoder (ECATE).
Figure 2. Enhanced cross-multi-attention transformer encoder.
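Figure 2 condenses the encoder into a single diagram; as a complementary reference, the following minimal PyTorch sketch shows how a generic cross-attention block with a residual path can be wired between two token streams. It is illustrative only: the class name CrossAttentionFusion, the embedding size of 64, the four attention heads, and the feed-forward width are assumptions, not the exact ECATE configuration.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Generic cross-attention block: tokens from one branch attend to tokens
    from the other branch, and a residual path preserves the query stream.
    Minimal sketch only; layer sizes and refinement details differ from ECATE."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 2), nn.GELU(), nn.Linear(dim * 2, dim))

    def forward(self, q_tokens, kv_tokens):
        # q_tokens: (B, Nq, dim) from one branch; kv_tokens: (B, Nk, dim) from the other
        q = self.norm_q(q_tokens)
        kv = self.norm_kv(kv_tokens)
        fused, _ = self.attn(q, kv, kv)   # queries from one stream, keys/values from the other
        x = q_tokens + fused              # residual path keeps the original tokens intact
        return x + self.ffn(x)            # position-wise feed-forward refinement

# toy usage with hypothetical token counts
tokens_a = torch.randn(2, 49, 64)   # e.g., tokens from the multi-scale 3D branch
tokens_b = torch.randn(2, 49, 64)   # e.g., tokens from the directional branch
print(CrossAttentionFusion()(tokens_a, tokens_b).shape)  # torch.Size([2, 49, 64])
```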
Figure 3. Efficient channel attention and spatial attention.
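Figure 3 refers to two lightweight refinement modules. The sketch below gives a compact, generic PyTorch implementation of efficient channel attention (ECA-Net style) followed by CBAM-style spatial attention; the kernel sizes and the feature shape are placeholder assumptions and may differ from the modules actually used in MSDCA.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: channel weights from a 1-D convolution over
    globally pooled channel descriptors (ECA-Net style, kernel size assumed)."""
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):                                 # x: (B, C, H, W)
        w = self.pool(x)                                  # (B, C, 1, 1)
        w = self.conv(w.squeeze(-1).transpose(1, 2))      # 1-D conv across the channel axis
        w = torch.sigmoid(w.transpose(1, 2).unsqueeze(-1))
        return x * w

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: a 2-D weight map from pooled channel statistics."""
    def __init__(self, k_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)                 # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)                # channel-wise max map
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

feat = torch.randn(2, 64, 9, 9)                           # hypothetical fused feature map
print(SpatialAttention()(ECA()(feat)).shape)              # torch.Size([2, 64, 9, 9])
```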
Figure 4. Illustration of the Houston2013 dataset: (a) false-color image composed using three representative spectral channels; (b) corresponding reference label map.
Figure 5. Illustration of the Pavia University dataset: (a) false-color image composed using three representative spectral channels; (b) corresponding reference label map.
Figure 6. Illustration of Trento data: (a) false-color image composed using three representative spectral channels; (b) corresponding reference label map.
Figure 7. Evaluation of optimal parameter settings on Houston2013: (a) batch size; (b) patch size_s1; (c) patch size_s2.
Figure 8. Evaluation of optimal parameter settings on Pavia University: (a) batch size; (b) patch size_s1; (c) patch size_s2.
Figure 9. Evaluation of optimal parameter settings on Trento dataset: (a) batch size; (b) patch size_s1; (c) patch size_s2.
Figure 10. Overall visualization of method performance across datasets: (a) Houston2013 dataset; (b) Pavia University dataset; (c) Trento dataset.
Figure 11. Visualization of the classification outcomes on the Houston2013 dataset using various approaches: (a) reference ground truth map; (b) results obtained by SVM, achieving an overall accuracy (OA) of 31.82%; (c) 1D-CNN achieving an OA of 58.09%; (d) 3D-CNN with an OA of 84.24%; (e) M3D-CNN reaching an OA of 77.40%; (f) 3D-DLA achieving 75.51% OA; (g) SSFTT with 87.85% OA; (h) morphFormer yielding 87.14% OA; (i) TNCCA producing an OA of 92.50%; and (j) the MSDCA attaining the highest OA of 94.03%.
Figure 12. Comparison of classification results on the Pavia University dataset using a range of representative methods: (a) ground truth label map; (b) SVM, resulting in an overall accuracy (OA) of 71.73%; (c) 1D-CNN with an OA of 69.74%; (d) 3D-CNN, achieving 89.88% OA; (e) M3D-CNN, reporting an OA of 78.93%; (f) 3D-DLA, reaching an OA of 91.06%; (g) SSFTT, yielding 96.69% OA; (h) morphFormer, attaining 96.78% OA; (i) TNCCA, producing an OA of 97.99%; and (j) the proposed method, which delivered the highest performance with an OA of 98.16%.
Figure 13. Visualization of classification results on the Trento dataset using multiple competing methods: (a) ground truth reference; (b) SVM output, yielding an overall accuracy (OA) of 66.62%; (c) 1D-CNN, which achieved 73.07% OA; (d) 3D-CNN result with an OA of 85.75%; (e) M3D-CNN attaining an OA of 93.49%; (f) 3D-DLA output, reaching 91.87% OA; (g) SSFTT, producing an accuracy of 97.55%; (h) morphFormer, with an OA of 98.34%; (i) TNCCA, obtaining an overall accuracy of 98.57%; and (j) MSDCA, which delivered the best performance with the highest OA of 98.91%.
Table 1. Training and testing split for all three datasets.
No. | Houston2013 Class | Training | Testing | Trento Class | Training | Testing | Pavia University Class | Training | Testing
#1 | Healthy Grass | 13 | 1238 | Apple Trees | 40 | 3994 | Asphalt | 66 | 6565
#2 | Stressed Grass | 13 | 1241 | Buildings | 29 | 2874 | Meadows | 186 | 18,463
#3 | Synthetic Grass | 7 | 690 | Ground | 5 | 474 | Gravel | 21 | 2078
#4 | Tree | 12 | 1232 | Woods | 91 | 9032 | Trees | 31 | 3033
#5 | Soil | 12 | 1230 | Vineyard | 105 | 10,396 | Metal Sheets | 13 | 1332
#6 | Water | 3 | 322 | Roads | 31 | 3143 | Bare soil | 50 | 4979
#7 | Residential | 13 | 1255 |  |  |  | Bitumen | 13 | 1317
#8 | Commercial | 12 | 1232 |  |  |  | Bricks | 37 | 3645
#9 | Road | 13 | 1239 |  |  |  | Shadows | 9 | 938
#10 | Highway | 12 | 1215 |  |  |  |  |  | 
#11 | Railway | 12 | 1223 |  |  |  |  |  | 
#12 | Parking Lot 1 | 12 | 1221 |  |  |  |  |  | 
#13 | Parking Lot 2 | 5 | 464 |  |  |  |  |  | 
#14 | Tennis Court | 4 | 424 |  |  |  |  |  | 
#15 | Running Track | 7 | 653 |  |  |  |  |  | 
Total |  | 150 | 14,879 |  | 301 | 29,913 |  | 426 | 42,350
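As a reference for reproducing a split of the kind listed in Table 1, the sketch below draws a fixed number of labeled pixels per class for training and keeps the remaining labeled pixels for testing. The per-class counts are taken from the Houston2013 column of Table 1; the function name, random seed, and toy label map are illustrative assumptions rather than the authors' preprocessing code.

```python
import numpy as np

# Per-class training counts from the Houston2013 column of Table 1 (class IDs 1-15).
train_counts = {1: 13, 2: 13, 3: 7, 4: 12, 5: 12, 6: 3, 7: 13, 8: 12,
                9: 13, 10: 12, 11: 12, 12: 12, 13: 5, 14: 4, 15: 7}

def split_labeled_pixels(labels, counts, seed=0):
    """Randomly draw a fixed number of labeled pixels per class for training;
    the rest of the labeled pixels form the test set (background label 0 ignored)."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for cls, n_train in counts.items():
        idx = np.flatnonzero(labels == cls)   # positions of all pixels of this class
        rng.shuffle(idx)
        train_idx.append(idx[:n_train])
        test_idx.append(idx[n_train:])
    return np.concatenate(train_idx), np.concatenate(test_idx)

# toy ground-truth map just to exercise the function
toy_labels = np.random.default_rng(1).integers(0, 16, size=(100, 100)).ravel()
tr, te = split_labeled_pixels(toy_labels, train_counts)
print(len(tr), len(te))
```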
Table 2. Classification performance of various methods on the Houston2013 dataset.
Class | SVM [14] | 1D-CNN [19] | 3D-CNN [21] | M3D-CNN [28] | 3D-DLA [32] | SSFTT [41] | morphFormer [45] | TNCCA [46] | MSDCA
Healthy Grass | 73.04 ± 10.54 | 81.42 ± 6.17 | 91 ± 0.91 | 91.65 ± 2.17 | 87.95 ± 5.5 | 85.79 ± 1.77 | 96.55 ± 1.79 | 95.02 ± 2.03 | 94.04 ± 0.04
Stressed Grass | 43.76 ± 32.43 | 88.56 ± 2.91 | 96.28 ± 0.46 | 88.48 ± 6.02 | 91.56 ± 3.46 | 89.86 ± 6.54 | 96.30 ± 2.01 | 96.13 ± 1.56 | 98.47 ± 0.11
Synthetic Grass | 38.9 ± 46.88 | 100 ± 0.00 | 99.74 ± 0.24 | 98.17 ± 1.56 | 95.28 ± 1.51 | 91.36 ± 9.89 | 98.21 ± 0.72 | 94.62 ± 1.32 | 99.86 ± 0.00
Tree | 13.13 ± 13.09 | 73.99 ± 18.32 | 94.71 ± 0.16 | 84.92 ± 5.59 | 88.86 ± 2.44 | 91.01 ± 3.56 | 93.15 ± 1.63 | 91.56 ± 2.34 | 91.79 ± 0.07
Soil | 78.72 ± 20.41 | 96.23 ± 3.77 | 99.98 ± 0.04 | 97.67 ± 1.56 | 95.25 ± 3.73 | 99.75 ± 0.21 | 91.36 ± 5.63 | 100 ± 0.00 | 99.92 ± 0.00
Water | 22.05 ± 33.46 | 54.47 ± 16.57 | 94.7 ± 1.59 | 52.17 ± 23.97 | 65.34 ± 16.07 | 83.23 ± 1.45 | 79.60 ± 4.32 | 91.85 ± 3.45 | 97.35 ± 0.14
Residential | 11.03 ± 12.42 | 45.2 ± 17.79 | 68.49 ± 2.6 | 57.07 ± 9.75 | 64.03 ± 4.48 | 74.32 ± 3.69 | 77.71 ± 4.59 | 89.57 ± 3.84 | 98.25 ± 0.06
Commercial | 25.1 ± 12.15 | 23.59 ± 7.01 | 67.82 ± 3.25 | 68.62 ± 7.38 | 57.18 ± 2.97 | 69.45 ± 2.36 | 65.26 ± 1.98 | 82.44 ± 3.53 | 81.53 ± 0.23
Road | 7.38 ± 5.72 | 75.09 ± 11.58 | 69.73 ± 1.89 | 50.73 ± 4.13 | 70.54 ± 5.29 | 89.21 ± 3.59 | 88.69 ± 3.99 | 88.57 ± 3.09 | 92.78 ± 0.07
Highway | 11.79 ± 13.79 | 21.32 ± 19.99 | 87.28 ± 3.19 | 85.27 ± 5.52 | 69.51 ± 5.77 | 95.02 ± 1.51 | 89.64 ± 9.25 | 92.02 ± 2.60 | 94.45 ± 0.11
Railway | 10.55 ± 13.45 | 24.74 ± 9.79 | 75.47 ± 2.4 | 75.06 ± 7.53 | 68.29 ± 7.34 | 92.46 ± 5.31 | 88.31 ± 1.03 | 81.97 ± 5.79 | 89.76 ± 0.23
Parking Lot 1 | 4.67 ± 4.34 | 35.45 ± 20.73 | 68.73 ± 2.75 | 76.77 ± 12.63 | 80.92 ± 4.18 | 82.49 ± 4.36 | 74.56 ± 3.32 | 89.46 ± 2.31 | 94.23 ± 0.11
Parking Lot 2 | 3.45 ± 3.34 | 4.96 ± 2.97 | 71.55 ± 6.41 | 6.85 ± 6.23 | 11.38 ± 4.51 | 84.69 ± 5.40 | 85.01 ± 3.30 | 90.89 ± 3.58 | 89.74 ± 0.45
Tennis Court | 67.45 ± 2.85 | 28.63 ± 19.9 | 99.58 ± 0.48 | 89.06 ± 12.8 | 36.6 ± 5.14 | 99.86 ± 0.14 | 90.63 ± 4.12 | 100 ± 0.00 | 100 ± 0.00
Running Track | 99.17 ± 0.55 | 99.26 ± 0.27 | 99.85 ± 0.00 | 99.97 ± 0.07 | 94.64 ± 3.53 | 100 ± 0.00 | 97.66 ± 0.87 | 100 ± 0.00 | 100 ± 0.00
OA (%) | 31.82 ± 7.02 | 58.09 ± 2.59 | 84.24 ± 0.61 | 77.4 ± 1.59 | 75.51 ± 1.07 | 87.85 ± 1.21 | 87.14 ± 0.90 | 92.50 ± 0.80 | 94.03 ± 0.04
AA (%) | 34.01 ± 7.11 | 56.86 ± 3.35 | 85.99 ± 0.39 | 75.5 ± 2.66 | 71.82 ± 0.72 | 88.56 ± 0.91 | 87.36 ± 0.86 | 91.88 ± 0.75 | 94.48 ± 0.05
κ × 100 | 26.75 ± 7.46 | 54.71 ± 2.76 | 82.96 ± 0.66 | 75.52 ± 1.73 | 73.45 ± 1.16 | 86.87 ± 1.29 | 86.45 ± 0.91 | 90.97 ± 0.58 | 93.54 ± 0.05
Note: Italic values denote the names of land-covers. Bold values indicate the optimal results.
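The OA, AA, and κ × 100 values reported in Tables 2–4 can be reproduced from a confusion matrix over the test pixels. The short sketch below shows one straightforward way to compute them; the function name and the toy labels are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np

def oa_aa_kappa(y_true, y_pred, n_classes):
    """Overall accuracy, average (per-class) accuracy, and Cohen's kappa
    computed from reference and predicted class labels."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    oa = np.trace(cm) / n                                  # fraction of correctly labeled pixels
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))             # mean of per-class recalls
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2    # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return 100 * oa, 100 * aa, 100 * kappa                 # reported as percentages / kappa x 100

# toy check with three classes
y_true = np.array([0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0])
print(oa_aa_kappa(y_true, y_pred, 3))
```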
Table 3. Classification results of various methods on the Pavia University dataset.
Class | SVM [14] | 1D-CNN [19] | 3D-CNN [21] | M3D-CNN [28] | 3D-DLA [32] | SSFTT [41] | morphFormer [45] | TNCCA [46] | MSDCA
Asphalt | 94.1 ± 0.1 | 93.05 ± 0.59 | 93.08 ± 1.06 | 85.75 ± 7.33 | 93.59 ± 0.99 | 97.31 ± 0.52 | 96.51 ± 0.48 | 97.74 ± 1.31 | 97.98 ± 0.42
Meadows | 93.36 ± 0.48 | 96.78 ± 0.39 | 96.86 ± 0.81 | 92.9 ± 2.87 | 97.64 ± 0.53 | 98.46 ± 0.21 | 99.64 ± 0.10 | 99.98 ± 0.01 | 99.95 ± 0.03
Gravel | 0 ± 0 | 0 ± 0 | 54.58 ± 3.66 | 41.45 ± 7.27 | 75.25 ± 2.95 | 82.55 ± 2.10 | 83.21 ± 1.45 | 86.37 ± 5.12 | 88.36 ± 0.51
Trees | 33.39 ± 1.7 | 23.86 ± 7.02 | 94.45 ± 0.65 | 79.82 ± 10.63 | 83.9 ± 3.45 | 94.97 ± 1.89 | 96.09 ± 2.11 | 97.46 ± 0.39 | 97.32 ± 1.41
Metal Sheets | 33.39 ± 1.7 | 23.86 ± 7.02 | 94.45 ± 0.65 | 79.82 ± 10.63 | 83.9 ± 3.45 | 94.97 ± 1.89 | 96.09 ± 2.11 | 97.46 ± 0.39 | 97.32 ± 1.41
Bare soil | 17.94 ± 1.9 | 7.51 ± 1 | 76.75 ± 3.01 | 53.87 ± 10.34 | 80.11 ± 2.16 | 99.45 ± 0.32 | 99.46 ± 0.18 | 99.69 ± 0.11 | 98.72 ± 1.91
Bitumen | 0 ± 0 | 0 ± 0 | 70.07 ± 4.42 | 48.65 ± 16.72 | 79.26 ± 3.27 | 99.46 ± 0.28 | 80.01 ± 5.29 | 99.02 ± 0.80 | 97.42 ± 1.38
Bricks | 89.48 ± 0.66 | 90.86 ± 1.36 | 84.43 ± 2.41 | 63.05 ± 20.39 | 85.33 ± 1.01 | 95.71 ± 1.68 | 94.86 ± 1.18 | 95.21 ± 1.20 | 94.1 ± 4.31
Shadows | 50.98 ± 9.84 | 63.82 ± 10.74 | 99.59 ± 0.42 | 52.93 ± 25.23 | 88.06 ± 5.61 | 82.47 ± 6.54 | 93.41 ± 1.60 | 98.11 ± 0.04 | 99.81 ± 0.1
OA (%) | 71.73 ± 0.27 | 69.74 ± 0.75 | 89.88 ± 0.1 | 78.93 ± 5.94 | 91.06 ± 0.65 | 96.69 ± 0.25 | 96.78 ± 0.47 | 97.99 ± 0.13 | 98.16 ± 0.72
AA (%) | 53.15 ± 1.3 | 46.34 ± 1.83 | 85.38 ± 0.71 | 68 ± 9.11 | 86.86 ± 0.81 | 94.36 ± 0.49 | 93.77 ± 0.92 | 96.50 ± 0.80 | 97.03 ± 1.17
κ × 100 | 60.54 ± 0.48 | 56.94 ± 1.17 | 86.49 ± 0.13 | 71.47 ± 8.14 | 88.04 ± 0.88 | 95.88 ± 0.60 | 96.23 ± 0.41 | 98.10 ± 0.45 | 97.64 ± 1.04
Note: Italic values denote the names of land-covers. Bold values indicate the optimal results.
Table 4. Classification results of different methods on the Trento dataset.
Class | SVM [14] | 1D-CNN [19] | 3D-CNN [21] | M3D-CNN [28] | 3D-DLA [32] | SSFTT [41] | morphFormer [45] | TNCCA [46] | MSDCA
Apple Trees | 17.89 ± 4.33 | 21.95 ± 5.05 | 97.97 ± 1.09 | 95.46 ± 1.20 | 86.32 ± 0.57 | 99.45 ± 0.17 | 99.42 ± 0.41 | 99.60 ± 0.31 | 99.76 ± 0.08
Buildings | 46.05 ± 13.28 | 73.78 ± 11.42 | 58.02 ± 2.22 | 78.49 ± 2.46 | 81.16 ± 0.59 | 97.21 ± 0.52 | 92.77 ± 1.30 | 98.11 ± 0.41 | 97.24 ± 1.48
Ground | 51.09 ± 15.33 | 85.88 ± 5.69 | 91.98 ± 1.73 | 68.49 ± 15.04 | 70.74 ± 1.21 | 54.26 ± 1.97 | 91.37 ± 4.17 | 97.26 ± 1.09 | 96.25 ± 2.18
Woods | 70.08 ± 3.74 | 55.29 ± 7.46 | 99.95 ± 0.07 | 97.84 ± 0.23 | 96.96 ± 1.24 | 100 ± 0.00 | 99.95 ± 0.02 | 100 ± 0.00 | 100 ± 0.00
Vineyard | 75.81 ± 7.13 | 36.32 ± 3.99 | 78.44 ± 3.89 | 97.82 ± 0.45 | 99.40 ± 0.49 | 99.92 ± 0.08 | 99.85 ± 0.11 | 100 ± 0.00 | 100 ± 0.00
Roads | 64.21 ± 21.32 | 69.33 ± 12.08 | 77.96 ± 4.58 | 85.04 ± 0.55 | 80.25 ± 0.42 | 89.46 ± 2.70 | 92.44 ± 1.05 | 93.21 ± 1.63 | 94.96 ± 0.32
OA (%) | 66.62 ± 2.36 | 73.07 ± 1.56 | 85.75 ± 1.09 | 93.49 ± 0.60 | 91.87 ± 0.88 | 97.55 ± 0.31 | 98.34 ± 0.04 | 98.57 ± 0.24 | 98.91 ± 0.2
AA (%) | 49.69 ± 4.29 | 56.76 ± 1.05 | 84.06 ± 0.74 | 88.24 ± 3.64 | 82.01 ± 0.40 | 89.96 ± 0.57 | 95.46 ± 0.85 | 97.36 ± 0.55 | 97.53 ± 0.64
κ × 100 | 56.59 ± 3.32 | 59.5 ± 1.7 | 81.28 ± 1.35 | 92.91 ± 0.55 | 90.65 ± 0.63 | 97.24 ± 0.21 | 97.84 ± 0.21 | 98.14 ± 0.20 | 98.54 ± 0.27
Note: Italic values denote the names of land-covers. Bold values indicate the optimal results.
Table 5. Inference speed of the MSDCA model across different datasets (epoch = 500).
Dataset | Houston2013 Train | Houston2013 Test | Pavia University Train | Pavia University Test | Trento Train | Trento Test
Time (min) | 0.552 | 14.673 | 1.363 | 3.115 | 0.982 | 26.354
Table 6. Ablation study on different modules conducted using the Houston2013 dataset.
Case | 3D-SSF | MBDFM | Cross Attention | ECAFE | OA (%) | AA (%) | κ × 100
1 | 3D-Conv | 2D-Conv | × | × | 87.85 | 88.56 | 86.87
2 | 3D-Conv | 2D-Conv | ✓ | × | 90.59 | 91.36 | 89.83
3 | 3D-Conv | ✓ | ✓ | × | 91.89 | 92.50 | 91.23
4 | 3D-Conv | ✓ | ✓ | ✓ | 93.10 | 93.08 | 92.54
5 | ✓ | ✓ | ✓ | ✓ | 93.94 | 94.40 | 93.45
