Article

MDS3-Net: A Multiscale Spectral–Spatial Sequence Hybrid CNN–Transformer Model for Hyperspectral Image Classification

1 School of Geographic Sciences, Hunan Normal University, Changsha 410081, China
2 Key Laboratory of Geospatial Big Data Mining and Application, Hunan Province, Changsha 410081, China
3 BGP Inc., China National Petroleum Corporation, Zhuozhou 072751, China
* Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(7), 977; https://doi.org/10.3390/rs18070977
Submission received: 6 February 2026 / Revised: 12 March 2026 / Accepted: 22 March 2026 / Published: 25 March 2026
(This article belongs to the Section Remote Sensing Image Processing)

Highlights

What are the main findings?
  • A novel MDS3-Net model is developed, which synergizes MSDC for spectral discrimination and geometric alignment, a linear-complexity S3 Encoder for global context, and DPFE for semantics-preserving dimensionality reduction.
  • Experimental results on four benchmark datasets (University of Pavia, Houston2013, LongKou, and University of Trento) demonstrate that MDS3-Net achieves higher OA, AA, and Kappa values compared with existing approaches.
What are the implications of the main findings?
  • The integration of local convolutional extraction and efficient global context modeling improves classification robustness, especially for classes with similar spectral characteristics and complex spatial structures.
  • MDS3-Net provides a scalable framework for hybrid deep learning models, potentially advancing the processing of high-dimensional remote sensing data with limited labeled samples.

Abstract

Hyperspectral image (HSI) classification faces significant challenges due to the spatial–spectral heterogeneity of land covers and the geometric rigidity of standard convolutions. Although Transformers offer powerful global modeling capabilities, their quadratic computational complexity limits practical efficiency. To address these limitations, this paper proposes a novel hierarchical framework named MDS3-Net (Multiscale Deformable Spectral–Spatial Sequence Network). Specifically, we design a Multiscale Spectral-Deformable Convolution (MSDC) module that adopts a cascaded strategy to sequentially extract discriminative spectral features and adaptively align spatial receptive fields with irregular object boundaries. To capture long-range dependencies efficiently, a Spectral–Spatial Sequence (S3) Encoder is introduced based on a gated large-kernel convolution mechanism, achieving global context modeling with linear complexity. Furthermore, a Dual-Path Feature Extraction (DPFE) module is proposed to perform semantics-preserving dimension reduction via spectral reorganization and spatial attention. Experimental results on four public datasets demonstrate that the proposed MDS3-Net achieves state-of-the-art classification performance and exhibits superior robustness under limited training samples compared to existing methods.

1. Introduction

Hyperspectral images (HSIs), acquired by advanced sensors on satellites, aerial vehicles, or drones [1,2,3], provide continuous spectral curves for each pixel. This rich spectral information enables the precise identification of materials, making HSIs indispensable in mineral exploration [4], urban planning [5], precision agriculture [6], environmental monitoring [7], and military reconnaissance [8]. However, the very advantage of HSIs—their high dimensionality—also introduces the “curse of dimensionality” [9]. More critically, real-world HSI scenes are characterized by significant spatial–spectral heterogeneity, where the same material may exhibit varying spectral signatures due to environmental changes, and land covers often present irregular geometries and scale variations [10]. Consequently, effectively utilizing this complex data for accurate classification remains a significant challenge.
Deep learning has revolutionized HSI classification by automatically learning hierarchical representations [11,12], moving beyond handcrafted features. Among these techniques, Convolutional Neural Networks (CNNs) have become the dominant backbone. Early works, such as Li et al. [13], employed 1D CNNs to extract spectral signatures. Recognizing that HSIs are volumetric data, subsequent studies integrated spatial context. For instance, Zhang et al. [14] and Xu et al. [15] utilized dual-branch architectures to extract spectral and spatial features respectively. While these methods improved performance, they typically rely on standard convolution operations defined on a fixed, rigid grid. This inherent rigidity assumes that relevant features always lie within a regular rectangular neighborhood, ignoring the fact that object boundaries in HSIs are often curved or fragmented. As a result, standard CNNs often fail to capture the intrinsic geometric deformations of objects, leading to feature misalignment at boundaries [16].
To address this, Li et al. [17] partitioned HSIs into multiple 3D cubes and applied 3D CNNs to simultaneously perform convolutions along spatial and spectral dimensions. However, employing stacked 3D CNNs significantly increases the parameter count and leads to gradient vanishing issues [18,19,20]. Therefore, Roy et al. [21] proposed a hybrid 3D-2D convolution approach to reduce network complexity. Zhong et al. [22] introduced residual structures [23] into 3D spectral and 2D spatial convolutions. To further address the geometric limitations of standard convolutions, Dai et al. [24] introduced deformable convolutions by adding offsets to standard 2D convolutions, enabling adaptive sampling. Yu and Koltun [25] proposed an efficient method to enlarge convolutional receptive fields. While these techniques alleviate the constraints of fixed grids, modeling long-range dependencies remains a critical challenge for CNNs.
In recent years, Transformer architectures have been introduced into the field of computer vision and achieved impressive results [26]. Unlike CNNs, which primarily capture local features, Transformers can model long-range dependencies among features from a global perspective [27]. In the context of HSI classification, Hong et al. [28] reformulated the task as a sequence modeling problem and proposed a spectral Transformer, surpassing the classical ViT. Similarly, Qing et al. [29] exploited spectral attention and self-attention mechanisms, while Liu et al. [30] designed a hierarchical Transformer with shifted windows to enable multi-scale feature extraction with reduced computational redundancy. Moreover, interactive learning frameworks, such as the Center Transformer [31], have been developed to capture multi-scale spatial–spectral representations by interacting features from the center to the surrounding regions.
In addition, hybrid CNN-Transformer architectures have gained popularity for combining local and global feature modeling. For instance, Sun et al. [32] proposed SSFTT, where a Transformer encoder processes spectral–spatial features extracted from hierarchical 3D and 2D convolutional blocks. Similarly, Fu et al. [33] constructed parallel CNN and Transformer branches to integrate local and non-local features. Xu et al. [34] developed a novel Transformer architecture that incorporates embedded convolution modules to adaptively fuse features from diverse receptive fields. Roy et al. [35] designed learnable spectral and spatial morphological networks using morphological convolutions combined with attention mechanisms. Yang et al. [36] proposed a two-stream CNN using 2D and 3D convolutions for local feature extraction, followed by a Transformer to model global dependencies. To alleviate the computational burden and overfitting of Transformers, Woo et al. [37] introduced channel and spatial attention modules using convolution operations to emulate attention effects. Furthermore, Zhang et al. [38] proposed a cascaded spatial cross-attention network that simultaneously captures local and global spatial contextual features via cross-attention. Beyond the aforementioned architectures, the HSI classification field has recently witnessed significant progress in several other advanced dimensions. For instance, enhanced multiscale feature fusion networks [39] have been developed to capture robust spatial–spectral representations. To alleviate the reliance on massive labeled data, weakly supervised paradigms like the ITER framework [40] have been explored to generate effective image-to-pixel representations. Furthermore, with the advent of large-scale deep learning, vision transformer-based foundation models, such as HyperSIGMA [41], have emerged to unify HSI interpretation across diverse and complex scenes.
However, despite these advances, existing methods still face challenges. First, standard Transformers suffer from quadratic computational complexity, $O(N^2)$, which restricts efficiency and scalability. Second, in HSI scenarios with limited samples, they are prone to overfitting. Third, standard downsampling methods in these hierarchical networks often cause the irreversible loss of fine-grained details, leading to the disappearance of small-scale objects.
To overcome these challenges—specifically geometric rigidity, high computational complexity, and information loss during downsampling—we propose an innovative hierarchical framework named Multiscale Deformable Spectral–Spatial Sequence Network (MDS3-Net). Unlike previous methods, MDS3-Net introduces a synergistic design that balances local adaptivity and global efficiency. Specifically, we design a Multiscale Spectral-Deformable Convolution (MSDC) module to simultaneously extract discriminative spectral features and adaptively align spatial features with irregular object boundaries. To resolve the quadratic complexity of Transformers, we introduce a Spectral–Spatial Sequence (S3) Encoder based on a gated convolutional mechanism, which captures long-range dependencies with linear complexity, $O(N)$. Furthermore, a Dual-Path Feature Extraction (DPFE) module is proposed to perform dimension reduction while preserving salient spectral–spatial information.
The main contributions of this paper are summarized as follows:
(1)
We propose MDS3-Net, a novel unified hierarchical framework that synergizes local geometric adaptability with efficient global modeling, achieving state-of-the-art HSI classification performance even under limited training samples.
(2)
We design an MSDC module that decouples spectral and spatial feature extraction, enabling effective spectral discrimination and dynamic alignment with irregular object boundaries.
(3)
We introduce an S3 Encoder that utilizes a gated large-kernel convolution mechanism to capture global long-range dependencies with linear computational complexity, $O(N)$, overcoming the heavy computational burden of traditional self-attention.
(4)
We propose a DPFE module as a semantics-preserving downsampling mechanism, which performs dimensionality reduction via spatial attention and spectral reorganization to prevent the loss of fine-grained details.
The remainder of the paper is organized as follows. Section 2 elaborates on the proposed MDS3-Net framework and its core components. Section 3 details the experimental setup, datasets, and comprehensive performance analysis. Section 4 presents a further discussion on the experimental results. Finally, Section 5 concludes the paper with a summary of findings and future directions.

2. Methodology

2.1. Overall Architecture

We present MDS3-Net, an innovative hierarchical framework designed for hyperspectral image classification, with the detailed architecture depicted in Figure 1. The MDS3-Net architecture incorporates three key synergistic components: the MSDC module, the S3 Encoder, and the DPFE module.
Prior to feature extraction, to mitigate the curse of dimensionality and spectral redundancy, we first perform Principal Component Analysis (PCA) on the raw hyperspectral imagery. Subsequently, we extract spatial neighborhoods from the dimension-reduced data, generating multiple 3D image patches denoted as $I \in \mathbb{R}^{H \times W \times B}$, where $H \times W$ represents the spatial dimensions and $B$ is the number of spectral bands. These patches serve as the input for the subsequent stage-wise hierarchical processing.
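To make this preprocessing step concrete, the following is a minimal Python sketch of PCA-based band reduction followed by neighborhood patch extraction, using NumPy and scikit-learn. The function names, reflect padding, and per-pixel extraction are illustrative assumptions rather than the authors' implementation; the component count (30) and patch size (13 × 13) follow the settings reported in Section 3.

```python
import numpy as np
from sklearn.decomposition import PCA


def pca_reduce(cube: np.ndarray, n_components: int = 30) -> np.ndarray:
    """Project a raw HSI cube of shape (H, W, Bands) onto its leading principal components."""
    h, w, bands = cube.shape
    flat = cube.reshape(-1, bands).astype(np.float64)
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)


def extract_patch(reduced: np.ndarray, row: int, col: int, patch: int = 13) -> np.ndarray:
    """Extract the (patch x patch x C) spatial neighborhood centered on one labeled pixel."""
    pad = patch // 2
    # Reflect padding keeps border pixels usable; in practice pad once outside the loop.
    padded = np.pad(reduced, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    return padded[row:row + patch, col:col + patch, :]


if __name__ == "__main__":
    cube = np.random.rand(64, 64, 103)          # stand-in for the 610 x 340 x 103 UP scene
    reduced = pca_reduce(cube, n_components=30)
    print(extract_patch(reduced, row=10, col=20).shape)  # (13, 13, 30)
```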
The proposed framework is designed to progressively extract and integrate spectral–spatial information through a pyramidal structure. Within each processing stage, we adopt a dual-branch strategy to simultaneously capture local and global information. Specifically, the MSDC module serves as the primary extractor, employing a decoupled strategy that combines spectral convolution for discriminative spectral features with deformable convolution for geometric adaptability. Simultaneously, the S3 Encoder operates along a parallel residual path, sharing the same input as the MSDC module. It models long-range sequential dependencies via a gated convolutional mechanism, achieving global receptive fields with linear computational complexity. The local features from the MSDC and the global context from the S3 Encoder are then fused via element-wise addition. Subsequently, this fused representation is fed into the DPFE module, which serves as a semantics-preserving downsampling mechanism. By prioritizing salient information preservation during resolution reduction, the DPFE effectively bridges adjacent stages.
Overall, this architectural design adheres to the principle of complementary feature learning, wherein each component performs a distinct yet cooperative role to strengthen the joint spectral–spatial representation. The MSDC emphasizes both spectral fidelity and local structural adaptability; the S3 Encoder facilitates global contextual awareness while maintaining high computational efficiency; and the DPFE functions as a critical filtering mechanism to ensure salient information preservation during spatial and spectral dimension reduction. Through the progressive integration of these complementary cues, MDS3-Net achieves a balanced synergy between local detail preservation, global semantic understanding, and model efficiency.

2.2. MSDC

As the fundamental feature extraction unit of MDS3-Net, the MSDC module is engineered to address the inherent spectral–spatial coupling and geometric complexity of hyperspectral data. Figure 2 illustrates the internal structure of the MSDC block. It adopts a decoupled strategy combining spectral convolution and deformable spatial convolution, augmented with dual residual connections to facilitate feature reuse and gradient propagation.
The module first applies a spectral convolution with a kernel size of $k \times 1 \times 1$, meaning the operation is performed exclusively in the spectral dimension to aggregate spectral information without altering the spatial structure. To preserve the original spectral fidelity and prevent network degradation, a residual connection is introduced. The intermediate output $X_{mid}$ is formulated as:
$X_{mid} = \sigma(\mathrm{BN}(\mathrm{Conv}_{k \times 1 \times 1}(x))) + x$
where $x$ is the input feature, $\mathrm{Conv}_{k \times 1 \times 1}$ denotes the 3D convolution with a spatial kernel size of $1 \times 1$, $\mathrm{BN}$ represents Batch Normalization [42], and $\sigma$ denotes the ReLU activation function [43].
Subsequently, the spatial features $X_{mid}$ are processed by a deformable convolution with a kernel size of $k \times k$. Unlike standard convolutions that sample from a fixed grid, deformable convolution introduces learnable offsets to dynamically adjust the sampling positions [24]:
$F_{\mathrm{deform}}(X_{mid}) = \sum_{p_n \in R} W(p_n) \cdot X_{mid}(p_0 + p_n + \Delta p_n)$
where $R$ denotes the regular sampling grid, $W$ represents the convolution weights, and $\Delta p_n$ is the learnable offset. Similar to the first stage, a second residual connection is applied to the spatial branch. The final output of the MSDC block is obtained by:
$X_{out} = \sigma(\mathrm{BN}(F_{\mathrm{deform}}(X_{mid}))) + X_{mid}$
where $X_{out}$ and $X_{mid}$ denote the output and input feature maps of the residual connection, respectively, $F_{\mathrm{deform}}(\cdot)$ represents the deformable convolution operation, and $\mathrm{BN}(\cdot)$ and $\sigma(\cdot)$ denote the operations defined previously. Notably, the symbol $+$ denotes element-wise addition rather than feature concatenation. This operation acts as a standard residual connection to refine the representations while perfectly preserving the original channel dimensions, thereby avoiding the drastic increase in computational complexity that concatenation would cause in subsequent deep layers.
To capture features at varying scales and receptive fields, the MSDC module employs a hierarchical kernel configuration. Specifically, regarding the implementation details, in the shallow stages (Block 1 and Block 2), we utilize a smaller kernel size ( k = 3 ) to capture fine-grained texture and local spectral variations. Conversely, in the deeper stages (Block 3 and Block 4), the kernel size is increased ( k = 5 ) to expand the receptive field and encapsulate broader semantic context. This multiscale design enables the network to effectively recognize objects of various sizes, ranging from small targets to large homogeneous regions.
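As a concrete illustration, the following PyTorch sketch instantiates one MSDC block following Eqs. (1)–(3). The tensor layout (N, C, B, H, W), the folding of the spectral axis into the channel axis before the deformable 2D stage, and the offset-prediction convolution are our own assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class MSDCBlock(nn.Module):
    """One MSDC block: spectral 3D conv + residual, then deformable 2D conv + residual."""

    def __init__(self, channels: int, bands: int, k: int = 3):
        super().__init__()
        # Spectral branch (Eq. 1): k x 1 x 1 convolution acts only along the band axis.
        self.spec_conv = nn.Conv3d(channels, channels, kernel_size=(k, 1, 1),
                                   padding=(k // 2, 0, 0), bias=False)
        self.spec_bn = nn.BatchNorm3d(channels)
        # Spatial branch (Eq. 2): bands are folded into channels for the 2D deformable conv.
        flat = channels * bands
        self.offset_conv = nn.Conv2d(flat, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(flat, flat, kernel_size=k, padding=k // 2, bias=False)
        self.spat_bn = nn.BatchNorm2d(flat)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Eq. (1): spectral convolution with a residual connection.
        x_mid = self.act(self.spec_bn(self.spec_conv(x))) + x
        # Fold (N, C, B, H, W) -> (N, C*B, H, W) for the deformable spatial stage.
        n, c, b, h, w = x_mid.shape
        y = x_mid.reshape(n, c * b, h, w)
        offsets = self.offset_conv(y)                       # learnable sampling offsets
        y_def = self.act(self.spat_bn(self.deform_conv(y, offsets)))
        # Eq. (3): second residual connection via element-wise addition.
        return (y_def + y).reshape(n, c, b, h, w)


if __name__ == "__main__":
    block = MSDCBlock(channels=8, bands=30, k=3)
    out = block(torch.randn(2, 8, 30, 13, 13))
    print(out.shape)  # torch.Size([2, 8, 30, 13, 13])
```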

2.3. S3 Encoder

While the MSDC module excels at extracting local spectral–spatial features, it inherently lacks the ability to model global contextual dependencies due to its limited receptive field [44]. To address this limitation, we introduce the S3 Encoder. Unlike traditional Transformers that suffer from quadratic computational complexity with respect to token length, the S3 Encoder models long-range interactions with linear complexity via a gated convolutional mechanism.
As shown in Figure 3, the encoder block comprises two synergistic sub-modules: the Gated Spectral–Spatial Mixer (GS2M) and the Feed-Forward Network (FFN). Layer Normalization (LN) [45] is applied before each sub-module to normalize feature distributions, and residual connections are employed after each block. This design effectively alleviates the vanishing gradient problem during the training of deep networks.

2.3.1. GS2M

The GS2M is specifically designed to replace the computationally intensive Multi-Head Self-Attention (MHSA). As depicted in Figure 4, it adopts a large-kernel convolution combined with a gating mechanism to efficiently aggregate global context.
Given a normalized input feature map $X_{in}$, the module first projects it into a hidden representation using a $1 \times 1$ convolution. This representation is then split along the channel dimension into two parallel branches: the gating branch and the feature branch. The gating branch utilizes a depthwise convolution with a large kernel size ($7 \times 7$) to capture broad spatial cues, followed by a GELU activation to generate a spatial attention map. Simultaneously, the feature branch retains the local spectral details. The attention map then modulates the feature branch via element-wise multiplication. The mathematical formulation is defined as:
$X_{gate} = \mathrm{GELU}(\mathrm{DWConv}_{7 \times 7}(\mathrm{Conv}_{1 \times 1}(X_{in})))$
$X_{feat} = \mathrm{Conv}_{1 \times 1}(X_{in})$
$Y_{GS2M} = \mathrm{Proj}_{out}(X_{gate} \odot X_{feat}) + X_{in}$
where $\odot$ denotes element-wise multiplication and $\mathrm{Proj}_{out}$ is the output projection layer. This gating design allows the model to adaptively select spectral–spatial features based on global context while maintaining a linear computational complexity of $O(N)$, where $N$ is the number of pixels.
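A minimal PyTorch sketch of the GS2M block following Eqs. (4)–(6) is given below. For simplicity it uses two separate $1 \times 1$ projections instead of a single projection split along the channel dimension, and it operates on a 2D feature map of shape (N, C, H, W); both simplifications are illustrative assumptions.

```python
import torch
import torch.nn as nn


class GS2M(nn.Module):
    """Gated Spectral-Spatial Mixer: large-kernel depthwise gating over a 1x1 feature branch."""

    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        # Gating branch: 1x1 projection followed by a large-kernel depthwise convolution.
        self.gate_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.gate_dw = nn.Conv2d(channels, channels, kernel_size=kernel_size,
                                 padding=kernel_size // 2, groups=channels)
        # Feature branch: 1x1 projection retaining local spectral detail.
        self.feat_proj = nn.Conv2d(channels, channels, kernel_size=1)
        # Output projection.
        self.proj_out = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x_in: torch.Tensor) -> torch.Tensor:
        x_gate = self.act(self.gate_dw(self.gate_proj(x_in)))   # Eq. (4)
        x_feat = self.feat_proj(x_in)                            # Eq. (5)
        return self.proj_out(x_gate * x_feat) + x_in             # Eq. (6)
```

Because the gating relies only on convolutions, the cost grows linearly with the number of pixels rather than quadratically as in self-attention.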

2.3.2. FFN

As illustrated in Figure 3, the output of the GS2M is subsequently processed by the FFN. Standard FFNs in Transformers typically operate in a pixel-wise manner (using two 1 × 1 convolutions), which may overlook local structural details. To mitigate this, our FFN integrates a 3 × 3 depthwise convolution within the expansion layer. This locality-enhanced design ensures that fine-grained texture information is preserved and refined during the channel mixing process. The FFN can be expressed as:
$Y_{out} = \mathrm{Conv}_{1 \times 1}(\sigma(\mathrm{DWConv}_{3 \times 3}(\mathrm{Conv}_{1 \times 1}(Y_{GS2M})))) + Y_{GS2M}$
where σ denotes the GELU activation. This modification effectively complements the global modeling capability of the GS2M, creating a comprehensive feature encoder.
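The locality-enhanced FFN of Eq. (7) can be sketched in PyTorch as follows; the channel expansion ratio is an illustrative assumption not specified in the text.

```python
import torch.nn as nn


class LocalFFN(nn.Module):
    """Feed-forward network with a 3x3 depthwise convolution in the expansion layer (Eq. 7)."""

    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.expand = nn.Conv2d(channels, hidden, kernel_size=1)
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.act = nn.GELU()
        self.reduce = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, y):
        # Conv1x1 -> DWConv3x3 -> GELU -> Conv1x1, with a residual connection.
        return self.reduce(self.act(self.dwconv(self.expand(y)))) + y
```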

2.4. DPFE

Downsampling is a pivotal operation in hierarchical networks. However, standard pooling methods often result in the irreversible degradation of fine-grained details, leading to the disappearance of small-scale objects. To address this, we propose the DPFE module. Distinct from the S3 Encoder which focuses on global feature modeling, the DPFE functions as a downsampling mechanism dedicated to preserving key spectral and spatial information. It is explicitly designed to filter background noise and minimize the loss of salient features during spatial and spectral dimension reduction.
As illustrated in Figure 5, the DPFE module operates through two parallel paths. The spectral reorganization path is designed to perform linear spectral transformation. It employs a 1 × 1 × 1 convolution to project the input spectral features onto the target dimension, followed by a 1 × 2 × 2 Average Pooling layer to perform spatial downsampling. This path ensures that essential spectral context is efficiently transferred during the spatial and spectral dimension reduction process.
Simultaneously, the spatial squeeze path functions as a global attention filter. It first reduces channel dimensionality via a $1 \times 1 \times 1$ convolution to generate intermediate features $X_g$. These features are then processed by a large-kernel depthwise convolution ($1 \times 7 \times 7$) and a Sigmoid activation to generate a spatial attention map. This map modulates $X_g$ via element-wise multiplication, effectively suppressing background noise and highlighting salient regions. Finally, the refined features undergo spatial downsampling identical to the spectral reorganization path. The outputs from both paths are fused via element-wise addition to integrate local spectral details with global salient semantics. The operation is summarized as:
$X_{spec} = \mathrm{Pool}(\mathrm{Conv}^{spec}_{1 \times 1 \times 1}(X_{in}))$
$X_{sqz} = \mathrm{Pool}(\mathrm{Conv}^{sqz}_{1 \times 1 \times 1}(X_{in}) \odot \sigma(\mathrm{DWConv}_{1 \times 7 \times 7}(\mathrm{Conv}^{sqz}_{1 \times 1 \times 1}(X_{in}))))$
$X_{out} = X_{spec} + X_{sqz}$
where $X_{spec}$ and $X_{sqz}$ denote the intermediate features generated by the spectral reorganization path and the spatial squeeze path, respectively, $\mathrm{Pool}$ represents the $1 \times 2 \times 2$ Average Pooling operation, and $\sigma$ refers to the Sigmoid activation function. Specifically, $\mathrm{Conv}^{spec}_{1 \times 1 \times 1}$ and $\mathrm{Conv}^{sqz}_{1 \times 1 \times 1}$ correspond to the distinct pointwise convolutions utilized in these two respective paths.
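Below is a minimal PyTorch sketch of the DPFE module following Eqs. (8)–(10). The 3D tensor layout (N, C, B, H, W) and the input/output channel widths are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DPFE(nn.Module):
    """Dual-Path Feature Extraction: spectral reorganization + spatial squeeze, then fusion."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 7):
        super().__init__()
        self.pool = nn.AvgPool3d(kernel_size=(1, 2, 2))   # 1x2x2 spatial downsampling
        # Spectral reorganization path: pointwise projection to the target channel width.
        self.spec_proj = nn.Conv3d(in_ch, out_ch, kernel_size=1)
        # Spatial squeeze path: pointwise projection + large-kernel depthwise attention.
        self.sqz_proj = nn.Conv3d(in_ch, out_ch, kernel_size=1)
        self.sqz_dw = nn.Conv3d(out_ch, out_ch,
                                kernel_size=(1, kernel_size, kernel_size),
                                padding=(0, kernel_size // 2, kernel_size // 2),
                                groups=out_ch)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x_in: torch.Tensor) -> torch.Tensor:
        x_spec = self.pool(self.spec_proj(x_in))            # Eq. (8)
        x_g = self.sqz_proj(x_in)
        attn = self.sigmoid(self.sqz_dw(x_g))               # spatial attention map
        x_sqz = self.pool(x_g * attn)                        # Eq. (9)
        return x_spec + x_sqz                                 # Eq. (10)
```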

3. Experimental Results

3.1. Data Description

We quantitatively and qualitatively evaluated the model’s classification performance on four representative HSI benchmark datasets: University of Pavia, Houston 2013, LongKou, and University of Trento. For each dataset, 5% of the labeled samples were randomly selected for training, 1% for validation, and the remaining 94% for testing.

3.1.1. University of Pavia (UP)

The UP dataset is a hyperspectral scene acquired by the ROSIS sensor (German Aerospace Center (DLR), Cologne, Germany) during a flight over the University of Pavia in northern Italy. The image has a geometric sampling resolution of 1.3 m and dimensions of 610 × 340 pixels with 103 spectral bands, and its labeled pixels are divided into nine distinct classes. The details of the UP dataset used in our experiments are shown in Table 1a.

3.1.2. Houston 2013 (HS2013)

The HS2013 dataset was acquired by the ITRES CASI-1500 sensor (ITRES Research Limited, Calgary, AB, Canada) over the University of Houston campus and its neighboring urban areas. It consists of 349 × 1905 pixels with a spatial resolution of 2.5 m. The dataset contains 144 spectral bands spanning the wavelength range from 380 nm to 1050 nm, and includes 15 complex land-cover classes. Known for its challenging characteristics, such as severe cloud shadows and diverse urban materials with high spectral similarity, this dataset is widely used to evaluate the robustness of classification models in complex scenarios. The details of the HS2013 dataset used in our experiments are shown in Table 1c.

3.1.3. LongKou (LK)

The LK dataset was acquired in Longkou Town, Hubei province, China, using an 8-mm focal length Headwall Nano-Hyperspec imaging sensor (Headwall Photonics Inc., Bolton, MA, USA) mounted on a DJI Matrice 600 Pro UAV platform (DJI, Shenzhen, China). The study area represents a simple agricultural scene, comprising six crop species and three other land cover types. The imagery, sized at 550 × 400 pixels, encompasses 270 bands ranging from 400 to 1000 nm, with a spatial resolution of approximately 0.463 m. The details of the LK dataset used in our experiments are shown in Table 1b.

3.1.4. University of Trento (UT)

The UT dataset was acquired by the Airborne Imaging Spectrometer for Applications (AISA) Eagle sensor (Specim, Oulu, Finland) over the campus of the University of Trento, Italy. The image contains 600 × 166 pixels with a spatial resolution of 1 m and provides 63 spectral bands ranging from 402 to 989 nm. All labeled pixels fall into six classes, including buildings, roads, and ground. The details of the UT dataset used in our experiments are shown in Table 1d.

3.2. Experimental Setup

3.2.1. Configuration

The validation tests of the proposed methodology were conducted using the following hardware setup: an Intel i9-14900K CPU, 192 GB of RAM, and an NVIDIA RTX 4090 GPU. The software platform was based on CUDA 12.1, PyTorch 2.3.0, and Python 3.9.19. The training parameters were set to an initial learning rate of $5 \times 10^{-5}$, 300 epochs, and a batch size of 64. Additionally, the number of retained principal components in the PCA preprocessing step was set to 30 for all datasets.
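For reference, the reported hyperparameters translate into a configuration such as the following sketch; the optimizer choice (Adam) and the loss function are assumptions not stated above, and the simple linear model is only a stand-in for MDS3-Net.

```python
import torch
import torch.nn as nn

# Hyperparameters reported in the text.
LEARNING_RATE, EPOCHS, BATCH_SIZE, PCA_COMPONENTS, PATCH_SIZE = 5e-5, 300, 64, 30, 13

model = nn.Sequential(nn.Flatten(),
                      nn.Linear(PCA_COMPONENTS * PATCH_SIZE * PATCH_SIZE, 9))  # stand-in for MDS3-Net
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)  # optimizer choice assumed
criterion = nn.CrossEntropyLoss()                                   # loss function assumed
```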

3.2.2. Evaluation Metrics

To evaluate the classification effectiveness of the different models, we chose three widely used metrics: Overall Accuracy (OA), Average Accuracy (AA), and the Kappa coefficient. OA is the ratio of correctly classified pixels to the total number of pixels, AA is the average of the per-class classification accuracies, and the Kappa coefficient measures the agreement between the classification result and the reference data beyond what would be expected by chance. Moreover, to alleviate the influence of experimental randomness, all experiments were run five times, and the mean values and standard deviations are reported for each class and metric.
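For clarity, the three metrics can be computed from a confusion matrix as in the following sketch; this is a hypothetical helper, not the evaluation code used in the paper.

```python
import numpy as np


def oa_aa_kappa(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int):
    """Compute OA, AA, and the Kappa coefficient from integer class labels."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    oa = np.trace(cm) / total
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)   # per-class accuracy
    aa = per_class.mean()
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2  # chance agreement
    kappa = (oa - expected) / (1 - expected)
    return oa, aa, kappa
```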

3.3. Classification Results

In order to verify the effectiveness of the proposed MDS3-Net, we compared it against eight state-of-the-art CNN- and Transformer-based models: 3DCNN [17], HybridSN [21], SSRN [22], SpectralFormer [28], SSFTT [32], morphFormer [35], DSFormer [34], and CSCANet [38].

3.3.1. University of Pavia

The classification accuracy results for the UP dataset are detailed in Table 2. In terms of overall quantitative indicators, our proposed MDS3-Net achieves the best performance, registering the highest OA ( 99.49 % ), AA ( 99.09 % ), and Kappa ( 99.33 % ). Specifically, it outperforms competitors in distinguishing complex classes such as Bare Soil (Class 6) and Self-blocking Bricks (Class 8) from spectrally similar land covers. This quantitative advantage is corroborated by the visual classification maps presented in Figure 6. While traditional CNNs (3DCNN) and earlier Transformer architectures (SpectralFormer) display noticeable fragmentation and misclassification noise, maps generated by recent methods like DSFormer and CSCANet appear relatively smooth. However, as observed in Figure 6j, the map generated by MDS3-Net is visually the closest to the Ground Truth (Figure 6a). It produces sharper boundaries and fewer misclassified pixels in heterogeneous regions compared to other state-of-the-art methods. This visual superiority is attributed to the synergistic architecture: the MSDC and S3 Encoder cooperatively model local-global features to suppress background noise, while the DPFE module functions as a critical filtering mechanism to preserve salient spectral–spatial details during the dimension reduction process.

3.3.2. Houston 2013

The classification accuracy results for the HS2013 dataset are presented in Table 3. Demonstrating its robust feature extraction capabilities in complex urban scenarios, MDS3-Net achieves the best performance across all three comprehensive metrics. It registers the highest OA of 98.05 % , representing a substantial improvement of over 2.1 % compared to the second-best methods, HybridSN ( 95.87 % ) and CSCANet ( 95.75 % ), and a lead of nearly 20 % over the baseline 3DCNN ( 78.53 % ). Similar superiority is confirmed in AA ( 98.09 % ) and Kappa ( 97.89 % ). A detailed class analysis reveals that the HS2013 dataset poses extreme challenges due to severe cloud shadows and spectrally similar urban materials. While competing methods fluctuate drastically and struggle with challenging structural categories, such as Highway (Class 10), Railway (Class 11), and Parking Lot 1 (Class 12), MDS3-Net maintains exceptional stability, achieving accuracies exceeding 98.8 % in these classes and reaching perfect 100 % accuracy in Soil (Class 5) and Running Track (Class 15). Furthermore, the visualization maps shown in Figure 7 strongly correlate with these quantitative results. In this wide-swath urban scene, early methods like 3DCNN and SpectralFormer display extensive noise and severe misclassification errors, particularly in regions corrupted by cloud shadows. While advanced models like CSCANet and DSFormer mitigate some of these issues, they still exhibit noticeable misclassification clusters in highly mixed and complex urban textures. In contrast, the map generated by MDS3-Net (Figure 7j) is visually the most faithful to the ground truth. It successfully overcomes the interference of shadow artifacts and high spectral similarity, yielding spatially coherent land-cover regions with sharp and accurate boundaries. These observations visually confirm the ability of our hybrid architecture to learn resilient and precise spectral–spatial representations even under adverse conditions.

3.3.3. LongKou

The classification accuracy results for the LK dataset are detailed in Table 4. On this scene, MDS3-Net achieves state-of-the-art performance across all evaluation metrics. Specifically, MDS3-Net registered an OA of 99.86 % , surpassing the second-best method, DSFormer ( 99.81 % ). A distinctive advantage is observed in AA, where MDS3-Net reached 99.57 % , demonstrating superior balance compared to competitors. Similar trends are seen in the Kappa coefficient ( 99.82 % ). This robustness is most evident in Class 5 (Soy_narrow), a challenging class where traditional methods like 3DCNN struggle significantly ( 54.27 % ). In contrast, MDS3-Net achieves a remarkable 99.67 % , outperforming even the strong competitor DSFormer ( 98.84 % ). The visual classification maps presented in Figure 8 corroborate these quantitative findings. While earlier architectures like 3DCNN and SpectralFormer (Figure 8b,e) exhibit noticeable salt-and-pepper noise and fragmentation, advanced methods such as DSFormer and CSCANet produce highly smooth classification maps similar to ours. However, the map generated by MDS3-Net (Figure 8j) achieves the highest fidelity to the Ground Truth (Figure 8a). By effectively combining global homogeneity with precise feature identification, MDS3-Net ensures consistent accuracy even in the most difficult crop regions (Class 5) that are prone to misclassification by other methods.

3.3.4. University of Trento

The classification accuracy results for the UT dataset are presented in Table 5. Despite the challenge of complex urban land cover and limited samples, MDS3-Net achieves the highest values across all three main evaluation metrics. Specifically, MDS3-Net records an OA of 99.64 % , surpassing the second-best method, morphFormer ( 99.11 % ), and demonstrating a substantial improvement of over 15 % compared to the 3DCNN. Notably, our method also achieves the highest AA ( 99.44 % ) and Kappa ( 99.53 % ), indicating superior consistency. A detailed class analysis reveals the source of this advantage: while comparison methods perform well on distinct vegetation classes (e.g., Classes 4 and 5), MDS3-Net distinguishes itself in structural categories. For instance, in Class 6 (Roads), MDS3-Net achieves 98.93 % , significantly outperforming the nearest competitor, CSCANet ( 96.27 % ), by over 2 % . The visual classification maps shown in Figure 9 confirm these quantitative findings. While the map generated by 3DCNN (Figure 9b) exhibits severe fragmentation and noise, advanced methods like morphFormer and CSCANet produce relatively smooth maps. However, the map generated by MDS3-Net (Figure 9j) offers the highest fidelity to the Ground Truth (Figure 9a). It effectively suppresses noise while preserving the sharp geometry of urban structures, further validating the model’s robustness in handling the complex spatial details of the UT environment.
In summary, the extensive experiments across four diverse datasets confirm that the proposed MDS3-Net achieves consistent state-of-the-art performance, validating its robust generalization ability. This comprehensive superiority is not attributable to a single component but stems from the synergistic integration of the three proposed modules. The MSDC module excels in extracting discriminative spectral information and aligning geometric features, effectively handling objects with irregular boundaries and complex shapes across all datasets. Simultaneously, the S3 Encoder captures long-range dependencies to maintain global semantic consistency, which is vital for distinguishing spectrally similar materials in both urban and agricultural scenes. Furthermore, the DPFE module functions as a critical filtering mechanism, preventing the loss of fine structural details during dimension reduction. Together, these components enable MDS3-Net to balance local detail preservation with global contextual understanding, thereby addressing the complex variability inherent in diverse HSI scenes.

3.4. Additional Experiments

We conducted further experiments to analyze the contribution of different MDS3-Net components and the impact of key parameters. These experiments include ablation analysis, the effect of training sample ratios, the number of principal components, patch size variation, the number of MSDC blocks, and the parameter sensitivity analysis of the S3 Encoder.

3.4.1. Ablation Analysis

To systematically investigate the specific contribution of each component within MDS3-Net, we conducted a comprehensive ablation study by selectively enabling or disabling the MSDC, S3 Encoder, and DPFE module. The quantitative results on all four datasets are summarized in Table 6.
As observed in Table 6, the complete MDS3-Net achieves the highest performance across all datasets, confirming the necessity of the synergistic integration of all three modules. Taking the highly challenging HS2013 dataset as a representative example, we can draw three key observations:
(1)
Single-module limitations: When the network relies on a single component, the performance is limited. The stand-alone DPFE yields the lowest OA ( 77.57 % ), which is expected as it is primarily designed for transition and downsampling refinement rather than deep feature extraction. While the MSDC and the S3 Encoder alone achieve moderate baselines ( 91.99 % and 93.17 % , respectively), neither captures, on its own, the full spectrum of local and global characteristics that the integrated model exploits for precise classification.
(2)
Synergy of dual modules: The integration of any two modules yields substantial performance gains over the single-component baselines. For instance, combining the MSDC and S3 Encoder boosts the OA to 96.94 % . This validates our design philosophy that fusing local spectral-geometric features (from MSDC) with global sequential dependencies (from S3 Encoder) significantly enhances discriminative power. Similarly, the combination of the S3 Encoder and DPFE reaches 96.73 % , highlighting the importance of the DPFE in preserving salient information during feature processing.
(3)
Holistic integration: The full MDS3-Net achieves the peak OA of 98.05 % , outperforming all single- and dual-module variants. This demonstrates that the three components are not merely additive but complementary: the MSDC aligns local features, the S3 Encoder models global context, and the DPFE ensures detail preservation during dimension reduction. The absence of any single module disrupts this balance and leads to a noticeable degradation in accuracy.

3.4.2. Impact of Training Sample Ratios

To evaluate the robustness of MDS3-Net under limited supervision conditions, we varied the training set proportions from 1% to 10% across all four datasets. The comparative experimental results are visualized in Figure 10.
Specifically, when the training ratio increases from 1% to 5%, most models show a significant upward trend in performance. However, as the ratio continues to rise beyond 5%, the growth rate plateaus, and certain methods even exhibit a decline in accuracy. This performance degradation observed in specific models can be attributed to their unique architectural characteristics. For the traditional 3DCNN, its extremely limited network capacity and rigid receptive field struggle to accommodate the increased spatial–spectral variance introduced by larger training sets, leading to underfitting and classification instability. For SpectralFormer, which primarily focuses on spectral sequence modeling, the lack of robust spatial contextual constraints makes it highly sensitive to the increased intra-class variance and local noise. In contrast, models that effectively fuse spatial–spectral features with appropriate capacity maintain stable decision boundaries and exhibit greater robustness. Based on this observation, we selected 5% as the fixed training ratio for our main comparative experiments.
Despite these fluctuations in competitor performance, MDS3-Net demonstrates exceptional stability, particularly in data-scarce scenarios. This robustness is clearly exhibited on the highly complex HS2013 dataset (Figure 10b). When the training ratio is extremely limited to merely 1%, MDS3-Net still achieves the highest OA of approximately 89 % . While the advanced CSCANet demonstrates competitive resilience, MDS3-Net consistently maintains an absolute lead, whereas other recent competitors like morphFormer and DSFormer experience more noticeable performance degradation, dropping to around 85 % and 81 % , respectively. As shown across all subfigures in Figure 10, when the training ratio is reduced to 1% or 3%, competitors such as 3DCNN and SpectralFormer suffer significant performance degradation due to overfitting. In contrast, MDS3-Net maintains a substantial lead even with minimal supervision, achieving consistently high OA values that are comparable to those of other methods trained with larger datasets. This robustness indicates that the synergistic integration of the MSDC, S3 Encoder, and DPFE effectively extracts and preserves discriminative features without relying heavily on massive labeled data, proving the model’s effectiveness in practical applications where labels are expensive to acquire.

3.4.3. Impact of the Number of Principal Components

The number of retained principal components (C) determines the spectral richness of the network input, directly affecting the balance between information preservation and noise redundancy. To investigate its impact, we evaluated the performance of MDS3-Net with C varying from 10 to 50, as illustrated in Figure 11.
From the results, we can observe that different datasets exhibit varying sensitivities to spectral dimensionality. Notably, the LK, UT, and the challenging HS2013 datasets all achieve their peak classification accuracy exactly at C = 30 , demonstrating that this dimensionality offers the optimal trade-off between informative spectral features and noise reduction. For the HS2013 dataset in particular, the performance exhibits a clear inverted-U shape, rising steadily from lower dimensions to reach its absolute maximum at C = 30 , before declining as redundant bands introduce noise. Although the UP dataset favors slightly lower dimensions due to its specific spectral characteristics, it still maintains a stable local peak at C = 30 before dropping at higher dimensions. Consequently, to ensure robust generalization and consistency across all diverse scenes, we set the number of principal components to 30 for our method.

3.4.4. Impact of Spatial Patch Size

To compare the effect of different input spatial contexts on the performance of MDS3-Net, we varied the patch size from 9 × 9 to 17 × 17 . The OA trends for all four datasets are illustrated in Figure 12.
First, it can be observed that the OAs on the four datasets generally follow a trend of increasing and then decreasing as the patch size grows. An appropriately sized patch provides sufficient neighborhood information for the MSDC and S3 Encoder to capture local structures and global dependencies, whereas an overly large patch may introduce irrelevant background noise and spatial redundancy.
Second, as shown in Figure 12, MDS3-Net achieves optimal classification performance on the UP, HS2013, and UT datasets when the patch size is set to 13 × 13 . For the LK dataset, the performance peaks at a patch size of 15 × 15 . Taking into account the consistent optimal performance across the majority of the datasets and the computational resources available, we set the input patch size to 13 × 13 for all experiments in this study.

3.4.5. Impact of the Number of MSDC Blocks

To determine the optimal depth of the spatial–spectral feature extraction stage, we investigated the impact of the number of MSDC blocks on classification performance. By varying the block count from 1 to 5, we evaluated the OAs on all four datasets, as illustrated in Figure 13.
As the number of MSDC blocks increases from 1 to 4, a substantial and consistent improvement in OA is observed across all datasets. This sharp upward trend indicates that a shallow architecture is insufficient to capture the intricate, high-level spectral–spatial representations required for accurate classification in heterogeneous scenes. By cascading multiple MSDC blocks, the network progressively expands its receptive field, allowing the deformable convolutions to capture broader geometric contexts while refining local spectral details.
However, when the network depth is further increased to 5 blocks, the classification performance plateaus and even experiences a slight degradation across the datasets. This decline can be attributed to the risks of overfitting, information redundancy, and potential optimization difficulties that often accompany overly deep structures. Consequently, to achieve the optimal balance between feature extraction capacity, classification accuracy, and computational efficiency, the number of cascaded MSDC blocks is empirically set to 4 in our proposed MDS3-Net.

3.4.6. Parameter Sensitivity Analysis of the S3 Encoder

The structural configuration of the S3 Encoder plays a pivotal role in balancing feature modeling capability and computational complexity. To determine the optimal architecture, we conducted sensitivity analysis on two key hyperparameters: the spatial kernel size (K) in the GS2M block and the number of encoder layers (L). The experiments were performed on the UP, HS2013, LK, and UT datasets, as illustrated in Figure 14.
First, we investigated the influence of K in the GS2M module, varying K from 3 to 9. As shown in Figure 14a, the OA generally exhibits an upward trend as K increases from 3 to 7. This phenomenon indicates that the GS2M module relies on a large kernel gating mechanism to capture long-range spatial dependencies, and a larger kernel size effectively expands the receptive field, enabling the module to simulate global feature interactions. However, as K further increases to 9, the performance on the UP and LK datasets decreases significantly, while it tends to saturate on the HS2013 and UT datasets. This indicates that while seeking a wider receptive field is beneficial, an excessively large spatial window may introduce irrelevant background noise and disrupt the consistency of local spectra, which is particularly disadvantageous for classifying small or heterogeneous objects. Therefore, we set K = 7 to achieve the best balance between modeling long-term dependencies and preserving fine-grained local details.
Second, we evaluated the impact of the network depth by varying L from 1 to 5. Figure 14b demonstrates the variations in OA with different network depths. It can be observed that the model achieves optimal performance when L = 2 across all four datasets. While deeper networks are theoretically capable of extracting higher-level semantic abstractions, a significant performance drop occurs when L exceeds 2. This phenomenon can be attributed to the optimization difficulties, such as overfitting, that arise when training deep networks with limited hyperspectral training samples. Therefore, to ensure high classification accuracy while maintaining computational efficiency, the number of S3 Encoder layers is fixed at L = 2.

3.5. Computational Complexity and Efficiency Analysis

In addition to classification accuracy, computational efficiency—encompassing computational cost, model size, and running time—is a pivotal factor for assessing the practical value of HSI classification models. Table 7 and Figure 15 present a comprehensive comparison of the proposed MDS3-Net against state-of-the-art methods in terms of four key metrics: Floating Point Operations (FLOPs) measured in millions (M), the number of parameters (Param) measured in thousands (K), alongside training and testing times measured in seconds (s), the sum of which yields the total running time.
It is worth noting that these statistics were recorded on the HS2013 dataset under the same experimental setting as Table 3, specifically with 5% of samples for training, 1% for validation, and the remaining 94% for testing. This consistent configuration ensures a fair comparison of both performance and computational cost across different methods.
As observed from Table 7 and intuitively illustrated in Figure 15, lightweight networks like SSRN and SSFTT exhibit relatively lower computational costs in terms of FLOPs. However, this efficiency comes at a severe expense of classification performance. For instance, the most lightweight SSRN trails the proposed method by over 16.8 % in OA, falling to the bottom-left corner. On the other hand, complex models like SpectralFormer and DSFormer suffer from high computational burdens. As depicted by its rightmost position and darker warm color in Figure 15, SpectralFormer requires the longest total running time ( 102.53 s) and possesses high FLOPs ( 28.79 M). Similarly, the traditional 3DCNN involves an excessively heavy parameter load of 3527.22 K, which is clearly visualized by its massive bubble size without delivering competitive accuracy ( 78.53 % ).
The proposed MDS3-Net achieves a superior trade-off between performance and efficiency. Located prominently in the top-left region of Figure 15, it demonstrates an optimal balance. Remarkably, MDS3-Net maintains a highly competitive computational footprint: its FLOPs ( 5.67 M) are significantly lower than those of recent Transformer-based methods like SpectralFormer ( 28.79 M), morphFormer ( 25.64 M), and DSFormer ( 21.05 M), while its parameter count ( 140.35 K) remains highly comparable to, and even slightly lower than, lightweight architectures like SSFTT ( 148.49 K). This efficiency is primarily attributed to the design of the S3 Encoder, which replaces standard heavy computations with streamlined spectral–spatial sequence modeling. Although the integration of the deformable convolution mechanism in the MSDC module introduces a slight increase in training time compared to simple CNNs (due to the learning of adaptive sampling offsets), this cost is well-justified. Crucially, compared to SpectralFormer, MDS3-Net reduces the total running time by over 35 % while achieving state-of-the-art accuracy across all metrics. This demonstrates that the proposed architecture effectively leverages computational resources to enhance feature representation capability without incurring excessive overhead.

4. Discussion

4.1. Mechanism Analysis of Performance Superiority

The extensive experimental results in Section 3 validate that MDS3-Net outperforms current state-of-the-art methods. Beyond the numerical improvements, it is crucial to understand the underlying mechanisms driving this success. The superiority of MDS3-Net primarily stems from its ability to address HSI classification challenges along three complementary dimensions: the unified extraction of local spatial–spectral features, the efficient modeling of global long-range dependencies, and the preservation of salient information during dimensionality reduction.
First and foremost, the MSDC module serves as the core engine for joint spectral–spatial feature extraction. Unlike standard CNNs that utilize fixed square kernels and treat spatial–spectral dimensions rigidly, the MSDC module introduces a dual-enhancement mechanism. Regarding spatial adaptability, land cover objects—such as the complex urban structures in HS2013 or winding roads in Trento—often exhibit irregular shapes that do not conform to fixed grids. The deformable mechanism in MSDC decouples the receptive field, allowing sampling locations to dynamically align with object boundaries, thereby effectively reducing “mixed pixel” interference at edges. Concurrently, in terms of spectral discrimination, the MSDC employs a cascaded strategy. It first applies multiscale spectral convolution to extract discriminative spectral signatures across different local band ranges. These spectrally refined features are then sequentially processed by the spatial deformable convolution. This serial design ensures that the network distinguishes between materials with subtle spectral discrepancies before performing geometric alignment. Therefore, the high classification accuracy of MDS3-Net in heterogeneous regions is a direct result of this synergy—MSDC refines features spectrally and then aligns them spatially.
Second, the ablation study (Table 6) confirms the necessity of the S3 Encoder. While MSDC excels at local spectral–spatial extraction, pure convolutional operations inherently struggle to capture long-range dependencies. The S3 Encoder compensates for this by utilizing a gated large-kernel mechanism to model global sequential relationships across the spectral–spatial domain. Unlike traditional Transformers that rely on computationally intensive self-attention, the S3 Encoder achieves this global modeling with linear complexity. Consequently, the architecture establishes a complementary hierarchy: localized, adaptive feature extraction through the MSDC, supplemented by efficient global context refinement through the S3 Encoder.
Third, the DPFE module plays a critical role in maintaining feature integrity during downsampling. In conventional hierarchical networks, standard pooling operations often lead to the irreversible loss of fine-grained details, causing small-scale objects to disappear in deeper layers. The DPFE addresses this by employing a dual-path strategy: a spectral reorganization path to strictly preserve spectral context and a spatial squeeze path to highlight salient regions via spatial attention. By filtering background noise while retaining key semantic information during dimension reduction, the DPFE effectively bridges adjacent stages, ensuring that the network maintains high distinctiveness even for small targets or complex boundaries.

4.2. Architectural Efficiency and Practicality

In the realm of HSI classification, achieving a balance between high accuracy and low computational cost is a pivotal consideration for practical deployment. The efficiency of the proposed MDS3-Net stems from its strategic architectural design.
Specifically, the efficiency and practicality of MDS3-Net are realized through the layer-wise synergistic operation of its three core components. First, within each processing stage, the MSDC module and the S3 Encoder work in a complementary manner. The MSDC utilizes decoupled convolutions to efficiently extract dense local features. Simultaneously, the S3 Encoder, strategically embedded in the residual path, captures global context to rectify these local representations. Distinct from standard ViTs, our S3 Encoder avoids quadratic complexity ( O ( N 2 ) ), ensuring that the overhead of global modeling remains manageable.
Second, connecting these stages is the DPFE module, which acts as a semantics-preserving compressor. Unlike standard pooling layers that indiscriminately discard information, the DPFE employs a dual-path strategy: a spectral reorganization path to linearly project spectral dimensions and a spatial squeeze path to filter background noise via spatial attention. This hierarchical architecture optimizes the allocation of computational resources. By progressively reducing feature resolution while preserving salient information through DPFE, the network ensures that deeper layers operate on compact, high-level semantic embeddings. This “coarse-to-fine” processing flow allows MDS3-Net to retain powerful global modeling capabilities without incurring the prohibitive costs associated with full-resolution processing, thereby achieving an optimal balance between inference speed and classification accuracy.

4.3. Limitations

Despite the superior classification performance and competitive efficiency achieved by MDS3-Net, there remain limitations regarding model complexity that warrant further discussion. Although MDS3-Net is significantly faster than standard Transformer-based methods, it inevitably incurs higher storage and computational costs compared to extremely lightweight CNNs (such as SSRN). Specifically, the calculation of learnable offsets in the MSDC module and the large-kernel depthwise convolutions in the S3 Encoder require more floating-point operations than simple static convolutions. This reflects a necessary trade-off to achieve high-precision classification in complex scenes. To address this, future work will focus on developing lightweight versions of MDS3-Net, thereby further reducing the resource overhead to enhance deployability on resource-constrained platforms.

5. Conclusions

In this paper, we have proposed MDS3-Net, a novel hierarchical framework designed to address the dual challenges of geometric rigidity and computational inefficiency in HSI classification. By synergizing the MSDC with the S3 Encoder, our method effectively unifies the adaptive extraction of local spectral–spatial features and the modeling of global long-range dependencies with linear computational complexity. Additionally, the DPFE module functions as a critical bridge between stages, facilitating semantics-preserving dimensionality reduction through spectral reorganization and spatial attention mechanisms.
Extensive experiments on four benchmark datasets (UP, HS2013, LK, and UT) demonstrate that MDS3-Net consistently achieves state-of-the-art classification performance, particularly in scenes characterized by complex geometric boundaries and significant spectral variability. Quantitative comparisons and ablation studies further validate the necessity of each synergistic component—MSDC for joint spectral discrimination and geometric alignment, S3 Encoder for efficient global context modeling, and DPFE for semantics-preserving dimensionality reduction. Moreover, the complexity analysis confirms that MDS3-Net attains an optimal trade-off between accuracy and efficiency, significantly outperforming standard Transformer-based models in terms of training speed while maintaining superior precision.
In future work, we intend to focus on developing lightweight versions of MDS3-Net. This will aim to reduce the parameter count and computational overhead identified in the limitations, thereby further facilitating the deployment of high-performance HSI classification models on resource-constrained edge devices.

Author Contributions

Conceptualization, T.B., B.Y. and S.H.; methodology, T.B., B.Y. and S.H.; validation, T.B.; writing—original draft preparation, T.B.; writing—review and editing, B.Y., Y.C., X.Z., L.Y. and S.H.; visualization, T.B.; funding acquisition, S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific Research Project of the Hunan Provincial Department of Education, grant number 21B0046.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank those who provided help in this study.

Conflicts of Interest

Author Li Yue was employed by the company BGP Inc., China National Petroleum Corporation. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figure 1. Overall framework of the proposed MDS3-Net. The input HSI data first undergoes PCA for dimensionality reduction. The network then processes the patches through a 4-stage hierarchical structure. In each stage, the MSDC block and the S3 Encoder extract local spectral–spatial features and global long-range dependencies, respectively. The DPFE module bridges adjacent stages by performing semantics-preserving spatial and spectral downsampling. Finally, a classifier generates the predicted output classes.
Figure 2. Illustration of the MSDC block.
Figure 3. Architecture of the S3 Encoder.
Figure 4. Detailed structure of the GS2M.
Figure 5. Illustration of the DPFE module.
Figure 6. Classification visualization maps of all methods on the UP dataset. (a) Ground Truth. (b) 3DCNN. (c) HybridSN. (d) SSRN. (e) SpectralFormer. (f) SSFTT. (g) morphFormer. (h) DSFormer. (i) CSCANet. (j) MDS3-Net.
Figure 7. Classification visualization maps of all methods on the HS2013 dataset. (a) Ground Truth. (b) 3DCNN. (c) HybridSN. (d) SSRN. (e) SpectralFormer. (f) SSFTT. (g) morphFormer. (h) DSFormer. (i) CSCANet. (j) MDS3-Net.
Figure 8. Classification visualization maps of all methods on the LK dataset. (a) Ground Truth. (b) 3DCNN. (c) HybridSN. (d) SSRN. (e) SpectralFormer. (f) SSFTT. (g) morphFormer. (h) DSFormer. (i) CSCANet. (j) MDS3-Net.
Figure 9. Classification visualization maps of all methods on the UT dataset. (a) Ground Truth. (b) 3DCNN. (c) HybridSN. (d) SSRN. (e) SpectralFormer. (f) SSFTT. (g) morphFormer. (h) DSFormer. (i) CSCANet. (j) MDS3-Net.
Figure 10. Comparison of different training sample percentages. (a) UP. (b) HS2013. (c) LK. (d) UT.
Figure 11. Impact of the number of principal components on OA across the four datasets.
Figure 12. Comparison of OAs with different patch sizes.
Figure 13. Comparison of OAs with different numbers of MSDC blocks.
Figure 14. Parameter sensitivity analysis of the S3 Encoder. (a) Impact of the spatial kernel size in GS2M. (b) Impact of the number of S3 Encoder layers.
Figure 15. Visualization of the performance-complexity trade-off on the HS2013 dataset. The X-axis represents FLOPs (M), and the Y-axis represents OA (%). The bubble size denotes the number of parameters (K), and the color indicates the total running time (s), which is the sum of training and testing times.
Table 1. Details of the classes and sample numbers for the UP, LK, HS2013, and UT datasets.
(a) University of Pavia (UP)
No. | Class | Train | Val | Test
1 | Asphalt | 331 | 63 | 6237
2 | Meadows | 932 | 177 | 17,540
3 | Gravel | 104 | 19 | 1976
4 | Trees | 153 | 29 | 2882
5 | Painted-m-s | 67 | 12 | 1266
6 | Bare Soil | 251 | 47 | 4731
7 | Bitumen | 66 | 12 | 1252
8 | Self-block-b | 184 | 34 | 3464
9 | Shadows | 47 | 9 | 891
Total | | 2135 | 402 | 40,239

(b) LongKou (LK)
No. | Class | Train | Val | Test
1 | Corn | 1725 | 327 | 32,459
2 | Cotton | 418 | 79 | 7877
3 | Sesame | 151 | 28 | 2852
4 | Soy_broad | 3160 | 600 | 59,452
5 | Soy_narrow | 207 | 39 | 3905
6 | Rice | 592 | 112 | 11,150
7 | Water | 3352 | 637 | 63,067
8 | Roads_houses | 356 | 67 | 6701
9 | Mixed_weed | 261 | 49 | 4919
Total | | 10,222 | 1938 | 192,382

(c) Houston 2013 (HS2013)
No. | Class | Train | Val | Test
1 | Healthy grass | 63 | 12 | 1176
2 | Stressed grass | 63 | 12 | 1179
3 | Synthetic grass | 35 | 7 | 655
4 | Trees | 63 | 12 | 1169
5 | Soil | 63 | 12 | 1167
6 | Water | 17 | 4 | 304
7 | Residential | 64 | 13 | 1191
8 | Commercial | 63 | 12 | 1169
9 | Road | 63 | 12 | 1177
10 | Highway | 62 | 12 | 1153
11 | Railway | 62 | 12 | 1161
12 | Parking Lot 1 | 62 | 12 | 1159
13 | Parking Lot 2 | 24 | 5 | 440
14 | Tennis Court | 22 | 5 | 401
15 | Running Track | 33 | 7 | 620
Total | | 759 | 149 | 14,121

(d) University of Trento (UT)
No. | Class | Train | Val | Test
1 | Apple trees | 201 | 38 | 3795
2 | Buildings | 145 | 27 | 2731
3 | Ground | 23 | 4 | 452
4 | Woods | 456 | 86 | 8581
5 | Vineyard | 525 | 99 | 9877
6 | Roads | 158 | 30 | 2986
Total | | 1508 | 284 | 28,422
Table 2. Classification results of different methods for the UP dataset.
Methods | 3DCNN | HybridSN | SSRN | SpectralFormer | SSFTT | morphFormer | DSFormer | CSCANet | MDS3-Net
1 | 84.82 ± 8.15 | 89.81 ± 2.55 | 97.98 ± 1.26 | 90.92 ± 0.54 | 98.74 ± 0.82 | 99.86 ± 3.35 | 99.50 ± 0.21 | 99.50 ± 0.24 | 99.23 ± 0.78
2 | 94.63 ± 0.06 | 98.67 ± 0.62 | 97.98 ± 0.10 | 99.23 ± 0.12 | 99.19 ± 0.04 | 99.99 ± 0.18 | 99.87 ± 0.10 | 100.00 ± 0.00 | 99.98 ± 0.03
3 | 0.00 ± 0.00 | 93.03 ± 8.18 | 97.53 ± 3.11 | 42.47 ± 2.78 | 99.09 ± 1.83 | 97.72 ± 4.58 | 93.71 ± 1.14 | 96.05 ± 2.21 | 98.08 ± 0.93
4 | 81.94 ± 2.21 | 96.09 ± 4.23 | 97.81 ± 1.99 | 96.42 ± 2.37 | 97.06 ± 1.79 | 96.63 ± 3.16 | 99.42 ± 0.33 | 98.39 ± 0.57 | 98.42 ± 0.84
5 | 99.45 ± 2.21 | 99.76 ± 1.58 | 97.98 ± 0.02 | 100.00 ± 0.00 | 98.95 ± 0.20 | 100.00 ± 0.00 | 100.00 ± 0.00 | 99.98 ± 0.05 | 100.00 ± 0.00
6 | 16.65 ± 1.45 | 95.49 ± 2.89 | 94.51 ± 1.08 | 44.76 ± 1.07 | 99.19 ± 0.23 | 99.30 ± 1.62 | 99.81 ± 0.19 | 99.80 ± 0.47 | 99.84 ± 0.17
7 | 0.40 ± 16.50 | 87.05 ± 2.90 | 97.98 ± 3.41 | 50.72 ± 1.28 | 99.11 ± 0.15 | 94.96 ± 0.52 | 98.38 ± 0.94 | 99.85 ± 0.24 | 99.82 ± 0.21
8 | 95.26 ± 9.66 | 82.66 ± 5.67 | 97.08 ± 2.08 | 93.27 ± 1.69 | 95.15 ± 1.87 | 96.71 ± 2.48 | 96.65 ± 0.77 | 96.99 ± 1.00 | 98.95 ± 0.91
9 | 24.80 ± 7.94 | 93.45 ± 11.40 | 97.64 ± 2.05 | 99.55 ± 3.41 | 96.40 ± 3.04 | 94.83 ± 5.67 | 97.24 ± 1.11 | 98.44 ± 0.88 | 97.45 ± 1.90
OA (%) | 74.12 ± 1.64 | 94.48 ± 0.87 | 96.90 ± 0.41 | 86.56 ± 0.42 | 98.54 ± 0.17 | 98.99 ± 0.62 | 99.09 ± 0.09 | 99.29 ± 0.13 | 99.49 ± 0.18
AA (%) | 49.80 ± 2.33 | 92.89 ± 1.23 | 97.40 ± 0.76 | 79.71 ± 0.63 | 98.10 ± 0.41 | 97.78 ± 1.05 | 98.29 ± 0.25 | 98.24 ± 0.23 | 99.09 ± 0.36
κ × 100 | 64.30 ± 2.17 | 92.67 ± 1.16 | 96.50 ± 0.54 | 81.64 ± 0.55 | 98.34 ± 0.23 | 98.65 ± 0.83 | 98.80 ± 0.12 | 99.06 ± 0.18 | 99.33 ± 0.24
The best results are shown in bold.
Table 3. Classification results of different methods for the HS2013 dataset.
Methods | 3DCNN | HybridSN | SSRN | SpectralFormer | SSFTT | morphFormer | DSFormer | CSCANet | MDS3-Net
1 | 85.03 ± 2.96 | 97.23 ± 2.11 | 89.95 ± 6.30 | 99.43 ± 0.40 | 98.38 ± 1.39 | 97.7 ± 1.20 | 96.79 ± 1.16 | 96.51 ± 2.67 | 98.3 ± 0.97
2 | 92.08 ± 5.48 | 97.91 ± 1.44 | 87.89 ± 6.82 | 86.71 ± 1.94 | 96.63 ± 3.95 | 98.94 ± 0.72 | 99.35 ± 0.54 | 98.41 ± 1.84 | 99.41 ± 0.62
3 | 90.49 ± 3.22 | 99.44 ± 1.21 | 98.14 ± 1.83 | 95.11 ± 1.56 | 99.10 ± 1.78 | 99.50 ± 0.30 | 99.79 ± 0.21 | 99.67 ± 0.4 | 99.8 ± 0.13
4 | 87.85 ± 5.12 | 96.12 ± 1.59 | 89.96 ± 4.75 | 90.80 ± 1.01 | 99.20 ± 0.74 | 95.90 ± 2.44 | 99.27 ± 0.72 | 98.49 ± 2.30 | 99.67 ± 0.39
5 | 98.91 ± 0.82 | 97.47 ± 2.47 | 99.84 ± 0.35 | 98.20 ± 0.87 | 98.00 ± 2.50 | 99.29 ± 0.58 | 99.60 ± 0.29 | 99.97 ± 0.08 | 100.00 ± 0.00
6 | 59.80 ± 6.02 | 98.35 ± 1.72 | 72.61 ± 10.54 | 40.72 ± 8.81 | 66.35 ± 30.16 | 84.87 ± 6.58 | 94.34 ± 5.15 | 88.53 ± 4.95 | 97.48 ± 2.18
7 | 85.00 ± 3.10 | 95.09 ± 2.49 | 82.38 ± 12.80 | 85.94 ± 1.46 | 93.85 ± 6.11 | 94.82 ± 2.29 | 94.09 ± 2.37 | 95.39 ± 3.76 | 97.77 ± 1.58
8 | 57.34 ± 8.00 | 95.39 ± 1.77 | 66.34 ± 8.78 | 65.06 ± 4.72 | 83.09 ± 8.51 | 88.3 ± 2.09 | 84.17 ± 4.6 | 83.67 ± 5.11 | 93.74 ± 2.49
9 | 76.80 ± 2.49 | 93.70 ± 1.49 | 76.96 ± 6.25 | 82.04 ± 3.98 | 85.63 ± 7.63 | 92.86 ± 2.31 | 80.78 ± 4.07 | 92.90 ± 3.10 | 93.40 ± 2.89
10 | 74.58 ± 14.48 | 92.76 ± 2.72 | 68.65 ± 10.88 | 74.71 ± 6.78 | 95.54 ± 3.87 | 97.43 ± 1.64 | 89.56 ± 3.45 | 97.78 ± 1.92 | 99.01 ± 1.47
11 | 75.09 ± 11.96 | 95.77 ± 2.15 | 69.77 ± 11.90 | 72.70 ± 8.21 | 88.14 ± 15.60 | 95.69 ± 2.45 | 92.76 ± 2.77 | 97.04 ± 3.11 | 99.08 ± 0.95
12 | 48.07 ± 21.99 | 94.84 ± 1.71 | 87.27 ± 6.21 | 76.90 ± 5.41 | 89.29 ± 3.37 | 95.34 ± 1.45 | 85.45 ± 1.74 | 94.73 ± 3.96 | 98.85 ± 0.87
13 | 49.39 ± 5.62 | 96.06 ± 2.7 | 53.04 ± 8.98 | 60.36 ± 6.89 | 56.02 ± 37.07 | 93.82 ± 1.58 | 79.89 ± 2.97 | 92.44 ± 1.9 | 94.99 ± 2.07
14 | 92.16 ± 4.65 | 97.05 ± 3.95 | 98.39 ± 2.28 | 91.49 ± 2.82 | 78.75 ± 41.57 | 98.43 ± 2.14 | 100.00 ± 0.00 | 99.85 ± 0.31 | 99.88 ± 0.31
15 | 94.56 ± 3.38 | 95.76 ± 4.56 | 97.85 ± 2.94 | 91.06 ± 3.17 | 99.79 ± 0.45 | 99.95 ± 0.11 | 100.00 ± 0.00 | 99.76 ± 0.75 | 100.00 ± 0.00
OA (%) | 78.53 ± 1.47 | 95.87 ± 0.64 | 81.18 ± 1.39 | 82.78 ± 1.57 | 91.27 ± 4.12 | 95.79 ± 0.50 | 92.77 ± 0.81 | 95.75 ± 0.80 | 98.05 ± 0.29
AA (%) | 72.95 ± 1.25 | 96.20 ± 0.60 | 79.27 ± 1.56 | 80.75 ± 1.34 | 88.52 ± 6.48 | 95.52 ± 0.56 | 93.06 ± 0.80 | 95.68 ± 0.69 | 98.09 ± 0.37
κ × 100 | 76.77 ± 1.60 | 95.53 ± 0.69 | 79.62 ± 1.51 | 81.38 ± 1.70 | 90.55 ± 4.47 | 95.44 ± 0.54 | 92.19 ± 0.88 | 95.40 ± 0.87 | 97.89 ± 0.32
The best results are shown in bold.
Table 4. Classification results of different methods for the LK dataset.
Methods | 3DCNN | HybridSN | SSRN | SpectralFormer | SSFTT | morphFormer | DSFormer | CSCANet | MDS3-Net
1 | 96.92 ± 0.86 | 98.48 ± 0.61 | 99.94 ± 0.04 | 97.91 ± 0.09 | 99.58 ± 0.51 | 99.66 ± 0.35 | 99.98 ± 0.01 | 99.93 ± 0.06 | 100.00 ± 0.00
2 | 83.40 ± 2.42 | 95.12 ± 4.66 | 99.85 ± 3.91 | 86.98 ± 0.82 | 99.59 ± 0.47 | 99.71 ± 2.44 | 99.92 ± 0.07 | 99.72 ± 0.33 | 99.82 ± 0.37
3 | 80.82 ± 2.01 | 99.36 ± 3.69 | 99.05 ± 1.58 | 72.24 ± 2.26 | 99.35 ± 1.85 | 99.54 ± 2.34 | 99.71 ± 0.27 | 99.39 ± 0.76 | 99.89 ± 0.23
4 | 95.18 ± 0.51 | 99.82 ± 0.10 | 99.93 ± 0.09 | 95.25 ± 0.62 | 99.40 ± 0.11 | 99.75 ± 0.13 | 99.90 ± 0.03 | 99.92 ± 0.04 | 99.93 ± 0.05
5 | 54.27 ± 5.29 | 96.43 ± 13.20 | 95.51 ± 4.04 | 66.60 ± 4.76 | 98.83 ± 5.95 | 98.03 ± 8.77 | 98.84 ± 0.35 | 96.83 ± 1.99 | 99.67 ± 2.96
6 | 97.87 ± 1.13 | 99.99 ± 1.10 | 99.95 ± 0.34 | 99.74 ± 0.53 | 99.33 ± 0.27 | 99.73 ± 1.42 | 99.96 ± 0.05 | 99.76 ± 0.16 | 99.92 ± 0.19
7 | 99.32 ± 0.31 | 99.97 ± 0.08 | 99.97 ± 0.02 | 99.98 ± 0.08 | 99.50 ± 0.12 | 99.96 ± 0.17 | 99.98 ± 0.01 | 99.96 ± 0.02 | 99.98 ± 0.06
8 | 96.11 ± 4.83 | 97.48 ± 1.39 | 96.66 ± 1.29 | 91.19 ± 4.19 | 97.85 ± 4.24 | 99.18 ± 5.79 | 98.19 ± 0.32 | 97.16 ± 0.91 | 98.64 ± 2.19
9 | 86.49 ± 4.41 | 98.13 ± 4.52 | 98.23 ± 4.84 | 71.90 ± 1.64 | 98.21 ± 1.36 | 97.77 ± 3.80 | 97.99 ± 0.58 | 96.35 ± 0.82 | 98.27 ± 1.97
OA (%) | 95.27 ± 0.36 | 99.25 ± 0.76 | 99.38 ± 0.30 | 95.51 ± 0.21 | 99.31 ± 0.23 | 99.25 ± 0.74 | 99.81 ± 0.02 | 99.66 ± 0.07 | 99.86 ± 0.06
AA (%) | 79.04 ± 1.08 | 98.31 ± 2.77 | 98.79 ± 1.12 | 86.86 ± 0.62 | 99.07 ± 0.90 | 99.26 ± 1.05 | 99.39 ± 0.10 | 98.78 ± 0.34 | 99.57 ± 0.25
κ × 100 | 93.81 ± 0.21 | 99.01 ± 1.07 | 99.58 ± 0.40 | 94.10 ± 0.28 | 99.24 ± 0.31 | 99.18 ± 0.58 | 99.75 ± 0.03 | 99.55 ± 0.09 | 99.82 ± 0.20
The best results are shown in bold.
Table 5. Classification results of different methods for the UT dataset.
Methods | 3DCNN | HybridSN | SSRN | SpectralFormer | SSFTT | morphFormer | DSFormer | CSCANet | MDS3-Net
1 | 69.20 ± 2.10 | 97.78 ± 5.45 | 99.28 ± 0.44 | 96.23 ± 0.48 | 99.38 ± 2.02 | 99.57 ± 2.62 | 99.88 ± 0.12 | 99.94 ± 0.05 | 99.19 ± 0.42
2 | 63.50 ± 4.60 | 93.59 ± 2.75 | 96.17 ± 0.94 | 78.72 ± 0.94 | 92.63 ± 0.53 | 96.09 ± 1.27 | 95.21 ± 1.33 | 98.46 ± 0.92 | 98.64 ± 0.55
3 | 0.00 ± 0.00 | 98.67 ± 6.64 | 85.16 ± 1.81 | 85.96 ± 0.61 | 65.42 ± 3.33 | 97.87 ± 3.85 | 95.36 ± 2.90 | 66.18 ± 44.38 | 99.91 ± 0.21
4 | 84.06 ± 3.18 | 99.53 ± 6.13 | 100.00 ± 0.00 | 98.18 ± 0.15 | 99.99 ± 0.61 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 99.99 ± 0.02
5 | 98.83 ± 1.02 | 99.13 ± 6.21 | 99.96 ± 1.58 | 97.51 ± 0.23 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00
6 | 84.51 ± 2.23 | 93.55 ± 5.90 | 87.95 ± 2.95 | 87.88 ± 1.01 | 89.81 ± 1.47 | 96.01 ± 4.15 | 92.04 ± 1.16 | 96.27 ± 1.52 | 98.93 ± 0.48
OA (%) | 83.95 ± 1.06 | 97.94 ± 1.10 | 98.02 ± 0.33 | 94.54 ± 0.21 | 97.59 ± 0.88 | 99.11 ± 0.32 | 98.61 ± 0.12 | 98.92 ± 0.68 | 99.64 ± 0.09
AA (%) | 57.16 ± 0.94 | 97.04 ± 1.43 | 94.75 ± 0.57 | 90.75 ± 0.47 | 91.20 ± 0.62 | 97.84 ± 1.09 | 97.08 ± 0.51 | 93.47 ± 7.34 | 99.44 ± 0.15
κ × 100 | 78.24 ± 1.33 | 97.25 ± 1.38 | 97.36 ± 0.26 | 92.73 ± 0.62 | 96.78 ± 1.11 | 98.82 ± 0.35 | 98.15 ± 0.15 | 98.55 ± 0.91 | 99.53 ± 0.12
The best results are shown in bold.
Table 6. Ablation study on different modules of MDS3-Net.
Method | MSDC | S3 Encoder | DPFE | UP OA (%) | HS2013 OA (%) | LK OA (%) | UT OA (%)
MDS3-Net | ✓ | × | × | 94.72 | 91.99 | 94.87 | 93.46
 | × | ✓ | × | 93.70 | 93.17 | 96.87 | 93.41
 | × | × | ✓ | 88.64 | 77.57 | 89.59 | 85.22
 | ✓ | ✓ | × | 97.36 | 96.94 | 98.21 | 97.15
 | ✓ | × | ✓ | 96.49 | 95.01 | 97.86 | 96.84
 | × | ✓ | ✓ | 98.41 | 96.73 | 98.91 | 97.58
 | ✓ | ✓ | ✓ | 99.49 | 98.05 | 99.86 | 99.64
The check mark (✓) indicates that the corresponding module is included in the network, while the cross mark (×) indicates that the module is excluded. The best results are shown in bold.
Table 7. Comparison of classification performance and computational complexity on the HS2013 dataset.
Methods | OA (%) | AA (%) | κ × 100 | FLOPs (M) | Param (K) | Tr Time (s) | Te Time (s)
3DCNN | 78.53 | 72.95 | 76.77 | 14.86 | 3527.22 | 47.49 | 33.05
HybridSN | 95.87 | 96.20 | 95.53 | 16.01 | 534.53 | 10.16 | 0.18
SSRN | 81.18 | 79.27 | 79.62 | 3.46 | 18.33 | 56.71 | 1.02
SpectralFormer | 82.78 | 80.75 | 81.38 | 28.79 | 280.30 | 101.60 | 0.93
SSFTT | 91.27 | 88.52 | 90.55 | 6.99 | 148.49 | 13.26 | 0.35
morphFormer | 95.79 | 95.52 | 95.44 | 25.64 | 153.76 | 69.05 | 1.59
DSFormer | 92.77 | 93.06 | 92.19 | 21.05 | 677.29 | 79.17 | 1.71
CSCANet | 95.75 | 95.68 | 95.40 | 16.37 | 298.70 | 29.06 | 0.65
MDS3-Net (Ours) | 98.05 | 98.09 | 97.89 | 5.67 | 140.35 | 64.53 | 1.17
The best results are shown in bold. Tr and Te denote Training and Testing, respectively.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
