Article

Spatial-Channel Multiscale Transformer Network for Hyperspectral Unmixing

College of Electronic and Information Engineering, Changchun University, Changchun 130022, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(14), 4493; https://doi.org/10.3390/s25144493
Submission received: 14 May 2025 / Revised: 3 July 2025 / Accepted: 17 July 2025 / Published: 19 July 2025
(This article belongs to the Section Sensor Networks)

Abstract

In recent years, deep learning (DL) has demonstrated remarkable capabilities in hyperspectral unmixing (HU) due to its powerful feature representation ability. Convolutional neural networks (CNNs) are effective in capturing local spatial information, but are limited in modeling long-range dependencies. In contrast, transformer architectures extract global contextual features via multi-head self-attention (MHSA) mechanisms. However, most existing transformer-based HU methods focus only on spatial or spectral modeling at a single scale, lacking a unified mechanism to jointly explore spatial and channel-wise dependencies. This limitation is particularly critical for multiscale contextual representation in complex scenes. To address these issues, this article proposes a novel Spatial-Channel Multiscale Transformer Network (SCMT-Net) for HU. Specifically, a compact feature projection (CFP) module is first used to extract shallow discriminative features. Then, a spatial multiscale transformer (SMT) and a channel multiscale transformer (CMT) are sequentially applied to model contextual relations across spatial dimensions and long-range dependencies among spectral channels. In addition, a multiscale multi-head self-attention (MMSA) module is designed to extract rich multiscale global contextual and channel information, enabling a balance between accuracy and efficiency. An efficient feed-forward network (E-FFN) is further introduced to enhance inter-channel information flow and fusion. Experiments conducted on three real hyperspectral datasets (Samson, Jasper, and Apex) and one synthetic dataset showed that SCMT-Net consistently outperformed existing approaches in both abundance estimation and endmember extraction, demonstrating superior accuracy and robustness.

1. Introduction

Hyperspectral images (HSIs) are three-dimensional data cubes that contain both spatial and spectral information, typically consisting of tens to hundreds of spectral bands. These bands typically span a spectral range from the visible to the short-wave infrared regions, approximately from 400 to 2500 nm [1]. Unlike traditional RGB images that are limited to three channels (red, green, and blue), HSIs provide detailed spectral signatures of materials along with their spatial distribution, making them widely applicable in diverse fields such as food safety [2], environmental monitoring [3], and mineral exploration [4]. However, due to limitations in imaging technology, HSIs generally suffer from a low spatial resolution [5], where each pixel often contains a mixture of spectral information from multiple materials—commonly referred to as mixed pixels [6]. The presence of a large number of mixed pixels significantly degrades the performance of HSI-based applications. Therefore, it is essential to decompose these mixed pixels to retrieve the pure spectral components (known as endmembers) and their corresponding proportions within each pixel, a process known as hyperspectral unmixing (HU). The task of extracting the pure spectral signatures from mixed pixels is referred to as endmember extraction [7], while estimating their proportion in each pixel is called abundance estimation [8]. Under physically meaningful constraints, abundance values are typically required to satisfy two conditions: the Abundance Nonnegative Constraint (ANC) and the Abundance Sum-to-one Constraint (ASC) [9,10].
In HU tasks, the linear mixing model (LMM) [11] has become the most widely adopted unmixing framework due to its clear physical interpretability and computational simplicity. Based on the LMM assumption, numerous unmixing approaches have been proposed to effectively estimate endmember spectra and their corresponding abundance distributions.
Traditional HU methods include geometric approaches, statistical models [12], and sparse regression-based techniques [13,14]. Among geometric methods, vertex component analysis (VCA) [15] and fully constrained least squares unmixing (FCLSU) [16] are widely used. VCA projects the HSI onto directions orthogonal to the subspace formed by the selected endmembers and iteratively extracts potential endmember spectra. Under the assumption that pure pixels exist, this method can effectively identify pure material spectra in the scene, providing a basis for subsequent abundance estimation. FCLSU, on the other hand, performs least-squares regression to estimate abundances given known endmembers, while enforcing the non-negativity and sum-to-one constraints. However, in practical scenarios, pixels composed entirely of a single material are rarely observed, making the pure-pixel assumption often invalid and limiting the applicability of VCA. In addition, the performance of FCLSU heavily relies on the accuracy of the extracted endmembers. If the estimated endmembers deviate from the actual spectra, the resulting abundance maps may also suffer in accuracy, thereby degrading the overall unmixing performance. To address these limitations, a family of methods based on non-negative matrix factorization (NMF) has been proposed [17,18,19,20]. Unlike geometric approaches, NMF does not depend on the pure-pixel assumption. Instead, it decomposes the observed HSI into a product of two nonnegative matrices representing the endmember spectra and their abundances, respectively. This allows for a fully unsupervised estimation of both components, making NMF more robust in highly mixed scenes. Qian et al. [21] introduced the L1/2 sparsity constraint into NMF for HU, referred to as L1/2-NMF, which improves the unmixing accuracy by promoting sparsity in abundance estimation. Compared with the traditional L1-norm, the L1/2-norm induces stronger sparsity and is mathematically non-convex. Rajabi and Ghassemian [22] proposed a multilayer extension called Multilayer NMF (MLNMF), which iteratively factorizes the observation matrix into multiple hierarchical layers to refine unmixing performance. Sparse regression-based methods assume that each pixel can be represented as a linear combination of a small subset of endmembers from a predefined spectral library. These methods aim to identify both the contributing endmembers and their corresponding proportions through sparse optimization. Bioucas-Dias and Figueiredo [23] proposed Sparse Unmixing via Variable Splitting and Augmented Lagrangian (SUnSAL), which incorporates an L1-norm regularization term to enforce sparsity. SUnSAL is particularly effective when a large spectral library is available and pure pixels are difficult to obtain.
Recently, deep learning (DL) networks have provided effective solutions for HU [24,25]. A typical DL-based unmixing framework adopts an autoencoder (AE) architecture, which consists of an encoder and a decoder. The encoder is responsible for extracting low-dimensional representations from the input HSI, which correspond to abundance estimations. The decoder reconstructs the original HSI using the estimated abundances and the learned endmember spectra [26]. Based on the AE framework, the integration of different feature extraction modules and the design of tailored loss functions can further improve unmixing performance [27,28]. For instance, Qu and Qi [29] proposed an untied denoising autoencoder with sparsity (uDAS), which introduces an L21-norm constraint to enhance the accuracy of abundance estimation. This regularization helps reduce redundancy in the learned features and improves the robustness and precision of the encoder in estimating abundance maps. Su et al. [30] introduced the Stacked Nonnegative Sparse Autoencoders (SNSAEs), which employ an end-to-end fully connected (FC) AE structure. Without explicitly incorporating spatial modeling, this approach leverages spectral feature learning to effectively estimate abundance representations under unsupervised conditions, achieving robust unmixing performance for HSIs.
Early AE-based unmixing methods primarily relied on FC layers to construct the encoder and decoder. During the processing of HSIs, each pixel (or spectral vector) is often treated as an independent sample, or the entire HSI is flattened into a long vector for spectral feature learning. However, these approaches typically ignore the spatial relationships between neighboring pixels. To more effectively leverage the valuable spatial information in HSIs, researchers have introduced convolutional neural networks (CNNs) into AE architectures to further improve HU performance. Palsson et al. [31] proposed a CNN Autoencoder Unmixing (CNNAEU) framework, which integrates convolutional encoders and decoders to extract spatial features and reconstruct spectral information. This approach enables a more accurate abundance estimation by jointly learning spatial–spectral representations. Rasti et al. [32] introduced an unsupervised HU method based on deep CNNs, termed Unmixing Deep Prior (UnDIP). By exploiting the structural prior embedded in the network itself, UnDIP models the relationships between endmembers and abundances without external supervision, thereby enhancing unmixing accuracy and robustness. Gao et al. [33] proposed a Cycle-Consistency Unmixing Network (CyCU-Net), which cascades two autoencoders for HU and introduces cycle-consistency constraints through spectral and abundance reconstruction losses. This framework strengthens the representational capacity of both endmembers and abundances, improving both the accuracy and stability of unmixing. While CNN-based AE unmixing methods are capable of extracting local spatial features, such feature extraction is primarily dependent on the size of convolutional kernels, which inherently rely on limited receptive fields. This constraint hampers their ability to capture long-range spatial dependencies and global spectral relationships, leading to the loss of critical contextual features during unmixing. Moreover, due to the high dimensionality of HSIs, although some CNN-based methods enhance global modeling via encoder–decoder or residual structures, they still rely on stacked local operations, whereas the transformer captures long-range spatial–spectral dependencies more efficiently through self-attention.
Transformer architectures have rapidly gained attention in remote sensing image processing due to their superior ability to model long-range dependencies and capture global contextual features. In recent years, several studies have explored the application of transformers to HU and have achieved promising results [34,35,36]. Ghosh et al. [37] proposed the first hybrid HU model that combines transformer and CNN architectures. In this approach, the multi-head self-attention mechanism of the transformer is employed to complement the limited receptive field of the CNN, thereby enhancing the robustness and accuracy of the unmixing process. This work laid a foundation for subsequent transformer-based HU research. Recently, there has been increasing interest in integrating CNNs and transformers to further improve unmixing performance. Hu et al. [34] introduced the Multiscale Convolution Attention Network (HUMSCAN), which consists of an endmember estimation sub-network and an abundance estimation sub-network. By leveraging multiscale convolutions to extract spatial features at different scales and attention mechanisms to enhance salient feature representations, HUMSCAN effectively improves HU performance. Yang et al. [35] proposed the Cascaded Dual-Constrained Transformer Autoencoder (CDCTA), which constructs a progressive, cascaded structure by stacking multiple transformer encoder–decoder modules. This design enhances the model’s depth and expressive capacity for complex mixed pixels. Moreover, CDCTA incorporates two additional constraints—endmember separability and abundance sparsity—into the network to improve the accuracy of both endmember extraction and abundance estimation. Wang et al. [38] proposed the Multiscale Aggregation Transformer Network (MAT-Net), which fully exploits CNN-extracted spectral and multiscale spatial features and then fuses them using a transformer encoder. MAT-Net features a dual-stream, multi-branch CNN encoder and an enhanced multiscale self-attention module that adaptively aggregates information across scales, achieving effective and accurate endmember extraction and abundance estimation. Gan et al. [39] proposed a Channel Multi-Scale Dual-Stream Autoencoder (CMSDAE), which performs multiscale feature modeling along the channel dimension to effectively reduce redundancy in the spatial domain and enhance feature representation, thereby improving the accuracy of endmember extraction and abundance estimation. Hadi et al. [40] introduced a Dual-branch Spectral–Spatial Feature Fusion Transformer (DSSFT), which integrates spectral and spatial information through two parallel branches. The spectral branch employs a self-attention mechanism to model complex spectral variations and enhance endmember identification, while the spatial branch adopts patch-level embedding to capture global spatial context, improving the discriminative ability for endmembers and abundances in heterogeneous regions. In addition, Xiang et al. [41] proposed an Endmember-Oriented Transformer Network (EOT-Net), which combines endmember bundle modeling with directional subspace projection to extract endmember-specific features and incorporates a low-redundancy attention mechanism to enhance feature discrimination, effectively improving unmixing accuracy.
However, existing HU methods that combine CNNs and transformers often fail to fully exploit the channel-wise information of HSIs, and they lack dynamic interaction mechanisms for multiscale global contextual modeling. These limitations restrict the joint representation capability of spatial and spectral features in HSIs. To address this issue, we propose a Spatial-Channel Multiscale Transformer Network (SCMT-Net) for HU. Specifically, a spatial multiscale transformer (SMT) module is first introduced to learn spatial features of the HSI, followed by a channel multiscale transformer (CMT) module designed to capture long-range dependencies across spectral channels. The integration of these two modules enables global and dynamic modeling across spatial and spectral dimensions. Moreover, a multiscale multihead self-attention (MMSA) mechanism is incorporated into both the SMT and CMT modules to effectively extract rich spatial–spectral contextual information. Finally, an efficient feed-forward network (E-FFN) is employed to enhance inter-channel information flow and feature fusion, thereby further improving unmixing performance.
The main contributions of this article are summarized as follows:
1. We propose a novel unmixing network, SCMT-Net, which integrates a CFP module and a spatial-channel multiscale transformer module to enable the collaborative modeling of local details and a global context, achieving the dynamic learning of multiscale spatial and spectral relationships.
2. A CMT module is designed to deeply capture long-range dependencies across HSI spectral channels. By combining it with the SMT module, we construct the core SCMT module, which significantly enhances the modeling capacity of spatial-channel global relationships in complex scenarios.
3. A new MMSA module is introduced, embedding multiscale global contextual and channel information into the attention mechanism to capture rich spatial–spectral features. Additionally, an E-FFN is incorporated to further strengthen inter-channel information interaction, thereby improving overall unmixing performance.
The remainder of this article is organized as follows. Section 2 introduces the background and related concepts of HU. Section 3 presents the architecture and fundamental principles of the proposed SCMT-Net. Section 4 discusses the experimental results on three real-world hyperspectral datasets and one synthetic dataset, including comparisons with several representative HU methods and ablation studies on SCMT-Net. Finally, Section 5 concludes the article with a summary of key findings.

2. Background

In HSIs, due to the limited spatial resolution and the mixed distribution of surface materials, each pixel typically contains a mixture of multiple pure spectral components (endmembers). The most commonly used LMM assumes that the observed pixel spectrum can be represented as a weighted linear combination of several endmember spectra. Its mathematical expression is given by
Y = EA + N
The input HSI is denoted as X ∈ ℝ^{L×H×W}, where H, W, and L represent the height, width, and number of spectral bands of the original HSI, respectively. The HSI can be mathematically reshaped into a matrix Y ∈ ℝ^{L×n}, where n = H·W denotes the total number of pixels and L represents the number of spectral bands. It is important to note that this reshaping is used solely for notational purposes; in practice, the encoder retains the spatial structure before explicitly flattening the input for the transformer. The endmember matrix is denoted as E ∈ ℝ^{L×R}, where R represents the number of endmembers present in the HSI. The corresponding abundance cube (i.e., the stack of R abundance maps) is represented as M ∈ ℝ^{R×H×W}, which can be reshaped into a matrix A ∈ ℝ^{R×n}; N ∈ ℝ^{L×n} represents the additive noise present in Y.
In addition, HU tasks typically require the following three physical constraints to be satisfied:
First, the endmember matrix must be non-negative, that is, E ≥ 0; second, the abundance matrix is subject to the ANC, i.e., A ≥ 0; finally, the ASC must also be satisfied: 1_R^T A = 1_n^T, where 1_n denotes an all-ones column vector of dimension n.
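To make the notation above concrete, the following NumPy sketch synthesizes a toy scene under the LMM and verifies the ANC and ASC; the band, endmember, and pixel counts are arbitrary illustrative values rather than quantities taken from the datasets used later.

```python
import numpy as np

# Toy dimensions (illustrative only): L bands, R endmembers, n pixels.
L, R, n = 156, 3, 95 * 95

rng = np.random.default_rng(0)
E = rng.random((L, R))                      # nonnegative endmember spectra (L x R)
A = rng.random((R, n))
A /= A.sum(axis=0, keepdims=True)           # enforce ANC (A >= 0) and ASC (columns sum to 1)
N = 0.01 * rng.standard_normal((L, n))      # additive noise

Y = E @ A + N                               # linear mixing model: Y = EA + N

# Sanity checks on the abundance constraints.
assert np.all(A >= 0)
assert np.allclose(A.sum(axis=0), 1.0)
```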
Although the LMM offers good physical interpretability and modeling simplicity, under non-ideal imaging conditions such as illumination variations, terrain undulations, material inhomogeneity, or multipath scattering, the actual mixing process often exhibits pixel-wise spectral variability. This leads to the inability of the LMM to accurately model such complex scenarios. To address this issue, researchers have proposed a generalized version of the LMM, known as the generalized linear mixing model (GLMM), which enhances the adaptability and representational capacity of the model while preserving its linear structure.
GLMM introduces scaling factors for endmembers at the pixel level, allowing endmember spectra to vary across different pixels, thereby enhancing the ability to model spectral variability in real-world scenarios. Its mathematical expression is as follows:
x_n = M_n · a_n + e_n
Specifically, x_n ∈ ℝ^L denotes the observed spectrum of the nth pixel, M_n ∈ ℝ^{L×R} denotes the endmember spectral matrix of that pixel, a_n ∈ ℝ^R denotes the corresponding abundance vector, and e_n denotes the additive noise. GLMM extends the standard LMM by introducing pixel-level endmember scaling factors, allowing endmembers to vary across different pixels, thereby enhancing the ability to represent spectral variability while preserving the linear mixing structure.
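As a toy illustration of this pixel-wise scaling, the NumPy sketch below generates a single GLMM pixel; the dimensions, scaling range, and noise level are arbitrary assumptions, and the scaling form (multiplicative factors applied to a shared endmember matrix) is one common GLMM formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
L, R = 162, 5                               # toy band and endmember counts

E = rng.random((L, R))                      # shared reference endmembers
a_n = rng.dirichlet(np.ones(R))             # abundances of one pixel (ANC + ASC hold)
S_n = 0.8 + 0.4 * rng.random((L, R))        # pixel-wise scaling factors
M_n = S_n * E                               # pixel-specific endmember matrix
e_n = 0.01 * rng.standard_normal(L)

x_n = M_n @ a_n + e_n                       # GLMM: x_n = M_n a_n + e_n
```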
In this study, although SCMT-Net adopts the LMM as a physical foundation and constraint framework for task modeling, the network itself is essentially a nonlinear unmixing method. Its architecture integrates multiscale attention mechanisms, nonlinear activation functions, and multiscale depthwise separable convolution modules, enabling the end-to-end learning of complex nonlinear mappings from input hyperspectral images to abundance maps and endmember spectra. Therefore, SCMT-Net does not rely on the strict linear assumptions of LMM; instead, it builds upon this physical modeling basis to achieve a more expressive and flexible nonlinear modeling process. This design allows the model to maintain robust performance and generalization capability, even under complex mixing scenarios involving pixel-level endmember variability or nonlinear interactions.

3. Methods

The overall architecture of SCMT-Net is illustrated in Figure 1. SCMT-Net adopts an AE structure consisting of an encoder and a decoder. Within the encoder, the input X is first processed by the CFP module, which performs channel dimensionality reduction to extract discriminative features X_CFP ∈ ℝ^{C×H×W}, where C denotes the reduced number of channels. Subsequently, X_CFP is fed into the SCMT module, which sequentially incorporates the SMT and CMT modules to extract global spatial features and long-range dependencies among channels at multiple scales. This process provides enriched spatial interactions and inter-channel correlations for the unmixing task. The encoder comprises three stages, each employing depthwise separable atrous convolutions with different atrous rates to effectively capture multiscale spatial–spectral information. To satisfy the ASC and the ANC, the output of the encoder is projected back to the original HSI spatial dimensions through a convolutional layer, followed by a softmax activation to generate the final estimated abundance maps. Finally, the decoder increases the number of output channels to match the spectral dimensionality of the original HSI through a convolutional layer and simultaneously extracts the estimated endmember signatures. The following section provides a detailed discussion of the components of SCMT-Net and analyzes its key modules.

3.1. CFP Module

The CFP module consists of a convolutional (Conv) layer, a batch normalization (BN) layer, and a dropout layer. Specifically, the convolutional layer employs a 1 × 1 two-dimensional convolution to reduce the spectral dimensionality of the input HSI and extract essential spatial features. This is followed by batch normalization to improve training stability and mitigate gradient vanishing, and a dropout layer to reduce the risk of overfitting. The CFP module ultimately outputs low-dimensional features denoted as X CFP .
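A minimal PyTorch sketch of such a projection block is given below; the output channel count and dropout rate are illustrative assumptions, not values reported for SCMT-Net.

```python
import torch
import torch.nn as nn

class CFP(nn.Module):
    """Compact feature projection: 1x1 conv -> batch norm -> dropout."""
    def __init__(self, in_bands: int, out_channels: int, p_drop: float = 0.2):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_bands, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
            nn.Dropout2d(p_drop),
        )

    def forward(self, x):                    # x: (B, L, H, W)
        return self.proj(x)                  # (B, C, H, W)

x = torch.randn(1, 156, 95, 95)              # toy Samson-sized input
x_cfp = CFP(in_bands=156, out_channels=64)(x)
print(x_cfp.shape)                           # torch.Size([1, 64, 95, 95])
```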

3.2. SCMT Module

The SCMT module is composed of two key components: SMT and CMT. This module is designed to fully exploit global feature dependencies within the HSI, thereby enhancing the feature representation capability during the HU process. The following subsections detail the fundamental procedures of the SMT and CMT modules, respectively.

3.2.1. SMT Module

The structure of the SMT module is illustrated in Figure 2. First, the feature map X_CFP generated by the CFP module is divided into N patches of size p × p, where each patch has a dimensionality of X_p ∈ ℝ^{C×p×p}, and a total of N = HW/p² patches are obtained. All patches are flattened to form a token sequence X_t ∈ ℝ^{N×(p·p·C)}. This sequence is then fed into the multiscale transformer (MT) module to achieve global spatial feature modeling across all patches, thereby enhancing the contextual representation of spatial features. The MT module consists of the MMSA module, a BN layer, and the E-FFN module. Specifically, the token sequence X_t is first processed by the MMSA module, followed by normalization and a residual connection with the original input, resulting in X_attn. This intermediate output is then passed through the E-FFN module and another residual connection to obtain the final output X_out. The above process can be formulated as follows:
X_attn = X_t + BN(MMSA(X_t))
X_out = X_attn + E-FFN(X_attn)
After passing through the MT module, the feature map is reshaped back to the original spatial dimensions, thereby restoring the structural layout of the image.
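The patch tokenization and the two residual steps can be sketched in PyTorch as follows; `mmsa`, `e_ffn`, and `bn` are placeholder callables standing in for the modules described in the remainder of this section, and the patch size is assumed to divide the spatial dimensions evenly.

```python
import torch
import torch.nn as nn

def smt_block(x_cfp, p, mmsa, e_ffn, bn):
    """Patchify -> tokens -> MMSA/E-FFN with residuals -> un-patchify.

    mmsa, e_ffn, and bn are assumed modules operating on (B, N, D) tokens;
    they stand in for the MMSA and E-FFN sketches given later in this section.
    """
    B, C, H, W = x_cfp.shape                                  # p must divide H and W
    # Split into non-overlapping p x p patches and flatten each into a token.
    tokens = x_cfp.unfold(2, p, p).unfold(3, p, p)            # (B, C, H/p, W/p, p, p)
    tokens = tokens.permute(0, 2, 3, 1, 4, 5).reshape(B, (H // p) * (W // p), C * p * p)

    x_attn = tokens + bn(mmsa(tokens))                        # X_attn = X_t + BN(MMSA(X_t))
    x_out = x_attn + e_ffn(x_attn)                            # X_out  = X_attn + E-FFN(X_attn)

    # Reshape the token sequence back to the original spatial layout.
    x_out = x_out.reshape(B, H // p, W // p, C, p, p).permute(0, 3, 1, 4, 2, 5)
    return x_out.reshape(B, C, H, W)

# Toy usage with identity placeholders for the submodules.
x = torch.randn(1, 64, 96, 96)
y = smt_block(x, p=4, mmsa=nn.Identity(), e_ffn=nn.Identity(), bn=nn.Identity())
print(y.shape)                                                # torch.Size([1, 64, 96, 96])
```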
(1) MMSA Module
The MMSA module adopts a dual-branch architecture to extract multiscale global contextual and channel information, as illustrated in Figure 3. The upper branch is designed to capture multiscale global contextual information. Specifically, the input feature map X_t is first processed by a pointwise convolution (PW Conv) to reduce its channel dimension to C/4. The resulting feature is then passed through three parallel depthwise separable atrous convolutions, each with a kernel size of 3 × 3. The atrous rates for the three convolutional branches are denoted as R_i (i ∈ {1, 2, 3}), with values {(1, 3, 5), (3, 5, 7), (5, 7, 9)}, corresponding to the MMSA modules integrated into each stage of the MT module.
Subsequently, a PW Conv is employed to restore the feature map to the original number of input channels, followed by an element-wise summation. The result is then processed through another PW Conv and a residual connection to obtain the final output X_Multi:
X_i = dwConv_{3×3}^{R_i}(PWConv(X_t)), i = 1, 2, 3
X̃_i = PWConv(X_i)
X_Multi = PWConv(Σ_{i=1}^{3} X̃_i) ⊗ X_t
where the symbol ⊗ denotes element-wise multiplication. X_i (i = 1, 2, 3) represent the outputs of the three depthwise separable atrous convolution branches, dwConv_{3×3}^{R_i}(·) denotes a 3 × 3 depthwise separable atrous convolution with an atrous rate of R_i, and X̃_i (i ∈ {1, 2, 3}) denote the corresponding outputs from the three PW Conv branches. X_Multi indicates the final fused feature. Subsequently, the fused feature is fed into an adaptive average pooling layer to extract multiscale features X_Ada ∈ ℝ^{C×(A·A)}:
X_Ada = AdaptivePool(X_Multi)
For simplicity, the flattening operation is omitted. The resulting feature X_Ada exhibits a lower spatial resolution compared to the original input X_t, where A is set to 9, taking the Samson dataset as an example. Note that A is empirically set as a fixed hyperparameter for each dataset, as summarized in Table 1. This feature representation captures rich multiscale contextual information derived from the input.
The matrix X_Ada is used to compute the key (K) and value (V) for the multi-head self-attention mechanism, while X_t is used to generate the queries (Q). The computation process is formulated as follows:
(Q, K, V) = (X_t W_q, X_Ada W_k, X_Ada W_v)
where W_q, W_k, and W_v denote the learnable weight matrices for the linear transformations. The K and V matrices incorporate multiscale contextual information, enhancing the capability of modeling global contextual features and thereby improving unmixing performance. Subsequently, the Q, K, and V matrices are fed into the multi-head self-attention module to compute the self-attention features:
Attention_M = Softmax(Q · K^T / d_k) · V
where d_k denotes the channel dimension of K, and the division by d_k can be regarded as an approximate normalization. The softmax function is applied row-wise across the matrix. For simplicity, the concept of multi-head attention is omitted in Equation (9), as discussed in [42,43]. Since the lengths of K and V are shorter than that of the input X_t, the proposed MMSA module introduces lower computational overhead compared with conventional multi-head self-attention mechanisms. Furthermore, as K and V encode rich multiscale contextual information, the proposed MMSA module is more effective in modeling global contextual dependencies, which benefits the HU task.
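A condensed, single-head PyTorch sketch of this upper branch is shown below; it treats the tokens as a 2-D feature map for the convolutions, shares one restoring PW Conv across the three atrous branches, applies the usual √d scaling inside the attention, and uses the first-stage atrous rates (1, 3, 5) and A = 9 purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleContextAttention(nn.Module):
    """Upper MMSA branch: multiscale dilated convs -> adaptive pooling -> attention
    where Q comes from the full-resolution input and K, V from the pooled feature."""
    def __init__(self, c: int, pooled: int = 9, rates=(1, 3, 5)):
        super().__init__()
        self.reduce = nn.Conv2d(c, c // 4, 1)                       # PW Conv, C -> C/4
        self.branches = nn.ModuleList([
            nn.Conv2d(c // 4, c // 4, 3, padding=r, dilation=r, groups=c // 4)
            for r in rates                                          # depthwise atrous convs
        ])
        self.restore = nn.Conv2d(c // 4, c, 1)                      # PW Conv, C/4 -> C
        self.fuse = nn.Conv2d(c, c, 1)
        self.pool = nn.AdaptiveAvgPool2d(pooled)                    # A x A pooled map
        self.wq = nn.Linear(c, c)
        self.wk = nn.Linear(c, c)
        self.wv = nn.Linear(c, c)

    def forward(self, x):                                           # x: (B, C, H, W)
        B, C, H, W = x.shape
        z = self.reduce(x)
        multi = self.fuse(sum(self.restore(b(z)) for b in self.branches)) * x
        kv = self.pool(multi).flatten(2).transpose(1, 2)            # (B, A*A, C)
        q = self.wq(x.flatten(2).transpose(1, 2))                   # (B, H*W, C)
        k, v = self.wk(kv), self.wv(kv)
        attn = F.softmax(q @ k.transpose(1, 2) / (C ** 0.5), dim=-1)  # scaled dot-product
        out = attn @ v                                              # (B, H*W, C)
        return out.transpose(1, 2).reshape(B, C, H, W)

y = MultiscaleContextAttention(c=64)(torch.randn(1, 64, 36, 36))
print(y.shape)                                                      # torch.Size([1, 64, 36, 36])
```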
Inspired by SENet [44], a channel attention branch is constructed in the lower branch to efficiently capture inter-channel dependencies. The input feature X_t is first passed through a global average pooling layer to generate a channel attention map X_Avg ∈ ℝ^{C×1×1}. This map is then fed into a PW Conv for channel reduction, followed by a ReLU activation and another PW Conv to restore the channel dimension to C. Finally, a Sigmoid activation function is applied to obtain channel-wise attention weights, which are multiplied element-wise with the original feature X_t to produce the channel-enhanced feature map. The entire process can be described as follows.
X_Avg = AvgPool(X_t)
X_C = ReLU(PWConv(X_Avg))
Attention_C = Sigmoid(PWConv(X_C)) ⊗ X_t
Finally, the features from the upper and lower branches are summed to obtain the final output of the MMSA module, denoted as X_MMSA:
X_MMSA = Attention_M + Attention_C
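The lower branch closely mirrors the squeeze-and-excitation design; a brief sketch follows, with the channel reduction ratio chosen arbitrarily.

```python
import torch
import torch.nn as nn

class ChannelAttentionBranch(nn.Module):
    """Lower MMSA branch: global average pooling -> PW Conv (reduce) -> ReLU
    -> PW Conv (restore) -> Sigmoid, multiplied back onto the input."""
    def __init__(self, c: int, reduction: int = 4):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)                 # (B, C, 1, 1)
        self.excite = nn.Sequential(
            nn.Conv2d(c, c // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(c // reduction, c, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):                                  # x: (B, C, H, W)
        return self.excite(self.gap(x)) * x                # Attention_C

# The two branches are then summed to form the MMSA output:
# x_mmsa = attention_m + attention_c
```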
(2) E-FFN Module
Conventional transformers typically rely on FC layers as the FFN [40] and depend entirely on the attention mechanism to capture dependencies among pixels [41]. Although such a design facilitates global feature modeling, it is limited in learning local information from HSIs. To address this limitation, we replace the FC layers with PW Conv and insert two parallel depthwise separable convolutions with kernel sizes of 3 × 3 and 5 × 5 in between, as illustrated in Figure 4.
This process can be formulated as follows:
X_1 = dwConv_{3×3}(PWConv(X_MMSA))
X_2 = dwConv_{5×5}(PWConv(X_MMSA))
X_E-FFN = PWConv(X_1 + X_2)
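A compact PyTorch sketch of the E-FFN is given below; whether the two parallel branches share the initial PW Conv is not specified above, so separate projections are assumed here.

```python
import torch
import torch.nn as nn

class EFFN(nn.Module):
    """Efficient feed-forward network: PW Conv in, two parallel depthwise
    convolutions (3x3 and 5x5), summed and projected by a final PW Conv."""
    def __init__(self, c: int):
        super().__init__()
        self.pw_in3 = nn.Conv2d(c, c, 1)
        self.pw_in5 = nn.Conv2d(c, c, 1)
        self.dw3 = nn.Conv2d(c, c, 3, padding=1, groups=c)
        self.dw5 = nn.Conv2d(c, c, 5, padding=2, groups=c)
        self.pw_out = nn.Conv2d(c, c, 1)

    def forward(self, x):                                  # x: (B, C, H, W)
        x1 = self.dw3(self.pw_in3(x))
        x2 = self.dw5(self.pw_in5(x))
        return self.pw_out(x1 + x2)

print(EFFN(64)(torch.randn(1, 64, 36, 36)).shape)          # torch.Size([1, 64, 36, 36])
```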
MMSA and E-FFN are the core submodules shared by both SMT and CMT. Their architecture is described in the SMT subsection for clarity, as CMT employs the same design.

3.2.2. CMT Module

In HU tasks, inter-channel relationships also play a critical role in enhancing unmixing performance. To further explore the channel characteristics of HSIs, we design the CMT module. In this module, the number of tokens input into the MT module is changed from the number of patches to the number of channels. The basic workflow is illustrated in Figure 5. The CMT module flattens the N patches into C tokens, where each channel is treated as an individual token and then fed into the MT module. The structure of CMT is similar to that of SMT, with the main difference being that the transformer shifts its modeling target from spatial relationships among image patches to spectral relationships among channels. By globally modeling the spectral features across different channels, the MT module effectively enhances the inter-channel spectral feature correlations within the HSI.
Therefore, SCMT sequentially processes the input through the SMT and CMT modules to capture both spatial features and spectral features across different channels, thereby providing more comprehensive and accurate representations for the HU task.
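Structurally, the only change from the SMT is the token axis, as the snippet below illustrates with assumed toy dimensions.

```python
import torch

# For the CMT, each spectral channel becomes one token: a feature map of shape
# (B, C, H, W) is flattened to (B, C, H*W), so the transformer attends over the
# C channel tokens instead of the N spatial patch tokens used in the SMT.
x = torch.randn(1, 64, 36, 36)                 # (B, C, H, W)
channel_tokens = x.flatten(2)                  # (B, C, H*W): C tokens of length H*W
print(channel_tokens.shape)                    # torch.Size([1, 64, 1296])
```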

3.3. Unmixing with Decoder

The decoder first applies a convolutional operation to the features extracted by the encoder to generate the abundance cube M ∈ ℝ^{R×H×W}. A subsequent 3 × 3 convolution is used for fine-grained refinement. To satisfy the ANC and ASC, a softmax activation function is applied along the channel dimension to obtain the estimated abundance map. To estimate the endmember signatures, the abundance matrix M is fed into the decoder branch of the AE, which consists of a single convolutional layer. This convolution expands the spectral dimension of M from R to L, producing the reconstructed HSI X̂. The weights of this convolutional layer are initialized using endmember signatures extracted by the VCA method and are updated during training through backpropagation. VCA is a widely adopted geometric-based endmember extraction technique known for its simplicity, efficiency, and ease of implementation, making it suitable for a broad range of HU scenarios. After training, the decoder weights yield the estimated endmember matrix Ê ∈ ℝ^{L×R}.
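The decoder can be sketched as a bias-free 1 × 1 convolution whose weights hold the endmember signatures; in the snippet below the VCA result is replaced by a random placeholder, and the abundance maps are produced by a channel-wise softmax as described above.

```python
import torch
import torch.nn as nn

R, L_bands = 4, 198                                   # toy endmember and band counts

# Channel-wise softmax on the encoder output enforces ANC and ASC per pixel.
abundances = torch.softmax(torch.randn(1, R, 100, 100), dim=1)    # (B, R, H, W)

# Linear decoder: a 1x1 conv whose R -> L weights are the endmember signatures.
decoder = nn.Conv2d(R, L_bands, kernel_size=1, bias=False)
vca_endmembers = torch.rand(L_bands, R)               # placeholder for the VCA output (L x R)
with torch.no_grad():
    decoder.weight.copy_(vca_endmembers.view(L_bands, R, 1, 1))

x_hat = decoder(abundances)                           # reconstructed HSI, (B, L, H, W)
estimated_E = decoder.weight.detach().view(L_bands, R)   # endmembers read back after training
print(x_hat.shape, estimated_E.shape)
```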

3.4. Loss Function

Two types of loss functions are introduced during the training process of the proposed model: the reconstruction error (RE) loss and the spectral angle distance (SAD) loss. The specific formulations are given as follows:
L_RE(X, X̂) = (1 / HW) Σ_{i=1}^{H} Σ_{j=1}^{W} ‖X̂_ij − X_ij‖²
L_SAD(X, X̂) = (1 / R) Σ_{i=1}^{R} arccos( ⟨X_i, X̂_i⟩ / (‖X_i‖₂ ‖X̂_i‖₂) )
The RE loss is computed using the mean squared error (MSE), which guides the encoder to extract essential features from the input HSIs while reducing the influence of redundant information. The SAD loss, on the other hand, is scale-invariant and helps mitigate the limitations of MSE in distinguishing endmember components caused by absolute magnitude differences. In HU tasks, the combination of these two loss functions not only compensates for their individual shortcomings but also accelerates the convergence of the model.
The total loss function is defined as a weighted sum of these two loss terms:
L = β L_RE + γ L_SAD
where β and γ are regularization parameters used to balance the contributions of the two loss terms.
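A minimal PyTorch sketch of this composite loss is given below; here the SAD term is evaluated per pixel between the input and its reconstruction (one common choice), and β and γ are placeholder weights standing in for the per-dataset values in Table 1.

```python
import torch
import torch.nn.functional as F

def sad(a, b, eps=1e-8):
    """Spectral angle (radians) between two batches of spectra along the last dimension."""
    cos = F.cosine_similarity(a, b, dim=-1).clamp(-1 + eps, 1 - eps)
    return torch.acos(cos).mean()

def unmixing_loss(x, x_hat, beta=1.0, gamma=0.1):
    """Weighted sum of reconstruction MSE and SAD; beta and gamma are placeholders."""
    l_re = F.mse_loss(x_hat, x)                                 # pixel-wise MSE
    l_sad = sad(x.flatten(2).transpose(1, 2),                   # per-pixel spectra, (B, HW, L)
                x_hat.flatten(2).transpose(1, 2))
    return beta * l_re + gamma * l_sad

x = torch.rand(1, 198, 100, 100)
loss = unmixing_loss(x, x + 0.01 * torch.randn_like(x))
print(loss.item())
```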

4. Experiments

4.1. Datasets

In this study, three real HSIs and one synthetic dataset were used to evaluate the performance of the proposed algorithm. Figure 6 shows the true color images and the corresponding reference endmembers of the real datasets.
(1) Samson Dataset [45]: Collected by the SAMSON sensor, this dataset consists of 952 × 952 pixels with 156 spectral bands ranging from 401 to 889 nm. A cropped subimage of size 95 × 95 pixels was used in the experiments. The dataset contains three endmember classes: soil, tree, and water.
(2) Jasper Ridge Dataset [46]: Acquired by the AVIRIS sensor, the original image has a spatial resolution of 512 × 614 pixels and contains 224 spectral bands covering the wavelength range of 380 to 2500 nm. A 100 × 100 pixel subimage was used in the experiments. After removing bands affected by water absorption and atmospheric interference, 198 valid bands were retained. The image includes four endmember classes: soil, water, tree, and road.
(3) Apex Dataset [37]: The Apex image is acquired by the APEX sensor, consisting of 110 × 110 pixels with 285 spectral bands covering the wavelength range of 413–2421 nm. This dataset includes four endmember classes: water, tree, road, and roof.
(4) Synthetic Dataset [47]: The dataset is constructed using endmembers extracted from a real HSI and contains 50 × 50 pixels with 162 spectral bands, covering five categories of endmembers: roof, metal, soil, tree, and asphalt. It is generated based on the GLMM, which extends the LMM by introducing pixel-wise scaling factors to simulate spectral variability from terrain, illumination, or atmospheric effects. While more flexible, the mixing remains linear. This dataset helps assess the proposed method’s robustness and generalization under challenging conditions, offering a more rigorous benchmark for real-world scenarios.

4.2. Description of Experimental Equipment and Parameters

The experiments in this study were conducted on a PC equipped with an AMD Ryzen 7 7735H processor with Radeon Graphics (AMD, Santa Clara, CA, USA) and an NVIDIA GeForce RTX 4060 Laptop GPU (NVIDIA, Santa Clara, CA, USA) using the Python 3.8.0 interpreter. Several hyperparameters were explored across different datasets, as summarized in Table 1. The regularization parameters β and γ are employed to balance the contributions of the RE and SAD loss terms. Other parameters, including patch size P, the resolution of the feature X Ada denoted as A, the number of training epochs, the learning rate, and the weight decay coefficient, are also listed in the table.

4.3. Comparison Methods

To comprehensively evaluate the effectiveness of the proposed SCMT-Net, ten representative unmixing methods were selected for comparison. Specifically, VCA [15] and FCLSU [16] were chosen as representatives of geometry-based and least-squares-based unmixing approaches, respectively. MLNMF [22] was included as a typical statistical modeling method, and uDAS [29] was considered as an unsupervised learning-based method. In addition, six state-of-the-art DL-based unmixing models were evaluated: CyCU-Net [33], Deep-Trans [37], HUMSCAN [34], CDCTA [35], MAT-Net [38], and CMSDAE [39]. Among them, CyCU-Net focuses on spatial feature extraction, Deep-Trans is the first unmixing network based on the transformer architecture, HUMSCAN emphasizes multiscale spatial feature modeling, CDCTA addresses endmember variability while preserving spectral geometry, MAT-Net integrates CNN-extracted spectral and spatial features using a transformer-based encoder, and CMSDAE enhances channel-wise multiscale representation through spectral-channel attention mechanisms. For all comparison methods, the initial endmembers were extracted using the VCA algorithm.

4.4. Evaluation Metrics

The quantitative results are reported using the root mean squared error (RMSE) between the estimated abundances and the ground-truth abundances, which is calculated as follows:
RMSE(M, M̂) = √( (1 / (RHW)) Σ_{k=1}^{R} Σ_{i=1}^{H} Σ_{j=1}^{W} ( M̂_kij − M_kij )² )
as well as the SAD between the estimated and ground-truth endmembers, which is computed as follows:
SAD(S, Ŝ) = (1 / R) Σ_{i=1}^{R} arccos( ⟨S^(i), Ŝ^(i)⟩ / (‖S^(i)‖₂ · ‖Ŝ^(i)‖₂) )
where ⟨·, ·⟩ denotes the inner product between vectors, and S^(i) represents the i-th column of the ground-truth endmember matrix S.
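Both metrics are straightforward to compute; the NumPy sketch below assumes the estimated endmembers have already been matched to the ground-truth columns, with toy dimensions chosen for illustration.

```python
import numpy as np

def rmse(m_true, m_est):
    """Root mean squared abundance error over all R x H x W entries."""
    return np.sqrt(np.mean((m_est - m_true) ** 2))

def mean_sad(s_true, s_est, eps=1e-12):
    """Mean spectral angle (radians) between matched endmember columns (L x R)."""
    num = np.sum(s_true * s_est, axis=0)
    den = np.linalg.norm(s_true, axis=0) * np.linalg.norm(s_est, axis=0) + eps
    return np.mean(np.arccos(np.clip(num / den, -1.0, 1.0)))

# Toy example with R = 3 endmembers over a 95 x 95 image and L = 156 bands.
rng = np.random.default_rng(2)
M = rng.random((3, 95, 95))
S = rng.random((156, 3))
print(rmse(M, M + 0.01), mean_sad(S, S * 1.1))   # small RMSE; SAD ~ 0 (scale-invariant)
```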

4.5. Quantitative Results

(1) Samson Dataset: The quantitative results on the Samson dataset are presented in Table 2 and Table 3. It can be observed that SCMT-Net significantly outperforms other methods in both abundance estimation and endmember extraction. The proposed method achieves an average RMSE of 0.0854, which is notably lower than that of the second-best method, CMSDAE. Moreover, the average SAD of SCMT-Net is 0.0389. Although CyCU-Net demonstrates the best performance for tree endmember extraction, SCMT-Net achieves the highest overall endmember estimation accuracy. These results demonstrate the strong competitiveness of SCMT-Net on the Samson dataset and further validate its feasibility and superiority in HU tasks.
(2) Jasper Dataset: The quantitative evaluation results on the Jasper Ridge dataset are presented in Table 4 and Table 5. As shown in the tables, SCMT-Net achieves an average RMSE of 0.0885, which is 19.3% lower than the second-best method, CMSDAE. The proposed method outperforms most existing techniques in abundance estimation for all four endmembers in the Jasper dataset, delivering competitive results. Although methods such as Trans-Net and CDCTA exhibit certain strengths, SCMT-Net demonstrates the most accurate estimation of endmember spectra overall, highlighting its robustness and effectiveness in HU.
(3) Apex Dataset: The quantitative evaluation results on the Apex dataset are reported in Table 6 and Table 7. SCMT-Net stands out by achieving the lowest average RMSE and SAD values, with an average RMSE of 0.1185 and an average SAD of 0.0771, indicating superior performance in both abundance and endmember estimation. The endmember “Road” in the Apex dataset poses a considerable challenge for most comparison methods, whereas SCMT-Net is capable of producing a satisfactory estimation. The Apex dataset contains richer spectral information with a greater number of spectral bands and more complex spatial features, making the unmixing task more challenging.
(4) Synthetic Dataset: The quantitative results on the synthetic dataset are presented in Table 8 and Table 9. The proposed method achieves superior performance in both abundance estimation and endmember signature reconstruction, with improvements of 15.4% and 30%, respectively, over the second-best methods. Even under non-ideal conditions with pixel-level endmember variability, the proposed SCMT-Net consistently maintains the lowest average RMSE and SAD values, demonstrating its robustness and practical applicability in complex mixing scenarios.

4.6. Visual Analysis

For the Samson dataset, the abundance estimation results are illustrated in Figure 7. The proposed SCMT-Net produces abundance maps that are highly consistent with the ground truth, which can be attributed to its effective utilization of multiscale spatial and channel information, enabling precise capture of fine-grained features in fragmented regions. In contrast, methods such as FCLSU and MLNMF tend to underestimate the abundance of soil and tree, resulting in over-detailed water abundance maps. Moreover, Trans-Net and CDCTA overestimate the abundance of water in some tree-covered areas. The abundance maps generated by CyCU-Net lack the detail and smoothness observed in the reference maps. Figure 8 shows the endmember estimation results, where the endmembers extracted by the proposed method are almost identical to the ground-truth signatures.
For the Jasper dataset, Figure 9 and Figure 10 present a visual comparison of unmixing results obtained by different methods. The abundance maps and endmember spectra generated by SCMT-Net are more consistent with the reference data. Methods such as FCLSU and MLNMF tend to overestimate the abundance of water while underestimating those of trees and roads, and the soil abundance maps they produce lack fine details. SCMT-Net demonstrates particularly strong performance in estimating the “tree” endmember. In the Jasper dataset, roads occupy a small portion of the scene, making the estimation of their abundance and spectral signatures more challenging due to complex distributions. Many methods fail to accurately estimate the abundance of roads or to fully separate them, whereas SCMT-Net achieves more precise separation. These results further confirm that the proposed network is more effective in capturing fine-grained details and contextual information in HSIs compared to other DL approaches.
For the Apex dataset, the abundance maps shown in Figure 11 indicate that the results produced by SCMT-Net are visually closest to the reference. FCLSU and MLNMF significantly overestimate the abundance of water and fail to provide accurate estimation for roads. The water abundance map generated by SCMT-Net avoids the redundant textures and erroneous details commonly observed in other methods, demonstrating higher accuracy, particularly in complex mixed-pixel regions. Compared with other approaches, SCMT-Net effectively suppresses unnecessary estimation errors and ensures proper separation between water and other materials. As shown in Figure 12, FCLSU, MLNMF, and CyCU-Net fail to correctly extract the road endmember. Although HUMSCAN achieves the best performance for the “Water” endmember category, the proposed method shows superior overall performance in endmember estimation.
For the synthetic dataset, as shown in the visual results of Figure 13, although most methods can roughly distinguish different materials in the scene, SCMT-Net achieves the reconstruction results closest to the reference. As illustrated in Figure 14, CDCTA shows a significant deviation in estimating the “Roof” endmember, while FCLSU achieves the best performance on the “Metal” endmember. Nevertheless, the proposed method demonstrates superior overall performance in endmember estimation. It is worth noting that even under non-ideal conditions where pixel-level endmember variability is present—conditions that deviate from the assumptions of the LMM—SCMT-Net is still able to maintain stable and accurate unmixing performance. These results indicate its strong generalization capability and robustness, making it well suited for HU tasks in complex mixing scenarios.

4.7. Ablation Study

We adopted an AE architecture to perform unmixing tasks on four different datasets. To validate the contribution of each encoder module in SCMT-Net, ablation experiments were conducted accordingly.
(1) Effect of Each Module of SCMT-Net: Table 10 presents the ablation results of different module combinations in the SCMT encoder across multiple datasets. Compared with the full configuration where all three modules work cooperatively, the unmixing performance degrades when any individual module is used alone. This indicates that each component plays a complementary role. As transformer architectures are particularly effective at modeling long-range dependencies, the inclusion of both the SMT and CMT modules significantly enhances the unmixing capability of the SCMT encoder.
(2) Effect of MMSA Module: Table 11 investigates the performance contributions of the multiscale branch and the channel attention branch within the MMSA module. When only the multiscale branch is used, the lack of channel-wise information interaction leads to performance degradation, indicating that the channel attention branch in MMSA is critical—while introducing negligible computational overhead. Conversely, when only the channel attention branch is used, the absence of multiscale contextual information also results in suboptimal performance. These results demonstrate that using either component alone is less effective than combining both, highlighting the complementary nature of multiscale and channel attention mechanisms in HU.

4.8. Computational Complexity and Time Consumption Analysis

To compare the model complexity and computational efficiency of different methods, all experiments were conducted on the same computing platform. Model complexity was evaluated by calculating the number of parameters and floating-point operations (FLOPs), while computational efficiency was assessed based on runtime. All experiments were performed on the Jasper dataset, and the results are presented in Table 12.
Traditional methods exhibit high computational efficiency with a relatively low processing time. Among the deep learning-based approaches, CDCTA and HUMSCAN show longer runtimes due to their complex network architectures, whereas the proposed SCMT-Net demonstrates more competitive computational efficiency. Among all the compared methods, SCMT-Net achieves the best unmixing accuracy while maintaining relatively low parameter counts and FLOPs, along with a shorter runtime. Since model complexity remains an important consideration, we will further optimize the architecture of SCMT-Net in our future work.

5. Conclusions

This paper proposes a novel HU network, SCMT-Net, which combines the CFP module with the hybrid SCMT module to achieve the collaborative modeling of local details and global context. In particular, the CMT module we designed excels at capturing long-range dependencies between channels, while the MMSA module embedded in both the SMT and CMT promotes the representation of multiscale features. Coupled with the E-FFN, it further enhances the information interaction between different channels, effectively learning the dynamic relationships between multiscale spatial and spectral features. Through comparative analysis on multiple datasets, SCMT-Net demonstrated superior performance in abundance estimation and endmember extraction tasks, validating its strong generalization ability and outstanding feature representation capabilities. Although SCMT-Net demonstrated strong performance on multiple HU datasets, it still has certain limitations. The proposed model has room for improvement in terms of parameter size and computational overhead, and future work will focus on designing more lightweight network architectures while maintaining unmixing performance. In addition, the channel feature modeling process may be affected by redundant information, potentially weakening the representation of high-dimensional spectral features. Future research will consider introducing more efficient attention mechanisms to optimize the channel modeling strategy and incorporating feature aggregation enhancement modules to further improve the model’s accuracy, robustness, and practicality.

Author Contributions

All authors made substantial contributions to the work. Conceptualization: H.S. and Q.C.; Methodology: Q.C.; Software: Q.C.; Validation: Q.C., M.C. and J.X.; Writing—Original Draft Preparation: Q.C.; Writing—Review and Editing: H.S. and F.M.; Visualization: J.X. and M.C.; Supervision: H.S. and F.M.; Funding Acquisition: H.S. and F.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Education Department Project of Jilin Province under Grant JJKH20240741KJ and the Science and Technology Department Project of Jilin Province under Grant YDZJ202501ZYTS528.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Some datasets are available at https://rslab.ut.ac.ir/data (accessed on 14 May 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
  2. Smith, M.O.; Johnson, P.E.; Adams, J.B. Quantitative determination of mineral types and abundances from reflectance spectra using principal components analysis. J. Geophys. Res. Solid Earth 1985, 90, C797–C804. [Google Scholar] [CrossRef]
  3. Lv, Z.; Huang, H.; Li, X.; Zhao, M.; Benediktsson, J.A.; Sun, W.; Falco, N. Land cover change detection with heterogeneous remote sensing images: Review, progress, and perspective. Proc. IEEE 2022, 110, 1976–1991. [Google Scholar] [CrossRef]
  4. Lv, Z.; Zhong, P.; Wang, W.; You, Z.; Falco, N. Multiscale attention network guided with change gradient image for land cover change detection using remote sensing images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 2501805. [Google Scholar] [CrossRef]
  5. Keshava, N.; Mustard, J.F. Spectral unmixing. IEEE Signal Process. Mag. 2002, 19, 44–57. [Google Scholar] [CrossRef]
  6. Ghamisi, P.; Yokoya, N.; Li, J.; Liao, W.; Liu, S.; Plaza, J.; Rasti, B.; Plaza, A. Advances in hyperspectral image and signal processing: A comprehensive overview of the state of the art. IEEE Geosci. Remote Sens. Mag. 2017, 5, 37–78. [Google Scholar] [CrossRef]
  7. Dong, L.; Yuan, Y.; Lu, X. Spectral–spatial joint sparse NMF for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2020, 59, 2391–2402. [Google Scholar] [CrossRef]
  8. Lu, X.; Dong, L.; Yuan, Y. Subspace clustering constrained sparse NMF for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3007–3019. [Google Scholar] [CrossRef]
  9. Yuan, Y.; Zhang, Z.; Wang, Q. Improved collaborative non-negative matrix factorization and total variation for hyperspectral unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 998–1010. [Google Scholar] [CrossRef]
  10. Li, J.; Li, Y.; Song, R.; Mei, S.; Du, Q. Local spectral similarity preserving regularized robust sparse hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7756–7769. [Google Scholar] [CrossRef]
  11. Bhatt, J.S.; Joshi, M.V. Deep learning in hyperspectral unmixing: A review. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 2189–2192. [Google Scholar]
  12. Dobigeon, N.; Moussaoui, S.; Coulon, M.; Tourneret, J.Y.; Hero, A.O. Joint Bayesian endmember extraction and linear unmixing for hyperspectral imagery. IEEE Trans. Signal Process. 2009, 57, 4355–4368. [Google Scholar] [CrossRef]
  13. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Sparse unmixing of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2014–2039. [Google Scholar] [CrossRef]
  14. Khoshsokhan, S.; Rajabi, R.; Zayyani, H. Sparsity-constrained distributed unmixing of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1279–1288. [Google Scholar] [CrossRef]
  15. Nascimento, J.M.; Dias, J.M. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910. [Google Scholar] [CrossRef]
  16. Heinz, D.C.; Chang, C.-I. Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 529–545. [Google Scholar] [CrossRef]
  17. Lu, X.; Wu, H.; Yuan, Y.; Yan, P.; Li, X. Manifold regularized sparse NMF for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2012, 51, 2815–2826. [Google Scholar] [CrossRef]
  18. Li, X.; Zhang, X.; Yuan, Y.; Dong, Y. Adaptive relationship preserving sparse NMF for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5504516. [Google Scholar] [CrossRef]
  19. Soydan, H.; Koz, A.; Düzgün, H.Ş. Secondary iron mineral detection via hyperspectral unmixing analysis with sentinel-2 imagery. Int. J. Appl. Earth Obs. Geoinf. 2021, 101, 102343. [Google Scholar] [CrossRef]
  20. Khoshsokhan, S.; Rajabi, R.; Zayyani, H. Clustered multitask non-negative matrix factorization for spectral unmixing of hyperspectral data. J. Appl. Remote Sens. 2019, 13, 026509. [Google Scholar] [CrossRef]
  21. Qian, Y.; Jia, S.; Zhou, J.; Robles-Kelly, A. Hyperspectral unmixing via L1/2 sparsity-constrained nonnegative matrix factorization. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4282–4297. [Google Scholar] [CrossRef]
  22. Rajabi, R.; Ghassemian, H. Spectral unmixing of hyperspectral imagery using multilayer NMF. IEEE Geosci. Remote Sens. Lett. 2014, 12, 38–42. [Google Scholar] [CrossRef]
Figure 1. Architecture of SCMT-Net. In the abundance maps, red indicates higher abundance values while blue indicates lower abundance values.
Figure 2. The framework of SMT.
Figure 3. Illustration of MMSA module.
Figure 4. The framework of efficient feed-forward network (E-FFN).
Figure 5. The framework of CMT.
Figure 6. True color images and reference endmembers of the experimental datasets.
Figure 7. Visual comparison of abundance maps obtained by different unmixing methods on the Samson dataset.
Figure 8. Visual comparison of endmembers obtained by different unmixing methods on the Samson dataset. Ground-truth endmembers are shown in blue, and estimated endmembers are shown in orange.
Figure 9. Visual comparison of abundance maps obtained by different unmixing methods on the Jasper dataset.
Figure 10. Visual comparison of endmembers obtained by different unmixing methods on the Jasper dataset. Ground-truth endmembers are shown in blue, and estimated endmembers are shown in orange.
Figure 11. Visual comparison of abundance maps obtained by different unmixing methods on the Apex dataset.
Figure 12. Visual comparison of endmembers obtained by different unmixing methods on the Apex dataset. Ground-truth endmembers are shown in blue, and estimated endmembers are shown in orange.
Figure 13. Visual comparison of abundance maps obtained by different unmixing methods on the synthetic dataset.
Figure 14. Visual comparison of endmembers obtained by different unmixing methods on the synthetic dataset. Ground-truth endmembers are shown in blue, and estimated endmembers are shown in orange.
Table 1. Hyperparameter settings.

| Hyperparameter | Synthetic | Samson | Jasper | Apex |
|---|---|---|---|---|
| P | 5 × 5 | 5 × 5 | 5 × 5 | 5 × 5 |
| A | 10 | 9 | 10 | 8 |
| β | 8 × 10² | 9 × 10² | 2 × 10³ | 5 × 10² |
| γ | 9 × 10² | 11 × 10² | 5 × 10² | 5 × 10² |
| Epoch | 250 | 220 | 140 | 250 |
| Learning rate | 8 × 10⁻³ | 6 × 10⁻³ | 4 × 10⁻³ | 9 × 10⁻² |
| Weight decay | 6 × 10⁻⁵ | 5 × 10⁻⁴ | 22 × 10⁻⁵ | 4 × 10⁻⁴ |
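For readers reproducing the training setup, the entries in Table 1 map onto a conventional optimizer configuration. The sketch below only illustrates that mapping under stated assumptions: it assumes a PyTorch training loop and an Adam optimizer, neither of which is specified by the table, and the configuration dictionary is a hypothetical placeholder rather than the authors' released code.

```python
# Minimal sketch of wiring the Table 1 hyperparameters (Samson column) into a
# PyTorch optimizer. Assumptions: Adam optimizer; SCMT-Net model construction
# is omitted and represented by any torch.nn.Module.
import torch

SAMSON_CFG = {
    "patch_size": 5,        # P: 5 x 5 input patches
    "epochs": 220,          # Epoch row
    "learning_rate": 6e-3,  # Learning rate row
    "weight_decay": 5e-4,   # Weight decay row
}

def build_optimizer(model: torch.nn.Module, cfg: dict) -> torch.optim.Optimizer:
    """Create an optimizer from a per-dataset hyperparameter dictionary."""
    return torch.optim.Adam(
        model.parameters(),
        lr=cfg["learning_rate"],
        weight_decay=cfg["weight_decay"],
    )
```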
Table 2. Average RMSE results on the Samson dataset.

| Class | FCLSU | MLNMF | uDAS | CYCU | Trans-Net | HUMSCAN | CDCTA | MAT | CMSDAE | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| Soil | 0.2420 | 0.2396 | 0.1560 | 0.1780 | 0.1608 | 0.1624 | 0.1773 | 0.1115 | 0.1017 | 0.0925 |
| Tree | 0.2385 | 0.2390 | 0.1168 | 0.1652 | 0.1520 | 0.1043 | 0.1595 | 0.0560 | 0.0653 | 0.0581 |
| Water | 0.3851 | 0.3861 | 0.2355 | 0.1648 | 0.2661 | 0.1872 | 0.2866 | 0.1042 | 0.1030 | 0.0997 |
| Mean RMSE | 0.2965 | 0.2964 | 0.1765 | 0.1695 | 0.1998 | 0.1552 | 0.2152 | 0.0939 | 0.0917 | 0.0854 |
Table 3. Average SAD results on the Samson dataset.

| Class | VCA | MLNMF | uDAS | CYCU | Trans-Net | HUMSCAN | CDCTA | MAT | CMSDAE | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| Soil | 0.0248 | 0.0248 | 0.0207 | 0.0210 | 0.0186 | 0.0237 | 0.0691 | 0.0221 | 0.0137 | 0.0134 |
| Tree | 0.0518 | 0.0528 | 0.0492 | 0.0282 | 0.0465 | 0.0871 | 0.0487 | 0.0471 | 0.0410 | 0.0420 |
| Water | 0.1093 | 0.0983 | 0.1299 | 0.1101 | 0.0902 | 0.0859 | 0.1232 | 0.0741 | 0.0632 | 0.0613 |
| Mean SAD | 0.0620 | 0.0586 | 0.0666 | 0.0531 | 0.0518 | 0.0656 | 0.0803 | 0.0478 | 0.0393 | 0.0389 |
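The RMSE and SAD scores reported in Tables 2–9 follow the definitions standard in the unmixing literature: the root-mean-square error between estimated and reference abundance maps, and the spectral angle between estimated and reference endmember signatures. The NumPy sketch below illustrates those standard formulas only; it is not the authors' evaluation code, and any per-class averaging or endmember-matching step used in the paper may differ.

```python
# Standard RMSE and SAD metrics for hyperspectral unmixing (minimal sketch).
import numpy as np

def abundance_rmse(a_est: np.ndarray, a_ref: np.ndarray) -> float:
    """RMSE between estimated and reference abundances of matching shape."""
    return float(np.sqrt(np.mean((a_est - a_ref) ** 2)))

def spectral_angle(m_est: np.ndarray, m_ref: np.ndarray) -> float:
    """Spectral angle (radians) between two endmember spectra."""
    cos_sim = np.dot(m_est, m_ref) / (np.linalg.norm(m_est) * np.linalg.norm(m_ref))
    return float(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

def mean_sad(E_est: np.ndarray, E_ref: np.ndarray) -> float:
    """Mean SAD over matched endmembers; E_* have shape (num_endmembers, num_bands)."""
    return float(np.mean([spectral_angle(e, r) for e, r in zip(E_est, E_ref)]))
```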
Table 4. Average RMSE results on the Jasper dataset.

| Class | FCLSU | MLNMF | uDAS | CYCU | Trans-Net | HUMSCAN | CDCTA | MAT | CMSDAE | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| Tree | 0.1558 | 0.1547 | 0.1267 | 0.1887 | 0.1336 | 0.1158 | 0.0820 | 0.1444 | 0.1258 | 0.0786 |
| Water | 0.1965 | 0.3422 | 0.1509 | 0.0991 | 0.0557 | 0.0634 | 0.0919 | 0.0992 | 0.0521 | 0.0460 |
| Soil | 0.1393 | 0.1397 | 0.1194 | 0.2946 | 0.2324 | 0.1316 | 0.1978 | 0.1276 | 0.1225 | 0.1178 |
| Road | 0.1087 | 0.1037 | 0.1184 | 0.2686 | 0.1953 | 0.1391 | 0.1760 | 0.0936 | 0.1213 | 0.0956 |
| Mean RMSE | 0.1534 | 0.2069 | 0.1295 | 0.2260 | 0.1681 | 0.1163 | 0.1460 | 0.1180 | 0.1098 | 0.0885 |
Table 5. Average SAD results on the Jasper dataset.

| Class | VCA | MLNMF | uDAS | CYCU | Trans-Net | HUMSCAN | CDCTA | MAT | CMSDAE | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| Road | 0.0901 | 0.1017 | 0.0361 | 0.1023 | 0.0606 | 0.0373 | 0.0370 | 0.0458 | 0.0417 | 0.0304 |
| Water | 0.2554 | 0.2929 | 0.4101 | 0.1770 | 0.2574 | 0.2567 | 0.2368 | 0.0609 | 0.0461 | 0.0426 |
| Soil | 0.1166 | 0.1630 | 0.0532 | 0.2839 | 0.0465 | 0.0652 | 0.0449 | 0.1123 | 0.0879 | 0.0531 |
| Tree | 0.1657 | 0.1337 | 0.1509 | 0.5241 | 0.1834 | 0.1925 | 0.1041 | 0.1247 | 0.0991 | 0.0752 |
| Mean SAD | 0.1569 | 0.1728 | 0.1626 | 0.2718 | 0.1370 | 0.1379 | 0.1057 | 0.0859 | 0.0687 | 0.0503 |
Table 6. Average RMSE results on the Apex dataset.

| Class | FCLSU | MLNMF | uDAS | CYCU | Trans-Net | HUMSCAN | CDCTA | MAT | CMSDAE | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| Road | 0.1827 | 0.1686 | 0.2578 | 0.3400 | 0.1946 | 0.1630 | 0.2171 | 0.1568 | 0.1786 | 0.1379 |
| Tree | 0.2427 | 0.2388 | 0.4781 | 0.2550 | 0.1215 | 0.1011 | 0.1262 | 0.1262 | 0.1263 | 0.1236 |
| Roof | 0.2101 | 0.2015 | 0.3352 | 0.1213 | 0.1375 | 0.1185 | 0.1206 | 0.1433 | 0.1269 | 0.1148 |
| Water | 0.3827 | 0.7081 | 0.3845 | 0.1278 | 0.1146 | 0.0937 | 0.1027 | 0.1371 | 0.1367 | 0.0931 |
| Mean RMSE | 0.2660 | 0.3961 | 0.3726 | 0.2300 | 0.1455 | 0.1221 | 0.1485 | 0.1413 | 0.1437 | 0.1185 |
Table 7. Average SAD results on the Apex dataset.

| Class | VCA | MLNMF | uDAS | CYCU | Trans-Net | HUMSCAN | CDCTA | MAT | CMSDAE | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| Road | 0.4775 | 0.1402 | 0.1485 | 0.4312 | 0.1081 | 0.1194 | 0.1454 | 0.1245 | 0.1260 | 0.0614 |
| Tree | 0.1287 | 0.8179 | 0.1350 | 0.2476 | 0.1353 | 0.1403 | 0.1347 | 0.1369 | 0.1321 | 0.1319 |
| Roof | 0.3660 | 0.0709 | 0.1078 | 0.1102 | 0.1024 | 0.1143 | 0.2000 | 0.1145 | 0.1150 | 0.0756 |
| Water | 0.2123 | 1.1913 | 0.0616 | 0.4204 | 0.0397 | 0.0388 | 0.0432 | 0.0432 | 0.0415 | 0.0395 |
| Mean SAD | 0.2961 | 0.5551 | 0.1132 | 0.3024 | 0.0964 | 0.1032 | 0.1308 | 0.1048 | 0.1036 | 0.0771 |
Table 8. Average RMSE results on the Synthetic dataset.

| Class | FCLSU | MLNMF | uDAS | CYCU | Trans-Net | HUMSCAN | CDCTA | MAT | CMSDAE | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| Asphalt | 0.2818 | 0.3158 | 0.2813 | 0.3033 | 0.2614 | 0.1874 | 0.2318 | 0.0690 | 0.0685 | 0.0686 |
| Tree | 0.1111 | 0.0736 | 0.0809 | 0.0750 | 0.0952 | 0.0702 | 0.1223 | 0.0540 | 0.0483 | 0.0400 |
| Roof | 0.1190 | 0.1762 | 0.1335 | 0.1452 | 0.0777 | 0.0824 | 0.1991 | 0.0803 | 0.0729 | 0.0613 |
| Metal | 0.1175 | 0.1395 | 0.1104 | 0.2853 | 0.2132 | 0.1933 | 0.2142 | 0.1096 | 0.1008 | 0.0782 |
| Dirt | 0.1292 | 0.1670 | 0.1333 | 0.1690 | 0.3169 | 0.1215 | 0.1304 | 0.0835 | 0.0823 | 0.0687 |
| Mean RMSE | 0.1652 | 0.1916 | 0.1633 | 0.2138 | 0.2142 | 0.1407 | 0.1851 | 0.0814 | 0.0765 | 0.0647 |
Table 9. Average SAD results on the Synthetic dataset.

| Class | FCLSU | MLNMF | uDAS | CYCU | Trans-Net | HUMSCAN | CDCTA | MAT | CMSDAE | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| Asphalt | 0.0586 | 0.1329 | 0.1324 | 0.0969 | 0.2103 | 0.0527 | 0.1277 | 0.0474 | 0.0431 | 0.0338 |
| Tree | 0.0660 | 0.0453 | 0.0465 | 0.0908 | 0.0845 | 0.0490 | 0.1248 | 0.0211 | 0.0132 | 0.0271 |
| Roof | 0.1003 | 0.1887 | 0.2339 | 0.1823 | 0.1835 | 0.1088 | 0.4323 | 0.0851 | 0.0643 | 0.0570 |
| Metal | 0.0080 | 0.0178 | 0.0217 | 0.0746 | 0.0350 | 0.0179 | 0.0633 | 0.1029 | 0.0716 | 0.0153 |
| Dirt | 0.0760 | 0.0647 | 0.0681 | 0.0596 | 0.0999 | 0.1043 | 0.2156 | 0.0435 | 0.0440 | 0.0318 |
| Mean SAD | 0.0618 | 0.0899 | 0.1005 | 0.1008 | 0.1226 | 0.0665 | 0.1927 | 0.0600 | 0.0472 | 0.0330 |
Table 10. Ablation study of different encoder module combinations on the four datasets.

| Dataset | CFP RMSE | CFP SAD | CFP + SMT RMSE | CFP + SMT SAD | CFP + CMT RMSE | CFP + CMT SAD | Proposed RMSE | Proposed SAD |
|---|---|---|---|---|---|---|---|---|
| Samson | 0.1242 | 0.0729 | 0.1149 | 0.0590 | 0.0958 | 0.0678 | 0.0854 | 0.0389 |
| Jasper | 0.1313 | 0.0822 | 0.1144 | 0.0731 | 0.1459 | 0.0774 | 0.0885 | 0.0503 |
| Apex | 0.1876 | 0.1409 | 0.1901 | 0.1629 | 0.1872 | 0.1214 | 0.1185 | 0.0771 |
| Synthetic | 0.0835 | 0.0348 | 0.1103 | 0.0592 | 0.1034 | 0.0337 | 0.0647 | 0.0330 |
Table 11. Ablation study of the MMSA module on the four datasets.

| Module | Multiscale | Channel | Samson RMSE | Samson SAD | Jasper RMSE | Jasper SAD | Apex RMSE | Apex SAD | Synthetic RMSE | Synthetic SAD |
|---|---|---|---|---|---|---|---|---|---|---|
| MMSA | | | 0.0918 | 0.1070 | 0.1329 | 0.0738 | 0.1455 | 0.0964 | 0.1265 | 0.1409 |
| MMSA | | | 0.1123 | 0.0905 | 0.1296 | 0.0774 | 0.1245 | 0.1175 | 0.1396 | 0.1649 |
| MMSA | | | 0.0854 | 0.0389 | 0.0885 | 0.0503 | 0.1185 | 0.0771 | 0.0647 | 0.0330 |
Table 12. Computational complexity and time consumption comparison on the Jasper dataset.

| Method | FCLSU | MLNMF | uDAS | CYCU | Trans-Net | HUMSCAN | CDCTA | MAT | CMSDAE | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|
| aRMSE | 0.1534 | 0.2069 | 0.1295 | 0.2260 | 0.1681 | 0.1163 | 0.1460 | 0.1180 | 0.1098 | 0.0885 |
| aSAD | 0.1569 | 0.1728 | 0.1626 | 0.2718 | 0.1370 | 0.1379 | 0.1057 | 0.0859 | 0.0687 | 0.0503 |
| Params | - | 0.005 M | 0.007 M | 0.29 M | 7.75 M | 9.64 M | 8.37 M | 6.79 M | 6.6 M | 7.69 M |
| FLOPs | - | 71.23 K | 17.52 K | 0.35 M | 2.97 G | 4.89 G | 5.23 G | 2.89 G | 5.13 G | 4.76 G |
| Computational Time | 0.69 s | 2.36 s | 6.56 s | 7.12 s | 22.35 s | 67.21 s | 72.69 s | 23.69 s | 24.72 s | 23.85 s |
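The parameter counts and runtimes of the kind reported in Table 12 can be reproduced with simple instrumentation; the sketch below shows one common way to count trainable parameters and time a forward pass in PyTorch. It is illustrative only: the model and input are placeholders rather than the released SCMT-Net code, and the FLOPs figures in the table would require a separate profiling tool not shown here.

```python
# Minimal sketch for reporting model size and inference time, as in Table 12.
import time
import torch

def count_parameters(model: torch.nn.Module) -> int:
    """Number of trainable parameters (reported in millions in Table 12)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

@torch.no_grad()
def time_forward(model: torch.nn.Module, x: torch.Tensor, repeats: int = 10) -> float:
    """Average wall-clock time (seconds) of a forward pass over `repeats` runs."""
    model.eval()
    start = time.perf_counter()
    for _ in range(repeats):
        _ = model(x)
    return (time.perf_counter() - start) / repeats
```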