Article

SCSU–GDO: Superpixel Collaborative Sparse Unmixing with Graph Differential Operator for Hyperspectral Imagery

1 School of Computer Science, China University of Geosciences, Wuhan 430074, China
2 The Second Survey and Mapping Institute of Hunan Province, Changsha 410121, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(17), 3088; https://doi.org/10.3390/rs17173088
Submission received: 3 July 2025 / Revised: 2 September 2025 / Accepted: 2 September 2025 / Published: 4 September 2025

Abstract

In recent years, remarkable advancements have been achieved in hyperspectral unmixing (HU). Sparse unmixing, which models mixed pixels as linear combinations of endmembers weighted by their fractional abundances, has become a dominant paradigm in hyperspectral image analysis. To address the inherent limitations of spectral-only approaches, spatial contextual information has been integrated into unmixing. In this article, a superpixel collaborative sparse unmixing algorithm with a graph differential operator (SCSU–GDO) is proposed, which effectively integrates superpixel-based local collaboration with graph differential spatial regularization. The proposed algorithm contains three key steps. First, superpixel segmentation partitions the hyperspectral image into homogeneous regions, leveraging boundary information to preserve structural coherence. Subsequently, a local collaborative weighted sparse regression model is formulated to jointly enforce data fidelity and sparsity constraints on abundance estimation. Finally, to enhance spatial consistency, the Laplacian matrix derived from graph learning is decomposed into a graph differential operator, adaptively capturing local smoothness and structural discontinuities within the image. Comprehensive experiments on three datasets demonstrate the accuracy, robustness, and practical efficacy of the proposed method.

1. Introduction

Hyperspectral remote sensing is an advanced imaging technology capable of capturing both spectral and spatial characteristics of ground targets in a single acquisition [1]. Hyperspectral images (HSI) leverage high-dimensional spectral data to enable precise discrimination and characterization of materials, with diverse applications across mineral exploration [2], agricultural monitoring [3], ecological assessment [4], submarine geomorphology mapping [5], and military surveillance [6,7]. However, the inherent limitations of HSI acquisition systems—such as restricted spatial resolution and atmospheric scattering effects—often result in mixed pixels, where each pixel spectrally integrates multiple materials. Consequently, accurately disentangling endmembers and quantifying their fractional abundances remains a critical challenge in HSI interpretation, constituting the fundamental task of spectral unmixing [8].
Classical spectral unmixing relies on a mathematical formalism of spectral mixing mechanisms, typically implemented through a mixing model. Among these, the Linear Mixing Model (LMM) has gained prominence owing to its computational tractability and broad applicability [9]. Under the LMM assumption, mixed pixels can be represented as a weighted sum of endmembers, with the weights corresponding to their fractional abundances [10]. While the LMM offers simplicity and ease of implementation, its effectiveness is often limited in scenarios with intricate spectral–spatial interactions [11]. Furthermore, conventional unmixing approaches decouple endmember extraction from abundance estimation, a sequential workflow prone to error propagation. Prominent endmember extraction algorithms, e.g., N-FINDR [12], the Pixel Purity Index (PPI) [13], and Vertex Component Analysis (VCA) [14], inherently rely on the pure pixel assumption, which may not hold under real-world imaging conditions, imposing inherent limitations on unmixing fidelity.
To mitigate these limitations, contemporary spectral unmixing approaches increasingly rely on predefined spectral libraries. Among these, Multiple Endmember Spectral Mixture Analysis (MESMA) and Sparse Unmixing (SU) employ comprehensive or overcomplete spectral libraries to estimate per-pixel abundances. MESMA operates by iteratively selecting optimal endmember subsets from the library to reconstruct the observed spectrum with minimal residual error [15,16,17]. Inspired by the use of standard spectral libraries, together with advances in sparse representation [18,19], sparse unmixing has rapidly become a typical semi-supervised unmixing method [20]. By imposing sparsity priors on abundance vectors, SU capitalizes on the inherent sparsity of real-world scenes, where typically only 3–5 endmembers appear within a pixel [21,22]. Sparse unmixing jointly enables spectral fitting and endmember selection, circumventing the need for explicit pure pixel assumptions.
This sparsity prior motivates the development of sparsity-constrained unmixing algorithms that explicitly exploit structural patterns within the abundance matrix. For example, SUnSAL [23] imposes pixel-wise sparsity through an ℓ1-norm regularizer, whereas the Collaborative SUnSAL (CLSUnSAL) [24] adopts a group-sparsity paradigm via the ℓ2,1-norm, enforcing joint sparsity across all pixels under the assumption of homogeneous endmember activation. Critically, CLSUnSAL presupposes that all pixels share identical active endmembers—a condition rarely satisfied in practice, as endmember distributions are typically localized within spatially contiguous regions rather than globally pervasive. To overcome this limitation, the Local Collaborative Sparse Unmixing (LCSU) framework [21] introduces spatially adaptive collaboration, restricting sparsity constraints to homogeneous superpixel neighborhoods to reconcile global spectral coherence with local spatial consistency.
Hyperspectral imagery inherently encapsulates not only discriminative spectral signatures but also multiscale spatial patterns. Due to the limitations of spectral-only unmixing, spatially aware methods have emerged as a dominant approach, leveraging spatial-contextual information to mitigate spectral ambiguity. For instance, SUnSAL–TV [25] extends the baseline sparse unmixing model by imposing Total Variation (TV) regularization, enforcing piecewise smoothness in abundance maps through ℓ1 penalties on gradient magnitudes. However, TV-based methods primarily capture local smoothness while neglecting spatially heterogeneous structures, prompting the integration of non-local self-similarity priors [26] to preserve both local continuity and global texture coherence. Building upon this concept, Dual Regularized Sparse Unmixing (DRSU) [27] incorporates dual sparsity-inducing weights—simultaneously constraining endmember activation sparsity and abundance spatial consistency. Its variant DRSU–TV [28] further couples TV regularization with weighted sparse constraints, enabling adaptive exploration of spatial–spectral correlations. To unify these advancements, the Spectral–Spatial Weighted Sparse Unmixing (S2WSU) [29] achieves enhanced abundance estimation accuracy through joint optimization of spectral fidelity, structured sparsity, and multi-scale spatial constraints, demonstrating superior performance in complex scenes.
Superpixel segmentation, as a pivotal strategy in spatially aware hyperspectral unmixing, effectively balances spatial coherence and computational efficiency [30,31]. For instance, the Multi-Scale Sparse Unmixing Algorithm (MUA–SLIC) [32] leverages SLIC [33] to partition the image into homogeneous superpixels, enabling multi-scale spectral approximation through hierarchical transformation. By decoupling the spatially regularized unmixing problem into computationally tractable sub-problems, MUA–SLIC achieves efficient integration of spectral–spatial context while maintaining low algorithmic complexity. Building upon this paradigm, SUSRLR–TV [34] synthesizes TV regularization with low-rank matrix recovery, simultaneously enforcing abundance smoothness and intra-superpixel correlation. Further advancing spatial constraints, SBGLSU [35] incorporates a graph Laplacian operator to model intrinsic geometric relationships within superpixels, demonstrating statistically significant improvements in abundance estimation accuracy. At the frontier of this field, Spatial–Spectral Multiscale Sparse Unmixing [36] proposes a unified regularization architecture for multi-resolution analysis, holistically exploiting cross-scale spectral variability and spatial texture persistence. Collectively, these superpixel-driven approaches establish a systematic methodology to address the spectral–spatial heterogeneity inherent to real-world hyperspectral scenes.
Conventional Total Variation (TV) regularization struggles to distinguish intrinsic material transitions from noise-induced variations in local regions. In contrast, Relative Total Variation (RTV) [37] quantifies both absolute intensity gradients and relative structural coherence, enabling robust extraction of spatially coherent features. Inspired by RTV’s discriminative capability, GLDWSU [38] was proposed to encode adaptive spatial dependencies through learned graph topologies. Specifically, the graph Laplacian matrix derived from spectral–spatial affinity learning adaptively captures multi-scale structural patterns, serving as a data-driven spatial regularizer. However, traditional graph Laplacian operators primarily enforce second-order smoothness (i.e., penalizing curvature) [39], which inadequately preserves first-order continuity (gradient-level consistency) across neighboring pixels. To address this limitation, FoGTF-HU [40] directly constrains first-order differences via a trend filtering operator, effectively modeling piecewise linear abundance transitions while suppressing staircase artifacts common in TV-based methods.
In recent years, deep learning has been increasingly applied to sparse unmixing tasks. For instance, SUnCNN [41] formulates the unmixing problem as an optimization over neural network parameters, using a convolutional encoder–decoder to generate abundance maps from a predefined spectral library. Abundance constraints are enforced through softmax activation, and spatial information is implicitly captured via convolutional operations. Another notable approach is EGU-Net [42], an unsupervised two-stream Siamese network designed for hyperspectral unmixing. It incorporates a secondary network that learns from pure or nearly pure endmember spectra to guide the main unmixing branch, with both branches sharing parameters and enforcing physically meaningful constraints such as nonnegativity and sum-to-one. EGU-Net further supports spatial–spectral unmixing by integrating convolutional operations, enabling it to preserve spatial coherence while producing accurate and interpretable abundance maps without requiring ground truth labels. Despite these advancements, deep learning-based methods often require large-scale labeled data, substantial training time, and involve many tunable parameters. Additionally, their lack of transparency can hinder the integration of domain knowledge such as physical priors or spatial structures.
Given these limitations, deep learning–based methods may not always be practical for real hyperspectral unmixing tasks. In contrast, traditional sparse unmixing frameworks remain appealing because they are physically interpretable, unsupervised, and computationally efficient. However, existing sparse approaches such as SUnSAL and its spatial extensions typically rely on pixel-wise sparsity assumptions or second-order Laplacian regularization. While these strategies can be effective in certain cases, they tend to overlook local spectral similarity within homogeneous regions and often oversmooth structural boundaries, thereby limiting their ability to accurately model the complex spatial heterogeneity of real hyperspectral images. Motivated by these limitations, this study proposes a novel hyperspectral unmixing framework termed SCSU–GDO, which synergistically integrates superpixel collaborative sparse regression with graph differential operator spatial regularization. More details are illustrated in Figure 1. The methodology begins by segmenting hyperspectral images into spatially contiguous and spectrally homogeneous superpixels using SLIC, establishing adaptive neighborhoods as the foundation for localized collaboration. Leveraging the inherent property that mixed pixels within each superpixel share similar endmember subsets and exhibit smoothly varying abundances, a locally weighted collaborative sparse regression model is designed. This model jointly optimizes endmember activation patterns and abundance smoothness within homogeneous regions through group sparsity constraints (the ℓ2,1-norm), effectively addressing the limitations of global sparsity assumptions in handling spatial heterogeneity. Furthermore, to overcome the second-order smoothness bias of traditional graph Laplacian operators, a first-order graph differential operator is constructed via spectral decomposition of the learned Laplacian matrix.
This operator enforces gradient-level continuity of abundances among intra-class pixels while preserving inter-class structural boundaries, balancing sparsity promotion with spatial fidelity. In addition, ADMM [43] is used for optimization, ensuring coordinated convergence of spectral reconstruction error, local collaborative sparsity, and differential spatial regularization. Unlike existing sparse or graph-based unmixing methods that rely solely on global sparsity assumptions or conventional Laplacian regularization, the proposed framework introduces several key innovations, summarized as follows:
(1)
Superpixel-based Local Weighted Collaborative Sparse Regression is proposed to exploit local correlations for unmixing. This strategy applies superpixel segmentation to hyperspectral images and then performs local weighted collaborative sparse unmixing, which provides improved sparse constraints on the abundance matrix and yields better unmixing results.
(2)
First-order graph differential operator is adopted as a spatial regularizer, which directly models gradient-level variations and better preserves structural boundaries, in contrast to traditional Laplacian operators that primarily enforce second-order smoothness.
(3)
Experimental results confirm the effectiveness of the proposed method: SCSU–GDO achieves better performance than other state-of-the-art (SOTA) algorithms.

2. Sparse Unmixing

2.1. Basic Linear Mixing Model (LMM)

LMM assumes that each mixed pixel can be formed by a linear combination of multiple known endmember signatures. Let $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_n] \in \mathbb{R}^{l \times n}$ represent a hyperspectral image containing $l$ bands and $n$ pixels. Here, $\mathbf{y}_d$ denotes a pixel, and the basic model is
$$\mathbf{y}_d = \sum_{i=1}^{m} \mathbf{e}_i p_i + \boldsymbol{\varepsilon}$$
where $m$ is the number of endmembers, $\mathbf{e}_i$ denotes the $i$-th endmember signature, $p_i$ represents the corresponding fractional abundance, and $\boldsymbol{\varepsilon}$ represents the model error.
Considering the practical situation, two constraints are imposed, i.e., the Abundance Sum-to-one Constraint (ASC) and the Abundance Non-negativity Constraint (ANC), which are expressed as
$$\mathrm{ANC:}\ p_i \geq 0, \qquad \mathrm{ASC:}\ \sum_{i=1}^{m} p_i = 1$$
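To make the LMM concrete, the following sketch simulates a single mixed pixel under the ANC and ASC; the sizes and random signatures are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
l, m = 224, 5                      # bands and endmembers (illustrative sizes)
E = rng.random((l, m))             # endmember signatures e_i as columns
p = rng.random(m)
p /= p.sum()                       # ASC: abundances sum to one; ANC holds by construction
eps = 0.001 * rng.standard_normal(l)
y = E @ p + eps                    # mixed pixel: y_d = sum_i e_i * p_i + noise
```

Every unmixing method discussed below attempts to invert this forward model, i.e., recover $p$ from $\mathbf{y}$ and a library of candidate signatures.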

2.2. Sparse Unmixing Methods

Sparse unmixing is a representative semi-supervised method that estimates fractional abundances using a pre-existing spectral library, thereby circumventing the challenging task of endmember extraction. Since a hyperspectral scene typically contains far fewer endmembers than the library, the resulting abundance matrix tends to be sparse. Accordingly, sparse unmixing seeks the sparse linear representation that best characterizes each mixed pixel, leading to the following optimization formulation:
$$\min_{\mathbf{X}} \frac{1}{2}\left\|\mathbf{Y} - \mathbf{A}\mathbf{X}\right\|_F^2 + \lambda \left\|\mathbf{X}\right\|_0 \quad \mathrm{s.t.}\ \mathbf{X} \geq 0$$
where $\lambda$ is a regularization parameter for the sparse term, $\|\cdot\|_F$ represents the Frobenius norm for data fidelity, and $\|\cdot\|_0$ represents the $\ell_0$ norm, i.e., the number of non-zero elements:
$$\|\mathbf{x}\|_0 = \lim_{p \to 0} \|\mathbf{x}\|_p^p = \lim_{p \to 0} \sum_k |x_k|^p = \#\{i : x_i \neq 0\}$$
Since the $\ell_0$ norm leads to a non-convex problem, it is difficult to solve directly [44]. SUnSAL relaxes the original $\ell_0$ norm to the $\ell_1$ norm so that the problem can be solved directly:
$$\min_{\mathbf{X}} \frac{1}{2}\left\|\mathbf{Y} - \mathbf{A}\mathbf{X}\right\|_F^2 + \lambda \left\|\mathbf{X}\right\|_{1,1} \quad \mathrm{s.t.}\ \mathbf{X} \geq 0$$
where $\|\mathbf{X}\|_{1,1} = \sum_{j=1}^{n} \|\mathbf{x}_j\|_1$, and $\mathbf{x}_j$ represents the $j$-th column of the abundance matrix $\mathbf{X}$. Similarly, CLSUnSAL relaxes the $\ell_0$ norm to a mixed $\ell_{2,1}$ norm to globally enforce row sparsity of abundances, ensuring that all pixels share the same active set of endmembers. The model is expressed as follows:
$$\min_{\mathbf{X}} \frac{1}{2}\left\|\mathbf{Y} - \mathbf{A}\mathbf{X}\right\|_F^2 + \lambda \left\|\mathbf{X}\right\|_{2,1} \quad \mathrm{s.t.}\ \mathbf{X} \geq 0$$
where $\|\mathbf{X}\|_{2,1} = \sum_{k=1}^{m} \|\mathbf{x}^k\|_2$, and $\mathbf{x}^k$ represents the $k$-th row of the abundance matrix. Compared to SUnSAL, CLSUnSAL enforces joint sparsity across the entire image and accounts for the similarity in endmember distribution between adjacent pixels rather than employing pixel-level independent regression.
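The practical difference between the ℓ1,1 and ℓ2,1 regularizers can be seen numerically; the small abundance matrix below is a made-up example.

```python
import numpy as np

X = np.array([[0.9, 0.8, 0.7],   # row k holds the abundances of endmember k across pixels
              [0.0, 0.0, 0.0],   # an inactive endmember: an all-zero row
              [0.1, 0.2, 0.3]])

l11 = np.abs(X).sum()                    # ||X||_{1,1}: sum of column l1 norms
l21 = np.linalg.norm(X, axis=1).sum()    # ||X||_{2,1}: sum of row l2 norms
```

Because the ℓ2,1 norm sums row-wise ℓ2 norms, its proximal operator shrinks whole rows at once, which is how CLSUnSAL drives inactive endmembers (all-zero rows) out of the solution.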
Considering the spatial consistency of the abundance map, where adjacent pixels contain the same endmembers and share similar abundance values, SUnSAL–TV explores the spatial information with a total variation regularizer accounting for spatial characteristics. The SUnSAL–TV model is represented as follows:
$$\min_{\mathbf{X}} \frac{1}{2}\left\|\mathbf{Y} - \mathbf{A}\mathbf{X}\right\|_F^2 + \lambda \left\|\mathbf{X}\right\|_{1,1} + \lambda_{TV}\,\mathrm{TV}(\mathbf{X}) \quad \mathrm{s.t.}\ \mathbf{X} \geq 0$$
where $\lambda_{TV}$ is the regularization parameter and $\mathrm{TV}(\mathbf{X}) \equiv \sum_{\{i,j\} \in \mathcal{E}} \|\mathbf{x}_i - \mathbf{x}_j\|_1$, with $\mathcal{E}$ the set of horizontally and vertically neighboring pixel pairs in the image.

3. SCSU–GDO

Traditional sparse unmixing methods, such as SUnSAL and CLSUnSAL, typically assume a globally consistent sparse structure across the entire scene. This global sparsity assumption makes them less effective in capturing the spatial heterogeneity of real hyperspectral images, where material compositions vary locally. Similarly, conventional spatial regularization techniques—such as total variation and graph Laplacian-based models—enforce second-order smoothness, which often leads to over-smoothing and the loss of critical boundary information.
The proposed SCSU–GDO framework partitions the image into spatially coherent and spectrally homogeneous superpixels, within which group-sparse regression is applied to exploit local similarity and improve abundance estimation. To further enhance spatial consistency while preserving meaningful transitions, a first-order graph differential operator is constructed from a learned Laplacian matrix, allowing for gradient-level regularization that better aligns with object boundaries.
Compared with existing sparse or graph-based methods, the proposed model provides a more adaptive and structure-aware approach to hyperspectral unmixing, capable of handling spatial variability and preserving fine-scale details. The overall model is formulated as a constrained optimization problem and solved using the ADMM algorithm, which enables efficient coordination between spectral reconstruction, local sparsity modeling, and spatial regularization.
The following subsections present the detailed mathematical formulation and optimization strategy.

3.1. Superpixel-Based Local Collaborative Sparse Regression

3.1.1. Superpixel Segmentation-Based Uniform Region Extraction

Pixels in uniform regions exhibit similar spectral reflectance, which typically indicates that these pixels share similar endmembers as well as fractional abundances. Recognizing this characteristic, we use the SLIC algorithm, a typical superpixel segmentation algorithm, to segment the image into regions. Compared to standard K-means clustering, SLIC adopts a spatially localized comparison strategy by restricting centroid updates to a fixed spatial neighborhood. This spatial constraint allows SLIC to better capture local spectral–spatial homogeneity in hyperspectral images while maintaining computational efficiency. After this process, the uniform regions can be obtained, as shown in Figure 2.
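The segmentation step can be sketched as a localized k-means in a joint spectral–spatial feature space. This simplified stand-in omits SLIC's restricted search window and gradient-based seeding, and all sizes and parameters here are assumptions, not values from the paper.

```python
import numpy as np

def slic_like(img, grid=5, n_iter=5, compactness=0.1):
    """Simplified SLIC-style superpixels: k-means on [spectrum, scaled y, scaled x]
    with centers seeded on a regular grid (illustrative, not the full SLIC)."""
    h, w, b = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    feats_img = np.concatenate(
        [img, compactness * ys[..., None], compactness * xs[..., None]], axis=-1)
    feats = feats_img.reshape(-1, b + 2)
    cy = np.linspace(0, h - 1, grid).astype(int)
    cx = np.linspace(0, w - 1, grid).astype(int)
    centers = feats_img[np.ix_(cy, cx)].reshape(-1, b + 2)
    for _ in range(n_iter):
        d = ((feats[:, None, :] - centers[None]) ** 2).sum(-1)  # squared distances
        labels = d.argmin(1)
        for k in range(len(centers)):                            # recompute centroids
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(0)
    return labels.reshape(h, w)

rng = np.random.default_rng(1)
hsi = rng.random((30, 30, 8))      # tiny synthetic cube: rows x cols x bands
labels = slic_like(hsi)            # one superpixel label per pixel
```

The compactness factor plays the same role as SLIC's spatial weight: larger values yield more regular, grid-like superpixels.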

3.1.2. Local Collaborative Sparse Regression

Inspired by CLSUnSAL, and acknowledging that mixed pixels with similar endmembers and fractional abundances often cluster within spatially uniform regions rather than being scattered across the entire image, we employ Local Collaborative Sparse Unmixing (LCSU), formulated as below:
$$\min_{\mathbf{X}} \frac{1}{2}\left\|\mathbf{A}\mathbf{X} - \mathbf{Y}\right\|_F^2 + \lambda \sum_{i=1}^{n} \sum_{k=1}^{m} \left\|\mathbf{x}^k_{N_i}\right\|_2 \quad \mathrm{s.t.}\ \mathbf{X} \geq 0$$
where $N_i$ is the neighborhood of pixel $i$, $\mathbf{x}^k_{N_i}$ denotes the $k$-th abundance row restricted to $N_i$, and $\lambda$ is a regularization parameter controlling the degree of sparseness. Compared to CLSUnSAL, LCSU only assumes that adjacent pixels share the same support. This assumption fits better, as an endmember is more likely to appear consistently within spatially uniform regions.

3.1.3. Sparse Regularization

In this paper, uniform regions obtained through superpixel segmentation are utilized to achieve local collaborative sparsity. Additionally, a reweighting matrix is introduced to enhance row sparsity. The resulting superpixel-wise sparsity constraint is denoted as
$$\sum_{i=1}^{K} \left\|\mathbf{W} \odot \mathbf{X}_i\right\|_{2,1}$$
where $\mathbf{W} = \mathrm{diag}\!\left(\frac{1}{\|\mathbf{X}_{1,:}\|_2 + \varepsilon}, \ldots, \frac{1}{\|\mathbf{X}_{m,:}\|_2 + \varepsilon}\right)$ represents the weight matrix, $\varepsilon$ is a small positive constant, $K$ denotes the number of superpixels, $\mathbf{X}_i$ is the abundance sub-matrix of the $i$-th superpixel, and $\odot$ denotes the Hadamard product.
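A sketch of the reweighting step (function and variable names are mine): rows of the abundance matrix with small ℓ2 norm receive large weights, so the next iteration penalizes them more heavily and drives them toward zero.

```python
import numpy as np

def reweight(X, eps=1e-3):
    """W = diag(1 / (||X_{k,:}||_2 + eps)), the row-reweighting matrix."""
    return np.diag(1.0 / (np.linalg.norm(X, axis=1) + eps))

X = np.array([[0.9, 0.8],     # strongly active endmember row
              [0.0, 0.01],    # nearly inactive row: gets the largest weight
              [0.2, 0.1]])
W = reweight(X)
wl21 = np.linalg.norm(W @ X, axis=1).sum()   # weighted l2,1 term for one superpixel
```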

3.2. Graph Differential Operator-Based Graph Learning

3.2.1. Graph Learning for Laplacian Matrix

In SBGLSU, SLIC is first applied to extract spatially uniform regions, and a weighted map is constructed for each superpixel to obtain graph Laplacian regularization. Since the graph Laplacian matrix (GLM) in this algorithm is predefined and not adaptively derived from image content, it limits the model’s applicability to other tasks. In contrast, the GLM in GLDWSU is constructed directly from the image using RTV, which more effectively captures local spatial structures and preserves meaningful edge information. This data-driven construction enhances the adaptability and relevance of the resulting graph to the input scene.
According to [45], regarding each pixel $y$ of the HSI $\mathbf{Y}$ as a vertex of a graph, the graph Laplacian can be calculated by
$$\mathbf{L}(y) = \mathbf{D}_r^T \mathbf{W}_r(y)\, \mathbf{D}_r + \mathbf{D}_c^T \mathbf{W}_c(y)\, \mathbf{D}_c$$
where $\mathbf{W}_r(y)$ and $\mathbf{W}_c(y)$ are diagonal weight matrices, and $\mathbf{D}_r$ and $\mathbf{D}_c$ are the discrete first-order derivative operators in the row and column directions of the image, i.e., forward-difference matrices of the form
$$\mathbf{D}_r = \begin{bmatrix} -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \end{bmatrix}$$
with $\mathbf{D}_c$ defined analogously along the columns.
In the row direction, $\mathbf{W}_r(y) = \mathrm{diag}(\mathbf{w}_r(y))$ is defined as
$$\mathbf{w}_r(y) = \left(g_\sigma * \frac{1}{\left|g_\sigma * (\mathbf{D}_r y)\right| + \varepsilon}\right) \odot \frac{1}{\left|\mathbf{D}_r y\right| + \varepsilon}$$
where $g_\sigma$ denotes the Gaussian filter, $*$ represents the convolution operator, $\odot$ represents the element-wise multiplication operator, and $\varepsilon$ is a small positive constant.
In the column direction, the weight matrix $\mathbf{W}_c(y)$ is defined similarly.
Since the original image $y$ is usually contaminated by noise, a denoising process can be applied before graph learning to obtain an accurate graph Laplacian matrix. The specific process is as follows:
$$y^{t+1} = \arg\min_y \frac{1}{2}\left\|y - s\right\|_2^2 + \frac{\lambda}{2}\, y^T \mathbf{L}(y^t)\, y$$
where $y$ denotes the ideal clean image, $s$ denotes the observed data, $\lambda$ is a parameter, and $\mathbf{L}(\cdot)$ is the GLM.

3.2.2. Graph Differential Operator

The Laplacian matrix is well-suited for capturing second-order smoothness, but it insufficiently represents the first-order smoothness of spatial structures in hyperspectral images [40]. To address this limitation, a first-order graph differential operator is utilized to effectively represent the spatial information of the HSI in the proposed method. Since the Laplacian matrix $\mathbf{L}$ can be expressed as $\mathbf{L} = \mathbf{P}^T \mathbf{P}$, from Equation (9) we can calculate $\mathbf{P}$ as
$$\mathbf{P} = \mathbf{W}^{1/2} \mathbf{D}$$
where $(\mathbf{W}^{1/2})^T (\mathbf{W}^{1/2}) = \mathbf{W}$. Compared to Equation (10), $\mathbf{W}$ and $\mathbf{D}$ in the above formula are not distinguished between the row and column directions. Thus, the differential operator $\mathbf{P}$ imposes spatial constraints on the abundance matrix by modeling gradient-level variations across neighboring pixels. This enables the regularization term $\|\mathbf{X}\mathbf{P}\|_1$ to suppress noise while preserving sharp abundance transitions.
This operator plays a role similar to Total Variation (TV) in image processing, where first-order differences promote piecewise smoothness rather than enforcing global smoothing. By applying this operator over a learned graph topology, the model adaptively enforces spatial coherence across pixels while being sensitive to boundaries between materials.
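The decomposition can be illustrated on a 1-D chain graph (a minimal sketch with made-up edge weights): the Laplacian L = PᵀP penalizes curvature, while P itself penalizes weighted first differences, so a perfectly linear signal still has a nonzero but constant-per-edge response under P.

```python
import numpy as np

n = 6
D = np.zeros((n - 1, n))           # first-order forward-difference operator
idx = np.arange(n - 1)
D[idx, idx] = -1.0
D[idx, idx + 1] = 1.0

w = np.array([1.0, 0.5, 2.0, 1.0, 0.25])   # edge weights (illustrative)
P = np.sqrt(np.diag(w)) @ D                # differential operator P = W^{1/2} D
L = P.T @ P                                # graph Laplacian L = P^T P

x = np.linspace(0.0, 1.0, n)               # a perfectly linear signal
first_diffs = P @ x                        # equal steps, scaled by sqrt(w) per edge
null = L @ np.ones(n)                      # constants lie in the null space of L
```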

3.3. Formulation of the Proposed SCSU–GDO Model

The proposed model employs local collaborative weighted sparse regression based on superpixel segmentation as the sparse constraint term, and utilizes a first-order graph differential operator to further enhance the model’s sparsity, represented as
$$\min_{\mathbf{X} \geq 0} \frac{1}{2}\left\|\mathbf{Y} - \mathbf{A}\mathbf{X}\right\|_F^2 + \lambda_1 \sum_{i=1}^{K} \left\|\mathbf{W} \odot \mathbf{X}_i\right\|_{2,1} + \lambda_2 \left\|\mathbf{X}\mathbf{P}\right\|_1 \quad (13)$$
where $\lambda_1$ and $\lambda_2$ are regularization parameters. The second term represents the sparse constraint. The final term involves spatial structure regularization, utilizing the graph differential operator $\mathbf{P}$ to promote similar piecewise smoothness between the image $\mathbf{Y}$ and the fractional abundance image $\mathbf{X}$. Specifically, $\lambda_1$ controls the local sparsity within superpixels, and $\lambda_2$ enforces spatial smoothness on the abundance maps via the graph differential operator.
To solve the optimization problem (13), ADMM is employed to transform the original problem (13) into (14):
$$\min_{\mathbf{X},\mathbf{U},\mathbf{V},\mathbf{Z}} \frac{1}{2}\left\|\mathbf{Y} - \mathbf{A}\mathbf{X}\right\|_F^2 + \lambda_1 \sum_{i=1}^{K} \left\|\mathbf{U}_i\right\|_{2,1} + \lambda_2 \left\|\mathbf{Z}\right\|_1 + \iota_{\mathbb{R}_+}(\mathbf{V}) \quad \mathrm{s.t.}\ \mathbf{W}\mathbf{X} = \mathbf{U},\ \mathbf{X} = \mathbf{V},\ \mathbf{V}\mathbf{P} = \mathbf{Z} \quad (14)$$
where $\iota_{\mathbb{R}_+}(\cdot)$ represents the indicator function of the non-negative orthant. Introducing the scaled dual variables $\mathbf{D}$, $\mathbf{E}$, and $\mathbf{F}$, the augmented Lagrangian function for (14) is reformulated as
$$\mathcal{L}(\mathbf{X},\mathbf{U},\mathbf{V},\mathbf{Z},\mathbf{D},\mathbf{E},\mathbf{F}) = \frac{1}{2}\left\|\mathbf{Y} - \mathbf{A}\mathbf{X}\right\|_F^2 + \lambda_1 \sum_{i=1}^{K} \left\|\mathbf{U}_i\right\|_{2,1} + \lambda_2 \left\|\mathbf{Z}\right\|_1 + \iota_{\mathbb{R}_+}(\mathbf{V}) + \frac{\mu}{2}\left\|\mathbf{W}\mathbf{X} - \mathbf{U} + \mathbf{D}\right\|_F^2 + \frac{\mu}{2}\left\|\mathbf{X} - \mathbf{V} + \mathbf{E}\right\|_F^2 + \frac{\mu}{2}\left\|\mathbf{V}\mathbf{P} - \mathbf{Z} + \mathbf{F}\right\|_F^2$$
where $\mu$ is a penalty coefficient.
During the iterations, ADMM solves the sub-problem of each variable in turn:
$$\mathbf{X}^{t+1} = \arg\min_{\mathbf{X}} \frac{1}{2}\left\|\mathbf{Y} - \mathbf{A}\mathbf{X}\right\|_F^2 + \frac{\mu}{2}\left\|\mathbf{W}^t\mathbf{X} - \mathbf{U}^t + \mathbf{D}^t\right\|_F^2 + \frac{\mu}{2}\left\|\mathbf{X} - \mathbf{V}^t + \mathbf{E}^t\right\|_F^2$$
Since this sub-problem is convex, it has a closed-form solution, which can be directly calculated as
$$\mathbf{X}^{t+1} = \left(\mathbf{A}^T\mathbf{A} + \mu\mathbf{I} + \mu(\mathbf{W}^t)^T\mathbf{W}^t\right)^{-1}\left(\mathbf{A}^T\mathbf{Y} + \mu\left((\mathbf{W}^t)^T(\mathbf{U}^t - \mathbf{D}^t) + \mathbf{V}^t - \mathbf{E}^t\right)\right)$$
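As a sketch, the closed-form update above is a single linear solve; all shapes and values below are illustrative stand-ins for A, Y, and the current ADMM iterates.

```python
import numpy as np

rng = np.random.default_rng(2)
l, m, n, mu = 50, 10, 40, 0.1
A = rng.random((l, m))                         # spectral library
Y = rng.random((l, n))                         # observed pixels
W = np.diag(1.0 / (rng.random(m) + 0.5))       # current reweighting matrix
U, D, V, E = (rng.random((m, n)) for _ in range(4))

lhs = A.T @ A + mu * np.eye(m) + mu * W.T @ W  # shared m x m system matrix
rhs = A.T @ Y + mu * (W.T @ (U - D) + V - E)
X_next = np.linalg.solve(lhs, rhs)             # minimizer of the X sub-problem
```

Because the left-hand m × m matrix is shared by all n pixels, the update costs one factorization plus n back-substitutions.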
Similarly, the other sub-problems are expressed as follows:
$$\mathbf{U}^{t+1} = \arg\min_{\mathbf{U}} \lambda_1 \sum_{i=1}^{K} \left\|\mathbf{U}_i\right\|_{2,1} + \frac{\mu}{2}\left\|\mathbf{W}^t\mathbf{X}^{t+1} - \mathbf{U} + \mathbf{D}^t\right\|_F^2$$
$$\mathbf{V}^{t+1} = \arg\min_{\mathbf{V}} \iota_{\mathbb{R}_+}(\mathbf{V}) + \frac{\mu}{2}\left\|\mathbf{X}^{t+1} - \mathbf{V} + \mathbf{E}^t\right\|_F^2 + \frac{\mu}{2}\left\|\mathbf{V}\mathbf{P} - \mathbf{Z}^t + \mathbf{F}^t\right\|_F^2$$
$$\mathbf{Z}^{t+1} = \arg\min_{\mathbf{Z}} \lambda_2 \left\|\mathbf{Z}\right\|_1 + \frac{\mu}{2}\left\|\mathbf{V}^{t+1}\mathbf{P} - \mathbf{Z} + \mathbf{F}^t\right\|_F^2$$
Algorithm 1 displays the entire algorithm process, which involves the soft threshold function $\mathrm{soft}(u, \theta) = \mathrm{sign}(u)\max(|u| - \theta, 0)$ and the projection $[v]_+ = \max(v, 0)$.
Algorithm 1: Pseudo code of SCSU–GDO
Input: $\mathbf{Y}$, $\mathbf{A}$, $\mathbf{P}$, $\lambda_1$, $\lambda_2$, $\mu$, $\varepsilon$
1. Initialization: $t = 0$; $\mathbf{X}^0, \mathbf{U}^0, \mathbf{V}^0, \mathbf{Z}^0, \mathbf{D}^0, \mathbf{E}^0, \mathbf{F}^0$
2. repeat
3. $\mathbf{W} = \mathrm{diag}\!\left(\frac{1}{\|\mathbf{X}_{1,:}\|_2 + \varepsilon}, \ldots, \frac{1}{\|\mathbf{X}_{m,:}\|_2 + \varepsilon}\right)$
4. $\mathbf{X}^{t+1} = \left(\mathbf{A}^T\mathbf{A} + \mu\mathbf{I} + \mu\mathbf{W}^T\mathbf{W}\right)^{-1}\left(\mathbf{A}^T\mathbf{Y} + \mu\left(\mathbf{W}^T(\mathbf{U}^t - \mathbf{D}^t) + \mathbf{V}^t - \mathbf{E}^t\right)\right)$
5. for $g = 1$ to $n_g$: $\mathbf{U}_g^{t+1} = \mathrm{vector\_soft}\!\left(\mathbf{W}_g\mathbf{X}_g^{t+1} + \mathbf{D}_g^t,\ \lambda_1/\mu\right)$; end for
6. $\mathbf{V}^{t+1} = \left[\left(\mathbf{X}^{t+1} + \mathbf{E}^t + (\mathbf{Z}^t - \mathbf{F}^t)\mathbf{P}^T\right)\left(\mathbf{I} + \mathbf{P}\mathbf{P}^T\right)^{-1}\right]_+$
7. $\mathbf{Z}^{t+1} = \mathrm{soft}\!\left(\mathbf{V}^{t+1}\mathbf{P} + \mathbf{F}^t,\ \lambda_2/\mu\right)$
8. $\mathbf{D}^{t+1} = \mathbf{D}^t + \mathbf{W}\mathbf{X}^{t+1} - \mathbf{U}^{t+1}$
9. $\mathbf{E}^{t+1} = \mathbf{E}^t + \mathbf{X}^{t+1} - \mathbf{V}^{t+1}$
10. $\mathbf{F}^{t+1} = \mathbf{F}^t + \mathbf{V}^{t+1}\mathbf{P} - \mathbf{Z}^{t+1}$
11. until the termination condition is met
Output: $\mathbf{X}$
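The two shrinkage operators invoked in Algorithm 1 can be sketched as follows; the names soft and vector_soft follow the pseudocode, while the implementations are the standard proximal operators of the ℓ1 and ℓ2 norms.

```python
import numpy as np

def soft(U, theta):
    """Element-wise soft threshold: sign(u) * max(|u| - theta, 0)."""
    return np.sign(U) * np.maximum(np.abs(U) - theta, 0.0)

def vector_soft(U, theta, eps=1e-12):
    """Row-wise shrinkage (prox of the l2,1 norm): each row is scaled
    by max(1 - theta / ||row||_2, 0), zeroing small-norm rows entirely."""
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    return U * np.maximum(1.0 - theta / (norms + eps), 0.0)

U = np.array([[3.0, -4.0],    # large-norm row: shrunk but kept
              [0.1,  0.1]])   # small-norm row: zeroed by vector_soft
S = soft(U, 1.0)
V = vector_soft(U, 1.0)
```

The row-wise variant is what enforces the shared active set inside each superpixel: a row survives or vanishes as a whole.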

4. Experiment and Analysis

To assess the effectiveness of the proposed algorithm, both synthetic hyperspectral datasets and a real hyperspectral remote sensing image were used. For performance comparison, five advanced spatial–spectral sparse unmixing methods were selected as benchmarks, including CLSUnSAL [24], SUnSAL–TV [25], S2WSU [29], SBGLSU [30], and FoGTF-HU [40]. Two quantitative indicators, the signal reconstruction error (SRE) and the root mean square error (RMSE), were used. These two indicators are defined as follows:
$$\mathrm{SRE\,(dB)} = 10 \log_{10} \frac{E\!\left[\|\mathbf{X}\|_2^2\right]}{E\!\left[\|\mathbf{X} - \hat{\mathbf{X}}\|_2^2\right]}$$
where $E(\cdot)$ represents the expectation operator, $\hat{\mathbf{X}}$ is the estimated abundance, and $\mathbf{X}$ denotes the true abundance. RMSE is determined by the difference between $\hat{\mathbf{X}}$ and $\mathbf{X}$, and is defined as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\left\|\mathbf{X} - \hat{\mathbf{X}}\right\|_2^2}$$
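Both metrics can be sketched in a few lines (variable names are mine; the expectation in the SRE ratio cancels between numerator and denominator, so plain sums suffice):

```python
import numpy as np

def sre_db(X, Xhat):
    """Signal reconstruction error in dB (higher is better)."""
    return 10.0 * np.log10(np.sum(X ** 2) / np.sum((X - Xhat) ** 2))

def rmse(X, Xhat):
    """Root mean square error over all N abundance entries (lower is better)."""
    return np.sqrt(np.mean((X - Xhat) ** 2))

X = np.array([[0.7, 0.3], [0.3, 0.7]])   # toy reference abundances
Xhat = X + 0.01                          # estimate with a uniform entry-wise error
```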

4.1. Simulated Datasets

The spectral library $\mathbf{A} \in \mathbb{R}^{224 \times 240}$, consisting of 240 endmembers, was randomly selected from the U.S. Geological Survey (USGS) splib06 spectral library, released in September 2007. The selection ensured that the spectral angle between any two endmembers was greater than 4 degrees. The simulated data consist of reflectance values uniformly distributed over 224 spectral bands in the range of 0.4 to 2.5 μm. Several endmembers were randomly chosen from library A, and simulated data cubes (DC) were generated by linearly combining them with abundance matrices satisfying the ASC and ANC. DC1 selected five spectral signatures from A, resulting in a cube of 75 × 75 pixels. DC2, generated similarly, contained nine endmembers and 100 × 100 pixels. Gaussian white noise with SNRs of 20 dB, 30 dB, and 40 dB was added. The spectra of DC1 and DC2 are displayed in Figure 3. The abundance maps of DC1 and DC2 are shown in Figure 4 and Figure 5, respectively.
The proposed SCSU–GDO framework, along with five advanced sparse unmixing methods, was tested and compared on two simulated datasets. The comparative results are visually presented in Figure 6 and Figure 7. Figure 6 shows the abundance maps of DC1 under 30 dB SNR. Compared to the other algorithms, CLSUnSAL, SUnSAL–TV, and S2WSU exhibit significantly poorer performance in terms of spatial consistency and abundance accuracy. Notably, the estimated abundances for Endmember 3 from these three methods approach zero, indicating substantial deviations from ground truth. Furthermore, the abundance results obtained by S2WSU lack spatial smoothness. In contrast, SBGLSU demonstrates improved alignment with true abundances for Endmembers 2 and 4 but retains residual errors in Endmember 3. Although SBGLSU’s performance is comparable to FoGTF-HU in Figure 6, its abundance maps contain prominent noise artifacts, particularly under high noise levels (shown as Table 1).
Similar trends are observed in Figure 7. FoGTF-HU and SCSU–GDO preserve most density information from reference maps, whereas SUnSAL–TV and CLSUnSAL produce overly smooth distributions. SBGLSU, despite partial improvements, introduces localized false densities. In contrast, the improved performance of SCSU–GDO can be attributed to its algorithmic design. Specifically, the superpixel-based collaborative sparse regression adaptively enforces sparsity within spatially homogeneous regions, enhancing local coherence and suppressing noise. Meanwhile, the graph differential operator imposes structure-aware regularization that promotes spatial smoothness while preserving sharp transitions at material boundaries. This complementary combination enables the model to produce abundance maps that are both accurate and spatially consistent, achieving improved unmixing accuracy over FoGTF-HU.
Quantitative results (Table 1 and Table 2) further validate these findings. For DC1 (30 dB SNR), SCSU–GDO achieves an SRE of 44.6 dB, surpassing SBGLSU by 17.33 dB (a 63.55% improvement) and FoGTF-HU by 12 dB (36.52%). On DC2, SCSU–GDO attains SRE values 3.64 dB (15%) and 2.52 dB (9.9%) higher than SBGLSU and FoGTF-HU, respectively. Additionally, SCSU–GDO reports consistently lower RMSE values across both datasets, indicating a clear advantage in abundance estimation accuracy.
Mechanistically, DC1’s well-defined boundaries and localized spectral homogeneity allow superpixel segmentation to aggregate similar pixels. Subsequent Laplacian regularization within superpixels drives convergence of endmember abundances, explaining SBGLSU’s moderate advantages over CLSUnSAL and SUnSAL–TV. However, FoGTF-HU’s first-order graph differential operator captures pixel-level smoothness in high-dimensional HSIs, while SCSU–GDO integrates this operator with superpixel-based weighted sparse regularization, which enables SCSU–GDO to balance the local detail preservation and the global structural coherence, ensuring robust performance across noise levels and image complexities.
In summary, the proposed SCSU–GDO algorithm demonstrates consistently improved performance in both visual results and quantitative metrics (e.g., RMSE, SRE) compared to the selected spatial–spectral sparse unmixing algorithms.
To investigate the influence of the regularization parameters, after extensive experimentation we tested λ1 over {1 × 10⁻⁴, 5 × 10⁻⁴, 1 × 10⁻³, 5 × 10⁻³, 1 × 10⁻², 5 × 10⁻²} and λ2 over {1 × 10⁻⁶, 1 × 10⁻⁵, 5 × 10⁻⁵, 1 × 10⁻⁴, 1 × 10⁻³, 1 × 10⁻²}. Figure 8 illustrates the variation in SRE across these values for DC1 and DC2 at an SNR of 30 dB.
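The parameter sweep amounts to a grid search maximizing SRE over the two grids. A minimal sketch follows; the `unmix` callable is a hypothetical stand-in for any of the solvers, not an API from the paper:

```python
import itertools
import numpy as np

# Candidate values tested for the two regularization parameters
lambda1_grid = [1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2]
lambda2_grid = [1e-6, 1e-5, 5e-5, 1e-4, 1e-3, 1e-2]

def grid_search(unmix, X_true):
    """Return (best_SRE, (l1, l2)); `unmix(l1, l2)` yields an abundance estimate."""
    best = (-np.inf, None)
    for l1, l2 in itertools.product(lambda1_grid, lambda2_grid):
        X_est = unmix(l1, l2)
        sre = 10 * np.log10(np.sum(X_true**2) / np.sum((X_true - X_est)**2))
        best = max(best, (sre, (l1, l2)))
    return best
```

In practice each grid point requires a full unmixing run, which is why the sweep is reported only for the simulated data where reference abundances are available.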

4.2. Real Hyperspectral Remote Sensing Image

Here, a widely used real hyperspectral remote sensing image, the Cuprite dataset, was utilized for comparative analysis. A sub-image of 250 × 191 pixels with 224 spectral bands was selected for evaluation. To reduce the impact of noise, bands 1–2, 105–115, 150–170, and 223–224 were excluded, leaving 188 bands for analysis. Additionally, 498 spectral signatures were selected from the USGS library to construct an over-complete spectral library (also called a spectral dictionary), denoted as A (188 bands per signature).
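The band-exclusion step can be sketched with plain index arithmetic; the data matrix `Y` below is a hypothetical placeholder with the stated dimensions, used only to show the indexing pattern:

```python
import numpy as np

# 1-based band ranges excluded in the text: 1-2, 105-115, 150-170, 223-224
bad = np.r_[1:3, 105:116, 150:171, 223:225] - 1   # convert to 0-based indices
keep = np.setdiff1d(np.arange(224), bad)

# Apply the selection to a (bands x pixels) data matrix; the same `keep`
# index would also be applied to the rows of the spectral dictionary A.
Y = np.random.rand(224, 100)                      # placeholder observations
Y = Y[keep, :]                                    # 188 retained bands
```

Removing 2 + 11 + 21 + 2 = 36 bands from the original 224 leaves exactly the 188 bands stated above.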
Given the absence of reference abundance maps, qualitative comparisons were performed between the USGS Tricorder classification map and the abundance estimates generated by CLSUnSAL, SUnSAL–TV, S2WSU, SBGLSU, FoGTF-HU, and the proposed SCSU–GDO algorithm. Three typical minerals—Alunite, Buddingtonite, and Chalcedony—were selected for visual assessment, as illustrated in Figure 9. CLSUnSAL, which assumes a globally consistent active set of endmembers across all pixels, produced less accurate abundance maps with spatial inconsistencies. SUnSAL–TV generated smoother abundance distributions but failed to preserve fine-scale details. Although S2WSU and SBGLSU achieved relatively accurate unmixing results, their Buddingtonite abundance maps exhibited residual noise artifacts. FoGTF-HU avoided this limitation by leveraging spatial structure optimization to extract more precise spatial information for the three minerals, aligning its results more closely with the reference classification map.
In contrast, the proposed SCSU–GDO accounted for the spatial coherence of endmembers, which predominantly clustered in localized regions rather than being uniformly distributed across the scene. By employing a graph differential operator to capture spatial features, SCSU–GDO generated abundance maps that retained finer details and exhibited superior alignment with the Tricorder classification map (Figure 9). These results demonstrated the effectiveness of the proposed algorithm for real hyperspectral images.
To further evaluate computational efficiency, we report the average runtime (in seconds) of each algorithm on the Cuprite dataset in Table 3. As expected, methods with simpler regularization schemes tend to execute faster, while algorithms incorporating spatial or graph-based constraints typically require more computation. The proposed SCSU–GDO algorithm exhibits a moderate runtime relative to other structured unmixing methods, reflecting the additional overhead of superpixel segmentation, collaborative sparse modeling, and graph differential regularization. In terms of computational complexity, the main components of SCSU–GDO are (1) superpixel segmentation via SLIC, an efficient clustering algorithm whose time complexity is linear in the number of pixels; (2) localized sparse regression within each superpixel, which significantly reduces the problem size and enables efficient parallel or block-wise optimization; and (3) graph-based regularization using a sparse differential operator, which relies on sparse matrix computations and thus remains tractable. The entire optimization process is solved with the ADMM framework, which decomposes the model into sub-problems with closed-form solutions, further improving computational stability. These design choices allow SCSU–GDO to balance accuracy and efficiency: while slightly more expensive than baseline methods, its runtime remains acceptable, and the gains in unmixing performance justify the added complexity, demonstrating the practicality of the proposed algorithm for real hyperspectral applications.
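The ADMM splitting mentioned above can be illustrated on a stripped-down ℓ1-regularized regression sub-problem, min 0.5‖Ax − y‖² + λ‖x‖₁. This sketch omits the collaborative weighting, superpixel partitioning, and graph term of the actual model; function and parameter names are illustrative:

```python
import numpy as np

def admm_lasso(A, y, lam, rho=1.0, n_iter=200):
    """Minimal ADMM sketch: each sub-problem has a closed-form update
    (a cached linear solve, a soft-threshold, and a dual ascent step)."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Aty = A.T @ A, A.T @ y
    Q = np.linalg.inv(AtA + rho * np.eye(n))            # cached factor for the x-update
    for _ in range(n_iter):
        x = Q @ (Aty + rho * (z - u))                   # quadratic sub-problem
        v = x + u
        z = np.maximum(np.abs(v) - lam / rho, 0) * np.sign(v)  # soft-threshold
        u = u + x - z                                   # dual variable update
    return z
```

Because every update is closed-form, the per-iteration cost is dominated by the cached matrix-vector products, which is the source of the stability noted above.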
In addition to the Cuprite dataset, we also evaluated the proposed algorithm on the Urban hyperspectral image [46]. This dataset was acquired by the HYDICE sensor, and a subregion of 307 × 307 pixels with 210 spectral bands was selected for analysis. To reduce the impact of water absorption, several noisy bands were removed, resulting in 162 effective bands. The spectral library was constructed using the publicly available endmember signatures of six typical urban materials, which serve as the reference basis for sparse unmixing and comparison.
For the Urban dataset, six representative land-cover classes—Asphalt, Grass, Tree, Roof, Metal, and Dirt—were selected for evaluation based on the provided ground truth label map. Four typical classes—Asphalt, Grass, Roof, and Dirt—were selected for visual assessment. Figure 10 presents the corresponding abundance maps estimated by CLSUnSAL, SUnSAL–TV, S2WSU, SBGLSU, FoGTF-HU, and the proposed SCSU–GDO method. The hyperspectral image was cropped to a 120 × 120 region to focus on a spatially complex urban area. It is worth noting that, based on both the original image and the provided ground truth, the right-slanted road segment in the cropped region is not asphalt but rather an unpaved dirt road, as evidenced by its spectral characteristics and spatial context in the reference data. As illustrated in Figure 10, the proposed SCSU–GDO method produced abundance maps most consistent with this ground truth interpretation. In particular, for the Dirt endmember, the method accurately recovered the right-hand unpaved road segment without misclassifying it as asphalt, unlike several competing methods. For the Grass endmember, the central region was reconstructed with spectral abundances closely matching the ground truth distribution, effectively capturing subtle within-class variations. Similar improvements can be observed for the Roof endmember, where sharp building boundaries were preserved without sacrificing intra-class homogeneity. These results highlight the ability of SCSU–GDO to leverage superpixel-based collaborative sparsity and first-order graph differential regularization to model local spectral variability while preserving structural boundaries in complex urban scenes.

5. Conclusions

We proposed SCSU–GDO, a novel spectral–spatial sparse unmixing algorithm that integrates superpixel-driven local collaboration with graph differential spatial regularization. The core contribution lies in three aspects. First, by partitioning the HSI into homogeneous regions through SLIC superpixel segmentation, the model enforces sparsity constraints that adapt to localized spectral variations while preserving structural boundaries, with a weighting factor introduced to enhance row-wise sparsity without compromising spatial continuity. Second, instead of conventional Laplacian-based spatial constraints, we decompose the graph Laplacian into first-order differential operators via adaptive graph learning, achieving structure-aware regularization that simultaneously promotes smoothness within superpixel regions and preserves discontinuities across boundaries. Finally, ADMM is adopted to split the non-convex problem into tractable sub-problems with closed-form solutions, ensuring computational stability. Quantitative experiments on both synthetic and real hyperspectral datasets showed that the proposed method consistently achieved lower RMSE and higher SRE values than several state-of-the-art algorithms, indicating improved abundance estimation accuracy. The visual results further confirmed that SCSU–GDO effectively preserves spatial structures and material boundaries in abundance maps. Despite these promising results, the current framework involves several regularization parameters that significantly influence performance, and their selection relies on empirical tuning, which may hinder the practical adaptability of the method. In future work, we intend to simplify the model structure, reduce the number of hyperparameters, and develop more automatic, data-driven parameter selection strategies.

Author Contributions

All the authors made significant contributions to the work. K.Y., Z.Z. and Q.Y. designed this research study, analyzed the results, and performed the validation work. R.F. provided advice for the revision of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key R&D Program of China “Intergovernmental International Science and Technology Innovation Cooperation” under Grant 2025YFE0107100, the National Natural Science Foundation of China under Grant 42471430, the Guangxi Key Research and Development Program under Grant Guike AB24010220, Hubei Natural Science Foundation under Grant 2024AFB561, and Hunan Provincial Natural Science Foundation under Grant 2024JJ8353/2024JJ8342.

Data Availability Statement

The original contributions presented in this study are included in the article [23]. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Tong, Q.; Zhang, B.; Zhang, L. Current progress of hyperspectral remote sensing in China. J. Remote Sens. 2016, 20, 689–707.
2. Li, J.; Bioucas-Dias, M.; Plaza, A.; Liu, L. Robust Collaborative Nonnegative Matrix Factorization for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6076–6090.
3. Wang, L.; Zuo, B.; Le, Y.; Chen, Y.; Li, J. Penetrating remote sensing: Next-generation remote sensing for transparent earth. Innovation 2023, 4, 100519.
4. Marconi, S.; Weinstein, B.G.; Zou, S.; Bohlman, S.A.; Zare, A.; Singh, A.; Stewart, D.; Harmon, I.; Steinkraus, A.; White, E. Continental-scale hyperspectral tree species classification in the United States National Ecological Observatory Network. Remote Sens. Environ. 2022, 282, 113264.
5. Dumke, I.; Ludvigsen, M.; Ellefmo, S.L.; Søreide, F.; Johnsen, G.; Murton, B.J. Underwater Hyperspectral Imaging Using a Stationary Platform in the Trans-Atlantic Geotraverse Hydrothermal Field. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2947–2962.
6. Shimoni, M.; Haelterman, R.; Perneel, C. Hyperspectral Imaging for Military and Security Applications: Combining Myriad Processing and Sensing Techniques. IEEE Geosci. Remote Sens. Mag. 2019, 7, 101–117.
7. Tu, B.; Yang, X.; He, W.; Li, J.; Plaza, A. Hyperspectral Anomaly Detection Using Reconstruction Fusion of Quaternion Frequency Domain Analysis. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 8358–8372.
8. Zhang, J.; Zhang, X.; Jiao, L. Dual-View Hyperspectral Anomaly Detection via Spatial Consistency and Spectral Unmixing. Remote Sens. 2023, 15, 3330.
9. Buchholz, S.; Rajendran, G.; Rosenfeld, E.; Aragam, B.; Schölkopf, B.; Ravikumar, P. Learning linear causal representations from interventions under general nonlinear mixing. In Proceedings of the 37th International Conference on Neural Information Processing Systems, New Orleans, LA, USA, 10–16 December 2023; Volume 1968, pp. 45419–45462.
10. Settle, J.J.; Drake, N.A. Linear mixing and the estimation of ground cover proportions. Int. J. Remote Sens. 1993, 14, 1159–1177.
11. Hong, D.; Yokoya, N.; Chanussot, J.; Zhu, X. An augmented linear mixing model to address spectral variability for hyperspectral unmixing. IEEE Trans. Image Process. 2018, 28, 1923–1938.
12. Winter, M.E. N-FINDR: An algorithm for fast autonomous spectral end-member determination in hyperspectral data. Imaging Spectrom. V 1999, 3753, 266–275.
13. Boardman, J.W.; Kruse, F.A.; Green, R.O. Mapping target signatures via partial unmixing of AVIRIS data. In Proceedings of the Summaries of the Fifth Annual JPL Airborne Earth Science Workshop, Pasadena, CA, USA, 23–26 January 1995; Volume 1.
14. Nascimento, J.M.P.; Dias, J.M.B. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910.
15. Wang, L.; Lu, K.; Liu, P.; Ranjan, R.; Chen, L. IK-SVD: Dictionary Learning for Spatial Big Data via Incremental Atom Update. Comput. Sci. Eng. 2014, 16, 41–52.
16. Xu, M.; Zhang, L.; Du, B.; Zhang, L. An image-based endmember bundle extraction algorithm using reconstruction error for hyperspectral imagery. Neurocomputing 2016, 173, 397–405.
17. Borsoi, R.A.; Imbiriba, T.; Bermudez, J.C.M.; Richard, C. Deep generative models for library augmentation in multiple endmember spectral mixture analysis. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1831–1835.
18. Song, W.; Liu, P.; Wang, L. Sparse representation-based correlation analysis of non-stationary spatiotemporal big data. Int. J. Digit. Earth 2016, 9, 892–913.
19. Meerdink, S.K.; Hook, S.J.; Roberts, D.A.; Abbott, E. The ECOSTRESS spectral library version 1.0. Remote Sens. Environ. 2019, 230, 111196.
20. Guerri, M.F.; Distante, C.; Spagnolo, P.; Bougourzi, F.; Taleb-Ahmed, A. Deep learning techniques for hyperspectral image analysis in agriculture: A review. ISPRS Open J. Photogramm. Remote Sens. 2024, 23, 100062.
21. Zhang, S.; Li, J.; Liu, K.; Deng, C.; Liu, L.; Plaza, A. Hyperspectral unmixing based on local collaborative sparse regression. IEEE Geosci. Remote Sens. Lett. 2016, 13, 631–635.
22. Shi, Z.; Shi, T.; Zhou, M.; Xu, X. Collaborative sparse hyperspectral unmixing using l0 norm. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5495–5508.
23. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Sparse unmixing of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2014–2039.
24. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Collaborative sparse regression for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2013, 52, 341–354.
25. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Total variation spatial regularization for sparse hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4484–4502.
26. Zhong, Y.; Feng, R.; Zhang, L. Non-local sparse unmixing for hyperspectral remote sensing imagery. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2013, 7, 1889–1909.
27. Wang, R.; Li, H.C.; Liao, W.; Pižurica, A. Double reweighted sparse regression for hyperspectral unmixing. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium, Beijing, China, 10–15 July 2016; pp. 6986–6989.
28. Wang, R.; Li, H.C.; Liao, W.; Pižurica, A. Hyperspectral unmixing using double reweighted sparse regression and total variation. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1146–1150.
29. Zhang, S.; Li, J.; Li, H.; Deng, Z.; Plaza, A. Spectral–spatial weighted sparse regression for hyperspectral image unmixing. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3265–3276.
30. Dundar, T.; Ince, T. Sparse representation-based hyperspectral image classification using multiscale superpixels and guided filter. IEEE Geosci. Remote Sens. Lett. 2018, 16, 246–250.
31. Zhang, S.; Deng, C.; Li, J.; Wang, S.; Li, F.; Xu, C.; Plaza, A. Superpixel-guided sparse unmixing for remotely sensed hyperspectral imagery. In Proceedings of the IGARSS 2019–2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 2155–2158.
32. Borsoi, R.A.; Imbiriba, T.; Bermudez, J.C.M.; Richard, C. A fast multiscale spatial regularization for sparse hyperspectral unmixing. IEEE Geosci. Remote Sens. Lett. 2018, 16, 598–602.
33. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
34. Li, H.; Feng, R.; Wang, L.; Zhong, Y.; Zhang, L. Superpixel-based reweighted low-rank and total variation sparse unmixing for hyperspectral remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2020, 59, 629–647.
35. Ince, T. Superpixel-based graph Laplacian regularization for sparse hyperspectral unmixing. IEEE Geosci. Remote Sens. Lett. 2020, 19, 1–5.
36. Ince, T.; Dobigeon, N. Spatial-Spectral Multiscale Sparse Unmixing for Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 5511605.
37. Xu, L.; Yan, Q.; Xia, Y.; Jia, J. Structure extraction from texture via relative total variation. ACM Trans. Graph. (TOG) 2012, 31, 1–10.
38. Song, F.; Deng, S. Graph learning and denoising-based weighted sparse unmixing for hyperspectral images. Int. J. Remote Sens. 2023, 44, 428–451.
39. Krishnan, D.; Fattal, R.; Szeliski, R. Efficient preconditioning of laplacian matrices for computer graphics. ACM Trans. Graph. (TOG) 2013, 32, 1–15.
40. Song, F.; Deng, S. First-order graph trend filtering for sparse hyperspectral unmixing. IEEE Geosci. Remote Sens. Lett. 2023, 20, 5508505.
41. Rasti, B.; Koirala, B. SUnCNN: Sparse unmixing using unsupervised convolutional neural network. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
42. Hong, D.; Gao, L.; Yao, J.; Yokoya, N.; Chanussot, J.; Heiden, U.; Zhang, B. Endmember-guided unmixing network (EGU-Net): A general deep learning framework for self-supervised hyperspectral unmixing. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6518–6531.
43. Bioucas-Dias, J.M.; Figueiredo, M.A.T. Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing. In Proceedings of the 2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Reykjavik, Iceland, 14–16 June 2010; pp. 1–4.
44. Zhang, Z.; Xu, Y.; Yang, J.; Li, X.; Zhang, D. A survey of sparse representation: Algorithms and applications. IEEE Access 2015, 3, 490–530.
45. Wang, Y.; Sharpnack, J.; Smola, A.J.; Tibshirani, R.J. Trend filtering on graphs. J. Mach. Learn. Res. 2016, 17, 1–41.
46. Zhu, F. Hyperspectral Unmixing: Ground Truth Labeling, Datasets, Benchmark Performances and Survey. arXiv 2017, arXiv:1708.05125.
Figure 1. Flowchart of the SCSU–GDO algorithm.
Figure 2. SLIC for hyperspectral image. (a) Original image; (b) segmentation image.
Figure 3. The spectra of the DC1 (5 endmembers) (left) and DC2 (9 endmembers) (right).
Figure 4. True fractional abundance of simulated DC1.
Figure 5. True fractional abundance of simulated DC2.
Figure 6. Estimated abundance plots of endmembers 2, 3, and 4 in DC1 with SNR = 30 dB.
Figure 7. Estimated abundance plots of endmembers 2, 3, and 4 in DC2 with SNR = 30 dB.
Figure 8. SRE of DC1 and DC2 as λ1 and λ2 vary under SNR = 30 dB.
Figure 9. Three representative mineral abundance maps obtained from the Cuprite dataset using the comparison algorithms and the proposed algorithm.
Figure 10. Four representative endmember abundance maps obtained from the Urban dataset using the comparison algorithms and the proposed algorithm.
Table 1. SREs (dB) obtained by different unmixing algorithms with simulated DC1 and DC2. The regularization parameters used for each result are given in parentheses.

| Dataset | SNR (dB) | CLSUnSAL | SUnSAL–TV | S2WSU | SBGLSU | FoGTF-HU | SCSU–GDO |
|---|---|---|---|---|---|---|---|
| DC1 | 20 | 1.45 (λ = 9 × 10⁻¹) | 4.54 (λ = 5 × 10⁻³, λtv = 1 × 10⁻²) | 6.21 (λ = 1 × 10⁻²) | 7.64 (λ = 1 × 10⁻¹, λlap = 5) | 17.80 (λ1 = 1 × 10⁻⁵, λ2 = 1 × 10⁻⁵) | **23.15** (λ1 = 1 × 10⁻³, λ2 = 1 × 10⁻⁴) |
| DC1 | 30 | 6.80 (λ = 6 × 10⁻¹) | 7.48 (λ = 5 × 10⁻⁴, λtv = 1 × 10⁻²) | 7.72 (λ = 1 × 10⁻²) | 27.27 (λ = 1 × 10⁻³, λlap = 100) | 32.67 (λ1 = 1 × 10⁻⁵, λ2 = 1 × 10⁻⁶) | **44.60** (λ1 = 1 × 10⁻², λ2 = 1 × 10⁻⁵) |
| DC1 | 40 | 7.36 (λ = 6 × 10⁻¹) | 15.83 (λ = 5 × 10⁻⁶, λtv = 5 × 10⁻⁴) | 25.53 (λ = 5 × 10⁻⁵) | 34.73 (λ = 1 × 10⁻³, λlap = 100) | 42.07 (λ1 = 1 × 10⁻⁵, λ2 = 1 × 10⁻⁶) | **49.37** (λ1 = 1 × 10⁻³, λ2 = 1 × 10⁻⁶) |
| DC2 | 20 | 2.28 (λ = 7 × 10⁻¹) | 10.57 (λ = 1 × 10⁻², λtv = 1 × 10⁻²) | 10.23 (λ = 5 × 10⁻²) | 15.51 (λ = 5 × 10⁻², λlap = 20) | 17.24 (λ1 = 5 × 10⁻⁴, λ2 = 5 × 10⁻⁴) | **20.05** (λ1 = 1 × 10⁻², λ2 = 1 × 10⁻³) |
| DC2 | 30 | 5.22 (λ = 1 × 10⁻²) | 16.58 (λ = 1 × 10⁻², λtv = 5 × 10⁻³) | 20.15 (λ = 5 × 10⁻²) | 20.33 (λ = 5 × 10⁻², λlap = 1 × 10⁻¹) | 22.83 (λ1 = 5 × 10⁻⁵, λ2 = 5 × 10⁻⁵) | **23.08** (λ1 = 5 × 10⁻², λ2 = 1 × 10⁻⁴) |
| DC2 | 40 | 13.38 (λ = 1 × 10⁻²) | 17.51 (λ = 1 × 10⁻², λtv = 1 × 10⁻³) | 24.98 (λ = 5 × 10⁻⁴) | 24.31 (λ = 5 × 10⁻², λlap = 1 × 10⁻¹) | 25.43 (λ1 = 1 × 10⁻³, λ2 = 1 × 10⁻⁵) | **27.95** (λ1 = 5 × 10⁻², λ2 = 1 × 10⁻⁵) |

The numbers in bold represent the optimal accuracy.
Table 2. RMSEs obtained by different unmixing algorithms with simulated DC1 and DC2.

| Dataset | SNR (dB) | CLSUnSAL | SUnSAL–TV | S2WSU | SBGLSU | FoGTF-HU | SCSU–GDO |
|---|---|---|---|---|---|---|---|
| DC1 | 20 | 2.92 × 10⁻² | 2.05 × 10⁻² | 1.69 × 10⁻² | 1.43 × 10⁻² | 4.4 × 10⁻³ | **2.404 × 10⁻³** |
| DC1 | 30 | 1.58 × 10⁻² | 1.46 × 10⁻² | 1.42 × 10⁻² | 1.50 × 10⁻³ | 8 × 10⁻⁴ | **2.03 × 10⁻⁴** |
| DC1 | 40 | 1.48 × 10⁻² | 5.6 × 10⁻³ | 1.8 × 10⁻³ | 6.3 × 10⁻⁴ | 2.7 × 10⁻⁴ | **1.17 × 10⁻⁴** |
| DC2 | 20 | 4.19 × 10⁻² | 1.61 × 10⁻² | 1.68 × 10⁻² | 9.1 × 10⁻³ | 7.5 × 10⁻³ | **5.419 × 10⁻³** |
| DC2 | 30 | 2.99 × 10⁻² | 8.1 × 10⁻³ | 5.4 × 10⁻³ | 5.2 × 10⁻³ | 3.9 × 10⁻³ | **3.739 × 10⁻³** |
| DC2 | 40 | 1.17 × 10⁻² | 7.3 × 10⁻³ | 3.1 × 10⁻³ | 3.3 × 10⁻³ | 2.9 × 10⁻³ | **2.183 × 10⁻⁴** |

The numbers in bold represent the optimal accuracy.
Table 3. Runtime (in seconds) of different unmixing algorithms on the Cuprite dataset.

| | CLSUnSAL | SUnSAL–TV | S2WSU | SBGLSU | FoGTF-HU | SCSU–GDO |
|---|---|---|---|---|---|---|
| Runtime (s) | 214.3942 | 2.9086 × 10³ | 368.2366 | 426.4404 | 401.4723 | 411.4796 |

Share and Cite

MDPI and ACS Style

Yang, K.; Zhao, Z.; Yang, Q.; Feng, R. SCSU–GDO: Superpixel Collaborative Sparse Unmixing with Graph Differential Operator for Hyperspectral Imagery. Remote Sens. 2025, 17, 3088. https://doi.org/10.3390/rs17173088
