Article

A Novel Sub-Abundance Map Regularized Sparse Unmixing Framework Based on Dynamic Abundance Subspace Awareness

1 The School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China
2 The Image and Intelligence Information Processing Innovation Team of the National Ethnic Affairs Commission of China, Yinchuan 750021, China
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(23), 3826; https://doi.org/10.3390/math13233826
Submission received: 3 November 2025 / Revised: 25 November 2025 / Accepted: 26 November 2025 / Published: 28 November 2025

Abstract

Sparse unmixing (SU) has become a research hotspot in hyperspectral image (HSI) analysis in recent years due to its interpretable physical mechanisms and engineering practicality. However, traditional SU methods face two core bottlenecks: first, the high computational complexity of the abundance matrix inversion severely limits algorithmic efficiency; second, the inherent challenges posed by large-scale, highly coherent spectral libraries hinder improvements in unmixing accuracy. To overcome these limitations, this study proposes a novel sub-abundance map regularized sparse unmixing (SARSU) framework based on dynamic abundance subspace awareness. Specifically, we first develop an intelligent spectral atom selection strategy that employs a dynamic activity evaluation mechanism to quantify, in real time, the participation contribution of spectral library atoms during the unmixing process. This enables adaptive selection of critical subsets to construct active subspace abundance maps, effectively mitigating spectral redundancy interference. Second, we integrate weighted nuclear norm regularization based on sub-abundance maps into the model, mining latent low-rank structures within spatial distribution patterns to significantly enhance the spatial fidelity of unmixing results. Additionally, a multi-directional neighborhood-aware dual total variation (DTV) regularizer is designed, which enforces spatial consistency constraints between adjacent pixels through a four-directional (horizontal, vertical, diagonal, and back-diagonal) differential penalty mechanism, ensuring that abundance distributions comply with the physical diffusion laws of ground objects. Finally, to efficiently solve the proposed objective model, an optimization algorithm based on the Alternating Direction Method of Multipliers (ADMM) is developed.
Comparative experiments conducted on two simulated datasets and four real hyperspectral benchmark datasets, alongside comparisons with state-of-the-art methods, validated the efficiency and superiority of the proposed approach.

1. Introduction

With the rapid development of hyperspectral imaging technology, hyperspectral remote sensing images have been widely applied in environmental monitoring [1], target detection [2], mineral exploration [3], and other fields. However, due to the insufficient spatial resolution and the spatial complexity of the distribution of materials, there are numerous mixed pixels in hyperspectral images (HSIs) [4]. The existence of mixed pixels hinders the refined application of HSIs [5]. To address the spectral mixing problem and effectively identify the spectral components and their proportions in each mixed pixel, hyperspectral unmixing (HU) technology has emerged and become an important step in the preprocessing of hyperspectral images. Its task is to estimate the spectral reflectance of pure materials (i.e., endmembers) and their proportion (i.e., fractional abundances) from the mixed pixels [6].
Up to now, based on different spectral mixing mechanisms, researchers have proposed the Linear Mixing Model (LMM) [7] and the Nonlinear Mixing Model (NLMM) [8]. The LMM assumes that the mixing of materials occurs at the macroscopic scale, and the incident solar photons interact with only one substance. Attributed to its computational efficiency, analytical tractability, and intuitive physical interpretability, the LMM has been extensively utilized in HU. Therefore, this paper carries out the unmixing research based on the LMM.
In the past few decades, researchers have proposed many unmixing methods based on the LMM, which can generally be classified into geometrical, statistical, sparse regression, and emerging methods (such as methods based on Non-negative Tensor Factorization (NTF) and methods based on Deep Learning (DL)). Specifically, geometrical methods are rooted in convex geometry principles and are bifurcated into two primary categories: pure pixel assumption frameworks and non-pure pixel assumption frameworks. Pure pixel methods demand the presence of at least one pure spectral pixel for each endmember within the HSI, where endmembers are identified as the vertices of the convex hull (simplex) in the HSI feature space. Representative algorithms include subspace projection [9] and maximum simplex volume [10]. In contrast, non-pure pixel methods are anchored in the minimum simplex theory [11,12], where endmembers are theoretically estimated by solving for the optimal simplex vertices. This attribute enables the minimum simplex approach to effectively address scenarios with highly mixed spectral data, where conventional pure pixel methods often encounter limitations.
Statistical methods are based on probability and mathematical statistics theory and do not require the pure pixel assumption. The NMF method is the most representative among them. NMF decomposes the non-negative hyperspectral data matrix into the product of a basis matrix and a coefficient matrix, corresponding to endmembers and abundances, respectively [13,14,15,16]. However, since the NMF model is based on mathematical statistics and optimization, it may produce virtual endmembers without physical meaning. Moreover, due to the non-convexity of the NMF model, it cannot guarantee the global optimal solution. Non-negative Tensor Factorization (NTF) originates from tensor decomposition theory and can be divided into four types according to the decomposition method: Canonical Polyadic Decomposition (CPD) [17], Tucker Decomposition (TD) [18], Block Term Decomposition (BTD) [19,20,21], and Matrix–Vector NTF (MV-NTF). MV-NTF decomposes a tensor into the outer product of a matrix and a vector [22]. Due to its clear physical meaning, it is widely used in unmixing. DL methods have strong nonlinear feature learning capabilities, can mine deep information in HSI, and can overcome the limitations of single-layer information [23,24]. However, DL methods require substantial training and computational cost, so they have shortcomings in timeliness.
In recent years, sparse unmixing (SU) has attracted widespread attention as a new semi-supervised method, which is mainly inspired by sparse representation theory. Iordache et al. [25] applied the sparse regression model to HSI unmixing and used $\ell_{1,1}$ regularization to enhance the sparsity of the representation coefficients (i.e., abundances), achieving effective abundance estimation (SUnSAL). Subsequently, the Total Variation (TV) regularizer was introduced into SU to improve the piecewise smoothness of the estimated abundance maps [26]. Inspired by this, many SU models using TV regularization have emerged [27,28,29]. Although TV-based methods can effectively enhance the piecewise smoothness of abundance maps, they increase computational costs. To address this issue, researchers proposed multi-scale methods to replace TV regularization. For example, Borsoi et al. [30] proposed a new multi-scale spatial regularization SU method based on segmentation algorithms, decomposing the unmixing problem into two simple processes in the approximate image domain and the original image domain. Building on this, Ince et al. combined the advantages of coupled spectral–spatial double-weighted SU [31], used the coarse abundances obtained from unmixing in the approximate image domain to design a spatial–spectral weight matrix, and employed this weight matrix in a new multi-scale regularization, achieving effective unmixing results with less time consumption [32]. However, multi-scale unmixing methods are prone to over-smoothing and local image blurring, and they are not as effective as TV regularization in preserving HSI detail features and edge information.
TV regularization excels in uncovering the spatial information within HSI and enhancing the piecewise smoothness of abundances by calculating differences along horizontal and vertical orientations. However, by analyzing the neighborhood structure of HSI, it can be seen that calculating differences only in the horizontal and vertical directions does not fully utilize the neighborhood information of pixels. For example, in addition to horizontal and vertical neighbors, the diagonal and back-diagonal directions are also important spatial structural information for a pixel. Traditional TV ignores this spatial structural prior, resulting in its inability to fully capture the local neighborhood structure of pixels. To address this deficiency, we propose a dual total variation (DTV) regularization method that incorporates differences in the horizontal, vertical, diagonal, and back-diagonal directions of pixels into the model to further encourage the piecewise smoothness of abundances. Similar ideas have also been studied and applied in LF image watermarking [33] and human parsing [34] to enhance structural preservation and spatial consistency.
In addition, methods based on graph Laplacian regularization [35] and joint sparse blocks and low-rank representations [36] have also shown excellent performance in preserving spatial correlations, resulting in highly effective unmixing outcomes. However, in the above methods, constraints are applied to the abundance matrix during SU. In fact, the structure of the abundance matrix is different from the original spatial structure of HSI, while all abundance maps (i.e., each row of the abundance matrix) are consistent with the spatial structure of HSI. Therefore, constraining each abundance map directly is more efficient than applying regularizations to the abundance matrix. However, the abundance matrix is overloaded with an excessive number of abundance maps, and the vast majority of these maps fail to make meaningful contributions to unmixing. In addition, the spectral library has a large scale and high coherence, which hinders the effective estimation of each abundance map. To address this issue, researchers have proposed some methods [37,38,39] to fine-tune the library atoms before unmixing to reduce the size of the spectral library. However, these methods all depend on the quality of the original hyperspectral data because they all learn active atoms from the given HSI. Therefore, these methods often fail to learn the correct active atoms when HSI is severely contaminated by noise. To solve this problem, Shen et al. proposed a layered sparse regression method [40] called LSU. This method decomposes SU into a multi-layer process in which each layer fine-tunes the library atoms. Although LSU can effectively overcome the noise influence of HSI, the design of multi-layer also brings considerable computational load to the model. In view of this, Qu et al. [41] proposed the NeSU-LP method, which divided the SU into two independent and continuous sub-processes, and completed the spectral library pruning at one time to obtain the ideal unmixing effect. 
However, this method cannot avoid the risk of incorrectly pruning active library atoms. Unlike LSU and NeSU-LP, the FaSUn method [42] considers a contribution matrix of the spectral library to adaptively update the activity of atoms in the library, rather than directly adjusting the spectral library. The combination of the spectral library and the contribution matrix is used to achieve semi-supervised unmixing. Although this approach avoids the noise influence of HSI and the cumbersome process of library-atom adjustment, neither the contribution matrix nor its combination with the spectral library or abundance matrix has a clear physical meaning. Moreover, solving the contribution matrix and abundance matrix together introduces a non-convex problem.
Different from the above methods, we propose a sub-abundance map regularization SU method that evaluates the activity of all abundance maps and constrains only the most active sub-abundance maps. This approach differs from other methods that directly apply regularization based on SUnSAL and also from semi-supervised methods with unclear physical meanings. Instead, it precisely imposes constraints on the active abundance maps and achieves precise regularization at the abundance map level while reducing the impact of the large scale and high coherence of the spectral library on SU and avoiding the potential risk of losing important library atoms due to spectral library pruning, thereby improving unmixing quality and efficiency. In the proposed model, we use a new singular value threshold-based discrimination method to determine the number of active sub-abundance maps in HSI. At the same time, we introduce weighted nuclear norm regularization and the designed DTV regularization to constrain the low rank and piecewise smoothness of the sub-abundance maps.
The main contributions of this paper are as follows:
  • We propose a novel sub-abundance map regularized sparse unmixing (SARSU) framework based on dynamic abundance subspace awareness. This method introduces an intelligent spectral atom selection strategy that employs a dynamically designed activity evaluation mechanism to quantify the participation contribution of spectral library atoms during the unmixing process. By adaptively selecting key subsets, it constructs active subspace abundance maps to effectively mitigate spectral redundancy interference.
  • A weighted nuclear norm regularization based on sub-abundance maps was developed to deeply mine potential low-rank structures within spatial distribution patterns, significantly improving the spatial accuracy of unmixing results. Furthermore, a multi-directional neighborhood-aware dual total variation (DTV) regularizer was introduced into the model. Through a four-directional (horizontal, vertical, diagonal, and back-diagonal) differential penalty mechanism, it ensures spatial consistency between adjacent pixels, ensuring that abundance distributions comply with the physical diffusion laws of ground objects.
  • An efficient optimization algorithm based on the Alternating Direction Method of Multipliers (ADMM) was developed. Comprehensive experiments were conducted on two simulated datasets and four real hyperspectral benchmark datasets. The experimental results underwent rigorous analysis, and critical algorithm issues were thoroughly discussed. Through comparative performance analysis with state-of-the-art methods, the effectiveness and superiority of the proposed approach were validated.
The remainder of this paper is organized as follows. The proposed method is presented in Section 2. Section 3 describes our experimental results with simulated hyperspectral datasets. Section 4 describes our experiments with real hyperspectral datasets. Finally, some problems concerning our method are discussed in Section 5, and Section 6 provides some conclusions and future research directions.

2. Proposed Model

2.1. Sparse Unmixing

Throughout this paper, all matrices appearing are denoted by uppercase letters, and vectors and scalars are denoted by lowercase letters.
The sparse unmixing assumes that an observed pixel spectrum in an HSI can be produced by a linear combination of endmember signatures and their corresponding abundances. Based on the sparse unmixing, the HSI dataset can be expressed as
$$X = AU + G,$$
where $X \in \mathbb{R}_+^{l \times n}$ refers to the observed HSI matrix, with $l$ bands and $n$ pixels. Let $A \in \mathbb{R}_+^{l \times d}$ be a large spectral library, where $d$ is the number of spectral signatures in $A$, and $U \in \mathbb{R}_+^{d \times n}$ denotes the abundance matrix corresponding to library $A$ for the observed data $X$. $G \in \mathbb{R}^{l \times n}$ is the error.
In general, two constraints—the abundance non-negative constraint (ANC) and the abundance sum-to-one constraint (ASC)— are added to restrict the SU model and can be formulated as
$$U \geq 0, \quad \mathbf{1}_d^T U = \mathbf{1}_n^T,$$
where $\mathbf{1}_d$ and $\mathbf{1}_n$ represent all-one vectors of size $d$ and size $n$, respectively.
It is worth mentioning that ASC is strongly criticized in the real scenario [25], so we only add ASC in the synthetic dataset experiments and do not use it in the real dataset experiments.
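The linear observation model and the two abundance constraints above can be sketched in NumPy. This is a toy illustration under assumed dimensions (the values of $l$, $d$, and $n$ below are arbitrary, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only): l bands, d library atoms, n pixels.
l, d, n = 50, 20, 100

A = np.abs(rng.standard_normal((l, d)))   # non-negative spectral library

# Abundances satisfying ANC (U >= 0) and ASC (each column sums to one):
U = rng.random((d, n))
U /= U.sum(axis=0, keepdims=True)

G = 0.01 * rng.standard_normal((l, n))    # additive error term
X = A @ U + G                             # linear mixing model X = AU + G

assert np.all(U >= 0)                     # ANC holds
assert np.allclose(U.sum(axis=0), 1.0)    # ASC holds
```

As noted above, in practice the ASC row is only enforced on synthetic data; the normalization step would simply be dropped for real scenes.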

2.2. Sub-Abundance Map Regularization

In an HSI, the change between materials should be gradual, with only a minimal number of pixels at the edges exhibiting abrupt changes. Consequently, the entire HSI should exhibit piecewise smoothness, a characteristic that should consistently manifest in the abundance map. Furthermore, based on the material distribution patterns observed in real-world scenarios, the distribution of materials within an HSI tends to concentrate in specific areas. This implies that the abundance maps should possess the property of low rank. The piecewise smoothness and low rank of the abundance map offer valuable prior information for precise abundance estimation, ensuring the preservation of both the spatial characteristics of the image and its local similarity. To achieve this, it is both feasible and effective to incorporate a regularization term into the abundance map, thereby constraining its piecewise smoothness and low rank. By combining this with $\ell_{1,1}$ regularization, an SU problem can be formulated, incorporating both low-rank and piecewise smoothness constraints on the abundance maps, as follows:
$$\min_U \ \frac{1}{2}\|Y - AU\|_F^2 + \alpha \|U\|_{1,1} + \beta\,\varphi(U) + \lambda\,\psi(U) \quad \text{s.t.} \ U \geq 0,$$
where $\|\cdot\|_F$ is the Frobenius norm, $\|U\|_{1,1} = \sum_{j=1}^{n} \|u_j\|_1$, where $u_j$ denotes the $j$-th column of the abundance matrix, and $\varphi(U)$ and $\psi(U)$ represent low-rank and piecewise smoothness regularization, respectively. At this juncture, we neglect the ASC constraint. In model (3), the first term serves as the reconstruction term, the second term introduces sparse regularization, and the third and fourth terms represent the low-rank regularization and the piecewise smooth regularization, respectively.
However, for SU, the large-scale spectral library leads to a large number of abundance maps in the abundance matrix, most of which are near-zero abundance maps that do not contribute to the unmixing. Constraining them one by one not only brings a lot of unnecessary computational overhead but also weakens the model’s attention to the abundance maps that contribute to the unmixing, thus limiting the unmixing efficiency. Therefore, we propose sub-abundance map regularization, which only imposes piecewise smooth and low-rank regularization constraints on the abundance maps of the ground objects in the scene (i.e., the contributing abundance maps) to reduce the computational overhead and improve the unmixing accuracy. Specifically, the SU model based on sub-abundance map regularization can be expressed as follows:
$$\min_U \ \frac{1}{2}\|Y - AU\|_F^2 + \alpha \|U\|_{1,1} + \beta \sum_{i=1}^{R} \varphi(\hat{U}_i) + \lambda \sum_{i=1}^{R} \psi(\hat{U}_i) \quad \text{s.t.} \ U \geq 0.$$
The first term of model (4) is the reconstruction term, and the second term is the sparsity regularization, both of which act on the abundance matrix. The third and fourth terms are the low-rank regularization term and the piecewise smooth regularization term acting on the sub-abundance maps, respectively. $\{\hat{U}_i\}_{i=1}^{R}$ is the set of matrices formed by unfolding the $R$ contributing sub-abundance maps separately along the spatial dimension. In order to determine the value of $R$ and accurately identify the contributing abundance maps, we use the information of the clean HSI and the sparsity of each abundance map as prior knowledge for identification. Specifically, the value of $R$ is determined by the following equation:
$$R = \sum_{i=1}^{p} \mathbb{1}(\sigma_i > \varepsilon),$$
where $\{\sigma_i\}_{i=1}^{p}$ are the diagonal entries of the singular-value matrix $\Sigma$, and $p$ represents the number of non-zero singular values in $\Sigma$. $\Sigma$ is obtained by the singular value decomposition of the denoised HSI, specifically
$$[UU, \Sigma, VV] = \mathrm{SVD}(\mathrm{estnoise}(Y)),$$
where $UU$ and $VV$ represent the left singular matrix and the transpose of the right singular matrix, respectively.
Since denoised HSI has a low-rank equivalent to the number of endmembers [43], it is theoretically feasible to take the number of large singular values of denoised HSI as the number of endmembers (that is, the number of contributing abundance maps) and then constrain the number of sub-abundance maps. It is worth mentioning that the value of ε determines the final number obtained in Equation (5) to a certain extent, so its value is very important. Since the large and small values in the singular value matrix have huge differences and there are jump changes, we choose ε = 1 × 10−4, which has been proved to achieve ideal results in the experiment, and we will prove it in Section 5.
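The singular-value counting rule above admits a very short NumPy sketch. The function name and the exactly low-rank stand-in for the denoised HSI are hypothetical; the threshold mirrors the paper's choice of $\varepsilon = 1 \times 10^{-4}$:

```python
import numpy as np

def estimate_num_active_maps(Y_clean, eps=1e-4):
    """Count singular values above eps: the rule R = sum_i 1(sigma_i > eps)."""
    sigma = np.linalg.svd(Y_clean, compute_uv=False)
    return int(np.sum(sigma > eps))

# Hypothetical stand-in for estnoise(Y): an exactly rank-3 'denoised' matrix.
rng = np.random.default_rng(1)
Y_clean = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 200))

R = estimate_num_active_maps(Y_clean)
assert R == 3   # the three dominant singular values survive the threshold
```

Because the large and small singular values of a denoised HSI differ by orders of magnitude, the count is insensitive to the exact threshold in a wide range around $10^{-4}$.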
In each iteration, the contribution of each abundance map is evaluated according to its sparsity, thus determining the sub-abundance maps that need to be constrained. Specifically, the selection of sub-abundance maps is determined by the following equation:
$$\{\hat{U}_i\}_{i=1}^{R} = \Big\{ \mathrm{Unfold}\Big( \mathrm{sort}\Big( \Big\{ \sum_{j=1}^{n} \mathrm{abs}(U_{i,j}) \Big\}_{i=1}^{d} \Big) \Big)_i \Big\}_{i=1}^{R},$$
where $\mathrm{abs}(\cdot)$ takes the absolute value of all elements of each abundance map, $\mathrm{sort}(\cdot)$ sorts the resulting activity vector of the abundance maps from largest to smallest so that the $R$ least sparse (that is, most contributing) abundance maps are selected, and $\mathrm{Unfold}(\cdot)$ unfolds each abundance map into a matrix according to the spatial dimension.
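The selection rule above (score each row of $U$ by its $\ell_1$ mass, sort, keep the top $R$, unfold spatially) can be sketched as follows; the function name and toy sizes are illustrative:

```python
import numpy as np

def select_sub_abundance_maps(U, R, h, w):
    """Pick the R rows of U with the largest l1 mass; unfold each into an h-by-w map."""
    activity = np.abs(U).sum(axis=1)           # sum_j |U_{i,j}| per abundance map
    idx = np.argsort(activity)[::-1][:R]       # indices of the R most active maps
    maps = [U[i].reshape(h, w) for i in idx]   # unfold along the spatial dimensions
    return idx, maps

# Toy abundance matrix: 5 maps over a 4x6 image; rows 2 and 0 carry all the mass.
rng = np.random.default_rng(2)
U = np.zeros((5, 24))
U[0] = 0.5 * rng.random(24)
U[2] = 1.0 + rng.random(24)

idx, maps = select_sub_abundance_maps(U, R=2, h=4, w=6)
assert list(idx) == [2, 0]        # most active map first
assert maps[0].shape == (4, 6)    # each selected map is unfolded spatially
```

Re-evaluating `activity` at every iteration is what makes the subspace awareness dynamic: the selected index set can change as $U$ is updated.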
By adopting the strategy of sub-abundance map regularization, we not only save computational cost but also avoid the errors that may be caused by directly pruning the spectral library, yielding better fault tolerance. It can be observed that in model (4), the first two terms form a classical sparse regression problem, while the third and fourth terms are regularization terms acting directly on the sub-abundance maps; the two parts are relatively independent yet unified with each other. This makes each iteration in the unmixing process actually focus on the optimization of the sub-abundance maps, which can effectively reduce the impact of the large scale and high coherence of the spectral library on the unmixing. In addition, model (4) also has good generalization ability. In the extreme case, when $R = 0$, model (4) degenerates to the original SU model, and when $R = d$, model (4) degenerates to the same SU model as model (3), which regularizes all abundance maps.

2.3. Low-Rank Dual TV SU Model with Sub-Abundance Map Regularization

For model (4), we next specifically identify $\varphi(\hat{U}_i)$ and $\psi(\hat{U}_i)$. Weighted nuclear norm regularization has been proved to have good low-rank properties and can be used as an effective convex approximation of the rank function [44], so it has attracted extensive attention from researchers. Here, we use weighted nuclear norm regularization to characterize the low-rank properties of the sub-abundance maps. Specifically, $\varphi(\hat{U}_i)$ is defined as follows:
$$\varphi(\hat{U}_i) = \|\hat{U}_i\|_{W_i,*} = \sum_{j=1}^{p} w_{i,j}\,\sigma_j(\hat{U}_i),$$
where $\sigma_j(\hat{U}_i)$ represents the $j$-th nonzero singular value of $\hat{U}_i$, $W_i$ is the diagonal matrix of all $w_{i,j}$, and $w_{i,j}$ is the weight of the $j$-th nonzero singular value, which is defined as follows:
$$w_{i,j} = \frac{1}{\sigma_j(\hat{U}_i) + 10^{-5}}.$$
Next, we construct $\psi(\hat{U}_i)$. Recently, TV has been widely used in unmixing due to its excellent performance. By penalizing the differences between adjacent pixels in the image, TV promotes piecewise smoothness while retaining edge details well, which effectively removes the influence of noise in the image. However, given a pixel $x(i,j)$, where $(i,j)$ is its coordinate in the spatial domain, its first-order differences involve the four pixels $x(i,j)$, $x(i+1,j)$, $x(i,j+1)$, and $x(i+1,j+1)$. When calculating the nearest-neighbor gradient of a pixel, there are four gradients to choose from, namely, horizontal, vertical, diagonal, and back-diagonal. However, traditional TV only considers the differences in the horizontal and vertical directions and ignores the intrinsic information between adjacent pixels in the diagonal and back-diagonal directions, so the spatial neighborhood information of pixels is not fully explored and utilized. To solve this problem, we propose DTV regularization, which incorporates the diagonal and back-diagonal differences into the TV operator so as to make fuller use of spatial neighborhood information. The mathematical form of DTV is given below. Let $X$ be an $m \times n$ discrete image. The anisotropic DTV based on the $\ell_1$ norm is computed as follows:
$$\begin{aligned} \mathrm{DTV}(X) = \|X\|_{TV} + \|X\|_{TVD} = {} & \sum_{i=1}^{m-1}\sum_{j=1}^{n-1} \big( |x(i+1,j)-x(i,j)| + |x(i,j+1)-x(i,j)| \big) \\ & + \sum_{i=1}^{m-1} |x(i+1,n)-x(i,n)| + \sum_{j=1}^{n-1} |x(m,j+1)-x(m,j)| \\ & + \sum_{i=1}^{m-1}\sum_{j=1}^{n-1} \big( |x(i+1,j+1)-x(i,j)| + |x(i+1,j)-x(i,j+1)| \big). \end{aligned}$$
For ease of presentation, we define $\nabla = [\nabla_1;\, \nabla_2]$, where $\nabla_1$ and $\nabla_2$ denote the linear difference operators associated with $\|\cdot\|_{TV}$ and $\|\cdot\|_{TVD}$, respectively. In order to distinguish the different importance of horizontal and vertical differences from diagonal and back-diagonal differences, we choose to bound them separately with different regularization parameters.
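The DTV computation above can be sketched directly in NumPy (a toy illustration; the image sizes are arbitrary):

```python
import numpy as np

def dtv(X):
    """Anisotropic dual TV: horizontal/vertical plus diagonal/back-diagonal l1 differences."""
    tv = (np.abs(np.diff(X, axis=0)).sum()        # vertical:   |x(i+1,j) - x(i,j)|
          + np.abs(np.diff(X, axis=1)).sum())     # horizontal: |x(i,j+1) - x(i,j)|
    tvd = (np.abs(X[1:, 1:] - X[:-1, :-1]).sum()      # diagonal:      |x(i+1,j+1) - x(i,j)|
           + np.abs(X[1:, :-1] - X[:-1, 1:]).sum())   # back-diagonal: |x(i+1,j) - x(i,j+1)|
    return tv + tvd

# A constant image has zero DTV; a single bright pixel is penalized in every direction.
flat = np.ones((4, 4))
assert dtv(flat) == 0.0

step = np.zeros((2, 2))
step[1, 1] = 1.0
assert dtv(step) == 3.0   # one vertical, one horizontal, and one diagonal difference
```

Note that the boundary sums in the $\|X\|_{TV}$ term are absorbed here by `np.diff`, which already covers the last row and column.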
Finally, in view of the excellent performance of weighted $\ell_{1,1}$ regularization in promoting the row sparsity of the abundance matrix [45], we replace the $\ell_{1,1}$ regularization in model (4) with weighted $\ell_{1,1}$ regularization to enhance the utilization of spectral information in the unmixing process. Substituting Equations (8) and (10) into model (4), the final model is obtained as follows:
$$\min_U \ \frac{1}{2}\|Y - AU\|_F^2 + \alpha \|W_s \otimes U\|_{1,1} + \beta \sum_{i=1}^{R} \|\hat{U}_i\|_{W_i,*} + \lambda_1 \sum_{i=1}^{R} \|\hat{U}_i\|_{TV} + \lambda_2 \sum_{i=1}^{R} \|\hat{U}_i\|_{TVD} \quad \text{s.t.} \ U \geq 0,$$
where $\otimes$ represents element-wise multiplication, $W_s = [w_{s,1}, w_{s,2}, \ldots, w_{s,d}]^T \times \mathrm{ones}(\mathrm{size}(U))$ is the weight matrix of the sparse regularization, the $\mathrm{ones}(\cdot)$ operator constructs a matrix whose entries are all ones, and $w_{s,i}$ is defined as
$$w_{s,i} = \frac{1}{\left( \sum_{j=1}^{n} U_{i,j}^2 \right)^{1/2} + 10^{-5}}.$$
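The sparse weight matrix $W_s$ can be formed as in the following sketch (the function name is illustrative); rows with small $\ell_2$ norm receive large weights and are pushed toward zero by the weighted $\ell_{1,1}$ penalty:

```python
import numpy as np

def sparse_weights(U, eps=1e-5):
    """Row-wise weights 1/(||U_i||_2 + eps), broadcast to the shape of U."""
    row_norms = np.sqrt((U ** 2).sum(axis=1))
    w = 1.0 / (row_norms + eps)
    return w[:, None] * np.ones_like(U)       # W_s = w x ones(size(U))

U = np.array([[3.0, 4.0],
              [0.0, 0.0]])
Ws = sparse_weights(U)
assert np.isclose(Ws[0, 0], 1.0 / (5.0 + 1e-5))
assert np.isclose(Ws[1, 0], 1e5)   # near-zero rows get a very large weight
```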
Finally, for ASC, we take the approach in [16] and define the augmented matrices:
$$\bar{Y} = \begin{bmatrix} Y \\ \delta \mathbf{1}_n^T \end{bmatrix}, \quad \bar{A} = \begin{bmatrix} A \\ \delta \mathbf{1}_d^T \end{bmatrix}.$$
They are used in the unmixing process instead of the original HSI data matrix $Y$ and spectral library $A$ for iteration. It is worth mentioning that there is strong signature variability in real-world scenarios, which makes the use of ASC subject to strong criticism [25]. Therefore, we only use ASC on synthetic datasets and do not enforce it in experiments on real datasets.
Obviously, model (11) is a constrained non-convex optimization problem, and its unconstrained equivalent form is as follows:
$$\min_U \ \frac{1}{2}\|Y - AU\|_F^2 + \alpha \|W_s \otimes U\|_{1,1} + \beta \sum_{i=1}^{R} \|\hat{U}_i\|_{W_i,*} + \lambda_1 \sum_{i=1}^{R} \|\hat{U}_i\|_{TV} + \lambda_2 \sum_{i=1}^{R} \|\hat{U}_i\|_{TVD} + \iota_{\mathbb{R}_+}(U),$$
where $\iota_{\mathbb{R}_+}(\cdot)$ is the indicator function defined as follows:
$$\iota_{\mathbb{R}_+}(U) = \begin{cases} 0, & \text{if } \min_{i,j} U_{i,j} \geq 0 \\ +\infty, & \text{otherwise.} \end{cases}$$
In order to effectively solve Equation (14), we introduce auxiliary variables $V_1, V_2, V_3, V_4, V_5, V_6, V_7$ and convert it into the following equivalent form:
$$\begin{aligned} \min_{U,V} \ & \frac{1}{2}\|Y - V_1\|_F^2 + \alpha\|W_s \otimes V_2\|_{1,1} + \beta \sum_{i=1}^{R}\|V_{3,i}\|_{W_i,*} + \lambda_1 \sum_{i=1}^{R}\|V_{5,i}\|_1 + \lambda_2 \sum_{i=1}^{R}\|V_{7,i}\|_1 + \iota_{\mathbb{R}_+}(U) \\ \text{s.t.} \ & V_1 = AU, \ V_2 = U, \ V_{3,i} = \hat{U}_i, \ V_{4,i} = \hat{U}_i, \ V_{5,i} = \nabla_1 V_{4,i}, \ V_{6,i} = \hat{U}_i, \ V_{7,i} = \nabla_2 V_{6,i}. \end{aligned}$$
Further, the augmented Lagrange function of Equation (16) can be obtained as follows:
$$\begin{aligned} L_\mu(U, V) = {} & \frac{1}{2}\|Y - V_1\|_F^2 + \alpha\|W_s \otimes V_2\|_{1,1} + \beta \sum_{i=1}^{R}\|V_{3,i}\|_{W_i,*} + \lambda_1 \sum_{i=1}^{R}\|V_{5,i}\|_1 + \lambda_2 \sum_{i=1}^{R}\|V_{7,i}\|_1 + \iota_{\mathbb{R}_+}(U) \\ & + \frac{\mu}{2}\Big( \|V_1 - AU - D_1\|_F^2 + \|V_2 - U - D_2\|_F^2 + \sum_{i=1}^{R}\|V_{3,i} - \hat{U}_i - D_{3,i}\|_F^2 + \sum_{i=1}^{R}\|V_{4,i} - \hat{U}_i - D_{4,i}\|_F^2 \\ & \qquad + \sum_{i=1}^{R}\|V_{5,i} - \nabla_1 V_{4,i} - D_{5,i}\|_F^2 + \sum_{i=1}^{R}\|V_{6,i} - \hat{U}_i - D_{6,i}\|_F^2 + \sum_{i=1}^{R}\|V_{7,i} - \nabla_2 V_{6,i} - D_{7,i}\|_F^2 \Big), \end{aligned}$$
where $\mu > 0$ is the penalty parameter, and $D_1, D_2, D_3, D_4, D_5, D_6, D_7$ are the scaled Lagrange multipliers. Noting that Equation (17) is separable, under the ADMM framework it can be divided into the following subproblems:
$$\begin{aligned} V_1 &: \ \tfrac{1}{2}\|Y - V_1\|_F^2 + \tfrac{\mu}{2}\|V_1 - AU - D_1\|_F^2 \\ V_2 &: \ \alpha\|W_s \otimes V_2\|_{1,1} + \tfrac{\mu}{2}\|V_2 - U - D_2\|_F^2 \\ V_3 &: \ \beta \sum_{i=1}^{R}\|V_{3,i}\|_{W_i,*} + \tfrac{\mu}{2}\sum_{i=1}^{R}\|V_{3,i} - \hat{U}_i - D_{3,i}\|_F^2 \\ V_4 &: \ \sum_{i=1}^{R}\big( \|V_{4,i} - \hat{U}_i - D_{4,i}\|_F^2 + \|V_{5,i} - \nabla_1 V_{4,i} - D_{5,i}\|_F^2 \big) \\ V_5 &: \ \lambda_1 \sum_{i=1}^{R}\|V_{5,i}\|_1 + \tfrac{\mu}{2}\sum_{i=1}^{R}\|V_{5,i} - \nabla_1 V_{4,i} - D_{5,i}\|_F^2 \\ V_6 &: \ \sum_{i=1}^{R}\big( \|V_{6,i} - \hat{U}_i - D_{6,i}\|_F^2 + \|V_{7,i} - \nabla_2 V_{6,i} - D_{7,i}\|_F^2 \big) \\ V_7 &: \ \lambda_2 \sum_{i=1}^{R}\|V_{7,i}\|_1 + \tfrac{\mu}{2}\sum_{i=1}^{R}\|V_{7,i} - \nabla_2 V_{6,i} - D_{7,i}\|_F^2 \\ U &: \ \iota_{\mathbb{R}_+}(U) + \tfrac{\mu}{2}\big( \|V_1 - AU - D_1\|_F^2 + \|V_2 - U - D_2\|_F^2 \big) \end{aligned}$$

2.4. Optimization Framework

The subproblems of $V_1$, $V_4$, and $V_6$ are differentiable and convex and can be solved by taking the derivative and setting it to 0. The update formulas are as follows:
$$V_1^{(k+1)} = (\mu I + I)^{-1}\big( Y + \mu A U^{(k)} + \mu D_1^{(k)} \big),$$
$$V_{4,i}^{(k+1)} \leftarrow (\nabla_1^T \nabla_1 + I)^{-1}\big[ (\hat{U}_i^{(k)} + D_{4,i}^{(k)}) + \nabla_1^T (V_{5,i}^{(k)} - D_{5,i}^{(k)}) \big],$$
$$V_{6,i}^{(k+1)} \leftarrow (\nabla_2^T \nabla_2 + I)^{-1}\big[ (\hat{U}_i^{(k)} + D_{6,i}^{(k)}) + \nabla_2^T (V_{7,i}^{(k)} - D_{7,i}^{(k)}) \big],$$
where $I$ is the identity matrix of size $n \times n$. Note that $\nabla_1^T \nabla_1$ and $\nabla_2^T \nabla_2$ act only on the spatial domain and can be applied band by band; for each band, $\nabla_1$ and $\nabla_2$ are convolutions and can, therefore, be computed using fast-Fourier-transform diagonalization. In addition, for the subproblems of $V_3, V_4, V_5, V_6, V_7$, we update the abundance maps one by one and, after solving each subproblem, assign the values of all updated sub-abundance maps to the corresponding sub-abundance maps in the abundance matrix $U$.
The subproblems of $V_2$, $V_5$, and $V_7$ can be solved using the well-known soft-thresholding method, which is defined as follows:
$$\mathrm{softTh}(x, \alpha) = \mathrm{sign}(x)\max\{0, |x| - \alpha\}, \quad \alpha > 0.$$
Therefore, the solution for V 2 , V 5 , V 7 can be expressed as
$$V_2^{(k+1)} \leftarrow \mathrm{softTh}\Big( U^{(k)} + D_2^{(k)},\ \frac{\alpha}{\mu} W_s \Big),$$
$$V_{5,i}^{(k+1)} \leftarrow \mathrm{softTh}\Big( \nabla_1 V_{4,i}^{(k)} + D_{5,i}^{(k)},\ \frac{\lambda_1}{\mu} \Big),$$
$$V_{7,i}^{(k+1)} \leftarrow \mathrm{softTh}\Big( \nabla_2 V_{6,i}^{(k)} + D_{7,i}^{(k)},\ \frac{\lambda_2}{\mu} \Big).$$
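The soft-thresholding operator used in these three updates is one line of NumPy; as in the $V_2$ update, the threshold may itself be an array (elementwise thresholds such as $(\alpha/\mu) W_s$):

```python
import numpy as np

def soft_th(x, alpha):
    """Elementwise soft threshold: sign(x) * max(0, |x| - alpha)."""
    return np.sign(x) * np.maximum(0.0, np.abs(x) - alpha)

x = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
y = soft_th(x, 1.0)
assert np.allclose(y, [-1.0, 0.0, 0.0, 0.0, 2.0])   # small entries are zeroed out

# The threshold can be an array of the same shape, giving per-entry shrinkage.
assert np.allclose(soft_th(np.array([2.0, 2.0]), np.array([0.5, 1.5])), [1.5, 0.5])
```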
The subproblem of $V_3$ can be solved using the singular value soft-thresholding method. Assuming $X = UU\,\mathrm{diag}(\sigma_1, \ldots, \sigma_p)\,VV$ is a singular value decomposition of $X$, the singular value soft-thresholding operator is defined as follows:
$$\mathrm{SVT}_{w,\rho}(X) = UU\,\mathrm{diag}\big( (\sigma_1 - \rho w_1)_+, \ldots, (\sigma_p - \rho w_p)_+ \big)\,VV,$$
where $(\cdot)_+ = \max(\cdot, 0)$.
The closed-form solution of the $V_3$ subproblem can then be expressed as
$$V_{3,i}^{(k+1)} = \mathrm{SVT}_{W_i,\,\beta/\mu}\big(\hat{U}_i^{(k)} + D_{3,i}^{(k)}\big).$$
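The weighted singular value thresholding step can likewise be sketched in a few lines of NumPy; the per-singular-value weight vector `w` stands in for the weighting induced by $W_i$, and all names are illustrative rather than the paper's implementation.

```python
import numpy as np

def svt_weighted(X, w, rho):
    """Weighted singular value thresholding: shrink each singular
    value sigma_j by rho * w_j and clip the result at zero."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - rho * np.asarray(w), 0.0)
    return (U * s_shrunk) @ Vt       # rebuild the low-rank matrix

M = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 4.0))   # a rank-1 matrix
M_lr = svt_weighted(M, w=np.ones(3), rho=0.1)
```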
The subproblem of $U$ is solved by setting the derivative to zero and then projecting the solution onto the non-negative orthant, specifically
$$U^{(k+1)} = \max\Big(0,\ (A^T A + I)^{-1}\big[A^T(V_1^{(k)} - D_1^{(k)}) + (V_2^{(k)} - D_2^{(k)})\big]\Big).$$
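A minimal sketch of this $U$-step with dense NumPy matrices (names are our own); since $A$ is fixed across iterations, a cached factorization of $A^TA + I$ would avoid re-solving from scratch each time.

```python
import numpy as np

def update_U(A, V1, D1, V2, D2):
    """U-step: solve (A^T A + I) U = A^T (V1 - D1) + (V2 - D2),
    then project the solution onto the non-negative orthant."""
    lhs = A.T @ A + np.eye(A.shape[1])
    rhs = A.T @ (V1 - D1) + (V2 - D2)
    return np.maximum(0.0, np.linalg.solve(lhs, rhs))
```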
Finally, update $D_1, D_2, D_3, D_4, D_5, D_6, D_7$ using the dual ascent method:
$$D_1^{(k+1)} = D_1^{(k)} - (V_1^{(k)} - A U^{(k)}),$$
$$D_2^{(k+1)} = D_2^{(k)} - (V_2^{(k)} - U^{(k)}),$$
$$D_{3,i}^{(k+1)} = D_{3,i}^{(k)} - (V_{3,i}^{(k)} - \hat{U}_i^{(k)}),$$
$$D_{4,i}^{(k+1)} = D_{4,i}^{(k)} - (V_{4,i}^{(k)} - \hat{U}_i^{(k)}),$$
$$D_{5,i}^{(k+1)} = D_{5,i}^{(k)} - (V_{5,i}^{(k)} - \nabla_1 V_{4,i}^{(k)}),$$
$$D_{6,i}^{(k+1)} = D_{6,i}^{(k)} - (V_{6,i}^{(k)} - \hat{U}_i^{(k)}),$$
$$D_{7,i}^{(k+1)} = D_{7,i}^{(k)} - (V_{7,i}^{(k)} - \nabla_2 V_{6,i}^{(k)}).$$
It should be noted that $\{\hat{U}_i\}_{i=1}^R$ tends to remain constant across iterations; when $\{\hat{U}_i\}_{i=1}^R$ does change, however, $V_{3,i}$, $V_{4,i}$, $V_{5,i}$, $V_{6,i}$, and $V_{7,i}$ must change accordingly, and Equations (30)–(34) should be replaced with the following update formulas:
$$D_{3,i}^{(k+1)} = \mathrm{zeros}\big(\mathrm{size}(D_{3,i}^{(k)})\big),$$
$$D_{4,i}^{(k+1)} = \mathrm{zeros}\big(\mathrm{size}(D_{4,i}^{(k)})\big),$$
$$D_{5,i}^{(k+1)} = \mathrm{zeros}\big(\mathrm{size}(D_{5,i}^{(k)})\big),$$
$$D_{6,i}^{(k+1)} = \mathrm{zeros}\big(\mathrm{size}(D_{6,i}^{(k)})\big),$$
$$D_{7,i}^{(k+1)} = \mathrm{zeros}\big(\mathrm{size}(D_{7,i}^{(k)})\big).$$
Next, we analyze the time complexity of the algorithm. The ADMM optimization framework updates $U, V_1, \dots, V_7, D_1, \dots, D_7$. The most expensive step is the $U$ update, with computational complexity $O(nl^2)$, followed by the $V_4$ and $V_6$ updates at $O(n \log n)$; the remaining updates cost $O(n)$. Since typically $l^2 \gg \log n$, the overall computational complexity of SARSU is $O(nl^2)$. Finally, the complete SARSU pseudocode is given in Algorithm 1.
Algorithm 1 Pseudocode of the SARSU algorithm
1: Input: the observed data $Y \in \mathbb{R}^{l \times n}$, the spectral library $A \in \mathbb{R}^{l \times d}$, the number of active sub-abundance maps $R$, the weighting matrices $W_s$ and $W$, the parameters $\alpha$, $\beta$, $\lambda_1$, $\lambda_2$, $\mu$, and Maxiter.
2: Output: the abundance matrix $U \in \mathbb{R}^{d \times n}$.
3: Initialization: set $k = 0$, $U^{(0)} \in \mathbb{R}^{d \times n}$, $\{\hat{U}_i^{(0)}\}_{i=1}^R \subset \mathbb{R}^{m_1 \times m_2}$, and $V_1^{(0)}, \dots, V_7^{(0)}, D_1^{(0)}, \dots, D_7^{(0)}$.
4: Repeat:
5: Update $V_1$ by Equation (18).
6: Update $V_2$ by Equation (22); update $W_s$ by Equation (12).
7: Update $V_{3,i}$ by Equation (26), $V_{4,i}$ by Equation (19), $V_{5,i}$ by Equation (23), $V_{6,i}$ by Equation (20), and $V_{7,i}$ by Equation (24).
8: Update $U$ by Equation (27).
9: Update the Lagrange multipliers: update $D_1$ by Equation (28) and $D_2$ by Equation (29); if $\{\hat{U}_i\}_{i=1}^R$ has changed, update $D_{3,i}, D_{4,i}, D_{5,i}, D_{6,i}, D_{7,i}$ by Equations (35)–(39); otherwise, update them by Equations (30)–(34).
10: Update the iteration counter: $k \leftarrow k + 1$.
11: Until Maxiter is reached.

3. Experiment on Synthetic Dataset

3.1. Experimental Settings

(1) Details of the experiment. Because the proposed SARSU is globally non-convex, implementation details such as initialization and parameter determination must be specified. First, we must initialize the abundance matrix $U$; an improper initialization may cause the algorithm to fall into a local minimum. In general, there are two initialization strategies for $U$: random initialization and pseudo-inverse initialization with the spectral library. Random initialization assigns random values between 0 and 1 to the entries of $U$, whereas pseudo-inverse initialization solves the initial decomposition $Y = AU$ of the image for $U$. To avoid the large uncontrollability of random values, which may drive the algorithm into a local minimum, we choose pseudo-inverse initialization, that is, $U = (A^TA)^{-1}A^TY$. Second, the augmented Lagrangian penalty $\mu$ directly affects the convergence speed of the algorithm. We therefore adopt an adaptive update strategy: we examine the ratio between the ADMM primal and dual residuals at a given interval of iterations and update $\mu$ accordingly. The rationale is that both residuals converge to 0, and adjusting $\mu$ balances their rates of convergence. We use the following primal and dual residual norms as a measure of iteration progress:
$$r^{(k)} = \big\|[V_1^{(k)}; V_2^{(k)}; V_3^{(k)}; V_4^{(k)}; V_5^{(k)}; V_6^{(k)}; V_7^{(k)}] - [AU^{(k)}; U^{(k)}; \hat{U}^{(k)}; \hat{U}^{(k)}; \nabla_1\hat{U}^{(k)}; \hat{U}^{(k)}; \nabla_2\hat{U}^{(k)}]\big\|_F^2,$$
$$d^{(k)} = \big\|[V_1^{(k)}; \dots; V_7^{(k)}] - [V_1^{(k-1)}; \dots; V_7^{(k-1)}]\big\|_F^2.$$
In addition, we set a maximum number of iterations (e.g., 200). If either of the two stopping criteria is satisfied, the iteration terminates. Empirically, we have found that the combination of these two criteria always produces satisfactory results.
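The adaptive penalty rule described above can be sketched in the spirit of the standard ADMM residual-balancing heuristic; the factors `tau` and `gamma` below are illustrative defaults, not values from the paper.

```python
def update_mu(mu, r_norm, d_norm, tau=10.0, gamma=2.0):
    """Residual-balancing penalty update: grow mu when the primal
    residual dominates, shrink it when the dual residual dominates
    (tau and gamma are illustrative choices)."""
    if r_norm > tau * d_norm:
        return mu * gamma
    if d_norm > tau * r_norm:
        return mu / gamma
    return mu
```

In practice this check would be applied every fixed number of iterations, as the text describes, rather than at every step.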
(2) Comparison methods. To demonstrate the validity of the proposed SARSU, we select eight semi-supervised unmixing methods for comparison, namely, SUnSAL [25], SUnSAL_TV [26], JSpBLRU [36], MUA_SLIC [30], LSU [40], S2MSU [32], SUnCNN [23], and FaSUn [42]. The parameters of each method were set according to the suggestions in the original papers and then fine-tuned in our experiments. The parameter settings of all comparison methods and the proposed method are listed in Table 1. All experiments were conducted on a machine with a 12th-generation Intel Core i7-12700F CPU running Windows 10.
(3) Synthetic datasets. This paper simulates two hyperspectral images, referred to as DC1 and DC2. Endmembers were randomly selected from the United States Geological Survey (USGS) spectral library, and the corresponding abundance maps were generated to simulate plausible spatial material distributions. The endmembers in the USGS library have 224 spectral bands uniformly distributed over the range of 0.4–2.5 μm. We generated the data with a linear mixing model, randomly selecting signatures from the USGS library as endmembers and applying the ASC to each simulated pixel, with an angle of at least 4.44° between any pair of spectral signatures. The images were contaminated by both white noise and spectrally correlated Gaussian noise generated by a low-pass filter. DC1, with 75 × 75 pixels, contains five endmembers, and each row of the image is a mixture of a different number of endmembers; since it covers all possible mixing scenarios, it is often used to test the performance of unmixing algorithms. DC2 is a linear mixture of nine endmembers and contains 100 × 100 pixels. The abundance maps of DC2 were sampled from a Dirichlet distribution centered on a Gaussian random field, exhibiting piecewise smoothness and sharp edge transitions. The experiments were conducted on both DC1 and DC2 at signal-to-noise ratios (SNRs) of 20 dB, 30 dB, and 40 dB. Figure 1 shows the reflectance curves of the endmembers selected for DC1 and DC2 and the synthetic images.
(4) Evaluation index. We use the signal reconstruction error (SRE) to evaluate the unmixing results, which is defined as follows:
$$\mathrm{SRE} = 20\log_{10}\frac{E\big(\|U\|_F^2\big)}{E\big(\|U - \hat{U}\|_F^2\big)},$$
where E ( · ) represents the expectation operator, and the higher the SRE value, the better the unmixing effect. We chose SRE rather than root mean square error ( RMSE ) as an evaluation metric because SRE is built on a logarithmic scale. When the reconstruction error is low, SRE can better distinguish the unmixing performance of the model. This index is widely used in spectral-library-based unmixing methods.
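As a quick reference, the SRE defined above can be computed as follows, with sample means standing in for the expectation operator (an illustrative sketch; the function name is our own).

```python
import numpy as np

def sre_db(U_true, U_est):
    """Signal reconstruction error in dB, following the definition
    above with sample means replacing the expectations."""
    num = np.mean(np.linalg.norm(U_true, 'fro') ** 2)
    den = np.mean(np.linalg.norm(U_true - U_est, 'fro') ** 2)
    return 20.0 * np.log10(num / den)
```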
To ensure a fair comparison, we use pseudo-inverse initialization to obtain the initial abundances for all methods. Each method was run ten times on the same image, and the best of the ten results was selected for comparison. To emulate the real unmixing scenario as closely as possible, the noisy image is used both to initialize the abundances and to perform unmixing.

3.2. Parameter Selection Experiment

In this experiment, we selected the four parameters $\alpha$, $\beta$, $\lambda_1$, and $\lambda_2$ of SARSU on DC1. With the image fixed and the SNR set to 30 dB, we first fixed the values of $\lambda_1$ and $\lambda_2$ and optimized $\alpha$ and $\beta$. As shown in Figure 2a, when $\alpha = 5\times10^{-3}$ and $\beta = 1\times10^{-3}$, the SRE reaches its maximum, indicating the best unmixing results. We therefore fixed $\alpha = 5\times10^{-3}$ and $\beta = 1\times10^{-3}$ and then optimized $\lambda_1$ and $\lambda_2$. As shown in Figure 2b, when $\lambda_1 = 5\times10^{-3}$ and $\lambda_2 = 5\times10^{-4}$, the SRE reaches its maximum. Therefore, $\alpha = 5\times10^{-3}$, $\beta = 1\times10^{-3}$, $\lambda_1 = 5\times10^{-3}$, and $\lambda_2 = 5\times10^{-4}$ were used in the subsequent experiments.

3.3. Noise Robustness Experiment

This experiment evaluates the performance and robustness of the proposed SARSU method under different levels of Gaussian noise. The experiments are conducted on both DC1 and DC2, with noise levels of SNR = 20 dB, 30 dB, and 40 dB. To ensure fairness, all methods are compared under the same initialization conditions.
Figure 3 shows the SRE results of each method under different SNR levels on DC1 and DC2. It can be clearly observed that as the SNR value increases, the SRE values of all methods increase, indicating that the performance of all methods improves with the reduction in noise level. However, our method clearly achieves better results than other methods. Under all noise levels on both DC1 and DC2, the SRE values obtained by our method are significantly superior to those of other methods, demonstrating better stability and noise robustness. This indicates that our sub-abundance map regularization strategy is successful. By directly integrating the low-rank and piecewise smoothness priors into the contributing abundance maps, it not only enhances their importance in unmixing but also effectively reduces the solution space, thereby further improving the algorithm’s performance. Figure 4 and Figure 5, respectively, display the estimated abundance maps of each method on DC1 and DC2 at 30 dB. It can be clearly seen that compared with other methods, our method achieves more desirable unmixing results and produces better abundance maps. At the 30 dB level, the abundance maps obtained by our method effectively eliminate the influence of noise, with smoother edges and better preservation of edge details, making them more consistent with the true abundance maps. Since the SNR of real images obtained by sensors is generally around this level, this indicates that our method is very promising for practical applications. This clearly confirms the effectiveness and advancement of SARSU in improving unmixing performance.

3.4. Convergence Analysis

The SARSU using the aforementioned stopping criterion is convergent. The convergence of the algorithm can be analyzed in a manner similar to that in [46,47]. We have demonstrated this in our experiments. As shown in Figure 6, the convergence curves of the objective function, primal residual, and dual residual in the DC1 experiment are presented. It can be seen from Figure 6a that the objective function value tends to flatten after 40 iterations and has essentially stabilized before 200 iterations. The primal and dual residual curves shown in Figure 6b also indicate that the primal and dual residuals oscillate and approach each other. As the number of iterations increases, the amplitude of oscillation gradually decreases until stabilization. By 200 iterations, the residuals have essentially completely converged to 0, which proves the convergence of SARSU.

3.5. Computational Cost Analysis

As is well known, the computational cost of an SU model is significantly influenced by the size of the spectral library. Particularly for algorithms with high time complexity, an increase in the number of spectral library atoms leads to a substantial rise in computational cost. Having analyzed the time complexity of SARSU above, we now run each method with spectral libraries of different sizes. The number of atoms in the spectral library is set to 50, 100, 150, 240, 350, and 498, respectively, and the running time of each method is recorded. Figure 7 presents the running time curves of each method. It can be observed that our method, along with SUnSAL and MUA_SLIC, spends less time and has a relatively flat curve, indicating that its running time is not significantly affected by the number of spectral library atoms. SUnSAL is fast because it includes no additional regularization terms, and MUA_SLIC employs a time-saving multi-scale processing strategy. Our method, however, maintains a rapid running time even after incorporating low-rank and piecewise smooth regularization terms for the abundance maps. This is attributed to our sub-abundance map regularization strategy, which balances unmixing performance and timeliness. The other methods exhibit a significant increase in running time as the number of spectral library atoms grows. Notably, SUnCNN and LSU show the most pronounced increase: the former is limited by the time-consuming nature of deep learning, while the latter, due to its hierarchical structure, requires more unmixing layers as the number of spectral library atoms increases, resulting in considerable time overhead. It is worth mentioning that JSpBLRU and SUnSAL_TV employ low-rank regularization and TV regularization on the abundance, respectively.
However, their computational costs are much higher than that of the proposed method, which utilizes both types of regularization simultaneously.
Table 2 presents the running time of each method on DC2. It can be seen from the table that the proposed method is only slower than SUnSAL and MUA_SLIC, and it has a significant advantage in running time compared with other methods, which once again confirms the superiority of the proposed method in terms of speed.

3.6. Ablation Experiment

To verify the effectiveness of the proposed method, we conducted ablation experiments on DC1. We designed six cases and performed experiments under SNR levels of 20, 30, and 40 dB. The six cases are as follows:
$$\min_U \tfrac{1}{2}\|Y - AU\|_F^2 + \alpha\|W_s \odot U\|_{1,1} \quad \mathrm{s.t.}\ U \ge 0,$$
$$\min_U \tfrac{1}{2}\|Y - AU\|_F^2 + \alpha\|W_s \odot U\|_{1,1} + \beta\textstyle\sum_{i=1}^{R}\|\hat{U}_i\|_{W_i,*} \quad \mathrm{s.t.}\ U \ge 0,$$
$$\min_U \tfrac{1}{2}\|Y - AU\|_F^2 + \alpha\|W_s \odot U\|_{1,1} + \beta\textstyle\sum_{i=1}^{R}\|\hat{U}_i\|_{W_i,*} + \lambda_1\textstyle\sum_{i=1}^{R}\|\hat{U}_i\|_{TV} \quad \mathrm{s.t.}\ U \ge 0,$$
$$\min_U \tfrac{1}{2}\|Y - AU\|_F^2 + \alpha\|W_s \odot U\|_{1,1} + \beta\textstyle\sum_{i=1}^{R}\|\hat{U}_i\|_{W_i,*} + \lambda_2\textstyle\sum_{i=1}^{R}\|\hat{U}_i\|_{TV_D} \quad \mathrm{s.t.}\ U \ge 0,$$
$$\min_U \tfrac{1}{2}\|Y - AU\|_F^2 + \alpha\|W_s \odot U\|_{1,1} + \lambda_1\textstyle\sum_{i=1}^{R}\|\hat{U}_i\|_{TV} + \lambda_2\textstyle\sum_{i=1}^{R}\|\hat{U}_i\|_{TV_D} \quad \mathrm{s.t.}\ U \ge 0,$$
$$\min_U \tfrac{1}{2}\|Y - AU\|_F^2 + \alpha\|W_s \odot U\|_{1,1} + \beta\textstyle\sum_{i=1}^{R}\|\hat{U}_i\|_{W_i,*} + \lambda_1\textstyle\sum_{i=1}^{R}\|\hat{U}_i\|_{TV} + \lambda_2\textstyle\sum_{i=1}^{R}\|\hat{U}_i\|_{TV_D} \quad \mathrm{s.t.}\ U \ge 0.$$
Figure 8 presents the SRE values of models (43)–(48) at SNR levels of 20 dB, 30 dB, and 40 dB. The figure clearly demonstrates the contributions of the low-rank regularization and the dual TV regularization to the models. It can be observed that the two TV regularizations are of nearly equal importance; at low SNR levels in particular, the dual TV regularization significantly improves the unmixing results. The difference between models (43) and (44) illustrates the importance of the low-rank regularization, which effectively promotes the low-rank structure of the abundance maps and removes some outliers, thereby enhancing the unmixing results. Model (48) is the complete model of the proposed method, and it achieves the best unmixing results by integrating both the sub-abundance map low-rank regularization and the dual TV regularization.

4. Experiment on Real Dataset

4.1. Preparation Before the Experiment

In the experiments of this section, four real-world datasets were utilized: Samson, Urban, AVIRIS Cuprite, and Jasper Ridge. All competing methods were applied for unmixing and evaluation. These datasets are publicly available and widely adopted by researchers for unmixing tasks, providing effective validation of algorithm performance. Consistent with the synthetic dataset experiments, all methods employed pseudoinverse-based initialization. Due to the lack of ground-truth data for real-world validation, the RGB composite images of the four datasets are provided in Figure 9 for visual comparison.
Since real-world datasets lack ground-truth abundance maps, the unmixing performance cannot be directly evaluated using abundance-based metrics like the SRE , as done for synthetic datasets. To quantitatively assess the unmixing results, we adopt the image-level SRE and RMSE , which are defined as follows:
$$\mathrm{SRE_{IM}} = 20\log_{10}\frac{E\big(\|Y\|_F^2\big)}{E\big(\|Y - AU\|_F^2\big)},$$
$$\mathrm{RMSE} = \sqrt{\frac{\|Y - AU\|_F^2}{L \times N}}.$$
where Y denotes the original HSI, A represents the spectral library, and U is the abundance matrix obtained after unmixing. The S R E I M can to some extent evaluate the quality of the unmixing results. The greater the value, the more effective the unmixing outcome will be.
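These two image-level metrics can be computed directly from the reconstruction, for example as follows (an illustrative sketch with our own naming; $L \times N$ equals the total number of entries of $Y$).

```python
import numpy as np

def image_metrics(Y, A, U):
    """Image-level SRE (dB) and RMSE for real data, where the
    reconstruction A @ U stands in for the unavailable ground truth."""
    err2 = np.linalg.norm(Y - A @ U, 'fro') ** 2
    sre_im = 20.0 * np.log10(np.linalg.norm(Y, 'fro') ** 2 / err2)
    rmse = np.sqrt(err2 / Y.size)    # Y.size == L * N
    return sre_im, rmse
```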

4.2. Experiment Results

The Samson dataset was acquired by the Samson sensor and has 95 × 95 pixels, 156 bands, and a spectral range of 0.401 to 0.889 μm. The scene is commonly assumed to include three endmembers: Soil, Tree, and Water. Figure 10 presents the abundance maps estimated by all methods on the Samson dataset. Notably, our method demonstrates desirable performance, particularly in estimating Water, where it achieves a more precise spatial distribution and cleaner background separation than the other approaches. The remaining methods also exhibit competent overall performance: SUnCNN and FaSUn yield satisfactory estimates for Soil, while SUnSAL_TV and MUA_SLIC effectively suppress noise but introduce varying degrees of oversmoothing.
The Jasper Ridge dataset was captured by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) sensor. It contains 224 bands with 100 × 100 pixels and a spectral resolution of 9.46 nm. To mitigate atmospheric effects, several bands were removed, retaining 198 bands for unmixing. The scene is generally considered to include four endmembers: Tree, Water, Soil, and Road. Figure 11 presents the abundance maps estimated by different methods on the Jasper Ridge dataset. The results indicate that our method achieves overall satisfactory performance. Specifically, our method delivers superior estimates for Water, whereas SUnSAL, JSpBLRU, and S2MSU exhibit suboptimal performance in Water estimation. For Soil and Road, all methods yield reasonable abundance estimates, with the exception of FaSUn, which underperforms in Road estimation. Overall, our method achieves robust abundance estimation accuracy.
The widely adopted AVIRIS Cuprite dataset was selected to evaluate the proposed method as it is a benchmark in hyperspectral unmixing. A sub-image from the sector ’f970619t01p02_r02_sc04.a.rfl’ (Available online at: https://avng.jpl.nasa.gov/pub/ABoselli/f970619t01p02_r02c/, accessed on 1 November 2025) with dimensions of 250 × 191 pixels and 240 spectral bands was chosen for experimentation, where 188 bands were retained after removing noise-corrupted and water vapor absorption bands. Twelve minerals are hypothesized to be present in this dataset: Alunite GDS82 Na82, Andradite WS487, Buddingtonite GDS85D-206, Chalcedony CU91-6A, Dumortierite HS190.3B, Kaolin/Smect H89-FR-5 30K, Kaolin/Smect KLF508 85%K, Montmorillonite + Illi CM37, Muscovite GDS108, Nontronite NG-1.a, Pyrope WS474, and Sphene HS189.3B. Figure 12 displays the abundance maps estimated by different methods on the AVIRIS Cuprite dataset. Two key observations can be drawn: First, all algorithms achieve reasonable unmixing results. Second, our method accurately localizes regions with the highest mineral concentrations, which benefits from the sub-abundance map regularization effectively enforcing low-rank and piecewise smoothness properties of abundance maps, thereby preserving critical details while suppressing noise and outliers.
The Urban (available online at http://www.erdc.usace.army.mil/Media/Fact-Sheets/Fact-Sheet-Article-View/Article/610433/hypercube/#, accessed on 1 November 2025) dataset consists of 307 × 307 pixels and contains 210 spectral bands. After removing noise-corrupted and water vapor absorption bands, 162 bands were retained for unmixing. As established in prior studies, the scene comprises six materials: Asphalt-road, Grass, Roof #1, Tree, Roof #2, and Concrete-road. Figure 13 presents the unmixing results of various algorithms on the Urban dataset. It is evident that our method achieves superior overall performance. Specifically, both our method and SUnCNN yield more complete and distinct abundance estimates for Grass, Roof #1, and Tree, while our method further produces smoother abundance maps with effective noise suppression. For Roof #2, our method, along with SUnSAL_TV, SUnSAL, and LSU, delivers satisfactory results, but our solution exhibits higher precision and cleaner background separation. All methods perform comparably well in estimating the Concrete-road abundance.
Table 3 and Table 4 present the SRE and RMSE values obtained by all methods on four real-world datasets, respectively, with the highest values bolded and the second-highest values underlined. It is evident that our method achieves the best results across all real-world datasets, demonstrating the effectiveness of the proposed sub-abundance map regularization strategy in enhancing unmixing performance.

4.3. Running Time Analysis

The running times of all methods on the four real-world datasets are summarized in Table 5. It can be observed that our method maintains favorable time efficiency even in the complex scenarios of real datasets. Specifically, on the Samson and Jasper Ridge datasets, our method achieves the shortest runtime, even faster than SUnSAL. This is because the complexity of real data requires SUnSAL to perform more iterations to converge, whereas our method incorporates the low-rank and piecewise smoothness priors of the abundance maps, which accelerates convergence. Additionally, the dual TV regularization employed in our framework captures twice the spatial information of conventional TV, further accelerating convergence. On the AVIRIS Cuprite and Urban datasets, our method also runs faster than the other methods that similarly adopt low-rank and TV regularization. These results collectively demonstrate the superiority of our approach in computational efficiency.

5. Discussion

(1) The proposed strategy for estimating the number of active sub-abundance maps is based on the SVD of the denoised HSI. A clean HSI data matrix exhibits a jump between its large and small singular values (i.e., the singular value sequence drops abruptly at some index). To justify the chosen threshold, we statistically analyze, for each dataset, the two singular values of the denoised HSI data matrix on either side of this numerical jump.
Table 6 presents, for each dataset, the two singular values of the denoised HSI data matrix at the numerical jump and the resulting number of active sub-abundance maps. It can be seen that our chosen threshold of $1\times10^{-10}$ is reasonable and feasible, and it greatly reduces the number of sub-abundance maps that need to be regularized, saving computational overhead. To demonstrate the accuracy of this strategy, we also report the endmember-number estimates of the widely used VD [48] and HySime [49] methods for comparison. The experimental results confirm the accuracy of this strategy.
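The jump-based estimate of the number of active sub-abundance maps can be sketched as follows; this is an illustrative implementation, and the default threshold should be treated as data-dependent rather than a fixed value from the paper.

```python
import numpy as np

def estimate_R(Y_denoised, threshold=1e-10):
    """Count the singular values of the denoised data matrix that
    survive the numerical jump; this count serves as the estimated
    number of active sub-abundance maps."""
    s = np.linalg.svd(Y_denoised, compute_uv=False)
    return int(np.sum(s > threshold))

# a rank-2 example: only two singular values survive the jump
M = np.outer([1.0, 2.0, 3.0], [1.0, 1.0]) + np.outer([1.0, 0.0, 1.0], [0.0, 2.0])
print(estimate_R(M))   # -> 2
```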
(2) The sub-abundance map regularization strategy proposed in this paper can effectively reduce the regularization scale of the abundance matrix and save computational overhead. The inherent low-rank spatial prior of the abundance maps can be efficiently exploited by directly applying weighted nuclear norm regularization to the abundance maps. In addition, it can reduce the adverse effects of the large scale and high coherence of the spectral library on unmixing without pruning the spectral library, avoiding the risk of losing important library atoms in the pruning process. In SU, the proposed method can be applied in real scenarios as a practical fast method.
(3) Regarding the prediction of the number of active sub-abundance maps, besides the singular value thresholding used in this paper, many effective endmember-number estimation methods could be employed: first estimate the number of endmembers to fix the number of active sub-abundance maps, then identify the specific maps, and finally regularize them. Such a pipeline can serve as a practical technical route for SU in future work.

6. Conclusions

In this paper, we propose a novel sub-abundance map regularized sparse unmixing framework based on dynamic abundance subspace awareness, called SARSU. By constraining the contributing sub-abundance maps during the unmixing process, the method reduces the optimization complexity while mitigating the adverse effects of the large scale and high coherence of spectral libraries. For sub-abundance map regularization, a weighted nuclear norm regularization is employed to enforce the low-rank property of abundance maps, and a DTV regularization is designed to exploit additional spatial information and promote piecewise smoothness. The experimental results on both synthetic and real datasets demonstrate the effectiveness of the proposed method, achieving superior hyperspectral unmixing performance compared to other state-of-the-art sparse unmixing methods.
In future work, we will focus on addressing practical challenges and exploring solutions for sparse unmixing in real-world applications. Additionally, we will explore the method of combining semi-supervised models with deep priors in order to further develop a more ideal unmixing approach.

Author Contributions

Conceptualization, K.Q.; methodology, F.L.; software, F.L.; validation, H.W.; formal analysis, H.W.; investigation, F.L.; resources, H.W.; data curation, K.Q.; writing—original draft, F.L.; writing—review and editing, W.B.; visualization, H.W.; supervision, W.B.; project administration, W.B.; funding acquisition, K.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China grant number 62461001 and in part by the Natural Science Project Fund for High-level Talents of North Minzu University (Project No. 2025BG239).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rajabi, R.; Zehtabian, A.; Singh, K.D.; Tabatabaeenejad, A.; Ghamisi, P.; Homayouni, S. Editorial: Hyperspectral imaging in environmental monitoring and analysis. Front. Environ. Sci. 2024, 11, 1353447. [Google Scholar] [CrossRef]
  2. Chen, L.; Liu, J.; Chen, W.; Du, B. A GLRT-Based Multi-Pixel Target Detector in Hyperspectral Imagery. IEEE Trans. Multimed. 2023, 25, 2710–2722. [Google Scholar] [CrossRef]
  3. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
  4. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379. [Google Scholar] [CrossRef]
  5. Plaza, A.; Du, Q.; Bioucas-Dias, J.M.; Jia, X.; Kruse, F.A. Foreword to the special issue on spectral unmixing of remotely sensed data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4103–4110. [Google Scholar] [CrossRef]
  6. Rasti, B.; Zouaoui, A.; Mairal, J.; Chanussot, J. Image Processing and Machine Learning for Hyperspectral Unmixing: An Overview and the HySUPP Python Package. IEEE Trans. Geosci. Remote. Sens. 2024, 62, 5517631. [Google Scholar] [CrossRef]
  7. Wei, J.; Wang, X. An Overview on Linear Unmixing of Hyperspectral Data. Math. Probl. Eng. 2020, 2020, 3735403. [Google Scholar] [CrossRef]
  8. Bin, Y.; Bin, W. Review of nonlinear unmixing for hyperspectral remote sensing imagery. J. Infrared Millim. Waves 2017, 36, 173–185. [Google Scholar] [CrossRef]
  9. Nascimento, J.; Dias, J. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910. [Google Scholar] [CrossRef]
  10. Xiong, W.; Chang, C.I.; Wu, C.C.; Kalpakis, K.; Chen, H.M. Fast Algorithms to Implement N-FINDR for Hyperspectral Endmember Extraction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 545–564. [Google Scholar] [CrossRef]
  11. Li, J.; Agathos, A.; Zaharie, D.; Bioucas-Dias, J.M.; Plaza, A.; Li, X. Minimum Volume Simplex Analysis: A Fast Algorithm for Linear Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5067–5082. [Google Scholar] [CrossRef]
  12. Lin, C.H.; Chi, C.Y.; Wang, Y.H.; Chan, T.H. A Fast Hyperplane-Based Minimum-Volume Enclosing Simplex Algorithm for Blind Hyperspectral Unmixing. IEEE Trans. Signal Process. 2016, 64, 1946–1961.
  13. Feng, X.R.; Li, H.C.; Li, J.; Du, Q.; Plaza, A.; Emery, W.J. Hyperspectral unmixing using sparsity-constrained deep nonnegative matrix factorization with total variation. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6245–6257.
  14. He, W.; Zhang, H.; Zhang, L. Total Variation Regularized Reweighted Sparse Nonnegative Matrix Factorization for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3909–3921.
  15. Qu, K.; Bao, W. Multiple-priors ensemble constrained nonnegative matrix factorization for spectral unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 963–975.
  16. Qu, K.; Li, Z.; Wang, C.; Luo, F.; Bao, W. Hyperspectral Unmixing Using Higher-order Graph Regularized NMF with Adaptive Feature Selection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5511815.
  17. Zhang, Q.; Wang, H.; Plemmons, R.J.; Pauca, V.P. Spectral unmixing using nonnegative tensor factorization. In Proceedings of the ACM-SE 45, Winston-Salem, NC, USA, 23–24 March 2007.
  18. Huck, A.; Guillaume, M. Estimation of the hyperspectral tucker ranks. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 1281–1284.
  19. Ding, M.; Fu, X.; Zhao, X.L. Fast and Structured Block-Term Tensor Decomposition for Hyperspectral Unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 1691–1709.
  20. Wang, C.; Zhao, X.L.; Zhang, H.; Li, B.Z.; Ding, M. Hyperspectral Image Mixed Noise Removal via Nonlinear Transform-Based Block-Term Tensor Decomposition. IEEE Geosci. Remote Sens. Lett. 2023, 20, 5502605.
  21. Jiang, X.; Sun, L.; Lin, P. Local Sparsity Blocks and Tensor Low Rank Regularized Sparse Unmixing. In Proceedings of the 2023 13th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), Athens, Greece, 31 October–2 November 2023; pp. 1–5.
  22. Qian, Y.; Xiong, F.; Zeng, S.; Zhou, J.; Tang, Y.Y. Matrix-Vector Nonnegative Tensor Factorization for Blind Unmixing of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1776–1792.
  23. Rasti, B.; Koirala, B. SUnCNN: Sparse Unmixing Using Unsupervised Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 5508205.
  24. Su, Y.; Zhu, Z.; Gao, L.; Plaza, A.; Li, P.; Sun, X.; Xu, X. DAAN: A Deep Autoencoder-Based Augmented Network for Blind Multilinear Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5512715.
  25. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Sparse Unmixing of Hyperspectral Data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2014–2039.
  26. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Total Variation Spatial Regularization for Sparse Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4484–4502.
  27. Song, H.; Wu, X.; Zou, A.; Liu, Y.; Zou, Y. Weighted Total Variation Regularized Blind Unmixing for Hyperspectral Image. IEEE Geosci. Remote Sens. Lett. 2022, 19, 5507505.
  28. Li, H.; Feng, R.; Wang, L.; Zhong, Y.; Zhang, L. Superpixel-Based Reweighted Low-Rank and Total Variation Sparse Unmixing for Hyperspectral Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 629–647.
  29. Miao, Y.; Yang, B. Multilevel Reweighted Sparse Hyperspectral Unmixing Using Superpixel Segmentation and Particle Swarm Optimization. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6013605.
  30. Borsoi, R.A.; Imbiriba, T.; Bermudez, J.C.M.; Richard, C. A Fast Multiscale Spatial Regularization for Sparse Hyperspectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2019, 16, 598–602.
  31. Qi, L.; Li, J.; Wang, Y.; Huang, Y.; Gao, X. Spectral–Spatial-Weighted Multiview Collaborative Sparse Unmixing for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8766–8779.
  32. Ince, T.; Dobigeon, N. Spatial–Spectral Multiscale Sparse Unmixing for Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 5511605.
  33. Wang, C.; Zhang, Q.; Wang, X.; Zhou, L.; Li, Q.; Xia, Z.; Ma, B.; Shi, Y.Q. Light-Field Image Multiple Reversible Robust Watermarking Against Geometric Attacks. IEEE Trans. Dependable Secur. Comput. 2025, 22, 5861–5875.
  34. Liu, Y.; Wang, C.; Lu, M.; Yang, J.; Gui, J.; Zhang, S. From Simple to Complex Scenes: Learning Robust Feature Representations for Accurate Human Parsing. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 5449–5462.
  35. Ince, T. Superpixel-Based Graph Laplacian Regularization for Sparse Hyperspectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2022, 19, 5501305.
  36. Huang, J.; Huang, T.Z.; Deng, L.J.; Zhao, X.L. Joint-sparse-blocks and low-rank representation for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2419–2438.
  37. Xu, X.; Pan, B.; Chen, Z.; Shi, Z.; Li, T. Simultaneously Multiobjective Sparse Unmixing and Library Pruning for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3383–3395.
  38. Zhang, X.; Yuan, Y.; Li, X. Reweighted Low-Rank and Joint-Sparse Unmixing with Library Pruning. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5527816.
  39. Kucuk, S.; Yuksel, S.E. Total Utility Metric Based Dictionary Pruning for Sparse Hyperspectral Unmixing. IEEE Trans. Comput. Imaging 2021, 7, 562–572.
  40. Shen, X.; Chen, L.; Liu, H.; Su, X.; Wei, W.; Zhu, X.; Zhou, X. Efficient hyperspectral sparse regression unmixing with multilayers. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5522614.
  41. Qu, K.; Luo, F.; Wang, H.; Bao, W. A New Fast Sparse Unmixing Algorithm Based on Adaptive Spectral Library Pruning and Nesterov Optimization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 6134–6151.
  42. Rasti, B.; Zouaoui, A.; Mairal, J.; Chanussot, J. Fast Semisupervised Unmixing Using Nonconvex Optimization. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5526713.
  43. Das, S.; Routray, A.; Deb, A.K. Noise robust estimation of number of endmembers in a hyperspectral image by Eigenvalue based gap index. In Proceedings of the 2016 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, USA, 21–24 August 2016; pp. 1–5.
  44. Su, H.; Jia, C.; Zheng, P.; Du, Q. Superpixel-Based Weighted Collaborative Sparse Regression and Reweighted Low-Rank Representation for Hyperspectral Image Unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 393–408.
  45. Zhang, S.; Li, J.; Li, H.C.; Deng, C.; Plaza, A. Spectral–Spatial Weighted Sparse Regression for Hyperspectral Image Unmixing. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3265–3276.
  46. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  47. Qu, K.; Li, Z.; Luo, X.; Bao, W.; Luo, F. Hyperspectral Unmixing Using Reweighted Unidirectional TV Low-Rank NTF With Multiple-Factor Collaboration Regularization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 9628–9644.
  48. Chang, C.I.; Du, Q. Estimation of number of spectrally distinct signal sources in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2004, 42, 608–619.
  49. Bioucas-Dias, J.M.; Nascimento, J.M.P. Hyperspectral Subspace Identification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2435–2445.
Figure 1. The endmember curves and images for DC1 and DC2. (a) The endmember curves of DC1. (b) The endmember curves of DC2. (c) Image of the first band of DC1. (d) Image of the second band of DC2.
Figure 2. SRE values of abundances obtained by the proposed method with different parameters on DC1. (a) SRE values of different α and β. (b) SRE values of different λ_1 and λ_2.
Figure 3. SRE values of the abundance matrix acquired by all methods at different SNR on DC1 and DC2. (a) SRE values on DC1. (b) SRE values on DC2.
Figure 4. All abundance maps estimated by each method at 30 dB on DC1. From the first row to the last row are the abundance maps of endmember 1 to endmember 5.
Figure 5. All abundance maps estimated by each method at 30 dB on DC2. From the first row to the last row are the abundance maps of endmember 1 to endmember 9.
Figure 6. Objective function curve, primal residual, and dual residual curves. (a) Objective function curve. (b) Primal residual and dual residual curves.
Figure 7. The running time of each method under different numbers of spectral library atoms on DC1.
Figure 8. SRE values obtained from the ablation experiments conducted under different SNR levels on DC1.
Figure 9. The images of all real datasets. (a) Samson dataset. (b) Jasper Ridge dataset. (c) AVIRIS Cuprite dataset. (d) Urban dataset.
Figure 10. Abundance maps of all methods obtained from the Samson dataset.
Figure 11. Abundance maps of all methods obtained from the Jasper Ridge dataset.
Figure 12. Abundance results achieved by all methods on AVIRIS Cuprite. All minerals from left to right are Alunite GDS82 Na82, Andradite WS487, Buddingtonite GDS85D-206, Chalcedony CU91-6A, Dumortierite HS190.3B, Kaolin/Smect H89-FR-5 30K, Kaolin/Smect KLF508 85%K, Montmorillonite + Illi CM37, Muscovite GDS108, Nontronite NG-1.a, Pyrope WS474, and Sphene HS189.3B.
Figure 13. Abundance results achieved by all methods on Urban. From the first row to the last row are the abundance maps of Asphalt-road, Grass, Roof #1, Tree, Roof #2, and Concrete-road.
Table 1. Selection of parameters for each method.

| Method | Synthetic Datasets | Real Datasets |
|---|---|---|
| SUnSAL [25] | λ = 0.5 | λ = 0.05 |
| SUnSAL_TV [26] | λ = 0.01, λ_TV = 0.001, μ = 0.2 | λ = 0.01, λ_TV = 0.0001, μ = 0.5 |
| JSpBLRU [36] | d = 7, μ = 0.01, γ = 0.005, τ = 1 | d = 50, μ = 0.01, γ = 0.0005, τ = 10 |
| MUA_SLIC [30] | λ_1 = 0.01, λ_2 = 0.1, β = 0.01, slic_size = 6, slic_reg = 0.005 | λ_1 = 0.001, λ_2 = 0.01, β = 10, slic_size = 200, slic_reg = 0.01 |
| LSU [40] | α = 0.05, β = 0.005, μ = 0.1 | α = 0.05, β = 5, μ = 0.1 |
| S2MSU [32] | blocksize = 5, stepsize = 3, λ_bar = 0.005, λ = 0.05 | blocksize = 3, stepsize = 2, λ_bar = 0.005, λ = 0.001 |
| SUnCNN [23] | Iter = 12,000 | Iter = 20,000 |
| FaSUn [42] | T = 10,000, T_A = T_B = 5, μ_1 = 50, μ_2 = 2, μ_3 = 1 | T = 10,000, T_A = T_B = 5, μ_1 = 400, μ_2 = 20, μ_3 = 1 |
Table 2. Running time (s) of each method on DC2; bold indicates the optimal value.

| Dataset | SUnSAL | SUnSAL_TV | JSpBLRU | MUA_SLIC | LSU | S2MSU | SUnCNN | FaSUn | Proposed |
|---|---|---|---|---|---|---|---|---|---|
| DC2 | **6.75** | 87.53 | 37.58 | 15.94 | 116.24 | 22.51 | 173.27 | 91.60 | 20.82 |
Table 3. SRE_IM of all methods on real datasets; bold indicates the optimal values.

| Dataset | SUnSAL | SUnSAL_TV | JSpBLRU | MUA_SLIC | LSU | S2MSU | SUnCNN | FaSUn | Proposed |
|---|---|---|---|---|---|---|---|---|---|
| Samson | 20.25 | 23.33 | 26.10 | 29.10 | 25.53 | 30.21 | 26.95 | 25.97 | **30.94** |
| Jasper Ridge | 18.82 | 27.36 | 22.47 | 21.10 | 27.11 | 26.81 | 26.04 | 28.52 | **31.85** |
| AVIRIS Cuprite | 23.12 | 30.66 | 26.35 | 23.08 | 33.13 | 35.42 | 32.72 | 29.33 | **36.17** |
| Urban | 19.96 | 28.36 | 26.39 | 27.79 | 28.62 | 28.05 | 27.16 | 24.39 | **28.87** |
Table 4. RMSE of all methods on real datasets; bold indicates the optimal values.

| Dataset | SUnSAL | SUnSAL_TV | JSpBLRU | MUA_SLIC | LSU | S2MSU | SUnCNN | FaSUn | Proposed |
|---|---|---|---|---|---|---|---|---|---|
| Samson | 0.024 | 0.017 | 0.011 | 0.009 | 0.013 | 0.012 | 0.017 | 0.015 | **0.006** |
| Jasper Ridge | 0.038 | 0.019 | 0.018 | 0.028 | 0.016 | 0.016 | 0.020 | 0.014 | **0.012** |
| AVIRIS Cuprite | 0.025 | 0.012 | 0.022 | 0.025 | 0.008 | 0.019 | 0.023 | 0.012 | **0.005** |
| Urban | 0.027 | 0.010 | 0.014 | 0.011 | **0.009** | 0.018 | 0.024 | 0.015 | **0.009** |
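For reference, the SRE (in dB) and RMSE figures reported in the tables above follow the standard definitions over the abundance matrix. A minimal NumPy sketch is given below; the function names `sre_db` and `rmse` are our own labels, and computing both metrics over the full abundance matrix is an assumption rather than the paper's stated protocol.

```python
import numpy as np

def sre_db(X_true, X_est):
    # Signal-to-reconstruction error in dB: ratio of signal energy to
    # reconstruction-error energy, on a log scale (higher is better).
    err = np.sum((X_true - X_est) ** 2)
    return 10.0 * np.log10(np.sum(X_true ** 2) / err)

def rmse(X_true, X_est):
    # Root-mean-square error over all abundance entries (lower is better).
    return np.sqrt(np.mean((X_true - X_est) ** 2))
```

For example, an estimate that uniformly underestimates every abundance by 10% yields an RMSE of 0.1 on an all-ones reference and an SRE of 20 dB.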
Table 5. Running time (s) of each method on real datasets; bold indicates the optimal values.

| Dataset | SUnSAL | SUnSAL_TV | JSpBLRU | MUA_SLIC | LSU | S2MSU | SUnCNN | FaSUn | Proposed |
|---|---|---|---|---|---|---|---|---|---|
| Samson | 20.32 | 82.79 | 70.73 | 43.52 | 41.89 | 25.83 | 40.79 | 55.15 | **18.04** |
| Jasper Ridge | 39.68 | 117.25 | 85.45 | 57.34 | 131.56 | 58.51 | 127.21 | 104.07 | **37.54** |
| AVIRIS Cuprite | **68.27** | 245.83 | 233.62 | 88.05 | 432.76 | 152.96 | 765.53 | 155.28 | 109.05 |
| Urban | **84.08** | 591.19 | 464.83 | 139.90 | 837.30 | 354.23 | 1070.85 | 226.78 | 162.06 |
Table 6. Two values of the singular value matrix of the denoised HSI data matrix at the numerical jump, and the number of active sub-abundance maps for each dataset.

| | DC1 | DC2 | Samson | Jasper Ridge | AVIRIS Cuprite | Urban |
|---|---|---|---|---|---|---|
| Big singular value | 1.90 | 4.36 | 0.10 | 4.73 | 0.69 | 1.81 |
| Small singular value | 2.62 × 10^−12 | 1.80 × 10^−13 | 2.17 × 10^−14 | 1.76 × 10^−10 | 5.35 × 10^−13 | 2.99 × 10^−13 |
| Number of active sub-abundance maps | 5 | 9 | 23 | 17 | 18 | 27 |
| Number of endmembers determined by VD | 5 | 9 | 25 | 22 | 25 | 29 |
| Number of endmembers determined by Hysime | 5 | 9 | 23 | 19 | 20 | 27 |
| True endmember number | 5 | 9 | Not available | Not available | Not available | Not available |
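The "numerical jump" between consecutive singular values in Table 6 can be located mechanically from an SVD of the denoised data matrix. The sketch below is an illustrative reconstruction only: the function name `estimate_active_dim` and the ratio threshold `ratio_tol` are our assumptions, not the paper's exact jump criterion.

```python
import numpy as np

def estimate_active_dim(Y, ratio_tol=1e6):
    """Count significant singular values of Y (bands x pixels) by
    scanning for the first large jump between consecutive values."""
    # Singular values are returned in descending order.
    s = np.linalg.svd(Y, compute_uv=False)
    for k in range(len(s) - 1):
        # A huge drop from s[k] to s[k+1] marks the numerical jump;
        # everything before the jump is counted as signal.
        if s[k + 1] == 0 or s[k] / s[k + 1] > ratio_tol:
            return k + 1
    return len(s)
```

On an exactly low-rank matrix (e.g., a product of a 50 × 5 and a 5 × 200 random factor), the trailing singular values collapse to machine precision and the estimate recovers the rank 5, matching the pattern of "big" versus "small" values reported in the table.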