Article

Sparse Constrained Low Tensor Rank Representation Framework for Hyperspectral Unmixing

1 Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(8), 1473; https://doi.org/10.3390/rs13081473
Submission received: 6 March 2021 / Revised: 30 March 2021 / Accepted: 7 April 2021 / Published: 11 April 2021
(This article belongs to the Section AI Remote Sensing)

Abstract

Recently, non-negative tensor factorization (NTF) has attracted considerable attention as a powerful tool for the unmixing of hyperspectral images (HSI), owing to its ability to describe the data without any loss of information. However, most existing NTF-based unmixing methods fail to fully exploit the distinctive properties of the data, such as the low-rank structure that exists in both the spectral and spatial domains. To exploit this structure, in this paper we learn different low-rank representations of the HSI in the spectral, spatial and non-local similarity modes. Firstly, the HSI is divided into many patches, and these patches are clustered into multiple groups according to their similarity. Each similarity group constitutes a 4-D tensor with two spatial modes, a spectral mode and a non-local similarity mode, and has strong low-rank properties. Secondly, a low-rank regularization based on a logarithmic function is designed and embedded in the NTF framework, which models the spatial, spectral and non-local similarity modes of these 4-D tensors. In addition, the sparsity of the abundance tensor is integrated into the unmixing framework through the $L_{2,1}$ norm to further improve the unmixing performance. Experiments on three real data sets illustrate the stability and effectiveness of our algorithm compared with five state-of-the-art methods.

1. Introduction

Hyperspectral remote sensing images are widely used in classification [1,2], target recognition [3], detection [4] and segmentation [5] tasks since they contain hundreds of spectral bands. Unfortunately, hyperspectral images (HSI) generally have a low spatial resolution, and a large number of mixed pixels exist in the image, which hinders the application of remote sensing technology. To solve this problem, unmixing is used to extract the pure spectra (i.e., endmembers) present in each pixel and the proportion (i.e., abundance) of each endmember [6,7]. In current research, there are two main kinds of unmixing models. One is the linear mixing model (LMM), which assumes that each pixel is linearly expressed by all endmembers in the image; the other is the nonlinear mixing model, which assumes multiple complex interactions among endmembers [8,9]. Since the LMM is simple and easy to model, this paper only discusses unmixing algorithms under the linear mixing assumption.
In recent years, the non-negative matrix factorization (NMF) framework has received extensive attention among unmixing algorithms under the LMM assumption, owing to its clear physical interpretability and strong data representation ability. NMF aims to find a set of bases that can express the data as accurately as possible, together with the expression coefficients of the data under those bases [10]. However, the NMF framework, which regards unmixing as a blind source separation problem, is non-convex with respect to the two matrices to be obtained; it is therefore greatly affected by the initial values, and its solution is not stable enough [11]. To improve unmixing performance, many regularizations have been added to the NMF framework based on prior knowledge about hyperspectral data, such as sparse regularization [12,13,14], smoothing regularization [15,16], non-local similarity regularization [17], manifold regularization [18], and low-rank regularization [19,20]. Hong et al. [21] embed spectral variability dictionary learning into the linear NMF framework to make the endmembers more accurate. Unfortunately, unmixing algorithms based on the NMF framework share a flaw: reshaping the 3-D HSI into a 2-D matrix destroys the integrity of the input data. Recently, inspired by the success of non-negative tensor factorization (NTF), which regards the HSI as a 3-D tensor without any information loss, NTF-based frameworks have been developed and have shown superior performance in hyperspectral unmixing [22,23,24,25,26]. Regrettably, the existing NTF-based unmixing methods mostly focus on the similarity of the abundances in the spatial domain, which is not conducive to fully exploring the internal similarity structure of the data, such as non-local spatial similarity and information redundancy in the spectral direction.
To overcome the above issues, a new sparse-constrained low-rank tensor factorization (SCLT) framework is proposed for hyperspectral unmixing in this paper, which integrates a multi-mode low-rank learning regularization and a sparsity constraint via the $L_{2,1}$ norm. Firstly, the hyperspectral data are divided into many patches of the same size. Then, all similar patches are clustered into the same group. Each similarity group can be regarded as a 4-D tensor with four modes: two spatial modes in the vertical and horizontal directions, a spectral mode and a non-local similarity mode. Since the 4-D tensors are composed of multiple similar patches, all four modes exhibit strong similarity relationships. Therefore, a low-rank regularization is designed here to learn these strong similarities, which expresses the multi-directional low-rank structure of the HSI. In addition, prior knowledge indicates that the abundances are sparse. To explore the sparse structure of the abundance tensor, the $L_{2,1}$ norm is introduced into the unmixing model to constrain the abundance tensor. Specifically, during each iteration, the sparse regularization forces the abundance tensor to remain sparse, while the low-rank regularization explores the similarity relationships in the various modes of the HSI. Since the proposed SCLT is a multi-variable problem, the alternating direction method of multipliers (ADMM) is utilized to optimize the objective function. For convenience, the contributions of this paper can be summarized as follows:
  • A new sparsity-constrained low-rank tensor factorization model (SCLT) is proposed for unmixing, which aims to maintain the low-rank spatial structure of the raw data and the sparsity of the abundance tensor during the iterative process.
  • A regularization is designed to explore the low-rank structure of the data, which learns the low-rank representation of the 4-D tensors containing similar patches in the spectral, spatial and non-local similarity modes.
  • A sparsity constraint based on the $L_{2,1}$ norm is embedded in the proposed model to explore the sparse structure of the abundance tensor, which improves the unmixing performance.
The rest of this paper is organized as follows. Section 2 introduces works related to NTF. The problem under the NTF unmixing framework is formulated in Section 3. Section 4 describes the proposed method in detail. Section 5 and Section 6 present the experimental analysis and conclusions, respectively.

2. Related Work

For hyperspectral mixed pixels, NMF has proved to be a very effective unmixing method. Under the LMM, the goal of NMF is to decompose a 3-D HSI, reshaped into a 2-D matrix, into two matrices: the endmember matrix and the abundance matrix. Specifically, each pixel in the hyperspectral data can be synthesized from the endmember matrix and its corresponding coefficients [27]. This model can accurately simulate a large number of mixed pixels, and the calculation is simple, convenient and easy to understand. However, the objective function of NMF is non-convex with respect to both the endmember matrix and the abundance matrix, and its solution is not unique [28]. Therefore, researchers have sought constraints to assist the NMF framework and improve unmixing performance. Among them, the sparse constraint is one of the most important. Imposing an $L_0$ constraint on the abundance matrix would be the ideal sparse regularization [16]. Regrettably, since $L_0$ minimization is NP-hard, many improved $L_p$ regularizers have been proposed. The $L_1$ regularization is used in [12] to promote the sparsity of the abundance matrix. The $L_{1/2}$ regularization, illustrated in [14,29], can give the matrix even better sparsity. Meanwhile, the $L_2$ regularization applied in [30] promotes smoothness of the abundance matrix rather than sparsity. Finally, to explore the sparse structure within the matrix rather than just the number of non-zeros, some methods also use the $L_{2,1}$ regularization to constrain the abundance matrix in an attempt to improve unmixing performance [31,32].
Unfortunately, NMF-based methods need to reshape the 3-D hyperspectral data into a 2-D matrix, which seriously damages the original spatial structure among pixels. In recent years, many works have applied the NTF framework in place of NMF for unmixing, treating the hyperspectral data as a 3-D tensor and decomposing it directly instead of reshaping it into a 2-D matrix [33]. Qian et al. [22] pioneered a matrix-vector unmixing method (MV-NTF), which combines CP and Tucker decompositions and opened up new ideas for unmixing methods. Subsequently, many improvements to MV-NTF have appeared. For instance, the S-MV-NTF algorithm proposed in [25] uses superpixel segmentation and low-rank tensor representation to consider both global and local information of the data. Xiong et al. [26] apply TV regularization to the abundance tensor. Sparse and minimum-volume constraints are added to MV-NTF in [34] to improve performance. Imbiriba et al. [35] propose a low-rank tensor regularization to deal with spectral variability. Non-local low-rank tensor regularizations with weight constraints have also shown their benefits for unmixing in [36,37]. Unfortunately, in these existing methods the non-local low-rank regularization is modeled by the nuclear norm or the weighted nuclear norm. These norms are defined by the singular values of a matrix, which brings higher computational complexity and unstable solutions.
Different from the above methods, the SCLT algorithm proposed in this paper uses NTF as the basic framework and simultaneously explores the relationships among pixels in the image and the sparsity of the abundances. It not only keeps the algorithm stable, but also improves the unmixing performance. By combining the sparse constraint and the low-rank constraint, SCLT differs substantially from the above-mentioned existing unmixing methods. Specifically, (1) SCLT learns the non-local similarity in the HSI and explores the low-rank structure of the 4-D tensor of each similarity group in four modes: two spatial modes, one spectral mode and one non-local similarity mode. (2) Embedding the $L_{2,1}$ sparse regularization into the NTF framework not only constrains the number of zeros in the abundance tensor, but also explores its sparse structure, that is, it preserves the zero regions in the abundance.

3. Problem Formulation

This section defines some concepts used in this paper and introduces the NTF unmixing framework.

3.1. Concepts

In this paper, calligraphic letters denote tensors and boldface uppercase letters denote matrices; boldface lowercase letters and italic letters denote vectors and scalars, respectively. For instance, $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ represents an HSI cube; $\mathbf{M} \in \mathbb{R}^{I \times J}$ is an endmember matrix; $\mathbf{y} \in \mathbb{R}^{I \times 1}$ is a pixel spectral vector; $I$ denotes the spectral dimension of the HSI.
Mode: The dimensions of a tensor are called modes. For example, the tensor $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ has three modes.
Frobenius Norm: The Frobenius norm of $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is:
$$\|\mathcal{Y}\|_F = \left(\sum_{i_1=1}^{I_1}\sum_{i_2=1}^{I_2}\sum_{i_3=1}^{I_3} |y_{i_1 i_2 i_3}|^2\right)^{1/2}.$$
Mode-n Unfolding: The tensor is expanded along mode $n$ to form a matrix. Given a tensor $\mathcal{Y} \in \mathbb{R}^{I \times J \times N}$, its mode-1, mode-2 and mode-3 unfoldings are defined element-wise as:
$$\mathbf{Y}_{\langle 1\rangle}\big((j-1)N+n,\, i\big) = y_{ijn}, \qquad \mathbf{Y}_{\langle 2\rangle}\big((n-1)I+i,\, j\big) = y_{ijn}, \qquad \mathbf{Y}_{\langle 3\rangle}\big((i-1)J+j,\, n\big) = y_{ijn}.$$
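To make the unfoldings in (2) concrete, the following minimal NumPy sketch (our illustration, not code from the paper) builds a mode-$n$ unfolding and its inverse; note that the ordering of the rows is a convention that differs across references, so this sketch simply keeps the natural order of the remaining axes:

```python
import numpy as np

def unfold(Y, mode):
    """Mode-n unfolding: the chosen mode indexes the columns and the
    remaining modes are flattened into rows (row ordering here keeps
    the natural order of the remaining axes)."""
    return np.moveaxis(Y, mode, -1).reshape(-1, Y.shape[mode])

def fold(M, mode, shape):
    """Inverse of unfold: rebuild the tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(*rest, shape[mode]), -1, mode)

Y = np.arange(24.0).reshape(2, 3, 4)      # a toy I x J x N tensor
assert np.allclose(fold(unfold(Y, 1), 1, Y.shape), Y)
```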

3.2. NTF Model

According to the LMM, each pixel is formed by a linear combination of all endmembers. Therefore, given hyperspectral data $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, each pixel $\mathbf{y}_n$ in $\mathcal{Y}$ can be expressed as follows:
$$\mathbf{y}_n = \mathbf{M}\mathbf{a}_n + \mathbf{e},$$
where $\mathbf{M} \in \mathbb{R}^{I_3 \times P}$ is the endmember matrix; $\mathbf{a}_n \in \mathbb{R}^{P \times 1}$ is the coefficient vector of the endmembers $\mathbf{M}$ in the pixel $\mathbf{y}_n$; $\mathbf{e}$ denotes noise; and $I_1$, $I_2$, $I_3$, $P$ are the length, width, number of bands and number of endmembers of the hyperspectral data, respectively.
For the entire HSI, the data are regarded as a 3-D tensor $\mathcal{Y}$, and the endmember spectra form a 2-D matrix $\mathbf{M}$; accordingly, the abundances $\mathcal{A}$ and noise $\mathcal{E}$ take the form of tensors. The model can then be expressed as:
$$\mathcal{Y} = \mathcal{A} \times_3 \mathbf{M} + \mathcal{E},$$
in which $\times_3$ denotes the mode-3 (tensor-matrix) product. Similar to the NMF framework, NTF also imposes two constraints: (1) the abundance non-negativity constraint (ANC), which ensures that the unmixing model has real physical meaning; (2) the abundance sum-to-one constraint (ASC), which keeps the proportions of all endmembers within each pixel summing to 1. Therefore, the objective function of the unmixing model under the NTF framework can be written as:
$$J(\mathcal{A}, \mathbf{M}) = \frac{1}{2}\left\|\mathcal{Y} - \mathcal{A} \times_3 \mathbf{M}\right\|_F^2, \quad \mathrm{s.t.}\ \mathcal{A} \ge 0,\ \mathcal{A} \times_3 \mathbf{1}_P^{T} = \mathbf{1}_{I_1 \times I_2},$$
where $\mathbf{1}_P$ and $\mathbf{1}_{I_1 \times I_2}$ denote a $P$-dimensional all-ones vector and an $I_1 \times I_2$ all-ones matrix, respectively; the product $\mathcal{A} \times_3 \mathbf{1}_P^{T}$ sums the abundances of each pixel over the endmember mode.
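As a quick check of this model (our own sketch with toy sizes, not the authors' code), the mode-3 product applies $\mathbf{M}$ to every spectral fiber of $\mathcal{A}$, so (4) reduces to the per-pixel LMM of (3):

```python
import numpy as np

I1, I2, I3, P = 10, 10, 50, 4            # toy dimensions (assumptions)
M = np.abs(np.random.rand(I3, P))        # endmember matrix
A = np.abs(np.random.rand(I1, I2, P))    # abundance tensor
A /= A.sum(axis=2, keepdims=True)        # ASC: abundances sum to one per pixel

Y = np.einsum('ijp,bp->ijb', A, M)       # mode-3 product A x_3 M

# the tensor form agrees with the per-pixel linear mixing model
assert np.allclose(Y[3, 7], M @ A[3, 7])
```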

4. Proposed Method

To fully explore the low-rank structure within the hyperspectral data and the sparse regions in the abundance tensor, rather than merely the number of non-zeros, a sparse constrained low-rank tensor factorization algorithm (named SCLT) is proposed. This section explains the proposed algorithm and its optimization under the ADMM framework in detail.

4.1. SCLT Model

Unlike [22,33,34,35], SCLT introduces a novel kind of low-rank tensor regularization to learn the low-rank structure of the HSI. According to prior knowledge, non-local similarities exist in hyperspectral data. First, the HSI is divided into many patches spanning all $I_3$ spectral bands using an $r \times r$ sliding window. Then, the K-means++ algorithm [38] is used to gather all similar patches together into several similarity groups. Each similarity group can be expressed as a 4-D tensor with four modes, i.e., two spatial modes, one spectral mode and one non-local similarity mode. Suppose $K$ clusters are obtained after clustering; then the $k$-th 4-D tensor can be written as $\mathcal{Y}^{(k)} \in \mathbb{R}^{r \times r \times I_3 \times N_k} = \{\mathcal{Y}^{(k,j)}\}_{j=1}^{N_k}$, $k = 1, 2, \dots, K$, where $N_k$ is the number of similar cubes. Since the pixels within a 4-D tensor are highly similar, they have strong internal correlation in the spatial, spectral and non-local similarity modes. Therefore, each 4-D tensor is unfolded along its modes to explore its low-rank attributes. Here a new regularization $\|\mathcal{Y}\|_{LR}$ that learns the multi-mode low-rank structure of the tensor is proposed. Specifically, for the $k$-th tensor, it is defined as:
$$\big\|\mathcal{Y}^{(k)}\big\|_{LR} = \sum_{t=1}^{3} a_t\, L\big(\mathbf{Y}^{(k)}_{\langle t\rangle}\big),$$
in which $\mathbf{Y}^{(k)}_{\langle t\rangle}$ denotes the unfolding matrix of the tensor $\mathcal{Y}^{(k)}$ along the $t$-th mode and $L(\cdot)$ is the low-rank norm. The three unfoldings group the first $t$ modes into rows, in the sequential (tensor-train style) manner of [39]: $\mathbf{Y}^{(k)}_{\langle 1\rangle} \in \mathbb{R}^{r \times rI_3N_k}$, $\mathbf{Y}^{(k)}_{\langle 2\rangle} \in \mathbb{R}^{r^2 \times I_3N_k}$ and $\mathbf{Y}^{(k)}_{\langle 3\rangle} \in \mathbb{R}^{r^2I_3 \times N_k}$. $a_t$ is a weight, defined similarly to [39]. In detail,
$$a_t = \frac{\beta_t}{\sum_{t'=1}^{3}\beta_{t'}}, \qquad \beta_t = \min\Big(\prod_{j=1}^{t} D_j,\ \prod_{j=t+1}^{4} D_j\Big), \quad t = 1, 2, 3,$$
where $D_j$ is the length of the $j$-th dimension of the tensor $\mathcal{Y}^{(k)}$.
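The construction of the similarity groups and the weights $a_t$ of Eq. (7) can be sketched as follows (our illustration under simplifying assumptions: non-overlapping windows and clustering on flattened patch values):

```python
import numpy as np
from sklearn.cluster import KMeans

def similarity_groups(Y, r, K):
    """Split Y (I1 x I2 x I3) into r x r full-band patches, cluster them
    with k-means++ seeding, and stack each cluster into a 4-D tensor of
    shape r x r x I3 x N_k."""
    I1, I2, _ = Y.shape
    patches = [Y[i:i + r, j:j + r, :]
               for i in range(0, I1 - r + 1, r)
               for j in range(0, I2 - r + 1, r)]
    feats = np.stack([p.ravel() for p in patches])
    labels = KMeans(n_clusters=K, init='k-means++', n_init=10).fit_predict(feats)
    return [np.stack([patches[n] for n in np.where(labels == k)[0]], axis=3)
            for k in range(K)]

def mode_weights(dims):
    """Weights a_t of Eq. (7) for a group tensor with dims = (r, r, I3, N_k):
    beta_t balances the row and column sizes of the t-th unfolding."""
    betas = [min(int(np.prod(dims[:t])), int(np.prod(dims[t:]))) for t in (1, 2, 3)]
    return np.array(betas, dtype=float) / sum(betas)
```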
In function (6), $L(\cdot)$ imposes a low-rank constraint on a matrix. Generally, the nuclear norm $\|\cdot\|_*$ is used to solve the low-rank approximation problem of a matrix $\mathbf{Y}$, defined as:
$$\|\mathbf{Y}\|_* = \sum_i \sigma_i(\mathbf{Y}),$$
where $\sigma_i(\mathbf{Y})$ represents the $i$-th singular value of the matrix $\mathbf{Y}$. It can be seen from (8) that the nuclear norm treats all singular values as equally important. However, the large singular values carry the main information of the image and are more important, while the small singular values mainly represent noise. Therefore, in low-rank approximation the large singular values should be suppressed less and the small singular values more. Motivated by this fact, ref. [39] argues that applying the logarithm function to the singular values solves the low-rank problem of the matrix more accurately. The logarithmic low-rank norm is defined as:
$$L(\mathbf{Y}) = \sum_i \log\big(\sigma_i(\mathbf{Y}) + \varepsilon\big),$$
where $\varepsilon$ is a very small constant that keeps the logarithm well defined.
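In code, the measure (9) is a one-line consequence of the SVD; a minimal sketch (the value of $\varepsilon$ here is our assumption):

```python
import numpy as np

def log_lowrank(Y, eps=1e-3):
    """Logarithmic low-rank measure of Eq. (9): sum of log(sigma_i + eps).
    Relative to the nuclear norm it penalizes large (informative) singular
    values less and small (noise-like) ones more."""
    sigma = np.linalg.svd(Y, compute_uv=False)
    return float(np.sum(np.log(sigma + eps)))
```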
Hence, (9) is used as the matrix low-rank measure in this paper. From the basic unmixing objective (5), all pixels in the HSI share one endmember matrix, so all relations among pixels can be transferred to the abundance tensor; the low-rank regularization is therefore imposed on the abundance in model (5). According to [40], properly relaxing the ASC can improve unmixing performance, so only the ANC is imposed on the abundance $\mathcal{A}$ here. The unmixing objective function designed in this paper can then be written as:
$$J(\mathcal{A}, \mathbf{M}) = \frac{1}{2}\left\|\mathcal{Y} - \mathcal{A} \times_3 \mathbf{M}\right\|_F^2 + \lambda_{LR}\sum_{k=1}^{K}\big\|\mathcal{A}^{(k)}\big\|_{LR}, \quad \mathrm{s.t.}\ \mathcal{A} \ge 0,$$
in which $\lambda_{LR}$ denotes the weight of the multi-mode low-rank regularization. In addition, according to the prior, the abundance tensor has connected zero regions on each band, i.e., it contains a sparse structure. To take advantage of this joint sparsity, we use the $L_{2,1}$ norm of the abundance tensor, which preserves the sparse regions of the abundance tensor during unmixing rather than merely the number of zeros. Specifically, the $L_{2,1}$ norm of the tensor is defined as follows:
$$\|\mathcal{A}\|_{2,1} = \sum_{p=1}^{P}\big\|\mathbf{A}_{:,:,p}\big\|_{2,1} = \sum_{p=1}^{P}\sum_{i=1}^{I_1}\sqrt{\sum_{j=1}^{I_2}\mathcal{A}_{i,j,p}^{2}}.$$
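A small sketch of Eq. (11) (ours) makes the joint-sparsity effect explicit: whole rows of each band slice are driven to zero together, rather than isolated entries:

```python
import numpy as np

def l21_norm(A):
    """L_{2,1} norm of the abundance tensor per Eq. (11): within each band
    slice A[:, :, p], take the l2 norm of every row, then sum over rows
    and bands."""
    return float(np.sum(np.sqrt(np.sum(A ** 2, axis=1))))
```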
Finally, the overall objective function of the SCLT model can be written as:
$$J(\mathcal{A}, \mathbf{M}) = \frac{1}{2}\left\|\mathcal{Y} - \mathcal{A} \times_3 \mathbf{M}\right\|_F^2 + \lambda_S\|\mathcal{A}\|_{2,1} + \lambda_{LR}\sum_{k=1}^{K}\sum_{t=1}^{3} a_t\, L\big(\mathbf{A}^{(k)}_{\langle t\rangle}\big), \quad \mathrm{s.t.}\ \mathcal{A} \ge 0,$$
where $\lambda_S$ is the weight of the sparse regularization.

4.2. Optimization

To solve (12), the endmember matrix $\mathbf{M}$ is updated first. With the other variables fixed, the objective function (12) reduces to:
$$J(\mathcal{A}, \mathbf{M}) = \frac{1}{2}\left\|\mathcal{Y} - \mathcal{A} \times_3 \mathbf{M}\right\|_F^2,$$
which is a convex problem in $\mathbf{M}$. Therefore, the update rule of $\mathbf{M}$ can be obtained directly:
$$\mathbf{M} \leftarrow \mathbf{M} \mathbin{.\!*} \big(\mathbf{Y}_{(3)}\mathbf{A}_{(3)}^{T}\big) \mathbin{.\!/} \big(\mathbf{M}\mathbf{A}_{(3)}\mathbf{A}_{(3)}^{T}\big),$$
where $\mathbf{Y}_{(3)}$ and $\mathbf{A}_{(3)}$ are the mode-3 unfoldings of the tensors $\mathcal{Y}$ and $\mathcal{A}$; $.*$ and $./$ represent element-wise multiplication and division, respectively.
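A minimal sketch of the multiplicative rule (14) (our illustration; the small eps guarding the division is our addition):

```python
import numpy as np

def update_M(M, Y, A, eps=1e-12):
    """Multiplicative update (14): unfold Y and A along the spectral /
    endmember mode and apply the classic NMF-style ratio update, which
    keeps M non-negative as long as it starts non-negative."""
    Y3 = Y.reshape(-1, Y.shape[2]).T     # I3 x (I1*I2), pixels as columns
    A3 = A.reshape(-1, A.shape[2]).T     # P  x (I1*I2)
    return M * (Y3 @ A3.T) / (M @ (A3 @ A3.T) + eps)
```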
Since directly solving for the abundance tensor $\mathcal{A}$ is difficult, this paper adopts the ADMM framework [39]. In detail, intermediate variables for the abundance $\mathcal{A}$ are introduced into (12), and the augmented Lagrangian function can be written as:
$$\begin{aligned} L(\mathcal{A}, \mathcal{Q}_1, \mathcal{H}_1, \mathcal{Q}_2, \mathcal{H}_2, \mathcal{Q}_3, \mathcal{H}_3, \mathcal{U}, \mathcal{V}, \mathcal{W}, \mathcal{H}_4, \mathcal{H}_5, \mathcal{H}_6) = \min\ & \frac{1}{2}\|\mathcal{Q}_1 - \mathcal{Y}\|_F^2 + \frac{\mu}{2}\|\mathcal{Q}_1 - \mathcal{A} \times_3 \mathbf{M} + \mathcal{H}_1\|_F^2 \\ & + \lambda_{LR}\sum_{k=1}^{K} a_1 L\big(\mathbf{U}^{k}_{\langle1\rangle}\big) + \frac{\mu}{2}\|\mathcal{U} - \mathcal{A} + \mathcal{H}_4\|_F^2 \\ & + \lambda_{LR}\sum_{k=1}^{K} a_2 L\big(\mathbf{V}^{k}_{\langle2\rangle}\big) + \frac{\mu}{2}\|\mathcal{V} - \mathcal{A} + \mathcal{H}_5\|_F^2 \\ & + \lambda_{LR}\sum_{k=1}^{K} a_3 L\big(\mathbf{W}^{k}_{\langle3\rangle}\big) + \frac{\mu}{2}\|\mathcal{W} - \mathcal{A} + \mathcal{H}_6\|_F^2 \\ & + \lambda_S\|\mathcal{Q}_2\|_{2,1} + \frac{\mu}{2}\|\mathcal{Q}_2 - \mathcal{A} + \mathcal{H}_2\|_F^2 \\ & + \iota_{\mathbb{R}_+}(\mathcal{Q}_3) + \frac{\mu}{2}\|\mathcal{Q}_3 - \mathcal{A} + \mathcal{H}_3\|_F^2, \end{aligned}$$
where $\mathcal{Q}_1, \mathcal{Q}_2, \mathcal{Q}_3, \mathcal{U}, \mathcal{V}, \mathcal{W}$ are intermediate variables for the tensor $\mathcal{A}$; $\mathcal{H}_1, \mathcal{H}_2, \mathcal{H}_3, \mathcal{H}_4, \mathcal{H}_5, \mathcal{H}_6$ denote the Lagrangian multipliers; $\mu$ is the Lagrangian penalty parameter; and $\iota_{\mathbb{R}_+}(\cdot)$ is the indicator function of the non-negative orthant. The detailed solution for each variable is presented below. Specifically,
(a) Update $\mathcal{A}$. With the other variables fixed, (15) can be rewritten as:
$$L(\mathcal{A}) = \min\ \frac{\mu}{2}\|\mathcal{Q}_1 - \mathcal{A} \times_3 \mathbf{M} + \mathcal{H}_1\|_F^2 + \frac{\mu}{2}\|\mathcal{U} - \mathcal{A} + \mathcal{H}_4\|_F^2 + \frac{\mu}{2}\|\mathcal{V} - \mathcal{A} + \mathcal{H}_5\|_F^2 + \frac{\mu}{2}\|\mathcal{W} - \mathcal{A} + \mathcal{H}_6\|_F^2 + \frac{\mu}{2}\|\mathcal{Q}_2 - \mathcal{A} + \mathcal{H}_2\|_F^2 + \frac{\mu}{2}\|\mathcal{Q}_3 - \mathcal{A} + \mathcal{H}_3\|_F^2.$$
Obviously, (16) is a convex function of the tensor $\mathcal{A}$. Setting its partial derivative with respect to $\mathcal{A}$ to zero yields the updating rule:
$$\mathcal{A} \leftarrow \big((\mathcal{Q}_1 + \mathcal{H}_1) \times_3 \mathbf{M}^{T} + \mathcal{U} + \mathcal{H}_4 + \mathcal{V} + \mathcal{H}_5 + \mathcal{W} + \mathcal{H}_6 + \mathcal{Q}_2 + \mathcal{H}_2 + \mathcal{Q}_3 + \mathcal{H}_3\big) \times_3 \big(\mathbf{M}^{T}\mathbf{M} + 5\mathbf{I}\big)^{-1},$$
where $\mathbf{I}$ is an identity matrix.
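In code, the update (17) is one mode-3 product with $\mathbf{M}^T$ plus a small $P \times P$ inverse applied along the endmember mode; a sketch (ours):

```python
import numpy as np

def update_A(Q1, H1, Q2, H2, Q3, H3, U, H4, V, H5, W, H6, M):
    """A-update of Eq. (17): collect the six auxiliary terms, then apply
    (M^T M + 5I)^{-1} along the endmember mode of the result (the inverse
    is symmetric, so applying it from either side of a fiber is the same)."""
    P = M.shape[1]
    rhs = (np.einsum('ijb,bp->ijp', Q1 + H1, M)   # (Q1 + H1) x_3 M^T
           + U + H4 + V + H5 + W + H6
           + Q2 + H2 + Q3 + H3)
    inv = np.linalg.inv(M.T @ M + 5.0 * np.eye(P))
    return np.einsum('ijp,pq->ijq', rhs, inv)
```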
(b) Update $\mathcal{Q}_1$, $\mathcal{Q}_2$, $\mathcal{Q}_3$, $\mathcal{U}$, $\mathcal{V}$, $\mathcal{W}$.
When updating $\mathcal{Q}_1$, the objective function (15) can be expressed as:
$$\mathcal{Q}_1 = \arg\min_{\mathcal{Q}_1}\ \frac{1}{2}\|\mathcal{Q}_1 - \mathcal{Y}\|_F^2 + \frac{\mu}{2}\|\mathcal{Q}_1 - \mathcal{A} \times_3 \mathbf{M} + \mathcal{H}_1\|_F^2.$$
Similar to (16), the function (18) is also convex in $\mathcal{Q}_1$ and can be solved easily. The specific update rule is as follows:
$$\mathcal{Q}_1 \leftarrow \frac{1}{1+\mu}\big(\mathcal{Y} + \mu(\mathcal{A} \times_3 \mathbf{M} - \mathcal{H}_1)\big).$$
When updating $\mathcal{Q}_2$, the objective function (15) can be expressed as:
$$\mathcal{Q}_2 = \arg\min_{\mathcal{Q}_2}\ \lambda_S\|\mathcal{Q}_2\|_{2,1} + \frac{\mu}{2}\|\mathcal{Q}_2 - \mathcal{A} + \mathcal{H}_2\|_F^2.$$
The objective function in [32] is similar to (20); hence the update rule of $\mathcal{Q}_2$ is:
$$\mathcal{Q}_2(i,:) \leftarrow \mathrm{vect\text{-}soft}\Big(\boldsymbol{\gamma}(i,:),\ \frac{\lambda_S}{\mu}\Big),$$
in which $\mathrm{vect\text{-}soft}(\mathbf{a}, b)$ is a threshold function defined as $\mathrm{vect\text{-}soft}(\mathbf{a}, b) = \mathbf{a}\,\big(\max\{\|\mathbf{a}\|_2 - b, 0\} / (\max\{\|\mathbf{a}\|_2 - b, 0\} + b)\big)$, and $\boldsymbol{\gamma} = \mathcal{A} - \mathcal{H}_2$.
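The vect-soft operator used in (21) is the row-wise group soft threshold; a short sketch (ours, assuming $b > 0$):

```python
import numpy as np

def vect_soft(a, b):
    """vect-soft(a, b): shrink the l2 norm of the row vector a by b;
    the whole row is zeroed out when ||a||_2 <= b (requires b > 0)."""
    shrink = max(np.linalg.norm(a) - b, 0.0)
    return a * shrink / (shrink + b)
```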
When updating $\mathcal{Q}_3$, the objective function (15) can be expressed as:
$$\mathcal{Q}_3 = \arg\min_{\mathcal{Q}_3}\ \iota_{\mathbb{R}_+}(\mathcal{Q}_3) + \frac{\mu}{2}\|\mathcal{Q}_3 - \mathcal{A} + \mathcal{H}_3\|_F^2.$$
The purpose of $\mathcal{Q}_3$ is to ensure that the abundance tensor $\mathcal{A}$ always has physical meaning during unmixing, i.e., it enforces the non-negativity of $\mathcal{A}$. The update rule of $\mathcal{Q}_3$ follows easily:
$$\mathcal{Q}_3 \leftarrow \max\big(\mathcal{A} - \mathcal{H}_3,\ 0\big).$$
When updating $\mathcal{U}$, the objective function (15) can be expressed as:
$$\mathcal{U} = \arg\min_{\mathcal{U}}\ \lambda_{LR}\sum_{k=1}^{K} a_1 L\big(\mathbf{U}^{k}_{\langle1\rangle}\big) + \frac{\mu}{2}\sum_{k=1}^{K}\big\|\mathbf{U}^{k}_{\langle1\rangle} - \mathbf{A}^{k}_{\langle1\rangle} + \mathbf{H}^{k}_{4\langle1\rangle}\big\|_F^2,$$
in which $\mathbf{U}^{k}_{\langle1\rangle}$, $\mathbf{A}^{k}_{\langle1\rangle}$ and $\mathbf{H}^{k}_{4\langle1\rangle}$ are the mode-1 unfoldings of the tensors $\mathcal{U}^{k}$, $\mathcal{A}^{k}$ and $\mathcal{H}_4^{k}$, respectively. $\mathcal{U}^{k}$, $\mathcal{A}^{k}$ and $\mathcal{H}_4^{k}$ are the $k$-th similarity groups of $\mathcal{U}$, $\mathcal{A}$ and $\mathcal{H}_4$, which are all 4-D tensors. According to [39], the solution to problem (24) can be written as:
$$\mathbf{U}^{k}_{\langle1\rangle} = \mathbf{U}_1^{k}\,\tilde{\boldsymbol{\Sigma}}_1^{k}\,\mathbf{V}_1^{k\,T}, \quad 1 \le k \le K.$$
Here, $\mathbf{U}_1^{k}$, $\boldsymbol{\Sigma}_1^{k}$ and $\mathbf{V}_1^{k}$ are obtained by SVD; in detail, $\mathbf{A}^{k}_{\langle1\rangle} - \mathbf{H}^{k}_{4\langle1\rangle} = \mathbf{U}_1^{k}\boldsymbol{\Sigma}_1^{k}\mathbf{V}_1^{k\,T}$, and $\tilde{\boldsymbol{\Sigma}}_1^{k}$ is a diagonal matrix obtained from $\boldsymbol{\Sigma}_1^{k}$. Specifically,
$$\tilde{\boldsymbol{\Sigma}}_1^{k}(i,i) = E_{(a_1\lambda_{LR}),\,\varepsilon}\big(\boldsymbol{\Sigma}_1^{k}(i,i)\big), \qquad E_{\alpha,\varepsilon}(x) = \begin{cases} 0, & c_2 \le 0, \\[2pt] \dfrac{c_1 + \sqrt{c_2}}{2}, & c_2 > 0, \end{cases}$$
where $c_1$ and $c_2$ are given by $c_1 = |x| - \varepsilon$ and $c_2 = c_1^2 - 4(\alpha - \varepsilon|x|)$. After obtaining $\mathbf{U}^{k}_{\langle1\rangle}$ through (25), the mode-1 unfolding matrix is folded back into the tensor $\mathcal{U}^{k}$. Since $\mathcal{U}^{k}$ is the $k$-th clustering group of $\mathcal{U}$, the tensor $\mathcal{U}$ can be reassembled through the corresponding relationship.
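A sketch of the operator $E$ in (26) and the resulting update (25) (our code, written from the closed form above):

```python
import numpy as np

def E(x, alpha, eps=1e-3):
    """Shrinkage of Eq. (26): closed-form proximal map of the penalty
    alpha * log(sigma + eps), evaluated at a singular value x."""
    c1 = abs(x) - eps
    c2 = c1 ** 2 - 4.0 * (alpha - eps * abs(x))
    return 0.0 if c2 <= 0 else 0.5 * (c1 + np.sqrt(c2))

def lowrank_update(A1k, H1k, alpha, eps=1e-3):
    """Eq. (25): SVD of A_<1>^k - H4_<1>^k, shrink each singular value
    with E, and rebuild the unfolded low-rank estimate U_<1>^k."""
    Usvd, s, Vt = np.linalg.svd(A1k - H1k, full_matrices=False)
    return (Usvd * np.array([E(v, alpha, eps) for v in s])) @ Vt
```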
It can be seen from (15) that the update rules of $\mathcal{U}$, $\mathcal{V}$ and $\mathcal{W}$ are similar. Hence, the update rules of $\mathcal{V}$ and $\mathcal{W}$ are as follows:
$$\mathbf{V}^{k}_{\langle2\rangle} = \mathbf{U}_2^{k}\,\tilde{\boldsymbol{\Sigma}}_2^{k}\,\mathbf{V}_2^{k\,T}, \qquad \mathbf{W}^{k}_{\langle3\rangle} = \mathbf{U}_3^{k}\,\tilde{\boldsymbol{\Sigma}}_3^{k}\,\mathbf{V}_3^{k\,T}, \quad 1 \le k \le K,$$
where $\mathbf{A}^{k}_{\langle2\rangle} - \mathbf{H}^{k}_{5\langle2\rangle} = \mathbf{U}_2^{k}\boldsymbol{\Sigma}_2^{k}\mathbf{V}_2^{k\,T}$ and $\mathbf{A}^{k}_{\langle3\rangle} - \mathbf{H}^{k}_{6\langle3\rangle} = \mathbf{U}_3^{k}\boldsymbol{\Sigma}_3^{k}\mathbf{V}_3^{k\,T}$. $\tilde{\boldsymbol{\Sigma}}_2^{k}$ and $\tilde{\boldsymbol{\Sigma}}_3^{k}$ are obtained via $\tilde{\boldsymbol{\Sigma}}_2^{k}(i,i) = E_{(a_2\lambda_{LR}),\,\varepsilon}\big(\boldsymbol{\Sigma}_2^{k}(i,i)\big)$ and $\tilde{\boldsymbol{\Sigma}}_3^{k}(i,i) = E_{(a_3\lambda_{LR}),\,\varepsilon}\big(\boldsymbol{\Sigma}_3^{k}(i,i)\big)$. $\mathcal{V}$ and $\mathcal{W}$ are then recovered from their unfoldings in the same way as $\mathcal{U}$.
(c) Update the Lagrangian multipliers $\mathcal{H}_1, \mathcal{H}_2, \mathcal{H}_3, \mathcal{H}_4, \mathcal{H}_5, \mathcal{H}_6$. From (15), the update rules of the six Lagrangian multipliers follow directly. In detail,
$$\mathcal{H}_1 = \mathcal{H}_1 - \mathcal{A} \times_3 \mathbf{M} + \mathcal{Q}_1, \quad \mathcal{H}_i = \mathcal{H}_i - \mathcal{A} + \mathcal{Q}_i,\ i = 2, 3, \quad \mathcal{H}_4 = \mathcal{H}_4 - \mathcal{A} + \mathcal{U}, \quad \mathcal{H}_5 = \mathcal{H}_5 - \mathcal{A} + \mathcal{V}, \quad \mathcal{H}_6 = \mathcal{H}_6 - \mathcal{A} + \mathcal{W}.$$
The proposed SCLT algorithm for hyperspectral unmixing is summarized in Algorithm 1. It is worth noting that, similar to [41], a threshold is set here: once the error falls within the threshold more than 10 times, the iteration is stopped.
Algorithm 1 The proposed method SCLT
Input:
       $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$—the observed hyperspectral image; $P$—the number of endmembers;
      $r$—the size of the window; $K$—the number of similarity groups; parameters $\lambda_S$ and $\lambda_{LR}$.
Initialize $\mathbf{M} \in \mathbb{R}^{I_3 \times P}$ by the VCA-based method, $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times P}$ by FCLS, and $\mathcal{Q}_1, \mathcal{Q}_2, \mathcal{Q}_3, \mathcal{U}, \mathcal{V}, \mathcal{W}, \mathcal{H}_1, \mathcal{H}_2, \mathcal{H}_3, \mathcal{H}_4, \mathcal{H}_5, \mathcal{H}_6$.
      While not converged
            1. Update the endmember matrix $\mathbf{M}$ by (14);
            2. Update the abundance tensor $\mathcal{A}$ via (17);
            3. Update the variables $\mathcal{Q}_1$ by (19), $\mathcal{Q}_2$ by (21), $\mathcal{Q}_3$ by (23), and $\mathcal{U}$, $\mathcal{V}$ and $\mathcal{W}$ via (25) and (27);
            4. Update the Lagrangian multipliers $\mathcal{H}_1, \mathcal{H}_2, \mathcal{H}_3, \mathcal{H}_4, \mathcal{H}_5, \mathcal{H}_6$ by (28).
       end
Output:
       $\mathcal{A}$—the abundance tensor; $\mathbf{M}$—the endmember matrix.

5. Experiments

In this section, a series of experiments is presented to test the performance of the proposed SCLT algorithm on three real hyperspectral data sets. To demonstrate the effectiveness and superiority of SCLT, five existing methods were used for comparison, namely SULRSR-TV [42], SGSNMF [43], MV-NTF [22], NL-TSUn [33] and ULTRA-V [35]. Among them, SULRSR-TV and SGSNMF are unmixing algorithms based on the NMF framework, while MV-NTF, NL-TSUn and ULTRA-V are unmixing methods based on the NTF framework. It is worth noting that, to ensure the fairness of all comparative experiments, we used the VCA-FCLS algorithm [44] on all data to determine the initial values of endmembers and abundances. On this basis, each algorithm was run 20 times, and the average values were taken as the final results.
Three subsections follow. Section 5.1 introduces the evaluation indicators used for all algorithms in this paper. The results on the three real hyperspectral data sets are described in Section 5.2. Section 5.3 analyzes the impact of each parameter of the SCLT algorithm on the unmixing performance and the effectiveness of each module in SCLT.

5.1. Evaluation Indexes

This paper uses two common unmixing evaluation indicators [8], the spectral angle distance (SAD) and the root-mean-square error (RMSE). Specifically, given the ground-truth endmember matrix $\hat{\mathbf{M}}$ and abundance tensor $\hat{\mathcal{A}}$, RMSE and SAD are calculated as:
$$\mathrm{RMSE} = \sqrt{\frac{1}{I_1 I_2 P}\big\|\mathcal{A} - \hat{\mathcal{A}}\big\|_F^2}, \qquad \mathrm{SAD} = \cos^{-1}\left(\frac{\hat{\mathbf{m}}^{T}\mathbf{m}}{\|\hat{\mathbf{m}}\|\,\|\mathbf{m}\|}\right),$$
where $\mathcal{A}$ is the abundance tensor extracted by the proposed SCLT algorithm, and the SAD is computed between each extracted endmember spectrum $\mathbf{m}$ (a column of $\mathbf{M}$) and its ground-truth counterpart $\hat{\mathbf{m}}$.
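In code, the two indicators are straightforward (our sketch; RMSE here averages over all abundance entries, and SAD is computed per endmember):

```python
import numpy as np

def rmse(A_est, A_ref):
    """Root-mean-square error between estimated and reference abundances."""
    return float(np.sqrt(np.mean((A_est - A_ref) ** 2)))

def sad(m_est, m_ref):
    """Spectral angle distance (radians) between one extracted endmember
    spectrum and its ground-truth counterpart."""
    c = m_est @ m_ref / (np.linalg.norm(m_est) * np.linalg.norm(m_ref))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))
```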

5.2. Experiments on the Real Data

This subsection reports the unmixing performance of the proposed SCLT algorithm and the five comparison methods on three real hyperspectral data sets.
Jasper Ridge Dataset. Figure 1a shows a pseudo-color image of the Jasper Ridge data. These data were captured by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. The original data have 224 bands, but bands 1–3, 108–112, 154–166 and 220–224 are severely affected by water vapor and noise and are removed here. The final data size is $100 \times 100 \times 198$. According to [45], the Jasper Ridge data contain four endmembers, namely Soil, Water, Road and Tree.
Table 1 shows the SAD results, with deviations, of the five comparison methods and the proposed SCLT algorithm on the Jasper Ridge data. It can be seen that the unmixing performance of the algorithms on the Jasper Ridge data is generally not high. The proposed SCLT algorithm has a significant advantage only in extracting the endmember spectrum of Soil. Although its SAD on the other three endmembers is not the best, it is stable when extracting each spectrum, with deviations close to zero. Figure 2 and Figure 3 give visual comparisons of the unmixing results of the six algorithms. In Figure 2, the abundance maps of the proposed SCLT are the closest to the reference abundance maps, which shows its superiority in abundance estimation. In Figure 3, all algorithms generally perform poorly on the fourth spectral curve, since it corresponds to the Road spectrum, which places higher demands on the unmixing algorithm. Overall, this experiment shows that the proposed SCLT algorithm has a modest advantage over the other algorithms on the Jasper Ridge data.
Samon Dataset. According to existing research [46], the Samon data mainly contain three types of ground objects: Soil, Tree and Water. The data have 156 bands covering 0.401–0.889 μm. Since the original image is very large, a sub-image of size $95 \times 95$ is used in the experiments to save space. The pseudo-color image of the Samon data is shown in Figure 1b. Table 2 gives the quantitative comparison of the five comparison methods and the proposed SCLT on the Samon data. It can be seen that SULRSR-TV, SGSNMF and SCLT all show good unmixing results on individual spectra, but the five comparison algorithms perform poorly on Soil. SCLT not only achieves the best average result, but also shows extremely high unmixing stability on each endmember, with the smallest deviations. The endmember spectral curves and estimated abundance maps extracted by the six unmixing methods in this experiment are presented in Figure 4 and Figure 5, respectively; the visualizations confirm that the proposed SCLT method has the best unmixing performance.
Indiana Pines Dataset. The Indiana Pines dataset is widely used for unmixing. It contains 224 bands, some of which are affected by water vapor and have a low signal-to-noise ratio (SNR); therefore, 169 bands are retained, and the spatial size is $145 \times 145$. The pseudo-color image is shown in Figure 1c. Yuan et al. [41] point out that the Indiana Pines dataset mainly includes six kinds of materials, i.e., Wheat, Man-made land, Soybean, Corn, Vegetation and Haystack. The SAD results of the six algorithms are shown in Table 3. The SCLT algorithm again shows very good stability, consistent with the unmixing results on the Jasper and Samon data. Although the proposed SCLT algorithm does not achieve the lowest error on every endmember spectrum, its average value is the smallest. Figure 6 shows the endmember spectrum comparison curves extracted by the six algorithms. Since the Indiana dataset has no abundance ground truth, we only show the comparison of the abundance maps in Figure 7; the results are similar to [8], which supports the superiority of the proposed SCLT algorithm.

5.3. Parameter Analysis

The parameters involved in the SCLT algorithm are discussed in this subsection: the weight $\lambda_{LR}$ of the low-rank regularization, the weight $\lambda_S$ of the sparse constraint, the patch size $r$, and the number $K$ of similarity groups. Due to space limitations, the Jasper Ridge dataset is taken as an example to illustrate how the parameters are evaluated. According to [8], the control-variable method is used to determine the parameters. Firstly, $\lambda_{LR}$, $\lambda_S$ and $r$ are set to 0.01, 0.01 and 5, respectively, and $K$ is varied over $\{1, 5, 10, 15, 20, 25, 30, 35, 40, 45\}$. It can be seen from Figure 8a that the unmixing achieves the best performance when $K = 20$. Subsequently, $K$ is fixed to 20 and $r$ is varied over $\{1, 2, 3, 4, 5, 6, 7, 8, 9, 10\}$. Figure 8b shows the SAD results under these parameter settings; the SCLT algorithm achieves the best performance on the Jasper Ridge dataset when $r = 3$.
Finally, the two regularization parameters are discussed, where $\lambda_{LR}$ weights the proposed low-rank regularization and $\lambda_S$ weights the sparsity constraint. With $\lambda_S$ fixed at 0.01, $\lambda_{LR}$ is set in turn to $\{0, 0.5 \times 10^{-3}, 1 \times 10^{-2}, 1.5 \times 10^{-2}, 2 \times 10^{-2}, 2.5 \times 10^{-2}, 3 \times 10^{-2}, 3.5 \times 10^{-2}, 4 \times 10^{-2}, 4.5 \times 10^{-2}, 5 \times 10^{-2}\}$. Then $\lambda_{LR}$ is fixed at 0.02 and $\lambda_S$ is set in turn to $\{0, 1 \times 10^{-2}, 2 \times 10^{-2}, 3 \times 10^{-2}, 4 \times 10^{-2}, 5 \times 10^{-2}, 6 \times 10^{-2}, 7 \times 10^{-2}, 8 \times 10^{-2}, 9 \times 10^{-2}, 1 \times 10^{-1}\}$. A zero value is included for both parameters to verify the effectiveness of each term of the proposed algorithm: when $\lambda_{LR}$ is zero, SCLT retains only the sparse constraint, and when $\lambda_S$ is zero, only the low-rank constraint. The unmixing results in Figure 9 show that the SCLT algorithm performs best when $\lambda_{LR} = 0.02$ and $\lambda_S = 0.02$, which also confirms the effectiveness of each constraint. Following the same procedure, the parameter settings for the remaining experiments are listed in Table 4.

6. Conclusions

In this paper, we propose a novel non-negative tensor factorization framework for hyperspectral unmixing that combines a low tensor rank constraint with an $L_{2,1}$ sparsity constraint. A regularization that describes the low-rank properties of HSI is designed to learn the low-rank structure of the similarity groups along the spatial, spectral and non-local similarity modes of the image. In addition, the $L_{2,1}$ norm is applied to exploit the sparseness of the abundances. The proposed unmixing framework SCLT can therefore both learn the low-rank structure of the data in its different modes and preserve the sparseness of the abundance. SCLT is optimized by the ADMM method. The experimental results on three real data sets demonstrate the superiority and effectiveness of the proposed algorithm.

Author Contributions

Conceptualization, L.D.; Data curation, L.D.; Formal analysis, L.D.; Funding acquisition, Y.Y.; Investigation, L.D.; Methodology, L.D.; Supervision, Y.Y.; Writing—original draft, L.D.; Writing—review and editing, L.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China (2020YFB2103902), the National Science Fund for Distinguished Young Scholars (61825603) and the Key Program of the National Natural Science Foundation of China (61632018).

Data Availability Statement

All the data in our paper can be downloaded publicly from http://lesun.weebly.com/hyperspectral-data-set.html (accessed on 6 March 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lu, X.; Sun, H.; Zheng, X. A Feature Aggregation Convolutional Neural Network for Remote Sensing Scene Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7894–7906.
  2. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 1–13.
  3. Fan, X.; Zhang, Y.; Li, F.; Chen, Y.; Shao, T.; Zhou, S. A robust spectral target recognition method for hyperspectral data based on combined spectral signatures. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; pp. 4328–4331.
  4. Zhang, W.; Lu, X.; Li, X. Similarity Constrained Convex Nonnegative Matrix Factorization for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4810–4822.
  5. Sun, H.; Li, S.; Zheng, X.; Lu, X. Remote Sensing Scene Classification by Gated Bidirectional Network. IEEE Trans. Geosci. Remote Sens. 2020, 58, 82–96.
  6. Boardman, J.W. Mapping target signatures via partial unmixing of AVIRIS data. In Proceedings of the JPL Airborne Geoscience Workshop, Pasadena, CA, USA, 23–26 January 1995.
  7. Zhang, B.; Sun, X.; Gao, L.; Yang, L. Endmember Extraction of Hyperspectral Remote Sensing Images Based on the Ant Colony Optimization (ACO) Algorithm. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2635–2646.
  8. Lu, X.; Dong, L.; Yuan, Y. Subspace Clustering Constrained Sparse NMF for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3007–3019.
  9. Kizel, F.; Benediktsson, J.A. Spatially Enhanced Spectral Unmixing Through Data Fusion of Spectral and Visible Images from Different Sensors. Remote Sens. 2020, 12, 1255.
  10. Miao, L.; Qi, H. Endmember Extraction From Highly Mixed Data Using Minimum Volume Constrained Nonnegative Matrix Factorization. IEEE Trans. Geosci. Remote Sens. 2007, 45, 765–777.
  11. Zhang, Z.; Liao, S.; Zhang, H.; Wang, S.; Wang, Y. Bilateral Filter Regularized L2 Sparse Nonnegative Matrix Factorization for Hyperspectral Unmixing. Remote Sens. 2018, 10, 816.
  12. Guo, Z.; Wittman, T.; Osher, S. L1 unmixing and its application to hyperspectral image enhancement. In Proceedings of SPIE Defense, Security, and Sensing, Orlando, FL, USA, 13 March 2009.
  13. Qian, Y.; Jia, S.; Zhou, J.; Robles-Kelly, A. Hyperspectral unmixing via L1/2 sparsity-constrained nonnegative matrix factorization. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4282–4297.
  14. Xu, Z.; Chang, X.; Xu, F.; Zhang, H. L1/2 regularization: A thresholding representation theory and a fast solver. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1013.
  15. Yao, J.; Meng, D.; Zhao, Q.; Cao, W.; Xu, Z. Nonconvex-Sparsity and Nonlocal-Smoothness-Based Blind Hyperspectral Unmixing. IEEE Trans. Image Process. 2019, 28, 2991–3006.
  16. Salehani, Y.E.; Gazor, S. Smooth and Sparse Regularization for NMF Hyperspectral Unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3677–3692.
  17. Yang, L.; Peng, J.; Su, H.; Xu, L.; Wang, Y.; Yu, B. Combined Nonlocal Spatial Information and Spatial Group Sparsity in NMF for Hyperspectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1767–1771.
  18. Wang, H.; Yang, W.; Guan, N. Cauchy sparse NMF with manifold regularization: A robust method for hyperspectral unmixing. Knowl. Based Syst. 2019, 184, 104898.1–104898.16.
  19. Wang, M.; Zhang, B.; Pan, X.; Yang, S. Group Low-Rank Nonnegative Matrix Factorization with Semantic Regularizer for Hyperspectral Unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1022–1029.
  20. Zheng, Y.; Wu, F.; Shim, H.J.; Sun, L. Sparse Unmixing for Hyperspectral Image with Nonlocal Low-Rank Prior. Remote Sens. 2019, 11, 2897.
  21. Hong, D.; Yokoya, N.; Chanussot, J.; Zhu, X.X. An Augmented Linear Mixing Model to Address Spectral Variability for Hyperspectral Unmixing. IEEE Trans. Image Process. 2019, 28, 1923–1938.
  22. Qian, Y.; Xiong, F.; Zeng, S.; Zhou, J.; Tang, Y.Y. Matrix-Vector Nonnegative Tensor Factorization for Blind Unmixing of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1776–1792.
  23. Bilius, L.B.; Pentiuc, S.G. Improving the Analysis of Hyperspectral Images Using Tensor Decomposition. In Proceedings of the 2020 International Conference on Development and Application Systems (DAS), Suceava, Romania, 21–23 May 2020.
  24. Chatzichristos, C.; Kofidis, E.; Morante, M.; Theodoridis, S. Blind fMRI Source Unmixing via Higher-Order Tensor Decompositions. J. Neurosci. Methods 2019, 315, 17–47.
  25. Xiong, F.; Chen, J.; Zhou, J.; Qian, Y. Superpixel-Based Nonnegative Tensor Factorization for Hyperspectral Unmixing. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 6392–6395.
  26. Xiong, F.; Qian, Y.; Zhou, J.; Tang, Y.Y. Hyperspectral Unmixing via Total Variation Regularized Nonnegative Tensor Factorization. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2341–2357.
  27. Yuan, B. NMF hyperspectral unmixing algorithm combined with spatial and spectral correlation analysis. J. Remote Sens. 2018, 22, 265–276.
  28. Yuan, Y.; Feng, Y.; Lu, X. Projection-Based NMF for Hyperspectral Unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2632–2643.
  29. Xu, Z.; Zhang, H.; Wang, Y.; Chang, X.; Liang, Y. L1/2 regularization. Sci. China Inf. Sci. 2010, 53, 1159–1169.
  30. Pauca, V.P.; Piper, J.; Plemmons, R.J. Nonnegative matrix factorization for spectral data analysis. Linear Algebra Its Appl. 2006, 416, 29–47.
  31. Li, M.; Zhu, F.; Guo, A.J.X. A Robust Multilinear Mixing Model with L2,1 Norm for Unmixing Hyperspectral Images. In Proceedings of the 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP), Macau, China, 1–4 December 2020.
  32. Ma, Y.; Li, C.; Mei, X.; Liu, C.; Ma, J. Robust Sparse Hyperspectral Unmixing with L2,1 Norm. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1227–1239.
  33. Huang, J.; Huang, T.Z.; Zhao, X.L.; Deng, L.J. Nonlocal Tensor-Based Sparse Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2020, 1–15.
  34. Feng, B.; Wang, J. Constrained Nonnegative Tensor Factorization for Spectral Unmixing of Hyperspectral Images: A Case Study of Urban Impervious Surface Extraction. IEEE Geosci. Remote Sens. Lett. 2019, 16, 583–587.
  35. Imbiriba, T.; Borsoi, R.A.; Bermudez, J.C.M. Low-Rank Tensor Modeling for Hyperspectral Unmixing Accounting for Spectral Variability. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1833–1842.
  36. Sun, L.; Wu, F.; Zhan, T.; Liu, W.; Wang, J.; Jeon, B. Weighted Nonlocal Low-Rank Tensor Decomposition Method for Sparse Unmixing of Hyperspectral Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1174–1188.
  37. Zhang, S.; Zhang, G.; Deng, C.; Li, J.; Wang, S.; Wang, J.; Plaza, A. Spectral-Spatial Weighted Sparse Nonnegative Tensor Factorization for Hyperspectral Unmixing. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Online, 26 September–2 October 2020; pp. 2177–2180.
  38. Arthur, D.; Vassilvitskii, S. K-Means++: The Advantages of Careful Seeding. In Proceedings of the Eighteenth ACM-SIAM Symposium on Discrete Algorithms, Philadelphia, PA, USA, 7–9 January 2007.
  39. Dian, R.; Li, S.; Fang, L. Learning a Low Tensor-Train Rank Representation for Hyperspectral Image Super-Resolution. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2672–2683.
  40. Akhtar, N.; Mian, A. RCMF: Robust Constrained Matrix Factorization for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3354–3366.
  41. Yuan, Y.; Fu, M.; Lu, X. Substance Dependence Constrained Sparse NMF for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2975–2986.
  42. Li, H.; Feng, R.; Wang, L.; Zhong, Y.; Zhang, L. Superpixel-Based Reweighted Low-Rank and Total Variation Sparse Unmixing for Hyperspectral Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 629–647.
  43. Wang, X.; Zhong, Y.; Zhang, L.; Xu, Y. Spatial group sparsity regularized nonnegative matrix factorization for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6287–6304.
  44. Nascimento, J.M.P.; Dias, J.M.B. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910.
  45. Wang, W.; Liu, H. Deep Nonnegative Dictionary Factorization for Hyperspectral Unmixing. Remote Sens. 2020, 12, 2882.
  46. Zhu, F.; Wang, Y.; Fan, B.; Xiang, S.; Meng, G.; Pan, C. Spectral Unmixing via Data-Guided Sparsity. IEEE Trans. Image Process. 2014, 23, 5412–5427.
Figure 1. Three real hyperspectral datasets. (a) Jasper Ridge. (b) Samon. (c) Indiana.
Figure 2. The abundance maps of six methods and the ground truth on Jasper Ridge Data.
Figure 3. The endmember spectra of Jasper Ridge Data extracted by the five comparison methods and the proposed method. (a) Tree. (b) Soil. (c) Water. (d) Road.
Figure 4. The endmember spectra of Samon Data extracted by the five comparison methods and the proposed method. (a) Soil. (b) Tree. (c) Water.
Figure 5. The abundance maps of six methods and the ground truth on Samon Data.
Figure 6. The endmember spectra of Indiana Pines Data extracted by the five comparison methods and the proposed method. (a) Man-made land. (b) Wheat. (c) Corn. (d) Soybean. (e) Vegetation. (f) Haystack.
Figure 7. The abundance maps of six methods on Indiana Pines Data.
Figure 8. Algorithm parameter analysis on the Jasper Ridge dataset. (a) Parameter $K$; (b) Parameter $r$.
Figure 9. Algorithm parameter analysis on the Jasper Ridge dataset. (a) Parameter $\lambda_{LR}$; (b) Parameter $\lambda_S$.
Table 1. Spectral angle distance (SAD) results on the Jasper Ridge data. Bold indicates the best unmixing result for each endmember.

|       | SULRSR-TV | SGSNMF | MV-NTF | NL-TSUn | ULTRA-V | SCLT |
|-------|-----------|--------|--------|---------|---------|------|
| Tree  | 0.2012 ± 0.0187 | 0.1955 ± 0.0013 | 0.2416 ± 0.0136 | 0.2081 ± 0.0306 | 0.0502 ± 0.0003 | 0.2761 ± 0.0081 |
| Soil  | 0.2336 ± 0.0011 | 0.5131 ± 0.0011 | 0.2601 ± 0.0089 | 0.2334 ± 0.0001 | 0.1422 ± 0.0342 | 0.1107 ± 0.0001 |
| Water | 0.6194 ± 0.1761 | 0.1943 ± 0.0069 | 0.1586 ± 0.0956 | 0.4929 ± 0.2715 | 0.5641 ± 0.0013 | 0.3820 ± 0.0140 |
| Road  | 0.2943 ± 0.0722 | 0.2994 ± 0.0063 | 0.4544 ± 0.0706 | 0.3665 ± 0.1179 | 0.0362 ± 0.0033 | 0.1368 ± 0.0013 |
| Mean  | 0.3863 ± 0.0357 | 0.3005 ± 0.0039 | 0.3024 ± 0.0396 | 0.3723 ± 0.0482 | 0.3002 ± 0.0010 | 0.2516 ± 0.0024 |
Table 2. SAD results on the Samon data. Bold indicates the best unmixing result for each endmember.

|       | SULRSR-TV | SGSNMF | MV-NTF | NL-TSUn | ULTRA-V | SCLT |
|-------|-----------|--------|--------|---------|---------|------|
| Tree  | 0.0247 ± 0.0121 | 0.0099 ± 0.0001 | 0.0433 ± 0.0129 | 0.0288 ± 0.0161 | 0.0340 ± 0.0272 | 0.0465 ± 0.0019 |
| Water | 0.0495 ± 0.0011 | 0.0511 ± 0.0017 | 0.0953 ± 0.0011 | 0.0496 ± 0.0001 | 0.0566 ± 0.0090 | 0.0648 ± 0.0006 |
| Soil  | 0.1299 ± 0.0004 | 0.2189 ± 0.0024 | 0.2810 ± 0.0050 | 0.1289 ± 0.0003 | 0.2401 ± 0.0221 | 0.0937 ± 0.0031 |
| Mean  | 0.0818 ± 0.0019 | 0.1299 ± 0.0011 | 0.1733 ± 0.0036 | 0.0825 ± 0.0026 | 0.1438 ± 0.0154 | 0.0711 ± 0.0002 |
Table 3. SAD results on the Indiana Pines data. Bold indicates the best unmixing result for each endmember.

|               | SULRSR-TV | SGSNMF | MV-NTF | NL-TSUn | ULTRA-V | SCLT |
|---------------|-----------|--------|--------|---------|---------|------|
| Man-made land | 0.0942 ± 0.0226 | 0.1266 ± 0.0618 | 0.5500 ± 0.1006 | 0.1090 ± 0.0128 | 0.0973 ± 0.0107 | 0.0938 ± 0.0007 |
| Wheat         | 0.2861 ± 0.2175 | 0.3439 ± 0.2205 | 0.3730 ± 0.0374 | 0.3320 ± 0.1958 | 0.4456 ± 0.0028 | 0.4203 ± 0.0793 |
| Corn          | 0.2669 ± 0.0746 | 0.0608 ± 0.0235 | 0.7127 ± 0.0294 | 0.2022 ± 0.1006 | 0.0807 ± 0.0310 | 0.0774 ± 0.0100 |
| Soybean       | 0.0867 ± 0.0084 | 0.0608 ± 0.0235 | 0.4952 ± 0.1611 | 0.1138 ± 0.0713 | 0.0090 ± 0.0011 | 0.0089 ± 0.0036 |
| Vegetation    | 0.0624 ± 0.0092 | 0.0270 ± 0.0151 | 0.2666 ± 0.0539 | 0.0651 ± 0.0085 | 0.0730 ± 0.0063 | 0.0689 ± 0.0128 |
| Haystack      | 0.1375 ± 0.1453 | 0.0611 ± 0.0363 | 0.4638 ± 0.0661 | 0.0725 ± 0.0812 | 0.0381 ± 0.0321 | 0.0375 ± 0.0018 |
| Mean          | 0.2079 ± 0.0352 | 0.3225 ± 0.0340 | 0.5041 ± 0.0184 | 0.1993 ± 0.0353 | 0.1921 ± 0.0011 | 0.1618 ± 0.0312 |
Table 4. Parameter settings.

| Data    | K  | r | $\lambda_{LR}$ | $\lambda_S$ |
|---------|----|---|----------------|-------------|
| Jasper  | 20 | 3 | 0.020          | 0.02        |
| Samon   | 20 | 3 | 0.015          | 0.01        |
| Indiana | 50 | 5 | 0.010          | 0.01        |