Article

Nonlocal Tensor Sparse Representation and Low-Rank Regularization for Hyperspectral Image Compressive Sensing Reconstruction

1 School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
2 Research & Development Institute of Northwestern Polytechnical University in Shenzhen, Shenzhen 518057, China
3 Department of Telecommunications and Information Processing, Ghent University-TELIN-IMEC, 9000 Ghent, Belgium
4 Department of Electronics and Informatics, Vrije Universiteit Brussel, 1050 Brussel, Belgium
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(2), 193; https://doi.org/10.3390/rs11020193
Submission received: 7 December 2018 / Revised: 13 January 2019 / Accepted: 17 January 2019 / Published: 19 January 2019
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

Abstract: Hyperspectral image compressive sensing reconstruction (HSI-CSR) is an important issue in remote sensing, and has recently been investigated increasingly through sparsity prior based approaches. However, most of the available HSI-CSR methods consider the sparsity prior in spatial and spectral vector domains by vectorizing hyperspectral cubes along a certain dimension. Moreover, in most previous works, little attention has been paid to exploiting the underlying nonlocal structure in the spatial domain of the HSI. In this paper, we propose a nonlocal tensor sparse and low-rank regularization (NTSRLR) approach, which can encode the essential structured sparsity of an HSI and exploit its advantages for the HSI-CSR task. Specifically, we study how to utilize the $\ell_1$-based sparsity of the core tensor and the tensor nuclear norm function as tensor sparse and low-rank regularization, respectively, to describe the nonlocal spatial-spectral correlation hidden in an HSI. To solve the minimization problem of the proposed algorithm, we design a fast implementation strategy based on the alternating direction method of multipliers (ADMM). Experimental results on various HSI datasets verify that the proposed HSI-CSR algorithm can significantly outperform existing state-of-the-art CSR techniques for HSI recovery.

1. Introduction

A hyperspectral image (HSI) is a three-dimensional data cube that simultaneously captures information over two spatial dimensions and one spectral dimension. The abundant spatial-spectral information is able to provide more accurate and reliable signature features of distinct materials, which contributes to various applications such as scene classification [1], object detection [2], environmental monitoring [3], etc. However, due to the large data size of an HSI, storage and transmission on resource-limited platforms become a challenging problem. Although various methods, mainly including wavelet transform [4,5,6], TDLT + KLT [7], DPCM [8] and JPEG2000 [9,10], have been proposed to compress HSIs effectively, they treat the HSI as a collection of single-band images and neglect the spatial-spectral redundancy. Thus, how to build rational and powerful HSI compressive reconstruction models is still a worthy research issue.
Recently, compressive sensing (CS) theory [11,12,13] has opened a brand-new avenue for HSI acquisition and compression, which only needs to capture a small number of incoherent measurements in the imaging stage. The acquired measurements can then be employed to reconstruct the whole HSI. For convenient application of CS to HSIs, many well-known techniques [14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41] have been presented to convert an HSI into a sparse signal. Although HSI CS can greatly reduce the resource consumption in imaging, storage and transmission compared with conventional compression methods, how to precisely reconstruct the HSI from fewer measurements is still a challenging problem.
One of the main approaches to this ill-posed reconstruction problem is to convert the HSI into a sparse description by imposing proper sparsity priors. For example, some effective sparsity terms with $\ell_0$, $\ell_1$ and $\ell_p$ ($0 < p < 1$) norms [13,14,15,16] have been presented to characterize the sparsity for signal recovery, but those methods neglect the underlying structure information. Regularization-based approaches usually incorporate the prior knowledge into the observation model and develop a unified framework [17,18,19,20]. For those methods, one key issue is how to design a proper regularization term to characterize the sparsity of an HSI. The works in [21,22,23] mainly consider the sparsity of the abundance matrix obtained by linear unmixing of an HSI, and then build HSI CS models using spectral unmixing procedures. By introducing structured sparsity across the spatial or spectral dimension, Zhang et al. [24,25,26,27,28] extended compression methods based on sparse representation/dictionary learning to HSI compression. More recently, Meza et al. [29,30] explored the group sparsity based spatial/spectral redundancy structure to achieve HSI compressive sensing reconstruction (HSI-CSR). The HSI CS models proposed by Golbabaee et al. [31,32,33,34] utilize the piecewise smooth structure to explain the underlying gradient sparsity of an HSI. However, as those techniques depict the HSI sparsity in vector space, the sparsity is described on a single vector without considering the multidimensional structure, which inevitably induces losses and distortions of useful structure information.
Tensor-based HSI-CSR approaches can remarkably improve the HSI recovery quality, since they jointly take into account the spatial-spectral information and reduce the losses and distortions caused by reshaping the HSI [35,36,37,38,39,40,41,42,43,44]. Karami et al. [35,36] exploited the discrete wavelet transform and Tucker decomposition (DWT-TD) to encode the spatial-spectral information of an HSI. The core idea behind those techniques is first to use the DWT to effectively separate an HSI into different sub-images, and then to apply TD to the DWT coefficients of the HSI bands to compact the energy of the sub-images. Zhang et al. [37,38] compressed an HSI to a core tensor, and the HSI could be reconstructed by the multi-linear projection of the factor matrices. Those methods only consider an HSI as a whole 3D tensor and lack more potent constraints on the spatial-spectral structure of an HSI. Yang et al. [39] employed nonlinear tensor sparse representation to recover an HSI from a small number of measurements, although some training examples are required. Wang et al. [40] used the global spatial-spectral correlation and local smoothness properties underlying an HSI to enhance the HSI-CSR task, in which tensor Tucker decomposition and 3-D total variation jointly characterize the sparsity of an HSI. Du et al. [41] proposed a patch-based low-rank tensor decomposition algorithm for HSI-CSR that combines the nonlocal similarity across the spatial domain and the low-rank property over the spectral domain in a unified framework.
Although the methods reported in [37,38,40,41] are considerably more effective for HSI-CSR than vector based approaches, it is difficult to estimate the accurate rank under tensor decomposition and further acquire a unique decomposition. Thus, methods based on tensor decomposition cannot provide an elaborate characterization of spatial-spectral information in the HSI-CSR problem. In [42,43], the reasonable usage of the global correlation across the spectrum (GCS) and nonlocal self-similarity over space (NSS) prior knowledge has led to quite powerful HSI denoising algorithms, yet the effectiveness of GCS and NSS for HSI-CSR has not been reported in the published literature. Such facts inspire us to solve the challenging HSI-CSR problem with structured sparsity based on GCS and NSS in this paper, and a unified framework combining nonlocal tensor sparse representation and low-rank regularization is proposed for HSI-CSR, as shown in Figure 1. The main contributions of this paper are listed as follows.
  • To the best of our knowledge, we are the first to exploit GCS and NSS to construct the nonlocal structured sparsity of an HSI, which is a faithful structured sparsity representation form for the HSI-CSR task.
  • For each group tensor formed by grouping nonlocal similar cubes, a tensor representation based on tensor sparse and low-rank approximation is introduced to encode the intrinsic spatial-spectral correlation.
  • The HSI-CSR task is treated as an optimization problem, which we solve with the alternating direction method of multipliers (ADMM) [44].
A preliminary version of this work appeared in [45], which presents the basic approach. In [45], we established the nonlocal structured sparsity from the perspective of the tensor low-rank property, adopting the two most commonly used tensor low-rank representation forms: tensor low-rank approximation and tensor low-rank decomposition. In this paper, we depict the nonlocal structured sparsity via tensor low-rank approximation and sparse representation. Although tensor low-rank decomposition and sparse representation are both derived from the Tucker decomposition model, the former needs to preset the ranks along all dimensions while the latter introduces an $\ell_1$-based sparsity term on the core tensor. In practical applications, the latter possesses a more reliable capability to represent high-dimensional data by mitigating tensor rank overfitting or underfitting. In addition, this paper adds: (1) the detailed background of HSI-CSR; (2) the theoretical analysis of NTSRLR; and (3) additional HSI-CSR experiments.
The remainder of this paper is organized as follows. Section 2 introduces the tensor notations and operations commonly used in this paper, and the background of CS. In Section 3, a novel algorithm for HSI-CSR based on the NTSRLR model is proposed. Section 4 demonstrates the results of extensive experiments and Section 5 draws the conclusion.

2. Notations and Background of HSI-CS

2.1. Notations

Throughout the paper, we denote scalars, vectors, matrices and tensors by non-bold letters, bold lower case letters, bold upper case letters and calligraphic upper case letters, respectively. Besides, we introduce some necessary notations and preliminaries about tensors as follows. A tensor of order N, which corresponds to an N-dimensional data array, is denoted as $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_n \times \cdots \times I_N}$. Elements of $\mathcal{X}$ are denoted as $a_{i_1 \cdots i_n \cdots i_N}$, where $1 \le i_n \le I_n$. Definitions of tensor terminologies in the paper follow exactly the same descriptions as in [46]. Denote $\|\mathcal{X}\|_F = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle} = \big( \sum_{i_1 i_2 \cdots i_N} |a_{i_1 i_2 \cdots i_N}|^2 \big)^{1/2}$, $\|\mathcal{X}\|_1 = \sum_{i_1 i_2 \cdots i_N} |a_{i_1 i_2 \cdots i_N}|$ and $\|\mathcal{X}\|_0$ as the F-norm, $\ell_1$ norm and $\ell_0$ norm of a tensor $\mathcal{X}$, respectively. $\|\mathcal{X}\|_0 = K$ means that $K$ is the number of non-zero entries of $\mathcal{X}$. It is convenient to unfold a tensor into a matrix during the algorithm. The "unfold" operation along mode-n on a tensor $\mathcal{X}$ is defined as $\mathrm{unfold}_n(\mathcal{X}) := \mathbf{X}_{(n)} \in \mathbb{R}^{I_n \times (I_1 \cdots I_{n-1} I_{n+1} \cdots I_N)}$, and its opposite operation "fold" is defined as $\mathrm{fold}_n(\mathbf{X}_{(n)}) := \mathcal{X}$. The Kronecker product of matrices $\mathbf{A} \in \mathbb{R}^{I \times J}$ and $\mathbf{B} \in \mathbb{R}^{K \times L}$ is a matrix of size $IK \times JL$, denoted by $\mathbf{A} \otimes \mathbf{B}$. The multiplication of a tensor $\mathcal{X}$ with a matrix $\mathbf{Y} \in \mathbb{R}^{J_k \times I_k}$ on mode-k is denoted by $\mathcal{X} \times_k \mathbf{Y} = \mathcal{Z}$, which can also be defined in terms of the mode-k unfolding as $\mathbf{Z}_{(k)} = \mathbf{Y} \mathbf{X}_{(k)}$.
Definition 1.
(Tucker decomposition) [46]: The Tucker decomposition form of a tensor $\mathcal{X}$ is:
$$\mathcal{X} = \mathcal{G} \times_1 \mathbf{U}_1 \times_2 \cdots \times_N \mathbf{U}_N \quad (1)$$
where $\mathcal{G} \in \mathbb{R}^{J_1 \times J_2 \times \cdots \times J_N}$ is the core tensor, which reflects the interaction between components along different modes, and $\mathbf{U}_n \in \mathbb{R}^{I_n \times J_n}$ is the orthogonal factor matrix in each mode. Thus, we can obtain the mode-n unfolding form of the Tucker decomposition in Equation (1):
$$\mathbf{X}_{(n)} = \mathbf{U}_n \mathbf{G}_{(n)} (\mathbf{U}_N \otimes \cdots \otimes \mathbf{U}_{n+1} \otimes \mathbf{U}_{n-1} \otimes \cdots \otimes \mathbf{U}_1)^{T} \quad (2)$$
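To make the notation concrete, the short NumPy sketch below (our own illustration, not code from the paper; the tensor sizes are arbitrary) implements mode-n unfolding/folding and the mode-n product, builds a random Tucker model, and numerically verifies the unfolded form in Equation (2).

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding (Kolda convention: earlier modes vary fastest along the columns)."""
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def fold(Xn, n, shape):
    """Inverse of unfold: reshape back and move the mode-n axis to its place."""
    full = (shape[n],) + tuple(s for i, s in enumerate(shape) if i != n)
    return np.moveaxis(Xn.reshape(full, order='F'), 0, n)

def mode_product(X, U, n):
    """Mode-n product X x_n U, where U has I_n columns."""
    shape = list(X.shape); shape[n] = U.shape[0]
    return fold(U @ unfold(X, n), n, tuple(shape))

# Random Tucker model: core G and factor matrices with orthonormal columns.
I, J = (6, 7, 8), (3, 4, 5)
G = np.random.randn(*J)
U = [np.linalg.qr(np.random.randn(I[n], J[n]))[0] for n in range(3)]
X = mode_product(mode_product(mode_product(G, U[0], 0), U[1], 1), U[2], 2)

# Verify Equation (2): X_(n) = U_n G_(n) (U_N kron ... kron U_{n+1} kron U_{n-1} kron ... kron U_1)^T.
n = 1
mats = [U[k] for k in reversed(range(3)) if k != n]
W = mats[0]
for M in mats[1:]:
    W = np.kron(W, M)
print(np.allclose(unfold(X, n), U[n] @ unfold(G, n) @ W.T))   # True
```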

2.2. Background of HSI-CS

For a given HSI $\mathcal{X} \in \mathbb{R}^{W \times H \times S}$ ($W \times H$ spatial resolution and $S$ spectral bands), $\mathbf{x} \in \mathbb{R}^{WHS}$ denotes the vector form of $\mathcal{X}$. Let $N = WHS$; then the compressive measurement $\mathbf{y} \in \mathbb{R}^{M}$ can be obtained from the following CS model:
$$\mathbf{y} = \Phi \mathbf{x} \quad (3)$$
where $\Phi \in \mathbb{R}^{M \times N}$ ($M < N$) denotes the compressive operator. CS theory indicates that a sufficiently sparse signal $\mathbf{x}$ can be exactly reconstructed from only a few observations $\mathbf{y}$ when the compressive operator $\Phi$ satisfies the restricted isometry property (RIP) [11]. Under the RIP, the ill-posed recovery problem can be formulated into the following form by pursuing the sparsest signal $\mathbf{x}$, i.e.,
$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \|\mathbf{x}\|_0, \quad \mathrm{s.t.} \quad \mathbf{y} = \Phi \mathbf{x} \quad (4)$$
where $\|\cdot\|_0$ denotes the $\ell_0$ norm used as a sparsity constraint. However, the $\ell_0$ norm minimization in Equation (4) is combinatorially NP-hard and unstable in the presence of noise. For this reason, a feasible strategy is to replace the nonconvex $\ell_0$ norm with its convex $\ell_1$ counterpart [15,47] as follows:
$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \|\mathbf{x}\|_1, \quad \mathrm{s.t.} \quad \mathbf{y} = \Phi \mathbf{x} \quad (5)$$
The above $\ell_1$-minimization CS problem can be solved with the iterative shrinkage algorithm [48] or the split Bregman algorithm [49].
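As a concrete illustration of the $\ell_1$ recovery problem in Equation (5), the following sketch (a minimal example of our own, assuming a random Gaussian measurement matrix and solving the unconstrained Lagrangian form $\min_{\mathbf{x}} \frac{1}{2}\|\mathbf{y}-\Phi\mathbf{x}\|_2^2 + \lambda\|\mathbf{x}\|_1$) recovers a sparse vector with the iterative shrinkage scheme of [48].

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(y, Phi, lam=1e-3, n_iter=500):
    """Iterative shrinkage-thresholding for min_x 0.5*||y - Phi x||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
N, M, K = 256, 80, 8                          # signal length, measurements, sparsity level
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian measurement operator
y = Phi @ x_true
x_hat = ista(y, Phi)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```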
Since an HSI can be sparsely represented in a certain domain, many CS models have been proposed for HSIs. Zhang et al. [21,22,23] unmixed the HSI into a spatially sparse abundance matrix with an endmember matrix. Meza et al. [29,30,31] extracted the spatial/spectral redundancy structure and then applied a group sparsity constraint. Golbabaee et al. [34] used a wavelet basis to transform the HSI into a sparse matrix, and then adopted the low-rankness and the $\ell_1$ norm to jointly encode the sparsity of the matrix. Zhang et al. [37,38] depicted the sparsity of an HSI in the core tensor domain instead of the reshaped vector domain. Further works [39,40,41] employ sparse tensor decomposition to characterize the sparsity of an HSI. However, those sparsity constraint terms are incapable of capturing the underlying structure in an HSI or of handling the unwanted noise and artifacts in the CSR procedure. In our method, we try to cope with those problems by introducing more refined prior knowledge of an HSI to further promote HSI-CSR performance.

3. The Proposed HSI-CSR via NTSRLR

Structured sparsity is of great importance to the HSI-CSR model, since an HSI often reveals rich self-repetitive structures over the spatial domain and highly correlated bands across the spectral domain. Several previous works exploiting the nonlocal prior have indicated that structured sparsity based on nonlocal self-similarity is fairly effective for image restoration [18,19]. However, such structured sparsity has not yet been explored for HSI-CSR. In this paper, we present a unified framework for HSI-CSR using structured sparsity via nonlocal tensor sparse representation and low-rank approximation.

3.1. Non-Local Tensor Formula for Structure Sparsity

The proposed regularization model for structured sparsity consists of two steps: cube grouping for characterizing GCS and NSS, and tensor formulation for sparsity enforcement.

3.1.1. Non-Local Structure Sparsity Analysis

Concerning the GCS and NSS underlying an HSI, we provide an analysis of the nonlocal tensor sparsity and low-rankness, as illustrated in Figure 2. To begin with, for an initial third-order tensor HSI $\mathcal{X} \in \mathbb{R}^{W \times H \times S}$ (e.g., the PaviaU dataset), we divide the HSI into a group of 3D full-band cubes (FBCs) $\{\mathcal{P}_{i,j}\}_{1 \le i \le W-w+1, 1 \le j \le H-h+1} \in \mathbb{R}^{w \times h \times S}$ ($w < W$, $h < H$) with overlaps. For the exemplar cube $\mathcal{P}_{i,j}$ of size $8 \times 8 \times 60$ located at spatial position $(i, j)$ in Figure 2a, marked in red, we first search for the $K-1$ (here, we set $K = 80$) most similar cubes by k-NN within a local window (e.g., $70 \times 70$), shown as k-NN clustering in Figure 2b. Then, to avoid destroying the high spectral correlation, we unfold the 3D cubes into corresponding 2D matrices along the spectral mode (Figure 2c), and obtain a new third-order tensor $\mathcal{Y}_p$ of size $64 \times 80 \times 60$ by stacking the similar items (Figure 2d), where $p = 1, \ldots, P$, and $P$ denotes the number of groups. Such a constructed third-order tensor simultaneously exploits the spatial local sparsity (mode-1), the nonlocal similarity between cubes (mode-2) and the strong spectral correlation (mode-3). This arrangement maximizes the benefit of the nonlocal tensor representation form. Next, we give a visual interpretation of the nonlocal tensor sparsity and low-rank property.
First, applying the Tucker decomposition to a nonlocal similar cube group from the PaviaU dataset, Figure 2e shows the distribution of values in the core tensor, where redder and bluer elements represent larger and smaller values, respectively. To further illustrate the sparsity of the tensor core, Figure 2(e2)–(e4) present three typical slices of the core tensor. It is easy to see that the core tensor exhibits a sparse structure, with 82.59% of its elements being zeros. Second, the low-rank analysis is performed along the local spatial, nonlocal spatial, and global spectral modes, as shown in Figure 2f. Evidently, the decaying trends of the singular values on the three curves (pink, blue and green curves correspond to the local spatial, nonlocal spatial, and global spectral modes, respectively) indicate that there are strong correlations in all three modes. Comparatively, the decay of the mode-2 curve is the most drastic, which is consistent with the nonlocal spatial low-rank theory of an HSI given in [50]. According to the definition of the accumulation energy ratio (Aer) of the top k singular values in [50], we calculate the top 10 singular values of the three modes and obtain Aers of 0.8029, 0.9031 and 0.8186. These quantitative values also indicate that each group tensor formed by stacking nonlocal similar cubes possesses a strong low-rank correlation along mode-2.
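The grouping step of Figure 2a–d can be sketched as follows. This is a simplified illustration under our own assumptions (brute-force k-NN over all candidate positions inside the search window; cube and window sizes follow the values quoted above, scaled down in the toy usage); it builds one group tensor $\mathcal{Y}_p$ of size $wh \times K \times S$ and prints the singular value decay of each mode, mirroring the analysis in Figure 2f.

```python
import numpy as np

def group_similar_cubes(X, i, j, w=8, h=8, K=80, win=70):
    """Build one nonlocal group tensor Y_p (wh x K x S) around the exemplar cube at (i, j)."""
    W, H, S = X.shape
    ref = X[i:i + w, j:j + h, :]
    # candidate top-left corners inside the local search window
    i0, i1 = max(0, i - win // 2), min(W - w, i + win // 2)
    j0, j1 = max(0, j - win // 2), min(H - h, j + win // 2)
    cands = [(a, b) for a in range(i0, i1 + 1) for b in range(j0, j1 + 1)]
    # k-NN: Euclidean distance between full-band cubes
    dists = [np.linalg.norm(X[a:a + w, b:b + h, :] - ref) for a, b in cands]
    nearest = [cands[t] for t in np.argsort(dists)[:K]]
    # unfold each w x h x S cube along the spectral mode (wh x S) and stack over the group
    group = np.stack([X[a:a + w, b:b + h, :].reshape(w * h, S) for a, b in nearest], axis=1)
    return group   # modes: local spatial (wh), nonlocal (K), spectral (S)

X = np.random.rand(64, 64, 60)                      # toy stand-in for an HSI
Yp = group_similar_cubes(X, 10, 20, w=8, h=8, K=40, win=30)
print(Yp.shape)                                     # (64, 40, 60)
# singular value decay along each mode (cf. Figure 2f)
for n in range(3):
    s = np.linalg.svd(np.moveaxis(Yp, n, 0).reshape(Yp.shape[n], -1), compute_uv=False)
    print("mode", n + 1, "top-10 energy ratio:", np.sum(s[:10] ** 2) / np.sum(s ** 2))
```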

3.1.2. Non-Local Structure Sparsity Modeling

From Figure 2f, we can observe that the formed FBC groups possess the low-rank property, and a tractable strategy is to use the mode-n rank $(r_1, \ldots, r_N)$ given by the Tucker decomposition [46] to estimate the tensor rank. For an Nth-order tensor $\mathcal{X}$, the Tucker rank is defined as $\mathrm{rank}(\mathcal{X}) := [\mathrm{rank}(\mathbf{X}_{(1)}), \mathrm{rank}(\mathbf{X}_{(2)}), \ldots, \mathrm{rank}(\mathbf{X}_{(N)})]$, where $\mathbf{X}_{(i)}$ is the mode-i unfolding of $\mathcal{X}$ [51]. Motivated by the fact that the nuclear norm is the convex envelope of the matrix rank within the unit ball of the spectral norm, the tensor nuclear norm $\|\mathcal{X}\|_* = \sum_{n=1}^{N} \alpha_n \|\mathbf{X}_{(n)}\|_*$ is further defined by weighting the nuclear norms of the unfolding matrices along each mode. Thus, we resort to the following relaxation form for each $\mathcal{X}_p$ to characterize the low-rank property based on GCS and NSS:
$$L(\mathcal{X}_p) = \sum_{i=1}^{3} \alpha_i \|\mathbf{X}_{p(i)}\|_* \quad (6)$$
where $\|\mathbf{X}_{p(i)}\|_* = \sum_{k=1}^{\min(m,n)} \sigma_k(\mathbf{X}_{p(i)})$ denotes the nuclear norm of the matrix $\mathbf{X}_{p(i)}$ of size $m \times n$.
In practice, $\{\mathcal{Y}_p\}_{p=1}^{P}$ may contain some noise, so the data $\mathcal{Y}_p$ can be modeled as $\mathcal{Y}_p = \mathcal{X}_p + \mathcal{W}_p$, where $\mathcal{X}_p$ and $\mathcal{W}_p$ denote the low-rank component and the noise component, respectively. Hence, we can estimate the low-rank tensor $\mathcal{X}_p$ via the following optimization problem:
$$\hat{\mathcal{X}}_p = \arg\min_{\mathcal{X}_p} L(\mathcal{X}_p), \quad \mathrm{s.t.} \quad \|\mathcal{Y}_p - \mathcal{X}_p\|_F^2 \le \varepsilon \quad (7)$$
where $\varepsilon$ is associated with the noise level. The model in Equation (7) is similar to the matrix case in [18]; the difference is primarily that we jointly consider the correlations along the local and nonlocal spatial modes and the spectral mode, and measure the low-rankness of the third-order tensor $\mathcal{X}_p$ by a weighted sum of the ranks of its unfoldings. Besides, considering the stronger nonlocal spatial low-rankness along mode-2 compared with the other two modes, we set a larger weight for mode-2 in our experiments.
In addition, as shown in Figure 2e, we give a detailed analysis of another notable representation form for the sparsity prior based on tensor sparse decomposition, which suggests that we can depict the structured sparsity of an HSI from the perspective of the core tensor. Some pioneering works are presented in [42,43,52,53,54]. Here, we draw attention to the structured sparsity formulation of an HSI under the tensor sparse representation framework, in which each third-order tensor $\mathcal{X}_p$ can be approximated by the following problem:
$$\min_{\mathcal{G}_p, \mathbf{U}_1^p, \mathbf{U}_2^p, \mathbf{U}_3^p} S(\mathcal{G}_p), \quad \mathrm{s.t.} \quad \mathcal{X}_p = \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p, \quad (\mathbf{U}_i^p)^T \mathbf{U}_i^p = \mathbf{I} \ (i = 1, 2, 3) \quad (8)$$
where $\mathbf{U}_1^p$, $\mathbf{U}_2^p$ and $\mathbf{U}_3^p$ are factor matrices and $S(\mathcal{G}_p)$ is the sparsity constraint term, for which we take $S(\mathcal{G}_p) = \|\mathcal{G}_p\|_0$ as suggested in [42,43,52]. However, the optimization problem with the $\ell_0$ constraint in Equation (8) is non-convex; the research in [53,54] therefore relaxes the $\ell_0$-based core sparsity to the $\ell_1$ case $S(\mathcal{G}_p) = \|\mathcal{G}_p\|_1$. The convex optimization problem corresponding to the $\ell_1$ case can be represented in Lagrangian form as follows:
$$\min_{\mathcal{G}_p, \mathbf{U}_1^p, \mathbf{U}_2^p, \mathbf{U}_3^p} \frac{\lambda_1}{2} \|\mathcal{X}_p - \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p\|_F^2 + \lambda_2 \|\mathcal{G}_p\|_1, \quad \mathrm{s.t.} \quad (\mathbf{U}_i^p)^T \mathbf{U}_i^p = \mathbf{I} \ (i = 1, 2, 3) \quad (9)$$
where $\lambda_1$ and $\lambda_2$ are trade-off parameters. Essentially, all factor matrices are orthogonal dictionaries along the local and nonlocal spatial modes and the spectral mode. It can be seen that the tensor sparse representation model explores the GCS and NSS of HSIs in different dimensions by adaptive multi-dictionary learning. Compared with the matrix sparse representation technique [19,20], the advantage of tensor modeling is that it characterizes not only the spatial-spectral correlation but also the correlation over nonlocal similar cubes in an HSI.
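The effect of the $\ell_1$-relaxed core in Equation (9) can be previewed by computing an orthogonal Tucker (HOSVD) representation of a group tensor and soft-thresholding its core. The sketch below is our own illustration (with our own helper functions and an arbitrary threshold), not the solver used in the paper; on a real nonlocal group the core is far sparser than for the random toy tensor used here (cf. the 82.59% zeros reported for Figure 2e).

```python
import numpy as np

def unfold(X, n):
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def mode_product(X, U, n):
    return np.moveaxis(np.tensordot(U, X, axes=(1, n)), 0, n)

def hosvd(X):
    """Orthogonal factors from the left singular vectors of each unfolding, plus the core."""
    U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0] for n in range(X.ndim)]
    G = X
    for n, Un in enumerate(U):            # core: G = X x_1 U1^T x_2 U2^T x_3 U3^T
        G = mode_product(G, Un.T, n)
    return G, U

Yp = np.random.rand(64, 40, 60)           # stand-in for one nonlocal group tensor
G, U = hosvd(Yp)
G_sparse = np.sign(G) * np.maximum(np.abs(G) - 0.05 * np.abs(G).max(), 0.0)  # l1 shrinkage
X_rec = G_sparse
for n, Un in enumerate(U):
    X_rec = mode_product(X_rec, Un, n)    # reconstruct from the sparsified core
ratio = np.count_nonzero(G_sparse) / G_sparse.size
err = np.linalg.norm(X_rec - Yp) / np.linalg.norm(Yp)
print(f"nonzero core entries: {ratio:.2%}, relative reconstruction error: {err:.3f}")
```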

3.2. Proposed Model

Based on the previous analysis, we now derive the following model for solving the HSI-CSR problem:
$$\min_{\mathbf{x}, \mathcal{G}_p, \mathbf{U}_1^p, \mathbf{U}_2^p, \mathbf{U}_3^p} \sum_{p=1}^{P} \frac{\lambda_1}{2} \|\mathcal{X}_p - \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p\|_F^2 + \lambda_2 S(\mathcal{G}_p) + \lambda_3 L(\mathcal{X}_p), \quad \mathrm{s.t.} \quad \mathbf{y} = \Phi \mathbf{x}, \quad (\mathbf{U}_i^p)^T \mathbf{U}_i^p = \mathbf{I} \ (i = 1, 2, 3) \quad (10)$$
where $\lambda_3$ is a regularization parameter. It is worth noting that the proposed model can fully exploit the underlying priors over the spatial-spectral domain of an HSI, and is thus expected to have a strong ability to enhance the HSI-CSR task.

3.3. Optimization Algorithm

For the proposed HSI-CSR model, we apply the ADMM [44], an effective strategy for solving large-scale optimization problems, to solve Equation (10). First, we instantiate $S(\mathcal{G}_p)$ and $L(\mathcal{X}_p)$ as $\|\mathcal{G}_p\|_1$ and $\|\mathcal{X}_p\|_*$, respectively, introduce $P$ auxiliary tensors $\{\mathcal{M}_p\}_{p=1}^{P}$, and equivalently reformulate Equation (10) as follows:
$$\min_{\mathbf{x}, \mathcal{M}_p, \mathcal{G}_p, \mathbf{U}_1^p, \mathbf{U}_2^p, \mathbf{U}_3^p} \sum_{p=1}^{P} \frac{\lambda_1}{2} \|\mathcal{X}_p - \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p\|_F^2 + \lambda_2 \|\mathcal{G}_p\|_1 + \lambda_3 \|\mathcal{M}_p\|_*, \quad \mathrm{s.t.} \quad \mathbf{y} = \Phi \mathbf{x}, \quad \mathcal{M}_p = \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p, \quad (\mathbf{U}_i^p)^T \mathbf{U}_i^p = \mathbf{I} \ (i = 1, 2, 3) \quad (11)$$
Then, its augmented Lagrangian function is:
$$\mathcal{L}(\mathcal{X}_p, \mathcal{M}_p, \mathcal{G}_p, \mathbf{U}_1^p, \mathbf{U}_2^p, \mathbf{U}_3^p, \mathcal{Z}_p, \Lambda) = \sum_{p=1}^{P} \frac{\lambda_1}{2} \|\mathcal{X}_p - \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p\|_F^2 + \lambda_2 \|\mathcal{G}_p\|_1 + \lambda_3 \|\mathcal{M}_p\|_* + \langle \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p - \mathcal{M}_p, \mathcal{Z}_p \rangle + \frac{\lambda_4}{2} \|\mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p - \mathcal{M}_p\|_F^2 + \langle \Lambda, \mathbf{y} - \Phi \mathbf{x} \rangle + \frac{1}{2} \|\mathbf{y} - \Phi \mathbf{x}\|_F^2 \quad (12)$$
where $\{\mathcal{Z}_p\}_{p=1}^{P}$ and $\Lambda$ are the Lagrange multipliers, and $\lambda_4$ is a positive scalar. We break Equation (12) into five sub-problems and iteratively update each variable while fixing the others.
(a)
$\mathbf{U}_1^p, \mathbf{U}_2^p, \mathbf{U}_3^p$ problem:
$$\min_{\mathbf{U}_1^p, \mathbf{U}_2^p, \mathbf{U}_3^p} \frac{\lambda_1}{2} \|\mathcal{X}_p - \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p\|_F^2 + \langle \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p - \mathcal{M}_p, \mathcal{Z}_p \rangle + \frac{\lambda_4}{2} \|\mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p - \mathcal{M}_p\|_F^2, \quad \mathrm{s.t.} \quad (\mathbf{U}_i^p)^T \mathbf{U}_i^p = \mathbf{I} \ (i = 1, 2, 3) \quad (13)$$
which is equivalent to the following sub-problem:
$$\min_{\mathbf{U}_1^p, \mathbf{U}_2^p, \mathbf{U}_3^p} \sum_{p=1}^{P} \|\mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p - \mathcal{O}_p\|_F^2, \quad \mathrm{s.t.} \quad (\mathbf{U}_i^p)^T \mathbf{U}_i^p = \mathbf{I} \ (i = 1, 2, 3) \quad (14)$$
where $\mathcal{O}_p = \big(\lambda_1 \mathcal{X}_p + \sum_{i=1}^{3} (\lambda_4 \mathcal{M}_i - \mathcal{Z}_i)\big) / (\lambda_1 + 3\lambda_4)$; this problem can be easily solved by the method suggested in [53,54].
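One standard way to handle Equation (14) is to update each factor matrix in turn by an orthogonal Procrustes step, as is common in Tucker/HOSVD-type fitting. The sketch below is our own illustrative implementation of such an alternating update and may differ in detail from the exact procedure of [53,54]; the unfolding convention matches Equation (2).

```python
import numpy as np

def unfold(X, n):
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def kron_others(U, n):
    """U_N kron ... kron U_{n+1} kron U_{n-1} kron ... kron U_1 (matching Equation (2))."""
    mats = [U[k] for k in reversed(range(len(U))) if k != n]
    W = mats[0]
    for M in mats[1:]:
        W = np.kron(W, M)
    return W

def update_factors(O, G, U, sweeps=1):
    """Alternating orthogonal Procrustes updates of the factor matrices for a fixed core G."""
    for _ in range(sweeps):
        for n in range(len(U)):
            M = unfold(O, n) @ kron_others(U, n) @ unfold(G, n).T
            A, _, Bt = np.linalg.svd(M, full_matrices=False)
            U[n] = A @ Bt    # maximizes trace(U^T M) over matrices with orthonormal columns
    return U

# toy usage: fit orthogonal factors of a 64 x 40 x 60 group tensor to a fixed random core
O = np.random.rand(64, 40, 60)
G = np.random.rand(10, 10, 10)
U = [np.linalg.qr(np.random.randn(s, 10))[0] for s in O.shape]
U = update_factors(O, G, U, sweeps=3)
```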
(b)
$\mathcal{G}_p$ sub-problem:
$$\min_{\mathcal{G}_p} \frac{\lambda_1}{2} \|\mathcal{X}_p - \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p\|_F^2 + \langle \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p - \mathcal{M}_p, \mathcal{Z}_p \rangle + \frac{\lambda_4}{2} \|\mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p - \mathcal{M}_p\|_F^2 + \lambda_2 \|\mathcal{G}_p\|_1 \quad (15)$$
It can be rewritten as
$$\min_{\mathcal{G}_p} \frac{1}{2} \|\mathcal{O}_p - \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p\|_F^2 + \lambda_2 \|\mathcal{G}_p\|_1 \quad (16)$$
It can be solved by the Tensor-based Iterative Shrinkage Thresholding Algorithm (TISTA) in [53,54].
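When the factor matrices have orthonormal columns, the prox problem in Equation (16) reduces to a single soft-thresholding of the back-projected tensor, which is essentially the shrinkage step used by iterative shrinkage-thresholding schemes. A minimal sketch of this step (our own simplification, not the full TISTA solver of [53,54]):

```python
import numpy as np

def mode_product(X, U, n):
    """Mode-n product X x_n U."""
    return np.moveaxis(np.tensordot(U, X, axes=(1, n)), 0, n)

def core_update(O, U, lam2):
    """Shrinkage step for Equation (16) when U1, U2, U3 have orthonormal columns:
       G = soft( O x_1 U1^T x_2 U2^T x_3 U3^T , lam2 )."""
    B = O
    for n, Un in enumerate(U):
        B = mode_product(B, Un.T, n)      # back-project O onto the factor subspaces
    return np.sign(B) * np.maximum(np.abs(B) - lam2, 0.0)
```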
(c)
$\mathcal{M}_p$ sub-problem:
$$\min_{\mathcal{M}_p} \lambda_3 \|\mathcal{M}_p\|_* + \langle \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p - \mathcal{M}_p, \mathcal{Z}_p \rangle + \frac{\lambda_4}{2} \|\mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p - \mathcal{M}_p\|_F^2 \quad (17)$$
It can be briefly reformulated as:
$$\min_{\mathcal{M}_p} \sum_{i=1}^{3} \frac{\lambda_3 \alpha_i}{\lambda_4} \|\mathbf{M}_{p(i)}\|_* + \frac{1}{2} \Big\| \mathcal{B}_p + \frac{\mathcal{Z}_p}{\lambda_4} - \mathcal{M}_p \Big\|_F^2 \quad (18)$$
where $\mathcal{B}_p = \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p$; its equivalent form is
$$\min_{\mathcal{M}_p} \sum_{i=1}^{3} \frac{\lambda_3 \alpha_i}{\lambda_4} \|\mathbf{M}_{p(i)}\|_* + \frac{1}{2} \Big\| \mathbf{B}_{p(i)} + \frac{\mathbf{Z}_{p(i)}}{\lambda_4} - \mathbf{M}_{p(i)} \Big\|_F^2 \quad (19)$$
As suggested in [51], its closed-form solution is expressed as:
$$\mathcal{M}_{p(i)} = \mathrm{fold}_i \Big[ \mathcal{S}_{\frac{\alpha_i \lambda_3}{\lambda_4}} \Big( \mathbf{B}_{p(i)} + \frac{\mathbf{Z}_{p(i)}}{\lambda_4} \Big) \Big] \quad (20)$$
For a given matrix $\mathbf{X}$, the singular value shrinkage operator $\mathcal{S}_\tau(\mathbf{X})$ is defined as $\mathcal{S}_\tau(\mathbf{X}) := \mathbf{U}_X \mathcal{D}_\tau(\Sigma_X) \mathbf{V}_X^T$, where $\mathbf{X} = \mathbf{U}_X \Sigma_X \mathbf{V}_X^T$ is the SVD of $\mathbf{X}$ and $[\mathcal{D}_\tau(\mathbf{A})]_{ij} = \mathrm{sgn}(A_{ij})(|A_{ij}| - \tau)_+$.
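For reference, the singular value shrinkage operator $\mathcal{S}_\tau$ and the per-mode update of Equation (20) can be sketched as follows. This is our own minimal implementation; combining the per-mode results by simple averaging follows common practice in weighted tensor nuclear norm minimization in the spirit of [51] and is an assumption on our part.

```python
import numpy as np

def unfold(X, n):
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def fold(Xn, n, shape):
    full = (shape[n],) + tuple(s for i, s in enumerate(shape) if i != n)
    return np.moveaxis(Xn.reshape(full, order='F'), 0, n)

def svt(X, tau):
    """Singular value shrinkage S_tau(X) = U * max(Sigma - tau, 0) * V^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def m_update(B, Z, alpha, lam3, lam4):
    """Per-mode solutions of Equation (20), combined here by simple averaging."""
    T = B + Z / lam4
    M_modes = [fold(svt(unfold(T, i), alpha[i] * lam3 / lam4), i, B.shape) for i in range(3)]
    return sum(M_modes) / 3.0
```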
(d)
$\mathbf{x}$ sub-problem:
$$\min_{\mathbf{x}} \sum_{p=1}^{P} \frac{\lambda_1}{2} \|\mathcal{X}_p - \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p\|_F^2 + \langle \Lambda, \mathbf{y} - \Phi \mathbf{x} \rangle + \frac{1}{2} \|\mathbf{y} - \Phi \mathbf{x}\|_F^2 \quad (21)$$
It is easy to observe that optimizing $\mathcal{L}$ with respect to $\mathbf{x}$ amounts to solving the following linear system:
$$\lambda_1 \mathbf{x} + \Phi^* (\Phi \mathbf{x}) = \Phi^* (\mathbf{y} - \Lambda) + \lambda_1 \, \mathrm{vec}(\mathcal{G} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \times_3 \mathbf{U}_3) \quad (22)$$
where $\mathcal{G} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \times_3 \mathbf{U}_3 = \sum_{p=1}^{P} \mathcal{G}_p \times_1 \mathbf{U}_1^p \times_2 \mathbf{U}_2^p \times_3 \mathbf{U}_3^p$, $\mathrm{vec}(\cdot)$ denotes the vectorization operator for a matrix or tensor, and $\Phi^*$ indicates the adjoint of $\Phi$. Obviously, this linear system can be solved by the well-known preconditioned conjugate gradient technique.
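A minimal sketch of this update (our own illustration, using a plain, unpreconditioned conjugate gradient for brevity and assuming $\Phi$ is available as an explicit matrix; in practice $\Phi$ and $\Phi^*$ would be applied as operators):

```python
import numpy as np

def conjugate_gradient(apply_A, b, n_iter=100, tol=1e-8):
    """Plain CG for a symmetric positive definite system A x = b given as an operator."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = apply_A(p)
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def x_update(Phi, y, Lam, x_agg_vec, lam1):
    """Solve (lam1*I + Phi^T Phi) x = Phi^T (y - Lam) + lam1 * vec(aggregated group estimate)."""
    rhs = Phi.T @ (y - Lam) + lam1 * x_agg_vec
    return conjugate_gradient(lambda v: lam1 * v + Phi.T @ (Phi @ v), rhs)
```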
(e)
Update the multipliers
$$\mathcal{Z}_p = \mathcal{Z}_p + \rho \lambda_4 (\mathcal{B}_p - \mathcal{M}_p), \qquad \Lambda = \Lambda + \rho (\mathbf{y} - \Phi \mathbf{x}) \quad (23)$$
where $\rho$ is a parameter associated with the convergence rate, typically set in the range [1.05, 1.1]. The whole optimization procedure for the proposed HSI-CSR model is summarized in Algorithm 1, and we abbreviate the proposed method as NTSRLR.
Algorithm 1. HSI-CSR based on NTSRLR.
Input: The compressive measurements $\mathbf{y}$, the measurement operator $\Phi$, and the parameters of the algorithm.
1:  Initialization: Initialize an HSI $\mathbf{x}^{(0)}$ via a standard CSR method (e.g., DCT based CSR).
2:  For $l = 1 : L$ do
3:    Extract the set of tensors $\{\mathcal{X}_p\}_{p=1}^{P}$ from $\mathbf{x}^{(0)}$ via k-NN search around each exemplar cube;
4:    For $p = 1 : P$ do
5:        Solve problem (12) by ADMM;
6:        Update $\mathbf{U}_1^p, \mathbf{U}_2^p, \mathbf{U}_3^p$ via Equation (14);
7:        Update $\mathcal{G}_p$ via Equation (16);
8:        Update $\mathcal{M}_p$ via Equation (20);
9:        Update the multiplier $\mathcal{Z}_p$ via Equation (23);
10:    End for
11:    Update $\mathbf{x}^{(l)}$ via Equation (22);
12:    Update the multiplier $\Lambda$ via Equation (23);
13:  End for
Output: CS reconstructed HSI $\mathbf{x}^{(L)}$.

4. Experimental Results and Analysis

In this section, various experiments on real HSI datasets were conducted to assess the performance of the proposed NTSRLR method. We chose nine popular methods for comparison, namely the three classic CS methods StOMP [55], BCS [56] and the multidimensional signal based KCS [57]; the total variation based methods LRTV [34] and TVAL3 [58]; the structured sparsity based HSI-CSR methods RLPHCS [24], SRPREC [25] and CSFHR [28]; and the recent joint tensor decomposition regularization and total variation based method (JTRTV) [40]. These methods represent the state of the art in HSI-CSR, especially LRTV and JTRTV, which fully consider HSI sparsity priors. In the comparison experiments, we used the default parameter settings of the compared methods as described in their reference papers. We adopted a random measurement matrix as the sampling operator for all methods.

4.1. Quantitative Metrics

To evaluate the HSI-CSR performance of all methods, five quantitative picture quality indices (PQIs) were employed in the experiments. The first index is the mean peak signal-to-noise ratio (MPSNR), which is defined as the average PSNR over all bands of the HSI, i.e.,
$$\mathrm{MPSNR}(\mathcal{X}, \hat{\mathcal{X}}) = \frac{1}{S} \sum_{s=1}^{S} \mathrm{PSNR}(\mathbf{X}_s, \hat{\mathbf{X}}_s) \quad (24)$$
where $\mathbf{X}_s$ and $\hat{\mathbf{X}}_s$ denote the $s$th band images of the ground truth $\mathcal{X} \in \mathbb{R}^{W \times H \times S}$ and the reconstructed HSI $\hat{\mathcal{X}} \in \mathbb{R}^{W \times H \times S}$, respectively, and both are scaled to the range [0, 255].
The second index, the mean structural similarity (MSSIM), was used to evaluate the similarity between the reconstructed HSI and the original HSI based on structural consistency, and is defined as the average SSIM [59] over all bands of the HSI:
$$\mathrm{MSSIM}(\mathcal{X}, \hat{\mathcal{X}}) = \frac{1}{S} \sum_{s=1}^{S} \mathrm{SSIM}(\mathbf{X}_s, \hat{\mathbf{X}}_s) \quad (25)$$
The third index, the mean feature similarity (MFSIM), emphasizes the perceptual consistency with the original image, and is defined as the average FSIM [60] over all bands of the HSI:
$$\mathrm{MFSIM}(\mathcal{X}, \hat{\mathcal{X}}) = \frac{1}{S} \sum_{s=1}^{S} \mathrm{FSIM}(\mathbf{X}_s, \hat{\mathbf{X}}_s) \quad (26)$$
Higher values of these three measures (MPSNR, MSSIM and MFSIM) represent better reconstruction results.
The fourth index is the spectral angle mapper (SAM) [61], which calculates the average angle between the spectral vectors of the CS reconstructed HSI and the reference HSI across all spatial positions; its definition is as follows:
$$\mathrm{SAM}(\mathcal{X}, \hat{\mathcal{X}}) = \cos^{-1} \left( \frac{\mathbf{x}^T \hat{\mathbf{x}}}{\sqrt{\mathbf{x}^T \mathbf{x}} \sqrt{\hat{\mathbf{x}}^T \hat{\mathbf{x}}}} \right) \quad (27)$$
where $\mathbf{x}$ and $\hat{\mathbf{x}}$ denote the spectral vectors of the ground truth $\mathcal{X}$ and the reconstructed HSI $\hat{\mathcal{X}}$, respectively.
The fifth index is the erreur relative globale adimensionnelle de synthèse (ERGAS) [62], which measures the fidelity of the CS reconstructed HSI based on a weighted sum of the MSE of each band, defined as follows:
$$\mathrm{ERGAS}(\mathcal{X}, \hat{\mathcal{X}}) = 100 \sqrt{\frac{1}{S} \sum_{s=1}^{S} \frac{\mathrm{MSE}(\mathbf{X}_s, \hat{\mathbf{X}}_s)}{\mu_{\hat{\mathbf{X}}_s}^2}} \quad (28)$$
where $\mathrm{MSE}(\mathbf{X}_s, \hat{\mathbf{X}}_s)$ is the mean square error between $\mathbf{X}_s$ and $\hat{\mathbf{X}}_s$, and $\mu_{\hat{\mathbf{X}}_s}$ is the mean value of $\hat{\mathbf{X}}_s$. Different from the former three PQIs, smaller values of these two measures represent better reconstruction performance.
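For reproducibility, the band-wise metrics above can be computed with a few lines of NumPy. The sketch below (our own helper functions) covers MPSNR, SAM and ERGAS as written in Equations (24), (27) and (28); MSSIM and MFSIM additionally require SSIM [59] and FSIM [60] implementations (e.g., the SSIM routine in scikit-image), which are omitted here.

```python
import numpy as np

def mpsnr(X, X_hat, peak=255.0):
    """Mean PSNR over bands, Equation (24); X and X_hat are W x H x S arrays scaled to [0, 255]."""
    mse = np.mean((X - X_hat) ** 2, axis=(0, 1))
    return np.mean(10 * np.log10(peak ** 2 / np.maximum(mse, 1e-12)))

def sam(X, X_hat, eps=1e-12):
    """Mean spectral angle (radians) over all spatial positions, Equation (27)."""
    num = np.sum(X * X_hat, axis=2)
    den = np.linalg.norm(X, axis=2) * np.linalg.norm(X_hat, axis=2) + eps
    return np.mean(np.arccos(np.clip(num / den, -1.0, 1.0)))

def ergas(X, X_hat):
    """ERGAS as written in Equation (28) (without the resolution-ratio factor of some definitions)."""
    mse = np.mean((X - X_hat) ** 2, axis=(0, 1))
    mu = np.mean(X_hat, axis=(0, 1))
    return 100.0 * np.sqrt(np.mean(mse / np.maximum(mu ** 2, 1e-12)))
```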

4.2. Experiments on Noiseless HSI Datasets

All methods were evaluated on three HSIs, namely Toy from the CAVE dataset (http://www1.cs.columbia.edu/CAVE/databases/multispectral/), and PaviaU and the corrected Indian Pines from the hyperspectral remote sensing scenes collection (http://www.ehu.eus/ccwintco/index.php?title=Hyperspectra-Remote-Sensing-Scenes). Toy contains full spectral resolution reflectance data from 400 nm to 700 nm in 10 nm steps (31 bands in total), with a spatial resolution of 512 × 512. The PaviaU dataset contains 103 bands of 610 × 340 pixels. Indian Pines is of size 145 × 145 with 10 m spatial resolution and consists of 200 bands after removing 20 noisy bands polluted by water absorption; it covers the wavelength range from 400 to 2500 nm with 10 nm spectral resolution. We conducted experiments on these three HSI datasets mainly for the following reasons. (1) The three HSI datasets possess high spatial-spectral resolutions and rich nonlocal similarity, which allows the structured sparsity across the spatial-spectral domains to be exploited in our HSI-CSR model. (2) These HSIs are benchmark testing datasets in HSI reconstruction, as presented in [21,22,24,25,40,42,43,45,50,53,54]. (3) We selected a dataset with classification labels, Indian Pines, which helps to compare all methods in terms of classification accuracy. For the experiments, we cropped a sub-region of 300 × 300 for all bands of Toy and PaviaU, as shown in Figure 3. To validate the performance of the proposed method, five different sampling rates (SR), namely 0.02, 0.05, 0.10, 0.15 and 0.20, were considered.

4.2.1. Visual Quality Evaluation

To visually demonstrate the HSI-CSR performance of the proposed method, we present the pseudocolor images with bands (25, 15, 5), bands (55, 30, 5), and bands (23, 13, 3) of the reconstructed Toy, PaviaU and Indian Pines obtained by all methods under sampling rates of 0.20, 0.10 and 0.15 in Figure 4, Figure 5 and Figure 6, respectively. We have the following observations. (1) All the competing methods achieved relatively good reconstruction results. (2) The proposed method outperformed the other methods, as shown by the enlarged subregion (delineated by a red box), where the large-scale sharp edges and small-scale fine texture features are reconstructed well, as shown in Figure 4, Figure 5 and Figure 6j. (3) StOMP produced serious noise during reconstruction, and the details are blurred in the results of BCS, KCS and CSFHR. Instead of an $\ell_1$-based sparsity term, TVAL3 utilizes TV regularization based on gradient sparsity to preserve more accurate edges, but many details are lost. Although LRTV simultaneously considers the gradient sparsity and low-rankness of the data, the lack of an effective constraint for nonlocal spatial information generates blurring artifacts. The JTRTV method is a generalization of LRTV to high-dimensional data; although it can deal with the artifact problem of LRTV, it introduces unwanted noise. RLPHCS and SRPREC consider structured sparsity based on a reweighted Laplace prior. Nevertheless, their reconstruction results are unsatisfactory, and the two methods appear to be largely ineffective for HSI-CSR. We offer the following justifications for the poor performance of RLPHCS and SRPREC: (1) the two HSI-CSR models use the maximum a posteriori framework to learn the hyperparameters, and the accumulation of estimation bias for the parameters may lead to poor HSI-CSR performance; (2) the collected dictionaries in the RLPHCS and SRPREC algorithms may not be overcomplete and do not fully capture the redundant structure over the spatial and spectral domains. Overall, these comparisons demonstrate the effectiveness of the NTSRLR technique for HSI-CSR, which greatly preserves the local details and structural information of the HSI.

4.2.2. Quantitative Evaluation

In Table 1 and Table 2, we report the performance of all methods in terms of MPSNR, MSSIM, MFSIM, SAM and ERGAS over all the spectral bands of Toy, PaviaU and Indian Pines. We highlight the best results for each case in bold in the current and following tables. The proposed method outperforms the other approaches under all sampling rates, and in particular the PQIs are better than those of the recent JTRTV. At sampling rate ρ = 0.02, NTSRLR improves the MPSNR by at least 10 dB over JTRTV on Toy, 1.3 dB on PaviaU, and 2.7 dB on Indian Pines. For ρ = 0.20, the average gains in MPSNR of NTSRLR over JTRTV are further amplified, up to 14 dB on Toy, 8 dB on PaviaU and 7 dB on Indian Pines. The MSSIM, MFSIM, SAM and ERGAS values on the three HSI datasets further confirm the robustness of the proposed method at all sampling rates. Although LRTV is the second best method, it is still clearly inferior to ours in the visual quality evaluation. Since NTSRLR explores the underlying nonlocal structure of an HSI by tensor sparse representation and low-rank modeling, it yields higher MPSNR, MSSIM and MFSIM values, and smaller SAM and ERGAS, than the other methods, which only consider local or single sparsity priors.
The PSNR, SSIM and FSIM values across all bands of Indian Pines under sampling rate ρ = 0.10 are presented in Figure 7. The proposed method achieves the best PSNR, SSIM and FSIM values in most bands of the HSI, which further validates the robustness of the proposed method over all spectral bands. To further illustrate the superiority of the proposed NTSRLR in spectrum reconstruction, we chose four regions in the Toy and PaviaU datasets, shown in Figure 8a,d; the average reflectance differences were calculated between the reconstructed spectra and the original spectra across all bands. The curves of these average reflectance differences are plotted in Figure 8b,c for Toy and Figure 8e,f for PaviaU. It is obvious that the reflectance difference between the reference and the NTSRLR reconstruction is close to zero, much better than the other comparison methods.

4.2.3. Classification Performance on Indian Pines Dataset

The classification accuracy of the HSI reconstructed by the different algorithms was employed to further verify the effectiveness of the proposed method. Under the same circumstances, we chose the support vector machine (SVM) [63] as the classifier and overall accuracy (OA) as the evaluation index. For the SVM classification, we used the 16 ground-truth classes in Indian Pines and randomly generated training sets containing 10% of the samples from each class to test the classification accuracy. The classification results with the different HSI-CSR methods under sampling rate ρ = 0.20 are shown in Figure 9a–j, and the OA values are given in Table 3. As shown in Figure 9j, the classification result on the original HSI appears spatially continuous, with an OA of 86.37%. As shown in Figure 9i, the classification result of NTSRLR still shows such continuity, and the OA of NTSRLR is the closest to the reference value. In contrast, the classification results of the other methods are more fragmented in most regions of the image, with lower OA values.

4.3. Robustness for Noise Suppression during HSI-CSR

To further evaluate the effectiveness and robustness of the proposed HSI-CSR method for noise suppression, we chose the Urban dataset (http://www.tec.army.mil/hypercube), which is contaminated by different degrees of mixed noise, has a size of 307 × 307 with 4 m spatial resolution, and covers the wavelength range from 400 to 2400 nm with 10 nm spectral resolution. For the same competing methods, we removed 24 bands seriously affected by atmospheric attenuation and water absorption, and finally retained 186 bands for the dataset.
We present the pseudocolor image with bands (186, 131, 1), in which the input data are polluted by Gaussian noise and stripes, as shown in Figure 10k. The CSR results produced by StOMP, BCS, CSFHR and TVAL3 could neither recover the original HSI nor perform the denoising task well. Worse, the methods RLPHCS and SRPREC amplified the noise. Although the methods KCS, LRTV and JTRTV could suppress the noise to some extent, they lost edges and textural details when compared with NTSRLR.
Furthermore, we present quantitative comparisons by showing the horizontal mean profiles of bands 1 and 186 of the Urban dataset before and after CSR in Figure 11 and Figure 12. The horizontal axis in each figure denotes the row number, and the vertical axis represents the mean gray value of each row. As shown in Figure 11k and Figure 12k, the profiles fluctuate heavily due to the disturbance of noise. After CSR, the fluctuation is moderately alleviated. Evidently, the profiles obtained with the proposed NTSRLR method are more natural and smoother. The over-smooth profiles corresponding to BCS are mainly due to image blurring. This further substantiates the efficiency and robustness of the proposed HSI-CSR method for noise suppression.
Here, we give a theoretical analysis to explain why the proposed HSI-CSR algorithm is able to suppress noise at the same time. The primary cause is that the proposed NTSRLR attributes the noise suppression to the joint tensor sparse and low-rank constraints on nonlocal cubes. The work in [64] shows that a low-rank representation of the nonlocal patches similar to a given patch offers a helpful remedy for image denoising. For tensor data, one can obtain the same result when unfolding the tensor into a matrix along a certain mode, and the nonlocal tensor low-rank term of the NTSRLR model can simultaneously provide complementary low-rank structures along all modes to promote the denoising performance on tensor data. Therefore, the noise in the HSI can be suppressed to some extent. Besides, the research in [53,54] has demonstrated the effectiveness of tensor sparse models in multi-dimensional signal denoising, which verifies the positive impact of NTSRLR on noise suppression from the perspective of tensor sparse representation.
Note that we removed all noisy bands and preserved only 171 bands for the quantitative assessment. Table 4 presents the MPSNR, MSSIM, MFSIM, ERGAS and SAM of all methods under sampling rates 0.10, 0.15 and 0.20. It can be seen that our method not only recovers the structural and perceptual features of the Urban dataset, but also better preserves the spectral information.

4.4. Effectiveness Analysis of Single NTSR or NTLR Constraint

To further demonstrate the effectiveness of the nonlocal tensor sparse representation and low-rank regularization in our model, we conducted two more experiments using the PaviaU dataset. The first experiment performed CSR without the nonlocal tensor low-rank regularization term, so that the reconstructed HSI was obtained solely by nonlocal tensor sparse representation (NTSR). The second experiment performed the reconstruction with the nonlocal tensor low-rank regularization only, without NTSR, which is abbreviated as NTLR.
Figure 13 shows the comparison of MPSNR, MSSIM and SAM of all methods under sampling rates from 0.05 to 0.20 in steps of 0.05. Compared with the other methods, the proposed NTSRLR obtained larger MPSNR and MSSIM values and smaller errors as measured by SAM under the different sampling rates. In particular, when the sampling rate is small, the results from NTSRLR are significantly better than those of NTSR and NTLR, which are based on a single constraint. This provides additional evidence for the effectiveness of integrating both the nonlocal sparse representation and the low-rankness constraints in our model.

4.5. Computational Complexity Analysis

For an input HSI $\mathcal{X} \in \mathbb{R}^{W \times H \times S}$, the number of FBC groups is $P = O(WH)$, and the size of each FBC group is $wh \times s \times S$, where $s$ is the number of FBCs in each group. The computational cost is not small for a large $P$. However, CSR on the $P$ FBC groups can be processed in parallel, each with relatively small computational complexity. The computational complexity of the proposed algorithm mainly lies in the updates of $\mathbf{M}_{p(i)}$ and $\mathbf{U}_i^p$ ($i = 1, 2, 3$). Updating $\mathbf{U}_i^p$ requires computing the SVD of an $I_i \times I_i$ matrix, and updating $\mathbf{M}_{p(i)}$ requires computing the SVD of an $I_i \times (\prod_{j \ne i} I_j)$ matrix. In comparison, updating the other variables $\mathcal{G}_p$, $\mathbf{x}$ and the multipliers does not consume much running time.

4.6. Convergence Analysis

Lastly, we conducted experiments to show the convergence of our method, using the Toy and Indian Pines datasets as examples under different sampling rates and different initializations. Figure 14 plots the PSNR versus the iteration number for the tested HSIs when the sampling rates are 0.10 for Toy and 0.15 for Indian Pines, using the initializations $\mathbf{x} = \Phi^* \mathbf{y}$ and DCT. As can be seen, the different initializations lead to quite similar solutions, which indicates that the performance of the proposed algorithm is not sensitive to the initialization. However, the two initializations have different convergence rates; the DCT initialization requires only a small number of iterations to reach the final PSNR. Therefore, we adopted the DCT-based initialization strategy to speed up our algorithm. Besides, the PSNR value becomes constant when the algorithm converges. Thus, in the experiments, we set a maximum number of iterations as the termination condition.

4.7. Parameters Analysis

There are four parameters $\{\lambda_i\}_{i=1}^{4}$ in the proposed model. Considering the different roles of the nonlocal tensor sparsity and low-rankness terms, we conducted the two additional experiments on the PaviaU dataset reported in Section 4.4. The MPSNR, MSSIM and SAM results demonstrate that the nonlocal tensor low-rank regularization term plays a more important role in the proposed model than the nonlocal tensor sparse representation term. This implies that the nonlocal tensor low-rankness term should be assigned a greater weight to balance the two parts. Therefore, we set $\lambda_2 = 1$ and $\lambda_3 = 10$ in all our experiments. Correspondingly, we can regard the other two terms weighted by $\lambda_1$ and $\lambda_4$ as fidelity terms for the nonlocal tensor sparsity and low-rankness; it is reasonable to use a greater value for $\lambda_4$, and we set $\lambda_1 = 0.02$ and $\lambda_4 = 250$, as suggested in [42].
Besides, the spatial size of the cubes and the number of nonlocal similar cubes are two key parameters. Previous research [17,18,30,41] reports that the cube size and the number of nonlocal similar cubes depend on the sampling rate. The lower the sampling rate is, the more detailed texture and structure information the HSI loses. For this reason, a bigger spatial size and more nonlocal similar cubes are beneficial, as they provide extra knowledge to further promote the HSI reconstruction performance. Thus, following the parameter setting principle in [17,18,30,41], we set the spatial size to 6 × 6, 7 × 7, 8 × 8, 9 × 9 and 10 × 10 for ρ = 0.20, 0.15, 0.10, 0.05 and 0.02, respectively; the corresponding numbers of nonlocal similar cubes are set to 50, 55, 60, 65 and 70, as collected in the sketch below.
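For reference, these sampling-rate-dependent settings can be gathered in a small lookup table (variable names are our own; the values are those listed above):

```python
# Cube spatial size and number of nonlocal similar cubes per sampling rate (Section 4.7).
PARAMS_BY_SR = {
    0.20: {"cube_size": 6,  "num_similar": 50},
    0.15: {"cube_size": 7,  "num_similar": 55},
    0.10: {"cube_size": 8,  "num_similar": 60},
    0.05: {"cube_size": 9,  "num_similar": 65},
    0.02: {"cube_size": 10, "num_similar": 70},
}
# Global trade-off parameters used in all experiments:
# lambda1 = 0.02, lambda2 = 1, lambda3 = 10, lambda4 = 250.
```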

5. Conclusions

In this paper, we proposed a novel method for hyperspectral image compressive sensing reconstruction based on nonlocal tensor sparse representation and low-rank regularization. The proposed method considers the intrinsic structured sparsity, in which the nonlocal similarity between spatial cubes and the global correlation across all bands are fully taken into account. Each cube group contains similar structures, so its tensor-based sparsity and low-rank properties can be regarded as very valuable priors. Experimental results reveal that the proposed method outperforms the state-of-the-art methods in terms of visual inspection, quantitative assessment and classification accuracy. The proposed method is also superior in noise suppression. We also conclude that it is advantageous to integrate constraints using both the nonlocal tensor sparse representation and the low-rankness rather than using only one of them in our model.

Author Contributions

All authors contributed to the design of the methodology and the validation of experiments. J.X. wrote the paper. Y.Z., W.L. and J.C.-W.C. reviewed and revised the paper.

Funding

This work was supported by the National Natural Science Foundation of China (61371152 and 61771391), the Shenzhen Municipal Science and Technology Innovation Committee (JCYJ20170815162956949), and the Fund for Scientific Research in Flanders (FWO) project G037115N Data fusion for image analysis in remote sensing. Wenzhi Liao, a postdoctoral fellow of the Research Foundation Flanders (FWO-Vlaanderen), acknowledges its support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, J.; Zhao, Y.; Chan, J.C.-W. Learning and transferring deep joint spectral—Spatial features for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4729–4742. [Google Scholar] [CrossRef]
  2. Yuan, Y.; Lin, J.; Wang, Q. Hyperspectral image classification via multitask joint sparse representation and stepwise MRF optimization. IEEE Trans. Cybern. 2016, 46, 2966–2977. [Google Scholar] [CrossRef]
  3. Liu, Y.; Shi, Z.; Zhang, G.; Chen, Y.; Li, S.; Hong, Y.; Shi, T.; Wang, J.; Liu, Y. Application of Spectrally Derived Soil Type as Ancillary Data to Improve the Estimation of Soil Organic Carbon by Using the Chinese Soil Vis-NIR Spectral Library. Remote Sens. 2018, 10, 1747. [Google Scholar] [CrossRef]
  4. Khelifi, F.; Bouridane, A.; Kurugollu, F. Joined spectral trees for scalable spiht-based multispectral image compression. IEEE Trans. Multimed. 2008, 10, 316–329. [Google Scholar] [CrossRef]
  5. Christophe, E.; Mailhes, C.; Duhamel, P. Hyperspectral image compression: Adapting spiht and ezw to anisotropic 3-d wavelet coding. IEEE Trans. Image Process. 2008, 17, 2334–2346. [Google Scholar] [CrossRef] [PubMed]
  6. Töreyın, B.U.; Yilmaz, O.; Mert, Y.M.; Türk, F. Lossless hyperspectral image compression using wavelet transform based spectral decorrelation. In Proceedings of the IEEE 7th International Conference on Recent Advances in Space Technologies (RAST), Istanbul, Turkey, 16–19 June 2015; pp. 251–254. [Google Scholar]
  7. Wang, L.; Wu, J.; Jiao, L.; Shi, G. Lossy-to-lossless hyperspectral image compression based on multiplierless reversible integer TDLT/KLT. IEEE Geosci. Remote Sens. Lett. 2009, 6, 587–591. [Google Scholar] [CrossRef]
  8. Mielikainen, J.; Toivanen, P. Clustered DPCM for the lossless compression of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2943–2946. [Google Scholar] [CrossRef]
  9. Du, Q.; Fowler, J.E. Hyperspectral image compression using JPEG2000 and principal component analysis. IEEE Geosci. Remote Sens. Lett. 2007, 4, 201–205. [Google Scholar] [CrossRef]
  10. Du, Q.; Ly, N.; Fowler, J.E. An operational approach to PCA+JPEG2000 compression of hyperspectral imagery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2237–2245. [Google Scholar] [CrossRef]
  11. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  12. Boufounos, D.; Liu, D.; Boufounos, P.T. A lecture on compressive sensing. IEEE Signal Process. Mag. 2007, 24, 1–9. [Google Scholar]
  13. Huang, J.; Zhang, T.; Metaxas, D. Learning with structured sparsity. J. Mach. Learn. Res. 2011, 12, 3371–3412. [Google Scholar]
  14. Tan, M.; Tsang, I.W.; Wang, L. Matching pursuit LASSO part I: Sparse recovery over big dictionary. IEEE Trans. Signal Process. 2015, 63, 727–741. [Google Scholar] [CrossRef]
  15. Candes, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [Google Scholar] [CrossRef]
  16. Chartrand, R.; Yin, W. Iterative Reweighted Algorithms for Compressive Sensing. In Proceedings of the IEEE International Conference on Acoust. Speech Signal Process, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 3869–3872. [Google Scholar]
  17. Dong, W.; Wu, X.; Shi, G. Sparsity fine tuning in wavelet domain with application to compressive image reconstruction. IEEE Trans. Image Process. 2014, 23, 5249–5262. [Google Scholar] [CrossRef]
  18. Dong, W.; Shi, G.; Li, X.; Ma, Y.; Huang, F. Compressive sensing via nonlocal low-rank regularization. IEEE Trans. Image Process. 2014, 23, 3618–3632. [Google Scholar] [CrossRef] [PubMed]
  19. Dong, W.; Li, X.; Zhang, L.; Shi, G. Sparsity-based image denoising via dictionary learning and structural clustering. In Proceedings of the Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 457–464. [Google Scholar]
  20. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279. [Google Scholar]
  21. Zhang, L.; Wei, W.; Zhang, Y.; Yan, H.; Li, F.; Tian, C. Locally similar sparsity-based hyperspectral compressive sensing using unmixing. IEEE Trans. Comput. Imaging 2016, 2, 86–100. [Google Scholar] [CrossRef]
  22. Wang, L.; Feng, Y.; Gao, Y.; Wang, Z.; He, M. Compressed sensing reconstruction of hyperspectral images based on spectral unmixing. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2018, 11, 1266–1284. [Google Scholar] [CrossRef]
  23. Li, C.; Sun, T.; Kelly, K.F.; Zhang, Y. A compressive sensing and unmixing scheme for hyperspectral data processing. IEEE Trans. Image Process. 2012, 21, 1200–1210. [Google Scholar]
  24. Zhang, L.; Wei, W.; Tian, C.; Li, F.; Zhang, Y. Exploring structured sparsity by a reweighted laplace prior for hyperspectral compressive sensing. IEEE Trans. Image Process. 2016, 25, 4974–4988. [Google Scholar] [CrossRef]
  25. Zhang, L.; Wei, W.; Zhang, Y.; Shen, C.; Hengel, A.V.D.; Shi, Q. Dictionary learning for promoting structured sparsity in hyperspectral compressive sensing. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7223–7235. [Google Scholar] [CrossRef]
  26. Fu, W.; Li, S.; Fang, L.; Benediktsson, J.A. Adaptive spectral—Spatial compression of hyperspectral image with sparse representation. IEEE Trans. Geosc. Remote Sens. 2017, 55, 671–682. [Google Scholar] [CrossRef]
  27. Lin, X.; Liu, Y.; Wu, J.; Dai, Q. Spatial-spectral encoded compressive hyperspectral imaging. ACM Trans. Graphics (TOG) 2014, 33, 233. [Google Scholar] [CrossRef]
  28. Zhang, L.; Wei, W.; Zhang, Y.; Shen, C.; Hengel, A.V.D.; Shi, Q. Cluster sparsity field: An internal hyperspectral imagery prior for reconstruction. Int. J. Comput. Vis. 2015, 11, 1–25. [Google Scholar] [CrossRef]
  29. Meza, P.; Ortiz, I.; Vera, E.; Martinez, J. Compressive hyperspectral imaging recovery by spatial-spectral non-local means regularization. Opt. Express 2018, 26, 7043–7055. [Google Scholar] [CrossRef] [PubMed]
  30. Wei, J.; Huang, Y.; Lu, K.; Wang, L. Nonlocal low-rank-based compressed sensing for remote sensing image reconstruction. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1557–1561. [Google Scholar] [CrossRef]
  31. Khan, Z.; Shafait, F.; Mian, A. Joint group sparse pca for compressed hyperspectral imaging. IEEE Trans. Image Process. 2015, 24, 4934–4942. [Google Scholar] [CrossRef] [PubMed]
  32. Eason, D.T.; Andrews, M. Total variation regularization via continuation to recover compressed hyperspectral images. IEEE Trans. Image Process. 2015, 24, 284–293. [Google Scholar] [CrossRef] [PubMed]
  33. Jia, Y.; Luo, Z. Weighted total variation iterative reconstruction for hyperspectral pushbroom compressive imaging. J. Image Process. Theory Appl. 2016, 1, 6–10. [Google Scholar]
  34. Golbabaee, M.; Vandergheynst, P. Joint trace/TV norm minimization: A new efficient approach for spectral compressive imaging. In Proceedings of the 19th IEEE International Conference on Image Processing (ICIP), Orlando, FL, USA, 30 September–3 October 2012; pp. 933–936. [Google Scholar]
  35. Karami, A.; Yazdi, M.; Mercier, G. Compression of hyperspectral images using discrete wavelet transform and Tucker decomposition. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2012, 5, 444–450. [Google Scholar] [CrossRef]
  36. Wang, L.; Bai, J.; Wu, J.; Jeon, G. Hyperspectral image compression based on lapped transform and Tucker decomposition. Signal Process. Image Commun. 2015, 36, 63–69. [Google Scholar] [CrossRef]
  37. Zhang, L.; Zhang, L.; Tao, D.; Huang, X.; Du, B. Compression of hyperspectral remote sensing images by tensor approach. Neurocomputing 2015, 147, 358–363. [Google Scholar] [CrossRef]
  38. Fang, L.; He, N.; Lin, H. CP tensor-based compression of hyperspectral images. J. Opt. Image Sci. Vis. 2017, 34, 252. [Google Scholar] [CrossRef] [PubMed]
  39. Yang, S.; Wang, M.; Li, P.; Jin, L.; Wu, B.; Jiao, L. Compressive hyperspectral imaging via sparse tensor and nonlinear compressed sensing. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5943–5957. [Google Scholar] [CrossRef]
  40. Wang, Y.; Lin, L.; Zhao, Q.; Yue, T.; Meng, D.; Leung, Y. Compressive sensing of hyperspectral images via joint tensor tucker decomposition and weighted total variation regularization. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2457–2461. [Google Scholar] [CrossRef]
  41. Du, B.; Zhang, M.; Zhang, L.; Hu, R.; Tao, D. PLTD: Patch-based low-rank tensor decomposition for hyperspectral images. IEEE Trans. Multimed. 2016, 19, 67–79. [Google Scholar] [CrossRef]
  42. Xie, Q.; Zhao, Q.; Meng, D.; Xu, Z.; Gu, S.; Zuo, W.; Zhang, L. Multispectral images denoising by intrinsic tensor sparsity regularization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1692–1700. [Google Scholar]
  43. Peng, Y.; Meng, D.; Xu, Z.; Gao, C.; Yang, Y.; Zhang, B. Decomposable nonlocal tensor dictionary learning for multispectral image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2949–2956. [Google Scholar]
  44. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  45. Xue, J.; Zhao, Y.; Hao, J. Tensor non-local low-rank regularization for recovering compressed hyperspectral images. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3046–3050. [Google Scholar]
  46. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  47. Schwab, H. For most large underdetermined systems of linear equations the minimal l1 solution is also the sparsest solution. Commun. Pur Appl. Math. 2006, 59, 797–829. [Google Scholar]
48. Daubechies, I.; Defrise, M.; Mol, C.D. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457.
49. Zhang, X.; Burger, M.; Bresson, X.; Osher, S. Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imag. Sci. 2010, 3, 253–276.
50. Xue, J.; Zhao, Y.; Liao, W.; Kong, S.G. Joint spatial and spectral low-rank regularization for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1940–1958.
51. Liu, J.; Musialski, P.; Wonka, P.; Ye, J. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 208–220.
52. Quan, Y.; Huang, Y.; Ji, H. Dynamic texture recognition via orthogonal tensor dictionary learning. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 73–81.
53. Qi, N.; Shi, Y.; Sun, X.; Yin, B. TenSR: Multi-dimensional tensor sparse representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5916–5925.
54. Qi, N.; Shi, Y.; Sun, X.; Wang, J.; Yin, B.; Gao, J. Multi-dimensional sparse models. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 163–178.
55. Donoho, D.L.; Tsaig, Y.; Drori, I.; Starck, J.L. Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory 2012, 58, 1094–1121.
56. Ji, S.; Xue, Y.; Carin, L. Bayesian compressive sensing. IEEE Trans. Signal Process. 2008, 56, 2346–2356.
57. Duarte, M.F.; Baraniuk, R.G. Kronecker compressive sensing. IEEE Trans. Image Process. 2012, 21, 494–504.
58. Li, C.; Yin, W.; Jiang, H.; Zhang, Y. An efficient augmented Lagrangian method with applications to total variation minimization. Comput. Optim. Appl. 2013, 56, 507–530.
59. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
60. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
61. Wald, L. Data Fusion: Definitions and Architectures: Fusion of Images of Different Spatial Resolutions; Presses des MINES: Paris, France, 2002.
62. Yuhas, R.H.; Boardman, J.W.; Goetz, A.F. Determination of semi-arid landscape endmembers and seasonal trends using convex geometry spectral unmixing techniques. In Summaries of the 4th Annual JPL Airborne Geoscience Workshop; NASA: Washington, DC, USA, 1993; Volume 4, pp. 205–208.
63. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
64. Gu, S.; Xie, Q.; Meng, D.; Zuo, W.; Feng, X.; Zhang, L. Weighted nuclear norm minimization and its applications to low level vision. Int. J. Comput. Vis. 2017, 121, 183–208.
Figure 1. Flowchart of the proposed HSI-CSR algorithm, which consists of two steps: sensing and reconstruction. First, it acquires the compressive measurement y by a random sampling matrix Φ . Second, NTSRLR recovers an HSI from the measurements y = Φ x .
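To make the sensing step in Figure 1 concrete, the following is a minimal sketch of acquiring compressive measurements y = Φx from a vectorized cube, assuming a dense Gaussian sampling matrix Φ; the measurement operator used in the actual experiments may be implemented differently (e.g., as a structured or permuted random operator), so the function name and parameters below are illustrative only.

```python
import numpy as np

def compressive_sample(hsi, rho, seed=0):
    """Vectorize an HSI cube and take m = ceil(rho * n) random linear measurements."""
    x = hsi.reshape(-1)                              # vectorize the W x H x B cube
    n = x.size
    m = int(np.ceil(rho * n))
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sampling matrix Phi
    y = phi @ x                                      # compressive measurements y = Phi x
    return y, phi

# Example on a small synthetic cube; a full-size HSI would require a structured
# operator rather than an explicit dense matrix.
toy_cube = np.random.rand(8, 8, 4)
y, phi = compressive_sample(toy_cube, rho=0.10)
print(y.shape)  # (26,) measurements for the 256 cube entries
```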
Figure 2. Nonlocal tensor sparsity and low-rank property analysis in HSI.
Figure 3. HSIs employed in the compressive sensing experiments: (a) Toy; (b) PaviaU; and (c) Indian Pines.
Figure 4. Compressive sensing reconstructed results on pseudocolor images with bands (25, 15, 5) of the Toy image from different methods under sampling rate ρ = 0.20.
Figure 5. Compressive sensing reconstructed results on pseudocolor images with bands (55, 30, 5) of the PaviaU image from different methods under sampling rate ρ = 0.10.
Figure 6. Compressive sensing reconstructed results on pseudocolor images with bands (23, 13, 3) of the Indian Pines image from different methods under sampling rate ρ = 0.15.
Figure 7. Comparison of PSNR, SSIM, and FSIM values of different methods for each band of the Indian Pines dataset under sampling rate ρ = 0.20.
Figure 8. Comparison of spectral differences on the Toy and PaviaU datasets: (b,c) spectral difference curves of different methods for the regions marked by the cyan and green rectangles of Toy in (a) under sampling rate ρ = 0.05; and (e,f) spectral difference curves of different methods for the regions marked by the red and blue rectangles of PaviaU in (d) under sampling rate ρ = 0.10.
Figure 9. Classification results for the Indian Pines image using SVM before and after CSR under sampling rate ρ = 0.20.
Figure 10. Compressive sensing reconstructed results on pseudocolor images with bands (186, 131, 1) of the noisy Urban image from different methods under sampling rate ρ = 0.10.
Figure 11. Horizontal mean profiles of the compressive sensing reconstructed results on the 1st band of the real noisy Urban HSI data from different methods under sampling rate ρ = 0.10.
Figure 12. Horizontal mean profiles of the compressive sensing reconstructed results on the 186th band of the real noisy Urban HSI data from different methods under sampling rate ρ = 0.10.
Figure 13. MPSNR, MSSIM, and SAM bars of different methods under sampling rates from 0.05 to 0.20 with an interval of 0.05 on the PaviaU dataset.
Figure 14. Verification of the convergence of the proposed method. Progression of the PSNRs for the Toy and Indian Pines datasets under different sampling rates.
Table 1. MPSNRs, MSSIMs, and MFSIMs of different CSR methods on three selected HSIs under different sampling rates.
SRs | PQIs | StOMP [55] | BCS [56] | KCS [57] | LRTV [34] | TVAL3 [58] | RLPHCS [24] | SRPREC [25] | JTRTV [40] | CSFHR [28] | NTSRLR

Results on Toy
0.02 | MPSNR | 25.27 | 18.45 | 23.39 | 22.08 | 22.91 | 13.19 | 14.40 | 17.19 | 25.87 | 27.81
     | MSSIM | 0.7040 | 0.3499 | 0.6565 | 0.6651 | 0.6364 | 0.2089 | 0.2786 | 0.1601 | 0.6639 | 0.7322
     | MFSIM | 0.8044 | 0.6937 | 0.7820 | 0.8061 | 0.7397 | 0.6651 | 0.6272 | 0.5033 | 0.8389 | 0.8484
0.05 | MPSNR | 29.35 | 24.63 | 26.93 | 26.51 | 27.63 | 13.22 | 13.89 | 22.65 | 29.96 | 34.22
     | MSSIM | 0.8256 | 0.6672 | 0.7811 | 0.7873 | 0.7817 | 0.2372 | 0.1929 | 0.3374 | 0.7462 | 0.8930
     | MFSIM | 0.9189 | 0.7837 | 0.8523 | 0.8783 | 0.8273 | 0.6493 | 0.5480 | 0.6233 | 0.8845 | 0.9423
0.10 | MPSNR | 29.71 | 28.24 | 29.94 | 32.06 | 31.81 | 13.06 | 15.92 | 29.93 | 32.35 | 40.12
     | MSSIM | 0.8416 | 0.8072 | 0.8641 | 0.9233 | 0.8871 | 0.2034 | 0.1267 | 0.6860 | 0.8418 | 0.9640
     | MFSIM | 0.9261 | 0.8563 | 0.8987 | 0.9517 | 0.9052 | 0.6163 | 0.4505 | 0.8466 | 0.9255 | 0.9814
0.15 | MPSNR | 30.90 | 29.40 | 31.88 | 34.99 | 33.46 | 13.69 | 27.79 | 31.47 | 34.99 | 44.52
     | MSSIM | 0.8982 | 0.8429 | 0.9025 | 0.9427 | 0.9141 | 0.1993 | 0.7492 | 0.7673 | 0.8985 | 0.9848
     | MFSIM | 0.9485 | 0.8777 | 0.9232 | 0.9669 | 0.9282 | 0.5642 | 0.9082 | 0.8894 | 0.9527 | 0.9928
0.20 | MPSNR | 31.75 | 31.63 | 33.26 | 40.54 | 37.65 | 13.71 | 25.74 | 33.39 | 38.53 | 47.86
     | MSSIM | 0.9345 | 0.8845 | 0.9236 | 0.9808 | 0.9593 | 0.2495 | 0.7384 | 0.8504 | 0.9541 | 0.9925
     | MFSIM | 0.9617 | 0.9094 | 0.9375 | 0.9876 | 0.9664 | 0.6182 | 0.8942 | 0.9307 | 0.9785 | 0.9965

Results on PaviaU
0.02 | MPSNR | 28.11 | 21.74 | 23.79 | 23.08 | 22.99 | 15.18 | 14.84 | 28.04 | 25.11 | 29.83
     | MSSIM | 0.7603 | 0.4767 | 0.5486 | 0.6500 | 0.5014 | 0.1562 | 0.0990 | 0.6708 | 0.6923 | 0.8000
     | MFSIM | 0.8246 | 0.6825 | 0.6743 | 0.7974 | 0.6429 | 0.6808 | 0.5758 | 0.8593 | 0.8095 | 0.8884
0.05 | MPSNR | 30.06 | 24.26 | 26.59 | 27.49 | 25.29 | 14.38 | 15.46 | 35.73 | 32.74 | 37.96
     | MSSIM | 0.8571 | 0.5572 | 0.6783 | 0.8099 | 0.5914 | 0.1698 | 0.1266 | 0.9235 | 0.8756 | 0.9551
     | MFSIM | 0.9371 | 0.7379 | 0.7854 | 0.8863 | 0.7132 | 0.7123 | 0.6379 | 0.9666 | 0.9442 | 0.9774
0.10 | MPSNR | 30.40 | 26.36 | 29.14 | 32.99 | 27.48 | 15.73 | 16.00 | 37.10 | 34.36 | 42.15
     | MSSIM | 0.8223 | 0.6479 | 0.7871 | 0.9158 | 0.6907 | 0.1225 | 0.1157 | 0.9452 | 0.9062 | 0.9794
     | MFSIM | 0.9409 | 0.7963 | 0.8606 | 0.9479 | 0.7894 | 0.5930 | 0.5461 | 0.9761 | 0.9583 | 0.9905
0.15 | MPSNR | 31.59 | 27.08 | 30.85 | 33.81 | 28.33 | 26.46 | 28.29 | 37.39 | 36.77 | 44.55
     | MSSIM | 0.8707 | 0.6812 | 0.8422 | 0.9417 | 0.7268 | 0.6771 | 0.8567 | 0.9487 | 0.9417 | 0.9872
     | MFSIM | 0.9523 | 0.8137 | 0.8981 | 0.9683 | 0.8165 | 0.8738 | 0.9255 | 0.9778 | 0.9741 | 0.9944
0.20 | MPSNR | 32.49 | 28.54 | 32.13 | 40.56 | 30.46 | 28.14 | 35.38 | 38.03 | 40.56 | 46.55
     | MSSIM | 0.9020 | 0.7445 | 0.8745 | 0.9740 | 0.8057 | 0.7328 | 0.9547 | 0.9548 | 0.9705 | 0.9917
     | MFSIM | 0.9594 | 0.8518 | 0.9198 | 0.9862 | 0.8745 | 0.8964 | 0.9800 | 0.9807 | 0.9871 | 0.9965

Results on Indian Pines
0.02 | MPSNR | 30.45 | 33.03 | 31.46 | 22.81 | 30.12 | 19.51 | 23.58 | 30.87 | 30.85 | 33.54
     | MSSIM | 0.7487 | 0.7692 | 0.7385 | 0.4916 | 0.7839 | 0.2234 | 0.4025 | 0.8010 | 0.8089 | 0.8202
     | MFSIM | 0.8299 | 0.8128 | 0.7337 | 0.8421 | 0.8026 | 0.7149 | 0.8327 | 0.8102 | 0.8500 | 0.8775
0.05 | MPSNR | 35.70 | 37.23 | 33.71 | 26.77 | 37.28 | 16.44 | 21.01 | 37.07 | 36.86 | 41.15
     | MSSIM | 0.8693 | 0.8153 | 0.7763 | 0.8057 | 0.8221 | 0.0920 | 0.2944 | 0.9240 | 0.8671 | 0.9470
     | MFSIM | 0.8639 | 0.8554 | 0.7983 | 0.8936 | 0.8517 | 0.4714 | 0.8125 | 0.9475 | 0.9210 | 0.9553
0.10 | MPSNR | 40.77 | 38.97 | 35.38 | 34.10 | 39.66 | 16.06 | 25.10 | 39.29 | 37.38 | 44.12
     | MSSIM | 0.9395 | 0.8427 | 0.8165 | 0.9153 | 0.8606 | 0.0614 | 0.5336 | 0.9338 | 0.8798 | 0.9719
     | MFSIM | 0.9420 | 0.8867 | 0.8491 | 0.9440 | 0.8919 | 0.3846 | 0.8317 | 0.9472 | 0.9439 | 0.9750
0.15 | MPSNR | 43.71 | 39.42 | 36.39 | 34.65 | 40.47 | 19.62 | 24.05 | 39.85 | 39.27 | 45.65
     | MSSIM | 0.9465 | 0.8478 | 0.8417 | 0.9248 | 0.8743 | 0.4756 | 0.4416 | 0.9354 | 0.9197 | 0.9810
     | MFSIM | 0.9794 | 0.8942 | 0.8743 | 0.9496 | 0.9056 | 0.7956 | 0.7804 | 0.9476 | 0.9569 | 0.9818
0.20 | MPSNR | 44.92 | 40.72 | 37.12 | 41.66 | 42.36 | 20.95 | 26.07 | 39.67 | 41.81 | 46.96
     | MSSIM | 0.9350 | 0.8740 | 0.8601 | 0.9670 | 0.9052 | 0.5259 | 0.4957 | 0.9367 | 0.9475 | 0.9863
     | MFSIM | 0.9772 | 0.9179 | 0.8907 | 0.9748 | 0.9349 | 0.8216 | 0.7966 | 0.9465 | 0.9706 | 0.9858
Table 2. SAM and ERGAS comparisons of different CSR methods on three selected HSIs under different sampling rates.
SRs | PQIs | StOMP [55] | BCS [56] | KCS [57] | LRTV [34] | TVAL3 [58] | RLPHCS [24] | SRPREC [25] | JTRTV [40] | CSFHR [28] | NTSRLR

Results on Toy
0.02 | SAM   | 0.3040 | 0.6548 | 0.3062 | 0.5096 | 0.3888 | 0.9853 | 0.9707 | 0.6599 | 0.4014 | 0.2810
     | ERGAS | 165.5 | 864.5 | 294.7 | 362.5 | 309.7 | 2411 | 2740 | 582.3 | 178.9 | 154.8
0.05 | SAM   | 0.2500 | 0.2781 | 0.2351 | 0.3967 | 0.2886 | 0.9633 | 0.9210 | 0.6532 | 0.3401 | 0.2029
     | ERGAS | 147.8 | 257.3 | 193.9 | 204.3 | 181.9 | 2064 | 2536 | 321.4 | 141.5 | 84.65
0.10 | SAM   | 0.2318 | 0.1968 | 0.1894 | 0.2162 | 0.2080 | 0.6234 | 0.8382 | 0.4129 | 0.2750 | 0.1031
     | ERGAS | 141.94 | 170.4 | 136.9 | 107.0 | 113.1 | 1273 | 1853 | 140.3 | 108.1 | 35.92
0.15 | SAM   | 0.2629 | 0.1654 | 0.1635 | 0.1940 | 0.1828 | 0.4228 | 0.4562 | 0.3582 | 0.2151 | 0.0998
     | ERGAS | 123.9 | 148.6 | 109.9 | 78.40 | 93.82 | 1262 | 1620 | 118.5 | 79.23 | 28.08
0.20 | SAM   | 0.1123 | 0.1471 | 0.1478 | 0.1112 | 0.1294 | 0.3866 | 0.4250 | 0.2964 | 0.1599 | 0.0733
     | ERGAS | 112.5 | 116.1 | 94.29 | 41.20 | 58.60 | 978 | 1305 | 95.77 | 53.85 | 20.86

Results on PaviaU
0.02 | SAM   | 0.1819 | 0.2223 | 0.1931 | 0.1576 | 0.2460 | 0.9542 | 0.9950 | 0.1722 | 0.1248 | 0.1128
     | ERGAS | 137.8 | 345.6 | 264.4 | 329.0 | 284.3 | 2537 | 3585 | 156.7 | 153.8 | 125.8
0.05 | SAM   | 0.1542 | 0.1749 | 0.1512 | 0.1347 | 0.2021 | 0.8849 | 0.9646 | 0.0817 | 0.1019 | 0.0550
     | ERGAS | 123.4 | 245.2 | 187.6 | 153.2 | 213.4 | 2079 | 2997 | 67.56 | 96.19 | 50.98
0.10 | SAM   | 0.1447 | 0.1417 | 0.121 | 0.0862 | 0.1701 | 0.7069 | 0.8168 | 0.0725 | 0.0905 | 0.0389
     | ERGAS | 118.7 | 188.0 | 138.7 | 90.35 | 165.2 | 1858 | 2425 | 58.58 | 80.19 | 32.53
0.15 | SAM   | 0.1116 | 0.1326 | 0.1059 | 0.0708 | 0.1596 | 0.2914 | 0.2368 | 0.0708 | 0.0728 | 0.0315
     | ERGAS | 103.6 | 173.3 | 113.96 | 77.17 | 149.8 | 1247 | 1921 | 56.68 | 61.15 | 24.90
0.20 | SAM   | 0.0858 | 0.1178 | 0.0957 | 0.0462 | 0.1359 | 0.2407 | 0.0836 | 0.0674 | 0.0521 | 0.0260
     | ERGAS | 93.40 | 146.2 | 98.66 | 38.73 | 117.7 | 1231 | 1427 | 52.63 | 41.26 | 19.74

Results on Indian Pines
0.02 | SAM   | 0.1511 | 0.1622 | 0.1383 | 0.2774 | 0.1246 | 0.9166 | 0.9476 | 0.1075 | 0.1087 | 0.0821
     | ERGAS | 143.2 | 161.8 | 138.6 | 759.7 | 126.5 | 1723 | 2297 | 129.7 | 198.7 | 116.0
0.05 | SAM   | 0.1447 | 0.0830 | 0.1063 | 0.0832 | 0.0911 | 0.5668 | 0.8286 | 0.0553 | 0.0723 | 0.0382
     | ERGAS | 89.48 | 88.69 | 119.2 | 233.2 | 87.85 | 1558 | 1988 | 64.84 | 152.6 | 49.62
0.10 | SAM   | 0.0434 | 0.0728 | 0.0888 | 0.0587 | 0.0743 | 0.4821 | 0.6523 | 0.0515 | 0.0659 | 0.0282
     | ERGAS | 38.77 | 74.77 | 96.91 | 43.08 | 68.53 | 1078 | 1323 | 58.37 | 127.4 | 35.96
0.15 | SAM   | 0.0365 | 0.0714 | 0.0799 | 0.0498 | 0.0693 | 0.3914 | 0.4663 | 0.0505 | 0.0549 | 0.0229
     | ERGAS | 34.98 | 72.32 | 86.24 | 37.45 | 62.99 | 917 | 1258 | 56.15 | 78.07 | 30.81
0.20 | SAM   | 0.0295 | 0.0622 | 0.0741 | 0.0344 | 0.0586 | 0.2749 | 0.4590 | 0.0481 | 0.0553 | 0.0190
     | ERGAS | 31.39 | 61.87 | 79.43 | 33.59 | 51.61 | 366 | 982 | 50.78 | 59.69 | 27.19
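The SAM [62] and ERGAS [61] indices in Tables 2 and 4 can be computed as sketched below; this assumes SAM is averaged over all pixels and reported in radians, and that the ERGAS resolution ratio is set to 1 for compressive reconstruction, both of which are our assumptions rather than details stated here.

```python
import numpy as np

def sam(reference, reconstructed, eps=1e-12):
    """Mean spectral angle (radians) between the per-pixel spectra of two cubes."""
    ref = reference.reshape(-1, reference.shape[-1])
    rec = reconstructed.reshape(-1, reconstructed.shape[-1])
    dots = np.sum(ref * rec, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(rec, axis=1) + eps
    return float(np.mean(np.arccos(np.clip(dots / norms, -1.0, 1.0))))

def ergas(reference, reconstructed, ratio=1.0, eps=1e-12):
    """Relative dimensionless global error in synthesis, averaged over bands."""
    terms = []
    for b in range(reference.shape[-1]):
        rmse = np.sqrt(np.mean((reference[..., b] - reconstructed[..., b]) ** 2))
        terms.append((rmse / (np.mean(reference[..., b]) + eps)) ** 2)
    return float(100.0 * ratio * np.sqrt(np.mean(terms)))
```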
Table 3. Classification performance comparison before and after CSR on Indian Pines under different sampling rates.
SRs | StOMP [55] | BCS [56] | KCS [57] | LRTV [34] | TVAL3 [58] | RLPHCS [24] | SRPREC [25] | JTRTV [40] | CSFHR [28] | NTSRLR | Original
0.02 | 71.19% | 50.64% | 52.37% | 60.96% | 51.85% | 29.61% | 10.51% | 20.03% | 53.21% | 73.69% | 86.37%
0.05 | 75.70% | 57.83% | 56.18% | 69.64% | 57.83% | 36.66% | 13.32% | 54.47% | 59.17% | 77.32% |
0.10 | 76.32% | 59.01% | 62.01% | 71.24% | 60.92% | 41.82% | 14.62% | 55.66% | 62.98% | 79.31% |
0.15 | 78.41% | 63.80% | 65.80% | 77.03% | 62.70% | 45.53% | 45.53% | 56.84% | 65.24% | 80.26% |
0.20 | 80.28% | 68.73% | 70.73% | 79.19% | 65.73% | 46.57% | 57.83% | 58.13% | 67.70% | 81.79% |
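The accuracies in Table 3 come from SVM classification [63] of the original and reconstructed Indian Pines data; the sketch below shows one hedged way to reproduce such an overall-accuracy evaluation with an RBF-kernel SVM on per-pixel spectra. The kernel, regularization constant, training fraction, and split strategy are our assumptions, not the exact protocol used here.

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def overall_accuracy(hsi, labels, train_fraction=0.1, seed=0):
    """Train an SVM on labelled pixels of a (reconstructed) HSI and return OA."""
    mask = labels > 0                         # 0 marks unlabelled background pixels
    X = hsi[mask]                             # (num_labelled, num_bands) spectra
    y = labels[mask]
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_fraction, stratify=y, random_state=seed)
    clf = SVC(kernel="rbf", C=100.0, gamma="scale").fit(X_tr, y_tr)
    return clf.score(X_te, y_te)              # overall accuracy on held-out pixels
```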
Table 4. MPSNRs, MSSIMs, MFSIMs, ERGAS, and SAM of different CSR methods on Urban under different sampling rates.
SRs | PQIs | StOMP [55] | BCS [56] | KCS [57] | LRTV [34] | TVAL3 [58] | RLPHCS [24] | SRPREC [25] | JTRTV [40] | CSFHR [28] | NTSRLR
0.10 | MPSNR | 19.63 | 16.95 | 23.63 | 24.76 | 17.79 | 22.04 | 15.13 | 27.74 | 26.76 | 30.88
     | MSSIM | 0.6523 | 0.4147 | 0.8152 | 0.8705 | 0.4423 | 0.8155 | 0.4245 | 0.8959 | 0.8933 | 0.9471
     | MFSIM | 0.8841 | 0.6918 | 0.8916 | 0.9277 | 0.6562 | 0.9088 | 0.7711 | 0.9561 | 0.9279 | 0.9746
     | ERGAS | 280.2 | 380.4 | 184.2 | 159.6 | 346.3 | 261.6 | 480.9 | 111.5 | 109.8 | 76.89
     | SAM   | 0.2884 | 0.2157 | 0.1551 | 0.1197 | 0.2644 | 0.2737 | 0.4775 | 0.1196 | 0.1252 | 0.0682
0.15 | MPSNR | 20.61 | 17.45 | 25.78 | 26.40 | 18.48 | 24.16 | 20.94 | 27.94 | 28.27 | 33.51
     | MSSIM | 0.7088 | 0.4546 | 0.8740 | 0.9134 | 0.4924 | 0.8442 | 0.8306 | 0.8992 | 0.9064 | 0.9662
     | MFSIM | 0.8972 | 0.7138 | 0.9242 | 0.9575 | 0.6946 | 0.9284 | 0.9016 | 0.9580 | 0.9582 | 0.9845
     | ERGAS | 250.4 | 359.7 | 145.4 | 122.9 | 320.0 | 202.4 | 296.3 | 108.9 | 91.23 | 56.89
     | SAM   | 0.2461 | 0.2076 | 0.1310 | 0.1024 | 0.2518 | 0.2202 | 0.2885 | 0.1180 | 0.1075 | 0.0564
0.20 | MPSNR | 20.93 | 18.72 | 27.37 | 33.26 | 20.35 | 25.99 | 25.24 | 28.40 | 30.11 | 35.62
     | MSSIM | 0.7274 | 0.5509 | 0.9051 | 0.9664 | 0.6133 | 0.8583 | 0.9034 | 0.9040 | 0.9275 | 0.9762
     | MFSIM | 0.9011 | 0.7645 | 0.9418 | 0.9840 | 0.7810 | 0.9459 | 0.9445 | 0.9608 | 0.9705 | 0.9896
     | ERGAS | 241.4 | 310.8 | 122.3 | 59.45 | 259.0 | 165.8 | 183.7 | 103.1 | 67.60 | 44.66
     | SAM   | 0.2323 | 0.1879 | 0.1156 | 0.0592 | 0.2207 | 0.1831 | 0.1859 | 0.1149 | 0.0828 | 0.0481
