Article

Distributed Compressed Hyperspectral Sensing Imaging Based on Spectral Unmixing

Zhongliang Wang and Hua Xiao
1 Department of Electric Engineering, Tongling University, Tongling 244061, Anhui, China
2 Department of Mathematics and Computer, Tongling University, Tongling 244061, Anhui, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(8), 2305; https://doi.org/10.3390/s20082305
Submission received: 14 February 2020 / Revised: 6 April 2020 / Accepted: 15 April 2020 / Published: 17 April 2020
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)

Abstract

The huge volume of hyperspectral imagery demands enormous computational resources, storage memory, and bandwidth between the sensor and the ground stations. Compressed sensing theory has great potential to reduce this cost by collecting only a few compressed measurements in the onboard imaging system. Inspired by distributed source coding, a distributed compressed sensing framework for hyperspectral imagery is proposed in this paper. Similar to distributed compressed video sensing, the spatial-spectral hyperspectral imagery is separated into a key band and a compressed sensing (CS) band that are sampled at different rates during data collection in the proposed framework. However, unlike distributed compressed video sensing, which uses side information for reconstruction, the widely used spectral unmixing method is employed to recover the hyperspectral imagery. First, endmembers are extracted from the CS band. Then, the endmembers of the key band are predicted by an interpolation method, and abundance estimation is achieved by exploiting a sparsity penalty. Finally, the original hyperspectral imagery is recovered through the linear mixing model. Extensive experimental results on multiple real hyperspectral datasets demonstrate that the proposed method can effectively recover the original data, and its reconstruction peak signal-to-noise ratio surpasses that of other state-of-the-art methods.

1. Introduction

Hyperspectral imagery (HSI) differs from conventional color images in that it collects tens or hundreds of spectral samples for each image pixel. HSI is therefore usually treated as a three-dimensional (3D) data cube with 2D spatial and 1D spectral variation [1]. This kind of data is potentially useful in applications in the food safety, biomedical, forensic, and industrial fields [2]. However, with the increase in spatial and spectral resolution, the data volume of HSI grows dramatically. This has motivated the application of compressed sensing (CS) [3] techniques to hyperspectral imaging.
CS is a mathematical framework for single-signal sensing and compression. CS theory has proved that a sufficiently sparse signal can be accurately recovered from its compressed measurements by solving a quadratic program [3]. Thus, only a few measurements need to be collected by the CS technique to recover the original data. HSI can be transformed into sparse signals by many popular sparsification techniques, such as orthogonal transformation-based methods [4], dictionary-based methods [5], or spectral unmixing [6,7].
As a double sparsity structure exists in both the spatial and spectral domains, a variety of sampling methods have been developed for HSI compressed sampling. First, spatial compressed sampling designed for conventional grayscale images can be applied directly to all spectral bands of HSI; when the measurement matrix of each channel is an independent random pattern, this is referred to as distributed CS (DCS) [8,9]. Second, the sparsity in the spectral domain makes spectral compressed sampling of hyperspectral imagery easy to implement. The most typical representative is compressive-projection principal component analysis (CPPCA) [10] and its derived algorithms [11,12,13,14]. However, CPPCA is only valid for the first few largest eigenvalues of hyperspectral imagery and usually fails for the smaller eigenvalues, owing to the high degree of correlation among the spectral vectors [15]. Therefore, when the spectral sampling rate is very low, the CPPCA algorithm fails to recover the original data. Previous works [9,16,17] on sampling modes have proposed compressed sampling of hyperspectral imagery that combines the spatial and spectral structures. Three-dimensional compressive sampling (3DCS) [16] constructed a generic 3D sparsity measure to exploit the 3D piecewise smoothness and spectral low-rank property of hyperspectral imagery.
Inspired by distributed source coding (DSC) [18], the distributed compressive video sensing (DCVS) [19,20] framework was proposed for capturing and compressing video data simultaneously by integrating DSC and CS. DCVS divides the frames of a video sequence into key frames and non-key frames. Key frames are sent by conventional lossless video compression, while non-key frames are compressively sampled by common CS technology and transmitted to the decoder. Liu et al. [21] extended the DCVS framework to hyperspectral imagery: at the encoding end, the compressed reference and non-reference band images and the prediction coefficients between them are collected. However, calculating the prediction coefficients violates the original intention of CS sampling, which is to keep the encoder as simple as possible. In this paper, we divide hyperspectral imagery into key band images and CS band images, and a different sampling mode is applied to each type of image. With a Kronecker product [9] transformation, the proposed compressed sampling method takes the same form as standard compressed sampling, as the short check after this paragraph illustrates.
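As a concrete check of this equivalence (with hypothetical sizes and random data, not taken from the paper), the snippet below verifies that applying a single spatial measurement matrix $A$ to every CS band is the same as applying the Kronecker-structured matrix $I \otimes A$ to the column-stacked vector of $X_{CS}$:

```python
import numpy as np

# Hypothetical small sizes: N pixels, L_CS CS bands, M measurements per band.
N, L_CS, M = 64, 5, 16
rng = np.random.default_rng(0)
A = rng.standard_normal((M, N))          # spatial measurement matrix
X_CS = rng.standard_normal((N, L_CS))    # CS band images (pixels x bands)

# Per-band sampling: Y_CS = A X_CS.
Y_CS = A @ X_CS

# Standard CS form: vec(Y_CS) = (I ⊗ A) vec(X_CS), column-major vectorization.
y = np.kron(np.eye(L_CS), A) @ X_CS.flatten(order="F")
assert np.allclose(y, Y_CS.flatten(order="F"))
```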
One of the important tasks of CS theory is how to recover the original data from a small amount of compressed data. The success of CS depends critically on the assumption that the underlying signals are sparse or compressible when represented on a suitable frame. Fortunately, hyperspectral imagery is highly correlated in both the spatial and spectral domains and is thus compressible [15]. Many reconstruction algorithms are dedicated to exploiting sparse, total variation (TV), low-rank, and other prior information for HSI [7,16,22,23,24]. However, this type of reconstruction algorithm performs convex optimization directly on the whole hyperspectral imagery; because of the huge amount of hyperspectral data, such algorithms are generally slow.
Matrix decomposition is another kind of reconstruction approach for hyperspectral imagery. CPPCA [10] reconstructs an HSI dataset using principal component analysis (PCA) at the decoder. The significant advantage of CPPCA is the transfer of computation from the on-board remote devices with limited computational resources to a ground working station. Although it possesses excellent reconstruction quality and low computational complexity at high sampling rates, the reconstruction accuracy of CPPCA is low when the sampling rate is lower than 0.2.
The linear mixing model (LMM) is one of the simplest and most widely used hypotheses in hyperspectral imagery processing [2,25,26]. According to the LMM, hyperspectral imagery can be decomposed into endmembers and abundances. LMM-based compressed sensing reconstruction [6,15,27,28] of hyperspectral imagery has demonstrated significant advantages in terms of both reconstruction quality and computational complexity. Spatio-spectral hybrid compressive sensing (SSHCS) [27] collects spatially and spectrally compressed hyperspectral data, and recovers the original hyperspectral imagery as the product of the endmembers extracted from the spatially compressed data and the corresponding abundances estimated from the spectrally compressed data. Spatial-spectral compressed reconstruction based on spectral unmixing (SSCR_SU) [28] extends SSHCS by alternately iterating between endmembers and abundances. Spectral compressive acquisition (SpeCA) [15] proposes a two-step measurement strategy operating on the spectral domain: one step is common spectral compressed sampling of every pixel, which is used to estimate the abundances; the other is spectral compressed sampling of some randomly chosen pixels, from which the endmembers can be estimated by combining them with the estimated abundances.
In this paper, we propose a distributed compressed sampling and reconstruction framework for hyperspectral imagery. On the encoding side, we propose a distributed compressed sampling strategy similar to DCVS to collect hyperspectral data. The difference is that DCVS reconstruction uses the side information of the key frames, which cannot be applied to hyperspectral imagery in a low sampling rate environment with only a small number of key bands. Moreover, we recover the hyperspectral data by a linear spectral unmixing method at the decoding end. For brevity, we call the proposed framework distributed compressed hyperspectral sensing (DCHS).
Specifically, the contribution of this paper has the following three aspects. First, the distributed compressed sampling framework divides hyperspectral imagery into a key band and a CS band for separate acquisition, allowing endmembers and abundances to be estimated independently. Second, linear interpolation is employed to predict the key band endmembers from the extracted CS band endmembers. Finally, an augmented Lagrangian minimization algorithm is designed to estimate the abundance matrix under a low sampling rate.
This paper is organized as follows. Section 2 proposes our DCHS framework. The endmember prediction for the key band and the augmented Lagrangian optimization algorithm for CS reconstruction in DCHS are described in Section 3. Section 4 presents the experimental results on three different datasets and discusses the quantitative and qualitative analysis. Finally, this study is concluded in Section 5.

2. Distributed Compressed Sampling Framework

The hyperspectral data of a single scene usually consist of several hundred band images. Here, the matrix $X \in \mathbb{R}^{N \times L}$ describes the hyperspectral data of a particular scene. Each column of $X$ represents a vectorized band image and each row denotes the spectrum of one pixel. The size $L$ is the number of bands of the sensor, and $N$ denotes the number of pixels per band.
Figure 1 schematizes the proposed distributed compressed sampling strategy and reconstruction framework of CS band images. The DCHS framework consists of two parts: encoding end and decoding end.
For the distributed compressed sampling at the encoding end in Figure 1, we divide the hyperspectral imagery into key band images and CS band images, represented by $X_K \in \mathbb{R}^{N \times L_K}$ and $X_{CS} \in \mathbb{R}^{N \times L_{CS}}$, respectively, where $L_K$ is the number of key bands and $L_{CS} = L - L_K$ is the number of CS bands.
First, the hyperspectral imagery is grouped into equal parts along the band dimension, similar to the Group-of-Pictures (GOP) structure in many video codecs. $L_g$ denotes the number of bands in each group. The intermediate band is then extracted from each group as the key band of that group. The key bands are transmitted directly to the decoding end without compressed sampling. This means that the equivalent sampling rate of the key band is $SR_K = L_K / L$. Band selection [29,30,31,32,33,34] according to hyperspectral features would benefit the grouping performance; however, it would violate the requirement of the lowest possible computational cost at the encoding end of CS.
The remaining bands are taken as CS band images, and the measurement matrix is defined as $A \in \mathbb{R}^{M \times N}$ ($M \ll N$). Matrix $A$ acts on the CS band along the spatial domain, generating $M$ measurements per band. The measurements obtained with matrix $A$ are $Y_{CS} = AX_{CS}$. In our previous work [27], we designed a spatial measurement matrix in which each row is a one-hot vector; the same design is used in this paper for the spatial observation of the CS band. The sampling rate of the CS band is $SR_{CS} = ML_{CS}/NL$. As a consequence, the total equivalent sampling rate of DCHS is $SR = SR_K + SR_{CS} = (NL_K + ML_{CS})/NL$. A sketch of this encoding procedure is given below.
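The following sketch summarizes the encoding side. The exact one-hot measurement matrix design follows our earlier work [27] and is not fully specified here, so the selection of distinct random pixels below is an assumption, as is the exact offset used to pick the middle band of each group:

```python
import numpy as np
from scipy import sparse

def dchs_encode(X, Lg, M, seed=0):
    """Sketch of the DCHS encoder. X: N x L matrix (pixels x bands),
    Lg: bands per group, M: spatial measurements per CS band.
    The one-hot row construction (distinct randomly chosen pixels) is an
    assumption; the design in [27] may differ in detail."""
    rng = np.random.default_rng(seed)
    N, L = X.shape
    key_idx = np.arange(Lg // 2, L, Lg)              # middle band of each group
    cs_idx = np.setdiff1d(np.arange(L), key_idx)
    X_K, X_CS = X[:, key_idx], X[:, cs_idx]
    # Spatial measurement matrix A (M x N) with one-hot rows.
    cols = rng.choice(N, size=M, replace=False)
    A = sparse.csr_matrix((np.ones(M), (np.arange(M), cols)), shape=(M, N))
    Y_CS = A @ X_CS                                       # M x L_CS measurements
    SR = (N * len(key_idx) + M * len(cs_idx)) / (N * L)   # total sampling rate
    return X_K, Y_CS, A, key_idx, cs_idx, SR
```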
In hyperspectral imagery processing, the LMM is an important and widely used model. The LMM of hyperspectral imagery can be described by the following equation,
$$X = SE \quad (1)$$
For the key band images, Equation (1) can be rewritten as
$$X_K = SE_K \quad (2)$$
where $E_K \in \mathbb{R}^{p \times L_K}$ denotes the endmember matrix of the key band, holding the spectral signatures of the endmembers; $S \in \mathbb{R}^{N \times p}$ is the corresponding abundance matrix of the key band, which describes the proportional fractions of the ground materials at each pixel; and $p$ is the number of endmembers. As the CS band, the key band, and the original hyperspectral data describe the same scene and ground objects, the three share the same abundance matrix $S$. The compressed measurement of the CS band can now be written as
$$Y_{CS} = AX_{CS} = ASE_{CS} \quad (3)$$
According to the proposed distributed compressed sampling mode, the task of DCHS reconstruction is to recover the CS band images $X_{CS}$, as the key band is transmitted directly to the decoder. From Equation (3), we can see that the task of reconstructing $X_{CS}$ can be converted into the estimation of the endmember matrix $E_{CS}$ and the abundance matrix $S$. Therefore, at the decoding end of Figure 1, the observed data of the CS band images are first used to extract the endmembers of the CS band. Then, the endmembers of the key band are predicted from these extracted endmembers. Afterwards, the abundance matrix can be estimated by combining the images and endmembers of the key band. Next, the endmembers of the CS band are modified using the estimated abundances. Finally, the CS band images are reconstructed from the modified endmembers and the estimated abundance fractions based on the LMM.
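As a quick illustration of the shared-abundance property behind Equations (1)-(3), the following toy example (hypothetical sizes and random data, not from the paper) builds a synthetic scene with the LMM and checks that the key band slice obeys Equation (2) with the same abundance matrix $S$:

```python
import numpy as np

# Toy illustration: key band and CS band of one scene share one abundance matrix S.
N, L, p, Lg = 1000, 40, 4, 10
rng = np.random.default_rng(0)
E = rng.random((p, L))                      # endmember spectra (p x L)
S = rng.dirichlet(np.ones(p), size=N)       # abundances sum to one per pixel (N x p)
X = S @ E                                   # LMM, Eq. (1)

key_idx = np.arange(Lg // 2, L, Lg)         # one key band per group of Lg bands
cs_idx = np.setdiff1d(np.arange(L), key_idx)
X_K, E_K, E_CS = X[:, key_idx], E[:, key_idx], E[:, cs_idx]
assert np.allclose(X_K, S @ E_K)            # Eq. (2): key band obeys the same S
```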

3. Reconstruction Algorithm of CS Band

In this section, we focus on the reconstruction algorithm of DCHS for the CS band images, which mainly includes endmember extraction and abundance estimation; the endmember extraction comprises the extraction of $E_{CS}$ from the spatially compressed data and the prediction of $E_K$ from the extracted endmember matrix $E_{CS}$.

3.1. Endmember Extraction

The first goal is to extract $E_{CS}$ from $Y_{CS}$. Thanks to the designed measurement matrix $A$, existing endmember extraction algorithms are directly applicable to the compressed data [27]. Vertex component analysis (VCA) [35] is one of the most popular endmember extraction algorithms for hyperspectral unmixing. In this paper, we employ the VCA algorithm to extract the endmember matrix $E_{CS}$ from $Y_{CS}$ using Equation (4),
$$E_{CS} = \mathrm{vca}(Y_{CS}) \quad (4)$$
where $\mathrm{vca}(\cdot)$ denotes the VCA endmember extraction algorithm.
Note that $p$ plays an important role in the VCA algorithm. In the absence of noise, the rank of the observed data matrix $Y_{CS}$ is precisely $p$. Some state-of-the-art subspace clustering algorithms [36,37,38,39,40] could help to estimate the number of endmembers accurately. However, the goal of CS is to reconstruct the original data rather than to unmix it. In the experiments, we find that a $p$ slightly higher than the real number of endmembers can slightly improve the reconstruction accuracy. Hyperspectral signal identification by minimum error (HySime) [41] tends to estimate a higher number of endmembers in most cases. Therefore, $p$ is estimated from $Y_{CS}$ by the HySime algorithm.
Next, we must successfully predict $E_K$ before estimating the abundance. Although the abundance could also be estimated from $Y_{CS}$ directly, accurate estimation in this way is very difficult due to the extremely low spatial sampling rate. Therefore, we turn to the prediction of $E_K$ in order to estimate $S$. Note that the matrix $E_K$ is composed of the columns extracted from the matrix $E$ at intervals of $L_g$, and the remaining columns compose the matrix $E_{CS}$. An interpolation method can locate the nearest data points and assign values according to them. Therefore, a simple interpolation algorithm is employed to predict $E_K$ from the extracted endmember matrix $E_{CS}$ of the CS band,
$$E_K = \mathrm{interp}(E_{CS}) \quad (5)$$
where $\mathrm{interp}(\cdot)$ denotes the interpolation method.
Figure 2 evaluates the performance of several interpolation methods by the average signal-to-noise ratio (SNR) between the reference value and the estimated value predicted from the CS band. The spectral curves used for the evaluation come from the USGS library [42], which includes 501 spectral curves of different mineral types with 224 spectral bands; 188 spectral bands remain after removing the water absorption and noise bands. $E_K$ is selected as the reference value from the USGS library according to the grouping rules of the DCHS framework. Linear, nearest neighbor, spline, and shape-preserving piecewise cubic (pchip) interpolation methods are tested.
From Figure 2, we can see that linear and pchip methods achieve better prediction results than the other two interpolation methods. Linear interpolation is slightly better than pchip interpolation. Therefore, linear interpolation is used in all the following experiments.
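A minimal sketch of the linear prediction in Equation (5), assuming $E_{CS}$ holds the extracted endmember spectra at the CS band indices `cs_idx` and that `key_idx` are the key band indices; `np.interp` holds the end values constant outside the CS band range, which is an additional assumption here:

```python
import numpy as np

def predict_key_endmembers(E_CS, cs_idx, key_idx):
    """Linearly interpolate each endmember spectrum from the CS band indices
    to the key band indices (a sketch of Eq. (5))."""
    p = E_CS.shape[0]
    E_K = np.empty((p, len(key_idx)))
    for i in range(p):
        # np.interp clamps to the end values outside [cs_idx[0], cs_idx[-1]].
        E_K[i] = np.interp(key_idx, cs_idx, E_CS[i])
    return E_K
```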

3.2. Abundance Estimation

The next goal is to estimate the abundance matrix $S$ after $E_K$ has been successfully predicted. As described in the previous section, it is difficult to estimate the abundance directly from $Y_{CS}$ due to the extremely low spatial sampling rate. Therefore, in this section, we combine $Y_{CS}$ and $X_K$ to estimate the abundance. At the same time, the abundance characterizes the distribution maps of the different materials in the scene, which are sparse signals on an orthogonal basis. Although sparse coding and feature representation-based methods [36,37,38,39,40] can better describe the sparsity of the abundance, their use would significantly increase the computational complexity and contribute little to the final CS band reconstruction, because the modification of $E_{CS}$ in the next section compensates for the deficiency of the abundance estimation. Therefore, we employ a wavelet basis as the orthogonal sparse basis.
Now, the abundance estimation task can be described as solving for $S$ given the observed data $Y_{CS}$, the measurement matrix $A$, the key band images $X_K$, and the endmember matrices $E_K$ and $E_{CS}$. We consider the following constrained optimization problem,
$$\min_{S} \|WS\|_{1,1} \quad \text{subject to} \quad X_K = SE_K, \; Y_{CS} = ASE_{CS} \quad (6)$$
where $\|C\|_{1,1} \equiv \sum_{i=1}^{p}\|c_i\|_1$ ($c_i$ denotes the $i$th column of $C$, and $\|\cdot\|_1$ denotes the $\ell_1$ norm), and $W$ represents an orthogonal wavelet basis.
As problem (6) is a non-smooth constrained optimization, we specialize the Alternating Direction Method of Multipliers (ADMM) [43,44] to solve it. First, by introducing regularization parameters, problem (6) can be relaxed into the following unconstrained optimization problem,
$$\min_{S} \|WS\|_{1,1} + \frac{\lambda_1}{2}\|X_K - SE_K\|_F^2 + \frac{\lambda_2}{2}\|Y_{CS} - ASE_{CS}\|_F^2 \quad (7)$$
where the parameters $\lambda_1 \geq 0$ and $\lambda_2 \geq 0$ control the relative weights of the second and third terms in problem (7), respectively, and $\|C\|_F \equiv \sqrt{\mathrm{trace}\{CC^T\}}$ denotes the Frobenius norm of $C$. We introduce an auxiliary matrix $Z = WS$. Problem (7) can then be written as
$$\min_{Z} \|Z\|_{1,1} + \frac{\lambda_1}{2}\|X_K - W^{-1}ZE_K\|_F^2 + \frac{\lambda_2}{2}\|Y_{CS} - AW^{-1}ZE_{CS}\|_F^2 \quad (8)$$
where $W^{-1}$ is the inverse of the matrix $W$.
Before alternating minimization is applied to the corresponding augmented Lagrangian function, we write the following equivalent formulation with auxiliary matrices $R_1$, $R_2$, and $R_3$,
$$\min_{Z,R_1,R_2,R_3} \|Z\|_{1,1} + \frac{\lambda_1}{2}\|X_K - R_2E_K\|_F^2 + \frac{\lambda_2}{2}\|Y_{CS} - AR_3\|_F^2 \quad \text{subject to} \quad R_1 = Z, \; R_2 = W^{-1}R_1, \; R_3 = R_2E_{CS} \quad (9)$$
Constrained optimization problem (9) has an augmented Lagrangian subproblem of the form
$$\min_{Z,R_1,R_2,R_3} \|Z\|_{1,1} + \frac{\lambda_1}{2}\|X_K - R_2E_K\|_F^2 + \frac{\lambda_2}{2}\|Y_{CS} - AR_3\|_F^2 + \frac{\mu}{2}\|Z - R_1 - T_1\|_F^2 + \frac{\mu}{2}\|W^{-1}R_1 - R_2 - T_2\|_F^2 + \frac{\mu}{2}\|R_2E_{CS} - R_3 - T_3\|_F^2 \quad (10)$$
where $\mu > 0$ is a positive penalty constant, and $T_1$, $T_2$, and $T_3$ denote the Lagrange multipliers.
For each iteration of ADMM, we first fix $R_1$, $R_2$, $R_3$ and $T_1$, $T_2$, $T_3$; the minimizer of objective function (10) with respect to $Z$ is the well-known soft-threshold problem [45], and the problem can be reduced to
$$\min_{Z} \|Z\|_{1,1} + \frac{\mu}{2}\|Z - R_1^k - T_1^k\|_F^2 \quad (11)$$
The soft threshold to solve problem (11) is given by
$$Z^{k+1} = \mathrm{soft}\left(R_1^k + T_1^k, \frac{1}{\mu}\right) \quad (12)$$
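For reference, the soft-threshold operator used in Equation (12) can be written in a few lines (a sketch in NumPy):

```python
import numpy as np

def soft(X, tau):
    """Element-wise soft-thresholding: sign(x) * max(|x| - tau, 0)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Eq. (12): Z_next = soft(R1 + T1, 1.0 / mu)
```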
Next, given the other variables, simple manipulation shows that the minimization of objective function (10) with respect to $R_1$ is equivalent to
$$\min_{R_1} \|Z^{k+1} - R_1 - T_1^k\|_F^2 + \|W^{-1}R_1 - R_2^k - T_2^k\|_F^2 \quad (13)$$
which is a least squares problem; the corresponding normal equation is
$$(I_N + WW^{-1})R_1 = Z^{k+1} - T_1^k + W(R_2^k + T_2^k) \quad (14)$$
where $I_N$ denotes the $N \times N$ identity matrix. As $W$ is an orthonormal basis, $WW^{-1} = I_N$. Therefore, the solution of Equation (14) is easily given by
$$R_1^{k+1} = \frac{1}{2}\left[Z^{k+1} - T_1^k + W(R_2^k + T_2^k)\right] \quad (15)$$
Similarly, the steps to compute the values of the variables $R_2$ and $R_3$ are also least squares problems. The value of $R_2$ is given by
$$R_2^{k+1} = \left[\lambda_1 X_K E_K^T + \mu W^{-1}R_1^{k+1} - \mu T_2^k + \mu(R_3^k + T_3^k)E_{CS}^T\right]\left(\lambda_1 E_K E_K^T + \mu I_p + \mu E_{CS}E_{CS}^T\right)^{-1} \quad (16)$$
where $E_K^T$ is the transpose of the matrix $E_K$, and $I_p$ is the $p \times p$ identity matrix. The value of $R_3$ is given by
$$R_3^{k+1} = \left(\lambda_2 A^T A + \mu I_N\right)^{-1}\left[\lambda_2 A^T Y_{CS} + \mu\left(R_2^{k+1}E_{CS} - T_3^k\right)\right] \quad (17)$$
As the number of pixels $N$ is usually large, computing $(\lambda_2 A^T A + \mu I_N)^{-1}$ would normally require an enormous amount of time. However, the designed measurement matrix $A$ and $I_N$ are sparse matrices, so the inverse operation is easy to perform. Moreover, the inversion only needs to be calculated once, as $\lambda_2 A^T A + \mu I_N$ does not change between iterations.
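A minimal sketch of this caching idea, assuming $A$ is stored as a SciPy sparse matrix with one-hot rows, is to factorize the system matrix of Equation (17) once before the iterations begin:

```python
from scipy import sparse
from scipy.sparse.linalg import splu

def make_r3_solver(A, lam2, mu):
    """Factorize lam2 * A^T A + mu * I_N once for the R3 update of Eq. (17).
    With one-hot rows hitting distinct pixels, A^T A is diagonal, so the
    factorization is cheap; this distinctness is an assumption here."""
    N = A.shape[1]
    M3 = sparse.csc_matrix(lam2 * (A.T @ A) + mu * sparse.identity(N))
    lu = splu(M3)
    # Inside the ADMM loop: R3 = lu.solve(lam2 * A.T @ Y_CS + mu * (R2 @ E_CS - T3))
    return lu.solve
```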
Finally, we update Lagrange multipliers by
$$\begin{aligned} T_1^{k+1} &= T_1^k - (Z^{k+1} - R_1^{k+1}) \\ T_2^{k+1} &= T_2^k - (W^{-1}R_1^{k+1} - R_2^{k+1}) \\ T_3^{k+1} &= T_3^k - (R_2^{k+1}E_{CS} - R_3^{k+1}) \end{aligned} \quad (18)$$
After the $k$th iteration, the residuals are defined as
$$\mathrm{res}_1 = \|X_K - S^{k+1}E_K\|_F / \|X_K\|_F, \qquad \mathrm{res}_2 = \|Y_{CS} - AS^{k+1}E_{CS}\|_F / \|Y_{CS}\|_F \quad (19)$$
The iteration stopping criterion is defined as $\mathrm{res}_1 < \varepsilon_1$ and $\mathrm{res}_2 < \varepsilon_2$.

3.3. Recovery of CS Band

Although we have extracted the endmember matrix $E_{CS}$ by the VCA algorithm in Section 3.1 and estimated the abundance $S$ in Section 3.2, the endmembers and abundance are not directly matched, which may reduce the reconstruction accuracy. Therefore, we modify $E_{CS}$ to minimize the objective function $\|Y_{CS} - AS^{k+1}E_{CS}\|_F^2$, whose least squares solution is given by
$$\hat{E}_{CS} = \left((AS^{k+1})^T AS^{k+1}\right)^{-1}(AS^{k+1})^T Y_{CS} \quad (20)$$
Finally, the CS band can be reconstructed by the LMM,
$$\hat{X}_{CS} = S^{k+1}\hat{E}_{CS} \quad (21)$$
In summary, we call the proposed CS band reconstruction algorithm a DCHS reconstruction algorithm, which is described in Algorithm 1.
Algorithm 1: DCHS reconstruction algorithm
Inputs: $X_K$, $Y_{CS}$, and $A$
Output: $\hat{X}_{CS}$
1. Estimate $p$ from $Y_{CS}$ by the HySime algorithm
2. Extract $E_{CS}$ from $Y_{CS}$ by the VCA algorithm
3. Predict $E_K$ from $E_{CS}$ using the interpolation algorithm
4. Set parameters: $\lambda_1$, $\lambda_2$, $\mu$, and $maxiters$
5. Initialize: $S^0 = X_K E_K^T (E_K E_K^T)^{-1}$, $Z^0 = WS^0$, $R_1^0 = Z^0$, $R_2^0 = W^{-1}R_1^0$, $R_3^0 = R_2^0 E_{CS}$, $T_1^0 = 0$, $T_2^0 = 0$, $T_3^0 = 0$, $k = 1$, $thr = 10^{-5}$, $\mathrm{res}_1 = \mathrm{res}_2 = \infty$
6. While $k < maxiters$ and ($\mathrm{res}_1 > thr$ or $\mathrm{res}_2 > thr$)
 7. Compute $Z^{k+1}$ by the soft-threshold function according to (12)
 8. Compute $R_1^{k+1}$ by (15)
 9. Compute $R_2^{k+1}$ by (16)
 10. Compute $R_3^{k+1}$ by (17)
 11. Update the Lagrange multipliers $T_1^{k+1}$, $T_2^{k+1}$, and $T_3^{k+1}$ by (18)
 12. Compute $\mathrm{res}_1$ and $\mathrm{res}_2$ by (19)
 13. $S^{k+1} = W^{-1}Z^{k+1}$, $k = k + 1$
 End while
14. Modify $E_{CS}$ by (20)
15. Recover the CS band according to the LMM by (21)
In the DCHS reconstruction algorithm, the computational complexity is dominated by the abundance estimation because of its multiple iterations. In each iteration of the abundance estimation, the most costly steps are the computation of $R_2^{k+1}$ and $R_3^{k+1}$, both of which have complexity of order $O(pNL_{CS})$, where $p$ is the number of endmembers, $N$ is the number of pixels in the image, and $L_{CS}$ is the number of CS spectral bands.
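For concreteness, the following sketch mirrors steps 4-15 of Algorithm 1 in Python (not the authors' MATLAB implementation). It assumes that $E_{CS}$ and $E_K$ have already been obtained as in Section 3.1, that `W` and `Winv` are callables applying a forward and inverse orthogonal wavelet transform to each column, and it keeps dense inverses purely for clarity; in practice $A$ is sparse and the $R_3$ system is factorized once, as noted in Section 3.2.

```python
import numpy as np

def soft(X, tau):
    # Element-wise soft-thresholding operator used in Eq. (12).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def dchs_reconstruct(X_K, Y_CS, A, E_K, E_CS, W, Winv,
                     lam1, lam2=1.0, mu=30.0, max_iters=200, thr=1e-5):
    """Sketch of steps 4-15 of Algorithm 1. X_K: N x L_K key band images,
    Y_CS: M x L_CS measurements, A: M x N measurement matrix (dense here),
    E_K / E_CS: predicted / extracted endmember matrices (p x L_K / p x L_CS)."""
    N, p = X_K.shape[0], E_K.shape[0]
    # Step 5: initialization.
    S = X_K @ E_K.T @ np.linalg.inv(E_K @ E_K.T)
    Z = W(S); R1 = Z.copy(); R2 = Winv(R1); R3 = R2 @ E_CS
    T1 = np.zeros_like(Z); T2 = np.zeros_like(R2); T3 = np.zeros_like(R3)
    # The two inverse factors of Eqs. (16) and (17) do not change across iterations.
    M2 = np.linalg.inv(lam1 * (E_K @ E_K.T) + mu * np.eye(p) + mu * (E_CS @ E_CS.T))
    M3 = np.linalg.inv(lam2 * (A.T @ A) + mu * np.eye(N))
    for _ in range(max_iters):
        Z = soft(R1 + T1, 1.0 / mu)                                    # Eq. (12)
        R1 = 0.5 * (Z - T1 + W(R2 + T2))                               # Eq. (15)
        R2 = (lam1 * X_K @ E_K.T + mu * Winv(R1) - mu * T2
              + mu * (R3 + T3) @ E_CS.T) @ M2                          # Eq. (16)
        R3 = M3 @ (lam2 * A.T @ Y_CS + mu * (R2 @ E_CS - T3))          # Eq. (17)
        T1 = T1 - (Z - R1)                                             # Eq. (18)
        T2 = T2 - (Winv(R1) - R2)
        T3 = T3 - (R2 @ E_CS - R3)
        S = Winv(Z)                                                    # step 13
        res1 = np.linalg.norm(X_K - S @ E_K) / np.linalg.norm(X_K)     # Eq. (19)
        res2 = np.linalg.norm(Y_CS - A @ S @ E_CS) / np.linalg.norm(Y_CS)
        if res1 < thr and res2 < thr:
            break
    # Steps 14-15: refine E_CS by least squares (Eq. (20)), recover CS band (Eq. (21)).
    AS = A @ S
    E_CS_hat = np.linalg.inv(AS.T @ AS) @ AS.T @ Y_CS
    return S @ E_CS_hat
```

Passing identity transforms (`W = Winv = lambda M: M`) also runs and reduces the penalty to a plain $\ell_1$ prior on the abundances, which can serve as a convenient sanity check.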

4. Experiments and Results

In this section, we compare the proposed DCHS framework with several state-of-the-art reconstruction algorithms to evaluate its validity, including MT-BCS [46], CPPCA [10], SSHCS [27], SpeCA [15], and SSCR_SU [28]. In the comparison experiments, we used the default parameter settings of the compared methods as described in their reference papers. It is worth noting that the SpeCA algorithm cannot estimate the number of endmembers; therefore, in the comparison experiments, we set it according to the ground truth. All the experiments were run with MATLAB 2014a (32-bit) on a laptop workstation with a 2.6 GHz CPU and 32 GB RAM. We quantitatively and visually evaluated the performance of the proposed method on three real datasets, namely, Cuprite and Urban from the hyperspectral unmixing datasets [47], and PaviaU from the hyperspectral remote sensing scenes [48].
The Cuprite dataset contains 188 bands after removing the water absorption and noise bands, and has 250 × 190 pixels. In general, the Cuprite dataset is considered to contain 14 types of minerals. The Urban dataset is of size 306 × 306 and consists of 162 bands; there are six endmembers in the ground truth. The PaviaU dataset has 610 × 340 pixels and 103 bands, and its ground truth differentiates nine classes. The false-color images of the three datasets are shown in Figure 6a. The red, green, and blue channels are bands (40, 20, 10) for the Cuprite dataset, (28, 11, 2) for the Urban dataset, and (50, 30, 5) for the PaviaU dataset, respectively.
In order to evaluate the reconstruction performance of all methods, three quantitative indices are employed in the experiments. The first index is the mean peak signal-to-noise ratio (MPSNR) between the reconstructed and original images, defined as the average peak signal-to-noise ratio (PSNR) over all bands,
$$\mathrm{MPSNR} = \frac{1}{L}\sum_{i=1}^{L} 20\log_{10}\frac{\max(X_i)}{\sqrt{\|X_i - \hat{X}_i\|_2^2 / N}} \quad (22)$$
where $X_i$ and $\hat{X}_i$ correspond to the original and reconstructed band image vectors, and $\max(X_i)$ is the peak value of $X_i$. Higher values of MPSNR represent better reconstruction results.
The second index is the mean spectral angle mapper (MSAM), which calculates the average angle between the original and reconstructed spectral vectors over all spatial positions; its definition is as follows,
$$\mathrm{MSAM} = \frac{1}{N}\sum_{j=1}^{N} \arccos\frac{X_j^T\hat{X}_j}{\|X_j\|_2\,\|\hat{X}_j\|_2} \quad (23)$$
where $X_j$ and $\hat{X}_j$ are the $j$th spectral vectors of the original and reconstructed HSI, respectively. Lower values of MSAM represent better reconstruction results.
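The two indices above translate directly into code; the following sketch assumes the original and reconstructed data are arranged as $N \times L$ matrices (pixels in rows, bands in columns) and reports MSAM in degrees, as in Table 1:

```python
import numpy as np

def mpsnr(X, X_hat):
    """Mean PSNR over bands, Eq. (22); X, X_hat are N x L (pixels x bands)."""
    mse = np.mean((X - X_hat) ** 2, axis=0)             # per-band mean squared error
    return np.mean(20.0 * np.log10(X.max(axis=0) / np.sqrt(mse)))

def msam(X, X_hat):
    """Mean spectral angle mapper in degrees, Eq. (23); rows are pixel spectra."""
    num = np.sum(X * X_hat, axis=1)
    den = np.linalg.norm(X, axis=1) * np.linalg.norm(X_hat, axis=1)
    ang = np.arccos(np.clip(num / den, -1.0, 1.0))      # clip guards rounding errors
    return np.degrees(np.mean(ang))
```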
The last index, the mean structural similarity (MSSIM), is used to evaluate the structural consistency between the original and reconstructed HSI, and is expressed as
$$\mathrm{MSSIM} = \frac{1}{L}\sum_{i=1}^{L}\mathrm{SSIM}(X_i, \hat{X}_i) \quad (24)$$
where $\mathrm{SSIM}(X_i, \hat{X}_i)$ is the structural similarity between $X_i$ and $\hat{X}_i$. For details of the SSIM function, the reader can refer to the work in [49].
The first group of experiments discusses the parameter settings of the proposed algorithm on the Cuprite dataset. In our DCHS reconstruction algorithm, there are three important parameters: $\lambda_1$, $\lambda_2$, and $\mu$. First, we fix the parameter $\mu = 30$ and set the number of bands in each group to $L_g = 20, 10, 5$, corresponding to sampling rates of 0.0564, 0.1048, and 0.2048. Figure 3 shows the trends of MPSNR with $\lambda_1$ and $\lambda_2$.
From Figure 3, we can see that the trends of MPSNR with $\lambda_1$ and $\lambda_2$ are basically the same for different $L_g$. Therefore, for different sampling rates, we can use the same settings for the parameters $\lambda_1$ and $\lambda_2$. In addition, MPSNR changes significantly more along the $\lambda_2$-axis direction than along the $\lambda_1$-axis direction, which means that the proposed DCHS reconstruction algorithm is more sensitive to $\lambda_2$. MPSNR increases as $\lambda_2$ increases, but once $\lambda_2$ is greater than 1, MPSNR increases very little. Therefore, in the following experiments, we set the parameters to $\lambda_1 = 10^4$ and $\lambda_2 = 1$.
Figure 4 shows the influence of the parameter $\mu$ on the reconstruction performance at different sampling rates. We can see that as $\mu$ increases, the reconstructed MPSNR gradually increases. When $\mu$ is less than 10, the MPSNR increases rapidly; when $\mu$ exceeds 30, the MPSNR is basically unchanged. Therefore, in the following experiments, we set the parameter $\mu = 30$.
The second group of experiments compares the reconstruction performance of the proposed approach with the state-of-the-art methods on the above three datasets. In these experiments, we report how the MPSNR, MSAM, and MSSIM of the reconstruction algorithms change with the sampling rate. The sampling rate is the consistent indicator used across compressed sensing references; although some references describe the sampling configuration in other forms, such as the number of bands per group $L_g$, these can be converted into sampling rates. As the sampling rate of the proposed DCHS depends on $L_g$, we test DCHS using different values of $L_g$ (30, 20, 15, 10, 7, 5, 4, and 3), corresponding to different sampling rates. For example, for the Cuprite dataset, the corresponding sampling rates are 0.0416, 0.0564, 0.0732, 0.1048, 0.1469, 0.2048, 0.2575, and 0.3365; for the Urban dataset, they are 0.0406, 0.0589, 0.0711, 0.1078, 0.1506, 0.2056, 0.2544, and 0.34; and for the PaviaU dataset, they are 0.0388, 0.0581, 0.0677, 0.1061, 0.1446, 0.2022, 0.2503, and 0.3368.
Figure 5 shows the MPSNR comparison of the different algorithms on the different datasets. For the Cuprite dataset, the proposed DCHS algorithm shows its superiority at low sampling rates; for example, around a sampling rate of 0.05, it is more than 4 dB higher than SpeCA, the best-performing competitor. However, this advantage gradually diminishes as the sampling rate increases. When the sampling rate exceeds 0.25, the DCHS algorithm achieves almost the same MPSNR values as SSHCS. This is because, as the sampling rate increases, the value of $L_g$ becomes smaller and the endmember prediction accuracy for the key band images is reduced, thereby affecting the reconstruction performance. In addition, it can be seen from Figure 5a that the CPPCA algorithm fails at low sampling rates. When the sampling rate exceeds 0.15, the reconstruction performance of CPPCA exceeds that of MT-BCS, and SSHCS exceeds that of SpeCA. Although the performance of CPPCA improves rapidly with increasing sampling rate, it still lags behind the other LMM-based reconstruction algorithms; for example, the MPSNR of DCHS is more than 5 dB higher than that of CPPCA even at the higher sampling rates. The results further demonstrate that hyperspectral compressed sensing reconstruction based on the LMM, such as DCHS, SSCR_SU, SpeCA, and SSHCS, is better than reconstruction algorithms that do not use the LMM, such as CPPCA and MT-BCS.
Similar to the results on the Cuprite dataset, DCHS still obtains the best reconstruction performance on the Urban and PaviaU datasets. It is worth mentioning that, unlike the results on Cuprite and Urban, the reconstruction performance of DCHS on the PaviaU dataset remains excellent even at high sampling rates, which further illustrates the effectiveness of the proposed DCHS framework. In addition, the MT-BCS algorithm also performs very well on the PaviaU dataset; when the sampling rate exceeds 0.3, MT-BCS is superior to all other reconstruction algorithms except DCHS.
Figure 6 shows the visual quality of the original and reconstructed pseudocolor images for the different datasets. The sampling rate is set to 0.0564, 0.0589, and 0.0581 for the Cuprite, Urban, and PaviaU datasets, respectively. It can be seen from the figure that the CPPCA algorithm can hardly recover the original image near the 0.05 sampling rate, and the reconstruction quality of MT-BCS is also very poor. The compressed sensing reconstruction algorithms based on the LMM recover the original images much better, and the spatial details are well preserved. However, slight color distortion can still be observed on the PaviaU dataset. This color distortion indicates that the LMM-based reconstruction algorithms have excellent performance in preserving spatial information but are weaker at maintaining spectral information. The advantages and disadvantages of the SSHCS, SpeCA, SSCR_SU, and DCHS algorithms are hard to distinguish visually; in fact, their subtle color distortions are difficult for the human eye to discern. Therefore, in order to illustrate the visual differences among the images reconstructed by the LMM-based algorithms, we show the residual images between the original and reconstructed images in Figure 7.
Figure 7 shows the residual images at the 28th band of the three datasets. Note that the residuals of each dataset are amplified at the same scale for all algorithms. The brighter the residual image, the larger the residual, that is, the worse the reconstruction performance of the algorithm. The results in Figure 7 clearly demonstrate the effectiveness of the proposed DCHS algorithm: for every dataset, the residual images achieved by DCHS are obviously darker than those of the other three LMM-based algorithms. For the Cuprite and Urban datasets, the residual images of SSCR_SU are brighter than those of SSHCS and SpeCA, while the residual image of SpeCA is the brightest on the PaviaU dataset.
Table 1 reports the MSAM achieved by the various reconstruction algorithms. It can be seen from the experimental results that the proposed DCHS algorithm does not always achieve the lowest spectral angle mapper, which may be caused by the limited prediction accuracy of the key band endmembers in DCHS. A more accurate prediction algorithm might help reduce the MSAM, but this is beyond the scope of this article. Even so, for the Urban dataset, the MSAM of DCHS is lower than that of the other algorithms at all sampling rates. For the other two datasets, DCHS is still very close to the optimal result, and at some specific sampling rates it still outperforms the other algorithms.
The comparison of the original and reconstructed spectral curves is shown in Figure 8. The sampling rates for Cuprite, Urban, and PaviaU are 0.3365, 0.34, and 0.3368, respectively, and locally enlarged subgraphs are also provided. As the spectral deviations of the MT-BCS and CPPCA algorithms are severe, these two algorithms are omitted from Figure 8 in order to show the contrast clearly. As can be seen from the figure, Cuprite has the best spectral matching for all algorithms, while PaviaU has the worst, which is consistent with the MSAM values in Table 1. It is possible that the reconstruction algorithms based on the LMM are sensitive to the number of bands: the higher the number of bands, the better the reconstruction performance.
From the locally enlarged subgraphs in Figure 8, the SpeCA algorithm is the worst on the Cuprite dataset and the SSCR_SU algorithm is the worst on the Urban dataset, while the spectral curves recovered by all of the algorithms for the PaviaU dataset are poor; DCHS and SSHCS are closer to the original spectral curves. However, this is only a local feature and cannot by itself establish the advantages and disadvantages of each algorithm. To evaluate a reconstruction algorithm in the spectral domain, it is also necessary to refer to statistical indicators over all pixels, such as the MSAM.
The experimental results for MSSIM are shown in Table 2, and they are similar to those in Table 1. In most cases, the proposed DCHS achieves the highest MSSIM value; although it is not optimal in a few cases, it is still close to optimal. MT-BCS and CPPCA perform worst in both Table 1 and Table 2. The effectiveness of the LMM-based hyperspectral compressed sensing reconstruction algorithms is thus further confirmed.
In the last experiment, the runtime is measured in order to compare the computational complexity of the algorithms. Here, we use the Cuprite dataset to evaluate the speed of the algorithms. Table 3 presents the runtimes of the different algorithms on the Cuprite dataset. The running times of CPPCA and SSHCS are of the same order of magnitude and achieve the fastest reconstruction speed. The computational complexities of MT-BCS, SpeCA, and DCHS are comparable, and their running times are of the same order of magnitude.

5. Conclusions

In this paper, inspired by DCVS, we proposed a compressed sensing framework for hyperspectral imagery, called DCHS, which first decomposes the hyperspectral data into a CS band and a key band for compressed sampling. To effectively recover the original hyperspectral imagery from the compressed data collected by the proposed sampling framework, we discarded the side-information-based reconstruction of DCVS and developed a hyperspectral reconstruction algorithm based on spectral unmixing for distributed compressed sampling. The reconstruction process is converted into the estimation of the endmembers and their corresponding abundance fractions. A method combining endmember extraction and prediction was proposed for key band endmember estimation, and an optimization algorithm jointly exploiting abundance sparsity and the data fidelity of the key band and CS band observations was designed for abundance estimation. By analyzing the experimental results on three real datasets, we found that the proposed framework is beneficial for reconstructing the original data with the LMM. More notably, the proposed method obtains a higher reconstruction peak signal-to-noise ratio than other state-of-the-art reconstruction algorithms.
However, the proposed DCHS does not always lead in the recovery of the spectral curves. Therefore, in future work, we will look for more accurate endmember prediction algorithms in order to recover the spectral curves with high precision.

Author Contributions

Z.W. wrote the manuscript and performed the experiments. H.X. derived the mathematical formulas and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Key Projects of Natural Science Research of Universities of Anhui Province, grant number KJ2019A0709; Quality Engineering Project of Universities of Anhui Province, grant number 2016zy126; and Overseas Visiting and Research Project for Excellent Young Key Talents in Higher Education Institutions in Anhui Province, grant number gxgwfx2019056.

Acknowledgments

The authors would like to thank the anonymous reviewers for their insightful comments, which helped improve the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Lin, X.; Liu, Y.B.; Wu, J.M.; Dai, Q.H. Spatial-spectral encoded compressive hyperspectral imaging. ACM Trans. Graph. 2014, 33.
2. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Qian, D.; Gader, P.; Chanussot, J. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379.
3. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
4. Candes, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30.
5. Rauhut, H.; Schnass, K.; Vandergheynst, P. Compressed sensing and redundant dictionaries. IEEE Trans. Inf. Theory 2008, 54, 2210–2219.
6. Zhang, L.; Wei, W.; Zhang, Y.; Yan, H.; Li, F.; Tian, C. Locally similar sparsity-based hyperspectral compressive sensing using unmixing. IEEE Trans. Comput. Imaging 2016, 2, 86–100.
7. Zhang, L.; Wei, W.; Tian, C.; Li, F.; Zhang, Y. Exploring structured sparsity by a reweighted Laplace prior for hyperspectral compressive sensing. IEEE Trans. Image Process. 2016, 25, 4974–4988.
8. Duarte, M.F.; Sarvotham, S.; Baron, D.; Wakin, M.B.; Baraniuk, R.G. Distributed compressed sensing of jointly sparse signals. In Proceedings of the Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 28 October–1 November 2005; pp. 1537–1541.
9. Duarte, M.F.; Baraniuk, R.G. Kronecker compressive sensing. IEEE Trans. Image Process. 2012, 21, 494–504.
10. Fowler, J.E. Compressive-projection principal component analysis. IEEE Trans. Image Process. 2009, 18, 2230–2242.
11. Fowler, J.E.; Qian, D. Reconstructions from compressive random projections of hyperspectral imagery. In Optical Remote Sensing: Advances in Signal Processing and Exploitation Techniques; Prasad, S.B.L.M., Chanussot, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 31–48.
12. Li, W.; Prasad, S.; Fowler, J.E. Integration of spectral-spatial information for hyperspectral image reconstruction from compressive random projections. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1379–1383.
13. Ly, N.H.; Du, Q.; Fowler, J.E. Reconstruction from random projections of hyperspectral imagery with spectral and spatial partitioning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 466–472.
14. Chen, C.; Wei, L.; Tramel, E.W.; Fowler, J.E. Reconstruction of hyperspectral imagery from random projections using multihypothesis prediction. IEEE Trans. Geosci. Remote Sens. 2014, 52, 365–374.
15. Martín, G.; Bioucas-Dias, J.M. Hyperspectral blind reconstruction from random spectral projections. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2390–2399.
16. Shu, X.; Ahuja, N. Imaging via three-dimensional compressive sampling (3DCS). In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 439–446.
17. Zhang, B.; Tong, X.; Wang, W.; Xie, J. The research of Kronecker product-based measurement matrix of compressive sensing. Eurasip J. Wirel. Commun. Netw. 2013, 2013, 1–5.
18. Slepian, D.; Wolf, J. Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19, 471–480.
19. Kang, L.-W.; Lu, C.-S. Distributed compressive video sensing. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009), Taipei, Taiwan, 19–24 April 2009; pp. 1169–1172.
20. Do, T.T.; Yi, C.; Nguyen, D.T.; Nguyen, N.; Lu, G.; Tran, T.D. Distributed compressed video sensing. In Proceedings of the IEEE International Conference on Image Processing, Cairo, Egypt, 7–10 November 2009; pp. 1393–1396.
21. Liu, H.; Li, Y.; Wu, C.; Lv, P. Compressed hyperspectral image sensing based on interband prediction. J. Xidian Univ. 2011, 38, 37–41. (In Chinese)
22. Jia, Y.B.; Feng, Y.; Wang, Z.L. Reconstructing hyperspectral images from compressive sensors via exploiting multiple priors. Spectr. Lett. 2015, 48, 22–26.
23. Wang, Y.; Lin, L.; Zhao, Q.; Yue, T.; Meng, D.; Leung, Y. Compressive sensing of hyperspectral images via joint tensor Tucker decomposition and weighted total variation regularization. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2457–2461.
24. Xue, J.; Zhao, Y.; Liao, W.; Chan, J. Nonlocal tensor sparse representation and low-rank regularization for hyperspectral image compressive sensing reconstruction. Remote Sens. 2019, 11, 193.
25. Hong, D.; Yokoya, N.; Chanussot, J.; Zhu, X.X. An augmented linear mixing model to address spectral variability for hyperspectral unmixing. IEEE Trans. Image Process. 2019, 28, 1923–1938.
26. Halimi, A.; Honeine, P.; Bioucas-Dias, J. Hyperspectral unmixing in presence of endmember variability, nonlinearity or mismodelling effects. IEEE Trans. Image Process. 2016, 25, 4565–4579.
27. Wang, Z.; Feng, Y.; Jia, Y. Spatio-spectral hybrid compressive sensing of hyperspectral imagery. Remote Sens. Lett. 2015, 6, 199–208.
28. Wang, L.; Feng, Y.; Gao, Y.; Wang, Z.; He, M. Compressed sensing reconstruction of hyperspectral images based on spectral unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1266–1284.
29. Sun, W.; Du, Q. Hyperspectral band selection: A review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 118–139.
30. Hsu, P.-H. Feature extraction of hyperspectral images using wavelet and matching pursuit. ISPRS J. Photogramm. Remote Sens. 2007, 62, 78–92.
31. Uddin, M.P.; Mamun, M.A.; Hossain, M.A. Feature extraction for hyperspectral image classification. In Proceedings of the 2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC), Dhaka, Bangladesh, 21–23 December 2017; pp. 379–382.
32. Setiyoko, A.; Dharma, I.G.W.S.; Haryanto, T. Recent development of feature extraction and classification multispectral/hyperspectral images: A systematic literature review. J. Phys. Conf. Ser. 2017.
33. Marcinkiewicz, M.; Kawulok, M.; Nalepa, J. Segmentation of multispectral data simulated from hyperspectral imagery. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 3336–3339.
34. Lorenzo, P.R.; Tulczyjew, L.; Marcinkiewicz, M.; Nalepa, J. Hyperspectral band selection using attention-based convolutional neural networks. IEEE Access 2020, 8, 42384–42403.
35. Nascimento, J.M.P.; Dias, J.M.B. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910.
36. Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
37. Peng, X.; Lu, C.; Yi, Z.; Tang, H. Connections between nuclear-norm and Frobenius-norm-based representations. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 218–224.
38. Peng, X.; Feng, J.; Xiao, S.; Yau, W.; Zhou, J.T.; Yang, S. Structured autoencoders for subspace clustering. IEEE Trans. Image Process. 2018, 27, 5076–5086.
39. Peng, X.; Zhu, H.; Feng, J.; Shen, C.; Zhang, H.; Zhou, J.T. Deep clustering with sample-assignment invariance prior. IEEE Trans. Neural Netw. Learn. Syst. 2019, 1–12.
40. Peng, X.; Feng, J.; Zhou, J.T.; Lei, Y.; Yan, S. Deep subspace clustering. IEEE Trans. Neural Netw. Learn. Syst. 2020, 1–13.
41. Bioucas-Dias, J.M.; Nascimento, J.M.P. Hyperspectral subspace identification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2435–2445.
42. Kokaly, R.F.; Clark, R.N.; Swayze, G.A.; Livo, K.E.; Hoefen, T.M.; Pearson, N.C.; Wise, R.A.; Benzel, W.M.; Lowers, H.A.; Driscoll, R.L.; et al. USGS Spectral Library Version 7. 2017. Available online: https://www.researchgate.net/publication/323486055_USGS_Spectral_Library_Version_7 (accessed on 10 February 2020).
43. Eckstein, J.; Bertsekas, D.P. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318.
44. Lin, Y.; Wohlberg, B.; Vesselinov, V. ADMM penalty parameter selection with Krylov subspace recycling technique for sparse coding. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1945–1949.
45. Combettes, P.; Wajs, R.V. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1164–1200.
46. Ji, S.; Dunson, D.; Carin, L. Multitask compressive sensing. IEEE Trans. Signal Process. 2009, 57, 92–106.
47. Zhu, F.; Wang, Y.; Xiang, S.; Fan, B.; Pan, C. Structured sparse method for hyperspectral unmixing. ISPRS J. Photogramm. Remote Sens. 2014, 88, 101–118.
48. Gamba, P. A collection of data for urban area characterization. In Proceedings of the IGARSS 2004—2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, 20–24 September 2004; pp. 1–72.
49. Zhou, W.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Framework of distributed compressed hyperspectral sensing (DCHS).
Figure 2. The performance of different interpolation methods.
Figure 3. Sensitivity analysis of the regularization parameters $\lambda_1$ and $\lambda_2$ of the proposed DCHS algorithm with different numbers of bands per group $L_g$ and different sampling rates $SR$: (a) $L_g = 20$, $SR = 0.0564$; (b) $L_g = 10$, $SR = 0.1048$; (c) $L_g = 5$, $SR = 0.2048$.
Figure 4. Sensitivity analysis of the parameter $\mu$ of the proposed DCHS algorithm with different numbers of bands per group $L_g$.
Figure 5. Mean peak signal-to-noise ratio (MPSNR) curves of different algorithms for different datasets: (a) Cuprite dataset, (b) Urban dataset, and (c) PaviaU dataset.
Figure 6. Original and reconstructed pseudocolor images achieved by different algorithms on different datasets near the 0.05 sampling rate, from top to bottom Cuprite, Urban, and PaviaU: (a) original, (b) MT-BCS, (c) CPPCA, (d) SSHCS, (e) SpeCA, (f) SSCR_SU, and (g) DCHS.
Figure 7. The 28th band residual images of different algorithms on different datasets near the 0.05 sampling rate, from top to bottom Cuprite, Urban, and PaviaU: (a) SSHCS, (b) SpeCA, (c) SSCR_SU, and (d) DCHS.
Figure 8. Original and reconstructed spectral curves achieved by different algorithms: (a) Cuprite dataset, (b) Urban dataset, and (c) PaviaU dataset.
Table 1. Comparison of mean spectral angle mapper (MSAM) (°) achieved by the various algorithms (the best results are in bold).

L_g      30       20       15       10       7        5        4        3
Results on the Cuprite Dataset
SR       0.0416   0.0564   0.0732   0.1048   0.1469   0.2048   0.2575   0.3365
MT-BCS   13.135   6.5641   5.6391   4.4379   3.5327   3.0821   2.8048   2.529
CPPCA    87.408   82.683   63.714   21.502   3.5086   0.9931   0.9064   0.6554
SSHCS    1.8222   0.9963   0.8775   1.1346   0.6602   0.4074   0.387    0.3658
SpeCA    0.8355   0.8227   0.6226   0.5005   0.4706   0.4523   0.4222   0.4079
SSCR_SU  1.7216   1.0615   0.983    0.9051   0.7575   0.6331   0.5846   0.5059
DCHS     0.9694   0.7088   0.6528   0.5735   0.5083   0.4901   0.4537   0.4352
Results on the Urban Dataset
SR       0.0406   0.0589   0.0711   0.1078   0.1506   0.2056   0.2544   0.34
MT-BCS   20.949   12.691   10.356   7.7535   5.5784   3.9322   3.1864   2.241
CPPCA    87.626   73.818   50.011   13.114   4.9118   3.1499   2.3014   1.7288
SSHCS    3.8007   3.5748   3.1717   2.2219   2.0846   1.4805   1.1498   1.0011
SpeCA    3.2096   2.8107   2.5085   2.1831   2.0845   2.0381   1.9604   1.9545
SSCR_SU  8.9135   4.6612   2.94     2.6452   2.3153   2.1638   1.6217   1.3918
DCHS     2.671    2.2361   2.0754   1.533    1.2448   1.1214   1.0631   0.9546
Results on the PaviaU Dataset
SR       0.0388   0.0581   0.0677   0.1061   0.1446   0.2022   0.2503   0.3368
MT-BCS   41.148   15.13    11.286   5.7488   3.6916   2.3748   1.8101   1.2232
CPPCA    88.814   87.689   82.943   59.258   8.9603   3.4445   3.1112   2.357
SSHCS    8.2997   6.0203   4.7841   2.8518   2.3971   1.8155   1.6073   1.4682
SpeCA    4.9424   4.1384   3.3207   2.3286   2.0295   1.809    1.6015   1.5467
SSCR_SU  15.2691  5.7259   5.0558   3.477    2.6279   2.144    1.7809   1.7833
DCHS     5.5415   3.211    3.0072   2.4134   2.2623   2.1816   1.3542   1.1391
Table 2. Comparison of MSSIM achieved by the various algorithms (the best results are in bold).

L_g      30       20       15       10       7        5        4        3
Results on the Cuprite Dataset
SR       0.0416   0.0564   0.0732   0.1048   0.1469   0.2048   0.2575   0.3365
MT-BCS   0.2987   0.6297   0.7213   0.8332   0.91     0.9461   0.9604   0.9729
CPPCA    0.0001   0.0026   0.0191   0.3278   0.9535   0.9862   0.9876   0.9926
SSHCS    0.9624   0.987    0.9855   0.9916   0.9940   0.9962   0.9965   0.997
SpeCA    0.9863   0.9875   0.9912   0.9946   0.9953   0.9956   0.9961   0.9964
SSCR_SU  0.9737   0.988    0.9874   0.9876   0.9896   0.9925   0.9933   0.9949
DCHS     0.9857   0.9888   0.9902   0.9922   0.9938   0.994    0.9949   0.9953
Results on the Urban Dataset
SR       0.0406   0.0589   0.0711   0.1078   0.1506   0.2056   0.2544   0.34
MT-BCS   0.4105   0.614    0.6823   0.7563   0.828    0.8924   0.9158   0.9487
CPPCA    0.0051   0.0332   0.2008   0.6924   0.8842   0.9393   0.9609   0.9734
SSHCS    0.9443   0.9424   0.9314   0.959    0.9558   0.973    0.9832   0.9863
SpeCA    0.9467   0.9474   0.9675   0.9741   0.9754   0.9793   0.9795   0.9804
SSCR_SU  0.8762   0.9344   0.9648   0.9711   0.9742   0.9771   0.9822   0.9839
DCHS     0.9667   0.9722   0.9749   0.9825   0.9857   0.9871   0.9883   0.9901
Results on the PaviaU Dataset
SR       0.0388   0.0581   0.0677   0.1061   0.1446   0.2022   0.2503   0.3368
MT-BCS   0.124    0.4773   0.5716   0.7687   0.8617   0.9239   0.9492   0.9742
CPPCA    0.0068   0.0072   0.0173   0.1254   0.7075   0.9013   0.9154   0.9351
SSHCS    0.803    0.8714   0.8928   0.9445   0.9585   0.9717   0.9755   0.9814
SpeCA    0.8149   0.863    0.8841   0.9341   0.9471   0.958    0.9641   0.9655
SSCR_SU  0.6054   0.8653   0.8354   0.9105   0.9383   0.945    0.9566   0.9565
DCHS     0.861    0.9187   0.92     0.9413   0.9505   0.9572   0.9755   0.9834
Table 3. Comparison of runtime (s) for the various algorithms on the Cuprite dataset.

L_g      30        20        15        10        7         5         4         3
SR       0.0416    0.0564    0.0732    0.1048    0.1469    0.2048    0.2575    0.3365
MT-BCS   19.0391   22.1187   17.7807   25.6069   29.3980   34.6212   43.4569   52.9737
CPPCA    0.1005    0.0627    0.0585    0.1006    0.1066    0.1727    0.2139    0.5978
SSHCS    0.2831    0.1351    0.1269    0.0917    0.1132    0.1012    0.0919    0.0932
SpeCA    15.5695   30.4764   49.2695   58.9999   58.9444   59.8885   57.1876   56.5255
SSCR_SU  4.2837    3.3935    1.2450    3.4530    1.3813    1.2545    1.3005    1.3809
DCHS     33.0788   34.6071   36.2856   34.2655   33.2441   30.8957   29.1711   26.9435
