Article

Hyperspectral Image Super-Resolution via Adaptive Dictionary Learning and Double ℓ1 Constraint

1 Department of Criminal Science and Technology, Nanjing Forest Police College, Nanjing 210023, China
2 School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
3 School of Science, Guangxi University of Science and Technology, Liuzhou 545006, China
4 School of Computer Science and Technology, Jinling Institute of Technology, Nanjing 211169, China
5 School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(23), 2809; https://doi.org/10.3390/rs11232809
Submission received: 24 October 2019 / Revised: 24 November 2019 / Accepted: 25 November 2019 / Published: 27 November 2019
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)

Abstract: Hyperspectral image (HSI) super-resolution (SR) is an important technique for improving the spatial resolution of HSI. Recently, methods based on sparse representation have improved the performance of HSI SR significantly. However, the spectral dictionary in these methods is learned at a fixed, empirically chosen size, without considering the training data, and most existing methods fail to explore the relationships among the sparse coefficients. To address these issues, an effective method for HSI SR is proposed in this paper. First, a spectral dictionary is learned whose size is adaptively estimated from the input HSI without any prior information. The proposed method then exploits the nonlocal correlation of the sparse coefficients: a double ℓ1 regularized sparse representation is introduced to achieve better reconstructions for HSI SR. Finally, a high-spatial-resolution HSI is generated from the obtained coefficient matrix and the learned adaptive-size spectral dictionary. To evaluate the performance of the proposed method, we conduct experiments on two widely used datasets. The experimental results demonstrate that it outperforms several state-of-the-art methods in terms of popular universal quality evaluation indexes.

Graphical Abstract

1. Introduction

Hyperspectral sensors capture images with many contiguous and very narrow spectral bands spanning the visible, near-infrared, and mid-infrared portions of the spectrum [1,2]. A hyperspectral image (HSI) can therefore resolve fine spectral differences between materials, which makes it widely and successfully applicable to many tasks, such as object classification [3,4], tracking [5], recognition [6], and remote sensing [7,8]. Due to hardware limitations, real captured HSI usually has low spatial resolution (LR), which significantly limits its applications. Enhancing spatial resolution by improving the imaging quality of hyperspectral sensors is not practical, as such a hardware breakthrough would be difficult and costly. Alternatively, HSI super-resolution (SR) has been proposed to generate a high-spatial-resolution (HR) HSI by fusing the high-spectral-resolution HSI with a high-spatial-resolution image, such as a panchromatic image [9,10,11,12,13,14,15,16] or a multispectral image (MSI) [17,18,19,20,21,22,23,24,25,26,27,28,29,30].

1.1. Related Work

Traditionally, spatial–spectral image fusion methods fuse an LR HSI with an HR panchromatic image (a single band), a process known as pansharpening [9]. Well-known pansharpening methods include intensity–hue–saturation (IHS) [10,11], high-frequency information injection [12,13], and model-based methods [14,15,16]. Owing to the limited spectral resolution of the panchromatic image, these methods often produce spectral distortions. Accordingly, fusing the LR HSI with an HR MSI (see Figure 1) has attracted increasing attention. Spectral unmixing and sparse representation have become the mainstream approaches for this fusion [17,18], and several HSI SR methods have been built on them [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40]. In the following, we briefly review these two categories of studies.

1.1.1. Spectral Unmixing Based Methods

In these methods, the latent HSI is decomposed into endmember and abundance matrices. The unmixing strategy [23,24] was first applied to the HSI SR problem. Yokoya et al. [19] proposed reconstructing the HR HSI from a multispectral image and the corresponding HSI by a coupled nonnegative matrix factorization (CNMF) approach. Although this work provided very promising results, the solution of nonnegative matrix factorization (NMF) is often not unique [20,21], and the results are not always satisfactory. Taking the sensor observation models into consideration, Bendoumi et al. [22] divided the whole image into several sub-images, so that the HSI could be enhanced with small spectral distortions. In general, the abovementioned methods have to unfold the three-dimensional MSI or HSI data into two-dimensional matrices. To avoid this unfolding, the coupled sparse tensor factorization (CSTF) method [26] expresses the HSI as a tensor with three modes. Further, Dian et al. [27] presented a novel HSI SR approach that incorporates both nonlocal self-similarity and tensor factorization into a unified framework.

1.1.2. Sparse Representation-Based Methods

Inspired by sparse signal decomposition, the final HR HSI is represented with an appropriate spectral dictionary learned from the real captured HSI [28,29,30]. A sparse regularization term was carefully designed in [28], where Qi et al. proposed fusing HSI and MSI within a constrained optimization framework. Taking different spatial and spectral properties into account, the spatial and spectral fusion model (SASFM) used sparse matrix factorization to enhance the resolution of the input HSI [32]. In [33], correlations of signals in adjacent hyperspectral channels were exploited, based on the assumption that signals in different channels are jointly sparse in suitable dictionaries. Recently, a nonnegative structured sparse representation (NSSR) method [38] was investigated to reconstruct an HR HSI from an LR HSI and an HR RGB image. To exploit global-structure and local-spectral self-similarity, a self-similarity constrained sparse-representation (SSCSR) model was proposed by Han et al. [39]. Owing to the complex structures in HSI, the superpixel-based sparse-representation (SSR) model can extract spectral features effectively [40].

1.2. Motivation and Contributions

Although the aforementioned methods achieve impressive recovery performance, the fused results can be improved in two respects. On the one hand, the spectral basis (or dictionary) generally has many atoms, with sizes ranging from hundreds to hundreds of thousands depending on the training data; learning a dictionary of adaptive size can therefore represent the data compactly and accurately. On the other hand, due to the nonlocal correlations present in natural images [41,42], the sparse coefficients are not randomly distributed. Therefore, the high-spatial-resolution HSI can be generated more faithfully by exploiting the nonlocal similarity of these coefficients.
In this paper, we propose an adaptive dictionary learning and double ℓ1 regularized sparse-representation model for HSI SR. This novel model mainly contains three parts. First, the proposed method learns an adaptive spectral dictionary whose atoms reflect the spectral signatures of the materials in the HSI; notably, the learning framework estimates the number of atoms while learning the dictionary. Then, a transformed dictionary, which reflects the spectral signatures of the MSI, is generated by choosing the corresponding bands from the adaptive spectral dictionary. Because of the nonlocal correlations present in natural images, the sparse coefficients are not randomly distributed; to exploit this nonlocal similarity, a double ℓ1 model is used to characterize the coefficients obtained by decomposing pixels on the transformed dictionary. Finally, the HR HSI is estimated from the adaptive spectral dictionary and the coefficients. The detailed flowchart is presented in Figure 2.
The proposed method has the following distinct features.
(1) To represent the complex structures of HSI more effectively, an efficient adaptive-size strategy is introduced to learn the spectral dictionary instead of using a fixed-size dictionary.
(2) The proposed adaptive learning framework saves the time and effort of searching for the correct dictionary size for the content of each HSI.
(3) To improve the performance of HSI SR, a double ℓ1 constraint on the sparse coefficients, based on the adaptive-size dictionary, is exploited to capture the nonlocal similarity of the spectral–spatial information.
(4) The proposed model can be easily and effectively solved. In addition, extensive experimental results on different HSI datasets validate the superiority of our method.

2. Methodology

2.1. The Traditional Sparse-Representation Method

Let $\mathbf{X} \in \mathbb{R}^{B \times n}$ be the LR HSI, where $n = a_1 \times a_2$; $a_1$ and $a_2$ are the spatial width and height of $\mathbf{X}$, and $B$ is the number of spectral bands. The acquired HR MSI is $\mathbf{Y} \in \mathbb{R}^{b \times N}$, where $N = a_3 \times a_4$, $a_3 \gg a_1$, $a_4 \gg a_2$, and $B \gg b$. HSI SR aims to estimate an HR HSI $\mathbf{Z} \in \mathbb{R}^{B \times N}$ from $\mathbf{X}$ and $\mathbf{Y}$. In general, the relationship between $\mathbf{X}$, $\mathbf{Y}$, and the latent HSI $\mathbf{Z}$ can be modelled as follows:
$$\mathbf{X} = \mathbf{Z}\mathbf{H} \quad (1)$$
$$\mathbf{Y} = \mathbf{P}\mathbf{Z} \quad (2)$$
where $\mathbf{H} \in \mathbb{R}^{N \times n}$ denotes a spatial degradation operator, and $\mathbf{P} \in \mathbb{R}^{b \times B}$ is a spectral transform matrix.
As a promising fusion technique, sparse representation has achieved great success [32,34,38,39]. In these methods, each pixel $\mathbf{z}_i \in \mathbb{R}^{B}$ of $\mathbf{Z}$ is represented by the following linear combination:
$$\mathbf{z}_i = \mathbf{D}\boldsymbol{\alpha}_i + \mathbf{e}_i \quad (3)$$
where $\mathbf{D} \in \mathbb{R}^{B \times K}$ is a spectral dictionary, $\boldsymbol{\alpha}_i \in \mathbb{R}^{K}$ is the corresponding representation coefficient, assumed to be sparse, and $\mathbf{e}_i$ is the representation error.
According to Equation (1), each pixel $\mathbf{x}_i \in \mathbb{R}^{B}$ can also be represented as follows:
$$\mathbf{x}_i = \sum_{j=1}^{N} \mathbf{H}_{j,i}\,\mathbf{z}_j = \sum_{j=1}^{N} \mathbf{H}_{j,i}\,\mathbf{D}\boldsymbol{\alpha}_j = \mathbf{D}\sum_{j=1}^{N} \mathbf{H}_{j,i}\,\boldsymbol{\alpha}_j = \mathbf{D}\boldsymbol{\beta}_i + \mathbf{t}_i \quad (4)$$
where $\boldsymbol{\beta}_i \in \mathbb{R}^{K}$ is the representation coefficient in the spectral dictionary $\mathbf{D}$, $\mathbf{H}_{j,i}$ denotes the element in the $j$-th row and $i$-th column of $\mathbf{H}$, and $\mathbf{t}_i$ is the residual. According to Equations (2) and (3), each pixel $\mathbf{y}_i \in \mathbb{R}^{b}$ of the MSI $\mathbf{Y}$ can be formulated as follows:
$$\mathbf{y}_i = \mathbf{P}\mathbf{z}_i \approx \mathbf{P}\mathbf{D}\boldsymbol{\alpha}_i \quad (5)$$
By combining Equations (4) and (5), it is straightforward to obtain the spectral dictionary $\mathbf{D}$ and the corresponding coefficient matrix $\mathbf{A} = [\boldsymbol{\alpha}_1, \boldsymbol{\alpha}_2, \ldots, \boldsymbol{\alpha}_N] \in \mathbb{R}^{K \times N}$. Finally, the HR HSI $\mathbf{Z}$ is generated from the following equation:
$$\mathbf{Z} \approx \mathbf{D}\mathbf{A} \quad (6)$$
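As a concrete illustration of the observation model in Equations (1), (2), and (6), the following NumPy sketch wires the matrices together with toy dimensions. All sizes and random matrices here are hypothetical, chosen only to make the shapes visible; the degradation operator is a simple block-averaging matrix.

```python
import numpy as np

# Toy dimensions (hypothetical, for illustration only)
B, b = 32, 4        # hyperspectral / multispectral band counts
N, n = 64, 4        # HR / LR pixel counts
K = 8               # dictionary size

rng = np.random.default_rng(0)
D = rng.random((B, K))          # spectral dictionary
A = rng.random((K, N))          # sparse coefficient matrix
# H: each LR pixel averages N/n HR pixels (columns sum to 1)
H = np.repeat(np.eye(n), N // n, axis=0) / (N // n)
P = rng.random((b, B))          # spectral transform (sensor response)

Z = D @ A                       # Eq. (6): latent HR HSI
X = Z @ H                       # Eq. (1): simulated LR HSI
Y = P @ Z                       # Eq. (2): simulated HR MSI
assert X.shape == (B, n) and Y.shape == (b, N)
```

The dimension check at the end makes the roles of $\mathbf{H}$ (spatial degradation, acting on pixels) and $\mathbf{P}$ (spectral degradation, acting on bands) explicit.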

2.2. The Proposed Method

Due to the complex structures in HSI, a dictionary of fixed size, as used in traditional methods, cannot reasonably represent the diverse variations; in other words, the traditional sparse-representation model cannot represent the complex structures of HSI accurately. We therefore propose learning a spectral dictionary of adaptive size, according to the content of the captured area. Furthermore, a double ℓ1 prior on the sparse coefficients is exploited to improve the HSI SR quality.

2.2.1. Adaptive Spectral Dictionary Learning

Traditionally, the spectral dictionary $\mathbf{D} \in \mathbb{R}^{B \times K}$ is learned from a set of training examples $\{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$:
$$\min_{\mathbf{D},\mathbf{B}} \frac{1}{n}\sum_{i=1}^{n} \left( \left\| \mathbf{x}_i - \mathbf{D}\boldsymbol{\beta}_i \right\|_2^2 + \lambda \left\| \boldsymbol{\beta}_i \right\|_1 \right) \quad (7)$$
where $\lambda$ is a balance parameter and $\mathbf{B} = [\boldsymbol{\beta}_1, \boldsymbol{\beta}_2, \ldots, \boldsymbol{\beta}_n]$ is the corresponding coefficient matrix. The dictionary size $K$ is traditionally selected empirically. As previously mentioned, a spectral dictionary of fixed size cannot reflect the complex structures of HSI accurately; we can instead select the dictionary size adaptively by introducing a size penalty. Motivated by [43], the size penalty is modelled with a row-sparse norm. To this end, we define the row vectors $\hat{\boldsymbol{\beta}}_j = (\beta_{1,j}, \ldots, \beta_{n,j})$, $1 \le j \le K$, where $\beta_{i,j}$ denotes the $j$-th element of $\boldsymbol{\beta}_i$. The traditional dictionary-learning framework is then updated as follows:
$$\min_{\mathbf{D},\mathbf{B}} \frac{1}{n}\sum_{i=1}^{n} \left( \left\| \mathbf{x}_i - \mathbf{D}\boldsymbol{\beta}_i \right\|_2^2 + \lambda \left\| \boldsymbol{\beta}_i \right\|_1 \right) + \mu \sum_{j=1}^{K} E(\hat{\boldsymbol{\beta}}_j) \quad (8)$$
where $E(\hat{\boldsymbol{\beta}}_j) = 0$ if $\hat{\boldsymbol{\beta}}_j = \mathbf{0}$ and $E(\hat{\boldsymbol{\beta}}_j) = 1$ otherwise, and $\mu$ is a balance parameter. The size penalty thus outputs 0 for zero rows and 1 otherwise. Note that the dictionary is not initially constrained to contain zero atoms; Equation (8) automatically determines the number of nonzero rows of $\mathbf{B}$, and hence the number of active atoms. Thus, we can learn a spectral dictionary of adaptive size through this model.
The objective function in Equation (8) contains multivariate indicator terms. Inspired by [44], we introduce a penalty $R_\delta(\mathbf{b})$ to aid optimization, defined as follows:
$$R_\delta(\mathbf{b}) = \min_{\mathbf{v} \in \mathbb{R}^n} \left( \delta \left\| \mathbf{b} - \mathbf{v} \right\|_2^2 + E(\mathbf{v}) \right) \quad (9)$$
If the parameter $\delta$ is large enough, $R_\delta(\mathbf{b})$ closely approximates the multivariate indicator function $E(\hat{\boldsymbol{\beta}}_j)$ [44].
In summary, the adaptive spectral dictionary $\mathbf{D}$ is obtained from the following optimization problem:
$$\min_{\mathbf{D},\mathbf{B},\mathbf{V}} \frac{1}{n}\sum_{i=1}^{n} \left( \left\| \mathbf{x}_i - \mathbf{D}\boldsymbol{\beta}_i \right\|_2^2 + \lambda \left\| \boldsymbol{\beta}_i \right\|_1 \right) + \mu \sum_{j=1}^{K} \left( \delta \left\| \hat{\boldsymbol{\beta}}_j - \hat{\mathbf{v}}_j \right\|_2^2 + E(\hat{\mathbf{v}}_j) \right) \quad (10)$$
where $\mathbf{v}_i$ denotes a column vector and $\hat{\mathbf{v}}_j$ the corresponding row vector of $\mathbf{V}$. Equation (10) is optimized in an alternating scheme over three stages, which correspond to Equations (11)–(20).
In the “dictionary update” stage, the sparse coefficients $\mathbf{B}$ and the variable $\mathbf{V}$ are fixed, and the dictionary is obtained from:
$$\min_{\mathbf{D}} L(\mathbf{D}) = \min_{\mathbf{D}} \frac{1}{n}\sum_{i=1}^{n} \left\| \mathbf{x}_i - \mathbf{D}\boldsymbol{\beta}_i \right\|_2^2 \quad (11)$$
The stochastic gradient descent algorithm [45] is employed to update $\mathbf{D}$ iteratively. In the $it$-th iteration, we have:
$$\mathbf{D}^{(it)} = \mathbf{D}^{(it-1)} - \delta_g \nabla_{\mathbf{D}} L(\mathbf{D}^{(it-1)}) \quad (12)$$
where $\delta_g$ is a learning rate and $\nabla_{\mathbf{D}}$ denotes the gradient with respect to $\mathbf{D}$. Substituting $\nabla_{\mathbf{D}} L(\mathbf{D}^{(it-1)}) = -\frac{2}{n}\sum_{i=1}^{n} \left( \mathbf{x}_i - \mathbf{D}^{(it-1)}\boldsymbol{\beta}_i \right) \boldsymbol{\beta}_i^T$ into Equation (12) yields:
$$\mathbf{D}^{(it)} = \mathbf{D}^{(it-1)} + \frac{2\delta_g}{n}\sum_{i=1}^{n} \left( \mathbf{x}_i - \mathbf{D}^{(it-1)}\boldsymbol{\beta}_i \right) \boldsymbol{\beta}_i^T \quad (13)$$
Similar to the “dictionary update” stage, with the other variables fixed, the sparse coefficients $\mathbf{B}$ are updated according to:
$$\min_{\mathbf{B}} \frac{1}{n}\sum_{i=1}^{n} \left( \left\| \mathbf{x}_i - \mathbf{D}\boldsymbol{\beta}_i \right\|_2^2 + \lambda \left\| \boldsymbol{\beta}_i \right\|_1 \right) + \mu\delta \sum_{j=1}^{K} \left\| \hat{\boldsymbol{\beta}}_j - \hat{\mathbf{v}}_j \right\|_2^2 = \min_{\mathbf{B}} \frac{1}{n}\sum_{i=1}^{n} \left( \left\| \mathbf{x}_i - \mathbf{D}\boldsymbol{\beta}_i \right\|_2^2 + \lambda \left\| \boldsymbol{\beta}_i \right\|_1 + n\mu\delta \left\| \boldsymbol{\beta}_i - \mathbf{v}_i \right\|_2^2 \right) \quad (14)$$
Equation (14) is separable in $i$, so the optimal $\boldsymbol{\beta}_i$ solves the independent problems of the form:
$$\min_{\boldsymbol{\beta}_i} \left\| \mathbf{x}_i - \mathbf{D}\boldsymbol{\beta}_i \right\|_2^2 + \lambda \left\| \boldsymbol{\beta}_i \right\|_1 + n\mu\delta \left\| \boldsymbol{\beta}_i - \mathbf{v}_i \right\|_2^2 \quad (15)$$
Let $\Upsilon = \begin{bmatrix} \mathbf{x}_i \\ \sqrt{n\mu\delta}\,\mathbf{v}_i \end{bmatrix}$ and $\Theta = \begin{bmatrix} \mathbf{D} \\ \sqrt{n\mu\delta}\,\mathbf{I} \end{bmatrix}$; then Equation (15) can be written as:
$$\min_{\boldsymbol{\beta}_i} \left\| \Upsilon - \Theta\boldsymbol{\beta}_i \right\|_2^2 + \lambda \left\| \boldsymbol{\beta}_i \right\|_1 \quad (16)$$
This is a combination of a quadratic term and an $\ell_1$ sparsity term, which the iterative shrinkage thresholding algorithm [46] solves efficiently. In the $it$-th iteration, we obtain (elementwise):
$$\boldsymbol{\beta}_i^{(it)} = \begin{cases} \boldsymbol{\upsilon}^{(it-1)} - \frac{\lambda}{2}\,\mathrm{sgn}\!\left(\boldsymbol{\upsilon}^{(it-1)}\right), & \left|\boldsymbol{\upsilon}^{(it-1)}\right| > \frac{\lambda}{2} \\ 0, & \text{otherwise} \end{cases} \quad (17)$$
where $\boldsymbol{\upsilon}^{(it-1)} = \boldsymbol{\beta}_i^{(it-1)} + \Theta^T\!\left( \Upsilon - \Theta\boldsymbol{\beta}_i^{(it-1)} \right)$.
Finally, with $\mathbf{D}$ and $\mathbf{B}$ fixed, we update the variable $\mathbf{V}$:
$$\min_{\mathbf{V}} \sum_{j=1}^{K} \left( \delta \left\| \hat{\boldsymbol{\beta}}_j - \hat{\mathbf{v}}_j \right\|_2^2 + E(\hat{\mathbf{v}}_j) \right) \quad (18)$$
which decomposes into $K$ independent problems with respect to $j$:
$$\min_{\hat{\mathbf{v}}_j} \delta \left\| \hat{\boldsymbol{\beta}}_j - \hat{\mathbf{v}}_j \right\|_2^2 + E(\hat{\mathbf{v}}_j) \quad (19)$$
According to [44], the solution is:
$$\hat{\mathbf{v}}_j = \begin{cases} \mathbf{0}, & \delta \left\| \hat{\boldsymbol{\beta}}_j \right\|_2^2 < 1 \\ \hat{\boldsymbol{\beta}}_j, & \text{otherwise} \end{cases} \quad (20)$$
That is, a row is zeroed out (and its atom effectively pruned) when keeping it would cost more than the unit size penalty.
Algorithm 1: Adaptive Spectral Dictionary Learning
Input: the training examples $\{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$; the regularization parameters $\lambda = 0.2$, $\mu = 0.001$.
Initialize $\delta = 1$, $it = 1$, $\mathbf{V}^{(0)} = \mathbf{0}_{K \times n}$.
while $\delta < 10^6$ do
  Given $\mathbf{V}^{(it-1)}$ and $\mathbf{D}^{(it-1)}$, update $\mathbf{B}^{(it)}$ by (17);
  Given $\mathbf{B}^{(it)}$, update $\mathbf{V}^{(it)}$ by (20);
  Given $\mathbf{B}^{(it)}$ and $\mathbf{D}^{(it-1)}$, update $\mathbf{D}^{(it)}$ by (13);
  $it \leftarrow it + 1$;
  $\delta \leftarrow 2\delta$.
end while
Output: the spectral dictionary $\mathbf{D} = \mathbf{D}^{(it)}$.
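The steps of Algorithm 1 can be sketched compactly in NumPy. This is a simplified reimplementation under our own choices of initial size K0, learning rate, and inner ISTA iteration count (none of these values come from the paper), not the authors' code; atoms whose coefficient rows stay inactive are pruned at the end, which is how the size penalty shrinks the dictionary.

```python
import numpy as np

def soft(u, t):
    """Elementwise soft-thresholding, as in Eq. (17)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def adaptive_dict_learn(X, K0=20, lam=0.2, mu=1e-3, lr=0.01, inner=20):
    """Sketch of Algorithm 1: alternate B-, V-, and D-updates while
    doubling delta. K0, lr, and inner are illustrative choices."""
    Bdim, n = X.shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((Bdim, K0))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    Bcoef = np.zeros((K0, n))
    V = np.zeros((K0, n))
    keep = np.ones(K0, dtype=bool)
    delta = 1.0
    while delta < 1e6:
        # B-update: ISTA on the stacked least-squares problem, Eqs. (15)-(17)
        w = np.sqrt(n * mu * delta)
        Theta = np.vstack([D, w * np.eye(K0)])
        Ups = np.vstack([X, w * V])
        step = 1.0 / np.linalg.norm(Theta.T @ Theta, 2)
        for _ in range(inner):
            grad = Theta.T @ (Theta @ Bcoef - Ups)
            Bcoef = soft(Bcoef - step * grad, step * lam / 2)
        # V-update, Eq. (20): zero out rows that are not active enough
        keep = np.sum(Bcoef ** 2, axis=1) >= 1.0 / delta
        V = np.where(keep[:, None], Bcoef, 0.0)
        # D-update: gradient step on the reconstruction error, Eq. (13)
        D = D - lr * (D @ Bcoef - X) @ Bcoef.T / n
        delta *= 2.0
    return D[:, keep], Bcoef[keep]
```

Because $\delta$ doubles each pass, the outer loop runs about $\log_2 10^6 \approx 20$ times, and the threshold $1/\delta$ in the V-update becomes progressively stricter about which atoms survive.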

2.2.2. Double ℓ1 Regularization

This subsection addresses how to obtain the sparse coefficients $\boldsymbol{\alpha}_i$ from the HR MSI $\mathbf{Y}$ and the spectral dictionary $\mathbf{D}$ associated with Equation (5). Traditionally, $\boldsymbol{\alpha}_i$ is obtained with an $\ell_1$ constraint as follows:
$$\boldsymbol{\alpha}_i = \arg\min_{\boldsymbol{\alpha}_i} \left\| \mathbf{y}_i - \bar{\mathbf{D}}\boldsymbol{\alpha}_i \right\|_2^2 + \lambda_1 \left\| \boldsymbol{\alpha}_i \right\|_1 \quad (21)$$
where $\bar{\mathbf{D}} = \mathbf{P}\mathbf{D}$. In Equation (21), each sparse coefficient $\boldsymbol{\alpha}_i$ is computed independently. In reality, each pixel is strongly correlated with its nonlocal similar neighbors in the HR HSI, so the sparse coefficients are unlikely to be randomly distributed. In other words, better SR results can be produced if the nonlocal similarities of the sparse coefficients are considered.
In Figure 3, we plot the empirical distributions of $\boldsymbol{\alpha}_i - \sum_{j \in NN(i)} p_{ij}\boldsymbol{\alpha}_j$ corresponding to the 23rd and 57th atoms of the dictionary $\mathbf{D}$ (other atoms exhibit similar distributions). Here, $NN(i)$ is the index set of similar neighbors of $\boldsymbol{\alpha}_i$, the weights $p_{ij} = \frac{1}{c}\exp\!\left( -\left\| \boldsymbol{\alpha}_i - \boldsymbol{\alpha}_j \right\|_2^2 \right)$ are based on the similarity of $\boldsymbol{\alpha}_i$ and $\boldsymbol{\alpha}_j$, $c$ is a positive normalization constant, and the number of nearest neighbors in $NN(i)$ is set to 10. The empirical distributions are highly peaked at zero and are effectively characterized by $\ell_1$ functions, while $\ell_2$ functions have a much larger fitting error. This motivates us to improve the super-resolution quality by modelling the nonlocal similarity of the sparse coefficients with an $\ell_1$ prior.
Based on this consideration, we incorporate the nonlocal geometric structure into the single $\ell_1$ constraint model as an additional regularization term for HSI SR:
$$\boldsymbol{\alpha}_i = \arg\min_{\boldsymbol{\alpha}_i} \left\| \mathbf{y}_i - \bar{\mathbf{D}}\boldsymbol{\alpha}_i \right\|_2^2 + \lambda_1 \left\| \boldsymbol{\alpha}_i \right\|_1 + \lambda_2 \Big\| \boldsymbol{\alpha}_i - \sum_{j \in NN(i)} p_{ij}\boldsymbol{\alpha}_j \Big\|_1 \quad (22)$$
where $\lambda_1$ and $\lambda_2$ are two regularization parameters. The second term is the sparsity constraint on the coefficients, while the last term enforces the nonlocal similarity of the sparse coefficients.
Furthermore, $\boldsymbol{\alpha}_i$ can be solved iteratively [47]. In the $l$-th iteration, we define $\boldsymbol{\kappa}^{(l)} = \sum_{j \in NN(i)} p_{ij}\boldsymbol{\alpha}_j^{(l)}$ and initialize $\boldsymbol{\kappa}^{(0)} = \mathbf{0}$. We first solve the single $\ell_1$ constraint model (21) to obtain $\boldsymbol{\alpha}_i^{(1)}$ ($i = 1, 2, \ldots, N$). Based on $\{\boldsymbol{\alpha}_i^{(1)}\}_{i=1}^{N}$, we find the similar sparse coefficients $\boldsymbol{\alpha}_j^{(1)}$ ($j \in NN(i)$). In the next sparse-coding iteration, $\boldsymbol{\alpha}_i^{(2)}$ is the solution of Equation (22) with the last term updated to $\big\| \boldsymbol{\alpha}_i^{(2)} - \sum_{j \in NN(i)} p_{ij}\boldsymbol{\alpha}_j^{(1)} \big\|_1$. This procedure is iterated until convergence; thus, Equation (22) becomes:
$$\boldsymbol{\alpha}_i^{(l)} = \arg\min_{\boldsymbol{\alpha}_i} \left\| \mathbf{y}_i - \bar{\mathbf{D}}\boldsymbol{\alpha}_i^{(l)} \right\|_2^2 + \lambda_1 \left\| \boldsymbol{\alpha}_i^{(l)} \right\|_1 + \lambda_2 \Big\| \boldsymbol{\alpha}_i^{(l)} - \sum_{j \in NN(i)} p_{ij}\boldsymbol{\alpha}_j^{(l-1)} \Big\|_1 \quad (23)$$
In the $l$-th iteration, $\boldsymbol{\kappa}^{(l-1)} = \sum_{j \in NN(i)} p_{ij}\boldsymbol{\alpha}_j^{(l-1)}$ is a constant, so Equation (23) can be rewritten as:
$$\boldsymbol{\alpha}_i^{(l)} = \arg\min_{\boldsymbol{\alpha}_i} \left\| \mathbf{y}_i - \bar{\mathbf{D}}\boldsymbol{\alpha}_i^{(l)} \right\|_2^2 + \lambda_1 \left\| \boldsymbol{\alpha}_i^{(l)} \right\|_1 + \lambda_2 \left\| \boldsymbol{\alpha}_i^{(l)} - \boldsymbol{\kappa}^{(l-1)} \right\|_1 \quad (24)$$
Equation (24) is a double $\ell_1$ regularized least-squares problem, which we solve by employing surrogate functions [48]. We introduce the following surrogate term:
$$\rho(\boldsymbol{\alpha}_i, \mathbf{a}) = C \left\| \boldsymbol{\alpha}_i - \mathbf{a} \right\|_2^2 - \left\| \bar{\mathbf{D}}\boldsymbol{\alpha}_i - \bar{\mathbf{D}}\mathbf{a} \right\|_2^2 \quad (25)$$
where the constant $C$ is chosen such that $\| \bar{\mathbf{D}}^T\bar{\mathbf{D}} \|_2 < C$, which makes $\rho(\boldsymbol{\alpha}_i, \mathbf{a})$ convex, and $\mathbf{a}$ denotes an auxiliary variable. We then define the following function:
$$f(\boldsymbol{\alpha}_i^{(l)}, \mathbf{a}) = \left\| \mathbf{y}_i - \bar{\mathbf{D}}\boldsymbol{\alpha}_i^{(l)} \right\|_2^2 + \lambda_1 \left\| \boldsymbol{\alpha}_i^{(l)} \right\|_1 + \lambda_2 \left\| \boldsymbol{\alpha}_i^{(l)} - \boldsymbol{\kappa}^{(l-1)} \right\|_1 + C \left\| \boldsymbol{\alpha}_i^{(l)} - \mathbf{a} \right\|_2^2 - \left\| \bar{\mathbf{D}}\boldsymbol{\alpha}_i^{(l)} - \bar{\mathbf{D}}\mathbf{a} \right\|_2^2 = C \left\| \boldsymbol{\alpha}_i^{(l)} - \boldsymbol{\tau}_i^{(l)} \right\|_2^2 + \lambda_1 \left\| \boldsymbol{\alpha}_i^{(l)} \right\|_1 + \lambda_2 \left\| \boldsymbol{\alpha}_i^{(l)} - \boldsymbol{\kappa}^{(l-1)} \right\|_1 + \mathrm{const} \quad (26)$$
where $\boldsymbol{\tau}_i^{(l)} = \frac{1}{C}\bar{\mathbf{D}}^T\!\left( \mathbf{y}_i - \bar{\mathbf{D}}\mathbf{a} \right) + \mathbf{a}$ and $\mathrm{const} = \left\| \mathbf{y}_i \right\|_2^2 + C \left\| \mathbf{a} \right\|_2^2 - \left\| \bar{\mathbf{D}}\mathbf{a} \right\|_2^2 - C \left\| \boldsymbol{\tau}_i^{(l)} \right\|_2^2$. The objective function in (26) can be simplified further, as follows:
$$f(\boldsymbol{\alpha}_i^{(l)}) = \left\| \boldsymbol{\alpha}_i^{(l)} - \boldsymbol{\tau}_i^{(l)} \right\|_2^2 + \mu_1 \left\| \boldsymbol{\alpha}_i^{(l)} \right\|_1 + \mu_2 \left\| \boldsymbol{\alpha}_i^{(l)} - \boldsymbol{\kappa}^{(l-1)} \right\|_1 \quad (27)$$
where $\mu_1 = \lambda_1 / C$ and $\mu_2 = \lambda_2 / C$ are regularization parameters. The scalar version of the above minimization problem is:
$$g(m) = (m - m_1)^2 + \mu_1 |m| + \mu_2 |m - m_2| \quad (28)$$
where $m$, $m_1$, and $m_2$ are the scalar components of $\boldsymbol{\alpha}_i^{(l)}$, $\boldsymbol{\tau}_i^{(l)}$, and $\boldsymbol{\kappa}^{(l-1)}$, respectively. Then, the solution to Equation (24) is given (elementwise) by:
$$\boldsymbol{\alpha}_i^{(l+1)} = \begin{cases} S_{\mu_1,\mu_2,\boldsymbol{\kappa}^{(l-1)}}\!\left(\boldsymbol{\tau}_i^{(l)}\right), & \boldsymbol{\kappa}^{(l-1)} \ge 0 \\ -S_{\mu_1,\mu_2,-\boldsymbol{\kappa}^{(l-1)}}\!\left(-\boldsymbol{\tau}_i^{(l)}\right), & \boldsymbol{\kappa}^{(l-1)} < 0 \end{cases} \quad (29)$$
The generalized shrinkage operator $S_{\mu_1,\mu_2,r_2}(r)$, for $r_2 \ge 0$, is defined by:
$$S_{\mu_1,\mu_2,r_2}(r) = \begin{cases} r + \frac{\mu_1+\mu_2}{2}, & r < -\frac{\mu_1+\mu_2}{2} \\ 0, & -\frac{\mu_1+\mu_2}{2} \le r \le \frac{\mu_1-\mu_2}{2} \\ r - \frac{\mu_1-\mu_2}{2}, & \frac{\mu_1-\mu_2}{2} < r < r_2 + \frac{\mu_1-\mu_2}{2} \\ r_2, & r_2 + \frac{\mu_1-\mu_2}{2} \le r \le r_2 + \frac{\mu_1+\mu_2}{2} \\ r - \frac{\mu_1+\mu_2}{2}, & r > r_2 + \frac{\mu_1+\mu_2}{2} \end{cases} \quad (30)$$
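Because the piecewise form of Equation (30) is easy to get wrong, it is worth checking it numerically against a brute-force minimization of the scalar objective $g(m)$ in Equation (28). The following sketch does exactly that; the grid search exists only for verification, and the specific parameter values are illustrative.

```python
import numpy as np

def gen_shrink(r, mu1, mu2, r2):
    """Generalized shrinkage operator S_{mu1,mu2,r2}(r) of Eq. (30), r2 >= 0:
    the closed-form minimizer of g(m) = (m - r)^2 + mu1|m| + mu2|m - r2|."""
    a, b = (mu1 + mu2) / 2.0, (mu1 - mu2) / 2.0
    if r < -a:
        return r + a
    if r <= b:
        return 0.0
    if r < r2 + b:
        return r - b
    if r <= r2 + a:
        return r2
    return r - a

# Brute-force check: minimize g(m) on a fine grid and compare
mu1, mu2, r2 = 0.4, 0.2, 1.0
grid = np.linspace(-3.0, 3.0, 600001)
for r in [-1.0, -0.2, 0.05, 0.5, 1.15, 2.0]:
    g = (grid - r) ** 2 + mu1 * np.abs(grid) + mu2 * np.abs(grid - r2)
    m_star = grid[np.argmin(g)]
    assert abs(m_star - gen_shrink(r, mu1, mu2, r2)) < 1e-3
```

The test values of $r$ cover all five branches of the operator: shrink toward zero from either side, snap to exactly 0, snap to exactly $r_2$, and the two linear tails.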
Algorithm 2: Double ℓ1 Regularized Sparse Coding
Input: the pixel set $\{\mathbf{y}_1, \ldots, \mathbf{y}_N\}$; the spectral dictionary $\mathbf{D}$; the transform matrix $\mathbf{P}$; the regularization parameters $\lambda_1$ and $\lambda_2$; the number of iterations $L = 5$.
for $i = 1$ to $N$ do
  Initialize $\boldsymbol{\alpha}_i^{(0)} = \mathbf{0}$;
  for $l = 1$ to $L$ do
    $\boldsymbol{\kappa}^{(l-1)} = \sum_{j \in NN(i)} p_{ij}\boldsymbol{\alpha}_j^{(l-1)}$;
    $\boldsymbol{\tau}_i^{(l)} = \frac{1}{C}(\mathbf{P}\mathbf{D})^T\!\left( \mathbf{y}_i - \mathbf{P}\mathbf{D}\mathbf{a} \right) + \mathbf{a}$;
    Update $\boldsymbol{\alpha}_i^{(l)}$ by Equation (29).
  end for
end for
Output: the sparse coefficients $\mathbf{A} = [\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_N]$.
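Putting Equations (24)–(30) together for a single pixel, the surrogate iteration can be sketched as follows. This is a simplified per-pixel reimplementation, not the authors' code: the neighbor term kappa is passed in precomputed, and the parameter values are illustrative defaults.

```python
import numpy as np

def gen_shrink(r, mu1, mu2, r2):
    # closed-form minimizer of (m - r)^2 + mu1|m| + mu2|m - r2|, for r2 >= 0
    a, b = (mu1 + mu2) / 2.0, (mu1 - mu2) / 2.0
    if r < -a:
        return r + a
    if r <= b:
        return 0.0
    if r < r2 + b:
        return r - b
    if r <= r2 + a:
        return r2
    return r - a

def double_l1_code(y, Dbar, kappa, lam1=0.015, lam2=8e-5, iters=100):
    """Surrogate-function iteration for Eq. (24): linearize at the current
    alpha to get tau (Eq. (26)), then apply the elementwise generalized
    shrinkage with the sign symmetry of Eq. (29)."""
    C = 1.01 * np.linalg.norm(Dbar.T @ Dbar, 2)   # C > ||Dbar^T Dbar||_2
    mu1, mu2 = lam1 / C, lam2 / C
    alpha = np.zeros(Dbar.shape[1])
    for _ in range(iters):
        tau = Dbar.T @ (y - Dbar @ alpha) / C + alpha
        alpha = np.array([
            gen_shrink(t, mu1, mu2, k) if k >= 0
            else -gen_shrink(-t, mu1, mu2, -k)
            for t, k in zip(tau, kappa)
        ])
    return alpha
```

Because the surrogate majorizes the true objective whenever $C > \|\bar{\mathbf{D}}^T\bar{\mathbf{D}}\|_2$, each iteration cannot increase the objective of Equation (24); with kappa set to zero, the routine reduces to an ordinary lasso solver with weight $\lambda_1 + \lambda_2$.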
Algorithm 3: HSI SR by Adaptive Dictionary Learning and Double ℓ1 Regularized Sparse Representation
Input: LR HSI $\mathbf{X}$; HR MSI $\mathbf{Y}$; the regularization parameters $\lambda_1$ and $\lambda_2$.
  (1) Learn the spectral dictionary $\mathbf{D}$ from $\mathbf{X}$ by using Algorithm 1;
  (2) Obtain the sparse representation $\mathbf{A}$ from $\mathbf{Y}$ and $\mathbf{D}$ by using Algorithm 2.
Output: the HR HSI $\mathbf{Z} = \mathbf{D}\mathbf{A}$.

3. Experimental Results and Analysis

In this section, we demonstrate the effectiveness of the proposed method on some popular datasets, using a series of experiments. Both qualitative and quantitative metrics are used to evaluate the performance.

3.1. Datasets and Experimental Setup

We performed validation experiments on the Cuprite and Pavia Center datasets, as shown in Figure 4. There are 105 spectral bands in Cuprite and 102 bands in Pavia Center. Each dataset was cropped to 480 × 480 pixels in spatial resolution. The real HSI of these two datasets were treated as ground truth and used to produce the simulated LR HSI and HR MSI. Specifically, the LR HSI, X, was generated by first applying a 9 × 9 Gaussian kernel of standard deviation 2 to the real HSI and then averaging pixels within an s × s window, where s is the scaling factor (e.g., s = 8, 16, 32). For each dataset, we directly chose the blue, green, red, and near-infrared channels of the ground truth (bands 7, 15, 25, and 42 in Cuprite, and bands 13, 33, 58, and 101 in Pavia Center, respectively) to simulate the HR MSI, Y. To facilitate numerical calculation, the intensities of each band of the HSI were normalized to [0, 255].
The proposed method is compared with five representative algorithms: SASFM [32], G-SOMP+ [34], SSR [40], NSSR [38], and SSCSR [39]. To ensure the reliability of the results, each super-resolution method is repeated 20 times on the test datasets.
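The degradation protocol described above (9 × 9 Gaussian blur with standard deviation 2, followed by s × s block averaging) can be sketched in pure NumPy. This is a minimal reimplementation of the stated protocol, not the authors' code; the separable-convolution helper and reflective padding are our own assumptions.

```python
import numpy as np

def gaussian_kernel1d(radius=4, sigma=2.0):
    # 1-D Gaussian; radius=4 gives the 9-tap (9x9 separable) kernel
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_band(band, k):
    # separable 2-D convolution with reflective padding
    r = len(k) // 2
    padded = np.pad(band, r, mode="reflect")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, tmp)

def simulate_lr_hsi(hsi, s, radius=4, sigma=2.0):
    """Blur each band of an (B, H, W) cube with a 9x9 Gaussian (sigma=2),
    then average non-overlapping s x s windows to downsample by factor s."""
    k = gaussian_kernel1d(radius, sigma)
    B, H, W = hsi.shape
    blurred = np.stack([blur_band(hsi[b], k) for b in range(B)])
    return blurred.reshape(B, H // s, s, W // s, s).mean(axis=(2, 4))
```

Because the kernel is normalized and the block average preserves the mean, a spatially constant cube passes through the pipeline unchanged, which is a convenient sanity check.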

3.2. Quality Metrics

We adopt three quantitative measures for the evaluation: relative dimensionless global error in synthesis (ERGAS) [49], root mean square error (RMSE), and spectral angle mapper (SAM) [50].
The RMSE measures the deviation between the reference HR HSI $\mathbf{R}$ and the reconstructed HR HSI $\mathbf{Z}$:
$$\mathrm{RMSE} = \sqrt{\frac{\left\| \mathbf{R} - \mathbf{Z} \right\|_F^2}{BN}}$$
The ERGAS metric [49] measures the average relative spectral deviation per band, as defined below:
$$\mathrm{ERGAS} = \frac{100}{s} \sqrt{\frac{1}{B} \sum_{b=1}^{B} \left( \frac{\mathrm{RMSE}(\mathbf{r}_b, \mathbf{z}_b)}{\mu_{\mathbf{r}_b}} \right)^2}$$
where $s$ is the scaling factor, $\mathbf{r}_b$ and $\mathbf{z}_b$ denote the $b$-th band of $\mathbf{R}$ and $\mathbf{Z}$, respectively, and $\mu_{\mathbf{r}_b}$ is the mean of $\mathbf{r}_b$.
Finally, we calculate the SAM [50], defined as the angle between the reference and reconstructed spectral vectors, $\mathbf{r}_i$ and $\mathbf{z}_i$, averaged over all pixels:
$$\mathrm{SAM} = \frac{1}{N} \sum_{i=1}^{N} \arccos \frac{\mathbf{r}_i^T \mathbf{z}_i}{\left( \mathbf{r}_i^T \mathbf{r}_i \right)^{1/2} \left( \mathbf{z}_i^T \mathbf{z}_i \right)^{1/2}}$$
By these definitions, smaller RMSE, ERGAS, and SAM values indicate better super-resolution performance.
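The three metrics are a few lines of NumPy each. The snippet below is a straightforward transcription of the definitions above, assuming R and Z are B × N matrices whose columns are pixel spectra; the function names are our own.

```python
import numpy as np

def rmse(R, Z):
    # root mean square error over all B x N entries
    return np.sqrt(np.mean((R - Z) ** 2))

def ergas(R, Z, s):
    # relative dimensionless global error in synthesis, scaling factor s
    band_rmse = np.sqrt(np.mean((R - Z) ** 2, axis=1))
    band_mean = R.mean(axis=1)
    return 100.0 / s * np.sqrt(np.mean((band_rmse / band_mean) ** 2))

def sam_degrees(R, Z, eps=1e-12):
    # mean spectral angle (in degrees) between corresponding pixel spectra
    num = np.sum(R * Z, axis=0)
    den = np.linalg.norm(R, axis=0) * np.linalg.norm(Z, axis=0) + eps
    return np.degrees(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))
```

The `clip` in the SAM computation guards against floating-point round-off pushing the cosine marginally outside [−1, 1], which would otherwise make `arccos` return NaN for identical spectra.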

3.3. Performance Comparison of Different Methods

Table 1 shows the average RMSE, ERGAS, and SAM results of the six compared methods on the two datasets under different downsampling factors. Our approach outperforms the others on all three metrics, which indicates that the adaptively learned dictionary can exploit the underlying structures in the HSI and that the double ℓ1 regularized sparse representation is superior to the competing formulations. These numerical results validate the power of the proposed model for HSI super-resolution.
To facilitate visual comparison, Figure 5c–h shows the HR HSI of band 100 reconstructed by the different competing approaches, with a scaling factor of s = 8, on the Cuprite dataset. All compared methods generate the spatial structures reasonably well. To show the differences between methods more intuitively, Figure 5i–n presents the absolute differences in pixel values between each reconstructed image and the reference HR image. The mountain and river regions are not well preserved by SASFM, resulting in large errors. The NSSR and SSCSR methods deliver better results but still cannot recover the missing details in the mountain region. By contrast, our method recovers more details and achieves the smallest reconstruction errors, as shown in Figure 5n.
To illustrate the consistency of the overall performance, we present the related results of the same band (band 100) with another scaling factor ( s = 32 ) on the Cuprite dataset in Figure 6. We can draw the same conclusion that the six compared methods can significantly enhance the spatial resolution of the input LR HSI. However, it should be noted that our method achieves the best result.
To verify the robustness of the proposed method, Figure 7 and Figure 8 present the SR results of the different methods on the Pavia Center dataset, which has more varied content in the captured area. When the scaling factor is small (s = 8), the building regions are not well reconstructed by the SASFM method, as shown in Figure 7i. When the scaling factor increases (s = 32), larger errors occur in the SASFM reconstruction (the edges of the buildings and the river are clearly visible in Figure 8i). This is because SASFM ignores the structural similarity in the MSI during the SR process. Although the G-SOMP+ method makes use of structural similarity, it does so only within a fixed window and hence cannot exploit the spatial information sufficiently; accordingly, the outlines of the buildings and the river are still visible in Figure 7j and Figure 8j. Figure 7l and Figure 8l demonstrate that the nonnegative structured sparse-representation model efficiently preserves many details and edges. From Figure 7m,n and Figure 8m,n, we observe that the proposed method slightly outperforms the SSCSR method [39] in recovering the details of the original HSI.

3.4. Effects of the Adaptive Size Dictionary

(1) In Figure 9, we plot the values of the objective function (10) against the number of iterations, which shows that the proposed algorithm terminates in a finite number of steps; the optimal value of the objective function is reached after 15 iterations for each dataset. In Algorithm 1, a random dictionary is used for initialization, so further experiments were conducted to evaluate the sensitivity of our learning algorithm to different starting points. For each dataset, we randomly generated 200 different initial dictionaries and trained a dictionary from each. The statistics of the final learned dictionary sizes are listed in Table 2. The variance of the sizes is very small, and the minimum size is very close to the maximum size, which suggests that different initial dictionaries have little effect on the final training results.
(2) To evaluate the adaptivity of our learning strategy, the traditional dictionary-learning method (i.e., Equation (7)) and our proposed method were applied to the test datasets (see Figure 4). With Equation (7), the dictionary size is fixed at 300, whereas our adaptive strategy finds that 38 and 43 reflectance vectors are sufficient to represent the structural variations of the Cuprite and Pavia Center datasets, respectively. The learned dictionary sizes thus vary significantly between the two learning methods, demonstrating that our estimated sizes are adaptive and suitable. For quantitative evaluation, the average sparse reconstruction error, $\frac{1}{n}\sum_{i=1}^{n} \left\| \mathbf{x}_i - \mathbf{D}\boldsymbol{\beta}_i \right\|_2^2$, is reported for each test dataset in Table 2. Our learned dictionaries reduce redundancy effectively and avoid the interference of nonspecific errors, which demonstrates the efficiency of the proposed optimization algorithms.
(3) K-SVD [51] is a popular method for learning fixed-size dictionaries, so we compare our adaptive dictionary-learning method against it. For a reliable comparison, we ran 50 K-SVD iterations, using orthogonal matching pursuit (OMP) [52] for sparse coding with sparsity T = 6 per trained atom. The number of K-SVD atoms was set to 300, which was the best or nearly the best atom number over 20 experiments. Although the images produced by K-SVD have good visual quality, K-SVD is less able to sharpen the details in the mountain area. The adaptive dictionary, by contrast, effectively reconstructs most spatial details with less obvious spectral distortions, as shown in Figure 10e,f. In other words, a fixed-size dictionary cannot effectively reflect the complex structures of HSI. The numerical results on the test datasets are reported in Table 3.

3.5. Parameter Analysis

The regularization parameters λ1 and λ2 balance the contributions of the regularization terms and need to be tuned carefully in Equation (22). Thus, many experiments were performed to demonstrate the effectiveness of the proposed regularizations. Figure 11 and Figure 12 present the RMSE curves for different parameter values and scaling factors (s = 8, 16, 32) on the Cuprite and Pavia Center datasets, respectively. With smaller values of λ1 (between 0.005 and 0.01), the proposed method generates unsatisfactory results. As λ1 increases, better results are achieved, as shown in the top rows of Figure 11 and Figure 12; however, the performance declines again as λ1 grows beyond 0.02. When λ1 lies between 0.01 and 0.02, the RMSE measurements are stable. From the bottom rows of Figure 11 and Figure 12, the optimal value of λ2 is 8 × 10⁻⁵, and the RMSE clearly fluctuates when λ2 departs significantly from this value. These experimental results demonstrate that the proposed combination achieves promising results. Accordingly, to achieve optimal or nearly optimal performance, we recommend setting λ1 ∈ [0.01, 0.02] and λ2 = 8 × 10⁻⁵.

3.6. Discussion on Computational Complexity

The proposed SR method incurs its major costs in two parts: spectral dictionary learning and sparse representation. Each step of Algorithm 1 costs $O(K^2 n + 2KnB)$, and the doubling of the parameter $\delta$ requires $\log_2 \delta$ steps to reach its final value, so the overall complexity of Algorithm 1 is $O(\log_2 \delta \, (K^2 n + 2KnB))$. In Algorithm 2, updating the sparse coefficients costs $O(LK)$ per pixel, so $N$ pixels require approximately $O(NLK)$.
The CPU times of the different SR methods are presented in Table 4. All the algorithms were implemented in MATLAB on an Intel Core i7-6820 2.7-GHz CPU. The SASFM method has the fastest running time thanks to the sparse coding technique of OMP [52]. The SSR and SSCSR methods have long processing times due to their clustering-based sparse representation frameworks. Our proposed method runs rather slowly, requiring approximately 5 min. In the future, we expect to speed up the proposed algorithm by using a graphics processing unit.

4. Discussion

Compared to the other methods, the proposed method achieves superior SR performance, for two main reasons. First, the adaptive dictionary represents different variations more reasonably than traditional dictionary learning methods. Second, the nonlocal similarities of the sparse coefficients are exploited to improve the HSI SR quality.
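The second ingredient penalizes the nonlocal residual α_i − Σ_{j∈NN(i)} p_ij α_j of each coefficient vector against a weighted combination of its most similar neighbors (cf. Figure 3). A minimal NumPy sketch follows, assuming Gaussian-kernel weights over the k most similar coefficient columns; the paper's exact definition of p_ij may differ.

```python
import numpy as np

def nonlocal_residual(A, k=5):
    """Compute alpha_i - sum_{j in NN(i)} p_ij * alpha_j for every pixel i.

    A is the (atoms x pixels) sparse-coefficient matrix; p_ij are
    Gaussian-kernel weights over the k most similar coefficient columns
    (an assumed, nonlocal-means-style choice of weights).
    """
    _, N = A.shape
    R = np.empty_like(A)
    sq = np.sum(A ** 2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (A.T @ A)  # pairwise sq. distances
    np.fill_diagonal(d2, np.inf)                      # exclude the pixel itself
    for i in range(N):
        nn = np.argsort(d2[i])[:k]                    # k nearest neighbors
        dn = d2[i, nn]
        w = np.exp(-dn / (dn.mean() + 1e-12))         # adaptive bandwidth
        w /= w.sum()                                  # weights sum to one
        R[:, i] = A[:, i] - A[:, nn] @ w
    return R

A = np.random.default_rng(1).standard_normal((20, 50))
print(nonlocal_residual(A).shape)  # -> (20, 50)
```

When nearby coefficient vectors are truly similar, this residual is close to zero, which is exactly the structure an ℓ1 penalty on it promotes.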
The parameter analysis shows that the RMSE stays relatively stable, without notable fluctuation, when the parameters are set to the recommended values (see Figure 11 and Figure 12); in other words, the proposed method is robust. From the comparison of execution times, we observe that the proposed model is not computationally efficient (see Table 4). However, our method achieves better performance than the other methods on the two hyperspectral datasets. In light of this, it will be interesting to design an architecture with a multicore CPU [53] to optimize the execution time.

5. Conclusions

This paper proposed a new and effective method for HSI super-resolution based on sparse representation. The proposed method has two distinctive features. First, an adaptive learning strategy is used to learn a spectral dictionary that reasonably represents different content and features. Second, double ℓ1 regularized constraints are employed to characterize the similarities of the sparse coefficients and thereby improve the HSI SR quality. Extensive experimental results on two popular HSI datasets validate the superior performance of the proposed method over other competitive methods, and the parameter experiments demonstrate its robustness.
In the future, we plan to combine the tensor model [54] and the shape-adaptive technique [55,56] to explore the spatial–spectral information adaptively and sufficiently. Deep learning approaches have recently attracted great attention in many fields [57,58,59,60,61,62,63,64]; designing a deep architecture to further improve HSI SR performance will be another task.

Author Contributions

Experiments and writing, S.T.; supervision, Y.X.; review and editing, L.S. and L.H.

Funding

This research was supported in part by the Fundamental Research Funds for the Central Universities (Grant No. LGZD201702 and LGYB201807), the Natural Science Foundation of Jiangsu Province (Grant No. BK20171074 and BK20150792), and the National Natural Science Foundation of China (Grant No. 61702269 and 61971233).

Acknowledgments

The authors would like to thank the Assistant Editor who handled our paper and the anonymous reviewers for providing helpful comments that significantly improved the technical quality and presentation of our paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sun, L.; Zhan, T.; Wu, Z.; Xiao, L.; Jeon, B. Hyperspectral mixed denoising via spectral difference-induced total variation and low-rank approximation. Remote Sens. 2018, 10, 1956. [Google Scholar] [CrossRef]
  2. Sun, L.; Jeon, B.; Bushra, N.S.; Zheng, Y.H.; Wu, Z.B.; Xiao, L. Fast superpixel based subspace low rank learning method for hyperspectral denoising. IEEE Access. 2018, 6, 12031–12043. [Google Scholar] [CrossRef]
  3. Gao, H.; Yang, Y.; Li, C.; Zhou, H.; Qu, X. Joint Alternate Small Convolution and Feature Reuse for Hyperspectral Image Classification. ISPRS Int. J. Geo Inf. 2018, 7, 349. [Google Scholar] [CrossRef]
  4. Sun, L.; Ma, C.; Chen, Y.; Zheng, Y.; Shim, H.J.; Wu, Z.; Jeon, B. Low Rank Component Induced Spatial-spectral Kernel Method for Hyperspectral Image Classification; IEEE: New York, NY, USA, 2019; pp. 1–14. [Google Scholar]
  5. Van Nguyen, H.; Banerjee, A.; Chellappa, R. Tracking via Object Reflectance Using a Hyperspectral Video Camera. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition—Workshops, San Francisco, CA, USA, 13–18 June 2010; pp. 44–51. [Google Scholar]
  6. Uzair, M.; Mahmood, A.; Mian, A. Hyperspectral Face Recognition using 3D-DCT and Partial Least Squares. In Proceedings of the British Machine Vision Conference, Bristol, UK, 9–13 September 2013; p. 57. [Google Scholar]
  7. Yang, J.; Jiang, Z.; Hao, S.; Zhang, H. Higher Order Support Vector Random Fields for Hyperspectral Image Classification. ISPRS Int. J. Geo Inf. 2018, 7, 19. [Google Scholar] [CrossRef]
  8. Wang, Y.; Chen, X.; Han, Z.; He, S. Hyperspectral Image Super-Resolution via Nonlocal Low-Rank Tensor Approximation and Total Variation Regularization. Remote. Sens. 2017, 9, 1286. [Google Scholar] [CrossRef]
  9. Tang, S.; Xiao, L.; Huang, W.; Liu, P.; Wu, H. Pan-sharpening using 2D CCA. Remote Sens. Lett. 2015, 6, 341–350. [Google Scholar] [CrossRef]
  10. Tu, T.-M.; Huang, P.; Hung, C.-L.; Chang, C.-P. A Fast Intensity–Hue–Saturation Fusion Technique With Spectral Adjustment for IKONOS Imagery. IEEE Geosci. Remote. Sens. Lett. 2004, 1, 309–312. [Google Scholar] [CrossRef]
  11. Liu, P.; Xiao, L. A Novel Generalized Intensity-Hue-Saturation (GIHS) Based Pan-Sharpening Method With Variational Hessian Transferring. IEEE Access 2018, 6, 46751–46761. [Google Scholar] [CrossRef]
  12. El-Mezouar, M.C.; Kpalma, K.; Taleb, N.; Ronsin, J. A pan-sharpening based on the non-subsampled contourlet transform: application to WorldView-2 imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1806–1815. [Google Scholar] [CrossRef]
  13. Garzelli, A.; Aiazzi, B.; Alparone, L.; Lolli, S.; Vivone, G. Multispectral pansharpening with radiative transfer-based detail-injection modeling for preserving changes in vegetation cover by Andrea. Remote Sens. 2018, 10, 1308. [Google Scholar] [CrossRef]
  14. Li, S.; Yang, B. A new pan-sharpening method using a compressed sensing technique. IEEE Trans. Geosci. Remote Sens. 2010, 49, 738–746. [Google Scholar] [CrossRef]
  15. Liu, P.; Xiao, L.; Li, T. A Variational Pan-Sharpening Method Based on Spatial Fractional-Order Geometry and Spectral–Spatial Low-Rank Priors. IEEE Trans. Geosci. Remote. Sens. 2018, 56, 1788–1802. [Google Scholar] [CrossRef]
  16. Tang, S. Pansharpening via sparse regression. Opt. Eng. 2017, 56, 1. [Google Scholar] [CrossRef]
  17. Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and Multispectral Data Fusion: A comparative review of the recent literature. IEEE Geosci. Remote. Sens. Mag. 2017, 5, 29–56. [Google Scholar] [CrossRef]
  18. Veganzones, M.A.; Simoes, M.; Licciardi, G.; Yokoya, N.; Bioucas-Dias, J.M.; Chanussot, J. Hyperspectral super-resolution of locally low rank images from complementary multisource data. IEEE Trans. Image Proc. 2015, 25, 274–288. [Google Scholar] [CrossRef]
  19. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion. IEEE Geosci. Remote Sens. 2011, 50, 528–537. [Google Scholar] [CrossRef]
  20. Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791. [Google Scholar] [CrossRef]
  21. Lee, D.D.; Seung, H.S. Algorithms for Non-Negative Matrix Factorization. In Advances in Neural Information Processing Systems; Michael, I.J., Yann, L.C., Sara, A.S., Eds.; MIT Press: Cambridge, MA, USA, 2001; pp. 556–562. [Google Scholar]
  22. Bendoumi, M.A.; He, M.; Mei, S. Hyperspectral Image Resolution Enhancement Using High-Resolution Multispectral Image Based on Spectral Unmixing. IEEE Trans. Geosci. Remote. Sens. 2014, 52, 6574–6583. [Google Scholar] [CrossRef]
  23. Keshava, N.; Mustard, J.F. Spectral unmixing. IEEE Signal Process Mag. 2002, 19, 44–57. [Google Scholar]
  24. Iordache, M.-D.; Bioucas-Dias, J.M.; Plaza, A. Sparse Unmixing of Hyperspectral Data. IEEE Trans. Geosci. Remote. Sens. 2011, 49, 2014–2039. [Google Scholar] [CrossRef]
  25. Kawakami, R.; Matsushita, Y.; Wright, J.; Ben-Ezra, M.; Tai, Y.-W.; Ikeuchi, K. High-Resolution Hyperspectral Imaging via Matrix Factorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2329–2336. [Google Scholar]
  26. Li, S.; Dian, R.; Fang, L.; Bioucas-Dias, J.M. Fusing Hyperspectral and Multispectral Images via Coupled Sparse Tensor Factorization. IEEE Trans. Image Process. 2018, 27, 4118–4130. [Google Scholar] [CrossRef] [PubMed]
  27. Dian, R.; Fang, L.; Li, S. Hyperspectral Image Super-Resolution via Non-local Sparse Tensor Factorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017 ; pp. 3862–3871. [Google Scholar]
  28. Wei, Q.; Bioucas-Dias, J.; Dobigeon, N.; Tourneret, J.Y. Hyperspectral and multispectral image fusion based on a sparse representation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3658–3668. [Google Scholar] [CrossRef]
  29. Akhtar, N.; Shafait, F.; Mian, A. Bayesian Sparse Representation for Hyperspectral Image Super Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 8–12 June 2015; pp. 3631–3640. [Google Scholar]
  30. Yi, C.; Zhao, Y.Q.; Chan, J.C.W. Hyperspectral image super-resolution based on spatial and spectral correlation fusion. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4165–4177. [Google Scholar] [CrossRef]
  31. Zhang, L.; Wei, W.; Bai, C.; Gao, Y.; Zhang, Y. Exploiting Clustering Manifold Structure for Hyperspectral Imagery Super-Resolution. IEEE Trans. Image Process. 2018, 27, 5969–5982. [Google Scholar] [CrossRef] [PubMed]
  32. Huang, B.; Song, H.; Cui, H.; Peng, J.; Xu, Z. Spatial and spectral image fusion using sparse matrix factorization. IEEE Trans. Geosci. Remote Sens. 2013, 52, 1693–1704. [Google Scholar] [CrossRef]
  33. Grohnfeldt, C.; Zhu, X.X.; Bamler, R. Jointly Sparse Fusion of Hyperspectral and Multispectral Imagery. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium, Melbourne, Australia, 21–26 July 2013; pp. 4090–4093. [Google Scholar]
  34. Akhtar, N.; Shafait, F.; Mian, A. Sparse Spatio-Spectral Representation for Hyper-Spectral Image Super-Resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 63–78. [Google Scholar]
  35. Liang, J.; Zhang, Y.; Mei, S. Hyperspectral and Multispectral Image Fusion using Dual-Source Localized Dictionary Pair. In Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems, Xiamen, Fujian, China, 6–9 November 2017; pp. 261–264. [Google Scholar]
  36. Lanaras, C.; Baltsavias, E.; Schindler, K.; Charis, L.; Emmanuel, B.; Konrad, S. Hyperspectral Super-Resolution by Coupled Spectral Unmixing. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2015; pp. 3586–3594. [Google Scholar]
  37. Lanaras, C.; Baltsavias, E.; Schindler, K. Hyperspectral Super-Resolution with Spectral Unmixing Constraints. Remote. Sens. 2017, 9, 1196. [Google Scholar] [CrossRef]
  38. Dong, W.; Fu, F.; Shi, G.; Cao, X.; Wu, J.; Li, G.; Li, X. Hyperspectral Image Super-Resolution via Non-Negative Structured Sparse Representation. IEEE Trans. Image Process. 2016, 25, 2337–2352. [Google Scholar] [CrossRef]
  39. Han, X.-H.; Shi, B.; Zheng, Y. Self-Similarity Constrained Sparse Representation for Hyperspectral Image Super-Resolution. IEEE Trans. Image Process. 2018, 27, 5625–5637. [Google Scholar] [CrossRef]
  40. Fang, L.Y.; Zhuo, H.J.; Li, S.T. Super-resolution of hyperspectral image via superpixel-based sparse representation. Neurocomputing 2018, 273, 171–177. [Google Scholar] [CrossRef]
  41. Buades, A.; Coll, B.; Morel, J.-M. A Non-Local Algorithm for Image Denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 05), San Diego, CA, USA, 20–25 June 2005; 2, pp. 60–65. [Google Scholar]
  42. Glasner, D.; Bagon, S.; Irani, M. Super-resolution from a single image. In Proceedings of the IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 349–356. [Google Scholar]
  43. Cotter, S.; Rao, B.; Engan, K.; Kreutz-Delgado, K. Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans. Signal Process. 2005, 53, 2477–2488. [Google Scholar] [CrossRef]
  44. Lu, C.; Shi, J.; Jia, J. Scale adaptive dictionary learning. IEEE Trans. Image Process. 2013, 23, 837–847. [Google Scholar] [CrossRef] [PubMed]
  45. Aharon, M.; Elad, M. Sparse and redundant modeling of image content using an image-signature-dictionary. SIAM J. Imaging Sci. 2008, 1, 228–247. [Google Scholar] [CrossRef] [Green Version]
  46. Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef] [Green Version]
  47. Tang, S.; Zhou, N. Local Similarity Regularized Sparse Representation for Hyperspectral Image Super-Resolution. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Valencia, Spain, 22–27 July 2018; pp. 5120–5123. [Google Scholar]
  48. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef] [Green Version]
  49. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699. [Google Scholar]
  50. Alparone, L.; Wald, L.; Chanussot, J.; Thomas, C.; Gamba, P.; Bruce, L. Comparison of Pansharpening Algorithms: Outcome of the 2006 GRS-S Data-Fusion Contest. IEEE Trans. Geosci. Remote. Sens. 2007, 45, 3012–3021. [Google Scholar] [CrossRef] [Green Version]
  51. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  52. Pati, Y.C.; Rezaiifar, R.; Krishnaprasad, P.S. Orthogonal Matching Pursuit: Recursive Function Approximation with Applications to Wavelet Decomposition. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–3 November 1993; pp. 40–44. [Google Scholar]
  53. Jiang, Y.; Zhao, M.; Hu, C.; He, L.; Bai, H.; Wang, J. A parallel FP-growth algorithm on World Ocean Atlas data with multi-core CPU. J. Supercomput. 2019, 75, 732–745. [Google Scholar] [CrossRef]
  54. Kolda, T.G.; Bader, B.W. Tensor Decompositions and Applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  55. Foi, A.; Katkovnik, V.; Egiazarian, K. Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images. IEEE Trans. Image Process. 2007, 16, 1395–1411. [Google Scholar] [CrossRef]
  56. Müller, H.-G.; Fan, J.; Gijbels, I. Local Polynomial Modeling and Its Applications. J. Am. Stat. Assoc. 1998, 93, 835. [Google Scholar] [CrossRef]
  57. Tu, Y.; Lin, Y.; Wang, J.; Kim, J.U. Semi-supervised learning with generative adversarial networks on digital signal modulation classification. Comput. Mater. Contin. 2018, 55, 243–254. [Google Scholar]
  58. Meng, R.; Rice, S.G.; Wang, J.; Sun, X. A fusion steganographic algorithm based on faster R-CNN. Comput. Mater. Contin. 2018, 55, 1–16. [Google Scholar]
  59. Long, M.; Zeng, Y. Detecting Iris Liveness with Batch Normalized Convolutional Neural Network. Comput. Mater. Contin. 2019, 58, 493–504. [Google Scholar] [CrossRef]
  60. He, S.; Li, Z.; Tang, Y.; Liao, Z.; Wang, J.; Kim, H.J. Parameters compressing in deep learning. Comput. Mater. Contin. 2019, 62, 1–16. [Google Scholar]
  61. Song, Y.; Yang, G.; Xie, H.; Zhang, D.; Xingming, S. Residual domain dictionary learning for compressed sensing video recovery. Multimed. Tools Appl. 2017, 76, 10083–10096. [Google Scholar] [CrossRef]
  62. Zeng, D.; Dai, Y.; Li, F.; Wang, J.; Sangaiah, A.K. Aspect based sentiment analysis by a linguistically regularized CNN with gated mechanism. J. Intell. Fuzzy Syst. 2019, 36, 3971–3980. [Google Scholar] [CrossRef]
  63. Zhang, J.; Jin, X.; Sun, J.; Wang, J.; Sangaiah, A.K. Spatial and semantic convolutional features for robust visual object tracking. Multimedia Tools Appl. 2018, 1, 1–21. [Google Scholar] [CrossRef]
  64. Zhou, S.; Ke, M.; Luo, P. Multi-camera transfer GAN for person re-identification. J. Vis. Commun. Image Represent. 2019, 59, 393–400. [Google Scholar] [CrossRef]
Figure 1. Hyperspectral image (HSI) super-resolution (SR) is generated by fusing a low-resolution (LR) HSI with a high-resolution (HR) multispectral image (MSI).
Figure 2. The flowchart of the proposed method.
Figure 3. The distribution of α_i − Σ_{j∈NN(i)} p_ij α_j corresponding to the 23rd (a) and 57th (b) atoms in the dictionary.
Figure 4. The data cubes: (a) Cuprite and (b) Pavia Center.
Figure 5. SR results of the 100th band of the Cuprite dataset with a scaling factor of s = 8 . The second row presents the reconstructed images achieved by the different methods. The third row shows the corresponding error images. (a) LR image, (b) original HR HSI image, and SR results by different methods: (c) SASFM, (d) G-SOMP+, (e) SSR, (f) NNSR, (g) SSCSR, (h) Proposed; corresponding error images: (i) SASFM, (j) G-SOMP+, (k) SSR, (l) NNSR, (m) SSCSR, (n) Proposed.
Figure 6. SR results of the 100th band of the Cuprite dataset with a scaling factor of s = 32 . The second row presents the reconstructed images achieved by the different methods. The third row shows the corresponding error images. (a) LR image, (b) original HR HSI image, and SR results by different methods: (c) SASFM, (d) G-SOMP+, (e) SSR, (f) NNSR, (g) SSCSR, (h) Proposed; corresponding error images: (i) SASFM, (j) G-SOMP+, (k) SSR, (l) NNSR, (m) SSCSR, (n) Proposed.
Figure 7. SR results of the 48th band of the Pavia Center dataset with a scaling factor of s = 8 . The second row presents the reconstructed images achieved by the different methods. The third row shows the corresponding error images. (a) LR image, (b) original HR HSI image, SR results by different methods: (c) SASFM, (d) G-SOMP+, (e) SSR, (f) NNSR, (g) SSCSR, (h) Proposed; corresponding error images: (i) SASFM, (j) G-SOMP+, (k) SSR, (l) NNSR, (m) SSCSR, (n) Proposed.
Figure 8. SR results of the 94th band of the Pavia Center dataset with scaling factor of s = 32 . The second row presents the reconstructed images achieved by the different methods. The third row shows the corresponding error images. (a) LR image, (b) original HR HSI image, SR results by different methods: (c) SASFM, (d) G-SOMP+, (e) SSR, (f) NNSR, (g) SSCSR, (h) Proposed; corresponding error images: (i) SASFM, (j) G-SOMP+, (k) SSR, (l) NNSR, (m) SSCSR, (n) Proposed.
Figure 9. Value of the objective function evaluated by the proposed learning method with a scaling factor of 32 on different datasets: (a) Cuprite and (b) Pavia Center.
Figure 10. Images of the 70th band of the Cuprite dataset. (a) LR image; (b) original HR image; (c) and (d) are the SR results by K-SVD and adaptive dictionary learning method, respectively; (e) and (f) are the corresponding error images.
Figure 11. RMSE curves of different parameters and scaling factors on the Cuprite dataset. The top and bottom rows show the RMSE results with different values of λ 1 and λ 2 , respectively. From left to right, RMSE variations with different scaling factors: (a,d) s = 8 ; (b,e) s = 16 ; and (c,f) s = 32 .
Figure 12. RMSE curves of different parameters and scaling factors on the Pavia Center dataset. The top and bottom rows show the RMSE results with different values of λ 1 and λ 2 , respectively. From left to right, RMSE variations with different scaling factors: (a,d) s = 8 ; (b,e) s = 16 ; and (c,f) s = 32 .
Table 1. Quantitative measures by different methods on Cuprite and Pavia Center.
| Downsampling Factor | Method | RMSE (Cuprite) | ERGAS (Cuprite) | SAM (Cuprite) | RMSE (Pavia Center) | ERGAS (Pavia Center) | SAM (Pavia Center) |
|---|---|---|---|---|---|---|---|
| s = 8 | SASFM | 1.0065 | 0.5987 | 2.1624 | 1.8734 | 0.7003 | 2.3561 |
| s = 8 | G-SOMP+ | 0.8410 | 0.5683 | 1.8999 | 1.5552 | 0.5826 | 1.9708 |
| s = 8 | SSR | 0.7627 | 0.5511 | 1.8239 | 1.2390 | 0.5747 | 1.9089 |
| s = 8 | NNSR | 0.6373 | 0.4562 | 1.6120 | 1.1537 | 0.5521 | 1.8295 |
| s = 8 | SSCSR | 0.5663 | 0.3318 | 1.2278 | 0.0990 | 0.5159 | 1.7566 |
| s = 8 | Proposed | 0.4852 | 0.2961 | 0.9088 | 1.0574 | 0.4975 | 1.6388 |
| s = 16 | SASFM | 1.1818 | 0.3136 | 1.9852 | 2.1621 | 0.3519 | 2.4395 |
| s = 16 | G-SOMP+ | 0.9109 | 0.2796 | 1.7674 | 1.7861 | 0.3009 | 2.1477 |
| s = 16 | SSR | 0.8567 | 0.2788 | 1.7585 | 1.3697 | 0.2978 | 2.1344 |
| s = 16 | NNSR | 0.7629 | 0.2005 | 1.5662 | 1.2790 | 0.2977 | 1.9591 |
| s = 16 | SSCSR | 0.6937 | 0.1811 | 1.3039 | 1.2190 | 0.2891 | 1.8889 |
| s = 16 | Proposed | 0.5474 | 0.1695 | 1.0157 | 1.1904 | 0.2837 | 1.8650 |
| s = 32 | SASFM | 1.4076 | 0.1629 | 2.0011 | 4.2323 | 0.3002 | 4.0387 |
| s = 32 | G-SOMP+ | 1.1267 | 0.1436 | 1.8117 | 3.8084 | 0.2511 | 3.3054 |
| s = 32 | SSR | 0.9845 | 0.1397 | 1.7546 | 3.4486 | 0.2332 | 2.9203 |
| s = 32 | NNSR | 0.8393 | 0.1128 | 1.4658 | 2.8099 | 0.2008 | 2.5580 |
| s = 32 | SSCSR | 0.7654 | 0.1095 | 1.3267 | 2.2790 | 1.1744 | 2.3016 |
| s = 32 | Proposed | 0.6826 | 0.1015 | 1.1990 | 2.1847 | 0.1575 | 2.0664 |
Table 2. Average sparse reconstruction errors by different learning methods on Cuprite and Pavia Center.
| Downsampling Factor | Traditional (Cuprite) | Adaptive (Cuprite) | Traditional (Pavia Center) | Adaptive (Pavia Center) |
|---|---|---|---|---|
| s = 8 | 11.1233 | 2.1488 | 19.0409 | 6.3434 |
| s = 16 | 2.1488 | 1.9752 | 15.5154 | 2.9938 |
| s = 32 | 0.6115 | 0.5165 | 10.3444 | 0.7820 |
Table 3. Super-resolution results by K-SVD and adaptive dictionary learning on the Cuprite (the top values) and Pavia Center (the bottom values) datasets.
| Method | RMSE (Cuprite / Pavia Center) | ERGAS (Cuprite / Pavia Center) | SAM (Cuprite / Pavia Center) |
|---|---|---|---|
| K-SVD | 1.0451 / 3.2085 | 0.1322 / 0.2288 | 1.6000 / 3.0525 |
| Adaptive dictionary | 0.6826 / 2.1847 | 0.1015 / 0.1575 | 1.1900 / 2.0664 |
Table 4. Average running time (seconds) of the compared methods on the simulated datasets with a scaling factor of 32.
| Method | SASFM | G-SOMP+ | SSR | NNSR | SSCSR | Proposed |
|---|---|---|---|---|---|---|
| Time (s) | 11 | 235 | 579 | 146 | 723 | 276 |

Share and Cite

MDPI and ACS Style

Tang, S.; Xu, Y.; Huang, L.; Sun, L. Hyperspectral Image Super-Resolution via Adaptive Dictionary Learning and Double ℓ1 Constraint. Remote Sens. 2019, 11, 2809. https://doi.org/10.3390/rs11232809
