Article

Fusing Hyperspectral and Multispectral Images via Low-Rank Hankel Tensor Representation

1 State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
2 Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 Inspur Cloud Information Technology Co., Ltd., Jinan 250100, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(18), 4470; https://doi.org/10.3390/rs14184470
Submission received: 4 July 2022 / Revised: 30 August 2022 / Accepted: 5 September 2022 / Published: 7 September 2022
(This article belongs to the Section Remote Sensing Image Processing)

Abstract:
Hyperspectral images (HSIs) have high spectral resolution and low spatial resolution. HSI super-resolution (SR) can enhance the spatial information of the scene. Current SR methods have generally focused on the direct utilization of image structure priors, which are often modeled in global or local lower-order image space. The spatial and spectral hidden priors, which are accessible from higher-order space, cannot be taken advantage of when using these methods. To solve this problem, we propose a higher-order Hankel space-based hyperspectral image-multispectral image (HSI-MSI) fusion method in this paper. In this method, the higher-order tensor represented in the Hankel space increases the HSI data redundancy, and the hidden relationships are revealed by the nonconvex penalized Kronecker-basis-representation-based tensor sparsity measure (KBR). Weighted 3D total variation (W3DTV) is further applied to maintain the local smoothness in the image structure, and an efficient algorithm is derived under the alternating direction method of multipliers (ADMM) framework. Extensive experiments on three commonly used public HSI datasets validate the superiority of the proposed method compared with current state-of-the-art SR approaches in image detail reconstruction and spectral information restoration.

1. Introduction

The hyperspectral (HS) remote sensing technique, which emerged in the 1980s, is a comprehensive scientific technique combining information processing, computer technology and other technologies. Hyperspectral images (HSIs) consist of dozens to hundreds of spectral bands covering the same area of the Earth's surface. HSIs are characterized by high dimensionality, highly redundant band information and high spectral resolution. Compared to traditional remote sensing techniques, hyperspectral remote sensing can qualitatively and quantitatively detect substances by exploiting this rich spectral information. The advantage of HSIs lies in their high spectral resolution; however, because many bands must share the limited physical size of the photosensor, spatial resolution is sacrificed as a result. High spatial resolution images are nevertheless required in many applications of HS remote sensing, such as marine research [1,2], food detection [3], military applications [4] and other fields [5,6,7,8,9]. Therefore, research on super-resolution (SR) reconstruction technology for HSIs has important scientific significance and engineering application value.
Recent studies of HSI SR can be divided into four types: Bayesian framework [10,11,12,13], matrix factorization [14,15,16,17], tensor factorization [18,19,20,21], and deep learning [22,23,24].
The Bayesian framework is a common approach that establishes the posterior distribution of the high spatial resolution hyperspectral image (HR-HSI) based on prior information and observation models. In [10], a multi-sparse Bayesian learning method was proposed based on a sparsity prior and the temporal correlation between successive frames. Wei et al. [11] adopted a Bayesian model based on sparse coding and dictionary learning to solve the fusion problem. A maximum a posteriori (MAP) estimator, which uses a coregistered HSI from an auxiliary sensor, was introduced in [12]. In [13], a Bayesian sparse representation and spectral decomposition fusion method was adopted to improve image resolution. Bayesian-based methods are very sensitive to the independence of the input data, which limits the practical application of the Bayesian framework.
Matrix factorization-based methods usually decompose the HSI into two matrices representing a spectral dictionary and the corresponding low-dimensional coefficients, which are estimated from the observed HS-multispectral (HS-MS) pair. In [14], a spectral matrix decomposition and dictionary learning method was proposed to train spectral dictionaries with high spatial-spectral information from low spatial resolution HSI (LR-HSI) and HR multispectral image (HR-MSI) matrices. In [15], a spatial-spectral sparse representation method was adopted by using spectral decomposition priors, sparse priors and nonlocal self-similarity. In [16], a non-negative structured sparse representation (NSSR) method based on the sparsity of HR-HSI was proposed for the joint estimation of the HS dictionary and sparse code. In [17], based on sparse representation and local spatial low-rankness, a method was proposed to solve HSI SR by estimating a spectral dictionary and regression coefficients. Hyperspectral data, however, are three-dimensional, with one more spectral dimension than ordinary two-dimensional images; unfolding them into matrices destroys this data structure.
As multidimensional arrays, tensors provide a natural representation of HSI data. Tensor representation has been widely applied to high-dimensional data denoising [25,26,27,28], completion [29,30,31] and SR [18,19,20,21] in the past few years. A coupled non-negative tensor decomposition (NTD) method was introduced in [18] to extend non-negative matrix decomposition to tensors; in this method, Tucker decomposition of the LR-HSI and HR-MSI is performed under NTD constraints. A joint tensor factorization (JTF)-based method for solving HR-HSI was proposed by Ren et al. [19]. These methods treat the HSI data as a 3rd-order tensor and assume that the tensor is globally low-rank. Nonlocal self-similarity is an important feature of images: it captures the repeated appearance of nonlocal regional structures, effectively retains image edge information, and has certain advantages in image restoration. Dian et al. [20] adopted a fusion method based on low tensor train rank (LTTR) decomposition. This method learns nonlocal similarity from the HR-MSI, forms multiple groups of 4D similarity cubes, and efficiently solves the SR problem using LTTR priors on the 4D cubes. In [21], a fusion method based on nonlocal Tucker decomposition, which uses the nonlocal similarity of HS data, was proposed. However, the clustering blocks obtained through nonlocal self-similarity depend heavily on the accuracy of block matching, and the nonlocal clustering space does not improve the data redundancy, which makes it difficult to effectively exploit the low-rank characteristics contained in the data.
With its superior performance in detection, recognition, classification and other tasks [32,33,34,35,36,37,38,39], deep learning has gradually been applied to low-level vision tasks in recent years [22,23,24]. Bing et al. [22] proposed an improved generative adversarial network (GAN) with an enhanced squeeze-and-excitation (SE) block. By increasing the weights of important features and reducing the weights of weak features, the SE block is incorporated into a simplified enhanced deep residual network for the single image SR (SISR) model to recover HSIs. To improve the computing speed of deep learning networks, Kim et al. [23] designed an SISR network that improves the fire modules of SqueezeNet and arranges them asymmetrically, effectively reducing the number of network parameters. In [24], a video SR network based on learning temporal dynamics was introduced. In this network, filters with different temporal scales are used to fully exploit the temporal relationships of continuous LR frames, and a spatial alignment network is employed to reduce motion complexity. The performance of deep learning networks is closely tied to the size of the training data. However, the size of acquired HSI data is limited by the physical size of the photosensor, and there are only dozens to hundreds of spectral bands, which is not sufficient for training.
Considering the limitations of HSI data and to better maintain the data structure, we adopt a tensor-based method. To improve data redundancy and fully explore the low-rank characteristics of the data, we propose carrying out high-order low-rank tensor SR (HTSR) on HSIs in the Hankel space. The Hankel space is an embedded space obtained by applying the delay-embedding transform to tensors in a multi-way manner, and it consists of duplicated high-order tensors with low-rankness. Different from traditional high-order tensor decomposition, we use a folded-concave penalized Kronecker-basis-representation-based tensor sparsity measure (KBR) [40] to reasonably and effectively represent the low-rank properties of the high-order tensor in the Hankel space. In the spatial-spectral domain of HS data, adjacent pixels exhibit local smoothness, so we adopt weighted 3D total variation (W3DTV) to constrain the spatial-spectral local continuity of HSIs. To further improve HSI SR recovery, HSI-MSI fusion is used in our method. Unlike HSIs, multispectral images (MSIs) contain 3-20 discontinuous bands with a higher signal-to-noise ratio (SNR) and spatial resolution. HR-MSI-assisted enhancement can compensate for the deficiencies of the LR-HSI, so the fusion method can reconstruct the HR-HSI more accurately.
The contributions are highlighted as follows:
  • To exploit the low-rank information hidden in HSI data, we propose modeling the SR problem in the Hankel space. The effectiveness of the designed Hankel-space SR is demonstrated in comparison with the global and nonlocal image spaces.
  • A folded-concave penalized KBR tensor decomposition is proposed to describe the low-rank characteristics in the spatial–spectral domain of the high-order tensor in the Hankel space.
  • Extensive experiments demonstrate that our HTSR method achieves competitive performance compared with the current state-of-the-art methods.
The rest of this paper is organized as follows. In Section 2, we introduce the tensor preparatory knowledge. In Section 3, we present the proposed HTSR fusion method. In Section 4, we introduce the optimization problem of the solution algorithm of the HTSR method in detail. The experimental results on three commonly used hyperspectral datasets are described in Section 5. Finally, conclusions are presented in Section 6.

2. Notions and Preliminaries on Tensors

A tensor is a multidimensional array; a given tensor of order $N$ is denoted as $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$. The related operations on tensors are introduced in Table 1.
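For readers unfamiliar with these operations, the two used most often below (mode-$n$ unfolding and the mode-$n$ product) can be sketched in NumPy; the function names are ours, not from the paper:

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: move the given mode to the front and flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given full shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_product(X, U, mode):
    """Mode-n product X x_n U: apply the matrix U to every mode-n fiber."""
    shape = X.shape[:mode] + (U.shape[0],) + X.shape[mode + 1:]
    return fold(U @ unfold(X, mode), mode, shape)

X = np.arange(24.0).reshape(2, 3, 4)
U = np.random.randn(5, 3)
Z = mode_product(X, U, 1)   # result has shape (2, 5, 4)
```

The defining identity is that the mode-$n$ product acts as a matrix product on the mode-$n$ unfolding.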

3. HSI-MSI Fusion Problem Formulation

In this section, the HTSR method for HR-HSI SR is presented. The main framework of HTSR is shown in Figure 1.

3.1. Observation Model

In this paper, the HR-HSI, HR-MSI, and LR-HSI are all denoted as 3D tensors. The target HR-HSI is denoted as $\mathcal{X} \in \mathbb{R}^{W \times H \times B}$, where $W$ and $H$ represent the sizes of the two spatial modes and $B$ represents the number of bands in the spectral mode. The LR-HSI is denoted as $\mathcal{Y} \in \mathbb{R}^{w \times h \times B}$. Through spatial blurring and downsampling of $\mathcal{X}$, $\mathcal{Y}$ can be obtained as follows:
$$\mathcal{Y} = DS\mathcal{X} \quad (1)$$
where $S$ is the blurring operator and $D$ is the downsampling operator.
$\mathcal{Z} \in \mathbb{R}^{W \times H \times b}$ denotes the acquired HR-MSI of the same scene, where $b$ represents the number of spectral bands. Through downsampling of $\mathcal{X}$ along the spectral mode, $\mathcal{Z}$ can be modeled as
$$\mathcal{Z} = \mathcal{X} \times_3 R \quad (2)$$
where $R \in \mathbb{R}^{b \times B}$ represents the spectral response matrix.
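As an illustration of the two degradation models (Equations (1) and (2)), here is a toy NumPy sketch (ours, not the authors' code); the blur-plus-downsampling $DS$ is simplified to disjoint block averaging, and $R$ is a made-up response matrix:

```python
import numpy as np

def spatial_degrade(X, ratio=4):
    """Y = DSX, with the blur S and downsampling D merged into one
    disjoint block-averaging step for brevity."""
    W, H, B = X.shape
    return X.reshape(W // ratio, ratio, H // ratio, ratio, B).mean(axis=(1, 3))

def spectral_degrade(X, R):
    """Z = X x_3 R: apply the b x B spectral response matrix R along the bands."""
    return np.einsum('whB,bB->whb', X, R)

X = np.random.rand(32, 32, 31)                           # toy HR-HSI
R = np.random.rand(4, 31); R /= R.sum(1, keepdims=True)  # made-up response matrix
Y = spatial_degrade(X)                                   # LR-HSI, shape (8, 8, 31)
Z = spectral_degrade(X, R)                               # HR-MSI, shape (32, 32, 4)
```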

3.2. Multi-Way Delay-Embedding Transform on HR-HSI

Traditionally, global or nonlocal low-rank techniques are used to model direct correlations in data. These techniques are insufficient for exploiting indirect correlations hidden in the data. To better employ the low-rank correlations hidden in $\mathcal{X}$, we map it to the Hankel space. We perform Hankel processing on all modes of $\mathcal{X}$ to acquire the Hankel tensor $\mathcal{H}_X$ via the multi-way delay-embedding transform (MDT) [41]. Given an $N$-th order tensor $\mathcal{X}$ and window sizes $\xi \in \mathbb{R}^N$, the Hankel tensor $\mathcal{H}_X$ can be obtained via MDT as follows:
$$\mathcal{H}_\xi(\mathcal{X}) = \mathrm{fold}_{(I,\xi)}(\mathcal{X} \times_1 J_1 \cdots \times_N J_N) \quad (3)$$
where $\mathrm{fold}_{(I,\xi)}: \mathbb{R}^{\xi_1(I_1-\xi_1+1) \times \cdots \times \xi_N(I_N-\xi_N+1)} \to \mathbb{R}^{\xi_1 \times (I_1-\xi_1+1) \times \cdots \times \xi_N \times (I_N-\xi_N+1)}$ constructs a $2N$-th order tensor from the input $N$-th order tensor, and $J_n \in \{0,1\}^{\xi_n(I_n-\xi_n+1) \times I_n}$ $(n \in [1,N])$ is a duplication matrix. The inverse MDT for the Hankel tensor $\mathcal{H}_X$ is defined by
$$\mathcal{H}_\xi^{-1}(\mathcal{H}_X) = \mathrm{unfold}_{(I,\xi)}(\mathcal{H}_X) \times_1 J_1^\dagger \cdots \times_N J_N^\dagger \quad (4)$$
where $\mathrm{unfold}_{(I,\xi)} = \mathrm{fold}_{(I,\xi)}^{-1}$ and $J^\dagger = (J^T J)^{-1} J^T$.
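For intuition, a minimal NumPy sketch (ours) of one-dimensional delay embedding with an explicit duplication matrix $J$, which is the 1D building block of the MDT; the pseudoinverse $J^\dagger$ then averages the anti-diagonals to invert the embedding:

```python
import numpy as np

def duplication_matrix(I, xi):
    """J in {0,1}^{xi(I-xi+1) x I}: J @ v stacks the sliding windows of v."""
    J = np.zeros((xi * (I - xi + 1), I))
    for s in range(I - xi + 1):       # window start
        for k in range(xi):           # offset inside the window
            J[s * xi + k, s + k] = 1.0
    return J

def delay_embed(v, xi):
    """1D delay embedding: fold J @ v into a xi x (I - xi + 1) Hankel matrix."""
    I = len(v)
    return (duplication_matrix(I, xi) @ v).reshape(I - xi + 1, xi).T

H = delay_embed(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), 3)
# each column of H is a length-3 sliding window of the input
```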
The MDT is a combination of multi-way folding and multi-linear duplication operations. Thus, the Hankel tensor $\mathcal{H}_X$ is a high-order tensor with the characteristic of high redundancy. The KBR is well suited to exploiting the low-rank information hidden in the high-order Hankel tensor. For a given $N$-dimensional tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, its KBR sparsity measure can be represented as
$$S(\mathcal{X}) = t\,\|\mathcal{O}\|_0 + (1-t)\prod_{j=1}^N \mathrm{rank}(X_{(j)}) \quad (5)$$
where $t \in [0,1]$ is used to balance the two terms and $\mathcal{O} \in \mathbb{R}^{r_1 \times r_2 \times \cdots \times r_N}$ is the core tensor of the Tucker decomposition of $\mathcal{X}$. The first term characterizes the number of non-zero elements in the core tensor, and the second term characterizes the volume of the non-zero cube in the core tensor.
Since the $l_0$ norm and the rank function in Equation (5) are discrete measures, it is computationally difficult to model them directly, and they must be reasonably relaxed. Many studies have shown that the $l_0$ norm and the nuclear norm are well relaxed by the minimax concave plus (MCP) norm [21,42,43], which is very reasonable and can compensate for the biased estimates brought by convex relaxations.
The MCP penalty is a typical folded-concave penalty:
$$\psi_\lambda(t) = \begin{cases} \gamma\lambda^2/2, & \text{if } |t| \ge \gamma\lambda, \\ \lambda|t| - \dfrac{t^2}{2\gamma}, & \text{otherwise}, \end{cases} \quad (6)$$
where $\gamma$ is a fixed constant.
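Equation (6) translates directly into NumPy; this vectorized sketch is ours, with $\gamma = 5$ taken from the experimental setup in Section 5.4:

```python
import numpy as np

def mcp(t, lam, gamma=5.0):
    """MCP penalty psi_lambda(t) of Equation (6), vectorized over arrays."""
    a = np.abs(t)
    return np.where(a >= gamma * lam,
                    gamma * lam ** 2 / 2.0,          # constant beyond gamma*lambda
                    lam * a - a ** 2 / (2.0 * gamma))  # concave ramp near zero
```

Unlike the $l_1$ penalty, the MCP flattens out for large $|t|$, which is what removes the bias on large coefficients.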
To better approximate $\mathcal{H}_X$, we adopt the MCP penalty as the sparsity constraint, and the above KBR sparse model is formulated as
$$S^*(\mathcal{H}_X) = M_{\lambda_1}(\mathcal{O}) + \lambda_3 \prod_{j=1}^N M^*_{\lambda_2}(H_{X(j)}) \quad (7)$$
where $M_{\lambda_1}(\mathcal{O}) = \sum_{i,j,k} \psi_{\lambda_1}(\mathcal{O}_{i,j,k})$, $M^*_{\lambda_2}(H_{X(j)}) = \sum_m \psi_{\lambda_2}(\sigma_m(H_{X(j)}))$, and $\sigma_m(H_{X(j)})$ is the $m$-th singular value of $H_{X(j)}$.

3.3. Weighted 3D Total Variation Regularization

Total variation (TV) regularization has been commonly applied to study the spatial piecewise smooth structure to address HSI restoration tasks [21,44,45]. Considering that HR-HSI is a 3D tensor and has a strong spatial-spectral continuity structure, we use a weighted 3DTV (W3DTV) term to explore the local smooth structure of X as follows:
$$\|\mathcal{X}\|_{3DTV} = \sum_{a,b,c} w_1\,|x_{a,b,c} - x_{a,b,c-1}| + w_2\,|x_{a,b,c} - x_{a,b-1,c}| + w_3\,|x_{a,b,c} - x_{a-1,b,c}| \quad (8)$$
which can be written compactly in the following form:
$$\|\mathcal{X}\|_{3DTV} = \|G_w(\mathcal{X})\|_1 \quad (9)$$
where $G_w(\cdot) = [w_1 \times G_h(\cdot);\ w_2 \times G_v(\cdot);\ w_3 \times G_t(\cdot)]$ is the weighted 3D difference operator, and $G_h(\cdot)$, $G_v(\cdot)$ and $G_t(\cdot)$ are the first-order difference operators along the three modes of the HSI cube.
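A compact NumPy sketch of the W3DTV value, using the weights reported in Section 5.4; the mapping of weights to modes follows our reading of the text (two spatial weights plus one spectral weight), and the implementation is ours:

```python
import numpy as np

def w3dtv(X, w=(0.1, 0.1, 0.8)):
    """Weighted 3DTV of a W x H x B cube: the l1 norm of the first-order
    differences along each mode, weighted per mode (w[2] weights the last,
    spectral mode)."""
    return (w[0] * np.abs(np.diff(X, axis=0)).sum()
            + w[1] * np.abs(np.diff(X, axis=1)).sum()
            + w[2] * np.abs(np.diff(X, axis=2)).sum())
```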

3.4. HSI-MSI Fusion via HTSR

By integrating the regularization terms mentioned above (Equations (1), (2), (7) and (9)), we formulate the HSI-MSI fusion problem as
$$\min_{\mathcal{X}}\ S^*(\mathcal{H}_X) + \lambda_4\|\mathcal{Y} - DS\mathcal{X}\|_F^2 + \lambda_5\|\mathcal{Z} - \mathcal{X}\times_3 R\|_F^2 + \lambda_{TV}\|\mathcal{X}\|_{3DTV} \quad (10)$$
which is equal to
$$\min_{\mathcal{X}}\ M_{\lambda_1}(\mathcal{O}) + \lambda_3\prod_{j=1}^N M^*_{\lambda_2}(H_{X(j)}) + \lambda_4\|\mathcal{Y} - DS\mathcal{X}\|_F^2 + \lambda_5\|\mathcal{Z} - \mathcal{X}\times_3 R\|_F^2 + \lambda_{TV}\|G_w(\mathcal{X})\|_1 \quad (11)$$
where $\mathcal{H}_X = \mathcal{O}\times_1 U_1 \cdots \times_N U_N$, and $\lambda_1$, $\lambda_2$, $\lambda_3$, $\lambda_4$, $\lambda_5$ and $\lambda_{TV}$ are regularization parameters. This is a nonconvex optimization problem.

4. Optimization Procedure

4.1. Algorithm

Equation (11) is an unconstrained optimization problem. We propose an effective algorithm to solve it under the ADMM framework. By introducing the auxiliary variables $\mathcal{F} = G_w(\mathcal{X})$ and $\mathcal{X} = \mathcal{M}_j$, we have
$$\min\ M_{\lambda_1}(\mathcal{O}) + \lambda_3\prod_{j=1}^N M^*_{\lambda_2}(H_{M_j(j)}) + \lambda_4\|\mathcal{Y} - DS\mathcal{X}\|_F^2 + \lambda_5\|\mathcal{Z} - \mathcal{X}\times_3 R\|_F^2 + \lambda_{TV}\|\mathcal{F}\|_1$$
$$\text{s.t.}\quad \mathcal{H}_{M_j} = \mathcal{O}\times_1 U_1 \cdots \times_N U_N,\quad \mathcal{X} = \mathcal{M}_j,\quad \mathcal{F} = G_w(\mathcal{X}),\quad U_j^T U_j = I,\ j = 1,\ldots,N \quad (12)$$
The corresponding augmented Lagrangian function is given as
$$\begin{aligned} L(\mathcal{X}, \mathcal{M}, \mathcal{O}, U, \mathcal{F}, \mathcal{K}, \mathcal{P}, \mathcal{V}) = {} & M_{\lambda_1}(\mathcal{O}) + \lambda_3\prod_{j=1}^N M^*_{\lambda_2}(H_{M_j(j)}) + \lambda_4\|\mathcal{Y} - DS\mathcal{X}\|_F^2 + \lambda_5\|\mathcal{Z} - \mathcal{X}\times_3 R\|_F^2 + \lambda_{TV}\|\mathcal{F}\|_1 \\ & + \langle \mathcal{F} - G_w(\mathcal{X}), \mathcal{K} \rangle + \frac{l}{2}\|\mathcal{F} - G_w(\mathcal{X})\|_F^2 + \sum_{j=1}^N \langle \mathcal{X} - \mathcal{M}_j, \mathcal{P}_j \rangle + \sum_{j=1}^N \frac{\mu}{2}\|\mathcal{X} - \mathcal{M}_j\|_F^2 \\ & + \sum_{j=1}^N \langle \mathcal{H}_{M_j} - \mathcal{O}\times_1 U_1 \cdots \times_N U_N, \mathcal{V}_j \rangle + \sum_{j=1}^N \frac{v}{2}\|\mathcal{H}_{M_j} - \mathcal{O}\times_1 U_1 \cdots \times_N U_N\|_F^2 \end{aligned} \quad (13)$$
where $\mathcal{K}$, $\{\mathcal{P}_j\}_{j=1}^N$ and $\{\mathcal{V}_j\}_{j=1}^N$ are the Lagrangian multipliers, $\mu$, $v$ and $l$ are the penalty parameters, and $U_j^T U_j = I$, $j = 1, \ldots, N$. Based on ADMM, we can minimize Equation (13) by solving the following subproblems.
Update $\mathcal{X}$. By fixing the other variables, Equation (13) leads to the following optimization problem:
$$\mathcal{X} = \arg\min_{\mathcal{X}}\ \lambda_4\|\mathcal{Y} - DS\mathcal{X}\|_F^2 + \lambda_5\|\mathcal{Z} - \mathcal{X}\times_3 R\|_F^2 + \langle \mathcal{F} - G_w(\mathcal{X}), \mathcal{K} \rangle + \frac{l}{2}\|\mathcal{F} - G_w(\mathcal{X})\|_F^2 + \sum_{j=1}^N \langle \mathcal{X} - \mathcal{M}_j, \mathcal{P}_j \rangle + \sum_{j=1}^N \frac{\mu}{2}\|\mathcal{X} - \mathcal{M}_j\|_F^2 \quad (14)$$
which is equivalent to solving the following linear equation:
$$2\lambda_4 (DS)^T DS\mathcal{X} + \mu N \mathcal{X} + l\,G_w^* G_w \mathcal{X} + 2\lambda_5\,\mathcal{X}\times_3 (R^T R) = 2\lambda_4 (DS)^T \mathcal{Y} + \sum_{j=1}^N (\mu \mathcal{M}_j - \mathcal{P}_j) + l\,G_w^* \mathcal{F} + G_w^* \mathcal{K} + 2\lambda_5\,(\mathcal{Z}\times_3 R^T) \quad (15)$$
where $(DS)^T$ denotes the transpose of $DS$ and $G_w^*$ denotes the adjoint of $G_w$. Equation (15) can be addressed by the off-the-shelf conjugate gradient method.
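A generic conjugate gradient routine of the kind used for Equation (15), applied here to a toy symmetric positive-definite system; this is an illustration in NumPy, not the authors' MATLAB solver:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, maxiter=500):
    """Solve A(x) = b for a symmetric positive-definite operator A given as a
    function handle, so A never needs to be formed explicitly."""
    x = np.zeros_like(b)
    r = b - A(x)          # residual
    p = r.copy()          # search direction
    rs = np.vdot(r, r)
    for _ in range(maxiter):
        Ap = A(p)
        alpha = rs / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# A toy SPD system standing in for the linear system of Equation (15)
M = np.array([[4.0, 1.0], [1.0, 3.0]])
x = conjugate_gradient(lambda v: M @ v, np.array([1.0, 2.0]))
```

Passing the operator as a function handle matters here, since the left-hand side of Equation (15) is a sum of structured operators that is never assembled as one matrix.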
Update $\mathcal{M}_k$. By fixing the other variables, the subproblem of Equation (13) with regard to $\mathcal{H}_{M_j}$ is
$$\mathcal{H}_{M_j} = \arg\min_{\mathcal{H}_{M_j}}\ \lambda_3\prod_{j=1}^N M^*_{\lambda_2}(H_{M_j(j)}) + \sum_{j=1}^N \left( \langle \mathcal{X} - \mathcal{M}_j, \mathcal{P}_j \rangle + \frac{\mu}{2}\|\mathcal{X} - \mathcal{M}_j\|_F^2 \right) + \sum_{j=1}^N \left( \langle \mathcal{H}_{M_j} - \mathcal{O}\times_1 U_1 \cdots \times_N U_N, \mathcal{V}_j \rangle + \frac{v}{2}\|\mathcal{H}_{M_j} - \mathcal{O}\times_1 U_1 \cdots \times_N U_N\|_F^2 \right) \quad (16)$$
With $\mathcal{H}_{M_j}$ $(j \ne k)$ fixed, $\mathcal{H}_{M_k}$ can be solved via
$$\arg\min_{\mathcal{H}_{M_k}}\ \lambda_3\Big(\prod_{j \ne k}^N M^*_{\lambda_2}(H_{M_j(j)})\Big) M^*_{\lambda_2}(H_{M_k(k)}) + \langle \mathcal{H}_X - \mathcal{H}_{M_k}, \mathcal{H}_{P_k} \rangle + \frac{\mu}{2}\|\mathcal{H}_X - \mathcal{H}_{M_k}\|_F^2 + \langle \mathcal{H}_{M_k} - \mathcal{O}\times_1 U_1 \cdots \times_N U_N, \mathcal{V}_k \rangle + \frac{v}{2}\|\mathcal{H}_{M_k} - \mathcal{O}\times_1 U_1 \cdots \times_N U_N\|_F^2 \quad (17)$$
The above equation is equal to
$$\arg\min_{\mathcal{H}_{M_k}}\ \beta_k M^*_{\lambda_2}(H_{M_k(k)}) + \frac{1}{2}\|\mathcal{H}_{M_k} - \mathcal{T}\|_F^2 \quad (18)$$
where $\mathcal{T} = \dfrac{\mathcal{H}_{P_k} + \mu \mathcal{H}_X - \mathcal{V}_k + v\,\mathcal{O}\times_1 U_1 \cdots \times_N U_N}{\mu + v}$ and $\beta_k = \dfrac{\lambda_3 \prod_{j \ne k} \alpha_j}{\mu + v}$, $k = 1, 2, \ldots, N$. In addition,
$$\alpha_j = \begin{cases} 1, & \text{if } M^*_{\lambda_2}(H_{M_j(j)}) = 0, \\ M^*_{\lambda_2}(H_{M_j(j)}), & \text{otherwise.} \end{cases}$$
We can solve Equation (18) via
$$\mathcal{H}_{M_k} = \mathrm{fold}_k\!\left[ S_{\lambda_2\beta_k, T_k}(\mathcal{T}_{(k)}) \right],\quad k = 1, 2, \ldots, N \quad (19)$$
Here, $S_\Omega(X)$ is the singular value shrinkage operator, formed as $S_\Omega(X) := U_X D_\Omega(X) V_X^T$, where $X = U_X \Sigma_X V_X^T$ is the singular value decomposition of $X$. For a matrix $Y$, $[D_\Omega(Y)]_{mn} = \mathrm{sgn}(Y_{mn})(|Y_{mn}| - \Omega)_+$. Additionally, the weight matrix $T_k$ is defined by $T_k = \mathrm{Diag}\big( (\lambda - \sigma(H_{M(j)})/\gamma)_+ \big)$ for some fixed $\gamma > 1$.
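With a scalar threshold in place of the weight matrix $T_k$, the singular value shrinkage operator can be sketched in NumPy as follows (a simplified illustration, ours, not the authors' code):

```python
import numpy as np

def svt(Y, tau):
    """Singular value shrinkage with a scalar threshold tau: soft-threshold the
    singular values and rebuild the matrix. The weighted variant in the paper
    instead applies a per-singular-value threshold taken from T_k."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt   # scale columns of U by shrunk s

X = svt(np.diag([3.0, 1.0, 0.2]), 0.5)   # singular values shrink to 2.5, 0.5, 0
```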
Therefore, we can obtain $\mathcal{M}_k$ $(k \in [1, N])$ by
$$\mathcal{M}_k = \mathcal{H}_\xi^{-1}(\mathcal{H}_{M_k}) \quad (20)$$
where $\mathcal{H}_\xi^{-1}(\mathcal{H}_{M_k})$ is the inverse MDT of the Hankel tensor $\mathcal{H}_{M_k}$.
Update $\mathcal{O}$. By extracting all terms containing $\mathcal{O}$, Equation (13) can be deduced as follows:
$$\mathcal{O} = \arg\min_{\mathcal{O}}\ M_{\lambda_1}(\mathcal{O}) + \sum_{j=1}^N \frac{v}{2}\left\| \mathcal{O}\times_1 U_1 \cdots \times_N U_N - \left( \frac{\mathcal{V}_j}{v} + \mathcal{H}_{M_j} \right) \right\|_F^2 \quad (21)$$
For any tensor $\mathcal{A}$ and column-orthogonal $U$, the identity $\|\mathcal{A}\times_n U\|_F^2 = \|\mathcal{A}\|_F^2$ holds. By mode-$j$ multiplying by $U_j^T$ on each mode, Equation (21) can be rewritten as
$$\arg\min_{\mathcal{O}}\ M_{\lambda_1}(\mathcal{O}) + \sum_{j=1}^N \frac{v}{2}\|\mathcal{O} - \mathcal{N}\|_F^2 \quad (22)$$
where $\mathcal{N} = \left( \frac{\mathcal{V}_j}{v} + \mathcal{H}_{M_j} \right)\times_1 U_1^T \cdots \times_N U_N^T$.
$\mathcal{O}$ can then be updated by
$$\mathcal{O} = D_{\delta, W}(\mathcal{N}) \quad (23)$$
where $\delta = \lambda_1/(vN)$ and $W = (\lambda_1 - |\mathcal{O}|/\gamma)_+$.
Update $U_k$. By extracting all terms containing $U_k$, Equation (13) can be deduced as follows:
$$U_k = \arg\min_{U_k^T U_k = I}\ \langle \mathcal{H}_{M_k} - \mathcal{O}\times_1 U_1 \cdots \times_N U_N, \mathcal{V}_k \rangle + \frac{v}{2}\|\mathcal{H}_{M_k} - \mathcal{O}\times_1 U_1 \cdots \times_N U_N\|_F^2 \quad (24)$$
When $U_j$ $(j \ne k)$ and the other variables are fixed, we need to solve the following:
$$\arg\min_{U_k^T U_k = I}\ \|\mathcal{O}\times_1 U_1 \cdots \times_N U_N - \mathcal{R}\|_F^2 \quad (25)$$
where $\mathcal{R} = \frac{\mathcal{V}_k}{v} + \mathcal{H}_{M_k}$.
Combining the identities $\|\mathcal{D}\times_n U\|_F^2 = \|\mathcal{D}\|_F^2$ (for column-orthogonal $U$) and $\mathcal{B} = \mathcal{D}\times_n U \Leftrightarrow B_{(n)} = U D_{(n)}$, Equation (25) is equivalent to
$$\max_{U_k^T U_k = I}\ \langle Z_k, U_k \rangle \quad (26)$$
where $Z_k = R_{(k)}\left( \mathrm{unfold}_k\left( \mathcal{O}\times_{j \ne k}\{U_j\}_{j=1}^N \right) \right)^T$ and $\mathcal{O}\times_{j \ne k}\{U_j\}_{j=1}^N = \mathcal{O}\times_1 U_1 \cdots \times_{k-1} U_{k-1} \times_{k+1} U_{k+1} \cdots \times_N U_N$.
According to von Neumann's trace inequality [46], $U_k$ can be solved as
$$U_k^+ = B_k C_k^T,\quad k = 1, 2, \ldots, N \quad (27)$$
where $Z_k = B_k D C_k^T$ is the SVD of $Z_k$.
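This update is the classical orthogonal Procrustes solution; a NumPy sketch (ours), assuming the same SVD-based construction:

```python
import numpy as np

def procrustes(Z):
    """argmax_{U: U^T U = I} <Z, U>: by von Neumann's trace inequality the
    maximizer is B @ C^T, where Z = B diag(d) C^T is the SVD of Z."""
    B, _, Ct = np.linalg.svd(Z, full_matrices=False)
    return B @ Ct

np.random.seed(0)
Z = np.random.randn(6, 3)
U = procrustes(Z)   # column-orthogonal maximizer of <Z, U>
```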
Update $\mathcal{F}$. By selecting all terms containing $\mathcal{F}$, Equation (13) can be deduced as
$$\mathcal{F} = \arg\min_{\mathcal{F}}\ \lambda_{TV}\|\mathcal{F}\|_1 + \langle \mathcal{F} - G_w(\mathcal{X}), \mathcal{K} \rangle + \frac{l}{2}\|\mathcal{F} - G_w(\mathcal{X})\|_F^2 \quad (28)$$
We can update $\mathcal{F}$ as
$$\mathcal{F} = \mathrm{fold}_j\!\left[ \mathrm{Soft}_{\lambda_{TV}\beta_j/l}\!\left( G_w(\mathcal{X})_{(j)} - \frac{K_{(j)}}{l} \right) \right] \quad (29)$$
where $\mathrm{Soft}_{\lambda_{TV}\beta_j/l}(\cdot)$ is the soft-thresholding operator, which has the following form:
$$\mathrm{Soft}_\psi(a) = \begin{cases} a - \psi, & \text{if } a > \psi, \\ a + \psi, & \text{if } a < -\psi, \\ 0, & \text{otherwise}, \end{cases} \quad (30)$$
where $\psi > 0$ and $a \in \mathbb{R}$.
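The soft-thresholding operator above is a one-liner in NumPy; this vectorized sketch is ours:

```python
import numpy as np

def soft(a, psi):
    """Soft-thresholding: shrink |a| by psi and clip the remainder at zero."""
    return np.sign(a) * np.maximum(np.abs(a) - psi, 0.0)
```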
Update the Lagrangian multipliers $\mathcal{P}_j$, $\mathcal{V}_j$ and $\mathcal{K}$:
$$\begin{aligned} \mathcal{K} &= \mathcal{K} + \theta \cdot l\,(\mathcal{F} - G_w(\mathcal{X})) \\ \mathcal{P}_j &= \mathcal{P}_j + \theta \cdot \mu\,(\mathcal{X} - \mathcal{M}_j) \\ \mathcal{V}_j &= \mathcal{V}_j + \theta \cdot v\,(\mathcal{H}_{M_j} - \mathcal{O}\times_1 U_1 \cdots \times_N U_N) \end{aligned} \quad (31)$$
where the parameter $\theta$ is fixed at $\theta = 1.05$, and the penalty parameters $\mu$, $v$ and $l$ follow an adaptive update strategy that promotes the convergence of our optimization algorithm. For instance, we initialize $l = 10^{-5}$ and update $l$ as
$$l \leftarrow p_1 \cdot l,\quad \text{if } \mathrm{Res} > p_2 \cdot \mathrm{Res}_{pre} \quad (32)$$
where $\mathrm{Res} = \|\mathcal{Y} - DS\mathcal{X}^{m+1}\|_F$ and $\mathrm{Res}_{pre} = \|\mathcal{Y} - DS\mathcal{X}^m\|_F$, and $p_1$ and $p_2$ can be taken as 1.05 and 0.95, respectively. The proposed algorithm for the HTSR model (11) is summarized in Algorithm 1.
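The multiplier and penalty updates above reduce to one-liners; a sketch with our own function names (not the authors' code):

```python
def dual_ascent(K, F, GwX, l, theta=1.05):
    """One multiplier update K <- K + theta * l * (F - Gw(X)); the P_j and V_j
    updates follow the same pattern with their own residuals."""
    return K + theta * l * (F - GwX)

def update_penalty(l, res, res_prev, p1=1.05, p2=0.95):
    """Grow the penalty parameter when the data-fit residual stalls."""
    return p1 * l if res > p2 * res_prev else l
```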
Algorithm 1: HTSR-Based HSI Super-Resolution.
[Algorithm 1 pseudocode figure: initialize the variables, then alternately update $\mathcal{X}$, $\mathcal{M}_k$, $\mathcal{O}$, $U_k$, $\mathcal{F}$ and the Lagrangian multipliers via Equations (15)-(32) until convergence.]

4.2. Computational Cost Analysis

The main computational cost of HTSR lies in the following subproblems. For the subproblem of updating $\mathcal{X}$, the main cost lies in solving a large linear system via the conjugate gradient technique. It is known that the computational cost of the conjugate gradient method for solving a linear system $Ar = b$ is $O(mk)$, where $m$ is the number of nonzero entries in $A$ and $k$ is its condition number. Thus, the main cost of this subproblem is $O((WH)^2 k_1 + k_2 + B^2 k_3 + k_4)$, where $k_1$, $k_2$, $k_3$ and $k_4$ are the condition numbers for the linear system in Equation (15). These condition numbers cannot be very large, since all such linear systems are well defined. For the subproblems of updating $\mathcal{M}_k$ and $\mathcal{O}$, the singular value shrinkage cost is $O(W(HB)^2 + B(WH)^2 + H(WB)^2)$. The subproblem of updating $U_k$ requires performing several SVDs for each $Z_k$; if a parallel computing procedure is adopted, the cost of solving this subproblem is comparable to a single SVD, which is $O(WHB)$. The subproblems of updating $\mathcal{F}$ and the Lagrangian multipliers can be solved through simple algebraic calculations. In all, the total computational complexity of HTSR is $O(K((WH)^2 k_1 + k_2 + B^2 k_3 + k_4 + W(HB)^2 + B(WH)^2 + H(WB)^2 + WHB))$, where $K$ is the number of iterations. The computational cost of HTSR is comparable with existing matrix- and tensor-based methods for HSI tasks [16,21], and is thus reasonable in practice.

5. Experiments

5.1. Datasets

To test the effectiveness of the proposed HTSR method, three different HSI datasets are used in the experiment.
The first dataset is Pavia University [47], with a size of 610 × 340 × 115. We select 93 clean spectral bands from this dataset as the ground truth (GT) HR-HSI, with a size of 128 × 128 × 93. We use a 5 × 5 Gaussian spatial filter with a standard deviation of 2 and a downsampling operator that averages 4 × 4 disjoint spatial blocks on the HR-HSI to generate the LR-HSI, with a size of 32 × 32 × 93. The HR-MSI, with a size of 128 × 128 × 4, is constructed using an IKONOS-like spectral reflectance response filter [48]. The second dataset is the Washington DC Mall [49], with a size of 1208 × 307 × 191. We take a 128 × 128 × 191 subset of this dataset as the GT HR-HSI. The third dataset is Urban, with a size of 307 × 307 × 210. We select 176 clean spectral bands from this dataset as the GT HR-HSI, with a size of 128 × 128 × 176. The LR-HSI and HR-MSI of these two datasets are constructed in the same way as for the first dataset.
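Our reading of this simulation protocol (5 × 5 Gaussian blur with a standard deviation of 2, then 4 × 4 disjoint block averaging) can be sketched in NumPy on a smaller toy cube; the zero-padded boundary handling is our assumption, not stated in the paper:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=2.0):
    """Normalized 1D Gaussian; 5 taps with sigma = 2, as in the protocol above."""
    x = np.arange(size) - size // 2
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def simulate_lr_hsi(X, ratio=4):
    """Blur each band with a separable Gaussian ('same' borders, zero-padded),
    then average disjoint ratio x ratio spatial blocks."""
    k = gaussian_kernel()
    B = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, X)
    B = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, B)
    W, H, C = B.shape
    return B.reshape(W // ratio, ratio, H // ratio, ratio, C).mean(axis=(1, 3))

lr = simulate_lr_hsi(np.random.rand(64, 64, 8))   # toy cube -> (16, 16, 8)
```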

5.2. Compared Methods and Quantitative Metrics

Eight state-of-the-art (SOTA) HSI SR methods are compared with our HTSR method, including the GS adaptive (GSA) method [50], coupled nonnegative matrix factorization (CNMF) method [51], coupled spectral unmixing (CSU) method [52], NSSR method [16], local low-rank and sparse representations (LRSR) method [17], LTTR method [20], HS-MS image Fusion with spectral Variability (FuVar) method [53], and nonlocal low-rank tensor decomposition and spectral unmixing (LRTD) method [21]. Five assessments are used in this paper to measure the quality of recovered HSIs, including the peak SNR (PSNR), spectral angle mapper (SAM) [10], universal image quality index (UIQI) [54], relative dimensionless global error in synthesis (ERGAS) [55], and the degree of distortion (DD). Note that the higher values of UIQI and PSNR, and the lower values of DD, ERGAS, and SAM indicate better performance.
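For reference, NumPy sketches of two of these metrics (PSNR and SAM) as we understand their standard definitions; the implementations are ours, not the evaluation code used in the paper:

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Mean PSNR over the bands of a W x H x B cube (higher is better)."""
    mse = np.mean((ref - est) ** 2, axis=(0, 1))   # per-band MSE
    return float(np.mean(10.0 * np.log10(peak ** 2 / mse)))

def sam(ref, est, eps=1e-12):
    """Mean spectral angle in degrees between per-pixel spectra (lower is better)."""
    num = np.sum(ref * est, axis=2)
    den = np.linalg.norm(ref, axis=2) * np.linalg.norm(est, axis=2) + eps
    return float(np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean())

np.random.seed(0)
ref = np.random.rand(16, 16, 8)   # toy GT cube
```

Note that SAM is invariant to per-pixel spectral scaling, which is why it complements intensity-based measures such as PSNR.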
All experiments are performed in MATLAB R2020a on a local machine with a 3.60 GHz Intel i9-9900K CPU and 36 GB of RAM.

5.3. Experimental Results

We quantitatively and qualitatively compare our HTSR method with eight SOTA SR methods on the Pavia University, Washington DC Mall and Urban datasets. Table 2 shows the PSNR, SAM, UIQI, ERGAS, and DD of all competitive methods on the above HSI datasets. The best and second-best performance measures are highlighted in red and blue, respectively.
From Table 2, Table 3 and Table 4, we can see that our HTSR method is superior to the eight competing methods in terms of the PSNR, ERGAS, and DD metrics, and ranks second in terms of the UIQI and SAM metrics. The HTSR method outperforms the GSA, CNMF, CSU, NSSR, FuVar, and LTTR methods in all metrics on the three HSI datasets. Notably, the GSA method only uses a simple image transformation operation, and its metrics are relatively low. Furthermore, the proposed HTSR method outperforms the LRTD method for most metrics on each dataset, and it has better PSNR, ERGAS, and DD values than LRSR. These results show that using a high-order Hankel space data structure in HSI SR is a viable idea. We infer that the good performance of the HTSR method results from its ability to exploit priors of the HSI data, including local smoothness structures and low-rank characteristics.
To qualitatively compare the performance of these competing methods, Figure 2, Figure 3 and Figure 4 show two recovered bands and their corresponding error images on each dataset. The closer an error image is to dark blue, the smaller the error between the GT and the reconstructed band.
Figure 2 shows the reconstructed 22nd and 40th bands and their corresponding error images on the Pavia University dataset. Among all the compared methods, the HR-HSIs recovered by the HTSR and LRTD methods achieve satisfactory results. The error image produced by the HTSR method in the 40th band is the closest to the ground truth, while a few spectral distortions are produced by the LRTD method. Figure 3 shows the reconstructed 15th and 38th bands and their corresponding error images yielded by GSA, CNMF, CSU, NSSR, LRSR, LTTR, FuVar, LRTD, and our HTSR method on the Washington dataset. The reconstructed 25th and 40th bands and their corresponding error images on the Urban dataset are shown in Figure 4. From the error images in Figure 3 and Figure 4, we can easily see that our HTSR method yields much bluer and smoother results, whereas the error images of the competing methods show obvious shades of yellow. Compared with the other methods, our HTSR method achieves better recovery of spatial details.
To further prove the significance of MDT in the HTSR model, we conduct experiments on HTSR and HTSR without MDT (HTSRWM), which can be rewritten as the following model:
$$\min_{\mathcal{X}}\ M_{\lambda_1}(\mathcal{O}) + \lambda_3\prod_{j=1}^N M^*_{\lambda_2}(X_{(j)}) + \lambda_4\|\mathcal{Y} - DS\mathcal{X}\|_F^2 + \lambda_5\|\mathcal{Z} - \mathcal{X}\times_3 R\|_F^2 + \lambda_{TV}\|G_w(\mathcal{X})\|_1 \quad (33)$$
where $\mathcal{X} = \mathcal{O}\times_1 U_1 \cdots \times_N U_N$.
From Table 5, we can see that the MDT in the HTSR model indeed improves the performance of HTSR. For clarity, we mark the best results in bold red. Figure 5, Figure 6 and Figure 7 show the 63rd band of Pavia University, the 30th band of Washington DC Mall and the 15th band of Urban, together with the corresponding error images reconstructed by the HTSR and HTSRWM methods. From these figures, we can see that the HSI recovered by the HTSRWM method has obvious artifacts, while the HR-HSI details recovered by the HTSR method are closer to the GT, which further demonstrates the significance of MDT in the HTSR model.

5.4. Experimental Setup

In our HTSR model, there are six regularization parameters $\lambda_1$, $\lambda_2$, $\lambda_3$, $\lambda_4$, $\lambda_5$ and $\lambda_{TV}$ in Equation (11), the MDT parameter $\xi$ in Equation (4), the MCP parameter $\gamma$ in Equation (6) and the W3DTV weight $w$ in Equation (8). Taking $\lambda_4$ as an example, we fixed the other parameters and then optimized $\lambda_4$ over a group of candidate values (chosen first from a large range with a big interval and then from a relatively small range, determined by the performance, with a small interval), which yielded $\lambda_4 = 8 \times 10^{-2}$. The same tuning rule applies to the other regularization parameters, giving $\lambda_1 = 10^{-3}$, $\lambda_2 = 10^{-1}$, $\lambda_3 = 10^{-1}$, $\lambda_5 = 1.3$, and $\lambda_{TV} = 10^{-2}$. Since the LR-HSI is not degraded along the spectrum and its spectral information is relatively complete, we selected a small MDT window length of 2 along the spectral dimension. The MDT window length along the spatial dimensions was tuned according to the rules mentioned above for the regularization terms; balancing computational complexity and performance, we chose a spatial window length of 4, giving $\xi = [4, 4, 2]$. $\gamma$ is a fixed constant in the MCP; following the values commonly used in [42,56], we set $\gamma = 5$. For simplicity, we fixed $w_j$ $(j = 1, 2)$ to 0.1, since the spatial dimensions have a similar effect, and tuned the spectral weight $w_3$ between 0 and 1. We found that the reconstruction performance was stable for $0.6 \le w_3 \le 1$, so we fixed $w_3 = 0.8$, giving $w = [0.1, 0.1, 0.8]$. For the selection of ranks, we first fixed the ranks of the first, third and fifth dimensions according to $\xi$, and then optimized the ranks of the other dimensions according to the tuning rules mentioned above. Thus, the rank of the Hankel tensor is $\mathrm{Rank} = [4, 60, 4, 60, 2, 30]$. The values of all parameters in our HTSR model are listed in Table 6.
The algorithm is stopped when the relative change $\|\mathcal{X}^k - \mathcal{X}^{k-1}\|_F / \|\mathcal{X}^{k-1}\|_F$ between iterations is less than $10^{-5}$.

6. Conclusions

In this paper, we address the issue of HSI SR from the viewpoint of higher-order Hankel space prior modeling. Instead of directly using global or local priors, we model the SR problem by mapping the HSI data to the Hankel space via MDT. To fully exploit the low-rank characteristics of the spatial versus spectral domains of the HR-HSI in its higher-order Hankel tensor form, we use the nonconvex penalized KBR. W3DTV is further applied to maintain the local smoothness of the image structure. Compared with the SOTA SR methods, our experimental results show that good reconstruction results can be obtained with the HTSR method, and the effectiveness of higher-order tensor modeling in the Hankel space is also verified. Notice that our method shares a processing flow with singular spectrum analysis (SSA)-like schemes [57]: both require embedding/transformation, low-rank approximation, and inverse embedding/transformation. However, they differ in their embedding strategies and low-rank approximation techniques. SSA-like methods construct a 2D trajectory matrix for vector, 2D and multivariate time series, and singular value decomposition is then applied to the trajectory matrix to extract signals representing different components of the original time series; this is a special case of principal component analysis based on 2D matrix data. In contrast, the MDT used in our method embeds the data into a high-dimensional feature space and constructs a duplicated tensor of high order; the high-order tensor data are then represented by low-rank tensor decompositions in the embedded space. The high-order tensor representation provides a way to exploit the relationships between the orders hidden in the data, and our model is designed within this framework.

Author Contributions

All authors made significant contributions to this work. Conceptualization, S.G. and X.C.; methodology, S.G., X.C. and Z.H.; investigation, X.C.; software, S.G. and Z.D.; data curation, S.G. and H.J.; validation, Z.D.; writing—original draft preparation, S.G.; writing—review and editing, H.J., Z.H. and Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China under Grant 61903358, Grant 61873259, and Grant 61821005, in part by the Youth Innovation Promotion Association of the Chinese Academy of Sciences under Grant 2022196 and Grant Y202051, National Science Foundation of Liaoning Province under Grant 2021-BS-023.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Balsi, M.; Moroni, M.; Chiarabini, V.; Tanda, G. High-Resolution Aerial Detection of Marine Plastic Litter by Hyperspectral Sensing. Remote Sens. 2021, 13, 1557. [Google Scholar] [CrossRef]
  2. Pu, Y.; Hu, S.; Luo, Y.; Liu, X.; Hu, L.; Ye, L.; Li, H.; Xia, F.; Gao, L. Multiscale Perspectives on an Extreme Warm-Sector Rainfall Event over Coastal South China. Remote Sens. 2022, 14, 3110. [Google Scholar] [CrossRef]
  3. Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; González-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A Deep Learning-Based Approach for Automated Yellow Rust Disease Detection from High-Resolution Hyperspectral UAV Images. Remote Sens. 2019, 11, 1554. [Google Scholar] [CrossRef]
  4. Huang, B.; Song, H.; Cui, H.; Peng, J.; Xu, Z. Spatial and Spectral Image Fusion Using Sparse Matrix Factorization. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1693–1704. [Google Scholar] [CrossRef]
  5. Farrar, M.B.; Wallace, H.M.; Brooks, P.; Yule, C.M.; Tahmasbian, I.; Dunn, P.K.; Hosseini Bai, S. A Performance Evaluation of Vis/NIR Hyperspectral Imaging to Predict Curcumin Concentration in Fresh Turmeric Rhizomes. Remote Sens. 2021, 13, 1807. [Google Scholar] [CrossRef]
  6. Légaré, B.; Bélanger, S.; Singh, R.K.; Bernatchez, P.; Cusson, M. Remote Sensing of Coastal Vegetation Phenology in a Cold Temperate Intertidal System: Implications for Classification of Coastal Habitats. Remote Sens. 2022, 14, 3000. [Google Scholar] [CrossRef]
  7. Cen, Y.; Huang, Y.; Hu, S.; Zhang, L.; Zhang, J. Early Detection of Bacterial Wilt in Tomato with Portable Hyperspectral Spectrometer. Remote Sens. 2022, 14, 2882. [Google Scholar] [CrossRef]
  8. Yin, C.; Lv, X.; Zhang, L.; Ma, L.; Wang, H.; Zhang, L.; Zhang, Z. Hyperspectral UAV Images at Different Altitudes for Monitoring the Leaf Nitrogen Content in Cotton Crops. Remote Sens. 2022, 14, 2576. [Google Scholar] [CrossRef]
  9. Thornley, R.H.; Verhoef, A.; Gerard, F.F.; White, K. The Feasibility of Leaf Reflectance-Based Taxonomic Inventories and Diversity Assessments of Species-Rich Grasslands: A Cross-Seasonal Evaluation Using Waveband Selection. Remote Sens. 2022, 14, 2310. [Google Scholar] [CrossRef]
  10. Liu, Y.; Yang, Y.; Shu, Y.; Zhou, T.; Luo, J.; Liu, X. Super-Resolution Ultrasound Imaging by Sparse Bayesian Learning Method. IEEE Access. 2019, 7, 47197–47205. [Google Scholar] [CrossRef]
  11. Wei, Q.; Bioucas-Dias, J.; Dobigeon, N.; Tourneret, J.-Y. Hyperspectral and multispectral image fusion based on a sparse representation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3658–3668. [Google Scholar] [CrossRef]
  12. Hardie, R.C.; Eismann, M.T.; Wilson, G.L. MAP estimation for hyperspectral image resolution enhancement using an auxiliary sensor. IEEE Trans. Image Process. 2004, 13, 1174–1184. [Google Scholar] [CrossRef] [PubMed]
  13. Ghasrodashti, E.K.; Karami, A.; Heylen, R.; Scheunders, P. Spatial Resolution Enhancement of Hyperspectral Images Using Spectral Unmixing and Bayesian Sparse Representation. Remote Sens. 2017, 9, 541. [Google Scholar] [CrossRef]
  14. Zhao, Y.; Yi, C.; Yang, J.; Chan, J.C.-W. Spectral super-resolution based on matrix factorization and spectral dictionary. In Proceedings of the 2016 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, USA, 21–24 August 2016; pp. 1–6. [Google Scholar]
  15. Dian, R.; Li, S.; Fang, L.; Wei, Q. Multispectral and hyperspectral image fusion with spatial-spectral sparse representation. Inf. Fusion. 2019, 49, 262–270. [Google Scholar] [CrossRef]
  16. Dong, W.; Fu, F.; Shi, G.; Cao, X.; Wu, J.; Li, G.; Li, X. Hyperspectral Image Super-Resolution via Non-Negative Structured Sparse Representation. IEEE Trans. Image Process. 2016, 25, 2337–2352. [Google Scholar] [CrossRef] [PubMed]
  17. Dian, R.; Li, S.; Fang, L.; Bioucas-Dias, J. Hyperspectral Image Super-Resolution via Local Low-Rank and Sparse Representations. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 4003–4006. [Google Scholar]
  18. Zare, M.; Helfroush, M.S.; Kazemi, K.; Scheunders, P. Hyperspectral and Multispectral Image Fusion Using Coupled Non-Negative Tucker Tensor Decomposition. Remote Sens. 2021, 13, 2930. [Google Scholar] [CrossRef]
  19. Ren, X.; Lu, L.; Chanussot, J. Toward Super-Resolution Image Construction Based on Joint Tensor Decomposition. Remote Sens. 2020, 12, 2535. [Google Scholar] [CrossRef]
  20. Dian, R.; Li, S.; Fang, L. Learning a Low Tensor-Train Rank Representation for Hyperspectral Image Super-Resolution. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2672–2683. [Google Scholar] [CrossRef]
  21. Wang, K.; Wang, Y.; Zhao, X.-L.; Chan, J.C.-W.; Meng, D. Hyperspectral and Multispectral Image Fusion via Nonlocal Low-Rank Tensor Decomposition and Spectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7654–7671. [Google Scholar] [CrossRef]
  22. Bing, X.; Zhang, W.; Zheng, L.; Zhang, Y. Medical Image Super Resolution Using Improved Generative Adversarial Networks. IEEE Access. 2019, 7, 145030–145038. [Google Scholar] [CrossRef]
  23. Kim, H.; Kim, G. Single Image Super-Resolution Using Fire Modules with Asymmetric Configuration. IEEE Signal Process. Lett. 2020, 27, 516–519. [Google Scholar] [CrossRef]
  24. Liu, D.; Wang, Z.; Fan, Y.; Liu, X.; Wang, Z.; Chang, S.; Wang, X.; Huang, T.S. Learning Temporal Dynamics for Video Super-Resolution: A Deep Learning Approach. IEEE Trans. Image Process. 2018, 27, 516–519. [Google Scholar] [CrossRef]
  25. Chen, X.; Han, Z.; Wang, Y.; Zhao, Q.; Meng, D.; Lin, L.; Tang, Y. A Generalized Model for Robust Tensor Factorization With Noise Modeling by Mixture of Gaussians. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5380–5393. [Google Scholar] [CrossRef] [PubMed]
  26. Lin, J.; Huang, T.; Zhao, X.; Jiang, T.; Zhuang, L. A Tensor Subspace Representation-Based Method for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7739–7757. [Google Scholar] [CrossRef]
  27. Wu, Y.; Fang, L.; Li, S. Weighted Tensor Rank-1 Decomposition for Nonlocal Image Denoising. IEEE Trans. Image Process. 2019, 28, 2719–2730. [Google Scholar] [CrossRef]
  28. Gong, X.; Chen, W.; Chen, J.; Ai, B. Tensor Denoising Using Low-Rank Tensor Train Decomposition. IEEE Signal Process. Lett. 2020, 27, 1685–1689. [Google Scholar] [CrossRef]
  29. Zhao, Q.; Zhou, G.; Zhang, L.; Cichocki, A.; Amari, S.-I. Bayesian Robust Tensor Factorization for Incomplete Multiway Data. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 736–748. [Google Scholar] [CrossRef] [PubMed]
  30. Liu, Y.; Shang, F.; Fan, W.; Cheng, J.; Cheng, H. Generalized higher order orthogonal iteration for tensor learning and decomposition. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2551–2563. [Google Scholar] [CrossRef]
  31. Xie, T.; Li, S.; Fang, L.; Liu, L. Tensor Completion via Nonlocal Low-Rank Regularization. IEEE Trans. Neural Netw. Learn. Syst. 2019, 49, 2344–2354. [Google Scholar] [CrossRef] [PubMed]
  32. Qi, C.; Xie, J.; Zhang, H. Joint Antenna Placement and Power Allocation for Target Detection in a Distributed MIMO Radar Network. Remote Sens. 2022, 14, 2650. [Google Scholar] [CrossRef]
  33. Rukhovich, D.I.; Koroleva, P.V.; Rukhovich, D.D.; Rukhovich, A.D. Recognition of the Bare Soil Using Deep Machine Learning Methods to Create Maps of Arable Soil Degradation Based on the Analysis of Multi-Temporal Remote Sensing Data. Remote Sens. 2022, 14, 2224. [Google Scholar] [CrossRef]
  34. An, W.; Zhang, X.; Wu, H.; Zhang, W.; Du, Y.; Sun, J. LPIN: A Lightweight Progressive Inpainting Network for Improving the Robustness of Remote Sensing Images Scene Classification. Remote Sens. 2022, 14, 2224. [Google Scholar] [CrossRef]
  35. Jia, X.; Jing, X.; Zhu, X.; Chen, S.; Du, B.; Cai, Z.; He, Z.; Yue, D. Semi-Supervised Multi-View Deep Discriminant Representation Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 2496–2509. [Google Scholar] [CrossRef] [PubMed]
  36. Zhong, G.; Zhang, K.; Wei, H.; Zheng, Y.; Dong, J. Marginal Deep Architecture: Stacking Feature Learning Modules to Build Deep Learning Models. IEEE Access 2019, 7, 30220–30233. [Google Scholar] [CrossRef]
  37. Massaoudi, M.; Chihi, I.; Abu-Rub, H.; Refaat, S.S.; Oueslati, F.S. Convergence of Photovoltaic Power Forecasting and Deep Learning: State-of-Art Review. IEEE Access 2021, 9, 136593–136615. [Google Scholar] [CrossRef]
  38. Lin, J.; Mou, L.; Zhu, X.; Ji, X.; Wang, Z.J. Attention-Aware Pseudo-3-D Convolutional Neural Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7790–7802. [Google Scholar] [CrossRef]
  39. Lin, J.; Wang, Q.; Yuan, Y. In defense of iterated conditional mode for hyperspectral image classification. In Proceedings of the 2014 IEEE International Conference on Multimedia and Expo (ICME), Chengdu, China, 14–18 July 2014; pp. 1–6. [Google Scholar]
  40. Xie, Q.; Zhao, Q.; Meng, D.; Xu, Z. Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1888–1902. [Google Scholar] [CrossRef] [PubMed]
  41. Yokota, T.; Erem, B.; Guler, S.; Warfield, S.K.; Hontani, H. Missing Slice Recovery for Tensors Using a Low-Rank Model in Embedded Space. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8251–8259. [Google Scholar]
  42. Wang, Y.; Chen, X.; Han, Z.; He, S. Hyperspectral Image Super-Resolution via Nonlocal Low-Rank Tensor Approximation and Total Variation Regularization. Remote Sens. 2017, 9, 1286. [Google Scholar] [CrossRef]
  43. He, S.; Zhou, H.; Wang, Y.; Cao, W.; Han, Z. Super-resolution reconstruction of hyperspectral images via low rank tensor modeling and total variation regularization. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 6962–6965. [Google Scholar]
  44. Shi, F.; Cheng, J.; Wang, L.; Yap, P.-T.; Shen, D. LRTV: MR Image Super-Resolution With Low-Rank and Total Variation Regularizations. IEEE Trans. Med. Imaging. 2015, 34, 2459–2466. [Google Scholar] [CrossRef] [PubMed]
  45. Zhang, Y.; Tuo, X.; Huang, Y.; Yang, J. A TV Forward-Looking Super-Resolution Imaging Method Based on TSVD Strategy for Scanning Radar. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4517–4528. [Google Scholar] [CrossRef]
  46. Mirsky, L. A trace inequality of john von neumann. Monatshefte für Math. 1975, 79, 303–306. [Google Scholar] [CrossRef]
  47. Dell’Acqua, F.; Gamba, P.; Ferrari, A.; Palmason, J.A.; Benediktsson, J.A.; Arnason, K. Exploiting spectral and spatial information in hyperspectral urban data with high resolution. IEEE Geosci. Remote Sens. Lett. 2004, 1, 322–326. [Google Scholar] [CrossRef]
  48. Wei, Q.; Dobigeon, N.; Tourneret, J.-Y. Fast Fusion of Multi-Band Images Based on Solving a Sylvester Equation. IEEE Trans. Image Process 2015, 24, 4109–4121. [Google Scholar] [CrossRef] [PubMed]
  49. Basedow, R.W.; Carmer, D.C.; Andersonand, M.E. HYDICE system: Implementation and performance. In Proceedings of the 1995 Imaging Spectrometry Conference, Orlando, FL, USA, 17–18 April 1995; pp. 258–267. [Google Scholar]
  50. Aiazzi, B.; Baronti, S.; Selva, M. Improving Component Substitution Pansharpening Through Multivariate Regression of MS +Pan Data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  51. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled Nonnegative Matrix Factorization Unmixing for Hyperspectral and Multispectral Data Fusion. IEEE Trans. Geosci. Remote Sens. 2012, 50, 528–537. [Google Scholar] [CrossRef]
  52. Lanaras, C.; Baltsavias, E.; Schindler, K. Hyperspectral Super-Resolution by Coupled Spectral Unmixing. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 3586–3594. [Google Scholar]
  53. Borsoi, R.A.; Imbiriba, T.; Bermudez, J. Super-Resolution for Hyperspectral and Multispectral Image Fusion Accounting for Seasonal Spectral Variability. IEEE Trans. Image Process. 2020, 29, 116–127. [Google Scholar] [CrossRef] [PubMed]
  54. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
  55. Wald, L. Quality of high resolution synthesised images: Is there a simple criterion? In Proceedings of the Third Conference Fusion of Earth Data: Merging Point Measurements, Raster Maps and Remotely Sensed Images, Sophia Antipolis, France, 28–30 January 2000; pp. 99–103. [Google Scholar]
  56. Jia, H.; Chen, X.; Han, Z.; Liu, B.; Wen, T.; Tang, Y. Nonconvex Nonlocal Tucker Decomposition for 3D Medical Image Super-Resolution. Front. Neuroinform. 2022, 16, 880301. [Google Scholar] [CrossRef] [PubMed]
  57. Golyandina, N.; Korobeynikov, A.; Shlemov, A.; Usevich, K. Multivariate and 2D Extensions of Singular Spectrum Analysis with the Rssa Package. J. Stat. Softw. 2015, 67, 1–78. [Google Scholar] [CrossRef]
Figure 1. The framework of our HTSR method.
Figure 2. Reconstructed HR-HSI and corresponding error images for Pavia University. First and second rows: reconstructed HSI and its corresponding error image at the 22nd band. Third and fourth rows: reconstructed HSI and its corresponding error image at the 40th band. (a) GSA; (b) CNMF; (c) CSU; (d) NSSR; (e) LRSR; (f) LTTR; (g) FuVar; (h) LRTD; (i) HTSR; (j) Ground truth.
Figure 3. Reconstructed HR-HSI and corresponding error images for Washington DC Mall. First and second rows: reconstructed HSI and its corresponding error image at the 15th band. Third and fourth rows: reconstructed HSI and its corresponding error image at the 38th band. (a) GSA; (b) CNMF; (c) CSU; (d) NSSR; (e) LRSR; (f) LTTR; (g) FuVar; (h) LRTD; (i) HTSR; (j) Ground truth.
Figure 4. Reconstructed HR-HSI and corresponding error images for Urban. First and second rows: reconstructed HSI and its corresponding error image at the 25th band. Third and fourth rows: reconstructed HSI and its corresponding error image at the 40th band. (a) GSA; (b) CNMF; (c) CSU; (d) NSSR; (e) LRSR; (f) LTTR; (g) FuVar; (h) LRTD; (i) HTSR; (j) Ground truth.
Figure 5. Reconstructed HR-HSI and its corresponding error image at the 63rd band for Pavia University. (a) HTSRWM method. (b) HTSR method. (c) Ground truth.
Figure 6. Reconstructed HR-HSI and its corresponding error image at the 30th band for Washington DC Mall. (a) HTSRWM method; (b) HTSR method; (c) Ground truth.
Figure 7. Reconstructed HR-HSI and its corresponding error image at the 15th band for Urban. (a) HTSRWM method; (b) HTSR method; (c) Ground truth.
Table 1. Tensor notations and operations.
Notation | Definition
$x$, $\mathbf{x}$, $\mathbf{X}$, $\mathcal{X}$ | scalar, vector, matrix, tensor
$x_{i_1, i_2, \ldots, i_N}$ | tensor element
$\mathcal{X}_{:, i_2, i_3, \ldots, i_N}$ | tensor fiber
$\mathcal{X}_{:, :, i_3, \ldots, i_N}$ | tensor slice
$\mathcal{Y} = \mathcal{X} \times_n \mathbf{U}$ | tensor mode-$n$ product
$\mathbf{X}_{(n)}$ or $\mathrm{unfold}_n(\mathcal{X})$ | tensor mode-$n$ matricization
$\mathrm{fold}_n(\mathbf{X}_{(n)})$ | inverse operation of $\mathrm{unfold}_n(\mathcal{X})$
$r_n = \mathrm{rank}(\mathbf{X}_{(n)})$ | multilinear rank
$\mathcal{X} = \mathcal{G} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \cdots \times_N \mathbf{U}_N$ | Tucker decomposition
$\mathcal{X} = \sum_{r=1}^{R} \mathbf{x}_r^{(1)} \circ \mathbf{x}_r^{(2)} \circ \cdots \circ \mathbf{x}_r^{(N)}$ | CP decomposition
$\langle \mathcal{X}, \mathcal{Y} \rangle$ | inner product
$\| \mathcal{X} \|_0$ | $\ell_0$ norm
$\| \mathcal{X} \|_1$ | $\ell_1$ norm
$\| \mathcal{X} \|_F$ | Frobenius norm
Table 2. Quantitative metrics of the competitive methods on the Pavia University dataset.
Method     | PSNR   | SAM   | UIQI  | ERGAS | DD
GSA [50]   | 34.788 | 3.934 | 0.967 | 2.861 | 3.476
CNMF [51]  | 32.848 | 3.373 | 0.960 | 3.351 | 3.341
CSU [52]   | 38.633 | 2.346 | 0.980 | 2.057 | 1.878
NSSR [16]  | 36.343 | 2.390 | 0.983 | 2.284 | 2.473
LRSR [17]  | 41.568 | 1.999 | 0.992 | 1.256 | 1.704
LTTR [20]  | 41.893 | 2.407 | 0.986 | 1.584 | 1.508
FuVar [53] | 41.145 | 2.497 | 0.984 | 1.470 | 1.596
LRTD [21]  | 41.871 | 2.191 | 0.992 | 1.246 | 1.491
HTSR       | 42.251 | 2.122 | 0.992 | 1.216 | 1.400
The best and second-best performance measures are highlighted in red and blue, respectively.
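The metrics reported in the tables follow standard definitions from the fusion literature; as an illustration, PSNR and SAM (the two most commonly quoted) can be computed as below. These are the conventional formulas, offered as a sketch; the paper does not list its exact implementation, so treat parameter choices such as the peak value as assumptions.

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB over the whole data cube."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def sam(ref, est, eps=1e-12):
    """Mean spectral angle mapper in degrees.
    ref, est: arrays of shape (H, W, bands); the angle is taken
    between the per-pixel spectra, so SAM is scale-invariant."""
    r = ref.reshape(-1, ref.shape[-1])
    e = est.reshape(-1, est.shape[-1])
    cos = np.sum(r * e, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1) + eps)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

rng = np.random.default_rng(0)
ref = rng.random((8, 8, 16))
print(psnr(ref, ref + 0.01))   # constant 0.01 offset on a unit-peak cube: 40 dB
print(sam(ref, 0.9 * ref))     # pure scaling leaves the spectral angle at ~0
```

Higher PSNR and UIQI indicate better reconstruction, while lower SAM, ERGAS, and DD do, which is how the best entries in the tables are identified.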
Table 3. Quantitative metrics of the competitive methods on the Washington DC Mall dataset.
Method     | PSNR   | SAM   | UIQI  | ERGAS | DD
GSA [50]   | 27.813 | 5.816 | 0.971 | 3.053 | 8.831
CNMF [51]  | 28.660 | 3.669 | 0.974 | 2.608 | 6.772
CSU [52]   | 31.994 | 4.250 | 0.930 | 4.332 | 3.993
NSSR [16]  | 30.166 | 4.134 | 0.983 | 2.230 | 6.212
LRSR [17]  | 34.835 | 2.532 | 0.994 | 1.376 | 3.636
LTTR [20]  | 33.021 | 5.671 | 0.973 | 2.544 | 5.997
FuVar [53] | 33.128 | 4.604 | 0.986 | 1.847 | 4.512
LRTD [21]  | 35.101 | 3.699 | 0.990 | 1.613 | 4.042
HTSR       | 37.458 | 3.460 | 0.992 | 1.309 | 3.273
The best and second-best performance measures are highlighted in red and blue, respectively.
Table 4. Quantitative metrics of the competitive methods on the Urban dataset.
Method     | PSNR   | SAM   | UIQI  | ERGAS | DD
GSA [50]   | 29.417 | 7.059 | 0.962 | 3.696 | 7.240
CNMF [51]  | 29.691 | 4.699 | 0.977 | 2.752 | 4.681
CSU [52]   | 33.266 | 4.117 | 0.981 | 2.690 | 3.410
NSSR [16]  | 31.934 | 4.754 | 0.980 | 2.778 | 4.913
LRSR [17]  | 34.339 | 3.148 | 0.992 | 2.096 | 4.204
LTTR [20]  | 33.444 | 6.648 | 0.966 | 3.330 | 5.170
FuVar [53] | 35.245 | 5.300 | 0.981 | 2.506 | 3.925
LRTD [21]  | 35.729 | 4.398 | 0.987 | 2.192 | 3.626
HTSR       | 37.639 | 4.112 | 0.987 | 2.003 | 3.243
The best and second-best performance measures are highlighted in red and blue, respectively.
Table 5. Quantitative metrics of the HTSRWM method and the HTSR method on the Pavia University, Washington DC Mall, and Urban datasets.
Pavia University
Method  | PSNR   | SAM   | UIQI  | ERGAS | DD
HTSRWM  | 39.583 | 2.869 | 0.984 | 1.750 | 1.975
HTSR    | 42.251 | 2.122 | 0.992 | 1.216 | 1.400
Washington DC Mall
Method  | PSNR   | SAM   | UIQI  | ERGAS | DD
HTSRWM  | 34.383 | 3.644 | 0.984 | 1.898 | 4.658
HTSR    | 37.458 | 3.460 | 0.992 | 1.309 | 3.273
Urban
Method  | PSNR   | SAM   | UIQI  | ERGAS | DD
HTSRWM  | 34.428 | 5.305 | 0.975 | 2.879 | 4.694
HTSR    | 37.639 | 4.112 | 0.987 | 2.003 | 3.243
The best performance measures are highlighted in red.
Table 6. The values of each parameter in HTSR model.
Trade-off parameters: λ1 = 0.001, λ2 = 0.1, λ3 = 0.1, λ4 = 800, λ5 = 1.3, λ_TV = 0.01
Other parameters: ξ = [4, 4, 2], γ = 5, w = [0.1, 0.1, 0.8]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Guo, S.; Chen, X.; Jia, H.; Han, Z.; Duan, Z.; Tang, Y. Fusing Hyperspectral and Multispectral Images via Low-Rank Hankel Tensor Representation. Remote Sens. 2022, 14, 4470. https://doi.org/10.3390/rs14184470

