Article

Feedback Refined Local-Global Network for Super-Resolution of Hyperspectral Imagery

1 School of Civil Engineering, Tianjin University, Tianjin 300350, China
2 College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
3 Science and Technology on Special System Simulation Laboratory, Beijing Simulation Center, Beijing 100854, China
4 Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
5 School of Statistics and Data Science, KLMDASR, LEBPS, and LPMC, Nankai University, Tianjin 300071, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2022, 14(8), 1944; https://doi.org/10.3390/rs14081944
Submission received: 14 March 2022 / Revised: 8 April 2022 / Accepted: 13 April 2022 / Published: 18 April 2022

Abstract

Powered by advanced deep-learning technology, multispectral image super-resolution methods based on convolutional neural networks have recently achieved great progress. However, single hyperspectral image super-resolution remains a challenging problem due to the high-dimensional and complex spectral characteristics of hyperspectral data, which make it difficult for general 2D convolutional neural networks to simultaneously capture spatial and spectral prior information. To deal with this issue, we propose a novel Feedback Refined Local-Global Network (FRLGN) for the super-resolution of hyperspectral images. Specifically, we develop a new Feedback Structure and a Local-Global Spectral block to alleviate the difficulty of spatial and spectral feature extraction. The Feedback Structure transfers high-level information back to guide the generation of low-level features, and is realized by a recurrent structure with finite unfoldings. Furthermore, in order to use the fed-back high-level information effectively, a Local-Global Spectral block is constructed to handle the feedback connections. The Local-Global Spectral block uses the fed-back high-level information to correct low-level features within local spectral bands and generates powerful high-level representations across global spectral bands. By incorporating the Feedback Structure and the Local-Global Spectral block, FRLGN can fully exploit spatial-spectral correlations among spectral bands and gradually reconstruct high-resolution hyperspectral images. Experimental results indicate that FRLGN presents advantages on three public hyperspectral datasets.

1. Introduction

Hyperspectral imaging sensors collect and process information across many bands of the electromagnetic spectrum. Compared with multispectral images, the resulting hyperspectral image (HSI) contains richer spectral information and has been applied to resource management, target detection and land cover detection [1,2,3,4,5], among others. However, because of the limitations of imaging systems, it is difficult to acquire an HSI with high spatial resolution. Therefore, obtaining a reliable high-resolution HSI remains a very challenging problem.
Recently, HSI super-resolution approaches have been intensively studied in remote sensing [6]. Based on the number of input images, HSI super-resolution methods can be roughly divided into fusion-based HSI super-resolution [7,8,9] and single HSI super-resolution [10,11,12]. Fusion-based methods improve the spatial resolution by combining the observed low-resolution HSI with high-resolution multispectral or panchromatic images. For example, Wei et al. [13] introduced a variational approach to merge a high-resolution multispectral image with a low-resolution HSI. By treating the HSI as a 3D tensor, Wan et al. [14] designed a nonlocal 4D tensor dictionary-learning-based fusion approach. More recently, deep-learning-based fusion methods have achieved excellent performance thanks to the powerful representation capability of convolutional neural networks. Wei et al. [15] suggested using a deep neural network to capture rich HSI statistics and then using these priors to regularize the super-resolution of HSIs. Wei et al. [16] further designed a deep recursive residual network to probe deep statistical prior information. Most fusion-based methods assume that the high-resolution auxiliary image is well co-registered with the low-resolution HSI. In real applications, it is difficult to obtain such co-registered auxiliary images, which hinders the progress of these techniques.
By contrast, single HSI super-resolution approaches reconstruct the high-resolution HSI from only a low-resolution HSI; they need no auxiliary information and are therefore more feasible in practice. In order to explore the spatial-spectral prior information of HSIs, some single HSI super-resolution methods based on dictionary learning, sparse representation and low-rank approximation have been proposed. For instance, Huang et al. [17] designed a noise-insensitive super-resolution mapping method based on multi-dictionary sparse representation. Wang et al. [18] introduced a new tensor-based approach that solves the HSI super-resolution problem by modeling three intrinsic characteristics of hyperspectral data. However, each hand-crafted prior reflects only one aspect of the hyperspectral data, so the reconstruction improves noticeably only for HSIs that match the assumed prior. In recent years, owing to the success of deep learning in many fields [19,20,21], it has been applied to the single hyperspectral super-resolution task and achieved satisfying results. For example, Hu et al. [22] designed a spectral difference convolutional network to alleviate spectral distortion. Besides, Mei et al. [23] constructed a 3D super-resolution network to extract prior information. Although spectral correlation can be well exploited by 3D convolution operators, the computational cost of such models is very large. To reduce the high computation of 3D convolution, Jiang et al. [24] introduced group convolution to explore the spatial information and the correlation among spectral bands. Wang et al. [25] further designed a recurrent structure to investigate the spectral correlation among groups. Nonetheless, due to the high dimensionality and complex spectral patterns of hyperspectral data, it is still hard to simultaneously explore the joint spatial and spectral information between continuous bands.
In this paper, in order to alleviate the difficulty of extracting spatial-spectral prior information from hyperspectral data, we propose a novel network for the single HSI super-resolution task, namely the Feedback Refined Local-Global Network (FRLGN). FRLGN is motivated by the feedback mechanism [26], which lets a network transmit high-level semantic information back to previous layers and refine their low-level feature representations. Recently, some researchers have adopted this feedback mechanism in network architectures for various vision tasks [27,28,29]. For instance, Li et al. [29] proposed a recurrent neural network with a feedback manner for the multispectral image super-resolution task. Han et al. [30] designed a two-state recurrent neural network, which exchanges information between two hidden states in both directions. We design a Feedback Structure (FS) and a Local-Global Spectral Block (LGSB) in FRLGN, using the feedback mechanism to enhance the super-resolution of HSIs. Specifically, the Feedback Structure allows the network to use fed-back high-level information to correct low-level representations through feedback connections; the FS is realized by a recurrent structure with finite unfoldings. Furthermore, we construct a Local-Global Spectral Block to take full advantage of the fed-back high-level information. The LGSB is composed of local and global spectral feature extraction layers; it adjusts the input local spectral low-level representation using the fed-back high-level global spectral information and creates a powerful high-level global spectral representation. FRLGN is essentially a recurrent neural network with a Local-Global Spectral Block, specifically designed to explore the spatial and spectral priors of hyperspectral data. Experimental results indicate that the LGSB is well suited to the HSI super-resolution task.
Besides, we aggregate the losses of all iterations to optimize the network and make the fed-back high-level feature contain high-resolution HSI information. In summary, the principle of the feedback mechanism in our model is that the coarse reconstructed HSI from earlier iterations facilitates the generation of a better super-resolution HSI.
The main contributions of our work can be summarized as follows:
  • A novel Feedback Refined Local-Global Network is proposed for the single HSI super-resolution task, which can effectively explore spatial-spectral priors between spectral bands.
  • We construct a new Feedback Structure to correct the low-level representation using feedback high-level semantic information.
  • We design a Local-Global Spectral Block to refine the local spectral low-level representations using the feedback information, and then generate a more powerful global spectral high-level representation.

2. Proposed Method

In this section, a detailed description of FRLGN is presented, and then the proposed Feedback Structure and Local-Global Spectral Block are introduced. Finally, we describe the loss function used in FRLGN.

2.1. Network Architecture

As shown in Figure 1, the FRLGN is unfolded into T iterations, where each iteration t is ordered from 1 to T. We link the losses of all iterations together so that the hidden state in FRLGN can carry information about the high-resolution HSI; the specific details are given in the Loss Function subsection. The sub-network in each iteration t consists of three blocks: the Embedding Block, the Local-Global Spectral Block and the Reconstruction Block. All iterations share the weights of each block. For each iteration t, we also design a global skip connection that transmits an up-sampled HSI to the final output. Therefore, given a low-resolution HSI as input, each iteration t of the sub-network only needs to recover a residual image.
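The unfolded computation described above can be sketched as a simple loop. This is a toy NumPy sketch, not the paper's actual layers: `embedding_block`, `lgsb`, `reconstruction_block` and `upsample` are hypothetical placeholder functions that only mimic the data flow (shared sub-network run T times, feedback hidden state, global skip connection).

```python
import numpy as np

# Hypothetical toy stand-ins for the three blocks; each operates on arrays
# of shape (bands, H, W). Real blocks are learned convolutions.
def embedding_block(i_lr):           # shallow feature extraction (placeholder)
    return i_lr * 1.0

def lgsb(f_prev, f_eb):              # Local-Global Spectral Block (placeholder)
    return 0.5 * f_prev + 0.5 * f_eb

def reconstruction_block(f, scale):  # upscale + residual prediction (placeholder)
    return np.kron(f, np.ones((1, scale, scale)))

def upsample(i_lr, scale):           # stands in for bicubic upsampling
    return np.kron(i_lr, np.ones((1, scale, scale)))

def frlgn_forward(i_lr, T=3, scale=4):
    f_eb = embedding_block(i_lr)
    f_prev = f_eb                    # hidden state initialised with the shallow feature
    outputs = []
    for t in range(T):               # the same (shared-weight) sub-network runs T times
        f_prev = lgsb(f_prev, f_eb)                    # feedback connection
        i_res = reconstruction_block(f_prev, scale)    # residual image
        outputs.append(i_res + upsample(i_lr, scale))  # global skip connection
    return outputs                   # T intermediate super-resolution estimates

sr_list = frlgn_forward(np.random.rand(31, 8, 8), T=3, scale=4)
print(len(sr_list), sr_list[0].shape)   # 3 (31, 32, 32)
```

Each iteration thus produces its own super-resolution estimate, which is what allows a loss to be attached to every unfolding.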

2.1.1. The Embedding Block

Different from previous methods that treat the HSI as a whole or as multiple single-channel images, we divide the entire input low-resolution HSI into several groups. With this strategy, we can not only explore the correlation between adjacent spectral bands of the input HSI more easily, but also reduce the spectral dimension of the HSI. Specifically, the input low-resolution HSI $I_{LR}$ is divided into G spectral groups without overlapping. We pad the last spectral group with zero bands to ensure that each group has the same number of bands; more details are discussed in the experiment section. As shown in Figure 1, for each group $I_{LR}^g$, we use one convolution operation to extract its shallow feature $F_{EB}^g$. The mathematical formulations of the Embedding Block are as follows:
$I_{LR} = [I_{LR}^1, I_{LR}^2, \cdots, I_{LR}^g, \cdots, I_{LR}^G]$
$F_{EB}^g = f_{EB}(I_{LR}^g)$
$F_{EB} = [F_{EB}^1, F_{EB}^2, \cdots, F_{EB}^g, \cdots, F_{EB}^G]$
where $f_{EB}$ denotes the operations of the Embedding Block, i.e., the feature extraction layer shared by all groups, and $[\cdot]$ represents a concatenation operation. After that, $F_{EB}$ is used as input to the Local-Global Spectral Block.
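The non-overlapping grouping with zero padding can be illustrated concretely. This is our own minimal sketch of the band-splitting step (the helper name and the assumption that the per-group size is the ceiling of L/G are ours, inferred from the text):

```python
import numpy as np

def split_into_groups(i_lr, G):
    """Split an HSI of shape (L, H, W) into G non-overlapping band groups,
    zero-padding the last group so every group has the same band count."""
    L, H, W = i_lr.shape
    per_group = -(-L // G)                       # ceil(L / G)
    pad = per_group * G - L                      # number of zero bands to append
    if pad:
        i_lr = np.concatenate([i_lr, np.zeros((pad, H, W))], axis=0)
    return [i_lr[g * per_group:(g + 1) * per_group] for g in range(G)]

# e.g. a 31-band CAVE image split into G = 8 groups of 4 bands (1 zero band added)
groups = split_into_groups(np.random.rand(31, 4, 4), G=8)
print(len(groups), groups[0].shape)   # 8 (4, 4, 4)
```

With 31 bands and G = 8, each group holds 4 bands and the last group carries one all-zero band.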

2.1.2. The Local-Global Spectral Block

For the t-th iteration, the Local-Global Spectral Block receives the shallow feature $F_{EB}$ and the hidden state from the previous iteration, $F_{LGSB}^{t-1}$, through a feedback connection. $F_{LGSB}^{t}$ denotes the result of the LGSB. The mathematical formula of the LGSB is as follows:
$F_{LGSB}^{t} = f_{LGSB}(F_{LGSB}^{t-1}, F_{EB})$
where $f_{LGSB}$ denotes the operations of the LGSB. More details can be found in the Local-Global Spectral Block subsection.

2.1.3. The Reconstruction Block

The Reconstruction Block first uses PixelShuffle [31] to upscale the feature $F_{LGSB}^{t}$ to high resolution, and then a 3 × 3 convolution is applied to create the residual image $I_{Res}^{t}$. The formula of the Reconstruction Block is defined as:
$I_{Res}^{t} = f_{RB}(F_{LGSB}^{t})$
where $f_{RB}$ is the operation of the Reconstruction Block.
For the t-th iteration, the output super-resolution image $I_{SR}^{t}$ is obtained by:
$I_{SR}^{t} = I_{Res}^{t} + f_{UP}(I_{LR})$
where $f_{UP}$ represents an upsampling operation. The choice of upsampling method is arbitrary; in this paper, we apply a bicubic upsampling approach. After T iterations, we generate T super-resolution images $(I_{SR}^1, I_{SR}^2, \cdots, I_{SR}^T)$.

2.2. Feedback Structure

In the HSI super-resolution task, some researchers [16,25,32] have introduced recurrent structures to improve super-resolution results. However, in their network frameworks, the information flow from the low-resolution HSI to the final super-resolution HSI is still feed-forward. As can be seen from Figure 2b, the recurrent structure adopted by these methods can be abstracted as a single-state recurrent network. These methods improve the feature representation of the model by running recursively on a specially designed network structure.
In this work, we design a Feedback Structure that reroutes the output of the HSI super-resolution system to correct the input in each iteration. Figure 2a illustrates the Feedback Structure of FRLGN. Specifically, the Local-Global Spectral Block receives the information of the input low-resolution HSI and the fed-back high-level information from the last iteration, then generates a coarse super-resolution result and high-level semantic guidance information for the next iteration. The Feedback Structure can be characterized by:
$I_{SR}^{t} = f_{FS}(I_{LR}, I_{SR}^{t-1})$
where $f_{FS}$ denotes the function of the Feedback Structure.

2.3. Local-Global Spectral Block

As an ill-posed problem, image super-resolution requires additional prior knowledge to regularize the reconstruction process. Traditional super-resolution methods usually construct regularization terms for the super-resolution model [33,34], such as low-rank [35,36], nonlocal self-similarity [37,38] and sparsity [39,40] priors. Whether the designed prior knowledge can characterize the observed HSI data directly determines the performance of the super-resolution method. Therefore, for the HSI super-resolution task, it is also essential to study the inherent characteristics of hyperspectral data, e.g., the spatial nonlocal self-similarity and the high correlation among spectral bands [41]. However, manually designed constraints are not enough to achieve accurate restoration of HSIs.
In this work, a novel Local-Global Spectral Block is introduced to exploit the spatial-spectral prior with the help of fed-back high-level semantic information from the hidden state. As can be seen in Figure 3a, at iteration t the LGSB takes the fed-back global spectral high-level information $F_{LGSB}^{t-1}$ to correct the G groups of local spectral low-level representations, $F_{EB} = [F_{EB}^1, F_{EB}^2, \cdots, F_{EB}^g, \cdots, F_{EB}^G]$, and then creates a more effective high-level feature $F_{LGSB}^{t}$ for the next iteration and the Reconstruction Block. The LGSB contains G local spectral feature extraction layers (one per group) and one global spectral feature extraction layer. For simplicity, we use $Conv(k)$ and $Deconv(k)$ to denote a convolution and a deconvolution operation, respectively, where k is the kernel size.
At the beginning of the LGSB, the downsampled $F_{LGSB}^{t-1}$ and each group feature $F_{EB}^g$ are concatenated and compressed by one $Conv(1)$ operation, refining each input group feature $F_{EB}^g$ with the feedback information $F_{LGSB}^{t-1}$ and producing the refined group feature $F_g^t$:
$F_g^t = f_{Com}([f_{Down}(F_{LGSB}^{t-1}), F_{EB}^g])$
where $f_{Down}$ refers to a downsampling operation using average pooling with a kernel size of 2 and a stride of 2, $[f_{Down}(F_{LGSB}^{t-1}), F_{EB}^g]$ refers to the concatenation of $f_{Down}(F_{LGSB}^{t-1})$ and $F_{EB}^g$, and $f_{Com}$ denotes the initial compression operation.
After obtaining the refined group feature $F_g^t$, we add a local spectral feature extraction layer to explore the local spectral correlation, which consists of two residual blocks as shown in Figure 3b. Let $L_g^t$ be the g-th group local spectral LR feature map. $L_g^t$ can be obtained by:
$L_g^t = f_{Local}(F_g^t)$
where $f_{Local}$ denotes the local spectral feature extraction layer.
After that, we pass all the local spectral LR feature maps to the global spectral feature extraction layer, which contains one upsampling $Deconv(2)$ operation and two residual blocks. Note that we propose a new feedback strategy to enhance the high-level semantic information passed back: by adding an upsampling operation to the global spectral feature extraction layer, high-level information is transferred from high-resolution space back to low-resolution space. The effectiveness of this strategy is analyzed in the experimental section. Finally, the global spectral high-level feature $F_{LGSB}^{t}$ is obtained by:
$F_{LGSB}^{t} = f_{Global}([L_1^t, L_2^t, \cdots, L_G^t])$
where $f_{Global}$ denotes the global spectral feature extraction layer.

2.4. Loss Function

To measure HSI reconstruction performance, several loss functions have been investigated to make super-resolution results approximate the ground truth (high-resolution HSI), such as the $l_1$, $l_2$ and perceptual losses. Since the $l_1$ loss effectively penalizes small errors and converges better during training, we choose it to measure the HSI super-resolution performance. Finally, in order to make the fed-back high-level feature contain high-resolution HSI information, the output of FRLGN is the average of all intermediate super-resolution results. Thus, we have
$I_{SR} = \frac{1}{T} \sum_{t=1}^{T} I_{Res}^{t} + f_{UP}(I_{LR})$
The loss function of FRLGN is determined by:
$L(\Theta) = \| I_{HR} - I_{SR} \|_1$
where $\Theta$ represents the parameters of the proposed FRLGN and $I_{HR}$ is the corresponding target high-resolution HSI. The training procedure of FRLGN is shown in Algorithm 1.
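The averaged output and $l_1$ loss above can be sketched directly. This toy NumPy example (function name and summed rather than mean reduction are our own choices) averages the T residuals, adds the upsampled input, and evaluates the $l_1$ distance to the ground truth:

```python
import numpy as np

def frlgn_output_and_loss(residuals, i_lr_up, i_hr):
    """Average the T intermediate residuals, add the upsampled input
    (global skip), and measure the l1 loss against the ground truth."""
    i_sr = np.mean(residuals, axis=0) + i_lr_up   # (1/T) * sum_t I_Res^t + f_UP(I_LR)
    loss = np.abs(i_hr - i_sr).sum()              # l1 loss (sum reduction)
    return i_sr, loss

res = [np.ones((31, 8, 8)) * t for t in (1.0, 2.0, 3.0)]   # T = 3 toy residuals
up = np.zeros((31, 8, 8))
i_hr = np.full((31, 8, 8), 2.0)
i_sr, loss = frlgn_output_and_loss(res, up, i_hr)
print(i_sr[0, 0, 0], loss)   # 2.0 0.0
```

Because the loss touches every iteration's residual, gradients reach each unfolding and push the hidden state to carry high-resolution information.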
Algorithm 1: Training Process of FRLGN

3. Experiments and Results

3.1. Datasets

3.1.1. CAVE Dataset

The CAVE dataset [42] is an HSI dataset of real-world materials and objects captured by a cooled CCD camera. The hyperspectral camera collects information over the 400–700 nm spectral range in 10 nm steps. The dataset consists of 32 HSIs of size 512 × 512 × 31 pixels, which are further divided into 5 groups, namely food and drinks, skin and hair, paints, real and fake, and stuff (e.g., feathers, flowers, superballs, etc.). Each band is stored in 16-bit PNG format.

3.1.2. Harvard Dataset

The Harvard dataset [43] contains 77 HSIs of size 1040 × 1392 × 31 from outdoor and indoor scenes. These HSIs are captured by a commercial hyperspectral camera that collects spectral data in 10 nm steps over the wavelength range of 400 nm to 700 nm. Different from the CAVE dataset, this dataset is stored as .mat files.

3.1.3. Chikusei Dataset

The Chikusei dataset [44] consists of 2517 × 2335 pixels with a spatial resolution of 2.5 m. It was acquired by an airborne hyperspectral imaging sensor over agricultural and urban areas in Chikusei, Japan, and captures 128 spectral bands from 363 nm to 1018 nm; it is also stored as a .mat file. Owing to missing information at the image edges, we first crop the original HSI to 2304 × 2048 × 128 pixels and then split the cropped image into a training set and a test set. In particular, we extract the top region of the cropped image to create the test set, which consists of four non-overlapping HSIs of 512 × 512 × 128 pixels. The remaining region is used as training data.

3.2. Implementation Details

Since HSIs are collected by different hyperspectral imaging sensors, HSI datasets tend to have different numbers of spectral channels. Therefore, we need to learn a super-resolution HSI model separately for each HSI dataset. In the next experiments, 80% of samples in the dataset are used to train the super-resolution models and the remaining samples are utilized for testing.
During training, 12 randomly selected patches are fed to the FRLGN network. In order to obtain low-resolution HSIs, we down-sample these patches to 32 × 32 × L pixels according to the scale factor s, where L is the number of spectral bands. Similar to [24], we set the scale factor s to 4 and 8 in the following experiments, and use bicubic interpolation to down-sample the patches. In our network, convolutions with a kernel size of 3 adopt zero padding so that intermediate features keep the same spatial size. The base number of filters is set to 256. In the LGSB, we up-sample the resulting features by a factor of 2 using a deconvolution with a kernel size of 2 and a stride of 2. Adam [45] with an initial learning rate of $2 \times 10^{-4}$ is used to optimize the FRLGN network.
At the testing stage, in order to improve testing efficiency, we use only the 512 × 512 area in the upper-left corner of the test HSIs for evaluation. The PyTorch library is used to implement and train the proposed FRLGN network.

3.3. Evaluation Metrics

In this section, we choose six commonly used quantitative metrics to evaluate the performance of FRLGN, i.e., cross correlation (CC) [46], spectral angle mapper (SAM) [47], root mean squared error (RMSE), the erreur relative globale adimensionnelle de synthese (ERGAS) [48], peak signal-to-noise ratio (PSNR) and structure similarity (SSIM) [49]. As the CC, RMSE, PSNR and SSIM are widely used quantitative metrics in HSI super-resolution tasks, we omit their detailed description here. In addition, ERGAS performs a global statistical measure on the reconstructed HSIs, which is calculated by
$ERGAS(I_{HR}, I_{SR}) = \frac{100}{s} \sqrt{\frac{1}{L} \sum_{l=1}^{L} \left( \frac{RMSE_l}{\mu_l} \right)^2}$
in which $RMSE_l = \| I_{SR}^l - I_{HR}^l \|_F / \sqrt{n}$. Here, n and $\mu_l$ represent the number of spatial pixels and the mean of the l-th band of the ground truth $I_{HR}$, respectively; $I_{SR}^l$ and $I_{HR}^l$ denote the l-th band of $I_{SR}$ and $I_{HR}$. SAM evaluates the preservation of spectral information at each spatial location of the HSI; it is obtained by computing the angle between the two spectral vectors at the same spatial position of $I_{SR}$ and $I_{HR}$. The formula of SAM is presented as
$SAM(x, \hat{x}) = \arccos \frac{\langle x, \hat{x} \rangle}{\| x \|_2 \, \| \hat{x} \|_2}$
in which $\hat{x}$ and x denote the two spectral vectors from $I_{SR}$ and $I_{HR}$, respectively, $\langle \cdot, \cdot \rangle$ is the dot product of two vectors, and $\| x \|_2$ is the $l_2$ norm of a vector. For PSNR and SSIM, we report the average values over all spectral bands. The best values for CC, SAM, RMSE, ERGAS, PSNR and SSIM are 1, 0, 0, 0, $+\infty$ and 1, respectively.
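The two formulas above translate directly into code. This is a sketch of ERGAS and SAM in NumPy following the definitions given here (assuming HSIs in (L, H, W) layout and RMSE normalized by the square root of the pixel count):

```python
import numpy as np

def ergas(i_hr, i_sr, s):
    """ERGAS for HSIs of shape (L, H, W); s is the super-resolution scale factor."""
    L, H, W = i_hr.shape
    n = H * W
    # Per-band RMSE: Frobenius norm of the band error over sqrt(#pixels).
    rmse = np.linalg.norm((i_sr - i_hr).reshape(L, -1), axis=1) / np.sqrt(n)
    mu = i_hr.reshape(L, -1).mean(axis=1)          # per-band mean of the ground truth
    return (100.0 / s) * np.sqrt(np.mean((rmse / mu) ** 2))

def sam(i_hr, i_sr):
    """Mean spectral angle (radians) over all spatial positions."""
    x = i_hr.reshape(i_hr.shape[0], -1)            # spectral vectors as columns
    x_hat = i_sr.reshape(i_sr.shape[0], -1)
    cos = (x * x_hat).sum(0) / (np.linalg.norm(x, axis=0) * np.linalg.norm(x_hat, axis=0))
    return np.arccos(np.clip(cos, -1.0, 1.0)).mean()   # clip guards rounding noise

gt = np.random.rand(31, 8, 8) + 0.5
print(ergas(gt, gt, 4), sam(gt, gt))   # both ~0 for identical inputs
```

Reported SAM values are often converted to degrees; the radian/degree convention should match whatever the compared methods use.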

3.4. Study of T and G

In this part, we discuss the effect of the number of iterations (denoted T) and the number of local spectral groups (denoted G) in the Local-Global Spectral Block, with a scale factor of 4 on the CAVE dataset. Fixing G to 8, we first explore the influence of T on HSI reconstruction. Table 1 shows that super-resolution performance improves with the help of feedback connections compared to the network without them (T = 1). Moreover, reconstruction quality further improves as T increases, which also indicates that the proposed Local-Global Spectral Block benefits from cross-time feedback information. It is worth noting that as T grows, the performance of FRLGN gradually improves, but the computational cost of the model also keeps expanding. We then discuss the influence of G by fixing T to 6. From Table 2, we observe that with the local-spectral grouping strategy, spectral reconstruction performance is enhanced compared to the network without grouping (G = 1). In addition, as G increases, the spectral representation of FRLGN becomes more powerful and the spatial-spectral reconstruction quality improves. However, this also greatly increases the computational overhead, because more branches in FRLGN require more computation. Therefore, to balance computation and objective results, we set T = 6, G = 8 for the CAVE and Harvard datasets, and T = 6, G = 12 for the Chikusei dataset unless otherwise stated.

3.5. Study of Feedback Structure

In this subsection, we analyze the Feedback Structure from two aspects. On the one hand, we compare the HSI reconstruction performance of the LGSB-based feedback structure (LGSB + FS) and the LGSB-based traditional recursive structure (LGSB + TRS). Specifically, we construct a traditional recurrent super-resolution network with the LGSB as a recursive module, i.e., only the last iteration of the LGSB is fed into the Reconstruction Block to generate the final super-resolution result. Table 3 shows the quantitative comparison of LGSB + FS and LGSB + TRS on the CAVE dataset with a scale factor of 4. From Table 3, we observe that LGSB + FS performs better on 4 out of 6 metrics. In terms of spectral reconstruction (SAM), LGSB + FS is significantly better than LGSB + TRS, indicating that the feedback structure captures the spectral prior information of the HSI better; the spatial reconstruction of the two structures, however, differs little. On the other hand, we study the feedback strategy proposed in the LGSB, which transfers high-level information from high-resolution (HR) space to low-resolution (LR) space by adding an upsampling operation (HR to LR). To show the advantages of this strategy, we construct two other feedback strategies for comparison, namely LR to LR and LR, HR to LR. For the LR to LR strategy, we directly remove the upsampling operation in the LGSB to realize information transfer from LR space to LR space. For the LR, HR to LR strategy, we pass the outputs of both the local and the global spectral feature extraction layers back to guide low-level feature extraction. Table 4 presents the quantitative performance of the three feedback strategies on the CAVE dataset.
From Table 4, it can be seen that HR to LR achieves better performance than LR to LR on most quantitative metrics, which indicates that high-level information from HR space can guide the extraction of low-level features more effectively than high-level information from LR space. The reason is that the output from the HR space contains more high-resolution HSI information. Moreover, compared with HR to LR, the LR, HR to LR gets worse results, which indicates that directly transferring high-level information from HR space and LR space back to the shallow layers of the network reduces the feedback effect. In conclusion, the HR to LR strategy is more suitable for HSI super-resolution task than the other two feedback strategies.

3.6. Comparisons with the State-of-the-Art Methods

In this section, we evaluate the single image super-resolution effect of FRLGN in detail on three benchmarks, namely CAVE dataset [42], Harvard dataset [43] and Chikusei dataset [44]. Specifically, we compare the FRLGN with five existing super-resolution approaches, including two advanced deep multispectral image super-resolution methods, VDSR [50], RCAN [51] and three representative HSI super-resolution methods, 3DCNN [23], GDRRN [32] and SSPSR [24]. In addition, we carefully tune the hyper-parameters of these super-resolution methods to obtain a good performance. Moreover, the bicubic interpolation is used as our baseline model. Table 5, Table 6 and Table 7 depict the quantitative performance of all super-resolution algorithms over testing images on three datasets, where bold indicates the best results.
Table 5 shows that our FRLGN method outperforms the other methods on all objective assessment metrics. Specifically, the baseline approach performs worst among the compared algorithms. As competitive multispectral image super-resolution methods, VDSR and RCAN generate very satisfactory results. Nonetheless, compared with the HSI super-resolution methods, i.e., 3DCNN [23], GDRRN [32] and SSPSR [24], their spectral reconstruction (SAM) is relatively poor due to the lack of a network structure dedicated to exploring high-dimensional HSI data. This indicates that multispectral super-resolution approaches cannot effectively explore the spectral prior information of hyperspectral data. 3DCNN [23] and GDRRN [32] employ 3D convolutions and recursive residual blocks, respectively, to extract spatial-spectral prior information, and obtain good super-resolution results. Similar to our work, SSPSR [24] also adopts a grouping strategy but neglects the continuous relationship among band groups; it therefore achieves suboptimal results. Compared with the other SR methods, our proposed FRLGN obtains better performance in both the spectral and spatial dimensions. In terms of PSNR, FRLGN is 0.8 dB and 0.6 dB higher than the second-best method for upsampling factors s of 4 and 8, respectively. Table 6 and Table 7 show similar results. In conclusion, FRLGN presents advantages over the compared SR methods on all three datasets, especially in PSNR and SSIM.
To further demonstrate the effectiveness of FRLGN, Figure 4 and Figure 5 display the mean absolute error maps across all spectral bands of two HSIs with scale factor × 4 from the CAVE and Harvard testing datasets, respectively. In principle, the bluer the error map, the better the reconstructed HSI. From Figure 4 and Figure 5, we can easily see that FRLGN obtains better reconstruction fidelity when restoring the spatial information of the original HSI. Specifically, compared with the second-best SSPSR method, FRLGN performs better in reconstructing textures such as edges and structures. Besides, we also display two reconstructed high-resolution HSIs from the Chikusei test dataset with a downsampling factor of 4 in Figure 6 and Figure 7. As can be seen from Figure 6 and Figure 7, our FRLGN restores finer texture details than the other comparison methods.
In addition, to prove our advantage in reconstructing spectral information, Figure 8, Figure 9 and Figure 10 show the average absolute difference of all comparison methods along the spectral dimension. The average spectral error curve has a better visualization effect than displaying the spectral reflectance of multiple locations. As shown in Figure 8, Figure 9 and Figure 10, our method has the lowest average spectral error curve, which indicates that FRLGN has better spectral reconstruction ability. This can be attributed to the guidance of the global spectral feedback information to the local spectral band group. Moreover, as iterations increase, the local spectral group information gradually accumulates, leading to better spectral reconstruction performance.

4. Conclusions

Considering the difficulties of simultaneously exploring the spatial and spectral information of hyperspectral data, a new approach for the single HSI super-resolution task has been proposed, called Feedback Refined Local-Global Network. FRLGN can produce a clear high-resolution HSI by introducing a Feedback Structure and a Local-Global Spectral Block. In particular, a recurrent neural network with feedback connection has been built, and it is able to refine low-level feature representations by using feedback global spectral high-level semantic information. Furthermore, taking advantage of the feedback high-level semantic information, a Local-Global Spectral Block is designed to guide the extraction process of low-level representations between local spectral bands using the feedback information, and then generate a more powerful high-level feature among global spectral bands. With the increasing number of iterations, the spatial-spectral prior gradually accumulates, leading to better HSI reconstruction performance. The comprehensive experimental results and visual data analysis show the effectiveness of the proposed FRLGN.
Although FRLGN achieves good results, its practical deployment is hindered by the heavy computation introduced by the recursive structure. Nevertheless, the main contribution of this work is the mechanism by which global spectral high-level information guides local spectral feature extraction, which improves the ability of convolutional neural networks to extract features from high-dimensional data. In future work, we will therefore explore lightweight network structures to reduce the computational overhead.

Author Contributions

Methodology, Z.T., Q.X. and B.P.; Software, Z.T.; Writing—original draft, Z.T. and B.P.; Writing—review & editing, P.W., Z.S. and B.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key R&D Program of China under the Grant 2019YFC1510905, the National Natural Science Foundation of China under the Grant 62001252, the Beijing-Tianjin-Hebei Basic Research Cooperation Project under the Grant F2021203109 and the Scientific and Technological Research Project of Hebei Province Universities and Colleges under the Grant ZD2021311.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used in this study are publicly available.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Q.; Yuan, Z.; Du, Q.; Li, X. GETNET: A General End-to-End 2-D CNN Framework for Hyperspectral Image Change Detection. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3–13.
  2. Zou, Z.; Shi, Z. Hierarchical Suppression Method for Hyperspectral Target Detection. IEEE Trans. Geosci. Remote Sens. 2016, 54, 330–342.
  3. Liu, J.; Wu, Z.; Xiao, L.; Sun, J.; Yan, H. A Truncated Matrix Decomposition for Hyperspectral Image Super-Resolution. IEEE Trans. Image Process. 2020, 29, 8028–8042.
  4. Du, B.; Zhang, M.; Zhang, L.; Hu, R.; Tao, D. PLTD: Patch-Based Low-Rank Tensor Decomposition for Hyperspectral Images. IEEE Trans. Multimed. 2017, 19, 67–79.
  5. Shi, C.; Pun, C.M. Multiscale Superpixel-Based Hyperspectral Image Classification Using Recurrent Neural Networks With Stacked Autoencoders. IEEE Trans. Multimed. 2020, 22, 487–501.
  6. Li, Q.; Wang, Q.; Li, X. Exploring the Relationship Between 2D/3D Convolution for Hyperspectral Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8693–8703.
  7. Pan, B.; Qu, Q.; Xu, X.; Shi, Z. Structure–Color Preserving Network for Hyperspectral Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12.
  8. Wang, Q.; Li, Q.; Li, X. Hyperspectral Image Superresolution Using Spectrum and Feature Context. IEEE Trans. Ind. Electron. 2021, 68, 11276–11285.
  9. Xue, J.; Zhao, Y.Q.; Bu, Y.; Liao, W.; Chan, J.C.W.; Philips, W. Spatial-Spectral Structured Sparse Low-Rank Representation for Hyperspectral Image Super-Resolution. IEEE Trans. Image Process. 2021, 30, 3084–3097.
  10. Fu, Y.; Liang, Z.; You, S. Bidirectional 3D Quasi-Recurrent Neural Network for Hyperspectral Image Super-Resolution. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2674–2688.
  11. Liu, D.; Li, J.; Yuan, Q. A Spectral Grouping and Attention-Driven Residual Dense Network for Hyperspectral Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7711–7725.
  12. Wang, X.; Ma, J.; Jiang, J.; Zhang, X.P. Dilated projection correction network based on autoencoder for hyperspectral image super-resolution. Neural Netw. 2022, 146, 107–119.
  13. Wei, Q.; Bioucas-Dias, J.; Dobigeon, N.; Tourneret, J.Y. Hyperspectral and Multispectral Image Fusion Based on a Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3658–3668.
  14. Wan, W.; Guo, W.; Huang, H.; Liu, J. Nonnegative and Nonlocal Sparse Tensor Factorization-Based Hyperspectral Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8384–8394.
  15. Wei, W.; Nie, J.; Zhang, L.; Zhang, Y. Unsupervised Recurrent Hyperspectral Imagery Super-Resolution Using Pixel-Aware Refinement. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
  16. Wei, W.; Nie, J.; Li, Y.; Zhang, L.; Zhang, Y. Deep Recursive Network for Hyperspectral Image Super-Resolution. IEEE Trans. Comput. Imaging 2020, 6, 1233–1244.
  17. Huang, H.; Yu, J.; Sun, W. Super-resolution mapping via multi-dictionary based sparse representation. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 3523–3527.
  18. Wang, Y.; Chen, X.; Han, Z.; He, S. Hyperspectral image super-resolution via nonlocal low-rank tensor approximation and total variation regularization. Remote Sens. 2017, 9, 1286.
  19. Yang, W.; Zhang, X.; Tian, Y.; Wang, W.; Xue, J.H.; Liao, Q. Deep Learning for Single Image Super-Resolution: A Brief Review. IEEE Trans. Multimed. 2019, 21, 3106–3121.
  20. Wang, G.; Zuluaga, M.A.; Li, W.; Pratt, R.; Patel, P.A.; Aertsen, M.; Doel, T.; David, A.L.; Deprest, J.; Ourselin, S.; et al. DeepIGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1559–1572.
  21. Pei, Y.; Huang, Y.; Zou, Q.; Zhang, X.; Wang, S. Effects of Image Degradation and Degradation Removal to CNN-Based Image Classification. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 1239–1253.
  22. Hu, J.; Li, Y.; Xie, W. Hyperspectral Image Super-Resolution by Spectral Difference Learning and Spatial Error Correction. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1825–1829.
  23. Mei, S.; Yuan, X.; Ji, J.; Zhang, Y.; Wan, S.; Du, Q. Hyperspectral image spatial super-resolution via 3D full convolutional neural network. Remote Sens. 2017, 9, 1139.
  24. Jiang, J.; Sun, H.; Liu, X.; Ma, J. Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery. IEEE Trans. Comput. Imaging 2020, 6, 1082–1096.
  25. Wang, X.; Ma, J.; Jiang, J. Hyperspectral Image Super-Resolution via Recurrent Feedback Embedding and Spatial–Spectral Consistency Regularization. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13.
  26. Hupé, J.; James, A.; Payne, B.; Lomber, S.; Girard, P.; Bullier, J. Cortical feedback improves discrimination between figure and background by V1, V2 and V3 neurons. Nature 1998, 394, 784–787.
  27. Tian, C.; Xu, Y.; Zuo, W.; Zhang, B.; Fei, L.; Lin, C.W. Coarse-to-Fine CNN for Image Super-Resolution. IEEE Trans. Multimed. 2021, 23, 1489–1502.
  28. Carreira, J.; Agrawal, P.; Fragkiadaki, K.; Malik, J. Human Pose Estimation with Iterative Error Feedback. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 4733–4742.
  29. Li, Z.; Yang, J.; Liu, Z.; Yang, X.; Jeon, G.; Wu, W. Feedback Network for Image Super-Resolution. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3862–3871.
  30. Han, W.; Chang, S.; Liu, D.; Yu, M.; Witbrock, M.; Huang, T.S. Image Super-Resolution via Dual-State Recurrent Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1654–1663.
  31. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1874–1883.
  32. Li, Y.; Zhang, L.; Dingl, C.; Wei, W.; Zhang, Y. Single Hyperspectral Image Super-Resolution with Grouped Deep Recursive Residual Network. In Proceedings of the 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM), Xi’an, China, 13–16 September 2018; pp. 1–4.
  33. Zhang, M.; Ling, Q. Supervised Pixel-Wise GAN for Face Super-Resolution. IEEE Trans. Multimed. 2021, 23, 1938–1950.
  34. Irmak, H.; Akar, G.B.; Yuksel, S.E. A MAP-Based Approach for Hyperspectral Imagery Super-Resolution. IEEE Trans. Image Process. 2018, 27, 2942–2951.
  35. Veganzones, M.A.; Simões, M.; Licciardi, G.; Yokoya, N.; Bioucas-Dias, J.M.; Chanussot, J. Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data. IEEE Trans. Image Process. 2016, 25, 274–288.
  36. Dian, R.; Li, S. Hyperspectral Image Super-Resolution via Subspace-Based Low Tensor Multi-Rank Regularization. IEEE Trans. Image Process. 2019, 28, 5135–5146.
  37. Jiang, J.; Ma, X.; Chen, C.; Lu, T.; Wang, Z.; Ma, J. Single Image Super-Resolution via Locally Regularized Anchored Neighborhood Regression and Nonlocal Means. IEEE Trans. Multimed. 2017, 19, 15–26.
  38. Han, X.H.; Shi, B.; Zheng, Y. Self-Similarity Constrained Sparse Representation for Hyperspectral Image Super-Resolution. IEEE Trans. Image Process. 2018, 27, 5625–5637.
  39. Peng, Y.; Li, W.; Luo, X.; Du, J. Hyperspectral Image Superresolution Using Global Gradient Sparse and Nonlocal Low-Rank Tensor Decomposition With Hyper-Laplacian Prior. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 5453–5469.
  40. Zhu, Z.; Guo, F.; Yu, H.; Chen, C. Fast Single Image Super-Resolution via Self-Example Learning and Sparse Representation. IEEE Trans. Multimed. 2014, 16, 2178–2190.
  41. Wang, S.; Zhou, T.; Lu, Y.; Di, H. Contextual Transformation Network for Lightweight Remote-Sensing Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13.
  42. Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S.K. Generalized Assorted Pixel Camera: Postcapture Control of Resolution, Dynamic Range, and Spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253.
  43. Chakrabarti, A.; Zickler, T. Statistics of Real-World Hyperspectral Images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 193–200.
  44. Yokoya, N.; Iwasaki, A. Airborne Hyperspectral Data over Chikusei; Technical Report SAL-2016-05-27; Space Application Laboratory, University of Tokyo: Tokyo, Japan, 2016.
  45. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
  46. Loncan, L.; de Almeida, L.B.; Bioucas-Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simões, M.; et al. Hyperspectral Pansharpening: A Review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 27–46.
  47. Yuhas, R.H.; Goetz, A.F.; Boardman, J.W. Discrimination among semi-arid landscape endmembers using the spectral angle mapper (SAM) algorithm. In Proceedings of the Summaries 3rd Annual JPL Airborne Geoscience Workshop, Pasadena, CA, USA, 1–5 June 1992; Volume 1, pp. 147–149.
  48. Wald, L. Data Fusion: Definitions and Architectures: Fusion of Images of Different Spatial Resolutions; Presses des MINES: Paris, France, 2002.
  49. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  50. Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1646–1654.
  51. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301.
Figure 1. The overview of our proposed Feedback Refined Local-Global Network (FRLGN). The blue arrows are the feedback connections and the Local-Global Spectral Block (LGSB) represented by the trapezoidal block is specifically designed for the task of super-resolution of hyperspectral images.
Figure 2. (a) The illustration of the Feedback Structure (FS) in the proposed FRLGN network and the blue arrow is the feedback connection. (b) The architecture of Traditional Recurrent Structure.
Figure 3. The network architecture of the Local-Global Spectral Block (LGSB): (a) the Local-Global Spectral Block, (b) the Residual Block.
Figure 4. Mean error maps of superballs and paints hyperspectral images from the CAVE testing dataset with a scale factor of 4.
Figure 5. Mean error maps of two hyperspectral images from the Harvard testing dataset with the scale factor 4.
Figure 6. The third reconstructed hyperspectral image from the Chikusei testing dataset with the scale factor 4, in which the bands 70-100-36 are treated as R-G-B.
Figure 7. The fourth reconstructed hyperspectral image from the Chikusei testing dataset with the scale factor 4, in which the bands 70-100-36 are treated as R-G-B.
Figure 8. Mean spectral difference curve of two hyperspectral images from the CAVE testing dataset with the scale factor 4: (a) sushi, (b) chart_and_stuffed.
Figure 9. Mean spectral difference curve of two hyperspectral images (a,b) from the Harvard testing dataset with the scale factor 4.
Figure 10. Mean spectral difference curve of two hyperspectral images (a,b) from the Chikusei testing dataset with the scale factor 4.
Table 1. Convergence analysis of T when G = 8.

| T | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| PSNR | 37.0815 | 37.5073 | 37.7574 | 37.8131 | 37.8639 | 37.8905 |
| SAM | 3.7272 | 3.5754 | 3.5256 | 3.4789 | 3.4585 | 3.4332 |
Table 2. Convergence analysis of G when T = 6.

| G | 1 | 2 | 4 | 8 |
|---|---|---|---|---|
| PSNR | 37.7251 | 37.7329 | 37.8201 | 37.8905 |
| SAM | 3.5977 | 3.5276 | 3.4792 | 3.4332 |
Table 3. Quantitative analysis of feedback structure and traditional recursive structure.

| Method | CC↑ | SAM↓ | RMSE↓ | ERGAS↓ | PSNR↑ | SSIM↑ |
|---|---|---|---|---|---|---|
| LGSB + TRS | 0.9930 | 3.5393 | 0.0151 | 5.1665 | 37.9050 | 0.9734 |
| LGSB + FS | 0.9930 | 3.4332 | 0.0152 | 5.1599 | 37.8905 | 0.9737 |
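The six metrics reported in Tables 3-7 follow standard definitions. Two of them, SAM [47] and PSNR, can be sketched in NumPy as follows (a minimal version assuming intensities scaled to [0, 1]; this is illustrative, not the authors' evaluation code):

```python
import numpy as np

def sam_degrees(gt, sr, eps=1e-8):
    """Spectral Angle Mapper: mean angle (degrees) between the ground-truth
    and reconstructed spectrum at every pixel; 0 means identical directions."""
    g = gt.reshape(-1, gt.shape[-1]).astype(np.float64)
    s = sr.reshape(-1, sr.shape[-1]).astype(np.float64)
    cos = (g * s).sum(1) / (np.linalg.norm(g, axis=1) * np.linalg.norm(s, axis=1) + eps)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())

def psnr(gt, sr, peak=1.0):
    """Peak signal-to-noise ratio in dB, assuming intensities in [0, peak]."""
    mse = np.mean((gt.astype(np.float64) - sr.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

# toy checks: orthogonal spectra are 90 degrees apart;
# a uniform 0.1 error against a zero image gives 20 dB
a = np.array([[1.0, 0.0]])
b = np.array([[0.0, 1.0]])
```

Lower SAM means better spectral fidelity, while higher PSNR means better spatial fidelity, which is why the tables mark them SAM↓ and PSNR↑.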
Table 4. Quantitative analysis of three feedback strategies.

| Strategy | s | CC↑ | SAM↓ | RMSE↓ | ERGAS↓ | PSNR↑ | SSIM↑ |
|---|---|---|---|---|---|---|---|
| LR to LR | 4 | 0.9930 | 3.4430 | 0.0153 | 5.1929 | 37.9194 | 0.9728 |
| LR, HR to LR | 4 | 0.9929 | 3.4416 | 0.0154 | 5.2385 | 37.8239 | 0.9731 |
| HR to LR | 4 | 0.9930 | 3.4332 | 0.0152 | 5.1599 | 37.8905 | 0.9737 |
| LR to LR | 8 | 0.9713 | 5.2059 | 0.0325 | 10.3697 | 31.3910 | 0.9139 |
| LR, HR to LR | 8 | 0.9706 | 5.1854 | 0.0329 | 10.4487 | 31.3267 | 0.9146 |
| HR to LR | 8 | 0.9712 | 5.0550 | 0.0323 | 10.3982 | 31.4007 | 0.9159 |
Table 5. Quantitative analysis of seven different comparison methods on the CAVE test dataset involving six metrics.

| Method | s | CC↑ | SAM↓ | RMSE↓ | ERGAS↓ | PSNR↑ | SSIM↑ |
|---|---|---|---|---|---|---|---|
| Bicubic | 4 | 0.9846 | 5.1832 | 0.0224 | 7.7384 | 34.5069 | 0.9472 |
| VDSR [50] | 4 | 0.9896 | 4.3622 | 0.0188 | 6.3067 | 36.1348 | 0.9612 |
| RCAN [51] | 4 | 0.9913 | 4.3058 | 0.0172 | 5.7796 | 36.7979 | 0.9657 |
| 3DCNN [23] | 4 | 0.9862 | 4.2297 | 0.0212 | 7.3182 | 34.9853 | 0.9549 |
| GDRRN [32] | 4 | 0.9891 | 4.2970 | 0.0192 | 6.5087 | 35.8465 | 0.9594 |
| SSPSR [24] | 4 | 0.9915 | 3.7384 | 0.0168 | 5.7527 | 37.0479 | 0.9682 |
| FRLGN | 4 | 0.9930 | 3.4332 | 0.0152 | 5.1599 | 37.8905 | 0.9737 |
| Bicubic | 8 | 0.9564 | 7.3210 | 0.0385 | 12.8323 | 29.5763 | 0.8741 |
| VDSR [50] | 8 | 0.9615 | 5.8692 | 0.0369 | 12.0527 | 30.0080 | 0.8999 |
| RCAN [51] | 8 | 0.9671 | 5.9008 | 0.0340 | 11.1373 | 30.7372 | 0.9061 |
| 3DCNN [23] | 8 | 0.9594 | 5.6079 | 0.0370 | 12.3341 | 29.8880 | 0.8961 |
| GDRRN [32] | 8 | 0.9611 | 5.8864 | 0.0368 | 12.0684 | 30.0042 | 0.8966 |
| SSPSR [24] | 8 | 0.9675 | 5.6617 | 0.0341 | 11.0506 | 30.7976 | 0.9098 |
| FRLGN | 8 | 0.9712 | 5.0550 | 0.0323 | 10.3982 | 31.4007 | 0.9159 |
Table 6. Quantitative analysis of seven different comparison methods on the Harvard test dataset involving six metrics.

| Method | s | CC↑ | SAM↓ | RMSE↓ | ERGAS↓ | PSNR↑ | SSIM↑ |
|---|---|---|---|---|---|---|---|
| Bicubic | 4 | 0.9606 | 2.5671 | 0.0101 | 3.0957 | 43.9037 | 0.9582 |
| VDSR [50] | 4 | 0.9640 | 2.5709 | 0.0090 | 2.8602 | 44.6486 | 0.9634 |
| RCAN [51] | 4 | 0.9671 | 2.4097 | 0.0086 | 2.7537 | 45.1204 | 0.9663 |
| 3DCNN [23] | 4 | 0.9614 | 2.3917 | 0.0098 | 3.0324 | 44.1815 | 0.9600 |
| GDRRN [32] | 4 | 0.9630 | 2.4924 | 0.0093 | 2.9276 | 44.4577 | 0.9620 |
| SSPSR [24] | 4 | 0.9704 | 2.2766 | 0.0082 | 2.5893 | 45.5460 | 0.9684 |
| FRLGN | 4 | 0.9722 | 2.2496 | 0.0074 | 2.4463 | 46.1866 | 0.9730 |
| Bicubic | 8 | 0.9098 | 3.0165 | 0.0179 | 5.0694 | 39.6681 | 0.9131 |
| VDSR [50] | 8 | 0.9185 | 3.0093 | 0.0165 | 4.7369 | 40.2490 | 0.9223 |
| RCAN [51] | 8 | 0.9312 | 2.7808 | 0.0150 | 4.3438 | 40.9853 | 0.9313 |
| 3DCNN [23] | 8 | 0.9128 | 2.7853 | 0.0172 | 4.9422 | 39.9615 | 0.9175 |
| GDRRN [32] | 8 | 0.9175 | 2.8669 | 0.0166 | 4.7946 | 40.1831 | 0.9214 |
| SSPSR [24] | 8 | 0.9338 | 2.6202 | 0.0149 | 4.2458 | 41.1869 | 0.9313 |
| FRLGN | 8 | 0.9373 | 2.7665 | 0.0139 | 4.0316 | 41.6320 | 0.9374 |
Table 7. Quantitative analysis of seven different comparison methods on the Chikusei test dataset involving six metrics.

| Method | s | CC↑ | SAM↓ | RMSE↓ | ERGAS↓ | PSNR↑ | SSIM↑ |
|---|---|---|---|---|---|---|---|
| Bicubic | 4 | 0.8987 | 3.7666 | 0.0176 | 7.6532 | 36.5603 | 0.8882 |
| VDSR [50] | 4 | 0.9176 | 3.1003 | 0.0155 | 6.9534 | 37.5648 | 0.9113 |
| RCAN [51] | 4 | 0.9142 | 3.0936 | 0.0156 | 7.1099 | 37.4313 | 0.9104 |
| 3DCNN [23] | 4 | 0.9047 | 3.4808 | 0.0169 | 7.3419 | 36.9090 | 0.8931 |
| GDRRN [32] | 4 | 0.9144 | 3.2178 | 0.0159 | 7.0426 | 37.3754 | 0.9060 |
| SSPSR [24] | 4 | 0.9250 | 2.8281 | 0.0148 | 6.6082 | 37.9698 | 0.9193 |
| FRLGN | 4 | 0.9283 | 2.7580 | 0.0143 | 6.4953 | 38.2085 | 0.9240 |
| Bicubic | 8 | 0.7546 | 5.9617 | 0.0274 | 11.9665 | 32.7047 | 0.7829 |
| VDSR [50] | 8 | 0.7840 | 5.3103 | 0.0250 | 10.9097 | 33.4964 | 0.8069 |
| RCAN [51] | 8 | 0.7630 | 6.5447 | 0.0258 | 11.9078 | 33.0475 | 0.7946 |
| 3DCNN [23] | 8 | 0.7723 | 5.5506 | 0.0257 | 11.0971 | 33.3107 | 0.7955 |
| GDRRN [32] | 8 | 0.7842 | 5.3033 | 0.0249 | 10.9107 | 33.5236 | 0.8062 |
| SSPSR [24] | 8 | 0.7880 | 5.2415 | 0.0247 | 10.7863 | 33.6194 | 0.8106 |
| FRLGN | 8 | 0.7887 | 5.2122 | 0.0246 | 10.8033 | 33.6332 | 0.8145 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Tang, Z.; Xu, Q.; Wu, P.; Shi, Z.; Pan, B. Feedback Refined Local-Global Network for Super-Resolution of Hyperspectral Imagery. Remote Sens. 2022, 14, 1944. https://doi.org/10.3390/rs14081944
