Hyperspectral Imagery Super-Resolution by Adaptive POCS and Blur Metric

The spatial resolution of a hyperspectral image is often coarse due to the limitations of the imaging hardware. A novel super-resolution reconstruction algorithm for hyperspectral imagery (HSI) via adaptive projection onto convex sets and image blur metric (APOCS-BM) is proposed in this paper to solve this problem. Firstly, a no-reference image blur metric assessment method based on the Gabor wavelet transform is utilized to obtain the blur metric of the low-resolution (LR) image. Then, the bound used in the APOCS is automatically calculated from the LR image blur metric. Finally, the high-resolution (HR) image is reconstructed by the APOCS method. With the contribution of APOCS and the image blur metric, the fixed bound problem in POCS is solved, and the image blur information is utilized during the reconstruction of the HR image, which effectively enhances the spatial-spectral information and improves the reconstruction accuracy. The experimental results for the PaviaU, PaviaC and Jinyin Tan datasets indicate that the proposed method not only enhances the spatial resolution, but also preserves the HSI spectral information well.


Introduction
Hyperspectral imagery (HSI), containing about 200 spectral bands in the visible and infrared wavelength regions, is an efficient way to describe and store visual information [1,2]. HSI also has a wide range of applications such as terrain classification, mineral detection and exploration, environmental studies, pharmaceutical counterfeiting, and military surveillance [3][4][5][6][7]. Our focus in this paper is on the remote sensing field, wherein the spectral images are typically gained by airborne or spaceborne sensors. However, due to the high spectral sensitivity required when designing the imaging hardware, the spatial resolution of a hyperspectral image is often coarse [8]. Therefore, the image super-resolution reconstruction (SRR) technique is utilized to improve the spatial resolution of hyperspectral images. A high-resolution (HR) image is gained from a sequence of observed low-resolution (LR) images through the SRR technique.
The SRR technique was first achieved in the frequency domain by Tsai and Huang [9], who proposed a formulation for the reconstruction of an HR image from LR images. SRR methods based on the discrete cosine transform (DCT) [10] or the wavelet transform were subsequently proposed [11,12]. However, the frequency domain approaches are hard to combine with information in the spatial domain. The spectral information of the HR image is also usually difficult to preserve via the DCT- or wavelet transform-based methods. Therefore, many spatial domain-based methods have been proposed.

Projection onto Convex Set-Based Super-Resolution Reconstruction (SRR)
In mathematics, projection onto convex sets (POCS) is a method for finding a point in the intersection of closed convex sets. Stark and Oskoui [17] first proposed the POCS formulation of super-resolution reconstruction, and the method was extended by [18]. POCS-based SRR methods usually utilize an alternating iterative approach to incorporate prior knowledge about the solution into the reconstruction process. Therefore, both the restoration and interpolation problems can be solved during the estimation of the registration parameters.
According to the basic principle of POCS, incorporating a priori knowledge into the solution is interpreted as restricting the solution to be a member of a closed convex set C_i, which is defined as a set of vectors satisfying a particular property. The main purpose of POCS is to find a vector through the recursion:

x_{n+1} = P_m P_{m-1} · · · P_2 P_1 x_n, (1)

where x is the signal to be recovered (the pixel values in the application of SRR), P_i is the projection operator which projects x onto the closed convex set C_i (i = 1, 2, . . . , m), n is the iteration index, and x_0 is an arbitrary starting point. Then, a data consistency constraint set used in the image SRR for each pixel within the LR images y_k[m_1, m_2] is defined as:

C_k[m_1, m_2] = { x : |r^(x)[m_1, m_2]| ≤ δ_k }, (2)

with the residual

r^(x)[m_1, m_2] = y_k[m_1, m_2] − Σ_{n_1, n_2} W_k[m_1, m_2; n_1, n_2] x[n_1, n_2], (3)

where W_k[m_1, m_2; n_1, n_2] is a matrix describing the degradation model via blurring, motion and subsampling operations. The residual error r^(x) reflects the difference between the reconstructed image and the real image; the two image pixel values are closer when the residual error is smaller. Therefore, the HR image can be reconstructed by evaluating the residual in Equation (3) against the data consistency constraint convex set in Equation (2). The δ_k in Equation (2) partly determines the quality of the reconstructed HR image. When the value of δ_k is large, the residual error r^(x) is likely to fall within [−δ_k, δ_k]; the number of pixels corrected by Equation (2) in the reconstructed HR image is then low, which leads to rough reconstruction results. When the value of δ_k is small, the reconstructed HR image generally suffers from noise due to excessive pixel correction by Equation (2). In most existing POCS-based SRR methods, δ_k is a fixed value, which limits the accuracy of the reconstructed HR image.
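The data-consistency projection of Equations (1)-(3) can be sketched as follows. For illustration, the degradation operator W_k is flattened into an explicit matrix and each row is projected in turn; the step size is the standard row-normalized POCS correction, applied only where the residual leaves the band [−δ_k, δ_k]:

```python
import numpy as np

def pocs_projection(x, y_lr, W, delta):
    """One data-consistency projection in the spirit of Equations (1)-(3).

    x     : current HR estimate, flattened to a vector
    y_lr  : one observed LR image, flattened
    W     : degradation matrix (blur + motion + subsampling), shape (len(y_lr), len(x))
    delta : bound delta_k of the convex set C_k
    """
    x = x.astype(float).copy()
    # Residual r(x) between the simulated LR image W x and the observed LR image.
    r = y_lr - W @ x
    for i, ri in enumerate(r):
        # Correct x only where the residual leaves the band [-delta, delta];
        # inside the band the pixel already satisfies the constraint set C_k.
        if ri > delta:
            x += (ri - delta) * W[i] / (W[i] @ W[i] + 1e-12)
        elif ri < -delta:
            x += (ri + delta) * W[i] / (W[i] @ W[i] + 1e-12)
    return x
```

After one pass, every residual component lies within the bound, which is exactly the membership condition of Equation (2).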
In order to overcome these problems, a novel APOCS-BM method combining APOCS and blur metric is proposed in this paper, where the δ k is adaptive and automatically calculated by the image blur metric.

Image Blur Metric Based on Gabor Wavelet Transform
Image blur is a major operation in the degradation model (W_k[m_1, m_2; n_1, n_2] in Equation (3)), and it strongly affects the reconstructed HR image. In order to achieve APOCS, a novel image blur metric assessment method based on the Gabor wavelet transform is presented in this paper. According to the human visual system (HVS) model, the human eye is more sensitive to edge and contour information in an image than to other content. Edge and contour information is high frequency information in the image processing domain, and it can be extracted by the Gabor wavelet transform. Therefore, the presented image blur metric assessment method is mainly based on the image frequency information and its statistical features.
The flowchart of the image blur metric assessment method based on the Gabor wavelet transform is shown in Figure 1. The original image f(m_1, m_2) in Figure 1 is from the Pavia University dataset, and the spectral band is 28. The Gabor feature in Figure 1 is extracted by the Gabor wavelet transform, whose kernel function is

ψ_{u,v}(→m) = (‖k_{u,v}‖² / σ²) exp(−‖k_{u,v}‖² ‖→m‖² / (2σ²)) [exp(i k_{u,v} · →m) − exp(−σ²/2)], k_{u,v} = k_v (cos φ_u, sin φ_u)^T, (4)

where k_v, φ_u and σ are the parameters utilized to gain the frequency and texture feature information, and →m is the coordinate vector of the image pixel. In order to divide the Gabor feature into two classes (high and low frequency), an adaptive threshold-based frequency information extraction method is employed in this paper. Let GF(m_1, m_2) be the value of the Gabor feature in Figure 1 at location (m_1, m_2); then the mean value mean(m_1, m_2) and variance value ε(m_1, m_2) over the p × p neighborhood of GF(m_1, m_2) are defined as:

mean(m_1, m_2) = (1/p²) Σ_{n_1} Σ_{n_2} GF(m_1 + n_1, m_2 + n_2), ε(m_1, m_2) = (1/p²) Σ_{n_1} Σ_{n_2} [GF(m_1 + n_1, m_2 + n_2) − mean(m_1, m_2)]², |n_1|, |n_2| ≤ (p − 1)/2, (5)

where p is the neighborhood size, and it is an odd number. The frequency information in the neighborhood of GF(m_1, m_2) is captured by mean(m_1, m_2) and ε(m_1, m_2). Then an adaptive threshold t(m_1, m_2), computed from the captured mean(m_1, m_2) and ε(m_1, m_2), is utilized to achieve the GF(m_1, m_2) classification. The classification result C(m_1, m_2) of GF(m_1, m_2) is defined as:

C(m_1, m_2) = 1 if GF(m_1, m_2) ≥ t(m_1, m_2), and C(m_1, m_2) = 0 otherwise, (7)

where value 1 in C(m_1, m_2) describes the high frequency information and value 0 represents the low frequency information. Let HF(m_1, m_2) = C(m_1, m_2), and LF(m_1, m_2) = 1 − HF(m_1, m_2). Therefore, the high frequency region HFR(m_1, m_2) and the low frequency region LFR(m_1, m_2) in the original image f(m_1, m_2) are calculated by

HFR(m_1, m_2) = f(m_1, m_2) · HF(m_1, m_2), (8)
LFR(m_1, m_2) = f(m_1, m_2) · LF(m_1, m_2), (9)

where (·) denotes the multiplication of two elements in different matrices at the same location.
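The frequency separation step above can be sketched with local statistics computed by a uniform filter. The exact combination of mean(m_1, m_2) and ε(m_1, m_2) that forms the adaptive threshold t(m_1, m_2) is not reproduced here; the form t = mean + sqrt(variance) below is an assumption for illustration only:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def split_frequency(gabor_feature, p=5):
    """Classify a Gabor-feature map into high/low-frequency masks via a
    local adaptive threshold. The threshold form t = mean + sqrt(variance)
    is an assumption; the paper derives t from the same two local statistics.
    """
    gf = gabor_feature.astype(float)
    mean = uniform_filter(gf, size=p)                  # local mean over p x p
    var = uniform_filter(gf * gf, size=p) - mean ** 2  # local variance
    t = mean + np.sqrt(np.maximum(var, 0.0))           # assumed threshold form
    hf = (gf >= t).astype(float)                       # C = 1: high frequency
    lf = 1.0 - hf                                      # low frequency mask
    return hf, lf
```

Multiplying the original image element-wise by the two masks then yields the HFR and LFR regions of Equations (8)-(9).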
The separated frequency information is gained from the Gabor wavelet transform and the adaptive threshold-based frequency information extraction. In order to calculate the image blur metric automatically, four statistical features extracted from the separated frequency information are utilized in our method: horizontal absolute difference, mean horizontal absolute difference, vertical absolute difference and mean vertical absolute difference. The four statistical features in the high frequency region (HFR_had(m_1, m_2), HFR_mhad, HFR_vad(m_1, m_2), HFR_mvad) are defined as:

HFR_had(m_1, m_2) = |HFR(m_1, m_2) − HFR(m_1, m_2 − 1)|, HFR_mhad = (1/(M(N − 1))) Σ_{m_1=1}^{M} Σ_{m_2=2}^{N} HFR_had(m_1, m_2),
HFR_vad(m_1, m_2) = |HFR(m_1, m_2) − HFR(m_1 − 1, m_2)|, HFR_mvad = (1/((M − 1)N)) Σ_{m_1=2}^{M} Σ_{m_2=1}^{N} HFR_vad(m_1, m_2), (10)

where M × N is the original image size. The four statistical features in the low frequency region (LFR_had(m_1, m_2), LFR_mhad, LFR_vad(m_1, m_2), LFR_mvad) are defined analogously over LFR(m_1, m_2) (11). The statistical features of the separated frequency information are gained by Equations (10) and (11). Combined with these statistical features, an image blur metric assessment A_IBM (Equation (12)) is utilized to describe the hyperspectral image blur metric in our method. Figure 2 shows A_IBM for different blur images, where the spectral band of the blur images is 28. These blur images are gained from the Pavia University dataset with a 5 × 5 Gaussian kernel of different standard deviations (0.1, 0.5, 1 and 2). From the comparison of the different blur images, it can be observed that A_IBM decreases with the increasing level of image blur, so the blur metric of the image is well described by the proposed method. Algorithm 1 summarizes the steps of the image blur metric assessment method.
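The four difference statistics of Equations (10)-(11) can be sketched as follows; the normalization by the number of valid differences is an assumption:

```python
import numpy as np

def difference_statistics(region):
    """Horizontal/vertical absolute differences of a frequency region and
    their means (cf. Equations (10)-(11)). Applied to HFR this gives
    (HFR_had, HFR_mhad, HFR_vad, HFR_mvad); applied to LFR, the LFR set.
    """
    region = region.astype(float)
    had = np.abs(np.diff(region, axis=1))  # horizontal absolute difference
    vad = np.abs(np.diff(region, axis=0))  # vertical absolute difference
    return had, had.mean(), vad, vad.mean()
```

The blur metric A_IBM of Equation (12) then combines the high- and low-frequency statistics; its exact combination is not reproduced here.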

Proposed APOCS-Blur Metrics (BM) Method
The presented image blur metric assessment A_IBM (Equation (12)) in the last subsection is automatically calculated from the statistical features of the separated frequency information. With the contribution of A_IBM, a novel super-resolution reconstruction algorithm for hyperspectral imagery based on adaptive projection onto convex sets and image blur metric is proposed in this paper. The steps of Algorithm 2 are shown below.

Steps of the proposed APOCS-BM method:
Step 1: Set the initial values of p (Equation (5)), α, β, t_0 (Equation (13)) and the iteration number Itn;
Step 2: Gain the initial HR image H from the LR image L_1 by linear interpolation; calculate A_IBM[m_1, m_2] and A_IBM^LR for each LR image L_1~L_4;
Step 3: For i = 1, 2, . . . , Itn
  For j = 1, 2, 3, 4
    Step 3.1: Calculate the affine motion parameters for LR image L_j;
    Step 3.2: Gain the estimation value H_es of H via the affine motion parameters and the point spread function;
    Step 3.3: Calculate the residual R_j(i);
    Step 3.4: Calculate the adaptive threshold value δ_k[m_1, m_2];
    Step 3.5: If the residual exceeds the bound δ_k[m_1, m_2] (Equation (2)):
      Step 3.5.1: Refresh H with the estimation value H_es;
    End if
  End for
End for
Step 4: Output the reconstructed HR image H.
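The loop structure of Algorithm 2 can be sketched as below. This is an illustrative skeleton only: motion estimation and the point spread function (Steps 3.1-3.2) are collapsed into a plain bilinear resampling stand-in, and delta_map stands in for the adaptive bound of Step 3.4:

```python
import numpy as np
from scipy.ndimage import zoom

def apocs_bm(lr_images, delta_map, itn=2):
    """Skeleton of Algorithm 2 (Steps 2-4), with simplified stand-ins for
    motion compensation and the PSF.

    lr_images : list of four LR images L1..L4
    delta_map : adaptive bound delta_k per HR pixel (stand-in for Step 3.4)
    itn       : outer iteration number Itn
    """
    # Step 2: initial HR image H from L1 by (bi)linear interpolation.
    h = zoom(lr_images[0].astype(float), 2, order=1)
    for _ in range(itn):                               # Step 3: outer iterations
        for lr in lr_images:                           # inner loop over L1..L4
            h_es = zoom(lr.astype(float), 2, order=1)  # Step 3.2: estimate (stand-in)
            r = h_es - h                               # Step 3.3: residual
            mask = np.abs(r) > delta_map               # Step 3.5: out-of-bound pixels
            h[mask] = h_es[mask]                       # Step 3.5.1: refresh H
    return h                                           # Step 4: reconstructed HR image
```

Each pass refreshes only the pixels whose residual leaves the adaptive band, mirroring the role of δ_k in Equation (2).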
The input of the proposed APOCS-BM method is four LR images L_1 ∼ L_4, and an iteration operation (Step 3) is employed to improve the accuracy of the reconstructed HR image. The initial HR image H is calculated from L_1 via a linear interpolation operation. In the steps of calculating the HR image estimation value H_es (Steps 3.1 and 3.2), affine motion parameters and a point spread function are utilized. Then the residual R_j(i) between the estimation value H_es and the initial HR image H is gained. The adaptive threshold value δ_k[m_1, m_2] (Equation (2)) used in the APOCS-BM method is defined in Equation (13), where A_IBM[m_1, m_2] is the blur metric assessment of an image patch with center location (m_1, m_2). The image patch size is 8 × 8 in our method. A_IBM^LR is the blur metric assessment of the LR image, α is the weight coefficient, β is the correction factor, and t_0 is a threshold value. Finally, the reconstructed HR image H is refreshed by Step 3.5. In order to describe the algorithm in more detail, Figure 3 presents a flowchart of the single iteration processing for the proposed APOCS-BM method.

The image used in Figure 3 is from the Pavia University dataset with the same spectral band as in Figure 2. In Figure 3, the input of the proposed APOCS-BM method is four LR images; LR image 1 is down-sampled from the original hyperspectral imagery. LR images 2, 3 and 4 are gained from LR image 1 by convolution with a 5 × 5 Gaussian kernel of standard deviation 0.1, 0.2 and 0.5, respectively. The initial HR image used in Figure 3 is calculated from LR image 1 via a linear interpolation operation. Then the iteration processing from LR image 1 to LR image 4 is utilized to update the initial HR image. The input of each iteration is the output (HR image) of the previous iteration, and the processing order is LR images 1, 2, 3 and 4. The reconstructed HR image is refreshed by the iteration processing of the four LR images. It can be observed that the spatial information and visual details in the HR image are effectively recovered by the proposed APOCS-BM method.
In the proposed method, the patch size selection in the calculation of A_IBM is mainly determined by the size of the LR image. All the LR images used in the experiment are 128 × 128. Compared with other sizes, we found that an 8 × 8 image patch gives the best performance in our method. In some other POCS-based SRR methods, the number of inputs and the LR image scale factor can be increased. However, the time cost also increases, and the robustness and reliability of these algorithms are hard to maintain. In order to speed up the algorithm, the initial HR image H is gained from the LR image L_1 by linear interpolation. If the initial estimate of H is instead set to 0, the reconstructed result is still very close to the original one; only the iteration number Itn needs to be larger than usual.

Experiments and Results
To evaluate the performance of the proposed APOCS-BM method, a series of experiments were performed on the Pavia and Jinyin Tan databases. All our experiments were run in MATLAB R2016b (MathWorks Corporation, MA, USA) on a 3.1 GHz Intel i5-2400 with 16 GB RAM. The Pavia database consists of the Pavia University (PaviaU) scene and Pavia Centre (PaviaC) scene, which were captured by the ROSIS sensor (German Aerospace Center (DLR), Cologne, Germany) during a flight campaign over Pavia in northern Italy. Part of the channels were removed due to noise; the number of spectral bands is 102 for PaviaC and 103 for PaviaU. The Jinyin Tan dataset is a scene of Jinyin Tan, a grassland located in Qinghai province, western China, which was captured by an airborne sensor named Lantian [25,26]. The number of spectral bands is 103. The size of all HR images used in the experiment is 256 × 256 pixels, which is part of the original dataset, and the LR image size is 128 × 128 pixels. In order to evaluate the algorithms fairly, average peak signal-to-noise ratio (A-PSNR), average structural similarity (A-SSIM) and spectral angle mapper (SAM) are employed as quality indexes. The A-PSNR and A-SSIM are calculated as the average values over all spectral bands. The SAM represents the spectral distortion between the original and reconstructed HR images by absolute angles; its value should be zero when the reconstructed HR image is identical to the original. In the experiment, the proposed method is also compared with the linear interpolation method, the DCT-based method [10], Kim [19], POCS [17] and the sparse representation-based SR (SR-SR) method [20].
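Two of the quality indexes above can be sketched as follows; the per-band peak value and the aggregation of SAM over pixels are assumptions, since the paper does not spell them out:

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB for one spectral band; A-PSNR
    averages this over all bands."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def sam(ref, rec, eps=1e-12):
    """Spectral angle mapper: mean absolute angle (radians) between the
    per-pixel spectra of the reference and reconstructed cubes
    (shape: rows x cols x bands). Zero when the cubes are identical."""
    a = ref.reshape(-1, ref.shape[-1]).astype(float)
    b = rec.reshape(-1, rec.shape[-1]).astype(float)
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Note that SAM measures only the angle between spectra, so it is insensitive to a uniform gain applied to a pixel's spectrum; PSNR captures that kind of intensity error instead.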

PaviaU and PaviaC Dataset
In the proposed APOCS-BM method, the original input is a single LR image (marked as LR image 1 in Figure 3) with a size of 128 × 128 pixels. The first step of the proposed method is to obtain another three LR images (marked as LR image 2, LR image 3 and LR image 4 in Figure 3), which are gained from the original input LR image by convolution with a 5 × 5 Gaussian kernel of standard deviation 0.1, 0.2 and 0.5, respectively. Then, the four LR images are utilized to reconstruct the HR image. In the experiment, in order to demonstrate the generality of the proposed method across different datasets, the patch size (p), iteration number (Itn) and t_0 are kept the same throughout, with p = 5, Itn = 2 and t_0 = 1.
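Building the four-image input sequence can be sketched with a Gaussian filter. Using scipy's `gaussian_filter` with a `truncate` value chosen so the effective support stays within 5 × 5 taps is an approximation of the paper's fixed 5 × 5 kernels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_lr_sequence(lr1):
    """Build the four LR inputs of Figure 3: LR image 1 is the original
    128 x 128 input; LR images 2-4 are Gaussian-blurred copies with
    standard deviations 0.1, 0.2 and 0.5."""
    seq = [lr1.astype(float)]
    for sigma in (0.1, 0.2, 0.5):
        # truncate chosen so the kernel radius is 2 pixels, i.e. 5 x 5 taps
        seq.append(gaussian_filter(lr1.astype(float), sigma=sigma,
                                   truncate=2.0 / sigma))
    return seq
```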

(a) PaviaU dataset results
The other parameters used for the PaviaU dataset are: α = 0.8, β = 1. Figure 4 shows the visual results for the PaviaU HSI dataset using different SRR methods. For the purpose of visualization by a human observer, the 80th, 28th and 9th spectral bands of the PaviaU dataset are chosen as the R, G and B channels of the color images in Figure 4. It can be observed that the results gained by the linear interpolation method and the DCT-based method [10] are blurrier than the others. The POCS [17] results in Figure 4e are over-sharpened, and the edges and corners are partly changed. From the visual comparison in Figure 4, it can be seen that the proposed APOCS-BM method achieves better spatial-spectral information recovery than the other methods.
In order to further compare the results with other methods, Figure 5a shows the spectral curves of the reconstructed HR images (all spectral bands). The horizontal axis is the spectral band number, and the vertical axis is the gray value of the spectral image at the same coordinate (located at (181, 23), shown in Figure 5c). The difference values between the reconstructed spectral curves and the original spectral curve are presented in Figure 5b; the baseline, represented by a black dotted line, is the original spectral curve. The closer a spectral curve is to the baseline, the better the result. From the comparison of the different spectral curves in Figure 5, the spectral curve reconstructed by the proposed APOCS-BM method is the closest of all the reconstructed spectral curves. Table 1 shows the A-PSNR, A-SSIM and SAM of the different reconstructed results; the proposed method performs best in terms of A-PSNR, A-SSIM and SAM.

(b) PaviaC dataset results
The parameters used for the PaviaC dataset are: α = 1, β = 1. The false color images of the experimental results for the PaviaC dataset are shown in Figure 6, and the spectral band numbers chosen as the R, G and B channels are the same as for PaviaU. The spectral curves at location (233, 163) are shown in Figure 7. From the comparison of the false color images and spectral curves, the reconstructed HR images via the proposed APOCS-BM method contain more spatial-spectral information than those of the other aforementioned methods. The mean difference value in Figure 7b is smaller than that in Figure 5b, which is mainly caused by the different material properties. The A-PSNR, A-SSIM and SAM results are shown in Table 2; as in Table 1, the proposed method performs best on all quality indexes.

Jinyin Tan Dataset
The Jinyin Tan dataset was captured by the airborne sensor Lantian [21,22]. The Jinyin Tan dataset 1 (the main scene is a water box) and the Jinyin Tan dataset 2 (the main scene is grassland) comprise the Jinyin Tan dataset. The details are shown in Figure 8, and the whole Jinyin Tan image is 1681 × 1681 pixels. The 48th, 30th and 11th spectral bands of the Jinyin Tan dataset 1 are chosen as the R, G and B channels of the false color images in Figures 8 and 9. The parameters used for the two datasets are: α = 1.2, β = 1, which gave the best performance in the experiment. α and β affect the calculation of the adaptive threshold value δ_k used in APOCS. When α and β are increased, the adaptive threshold value increases; in Step 3.5 of Algorithm 2, the number of pixels refreshed in H then decreases, but the accuracy of the reconstructed HR image may be low. When these parameters are decreased, the number of pixels refreshed in H increases, but noise may be introduced. Therefore, α and β differ between applications.

The false color images of the experimental results for the Jinyin Tan dataset 1 are shown in Figure 9, and the spectral curves at location (82, 184) are shown in Figure 10. The difference values in Figure 10b are much smaller than those in Figures 5b and 7b, indicating better reconstruction performance on the Jinyin Tan dataset 1. From the visual comparison in Figure 9, we can see that the corners and image texture information of the water box obtained by the proposed method are much better than those of the others. It can also be observed from Figure 10 that the spatial-spectral information gained via the proposed method is closer to the original signals.
Table 3 shows the A-PSNR, A-SSIM and SAM of the different experimental results for the Jinyin Tan dataset 1. The A-PSNR of the proposed method is 44.7879, so the reconstructed HR image is very close to the original HR image. The SAM of the proposed method is 0.0411, which means the spectral distortion between the original and reconstructed HR images is very small. The quality indexes in Table 3 prove that the proposed APOCS-BM method performs better than the others on the Jinyin Tan dataset 1. As for the Jinyin Tan dataset 1, the false color images of the experimental results for the Jinyin Tan dataset 2 are shown in Figure 11, the spectral curves at location (182, 44) are shown in Figure 12, and the quality indexes are shown in Table 4.
The 48th, 30th and 11th spectral bands are chosen as the R, G and B channels of the false color images in Figure 11. It can be observed that the difference values in Figure 12b are smaller than 2, so the reconstructed HR images on the Jinyin Tan dataset 2 show the smallest difference values overall. From the comparison of the figures and quality indexes, we can see that the HR images reconstructed by the proposed method are much better than those by the others, and the spatial-spectral information is well enhanced. In the experiment, we also compared the execution times of the different methods. Table 5 shows the average execution time of the different methods for the PaviaU dataset, PaviaC dataset, Jinyin Tan dataset 1 and Jinyin Tan dataset 2. The average execution time is for the reconstruction of a single HR image, not the whole spectral band.
It can be observed that the average execution times of linear interpolation, the DCT-based method [10] and Kim [19] are close, with linear interpolation the fastest. However, the results gained by these methods do not perform well in the comparison of visual or spectral curves. The average execution times of the proposed method and POCS [17] are close, but both are much slower than the linear interpolation method. The SR-SR method [20] has the largest execution time, mainly because of the large dictionary used in sparse coding and reconstruction. Considering both the execution time and the reconstruction accuracy of the HR image, the proposed method has the best overall performance.

Conclusions
In this paper, a novel super-resolution reconstruction algorithm for hyperspectral imagery via adaptive projection onto convex sets and image blur metric is proposed. In the step of assessing the low-resolution (LR) image blur metric, a no-reference image blur metric assessment method based on the Gabor wavelet transform is utilized. Then, the bound (Equation (13)) is automatically calculated from the image blur metric. Finally, the high-resolution (HR) image is reconstructed by the adaptive projection onto convex sets (APOCS) method. The fixed bound problem in POCS is efficiently solved by the no-reference image blur metric assessment method. With the contribution of APOCS and the image blur metric, the image blur information is utilized during the reconstruction of the HR image, which enhances the spatial-spectral information and effectively improves the reconstruction accuracy. The experimental results for the PaviaU, PaviaC and Jinyin Tan datasets indicate that the proposed method not only enhances spatial resolution, but also preserves the hyperspectral imagery (HSI) spectral information well. Planned future work includes: (i) further improving the spatial resolution and reconstruction accuracy; and (ii) achieving hyperspectral imagery super-resolution via convolutional neural networks (CNNs).