A Closed-Loop Network for Single Infrared Remote Sensing Image Super-Resolution in Real World
Abstract
1. Introduction
- We proposed a novel closed-loop framework to better super-resolve infrared remote sensing LR images in the real world. Since the closed-loop network can work with LR inputs only, i.e., in an unsupervised manner, our method has stronger practical utility.
- We provided practical suggestions on how to train our network under supervised learning, weakly supervised learning, and unsupervised learning.
- We experimentally studied the resolution-level adaptability and spectrum adaptability of the proposed infrared remote sensing image SR method. Such analyses are especially needed for infrared remote sensing but are usually neglected in natural image super-resolution.
2. Related Works
2.1. Supervised Learning
2.2. Unsupervised Learning
2.3. Weakly Supervised Learning
3. Method
3.1. Framework
3.2. Loss Functions
3.3. Datasets for Study
3.3.1. PROBA-V Dataset
3.3.2. Landsat-8 Dataset
3.4. Training Mode
Algorithm 1 Pseudo-code of our framework in three training modes.
Require: Infrared remote sensing training images (paired HR-LR, unpaired HR-LR, or LR-only, depending on the data situation)
Goal: The well-trained S and D
1: Initialize S and D
2: repeat
3: Sample a minibatch of training images
4: Forward loop: super-resolve the LR input with S, then degrade the result with D back to the LR domain
5: Backward loop: degrade the HR input with D, then super-resolve the result with S back to the HR domain
6: Compute the paired loss and the loop-consistency losses available under the current data situation
7: Choose training mode:
8: forward loop: take the forward-loop losses as the total loss
9: backward loop: take the backward-loop losses as the total loss
10: double loop: take the losses of both loops as the total loss
11: ADAM-optimizer (total loss, variables of S and D)
12: until reaching the maximum iteration of minibatch updating
13: function test (LR image)
14: return the super-resolved output of S
15: end function
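As a concrete illustration of Algorithm 1, the PyTorch-style sketch below performs one minibatch update of the closed loop formed by the super-resolution network S and the degradation network D. The layer choices, the use of an L1 loss, the loss weighting, and the learning rate are illustrative assumptions rather than the authors' exact implementation; in the weakly supervised and unsupervised settings the HR argument is the pseudo-pair built by down-sampling, as described in the bullets that follow.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SNet(nn.Module):
    """Stand-in super-resolution network S (x3 up-scaling, e.g., 128 -> 384)."""
    def __init__(self, scale=3, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, scale * scale, 3, padding=1), nn.PixelShuffle(scale))

    def forward(self, x):
        return self.body(x)

class DNet(nn.Module):
    """Stand-in degradation network D (x3 down-scaling, e.g., 384 -> 128)."""
    def __init__(self, scale=3, ch=64):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, y):
        y = F.interpolate(y, scale_factor=1.0 / self.scale,
                          mode='bicubic', align_corners=False)
        return self.body(y)

s_net, d_net = SNet(), DNet()
optimizer = torch.optim.Adam(list(s_net.parameters()) + list(d_net.parameters()), lr=1e-4)
l1 = nn.L1Loss()

def train_step(x_lr, y_hr=None, mode='double'):
    """One minibatch update in the spirit of Algorithm 1.

    forward loop : x -> S(x) -> D(S(x)) ~ x   (works with LR images only)
    backward loop: y -> D(y) -> S(D(y)) ~ y   (needs an HR or pseudo-HR image)
    double loop  : both loops together
    """
    losses = []
    if mode in ('forward', 'double'):
        sr = s_net(x_lr)
        losses.append(l1(d_net(sr), x_lr))        # forward loop consistency
        if y_hr is not None:
            losses.append(l1(sr, y_hr))           # paired loss when HR is available
    if mode in ('backward', 'double') and y_hr is not None:
        lr_fake = d_net(y_hr)
        losses.append(l1(s_net(lr_fake), y_hr))   # backward loop consistency
    total = sum(losses)                           # assumes at least one active loss
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```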
- Supervised learning: The real paired HR and LR data in the PROBA-V dataset are used directly. To demonstrate the superiority of our framework, we also train the super-resolution network S on its own for comparison, in addition to the three training modes. When the super-resolution network is trained separately, the paired loss in Algorithm 1 is used as the total loss.
- Weakly supervised learning: Real unpaired HR-LR data are obtained by shuffling the order of the PROBA-V data so that the HR and LR images no longer correspond. To compute the paired loss, we down-sample the 100 m HR data to obtain matching LR counterparts.
- Unsupervised learning: In most practical cases, the super-resolution task must be carried out with only LR images. We therefore use the real LR data from the PROBA-V dataset and down-sample them to obtain pseudo-paired data. A data-preparation sketch for the three situations is given after this list.
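The following minimal sketch assembles training samples for the three data situations described above. It assumes each image is a (1, 1, H, W) tensor and that the down-sampling step uses a bicubic x3 kernel; the exact kernel and data handling of the original implementation are not specified here.

```python
import random
import torch
import torch.nn.functional as F

def bicubic_down(img, scale=3):
    """Pseudo-LR by bicubic down-sampling; img is a (1, 1, H, W) tensor (assumed kernel)."""
    return F.interpolate(img, scale_factor=1.0 / scale, mode='bicubic', align_corners=False)

def make_supervised(hr_list, lr_list):
    # Real paired HR-LR data from PROBA-V: keep the original ordering.
    return list(zip(lr_list, hr_list))

def make_weakly_supervised(hr_list, lr_list, seed=0):
    # Unpaired HR-LR data: shuffle one side so HR and LR no longer correspond,
    # then down-sample each HR image to build the pseudo-pair for the paired loss.
    rng = random.Random(seed)
    hr_shuffled = hr_list[:]
    rng.shuffle(hr_shuffled)
    return [(bicubic_down(hr), hr, lr) for hr, lr in zip(hr_shuffled, lr_list)]

def make_unsupervised(lr_list):
    # Only real LR data: each LR image serves as the target of its own down-sampled copy.
    return [(bicubic_down(lr), lr) for lr in lr_list]
```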
4. Experimental Results
4.1. Training Mode Selection
4.2. Comparison with Learning-Based SR Methods
SR Type | Method | PSNR(dB) | SSIM
---|---|---|---
Supervised learning | S | 40.0677 | 0.9671
 | Ours | 40.3701 | 0.9675
 | RDN [54] | 33.5486 | 0.9187
 | EDSR [52] | 40.5574 | 0.9675
 | SRFBN [53] | 40.6953 | 0.9679
Weakly supervised learning | Ours | 39.9832 | 0.9645
 | Cycle-CNN [42] | 39.9691 | 0.9646
 | S | 39.1031 | 0.9635
 | RDN [54] | 33.8350 | 0.9527
 | EDSR [52] | 39.1036 | 0.9636
 | SRFBN [53] | 39.0796 | 0.9635
Unsupervised learning | Ours | 40.1415 | 0.9645
 | IBP [20] | 39.1382 | 0.9643
 | BDB [33] | 37.3865 | 0.9348
 | FSR [55] | 39.2305 | 0.9674
 | GPR [56] | 37.5735 | 0.9631
 | UGSR [37] | 35.4371 | 0.9255
 | EUSR [38] | 23.6012 | 0.4987
 | ZSSR [36] | 38.1762 | 0.9597
 | MZSR [57] | 38.0026 | 0.8942
 | dSRVAE [58] | 38.3937 | 0.9486
 | DASR [59] | 39.7955 | 0.9637
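The comparisons above are reported in PSNR and SSIM. The sketch below evaluates a super-resolved image against its HR reference using the standard scikit-image implementations; the data range, normalization, and any border cropping used in the paper are not specified here and are assumptions.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(sr, hr, data_range=1.0):
    """PSNR (dB) and SSIM between a super-resolved image and its HR reference.

    sr, hr: 2-D float arrays scaled to [0, data_range]; defaults are assumed
    because the exact evaluation settings of the paper are not given here.
    """
    psnr = peak_signal_noise_ratio(hr, sr, data_range=data_range)
    ssim = structural_similarity(hr, sr, data_range=data_range)
    return psnr, ssim

# Example with synthetic data standing in for real test images.
hr = np.random.rand(384, 384)
sr = np.clip(hr + 0.01 * np.random.randn(384, 384), 0.0, 1.0)
print("PSNR %.4f dB, SSIM %.4f" % evaluate(sr, hr))
```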
4.3. Ablation Study of Attention Mechanism
4.4. Resolution Adaptability Analysis
4.5. Spectrum Adaptability Analysis
5. Discussions
5.1. Training Mode Selection
5.2. Advantages and Limitations
5.3. Adaptability of Resolution and Spectrum
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Fernandezbeltran, R.; Latorrecarmona, P.; Pla, F. Single-frame super-resolution in remote sensing: A practical overview. Int. J. Remote Sens. 2017, 38, 314–354.
2. Pham, M.; Aptoula, E.; Lefevre, S. Feature Profiles from Attribute Filtering for Classification of Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 249–256.
3. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817.
4. Lin, H.; Shi, Z.; Zou, Z. Fully Convolutional Network With Task Partitioning for Inshore Ship Detection in Optical Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1665–1669.
5. Wu, T.; Luo, J.; Fang, J.; Ma, J.; Song, X. Unsupervised Object-Based Change Detection via a Weibull Mixture Model-Based Binarization for High-Resolution Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 63–67.
6. Bai, Y.; Zhang, Y.; Ding, M.; Ghanem, B. SOD-MTGAN: Small Object Detection via Multi-Task Generative Adversarial Network. In Proceedings of the European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2018; pp. 210–226.
7. Tatem, A.J.; Lewis, H.G.; Atkinson, P.M.; Nixon, M.S. Super-resolution target identification from remotely sensed images using a Hopfield neural network. IEEE Trans. Geosci. Remote Sens. 2001, 39, 781–796.
8. Dai, D.; Wang, Y.; Chen, Y.; Van Gool, L. Is image super-resolution helpful for other vision tasks? In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Lake Placid, NY, USA, 7–10 March 2016; pp. 1–9.
9. Zanotta, D.C.; Ferreira, M.P.; Zorte, M.; Shimabukuro, Y. A statistical approach for simultaneous segmentation and classification. In Proceedings of the IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 4899–4901.
10. Hou, B.; Zhou, K.; Jiao, L. Adaptive Super-Resolution for Remote Sensing Images Based on Sparse Representation with Global Joint Dictionary Model. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2312–2327.
11. Liao, R.; Tao, X.; Li, R.; Ma, Z.; Jia, J. Video Super-Resolution via Deep Draft-Ensemble Learning. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 531–539.
12. Xu, J.; Liang, Y.; Liu, J.; Huang, Z. Multi-Frame Super-Resolution of Gaofen-4 Remote Sensing Images. Sensors 2017, 17, 2142.
13. Yang, C.; Ma, C.; Yang, M. Single-Image Super-Resolution: A Benchmark. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 372–386.
14. Yang, D.; Li, Z.; Xia, Y.; Chen, Z. Remote sensing image super-resolution: Challenges and approaches. In Proceedings of the IEEE International Conference on Digital Signal Processing, Singapore, 21–24 July 2015; pp. 196–200.
15. Pohl, C.; Genderen, J.V. Remote Sensing Image Fusion. In Encyclopedia of Image Processing; Laplante, P.A., Ed.; CRC Press: Boca Raton, FL, USA, 2018; pp. 627–640.
16. Harris, J.L. Diffraction and Resolving Power. J. Opt. Soc. Am. 1964, 54, 931–936.
17. Turkowski, K. Filters for common resampling tasks. In Graphics Gems; Glassner, A.S., Ed.; Morgan Kaufmann: San Diego, CA, USA, 1990; pp. 147–165.
18. Keys, R. Cubic convolution interpolation for digital image processing. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 1153–1160.
19. Nasrollahi, K.; Moeslund, T. Super-resolution: A comprehensive survey. Mach. Vis. Appl. 2014, 25, 1423–1468.
20. Irani, M.; Peleg, S. Improving resolution by image registration. Graph. Model. Image Process. 1991, 53, 231–239.
21. Schultz, R.R.; Stevenson, R.L. Extraction of high-resolution frames from video sequences. IEEE Trans. Image Process. 1996, 5, 996–1011.
22. Stark, H.; Oskoui, P. High-resolution image recovery from image-plane arrays, using convex projections. J. Opt. Soc. Am. A 1989, 6, 1715–1726.
23. Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image super-resolution as sparse representation of raw image patches. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
24. Zeyde, R.; Elad, M.; Protter, M. On single image scale-up using sparse-representations. In Proceedings of the Curves and Surfaces, Avignon, France, 24–30 June 2010; pp. 711–730.
25. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307.
26. Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654.
27. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 105–114.
28. Syrris, V.; Ferri, S.; Ehrlich, D.; Pesaresi, M. Image Enhancement and Feature Extraction Based on Low-Resolution Satellite Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1986–1995.
29. Timofte, R.; De, V.; Gool, L.V. Anchored Neighborhood Regression for Fast Example-Based Super-Resolution. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 1920–1927.
30. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a Deep Convolutional Network for Image Super-Resolution. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 184–199.
31. Polatkan, G.; Zhou, M.; Carin, L.; Blei, D.M.; Daubechies, I. A Bayesian Nonparametric Approach to Image Super-Resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 346–358.
32. Efrat, N.; Glasner, D.; Apartsin, A.; Nadler, B.; Levin, A. Accurate Blur Models vs. Image Priors in Single Image Super-resolution. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 2832–2839.
33. Michaeli, T.; Irani, M. Blind Deblurring Using Internal Patch Recurrence. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 783–798.
34. Glasner, D.; Bagon, S.; Irani, M. Super-resolution from a single image. In Proceedings of the IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 349–356.
35. Huang, J.; Singh, A.; Ahuja, N. Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5197–5206.
36. Shocher, A.; Cohen, N.; Irani, M. "Zero-Shot" Super-Resolution Using Deep Internal Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018.
37. Haut, J.M.; Fernandezbeltran, R.; Paoletti, M.E.; Plaza, J.; Plaza, A.; Pla, F. A New Deep Generative Network for Unsupervised Remote Sensing Single-Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6792–6810.
38. Sheikholeslami, M.M.; Nadi, S.; Naeini, A.A.; Ghamisi, P. An Efficient Deep Unsupervised Superresolution Model for Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1937–1945.
39. Zhu, J.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2242–2251.
40. Yuan, Y.; Liu, S.; Zhang, J.; Zhang, Y.; Dong, C.; Lin, L. Unsupervised Image Super-Resolution Using Cycle-in-Cycle Generative Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 814–81409.
41. Bulat, A.; Yang, J.; Tzimiropoulos, G. To Learn Image Super-Resolution, Use a GAN to Learn How to Do Image Degradation First. In Proceedings of the European Conference on Computer Vision; Springer International Publishing: Cham, Switzerland, 2018.
42. Wang, P.; Zhang, H.; Zhou, F.; Jiang, Z. Unsupervised Remote Sensing Image Super-Resolution Using Cycle CNN. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 3117–3120.
43. Zhang, H.; Wang, P.; Jiang, Z. Nonpairwise-Trained Cycle Convolutional Neural Network for Single Remote Sensing Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2021, 59, 4250–4261.
44. Guo, Y.; Chen, J.; Wang, J.; Chen, Q.; Cao, J.; Deng, Z.; Xu, Y.; Tan, M. Closed-Loop Matters: Dual Regression Networks for Single Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 5406–5415.
45. Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; pp. 807–814.
46. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 294–310.
47. Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023.
48. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883.
49. Märtens, M.; Izzo, D.; Krzic, A.; Cox, D. Super-resolution of PROBA-V images using convolutional neural networks. Astrodynamics 2019, 3, 387–402.
50. Kelvins-PROBA-V Super Resolution—Data. Available online: https://kelvins.esa.int/proba-v-super-resolution/data/ (accessed on 20 November 2019).
51. Earth Explorer. Available online: https://earthexplorer.usgs.gov/ (accessed on 20 November 2019).
52. Zhang, Y.; He, X.; Jing, M.; Fan, Y.; Zeng, X. Enhanced Recursive Residual Network for Single Image Super-Resolution. In Proceedings of the 2019 IEEE 13th International Conference on ASIC, Chongqing, China, 29 October–1 November 2019; pp. 1–4.
53. Li, Z.; Yang, J.; Liu, Z.; Yang, X.; Jeon, G.; Wu, W. Feedback Network for Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3862–3871.
54. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2472–2481.
55. Zhao, N.; Wei, Q.; Basarab, A.; Dobigeon, N.; Kouame, D.; Tourneret, J. Fast Single Image Super-Resolution Using a New Analytical Solution for ℓ2–ℓ2 Problems. IEEE Trans. Image Process. 2016, 25, 3683–3697.
56. He, H.; Siu, W. Single image super-resolution using Gaussian process regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 449–456.
57. Soh, J.W.; Cho, S.; Cho, N.I. Meta-Transfer Learning for Zero-Shot Super-Resolution. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3513–3522.
58. Liu, Z.S.; Siu, W.C.; Wang, L.W.; Li, C.T.; Cani, M.P.; Chan, Y.L. Unsupervised Real Image Super-Resolution via Generative Variational AutoEncoder. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 1788–1797.
59. Wang, L.; Wang, Y.; Dong, X.; Xu, Q.; Yang, J.; An, W.; Guo, Y. Unsupervised Degradation Representation Learning for Blind Super-Resolution. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 10576–10585.
60. Woo, S.; Park, J.; Lee, J.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
Dataset | Spectral Range (μm) | Training Set | Validation Set | Testing Set | Image Size | Resolution (m)
---|---|---|---|---|---|---
PROBA-V | 0.772–0.902 | 500 | 6 | 60 | HR: 384 × 384, LR: 128 × 128 | HR: 100, LR: 300
Dataset | Spectral Range (μm) | Training Set | Validation Set | Testing Set | Image Size | Resolution (m)
---|---|---|---|---|---|---
Landsat-8-NIR | 0.845–0.885 | 500 | 6 | 60 | HR: 384 × 384, LR: 128 × 128 | HR: 30, LR: 90
B5 | 0.845–0.885 | 2000 | 5 | 2 scenes | HR: 384 × 384 | HR: 30
B6 | 1.560–1.660 | 2000 | 5 | 2 scenes | HR: 384 × 384 | HR: 30
B7 | 2.100–2.300 | 2000 | 5 | 2 scenes | HR: 384 × 384 | HR: 30
B10 | 10.60–11.19 | 2000 | 5 | 2 scenes | HR: 384 × 384 | HR: 30
B11 | 11.50–12.51 | 2000 | 5 | 2 scenes | HR: 384 × 384 | HR: 30
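The Landsat-8 bands above are used as 384 × 384 HR training patches cut from whole scenes. The sketch below tiles a single-band scene into such patches; the 384 × 384 size follows the tables, while the non-overlapping stride and the absence of any cloud or invalid-pixel screening are assumptions rather than the authors' exact preprocessing.

```python
import numpy as np

def tile_scene(scene, patch=384, stride=384):
    """Cut a single-band scene (2-D array) into HR training patches."""
    h, w = scene.shape
    patches = []
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            patches.append(scene[r:r + patch, c:c + patch])
    return np.stack(patches) if patches else np.empty((0, patch, patch), scene.dtype)

# Example: a synthetic 2000 x 2000 "scene" yields 5 x 5 = 25 non-overlapping patches.
demo = np.random.rand(2000, 2000).astype(np.float32)
print(tile_scene(demo).shape)   # (25, 384, 384)
```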
[Table: loss terms used by the forward loop, backward loop, double loop, and the separately trained S under the supervised, weakly supervised, and unsupervised data situations.]
[Table: loss terms used by the forward loop, backward loop, double loop, and the separately trained S for real paired HR-LR data, real unpaired HR-LR data, and real LR data.]
Data Situation | S | Forward Loop | Backward Loop | Double Loop
---|---|---|---|---
 | PSNR(dB)/SSIM | PSNR(dB)/SSIM | PSNR(dB)/SSIM | PSNR(dB)/SSIM
Real paired HR-LR dataset | 40.0677/0.9671 | 40.3701/0.9675 | 40.3701/0.9675 | 40.0874/0.9647
Real unpaired HR-LR dataset | 39.1032/0.9635 | 39.4057/0.9621 | 39.9832/0.9645 | 39.4217/0.9631
Real LR dataset | 39.1266/0.9635 | 40.1415/0.9645 | 39.6151/0.9645 | 39.9268/0.9637
Method | PSNR(dB) | SSIM |
---|---|---|
Ours (CA [47]) | 40.3701 | 0.9675 |
Ours (CBAM [60]) | 41.1102 | 0.9689 |
RDN [54] | 33.5486 | 0.9187 |
EDSR [52] | 40.5574 | 0.9675 |
SRFBN [53] | 40.6953 | 0.9679 |
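The ablation above compares the channel attention (CA) of [47] with CBAM [60]. For reference, the sketch below shows a squeeze-and-excitation style channel-attention block of the kind denoted CA; the reduction ratio of 16 and the placement of the block inside a residual branch are common defaults and are assumptions here, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (CA)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                 # global spatial average per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())

    def forward(self, x):
        w = self.fc(self.pool(x))                           # per-channel weights in (0, 1)
        return x * w                                        # rescale the feature maps

# Usage on a feature map of the SR network (illustrative shapes):
feat = torch.randn(1, 64, 128, 128)
print(ChannelAttention(64)(feat).shape)                     # torch.Size([1, 64, 128, 128])
```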
Testing Data | |||
---|---|---|---|
Training Data | |||
47.1727/0.9904 | 39.5389/0.9769 | ||
47.1168/0.9904 | 39.9341/0.9793 | ||
47.1564/0.9904 | 39.5677/0.9771 | ||
47.0138/0.9902 | 39.8937/0.9791 |
Scale | Train \ Test | B5 | B6 | B7 | B10 | B11
---|---|---|---|---|---|---
 | B5 | 42.5729/0.9759 | 48.5109/0.9900 | 50.0885/0.9929 | 58.3780/0.9987 | 58.7213/0.9988
 | B6 | 42.2653/0.9744 | 48.5291/0.9900 | 50.1445/0.9929 | 58.3357/0.9987 | 58.7127/0.9988
 | B7 | 42.2117/0.9743 | 48.5502/0.9900 | 50.2106/0.9929 | 58.6805/0.9988 | 59.0529/0.9989
 | B10 | 38.7340/0.9443 | 45.3003/0.9804 | 47.0148/0.9864 | 59.6865/0.9990 | 59.9845/0.9991
 | B11 | 38.8092/0.9454 | 45.4116/0.9809 | 47.1310/0.9867 | 59.8778/0.9990 | 60.1900/0.9991
 | B5 | 38.5995/0.9430 | 44.9772/0.9790 | 46.6235/0.9852 | 55.7259/0.9983 | 56.0587/0.9984
 | B6 | 38.3891/0.9403 | 45.0956/0.9794 | 46.7962/0.9856 | 56.1163/0.9985 | 56.4618/0.9986
 | B7 | 38.3776/0.9402 | 45.0734/0.9793 | 46.7922/0.9856 | 56.2103/0.9986 | 56.5410/0.9987
 | B10 | 36.2983/0.9069 | 42.9761/0.9677 | 44.6739/0.9772 | 56.1572/0.9986 | 56.4653/0.9986
 | B11 | 36.2152/0.9058 | 42.8671/0.9670 | 44.5453/0.9766 | 55.5850/0.9984 | 55.9038/0.9985
 | B5 | 36.4659/0.9108 | 42.9500/0.9680 | 44.5885/0.9773 | 52.8712/0.9978 | 53.1948/0.9979
 | B6 | 36.3036/0.9075 | 42.9999/0.9682 | 44.6664/0.9775 | 53.6337/0.9978 | 53.9671/0.9980
 | B7 | 36.2905/0.9072 | 43.0121/0.9682 | 44.6960/0.9776 | 53.6969/0.9979 | 54.0170/0.9980
 | B10 | 35.0405/0.8796 | 41.6516/0.9575 | 43.3137/0.9696 | 54.8017/0.9981 | 54.0919/0.9982
 | B11 | 35.0061/0.8789 | 41.6026/0.9571 | 43.2639/0.9694 | 54.4611/0.9982 | 54.7579/0.9983
Scale | Train \ Test | B5 | B6 | B7 | B10 | B11
---|---|---|---|---|---|---
 | B5 | 42.4992/0.9760 | 48.3702/0.9899 | 49.9142/0.9927 | 55.0392/0.9985 | 55.3785/0.9986
 | B6 | 42.1859/0.9744 | 48.4360/0.9900 | 50.0237/0.9929 | 55.0141/0.9985 | 55.3801/0.9986
 | B7 | 42.1216/0.9741 | 48.4302/0.9900 | 50.0325/0.9929 | 54.9818/0.9985 | 55.3517/0.9986
 | B10 | 40.1325/0.9594 | 46.6713/0.9855 | 48.3643/0.9899 | 54.7714/0.9986 | 55.0933/0.9987
 | B11 | 40.1150/0.9593 | 46.6681/0.9854 | 48.3653/0.9863 | 55.1840/0.9988 | 55.5098/0.9989