Multi-Degradation Super-Resolution Reconstruction for Remote Sensing Images with Reconstruction Features-Guided Kernel Correction
Abstract
1. Introduction
- (1) We propose RFKCNext, a deep learning SR network for degradation kernel correction in multi-degradation SR of remote sensing images. Unlike methods that estimate the degradation kernel solely from the LR image, RFKCNext uses features extracted from the reconstructed SR image to correct the estimated kernel. This allows the kernel correction network to better capture the deviation between the estimated kernel and the real-world degradation kernel, improving both the accuracy of kernel estimation and the quality of the final reconstructed image.
- (2) We construct RFKCNext from a ConvNext block (CNB)-based SR reconstruction subnetwork (SRConvNext) and a reconstruction features-guided kernel correction subnetwork (RFGKCorrector). The CNBs address the limitation of traditional CNN structures in modeling global features of remote sensing images. To our knowledge, this is the first work to build a multi-degradation SR reconstruction network for remote sensing images from CNBs.
- (3) We conduct extensive experiments on the NWPU-RESISC45 and UCMERCED datasets and on the real-world remote sensing dataset provided by the “Tianzhi Cup” Artificial Intelligence Challenge. The qualitative and quantitative results show that our method outperforms competing methods, demonstrating its effectiveness.
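The multi-degradation setting addressed above (and formalized in Section 3.1) typically follows the model y = (x ⊗ k)↓s + n: the HR image x is blurred with a kernel k, downsampled by scale factor s, and corrupted by additive noise n. Since this excerpt does not specify the paper's exact configuration, the NumPy sketch below uses illustrative choices (a 21 × 21 isotropic Gaussian kernel and direct s-fold downsampling, as in SRMD-style pipelines):

```python
import numpy as np

def gaussian_kernel(size=21, sigma=2.0):
    """Isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def degrade(hr, kernel, scale=4, noise_sigma=5.0, rng=None):
    """Synthesize an LR image: y = (hr convolved with kernel), downsampled s-fold, plus noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    ks = kernel.shape[0]
    pad = ks // 2
    padded = np.pad(hr, pad, mode="reflect")
    # Direct 2D convolution (correlation) with the blur kernel.
    blurred = np.zeros_like(hr, dtype=np.float64)
    for i in range(ks):
        for j in range(ks):
            blurred += kernel[i, j] * padded[i:i + hr.shape[0], j:j + hr.shape[1]]
    lr = blurred[::scale, ::scale]  # direct s-fold downsampling (illustrative choice)
    return lr + rng.normal(0.0, noise_sigma, lr.shape)
```

Varying the kernel shape and the noise level σ ∈ {0, 5, 10} in such a pipeline produces the multi-degradation test conditions reported in Section 4.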
2. Related Work
2.1. CNN-Based Multi-Degradation SR Methods
2.2. Multi-Degradation SR Methods for Remote Sensing Images
2.3. Kernel Correction Methods
3. Methodology
3.1. Degradation Formulation
3.2. Network Architecture
3.3. Super-Resolution Network (SRConvNext)
3.4. Reconstruction Features-Guided Kernel Corrector (RFGKCorrector)
3.5. Loss Function
4. Experiment
4.1. Datasets and Metrics
4.2. Experimental Settings
4.3. Experiments on NWPU-RESISC45 Synthetic Images
4.3.1. Quantitative Results
4.3.2. Qualitative Results
4.4. Experiments on UCMERCED Remote Sensing Images
Qualitative Results
4.5. Experiments on Real-World Remote Sensing Images
Qualitative Results
4.6. Ablation Studies
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Wang, X.; Yi, J.; Guo, J.; Song, Y.; Lyu, J.; Xu, J.; Yan, W.; Zhao, J.; Cai, Q.; Min, H. A Review of Image Super-Resolution Approaches Based on Deep Learning and Applications in Remote Sensing. Remote Sens. 2022, 14, 5423. [Google Scholar] [CrossRef]
- Huang, L.; An, R.; Zhao, S.; Jiang, T.; Hu, H. A Deep Learning-Based Robust Change Detection Approach for Very High Resolution Remotely Sensed Images with Multiple Features. Remote Sens. 2020, 12, 1441. [Google Scholar] [CrossRef]
- Tang, X.; Zhang, H.; Mou, L.; Liu, F.; Zhang, X.; Xiang, X.; Zhu, X.; Jiao, L. An Unsupervised Remote Sensing Change Detection Method Based on Multiscale Graph Convolutional Network and Metric Learning. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5609715. [Google Scholar] [CrossRef]
- Li, X.; Yong, X.; Li, T.; Tong, Y.; Gao, H.; Wang, X.; Xu, Z.; Fang, Y.; You, Q.; Lyu, X. A Spectral–Spatial Context-Boosted Network for Semantic Segmentation of Remote Sensing Images. Remote Sens. 2024, 16, 1214. [Google Scholar] [CrossRef]
- Chen, X.; Li, D.; Liu, M.; Jia, J. CNN and Transformer Fusion for Remote Sensing Image Semantic Segmentation. Remote Sens. 2023, 15, 4455. [Google Scholar] [CrossRef]
- Rabbi, J.; Ray, N.; Schubert, M.; Chowdhury, S.; Chao, D. Small-Object Detection in Remote Sensing Images with End-to-End Edge-Enhanced GAN and Object Detector Network. Remote Sens. 2020, 12, 1432. [Google Scholar] [CrossRef]
- Liu, C.; Zhang, S.; Hu, M.; Song, Q. Object Detection in Remote Sensing Images Based on Adaptive Multi-Scale Feature Fusion Method. Remote Sens. 2024, 16, 907. [Google Scholar] [CrossRef]
- Shi, J.; Liu, W.; Shan, H.; Li, E.; Li, X.; Zhang, L. Remote Sensing Scene Classification Based on Multibranch Fusion Attention Network. IEEE Geosci. Remote Sens. Lett. 2023, 20, 3001505. [Google Scholar] [CrossRef]
- Wang, G.; Zhang, N.; Liu, W.; Chen, H.; Xie, Y. MFST: A Multi-Level Fusion Network for Remote Sensing Scene Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6516005. [Google Scholar] [CrossRef]
- Zhang, J.; Xu, T.; Li, J.; Jiang, S.; Zhang, Y. Single-Image Super Resolution of Remote Sensing Images with Real-world Degradation Modeling. Remote Sens. 2022, 14, 2895. [Google Scholar] [CrossRef]
- Huang, B.; Guo, Z.; Wu, L.; He, B.; Li, X.; Lin, Y. Pyramid Information Distillation Attention Network for Super-Resolution Reconstruction of Remote Sensing Images. Remote Sens. 2021, 13, 5143. [Google Scholar] [CrossRef]
- Dong, C.; Loy, C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307. [Google Scholar] [CrossRef] [PubMed]
- Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144. [Google Scholar]
- Tong, T.; Li, G.; Liu, X.; Gao, Q. Image super-resolution using dense skip connections. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4799–4807. [Google Scholar]
- Li, J.; Du, S.; Wu, C.; Leng, Y.; Song, R.; Li, Y. Drcr net: Dense residual channel re-calibration network with non-local purification for spectral super resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1259–1268. [Google Scholar]
- Larsson, G.; Maire, M.; Shakhnarovich, G. FractalNet: Ultra-Deep Neural Networks without Residuals. arXiv 2016, arXiv:1605.07648. [Google Scholar]
- Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Hasan, M.; Van Essen, B.C.; Awwal, A.A.S.; Asari, V.K. A State-of-the-Art Survey on Deep Learning Theory and Architectures. Electronics 2019, 8, 292. [Google Scholar] [CrossRef]
- Cheng, K.; Shen, Y.; Dinov, I.D. Applications of Deep Neural Networks with Fractal Structure and Attention Blocks for 2D and 3D Brain Tumor Segmentation. J. Stat. Theory Pract. 2024, 18, 31. [Google Scholar] [CrossRef]
- Ding, C.; Chen, Y.; Algarni, A.M.; Zhang, G.; Peng, H. Application of fractal neural network in network security situation awareness. Fractals. 2022, 30, 2240090. [Google Scholar] [CrossRef]
- Anil, B.C.; Dayananda, P. Automatic liver tumor segmentation based on multi-level deep convolutional networks and fractal residual network. IETE J. Res. 2023, 69, 1925–1933. [Google Scholar] [CrossRef]
- Ding, S.; Gao, Z.; Wang, J.; Lu, M.; Shi, J. Fractal graph convolutional network with MLP-mixer based multi-path feature fusion for classification of histopathological images. Expert Syst. Appl. 2023, 212, 118793. [Google Scholar] [CrossRef]
- Song, X.; Liu, W.; Liang, L.; Shi, W.; Xie, G.; Lu, X.; Hei, X. Image super-resolution with multi-scale fractal residual attention network. Comput. Graph. 2023, 113, 21–31. [Google Scholar] [CrossRef]
- Feng, X.; Li, X.; Li, J. Multi-scale fractal residual network for image super-resolution. Appl. Intell. 2021, 51, 1845–1856. [Google Scholar] [CrossRef]
- Zhou, Y.; Dong, J.; Yang, Y. Deep fractal residual network for fast and accurate single image super resolution. Neurocomputing 2020, 398, 389–398. [Google Scholar] [CrossRef]
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Gool, L.; Timofte, R. SwinIR: Image Restoration Using Swin Transformer. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Montreal, BC, Canada, 11–17 October 2021; pp. 1833–1844. [Google Scholar]
- Chen, X.; Wang, X.; Zhou, J.; Qiao, Y.; Dong, C. Activating More Pixels in Image Super-Resolution Transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 22367–22377. [Google Scholar]
- Wu, D.; Li, H.; Hou, Y.; Xu, C.; Cheng, G.; Guo, L.; Liu, H. Spatial–Channel Attention Transformer with Pseudo Regions for Remote Sensing Image-Text Retrieval. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4704115. [Google Scholar] [CrossRef]
- Han, K.; Wang, Y.; Chen, H.; Chen, X.; Guo, J.; Liu, Z.; Tang, Y.; Xiao, A.; Xu, C.; Xu, Y.; et al. A Survey on Vision Transformer. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 87–110. [Google Scholar] [CrossRef] [PubMed]
- Wang, T.; Yuan, L.; Feng, J.; Yan, S. PnP-DETR: Towards Efficient Visual Analysis with Transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 4641–4650. [Google Scholar]
- Dai, Z.; Liu, H.; Le, Q.; Tan, M. Coatnet: Marrying convolution and attention for all data sizes. Adv. Neural Inf. Process. Syst. 2021, 34, 3965–3977. [Google Scholar]
- Liu, Y.; Zhang, Y.; Wang, Y.; Hou, F.; Yuan, J.; Tian, J.; Zhang, Y.; Shi, Z.; Fan, J.; He, Z. A Survey of Visual Transformers. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 7478–7498. [Google Scholar] [CrossRef] [PubMed]
- Jamil, S.; Piran, M.J.; Kwon, O.-J. A Comprehensive Survey of Transformers for Computer Vision. Drones 2023, 7, 287. [Google Scholar] [CrossRef]
- Raghu, M.; Unterthiner, T.; Kornblith, S.; Zhang, C.; Dosovitskiy, A. Do vision transformers see like convolutional neural networks? Adv. Neural Inf. Process. Syst. 2021, 34, 12116–12128. [Google Scholar]
- Liu, Z.; Mao, H.; Wu, C.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11966–11976. [Google Scholar]
- Zhang, N.; Wang, Y.; Zhang, X.; Xu, D.; Wang, X.; Ben, G.; Zhao, Z.; Li, Z. A Multi-Degradation Aided Method for Unsupervised Remote Sensing Image Super Resolution with Convolution Neural Networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5600814. [Google Scholar] [CrossRef]
- Gu, J.; Lu, H.; Zuo, W.; Dong, C. Blind Super-Resolution with Iterative Kernel Correction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1604–1613. [Google Scholar]
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 105–114. [Google Scholar]
- Haris, M.; Shakhnarovich, G.; Ukita, N. Deep Back-Projection Networks for Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1664–1673. [Google Scholar]
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; Springer: Cham, Switzerland, 2018; pp. 294–310. [Google Scholar]
- Zhou, Y.; Li, Z.; Guo, C.-L.; Bai, S.; Cheng, M.-M.; Hou, Q. SRFormer: Permuted Self-Attention for Single Image Super-Resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 12734–12745. [Google Scholar]
- Zhang, K.; Zuo, W.; Zhang, L. Learning a Single Convolutional Super-Resolution Network for Multiple Degradations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3262–3271. [Google Scholar]
- Xu, Y.; Tseng, S.; Tseng, Y.; Kuo, H.; Tsai, Y. Unified Dynamic Convolutional Network for Super-Resolution with Variational Degradations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 12493–12502. [Google Scholar]
- Zhang, K.; Gool, L.; Timofte, R. Deep Unfolding Network for Image Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3214–3223. [Google Scholar]
- Zhang, K.; Zuo, W.; Zhang, L. Deep Plug-And-Play Super-Resolution for Arbitrary Blur Kernels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1671–1681. [Google Scholar]
- Liu, Q.; Gao, P.; Han, K.; Liu, N.; Xiang, W. Degradation-aware self-attention based transformer for blind image super-resolution. IEEE Trans. Multimed. 2024, 26, 7516–7528. [Google Scholar] [CrossRef]
- Zhang, J.; Zhou, Y.; Bi, J.; Xue, Y.; Deng, W.; He, W.; Zhao, T.; Sun, K.; Tong, T.; Gao, Q.; et al. A blind image super-resolution network guided by kernel estimation and structural prior knowledge. Sci. Rep. 2024, 14, 9525. [Google Scholar] [CrossRef]
- Zhang, W.; Tan, Z.; Lv, Q.; Li, J.; Zhu, B.; Liu, Y. An Efficient Hybrid CNN-Transformer Approach for Remote Sensing Super-Resolution. Remote Sens. 2024, 16, 880. [Google Scholar] [CrossRef]
- Wang, Y.; Shao, Z.; Lu, T.; Huang, X.; Wang, J.; Chen, X.; Huang, H.; Zuo, X. Remote Sensing Image Super-Resolution via Multi-Scale Texture Transfer Network. Remote Sens. 2023, 15, 5503. [Google Scholar] [CrossRef]
- Yue, X.; Chen, X.; Zhang, W.; Ma, H.; Wang, L.; Zhang, J.; Wang, M.; Jiang, B. Super-Resolution Network for Remote Sensing Images via Preclassification and Deep–Shallow Features Fusion. Remote Sens. 2022, 14, 925. [Google Scholar] [CrossRef]
- Wang, Y.; Zhao, L.; Liu, L.; Hu, H.; Tao, W. URNet: A U-Shaped Residual Network for Lightweight Image Super-Resolution. Remote Sens. 2021, 13, 3848. [Google Scholar] [CrossRef]
- Xiong, Y.; Guo, S.; Chen, J.; Deng, X.; Sun, L.; Zheng, X.; Xu, W. Improved SRGAN for Remote Sensing Image Super-Resolution Across Locations and Sensors. Remote Sens. 2020, 12, 1263. [Google Scholar] [CrossRef]
- Kang, X.; Li, J.; Duan, P.; Ma, F.; Li, S. Multilayer Degradation Representation-Guided Blind Super-Resolution for Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5534612. [Google Scholar] [CrossRef]
- Dong, R.; Mou, L.; Zhang, L.; Fu, H.; Zhu, X. Real-world remote sensing image super-resolution via a practical degradation model and a kernel-aware network. ISPRS J. Photogramm. Remote Sens. 2022, 191, 155–170. [Google Scholar] [CrossRef]
- Zhao, Z.; Ren, C.; Teng, Q.; He, X. A practical super-resolution method for multi-degradation remote sensing images with deep convolutional neural networks. J. Real-Time Image Process. 2022, 19, 1139–1154. [Google Scholar] [CrossRef]
- Xiao, Y.; Yuan, Q.; Jiang, K.; He, J.; Wang, Y.; Zhang, L. From degrade to upgrade: Learning a self-supervised degradation guided adaptive network for blind remote sensing image super-resolution. Inf. Fusion 2023, 96, 297–311. [Google Scholar] [CrossRef]
- Luo, Z.; Huang, Y.; Li, S.; Wang, L.; Tan, T. Unfolding the alternating optimization for blind super resolution. Adv. Neural Inf. Process. Syst. 2020, 33, 5632–5643. [Google Scholar]
- Yan, Q.; Niu, A.; Wang, C.; Dong, W.; Woźniak, M.; Zhang, Y. KGSR: A kernel guided network for real-world blind super-resolution. Pattern Recognit. 2024, 147, 110095. [Google Scholar] [CrossRef]
- Ates, H.F.; Yildirim, S.; Gunturk, B.K. Deep learning-based blind image super-resolution with iterative kernel reconstruction and noise estimation. Comput. Vis. Image Underst. 2023, 233, 103718. [Google Scholar] [CrossRef]
- Zhou, H.; Zhu, X.; Zhu, J.; Han, Z.; Zhang, S.; Qin, J.; Yin, X. Learning Correction Filter via Degradation-Adaptive Regression for Blind Single Image Super-Resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 12331–12341. [Google Scholar]
- Cheng, G.; Han, J.; Lu, X. Remote Sensing Image Scene Classification: Benchmark and State of the Art. Proc. IEEE 2017, 105, 1865–1883. [Google Scholar] [CrossRef]
- Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; pp. 270–279. [Google Scholar]
- Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar] [CrossRef]
- Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 2366–2369. [Google Scholar]
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
Dataset | Number of Scene Classes | Number of Images | Image Size (Pixels) | Spatial Resolution (m) | Image Type | Coverage Area
---|---|---|---|---|---|---
NWPU-RESISC45 | 45 | 31,500 | 256 × 256 | 0.2–30 | satellite and aerial images | more than 100 countries and regions
UCMERCED | 21 | 2100 | 256 × 256 | 0.3 | aerial images | 21 regions in the United States
Tianzhi Cup | aircraft, dense residential, runway, and other scenes | 430 | 4096 × 4096 | 0.5–1 | satellite images | not mentioned in the dataset description
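The test kernels k1–k5 appearing in the result tables are not specified in this excerpt; in blind/multi-degradation SR benchmarks they are typically anisotropic Gaussian kernels parameterized by two standard deviations and a rotation angle. A sketch under that assumption (the five parameter triples below are purely illustrative, not the paper's):

```python
import numpy as np

def anisotropic_gaussian_kernel(size, sigma_x, sigma_y, theta):
    """Anisotropic Gaussian kernel with principal axes rotated by theta, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    # Rotate coordinates into the kernel's principal-axis frame.
    xr = np.cos(theta) * xx + np.sin(theta) * yy
    yr = -np.sin(theta) * xx + np.cos(theta) * yy
    k = np.exp(-0.5 * ((xr / sigma_x) ** 2 + (yr / sigma_y) ** 2))
    return k / k.sum()

# Five hypothetical test kernels of increasing width/rotation.
kernels = [anisotropic_gaussian_kernel(21, sx, sy, th)
           for sx, sy, th in [(1.0, 1.0, 0.0), (2.0, 1.0, 0.0),
                              (2.0, 1.0, np.pi / 4), (3.0, 2.0, np.pi / 3),
                              (4.0, 1.5, np.pi / 2)]]
```

Each HR test image would then be degraded with each of the five kernels in turn, yielding the per-kernel columns (k1–k5) in the tables below.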
Model | Noise | Param | Running Time (ms/image) | k1 PSNR↑/SSIM↑ | k2 PSNR↑/SSIM↑ | k3 PSNR↑/SSIM↑ | k4 PSNR↑/SSIM↑ | k5 PSNR↑/SSIM↑ | Average PSNR↑/SSIM↑
---|---|---|---|---|---|---|---|---|---
SRMD | 0 | 1.51 M | 1.284 | 32.958/0.899 | 32.265/0.888 | 32.751/0.896 | 32.431/0.889 | 32.641/0.893 | 32.809/0.893
USRNet | 0 | 0.81 M | 23.177 | 31.539/0.860 | 31.182/0.855 | 31.660/0.868 | 32.013/0.877 | 31.395/0.858 | 31.558/0.858
IKC | 0 | 9.02 M | 97.907 | 32.900/0.890 | 31.310/0.873 | 32.422/0.888 | 32.046/0.878 | 32.608/0.886 | 32.257/0.883
DAN | 0 | 4.65 M | 113.937 | 33.329/0.902 | 33.011/0.894 | 33.217/0.910 | 33.287/0.902 | 33.167/0.903 | 33.202/0.902
DRSR | 0 | 5.59 M | 26.347 | 33.310/0.910 | 33.026/0.899 | 33.329/0.907 | 33.305/0.904 | 33.177/0.901 | 33.229/0.904
DSAT | 0 | 15.50 M | 140.251 | 33.400/0.906 | 33.078/0.904 | 33.360/0.903 | 33.329/0.910 | 33.171/0.899 | 33.268/0.904
KESPKNet | 0 | 21.83 M | 119.631 | 33.413/0.908 | 33.112/0.904 | 33.390/0.908 | 33.370/0.901 | 33.212/0.903 | 33.299/0.905
Ours | 0 | 2.78 M | 7.342 | 33.752/0.913 | 33.261/0.906 | 33.723/0.913 | 33.556/0.910 | 33.326/0.908 | 33.524/0.910
SRMD | 5 | 1.51 M | 1.282 | 29.810/0.790 | 29.609/0.788 | 30.135/0.802 | 30.618/0.823 | 29.658/0.791 | 29.966/0.799
USRNet | 5 | 0.81 M | 23.232 | 29.621/0.784 | 29.425/0.777 | 29.974/0.799 | 30.551/0.822 | 29.469/0.778 | 29.808/0.792
IKC | 5 | 9.02 M | 96.969 | 29.822/0.795 | 29.272/0.787 | 29.944/0.807 | 30.433/0.823 | 29.645/0.790 | 29.823/0.800
DAN | 5 | 4.65 M | 115.089 | 29.863/0.792 | 29.722/0.785 | 30.319/0.808 | 30.894/0.837 | 29.739/0.786 | 30.107/0.802
DRSR | 5 | 5.59 M | 25.752 | 29.870/0.800 | 29.755/0.788 | 30.302/0.811 | 30.794/0.832 | 29.767/0.788 | 30.098/0.804
DSAT | 5 | 15.50 M | 136.542 | 29.879/0.798 | 29.760/0.789 | 30.310/0.814 | 30.861/0.828 | 29.775/0.789 | 30.117/0.804
KESPKNet | 5 | 21.83 M | 114.694 | 29.900/0.794 | 29.747/0.792 | 30.326/0.809 | 30.908/0.833 | 29.791/0.791 | 30.134/0.804
Ours | 5 | 2.78 M | 7.612 | 30.090/0.801 | 29.833/0.793 | 30.455/0.815 | 31.010/0.837 | 29.870/0.794 | 30.252/0.808
SRMD | 10 | 1.51 M | 1.283 | 28.804/0.747 | 28.640/0.746 | 29.065/0.758 | 29.538/0.780 | 28.676/0.743 | 28.945/0.755
USRNet | 10 | 0.81 M | 22.964 | 28.738/0.746 | 28.576/0.739 | 29.032/0.760 | 29.523/0.782 | 28.615/0.740 | 28.897/0.753
IKC | 10 | 9.02 M | 97.491 | 28.798/0.752 | 28.512/0.746 | 28.977/0.764 | 29.418/0.781 | 28.673/0.747 | 28.876/0.758
DAN | 10 | 4.65 M | 114.425 | 28.848/0.750 | 28.700/0.745 | 29.159/0.765 | 29.683/0.787 | 28.770/0.748 | 29.032/0.759
DRSR | 10 | 5.59 M | 26.058 | 28.867/0.750 | 28.709/0.744 | 29.165/0.764 | 29.687/0.791 | 28.753/0.748 | 29.036/0.759
DSAT | 10 | 15.50 M | 139.972 | 28.869/0.753 | 28.721/0.746 | 29.172/0.764 | 29.690/0.789 | 28.773/0.744 | 29.045/0.759
KESPKNet | 10 | 21.83 M | 117.972 | 28.883/0.752 | 28.731/0.745 | 29.181/0.767 | 29.709/0.791 | 28.782/0.749 | 29.057/0.761
Ours | 10 | 2.78 M | 7.542 | 29.015/0.757 | 28.820/0.750 | 29.318/0.771 | 29.808/0.794 | 28.856/0.752 | 29.163/0.765
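The PSNR values reported above (in dB) follow the standard definition 10·log10(L²/MSE), where L is the peak pixel value; a minimal NumPy implementation:

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

SSIM additionally compares local luminance, contrast, and structure statistics; library implementations (e.g., scikit-image's `structural_similarity`) are commonly used in practice.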
Model | Noise | Param | Running Time (ms/image) | k1 PSNR↑/SSIM↑ | k2 PSNR↑/SSIM↑ | k3 PSNR↑/SSIM↑ | k4 PSNR↑/SSIM↑ | k5 PSNR↑/SSIM↑ | Average PSNR↑/SSIM↑
---|---|---|---|---|---|---|---|---|---
SRMD | 0 | 1.55 M | 1.162 | 28.663/0.746 | 28.553/0.743 | 28.625/0.745 | 28.610/0.745 | 28.644/0.746 | 28.619/0.745
USRNet | 0 | 0.81 M | 22.982 | 28.612/0.743 | 28.574/0.742 | 28.610/0.743 | 28.605/0.743 | 28.569/0.743 | 28.594/0.743
IKC | 0 | 9.17 M | 96.656 | 28.625/0.741 | 28.546/0.734 | 28.633/0.738 | 28.711/0.745 | 28.594/0.736 | 28.622/0.741
DAN | 0 | 4.80 M | 112.472 | 28.777/0.747 | 28.707/0.744 | 28.796/0.748 | 28.592/0.745 | 28.788/0.750 | 28.732/0.747
DRSR | 0 | 5.74 M | 25.841 | 28.743/0.746 | 28.673/0.747 | 28.764/0.749 | 28.678/0.751 | 28.763/0.752 | 28.724/0.749
DSAT | 0 | 15.64 M | 132.822 | 28.786/0.752 | 28.708/0.745 | 28.790/0.751 | 28.682/0.749 | 28.782/0.753 | 28.750/0.750
KESPKNet | 0 | 21.98 M | 110.469 | 28.809/0.751 | 28.718/0.749 | 28.798/0.753 | 28.702/0.749 | 28.796/0.751 | 28.765/0.751
Ours | 0 | 2.92 M | 7.425 | 28.916/0.756 | 28.859/0.755 | 28.888/0.757 | 28.829/0.754 | 28.887/0.756 | 28.876/0.756
SRMD | 5 | 1.55 M | 1.181 | 27.750/0.699 | 27.664/0.696 | 27.826/0.703 | 27.936/0.709 | 27.697/0.696 | 27.775/0.701
USRNet | 5 | 0.81 M | 22.679 | 27.774/0.701 | 27.707/0.698 | 27.859/0.705 | 27.967/0.711 | 27.729/0.698 | 27.807/0.703
IKC | 5 | 9.17 M | 96.053 | 27.715/0.696 | 27.613/0.689 | 27.831/0.701 | 28.010/0.712 | 27.627/0.689 | 27.759/0.697
DAN | 5 | 4.80 M | 113.919 | 27.792/0.705 | 27.733/0.698 | 27.877/0.706 | 27.972/0.717 | 27.756/0.700 | 27.826/0.705
DRSR | 5 | 5.74 M | 25.649 | 27.795/0.706 | 27.723/0.701 | 27.814/0.707 | 28.001/0.713 | 27.740/0.701 | 27.815/0.706
DSAT | 5 | 15.64 M | 133.457 | 27.813/0.702 | 27.745/0.699 | 27.873/0.711 | 28.013/0.715 | 27.755/0.703 | 27.840/0.706
KESPKNet | 5 | 21.98 M | 111.249 | 27.824/0.707 | 27.762/0.700 | 27.882/0.711 | 28.020/0.716 | 27.763/0.702 | 27.850/0.707
Ours | 5 | 2.92 M | 7.427 | 27.925/0.709 | 27.844/0.705 | 28.007/0.713 | 28.105/0.719 | 27.867/0.706 | 27.950/0.710
SRMD | 10 | 1.55 M | 1.157 | 26.988/0.663 | 26.910/0.660 | 27.079/0.669 | 27.219/0.676 | 26.937/0.661 | 27.027/0.666
USRNet | 10 | 0.81 M | 22.730 | 27.031/0.666 | 26.906/0.661 | 27.061/0.671 | 27.197/0.679 | 26.925/0.658 | 27.024/0.667
IKC | 10 | 9.17 M | 95.913 | 26.900/0.660 | 26.842/0.655 | 27.041/0.666 | 27.191/0.675 | 26.831/0.654 | 26.961/0.662
DAN | 10 | 4.80 M | 112.247 | 27.014/0.670 | 26.906/0.664 | 27.086/0.673 | 27.256/0.683 | 26.905/0.666 | 27.033/0.671
DRSR | 10 | 5.74 M | 25.725 | 26.981/0.669 | 26.882/0.665 | 27.083/0.675 | 27.157/0.679 | 26.917/0.663 | 27.004/0.670
DSAT | 10 | 15.64 M | 134.343 | 27.009/0.671 | 26.903/0.663 | 27.098/0.671 | 27.252/0.682 | 26.926/0.665 | 27.038/0.670
KESPKNet | 10 | 21.98 M | 112.032 | 27.033/0.670 | 26.914/0.666 | 27.107/0.674 | 27.260/0.685 | 26.933/0.667 | 27.049/0.672
Ours | 10 | 2.92 M | 7.487 | 27.120/0.672 | 27.046/0.668 | 27.219/0.677 | 27.352/0.685 | 27.064/0.670 | 27.160/0.674
Model | Scale Factor | NIQE↓
---|---|---
SRMD | 2 | 22.184
USRNet | 2 | 22.311
IKC | 2 | 22.699
DAN | 2 | 22.925
DRSR | 2 | 22.794
DSAT | 2 | 22.561
KESPKNet | 2 | 22.762
Ours | 2 | 20.787
SRMD | 4 | 20.176
USRNet | 4 | 19.388
IKC | 4 | 18.913
DAN | 4 | 19.348
DRSR | 4 | 19.091
DSAT | 4 | 19.511
KESPKNet | 4 | 19.505
Ours | 4 | 18.415
Model | Scale Factor | NIQE↓
---|---|---
SRMD | 2 | 16.959
USRNet | 2 | 17.831
IKC | 2 | 21.541
DAN | 2 | 23.032
DRSR | 2 | 22.731
DSAT | 2 | 21.736
KESPKNet | 2 | 22.004
Ours | 2 | 16.392
SRMD | 4 | 17.715
USRNet | 4 | 17.925
IKC | 4 | 20.848
DAN | 4 | 17.188
DRSR | 4 | 18.321
DSAT | 4 | 18.750
KESPKNet | 4 | 18.034
Ours | 4 | 16.691
Model | Noise | k1 PSNR↑/SSIM↑ | k2 PSNR↑/SSIM↑ | k3 PSNR↑/SSIM↑ | k4 PSNR↑/SSIM↑ | k5 PSNR↑/SSIM↑ | Average PSNR↑/SSIM↑
---|---|---|---|---|---|---|---
SRConvNext | 0 | 33.649/0.906 | 33.115/0.892 | 33.551/0.904 | 33.325/0.895 | 33.060/0.891 | 33.340/0.898
RFKCNext | 0 | 33.752/0.913 | 33.261/0.906 | 33.723/0.913 | 33.556/0.910 | 33.326/0.908 | 33.524/0.910
SRConvNext | 5 | 29.893/0.793 | 29.755/0.788 | 30.231/0.812 | 30.860/0.830 | 29.697/0.787 | 30.087/0.802
RFKCNext | 5 | 30.090/0.801 | 29.833/0.793 | 30.455/0.815 | 31.010/0.837 | 29.870/0.794 | 30.252/0.808
SRConvNext | 10 | 28.897/0.751 | 28.693/0.747 | 29.163/0.769 | 29.701/0.789 | 28.735/0.746 | 29.038/0.761
RFKCNext | 10 | 29.015/0.757 | 28.820/0.750 | 29.318/0.771 | 29.808/0.794 | 28.856/0.752 | 29.163/0.765
Model | Noise | k1 PSNR↑/SSIM↑ | k2 PSNR↑/SSIM↑ | k3 PSNR↑/SSIM↑ | k4 PSNR↑/SSIM↑ | k5 PSNR↑/SSIM↑ | Average PSNR↑/SSIM↑
---|---|---|---|---|---|---|---
SRConvNext | 0 | 28.773/0.754 | 28.674/0.751 | 28.717/0.753 | 28.707/0.753 | 28.747/0.753 | 28.724/0.753
RFKCNext | 0 | 28.916/0.756 | 28.859/0.755 | 28.888/0.757 | 28.829/0.754 | 28.887/0.756 | 28.876/0.756
SRConvNext | 5 | 27.835/0.706 | 27.732/0.702 | 27.889/0.711 | 28.002/0.717 | 27.790/0.704 | 27.850/0.708
RFKCNext | 5 | 27.925/0.709 | 27.844/0.705 | 28.007/0.713 | 28.105/0.719 | 27.867/0.706 | 27.950/0.710
SRConvNext | 10 | 27.045/0.670 | 26.953/0.666 | 27.116/0.675 | 27.261/0.683 | 26.986/0.667 | 27.072/0.672
RFKCNext | 10 | 27.120/0.672 | 27.046/0.668 | 27.219/0.677 | 27.352/0.685 | 27.064/0.670 | 27.160/0.674
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Qin, Y.; Nie, H.; Wang, J.; Liu, H.; Sun, J.; Zhu, M.; Lu, J.; Pan, Q. Multi-Degradation Super-Resolution Reconstruction for Remote Sensing Images with Reconstruction Features-Guided Kernel Correction. Remote Sens. 2024, 16, 2915. https://doi.org/10.3390/rs16162915