Global Prior-Guided Distortion Representation Learning Network for Remote Sensing Image Blind Super-Resolution
Abstract
1. Introduction
2. Materials and Methods
2.1. Global Prior Modulated by Fusion Gradient
2.2. Global Prior-Guided Distortion-Enhanced Representation Learning
2.3. SR Network for Distortion-Related Matching
2.4. Implementation Details
3. Results
3.1. Datasets
3.2. Experiments on Noise-Free Degradations with Isotropic Gaussian Kernels
3.3. Experiments on General Degradations with Anisotropic Gaussian Kernels and Noise
3.4. Experiments on Real-World Remote Sensing Images
3.5. Ablation Studies
4. Discussion
4.1. Comparison of Different Numbers of DCSMs
4.2. Validity of MSCN_G Coefficients
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Wang, Y.; Zhang, W.; Liu, X.; Peng, H.; Lin, M.; Li, A.; Jiang, A.; Ma, N.; Wang, L. A Deep Learning Method for Land Use Classification Based on Feature Augmentation. Remote Sens. 2025, 17, 1398.
- Tola, D.; Bustillos, L.; Arragan, F.; Chipana, R.; Hostache, R.; Resongles, E.; Espinoza-Villar, R.; Zolá, R.P.; Uscamayta, E.; Perez-Flores, M.; et al. High Spatial Resolution Soil Moisture Mapping over Agricultural Field Integrating SMAP, IMERG, and Sentinel-1 Data in Machine Learning Models. Remote Sens. 2025, 17, 2129.
- Li, J.; Tang, X.; Lu, J.; Fu, H.; Zhang, M.; Huang, J.; Zhang, C.; Li, H. TDMSANet: A Tri-Dimensional Multi-Head Self-Attention Network for Improved Crop Classification from Multitemporal Fine-Resolution Remotely Sensed Images. Remote Sens. 2024, 16, 4755.
- Tan, Y.; Sun, K.; Wei, J.; Gao, S.; Cui, W.; Duan, Y.; Liu, J.; Zhou, W. STFNet: A Spatiotemporal Fusion Network for Forest Change Detection Using Multi-Source Satellite Images. Remote Sens. 2024, 16, 4736.
- Jiao, D.; Su, N.; Yan, Y.; Liang, Y.; Feng, S.; Zhao, C.; He, G. SymSwin: Multi-Scale-Aware Super-Resolution of Remote Sensing Images Based on Swin Transformers. Remote Sens. 2024, 16, 4734.
- Jia, X.; Li, X.; Wang, Z.; Hao, Z.; Ren, D.; Liu, H.; Du, Y.; Ling, F. Enhancing Cropland Mapping with Spatial Super-Resolution Reconstruction by Optimizing Training Samples for Image Super-Resolution Models. Remote Sens. 2024, 16, 4678.
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307.
- Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654.
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1132–1140.
- Shi, W.; Caballero, J.; Huszar, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883.
- Lu, T.; Wang, J.; Zhang, Y.; Wang, Z.; Jiang, J. Satellite Image Super-Resolution via Multi-Scale Residual Deep Neural Network. Remote Sens. 2019, 11, 1588.
- Lei, S.; Shi, Z.; Zou, Z. Super-Resolution for Remote Sensing Images via Local–Global Combined Network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1243–1247.
- Safarov, F.; Khojamuratova, U.; Komoliddin, M.; Bolikulov, F.; Muksimova, S.; Cho, Y.-I. MBGPIN: Multi-Branch Generative Prior Integration Network for Super-Resolution Satellite Imagery. Remote Sens. 2025, 17, 805.
- Yu, S.; Wu, K.; Zhang, G.; Yan, W.; Wang, X.; Tao, C. MEFSR-GAN: A Multi-Exposure Feedback and Super-Resolution Multitask Network via Generative Adversarial Networks. Remote Sens. 2024, 16, 3501.
- Li, H.; Deng, W.; Zhu, Q.; Guan, Q.; Luo, J. Local-Global Context-Aware Generative Dual-Region Adversarial Networks for Remote Sensing Scene Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–14.
- Jiang, K.; Wang, Z.; Yi, P.; Wang, G.; Lu, T.; Jiang, J. Edge-Enhanced GAN for Remote Sensing Image Superresolution. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5799–5812.
- Sui, J.; Wu, Q.; Pun, M.-O. Denoising Diffusion Probabilistic Model with Adversarial Learning for Remote Sensing Super-Resolution. Remote Sens. 2024, 16, 1219.
- Han, L.; Zhao, Y.; Lv, H.; Zhang, Y.; Liu, H.; Bi, G.; Han, Q. Enhancing Remote Sensing Image Super-Resolution with Efficient Hybrid Conditional Diffusion Model. Remote Sens. 2023, 15, 3452.
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image Restoration Using Swin Transformer. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, 11–17 October 2021; pp. 1833–1844.
- Xiao, Y.; Yuan, Q.; Jiang, K.; He, J.; Jin, X.; Zhang, L. EDiffSR: An Efficient Diffusion Probabilistic Model for Remote Sensing Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–14.
- Guo, M.; Xiong, F.; Huang, Y.; Zhang, Z.; Zhang, J. A Multi-Path Feature Extraction and Transformer Feature Enhancement DEM Super-Resolution Reconstruction Network. Remote Sens. 2025, 17, 1737.
- Qin, Y.; Nie, H.; Wang, J.; Liu, H.; Sun, J.; Zhu, M.; Lu, J.; Pan, Q. Multi-Degradation Super-Resolution Reconstruction for Remote Sensing Images with Reconstruction Features-Guided Kernel Correction. Remote Sens. 2024, 16, 2915.
- Gu, J.; Lu, H.; Zuo, W.; Dong, C. Blind Super-Resolution with Iterative Kernel Correction. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 1604–1613.
- Michaeli, T.; Irani, M. Nonparametric Blind Super-Resolution. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 19–23 October 2013; pp. 945–952.
- Bell-Kligler, S.; Shocher, A.; Irani, M. Blind Super-Resolution Kernel Estimation Using an Internal-GAN. In Advances in Neural Information Processing Systems (NeurIPS 2019); Curran Associates, Inc.: Vancouver, BC, Canada, 2019; Volume 32, pp. 284–293.
- Liang, J.; Zhang, K.; Gu, S.; Van Gool, L.; Timofte, R. Flow-Based Kernel Prior with Application to Blind Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 10601–10610.
- Li, X.; Liu, Y.; Hua, Z.; Chen, S. An Unsupervised Band Selection Method via Contrastive Learning for Hyperspectral Images. Remote Sens. 2023, 15, 5495.
- Zhang, G.; Li, J.; Ye, Z. Unsupervised Joint Contrastive Learning for Aerial Person Re-Identification and Remote Sensing Image Classification. Remote Sens. 2024, 16, 422.
- Wan, H.; Nurmamat, P.; Chen, J.; Cao, Y.; Wang, S.; Zhang, Y.; Huang, Z. Fine-Grained Aircraft Recognition Based on Dynamic Feature Synthesis and Contrastive Learning. Remote Sens. 2025, 17, 768.
- Wang, L.; Wang, Y.; Dong, X.; Xu, Q.; Yang, J.; An, W.; Guo, Y. Unsupervised Degradation Representation Learning for Blind Super-Resolution. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 10576–10585.
- Wang, X.; Ma, J.; Jiang, J. Contrastive Learning for Blind Super-Resolution via A Distortion-Specific Network. IEEE/CAA J. Autom. Sin. 2023, 10, 78–89.
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
- He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum Contrast for Unsupervised Visual Representation Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 9729–9738.
- Chen, X.; Fan, H.; Girshick, R.; He, K. Improved Baselines with Momentum Contrastive Learning. arXiv 2020, arXiv:2003.04297.
- Park, T.; Efros, A.A.; Zhang, R.; Zhu, J.Y. Contrastive Learning for Unpaired Image-to-Image Translation. In Computer Vision—ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12354, pp. 319–345.
- He, J.; Dong, C.; Qiao, Y. Interactive Multi-Dimension Modulation with Dynamic Controllable Residual Learning for Image Restoration. In Computer Vision—ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12365, pp. 66–83.
- Cheng, G.; Han, J.; Lu, X. Remote Sensing Image Scene Classification: Benchmark and State of the Art. Proc. IEEE 2017, 105, 1865–1883.
- Li, H.; Jiang, H.; Gu, X.; Peng, J.; Li, W.; Hong, L.; Tao, C. CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification. Sensors 2020, 20, 1226.
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301.
- Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
- Zhang, K.; Van Gool, L.; Timofte, R. Deep Unfolding Network for Image Super-Resolution. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3214–3223.
Layer | Kernel | Stride | Input_Channel | Output_Channel |
---|---|---|---|---|
CSL | 3 | 1 | 64 | 64 |
CSL | 3 | 1 | 48 | 64 |
CSL | 3 | 1 | 48 | 64 |
CSL | 3 | 1 | 48 | 16 |
Scale ×2, isotropic Gaussian kernel widths 0.6 / 1.2 / 1.8 — PSNR (dB)/SSIM:

Method | NWPU-RESISC45 (0.6) | NWPU-RESISC45 (1.2) | NWPU-RESISC45 (1.8) | CLRS (0.6) | CLRS (1.2) | CLRS (1.8)
---|---|---|---|---|---|---
Bicubic | 29.31/0.8174 | 27.38/0.7558 | 25.42/0.7193 | 31.25/0.8433 | 29.42/0.8165 | 27.11/0.7549
RCAN (2018) | 31.28/0.8546 | 28.43/0.7692 | 25.63/0.7264 | 33.45/0.8529 | 30.76/0.8476 | 27.43/0.7587
IKC (2019) | 33.12/0.9154 | 32.86/0.9078 | 30.54/0.8610 | 35.27/0.9158 | 34.55/0.9102 | 32.68/0.8912
DASR (2021) | 33.24/0.9189 | 33.18/0.9169 | 32.27/0.8775 | 35.33/0.9215 | 35.13/0.9207 | 34.09/0.8983
CRDNet (2023) | 33.65/0.9205 | 33.52/0.9178 | 32.35/0.8783 | 35.71/0.9243 | 35.42/0.9224 | 34.18/0.8991
Ours | 34.33/0.9246 | 34.10/0.9196 | 32.73/0.8790 | 36.49/0.9330 | 35.92/0.9258 | 34.54/0.9005

Scale ×3, kernel widths 0.8 / 1.6 / 2.4 — PSNR (dB)/SSIM:

Method | NWPU-RESISC45 (0.8) | NWPU-RESISC45 (1.6) | NWPU-RESISC45 (2.4) | CLRS (0.8) | CLRS (1.6) | CLRS (2.4)
---|---|---|---|---|---|---
Bicubic | 28.23/0.7252 | 26.45/0.6984 | 24.31/0.6549 | 30.12/0.7534 | 28.57/0.7279 | 26.35/0.6783
RCAN (2018) | 29.14/0.7935 | 27.15/0.7128 | 24.42/0.6613 | 31.39/0.8042 | 29.68/0.7466 | 26.47/0.6719
DASR (2021) | 30.47/0.8398 | 30.33/0.8343 | 29.86/0.8074 | 32.25/0.8578 | 32.11/0.8515 | 31.63/0.8274
Ours | 30.94/0.8418 | 30.86/0.8372 | 30.11/0.8095 | 32.70/0.8609 | 32.52/0.8544 | 31.78/0.8285

Scale ×4, kernel widths 1.2 / 2.4 / 3.6 — PSNR (dB)/SSIM:

Method | NWPU-RESISC45 (1.2) | NWPU-RESISC45 (2.4) | NWPU-RESISC45 (3.6) | CLRS (1.2) | CLRS (2.4) | CLRS (3.6)
---|---|---|---|---|---|---
Bicubic | 27.15/0.6786 | 25.21/0.6375 | 23.39/0.5633 | 28.65/0.6953 | 26.74/0.6798 | 24.57/0.6372
RCAN (2018) | 27.79/0.7348 | 25.64/0.6412 | 23.46/0.5641 | 29.58/0.7245 | 27.69/0.6883 | 24.64/0.6425
IKC (2019) | 27.94/0.7419 | 27.52/0.7288 | 26.43/0.6557 | 29.94/0.7426 | 29.75/0.7451 | 29.30/0.7446
DASR (2021) | 28.64/0.7628 | 28.12/0.7497 | 27.58/0.6943 | 30.51/0.7908 | 30.38/0.7824 | 29.65/0.7478
CRDNet (2023) | 28.72/0.7634 | 28.26/0.7569 | 27.83/0.7186 | 30.64/0.7923 | 30.46/0.7837 | 29.68/0.7480
Ours | 28.98/0.7656 | 28.84/0.7581 | 28.05/0.7195 | 30.75/0.7932 | 30.54/0.7843 | 29.72/0.7484
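The noise-free experiments above synthesize low-resolution inputs by blurring with an isotropic Gaussian kernel of a given width and then downsampling by the scale factor. A minimal NumPy sketch of that degradation pipeline is shown below; the function names, kernel size, and padding mode are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def isotropic_gaussian_kernel(size=21, width=1.2):
    """2-D isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * width**2))
    return k / k.sum()

def degrade(hr, kernel, scale=4):
    """Classical degradation model y = (x * k) down-sampled by s."""
    pad = kernel.shape[0] // 2
    padded = np.pad(hr, pad, mode="reflect")
    blurred = np.zeros_like(hr, dtype=float)
    for i in range(hr.shape[0]):
        for j in range(hr.shape[1]):
            patch = padded[i:i + kernel.shape[0], j:j + kernel.shape[1]]
            blurred[i, j] = np.sum(patch * kernel)
    return blurred[::scale, ::scale]  # direct subsampling after blurring
```

A training pair would then be `(degrade(hr, isotropic_gaussian_kernel(width=w), s), hr)` for a kernel width `w` drawn from the ranges in the table.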
Metric | Bicubic | RCAN (2018) | IKC (2019) | DASR (2021) | CRDNet (2023) | Ours
---|---|---|---|---|---|---
Time (ms) | 2.7 | 159 | 341 | 51 | 38 | 16
FLOPs (×10^10) | – | 26.15 | 8.06 | 4.21 | 3.30 | 1.98
Blur kernel columns k1–k8 correspond to the eight anisotropic Gaussian kernels depicted as images in the original table. All entries are PSNR (dB)/SSIM.

Method (Param) | Noise | k1 | k2 | k3 | k4 | k5 | k6 | k7 | k8
---|---|---|---|---|---|---|---|---|---
DnCNN + RCAN (2017 + 2018) (650 K + 15.2 M) | 0 | 26.85/0.7018 | 27.85/0.7327 | 25.47/0.6566 | 25.52/0.6586 | 25.58/0.6634 | 25.12/0.6372 | 24.67/0.6055 | 24.34/0.5989
| 5 | 26.43/0.6649 | 27.18/0.6940 | 25.36/0.6269 | 25.41/0.6308 | 25.47/0.6298 | 25.34/0.6237 | 24.47/0.5783 | 24.15/0.5548
| 15 | 25.51/0.6152 | 26.07/0.6414 | 24.74/0.5843 | 24.78/0.5882 | 24.73/0.5852 | 24.64/0.5843 | 23.98/0.5512 | 23.76/0.5419
KernelGAN + USRNet (2019 + 2020) (193 K + 17 M) | 0 | 26.57/0.6904 | 27.45/0.7174 | 25.05/0.6498 | 25.67/0.6621 | 25.65/0.6656 | 25.09/0.6355 | 24.56/0.6061 | 24.34/0.5971
| 5 | 26.12/0.6608 | 27.28/0.7031 | 25.34/0.6273 | 25.37/0.6316 | 25.51/0.6338 | 25.31/0.6127 | 24.56/0.5836 | 24.15/0.5785
| 15 | 25.77/0.6246 | 26.35/0.6524 | 24.86/0.5886 | 24.76/0.5890 | 24.83/0.5889 | 24.67/0.5863 | 24.07/0.5538 | 23.88/0.5463
DASR (2021) (5.86 M) | 0 | 28.54/0.7215 | 28.47/0.7459 | 28.06/0.7387 | 27.83/0.7256 | 28.12/0.7415 | 28.24/0.7341 | 27.85/0.7070 | 27.35/0.6800
| 5 | 27.54/0.6980 | 27.84/0.6599 | 26.92/0.6143 | 26.72/0.6147 | 27.06/0.6128 | 27.12/0.6189 | 26.32/0.5815 | 25.93/0.6091
| 15 | 26.13/0.6391 | 26.48/0.6599 | 25.71/0.6143 | 25.69/0.6147 | 25.70/0.6128 | 25.73/0.6189 | 25.02/0.5815 | 24.64/0.5640
CRDNet (2023) (2.8 M) | 0 | 28.62/0.7499 | 28.63/0.7524 | 28.30/0.7417 | 28.19/0.7345 | 28.36/0.7437 | 28.44/0.7401 | 27.97/0.7184 | 27.48/0.6921
| 5 | 27.55/0.7011 | 27.97/0.7178 | 27.03/0.6768 | 26.99/0.6747 | 27.10/0.6754 | 27.16/0.6768 | 26.35/0.6345 | 25.95/0.6154
| 15 | 26.03/0.6407 | 26.55/0.6618 | 25.75/0.6159 | 25.83/0.6186 | 25.78/0.6162 | 25.83/0.6204 | 25.12/0.5845 | 24.82/0.5683
GDRNet (Ours) (1.6 M) | 0 | 28.70/0.7558 | 28.82/0.7584 | 28.54/0.7447 | 28.48/0.7430 | 28.53/0.7455 | 28.53/0.7464 | 28.08/0.7239 | 27.59/0.6984
| 5 | 27.61/0.7042 | 28.08/0.7235 | 27.14/0.6775 | 27.17/0.6793 | 27.14/0.6788 | 27.19/0.6831 | 26.37/0.6399 | 25.97/0.6172
| 15 | 26.22/0.6426 | 26.67/0.6632 | 25.79/0.6171 | 25.85/0.6207 | 25.79/0.6181 | 25.83/0.6216 | 25.20/0.5865 | 24.90/0.5699
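The general-degradation setting extends the blur-and-downsample model with rotated anisotropic Gaussian kernels and additive white Gaussian noise. The sketch below shows one common way to construct such kernels from a rotated covariance matrix and to add noise at a given level; the function names and the 21×21 kernel size are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def anisotropic_gaussian_kernel(size=21, sig1=2.0, sig2=1.0, theta=0.0):
    """Anisotropic Gaussian kernel with covariance R.diag(sig1^2, sig2^2).R^T,
    where R rotates by theta radians. Normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    cov = R @ np.diag([sig1**2, sig2**2]) @ R.T
    inv = np.linalg.inv(cov)
    pts = np.stack([xx, yy], axis=-1)
    expo = np.einsum("...i,ij,...j->...", pts, inv, pts)
    k = np.exp(-0.5 * expo)
    return k / k.sum()

def add_noise(img, sigma, rng=None):
    """Additive white Gaussian noise; sigma is given on the 0-255 scale
    (matching the noise levels 0/5/15 in the table) for an image in [0, 1]."""
    rng = np.random.default_rng(0) if rng is None else rng
    return img + rng.normal(0.0, sigma / 255.0, img.shape)
```

A degraded input would then be the blurred, subsampled image plus `add_noise(lr, sigma)` for sigma in {0, 5, 15}.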
Contrastive Learning | MSCN | MSCN_G | PSNR (dB)/SSIM
---|---|---|---
√ | | √ | 28.98/0.7656
√ | | | 28.81/0.7641
√ | √ | | 28.75/0.7638
Strategy | PSNR (dB)/SSIM | Param
---|---|---
DCS-Strategy | 28.98/0.7656 | 1.6 M
Non-DCS-Strategy | 28.78/0.7640 | 2.7 M
Model | PSNR (dB) | SSIM
---|---|---
DCSM-1 | 27.95 | 0.7316 |
DCSM-2 | 28.36 | 0.7458 |
DCSM-3 | 28.52 | 0.7571 |
DCSM-4 | 28.64 | 0.7602 |
DCSM-5 | 28.72 | 0.7634 |
DCSM-6 | 28.59 | 0.7587 |
Coefficients | H | V | D1 | D2 | Mean
---|---|---|---|---|---
Original | 0.8751 | 0.8672 | 0.7895 | 0.7776 | 0.8274
MSCN | 0.3916 | 0.3962 | 0.1323 | 0.0770 | 0.2493
MSCN_G | 0.3599 | 0.3486 | 0.1166 | 0.0581 | 0.2208
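The correlations above measure how strongly each pixel depends on its horizontal (H), vertical (V), and diagonal (D1, D2) neighbors; lower values mean better decorrelation, which is the property MSCN-style normalization targets. A hedged NumPy sketch of the standard MSCN transform (per Mittal et al.'s formulation, Î = (I − μ)/(σ + C) with Gaussian-weighted local statistics) and of the neighbor-correlation measurement is shown below; the window size and the constant C are conventional choices, not necessarily the paper's, and the gradient-modulated MSCN_G variant is not reproduced here.

```python
import numpy as np

def local_stats(img, size=7, sigma=7.0 / 6.0):
    """Gaussian-weighted local mean and standard deviation."""
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-ax**2 / (2.0 * sigma**2))
    w = np.outer(g, g)
    w /= w.sum()
    pad = size // 2
    p = np.pad(img, pad, mode="reflect")
    mu = np.zeros_like(img, dtype=float)
    mu2 = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = p[i:i + size, j:j + size]
            mu[i, j] = np.sum(w * patch)
            mu2[i, j] = np.sum(w * patch**2)
    sd = np.sqrt(np.maximum(mu2 - mu**2, 0.0))
    return mu, sd

def mscn(img, C=1.0 / 255.0):
    """Mean-subtracted contrast-normalized coefficients."""
    mu, sd = local_stats(img)
    return (img - mu) / (sd + C)

def neighbor_corr(m):
    """Pearson correlation between each pixel and its H/V/D1/D2 neighbor."""
    pairs = {
        "H":  (m[:, :-1], m[:, 1:]),
        "V":  (m[:-1, :], m[1:, :]),
        "D1": (m[:-1, :-1], m[1:, 1:]),
        "D2": (m[1:, :-1], m[:-1, 1:]),
    }
    return {k: float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
            for k, (a, b) in pairs.items()}
```

Applying `neighbor_corr` to an image before and after `mscn` reproduces the qualitative trend in the table: normalization sharply reduces the H/V/D1/D2 correlations.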
λ | 3 | 5 | 8 | 9.6 | 11 |
---|---|---|---|---|---|
PSNR (dB)/SSIM | 27.86/0.7392 | 28.14/0.7483 | 28.75/0.7635 | 28.98/0.7656 | 28.42/0.7568 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, G.; Sun, T.; Yu, S.; Wu, S. Global Prior-Guided Distortion Representation Learning Network for Remote Sensing Image Blind Super-Resolution. Remote Sens. 2025, 17, 2830. https://doi.org/10.3390/rs17162830