Comprehensive Review of Deep Learning Approaches for Single-Image Super-Resolution
Abstract
1. Introduction
- (1) This article provides an overview of DL-based SISR methods, introducing the main families of SR approaches and elaborating on the current state of their development;
- (2) This article analyzes emerging SISR methods and lists their applications in specific fields, such as interdisciplinary SISR for medical imaging and remote sensing;
- (3) We compare the reconstruction results and performance of representative SISR models to provide a simple, intuitive analysis;
- (4) We analyze the technical bottlenecks in the SISR field and explore future directions for SR technology, including strategies such as cross-modal fusion.
2. Problem Statement and Related Works
2.1. Problem Statement
2.2. Benchmark Datasets
2.2.1. Degradation Mode
2.2.2. Training and Test Datasets
2.3. Upsampling Methods
2.3.1. Interpolation-Based Upsampling Methods
2.3.2. Learning-Based Upsampling Methods
2.4. Optimization Objective
2.4.1. Learning Strategy
2.4.2. Loss Function
2.5. Assessment Methods
2.5.1. Objective Evaluation Methods
2.5.2. Subjective Evaluation Methods
2.5.3. Fine-Grained Error Analyses
2.6. Ablation Studies for Component Validation
3. Image Super-Resolution
3.1. Simulation SISR
3.1.1. Efficient Network/Mechanism Design Methods
3.1.2. Efficient Structures
3.1.3. Transformer-Based Methods
3.1.4. Mamba-Based Methods
3.2. Perceptual Quality Methods
3.2.1. Adversarial Training
3.2.2. Cycle Consistency
3.2.3. Diffusion-Based Methods
3.3. Additional Information Utilization Methods
3.4. Comparison of Simulation SISR Method Types
3.5. Real-World Image SR
3.5.1. Blind Image SR
3.5.2. Arbitrary-Scale SR
3.6. Domain-Specific Applications
3.6.1. Stereo Image SR
3.6.2. Remote Sensing Image SR
3.6.3. Light-Field Image SR
3.6.4. Face Image SR
3.6.5. Hyperspectral Image SR
3.6.6. Medical Image SR
4. Reconstruction Results
Model | Set5 PSNR/SSIM (×4) | Set14 PSNR/SSIM (×4) | Urban100 PSNR/SSIM (×4) | MOS | Training Datasets | Parameters |
---|---|---|---|---|---|---|
SRCNN [7] | 30.48/0.8628 | 27.50/0.7513 | 24.52/0.7221 | 3.2 | T91+ImageNet | 57 K |
VDSR [26] | 31.35/0.8838 | 28.02/0.7680 | 25.18/0.7540 | 3.5 | BSD+T91 | 665 K |
LapSRN [40] | 31.54/0.8855 | 28.19/0.7720 | 25.21/0.7560 | 3.6 | BSD+T91 | 812 K |
MemNet [34] | 31.74/0.8893 | 28.26/0.7723 | 25.50/0.7630 | 3.7 | BSD+T91 | 677 K |
IDN [59] | 31.82/0.8903 | 28.25/0.7730 | 25.41/0.7632 | 3.7 | BSD+T91 | 678 K |
RFDN [60] | 32.18/0.8948 | 28.58/0.7812 | 26.04/0.7848 | 3.8 | DIV2K | 441 K |
DSRN [37] | 31.40/0.8830 | 28.07/0.7700 | 25.08/0.7470 | 3.4 | T91 | 1.2 M |
MSRN [29] | 32.07/0.8903 | 28.60/0.7751 | 26.04/0.7896 | 3.9 | DIV2K | 6.3 M |
CARN [38] | 32.13/0.8937 | 28.60/0.7806 | 26.07/0.7837 | 4.0 | BSD+T91+DIV2K | 1.6 M |
SeaNet [111] | 32.33/0.8970 | 28.81/0.7855 | 26.32/0.7942 | 4.1 | DIV2K | 7.4 M |
CRN [38] | 32.34/0.8971 | 28.74/0.7855 | 26.44/0.7967 | 4.1 | DIV2K | 9.5 M |
EDSR [27] | 32.46/0.8968 | 28.80/0.7876 | 26.64/0.8033 | 4.2 | DIV2K | 43 M |
RDN [194] | 32.47/0.8990 | 28.81/0.7871 | 26.61/0.8028 | 4.2 | DIV2K | 22.6 M |
MDCN [45] | 32.48/0.8985 | 28.83/0.7879 | 26.69/0.8049 | 4.3 | DIV2K | 4.5 M |
SRRFN [39] | 32.56/0.8993 | 28.86/0.7882 | 26.78/0.8071 | 4.3 | DIV2K | 4.2 M |
RCAN [48] | 32.63/0.9002 | 28.87/0.7889 | 26.82/0.8087 | 4.4 | DIV2K | 16 M |
SwinIR [69] | 32.92/0.9044 | 29.09/0.7950 | 27.45/0.8254 | 4.5 | DIV2K+Flickr2K | 11.8 M |
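The PSNR/SSIM values above are conventionally computed on the luminance (Y) channel of the reconstructed image after cropping a border equal to the scale factor. The sketch below (Python with scikit-image, hypothetical file paths) illustrates one common evaluation protocol; it is an assumed convention rather than the exact script behind this table.

```python
from skimage import io, color
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(sr_path: str, hr_path: str, scale: int = 4):
    """Compute PSNR/SSIM on the Y channel with a `scale`-pixel border crop,
    a common benchmark protocol (assumed, not taken from the original paper)."""
    sr = io.imread(sr_path)
    hr = io.imread(hr_path)

    # Convert RGB to YCbCr and keep only the luminance channel.
    sr_y = color.rgb2ycbcr(sr)[..., 0]
    hr_y = color.rgb2ycbcr(hr)[..., 0]

    # Crop `scale` pixels from each border before measuring.
    sr_y = sr_y[scale:-scale, scale:-scale]
    hr_y = hr_y[scale:-scale, scale:-scale]

    psnr = peak_signal_noise_ratio(hr_y, sr_y, data_range=255.0)
    ssim = structural_similarity(hr_y, sr_y, data_range=255.0)
    return psnr, ssim

# Hypothetical usage on one Set5 image pair:
# print(evaluate_pair("results/baby_x4_SR.png", "Set5/baby_HR.png", scale=4))
```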
5. Potential Issues and Future Directions
5.1. Lightweight SISR for Mobile Intelligence
5.2. SISR Applicable to Multiple Scenarios
5.3. New Loss Functions and Assessment Methods
5.4. Mutual Promotion with High-Level Tasks
5.5. Building Real SISR
5.6. Efficient and Accurate Arbitrary-Scale SISR
5.7. Considering Features of Different Images
5.8. System Security of SISR
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Duchon, C.E. Lanczos Filtering in One and Two Dimensions. J. Appl. Meteorol. 1979, 18, 1016–1022. [Google Scholar] [CrossRef]
- Sun, J.; Xu, Z.; Shum, H.Y. Image super-resolution using gradient profile prior. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
- Kim, K.I.; Kwon, Y. Single-Image Super-Resolution Using Sparse Regression and Natural Image Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1127–1133. [Google Scholar] [PubMed]
- Chang, H.; Yeung, D.Y.; Xiong, Y. Super-resolution through neighbor embedding. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 1. [Google Scholar]
- Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image super-resolution via sparse representation. IEEE Trans. Image Process. 2010, 19, 2861–2873. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part IV 13. Springer: Berlin/Heidelberg, Germany, 2014; pp. 184–199. [Google Scholar]
- Hinton, G.; Deng, L.; Yu, D.; Dahl, G.E.; Mohamed, A.R.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.N.; et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Process. Mag. 2012, 29, 82–97. [Google Scholar] [CrossRef]
- Collobert, R.; Weston, J. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; pp. 160–167. [Google Scholar]
- Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; Volume 2, pp. 416–423. [Google Scholar]
- Agustsson, E.; Timofte, R. Ntire 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 126–135. [Google Scholar]
- Timofte, R.; Agustsson, E.; Van Gool, L.; et al. NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1110–1121. [Google Scholar]
- Bevilacqua, M.; Roumy, A.; Guillemot, C.; Alberi-Morel, M.L. Low-Complexity Single-Image Super-Resolution Based on Nonnegative Neighbor Embedding. In Proceedings of the British Machine Vision Conference (BMVC), Guildford, UK, 3–7 September 2012; pp. 135.1–135.10. [Google Scholar]
- Zeyde, R.; Elad, M.; Protter, M. On Single Image Scale-Up Using Sparse-Representations. In Proceedings of the International Conference on Curves and Surfaces, Avignon, France, 24–30 June 2010; pp. 711–730. [Google Scholar]
- Huang, J.B.; Singh, A.; Ahuja, N. Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5197–5206. [Google Scholar]
- Matsui, Y.; Ito, K.; Aramaki, Y.; Fujimoto, A.; Ogawa, T.; Yamasaki, T.; Aizawa, K. Sketch-based manga retrieval using manga109 dataset. Multimed. Tools Appl. 2017, 76, 21811–21838. [Google Scholar] [CrossRef]
- Belharbi, S.; Whitford, M.K.; Hoang, P.; Murtaza, S.; McCaffrey, L.; Granger, E. SR-CACO-2: A dataset for confocal fluorescence microscopy image super-resolution. Adv. Neural Inf. Process. Syst. 2024, 37, 59948–59983. [Google Scholar]
- Shocher, A.; Cohen, N.; Irani, M. “zero-shot” super-resolution using deep internal learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3118–3126. [Google Scholar]
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27. [Google Scholar]
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
- Fuoli, D.; Van Gool, L.; Timofte, R. Fourier space losses for efficient perceptual image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 2360–2369. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- He, K.; Sun, J. Convolutional neural networks at constrained time cost. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5353–5360. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144. [Google Scholar]
- Lu, Z.; Tu, Z.; Zhang, Z.; Jiang, C.; Chen, P. Single Image Super-Resolution Reconstruction Based on Multilayer Residual Networks With Attention Mechanisms. IEEE Access 2025, 13, 73794–73802. [Google Scholar] [CrossRef]
- Li, J.; Fang, F.; Mei, K.; Zhang, G. Multi-scale residual network for image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 517–532. [Google Scholar]
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301. [Google Scholar]
- Inderjeet; Sahambi, J.S. GAMSRN: Global Attention Multi-Scale Residual Network for Single-Image Super-Resolution and Low-Light Enhancement. In Proceedings of the 2025 National Conference on Communications (NCC), New Delhi, India, 6–9 March 2025; pp. 1–6. [Google Scholar]
- Tong, T.; Li, G.; Liu, X.; Gao, Q. Image super-resolution using dense skip connections. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4799–4807. [Google Scholar]
- Purohit, K.; Mandal, S.; Rajagopalan, A. Deep Networks for Image and Video Super-Resolution. arXiv 2022, arXiv:2201.11996. [Google Scholar] [CrossRef]
- Tai, Y.; Yang, J.; Liu, X.; Xu, C. Memnet: A persistent memory network for image restoration. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4539–4547. [Google Scholar]
- Mei, K.; Jiang, A.; Li, J.; Liu, B.; Ye, J.; Wang, M. Deep residual refining based pseudo-multi-frame network for effective single image super-resolution. IET Image Process. 2019, 13, 591–599. [Google Scholar] [CrossRef]
- Shen, M.; Yu, P.; Wang, R.; Yang, J.; Xue, L.; Hu, M. Multipath feedforward network for single image super-resolution. Multimed. Tools Appl. 2019, 78, 19621–19640. [Google Scholar] [CrossRef]
- Han, W.; Chang, S.; Liu, D.; Yu, M.; Witbrock, M.; Huang, T.S. Image super-resolution via dual-state recurrent networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Ahn, N.; Kang, B.; Sohn, K.A. Fast, accurate, and lightweight super-resolution with cascading residual network. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 252–268. [Google Scholar]
- Li, J.; Yuan, Y.; Mei, K.; Fang, F. Lightweight and accurate recursive fractal network for image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019; pp. 3814–3823. [Google Scholar]
- Lai, W.S.; Huang, J.B.; Ahuja, N.; Yang, M.H. Deep laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 624–632. [Google Scholar]
- Wang, Y.; Perazzi, F.; McWilliams, B.; Sorkine-Hornung, A.; Sorkine-Hornung, O.; Schroers, C. A fully progressive approach to single-image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 864–873. [Google Scholar]
- Wong, L.; Zhao, D.; Wan, S.; Zhang, B. Perceptual image super-resolution with progressive adversarial network. arXiv 2020, arXiv:2003.03756. [Google Scholar] [CrossRef]
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
- Li, J.; Fang, F.; Li, J.; Mei, K.; Zhang, G. MDCN: Multi-scale dense cross network for image super-resolution. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 2547–2561. [Google Scholar] [CrossRef]
- Qin, J.; Huang, Y.; Wen, W. Multi-scale feature fusion residual network for single image super-resolution. Neurocomputing 2020, 379, 334–342. [Google Scholar] [CrossRef]
- Chang, C.Y.; Chien, S.Y. Multi-scale dense network for single-image super-resolution. In Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1742–1746. [Google Scholar]
- Cao, F.; Liu, H. Single image super-resolution via multi-scale residual channel attention network. Neurocomputing 2019, 358, 424–436. [Google Scholar] [CrossRef]
- Yazıcı, Z.A.; Öksüz, İ.; Ekenel, H.K. GLIMS: Attention-guided lightweight multi-scale hybrid network for volumetric semantic segmentation. Image Vis. Comput. 2024, 146, 105055. [Google Scholar] [CrossRef]
- Li, Y.; Zhao, X.; Zhang, X.; Yang, Y.; Li, T. Long-Range Multi-Scale Fusion for Efficient Single Image Super-Resolution. In Proceedings of the ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, 6–11 April 2025; pp. 1–5. [Google Scholar]
- Mei, K.; Jiang, A.; Li, J.; Ye, J.; Wang, M. An effective single-image super-resolution model using squeeze-and-excitation networks. In Proceedings of the Neural Information Processing: 25th International Conference, ICONIP 2018, Siem Reap, Cambodia, 13–16 December 2018; Proceedings, Part VI 25. Springer: Berlin/Heidelberg, Germany, 2018; pp. 542–553. [Google Scholar]
- Wang, Y.; Li, Y.; Wang, G.; Liu, X. Multi-scale attention network for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–18 June 2024; pp. 5950–5960. [Google Scholar]
- Wan, C.; Yu, H.; Li, Z.; Chen, Y.; Zou, Y.; Liu, Y.; Yin, X.; Zuo, K. Swift parameter-free attention network for efficient super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–18 June 2024; pp. 6246–6256. [Google Scholar]
- Ahmadi, M.; Raghavan, J. Novel Spatial-Edge Residual Attention Model for Face Super-Resolution Enhancement. IEEE Can. J. Electr. Comput. Eng. 2025, 48, 294–304. [Google Scholar] [CrossRef]
- Yu, C.; Pei, H. Super-Resolution Reconstruction Method of Face Image Based on Attention Mechanism. IEEE Access 2025, 13, 121250–121260. [Google Scholar] [CrossRef]
- Li, Z.; Yang, J.; Liu, Z.; Yang, X.; Jeon, G.; Wu, W. Feedback network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3867–3876. [Google Scholar]
- Barman, T.; Deka, B. A Deep Learning-Based Joint Image Super-Resolution and Deblurring Framework. IEEE Trans. Artif. Intell. 2023, 5, 3160–3173. [Google Scholar] [CrossRef]
- Fan, Y.; Jiang, M.; Liu, Z.; Zhao, Z.; Zheng, L. Lightweight Image Super-Resolution Reconstruction Guided by Gated Feedback and LatticeFormer. IEEE Access 2025, 13, 53214–53226. [Google Scholar] [CrossRef]
- Hui, Z.; Wang, X.; Gao, X. Fast and accurate single image super-resolution via information distillation network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 723–731. [Google Scholar]
- Liu, J.; Tang, J.; Wu, G. Residual feature distillation network for lightweight image super-resolution. In Proceedings of the Computer Vision–ECCV 2020 Workshops, Glasgow, UK, 23–28 August 2020; Proceedings, Part III 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 41–55. [Google Scholar]
- Zhou, L.; Cai, H.; Gu, J.; Li, Z.; Liu, Y.; Chen, X.; Qiao, Y.; Dong, C. Efficient image super-resolution using vast-receptive-field attention. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 256–272. [Google Scholar]
- Du, Z.; Liu, D.; Liu, J.; Tang, J.; Wu, G.; Fu, L. Fast and memory-efficient network towards efficient image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 853–862. [Google Scholar]
- Lee, W.; Lee, J.; Kim, D.; Ham, B. Learning with privileged information for efficient image super-resolution. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXIV 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 465–482. [Google Scholar]
- Lin, C.; Chen, Y.; Zou, X.; Deng, X.; Dai, F.; You, J.; Xiao, J. An unconstrained palmprint region of interest extraction method based on lightweight networks. PLoS ONE 2024, 19, e0307822. [Google Scholar] [CrossRef] [PubMed]
- Sun, L.; Pan, J.; Tang, J. Shufflemixer: An efficient convnet for image super-resolution. Adv. Neural Inf. Process. Syst. 2022, 35, 17314–17326. [Google Scholar]
- Chen, Z.; Zhang, Y.; Gu, J.; Kong, L.; Yang, X. Recursive generalization transformer for image super-resolution. arXiv 2023, arXiv:2303.06373. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
- Chen, H.; Wang, Y.; Guo, T.; Xu, C.; Deng, Y.; Liu, Z.; Ma, S.; Xu, C.; Xu, C.; Gao, W. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12299–12310. [Google Scholar]
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image restoration using swin transformer. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, QC, Canada, 11–17 October 2021; pp. 1833–1844. [Google Scholar]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5728–5739. [Google Scholar]
- Chen, X.; Wang, X.; Zhou, J.; Qiao, Y.; Dong, C. Activating more pixels in image super-resolution transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 22367–22377. [Google Scholar]
- Jiang, N.; Zhao, W.; Wang, H.; Luo, H.; Chen, Z.; Zhu, J. Lightweight super-resolution generative adversarial network for SAR images. Remote Sens. 2024, 16, 1788. [Google Scholar] [CrossRef]
- Guo, F.; Feng, Q.; Yang, S.; Yang, W. CMTNet: Convolutional Meets Transformer Network for Hyperspectral Images Classification. arXiv 2024, arXiv:2406.14080. [Google Scholar] [CrossRef]
- Zhang, D.; Liang, S.; He, T.; Shao, J.; Qin, K. CVIformer: Cross-View Interactive Transformer for Efficient Stereoscopic Image Super-Resolution. IEEE Trans. Emerg. Top. Comput. Intell. 2025, 9, 1107–1118. [Google Scholar] [CrossRef]
- He, Y.; He, Y. MPSI: Mamba enhancement model for pixel-wise sequential interaction Image Super-Resolution. arXiv 2024, arXiv:2412.07222. [Google Scholar] [CrossRef]
- Wang, X.; Li, J.; Li, J.; Wang, S.; Yan, L.; Xu, Y. A Collaborative Network of Mamba and CNN for Lightweight Image Super-Resolution. IEEE Trans. Consum. Electron. 2025, 71, 3591–3604. [Google Scholar] [CrossRef]
- Aalishah, R.; Navardi, M.; Mohsenin, T. MambaLiteSR: Image Super-Resolution with Low-Rank Mamba Using Knowledge Distillation. In Proceedings of the 2025 26th International Symposium on Quality Electronic Design (ISQED), San Francisco, CA, USA, 23–25 April 2025; pp. 1–8. [Google Scholar]
- Jiang, K.; Yang, M.; Xiao, Y.; Wu, J.; Wang, G.; Feng, X.; Jiang, J. Rep-Mamba: Re-Parameterization in Vision Mamba for Lightweight Remote Sensing Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5637012. [Google Scholar] [CrossRef]
- Hsu, C.C.; Lee, C.M.; Chou, Y.S. Drct: Saving image super-resolution away from information bottleneck. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–18 June 2024; pp. 6133–6142. [Google Scholar]
- Blau, Y.; Michaeli, T. The perception-distortion tradeoff. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6228–6237. [Google Scholar]
- Xue, D.; Herranz, L.; Corral, J.V.; Zhang, Y. Burst perception-distortion tradeoff: Analysis and evaluation. In Proceedings of the ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Gatys, L.; Ecker, A.S.; Bethge, M. Texture synthesis using convolutional neural networks. Adv. Neural Inf. Process. Syst. 2015, 28. [Google Scholar] [CrossRef]
- Gatys, L.A.; Ecker, A.S.; Bethge, M. A neural algorithm of artistic style. arXiv 2015, arXiv:1508.06576. [Google Scholar] [CrossRef]
- Ople, J.J.M.; Tan, D.S.; Azcarraga, A.; Yang, C.L.; Hua, K.L. Super-resolution by image enhancement using texture transfer. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 953–957. [Google Scholar]
- Rad, M.S.; Bozorgtabar, B.; Marti, U.V.; Basler, M.; Ekenel, H.K.; Thiran, J.P. Srobb: Targeted perceptual loss for single image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 2710–2719. [Google Scholar]
- Guo, J.; Lv, F.; Shen, J.; Liu, J.; Wang, M. An improved generative adversarial network for remote sensing image super-resolution. IET Image Process. 2023, 17, 1852–1863. [Google Scholar] [CrossRef]
- Song, J.; Yi, H.; Xu, W.; Li, X.; Li, B.; Liu, Y. Dual perceptual loss for single image super-resolution using esrgan. arXiv 2022, arXiv:2201.06383. [Google Scholar] [CrossRef]
- Korkmaz, C.; Tekalp, A.M.; Dogan, Z. Training generative image super-resolution models by wavelet-domain losses enables better control of artifacts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 5926–5936. [Google Scholar]
- Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Change Loy, C. Esrgan: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018; pp. 63–79. [Google Scholar]
- Wang, Y.; Lin, C.; Luo, D.; Tai, Y.; Zhang, Z.; Xie, Y. High-resolution gan inversion for degraded images in large diverse datasets. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 2716–2723. [Google Scholar]
- Yang, S.; Yuan, J.; Tang, W.; Liu, R.; Yuan, M.; Huang, J. Super-Resolution Reconstruction of Medical Images Based on Improved Generative Adversarial Networks. In Proceedings of the 2024 3rd International Conference on Artificial Intelligence and Computer Information Technology (AICIT), Yichang, China, 20–22 September 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–4. [Google Scholar]
- Huang, J.; Li, K.; Jia, J.; Wang, X. Single Image Super-Resolution Through Image Pixel Information Clustering and Generative Adversarial Network. Big Data Min. Anal. 2025, 8, 1044–1059. [Google Scholar] [CrossRef]
- Bulat, A.; Yang, J.; Tzimiropoulos, G. To learn image super-resolution, use a gan to learn how to do image degradation first. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 185–200. [Google Scholar]
- Yuan, Y.; Liu, S.; Zhang, J.; Zhang, Y.; Dong, C.; Lin, L. Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 701–710. [Google Scholar]
- Guo, Y.; Chen, J.; Wang, J.; Chen, Q.; Cao, J.; Deng, Z.; Xu, Y.; Tan, M. Closed-loop matters: Dual regression networks for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 5407–5416. [Google Scholar]
- Yang, Y.; Qi, Y.; Qi, S. Relation-consistency graph convolutional network for image super-resolution. Vis. Comput. 2024, 40, 619–635. [Google Scholar] [CrossRef]
- Li, H.; Yang, Y.; Chang, M.; Chen, S.; Feng, H.; Xu, Z.; Li, Q.; Chen, Y. Srdiff: Single image super-resolution with diffusion probabilistic models. Neurocomputing 2022, 479, 47–59. [Google Scholar] [CrossRef]
- Saharia, C.; Ho, J.; Chan, W.; Salimans, T.; Fleet, D.J.; Norouzi, M. Image super-resolution via iterative refinement. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 4713–4726. [Google Scholar] [CrossRef]
- Gao, S.; Liu, X.; Zeng, B.; Xu, S.; Li, Y.; Luo, X.; Liu, J.; Zhen, X.; Zhang, B. Implicit diffusion models for continuous super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 10021–10030. [Google Scholar]
- Wang, Z.; Zhang, Z.; Zhang, X.; Zheng, H.; Zhou, M.; Zhang, Y.; Wang, Y. Dr2: Diffusion-based robust degradation remover for blind face restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 1704–1713. [Google Scholar]
- Wang, J.; Yue, Z.; Zhou, S.; Chan, K.C.; Loy, C.C. Exploiting diffusion prior for real-world image super-resolution. Int. J. Comput. Vis. 2024, 132, 5929–5949. [Google Scholar] [CrossRef]
- Lin, X.; He, J.; Chen, Z.; Lyu, Z.; Dai, B.; Yu, F.; Qiao, Y.; Ouyang, W.; Dong, C. Diffbir: Toward blind image restoration with generative diffusion prior. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; Springer: Cham, Switzerland, 2024; pp. 430–448. [Google Scholar]
- Xia, B.; Zhang, Y.; Wang, S.; Wang, Y.; Wu, X.; Tian, Y.; Yang, W.; Van Gool, L. Diffir: Efficient diffusion model for image restoration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 13095–13105. [Google Scholar]
- Wu, H.; Mo, J.; Sun, X.; Ma, J. Latent Diffusion, Implicit Amplification: Efficient Continuous-Scale Super-Resolution for Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2025, 63, 1–17. [Google Scholar] [CrossRef]
- Gendy, G.; He, G.; Sabor, N. Diffusion models for image super-resolution: State-of-the-art and future directions. Neurocomputing 2025, 617, 128911. [Google Scholar] [CrossRef]
- Zontak, M.; Irani, M. Internal statistics of a single natural image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 977–984. [Google Scholar]
- Shaham, T.R.; Dekel, T.; Michaeli, T. Singan: Learning a generative model from a single natural image. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4570–4580. [Google Scholar]
- Wang, Y.; Ying, X.; Wang, L.; Yang, J.; An, W.; Guo, Y. Symmetric parallax attention for stereo image super-resolution. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 766–775. [Google Scholar]
- Yang, W.; Feng, J.; Yang, J.; Zhao, F.; Liu, J.; Guo, Z.; Yan, S. Deep edge guided recurrent residual learning for image super-resolution. IEEE Trans. Image Process. 2017, 26, 5895–5907. [Google Scholar] [CrossRef]
- Fang, F.; Li, J.; Zeng, T. Soft-edge assisted network for single image super-resolution. IEEE Trans. Image Process. 2020, 29, 4656–4668. [Google Scholar] [CrossRef] [PubMed]
- Wang, X.; Yu, K.; Dong, C.; Loy, C.C. Recovering realistic texture in image super-resolution by deep spatial feature transform. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 606–615. [Google Scholar]
- Ma, C.; Rao, Y.; Cheng, Y.; Chen, C.; Lu, J.; Zhou, J. Structure-preserving super resolution with gradient guidance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 7769–7778. [Google Scholar]
- Chen, C.; Shi, X.; Qin, Y.; Li, X.; Han, X.; Yang, T.; Guo, S. Real-world blind super-resolution via feature matching with implicit high-resolution priors. In Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 10–14 October 2022; pp. 1329–1338. [Google Scholar]
- Yu, J.; Li, X.; Koh, J.Y.; Zhang, H.; Pang, R.; Qin, J.; Ku, A.; Xu, Y.; Baldridge, J.; Wu, Y. Vector-quantized image modeling with improved vqgan. arXiv 2021, arXiv:2110.04627. [Google Scholar]
- Huang, D.; Song, J.; Huang, X.; Hu, Z.; Zeng, H. Multi-Modal Prior-Guided Diffusion Model for Blind Image Super-Resolution. IEEE Signal Process. Lett. 2025, 32, 316–320. [Google Scholar] [CrossRef]
- Yue, H.; Sun, X.; Yang, J.; Wu, F. Landmark image super-resolution by retrieving web images. IEEE Trans. Image Process. 2013, 22, 4865–4878. [Google Scholar] [CrossRef] [PubMed]
- Zheng, H.; Ji, M.; Wang, H.; Liu, Y.; Fang, L. Crossnet: An end-to-end reference-based super resolution network using cross-scale warping. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 88–104. [Google Scholar]
- Zhang, Z.; Wang, Z.; Lin, Z.; Qi, H. Image super-resolution by neural texture transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7982–7991. [Google Scholar]
- Yang, F.; Yang, H.; Fu, J.; Lu, H.; Guo, B. Learning texture transformer network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 5791–5800. [Google Scholar]
- Cong, R.; Liao, R.; Li, F.; Sheng, R.; Bai, H.; Wan, R.; Kwong, S.; Zhang, W. Reference-Based Iterative Interaction with P2-Matching for Stereo Image Super-Resolution. IEEE Trans. Image Process. 2025, 34, 3779–3789. [Google Scholar] [CrossRef]
- Gao, Q.; Zhao, Y.; Li, G.; Tong, T. Image super-resolution using knowledge distillation. In Proceedings of the Asian Conference on Computer Vision, Perth, Australia, 2–6 December 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 527–541. [Google Scholar]
- Luo, X.; Liang, Q.; Liu, D.; Qu, Y. Boosting lightweight single image super-resolution via joint-distillation. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event China, 20–24 October 2021; pp. 1535–1543. [Google Scholar]
- Wang, X.; Li, Y.; Zhang, H.; Shan, Y. Towards real-world blind face restoration with generative facial prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 9168–9178. [Google Scholar]
- Li, Y.; Fan, Y.; Xiang, X.; Demandolx, D.; Ranjan, R.; Timofte, R.; Van Gool, L. Efficient and explicit modelling of image hierarchies for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 18278–18289. [Google Scholar]
- Liu, A.; Liu, Y.; Gu, J.; Qiao, Y.; Dong, C. Blind image super-resolution: A survey and beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5461–5480. [Google Scholar] [CrossRef]
- Bell-Kligler, S.; Shocher, A.; Irani, M. Blind super-resolution kernel estimation using an internal-gan. Adv. Neural Inf. Process. Syst. 2019, 32. [Google Scholar]
- Gu, J.; Lu, H.; Zuo, W.; Dong, C. Blind super-resolution with iterative kernel correction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1604–1613. [Google Scholar]
- Luo, Z.; Huang, Y.; Li, S.; Wang, L.; Tan, T. Unfolding the alternating optimization for blind super resolution. Adv. Neural Inf. Process. Syst. 2020, 33, 5632–5643. [Google Scholar]
- Sun, H.; Yuan, Y.; Su, L.; Shao, H. Learning correction errors via frequency-self attention for blind image super-resolution. In Proceedings of the 2024 9th International Conference on Image, Vision and Computing (ICIVC), Suzhou, China, 15–17 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 384–392. [Google Scholar]
- Li, F.; Wu, Y.; Liang, Z.; Cong, R.; Bai, H.; Zhao, Y.; Wang, M. Blinddiff: Empowering degradation modelling in diffusion models for blind image super-resolution. arXiv 2024, arXiv:2403.10211. [Google Scholar] [CrossRef]
- Ohayon, G.; Elad, M.; Michaeli, T. Perceptual fairness in image restoration. Adv. Neural Inf. Process. Syst. 2024, 37, 70259–70312. [Google Scholar]
- Ates, H.F.; Yildirim, S.; Gunturk, B.K. Deep learning-based blind image super-resolution with iterative kernel reconstruction and noise estimation. Comput. Vis. Image Underst. 2023, 233, 103718. [Google Scholar] [CrossRef]
- Liu, Q.; Zhuang, C.; Gao, P.; Qin, J. Cdformer: When degradation prediction embraces diffusion model for blind image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 7455–7464. [Google Scholar]
- Wang, L.; Wang, Y.; Dong, X.; Xu, Q.; Yang, J.; An, W.; Guo, Y. Unsupervised degradation representation learning for blind super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10581–10590. [Google Scholar]
- Yang, F.; Yang, H.; Zeng, Y.; Fu, J.; Lu, H. Degradation-guided meta-restoration network for blind super-resolution. arXiv 2022, arXiv:2207.00943. [Google Scholar]
- Chen, G.Y.; Weng, W.D.; Su, J.N.; Gan, M.; Chen, C.P. Dynamic degradation intensity estimation for adaptive blind super-resolution: A novel approach and benchmark dataset. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 4762–4772. [Google Scholar] [CrossRef]
- Liu, Q.; Gao, P.; Han, K.; Liu, N.; Xiang, W. Degradation-aware self-attention based transformer for blind image super-resolution. IEEE Trans. Multimed. 2024, 26, 7516–7528. [Google Scholar] [CrossRef]
- Yuan, J.; Ma, J.; Wang, B.; Hu, W. Content-Decoupled Contrastive Learning-Based Implicit Degradation Modeling for Blind Image Super-Resolution. IEEE Trans. Image Process. 2025, 34, 4751–4766. [Google Scholar] [CrossRef]
- Wang, S.; Zhang, M.; Miao, M. The super-resolution reconstruction algorithm of multi-scale dilated convolution residual network. Front. Neurorobotics 2024, 18, 1436052. [Google Scholar] [CrossRef]
- Li, G.; Xiao, H.; Liang, D.; Ling, B.W.K. Multi-scale cross-fusion for arbitrary scale image super resolution. Multimed. Tools Appl. 2024, 83, 79805–79814. [Google Scholar] [CrossRef]
- Wei, J.; Yang, G.; Wei, W.; Liu, A.; Chen, X. Multi-Contrast MRI Arbitrary-Scale Super-Resolution via Dynamic Implicit Network. IEEE Trans. Circuits Syst. Video Technol. 2025, 35, 8973–8988. [Google Scholar] [CrossRef]
- Chu, X.; Chen, L.; Yu, W. Nafssr: Stereo image super-resolution using nafnet. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 19–20 June 2022; pp. 1239–1248. [Google Scholar]
- Xu, Z.; Tang, Y.; Xu, B.; Li, Q. NeurOp-Diff: Continuous Remote Sensing Image Super-Resolution via Neural Operator Diffusion. arXiv 2025, arXiv:2501.09054. [Google Scholar]
- Yu, Z.; Chen, L.; Zeng, Z.; Yang, K.; Luo, S.; Chen, S.; Zhong, C. LGFN: Lightweight Light Field Image Super-Resolution using Local Convolution Modulation and Global Attention Feature Extraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–18 June 2024; pp. 6712–6721. [Google Scholar]
- Zhang, M.; Zhang, C.; Zhang, Q.; Guo, J.; Gao, X.; Zhang, J. Essaformer: Efficient transformer for hyperspectral image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 23073–23084. [Google Scholar]
- Georgescu, M.I.; Ionescu, R.T.; Miron, A.I.; Savencu, O.; Ristea, N.C.; Verga, N.; Khan, F.S. Multimodal multi-head convolutional attention with various kernel sizes for medical image super-resolution. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 2195–2205. [Google Scholar]
- Jeon, D.S.; Baek, S.H.; Choi, I.; Kim, M.H. Enhancing the spatial resolution of stereo images using a parallax prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1721–1730. [Google Scholar]
- Wang, L.; Guo, Y.; Wang, Y.; Liang, Z.; Lin, Z.; Yang, J.; An, W. Parallax attention for unsupervised stereo correspondence learning. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 2108–2125. [Google Scholar] [CrossRef] [PubMed]
- Wang, L.; Wang, Y.; Liang, Z.; Lin, Z.; Yang, J.; An, W.; Guo, Y. Learning parallax attention for stereo image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12250–12259. [Google Scholar]
- Wang, Y.; Wang, L.; Yang, J.; An, W.; Guo, Y. Flickr1024: A large-scale dataset for stereo image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019; pp. 3852–3857. [Google Scholar]
- Ying, X.; Wang, Y.; Wang, L.; Sheng, W.; An, W.; Guo, Y. A stereo attention module for stereo image super-resolution. IEEE Signal Process. Lett. 2020, 27, 496–500. [Google Scholar] [CrossRef]
- Dai, Q.; Li, J.; Yi, Q.; Fang, F.; Zhang, G. Feedback network for mutually boosted stereo image super-resolution and disparity estimation. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event China, 20–24 October 2021; pp. 1985–1993. [Google Scholar]
- Chen, L.; Chu, X.; Zhang, X.; Sun, J. Simple baselines for image restoration. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Cham, Switzerland, 2022; pp. 17–33. [Google Scholar]
- Lin, J.; Yin, L.; Wang, Y. Steformer: Efficient stereo image super-resolution with transformer. IEEE Trans. Multimed. 2023, 25, 8396–8407. [Google Scholar] [CrossRef]
- Cao, W.; Lei, X.; Jiang, Y.; Bai, Z.; Qian, X. Leveraging more information for blind stereo super-resolution via large-window cross-attention. Appl. Soft Comput. 2025, 168, 112492. [Google Scholar] [CrossRef]
- Haut Hurtado, J.M.; Fernández Beltrán, R.; Paoletti Ávila, M.E.; Plaza Miguel, J.; Plaza, A.; Pla, F. A new deep generative network for unsupervised remote sensing single-image super-resolution. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6792–6810. [Google Scholar] [CrossRef]
- Gu, J.; Sun, X.; Zhang, Y.; Fu, K.; Wang, L. Deep residual squeeze and excitation network for remote sensing image super-resolution. Remote Sens. 2019, 11, 1817. [Google Scholar] [CrossRef]
- Zhang, D.; Shao, J.; Li, X.; Shen, H.T. Remote sensing image super-resolution via mixed high-order attention network. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5183–5196. [Google Scholar] [CrossRef]
- Dong, X.; Wang, L.; Sun, X.; Jia, X.; Gao, L.; Zhang, B. Remote sensing image super-resolution using second-order multi-scale networks. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3473–3485. [Google Scholar] [CrossRef]
- Lei, S.; Shi, Z. Hybrid-scale self-similarity exploitation for remote sensing image super-resolution. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5401410. [Google Scholar] [CrossRef]
- Wang, Y.; Shao, Z.; Lu, T.; Wu, C.; Wang, J. Remote sensing image super-resolution via multiscale enhancement network. IEEE Geosci. Remote Sens. Lett. 2023, 20, 5000905. [Google Scholar] [CrossRef]
- Liu, Z.; Feng, R.; Wang, L.; Han, W.; Zeng, T. Dual learning-based graph neural network for remote sensing image super-resolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5628614. [Google Scholar] [CrossRef]
- Zhang, Z.; Hao, X.; Li, J.; Pang, J. ESC-MISR: Enhancing Spatial Correlations for Multi-image Super-Resolution in Remote Sensing. In Proceedings of the International Conference on Multimedia Modeling, Nara, Japan, 8–10 January 2025; Springer: Berlin/Heidelberg, Germany, 2025; pp. 373–387. [Google Scholar]
- Wang, Y.; Wang, L.; Yang, J.; An, W.; Yu, J.; Guo, Y. Spatial-angular interaction for light field image super-resolution. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXIII 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 290–308. [Google Scholar]
- Yoon, Y.; Jeon, H.G.; Yoo, D.; Lee, J.Y.; Kweon, I.S. Light-field image super-resolution using convolutional neural network. IEEE Signal Process. Lett. 2017, 24, 848–852. [Google Scholar] [CrossRef]
- Wang, Y.; Liu, F.; Zhang, K.; Hou, G.; Sun, Z.; Tan, T. LFNet: A novel bidirectional recurrent convolutional neural network for light-field image super-resolution. IEEE Trans. Image Process. 2018, 27, 4274–4286. [Google Scholar] [CrossRef]
- Liang, Z.; Wang, Y.; Wang, L.; Yang, J.; Zhou, S. Light field image super-resolution with transformers. IEEE Signal Process. Lett. 2022, 29, 563–567. [Google Scholar] [CrossRef]
- Wang, Y.; Liang, Z.; Wang, L.; Yang, J.; An, W.; Guo, Y. Real-World Light Field Image Super-Resolution Via Degradation Modulation. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 5559–5573. [Google Scholar] [CrossRef]
- Zhou, E.; Fan, H.; Cao, Z.; Jiang, Y.; Yin, Q. Learning face hallucination in the wild. In Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; Volume 29. [Google Scholar]
- Yu, X.; Porikli, F. Hallucinating very low-resolution unaligned and noisy face images by transformative discriminative autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3760–3768. [Google Scholar]
- Dogan, B.; Gu, S.; Timofte, R. Exemplar guided face image super-resolution without facial landmarks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 1814–1823. [Google Scholar]
- Zhang, K.; Zhang, Z.; Cheng, C.W.; Hsu, W.H.; Qiao, Y.; Liu, W.; Zhang, T. Super-identity convolutional neural network for face hallucination. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 183–198. [Google Scholar]
- Gao, G.; Tang, L.; Wu, F.; Lu, H.; Yang, J. JDSR-GAN: Constructing an efficient joint learning network for masked face super-resolution. IEEE Trans. Multimed. 2023, 25, 1505–1512. [Google Scholar] [CrossRef]
- Qiu, T.; Yan, Y. DPHNet: Dual-Path Hybrid Network for Blurry Face Image Super-Resolution. IEEE Access 2025, 13, 56607–56615. [Google Scholar] [CrossRef]
- Gu, Y.; Wang, X.; Xie, L.; Dong, C.; Li, G.; Shan, Y.; Cheng, M.M. Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Cham, Switzerland, 2022; pp. 126–143. [Google Scholar]
- Zhou, S.; Chan, K.; Li, C.; Loy, C.C. Towards robust blind face restoration with codebook lookup transformer. Adv. Neural Inf. Process. Syst. 2022, 35, 30599–30611. [Google Scholar]
- Zhu, F.; Zhu, J.; Chu, W.; Zhang, X.; Ji, X.; Wang, C.; Tai, Y. Blind face restoration via integrating face shape and generative priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 7662–7671. [Google Scholar]
- Rickard, L.J.; Basedow, R.W.; Zalewski, E.F.; Silverglate, P.R.; Landers, M. HYDICE: An airborne system for hyperspectral imaging. In Proceedings of the Imaging Spectrometry of the Terrestrial Environment, Orlando, FL, USA, 11–16 April 1993; SPIE: Bellingham, WA, USA, 1993; Volume 1937, pp. 173–179. [Google Scholar]
- Wu, L.; Wang, H.; Zhang, T. A multiscale 3D convolution with context attention network for hyperspectral image classification. Earth Sci. Inform. 2022, 15, 2553–2569. [Google Scholar] [CrossRef]
- Wang, B.; Chen, J.; Wang, H.; Tang, Y.; Chen, J.; Jiang, Y. A spectral and spatial transformer for hyperspectral remote sensing image super-resolution. Int. J. Digit. Earth 2024, 17, 2313102. [Google Scholar] [CrossRef]
- Dang, L.; Weng, L.; Hou, Y.; Zuo, X.; Liu, Y. Double-branch feature fusion transformer for hyperspectral image classification. Sci. Rep. 2023, 13, 272. [Google Scholar] [CrossRef] [PubMed]
- Fei, M.; Fang, W.; Shuai, H. Adaptive deep prior for hyperspectral image super-resolution. Laser Technol. 2024, 48, 491–498. [Google Scholar]
- Jia, Y.; Xie, Y.; An, P.; Tian, Z.; Hua, X. DiffHSR: Unleashing Diffusion Priors in Hyperspectral Image Super-Resolution. IEEE Signal Process. Lett. 2025, 32, 236–240. [Google Scholar] [CrossRef]
- Chen, Y.; Shi, F.; Christodoulou, A.G.; Xie, Y.; Zhou, Z.; Li, D. Efficient and accurate MRI super-resolution using a generative adversarial network and 3D multi-level densely connected network. In Proceedings of the International Conference on Medical Image Computing and Computer-assisted Intervention, Granada, Spain, 16–20 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 91–99. [Google Scholar]
- Wang, Y.; Teng, Q.; He, X.; Feng, J.; Zhang, T. CT-image of rock samples super resolution using 3D convolutional neural network. Comput. Geosci. 2019, 133, 104314. [Google Scholar] [CrossRef]
- Zhao, X.; Zhang, Y.; Zhang, T.; Zou, X. Channel splitting network for single MR image super-resolution. IEEE Trans. Image Process. 2019, 28, 5649–5662. [Google Scholar] [CrossRef]
- Peng, C.; Lin, W.A.; Liao, H.; Chellappa, R.; Zhou, S.K. Saint: Spatially aware interpolation network for medical slice synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 7750–7759. [Google Scholar]
- Feng, C.M.; Yan, Y.; Fu, H.; Chen, L.; Xu, Y. Task transformer network for joint MRI reconstruction and super-resolution. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part VI 24. Springer: Berlin/Heidelberg, Germany, 2021; pp. 307–317. [Google Scholar]
- Sheibanifard, A.; Yu, H. A novel implicit neural representation for volume data. Appl. Sci. 2023, 13, 3242. [Google Scholar] [CrossRef]
- Kui, X.; Ji, Z.; Zou, B.; Li, Y.; Dai, Y.; Chen, L.; Vera, P.; Ruan, S. Iterative Collaboration Network Guided by Reconstruction Prior for Medical Image Super-Resolution. IEEE Trans. Comput. Imaging 2025, 11, 827–838. [Google Scholar] [CrossRef]
- Li, Y.; Hao, W.; Zeng, H.; Wang, L.; Xu, J.; Routray, S.; Jhaveri, R.H.; Gadekallu, T.R. Cross-Scale Texture Supplementation for Reference-based Medical Image Super-Resolution. IEEE J. Biomed. Health Inform. 2025, 1–15. [Google Scholar] [CrossRef] [PubMed]
- Zhou, X.; Jiang, L.; Hu, C.; Lei, S.; Zhang, T.; Mou, X. YOLO-SASE: An Improved YOLO Algorithm for the Small Targets Detection in Complex Backgrounds. Sensors 2022, 22, 4600. [Google Scholar] [CrossRef]
- Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2472–2481. [Google Scholar]
- Dong, L.; Fan, Q.; Yu, Y.; Zhang, Q.; Chen, J.; Luo, Y.; Zou, C. TinySR: Pruning Diffusion for Real-World Image Super-Resolution. arXiv 2025, arXiv:2508.17434. [Google Scholar]
- Zhang, L.; Li, H.; Liu, X.; Niu, J.; Wu, J. MobileSR: Efficient Convolutional Neural Network for Super-resolution. In Proceedings of the GLOBECOM 2020-2020 IEEE Global Communications Conference, Taipei, Taiwan, 7–11 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
- Zangeneh, E.; Rahmati, M.; Mohsenzadeh, Y. Low resolution face recognition using a two-branch deep convolutional neural network architecture. Expert Syst. Appl. 2020, 139, 112854. [Google Scholar] [CrossRef]
- Wang, L.; Li, D.; Zhu, Y.; Tian, L.; Shan, Y. Dual super-resolution learning for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3774–3783. [Google Scholar]
- Xiang, X.; Lin, Q.; Allebach, J.P. Boosting high-level vision with joint compression artifacts reduction and super-resolution. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 2390–2397. [Google Scholar]
- Kwon, H.; Kim, Y.; Yoon, H.; Choi, D. Optimal Cluster Expansion-Based Intrusion Tolerant System to Prevent Denial of Service Attacks. Appl. Sci. 2017, 7, 1186. [Google Scholar] [CrossRef]
Name | Usage | Number of Images | Format |
---|---|---|---|
BSDS300 [10] | Train | 300 | JPG |
DIV2K [11] | Train | 1000 | PNG |
Flickr2K [12] | Train | 2650 | PNG |
Set5 [13] | Test | 5 | PNG |
Set14 [14] | Test | 14 | PNG |
Urban100 [15] | Test | 100 | PNG |
Manga109 [16] | Test | 109 | PNG |
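The HR images in these datasets do not ship with LR counterparts; in the simulated setting (Section 2.2.1), LR inputs are most often produced by bicubic downsampling. The snippet below is a minimal sketch of that degradation step using Pillow, with hypothetical file paths; it deliberately omits the blur, noise, and compression degradations used for blind and real-world SR.

```python
from PIL import Image

def bicubic_degrade(hr_path: str, lr_path: str, scale: int = 4) -> None:
    """Create a bicubic LR counterpart of an HR image (simulated degradation sketch)."""
    hr = Image.open(hr_path).convert("RGB")

    # Crop so that height and width are exact multiples of the scale factor.
    w, h = hr.size
    hr = hr.crop((0, 0, w - w % scale, h - h % scale))

    # Bicubic downsampling is the standard simulated degradation for the datasets above.
    lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
    lr.save(lr_path)

# Hypothetical usage for one DIV2K image:
# bicubic_degrade("DIV2K_train_HR/0001.png", "DIV2K_train_LR_bicubic/X4/0001x4.png", scale=4)
```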
Quality Score | Degree of Image Distortion | Image Quality |
---|---|---|
1 | Distortion is so severe that it hinders viewing | Very poor
2 | Distortion is clearly noticeable | Poor
3 | Distortion is noticeable but does not dominate the image | Ordinary
4 | Distortion is slight | Good
5 | Distortion is barely perceptible | Excellent
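Each MOS value reported earlier is the average of many such 1–5 ratings collected from human observers. As a simple illustration (with entirely hypothetical ratings), the snippet below aggregates per-model scores into a MOS with an approximate 95% confidence half-width.

```python
import statistics

def mean_opinion_score(ratings: list[int]) -> tuple[float, float]:
    """Return the MOS and an approximate 95% confidence half-width
    for a list of 1-5 subjective ratings (hypothetical data)."""
    mos = statistics.mean(ratings)
    if len(ratings) > 1:
        half_width = 1.96 * statistics.stdev(ratings) / len(ratings) ** 0.5
    else:
        half_width = 0.0
    return mos, half_width

# Hypothetical ratings from 10 observers for two reconstructed image sets:
ratings = {"SRCNN": [3, 3, 4, 3, 3, 3, 4, 3, 3, 3],
           "SwinIR": [5, 4, 5, 4, 5, 4, 5, 4, 5, 4]}
for model, r in ratings.items():
    mos, ci = mean_opinion_score(r)
    print(f"{model}: MOS = {mos:.1f} ± {ci:.2f}")
```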
| Type | Model | Set5 PSNR/SSIM | Set14 PSNR/SSIM | Urban100 PSNR/SSIM | Training Dataset |
|---|---|---|---|---|---|
| Efficient Network/Mechanism Design Methods | SRCNN [7] | 30.48/0.8628 | 27.50/0.7513 | 24.52/0.7221 | T91+ImageNet |
| | DSRN [37] | 31.40/0.8830 | 28.07/0.7700 | 25.08/0.7470 | T91 |
| Perceptual Quality Methods | MSRN [29] | 32.07/0.8903 | 28.60/0.7751 | 26.04/0.7896 | DIV2K |
| | SeaNet [111] | 32.33/0.8970 | 28.81/0.7855 | 26.32/0.7942 | DIV2K |
| | CRN [38] | 32.34/0.8971 | 28.74/0.7855 | 26.44/0.7967 | DIV2K |
| | EDSR [27] | 32.46/0.8968 | 28.80/0.7876 | 26.64/0.8033 | DIV2K |
| | RCAN [48] | 32.63/0.9002 | 28.87/0.7889 | 26.82/0.8087 | DIV2K |
| Additional Information Utilization Methods | SwinIR [69] | 32.92/0.9044 | 29.09/0.7950 | 27.45/0.8254 | DIV2K+Flickr2K |
| | GRL-B [125] | 33.10/0.9094 | 29.37/0.8058 | 28.53/0.8504 | DIV2K+Flickr2K |
Model | Platform | Latency | Memory Usage | Application Scenario |
---|---|---|---|---|
DSRN [37] | NVIDIA Jetson Nano | 23 ms | 0.8 GB | General image SR tasks |
CARN [38] | NVIDIA Jetson Nano | 15 ms | 0.5 GB | Lightweight real-time SR |
RDN [194] | NVIDIA Jetson Nano | 26 ms | 1.2 GB | Medium-precision image SR |
MDCN [45] | NVIDIA Jetson Nano | 19 ms | 0.7 GB | Medium-speed image SR |
SRRFN [39] | NVIDIA Jetson Nano | 24 ms | 1.1 GB | Near-real-time SR |
RCAN [48] | NVIDIA Jetson Nano | 40 ms | 2.5 GB | High-precision image SR |
TinySR [195] | Arduino Uno | 320 ms | 28 KB | Environmental cameras |
MobileSR [196] | Arduino Uno | 450 ms | 31 KB | Low-frequency sensors |
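Edge-device latency figures such as these are normally obtained by timing repeated forward passes after a warm-up phase. The PyTorch sketch below shows that general procedure for an arbitrary SR model; the model, input size, iteration counts, and device are placeholders, not the exact settings behind this table.

```python
import time
import torch

def measure_latency(model: torch.nn.Module, lr_size=(1, 3, 180, 320),
                    warmup: int = 10, iters: int = 50, device: str = "cuda") -> float:
    """Return the average per-image latency in milliseconds (hypothetical settings)."""
    model = model.eval().to(device)
    x = torch.randn(*lr_size, device=device)

    with torch.no_grad():
        # Warm-up runs so that lazy initialization does not skew the timing.
        for _ in range(warmup):
            model(x)
        if device.startswith("cuda"):
            torch.cuda.synchronize()

        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        if device.startswith("cuda"):
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start

    return 1000.0 * elapsed / iters

# Hypothetical usage with any loaded SR network, e.g. a CARN model:
# print(f"{measure_latency(carn_model):.1f} ms per 320x180 LR frame")
```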