Progressive Upsampling Generative Adversarial Network with Collaborative Attention for Single-Image Super-Resolution
Abstract
1. Introduction
2. Related Works
2.1. Learning-Based SISR Methods
2.2. Attention Mechanism
3. Methodology
3.1. Network Architecture

3.2. Noise Collection and Frequency Decomposition
3.3. Frequency Collaborative Progressive Generator
3.4. Image Perceptual Discriminator
3.5. Loss Function
4. Experimental Results and Analysis
4.1. Implementation Details
4.2. Experimental Settings
4.3. Comparisons on the Synthetic Datasets
4.4. Comparisons on the Real-World Datasets
4.5. Ablation Study
4.6. Comparison of Computational Complexity
4.7. Application on Pathological Images
4.8. Limitation and Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Su, H.; Li, Y.; Xu, Y.; Fu, X.; Liu, S. A review of deep-learning-based super-resolution: From methods to applications. Pattern Recognit. 2025, 157, 110935. [Google Scholar] [CrossRef]
- Yue, Z.; Liao, K.; Loy, C.C. Arbitrary-steps image super-resolution via diffusion inversion. In Proceedings of the Computer Vision and Pattern Recognition Conference, Nashville, TN, USA, 11–15 June 2025; pp. 23153–23163. [Google Scholar]
- Jiang, Q.; Sun, H.; Chen, Q.; Huang, Y.; Li, Q.; Tian, J.; Zheng, C.; Mao, X.; Jiang, X.; Cheng, Y.; et al. High-resolution computed tomography with 1,024-matrix for artificial intelligence-based computer-aided diagnosis in the evaluation of pulmonary nodules. J. Thorac. Dis. 2025, 17, 289. [Google Scholar] [CrossRef] [PubMed]
- Brandt, M.; Chave, J.; Li, S.; Fensholt, R.; Ciais, P.; Wigneron, J.P.; Gieseke, F.; Saatchi, S.; Tucker, C.; Igel, C. High-resolution sensors and deep learning models for tree resource monitoring. Nat. Rev. Electr. Eng. 2025, 2, 13–26. [Google Scholar] [CrossRef]
- Jiang, K.; Yang, M.; Xiao, Y.; Wu, J.; Wang, G.; Feng, X.; Jiang, J. Rep-Mamba: Re-parameterization in vision Mamba for lightweight remote sensing image super-resolution. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5637012. [Google Scholar] [CrossRef]
- Zhu, C.; Liu, Y.; Huang, S.; Wang, F. Taming a diffusion model to revitalize remote sensing image super-resolution. Remote Sens. 2025, 17, 1348. [Google Scholar] [CrossRef]
- Zhang, J.; Cheng, S.; Liu, X.; Li, N.; Rao, G.; Zeng, S. Cytopathology image super-resolution of portable microscope based on convolutional window-integration transformer. IEEE Trans. Comput. Imaging 2025, 11, 77–88. [Google Scholar] [CrossRef]
- Chen, Z.; Guo, X.; Yang, C.; Ibragimov, B.; Yuan, Y. Joint spatial-wavelet dual-stream network for super-resolution. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2020; pp. 184–193. [Google Scholar]
- Xu, X.; Kapse, S.; Prasanna, P. SuperDiff: A diffusion super-resolution method for digital pathology with comprehensive quality assessment. Med. Image Anal. 2025, 107, 103808. [Google Scholar] [CrossRef]
- Zhang, Y.; Yu, W. Comparison of DEM super-resolution methods based on interpolation and neural networks. Sensors 2022, 22, 745. [Google Scholar] [CrossRef] [PubMed]
- Khan, R.; Sablatnig, R.; Bais, A.; Khawaja, Y.M. Comparison of reconstruction and example-based super-resolution. In 2011 7th International Conference on Emerging Technologies; IEEE: Piscataway, NJ, USA, 2011; pp. 1–6. [Google Scholar]
- Yu, M.; Shi, J.; Xue, C.; Hao, X.; Yan, G. A review of single image super-resolution reconstruction based on deep learning. Multimed. Tools Appl. 2024, 83, 55921–55962. [Google Scholar] [CrossRef]
- Patel, R.; Thakar, V.; Joshi, R. Dictionary learning-based image super-resolution for multimedia devices. Multimed. Tools Appl. 2023, 82, 17243–17262. [Google Scholar] [CrossRef]
- Meng, K.; Zhao, M.; Cattani, P.; Mei, S. Single image super-resolution based on Bendlets analysis and structural dictionary learning. Results Phys. 2024, 57, 107367. [Google Scholar] [CrossRef]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 184–199. [Google Scholar]
- Shang, S.; Shan, Z.; Liu, G.; Wang, L.; Wang, X.; Zhang, Z.; Zhang, J. ResDiff: Combining CNN and diffusion model for image super-resolution. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 8975–8983. [Google Scholar]
- Xu, Y.; Zhou, Y.; Ma, H.; Yang, H.; Wang, H.; Zhang, S.; Li, X. Wavelet-based dual discriminator GAN for image super-resolution. Knowl.-Based Syst. 2025, 317, 113383. [Google Scholar] [CrossRef]
- Ma, C.; Mi, J.; Gao, W.; Tao, S. DESRGAN: Detail-enhanced generative adversarial networks for small sample single image super-resolution. Neurocomputing 2025, 617, 129121. [Google Scholar] [CrossRef]
- Chen, B.; Li, G.; Wu, R.; Zhang, X.; Chen, J.; Zhang, J.; Zhang, L. Adversarial diffusion compression for real-world image super-resolution. In Proceedings of the Computer Vision and Pattern Recognition Conference, Nashville, TN, USA, 11–15 June 2025; pp. 28208–28220. [Google Scholar]
- Umer, R.M.; Foresti, G.L.; Micheloni, C. Deep generative adversarial residual convolutional networks for real-world super-resolution. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops; IEEE: Piscataway, NJ, USA, 2020; pp. 438–439. [Google Scholar]
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops; IEEE: Piscataway, NJ, USA, 2017; pp. 136–144. [Google Scholar]
- Kong, X.; Zhao, H.; Qiao, Y.; Dong, C. ClassSR: A general framework to accelerate super-resolution networks by data characteristic. In IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2021; pp. 12016–12025. [Google Scholar]
- Zhao, L.; Gao, J.; Deng, D.; Li, X. SSIR: Spatial shuffle multi-head self-attention for single image super-resolution. Pattern Recognit. 2024, 148, 110195. [Google Scholar] [CrossRef]
- Li, A.; Zhang, L.; Liu, Y.; Zhu, C. Exploring frequency-inspired optimization in Transformer for efficient single image super-resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 3141–3158. [Google Scholar] [CrossRef]
- Wu, Q.; Zeng, H.; Zhang, J.; Li, W.; Xia, H. A hybrid network of CNN and transformer for subpixel shifting-based multi-image super-resolution. Opt. Lasers Eng. 2024, 182, 108458. [Google Scholar] [CrossRef]
- Liu, C.; Zhang, D.; Lu, G.; Yin, W.; Wang, J.; Luo, G. SRMamba-T: Exploring the hybrid mamba-transformer network for single image super-resolution. Neurocomputing 2025, 624, 129488. [Google Scholar] [CrossRef]
- Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Change Loy, C. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision Workshops; Springer: Cham, Switzerland, 2019; pp. 63–79. [Google Scholar]
- Prajapati, K.; Chudasama, V.; Patel, H.; Upla, K.; Raja, K.; Ramachandra, R.; Busch, C. Direct unsupervised super-resolution using generative adversarial network (DUS-GAN) for real-world data. IEEE Trans. Image Process. 2021, 30, 8251–8264. [Google Scholar] [CrossRef] [PubMed]
- Dong, Y.; Zhou, H.; Zheng, L.; Wang, X.; Ma, J. Cross dropout based dynamic learning for blind super resolution. Neurocomputing 2025, 620, 129234. [Google Scholar] [CrossRef]
- Cho, S.; Cho, N.I. Blind image super-resolution with efficient network design using frequency domain information. IEEE Signal Process. Lett. 2025, 32, 2524–2528. [Google Scholar] [CrossRef]
- Niu, Z.; Zhong, G.; Yu, H. A review on the attention mechanism of deep learning. Neurocomputing 2021, 452, 48–62. [Google Scholar] [CrossRef]
- Soydaner, D. Attention mechanism in neural networks: Where it comes and where it goes. Neural Comput. Appl. 2022, 34, 13371–13385. [Google Scholar] [CrossRef]
- Zhong, J.; Tian, W.; Xie, Y.; Liu, Z.; Ou, J.; Tian, T.; Zhang, L. PMFSNet: Polarized multi-scale feature self-attention network for lightweight medical image segmentation. Comput. Methods Programs Biomed. 2025, 261, 108611. [Google Scholar] [CrossRef]
- Hu, L.; Wang, X.; Liu, Y.; Liu, N.; Huai, M.; Sun, L.; Wang, D. Towards stable and explainable attention mechanisms. IEEE Trans. Knowl. Data Eng. 2025, 37, 3047–3061. [Google Scholar] [CrossRef]
- Sun, X.; Zhang, B.; Wang, Y.; Mai, J.; Wang, Y.; Tan, J.; Wang, W. A multiscale attention mechanism super-resolution confocal microscopy for wafer defect detection. IEEE Trans. Autom. Sci. Eng. 2025, 22, 1016–1027. [Google Scholar] [CrossRef]
- Park, J.; Woo, S.; Lee, J.Y.; Kweon, I.S. BAM: Bottleneck attention module. arXiv 2018, arXiv:1807.06514. [Google Scholar] [CrossRef]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 286–301. [Google Scholar]
- Wang, Y.; Li, Y.; Wang, G.; Liu, X. Multi-scale attention network for single image super-resolution. In IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2024; pp. 5950–5960. [Google Scholar]
- Wu, Q.; Yang, Z.; Zeng, H.; Zhang, J.; Xia, H. Super-resolution reconstruction of sequential images based on an active shift via a hybrid attention calibration mechanism. Eng. Appl. Artif. Intell. 2025, 144, 110178. [Google Scholar] [CrossRef]
- Guo, Y.; Tian, C.; Liu, J.; Di, C.; Ning, K. HADT: Image super-resolution restoration using Hybrid Attention-Dense Connected Transformer Networks. Neurocomputing 2025, 614, 128790. [Google Scholar] [CrossRef]
- Malkocoglu, A.B.V.; Samli, R. A novel model for higher performance object detection with deep channel attention super resolution. Eng. Sci. Technol. Int. J. 2025, 64, 102003. [Google Scholar] [CrossRef]
- Su, J.N.; Gan, M.; Chen, G.Y.; Guo, W.; Chen, C.L.P. High-similarity-pass attention for single image super-resolution. IEEE Trans. Image Process. 2024, 33, 610–624. [Google Scholar] [CrossRef]
- Zhang, Q.L.; Yang, Y.B. SA-Net: Shuffle attention for deep convolutional neural networks. In Proceedings of the ICASSP 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); IEEE: Piscataway, NJ, USA, 2021; pp. 2235–2239. [Google Scholar]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27. [Google Scholar]
- Chen, J.; Chen, J.; Chao, H.; Yang, M. Image blind denoising with generative adversarial network based noise modeling. In IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2018; pp. 3155–3164. [Google Scholar]
- Zhou, R.; Süsstrunk, S. Kernel modeling super-resolution on real low-resolution images. In IEEE/CVF International Conference on Computer Vision; IEEE: Piscataway, NJ, USA, 2019; pp. 2433–2443. [Google Scholar]
- Fritsche, M.; Gu, S.; Timofte, R. Frequency separation for real-world super-resolution. In 2019 IEEE/CVF International Conference on Computer Vision Workshop; IEEE: Piscataway, NJ, USA, 2019; pp. 3599–3608. [Google Scholar]
- Lan, R.; Sun, L.; Liu, Z.; Lu, H.; Pang, C.; Luo, X. MADNet: A fast and lightweight network for single-image super resolution. IEEE Trans. Cybern. 2020, 51, 1443–1453. [Google Scholar] [CrossRef]
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2017; pp. 4681–4690. [Google Scholar]
- Lugmayr, A.; Danelljan, M.; Timofte, R. NTIRE 2020 challenge on real-world image super-resolution: Methods and results. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops; IEEE: Piscataway, NJ, USA, 2020; pp. 494–495. [Google Scholar]
- Huang, J.B.; Singh, A.; Ahuja, N. Single image super-resolution from transformed self-exemplars. In IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2015; pp. 5197–5206. [Google Scholar]
- Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 898–916. [Google Scholar] [CrossRef] [PubMed]
- Ignatov, A.; Kobyshev, N.; Timofte, R.; Vanhoey, K.; Van Gool, L. DSLR-quality photos on mobile devices with deep convolutional networks. In IEEE International Conference on Computer Vision; IEEE: Piscataway, NJ, USA, 2017; pp. 3277–3285. [Google Scholar]
- Ji, X.; Cao, Y.; Tai, Y.; Wang, C.; Li, J.; Huang, F. Real-world super-resolution via kernel estimation and noise injection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops; IEEE: Piscataway, NJ, USA, 2020; pp. 466–467. [Google Scholar]
- Wei, Y.; Gu, S.; Li, Y.; Timofte, R.; Jin, L.; Song, H. Unsupervised real-world image super resolution via domain-distance aware training. In IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2021; pp. 13385–13394. [Google Scholar]
- Zhang, Y.; Dong, L.; Yang, H.; Qing, L.; He, X.; Chen, H. Weakly-supervised contrastive learning-based implicit degradation modeling for blind image super-resolution. Knowl.-Based Syst. 2022, 249, 108984. [Google Scholar] [CrossRef]
- Zhang, Y.; Liu, Z.; Liu, S.; Sun, Y. Frequency aggregation network for blind super-resolution based on degradation representation. Digit. Signal Process. 2023, 133, 103837. [Google Scholar] [CrossRef]
- Liu, Z.; Huang, J.; Wang, W.; Lu, H.; Lan, R. Learning distinguishable degradation maps for unknown image super-resolution. IEEE Trans. Multimed. 2025, 27, 2530–2542. [Google Scholar] [CrossRef]
- Aggarwal, A.; Bharadwaj, S.; Corredor, G.; Pathak, T.; Badve, S.; Madabhushi, A. Artificial intelligence in digital pathology—Time for a reality check. Nat. Rev. Clin. Oncol. 2025, 22, 283–291. [Google Scholar] [CrossRef]
- Liu, Y.; Yang, P.; Wang, T.; Lei, B. Utilizing state space model for diffusion processes in breast tumor pathological image super-resolution. In 2025 IEEE 22nd International Symposium on Biomedical Imaging (ISBI); IEEE: Piscataway, NJ, USA, 2025; pp. 1–4. [Google Scholar]
| Methods | Scale | NTIRE 2020 PSNR | NTIRE 2020 SSIM | NTIRE 2020 LPIPS | Urban100 PSNR | Urban100 SSIM | Urban100 LPIPS | B100 PSNR | B100 SSIM | B100 LPIPS |
|---|---|---|---|---|---|---|---|---|---|---|
| ESRGAN [27] | | 30.989 | 0.9179 | 0.4435 | 29.507 | 0.8946 | 0.5107 | 29.987 | 0.9056 | 0.4789 |
| SDSR [48] | | 31.218 | 0.9011 | 0.3574 | 31.075 | 0.8956 | 0.3689 | 31.173 | 0.8997 | 0.3613 |
| TDSR [48] | | 31.369 | 0.9237 | 0.2917 | 31.101 | 0.9063 | 0.3104 | 31.238 | 0.9113 | 0.2988 |
| RealSR [55] | | 32.015 | 0.9318 | 0.2540 | 31.997 | 0.9219 | 0.2678 | 32.000 | 0.9267 | 0.2588 |
| DASR [56] | | 30.586 | 0.9112 | 0.2309 | 30.054 | 0.9084 | 0.2113 | 30.510 | 0.9109 | 0.2029 |
| DUSGAN [28] | | 30.111 | 0.9099 | 0.2619 | 30.093 | 0.9001 | 0.2310 | 30.108 | 0.9085 | 0.2403 |
| IDMBSR [57] | | 31.268 | 0.9326 | 0.2213 | 31.001 | 0.9254 | 0.2271 | 31.217 | 0.9300 | 0.2100 |
| FASR [58] | | 32.385 | 0.9384 | 0.2190 | 32.107 | 0.9289 | 0.2037 | 31.589 | 0.9412 | 0.2095 |
| DMGSR [59] | | 32.867 | 0.9334 | 0.1901 | 31.271 | 0.9129 | 0.1945 | 31.968 | 0.9299 | 0.1934 |
| SRMamba-T [26] | | 33.247 | 0.9591 | 0.1393 | 33.103 | 0.9483 | 0.1536 | 33.119 | 0.9522 | 0.1496 |
| Ours | | 33.987 | 0.9673 | 0.1210 | 32.966 | 0.9483 | 0.1431 | 33.627 | 0.9546 | 0.1354 |
| ESRGAN [27] | | 23.745 | 0.6852 | 0.2071 | 23.379 | 0.6671 | 0.2173 | 23.401 | 0.6798 | 0.2239 |
| SDSR [48] | | 22.909 | 0.6854 | 0.4384 | 22.764 | 0.6672 | 0.4697 | 22.849 | 0.6917 | 0.4462 |
| TDSR [48] | | 21.998 | 0.4358 | 0.4609 | 21.590 | 0.4083 | 0.4731 | 21.783 | 0.4576 | 0.4547 |
| RealSR [55] | | 24.989 | 0.6919 | 0.2270 | 24.869 | 0.6704 | 0.2346 | 24.953 | 0.6888 | 0.2241 |
| DASR [56] | | 25.461 | 0.7992 | 0.2013 | 25.307 | 0.7597 | 0.2039 | 25.436 | 0.7793 | 0.2010 |
| DUSGAN [28] | | 24.218 | 0.6172 | 0.5625 | 24.192 | 0.5423 | 0.5512 | 24.203 | 0.5593 | 0.5104 |
| IDMBSR [57] | | 25.429 | 0.8079 | 0.2000 | 25.379 | 0.7540 | 0.2017 | 25.400 | 0.7998 | 0.2028 |
| FASR [58] | | 25.971 | 0.8113 | 0.1996 | 25.648 | 0.8023 | 0.2208 | 25.946 | 0.8099 | 0.2175 |
| DMGSR [59] | | 26.078 | 0.7782 | 0.1979 | 25.976 | 0.8347 | 0.2002 | 26.013 | 0.8498 | 0.2113 |
| SRMamba-T [26] | | 26.201 | 0.8603 | 0.1980 | 26.119 | 0.8417 | 0.1999 | 26.198 | 0.8541 | 0.2054 |
| Ours | | 26.349 | 0.8721 | 0.1975 | 26.110 | 0.8614 | 0.1983 | 26.306 | 0.8803 | 0.1978 |
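The comparison tables report PSNR (dB, higher is better), SSIM, and LPIPS (lower is better). For context on what the distortion metric measures, here is a minimal NumPy sketch of PSNR; SSIM and LPIPS are normally taken from existing packages (e.g., `skimage.metrics.structural_similarity` and the `lpips` library) rather than reimplemented, and this snippet is illustrative only, not part of the paper's evaluation code.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy check: a uniform offset of 16 gray levels on an 8-bit image
ref = np.full((32, 32), 128, dtype=np.uint8)
test = np.full((32, 32), 144, dtype=np.uint8)
print(round(psnr(ref, test), 2))  # → 24.05
```

For color SR results, PSNR is often computed on the Y channel only; the tables above do not state which convention is used.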
| Scale | ESRGAN | SDSR | TDSR | RealSR | DUSGAN | IDMBSR | FASR | DMGSR | SRMamba-T | Ours |
|---|---|---|---|---|---|---|---|---|---|---|
| | 0.3579 | 0.2917 | 0.3001 | 0.2785 | 0.2711 | 0.2432 | 0.2347 | 0.2137 | 0.1758 | 0.1560 |
| | 0.3784 | 0.3037 | 0.3209 | 0.2699 | 0.2868 | 0.2559 | 0.2466 | 0.2394 | 0.2003 | 0.1884 |
| Operations | Scale | PSNR | SSIM | LPIPS |
|---|---|---|---|---|
| w/o FD | | 25.997 | 0.8104 | 0.2317 |
| | | 26.010 | 0.8494 | 0.2011 |
| | | 26.101 | 0.8500 | 0.1998 |
| Ours | | 26.349 | 0.8721 | 0.1975 |
| Post | | 26.001 | 0.8499 | 0.2010 |
| Prog. | | 26.349 | 0.8721 | 0.1975 |
| Methods | Scale | PSNR | SSIM | LPIPS |
|---|---|---|---|---|
| CAM | | 26.349 | 0.8721 | 0.1975 |
| CBAM | | 26.001 | 0.8341 | 0.2103 |
| C → R | | 26.130 | 0.8512 | 0.1987 |
| R → C | | 26.349 | 0.8721 | 0.1975 |
| Method | Param (M) | FLOPs (G) |
|---|---|---|
| ESRGAN [27] | 16.71 | 12.02 |
| SDSR [48] | 0.02 | 0.15 |
| TDSR [48] | 6.20 | 2.01 |
| RealSR [55] | 1.70 | 5.30 |
| DASR [56] | 1.48 | 5.00 |
| DUSGAN [28] | 1.97 | 3.03 |
| IDMBSR [57] | 1.71 | 3.33 |
| FASR [58] | 1.28 | 6.80 |
| DMGSR [59] | 1.51 | 3.09 |
| SRMamba-T [26] | 1.26 | 4.22 |
| Ours | 0.23 | 1.72 |
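The Param/FLOPs columns above can be reproduced per layer from the layer shapes alone. As a hedged illustration (not the paper's accounting script), here is how the cost of a single stride-1 convolution is usually estimated; FLOPs are commonly quoted as roughly twice the multiply–accumulate (MAC) count, and tools such as `ptflops` or `fvcore` automate the sum over a whole network.

```python
def conv2d_cost(c_in: int, c_out: int, k: int, h_out: int, w_out: int,
                bias: bool = True) -> tuple[int, int]:
    """Analytic parameter and MAC counts for one k×k Conv2d layer,
    given the output feature-map size (stride 1 assumed)."""
    params = c_out * (c_in * k * k + (1 if bias else 0))
    macs = c_in * k * k * c_out * h_out * w_out  # multiply–accumulates
    return params, macs

# Example: a 3×3 conv, 64 → 64 channels, on a 48×48 feature map
p, m = conv2d_cost(64, 64, 3, 48, 48)
print(p, m)  # → 36928 params, 84934656 MACs (~85 MMACs)
```

Note that reported FLOPs depend on the input resolution used for measurement, which the table does not state.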
| Methods | Histo_Nearest PSNR | Histo_Nearest SSIM | Histo_Nearest LPIPS | Histo_Bicubic PSNR | Histo_Bicubic SSIM | Histo_Bicubic LPIPS |
|---|---|---|---|---|---|---|
| ESRGAN [27] | 31.984 | 0.8867 | 0.2496 | 31.998 | 0.8718 | 0.2319 |
| SDSR [48] | 32.004 | 0.9287 | 0.2667 | 32.901 | 0.9413 | 0.2579 |
| TDSR [48] | 32.500 | 0.9238 | 0.2419 | 33.010 | 0.9589 | 0.2274 |
| RealSR [55] | 32.499 | 0.9301 | 0.2398 | 33.0299 | 0.9574 | 0.2299 |
| DASR [56] | 32.489 | 0.9000 | 0.2501 | 31.887 | 0.9153 | 0.2243 |
| DUSGAN [28] | 32.413 | 0.9236 | 0.2517 | 33.002 | 0.9564 | 0.2287 |
| IDMBSR [57] | 32.350 | 0.9148 | 0.2418 | 33.098 | 0.9583 | 0.2235 |
| FASR [58] | 32.217 | 0.8997 | 0.2467 | 32.763 | 0.9401 | 0.2268 |
| DMGSR [59] | 32.203 | 0.9023 | 0.2971 | 33.111 | 0.8999 | 0.3105 |
| SRMamba-T [26] | 32.323 | 0.9317 | 0.2370 | 33.104 | 0.9599 | 0.2310 |
| Ours | 32.601 | 0.9398 | 0.2376 | 33.138 | 0.9609 | 0.2217 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Lu, H.; Zhang, J.; Jing, M.; Wang, Z.; Wang, W. Progressive Upsampling Generative Adversarial Network with Collaborative Attention for Single-Image Super-Resolution. J. Imaging 2026, 12, 79. https://doi.org/10.3390/jimaging12020079
