SupGAN: A General Super-Resolution GAN-Promoting Training Method
Abstract
1. Introduction
- We design a multi-scale feature-extraction block in the generator network. It features a five-level perceptual hierarchy that captures multi-scale features from LR images while suppressing artifacts (an illustrative sketch follows this list).
- We propose a network-training strategy (BHFTM) that adjusts learning priorities according to the proportion of high-frequency information in the training images. It consists of an edge-detection module, a weight-setting module, and a loss-construction module, which together dynamically adjust the weights of the loss functions in the super-resolution model for each training sample.
- We employ the four-patch method to better simulate the degradation of complex real-world scenarios.
- We demonstrate that SupGAN produces SR images with better recovery of high-frequency details and improved perceptual metrics while reducing the occurrence of artifacts, resulting in SR images that better meet human perceptual needs (see Figure 1).
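To make the multi-scale idea concrete, the PyTorch sketch below shows one plausible realization of a five-level feature-extraction block: five parallel branches with increasing dilation observe the input at different scales, and their outputs are fused and added back as a residual. The branch design (dilated 3 × 3 convolutions), channel count, and fusion rule are illustrative assumptions for this sketch, not the exact block defined in Section 3.1.

```python
import torch
import torch.nn as nn


class MultiScaleBlock(nn.Module):
    """Illustrative five-branch multi-scale feature-extraction block.

    Each branch observes the input at a different receptive field
    (here via increasing dilation rates); the branch outputs are fused
    by a 1x1 convolution and added back to the input as a residual.
    This is a sketch of the general idea, not the SupGAN block itself.
    """

    def __init__(self, channels: int = 64, levels: int = 5):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.LeakyReLU(0.2, inplace=True),
            )
            for d in range(1, levels + 1)  # dilations 1..5 -> five scales
        ])
        self.fuse = nn.Conv2d(levels * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.fuse(feats)  # residual keeps the LR content intact


if __name__ == "__main__":
    block = MultiScaleBlock(channels=64)
    lr_feat = torch.randn(1, 64, 48, 48)   # feature map from an LR patch
    print(block(lr_feat).shape)            # torch.Size([1, 64, 48, 48])
```

Stacking such blocks lets the generator aggregate context at several scales before upsampling, which is the property the contribution above relies on.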
2. Related Work
- Single image super-resolution: Over the past few years, deep learning-based approaches have achieved remarkable progress in the field of single-image super-resolution (SISR) [6,12,13]. SRCNN [14] was the first to introduce deep learning into super-resolution, significantly outperforming traditional methods. Dong et al. later proposed FSRCNN [15], which improved on SRCNN by offering a faster, near-real-time solution. ESPCN [16] introduced a sub-pixel convolution layer that extracts features directly from the low-resolution input, thus avoiding convolutions on high-resolution feature maps and reducing computational cost. Generative Adversarial Networks (GANs) have also been widely adopted in SISR, using adversarial training between a generator and a discriminator to produce more realistic high-resolution outputs [7,8,10]. More recently, researchers have begun exploring the powerful modeling capabilities of transformers [17] in computer vision. For example, Yang et al. proposed TTSR [18], which formulates LR and HR images as queries and keys within a transformer architecture to promote joint feature learning between LR and HR representations.
- Perception-driven approaches: Image super-resolution methods that optimize for peak signal-to-noise ratio (PSNR) often produce overly smooth high-resolution images that lack fine high-frequency details and fail to align with human perceptual quality. To address this limitation, perception-driven approaches have gained significant attention in recent years. RCAN [19] enhances perceptual quality by employing residual blocks along with channel attention mechanisms within its super-resolution architecture. To capture multi-scale information and improve feature discrimination, RFB-ESRGAN [20] integrated a Receptive Field Block (RFB) [21] into its design and achieved top performance in the NTIRE 2020 Perceptual Extreme Super-Resolution Challenge.
- Loss function: Traditional super-resolution methods commonly use mean squared error (MSE) as the loss function [14]. While effective for optimizing pixel-wise accuracy, MSE tends to produce overly smooth high-resolution outputs that lack high-frequency details. Although the use of L1 loss has shown improvements over MSE [22], it still fails to adequately capture the perceptual quality of the generated images. To address this issue, Johnson et al. introduced a perceptual loss function [23], based on the concept of perceptual similarity [24]. This loss evaluates the perceptual difference between the reconstructed and ground-truth images by computing feature distances within a pre-trained deep neural network. SRGAN [7] introduced adversarial loss, employing a discriminator network to guide the generator towards producing images that lie on the natural image manifold. To further enhance texture realism, Sajjadi et al. proposed a texture matching loss [25], which compares the texture statistics between generated and real images. Contextual loss [26] was also proposed to better capture structural and content-level similarities between images. Since a single loss function often cannot fully account for the diverse aspects of image quality in super-resolution tasks, many modern approaches adopt a combination of multiple loss terms. A commonly used composite loss includes L1 loss, perceptual loss, and adversarial loss [7,8,10].
- Degradation models: Deep learning-based super-resolution methods often rely on classical degradation models [27,28], which typically consist of a sequence of operations, including blurring, downsampling, and noise addition. However, the performance of these methods can degrade significantly when real-world degradation deviates from these assumed models. To handle this situation, some approaches use implicit degradation modeling, learning abstract representations of degradation types in the feature space rather than explicitly modeling them in the pixel space [29,30,31]. BSRGAN [32] proposes a more practical and diverse degradation model by randomly shuffling different degradation operations, including blur, downsampling, and noise, while Real-ESRGAN [10] further improves on this by introducing a higher-order degradation process designed to more accurately reflect the complexity of real-world image corruptions.
- Evaluation metrics: Super-resolution methods based on deep convolutional neural networks are typically evaluated using two categories of metrics: distortion-based metrics (e.g., PSNR, SSIM, IFC, VIF [33,34,35]) and perceptual quality metrics. The latter includes human subjective assessments and no-reference image quality indicators such as the Ma score [36], NIQE [37], BRISQUE [38], and the perceptual index (PI) [39,40]. Blau and Michaeli [39] highlighted the inherent trade-off between distortion accuracy and perceptual quality, where methods achieving high perceptual quality often score lower on traditional distortion metrics such as PSNR and SSIM. In this work, we adopt NIQE as our primary evaluation metric. As a no-reference metric, NIQE assesses the perceptual quality of reconstructed images without requiring access to ground truth images [41].
3. Proposed Methods
3.1. Generator Architecture
3.2. Four-Patch Degradation Model
3.3. Network Training Method
- Edge detection module: To estimate the proportion of high-frequency information in an image, we preprocess it with an edge detection algorithm, which converts the digital image into a map of edges (lines and curves) that conveys the shape and boundary information of objects. We compared several commonly used edge detection algorithms and selected the Canny operator to detect image edges. The detection results of the Canny operator are shown in Figure 5.
- The Canny operator [43] is an edge detection algorithm based on Gaussian filtering and non-maximum suppression. It offers good edge-detection performance and is well suited to detecting fine edges. Compared with other first-order differential operators (such as the Roberts [44], Sobel [45], and Prewitt [46] operators), the Canny operator is more robust to noise.
- Weight-setting module: Image samples with a lower proportion of high-frequency content carry less effective information; during training, the network should reconstruct such samples into high-definition images with equally limited information rather than hallucinated detail. To achieve this, the weights of the perceptual loss and adversarial loss are reduced and the role of the L1 loss is emphasized: the L1 loss weight is increased for samples with a lower proportion of high-frequency content and decreased for those with a higher proportion. The main process of the BHFTM, which adjusts the per-sample L1 loss weight based on the proportion of high-frequency information obtained by the edge-detection module, is described in Algorithm 1.
- Loss-construction module: The generator's loss function combines the pixel loss $L_{1}$, the perceptual loss $L_{percep}$, and the adversarial loss $L_{adv}$ into a single unified objective. With the per-sample weights produced by the weight-setting module, the overall generator loss can be expressed as $L_{G} = \lambda_{1} L_{1} + \lambda_{percep} L_{percep} + \lambda_{adv} L_{adv}$, where $\lambda_{1}$, $\lambda_{percep}$, and $\lambda_{adv}$ are the weights assigned by the BHFTM.
Algorithm 1 Edge detection and weight setting.
Input: GT image set.
Output: Weights of the L1 loss, perceptual loss, and adversarial loss.
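For concreteness, the following Python sketch mirrors the flow of Algorithm 1: the Canny operator measures the proportion of edge pixels in a GT sample, that ratio drives the per-sample loss weights, and the weighted terms are then combined as in the loss-construction module. The Canny thresholds, base weights, ratio threshold, and boost factor are illustrative assumptions; the paper's actual weighting rule is the one specified in Algorithm 1.

```python
import cv2
import numpy as np


def high_freq_ratio(gt_image_bgr: np.ndarray,
                    low_thr: int = 100, high_thr: int = 200) -> float:
    """Proportion of edge pixels in the GT image, measured with the Canny operator."""
    gray = cv2.cvtColor(gt_image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low_thr, high_thr)          # binary edge map
    return float(np.count_nonzero(edges)) / edges.size  # fraction of edge pixels


def set_loss_weights(ratio: float,
                     base_l1: float = 1.0,
                     base_percep: float = 1.0,
                     base_adv: float = 0.1,
                     ratio_thr: float = 0.05,
                     boost: float = 2.0):
    """Per-sample loss weights (illustrative values, not the paper's exact rule).

    Samples with few edges (low high-frequency content) get a larger L1 weight,
    so the generator is not pushed to hallucinate detail that is not present;
    edge-rich samples keep the base weights.
    """
    if ratio < ratio_thr:
        return base_l1 * boost, base_percep, base_adv
    return base_l1, base_percep, base_adv


def generator_loss(l1: float, percep: float, adv: float, weights) -> float:
    """Combine the weighted terms, as in the loss-construction module."""
    w_l1, w_percep, w_adv = weights
    return w_l1 * l1 + w_percep * percep + w_adv * adv
```

In practice the ratio is computed once per GT patch before a training step, and the resulting weights are applied when accumulating that patch's contribution to the generator loss.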
4. Experiments
4.1. Training Details
4.2. Data
4.3. Qualitative Results
4.4. Ablation Study
4.5. Efficiency Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Zhang, L.; Wu, X. An edge-guided image interpolation algorithm via directional filtering and data fusion. IEEE Trans. Image Process. 2006, 15, 2226–2238.
2. Duchon, C.E. Lanczos filtering in one and two dimensions. J. Appl. Meteorol. Climatol. 1979, 18, 1016–1022.
3. Dai, S.; Han, M.; Xu, W.; Wu, Y.; Gong, Y.; Katsaggelos, A.K. Softcuts: A soft edge smoothness prior for color image super-resolution. IEEE Trans. Image Process. 2009, 18, 969–981.
4. Sun, J.; Xu, Z.; Shum, H.-Y. Image super-resolution using gradient profile prior. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
5. Yan, Q.; Xu, Y.; Yang, X.; Nguyen, T.Q. Single image super-resolution based on gradient profile sharpness. IEEE Trans. Image Process. 2015, 24, 3187–3202.
6. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the 28th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680.
7. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690.
8. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Loy, C.C. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018; pp. 63–79.
9. Jolicoeur-Martineau, A. The relativistic discriminator: A key element missing from standard GAN. arXiv 2018, arXiv:1807.00734.
10. Wang, X.; Xie, L.; Dong, C.; Shan, Y. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, QC, Canada, 11–17 October 2021; pp. 1905–1914.
11. Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654.
12. Lai, W.-S.; Huang, J.-B.; Ahuja, N.; Yang, M.-H. Deep Laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5835–5843.
13. Timofte, R.; Agustsson, E.; Van Gool, L.; Yang, M.-H.; Zhang, L.; Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M.; et al. NTIRE 2017 challenge on single image super-resolution: Methods and results. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1110–1121.
14. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 184–199.
15. Dong, C.; Loy, C.C.; Tang, X. Accelerating the super-resolution convolutional neural network. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 391–407.
16. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883.
17. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008.
18. Yang, F.; Yang, H.; Fu, J.; Lu, H.; Guo, B. Learning texture transformer network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 5791–5800.
19. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301.
20. Shang, T.; Dai, Q.; Zhu, S.; Yang, T.; Guo, Y. Perceptual extreme super resolution network with receptive field block. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 1778–1787.
21. Liu, S.; Huang, D. Receptive field block net for accurate and fast object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 385–400.
22. Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss functions for neural networks for image processing. arXiv 2015, arXiv:1511.08861.
23. Johnson, J.; Alahi, A.; Li, F.-F. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision; Springer International Publishing: Cham, Switzerland, 2016; pp. 694–711.
24. Bruna, J.; Sprechmann, P.; LeCun, Y. Super-resolution with deep convolutional sufficient statistics. arXiv 2015, arXiv:1511.05666.
25. Sajjadi, M.S.M.; Schölkopf, B.; Hirsch, M. EnhanceNet: Single image super-resolution through automated texture synthesis. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4501–4510.
26. Mechrez, R.; Talmi, I.; Zelnik-Manor, L. The contextual loss for image transformation with non-aligned data. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 800–815.
27. Elad, M.; Feuer, A. Restoration of a single super-resolution image from several blurred, noisy, and undersampled measured images. IEEE Trans. Image Process. 1997, 6, 1646–1658.
28. Liu, C.; Sun, D. On Bayesian adaptive video super resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 346–360.
29. Yuan, Y.; Liu, S.; Zhang, J.; Zhang, Y.; Dong, C.; Lin, L. Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Salt Lake City, UT, USA, 18–23 June 2018.
30. Fritsche, M.; Gu, S.; Timofte, R. Frequency separation for real-world super-resolution. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Republic of Korea, 27–28 October 2019; pp. 3599–3608.
31. Wang, L.; Wang, Y.; Dong, X.; Xu, Q.; Yang, J.; An, W.; Guo, Y. Unsupervised degradation representation learning for blind super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 10581–10590.
32. Zhang, K.; Liang, J.; Van Gool, L.; Timofte, R. Designing a practical degradation model for deep blind image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 4771–4780.
33. Sheikh, H.R.; Bovik, A.C.; de Veciana, G. An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Trans. Image Process. 2005, 14, 2117–2128.
34. Sheikh, H.R.; Bovik, A.C. Image information and visual quality. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, QC, Canada, 17–21 May 2004; pp. 709–712.
35. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
36. Ma, C.; Yang, C.-Y.; Yang, X.; Yang, M.-H. Learning a no-reference quality metric for single-image super-resolution. Comput. Vis. Image Underst. 2017, 158, 1–16.
37. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
38. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
39. Blau, Y.; Michaeli, T. The perception-distortion tradeoff. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 6228–6237.
40. Vasu, S.; Madam, N.T.; Rajagopalan, A.N. Analyzing perception-distortion tradeoff using enhanced perceptual super-resolution network. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018; pp. 114–131.
41. Blau, Y.; Mechrez, R.; Timofte, R.; Michaeli, T.; Zelnik-Manor, L. The 2018 PIRM challenge on perceptual image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018; pp. 334–355.
42. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
43. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698.
44. Roberts, L.G. Machine Perception of Three-Dimensional Solids. Optical and Electro-Optical Information Processing. Doctoral Dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 1963.
45. Sobel, I. An isotropic 3 × 3 image gradient operator. In Machine Vision for Three-Dimensional Scenes; Freeman, H., Ed.; Academic Press: San Diego, CA, USA, 1990; pp. 376–379.
46. Prewitt, J.M. Object enhancement and extraction. Pict. Process. Psychopictorics 1970, 10, 15–19.
47. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015; pp. 1–15.
48. Agustsson, E.; Timofte, R. NTIRE 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 1122–1131.
49. Wang, X.; Yu, K.; Dong, C.; Loy, C.C. Recovering realistic texture in image super-resolution by deep spatial feature transform. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 606–615.
50. Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001.
51. Lugmayr, A.; Danelljan, M.; Timofte, R. NTIRE 2020 challenge on real-world image super-resolution: Methods and results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 494–495.
52. Huang, J.-B.; Singh, A.; Ahuja, N. Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
| Dataset | Bicubic | BSRGAN | PDM_SR | SwinIR | LDL | RealSR | Real-ESRGAN+ | SupGAN |
|---|---|---|---|---|---|---|---|---|
| OST300 | 7.600 | 3.309 | 4.319 | 2.921 | 2.817 | 3.521 | 2.806 | 2.633 |
| RealWorld38 | 7.701 | 4.677 | 4.939 | 4.452 | 4.376 | 4.606 | 4.538 | 4.302 |
| BSDS100 | 7.548 | 3.865 | 4.785 | 3.623 | 3.788 | 3.773 | 3.902 | 3.257 |
| BSDS200 | 7.576 | 4.081 | 5.029 | 3.833 | 3.910 | 3.968 | 4.124 | 3.536 |
| Urban100 | 7.235 | 4.258 | 4.984 | 4.080 | 3.637 | 4.024 | 4.193 | 3.820 |
| 2020track1 | 7.596 | 3.783 | 4.101 | 3.622 | 3.181 | 3.790 | 3.820 | 3.372 |
| Dataset | Real-ESRGAN+ (Origin) | Real-ESRGAN+ (Origin+Sup) | SupGAN (Origin) | SupGAN (Origin+Sup) | BSRGAN (Origin) | BSRGAN (Origin+Sup) |
|---|---|---|---|---|---|---|
| OST300 | 2.806 | 2.685 | 2.637 | 2.633 | 3.309 | 3.151 |
| 2020track1 | 3.820 | 3.631 | 3.489 | 3.372 | 3.783 | 3.650 |
| RealWorld38 | 4.538 | 4.386 | 4.339 | 4.302 | 4.677 | 4.201 |
| BSDS100 | 3.899 | 3.412 | 3.302 | 3.257 | 3.863 | 3.745 |
| BSDS200 | 4.123 | 3.720 | 3.583 | 3.536 | 4.076 | 3.841 |
| Urban100 | 4.193 | 3.913 | 3.985 | 3.820 | 4.258 | 3.948 |
| Dataset | Real-ESRGAN | Loss-Real-ESRGAN | Sup-Real-ESRGAN | BSRGAN | Loss-BSRGAN | Sup-BSRGAN |
|---|---|---|---|---|---|---|
| OST300 | 11 m 24 s | 11 m 51 s | 11 m 17 s | 10 m 10 s | 10 m 56 s | 10 m 54 s |
| 2020track1 | 1 m 52 s | 1 m 42 s | 2 m 10 s | 2 m 4 s | 2 m 11 s | 2 m 14 s |
| RealWorld38 | 22 s | 23 s | 24 s | 17 s | 13 s | 13 s |
| DIV2K 100 | 2 m | 2 m 32 s | 2 m 34 s | 2 m 11 s | 2 m 17 s | 2 m 15 s |
| BSDS100 | 1 m 45 s | 2 m 2 s | 2 m | 1 m 51 s | 2 m 4 s | 2 m 7 s |
| BSDS200 | 3 m 28 s | 3 m 6 s | 3 m 5 s | 3 m 55 s | 4 m 8 s | 4 m 10 s |
| Urban100 | 8 m 53 s | 8 m 25 s | 8 m 54 s | 11 m 8 s | 11 m 10 s | 11 m 12 s |