DAE-GAN: Underwater Image Super-Resolution Based on Symmetric Degradation Attention Enhanced Generative Adversarial Network
Abstract
1. Introduction
- We propose a degradation model that simulates the real-world degradation process of underwater images, enabling the network to learn the mapping between low-resolution and high-resolution images more accurately and thereby improving the quality of the reconstructed images (a sketch of such a pipeline is given after this list).
- We design an adaptive residual attention module for underwater images that uses an energy function to automatically assess the importance of each feature; integrated into residual dense blocks, it sharpens the extraction of key features and improves super-resolution reconstruction (see the second sketch after this list).
- Experimental results demonstrate that our approach achieves high PSNR values while maintaining low LPIPS scores, two metrics that usually trade off against each other.
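To make the degradation-simulation idea concrete, the following is a minimal Python sketch of a synthetic underwater degradation pipeline applied in the order of Section 3.1 (resize, noise, blur, suspended particles). It assumes NumPy and OpenCV are available; the function names, parameter ranges, and the speckle-based particle model are illustrative assumptions, not the paper's exact settings.

```python
# Illustrative sketch of a synthetic underwater degradation pipeline
# (resize -> noise -> blur -> suspended particles). Parameter ranges and
# helper names are assumptions for demonstration, not the paper's settings.
import numpy as np
import cv2

def random_resize(img, scale_range=(0.5, 1.0)):
    """Rescale with a randomly chosen interpolation kernel."""
    s = np.random.uniform(*scale_range)
    interp = int(np.random.choice([cv2.INTER_AREA, cv2.INTER_LINEAR, cv2.INTER_CUBIC]))
    h, w = img.shape[:2]
    return cv2.resize(img, (max(1, int(w * s)), max(1, int(h * s))), interpolation=interp)

def add_gaussian_noise(img, sigma_range=(1.0, 15.0)):
    """Additive Gaussian noise, approximating sensor noise in turbid water."""
    sigma = np.random.uniform(*sigma_range)
    noisy = img.astype(np.float32) + np.random.normal(0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_blur(img, k_range=(3, 9)):
    """Isotropic Gaussian blur with a random odd kernel size."""
    k = int(np.random.choice(np.arange(k_range[0], k_range[1] + 1, 2)))
    return cv2.GaussianBlur(img, (k, k), 0)

def add_suspended_particles(img, density=0.002, radius=2):
    """Scatter small bright speckles to mimic suspended particles."""
    out = img.copy()
    h, w = out.shape[:2]
    n = int(density * h * w)
    ys = np.random.randint(0, h, n)
    xs = np.random.randint(0, w, n)
    for y, x in zip(ys, xs):
        cv2.circle(out, (int(x), int(y)), radius, (230, 230, 230), -1)
    return out

def degrade(hr, lr_scale=4):
    """Full synthetic degradation: HR image -> degraded LR counterpart."""
    img = random_resize(hr)            # 3.1.1 Resize
    img = add_gaussian_noise(img)      # 3.1.2 Noise
    img = add_blur(img)                # 3.1.3 Blur
    img = add_suspended_particles(img) # 3.1.4 Suspended particles
    h, w = hr.shape[:2]
    return cv2.resize(img, (w // lr_scale, h // lr_scale), interpolation=cv2.INTER_AREA)
```

Applying a function like `degrade` to every HR training image would produce synthetic LR/HR pairs of the kind used to train a blind super-resolution network.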
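The adaptive residual attention module is described as scoring feature importance with an energy function. The sketch below shows one way a parameter-free, energy-based attention (in the spirit of SimAM) could be wrapped around a small residual dense block in PyTorch; the block layout, residual scaling factor, and class names are assumptions for illustration, not the paper's ARAM implementation.

```python
# Illustrative PyTorch sketch: energy-based, parameter-free attention applied
# inside a residual dense block. This is an assumption-driven example, not the
# paper's ARAM code.
import torch
import torch.nn as nn

class EnergyAttention(nn.Module):
    """Weights each activation by an inverse-energy score: neurons that stand
    out from their channel mean receive higher attention (SimAM-style)."""
    def __init__(self, eps: float = 1e-4):
        super().__init__()
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation from channel mean
        v = d.sum(dim=(2, 3), keepdim=True) / n             # channel variance estimate
        e_inv = d / (4 * (v + self.eps)) + 0.5               # inverse of the minimal energy
        return x * torch.sigmoid(e_inv)                      # re-weight features

class AttentionResidualDenseBlock(nn.Module):
    """A small residual dense block with energy attention applied before the
    residual connection (layout is illustrative)."""
    def __init__(self, channels: int = 64, growth: int = 32):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, growth, 3, padding=1)
        self.conv2 = nn.Conv2d(channels + growth, growth, 3, padding=1)
        self.conv3 = nn.Conv2d(channels + 2 * growth, channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)
        self.attn = EnergyAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.act(self.conv1(x))
        f2 = self.act(self.conv2(torch.cat([x, f1], dim=1)))
        f3 = self.conv3(torch.cat([x, f1, f2], dim=1))
        return x + 0.2 * self.attn(f3)   # residual scaling as in ESRGAN-style blocks
```

Because the attention weights come from a closed-form energy function rather than learned parameters, this kind of module adds essentially no extra parameters to the generator.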
2. Related Work
2.1. Deep Networks for Image Super-Resolution
2.2. Degradation Models
2.3. Attention-Based Image Super-Resolution
3. Methods
3.1. A Practical Degradation Model
3.1.1. Resize
3.1.2. Noise
3.1.3. Blur
3.1.4. Suspended Particles
3.1.5. Validation of the Degradation Model Efficacy
3.2. Network Architecture
3.2.1. Overall Structure
3.2.2. Attention Enhanced Residual Dense Block
3.3. Networks and Training
3.3.1. Generator
3.3.2. Discriminator with Spectral Normalization
3.3.3. Loss Function
4. Experiments
4.1. Datasets and Experimental Settings
4.1.1. Datasets
4.1.2. Implementation Details
4.1.3. Evaluation Metrics
4.2. Comparisons of Super-Resolution Results
4.2.1. Quantitative Results
4.2.2. Qualitative Results
4.3. Model Performance Evaluation on Test Datasets
4.4. Ablation Study
5. Conclusions
6. Patents
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Shi, A.; Ding, H. Underwater image super-resolution via dual-aware integrated network. Appl. Sci. 2023, 13, 12985. [Google Scholar] [CrossRef]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; pp. 184–199. [Google Scholar]
- Dong, C.; Loy, C.C.; Tang, X. Accelerating the super-resolution convolutional neural network. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; pp. 391–407. [Google Scholar]
- Kong, X.; Zhao, H.; Qiao, Y.; Dong, C. Classsr: A general framework to accelerate super-resolution networks by data characteristic. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12016–12025. [Google Scholar]
- Li, Z.; Liu, Y.; Chen, X.; Cai, H.; Gu, J.; Qiao, Y.; Dong, C. Blueprint separable residual network for efficient image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 833–843. [Google Scholar]
- Shi, S.; Xiangli, B.; Yin, Z. Multiframe super-resolution of color images based on cross channel prior. Symmetry 2021, 13, 901. [Google Scholar] [CrossRef]
- Liang, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. Mutual affine network for spatially variant kernel estimation in blind image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 4096–4105. [Google Scholar]
- Park, S.H.; Moon, Y.S.; Cho, N.I. Perception-oriented single image super-resolution using optimal objective estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 1725–1735. [Google Scholar]
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
- Zhang, K.; Liang, J.; Van Gool, L.; Timofte, R. Designing a practical degradation model for deep blind image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 4791–4800. [Google Scholar]
- Wang, X.; Xie, L.; Dong, C.; Shan, Y. Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 1905–1914. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
- Wu, C.; Wang, D.; Bai, Y.; Mao, H.; Li, Y.; Shen, Q. Hsr-diff: Hyperspectral image super-resolution via conditional diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 7083–7093. [Google Scholar]
- Zhang, T.; Yang, J. Transformer with Hybrid Attention Mechanism for Stereo Endoscopic Video Super Resolution. Symmetry 2023, 15, 1947. [Google Scholar] [CrossRef]
- Zhao, Z.; Zhang, J.; Gu, X.; Tan, C.; Xu, S.; Zhang, Y.; Timofte, R.; Van Gool, L. Spherical space feature decomposition for guided depth map super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 12547–12558. [Google Scholar]
- Choi, H.; Lee, J.; Yang, J. N-gram in swin transformers for efficient lightweight image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 2071–2081. [Google Scholar]
- Aghelan, A.; Rouhani, M. Underwater image super-resolution using generative adversarial network-based model. In Proceedings of the 2023 13th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 1–2 November 2023; pp. 480–484. [Google Scholar]
- Umer, R.M.; Micheloni, C. Real Image Super-Resolution using GAN through modeling of LR and HR process. In Proceedings of the 2022 18th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Madrid, Spain, 29 November 2022–2 December 2022; pp. 1–8. [Google Scholar]
- Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Change Loy, C. Esrgan: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018. [Google Scholar]
- Chen, Z.; Zhang, Y.; Gu, J.; Kong, L.; Yang, X.; Yu, F. Dual aggregation transformer for image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 12312–12321. [Google Scholar]
- Liu, Y.; Chu, Z. A Dynamic Fusion of Local and Non-Local Features-Based Feedback Network on Super-Resolution. Symmetry 2023, 15, 885. [Google Scholar] [CrossRef]
- Gao, Y.; Liu, J.; Li, W.; Hou, M.; Li, Y.; Zhao, H. Augmented Grad-CAM++: Super-Resolution Saliency Maps for Visual Interpretation of Deep Neural Network. Electronics 2023, 12, 4846. [Google Scholar] [CrossRef]
- Mei, Y.; Fan, Y.; Zhou, Y. Image super-resolution with non-local sparse attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 3517–3526. [Google Scholar]
- Niu, B.; Wen, W.; Ren, W.; Zhang, X.; Yang, L.; Wang, S.; Zhang, K.; Cao, X.; Shen, H. Single image super-resolution via a holistic attention network. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 191–207. [Google Scholar]
- Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; Aila, T. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8110–8119. [Google Scholar]
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144. [Google Scholar]
- Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2472–2481. [Google Scholar]
- Zhou, S.; Zhang, J.; Zuo, W.; Loy, C.C. Cross-scale internal graph neural network for image super-resolution. Adv. Neural Inf. Process. Syst. 2020, 33, 3499–3509. [Google Scholar]
- Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; Li, H. Uformer: A general u-shaped transformer for image restoration. arXiv 2021, arXiv:2106.03106. [Google Scholar]
- Li, G.; Zhao, L.; Sun, J.; Lan, Z.; Zhang, Z.; Chen, J.; Lin, Z.; Lin, H.; Xing, W. Rethinking Multi-Contrast MRI Super-Resolution: Rectangle-Window Cross-Attention Transformer and Arbitrary-Scale Upsampling. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 21230–21240. [Google Scholar]
- Li, A.; Zhang, L.; Liu, Y.; Zhu, C. Feature modulation transformer: Cross-refinement of global representation via high-frequency prior for image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 12514–12524. [Google Scholar]
- Zhou, Y.; Li, Z.; Guo, C.-L.; Bai, S.; Cheng, M.-M.; Hou, Q. Srformer: Permuted self-attention for single image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 12780–12791. [Google Scholar]
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 1833–1844. [Google Scholar]
- Lai, W.-S.; Huang, J.-B.; Ahuja, N.; Yang, M.-H. Deep laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 624–632. [Google Scholar]
- Zhang, K.; Zuo, W.; Zhang, L. Deep plug-and-play super-resolution for arbitrary blur kernels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1671–1681. [Google Scholar]
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301. [Google Scholar]
- Dai, T.; Cai, J.; Zhang, Y.; Xia, S.-T.; Zhang, L. Second-order attention network for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11065–11074. [Google Scholar]
- Park, J.; Son, S.; Lee, K.M. Content-aware local gan for photo-realistic super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 10585–10594. [Google Scholar]
- Liu, C.; Sun, D. On Bayesian adaptive video super resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 346–360. [Google Scholar] [CrossRef] [PubMed]
- Yang, W.; Zhang, X.; Tian, Y.; Wang, W.; Xue, J.-H.; Liao, Q. Deep learning for single image super-resolution: A brief review. IEEE Trans. Multimed. 2019, 21, 3106–3121. [Google Scholar] [CrossRef]
- Keys, R. Cubic convolution interpolation for digital image processing. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 1153–1160. [Google Scholar] [CrossRef]
- Luo, Z.; Huang, H.; Yu, L.; Li, Y.; Fan, H.; Liu, S. Deep constrained least squares for blind image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 17642–17652. [Google Scholar]
- Zhang, W.; Li, X.; Xu, S.; Li, X.; Yang, Y.; Xu, D.; Liu, T.; Hu, H. Underwater Image Restoration via Adaptive Color Correction and Contrast Enhancement Fusion. Remote Sens. 2023, 15, 4699. [Google Scholar] [CrossRef]
- Yang, L.; Zhang, R.-Y.; Li, L.; Xie, X. Simam: A simple, parameter-free attention module for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Vienna, Austria, 18–24 July 2021; pp. 11863–11874. [Google Scholar]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
- Islam, M.J.; Xia, Y.; Sattar, J. Fast underwater image enhancement for improved visual perception. IEEE Robot. Autom. Lett. 2020, 5, 3227–3234. [Google Scholar] [CrossRef]
- Islam, M.J.; Luo, P.; Sattar, J. Simultaneous enhancement and super-resolution of underwater imagery for improved visual perception. arXiv 2020, arXiv:2002.01155. [Google Scholar]
- Sharma, P.; Bisht, I.; Sur, A. Wavelength-based attributed deep neural network for underwater image restoration. ACM Trans. Multimed. Comput. Commun. Appl. 2023, 19, 1–23. [Google Scholar] [CrossRef]
- Chen, Z.; Liu, C.; Zhang, K.; Chen, Y.; Wang, R.; Shi, X. Underwater-image super-resolution via range-dependency learning of multiscale features. Comput. Electr. Eng. 2023, 110, 108756. [Google Scholar] [CrossRef]
- Islam, M.J.; Enan, S.S.; Luo, P.; Sattar, J. Underwater image super-resolution using deep residual multipliers. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 4–8 June 2020; pp. 900–906. [Google Scholar]
- Zhang, Y.; Yang, S.; Sun, Y.; Liu, S.; Li, X. Attention-guided multi-path cross-CNN for underwater image super-resolution. Signal Image Video Process. 2022, 16, 155–163. [Google Scholar] [CrossRef]
- Fang, J.; Lin, H.; Chen, X.; Zeng, K. A hybrid network of cnn and transformer for lightweight image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 1103–1112. [Google Scholar]
- Ren, T.; Xu, H.; Jiang, G.; Yu, M.; Zhang, X.; Wang, B.; Luo, T. Reinforced swin-convs transformer for simultaneous underwater sensing scene image enhancement and super-resolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
- Yue, Z.; Zhao, Q.; Xie, J.; Zhang, L.; Meng, D.; Wong, K.-Y.K. Blind image super-resolution with elaborate degradation modeling on noise and kernel. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 2128–2138. [Google Scholar]
| Image Sample | RRDB Attention Score | RRDB + ARAM Attention Score |
|---|---|---|
| Sea Slug | 0.72 | 0.78 |
| Sea Turtle | 0.53 | 0.75 |
| Stingray | 0.58 | 0.68 |
| Clownfish | 0.49 | 0.56 |
| Name | Type | Training Samples | Validation/Test Samples | HR Image Size |
|---|---|---|---|---|
| USR-248 | Train | 1060 | 248 | 640 × 480 |
| UFO120 | Train | 1500 | 120 | 640 × 480 |
| EUVP | Val | - | 12,000 | 256 × 256 |
| SQUID | Val | - | 57 | 256 × 256 |
| Method | PSNR (dB)↑ ×2 | PSNR (dB)↑ ×4 | PSNR (dB)↑ ×8 | SSIM↑ ×2 | SSIM↑ ×4 | SSIM↑ ×8 | UIQM↑ ×2 | UIQM↑ ×4 | UIQM↑ ×8 | LPIPS↓ ×2 | LPIPS↓ ×4 | LPIPS↓ ×8 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SRCNN | 26.81 | 23.68 | 19.07 | 0.76 | 0.65 | 0.57 | 2.59 | 2.38 | 2.01 | 0.56 | 0.71 | 0.86 |
| VDSR | 27.98 | 24.70 | 20.15 | 0.79 | 0.69 | 0.61 | 2.61 | 2.44 | 2.09 | 0.53 | 0.67 | 0.83 |
| SRGAN | 26.68 | 23.46 | 19.83 | 0.73 | 0.63 | 0.54 | 2.55 | 2.38 | 1.98 | 0.30 | 0.48 | 0.61 |
| ESRGAN | 28.08 | 24.50 | 20.08 | 0.76 | 0.67 | 0.57 | 2.59 | 2.45 | 2.02 | 0.24 | 0.40 | 0.54 |
| BSRGAN | 28.15 | 25.05 | 20.33 | 0.79 | 0.69 | 0.59 | 2.63 | 2.47 | 2.07 | 0.20 | 0.32 | 0.42 |
| Real-ESRGAN | 28.86 | 25.11 | 20.45 | 0.80 | 0.71 | 0.62 | 2.68 | 2.50 | 2.10 | 0.19 | 0.33 | 0.44 |
| Deep WaveNet | 29.09 | 25.40 | 21.70 | 0.83 | 0.73 | 0.63 | 2.72 | 2.53 | 2.13 | 0.44 | 0.61 | 0.72 |
| RDLN | 29.76 | 25.59 | 22.40 | 0.82 | 0.71 | 0.62 | 2.74 | 2.58 | 2.19 | 0.29 | 0.50 | 0.66 |
| DAIN | 29.97 | 26.16 | 22.86 | 0.84 | 0.73 | 0.63 | 2.77 | 2.64 | 2.17 | 0.43 | 0.63 | 0.69 |
| DAE-GAN (ours) | 29.95 | 26.23 | 22.83 | 0.85 | 0.75 | 0.64 | 2.80 | 2.68 | 2.20 | 0.19 | 0.31 | 0.40 |
| Method | PSNR (dB)↑ ×2 | PSNR (dB)↑ ×3 | PSNR (dB)↑ ×4 | SSIM↑ ×2 | SSIM↑ ×3 | SSIM↑ ×4 | UIQM↑ ×2 | UIQM↑ ×3 | UIQM↑ ×4 | LPIPS↓ ×2 | LPIPS↓ ×3 | LPIPS↓ ×4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SRCNN | 24.75 | 22.22 | 19.05 | 0.72 | 0.65 | 0.56 | 2.39 | 2.24 | 2.12 | 0.56 | 0.65 | 0.71 |
| SRGAN | 25.11 | 23.01 | 19.93 | 0.75 | 0.70 | 0.58 | 2.44 | 2.39 | 2.35 | 0.24 | 0.33 | 0.37 |
| Deep WaveNet | 25.71 | 25.23 | 23.26 | 0.77 | 0.76 | 0.73 | 2.89 | 2.86 | 2.85 | 0.40 | - | 0.53 |
| AMPCNet | 25.24 | 25.43 | 25.08 | 0.71 | 0.70 | 0.70 | 2.76 | 2.65 | 2.68 | 0.31 | - | 0.47 |
| ESRGCNN | 25.82 | 25.98 | 24.70 | 0.73 | 0.71 | 0.71 | 2.88 | 2.86 | 2.75 | 0.34 | 0.46 | 0.51 |
| HNCT | 25.73 | 25.86 | 24.91 | 0.72 | 0.73 | 0.70 | 2.76 | 2.78 | 2.64 | 0.27 | 0.40 | 0.47 |
| URSCT | 25.96 | - | 25.37 | 0.81 | - | 0.69 | - | - | - | 0.37 | 0.49 | 0.50 |
| RDLN | 26.20 | 26.13 | 25.56 | 0.78 | 0.74 | 0.73 | 2.87 | 2.84 | 2.83 | 0.29 | 0.37 | 0.39 |
| DAE-GAN (ours) | 26.26 | 26.19 | 25.89 | 0.80 | 0.76 | 0.74 | 2.88 | 2.87 | 2.85 | 0.19 | 0.25 | 0.30 |
| Method | Scale | EUVP PSNR↑ | EUVP SSIM↑ | EUVP UIQM↑ | EUVP LPIPS↓ | SQUID PSNR↑ | SQUID SSIM↑ | SQUID UIQM↑ | SQUID LPIPS↓ |
|---|---|---|---|---|---|---|---|---|---|
| SRCNN | ×4 | 23.64 | 0.63 | 2.37 | 0.70 | 23.66 | 0.65 | 2.38 | 0.70 |
| SRGAN | ×4 | 23.31 | 0.59 | 2.38 | 0.48 | 23.37 | 0.63 | 2.40 | 0.45 |
| ESRGAN | ×4 | 24.40 | 0.66 | 2.44 | 0.38 | 23.62 | 0.68 | 2.48 | 0.38 |
| BSRGAN | ×4 | 24.89 | 0.70 | 2.47 | 0.32 | 25.11 | 0.72 | 2.51 | 0.31 |
| Real-ESRGAN | ×4 | 25.01 | 0.73 | 2.48 | 0.33 | 25.23 | 0.75 | 2.53 | 0.33 |
| PDM-SRGAN | ×4 | 25.89 | 0.74 | - | 0.29 | 26.04 | 0.74 | - | 0.28 |
| BSRDM | ×4 | 26.35 | 0.76 | 2.40 | 0.38 | 26.40 | 0.73 | 2.43 | 0.36 |
| CAL-GAN | ×4 | 26.09 | 0.71 | - | 0.31 | 26.11 | 0.69 | - | 0.29 |
| DAE-GAN (ours) | ×4 | 26.33 | 0.78 | 2.63 | 0.30 | 26.41 | 0.77 | 2.68 | 0.29 |
| DM | ARAM | PSNR↑ | SSIM↑ | UIQM↑ | LPIPS↓ |
|---|---|---|---|---|---|
|  |  | 24.50 | 0.66 | 2.45 | 0.40 |
| √ |  | 25.02 | 0.68 | 2.64 | 0.34 |
|  | √ | 25.89 | 0.72 | 2.47 | 0.37 |
| √ | √ | 26.23 | 0.75 | 2.68 | 0.31 |
| Image Flipping Applied | PSNR↑ | SSIM↑ | UIQM↑ | LPIPS↓ |
|---|---|---|---|---|
| Not Applied | 26.23 | 0.75 | 2.68 | 0.31 |
| Applied | 26.33 | 0.77 | 2.69 | 0.30 |