A Two-Stage Framework for Distortion Information Estimation and Underwater Image Restoration
Abstract
1. Introduction
- We propose DR-Net, a novel two-stage deep learning framework specifically designed to address underwater image degradation caused by turbulence and water fluctuations, integrating the strengths of Transformer and GAN architectures.
- The staged design of DR-Net enables targeted handling of distinct degradation factors: DE-Net focuses on distortion estimation and correction, while IR-GAN specializes in deblurring and detail recovery, effectively mitigating the cumulative artifacts introduced during sequential processing.
- Our method achieves superior performance on both synthetic and real-world datasets, outperforming classical and state-of-the-art approaches in both qualitative and quantitative evaluations. Moreover, DR-Net improves practical applicability by enabling high-quality restoration from a single frame.
2. Related Work
3. Method: DR-Net
- The degraded captured image is first fed into DE-Net, which combines a U-Net architecture with a Vision Transformer (ViT) for global modeling of high-dimensional features, supplemented by a dual-channel attention block (DA-Block). Specifically, DE-Net first employs the U-Net to capture multi-scale features, enabling adaptation to the diverse distortion factors in the image. The DA-Block then extracts effective high-dimensional features, which the ViT processes for global feature modeling. This ultimately yields valid global distortion information (a distortion map), which is used to generate a distortion-corrected image.
- Subsequently, the distortion-corrected image, with the estimated distortion information as guidance, is fed into IR-GAN. Leveraging the powerful generative modeling capability of GANs, IR-GAN further deblurs the image and regenerates lost details, completing the overall restoration process (a sketch of this two-stage forward pass follows below).
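To make the data flow concrete, here is a minimal PyTorch sketch of the two-stage forward pass. The module interfaces (`de_net` returning a two-channel offset map, `ir_gan` taking the corrected image concatenated with the distortion map) and the pixel-offset convention are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def warp_by_distortion(image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Undo the estimated distortion by resampling `image` with the per-pixel
    offset map `flow` of shape (B, 2, H, W), given in pixel units (assumed)."""
    b, _, h, w = image.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=image.device),
        torch.linspace(-1.0, 1.0, w, device=image.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Convert pixel offsets to normalized offsets and displace the grid.
    scale = torch.tensor([w / 2.0, h / 2.0], device=image.device)
    offsets = flow.permute(0, 2, 3, 1) / scale
    return F.grid_sample(image, grid + offsets, align_corners=True)


def dr_net_forward(degraded, de_net, ir_gan):
    flow = de_net(degraded)                         # stage 1: distortion map (B, 2, H, W)
    corrected = warp_by_distortion(degraded, flow)  # distortion-corrected image
    restored = ir_gan(torch.cat([corrected, flow], dim=1))  # stage 2: guided deblurring
    return restored, corrected, flow
```

The differentiable `grid_sample` warp lets the distortion-estimation stage be trained end-to-end with the restoration stage, in the spirit of spatial transformer networks cited in the references.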
3.1. IR-GAN: GAN-Based Restoration
3.2. DE-Net: Distortion Information Estimation
3.3. Network and Training Objectives
- The Content Loss: We use an L1 loss to enforce pixel-wise similarity between the restored image and the ground truth: $\mathcal{L}_{\mathrm{content}} = \lVert I_{gt} - G(I_d) \rVert_1$, where $I_{gt}$ denotes the ground-truth image and $G(I_d)$ is the generator output for the degraded input $I_d$. However, L1 loss alone may result in insufficient high-frequency details, motivating the inclusion of adversarial loss to enhance detail restoration and even recover information lost due to significant distortions.
- The Adversarial Loss: Adopting LSGAN’s least-squares objective stabilizes training and mitigates mode collapse: $\mathcal{L}_{D} = \tfrac{1}{2}\mathbb{E}\big[(D(I_{gt})-1)^2\big] + \tfrac{1}{2}\mathbb{E}\big[D(G(I_d))^2\big]$ for the discriminator, and $\mathcal{L}_{\mathrm{adv}} = \tfrac{1}{2}\mathbb{E}\big[(D(G(I_d))-1)^2\big]$ for the generator.
- The Perceptual Loss: To reduce artifacts introduced by GANs, we incorporate perceptual loss, which measures feature-level similarity using a pre-trained network. It is defined as $\mathcal{L}_{\mathrm{perc}} = \lVert \phi(I_{gt}) - \phi(G(I_d)) \rVert_1$, where $\phi(\cdot)$ represents the output of an intermediate feature layer from a pre-trained network. (A sketch of all three losses follows this list.)
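Below is a minimal PyTorch sketch of the three losses. The choice of a VGG-19 relu4_4 feature layer for the perceptual term is an illustrative assumption; the paper does not specify the extractor or layer.

```python
import torch
import torch.nn.functional as F
import torchvision


def content_loss(fake, real):
    # L1 pixel-wise similarity between restored image and ground truth.
    return F.l1_loss(fake, real)


def lsgan_d_loss(d_real, d_fake):
    # Least-squares discriminator loss (LSGAN): real -> 1, fake -> 0.
    return 0.5 * ((d_real - 1).pow(2).mean() + d_fake.pow(2).mean())


def lsgan_g_loss(d_fake):
    # Generator tries to drive the discriminator output on fakes toward 1.
    return 0.5 * (d_fake - 1).pow(2).mean()


class PerceptualLoss(torch.nn.Module):
    """Feature-level similarity phi(fake) vs. phi(real); the relu4_4 layer of
    VGG-19 is an assumed choice. ImageNet input normalization is omitted here
    for brevity."""

    def __init__(self, layer: int = 26):  # index 26 = relu4_4 in vgg19.features
        super().__init__()
        vgg = torchvision.models.vgg19(weights="DEFAULT").features[: layer + 1]
        vgg.eval().requires_grad_(False)
        self.vgg = vgg

    def forward(self, fake, real):
        return F.l1_loss(self.vgg(fake), self.vgg(real))
```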
4. Experiments
4.1. Network Training
Algorithm 1. Training process of our network.
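As a stand-in for the algorithm box, here is a minimal sketch of one plausible alternating training step consistent with the objectives in Section 3.3, reusing the helpers from the sketches above. The loss weights `lambda_adv` and `lambda_perc` and the use of a single optimizer for both sub-networks are assumptions, not the paper's settings.

```python
import torch


def train_step(de_net, ir_gan, disc, opt_g, opt_d, degraded, gt, perc_loss,
               lambda_adv=0.01, lambda_perc=0.1):  # assumed weights
    # --- Discriminator update (LSGAN) ---
    with torch.no_grad():
        restored, _, _ = dr_net_forward(degraded, de_net, ir_gan)
    opt_d.zero_grad()
    d_loss = lsgan_d_loss(disc(gt), disc(restored))
    d_loss.backward()
    opt_d.step()

    # --- Generator update: content + adversarial + perceptual ---
    opt_g.zero_grad()
    restored, _, _ = dr_net_forward(degraded, de_net, ir_gan)
    g_loss = (content_loss(restored, gt)
              + lambda_adv * lsgan_g_loss(disc(restored))
              + lambda_perc * perc_loss(restored, gt))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```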
4.2. Comparison with the State of the Art
4.3. Ablation Studies
5. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Xue, Q.; Li, H.; Lu, F.; Bai, H. Underwater Hyperspectral Imaging System for Deep-sea Exploration. Front. Phys. 2022, 10, 1058733. [Google Scholar] [CrossRef]
- Chen, W.; Tang, M.; Wang, L. Optical Imaging, Optical Sensing and Devices. Sensors 2023, 23, 2882. [Google Scholar] [CrossRef]
- Zhou, J.; Wei, X.; Shi, J.; Chu, W.; Zhang, W. Underwater Image Enhancement Method with Light Scattering Characteristics. Comput. Electr. Eng. 2022, 100, 107898. [Google Scholar] [CrossRef]
- Alford, M.H.; Gerdt, D.W.; Adkins, C.M. An Ocean Refractometer: Resolving Millimeter-Scale Turbulent Density Fluctuations via the Refractive Index. J. Atmos. Ocean. Technol. 2006, 23, 121–137. [Google Scholar] [CrossRef]
- Wang, M.; Peng, Y.; Liu, Y.; Li, Y.; Li, L. Three-Dimensional Reconstruction and Analysis of Target Laser Point Cloud Data in Simulated Turbulent Water Environment. Opto-Electron. Eng. 2023, 50, 230004-1. [Google Scholar]
- Levin, I.M.; Savchenko, V.V.; Osadchy, V.J. Correction of an Image Distorted by a Wavy Water Surface: Laboratory Experiment. Appl. Opt. 2008, 47, 6650–6655. [Google Scholar] [CrossRef]
- Shefer, R.; Malhi, M.; Shenhar, A. Waves Distortion Correction Using Cross Correlation. Tech. Rep. Isr. Inst. Technol. 2001. Available online: http://visl.technion.ac.il/projects/2000maor (accessed on 17 September 2025).
- Oreifej, O.; Shu, G.; Pace, T.; Shah, M. A Two-Stage Reconstruction Approach for Seeing Through Water. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 1153–1160. [Google Scholar]
- Fried, D.L. Probability of Getting a Lucky Short-Exposure Image Through Turbulence. JOSA 1978, 68, 1651–1658. [Google Scholar] [CrossRef]
- Efros, A.; Isler, V.; Shi, J.; Visontai, M. Seeing Through Water. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2004; Volume 17. [Google Scholar]
- Halder, K.K.; Tahtali, M.; Anavatti, S.G. Simple and Efficient Approach for Restoration of Non-Uniformly Warped Images. Appl. Opt. 2014, 53, 5576–5584. [Google Scholar] [CrossRef]
- Tian, Y.; Narasimhan, S.G. Seeing Through Water: Image Restoration Using Model-Based Tracking. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2303–2310. [Google Scholar]
- Thapa, S.; Li, N.; Ye, J. Learning to Remove Refractive Distortions from Underwater Images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 5007–5016. [Google Scholar]
- Li, Z.; Murez, Z.; Kriegman, D.; Ramamoorthi, R.; Chandraker, M. Learning to See Through Turbulent Water. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 512–520. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H. Restormer: Efficient Transformer for High-Resolution Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5728–5739. [Google Scholar]
- Sun, S.; Ren, W.; Gao, X.; Wang, R.; Cao, X. Restoring Images in Adverse Weather Conditions via Histogram Transformer. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; Springer: Berlin/Heidelberg, Germany, 2025; pp. 111–129. [Google Scholar]
- Huang, S.; Liu, X.; Tan, T.; Hu, M.; Wei, X.; Chen, T.; Sheng, B. TransMRSR: Transformer-Based Self-Distilled Generative Prior for Brain MRI Super-Resolution. Vis. Comput. 2023, 39, 3647–3659. [Google Scholar] [CrossRef]
- James, J.G.; Agrawal, P.; Rajwade, A. Restoration of Non-Rigidly Distorted Underwater Images Using a Combination of Compressive Sensing and Local Polynomial Image Representations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7839–7848. [Google Scholar]
- Thapa, S.; Li, N.; Ye, J. Dynamic Fluid Surface Reconstruction Using Deep Neural Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 21–30. [Google Scholar]
- Li, N.; Thapa, S.; Whyte, C.; Reed, A.W.; Jayasuriya, S.; Ye, J. Unsupervised Non-Rigid Image Distortion Removal via Grid Deformation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 2522–2532. [Google Scholar]
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Donate, A.; Dahme, G.; Ribeiro, E. Classification of Textures Distorted by Water Waves. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, 20–24 August 2006; Volume 2, pp. 421–424. [Google Scholar]
- Donate, A.; Ribeiro, E. Improved Reconstruction of Images Distorted by Water Waves. In Proceedings of the Advances in Computer Graphics and Computer Vision: International Conferences VISAPP and GRAPP 2006, Revised Selected Papers, Setúbal, Portugal, 25–28 February 2006; Springer: Berlin/Heidelberg, Germany, 2007; pp. 264–277. [Google Scholar]
- Wen, Z.; Lambert, A.; Fraser, D.; Li, H. Bispectral Analysis and Recovery of Images Distorted by a Moving Water Surface. Appl. Opt. 2010, 49, 6376–6384. [Google Scholar] [CrossRef] [PubMed]
- Wen, Z.; Fraser, D.; Lambert, A.; Li, H. Reconstruction of Underwater Image by Bispectrum. In Proceedings of the 2007 IEEE International Conference on Image Processing, San Antonio, TX, USA, 16–19 September 2007; Volume 3, pp. III–545. [Google Scholar]
- Fraser, D.; Thorpe, G.; Lambert, A. Atmospheric Turbulence Visualization with Wide-Area Motion-Blur Restoration. JOSA A 1999, 16, 1751–1758. [Google Scholar] [CrossRef]
- Tahtali, M.; Lambert, A.J.; Fraser, D. Restoration of Nonuniformly Warped Images Using Accurate Frame by Frame Shiftmap Accumulation. In Proceedings of the Image Reconstruction from Incomplete Data IV, San Diego, CA, USA, 14–15 August 2006; Volume 6316, pp. 24–35. [Google Scholar]
- Yang, B.; Zhang, W.; Xie, Y.; Li, Q. Distorted Image Restoration via Non-Rigid Registration and Lucky-Region Fusion Approach. In Proceedings of the 2013 IEEE Third International Conference on Information Science and Technology (ICIST), Yangzhou, China, 23–25 March 2013; pp. 414–418. [Google Scholar]
- Zhang, Z.; Tang, Y.G.; Yang, K. A Two-Stage Restoration of Distorted Underwater Images Using Compressive Sensing and Image Registration. Adv. Manuf. 2021, 9, 273–285. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; Van Der Smagt, P.; Cremers, D.; Brox, T. Flownet: Learning Optical Flow with Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 2758–2766. [Google Scholar]
- Muller, P.; Savakis, A. Flowdometry: An Optical Flow and Deep Learning Based Approach to Visual Odometry. In Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA, 27–29 March 2017; pp. 624–631. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; Part III. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2014; Volume 27. [Google Scholar]
- Chen, J.; Chen, J.; Chao, H.; Yang, M. Image Blind Denoising with Generative Adversarial Network Based Noise Modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3155–3164. [Google Scholar]
- Wang, F.; Xu, Z.; Ni, W.; Chen, J.; Pan, Z. An Adaptive Learning Image Denoising Algorithm Based on Eigenvalue Extraction and the GAN Model. Comput. Intell. Neurosci. 2022, 2022, 5792767. [Google Scholar] [CrossRef]
- Zhao, C.; Yin, W.; Zhang, T.; Yao, X.; Qiao, S. Neutron Image Denoising and Deblurring Based on Generative Adversarial Networks. Nucl. Instrum. Methods Phys. Res. Sect. A 2023, 1055, 168505. [Google Scholar] [CrossRef]
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
- Kim, Y.H.; Nam, S.H.; Hong, S.B.; Park, K.R. GRA-GAN: Generative Adversarial Network for Image Style Transfer of Gender, Race, and Age. Expert Syst. Appl. 2022, 198, 116792. [Google Scholar] [CrossRef]
- Zheng, C.; Cham, T.J.; Cai, J. Pluralistic Image Completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1438–1447. [Google Scholar]
- Zeng, Y.; Fu, J.; Chao, H.; Guo, B. Aggregated Contextual Transformations for High-Resolution Image Inpainting. IEEE Trans. Vis. Comput. Graph. 2022, 29, 3266–3280. [Google Scholar] [CrossRef] [PubMed]
- de Farias, E.C.; Di Noia, C.; Han, C.; Sala, E.; Castelli, M.; Rundo, L. Impact of GAN-Based Lesion-Focused Medical Image Super-Resolution on the Robustness of Radiomic Features. Sci. Rep. 2021, 11, 21361. [Google Scholar] [CrossRef]
- Wang, H.; Zhong, G.; Sun, J.; Chen, Y.; Zhao, Y.; Li, S.; Wang, D. Simultaneous Restoration and Super-Resolution GAN for Underwater Image Enhancement. Front. Mar. Sci. 2023, 10, 1162295. [Google Scholar] [CrossRef]
- Ma, Q.; Li, X.; Li, B.; Zhu, Z.; Wu, J.; Huang, F.; Hu, H. STAMF: Synergistic Transformer and Mamba Fusion Network for RGB-Polarization Based Underwater Salient Object Detection. Inf. Fusion 2025, 122, 103182. [Google Scholar] [CrossRef]
- Cong, X.; Gui, J.; Zhang, J. Underwater Image Enhancement Network Based on Visual Transformer with Multiple Loss Functions Fusion. Chin. J. Intell. Sci. Technol. 2022, 4, 522–532. [Google Scholar]
- Peng, L.; Zhu, C.; Bian, L. U-Shape Transformer for Underwater Image Enhancement. IEEE Trans. Image Process. 2023, 32, 3066–3079. [Google Scholar] [CrossRef]
- Xue, J.; Wu, Q. Lightweight Cross-Gated Transformer for Image Restoration and Enhancement. J. Front. Comput. Sci. Technol. 2024, 18. [Google Scholar]
- Zheng, S.; Wang, R.; Zheng, S.; Wang, F.; Wang, L.; Liu, Z. A Multi-scale Feature Modulation Network for Efficient Underwater Image Enhancement. J. King Saud Univ.-Comput. Inf. Sci. 2024, 36, 101888. [Google Scholar] [CrossRef]
- Gao, Z.; Yang, J.; Zhang, L.; Jiang, F.; Jiao, X. TEGAN: Transformer Embedded Generative Adversarial Network for Underwater Image Enhancement. Cogn. Comput. 2024, 16, 191–214. [Google Scholar] [CrossRef]
- Xu, S.; Wang, J.; He, N.; Xu, G.; Zhang, G. Optimizing Underwater Image Enhancement: Integrating Semi-Supervised Learning and Multi-Scale Aggregated Attention. Vis. Comput. 2024, 41, 3437–3455. [Google Scholar] [CrossRef]
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image Restoration Using Swin Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 1833–1844. [Google Scholar]
- Zhu, Z.; Li, X.; Ma, Q.; Zhai, J.; Hu, H. FDNet: Fourier Transform Guided Dual-Channel Underwater Image Enhancement Diffusion Network. Sci. China Technol. Sci. 2025, 68, 1100403. [Google Scholar] [CrossRef]
- Alsakar, Y.M.; Sakr, N.A.; El-Sappagh, S.; Abuhmed, T.; Elmogy, M. Underwater Image Restoration and Enhancement: A Comprehensive Review of Recent Trends, Challenges, and Applications. Vis. Comput. 2024, 41, 3735–3783. [Google Scholar] [CrossRef]
- Jaderberg, M.; Simonyan, K.; Zisserman, A.; Kavukcuoglu, K. Spatial Transformer Networks. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2015; Volume 28. [Google Scholar]
- Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.; Wang, Z.; Paul Smolley, S. Least Squares Generative Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2794–2802. [Google Scholar]
- Liu, W.; Lu, H.; Fu, H.; Cao, Z. Learning to Upsample by Learning to Sample. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 6027–6037. [Google Scholar]
- Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2012; Volume 25. [Google Scholar]
- Huynh-Thu, Q.; Ghanbari, M. Scope of Validity of PSNR in Image/Video Quality Assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
- Sara, U.; Akter, M.; Uddin, M.S. Image Quality Assessment Through FSIM, SSIM, MSE and PSNR—A Comparative Study. J. Comput. Commun. 2019, 7, 8–18. [Google Scholar] [CrossRef]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595. [Google Scholar]
- Ding, K.; Ma, K.; Wang, S.; Simoncelli, E.P. Image Quality Assessment: Unifying Structure and Texture Similarity. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 2567–2581. [Google Scholar] [CrossRef]
- Kastryulin, S.; Zakirov, J.; Prokopenko, D.; Dylov, D.V. PyTorch Image Quality: Metrics for Image Quality Assessment. arXiv 2022, arXiv:2208.14818. [Google Scholar] [CrossRef]
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef] [PubMed]
- Farnebäck, G. Two-Frame Motion Estimation Based on Polynomial Expansion. In Proceedings of the Image Analysis: 13th Scandinavian Conference, SCIA 2003, Halmstad, Sweden, 29 June–2 July 2003; Proceedings 13. Springer: Berlin/Heidelberg, Germany, 2003; pp. 363–370. [Google Scholar]

| Methods | PSNR ↑ | SSIM ↑ | FSIM ↑ | GMSD ↓ | LPIPS ↓ | DISTS ↓ | BRISQUE ↓ |
|---|---|---|---|---|---|---|---|
| Isola et al. [22] | 19.1077 | 0.6504 | 0.7487 | 0.2016 | 0.4163 | 0.2280 | 14.7516 |
| Li et al. [14] | 20.2547 | 0.7058 | 0.7859 | 0.1790 | 0.2826 | 0.1512 | 20.4872 |
| Restormer [16] | 21.9327 | 0.7659 | 0.7802 | 0.1709 | 0.3260 | 0.2087 | 46.0945 |
| TransMRSR [18] | 19.5300 | 0.6812 | 0.7391 | 0.1924 | 0.4135 | 0.2709 | 57.0114 |
| Histoformer [17] | 21.8466 | 0.7527 | 0.7689 | 0.1739 | 0.3524 | 0.2333 | 50.4283 |
| Ours | 22.5133 | 0.7828 | 0.8177 | 0.1554 | 0.2243 | 0.1321 | 28.6267 |
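The full-reference and no-reference scores in the table above can be computed with the PyTorch Image Quality (piq) package cited in the references; below is a minimal sketch, assuming float image tensors in [0, 1] with shape (B, 3, H, W).

```python
import torch
import piq


def evaluate(restored: torch.Tensor, gt: torch.Tensor) -> dict:
    """Compute the paper's metric suite for a batch of restored images."""
    return {
        "PSNR": piq.psnr(restored, gt, data_range=1.0).item(),
        "SSIM": piq.ssim(restored, gt, data_range=1.0).item(),
        "FSIM": piq.fsim(restored, gt, data_range=1.0).item(),
        "GMSD": piq.gmsd(restored, gt, data_range=1.0).item(),
        "LPIPS": piq.LPIPS()(restored, gt).item(),
        "DISTS": piq.DISTS()(restored, gt).item(),
        "BRISQUE": piq.brisque(restored, data_range=1.0).item(),  # no-reference
    }
```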
| Datasets | Metrics | Tian-61 | Tian-11 | Oreifej-61 | Oreifej-11 | Halder-61 | Halder-11 | Isola et al. | Li et al. | James-61 | James-11 | Restormer | TransMRSR | Histoformer | Ours |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TianSet | PSNR ↑ | 20.3856 | 18.1961 | 22.2898 | 17.9748 | 17.9410 | 16.4482 | 16.6478 | 16.8975 | 19.7365 | 18.6611 | 19.3051 | 18.2819 | 19.4668 | 20.4482 |
| | SSIM ↑ | 0.6285 | 0.4694 | 0.7802 | 0.5154 | 0.6822 | 0.4438 | 0.4257 | 0.5128 | 0.6426 | 0.5384 | 0.5402 | 0.4272 | 0.5341 | 0.6799 |
| | LPIPS ↓ | 0.3615 | 0.3647 | 0.1618 | 0.2359 | 0.2571 | 0.3406 | 0.5162 | 0.2258 | 0.1994 | 0.2322 | 0.3327 | 0.5520 | 0.3792 | 0.1611 |
| | DISTS ↓ | 0.2651 | 0.2193 | 0.1679 | 0.1662 | 0.2583 | 0.2637 | 0.3485 | 0.1563 | 0.1648 | 0.1663 | 0.2543 | 0.3765 | 0.3115 | 0.1179 |
| | BRISQUE ↓ | 41.7878 | 43.9543 | 56.4836 | 53.6872 | 46.4372 | 45.4426 | 48.5798 | 71.6325 | 50.0134 | 50.9611 | 44.1252 | 62.7634 | 53.6525 | 57.2292 |
| JamesSet | PSNR ↑ | 21.4906 | 18.4396 | 19.994 | 18.6198 | 14.9126 | 14.6094 | 15.2982 | 17.1299 | 22.7130 | 18.7089 | 20.2811 | 18.8414 | 18.9443 | 20.1782 |
| | SSIM ↑ | 0.6222 | 0.4280 | 0.5854 | 0.4881 | 0.7093 | 0.3935 | 0.3528 | 0.5158 | 0.7574 | 0.4759 | 0.5494 | 0.3831 | 0.4292 | 0.5688 |
| | LPIPS ↓ | 0.2879 | 0.2774 | 0.1532 | 0.1787 | 0.2085 | 0.3081 | 0.5094 | 0.1958 | 0.1297 | 0.1907 | 0.3679 | 0.3855 | 0.4290 | 0.1439 |
| | DISTS ↓ | 0.3020 | 0.2351 | 0.1715 | 0.1688 | 0.2955 | 0.3255 | 0.4071 | 0.1843 | 0.1835 | 0.1893 | 0.3500 | 0.3861 | 0.3834 | 0.1368 |
| | BRISQUE ↓ | 27.8425 | 26.2433 | 13.4870 | 13.0378 | 38.2839 | 25.0573 | 22.6950 | 19.5195 | 35.4909 | 32.2424 | 52.1540 | 67.9440 | 53.3034 | 15.9079 |
| ThapaSet | PSNR ↑ | 26.7043 | 25.8777 | 19.6219 | 20.4880 | 23.2897 | 23.6934 | 17.1358 | 20.5535 | 27.9386 | 26.2761 | 29.5838 | 24.5663 | 29.7377 | 32.8341 |
| | SSIM ↑ | 0.9565 | 0.9401 | 0.7853 | 0.8099 | 0.9185 | 0.9177 | 0.6389 | 0.8429 | 0.9653 | 0.9467 | 0.9728 | 0.9115 | 0.9740 | 0.9815 |
| | LPIPS ↓ | 0.0410 | 0.0355 | 0.1502 | 0.0970 | 0.0980 | 0.0737 | 0.4069 | 0.1067 | 0.0479 | 0.0427 | 0.0192 | 0.0689 | 0.0193 | 0.0193 |
| | DISTS ↓ | 0.0338 | 0.0251 | 0.1340 | 0.0761 | 0.1098 | 0.0908 | 0.2705 | 0.0902 | 0.0614 | 0.0423 | 0.0124 | 0.1025 | 0.0143 | 0.0146 |
| | BRISQUE ↓ | 44.1472 | 39.5778 | 46.2971 | 40.5773 | 40.7131 | 38.0676 | 27.5812 | 40.4162 | 39.1398 | 39.2585 | 41.6130 | 41.8464 | 42.5070 | 39.8312 |
| NianSet | PSNR ↑ | 23.6908 | 22.7268 | 22.6756 | 21.0862 | 18.0662 | 17.7436 | 18.8707 | 20.1866 | 25.5396 | 24.3508 | 21.2622 | 21.2562 | 21.4424 | 21.5932 |
| | SSIM ↑ | 0.6747 | 0.6390 | 0.8070 | 0.7478 | 0.8648 | 0.8466 | 0.8088 | 0.8538 | 0.7742 | 0.6947 | 0.8768 | 0.8694 | 0.8637 | 0.6043 |
| | LPIPS ↓ | 0.3753 | 0.3834 | 0.2887 | 0.3278 | 0.4119 | 0.4334 | 0.4891 | 0.3016 | 0.3116 | 0.3420 | 0.4025 | 0.4534 | 0.3754 | 0.2767 |
| | DISTS ↓ | 0.2329 | 0.2171 | 0.1703 | 0.2013 | 0.2414 | 0.2595 | 0.2660 | 0.1613 | 0.2144 | 0.2078 | 0.2225 | 0.2738 | 0.2300 | 0.1832 |
| | BRISQUE ↓ | 55.5353 | 42.0099 | 53.9865 | 54.5593 | 48.1926 | 47.1852 | 0.7258 | 39.8342 | 53.5041 | 49.2360 | 53.4011 | 67.9567 | 58.6706 | 45.0817 |
| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ | DISTS ↓ | BRISQUE ↓ |
|---|---|---|---|---|---|
| IR-GAN | 19.2210 | 0.6231 | 0.3602 | 0.1914 | 14.8061 |
| CNNWarp | 21.2186 | 0.7325 | 0.2908 | 0.1820 | 34.1997 |
| CNNWarp+IR-GAN | 21.3630 | 0.7406 | 0.2889 | 0.1683 | 25.8712 |
| DE-Net | 21.8322 | 0.7523 | 0.2687 | 0.1665 | 33.6816 |
| DE-Net+IR-GAN | 22.3290 | 0.7760 | 0.2310 | 0.1365 | 27.8354 |
| DR-Net (Ours) | 22.5133 | 0.7828 | 0.2243 | 0.1321 | 28.6267 |