Delving into Underwater Image Utility: Benchmark Dataset and Prediction Model
Abstract
1. Introduction
- To the best of our knowledge, we are the first to construct a large-scale Object Detection-oriented Underwater Image Utility Assessment (OD-UIUA) dataset. The OD-UIUA dataset comprises 1200 raw underwater images, 12,000 corresponding enhanced images generated by 10 representative underwater image enhancement (UIE) algorithms, and 13,200 utility scores covering every image in the dataset. Such a dataset not only provides a fair and reliable benchmark for the UIUA task but also encourages further innovation in machine vision-oriented UIE algorithms.
- Unlike conventional underwater image quality assessment (UIQA) methods, which mainly target visual perception-oriented quality evaluation, we train a deep UIUA network (DeepUIUA) specifically tailored to assess the utility of underwater images for machine vision tasks, in particular object detection. DeepUIUA captures object region-oriented features that enable accurate prediction of the utility scores of underwater images. Extensive experiments demonstrate that DeepUIUA achieves superior predictive accuracy compared with state-of-the-art no-reference image quality assessment (NR-IQA) methods in assessing the utility of underwater images.
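As a rough sketch of how detection-derived utility scores of this kind can be aggregated (the exact protocol is described in Section 3.1.1; the per-detector AP inputs, mean aggregation, and min-max normalization below are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

def utility_scores(ap_matrix):
    """Aggregate per-detector average precision (AP) values into one
    utility score per image.

    ap_matrix: (n_images, n_detectors) array-like of AP values in [0, 1].
    Returns per-image scores, min-max normalized over the whole set so
    that every score lies in [0, 1].
    """
    ap = np.asarray(ap_matrix, dtype=float)
    mean_ap = ap.mean(axis=1)                  # average across detectors
    lo, hi = mean_ap.min(), mean_ap.max()
    return (mean_ap - lo) / (hi - lo + 1e-12)  # normalize to [0, 1]

# Example: 3 images evaluated by 3 hypothetical detectors
scores = utility_scores([[0.2, 0.3, 0.4],
                         [0.5, 0.6, 0.7],
                         [0.8, 0.9, 1.0]])
```

Averaging over several detectors (the paper draws on detectors such as RetinaNet, FCOS, TOOD, Faster R-CNN, Cascade R-CNN, and YOLOX) reduces the bias any single detector would impose on the utility labels.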
2. Materials and Methods
2.1. Related Works
2.1.1. Underwater Image Dataset
2.1.2. Underwater Image Enhancement
2.1.3. Underwater Image Quality Assessment
2.2. Construction of OD-UIUA Dataset
2.2.1. Raw Underwater Image Collection
2.2.2. Enhanced Image Generation
2.2.3. Utility Score Generation
2.2.4. Dataset Analysis
2.3. Proposed DeepUIUA
2.3.1. Stage 1: Object Region-Oriented Feature Capturing Network
2.3.2. Stage 2: Multi-Scale Quality Assessment Network
3. Results
3.1. Experimental Protocol
3.1.1. Utility Score Generation Details
3.1.2. Training Details of DeepUIUA
3.1.3. Evaluation Metrics
3.2. Performance Comparison with the State-of-the-Art Methods
4. Discussion
4.1. Ablation Study
4.1.1. Ablation of the Proposed Components
4.1.2. Visual Explanation
4.2. Limitations
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Lin, Z.; He, Z.; Jin, C.; Luo, T.; Chen, Y. Joint Luminance-Saliency Prior and Attention for Underwater Image Quality Assessment. Remote Sens. 2024, 16, 3021. [Google Scholar] [CrossRef]
- Li, F.; Li, W.; Zheng, J.; Wang, L.; Xi, Y. Contrastive Feature Disentanglement via Physical Priors for Underwater Image Enhancement. Remote Sens. 2025, 17, 759. [Google Scholar] [CrossRef]
- Saleem, A.; Awad, A.; Paheding, S.; Lucas, E.; Havens, T.C.; Esselman, P.C. Understanding the influence of image enhancement on underwater object detection: A quantitative and qualitative study. Remote Sens. 2025, 17, 185. [Google Scholar] [CrossRef]
- Jaffe, J.S. Computer modeling and the design of optimal underwater imaging systems. IEEE J. Ocean. Eng. 1990, 15, 101–111. [Google Scholar] [CrossRef]
- Li, L.; Li, Y.; Wang, H.; Yue, C.; Gao, P.; Wang, Y.; Feng, X. Side-Scan Sonar Image Generation Under Zero and Few Samples for Underwater Target Detection. Remote Sens. 2024, 16, 4134. [Google Scholar] [CrossRef]
- Hao, Y.; Yuan, Y.; Zhang, H.; Zhang, Z. Underwater Optical Imaging: Methods, Applications and Perspectives. Remote Sens. 2024, 16, 3773. [Google Scholar] [CrossRef]
- Wen, X.; Wang, J.; Cheng, C.; Zhang, F.; Pan, G. Underwater side-scan sonar target detection: YOLOv7 model combined with attention mechanism and scaling factor. Remote Sens. 2024, 16, 2492. [Google Scholar] [CrossRef]
- Esmaeilzehi, A.; Ou, Y.; Ahmad, M.O.; Swamy, M.N.S. DMML: Deep Multi-Prior and Multi-Discriminator Learning for Underwater Image Enhancement. IEEE Trans. Broadcast. 2024, 70, 637–653. [Google Scholar] [CrossRef]
- Qiao, N.; Dong, L.; Sun, C. Adaptive deep learning network with multi-scale and multi-dimensional features for underwater image enhancement. IEEE Trans. Broadcast. 2022, 69, 482–494. [Google Scholar] [CrossRef]
- Song, W.; Wang, Y.; Huang, D.; Liotta, A.; Perra, C. Enhancement of underwater images with statistical model of background light and optimization of transmission map. IEEE Trans. Broadcast. 2020, 66, 153–169. [Google Scholar] [CrossRef]
- Kang, Y.; Jiang, Q.; Li, C.; Ren, W.; Liu, H.; Wang, P. A Perception-Aware Decomposition and Fusion Framework for Underwater Image Enhancement. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 988–1002. [Google Scholar] [CrossRef]
- Jiang, Q.; Kang, Y.; Wang, Z.; Ren, W.; Li, C. Perception-Driven Deep Underwater Image Enhancement Without Paired Supervision. IEEE Trans. Multimed. 2024, 26, 4884–4897. [Google Scholar] [CrossRef]
- Liu, Y.; Jiang, Q.; Wang, X.; Luo, T.; Zhou, J. Underwater Image Enhancement with Cascaded Contrastive Learning. IEEE Trans. Multimed. 2024, 27, 1512–1525. [Google Scholar] [CrossRef]
- Liao, X.; Wei, X.; Zhou, M.; Kwong, S. Full-Reference Image Quality Assessment: Addressing Content Misalignment Issue by Comparing Order Statistics of Deep Features. IEEE Trans. Broadcast. 2024, 70, 305–315. [Google Scholar] [CrossRef]
- Zhou, T.; Tan, S.; Zhou, W.; Luo, Y.; Wang, Y.G.; Yue, G. Adaptive Mixed-Scale Feature Fusion Network for Blind AI-Generated Image Quality Assessment. IEEE Trans. Broadcast. 2024, 70, 833–843. [Google Scholar] [CrossRef]
- Hu, B.; Zhao, T.; Zheng, J.; Zhang, Y.; Li, L.; Li, W.; Gao, X. Blind Image Quality Assessment with Coarse-Grained Perception Construction and Fine-Grained Interaction Learning. IEEE Trans. Broadcast. 2024, 70, 533–544. [Google Scholar] [CrossRef]
- Zhou, M.; Lang, S.; Zhang, T.; Liao, X.; Shang, Z.; Xiang, T.; Fang, B. Attentional Feature Fusion for End-to-End Blind Image Quality Assessment. IEEE Trans. Broadcast. 2023, 69, 144–152. [Google Scholar] [CrossRef]
- Zhou, M.; Wang, H.; Wei, X.; Feng, Y.; Luo, J.; Pu, H.; Zhao, J.; Wang, L.; Chu, Z.; Wang, X.; et al. HDIQA: A Hyper Debiasing Framework for Full Reference Image Quality Assessment. IEEE Trans. Broadcast. 2024, 70, 545–554. [Google Scholar] [CrossRef]
- Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color balance and fusion for underwater image enhancement. IEEE Trans. Image Process. 2017, 27, 379–393. [Google Scholar] [CrossRef]
- Wang, Y.; Song, W.; Fortino, G.; Qi, L.Z.; Zhang, W.; Liotta, A. An experimental-based review of image enhancement and image restoration methods for underwater imaging. IEEE Access 2019, 7, 140233–140251. [Google Scholar] [CrossRef]
- Jiang, Q.; Liu, Z.; Gu, K.; Shao, F.; Zhang, X.; Liu, H.; Lin, W. Single image super-resolution quality assessment: A real-world dataset, subjective studies, and an objective metric. IEEE Trans. Image Process. 2022, 31, 2279–2294. [Google Scholar] [CrossRef] [PubMed]
- Jiang, Q.; Yi, X.; Ouyang, L.; Zhou, J.; Wang, Z. Towards dimension-enriched underwater image quality assessment. IEEE Trans. Circuits Syst. Video Technol. 2024, 35, 1385–1398. [Google Scholar] [CrossRef]
- Jiang, Q.; Gu, Y.; Li, C.; Cong, R.; Shao, F. Underwater Image Enhancement Quality Evaluation: Benchmark Dataset and Objective Metric. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 5959–5974. [Google Scholar] [CrossRef]
- Zheng, Y.; Chen, W.; Lin, R.; Zhao, T.; Le Callet, P. UIF: An objective quality assessment for underwater image enhancement. IEEE Trans. Image Process. 2022, 31, 5456–5468. [Google Scholar] [CrossRef]
- Cheng, J.; Wu, Z.; Wang, S.; Demonceaux, C.; Jiang, Q. Bidirectional collaborative mentoring network for marine organism detection and beyond. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 6595–6608. [Google Scholar] [CrossRef]
- Jiang, Q.; Liu, Z.; Wang, S.; Shao, F.; Lin, W. Toward top-down just noticeable difference estimation of natural images. IEEE Trans. Image Process. 2022, 31, 3697–3712. [Google Scholar] [CrossRef] [PubMed]
- Jiang, Q.; Gu, Y.; Wu, Z.; Li, C.; Xiong, H.; Shao, F.; Wang, Z. Deep Underwater Image Quality Assessment with Explicit Degradation Awareness Embedding. IEEE Trans. Image Process. 2025, 34, 1297–1310. [Google Scholar] [CrossRef] [PubMed]
- Liu, R.; Fan, X.; Zhu, M.; Hou, M.; Luo, Z. Real-World Underwater Enhancement: Challenges, Benchmarks, and Solutions Under Natural Light. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 4861–4875. [Google Scholar] [CrossRef]
- Li, H.; Li, J.; Wang, W. A fusion adversarial underwater image enhancement network with a public test dataset. arXiv 2019, arXiv:1906.06819. [Google Scholar]
- Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An Underwater Image Enhancement Benchmark Dataset and Beyond. IEEE Trans. Image Process. 2020, 29, 4376–4389. [Google Scholar] [CrossRef] [PubMed]
- Islam, M.J.; Xia, Y.; Sattar, J. Fast Underwater Image Enhancement for Improved Visual Perception. IEEE Robot. Autom. Lett. 2020, 5, 3227–3234. [Google Scholar] [CrossRef]
- Liu, Y.; Zhang, B.; Hu, R.; Gu, K.; Zhai, G.; Dong, J. Underwater Image Quality Assessment: Benchmark Database and Objective Method. IEEE Trans. Multimed. 2024, 26, 7734–7747. [Google Scholar] [CrossRef]
- Zhou, J.; Wang, S.; Lin, Z.; Jiang, Q.; Sohel, F. A pixel distribution remapping and multi-prior retinex variational model for underwater image enhancement. IEEE Trans. Multimed. 2024, 26, 7838–7849. [Google Scholar] [CrossRef]
- Jiang, Q.; Mao, Y.; Cong, R.; Ren, W.; Huang, C.; Shao, F. Unsupervised decomposition and correction network for low-light image enhancement. IEEE Trans. Intell. Transp. Syst. 2022, 23, 19440–19455. [Google Scholar] [CrossRef]
- Fu, X.; Cao, X. Underwater image enhancement with global–local networks and compressed-histogram equalization. Signal Process. Image Commun. 2020, 86, 115892. [Google Scholar] [CrossRef]
- Chen, X.; Zhang, P.; Quan, L.; Yi, C.; Lu, C. Underwater Image Enhancement based on Deep Learning and Image Formation Model. arXiv 2021, arXiv:2101.00991. [Google Scholar]
- Naik, A.; Swarnakar, A.; Mittal, K. Shallow-UWnet: Compressed Model for Underwater Image Enhancement (Student Abstract). In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 15853–15854. [Google Scholar]
- Zhang, W.; Zhuang, P.; Sun, H.H.; Li, G.; Kwong, S.; Li, C. Underwater Image Enhancement via Minimal Color Loss and Locally Adaptive Contrast Enhancement. IEEE Trans. Image Process. 2022, 31, 3997–4010. [Google Scholar] [CrossRef]
- Xiao, Z.; Han, Y.; Rahardja, S.; Ma, Y. USLN: A statistically guided lightweight network for underwater image enhancement via dual-statistic white balance and multi-color space stretch. arXiv 2022, arXiv:2209.02221. [Google Scholar]
- Guo, C.; Wu, R.; Jin, X.; Han, L.; Zhang, W.; Chai, Z.; Li, C. Underwater ranker: Learn which is better and how to be better. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 702–709. [Google Scholar]
- Li, K.; Wu, L.; Qi, Q.; Liu, W.; Gao, X.; Zhou, L.; Song, D. Beyond single reference for training: Underwater image enhancement via comparative learning. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 2561–2576. [Google Scholar] [CrossRef]
- Jiang, J.; Ye, T.; Bai, J.; Chen, S.; Chai, W.; Jun, S.; Liu, Y.; Chen, E. Five A+ Network: You Only Need 9K Parameters for Underwater Image Enhancement. arXiv 2023, arXiv:2305.08824. [Google Scholar]
- Zhang, W.; Zhou, L.; Zhuang, P.; Li, G.; Pan, X.; Zhao, W.; Li, C. Underwater image enhancement via weighted wavelet visual perception fusion. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 2469–2483. [Google Scholar] [CrossRef]
- Rao, Y.; Liu, W.; Li, K.; Fan, H.; Wang, S.; Dong, J. Deep color compensation for generalized underwater image enhancement. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 2577–2590. [Google Scholar] [CrossRef]
- Wang, Y.; Guo, J.; He, W.; Gao, H.; Yue, H.; Zhang, Z.; Li, C. Is underwater image enhancement all object detectors need? IEEE J. Ocean. Eng. 2023, 49, 606–621. [Google Scholar] [CrossRef]
- Zhou, J.; Liu, C.; Zhang, D.; He, Z.; Sohel, F.; Jiang, Q. RSUIA: Dynamic No-Reference Underwater Image Assessment via Reinforcement Sequences. IEEE Trans. Multimed. 2025. early access. [Google Scholar] [CrossRef]
- Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071. [Google Scholar] [CrossRef]
- Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2015, 41, 541–551. [Google Scholar] [CrossRef]
- Wang, Y.; Li, N.; Li, Z.; Gu, Z.; Zheng, H.; Zheng, B.; Sun, M. An imaging-inspired no-reference underwater color image quality assessment metric. Comput. Electr. Eng. 2018, 70, 904–913. [Google Scholar] [CrossRef]
- Yang, N.; Zhong, Q.; Li, K.; Cong, R.; Zhao, Y.; Kwong, S. A reference-free underwater image quality assessment metric in frequency domain. Signal Process. Image Commun. 2021, 94, 116218. [Google Scholar] [CrossRef]
- Guo, P.; Liu, H.; Zeng, D.; Xiang, T.; Li, L.; Gu, K. An underwater image quality assessment metric. IEEE Trans. Multimed. 2022, 25, 5093–5106. [Google Scholar] [CrossRef]
- Wang, Z.; Shen, L.; Wang, Z.; Lin, Y.; Jin, Y. Generation-based joint luminance-chrominance learning for underwater image quality assessment. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 1123–1139. [Google Scholar] [CrossRef]
- Liu, Y.; Gu, K.; Cao, J.; Wang, S.; Zhai, G.; Dong, J.; Kwong, S. UIQI: A comprehensive quality evaluation index for underwater images. IEEE Trans. Multimed. 2023, 26, 2560–2573. [Google Scholar] [CrossRef]
- Jin, J.; Jiang, Q.; Wu, Q.; Xu, B.; Cong, R. Underwater Salient Object Detection via Dual-stage Self-paced Learning and Depth Emphasis. IEEE Trans. Circuits Syst. Video Technol. 2024, 35, 2147–2160. [Google Scholar] [CrossRef]
- Liu, C.; Li, H.; Wang, S.; Zhu, M.; Wang, D.; Fan, X.; Wang, Z. A dataset and benchmark of underwater object detection for robot picking. In Proceedings of the 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Shenzhen, China, 5–9 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6. [Google Scholar]
- Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: A simple and strong anchor-free object detector. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 1922–1933. [Google Scholar] [CrossRef] [PubMed]
- Feng, C.; Zhong, Y.; Gao, Y.; Scott, M.R.; Huang, W. TOOD: Task-aligned one-stage object detection. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; IEEE Computer Society: Washington, DC, USA, 2021; pp. 3490–3499. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Cai, Z.; Vasconcelos, N. Cascade r-cnn: Delving into high quality object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6154–6162. [Google Scholar]
- Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Chen, K.; Wang, J.; Pang, J.; Cao, Y.; Xiong, Y.; Li, X.; Sun, S.; Feng, W.; Liu, Z.; Xu, J.; et al. MMDetection: Open mmlab detection toolbox and benchmark. arXiv 2019, arXiv:1906.07155. [Google Scholar]
- Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1733–1740. [Google Scholar]
- Kang, L.; Ye, P.; Li, Y.; Doermann, D. Simultaneous estimation of image quality and distortion via multi-task convolutional neural networks. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 2791–2795. [Google Scholar]
- Bosse, S.; Maniry, D.; Müller, K.R.; Wiegand, T.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2017, 27, 206–219. [Google Scholar] [CrossRef]
- Zhang, W.; Ma, K.; Yan, J.; Deng, D.; Wang, Z. Blind Image Quality Assessment Using a Deep Bilinear Convolutional Neural Network. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 36–47. [Google Scholar] [CrossRef]
- Su, S.; Yan, Q.; Zhu, Y.; Zhang, C.; Ge, X.; Sun, J.; Zhang, Y. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3667–3676. [Google Scholar]
- Zhu, H.; Li, L.; Wu, J.; Dong, W.; Shi, G. MetaIQA: Deep meta-learning for no-reference image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 14143–14152. [Google Scholar]
- Golestaneh, S.A.; Dadsetan, S.; Kitani, K.M. No-reference image quality assessment via transformers, relative ranking, and self-consistency. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 1220–1230. [Google Scholar]
- Pan, Z.; Zhang, H.; Lei, J.; Fang, Y.; Shao, X.; Ling, N.; Kwong, S. DACNN: Blind image quality assessment via a distortion-aware convolutional neural network. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 7518–7531. [Google Scholar] [CrossRef]
- Pan, Z.; Yuan, F.; Lei, J.; Fang, Y.; Shao, X.; Kwong, S. VCRNet: Visual compensation restoration network for no-reference image quality assessment. IEEE Trans. Image Process. 2022, 31, 1613–1627. [Google Scholar] [CrossRef]
- Qin, G.; Hu, R.; Liu, Y.; Zheng, X.; Liu, H.; Li, X.; Zhang, Y. Data-efficient image quality assessment with attention-panel decoder. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 2091–2100. [Google Scholar]
- Sun, W.; Min, X.; Tu, D.; Ma, S.; Zhai, G. Blind quality assessment for in-the-wild images via hierarchical feature fusion and iterative mixed database training. IEEE J. Sel. Top. Signal Process. 2023, 17, 1178–1192. [Google Scholar] [CrossRef]
- Roy, S.; Mitra, S.; Biswas, S.; Soundararajan, R. Test time adaptation for blind image quality assessment. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 16742–16751. [Google Scholar]
- Chen, C.; Mo, J.; Hou, J.; Wu, H.; Liao, L.; Sun, W.; Yan, Q.; Lin, W. Topiq: A top-down approach from semantics to distortions for image quality assessment. IEEE Trans. Image Process. 2024, 33, 2404–2418. [Google Scholar] [CrossRef] [PubMed]
Comparison Method | PLCC | SRCC | KRCC | RMSE |
---|---|---|---|---|
CNN-IQA [64] | 0.4625 | 0.4450 | 0.3052 | 0.1536 |
IQA-CNN++ [65] | 0.4529 | 0.4435 | 0.3037 | 0.1548 |
WaDIQaM-NR [66] | 0.4112 | 0.3729 | 0.2537 | 0.1632 |
DBCNN [67] | 0.4788 | 0.4647 | 0.3210 | 0.1569 |
HyperIQA [68] | 0.5301 | 0.5036 | 0.3517 | 0.1470 |
MetaIQA [69] | 0.5070 | 0.4907 | 0.3423 | 0.1538 |
TRes [70] | 0.3444 | 0.3410 | 0.2314 | 0.1895 |
DACNN [71] | 0.4413 | 0.4117 | 0.2817 | 0.1559 |
VCRNet [72] | 0.4580 | 0.4279 | 0.2947 | 0.1557 |
DEIQT [73] | 0.5287 | 0.4912 | 0.3416 | 0.1549 |
StairIQA [74] | 0.5307 | 0.4971 | 0.3486 | 0.1474 |
TTA-IQA [75] | 0.3617 | 0.3270 | 0.2220 | 0.1803 |
TOPIQ [76] | 0.5179 | 0.4891 | 0.3407 | 0.1528 |
ATUIQP [32] | 0.2512 | 0.2214 | 0.1476 | 0.1737 |
Ours | 0.6004 | 0.5595 | 0.3963 | 0.1386 |
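The agreement measures reported above are the standard IQA metrics: Pearson linear correlation (PLCC), Spearman rank-order correlation (SRCC), Kendall rank correlation (KRCC), and root mean square error (RMSE). A minimal stdlib-only sketch of how they are computed between predicted scores and ground-truth utility labels (the function name is illustrative):

```python
import math

def _ranks(xs):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def _pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def iqa_metrics(pred, mos):
    """PLCC, SRCC, KRCC (tau-a, assuming no ties), and RMSE."""
    n = len(pred)
    plcc = _pearson(pred, mos)
    srcc = _pearson(_ranks(pred), _ranks(mos))  # Spearman = Pearson on ranks
    s = sum(
        (1 if (pred[i] - pred[j]) * (mos[i] - mos[j]) > 0 else
         -1 if (pred[i] - pred[j]) * (mos[i] - mos[j]) < 0 else 0)
        for i in range(n) for j in range(i + 1, n)
    )
    krcc = s / (n * (n - 1) / 2)  # concordant minus discordant pairs
    rmse = math.sqrt(sum((p - m) ** 2 for p, m in zip(pred, mos)) / n)
    return {"PLCC": plcc, "SRCC": srcc, "KRCC": krcc, "RMSE": rmse}
```

Higher PLCC/SRCC/KRCC and lower RMSE indicate better agreement with the ground-truth utility scores; in practice a library implementation (e.g., `scipy.stats`) would be used, which also handles ties in Kendall's tau.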
Comparison Method | Params (MB) | FLOPs (G) | Running Time (s) |
---|---|---|---|
CNN-IQA [64] | 0.35 | 0.73 | 0.011 |
IQA-CNN++ [65] | 0.05 | 0.08 | 0.007 |
WaDIQaM-NR [66] | 3.28 | 4.98 | 0.038 |
DBCNN [67] | 16.50 | 15.31 | 0.116 |
HyperIQA [68] | 4.34 | 27.38 | 0.015 |
MetaIQA [69] | 1.83 | 13.24 | 0.034 |
TRes [70] | 8.39 | 34.46 | 0.053 |
DACNN [71] | 0.33 | 2.90 | 0.143 |
VCRNet [72] | 10.27 | 11.41 | 0.043 |
DEIQT [73] | 4.26 | 22.77 | 0.019 |
StairIQA [74] | 5.11 | 30.49 | 0.015 |
TTA-IQA [75] | 8.30 | 34.40 | 0.043 |
ATUIQP [32] | 4.87 | 28.10 | 0.019 |
Ours | 25.61 | 4.13 | 0.106 |
Model | BL | PT | MSQP | PLCC | SRCC | KRCC | RMSE |
---|---|---|---|---|---|---|---|
AM1 | ✓ | | | 0.5202 | 0.4951 | 0.3459 | 0.1488 |
AM2 | ✓ | ✓ | | 0.5663 | 0.5263 | 0.3701 | 0.1443 |
AM3 | ✓ | | ✓ | 0.5395 | 0.5110 | 0.3570 | 0.1481 |
AM4 | ✓ | ✓ | ✓ | 0.6004 | 0.5595 | 0.3963 | 0.1386 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, J.; Liu, Y.; Jiang, Q. Delving into Underwater Image Utility: Benchmark Dataset and Prediction Model. Remote Sens. 2025, 17, 1906. https://doi.org/10.3390/rs17111906