Article

UCA-Net: A Transformer-Based U-Shaped Underwater Enhancement Network with a Compound Attention Mechanism

1 State Key Laboratory of Advanced Technology for Materials Synthesis and Processing, Wuhan University of Technology, Wuhan 430070, China
2 Hubei Key Laboratory of Broadband Wireless Communication and Sensor Networks, Wuhan University of Technology, Wuhan 430070, China
3 School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
4 National Deep Sea Center, Qingdao 266237, China
* Author to whom correspondence should be addressed.
Electronics 2026, 15(2), 318; https://doi.org/10.3390/electronics15020318
Submission received: 27 November 2025 / Revised: 7 January 2026 / Accepted: 8 January 2026 / Published: 11 January 2026

Abstract

Images captured underwater frequently suffer from color casts, blurring, and distortion, which are mainly attributable to the unique optical characteristics of water. Conventional underwater image enhancement (UIE) methods rooted in physical models are available, but their effectiveness is often constrained, particularly under challenging water and illumination conditions. More recently, deep learning has become a leading paradigm for UIE, recognized for its superior performance and operational efficiency. This paper proposes UCA-Net, a lightweight CNN-Transformer hybrid network. It incorporates multiple attention mechanisms and uses composite attention to effectively enhance textures, reduce blur, and correct color. A novel adaptive sparse self-attention module is introduced to jointly restore global color consistency and fine local details. The model employs a U-shaped encoder–decoder architecture with three-stage up- and down-sampling, facilitating multi-scale feature extraction and global context fusion for high-quality enhancement. Experimental results on multiple public datasets demonstrate UCA-Net’s superior performance, achieving a PSNR of 24.75 dB and an SSIM of 0.89 on the UIEB dataset, while maintaining an extremely low computational cost of only 1.44M parameters. Its effectiveness is further validated by improvements on various downstream vision tasks. UCA-Net achieves an optimal balance between performance and efficiency, offering a robust and practical solution for underwater vision applications.
Keywords: composite attention; convolutional network; underwater image enhancement; transformer
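
The architecture outlined in the abstract (a three-stage U-shaped encoder–decoder whose blocks combine channel attention with self-attention) can be pictured with a minimal sketch. The module names, channel widths, and attention formulation below are illustrative assumptions, not the authors' released implementation of UCA-Net.

```python
# Hypothetical sketch of a U-shaped CNN-Transformer enhancement network in the
# spirit of the abstract's description. All design details are assumptions.
import torch
import torch.nn as nn

class CompoundAttentionBlock(nn.Module):
    """Assumed compound attention: channel gating followed by spatial self-attention."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.channel_gate = nn.Sequential(          # squeeze-and-excite style channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.GELU())

    def forward(self, x):
        x = x * self.channel_gate(x)                        # reweight channels
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))    # (B, HW, C) spatial tokens
        attn_out, _ = self.attn(tokens, tokens, tokens)     # global self-attention
        x = x + attn_out.transpose(1, 2).reshape(b, c, h, w)
        return x + self.ffn(x)

class UShapedEnhancer(nn.Module):
    """Three-stage U-shaped encoder-decoder with skip connections (assumed widths)."""
    def __init__(self, widths=(16, 32, 64, 128)):
        super().__init__()
        self.stem = nn.Conv2d(3, widths[0], 3, padding=1)
        self.enc = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(widths[i], widths[i + 1], 3, stride=2, padding=1),
                           CompoundAttentionBlock(widths[i + 1]))
             for i in range(len(widths) - 1)])
        self.dec = nn.ModuleList(
            [nn.Sequential(nn.ConvTranspose2d(widths[i + 1], widths[i], 2, stride=2),
                           CompoundAttentionBlock(widths[i]))
             for i in reversed(range(len(widths) - 1))])
        self.head = nn.Conv2d(widths[0], 3, 3, padding=1)

    def forward(self, x):
        # Input height and width should be multiples of 8 in this sketch
        # so that skip connections align after three stride-2 stages.
        feats = [self.stem(x)]
        for stage in self.enc:
            feats.append(stage(feats[-1]))                  # three downsampling stages
        y = feats[-1]
        for stage, skip in zip(self.dec, reversed(feats[:-1])):
            y = stage(y) + skip                             # upsample and fuse skip features
        return torch.clamp(x + self.head(y), 0.0, 1.0)      # residual enhancement of the input

if __name__ == "__main__":
    model = UShapedEnhancer()
    out = model(torch.rand(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```

The residual output head reflects a common UIE design choice (predicting a correction added to the degraded input rather than the clean image directly); whether UCA-Net does this is not stated in the abstract.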

Share and Cite

MDPI and ACS Style

Yu, C.; Zhou, J.; Wang, L.; Liu, G.; Ding, Z. UCA-Net: A Transformer-Based U-Shaped Underwater Enhancement Network with a Compound Attention Mechanism. Electronics 2026, 15, 318. https://doi.org/10.3390/electronics15020318

AMA Style

Yu C, Zhou J, Wang L, Liu G, Ding Z. UCA-Net: A Transformer-Based U-Shaped Underwater Enhancement Network with a Compound Attention Mechanism. Electronics. 2026; 15(2):318. https://doi.org/10.3390/electronics15020318

Chicago/Turabian Style

Yu, Cheng, Jian Zhou, Lin Wang, Guizhen Liu, and Zhongjun Ding. 2026. "UCA-Net: A Transformer-Based U-Shaped Underwater Enhancement Network with a Compound Attention Mechanism" Electronics 15, no. 2: 318. https://doi.org/10.3390/electronics15020318

APA Style

Yu, C., Zhou, J., Wang, L., Liu, G., & Ding, Z. (2026). UCA-Net: A Transformer-Based U-Shaped Underwater Enhancement Network with a Compound Attention Mechanism. Electronics, 15(2), 318. https://doi.org/10.3390/electronics15020318

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
