Article

Progressive Color Correction and Vision-Inspired Adaptive Framework for Underwater Image Enhancement

by Zhenhua Li 1,2, Wenjing Liu 1,2,*, Ji Wang 1,2 and Yuqiang Yang 1
1 School of Electronic and Information Engineering, Guangdong Ocean University, Zhanjiang 524088, China
2 Guangdong Province Smart Ocean Sensor Network and Equipment Engineering Technology Research Center, Guangdong Ocean University, Zhanjiang 524088, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(9), 1820; https://doi.org/10.3390/jmse13091820
Submission received: 22 August 2025 / Revised: 13 September 2025 / Accepted: 16 September 2025 / Published: 19 September 2025
(This article belongs to the Section Ocean Engineering)

Abstract

Underwater images frequently exhibit color distortion, detail blurring, and contrast degradation due to absorption and scattering by the underwater medium. This study proposes a progressive color correction strategy integrated with a vision-inspired image enhancement framework to address these issues. Specifically, the progressive color correction process includes adaptive color quantization-based global color correction, followed by guided filter-based local color refinement, aiming to restore accurate colors while enhancing visual perception. Within the vision-inspired enhancement framework, the color-adjusted image is first decomposed into a base layer and a detail layer, corresponding to low- and high-frequency visual information, respectively. Subsequently, detail enhancement and noise suppression are applied in the detail pathway, while global brightness correction is performed in the structural pathway. Finally, results from both pathways are fused to yield the enhanced underwater image. Extensive experiments on four datasets verify that the proposed method effectively handles the aforementioned underwater enhancement challenges and significantly outperforms state-of-the-art techniques.

1. Introduction

With the growing exploration of marine resources, underwater images have become a critical medium for transmitting marine information, playing a vital role in various fields such as visual navigation of underwater robots, seabed topographic mapping, marine engineering inspection, and aquaculture monitoring [1]. However, due to the influence of light attenuation and low-light conditions, underwater images often suffer from quality degradation issues, including color distortion, blurred details, and low contrast. Enhancing image quality to achieve visually clear results has thus garnered significant attention.
Over the past two decades, a large number of studies on improving the visibility of underwater images have been proposed, encompassing traditional methods (such as model-free image enhancement and model-based image restoration) as well as data-driven techniques. Among these, model-free image enhancement methods include color equalization-based [2], color balance-based [3,4,5], and color channel compensation-based approaches [6,7,8]. By contrast, model-based methods typically utilize underwater imaging models and prior knowledge to obtain high-quality underwater images by inverting the degradation process. Variational models [9,10,11,12], dark channel prior [13,14,15], red channel prior [16,17], light attenuation prior [18,19,20,21], and Retinex theory [22,23] are widely applied in model-based underwater image restoration to yield visually pleasing results. Although such methods exhibit a certain level of image enhancement performance, their stability is comparatively limited when applied to complex underwater scenes, thereby negatively impacting the restoration results. Notably, data-driven methods restore and enhance degraded underwater images by learning the nonlinear relationships between ground-truth images and their degraded counterparts. These approaches, which include Convolutional Neural Networks (CNN) [24,25], Generative Adversarial Networks (GAN) [26,27,28,29,30], Transformer Networks [31,32], and contemporary mainstream Mamba architectures [33,34], have reshaped the field of underwater image restoration. Despite their remarkable performance, the requirement for large-scale training datasets, the complexity of model architecture design, and generalization challenges may limit their applicability in practical scenarios.
While existing model-free underwater image enhancement approaches have demonstrated promising performance, they still suffer from limitations such as oversaturation, undersaturation, and uneven brightness. In this paper, we propose a novel model-free underwater image enhancement method that integrates a progressive color correction module with a vision-inspired image enhancement framework. As illustrated in Figure 1, the progressive color correction module processes degraded underwater images via global color mapping and local color adjustment to achieve effective color recovery. Within the visual perception-driven enhancement module, the color-corrected image is first decomposed into a base layer (encoding low-frequency information) and a detail layer (encoding high-frequency information). Subsequently, the base layer undergoes non-uniform illumination correction using an adaptive gain function; meanwhile, the detail layer is adaptively sharpened via a local energy-weighted filter to accentuate fine-grained structures. Finally, these layers are fused to yield the restored image.
The key ideas and contributions of this work are summarized as follows:
(1)
A progressive underwater image color correction algorithm is proposed to enhance color balance while systematically mitigating the risk of overcompensation.
(2)
A vision-inspired underwater image enhancement algorithm is designed to uncover latent information within the image while preserving the naturalness of the scene and fine-grained details.
(3)
The proposed method is extensively evaluated on multiple datasets and benchmarked against state-of-the-art methods. It is assessed in terms of quantitative metrics, qualitative results, and computational efficiency.
The remainder of this paper is structured as follows: Section 2 reviews the related studies; Section 3 elaborates on the proposed methodology; Section 4 presents the experimental results and corresponding evaluations; Section 5 provides concluding remarks.

2. Related Work

Approaches to improving underwater image quality can be broadly classified into three categories: model-driven image restoration methods, model-free image enhancement methods, and data-driven methods.
Model-driven image restoration methods typically utilize hand-crafted priors to identify parameters for underwater optical imaging, followed by model inversion to recover a clear image. Such priors include the underwater dark channel prior [14], haze line prior [35], light scattering prior [36], and minimal color loss [4,37], etc. For example, in Sea-Thru [38], the traditional imaging model is modified to account for dependencies between optical water types and illumination conditions. Galdran et al. [39] constructed an image restoration model by leveraging the characteristic of rapid attenuation of red light in underwater environments. In [13], a generalized dark channel prior was utilized to estimate transmission maps and mitigate color casts. This method adaptively adjusts kernel sizes based on image characteristics, thereby effectively reducing artifacts in regions affected by bright objects or artificial light interference. Similarly, Liang et al. proposed a generalized underwater dark channel prior [14], which integrates multiple priors for transmission map estimation. Liu et al. [40] proposed the rank-one prior (ROP) to mathematically characterize the superimposition properties of clear images, thus facilitating the restoration of degraded images under various conditions. Building upon the haze imaging model, Liang et al. [15] proposed a color decomposition and contrast restoration method that integrates color balance, detail enhancement, and contrast improvement procedures to enhance underwater image quality. Although this method is effective in most underwater scenes, its brightness improvement is generally limited for low-light underwater images, as the haze imaging model is physically invalid under such conditions. Variational strategies, as specialized techniques in image restoration, are often integrated with underwater imaging models or priors to enhance the perceptual quality of underwater images. For instance, leveraging the complete underwater image formation model, Xie et al. [41] developed a red-channel sparse prior-guided variational framework, which enables simultaneous dehazing and deblurring of underwater images. In HLRP [10], hyper-Laplacian reflectance priors, established with an $\ell_{1/2}$-norm penalty on the first-order and second-order gradients of the reflectance, are used to regularize the Retinex variational model. This model separately yet simultaneously estimates reflectance and illumination, which are then used to enhance underwater images. By accounting for underwater noise and texture priors, Zhou et al. [9] also introduced a Retinex variational model to effectively decouple illumination and reflectance components in underwater imagery. Despite notable advancements in this field, the aforementioned methods do not comprehensively analyze the factors that influence imaging quality.
Model-free image enhancement methods typically leverage existing image enhancement techniques to enhance the visual quality of underwater images by adjusting pixel values of degraded images. A variety of effective enhancement techniques, including Retinex-based methods [22,23], histogram stretching methods [2,42], color compensation methods [7,43], and multiscale fusion methods [44], are widely adopted. For instance, Zhang et al. [42] proposed an effective underwater image color correction method by measuring the proportional relationship between the dominant color channel and other lower color channels. Ancuti et al. [5] performed brightness correction and detail sharpening on the white-balanced image, respectively, and employed a multi-weight fusion strategy to derive the restored underwater image. Furthermore, Zhang et al. [45] proposed an approach that employs sub-interval linear transformation-based color correction and bi-interval histogram-based contrast equalization, effectively mitigating color distortion, enhancing image visibility, and emphasizing fine details. In the Color Balance and Lightness Adjustment (CBLA) method [3], both the channel-adaptive color restoration module and the information-adaptive contrast enhancement module are strategically designed for color correction and contrast enhancement, respectively. Owing to their favorable performance in image enhancement, multi-scale texture enhancement and fusion techniques have also been widely applied. For instance, both ref. [46] and Zhang et al. [47] employed a weighted wavelet-based visual perception fusion strategy to integrate high- and low-frequency components across multiple scales, thereby enhancing the quality of underwater images. In the Structural Patch Decomposition Fusion (SPDF) method [44], the complementary preprocessed images are first decomposed into mean intensity, contrast, and structure components via structural patch decomposition. Subsequently, tailored fusion strategies are employed to integrate these components for reconstructing the restored underwater image. In the High-Frequency Modulation (HFM) method [6], both pixel intensity-based and global gradient-based weighting mechanisms are designed to fuse underwater images with improved visibility and contrast.
Data-driven methods have attracted considerable attention owing to their superiority in learning nonlinear mappings from large-scale training data for high-quality image reconstruction. For instance, Li et al. [25] proposed a convolutional neural network, namely WaterNet, which directly reconstructs clear images by leveraging an underwater scene prior. Sun et al. [27] introduced an integrated learning framework termed UMGAN, embedding a noise reduction network into the GAN to recover underwater images across diverse scenes. In PUGAN [29], a well-designed GAN framework comprising two subnetworks—the parameter estimation and interaction enhancement subnetworks—is employed to generate visually pleasing images. In [48], a dual-Unet network is incorporated into the diffusion model to simultaneously enable distribution transformation of degraded images and elimination of latent noise. To enhance attention toward different color channels and spatial regions, thereby better eliminating color artifacts and color casts, the U-shaped Transformer network (UTransNet) [31] integrates both channel-level and spatial-level transformer modules. Furthermore, Wang et al. [49] proposed a two-stage domain adaptation network that incorporates a triple alignment network and a ranking-based image quality assessment network to simultaneously bridge the inter-domain and intra-domain gaps, thus rendering the generated images more realistic. In PixMamba [50], both the patch-level Mamba Net and pixel-level Mamba Net are designed to efficiently model global dependencies in underwater scenes. To capitalize on the distinctions between learning-based and physics-based compensation methods, Ouyang et al. [51] proposed a perceptual underwater enhancement technique that adapts the performance of data-driven semantic transfer and scene-relevant reconstruction. Although these deep learning-based methods offer notable advantages in handling complex and dynamic underwater scenes, their heavy reliance on large datasets may limit their robustness and practicality.

3. Methodology

In the following sections, we first present the progressive color correction strategy for underwater images, which comprises global color pre-mapping and local color refinement, followed by a detailed explanation of the vision-inspired image enhancement framework with adaptive mechanisms.

3.1. Progressive Color Correction

Due to the selective scattering and absorption of light, different wavelengths of light undergo varying degrees of attenuation, resulting in underwater images that predominantly display blue or green tones. This degradation in visual quality, induced by the underwater environment and lighting conditions, greatly obscures critical information within the scene.

3.1.1. Global Color Pre-Mapping

Extensive research has demonstrated that the three-channel histograms of natural images closely conform to a Rayleigh distribution with a high degree of inter-channel overlap [9]. Leveraging this characteristic, this study first fits each color channel histogram of underwater images using the Rayleigh function, then applies pixel remapping to minimize disparities in their color distributions.
For a given underwater image I, the probability distribution of the histogram of each channel is first calculated as follows:
$\mathrm{pdf}_c(i) = \dfrac{\mathrm{hist}_c(i)}{\sum_{j=0}^{255} \mathrm{hist}_c(j)}, \quad c \in \{R, G, B\},$  (1)
where $\mathrm{pdf}_c(\cdot)$ represents the probability density function of channel $c$, and $\mathrm{hist}_c(i)$ denotes the histogram function that counts the occurrences of intensity value $i$ ($i \in [0, 255]$) within color channel $c$.
A least squares-based parameter estimation strategy is then employed to estimate the mean and variance of the Rayleigh distribution being fitted. Given the probability density function $\mathrm{pdf}_c(\cdot)$, the parameters of the Rayleigh distribution can be expressed as follows:
$\{\mu_c, s_c\} = \arg\min_{\mu_c, s_c} \sum_{i} \left[\mathrm{pdf}_c(i) - f(i; \mu_c, s_c)\right]^2,$  (2)
$f(i; \mu_c, s_c) = \dfrac{i - \mu_c}{s_c^2} \exp\!\left(-\dfrac{(i - \mu_c)^2}{2 s_c^2}\right),$  (3)
where $\mu_c$ and $s_c$ are the mean and variance of the Rayleigh distribution $f(\cdot)$, respectively, which govern the Rayleigh probability density distribution. Once the color channel-related parameters $\mu_c$ and $s_c$ are determined, the distribution differences among the color channels become apparent.
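For illustration, Equations (1)–(3) can be prototyped with SciPy's least-squares curve fitting. This is a minimal sketch rather than a prescribed implementation; the function names, the initial guess, and the clipping of intensities below $\mu_c$ are implementation choices made only for this sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def rayleigh_pdf(i, mu, s):
    # Rayleigh-type density of Eq. (3); intensities below mu are clipped to
    # zero here purely for numerical stability (an implementation choice).
    z = np.maximum(i - mu, 0.0)
    return (z / s**2) * np.exp(-z**2 / (2.0 * s**2))

def fit_channel(channel_uint8):
    """Estimate (mu_c, s_c) by least squares on the channel histogram, Eqs. (1)-(2)."""
    hist, _ = np.histogram(channel_uint8, bins=256, range=(0, 256))
    pdf = hist / max(hist.sum(), 1)              # Eq. (1): normalized histogram
    intensities = np.arange(256, dtype=np.float64)
    p0 = (float(np.argmax(pdf)), max(float(channel_uint8.std()), 1.0))
    (mu, s), _ = curve_fit(rayleigh_pdf, intensities, pdf, p0=p0, maxfev=5000)
    return mu, s
```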
To mitigate the distribution differences among the color channels in underwater scenes, drawing on the gray world algorithm [5], we first use the following formula to determine the dominant color channel:
$D = \max(\mu_R, \mu_G, \mu_B),$  (4)
where $\max(\cdot)$ and $D$ denote the maximum operation and the dominant mean, respectively. Subsequently, we utilize the dominant value to perform adaptive color quantization, which is defined as follows:
$I_p^c(x, y) = \beta \times \dfrac{\mu_c}{D} \times I^c(x, y) \times 255,$  (5)
$\beta = \left(\dfrac{s_c}{\max(s_R, s_G, s_B)}\right)^{\alpha},$  (6)
where $I_p^c(x, y)$ denotes the quantized pixel intensity at location $(x, y)$ in color channel $c$. The gain parameter $\alpha$ adjusts the post-quantization value range and is set to 0.05 in experiments to optimally balance visual perception and quantitative metrics. To enhance the naturalness of image intensity, an additional equalization is applied to $I_p^c$.
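A compact sketch of the pre-mapping in Equations (4)–(6) is given below. It assumes an RGB image normalized to [0, 1] before mapping and uses plain histogram equalization for the additional equalization step, since the exact variant is not specified; these choices are illustrative only.

```python
import numpy as np
import cv2

def global_color_premap(img_rgb_uint8, params, alpha=0.05):
    """params maps 'R', 'G', 'B' to the fitted (mu_c, s_c) of each channel."""
    img = img_rgb_uint8.astype(np.float64) / 255.0
    mus = {c: params[c][0] for c in 'RGB'}
    ss = {c: params[c][1] for c in 'RGB'}
    D = max(mus.values())                        # Eq. (4): dominant mean
    s_max = max(ss.values())
    out = np.empty_like(img_rgb_uint8)
    for idx, c in enumerate('RGB'):
        beta = (ss[c] / s_max) ** alpha          # Eq. (6)
        mapped = beta * (mus[c] / D) * img[..., idx] * 255.0   # Eq. (5)
        mapped = np.clip(mapped, 0, 255).astype(np.uint8)
        # Additional equalization; plain histogram equalization used here.
        out[..., idx] = cv2.equalizeHist(mapped)
    return out
```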

3.1.2. Local Color Refining

Due to selective absorption in underwater environments, the red channel typically undergoes the most significant attenuation among color channels. Although the proposed global color pre-mapping strategy delivers a preliminary correction for color cast, local red artifacts may emerge due to overcompensation. Therefore, further local color refinement is applied to restore color-accurate images with enhanced visual perception. Based on the strong correlation between the red and green channels [7], we first propose a guided filter-based local color correction approach for the red channel, which is mathematically formulated as follows:
$I_{\mathrm{LC}}^r(i) = \sum_{j \in \omega_i} W_{ij}(I_p^g) \cdot I_p^r(j),$  (7)
$W_{ij}(I_p^g) = \dfrac{1}{|\omega|^2} \sum_{k: (i, j) \in \omega_k} \left(1 + \dfrac{\left(I_p^g(i) - \mu_k\right)\left(I_p^g(j) - \mu_k\right)}{\sigma_k^2 + \epsilon}\right),$  (8)
where $\omega_k$ is a local window centered at pixel $k$, $|\omega|$ is the number of pixels in the window, $\mu_k$ and $\sigma_k^2$ are the mean and variance of $I_p^g$ within $\omega_k$, and $\epsilon$ is a regularization parameter that is set as $\epsilon = 0.0001$.
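Equations (7) and (8) correspond to the classical guided-filter formulation with the green channel acting as the guide for the red channel. A minimal box-filter sketch is shown below; the window radius is an assumed value, while eps matches the stated $\epsilon = 0.0001$.

```python
import numpy as np
import cv2

def guided_filter(guide, src, radius=16, eps=1e-4):
    """Box-filter guided filter; guide and src are float images in [0, 1]."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean_g = cv2.blur(guide, ksize)
    mean_s = cv2.blur(src, ksize)
    var_g = cv2.blur(guide * guide, ksize) - mean_g * mean_g
    cov_gs = cv2.blur(guide * src, ksize) - mean_g * mean_s
    a = cov_gs / (var_g + eps)                   # per-window linear coefficients
    b = mean_s - a * mean_g
    return cv2.blur(a, ksize) * guide + cv2.blur(b, ksize)

def refine_red_channel(img_rgb_float):
    """Smooth the red channel with the green channel as the guide (Eqs. (7)-(8))."""
    red = np.ascontiguousarray(img_rgb_float[..., 0])
    green = np.ascontiguousarray(img_rgb_float[..., 1])
    out = img_rgb_float.copy()
    out[..., 0] = np.clip(guided_filter(green, red), 0.0, 1.0)
    return out
```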
The processed red channel data are then fused with the blue and green channels derived from global color remapping to generate the color-corrected image. In addition, Figure 2 compares our progressive color correction method against other state-of-the-art methods. For the first test image, the results obtained by Gray-Edge [52], WB_sRGB [53], 3C [7], NUCE [54], and Shades-of-Gray [55] contain limited color information. For the second test image, both Gray-Edge [52] and Shades-of-Gray [55] exhibit significant overcompensation artifacts. In contrast, our method generates corrected images that are free of color distortion and dominant color cast, while simultaneously preserving all critical image details.

3.2. Vision-Inspired Adaptive Image Enhancement

The proposed progressive color correction process effectively eliminates color casts; however, it does not generate an enhanced image with improved visibility. Therefore, this study proposes a pathway separation and adaptive enhancement approach to tackle the issues of contrast degradation and detail blurring in underwater scenes.

3.2.1. Pathway Separation

Underwater images typically contain significant noise, and directly enhancing their brightness or contrast may amplify such noise, thereby complicating noise suppression and removal. Therefore, to improve their visual quality, it is critical to suppress or eliminate such noise during the image enhancement process.
The input underwater image $I_p^c$ can be decomposed into two distinct components: the base layer $I_{\mathrm{base}}^c$ and the detail layer $I_{\mathrm{detail}}^c$, i.e.,
$I_p^c(x, y) = I_{\mathrm{base}}^c(x, y) + I_{\mathrm{detail}}^c(x, y).$  (9)
Since structural information typically resides in the base layer, while fine details and noise reside in the detail layer, we draw inspiration from the visual information processing mechanisms of biological vision [56]. Specifically, we first adopt a Gaussian curvature filter-based image decomposition approach to decompose $I_p^c$ into the two layers, and then achieve luminance adaptation and detail-preserving contrast enhancement by processing the two layers separately.
The Gaussian curvature filter model can be expressed as the following total variation-based framework:
$\min_{I_{\mathrm{base}}^c} \sum_{(x, y)} \left[\left(I_{\mathrm{base}}^c(x, y) - I_p^c(x, y)\right)^2 + \lambda_c \left|G\!\left(I_{\mathrm{base}}^c(x, y)\right)\right|\right],$  (10)
where $\lambda_c$ denotes the scalar regularization parameter. This objective function includes two parts: the first is a data-fitting term that preserves meaningful scene structures; the second is the Gaussian curvature regularization term, defined as $G(I) = \dfrac{I_{xx} I_{yy} - I_{xy}^2}{\left(1 + I_x^2 + I_y^2\right)^2}$. Minimizing this term is equivalent to rendering the image as developable as possible.
The base layer $I_{\mathrm{base}}^c$ is first obtained by implicitly minimizing Equation (10) using Gaussian curvature filtering [57]. This method searches for the tangent plane closest to the current pixel among those formed by all neighboring pixels, thereby iteratively adjusting pixel values. The detail layer $I_{\mathrm{detail}}^c$ is then computed as follows:
$I_{\mathrm{detail}}^c(x, y) = I_p^c(x, y) - I_{\mathrm{base}}^c(x, y).$  (11)
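To make the decomposition of Equations (9)–(11) concrete, the sketch below splits the color-corrected image into base and detail layers. Note that the actual method minimizes Equation (10) with Gaussian curvature filtering [57]; the bilateral filter here is only a stand-in edge-preserving smoother used for illustration.

```python
import numpy as np
import cv2

def decompose_base_detail(img_rgb_float):
    """Return (base, detail) layers such that img = base + detail (Eqs. (9) and (11))."""
    img32 = img_rgb_float.astype(np.float32)
    # Stand-in edge-preserving smoother; the actual method iterates a
    # Gaussian curvature filter to minimize Eq. (10).
    base = cv2.bilateralFilter(img32, 9, 0.1, 5)
    detail = img32 - base                        # Eq. (11)
    return base, detail
```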

3.2.2. Global Brightness Adjustment

To tackle the problem of uneven illumination induced by lighting conditions, we propose to estimate the illumination field of underwater images and conduct illumination adjustment for brightness correction. Given a base-layer image $I_{\mathrm{base}}$, an initial illumination $\hat{T}$ can be obtained by taking the minimum value across the RGB channels as the illumination value for each pixel, which is expressed as follows:
$\hat{T}(x, y) = \min_{c \in \{R, G, B\}} I_{\mathrm{base}}^c(x, y),$  (12)
where $I_{\mathrm{base}}^c$ denotes the $c$-th color channel of the base-layer image, and $(x, y)$ is the pixel position. To preserve salient structures while eliminating redundant texture details, the following objective function is introduced to derive the desired illumination map $T$:
$\min_{T} \sum_{(x, y)} \left[\left(T(x, y) - \hat{T}(x, y)\right)^2 + \tau \sum_{d \in \{h, v\}} \dfrac{W_d(x, y) \left(\partial_d T(x, y)\right)^2}{\left|\partial_d \hat{T}(x, y)\right| + \zeta}\right],$  (13)
where $\tau$ is the balancing parameter, and $W_d$ is the weight matrix defined as follows:
$W_d(x, y) = \dfrac{1}{\left|\sum_{k \in \Omega(x, y)} \partial_d \hat{T}(k)\right| + \zeta}.$  (14)
Here, $|\cdot|$ denotes the absolute value operator, $\Omega(x, y)$ is the local window centered at pixel $(x, y)$, and $\zeta$ is a small constant to prevent division by zero. $\partial_d$ represents the first-order derivative operator, which includes $\partial_h$ (horizontal) and $\partial_v$ (vertical).
Since the objective function (Equation (13)) involves only quadratic terms, it can be rewritten in matrix form as follows: $(\mathbf{t} - \hat{\mathbf{t}})^{\top}(\mathbf{t} - \hat{\mathbf{t}}) + \tau \left(\mathbf{t}^{\top} D_h^{\top} U_h D_h \mathbf{t} + \mathbf{t}^{\top} D_v^{\top} U_v D_v \mathbf{t}\right)$. Here, $\mathbf{t}$ and $\hat{\mathbf{t}}$ represent the column vectors formed by concatenating all elements of $T$ and $\hat{T}$, respectively. $U_h$ and $U_v$ denote diagonal matrices formed from $W_h / (|\partial_h \hat{T}| + \zeta)$ and $W_v / (|\partial_v \hat{T}| + \zeta)$, respectively; $D_h$ and $D_v$ are Toeplitz matrices corresponding to the discrete gradient operators with forward differences. Thus, the vector $\mathbf{t}$ that minimizes Equation (13) can be uniquely determined by solving the linear system:
$\mathbf{t} = (\tau M + \mathbf{I})^{-1} \hat{\mathbf{t}},$  (15)
where $\mathbf{I}$ denotes the identity matrix, and $M = D_h^{\top} U_h D_h + D_v^{\top} U_v D_v$ is a five-point positive-definite Laplacian matrix. Once $\mathbf{t}$ is obtained, the refined illumination $T$ can be recovered via inverse vectorization of $\mathbf{t}$.
The resulting illumination indicates which regions of the image will be lightened or darkened. Specifically, areas with insufficient luminous intensity require incremental adjustment. To this end, we formulate the following pixel-wise brightness correction factor:
$\gamma(x, y) = 2^{\frac{0.5 - T(x, y)}{0.5}}.$  (16)
As is evident from this formulation, the coefficient adaptively adjusts based on the illumination values; specifically, when the luminous intensity is less than 0.5, it results in a $\gamma$ greater than 1. Finally, the brightness-adjusted base layer $O_{\mathrm{base}}^c$ is defined as follows:
$O_{\mathrm{base}}^c(x, y) = \gamma(x, y) \cdot I_{\mathrm{base}}^c(x, y).$  (17)
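The brightness-correction pathway of Equations (12)–(17) can be sketched as follows: the initial illumination is refined by solving the sparse linear system of Equation (15) with SciPy, and the base layer is then scaled by the resulting gain. The values of tau, zeta, and the window size, as well as the simplified boundary handling of the difference operators, are assumptions of this sketch; for large images an iterative solver would be preferable to a direct sparse solve.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve
from scipy.ndimage import uniform_filter

def refine_illumination(base_rgb, tau=0.15, zeta=1e-3, win=5):
    """Eqs. (12)-(15) with simplified boundary handling."""
    H, W = base_rgb.shape[:2]
    t_hat = base_rgb.min(axis=2)                              # Eq. (12)
    # Forward differences of the initial illumination (last row/column repeated).
    dh = np.diff(t_hat, axis=1, append=t_hat[:, -1:])
    dv = np.diff(t_hat, axis=0, append=t_hat[-1:, :])
    # Eq. (14): weights from gradients accumulated over a win x win window.
    w_h = 1.0 / (np.abs(uniform_filter(dh, size=win)) * win**2 + zeta)
    w_v = 1.0 / (np.abs(uniform_filter(dv, size=win)) * win**2 + zeta)
    # Diagonal matrices U_d with entries W_d / (|d T_hat| + zeta).
    u_h = sp.diags((w_h / (np.abs(dh) + zeta)).ravel())
    u_v = sp.diags((w_v / (np.abs(dv) + zeta)).ravel())
    # Sparse forward-difference operators D_h and D_v (row-major ordering).
    n = H * W
    Dh = sp.diags([-np.ones(n), np.ones(n - 1)], [0, 1], format='csr')
    Dv = sp.diags([-np.ones(n), np.ones(n - W)], [0, W], format='csr')
    M = Dh.T @ u_h @ Dh + Dv.T @ u_v @ Dv
    A = (sp.identity(n) + tau * M).tocsc()
    t = spsolve(A, t_hat.ravel())                             # Eq. (15)
    return t.reshape(H, W)

def adjust_brightness(base_rgb, T):
    gamma = 2.0 ** ((0.5 - T) / 0.5)                          # Eq. (16)
    return np.clip(gamma[..., None] * base_rgb, 0.0, 1.0)     # Eq. (17), gain form
```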

3.2.3. Contextual Adaptive Enhancement

Using the base-detail decomposition in Equations (9)–(11), high-frequency details and noise are retained in the detail pathway. For relatively smooth regions containing primarily noise, their local energy tends to be low; in contrast, regions containing both noise and texture details generally exhibit high local energy. Based on this analysis, we can calculate the detail preservation weights by evaluating the energy values within the local neighborhood of the detail layer, thereby achieving noise suppression and detail preservation. The specific formula is as follows:
$w^c(x, y) = \left(\left|I_{\mathrm{detail}}^c\right| * G\right)(x, y),$  (18)
$G(x, y) = \dfrac{1}{2 \pi \sigma_c^2} \exp\!\left(-\dfrac{x^2 + y^2}{2 \sigma_c^2}\right),$  (19)
where $*$ denotes the convolution operator and $G(x, y)$ represents a Gaussian filter. A smaller $w^c(x, y)$ indicates that the current pixel is not located in a texture region and can be smoothed. Conversely, a larger $w^c(x, y)$ suggests the presence of texture details that should be preserved. Therefore, the proposed detail-preserving and noise-suppression mechanism in the detail pathway can be formulated as follows:
$O_{\mathrm{detail}}^c(x, y) = I_{\mathrm{detail}}^c(x, y) \times \left(w^c(x, y)\right)^{\kappa},$  (20)
where the exponential parameter $\kappa$ is introduced to tune the contrast of details and is set to $\kappa = 1.2$ for optimal visual and quantitative results. To generate the final enhanced image, the processed layers are fused as follows:
$O^c(x, y) = O_{\mathrm{base}}^c(x, y) + \omega_d \cdot O_{\mathrm{detail}}^c(x, y),$  (21)
where $O^c$ denotes the adaptively enhanced underwater image, and $\omega_d$ controls the trade-off between noise suppression and detail enhancement. To avoid over-enhancement of image details, we set $\omega_d = 3.0$ in this work. The workflow of our vision-inspired adaptive enhancement algorithm for underwater images is summarized in Algorithm 1.
Algorithm 1: Vision-Inspired Adaptive Enhancement for Underwater Images
1  Input: Color-corrected image $I_p^c$, parameters $\lambda_c$, $\tau$, $\kappa$, $\omega_d$;
    Output: Enhanced image $O^c$;
2  Compute the base and detail layers $I_{\mathrm{base}}^c$, $I_{\mathrm{detail}}^c$ using Equations (10) and (11);
3  Compute the illumination map $T$ via Equations (12)–(15);
4  Obtain the brightness-adjusted base layer $O_{\mathrm{base}}^c$ via Equations (16) and (17);
5  Compute the detail-preservation weights $w^c$ via Equations (18) and (19);
6  Obtain the detail-preserving enhanced detail layer $O_{\mathrm{detail}}^c$ via Equation (20);
7  Obtain the enhanced image $O^c$ via Equation (21);
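Steps 5–7 of Algorithm 1 (Equations (18)–(21)) admit a short sketch, given below. The Gaussian scale $\sigma_c$ and the normalization of the local-energy weights into [0, 1] are assumptions made for this illustration; $\kappa$ and $\omega_d$ follow the values stated above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_detail_and_fuse(base_adj, detail, kappa=1.2, omega_d=3.0, sigma_c=3.0):
    """Steps 5-7 of Algorithm 1: local-energy weights, detail modulation, fusion."""
    out = np.empty_like(base_adj)
    for c in range(base_adj.shape[2]):
        # Eq. (18): local energy as |detail| convolved with a Gaussian kernel.
        w = gaussian_filter(np.abs(detail[..., c]), sigma=sigma_c)
        w = w / (w.max() + 1e-8)                      # scale weights into [0, 1]
        o_detail = detail[..., c] * (w ** kappa)      # Eq. (20)
        out[..., c] = base_adj[..., c] + omega_d * o_detail   # Eq. (21)
    return np.clip(out, 0.0, 1.0)
```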

4. Results and Analysis

Comparative Methods. We compare our method with 12 state-of-the-art methods, including Attenuated Color Channel Correction and Detail-Preserved Contrast Enhancement (ACDC) [58], Pixel Distribution Remapping and Multi-Prior Retinex Variational Model (PDRMRV) [9], Minimal Color Loss and Locally Adaptive Contrast Enhancement (MMLE) [4], Weighted Wavelet Visual Perception Fusion (WWPF) [47], Multi-Color Components and Light Attenuation (MCLA) [21], HFM [6], Hyper-Laplacian Reflectance Priors (HLRP) [10], L2UWE [59], SPDF [44], UDHTV [60], WaterNet [25], and U-shape Transformer (UTransNet) [31]. Specifically, PDRMRV, MCLA, UDHTV, and HLRP are categorized as restoration-based methods; ACDC, MMLE, WWPF, HFM, L2UWE, and SPDF as enhancement-based methods; and WaterNet and UTransNet as data-driven approaches. To ensure a fair comparison, the parameters of the compared algorithms are set to optimal values following the recommendations of the original authors.
Benchmark Datasets. All the aforementioned methods were evaluated using four benchmark underwater image datasets: UIEB [25], RUIE [61], OceanDark [59], and Color-Check7 [5]. The UIEB dataset contains 890 degraded images captured under diverse challenging conditions, such as various color casts, poor illumination, turbid water, and deep-sea environments. Notably, it includes a substantial number of images featuring marine organisms and artificial objects, thus making it particularly suitable for evaluating the robustness of underwater image enhancement algorithms. In contrast, the RUIE dataset, collected using waterproof cameras near Zhangzi Island, provides complementary test scenarios for evaluation. The OceanDark dataset comprises 183 low-light underwater images acquired by Ocean Networks Canada. Additionally, the Color-Check7 dataset consists of seven underwater images captured by various cameras in a shallow swimming pool, each containing a color checker.
Evaluation Metrics. Several commonly used quantitative metrics are employed to evaluate the effectiveness of various methods, including Average Gradient (AG) [62], Edge Intensity (EI) [54], Colorfulness, Contrast, and Fog Density index (CCF) [63], Fog Density Index (FADE) [64], Underwater Image Quality Metric (UIQM) [65], Underwater Color Image Quality Evaluation metric (UCIQE) [66], and CIEDE2000 [67]. Among these metrics, higher AG values indicate improved image sharpness; higher UCIQE values reflect enhanced brightness; lower FADE scores correspond to less fog density; higher EI values indicate better detail preservation; and higher CCF and UIQM scores suggest improved contrast and perceptual quality, respectively. Unlike the aforementioned metrics, which rely on global statistical features, CIEDE2000 specifically quantifies the color differences between the recovered image and the reference ground-truth, with lower values indicating superior color restoration.
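For reference, a per-image CIEDE2000 score of the kind reported in Table 1 can be computed as the mean pixel-wise color difference in CIELAB space. The sketch below uses scikit-image and assumes 8-bit RGB inputs; it illustrates the metric itself, not the exact evaluation script used in the experiments.

```python
import numpy as np
from skimage import color

def mean_ciede2000(enhanced_rgb_uint8, reference_rgb_uint8):
    """Average CIEDE2000 difference between an enhanced image and its reference."""
    lab_enh = color.rgb2lab(enhanced_rgb_uint8 / 255.0)
    lab_ref = color.rgb2lab(reference_rgb_uint8 / 255.0)
    return float(np.mean(color.deltaE_ciede2000(lab_ref, lab_enh)))
```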

4.1. Parameter Settings

The proposed algorithm involves two key parameters: the adjustment parameter $\kappa$ for detail enhancement and the fusion weight parameter $\omega_d$. To determine their appropriate values, we randomly selected several images and conducted experiments following these strategies: (1) fixing $\omega_d$ while varying $\kappa$; (2) fixing $\kappa$ while varying $\omega_d$. As shown in the results of Figure 3, when $\kappa$ increases from 0.8 to 1.6, the details of the resulting images gradually become richer; however, an excessively large $\kappa$ leads to over-enhancement. Similarly, a smaller $\omega_d$ risks weakening detail contrast, while a larger $\omega_d$ makes the enhanced image appear unnatural. Therefore, to balance the visual naturalness and contrast of the enhanced image, we empirically set $\kappa = 1.2$ and $\omega_d = 3.0$.

4.2. Comparisons on the Color-Check7 Dataset

To evaluate the robustness and accuracy of underwater color correction and white balance methods, we conducted comparative experiments on the Color-Check7 dataset. As shown in the first row of Figure 4, ACDC and UTransNet produce visually unsatisfactory results for red-tinged underwater images, with unnatural color reproduction. While MMLE, WWPF, HLRP, and UDHTV partially correct color shifts, their outputs exhibit spatially inconsistent color distributions. Notably, MCLA and SPDF introduce a systematic blue cast, whereas L2UWE and WaterNet fail to sufficiently suppress residual red tones. Although PDRMRV, HFM, and our proposed method successfully eliminate red chromatic aberrations, PDRMRV and HFM yield weaker contrast and less effective detail enhancement compared with our approach. For the blue-tinged underwater image in the second row of Figure 4, ACDC yields excessively dark outputs, compromising overall color clarity. MCLA, L2UWE, and WaterNet retain excessive blue chromaticity, failing to achieve balanced color representation. SPDF produces low-contrast results with inadequate detail preservation, while HFM introduces undesirable artifacts due to overcompensation. By contrast, the proposed method demonstrates superior detail preservation and a more natural appearance. Comprehensive evaluation across the Color-Check7 dataset confirms that our approach consistently outperforms existing methods in color distortion correction and contrast enhancement.
For quantitative analysis, the CIEDE2000 values for all images are presented in Table 1. While the proposed method does not achieve the lowest CIEDE2000 values for all images, its overall average score is 11.067, which is significantly better than those of the compared methods.

4.3. Comparisons on the UIEB Dataset

As illustrated in Figure 5, three underwater images randomly selected from the UIEB dataset—exhibiting diverse color casts and degradations, such as greenish tones, backscatter effects, and bluish tones—are employed for comparative experiments. For the greenish underwater image (first row), MCLA, L2UWE, HFM, WaterNet, and UTransNet fail to correct the color cast. PDRMRV introduces a reddish cast in the upper left corner due to overcompensation of the red channel. Similarly, SPDF produces noticeable blue artifacts, primarily due to its inappropriate local color correction strategy. Both HLRP and SPDF fail to effectively remove the greenish cast; instead, they introduce varying degrees of color distortion, resulting in overall visual darkening of the image. Although ACDC, MMLE, and WWPF mitigate these issues, their visual effects are not sufficiently natural, and detail preservation remains inadequate. For the bluish image (third row), PDRMRV produces noticeable blue artifacts, while MMLE, WWPF, SPDF, and UDHTV exhibit a purple cast in distant scenes. MCLA, HLRP, L2UWE, HFM, and UTransNet fail to eliminate the blue distortion; additionally, MCLA and HLRP introduce a red cast. In contrast, the proposed method effectively reduces color deviations and artifacts while markedly improving the visibility of underwater scenes. The quantitative evaluation results for the entire UIEB dataset are summarized in Table 2. Compared with competing methods, the proposed method achieves the best performance in FADE, UIQM, and UCIQE, with values of 0.833, 3.588, and 0.603, respectively. Although its AG and EI values are slightly lower than those of SPDF, the proposed method outperforms all counterparts in overall metrics.

4.4. Comparisons on the RUIE and OceanDark Datasets

Figure 6 presents images randomly selected from the RUIE dataset. Notably, the PDRMRV method suffers from color overcompensation, with all three restored images exhibiting distinct blue blocks. In the dark regions of the images, MCLA, HLRP, SPDF, and HFM introduce significant red casts after enhancement. While the L2UWE method enhances texture details, it fails to mitigate the inherent color cast in the original images. For white shell-like objects, the UDHTV method also produces obvious blue blocks. Furthermore, images processed by UTransNet exhibit a slight red cast in distant regions. As shown in the quantitative results in Table 2, the proposed method has a slightly lower EI value than SPDF and a marginally lower FADE value compared with WWPF. Nevertheless, it outperforms competing methods in the remaining four metrics: AG, CCF, UIQM, and UCIQE.
Figure 7 shows images randomly chosen from the OceanDark dataset. The degraded underwater images suffer not only from color distortion but also from non-uniform lighting. For instance, in the first low-light underwater image, methods such as MMLE, MCLA, HLRP, L2UWE, SPDF, UDHTV, WaterNet, and UTransNet enhance brightness and contrast to some extent but fail to reveal underlying information in dark regions. Although ACDC and PDRMRV yield relatively uniform brightness, color artifacts emerge due to excessive color compensation. Notably, for the second greenish-distorted image and the third bluish-distorted image, methods like WWPF, MCLA, HLRP, L2UWE, and HFM are ineffective in eliminating color distortion. In contrast, the processing results of ACDC and the proposed method for these two distorted images are superior to those of the aforementioned methods. Table 3 summarizes the quantitative evaluations of all compared methods on the entire OceanDark dataset. It is evident that the proposed method outperforms others in most cases across these five metrics, confirming its effectiveness in improving color and brightness for underwater images captured under suboptimal lighting conditions.

4.5. Ablation Study

To verify the effectiveness of each component in the proposed method, a series of ablation experiments are conducted on the OceanDark dataset, including (1) our method without color correction (-w/o CC); (2) our method without local color refining (-w/o LCR); (3) our method without global brightness adjustment (-w/o GBA); and (4) our method without contextual adaptive enhancement (-w/o CAE). Figure 8 presents a comparative visualization of the ablation experiments. As shown in the results, the -w/o CC variant enhances detail preservation but fails to rectify color distortion. The -w/o LCR configuration exhibits limited capability in addressing subtle color deviations in the enhanced imagery. Similarly, the -w/o GBA approach fails to achieve meaningful visibility improvement, particularly in underexposed regions of the underwater scene. While the -w/o CAE method succeeds in improving overall visibility, it shows a pronounced deficiency in high-frequency details. Additionally, Table 4 summarizes the quantitative evaluations of all ablation configurations. The experimental results demonstrate that our complete framework—integrating all proposed components—achieves the best performance, indicating that each component plays an important role in the proposed method.

4.6. Runtime Analysis

For computational efficiency analysis, we quantitatively evaluated the average running time of various methods on 1280 × 720-pixel images from the OceanDark dataset and 400 × 300-pixel images from the RUIE dataset. All traditional algorithms were implemented in MATLAB 2019b and executed on a Windows 11 PC equipped with a 13th Gen Intel(R) Core(TM) i9-13900KF CPU (3.00 GHz) and 32 GB RAM. Additionally, a 24 GB NVIDIA GeForce RTX 4090 GPU was employed to run deep learning-based algorithms implemented using the PyTorch 1.10.0 framework. As summarized in Table 5, the proposed method achieves an average processing time of 4.51 s for 1280 × 720-pixel images, significantly outperforming most traditional methods (i.e., PDRMRV, MCLA, HLRP, L2UWE, SPDF, UDHTV, and HFM) in terms of speed. Although our algorithm is slower than ACDC, MMLE, and WWPF, its quantitative evaluation results from the aforementioned experiments are superior to those of these algorithms.

4.7. Application to Image Segmentation

The primary objective of image enhancement is to provide high-quality input images for subsequent high-level vision applications, thereby facilitating more effective feature extraction in advanced computer vision tasks. To rigorously evaluate the enhancement performance of our method, we employ a robust color image segmentation algorithm, superpixel-based fast FCM clustering (SFFCM) [68], to partition images into mutually exclusive and homogeneous regions.
Figure 9 demonstrates the results of various methods on images randomly selected from the OceanDark dataset, with their corresponding segmentation outcomes presented in Figure 10. The segmentation results show that our enhanced images enable more accurate fish-body contour extraction using SFFCM, while comparative methods yield inferior contour localization. Thus, experimental results demonstrate that our method generates images with enhanced structural details, superior contrast, and uniform illumination distribution.

5. Conclusions

In this work, we propose an effective and robust underwater image enhancement method comprising two key modules: progressive color correction and vision-inspired adaptive image enhancement. During the color correction stage, we first employ a Rayleigh distribution prior to develop a globally color-balanced pre-mapping strategy for color shift correction. Subsequently, we design a guided filter-based local color refinement method that explicitly models inter-channel correlations to alleviate potential local reddish artifacts caused by overcompensation. Within the vision-inspired adaptive enhancement framework, we introduce a global brightness adjustment module to tackle non-uniform illumination caused by lighting conditions. Additionally, a contextually adaptive enhancement module derived from the local energy characteristics of detail layers is designed to jointly suppress noise and enhance structural details. Extensive experiments conducted across multiple underwater image benchmarks verify the proposed method’s superior performance in color cast correction, brightness adjustment, and structural detail enhancement.

Author Contributions

Z.L. proposed the original idea and wrote the manuscript. W.L. and J.W. collected materials and wrote the manuscript. Y.Y. supervised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Key Research and Development (R&D) Plan of Shanxi Province (No. 2023-ZDLGY-15), the Guangdong Basic and Applied Basic Research Foundation (No. 2023A1515110770), the Non-funded Science and Technology Research Program of Zhanjiang (No. 2024B01051), and the Program for Scientific Research Start-up Funds of Guangdong Ocean University (Nos. 060302112319 and 060302112309).

Data Availability Statement

The underwater datasets used in this study are publicly available on GitHub via the following URL: https://github.com/xahidbuffon/Awesome_Underwater_Datasets (accessed on 15 September 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hu, K.; Weng, C.; Zhang, Y.; Jin, J.; Xia, Q. An overview of underwater vision enhancement: From traditional methods to recent deep learning. J. Mar. Sci. Eng. 2022, 10, 241. [Google Scholar] [CrossRef]
  2. Zhou, J.; Pang, L.; Zhang, D.; Zhang, W. Underwater image enhancement method via multi-interval subhistogram perspective equalization. IEEE J. Ocean. Eng. 2023, 48, 474–488. [Google Scholar] [CrossRef]
  3. Jha, M.; Bhandari, A.K. Cbla: Color-balanced locally adjustable underwater image enhancement. IEEE Trans. Instrum. Meas. 2024, 73, 1–11. [Google Scholar] [CrossRef]
  4. Zhang, W.; Zhuang, P.; Sun, H.-H.; Li, G.; Kwong, S.; Li, C. Underwater image enhancement via minimal color loss and locally adaptive contrast enhancement. IEEE Trans. Image Process. 2022, 31, 3997–4010. [Google Scholar] [CrossRef]
  5. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color balance and fusion for underwater image enhancement. IEEE Trans. Image Process. 2017, 27, 379–393. [Google Scholar] [CrossRef]
  6. An, S.; Xu, L.; Member, I.S.; Deng, Z.; Zhang, H. Hfm: A hybrid fusion method for underwater image enhancement. Eng. Appl. Artif. Intell. 2024, 127, 107219. [Google Scholar] [CrossRef]
  7. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Sbert, M. Color channel compensation (3c): A fundamental pre-processing step for image enhancement. IEEE Trans. Image Process. 2019, 29, 2653–2665. [Google Scholar] [CrossRef] [PubMed]
  8. Subramani, B.; Veluchamy, M. Pixel intensity optimization and detail-preserving contextual contrast enhancement for underwater images. Opt. Laser Technol. 2025, 180, 111464. [Google Scholar] [CrossRef]
  9. Zhou, J.; Wang, S.; Lin, Z.; Jiang, Q.; Sohel, F. A pixel distribution remapping and multi-prior retinex variational model for underwater image enhancement. IEEE Trans. Multimed. 2024, 26, 7838–7849. [Google Scholar] [CrossRef]
  10. Zhuang, P.; Wu, J.; Porikli, F.; Li, C. Underwater image enhancement with hyper-laplacian reflectance priors. IEEE Trans. Image Process. 2022, 31, 5442–5455. [Google Scholar] [CrossRef]
  11. Hou, G.; Li, J.; Wang, G.; Yang, H.; Huang, B.; Pan, Z. A novel dark channel prior guided variational framework for underwater image restoration. J. Vis. Commun. Image Represent. 2020, 66, 102732. [Google Scholar] [CrossRef]
  12. Zhou, J.; Chen, S.; Zhang, D.; He, Z.; Lam, K.-M.; Sohel, F.; Vivone, G. Adaptive variational decomposition for water-related optical image enhancement. ISPRS J. Photogramm. Remote Sens. 2024, 216, 15–31. [Google Scholar] [CrossRef]
  13. Peng, Y.-T.; Cao, K.; Cosman, P.C. Generalization of the dark channel prior for single image restoration. IEEE Trans. Image Process. 2018, 27, 2856–2868. [Google Scholar] [CrossRef]
  14. Liang, Z.; Ding, X.; Wang, Y.; Yan, X.; Fu, X. Gudcp: Generalization of underwater dark channel prior for underwater image restoration. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 4879–4884. [Google Scholar] [CrossRef]
  15. Liang, Z.; Zhang, W.; Ruan, R.; Zhuang, P.; Xie, X.; Li, C. Underwater image quality improvement via color, detail, and contrast restoration. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 1726–1742. [Google Scholar] [CrossRef]
  16. Huo, G.; Wu, Z.; Li, J. Underwater image restoration based on color correction and red channel prior. In Proceedings of the 2018 IEEE International Conference on Systems, Man and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 3975–3980. [Google Scholar]
  17. Cui, Z.; Yang, C.; Wang, S.; Wang, X.; Duan, H.; Na, J. Underwater image enhancement by illumination map estimation and adaptive high-frequency gain. IEEE Sens. J. 2024, 24, 24677–24689. [Google Scholar] [CrossRef]
  18. Wang, Y.; Liu, H.; Chau, L.-P. Single underwater image restoration using adaptive attenuation-curve prior. IEEE Trans. Circuits Syst. I Regul. Pap. 2017, 65, 992–1002. [Google Scholar] [CrossRef]
  19. Song, W.; Wang, Y.; Huang, D.; Tjondronegoro, D. A rapid scene depth estimation model based on underwater light attenuation prior for underwater image restoration. In Advances in Multimedia Information Processing–PCM 2018, Proceedings of the Pacific Rim Conference on Multimedia, Hefei, China, 21–22 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 678–688. [Google Scholar]
  20. Liu, K.; Liang, Y. Underwater image enhancement method based on adaptive attenuation-curve prior. Opt. Express 2021, 29, 10321–10345. [Google Scholar] [CrossRef]
  21. Zhou, J.; Wang, Y.; Li, C.; Zhang, W. Multicolor light attenuation modeling for underwater image restoration. IEEE J. Ocean. Eng. 2023, 48, 1322–1337. [Google Scholar] [CrossRef]
  22. Zhuang, P.; Li, C.; Wu, J. Bayesian retinex underwater image enhancement. Eng. Appl. Artif. Intell. 2021, 101, 104171. [Google Scholar] [CrossRef]
  23. Li, D.; Zhou, J.; Wang, S.; Zhang, D.; Zhang, W.; Alwadai, R.; Alenezi, F.; Tiwari, P.; Shi, T. Adaptive weighted multiscale retinex for underwater image enhancement. Eng. Appl. Artif. Intell. 2023, 123, 106457. [Google Scholar] [CrossRef]
  24. Yeh, C.-H.; Lai, Y.-W.; Lin, Y.-Y.; Chen, M.-J.; Wang, C.-C. Underwater image enhancement based on light field-guided rendering network. J. Mar. Sci. Eng. 2024, 12, 1217. [Google Scholar] [CrossRef]
  25. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2020, 29, 4376–4389. [Google Scholar] [CrossRef]
  26. Li, F.; Li, W.; Zheng, J.; Wang, L.; Xi, Y. Contrastive feature disentanglement via physical priors for underwater image enhancement. Remote Sens. 2025, 17, 759. [Google Scholar] [CrossRef]
  27. Sun, B.; Mei, Y.; Yan, N.; Chen, Y. Umgan: Underwater image enhancement network for unpaired image-to-image translation. J. Mar. Sci. Eng. 2023, 11, 447. [Google Scholar] [CrossRef]
  28. Fu, F.; Liu, P.; Shao, Z.; Xu, J.; Fang, M. Mevo-gan: A multi-scale evolutionary generative adversarial network for underwater image enhancement. J. Mar. Sci. Eng. 2024, 12, 1210. [Google Scholar] [CrossRef]
  29. Cong, R.; Yang, W.; Zhang, W.; Li, C.; Guo, C.-L.; Huang, Q.; Kwong, S. Pugan: Physical model-guided underwater image enhancement using gan with dual-discriminators. IEEE Trans. Image Process. 2023, 32, 4472–4485. [Google Scholar] [CrossRef]
  30. Islam, M.J.; Xia, Y.; Sattar, J. Fast underwater image enhancement for improved visual perception. IEEE Robot. Autom. Lett. 2020, 5, 3227–3234. [Google Scholar] [CrossRef]
  31. Peng, L.; Zhu, C.; Bian, L. U-shape transformer for underwater image enhancement. IEEE Trans. Image Process. 2023, 32, 3066–3079. [Google Scholar] [CrossRef]
  32. Tang, Y.; Iwaguchi, T.; Kawasaki, H.; Sagawa, R.; Furukawa, R. Autoenhancer: Transformer on u-net architecture search for underwater image enhancement. In Proceedings of the Asian Conference on Computer Vision, Macao, China, 4–8 December 2022; pp. 1403–1420. [Google Scholar]
  33. An, G.; He, A.; Wang, Y.; Guo, J. Uwmamba: Underwater image enhancement with state space model. IEEE Signal Process. Lett. 2024, 31, 2725–2729. [Google Scholar] [CrossRef]
  34. Guan, M.; Xu, H.; Jiang, G.; Yu, M.; Chen, Y.; Luo, T.; Song, Y. Watermamba: Visual State Space Model for Underwater Image Enhancement. arXiv 2024, arXiv:2405.08419. [Google Scholar] [CrossRef]
  35. Berman, D.; Levy, D.; Avidan, S.; Treibitz, T. Underwater single image color restoration using haze-lines and a new quantitative dataset. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 2822–2837. [Google Scholar] [CrossRef] [PubMed]
  36. Zhou, J.; Wang, Y.; Zhang, W. Underwater image restoration via information distribution and light scattering prior. Comput. Electr. Eng. 2022, 100, 107908. [Google Scholar] [CrossRef]
  37. Li, C.-Y.; Guo, J.-C.; Cong, R.-M.; Pang, Y.-W.; Wang, B. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Trans. Image Process. 2016, 25, 5664–5677. [Google Scholar] [CrossRef] [PubMed]
  38. Akkaynak, D.; Treibitz, T. Sea-thru: A method for removing water from underwater images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1682–1691. [Google Scholar]
  39. Galdran, A.; Pardo, D.; Picón, A.; Alvarez-Gila, A. Automatic red-channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145. [Google Scholar] [CrossRef]
  40. Liu, J.; Liu, R.W.; Sun, J.; Zeng, T. Rank-one prior: Real-time scene recovery. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 8845–8860. [Google Scholar] [CrossRef]
  41. Xie, J.; Hou, G.; Wang, G.; Pan, Z. A variational framework for underwater image dehazing and deblurring. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 3514–3526. [Google Scholar] [CrossRef]
  42. Zhang, W.; Pan, X.; Xie, X.; Li, L.; Wang, Z.; Han, C. Color correction and adaptive contrast enhancement for underwater image enhancement. Comput. Electr. Eng. 2021, 91, 106981. [Google Scholar] [CrossRef]
  43. Balta, H.; Ancuti, C.O.; Ancuti, C. Effective underwater image restoration based on color channel compensation. In Proceedings of the OCEANS 2021: San Diego–Porto, San Diego, CA, USA, 20–23 September 2021; pp. 1–4. [Google Scholar]
  44. Kang, Y.; Jiang, Q.; Li, C.; Ren, W.; Liu, H.; Wang, P. A perception-aware decomposition and fusion framework for underwater image enhancement. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 988–1002. [Google Scholar] [CrossRef]
  45. Zhang, W.; Dong, L.; Zhang, T.; Xu, W. Enhancing underwater image via color correction and bi-interval contrast enhancement. Signal Process. Image Commun. 2021, 90, 116030. [Google Scholar] [CrossRef]
  46. Zhang, T.; Su, H.; Fan, B.; Yang, N.; Zhong, S.; Yin, J. Underwater image enhancement based on red channel correction and improved multiscale fusion. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–20. [Google Scholar] [CrossRef]
  47. Zhang, W.; Zhou, L.; Zhuang, P.; Li, G.; Pan, X.; Zhao, W.; Li, C. Underwater image enhancement via weighted wavelet visual perception fusion. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 2469–2483. [Google Scholar] [CrossRef]
  48. Lu, S.; Guan, F.; Zhang, H.; Lai, H. Underwater image enhancement method based on denoising diffusion probabilistic model. J. Vis. Commun. Image Represent. 2023, 96, 103926. [Google Scholar] [CrossRef]
  49. Wang, Z.; Shen, L.; Xu, M.; Yu, M.; Wang, K.; Lin, Y. Domain adaptation for underwater image enhancement. IEEE Trans. Image Process. 2023, 32, 1442–1457. [Google Scholar] [CrossRef]
  50. Lin, W.-T.; Lin, Y.-X.; Chen, J.-W.; Hua, K.-L. Pixmamba: Leveraging state space models in a dual-level architecture for underwater image enhancement. In Proceedings of the Asian Conference on Computer Vision, Hanoi, Vietnam, 8–12 December 2024; pp. 3622–3637. [Google Scholar]
  51. Ouyang, W.; Liu, J.; Wei, Y. An underwater image enhancement method based on balanced adaption compensation. IEEE Signal Process. Lett. 2024, 31, 1034–1038. [Google Scholar] [CrossRef]
  52. Van De Weijer, J.; Gevers, T.; Gijsenij, A. Edge-based color constancy. IEEE Trans. Image Process. 2007, 16, 2207–2214. [Google Scholar] [CrossRef]
  53. Afifi, M.; Price, B.; Cohen, S.; Brown, M.S. When color constancy goes wrong: Correcting improperly white-balanced images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1535–1544. [Google Scholar]
  54. Azmi, K.Z.M.; Ghani, A.S.A.; Yusof, Z.M.; Ibrahim, Z. Natural-based underwater image color enhancement through fusion of swarm-intelligence algorithm. Appl. Soft Comput. 2019, 85, 105810. [Google Scholar] [CrossRef]
  55. Finlayson, G.D.; Trezzi, E. Shades of gray and colour constancy. In Proceedings of the Color and Imaging Conference, Scottsdale, AZ, USA, 9 November 2004; Society of Imaging Science and Technology: Springfield, VA, USA, 2004; Volume 12, pp. 37–41. [Google Scholar]
  56. Yang, K.-F.; Zhang, X.-S.; Li, Y.-J. A biological vision inspired framework for image enhancement in poor visibility conditions. IEEE Trans. Image Process. 2019, 29, 1493–1506. [Google Scholar] [CrossRef]
  57. Gong, Y.; Sbalzarini, I.F. Curvature filters efficiently reduce certain variational energies. IEEE Trans. Image Process. 2017, 26, 1786–1798. [Google Scholar] [CrossRef]
  58. Zhang, W.; Wang, Y.; Li, C. Underwater image enhancement by attenuated color channel correction and detail preserved contrast enhancement. IEEE J. Ocean. Eng. 2022, 47, 718–735. [Google Scholar] [CrossRef]
  59. Marques, T.P.; Albu, A.B. L2uwe: A framework for the efficient enhancement of low-light underwater images using local contrast and multi-scale fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 538–539. [Google Scholar]
  60. Li, Y.; Hou, G.; Zhuang, P.; Pan, Z. Dual high-order total variation model for underwater image restoration. arXiv 2024, arXiv:2407.14868. [Google Scholar] [CrossRef]
  61. Liu, R.; Fan, X.; Zhu, M.; Hou, M.; Luo, Z. Real-world underwater enhancement: Challenges, benchmarks, and solutions under natural light. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 4861–4875. [Google Scholar] [CrossRef]
  62. Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 10561–10570. [Google Scholar]
  63. Wang, Y.; Li, N.; Li, Z.; Gu, Z.; Zheng, H.; Zheng, B.; Sun, M. An imaging-inspired no-reference underwater color image quality assessment metric. Comput. Electr. Eng. 2018, 70, 904–913. [Google Scholar] [CrossRef]
  64. Choi, L.K.; You, J.; Bovik, A.C. Referenceless prediction of perceptual fog density and perceptual image defogging. IEEE Trans. Image Process. 2015, 24, 3888–3901. [Google Scholar] [CrossRef]
  65. Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2015, 41, 541–551. [Google Scholar] [CrossRef]
  66. Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071. [Google Scholar] [CrossRef]
  67. Gómez-Polo, C.; Muñoz, M.P.; Luengo, M.C.L.; Vicente, P.; Galindo, P.; Casado, A.M.M. Comparison of the cielab and ciede2000 color difference formulas. J. Prosthet. Dent. 2016, 115, 65–70. [Google Scholar] [CrossRef]
  68. Lei, T.; Jia, X.; Zhang, Y.; Liu, S.; Meng, H.; Nandi, A.K. Superpixel-based fast fuzzy c-means clustering for color image segmentation. IEEE Trans. Fuzzy Syst. 2018, 27, 1753–1766. [Google Scholar] [CrossRef]
Figure 1. Overview of our proposed method.
Figure 2. Visual comparison of different color correction methods. From left to right: (a) Images from test datasets; (b) Gray-Edge [52] results; (c) WB_sRGB [53] results; (d) NUCE [54] results; (e) 3C [7] results; (f) Shades-of-Gray [55] results; (g) Ours progressive color correction.
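For context on the Shades-of-Gray baseline [55] compared in Figure 2f, the following NumPy sketch implements the standard Minkowski-norm formulation of that algorithm. It is a minimal illustration written for this discussion, not the code used to generate the figure, and the norm order p = 6 is an assumed typical setting.

```python
import numpy as np

def shades_of_gray(img, p=6):
    """Shades-of-Gray white balance [55] on a float RGB image in [0, 1].

    p is the Minkowski norm order: p = 1 reduces to Gray-World,
    and p -> infinity approaches Max-RGB (White-Patch).
    """
    # Estimate the illuminant of each channel with the Minkowski p-norm.
    illuminant = np.power(np.mean(np.power(img, p), axis=(0, 1)), 1.0 / p)
    illuminant = np.maximum(illuminant, 1e-6)            # avoid division by zero
    # Rescale the channels so the estimated illuminant becomes neutral gray.
    gains = illuminant.mean() / illuminant
    return np.clip(img * gains, 0.0, 1.0)
```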
Figure 3. The effect of different values of κ and ω_d on the enhanced image. Specifically, the first and second rows show results for different values of κ with ω_d fixed at 3.0; the third and fourth rows show results for different values of ω_d with κ fixed at 1.2.
Figure 4. Visual performance comparison of different enhancement methods on the Color-Check7 dataset. (a) Raw images; (b) Enhanced results of ACDC [58]; (c) PDRMRV [9]; (d) MMLE [4]; (e) WWPF [47]; (f) MCLA [21]; (g) HLRP [10]; (h) L2UWE [59]; (i) SPDF [44]; (j) UDHTV [60]; (k) HFM [6]; (l) WaterNet [25]; (m) UTransNet [31]; (n) Proposed method.
Figure 5. Visual performance comparison of various methods on the UIEB dataset. (a) Raw images; (b) enhanced results of ACDC [58]; (c) PDRMRV [9]; (d) MMLE [4]; (e) WWPF [47]; (f) MCLA [21]; (g) HLRP [10]; (h) L2UWE [59]; (i) SPDF [44]; (j) UDHTV [60]; (k) HFM [6]; (l) WaterNet [25]; (m) UTransNet [31]; (n) proposed method.
Figure 6. Visual performance comparison of various methods on the RUIE dataset. (a) Raw images; (b) enhanced results of ACDC [58]; (c) PDRMRV [9]; (d) MMLE [4]; (e) WWPF [47]; (f) MCLA [21]; (g) HLRP [10]; (h) L2UWE [59]; (i) SPDF [44]; (j) UDHTV [60]; (k) HFM [6]; (l) WaterNet [25]; (m) UTransNet [31]; (n) proposed method.
Figure 7. Visual performance comparison of various methods on the OceanDark dataset. (a) Raw images; (b) enhanced results of ACDC [58]; (c) PDRMRV [9]; (d) MMLE [4]; (e) WWPF [47]; (f) MCLA [21]; (g) HLRP [10]; (h) L2UWE [59]; (i) SPDF [44]; (j) UDHTV [60]; (k) HFM [6]; (l) WaterNet [25]; (m) UTransNet [31]; (n) proposed method.
Figure 8. Ablation study of the proposed method. (a) Raw images; (b) Results of -w/o CC; (c) Results of -w/o LCR; (d) Results of -w/o GBA; (e) Results of -w/o CAE; (f) Proposed method.
Figure 9. Visual performance comparison of various methods on the OceanDark dataset. (a) Raw images; (b) enhanced results of ACDC [58]; (c) PDRMRV [9]; (d) MMLE [4]; (e) WWPF [47]; (f) MCLA [21]; (g) HLRP [10]; (h) L2UWE [59]; (i) SPDF [44]; (j) UDHTV [60]; (k) HFM [6]; (l) WaterNet [25]; (m) UTransNet [31]; (n) proposed method.
Figure 10. Segmentation performance of the SFFCM algorithm on (a) raw images and on images enhanced by (b) ACDC [58]; (c) PDRMRV [9]; (d) MMLE [4]; (e) WWPF [47]; (f) MCLA [21]; (g) HLRP [10]; (h) L2UWE [59]; (i) SPDF [44]; (j) UDHTV [60]; (k) HFM [6]; (l) WaterNet [25]; (m) UTransNet [31]; (n) the proposed method.
Table 1. Quantitative Comparison Results of CIEDE2000 for the ColorCheck-7 Dataset.

| Image | Raw Image | ACDC | PDRMRV | MMLE | WWPF | MCLA | HLRP | L2UWE | SPDF | UDHTV | HFM | WaterNet | UTransNet | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| T6000 | 23.277 | 28.513 | 12.166 | 27.543 | 26.053 | 23.619 | 20.803 | 12.954 | 24.672 | 10.454 | 12.519 | 25.033 | 23.189 | 11.563 |
| T8000 | 29.081 | 18.630 | 12.051 | 19.048 | 19.416 | 18.551 | 20.427 | 11.163 | 20.238 | 10.633 | 11.696 | 20.417 | 21.948 | 10.249 |
| D10 | 22.189 | 21.644 | 13.536 | 20.604 | 22.151 | 23.253 | 16.617 | 13.158 | 20.903 | 12.477 | 13.344 | 20.462 | 18.716 | 12.254 |
| TS1 | 24.341 | 20.152 | 9.103 | 18.000 | 21.172 | 20.458 | 12.932 | 12.161 | 19.118 | 9.412 | 14.185 | 21.806 | 18.438 | 9.538 |
| W60 | 22.070 | 25.601 | 11.726 | 22.729 | 23.569 | 22.131 | 14.776 | 12.312 | 23.816 | 11.664 | 13.914 | 23.791 | 20.762 | 11.043 |
| Z33 | 23.262 | 23.045 | 13.485 | 26.136 | 25.155 | 22.913 | 36.186 | 15.336 | 24.465 | 14.333 | 13.953 | 24.463 | 23.461 | 12.637 |
| W80 | 26.172 | 24.831 | 11.303 | 21.578 | 23.056 | 21.526 | 16.996 | 10.527 | 20.623 | 11.707 | 13.566 | 22.845 | 20.766 | 10.182 |
| Average | 24.342 | 23.202 | 11.910 | 22.234 | 22.939 | 21.779 | 19.820 | 12.516 | 21.976 | 11.526 | 13.311 | 22.688 | 21.040 | 11.067 |
Note: Lower CIEDE2000 values mean better performance; bold font denotes the minimum CIEDE2000 value for each test image.
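As a hedged illustration of how CIEDE2000 scores such as those in Table 1 can be computed, the sketch below measures the mean per-pixel CIEDE2000 difference between an enhanced image and a reference using scikit-image. The paper's exact evaluation protocol (for example, whether differences are computed over color-checker patches or whole images) is not restated here, so treat this as an approximation rather than the authors' pipeline.

```python
import numpy as np
from skimage import color

def mean_ciede2000(reference_rgb, test_rgb):
    """Mean CIEDE2000 difference between two RGB images with values in [0, 1]."""
    lab_ref = color.rgb2lab(reference_rgb)                 # convert sRGB to CIELAB
    lab_test = color.rgb2lab(test_rgb)
    delta_e = color.deltaE_ciede2000(lab_ref, lab_test)    # per-pixel difference
    return float(delta_e.mean())
```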
Table 2. Quantitative performance analyses on the UIEB and RUIE datasets. The best performance value for each metric is highlighted in red, and the second-best is marked in blue.

UIEB:

| Methods | AG ↑ | EI ↑ | CCF ↑ | FADE ↓ | UIQM ↑ | UCIQE ↑ |
|---|---|---|---|---|---|---|
| ACDC | 9.762 | 96.073 | 26.429 | 1.056 | 2.447 | 0.526 |
| PDRMRV | 10.332 | 100.615 | 30.126 | 1.163 | 3.449 | 0.581 |
| MMLE | 12.020 | 102.066 | 31.220 | 1.107 | 2.710 | 0.536 |
| WWPF | 11.660 | 105.163 | 27.185 | 1.164 | 2.451 | 0.514 |
| MCLA | 8.165 | 73.561 | 32.536 | 1.412 | 2.175 | 0.529 |
| HLRP | 7.052 | 68.190 | 26.482 | 1.029 | 2.258 | 0.469 |
| L2UWE | 12.565 | 115.816 | 30.183 | 1.364 | 2.018 | 0.374 |
| SPDF | 15.161 | 130.258 | 27.676 | 0.915 | 3.040 | 0.566 |
| UDHTV | 8.748 | 78.461 | 32.715 | 1.146 | 2.769 | 0.551 |
| HFM | 7.260 | 72.009 | 34.247 | 1.391 | 2.326 | 0.492 |
| WaterNet | 10.591 | 90.163 | 29.155 | 1.088 | 2.819 | 0.527 |
| UTransNet | 11.191 | 110.530 | 31.490 | 1.286 | 2.731 | 0.544 |
| Proposed | 14.860 | 128.227 | 32.219 | 0.883 | 3.588 | 0.603 |

RUIE:

| Methods | AG ↑ | EI ↑ | CCF ↑ | FADE ↓ | UIQM ↑ | UCIQE ↑ |
|---|---|---|---|---|---|---|
| ACDC | 10.526 | 104.784 | 30.907 | 0.903 | 2.586 | 0.553 |
| PDRMRV | 11.484 | 115.715 | 32.455 | 1.186 | 3.328 | 0.572 |
| MMLE | 10.776 | 103.329 | 38.370 | 0.929 | 2.802 | 0.579 |
| WWPF | 11.152 | 112.237 | 37.163 | 0.872 | 2.929 | 0.563 |
| MCLA | 8.116 | 73.130 | 40.154 | 1.536 | 1.814 | 0.502 |
| HLRP | 7.409 | 70.247 | 35.708 | 1.336 | 3.229 | 0.547 |
| L2UWE | 11.337 | 120.940 | 31.037 | 1.349 | 2.267 | 0.401 |
| SPDF | 11.702 | 138.135 | 29.537 | 0.981 | 2.848 | 0.553 |
| UDHTV | 9.017 | 75.398 | 34.126 | 1.406 | 2.533 | 0.527 |
| HFM | 9.443 | 87.596 | 43.533 | 1.548 | 1.922 | 0.516 |
| WaterNet | 11.665 | 108.283 | 30.308 | 0.902 | 3.057 | 0.572 |
| UTransNet | 12.238 | 123.541 | 34.048 | 0.916 | 2.970 | 0.563 |
| Proposed | 13.836 | 133.357 | 45.184 | 0.891 | 3.514 | 0.610 |
Note: ↑ means higher values reflect better performance, while ↓ means lower values reflect better performance.
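For readers unfamiliar with the no-reference metrics used in Tables 2 and 3, the sketch below shows one common reimplementation of UCIQE [66] as a weighted sum of chroma standard deviation, luminance contrast, and mean saturation in CIELAB space. The weights follow the original paper, but public reimplementations differ in normalization and in the saturation definition, so absolute values may not match the tables exactly.

```python
import numpy as np
from skimage import color

def uciqe(rgb, c1=0.4680, c2=0.2745, c3=0.2576):
    """Approximate UCIQE [66] for a float RGB image with values in [0, 1].

    The three coefficients follow the original paper; the saturation term is
    one common choice among public reimplementations.
    """
    lab = color.rgb2lab(rgb)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std() / 100.0                                   # chroma spread (scaled)
    con_l = (np.percentile(L, 99) - np.percentile(L, 1)) / 100.0     # luminance contrast
    mu_s = np.mean(chroma / (np.sqrt(chroma ** 2 + L ** 2) + 1e-6))  # mean saturation
    return c1 * sigma_c + c2 * con_l + c3 * mu_s
```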
Table 3. Quantitative performance analyses on the OceanDark dataset. The best performance value for each metric is highlighted in red, and the second-best is marked in blue.

| Methods | AG ↑ | EI ↑ | CCF ↑ | FADE ↓ | UIQM ↑ | UCIQE ↑ |
|---|---|---|---|---|---|---|
| ACDC | 5.912 | 67.048 | 22.825 | 0.837 | 2.257 | 0.511 |
| PDRMRV | 7.692 | 82.775 | 24.983 | 0.844 | 2.916 | 0.547 |
| MMLE | 9.519 | 87.131 | 26.782 | 0.831 | 2.426 | 0.517 |
| WWPF | 9.385 | 84.269 | 21.329 | 0.841 | 2.269 | 0.491 |
| MCLA | 5.866 | 64.578 | 25.837 | 0.925 | 2.084 | 0.549 |
| HLRP | 5.625 | 61.574 | 22.334 | 1.422 | 2.683 | 0.513 |
| L2UWE | 10.745 | 90.674 | 21.603 | 0.894 | 2.012 | 0.536 |
| SPDF | 13.236 | 116.174 | 22.740 | 0.855 | 2.806 | 0.531 |
| UDHTV | 5.917 | 66.233 | 26.175 | 0.948 | 2.553 | 0.549 |
| HFM | 5.434 | 62.113 | 25.068 | 0.904 | 2.127 | 0.473 |
| WaterNet | 8.379 | 80.158 | 23.256 | 0.836 | 3.066 | 0.577 |
| UTransNet | 9.016 | 86.441 | 24.698 | 0.859 | 2.511 | 0.528 |
| Proposed | 12.966 | 103.368 | 26.427 | 0.817 | 3.115 | 0.593 |
Note: ↑ means higher values reflect better performance, while ↓ means lower values reflect better performance.
Table 4. Quantitative Evaluations of Ablation Results on the OceanDark Dataset. The best performance for each metric is highlighted in red.

| Methods | AG ↑ | EI ↑ | CCF ↑ | FADE ↓ | UIQM ↑ | UCIQE ↑ |
|---|---|---|---|---|---|---|
| -w/o CC | 8.531 | 78.072 | 18.194 | 1.792 | 1.363 | 0.254 |
| -w/o LCR | 10.273 | 93.785 | 21.869 | 1.207 | 3.082 | 0.586 |
| -w/o GBA | 9.385 | 87.156 | 25.806 | 1.386 | 2.238 | 0.318 |
| -w/o CAE | 6.249 | 52.328 | 26.063 | 0.941 | 2.725 | 0.572 |
| Proposed | 12.966 | 103.368 | 26.427 | 0.817 | 3.115 | 0.593 |
Note: ↑ means higher values reflect better performance, while ↓ means lower values reflect better performance.
Table 5. Comparison of running times for images with different resolutions (in seconds).

| Image Resolution | ACDC | PDRMRV | MMLE | WWPF | MCLA | HLRP | L2UWE | SPDF | UDHTV | HFM | WaterNet | UTransNet | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 400 × 300 | 0.72 | 15.77 | 0.16 | 0.42 | 2.26 | 1.27 | 5.15 | 1.43 | 2.44 | 1.20 | 0.12 | 0.27 | 0.69 |
| 1280 × 720 | 3.08 | 137.50 | 1.29 | 2.83 | 12.60 | 5.26 | 38.20 | 9.37 | 19.94 | 5.25 | 1.17 | 1.53 | 4.51 |
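The running times in Table 5 depend on hardware and implementation language. As a hedged example of how such numbers can be measured, the sketch below times an arbitrary enhancement function (enhance_fn is a hypothetical placeholder, not a function defined in this paper) on synthetic images at the two tested resolutions.

```python
import time
import numpy as np

def average_runtime(enhance_fn, height, width, repeats=5):
    """Average wall-clock time (seconds) of enhance_fn on a random RGB image."""
    img = np.random.rand(height, width, 3).astype(np.float32)
    enhance_fn(img)                      # warm-up call, excluded from timing
    start = time.perf_counter()
    for _ in range(repeats):
        enhance_fn(img)
    return (time.perf_counter() - start) / repeats

# Usage with a hypothetical method, matching the resolutions in Table 5:
# print(average_runtime(my_enhancer, 300, 400))    # 400 x 300 image
# print(average_runtime(my_enhancer, 720, 1280))   # 1280 x 720 image
```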