Article

An Underwater Low-Light Image Enhancement Algorithm Based on Image Fusion and Color Balance

1 Donghai Laboratory, Zhoushan 316021, China
2 Research Institute of Underwater Vehicles and Intelligent Systems, University of Shanghai for Science and Technology, Jungong Road 516, Shanghai 200093, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(11), 2049; https://doi.org/10.3390/jmse13112049
Submission received: 2 October 2025 / Revised: 21 October 2025 / Accepted: 24 October 2025 / Published: 26 October 2025
(This article belongs to the Section Ocean Engineering)

Abstract

Underwater vehicles are widely used in underwater salvage and underwater photography. However, the processing of underwater images has always been a significant challenge. Due to low-light conditions in underwater environments, images are often affected by color casts, low visibility, and missing edge details. These issues seriously affect the accuracy of underwater object detection by underwater vehicles. To address these problems, an underwater low-light image enhancement method based on image fusion and color balance is proposed in this paper. First, color compensation and white balance algorithms are employed to restore the natural appearance of the images. The texture characteristics of these white-balanced images are then enhanced using unsharp masking (USM). Subsequently, dual channel dehazing is applied, which improves image visibility and avoids the blocking artifacts common in traditional dark-channel dehazing. Finally, the sharpened and dehazed images are combined through multi-scale fusion to obtain the final enhanced image. In quantitative analysis, the PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity Index), UIQM (Underwater Image Quality Measurement), and UCIQE (Underwater Color Image Quality Evaluation) were 28.62, 0.8753, 0.8831, and 0.5928, respectively. The results show that the images generated by this enhancement technique have higher visibility than those produced by other methods, with richer detail and preserved edge information.

1. Introduction

The vision system of an underwater vehicle plays a role similar to that of human eyes: it is the main means of acquiring information about the surrounding environment and provides location information for underwater objects. Currently, underwater object detection and recognition technologies are primarily based on acoustic vision and light vision. Sonar provides an overview and outline of underwater objects, but the underwater acoustic environment introduces significant noise and artifacts. These degradations not only impair image quality but also hinder accurate object detection [1]. In contrast, underwater object detection and recognition based on light vision employs optical vision sensors, which are characterized by high imaging resolution. This makes them especially suitable for the precise positioning and short-range identification of underwater objects. The high-definition images produced by this technology allow underwater robots to accurately identify objects and undertake appropriate actions, such as fishing and observation.
Underwater object detection and recognition technology based on light vision is therefore regarded as one of the keys to realizing the intelligent operation of underwater robots. However, underwater images acquired by optical vision systems are often severely degraded, mainly due to the attenuation of light propagation and heavy clutter caused by absorption and scattering in seawater. Absorption and scattering reduce and redirect light energy, giving underwater images a foggy appearance: contrast is reduced and distant objects are blurred. Such severely degraded underwater images lack the effective information needed for object recognition, which increases the difficulty of underwater object detection and recognition. The quality of underwater images has improved with the continuous development of high-tech underwater imaging equipment. However, problems such as color fading, low contrast, and blurred details are still encountered. Therefore, the enhancement of underwater images remains necessary.
Underwater image enhancement algorithms are mainly categorized into traditional algorithms, algorithms based on deep learning, and algorithms based on image fusion. Traditional algorithms encompass grayscale transformation, spatial domain filtering [2], frequency domain filtering, histogram equalization (HE) [3], and edge detection [4]. HE, the most common traditional method, aims to improve the contrast and visual quality of images. However, HE can exacerbate noise and irrelevant details in the image. Therefore, improved algorithms such as adaptive histogram equalization (AHE) [5] and contrast-limited adaptive histogram equalization (CLAHE) [6] have been proposed. Additionally, a histogram equalization algorithm based on optimized adaptive image quadruple segmentation cropping (AQSCHE) was introduced. This achieves adaptive histogram cropping by optimizing the histogram distribution and constructing cropping parameters. Furthermore, in order to better detect underwater targets, linear mapping changes in pixels of different gray levels were applied to the dataset based on the histogram to improve image contrast [7]. Despite the effectiveness of HE and its derived algorithms in enhancing underwater and low-light images, issues with color and detail still persist.
Certain limitations are present in traditional image enhancement algorithms, so many researchers have turned to neural networks to solve this problem and proposed various network structures. A fast and robust method based on deep scene curves was proposed for underwater image enhancement, performing lightweight hyperparameter estimation for noise reduction [8]. The Asymmetric Large-kernel Attention Network (ALAN) [9] employs asymmetric deep convolutional attention instead of multi-head self-attention, enabling faster processing and higher-quality images. A new method for infrared and visible light image fusion based on a deep learning framework of generative adversarial networks (GANs) and residual networks was proposed and applied to road scenes [10]. Due to the scarcity of underwater environment datasets, a large-scale Real-World Underwater Image Enhancement (RUIE) dataset was constructed for evaluating model performance [11]. Such datasets typically include subsets targeting different challenges, namely image visibility quality, color casts, and higher-level detection and classification. Overall, significant progress has been made by these studies. However, image processing for special underwater environments remains insufficient and requires further research and improvement.
Image enhancement algorithms based on image fusion improve image quality by combining multiple images or different parts of an image. The visibility and texture features of an image can be further enhanced, while noise is reduced, by wavelet fusion of dark channel prior (DCP) images and DCP + unsharp masking (USM) images [12]. In [13], the images to be fused are defined as color-compensated and white-balanced versions of the original image, together with weight maps that facilitate the transfer of edge and color contrast to the output image; to avoid artifacts in the low-frequency components of the reconstructed image, a multi-scale fusion strategy is adopted. Although these methods achieve good contrast enhancement in low-light images, they still suffer from detail loss.
A new underwater image enhancement algorithm suitable for low-light environments is proposed in this paper. The contributions of this paper are described as follows. Firstly, color compensation is used to eliminate the effects of underwater light attenuation and wavelength absorption and restore the true color. Then, dual channel dehazing is used to remove underwater blur. Finally, multi-scale fusion is used to combine the advantages of color compensation and dehazing to further enhance the image.
The rest of this paper is organized as follows. In Section 2, the proposed fusion algorithm is introduced to address the problems of color cast, low visibility, and missing details in underwater images. The improvements and innovations of the algorithm are discussed. In Section 3, the performance of the fusion algorithm is comprehensively evaluated and analyzed. These evaluations are conducted through experiments on an underwater dataset sourced from the Underwater Image Enhancement Benchmark (UIEB) [14]. The benchmark, consisting of 890 real underwater images and their corresponding reference images, is used to verify the excellent performance of the proposed algorithm in underwater image enhancement. In Section 4, the work in this paper is summarized, highlighting the research contributions and potential directions for future improvements.

2. Proposed Method

The proposed method is mainly divided into three parts: color compensation, underwater defogging, and multi-scale fusion. The method framework is shown in Figure 1. The color-compensated image is detail-enhanced and, together with the output of the defogging module, serves as input for multi-scale fusion. Finally, enhanced low-light underwater images are produced.
Color compensation is used to compensate for color casts caused by depth-selective absorption of color. The natural appearance of the image is restored by this process. With color compensation, dehazing algorithms and USM can be applied to underwater images. In the underwater defogging stage, an improved defogging algorithm is used to process the image. The depth information of the image is analyzed based on atmospheric light estimation and transmittance estimation. This process mitigates the reduction in contrast induced by underwater scattering, thereby recovering clarity and brightness. Multi-scale fusion is used to extract and enhance the edges and details of the image at different scales. The image is decomposed and reconstructed to alleviate the loss of contrast and detail caused by backscattering. The fusion steps are described in detail in Section 2.3.

2.1. Color Compensation

Images acquired by underwater imaging systems suffer severe degradation, mainly due to the low visibility of underwater environments caused by the attenuation of light in water. According to the Lambert–Beer empirical law, light is attenuated exponentially in a medium, and the degree of attenuation is directly determined by the characteristics of the medium. The light attenuation model is given by Equation (1), where $E_r$ denotes the illumination at distance $r$, $E_0$ the initial illumination, and $c$ the total attenuation coefficient of the medium. When natural light enters water, it is scattered and absorbed by the medium, so the light attenuation model in water is expressed by Equation (2), where $a$ is the absorption coefficient and $b$ is the scattering coefficient; the sum of $a$ and $b$ equals the total attenuation coefficient $c$. In the sea areas defined in this article as low-light, which are affected by phytoplankton and light scattering, $a$ ranges from 0.2 to 1.0 and $b$ from 0.5 to 5.0.
$E_r = E_0 e^{-cr}$  (1)
$E_r = E_0 e^{-ar} e^{-br}$  (2)
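As a quick illustration with mid-range values from this low-light regime, taking $a = 0.5\ \mathrm{m^{-1}}$ and $b = 1.0\ \mathrm{m^{-1}}$ gives $c = 1.5\ \mathrm{m^{-1}}$, so after propagating only $r = 2$ m the remaining illumination is $E_r/E_0 = e^{-3} \approx 5\%$.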
As shown in Figure 2, light of different wavelengths is absorbed by the water medium to different degrees. The red light has a longer wavelength and a lower frequency, so it disappears earlier. As the water depth increases, the green light is gradually absorbed and disappears. The blue light is absorbed and disappears at a depth of more than 60 m. Therefore, images acquired underwater generally appear in blue-green tones. Image quality is generally poor in underwater environments.
White balance is used in the proposed enhancement method to remove undesirable color casts caused by varying illumination or the attenuation characteristics of the medium. However, according to the above analysis of underwater light attenuation, color correction should be performed before white balance processing: underwater color perception is highly depth-dependent, and the color bias must be corrected first. The green channel is relatively well preserved underwater compared with the red and blue channels because it suffers less attenuation, and it contains opponent color information, which makes it especially suitable for compensating the loss of the red channel. The attenuation of the red channel is therefore compensated by adding a fraction of the green-channel signal to it. This effectively improves the color balance of underwater images, reducing color casts and making the images more natural and easier to interpret.
Mathematically, the compensated red channel Irc for each pixel position (x) can be expressed as:
$I_{rc}(x) = I_r(x) + d\left(\bar{I}_g - \bar{I}_r\right) \cdot \left(1 - I_r(x)\right) \cdot I_g(x)$  (3)
In the equation, $I_r$ and $I_g$ denote the red and green channels of image $I$, each normalized by the upper limit of its dynamic range so that values lie in the interval [0, 1], and $\bar{I}_r$ and $\bar{I}_g$ denote their average values. The parameter $d$ is a constant, set to 1 in this context. According to the study in [15], absorption by organic matter in turbid waters, or in places with high concentrations of plankton, causes significant attenuation of the blue channel. In such cases, the blue channel is strongly attenuated and compensating the red channel alone is insufficient. The compensated blue channel $I_{bc}$ is calculated as follows:
$I_{bc}(x) = I_b(x) + d\left(\bar{I}_g - \bar{I}_b\right) \cdot \left(1 - I_b(x)\right) \cdot I_g(x)$  (4)
Here, $I_b$ and $I_g$ denote the blue and green channels of image $I$, respectively.
After the red and blue channel attenuation is compensated, the color cast of the image is greatly reduced. Next, the gray-world assumption is used in the white balance step to estimate and compensate for the color cast of the light source. The gray-world algorithm assumes that, for an image with many color variations, the average values of the R, G, and B components tend toward a common gray level $K$, calculated as $K = (\bar{I}_r + \bar{I}_g + \bar{I}_b)/3$, where $\bar{I}_r$, $\bar{I}_g$, and $\bar{I}_b$ are the averages of the red, green, and blue channels, respectively. The gain of each channel is obtained by dividing $K$ by that channel's average value, and each channel is multiplied by its gain to obtain the new channel values. Finally, the maximum value over all channels is computed, and the data are linearly re-mapped into the range [0, 255]. After color compensation, the image recovers its natural color appearance, and the R, G, and B values approach those of a normal image. However, the image still suffers from insufficient detail and low visibility.
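To make the sequence concrete, the following Python sketch implements Equations (3) and (4) followed by the gray-world step described above. It is a minimal illustration with our own variable names: it operates on float images in [0, 1] rather than [0, 255], and the constant d = 1 follows the text.

```python
import numpy as np

def compensate_and_balance(img, d=1.0):
    """img: H x W x 3 float array in [0, 1], channel order (R, G, B)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Equation (3): compensate the red channel using green-channel information
    rc = r + d * (g.mean() - r.mean()) * (1.0 - r) * g
    # Equation (4): compensate the blue channel analogously (turbid-water case)
    bc = b + d * (g.mean() - b.mean()) * (1.0 - b) * g
    comp = np.stack([rc, g, bc], axis=-1)
    # Gray-world step: scale every channel toward the common gray level K
    means = comp.reshape(-1, 3).mean(axis=0)
    K = means.mean()
    balanced = comp * (K / means)
    # Linearly re-map using the global maximum (here into [0, 1])
    return np.clip(balanced / balanced.max(), 0.0, 1.0)
```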

2.2. Underwater Defogging

Although white balance is crucial for restoring color, the problems of foggy blur, low contrast, and color distortion in underwater images cannot be solved by white balance correction alone. The edges and details of the scene are affected by scattering, so blur and haze remain in the image. Therefore, an image fusion method is adopted, in which an improved DCP and sharpening are used to address the blur in the white-balanced image. The degradation of underwater images is similar in principle to that of outdoor foggy images, and the two share many visual characteristics. Therefore, the DCP method proposed in [16] can, in theory, be used for underwater image restoration with some success. However, there are essential differences between the propagation of light underwater and in air. Specifically, the absorption of light underwater is highly selective: longer wavelengths are absorbed and attenuated more severely in water. Different transmittance values for light of various wavelengths should therefore be considered. To better suit underwater low-light images, improvements to DCP have been made, and the improved DCP result is used as the first input of the fusion to reduce the haze effect. Additionally, a low-light image enhancement method based on a dual channel prior is proposed, building on two existing image priors: the dark channel prior and the bright channel prior [17]. This method extracts more information from images to improve their quality and visibility.
Formally, given an image I, the dark channel can be defined as
$I^{dark}(x) = \min_{c \in \{r,g,b\}} \left( \min_{y \in \Omega(x)} I^c(y) \right)$  (5)
where $I^c$ is a color channel of image $I$, $\Omega(x)$ denotes the local block centered on $x$, and $y$ is a pixel in the local block $\Omega(x)$. For haze-free images, $I^{dark}(x) \to 0$, while for blurred images $I^{dark}(x)$ is no longer black. In a blurred image, the intensity of the dark pixels in this channel is mainly contributed by the air light; therefore, these dark pixels can directly provide an accurate estimate of the haze transmittance.
Inspired by DCP, the bright channel prior is proposed. In most well-lit image patches, at least one color channel has very high intensity at some pixels. Therefore, for such a patch, the maximum intensity should have a very high value. This provides an accurate estimate of the haze transmittance. Formally, given an image I, the bright channel prior can be expressed as
$I^{bright}(x) = \max_{c \in \{r,g,b\}} \left( \max_{y \in \Omega(x)} I^c(y) \right)$  (6)
The local block size for both the dark and bright channel operations is set to 15 × 15. It can be observed that the image structure of the bright channel is more complete than that of the dark channel. However, some structures missed by the bright channel appear in the dark channel. Therefore, the dark channel can be used as a complement to correct potentially erroneous transmittance estimates obtained from the bright channel prior. Combining the dark and bright channels in this way takes full advantage of their respective strengths and improves the accuracy of the transmittance estimates.
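As a minimal sketch, both channels can be computed with grayscale morphology, since the inner min/max over the 15 × 15 block is an erosion/dilation of the per-pixel channel extremes. OpenCV is assumed here; it is not named by the authors.

```python
import cv2
import numpy as np

def dark_bright_channels(img, patch=15):
    """img: H x W x 3 float32 array; returns (I_dark, I_bright) per Equations (5) and (6)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(img.min(axis=2), kernel)     # local min of the channel-wise min
    bright = cv2.dilate(img.max(axis=2), kernel)  # local max of the channel-wise max
    return dark, bright
```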
Given an input underwater low-light image I, its image exposure model can be expressed as
$I(x) = J(x)\,t(x) + A\left(1 - t(x)\right)$  (7)
In the equation, J(x) represents the radiance of the real scene. A denotes the global ambient illumination. t(x) signifies the medium transmittance. It reflects the degree of light attenuation as it propagates in the medium. To correct the low-light image I and obtain a well-exposed image J, J(x) can be solved from Equation (7):
$J(x) = \dfrac{I(x) - A}{t(x)} + A$  (8)
It can be seen from Equation (8) that if t(x) and A are known, then a well-exposed image J can be calculated.
For atmospheric light estimation, an approach similar to that of [16] is adopted: the top 10% brightest pixels are selected in the bright channel, and the average intensity of these pixels in the input image is taken as the atmospheric light. This avoids overestimating it.
Rewriting Equation (7) in terms of the bright channel of Equation (6) gives:
$I^{bright}(x) = J^{bright}(x)\,t^{bright}(x) + A^c\left(1 - t^{bright}(x)\right)$  (9)
According to the bright channel prior, $I^{bright}(x) \to 255$:
$I^{bright}(x) = \max_{c} \left( \max_{y \in \Omega(x)} I^c(y) \right) = 255$  (10)
From Equations (9) and (10) we obtain:
$\dfrac{255}{A^c} = t^{bright}(x)\,\dfrac{J^{bright}(x)}{A^c} + 1 - t^{bright}(x)$  (11)
From Equation (11):
$t^{bright}(x) = \dfrac{255 - A^c}{J^{bright}(x) - A^c}$  (12)
From Equation (12), the initial transmittance $t^{bright}(x)$ of the bright channel prior can be estimated. To correct potentially erroneous transmittance estimates obtained from the bright channel prior, the dark channel is used as a supplement. First, the difference between the bright and dark channels is calculated:
$I^{diff}(x) = I^{bright}(x) - I^{dark}(x)$  (13)
Then, it is determined whether $I^{diff}(x)$ is less than a predefined threshold $a$, where $a = 0.4$. If $I^{diff}(x)$ falls below this threshold, pixel $x$ is considered to belong to a dark object, and its transmittance is deemed unreliable. For each unreliable pixel, the transmittance is corrected using the value obtained from the dark channel; pixels with smaller dark-channel weights are attenuated more. Denoting the transmittance obtained from the dark channel prior as $t^{dark}(x)$, the transmittance is corrected as:
$t^{corrected}(x) = t^{dark}(x) \times t^{bright}(x)$  (14)
Although a more complete structure can be obtained using the dual channel prior, obvious blocking artifacts remain in some areas. These are caused by the dark channel prior's reliance on the minimum filter, and edge information is usually lost in the transmission map. Therefore, guided filtering is applied next for denoising and enhancement, ensuring that the details and edges of the image are preserved.
Once the atmospheric light $A$ and the final transmittance $t(x)$ are estimated, the scene brightness can be restored according to Equation (8). In Equation (8), when the transmittance $t(x)$ is close to 0, the term $\frac{I(x) - A}{t(x)}$ becomes very large, and the directly restored scene brightness $J(x)$ is corrupted by noise. To avoid this, the transmittance $t(x)$ is bounded below by $t_0 = 0.1$. The final scene brightness $J(x)$ is then determined as follows:
$J(x) = \dfrac{I(x) - A}{\max(t(x),\, t_0)} + A$  (15)
The pseudocode of the dual channel prior algorithm is shown in Algorithm 1.
Algorithm 1. Pseudocode of the dual channel prior
Input: The input image Ic, color compensation underwater dataset
Output: Dual channel prior processed J(x)
Initialization: A, ω = 0.75, t0 = 0.1
For each Ic in the color-compensated underwater dataset do:
  Calculate $I^{dark}(x)$ via Equation (5)
  Calculate $I^{bright}(x)$ via Equation (6)
  Calculate $t^{bright}(x)$ via Equation (12)
  Calculate $t^{dark}(x) = 1 - \omega \min_{c \in \{r,g,b\}} \left( \min_{y \in \Omega(x)} \dfrac{I^c(y)}{A} \right)$
  Calculate $I^{diff}(x)$ via Equation (13)
  If $I^{diff}(x) < 0.4$ then:
    Calculate $t^{corrected}(x)$ via Equation (14)
  Else:
    Set $t^{corrected}(x) = t^{bright}(x)$
  Apply guided filtering to $t^{corrected}(x)$
  Calculate $J(x)$ via Equation (15)
End for
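A condensed Python sketch of Algorithm 1 follows. It is an illustration under stated assumptions rather than the authors' code: the bright-channel transmittance uses the standard bright-channel-prior form t_bright = (I_bright − A)/(255 − A), obtained from Equation (9) by assuming the well-exposed bright channel tends to 255; the guided filter comes from the opencv-contrib module cv2.ximgproc, with radius and eps values of our choosing; and the threshold a = 0.4 of Equation (13) is applied on the 255 intensity scale. The values ω = 0.75 and t0 = 0.1 follow the initialization above.

```python
import cv2
import numpy as np

def dual_channel_dehaze(img, omega=0.75, t0=0.1, a=0.4, patch=15):
    """img: H x W x 3 float32 array in [0, 255] (already color-compensated)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(img.min(axis=2), kernel)      # Equation (5)
    bright = cv2.dilate(img.max(axis=2), kernel)   # Equation (6)

    # Atmospheric light: mean colour of the top 10% brightest bright-channel pixels
    idx = np.argsort(bright.ravel())[-max(1, bright.size // 10):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    A_mean = float(A.mean())

    # Bright-channel transmittance (well-exposed bright channel assumed -> 255)
    t_bright = (bright - A_mean) / (255.0 - A_mean)
    # Dark-channel transmittance with haze-retention factor omega
    t_dark = 1.0 - omega * dark / A_mean
    # Equations (13)-(14): correct unreliable pixels with the dark channel
    unreliable = (bright - dark) < a * 255.0
    t = np.where(unreliable, t_dark * t_bright, t_bright).astype(np.float32)

    # Edge-preserving refinement of the transmission map (radius/eps are ours)
    guide = (img.mean(axis=2) / 255.0).astype(np.float32)
    t = cv2.ximgproc.guidedFilter(guide, t, 60, 1e-3)

    t = np.clip(t, t0, 1.0)[..., None]             # lower-bound t(x) by t0
    return (img - A) / t + A                       # Equation (15)
```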
To compensate for the loss of detail caused by the dehazing algorithm, unsharp masking (USM) is employed to enhance the edges and details of the white-balanced image. The high-frequency information of the image, obtained by subtracting a blurred version from the original, is used to highlight edges and details, and the sharpened result serves as the second input of the fusion. This process can be expressed by the following equation:
$J_{sharp}(x) = J(x) + \gamma\left(J(x) - J_{blur}(x)\right)$  (16)
Here, $J(x)$ is the white-balanced image, $J_{blur}(x)$ is a blurred version of $J(x)$, and the enhancement coefficient $\gamma$ controls the degree of enhancement.
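Equation (16) is a one-liner in practice. In the sketch below, the Gaussian blur width and the gain γ are illustrative choices of ours, not values specified in the paper:

```python
import cv2

def unsharp_mask(img, gamma=1.5, sigma=3.0):
    """Equation (16): J_sharp = J + gamma * (J - J_blur)."""
    blur = cv2.GaussianBlur(img, (0, 0), sigma)                  # J_blur(x)
    return cv2.addWeighted(img, 1.0 + gamma, blur, -gamma, 0.0)  # J + gamma*(J - blur)
```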

2.3. Multi-Scale Fusion

In Sections 2.1 and 2.2, the image was preprocessed using the color compensation white balance algorithm and the dual channel prior algorithm. These methods enhance the contrast of the image and correct the color cast. However, improving a single feature alone is still insufficient to achieve the desired visual effect, so a multi-scale fusion strategy is introduced in this section, and fusion weights are defined to further improve image quality. First, the Laplacian contrast weight (WL) captures edge and texture information in the image. Second, the saliency weight (WS) determines the importance of the highlighted areas. Finally, the saturation weight (WSat) measures the richness of colors. With these weights, the different features of the image can be fully considered. Subsequently, a multi-scale fusion method based on pyramid decomposition is applied to further enhance the preprocessed images. Specifically, the color-corrected image and the underwater dehazed image obtained in the preceding two sections serve as inputs to the fusion algorithm. Following the multi-scale fusion strategy, the input images and their normalized weight maps are subjected to multi-scale decomposition and fusion. This process maximizes the advantages of the two inputs and ultimately yields an underwater enhanced image with richer global contrast and detail.
In fact, for each input, the three weight maps are merged into one, as shown in Figure 3. First, for each input image, WL, WS, and WSat are weighted and summed to obtain the aggregate weight map $W_k$, which accounts for edges, textures, highlighted areas, and color richness. Then, each aggregated weight map $W_k$ is normalized so that, at every pixel, the weights keep a consistent relative proportion across all input images. Formally, the normalized weight map for each input is calculated as $\bar{W}_k = \dfrac{W_k + \delta}{\sum_{k=1}^{K} W_k + K \cdot \delta}$, where $\delta$ is a small regularization term ensuring that each input contributes to the output; its value is set to 0.1.
Multi-scale decomposition is based on the Laplacian pyramid and the Gaussian pyramid. Gaussian filtering is first applied to the input image to generate a Gaussian pyramid: each layer is obtained by filtering the previous layer with a low-pass Gaussian kernel G and then downsampling. Next, each coarser layer is upsampled by a factor of 2 and subtracted from the finer layer above it, yielding one layer of the Laplacian pyramid. This operation is applied at every level of the pyramid until the desired scale is reached. In this way, a series of Gaussian pyramid levels and corresponding Laplacian pyramid levels at different resolutions are obtained and used for the subsequent image fusion. The levels of the pyramid are defined as follows:
$I(x) = \underbrace{I(x) - G_1\{I(x)\}}_{L_1\{I(x)\}} + G_1\{I(x)\} = L_1\{I(x)\} + G_1\{I(x)\} - G_2\{I(x)\} + G_2\{I(x)\} = L_1\{I(x)\} + L_2\{I(x)\} + G_2\{I(x)\} = \cdots = \sum_{l=1}^{N} L_l\{I(x)\}$  (17)
In Equation (17), L1 and G1 represent the first layer of the Laplacian pyramid and the Gaussian pyramid, respectively. The fusion process is shown in Figure 3.
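The decomposition and blending can be sketched as follows. This is a minimal illustration: the construction of the three weight maps is omitted, and the normalized weights $\bar{W}_k$ are assumed to be given as single-channel arrays matching the input size.

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gauss = gaussian_pyramid(img, levels)
    lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
           for i in range(levels - 1)]
    return lap + [gauss[-1]]          # coarsest Gaussian level closes the pyramid

def multiscale_fusion(inputs, weights, levels=5):
    """inputs: list of H x W x 3 float32 images; weights: matching H x W maps summing to 1."""
    fused = None
    for img, w in zip(inputs, weights):
        lap = laplacian_pyramid(img, levels)
        w_pyr = gaussian_pyramid(w.astype(np.float32), levels)   # smoothed weights
        layers = [l * wl[..., None] for l, wl in zip(lap, w_pyr)]
        fused = layers if fused is None else [f + l for f, l in zip(fused, layers)]
    out = fused[-1]                   # collapse the fused pyramid, coarse to fine
    for layer in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=layer.shape[1::-1]) + layer
    return out
```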

3. Experiment and Analysis

In this section, the effectiveness and robustness of the algorithm in various scenarios are verified. The public dataset UIEB is utilized for experiments. The experiment platform is based on Python 3.8. The operating environment is 64-bit Windows 11. The computer configuration includes an Intel (R) Core (TM) i7-12700H CPU @ 2.30 GHz and 16 GB of RAM. A comprehensive verification of the color compensation method introduced in Section 2 is included in the experiment. Additionally, the underwater defogging technology proposed in this paper is compared with the traditional DCP. Finally, through ablation experiments and subjective and objective evaluation, the accuracy and universality of the method in underwater datasets are proven. The algorithm’s effectiveness in key point matching applications is also verified.

3.1. Evaluation Metrics

To evaluate the quality of the enhanced underwater images and to compare the different algorithms, several image evaluation indicators are used to quantify the results, combining subjective visual perception with objective image quality metrics. Full-reference evaluation requires pairs of underwater images for comparison; the reference images provided by UIEB are used for this purpose in this paper. The peak signal-to-noise ratio (PSNR) [18] and structural similarity index (SSIM) [19] are employed as full-reference indicators.
PSNR: PSNR is defined based on the MSE (mean squared error). Given a reference image y and a generated image x, both of size m × n, the MSE is defined as follows:
$MSE(x, y) = \dfrac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ x(i,j) - y(i,j) \right]^2$
$PSNR(x, y) = 10 \log_{10} \left( \dfrac{MAX_I^2}{MSE(x, y)} \right)$
SSIM: SSIM evaluates the similarity of two images mainly in terms of luminance, contrast, and structure:
$SSIM(x, y) = \dfrac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$
where x and y denote the generated image and the real reference image, respectively. The mean of each channel of the image is represented by µx (µy), the standard deviation of each channel by σx (σy), and the covariance of x and y by σxy. The constants c1 and c2 ensure numerical stability and are usually set to 0.0001 and 0.0009.
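For reference, both full-reference metrics are available off the shelf in scikit-image (version 0.19 or later for the channel_axis argument). The random arrays below are placeholders for an enhanced result and its UIEB reference image:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder data -- substitute an enhanced result and its UIEB reference,
# both loaded as uint8 RGB arrays of identical shape.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
enhanced = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=255)
ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```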
In this paper, PSNR and SSIM measure the similarity between two images, but they do not assess the quality of enhanced underwater images from the perspective of underwater scenes. For this purpose, the no-reference indicators UIQM (Underwater Image Quality Measurement) [20] and UCIQE (Underwater Color Image Quality Evaluation) [21] are also used.
UIQM: UIQM combines three attributes of underwater images: the underwater image colorfulness measure (UICM), the underwater image sharpness measure (UISM), and the underwater image contrast measure (UIConM).
$UIQM = w_1 \cdot UICM + w_2 \cdot UISM + w_3 \cdot UIConM$
where w1, w2, and w3 are the weights of the respective components, set to 0.0282, 0.2963, and 3.5753.
UCIQE: UCIQE employs a linear combination of chroma, saturation, and contrast for quantitative evaluation to quantify uneven color skew, blur, and low contrast, respectively.
$UCIQE = c_1 \times \sigma_c + c_2 \times \sigma_b + c_3 \times \mu_s$
σc is the standard deviation of chromaticity, σb is the standard deviation of brightness, µs is the mean value of saturation, and c1, c2, and c3 are the weighting coefficients.

3.2. Experimentation and Analysis of Color Compensation

As shown in Figure 4, the color compensation method proposed in this article is compared with four classic white balance methods. Gray Edge [22] reduces the brightness of the image and adds considerable noise at the edges. Although Shade of Gray [23] improves the color cast, the overall background color bias remains poor. MAX RGB [24] produces little change in the blue-green appearance. Because the red and blue channels of the image are severely attenuated, the Gray World [25] algorithm mistakenly increases the gain of the other channels in an attempt to restore the hue of the image, which produces obvious red and blue blocks and causes distortion. In the proposed method, the red and blue channels are compensated first, and the color is then restored through the gray-world step. As Figure 4f shows, this method successfully corrects the color deviation without adding artificial noise. In summary, the proposed color compensation algorithm significantly improves the color of underwater images. Combined with the quantitative data in Table 1, the proposed color compensation method achieves the best results: the tone, contrast, and saturation of the image are restored as much as possible before fog removal and detail enhancement, and, to some extent, the noise of the original image is reduced and the structural similarity is improved.

3.3. Experiments and Analysis of Dual-Channel Priors

The comparison between DCP and the dual channel prior is shown in Figure 5; the original image has already been color-compensated. Figure 5b,c depict the processing result and the transmission map of traditional DCP, respectively. Traditional DCP improves the visibility of the image to a certain extent, but the contrast is excessive and an obvious color cast toward red is present. Many blocking artifacts appear at the bottom of the stone statues and at the edges of the image, and numerous artifacts on the transmission map contribute noise. Compared with traditional DCP, the improved dual channel prior is more effective, as shown in Figure 5d–f. Comparing the transmission map in Figure 5c with those in Figure 5d,e, the improved dual channel prior transmission maps provide more detailed information and smoother transitions, yielding the better result in Figure 5f. The dual channel prior not only addresses the color distortion but also further improves visibility. Combined with the quantitative data in Table 2, the underwater image has better quality after the dual channel prior, and the structural similarity is greatly improved.

3.4. Experiments and Analysis of Fused Images

3.4.1. Ablation Experiment

Four evaluation indexes (PSNR, SSIM, UIQM, and UCIQE) were used to conduct ablation experiments to further study the role of different modules in the algorithm. The effects of the proposed method were verified under different conditions. A dataset of 890 pairs of underwater images provided by UIEB was used, with each pair consisting of an original image and a reference image. The results are presented in Table 3.
Based on the ablation studies, it can be observed that color compensation consistently improves the image quality metrics across the different experimental setups. The dual channel prior contributes more significantly to the enhancement of SSIM and UCIQE, substantially increasing the structural and color similarity between the processed image and the reference. In contrast, the USM technique primarily improves the UIQM, which effectively enhances the visual attractiveness of the image. By integrating these approaches, the proposed fusion algorithm achieves an optimal balance among all evaluation metrics, ensuring that no single metric exhibits a noticeable deficiency while maintaining competitive performance across the others.

3.4.2. Visual Quality Assessment

A comparison experiment was conducted with other image enhancement algorithms under the same conditions to verify the superiority of this algorithm in different scenes. The advantages and disadvantages of the related algorithms were compared from various perspectives. The comparison involved combining subjective visual perception and objective image quality evaluation indexes. Furthermore, the effect of this algorithm on image detail enhancement was verified through feature point matching.
Subjective evaluation relies on individual participation in image quality evaluation. It involves evaluation based on human eye observation. Participants may be asked to rank, rate, or describe the perceived quality of images. This evaluation method can provide direct and intuitive evaluation results. However, it is greatly affected by subjective factors. On the other hand, objective evaluation is based on selected indicators. In underwater image processing, common objective evaluation indexes include PSNR, SSIM, UIQM, and UCIQE. These indicators can provide quantitative evaluation results. However, they may not fully reflect the perception of the human visual system.
Due to the influence of the underwater environment, images in underwater datasets usually exhibit color casts, haze, low brightness, and noise, resulting in extremely low visibility. Seven representative underwater images of different scenes were selected as research objects for comparison with CLAHE, HE, Gamma Correction (GC) [26], the Integrated Color Model (ICM) [27], Relative Global Histogram Stretching (RGHS) [28], the Unsupervised Color Correction Method (UCM) [29], and Minimum Information Preservation (MIP) [30]. In Figure 6, (a) shows the original images of the seven groups, (b–h) show the results of the seven classic algorithms listed above, and (i) shows the results of the algorithm in this article.
It can be observed from the seven sets of images that although the HE algorithm can enhance image details and contrast to a certain extent, underwater images still suffer from a serious color cast problem. They cannot accurately represent the true color of objects in the water and tend to be overexposed. This indicates that direct application of histogram equalization fails to yield satisfactory results when processing underwater degraded images. The CLAHE, GC, and MIP methods demonstrate poor dehazing effects on images with heavy color casts. They struggle to distinguish objects and backgrounds effectively, leading to inadequate image clarity. While some improvement in contrast is achieved, the visual effect is inferior to the original image. Images processed with the UCM exhibit excessive yellow-green coloration and obvious color cast issues. In comparison, the underwater degradation images processed by this algorithm successfully restore the color of the underwater image. They also improve overall brightness and contrast. The resulting images feature more natural and realistic colors, clearer details and texture. They also exhibit overall better visual effects, providing much greater satisfaction to viewers.
Objective evaluation provides strong support for qualitative understanding and is less influenced by personal preference than subjective evaluation. To better verify the effectiveness of the proposed method, the average quantitative indicators of each algorithm on the dataset are calculated; the results are shown in Table 4. As the table shows, the indicators improve to varying degrees after processing with the eight algorithms; each algorithm improves image quality to some extent but has its own strengths, leading to significant differences among the indicators. For PSNR, which focuses on noise reduction, the proposed method ranks just behind ICM and RGHS, with a small gap. For SSIM, which measures image similarity, the proposed method achieves the best result: color compensation corrects the color bias, and the dual channel complementarity realizes underwater dehazing, showing that the underwater images processed by this method are closest to the reference images. For the no-reference metrics, the image fusion method yields a significant improvement in UIQM, which accounts for underwater characteristics. In some areas of the enhanced images, however, the color saturation is low, so the algorithm has no advantage in UCIQE, which emphasizes color concentration. Considering the four indicators together, the proposed algorithm performs well in improving underwater image quality, further verifying its effectiveness and robustness.
Combining subjective visual effects and objective evaluation index data performance, this algorithm performs well in improving image color distortion. It also enhances contrast and removes underwater image blur. The restored color information is natural and the visual effect is significantly enhanced.
The purpose of image enhancement is to obtain clearer underwater images with better visual effects, which facilitates subsequent practical applications such as underwater biological classification and seafood identification. To demonstrate the practical value of the images processed by this algorithm, feature extraction and matching experiments were conducted based on the SURF (Speeded-Up Robust Features) [31] algorithm. SURF detects image features and can intuitively illustrate how well an algorithm improves image details: the higher the clarity of the processed image and the richer its texture details, the more image feature points are matched. Figure 7 shows the SURF feature point matching results for an underwater image from the UIEB dataset processed by the different algorithms, and Table 5 lists the number of matched feature points for each algorithm. The test results indicate that the number of correct feature point matches in the images processed by this algorithm is significantly higher than that of most other algorithms, second only to ICM and MIP. However, the indicators in Table 4 show that the images generated by those two algorithms are of low quality, leading to false matches of feature points. In conclusion, the algorithm presented in this paper not only improves image quality and visual effects but can also achieve better results in practical applications such as object recognition.

4. Conclusions

Enhancement methods for underwater images with low visibility, little edge information, and blue-green tones are studied in this paper. Color compensation is identified as a key step: it not only corrects the color cast but also lays the foundation for the effective application of subsequent methods to underwater images. A multi-scale fusion method is proposed to combine white-balanced images with dual channel dehazed images. The algorithm presented in this paper not only effectively addresses the blue-green cast of underwater images but also enhances overall brightness and restores the original colors. In tests on UIEB underwater images, the results demonstrate that the algorithm generates high-quality underwater images and performs well in visual perception tasks. According to the image quality evaluation indexes, the PSNR, SSIM, UIQM, and UCIQE of this algorithm are 28.62, 0.8753, 0.8831, and 0.5928, respectively. Compared with traditional methods, the algorithm demonstrates superior performance in visibility, texture features, information content, and feature matching, achieving a comprehensive enhancement of the tested underwater images. In the future, deep learning methods will be incorporated to further improve the enhancement of low-light underwater images.

Author Contributions

Conceptualization and Funding acquisition, D.Z.; Methodology, R.X., W.P. and M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This project is supported by the Science Foundation of Donghai Laboratory (DH2022KF01013) and the Creative Activity Plan for Science and Technology Commission of Shanghai (23550730300, 21DZ2293500).

Data Availability Statement

Requests to access the datasets should be directed to [https://li-chongyi.github.io/proj_benchmark.html].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhao, L.; Ren, X.; Fu, L.; Yun, Q.; Yang, J. UWS-YOLO: Advancing Underwater Sonar Object Detection Via Transfer Learning and Orthogonal-Snake Convolution Mechanisms. J. Mar. Sci. Eng. 2025, 13, 1847. [Google Scholar] [CrossRef]
  2. Noah, J.A.; Zhang, X.Z.; Dravida, S.; DiCocco, C.; Suzuki, T.; Aslin, R.N.; Tachtsidis, I.; Hirsch, J. Comparison Of Short-Channel Separation And Spatial Domain Filtering For Removal Of Non-Neural Components In Functional Near-Infrared Spectroscopy Signals. Neurophotonics 2021, 8, 015004. [Google Scholar] [CrossRef] [PubMed]
  3. Jha, K.; Sakhare, A.; Chavhan, N.; Lokulwar, P.P. A Review On Image Enhancement Techniques Using Histogram Equalization. In Proceedings of the AIDE-2023 and PCES-2023, Bengaluru, India, 27–28 October 2023; Hinweis Research: Trivandrum, India, 2023; pp. 497–502. [Google Scholar]
  4. Pu, M.; Huang, Y.; Liu, Y.; Guan, Q.; Ling, H. Edter: Edge Detection With Transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 1402–1412. [Google Scholar]
  5. Paul, A. Adaptive Tri-Plateau Limit Tri-Histogram Equalization Algorithm For Digital Image Enhancement. Vis. Comput. 2023, 39, 297–318. [Google Scholar] [CrossRef]
  6. Bury, T. GAN-CLAHE: Generative Adversarial Networks Enhanced with CLAHE For Image Generation Process. In Information and Software Technologies; Springer Nature: Cham, Switzerland, 2024; pp. 61–70. [Google Scholar]
  7. Xu, R.; Zhu, D.; Chen, M. A Novel Underwater Object Detection Enhanced Algorithm Based On Yolov5-MH. IET Image Process. 2024, 18, 3415–3429. [Google Scholar] [CrossRef]
  8. Xue, X.; Ma, T.; Han, Y.; Ma, L.; Liu, R. Learning Deep Scene Curve For Fast And Robust Underwater Image Enhancement. IEEE Signal Process. Lett. 2023, 31, 6–10. [Google Scholar] [CrossRef]
  9. Chen, Q.; Qin, J.; Wen, W. ALAN: Self-Attention Is Not All You Need For Image Super-Resolution. IEEE Signal Process. Lett. 2023, 31, 11–15. [Google Scholar] [CrossRef]
  10. Chen, Y.; Ma, X.; Wang, Q.; He, Y.; Xie, S. Research On Water Surface Object Detection Method Based On Image Fusion. J. Mar. Sci. Eng. 2025, 13, 1832. [Google Scholar] [CrossRef]
  11. Liu, R.; Fan, X.; Zhu, M.; Hou, M.; Luo, Z. Real-World Underwater Enhancement: Challenges, Benchmarks, And Solutions Under Natural Light. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 4861–4875. [Google Scholar] [CrossRef]
  12. Zhu, D.; Liu, Z.; Zhang, Y. Underwater Image Enhancement Based On Colour Correction And Fusion. IET Image Process. 2021, 15, 2591–2603. [Google Scholar] [CrossRef]
  13. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color Balance And Fusion For Underwater Image Enhancement. IEEE Trans. Image Process. 2017, 27, 379–393. [Google Scholar] [CrossRef] [PubMed]
  14. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An Underwater Image Enhancement Benchmark Dataset And Beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389. [Google Scholar] [CrossRef]
  15. Lu, H.; Li, Y.; Zhang, L.; Serikawa, S. Contrast Enhancement For Images In Turbid Water. J. Opt. Soc. Am. A 2015, 32, 886–893. [Google Scholar] [CrossRef]
  16. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar] [CrossRef]
  17. Wang, Y.; Zhuo, S.; Tao, D.; Bu, J.; Li, N. Automatic Local Exposure Correction Using Bright Channel Prior For Under-Exposed Images. Signal Process. 2013, 93, 3227–3238. [Google Scholar] [CrossRef]
  18. Korhonen, J.; You, J. Peak Signal-To-Noise Ratio Revisited: Is Simple Beautiful? In Proceedings of the 2012 Fourth International Workshop on Quality of Multimedia Experience, Melbourne, Australia, 5–7 July 2012; pp. 37–38. [Google Scholar]
  19. Brunet, D.; Vrscay, E.R.; Wang, Z. On The Mathematical Properties Of The Structural Similarity Index. IEEE Trans. Image Process. 2011, 21, 1488–1499. [Google Scholar] [CrossRef]
  20. Panetta, K.; Gao, C.; Agaian, S. Human-Visual-System-Inspired Underwater Image Quality Measures. IEEE J. Ocean. Eng. 2015, 41, 541–551. [Google Scholar] [CrossRef]
  21. Yang, M.; Sowmya, A. An Underwater Color Image Quality Evaluation Metric. IEEE Trans. Image Process. 2015, 24, 6062–6071. [Google Scholar] [CrossRef]
  22. Tan, X.; Lai, S.; Wang, B.; Zhang, M.; Xiong, Z. A Simple Gray-Edge Automatic White Balance Method With FPGA Implementation. J. Real-Time Image Process. 2015, 10, 207–217. [Google Scholar] [CrossRef]
  23. Adams, R.J.; Smart, P.; Huff, A.S. Shades Of Grey: Guidelines For Working With The Grey Literature In Systematic Reviews For Management And Organizational Studies. Int. J. Manag. Rev. 2017, 19, 432–454. [Google Scholar] [CrossRef]
  24. Hussain, M.A.; Akbari, A.S. Max-RGB Based Colour Constancy Using The Sub-Blocks Of The Image. In Proceedings of the 2016 9th International Conference on Developments in eSystems Engineering (DeSE), Liverpool, UK, 31 August–1 September 2016; pp. 289–294. [Google Scholar]
  25. Liu, C.; Chen, X.; Wu, Y. Modified Grey World Method To Detect And Restore Colour Cast Images. IET Image Process. 2019, 13, 1090–1096. [Google Scholar] [CrossRef]
  26. Mishra, A.K.; Kumar, M.; Choudhry, M.S. Fusion Of Multiscale Gradient Domain Enhancement And Gamma Correction For Underwater Image/Video Enhancement And Restoration. Opt. Lasers Eng. 2024, 178, 108154. [Google Scholar] [CrossRef]
  27. Iqbal, K.; Salam, R.A.; Osman, A.; Talib, A.Z. Underwater Image Enhancement Using an Integrated Colour Model. IAENG Int. J. Comput. Sci. 2007, 34, 529–534. [Google Scholar]
  28. Huang, D.; Wang, Y.; Song, W.; Sequeira, J.; Mavromatis, S. Shallow-Water Image Enhancement Using Relative Global Histogram Stretching Based On Adaptive Parameter Acquisition. In MultiMedia Modeling; Springer International Publishing: Cham, Switzerland, 2018; pp. 453–465. [Google Scholar]
  29. Iqbal, K.; Odetayo, M.; James, A.; Salam, R.A.; Talib, A.Z.H. Enhancing The Low Quality Images Using Unsupervised Colour Correction Method. In Proceedings of the 2010 IEEE International Conference on Systems, Man and Cybernetics, Istanbul, Turkey, 10–13 October 2010; pp. 1703–1709. [Google Scholar]
  30. Carlevaris-Bianco, N.; Mohan, A.; Eustice, R.M. Initial Results In Underwater Single Image Dehazing. In Proceedings of the Oceans 2010 Mts/IEEE Seattle, Seattle, WA, USA, 20–23 September 2010; pp. 1–8. [Google Scholar]
  31. Katoch, S.; Singh, V.; Tiwary, U.S. Indian Sign Language Recognition System Using SURF With SVM And CNN. Array 2022, 14, 100141. [Google Scholar] [CrossRef]
Figure 1. Method framework.
Figure 2. Schematic diagram of light absorption in underwater environment.
Figure 3. Fusion method frame diagram.
Figure 4. Comparison with classic white balance methods. (a) Original image. (b) Gray Edge. (c) Shade of Gray. (d) MAX RGB. (e) Gray World. (f) The proposed color compensation.
Figure 5. Comparison of transmission graphs between DCP and dual channel prior. (a) Original image. (b) DCP. (c) Transmission map of DCP. (d) Dark channel prior map of guided filtering. (e) Corrected bright channel transmission map. (f) Results after dual channel prior.
Figure 6. Comparison results of different image processing. (a) Original image, (b) CLAHE, (c) HE, (d) GC, (e) ICM, (f) RGHS, (g) UCM, (h) MIP, (i) ours.
Figure 7. Comparison of feature point matching between different methods. (a) Original image, (b) CLAHE, (c) HE, (d) GC, (e) ICM, (f) RGHS, (g) UCM, (h) MIP, (i) Ours.
Table 1. Quantitative indicators for Figure 4.

| Image | PSNR | SSIM | UIQM | UCIQE |
|---|---|---|---|---|
| Original | 27.71 | 0.8031 | 0.7192 | 0.5159 |
| Gray Edge | 27.88 | 0.6317 | 0.8046 | 0.5000 |
| Shade of Gray | 27.74 | 0.6292 | 0.6656 | 0.4806 |
| MAX RGB | 27.72 | 0.8031 | 0.7099 | 0.5138 |
| Gray World | 28.08 | 0.7829 | 0.7303 | 0.5474 |
| Ours | 28.15 | 0.8318 | 0.8394 | 0.5888 |
Table 2. Quantitative indicators for Figure 5.

| Image | PSNR | SSIM | UIQM | UCIQE |
|---|---|---|---|---|
| Original | 27.88 | 0.6696 | 0.5027 | 0.5325 |
| DCP | 27.89 | 0.6606 | 0.5858 | 0.5615 |
| Dual channel prior | 28.42 | 0.8232 | 0.6554 | 0.5687 |
Table 3. Ablation experiment results.

| Color Compensation | Dual Channel Prior | USM | PSNR | SSIM | UIQM | UCIQE |
|---|---|---|---|---|---|---|
| × | × | × | 28.15 | 0.8125 | 0.5432 | 0.5435 |
| ✓ | × | × | 28.22 | 0.8246 | 0.6119 | 0.5661 |
| ✓ | ✓ | × | 28.32 | 0.8994 | 0.7619 | 0.6194 |
| ✓ | × | ✓ | 28.26 | 0.8313 | 1.1628 | 0.5688 |
| ✓ | ✓ | ✓ | 28.62 | 0.8753 | 0.8831 | 0.5928 |
Table 4. Comparison of image quality indicators of different underwater image enhancement algorithms on the same dataset.

| Metric | Original | CLAHE [6] | HE [3] | GC [26] | ICM [27] | RGHS [28] | UCM [29] | MIP [30] | Ours |
|---|---|---|---|---|---|---|---|---|---|
| PSNR | 28.15 | 28.29 | 28.17 | 27.95 | 28.73 | 28.73 | 28.59 | 28.06 | 28.62 |
| SSIM | 0.8125 | 0.8025 | 0.7274 | 0.6972 | 0.7852 | 0.8129 | 0.7560 | 0.6364 | 0.8753 |
| UIQM | 0.5432 | 0.8519 | 1.0454 | 0.5020 | 0.7102 | 0.8358 | 0.8656 | 0.6755 | 0.8831 |
| UCIQE | 0.5435 | 0.5836 | 0.6648 | 0.5214 | 0.5683 | 0.6205 | 0.6307 | 0.5871 | 0.5928 |
Table 5. Quantitative comparison of different underwater image enhancement algorithms on the dataset.

| Method | Original | CLAHE | HE | GC | ICM | RGHS | UCM | MIP | Ours |
|---|---|---|---|---|---|---|---|---|---|
| Matched feature points | 123 | 121 | 127 | 102 | 149 | 136 | 135 | 148 | 136 |