Underwater Image Enhancement Based on Multi-Scale Fusion and Global Stretching of Dual-Model

Abstract: Aimed at the two problems of color deviation and poor visibility of underwater images, this paper proposes an underwater image enhancement method based on multi-scale fusion and global stretching of dual-model (MFGS), which does not rely on the underwater optical imaging model. The proposed method consists of three stages. Compared with other color correction algorithms, white-balancing can effectively eliminate the undesirable color deviation caused by medium attenuation, so it is selected to correct the color deviation in the first stage. Then, aimed at the poor performance of the saliency weight map in traditional fusion processing, this paper proposes an updated strategy for the saliency weight coefficient that combines contrast and spatial cues to achieve high-quality fusion. Finally, by analyzing the characteristics of the results of the above steps, it is found that the brightness and clarity need to be further improved. Global stretching of the full channels of the red, green, blue (RGB) model is applied to enhance the color contrast, and selective stretching of the L channel in the Commission International Eclairage-Lab (CIE-Lab) model is implemented to achieve a better de-hazing effect. Quantitative and qualitative assessments on the underwater image enhancement benchmark dataset (UIEBD) show that the enhanced images of the proposed approach achieve significant and sufficient improvements in color and visibility.


Introduction
Underwater vision has played an important role in different fields of science such as marine biology research [1], inspection of underwater man-made objects [2], and control of underwater vehicles [3]. Underwater images suffer from color cast and poor visibility resulting from absorption and scattering effects in the optical propagation process [4,5], which makes the enhancement and restoration of underwater images a challenging task.
Underwater image processing methods are categorized into two broad classes based on whether the physical principle of underwater optical propagation is applied or not. These two categories are called image restoration techniques and image enhancement methods, respectively. Image restoration, based on the basic physics of light propagation, establishes an underwater imaging model via some prior knowledge and finally recovers the degraded image. Equation (1) is considered the simplified underwater imaging model [6]:

I_λ(x) = J_λ(x) t_λ(x) + B_λ (1 − t_λ(x)), (1)

where x represents one particular pixel of the image, the wavelength λ ∈ {red, green, blue}, I_λ(x) and J_λ(x) respectively represent the image to be restored and the image after restoration, B_λ is defined as the background light, and t_λ(x) as the transmission map. In 2017, Peng et al. [7] proposed a method based on image blurring and light absorption (IBLA) to estimate more accurate background light and underwater scene depth. In 2018, they proposed another method based on the generalized dark channel prior (GDCP) [8] that calculates the difference between the observed intensity and the background light. Due to the complexity of the combination of natural light and abnormal artificial light in real underwater image shooting, the common imaging model can hardly account for the color attenuation and absorption. In this paper, we therefore apply a method based on pixel intensity redistribution to process the distorted image. Image enhancement based on pixel intensity redistribution changes the pixel values in either the spatial domain or a transformed domain [9]; it uses qualitative subjective criteria to produce a more visually pleasing image and does not rely on any physical model of image formation [10]. In 2017, Ghani and Isa [11] proposed a method based on recursive adaptive histogram modification (RAHIM).
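As a concrete illustration of Equation (1), the sketch below algebraically inverts the simplified model to recover J_λ when the transmission map and background light are already known. This is a minimal NumPy sketch, not the restoration method of [7] or [8]; the function name and the lower bound `t_min` are our own assumptions.

```python
import numpy as np

def restore_simplified_model(I, t, B, t_min=0.1):
    """Invert the simplified imaging model I = J * t + B * (1 - t)
    to recover the scene radiance J, channel by channel.

    I : observed underwater image, float array in [0, 1], shape (H, W, 3)
    t : transmission map per channel, shape (H, W, 3)
    B : background light per channel, shape (3,)
    """
    t = np.clip(t, t_min, 1.0)       # lower-bound t to avoid amplifying noise
    J = (I - B * (1.0 - t)) / t      # algebraic inversion of Equation (1)
    return np.clip(J, 0.0, 1.0)
```

In practice, restoration methods such as IBLA and GDCP spend most of their effort estimating t and B from the image itself; only the final inversion step is shown here.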
The saturation and brightness of the HSV color model are modified by Rayleigh distribution and the human visual system to improve the natural performance of image color. Ancuti et al. [12] introduced a method based on white balancing and multi-scale fusion. Their approach improves global contrast and is especially good at processing color deviation images, but the contrast of the resulting image needs to be further improved. In 2018, Huang et al. [13] put forward relative global histogram stretching (RGHS) in the red, green, blue (RGB) and Commission International Eclairage-Lab (CIE-Lab) color models, which effectively improves the visual effect of the blurred image, except for the color deviation image. In 2020, Bai et al. [14] proposed an enhancement method based on global and local equalization of the histogram and dual-image multi-scale fusion (GLHDF), which obtained good performance, but there existed some limitations in processing turbid water images.
In recent years, deep learning methods, especially convolutional neural networks (CNN) and generative adversarial networks (GAN), have been applied in image-based tasks such as image enhancement and underwater object detection [15]. Li et al. proposed WaterGAN [16] in 2018 and UWGAN [17] in 2019, both based on generative adversarial networks, to generate realistic underwater images. To address the lack of datasets in underwater image processing, Hou et al. [18] designed an underwater image synthesis algorithm (UISA) based on the physical model of underwater optical propagation and built the synthetic underwater image dataset (SUID) from outdoor ground-truth images. Li et al. [19] built a large-scale, real-world underwater image enhancement benchmark dataset (UIEBD). In 2020, Anwar and Li [20] provided a comprehensive and in-depth survey of deep learning-based underwater image enhancement and pointed out that, in most cases, the deep learning-based methods fall behind state-of-the-art conventional methods. Developing deep learning methods remains difficult because real underwater image datasets are scarce, and the realism of the generated underwater images has hardly been examined.
To enhance the underwater image, it is necessary to give priority to the correction of color deviation. In the color correction study, the unsupervised color correction method (UCM) [21], RGHS [13], and integrated color model-Rayleigh distribution (ICM-RD) [22] achieve color deviation correction through global stretching. However, the first two methods cannot effectively remove the blue-green deviation and ICM-RD has an obvious overcompensation in the red channel. Ancuti et al. [12] proposed underwater white-balancing, which can effectively reduce the color deviation caused by various lighting or medium attenuations. In this paper, white-balancing is selected to correct the color deviation.
Considering the lack of contrast and clarity of the image after white-balancing, it is necessary to further enhance the image. In the current image enhancement methods, fusion is widely used, which can achieve the multi-scale fusion of two enhanced images and improve the image quality, and the selection of weight coefficient is particularly critical. In the classical fusion algorithm, the saliency weight map has the disadvantage of low contrast in effectively distinguishing the saliency target of the underwater image. In this paper, the saliency detection method, combined with spatial and contrast cues proposed by Fu et al. [23], is combined into the fusion algorithm, which can effectively distinguish saliency targets and make the fusion image more consistent with human vision.
After the first two stages of processing, the problem of color deviation has been greatly improved. Considering the poor color contrast and dark brightness caused by the relatively concentrated histogram, it is necessary to enhance the brightness of the image. The global stretching algorithm has poor performance in color deviation correction, but it can effectively improve the image contrast and achieve de-hazing. To better stretch the brightness element of the CIE-Lab model, the RGB model channels are first globally stretched to enhance the overall color contrast of the image. The image is then transferred to the CIE-Lab model, and the L channel is selectively stretched globally. Figure 1 shows the output images of each stage; the proposed method can effectively deal with a variety of underwater degradations. This paper will discuss the three phases of the underwater image enhancement algorithm, and the results of the qualitative and quantitative comparison of the proposed, classical, and state-of-the-art methods will be shown.

Methodology
As illustrated in Figure 2, the proposed multi-scale fusion and global stretching of dual-model (MFGS) method consists of three stages, namely: (1) Color correction based on white-balancing, (2) Updated multi-scale fusion, and (3) Global stretching of dual-model. The first stage aims to correct the color deviation of the underwater image. Because the corrected image still has poor contrast and clarity, sharpening and gamma correction are used to enhance it, and a multi-scale pyramid with updated saliency weights is used for image fusion in the second stage. Finally, to solve the problem of poor brightness caused by a too concentrated histogram, global stretching of the corresponding channels is carried out in the RGB and CIE-Lab models, respectively. Next, we introduce each stage in detail.

Color Correction Based on White-Balancing
Due to the scattering and absorption of light by the water medium, the color of underwater images (especially deep-water images) tends to have a blue-green distortion. Before enhancing the image quality, priority should be given to correcting the color deviation. In the color correction study, white-balancing can effectively eliminate the undesirable color deviation caused by various lighting or medium attenuation characteristics. Traditional white balance algorithms such as principal component analysis (PCA) [24], gray world [25], and white patch retinex [26] are compared and analyzed in Figure 3. The gray world algorithm can achieve good visual performance for reasonably distorted underwater scenes. However, because of the absorption of red light, the average value of the red channel is very small, which may lead to overcompensation of that channel where red appears (gray world divides each channel by its average value).


Taking advantage of the relatively good preservation of the green channel underwater, some values of the green channel are selected to compensate for the red light attenuation. Based on the upper limit of the dynamic range of each channel, all channel values are first normalized to the range [0, 1]. The compensation coefficient of the red channel is designed as follows:

β_r(x) = (1 − I_r(x)) · I_g(x), (2)

where I_r(x) and I_g(x) are the pixel values of the image in the red and green channels. The final pixel value of the red channel is defined as:

I'_r(x) = I_r(x) + (Ī_g − Ī_r) · β_r(x), (3)

where Ī_r and Ī_g are the averages of I_r and I_g. In turbid water or waters with a high concentration of plankton, the blue light attenuation is also obvious due to absorption by organic matter.
To calculate the compensation coefficient of the blue channel and compensate the blue channel, the analogous equations are used:

β_b(x) = (1 − I_b(x)) · I_g(x), (4)

I'_b(x) = I_b(x) + (Ī_g − Ī_b) · β_b(x), (5)

where I_b(x) is the pixel value of the blue channel and Ī_b is its average. After the attenuation has been compensated, the gray world algorithm is applied to remove the remaining illuminant color cast. The first stage effectively corrects the color deviation; next, we deal with the insufficient contrast and clarity caused by the loss of edge and detail information.
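The compensation and gray world steps above can be sketched as follows. This is a minimal NumPy sketch under the stated formulas; the function names are illustrative, and the single scale factor used here omits any additional tuning the authors may apply.

```python
import numpy as np

def compensate_red_blue(img, compensate_blue=False):
    """Compensate red (and optionally blue) attenuation from the green channel,
    following I'_r(x) = I_r(x) + (mean(I_g) - mean(I_r)) * (1 - I_r(x)) * I_g(x).

    img : RGB image as a float array normalized to [0, 1].
    """
    out = img.copy()
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    out[..., 0] = r + (g.mean() - r.mean()) * (1.0 - r) * g
    if compensate_blue:   # for turbid water with strong blue absorption
        out[..., 2] = b + (g.mean() - b.mean()) * (1.0 - b) * g
    return np.clip(out, 0.0, 1.0)

def gray_world(img, eps=1e-6):
    """Scale each channel so its mean matches the global gray mean."""
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / (means + eps)), 0.0, 1.0)
```

Note that the compensation term vanishes both where the red channel is already strong (1 − I_r ≈ 0) and where the green channel carries no information (I_g ≈ 0), which is what prevents overcompensation in red regions.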

Multi-Scale Fusion with Updated Saliency Weight
After color deviation correction, the image details are blurred, so we need to further enhance the image contrast. Image fusion is widely used in image decomposition [27], image de-hazing [28], multispectral video enhancement [29], and underwater image enhancement. Ancuti et al. improved their previous fusion-based underwater de-hazing approach [30] and proposed an alternative definition of inputs and weights to deal with severely degraded scenes [12]. The first input is obtained by gamma correction to improve the global contrast, and the second input is a sharpened version of the white-balanced image to reduce the degradation caused by scattering. By employing the fusion process independently at every scale level, the potential artifacts caused by sharp transitions of the weight maps are minimized. However, their selection of the saliency weight cannot effectively highlight the obvious target in the image. In this paper, the fusion process of Ancuti et al. [12] is used, and we adjust the saliency weight coefficient to improve the attention to salient objects in the fusion process. The process of this step is shown in Figure 4. The sharpened version of the white-balanced image is selected as the first input of the fusion process, defined as:

S = ( I + N{ I − G ∗ I } ) / 2, (6)

where S is the output image after sharpening and I is the white-balanced image.
G ∗ I denotes the Gaussian-filtered version of I, and N{·} represents the linear normalization operator. The other input is the gamma-corrected image. In the blending process, pixels with high weight values are more represented in the final image. In this paper, three weight coefficients were selected according to the algorithm of Ancuti et al. [12].
Laplacian contrast weight (W_L) estimates the global contrast by computing the absolute value of a Laplacian filter applied to each input luminance channel.
Saturation weight (W_Sat) enables the fusion algorithm to adapt to chromatic information by favoring highly saturated regions. This weight map is computed as the deviation between the R_k, G_k, and B_k color channels and the luminance L_k of the k-th input:

W_Sat = sqrt( (1/3) [ (R_k − L_k)² + (G_k − L_k)² + (B_k − L_k)² ] ). (7)

Saliency weight (W_S) aims at emphasizing the salient objects that lose their prominence in the underwater scene. Ancuti et al. [12] employed the saliency estimator of Achantay et al. [31]; however, the resulting saliency map is not ideal and cannot effectively highlight the main target in the image. For underwater scenes, the center of the image is usually brighter than the surrounding area due to the influence of artificial lighting. This scenario is known as the 'central bias rule', which means that the attention gain decreases as the distance between an object and the image center increases. In this paper, we employ the cluster-based saliency algorithm of Fu et al. [23], which combines contrast and spatial cues. The spatial cue w_s(k) of the cluster C_k is defined as:

w_s(k) = (1/n_k) Σ_j Σ_i K( || z_i^j − c_j ||² ) δ( b(z_i^j) − C_k ), (8)

where δ(·) is the Kronecker delta function, b(z_i^j) is the cluster index of pixel z_i^j, c_j denotes the center of the image, the Gaussian kernel K(·) weights the Euclidean distance between the pixel z_i^j and the image center c_j, the variance σ² is the normalized radius of the image, and the normalization coefficient n_k is the number of pixels in cluster C_k. The resulting saliency map distinguishes the salient target better than the original map in the paper of Ancuti et al. [12] (Figure 5). The three corresponding normalized weight maps are merged into one weight map:

W̄_k = (W_k + δ) / ( Σ_{k=1}^{K} W_k + K · δ ), (9)

where K = 2 (the number of input images), W_k denotes the aggregated weight map of the k-th input, and δ denotes a small regularization term that ensures that each input contributes to the output. The multi-scale decomposition is based on the Laplacian pyramid originally described by Burt and Adelson [32].
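The three weight maps can be sketched per pixel as follows. This is a minimal NumPy sketch: the luminance is a plain channel average, the Laplacian is the 4-neighbour stencil, and the center-prior map is a pixel-wise analogue of the cluster-based spatial cue of Fu et al. (the full algorithm also clusters pixels and adds a contrast cue, which is omitted here); all function names are our own.

```python
import numpy as np

def luminance(img):
    """Simple luminance: the mean of the R, G, B channels."""
    return img.mean(axis=2)

def saturation_weight(img):
    """W_Sat: standard deviation of the color channels around the luminance."""
    L = luminance(img)
    return np.sqrt(((img - L[..., None]) ** 2).mean(axis=2))

def laplacian_contrast_weight(img):
    """W_L: absolute response of a 4-neighbour Laplacian on the luminance
    (np.roll wraps the borders, acceptable for a sketch)."""
    L = luminance(img)
    lap = (np.roll(L, 1, 0) + np.roll(L, -1, 0)
           + np.roll(L, 1, 1) + np.roll(L, -1, 1) - 4.0 * L)
    return np.abs(lap)

def center_prior(shape, sigma=0.5):
    """Per-pixel 'central bias' map: weight decays with the normalized
    distance from the image center, mimicking the spatial cue."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    d2 = ((yy - (H - 1) / 2) / H) ** 2 + ((xx - (W - 1) / 2) / W) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

A gray image yields a zero saturation weight and a zero Laplacian weight, while the center prior is maximal at the image center, which is the behavior the central bias rule calls for.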
Each source input I_k is decomposed into a Laplacian pyramid, while the normalized weight maps W̄_k are decomposed using a Gaussian pyramid. The mixing of the Laplacian inputs with the Gaussian-smoothed weights is performed independently at each level l:

R_l(x) = Σ_{k=1}^{K} G_l{ W̄_k(x) } · L_l{ I_k(x) }, (10)

where L_l{·} and G_l{·} denote level l of the Laplacian and Gaussian pyramids, k indexes the input images, and R_l denotes the reconstructed image at level l. The enhanced output is obtained by summing the fused contributions of all levels. This stage enhances the edge and detail information, but the image tone is still dark, so the brightness and clarity should be further improved.
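The per-level blending and pyramid collapse can be sketched for single-channel inputs as follows. This is a simplified sketch: plain decimation and nearest-neighbour expansion stand in for the Gaussian reduce/expand of Burt and Adelson, and all function names are our own.

```python
import numpy as np

def _up(img, shape):
    """Nearest-neighbour expansion of a coarse level back to `shape`."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def _gauss_pyr(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(pyr[-1][::2, ::2])  # plain decimation stands in for Gaussian reduce
    return pyr

def _lap_pyr(img, levels):
    g = _gauss_pyr(img, levels)
    return [g[i] - _up(g[i + 1], g[i].shape) for i in range(levels - 1)] + [g[-1]]

def multiscale_fuse(inputs, weights, levels=3):
    """R_l = sum_k G_l{W_k} * L_l{I_k} at each level, then collapse the pyramid."""
    eps = 1e-12
    wsum = sum(weights)
    weights = [(w + eps) / (wsum + len(weights) * eps) for w in weights]  # normalize
    fused = [np.zeros_like(lev) for lev in _lap_pyr(inputs[0], levels)]
    for I, W in zip(inputs, weights):
        for l, (LI, GW) in enumerate(zip(_lap_pyr(I, levels), _gauss_pyr(W, levels))):
            fused[l] += GW * LI                 # weighted Laplacian level
    out = fused[-1]                             # coarsest level
    for l in range(levels - 2, -1, -1):         # collapse from coarse to fine
        out = fused[l] + _up(out, fused[l].shape)
    return out
```

As a sanity check, fusing two identical inputs with equal weights reconstructs the original image exactly, since the normalized weights sum to one at every pyramid level.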

Global Stretching of Dual-Model
After multi-scale fusion processing, the color balance and the contrast of the image have been greatly improved. It was found that, after the fusion of the gamma-corrected image, the brightness of the image is relatively dark. The histogram of the underwater image is relatively concentrated, resulting in low contrast and visibility. In the contrast enhancement study, UCM [21], ICM-RD [22], and RGHS [13] achieve a global enhancement in the whole histogram modification process. UCM only stretches the histogram in the channels of the RGB model, and it cannot deal with the color deviation effectively. ICM-RD and RGHS are based on the assumption that the histogram distribution of the RGB channel is consistent with the Rayleigh distribution. RGHS is combined with an imaging model to calculate the relative stretching parameters. Because of the complexity of image distortion, the imaging model cannot fully describe the underwater illumination, and most of the histograms of the RGB channels in real underwater images do not conform to the assumption of the Rayleigh distribution.
In this paper, following the related works above, the RGB channel histograms are first globally stretched to enhance the color contrast. After this pre-processing, the color of the image is relatively satisfactory. The L element representing brightness in the CIE-Lab model is then stretched globally to improve the overall brightness of the image. The commonly used linear histogram stretching formula is shown in Equation (11):

p_o = (p_i − I_min) × (O_max − O_min) / (I_max − I_min) + O_min, (11)
where p_i and p_o are the input and output pixel values, respectively, and I_min, I_max, O_min, and O_max are the fixed range parameters of the image before and after stretching.

Global stretching in the RGB color model. After separating the three channels of the RGB model, we set the desired range [O_min, O_max] to [0, 255]. Specifically:

p_o^k = (p_i^k − I_min^k) × 255 / (I_max^k − I_min^k), (12)

where the channel k ∈ {red, green, blue}. The stretching process is shown in Figure 6. After stretching, the histogram of each channel covers the whole range of values, and the visual contrast and brightness of the image are improved. To further improve the image brightness, we chose the CIE-Lab model, which can represent the widest range of colors, to adjust the image feature parameters. The image is therefore transformed from the RGB model to the Lab model.

Global stretching in the CIE-Lab color model. In the Lab model, the 'L' component is used to adjust the brightness of the image, with values ranging from 0 (darkest) to 100 (brightest). The color gradations of the 'a' and 'b' components can be modified to acquire color correction; however, since the color deviation has already been well corrected in the first two stages, no extra stretching or compensation is applied to the 'a' and 'b' components.
The 'L' component is stretched with the linear slide stretching of Equation (11): the values between the 0.1% and 99.9% percentiles are stretched to the range [0, 100], and the lowest and highest 0.1% of values in the image are set to 0 and 100, respectively.
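The two stretching stages can be sketched as follows. This is a minimal NumPy sketch of Equations (11) and (12); the helper names are our own, and the RGB-to-Lab conversion itself is omitted (a library such as scikit-image would normally provide it).

```python
import numpy as np

def stretch(channel, out_min, out_max, low_pct=0.0, high_pct=100.0):
    """Linear stretching of Equation (11); I_min/I_max are taken at the given
    percentiles, and values falling outside are clipped to the output range."""
    i_min = np.percentile(channel, low_pct)
    i_max = np.percentile(channel, high_pct)
    out = (channel - i_min) * (out_max - out_min) / (i_max - i_min + 1e-12) + out_min
    return np.clip(out, out_min, out_max)

def stretch_rgb(img):
    """RGB stage: stretch each channel over its full range to [0, 255]."""
    return np.stack([stretch(img[..., k], 0.0, 255.0) for k in range(3)], axis=-1)

def stretch_L(L):
    """CIE-Lab stage: stretch only L, between its 0.1% and 99.9% percentiles,
    to [0, 100]; the clipped tails become 0 and 100."""
    return stretch(L, 0.0, 100.0, low_pct=0.1, high_pct=99.9)
```

Stretching between percentiles rather than the absolute minimum and maximum makes the L-channel step robust to a few outlier pixels, at the cost of saturating the extreme 0.1% tails.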

The percentile bounds are computed as L_min = L_s(N/1000) and L_max = L_s(999N/1000), where L_s = sort(L) and N represents the number of components in the L channel, i.e., L_min and L_max are the 0.1% and 99.9% percentile values. As shown in Figure 7, the stretched image is transformed back to the RGB model and taken as the final output. Compared with the image stretched only in the RGB model, the brightness and contrast are further increased.
In the next section, the results of the proposed algorithm will be evaluated quantitatively and qualitatively and compared with current algorithms.


Results and Discussion
In this section, we compare the proposed approach with the existing classical and state-of-the-art underwater restoration/enhancement techniques, namely, unsupervised color correction (UCM) [21], Rayleigh distribution (RD) [22], relative global histogram stretching (RGHS) [13], image blurriness and light absorption (IBLA) [7], global and local equalization of histogram and fusion (GLHDF) [14], and color balance and fusion (Fusion) [12]. The results of each method are evaluated qualitatively and quantitatively. In 2018, Mangeruga et al. [33] concluded that, even if the quantitative metrics can provide a useful indication about image quality, they do not seem reliable enough to be blindly employed for an objective evaluation of the performances of an underwater image enhancement algorithm. They then proposed some guidelines for underwater image enhancement based on the benchmarking of different methods [34].
Qualitative evaluation is mainly evaluated in terms of contrast, visibility, and color. Quantitative evaluation is mainly evaluated in terms of underwater image quality measure (UIQM) [35], patch-based contrast quality index (PCQI) [36], and underwater color image quality evaluation (UCIQE) [37]. The UIQM and UCIQE metrics are dedicated to underwater image assessment. They address three important criteria: colorfulness, sharpness, and contrast. A high PCQI value indicates an enhanced image with high contrast. Finally, the execution time of each method is evaluated.
The proposed underwater image enhancement method has no limitation on the resolution of input images; the resolution of the output image is consistent with that of the input. In the UIEBD, the resolution of the raw images varies widely, from 300 × 168 to 2180 × 1447. According to the type of image degradation, the 890 images of the UIEBD [19] are divided into three categories: Data A contains images with obvious blue/green color deviation (Figure 8), Data B contains shallow-water images with natural light (rows 1-4 in Figure 9), and Data C contains turbid-water images (rows 5-7 in Figure 9). Figures 8 and 9 present the experimental results of some randomly selected images from Data A/B/C, and Tables 1 and 2 show the corresponding quantitative analysis results of Figures 8 and 9, respectively. Because of the poor performance of UCM [21] in the qualitative evaluation, we do not carry out a quantitative analysis of UCM.
Figure 8 shows the qualitative results of the various algorithms on images with blue/green deviation (Data A). The results of RD [22] and GLHDF [14] are more colorful but produce unexpected oversaturation with a red hue. The results of UCM [21], RGHS [13], and IBLA [7] still show obvious color deviation. Fusion [12] and the proposed approach perform well on this kind of distortion; the results of the proposed algorithm are clearer and brighter than those of Fusion, and the color contrast is more obvious.
As illustrated in Table 1, RD [22] and GLHDF [14] perform better on the UCIQE index, as the colors produced by these two algorithms are more saturated. The UCIQE value of the proposed method is significantly higher than that of Fusion [12]. In terms of the UIQM and PCQI indices, Fusion achieves the highest average value, followed by the proposed method. For Data B and Data C in Figure 9, the results of UCM [21] and IBLA [7] are dark, and there is excessive red compensation in the results of RD [22]. RGHS [13] and GLHDF [14] have a better effect on shallow-water images (Data B), but when dealing with turbid-water images (Data C), they exhibit blue and red deviations, respectively. Fusion [12] and our approach can effectively smooth the image color, and the brightness of underwater images is enhanced effectively by the proposed method.
It can be seen from Table 2 that our algorithm achieves the highest average PCQI value for shallow-water images. The GLHDF algorithm [14], which performs well in color and saturation, obtains a higher UIQM value. Although IBLA [7] does not perform well in the qualitative evaluation, it has the highest UCIQE value. Table 3 shows the average quantitative results over all 890 real underwater images in the underwater image enhancement benchmark dataset (UIEBD) [19]. The proposed method achieves the highest average PCQI value, which indicates good contrast enhancement. For the execution-time evaluation, we randomly selected 100 images of different sizes and measured the execution time under identical conditions: an Intel(R) Pentium(R) CPU G620 @ 2.60 GHz, with MATLAB R2017b and PyCharm 2020.2.3 as the two software environments. The average execution time of each algorithm for a single image is shown in Table 4: GLHDF has the fastest average execution speed among the tested algorithms, and the proposed algorithm is slower than both GLHDF and Fusion.
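The per-image averaging used for Table 4 can be sketched as a small timing harness; `enhance` and `images` are placeholders for the algorithm under test and the test set, not names from the paper.

```python
import time

def average_runtime(enhance, images, warmup=1):
    """Average wall-clock time per image for an enhancement function.

    A short warm-up pass is run first so one-time costs (caching,
    lazy initialization) do not skew the average.
    """
    for img in images[:warmup]:
        enhance(img)
    start = time.perf_counter()
    for img in images:
        enhance(img)
    return (time.perf_counter() - start) / len(images)
```

`time.perf_counter()` is used rather than `time.time()` because it is a monotonic, high-resolution clock intended for interval measurement.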
From the qualitative evaluation, the proposed algorithm and Fusion [12] have clear advantages in correcting color deviation and can effectively remove green, blue, and yellow casts of different degrees, whereas the performance of UCM [21], RGHS [13], and IBLA [7] is unsatisfactory. In cases of severe color deviation, RD [22] and GLHDF [14] suffer from incomplete color-deviation removal and excessive red compensation. Although the quantitative results cannot be relied on blindly, they serve as a useful reference: the proposed algorithm achieves a higher PCQI value, meaning it performs better in contrast enhancement; RD [22] and GLHDF [14], which produce richer colors, obtain higher UCIQE values; and Fusion, with better color balance, has the highest UIQM value. Additionally, Fusion [12] and GLHDF [14] perform well in execution time. To shorten our execution time, a simplified version of the fusion stage can be chosen that replaces multi-scale fusion at the price of sacrificing image detail: the sharpened image and the gamma-corrected image are directly combined with their corresponding weight coefficients to obtain the fusion result. In general, the proposed method deals effectively with a variety of underwater image distortion scenes.
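The simplified single-scale fallback described above amounts to a normalized weighted blend of the two inputs. A minimal sketch, assuming float images in [0, 1] and per-pixel weight maps (the function and argument names are illustrative, not from the paper):

```python
import numpy as np

def simplified_fusion(sharpened, gamma_corrected, w1, w2):
    """Single-scale fallback for multi-scale fusion: normalize the two
    H x W weight maps and blend the inputs pixel-wise."""
    total = w1 + w2 + 1e-12               # avoid division by zero
    w1n, w2n = w1 / total, w2 / total
    fused = w1n[..., None] * sharpened + w2n[..., None] * gamma_corrected
    return np.clip(fused, 0.0, 1.0)
```

Skipping the Laplacian/Gaussian pyramid makes this much cheaper, but blending at a single scale can introduce visible seams where the weight maps change abruptly, which is the detail-quality cost noted above.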

Conclusions
This paper proposes an underwater image enhancement method based on multi-scale fusion and global stretching of dual-model (MFGS). The method is realized by simple pixel-value redistribution without relying on the underwater optical imaging model. The main contributions can be summarized as follows: (1) To address the problem that the saliency weight map in the fusion algorithm cannot effectively distinguish salient targets, we update the saliency weight map by combining contrast and spatial cues to highlight the obvious targets in the processed image. (2) To further enhance the brightness of the fused image, after stretching the histograms of the RGB channels, the pixel values in the L channel of the CIE-Lab model between the 0.1% and 99.9% percentiles are stretched to the range [0, 100].
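The percentile-based stretching in contribution (2) can be sketched as follows. The L-channel bounds (0.1%/99.9% percentiles mapped to [0, 100]) are stated in the text; applying the same percentile bounds to the RGB channels is an assumption here, since the paper does not specify them for that stage.

```python
import numpy as np

def stretch_channel(ch, low_pct, high_pct, out_min, out_max):
    """Clip a channel to its [low_pct, high_pct] percentiles, then map
    linearly onto [out_min, out_max]."""
    lo, hi = np.percentile(ch, [low_pct, high_pct])
    ch = np.clip(ch, lo, hi)
    return (ch - lo) / max(hi - lo, 1e-12) * (out_max - out_min) + out_min

def stretch_rgb(img):
    """Full-channel stretch of each RGB channel to [0, 1] (bounds assumed)."""
    return np.stack([stretch_channel(img[..., c], 0.1, 99.9, 0.0, 1.0)
                     for c in range(3)], axis=-1)

def stretch_l(L):
    """Selective stretch of the CIE-Lab L channel to [0, 100]."""
    return stretch_channel(L, 0.1, 99.9, 0.0, 100.0)
```

Clipping at the 0.1%/99.9% percentiles rather than the absolute min/max keeps a few extreme outlier pixels from compressing the useful dynamic range of the whole channel.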
Qualitative and quantitative analysis shows that the proposed method deals effectively with a variety of underwater image distortion scenes and has advantages in improving contrast and correcting color deviation compared with other algorithms. In terms of the color richness of the resulting images and the execution time, there are still deficiencies compared with the latest algorithms. In future work, the structure of our algorithm will be adjusted to shorten the execution time, and optimizing the color compensation method for different color deviations will also be a focus of future research. With the wide application of underwater vision in different scientific research fields, underwater image enhancement will play an increasingly important role in image and video processing for marine biology research and underwater archaeology. Most of the target images of current algorithms are shallow-water images; when artificial light sources are added in deep water, the raw images contain more diverse noise, and image enhancement faces greater challenges. Effective enhancement of degraded images can provide convenience for many underwater studies.