Underwater Image Enhancement Based on Local Contrast Correction and Multi-Scale Fusion

Abstract: In this study, an underwater image enhancement method based on local contrast correction (LCC) and multi-scale fusion is proposed to resolve low contrast and color distortion of underwater images. First, the original image is compensated using the red channel, and the compensated image is processed with a white balance. Second, LCC and image sharpening are carried out to generate two different image versions. Finally, the local contrast corrected images are fused with sharpened images by the multi-scale fusion method. The results show that the proposed method can be applied to water degradation images in different environments without resorting to an image formation model. It can effectively solve color distortion, low contrast, and unobvious details of underwater images.


Introduction
Given the shortage of natural resources on land, humans need to develop alternative resources in different bodies of water, such as the ocean. Thus, increased attention has been paid to the research of underwater images. However, underwater images often show degradation effects such as color cast, blurring, and poor detail [1]. During underwater imaging, the propagation of light attenuates exponentially through two processes: light scattering and light absorption [2]. Light scattering is caused by particles suspended in the water, which reflect and refract the light many times before it reaches the image acquisition device, thereby blurring the image. Light absorption is caused by the water medium; different wavelengths of light attenuate to different degrees in the water body. Red light, which has the longest wavelength and the lowest frequency, attenuates faster than blue and green light [3], leading to the visual color distortion of underwater images. Considerable research has attempted to find a suitable color correction and detail enhancement method for underwater images.
Existing underwater image enhancement methods can be classified from two perspectives: image formation model-based (IFM-based) methods and image formation model-free (IFM-free) methods [3]. The IFM-based approach mathematically models the degradation process of underwater images; by estimating the model parameters and inverting the degradation process, a clear underwater image can be obtained, which belongs to image restoration. The dark channel prior (DCP) [9] is a defogging algorithm designed for outdoor image scenes. It assumes that clear daytime images contain some pixels with extremely low intensities (close to zero) in at least one color channel. When directly applied to underwater scenes, this method loses its effectiveness. In addition, some existing methods produce images with varying degrees of red supersaturation, making the overall image reddish. Therefore, an underwater image enhancement method that addresses both color correction and detail enhancement is needed. Fusion-based underwater image enhancement can effectively solve many problems of underwater images, such as blurring, low contrast, and color distortion. In this study, an underwater image enhancement method is proposed on the basis of local contrast correction (LCC) and fusion, considering both the color and detail characteristics of underwater images. First, a white balance method based on red channel compensation is used to correct the image color. Then, two image input versions, LCC and sharpening, are introduced. Finally, the weights are calculated, and multi-scale fusion is performed according to the obtained weights. The results show that the proposed method can be applied to degraded images of different water environments without using an image formation model. Color distortion and unobvious details of underwater images are effectively resolved, and the local contrast is improved.
The rest of this study is structured as follows. In the second part, the detailed underwater image enhancement method is introduced. In the third part, the qualitative and quantitative analysis of the experimental results is carried out. Then, the advantages of the proposed method are discussed, and the results are summarized.

Local Contrast Correction and Fusion Algorithm
Given the underwater image formation mechanism and the attenuation of light propagation in the water, an improved LCC method with a multi-scale image fusion strategy is proposed in this study. The underwater image is compensated using the red channel, and the color compensated image is processed by a white balance. Then, two versions of the image input are generated: the LCC image and the sharpened image. Next, the Laplacian contrast weight, saliency weight, and saturation weight of the LCC and sharpened images are calculated, and the two groups of weights are normalized. Finally, LCC images and sharpened images and their corresponding normalized weights are fused. The multi-scale fusion method is also adopted to avoid artifacts. The algorithm flow is shown in Figure 1.
Figure 1. Details of the proposed method. Input1 and Input2 represent local contrast correction (LCC) and image sharpening, respectively. These two images are used as inputs of the fusion process. Then, the normalized weight maps are obtained, and multi-scale fusion is carried out on this basis.

Underwater Image White Balance Based on Red Channel Compensation
Given the physical characteristics of light propagation in water, red light is absorbed first, and underwater images are mainly blue and green [19]. White balance is an effective way to improve the tone of an image. It eliminates unwanted color casts created by various lighting or light attenuation characteristics. The Gray-World algorithm [13] is an effective white balance method for outdoor images. However, in the underwater environment where the red attenuation is severe, the weak red channel will be overcompensated, resulting in red artifacts in the image. The red channel compensation method is used to solve this problem [4]. The compensated red channel Rc at each pixel position (i, j) of the image is

Rc(i, j) = R(i, j) + (Ḡ − R̄)(1 − R(i, j))G(i, j), (1)

where R and G represent the red channel and the green channel of the input image, each normalized to [0, 1], and R̄ and Ḡ represent the average pixel values of the corresponding channels.
After the red channel is compensated, the Gray-World algorithm can be applied to underwater image scenes. For an RGB image with many color changes, this method assumes that the averages of the channel components converge to a common gray value

K = (R̄c + Ḡ + B̄)/3, (2)

where R̄c represents the average pixel value of the compensated red channel. Next, the gain of each channel is calculated as

g_ζ = K/ζ̄, ζ ∈ {Rc, G, B}. (3)

Finally, each channel is rescaled as shown in Equation (4):

ζ_new(i, j) = g_ζ · ζ(i, j). (4)

The white-balanced image I can be obtained by combining the ζ_new channels.
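As a concrete illustration, the red channel compensation and Gray-World steps above can be sketched in Python as follows. This is a minimal sketch, assuming float RGB images normalized to [0, 1]; the function names and the clipping to [0, 1] are our own choices, not part of the original formulation.

```python
import numpy as np

def compensate_red(img):
    """Red channel compensation (Equation (1)).

    img: float array in [0, 1], shape (H, W, 3), RGB order.
    """
    r, g = img[..., 0], img[..., 1]
    r_mean, g_mean = r.mean(), g.mean()
    # Rc(i,j) = R(i,j) + (G_mean - R_mean) * (1 - R(i,j)) * G(i,j)
    rc = r + (g_mean - r_mean) * (1.0 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(rc, 0.0, 1.0)
    return out

def gray_world(img):
    """Gray-World white balance (Equations (2)-(4))."""
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gray = means.mean()                       # assumed gray level K
    gains = gray / (means + 1e-6)             # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)

def white_balance(img):
    """Compensate the red channel, then apply Gray-World."""
    return gray_world(compensate_red(img))
```

Applied to a typical greenish underwater frame, the compensation raises the red channel toward the green average before the Gray-World gains equalize the channel means.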

Improved Local Contrast Correction Method
In addition to color aberration and blurring, underwater images are usually interfered with by natural or artificial light, which makes local areas of the image too bright. White balance processing can also lead to excessive brightness, so a contrast correction method is needed to solve this problem. Contrast is a measure of the brightness difference between the brightest and darkest areas of an image. Gamma correction is widely used as a global contrast correction method. It changes the image brightness through a constant exponent γ and can be expressed as Equation (5):

O(i, j) = 255 (I(i, j)/255)^γ, (5)

where I(i, j) and O(i, j) represent the pixel values at each coordinate of the input image and the output image, respectively, and γ is usually a positive number from 0 to 3. Simple gamma correction is a global method and does not apply to underexposed and overexposed scenes [20]. The local contrast correction method can adapt to the image properties [21], as shown in Equation (6):

O(i, j) = 255 (I(i, j)/255)^( g^((128 − BFm(i, j))/128) ), (6)

where BFm(i, j) is an inverted low-pass version of the intensity of the input image, filtered with a bilateral filter, and g is a parameter that depends on the image properties.
BFm(i, j) and g can be expressed by Equations (7) and (8), respectively:

BFm(i, j) = ( Σ_(k,l) f(k, l) ω(i, j, k, l) ) / ( Σ_(k,l) ω(i, j, k, l) ), (7)

g ≅ ln(Ī/255) / ln(0.5). (8)

In Equation (7), f(k, l) represents the input pixel and ω(·) represents the weight coefficient, which is obtained by multiplying the spatial domain function d(·) with the range domain function r(·):

ω(i, j, k, l) = exp(−((i − k)² + (j − l)²)/(2σ1²)) · exp(−‖f(i, j) − f(k, l)‖²/(2σ2²)), (9)

where σ1 and σ2 are the standard deviations of the Gaussian functions in the spatial domain and the range domain, respectively.
In Equation (8), Ī is the overall average value of the pixels of the input image. When Ī < 128, the first part of Equation (8) is used for calculation; when Ī > 128, the second part is used.
When the overall average value of the image pixels approaches Ī = 128, both parts of Equation (8) give a value of g approximately constant at 1. In this case, the output image hardly changes. An improved LCC method is proposed in this study to solve this problem. Considering that the guided filter has better edge-preserving performance than the bilateral filter [22], the guided filter is used instead. For the uncertain variable g, the method proposed by Moroney et al. [23] is used to fix it to the constant 2. The improved method is shown in Equation (10):

O(i, j) = 255 (I(i, j)/255)^( 2^((128 − GFm(i, j))/128) ), (10)

where GFm(i, j) is a mask provided by guided filtering of the inverted version of the input image.
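A minimal sketch of the improved LCC of Equation (10) is given below. For brevity, a separable box filter stands in for the guided filter when building the inverted low-pass mask GFm, and the channel mean stands in for the intensity; the filter radius is an arbitrary choice.

```python
import numpy as np

def box_blur(x, radius=2):
    """Separable box filter, a simple stand-in for the guided filter mask."""
    size = 2 * radius + 1
    k = np.ones(size) / size
    xp = np.pad(x, radius, mode='edge')
    xp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, xp)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, xp)

def local_contrast_correction(img):
    """Improved LCC (Equation (10)) on a float RGB image in [0, 1].

    The mask is a low-pass filtered version of the inverted intensity,
    so dark neighborhoods get an exponent < 1 (brightening) and bright
    neighborhoods get an exponent > 1 (darkening).
    """
    intensity = img.mean(axis=2)                  # luminance proxy
    mask = box_blur(1.0 - intensity, radius=2)    # inverted, low-pass mask
    mask255 = 255.0 * mask
    # exponent 2 ** ((128 - GFm) / 128), per Moroney's formulation
    exponent = 2.0 ** ((128.0 - mask255) / 128.0)
    return np.clip(img, 0.0, 1.0) ** exponent[..., None]
```

On a uniformly dark patch the local exponent drops below 1 and the output brightens; on a uniformly bright patch the exponent exceeds 1 and the output darkens, which is exactly the behavior discussed for Figure 3.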
The guided filter assumes a local linear model between the guide image I and the filtering output q in each window ω_k:

q_i = a_k I_i + b_k, ∀i ∈ ω_k, (11)

where I is the guide image and a_k and b_k are the constant coefficients of this linear function when the window center is located at k. They can be obtained from Equation (12):

a_k = ( (1/|ω|) Σ_{i∈ω_k} I_i p_i − μ_k p̄_k ) / (σ_k² + ε), b_k = p̄_k − a_k μ_k, (12)

where μ_k and σ_k² represent the expectation and variance of I in window ω_k, respectively, |ω| is the number of pixels in filter window ω_k, ε is a regularization parameter, and p̄_k = (1/|ω|) Σ_{i∈ω_k} p_i is the average value of the input image p in window ω_k.
Guide image I is used as the input image to obtain GFm, that is, I_i = p_i. The numerator of a_k in Equation (12) then becomes (1/|ω|) Σ_{i∈ω_k} p_i p_i − p̄_k p̄_k. According to the variance formula V(X) = E(X²) − (E(X))², Equation (12) can be expressed as

a_k = σ_k² / (σ_k² + ε), (13)

b_k = (1 − a_k) p̄_k. (14)

Substituting Equations (13) and (14) into Equation (11) yields the output of the self-guided filter.
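Under the self-guided simplification (I = p), the filter of Equations (13) and (14) can be sketched as follows. Square box windows and the value of the regularization ε are our own assumptions; both would need tuning in practice.

```python
import numpy as np

def box_blur(x, radius=2):
    """Mean over a (2r+1) x (2r+1) window, i.e. the 1/|w| sums in Eq. (12)."""
    size = 2 * radius + 1
    k = np.ones(size) / size
    xp = np.pad(x, radius, mode='edge')
    xp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, xp)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, xp)

def self_guided_filter(p, radius=2, eps=1e-2):
    """Guided filter with the input as its own guide (I = p).

    a_k = var_k / (var_k + eps) and b_k = (1 - a_k) * mean_k, as in
    Equations (13) and (14); the coefficients are then averaged over
    all windows covering each pixel before the output is formed.
    """
    mean_p = box_blur(p, radius)
    var_p = box_blur(p * p, radius) - mean_p ** 2  # per-window variance
    a = var_p / (var_p + eps)
    b = (1.0 - a) * mean_p
    return box_blur(a, radius) * p + box_blur(b, radius)
```

In flat regions the window variance is small, a_k approaches 0, and the output tends to the local mean (smoothing); near strong edges the variance dominates ε, a_k approaches 1, and the edge is preserved.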

Image Sharpening
Contrast correction can improve overexposure and underexposure of the image. It can also repair the missing color area. However, the underwater image is usually fuzzy, and the details are not obvious. Thus, the sharpening version of underwater images is introduced in this study as another input version.
The guided filter can achieve the edge-smoothing effect of the bilateral filter with good edge preservation. Accordingly, the second input version adopts the guided filter, and the sharpening result can be expressed as

S(i, j) = I(i, j) + s (I(i, j) − q(i, j)), (15)

where I is the input image, q is the image obtained after guided filtering of I, and s is a constant coefficient; in this study, s = 1. The calculation of q can be obtained from Equation (16):

q_i = ā_i I_i + b̄_i, (16)

where ā_i and b̄_i are the averages of the coefficients a_k and b_k over all windows ω_k that contain pixel i.
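A sketch of the sharpening step of Equation (15). Again a box blur stands in for the guided filter that produces q; with s = 1 this is classic unsharp masking, which is the spirit of the formulation above.

```python
import numpy as np

def box_blur(x, radius=2):
    """Separable box filter; stand-in here for the guided-filter output q."""
    size = 2 * radius + 1
    k = np.ones(size) / size
    xp = np.pad(x, radius, mode='edge')
    xp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, xp)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, xp)

def sharpen(img, s=1.0, radius=2):
    """Equation (15): S = I + s * (I - q), channel by channel."""
    q = np.stack([box_blur(img[..., c], radius)
                  for c in range(img.shape[2])], axis=-1)
    return np.clip(img + s * (img - q), 0.0, 1.0)
```

Flat regions are left untouched (I − q vanishes there), while transitions are exaggerated, which is why this version carries the detail information into the fusion.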

Selection of Weights
Using a weight graph in the fusion process can highlight the pixels with high weight value in the result. For the selection of weight image, the Laplacian contrast, saliency, and saturation features of the image are selected in this study.
Laplacian contrast weight (W_L) estimates the global contrast. It is computed as the absolute value of a Laplacian filter applied to the luminance channel. This filter has been used to extend the depth of field of an image [24] because it assigns high values to edges and textures. However, this weight alone is not enough to restore contrast because it cannot distinguish between ramp and flat regions; the saliency weight is used to overcome this problem.
Saliency weight (W_Sal) highlights objects and regions that lose saliency in underwater scenes. The regional contrast-based salient object detection algorithm proposed by Cheng [25] is used to detect the saliency level. This method considers global contrast and spatial coherence and can produce a full-resolution saliency map. The histogram-based contrast variant is shown in Equation (17):

S(I_p) = Σ_{∀I_i ∈ I} D(I_p, I_i), (17)

where D(I_p, I_i) is the color distance metric between pixels I_p and I_i in the L*a*b* color space for perceptual accuracy [26].
Saturation weight (W_Sat) makes the fusion algorithm adapt to chromatic information via high-saturation regions. For each input I_k, the weight is calculated as the deviation between the R_k, G_k, B_k color channels and the luminance L_k of the k-th input at each pixel position:

W_Sat = sqrt( (1/3) [ (R_k − L_k)² + (G_k − L_k)² + (B_k − L_k)² ] ). (18)

After the weight estimates of the two input versions are obtained, the three weights of each version are combined as follows: for each input version n, W_L, W_Sal, and W_Sat are linearly superimposed to obtain an aggregated weight. Then, the N aggregated maps are normalized pixel by pixel, dividing the weight of each pixel in each map by the sum of the weights of the same pixel over all maps. The normalization can be expressed by Equation (19):

W̄_n = (W_n + δ) / ( Σ_{n=1}^{N} W_n + N δ ), (19)

where W̄_n is the normalized weight, N = 2, and δ is a small constant coefficient set to 0.001 to prevent the denominator from becoming 0.
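The Laplacian contrast and saturation weights, and the normalization of Equation (19), can be sketched as follows. The saliency weight is omitted here, since Cheng's region-contrast method is considerably more involved; shapes and the δ value follow the text, everything else is our own simplification.

```python
import numpy as np

def laplacian_contrast_weight(lum):
    """|Laplacian| of the luminance channel (W_L)."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    pad = np.pad(lum, 1, mode='edge')
    out = np.zeros_like(lum)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * pad[dy:dy + lum.shape[0], dx:dx + lum.shape[1]]
    return np.abs(out)

def saturation_weight(img, lum):
    """Deviation of R, G, B from luminance (Equation (18))."""
    diff = img - lum[..., None]
    return np.sqrt((diff ** 2).mean(axis=2))

def normalize_weights(weights, delta=0.001):
    """Per-pixel normalization across the N inputs (Equation (19))."""
    total = sum(weights)
    return [(w + delta) / (total + len(weights) * delta) for w in weights]
```

After normalization the weight maps sum to one at every pixel, so the subsequent fusion is a per-pixel convex combination of the two input versions.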

Multi-Scale Fusion
Concerning image fusion, Equation (20) can be used for simple weighted processing of the two groups of input images:

R(i, j) = Σ_{n=1}^{N} W̄_n(i, j) I_n(i, j). (20)

However, this method leads to artifacts in the resulting images. Thus, the fusion method based on multi-scale Laplacian pyramid decomposition is adopted in this study to avoid this situation [27].
The Laplacian operator is applied to obtain the first layer of the pyramid for each input image version. The second layer is then obtained by downsampling that layer, and so on; a three-level pyramid is used in this study. Similarly, each normalized weight map W̄_n is filtered with the low-pass Gaussian kernel G to obtain a Gaussian pyramid matching each level of the Laplacian pyramid. The fused pyramid can be expressed as

pyramid_l(i, j) = Σ_{n=1}^{N} G_l{ W̄_n(i, j) } L_l{ I_n(i, j) }, (21)

where pyramid_l(i, j) is level l of the fused pyramid, N is the number of input images, G_l is level l of the Gaussian pyramid, and L_l is level l of the Laplacian pyramid.
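The multi-scale fusion of Equation (21) can be sketched as follows, using a 5-tap binomial kernel for the pyramid smoothing and nearest-neighbor upsampling; both are common but arbitrary choices, and the final collapse of the fused pyramid is implied by the method though not spelled out in the text.

```python
import numpy as np

def blur(x):
    """5-tap binomial smoothing, applied separably over the two spatial axes."""
    k = np.array([1., 4., 6., 4., 1.]) / 16.0
    def conv1d(a, axis):
        pad_width = [(0, 0)] * a.ndim
        pad_width[axis] = (2, 2)
        ap = np.pad(a, pad_width, mode='edge')
        return np.apply_along_axis(lambda v: np.convolve(v, k, mode='valid'), axis, ap)
    return conv1d(conv1d(x, 0), 1)

def down(x):
    return blur(x)[::2, ::2]

def up(x, shape):
    y = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]
    return blur(y)

def fuse(inputs, weights, levels=3):
    """Multi-scale fusion (Equation (21)): per level, sum over inputs of
    Gaussian-pyramid weights times Laplacian-pyramid images."""
    lap_pyrs, w_pyrs = [], []
    for img, w in zip(inputs, weights):
        gp, wp = [img], [w]
        for _ in range(levels - 1):
            gp.append(down(gp[-1]))
            wp.append(down(wp[-1]))
        lp = [gp[l] - up(gp[l + 1], gp[l].shape) for l in range(levels - 1)]
        lp.append(gp[-1])          # coarsest level keeps the Gaussian image
        lap_pyrs.append(lp)
        w_pyrs.append(wp)
    fused_pyr = []
    for l in range(levels):        # weighted sum level by level
        acc = sum(w_pyrs[n][l][..., None] * lap_pyrs[n][l]
                  for n in range(len(inputs)))
        fused_pyr.append(acc)
    out = fused_pyr[-1]            # collapse from coarse to fine
    for l in range(levels - 2, -1, -1):
        out = fused_pyr[l] + up(out, fused_pyr[l].shape)
    return np.clip(out, 0.0, 1.0)
```

Because the weights are blended at every scale, the seams that a single-scale weighted sum would create at weight transitions are spread across frequency bands, which is what suppresses the block artifacts discussed for Figure 5.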

Underwater Image Quality Evaluation Metric
The underwater image quality evaluation metric aims to analyze and score the processed underwater image objectively. At present, two recognized methods can be used for underwater image quality evaluation. These methods are underwater color image quality evaluation (UCIQE) [1] and underwater image quality measures (UIQM) [28].
The UCIQE method quantifies the nonuniform color cast, blurring, and low contrast of underwater images. The underwater image is converted from the RGB color space to the CIEL*a*b* color space, and each component is then calculated as expressed by Equation (22):

UCIQE = c1 × σ_c + c2 × con_l + c3 × μ_s, (22)

where σ_c is the standard deviation of chroma, con_l is the contrast of luminance, μ_s is the average value of saturation, and c1, c2, c3 are the weight coefficients.
The UIQM method consists of three parts: underwater image colorfulness, sharpness, and contrast measures. It can be expressed as

UIQM = c1 × UICM + c2 × UISM + c3 × UIConM, (23)

where UICM, UISM, and UIConM correspond to the colorfulness, sharpness, and contrast measures, respectively, and c1, c2, c3 are the corresponding weight coefficients. When the color correction results of underwater images are evaluated, UICM is given a greater weight; UISM and UIConM are given greater weight when sharpness and contrast are evaluated.
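For orientation, a UCIQE-style score in the spirit of Equation (22) can be sketched as below. Note that this is only an approximation: the chroma, saturation, and luminance terms are computed directly from RGB rather than from CIEL*a*b*, the percentile-based contrast is our own stand-in for con_l, and the default weights are the coefficients reported for UCIQE [1].

```python
import numpy as np

def uciqe_like(img, c1=0.4680, c2=0.2745, c3=0.2576):
    """UCIQE-style score (Equation (22)) with simple RGB-based stand-ins
    for the CIELab quantities used by the published metric."""
    lum = img.mean(axis=2)
    mx, mn = img.max(axis=2), img.min(axis=2)
    chroma = mx - mn                              # crude chroma proxy
    sat = np.where(mx > 0, chroma / (mx + 1e-6), 0.0)
    sigma_c = chroma.std()                        # std of chroma
    # contrast of luminance: spread between top and bottom percentiles
    con_l = np.percentile(lum, 99) - np.percentile(lum, 1)
    mu_s = sat.mean()                             # mean saturation
    return c1 * sigma_c + c2 * con_l + c3 * mu_s
```

As expected from the definition, a flat gray image scores zero, while a colorful, high-contrast image scores higher.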

Results and Discussion
The underwater image data used in the experiment in this section are all real underwater scenes derived from Li's website, including datasets from Ancuti et al. [4], Fattal [29], Chiang et al. [10], and CB et al. [30]. These datasets include scenes of underwater green, blue, and blue-green [31]. The contrast experiments are carried out from the aspects of colorfulness recovery and contrast enhancement. Then, a comprehensive metric evaluation is performed to prove the advantages of this method.

Color Restoration Experiment
The photos, taken by divers holding standard color cards underwater, are shown in Figure 2. The color restoration effects of the original image, Reference [9], Reference [32], Reference [6], Reference [11], Reference [4], and the proposed method are shown from the first row onward. Figure 2b shows no change from the original image. Figure 2c,e are sharper than our result, but they are dark in colorfulness and poor in visual effect. Figure 2d,f are better than the others; Figure 2f has the best overall visual effect, but the photo suffers from overall redness. Among these methods, the color recovery of our algorithm is slightly worse than that of Figure 2f overall, but it recovers the colors of the standard color card better.

Figure 2. (a) Initial image, (b) He [9], (c) Galdran [32], (d) Galdran [6], (e) Ancuti [11], (f) Ancuti [4], (g) our result.
From the perspective of quantitative metrics, the underwater image colorfulness measure (UICM) mentioned in Section 2 (see Table 1) shows that the two methods proposed by Ancuti et al. [11] and the method proposed in this study achieve higher scores than the other methods. The proposed algorithm scores lower than the method of Figure 2e but higher than the remaining methods.

Table 1. Underwater image color restoration evaluation based on the UICM metric; the larger the value, the better the color restoration.

Contrast Correction Experiment
A bluish-green underwater scene with extreme light and dark areas is selected from the dataset to prove the effectiveness of the improved LCC method of this study. Figure 3a is the image after white balance, and Figure 3b,c show global contrast correction with γ = 0.7 and γ = 1.3, respectively. Figure 3b,c are, respectively, brighter and darker overall than the original image. Concerning the area within the red frame, although Figure 3b brightens the dark area, it makes the originally bright area too bright; Figure 3c makes the dark area darker while suppressing the bright area, thus losing more image information. Figure 3d shows the improved LCC method of this study. In the red frame, the bright area is not overexposed, and the image information of the dark area is preserved. The improved method also captures more natural colors than the global contrast methods. Note that the results presented in this section are not the final results but those of LCC alone; they look blurred because the images have not yet been sharpened. From the gray histogram (see Figure 4), compared with global contrast correction, the improved method suppresses extremely bright areas and enhances extremely dark areas, so more pixels are concentrated in the middle range.

Comparison of Simple Weighted Fusion and Multi-Scale Fusion
The advantage of a multi-scale fusion strategy over a simple weighted fusion is that it can effectively avoid artifacts in fusion results. Figure 5 is a comparison between using simple weighted addition and multi-scale Laplacian pyramid decomposition fusion. The enlarged area in the red box shows that many unnecessary block artifacts will appear in the image when the simple weighted addition fusion method is used. The multi-scale fusion strategy can effectively suppress the artifacts, as shown in Figure 5b.


Image Quality Evaluation
The typical underwater images of blue, green, and blue-green scenes are selected for comparative experiments. They are used to verify the applicability of the proposed algorithm in different scenes. The colorfulness and detail are compared, and the quantitative indicators UCIQE and UIQM are used for score comparison. The effect of the proposed algorithm on image detail enhancement is verified by feature point matching.
The comparison between the image enhancement results of the proposed and other algorithms is shown in Figure 6. The hue of Figure 6b is brighter than the original image, and oversaturation of the red channel makes the image redder. The image in Figure 6c is darker after processing, and the effect of Figure 6d is better than those of Figure 6a,c. However, these three methods cannot effectively remove the atomization effect of the underwater image, and the details are not prominent. The fusion strategy is used in Figure 6e; compared with the other methods, it can restore the color and enhance the image details. In this study, the details are strengthened further, and red saturation is suppressed.

Figure 6. (a) Initial image, (b) Gibson et al. [33], (c) Fattal et al. [29], (d) Lu et al. [5], (e) Ancuti et al. [4], and (f) our method.
More underwater image comparison experiments, involving serious color distortion and unclear details, are shown in Figure 7. Concerning color correction, References [9,32] fail to improve the color distortion, and the results of Reference [6] produce varying degrees of red saturation. By contrast, the proposed method effectively suppresses red oversaturation, and the visual result is better than the other methods. Concerning detail enhancement, Reference [11] performs poorly in underwater environments with serious color distortion; its results are generally too bright, and image details are lost. The improvement of Reference [4] is better, and the proposed method further improves the detail of the image. Considering both color correction and detail enhancement, the overall visual effect of the proposed method is better than References [9,33] and slightly better than Reference [4].

Figure 7. Comparison of different outdoor approaches and underwater enhancement approaches. The quantitative evaluation associated with these images is provided in Table 2.
Next, the comparison results in Figure 7 are evaluated with the UCIQE and UIQM performance metrics; the evaluation results are shown in Table 2, where the highest result for each metric is underlined. The proposed method outperforms the comparison methods in the UCIQE metric, which integrates the nonuniform color cast and low contrast of underwater images, as well as in the UIQM metric. Overall, the method is better than the comparison methods in terms of image color, sharpness, and contrast.
The UCIQE and UIQM evaluation metrics obtained by applying the different methods to different underwater scenes in Table 2 are drawn as line charts to show the advantages of this method intuitively. The UCIQE metric is greatly improved compared with the other methods, as shown in Figure 8. The UIQM metric also performs better than References [6], [11], and [4], as shown in Figure 9. To discuss the robustness of the method, the data in Table 2 are further analyzed, and box plots are drawn, as shown in Figures 10 and 11. The upper and lower edges of each box represent the upper and lower limits of the evaluation metric, respectively, and the green line represents the median. No outliers are observed in the UCIQE and UIQM evaluations, and the fluctuation is smaller than that of the other methods. Thus, the proposed method performs well on underwater images in different scenes.

Feature point matching is one of the basic tasks in computer vision, and it is helpful for underwater animal classification and fish recognition [34]. The accuracy of feature point matching can demonstrate the effectiveness of the method in detail enhancement. In this study, the scale-invariant feature transform (SIFT) operator is used to match the feature points of two groups of underwater images, and the same experiment is repeated on the processed images. The results show that the number of matched local feature points in the processed images increases significantly (see Figure 12).
The proposed method is superior to recent methods in overall effect, but it also has some limitations. It cannot effectively suppress scattered speckle interference light in underwater scenes (such as ship and Ancuti2 in Figure 7). Although it aims to restore and enhance underwater images, for low-resolution images that contain block mosaics, the unnatural mosaics are amplified during detail enhancement. It also cannot process motion-blurred images effectively.

Conclusions
In this study, the traditional global contrast method is improved and combined with a multi-scale fusion strategy. The method operates on a single image without relying on an image formation model or large training datasets. The experimental results show that the proposed method accounts for both color restoration and detail enhancement of underwater images and performs well in contrast correction. In the qualitative and quantitative comparative experiments, the improved method outperforms recent methods. SIFT feature point matching is also compared before and after image enhancement, and the comparison confirms the effectiveness of the method in detail enhancement. The limitations of this study will be addressed in future work.