Journal of Marine Science and Engineering
  • Article
  • Open Access

16 May 2023

Distance-Independent Background Light Estimation Method

College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin 150001, China
* Author to whom correspondence should be addressed.
This article belongs to the Section Ocean Engineering

Abstract

A distance-independent background light estimation method is proposed for underwater overhead images. The method addresses the challenge of the absence of the farthest point in underwater overhead images by adopting a global perspective to select the optimal solution and estimate the background light by minimizing the loss function. Moreover, to enhance the information retention in the images, a translation function is employed to adjust the transmission map values within the range of [0.1, 0.95]. Additionally, the method capitalizes on the redundancy of image information and the similarity of adjacent frames, resulting in higher computational efficiency. The comparative experimental results show that the proposed method has better restoration performance on underwater images in various scenarios, especially in handling color bias and preserving information.

1. Introduction

The development of modern technology has made unmanned underwater vehicles (UUVs) suitable oceanic equipment for performing various underwater tasks. In political, economic, and military fields, UUVs play an irreplaceable role. They are widely used in various underwater tasks, e.g., pipeline tracking, underwater terrain scanning, laying underwater cables, developing oceanic resources, etc. As the tasks performed by UUVs become more complex, acoustic technology is unable to meet the accuracy requirements for fine operations. In contrast, optical technology is more suitable for close-range, delicate operations. The visual quality of underwater optical images directly affects the performance of subsequent operations, such as target detection, feature extraction, and pose estimation. Therefore, obtaining clear and realistic underwater images is crucial for precise underwater operations. However, the special properties of the underwater environment degrade the quality of underwater images. Because light of different wavelengths attenuates at different rates underwater and suspended impurities scatter light, underwater images often suffer from low contrast, blurred object edges, high noise, and a blue-green color deviation. These problems greatly affect the effectiveness of UUVs in underwater tasks and may even lead to task failure. Therefore, image processing technology for underwater images plays a crucial role in UUV underwater operations.
This paper aims to address the issue of poor image restoration caused by the absence of the farthest point in underwater overhead images. To achieve this, a distance-independent real-time underwater image restoration method suitable for CPUs is proposed. Specifically, a loss function is constructed to minimize information loss and achieve histogram distribution equalization. For the severely attenuated red channel, the background light is calculated based on the minimum difference principle of gray value. In addition, this method also optimizes the transmission map estimation method [1] using a translation function to reduce information loss. To address real-time issues, two acceleration strategies are proposed based on the redundancy of image information and the similarity of adjacent frames. Experimental results show that the proposed method can restore underwater images with richer colors and more information.
The rest of the paper is organized as follows. In Section 2, the existing underwater image processing methods and their shortcomings are reviewed. In Section 3, the underwater imaging model is introduced. In Section 4, the proposed method for real-time underwater image restoration is explained. In Section 5, the experimental results are presented and discussed. Finally, Section 6 is the conclusion.

3. Background

Underwater images are generally degraded due to the absorption and scattering of light. The medium of water causes the absorption of light, reducing the energy of light based on its wavelength and depth. As different wavelengths of light have varying attenuation rates in water, this leads to color distortion. In addition, the scattering of light is caused by suspended particles in the water reflecting light in other directions, resulting in image blurring. The Jaffe–McGlamery underwater imaging model suggests that the light received by the camera in an underwater scene contains three components: the direct attenuation component, the forward scattering component, and the backscattering component, as shown in Figure 2. Therefore, an underwater image can be represented as a linear superposition of these three components. The model can be expressed as
$E_T = E_d + E_f + E_b$ (1)
where $E_T$ is the light arriving at the camera, $E_d$ is the direct attenuation, $E_f$ is the forward scattering, and $E_b$ is the backscattering.
Figure 2. Jaffe–McGlamery underwater imaging model.
Direct attenuation is the attenuation due to the main radiation within the medium with increasing propagation distance. It is mathematically defined as
$E_d(x, \lambda) = J(x, \lambda)\, t(x, \lambda), \quad \lambda \in \{R, G, B\}$ (2)
where $x$ denotes a pixel, $\lambda$ is the wavelength of light, $J(x, \lambda)$ is the non-degraded image, and $t(x, \lambda)$ is the transmission map.
Forward scattering is the light reflected from the target object that reaches the camera through scattering. It can be expressed as
$E_f(x, \lambda) = E_d(x, \lambda) * g_d(x), \quad \lambda \in \{R, G, B\}$ (3)
where $g_d(x)$ is the point spread function; because forward scattering has little effect on image quality, it is usually neglected.
Backscattering is the light reaching the camera due to the scattering effect of impurities in the water. It can be expressed as
$E_b(x, \lambda) = B(\lambda)\,(1 - t(x, \lambda)), \quad \lambda \in \{R, G, B\}$ (4)
where $B(\lambda)$ is the background light; it represents the color of the body of water, and its value can be expressed as the grayscale value of a point at an infinite distance from the camera.
The simplified underwater imaging model [25] can be expressed as
$I(x, \lambda) = J(x, \lambda)\, t(x, \lambda) + B(\lambda)\,(1 - t(x, \lambda)), \quad \lambda \in \{R, G, B\}$ (5)
where $I(x, \lambda)$ is the observed image.
Equation (5) can be transformed into Equation (6). The main task of underwater image recovery is to estimate the transmission map t x , λ and the background light B ( λ ) .
$J(x, \lambda) = \dfrac{I(x, \lambda) - B(\lambda)\,(1 - t(x, \lambda))}{t(x, \lambda)}, \quad \lambda \in \{R, G, B\}$ (6)
Methods for solving the background light and the transmission map often involve estimating the distance between the camera and the target. This is because the background light represents the grayscale value of a point at infinity from the camera. In addition, according to the Beer–Lambert law [29], as shown in Equation (7), the transmission map decreases exponentially as the distance increases.
$t(x, \lambda) = e^{-c\, d(x)}, \quad \lambda \in \{R, G, B\}$ (7)
where $c$ is the attenuation coefficient and $d(x)$ is the distance between the camera and the target.
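For illustration, a minimal NumPy sketch of Equations (6) and (7) might look as follows; the single-channel layout, the value range [0, 1], the scalar background light, and the small epsilon guard against division by zero are assumptions of this sketch rather than details prescribed by the paper.

```python
import numpy as np

def transmission_from_distance(d, c):
    """Beer-Lambert transmission (Eq. 7): t = exp(-c * d) for a distance map d."""
    return np.exp(-c * d)

def restore_channel(I, t, B):
    """Invert the simplified imaging model (Eq. 6) for one color channel.

    I : observed channel with values in [0, 1]
    t : transmission map of the same shape
    B : scalar background light for this channel
    """
    J = (I - B * (1.0 - t)) / np.maximum(t, 1e-6)  # epsilon avoids division by zero
    return np.clip(J, 0.0, 1.0)
```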

4. The Proposed Method

In this section, this paper will introduce the proposed method for restoring overhead underwater images, as well as strategies for accelerating the process. The operational steps of the proposed method are shown in Figure 3. Firstly, the transmission map is calculated, and the values of the transmission map outside the range are transferred by a translation function to reduce information loss. Secondly, the background light for the blue and green channels is estimated by minimizing the loss function. Thirdly, the background light for the red channel is determined based on the principle of minimizing the average grayscale difference between the three channels. Finally, the degraded underwater image can be restored based on the obtained background light and transmission map.
Figure 3. Operation process of the proposed method.

4.1. Transmission Map Estimation

To achieve accurate transmission map solutions and avoid compounding errors, this paper utilizes the MIP algorithm’s [1] transmission map solution method instead of the DCP algorithm, which depends on the background light. In the MIP algorithm, the largest differences among the three different color channels are calculated as follows:
$D(x) = \max_{x \in \Omega,\ \lambda = R} I(x, \lambda) - \max_{x \in \Omega,\ \lambda \in \{G, B\}} I(x, \lambda)$ (8)
where $D(x)$ is the largest difference among the three color channels, and $\Omega$ is a local patch in the image.
The transmission map is
$t(x) = D(x) + \left(1 - \max_x D(x)\right)$ (9)
As shown by Equation (5), t(x) and (1 − t(x)), respectively, represent the contribution of the main radiation and the background radiation to the image. The value of t(x) decreases as the background radiation dominates the grayscale value. However, as the background radiation is usually not as bright as the main radiation, the minimum value of t(x) is kept at 0.1 to avoid an overly dark recovered image. In addition, to preserve the image’s authenticity, a portion of the fog is retained in the recovered image. Hence, the maximum value of t(x) is set to 0.95 [5]. This leads to a transmission map range of [0.1, 0.95].
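For illustration, Equations (8) and (9) could be computed as in the following sketch. It assumes an H × W × 3 float image in (R, G, B) order and uses a local maximum filter for the patch operation; the patch size is an assumed parameter, and the range handling described next is applied afterwards.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def mip_transmission(img, patch_size=9):
    """Transmission estimate following the MIP idea (Eqs. 8-9).

    img : H x W x 3 float array in [0, 1], channel order (R, G, B).
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Local patch maxima of the red channel and of the green/blue channels
    max_r = maximum_filter(r, size=patch_size)
    max_gb = maximum_filter(np.maximum(g, b), size=patch_size)
    D = max_r - max_gb            # largest difference between channels (Eq. 8)
    t = D + (1.0 - D.max())       # shift so the largest D maps to t = 1 (Eq. 9)
    return t
```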
However, simply cutting off the transmission map values beyond the range of values will lead to the loss of information. Therefore, in this paper, the transmission map is transformed as follows to transfer the information in the transmission map to the range [0.1, 0.95] as much as possible.
$\tau = \begin{cases} \min(t_{max} - 0.95,\ t_{min} - 0.1), & t_{max} > 0.95,\ t_{min} > 0.1 \\ -\min(0.95 - t_{max},\ 0.1 - t_{min}), & t_{max} < 0.95,\ t_{min} < 0.1 \\ 0, & \text{otherwise} \end{cases}$
$t_{trans}(x) = t(x) - \tau$ (10)
where $\tau$ is the quantity of the transformation, $t_{min}$ is the minimum value of the transmission map, $t_{max}$ is the maximum value of the transmission map, and $t_{trans}(x)$ is the transmission map after the transformation.
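A minimal sketch of this translation is given below; it assumes the transmission map is a NumPy array and that any values still outside [0.1, 0.95] after the shift are finally clipped, which is an assumption of the sketch.

```python
import numpy as np

def translate_transmission(t, lo=0.1, hi=0.95):
    """Shift the transmission map toward [lo, hi] (Eq. 10) instead of simply clipping it."""
    t_min, t_max = t.min(), t.max()
    if t_max > hi and t_min > lo:
        tau = min(t_max - hi, t_min - lo)    # map sits too high: shift down
    elif t_max < hi and t_min < lo:
        tau = -min(hi - t_max, lo - t_min)   # map sits too low: shift up
    else:
        tau = 0.0
    t_trans = t - tau
    # Any values still outside the range are clipped (assumption of this sketch)
    return np.clip(t_trans, lo, hi)
```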
This transformation technique retains some transmission map information beyond the effective range while maintaining the contrast of the original transmission map. The comparison results of the transmission maps before and after the transformation are depicted in Figure 4a,b. The entropy values of the transmission maps are 5.5424 and 5.5433, respectively, indicating that the transformed transmission map is more detailed and informative.
Figure 4. (a) Transmission map before transfer; (b) Transmission map after transfer.

4.2. Background Light of Blue and Green Channels Estimation

As mentioned above, it is not appropriate to use a single pixel or the mean value of a few pixels to represent the background light in an overhead image, because there is no point far enough away in the image that can be reasonably approximated as background light. To overcome this limitation, this paper calculates the value of the background light through inference rather than relying on any specific pixel value in the image. The value of the background light is inferred from the desired image result, which provides a more accurate representation of the background light.
Firstly, motivated by Li et al. [30], who determined the transmission map by minimizing information loss, this paper aims to find the background light by minimizing the information loss of the image. To ensure that the grayscale value of the restored image is between 0 and 1, any values outside this range are directly assigned to 0 or 1. If there are too many pixels outside the grayscale range, large black or white areas will appear in the image, leading to information loss from the original image.
An existing approach to this problem is to use a stretching function:
$S(v) = \dfrac{v - \min(V)}{\max(V) - \min(V)}$ (11)
where V represents the grayscale range before stretching and v represents the grayscale value involved in the calculation.
The grayscale difference between the two pixels can be expressed as
$\Delta v = v_1 - v_2, \qquad \Delta v_s = \dfrac{\Delta v}{\max(V) - \min(V)}$ (12)
where $\Delta v$ represents the difference between the two pixels before stretching, and $\Delta v_s$ represents the difference between the two pixels after stretching.
While the method is effective when the difference between the maximum and minimum grayscale values is less than 1, it is unsuitable for cases where this difference exceeds 1. In such scenarios, the method compresses the original grayscale values, resulting in decreased contrast between pixels. This can lead to loss of important information and reduced overall image quality. Therefore, compressing the contrast of the entire image to retain a few bright or dark pixels that contain minimal information is not a reasonable approach.
When the grayscale difference $\max(V) - \min(V)$ is greater than 1, the method described above directly clips values below 0 or above 1 to 0 or 1, respectively. However, to minimize information loss and maintain the contrast of the image, this paper aims to minimize the number of pixels that fall outside the grayscale range. In other words, the number of such pixels must be kept as small as possible:
$\min\left( M \times N - n_{J_{b_i} \in [0,1]}(\lambda) \right), \quad \lambda \in \{G, B\}$ (13)
where $M$ and $N$ represent the length and width of the image, and $n_{J_{b_i} \in [0,1]}(\lambda)$ is the number of pixels whose grayscale values fall between 0 and 1 in the image recovered with the background light $b_i$.
In addition, Li et al. [30] observed that the histograms of clear, fog-free images are distributed relatively uniformly without sharp points, based on their analysis of image datasets of five natural scenes. However, the histograms of underwater images are typically concentrated in a small range. In light of this, the present study aims to produce restored images with the most balanced grayscale distribution. This is achieved by minimizing the unevenness of the histogram of the restored image, which can be expressed as
$\min S_{over}(\lambda), \quad \lambda \in \{G, B\}$ (14)
where $S_{over}(\lambda)$ is the area of the histogram of the recovered image that lies above the desired histogram (the desired histogram is the most uniform distribution of image grayscale, with the same number of pixels, $(M \times N)/255$, falling on each gray value), indicating the degree of unevenness of the histogram distribution, as shown in Figure 5.
Figure 5. The area $S_{over}(\lambda)$.
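As an illustration, $S_{over}(\lambda)$ could be computed as in the following sketch, which assumes a restored channel with values in [0, 1] and a 256-bin histogram; the uniform level $(M \times N)/255$ follows the definition above.

```python
import numpy as np

def s_over(J, bins=256):
    """Area of the restored channel's histogram above the uniform histogram (Figure 5).

    J : restored channel in [0, 1]; only the part of each bin exceeding the
        uniform level (M*N)/255 contributes to the returned area.
    """
    hist, _ = np.histogram(np.clip(J, 0.0, 1.0), bins=bins, range=(0.0, 1.0))
    uniform_level = J.size / 255.0
    return np.sum(np.maximum(hist - uniform_level, 0.0))
```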
In summary, the loss function for obtaining the background light values in the range [0, 1] is defined as follows:
$\mathrm{Loss}(b_i, \lambda) = M \times N - n_{J_{b_i} \in [0,1]}(\lambda) + S_{over}(\lambda), \quad \lambda \in \{G, B\}$ (15)
Then the background light of the blue and green channels is
$B(\lambda) = \operatorname*{argmin}_{b_i} \mathrm{Loss}(b_i, \lambda), \quad \lambda \in \{G, B\}$ (16)
Through several experiments, it is found that the loss function is generally convex, i.e., it shows a trend of decreasing and then increasing as the background light $b_i$ increases, as shown in Figure 6. Therefore, inspired by loss function optimization methods in deep learning, instead of evaluating the loss at every possible value of the background light $b_i$, the gradient descent method can be used to find the background light that minimizes the loss, as shown in Equation (17); $B(\lambda)$ can then be obtained after several steps.
$b_{i+1}(\lambda) = b_i - \epsilon \dfrac{\partial \mathrm{Loss}(b_i, \lambda)}{\partial b_i}$ (17)
where ϵ is the learning rate.
Figure 6. The shape of the loss function.
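Putting the two terms together, a minimal sketch of Equations (15)–(17) is given below. It reuses the s_over helper from the sketch above, normalizes the loss by the pixel count so that the step size does not depend on image resolution, and approximates the derivative in Equation (17) with a central finite difference; these are all choices of the sketch, not details prescribed by the paper.

```python
import numpy as np

def channel_loss(b, I, t):
    """Normalized loss of Eq. (15) for a candidate background light b (one channel)."""
    J = (I - b * (1.0 - t)) / np.maximum(t, 1e-6)        # Eq. (6), before clipping
    out_of_range = np.count_nonzero((J < 0.0) | (J > 1.0))
    return (out_of_range + s_over(np.clip(J, 0.0, 1.0))) / I.size

def estimate_background_light(I, t, b0=0.5, lr=0.05, steps=100, delta=1e-2):
    """Minimize the loss over b as in Eq. (17), using a finite-difference gradient."""
    b = b0
    for _ in range(steps):
        grad = (channel_loss(b + delta, I, t) - channel_loss(b - delta, I, t)) / (2.0 * delta)
        b = float(np.clip(b - lr * grad, 0.0, 1.0))
    return b
```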

4.3. Background Light of Red Channel Estimation

Due to the red channel’s limited information, it is challenging to accurately determine the background light using only this channel. Therefore, this paper infers the red channel background light by leveraging information from the recovered blue-green channels. The study analyzes the mean values of the red, green, and blue channels of both natural and underwater image datasets, as illustrated in Figure 7 and Figure 8. The details of the datasets are as follows:
Figure 7. The mean values of the red, green, and blue channels of the natural image dataset.
Figure 8. The mean values of the red, green, and blue channels of the underwater image dataset.
  • The Caltech-UCSD Birds-200-2011 Dataset (http://www.vision.caltech.edu/datasets/cub_200_2011/, accessed on 20 March 2023) [31]. This is a natural image dataset for bird image classification, which includes 11,788 images covering 200 bird species.
  • CBCL Street Scenes Dataset (http://cbcl.mit.edu/software-datasets/streetscenes/, accessed on 20 March 2023) [32]. This is a dataset of street scene images captured by a DSC-F717 camera from Boston and its surrounding areas in Massachusetts, belonging to the category of natural image datasets, with a total of 3547 images.
  • Real World Underwater Image Enhancement dataset (https://github.com/dlut-dimt/Realworld-Underwater-Image-Enhancement-RUIE-Benchmark, accessed on 20 March 2023) [33]. This underwater image dataset was collected from a real ocean environment testing platform consisting of 4231 images. The dataset is characterized by its large data size, diverse degree of light scattering effects, rich color tones, and abundant detection targets.
The natural images exhibit similar three-channel mean values, whereas the underwater images show a clear separation between channels. To further investigate this difference, Equation (18) calculates the sum of the pairwise absolute differences between the three channel means, and Figure 9 displays the results. The analysis shows that the $D_{RGB}$ values for underwater images are significantly larger than those for natural images.
$D_{RGB} = |\bar{I}_R - \bar{I}_B| + |\bar{I}_R - \bar{I}_G| + |\bar{I}_G - \bar{I}_B|$ (18)
where $\bar{I}_R$, $\bar{I}_G$, and $\bar{I}_B$ are the mean values of the red, green, and blue channels of the observed image.
Figure 9. The $D_{RGB}$ values of the underwater and natural images.
Therefore, this paper aims to minimize the $D_{RGB}$ value. Specifically, the mean gray value of the restored blue and green channels is used as the mean gray value of the red channel to derive the red channel background light, as shown in Equation (19). This approach corrects the color shift problem in underwater images without additional image enhancement operations.
$\bar{J}_R = \dfrac{\bar{J}_B + \bar{J}_G}{2}, \qquad B_R = \dfrac{\bar{I}_R - \bar{J}_R \times \bar{t}}{1 - \bar{t}}$ (19)
where $\bar{J}_R$, $\bar{J}_G$, and $\bar{J}_B$ are the mean values of the red, green, and blue channels of the restored image, and $\bar{t}$ is the mean value of the transmission map.
In some cases, there may be a significant difference between the mean value of the blue channel J B ¯ and the mean value of the green channel J G ¯ , leading to a color bias towards red in the resulting image. To address this issue, this paper proposes a solution where the background light of the channel with the higher mean value is kept, and its recovery map is used to estimate the background light of the remaining two channels. For instance, if the mean value of the green channel is larger, the background light of the other two channels can be estimated as follows:
$B_R = \dfrac{\bar{I}_R - \bar{J}_G \times \bar{t}}{1 - \bar{t}}, \qquad B_B = \dfrac{\bar{I}_B - \bar{J}_G \times \bar{t}}{1 - \bar{t}}$ (20)
The recovered image is obtained by substituting the transmission map and background light into Equation (6).
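For illustration, Equations (19) and (20) might be implemented as in the following sketch. The threshold used to decide between the two cases is a hypothetical value (the paper does not specify one), and only the red-channel background light is re-estimated here; in the second case the paper also re-estimates the weaker of the blue/green channels in the same way.

```python
import numpy as np

def red_background_light(I_r, J_g, J_b, t, gap_threshold=0.1):
    """Estimate the red-channel background light from the restored green/blue channels.

    I_r           : observed red channel
    J_g, J_b      : restored green and blue channels
    t             : transmission map
    gap_threshold : hypothetical cut-off for a "significant" blue/green gap
    """
    t_mean = t.mean()
    if abs(J_b.mean() - J_g.mean()) < gap_threshold:
        J_r_mean = (J_b.mean() + J_g.mean()) / 2.0    # Eq. (19)
    else:
        J_r_mean = max(J_g.mean(), J_b.mean())        # anchor on the brighter channel (Eq. 20)
    return (I_r.mean() - J_r_mean * t_mean) / (1.0 - t_mean)
```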

4.4. Strategies to Speed Up

Considering the need for higher computational efficiency, this paper aims to improve the operation speed by addressing two key factors: the spatial resolution of images and the similarity of neighboring images.
First, the proposed background light calculation method is based on global image statistics and is therefore insensitive to image size. As shown in Figure 10, the restoration quality is not noticeably affected after the image is reduced to 1/K of its original size. Therefore, in practical applications, the value of K can be adjusted appropriately, weighing imaging quality against running speed.
Figure 10. Results of image restoration using the background light calculated from an image reduced by a factor of K.
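A minimal sketch of this first strategy is shown below; it assumes simple subsampling as the downscaling step (any resize routine would do) and reuses the background light and restoration sketches given earlier.

```python
import numpy as np

def downscale(img, K):
    """Reduce an H x W (x C) image by a factor of K via simple subsampling."""
    return img[::K, ::K]

# Example use (hypothetical variable names): estimate the background light on a
# 1/K-size copy, then restore the channel at full resolution.
# small_I = downscale(I_g, K=4)                        # I_g: observed green channel
# small_t = downscale(t, K=4)
# B_g = estimate_background_light(small_I, small_t)    # sketch from Section 4.2
# J_g = restore_channel(I_g, t, B_g)                   # sketch from Section 3
```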
Second, during continuous operation, the scene changes little across several adjacent frames, so the background light values can be shared between them to further improve the running speed. As shown in Figure 11, the recovered images are almost identical after neighboring images swap background light values.
Figure 11. The results of image restoration by sharing the background light between several adjacent frames.
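The second strategy amounts to re-estimating the background light only every few frames and reusing it in between. The following sketch expresses this with caller-supplied callables (hypothetical names standing in for the estimators sketched above); the re-estimation interval is an assumed parameter.

```python
def restore_video(frames, estimate_t, estimate_B, restore, k_frames=5):
    """Reuse the background light across adjacent frames.

    frames      : iterable of images
    estimate_t  : callable returning a transmission map for a frame
    estimate_B  : callable returning per-channel background lights for (frame, t)
    restore     : callable applying Eq. (6) given (frame, t, B)
    k_frames    : re-estimate the background light every k_frames frames
    """
    B = None
    out = []
    for i, frame in enumerate(frames):
        t = estimate_t(frame)
        if B is None or i % k_frames == 0:
            B = estimate_B(frame, t)
        out.append(restore(frame, t, B))
    return out
```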

5. Experimental Results and Discussion

The proposed approach is compared with five existing techniques: the dark channel prior (DCP) method [5], underwater dark channel prior (UDCP) method [6], Carlevaris-Bianco’s (MIP) method [1], Differential Attenuation Compensation (DAC) method [2] and Shallow-UWnet method [34]. Underwater images were obtained from the Real World Underwater Image Enhancement dataset [33]. Qualitative and quantitative evaluations are carried out to assess the performance of different methods.
In this study, different types of images are selected from the dataset (including images with a green tone, images with a blue tone, and images with less color bias). In addition, the performance of the proposed method is compared with that of other methods in restoring real underwater images. The qualitative results of the different methods on underwater overhead images are shown in Figure 12. The proposed method eliminates the effects of light absorption, corrects the blue-green color cast of underwater images, and restores rich and vibrant colors that agree with visual expectations. In addition, the proposed method successfully suppresses the effects of scattering and makes the images clearer. The other methods improve the underwater images to different extents but still leave room for improvement. Although the DCP and UDCP methods improve the clarity of the images, they do not correct the color bias. The MIP method successfully defogs the images but introduces an undesirable yellow color bias. The DAC algorithm overcomes the color bias problem, but its restored images are blurrier than those of the other methods, and the corrected colors are relatively dim. The Shallow-UWnet algorithm successfully corrects the color bias of blue-toned underwater images, but for images with a green tone or less color bias, the restored images are over-corrected, resulting in additional yellow or red color distortion; its results are therefore not satisfactory. The proposed method accounts for the fact that every point in an underwater overhead image is near the camera: it does not use distance-based background light estimation but instead selects the globally optimal background light. As a result, it produces more natural and superior results in terms of color and detail than the other methods.
Figure 12. Subjective evaluation results.
Table 1 shows the average processing time and hardware requirements for different algorithms to process one image, as well as their integrated development environments (IDE). The DCP method [5], UDCP method [6], MIP method [1], DAC method [2], and the proposed method run on the Windows 10 operating system, with 16 GB of memory and 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80 GHz (8 CPUs). The Shallow-UWnet method [34] runs on a Tesla T4 GPU. As shown in Table 1, among the algorithms running on the CPUs, the DCP algorithm has the fastest speed. The proposed method in this paper is slightly slower than MIP and UDCP. The Shallow-UWnet algorithm has good real-time performance but requires additional hardware resources.
Table 1. The average processing time, hardware requirements, and IDE of each method.
To show the proposed method’s advantages quantitatively, comparisons are made with other restoration methods using three underwater quality evaluation metrics: the Underwater Color Image Quality Evaluation Metric (UCIQE) [35], the Underwater Image Quality Measure (UIQM) [36], and the entropy.
UCIQE is an objective evaluation expressed as a linear combination of chroma, saturation, and contrast:
$UCIQE = m_1 \times \sigma_c + m_2 \times con_l + m_3 \times \mu_s$ (21)
where $m_1$, $m_2$, and $m_3$ are scale factors set as in the original paper [35]; $\sigma_c$ is the standard deviation of chroma, $con_l$ is the contrast of brightness, and $\mu_s$ is the average saturation.
UIQM evaluates the quality of underwater images through a linear combination of its three components: sharpness measure, colorfulness measure, and contrast measure.
$UIQM = c_1 \times UICM + c_2 \times UISM + c_3 \times UIConM$ (22)
where $c_1$, $c_2$, and $c_3$ are scale factors set as in the original paper [36]; UICM is the colorfulness measure, UISM is the sharpness measure, and UIConM is the contrast measure.
Entropy reflects the amount of information in an image: the higher the entropy, the more information the image contains and, in general, the clearer the image.
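For reference, this entropy can be computed from the grayscale histogram as in the following sketch, which assumes pixel values in [0, 1] and 256 bins.

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (in bits) of the grayscale histogram of an image in [0, 1]."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-np.sum(p * np.log2(p)))
```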
Table 2 and Table 3 report the UCIQE and UIQM scores of images shown in Figure 12. The scores in bold denote the best results. The proposed method has stronger robustness and optimal overall performance in various scenarios, as indicated by its maximum average value. While the UCIQE and UIQM scores for some of the images processed by the proposed method were slightly lower than those of other methods, these methods exhibited larger fluctuations, with high scores in some images and low scores in others. In contrast, the proposed method demonstrates a more stable restoration performance for various types of images. However, it should be noted that some of the restoration images with higher scores were too focused on enhancing contrast, neglecting color correction.
Table 2. UCIQE scores * of images shown in Figure 12.
Table 3. UIQM scores * of images shown in Figure 12.
To further demonstrate the superiority of the proposed method in color correction, the colorfulness measures in the UCIQE and UIQM scores ($\sigma_c$ and UICM) are additionally listed in Table 4 and Table 5. The results show that the proposed method achieved the highest or near-highest scores in both evaluation metrics, indicating that the algorithm can effectively improve the color level of the restored images. This is because the histogram distribution balance was considered when constructing the loss function, and the value of the background light was obtained by minimizing the loss function. Therefore, the calculated background light can make the restored images more colorful and vivid.
Table 4. The colorfulness measure $\sigma_c$ * in UCIQE scores of images shown in Figure 12.
Table 5. The colorfulness measure UICM * in UIQM scores of images shown in Figure 12.
Table 6 gives the entropy values of the images shown in Figure 12. The images restored by the proposed method carry more information. This is because the translation function in this study preserves more of the transmission map's information, and information loss was considered when constructing the loss function. This allows the calculated background light to retain more of the image's information.
Table 6. Entropy * values of images shown in Figure 12.

6. Conclusions

In this paper, a new image restoration method has been proposed for overhead images taken by UUVs performing tasks such as underwater pipeline tracking, underwater mine clearance, underwater terrain detection, and seafood fishing. Firstly, a distance-independent background light calculation method has been proposed for overhead images in which all points are close to the camera, unlike previous methods that approximate the background light with the most distant point in the image. Next, an optimization function based on the overall information loss as well as the uniformity of the histogram distribution has been used to calculate the blue and green channel background light values. Then, based on the statistical results, the mean value of the red channel has been determined by minimizing the sum of the differences between the mean values of the three channels, and this mean has been used to invert the background light value of the red channel. Moreover, a translation function has been used to keep the transmission map within the range [0.1, 0.95] while retaining the information it carries. Finally, considering real-time requirements, two strategies have been proposed to speed up the computation, based on the spatial resolution of the image and the similarity of adjacent frames. The experimental results show that the proposed method has strong robustness in adapting to various underwater environments. Moreover, the method can effectively correct the blue-green color cast in underwater images while preserving more information. Importantly, the method can run on a CPU without the need for additional hardware resources. The average processing time per frame is 0.3345 s, demonstrating good real-time performance.
However, the proposed method still has some limitations. The method focuses on color restoration and information preservation without taking measures to enhance contrast. In future work, efforts will be made to enhance the contrast of underwater images to make image details clearer.

Author Contributions

Conceptualization, Y.W., A.Y. and S.Z.; methodology, A.Y.; software, A.Y.; validation, Y.W., A.Y. and S.Z.; formal analysis, A.Y.; investigation, Y.W.; resources, S.Z.; data curation, A.Y.; writing—original draft preparation, A.Y.; writing—review and editing, Y.W.; visualization, A.Y.; supervision, Y.W. and S.Z.; project administration, Y.W.; funding acquisition, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The code of the proposed method is available at GitHub: https://github.com/YADyuaidi/underwater, accessed on 4 May 2023.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Carlevaris-Bianco, N.; Mohan, A.; Eustice, R.M. Initial results in underwater single image dehazing. In Proceedings of the Oceans 2010 MTS/IEEE Seattle, Seattle, WA, USA, 20–23 September 2010; pp. 1–8.
  2. Lai, Y.; Zhou, Z.; Su, B.; Zhe, X.; Tang, J.; Yan, J.; Liang, W.; Chen, J. Single underwater image enhancement based on differential attenuation compensation. Front. Mar. Sci. 2022, 9, 1047053.
  3. McGlamery, B. A computer model for underwater camera systems. In Proceedings of the Ocean Optics VI; SPIE: Bellingham, WA, USA, 1980; pp. 221–231.
  4. Jaffe, J.S. Computer modeling and the design of optimal underwater imaging systems. IEEE J. Ocean. Eng. 1990, 15, 101–111.
  5. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353.
  6. Drews, P., Jr.; do Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission Estimation in Underwater Single Images. In Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 2–8 December 2013; pp. 825–830.
  7. Song, W.; Wang, Y.; Huang, D.; Tjondronegoro, D. A rapid scene depth estimation model based on underwater light attenuation prior for underwater image restoration. In Proceedings of the Pacific Rim Conference on Multimedia, Hefei, China, 21–22 September 2018; pp. 678–688.
  8. Li, C.; Quo, J.; Pang, Y.; Chen, S.; Wang, J. Single underwater image restoration by blue-green channels dehazing and red channel correction. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 1731–1735.
  9. Liang, Z.; Wang, Y.; Ding, X.; Mi, Z.; Fu, X. Single underwater image enhancement by attenuation map guided color correction and detail preserved dehazing. Neurocomputing 2021, 425, 160–172.
  10. Ding, X.; Liang, Z.; Wang, Y.; Fu, X. Depth-aware total variation regularization for underwater image dehazing. Signal Process. Image Commun. 2021, 98, 116408.
  11. Emberton, S.; Chittka, L.; Cavallaro, A. Underwater image and video dehazing with pure haze region segmentation. Comput. Vis. Image Underst. 2018, 168, 145–156.
  12. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Sbetr, M. Color Channel Transfer for Image Dehazing. IEEE Signal Process. Lett. 2019, 26, 1413–1417.
  13. Li, T.; Wang, J.; Yao, K. Visibility enhancement of underwater images based on active polarized illumination and average filtering technology. Alex. Eng. J. 2022, 61, 701–708.
  14. Dai, C.; Lin, M.; Wu, X.; Wang, Z.; Guan, Z. Single underwater image restoration by decomposing curves of attenuating color. Opt. Laser Technol. 2020, 123, 105947.
  15. Peng, Y.-T.; Cosman, P.C. Underwater image restoration based on image blurriness and light absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594.
  16. Ke, K.; Zhang, C.; Wang, Y.; Zhang, Y.; Yao, B. Single underwater image restoration based on color correction and optimized transmission map estimation. Meas. Sci. Technol. 2023, 34, 055408.
  17. Wang, X.; Tao, C.; Zheng, Z. Occlusion-aware light field depth estimation with view attention. Opt. Lasers Eng. 2023, 160, 107299.
  18. Zhan, F.; Yu, Y.; Zhang, C.; Wu, R.; Hu, W.; Lu, S.; Ma, F.; Xie, X.; Shao, L. Gmlight: Lighting estimation via geometric distribution approximation. IEEE Trans. Image Process. 2022, 31, 2268–2278.
  19. Wang, Y.; Yu, X.; An, D.; Wei, Y. Underwater image enhancement and marine snow removal for fishery based on integrated dual-channel neural network. Comput. Electron. Agric. 2021, 186, 106182.
  20. Wang, K.; Shen, L.; Lin, Y.; Li, M.; Zhao, Q. Joint Iterative Color Correction and Dehazing for Underwater Image Enhancement. IEEE Robot. Autom. Lett. 2021, 6, 5121–5128.
  21. Zong, X.; Chen, Z.; Wang, D. Local-CycleGAN: A general end-to-end network for visual enhancement in complex deep-water environment. Appl. Intell. 2021, 51, 1947–1958.
  22. Zhu, S.; Luo, W.; Duan, S. Enhancement of Underwater Images by CNN-Based Color Balance and Dehazing. Electronics 2022, 11, 2537.
  23. Hong, L.; Wang, X.; Xiao, Z.; Zhang, G.; Liu, J. WSUIE: Weakly Supervised Underwater Image Enhancement for Improved Visual Perception. IEEE Robot. Autom. Lett. 2021, 6, 8237–8244.
  24. Gui, X.; Zhang, R.; Cheng, H.; Tian, L.; Chu, J. Multi-Turbidity Underwater Image Restoration Based on Neural Network and Polarization Imaging. Laser Optoelectron. Prog. 2022, 59, 0410001.
  25. Tang, Z.; Li, J.; Huang, J.; Wang, Z.; Luo, Z. Multi-scale convolution underwater image restoration network. Mach. Vis. Appl. 2022, 33, 85.
  26. Zhang, W.; Liu, W.; Li, L.; Jiao, H.; Li, Y.; Guo, L.; Xu, J. A framework for the efficient enhancement of non-uniform illumination underwater image using convolution neural network. Comput. Graph. 2023, 112, 60–71.
  27. Han, J.; Shoeiby, M.; Malthus, T.; Botha, E.; Anstee, J.; Anwar, S.; Wei, R.; Armin, M.A.; Li, H.; Petersson, L. Underwater Image Restoration via Contrastive Learning and a Real-World Dataset. Remote Sens. 2022, 14, 4297.
  28. Jamil, S.; Piran, M.J.; Rahman, M.; Kwon, O.-J. Learning-driven lossy image compression: A comprehensive survey. Eng. Appl. Artif. Intell. 2023, 123, 106361.
  29. Han, M.; Lyu, Z.; Qiu, T.; Xu, M. A review on intelligence dehazing and color restoration for underwater images. IEEE Trans. Syst. Man Cybern. Syst. 2018, 50, 1820–1832.
  30. Li, C.-Y.; Guo, J.-C.; Cong, R.-M.; Pang, Y.-W.; Wang, B. Underwater Image Enhancement by Dehazing With Minimum Information Loss and Histogram Distribution Prior. IEEE Trans. Image Process. 2016, 26, 5664–5677.
  31. Welinder, P.; Branson, S.; Mita, T.; Wah, C.; Schroff, F.; Belongie, S.; Perona, P. Caltech-UCSD Birds 200; Computation & Neural Systems Technical Report 2010-001; California Institute of Technology: Pasadena, CA, USA, 2010.
  32. Korc, F.; Förstner, W. eTRIMS Image Database for Interpreting Images of Man-Made Scenes; Tech. Rep. TR-IGG-P-01; University of Bonn: Bonn, Germany, 2009.
  33. Liu, R.; Fan, X.; Zhu, M.; Hou, M.; Luo, Z. Real-World Underwater Enhancement: Challenges, Benchmarks, and Solutions Under Natural Light. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 4861–4875.
  34. Naik, A.; Swarnakar, A.; Mittal, K. Shallow-UWnet: Compressed Model for Underwater Image Enhancement (Student Abstract). In Proceedings of the 35th AAAI Conference on Artificial Intelligence/33rd Conference on Innovative Applications of Artificial Intelligence/11th Symposium on Educational Advances in Artificial Intelligence, Virtual, 2–9 February 2021; pp. 15853–15854.
  35. Yang, M.; Sowmya, A. An Underwater Color Image Quality Evaluation Metric. IEEE Trans. Image Process. 2015, 24, 6062–6071.
  36. Panetta, K.; Gao, C.; Agaian, S. Human-Visual-System-Inspired Underwater Image Quality Measures. IEEE J. Ocean. Eng. 2016, 41, 541–551.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
