Article

Enhancement and Optimization of Underwater Images and Videos Mapping

Chengda Li, Xiang Dong, Yu Wang and Shuo Wang

1 School of Electrical Engineering and Automation, Anhui University, Hefei 230601, China
2 State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(12), 5708; https://doi.org/10.3390/s23125708
Submission received: 4 June 2023 / Revised: 17 June 2023 / Accepted: 17 June 2023 / Published: 19 June 2023

Abstract
Underwater images tend to suffer from severe quality degradation, such as poor visibility, reduced contrast, and color deviation, owing to light absorption and scattering in the water medium. Enhancing visibility, improving contrast, and eliminating color cast in these images is a challenging problem. This paper proposes an effective, high-speed enhancement and restoration method based on the dark channel prior (DCP) for underwater images and video. Firstly, an improved background light (BL) estimation method is proposed to estimate the BL accurately. Secondly, a coarse transmission map (TM) of the R channel is estimated based on the DCP, and a TM optimizer integrating the scene-depth map and the adaptive saturation map (ASM) is designed to refine this coarse TM. The TMs of the G–B channels are then computed from the ratios of their attenuation coefficients to that of the red channel. Finally, an improved color correction algorithm is adopted to improve visibility and brightness. Several typical image-quality assessment indexes are employed to verify that the proposed method restores low-quality underwater images more effectively than other advanced methods. A real-time underwater video measurement is also conducted on a flipper-propelled underwater vehicle-manipulator system to verify the effectiveness of the proposed method in a real scene.

1. Introduction

The ocean is abundant in oil, combustible ice, fish stocks, and mineral resources, and has attracted the attention of more and more governments and researchers. Mainstream marine vehicles, such as remotely operated vehicles (ROVs), autonomous underwater vehicles (AUVs), and commercial submarines, are equipped with deep-sea underwater imaging systems [1]. Underwater camera imaging is used extensively because of its low cost, low technical requirements, easy transplantation, and convenient configuration compared with other techniques, such as multispectral stereoscopic imaging [2] and laser scanning imaging [3].
However, underwater images and video often display serious color cast, sharpness degradation, and contrast reduction, caused by the complex underwater environment, the absorption and scattering of light, suspended particles, and other factors [4]. Poor visibility causes unavoidable problems in computer vision tasks, such as object recognition and object detection [5]. Restoring underwater images and video is therefore both challenging and meaningful.
Existing underwater restoration methods are mostly based on the image formation model (IFM) [6,7,8], which describes the linear relationship among the direct component, the forward scattering component, and the backward scattering component. In this model, two crucial parameters, the background light and the transmission map, are estimated to recover underwater haze-free images. He et al. [9] observed that in most local patches of land haze-free images, the lowest pixel intensity across the R–G–B color channels is close to zero, and statistically summarized this as the dark channel prior (DCP), which has been widely applied to underwater image restoration. Inspired and motivated by the imaging similarity between land-based and underwater images, several typical restoration methods based on the IFM and the DCP have been proposed. Chao et al. [10] applied the DCP directly to underwater images, which handles scattering effectively but leaves the absorption problem unsolved. Considering the notable attenuation of red light in most underwater scenes, the dark channel computed from only the G–B channels [11] was proposed to enhance underwater images, which improved the accuracy of the estimated transmission map. Galdran et al. [12] proposed a red-channel compensation method as a variant of the DCP, defined as the dark channel of the inverted R channel and the G–B channels, with the highest-intensity pixel in the R channel used to estimate the BL. Peng et al. [13] proposed a method based on image blurriness and light absorption (IBLA) to obtain more precise BL and underwater scene-depth estimates; however, this method is quite time consuming. Song et al. [14] proposed a new underwater dark channel prior (NUDCP), which combines a statistical BL estimation model with an R–G–B TM estimation model that is compensated by the scene-depth map and optimized by the saturation map, but its parameter estimation is greatly affected by the environment. Liang et al. [15] proposed a backscattered-light estimation that combines wavelength-dependent attenuation, image blurriness, and brightness priors using a hierarchical searching technique. Yu et al. [16] designed a novel transmission map optimization model based on Yin–Yang pair optimization to restore underwater images; it can suppress background noise and enhance detailed texture features but increases complexity and computing cost.
It is challenging to enhance and restore underwater images captured in distinct water types with different distortions. Available underwater image enhancement and restoration methods either fail to estimate the BL and TM accurately or suffer from low processing speed. To overcome these problems, we propose an effective, high-speed underwater image and video restoration method. The highlights of this paper are summarized as follows:
  • An accurate and high-speed background light estimation method is proposed, which is suitable for distinct types of underwater images and has low complexity. The proposed method is time-saving and adaptable to most underwater images.
  • TM estimation with an improved optimizer is established. Integrating the compensation of the scene-depth map based on color attenuation prior (CAP) and the adaptive saturation map (ASM), an optimizer is designed to modify and refine the coarse TM. The proposed TM estimation method provides more accurate results and has lower complexity in different kinds of underwater images than other advanced models.
  • An improved white balance (WB) algorithm is employed to improve the color cast and visibility for restored images. The gain factor is adaptively selected related to the restored underwater image intensity to avoid over- or under-correction.
The rest of this paper is organized as follows. The underwater image formation model and underwater image restoration based on the DCP are described in Section 2. In Section 3, the BL estimation and the design of the TM optimizer are explained in detail, and the underwater image restoration method with color correction is presented. Section 4 reports the experimental results compared with other state-of-the-art methods. Section 5 presents the discussion and conclusions.

2. Related Works

Underwater image restoration and degradation are mutually inverse operations of light propagation in the water medium. Underwater image restoration based on the image formation model (IFM) is introduced briefly below. The global background light and the transmission map are the two vital parameters for restoring underwater images based on the DCP, and their estimation directly affects the results.

2.1. Underwater Image Formation Model

A simplified underwater image formation model is used to describe the hazy image [17]:
$$I_C(x,y) = J_C(x,y)\, t_C(x,y) + B_C \big(1 - t_C(x,y)\big) \tag{1}$$
where I_C(x,y) represents the image intensity captured by the camera at position (x,y); J_C refers to the corresponding haze-free image; B_C denotes the background light, a three-dimensional vector; C ∈ {R, G, B} denotes one of the R–G–B color channels; and t_C(x,y) denotes the transmission map, an exponential attenuation function of the spectral volume attenuation coefficient β(x,y) and the scene depth d(x,y), which can be expressed as follows:
$$t_C(x,y) = e^{-\beta(x,y)\, d(x,y)} \tag{2}$$
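To make the model concrete, the following sketch simulates the degradation of Equations (1) and (2) in Python (the language used for the experiments in Section 4). It is an illustration only, not code from the paper; the image, depth map, background light B, and per-channel attenuation coefficients β are placeholder values chosen to mimic open-ocean attenuation, where red decays fastest.

```python
import numpy as np

def degrade_ifm(J, B, beta, depth):
    """Synthesize a degraded underwater image from the IFM of Eqs. (1)-(2).

    J:     clean image, float array of shape (H, W, 3) in [0, 1]
    B:     background light, length-3 vector in [0, 1]
    beta:  per-channel attenuation coefficients (R, G, B), 1/m
    depth: scene depth map of shape (H, W), m
    """
    # Eq. (2): transmission decays exponentially with depth, per channel.
    t = np.exp(-np.asarray(beta)[None, None, :] * depth[..., None])
    # Eq. (1): attenuated direct component plus backscattered light.
    return J * t + np.asarray(B)[None, None, :] * (1.0 - t)

# Placeholder example: depth increases from 1 m to 5 m across the image.
J = np.random.rand(240, 320, 3)
depth = np.tile(np.linspace(1.0, 5.0, 320), (240, 1))
I = degrade_ifm(J, B=[0.2, 0.6, 0.7], beta=[0.6, 0.15, 0.1], depth=depth)
```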

2.2. Underwater Image Restoration Based on the DCP

Owing to the optical imaging similarities between land-based images and underwater images, the coarse transmission map (TM) is estimated based on the DCP under the statistical assumption J_dark(x,y) = 0.
First, we apply the minimum operator over a local patch Ω(x,y) to both sides of Equation (1):
$$\min_{z \in \Omega(x,y)} I_C(z) = \min_{z \in \Omega(x,y)} \big(J_C(z)\, t_C(z)\big) + \min_{z \in \Omega(x,y)} B_C \big(1 - t_C(z)\big) \tag{3}$$
Generally, the background light B_C is assumed to be constant and positive, and both sides of Equation (3) are divided by B_C. Since t_C(z) is assumed constant and continuous within the small patch, we have
$$\min_{z \in \Omega(x,y)} \frac{I_C(z)}{B_C} = \min_{z \in \Omega(x,y)} \Big(\frac{J_C(z)}{B_C}\, t_C(z)\Big) + 1 - t_C(z) \tag{4}$$
Then, the minimum filter operator is applied among R–G–B color channels as follows:
$$\min_{C}\, \min_{z \in \Omega(x,y)} \frac{I_C(z)}{B_C} = \min_{C} \Big(\min_{z \in \Omega(x,y)} \frac{J_C(z)}{B_C}\, t_C(z)\Big) + 1 - \min_{C} t_C(z) \tag{5}$$
According to the dark channel prior, we obtain
$$\min_{C}\, \min_{z \in \Omega(x,y)} \frac{J_C(z)}{B_C} = 0 \tag{6}$$
Combining Equation (5) with Equation (6), the multiplicative term vanishes, and the coarse transmission map of the R color channel, t_R(z), is expressed as follows:
$$t_R(z) = 1 - \min_{C}\, \min_{z \in \Omega(x,y)} \frac{I_C(z)}{B_C} \tag{7}$$
Finally, the restored underwater haze-free image JC(x,y) is calculated from
$$J_C(x,y) = \frac{I_C(x,y) - B_C}{\min\big(\max(t_C(x,y),\, 0.1),\, 0.9\big)} + B_C \tag{8}$$
The constants 0.1 and 0.9 are the lower and upper limits of the transmission map, respectively.
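The derivation above maps directly to a few array operations. The sketch below is a minimal Python rendering of Equations (3)–(8), assuming an RGB image in [0, 1] and a known background light; the patch size of 15 is a common DCP choice, not a value prescribed by this paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def coarse_tm_red(I, B, patch=15):
    """Coarse R-channel transmission of Eq. (7): 1 - dark channel of I/B."""
    # Min over color channels (Eq. 5), then min over the local patch Omega.
    channel_min = (I / B[None, None, :]).min(axis=2)
    return 1.0 - minimum_filter(channel_min, size=patch)

def restore(I, B, t, t_lo=0.1, t_hi=0.9):
    """Scene radiance recovery of Eq. (8) with the transmission clamped."""
    t = np.clip(t, t_lo, t_hi)[..., None]   # broadcast one map over channels
    return (I - B[None, None, :]) / t + B[None, None, :]
```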
The coarse transmission maps of the R channel for typical underwater images are shown in Figure 1. Accurate R-channel TMs based on the DCP are estimated for the underwater images in Figure 1a–c, which have uniform illumination and a gloomy foreground scene. In contrast, in Figure 1d, the high-brightness target or white object near the camera is mistaken for a far scene, which causes the transmission map of that region to be underestimated. In Figure 1e, because some regions have red-channel intensity lower than the estimated background light, such as the area behind the statue, the local TM is overestimated. Figure 1f shows strongly inhomogeneous illumination in the shark-back region and a correspondingly poor TM estimate.
On the one hand, the TM based on the DCP can be estimated precisely when the scene is illuminated by a distant light source, which can be regarded as uniform illumination, in contrast to artificial light. On the other hand, accurate TM estimation is also inseparable from the estimated BL. However, it is extremely difficult to estimate the BL and the TM accurately in most real underwater images. As in the last three cases in Figure 1, the transmission map estimation is inaccurate, which affects the subsequent image restoration.

3. Problem Formulation

To solve the above problems, an improved background light estimation and a transmission map optimizer are proposed to restore underwater images in terms of visibility, contrast, and color. The flow diagram of the whole method is shown in Figure 2. First, we judge whether the color channels of the underwater image are severely attenuated and choose an appropriate method to estimate the BL. Secondly, a TM optimizer for the R color channel is designed to modify and refine the coarse TM based on the DCP. Then, the TMs of the remaining G and B channels are calculated from the ratios of their attenuation coefficients to that of the red channel. Using the estimated BL and TM, we obtain a haze-free underwater image. Finally, an adaptive color correction method is applied to adjust and correct the brightness and color of the restored image.

3.1. Background Light Estimation

Recently, various methods for estimating the background light have been developed, such as DCPRGB and its variants [11], the quad-tree decomposition searching method [5,18], the BL candidate selection region [13], and BL estimation based on a statistical model [14]. These work well only for certain underwater images, and most have high computational complexity and are time consuming. Depending on the water type, the color channels attenuate to different degrees: the red channel is attenuated more severely than the G–B channels in images captured in the open ocean, whereas in coastal waters the blue channel is attenuated more than the others.
Inspired by [15], we use Equation (9) to determine whether a color channel is severely attenuated by computing the ratio between the maximum channel pixel average and the minimum channel pixel average of the input image, and we choose the corresponding method to compute the BL. The BL estimation based on the statistical model can rapidly obtain an accurate BL for raw images with little or no red component, but it is not suitable for other severely attenuated channels; we therefore extend it to the other attenuated color channels, as shown in Equation (10). For underwater images with severe attenuation or darker intensity, the improved statistical BL model is more accurate; in other situations, the DCPRGB method is used. We thus combine the strengths of the two methods to obtain
$$BL = \begin{cases} BL_1, & \max\big(Avg(I_C)\big) / \min\big(Avg(I_C)\big) \geq 2 \\ BL_2, & \text{otherwise} \end{cases} \tag{9}$$
Here,
$$BL_1 = \begin{cases} 1.13 \times Avg_m + 1.11 \times Std_m - 25.6, & Avg(I_C) > \delta \\ \dfrac{140}{1 + 14.4 \times \exp(-0.034 \times Med_n)}, & \text{otherwise} \end{cases} \tag{10}$$
$$BL_2 = I_C\Big(\arg\max_{(x,y)}\, \min_{\Omega(x,y)} \big(\min_{C} I_C(x,y)\big)\Big) \tag{11}$$
where δ is a threshold set to 64, following [15]; m denotes a color channel whose average intensity exceeds δ; and n denotes the opposite channel. Avg, Std, and Med denote the average, the standard deviation, and the median of a channel of the underwater image I, respectively. To avoid over- or under-estimation, the BL value is limited to between 5 and 250. The final estimated BL is therefore:
$$B_C = \min\big(\max(B_{m,n},\, 5),\, 250\big), \quad C \in \{R, G, B\} \tag{12}$$
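A possible Python reading of Equations (9)–(12) is sketched below. The per-channel dispatch between the two branches of Equation (10) follows our interpretation of the definitions of m and n, and intensities are assumed to be in the 0–255 range, as implied by the constants; the published pipeline may differ in detail.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_bl(I, delta=64.0, patch=15):
    """Background light per Eqs. (9)-(12); I is an RGB image in [0, 255]."""
    I = I.astype(np.float64)
    avg = I.reshape(-1, 3).mean(axis=0)
    if avg.max() / max(avg.min(), 1e-6) >= 2.0:    # Eq. (9): severe attenuation
        bl = np.empty(3)
        for c in range(3):                          # Eq. (10), per channel
            ch = I[..., c]
            if avg[c] > delta:                      # bright channel m
                bl[c] = 1.13 * ch.mean() + 1.11 * ch.std() - 25.6
            else:                                   # attenuated channel n
                bl[c] = 140.0 / (1.0 + 14.4 * np.exp(-0.034 * np.median(ch)))
    else:                                           # Eq. (11): DCPRGB-style pick
        dark = minimum_filter(I.min(axis=2), size=patch)
        y, x = np.unravel_index(np.argmax(dark), dark.shape)
        bl = I[y, x, :].copy()
    return np.clip(bl, 5.0, 250.0)                  # Eq. (12)
```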

3.2. TM Optimizer Design

The design of the TM optimizer is described in detail in this section. It consists of two components: correction by the scene-depth map and optimization by the adaptive saturation map (ASM). In special cases, our optimizer can correct the erroneous regions in the DCP-based transmission map and thereby obtain an accurate TM.
Firstly, the scene depth is estimated based on the color attenuation prior (CAP). Through experiments on extensive foggy images, the scene depth was found to be positively correlated with the fog concentration and with the difference between image brightness and saturation [19]. The scene-depth estimation model is:
$$d(x,y) = \theta_0 + \theta_1\, v(x,y) + \theta_2\, s(x,y) + \tau(x,y) \tag{13}$$
where θ0, θ1, and θ2 are coefficients; τ is a random variable; v(x,y) is the image brightness; and s(x,y) is the image saturation. These parameters can be obtained by supervised learning, and we directly adopt the optimal solution of [19]: θ0 = 0.121779, θ1 = 0.959710, θ2 = −0.780245, τ = 0.041337.
The scene-depth map of an underwater image may be inconsistent with reality in regions containing high-brightness objects or targets. Thus, a minimum filter is applied to address this issue:
$$d_f(x,y) = \min_{\Omega(x,y)} d(x,y) \tag{14}$$
The transmission maps (TMs) of the R–G–B color channels are then easily obtained as follows:
$$t_C(x,y) = e^{-\beta_C\, d_f(x,y)}, \quad C \in \{R, G, B\} \tag{15}$$
Considering only the R channel and setting βR = 1, we obtain:
$$t_{dR}(x,y) = e^{-d_f(x,y)} \tag{16}$$
However, the TM estimated in Equation (7) depends on the scene, intensity, and detail of the raw image, and is easily misestimated in specific cases. For example, in close background areas with small R-channel intensity, the TM is overestimated and the areas are mistaken for foreground [14]. The color attenuation prior (CAP) assumes that a near background area with small intensity has low brightness and saturation and hence a correspondingly small TM value, which is used to compensate for and refine the TM. The modified TM is expressed as:
$$t_{mR}(x,y) = \min\big\{t_R(x,y),\ t_{dR}(x,y)\big\} \tag{17}$$
Secondly, when the underwater environment lacks light, artificial light (AL) is usually used to extend the visible field of the scene, so its effect must be eliminated. However, high brightness alone does not indicate the presence of artificial light. We observe that the saturation of a region illuminated by AL is low in HSV color space [12,14]. The image saturation map is expressed as follows:
$$S(x,y) = \begin{cases} 1, & \max\limits_{C} I_C(x,y) = 0 \\ \dfrac{\max\limits_{C} I_C(x,y) - \min\limits_{C} I_C(x,y)}{\max\limits_{C} I_C(x,y)}, & \text{otherwise} \end{cases} \tag{18}$$
Image saturation characterizes the vividness of color in an image and can be used to reflect the effects of artificial lights. As white light increases, the channels gradually lose saturation, and in regions illuminated by artificial light the pixel intensities of the three channels become close, so the saturation value is very low. To some extent, areas with low saturation can therefore be considered areas illuminated by AL. Thus, we define an adaptive saturation map (ASM) to describe this phenomenon:
$$S_p(x,y) = 1 - \alpha\, S(x,y) \tag{19}$$
where α is a coefficient related to the mean value of the saturation map. We apply the ASM to modify and optimize the TM, abating the intensity of inhomogeneously illuminated or artificially lit regions while having little impact on other areas:
$$t_{fR}(x,y) = \max\big\{t_{mR}(x,y),\ S_p(x,y)\big\} \tag{20}$$
Applying our optimizer to modify and refine the TM, the inaccurate TMs of Figure 1d–f are corrected, and guided filtering [20] is then applied to preserve edges. The refined TMs are shown in Figure 3.
In Equation (2), the transmission map must be calculated for each color channel, but the three channel transmission maps are not independent. Only one map and two scale coefficients [21] need to be computed as follows:
$$t_{fR}(x,y) = e^{-\beta_R d(x,y)}, \quad t_{fG}(x,y) = e^{-\beta_G d(x,y)} = \big(t_{fR}(x,y)\big)^{\beta_G/\beta_R}, \quad t_{fB}(x,y) = e^{-\beta_B d(x,y)} = \big(t_{fR}(x,y)\big)^{\beta_B/\beta_R} \tag{21}$$
Since the TM is estimated on a small patch Ω(x,y), the transmission map t_C(x,y) is prone to color halos and block artifacts. Finally, the guided filter [20] is applied to solve these problems.
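The whole optimizer chains Equations (13)–(21) together; a compact Python sketch follows. The attenuation ratios β_G/β_R and β_B/β_R are treated as known inputs, and the choice α = mean(S) is our own guess at the unspecified "coefficient related to the mean value of the saturation map"; guided filtering is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def refine_tm(I, t_coarse, ratio_g, ratio_b, patch=15):
    """TM optimizer sketch of Eqs. (13)-(21); I is an RGB image in [0, 1]."""
    v = I.max(axis=2)                                 # HSV value (brightness)
    s = np.where(v > 0, (v - I.min(axis=2)) / np.maximum(v, 1e-6), 1.0)
    # Eq. (13): CAP depth with the learned coefficients of [19].
    d = 0.121779 + 0.959710 * v - 0.780245 * s + 0.041337
    d_f = minimum_filter(d, size=patch)               # Eq. (14)
    t_d = np.exp(-d_f)                                # Eq. (16), beta_R = 1
    t_m = np.minimum(t_coarse, t_d)                   # Eq. (17): compensation
    alpha = s.mean()                                  # assumed form of alpha
    s_p = 1.0 - alpha * s                             # Eq. (19): ASM
    t_r = np.maximum(t_m, s_p)                        # Eq. (20): AL suppression
    # Eq. (21): G/B transmissions from attenuation-coefficient ratios.
    return t_r, t_r ** ratio_g, t_r ** ratio_b
```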

3.3. Color Correction

Although our method can effectively remove the haze in underwater images, new problems can appear, such as restored images with poor contrast, uneven color, and low brightness, which make it hard to obtain valuable information. Inspired by the white balance algorithm of [21], an improved color correction is proposed. Compared to the original method, its gain factor is selected automatically based on the brightness of the input image and can be expressed as:
$$I_{out} = \frac{I_{in}}{V_p \times (m / m_{ref}) + \lambda_{as}}, \qquad m_{ref} = \sqrt{m_R^2 + m_G^2 + m_B^2} \tag{22}$$
where I_in and I_out denote the input restored image and the output corrected image; m_R, m_G, and m_B denote the mean values of the color channels of I_in; and V_p is the maximum intensity value of I_in. The parameter λ_as is the gain factor that adjusts the color of the input image, ranging from 0 to 0.5. Based on the effect of the λ_as selection on the corrected image, an automatic gain-factor selection method is proposed to obtain the desired λ_as. Our principle is as follows: when the maximum of m_R, m_G, and m_B is greater than 0.45, the image I_in is bright and λ_as is close to 0.5; conversely, when the maximum is less than 0.45, the image is dark and λ_as is close to 0. This can be expressed as:
$$\lambda_{as} = \begin{cases} 0.5 \times \tanh(m_2 / m_1), & m_2 > 0.45 \\ 0.5 \times \tanh(m_1 / m_2), & \text{otherwise} \end{cases} \tag{23}$$
where m_2 and m_1 are the maximum and minimum of m_R, m_G, and m_B, and tanh(·) is the hyperbolic tangent function.
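Since Equation (22) is reproduced here from a garbled source, the following sketch encodes our reconstruction of it: a per-channel gain built from the channel mean m_C, the norm m_ref of the three channel means, the global maximum V_p, and the adaptive λ_as of Equation (23). Treat the exact arrangement as an assumption rather than the authors' definitive formula.

```python
import numpy as np

def color_correct(I):
    """Adaptive white balance per Eqs. (22)-(23); I is an RGB image in [0, 1]."""
    m = I.reshape(-1, 3).mean(axis=0)         # channel means m_R, m_G, m_B
    m_ref = np.sqrt((m ** 2).sum())           # assumed root-sum-square norm
    m1, m2 = m.min(), m.max()
    # Eq. (23): lambda_as near 0.5 for bright images, near 0 for dark ones.
    lam = 0.5 * np.tanh(m2 / m1) if m2 > 0.45 else 0.5 * np.tanh(m1 / m2)
    v_p = I.max()
    out = I / (v_p * (m / m_ref)[None, None, :] + lam)   # Eq. (22), assumed form
    return np.clip(out, 0.0, 1.0)
```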
Some typical underwater images restored by our method are shown in Figure 4 to explain the whole process of image restoration. Figure 4b,c represent the coarse TM based on DCP and the precise TM refined by the TM optimizer. Figure 4d shows the restored image without color correction. Figure 4e denotes the final enhanced image.

4. Results and Evaluation

In this section, we analyze and evaluate the proposed underwater image restoration and enhancement method qualitatively and quantitatively against other state-of-the-art methods. All tested underwater images were taken from the underwater benchmark dataset [22], and all compared methods used open-source code. Experiments were run on a Windows 11 PC with an AMD Ryzen 9 5900HX with Radeon Graphics @ 3.30 GHz and 6.00 GB of RAM, using Python 3.6.0.

4.1. Evaluation of Objectives and Approaches

The experiments were designed to verify the effectiveness and performance of our method from the following three perspectives:
(1) To prove the effectiveness of the TM optimizer;
(2) To prove the comprehensive performance of the proposed method;
(3) To test the real-time performance on underwater video.
In experiment (1), we evaluated the TM optimizer by comparing it with other classical and representative underwater image restoration methods: the dark channel prior (DCP), the maximum intensity prior (MIP) [23], the underwater dark channel prior (UDCP), image blurriness and light absorption (IBLA), the underwater light attenuation prior (ULAP) [24], and the new underwater dark channel prior (NUDCP).
To make the experimental results more convincing, four image-quality indexes were employed: two full-reference (FR) indexes, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM), and two no-reference (NR) indexes, the underwater image-quality measure (UIQM) [25] and the blind referenceless image spatial-quality evaluator (BRISQUE) [26].
The PSNR index measures the average ratio of the maximum signal energy to the noise energy and is widely used; a greater value indicates higher similarity. The SSIM index intuitively describes the structural similarity between the enhanced image and the reference image; a higher SSIM value represents better performance. Although the PSNR and SSIM indexes require reference images, which are difficult to obtain, they reflect how the noise added during restoration affects the structure of the reference image. The UIQM index takes colorfulness, sharpness, and contrast as measurement components and combines them linearly to assess image quality comprehensively; a higher value represents better visibility and quality. The BRISQUE index measures the loss of naturalness in underwater images; a lower value indicates a better result.
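The two full-reference indexes are available in scikit-image, so a minimal scoring helper can look as follows (assuming scikit-image ≥ 0.19 for the channel_axis keyword); UIQM and BRISQUE are not part of scikit-image and would come from implementations of [25,26].

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def full_reference_scores(restored, reference):
    """PSNR and SSIM between a restored image and its reference (RGB arrays)."""
    psnr = peak_signal_noise_ratio(reference, restored)
    ssim = structural_similarity(reference, restored, channel_axis=2)
    return psnr, ssim
```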

4.2. Performance of Transmission Map Optimizer for Single Underwater Image

First, the performance of our TM optimizer was verified on different kinds of underwater images, including greenish and bluish ones. To assess performance objectively, we compare restored underwater images of various kinds with those of existing methods in Figure 5 and Figure 6, and their quantitative image-quality assessment indexes are recorded in Table 1, Table 2, Table 3 and Table 4.
Figure 5 and Figure 6 show the restoration results obtained by various transmission map estimation methods for two greenish and two bluish underwater images, respectively. Underwater images are commonly greenish or bluish owing to light absorption and scattering, which removes valuable details and causes color deviation. For these images, our restoration approach both improved contrast and removed the color cast.
In Figure 5(1,2), the UDCP method caused more serious color distortion and a darker background than the raw image, while the DCP and MIP methods had almost no effect on contrast or color deviation. The images restored by the IBLA, ULAP, and NUDCP methods in Figure 5(1,2) show similar results in improving contrast or reducing the degraded color, and the restored images remain low contrast and greenish, as confirmed by the quantitative PSNR, SSIM, UIQM, and BRISQUE indexes in Table 1 and Table 2. In Figure 5(1h,2h), the contrast is evidently improved and the color distortion is removed, and the restored results are more realistic than the others. Although the UIQM index was not the best in Table 2, the other indexes suffice to prove the effectiveness of the proposed TM optimizer.
The image in Figure 6(3) is bluish and dark. From the restored results, we found that DCP, UDCP, MIP, and ULAP made the image darker and intensified the bluish tone, and their SSIM and UIQM indexes were low, indicating little improvement. The IBLA and NUDCP methods improved brightness and contrast, but they were still insufficient compared with our method, whose quantitative indicators remained slightly ahead. For the slightly bluish image in Figure 6(4), the DCP, IBLA, and ULAP methods retained the bluish illumination, yielding little dehazing or restoration effect, and the MIP method not only failed to improve the color distortion but aggravated it. The quantitative evaluation shows that the proposed TM optimizer yielded the best values of the PSNR, SSIM, UIQM, and BRISQUE indexes, indicating that our method performs more gratifyingly and effectively than the others.

4.3. Comprehensive Performance for Single Underwater Image

Our proposed algorithm aims to enhance contrast, eliminate color deviation, and improve the visibility of underwater images. Thus, the proposed color correction method was applied to the restored images as post-processing to complete the overall restoration.
To evaluate the performance of the proposed method comprehensively, we compared it with the DCP, UDCP, MIP, IBLA, ULAP, and NUDCP methods, each followed by histogram equalization (HE) to correct the color of the restored and enhanced images. Since the NUDCP method includes its own color correction algorithm, we retained it and denote the combination NU_CC. For convenience, the proposed method without color correction is named O_WCC, and the overall restoration is called "ours".
The quantitative evaluation is recorded in Table 5, showing average values over 100 raw images picked randomly from the underwater benchmark dataset. Figure 7 and Figure 8 show two kinds of typical underwater image restoration results. From Table 5, we find that after processing by the HE algorithm, the DCP, UDCP, MIP, and IBLA methods produced distinct and valuable improvements in image quality, although sometimes the opposite occurred.
The overall restoration results obtained by the different methods can be seen in Figure 7 and Figure 8, where two opposite results were selected to illustrate the necessity of color correction. The DCP, UDCP, MIP, and IBLA methods often failed to eliminate the greenish and bluish color in Figure 7b–e and Figure 8b–e. Color correction can effectively improve brightness, remove the color cast, and enhance contrast. However, after enhancement by HE, as shown in Figure 7h–l and Figure 8h–l, the restored underwater images exhibit high brightness, and their colors are overcorrected and oversaturated, masking significant information. Although color correction can improve the UIQM values of the restored images, as shown in Table 5, such unnatural images do not satisfy our requirements. Our method effectively overcomes the image degradation and obtains the haze-free underwater images shown in Figure 7g and Figure 8g. The final images, enhanced and restored by our improved white balance method and shown in Figure 7n and Figure 8n, account for the uniqueness and correlation of each color channel and the image brightness; they are neither oversaturated nor over-enhanced, which is consistent with human vision.
Lastly, we selected several images from a set of 60 challenging underwater images, including greenish, bluish, turbid, dark, and low-visibility underwater images. The final restoration results of the different enhancement and restoration methods are shown in Figure 9 to demonstrate the overall performance of our proposed method.

4.4. Enhancement for Underwater Video

Our method is intended for underwater or undersea image and video restoration on underwater mobile systems such as AUVs and ROVs [27,28]. In general, the hardware resources and computational ability of such devices are rather limited, so real-time performance is both challenging and significant. We tested our proposed underwater image restoration method in real time on a flipper-propelled underwater vehicle-manipulator system [29]; the underwater R–G–B images obtained by the system webcam were 312 × 554 pixels. In the marine environment, a two-minute underwater video with 3000 frames was enhanced and restored by different methods: the DCP, UDCP, MIP, ULAP, NUDCP, and ours. The real-time measurement results are listed in Table 6.
The DCP, UDCP, MIP, and ULAP are designed to enhance and restore a single image; their mean processing speed for underwater video was much slower and cannot support real-time enhancement and restoration. The NUDCP method improves the speed of underwater video enhancement, but our processing time was 90–100 ms per underwater image and the average restoration speed exceeded 10 FPS, nearly four times as fast as the NUDCP, because some functions were optimized and the parameters can be solved quickly. Our method greatly improves the processing speed of underwater video restoration and can be applied to real-time engineering tasks.
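For reference, the timing protocol of Table 6 can be reproduced with a loop of the following shape; restore_frame() is a hypothetical stand-in for the full pipeline (BL estimation, TM optimizer, color correction), and the video path is a placeholder.

```python
import time
import cv2

cap = cv2.VideoCapture("underwater.mp4")    # placeholder path
frames, start = 0, time.perf_counter()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    restored = restore_frame(frame)         # hypothetical pipeline call
    frames += 1
cap.release()
elapsed = time.perf_counter() - start
print(f"TPF: {1000 * elapsed / frames:.0f} ms, FPS: {frames / elapsed:.1f}")
```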

5. Conclusions

This paper proposes an efficient underwater image and video restoration method comprising a fast and practical background light estimation method, an accurate transmission map estimation based on a TM optimizer, and an adaptive color correction algorithm. The designed method can be applied in a variety of environments and meets real-time requirements. The quality of the restored underwater image and video depends on the accuracy of the estimated BL and TM; moreover, the BL and the TM are closely connected, and the accuracy of the BL influences the TM estimation, so an accurate BL estimation method is particularly significant. To verify the performance and real-time capability of our proposed method for underwater images and video both qualitatively and quantitatively, we designed several corresponding experiments.
The proposed method performed better at enhancing underwater images and video. Because the background light cannot be estimated completely and accurately in special circumstances, such as scenes with abundant artificial light or heavy sediment, our method may obtain a wrong transmission map, leaving the restored images with a color imbalance. For most attenuated underwater images, our method obtains better results, which will be instrumental in improving the visual perception ability of underwater vehicles.
In future work, we will continue to study how to accurately estimate the BL and optimize the TM for complex underwater environments. Meanwhile, future research will address the real-time processing speed of underwater images and video with GPU programming to provide a visual basis for complex underwater operations.

Author Contributions

Conceptualization, X.D. and C.L.; methodology, X.D., C.L., Y.W. and S.W.; software, C.L.; validation, X.D. and C.L.; formal analysis, X.D. and C.L.; investigation, C.L.; resources, X.D.; data curation, X.D. and Y.W.; writing—original draft preparation, C.L.; writing—review and editing, X.D. and C.L.; visualization, C.L.; supervision, X.D.; project administration, X.D. and C.L.; funding acquisition, X.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the National Key Research and Development Project Monitoring and Prevention of Major Natural Disasters Special Program under grant No. 2020YFC1512202, partly supported by the National Natural Science Foundation of China under grant No. 62273342, and partly supported by the Key Research and Development Project of Anhui Province under grant No. 2022a05020035.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Special thanks to the Institute of Automation, Chinese Academy of Sciences, for providing the experimental site.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DCP      dark channel prior
BL       background light
TM       transmission map
ASM      adaptive saturation map
IFM      image formation model
ROV      remotely operated vehicle
AUV      autonomous underwater vehicle
CAP      color attenuation prior
AL       artificial light
WB       white balance
MIP      maximum intensity prior
UDCP     underwater dark channel prior
IBLA     image blurriness and light absorption
ULAP     underwater light attenuation prior
NUDCP    new underwater dark channel prior
PSNR     peak signal-to-noise ratio
SSIM     structural similarity
UIQM     underwater image-quality measure
BRISQUE  blind referenceless image spatial-quality evaluator
HE       histogram equalization
NU_CC    NUDCP with color correction
O_WCC    our method without color correction

References

1. Bae, I.; Hong, J. Survey on the Developments of Unmanned Marine Vehicles: Intelligence and Cooperation. Sensors 2023, 23, 4643.
2. Kazemzadeh, F.; Haider, S.A.; Scharfenberger, C.; Wong, A.; Clausi, D.A. Multispectral stereoscopic imaging device: Simultaneous multiview imaging from the visible to the near-infrared. IEEE Trans. Instrum. Meas. 2014, 63, 1871–1873.
3. Han, M.; Lyu, Z.; Qiu, T.; Xu, M. A review on intelligence dehazing and color restoration for underwater images. IEEE Trans. Syst. Man Cybern. Syst. 2018, 50, 1820–1832.
4. Shahani, K.; Song, H.; Mehdi, S.R.; Sharma, A.; Tunio, G.; Qureshi, J.; Khaskheli, N. Design and testing of an underwater microscope with variable objective lens for the study of benthic communities. J. Mar. Sci. Appl. 2021, 20, 170–178.
5. Pan, H.; Lan, J.; Wang, H.; Li, Y.; Zhang, M.; Ma, M.; Zhang, D.; Zhao, X. UWV-Yolox: A Deep Learning Model for Underwater Video Object Detection. Sensors 2023, 23, 4859.
6. Zhu, D. Underwater Image Enhancement Based on the Improved Algorithm of Dark Channel. Mathematics 2023, 11, 1382.
7. Park, E.; Sim, J.Y. Underwater image restoration using geodesic color distance and complete image formation model. IEEE Access 2020, 8, 157918–157930.
8. Wu, Z.; Ji, Y.; Song, L.; Sun, J. Underwater Image Enhancement Based on Color Correction and Detail Enhancement. J. Mar. Sci. Eng. 2022, 10, 1513.
9. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
10. Chao, L.; Wang, M. Removal of water scattering. In Proceedings of the 2010 2nd International Conference on Computer Engineering and Technology, Chengdu, China, 16–19 April 2010; IEEE: New York, NY, USA, 2010; Volume 2, pp. V2–V35.
11. Drews, P.; Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission estimation in underwater single images. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 2–8 December 2013; pp. 825–830.
12. Galdran, A.; Pardo, D.; Picón, A.; Alvarez-Gila, A. Automatic red-channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145.
13. Peng, Y.T.; Cosman, P.C. Underwater image restoration based on image blurriness and light absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594.
14. Song, W.; Wang, Y.; Huang, D.; Liotta, A.; Perra, C. Enhancement of underwater images with statistical model of background light and optimization of transmission map. IEEE Trans. Broadcast. 2020, 66, 153–169.
15. Liang, Z.; Ding, X.; Wang, Y.; Yan, X.; Fu, X. GUDCP: Generalization of underwater dark channel prior for underwater image restoration. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 4879–4884.
16. Yu, K.; Cheng, Y.; Li, L.; Zhang, K.; Liu, Y.; Liu, Y. Underwater Image Restoration via DCP and Yin–Yang Pair Optimization. J. Mar. Sci. Eng. 2022, 10, 360.
17. Guo, Q.; Xue, L.; Tang, R.; Guo, L. Underwater image enhancement based on the dark channel prior and attenuation compensation. J. Ocean Univ. China 2021, 16, 757–765.
18. Yu, H.; Li, X.; Lou, Q.; Lei, C.; Liu, Z. Underwater image enhancement based on DCP and depth transmission map. Multimed. Tools Appl. 2020, 79, 20373–20390.
19. Zhu, Q.; Mai, J.; Shao, L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 2015, 24, 3522–3533.
20. Yang, M.; Sowmya, A.; Wei, Z.; Zheng, B. Offshore underwater image restoration using reflection-decomposition-based transmission map estimation. IEEE J. Ocean. Eng. 2019, 45, 521–533.
21. Li, X.; Lei, C.; Yu, H.; Feng, Y. Underwater image restoration by color compensation and color-line model. Signal Process. Image Commun. 2022, 101, 116569.
22. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389.
23. Carlevaris-Bianco, N.; Mohan, A.; Eustice, R.M. Initial results in underwater single image dehazing. In Proceedings of the OCEANS 2010 MTS/IEEE Seattle, Seattle, WA, USA, 20–23 September 2010; IEEE: New York, NY, USA, 2010; pp. 1–8.
24. Song, W.; Wang, Y.; Huang, D.; Tjondronegoro, D. A rapid scene depth estimation model based on underwater light attenuation prior for underwater image restoration. In Proceedings of the Pacific Rim Conference on Multimedia, Hefei, China, 21–22 September 2018; Springer: Cham, Switzerland, 2018; pp. 678–688.
25. Anwar, S.; Li, C. Diving deeper into underwater image enhancement: A survey. Signal Process. Image Commun. 2020, 89, 115978.
26. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
27. Monterroso Muñoz, A.; Moron-Fernández, M.-J.; Cascado-Caballero, D.; Diaz-del-Rio, F.; Real, P. Autonomous Underwater Vehicles: Identifying Critical Issues and Future Perspectives in Image Acquisition. Sensors 2023, 23, 4986.
28. Hu, K.; Wang, T.; Shen, C.; Weng, C.; Zhou, F.; Xia, M.; Weng, L. Overview of Underwater 3D Reconstruction Technology Based on Optical Images. J. Mar. Sci. Eng. 2023, 11, 949.
29. Wang, R.; Wang, S.; Wang, Y.; Cai, M.; Tan, M. Vision-based autonomous hovering for the biomimetic underwater robot—RobCutt-II. IEEE Trans. Ind. Electron. 2018, 66, 8578–8588.
Figure 1. Several classical underwater images and their TMs of the R color channel based on the DCP. The upper row shows the original images, and the lower row shows the corresponding transmission maps. (a–f) represent different underwater images.
Figure 2. Flowchart of the proposed method.
Figure 3. The inaccurate R-channel TMs of Figure 1d–f refined with our optimizer. (a) The underestimated TM for the clay pot in the foreground region is accurately corrected; (b) the overestimated TM for the back of the statue in the background region is accurately corrected; (c) the TM in the shark-back area with non-uniform illumination is improved.
Figure 4. The whole process of the proposed underwater image restoration method. (a) Raw image; (b) the TM of the R channel based on the DCP; (c) the TM refined by our TM optimizer; (d) the restored image; (e) the enhanced image with color correction.
Figure 5. Image restoration results with different TMs obtained for the greenish images. (a) Raw images; (b) DCP; (c) UDCP; (d) MIP; (e) IBLA; (f) ULAP; (g) NUDCP; (h) ours.
Figure 6. Image restoration results with different TMs obtained for the bluish images. (a) Raw images; (b) DCP; (c) UDCP; (d) MIP; (e) IBLA; (f) ULAP; (g) NUDCP; (h) ours.
Figure 7. Comparative results for greenish images. (a) Raw images; (b) DCP; (c) UDCP; (d) MIP; (e) IBLA; (f) NUDCP; (g) O_WCC; (h) DCP + HE; (i) UDCP + HE; (j) MIP + HE; (k) IBLA + HE; (l) ULAP + HE; (m) NU_CC; (n) ours.
Figure 8. Comparative results for bluish images. (a) Raw images; (b) DCP; (c) UDCP; (d) MIP; (e) IBLA; (f) NUDCP; (g) O_WCC; (h) DCP + HE; (i) UDCP + HE; (j) MIP + HE; (k) IBLA + HE; (l) ULAP + HE; (m) NU_CC; (n) ours.
Figure 9. Comparative results for challenging underwater images. (a) Raw images; (b) DCP; (c) UDCP; (d) MIP; (e) IBLA; (f) ULAP; (g) NUDCP; (h) O_WCC; (i) DCP + HE; (j) UDCP + HE; (k) MIP + HE; (l) IBLA + HE; (m) ULAP + HE; (n) NU_CC; (o) ours.
Table 1. Quantitative analysis of Figure 5(1).

Method    PSNR     SSIM     UIQM    BRISQUE
DCP       15.65    0.652    0.78    49.56
UDCP      17.66    0.251    0.35    44.56
MIP       18.53    0.734    1.21    44.27
IBLA      22.59    0.762    1.17    42.88
ULAP      23.96    0.759    1.27    41.18
NUDCP     25.54    0.763    1.29    42.57
Ours      28.63*   0.896*   1.72*   39.32*

An asterisk (*) marks the best result for each index.
Table 2. Quantitative analysis of Figure 5(2).

Method    PSNR     SSIM     UIQM    BRISQUE
DCP       18.32    0.676    1.31    18.73
UDCP      16.81    0.314    0.77    19.08
MIP       15.45    0.815    1.68*   25.45
IBLA      23.57    0.786    1.59    15.37
ULAP      25.05    0.763    1.49    15.06
NUDCP     25.19    0.825    1.61    16.34
Ours      27.99*   0.852*   1.46    14.65*

An asterisk (*) marks the best result for each index.
Table 3. Quantitative analysis of Figure 6(3).

Method    PSNR     SSIM     UIQM    BRISQUE
DCP       16.15    0.354    0.73    61.81
UDCP      17.06    0.237    0.82    58.92
MIP       19.95    0.058    0.59    57.25
IBLA      23.09    0.501    1.19    53.35
ULAP      24.05    0.297    0.87    55.69
NUDCP     27.02    0.401    1.07    54.32
Ours      28.24*   0.651*   1.31*   52.81*

An asterisk (*) marks the best result for each index.
Table 4. Quantitative analysis of Figure 6(4).

Method    PSNR     SSIM     UIQM    BRISQUE
DCP       18.28    0.575    1.09    45.62
UDCP      16.71    0.548    1.09    42.38
MIP       20.01    0.609    1.14    42.83
IBLA      24.43    0.628    1.19    39.53
ULAP      26.24    0.611    1.14    39.58
NUDCP     28.58    0.625    1.22    39.95
Ours      28.99*   0.736*   1.41*   38.62*

An asterisk (*) marks the best result for each index.
Table 5. Quantitative analysis of underwater image enhancement and restoration based on different methods.

Method       PSNR     SSIM    UIQM    BRISQUE
DCP          18.86    0.37    0.99    45.33
UDCP         17.79    0.39    0.82    43.91
MIP          20.81    0.52    1.02    38.89
IBLA         22.59    0.65    1.38    34.77
NUDCP        25.39    0.71    1.46    29.14
O_WCC        28.54    0.77    1.61    36.67
DCP + HE     19.08    0.41    1.58    38.96
UDCP + HE    19.43    0.49    1.63    38.91
MIP + HE     21.98    0.62    1.74    35.05
IBLA + HE    22.66    0.71    1.79    32.11
ULAP + HE    24.66    0.68    1.81*   30.69
NU_CC        26.04    0.75    1.68    28.63
Ours         28.75*   0.85*   1.75    27.67*

An asterisk (*) marks the best result for each index.
Table 6. The processing speed of enhancement and restoration for underwater videos.

Method    Test Time (s)   TPF (ms)   FPS
DCP       975             982        1.0
UDCP      2169            723        1.4
MIP       4758            1586       0.6
ULAP      2016            672        1.5
NUDCP     1155            385        2.6
Ours      288*            96*        10.4*

TPF: time per frame; FPS: frames per second. An asterisk (*) marks the best result for each column.