Article

Underwater Image Enhancement Using Successive Color Correction and Superpixel Dark Channel Prior

Department of Electronics Engineering, Pusan National University, Busan 46241, Korea
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(8), 1220; https://doi.org/10.3390/sym12081220
Submission received: 15 July 2020 / Revised: 24 July 2020 / Accepted: 24 July 2020 / Published: 25 July 2020
(This article belongs to the Section Computer)

Abstract

Underwater images generally suffer from quality degradations, such as low contrast, color cast, blurring, and hazy effect, due to light absorption and scattering in the water medium. Single image-based underwater image enhancement, a prerequisite for applying these images to various vision tasks, remains challenging. Thus, numerous efforts have been made in the field of underwater image restoration. In this paper, we propose a successive color correction method with a minimal reddish artifact and a superpixel-based restoration using the color-balanced underwater image. The proposed successive color correction method comprises an effective underwater white balance based on the standard deviation ratio, followed by a new image normalization. The corrected image based on this color balance algorithm barely produces a reddish artifact. The superpixel-based dark channel prior is exploited to enhance the color-corrected underwater image. We introduce an image-adaptive weight factor using the mean of the backscatter lights to estimate the transmission map. We perform intensive experiments on various underwater images and compare the performance of the proposed method with those of 10 state-of-the-art underwater image-enhancement methods. The simulation results show that the proposed enhancement scheme outperforms the existing approaches in terms of both subjective and objective quality.

1. Introduction

Light is attenuated due to the complicated underwater environment and lighting conditions when it propagates through water. Therefore, images captured under water have a reduced contrast and hazy effect. Two major factors lead to the degradation of underwater images. The first factor is that reflected light from the underwater object is absorbed and scattered by particles suspended in water, which lowers the image contrast and produces a hazy effect in the underwater image. The second factor is the attenuation of light, which depends on the optical wavelength, dissolved organic compounds and water salinity, which causes various color casts. Because red light has a longer wavelength, most underwater images look bluish or greenish. Low-quality underwater images may cause failures in computer vision applications such as inspection, environmental sensing, object detection, and object recognition. In the underwater environment, color correction is a difficult task because the distortion of color occurs asymmetrically depending on the wavelength of light. Therefore, enhancing underwater images is a challenging and important task [1,2,3].
The underwater image-enhancement methods can be classified roughly into two categories, model-free methods and model-based methods. Model-free underwater image-enhancement algorithms attempt to improve the contrast and color of images without using an underwater imaging model. These methods employ a wide range of image-processing techniques typically applied to natural images. Model-based underwater image-enhancement methods are based on a degradation model that analyzes the underwater imaging mechanism and the basic physics of light propagation. A wide range of underwater image restoration algorithms use color correction and a specially designed enhancement method simultaneously or sequentially, because of the inherent color attenuation of underwater images.
Model-free underwater image-enhancement algorithms aim at improving the contrast and color of images without any underwater imaging model. These methods use various image processing techniques applied to natural images, and include histogram stretching [4,5,6], Retinex [7,8], color correction [7,9,10], and fusion-based [10,11] algorithms. Iqbal et al. [4] proposed an underwater image-enhancement method using an integrated color model. Their algorithm was based on a series of stretching, such as contrast stretching in RGB space, and saturation and brightness stretching in the HSI space. Hitam et al. [5] adjusted contrast limited adaptive histogram equalization (CLAHE) and built mixed CLAHE to improve the visibility of underwater images. CLAHE was applied to the RGB and HSV color models to generate two images. This method achieved improved visual quality of underwater images by enhancing contrast and reducing noise and artifacts. Ghani and Isa [6] applied the histogram modification technique to the two main color models (RGB and HSV) to enhance underwater images. Their method enhanced the image contrast, reduced the blue-green effect, and minimized under- and over-enhanced areas in the output image. Fu et al. [7] introduced the Retinex framework to enhance underwater images. They separated direct light from reflected light in CIE-Lab color space. Different strategies were used to highlight the separated light components to enhance the contrast of underwater images. Zhang et al. [8] extended the Retinex-based framework for underwater image enhancement. The brightness and color components were filtered using a bilateral filter and a trilateral filter to remove luminance in the Lab color model and suppress the halo artifacts. A statistical method [7,9] based on the mean and standard deviation values in RGB channels was used to generate color-corrected underwater images. Ancuti et al. 
[10] presented an improved underwater white balance (UWB) technique under the premise that red channel attenuation is the fastest. After removing the color cast, a fusion-based algorithm was applied to enhance the underwater images.
Model-based underwater image-enhancement methods are based on a degradation model that analyzes the underwater imaging mechanism and the basic physics of light propagation. On the basis of these properties, several model-based approaches have been proposed for underwater image restoration. In particular, prior-based methods, including dark channel prior (DCP) [12], underwater dark channel prior (UDCP) [13,14], red channel prior (RCP) [15,16], histogram distribution prior (HDP) [17], and blurriness prior (BP) [18], have received extensive attention. Using these priors, the backscatter light and transmission map can be estimated and then applied to underwater image restoration. DCP is a prior widely used for image dehazing. Due to the similarity between a hazy image and an underwater image, the DCP-based haze removal method is widely applied to underwater image restoration. However, the enhanced images show limited improvement because red light attenuates much faster than green and blue light when propagating in water. The red channel of most underwater images has a small value, and thus dominates the calculation of the dark image. To eliminate the influence of the red channel, Drews et al. proposed UDCP [13], which only considers the green and blue channels to produce the dark channel. Although UDCP can obtain a more accurate transmission map than DCP, the restored images are still not satisfactory. Li et al. presented a blue-green channel dehazing algorithm and a red channel correction algorithm [15] for underwater image enhancement. In this method, the blue-green channels are first recovered based on the extended DCP, and then the red channel is corrected following the gray-world assumption (GWA) [19]. However, because this algorithm uses GWA, the restored image can have a reddish artifact. Galdran et al. proposed an automatic red channel image restoration based on RCP [16], which extracts a dark channel from the reversed red channel and the blue-green channels.
However, the colors of some restored images are visually incorrect and unreal. A systematic underwater image-enhancement method [17] using image dehazing and contrast enhancement was introduced. This algorithm used the underwater image dehazing method based on the minimum information loss principle, and a contrast enhancement algorithm was proposed based on HDP. However, this method sometimes results in over-enhanced images. Peng et al. proposed an underwater image restoration method [18] based on image blurriness and light absorption for estimating a more accurate background light and the underwater scene depth. This method produces few reddish artifacts, but has the disadvantage that the color veil caused by the underwater environment cannot be completely removed.
Recently, machine learning-based underwater image-enhancement approaches, which include a data-and-prior-aggregated transmission network (DPATN) [20], an underwater convolutional neural network [21], and an underwater residual convolutional neural network [22], have gained popularity. However, the considerable training data required is difficult to achieve in deep-sea environments. To address the difficulty in the development of machine learning-based underwater image enhancement, an underwater image-enhancement benchmark dataset (UIEBD) was constructed [23]. To overcome the lack of datasets for a convolutional neural network, generative adversarial networks (GANs) were used to generate realistic underwater images. WaterGAN [24] and underwater-GAN [25] were proposed to enhance underwater images in an unsupervised pipeline.
A wide range of underwater image restoration algorithms use color correction and a specially designed enhancement method simultaneously or sequentially, because of the inherent color attenuation of underwater images. However, these approaches frequently produce reddish artifacts in the improved underwater image. This paper aims to introduce a new color correction method with minimal reddish artifacts and to improve underwater images by applying a superpixel-based haze removal scheme. The proposed underwater color balance method comprises two steps. In the first step, the standard deviation ratio is used to improve the conventional underwater white balance method (UWB) [10]. In the second step, a modified image normalization algorithm is proposed based on the previously corrected color channels. The proposed successive color correction method barely generates a reddish artifact in color-corrected underwater images. After color correction, a superpixel-based DCP is adopted to restore the underwater image. This process employs a simple weight determination method for a transmission map using the average of backscatter lights for three channels. The proposed method generates reasonable enhancement results in a wide variety of underwater images.
The remainder of this paper is organized as follows. The proposed successive color correction algorithm is presented in Section 2. Image-enhancement method using superpixel DCP is introduced in Section 3. Section 4 presents the experimental results obtained using the proposed approach. Subsequently, Section 5 presents the discussion, and finally, the paper is concluded in Section 6.

2. Proposed Successive Color Correction

2.1. Improvement of Underwater White Balance

Ancuti et al. proposed a UWB method [10] to correct the color cast of the red channel. Their method was also selectively applied to the blue channel when it was strongly attenuated. Let Ic (c ∈ {r, g, b}) be a color channel of the given underwater image, and Ic(x) be the pixel value of Ic at position x = (x, y) within the image. Using the UWB algorithm, the color compensation is achieved as follows.
$$I_U^c(x) = I^c(x) + \big(m(I^g) - m(I^c)\big)\big(1 - I^c(x)\big)\,I^g(x), \tag{1}$$
where IcU(x) is a color-corrected pixel, and m(Ic) is the mean of Ic. In Equation (1), the term 1 − Ic(x) is considered a weakness measure of the color channel: the smaller Ic(x) is, the greater the pixel value to be compensated. The term (m(Ig) − m(Ic))Ig(x) is the compensation term based on the mean difference. Because compensating for the color channel by only considering the difference in the means is insufficient, IcU(x) is considered an initial estimate in the UWB method [10]. Therefore, the conventional GWA [19] is used to finally compensate the color cast as follows.
$$I_A^c(x) = \frac{m(I^g)}{m(I_U^c)}\, I_U^c(x), \tag{2}$$
where IcA(x) is the final balanced underwater image based on the UWB method [10].
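To make the two-step compensation concrete, here is a minimal NumPy sketch of Equations (1) and (2); this is an illustration, not the authors' code. The threshold deciding when the blue channel counts as "strongly attenuated" is our assumption, since the paper only states that blue is compensated selectively.

```python
import numpy as np

def uwb_compensate(img):
    """Red/blue compensation of Eq. (1) for an RGB float image in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    out = img.copy()
    # Compensate red by the green-red mean difference, weighted by the
    # channel weakness (1 - I^r) and modulated by the green channel.
    out[..., 0] = r + (g.mean() - r.mean()) * (1.0 - r) * g
    # Blue is compensated only when strongly attenuated; the 0.5 factor
    # below is a hypothetical threshold, not from the paper.
    if b.mean() < 0.5 * g.mean():
        out[..., 2] = b + (g.mean() - b.mean()) * (1.0 - b) * g
    return np.clip(out, 0.0, 1.0)

def gray_world_scale(channel, g):
    """Final GWA step of Eq. (2): scale the channel mean to match green's."""
    return g.mean() / channel.mean() * channel
```

Note that `gray_world_scale` multiplies the whole channel by a ratio of means, which is exactly why the standard deviation grows by the same factor, as discussed next.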
Figure 1 presents the color correction results obtained using UWB for greenish and bluish underwater images. As shown in Figure 1a,c, the standard deviation of the red channel of the corrected image is approximately doubled (from 0.109 to 0.214), resulting in a reddish balanced image. This negative result is due to the use of GWA. The UWB method generates a fairly well balanced image, as shown in Figure 1f. However, the bottom region of the image shows a slight reddish artifact. Almost all underwater images satisfy the condition m(Ig) > m(Ir). Based on Equation (2), the standard deviation of IcA is calculated as σ(IcA) = [m(Ig)/m(IcU)]σ(IcU), where σ(·) denotes the standard deviation. This value is inflated by GWA, which can lead to an excessive red channel. Thus, the GWA-based methods, such as the Shades-of-Grey method [26] and the Grey-Edge hypothesis [27], as well as the classical GWA [19], tend to enlarge the red component as the red channel becomes weaker. For this reason, the GWA-based underwater image-enhancement schemes frequently produce reddish enhanced images.
The UWB method [10] does not consider the shape of the histogram of the color channel. Although color channels have the same mean, their standard deviation can be different. Therefore, compensating for the color channel by only considering the difference in the mean is insufficient. In this paper, we present an improved UWB method using the standard deviation ratio of color channels. Let IcM (x) be the color-corrected pixel based on the proposed method, which is obtained by
$$I_M^c(x) = I^c(x) + \frac{\sigma(I^g)}{\sigma(I^c)}\big(m(I^g) - m(I^c)\big)\big(1 - I^c(x)\big)\,I^g(x). \tag{3}$$
In almost all underwater images, σ(Ig) > σ(Ic), and therefore, the pixel values of the red (or blue) channel are sufficiently increased by Equation (3).
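A sketch of Equation (3), assuming an RGB float image in [0, 1]; the only change from Equation (1) is the standard-deviation-ratio gain on the compensation term:

```python
import numpy as np

def improved_uwb(img):
    """Proposed compensation of Eq. (3): the mean-difference term of
    Eq. (1) is amplified by the ratio sigma(I^g) / sigma(I^c)."""
    g = img[..., 1]
    out = img.copy()
    for c in (0, 2):  # red and blue channels
        ch = img[..., c]
        gain = g.std() / ch.std()  # > 1 for a weak channel
        out[..., c] = ch + gain * (g.mean() - ch.mean()) * (1.0 - ch) * g
    return np.clip(out, 0.0, 1.0)
```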
Figure 2 shows the color-adjusted image after the initial correction of a sample underwater image. Using the UWB method [10], the mean of the red channel is increased from 0.061 to 0.258, and its standard deviation is changed from 0.051 to 0.110. Neither the mean nor the standard deviation is sufficiently enlarged. Therefore, the UWB algorithm essentially requires the GWA process, which frequently causes a reddish artifact. In contrast, the proposed method can sufficiently increase the mean and standard deviation of the weak red channel. However, a color cast remains, as shown in Figure 2. To solve this problem, in the next step, we use a simple adaptive image normalization method based on the mean and standard deviation of IM.

2.2. Adaptive Image Normalization

Image normalization is a typical process in image processing that changes the range of pixel intensity values. The normalization of an image is obtained by
$$I_N^c(x) = \frac{I^c(x) - I_{\min}^c}{I_{\max}^c - I_{\min}^c}, \tag{4}$$
where IcN(x) is the normalized pixel value, and Icmax and Icmin are the maximum and minimum values of the color channel, respectively. Fu et al. [7] proposed obtaining Icmax and Icmin from the maximum and minimum color deviations of each color channel. The maximum deviation of color channel c, Icmax, is defined as
$$I_{\max}^c = m(I^c) + \eta\,\sigma(I^c), \tag{5}$$
where η is a parameter that tempers the saturation of the result. Similarly, the minimum deviation of color channel c, Icmin, is calculated as
$$I_{\min}^c = m(I^c) - \eta\,\sigma(I^c). \tag{6}$$
Fu et al. [7] assigned a heuristic value to the saturation parameter η, setting it to 3 for each color channel. Using Equations (5) and (6), the color-compensated pixel of the underwater image is obtained by
$$I_N^c(x) = \frac{I^c(x) - m(I^c) + \eta\,\sigma(I^c)}{2\eta\,\sigma(I^c)}. \tag{7}$$
The image normalization method based on Equation (7) is simple and known to be an effective color correction algorithm [7,8]. However, it forces the mean to 0.5 and the standard deviation to 1/(2η) (0.167 when η is set to 3). This may have a negative effect on improving the color cast when the mean value of the color channel is considerably small or large: the corrected image may show an unnatural effect, such as a clipped histogram or a reddish artifact.
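Equation (7) collapses to a one-liner; the sketch below makes explicit why the output statistics are fixed (mean 0.5, standard deviation 1/(2η)) regardless of the input channel:

```python
import numpy as np

def normalize_channel(ch, eta=3.0):
    """Mean/std normalization of Eq. (7). Whatever the input channel, the
    output mean is 0.5 and the output std is 1/(2*eta), which is exactly
    the rigidity criticized in the text."""
    return np.clip((ch - ch.mean() + eta * ch.std()) / (2.0 * eta * ch.std()),
                   0.0, 1.0)
```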
In this paper, we propose an adaptive image normalization algorithm that introduces a channel-dependent parameter, so that the standard deviation is not fixed at 1/(2η). From the primary corrected color channel IcM based on Equation (3), we perform color correction once more using the proposed image normalization method. Let IcP be the final balanced underwater image based on the proposed method; it is obtained as
$$I_P^c(x) = \frac{I_M^c(x) - m(I_M^c) + \kappa^c\,\sigma(I_M^c)}{2\kappa^c\,\sigma(I_M^c)}, \tag{8}$$
where κc is the channel-dependent parameter. In Equation (8), κc is defined as
$$\kappa^c = \eta\,\frac{\sigma_{\max}(I_M)}{\sigma(I_M^c)}, \tag{9}$$
where σmax(IM) is the maximum of σ(IcM) over the three color channels. From Equations (8) and (9), the standard deviation of IcP decreases as κc increases. That is, the standard deviation of IcP can be controlled by κc so that the standard deviation of the weak channel does not become excessively large. Finally, the mean of IcP is made equal to that of the green channel, m(Ig), as in the normal white balance method. Figure 3 presents the color correction results obtained using the proposed algorithm. As shown in Figure 3, the final enhanced underwater images obtained using the proposed method have no reddish artifact.
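A sketch of Equations (8) and (9). Because κcσ(IcM) = ησmax for every channel, all three channels share one denominator, so the weak channel's standard deviation is no longer inflated to a fixed 1/(2η). The paper's final step of matching each channel mean to the green mean is omitted here, since Equation (8) already places every channel mean at 0.5; this simplification is our assumption.

```python
import numpy as np

def adaptive_normalize(img_m, eta=3.0):
    """Adaptive normalization of Eqs. (8)-(9) applied to the output of
    Eq. (3)."""
    stds = img_m.reshape(-1, 3).std(axis=0)
    sigma_max = stds.max()                 # max std over the 3 channels
    out = np.empty_like(img_m)
    for c in range(3):
        ch = img_m[..., c]
        kappa = eta * sigma_max / stds[c]  # Eq. (9)
        # Eq. (8); note kappa * stds[c] == eta * sigma_max for all c.
        out[..., c] = (ch - ch.mean() + kappa * stds[c]) / (2.0 * kappa * stds[c])
    return np.clip(out, 0.0, 1.0)
```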

3. Underwater Image Enhancement Using Superpixel Dark Channel Prior

3.1. Transmission Map Estimation

The color-corrected underwater image obtained using the proposed method can be enhanced using the well-known haze removal frameworks without any deformation. In this paper, we present a superpixel-based DCP algorithm with adaptive weight for the medium transmission. The color corrected underwater image IcP can be applied to the following mathematical model [28],
$$I_P^c(x) = J^c(x)\,t(x) + A^c\big(1 - t(x)\big), \tag{10}$$
where Jc(x) is the clean image, Ac is the backscatter light, and t(x) is the medium transmission describing the portion of light that is not scattered. DCP [12] is frequently used to estimate Ac and t(x). This prior is based on the assumption that the pixel value of at least one color channel in a given small area is close to zero. This assumption is expressed as follows.
$$J^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \Big( \min_{c} J^c(y) \Big) \approx 0, \tag{11}$$
where Jdark(x) is the dark image and Ω(x) is a local patch. However, Jdark(x) is not always zero, and the transmission is not constant within a local patch. These facts produce annoying artifacts in the enhanced image. Therefore, transmission estimation is usually followed by a range of post-processing methods [29,30,31]. Superpixel-based approaches [32,33] are an alternative for reducing such negative artifacts when applying DCP to images.
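For reference, the plain patch-based dark channel of Equation (11) is a channel-wise minimum followed by a sliding-window minimum; this is the baseline the superpixel variant described next replaces. A naive loop is used here for clarity rather than speed.

```python
import numpy as np

def dark_channel(img, patch=7):
    """Patch-based dark channel of Eq. (11): min over color channels,
    then min over the local window Omega(x)."""
    min_rgb = img.min(axis=2)              # per-pixel min over channels
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    dark = np.empty_like(min_rgb)
    h, w = min_rgb.shape
    for y in range(h):
        for x in range(w):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()
    return dark
```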
In this paper, we divide the color-corrected underwater image IP into superpixels and estimate the backscatter light and transmission map for the superpixel image. IP is decomposed into N superpixels using a simple linear iterative clustering (SLIC) [34] algorithm. Let Si be the i-th superpixel. According to DCP, the dark image of the superpixel-segmented image is obtained as follows.
$$I_P^{\mathrm{dark}}(i) = \min_{y \in S_i} \Big( \min_{c} I_P^c(y) \Big), \tag{12}$$
where IdarkP(i) is the dark pixel corresponding to the i-th superpixel. By using the dark image based on superpixel DCP, the transmission map is estimated as follows.
$$t_{\mathrm{sp}}(i) = 1 - \omega\,\frac{I_P^{\mathrm{dark}}(i)}{A_{\mathrm{sp}}^c}, \tag{13}$$
where tsp(i) is the medium transmission obtained using superpixel DCP, Acsp is the backscatter light estimated using IdarkP(i), and ω is the weight. The parameter ω is introduced to optionally keep a small amount of haze for distant objects [12], and it is generally set to 0.95. However, setting the weight to a constant is inappropriate because the haze level differs from image to image.
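The superpixel pipeline of Equations (12) and (13) can be sketched as follows. The segmentation is passed in as an integer label map (SLIC in the paper; any segmentation, even a grid of blocks, works as a stand-in here). How the backscatter light is picked and how the per-channel Acsp enters Equation (13) as a scalar are our assumptions; the paper leaves both implicit.

```python
import numpy as np

def superpixel_transmission(img, labels, omega=0.95):
    """Sketch of Eqs. (12)-(13) on a segmented, color-corrected image."""
    min_rgb = img.min(axis=2)
    n = labels.max() + 1
    # Eq. (12): one dark value per superpixel.
    dark = np.array([min_rgb[labels == i].min() for i in range(n)])
    # Backscatter light: mean color of the haziest (brightest-dark)
    # superpixel -- an assumed estimator, one value per channel.
    A = img[labels == dark.argmax()].mean(axis=0)
    # Eq. (13), broadcast from superpixels back to pixels; the per-channel
    # minimum of A is used as the scalar divisor (assumption).
    t = 1.0 - omega * dark[labels] / A.min()
    return np.clip(t, 0.05, 1.0), A
```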

3.2. Adaptive Weight

In this paper, we propose an adaptive weight determination method for estimating the transmission map. It can be observed that a larger dark image value indicates more haze in the image. In addition, ω should be larger when the haze is heavier, and vice versa. Because Acsp is commonly estimated from pixels with large values in the dark image, Acsp is assumed to be roughly proportional to the amount of haze. Based on this assumption, the proposed weight ω is determined from the average of the backscatter lights of the three color channels as follows.
$$\omega = \frac{1}{3}\big(A_{\mathrm{sp}}^r + A_{\mathrm{sp}}^g + A_{\mathrm{sp}}^b\big). \tag{14}$$
From tsp(i) estimated using adaptive ω and Acsp, the underwater image is recovered as follows.
$$J^c(x) = \frac{I_P^c(x) - A_{\mathrm{sp}}^c}{\max\big(t_{\mathrm{sp}}(x),\, t_0\big)} + A_{\mathrm{sp}}^c, \tag{15}$$
where t0 is a small constant that prevents the tsp(i) value from becoming very small and is generally set to 0.1.
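The adaptive weight of Equation (14) and the recovery of Equation (15) then amount to a few lines; this is a sketch assuming `t` is the per-pixel transmission map and `A` the per-channel backscatter light from the previous step.

```python
import numpy as np

def adaptive_omega(A):
    """Adaptive weight of Eq. (14): mean of the three backscatter lights."""
    return float(A.mean())

def recover(img, t, A, t0=0.1):
    """Scene radiance recovery of Eq. (15) with the transmission floor t0."""
    t_c = np.maximum(t, t0)[..., None]   # broadcast over the channel axis
    return np.clip((img - A) / t_c + A, 0.0, 1.0)
```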
Figure 4 presents the transmission map and the recovered underwater image when the weight is held constant and when it is adaptively determined according to Equation (14). As shown in Figure 4, the proposed adaptive weight selection method achieves better enhancement results (marked by the red boxes) compared with the case where ω is set to 0.95.

3.3. Summary of Proposed Method

Table 1 presents a summary of the proposed color correction and enhancement algorithm for an underwater image. The successive color correction is performed using Step (1), followed by Steps (2) and (3). The SLIC algorithm [34] is used to segment an image into superpixels in Step (4). By using a superpixel dark image based on Step (5), the backscatter light and adaptive weight are estimated in Steps (6) and (7), respectively. The transmission map is calculated in Step (8), and the recovered underwater image using the proposed method is obtained in Step (9).

4. Simulation Results

4.1. Underwater Color Correction Results

The performance of the proposed color correction method is compared with that of the existing methods, namely, GWA [19], Fu et al.'s image normalization [7], and Ancuti et al.'s UWB [10]. To verify the effectiveness of the proposed method, we test it on three types of underwater images: normal, greenish, and bluish images.
Figure 5 shows the color correction results obtained for normal underwater images with a moderately weak red channel. As shown in Figure 5, all comparison methods exhibit similarly good color correction performance on such images. In conclusion, color correction is not a difficult task for a normal underwater image with a moderately weak red channel.
Figure 6 presents the color correction results obtained for greenish underwater images. As presented in Figure 6, the compensated images based on GWA and Fu et al.’s methods exhibit an over-enhanced red color. UWB achieves decent results, however, the enhanced images are still reddish. On the other hand, the corrected image using the proposed method hardly turns reddish and exhibits better results than the other three color correction methods.
The color correction results obtained for bluish underwater images are illustrated in Figure 7. As shown in the examples in Figure 7, GWA and Fu et al.'s methods fail to correct the color cast in some images. UWB performs color correction well; however, the proposed method produces better results, yielding color-balanced images with little reddish artifact. Taken together, the three figures show that the proposed algorithm achieves the best color correction for various underwater images, because most underwater images have a significantly decayed red channel.

4.2. Image Enhancement Results

To verify the effectiveness of the proposed underwater image-enhancement method, we tested it on various underwater images in UIEBD [23], which comprises 890 underwater images. The performance of our approach is compared with those of 10 state-of-the-art methods, namely, DCP [12], UDCP [13], UWB-based fusion algorithm (UWBF) [10], BP [18], underwater color restoration based on haze-lines (HL) [35], natural-scene gradient distribution prior (GDP) [36], HDP [17], RCP [16], DPATN [20], and underwater light scattering model (ULSM) [37]. The code for the proposed underwater enhancement algorithm is available at [38]. In the simulation, we performed both qualitative and quantitative comparisons. For quantitative comparison, we used the non-reference underwater image quality measure (UIQM) [39], which is frequently used in the underwater image-enhancement field. UIQM comprises three underwater image attribute measures: image colorfulness, image sharpness, and image contrast. A higher UIQM score is considered to indicate higher visual quality.
Figure 8 presents the restoration results for various images with a weak underwater veil. UWBF [10], HL [35], GDP [36], HDP [17], RCP [16], and DPATN [20] recover the underwater images well, while HL [35] shows a reddish artifact on some test images (W5, W7, and W8). DCP [12], UDCP [13], BP [18], and ULSM [37] do not sufficiently restore the underwater image. In comparison, the proposed approach effectively recovers the image details and removes underwater veils. Furthermore, the proposed method does not produce reddish artifacts.
Table 2 lists the UIQM scores for 10 of the restored normal underwater images shown in Figure 8. As shown in Table 2, most methods except for BP [18] achieve good enhancement performance in terms of the UIQM score. In particular, UWBF [10], GDP [36], HDP [17], RCP [16], and ULSM [37] generate acceptable visual quality as well as UIQM score. This is because the test images have weak underwater veils with little color cast.
Figure 9 shows a qualitative comparison of the proposed results with those of the 10 state-of-the-art underwater image-enhancement algorithms for greenish underwater images. DCP [12], UDCP [13], BP [18], GDP [36], and ULSM [37] do not exhibit good restoration performance. These algorithms do not sufficiently remove the green veil from the greenish underwater images. The fusion-based approach UWBF [10] effectively removes the green veil and enhances the details of the scenes and objects. HL [35] and HDP [17] restore the image details appropriately; however, they tend to overestimate the details of the image and often produce a reddish artifact in restoring the greenish objects. RCP [16] does not sufficiently restore the underwater images. DPATN [20] exhibits a good restoration performance, but produces a color shift and reddish artifact in some images. On the other hand, the proposed method completely removes the greenish veil from the underwater image, restores the image details, and does not produce any reddish artifact.
Table 3 lists the UIQM scores for the 10 greenish underwater images shown in Figure 9. As shown in Table 3, most underwater image-enhancement methods that do not sufficiently remove the greenish veil have low UIQM scores. The proposed method achieves the best UIQM score, followed by UWBF [10] and then HDP [17]. However, as seen in Figure 9, HDP [17] has a reddish artifact in some test images (G1, G2, G3, G9, and G10). Only the proposed algorithm and UWBF [10] have a relatively high UIQM score and satisfactory visual quality without a reddish artifact.
For a broader performance comparison, we present the underwater image enhancement results for the 10 bluish images shown in Figure 10. The detailed comparisons are almost the same as those in Figure 9. Almost all algorithms except UWBF [10] and the proposed method suffer from removal of the bluish veil from the underwater images. HDP [17] and DPATN [20] produce serious reddish artifacts. The restoration results obtained using the proposed algorithm are competitive or superior to those of the other methods in terms of the blue-veil removal capability, detailed recovery, color shift, and uniform region recovery. In particular, the proposed method does not produce any reddish artifact for all test images.
The visual quality of underwater images with a bluish veil is the most difficult to improve. Table 4 presents the UIQM scores for 10 bluish underwater image samples. As shown in Table 4, the average UIQM score is lower than those of the normal and greenish underwater images shown in Table 2 and Table 3, respectively. The top two rankings are the same as those in Table 2 and Table 3: the proposed method first, followed by UWBF [10]. In Table 4, HDP [17] and DPATN [20] are equally ranked third, with a UIQM score of 0.448. However, the enhanced underwater images using these two methods show a serious reddish artifact, as indicated in Figure 10. In UWBF [10], the color correction algorithm can produce a reddish artifact, as shown in Figure 6 and Figure 7. However, this reddish artifact can be removed using the fusion approach. Among the 11 underwater image-enhancement methods, only UWBF [10] and the proposed algorithm have acceptable image quality and a high UIQM score.

5. Discussion

In this paper, we calculate the average UIQM scores for all 890 test images in UIEBD [23]. Table 5 presents the average UIQM scores for the 11 underwater image-enhancement algorithms, including the proposed method. As shown in Table 5, our algorithm has the highest average UIQM score, followed by UWBF [10]. HDP [17] and RCP [16] have relatively high average UIQM scores; however, they do not remove the underwater veil well, as shown in Figure 9 and Figure 10, and sometimes cause a reddish artifact.
The experimental results indicate that the greatest challenge is encountered in restoring the underwater images with a bluish veil, followed by enhancing images with a greenish veil. Furthermore, many underwater image-enhancement algorithms tend to be unable to fully recover the image with a very weak red channel, or they exaggerate the red channel to cause a reddish artifact. However, the proposed algorithm almost eliminated a reddish artifact using initial color correction based on the standard deviation ratio and the subsequent modified image normalization. Annoying artifacts of the conventional DCP were alleviated without post-processing by introducing the superpixel DCP. In addition, the proposed image-adaptive weight factor enabled more accurate transmission map estimation, resulting in improved underwater image restoration. In conclusion, the proposed method recovered the weak red channel without excess reddish artifact and achieved a good enhancement result.

6. Conclusions

In this paper, we presented an efficient underwater image-enhancement method using successive color correction and a superpixel-based dehazing algorithm. The proposed color correction algorithm used the standard deviation ratio as a weighting factor for modifying the existing underwater white balance algorithm. Furthermore, the image normalization was exploited to improve color correction performance. For the corrected underwater image, the superpixel-based dark channel prior was used to restore the underwater image. In this process, an image-adaptive weight factor using the mean of backscatter lights was introduced to estimate the transmission map. We evaluated our algorithm for various underwater images, including images with a greenish veil, bluish veil, and a moderately weak veil. The performance of the proposed method was compared with that of the existing underwater image-enhancement algorithms. The simulation results showed that the proposed enhancement scheme outperforms state-of-the-art approaches in terms of both subjective and objective quality. The future work is to develop a unified underwater enhancement scheme that simultaneously performs color correction and restoration.

Author Contributions

H.S.L. and S.W.M. proposed the framework of this work, carried out all the experiments, and drafted the manuscript. I.K.E. initiated the main algorithm, supervised the work, and wrote the final manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by BK21PLUS, Creative Human Resource Development Program for IT Convergence.

Conflicts of Interest

The authors declare no conflict of interest.

Acronyms

The following acronyms are used in this paper.
Acronym   Description
CLAHE     Contrast Limited Adaptive Histogram Equalization
UWB       Underwater White Balance
DCP       Dark Channel Prior
UDCP      Underwater Dark Channel Prior
RCP       Red Channel Prior
HDP       Histogram Distribution Prior
BP        Blurriness Prior
GWA       Gray World Assumption
DPATN     Data and Prior Aggregated Transmission Network
UIEBD     Underwater Image Enhancement Benchmark Dataset
GAN       Generative Adversarial Network
SLIC      Simple Linear Iterative Clustering
UWBF      Underwater White Balance-based Fusion Algorithm
HL        Haze Line
UIQM      Underwater Image Quality Measure

References

  1. Lu, H.; Li, Y.; Zhang, Y.; Chen, M.; Serikawa, S.; Kim, H. Underwater optical image processing: A comprehensive review. Mob. Netw. Appl. 2017, 22, 1204–1211.
  2. Yang, M.; Hu, J.; Li, C.; Rohde, G.; Du, Y.; Hu, K. An in-depth survey of underwater image enhancement and restoration. IEEE Access 2019, 7, 123638–123657.
  3. Wang, Y.; Song, W.; Fortino, G.; Qi, L.-Z.; Zhang, W.; Liotta, A. An experimental-based review of image enhancement and image restoration methods for underwater imaging. IEEE Access 2019, 7, 140233–140251.
  4. Iqbal, K.; Salam, R.A.; Osman, A.; Talib, A.Z. Underwater image enhancement using an integrated colour model. Int. J. Comput. Sci. 2007, 34, 1–6.
  5. Hitam, M.S.; Awalludin, E.A.W.; Yussof, N.J.H.W.; Bachok, Z. Mixture contrast limited adaptive histogram equalization for underwater image enhancement. In Proceedings of the 2013 International Conference on Computer Applications Technology (ICCAT), Sousse, Tunisia, 20–22 January 2013; pp. 1–5.
  6. Ghani, A.S.A.; Isa, N.A.M. Underwater image quality enhancement through integrated color model with Rayleigh distribution. Appl. Soft Comput. 2015, 27, 219–230.
  7. Fu, X.; Zhuang, P.; Huang, Y.; Liao, Y.; Zhang, X.-P.; Ding, X. A Retinex-based enhancing approach for single underwater image. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 4572–4576.
  8. Zhang, S.; Wang, T.; Dong, J.; Yu, H. Underwater image enhancement via extended multi-scale Retinex. Neurocomputing 2017, 245, 1–9.
  9. Li, C.; Guo, J.; Guo, C.; Cong, R.; Gong, J. A hybrid method for underwater image correction. Pattern Recognit. Lett. 2017, 94, 62–67.
  10. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color balance and fusion for underwater image enhancement. IEEE Trans. Image Process. 2018, 27, 379–393.
  11. Ancuti, C.; Ancuti, C.O.; Haber, T.; Bekaert, P. Enhancing underwater images and videos by fusion. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 81–88.
  12. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353.
  13. Drews, P., Jr.; do Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission estimation in underwater single images. In Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops, Sydney, NSW, Australia, 2–8 December 2013; pp. 825–830.
  14. Drews, P.L.J.; Nascimento, E.R.; Botelho, S.S.C.; Campos, M.F.M. Underwater depth estimation and image restoration based on single images. IEEE Comput. Graph. Appl. 2016, 36, 24–35.
  15. Li, C.; Guo, J.; Pang, Y.; Chen, S.; Wang, J. Single underwater image restoration by blue-green channels dehazing and red channel correction. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 1731–1735.
  16. Galdran, A.; Pardo, D.; Picón, A.; Alvarez-Gila, A. Automatic red-channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145.
  17. Li, C.; Guo, J.; Cong, R.; Pang, Y.; Wang, B. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Trans. Image Process. 2016, 25, 5664–5677.
  18. Peng, Y.; Cosman, P.C. Underwater image restoration based on image blurriness and light absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594.
  19. Buchsbaum, G. A spatial processor model for object colour perception. J. Franklin Inst. 1980, 310, 1–26.
  20. Liu, R.; Fan, X.; Hou, M.; Jiang, Z.; Luo, Z.; Zhang, L. Learning aggregated transmission propagation networks for haze removal and beyond. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2973–2986.
  21. Wang, Y.; Zhang, J.; Cao, Y.; Wang, Z. A deep CNN method for underwater image enhancement. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1382–1386.
  22. Hou, M.; Liu, R.; Fan, X.; Luo, Z. Joint residual learning for underwater image enhancement. In Proceedings of the 2018 IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 4043–4047.
  23. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2020, 29, 4376–4389.
  24. Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot. Autom. Lett. 2018, 3, 387–394.
  25. Fabbri, C.; Islam, M.J.; Sattar, J. Enhancing underwater imagery using generative adversarial networks. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 7159–7165.
  26. Land, E.H. The Retinex theory of color vision. Sci. Am. 1977, 237, 108–129.
  27. van de Weijer, J.; Gevers, T.; Gijsenij, A. Edge-based color constancy. IEEE Trans. Image Process. 2007, 16, 2207–2214.
  28. Narasimhan, S.G.; Nayar, S.K. Vision and the atmosphere. Int. J. Comput. Vis. 2002, 48, 233–254.
  29. Xiao, C.; Gan, J. Fast image dehazing using guided joint bilateral filter. Vis. Comput. 2012, 28, 713–721.
  30. Yeh, C.H.; Kang, L.W.; Lee, M.S.; Lin, C.Y. Haze effect removal from image via haze density estimation in optical model. Opt. Express 2013, 21, 27127–27141.
  31. Lin, Z.; Wang, X. Dehazing for image and video using guided filter. Appl. Sci. 2012, 2, 123–127.
  32. Jiang, Y.; Sun, C.; Zhao, Y.; Yang, L. Image dehazing using adaptive bi-channel priors on superpixels. Comput. Vis. Image Underst. 2017, 165, 17–32.
  33. Yang, M.; Liu, J.; Li, Z. Superpixel-based single nighttime image haze removal. IEEE Trans. Multimed. 2018, 20, 3008–3018.
  34. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Susstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
  35. Berman, D.; Treibitz, T.; Avidan, S. Diving into haze-lines: Color restoration of underwater images. In Proceedings of the British Machine Vision Conference (BMVC), London, UK, 4–7 September 2017; pp. 1–12.
  36. Gong, Y.; Sbalzarini, I.F. A natural-scene gradient distribution prior and its application in light-microscopy image processing. IEEE J. Sel. Top. Signal Process. 2016, 10, 99–114.
  37. Cho, Y.; Kim, A. Visibility enhancement for underwater visual SLAM based on underwater light scattering model. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 710–717.
  38. Source Code for the Proposed Method. Available online: https://sites.google.com/view/ispl-pnu/ (accessed on 24 July 2020).
  39. Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Oceanic Eng. 2016, 41, 541–551.
Figure 1. Color correction results using UWB. (a) Greenish underwater image. (b) Corrected greenish images before GWA IU. (c) Corrected greenish images after GWA IA. (d) Bluish underwater image. (e) Corrected bluish images before GWA IU. (f) Corrected bluish images after GWA IA. In this Figure, SD means standard deviation.
Figure 2. Color adjusted image after initial correction. (a) Underwater image I. (b) Image after initial correction by UWB. (c) Image after initial correction by proposed method.
Figure 3. Color correction result. (a) Underwater image I. (b) Image IM after initial correction by proposed method. (c) Image IP after final correction by proposed method.
Figure 4. Transmission map and enhanced image according to the proposed adaptive ω. (a) Transmission map and recovered image at (i) ω = 0.55, (ii) ω = 0.75, (iii) ω = 0.95, (iv) ω = 0.76 (proposed adaptive ω marked in red rectangle). (b) Transmission map and recovered image at (i) ω = 0.55, (ii) ω = 0.75, (iii) ω = 0.95, (iv) ω = 0.81 (proposed adaptive ω marked in red rectangle).
Figure 5. Color correction results obtained for normal underwater images with a moderately weak red channel. (a) Underwater image. (b) GWA [19]. (c) Fu et al.’s method [7]. (d) UWB [10]. (e) Proposed method.
Figure 6. Color correction results obtained for greenish underwater images. (a) Underwater image. (b) GWA [19]. (c) Fu et al.’s method [7]. (d) UWB [10]. (e) Proposed method.
Figure 7. Color correction results obtained for bluish underwater images. (a) Underwater image. (b) GWA [19]. (c) Fu et al.’s method [7]. (d) UWB [10]. (e) Proposed method.
Figure 8. Image restoration results obtained for normal underwater images whose red channel is not severely weak.
Figure 9. Image restoration results obtained for greenish underwater images.
Figure 10. Image restoration results obtained for bluish underwater images.
Table 1. Summary of the proposed method.

Input: underwater image I
Output: enhanced image J
(1) Perform the first color correction using (3); obtain IM.
(2) Perform the second color correction using (8); obtain IP.
(3) Make the mean of IP equal to Ig.
(4) Perform superpixel segmentation for IP.
(5) Generate the superpixel dark image using (12).
(6) Estimate Acsp.
(7) Obtain the image-adaptive weight using (14).
(8) Estimate the transmission map using (13).
(9) Obtain the enhanced underwater image using (15).
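The restoration steps above (4)–(9) can be sketched in a few lines of Python. This is a hedged illustration only: uniform grid blocks stand in for the SLIC superpixels, the backscatter light is estimated with the common brightest-dark-pixels heuristic, and the adaptive weight `omega` derived from the mean backscatter is an assumed stand-in for Equation (14), not the paper's exact definition.

```python
import numpy as np

def superpixel_dark_channel(img, block=15):
    """Dark channel taken over regions; grid blocks approximate superpixels here."""
    h, w, _ = img.shape
    dark = np.empty((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = img[i:i + block, j:j + block]
            dark[i:i + block, j:j + block] = patch.min()
    return dark

def restore(img, omega=None, t0=0.1):
    """DCP-style restoration with a backscatter-driven adaptive weight (assumed form)."""
    dark = superpixel_dark_channel(img)
    # Backscatter light A: mean color of the 0.1% haziest pixels (standard DCP heuristic).
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    if omega is None:
        # Image-adaptive weight from the mean backscatter, kept in a plausible range.
        omega = float(np.clip(A.mean(), 0.55, 0.95))
    # Transmission map: t = 1 - omega * dark(I / A), then clamped from below by t0.
    t = 1.0 - omega * superpixel_dark_channel(img / np.maximum(A, 1e-6))
    J = (img - A) / np.maximum(t, t0)[..., None] + A
    return np.clip(J, 0.0, 1.0), omega

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((30, 40, 3))
    J, omega = restore(img)
    print(J.shape, omega)
```

Region-wise minima avoid the per-pixel-patch minimum of the conventional DCP, which is the source of the blocking artifacts the superpixel variant is meant to suppress; in the actual method the regions follow SLIC boundaries rather than a fixed grid.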
Table 2. UIQM scores for the 10 normal underwater image samples shown in Figure 8.

Method       W1     W2     W3     W4     W5     W6     W7     W8     W9     W10    Ave
DCP [12]     0.471  0.417  0.783  0.688  0.676  0.796  0.784  0.825  0.605  0.591  0.664
UDCP [13]    0.588  0.454  0.671  0.673  0.676  0.632  0.731  0.742  0.607  0.595  0.637
UWBF [10]    0.685  0.478  0.783  0.732  0.677  0.773  0.746  0.818  0.578  0.635  0.691
BP [18]      0.417  0.405  0.106  0.343  0.306  0.117  0.326  0.301  0.559  0.528  0.341
HL [35]      0.618  0.461  0.774  0.785  0.681  0.842  0.675  0.721  0.580  0.590  0.673
GDP [36]     0.491  0.354  0.707  0.615  0.603  0.687  0.655  0.760  0.487  0.549  0.591
HDP [17]     0.569  0.459  0.761  0.706  0.731  0.819  0.729  0.795  0.579  0.593  0.674
RCP [16]     0.518  0.442  0.705  0.624  0.616  0.755  0.687  0.790  0.551  0.507  0.620
DPATN [20]   0.546  0.428  0.783  0.769  0.731  0.819  0.738  0.703  0.563  0.582  0.666
ULSM [37]    0.531  0.451  0.662  0.483  0.531  0.678  0.610  0.792  0.563  0.336  0.564
Proposed     0.653  0.468  0.468  0.801  0.695  0.820  0.810  0.830  0.646  0.687  0.724
Table 3. UIQM scores for the 10 greenish underwater image samples shown in Figure 9.

Method       G1     G2     G3     G4     G5     G6     G7     G8     G9     G10    Ave
DCP [12]     0.376  0.420  0.644  0.752  0.720  0.236  0.413  0.459  0.476  0.366  0.486
UDCP [13]    0.436  0.463  0.666  0.656  0.682  0.327  0.506  0.442  0.509  0.481  0.517
UWBF [10]    0.493  0.498  0.789  0.766  0.841  0.396  0.537  0.623  0.500  0.440  0.588
BP [18]      0.409  0.450  0.737  0.766  0.814  0.248  0.439  0.506  0.522  0.381  0.527
HL [35]      0.404  0.480  0.696  0.688  0.703  0.381  0.491  0.533  0.547  0.489  0.541
GDP [36]     0.334  0.420  0.720  0.709  0.784  0.261  0.413  0.480  0.449  0.354  0.492
HDP [17]     0.479  0.497  0.714  0.796  0.745  0.365  0.544  0.548  0.616  0.543  0.585
RCP [16]     0.477  0.441  0.789  0.763  0.848  0.373  0.516  0.573  0.541  0.458  0.578
DPATN [20]   0.446  0.543  0.700  0.735  0.715  0.329  0.579  0.533  0.569  0.463  0.561
ULSM [37]    0.283  0.363  0.692  0.744  0.781  0.200  0.387  0.514  0.437  0.277  0.468
Proposed     0.509  0.550  0.787  0.863  0.849  0.412  0.593  0.625  0.513  0.502  0.620
Table 4. UIQM scores for the 10 bluish underwater image samples shown in Figure 10.

Method       B1     B2     B3     B4     B5     B6     B7     B8     B9     B10    Ave
DCP [12]     0.322  0.374  0.316  0.190  0.397  0.366  0.194  0.568  0.399  0.306  0.343
UDCP [13]    0.311  0.357  0.377  0.248  0.453  0.424  0.241  0.513  0.481  0.391  0.380
UWBF [10]    0.438  0.455  0.432  0.224  0.551  0.461  0.268  0.716  0.586  0.412  0.454
BP [18]      0.360  0.410  0.325  0.220  0.418  0.380  0.209  0.657  0.402  0.318  0.370
HL [35]      0.381  0.366  0.338  0.267  0.446  0.527  0.231  0.730  0.381  0.363  0.403
GDP [36]     0.351  0.238  0.336  0.222  0.396  0.253  0.222  0.624  0.419  0.210  0.327
HDP [17]     0.451  0.422  0.382  0.320  0.508  0.526  0.268  0.713  0.479  0.414  0.448
RCP [16]     0.398  0.444  0.363  0.235  0.480  0.517  0.254  0.703  0.473  0.428  0.430
DPATN [20]   0.413  0.462  0.430  0.338  0.532  0.457  0.288  0.686  0.523  0.349  0.448
ULSM [37]    0.234  0.332  0.321  0.221  0.388  0.365  0.206  0.588  0.423  0.323  0.340
Proposed     0.454  0.415  0.497  0.212  0.583  0.506  0.317  0.690  0.641  0.449  0.476
Table 5. Average UIQM scores for the 890 test underwater images in UIEBD [23].

Method       Average UIQM Score
DCP [12]     0.536
UDCP [13]    0.519
UWBF [10]    0.632
BP [18]      0.441
HL [35]      0.579
GDP [36]     0.519
HDP [17]     0.623
RCP [16]     0.619
DPATN [20]   0.590
ULSM [37]    0.531
Proposed     0.650

Share and Cite

MDPI and ACS Style

Lee, H.S.; Moon, S.W.; Eom, I.K. Underwater Image Enhancement Using Successive Color Correction and Superpixel Dark Channel Prior. Symmetry 2020, 12, 1220. https://doi.org/10.3390/sym12081220
