Article

Underwater Image Enhancement Based on Histogram-Equalization Approximation Using Physics-Based Dichromatic Modeling

1 Department of Computer Science, National Chengchi University, Taipei City 116, Taiwan
2 Department of Mechanical and Aerospace Engineering, University of California, San Diego, CA 92093, USA
3 AI Research Center, National Taiwan Ocean University, Keelung City 202, Taiwan
4 Department of Electronic Engineering, National Taipei University of Technology, Taipei City 106, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2022, 22(6), 2168; https://doi.org/10.3390/s22062168
Submission received: 24 January 2022 / Revised: 28 February 2022 / Accepted: 4 March 2022 / Published: 10 March 2022
(This article belongs to the Special Issue Feature Papers in Vehicular Sensing)

Abstract

This work proposes an underwater image enhancement method based on histogram-equalization (HE) approximation using physics-based dichromatic modeling (PDM). Images captured underwater usually suffer from low contrast and color distortions due to light scattering and attenuation. The PDM describes the image formation process and can be used to restore nature-degraded images, such as underwater images. However, it does not guarantee that the restored images have good contrast. Thus, we propose approximating the conventional HE based on the PDM to correct the color distortions of underwater images and enhance their contrast through convex optimization. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art underwater image restoration approaches.

1. Introduction

Underwater exploration has progressed greatly thanks to underwater remotely operated vehicles (ROVs) equipped with surveillance video cameras or event data recorders [1]. One can collect and analyze the recorded data through computer vision applications such as image classification, object detection, and semantic segmentation. One of the most critical requirements for these systems to work well is that the recorded videos and captured images are visually clear with good contrast. However, the visual source does not always have decent visibility [2]. Because the light reflected by the scene is absorbed and scattered before it reaches the camera, underwater images often exhibit color distortions and appear hazy and blurry [3].
Improving the visibility of underwater images remains a challenge. Contrast enhancement (CE) is a useful technique to enhance the visual quality of low-contrast images and bring out image details that would otherwise be unseen. Among the different approaches to CE, global HE [4] is one of the most widely used methods due to its simple implementation and generally satisfactory results. HE tries to distribute the intensity levels (ILs) of an image uniformly across the available IL range through a transformation function calculated from the image cumulative distribution function (CDF). It works for general cases; however, light absorption in underwater scenes makes images bluish or greenish, which may cause HE-based methods to under-enhance or over-enhance them. Therefore, HE may not work well for such images, since it does not consider the physical characteristics of underwater imaging.
On the other hand, several attempts have been made to restore the color and sharpness of underwater images using physics-based dichromatic modeling (PDM) [5,6,7,8,9]. Drews et al. [7], inspired by the dark channel prior (DCP) for image dehazing [8], restored underwater images using an underwater DCP. Peng et al. [10] adopted image blurriness and light absorption to estimate depth information and background light for underwater image restoration. Peng et al. [9] generalized the DCP to various nature-degraded scenes, such as underwater, hazy, and sandstorm images. However, physics-based dichromatic modeling often achieves only color restoration, and the restored images may still lack good contrast.
Among traditional image enhancement methods, [5,6,11,12,13,14,15,16] have all made related attempts. The scene depths used in [5,12,15] are all obtained from the transmission map (TM), which is derived from the DCP. He et al. [8] proposed to estimate the amount of spatially uniform haze using the darkest channel in the scene to remove haze from natural images. In natural scenes, the DCP can be used to estimate the TM and scene depth, since points closer to the camera have shorter scattering paths, whereas dark points closer to the camera remain dark because the intensity of scattered light they experience is lower. However, these traditional methods may still fail to restore red light in underwater images because its long wavelength (low frequency) attenuates faster underwater. Consequently, the dark channel computed over all RGB channels often reduces to the red channel when measuring the transmission, resulting in incorrect depth estimation and poor restoration performance. Subsequently, an underwater DCP based only on the blue and green channels was proposed in [6,13,16] to address this problem. Additionally, Galdran et al. [14] proposed an approach based on the DCP of the green, blue, and inverted red channels, called the Red Channel method. Furthermore, Carlevaris-Bianco et al. [11] replaced the DCP with the maximum intensity prior for estimating the TM, which computes the difference between the maximum intensity of the red channel and that of the blue and green channels. However, these priors admit many exceptions due to light scattering, absorption, and varying lighting conditions in underwater images, which prevent these methods from achieving satisfactory performance.
In recent years, deep learning has made significant progress, especially in computer vision and low-level image processing tasks [17,18,19,20,21,22]. As a pioneering work, Cao et al. [17] proposed to synthesize degraded/clean underwater image pairs based on the image formation model to train a model that estimates scene depth and background light for an underwater image. Li et al. [23] extended this idea using a Generative Adversarial Network (GAN) to make restored underwater images more realistic. In addition, Guo et al. [24] employed a residual multi-scale dense GAN model to enhance underwater images. Li et al. [25] collected an underwater image dataset with subjectively chosen reference restorations and proposed a gated fusion network to improve underwater image enhancement. Li et al. [26] proposed a physical model for simulating real underwater images according to different water body types; in total, they trained ten models for underwater image enhancement, named UWCNN, each corresponding to one group of synthesized underwater images. The performance of these underwater image enhancement models is largely determined by the underwater images collected for training. Thus, it is challenging to collect enough underwater images to represent real-world conditions. For this reason, the generalization capability and robustness of current deep learning-based models for underwater image enhancement/restoration are limited and often unsatisfactory.
Unlike the previously mentioned methods, this work proposes a novel idea that combines conventional HE [4] with a PDM-based restoration method to strike a balance between “whether the image has good contrast” and “whether the image color is recovered” through convex optimization. The proposed HE approximation based on the PDM takes the physical characteristics of natural images into account to make the color restoration consistent among pixels while keeping the distribution of image intensities uniform across the entire IL range for better contrast.
The rest of the paper is organized as follows. Section 2 describes related works. The HE and how to use PDM to restore underwater images are briefly reviewed in Section 3. The proposed method is described in Section 4. The experimental results are reported in Section 5. Finally, Section 6 summarizes the conclusions.

2. Related Work

2.1. Histogram Equalization

Due to the absorption, scattering, and attenuation of light propagating from the scene to the camera, the contrast of underwater images is significantly reduced, producing a fog-like appearance. CE is a helpful way to improve the visual quality of such images and bring out image details that would otherwise be invisible. Many extension techniques exist in the field of CE, and global HE [4] is one of the most widely used because of its simplicity of implementation and generally satisfactory results. Global HE attempts to redistribute the intensity levels (ILs) present in the image over the full IL range using a transformation function derived from the image cumulative distribution function (CDF). However, it can lead to under-enhancement or over-enhancement of the image: significant peaks in the histogram may occupy wide ranges in the mapping, leaving a relatively narrow range for other intensity levels. In addition, global HE may also amplify noise in the input images and create unwanted artifacts. Much research has been conducted to improve global HE. In 1997, Kim [27] proposed brightness preserving bi-histogram equalization (BBHE) to maintain the original brightness during CE; it splits the image histogram into two sub-histograms at the mean intensity. Following the same idea of mean brightness preservation, dualistic sub-image histogram equalization (DSIHE) [28] separates the histogram at the median intensity. Later, methods based on multiple histogram separations, such as multi-peak histogram equalization (MPHE) [29], recursive mean separation histogram equalization (RMSHE) [30], and recursive sub-image histogram equalization (RSIHE) [31], were proposed to maintain mean brightness.
To keep the mean brightness while avoiding noise amplification in global HE, the image histogram can be modified by clipping to control the CE rate. HE with bin underflow and bin overflow (BUBOHE) [32] regulates the gradient of the mapping function by setting lower and upper bounds on the calculated probability density function (PDF). Gain-controllable clipped histogram equalization (GC-CHE) [33] is a clipped HE method using gain control. Furthermore, a hybrid histogram modification (HM) method, bi-histogram equalization with a plateau limit (BHEPL) [34], integrates histogram separation and clipping for CE. Adaptive gamma correction with weighting distribution (AGCWD) [35] is another hybrid HM method that combines gamma correction [36] and global HE to keep the original mean brightness while curbing under-enhancement and over-enhancement.
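The clipping idea shared by these methods can be sketched in a few lines of NumPy: the histogram is clipped at a plateau limit before the CDF mapping is built, which bounds the slope of the transformation and curbs over-enhancement. The plateau value and the uniform redistribution below are illustrative choices, not the exact formulations of BUBOHE [32] or BHEPL [34].

```python
import numpy as np

def clipped_histogram_equalization(img, plateau_ratio=2.0, L=256):
    """Sketch of plateau-limited HE: clip histogram bins at a plateau
    before building the CDF-based mapping (cf. the clipping idea of
    BUBOHE/BHEPL; the exact formulas differ)."""
    hist = np.bincount(img.ravel(), minlength=L).astype(np.float64)
    plateau = plateau_ratio * img.size / L       # illustrative plateau limit
    clipped = np.minimum(hist, plateau)          # bound each bin's mass
    excess = hist.sum() - clipped.sum()
    clipped += excess / L                        # redistribute clipped mass uniformly
    cdf = np.cumsum(clipped) / clipped.sum()
    mapping = np.floor((L - 1) * cdf + 0.5).astype(img.dtype)  # Eq. (2)-style rounding
    return mapping[img]
```

Since no bin contributes more than `plateau` pixels to the CDF, a dominant histogram peak can no longer monopolize the output range, which is exactly the over-enhancement these clipped variants target.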
In subsequent studies, 2D HE methods that exploit the contextual information around each pixel were proposed in place of the 1D intensity histogram. In 2011 and 2012, Celik proposed two-dimensional histogram equalization (2DHE) [37] and contextual and variational contrast enhancement (CVCE) [38], respectively. Both use a sliding window to compute a two-dimensional histogram that counts the neighboring pixel intensities for each intensity level to further exploit contextual features.
The global HE variants described above can effectively enhance contrast for most natural images. However, they may not suffice for restoring underwater images, which suffer not only from low contrast but also from color distortions.

2.2. PDM-Based Restoration Methods

Many underwater image restoration methods have been proposed in recent years; since our approach builds on PDM-based restoration, we introduce the most relevant ones here. Because underwater images resemble hazy images in some cases, researchers have employed dehazing methods for underwater image restoration. For instance, single image dehazing based on the dark channel prior [8] assumes that dark pixels exist in most local patches of haze-free natural images. Based on this assumption, one can estimate the haze transmission and ambient light to restore hazy images via the PDM. The underwater DCP (UDCP) [7] method observed that the absorption rate of red light is larger and modified the DCP to fit underwater scenes. Similarly, the Red Channel method [14] can be regarded as a dark channel method that recovers degraded images by restoring the contrast and colors associated with short wavelengths. Peng et al. proposed a depth estimation method for underwater scenes based on image blurriness and light absorption (IBLA) [10] for restoring underwater images. Peng et al. also proposed a Generalized Dark Channel Prior (GDCP) [9] approach for image restoration, which incorporates adaptive color correction into the PDM; this approach reduces to several DCP variants under special conditions of the turbid medium and ambient lighting. Importantly, underwater images usually present blur and color casts caused by light absorption and scattering in the water medium. The underwater light attenuation prior (ULAP) method [39] therefore estimates the depth map, the transmission maps (TMs), and the background light (BL) for the RGB channels to recover the true scene radiance through an image formation model. Song et al. [40] restore underwater images using transmission map optimization.
However, these image restoration methods do not consider further enhancing the contrast of the restored images, sometimes presenting unsatisfying restored results.

3. Background Reviews

3.1. Histogram Equalization

Let $I$ be the input image and $I(x) \in [0, L-1]$ be its intensity at pixel $x$, where $L = 2^b$ for an image with $b$-bit ILs. Let the image histogram be $H = [h_0, h_1, \ldots, h_{L-1}]^T \in \mathbb{N}^L$. The probability density function $P = [p_0, p_1, \ldots, p_{L-1}]^T \in \mathbb{R}_+^L$ of $I$ can be written in vector form as:

$$P = \frac{H}{\mathbf{1}^T H}, \qquad (1)$$

where $\mathbf{1} = [1, 1, \ldots, 1]^T \in \mathbb{N}^L$. The CDF $C = [c_0, c_1, \ldots, c_{L-1}]^T \in \mathbb{R}_+^L$ of the image is computed as $c_l = \sum_{i=0}^{l} p_i$. In HE, the transform function $T_f$ is derived as:

$$T_f(l) = \lfloor (L-1)\, c_l + 0.5 \rfloor, \qquad (2)$$

where $l \in [0, L-1]$. Finally, each distinct IL $l$ of the input image $I$ is remapped to a corresponding output IL via the transform function in Equation (2).
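Equations (1) and (2) translate directly into code; a minimal NumPy sketch for an 8-bit grayscale image:

```python
import numpy as np

def he_transform(img, L=256):
    """Histogram equalization via Eqs. (1)-(2):
    P = H / (1^T H), c_l = sum_{i<=l} p_i, T_f(l) = floor((L-1) c_l + 0.5)."""
    H = np.bincount(img.ravel(), minlength=L)            # histogram H
    P = H / H.sum()                                      # PDF, Eq. (1)
    C = np.cumsum(P)                                     # CDF c_l
    T_f = np.floor((L - 1) * C + 0.5).astype(img.dtype)  # transform, Eq. (2)
    return T_f[img]                                      # remap every pixel
```

Applied to a low-contrast image whose ILs occupy a narrow band, the remapped intensities spread toward the full $[0, L-1]$ range.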

3.2. PDM-Based Image Restoration

A simple PDM-based model for underwater images is given as [8,9]:

$$I^c(x) = J^c(x)\, t(x) + B^c \left(1 - t(x)\right), \qquad (3)$$

where $I^c$ is the observed intensity in channel $c$, with $c$ being one of the R, G, B channels, $J^c$ is the scene radiance, $B^c$ is the background light (BL), and $t$ is the transmission map. The transmission describes how much of the light traveling through the medium is not attenuated and reaches the camera. It can be formulated as an exponential decay in the scene depth:

$$t(x) = e^{-\beta d(x)}, \qquad (4)$$

where $\beta$ is the attenuation coefficient and $d(x)$ is the depth of the scene point at pixel $x$. Thus, the farther a scene point is from the camera, the smaller its $t$.

Let $S_{0.1}$ be the set of positions of the top $10\%$ brightest pixels in $I^{\mathrm{dark}}$, where $I^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \min_{c} I^c(y)$. The estimated BL $B^c$ is calculated by:

$$B^c = \frac{1}{|S_{0.1}|} \sum_{x \in S_{0.1}} I^c(x), \quad c \in \{r, g, b\}. \qquad (5)$$

Based on the DCP, the transmission map $t(x)$ is estimated [8] as:

$$t(x) = 1 - \min_{y \in \Omega(x)} \min_{c \in \{R,G,B\}} \frac{I^c(y)}{B^c}, \qquad (6)$$

where $\Omega(x)$ is a local patch centered at $x$; a $15 \times 15$ patch is adopted in our work. Since the estimated transmission map has blocking artifacts, it is refined by median filtering. Finally, plugging $I^c$, $t$, and $B^c$ into Equation (3), the scene radiance $J^c$ can be recovered as:

$$J^c(x) = \frac{I^c(x) - B^c}{\max\left(t(x), r_0\right)} + B^c, \quad c \in \{r, g, b\}, \qquad (7)$$

where $r_0$ is empirically set to $0.1$ to increase the exposure of $J$ for display.
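Chaining Equations (3)–(7) gives a compact restoration routine. The sketch below, assuming an RGB image with values in $[0, 1]$, computes the dark channel, estimates the background light from the brightest 10% of dark-channel pixels (Eq. (5)), estimates the transmission via the DCP (Eq. (6)), and inverts the model (Eq. (7)); a median filter stands in for the refinement step, and the patch size and $r_0$ follow the values stated above.

```python
import numpy as np
from scipy.ndimage import minimum_filter, median_filter

def restore_pdm(I, patch=15, r0=0.1):
    """DCP-based restoration following Eqs. (3)-(7); I is HxWx3 in [0, 1]."""
    # I_dark(x): patch-wise minimum over space and channels
    dark = minimum_filter(I.min(axis=2), size=patch)
    # Eq. (5): B^c averages I^c over the top 10% brightest dark-channel pixels
    n_top = max(1, int(0.1 * dark.size))
    idx = np.unravel_index(np.argsort(dark.ravel())[-n_top:], dark.shape)
    B = I[idx].mean(axis=0)                        # one BL value per channel
    # Eq. (6): t(x) = 1 - min_y min_c I^c(y) / B^c
    t = 1.0 - minimum_filter((I / B).min(axis=2), size=patch)
    t = median_filter(t, size=patch)               # refine blocking artifacts
    # Eq. (7): invert the image formation model, clamping t at r0
    t = np.maximum(t, r0)[..., None]
    J = (I - B) / t + B
    return np.clip(J, 0.0, 1.0)
```

This is a minimal sketch for illustration; it assumes non-zero background light in every channel and omits the guided-filter-style refinements used by full DCP implementations.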
Figure 1 shows two sets of underwater images: the originals and their enhanced versions obtained using HE and the PDM-based method [9], respectively. Applying HE produces better contrast; however, the colors of the enhanced images are distorted. By contrast, the PDM-based method restores degraded images by reversing the image formation process, but the restored images may have low contrast. Therefore, the proposed method aims to combine HE and PDM to obtain restored results with both good contrast and recovered color.

4. Proposed Method

In this section, we discuss the details of the proposed approach. The overall framework is depicted in Figure 2: it combines HE and PDM-based restoration, balancing the input image’s HE transformation function against the PDM mapping distribution via convex optimization, and outputs the enhanced image using the resulting transformation function.
To combine the HE and the PDM, we first consider the HE part. Since the transformation function $T_f$ is one-to-one, it can be represented in vector form $T = [t_0, t_1, \ldots, t_{L-1}]^T \in \mathbb{N}^L$ as:

$$T = \lfloor (L-1)[c_0, c_1, \ldots, c_{L-1}]^T + 0.5 \rfloor = \lfloor (L-1)[p_0,\; p_0 + p_1,\; \ldots,\; \textstyle\sum_{i=0}^{L-1} p_i]^T + 0.5 \rfloor, \qquad (8)$$

which is referred to as the transformation matrix. According to Equation (8), the difference between two consecutive ILs is given by:

$$t_k - t_{k-1} = \lfloor (L-1)c_k + 0.5 \rfloor - \lfloor (L-1)c_{k-1} + 0.5 \rfloor \approx (L-1)p_k. \qquad (9)$$

Then, the transformation function $T$ can be further expressed as:

$$RT = \begin{bmatrix} 1 & 0 & \cdots & 0 & 0 \\ -1 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & \cdots & -1 & 1 \end{bmatrix} T \approx (L-1)P = \hat{P}, \qquad (10)$$

where $R \in \mathbb{Z}^{L \times L}$ is the first-order difference matrix shown above.
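The relation in Equation (10) can be checked numerically: with $1$ on the diagonal and $-1$ on the subdiagonal, $RT$ recovers the first-order differences of the transformation, which Equation (9) approximates by $(L-1)P$ up to rounding. A toy sketch with a uniform PDF:

```python
import numpy as np

L = 8                                        # toy number of intensity levels
P = np.full(L, 1.0 / L)                      # uniform PDF for illustration
R = np.eye(L) - np.eye(L, k=-1)              # 1 on diagonal, -1 on subdiagonal
T = np.floor((L - 1) * np.cumsum(P) + 0.5)   # Eq. (8): T from the CDF
P_hat = (L - 1) * P                          # right-hand side of Eq. (10)
# R @ T gives [t_0, t_1 - t_0, ..., t_{L-1} - t_{L-2}], which tracks (L-1)P
print(np.abs(R @ T - P_hat).max() <= 1.0)    # rounding keeps them within one IL
```

The discrepancy comes entirely from the floor/rounding in Equation (8), which is why Equations (9) and (10) hold only approximately.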
For the PDM part, since the recovered image is derived via Equation (7), a one-to-one function cannot describe it, because each distinct IL $l$ of the input image $I^c$ may be mapped to multiple ILs in the recovered scene radiance $J$. Hence, to combine the PDM with the HE, the transformation matrix $T^c = [t_0^c, t_1^c, \ldots, t_{L-1}^c]^T$ is assumed to be an independent Gaussian random vector, $T^c \sim \mathcal{N}(\bar{T}^c, \Sigma_{T^c})$, where $\bar{T}^c = [\bar{t}_0^c, \bar{t}_1^c, \ldots, \bar{t}_{L-1}^c]^T \in \mathbb{R}^L$ and $\Sigma_{T^c} = \mathrm{diag}\{\sigma_{t_k^c}^2\} \in \mathbb{R}^{L \times L}$, $k \in [0, L-1]$, with $\bar{t}_k^c$ and $\sigma_{t_k^c}^2$ the mean and variance of the random variable $t_k^c$, respectively.
However, not every element of $\bar{T}^c$ and $\Sigma_{T^c}$ is valid, because there may be empty bins in the histogram of the input image; the elements of $T^c$ corresponding to empty bins are regarded as invalid. Thus, a matrix $E \in \mathbb{R}^{\tilde{n} \times L}$ is constructed to discard the elements of $T^c$ that do not have valid Gaussian distributions, where $\tilde{n}$ is the number of distinct ILs in the input image $I$. Let each element $t_k^c$ of $T^c$ correspond to a row vector $e_k^c \in \mathbb{R}^{1 \times L}$ whose $(k+1)$-th element is $1$ and whose other elements are $0$. Then $E$ is formed by stacking the rows $e_k^c$ for all valid $t_k^c$, $k \in [0, L-1]$:

$$E = \begin{bmatrix} \vdots \\ e_k^c \\ \vdots \end{bmatrix} \in \mathbb{R}^{\tilde{n} \times L}. \qquad (11)$$

Hence, the assumption becomes $E T^c \sim \mathcal{N}(E \bar{T}^c,\; E \Sigma_{T^c} E^T)$.
To this end, the objective function $f_0(T^c)$ combining the HE and the PDM is formulated as:

$$\begin{aligned} \underset{T^c}{\text{minimize}} \quad & \| R T^c - \hat{P} \|_2 - \alpha \log G(E T^c) \\ \text{subject to} \quad & R T^c \succeq 0,\; t_0^c = 0,\; t_{L-1}^c = L-1,\; c \in \{r, g, b\}, \end{aligned} \qquad (12)$$

where $\alpha \geq 0$ is a control parameter, and the multivariate Gaussian distribution $G$ is given by:

$$G(E T^c) = \frac{1}{\sqrt{(2\pi)^{\tilde{n}} |\tilde{\Sigma}_{T^c}|}} \exp\left\{ -\frac{1}{2} \left\| \tilde{\Sigma}_{T^c}^{-\frac{1}{2}} E (T^c - \bar{T}^c) \right\|_2^2 \right\}, \qquad (13)$$

where $\tilde{\Sigma}_{T^c} = E \Sigma_{T^c} E^T$.
The objective function $f_0(T^c)$ has two terms: the HE term $\| R T^c - \hat{P} \|_2$ and the PDM-based Gaussian term $-\alpha \log G(E T^c)$. Since $G$ is log-concave and the logarithm is monotonically increasing, $-\log G$ is adopted to regularize the HE term, penalizing deviations of the transformation matrix $T^c$ from the PDM-based distribution while keeping the objective $f_0$ convex.
To solve Equation (12), the PDM term can be simplified as:

$$-\alpha \log G(E T^c) = \frac{\alpha}{2} \left[ \tilde{n} \log 2\pi + \log |\tilde{\Sigma}_{T^c}| + \left\| \tilde{\Sigma}_{T^c}^{-\frac{1}{2}} E (T^c - \bar{T}^c) \right\|_2^2 \right] = \tilde{\alpha} \left\| \tilde{\Sigma}_{T^c}^{-\frac{1}{2}} E (T^c - \bar{T}^c) \right\|_2^2 + C, \qquad (14)$$

where $\tilde{\alpha} = \frac{\alpha}{2}$ and $C = \frac{\alpha}{2} \left( \tilde{n} \log 2\pi + \log |\tilde{\Sigma}_{T^c}| \right)$, which is constant in $f_0$ and can thus be dropped. Therefore, by introducing two slack variables, Equation (12) is equivalent to:

$$\begin{aligned} \underset{T^c,\, t,\, g}{\text{minimize}} \quad & t + \tilde{\alpha} g \\ \text{subject to} \quad & R T^c \succeq 0,\quad t \geq \| R T^c - \hat{P} \|_2,\quad g \geq \left\| \tilde{\Sigma}_{T^c}^{-\frac{1}{2}} E (T^c - \bar{T}^c) \right\|_2^2, \\ & t_0^c = 0,\; t_{L-1}^c = L-1,\; c \in \{r, g, b\}, \end{aligned} \qquad (15)$$
which can be solved using CVX, a package for specifying and solving convex programs [41,42]. By minimizing $f_0(T^c)$ with an appropriate $\tilde{\alpha}$, ranging from $10^{-4}$ to $10^{-5}$ in this paper, the HE approximation based on the PDM further improves the enhanced image quality.
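The paper solves Equation (15) with the MATLAB CVX package; as an illustration only, the toy sketch below reproduces the structure of the problem in SciPy, using the squared HE term, dropping the selection matrix $E$ (i.e., assuming every bin is occupied), and substituting SLSQP for a dedicated convex solver. All data here are synthetic, and the control-parameter scale is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

L = 16
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(L))                   # toy input-image PDF
P_hat = (L - 1) * P                             # right-hand side of Eq. (10)
T_bar = np.sort(rng.uniform(0, L - 1, L))       # toy PDM means t_k^c
sigma2 = rng.uniform(0.5, 2.0, L)               # toy PDM variances
R = np.eye(L) - np.eye(L, k=-1)                 # first-order difference matrix
alpha_t = 1e-2                                  # control parameter (toy scale)

def f0(T):
    # HE term ||R T - P_hat||^2 plus the Gaussian (PDM) penalty of Eq. (14)
    return np.sum((R @ T - P_hat) ** 2) + alpha_t * np.sum((T - T_bar) ** 2 / sigma2)

cons = [
    {"type": "ineq", "fun": lambda T: R @ T},           # R T >= 0 (monotone mapping)
    {"type": "eq",   "fun": lambda T: T[0]},            # t_0 = 0
    {"type": "eq",   "fun": lambda T: T[-1] - (L - 1)}, # t_{L-1} = L - 1
]
res = minimize(f0, x0=np.linspace(0, L - 1, L), constraints=cons, method="SLSQP")
T_opt = res.x   # monotone transformation balancing HE against the PDM prior
```

Growing `alpha_t` pulls the solution toward the PDM means `T_bar`, while shrinking it toward the pure HE mapping, mirroring the trade-off that $\tilde{\alpha}$ controls in Equation (15).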

5. Experimental Results

In this section, we first describe the implementation details, then compare the proposed method against the HE and the PDM-based methods in an image enhancement context.

5.1. Experiment Settings

In this work, we employ the UIEB [25] dataset to test our method. UIEB is a real-world underwater image dataset consisting of 890 underwater images with corresponding high-quality reference images, covering haze-like, greenish, and blueish underwater images. We compare the proposed method against seven state-of-the-art underwater image restoration methods: Fusion-based [43], Retinex-based [44], IBLA [10], AGC [45], GDCP [9], BL-TM [40], and UWCNN [26]. We set $\tilde{\alpha} = 5 \times 10^{-5}$ and $r_0 = 0.1$ in our method for all experiments. For all the other methods, we follow their default settings to process the underwater images.
The performance of the compared methods is evaluated both qualitatively and quantitatively. We choose three prominent cases for the qualitative assessment, comprising haze-like, greenish, and blueish underwater images. For the quantitative evaluation, we adopt two no-reference quality metrics for assessing the quality of underwater images: Underwater Color Image-Quality Evaluation (UCIQE) [46] and the Natural Image Quality Evaluator (NIQE) [47]. UCIQE linearly combines the variation of chrominance, average saturation, and luminance contrast to measure underwater image quality; higher UCIQE values indicate better visual perceptual quality. NIQE measures deviations of the tested image from the regularities of natural scene statistics; lower NIQE values suggest better image quality.

5.2. Qualitative Assessment

We chose representative cases from the UIEB dataset, including haze-like, greenish, and blueish underwater images, to demonstrate visual comparisons of the enhancement results. The competing methods either fail to restore color distortions or produce low-contrast images. By contrast, the results generated by our method look natural and have better contrast.
Figure 3 demonstrates restoration results for a common haze-like blueish underwater image. Fusion-based [43], Retinex-based [44], AGC [45], and UWCNN [26] successfully dehaze and darken the image to increase its contrast; however, the blueish color cast remains. IBLA [10], in contrast, brightens the image and restores its color slightly. BL-TM [40] brightens the image further, making it even hazier. GDCP [9] performs poorly here, since it over-enhances the brightness of the image, making it look unnatural. The proposed method restores the image’s color and enhances its contrast, presenting a visually satisfying result.
Figure 4 shows enhanced results for another blueish underwater image, where most methods work well. Among the compared methods, UWCNN [26] appears to contribute little to the restoration, while GDCP [9] and BL-TM [40] distort the colors of the restored images. Fusion-based [43], Retinex-based [44], IBLA [10], AGC [45], and the proposed method all perform well in this case.
Figure 5 compares enhanced results on a haze-like blueish underwater image. As can be seen, the methods of [43,44,45] enhance the image’s contrast but do not work well on color restoration. UWCNN [26] makes the output darker but otherwise has little effect. IBLA [10] and GDCP [9] improve the contrast but introduce color distortions. BL-TM [40] corrects the color somewhat but fails to recover the contrast. The proposed method performs better in terms of both contrast and color balance.
Figure 6 and Figure 7 show comparisons of processed results on haze-like greenish underwater images. The Fusion-based [43] and Retinex-based [44] methods remove the greenish color cast, but the restored images look somewhat washed out and unnatural. IBLA [10], AGC [45], and UWCNN [26] fail to restore the color of the image. GDCP [9] overcorrects the input image, adding too much red. BL-TM [40] again corrects the color but does not perform well on contrast. The proposed method visually outperforms the other methods.
As can be seen from the above results, most methods achieve color restoration or contrast enhancement to some extent. The proposed method effectively removes the haze and possible color casts and improves the contrast without over-saturation or over-enhancement. Notably, we also report the objective scores for each restored image, where our results achieve the best UCIQE/NIQE scores.

5.3. Quantitative Assessment

Table 1 displays objective comparisons among the methods on the UIEB dataset [25] based on UCIQE [46] and NIQE [47], where the proposed method performs favorably against the other state-of-the-art approaches.
To sum up, by solving the convex problem formulated in Equation (15) with an appropriate control parameter $\tilde{\alpha}$, the proposed method effectively combines HE and PDM to produce satisfying enhancement results.

5.4. Runtime Assessment

We compared the average runtime of all methods on the UIEB dataset [25], shown in Table 2. As can be seen, AGC [45] and the Retinex-based method [44] run faster than the others, while IBLA [10] is the slowest. Our proposed method takes 4.8 s on average.

5.5. Application to Feature Matching

Figure 8 shows an example of feature-point matching. We match an underwater image with its rotated version using SIFT [48] to demonstrate how image enhancement improves image quality and increases the number of matched feature points. Table 3 lists the number of matched feature points obtained with different enhancement methods, where our method yields more matched key points.

6. Conclusions

We proposed to enhance underwater images using a transformation function derived from a convex combination of histogram equalization and physics-based dichromatic modeling. The proposed method generates visually pleasing results with restored color and better contrast, and outperforms state-of-the-art underwater image restoration approaches in both the qualitative and quantitative experiments.

Author Contributions

Conceptualization, Y.-T.P.; methodology, Y.-T.P.; software, Y.-T.P.; validation, Y.-R.C.; formal analysis, Y.-R.C.; investigation, Y.-R.C. and Z.C.; resources, S.-C.H.; data curation, Y.-R.C.; writing—original draft preparation, Y.-T.P. and Y.-R.C.; writing—review and editing, Y.-T.P., J.-H.W. and S.-C.H.; visualization, Y.-R.C.; supervision, S.-C.H.; project administration, Y.-T.P.; funding acquisition, S.-C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported in part by the Ministry of Science and Technology, Taiwan under Grants MOST 110-2221-E-004-010, 110-2622-E-004-001, 109-2622-E-004-002, 110-2634-F-019-001, 110-2221-E-019-062, and 110-2622-E-019-006.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Narasimhan, S.G.; Nayar, S.K. Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 713–724. [Google Scholar] [CrossRef] [Green Version]
  2. Lee, Y.; Kim, J.; Kyung, C.M. Energy-aware video encoding for image quality improvement in battery-operated surveillance camera. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2011, 20, 310–318. [Google Scholar] [CrossRef]
  3. Peng, Y.T.; Zhao, X.; Cosman, P.C. Single underwater image enhancement using depth estimation based on blurriness. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015. [Google Scholar]
  4. Hummel, R. Image Enhancement by Histogram Transformation. Unknown. 1975. Available online: https://ui.adsabs.harvard.edu/abs/1975ieht.rept.....H/abstract (accessed on 20 January 2022).
  5. Chiang, J.Y.; Chen, Y.C. Underwater image enhancement by wavelength compensation and dehazing. IEEE Trans. Image Process. 2011, 21, 1756–1769. [Google Scholar] [CrossRef] [PubMed]
  6. Wen, H.; Tian, Y.; Huang, T.; Gao, W. Single underwater image enhancement with a new optical model. In Proceedings of the IEEE International Symposium on Circuits and Systems, Beijing, China, 19–23 May 2013. [Google Scholar]
  7. Drews, P.L.; Nascimento, E.R.; Botelho, S.S.; Campos, M.F.M. Underwater depth estimation and image restoration based on single images. IEEE Comput. Graph. Appl. 2016, 36, 24–35. [Google Scholar] [CrossRef]
  8. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar]
  9. Peng, Y.T.; Cao, K.; Cosman, P.C. Generalization of the dark channel prior for single image restoration. IEEE Trans. Image Process. 2018, 27, 2856–2868. [Google Scholar] [CrossRef]
10. Peng, Y.T.; Cosman, P.C. Underwater image restoration based on image blurriness and light absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594.
11. Carlevaris-Bianco, N.; Mohan, A.; Eustice, R.M. Initial results in underwater single image dehazing. In Proceedings of the OCEANS 2010 MTS/IEEE SEATTLE, Seattle, WA, USA, 20–23 September 2010.
12. Yang, H.Y.; Chen, P.Y.; Huang, C.C.; Zhuang, Y.Z.; Shiau, Y.H. Low complexity underwater image enhancement based on dark channel prior. In Proceedings of the 2011 Second International Conference on Innovations in Bio-Inspired Computing and Applications, Shenzhen, China, 16–18 December 2011.
13. Drews, P.; Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission estimation in underwater single images. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), Sydney, NSW, Australia, 2–8 December 2013.
14. Galdran, A.; Pardo, D.; Picón, A.; Alvarez-Gila, A. Automatic red-channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145.
15. Zhao, X.; Jin, T.; Qu, S. Deriving inherent optical properties from background color and underwater image enhancement. Ocean Eng. 2015, 94, 163–172.
16. Emberton, S.; Chittka, L.; Cavallaro, A. Hierarchical rank-based veiling light estimation for underwater dehazing. 2015. Available online: http://chittkalab.sbcs.qmul.ac.uk/2015/Emberton%20et%20al%202015%20BMVC.pdf (accessed on 20 January 2022).
17. Cao, K.; Peng, Y.T.; Cosman, P.C. Underwater image restoration using deep networks to estimate background light and scene depth. In Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), Las Vegas, NV, USA, 8–10 April 2018.
18. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307.
19. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
20. Pan, J.; Sun, D.; Pfister, H.; Yang, M.H. Deblurring images via dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 2315–2328.
21. Mastan, I.D.; Raman, S. Multi-level encoder-decoder architectures for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–20 June 2019.
22. Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
23. Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot. Autom. Lett. 2017, 3, 387–394.
24. Guo, Y.; Li, H.; Zhuang, P. Underwater image enhancement using a multiscale dense generative adversarial network. IEEE J. Ocean. Eng. 2019, 5, 862–870.
25. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389.
26. Li, C.; Anwar, S.; Porikli, F. Underwater scene prior inspired deep underwater image and video enhancement. Pattern Recognit. 2020, 98, 107038.
27. Kim, Y.T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Trans. Consum. Electron. 1997, 43, 1–8.
28. Wang, Y.; Chen, Q.; Zhang, B. Image enhancement based on equal area dualistic sub-image histogram equalization method. IEEE Trans. Consum. Electron. 1999, 45, 68–75.
29. Wongsritong, K.; Kittayaruasiriwat, K.; Cheevasuvit, F.; Dejhan, K.; Somboonkaew, A. Contrast enhancement using multipeak histogram equalization with brightness preserving. In Proceedings of the 1998 IEEE Asia-Pacific Conference on Circuits and Systems (APCCAS), Chiang Mai, Thailand, 24–27 November 1998.
30. Chen, S.D.; Ramli, A.R. Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation. IEEE Trans. Consum. Electron. 2003, 49, 1301–1309.
31. Sim, K.S.; Tso, C.P.; Tan, Y.Y. Recursive sub-image histogram equalization applied to gray scale images. Pattern Recognit. Lett. 2007, 28, 1209–1221.
32. Yang, S.; Oh, J.H.; Park, Y. Contrast enhancement using histogram equalization with bin underflow and bin overflow. In Proceedings of the 2003 International Conference on Image Processing (ICIP), Barcelona, Spain, 14–17 September 2003.
33. Kim, T.; Paik, J. Adaptive contrast enhancement using gain-controllable clipped histogram equalization. IEEE Trans. Consum. Electron. 2008, 54, 1803–1810.
34. Ooi, C.H.; Kong, N.S.P.; Ibrahim, H. Bi-histogram equalization with a plateau limit for digital image enhancement. IEEE Trans. Consum. Electron. 2009, 55, 2072–2080.
35. Huang, S.C.; Cheng, F.C.; Chiu, Y.S. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans. Image Process. 2012, 22, 1032–1041.
36. Gonzalez, R.C.; Woods, R.E.; Masters, B.R. Digital Image Processing; Prentice Hall Press: Hoboken, NJ, USA, 2009.
37. Celik, T. Two-dimensional histogram equalization and contrast enhancement. Pattern Recognit. 2012, 45, 3810–3824.
38. Celik, T.; Tjahjadi, T. Contextual and variational contrast enhancement. IEEE Trans. Image Process. 2011, 20, 3431–3441.
39. Song, W.; Wang, Y.; Huang, D.; Tjondronegoro, D. A rapid scene depth estimation model based on underwater light attenuation prior for underwater image restoration. In Proceedings of the Pacific Rim Conference on Multimedia, Hefei, China, 21–22 September 2018.
40. Song, W.; Wang, Y.; Huang, D.; Liotta, A.; Perra, C. Enhancement of underwater images with statistical model of background light and optimization of transmission map. IEEE Trans. Broadcast. 2020, 66, 153–169.
41. Grant, M.; Boyd, S.; Ye, Y. CVX: Matlab Software for Disciplined Convex Programming, Version 2.0 Beta. 2013. Available online: http://cvxr.com/cvx/citing/ (accessed on 20 January 2022).
42. Grant, M.C.; Boyd, S.P. Graph implementations for nonsmooth convex programs. In Recent Advances in Learning and Control; Springer: Berlin/Heidelberg, Germany, 2008.
43. Ancuti, C.; Ancuti, C.O.; Haber, T.; Bekaert, P. Enhancing underwater images and videos by fusion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012.
44. Fu, X.; Zhuang, P.; Huang, Y.; Liao, Y.; Zhang, X.P.; Ding, X. A Retinex-based enhancing approach for single underwater image. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014.
45. Yin, J.L.; Chen, B.H.; Peng, Y.T.; Lin, Y.C. Color shifting-aware image dehazing. In Proceedings of the IEEE International Symposium on Multimedia (ISM), San Diego, CA, USA, 9–11 December 2019.
46. Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071.
47. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a "completely blind" image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
48. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
Figure 1. A comparison of underwater image enhancement results. (a) Underwater images, and the enhanced images obtained using (b) HE [4] and (c) the method of ref. [9].
Figure 2. The overall framework of the proposed method.
Figure 3. (a) Dark bluish underwater image, and its restored/enhanced results obtained using (b) Fusion-based [43], (c) Retinex-based [44], (d) IBLA [10], (e) AGC [45], (f) GDCP [9], (g) UWCNN [26], (h) BL-TM [40], and (i) the proposed method. The numbers in the top-left corner of each image are its UCIQE/NIQE scores.
Figure 4. (a) Bright bluish underwater image, and its restored/enhanced results obtained using (b) Fusion-based [43], (c) Retinex-based [44], (d) IBLA [10], (e) AGC [45], (f) GDCP [9], (g) UWCNN [26], (h) BL-TM [40], and (i) the proposed method. The numbers in the top-left corner of each image are its UCIQE/NIQE scores.
Figure 5. (a) Hazy and bluish underwater image, and its restored/enhanced results obtained using (b) Fusion-based [43], (c) Retinex-based [44], (d) IBLA [10], (e) AGC [45], (f) GDCP [9], (g) UWCNN [26], (h) BL-TM [40], and (i) the proposed method. The numbers in the top-left corner of each image are its UCIQE/NIQE scores.
Figure 6. (a) Greenish underwater image with a diver and ribbons, and its restored/enhanced results obtained using (b) Fusion-based [43], (c) Retinex-based [44], (d) IBLA [10], (e) AGC [45], (f) GDCP [9], (g) UWCNN [26], (h) BL-TM [40], and (i) the proposed method. The numbers in the top-left corner of each image are its UCIQE/NIQE scores.
Figure 7. (a) Greenish underwater image, and its restored/enhanced results obtained using (b) Fusion-based [43], (c) Retinex-based [44], (d) IBLA [10], (e) AGC [45], (f) GDCP [9], (g) UWCNN [26], (h) BL-TM [40], and (i) the proposed method. The numbers in the top-left corner of each image are its UCIQE/NIQE scores.
Figure 8. An example of feature-point matching. We match an underwater image with its rotated version using SIFT [48] to demonstrate how image enhancement improves image quality and increases the number of matched feature points. (a) Original underwater image pair, and its enhanced versions obtained using (b) Fusion-based [43], (c) Retinex-based [44], (d) IBLA [10], (e) AGC [45], (f) GDCP [9], (g) UWCNN [26], (h) BL-TM [40], and (i) the proposed method.
Table 1. The UCIQE [46] and NIQE [47] scores for all the compared methods on the UIEB dataset [25]. The best scores are in bold.
Methods              UCIQE ↑   NIQE ↓
Input                  0.52     4.31
Fusion-based [43]      0.59     3.85
Retinex-based [44]     0.60     4.09
IBLA [10]              0.60     3.90
AGC [45]               0.62     4.04
GDCP [9]               0.60     3.88
UWCNN [26]             0.52     4.20
BL-TM [40]             0.62     4.00
Proposed               0.64     3.79
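The UCIQE metric used in Table 1 is a weighted sum of an image's chroma standard deviation, luminance contrast, and mean saturation. The NumPy sketch below illustrates one common reading of this computation; the coefficient values and the chroma-over-lightness saturation term are assumptions taken from typical descriptions of [46], and the Lab channels are left unnormalized, so absolute scores may differ from the implementation behind Table 1.

```python
import numpy as np

# Coefficients commonly reported for UCIQE; treated here as assumptions.
C1, C2, C3 = 0.4680, 0.2745, 0.2576

def srgb_to_lab(rgb):
    """Convert an HxWx3 sRGB image in [0, 1] to CIELab (D65 white point)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Undo the sRGB gamma to get linear RGB.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])   # normalize by D65 white
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return L, a, b

def uciqe(rgb):
    """UCIQE = c1*sigma_chroma + c2*luminance_contrast + c3*mean_saturation."""
    L, a, b = srgb_to_lab(rgb)
    chroma = np.hypot(a, b)
    sigma_c = chroma.std()                              # chroma spread
    con_l = np.percentile(L, 99) - np.percentile(L, 1)  # top/bottom 1% spread
    mu_s = np.mean(chroma / np.maximum(L, 1e-6))        # saturation proxy
    return C1 * sigma_c + C2 * con_l + C3 * mu_s
```

A flat gray image scores near zero on all three terms, while a colorful high-contrast image scores higher, which matches the metric's higher-is-better direction in Table 1.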
Table 2. Average runtime in seconds on the UIEB dataset [25].
Method      Fusion   Retinex   IBLA    AGC    GDCP   UWCNN   BL-TM   Proposed
Runtime ↓    5.58     0.66     41.74   0.65   1.00    9.25   22.17     4.80
Table 3. Numbers of matching feature points in Figure 8.
Method           Input   Fusion   Retinex   IBLA   AGC   GDCP   UWCNN   BL-TM   Proposed
#Feature pts ↑    134     747      923      473    303    591    180     290     1005
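The counts in Table 3 come from matching SIFT [48] descriptors between each image and its rotated copy. Extracting SIFT keypoints needs an image-processing library, but the matching stage itself, Lowe's nearest/second-nearest ratio test, can be sketched in plain NumPy. The random 128-D descriptors below are stand-ins for real SIFT output (an assumption for illustration); the 0.8 distance-ratio threshold follows [48].

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Keep a match only if the nearest descriptor in desc_b is clearly
    closer than the second nearest (Lowe's ratio test)."""
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    order = np.argsort(d, axis=1)
    nearest, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = d[rows, nearest] < ratio * d[rows, second]
    return [(i, int(nearest[i])) for i in np.flatnonzero(keep)]

# Stand-in descriptors: desc_b is a slightly perturbed copy of desc_a,
# mimicking the image/rotated-image pair of Figure 8.
rng = np.random.default_rng(1)
desc_a = rng.random((50, 128))
desc_b = desc_a + rng.normal(0.0, 0.01, desc_a.shape)
matches = ratio_test_matches(desc_a, desc_b)
```

With this setup every descriptor's true counterpart is far closer than any distractor, so all 50 pairs survive the ratio test; on real underwater images, enhancement increases the number of surviving matches, which is what Table 3 quantifies.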
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Peng, Y.-T.; Chen, Y.-R.; Chen, Z.; Wang, J.-H.; Huang, S.-C. Underwater Image Enhancement Based on Histogram-Equalization Approximation Using Physics-Based Dichromatic Modeling. Sensors 2022, 22, 2168. https://doi.org/10.3390/s22062168
