Robust Chromatic Adaptation Based Color Correction Technology for Underwater Images

Featured Application: This work aims at recovering realistic colors in underwater images, which is very important for the detection and classification of underwater targets.

Abstract: Recovering correct, or at least realistic, colors of underwater scenes is a challenging issue for image processing due to the unknown imaging conditions, including the optical water type, scene location, illumination, and camera settings. Under the assumption that the illumination of the scene is uniform, a chromatic adaptation-based color correction technology is proposed in this paper to remove the color cast using a single underwater image without any other information. First, the underwater RGB image is linearized to make its pixel values proportional to the light intensities arriving at the pixels. Second, the illumination is estimated in a uniform chromaticity space based on the white-patch hypothesis. Third, the chromatic adaptation transform is implemented in the device-independent XYZ color space. Qualitative and quantitative evaluations both show that the proposed method outperforms the other tested methods in terms of color restoration, especially for images with severe color cast. The proposed method is simple yet effective and robust, which is helpful in obtaining the in-air appearance of underwater scenes.


Introduction
Underwater imaging is increasingly used in many important applications such as marine biology and archaeology, and underwater surveying and mapping [1]. However, captured underwater images are generally degraded by scattering and absorption. Color cast and contrast loss are the main consequences of the underwater imaging degradation process, and the goal of underwater image processing is to rectify the color cast and enhance visibility [2,3]. Many image formation model based (IFM-based) image restoration methods and IFM-free image enhancement methods have been proposed to achieve this goal [1,4]. The restoration methods try to reverse the underwater imaging process according to some priors, such as the dark channel prior (DCP) [5,6] and the scene depth map [3,7]. The enhancement methods use qualitative subjective criteria to produce more visually pleasing images [4]. Recently, many deep-learning-based methods have been developed for underwater image restoration and enhancement [3,8-12]. However, the deep-learning-based methods are hindered by the lack of large training datasets [12]. The contributions of this paper are as follows. First, a simple yet effective and robust color correction method for underwater images is proposed. Second, a comprehensive comparison between several typical white balance and color correction methods on underwater images is drawn. Third, several key issues of concern in the color correction of underwater images are discussed.

Related Work
Color correction is also known as white balance in image processing. Many popular existing automatic white balance algorithms are based on low-level image features due to their simple concept and surprising performance [23], such as the gray-world (GW) and white-patch (WP) methods. The GW method works under the assumption that, given an image with sufficient color variations, the average reflectance of a scene is achromatic [19,20]. The WP method, also known as the perfect reflector method, is based on the hypothesis that the brightest pixel in an image corresponds to an object point on a glossy or specular surface [20,24] which reflects the full range of light that it captures [18]. Consequently, the color of the brightest pixel is considered as the color of the light source, and the brightest pixel is assigned as the reference white point for white balance. The dynamic-threshold (DT) method [20] is another simple white balance method. It finds reference white points around the mean chromaticity coordinate and takes the ratios of the maximum luminance value to the mean RGB values of the white points as the channel gains.
Conventional white balance methods and histogram equalization methods are often directly used to correct the color cast of underwater images [2,7,25,26]. Considering the characteristics of underwater imaging, many specialized methods have been proposed. To weaken the reddish cast produced by GW, an adaptive GW was developed by merging the global and local averages [27]. Two variants of the shades of grey (SG) method [28,29] have also been adopted to efficiently implement color correction. Henke et al. [30] used a binary depth map, obtained by applying DCP [5] to the GB channels, to estimate multiple channel gains for different regions in an underwater image. Ancuti et al. [31,32] blended the raw image with its color transfer image according to a weight map reflecting the desired level of correction. Li et al. [33] developed a learning-based weakly supervised color transfer model. Liu et al. [34] corrected the regional or full-extent color deviation via frequency-based color-tone estimation. Liu et al. [35] applied a local surface reflectance statistical prior to the Retinex image formation model. A statistical-based (SB) method [21,36] is applied to compress the outliers and expand the mid-range, which has the effect of image enhancement due to the expanded dynamic range.
Most of these methods process the RGB color channels separately, which is prone to introducing artifacts, especially in underwater images, due to the uneven attenuation of the three channels. Implementing color correction in a color space other than RGB is a good solution to this problem. Bianco et al. [37] applied the GW hypothesis in the Ruderman opponent color space Lαβ to correct the color of underwater images. Emberton et al. [38] proposed a chromatic adaptation based water-type dependent white balance (WTDWB) method which divides underwater images into three categories: blue-dominated, turquoise, and green-dominated images. The illuminations of the turquoise and green-dominated images are estimated by applying WP and GW, respectively, on the nonlinear RGB images. For blue-dominated images, white balance is not applied, to inhibit the introduction of artifacts [38].

Method
The proposed method is based on the chromatic adaptation theory and the WP hypothesis. The brightest p percent of pixels are determined based on their luminance values. The average chromaticity coordinate of these brightest pixels is assigned as the color of the reference white point. To get an accurate average chromaticity, the chromaticity coordinates are calculated in the uniform chromaticity space CIE 1960 UCS. The Von Kries theory [39] is then applied to implement the color correction in the CIE 1931 XYZ space. The flow of the proposed method is shown in Figure 2, and the details are as follows.

Chromatic Adaptation Theory
Based on the Von Kries theory [39], the tristimulus values [X_s Y_s Z_s]^T under the source illumination can be mapped to the tristimulus values [X_d Y_d Z_d]^T under the destination illumination as follows:

[X_d Y_d Z_d]^T = M^(-1) diag(α, β, γ) M [X_s Y_s Z_s]^T  (1)

where M is the matrix transforming XYZ tristimulus values into cone-like response values [16]. The Bradford transform [40,41] is adopted in this paper, and the matrix M is:

M = |  0.8951   0.2664  -0.1614 |
    | -0.7502   1.7135   0.0367 |
    |  0.0389  -0.0685   1.0296 |  (2)

In Equation (1), α, β, and γ are defined as the ratios of the cone-like responses of the destination white point [X_wd Y_wd Z_wd]^T to those of the source white point [X_ws Y_ws Z_ws]^T, where for each white point

[R_w G_w B_w]^T = M [X_w Y_w Z_w]^T  (3)

α = R_wd / R_ws,  β = G_wd / G_ws,  γ = B_wd / B_ws  (4)
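As a concrete illustration, Equations (1)-(4) can be implemented in a few lines of NumPy; the sample white points below are hypothetical, and D65 stands in for the canonical destination illuminant:

```python
import numpy as np

# Bradford cone response matrix M of Equation (2)
M = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def chromatic_adaptation(xyz, white_src, white_dst):
    """Map XYZ values under the source illuminant to XYZ values under
    the destination illuminant (Von Kries / Bradford, Equation (1))."""
    rgb_ws = M @ white_src            # cone-like responses of source white
    rgb_wd = M @ white_dst            # cone-like responses of destination white
    gains = np.diag(rgb_wd / rgb_ws)  # diag(alpha, beta, gamma), Equation (4)
    A = np.linalg.inv(M) @ gains @ M  # full adaptation matrix
    return xyz @ A.T

# Hypothetical bluish underwater white point, adapted to D65
d65 = np.array([0.95047, 1.0, 1.08883])
bluish = np.array([0.85, 1.0, 1.45])
print(chromatic_adaptation(np.array([[0.3, 0.4, 0.5]]), bluish, d65))
```

By construction, adapting the source white point itself yields the destination white point, which makes a convenient sanity check.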

Transform from sRGB to XYZ
As described above, if the XYZ values of each pixel in an image and the XYZ values of the white point under the scene illumination are known, the XYZ values of the image under a known canonical illuminant can be obtained. For different RGB color spaces, such as sRGB, Adobe RGB, and ProPhoto RGB, the transforms from the RGB space to the XYZ space differ due to different reference primaries and illuminations [42]. Given an RGB image, it is difficult to judge the RGB color space in which the image is encoded, since the color space information recorded in the exchangeable image file format header can be changed or lost after post-processing with image editing software [43]. A learning-based method could be used to identify the color space of an arbitrary RGB image in subsequent research. Here, we simply assume that the images are encoded in the sRGB color space, since sRGB is representative of the majority of devices on which color is and will be viewed [15]. For an image encoded in the sRGB color space, the CIE 1931 XYZ values can be computed from the RGB values in three steps [44]. First, the digital code values [R_nbit G_nbit B_nbit] are converted to the nominal range [0, 1] following Equation (5). Since the same operation is performed on all three channels, the notation V is used to represent any of them.
where n represents the number of bits per channel; for example, n = 8 for 24-bit RGB images. Second, V_sRGB is linearized (de-gamma correction) as follows:

V_linear = V_sRGB / 12.92,                   V_sRGB ≤ 0.04045
V_linear = ((V_sRGB + 0.055) / 1.055)^2.4,   V_sRGB > 0.04045  (6)

Third, the linear RGB values are transformed to the CIE 1931 XYZ values by the sRGB-to-XYZ matrix under the D65 reference white, denoted Equation (7).
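A minimal NumPy sketch of these three steps, assuming the standard D65 sRGB-to-XYZ matrix [44]:

```python
import numpy as np

def srgb_to_linear(v):
    """Equation (6): invert the sRGB transfer function (de-gamma)."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

# Linear sRGB -> CIE 1931 XYZ under the D65 reference white
M_SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def srgb_image_to_xyz(img, n_bits=8):
    """Convert an n-bit sRGB image of shape (H, W, 3) to XYZ values."""
    v = img.astype(float) / (2 ** n_bits - 1)  # Equation (5): nominal [0, 1]
    lin = srgb_to_linear(v)                    # Equation (6): de-gamma
    return lin @ M_SRGB_TO_XYZ.T               # Equation (7): matrix transform
```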

Color Correction
To estimate the white point under the scene illumination, the XYZ values of the image are first converted to the uniform chromaticity space CIE 1960 UCS by Equation (8) [45]:

u = 4X / (X + 15Y + 3Z),  v = 6Y / (X + 15Y + 3Z)  (8)

The average chromaticity coordinate of the brightest p percent of pixels, ranked by the Y values representing brightness, is computed and denoted by (u, v). It is taken as the chromaticity coordinate of the white point under the scene illumination and converted back to the XYZ space by Equation (9):

X = 3u / (2v) · Y,  Z = (4 − u − 10v) / (2v) · Y  (9)

The chromatic adaptation transform is then applied by Equation (1), and the RGB values are obtained through the reverse relationships of Equations (5)-(7).
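The white-point estimation step can be sketched as follows; `estimate_white_point` is a hypothetical helper name, and the inverse relation of Equation (9) is applied with Y fixed to 1:

```python
import numpy as np

def xyz_to_uv(xyz):
    """Equation (8): CIE 1960 UCS chromaticity coordinates."""
    X, Y, Z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 6.0 * Y / d

def uv_to_xyz(u, v, Y=1.0):
    """Equation (9): invert Equation (8) for a given luminance Y."""
    X = 3.0 * u / (2.0 * v) * Y
    Z = (4.0 - u - 10.0 * v) / (2.0 * v) * Y
    return np.array([X, Y, Z])

def estimate_white_point(xyz_img, p=10):
    """Average UCS chromaticity of the brightest p percent of pixels
    (ranked by Y), returned as an XYZ white point with Y = 1."""
    flat = xyz_img.reshape(-1, 3)
    k = max(1, int(len(flat) * p / 100))
    brightest = flat[np.argsort(flat[:, 1])[-k:]]
    u, v = xyz_to_uv(brightest)
    return uv_to_xyz(u.mean(), v.mean())
```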

Results
Each of the compared methods is described briefly in turn. In the GW method, the ratios of half the RGB values of the white point under the canonical illumination to the mean RGB values of the whole image are taken as the channel gains. The WP method takes as channel gains the ratios of the RGB values of the white point under the canonical illumination to the mean RGB values of the brightest 10% of pixels, ranked by the sum of the RGB values. The SB method was coded exactly following the paper [21] with µ fixed at 2.3. We reproduced the color correction method proposed by Bianco et al. [37] without implementing the contrast improvement. In our method, p is set to 10.
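For reproducibility, the GW and WP baselines as described above can be sketched as follows (the canonical white defaults to a hypothetical unit white):

```python
import numpy as np

def gray_world_gains(img, white_rgb=(1.0, 1.0, 1.0)):
    """GW gains: half the canonical white RGB over the per-channel means."""
    means = img.reshape(-1, 3).mean(axis=0)
    return 0.5 * np.asarray(white_rgb) / means

def white_patch_gains(img, white_rgb=(1.0, 1.0, 1.0), p=10):
    """WP gains: canonical white RGB over the mean RGB of the brightest
    p percent of pixels, ranked by R + G + B."""
    flat = img.reshape(-1, 3)
    k = max(1, int(len(flat) * p / 100))
    brightest = flat[np.argsort(flat.sum(axis=1))[-k:]]
    return np.asarray(white_rgb) / brightest.mean(axis=0)
```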
Evaluating a restored image by comparing it with the ground truth is hard to achieve, since it is difficult, if not impossible, to obtain medium-free in situ images [46]. Restored images are usually evaluated subjectively, objectively based on color cards, or objectively based on no-reference image quality metrics. However, the reliability of the existing no-reference image quality metrics is in doubt, since they often produce results that are inconsistent with visual observation [1,8,46]. Therefore, only subjective assessment and, for images from the Haze-line dataset, color-card-based objective assessment are adopted in this paper.

Evaluation on UIEB Dataset
The UIEB dataset [8] is used first to test the proposed method. In the UIEB dataset, the reference images were manually selected by 50 volunteers from potential reference images generated by 12 image enhancement methods [8]. Following [8], the proposed method is also tested on five categories of images: greenish and bluish images, downward-looking images, forward-looking images, low-backscatter scenes, and high-backscatter scenes. The results of the different methods and the corresponding references are shown in Figures 3-7. Zoom in on the images to see more details.
It can be seen from Figures 4 and 5 that the proposed method does not show obvious advantages on images with little color cast, which is consistent with expectations. From Figure 6, we can see that the proposed method corrects the color cast well, though the restored image still does not look very good due to the unimproved contrast.
The restored versions of all 890 images with references in the UIEB dataset are compared across the test methods. We found that the GW, WP, and SB [21] methods often cause artifacts, especially in the dark regions of images with severe color cast, as shown in Figure 8. Bianco's method [37] often over-grays the images, compressing or losing color, as shown in Figures 4, 5 and 9. The proposed method consistently outperforms the other methods on images with severe color cast. Many restored images obtained by the proposed method look even more natural than the reference images, as shown in Figure 10.

Evaluation on Haze-line Dataset
The results on the Haze-line dataset [13] are shown in Figure 11. The results of the haze-lines method [13] were generated using code released by its authors. For evaluation, the mean angular error between the six grayscale patches of each chart and a pure gray color was computed, as done in Reference [13]: for each grayscale patch with mean RGB vector x, the angular error is the angle between x and the gray axis [1 1 1]^T, i.e., arccos((x · 1) / (‖x‖ √3)), averaged over the six patches.

Figure 11. Comparisons on images in the Haze-line dataset [13]. From left to right: raw underwater images, the results of the gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco's (GWLαβ) method [37], the proposed method, and the haze-lines method [13].

A lower angular error indicates a more accurate color restoration. Many images in the Haze-line dataset contain multiple color charts at different distances from the camera. The charts are ordered by their distance to the camera, with chart #1 closest to the camera. The angular errors for each chart, image, and method are shown in Table 1. As is evident from Table 1, the lowest average angular error is obtained by our method.

Table 1. Angular error (unit: degrees) in RGB space between the grayscale patches and a pure gray color, for each chart in each image, and all methods. Lower is better.
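The angular error reported in Table 1 measures the angle between a patch's mean RGB vector and the gray axis [1 1 1]^T; a minimal sketch (the helper names are illustrative):

```python
import numpy as np

def angular_error_deg(rgb):
    """Angle in degrees between an RGB vector and the gray axis (1, 1, 1)."""
    rgb = np.asarray(rgb, dtype=float)
    gray = np.ones(3) / np.sqrt(3.0)
    cos = rgb @ gray / np.linalg.norm(rgb)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def chart_error(patches):
    """Mean angular error over the six grayscale patches of one chart."""
    return float(np.mean([angular_error_deg(p) for p in patches]))
```

A perfectly corrected gray patch has an angular error of zero; any residual color cast tilts the patch vector away from the gray axis.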

Evaluation on Sea-Thru Dataset
Since the Sea-thru dataset [7] only provides unprocessed sensor data stored in RAW photo formats, Imaging Edge Desktop and Adobe Photoshop Lightroom were used to convert the ARW files to TIF and JPG files without any post-processing. No reference images are provided in the Sea-thru dataset [7], so only the results obtained by the test methods are given in Figure 12. It can be seen that the proposed method removes the color cast well and generates more colorful and realistic images than Bianco's method [37].

Figure 12. Comparisons on images in the Sea-thru dataset [7]. From left to right: raw underwater images, the results of the gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco's (GWLαβ) method [37], and the proposed method.


Artifacts Caused by Over-Enhancement
Conventional color correction methods which calculate a gain for each channel in RGB space, such as the GW, WP, SB [21], and DT [20] methods, are prone to over-enhancing the severely attenuated channel, resulting in obvious artifacts. The pixels with nonzero values in the severely attenuated channel tend to be over-amplified, as shown in Figures 13 and 14. The methods which implement the color correction in other color spaces, such as Bianco's method [37], Emberton's method [38], and the proposed method, do not produce obvious artifacts. Performing color correction outside the RGB space is therefore a good way to avoid such artifacts.

Linearization of RGB Images
Linear RGB images are generally nonlinearized (gamma corrected) in the image signal processing pipeline embedded in a camera's hardware [47] to adapt to the output characteristics of monitors. It is therefore important to correct the nonlinearity of RGB images (de-gamma correction) to ensure that the color correction is implemented in a linear RGB coordinate system [37]. For verification, the linearization step was removed from the flow of the proposed method, and several typical results are compared in Figure 15. We can see that, without linearization, artifacts are also produced in some images and the performance of the color correction becomes slightly worse. However, the artifacts are slight compared with those of the conventional color correction methods, such as GW, WP, and SB [21].


Underwater Image Enhancement
To get in-air images of underwater scenes by image enhancement, color correction methods should be combined with contrast improvement methods. Figure 16 shows several enhanced images obtained by applying a simple contrast improvement method, contrast limited adaptive histogram equalization (CLAHE) [48], after the proposed color correction. It can be seen that the color cast and contrast are both significantly improved. However, most existing contrast improvement methods are designed to deal with a specific problem, and some of them also change the image color. It is still important to look for a more effective and robust underwater image enhancement method.
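CLAHE itself is provided by common libraries (e.g., OpenCV's cv2.createCLAHE). As a dependency-free illustration of coupling the color correction with a luminance-domain contrast step, the sketch below uses plain global histogram equalization, a simpler stand-in for CLAHE:

```python
import numpy as np

def equalize_luminance(img):
    """Global histogram equalization on the luminance channel, applied to
    a float RGB image in [0, 1]. A stand-in for CLAHE, not CLAHE itself."""
    lum = img.mean(axis=-1)
    hist, bins = np.histogram(lum, bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()          # monotone mapping of luminance
    lum_eq = np.interp(lum, bins[:-1], cdf)   # equalized luminance
    # Rescale each pixel's RGB to match its equalized luminance
    scale = np.where(lum > 0, lum_eq / np.maximum(lum, 1e-6), 0.0)
    return np.clip(img * scale[..., None], 0.0, 1.0)
```

A real pipeline would substitute CLAHE for the global equalization to limit over-amplification of noise in locally flat regions.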

Evaluation of Restored Images
Since inconsistency between the common no-reference image quality metrics and visual observation has been observed [1,8,46], color-card-based evaluation methods are more reliable. However, using only the grayscale patches is not accurate enough [7,13]. Taking advantage of the color patches on color charts could evaluate the color accuracy better [11].

Conclusions
A simple chromatic adaptation-based color correction method for underwater images is proposed in this paper. The underwater RGB image is first linearized to make its pixel values proportional to the light intensities arriving at the pixels, ensuring that the color correction is performed in a linear space. The illumination is estimated in the uniform chromaticity space CIE 1960 UCS using the brightest 10% of pixels, based on the WP hypothesis. The chromatic adaptation transform is implemented in the device-independent CIE 1931 XYZ color space. Experiments show that the proposed method is quite robust and produces visually pleasing results, while the other methods often introduce artifacts or over-enhancement. How to combine the color correction method with contrast improvement or model-based restoration methods to get in-air images of underwater scenes will be the focus of follow-up work.
