In the next two subsections, the subjective and objective results of the proposed method are presented.
3.3.1. Subjective Results
To demonstrate the subjective performance of the proposed Colour Constancy Adjustment by Fusion of Image Segments’ initial colour correction factors (CCAFIS) method, and to compare the quality of its colour-corrected images with those of state-of-the-art colour constancy techniques, two sample images from the Colour Checker and the UPenn Natural Image benchmark datasets are selected and colour-balanced using different colour correction methods.
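Throughout this section, “colour-balanced” refers to correcting an image with an estimated illuminant. A minimal sketch of such a correction, assuming a diagonal (von Kries) model with the illuminant normalised to sum to one (the normalisation and clipping conventions here are illustrative assumptions, not the compared methods’ actual implementations):

```python
import numpy as np

def colour_balance(image, illuminant_estimate):
    """Diagonal (von Kries) correction: divide each channel of an RGB image
    in [0, 1] by the estimated illuminant, preserving overall brightness."""
    illum = np.asarray(illuminant_estimate, dtype=float)
    illum = illum / illum.sum()          # normalise the estimate to sum to one
    corrected = image / (3.0 * illum)    # a neutral estimate leaves the image unchanged
    return np.clip(corrected, 0.0, 1.0)
```

With a neutral estimate such as [1, 1, 1] the image is returned unchanged, while a reddish estimate such as [2, 1, 1] attenuates the red channel to remove the corresponding cast.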
Figure 2 shows a sample image from the Colour Checker image dataset, its corresponding ground truth and its colour-balanced images produced by the Weighted Grey Edge [20], Corrected Moment [21], Cheng et al. [38] and the proposed CCAFIS techniques. The resulting images have been linearly gamma-corrected to improve their visual quality. From Figure 2a, it can be seen that the input image has a significant green colour cast and the scene is illuminated by multiple indoor and outdoor light sources. Figure 2b shows the ground truth of the image. Figure 2c shows the Weighted Grey Edge method’s image, which exhibits a slightly weaker green colour cast than the input image. Figure 2d illustrates the Corrected Moment method’s image, which exhibits a yellow-to-orange colour cast. Cheng et al.’s image is shown in Figure 2e; it suffers from a deep yellow-orange colour cast. The proposed CCAFIS method’s image, shown in Figure 2f, exhibits high colour constancy and appears closest to the ground truth image. The recovery angular errors are also displayed on the images; they show that the proposed method’s image has the lowest recovery angular error of all the methods, implying that the objective qualities of the images are consistent with their subjective qualities.
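The recovery angular error quoted on these figures is the angle between the estimated and the ground-truth illuminant vectors. As a point of reference, it can be computed as below (an illustrative sketch; the function name and the use of degrees are assumptions, not code from the paper):

```python
import numpy as np

def recovery_angular_error(estimated, ground_truth):
    """Angle, in degrees, between the estimated and the ground-truth
    illuminant vectors; invariant to the scale of either vector."""
    e = np.asarray(estimated, dtype=float)
    g = np.asarray(ground_truth, dtype=float)
    cos_angle = np.dot(e, g) / (np.linalg.norm(e) * np.linalg.norm(g))
    # Clip to guard against rounding pushing the cosine outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```

A perfect estimate, i.e., any positive scaling of the ground-truth illuminant, yields an error of 0°.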
Figure 3 shows a sample image from the UPenn dataset [49], its ground truth and its colour-balanced images produced by the Max-RGB, Shades of Grey, Grey Edge-1, Grey Edge-2, Weighted Grey Edge and the proposed CCAFIS methods. From Figure 3a, it can be noted that the input image exhibits a yellow colour cast, which is particularly visible on the tree’s green leaves and the colour chart. Figure 3b is its ground truth image. Figure 3c shows the Max-RGB method’s image, which has a slightly stronger yellow colour cast than the original input image. The Shades of Grey method’s image is shown in Figure 3d; it demonstrates a significantly stronger yellow colour cast than the original image, particularly on the tree’s green leaves. Figure 3e is the Grey Edge-1 method’s image, which also suffers from an increased colour cast on the tree’s green leaves and the colour chart. The Grey Edge-2 method’s image is shown in Figure 3f; it demonstrates slightly higher colour constancy than the original image, and the tree and the deciduous plants on the left side of the image have a slightly weaker colour cast. The Weighted Grey Edge method’s image is illustrated in Figure 3g; it appears very similar to the Grey Edge-1 method’s image shown in Figure 3e.
Figure 4 illustrates a sample image from the Grey Ball dataset with a yellow colour cast, its respective ground truth image and its colour-balanced images produced by the Edge-based gamut [23], Grey pixel [42], RCC-Net [29] and the proposed CCAFIS methods. From Figure 4c, it can be seen that the gamut-based method’s image exhibits a strong red colour cast. Figure 4d and Figure 4e are the images of the Grey pixel and the RCC-Net methods, respectively. These images demonstrate an improved colour balance relative to the input image; however, they still retain some yellow colour cast. The proposed CCAFIS method’s image (Figure 4f) appears to be a shot taken under canonical light, as the presence of the source illuminant is significantly reduced. Moreover, the median recovery angular error of the proposed technique’s image is the lowest among all the techniques’ images, which means it has the highest objective colour constancy.
Figure 5 illustrates a sample image from the Grey Ball image dataset [49], which has a yellow colour cast, and its colour-balanced images produced by the Edge-based gamut [23], Grey pixel [42], RCC-Net [29] and the proposed CCAFIS methods. From Figure 5c, it can be seen that the gamut-based method’s image exhibits both blue and reddish colour casts, and shows lower colour constancy than the Grey pixel and RCC-Net techniques’ images, shown in Figure 5d and Figure 5e, respectively. Nevertheless, the images of all three of these techniques still have a visible yellow colour cast. The proposed CCAFIS method’s image (Figure 5f) appears as if it were taken under a white illuminant. The recovery angular errors of the images are also calculated and displayed on the images; comparing them shows that the proposed method’s image has the lowest recovery angular error, i.e., the highest objective colour quality among the compared techniques’ images.
To give the reader a better understanding of the performance of the proposed CCAFIS method on images of scenes with spatially varying illuminant distributions, an image from the MLS image dataset [32] representing a scene lit by spatially varying illumination is taken and colour-corrected using the proposed technique, Grey Edge-2, Weighted Grey Edge and the Gijsenij et al. method. The original image, its ground truth and the resulting colour-corrected images are shown in Figure 6. From this figure, it can be noted that the proposed technique’s image exhibits the highest colour constancy. In addition, it has the lowest median angular error among all the techniques’ images, which implies that it has the highest objective quality.
To generate the Mean Opinion Score (MOS) for the images of the proposed and the state-of-the-art colour constancy methods, a set of images from the Grey Ball, the Colour Checker and the MIMO image datasets, which contain images of scenes lit by either single or multiple light sources, was chosen. The selected images were colour-corrected using the proposed CCAFIS method, as well as other state-of-the-art techniques, including multiple-illuminant methods such as Gijsenij et al. [32], MIRF [33] and Cheng et al. [38]. Ten independent observers subjectively evaluated the resulting colour-balanced images, scoring the colour constancy of each image from 1 to 5, where higher numbers correspond to increased colour constancy. The average MOS of each method’s images was then calculated and tabulated in Table 2. From Table 2, it can be noted that the proposed method’s images have the highest average MOS compared to the other techniques’ images, which implies that they have the highest subjective colour constancy.
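The MOS aggregation described above is simple averaging. A minimal sketch, assuming the ratings are stored as plain lists (the two-level averaging scheme is an assumption about how the per-method figures in Table 2 are formed):

```python
def image_mos(observer_scores):
    """MOS of one colour-balanced image: mean of the observers' 1-5 ratings."""
    return sum(observer_scores) / len(observer_scores)

def method_mos(per_image_scores):
    """Average MOS of a method: mean of the MOS values of its images."""
    return sum(image_mos(s) for s in per_image_scores) / len(per_image_scores)
```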
3.3.2. Objective Results
To evaluate the objective performance of the proposed method, the Grey world [16], Max-RGB [17], Grey Edge-1 [19], Grey Edge-2 [19], Gijsenij et al. [32], MIRF [33], ASM [41], Grey Pixel [42], Exemplar [43] and CNN+SVR [28] methods were used to colour balance the images of the MIMO [33], Grey Ball [49] and Colour Checker [48] image datasets, as well as 9 outdoor images of the Multiple light sources image dataset [32]. The average mean and median of both the recovery and reproduction angular errors of the colour-balanced images of the Grey Ball and Colour Checker datasets are tabulated in Table 3 and Table 4, respectively.
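The reproduction angular error reported in these tables measures the angle between a white surface as reproduced after correcting with the estimated illuminant and the ideal achromatic white. A sketch following the commonly used definition (the implementation details are assumptions, not taken from the paper):

```python
import numpy as np

def reproduction_angular_error(estimated, ground_truth):
    """Angle, in degrees, between a white surface reproduced after von Kries
    correction with the estimated illuminant and the ideal achromatic white."""
    e = np.asarray(estimated, dtype=float)
    g = np.asarray(ground_truth, dtype=float)
    reproduced_white = g / e                 # per-channel correction of the true white
    cos_angle = reproduced_white.sum() / (np.linalg.norm(reproduced_white) * np.sqrt(3.0))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```

Unlike the recovery error, this measure penalises an estimate by how far it leaves the corrected white from neutral, so a perfect estimate again scores 0°.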
From Table 3, the proposed technique’s images have the lowest average mean and median recovery and reproduction angular errors among all the statistics-based colour constancy methods, implying that the proposed technique outperforms the statistics- and gamut-based techniques with respect to objective colour constancy (the mean and median angular errors of the Deep Learning, Natural Image Statistics and Spectral Statistics methods were taken from [41]). When compared to the learning-based methods, the proposed technique’s average median angular error equals 4.0°, which is slightly higher than those of the learning-based methods. The proposed algorithm’s median reproduction angular error is 2.6°, which is the lowest among all techniques apart from ASM, with a median angular error of 2.3°. This demonstrates that the proposed method produces very competitive objective results compared to those of the learning-based methods.
According to Table 4, the proposed CCAFIS technique’s images have the lowest average mean and median recovery and reproduction angular errors among all the statistics-based colour constancy methods, which implies that the proposed technique outperforms the statistics- and gamut-based techniques with respect to objective colour constancy. With respect to the learning-based methods, the proposed technique’s average median angular error equals 2.7°, which is slightly higher than those of some of the learning-based methods (the mean and median angular errors of the AAC, HLVI BU, HLVI BU & TD, CCDL, EB, BDP, CM, FB+GM, PCL, SF, CCP, CCC, AlexNet + SVR, CNN per-patch, CNN average-pooling, CNN median-pooling and CNN fine-tuned methods have been taken from [28]). The proposed algorithm’s median reproduction angular error is 2.9°, which is the lowest among all techniques except the Exemplar-based method, with a median angular error of 2.6°. This demonstrates that the proposed method produces very competitive objective results compared to those of the learning-based methods.
The average mean and median recovery angular errors of the colour-balanced images from the MIMO image dataset for different techniques were computed and tabulated in Table 5, and the median angular errors for 9 outdoor images of the Multiple light sources dataset for different algorithms were determined and tabulated in Table 6.
From Table 5, it can be seen that the proposed CCAFIS method’s mean recovery angular error for the real-world images of the MIMO dataset is 4.2° (the mean and median recovery angular errors for the MLS + GW, MLS + WP, MIRF + GW, MIRF + WP and MIRF + IEbV methods have been taken from [28]). This is the same as the Grey World and MLS + GW mean recovery angular errors and slightly higher than that of the MIRF methods, which produced images having the smallest mean angular error of 4.1°. The median recovery angular error of the proposed CCAFIS method is 4.3°, which is the lowest among all of the statistics-based methods. For the laboratory images, the proposed CCAFIS method’s mean recovery angular error is 2.1° and its median recovery angular error is 2.7°, which are the lowest recovery angular errors of all the methods. This implies that the proposed CCAFIS method has the highest objective performance when dealing with the lab images of the MIMO dataset.
From Table 6, it can be noted that the proposed CCAFIS method’s images exhibit the lowest median recovery angular error among all the statistical and state-of-the-art techniques. This implies that the proposed CCAFIS method outperforms the other methods in adjusting the colour constancy of images taken from scenes illuminated by multiple light sources.