Bayer Image Demosaicking Using Eight-Directional Weights Based on the Gradient of Color Difference

In this paper, we propose a new demosaicking algorithm that uses eight-directional weights based on the gradient of color difference (EWGCD) for Bayer image demosaicking. To interpolate the green (G) pixels, eight-directional G values are first estimated at the red (R)/blue (B) pixels. These estimates are used to calculate the color difference at the R/B pixels of the Bayer image in the diagonal directions, while in the horizontal and vertical directions newly estimated G pixels are defined to obtain the color difference. The eight-directional weights of the estimated G pixels are obtained by considering both the gradient of the color difference and the gradient of the RGB pixels of the Bayer image. The eight-directional weights and the first estimated G pixel values are then combined to obtain the full G image. Compared with six similar algorithms on the same eighteen McMaster images, the experimental results demonstrate that the proposed algorithm performs better not only in subjective visual quality but also in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index measurements.


Introduction
Digital color cameras are widely used in daily life. In most cameras, a single image sensor (e.g., CCD or CMOS) captures images of target objects through a color filter array (CFA). Only one of the three color components, red (R), green (G), and blue (B), is sampled at each pixel location through the CFA; the result is a single-channel mosaic image of the scene. The most common CFA is the Bayer pattern, in which G is sampled twice as often as R or B, as shown in Figure 1. Interpolation algorithms for Bayer images are essential for obtaining full color images because they reconstruct the two missing color components at each pixel. This process is usually called demosaicking. Since high-quality interpolated images are always in demand, developing better demosaicking algorithms is extremely important.
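To make the CFA sampling concrete, the following sketch builds a single-channel Bayer mosaic from a full RGB image. The `bayer_mosaic` helper is hypothetical, and the RGGB tile is chosen for illustration only (the experiments in Section 4 use a B, G, G, R sequence):

```python
import numpy as np

def bayer_mosaic(rgb, pattern="RGGB"):
    """Sample one color per pixel from a full RGB image through a Bayer CFA.

    `pattern` gives the 2x2 tile; "RGGB" means rows alternate R G / G B.
    Returns the single-channel mosaic image.
    """
    chan = {"R": 0, "G": 1, "B": 2}
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    tile = [[pattern[0], pattern[1]], [pattern[2], pattern[3]]]
    for dy in range(2):
        for dx in range(2):
            c = chan[tile[dy][dx]]
            # Copy every second pixel of the matching channel.
            mosaic[dy::2, dx::2] = rgb[dy::2, dx::2, c]
    return mosaic

rgb = np.arange(4 * 4 * 3).reshape(4, 4, 3)
m = bayer_mosaic(rgb, "RGGB")
# Half of the mosaic samples come from the G channel, a quarter each from R and B.
```

Demosaicking is the inverse problem: recovering the two discarded channels at every pixel of `m`.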
Many recent studies have proposed demosaicking methods with the purpose of enhancing interpolation quality. One efficient method is linear interpolation [1], which assumes that color differences are constant over small regions. This method has low computational complexity and performs well in homogeneous regions; however, it causes severe color artifacts at image edges. To gain better interpolation performance along edges, some methods [2-5] exploit edge indicators to achieve edge-directed interpolation along estimated interpolation directions, demonstrating how important the edge indicator is for improving the quality of the interpolated image. Zhang and Wu [2] proposed a directional linear minimum mean square error (DLMMSE) estimation method, which adaptively estimates missing pixel values by the linear minimum mean square error technique in two directions. To improve performance at the structural level, a nonlocal adaptive thresholding method was introduced in [3]. On the basis of polynomial interpolation, Wu et al. [4] proposed an alternative indicator and an edge classifier to enhance interpolation accuracy. Chen and Chang [5] proposed an accurate edge detection method to minimize color artifacts at edges.
Furthermore, to achieve better interpolation along edges and to avoid errors in edge-direction estimation caused by a limited set of candidate directions, some algorithms add weights to the interpolation process. These algorithms estimate adaptive directions by evaluating the similarity among neighboring pixels. Chung and Chan [6] proposed integrated gradients (IG) to extract gradient information from intensity and color differences simultaneously. Pekkucuksen and Altunbasak [7] suggested an orientation-free edge strength filter, which applies the color difference adaptively to provide more edge information. They also proposed gradient-based threshold-free (GBTF) color filter array interpolation [8], which combines estimates from decoupled directions according to weights instead of making a hard direction decision. To reduce color artifacts, Wang et al. [9] introduced more elaborate weights that consider both neighborhood similarity and patch distance.
Most spatial interpolation algorithms assume that color ratios and color differences are constant over small regions, whereas residual interpolation (RI) [10], a novel demosaicking algorithm, works in the residual domain. This method, which defines the residual as the difference between the observed and tentatively estimated pixel values, first generates a sufficiently accurate tentative estimate of the R/B channel by using the guide filter [11], a powerful edge-preserving filter, and then applies GBTF [8]. The method has proven effective. Because a precise tentative estimate has a prominent impact on performance, Kiku et al. proposed minimized-Laplacian residual interpolation (MLRI) [12] to generate a more accurate tentative estimate and introduced a weighted average into the guide filter [13]. On the basis of RI, Ye and Ma [14] iteratively constructed a highly accurate G channel without attaching a direction to the interpolation. Kim and Jeong [15] exploited a joint gradient that utilizes information from four directions, and Monno et al. [16] proposed adaptive residual interpolation, which adaptively combines two RI-based algorithms and selects a suitable iteration number at each pixel. Wang and Jeon [17] introduced multi-directional weighted interpolation and guide filter (MDWI-GF), yielding a high-performance full color image.
Besides spatial-domain interpolation algorithms, several other methods have been proposed to boost the quality of interpolated images. Algorithms that exploit frequency-domain correlation provide excellent interpolation results. Prakash et al. [18] obtained luminance and chrominance information from the frequency domain and then estimated the missing components with the assistance of a neural network. Focusing on luminance images rather than color channels, Ji et al. [19] analyzed subband signals to build an aliasing map and then designed a cost function to remove residual aliasing artifacts. Wu et al. [20] developed a recovery process based on directional difference regression and fused regression, incorporating efficient regression priors, an efficient post-processing step that can be widely applied to most demosaicking methods. Noting the similarity between demosaicking and fusion, Yang et al. [21] performed image demosaicking and fusion simultaneously via compressive sensing theory.
In this paper, we propose an eight-directional weighted algorithm based on the gradient of the color difference (EWGCD) for the Bayer image. Residual interpolation is applied to obtain the R/B pixel values, using the full G image as the guide. The interpolation of the full G image therefore contributes significantly to the interpolation of the overall image. This approach is inspired by the work in [12,17]. After analyzing those experimental results, we interpolate the G values at the R/B pixels by first estimating the G values in eight directions, in contrast to many algorithms that only utilize two or four directions. Next, the color difference between the estimated and original values is obtained, and the gradient of the color difference is calculated. The gradient of the color difference and the gradient of the RGB image are combined to obtain the weights in the eight directions. The color information from eight directions is thus fully utilized to obtain the full G image. Minimized-Laplacian energy is applied to the interpolation of the R/B values.
The rest of this paper is organized as follows: Section 2 starts with a brief review of color difference interpolation, RI [10], Laplacian energy, and the G interpolation in GBTF [8] and MDWI-GF [17]. Section 3 presents the proposed interpolation method. Experimental results and analyses are given in Section 4. Conclusions and remarks on possible further work are given in Section 5.

Color Difference Interpolation and Residual Interpolation
Color difference interpolation and residual interpolation are common methods for interpolating the R/B planes. The two interpolations in the R plane are shown in Figure 2; the interpolation of the B plane is the same. The first method is color difference interpolation (CDI) [12], shown in Figure 2a. Firstly, color difference images are produced by subtracting the interpolated G pixel values from the sampled R pixels. Then, the color difference images are interpolated. Finally, to generate the full R image, the interpolated G values are added to the interpolated color difference image. Figure 2b shows the RI [10] process for the R plane. The part that differs from CDI [12] is the estimation of the R pixel values. Firstly, to get the full G image, the subsampled G components are interpolated. Secondly, the tentatively estimated R pixel values (R̃) are obtained by guide filtering the sampled R components with the full G image. Subtracting R̃ from the sampled R pixels (R − R̃) gives the residual values. Finally, the residual values are interpolated, and the full R image is recovered by combining the interpolated residual image with R̃.
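The CDI branch of Figure 2a can be illustrated with a one-dimensional sketch: the color difference R − G is formed at the sampled positions, interpolated over the whole line, and then G is added back. The `cdi_1d` helper and its linear interpolation are illustrative simplifications, not the paper's 2-D implementation:

```python
import numpy as np

def cdi_1d(r_sampled_pos, r_vals, g_full):
    """1-D sketch of color difference interpolation (CDI):
    interpolate (R - G) at the sampled R positions, then add G back."""
    n = len(g_full)
    diff = r_vals - g_full[r_sampled_pos]                      # R - G at samples
    full_diff = np.interp(np.arange(n), r_sampled_pos, diff)   # interpolate difference
    return g_full + full_diff                                  # reconstructed R channel

g = np.array([10., 12., 14., 16., 18.])      # full G line
pos = np.array([0, 2, 4])                    # positions where R is sampled
r = np.array([13., 15., 17.])                # observed R values (R - G = 3, 1, -1)
full_r = cdi_1d(pos, r, g)
```

Interpolating the slowly varying difference, rather than R itself, is what makes the constant-color-difference assumption pay off.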

Laplacian Energy
In Figure 2b, the guide filter is used to generate the tentative image, and the Laplacian energy is applied to calculate the parameters of the guide filter. Therefore, before the guide filter is introduced, the Laplacian energy is defined. The Laplacian energy is a measure of the smoothness of an image. The Laplacian map of an image is calculated as

∆I = L ⊗ I,

where ∆I is the Laplacian map, I is an image, L is the Laplacian kernel, and ⊗ is the convolution operation. The Laplacian energy of the image is calculated as

E(I) = (1/|Ω|) Σ_(i,j)∈Ω (∆I_i,j)²,

where E(I) is the Laplacian energy, (i, j) are the image pixel coordinates, Ω is the set of pixel coordinates of the whole image, ∆I_i,j is the Laplacian map at pixel (i, j), and |Ω| is the number of pixels in the whole image. In [12], it is shown that the interpolation performs better in a domain where the Laplacian energy is smaller. However, in the realistic demosaicking process, the exact Laplacian energy cannot be computed from the subsampled images, so an approximate Laplacian energy is used for the sampled images. The approximate Laplacian map ∆I is computed with a sparse Laplacian kernel that only combines pixels of the same sampled color, two positions apart. Therefore, the approximate Laplacian energy E(I) can be defined as

E(I) = (1/|ω|) Σ_(i,j)∈ω (∆I_i,j)²,

where ω represents the subsampled pixel indexes and |ω| is the number of subsampled pixels.
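As a sketch of this smoothness measure, the following computes the mean squared response of a Laplacian kernel over an image's interior. This is the full-resolution version; the approximate energy would restrict the sum to the sampled pixels and use a sparse kernel reaching two positions away:

```python
import numpy as np

LAPLACIAN = np.array([[0., 1., 0.],
                      [1., -4., 1.],
                      [0., 1., 0.]])

def laplacian_energy(img, kernel=LAPLACIAN):
    """Mean squared Laplacian response over the valid interior of `img`,
    a sketch of the smoothness measure E(I)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for dy in range(kh):                      # manual 'valid' convolution
        for dx in range(kw):
            out += kernel[dy, dx] * img[dy:dy + h - kh + 1, dx:dx + w - kw + 1]
    return np.mean(out ** 2)

flat = np.full((8, 8), 5.0)                           # constant image
ramp = np.outer(np.arange(8.), np.ones(8))            # linear ramp
noisy = flat + np.random.default_rng(0).normal(0, 1, (8, 8))
# Constant and linear images have zero Laplacian energy; noise raises it.
```

The ramp illustrates why the measure is useful: any locally linear (smooth) signal has zero energy, so low energy flags regions where interpolation is reliable.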
The Guide Filter in MLRI

RI [10] is applied to the interpolation of the R/B pixels in MLRI [12], in which the guide filter is the key point. The guide filter is a linear filtering process that takes a guide image as an additional input, which enables the output image to preserve the edges of the guide. After the G pixel values are interpolated, the full G image, the guide filter (shown in Figure 2b as a linear relation), and the minimized Laplacian energy are applied to obtain the tentative R/B pixel values in MLRI [12].
In [12], the interpolation of the R pixels is taken as an example; the B pixel values are obtained in the same manner. Since the guide image and the tentative R image have the linear relationship of Equation (5), when the guide image has edge-preserving capability, the tentative R image inherits it. The local window ω_p,q is a 5 × 5 window centered at pixel (p, q):

R̃_i,j = a_p,q G_i,j + b_p,q, ∀(i, j) ∈ ω_p,q,  (5)

where R̃_i,j is the estimated R value at pixel (i, j), G_i,j is the guide (interpolated G) image, and a_p,q and b_p,q are linear coefficients in ω_p,q. Furthermore, from the Laplacian energy, it is known that a smaller energy yields a better interpolation, so a_p,q is chosen to minimize the approximate Laplacian energy of the residual, giving Equation (6):

a_p,q = Σ_(i,j)∈ω_p,q M_i,j ∆G_i,j ∆R_i,j / Σ_(i,j)∈ω_p,q M_i,j (∆G_i,j)²,  (6)

where M_i,j is the mask, which is one for the sampled R pixels and zero for the others, R_i,j is the R value at pixel (i, j), and ∆G_i,j and ∆R_i,j are the approximate Laplacian maps of the guide image and of the sampled R image. In the same way, after the minimizing a_p,q is obtained, b_p,q is calculated as

b_p,q = (1/|ω_p,q|) Σ_(i,j)∈ω_p,q M_i,j (R_i,j − a_p,q G_i,j),  (7)

where |ω_p,q| is the number of pixels in ω_p,q. After the estimated R pixel values are obtained, according to RI [10], the estimated values are subtracted from the sampled R pixels in MLRI [12] to obtain the residual image. The residual image is bilinearly interpolated, and the sum of the tentative estimate and the interpolated residual image gives the final full R image.
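The per-window linear fit can be sketched as follows. Note that this stand-in uses a plain least-squares criterion over the sampled pixels, whereas MLRI minimizes the approximate Laplacian energy of the residual; `local_linear_fit` is a hypothetical helper:

```python
import numpy as np

def local_linear_fit(guide, target, mask, eps=1e-6):
    """Fit target ~= a * guide + b over the masked (sampled) pixels of one
    local window. Simplified stand-in for MLRI's per-window coefficients:
    plain least squares instead of the minimized-Laplacian criterion."""
    g = guide[mask]
    t = target[mask]
    var = g.var()
    a = ((g * t).mean() - g.mean() * t.mean()) / (var + eps)  # slope
    b = t.mean() - a * g.mean()                               # intercept
    return a, b

g = np.arange(25.).reshape(5, 5)            # guide window (full G values)
t = 2.0 * g + 3.0                           # target exactly linear in the guide
mask = np.zeros((5, 5), bool)
mask[::2, ::2] = True                       # only the sampled R positions
a, b = local_linear_fit(g, t, mask)
# a ~ 2, b ~ 3: the relation is recovered from the sampled pixels alone
```

Because the fit only sees the sampled pixels, the linear model transfers the guide's edges into the tentative estimate at the unsampled positions.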

The Outline of G Interpolation in GBTF and MDWI-GF
In [8], the process of interpolating the G pixels is shown in Figure 3. The interpolation steps are roughly as follows.
Firstly, to obtain the estimated G and R/B pixel values in the horizontal and vertical directions, the G and R/B channel information is used.
Then, the horizontal and vertical color difference values are obtained from the estimated pixel values and the color information in the Bayer image. The color difference gradient values are calculated from the color difference values. Furthermore, in a 5 × 5 window, the reciprocal of the sum of the color difference gradients in the horizontal and vertical directions is used as the weight in the north (N), south (S), west (W), and east (E) directions.
The final estimated difference values are derived by combining the weights, the filter vector f, and the color difference information. The G pixel value is calculated from the final estimated difference and the R/B pixel values. In [17], the interpolation process of the G pixels is shown in Figure 4. The gradient in the eight directions is acquired from the gradient information of R, G, and B in those directions, and the weight in each of the eight directions is the reciprocal of the gradient.
Finally, the G pixel values are obtained by combining the weights and the estimated values.
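The reciprocal-of-gradient weighting used by GBTF and MDWI-GF can be sketched as follows: directions with small gradients (likely along an edge) receive large weights, so their estimates dominate the combination. The gradient values are hypothetical:

```python
import numpy as np

def directional_weights(gradients, eps=1e-8):
    """Turn directional gradient magnitudes into normalized weights via the
    reciprocal-of-gradient idea: small gradient -> large weight."""
    w = 1.0 / (np.asarray(gradients, float) + eps)   # eps avoids division by zero
    return w / w.sum()                               # normalize to sum to one

# Hypothetical gradient magnitudes in N, S, W, E, NW, NE, SW, SE order.
grads = [0.5, 4.0, 4.0, 0.5, 8.0, 8.0, 8.0, 8.0]
w = directional_weights(grads)
# The two low-gradient directions (N and E here) dominate the weighted combination.
```

Normalizing the weights means the later combination step is a convex average of the directional estimates.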

The Outline of the Proposed Algorithm
In this section, the proposed algorithm is introduced. The interpolation of the G values at the R pixels is shown in Figure 5; the interpolation at the B pixels is the same. The algorithm builds on the work in [8,17], shown in Figures 3-5. In [17], the estimated values in eight directions are first obtained, the weights of the eight directions are derived from gradient information, and the G pixel value is then calculated. In [8], the horizontal and vertical color differences are calculated, the reciprocal of the gradient of the color difference is used as the weight, the final color difference estimate is obtained from the color difference and weight information in four directions, and the G pixel value is finally calculated by adding the R or B pixel to the final color difference estimate. In the proposed algorithm, by contrast, a new method is applied to obtain new estimated values in the N, S, W, and E directions, and the color differences in these directions are derived from the new estimated values, which differs from [17]. At the same time, the gradient of the color difference is calculated in eight directions.
Finally, the gradient information of the color difference in eight directions and the gradient information of parts of the RGB image are used to calculate the weights of the eight directions; these are calculated differently from [8] and are marked in Figure 5 with a red dotted line. The G pixel values are obtained from the eight-directional weights and the first estimated G values. The proposed algorithm therefore makes substantial innovations on the basis of [8,17].
The full G image is used as the guide image, and the linear relation is applied in the guide filter to obtain the tentative estimated R values. The interpolation of R/B is the same as in MLRI [12].

The Interpolation of G Values
To present the interpolation of the G values clearly, the demosaicking of the G component at the R pixels of the Bayer image is taken as an example; the algorithm at the B pixels is the same. Interpolating over eight directions makes the interpolation more effective. The interpolated G value is calculated as

G_R,i,j = Σ_X P_X G^X_R,i,j / Σ_X P_X,

where X ranges over the eight directions, G_R,i,j is the full G value at the R pixel (i, j), G^X_R,i,j is the estimated G value in direction X at the R pixel, and P_X is the weight of direction X.
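The combination step above, a normalized weighted sum of the eight directional estimates, can be sketched at a single pixel. The estimate and weight values are hypothetical:

```python
import numpy as np

def fuse_directional_estimates(g_est, p):
    """Normalized weighted sum of directional estimates at one pixel:
    G = sum_X P_X * G_X / sum_X P_X."""
    g_est = np.asarray(g_est, float)
    p = np.asarray(p, float)
    return (p * g_est).sum() / p.sum()

# Hypothetical eight directional G estimates and their weights: four directions
# agree (~100) and carry large weights, four outliers carry small weights.
g_est = [100., 102., 98., 101., 120., 80., 119., 81.]
p     = [1.0, 1.0, 1.0, 1.0, 0.1, 0.1, 0.1, 0.1]
g = fuse_directional_estimates(g_est, p)
# g lies near the four trusted estimates, barely pulled by the outliers
```

Since the weights come from gradients, estimates taken across an edge get small weights and contribute little to the fused value.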
Next, the G values at the R pixel are estimated. On the diagonal lines, the G pixels surround the R pixel, as shown in Figure 6. To enhance the accuracy of the estimates, the estimated G values in the NW, NE, SW, and SE directions at the R pixel are calculated as follows, where X1 represents the four diagonal directions (i.e., NW, NE, SW, and SE), S_G^X1 stands for the G pixel information used in the interpolation along direction X1, and h is an interpolation filter. As can be seen from Figure 6, the G pixels are distributed along the diagonal lines of a 7 × 7 pattern. S_G^X1 in the NE direction is calculated by Equation (10); the other three directions (NW, SW, and SE) are calculated similarly. The pixels used in the formulation are marked by the dashed and solid lines in Figure 6.
The interpolation filter h is defined in Equation (11). The estimated G values in the N, S, W, and E directions are computed as follows, where G^N_R,i,j, G^S_R,i,j, G^W_R,i,j, and G^E_R,i,j are the estimated G pixel values in the N, S, W, and E directions at the R pixel (i, j), respectively.
After the estimates in the N, S, W, E, NW, NE, SW, and SE directions are calculated, the weights P_X are obtained in those eight directions. ∇^X_R(i, j) is the gradient information in direction X at the R pixel (i, j).
The Bayer image with an R pixel as the center is shown in Figure 1. This image is divided into two parts to represent the gradient information. One part, shown in Figure 7, contains the gradient information of the R, G, and B components in the N, S, W, and E directions; the other part, shown in Figure 8, contains the gradient information in the NW, NE, SW, and SE directions. ∇^N_R(i, j) is defined as the gradient information in the N direction at the R pixel (i, j), and it is defined in terms of D^V_r(i, j), the gradient of the color difference in the vertical direction for R pixels at pixel (i, j). In the NW direction, the pixels used in the calculation are marked with black solid lines in Figure 8. The gradient in the NW direction is defined in terms of D^NW_r(i, j), where (i, j) are the coordinates, ∇^NW_R(i, j) is the gradient information in the NW direction at the R pixel (i, j), and D^NW_r(i, j) is the gradient of the color difference in the NW direction at the R pixel (i, j). The NE, SW, and SE directions are treated in the same way as the NW direction.
The gradient of the color difference in the NW direction is obtained from ∆^NW_g,r(i, j), the color difference in the NW direction between the estimated G values and the R image at pixel (i, j). The gradients of the color difference in the NE, SW, and SE directions are calculated in a similar way.
According to the estimates in the four diagonal directions, the color difference is calculated at the R positions. Taking the NW direction as an example (the color differences in the other three directions, NE, SW, and SE, are calculated in the same way), the color difference in the NW direction is defined at pixel (i, j). After that, the gradients of the color difference are obtained in the horizontal and vertical directions, respectively, where D^H_r(i, j) is the gradient of the color difference in the horizontal direction at the R pixel (i, j), D^V_r(i, j) is the corresponding vertical gradient, ∆^H_g,r(i, j) represents the horizontal color difference between the newly estimated G values and the R image, or between the G image and the newly estimated R values, at pixel (i, j), and ∆^V_g,r(i, j) is the corresponding vertical color difference.
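One plausible form of such a directional gradient is the absolute central difference of the color-difference plane; the sketch below uses that form for illustration (the paper's exact stencils are given by its unnumbered gradient equations and are not reproduced here):

```python
import numpy as np

def color_difference_gradient(delta, axis):
    """Absolute central difference of a color-difference plane along `axis`,
    an illustrative form of the directional gradient D."""
    # np.roll wraps at the borders; interior pixels get the true central difference.
    return np.abs(np.roll(delta, -1, axis) - np.roll(delta, 1, axis))

# A color-difference plane that ramps horizontally and is constant vertically.
delta = np.tile(np.array([0., 1., 2., 3.]), (4, 1))
dh = color_difference_gradient(delta, axis=1)   # horizontal gradient
dv = color_difference_gradient(delta, axis=0)   # vertical gradient
# The ramp varies horizontally only, so dv is zero everywhere.
```

A vertical edge thus produces a large horizontal gradient and a zero vertical one, which is exactly the signal the directional weights need.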
The color differences in the horizontal and vertical directions are given in Equations (19) and (20), where G^H_i,j and G^V_i,j are the newly estimated G values in the horizontal and vertical directions at pixel (i, j), respectively, and R^H_i,j and R^V_i,j are the newly estimated R values in the horizontal and vertical directions at pixel (i, j), respectively.
In the horizontal and vertical directions, however, the new estimates of the G and R pixel values at pixel (i, j) are given by Equation (21), which differs from the method used for the first estimates. The new estimates are formulated as:

Experimental Results and Discussions
To evaluate the interpolation performance of the proposed method, we carried out experiments in MATLAB R2015a on a computer with an Intel(R) Core(TM) i5-5200U 2.20 GHz CPU and 4.00 GB of memory. The sampled data of the McMaster (McM) images shown in Figure 9 are used for testing. The original Bayer sequence is B, G, G, R. To demonstrate the effect of the proposed method, the experimental results are compared with DLMMSE [2], IG [6], GBTF [8], RI [10], MLRI [12], and MDWI-GF [17].
The peak signal-to-noise ratio (PSNR) is widely used for this evaluation. The results are shown in Table 1, with two decimal digits retained; for each image, the best result is marked in bold. In Table 1, the average PSNR of the images processed by MLRI [12] is the highest, and the proposed algorithm is second. However, in Figure 9a,e,f,h,i,j,l,m,n,o,q, the PSNR of the proposed algorithm is higher than that of MLRI [12]. The proposed algorithm does not obtain the best result for all images; it performs best in Figure 9a,e,f,i,m,o,q. In particular, in Figure 9f, the proposed algorithm (EWGCD) improves significantly, reaching 5.65 dB higher than GBTF [8]. However, EWGCD achieves less improvement in Figure 9c,g. The structural similarity (SSIM) index works well in measuring the similarity between the interpolated image and the original one. The results are listed in Table 2, with four decimal digits retained; the maximum average SSIM over all images is marked in bold. From Table 2, the average SSIM values show that the image interpolated by the proposed method is more similar to the original image.

Table 1. PSNR (dB) of the proposed method compared with DLMMSE [2], IG [6], GBTF [8], RI [10], MLRI [12], and MDWI-GF [17].

Furthermore, two images, Figure 9e,q, are shown in Figure 10 to examine the effect of the proposed algorithm under local magnification. A local part of Figure 9e is magnified; the magnified part, marked by a black box, is shown in Figure 10a. The magnified local images used for visual comparison are shown in Figure 11, with the original clearly visible in Figure 11a. The other seven images, magnified over the same region, are obtained by DLMMSE [2], IG [6], GBTF [8], RI [10], MLRI [12], MDWI-GF [17], and the proposed method, respectively. Compared with the other six methods, the proposed one is visually more similar to the original. Visual distortions can be seen in Figure 11: the yellow region has grey spots in Figure 11b-g, and compared with the original image shown in Figure 11a, the distortion in Figure 11b-e is particularly serious. Although the image processed by the proposed algorithm is slightly blurred at the edge of the yellow region, it performs better than the other six algorithms.
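The PSNR values reported in Table 1 follow the standard definition; a minimal sketch for 8-bit images:

```python
import math
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return math.inf if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
off_by_one = ref + 1.0      # every pixel differs by exactly one grey level
# psnr(ref, off_by_one) = 10 * log10(255^2 / 1) ~ 48.13 dB
```

Because the measure is logarithmic in the mean squared error, the ~0.1 dB margins between methods in Table 1 correspond to small but consistent reductions in squared reconstruction error.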

Conclusions
In this paper, a new demosaicking algorithm using eight-directional weights based on the gradient of the color difference is proposed for the Bayer image. We first interpolate the G pixels by considering the weights and the estimated G values in eight directions. A filter and the diagonal G values are utilized to obtain the estimated G values in the NW, NE, SW, and SE directions, while the estimated G values in the N, S, W, and E directions are calculated from pixel values in the Bayer image. The estimates in the NW, NE, SW, and SE directions are applied directly to the color difference, whereas in the N, S, W, and E directions the estimates used to obtain the color difference are calculated by a new method. The gradient of the color difference is obtained in eight directions and used to calculate the eight-directional weights. The weights and the first estimated G values are applied to generate the full G image. The interpolation of the R/B values is the same as in MLRI. Compared with the other six algorithms, the experimental results demonstrate that the proposed method achieves varying degrees of improvement on the eighteen McM images, both objectively and subjectively. In future work, we will concentrate on improving the interpolation of the R/B pixels and the operating efficiency of the demosaicking of the G pixels.

Figure 2. The process of interpolation in the R plane: (a) the process of color difference interpolation (CDI) [12] and (b) the process of residual interpolation (RI) [10].

Firstly, the estimated values of the G pixels in eight directions are obtained. The estimated values in the W, E, N, and S directions incorporate the G and R/B values of the Bayer image, while the values in the northwest (NW), northeast (NE), southwest (SW), and southeast (SE) directions are calculated by passing the diagonal G information through a filter.

Figure 5. The process of G interpolation in the proposed algorithm at R pixels.

Figure 6. The G pixels on the four diagonals.

Figure 7. The gradient information of the N, S, W, and E directions.

Figure 8. The gradient information of the NW, NE, SW, and SE directions.

Figure 9. The McM images used in the simulations.

Figure 10. The images used to magnify the local part: (a) the test image of Figure 9e and (b) the test image of Figure 9q.