Further Improvement of Debayering Performance of RGBW Color Filter Arrays Using Deep Learning and Pansharpening Techniques

The RGBW color filter array (CFA), also known as CFA2.0, contains R, G, B, and white (W) pixels. It is a 4 × 4 pattern with 8 white pixels, 4 green pixels, 2 red pixels, and 2 blue pixels, and the pattern repeats over the whole image. In an earlier conference paper, we cast the demosaicing process for CFA2.0 as a pansharpening problem. That formulation is modular and allows us to insert different pansharpening algorithms for demosaicing; new interpolation and demosaicing algorithms can also be used. In this paper, we propose a new enhancement of our earlier approach by integrating a deep learning-based algorithm into the framework. Extensive experiments using IMAX and Kodak images clearly demonstrate that the new approach improves the demosaicing performance even further.


Introduction
Two mast cameras (Mastcams) are onboard NASA's rover, Curiosity. The Mastcams are multispectral imagers, each having nine bands. The standard Bayer pattern [1] in Figure 1a has been used for the RGB bands in the Mastcams. One objective of our research was to investigate whether or not it is worthwhile to adopt the 4 × 4 RGBW pattern [2,3] in Figure 1b instead of the 2 × 2 one in NASA's Mastcams. We addressed the comparison between the 2 × 2 Bayer and the 4 × 4 RGBW pattern in an earlier conference paper [4], which proposed a pansharpening approach. We observed that the Bayer pattern has better performance than the RGBW pattern. Another objective of this paper is to investigate a new and enhanced pansharpening approach to demosaicing RGBW images.
In a 2017 conference paper [4], we proposed a pansharpening approach to demosaicing the RGBW pattern. The idea was motivated by pansharpening [19–34], a mature and well-developed research area whose objective is to enhance a low resolution color image with the help of a co-registered high resolution panchromatic image. Because half of the pixels in the RGBW pattern are white, we think it is appropriate to apply pansharpening techniques to perform the demosaicing. Although RGBW has some robustness to noise and low-light conditions, it is not popular and does not perform as well [4] as the standard Bayer pattern. Nevertheless, we would like to argue that the debayering of RGBW is a good research problem for academia even when the mosaiced images are clean and noise-free. Ideally, it would be good to reach the same level of performance as the standard Bayer pattern. However, it is a challenge to improve the debayering performance of RGBW.
In our earlier paper [4], the pansharpening approach consisted of the following steps. First, the generation of the pan band and the low resolution RGB bands is similar to that in [2,3]. Second, instead of downsampling the pan band, we apply pansharpening algorithms to directly generate the pansharpened color images. The results in [4] were slightly better than the standard method [2,3] for the IMAX data, but slightly inferior for the Kodak data.
In this paper, we present a new approach that aims at further improving the pansharpening approach in [4]. There are two major differences between this paper and [4]. First, we propose to apply a recent deep learning based demosaicing algorithm [35] to improve both the white band (also known as the illuminance band or panchromatic band) and the reduced resolution RGB image. After that, a pansharpening step is used to generate the final demosaiced image. Second, a new "feedback" concept is introduced and evaluated: the pansharpened images are fed back to two early steps. Extensive experiments using the benchmark IMAX and Kodak images showed that the new framework improves over earlier approaches.
Our contributions are as follows:
• We are the first team to propose the combination of pansharpening and deep learning to demosaic the RGBW pattern. Our approach opens a new direction in this research field and may stimulate more research in this area;
• Our new results improve over our earlier results in [4];
• Our results are comparable to or better than state-of-the-art methods [2,14,16].
This paper is organized as follows. In Section 2, we review the standard approach and the pansharpening approach [4] to demosaicing RGBW images, and then introduce our new approach that combines deep learning and pansharpening. In Section 3, we summarize our extensive comparative studies. Section 4 includes a few concluding remarks and future research directions.

Standard Approach
In [3], a standard approach was presented. Figure 2 [2] depicts the key ideas. A mosaiced image is first split into color and panchromatic components. The color and panchromatic components are then processed separately to generate the full resolution color images. This approach is very efficient and achieves decent performance, which can be explained using Lemma 1 of [7]. For completeness, the lemma is included below.
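The split into color and panchromatic components can be sketched as follows. The 4 × 4 tile layout used here is hypothetical (only its census of 8 W, 4 G, 2 R, and 2 B pixels matches the paper); the actual CFA2.0 arrangement is shown in Figure 1b.

```python
# Hypothetical 4x4 CFA2.0 tile: white pixels on a checkerboard,
# 8 W / 4 G / 2 R / 2 B per tile as stated in the abstract.
TILE = [
    ["W", "G", "W", "R"],
    ["B", "W", "G", "W"],
    ["W", "G", "W", "R"],
    ["G", "W", "B", "W"],
]

def split_mosaic(mosaic, tile=TILE):
    """Separate a CFA2.0 mosaic into a pan band (holes at colour sites)
    and per-channel colour samples, as in the standard approach."""
    h, w = len(mosaic), len(mosaic[0])
    pan = [[None] * w for _ in range(h)]
    colours = {"R": [], "G": [], "B": []}
    for y in range(h):
        for x in range(w):
            kind = tile[y % 4][x % 4]
            if kind == "W":
                pan[y][x] = mosaic[y][x]
            else:
                colours[kind].append((y, x, mosaic[y][x]))
    return pan, colours

# Toy 4x4 mosaic: value = 10*y + x, so each sample's origin is traceable.
mosaic = [[10 * y + x for x in range(4)] for y in range(4)]
pan, colours = split_mosaic(mosaic)
print(sum(v is not None for row in pan for v in row))          # 8 white samples
print(len(colours["G"]), len(colours["R"]), len(colours["B"]))  # 4 2 2
```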

Lemma 1. Let F be a full-resolution reference color component. Then any other full-resolution color component C ∈ {R, G, B} can be predicted from its subsampled version Cs using

C = F + I(Cs − Fs),

where Fs is the subsampled version of F and I denotes a proper interpolation process.
Lemma 1 provides a theoretical foundation for the standard approach. Moreover, the standard approach is intuitive and simple.
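A one-dimensional toy example illustrates the lemma. The interpolation operator I is left abstract in the paper; plain linear upsampling is assumed here, and the colour band is taken to be the reference plus a constant offset so the prediction is exact.

```python
# Lemma 1 sketch: predict a full-resolution band C from its subsampled
# version Cs, using a full-resolution reference F.
def upsample_linear(xs):
    """Interpolate a half-resolution signal back to full resolution."""
    out = []
    for a, b in zip(xs, xs[1:]):
        out += [a, (a + b) / 2.0]
    out += [xs[-1], xs[-1]]  # pad the tail to full length
    return out

F = [float(i * i) for i in range(8)]   # full-resolution reference component
C = [f + 5.0 for f in F]               # colour band, correlated with F
Cs, Fs = C[::2], F[::2]                # subsampled versions

# C = F + I(Cs - Fs): interpolate the difference, then add the reference.
diff = upsample_linear([c - f for c, f in zip(Cs, Fs)])
C_hat = [f + d for f, d in zip(F, diff)]
print(max(abs(a - b) for a, b in zip(C, C_hat)))  # 0.0 (constant offset is exact)
```

Because the difference Cs − Fs is smooth (here, constant), interpolating it loses far less detail than interpolating Cs directly.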

Pansharpening Approach to Demosaicing CFA2.0 Patterns
Figure 3 shows our earlier pansharpening approach to debayering CFA2.0 images. Details can be found in [4]. The generation of the pan and low resolution RGB images is the same in both Figures 2 and 3.
In particular, HCM is a pansharpening algorithm that uses a high resolution color image to enhance a low resolution hyperspectral image. HCM can be used for color, multispectral, and hyperspectral images. More details about HCM can be found in [20], and open source code can be found in [34].
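To make the role of the pansharpening module concrete, here is a minimal ratio-based sketch. This is NOT the HCM algorithm of [20]; it only illustrates what any such module does in this framework: inject high-resolution pan detail into upsampled low-resolution colour bands.

```python
# Minimal intensity-ratio pansharpening sketch (illustrative, not HCM).
def nearest_upsample(img, factor):
    """Nearest-neighbour upsampling of a 2-D list by `factor`."""
    return [[v for v in row for _ in range(factor)]
            for row in img for _ in range(factor)]

def pansharpen(low_rgb, pan, eps=1e-6):
    """Scale each upsampled band by pan / (local mean intensity)."""
    up = [nearest_upsample(band, 2) for band in low_rgb]
    h, w = len(pan), len(pan[0])
    out = []
    for band in up:
        sharp = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                inten = sum(b[y][x] for b in up) / 3.0
                sharp[y][x] = band[y][x] * pan[y][x] / (inten + eps)
        out.append(sharp)
    return out

low = [[[10.0, 20.0], [30.0, 40.0]]] * 3             # flat grey 2x2 RGB
pan = [[float(r + c) for c in range(4)] for r in range(4)]  # 4x4 detail band
sharp = pansharpen(low, pan)
```

With a grey input, each sharpened band simply inherits the pan detail, which is the intended behaviour of a detail-injection scheme.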


Enhanced Pansharpening Approach
Figure 4 illustrates the enhanced pansharpening approach. First, we apply a deep learning based demosaicing algorithm known as DEMONET [35] to demosaic the reduced resolution CFA. Second, the demosaiced R and B images are upsampled and used to fill in the missing pixels in the panchromatic (pan) band. The reason for this is that the R and B bands have some correlation with the white pixels [36]; some supporting arguments can be found below and in Section 3.2. Third, we treat the filled-in pan band as a standard Bayer pattern with two white pixels, one R pixel, and one B pixel, and apply DEMONET again. The demosaiced image will have two white bands, one R band, and one B band. Fourth, the two white bands are averaged and extracted as the full resolution luminance band. Fifth, the luminance band is used to pansharpen the reduced resolution RGB images to generate the final demosaiced image. Sixth, we introduce a feedback concept (Figure 4b) that feeds the pansharpened RGB bands back to replace the reduced resolution RGB image and the R and B pixels in the pan band. The pan band is then regenerated using DEMONET, and pansharpening is performed again. This process repeats multiple times to yield the final results. We believe this "feedback" idea is probably the first of its kind in the demosaicing of RGBW images. Experimental results showed that the overall approach is promising and improved over earlier results on both the IMAX and Kodak images. We observed that three iterations of feedback can generate good results.

Here, we provide some more details about the DEMONET algorithm. We chose DEMONET because a comparative study in [35] demonstrated its performance against other deep learning and conventional methods. As described in [35], DEMONET is a feed-forward network architecture for demosaicing (Figure 5). The network comprises D + 1 convolutional layers; each layer has W outputs and the kernel sizes are K × K. An initial model was trained using 1.3 million images from ImageNet and 1 million images from MirFlickr. Additionally, some challenging images were searched for to further enhance the training model. Details can be found in [35].
J. Imaging 2019, 5, 68

Some additional details regarding Figure 4 are described below.
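The architectural description above (D + 1 layers, K × K kernels) fixes the network's receptive field by simple arithmetic. The settings below are hypothetical, chosen only for illustration; the actual values of D and K are given in [35].

```python
# Receptive-field arithmetic for a DEMONET-style stack of stride-1
# convolutions; generic formula, not taken from [35].
def receptive_field(depth, k):
    """Receptive field of `depth` stacked k x k, stride-1 convolutions."""
    return 1 + depth * (k - 1)

D, K = 14, 3  # hypothetical settings for illustration
print(receptive_field(D + 1, K))  # 31: each extra 3x3 layer adds 2 pixels
```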
• First, we will explain how DEMONET was used to improve the pan band. Our idea was motivated by the research of [36], in which it was observed that the white (W) channel has a higher spectral correlation with the R and B channels than with the G channel. Hence, we create a fictitious Bayer pattern in which the original W (also known as P) pixels are treated as G pixels, and the missing W pixels are filled in with interpolated R and B pixels from the low resolution RGB image. Figure 6 illustrates the creation of the fictitious Bayer pattern. Once the fictitious Bayer pattern is created, we apply DEMONET to demosaic it. The W or P pixels are then extracted from the G band of the demosaiced image. Although this idea is very straightforward, the resulting improvement is quite large, as can be seen in Table 1.
• Second, we would like to emphasize that we did not re-train DEMONET because we do not have that many images. Most importantly, DEMONET was trained with millions of diverse images. The performance of the above way of generating the pan band is quite good, as can be seen from Table 1;
• Third, we will explain how the feedback works. There are two feedback paths. After the first iteration, we obtain an enhanced color image. In the first feedback path, we replace the reduced resolution color image in Figure 4 with a downsized version of the enhanced color image. In the second feedback path, we directly replace the R and B pixels with the corresponding R and B pixels from the enhanced color image, as shown in Figure 7.

We then apply DEMONET to the above enhanced Bayer pattern to generate an enhanced pan band and go through the pansharpening step to create another enhanced color image. The above process repeats three or more times. In our experiments, we found that the performance reaches its maximum after three iterations.
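The fictitious Bayer construction described in the first bullet can be sketched as follows. The rule deciding which hole receives an R estimate and which receives a B estimate is an assumption here (the paper only says the missing pixels are filled alternately); the layout in Figure 6 is authoritative.

```python
# Sketch of the fictitious Bayer construction (cf. Figure 6): original W
# samples play the G role; holes are plugged with R and B estimates.
def fictitious_bayer(pan, red, blue):
    """pan holds W samples with None at colour sites; red/blue are
    full-resolution estimates used to plug those holes."""
    h, w = len(pan), len(pan[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if pan[y][x] is not None:
                out[y][x] = pan[y][x]   # W sample plays the G role
            elif y % 2 == 0:
                out[y][x] = red[y][x]   # hole on an even row: R estimate
            else:
                out[y][x] = blue[y][x]  # hole on an odd row: B estimate
    return out

# Toy 4x4 example: W samples on the checkerboard, holes elsewhere.
pan  = [[100.0 if (y + x) % 2 == 0 else None for x in range(4)] for y in range(4)]
red  = [[50.0] * 4 for _ in range(4)]
blue = [[20.0] * 4 for _ in range(4)]
fb = fictitious_bayer(pan, red, blue)
print(fb[0][0], fb[0][1], fb[1][0])  # 100.0 50.0 20.0
```

The output has G (white) samples on a checkerboard with R and B on alternating rows, i.e. a standard Bayer arrangement that DEMONET can demosaic directly.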
For ease of illustration of the work flow, we created a pseudo-code as follows:

Combined Deep Learning and Pansharpening for Demosaicing RGBW Patterns
Input: An RGBW pattern
Output: A demosaiced color image

i = 1; // iteration number
* Step 1. For each 4 × 4 RGBW patch, create a 2 × 2 reduced resolution Bayer pattern, and also a 4 × 4 pan band with half of the pixels white and half of the pixels missing. Repeat the above for the whole image.
Step 2. Demosaic the 2 × 2 Bayer pattern using the DEMONET algorithm (pre-trained offline). Furthermore, upsample the demosaiced image to the same size as the original image.
Step 3. Fill in the missing pixels of the pan band.
  a. Creation of a fictitious Bayer pattern for the pan band: Take R and B pixels from the upsampled demosaiced image and alternately fill in the missing pixels in the original pan band. Here, the green band of the fictitious Bayer pattern has pixels from the original white pixels in the pan band.
  b. Apply DEMONET to demosaic the fictitious Bayer pattern in Step 3a. Take the green band of the DEMONET output as the pan band.
  c. Replace half of the pixels in the output of Step 3b with the original white pixels in the original pan band.
Step 4. Pansharpen the reduced resolution RGB image using the pan band to obtain an enhanced color image.
Step 5. Feedback: replace the reduced resolution RGB image and the R and B pixels in the pan band with values from the enhanced color image; i = i + 1; if i ≤ 3, go to *.
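The pseudo-code above can be rendered as an executable skeleton. Every image is collapsed to a single number and every processing step is stubbed out (the stubs are placeholders, not DEMONET or HCM); only the control flow, including the three-iteration feedback loop, mirrors the paper.

```python
# Control-flow skeleton of the combined deep learning + pansharpening
# pipeline. All stub bodies are placeholders for illustration only.
def split_cfa20(mosaic):             # Step 1 (stub)
    return mosaic * 0.5, mosaic * 0.5

def demosaic_stub(bayer):            # Steps 2 / 3b (stand-in for DEMONET)
    return bayer * 1.1

def pansharpen_stub(rgb_low, pan):   # Step 4 (stand-in for HCM)
    return (rgb_low + pan) / 2.0

def demosaic_rgbw(mosaic, n_iters=3):
    bayer, pan_holes = split_cfa20(mosaic)            # Step 1
    rgb = demosaic_stub(bayer)                        # Step 2
    for _ in range(n_iters):                          # feedback loop
        pan = demosaic_stub(pan_holes + 0.1 * rgb)    # Step 3: fill + demosaic
        rgb = pansharpen_stub(rgb, pan)               # Step 4: pansharpen
        pan_holes = 0.9 * pan_holes + 0.1 * rgb       # Step 5: feedback path
    return rgb

out = demosaic_rgbw(100.0)
```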
One may ask why an end-to-end deep learning approach was not developed for RGBW. This is a good question for the research community, and we do not have an answer at the moment. We believe that it is a non-trivial task to modify an existing scheme such as DEMONET to deal with RGBW. This extension could itself be a good direction for future research.
For the pansharpening module in Figure 4, we used HCM because it performed well in our earlier study [4].

Data: IMAX and Kodak
Similar to earlier studies in the literature, we used the IMAX (Figure 8) and Kodak (Figure 9) data sets. In the original Kodak data, there are 24 images. We chose only 12 images because other researchers [2] also used these 12 images.

Performance Metrics and Comparison of Different Approaches to Generating the Pan Band
Two well-known performance metrics were used: peak signal-to-noise ratio (PSNR) and CIELAB [37]. In Table 1, we first show some results that justify why we fill in the R and B pixels at the missing locations of the panchromatic band. Table 1 shows the PSNR values of several methods for generating the pan band. It can be seen that the bilinear and Malvar-He-Cutler (MHC) methods achieve 31.26 and 31.91 dB, respectively. To explore alternatives for generating a better pan band, we used DEMONET with filled-in R and B pixels from two cases (one from the reduced resolution color image and one from the ground truth RGB images). We can clearly see that the PSNR values (33.13 and 37.48 dB) are larger with DEMONET than those obtained using the bilinear and MHC methods. This is because the R and B pixels have some correlations with the white pixels, and DEMONET was able to extract some information from the R and B pixels in the demosaicing process. In practice, we will not have the ground truth RGB bands, and hence the 37.4825 dB will never be attained. However, as shown in Figure 4b, we can still take R and B values from the pansharpened RGB image. It turns out that such a feedback process further enhances the performance of our proposed method. We believe the above "feedback" idea is a good contribution to the demosaicing community for CFA2.0.
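The PSNR values quoted above follow the standard definition, sketched below for an 8-bit signal. Implementation details such as border cropping are not specified in the paper and are omitted here.

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length signals."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

ref  = [120.0, 130.0, 140.0, 150.0]
test = [121.0, 129.0, 141.0, 149.0]
print(round(psnr(ref, test), 2))  # 48.13
```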
In our study, we also customized the deep learning demosaicing method for Mastcam images, because Mastcam images are of interest to NASA. Interestingly, our customized model did not perform as well as the original model. This is because (1) our Mastcam image database is limited in size, and (2) the original DEMONET used millions of images. Based on the above, we decided to use the original model instead of re-training it. In other words, if the original model is already good enough, there is no need to re-invent the wheel.

Evaluation Using IMAX Images
Table 2 summarizes the PSNR and CIELAB scores for the IMAX images. The column "Before Processing" contains results using bicubic interpolation of the reduced resolution color image in Figure 4. We could have included results from some other RGBW demosaicing algorithms [13–17]; however, when we contacted those authors for their codes, some [13,15] did not respond and some [16,17] provided codes that were not for the RGBW pattern. We tried to implement some of those algorithms [16,17] ourselves, but could not obtain good results. We were able to obtain the LSLCD codes from [14] and have included comparisons with [14] in this paper. The column "Standard" refers to results using the standard demosaicing procedure in Figure 2. The column "LSLCD" shows results using the algorithm from [14]. The column "HCM" contains results using the framework in Figure 3. The last two columns contain the results generated using the proposed new framework (without and with feedback) in Figure 4. It can be seen that the new framework with feedback based on DEMONET achieved better results on almost all images as compared to the earlier approaches. The improvement is about 0.8 dB over the best previous approach in terms of averaged PSNR over all images.

Evaluation Using Kodak Images
Table 3 summarizes the PSNR and CIELAB scores of various algorithms for the Kodak images. The arrangement of columns in Table 3 is similar to that in Table 2. We observe that the new approach based on DEMONET yielded better results than most of the earlier methods. Figures 13 and 14 plot the averaged PSNR and CIELAB scores for the different algorithms. The averaged CIELAB scores of the proposed approach without and with feedback are close to each other to the third decimal place. In terms of PSNR, the approach with feedback is 0.3 dB better than that without feedback. In general, Kodak images have better correlations between bands than IMAX images according to [5]. Because of this, algorithms that work well for Kodak images may not work well for IMAX images. Figure 15 shows the demosaiced images of various algorithms. We also included one demosaiced image from a universal demosaicing algorithm [16] in Figure 15. We can see that the results using the proposed framework with DEMONET look slightly better than the other methods in terms of color distortion.


Conclusions
We presented a deep learning-based approach that improves our earlier pansharpening approach to demosaicing the CFA2.0 (RGBW) pattern. Our key idea is to use a deep learning-based algorithm to improve the interpolation of both the illuminance/pan band and the reduced resolution color image. A novel feedback concept was introduced that further enhances the overall demosaicing performance. Using the IMAX and Kodak data sets, we carried out a comparative study between the proposed approach and earlier approaches. The proposed approach performs better than the earlier approaches on both the Kodak data and the IMAX data.
One future research direction is to improve the quality of the pan band. Another is to develop a stand-alone, end-to-end deep learning-based approach for RGBW patterns.

Lemma 1.
Let F be a full-resolution reference color component. Then any other full-resolution color component can be predicted from its subsampled version Cs using ( )

Figure 6. Fictitious Bayer pattern for pan band generation.
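To make the fictitious-Bayer construction concrete, the sketch below builds such a mosaic from a sparse pan band and an already-demosaiced full-resolution color image. It assumes the white pixels lie on a checkerboard, playing the role of the G sites of a standard RGGB Bayer layout, with the missing pan pixels filled alternately by R samples (even rows) and B samples (odd rows); this is an illustrative reading of the construction, not the authors' exact code.

```python
def fictitious_bayer(pan, rgb):
    """Build a fictitious Bayer mosaic from a sparse pan band.

    pan: 2D list with white (W) values on the checkerboard and None elsewhere.
    rgb: 2D list of (r, g, b) tuples at full resolution (the upsampled
         demosaiced image from the pansharpening framework).
    """
    h, w = len(pan), len(pan[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if pan[y][x] is not None:
                out[y][x] = pan[y][x]      # W sample sits on a G site
            elif y % 2 == 0:
                out[y][x] = rgb[y][x][0]   # R row of the assumed RGGB layout
            else:
                out[y][x] = rgb[y][x][2]   # B row of the assumed RGGB layout
    return out
```

Demosaicing this mosaic with a standard Bayer algorithm and keeping only its green channel then yields a full-resolution estimate of the pan band.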

Figure 7. Fictitious Bayer pattern when there is feedback.
Combined Deep Learning and Pansharpening for Demosaicing RGBW Patterns
____________________________________________________________________________________
Input: An RGBW pattern
Output: A demosaiced color image
I = 1 (iteration number)
Step 1. For each 4 × 4 RGBW patch, create a 2 × 2 reduced resolution Bayer pattern and a 4 × 4 pan band in which half of the pixels are white and half are missing. Repeat for the whole image.
Step 2. Demosaic the 2 × 2 Bayer pattern using the DEMONET algorithm (pre-trained offline), then upsample the demosaiced image to the size of the original image.
Step 3. Fill in the missing pixels of the pan band.
a. Create a fictitious Bayer pattern for the pan band: take R and B pixels from the upsampled demosaiced image and alternately fill in the missing pixels of the original pan band. The green band of the fictitious Bayer pattern consists of the original white pixels of the pan band.
b. Apply DEMONET to demosaic the fictitious Bayer pattern from Step 3a, and take the green band of the DEMONET output as the pan band.
c. Replace half of the pixels in the output of Step 3b with the original white pixels of the original pan band.
Step 4. Apply the HCM pansharpening algorithm to fuse the pan band from Step 3 with the reduced resolution color image from Step 2.
I = I + 1. If I > K, stop (K is a pre-designed integer; we used K = 3 in our experiments). Otherwise:
Step 5. Downsample the pansharpened image and feed it back to Step 2 to replace the reduced resolution color image.
Step 6. Go to Step 3a: take R and B pixels from the pansharpened image and fill them into the missing pixels of the original pan band.
Step 7. Repeat Steps 3b and 3c.
Step 8. Repeat Step 4.
____________________________________________________________________________________
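The feedback structure of this loop can be sketched as follows. All image operations are trivial stand-ins that merely tag their inputs with a label, so that the data flow (the pan band being refilled each pass, the pansharpened result being fed back as the color image) can be inspected; `demonet`, `hcm`, and `fill_pan` are illustrative placeholders, not the paper's actual implementations, and no real image processing happens here.

```python
def demonet(bayer):                # stand-in for the pre-trained DEMONET network
    return f"demosaic({bayer})"

def hcm(pan, color):               # stand-in for the HCM pansharpening algorithm
    return f"fuse({pan},{color})"

def fill_pan(sparse_pan, rgb):     # stand-in for Steps 3a-3c (complete the pan band)
    return f"pan({sparse_pan}<-{rgb})"

def run_pipeline(mosaic, K=3):
    # Step 1: split the RGBW mosaic into a reduced Bayer image and a
    # sparse pan band (half white pixels, half missing).
    bayer, sparse_pan = f"bayer({mosaic})", f"wpix({mosaic})"
    color = demonet(bayer)                             # Step 2 (incl. upsampling)
    result = hcm(fill_pan(sparse_pan, color), color)   # Steps 3-4
    for _ in range(K - 1):                             # feedback passes, Steps 5-8
        color = f"down({result})"                      # Step 5: downsample + feed back
        result = hcm(fill_pan(sparse_pan, result), color)  # Steps 6-8
    return result
```

With K = 1 the pipeline runs a single pansharpening pass; with the paper's K = 3, the pansharpened output re-enters the pan-band filling and fusion stages twice more.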

Figure 10. Averaged PSNR values of different methods for RGB bands.

Figure 12. Demosaiced images of different algorithms for one IMAX image.

Figure 13. Averaged PSNR values of different methods for RGB bands.

Table 1. Peak signal-to-noise ratio (PSNR) of pan bands generated by using different interpolation methods.
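The PSNR scores reported in the tables follow the standard definition. A minimal sketch, assuming 8-bit pixel values (peak = 255) and images flattened to equal-length sequences:

```python
import math

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    if len(reference) != len(test):
        raise ValueError("images must have the same number of pixels")
    # Mean squared error over all pixels
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

For multi-band images, the per-band scores are typically computed this way and then averaged.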

Table 2. PSNR and CIELAB metrics of different algorithms: IMAX data. Bold numbers indicate the best performing method in each row.
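The CIELAB metric measures color error in a perceptually more uniform space than RGB. A self-contained sketch of one common variant (sRGB to CIELAB under the D65 white point, with the Euclidean ΔE*ab distance); the paper does not spell out its exact CIELAB formula, so this is an illustrative assumption:

```python
import math

# D65 reference white (2-degree observer), normalized to Y = 1
_WHITE = (0.95047, 1.00000, 1.08883)

def _srgb_to_linear(c):
    # Undo the sRGB transfer function; c is in [0, 1]
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def _f(t):
    # CIELAB nonlinearity
    delta = 6 / 29
    return t ** (1 / 3) if t > delta ** 3 else t / (3 * delta ** 2) + 4 / 29

def srgb_to_lab(r, g, b):
    """Convert one sRGB pixel (components in [0, 1]) to CIELAB (L*, a*, b*)."""
    rl, gl, bl = (_srgb_to_linear(c) for c in (r, g, b))
    # sRGB-to-XYZ matrix (D65)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    fx, fy, fz = (_f(v / w) for v, w in zip((x, y, z), _WHITE))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(pixel1, pixel2):
    """Delta E*ab (Euclidean distance in CIELAB) between two sRGB pixels."""
    return math.dist(srgb_to_lab(*pixel1), srgb_to_lab(*pixel2))
```

A per-image CIELAB score is then the average of `delta_e` over all pixel pairs between the original and the demosaiced image; lower is better.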

Table 2. Cont.
Figures 10 and 11 depict the averaged PSNR and CIELAB scores of the various methods for the IMAX images. The scores of the new framework are better than those of the earlier methods. Figure 12 visualizes all the demosaiced images, as well as the original image, for one IMAX image. It can be seen that the images produced by the new framework are comparable to the others.

Table 3. PSNR and CIELAB metrics of various algorithms: Kodak data. Bold numbers indicate the best performing method in each row.
