Demosaicing of CFA 3.0 with Applications to Low Lighting Images

Low lighting images usually contain Poisson noise, which is pixel amplitude-dependent. More panchromatic or white pixels in a color filter array (CFA) are believed to help the demosaicing performance in dark environments. In this paper, we first introduce a CFA pattern known as CFA 3.0 that has 75% white pixels, 12.5% green pixels, and 6.25% each of red and blue pixels. We then present algorithms to demosaic this CFA and demonstrate its performance for normal and low lighting images. In addition, a comparative study was performed to evaluate the demosaicing performance of three CFAs, namely the Bayer pattern (CFA 1.0), the Kodak CFA 2.0, and the proposed CFA 3.0. Using a clean Kodak dataset with 12 images, we emulated low lighting conditions by introducing Poisson noise into the clean images. In our experiments, normal and low lighting images were used. For the low lighting conditions, images with signal-to-noise ratios (SNRs) of 10 dBs and 20 dBs were studied. We observed that the demosaicing performance in low lighting conditions improved when there were more white pixels. Moreover, denoising can further enhance the demosaicing performance for all CFAs. The most important finding is that, in low lighting images, CFA 3.0 performs better than CFA 1.0 but is slightly inferior to CFA 2.0.


Introduction
Many commercial cameras have incorporated the Bayer pattern [1], which is also known as color filter array (CFA) 1.0. An example of CFA 1.0 is shown in Figure 1a. There are many repetitive 2 × 2 blocks and, in each block, two green pixels, one red pixel, and one blue pixel are present. To save cost, the Mastcam onboard the Mars rover Curiosity [2][3][4][5] also adopted the Bayer pattern. Due to the popularity of CFA 1.0, Kodak researchers invented a red-green-blue-white (RGBW) pattern, or CFA 2.0 [6,7]. An example of the RGBW pattern is shown in Figure 1b. In each 4 × 4 block, eight white pixels, four green pixels, and two red and two blue pixels are present. Numerous other CFA patterns have been invented in the past few decades [8][9][10].
Researchers working on CFAs believe that CFA 2.0 is more suitable for taking images in low lighting environments. Recently, some researchers [11] have further explored the possibility of adding more white pixels to the CFA 2.0. The new pattern has 75% white pixels and the RGB pixels are randomly distributed among the remaining 25% pixels.
Motivated by the work in [11], we propose a simple CFA pattern in which the RGB pixels are evenly distributed, instead of randomly distributed. In particular, as shown in Figure 1c, each 4 × 4 block has 75% or 12 white pixels, 12.5% or two green pixels, and 6.25% each, or one red and one blue pixel. We identify this CFA pattern as CFA 3.0. There are three key advantages of using fixed CFA patterns; for the random pattern case, each camera would have a different pattern. The first advantage is that the proposed fixed pattern allows a camera manufacturer to mass produce cameras without changing the RGBW pattern for each camera, which can save manufacturing cost quite significantly. The second advantage is that the demosaicing software can be the same in all cameras if the pattern is fixed; otherwise, each camera would need unique demosaicing software tailored to its specific random pattern, which would seriously affect the cost. The third advantage is that some of the demosaicing algorithms for CFA 2.0 can be applied with little modification. This can be easily seen if one puts the standard demosaicing block diagrams for CFA 2.0 and CFA 3.0 side by side: one can immediately notice that the reduced resolution color image and the panchromatic images can be similarly generated. As a result, the standard approach for CFA 2.0, all the pan-sharpening based algorithms for CFA 2.0, and the combined pan-sharpening and deep learning approaches for CFA 2.0 that we developed earlier in [12] can be applied to CFA 3.0.
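The 4 × 4 CFA 3.0 block described above can be written down explicitly. The exact placement of the R, G, and B pixels is given in Figure 1c; the arrangement below is only an illustrative assumption, but the pixel counts (12 white, 2 green, 1 red, 1 blue) match the percentages stated in the text.

```python
# Illustrative 4 x 4 CFA 3.0 block: 12 white (W), 2 green (G),
# 1 red (R), and 1 blue (B). The R/G/B placement here is an
# assumption for illustration; see Figure 1c for the actual layout.
CFA3_BLOCK = [
    ["W", "W", "W", "W"],
    ["W", "R", "G", "W"],
    ["W", "G", "B", "W"],
    ["W", "W", "W", "W"],
]

def channel_fractions(block):
    """Count each channel's share of the pixels in one CFA block."""
    flat = [p for row in block for p in row]
    return {c: flat.count(c) / len(flat) for c in "WRGB"}

fractions = channel_fractions(CFA3_BLOCK)
print(fractions)  # {'W': 0.75, 'R': 0.0625, 'G': 0.125, 'B': 0.0625}
```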
In our recent paper on the demosaicing of CFA 2.0 (RGBW) [12], we have compared CFA 1.0 and CFA 2.0 using IMAX and Kodak images and observed that CFA 1.0 was better than CFA 2.0. One may argue that our comparison was not fair because IMAX and Kodak datasets were not collected in low lighting conditions and CFA 2.0 was designed for taking images in low lighting environments. Due to the dominance of white pixels in CFA 2.0, the SNR of the collected image is high and hence CFA 2.0 should have better demosaicing performance in dark environments.
Recently, we systematically and thoroughly compared CFA 1.0 and CFA 2.0 under dark conditions [13]. We observed that CFA 2.0 indeed performed better under dark conditions. We also noticed that denoising can further improve the demosaicing performance.
The aforementioned discussions immediately lead to several questions concerning the different CFAs. First, how does one demosaic CFA 3.0? Although there are universal debayering algorithms [8][9][10], those codes are not accessible to the public or may require customization. Here, we propose quite a few algorithms that can demosaic CFA 3.0, which can be considered our first contribution. Second, will more white pixels in the CFA pattern help the demosaicing performance for low lighting images? In other words, will CFA 3.0 have any advantages over CFA 1.0 and CFA 2.0? It will be a good contribution to the research community to answer the question: which of the three CFAs is most suitable for low lighting environments? Third, low lighting images contain Poisson noise, and demosaicing itself has no denoising capability. To improve the demosaicing performance, researchers usually carry out some denoising and contrast enhancement. It is important to know where one should perform denoising: it can be performed either before or after demosaicing, and which choice yields better overall image quality? Answering the above questions will assist designers of the next generation of cameras that have adaptive denoising capability to handle diverse lighting environments.
In this paper, we will address the aforementioned questions. After extensive research and experiments, we found that some algorithms for CFA 2.0 can be adapted to demosaic CFA 3.0. For instance, the standard approach for CFA 2.0 is still applicable to CFA 3.0. The pan-sharpening based algorithms for CFA 2.0 [12] and the deep learning based algorithms for CFA 2.0 [14] are also applicable to CFA 3.0. We will describe those details in Section 2. In Section 3, we will first present experiments to demonstrate that CFA 3.0 can work well for low lighting images. Denoising using block matching in 3D (BM3D) [15] can further enhance the demosaicing performance. We also summarize a comparative study that compares the performance of CFA 1.0, CFA 2.0, and CFA 3.0 using normal and emulated low lighting images. We have several important findings. First, having more white pixels does not always improve the demosaicing performance: CFA 2.0 achieved the best performance, and CFA 3.0 performs better than CFA 1.0 but is slightly inferior to CFA 2.0. Second, denoising can further enhance the demosaicing performance for all CFAs. Third, we observed that the final image quality relies heavily on where denoising is performed; in particular, denoising after demosaicing is worse than denoising before demosaicing. Fourth, when the SNR is low, denoising has more influence on demosaicing. Some discussions of those findings are also included. In Section 4, some remarks and future research directions conclude the paper.

Demosaicing Algorithms
We will first review some demosaicing algorithms for CFA 2.0. We will then answer the first question mentioned in Section 1: how one can demosaic the CFA 3.0 pattern shown in Figure 1c. It turns out that some of the existing algorithms for CFA 2.0 can be used for CFA 3.0 with some minor modifications.

Demosaicing Algorithms for CFA 2.0
The baseline approach is a simple demosaicing operation on the CFA, followed by an upsampling of the reduced resolution color image shown in Figure 2 of [13]. The standard approach consists of four steps as shown in Figure 2 of [13].
Step 1 interpolates the luminance image with half of the white pixels missing.
Step 2 subtracts the reduced color image from the down-sampled interpolated luminance image.
Step 3 upsamples the difference image in Step 2.
Step 4 fuses the full resolution luminance with the upsampled difference image in Step 3. In our implementation, the demosaicing of the reduced resolution color image is done using local directional interpolation and nonlocal adaptive thresholding (LDI-NAT) [16] and the pan interpolation is also done using LDI-NAT [16].
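The four steps above can be sketched numerically. This is a minimal single-band sketch under two stated assumptions: nearest-neighbor interpolation stands in for LDI-NAT, and the Step 4 fusion is taken to be a simple subtraction of the upsampled difference from the full-resolution luminance; the actual implementation differs in both respects.

```python
import numpy as np

def upsample(img, factor):
    """Nearest-neighbor upsampling stand-in for the LDI-NAT interpolation."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def standard_demosaic(luminance_full, color_reduced, factor=2):
    """Sketch of the four-step standard approach (one color band).

    luminance_full : interpolated full-resolution luminance (Step 1 output)
    color_reduced  : demosaiced reduced-resolution color image, one band
    """
    # Step 2: subtract the reduced color image from the down-sampled luminance.
    lum_down = luminance_full[::factor, ::factor]
    diff = lum_down - color_reduced
    # Step 3: upsample the difference image.
    diff_up = upsample(diff, factor)
    # Step 4: fuse the full-resolution luminance with the upsampled difference
    # (here: a simple subtraction, recovering the color band estimate).
    return luminance_full - diff_up

lum = np.ones((4, 4)) * 100.0
color = np.ones((2, 2)) * 60.0
out = standard_demosaic(lum, color)
print(out[0, 0])  # 60.0: luminance minus the (constant) luminance-color gap
```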
In a recent paper of ours [14], the pan-sharpening approach was improved by integrating it with deep learning. As shown in Figure 4 of [13], deep learning was incorporated in two places: first, to demosaic the reduced resolution CFA image; second, to improve the interpolation of the pan band. We adopted a deep learning algorithm known as Demonet [30]. Good performance improvement has been observed.
In the past, we also developed two pixel-level fusion algorithms known as the fusion of three (F3) algorithm and the alpha trimmed mean filter (ATMF), which were used in our earlier studies [12][13][14][32]. The three best performing algorithms are fused in F3 and seven high performing algorithms are fused in ATMF. These fusion algorithms are applicable to any CFA. As opposed to the random color patterns in [11], the CFA 3.0 pattern in this paper is fixed. One key advantage is that some of the approaches for CFA 2.0 can be easily applied with little modification. For instance, the standard approach shown in Figure 2 of [13] for CFA 2.0 can be immediately applied to CFA 3.0, as shown in Figure 2. In each 4 × 4 block, the four R, G, B pixels in the CFA 3.0 raw image are extracted to form a reduced resolution CFA image. Any standard demosaicing algorithm, such as those mentioned in Section 2.1, can then be applied; in our implementation, we used LDI-NAT [16] for demosaicing the reduced resolution color image. The missing pan pixels are interpolated using LDI-NAT [16] to create a full resolution pan image. The subsequent steps are the same as before.
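The pixel-level fusion idea behind ATMF mentioned above can be sketched as follows: at each pixel, sort the outputs of the candidate demosaicing methods, trim the extremes, and average the rest. The trim amount (one value from each end) is an assumption for illustration; the paper's ATMF fuses seven method outputs.

```python
import numpy as np

def atmf_fuse(outputs, trim=1):
    """Alpha-trimmed mean fusion of several demosaiced images.

    outputs : list of same-shaped arrays (one per demosaicing method)
    trim    : number of extreme values dropped at each end, per pixel
    """
    stack = np.sort(np.stack(outputs, axis=0), axis=0)  # sort per pixel
    trimmed = stack[trim : stack.shape[0] - trim]       # drop the extremes
    return trimmed.mean(axis=0)

# Seven hypothetical method outputs for a small image patch:
outs = [np.full((2, 2), v, dtype=float) for v in (10, 50, 52, 54, 56, 58, 200)]
fused = atmf_fuse(outs, trim=1)
print(fused[0, 0])  # 54.0: the outliers 10 and 200 are trimmed away
```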
Similarly, the pan-sharpening approach for CFA 2.0 shown in Figure 3 of [13] can be applied to CFA 3.0, as shown in Figure 3. Here, the four R, G, B pixels are extracted first and then a demosaicing algorithm for CFA 1.0 is applied to the reduced resolution Bayer image. We used LDI-NAT [16] for the reduced resolution color image. For the pan band, any interpolation algorithm can be applied; we used LDI-NAT. Afterwards, any of the pan-sharpening algorithms mentioned earlier can be used to fuse the pan band and the demosaiced reduced resolution color image to generate a full resolution color image. In our experiments, we used PCA [17], SFIM [18], GLP [19], HPM [20], GS [21], GSA [22], GFPCA [23], PRACS [24], and HCM [25] for pan-sharpening. The hybrid deep learning and pan-sharpening approach for CFA 2.0 shown in Figure 4 of [13] can also be extended to CFA 3.0, as shown in Figure 4. For the reduced resolution demosaicing step, the Demonet algorithm is used. In the pan band generation step, we also propose to apply Demonet. The details are similar to our earlier paper on CFA 2.0 [12]; hence, we skip them. After those two steps, a pan-sharpening algorithm is applied. In our experiments, Demonet is combined with different pan-sharpening algorithms in different scenarios.
For normal lighting conditions, GSA is used for pan-sharpening and we call this hybrid approach the Demonet + GSA method. For low lighting conditions, it is more effective to use GFPCA for pan-sharpening and we term this the Demonet + GFPCA method. The two fusion algorithms (F3 and ATMF) can be directly applied to CFA 3.0.
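Among the pan-sharpening operators listed above, the high-pass modulation family has a particularly compact form: each upsampled color band is modulated by the ratio of the full-resolution pan band to its low-pass version. The sketch below is a generic HPM-style fusion written under that assumption; it is not the exact HPM of [20].

```python
import numpy as np

def hpm_fuse(color_up, pan, pan_low, eps=1e-6):
    """HPM-style fusion: modulate each upsampled band by pan-band detail.

    color_up : (H, W, 3) upsampled (blurry) color image
    pan      : (H, W) full-resolution panchromatic band
    pan_low  : (H, W) low-pass version of the pan band
    """
    ratio = pan / (pan_low + eps)          # spatial detail of the pan band
    return color_up * ratio[..., None]     # inject detail into every band

color_up = np.full((2, 2, 3), 40.0)
pan = np.array([[120.0, 80.0], [100.0, 100.0]])
pan_low = np.full((2, 2), 100.0)
fused = hpm_fuse(color_up, pan, pan_low)
print(round(fused[0, 0, 0], 3))  # 48.0: brighter pan detail brightens the pixel
```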

Performance Metrics
Five performance metrics were used in our experiments to compare the different methods and CFAs. These metrics are well known in the literature.
• Peak Signal-to-Noise Ratio (PSNR) [33]: separate PSNRs in dBs are computed for each band. A combined PSNR is the average of the PSNRs of the individual bands. Higher PSNR values imply higher image quality.
• Structural SIMilarity (SSIM): in [34], SSIM was defined to measure the closeness between two images. An SSIM value of 1 means that the two images are identical.
• Human Visual System (HVS) metric: details of the HVS metric in dBs can be found in [35].
• HVSm (HVS with masking) [36]: similar to HVS, but HVSm incorporates visual masking effects when computing the metric.
• CIELAB: we also used CIELAB [37] for assessing demosaicing and denoising performance in our experiments.
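The combined PSNR described above (the average of the per-band PSNRs) can be computed as follows, a small sketch assuming 8-bit images with a peak value of 255.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """PSNR in dBs between a reference band and a test band."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def combined_psnr(ref_rgb, test_rgb):
    """Average of the per-band PSNRs for a 3-band image."""
    return np.mean([psnr(ref_rgb[..., b], test_rgb[..., b]) for b in range(3)])

ref = np.zeros((4, 4, 3))
test = ref + 16.0  # uniform error of 16 gray levels in every band
print(round(combined_psnr(ref, test), 2))  # 24.05 dBs
```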

Experiments
In Section 2, we answered the first question about how one can demosaic CFA 3.0. Here, we will answer the two remaining questions mentioned in Section 1. One question is whether or not the new CFA 3.0 can perform well for demosaicing low lighting images. The other is whether CFA 3.0 has any advantages over the other two CFAs. Simply put, we will answer which of the three CFAs is best for low light environments.

Data
A benchmark dataset (Kodak) was downloaded from a website (http://r0k.us/graphics/kodak/) and 12 images were selected. The images are shown in Figure 5 of [13]. We will use them as reference images for generating objective performance metrics. In addition, noisy images emulating images collected from dark conditions will be created using those clean images. It should be noted that the Kodak images were collected using film and then converted to digital images; we are therefore certain that the images were not created using CFA 1.0. Many researchers in the demosaicing community have used the Kodak dataset in their studies.
Emulating images in low lighting conditions is important because ground truth (clean) images can then be used for performance assessment. In the literature, some researchers used Gaussian noise to emulate low lighting images. We think the proper way to emulate low lighting images is to use Poisson noise, simply because the noise introduced in low lighting images follows a Poisson distribution.
The differences between Gaussian and Poisson noise are explained as follows. Gaussian noise is additive, independent at each pixel, and independent of the pixel intensity. It is caused primarily by Johnson-Nyquist noise (thermal noise) [38]. Poisson noise is pixel intensity dependent and is caused by the statistical variation in the number of photons. Poisson noise is also known as photon shot noise [39]. The number of photons at the detectors of cameras follows a Poisson distribution, hence the name Poisson noise. When the number of photons increases significantly, the noise behavior approaches a Gaussian distribution due to the central limit theorem. However, this transition from a Poisson distribution to a Gaussian distribution does not mean that Poisson noise (photon noise) becomes Gaussian noise (thermal noise) when the number of photons increases significantly. This may be confusing due to the terminology of the Gaussian distribution. In short, the two noises come from different origins and have very different behaviors.
Poisson distribution has been widely used to characterize discrete events. For example, the arrival of customers at a bank follows a Poisson distribution; the number of phone calls to a cell phone tower also follows a Poisson distribution. For cameras, the probability density function (pdf) of photon noise in an image pixel follows a Poisson distribution, which can be mathematically described as
P(k) = λ^k e^(−λ) / k!, k = 0, 1, 2, …, (1)
where λ is the mean number of photons per pixel and P(k) is the probability that there are k photons. Based on the above pdf, the actual number of photons arriving at a detector pixel fluctuates around the mean (λ), which can be used to characterize the lighting conditions. That is, a small λ implies the lighting is low, and vice versa. In statistics, when λ increases to a large number, the pdf in (1) approaches a continuous pdf known as the Gaussian distribution, which is given by
f(x) = (1/√(2πλ)) e^(−(x−λ)²/(2λ)), (2)
where x denotes the continuous noise variable and, as in the Poisson case, λ is both the mean and the variance. In [40], the central limit theorem is used to connect (1) and (2) by assuming λ >> 1. The derivation of (2) from (1) can be found in [41]. Figure 5 [42] clearly shows that the Poisson distribution gradually becomes a Gaussian distribution when λ increases. It appears that when λ = 10, the Poisson pdf already looks like a Gaussian distribution. However, it must be emphasized here that although (2) follows the normal or Gaussian distribution, the noise is still photon shot noise, not Gaussian noise due to thermal noise.
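A quick numerical check of this convergence: at λ = 10, the Poisson probabilities are already close to the matching Gaussian density (mean and variance both λ), as Figure 5 suggests. The snippet below evaluates both, a minimal sketch with the standard formulas.

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability: P(k) = lam^k * exp(-lam) / k!"""
    return lam**k * math.exp(-lam) / math.factorial(k)

def gaussian_pdf(x, lam):
    """Gaussian density with mean and variance both equal to lam."""
    return math.exp(-((x - lam) ** 2) / (2 * lam)) / math.sqrt(2 * math.pi * lam)

lam = 10
for k in (5, 10, 15):
    print(k, round(poisson_pmf(k, lam), 4), round(gaussian_pdf(k, lam), 4))
# At the mean (k = 10), the two values agree to within about one percent.
```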
In contrast, Gaussian noise (thermal noise) follows the distribution
f(z) = (1/(σ√(2π))) e^(−(z−µ)²/(2σ²)), (3)
where z is the noise variable, µ is the mean, and σ is the standard deviation. As mentioned earlier, Gaussian noise is thermal noise and is independent of pixel location and pixel intensity. To introduce Gaussian noise into a clean image, one can use the Matlab function imnoise(I, 'gaussian', µ, σ²), where I is a clean image and µ is set to zero.
Here, we describe a little more about the imaging model in low lighting conditions. As mentioned earlier, Poisson noise is related to the average number of photons per pixel, λ. To emulate the low lighting images, we vary λ. It should be noted that the SNR in dBs of a Poisson image is given by [40]
SNR = 10 log10(λ).
A number of images with different levels of Poisson noise or SNR can be seen in the table in Appendix A.
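The relation between λ and the SNR follows from the Poisson statistics: the signal mean is λ and the noise standard deviation is √λ, so the amplitude SNR is √λ and the SNR in dBs is 10 log10(λ). The sketch below checks this, and the inverse relation matches the roughly 3200 photons per pixel quoted later for 35 dBs.

```python
import math

def poisson_snr_db(lam):
    """SNR in dBs of a Poisson signal: mean lam, noise std sqrt(lam)."""
    return 20.0 * math.log10(lam / math.sqrt(lam))  # equals 10 * log10(lam)

def photons_for_snr_db(snr_db):
    """Invert the relation: mean photons per pixel for a target SNR."""
    return 10.0 ** (snr_db / 10.0)

print(round(photons_for_snr_db(35)))  # 3162 photons, near the ~3200 quoted later
print(round(poisson_snr_db(100), 1))  # 20.0 dBs at 100 photons per pixel
```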
The process by which we introduced Poisson noise is adapted from code written by Erez Posner (https://github.com/erezposner/Shot-Noise-Generator) and is summarized as follows.
Given a clean image and the goal of generating a Poisson noisy image with a target signal-to-noise ratio (SNR) value in dBs, we first compute the full-well (FW) capacity of the camera, which is related to the SNR through
FW = 10^(SNR/10).
For a pixel I(i,j), we then compute the average photons per pixel (λ) for that pixel by
λ(i,j) = FW × I(i,j)/255,
where 255 is the number of intensity levels in an image. Using a Poisson noise function created by Donald Knuth [43], we can generate an actual photon number k through the Poisson distribution described by Equation (1). This k value changes randomly whenever a new call to the Poisson noise function is made. Finally, the actual noisy pixel amplitude I_n(i,j) is given by
I_n(i,j) = 255 × k/FW.
A loop iterating over every (i,j) in the image will generate the noisy Poisson image with the target SNR value.
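The per-pixel procedure above can be sketched as follows. The full-well relation FW = 10^(SNR/10) and the rescaling 255·k/FW reflect our reading of the referenced code, so treat them as assumptions; numpy's Poisson generator stands in for the Knuth sampler.

```python
import numpy as np

def add_poisson_noise(image, target_snr_db, rng=None):
    """Emulate a low-light capture of an 8-bit image at a target SNR.

    Assumed model (our reading of the referenced code): FW = 10^(SNR/10)
    photons at full scale; each pixel's mean photon count is
    lam = FW * I/255; the noisy pixel amplitude is 255 * k / FW.
    """
    rng = np.random.default_rng(rng)
    fw = 10.0 ** (target_snr_db / 10.0)          # full-well capacity
    lam = fw * image.astype(float) / 255.0       # mean photons per pixel
    k = rng.poisson(lam)                         # sampled photon counts
    return np.clip(255.0 * k / fw, 0, 255)

clean = np.full((64, 64), 128.0)
noisy = add_poisson_noise(clean, target_snr_db=20, rng=0)
print(round(noisy.mean(), 1))  # close to 128: the noise is roughly zero-mean
```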
Although Gaussian and Poisson noise have completely different characteristics, it is interesting to understand when the two noises become indistinguishable. To investigate this while saving space, we include noisy images between 20 dBs and 38 dBs. The Gaussian noise was generated using Matlab's noise generation function (imnoise), and the Poisson noise was generated following an open source code [44]. The SNRs are calculated by comparing the noisy images to the ground truth image. From Table A1 in Appendix A, we can see that when the SNR values are less than 35 dBs, the two types of noisy images are visually different; Poisson images are slightly darker than Gaussian images. When the SNR increases beyond 35 dBs, the two noisy images are almost indistinguishable.
From this study, we can conclude that 35-dB SNR is the threshold for differentiating Poisson noise (photon shot noise) from Gaussian noise (thermal). At 35 dBs, the average number of photons per pixel arriving at the detector is 3200 for Poisson noise and the standard deviation of the Gaussian noise is 0.0177. The image pixels are in double precision and normalized between 0 and 1.
To create a consistent level of noise close to our SNR levels of 10 dBs and 20 dBs, we followed a technique described in [44,45]. For each color band, we added Poisson noise separately. The noisy and low lighting images at 10 dBs and 20 dBs are shown in Figures 6 and 7 of [13], respectively.
In this paper, denoising is done via BM3D [15], which is a well-known method in the research community. The particular BM3D we used is specifically for Poisson noise, and we performed denoising in a band-by-band manner. The BM3D package is titled "Denoising software for Poisson and Poisson-Gaussian data," released on March 16th, 2016 (http://www.cs.tut.fi/~foi/invansc/). We used this code as packaged; it requires no input other than a single-band noisy image. We also considered the standard BM3D package titled "BM3D Matlab" (http://www.cs.tut.fi/~foi/GCF-BM3D/), released on February 16th, 2020, which allows denoising 3-band RGB images. That package, however, assumes Gaussian noise and requires a parameter based on the noise level.
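Since the Poisson BM3D package accepts only a single band, the band-by-band use described above amounts to a simple wrapper. BM3D itself is a MATLAB package, so a placeholder denoiser (a 3 × 3 mean filter, purely illustrative) stands in for it in this sketch.

```python
import numpy as np

def mean_filter_3x3(band):
    """Placeholder denoiser standing in for the single-band Poisson BM3D."""
    padded = np.pad(band, 1, mode="edge")
    out = np.zeros_like(band, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + band.shape[0],
                          1 + dx : 1 + dx + band.shape[1]]
    return out / 9.0

def denoise_rgb_band_by_band(rgb, denoise_band=mean_filter_3x3):
    """Apply a single-band denoiser to each band of an RGB image."""
    return np.stack([denoise_band(rgb[..., b]) for b in range(3)], axis=-1)

noisy = np.random.default_rng(1).normal(100.0, 10.0, size=(32, 32, 3))
den = denoise_rgb_band_by_band(noisy)
print(den.shape, den.std() < noisy.std())  # (32, 32, 3) True
```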

CFA 3.0 Results
Here, we will first present demosaicing of CFA 3.0 for clean images, which are collected under normal lighting conditions. We will then present demosaicing of low lighting images at two SNRs with and without denoising.

Demosaicing Clean Images
There are 14 methods in our study. The baseline and standard methods are mentioned in Section 2.2. The other 12 methods include two fusion methods, one deep learning method (Demonet + GSA), and nine pan-sharpening methods.
The three best methods used for F3 are Demonet + GSA, GSA, and GFPCA. The ATMF uses those three methods as well as Standard, PCA, GS, and PRACS.
From the PSNR and SSIM metrics in Table A2, the best performing algorithm is the Demonet + GSA method. The fusion methods F3 and ATMF have better scores in CIELAB, HVS, and HVSm. Figure 6 shows the averaged metrics for all images.
In subjective comparisons shown in Figure 7, we can see the performance of the three selected methods (Demonet + GSA, ATMF and F3) varies a lot. Visually speaking, Demonet + GSA has the best visual performance. There are some minor color distortions in the fence area of the lighthouse image for F3 and ATMF.

10 dBs SNR
There are three cases in this subsection. In the first case, we focus on the noisy images and there is no denoising. The second case performs denoising after the demosaicing operation. The third case performs denoising before the demosaicing operation.

• Case 1: No Denoising
There are 14 methods for demosaicing CFA 3.0. The F3 method fused the results of Standard, Demonet + GFPCA, and GFPCA, which are the best performing individual methods for this case. The ATMF fusion method used seven high performing methods: Standard, Demonet + GFPCA, GFPCA, Baseline, PCA, GS, and PRACS. Table A3 in Appendix A summarizes the PSNR, CIELAB, SSIM, HVS, and HVSm metrics. The PSNR and CIELAB values vary a lot, and none of the SSIM, HVS, and HVSm values are high.
The averaged PSNR, CIELAB, SSIM, HVS, and HVSm scores of all the 14 methods are shown in Figure 8. Big variations can be observed in the metrics.

The averaged PSNR, CIELAB, SSIM, HVS, and HVSm scores of all the 14 methods are shown in Figure 8. Big variations can be observed in the metrics. The demosaiced results of Images 1 and 8 are shown in Figure 9. There are color distortion, noise, and contrast issues in the demosaiced images. The demosaiced results of Images 1 and 8 are shown in Figure 9. There are color distortion, noise, and contrast issues in the demosaiced images.
It can be observed that, without denoising, all the algorithms fluctuate considerably and the demosaiced results are not satisfactory.

• Case 2: Denoising after Demosaicing
In this case, we applied demosaicing first, followed by denoising. The denoising algorithm is BM3D, applied one band at a time. The F3 method fused the results from Demonet + GFPCA, GFPCA, and GSA. ATMF fused results from Demonet + GFPCA, GFPCA, GSA, PCA, GLP, GS, and PRACS. From Table A4 in Appendix A, the averaged PSNR scores of Demonet + GFPCA and GFPCA are much higher than those of the other methods. The other methods also yielded scores around 4 dBs higher than the corresponding numbers in Table A3. Figure 10 illustrates the averaged performance metrics, which look much better than those in Figure 8. The denoised and demosaiced images of three methods are shown in Figure 11. We observe that the artifacts in Figure 9 have been reduced significantly. Visually speaking, the distortion in the images of Demonet + GFPCA is quite small for the fence area of Image 8.
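The band-by-band Poisson denoising step rests on the Anscombe variance-stabilizing transform that underlies the package cited earlier. A minimal sketch follows; the box filter is only a stand-in for the actual BM3D Gaussian-domain denoiser, and the released package applies an exact unbiased inverse rather than the simple algebraic inverse shown.

```python
import numpy as np

def anscombe(x):
    # Stabilizes Poisson variance to ~1, making the noise roughly Gaussian.
    return 2.0 * np.sqrt(np.maximum(x, 0.0) + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse; the released package uses an exact
    # unbiased inverse instead.
    return (y / 2.0) ** 2 - 3.0 / 8.0

def box_denoise(img, k=3):
    # Stand-in Gaussian-domain denoiser (BM3D in the actual pipeline):
    # a k x k moving average with edge padding.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def denoise_band_poisson(band):
    """VST -> Gaussian-domain denoising -> inverse VST, one band at a time."""
    return inverse_anscombe(box_denoise(anscombe(band)))

def denoise_rgb(img):
    # Band-by-band processing, as done for the RGB output in Case 2.
    return np.dstack([denoise_band_poisson(img[..., b]) for b in range(img.shape[-1])])
```

Swapping `box_denoise` for a real Gaussian denoiser such as BM3D recovers the pipeline used here.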
• Case 3: Denoising before Demosaicing
In this case, we first performed denoising and then demosaicing by pansharpening. The denoising is applied in two places. One is the luminance image, which is the image after interpolation. The other is the reduced resolution color image. Denoising using the approach of Akiyama et al. [46] is a good alternative and will be a good future direction. The F3 method fused the results from Standard, Demonet + GFPCA, and GSA. ATMF fused the results from Standard, Demonet + GFPCA, GSA, HCM, GFPCA, GLP, and PRACS. From Table A5, we can see that the Demonet + GFPCA algorithm yielded the best averaged PSNR score, which is close to 26 dBs. This is almost 6 dBs better than the corresponding numbers in Table A4 and 16 dBs more than those in Table A3. The other metrics in Table A5 are all significantly improved over Table A4. As we will explain later, denoising after demosaicing performs worse than denoising before demosaicing.
Figure 12 shows the averaged performance metrics. The metrics are significantly better than those in Figures 8 and 10. Figure 13 shows the demosaiced images of three methods. We can observe that the demosaiced images have better contrast than those in Figure 11. The Demonet + GFPCA method has less color distortion.
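The Case 3 data flow, with denoising threaded in at the two places named above, can be sketched schematically. Here `denoise` stands in for the Poisson BM3D step, `upsample` for the interpolation of the reduced-resolution color image, and the intensity-ratio (Brovey-like) step is a generic pansharpening stand-in, not the paper's specific pansharpening algorithm.

```python
import numpy as np

def demosaic_denoise_first(luma_full, color_lowres, denoise, upsample):
    """Case 3 data flow: denoise the interpolated luminance image and the
    reduced-resolution color image *before* pansharpening fuses them.

    luma_full    : HxW luminance (white-pixel) image after interpolation
    color_lowres : hxwx3 reduced-resolution color image
    denoise      : band-wise Poisson denoiser (BM3D in the paper)
    upsample     : function bringing color_lowres up to HxWx3
    """
    pan = denoise(luma_full)
    color = np.dstack([denoise(color_lowres[..., b]) for b in range(3)])
    color_up = upsample(color)
    # Intensity-ratio pansharpening stand-in: rescale each upsampled color
    # band by the ratio of the pan image to the color intensity.
    intensity = color_up.mean(axis=-1, keepdims=True)
    return color_up * (pan[..., None] / np.maximum(intensity, 1e-6))
```

Because both inputs to the fusion are cleaned first, noise never reaches the nonlinear pansharpening stage, which is the intuition the Discussions section gives for Case 3's advantage.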

20 dBs SNR
We have three cases here.
• Case 1: No Denoising (20 dBs SNR)
There are 14 methods. The F3 method fused the three best performing methods: Demonet + GFPCA, GFPCA, and PRACS. ATMF fused the seven best performing methods: Demonet + GFPCA, GFPCA, PRACS, Baseline, GSA, PCA, and GLP. From Table A6 in Appendix A, we can see that the averaged PSNR score of PRACS is the best, which is 21.8 dBs.
The average performance metrics are shown in Figure 14. The results are reasonable because there is no denoising capability in the demosaicing methods. Figure 15 shows the demosaiced images of three methods: GFPCA, ATMF, and F3. One can easily see some artifacts (color distortion).
• Case 2: Denoising after Demosaicing (20 dBs SNR)
From Table A7, we can observe that Demonet + GFPCA achieved the highest averaged PSNR score of 21.292 dBs. This is better than most of the PSNR numbers in Table A6, but only slightly better than the Demonet + GFPCA score (20.573 dBs) in Table A4 (10 dBs SNR case). This clearly shows that denoising has a more dramatic impact for the low SNR case than for the high SNR case. The other metrics in Table A7 are all improved over those in Table A6. Figure 16 shows the averaged performance metrics. The numbers are better than those in Figure 14. The demosaiced images of three methods are shown in Figure 17. We can see that the artifacts have been reduced as compared to Figure 15. The color distortions are still noticeable.
• Case 3: Denoising before Demosaicing (20 dBs SNR)
The F3 method fused the results of the three best performing methods: Standard, GSA, and GFPCA. ATMF fused the seven best performing methods: Standard, GSA, GFPCA, HCA, SFIM, GS, and HPM. From Table A8, we can see that F3 yielded 27.07 dBs of PSNR. This is 7 dBs better than the best method in Table A6 and 6 dBs better than the best method in Table A7. The other metrics in Table A8 are all improved over Table A7 quite significantly. This means that the location of denoising is quite critical for improving the overall demosaicing performance. Figure 18 shows the average performance metrics. The numbers are better than those in Figures 14 and 16. Figure 19 displays the demosaiced images of three selected methods. It is hard to say whether or not the demosaiced images in Figure 19 are better than those in Figure 17 because there are some color distortions.
As mentioned in Section 1, it is important to compare the three CFAs and answer the question: which is the best for low lighting images? Given that different algorithms were used for each CFA, selecting the best performing method for each CFA and comparing them against one another is a good strategy.
We evaluated the following algorithms for CFA 1.0 in our experiments. Three of them are deep learning based algorithms (Demonet, SEM, and DRL).

•
Bilinear [47].

Noiseless Case (Normal Lighting Conditions)
Here, we compare the performance of the CFAs in the noiseless case. The 12 clean Kodak images were used in our study. To save space, we do not provide the image-by-image performance metrics. Instead, we only summarize the averaged metrics of the different CFAs in Table 1 and Figure 20. In each cell of Table 1, we provide the metric values as well as the name of the best performing method for that metric. One can see that CFA 1.0 is the best in every performance metric, followed by CFA 2.0. CFA 3.0 has the worst performance. We had the same observation for CFA 1.0 and CFA 2.0 in our earlier studies [12].

10 dBs SNR
Table 2 and Figure 21 summarize the averaged performance metrics for the 10 dBs SNR case, drawing on the studies of CFA 3.0 in the earlier sub-sections and on our earlier paper [13] for CFAs 1.0 and 2.0. In Table 2, we include the name of the best performing algorithm. We have the following observations:
•
Without denoising, CFAs 1.0, 2.0, and 3.0 have big differences. CFA 2.0 is more than 4 dBs higher than CFA 1.0, and CFA 3.0 is 1.2 dBs lower than CFA 2.0.
•
Denoising improves the demosaicing performance independent of the denoising location. For CFA 1.0, the improvement over no denoising is 4 dBs; for CFA 2.0, the improvement ranges from 2.7 dBs to 5 dBs; for CFA 3.0, we see 0.57 dBs to 5.6 dBs of improvement in PSNR. We also see dramatic improvements in the other metrics.
•
Denoising after demosaicing is worse than denoising before demosaicing. For CFA 1.0, the improvement is 1.1 dBs with denoising before demosaicing; for CFA 2.0, the improvement is 2.1 dBs; for CFA 3.0, the improvement is over 5 dBs in PSNR.


20 dBs SNR
In Table 3 and Figure 22, we summarize the best results for the different CFAs under the different denoising/demosaicing scenarios presented in earlier sections. Some numbers for CFAs 1.0 and 2.0 in Table 3 came from our earlier paper [13]. The following observations can be drawn:
•
Without denoising, CFA 2.0 is the best, followed by CFA 3.0 and CFA 1.0.

•
Denoising improves the demosaicing performance in all scenarios. For CFA 1.0, the improvement is 2 to 4 dBs; for CFA 2.0, the improvement ranges from more than 1 dB to close to 5 dBs; for CFA 3.0, the improvement is 6 dBs in terms of PSNR. The other metrics are also improved with denoising.

•
Denoising after demosaicing is worse than denoising before demosaicing. For CFA 1.0, the improvement is 1.2 dBs with denoising before demosaicing; for CFA 2.0, the improvement is close to 4 dBs; for CFA 3.0, the improvement is close to 6 dBs in PSNR.

•
We observe that CFAs 2.0 and 3.0 definitely have advantages over CFA 1.0.
•
CFA 2.0 is better than CFA 3.0.


Discussions
Here, some qualitative analyses/explanations for the important findings in the preceding comparison sub-sections are provided:
• The reason denoising before demosaicing is better than after demosaicing
We explained this phenomenon in our earlier paper [13]. The reason is simply that noise is easier to suppress early than late. Once noise has propagated down the processing pipeline, it is harder to suppress due to nonlinear processing modules. For instance, the rectified linear units (ReLU) in some deep learning methods are nonlinear. We have seen similar noise behavior in our active noise suppression project for NASA. In that project [60,61], we noticed that noise near the source was suppressed more effectively than noise far away from the source.
• The reasons why CFA 2.0 and CFA 3.0 are better than CFA 1.0 in low lighting conditions
To the best of our knowledge, there is no theory explaining why CFA 2.0 and CFA 3.0 perform better than CFA 1.0. Intuitively, we agree with the inventors of CFA 2.0 that having more white pixels improves the sensitivity of the imager/detector. Here, we offer another explanation.
We use the bird image at the 10 dBs condition (Image 1 in Figure 6 of [13]) for explanation. Denoising was not used in the demosaicing process. Figure 23 contains the histograms of the residual images (residual = reference − demosaiced) for CFAs 1.0, 2.0, and 3.0; the means of the residuals are also computed. We can see that the histograms of CFA 2.0 and CFA 3.0 are centered near zero, whereas the histogram of CFA 1.0 is biased towards the right. This means that CFA 2.0 and CFA 3.0 are closer to the ground truth than CFA 1.0, because of their better light sensitivity.
• Why CFA 3.0 is NOT better than CFA 2.0 in low lighting conditions
We observe that CFA 3.0 is better than CFA 1.0, but is slightly inferior to CFA 2.0 in dark conditions, which means that having more white pixels can only improve the demosaicing performance to a certain extent. Too many white pixels means fewer color pixels, and this may degrade the demosaicing performance by introducing more color distortion. CFA 2.0 is the best compromise between sensitivity and color distortion.
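The bias reading of Figure 23 amounts to checking whether the mean of the residual image is near zero. A minimal check (our own helper, not the paper's code):

```python
import numpy as np

def residual_bias(reference, demosaiced, bins=50):
    """Histogram and mean of residual = reference - demosaiced.
    A mean far from zero signals a systematic brightness bias,
    as observed for CFA 1.0 in low light."""
    residual = np.asarray(reference, float) - np.asarray(demosaiced, float)
    hist, edges = np.histogram(residual, bins=bins)
    return residual.mean(), hist, edges
```

A positive residual mean means the demosaiced image is systematically darker than the reference, which is the rightward histogram shift seen for CFA 1.0.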


Conclusions
In this paper, we first introduced an RGBW pattern with 75% white pixels, 12.5% green pixels, and 6.25% red and blue pixels, known as CFA 3.0. Unlike a conventional RGBW pattern with 75% white pixels and the remaining pixels randomly assigned to red, green, and blue, our pattern is fixed. One key advantage of our pattern is that some of the algorithms for demosaicing CFA 2.0 can be easily adapted to CFA 3.0. Other advantages are mentioned in Section 1. We then performed extensive experiments to evaluate CFA 3.0 using clean and emulated low lighting images. After that, we compared the CFAs for various clean and noisy images. Using five objective performance metrics and subjective evaluations, it was observed that the demosaicing performance of CFA 2.0 and CFA 3.0 is indeed better than that of CFA 1.0. However, more white pixels do not guarantee better performance, because CFA 3.0 is slightly worse than CFA 2.0; CFA 3.0 carries less color information than CFA 2.0. Denoising further improves the demosaicing performance. In our research, we experimented with two denoising scenarios: before and after demosaicing. We observed a dramatic performance gain of more than 3 dBs in PSNR for the 10 dBs case when denoising was applied. One important observation is that denoising after demosaicing is worse than denoising before demosaicing. Another observation is that CFA 2.0 with denoising is the best performing configuration for low lighting conditions. One potential future direction is to investigate different denoising algorithms, such as color BM3D and deep learning based denoising algorithms [62]. Another direction is to investigate joint denoising and demosaicing for CFAs 2.0 and 3.0 directly; to date, joint denoising and demosaicing has mostly been done for CFA 1.0.
The extension of joint denoising and demosaicing to CFAs 2.0 and 3.0 may be non-trivial and needs some further research.

Acknowledgments:
We would like to thank the anonymous reviewers for their constructive comments and suggestions, which significantly improved the quality of our paper.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A