Article

Demosaicing of CFA 3.0 with Applications to Low Lighting Images

Applied Research LLC, Rockville, MD 20850, USA
* Author to whom correspondence should be addressed.
Sensors 2020, 20(12), 3423; https://doi.org/10.3390/s20123423
Submission received: 25 April 2020 / Revised: 1 June 2020 / Accepted: 15 June 2020 / Published: 17 June 2020

Abstract: Low lighting images usually contain Poisson noise, which is pixel amplitude-dependent. More panchromatic or white pixels in a color filter array (CFA) are believed to help the demosaicing performance in dark environments. In this paper, we first introduce a CFA pattern known as CFA 3.0 that has 75% white pixels, 12.5% green pixels, and 6.25% each of red and blue pixels. We then present algorithms to demosaic this CFA and demonstrate its performance for normal and low lighting images. In addition, a comparative study was performed to evaluate the demosaicing performance of three CFAs, namely the Bayer pattern (CFA 1.0), the Kodak CFA 2.0, and the proposed CFA 3.0. Using a clean Kodak dataset with 12 images, we emulated low lighting conditions by introducing Poisson noise into the clean images. In our experiments, normal and low lighting images were used. For the low lighting conditions, images with signal-to-noise ratios (SNRs) of 10 dBs and 20 dBs were studied. We observed that the demosaicing performance in low lighting conditions improved when there were more white pixels. Moreover, denoising can further enhance the demosaicing performance for all CFAs. The most important finding is that, for low lighting images, CFA 3.0 performs better than CFA 1.0 but is slightly inferior to CFA 2.0.

1. Introduction

Many commercial cameras have incorporated the Bayer pattern [1], which is also known as color filter array (CFA) 1.0. An example of CFA 1.0 is shown in Figure 1a. The pattern consists of repetitive 2 × 2 blocks, each containing two green pixels, one red pixel, and one blue pixel. To save cost, the Mastcam onboard the Mars rover Curiosity [2,3,4,5] also adopted the Bayer pattern. Building on CFA 1.0, Kodak researchers invented a red-green-blue-white (RGBW) pattern, or CFA 2.0 [6,7]. An example of the RGBW pattern is shown in Figure 1b. Each 4 × 4 block contains eight white pixels, four green pixels, and two red and two blue pixels. Numerous other CFA patterns have been invented in the past few decades [8,9,10].
Researchers working on CFAs believe that CFA 2.0 is more suitable for imaging in low lighting environments. Recently, some researchers [11] have further explored adding more white pixels to CFA 2.0. The new pattern has 75% white pixels, with the RGB pixels randomly distributed among the remaining 25% of pixels.
Motivated by the work in [11], we propose a simple CFA pattern in which the RGB pixels are evenly distributed instead of randomly placed. In particular, as shown in Figure 1c, each 4 × 4 block has 12 white pixels (75%), two green pixels (12.5%), and one red and one blue pixel (6.25% each). We identify this pattern as CFA 3.0. A fixed pattern has three key advantages over a random one, where each camera would have a different pattern. First, a fixed pattern allows a camera manufacturer to mass produce cameras without changing the RGBW pattern for each unit, which can save manufacturing cost significantly. Second, the demosaicing software can be the same in all cameras; otherwise, each camera would need demosaicing software tailored to its specific random pattern, which would seriously affect cost. Third, some of the demosaicing algorithms for CFA 2.0 can be applied with little modification. This can be seen by placing the standard demosaicing block diagrams for CFA 2.0 and CFA 3.0 side by side: the reduced resolution color image and the panchromatic image can be generated in the same way. As a result, the standard approach for CFA 2.0, all the pan-sharpening based algorithms for CFA 2.0, and the combined pan-sharpening and deep learning approaches for CFA 2.0 that we developed earlier in [12] can be applied to CFA 3.0.
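For illustration, the fixed 4 × 4 block can be built programmatically. This is a minimal sketch; the exact positions chosen for the R, G, and B samples below are an assumption for illustration, and the paper's Figure 1c defines the actual layout.

```python
import numpy as np

def cfa30_block():
    """Assumed 4x4 CFA 3.0 block: 12 white (75%), 2 green (12.5%),
    1 red and 1 blue (6.25% each). R/G/B positions are illustrative."""
    block = np.full((4, 4), 'W', dtype='<U1')
    block[0, 0] = 'R'
    block[0, 2] = 'G'
    block[2, 0] = 'G'
    block[2, 2] = 'B'
    return block

def cfa30_mask(height, width):
    """Tile the 4x4 block to cover a sensor of the given size."""
    block = cfa30_block()
    reps = (-(-height // 4), -(-width // 4))  # ceiling division
    return np.tile(block, reps)[:height, :width]
```

Because the pattern is fixed, the same mask can be generated for every camera, which is exactly the manufacturing advantage discussed above.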
In our recent paper on the demosaicing of CFA 2.0 (RGBW) [12], we compared CFA 1.0 and CFA 2.0 using IMAX and Kodak images and observed that CFA 1.0 was better than CFA 2.0. One may argue that this comparison was not fair, because the IMAX and Kodak datasets were not collected in low lighting conditions, whereas CFA 2.0 was designed for low lighting environments. Due to the dominance of white pixels in CFA 2.0, the SNR of the collected image is high, and hence CFA 2.0 should have better demosaicing performance in dark environments.
Recently, we systematically and thoroughly compared CFA 1.0 and CFA 2.0 under dark conditions [13]. We observed that CFA 2.0 indeed performed better under dark conditions. We also noticed that denoising can further improve the demosaicing performance.
The aforementioned discussions immediately lead to several questions concerning the different CFAs. First, how does one demosaic CFA 3.0? Although there are universal debayering algorithms [8,9,10], their codes are not accessible to the public or may require customization. Here, we propose quite a few algorithms that can demosaic CFA 3.0, which can be considered our first contribution. Second, will more white pixels in the CFA pattern help the demosaicing performance for low lighting images? In other words, will CFA 3.0 have any advantages over CFA 1.0 and CFA 2.0? Answering which of the three CFAs is most suitable for low lighting environments will be a good contribution to the research community. Third, low lighting images contain Poisson noise, and demosaicing by itself has no denoising capability. To improve the demosaicing performance, researchers usually carry out denoising and contrast enhancement, so it is important to know where denoising should be performed: before or after demosaicing. Which choice yields better overall image quality? Answering these questions will help designers of the next generation of cameras with adaptive denoising capability handle diverse lighting environments.
In this paper, we will address the aforementioned questions. After some extensive research and experiments, we found that some algorithms for CFA 2.0 can be adapted to demosaic CFA 3.0. For instance, the standard approach for CFA 2.0 is still applicable to CFA 3.0. The pan-sharpening based algorithms for CFA 2.0 [12] and deep learning based algorithms for CFA 2.0 [14] are also applicable to CFA 3.0. We will describe those details in Section 2. In Section 3, we will first present experiments to demonstrate that CFA 3.0 can work well for low lighting images. Denoising using block matching in 3D (BM3D) [15] can further enhance the demosaicing performance. We also summarize a comparative study that compares the performance of CFA 1.0, CFA 2.0, and CFA 3.0 using normal and emulated low lighting images. We have several important findings. First, having more white pixels does not always improve the demosaicing performance. CFA 2.0 achieved the best performance. CFA 3.0 performs better than CFA 1.0 and is slightly inferior to CFA 2.0. Second, denoising can further enhance the demosaicing performance in all CFAs. Third, we observed that the final image quality relies heavily on the location of denoising. In particular, denoising after demosaicing is worse than denoising before demosaicing. Fourth, when the SNR is low, denoising has more influence on demosaicing. Some discussions on those findings are also included. In Section 4, some remarks and future research directions will conclude our paper.

2. Demosaicing Algorithms

We will first review some demosaicing algorithms for CFA 2.0. We will then answer the first question mentioned in Section 1: how one can demosaic the CFA 3.0 pattern shown in Figure 1c. It turns out that some of the existing algorithms for CFA 2.0 can be used for CFA 3.0 with some minor modifications.

2.1. Demosaicing Algorithms for CFA 2.0

The baseline approach is a simple demosaicing operation on the CFA, followed by an upsampling of the reduced resolution color image shown in Figure 2 of [13]. The standard approach consists of four steps as shown in Figure 2 of [13]. Step 1 interpolates the luminance image with half of the white pixels missing. Step 2 subtracts the reduced color image from the down-sampled interpolated luminance image. Step 3 upsamples the difference image in Step 2. Step 4 fuses the full resolution luminance with the upsampled difference image in Step 3. In our implementation, the demosaicing of the reduced resolution color image is done using local directional interpolation and nonlocal adaptive thresholding (LDI-NAT) [16] and the pan interpolation is also done using LDI-NAT [16].
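As a sanity check on the data flow, the four steps above can be sketched with placeholder operators. The fusion rule shown (subtracting the upsampled luminance-minus-color difference from the full resolution luminance) is our reading of Steps 2-4, and the demosaicing, downsampling, and upsampling functions are passed in as arguments; the paper uses LDI-NAT for the actual interpolation and demosaicing.

```python
import numpy as np

def standard_approach(luma_full, bayer_reduced, demosaic, downsample, upsample):
    """Steps 2-4 of the standard approach. Step 1 (interpolating the
    luminance band) is assumed already done, yielding luma_full (H x W).
    demosaic maps the reduced Bayer mosaic to a reduced-resolution RGB
    image; downsample/upsample move between the two resolutions."""
    color_reduced = demosaic(bayer_reduced)                    # reduced-res RGB
    diff = downsample(luma_full)[..., None] - color_reduced    # Step 2
    diff_full = upsample(diff)                                 # Step 3
    return luma_full[..., None] - diff_full                    # Step 4: fuse
```

With identity operators and a constant image, the output reduces to the color image itself, which is a quick way to verify that the signs in Steps 2 and 4 cancel correctly.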
In our recent paper [12], a pan-sharpening approach, as shown in Figure 3 of [13], was proposed to demosaic CFA 2.0. The demosaicing of the reduced resolution color image is done using LDI-NAT [16]. The panchromatic (luminance) band with missing pixels is interpolated using LDI-NAT [16]. After those steps, pan-sharpening is performed to generate the full resolution color image. It should be noted that many pan-sharpening algorithms have been used in our experiments, including Principal Component Analysis (PCA) [17], Smoothing Filter-based Intensity Modulation (SFIM) [18], Modulation Transfer Function Generalized Laplacian Pyramid (GLP) [19], MTF-GLP with High Pass Modulation (HPM) [20], Gram–Schmidt (GS) [21], GS Adaptive (GSA) [22], Guided Filter PCA (GFPCA) [23], PRACS [24], and hybrid color mapping (HCM) [25,26,27,28,29].
In our recent paper [14], the pan-sharpening approach was improved by integrating it with deep learning. As shown in Figure 4 of [13], deep learning was incorporated in two places. First, it was used to demosaic the reduced resolution CFA image. Second, it was used to improve the interpolation of the pan band. We adopted a deep learning algorithm known as Demonet [30] and observed good performance improvement.
Moreover, the least-squares luma-chroma demultiplexing (LSLCD) [31] algorithm was used in our experiments for CFA 2.0.
In the past, we also developed two pixel-level fusion algorithms, known as fusion of three (F3) and alpha trimmed mean filter (ATMF), which were used in our earlier studies [12,13,14,32]. F3 fuses the three best performing algorithms, and ATMF fuses the seven high performing algorithms. These fusion algorithms are applicable to any CFA.

2.2. Demosaicing Algorithms for CFA 3.0

As opposed to the random color patterns in [11], the CFA 3.0 pattern in this paper is fixed. One key advantage is that some of the approaches for CFA 2.0 can be applied with little modification. For instance, the standard approach shown in Figure 2 of [13] for CFA 2.0 can be immediately applied to CFA 3.0, as shown in Figure 2. In each 4 × 4 block, the four R, G, and B pixels in the CFA 3.0 raw image are extracted to form a reduced resolution CFA image. Any standard demosaicing algorithm, such as those mentioned in Section 2.1, can then be applied; in our implementation, we used LDI-NAT [16] to demosaic the reduced resolution color image. The missing pan pixels are also interpolated using LDI-NAT [16] to create a full resolution pan image. The subsequent steps are the same as before.
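A minimal sketch of this extraction step follows, assuming a layout in which the four color samples of each 4 × 4 block sit on its even rows and columns, so they form a 2 × 2 Bayer sub-block; the paper's Figure 1c defines the actual positions.

```python
import numpy as np

def split_cfa30(raw):
    """Split a CFA 3.0 raw frame into a half-resolution Bayer mosaic and
    a panchromatic band with holes at the color-sample positions.
    Assumes color samples at the even rows/columns of each 4x4 block."""
    # Half-resolution Bayer mosaic: the 4x4 CFA 3.0 block maps onto a
    # 2x2 Bayer block when every other row and column is sampled.
    bayer = raw[0::2, 0::2]
    # Pan band: white pixels everywhere except the color positions,
    # which are marked missing and later filled by interpolation.
    pan = raw.astype(float).copy()
    mask = np.ones_like(pan, dtype=bool)
    mask[0::2, 0::2] = False          # color-sample positions
    pan[~mask] = np.nan               # to be interpolated (e.g., LDI-NAT)
    return bayer, pan, mask
```

The returned `bayer` image can then be fed to any CFA 1.0 demosaicer, and `pan` to any interpolation routine, exactly as the text describes.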
Similarly, the pan-sharpening approach for CFA 2.0 shown in Figure 3 of [13] can be applied to CFA 3.0, as shown in Figure 3. Here, the four R, G, and B pixels are extracted first, and then a demosaicing algorithm for CFA 1.0 is applied to the reduced resolution Bayer image; we used LDI-NAT [16]. For the pan band, any interpolation algorithm can be applied; we again used LDI-NAT. Afterwards, any of the pan-sharpening algorithms mentioned earlier can be used to fuse the pan image and the demosaiced reduced resolution color image into a full resolution color image. In our experiments, we used PCA [17], SFIM [18], GLP [19], HPM [20], GS [21], GSA [22], GFPCA [23], PRACS [24], and HCM [25] for pan-sharpening.
The hybrid deep learning and pan-sharpening approach for CFA 2.0 shown in Figure 4 of [13] can be extended to CFA 3.0, as shown in Figure 4. For the reduced resolution demosaicing step, the Demonet algorithm is used. In the pan band generation step, we also propose to apply Demonet. The details are similar to our earlier paper on CFA 2.0 [12]. Hence, we skip the details. After those two steps, a pan-sharpening algorithm is then applied. In our experiments, Demonet is combined with different pan-sharpening algorithms in different scenarios. For normal lighting conditions, GSA is used for pan-sharpening and we call this hybrid approach the Demonet + GSA method. For low lighting conditions, it is more effective to use GFPCA for pan-sharpening and we term this as the Demonet + GFPCA method.
The two fusion algorithms (F3 and ATMF) can be directly applied to CFA 3.0.

2.3. Performance Metrics

Five performance metrics were used in our experiments to compare the different methods and CFAs. These metrics are well-known in the literature.
  • Peak Signal-to-Noise Ratio (PSNR) [33]
    Separate PSNRs in dBs are computed for each band. A combined PSNR is the average of the PSNRs of the individual bands. Higher PSNR values imply higher image quality.
  • Structural SIMilarity (SSIM)
    In [34], SSIM was defined to measure the closeness between two images. An SSIM value of 1 means that the two images are the same.
  • Human Visual System (HVS) metric
    Details of HVS metric in dB can be found in [35].
  • HVSm (HVS with masking) [36]
    Similar to HVS, but HVSm incorporates visual masking effects in computing the metric.
  • CIELAB
    We also used CIELAB [37] for assessing demosaicing and denoising performance in our experiments.
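As a concrete illustration of the first metric, here is a minimal sketch, assuming 8-bit images (peak value 255), of the per-band PSNR and the combined PSNR described above:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two single-band images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def combined_psnr(ref_rgb, test_rgb):
    """Average of the per-band PSNRs, as used in this paper."""
    return float(np.mean([psnr(ref_rgb[..., b], test_rgb[..., b])
                          for b in range(3)]))
```

Higher values indicate better reconstruction; the combined score simply averages the three band scores rather than pooling the errors.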

3. Experiments

In Section 2, we answered the first question of how one can demosaic CFA 3.0. Here, we will answer the two remaining questions mentioned in Section 1. One is whether or not the new CFA 3.0 can perform well when demosaicing low lighting images. The other is whether CFA 3.0 has any advantages over the other two CFAs. Simply put, we will answer which of the three CFAs is the best for low light environments.

3.1. Data

A benchmark dataset (Kodak) was downloaded from a website (http://r0k.us/graphics/kodak/) and 12 images were selected. The images are shown in Figure 5 of [13]. We use them as reference images for generating objective performance metrics. In addition, noisy images emulating dark conditions were created from these clean images. It should be noted that the Kodak images were captured on film and then converted to digital images, so we can be certain that the images were not created using CFA 1.0. Many researchers in the demosaicing community have used the Kodak dataset in their studies.
Emulating images in low lighting conditions is important because ground truth (clean) images can then be used for performance assessment. In the literature, some researchers have used Gaussian noise to emulate low lighting images. We think the proper way to emulate low lighting images is with Poisson noise, because the noise introduced in low lighting images follows a Poisson distribution.
The differences between Gaussian and Poisson noise are explained as follows. Gaussian noise is additive, independent at each pixel, and independent of the pixel intensity; it is caused primarily by Johnson–Nyquist (thermal) noise [38]. Poisson noise is pixel intensity dependent and is caused by the statistical variation in the number of photons; it is also known as photon shot noise [39]. The number of photons arriving at a camera's detectors follows a Poisson distribution, hence the name. When the number of photons increases significantly, the noise statistics approach a Gaussian distribution by the central limit theorem. However, this transition from a Poisson to a Gaussian distribution does not mean that Poisson noise (photon noise) becomes Gaussian noise (thermal noise) when the number of photons increases; the shared terminology can be confusing. In short, the two noises come from different origins and have very different behaviors.
Poisson distribution has been widely used to characterize discrete events. For example, the arrival of customers to a bank follows a Poisson distribution; the number of phone calls to a cell phone tower also follows a Poisson distribution. For cameras, the probability density function (pdf) of photon noise in an image pixel follows a Poisson distribution, which can be mathematically described as,
P(k) = (λ^k e^(−λ)) / k!    (1)
where λ is the mean number of photons per pixel and P(k) is the probability of observing k photons. Based on the above pdf, the actual number of photons arriving at a detector pixel fluctuates around the mean λ, which characterizes the lighting conditions: a small λ implies low lighting and vice versa.
In statistics, when λ increases to a large number, the pdf in (1) will become a continuous pdf known as the Gaussian distribution, which is given by,
P(x) = (1/√(2πλ)) e^(−(x−λ)²/(2λ))    (2)
where x denotes the continuous noise variable and λ serves as both the mean and the variance, as in the Poisson distribution. In [40], the central limit theorem is used to connect (1) and (2) by assuming λ >> 1. The derivation of (2) from (1) can be found in [41]. Figure 5 [42] clearly shows that the Poisson distribution gradually approaches a Gaussian distribution as λ increases; when λ = 10, the Poisson pdf already looks like a Gaussian distribution.
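This convergence of (1) to (2) can be checked numerically. The sketch below is a hypothetical illustration, not code from the paper: it measures the largest gap between the Poisson pmf and its Gaussian approximation over a window around the mean, which shrinks as λ grows.

```python
import math

def poisson_pmf(k, lam):
    """Poisson pmf of Equation (1), computed in log space to avoid
    overflow for large lam."""
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def gaussian_pdf(x, lam):
    """Gaussian approximation of Equation (2): mean and variance both lam."""
    return math.exp(-(x - lam) ** 2 / (2 * lam)) / math.sqrt(2 * math.pi * lam)

def max_abs_gap(lam):
    """Largest |pmf - pdf| over a +/- 4 sigma window around the mean."""
    sigma = math.sqrt(lam)
    lo, hi = max(0, int(lam - 4 * sigma)), int(lam + 4 * sigma)
    return max(abs(poisson_pmf(k, lam) - gaussian_pdf(k, lam))
               for k in range(lo, hi + 1))
```

Running `max_abs_gap` for increasing λ shows the approximation tightening, consistent with the figure cited above.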
However, it must be emphasized here that although (2) follows the normal or Gaussian distribution, the noise is still photon shot noise, not Gaussian noise due to thermal noise.
In contrast, Gaussian noise (thermal noise) follows the following distribution,
P(z) = (1/(σ√(2π))) e^(−(z−μ)²/(2σ²))    (3)
where z is the noise variable, μ is the mean, and σ is the standard deviation. As mentioned earlier, Gaussian noise is thermal noise and is independent of pixel location and pixel intensity. To add Gaussian noise to a clean image I, one can use the Matlab function imnoise(I, 'gaussian', μ, σ²) with μ set to zero.
Here, we describe a little more about the imaging model in low lighting conditions. As mentioned earlier, Poisson noise is related to the average number of photons per pixel, λ. To emulate the low lighting images, we vary λ. It should be noted that the SNR in dB of a Poisson image is given by [40]:
SNR = 10 log10(λ)
A number of images with different levels of Poisson noise or SNR can be seen in the table in Appendix A.
The process of how we introduced Poisson noise is adapted from code written by Erez Posner (https://github.com/erezposner/Shot-Noise-Generator) and it is summarized as follows.
Given a clean image and the goal of generating a Poisson noisy image with a target signal-to-noise (SNR) value in dB, we first compute the full-well (FW) capacity of the camera, which is related to the SNR through:
FW = 10^(SNR/10)
For a pixel I(i,j), we then compute the average photons per pixel (λ) for that pixel by,
λ = I(i,j) × FW/255
where 255 is the maximum intensity of an 8-bit image. Using a Poisson random number generator based on Knuth's algorithm [43], we can generate an actual photon count k from the Poisson distribution described by Equation (1). This k value changes randomly with each new call to the generator.
Finally, the actual noisy pixel amplitude (In(i,j)) is given by:
In(i,j) = 255 × k/FW
A loop iterating over every (i,j) in the image will generate the noisy Poisson image with the target SNR value.
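Under the assumptions above, the whole emulation procedure can be sketched in a few lines, here as a vectorized re-interpretation of the cited shot-noise code (using NumPy's built-in Poisson sampler rather than Knuth's algorithm), not the original implementation.

```python
import numpy as np

def add_poisson_noise(image, target_snr_db, rng=None):
    """Emulate a low lighting image at a target SNR (in dB).
    image: array in [0, 255]; returns a noisy float image."""
    rng = np.random.default_rng() if rng is None else rng
    fw = 10.0 ** (target_snr_db / 10.0)      # full-well: FW = 10^(SNR/10)
    lam = image.astype(float) * fw / 255.0   # expected photons per pixel
    k = rng.poisson(lam)                     # photon counts, Equation (1)
    return 255.0 * k / fw                    # back to intensity units
```

A lower target SNR gives a smaller full-well capacity, hence fewer expected photons per pixel and visibly stronger amplitude-dependent noise.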
Although Gaussian and Poisson noise have completely different characteristics, it is interesting to determine when the two become visually indistinguishable. To that end, and to save space, we include noisy images between 20 dBs and 38 dBs. The Gaussian noise was generated using Matlab's noise generation function (imnoise), and the Poisson noise was generated following an open source code [44]. The SNRs were calculated by comparing the noisy images to the ground truth images. From Table A1 in Appendix A, we can see that when SNR values are less than 35 dBs, the two types of noisy images are visually different: Poisson images are slightly darker than Gaussian images. When SNR increases beyond 35 dBs, the two noisy images are almost indistinguishable.
From this study, we can conclude that 35-dB SNR is the threshold for differentiating Poisson noise (photon shot noise) from Gaussian noise (thermal). At 35 dBs, the average number of photons per pixel arriving at the detector is 3200 for Poisson noise and the standard deviation of the Gaussian noise is 0.0177. The image pixels are in double precision and normalized between 0 and 1.
To create a consistent level of noise close to our SNR levels of 10 dBs and 20 dBs, we followed a technique described in [44,45]. For each color band, we added Poisson noise separately. The noisy and low lighting images at 10 dBs and 20 dBs are shown in Figures 6 and 7 of [13], respectively.
In this paper, denoising is done via BM3D [15], which is a well-known method in the research community. The particular BM3D variant we used is specifically for Poisson noise, and we performed denoising band by band. The package is titled "Denoising software for Poisson and Poisson-Gaussian data," released on 16 March 2016 (http://www.cs.tut.fi/~foi/invansc/). We used this code as packaged; it requires nothing more than a single band noisy image as input. We also considered the standard BM3D package ("BM3D Matlab", http://www.cs.tut.fi/~foi/GCF-BM3D/, released on 16 February 2020), which can denoise 3-band RGB images; however, it assumes Gaussian noise and requires a noise level parameter.
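Since the Poisson-BM3D code operates on single-band images, color images are denoised one band at a time. A generic sketch of that loop follows; the denoiser is passed in as a callable because the BM3D binding itself is not reproduced here, so any single-band denoiser with the same signature would fit.

```python
import numpy as np

def denoise_bands(noisy_rgb, denoise_band):
    """Apply a single-band denoiser independently to each color band.
    noisy_rgb: H x W x C array; denoise_band: 2D array -> 2D array."""
    return np.stack([denoise_band(noisy_rgb[..., b])
                     for b in range(noisy_rgb.shape[-1])], axis=-1)
```

In the "denoising before demosaicing" cases discussed later, the same wrapper idea applies to the reduced resolution color image and the interpolated luminance band.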

3.2. CFA 3.0 Results

Here, we will first present demosaicing of CFA 3.0 for clean images, which are collected under normal lighting conditions. We will then present demosaicing of low lighting images at two SNRs with and without denoising.

3.2.1. Demosaicing Clean Images

There are 14 methods in our study. The baseline and standard methods are mentioned in Section 2.2. The other 12 methods comprise two fusion methods, one deep learning method (Demonet + GSA), and nine pan-sharpening methods.
The three best methods used for F3 are Demonet + GSA, GSA, and GFPCA. The ATMF uses those three methods as well as Standard, PCA, GS, and PRACS.
From the PSNR and SSIM metrics in Table A2, the best performing algorithm is the Demonet + GSA method. The fusion methods F3 and ATMF have better scores in CIELAB, HVS, and HVSm. Figure 6 shows the averaged metrics over all images.
In subjective comparisons shown in Figure 7, we can see the performance of the three selected methods (Demonet + GSA, ATMF and F3) varies a lot. Visually speaking, Demonet + GSA has the best visual performance. There are some minor color distortions in the fence area of the lighthouse image for F3 and ATMF.

3.2.2. 10 dBs SNR

There are three cases in this sub-section. In the first case, we focus on the noisy images without denoising. The second case applies denoising after the demosaicing operation, and the third applies denoising before demosaicing.
● Case 1: No Denoising
There are 14 methods for demosaicing CFA 3.0. The F3 method fused the results of Standard, Demonet + GFPCA, and GFPCA, which are the best performing individual methods for this case. The ATMF fusion method used the seven high performing methods: Standard, Demonet + GFPCA, GFPCA, Baseline, PCA, GS, and PRACS. Table A3 in Appendix A summarizes the PSNR, CIELAB, SSIM, HVS, and HVSm metrics. The PSNR and CIELAB values vary considerably, and none of the SSIM, HVS, and HVSm values are high.
The averaged PSNR, CIELAB, SSIM, HVS, and HVSm scores of all the 14 methods are shown in Figure 8. Big variations can be observed in the metrics.
The demosaiced results of Images 1 and 8 are shown in Figure 9. There are color distortion, noise, and contrast issues in the demosaiced images.
It can be observed that, if there is no denoising, all the algorithms have big fluctuations and the demosaiced results are not satisfactory.
● Case 2: Denoising after Demosaicing
In this case, we applied demosaicing first, followed by denoising. The denoising algorithm is BM3D, applied one band at a time. The F3 method fused the results from Demonet + GFPCA, GFPCA, and GSA. ATMF fused the results from Demonet + GFPCA, GFPCA, GSA, PCA, GLP, GS, and PRACS. From Table A4 in Appendix A, the averaged PSNR scores of Demonet + GFPCA and GFPCA are much higher than the rest. The other methods also yielded scores around 4 dBs higher than those in Table A3.
Figure 10 illustrates the averaged performance metrics, which look much better than those in Figure 8.
The denoised and demosaiced images of three methods are shown in Figure 11. We observe that the artifacts in Figure 9 have been reduced significantly. Visually speaking, the distortion in the images of Demonet + GFPCA is quite small for the fence area of Image 8.
● Case 3: Denoising before Demosaicing
In this case, we first performed denoising and then demosaicing by pan-sharpening. Denoising is applied in two places: to the luminance image after interpolation, and to the reduced resolution color image. Denoising using the approach of Akiyama et al. [46] is a good alternative and a promising future direction. The F3 method fused the results from Standard, Demonet + GFPCA, and GSA. ATMF fused the results from Standard, Demonet + GFPCA, GSA, HCM, GFPCA, GLP, and PRACS. From Table A5, we can see that the Demonet + GFPCA algorithm yielded the best averaged PSNR score, close to 26 dBs. This is almost 6 dBs better than the corresponding numbers in Table A4 and 16 dBs better than those in Table A3. The other metrics in Table A5 are all significantly improved over Table A4. As we will explain later, denoising after demosaicing performs worse than denoising before demosaicing.
Figure 12 shows the averaged performance metrics. The metrics are significantly better than those in Figure 8 and Figure 10.
Figure 13 shows the demosaiced images of three methods. We can observe that the demosaiced images have better contrast than those in Figure 11. The Demonet + GFPCA method has less color distortion.

3.2.3. 20 dBs SNR

We have three cases here.
● Case 1: No Denoising (20 dBs SNR)
There are 14 methods. The F3 method fused the three best performing methods: Demonet + GFPCA, GFPCA, and PRACS. ATMF fused the seven best performing methods: Demonet + GFPCA, GFPCA, PRACS, Baseline, GSA, PCA, and GLP. From Table A6 in Appendix A, we can see that the averaged PSNR score of PRACS is the best at 21.8 dBs.
The average performance metrics are shown in Figure 14. The results are understandable because demosaicing methods have no denoising capability. Figure 15 shows the demosaiced images of three methods: GFPCA, ATMF, and F3. One can easily see some artifacts (color distortion).
● Case 2: Denoising after Demosaicing (20 dBs SNR)
The F3 method performed pixel level fusion using the results of Demonet + GFPCA, GFPCA, and GLP. ATMF fused the results of Demonet + GFPCA, GFPCA, GLP, Standard, GSA, PCA, and GS. From Table A7, we can observe that Demonet + GFPCA achieved the highest averaged PSNR score of 21.292 dBs. This is better than most of the PSNR numbers in Table A6, but only slightly better than the Demonet + GFPCA score (20.573 dBs) in Table A4 (the 10 dBs SNR case). This clearly shows that denoising has a more dramatic impact in the low SNR case than in the high SNR case. The other metrics in Table A7 are all improved over those in Table A6.
Figure 16 shows the averaged performance metrics. The numbers are better than those in Figure 14.
The demosaiced images of three methods are shown in Figure 17. We can see that the artifacts visible in Figure 15 have been reduced, although the color distortions are still noticeable.
● Case 3: Denoising before Demosaicing (20 dBs SNR)
The F3 method fused the results of the three best performing methods: Standard, GSA, and GFPCA. ATMF fused the seven best performing methods: Standard, GSA, GFPCA, HCM, SFIM, GS, and HPM. From Table A8, we can see that F3 yielded a PSNR of 27.07 dBs. This is 7 dBs better than the best method in Table A6 and 6 dBs better than the best method in Table A7. The other metrics in Table A8 are all improved quite significantly over Table A7. This means that the location of denoising is quite critical for the overall demosaicing performance.
Figure 18 shows the average performance metrics. The numbers are better than those in Figure 14 and Figure 16.
Figure 19 displays the demosaiced images of three selected methods. It is hard to say whether the demosaiced images in Figure 19 are better than those in Figure 17, because both contain some color distortions.

3.3. Comparison of CFAs 1.0, 2.0, and 3.0

As mentioned in Section 1, it is important to compare the three CFAs and answer the question: which is the best for low lighting images? Given that different algorithms were used for each CFA, a good strategy is to select the best performing method for each CFA and compare those against one another.
We evaluated the following algorithms for CFA 1.0 in our experiments. Three of them are deep learning based algorithms (Demonet, SEM, and DRL).
  • Local Directional Interpolation and Nonlocal Adaptive Thresholding (LDI-NAT) [16].
  • Demosaicnet (Demonet) [30].
  • Fusion using 3 best (F3) [32].
  • Bilinear [47].
  • Malvar–He–Cutler (MHC) [47].
  • Directional Linear Minimum Mean Square-Error Estimation (DLMMSE) [48].
  • Lu and Tan Interpolation (LT) [49].
  • Adaptive Frequency Domain (AFD) [50].
  • Alternate Projection (AP) [51].
  • Primary-Consistent Soft-Decision (PCSD) [52].
  • Alpha Trimmed Mean Filtering (ATMF) [32,53].
  • Sequential Energy Minimization (SEM) [54].
  • Deep Residual Network (DRL) [55].
  • Exploitation of Color Correlation (ECC) [56].
  • Minimized-Laplacian Residual Interpolation (MLRI) [57].
  • Adaptive Residual Interpolation (ARI) [58].
  • Directional Difference Regression (DDR) [59].

3.3.1. Noiseless Case (Normal Lighting Conditions)

Here, we compare the performance of CFAs in the noiseless case. The 12 clean Kodak images were used in our study. To save space, we do not provide the image by image performance metrics. Instead, we only summarize the averaged metrics of the different CFAs in Table 1 and Figure 20. In each cell of Table 1, we provide the metric values as well as the name of the best performance method for that metric. One can see that CFA 1.0 is the best in every performance metric, followed by CFA 2.0. CFA 3.0 has the worst performance. We had the same observation for CFA 1.0 and CFA 2.0 in our earlier studies [12].

3.3.2. 10 dBs SNR

Table 2 and Figure 21 summarize the averaged performance metrics for the 10 dBs SNR case, drawing on Section 3.2 for CFA 3.0 and on our earlier paper [13] for CFAs 1.0 and 2.0. In Table 2, we also include the name of the best performing algorithm. We have the following observations:
  • Without denoising, CFAs 1.0, 2.0, and 3.0 differ considerably: CFA 2.0 is more than 4 dBs higher than CFA 1.0, and CFA 3.0 is 1.2 dBs lower than CFA 2.0.
  • Denoising improves the demosaicing performance regardless of where the denoising is applied. For CFA 1.0, the improvement over no denoising is 4 dBs; for CFA 2.0, the improvement ranges from 2.7 dBs to 5 dBs; for CFA 3.0, we see 0.57 dBs to 5.6 dBs of improvement in PSNR. We also see dramatic improvements in the other metrics.
  • Denoising after demosaicing is worse than denoising before demosaicing. Denoising before demosaicing is better by 1.1 dBs for CFA 1.0, by 2.1 dBs for CFA 2.0, and by over 5 dBs in PSNR for CFA 3.0.
  • One important finding is that CFAs 2.0 and 3.0 definitely have advantages over CFA 1.0.
  • CFA 2.0 is better than CFA 3.0.

3.3.3. 20 dBs SNR

In Table 3 and Figure 22, we summarize the best results for the different CFAs under the denoising/demosaicing scenarios presented in earlier sections. Some of the numbers for CFAs 1.0 and 2.0 in Table 3 came from our earlier paper [13]. The following observations can be drawn:
  • Without denoising, CFA 2.0 is the best, followed by CFA 3.0 and then CFA 1.0.
  • Denoising improves the demosaicing performance in all scenarios. For CFA 1.0, the improvement ranges from over 2 dBs to 4 dBs; for CFA 2.0, from more than 1 dB to nearly 5 dBs; for CFA 3.0, the improvement is 6 dBs in terms of PSNR. The other metrics also improve with denoising.
  • Denoising after demosaicing is worse than denoising before demosaicing. Denoising before demosaicing is better by 1.2 dBs for CFA 1.0, by close to 4 dBs for CFA 2.0, and by close to 6 dBs in PSNR for CFA 3.0.
  • We observe that CFAs 2.0 and 3.0 definitely have advantages over CFA 1.0.
  • CFA 2.0 is better than CFA 3.0.

3.4. Discussions

Here, we provide some qualitative analyses and explanations for the important findings in Section 3.3.2 and Section 3.3.3:
● Why denoising before demosaicing is better than denoising after demosaicing
We explained this phenomenon in our earlier paper [13]. The reason is simply that noise is easier to suppress early than late: once noise has propagated down the processing pipeline, it becomes harder to remove because of nonlinear processing modules, such as the rectified linear units (ReLU) used in some deep learning methods. We observed similar noise behavior in our active noise suppression project for NASA [60,61], in which noise near the source was suppressed more effectively than noise far away from the source.
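The role of nonlinearity can be made concrete with a toy pipeline. With purely linear placeholder stages (a 3×3 box-filter "denoiser" and a channel-replication "demosaicer", neither of which is an algorithm from this paper), the two denoising placements produce identical results; it is the nonlinear stages in real pipelines that open the gap between them.

```python
import numpy as np

def box_denoise(img):
    # Placeholder linear denoiser: 3x3 box filter built from padded shifts.
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def toy_demosaic(mosaic):
    # Placeholder linear "demosaic": replicate the mosaic into three channels.
    return np.repeat(mosaic[..., None], 3, axis=-1)

def denoise_before(mosaic):
    return toy_demosaic(box_denoise(mosaic))

def denoise_after(mosaic):
    rgb = toy_demosaic(mosaic)
    return np.stack([box_denoise(rgb[..., c]) for c in range(3)], axis=-1)
```

Because both placeholder stages are linear, `denoise_before` and `denoise_after` commute exactly; once a nonlinear demosaicer replaces `toy_demosaic`, the order of operations matters, which is why early denoising wins in practice.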
● The reasons why CFA 2.0 and CFA 3.0 are better than CFA 1.0 in low lighting conditions
To the best of our knowledge, we are not aware of any theory explaining why CFA 2.0 and CFA 3.0 have better performance than CFA 1.0. Intuitively, we agree with the inventors of CFA 2.0 that having more white pixels improves the sensitivity of the imager/detector. Here, we offer another explanation.
We use the bird image at the 10 dBs condition (Image 1 in Figure 6 of [13]) as an illustration; denoising was not used in the demosaicing process. Figure 23 contains three histograms of the residual images (residual = reference − demosaiced) for CFAs 1.0, 2.0, and 3.0, along with their means. The histograms of CFA 2.0 and CFA 3.0 are centered near zero, whereas the histogram of CFA 1.0 is biased toward the right, meaning that the CFA 2.0 and CFA 3.0 outputs are closer to the ground truth than that of CFA 1.0, thanks to their better light sensitivity.
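The residual analysis behind Figure 23 takes only a few lines; the function and the uniform test images below are illustrative, not taken from the paper's code. A residual mean near zero indicates an unbiased reconstruction, while a positive mean means the demosaiced output is darker than the reference, the kind of bias observed for CFA 1.0.

```python
import numpy as np

def residual_stats(reference, demosaiced, bins=50):
    # Residual image as defined in the text: residual = reference - demosaiced.
    residual = reference.astype(np.float64) - demosaiced.astype(np.float64)
    hist, edges = np.histogram(residual, bins=bins)
    return residual.mean(), hist, edges

ref = np.full((8, 8), 100.0)
dark = ref - 5.0  # a uniformly darker rendition (illustrative only)
mean_bias, hist, edges = residual_stats(ref, dark)
```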
● Why CFA 3.0 is NOT better than CFA 2.0 in low lighting conditions
We observe that CFA 3.0 is better than CFA 1.0, but is slightly inferior to CFA 2.0 in dark conditions, which means that adding white pixels improves the demosaicing performance only up to a certain point. Too many white pixels mean fewer color pixels, which can degrade the demosaicing performance through increased color distortion. CFA 2.0 strikes the best compromise between sensitivity and color distortion.

4. Conclusions

In this paper, we first introduced an RGBW pattern, known as CFA 3.0, with 75% of the pixels white, 12.5% green, and 6.25% each red and blue. Unlike a conventional RGBW pattern, in which 75% of the pixels are white and the remaining pixels are randomly red, green, and blue, our pattern is fixed. One key advantage of this pattern is that some of the algorithms for demosaicing CFA 2.0 can be easily adapted to CFA 3.0; other advantages are mentioned in Section 1. We then performed extensive experiments to evaluate CFA 3.0 using clean and emulated low lighting images, and compared the CFAs on various clean and noisy images. Using five objective performance metrics and subjective evaluations, we observed that the demosaicing performance of CFA 2.0 and CFA 3.0 is indeed better than that of CFA 1.0. However, more white pixels do not guarantee better performance: CFA 3.0 is slightly worse than CFA 2.0 because it retains less color information. Denoising further improves the demosaicing performance. In our research, we experimented with two denoising scenarios: before and after demosaicing. We saw a dramatic performance gain of more than 3 dBs in PSNR for the 10 dBs case when denoising was applied. One important observation is that denoising after demosaicing is worse than denoising before demosaicing. Another is that CFA 2.0 with denoising is the best performing combination for low lighting conditions.
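As a concrete illustration of the stated proportions, one plausible 4×4 tile is sketched below; the exact cell placement within the CFA 3.0 tile may differ from this sketch, but any 4×4 tile with 12 white, 2 green, 1 red, and 1 blue cell yields the 75/12.5/6.25/6.25 split.

```python
import numpy as np

# One plausible 4x4 tile realizing the CFA 3.0 proportions
# (12/16 = 75% white, 2/16 = 12.5% green, 1/16 = 6.25% each red and blue).
# The cell positions here are an assumption for illustration only.
TILE = np.array([
    ["W", "W", "W", "W"],
    ["W", "R", "W", "G"],
    ["W", "W", "W", "W"],
    ["W", "G", "W", "B"],
])

def cfa3_mask(height, width):
    # Tile the 4x4 pattern over a sensor of the requested size.
    reps = (-(-height // 4), -(-width // 4))  # ceiling division
    return np.tile(TILE, reps)[:height, :width]

mask = cfa3_mask(8, 8)
```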
One potential future direction is to investigate different denoising algorithms, such as color BM3D and deep learning based denoising algorithms [62]. Another is to investigate joint denoising and demosaicing for CFAs 2.0 and 3.0 directly. Notably, joint denoising and demosaicing has mostly been studied for CFA 1.0; its extension to CFAs 2.0 and 3.0 may be non-trivial and requires further research.

Author Contributions

C.K. conceived the overall concept and wrote the paper. J.L. implemented the algorithms and prepared all the figures and tables. B.A. helped with the Poisson noise generation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by NASA JPL under contract # 80NSSC17C0035. The views, opinions and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of NASA or the U.S. Government.

Acknowledgments

We would like to thank the anonymous reviewers for their constructive comments and suggestions, which significantly improved the quality of our paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Comparison of Gaussian and Poisson noisy images.
SNR | Images with Gaussian Noise (Thermal Noise) | Images with Poisson Noise (Photon Shot Noise)
20 dB | σ = 0.1 | λ (average number of photons per pixel) = 100
23 dB | σ = 0.0707 | λ = 200
26 dB | σ = 0.05 | λ = 400
29 dB | σ = 0.0354 | λ = 800
32 dB | σ = 0.025 | λ = 1600
35 dB | σ = 0.0177 | λ = 3200
38 dB | σ = 0.0125 | λ = 6400
(The example noisy images in the original table are not reproduced here.)
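The λ values in Table A1 follow directly from photon shot noise statistics: for a Poisson variable the variance equals the mean, so SNR = λ²/λ = λ, i.e., SNR(dB) = 10·log10(λ), and doubling the photon count gains 3 dB. A minimal sketch of the emulation (the function names are illustrative, not from the paper's code; intensities are assumed scaled so that λ is the mean photon count at full scale):

```python
import numpy as np

def poisson_snr_db(lam):
    # For a Poisson process, signal power / noise variance = lam**2 / lam = lam.
    return 10.0 * np.log10(lam)

def add_shot_noise(image01, lam, rng=None):
    # image01: clean image scaled to [0, 1]; lam: mean photons at full scale.
    rng = np.random.default_rng() if rng is None else rng
    return rng.poisson(image01 * lam) / lam
```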
Table A2. Performance metrics of 14 algorithms for clean images. Bold numbers indicate the best performing method in each row. Red numbers indicate the methods used in F3; red and green numbers together indicate the methods used in ATMF.
Image | Metrics | Baseline | Standard | Demonet + GSA | GSA | HCM | SFIM | PCA | GFPCA | GLP | HPM | GS | PRACS | F3 | ATMF | Best Score
Img1PSNR31.93633.89435.19234.12633.52633.12333.82833.15833.81033.06634.13033.46636.17137.71637.716
Cielab2.6592.3742.2502.3932.4812.6992.4452.7732.4292.7242.3512.4531.8531.5601.560
SSIM0.7390.8590.8320.8540.8310.8380.8450.8060.8530.8340.8580.8200.8390.8570.859
HVS28.23327.61029.42828.43828.30928.45328.48127.94728.35328.46828.40328.40732.17633.73133.731
HVSm29.76729.03131.05629.88129.83029.97529.93428.83429.82229.98329.83629.86834.06536.27436.274
Img2PSNR26.77130.58534.46430.58130.21129.91830.40630.13630.08929.89630.52130.23030.73732.02734.464
Cielab4.8603.7662.4163.8033.8743.9013.8913.2193.9063.9043.8303.9032.8262.7382.416
SSIM0.6850.8680.8850.8670.8560.8560.8450.8230.8570.8530.8500.8520.8250.8580.885
HVS23.97624.33130.06724.48624.30424.26424.72627.63724.39824.22824.40324.44528.56528.82230.067
HVSm25.51525.62932.74525.80325.66525.66626.07629.84025.72125.61325.70325.76231.72431.21532.745
Img3PSNR30.81533.01734.36932.99732.15632.42032.99534.05532.69832.39533.03732.64835.83836.88636.886
Cielab3.7583.3782.8363.3133.5353.4323.3242.9493.3453.4593.3033.3982.1961.9071.907
SSIM0.7860.8880.8800.8840.8700.8790.8770.8730.8780.8730.8770.8700.8830.8940.894
HVS27.08727.09928.64627.26627.08127.21127.40329.89727.19227.21827.36627.22132.69133.47233.472
HVSm28.86128.73430.76028.92828.89428.97629.06531.43528.87028.97329.02328.90035.47336.59536.595
Img4PSNR22.76226.98030.51127.49627.09026.80826.88426.87326.76226.77126.88426.93329.36529.54630.511
Cielab7.4845.4343.7755.3275.1785.3145.6624.8415.6645.3645.6445.3713.5853.5973.585
SSIM0.7520.9250.9460.9250.9190.9150.9010.8910.9130.9110.9030.9130.9140.9250.946
HVS20.31520.37025.42721.07721.06421.16020.98624.11721.09521.20320.86720.91826.65725.79926.657
HVSm21.99721.68227.75122.47622.52622.65622.37726.30322.54722.70622.24022.33729.82628.26129.826
Img5PSNR30.81634.10736.76233.95233.68633.55833.82534.54133.69433.46934.13233.76636.59937.04537.045
Cielab2.5682.1001.5932.1722.0542.1072.1801.9142.1322.1232.0702.1361.4881.4591.459
SSIM0.6680.8680.8580.8520.8590.8590.8450.7980.8550.8520.8590.8380.8160.8270.868
HVS27.73327.82431.73028.15528.08328.08328.34430.51428.15428.08328.13228.14633.62833.67033.670
HVSm29.44429.33534.09329.70729.67629.77229.91232.08529.75029.77529.66229.68836.53436.33336.534
Img6PSNR27.70630.87433.16831.03130.39130.38230.98031.38130.64730.27830.92630.60132.51133.12533.168
Cielab5.5554.6053.3934.7214.5284.4644.6573.7974.6194.5444.6984.5753.0043.0493.004
SSIM0.7110.8960.9090.8790.8770.8810.8640.8480.8820.8730.8600.8690.8700.8900.909
HVS24.67824.82327.01025.11424.87725.03125.03427.59925.15925.04725.09724.96728.98529.19329.193
HVSm26.35326.29328.95326.60626.47026.60326.52029.30626.66526.61126.59126.49531.64331.80331.803
Img7PSNR30.44634.51738.65834.46934.08133.70134.35133.76733.91733.68034.39134.18334.38935.69138.658
Cielab3.6392.7511.6872.7732.8092.8552.7992.5012.8542.8572.7852.8412.1412.0781.687
SSIM0.7310.9040.9200.9030.8960.8940.8970.8530.8940.8910.8970.8920.8610.8940.920
HVS27.96828.39534.88528.40928.32328.19928.49732.01728.32128.15928.41128.41532.58432.35734.885
HVSm29.53829.68737.76129.69629.66129.57929.80034.46129.62829.51729.70629.72736.02034.75237.761
Img8PSNR26.93930.74833.68231.07830.41930.25330.56830.31930.39030.12330.67030.43933.47933.70033.700
Cielab4.6973.7072.7073.5663.7693.7043.7583.2003.7463.7343.7073.8122.4072.4412.407
SSIM0.7330.9000.9100.8990.8850.8900.8830.8600.8910.8860.8880.8770.8800.8900.910
HVS24.46024.08728.28525.01324.85424.96925.03928.84525.00424.99624.86624.88331.04529.95531.045
HVSm26.14125.46130.69026.45326.37826.49626.47330.94826.47626.51926.27726.33534.11332.14634.113
Img9PSNR29.77532.26834.97432.68232.11731.74232.67633.78332.31631.66932.66832.31835.22035.62035.620
Cielab3.0622.7052.0172.5922.6012.9112.5612.2022.6692.9742.5662.5951.7071.6831.683
SSIM0.5080.6340.6430.6370.6230.6230.5820.6150.5770.5640.5820.6160.6240.6310.643
HVS26.32926.02829.82326.75326.63226.80826.74830.15026.82326.82026.77926.62132.38932.58832.588
HVSm27.95527.48231.90628.23428.18128.36228.22831.98728.33128.36528.26428.11535.38735.45335.453
Img10PSNR27.05430.35433.93130.54729.97029.88530.35031.17730.01429.82230.30930.11831.81932.68933.931
Cielab4.8083.9752.5523.9303.9273.9153.9913.2234.0753.9403.9593.9362.6252.5982.552
SSIM0.6870.8670.8680.8670.8560.8570.8320.8020.8580.8530.8550.8480.8250.8480.868
HVS24.18424.13528.93624.51724.44024.45924.45028.39324.50824.44124.51524.45829.66129.39629.661
HVSm25.79625.52131.33625.92825.96325.96925.86730.35725.93125.93625.93525.91033.18232.16333.182
Img11PSNR29.02732.01133.45832.23431.70331.70732.12131.68231.83531.65532.14331.68733.20933.70233.702
Cielab4.2823.5563.0043.5293.6543.6063.5453.4123.6283.6273.5433.6052.7532.6862.686
SSIM0.7220.8820.8940.8830.8660.8750.8750.8400.8750.8710.8760.8620.8610.8770.894
HVS26.76326.32028.59627.14327.13427.17527.08028.74427.17727.21527.08527.08930.41330.44530.445
HVSm28.41727.77830.48828.62628.70828.72428.54830.40228.69328.76228.55228.58632.99732.89932.997
Img12PSNR25.84528.45130.77629.11528.77928.76928.79629.17128.80728.73328.78228.71230.16929.93930.776
Cielab4.5253.6692.7413.5583.6103.6213.7863.1763.7073.6493.7833.6202.5882.6642.588
SSIM0.7700.9090.9250.9100.9030.9020.8800.8830.8910.8890.8800.9020.8950.9000.925
HVS24.16823.29027.63824.59024.67424.65824.38427.56124.58424.65224.38524.52128.89928.10228.899
HVSm25.80724.72829.84326.10026.21926.22725.84229.62526.11526.21125.84326.02631.99030.70531.990
AveragePSNR28.32431.48434.16231.69231.17831.02231.48231.67031.24830.96331.55031.25833.29233.97434.162
Cielab4.3253.5022.5813.4733.5023.5443.5503.1013.5643.5753.5203.5202.4312.3722.372
SSIM0.7080.8670.8730.8630.8530.8560.8440.8240.8520.8460.8490.8460.8410.8570.873
HVS25.49125.35929.20625.91325.81525.87225.93128.61825.89725.87825.85925.84130.64130.62830.641
HVSm27.13226.78031.44927.37027.34727.41727.38730.46527.37927.41427.30327.31233.58033.21733.580
Table A3. Performance metrics of 14 algorithms at 10 dBs SNR. Bold numbers indicate the best performing method in each row. Red numbers indicate the methods used in F3; red and green numbers together indicate the methods used in ATMF.
Image | Metrics | Baseline | Standard | Demonet + GFPCA | GSA | HCM | SFIM | PCA | GFPCA | GLP | HPM | GS | PRACS | F3 | ATMF | Best Score
Img1PSNR14.00316.93816.68413.05011.7549.86813.25019.72212.6299.86913.11513.62418.23315.05119.722
Cielab16.95616.67512.62619.26922.97430.93118.5508.45620.41130.92818.90717.82311.41714.3728.456
SSIM0.2530.2550.2400.2240.1930.1340.2270.3370.2090.1360.2250.2520.3030.3000.337
HVS8.43311.45111.2147.4836.1794.2887.66714.2747.0644.2877.5598.04912.4639.41614.274
HVSm8.46811.52111.2747.5116.2004.3027.69614.3707.0904.3027.5888.07912.5339.45414.370
Img2PSNR14.13817.55815.97413.94711.82611.35213.44820.15113.62611.23613.98814.15718.11315.23620.151
Cielab13.8669.67411.08914.47218.71719.84115.0366.29114.99720.15214.12214.1047.96711.4746.291
SSIM0.3120.4920.4560.4840.4130.3970.4700.4780.4750.3920.4830.4710.5040.4600.504
HVS9.38112.13611.3609.1437.0456.5798.64615.2548.8326.4639.1679.35313.08010.38115.254
HVSm9.47812.28711.4569.2117.0906.6208.70715.5368.8966.5039.2379.42713.23810.47615.536
Img3PSNR15.79520.11518.59814.27811.92210.05214.55219.59014.17312.77114.48315.55720.33716.97620.337
Cielab14.50011.63610.82417.16623.32031.08416.4038.64817.41020.67116.53814.9499.37212.0388.648
SSIM0.3770.3510.3830.3400.2560.1600.3430.4370.3400.2980.3410.3730.4170.4240.437
HVS10.59714.94413.6889.0896.7244.8469.37014.4828.9887.5779.30110.36414.86811.76614.944
HVSm10.67215.16513.8119.1416.7574.8709.42714.6219.0407.6159.35810.43315.03111.84715.165
Img4PSNR10.08814.21114.39110.30610.01210.03910.17818.43310.20410.04210.31910.20615.78211.78618.433
Cielab24.01212.19914.54124.31625.08024.84023.8198.16524.68324.82323.43124.2109.89217.3608.165
SSIM0.2350.4140.4830.3630.3390.3430.3460.5540.3560.3440.3550.3350.5180.4030.554
HVS5.4109.01410.0085.5765.2915.3255.44013.7115.4905.3265.5775.48310.9627.03213.711
HVSm5.5379.28210.2365.6855.3955.4295.55114.2825.5975.4315.6915.59611.2857.18614.282
Img5PSNR16.91621.22417.47014.16311.11411.39314.69322.39313.6769.98414.24916.72920.79917.82322.393
Cielab10.3607.3019.64614.06920.71119.93513.0495.18414.89124.43013.75110.6566.5148.8635.184
SSIM0.2670.3110.2960.2970.2440.2510.3000.3710.2900.2130.2950.3180.3500.3410.371
HVS12.64416.25813.3669.9506.9057.19010.44918.0729.4705.78010.00912.47616.41413.53518.072
HVSm12.75816.52413.46410.0076.9357.22210.51318.3299.5225.80410.06912.57716.61513.64718.329
Img6PSNR17.72620.07618.56716.21013.23814.56016.05422.63616.13110.26816.37417.43321.51518.79022.636
Cielab13.17012.02511.70815.27621.06517.83614.9156.50515.38033.38814.55413.6818.69110.5606.505
SSIM0.3160.3900.3800.3900.2940.3450.3870.4420.3870.1070.3900.4010.4390.4220.442
HVS13.26616.10214.38211.7518.82210.14811.62318.03611.6905.85911.93912.93317.16814.26518.036
HVSm13.48216.47914.57611.8828.89110.23811.75518.50611.8205.90312.08013.11017.55414.48318.506
Img7PSNR19.03622.67918.00318.81717.64918.02419.39422.67918.98418.43919.21619.40822.20020.06522.679
Cielab9.9926.90510.54810.27211.42010.9489.6375.47010.12210.5649.7659.8265.9568.3125.470
SSIM0.3070.4020.3410.3970.3830.3840.3980.3930.3940.3890.3980.4000.4170.4040.417
HVS14.64818.35013.82214.50113.34413.73015.07018.33714.66914.13014.87415.03717.86015.67318.350
HVSm14.80718.70113.92414.65713.45913.86315.24818.61914.84214.27915.04515.20818.10615.84018.701
Img8PSNR11.58115.17817.59011.73410.78810.04111.97120.68211.64410.04211.76511.63317.69613.33220.682
Cielab21.35713.55710.49221.18424.37227.25820.2006.49221.49227.25920.77721.4509.44516.1276.492
SSIM0.2270.3710.4000.3220.2710.2300.3270.4520.3190.2300.3190.2990.4370.3570.452
HVS6.5929.66712.8416.7235.7855.0416.95615.7296.6405.0416.7516.62712.5218.26015.729
HVSm6.6519.77212.9816.7725.8265.0787.00816.0306.6885.0786.8026.67812.6708.33016.030
Img9PSNR10.05311.09014.20810.06810.06210.06410.02717.47410.07110.06510.02610.06614.00111.04817.474
Cielab17.17615.67610.48117.23217.27017.50917.1097.03717.24217.51317.10417.20710.21614.3927.037
SSIM0.1940.2510.2920.2660.2590.2580.2670.3140.2650.2580.2670.2570.3080.2810.314
HVS5.5046.4729.7335.5175.5145.5245.47912.8835.5235.5245.4795.5139.4056.47912.883
HVSm5.5306.5059.7765.5405.5375.5475.50212.9655.5465.5475.5025.5379.4486.50612.965
Img10PSNR13.62519.23917.81513.49311.73612.04013.34819.48313.28912.15813.64513.87620.14215.46520.142
Cielab16.1949.27110.11516.75020.87619.97416.6447.48717.18819.66916.11115.9377.39212.2917.392
SSIM0.2640.3440.3950.3540.2940.3090.3500.4330.3490.3140.3540.3490.4220.3870.433
HVS9.66215.27914.1009.5017.7668.0779.37015.3989.3128.1949.6599.88316.33311.46716.333
HVSm9.76915.65714.2679.5847.8258.1389.45515.6639.3918.2569.7489.97716.67811.58916.678
Img11PSNR14.82519.45815.17814.24011.08110.05314.25518.31714.17210.05314.34914.90118.14415.61019.458
Cielab14.90310.78314.28816.15724.62828.96715.8649.10816.30428.96615.68714.9069.86413.0279.108
SSIM0.3210.4210.3650.4000.2700.2090.3970.4250.3970.2100.3990.4070.4380.4140.438
HVS9.65214.35010.0089.0355.8624.8309.06613.1518.9744.8309.1609.69712.65910.37214.350
HVSm9.71714.53110.0669.0855.8884.8529.11713.2729.0244.8529.2129.75612.76610.43714.531
Img12PSNR12.44316.40416.74812.54511.35711.46212.64718.65312.39711.70412.57912.52917.47213.97118.653
Cielab19.3439.54911.38019.45823.07922.68118.7468.61319.89721.88718.92719.4108.79514.9488.613
SSIM0.2840.4350.4570.3790.3070.3170.3790.5110.3730.3330.3760.3680.4970.4220.511
HVS7.78511.52412.3287.8266.6456.7527.93414.1997.6866.9947.8667.82012.8149.29614.199
HVSm7.86111.68212.4557.8886.6966.8037.99914.4157.7467.0487.9307.88512.9769.38214.415
AveragePSNR14.18617.84716.76913.57111.87811.57913.65120.01813.41611.38613.67614.17618.70315.42920.018
Cielab15.98611.27111.47817.13521.12622.65016.6647.28817.50123.35416.63916.1808.79312.8147.288
SSIM0.2800.3700.3740.3510.2940.2780.3490.4290.3460.2690.3500.3530.4210.3850.429
HVS9.46512.96212.2378.8417.1576.8618.92215.2948.6956.6678.9459.43613.87910.66215.294
HVSm9.56113.17612.3578.9137.2086.9148.99815.5518.7676.7189.0229.52214.07510.76515.551
Table A4. Performance metrics of 14 algorithms at 10 dBs SNR (Poisson noise). Bold numbers indicate the best performing method in each row. Red numbers indicate the methods used in F3; red and green numbers together indicate the methods used in ATMF.
Image | Metrics | Baseline | Standard | Demonet + GFPCA | GSA | HCM | SFIM | PCA | GFPCA | GLP | HPM | GS | PRACS | F3 | ATMF | Best Score
Img1PSNR13.05413.76320.41913.82112.3599.85913.48222.57013.6709.90213.53713.45118.18014.65522.570
Cielab18.64716.9767.82916.87020.59030.67617.4676.31217.24430.44717.38017.6919.78615.0066.312
SSIM0.2630.2980.3780.3000.2510.1350.2870.3930.3020.1380.2890.2820.3730.3200.393
HVS7.4778.17814.9628.2336.7734.2727.91717.2198.0814.3147.9777.86712.6549.07717.219
HVSm7.5008.20415.0408.2586.7924.2867.94117.3508.1054.3278.0027.89112.7069.10617.350
Img2PSNR14.69114.24522.16414.24312.35411.94513.75820.23613.94611.80714.35814.44618.37115.21322.164
Cielab12.51313.3035.07613.36316.93717.87713.9876.19013.86418.21912.98313.0157.65811.5645.076
SSIM0.2980.4020.3590.4010.3400.3300.3790.3410.3980.3240.3940.3770.3800.3890.402
HVS9.9849.44918.1079.4577.5867.1769.00915.7429.1657.0399.5979.68113.74610.44918.107
HVSm10.0749.52318.5429.5277.6377.2229.07416.0249.2317.0849.6709.75613.91610.53518.542
Img3PSNR14.19915.32223.47715.26112.85510.31115.11623.32215.23813.88415.16114.94819.90916.34723.477
Cielab16.57314.4596.13714.56019.88129.32714.6956.05014.63417.25914.62115.1478.33312.5546.050
SSIM0.3560.4170.5100.4170.3220.1700.4020.5060.4230.3760.4040.3960.4950.4410.510
HVS9.00110.10618.70310.0437.6485.0989.94318.38010.0158.6669.9879.74214.81111.15918.703
HVSm9.04810.16018.94410.0967.6845.1249.99718.63610.0678.70810.0419.79314.93011.22418.944
Img4PSNR10.1896.94418.73210.50510.12610.28410.37217.65810.45010.17010.52610.30514.97511.56018.732
Cielab23.40140.5958.48523.19124.11423.60622.8909.35723.44223.93722.46323.33412.09419.0788.485
SSIM0.2690.0690.5680.3740.3460.3590.3540.5490.3700.3500.3640.3360.5270.4190.568
HVS5.5332.23714.7235.7765.4065.5555.68413.3275.7255.4425.8365.59610.4396.86014.723
HVSm5.6462.30415.2105.8855.5085.6605.79413.7265.8335.5455.9475.70410.6666.98915.210
Img5PSNR11.88714.89620.73514.89911.74711.95514.29420.44515.31410.39914.59212.93818.30315.65320.735
Cielab18.19512.3496.01212.38418.59818.10113.1956.19111.79622.65712.71915.8247.91911.0706.012
SSIM0.1910.2850.2900.2870.2200.2310.2690.2890.2970.1820.2720.2400.2970.2880.297
HVS7.68110.66116.57510.6677.5347.74010.08816.26311.0806.18910.3818.72214.09911.43316.575
HVSm7.71610.71516.72810.7187.5667.77110.13516.41511.1356.21410.4328.76014.19411.49216.728
Img6PSNR17.14518.93122.06219.16015.67317.54418.98022.51019.09210.48218.89618.88421.25619.64222.510
Cielab12.34410.5556.88810.28114.51211.81310.3886.35010.40831.44310.51810.5017.3209.2526.350
SSIM0.2700.3620.2910.3680.2930.3450.3490.2970.3730.0710.3440.3360.3320.3510.373
HVS12.76314.42217.95514.62711.25213.08414.57218.35414.5666.07314.51014.41516.94515.19218.354
HVSm12.92214.63418.37414.85511.36013.24114.79918.84014.7876.12014.73414.63617.29115.44118.840
Img7PSNR20.80421.55928.58721.51320.21920.42821.58527.92721.19120.55721.38321.16526.30622.67828.587
Cielab7.7137.2113.2557.2578.0337.9147.1113.5117.4707.8367.2047.5014.0896.1803.255
SSIM0.3100.4060.3320.4060.3950.4010.3930.3250.4070.4000.3920.3790.3720.3920.407
HVS16.52617.13025.75017.08715.84716.03617.27624.61916.77916.16517.05916.79722.40418.35525.750
HVSm16.71817.33127.23817.28415.99316.18817.48325.79916.96216.32217.25716.98523.04318.61327.238
Img8PSNR12.23812.29519.20512.28511.17610.36111.77018.26812.36110.39711.97012.30016.07613.16419.205
Cielab19.15719.0987.79419.12622.50525.51020.3958.56518.96325.37219.81019.08111.23116.6777.794
SSIM0.2340.2850.3470.2840.2300.1900.2500.3310.2940.1930.2590.2680.3340.2960.347
HVS7.2697.27914.5307.2726.1725.3546.78413.4407.3475.3906.9837.30111.1618.17114.530
HVSm7.3267.33314.7057.3256.2165.3946.83413.5967.4005.4297.0347.35511.2608.23314.705
Img9PSNR9.9749.18717.49310.20410.29810.17710.15516.88510.14810.15510.16610.07214.21411.16517.493
Cielab17.00918.8606.90216.51916.31416.81116.4877.36516.66916.86916.46016.7849.82214.3926.902
SSIM0.1870.2060.2730.2270.2260.2260.2260.2630.2300.2250.2260.2130.2550.2360.273
HVS5.4344.63913.0225.6525.7485.6305.61812.3525.5955.6075.6295.5239.6796.61913.022
HVSm5.4544.65613.0845.6725.7685.6505.63812.4135.6155.6275.6495.5439.7166.64213.084
Img10PSNR13.84614.15919.04414.50212.74912.48614.40120.42114.36812.86314.43214.26017.68515.20720.421
Cielab15.26914.7967.78914.23017.67318.27514.1286.74514.52417.39014.11714.5559.09012.6236.745
SSIM0.2570.3230.3260.3340.2790.2780.3210.3350.3370.2910.3190.3040.3420.3320.342
HVS9.91010.16915.34010.5038.7808.51110.45116.64510.3728.88710.48710.28613.80611.25116.645
HVSm10.00410.26215.56810.6058.8518.58010.55316.97010.4698.96010.59010.38513.98311.36416.970
Img11PSNR14.15115.44917.67415.39912.93310.05515.31216.68815.44410.13715.30714.75616.56215.62217.674
Cielab15.53413.2629.88113.35018.34228.57913.32110.89313.31228.18013.32914.41111.15412.7139.881
SSIM0.2510.3310.2540.3320.2550.1280.3170.2410.3440.1330.3160.2940.2860.3150.344
HVS8.97210.24712.55410.1967.7244.83210.16111.59010.2414.91410.1569.56211.41310.46012.554
HVSm9.02310.31012.65710.2577.7624.85610.22411.67810.3024.93810.2189.61711.49310.52512.657
Img12PSNR12.46113.28817.28813.31811.75812.12013.14216.75013.28112.22213.09512.84215.66013.83517.288
Cielab18.95416.97110.78416.90321.17320.05417.01611.56417.03519.75817.13018.01212.75615.71010.784
SSIM0.2570.3500.4160.3520.2680.2940.3320.4040.3570.3000.3300.3140.4000.3600.416
HVS7.8118.57813.1138.6097.0537.4108.46512.4508.5727.5118.4188.15011.2069.18913.113
HVSm7.8808.65113.2498.6837.1107.4708.53912.5808.6457.5738.4918.21911.3099.26813.249
AveragePSNR13.72014.17020.57314.59312.85412.29414.36420.30614.54211.91414.45214.19718.12515.39520.573
Cielab16.27616.5367.23614.83618.22320.71215.0907.42414.94721.61414.89415.4889.27113.0687.236
SSIM0.2620.3110.3620.3400.2850.2570.3230.3560.3440.2490.3260.3120.3660.3450.366
HVS9.0309.42516.2789.8438.1277.5589.66415.8659.7957.1839.7529.47013.53010.68516.278
HVSm9.1099.50716.6129.9318.1877.6209.75116.1699.8797.2379.8399.55413.70910.78616.612
Table A5. Performance metrics of 14 algorithms at 10 dBs SNR (Poisson noise). Bold numbers indicate the best performing method in each row. Red numbers indicate the methods used in F3; red and green numbers together indicate the methods used in ATMF.
Image | Metrics | Baseline | Standard | Demonet + GFPCA | GSA | HCM | SFIM | PCA | GFPCA | GLP | HPM | GS | PRACS | F3 | ATMF | Best Score
Img1PSNR21.41321.57625.34821.57521.54220.81121.47221.40821.62021.25321.47821.52022.80221.63625.348
Cielab7.0036.9685.2856.9707.0127.0276.8627.1856.9777.0326.9646.9756.2316.9245.285
SSIM0.4080.4270.3840.4270.4200.4300.4250.4140.4310.4290.4250.4200.4220.4250.431
HVS15.98316.04919.96016.05116.03816.07615.86115.91716.08916.08215.97216.03317.30016.13619.960
HVSm16.12116.16320.23516.16516.15416.18815.96916.02916.19916.19416.08516.15417.44216.24820.235
Img2PSNR23.32624.74925.72024.72824.64624.67124.47324.48124.71324.67924.52024.51925.43024.82825.720
Cielab5.1444.7963.7064.8274.9074.8484.8574.1954.8354.8504.8454.8874.2034.5803.706
SSIM0.3550.5050.3990.5040.4990.5030.5000.4510.5040.5020.4990.4830.4800.4930.505
HVS19.12619.93221.43419.98619.91720.01919.86619.86520.06720.02219.74319.88720.88320.21221.434
HVSm20.00420.64322.70720.68220.62920.73720.53020.66720.78420.74420.40920.61421.78120.95722.707
Img3PSNR28.28729.29028.49829.28729.04429.15729.20129.84129.17629.12729.20129.05929.50329.55729.841
Cielab4.9984.9084.6594.9105.0394.7754.9504.3764.8574.7784.9464.9284.5744.6794.376
SSIM0.5350.5660.5230.5660.5580.5660.5620.5740.5650.5640.5620.5580.5630.5670.574
HVS24.18924.56624.45324.56924.49024.33324.58025.61224.28224.28624.52124.49325.24225.13625.612
HVSm25.46825.70125.51325.70225.65625.48925.69726.83425.43225.44225.63325.64626.42326.31926.834
Img4PSNR18.11519.49120.72519.49119.31519.45519.08119.58419.49219.47219.08719.26620.29519.81020.725
Cielab12.05811.9136.31511.90411.70211.46511.6907.29811.89411.45811.70211.6368.99610.1206.315
Img4 | SSIM | 0.442 | 0.621 | 0.594 | 0.621 | 0.606 | 0.615 | 0.608 | 0.606 | 0.617 | 0.614 | 0.608 | 0.593 | 0.634 | 0.625 | 0.634
Img4 | HVS | 13.571 | 14.226 | 16.202 | 14.231 | 14.160 | 14.318 | 13.898 | 14.798 | 14.351 | 14.325 | 13.837 | 14.103 | 15.352 | 14.783 | 16.202
Img4 | HVSm | 14.298 | 14.799 | 17.053 | 14.803 | 14.756 | 14.922 | 14.446 | 15.441 | 14.950 | 14.934 | 14.385 | 14.711 | 16.005 | 15.384 | 17.053
Img5 | PSNR | 27.738 | 29.195 | 27.871 | 29.189 | 28.964 | 29.083 | 28.794 | 29.213 | 29.180 | 29.066 | 28.857 | 28.935 | 29.113 | 29.348 | 29.348
Img5 | Cielab | 3.564 | 3.447 | 3.578 | 3.437 | 3.424 | 3.401 | 3.563 | 3.287 | 3.402 | 3.403 | 3.527 | 3.429 | 3.328 | 3.306 | 3.287
Img5 | SSIM | 0.309 | 0.362 | 0.312 | 0.362 | 0.359 | 0.362 | 0.358 | 0.351 | 0.361 | 0.360 | 0.358 | 0.354 | 0.353 | 0.359 | 0.362
Img5 | HVS | 23.941 | 24.909 | 23.775 | 24.937 | 24.747 | 24.935 | 24.542 | 25.109 | 25.054 | 24.898 | 24.476 | 24.789 | 25.122 | 25.329 | 25.329
Img5 | HVSm | 25.218 | 25.996 | 24.688 | 26.018 | 25.873 | 26.095 | 25.513 | 26.217 | 26.191 | 26.065 | 25.466 | 25.907 | 26.190 | 26.446 | 26.446
Img6 | PSNR | 22.216 | 22.790 | 26.248 | 22.790 | 22.729 | 22.809 | 22.608 | 22.670 | 22.830 | 22.812 | 22.602 | 22.677 | 24.363 | 22.876 | 26.248
Img6 | Cielab | 6.988 | 6.826 | 5.124 | 6.835 | 6.870 | 6.736 | 6.977 | 6.269 | 6.787 | 6.744 | 6.993 | 6.829 | 5.794 | 6.575 | 5.124
Img6 | SSIM | 0.315 | 0.396 | 0.334 | 0.396 | 0.391 | 0.398 | 0.391 | 0.375 | 0.398 | 0.397 | 0.389 | 0.379 | 0.383 | 0.392 | 0.398
Img6 | HVS | 18.162 | 18.526 | 21.866 | 18.496 | 18.529 | 18.604 | 18.374 | 18.394 | 18.590 | 18.612 | 18.346 | 18.421 | 20.089 | 18.674 | 21.866
Img6 | HVSm | 18.748 | 19.018 | 23.111 | 18.997 | 19.016 | 19.100 | 18.872 | 18.906 | 19.093 | 19.110 | 18.837 | 18.946 | 20.811 | 19.184 | 23.111
Img7 | PSNR | 26.556 | 27.545 | 26.766 | 27.552 | 27.510 | 27.475 | 27.623 | 27.511 | 27.484 | 27.459 | 27.620 | 27.449 | 27.506 | 27.603 | 27.623
Img7 | Cielab | 4.571 | 4.375 | 4.284 | 4.381 | 4.416 | 4.403 | 4.369 | 4.074 | 4.389 | 4.407 | 4.365 | 4.411 | 4.219 | 4.278 | 4.074
Img7 | SSIM | 0.370 | 0.471 | 0.359 | 0.471 | 0.468 | 0.471 | 0.467 | 0.439 | 0.470 | 0.469 | 0.467 | 0.460 | 0.447 | 0.462 | 0.471
Img7 | HVS | 22.670 | 23.285 | 22.958 | 23.297 | 23.293 | 23.285 | 23.434 | 23.481 | 23.284 | 23.271 | 23.377 | 23.236 | 23.509 | 23.474 | 23.509
Img7 | HVSm | 23.537 | 24.040 | 23.825 | 24.050 | 24.051 | 24.052 | 24.212 | 24.332 | 24.051 | 24.040 | 24.151 | 24.005 | 24.327 | 24.263 | 24.332
Img8 | PSNR | 24.878 | 27.449 | 26.633 | 27.431 | 27.113 | 27.169 | 26.931 | 26.854 | 27.281 | 27.118 | 26.971 | 26.997 | 28.302 | 27.760 | 28.302
Img8 | Cielab | 4.656 | 4.376 | 3.770 | 4.382 | 4.438 | 4.329 | 4.474 | 3.727 | 4.342 | 4.337 | 4.466 | 4.432 | 3.782 | 4.087 | 3.727
Img8 | SSIM | 0.405 | 0.497 | 0.395 | 0.496 | 0.491 | 0.497 | 0.487 | 0.466 | 0.498 | 0.495 | 0.488 | 0.480 | 0.476 | 0.491 | 0.498
Img8 | HVS | 20.886 | 22.153 | 22.682 | 22.182 | 21.976 | 22.040 | 21.972 | 22.889 | 22.135 | 21.941 | 21.771 | 21.992 | 23.938 | 23.161 | 23.938
Img8 | HVSm | 22.170 | 23.285 | 23.983 | 23.308 | 23.147 | 23.251 | 23.017 | 24.227 | 23.342 | 23.156 | 22.805 | 23.150 | 25.448 | 24.424 | 25.448
Img9 | PSNR | 26.090 | 27.195 | 25.893 | 27.199 | 26.962 | 26.001 | 26.958 | 27.534 | 27.185 | 24.740 | 26.949 | 27.029 | 27.094 | 27.444 | 27.534
Img9 | Cielab | 3.906 | 3.830 | 3.691 | 3.832 | 3.803 | 3.916 | 3.871 | 3.323 | 3.791 | 4.015 | 3.876 | 3.778 | 3.544 | 3.556 | 3.323
Img9 | SSIM | 0.254 | 0.304 | 0.312 | 0.304 | 0.299 | 0.299 | 0.303 | 0.306 | 0.298 | 0.293 | 0.303 | 0.295 | 0.303 | 0.305 | 0.312
Img9 | HVS | 21.795 | 22.290 | 21.378 | 22.289 | 22.173 | 22.310 | 22.051 | 23.058 | 22.376 | 22.321 | 22.072 | 22.179 | 22.485 | 22.803 | 23.058
Img9 | HVSm | 22.649 | 22.955 | 21.837 | 22.956 | 22.871 | 23.013 | 22.689 | 23.693 | 23.065 | 23.027 | 22.708 | 22.879 | 23.059 | 23.443 | 23.693
Img10 | PSNR | 23.651 | 24.859 | 24.761 | 24.856 | 24.734 | 24.856 | 24.592 | 24.947 | 24.888 | 24.850 | 24.581 | 24.601 | 25.080 | 24.975 | 25.080
Img10 | Cielab | 5.637 | 5.408 | 4.592 | 5.413 | 5.452 | 5.353 | 5.489 | 4.599 | 5.410 | 5.358 | 5.478 | 5.434 | 4.877 | 5.087 | 4.592
Img10 | SSIM | 0.339 | 0.421 | 0.359 | 0.421 | 0.415 | 0.424 | 0.417 | 0.407 | 0.423 | 0.422 | 0.414 | 0.404 | 0.408 | 0.417 | 0.424
Img10 | HVS | 20.205 | 20.975 | 21.145 | 20.916 | 20.967 | 21.069 | 20.602 | 21.417 | 21.053 | 21.074 | 20.672 | 20.760 | 21.438 | 21.311 | 21.438
Img10 | HVSm | 21.280 | 21.863 | 22.176 | 21.821 | 21.856 | 21.981 | 21.482 | 22.415 | 21.977 | 21.991 | 21.548 | 21.712 | 22.405 | 22.246 | 22.415
Img11 | PSNR | 23.264 | 23.807 | 25.878 | 23.805 | 23.747 | 23.817 | 23.642 | 23.617 | 23.833 | 23.817 | 23.639 | 23.696 | 24.728 | 23.853 | 25.878
Img11 | Cielab | 5.797 | 5.695 | 4.920 | 5.699 | 5.736 | 5.680 | 5.713 | 5.547 | 5.687 | 5.681 | 5.710 | 5.696 | 5.205 | 5.579 | 4.920
Img11 | SSIM | 0.344 | 0.407 | 0.311 | 0.407 | 0.403 | 0.413 | 0.402 | 0.387 | 0.414 | 0.413 | 0.402 | 0.392 | 0.383 | 0.403 | 0.414
Img11 | HVS | 18.656 | 18.887 | 21.200 | 18.887 | 18.885 | 18.936 | 18.741 | 18.771 | 18.946 | 18.943 | 18.736 | 18.842 | 19.899 | 19.013 | 21.200
Img11 | HVSm | 19.116 | 19.278 | 22.031 | 19.277 | 19.278 | 19.328 | 19.123 | 19.177 | 19.339 | 19.337 | 19.118 | 19.248 | 20.405 | 19.407 | 22.031
Img12 | PSNR | 21.292 | 22.242 | 23.025 | 22.234 | 22.129 | 22.209 | 22.003 | 22.171 | 22.248 | 22.213 | 22.006 | 22.086 | 22.711 | 22.350 | 23.025
Img12 | Cielab | 6.368 | 6.190 | 5.230 | 6.195 | 6.229 | 6.167 | 6.145 | 5.601 | 6.198 | 6.168 | 6.145 | 6.203 | 5.646 | 5.958 | 5.230
Img12 | SSIM | 0.451 | 0.551 | 0.450 | 0.551 | 0.547 | 0.553 | 0.543 | 0.528 | 0.553 | 0.552 | 0.543 | 0.538 | 0.528 | 0.546 | 0.553
Img12 | HVS | 17.419 | 17.757 | 19.104 | 17.750 | 17.733 | 17.793 | 17.542 | 17.964 | 17.820 | 17.804 | 17.539 | 17.705 | 18.449 | 18.019 | 19.104
Img12 | HVSm | 18.029 | 18.253 | 19.786 | 18.245 | 18.234 | 18.306 | 18.021 | 18.462 | 18.326 | 18.318 | 18.018 | 18.217 | 18.984 | 18.509 | 19.786
Average | PSNR | 23.902 | 25.016 | 25.614 | 25.011 | 24.870 | 24.793 | 24.782 | 24.986 | 24.994 | 24.717 | 24.793 | 24.820 | 25.577 | 25.170 | 25.614
Average | Cielab | 5.891 | 5.728 | 4.596 | 5.732 | 5.752 | 5.675 | 5.747 | 4.957 | 5.714 | 5.686 | 5.752 | 5.720 | 5.033 | 5.394 | 4.596
Average | SSIM | 0.377 | 0.461 | 0.394 | 0.460 | 0.455 | 0.461 | 0.455 | 0.442 | 0.461 | 0.459 | 0.455 | 0.446 | 0.448 | 0.457 | 0.461
Average | HVS | 19.717 | 20.296 | 21.346 | 20.299 | 20.242 | 20.310 | 20.122 | 20.606 | 20.337 | 20.298 | 20.088 | 20.203 | 21.142 | 20.671 | 21.346
Average | HVSm | 20.553 | 20.999 | 22.245 | 21.002 | 20.960 | 21.039 | 20.798 | 21.367 | 21.062 | 21.030 | 20.764 | 20.932 | 21.940 | 21.403 | 22.245
Table A6. Performance metrics of the 14 algorithms at 20 dB SNR. Bold numbers indicate the best-performing method in each row; red numbers indicate the methods used in F3, and red and green numbers together indicate the methods used in ATMF.
Image | Metrics | Baseline | Standard | Demonet + GFPCA | GSA | HCM | SFIM | PCA | GFPCA | GLP | HPM | GS | PRACS | F3 | ATMF | Best Score
Img1 | PSNR | 20.006 | 18.180 | 19.961 | 20.041 | 19.986 | 19.888 | 19.902 | 20.873 | 20.005 | 19.905 | 19.903 | 20.077 | 20.396 | 20.192 | 20.873
Img1 | Cielab | 8.565 | 15.640 | 8.622 | 8.674 | 8.750 | 9.004 | 8.600 | 7.482 | 8.740 | 8.996 | 8.643 | 8.567 | 7.786 | 8.190 | 7.482
Img1 | SSIM | 0.384 | 0.357 | 0.353 | 0.368 | 0.353 | 0.335 | 0.368 | 0.432 | 0.348 | 0.340 | 0.368 | 0.400 | 0.432 | 0.403 | 0.432
Img1 | HVS | 14.494 | 13.248 | 14.562 | 14.523 | 14.493 | 14.514 | 14.328 | 15.396 | 14.523 | 14.514 | 14.404 | 14.525 | 14.902 | 14.651 | 15.396
Img1 | HVSm | 14.607 | 13.338 | 14.647 | 14.632 | 14.612 | 14.636 | 14.431 | 15.499 | 14.641 | 14.635 | 14.511 | 14.628 | 14.988 | 14.747 | 15.499
Img2 | PSNR | 19.452 | 17.330 | 20.121 | 20.016 | 19.956 | 19.968 | 19.771 | 23.036 | 20.002 | 19.986 | 19.801 | 19.971 | 21.184 | 20.308 | 23.036
Img2 | Cielab | 8.113 | 9.800 | 7.196 | 8.008 | 8.135 | 8.011 | 7.950 | 4.715 | 8.008 | 8.001 | 7.933 | 8.023 | 5.947 | 7.215 | 4.715
Img2 | SSIM | 0.399 | 0.524 | 0.554 | 0.604 | 0.593 | 0.593 | 0.601 | 0.545 | 0.599 | 0.596 | 0.601 | 0.591 | 0.598 | 0.600 | 0.604
Img2 | HVS | 14.772 | 11.877 | 15.691 | 15.089 | 15.053 | 15.084 | 14.893 | 18.085 | 15.100 | 15.083 | 14.837 | 15.054 | 16.439 | 15.458 | 18.085
Img2 | HVSm | 15.082 | 12.013 | 15.882 | 15.307 | 15.278 | 15.317 | 15.095 | 18.577 | 15.328 | 15.317 | 15.047 | 15.281 | 16.695 | 15.682 | 18.577
Img3 | PSNR | 20.056 | 20.564 | 20.186 | 20.157 | 20.079 | 20.142 | 19.977 | 20.357 | 20.178 | 20.156 | 19.980 | 20.154 | 20.355 | 20.239 | 20.564
Img3 | Cielab | 9.147 | 10.971 | 8.616 | 9.127 | 9.410 | 9.188 | 9.116 | 7.899 | 9.144 | 9.187 | 9.118 | 9.158 | 8.042 | 8.569 | 7.899
Img3 | SSIM | 0.490 | 0.431 | 0.486 | 0.489 | 0.474 | 0.480 | 0.486 | 0.525 | 0.487 | 0.483 | 0.486 | 0.498 | 0.527 | 0.511 | 0.527
Img3 | HVS | 14.957 | 15.233 | 15.275 | 15.027 | 15.011 | 15.058 | 14.857 | 15.239 | 15.059 | 15.061 | 14.860 | 15.018 | 15.284 | 15.120 | 15.284
Img3 | HVSm | 15.117 | 15.435 | 15.386 | 15.167 | 15.157 | 15.204 | 14.995 | 15.369 | 15.203 | 15.207 | 14.999 | 15.159 | 15.399 | 15.247 | 15.435
Img4 | PSNR | 17.301 | 13.215 | 18.361 | 18.081 | 17.905 | 17.999 | 17.659 | 18.283 | 18.069 | 18.048 | 17.665 | 18.062 | 18.644 | 18.318 | 18.644
Img4 | Cielab | 13.625 | 13.165 | 11.044 | 13.971 | 13.782 | 13.514 | 13.549 | 8.185 | 14.044 | 13.471 | 13.579 | 13.582 | 9.092 | 11.537 | 8.185
Img4 | SSIM | 0.440 | 0.407 | 0.552 | 0.577 | 0.562 | 0.564 | 0.569 | 0.565 | 0.572 | 0.568 | 0.568 | 0.571 | 0.602 | 0.588 | 0.602
Img4 | HVS | 12.619 | 8.090 | 14.590 | 13.121 | 13.050 | 13.172 | 12.666 | 13.515 | 13.183 | 13.177 | 12.621 | 13.010 | 14.133 | 13.472 | 14.590
Img4 | HVSm | 13.240 | 8.306 | 15.143 | 13.661 | 13.614 | 13.749 | 13.163 | 14.050 | 13.751 | 13.754 | 13.119 | 13.557 | 14.666 | 14.006 | 15.143
Img5 | PSNR | 20.089 | 22.475 | 20.305 | 20.213 | 20.164 | 20.176 | 20.011 | 27.076 | 20.203 | 20.190 | 20.024 | 20.232 | 22.275 | 20.571 | 27.076
Img5 | Cielab | 7.277 | 6.346 | 7.003 | 7.297 | 7.311 | 7.278 | 7.310 | 3.337 | 7.291 | 7.273 | 7.316 | 7.279 | 5.317 | 6.754 | 3.337
Img5 | SSIM | 0.317 | 0.377 | 0.369 | 0.385 | 0.379 | 0.375 | 0.385 | 0.440 | 0.380 | 0.378 | 0.384 | 0.391 | 0.431 | 0.403 | 0.440
Img5 | HVS | 15.802 | 17.356 | 16.304 | 15.942 | 15.909 | 15.947 | 15.704 | 22.586 | 15.950 | 15.943 | 15.697 | 15.925 | 18.081 | 16.311 | 22.586
Img5 | HVSm | 16.006 | 17.653 | 16.442 | 16.117 | 16.088 | 16.131 | 15.867 | 23.176 | 16.133 | 16.129 | 15.866 | 16.099 | 18.283 | 16.479 | 23.176
Img6 | PSNR | 19.763 | 20.526 | 20.291 | 20.212 | 20.133 | 20.186 | 19.974 | 25.261 | 20.223 | 20.193 | 19.972 | 20.137 | 21.847 | 20.483 | 25.261
Img6 | Cielab | 9.761 | 11.355 | 8.837 | 9.720 | 9.866 | 9.634 | 9.574 | 4.999 | 9.732 | 9.643 | 9.613 | 9.693 | 6.977 | 8.757 | 4.999
Img6 | SSIM | 0.411 | 0.504 | 0.547 | 0.590 | 0.574 | 0.576 | 0.586 | 0.584 | 0.585 | 0.578 | 0.584 | 0.570 | 0.603 | 0.593 | 0.603
Img6 | HVS | 15.392 | 16.551 | 16.064 | 15.619 | 15.622 | 15.685 | 15.426 | 20.473 | 15.676 | 15.691 | 15.427 | 15.565 | 17.387 | 15.960 | 20.473
Img6 | HVSm | 15.692 | 16.911 | 16.253 | 15.845 | 15.847 | 15.913 | 15.657 | 21.128 | 15.905 | 15.920 | 15.652 | 15.810 | 17.673 | 16.198 | 21.128
Img7 | PSNR | 24.564 | 21.571 | 21.880 | 21.975 | 20.143 | 20.149 | 22.559 | 25.615 | 21.809 | 20.157 | 22.446 | 23.495 | 23.804 | 23.147 | 25.615
Img7 | Cielab | 5.943 | 6.857 | 6.706 | 7.141 | 8.356 | 8.292 | 6.716 | 4.024 | 7.222 | 8.285 | 6.770 | 6.436 | 5.061 | 6.016 | 4.024
Img7 | SSIM | 0.407 | 0.511 | 0.481 | 0.530 | 0.516 | 0.513 | 0.530 | 0.507 | 0.525 | 0.516 | 0.531 | 0.532 | 0.544 | 0.540 | 0.544
Img7 | HVS | 20.236 | 17.426 | 17.841 | 17.554 | 15.768 | 15.782 | 18.128 | 21.242 | 17.404 | 15.779 | 17.993 | 19.014 | 19.558 | 18.729 | 21.242
Img7 | HVSm | 20.730 | 17.619 | 18.006 | 17.752 | 15.897 | 15.917 | 18.355 | 21.698 | 17.605 | 15.915 | 18.216 | 19.300 | 19.811 | 18.967 | 21.698
Img8 | PSNR | 19.384 | 14.232 | 19.984 | 19.971 | 19.884 | 19.935 | 19.672 | 21.107 | 19.973 | 19.943 | 19.691 | 19.881 | 20.496 | 20.121 | 21.107
Img8 | Cielab | 8.753 | 14.864 | 7.886 | 8.595 | 8.815 | 8.594 | 8.571 | 6.139 | 8.627 | 8.590 | 8.559 | 8.682 | 6.946 | 7.856 | 6.139
Img8 | SSIM | 0.421 | 0.414 | 0.504 | 0.543 | 0.528 | 0.532 | 0.537 | 0.527 | 0.539 | 0.534 | 0.539 | 0.530 | 0.555 | 0.549 | 0.555
Img8 | HVS | 14.576 | 8.817 | 15.441 | 14.928 | 14.911 | 14.973 | 14.652 | 16.187 | 14.974 | 14.973 | 14.630 | 14.867 | 15.682 | 15.171 | 16.187
Img8 | HVSm | 14.874 | 8.900 | 15.623 | 15.143 | 15.130 | 15.200 | 14.857 | 16.486 | 15.200 | 15.202 | 14.840 | 15.097 | 15.899 | 15.388 | 16.486
Img9 | PSNR | 20.116 | 10.893 | 20.463 | 20.201 | 20.131 | 20.071 | 20.035 | 20.541 | 20.183 | 20.089 | 20.032 | 20.214 | 20.560 | 20.375 | 20.560
Img9 | Cielab | 6.612 | 15.724 | 5.883 | 6.571 | 6.656 | 6.970 | 6.533 | 5.224 | 6.609 | 6.976 | 6.533 | 6.547 | 5.327 | 5.947 | 5.224
Img9 | SSIM | 0.265 | 0.270 | 0.333 | 0.338 | 0.327 | 0.324 | 0.338 | 0.343 | 0.333 | 0.325 | 0.338 | 0.335 | 0.359 | 0.348 | 0.359
Img9 | HVS | 15.468 | 6.264 | 16.152 | 15.560 | 15.535 | 15.574 | 15.401 | 15.875 | 15.575 | 15.575 | 15.406 | 15.538 | 16.001 | 15.726 | 16.152
Img9 | HVSm | 15.686 | 6.294 | 16.287 | 15.753 | 15.737 | 15.775 | 15.589 | 16.016 | 15.774 | 15.776 | 15.593 | 15.735 | 16.138 | 15.894 | 16.287
Img10 | PSNR | 19.471 | 19.766 | 20.069 | 19.933 | 19.858 | 19.899 | 19.679 | 20.694 | 19.933 | 19.911 | 19.672 | 19.874 | 20.384 | 20.088 | 20.694
Img10 | Cielab | 8.810 | 8.728 | 7.784 | 8.734 | 8.795 | 8.653 | 8.646 | 6.476 | 8.770 | 8.648 | 8.683 | 8.704 | 6.973 | 7.867 | 6.476
Img10 | SSIM | 0.374 | 0.413 | 0.477 | 0.491 | 0.479 | 0.481 | 0.489 | 0.498 | 0.486 | 0.483 | 0.487 | 0.482 | 0.518 | 0.502 | 0.518
Img10 | HVS | 15.573 | 15.812 | 16.434 | 15.836 | 15.843 | 15.905 | 15.575 | 16.626 | 15.884 | 15.905 | 15.592 | 15.787 | 16.488 | 16.055 | 16.626
Img10 | HVSm | 15.926 | 16.202 | 16.640 | 16.097 | 16.102 | 16.170 | 15.840 | 16.942 | 16.152 | 16.172 | 15.851 | 16.067 | 16.736 | 16.316 | 16.942
Img11 | PSNR | 19.895 | 17.081 | 20.129 | 20.185 | 20.124 | 20.149 | 19.988 | 19.942 | 20.173 | 20.157 | 19.986 | 20.163 | 20.200 | 20.184 | 20.200
Img11 | Cielab | 8.557 | 12.064 | 8.265 | 8.544 | 8.687 | 8.552 | 8.501 | 7.619 | 8.568 | 8.549 | 8.499 | 8.495 | 7.577 | 8.015 | 7.577
Img11 | SSIM | 0.440 | 0.477 | 0.527 | 0.570 | 0.553 | 0.558 | 0.567 | 0.530 | 0.565 | 0.562 | 0.567 | 0.563 | 0.576 | 0.576 | 0.576
Img11 | HVS | 14.937 | 11.423 | 15.190 | 15.053 | 15.042 | 15.073 | 14.886 | 14.849 | 15.071 | 15.076 | 14.882 | 15.038 | 15.137 | 15.084 | 15.190
Img11 | HVSm | 15.123 | 11.508 | 15.317 | 15.201 | 15.194 | 15.226 | 15.030 | 14.998 | 15.224 | 15.229 | 15.026 | 15.190 | 15.267 | 15.222 | 15.317
Img12 | PSNR | 19.406 | 16.116 | 20.155 | 20.087 | 20.004 | 20.057 | 19.828 | 19.068 | 20.091 | 20.070 | 19.830 | 20.007 | 19.928 | 20.008 | 20.155
Img12 | Cielab | 8.416 | 9.817 | 7.663 | 8.348 | 8.439 | 8.322 | 8.172 | 8.473 | 8.377 | 8.317 | 8.174 | 8.330 | 7.472 | 7.891 | 7.472
Img12 | SSIM | 0.499 | 0.504 | 0.591 | 0.622 | 0.612 | 0.614 | 0.617 | 0.598 | 0.618 | 0.616 | 0.617 | 0.619 | 0.632 | 0.628 | 0.632
Img12 | HVS | 15.138 | 11.211 | 16.013 | 15.381 | 15.372 | 15.425 | 15.131 | 14.657 | 15.426 | 15.428 | 15.132 | 15.347 | 15.512 | 15.427 | 16.013
Img12 | HVSm | 15.490 | 11.347 | 16.220 | 15.640 | 15.634 | 15.695 | 15.383 | 14.863 | 15.694 | 15.699 | 15.385 | 15.619 | 15.724 | 15.672 | 16.220
Average | PSNR | 19.959 | 17.662 | 20.159 | 20.089 | 19.864 | 19.885 | 19.921 | 21.821 | 20.070 | 19.900 | 19.917 | 20.189 | 20.839 | 20.336 | 21.821
Average | Cielab | 8.632 | 11.278 | 7.959 | 8.728 | 8.917 | 8.834 | 8.603 | 6.214 | 8.761 | 8.828 | 8.618 | 8.625 | 6.877 | 7.885 | 6.214
Average | SSIM | 0.404 | 0.432 | 0.481 | 0.509 | 0.496 | 0.495 | 0.506 | 0.508 | 0.503 | 0.498 | 0.506 | 0.507 | 0.532 | 0.520 | 0.532
Average | HVS | 15.330 | 12.776 | 15.796 | 15.303 | 15.134 | 15.183 | 15.137 | 17.061 | 15.319 | 15.184 | 15.123 | 15.391 | 16.217 | 15.597 | 17.061
Average | HVSm | 15.631 | 12.960 | 15.987 | 15.526 | 15.358 | 15.411 | 15.355 | 17.400 | 15.551 | 15.413 | 15.342 | 15.628 | 16.440 | 15.818 | 17.400
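The F3 and ATMF columns in these tables fuse the outputs of several individual demosaicing algorithms; ATMF applies an alpha-trimmed mean [53], which discards the extreme candidate values at each pixel before averaging. A minimal sketch of that fusion step (the function name and NumPy-based implementation are illustrative assumptions, not the authors' exact code):

```python
import numpy as np

def alpha_trimmed_mean_fusion(outputs, trim=1):
    """Fuse demosaiced images by an alpha-trimmed mean: at each pixel,
    sort the candidate values across methods, drop the `trim` smallest
    and `trim` largest, and average the remainder."""
    stack = np.stack(outputs, axis=0)          # (N, H, W, C)
    stack = np.sort(stack, axis=0)             # sort candidates per pixel
    kept = stack[trim:stack.shape[0] - trim]   # discard the extremes
    return kept.mean(axis=0)
```

With four candidate values 1, 2, 3, and 100 at a pixel and `trim=1`, the outlier 100 (and the 1) is discarded and the fused value is the mean of 2 and 3, which is how a single badly failing method is prevented from corrupting the fused result.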
Table A7. Performance metrics of the 14 algorithms at 20 dB SNR (Poisson noise). Bold numbers indicate the best-performing method in each row; red numbers indicate the methods used in F3, and red and green numbers together indicate the methods used in ATMF.
Image | Metrics | Baseline | Standard | Demonet + GFPCA | GSA | HCM | SFIM | PCA | GFPCA | GLP | HPM | GS | PRACS | F3 | ATMF | Best Score
Img1 | PSNR | 19.835 | 19.873 | 20.240 | 19.459 | 19.739 | 19.588 | 19.369 | 21.244 | 19.444 | 19.734 | 19.283 | 19.412 | 20.325 | 19.555 | 21.244
Img1 | Cielab | 9.075 | 8.939 | 7.816 | 8.571 | 8.301 | 8.715 | 8.504 | 7.056 | 8.634 | 8.593 | 8.630 | 8.575 | 7.664 | 8.311 | 7.056
Img1 | SSIM | 0.339 | 0.345 | 0.425 | 0.459 | 0.458 | 0.463 | 0.454 | 0.435 | 0.467 | 0.464 | 0.454 | 0.444 | 0.457 | 0.459 | 0.467
Img1 | HVS | 14.435 | 14.452 | 14.777 | 13.867 | 14.164 | 14.052 | 13.823 | 15.777 | 13.843 | 14.200 | 13.746 | 13.851 | 14.803 | 14.012 | 15.777
Img1 | HVSm | 14.545 | 14.556 | 14.841 | 13.930 | 14.231 | 14.118 | 13.885 | 15.862 | 13.904 | 14.267 | 13.808 | 13.915 | 14.869 | 14.072 | 15.862
Img2 | PSNR | 19.541 | 19.942 | 20.976 | 18.936 | 18.833 | 19.497 | 18.958 | 20.783 | 19.328 | 19.441 | 18.900 | 19.002 | 20.440 | 19.340 | 20.976
Img2 | Cielab | 8.903 | 8.298 | 5.777 | 8.106 | 8.200 | 7.710 | 7.945 | 5.883 | 7.851 | 7.754 | 7.964 | 8.029 | 6.210 | 7.352 | 5.777
Img2 | SSIM | 0.477 | 0.601 | 0.429 | 0.550 | 0.545 | 0.557 | 0.541 | 0.418 | 0.557 | 0.555 | 0.540 | 0.518 | 0.485 | 0.527 | 0.601
Img2 | HVS | 15.087 | 15.148 | 16.727 | 14.065 | 13.971 | 14.622 | 14.173 | 16.270 | 14.461 | 14.568 | 14.078 | 14.197 | 15.860 | 14.573 | 16.727
Img2 | HVSm | 15.322 | 15.354 | 16.984 | 14.214 | 14.123 | 14.790 | 14.326 | 16.534 | 14.621 | 14.733 | 14.229 | 14.355 | 16.070 | 14.734 | 16.984
Img3 | PSNR | 19.889 | 19.965 | 21.471 | 19.893 | 20.157 | 20.180 | 19.822 | 21.395 | 20.125 | 20.132 | 19.904 | 19.987 | 21.027 | 20.188 | 21.471
Img3 | Cielab | 9.724 | 9.532 | 7.055 | 8.705 | 8.615 | 8.510 | 8.678 | 7.031 | 8.559 | 8.560 | 8.615 | 8.654 | 7.299 | 8.142 | 7.031
Img3 | SSIM | 0.459 | 0.472 | 0.558 | 0.562 | 0.559 | 0.568 | 0.554 | 0.555 | 0.570 | 0.568 | 0.555 | 0.549 | 0.573 | 0.565 | 0.573
Img3 | HVS | 14.919 | 14.939 | 16.531 | 14.688 | 15.006 | 14.981 | 14.694 | 16.315 | 14.909 | 14.931 | 14.773 | 14.830 | 15.935 | 15.048 | 16.531
Img3 | HVSm | 15.060 | 15.072 | 16.653 | 14.788 | 15.110 | 15.084 | 14.796 | 16.449 | 15.010 | 15.032 | 14.877 | 14.935 | 16.046 | 15.149 | 16.653
Img4 | PSNR | 17.405 | 17.944 | 19.685 | 17.759 | 17.804 | 17.748 | 17.485 | 18.901 | 17.758 | 17.839 | 17.522 | 17.855 | 19.012 | 17.926 | 19.685
Img4 | Cielab | 15.784 | 15.404 | 8.803 | 13.954 | 13.631 | 13.641 | 13.570 | 9.380 | 14.108 | 13.579 | 13.562 | 13.431 | 9.722 | 12.096 | 8.803
Img4 | SSIM | 0.473 | 0.558 | 0.595 | 0.601 | 0.592 | 0.594 | 0.593 | 0.586 | 0.598 | 0.597 | 0.593 | 0.591 | 0.621 | 0.610 | 0.621
Img4 | HVS | 13.229 | 13.377 | 15.841 | 12.912 | 13.016 | 12.946 | 12.787 | 14.696 | 12.953 | 13.022 | 12.793 | 13.014 | 14.676 | 13.316 | 15.841
Img4 | HVSm | 13.836 | 13.943 | 16.458 | 13.342 | 13.478 | 13.390 | 13.212 | 15.216 | 13.389 | 13.470 | 13.221 | 13.471 | 15.174 | 13.735 | 16.458
Img5 | PSNR | 19.942 | 20.022 | 21.798 | 20.122 | 19.746 | 19.916 | 20.298 | 21.657 | 19.946 | 19.400 | 20.152 | 20.318 | 21.149 | 20.403 | 21.798
Img5 | Cielab | 7.929 | 7.701 | 5.367 | 6.837 | 7.058 | 6.970 | 6.632 | 5.422 | 6.974 | 7.352 | 6.724 | 6.675 | 5.751 | 6.406 | 5.367
Img5 | SSIM | 0.326 | 0.369 | 0.332 | 0.370 | 0.365 | 0.371 | 0.362 | 0.330 | 0.373 | 0.369 | 0.362 | 0.356 | 0.355 | 0.364 | 0.373
Img5 | HVS | 15.843 | 15.874 | 17.760 | 15.828 | 15.469 | 15.632 | 16.061 | 17.479 | 15.660 | 15.126 | 15.895 | 16.049 | 16.983 | 16.164 | 17.760
Img5 | HVSm | 16.019 | 16.039 | 17.918 | 15.948 | 15.587 | 15.746 | 16.189 | 17.645 | 15.774 | 15.229 | 16.021 | 16.181 | 17.118 | 16.287 | 17.918
Img6 | PSNR | 19.772 | 20.076 | 20.922 | 20.175 | 20.028 | 20.148 | 19.994 | 20.344 | 20.120 | 19.981 | 19.964 | 19.984 | 20.526 | 20.086 | 20.922
Img6 | Cielab | 10.727 | 10.270 | 7.443 | 8.869 | 9.037 | 8.835 | 9.009 | 7.760 | 9.001 | 8.991 | 9.075 | 8.975 | 7.703 | 8.444 | 7.443
Img6 | SSIM | 0.473 | 0.581 | 0.403 | 0.521 | 0.511 | 0.525 | 0.503 | 0.395 | 0.526 | 0.522 | 0.500 | 0.478 | 0.457 | 0.493 | 0.581
Img6 | HVS | 15.547 | 15.601 | 16.711 | 15.613 | 15.546 | 15.629 | 15.536 | 16.007 | 15.569 | 15.468 | 15.521 | 15.501 | 16.155 | 15.637 | 16.711
Img6 | HVSm | 15.775 | 15.806 | 16.947 | 15.829 | 15.743 | 15.833 | 15.751 | 16.235 | 15.778 | 15.665 | 15.734 | 15.717 | 16.372 | 15.844 | 16.947
Img7 | PSNR | 19.847 | 19.985 | 29.058 | 26.350 | 21.086 | 20.872 | 25.651 | 29.190 | 25.921 | 20.208 | 26.357 | 22.369 | 28.456 | 27.001 | 29.190
Img7 | Cielab | 9.161 | 8.743 | 2.957 | 4.699 | 7.064 | 7.227 | 4.843 | 3.011 | 4.855 | 7.718 | 4.629 | 6.294 | 3.249 | 4.158 | 2.957
Img7 | SSIM | 0.419 | 0.511 | 0.462 | 0.560 | 0.526 | 0.531 | 0.548 | 0.450 | 0.562 | 0.521 | 0.551 | 0.513 | 0.513 | 0.549 | 0.562
Img7 | HVS | 15.653 | 15.681 | 26.259 | 21.730 | 16.687 | 16.464 | 21.225 | 26.075 | 21.346 | 15.813 | 21.869 | 17.976 | 24.761 | 22.585 | 26.259
Img7 | HVSm | 15.787 | 15.803 | 27.349 | 22.148 | 16.821 | 16.591 | 21.599 | 27.271 | 21.729 | 15.924 | 22.307 | 18.161 | 25.527 | 23.075 | 27.349
Img8 | PSNR | 19.438 | 19.846 | 19.835 | 19.319 | 19.355 | 19.193 | 19.503 | 20.319 | 19.396 | 19.115 | 19.339 | 19.293 | 19.931 | 19.448 | 20.319
Img8 | Cielab | 9.564 | 9.027 | 7.249 | 8.423 | 8.448 | 8.521 | 8.140 | 6.865 | 8.421 | 8.591 | 8.262 | 8.450 | 7.215 | 7.869 | 6.865
Img8 | SSIM | 0.445 | 0.531 | 0.416 | 0.522 | 0.519 | 0.524 | 0.509 | 0.417 | 0.531 | 0.523 | 0.508 | 0.490 | 0.472 | 0.502 | 0.531
Img8 | HVS | 14.856 | 14.919 | 15.267 | 14.283 | 14.363 | 14.183 | 14.580 | 15.657 | 14.364 | 14.108 | 14.389 | 14.339 | 15.160 | 14.555 | 15.657
Img8 | HVSm | 15.093 | 15.129 | 15.441 | 14.442 | 14.525 | 14.335 | 14.753 | 15.861 | 14.524 | 14.258 | 14.557 | 14.507 | 15.329 | 14.714 | 15.861
Img9 | PSNR | 20.041 | 20.098 | 21.274 | 19.210 | 19.487 | 19.078 | 18.979 | 20.657 | 19.442 | 19.143 | 19.031 | 19.085 | 20.486 | 19.452 | 21.274
Img9 | Cielab | 7.247 | 6.991 | 4.785 | 6.386 | 6.230 | 6.770 | 6.417 | 5.097 | 6.323 | 6.751 | 6.390 | 6.432 | 5.143 | 5.925 | 4.785
Img9 | SSIM | 0.286 | 0.332 | 0.303 | 0.322 | 0.318 | 0.317 | 0.320 | 0.293 | 0.325 | 0.317 | 0.320 | 0.305 | 0.308 | 0.316 | 0.332
Img9 | HVS | 15.620 | 15.631 | 16.878 | 14.540 | 14.843 | 14.473 | 14.368 | 16.115 | 14.774 | 14.536 | 14.423 | 14.440 | 15.940 | 14.843 | 16.878
Img9 | HVSm | 15.820 | 15.826 | 16.995 | 14.638 | 14.946 | 14.570 | 14.463 | 16.231 | 14.875 | 14.633 | 14.518 | 14.538 | 16.043 | 14.936 | 16.995
Img10 | PSNR | 19.519 | 19.827 | 21.257 | 19.511 | 19.212 | 19.411 | 19.314 | 20.395 | 19.603 | 19.455 | 19.254 | 19.213 | 20.483 | 19.599 | 21.257
Img10 | Cielab | 9.597 | 9.095 | 6.246 | 8.418 | 8.615 | 8.436 | 8.360 | 6.780 | 8.439 | 8.408 | 8.444 | 8.548 | 6.768 | 7.811 | 6.246
Img10 | SSIM | 0.411 | 0.485 | 0.411 | 0.466 | 0.457 | 0.469 | 0.457 | 0.397 | 0.472 | 0.469 | 0.453 | 0.437 | 0.441 | 0.455 | 0.485
Img10 | HVS | 15.803 | 15.859 | 17.678 | 15.441 | 15.219 | 15.387 | 15.339 | 16.626 | 15.543 | 15.432 | 15.297 | 15.225 | 16.668 | 15.657 | 17.678
Img10 | HVSm | 16.070 | 16.100 | 17.936 | 15.659 | 15.409 | 15.584 | 15.556 | 16.882 | 15.758 | 15.631 | 15.511 | 15.437 | 16.898 | 15.867 | 17.936
Img11 | PSNR | 19.815 | 20.030 | 19.751 | 19.717 | 19.591 | 19.827 | 19.644 | 20.053 | 19.844 | 19.804 | 19.537 | 19.634 | 19.934 | 19.632 | 20.053
Img11 | Cielab | 9.263 | 8.863 | 7.810 | 8.190 | 8.302 | 8.109 | 8.126 | 7.433 | 8.134 | 8.133 | 8.211 | 8.216 | 7.549 | 7.931 | 7.433
Img11 | SSIM | 0.472 | 0.556 | 0.375 | 0.501 | 0.493 | 0.510 | 0.488 | 0.373 | 0.513 | 0.510 | 0.486 | 0.464 | 0.437 | 0.472 | 0.556
Img11 | HVS | 14.961 | 14.983 | 14.849 | 14.558 | 14.470 | 14.683 | 14.574 | 15.101 | 14.687 | 14.662 | 14.463 | 14.539 | 14.921 | 14.584 | 15.101
Img11 | HVSm | 15.113 | 15.124 | 14.977 | 14.679 | 14.583 | 14.802 | 14.697 | 15.245 | 14.809 | 14.781 | 14.584 | 14.662 | 15.048 | 14.702 | 15.245
Img12 | PSNR | 19.572 | 20.079 | 19.241 | 19.740 | 19.654 | 19.760 | 19.599 | 19.093 | 19.697 | 19.685 | 19.557 | 19.566 | 19.422 | 19.369 | 20.079
Img12 | Cielab | 9.242 | 8.757 | 7.950 | 7.951 | 7.988 | 7.914 | 7.796 | 7.927 | 8.036 | 7.977 | 7.831 | 8.012 | 7.703 | 7.828 | 7.703
Img12 | SSIM | 0.529 | 0.610 | 0.530 | 0.614 | 0.611 | 0.617 | 0.598 | 0.522 | 0.618 | 0.616 | 0.598 | 0.589 | 0.570 | 0.589 | 0.618
Img12 | HVS | 15.383 | 15.429 | 15.087 | 15.018 | 14.985 | 15.062 | 14.976 | 14.754 | 14.988 | 14.986 | 14.932 | 14.934 | 15.023 | 14.835 | 15.429
Img12 | HVSm | 15.646 | 15.673 | 15.263 | 15.232 | 15.192 | 15.277 | 15.194 | 14.943 | 15.199 | 15.198 | 15.147 | 15.149 | 15.207 | 15.028 | 15.673
Average | PSNR | 19.551 | 19.807 | 21.292 | 20.016 | 19.558 | 19.602 | 19.885 | 21.169 | 20.052 | 19.495 | 19.900 | 19.643 | 20.933 | 20.167 | 21.292
Average | Cielab | 9.685 | 9.301 | 6.605 | 8.259 | 8.457 | 8.447 | 8.168 | 6.637 | 8.278 | 8.534 | 8.195 | 8.358 | 6.831 | 7.689 | 6.605
Average | SSIM | 0.426 | 0.496 | 0.437 | 0.504 | 0.496 | 0.504 | 0.494 | 0.431 | 0.509 | 0.502 | 0.493 | 0.478 | 0.474 | 0.492 | 0.509
Average | HVS | 15.111 | 15.158 | 17.030 | 15.212 | 14.812 | 14.843 | 15.178 | 16.739 | 15.258 | 14.738 | 15.181 | 14.908 | 16.407 | 15.484 | 17.030
Average | HVSm | 15.340 | 15.369 | 17.314 | 15.404 | 14.979 | 15.010 | 15.368 | 17.031 | 15.447 | 14.902 | 15.376 | 15.086 | 16.642 | 15.679 | 17.314
Table A8. Performance metrics of the 14 algorithms at 20 dB SNR (Poisson noise). Bold numbers indicate the best-performing method in each row; red numbers indicate the methods used in F3, and red and green numbers together indicate the methods used in ATMF.
Image | Metrics | Baseline | Standard | Demonet + GFPCA | GSA | HCM | SFIM | PCA | GFPCA | GLP | HPM | GS | PRACS | F3 | ATMF | Best Score
Img1 | PSNR | 26.266 | 26.848 | 26.045 | 26.856 | 26.732 | 26.798 | 26.620 | 26.482 | 26.932 | 26.807 | 26.672 | 26.659 | 26.854 | 26.863 | 26.932
Img1 | Cielab | 4.427 | 4.363 | 4.861 | 4.358 | 4.429 | 4.444 | 4.301 | 4.615 | 4.370 | 4.458 | 4.360 | 4.379 | 4.366 | 4.364 | 4.301
Img1 | SSIM | 0.471 | 0.500 | 0.434 | 0.500 | 0.488 | 0.503 | 0.497 | 0.484 | 0.504 | 0.501 | 0.497 | 0.488 | 0.497 | 0.497 | 0.504
Img1 | HVS | 21.159 | 21.329 | 20.686 | 21.389 | 21.355 | 21.432 | 21.045 | 21.124 | 21.441 | 21.443 | 21.234 | 21.329 | 21.387 | 21.401 | 21.443
Img1 | HVSm | 21.587 | 21.685 | 20.956 | 21.731 | 21.705 | 21.777 | 21.357 | 21.417 | 21.785 | 21.788 | 21.567 | 21.696 | 21.729 | 21.741 | 21.788
Img2 | PSNR | 24.881 | 27.626 | 26.995 | 27.632 | 27.465 | 27.414 | 27.346 | 27.188 | 27.503 | 27.416 | 27.372 | 27.224 | 27.649 | 27.612 | 27.649
Img2 | Cielab | 4.596 | 4.141 | 3.222 | 4.158 | 4.265 | 4.195 | 4.213 | 3.309 | 4.185 | 4.200 | 4.205 | 4.244 | 4.148 | 4.155 | 3.222
Img2 | SSIM | 0.399 | 0.586 | 0.463 | 0.584 | 0.578 | 0.581 | 0.580 | 0.520 | 0.582 | 0.579 | 0.579 | 0.560 | 0.584 | 0.583 | 0.586
Img2 | HVS | 21.028 | 22.441 | 22.852 | 22.537 | 22.394 | 22.484 | 22.629 | 22.752 | 22.596 | 22.481 | 22.259 | 22.370 | 22.518 | 22.538 | 22.852
Img2 | HVSm | 22.408 | 23.591 | 24.415 | 23.667 | 23.557 | 23.690 | 23.743 | 24.196 | 23.796 | 23.695 | 23.329 | 23.542 | 23.653 | 23.682 | 24.415
Img3 | PSNR | 28.060 | 29.353 | 30.667 | 29.379 | 29.024 | 29.334 | 29.031 | 29.724 | 29.425 | 29.335 | 29.046 | 29.030 | 29.375 | 29.381 | 30.667
Img3 | Cielab | 4.606 | 4.490 | 3.731 | 4.421 | 4.665 | 4.400 | 4.492 | 3.926 | 4.426 | 4.407 | 4.496 | 4.508 | 4.428 | 4.399 | 3.731
Img3 | SSIM | 0.563 | 0.609 | 0.580 | 0.609 | 0.598 | 0.608 | 0.604 | 0.615 | 0.609 | 0.607 | 0.604 | 0.596 | 0.607 | 0.607 | 0.615
Img3 | HVS | 23.811 | 24.302 | 26.429 | 24.380 | 24.317 | 24.355 | 24.110 | 25.147 | 24.334 | 24.354 | 24.092 | 24.268 | 24.379 | 24.403 | 26.429
Img3 | HVSm | 24.911 | 25.215 | 27.665 | 25.287 | 25.255 | 25.314 | 24.955 | 26.012 | 25.291 | 25.318 | 24.943 | 25.213 | 25.286 | 25.315 | 27.665
Img4 | PSNR | 18.786 | 20.395 | 20.640 | 20.450 | 20.215 | 20.343 | 19.984 | 20.845 | 20.398 | 20.377 | 19.987 | 20.208 | 20.452 | 20.434 | 20.845
Img4 | Cielab | 12.202 | 12.236 | 6.391 | 11.990 | 11.882 | 11.642 | 11.773 | 6.786 | 12.140 | 11.624 | 11.801 | 11.809 | 11.981 | 11.750 | 6.391
Img4 | SSIM | 0.456 | 0.636 | 0.614 | 0.636 | 0.619 | 0.625 | 0.625 | 0.631 | 0.629 | 0.626 | 0.624 | 0.613 | 0.636 | 0.634 | 0.636
Img4 | HVS | 14.151 | 14.843 | 16.086 | 14.948 | 14.885 | 15.024 | 14.567 | 15.990 | 15.034 | 15.036 | 14.488 | 14.765 | 14.948 | 14.963 | 16.086
Img4 | HVSm | 15.014 | 15.558 | 16.904 | 15.661 | 15.626 | 15.786 | 15.236 | 16.813 | 15.789 | 15.803 | 15.154 | 15.506 | 15.661 | 15.682 | 16.904
Img5 | PSNR | 27.318 | 28.808 | 29.054 | 28.825 | 28.634 | 28.749 | 28.383 | 28.744 | 28.818 | 28.748 | 28.448 | 28.546 | 28.829 | 28.816 | 29.054
Img5 | Cielab | 3.399 | 3.270 | 2.950 | 3.204 | 3.243 | 3.216 | 3.330 | 3.046 | 3.220 | 3.218 | 3.305 | 3.243 | 3.206 | 3.201 | 2.950
Img5 | SSIM | 0.325 | 0.389 | 0.346 | 0.389 | 0.385 | 0.389 | 0.385 | 0.375 | 0.387 | 0.386 | 0.384 | 0.378 | 0.388 | 0.388 | 0.389
Img5 | HVS | 23.466 | 24.370 | 24.902 | 24.446 | 24.363 | 24.536 | 24.014 | 24.512 | 24.568 | 24.538 | 23.966 | 24.273 | 24.442 | 24.488 | 24.902
Img5 | HVSm | 24.548 | 25.213 | 25.890 | 25.278 | 25.227 | 25.426 | 24.756 | 25.353 | 25.447 | 25.433 | 24.734 | 25.154 | 25.275 | 25.325 | 25.890
Img6 | PSNR | 25.857 | 28.142 | 28.335 | 28.213 | 27.878 | 28.024 | 27.841 | 28.194 | 28.141 | 27.992 | 27.851 | 27.708 | 28.214 | 28.179 | 28.335
Img6 | Cielab | 5.375 | 5.111 | 4.159 | 4.964 | 5.105 | 4.926 | 5.204 | 4.111 | 5.022 | 4.952 | 5.223 | 5.073 | 4.974 | 4.939 | 4.111
Img6 | SSIM | 0.418 | 0.568 | 0.477 | 0.568 | 0.559 | 0.567 | 0.561 | 0.531 | 0.567 | 0.564 | 0.559 | 0.537 | 0.567 | 0.567 | 0.568
Img6 | HVS | 22.006 | 23.068 | 23.951 | 23.067 | 23.069 | 23.157 | 22.646 | 23.579 | 23.199 | 23.177 | 22.799 | 22.793 | 23.091 | 23.145 | 23.951
Img6 | HVSm | 23.434 | 24.276 | 25.353 | 24.295 | 24.302 | 24.438 | 23.809 | 24.871 | 24.484 | 24.466 | 23.971 | 24.081 | 24.313 | 24.370 | 25.353
Img7 | PSNR | 26.971 | 28.543 | 27.003 | 28.579 | 28.490 | 28.436 | 28.706 | 28.500 | 28.475 | 28.419 | 28.699 | 28.403 | 28.578 | 28.560 | 28.706
Img7 | Cielab | 4.068 | 3.765 | 3.589 | 3.772 | 3.823 | 3.802 | 3.755 | 3.268 | 3.783 | 3.807 | 3.747 | 3.827 | 3.774 | 3.777 | 3.268
Img7 | SSIM | 0.437 | 0.590 | 0.473 | 0.588 | 0.584 | 0.587 | 0.585 | 0.538 | 0.587 | 0.585 | 0.585 | 0.573 | 0.587 | 0.586 | 0.590
Img7 | HVS | 23.317 | 24.178 | 23.444 | 24.239 | 24.212 | 24.170 | 24.462 | 24.712 | 24.206 | 24.159 | 24.348 | 24.160 | 24.237 | 24.246 | 24.712
Img7 | HVSm | 24.260 | 24.908 | 24.172 | 24.977 | 24.967 | 24.951 | 25.234 | 25.621 | 24.980 | 24.942 | 25.112 | 24.926 | 24.976 | 24.990 | 25.621
Img8 | PSNR | 25.298 | 28.544 | 27.792 | 28.723 | 28.383 | 28.325 | 28.265 | 27.953 | 28.416 | 28.276 | 28.314 | 28.087 | 28.723 | 28.677 | 28.723
Img8 | Cielab | 4.431 | 4.120 | 3.250 | 4.015 | 4.161 | 4.034 | 4.114 | 3.262 | 4.053 | 4.044 | 4.090 | 4.157 | 4.018 | 4.009 | 3.250
Img8 | SSIM | 0.453 | 0.571 | 0.477 | 0.571 | 0.563 | 0.570 | 0.561 | 0.537 | 0.571 | 0.568 | 0.561 | 0.548 | 0.570 | 0.570 | 0.571
Img8 | HVS | 21.405 | 22.734 | 23.977 | 23.371 | 23.370 | 23.361 | 23.214 | 24.116 | 23.332 | 23.369 | 22.960 | 22.951 | 23.353 | 23.416 | 24.116
Img8 | HVSm | 22.819 | 23.953 | 25.568 | 24.639 | 24.691 | 24.740 | 24.414 | 25.658 | 24.716 | 24.770 | 24.134 | 24.239 | 24.620 | 24.698 | 25.658
Img9 | PSNR | 26.606 | 27.968 | 27.180 | 28.051 | 27.819 | 27.839 | 27.779 | 28.388 | 28.019 | 27.745 | 27.771 | 27.812 | 28.047 | 28.048 | 28.388
Img9 | Cielab | 3.649 | 3.564 | 3.242 | 3.469 | 3.524 | 3.632 | 3.521 | 2.993 | 3.514 | 3.738 | 3.523 | 3.497 | 3.477 | 3.465 | 2.993
Img9 | SSIM | 0.264 | 0.323 | 0.345 | 0.323 | 0.317 | 0.318 | 0.322 | 0.325 | 0.316 | 0.311 | 0.322 | 0.313 | 0.327 | 0.326 | 0.345
Img9 | HVS | 22.272 | 22.829 | 22.590 | 23.017 | 22.987 | 23.133 | 22.751 | 23.768 | 23.146 | 23.146 | 22.774 | 22.846 | 23.015 | 23.066 | 23.768
Img9 | HVSm | 23.228 | 23.573 | 23.141 | 23.746 | 23.728 | 23.893 | 23.446 | 24.480 | 23.903 | 23.909 | 23.467 | 23.621 | 23.745 | 23.796 | 24.480
Img10 | PSNR | 24.774 | 26.876 | 25.963 | 26.915 | 26.681 | 26.815 | 26.514 | 27.096 | 26.883 | 26.804 | 26.512 | 26.459 | 26.919 | 26.908 | 27.096
Img10 | Cielab | 5.106 | 4.821 | 3.933 | 4.762 | 4.848 | 4.730 | 4.864 | 3.731 | 4.811 | 4.737 | 4.843 | 4.833 | 4.765 | 4.732 | 3.731
Img10 | SSIM | 0.375 | 0.482 | 0.431 | 0.482 | 0.474 | 0.483 | 0.478 | 0.465 | 0.483 | 0.481 | 0.475 | 0.461 | 0.482 | 0.482 | 0.483
Img10 | HVS | 21.234 | 22.249 | 22.291 | 22.283 | 22.350 | 22.427 | 21.779 | 23.305 | 22.400 | 22.421 | 21.958 | 22.054 | 22.305 | 22.375 | 23.305
Img10 | HVSm | 22.565 | 23.299 | 23.324 | 23.357 | 23.415 | 23.538 | 22.796 | 24.565 | 23.522 | 23.542 | 22.981 | 23.201 | 23.370 | 23.439 | 24.565
Img11 | PSNR | 26.150 | 27.606 | 26.454 | 27.650 | 27.499 | 27.611 | 27.389 | 27.309 | 27.647 | 27.596 | 27.386 | 27.321 | 27.648 | 27.646 | 27.650
Img11 | Cielab | 4.713 | 4.534 | 4.568 | 4.505 | 4.600 | 4.510 | 4.538 | 4.267 | 4.518 | 4.514 | 4.536 | 4.547 | 4.511 | 4.502 | 4.267
Img11 | SSIM | 0.421 | 0.524 | 0.416 | 0.524 | 0.517 | 0.529 | 0.519 | 0.496 | 0.530 | 0.529 | 0.518 | 0.500 | 0.523 | 0.524 | 0.530
Img11 | HVS | 22.398 | 22.906 | 21.891 | 23.125 | 23.164 | 23.220 | 22.867 | 23.069 | 23.225 | 23.238 | 22.859 | 22.963 | 23.124 | 23.164 | 23.238
Img11 | HVSm | 23.440 | 23.781 | 22.668 | 23.984 | 24.018 | 24.097 | 23.687 | 23.963 | 24.108 | 24.118 | 23.678 | 23.866 | 23.983 | 24.020 | 24.118
Img12 | PSNR | 22.103 | 23.523 | 24.317 | 23.555 | 23.418 | 23.507 | 23.243 | 23.423 | 23.558 | 23.513 | 23.245 | 23.317 | 23.554 | 23.530 | 24.317
Img12 | Cielab | 5.753 | 5.514 | 4.447 | 5.488 | 5.541 | 5.468 | 5.435 | 4.788 | 5.506 | 5.470 | 5.435 | 5.519 | 5.495 | 5.479 | 4.447
Img12 | SSIM | 0.508 | 0.647 | 0.559 | 0.647 | 0.641 | 0.646 | 0.637 | 0.616 | 0.646 | 0.644 | 0.637 | 0.629 | 0.646 | 0.645 | 0.647
Img12 | HVS | 18.482 | 18.928 | 20.675 | 19.040 | 19.045 | 19.115 | 18.743 | 19.380 | 19.126 | 19.125 | 18.742 | 18.951 | 19.040 | 19.047 | 20.675
Img12 | HVSm | 19.240 | 19.512 | 21.479 | 19.603 | 19.605 | 19.697 | 19.285 | 19.970 | 19.707 | 19.709 | 19.284 | 19.546 | 19.603 | 19.609 | 21.479
Average | PSNR | 25.256 | 27.019 | 26.704 | 27.069 | 26.853 | 26.933 | 26.758 | 26.987 | 27.018 | 26.919 | 26.775 | 26.731 | 27.070 | 27.054 | 27.070
Average | Cielab | 5.194 | 4.994 | 4.029 | 4.926 | 5.007 | 4.917 | 4.961 | 4.008 | 4.962 | 4.931 | 4.964 | 4.970 | 4.929 | 4.898 | 4.008
Average | SSIM | 0.424 | 0.535 | 0.468 | 0.535 | 0.527 | 0.534 | 0.529 | 0.511 | 0.534 | 0.532 | 0.529 | 0.516 | 0.534 | 0.534 | 0.535
Average | HVS | 21.227 | 22.015 | 22.481 | 22.153 | 22.126 | 22.201 | 21.902 | 22.621 | 22.217 | 22.207 | 21.873 | 21.977 | 22.153 | 22.188 | 22.621
Average | HVSm | 22.288 | 22.880 | 23.461 | 23.019 | 23.008 | 23.112 | 22.726 | 23.577 | 23.127 | 23.124 | 22.696 | 22.883 | 23.018 | 23.055 | 23.577
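The noisy test images behind these tables were produced by injecting Poisson noise into the clean Kodak images at a target SNR, and reconstruction quality is reported with PSNR among other metrics. A minimal sketch of one way to emulate such a low-light capture and score it (the mean-intensity scaling rule and the function names are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np

def add_poisson_noise(img, snr_db, rng=None):
    """Emulate low light: scale a clean image (values in [0, 1]) so that
    Poisson shot noise yields roughly the requested SNR, sample photon
    counts, and rescale back. For a Poisson count with mean lam, the power
    SNR is lam, so the photon scale is set from the mean intensity."""
    rng = np.random.default_rng() if rng is None else rng
    snr = 10.0 ** (snr_db / 10.0)               # dB -> linear power ratio
    scale = snr / max(img.mean(), 1e-12)        # photons per unit intensity
    noisy = rng.poisson(img * scale) / scale    # shot-noise-corrupted image
    return np.clip(noisy, 0.0, 1.0)

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB, as reported in the tables."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because Poisson noise variance grows with pixel intensity, the same target SNR produces visibly different noise in bright and dark regions, which is why the low-light results above differ from the Gaussian-noise case.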

References

  1. Bayer, B.E. Color Imaging Array. US Patent 3,971,065, 20 July 1976. [Google Scholar]
  2. Bell, J.F., III; Godber, A.; McNair, S.; Caplinger, M.A.; Maki, J.N.; Lemmon, M.T.; van Beek, J.; Malin, M.C.; Wellington, D.; Kinch, K.M.; et al. The Mars Science Laboratory Curiosity rover Mast Camera (Mastcam) instruments: Pre-flight and in-flight calibration, validation, and data archiving. Earth Space Sci. 2017. [Google Scholar] [CrossRef] [Green Version]
  3. Dao, M.; Kwan, C.; Ayhan, B.; Bell, J.F. Enhancing Mastcam images for Mars rover mission. In Proceedings of the 14th International Symposium on Neural Networks, Sapporo/Hakodate/Muroran, Japan, 21–26 June 2017; pp. 197–206. [Google Scholar]
  4. Kwan, C.; Budavari, B.; Dao, M.; Ayhan, B.; Bell, J.F. Pansharpening of Mastcam images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Fort Worth, TX, USA, 27–28 July 2017; pp. 5117–5120. [Google Scholar]
  5. Ayhan, B.; Dao, M.; Kwan, C.; Chen, H.; Bell, J.F.; Kidd, R. A novel utilization of image registration techniques to process Mastcam images in Mars rover with applications to image fusion, pixel clustering, and anomaly detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4553–4564. [Google Scholar] [CrossRef]
  6. Hamilton, J.; Compton, J. Processing Color and Panchromatic Pixels. U.S. Patent Application 2007/0024879 A1, 1 February 2007. [Google Scholar]
  7. Kijima, T.; Nakamura, H.; Compton, J.T.; Hamilton, J.F.; DeWeese, T.E. Image Sensor with Improved Light Sensitivity. U.S. Patent 8,139,130 B2, 1 February 2007. [Google Scholar]
  8. Zhang, C.; Li, Y.; Wang, J.; Hao, P. Universal demosaicking of color filter arrays. IEEE Trans. Image Process. 2016, 25, 5173–5186. [Google Scholar] [CrossRef] [PubMed]
  9. Condat, L. A generic variational approach for demosaicking from an arbitrary color filter array. In Proceedings of the 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 1625–1628. [Google Scholar]
  10. Menon, D.; Calvagno, G. Regularization approaches to demosaicking. IEEE Trans. Image Process. 2009, 18, 2209–2220. [Google Scholar] [CrossRef] [PubMed]
  11. Oh, P.; Li, S.; Kang, M.G. Colorization-based RGB-white color interpolation using color filter array with randomly sampled pattern. Sensors 2017, 17, 1523. [Google Scholar] [CrossRef] [Green Version]
  12. Kwan, C.; Chou, B.; Kwan, L.M.; Budavari, B. Debayering RGBW color filter arrays: A pansharpening approach. In Proceedings of the IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference, New York, NY, USA, 10 October 2017; pp. 94–100. [Google Scholar]
  13. Kwan, C.; Larkin, J. Demosaicing of Bayer and CFA 2.0 patterns for low lighting images. Electronics 2019, 8, 1444. [Google Scholar] [CrossRef] [Green Version]
  14. Kwan, C.; Chou, B. Further improvement of debayering performance of RGBW color filter arrays using deep learning and pansharpening techniques. J. Imaging 2019, 5, 68. [Google Scholar] [CrossRef] [Green Version]
  15. BM3D Denoising. Available online: http://www.cs.tut.fi/~foi/invansc/ (accessed on 22 October 2019).
  16. Zhang, L.; Wu, X.; Buades, A.; Li, X. Color demosaicking by local directional interpolation and nonlocal adaptive thresholding. J. Electron. Imaging 2011, 20, 023016. [Google Scholar] [CrossRef] [Green Version]
  17. Vivone, G.; Alparone, L.; Chanussot, J.; Mura, M.D.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586. [Google Scholar] [CrossRef]
  18. Liu, J.G. Smoothing filter based intensity modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 2000, 21, 3461–3472. [Google Scholar] [CrossRef]
  19. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored multiscale fusion of high-resolution MS and pan imagery. Photogramm. Eng. Remote. Sens. 2006, 72, 591–596. [Google Scholar] [CrossRef]
  20. Vivone, G.; Restaino, R.; Mura, M.D.; Licciardi, G.A.; Chanussot, J. Contrast and error-based fusion schemes for multispectral image pansharpening. IEEE Geosci. Remote Sens. Lett. 2014, 11, 930–934. [Google Scholar] [CrossRef] [Green Version]
  21. Laben, C.; Brower, B. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6,011,875A, 4 January 2000. [Google Scholar]
  22. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of MS+pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  23. Liao, W.; Huang, X.; Van Coillie, F.; Gautama, S.; Panić, M.; Philips, W.; Liu, H.; Zhu, T.; Shimoni, M.; Moser, G.; et al. Processing of multiresolution thermal hyperspectral and digital color data: Outcome of the 2014 IEEE GRSS data fusion contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1–13. [Google Scholar] [CrossRef]
  24. Choi, J.; Yu, K.; Kim, Y. A new adaptive component-substitution based satellite image fusion by using partial replacement. IEEE Trans. Geosci. Remote Sens. 2011, 49, 295–309. [Google Scholar] [CrossRef]
  25. Zhou, J.; Kwan, C.; Budavari, B. Hyperspectral image super-resolution: A hybrid color mapping approach. J. Appl. Remote Sens. 2016, 10, 35024. [Google Scholar] [CrossRef]
  26. Kwan, C.; Choi, J.H.; Chan, S.; Zhou, J.; Budavai, B. Resolution enhancement for hyperspectral images: A super-resolution and fusion approach. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, New Orleans, LA, USA, 5–9 March 2017; pp. 6180–6184. [Google Scholar]
  27. Kwan, C.; Budavari, B.; Feng, G. A hybrid color mapping approach to fusing MODIS and landsat images for forward prediction. Remote Sens. 2018, 10, 520. [Google Scholar] [CrossRef] [Green Version]
  28. Kwan, C.; Budavari, B.; Bovik, A.; Marchisio, G. Blind quality assessment of fused worldview-3 images by using the combinations of pansharpening and hypersharpening paradigms. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1835–1839. [Google Scholar] [CrossRef]
  29. Kwan, C.; Ayhan, B.; Budavari, B. Fusion of THEMIS and TES for accurate Mars surface characterization. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Fort Worth, TX, USA, 27–28 July 2017; pp. 3381–3384. [Google Scholar]
  30. Gharbi, M.; Chaurasia, G.; Paris, S.; Durand, F. Deep joint demosaicking and denoising. ACM Trans. Graph. 2016, 35, 1–12. [Google Scholar] [CrossRef]
  31. Leung, B.; Jeon, G.; Dubois, E. Least-squares luma–chroma demultiplexing algorithm for Bayer demosaicking. IEEE Trans. Image Process. 2011, 20, 1885–1894. [Google Scholar] [CrossRef]
  32. Kwan, C.; Chou, B.; Kwan, L.M.; Larkin, J.; Ayhan, B.; Bell, J.F.; Kerner, H. Demosaicking enhancement using pixel-level fusion. J. Signal Image Video Process. 2018, 12, 749. [Google Scholar] [CrossRef]
  33. Kwan, C.; Zhu, X.; Gao, F.; Chou, B.; Perez, D.; Li, J.; Shen, Y.; Koperski, K. Assessment of spatiotemporal fusion algorithms for planet and worldview images. Sensors 2018, 18, 1051. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. SSIM. Available online: https://en.wikipedia.org/wiki/Structural_similarity (accessed on 26 April 2019).
  35. Egiazarian, K.; Astola, J.; Ponomarenko, N.; Lukin, V.; Battisti, F.; Carli, M. New full-reference quality metrics based on HVS. In Proceedings of the Second International Workshop on Video Processing and Quality Metrics, Scottsdale, AZ, USA, 22–24 January 2006. [Google Scholar]
  36. Ponomarenko, N.; Silvestri, F.; Egiazarian, K.; Carli, M.; Astola, J.; Lukin, V. On between-coefficient contrast masking of DCT basis functions. In Proceedings of the Third International Workshop on Video Processing and Quality Metrics for Consumer Electronics VPQM-07, Scottsdale, AZ, USA, 25–26 January 2007. [Google Scholar]
  37. Zhang, X.; Wandell, B.A. A spatial extension of CIELAB for digital color image reproduction. SID J. 1997, 5, 61–63. [Google Scholar] [CrossRef]
  38. Ohta, J. Smart CMOS Image Sensors and Applications; CRC: Boca Raton, FL, USA, 2008. [Google Scholar]
  39. MacDonald, L. Digital Heritage; Butterworth-Heinemann: London, UK, 2006. [Google Scholar]
  40. Siegel, A.F. Practical Business Statistics, 7th ed.; Academic Press: Cambridge, MA, USA, 2016. [Google Scholar]
  41. Available online: https://www.roe.ac.uk/~al/ASM-bits/astrostats2012_part2.pdf (accessed on 16 June 2020).
  42. Available online: https://en.wikipedia.org/wiki/Shot_noise#/media/File:Poisson_pmf.svg (accessed on 16 June 2020).
  43. Knuth, D.E. Seminumerical Algorithms, 3rd ed.; The Art of Computer Programming, Volume 2; Addison Wesley: Boston, MA, USA, 1997. [Google Scholar]
  44. Poisson Noise Generation. Available online: https://github.com/erezposner/Shot-Noise-Generator (accessed on 24 April 2020).
  45. Poisson Noise Generation. Available online: http://www.numerical-tours.com/matlab/denoisingwav_5_data_dependent/ (accessed on 22 October 2019).
  46. Akiyama, H.; Tanaka, M.; Okutomi, M. Pseudo four-channel image denoising for noisy CFA raw data. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 4778–4782. [Google Scholar]
  47. Malvar, H.S.; He, L.-W.; Cutler, R. High-quality linear interpolation for demosaicking of color images. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, QC, Canada, 17–21 May 2004; pp. 485–488. [Google Scholar]
  48. Zhang, L.; Wu, X. Color demosaicking via directional linear minimum mean square-error estimation. IEEE Trans. Image Process. 2005, 14, 2167–2178. [Google Scholar] [CrossRef]
  49. Lu, W.; Tan, Y.P. Color filter array demosaicking: New method and performance measures. IEEE Trans. Image Process. 2003, 12, 1194–1210. [Google Scholar] [PubMed] [Green Version]
  50. Dubois, E. Frequency-domain methods for demosaicking of Bayer-sampled color images. IEEE Signal Process. Lett. 2005, 12, 847–850. [Google Scholar] [CrossRef]
  51. Gunturk, B.; Altunbasak, Y.; Mersereau, R.M. Color plane interpolation using alternating projections. IEEE Trans. Image Process. 2002, 11, 997–1013. [Google Scholar] [CrossRef]
  52. Wu, X.; Zhang, N. Primary-consistent soft-decision color demosaicking for digital cameras. IEEE Trans. Image Process. 2004, 13, 1263–1274. [Google Scholar] [CrossRef]
  53. Bednar, J.; Watt, T. Alpha-trimmed means and their relationship to median filters. IEEE Trans. Acoust. Speech Signal Process. 1984, 32, 145–153. [Google Scholar] [CrossRef]
  54. Klatzer, T.; Hammernik, K.; Knobelreiter, P.; Pock, T. Learning joint demosaicing and denoising based on sequential energy minimization. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Evanston, IL, USA, 11 January 2016. [Google Scholar]
  55. Tan, R.; Zhang, K.; Zuo, W.; Zhang, L. Color image demosaicking via deep residual learning. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China, 10–14 July 2017; pp. 793–798. [Google Scholar]
  56. Jaiswal, S.P.; Au, O.C.; Jakhetiya, V.; Yuan, Y.; Yang, H. Exploitation of inter-color correlation for color image demosaicking. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 1812–1816. [Google Scholar]
  57. Kiku, D.; Monno, Y.; Tanaka, M.; Okutomi, M. Beyond color difference: Residual interpolation for color image demosaicking. IEEE Trans. Image Process. 2016, 25, 1288–1300. [Google Scholar] [CrossRef]
  58. Monno, Y.; Kiku, D.; Tanaka, M.; Okutomi, M. Adaptive residual interpolation for color and multispectral image demosaicking. Sensors 2017, 17, 2787. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Wu, J.; Timofte, R.; Gool, L.V. Demosaicing based on directional difference regression and efficient regression priors. IEEE Trans. Image Process. 2016, 25, 3862–3874. [Google Scholar] [CrossRef] [PubMed]
  60. Kwan, C. Active Noise Reduction System for Creating a Quiet Zone. US Patent #9773494, 27 September 2017. [Google Scholar]
  61. Kwan, C.; Zhou, J.; Qiao, J.; Liu, G.; Ayhan, B. A high performance approach to local active noise reduction. In Proceedings of the IEEE Conference on Decision and Control, Las Vegas, NV, USA, 12–14 December 2016; pp. 347–352. [Google Scholar]
  62. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a fast and flexible solution for CNN based image denoising. arXiv 2018, arXiv:1710.04026. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Three CFA patterns. (a) CFA 1.0; (b) CFA 2.0; (c) CFA 3.0.
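The pixel proportions of CFA 3.0 (75% white, 12.5% green, and 6.25% each of red and blue) fit a 4×4 tile containing 12 white, 2 green, 1 red, and 1 blue pixel. As a minimal sketch, the mask below tiles one such pattern over a sensor; the exact placement of the color pixels within the tile is a hypothetical choice for illustration, not the published layout in Figure 1.

```python
import numpy as np

# Hypothetical 4x4 CFA 3.0 tile: 12 white (W), 2 green (G), 1 red (R),
# 1 blue (B), matching the stated 75% / 12.5% / 6.25% / 6.25% proportions.
# The positions of G, R, and B within the tile are an assumption.
TILE = np.array([['W', 'G', 'W', 'W'],
                 ['W', 'W', 'W', 'W'],
                 ['W', 'R', 'W', 'B'],
                 ['W', 'W', 'G', 'W']])

def cfa_mask(rows, cols):
    """Tile the 4x4 pattern over a full sensor of size rows x cols."""
    reps = (-(-rows // 4), -(-cols // 4))  # ceiling division per axis
    return np.tile(TILE, reps)[:rows, :cols]
```

On any sensor whose dimensions are multiples of 4, the mask reproduces the stated pixel fractions exactly.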
Figure 2. Standard approach for CFA 3.0.
Figure 3. A pan-sharpening approach for CFA 3.0.
Figure 4. A hybrid deep learning and pan-sharpening approach for CFA 3.0.
Figure 5. Poisson distributions with varying λ.
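The low lighting test images were emulated by injecting signal-dependent Poisson noise at a prescribed SNR. The sketch below shows one common recipe for this (scale intensities so the shot-noise statistics hit a target global SNR, then rescale back); the paper's exact procedure may differ.

```python
import numpy as np

def add_poisson_noise(img, snr_db, rng=None):
    """Emulate low-light shot noise at a target global SNR (a sketch).

    For y ~ Poisson(a*x) / a the noise variance at a pixel is x/a, so
    the global SNR is a * sum(x^2) / sum(x); solve that for the photon
    scale a, then draw the noisy image.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(img, dtype=np.float64), 0.0, None)
    target = 10.0 ** (snr_db / 10.0)           # dB -> linear power ratio
    a = target * x.sum() / (x ** 2).sum()      # photons per unit intensity
    # Output may slightly exceed the input range; clip afterwards if needed.
    return rng.poisson(a * x) / a
```

Because the noise variance tracks the signal, dark pixels come out relatively noisier than bright ones, which is the defining property of low lighting (shot-noise-limited) imagery.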
Figure 6. Averaged performance metrics for all the clean images. (a) PSNR; (b) CIELAB; (c) SSIM; (d) HVS and HVSm.
Figure 7. Visual comparison of three high performing demosaicing algorithms. The top row is the bird image and the bottom row is the lighthouse image. (a) Ground Truth; (b) Demonet + GSA; (c) ATMF; (d) F3.
Figure 8. Averaged performance metrics for all the low lighting images at 10 dBs SNR (Poisson noise). (a) PSNR; (b) CIELAB; (c) SSIM; (d) HVS and HVSm.
Figure 9. Visual comparison of three high performing demosaicing algorithms at 10 dBs SNR (Poisson noise). The top row is the bird image and the bottom row is the lighthouse image. (a) Ground Truth; (b) Standard; (c) GFPCA; (d) F3.
Figure 10. Averaged performance metrics for all the low light images at 10 dBs SNR (Poisson noise). (a) PSNR; (b) CIELAB; (c) SSIM; (d) HVS and HVSm.
Figure 11. Visual comparison of three high performing demosaicing algorithms at 10 dBs SNR (Poisson noise). The top row is the bird image and the bottom row is the lighthouse image. (a) Ground Truth; (b) Demonet + GFPCA; (c) ATMF; (d) F3.
Figure 12. Averaged performance metrics for all the low light images at 10 dBs SNR (Poisson noise). (a) PSNR; (b) CIELAB; (c) SSIM; (d) HVS and HVSm.
Figure 13. Visual comparison of three high performing demosaicing algorithms at 10 dBs SNR (Poisson noise). The top row is the bird image and the bottom row is the lighthouse image. (a) Ground Truth; (b) Demonet + GFPCA; (c) ATMF; (d) F3.
Figure 14. Averaged performance metrics for all the low light images at 20 dBs SNR (Poisson noise). (a) PSNR; (b) CIELAB; (c) SSIM; (d) HVS and HVSm.
Figure 15. Visual comparison of three high performing demosaicing algorithms at 20 dBs SNR (Poisson noise). The top row is the bird image and the bottom row is the lighthouse image. (a) Ground Truth; (b) GFPCA; (c) ATMF; (d) F3.
Figure 16. Averaged performance metrics for all the low light images at 20 dBs SNR (Poisson noise). (a) PSNR; (b) CIELAB; (c) SSIM; (d) HVS and HVSm.
Figure 17. Visual comparison of three high performing demosaicing algorithms at 20 dBs SNR (Poisson noise). The top row is the bird image and the bottom row is the lighthouse image. (a) Ground Truth; (b) Demonet + GFPCA; (c) ATMF; (d) F3.
Figure 18. Averaged performance metrics for all the low light images at 20 dBs SNR (Poisson noise). (a) PSNR; (b) CIELAB; (c) SSIM; (d) HVS and HVSm.
Figure 19. Visual comparison of three high performing demosaicing algorithms at 20 dBs SNR (Poisson noise). The top row is the bird image and the bottom row is the lighthouse image. (a) Ground Truth; (b) Demonet + GFPCA; (c) ATMF; (d) F3.
Figure 20. Best against the best comparison between CFAs 1.0, 2.0, and 3.0 in the noiseless case. (a) PSNR metrics; (b) CIELAB metrics; (c) SSIM metrics; (d) HVS and HVSm metrics.
Figure 21. Best against the best comparison between CFAs 1.0, 2.0, and 3.0 with and without denoising at 10 dBs SNR. (a) PSNR metrics; (b) CIELAB metrics; (c) SSIM metrics; (d) HVS metrics; (e) HVSm metrics.
Figure 22. Best against the best comparison between CFAs 1.0, 2.0, and 3.0 with and without denoising at 20 dBs SNR. (a) PSNR metrics; (b) CIELAB metrics; (c) SSIM metrics; (d) HVS metrics; (e) HVSm metrics.
Figure 23. No denoising cases at 10 dBs. Error distributions of the three CFAs.
Table 1. Comparison of CFAs for different demosaicing methods in the noiseless case (normal lighting conditions). Bold numbers indicate the best performing methods in each row.

| Metrics | CFA 1.0 / Best Algorithm | CFA 2.0 / Best Algorithm | CFA 3.0 / Best Algorithm |
|---------|--------------------------|--------------------------|--------------------------|
| PSNR    | **42.068** / ATMF        | 36.554 / F3              | 34.162 / Demonet + GSA   |
| CIELAB  | **0.996** / ATMF         | 1.956 / F3               | 2.372 / Demonet + GSA    |
| SSIM    | **0.922** / ATMF         | 0.892 / F3               | 0.857 / Demonet + GSA    |
| HVS     | **38.101** / ATMF        | 32.590 / F3              | 30.641 / Demonet + GSA   |
| HVSm    | **42.788** / ATMF        | 35.325 / F3              | 33.580 / Demonet + GSA   |
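The PSNR entries in the tables presumably follow the standard definition, 10·log10(peak²/MSE); a minimal sketch is shown below (the CIELAB, SSIM, and HVS/HVSm metrics are separate perceptual measures not reproduced here).

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).

    ref and test are same-shaped arrays; peak is the maximum possible
    pixel value (255 for 8-bit images, 1.0 for normalized floats).
    """
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

For example, an 8-bit reconstruction whose pixels are uniformly off by 25.5 gray levels yields an MSE of 650.25 and therefore a PSNR of exactly 20 dB.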
Table 2. Comparison of CFA patterns for the various demosaicing cases at 10 dBs SNR. Bold numbers indicate the best performing methods in each row.

| Metrics   | CFA | No Denoising / Best Algorithm | Denoising After Demosaicing / Best Algorithm | Denoising Before Demosaicing / Best Algorithm |
|-----------|-----|-------------------------------|----------------------------------------------|-----------------------------------------------|
| PSNR (dB) | 1.0 | 16.889 / F3                   | 20.826 / F3                                  | **21.978** / F3                               |
|           | 2.0 | 21.249 / F3                   | 24.050 / LSLCD                               | **26.141** / Demonet + GFPCA                  |
|           | 3.0 | 20.018 / GFPCA                | 20.573 / Demonet + GFPCA                     | **25.614** / Demonet + GFPCA                  |
| CIELAB    | 1.0 | 10.149 / GFPCA                | 6.664 / F3                                   | **6.545** / Demonet                           |
|           | 2.0 | 6.354 / GFPCA                 | 5.516 / F3                                   | **4.310** / Demonet + GFPCA                   |
|           | 3.0 | 7.288 / GFPCA                 | 7.236 / Demonet + GFPCA                      | **4.596** / Demonet + GFPCA                   |
| SSIM      | 1.0 | 0.455 / F3                    | **0.476** / ATMF                             | 0.463 / ATMF                                  |
|           | 2.0 | 0.451 / ATMF                  | 0.459 / LSLCD                                | **0.467** / Standard                          |
|           | 3.0 | 0.429 / GFPCA                 | 0.366 / F3                                   | **0.461** / Standard                          |
| HVS (dB)  | 1.0 | 12.285 / SEM                  | 16.229 / F3                                  | **16.833** / ARI                              |
|           | 2.0 | 16.531 / F3                   | 19.056 / LSLCD                               | **22.053** / Demonet + GFPCA                  |
|           | 3.0 | 15.294 / GFPCA                | 16.277 / Demonet + GFPCA                     | **21.346** / Demonet + GFPCA                  |
| HVSm (dB) | 1.0 | 12.403 / SEM                  | 16.494 / F3                                  | **17.116** / ARI                              |
|           | 2.0 | 16.868 / F3                   | 19.568 / LSLCD                               | **23.121** / Demonet + GFPCA                  |
|           | 3.0 | 15.551 / HPM                  | 16.611 / Demonet + GFPCA                     | **22.245** / Demonet + GFPCA                  |
Table 3. Comparison of CFA patterns for the various demosaicing cases at 20 dBs SNR. Bold numbers indicate the best performing methods in each row.

| Metrics   | CFA | No Denoising / Best Algorithm | Denoising After Demosaicing / Best Algorithm | Denoising Before Demosaicing / Best Algorithm |
|-----------|-----|-------------------------------|----------------------------------------------|-----------------------------------------------|
| PSNR (dB) | 1.0 | 20.488 / ATMF                 | 22.821 / F3                                  | **24.059** / Bilinear                         |
|           | 2.0 | 23.290 / F3                   | 24.391 / GSA                                 | **28.172** / LSLCD                            |
|           | 3.0 | 21.821 / GFPCA                | 21.292 / F3                                  | **27.070** / Demonet                          |
| CIELAB    | 1.0 | 6.713 / Demonet               | 5.256 / Demonet                              | **4.935** / Demonet                           |
|           | 2.0 | 5.121 / GFPCA                 | 5.268 / LSLCD                                | **3.584** / F3                                |
|           | 3.0 | 6.214 / GFPCA                 | 6.605 / Demonet + GFPCA                      | **4.008** / GFPCA                             |
| SSIM      | 1.0 | 0.517 / ATMF                  | 0.548 / F3                                   | **0.574** / F3                                |
|           | 2.0 | 0.535 / PCA                   | 0.535 / LSLCD                                | **0.539** / GSA                               |
|           | 3.0 | 0.532 / F3                    | 0.509 / GLP                                  | **0.535** / Standard                          |
| HVS (dB)  | 1.0 | 16.130 / Demonet              | 18.204 / Bilinear                            | **19.142** / Demonet                          |
|           | 2.0 | 18.646 / F3                   | 19.415 / LSLCD                               | **24.382** / ATMF                             |
|           | 3.0 | 17.061 / GPCA                 | 17.030 / Demonet + GFPCA                     | **22.621** / GFPCA                            |
| HVSm (dB) | 1.0 | 16.365 / Demonet              | 18.734 / Bilinear                            | **19.444** / ARI                              |
|           | 2.0 | 19.112 / F3                   | 19.881 / LSLCD                               | **25.516** / ATMF                             |
|           | 3.0 | 17.400 / GFPCA                | 17.313 / Demonet + GFPCA                     | **23.576** / GFPCA                            |
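ATMF, one of the strongest performers in the tables above, builds on the alpha-trimmed mean of reference [53]: sort the samples, discard a fraction α from each end, and average the remainder. The sketch below shows only this core statistic; how the full ATMF demosaicing pipeline applies it across candidate estimates is not reproduced here.

```python
import numpy as np

def alpha_trimmed_mean(values, alpha=0.25):
    """Alpha-trimmed mean of a sample (cf. Bednar and Watt [53]).

    alpha = 0 gives the ordinary mean; alpha -> 0.5 approaches the
    median, so the trim fraction trades efficiency for robustness.
    """
    v = np.sort(np.asarray(values, dtype=np.float64).ravel())
    k = int(alpha * v.size)                 # samples trimmed from each end
    if v.size <= 2 * k:                     # degenerate trim: fall back
        return float(np.median(v))
    return float(v[k: v.size - k].mean())
```

The trimming is what gives the estimator its outlier rejection: a single wild value (such as a noise spike in one candidate estimate) is discarded before averaging instead of dragging the result.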

Share and Cite

MDPI and ACS Style

Kwan, C.; Larkin, J.; Ayhan, B. Demosaicing of CFA 3.0 with Applications to Low Lighting Images. Sensors 2020, 20, 3423. https://doi.org/10.3390/s20123423
