Article

Joint Demosaicing and Denoising Based on Interchannel Nonlocal Mean Weighted Moving Least Squares Method

Yeahwon Kim, Hohyung Ryu, Sunmi Lee and Yeon Ju Lee
1 Department of Mathematics, Graduate School, Kyung Hee University, Seoul 02447, Korea
2 Department of Applied Mathematics, Kyung Hee University, Yongin 446-701, Korea
3 Division of Applied Mathematics, Korea University, Sejong 30019, Korea
* Author to whom correspondence should be addressed.
Sensors 2020, 20(17), 4697; https://doi.org/10.3390/s20174697
Submission received: 17 July 2020 / Revised: 14 August 2020 / Accepted: 17 August 2020 / Published: 20 August 2020
(This article belongs to the Special Issue Digital Imaging with Multispectral Filter Array (MSFA) Sensors)

Abstract

Nowadays, the sizes of the pixel sensors in digital cameras are decreasing as the resolution of the image sensor increases. Due to the decreased size, the pixel sensors receive less light energy, which makes them more sensitive to thermal noise. Even a small amount of noise in the color filter array (CFA) can have a significant effect on the reconstruction of the color image, as two-thirds of the color data are missing and must be reconstructed from the noisy samples; because of this, denoising should be performed directly on the raw CFA to obtain a high-resolution color image. In this paper, we propose an interchannel nonlocal weighted moving least squares method for the noise removal of the raw CFA. The proposed method is, to our knowledge, the first attempt to apply a two-dimensional (2-D) polynomial approximation to denoising the CFA; previous works use 2-D linear or directional 1-D polynomial approximations. The reason that 2-D polynomial approximation methods have not been applied to this problem is the difficulty of controlling the weights in the 2-D polynomial approximation, as a small amount of noise can have a large effect on the approximated 2-D shape. This difficulty is aggravated in CFA denoising, as the approximated 2-D shape has to be reconstructed from only one-third of the original data. To address this problem, we propose a method that reconstructs the approximated 2-D shapes corresponding to the RGB color channels based on similarities of patches measured directly on the CFA. By doing so, the interchannel information is incorporated into the denoising scheme, which results in a well-controlled, higher-order polynomial approximation of the color channels. Compared to other nonlocal-mean-based denoising methods, the proposed method uses an extra reproducing constraint, which guarantees a certain approximation order; therefore, the proposed method can reduce the false reconstruction artifacts that often occur in nonlocal-mean-based denoising methods. Experimental results demonstrate the performance of the proposed algorithm.

1. Introduction

Demosaicing refers to the task of reconstructing a full-color image from the incomplete color samples sensed by the image sensor of a digital camera. The color filter array (CFA), which is placed over the image sensor, determines which color component is sampled at each pixel position; the most commonly used CFA is the Bayer CFA [1]. One significant problem for the demosaicing process is noise. Nowadays, the sizes of the pixel sensors are decreasing as the resolutions of camera sensors increase, which makes the pixel sensors more sensitive to noise, since smaller sensors receive less light energy.
Conventionally, the denoising process was applied as a postprocessing step after the demosaicing, because the correlation between neighboring pixels is larger in the demosaiced image than in the raw CFA image, which makes it easier to apply a local denoising process. For example, in [2], a polynomial interpolation-based demosaicing method was proposed to resolve both the noise and the zippering and false color artifacts that occur on object boundaries, with contributions to the calculation of error predictors and to edge classification. The well-known nonlocal-mean-based denoising scheme was introduced into the demosaicing problem in [3]. Nonlocal mean denoising is a self-similarity-driven method that incorporates the interaction between the colors and the local image geometry, and it was shown to remove noise effectively. Another way to retain sharp edges during the demosaicing process is to interpolate the residuals, i.e., the differences between the original and the tentative images [4]. In this approach, the tentative image is first estimated by an edge-preserving guided filter [5]; then, the estimated residual images are interpolated by a demosaicing algorithm, e.g., the gradient-based threshold-free method [6].
A major problem when denoising is applied as a postprocess to the demosaicing is that the effect of the noise on the demosaicing process cannot be undone after the demosaicing. Even when there is only a small amount of noise in the color filter array, the effect on the demosaiced image can be significant, as the noise affects the reconstruction of the two-thirds of the data that are missing. Because of this, it has become a recent trend to denoise the raw CFA image before applying the demosaicing process [7,8,9]. Denoising the raw CFA is, however, a more challenging task than denoising the demosaiced image, since two-thirds of the data are missing and the distances between the data points are larger than in the demosaiced image. The authors of [7] first augment the insufficient data in the raw CFA by converting the raw CFA data to a pseudo-four-channel image (two green channels, one red, and one blue channel). The four-channel data are then transformed based on principal component analysis (PCA). Since the signal energy is compact in the transformed space while the noise is uniformly distributed, a block-matching and 3D filtering (BM3D) method works very well for denoising there. Lastly, the denoised data are rearranged to obtain the denoised raw CFA data, and the residual interpolation (RI) method is used for demosaicing. The authors of [8] incorporate cross-color correlations into a modified BM3D denoising, while the authors of [9] use a unified objective function that efficiently combines the BM3D filtering and the total variation minimization via the alternating direction method of multipliers (ADMM).
Besides the traditional approaches to demosaicing, there are attempts to achieve joint demosaicing and denoising with convolutional neural networks (CNNs), such as the works of [10,11,12,13,14]. Beyond standard CNNs, a generative adversarial network (GAN)-based method [15] and fully end-to-end deep neural models [16,17] have also been proposed to tackle the demosaicing problem. However, the drawback of using neural networks is that large amounts of data are required to train the network. Therefore, even though some CNNs can achieve state-of-the-art results, it is still important to develop methods that can work with a single image.
The works of [18,19] propose the use of white pixels obtained by an RGBW color filter array and observe that white pixels are helpful for demosaicing in low-light conditions. The work of [20] showed that intracolor demosaicing is important to reduce color artifacts, while the work of [21] analyzed the real color acquisition process from the raw image to the sRGB image space and showed that it is proper to perform denoising directly on the raw sensor data. Mathematical tools such as anisotropic diffusion [22] and polarization-based methods [23] have also been applied to the demosaicing problem; the work in [22] developed a linear anisotropic diffusion method for arbitrary color filter arrays, while the work in [23] extended the CFA to a polarization filter array (PFA). In this paper, we apply an approximation-based method to the demosaicing problem. Approximation-based denoising methods have been applied in various fields: the work in [24] applied a nonlinear moving least-squares projection method to the denoising of high-dimensional noisy scattered data, while the work in [25] applied a biquadratic polynomial approximation to the denoising of medical images. In [26], a modified singular value thresholding with minimizing error constraints is used for the segmentation of noisy signals by an orthogonal polynomial approximation.
In this paper, we propose a nonlocal interchannel weighted moving least squares method for the denoising of noisy Bayer CFAs. The proposed method is, to our knowledge, the first attempt to apply a two-dimensional polynomial-approximation-based denoising to noisy CFA patterns. Previous joint denoising and demosaicing methods take either a 2-D approach with a nonpolynomial approximation, e.g., a two-dimensional linear weighted sum of the neighboring pixels, or a directional 1-D polynomial approximation approach. The reason that previous polynomial approximation methods do not use a 2-D approach is the difficulty of controlling the weights that form the 2-D approximation shape. If the conventional weighted least squares approach is applied to the 2-D polynomial approximation, the resulting 2-D approximation shape cannot represent the discontinuities of edge regions well and will therefore result in an oversmoothed image. To overcome this problem, we compute the nonlocal mean weights for the weighted least squares directly from the raw color filter array, taking the interchannel information into account. The goal of solving the weighted least squares problem is to find the coefficients of the 2-D filters that are used in the reconstruction of the pixel values; better weights result in better 2-D filters. After the filter coefficients are obtained, we apply a nonlinear transform to these coefficients so that the filtering of the data points with the 2-D filters involves only the important data points.
Compared with nonlocal-mean-based methods, e.g., the BM3D denoising, which only use mean filtering based on the nonlocal similarities of the pixels, the proposed method incorporates an extra reproducing constraint into the denoising scheme. Nonlocal-mean-based denoising methods normally work well when there are repetitive patterns in the image; however, when the correlations between the patches in the search region weaken, they can produce many false intensity values, because the denoising is performed without considering the approximation error. In comparison, the reproducing constraint is directly related to the reconstruction error, i.e., the approximation error. Using a reproducing constraint prevents the approximated denoised CFA image from deviating too far from the original CFA image, which improves the approximation accuracy.
We summarize the main contributions of the proposed method as follows:
  • For the first time, we apply a two-dimensional polynomial-approximation-based denoising method to noisy CFA patterns.
  • Compared with nonlocal-mean-based methods, e.g., the BM3D denoising, the proposed method incorporates an extra reproducing constraint into the denoising scheme. This guarantees an approximation accuracy of a desired order.
  • We incorporate interchannel information into the polynomial approximation by determining the nonlocal weights directly from the noisy raw CFA image.

2. Relation of the Proposed Work to Sensors

Nowadays, most digital cameras acquire images with a single monochrome image sensor overlaid by a color filter array (CFA) to capture color information (Figure 1). The reason for the use of a single image sensor in digital cameras is to reduce the cost. However, due to the use of a single image sensor, the color channels are undersampled and the missing color information has to be restored. The aim of a demosaicing algorithm in the IP (Image Processing) module in digital cameras is to reconstruct the full color image from the spatially undersampled color channels output from the CFA. As the original CFA image is noisy and of low resolution, demosaicing algorithms are centered on resolution improvement and denoising. This paper proposes a joint denoising and demosaicing framework based on the moving least square (MLS) method and, therefore, helps to overcome the physical limitations of single monochrome image sensors in digital cameras.

3. Related Works

3.1. Residual Interpolation

The residual interpolation (RI) method proposed in [4] is a demosaicing method with excellent performance. It is an algorithm developed by integrating residual interpolation with the gradient-based threshold-free (GBTF) algorithm [6]. With this method, a tentative G pixel value is first estimated at the positions of the R and B pixels by the Hamilton–Adams interpolation, as in the GBTF algorithm. After that, the R and B pixel values are interpolated by the residual interpolation process. For example, for the R image, the RI algorithm first generates a tentative estimate $\tilde{R}$ of the R image by a guided upsampling process and then computes the residuals between the R pixel values and the $\tilde{R}$ pixel values at the R pixel positions. The residuals are then interpolated to produce the interpolated residual image, and the demosaiced R image $\hat{R}$ is constructed by adding $\tilde{R}$ to the interpolated residual image. The demosaiced B image $\hat{B}$ is constructed in the same manner. In the proposed method, we use the RI method to construct the full-color image after the denoised CFA image has been obtained with the proposed interchannel nonlocal mean weighted moving least squares method.
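To make these steps concrete, here is a minimal sketch of one RI pass for the R channel in Python/NumPy; `guided_up` and `interp` are placeholder callables standing in for the guided upsampling of [5] and a residual interpolator such as GBTF [6], not the reference implementations:

```python
import numpy as np

def residual_interpolate(sparse_R, mask_R, guide_G, guided_up, interp):
    """Hedged sketch of one residual interpolation (RI) step for the R channel.

    sparse_R : R samples placed on the full grid (zeros at non-R positions)
    mask_R   : boolean mask of the R positions in the Bayer CFA
    guide_G  : fully interpolated G image used as the guide
    guided_up, interp : placeholder callables for the guided upsampling
        and the residual interpolation steps (e.g., GBTF)
    """
    # 1. Tentative estimate R~ of the full R image via guided upsampling.
    R_tent = guided_up(guide_G, sparse_R, mask_R)
    # 2. Residuals between the observed R values and R~ at R positions only.
    residual = np.where(mask_R, sparse_R - R_tent, 0.0)
    # 3. Interpolate the residual image; residuals are smoother than the
    #    raw channel, so interpolation loses less detail.
    residual_full = interp(residual, mask_R)
    # 4. Demosaiced channel: R^ = R~ + interpolated residuals.
    return R_tent + residual_full
```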

3.2. Moving Least Square Methods with Total Variation Minimization

Moving least squares (MLS) refers to the method of reconstructing a continuous function from a set of unorganized point samples by solving a weighted least squares problem around the point at which the reconstruction is required. The solution of the MLS method has a closed form and is easily computed by solving a linear system. It has been shown to be quite useful in interpolation-based image processing such as super-resolution and image zooming [27,28,29]. The MLS method has also been applied to two-dimensional linear and nonlinear systems of integral equations [30], to interpolation in meshless environments [31], and to other image processing tasks such as nonlinear color transfer [32], but not to the problem of denoising the CFA image, since one of the major drawbacks of the MLS method is its sensitivity to noise. To overcome this weakness against noise, in [33], we proposed incorporating the total variation regularization [34] into the MLS framework for better denoising power. However, if the MLS formulation of [33] is directly applied to the joint demosaicing and denoising problem, it yields poor results, as the correlation between the color channels cannot be measured well: two-thirds of the data are missing and the positions of the missing data are different for each color channel. Therefore, to apply the MLS method to denoising the color filter array, the formulation has to be adapted to the problem and the weights have to be calculated while taking the interchannel information into account.
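As a reference point for the formulation in Section 4, the following minimal sketch shows a plain (unregularized) MLS fit at a single point, assuming a bivariate quadratic basis and a simple Gaussian weight; the proposed method replaces this weight with the interchannel nonlocal weights and adds the total variation term:

```python
import numpy as np

def mls_value(points, values, center, h=1.0):
    """Plain MLS estimate at `center` from scattered samples.
    points: (N, 2) sample coordinates; values: (N,) noisy samples."""
    d = points - center                        # center the basis at the point
    w = np.exp(-np.sum(d**2, axis=1) / h**2)   # Gaussian least-squares weights
    x, y = d[:, 0], d[:, 1]
    # Bivariate polynomial basis of total degree <= 2
    A = np.stack([np.ones_like(x), x, y, x*x, x*y, y*y], axis=1)
    sw = np.sqrt(w)
    # Weighted least squares: minimize sum_i w_i (A_i c - f_i)^2
    c, *_ = np.linalg.lstsq(sw[:, None] * A, sw * values, rcond=None)
    return c[0]  # constant term = fitted value at the (centered) origin
```

Sliding the evaluation point across the image and refitting at every pixel is what makes the least squares "moving".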

4. Proposed Method

4.1. Problem Formulation

Let the ground-truth color image on an image domain $\Omega$ be given as $\mathbf{u} = \{\mathbf{u}(i,j)\}_{(i,j) \in \Omega}$ with $\mathbf{u}(i,j) = [R(i,j), G(i,j), B(i,j)]^T$. The problem of demosaicing is to construct a color image $\mathbf{u}^*$ from a noisy CFA $x$, where $x$ is the sum of the mosaiced pattern image $u_M$ and the noise $n$:

$$x(i,j) = u_M(i,j) + n(i,j), \quad (i,j) \in \Omega.$$

It should be noted that $\mathbf{u}$ is a color image, which is why we use a vector representation for $\mathbf{u}$, while the mosaiced pattern image $u_M$ is a monochrome CFA image. In this paper, the mosaiced pattern image $u_M$ is a Bayer pattern image and $n$ is white Gaussian noise following a normal distribution $n(i,j) \sim \mathcal{N}(0, \sigma^2)$ with standard deviation $\sigma$. Our goal is to reconstruct $\mathbf{u}^*$ as close as possible to the true image $\mathbf{u}$. For the denoising of the CFA, we first construct, for each pixel, a local polynomial approximation function that reflects the local structure at that pixel. We then apply a nonlinear transform to the coefficients of the local polynomial approximation function, which yields a 2-D filter. Finally, we take the dot product of the filter with the neighborhood of the pixel to determine the value of the center pixel. The filtered values become the denoised version of the original noisy pixels in the CFA. We describe the proposed method in detail in the following subsection.
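For concreteness, the following is a minimal sketch of forming the noisy observation $x = u_M + n$; the RGGB layout is an illustrative assumption, as the model above does not fix the arrangement:

```python
import numpy as np

def noisy_bayer(rgb, sigma=7.65, rng=None):
    """Form the noisy CFA observation x = u_M + n from a ground-truth
    color image `rgb` of shape (H, W, 3). An RGGB layout is assumed here,
    which is an illustrative choice, not mandated by the model."""
    rng = np.random.default_rng() if rng is None else rng
    H, W, _ = rgb.shape
    u_M = np.zeros((H, W))
    u_M[0::2, 0::2] = rgb[0::2, 0::2, 0]     # R samples
    u_M[0::2, 1::2] = rgb[0::2, 1::2, 1]     # G samples (even rows)
    u_M[1::2, 0::2] = rgb[1::2, 0::2, 1]     # G samples (odd rows)
    u_M[1::2, 1::2] = rgb[1::2, 1::2, 2]     # B samples
    n = rng.normal(0.0, sigma, size=(H, W))  # white Gaussian noise
    return u_M + n
```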

4.2. Interchannel Data Weighted Least Squares Reconstruction

Let $x$ be a given mosaiced CFA image and let $\mathbf{p}_C = (p_C, q_C)$ denote a pixel position corresponding to the color $C \in \{R, G, B\}$. We want to construct the local polynomial approximation functions $L_{\mathbf{p}_C}(\mathbf{r})$ corresponding to each color $C \in \{R, G, B\}$:

$$L_{\mathbf{p}_C}(\mathbf{r}) := \sum_{|\alpha|_1 \le m} c_\alpha \mathbf{r}^\alpha, \quad \mathbf{r} \in \Omega_{\mathbf{p}_C},$$

where $\Omega_{\mathbf{p}_C}$ denotes the set of pixels that are in the neighborhood of $\mathbf{p}_C$ and have the same color as $\mathbf{p}_C$. For example, $\Omega_{\mathbf{p}_G}$ contains only the Green pixels in the CFA that are in the neighborhood of $\mathbf{p}_G$, i.e., $\Omega_{\mathbf{p}_G}$ contains the nearest 41 Green pixel positions $(p', q')$, which are $(p', q') = (p_G + 2i, q_G + 2j)$, $i, j = -2, -1, 0, 1, 2$, and $(p', q') = (p_G + 2i - 1, q_G + 2j - 1)$, $i, j = -1, 0, 1, 2$. The polynomial $L_{\mathbf{p}_C}$ is then constructed to match the distribution of the data in the set $\Omega_{\mathbf{p}_C}$; by doing so, $L_{\mathbf{p}_C}$ takes the local structure at $\mathbf{p}_C$ into account. For the Red and Blue pixels in the CFA, we construct the polynomials $L_{\mathbf{p}_R}$ and $L_{\mathbf{p}_B}$ from the data of larger regions, i.e., from the nearest 49 Red or Blue pixels, respectively. For a specific Red pixel $\mathbf{p}_R = (p_R, q_R)$, the nearest 49 Red pixels are $(p', q') = (p_R + 2i, q_R + 2j)$, $i, j = -3, \ldots, 3$, and likewise the nearest 49 Blue pixels for $\mathbf{p}_B$ are $(p', q') = (p_B + 2i, q_B + 2j)$, $i, j = -3, \ldots, 3$.
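The index sets above translate directly into offset lists; the small sketch below reproduces the stated counts of 41 Green and 49 Red/Blue neighbors:

```python
def green_offsets():
    """Offsets of the 41 nearest Green neighbors of a Green pixel p_G
    on the Bayer quincunx: 25 'even' plus 16 'odd' positions."""
    even = [(2*i, 2*j) for i in range(-2, 3) for j in range(-2, 3)]
    odd = [(2*i - 1, 2*j - 1) for i in range(-1, 3) for j in range(-1, 3)]
    return even + odd  # len == 41

def red_blue_offsets():
    """Offsets of the 49 nearest same-color neighbors of an R (or B)
    pixel: a 7 x 7 same-color grid with stride 2."""
    return [(2*i, 2*j) for i in range(-3, 4) for j in range(-3, 4)]  # len == 49
```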
The polynomial $L_{\mathbf{p}_G}$ can be obtained by the following minimization, which takes the total variation regularization into account:

$$\begin{aligned} \operatorname*{argmin}_{L_{\mathbf{p}_G} \in \Pi_m} \Bigg\{ \sum_{i,j=-2}^{2} \Big( \big| \nabla L_{\mathbf{p}_G}(p_G+2i, q_G+2j) \big| + \frac{\mu}{2} \big| L_{\mathbf{p}_G}(p_G+2i, q_G+2j) - x(p_G+2i, q_G+2j) \big|^2 \, \theta_{\mathbf{p}_G}(p_G+2i, q_G+2j) \Big) \\ {} + \sum_{i,j=-1}^{2} \Big( \big| \nabla L_{\mathbf{p}_G}(p_G+2i-1, q_G+2j-1) \big| + \frac{\mu}{2} \big| L_{\mathbf{p}_G}(p_G+2i-1, q_G+2j-1) - x(p_G+2i-1, q_G+2j-1) \big|^2 \, \theta_{\mathbf{p}_G}(p_G+2i-1, q_G+2j-1) \Big) \Bigg\}. \end{aligned} \tag{1}$$
It should be noticed that the gradients $\nabla L_{\mathbf{p}_G}(p_G+2i, q_G+2j)$ and $\nabla L_{\mathbf{p}_G}(p_G+2i-1, q_G+2j-1)$ are computed by taking only the Green pixels of the CFA into account, rather than all neighboring pixels as when computing gradients in ordinary images. For the computation of $L_{\mathbf{p}_R}$ and $L_{\mathbf{p}_B}$, we solve the following minimization problems,

$$\operatorname*{argmin}_{L_{\mathbf{p}_R} \in \Pi_m} \sum_{i,j=-3}^{3} \Big( \big| \nabla L_{\mathbf{p}_R}(p_R+2i, q_R+2j) \big| + \frac{\mu}{2} \big| L_{\mathbf{p}_R}(p_R+2i, q_R+2j) - x(p_R+2i, q_R+2j) \big|^2 \, \theta_{\mathbf{p}_R}(p_R+2i, q_R+2j) \Big), \tag{2}$$

and

$$\operatorname*{argmin}_{L_{\mathbf{p}_B} \in \Pi_m} \sum_{i,j=-3}^{3} \Big( \big| \nabla L_{\mathbf{p}_B}(p_B+2i, q_B+2j) \big| + \frac{\mu}{2} \big| L_{\mathbf{p}_B}(p_B+2i, q_B+2j) - x(p_B+2i, q_B+2j) \big|^2 \, \theta_{\mathbf{p}_B}(p_B+2i, q_B+2j) \Big), \tag{3}$$
where once again the gradients $\nabla L_{\mathbf{p}_R}(p_R+2i, q_R+2j)$ and $\nabla L_{\mathbf{p}_B}(p_B+2i, q_B+2j)$ are computed with respect to the Red and Blue pixels in the CFA, respectively. The weighting functions $\theta_{\mathbf{p}_C}$, $C \in \{R, G, B\}$, are defined as

$$\theta_{\mathbf{p}_G}(\hat{p}_G, \hat{q}_G) := \exp\left( -\frac{1}{h_0^2} \sum_{i,j=-3}^{3} G_\sigma(i,j) \big( x(p_G+i, q_G+j) - x(\hat{p}_G+i, \hat{q}_G+j) \big)^2 \right)$$

for Equation (1), and

$$\theta_{\mathbf{p}_R}(\hat{p}_R, \hat{q}_R) := \exp\left( -\frac{1}{h_0^2} \sum_{i,j=-4}^{4} G_\sigma(i,j) \big( x(p_R+i, q_R+j) - x(\hat{p}_R+i, \hat{q}_R+j) \big)^2 \right),$$

$$\theta_{\mathbf{p}_B}(\hat{p}_B, \hat{q}_B) := \exp\left( -\frac{1}{h_0^2} \sum_{i,j=-4}^{4} G_\sigma(i,j) \big( x(p_B+i, q_B+j) - x(\hat{p}_B+i, \hat{q}_B+j) \big)^2 \right),$$
for Equations (2) and (3), respectively, where $G_\sigma(\cdot,\cdot)$ denotes a Gaussian function with standard deviation $\sigma$. The nonlocal weight $\theta_{\mathbf{p}_C}$ measures the similarity of the data structure at $\mathbf{p}_C = (p_C, q_C)$ and $\hat{\mathbf{p}}_C = (\hat{p}_C, \hat{q}_C)$, where $\mathbf{p}_C$ and $\hat{\mathbf{p}}_C$ are pixels belonging to the same color. However, even though $\theta_{\mathbf{p}_C}$ measures the similarity between pixels of the same color, all the RGB pixels are involved in the computation of the similarity. For example, even though $\hat{\mathbf{p}}_G = (\hat{p}_G, \hat{q}_G)$ is the position of a Green pixel, the positions $(i,j)$, $i = \hat{p}_G - 3, \ldots, \hat{p}_G + 3$, $j = \hat{q}_G - 3, \ldots, \hat{q}_G + 3$, cover pixels of all colors. This is in contrast with (1)–(3), where we used only the pixels corresponding to a specific color. The incorporation of all the RGB pixels provides the similarity computation with interchannel information, which enhances the accuracy of the weighting functions; as a different weighting function results in a different local polynomial approximation function, the interchannel information results in a better polynomial approximation. The size of the window that contains the pixels involved in the computation is $7 \times 7$ for $\theta_{\mathbf{p}_G}$ and $9 \times 9$ for $\theta_{\mathbf{p}_R}$ and $\theta_{\mathbf{p}_B}$, as shown in Figure 2.

Normally, after obtaining all the local polynomial approximation functions $L_{\mathbf{p}_C}(\mathbf{r})$ corresponding to all colors $C \in \{R, G, B\}$ and all pixels, we could already obtain the denoised CFA image by evaluating each function $L_{\mathbf{p}_C}$ at the point where it was constructed, i.e., by evaluating $L_{\mathbf{p}_C}(\mathbf{p}_C)$ for all $\mathbf{p}_C$ and assembling all the values into a 2-D image. With the proposed method, however, we take one more step to eliminate the effect of the data points that contribute little to the reconstruction of the function values $L_{\mathbf{p}_C}(\mathbf{p}_C)$. To do this, we first use the result of [35] that $L_{\mathbf{p}_C}(\mathbf{p}_C)$ can be expressed as a dot product between filter weights and the data points. Applying this result to our case, we can rewrite the function values $L_{\mathbf{p}_G}(\mathbf{p}_G)$, $L_{\mathbf{p}_R}(\mathbf{p}_R)$, and $L_{\mathbf{p}_B}(\mathbf{p}_B)$ as
$$L_{\mathbf{p}_G}(\mathbf{p}_G) := \sum_{i,j=-2}^{2} w^G_{2i,2j} \, x(p_G+2i, q_G+2j) + \sum_{i,j=-1}^{2} w^G_{2i-1,2j-1} \, x(p_G+2i-1, q_G+2j-1), \tag{4}$$

$$L_{\mathbf{p}_R}(\mathbf{p}_R) := \sum_{i,j=-3}^{3} w^R_{i,j} \, x(p_R+2i, q_R+2j), \tag{5}$$

$$L_{\mathbf{p}_B}(\mathbf{p}_B) := \sum_{i,j=-3}^{3} w^B_{i,j} \, x(p_B+2i, q_B+2j). \tag{6}$$
By inserting (4)–(6) into (1)–(3), respectively, we obtain the filter coefficients $\{w^G_{2i,2j}\}$, $\{w^G_{2i-1,2j-1}\}$, $\{w^R_{i,j}\}$, and $\{w^B_{i,j}\}$. Next, we apply nonlinear transforms to the filter coefficients to get
$$\phi^{\mathrm{even}}_{th}(w^G_{2i,2j}) = \frac{\mathbb{1}_{|w^G_{2i,2j}| \ge th} \, w^G_{2i,2j}}{\sum_{i,j=-2}^{2} \mathbb{1}_{|w^G_{2i,2j}| \ge th} + \sum_{i,j=-1}^{2} \mathbb{1}_{|w^G_{2i-1,2j-1}| \ge th}}, \tag{7}$$

$$\phi^{\mathrm{odd}}_{th}(w^G_{2i-1,2j-1}) = \frac{\mathbb{1}_{|w^G_{2i-1,2j-1}| \ge th} \, w^G_{2i-1,2j-1}}{\sum_{i,j=-2}^{2} \mathbb{1}_{|w^G_{2i,2j}| \ge th} + \sum_{i,j=-1}^{2} \mathbb{1}_{|w^G_{2i-1,2j-1}| \ge th}}, \tag{8}$$

$$\phi_{th}(w^R_{i,j}) = \frac{\mathbb{1}_{|w^R_{i,j}| \ge th} \, w^R_{i,j}}{\sum_{i,j=-3}^{3} \mathbb{1}_{|w^R_{i,j}| \ge th}}, \tag{9}$$

$$\phi_{th}(w^B_{i,j}) = \frac{\mathbb{1}_{|w^B_{i,j}| \ge th} \, w^B_{i,j}}{\sum_{i,j=-3}^{3} \mathbb{1}_{|w^B_{i,j}| \ge th}}, \tag{10}$$

where

$$\mathbb{1}_{|w^C_{a,b}| \ge th} = \begin{cases} 1 & \text{if } |w^C_{a,b}| \ge th, \\ 0 & \text{if } |w^C_{a,b}| < th, \end{cases} \qquad \text{for } C = R, G, B.$$
The above transform filters out the filter coefficients that are smaller in magnitude than a predefined threshold value $th$ and renormalizes the remaining coefficients. By filtering out the small coefficients, the data points that contribute little to the construction of $L_{\mathbf{p}_C}(\mathbf{p}_C)$ are excluded from the reconstruction, which prevents unrelated pixels from affecting it and thus avoids an oversmoothed CFA.
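A minimal sketch of this transform for the R/B case of Equations (9) and (10), keeping the indicator-sum normalization written above:

```python
import numpy as np

def phi_th(w, th):
    """Apply the nonlinear transform of Equations (9)-(10): zero out filter
    coefficients with magnitude below `th`, then renormalize the survivors
    by the indicator-sum denominator. The guard for an empty survivor set
    is an added assumption, not part of the paper's formulation."""
    keep = (np.abs(w) >= th).astype(float)  # indicator 1_{|w| >= th}
    denom = keep.sum()
    if denom == 0:                          # degenerate case: no survivors
        return np.zeros_like(w)
    return keep * w / denom
```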
Now, using the transformed filter coefficients $\{\phi^{\mathrm{even}}_{th}(w^G_{2i,2j})\}$, $\{\phi^{\mathrm{odd}}_{th}(w^G_{2i-1,2j-1})\}$, $\{\phi_{th}(w^R_{i,j})\}$, and $\{\phi_{th}(w^B_{i,j})\}$ instead of the original coefficients, we construct $\tilde{L}_{\mathbf{p}_C}(\mathbf{p}_C)$ instead of $L_{\mathbf{p}_C}(\mathbf{p}_C)$, $C = R, G, B$:

$$\tilde{L}_{\mathbf{p}_G}(\mathbf{p}_G) := \sum_{i,j=-2}^{2} \phi^{\mathrm{even}}_{th}(w^G_{2i,2j}) \, x(p_G+2i, q_G+2j) + \sum_{i,j=-1}^{2} \phi^{\mathrm{odd}}_{th}(w^G_{2i-1,2j-1}) \, x(p_G+2i-1, q_G+2j-1),$$

$$\tilde{L}_{\mathbf{p}_R}(\mathbf{p}_R) := \sum_{i,j=-3}^{3} \phi_{th}(w^R_{i,j}) \, x(p_R+2i, q_R+2j),$$

$$\tilde{L}_{\mathbf{p}_B}(\mathbf{p}_B) := \sum_{i,j=-3}^{3} \phi_{th}(w^B_{i,j}) \, x(p_B+2i, q_B+2j).$$
Finally, we replace all the noisy intensity values $\{x(p_C, q_C)\}_{(p_C, q_C) \in \Omega,\, C = R, G, B}$ in the CFA with $\{\tilde{L}_{\mathbf{p}_C}(p_C, q_C)\}_{(p_C, q_C) \in \Omega,\, C = R, G, B}$, where $\Omega$ is the domain of the CFA image and $(p_C, q_C)$ refers to the pixel position corresponding to the color $C$. With the denoised CFA, we are now ready to apply the residual interpolation algorithm [4] to obtain the full-color image, i.e., the three-channel array $\mathbf{u}^* = \{\mathbf{u}^*(i,j)\}_{(i,j) \in \Omega}$, where $\mathbf{u}^*(i,j) = [R^*(i,j), G^*(i,j), B^*(i,j)]^T$ is the full-color pixel reconstructed by the demosaicing method. Figure 3 shows the block diagram of the proposed joint denoising and demosaicing method, while Figure 4 shows how the input image is processed by the proposed method, along with the intermediate image results at each stage.
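To make the central weight computation concrete, the following minimal sketch evaluates $\theta_{\mathbf{p}_G}$ on the raw CFA; `h0` and `sigma` are placeholder parameters (see Section 5 for the values used in the experiments), and boundary handling is omitted:

```python
import numpy as np

def theta_G(x, p, p_hat, h0, sigma, half=3):
    """Interchannel nonlocal weight theta_{p_G} between two Green pixels.

    Compares the (2*half+1) x (2*half+1) raw-CFA windows around p and
    p_hat with a Gaussian-weighted squared distance. All R, G, and B
    pixels inside the windows contribute, which is how interchannel
    information enters the weight. Boundary handling is omitted.
    """
    ii, jj = np.mgrid[-half:half + 1, -half:half + 1]
    G = np.exp(-(ii**2 + jj**2) / (2.0 * sigma**2))  # Gaussian kernel G_sigma

    def window(q):
        return x[q[0] - half:q[0] + half + 1, q[1] - half:q[1] + half + 1]

    d2 = np.sum(G * (window(p) - window(p_hat))**2)  # patch distance
    return np.exp(-d2 / h0**2)
```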

5. Experimental Results

We compared the joint denoising and demosaicing results of the proposed scheme with those of RI [4] (to show the denoising power of the proposed scheme), the block-matching 3D (BM3D) filtering approach [7], which uses an enhanced sparse representation in the transform domain, and the alternating direction method of multipliers (ADMM) approach [9], which efficiently incorporates the total variation minimization and the BM3D filtering into a unified framework. The BM3D and ADMM methods are state-of-the-art methods for denoising the color filter array. Both group similar patches into a 3D volume and then perform a transform-domain shrinkage on this volume. The collaborative filtering used in these methods can reveal even the finest details shared by grouped blocks and can therefore preserve the essential and unique features of each block.
The experiments show that the proposed method is comparable to the state-of-the-art denoising methods and can better preserve some features that the ADMM and BM3D methods cannot. We experimented on the McMaster dataset, which contains images that are closer to those taken with a real digital camera than the Kodak dataset. We used two noisy datasets of different noise levels: a dataset with weak noise generated from a zero-mean Gaussian distribution with σ = 7.65 and a dataset with a higher level of noise generated with σ = 12.75, where σ is the standard deviation.
The number of data points used in the reconstruction of the local polynomial approximation functions in the Green channel is 41, extracted from a 9 × 9 window. For the Red and Blue channels, the number of data points is 49 for each channel, extracted from a 15 × 15 region. The degree of the polynomials and the width of the Gaussian for the Green channel are 3 and 0.3, respectively, while for the Red and Blue channels they are 3 and 0.5, respectively.

Figure 5 shows a comparison of the denoising results on the MCM 9 image with weak noise (σ = 7.65). Even though the BM3D method has the highest PSNR value, there are some blurry artifacts, which can be observed in the enlarged images of Figure 5, especially in the region corresponding to the fruit basket frame (Figure 5i). The reason that BM3D produces blurry artifacts is that the fruit basket image has few repeatable patterns; therefore, the 3D filtering in BM3D blurs the thin frame regions. The same holds for the ADMM method, as can be observed in Figure 5h. With the proposed method, however, the fruit basket frame is reconstructed more distinctly, as can be seen in Figure 5j, owing to the reproducing constraint applied in the reconstruction process.

Table 1, Table 2 and Table 3 compare the PSNR, FSIM, and SSIM values of the different demosaicing methods for all 18 images in the McMaster dataset and for both noise levels. The proposed method is comparable to the state-of-the-art joint demosaicing and denoising methods. The BM3D method of [7] shows the largest PSNR, FSIM, or SSIM values for most images due to its large denoising power, but it also shows local artifacts when there is no repeatable pattern or when the image structures are small; in these cases, the proposed method gives more desirable results.

This can be observed again in the dataset with more noise (σ = 12.75). Figure 6 and Figure 7 show the results for the MCM 10 and 18 images. It can be observed in the enlarged images of the ADMM (Figure 6h) and BM3D (Figure 6i) methods that the small details become blurry, especially inside the blue boxes, whereas the small details are sharply reconstructed by the proposed method (Figure 6j). Figure 7 shows that the colors of the small-scaled structures are better preserved by the proposed method: in the enlarged images of the ADMM (Figure 7h) and BM3D (Figure 7i) methods, the brown color has been diffused to green, especially inside the blue boxes, whereas in the original image (Figure 7f) there are several small-scaled structures with brown colors. These structures are well preserved by the proposed method, as can be seen in Figure 7j.
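For reference, the PSNR values reported in Table 1 follow the standard definition; a minimal sketch, assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between the ground-truth image
    and a demosaiced result, computed over all three color channels."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64))**2)
    return 10.0 * np.log10(peak**2 / mse)
```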

6. Conclusions

In this paper, we proposed a two-dimensional polynomial-approximation-based denoising method with nonlocal weights for the denoising of noisy color filter arrays (CFAs). It is difficult to construct a two-dimensional approximation function from the sparse data points in the CFA image that represents the original color image well, since the missing information makes it hard to identify the data points from which the full color image can be reconstructed with high accuracy. The major contribution of this paper is a method that accurately computes the weights indicating the importance of the data points, so that the data points close to the point at which the reconstructed value is requested can be selected reliably. To obtain accurate data-point estimates and an accurate two-dimensional polynomial approximation, we incorporated the interchannel information into the calculation of the similarity weights, which results in the reconstruction of digital color images with high visual quality. After the weights had been obtained, we applied a nonlinear transform to the filter coefficients and reconstructed the denoised CFA pixels; the data points that contribute little to the construction of the local filters are eliminated, which excludes unrelated pixels from the reconstruction and prevents an oversmoothed CFA image. Due to the use of the interchannel nonlocal mean weights and the incorporation of the reproducing constraint with the transformed filter coefficients, we obtain an approximation function that effectively preserves the small-scaled features in the image that conventional denoising schemes, which lack the extra reproducing constraint, cannot preserve well. The high order of the proposed approximation method improves the resolution of the demosaiced image.
One of the advantages of the proposed method is that it can easily be extended to other CFAs due to the characteristics of the MLS, which reconstructs a function from a set of unorganized point samples. For the same reason, the proposed method can be combined well with existing super-resolution or other interpolation methods. The use of the proposed method with other CFA formats will be one of our future research topics. Furthermore, if basis functions other than polynomials are used in the reproducing constraint term, there is considerable room for further improvement in the approximation accuracy.

Author Contributions

Conceptualization, Y.K., S.L., and Y.J.L.; methodology, Y.K., H.R.; software, Y.K., S.L.; validation, H.R., Y.J.L.; formal analysis, S.L., Y.J.L.; investigation, Y.J.L.; resources, Y.K., H.R.; writing—original draft preparation, Y.K., S.L.; writing—review and editing, S.L., Y.J.L.; visualization, Y.K.; supervision, Y.J.L.; project administration, Y.J.L. All authors have read and agreed to the published version of the manuscript.

Funding

The work of Y.J.L. was supported by the 2015 Korea University Grant (K1508571).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MLS-TV   Moving least squares method with total variation minimization
RI       Residual interpolation

References

  1. Bayer, B. Color Imaging Array. U.S. Patent 3971065 A, 20 July 1976.
  2. Wu, J.; Anisetti, M.; Wu, W.; Damiani, E.; Jeon, G. Bayer demosaicing with polynomial interpolation. IEEE Trans. Image Process. 2016, 25, 5369–5382.
  3. Buades, A.; Coll, B.; Morel, J.; Sbert, C. Self-similarity driven demosaicing. IEEE Trans. Image Process. 2009, 18, 1192–1202.
  4. Kiku, D.; Monno, Y.; Tanaka, M.; Okutomi, M. Residual interpolation for color image demosaicing. In Proceedings of the 2013 IEEE International Conference on Image Processing (ICIP), Melbourne, Australia, 15–18 September 2013; pp. 2304–2308.
  5. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409.
  6. Pekkucuksen, I.; Altunbasak, Y. Gradient based threshold free color filter array interpolation. In Proceedings of the 2010 IEEE International Conference on Image Processing (ICIP), Hong Kong, China, 12–15 September 2010; pp. 137–140.
  7. Akiyama, H.; Tanaka, M.; Okutomi, M. Pseudo four-channel image denoising for noisy CFA raw data. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 4778–4782.
  8. Danielyan, A.; Vehvilainen, M.; Foi, A.; Katkovnik, V.; Egiazarian, K. Cross-color BM3D filtering of noisy raw data. In Proceedings of the 2009 International Workshop on Local and Non-Local Approximation in Image Processing, Tuusula, Finland, 19–21 August 2009; pp. 125–129.
  9. Tan, H.; Zeng, X.; Lai, S.; Liu, Y.; Zhang, M. Joint demosaicing and denoising of noisy Bayer images with ADMM. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 2951–2955.
  10. Huang, T.; Wu, F.F.; Dong, W.; Shi, G.; Li, X. Lightweight deep residue learning for joint color image demosaicking and denoising. In Proceedings of the 2018 International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 127–132.
  11. Ehret, T.; Davy, A.; Arias, P.; Facciolo, G. Joint demosaicking and denoising by fine-tuning of bursts of raw images. In Proceedings of the 2019 International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 8868–8877.
  12. Kokkinos, F.; Lefkimmiatis, S. Iterative joint image demosaicking and denoising using a residual denoising network. IEEE Trans. Image Process. 2019, 28, 4177–4188.
  13. Klatzer, T.; Hammernik, K.; Knobelreiter, P.; Pock, T. Learning joint demosaicing and denoising based on sequential energy minimization. In Proceedings of the 2016 IEEE International Conference on Computational Photography (ICCP), Evanston, IL, USA, 13–15 May 2016; pp. 1–11.
  14. Gharbi, M.; Chaurasia, G.; Paris, S.; Durand, F. Deep joint demosaicking and denoising. ACM Trans. Graph. 2016, 35, 1–12.
  15. Luo, J.; Wang, J. Image demosaicing based on generative adversarial network. Math. Probl. Eng. 2020, 2020, 7367608.
  16. Fu, H.; Bian, L.; Cao, X.; Zhang, J. Hyperspectral imaging from a raw mosaic image with end-to-end learning. Opt. Express 2020, 28, 314–324.
  17. Schwartz, E.; Giryes, R.; Bronstein, A.M. DeepISP: Toward learning an end-to-end image processing pipeline. IEEE Trans. Image Process. 2019, 28, 912–923.
  18. Choi, W.; Park, H.; Kyung, C. Color reproduction pipeline for an RGBW color filter array sensor. Opt. Express 2020, 28, 15678–15690.
  19. Kwan, C.; Larkin, J. Demosaicing of Bayer and CFA 2.0 patterns for low lighting images. Electronics 2019, 8, 1444.
  20. Lee, S.; Choi, D.; Song, B. Hardware-efficient color correlation-adaptive demosaicing with multifiltering. J. Electron. Imaging 2019, 28, 013018.
  21. Szczepanski, M.; Giemza, F. Noise removal in the developing process of digital negatives. Sensors 2020, 20, 902.
  22. Thomas, J.; Farup, I. Demosaicing of periodic and random color filter arrays by linear anisotropic diffusion. J. Imaging Sci. Technol. 2018, 62, 050401.
  23. Mihoubi, S.; Lapray, P.; Bigue, L. Survey of demosaicking methods for polarization filter array images. Sensors 2018, 18, 3688.
  24. Sober, B.; Levin, D. Manifold approximation by moving least-squares projection (MMLS). Constr. Approx. 2019.
  25. Ji, L.; Guo, Q.; Zhang, M. Medical image denoising based on biquadratic polynomial with minimum error constraints and low-rank approximation. IEEE Access 2020, 8, 84950–84960.
  26. Novosadova, M.; Rajmic, P.; Sorel, M. Orthogonality is superiority in piecewise-polynomial signal segmentation and denoising. EURASIP J. Adv. Signal Process. 2019, 2019, 6.
  27. Takeda, H.; Farsiu, S.; Milanfar, P. Kernel regression for image processing and reconstruction. IEEE Trans. Image Process. 2007, 16, 349–366.
  28. Bose, N.K.; Ahuja, N.A. Superresolution and noise filtering using moving least squares. IEEE Trans. Image Process. 2006, 15, 2239–2248.
  29. Lee, Y.; Yoon, J. Nonlinear image upsampling method based on radial basis function interpolation. IEEE Trans. Image Process. 2010, 19, 2682–2692.
  30. MatinFar, M.; Pourabd, M. Modified moving least squares method for two-dimensional linear and nonlinear systems of integral equations. Appl. Math. 2018, 37, 5857–5875.
  31. Fujita, Y.; Ikuno, S.; Itoh, T.; Nakamura, H. Modified improved interpolating moving least squares method for meshless approaches. IEEE Trans. Magn. 2019, 55, 7203204.
  32. Hwang, J.; Lee, J.; Kweon, I.; Kim, S. Probabilistic moving least squares with spatial constraints for nonlinear color transfer between images. Comput. Vis. Image Underst. 2019, 180, 1–12.
  33. Lee, Y.; Lee, S.; Yoon, J. A framework for moving least squares method with total variation minimizing regularization. J. Math. Imaging Vis. 2013, 48, 566–582.
  34. Goldstein, T.; Osher, S. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2009, 2, 323–343.
  35. Levin, D. The approximation power of moving least-squares. Math. Comput. 1998, 67, 1517–1531.
Figure 1. Acquisition system of digital cameras in digital imaging applications. CFA—color filter array, IP—image processing.
Figure 2. Measuring the local structure similarity using (a) 9 × 9 windows for R and B pixels and (b) 7 × 7 windows for G pixels.
Figure 3. Block diagram of the proposed algorithm. MLS—moving least square.
Figure 4. Flow process of the proposed algorithm.
Figure 5. Comparison of the results between the different methods on MCM image 9: (a,f) Original; (b,g) residual interpolation (RI) [4] (PSNR = 30.3286 dB); (c,h) alternating direction method of multipliers (ADMM) [9] (PSNR = 29.2356 dB); (d,i) block-matching and 3D filtering (BM3D) [7] (PSNR = 30.9309 dB); (e,j) proposed (PSNR = 30.5131 dB).
Figure 6. Comparison of the results between the different methods on MCM image 10: (a,f) Original; (b,g) RI [4] (PSNR = 29.2452 dB); (c,h) ADMM [9] (PSNR = 29.8837 dB); (d,i) BM3D [7] (PSNR = 30.9424 dB); (e,j) proposed (PSNR = 30.7439 dB).
Figure 7. Comparison of the results between the different methods on MCM image 18: (a,f) Original; (b,g) RI [4] (PSNR = 27.7879 dB); (c,h) ADMM [9] (PSNR = 28.2356 dB); (d,i) BM3D [7] (PSNR = 29.2443 dB); (e,j) proposed (PSNR = 28.6549 dB).
Table 1. Comparison of the PSNR between the different methods. The two best values are in bold font.

McMaster dataset images 1–9

Noise Level  Method    1        2        3        4        5        6        7        8        9
σ = 7.65     RI        25.4925  29.7198  27.4230  28.7656  28.7352  30.2073  30.3072  31.3710  30.3286
             ADMM      24.0986  28.7480  25.8632  27.1826  27.4595  28.9430  28.9478  30.1183  29.2356
             BM3D      25.5562  30.3227  28.5314  30.5665  28.9093  30.6146  31.2559  32.7216  30.9308
             Proposed  25.4436  29.5762  27.3741  28.8772  28.6393  29.9254  29.4552  31.4400  30.5131
σ = 12.75    RI        24.7643  28.1071  26.3138  27.3294  27.2902  28.3014  28.4306  29.4504  28.4691
             ADMM      23.9313  28.4944  25.7016  26.9503  27.3179  28.7975  28.7118  29.5892  29.0134
             BM3D      25.0413  29.3712  27.2120  29.2910  28.0784  29.6043  29.7767  31.4700  29.8660
             Proposed  25.0890  29.0947  26.8136  28.2796  28.1961  29.5057  29.1555  30.7407  29.9819

McMaster dataset images 10–18

Noise Level  Method    10       11       12       13       14       15       16       17       18
σ = 7.65     RI        31.4582  32.0506  31.5159  32.7740  32.1560  32.3301  28.2429  27.5546  29.3230
             ADMM      30.2806  31.4607  30.8019  33.7572  32.2113  32.4052  26.7781  25.8872  28.5202
             BM3D      32.0903  32.7758  33.0493  35.1113  33.5730  33.4575  28.2952  27.4109  30.2682
             Proposed  31.2061  31.8866  32.2402  34.4451  32.7570  32.8573  27.7978  27.1630  29.1151
σ = 12.75    RI        29.2452  29.6592  29.0120  29.6388  29.6244  29.8651  27.0536  26.5489  27.7879
             ADMM      29.8837  30.9067  30.5479  33.4766  31.8245  31.9279  26.5367  25.7544  28.2356
             BM3D      30.9424  31.6128  31.7795  34.1728  32.6172  32.4932  27.3090  26.6519  29.2443
             Proposed  30.7439  31.4394  31.4415  33.7712  32.2474  32.3134  27.3533  26.8395  28.6549
Table 2. Comparison of the FSIM between the different methods. The two best values are in bold font.

McMaster dataset images 1–9

Noise Level  Method    1       2       3       4       5       6       7       8       9
σ = 7.65     RI        0.9709  0.9694  0.9740  0.9752  0.9724  0.9710  0.9702  0.9768  0.9700
             ADMM      0.9607  0.9570  0.9637  0.9750  0.9668  0.9599  0.9559  0.9705  0.9665
             BM3D      0.9820  0.9823  0.9859  0.9859  0.9838  0.9837  0.9829  0.9859  0.9829
             Proposed  0.9732  0.9672  0.9774  0.9804  0.9738  0.9635  0.9647  0.9770  0.9723
σ = 12.75    RI        0.9618  0.9528  0.9605  0.9493  0.9572  0.9550  0.9511  0.9472  0.9473
             ADMM      0.9599  0.9572  0.9626  0.9734  0.9671  0.9610  0.9567  0.9681  0.9654
             BM3D      0.9709  0.9694  0.9740  0.9752  0.9724  0.9710  0.9702  0.9768  0.9700
             Proposed  0.9663  0.9607  0.9703  0.9715  0.9655  0.9561  0.9582  0.9692  0.9650

McMaster dataset images 10–18

Noise Level  Method    10      11      12      13      14      15      16      17      18
σ = 7.65     RI        0.9753  0.9646  0.9739  0.9773  0.9773  0.9714  0.9672  0.9595  0.9725
             ADMM      0.9687  0.9606  0.9692  0.9780  0.9759  0.9712  0.9497  0.9538  0.9697
             BM3D      0.9857  0.9795  0.9854  0.9859  0.9857  0.9820  0.9832  0.9776  0.9844
             Proposed  0.9730  0.9615  0.9779  0.9791  0.9766  0.9738  0.9597  0.9652  0.9725
σ = 12.75    RI        0.9491  0.9493  0.9397  0.8805  0.9334  0.9355  0.9695  0.9640  0.9614
             ADMM      0.9673  0.9598  0.9685  0.9768  0.9740  0.9709  0.9493  0.9526  0.9692
             BM3D      0.9753  0.9646  0.9739  0.9773  0.9773  0.9714  0.9672  0.9595  0.9725
             Proposed  0.9662  0.9533  0.9706  0.9719  0.9700  0.9672  0.9537  0.9574  0.9658
Table 3. Comparison of the SSIM between the different methods. The two best values are in bold font.

McMaster dataset images 1–9

Noise Level  Method    1       2       3       4       5       6       7       8       9
σ = 7.65     RI        0.9281  0.9163  0.8833  0.8713  0.9124  0.8964  0.8386  0.8447  0.9571
             ADMM      0.9026  0.9033  0.8655  0.8743  0.9025  0.8786  0.7760  0.8383  0.9550
             BM3D      0.9318  0.9319  0.9222  0.9234  0.9259  0.9067  0.8786  0.9218  0.9674
             Proposed  0.9245  0.9152  0.9018  0.8995  0.9165  0.8907  0.7963  0.8897  0.9620
σ = 12.75    RI        0.9116  0.8763  0.8234  0.7795  0.8667  0.8475  0.7578  0.7365  0.9293
             ADMM      0.8991  0.8954  0.8541  0.8663  0.8966  0.8730  0.7629  0.7856  0.9497
             BM3D      0.9203  0.9167  0.9005  0.9032  0.9115  0.8866  0.8298  0.9041  0.9588
             Proposed  0.9188  0.9052  0.8824  0.8758  0.9058  0.8782  0.7844  0.8572  0.9564

McMaster dataset images 10–18

Noise Level  Method    10      11      12      13      14      15      16      17      18
σ = 7.65     RI        0.9602  0.9548  0.9833  0.9832  0.9445  0.9613  0.9495  0.9318  0.9313
             ADMM      0.9644  0.9539  0.9834  0.9874  0.9608  0.9691  0.9346  0.9088  0.9299
             BM3D      0.9734  0.9645  0.9894  0.9904  0.9685  0.9747  0.9527  0.9328  0.9496
             Proposed  0.9674  0.9570  0.9868  0.9886  0.9625  0.9703  0.9449  0.9266  0.9365
σ = 12.75    RI        0.9312  0.9257  0.9687  0.9660  0.8988  0.9333  0.9266  0.9095  0.8960
             ADMM      0.9576  0.9453  0.9818  0.9864  0.9540  0.9621  0.9276  0.9034  0.9208
             BM3D      0.9670  0.9554  0.9859  0.9883  0.9628  0.9697  0.9400  0.9184  0.9370
             Proposed  0.9600  0.9508  0.9842  0.9868  0.9546  0.9644  0.9382  0.9183  0.9262

Share and Cite

MDPI and ACS Style

Kim, Y.; Ryu, H.; Lee, S.; Lee, Y.J. Joint Demosaicing and Denoising Based on Interchannel Nonlocal Mean Weighted Moving Least Squares Method. Sensors 2020, 20, 4697. https://doi.org/10.3390/s20174697

AMA Style

Kim Y, Ryu H, Lee S, Lee YJ. Joint Demosaicing and Denoising Based on Interchannel Nonlocal Mean Weighted Moving Least Squares Method. Sensors. 2020; 20(17):4697. https://doi.org/10.3390/s20174697

Chicago/Turabian Style

Kim, Yeahwon, Hohyung Ryu, Sunmi Lee, and Yeon Ju Lee. 2020. "Joint Demosaicing and Denoising Based on Interchannel Nonlocal Mean Weighted Moving Least Squares Method" Sensors 20, no. 17: 4697. https://doi.org/10.3390/s20174697

