# Noise Reduction for CFA Image Sensors Exploiting HVS Behaviour


## Abstract


## 1. Introduction

## 2. Background

#### 2.1. Bayer Data

#### 2.2. Basic Concepts about the Human Visual System

- if the local area is homogeneous, it can be heavily filtered, because pixel variations are essentially caused by random noise;
- if the local area is textured, it must be lightly filtered, because pixel variations are mainly caused by texture and only to a lesser extent by noise; hence only the small differences can be safely filtered, as they are masked by the local texture.
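These two rules can be illustrated with a toy sketch (not from the paper; the function name and the variance-ratio heuristic are our own illustration):

```python
def smoothing_strength(patch, noise_var):
    """Toy illustration of the HVS-driven rule above: return a value
    in [0, 1], where 1 means filter heavily (flat area) and values
    near 0 mean filter lightly (textured area)."""
    mean = sum(patch) / len(patch)
    var = sum((p - mean) ** 2 for p in patch) / len(patch)
    # Flat patch: variance is at (or below) the noise floor.
    if var <= noise_var:
        return 1.0
    # Textured patch: only the noise-sized share of the variation
    # can be safely filtered.
    return noise_var / var
```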

## 3. The Proposed Technique

#### 3.1. Overall filter block diagram

- **Signal Analyzer Block**: computes a filter parameter incorporating the effects of the human visual system response and of the signal intensity in the filter mask.
- **Texture Degree Analyzer**: determines the amount of texture in the filter mask using information from the Signal Analyzer Block.
- **Noise Level Estimator**: estimates the noise level in the filter mask, taking into account the texture degree.
- **Similarity Thresholds Block**: computes the fuzzy thresholds used to determine the weighting coefficients for the neighborhood of the central pixel.
- **Weights Computation Block**: uses the coefficients computed by the Similarity Thresholds Block to assign a weight to each neighborhood pixel, representing the degree of similarity between pixel pairs.
- **Filter Block**: actually computes the filter output.

#### 3.2. Signal Analyzer Block

#### 3.3. Filter Masks

#### 3.4. Texture Degree Analyzer

The Texture Degree Analyzer computes a coefficient T_d that is representative of the local texture degree. This reference value approaches 1 as the local area becomes increasingly flat and decreases as the texture degree increases (Figure 5). The computed coefficient regulates the filter smoothing capability, so that high values of T_d correspond to flat image areas in which the filter strength can be increased.

T_d is obtained from D_max and TextureThreshold, a value that is obtained by combining information from the HVS response and the noise level, as described below (2).

- if T_d = 1, the area is assumed to be completely flat;
- if 0 < T_d < 1, the area contains a variable amount of texture;
- if T_d = 0, the area is considered to be highly textured.

NL_c denotes the noise level estimation at the previous pixel of the same Bayer color channel c (see Section 3.5), and HVS_weight (Figure 3) can be interpreted as a jnd (just noticeable difference); hence, an area is no longer considered flat if its D_max value exceeds the jnd plus the local noise level NL_c. An analogous threshold is used for the R/B channels. The gray-scale output of the texture detection is shown in Figure 7: bright pixels are associated with high texture, dark pixels with flat areas.
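A minimal sketch of this texture measurement follows. Equations (1) and (2) are not reproduced in this extract, so the linear shape of the texture-degree curve is an assumption suggested by Figure 5; the threshold follows the jnd-plus-noise rule stated above:

```python
def texture_threshold(hvs_weight, noise_level):
    # Per the text: an area is no longer flat once D_max exceeds
    # the jnd (HVS_weight) plus the local noise level NL.
    return hvs_weight + noise_level

def texture_degree(d_max, threshold):
    # Assumed linear form: T_d = 1 on flat areas (D_max = 0),
    # falling to 0 once D_max reaches the texture threshold.
    if d_max >= threshold:
        return 0.0
    return 1.0 - d_max / threshold
```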

#### 3.5. Noise Level Estimator

- if the local area is completely flat (T_d = 1), the noise level is set to D_max;
- if the local area is highly textured (T_d = 0), the noise estimation is kept equal to that of the previous region (i.e., pixel);
- otherwise, a new value is estimated.

T_d(k) represents the texture degree at the current pixel, and NL_c(k−1) (c = R, G, B) is the previous noise level estimation, evaluated over already-processed pixels of the same colour. For k = 1, the values NL_R(k−1), NL_G(k−1) and NL_B(k−1) are set to an initial low value depending on the pixel bit-depth. These equations satisfy requirements i), ii) and iii). The raster scanning order of the input image is constrained by the global HW architecture. Starting from different spatial locations, the noise level converges to the same values thanks to the homogeneous areas that are prominent in almost all natural images.
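The three requirements above can be sketched as a recursive per-pixel update. Equation (3) is not reproduced in this extract, so the blend used for the intermediate case is an assumption; only the two extreme cases are stated by the text:

```python
def update_noise_level(nl_prev, d_max, t_d):
    # Completely flat area (T_d = 1): the observed D_max is pure noise.
    if t_d >= 1.0:
        return d_max
    # Highly textured area (T_d = 0): keep the previous estimate.
    if t_d <= 0.0:
        return nl_prev
    # Intermediate: blend previous estimate and D_max by texture degree
    # (assumed form; the paper's equation (3) is not shown here).
    return t_d * d_max + (1.0 - t_d) * nl_prev
```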

#### 3.6. Similarity Thresholds and Weighting Coefficients computation

This block computes the weighting coefficients W_i to be assigned to the neighboring pixels of the filter mask. The absolute differences D_i between the central pixel and its neighborhood must be analyzed in combination with the local information (noise level, texture degree and pixel intensities) to estimate the degree of similarity between pixel pairs (see Figure 8). As stated in Section 2.2, if the central pixel P_c belongs to a textured area, then only small pixel differences must be filtered. The lower degree of filtering in textured areas maintains local sharpness, removing only the pixel differences that are not perceived by the HVS.

The computation of the W_i coefficients can be expressed in terms of fuzzy logic (Figure 9). Let:

- P_c be the central pixel of the working window;
- P_i, i = 1,…,7, be the neighborhood pixels;
- D_i = abs(P_c − P_i), i = 1,…,7, be the set of absolute differences between the central pixel and its neighborhood.

To compute the W_i coefficients, each absolute difference D_i must be compared against two thresholds, Th_low and Th_high, which determine whether, in relation to the local information, the i-th difference D_i is:

- small enough to be heavily filtered,
- big enough to remain untouched,
- an intermediate value to be properly filtered.

Once Th_low and Th_high are fixed, the shape of the membership function is determined (Figure 10).

The local texture degree T_d incorporates the concepts of dark/bright and noise level; hence, its value is crucial for determining the similarity thresholds used to compute the W_i coefficients. In particular, the similarity thresholds are chosen to obtain maximum smoothing in flat areas, minimum smoothing in highly textured areas, and intermediate filtering in areas containing medium texture; this is achieved by the following rules (4).

Once the thresholds are computed, the weights are obtained by comparing the D_i differences against them (Figure 10).
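Since equation (4) is not reproduced in this extract, the following is a purely illustrative guess at how the two thresholds might scale with the local noise level and texture degree, consistent with the rules above (wider thresholds in flat areas, narrower in textured ones):

```python
def similarity_thresholds(noise_level, t_d):
    # Illustrative only -- not the paper's equation (4).
    # Flat area (t_d = 1): both thresholds large -> heavy smoothing.
    # Textured area (t_d = 0): th_low = 0, th_high = noise level ->
    # only noise-sized differences get filtered.
    th_low = noise_level * t_d
    th_high = noise_level * (1.0 + t_d)
    return th_low, th_high
```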

If D_i is lower than Th_low, it is reasonable to assume that the pixels P_c and P_i are very similar; hence, the maximum degree of similarity, Max_weight, is assigned to P_i. On the other hand, if the absolute difference between P_c and P_i is greater than Th_high, the difference is likely due to texture details; hence, P_i is assigned a null similarity weight. In the remaining cases, i.e., when the i-th absolute difference falls in the interval [Th_low, Th_high], a linear interpolation between Max_weight and 0 is performed, yielding the appropriate weight for P_i.
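The three cases above map directly onto a piecewise-linear membership function; a sketch (the function name is ours):

```python
def similarity_weight(d_i, th_low, th_high, max_weight=1.0):
    # Very similar pixels: full weight.
    if d_i <= th_low:
        return max_weight
    # Difference attributed to texture detail: null weight.
    if d_i >= th_high:
        return 0.0
    # In between: linear interpolation between max_weight and 0.
    return max_weight * (th_high - d_i) / (th_high - th_low)
```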

#### 3.7. Final Weighted Average

Let W_1,…,W_N (N: number of neighborhood pixels) be the set of weights computed for each neighboring element of the central pixel P_c. The final filtered value P_f is obtained by weighted average as follows (5).

The weighting function, parameterized by Th_low and Th_high, performs a simple linear interpolation between Th_low and Th_high, as depicted in Figure 10.
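A sketch of the final weighted average (5); since the equation itself is not reproduced in this extract, the inclusion of the central pixel with unit weight is our assumption:

```python
def filter_pixel(p_c, neighbors, weights):
    # Weighted average of the central pixel and its neighborhood;
    # the central pixel is assumed to contribute with weight 1.
    num = p_c + sum(w * p for w, p in zip(weights, neighbors))
    den = 1.0 + sum(weights)
    return num / den
```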

## 4. Experimental Results

#### 4.1. Noise Power Test

Let:

- **I**_NOISY: the noisy CFA pattern;
- **I**_FILTERED: the filtered CFA pattern;
- **I**_ORIGINAL: the original noiseless CFA pattern.

Then:

**I**_NOISY − **I**_ORIGINAL = **I**_ADDED_NOISE

**I**_FILTERED − **I**_ORIGINAL = **I**_RESIDUAL_NOISE

**I**_ADDED_NOISE is the image containing only the noise artificially added to **I**_ORIGINAL, whereas **I**_RESIDUAL_NOISE is the image containing the residual noise after filtering. The noise power is computed for both **I**_ADDED_NOISE and **I**_RESIDUAL_NOISE according to formula (7).
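Formula (7) is not shown in this extract; noise power is conventionally the mean squared value of the noise image, which the sketch below assumes:

```python
def noise_power(noise_img):
    # Mean squared value over all pixels of a 2-D noise image
    # (assumed form of (7); the exact formula is not reproduced here).
    n = sum(len(row) for row in noise_img)
    return sum(v * v for row in noise_img for v in row) / n
```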

#### 4.2. Visual Quality Test

#### 4.3. PSNR test

- GMED: Gaussian Fuzzy Filter with Median Center
- GMAV: Gaussian Fuzzy Filter with Moving Average Center
- ATMED: Asymmetrical Triangular Fuzzy Filter with Median Center
- ATMAV: Asymmetrical Triangular Fuzzy Filter with Moving Average Center
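For reference, the PSNR used in such comparisons is the standard definition; the sketch below assumes 8-bit data (peak = 255):

```python
import math

def psnr(ref, img, peak=255.0):
    # Standard PSNR: 10 * log10(peak^2 / MSE); assumes img != ref,
    # with ref and img flattened to 1-D sequences of equal length.
    mse = sum((r - x) ** 2 for r, x in zip(ref, img)) / len(ref)
    return 10.0 * math.log10(peak * peak / mse)
```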

## Conclusions and Future Work

## Acknowledgments

## References and Notes

- Lukac, R. Single-sensor imaging in consumer digital cameras: a survey of recent advances and future directions. *J. Real-Time Image Process.* **2006**, 1, 45–52.
- Battiato, S.; Castorina, A.; Mancuso, M. High Dynamic Range Imaging for Digital Still Camera: an Overview. *SPIE J. Electron. Imaging* **2003**, 12, 459–469.
- Messina, G.; Battiato, S.; Mancuso, M.; Buemi, A. Improving Image Resolution by Adaptive Back-Projection Correction Techniques. *IEEE Trans. Consum. Electron.* **2002**, 48, 409–416.
- Battiato, S.; Bosco, A.; Castorina, A.; Messina, G. Automatic Image Enhancement by Content Dependent Exposure Correction. *EURASIP J. Appl. Signal Process.* **2004**, 2004, 1849–1860.
- Battiato, S.; Castorina, A.; Guarnera, M.; Vivirito, P. A Global Enhancement Pipeline for Low-cost Imaging Devices. *IEEE Trans. Consum. Electron.* **2003**, 49, 670–675.
- Bayer, B.E. Color Imaging Array. U.S. Patent 3,971,965, **1976**.
- Hirakawa, K.; Parks, T.W. Joint demosaicing and denoising. In Proceedings of the IEEE International Conference on Image Processing (ICIP 2005), Genova, Italy, Sept. 2005; pp. 309–312.
- Hirakawa, K.; Parks, T.W. Joint demosaicing and denoising. *IEEE Trans. Image Process.* **2006**, 15, 2146–2157.
- Lu, W.; Tan, Y.P. Color Filter Array Demosaicking: New Method and Performance Measures. *IEEE Trans. Image Process.* **2003**, 12, 1194–1210.
- Trussell, H.; Hartwig, R. Mathematics for Demosaicking. *IEEE Trans. Image Process.* **2002**, 11, 485–492.
- Battiato, S.; Mancuso, M. An Introduction to the Digital Still Camera Technology. *ST J. Syst. Res., Special Issue on Image Processing for Digital Still Camera* **2001**, 2, 2–9.
- Jayant, N.; Johnston, J.; Safranek, R. Signal Compression Based on Models of Human Perception. *Proceedings of the IEEE* **1993**, 81, 1385–1422.
- Nadenau, M.J.; Winkler, S.; Alleysson, D.; Kunt, M. Human Vision Models for Perceptually Optimized Image Processing - a Review. *IEEE Trans. Image Process.* **2003**, 12, 58–70.
- Pappas, T.N.; Safranek, R.J. Perceptual Criteria for Image Quality Evaluation. In *Handbook of Image and Video Processing*; Bovik, A.C., Ed.; Academic Press: San Diego, CA, USA, 2000; pp. 669–684.
- Wang, Z.; Lu, L.; Bovik, A. Why Is Image Quality Assessment So Difficult? In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Orlando, FL, USA, May 2002; pp. 3313–3316.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: from error visibility to structural similarity. *IEEE Trans. Image Process.* **2004**, 13, 600–612.
- Longere, P.; Zhang, X.; Delahunt, P.B.; Brainard, D.H. Perceptual Assessment of Demosaicing Algorithm Performance. *Proceedings of the IEEE*, Jan. 2002; pp. 123–132.
- Pizurica, A.; Zlokolica, V.; Philips, W. Combined wavelet domain and temporal denoising. In Proceedings of the IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Miami, FL, USA, July 2003; pp. 334–341.
- Portilla, J.; Strela, V.; Wainwright, M.J.; Simoncelli, E.P. Image Denoising Using Scale Mixtures of Gaussians in the Wavelet Domain. *IEEE Trans. Image Process.* **2003**, 12, 1338–1351.
- Scharcanski, J.; Jung, C.R.; Clarke, R.T. Adaptive Image Denoising Using Scale and Space Consistency. *IEEE Trans. Image Process.* **2002**, 11, 1092–1101.
- Barcelos, C.A.Z.; Boaventura, M.; Silva, E.C. A Well-Balanced Flow Equation for Noise Removal and Edge Detection. *IEEE Trans. Image Process.* **2003**, 12, 751–763.
- Amer, A.; Dubois, E. Fast and reliable structure-oriented video noise estimation. *IEEE Trans. Circuits Syst. Video Technol.* **2005**, 15, 113–118.
- Kim, Y.-H.; Lee, J. Image feature and noise detection based on statistical hypothesis tests and their applications in noise reduction. *IEEE Trans. Consum. Electron.* **2005**, 51, 1367–1378.
- Russo, F. Technique for Image Denoising Based on Adaptive Piecewise Linear Filters and Automatic Parameter Tuning. *IEEE Trans. Instrum. Meas.* **2006**, 55, 1362–1367.
- Kwan, H.K.; Cai, Y. Fuzzy filters for image filtering. In Proceedings of the International Symposium on Circuits and Systems, Aug. 2003; pp. 161–164.
- Schulte, S.; De Witte, V.; Kerre, E.E. A fuzzy noise reduction method for colour images. *IEEE Trans. Image Process.* **2007**, 16, 1425–1436.
- Bosco, A.; Findlater, K.; Battiato, S.; Castorina, A. Noise Reduction Filter for Full-Frame Imaging Devices. *IEEE Trans. Consum. Electron.* **2003**, 49, 676–682.
- Wandell, B. *Foundations of Vision*, 1st ed.; Sinauer Associates: Sunderland, MA, USA, 1995.
- Gonzalez, R.; Woods, R. *Digital Image Processing*; Addison-Wesley: Reading, MA, USA, 1992.
- Lian, N.; Chang, L.; Tan, Y.-P. Improved color filter array demosaicking by accurate luminance estimation. In Proceedings of the IEEE International Conference on Image Processing (ICIP 2005), Genova, Italy, Sept. 2005; pp. 41–44.
- Chou, C.-H.; Li, Y.-C. A perceptually tuned subband image coder based on the measure of just-noticeable-distortion profile. *IEEE Trans. Circuits Syst. Video Technol.* **1995**, 5, 467–476.
- Hontsch, I.; Karam, L.J. Locally adaptive perceptual image coding. *IEEE Trans. Image Process.* **2000**, 9, 1472–1483.
- Zhang, X.H.; Lin, W.S.; Xue, P. Improved estimation for just-noticeable visual distortion. *Signal Process.* **2005**, 85, 795–808.
- Foi, A.; Alenius, S.; Katkovnik, V.; Egiazarian, K. Noise measurement for raw-data of digital imaging sensors by automatic segmentation of non-uniform targets. *IEEE Sensors J.* **2007**, 7, 1456–1461.
- Foi, A.; Trimeche, M.; Katkovnik, V.; Egiazarian, K. Practical Poissonian-Gaussian Noise Modeling and Fitting for Single-Image Raw-Data. *IEEE Trans. Image Process.* **2008**, 17, 1737–1754.
- Kalevo, O.; Rantanen, H. Noise Reduction Techniques for Bayer-Matrix Images. In Proceedings of SPIE Electronic Imaging: Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications III, San Jose, CA, USA, Jan. 2002; Vol. 4669.
- Smith, S.M.; Brady, J.M. SUSAN - A New Approach to Low Level Image Processing. *Int. J. Comput. Vision* **1997**, 23, 45–78.
- Zhang, L.; Wu, X.; Zhang, D. Color Reproduction from Noisy CFA Data of Single Sensor Digital Cameras. *IEEE Trans. Image Process.* **2007**, 16, 2184–2197.
- Standard Kodak test images. http://r0k.us/graphics/kodak/.

**Figure 7.** Texture Analyzer output: (a) input image after colour interpolation; (b) gray-scale texture degree output: bright areas correspond to high frequencies, dark areas to low frequencies.

**Figure 8.** The W_i coefficients weight the similarity degree between the central pixel and its neighborhood.

**Figure 9.** Block diagram of the fuzzy computation process for determining the similarity weights between the central pixel and its N neighbors.

**Figure 10.** Weights assignment (Similarity Evaluator Block). The i-th weight denotes the degree of similarity between the central pixel in the filter mask and the i-th pixel in the neighborhood.

**Figure 12.** Noise power test. Upper line: noise level before filtering. Lower line: residual noise power after filtering.

**Figure 13.** Overall scheme used to compare the SUSAN algorithm with the proposed method. The noisy color image is filtered by processing its color channels independently. The results are recombined to reconstruct the denoised color image.

**Figure 14.** Images acquired by a CFA sensor. (a) SNR value 30.2 dB. (b) SNR value 47.2 dB. The yellow crops represent the magnified details contained in the following figures.

**Figure 15.** A magnified detail of Figure 14(a), to better compare the proposed filter with the SUSAN algorithm applied to the R/G/B channels separately. Both methods preserve details very well, although the proposed technique better preserves texture sharpness; the improvement is visible in the wall and roof texture. The proposed method also uses fewer resources, as the whole filtering action takes place on a single plane of CFA data.

**Figure 16.** Comparison test at CFA level (magnified details of Figure 14(a)). The original SUSAN implementation was slightly modified so that it can process Bayer data. The efficiency of the proposed method in retaining image sharpness and texture is clearly visible.

**Figure 17.** Magnified details of Figure 14(b). (a) 200% zoomed (pixel resize) crop of the noisy image. (b) Its filtered counterpart. (c) 200% zoomed (pixel resize) crop of the noisy image. (d) Its filtered counterpart. The effects of the proposed method over flat (a), (b) and textured (c), (d) areas are shown. The noisy images (a), (c) are obtained by color interpolating unfiltered Bayer data; the corresponding color images (b), (d) are produced by demosaicing the filtered Bayer data. SNR values: 47.2 dB for the noisy image and 51.8 dB for the filtered image.

**Figure 18.** (a) Original image. (b) Noisy image. (c) Cropped and zoomed noisy image detail. Cropped and zoomed noisy image detail filtered with: multistage median-1 filter (d), multistage median-3 filter (e), the proposed method (f).

**Figure 19.** Testing procedure. (a) The original Kodak color image is converted to Bayer pattern format and demosaiced. (b) Noise is added to the Bayer image, which is then filtered and color interpolated again. Hence, color interpolation is the same for the clean reference and the denoised images.

**Figure 20.** PSNR comparison between the proposed solution and other spatial approaches on the Standard Kodak Images test set. (a) Kodak noisy images with standard deviation 5. (b) Kodak noisy images with standard deviation 8. (c) Kodak noisy images with standard deviation 10.

**Figure 21.** PSNR comparison between the proposed solution and other fuzzy approaches on the Standard Kodak Images test set. (a) Kodak noisy images with standard deviation 5. (b) Kodak noisy images with standard deviation 8. (c) Kodak noisy images with standard deviation 10.

© 2009 by the authors; licensee MDPI, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

**MDPI and ACS Style**

Bosco, A.; Battiato, S.; Bruna, A.; Rizzo, R.
Noise Reduction for CFA Image Sensors Exploiting HVS Behaviour. *Sensors* **2009**, *9*, 1692-1713.
https://doi.org/10.3390/s90301692
