The Geometry of Noise in Color and Spectral Image Sensors

Digital images are always affected by noise, and reducing its impact is an active field of research. The noise due to the random arrival of photons at the sensor is unavoidable, but it can be amplified by the camera image processing, for instance in the color correction step. Color correction is expressed as the combination of a spectral estimation and a computation of color coordinates in a display color space. We use geometry to depict raw, spectral and color signals and noise. The geometry is calibrated on the physics of image acquisition and on the spectral characteristics of the sensor, in order to study the impact of the sensor space metric on noise amplification. Since spectral channels are non-orthogonal, we introduce the contravariant signal-to-noise ratio to evaluate noise at the spectral reconstruction level. Having definitions of the signal-to-noise ratio for each step of spectral or color reconstruction, we compare the performance of different types of sensors (RGB, RGBW, RGBWir, CMY, RYB, RGBC).


Introduction
Images acquired by the sensor of a digital camera must be processed before being displayed on an output screen. Because of the random absorption of photons, image capture is always subject to noise. This noise can be amplified by the processing operations; the amplification observed in the output image depends both on the physical properties of the sensor (spectral sensitivities, dynamic range, etc.) and on the kind of processing applied to the raw image [1][2][3][4][5].
In the literature, one common problem is to find the best hardware/software configuration that minimizes the amount of noise in the output image [6]. The question of sensor optimization is very complex because it is a spatio-spectral inverse problem [7]. It involves the choice of the spectral sensitivities associated with the color channels and of their distribution on the color filter array (CFA), for optimizing both color rendering and spatial quality.
Here, we assume the whole processing chain is linear. Under this linearity hypothesis, the overall processing can be split into several linear steps, shown in Figure 1. All these linear steps can be optimized independently to minimize noise based on least squares error. Mean square error optimization was shown to work well for the demosaicking step [7,8], which is not considered in this study. In this paper, we focus on the choice of spectral sensitivities and its impact on noise amplification in the output image. The usual denoising block is deliberately discarded from the processing chain because our aim is to compare noise amplification across several sensor designs. White balance is not considered either, because its noise amplification can be controlled. The only processing step under study in this paper is color correction. Therefore, we consider that each pixel in the raw image contains a vector of raw values (a triplet in the case of an RGB sensor). Color correction is a pixel-wise transformation that converts raw data into displayable RGB triplets. It is actually composed of two operators: a spectral reconstruction operator [9] followed by an operator that projects spectral data onto the display space [10,11].
We investigate noise amplification through color correction by using a geometrical framework to represent signals and noise at the spectral and color reconstruction steps. Algebra and geometry for sensor acquisition have already been established [10,12]. The representation we use here accounts for inter-channel spectral correlations, because sensor bases are always non-orthogonal. Consequently, we introduce the contravariant signal-to-noise ratio to take this sensor metric into consideration; it is the appropriate quantity for determining noise amplification at the spectral reconstruction step.

Example of Noise Amplification through Color Correction
To illustrate the problem of noise amplification, we start by implementing simulations based on the high-resolution multispectral images of the ReDFISh dataset [13]. The simulation framework is given in Figure 2. It uses reflectance data (ρ_{p,q}(λ), with p, q the pixel location) from multispectral images to compute raw frames independently for each channel. These frames are then stacked such that each pixel of the obtained image contains the information of all channels, without performing any demosaicking.
Simulations are performed using Python. They start by choosing an illuminant, whose spectral distribution is denoted I_{1lux}(λ), and hypothetical spectral sensitivities of a sensor, denoted Q_i(λ). If the sensor is RGB, i = {r, g, b}. Raw signals are then computed [14,15] by applying the following equation on each pixel for each channel:

\overline{S}_i(p,q) = K \int \rho_{p,q}(\lambda)\, I_{1lux}(\lambda)\, Q_i(\lambda)\, d\lambda

K is an exposure factor depending on several parameters such as the pixel size (a_pix in m), the aperture of the objective lens (f_#), the integration time (t_i in s), the illumination level (N_lux in lux) and finally the CVF, a conversion factor between electron values and digital units (bits) [14]. The overline is an average operator used to denote noise-free signals. After stacking the raw planes, we get a noise-free raw image.
Then photon shot noise and readout noise are added to the raw image using random functions from the NumPy library of Python. Photon shot noise has a Poisson distribution and its variance σ²_ph is equal to the mean signal value. Readout noise has a Gaussian distribution; its standard deviation σ_rn is often given in sensor datasheets as an electron quantity. The noisy image is obtained by:

S_i(p,q) = \overline{S}_i(p,q) + n_{ph}(\overline{S}_i(p,q)) + n_{rn}

Photon shot noise is pixel dependent; this dependency is denoted with parentheses. In these simulations, data are clipped between 0 and 1 and computed in double precision, so signal quantization is not considered. Other sources of noise are also neglected in this paper.
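The raw simulation step described above can be sketched as follows. This is a minimal illustration, not the paper's calibrated pipeline: the Gaussian sensitivities, exposure factor K, conversion factor and readout noise value are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-channel sensor: Gaussian spectral sensitivities (400-700 nm).
wl = np.arange(400.0, 701.0, 10.0)                      # wavelength samples (nm)
Q = np.stack([np.exp(-((wl - 600.0) / 40.0) ** 2),      # "red" channel
              np.exp(-((wl - 450.0) / 40.0) ** 2)])     # "blue" channel

illuminant = np.ones_like(wl)                           # constant photon flux
reflectance = 0.5 * np.ones_like(wl)                    # uniform grey patch

K = 0.05            # illustrative exposure factor (lumps a_pix, f#, t_i, N_lux)
cvf = 1.0 / 5000.0  # illustrative conversion factor (digits per electron)
sigma_rn = 4.0      # readout noise in electrons (typical datasheet order)

# Noise-free raw signals: rectangle approximation of the acquisition integral.
S_mean = K * (Q * illuminant * reflectance).sum(axis=1)

# Add Poisson shot noise (in electrons) and Gaussian readout noise,
# then clip between 0 and 1 as in the paper's simulations.
electrons = S_mean / cvf
S_noisy = cvf * (rng.poisson(electrons) + rng.normal(0.0, sigma_rn, size=2))
S_noisy = np.clip(S_noisy, 0.0, 1.0)
```

In a full simulation this computation would be repeated per pixel using the ρ_{p,q}(λ) reflectances of the dataset; here one patch suffices to show the noise model.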
As an example of noise amplification, we consider the spectral sensitivities of a classical RGB sensor under a constant photon flux illuminant with the experimental parameters of Table 1. We then compute a noisy raw image and apply a color correction matrix (CCM); the result is displayed in Figure 3. A challenge for many authors is to find a way to reduce this amplification. This can be done by optimizing the spectral sensitivities of the sensor [16][17][18] or by changing the way the CCM is computed (adding regularization, tuning the CCM coefficients, etc.) [9,19]. Complementary to these studies, the goal of our paper is mainly to describe the algebraic mechanisms that lead to this noise amplification.

Geometrical Representation of Sensors Signal and Noise
This section presents a geometry of signal and noise propagation from the noisy raw acquisition through spectral reconstruction and color correction. Cohen provided an interpretation of spectral and color acquisition that considers the metric of the sensor space (the angle between axes represents the correlations between them) [10]. In his formalism, spectra are represented as vectors in the sensor space. Moreover, some authors have represented noise in color spaces like CIELAB or CIE xyY [2,20]. There, the analysis of a noisy image is represented as a cloud of points or as a geometrical uncertainty domain around the mean signal value. We join these two approaches to represent noise as an uncertainty domain in Cohen's formalism, which considers the metric of the sensor. For visualization purposes, and to illustrate the tendencies of the noise domain evolution across processes, we give noise a uniform statistical distribution.
Spectral sensitivities are continuous physical functions of wavelength, which can be considered as vectors in a Hilbert space [21]. Inside that vector space, signals acquired from input light have a geometrical representation. For illustration we consider a sensor with two channels (red and blue). This two-dimensional representation can be extended to a p-dimensional vector space without loss of generality, so the model also addresses multicolor (even multispectral) sensors having three color channels or more.
The considered Hilbert space owns a scalar product, denoted ⟨·|·⟩, such that for two vectors L_1 and L_2:

\langle L_1 | L_2 \rangle = \int L_1(\lambda) \cdot L_2(\lambda)\, d\lambda

The · operator is the point-wise product between the two curves. This scalar product allows us to represent the spectral sensitivities of the sensor, denoted Q_r and Q_b in our example. Their L_2 norms are derived as the square root of the scalar product of each vector with itself:

\| Q_i \| = \sqrt{\langle Q_i | Q_i \rangle}, \quad i = \{r, b\}

The angle between the two sensitivity vectors [10] is computed from the scalar product divided by the product of the norms:

\theta_{rb} = \arccos \frac{\langle Q_r | Q_b \rangle}{\| Q_r \| \, \| Q_b \|}
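With sampled curves, the scalar product, norms and channel angle above reduce to simple sums. A sketch, using hypothetical Gaussian sensitivities in place of real sensor data:

```python
import numpy as np

wl = np.arange(400.0, 701.0, 5.0)
dlam = wl[1] - wl[0]                                # wavelength step
Q_r = np.exp(-((wl - 600.0) / 50.0) ** 2)           # hypothetical sensitivities
Q_b = np.exp(-((wl - 450.0) / 50.0) ** 2)

def dot(a, b):
    """Rectangle-rule approximation of <a|b> = integral of a(l).b(l) dl."""
    return float(np.sum(a * b) * dlam)

norm_r = np.sqrt(dot(Q_r, Q_r))
norm_b = np.sqrt(dot(Q_b, Q_b))

# Angle between the two sensitivity vectors: a small spectral overlap
# gives an angle close to 90 degrees (nearly orthogonal channels).
theta_rb = np.degrees(np.arccos(dot(Q_r, Q_b) / (norm_r * norm_b)))
```

Strongly overlapping sensitivities would instead give a small θ_rb, the situation that amplifies noise in the reconstructions discussed below.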
The span of the spectral sensitivity functions generates a vector subspace inside the Hilbert space of light spectra [10]. In practice, these sensitivities are usually non-orthogonal because they are correlated with each other. To accurately display the sensor basis, we derive an orthogonal basis associated with the sensor spectral sensitivities using a Gram-Schmidt process (Figure 4). Spectral sensitivities are linked to the orthogonal basis by a linear transform that can be used to build the metric of the sensor space.

Considering a radiance vector L as the input light of the sensor, the equations giving the noise-free signals (see Equation (1)) can be rewritten using the scalar product in the Hilbert space of lights (the exposure factor K is absorbed into L to lighten notation):

\overline{S}_i = \langle L | Q_i \rangle, \quad i = \{r, b\}

To reduce the amount of notation, noise-free signals are denoted without the average operator in the following. If the spectral sensitivities are sampled at n wavelengths and stacked as the columns of an n × p matrix F, these equations can be approximated by a matrix product [10][11][12]. This rectangle approximation of the integrals of Equation (6) is given by:

S = F^T L

To represent raw data in the sensor's space, we use the mathematical link between the scalar product and the orthogonal projection [22]. Each orthogonal projection of L on Q_i (i = {r, b}) can be expressed as a vector denoted L_{Q_i}, collinear to Q_i. The L_2 norm of this vector is denoted S̃_i. This norm can be derived from the scalar product between L and Q_i, which can be rewritten using the raw signals S_i:

\tilde{S}_i = \| L_{Q_i} \| = \frac{\langle L | Q_i \rangle}{\| Q_i \|} = \frac{S_i}{\| Q_i \|}

S̃_i may be misplaced within an uncertainty domain because of raw data noise. The noise level affecting S̃_i, denoted σ̃_i, is computed by applying the same transform to σ_i:

\tilde{\sigma}_i = \frac{\sigma_i}{\| Q_i \|}

For illustration, a simulation of a raw acquisition over 500 pixels is shown in Figure 5. The chosen input radiance L has a constant photon flux along wavelength, and K is set such that the acquisitions are quite noisy (K = 1.385 SI units).
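The matrix form above can be sketched directly: with sampled sensitivities stacked as the columns of F, the raw signals are F^T L and the projection norms follow by dividing by the channel norms (the Gaussian sensitivities are again hypothetical stand-ins):

```python
import numpy as np

wl = np.arange(400.0, 701.0, 10.0)
F = np.stack([np.exp(-((wl - 600.0) / 45.0) ** 2),
              np.exp(-((wl - 450.0) / 45.0) ** 2)], axis=1)   # n x p, p = 2
L = np.ones(len(wl))                                          # constant flux

# Rectangle approximation: raw signals as a single matrix product.
S = F.T @ L                                                   # (S_r, S_b)

# Norm of the orthogonal projection of L on each Q_i is S_i / ||Q_i||.
norms = np.linalg.norm(F, axis=0)
proj_norms = S / norms
```

By Cauchy-Schwarz, each projection norm is bounded by the norm of L itself, which is what the geometrical picture of Figure 5 encodes.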
The levels of the raw noises σ_r and σ_b are computed as in Equation (2), but for visualization purposes, their statistical distributions are set as uniform.
Raw signals are now processed to perform spectral reconstructions from sparse raw values. In the following, we give a geometrical representation of signal and noise for three different linear methods of spectral reconstruction. In the first method, called intrinsic reconstruction, the reconstruction operator is calculated from the sensor sensitivities only. In the second method, a database of spectra is used as a training set to build the reconstruction operator with a linear regression. In the third method, we use Tikhonov regularization. Finally, we show that these operators can be considered as part of the color correction matrix, whose geometrical interpretation is also depicted.
Throughout this section, one may keep in mind that any point of the sensor plane (see Figure 4b) is a linear combination of Q_r and Q_b. Thus, each point is associated with a continuous function of wavelength, similarly to Q_r(λ) and Q_b(λ).

Intrinsic Spectral Reconstruction (No Training Set)
Color and snapshot multispectral sensors classically contain few spectral channels: the dimension of the acquired data is restricted to a few numbers (p channels) compared to the number of variables (n wavelength samples) in the spectral domain. To retrieve spectra from acquired data, we compute a reconstruction matrix called R. The most straightforward approach consists in performing a least squares minimization between the n × n identity operator I_n and R F^T. We get the reconstruction operator R given by:

R = F (F^T F)^{-1}

R is the pseudo-inverse of F^T and, reciprocally, the pseudo-inverse of R is F^T. For our red and blue sensor, the spectral reconstruction of L, denoted L̂, using this approach can be written as in [10,11]:

\hat{L} = R S = F (F^T F)^{-1} F^T L

The operator F(F^T F)^{-1}F^T is an orthogonal projector [10], so L̂ lies in the sensor's plane such that its orthogonal projections on Q_r and Q_b are equal to S_r and S_b (Figure 6). Because of noise, the location of L̂ is inside an uncertainty area whose extension is driven by σ_r and σ_b. Figure 6b shows the location of the noise-free L̂ surrounded by the points computed from the noisy simulation of Figure 5. They are distributed in a parallelogram shape because of the choice of the uniform distribution of raw noise. The area A of the parallelogram represents the noise level of the spectral reconstruction. It depends not only on the raw noise levels, but also on the correlation between the channels through θ_rb:

A = \frac{\tilde{\sigma}_r \, \tilde{\sigma}_b}{\sin \theta_{rb}}

where σ̃_i is the noise level of the projection of L on Q_i. This angle decreases when correlations are higher, making A wider. So for identical raw noise levels, the more correlated the channels are, the noisier the spectral reconstruction is.
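A sketch of the intrinsic reconstruction, with Gaussian stand-in sensitivities: R is built from F alone, and the resulting operator is an orthogonal projector whose projections on the channels reproduce the raw signals.

```python
import numpy as np

wl = np.arange(400.0, 701.0, 10.0)
F = np.stack([np.exp(-((wl - 600.0) / 45.0) ** 2),
              np.exp(-((wl - 450.0) / 45.0) ** 2)], axis=1)   # n x p

# Intrinsic reconstruction operator (pseudo-inverse of F^T).
R = F @ np.linalg.inv(F.T @ F)                                # n x p

L = np.ones(len(wl))          # spectrum to reconstruct (constant photon flux)
S = F.T @ L                   # raw signals
L_hat = R @ S                 # = F (F^T F)^-1 F^T L, lies in the sensor plane

# R F^T is the orthogonal projector onto the sensor plane:
# applying it twice changes nothing.
P = R @ F.T
```

The projector property is exactly why L̂ is confined to the sensor plane: any spectral content of L orthogonal to the span of Q_r and Q_b is irrecoverable.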

Linear Regression over a Spectral Training Set
To increase estimation accuracy, it is usual to perform a linear regression over a training set of radiance spectra [18]. This training set can be written as an n × m matrix T_set, where n is the number of wavelengths and m the number of radiance spectra. The reconstruction operator based on the training set is denoted R_t. It is computed by performing a least squares minimization between T_set and its reconstruction from raw values:

R_t = \underset{R}{\arg\min} \left\| T_{set} - R F^T T_{set} \right\|^2 = T_{set} T_{set}^T F \left( F^T T_{set} T_{set}^T F \right)^{-1}

For color applications, a well-known spectral dataset is the X-Rite ColorChecker Classic (Figure 7). The radiances of T_set are computed by multiplying the reflectance spectra of the color chips with the illuminant spectrum (L, for instance, here).
The spectral reconstruction of L can then be written [18]:

\hat{L} = R_t S = T_{set} T_{set}^T F \left( F^T T_{set} T_{set}^T F \right)^{-1} F^T L

T_set T_set^T F (F^T T_set T_set^T F)^{-1} F^T is an oblique projection operator. Unlike the intrinsic reconstruction, L̂ is not a simple combination of the sensor's spectra, because of the influence of the prior radiance data on R_t. Instead, L̂ belongs to another subspace, generated by a basis of extrapolated spectral sensitivities. Because of the reciprocity of the pseudo-inverse operation, these extrapolated sensitivities are contained in a matrix F_t computed from the reconstruction operator R_t:

F_t = R_t \left( R_t^T R_t \right)^{-1}

In turn, F_t defines two axes, called Q_{r,t} and Q_{b,t}. The representation of L̂ is given in Figure 8.
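A sketch of the training-based operator, with random smooth radiances standing in for the ColorChecker set (all data here are illustrative): R_t F^T is then an oblique, idempotent projector.

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.arange(400.0, 701.0, 10.0)
n = len(wl)
F = np.stack([np.exp(-((wl - 600.0) / 45.0) ** 2),
              np.exp(-((wl - 450.0) / 45.0) ** 2)], axis=1)   # n x p

# Hypothetical training set: m smooth-ish positive radiance spectra.
m = 24
T_set = np.abs(np.cumsum(rng.normal(size=(n, m)), axis=0))    # n x m

# Least squares reconstruction operator trained on T_set.
M = T_set @ T_set.T @ F                                       # n x p
R_t = M @ np.linalg.inv(F.T @ M)                              # n x p

# R_t F^T is an oblique projector: idempotent but not symmetric.
P = R_t @ F.T

# Extrapolated sensitivities, from the pseudo-inverse of R_t.
F_t = R_t @ np.linalg.inv(R_t.T @ R_t)
```

Unlike the intrinsic case, the columns of R_t mix the training statistics into the reconstruction, which is why the axes Q_{r,t} and Q_{b,t} leave the original sensor plane.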

Tikhonov Regularization (Ridge Regression)
Both previous reconstruction operators come from an inversion of the acquisition process, and such inversions are known to amplify noise. A classical method to limit this increase is Tikhonov regularization, also known as ridge regression. The ridge reconstruction operator R_r is written as:

R_r = F \left( F^T F + \alpha I_p \right)^{-1}

α is a real factor that controls the conditioning of the matrix, and I_p is the p × p identity operator. When α increases, the matrix becomes better conditioned, so the inversion limits noise amplification; however, the spectral reconstruction operator becomes biased [20]. As before, we can derive the corresponding sensitivity matrix F_r as follows:

F_r = R_r \left( R_r^T R_r \right)^{-1}

The corresponding Q_{r,r} and Q_{b,r} are still linear combinations of Q_r and Q_b, so they belong to the sensor's space (Figure 9). Noise reduction comes from the modification of the shape of the uncertainty volume: its sides are aligned with the axes and its size depends on α. Estimation of the optimal α coefficient is problem dependent and is not discussed in this article [23,24].
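A minimal sketch of the ridge operator, assuming the same Gaussian stand-in sensitivities: increasing α shrinks the operator norm, which is precisely what damps noise amplification (at the price of bias).

```python
import numpy as np

wl = np.arange(400.0, 701.0, 10.0)
F = np.stack([np.exp(-((wl - 600.0) / 45.0) ** 2),
              np.exp(-((wl - 450.0) / 45.0) ** 2)], axis=1)   # n x p

def ridge_operator(F, alpha):
    """Tikhonov-regularized reconstruction: R_r = F (F^T F + alpha I_p)^-1."""
    p = F.shape[1]
    return F @ np.linalg.inv(F.T @ F + alpha * np.eye(p))

# A larger alpha gives a better-conditioned inverse and a smaller gain,
# hence less raw-noise amplification in the reconstructed spectra.
R_weak = ridge_operator(F, 1e-6)
R_strong = ridge_operator(F, 10.0)
```

With α → 0 the operator tends to the intrinsic pseudo-inverse; the choice of α trades this bias against the variance of the reconstruction.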

Color Correction: Projection in a Color Space
Color correction consists in transforming the raw signals acquired by the sensor into human tristimulus color data. There exist plenty of methods [11,[25][26][27], among which the linear color correction matrix (CCM) is the most widely used. The spectral properties of the standard human observer are defined by the CIE (Commission Internationale de l'Éclairage) as the color matching functions (CMF) x̄(λ), ȳ(λ), z̄(λ) (Figure 10). These curves, sampled over n wavelengths and taken as an n × 3 matrix H, allow us to compute color data according to the CIE XYZ 1931 standard in the same way as the sensor raw acquisition of Equation (7). H is given by:

H = \left[ \bar{x}(\lambda) \;\; \bar{y}(\lambda) \;\; \bar{z}(\lambda) \right]

H allows us to compute a CCM [11]. As the color coordinates are limited to the CIE XYZ 1931 color space and no CCM normalization is performed, we actually call it CCM_kernel. It is given by:

CCM_{kernel} = H^T R = H^T F \left( F^T F \right)^{-1}

The CCM contains an implicit spectral reconstruction operator R, so the geometrical interpretations given previously for radiance estimation remain valid for color applications. X, Y and Z are color coordinates analogous to S_r and S_b. They allow us to compute the orthogonal projections of L on the color matching axes (denoted x, y and z) in the sensor's vector space. To represent the sensor and color axes at the same time on graphics, we show that applying CCM_kernel is equal to applying the projection of H onto the sensor's space:

CCM_{kernel}\, S = \left( F (F^T F)^{-1} F^T H \right)^T L

As shown in Figure 11, the projected x, y and z axes span only two dimensions (because of the choice of a two-channel sensor). The projection of the uncertainty volume of L̂ produces the uncertainties σ_x, σ_y and σ_z. The amplitude of color noise depends on the size of the spectral uncertainty volume as well as on its orientation with respect to the color matching axes.

This section provided some illustrations of how noise propagates from raw acquisition to spectral and color reconstructions. It showed that the noise of output color images depends on many different aspects. From a physical point of view, the first parameters are the raw sensitivities of the channels, which set the raw noise levels.
Then, inter-channel correlations have an impact on the noise of the spectral reconstruction by changing the size and shape of the uncertainty volume surrounding L̂ (Figure 6). Finally, the noise of a color-corrected image depends on one more criterion: the orientation of the target color space compared to the sensor axes (Figure 11). From a computational point of view, these noise levels also depend on the method used for spectral and color reconstruction.
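The color correction step discussed in this section can be sketched as follows. Gaussian lobes stand in for the CIE 1931 color matching functions (the real tabulated CMFs should be used in practice); applying the CCM kernel to the raw signals is equivalent to projecting H onto the sensor space first.

```python
import numpy as np

wl = np.arange(400.0, 701.0, 10.0)
F = np.stack([np.exp(-((wl - 600.0) / 45.0) ** 2),
              np.exp(-((wl - 450.0) / 45.0) ** 2)], axis=1)   # n x p sensor

# Stand-in color matching functions (n x 3 matrix H).
H = np.stack([np.exp(-((wl - 600.0) / 40.0) ** 2),
              np.exp(-((wl - 550.0) / 40.0) ** 2),
              np.exp(-((wl - 450.0) / 40.0) ** 2)], axis=1)

R = F @ np.linalg.inv(F.T @ F)      # intrinsic spectral reconstruction
CCM_kernel = H.T @ R                # 3 x p: raw signals -> XYZ

L = np.ones(len(wl))
S = F.T @ L
XYZ = CCM_kernel @ S

# Equivalent view: project H onto the sensor space, then take scalar
# products with L (both routes give the same XYZ triplet).
H_proj = R @ F.T @ H                # n x 3
```

The equivalence holds because the intrinsic operator F(F^T F)^{-1}F^T is a symmetric projector, so it can act on H instead of L without changing the result.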

Raw, Color and Contravariant Signal to Noise Ratios
The signal-to-noise ratio (SNR) is the measurement used to evaluate noise amplification in the literature. First, the SNR on raw images can be computed independently over each spectral channel. Under the hypothesis of considering only photon shot noise and readout noise, the overall variance of a uniform image is σ²_i = σ²_ph(S̄_i) + σ²_rn. For the p channels of the sensor, the raw SNR can be computed in decibels as:

SNR_i = 20 \log_{10} \frac{\overline{S}_i}{\sigma_i}, \quad i = 1, \ldots, p

As shot noise depends on the intensity, the SNR increases with the signal; conversely, in low-light conditions the SNR decreases [28]. Regarding the geometrical representation of Figure 5, the raw SNR can also be written with the projected signal and noise levels S̃_i and σ̃_i:

SNR_i = 20 \log_{10} \frac{\tilde{S}_i}{\tilde{\sigma}_i}

This definition is valid at the input of the processing chain, but noise propagation is classically evaluated by studying the SNR of the output image. The output can be a color image (three RGB channels) or a multispectral one (we consider that it contains n channels corresponding to each wavelength).
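Before moving to output images, the raw SNR definition above can be sketched directly from the noise model (signals expressed in electrons; the readout value is an illustrative, datasheet-like number):

```python
import numpy as np

def raw_snr_db(signal_e, sigma_rn):
    """Raw SNR in dB; shot-noise variance equals the mean signal (electrons)."""
    sigma = np.sqrt(signal_e + sigma_rn ** 2)
    return 20.0 * np.log10(signal_e / sigma)

# The brighter the signal, the higher the SNR (shot-noise-limited regime).
snr = raw_snr_db(np.array([100.0, 10000.0]), sigma_rn=4.0)
```

At low signal the readout term dominates the denominator, which is why the benchmark tendencies change between the high and low light conditions studied later.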
In the case of color images, each pixel contains an RGB triplet expressed in a display space (sRGB, for example) [29]. As color is a perceptual entity, a color SNR criterion must be representative of human perception [30,31]. Most standard color spaces are related to the Commission Internationale de l'Éclairage (CIE) standards, namely CIE XYZ 1931. Color images can be separated into luminance and chrominance components (Figure 12). With the sRGB display space, the linear luminance is computed from the RGB triplet values:

Y = 0.2126\, R + 0.7152\, G + 0.0722\, B

Then, the variances of the RGB triplets are extracted from a uniform area of the output images (such as Figure 12). In practice, this evaluation is often performed over an achromatic (grey) uniform image. By propagating the variances of the RGB triplets (supposed to be statistically independent) through the linear combination, the SNR over the luminance channel is computed as:

SNR_Y = 20 \log_{10} \frac{\overline{Y}}{\sqrt{0.2126^2 \sigma_R^2 + 0.7152^2 \sigma_G^2 + 0.0722^2 \sigma_B^2}}

If we consider Figure 11, this SNR can also be computed from the projection of the uncertainty volume on the y axis. SNR_Y, associated with luminance, is a convenient criterion because it is just a scalar. To be more accurate, one may also evaluate the impact of noise on chromaticity in other color spaces. For this evaluation, pixels are projected into the so-called CIELAB color space [32], generating a cloud of points; chromaticity noise can then be represented as a volume in that space [2]. Because of the non-linearity of the transformation between CIE XYZ 1931 and CIELAB, we do not use this method in the following. Instead, we investigate the chromaticity channels by applying Equation (25) to the X and Z channels (see Figure 11).

Finally, for the case of multispectral images, SNR quantification is more challenging. The output image is a cube of data where the third dimension contains the spectra reconstructed at each pixel over n wavelength samples. If spectra are linearly reconstructed, it is possible to evaluate the SNR at each wavelength sample by using the covariance matrix of the reconstruction operator [2].
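Both output evaluations above can be sketched together. The luminance SNR propagates independent per-channel variances through the linear sRGB weights, and the multispectral envelope comes from propagating the raw-noise covariance through a linear reconstruction operator (sensitivities and noise levels are illustrative placeholders):

```python
import numpy as np

# Luminance SNR for a color image: propagate independent RGB variances
# through the linear sRGB luminance combination.
w = np.array([0.2126, 0.7152, 0.0722])

def snr_luminance_db(rgb_mean, rgb_var):
    Y = w @ rgb_mean
    return 20.0 * np.log10(Y / np.sqrt((w ** 2) @ rgb_var))

snr_y = snr_luminance_db(np.array([0.5, 0.5, 0.5]), np.array([1e-4] * 3))

# Per-wavelength noise for a multispectral image: propagate the raw-noise
# covariance through the linear reconstruction operator R.
wl = np.arange(400.0, 701.0, 10.0)
F = np.stack([np.exp(-((wl - 600.0) / 45.0) ** 2),
              np.exp(-((wl - 450.0) / 45.0) ** 2)], axis=1)
R = F @ np.linalg.inv(F.T @ F)

Sigma_raw = np.diag([1e-4, 2e-4])            # illustrative channel variances
Sigma_rec = R @ Sigma_raw @ R.T              # covariance of reconstruction
sigma_lambda = np.sqrt(np.diag(Sigma_rec))   # std-dev envelope per wavelength
```

The diagonal of the propagated covariance is what produces the standard deviation curves drawn around a noise-free reconstruction.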
With that method, the output SNR is expressed over n components, so it allows one to draw standard deviation curves around the noise-free spectral reconstruction, which is very convenient (as in Figure 19 in the next section). Another way to illustrate the SNR attached to multispectral data is to project the reconstructed spectra into a color space and apply the same evaluation as for color images; the SNR is, in that case, computed over the three components of the color space. These two ways of quantifying SNR in multispectral data share a common issue: their dimension. In fact, linear spectral reconstruction is an ill-posed problem where n-dimensional data are estimated from p-dimensional ones (p is the number of channels of the sensor, usually lower than n). Linear reconstruction thus has only p degrees of freedom, so its SNR is better represented over p dimensions instead of n or 3. To quantify the SNR of spectral reconstruction over its real dimension, we introduce the contravariant SNR.

Definition of the Contravariant Signal to Noise Ratio
In spectral reconstruction, the estimated spectrum L̂ is a weighted sum of Q_r and Q_b (or of their analogs when using training over a dataset or Tikhonov regularization). For a non-orthogonal basis, these weights are the contravariant coordinates of the vector L̂ in the sensor basis [33]. In contrast, the raw acquisition values correspond to the covariant coordinates of L̂ in that basis. Contravariant coordinates are obtained by applying the metric tensor to the covariant coordinates; they are often denoted with upper indices, following the so-called Einstein notation [33]:

\begin{pmatrix} S^r \\ S^b \end{pmatrix} = \left( F^T F \right)^{-1} \begin{pmatrix} S_r \\ S_b \end{pmatrix}

A geometrical illustration is given in Figure 13 for the previous red and blue sensor. In the same way as L̂, the uncertainty volume associated with the spectral reconstruction can be projected along the axes. As a result, we get the uncertainties of the contravariant coordinates of L̂, denoted σ^r and σ^b.
These uncertainties constitute what we call, in the following, the contravariant noise. The contravariant signal and noise definitions allow us to compute a contravariant SNR associated with each dimension:

SNR^i = 20 \log_{10} \frac{|S^i|}{\sigma^i}, \quad i = \{r, b\}

Figure 13. Projection of the uncertainty volume onto contravariant coordinates. The uncertainty of the spectral reconstruction is given by the contravariant noise σ^r and σ^b. When the angle between channels is smaller, the contravariant coordinates and SNR decrease: the spectral reconstruction becomes "more noisy".
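A sketch of the contravariant signals and SNR for the intrinsic case, where the metric tensor to apply is (F^T F)^{-1} (the contravariant noise levels are illustrative placeholders):

```python
import numpy as np

wl = np.arange(400.0, 701.0, 10.0)
F = np.stack([np.exp(-((wl - 600.0) / 45.0) ** 2),
              np.exp(-((wl - 450.0) / 45.0) ** 2)], axis=1)   # n x p

G_up = np.linalg.inv(F.T @ F)       # metric tensor raising the indices

L = np.ones(len(wl))
S_cov = F.T @ L                     # covariant coordinates (raw signals)
S_con = G_up @ S_cov                # contravariant coordinates (weights)

# The reconstruction is the weighted sum of the sensitivities.
L_hat = F @ S_con

# Contravariant noise (illustrative values) gives a per-dimension SNR in dB.
sigma_con = np.array([0.05, 0.08])
snr_con = 20.0 * np.log10(np.abs(S_con) / sigma_con)
```

Note that the SNR is computed over p dimensions only, matching the actual degrees of freedom of the linear reconstruction.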
The contravariant noise remains valid when the training method or the ridge regression is used. The metric tensor to be applied in Equation (26) is then, respectively, (R_t^T R_t) = (F_t^T F_t)^{-1} or (R_r^T R_r) = (F_r^T F_r)^{-1}. Therefore, in addition to evaluating the impact of the sensor's metric, the contravariant noise can be used to compare the influence of several reconstruction methods.

Example of Contravariant SNR Evaluation to Compare Several Spectral Reconstructing Methods
Using the contravariant SNR, we can compare noise propagation through several reconstruction methods (intrinsic, regression, and ridge regression with two different α parameters). The sensor is still the two-channel one and the spectrum to be reconstructed is still L (uniform photon distribution over wavelength). The spectral reconstruction accuracy is measured in terms of angular error. The exposure parameter for the raw computation remains the same as in Section 3.
The angular error between L and L̂ is given in Table 2; it is computed similarly to Equation (5). The corresponding noise-free reconstructions are displayed in Figure 14. Table 2 gives the angular error between the reference radiance spectrum and the reconstructed spectra (the lower, the better). It shows that the most accurate spectral reconstruction of L is obtained using the linear regression method, which confirms the visual results of Figure 14. It also shows that the bias induced by the ridge regression operator is negligible in terms of angle. This occurs because the angle between the red and blue channels is wide enough, so using the intrinsic or the ridge regression operator does not significantly change the spectral distribution of the reconstructed spectrum.

A joint evaluation of the contravariant SNR for each method is given in Figure 15. The values are close to each other, even if the regression method appears to be the noisiest one. On the contrary, higher SNRs are obtained with ridge regression for increasing α, confirming the interest of ridge regression in noisy conditions.

SNR Analysis of Various Common Sensors
We perform a benchmark of several sensors having different spectral configurations. The goal is to identify a configuration that provides the lowest noise amplification across color correction or spectral reconstruction. In accordance with the previous investigations, we analyze the SNR at the raw, spectral reconstruction and color correction steps. We compare six different sensors, called respectively RGB, CMY, RGBW, RGBWir, RYB and RGBC.
• RGB and CMY are composed of, respectively, red, green, blue and cyan, magenta, yellow channels; they are classical designs for color image sensors.
• RGBW and RGBWir contain a white pixel (no color filter) that is more sensitive than the others. RGBWir does not have an infrared cutoff filter, so compared to the RGBW sensor it acquires light from both the visible and near-infrared domains. Both sensors have been proposed for low-light conditions.
• RYB has a yellow channel twice as sensitive as the red and blue ones.
• RGBC has a cyan channel, also twice as sensitive, in addition to RGB.
Spectral sensitivity curves are mostly proprietary data, so the benchmark is carried out on theoretically built sensors based on one unique model taken from the Teledyne-E2V Onyx EV76C664 sensor. Its electronic properties are given in Table 3 (physical parameters of the sensor). The sensitivity curves of RGBWir are those of the Onyx [34]. RGB and RGBW are built by applying an infrared cutoff filter in front of the RGBWir channels. Similarly, the CMY channels are built using the white sensitivity of the Onyx with Fuji CMY color filters in front [35]. The final spectral sensitivities are given in Figure 16.

The benchmark starts with an evaluation of the noise-free performance of the sensors (spectral and color reconstruction). For each sensor, the spectral channels are equally considered for the computation of the spectral reconstruction operator. This operator is computed using Equation (11), which does not imply any training set (to avoid bias due to prior data). So R is an n × p operator, where p = {3, 4} depending on the number of channels. Spectral accuracy is evaluated over the equal-distribution radiance spectrum L. The accuracy is given as an angle, as in Table 2, and the results of the benchmark are given in Table 4. Color performance is checked over the X-Rite ColorChecker Classic using the ∆E_76 criterion (Table 5). For coherence, the CCM is computed with Equation (19), again without any training set, to avoid bias due to prior data. Table 5 shows that the studied sensors have comparable color performances. Nonetheless, the RGBWir sensor gets the poorest color performance, due to the near-infrared part of the signal [36].

The second step of the benchmark is to investigate noise propagation. This study is performed for both high and low light levels. For each sensor, we simulate a noisy acquisition of the L spectrum under a controlled exposure setup (Table 6). First, the illumination is set to 10 lux, which is dim light for human vision; however, as the sensor is very sensitive due to its large pixels, these conditions correspond to a high light level. The illumination level and aperture are fixed, and the integration time is computed such that the acquisition reaches the maximum value without saturating any channel of any sensor. For information, the saturation integration times of each sensor are given in Table 7.

The raw SNR is computed according to the definition of Equation (21). This evaluation is performed for each channel of each sensor and is represented in Figure 17. As expected, sensors with wider spectral sensitivities get a higher raw SNR than the others, since they contain more sensitive channels. The classical RGB is thus the sensor with the lowest raw SNR.

Then we compute the contravariant SNR (Equation (27)) to investigate the spectral reconstruction level, as shown in Figure 18. The result displays negative contravariant SNRs. This occurs when the absolute values of the contravariant signals are lower than their noise levels. In the case of negative SNRs, the results of the spectral reconstruction cannot be exploited because they are too noisy. Moreover, the contravariant SNR almost follows the inverse trend of the raw SNR. Indeed, in sensors with wide spectral channels, inter-channel correlations are increased and cause a high noise amplification at the spectral reconstruction level (Figure 6). In Figure 19, we illustrate the results of noisy intrinsic spectral reconstructions (at the same exposure setup), wavelength by wavelength. This figure clearly shows the difficulty of RGBW and RGBC to reconstruct spectra from noisy signals. This happens because the W and C channels, respectively, are too correlated with the RGB channels.
The final step of the benchmark is an SNR evaluation after color correction. In Figure 20, we display the SNR over the X, Y and Z components, enlarging the usual picture of the SNR_Y comparison (see Equation (25)). Here we see that the color SNR follows the trend of the contravariant SNR. To complete this high-light-level benchmark, we use the simulation framework of Section 2 to compute noisy color renderings over realistic images corresponding to the physical settings of the benchmark (Figure 21). Once again, the images are computed from the high-resolution multispectral images of the ReDFISh dataset [13]. For each sensor, the color images have been computed with a normalized version of the CCM_kernel, whose calculation is based on Equation (19), such that no training set is used.

In a second test, we perform the same benchmark but in dim light conditions. All physical parameters remain equal except the illumination, which is divided by a factor of 50: N_lux = 0.2 lux. The histogram results are displayed in Figure 22 for each type of SNR.

We can draw some conclusions from this evaluation. For both high and low signal levels, higher raw SNRs are obtained for the most sensitive channels. The contravariant SNR evaluation shows that reconstructing spectra from sparse raw data is very sensitive to noise and cannot be performed in low-light conditions (negative contravariant SNR). Moreover, the contravariant SNR remains higher in both conditions for the RGB sensor. This shows that spectral reconstruction is mostly sensitive to inter-channel correlation rather than to raw sensitivity. Finally, the color SNR of the RGB sensor is overtaken by CMY and RYB at low light level. This change in the tendencies of the benchmark is due to the impact of readout noise, which becomes higher at low light level. So for color applications, a smart compromise must be found between raw sensitivity, inter-channel correlation and sensor/color axes orientation to minimize noise in output images.
RGBW still gets poor results in low-light conditions because of the high correlation between W and the RGB spectral space. So from a spectral point of view, the W channel brings no benefit. However, we did not perform pansharpening [6] in this study, so the interest of the W channel is not highlighted here.

Conclusions
This paper focused on the impact of the spectral sensitivities of imaging sensors on the noise of the output image (color and multispectral). To perform our analysis, we studied the color correction step of image processing and discarded all other steps such as denoising or demosaicking. We thus started by introducing an appropriate simulation framework to display an example of noise amplification through color correction.
Then, we proposed a novel geometrical way of representing noise propagation from raw acquisition through spectral reconstruction and color correction. This approach considers spectra as vectors [10], whereas noise is represented as uncertainty domains (segments or volumes) around the noise-free position [2]. We showed that the noise quantity in the final color image depends on three main spectral dependencies: raw sensitivity, inter-channel correlations, and the relative orientation of the sensor and color spaces. Based on our approach, we were also able to represent the spectral reconstructions of other methods, such as least squares minimization over a training set and Tikhonov regularization.
After recalling the computation of raw and color signal-to-noise ratios to quantify noise in uniform images, we introduced the contravariant SNR to quantify noise in spectral reconstructions. Compared to other quantifiers, this one is accurate in terms of dimension. We also generalized it such that it can be computed with the different linear reconstruction methods.
Finally, using the different SNRs defined in the article, we evaluated the spectral and color performances of several sensors with spectral characteristics close to commercial products. We found that the minimization of noise in output color images needs a fine compromise between the three spectral dependencies we highlighted in Section 3. However, having low inter-channel correlations seems to be mandatory if the goal is to reconstruct spectra, as shown by the results of the RGB sensor in terms of contravariant SNR.
The next step of this work is to use the description drawn in this paper to optimize spectral sensitivities and make image acquisition less noisy. In addition to what already exists [16,18], we would like to develop a geometrical method, based on the presented framework, to find an optimal sensor. We also expect that this formalism would help to optimize methods of spectral reconstruction or color correction.