Sensors 2014, 14(3), 3965-3985; doi:10.3390/s140303965
Abstract: It has now been 20 years since the seminal work by Finlayson et al. on the use of spectral sharpening of sensors to achieve diagonal color constancy. Spectral sharpening is still used today by numerous researchers for goals unrelated to the original one of diagonal color constancy, e.g., multispectral processing, shadow removal, and the location of unique hues. This paper reviews the idea of spectral sharpening through the lens of what is known today in color constancy, describes the different methods used for obtaining a set of sharpened sensors, and presents an overview of the many different uses that have been found for spectral sharpening over the years.
Our visual system has a striking ability to deal with color. However, we are far from fully understanding its behavior. To gain insight into how our visual system works, some assumptions are often made: first, it is assumed that there is a single illuminant in the scene which is spatially uniform, and second, it is assumed that objects are flat, coplanar, and Lambertian, i.e., their reflectances are diffuse and independent of the viewing angle.
Following these assumptions, the light energy reaching our eye depends on the spectral power distribution of the illuminant (E(λ), where λ spans the visible spectrum) and the spectral reflectance distribution of the object we are looking at (R(λ)). This information is called the color signal and is written as

C(λ) = E(λ)R(λ).
This color signal is weighted along the visible spectrum ω with the sensitivities of our cone cells (which peak in the long, medium and short wavelengths of the visible range, and are denoted by l(λ), m(λ), s(λ) respectively) to obtain the L,M,S color space coordinates of the signal

L = ∫ω C(λ)l(λ)dλ,  M = ∫ω C(λ)m(λ)dλ,  S = ∫ω C(λ)s(λ)dλ.
From this equation, we can see that when looking at a white piece of paper at sunset the values captured by our eye are reddish as a result of the illumination. In contrast, when looking at a white piece of paper on a cloudy day these values are bluish. However, we perceive the piece of paper as approximately white in both cases. This property of our visual system is called color constancy.
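The computation above can be sketched numerically. In this illustrative example the illuminant, reflectance, and cone sensitivities are all made-up toy functions (flat illuminant, Gaussian curves), not real colorimetric data; only the structure of the calculation follows the text.

```python
import numpy as np

# Toy sketch: color signal C(lambda) = E(lambda) R(lambda), projected onto
# three hypothetical cone sensitivities by discrete integration over omega.
wl = np.arange(400.0, 701.0, 10.0)             # visible wavelengths (nm)
dw = 10.0                                      # sampling step
E = np.ones_like(wl)                           # flat (equal-energy) illuminant
R = np.exp(-((wl - 550) / 60.0) ** 2)          # toy reflectance peaking at 550 nm

C = E * R                                      # the color signal

def cone(mu, sigma=40.0):                      # toy Gaussian cone sensitivity
    return np.exp(-((wl - mu) / sigma) ** 2)

# LMS coordinates: the color signal weighted by each cone sensitivity.
L = np.sum(C * cone(570)) * dw
M = np.sum(C * cone(545)) * dw
S = np.sum(C * cone(445)) * dw
```

Because the toy reflectance peaks near the medium-wavelength cone, M dominates S here; under a reddish (sunset-like) illuminant the L value would rise instead.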
1.1. Human Color Constancy
Color constancy is usually defined as the effect whereby the perceived or apparent color of a surface remains constant despite changes in the intensity and spectral composition of the illumination. An example is shown in Figure 1, where we are able to perceive the t-shirt of the man on the right as yellow; however, if we isolate the t-shirt we perceive it as green. Some reviews on color constancy have been published recently [1–3].
In the XIXth century, von Kries hypothesized that a compensation (or gain) normalization is performed individually within each photoreceptor. Mathematically, we may write

L′ = αL,  M′ = βM,  S′ = γS,

where α, β and γ are the per-channel gains.
This model is called von Kries adaptation. The idea of an individual gain for each photoreceptor was adopted early in computer vision, and the gains were computed based on the scene, e.g., by assuming that the scene had a mean value of grey, or that a white patch was present in the scene. von Kries adaptation has been shown to provide a reasonable approximation of how we perceive scenes composed of natural reflectances and illuminant spectra. In a contemporary version of the model, it is stated that cone signals interact and the gain in each channel is influenced by the other channels.
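A minimal sketch of von Kries adaptation follows; the LMS value of the white and the rule for choosing the gains (map white to (1, 1, 1)) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# von Kries sketch: each cone channel is scaled by an independent gain.
# Here the gains are chosen so a hypothetical white surface maps to (1, 1, 1).
white_under_illuminant = np.array([0.9, 1.0, 0.7])   # made-up LMS of white
gains = 1.0 / white_under_illuminant                  # one gain per photoreceptor

def von_kries(lms):
    return gains * lms                                # diagonal (per-channel) scaling

corrected_white = von_kries(white_under_illuminant)
```

This is exactly the per-channel (diagonal) structure that the rest of the paper builds on.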
Even though this model can predict the data very well in natural environments, there is still no agreement in the literature as to how the gain values are computed from the image statistics of the stimulus. Furthermore, the model performs poorly when natural reflectances and illuminants are not present in the stimulus.
For this reason, further research on the neural mechanisms that underlie color constancy has been conducted over the years. Different parts of the brain have been shown to deal with color constancy, especially the lateral geniculate nucleus (LGN) and the regions V1 and V4 of the visual cortex, although recent studies suggest that other areas might also be involved. Therefore, as Smithson says, “It seems most reasonable to say that processing for color constancy starts in the retina, is enhanced in V1/V2 and continues in V4”. Cues for color constancy used by humans at later neural levels might include mutual reflections, 3D shapes, shadows, color memory, and even awareness of the illumination change [2,9].
1.2. Computational Color Constancy
Computational color constancy does not aim to recover the perceived image, but rather an estimation of the surface reflectances of the scene; it therefore changes the paradigm upon which human color constancy is built. In other words, while human color constancy relies on the perception of colors, computational color constancy relies on the absolute color values of the objects viewed under a canonical (usually white) illuminant, without considering how the image is perceived by an observer. Vazquez-Corral et al. have shown that computational and human color constancy do not aim for the same final image: they performed a pair-wise comparison experiment where they asked human observers to pick the most natural image from a range of images produced by different computational color constancy algorithms. Observers chose the best computational solution in only 40% of the comparisons.
Therefore, computational color constancy is treated from a mathematical point of view. Let us suppose we have an object with reflectance R(λ) and a camera with sensitivities Qi(λ). We take two photos of the object at two different moments in time; at each of these moments there is a different illuminant in the scene, E1(λ) and E2(λ) (let us suppose each illuminant is uniform). Then, the response of the object recorded by camera sensor i under one of the illuminants is

ρin = ∫ω En(λ)R(λ)Qi(λ)dλ,

where the superscript n denotes the illuminant used.
Marimont and Wandell showed that although natural surface reflectances in the world are 5- or 6-dimensional, i.e., a basis of 5 or 6 reflectances is needed in order to derive any other reflectance, an “effective basis” of smaller dimension can be extracted. Similarly, Judd et al. showed how to derive a basis from the set of daylight illuminations. These studies allow us to relate the responses under the two illuminants by a 3 × 3 matrix; that is, we are dealing with a 9-dimensional problem.
2. Spectral Sharpening for Diagonal Color Constancy
Early research in computational color constancy [5,6] focused on modeling illumination changes by a scaling multiplication in each of the sensor channels (inspired by von Kries' coefficient law). This idea, called the diagonal model of illuminant change, can be expressed as

ρ̱2 = Dρ̱1,

where D is a diagonal matrix mapping the responses under the first illuminant to those under the second.
Many cameras do not have sensors that satisfy the above model. Therefore, different methods have searched for a linear combination of the original sensor responses in order to force them to comply with the diagonal model. Mathematically, this linear combination is a matrix T satisfying

Tρ̱2 ≈ DTρ̱1.
Finlayson et al. called this approach “spectral sharpening”, since the new sensor responses are sharper than the original ones. We should note that other authors had previously suggested a similar idea. Figure 2 compares a set of original sensors with their sharpened version. The remarkable and useful conclusion of the spectral sharpening work was that, even for a broad-band sensor system, a diagonal matrix model of illumination could be used to solve the computational color constancy problem.
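The diagonal model in a sharpened basis can be sketched as follows; the matrices T and D below are invented for illustration, not real sharpening data.

```python
import numpy as np

# Sketch of the sharpened diagonal model: responses under illuminant 2 are
# approximated from responses under illuminant 1 as rho2 = T^-1 D T rho1,
# where D is diagonal in the sharpened basis. T and D here are made up.
T = np.array([[ 1.2, -0.1,  0.0],
              [-0.2,  1.1, -0.1],
              [ 0.0, -0.1,  1.0]])
D = np.diag([1.3, 1.0, 0.7])          # per-channel illuminant scaling

rho1 = np.array([0.4, 0.5, 0.6])      # camera response under illuminant 1
rho2 = np.linalg.inv(T) @ D @ T @ rho1
```

Undoing the illuminant change is then just the inverse diagonal scaling in the sharpened basis, which is the whole appeal of the model.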
An example of the use of spectral sharpening can be seen in Figure 3. In this figure, an original image presenting a blue cast is seen under a set of color matching functions (CMFs). In order to apply diagonal color constancy and remove the blue cast, a linear transform T is computed by 5 different methods (explained later in this section). Then, for each method, the pipeline works as follows: (1) a change of basis is performed by the linear transform T; (2) a method for diagonal color constancy (MaxRGB in this case) is applied to the new-basis image; (3) the image is converted back to the original sensors by T−1. The different T matrices have been obtained using Planckian illuminants and a whole set of reflectances.
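The three-step pipeline can be sketched in a few lines. The function below is an illustrative sketch: the image and the sharpening matrix T are assumed inputs, and MaxRGB is implemented in its simplest per-channel-maximum form.

```python
import numpy as np

# Sketch of the Figure 3 pipeline with MaxRGB as the diagonal color constancy
# step. `img` is an H x W x 3 array of positive values; `T` is a hypothetical
# 3 x 3 sharpening transform.
def sharpened_max_rgb(img, T):
    h, w, _ = img.shape
    flat = img.reshape(-1, 3) @ T.T          # (1) change of basis by T
    gains = 1.0 / flat.max(axis=0)           # (2) MaxRGB: scale each channel
    flat = flat * gains                      #     so its maximum becomes 1
    out = flat @ np.linalg.inv(T).T          # (3) back to the original sensors
    return out.reshape(h, w, 3)
```

With T equal to the identity this reduces to plain MaxRGB; a good sharpening T makes the per-channel scaling in step (2) a better model of the actual illuminant change.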
In this section we review the different methods used to achieve spectral sharpening. Figure 4 presents a hierarchy indicating when each particular method might be used; in this figure, the different methods are linked to their section in the paper. The choice of a particular method depends on two aspects: the availability of spectral data and the final goal being pursued.
2.1. Perfect Sharpening
Finlayson et al. [15,19] showed that when illuminants are two dimensional (in the sense that two illuminants are enough to define any other illuminant as a linear combination of them) and reflectances three dimensional (in the same sense), or vice versa, spectral sharpening is perfect.
Let us suppose that reflectances are three-dimensional and illuminants two-dimensional (the other case is analogous). In this case any reflectance can be decomposed as

R(λ) = σ1R1(λ) + σ2R2(λ) + σ3R3(λ),

where the Ri(λ) are basis reflectances and the σi are coefficients.
As illuminants are two-dimensional, a second illuminant E2(λ), independent of the canonical one Ec(λ), is needed to span the space. Associated with this illuminant there is also a new lighting matrix Λ2. This second lighting matrix is some linear transform (M) away from the first one, Λ2 = MΛc, that is, M = Λ2[Λc]−1.
As E2(λ) and Ec(λ) span the space, any other lighting matrix will be a combination of Λc and MΛc. For this reason, any color descriptor under an illuminant Ee(λ) = αEc(λ) + βE2(λ) can be written as

ρ̱e = (αΛc + βMΛc)σ̱ = (αI + βM)Λcσ̱.
2.2. Sensor-Based Sharpening
Finlayson et al. proposed a method called sensor-based spectral sharpening. The idea underlying this method is that it is possible to sharpen a sensor from an original set Q(λ) (of dimension n × k) within a wavelength interval [λ1, λ2]. The resulting sensor Q(λ)ṯ, where ṯ is a coefficient vector of dimension k, is found by minimizing the energy of the sensor outside [λ1, λ2] subject to a normalization constraint (Equation (15)).
To solve the problem over the whole spectrum, Finlayson and co-authors defined k intervals, where k is the number of sensors. These intervals do not intersect and together cover the whole spectrum. The kth row of matrix T is then the vector that minimizes Equation (15) for its particular interval (note that ṯ post-multiplies in Equation (15), while T is defined in Equation (8) as a pre-multiplication).
Mathematically, they defined a k × k matrix Λ(α).
They took partial derivatives of Equation (15) with respect to the vector ṯ and equated them to the zero vector to find the stationary values. Combining this derivative with Λ(α) gives the stationary conditions.
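The constrained minimization described above has the structure of a generalized eigenvalue problem: minimize the sensor's energy outside the interval subject to fixed energy inside it. The sketch below illustrates that structure on toy Gaussian sensors (not real camera data); the interval choice and sensor shapes are assumptions for illustration.

```python
import numpy as np

# Sensor-based sharpening for one interval, posed as a generalized
# eigenproblem: minimize t' A_out t subject to t' A_in t = 1.
wl = np.arange(400, 701, 5)
Q = np.stack([np.exp(-((wl - mu) / 50.0) ** 2) for mu in (450, 540, 600)], axis=1)

inside = (wl >= 400) & (wl < 500)        # hypothetical interval for one sensor
A_in = Q[inside].T @ Q[inside]           # sensor energy inside the interval
A_out = Q[~inside].T @ Q[~inside]        # sensor energy outside the interval

# Solve A_out t = mu A_in t via Cholesky whitening of A_in.
L = np.linalg.cholesky(A_in)
Linv = np.linalg.inv(L)
mu, V = np.linalg.eigh(Linv @ A_out @ Linv.T)
t = Linv.T @ V[:, 0]                     # smallest mu -> sharpest combination
sharp = Q @ t                            # sharpened sensor for this interval
```

Repeating this for each of the k non-intersecting intervals yields the k rows of T.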
2.2.1. Sharpening with Positivity
Sensors with negative values are physically impossible. For this reason, Pearson and Yule defined different positive combinations of the color matching functions. Following this trend, Drew and Finlayson proposed methods to obtain sharpening transforms that always give positive values. These techniques are very similar to the previous one, but constraints are added to ensure that all values are positive. The constraints can be based on either the L1 or the L2 norm, and can be imposed either on the sensors themselves or on the sharpening-matrix coefficients. All these methods can be solved by linear or quadratic programming. The variants presented are:
- L1-L1 constrained coefficients
- L1-L1 constrained sensors
- L2-L2 constrained coefficients
- L2-L2 constrained sensors
- L2-L1 constrained coefficients
- L2-L1 constrained sensors
2.3. Adding Information to Improve Sharpening
Information about the illuminants and reflectances that are most representative of natural scenes is available from multiple sources [12,17,22–26]. In this section we review methods that take advantage of this information to search for sharpened sensors.
2.3.1. Data-Based Sharpening
Finlayson et al. proposed a method called data-based sharpening, which uses linear algebra to solve directly for T by minimizing the residual error between a pair of illuminants. To this end, they defined W1 and W2 as 3 × n matrices containing the color values of a set of n different reflectances under two different illuminants E1 and E2.
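A common way to realize this idea is to fit the best 3 × 3 map M taking W1 to W2 by least squares and then choose T to diagonalize M, so that the illuminant change becomes diagonal in the new basis. The sketch below uses synthetic data (the matrix M_true and random reflectance responses are made up) to illustrate that construction.

```python
import numpy as np

# Data-based sharpening sketch on synthetic data: W1, W2 are 3 x n responses
# of the same n reflectances under two illuminants; M is the least-squares
# 3 x 3 map W2 ≈ M W1, and T diagonalizes M.
rng = np.random.default_rng(0)
W1 = rng.random((3, 50))
M_true = np.array([[1.2, 0.1, 0.0],
                   [0.05, 0.9, 0.1],
                   [0.0, 0.1, 0.8]])
W2 = M_true @ W1                          # synthetic second-illuminant data

M = W2 @ np.linalg.pinv(W1)               # least-squares 3 x 3 map
evals, V = np.linalg.eig(M)               # M = V diag(evals) V^-1
T = np.linalg.inv(V)                      # rows of T: the sharpened sensors
D = T @ M @ np.linalg.inv(T)              # diagonal in the sharpened basis
```

In the sharpened basis the illuminant change T M T⁻¹ is (up to numerical error) exactly diagonal for this illuminant pair.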
Some years later, Barnard et al. added flexibility to data-based sharpening by averaging over a set of illuminants (not just one), and by introducing a parameter to prioritize positivity.
2.3.2. Measurement Tensor
Chong et al. introduced a new method which finds a matrix T for a complete set of illuminants at the same time. This method is based on the so-called measurement tensor.
Chong et al. proved that a measurement tensor supports diagonal color constancy if and only if it is a rank-3 tensor. An order-3 tensor τ has rank N if N is the smallest integer such that τ can be decomposed as a sum of N outer products of vectors.
They rewrote Equation (29) in terms of this decomposition, obtaining Equation (30).
In order to solve Equation (30), they used the Trilinear Alternating Least Squares (TALS) method. This is necessary since in most cases the tensor Mkij is not rank 3, and it is therefore necessary to search for the “closest” rank-3 tensor. At each TALS iteration, two of the three matrices are fixed while the free matrix is chosen to minimize the difference between the given data Mkij and the obtained tensor τ in the least-squares sense. The alternating process is repeated until convergence.
This method has the drawback of local convergence, that is, the result obtained can be a local minimum. Also, TALS needs initialization values for two of its three matrices.
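The alternating scheme can be sketched in a few lines. The code below is a generic rank-3 alternating-least-squares sketch on a synthetic tensor (sizes, data, and iteration count are all made up), in the spirit of TALS rather than a reproduction of Chong et al.'s implementation.

```python
import numpy as np

# ALS sketch for a rank-3 decomposition M[k,i,j] ≈ sum_r A[k,r] B[i,r] C[j,r].
# Two factors are fixed while the third is solved by least squares.
rng = np.random.default_rng(1)
K = 8
A0, B0, C0 = rng.random((K, 3)), rng.random((3, 3)), rng.random((3, 3))
M = np.einsum('kr,ir,jr->kij', A0, B0, C0)        # synthetic rank-3 tensor

def recon(A, B, C):
    return np.einsum('kr,ir,jr->kij', A, B, C)

A, B, C = rng.random((K, 3)), rng.random((3, 3)), rng.random((3, 3))
err0 = np.linalg.norm(M - recon(A, B, C))
for _ in range(100):
    KR = np.einsum('ir,jr->ijr', B, C).reshape(-1, 3)            # fix B, C
    A = np.linalg.lstsq(KR, M.reshape(K, -1).T, rcond=None)[0].T
    KR = np.einsum('kr,jr->kjr', A, C).reshape(-1, 3)            # fix A, C
    B = np.linalg.lstsq(KR, M.transpose(1, 0, 2).reshape(3, -1).T, rcond=None)[0].T
    KR = np.einsum('kr,ir->kir', A, B).reshape(-1, 3)            # fix A, B
    C = np.linalg.lstsq(KR, M.transpose(2, 0, 1).reshape(3, -1).T, rcond=None)[0].T
err = np.linalg.norm(M - recon(A, B, C))
```

Each step solves its subproblem exactly, so the residual never increases; as the text notes, however, the scheme can still stall at a local minimum and depends on the initialization.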
2.3.3. Data-Driven Positivity
Drew and Finlayson proposed a data-driven approach for obtaining positive sensors. Following the original sensor-based sharpening method, they divided the spectrum into k intervals, where k is the number of sensors, and for each interval they searched for the sensor Q(λ)ṯ minimizing the corresponding data-driven error.
2.4. Chromatic Adaptation Transforms
All the previous methods were defined to help solve diagonal color constancy. However, the sharpening matrices related to these methods are not the only ones: other sharpening matrices have been derived from psychophysical experiments on chromatic adaptation. Chromatic adaptation matrices also represent sharp sensors, and are obtained from the XYZ color matching functions. They are used to match image appearance to colorimetry when the viewing conditions change. In particular, they are defined to handle corresponding-colors data. Citing Fairchild's book, “Corresponding colors are defined as two stimuli, viewed under different viewing conditions, that match in color appearance”. Examples of chromatic adaptation transforms are the Bradford transform, the Fairchild transform, and the CAT02 transform. For a review of the different transforms, we recommend the book by Fairchild. Here we enumerate some of them.
2.4.1. Von Kries Transform
The von Kries chromatic adaptation transform is usually given by the Hunt-Pointer-Estevez transform. The values of this transform are:

[  0.38971   0.68898  -0.07868 ]
[ -0.22981   1.18340   0.04641 ]
[  0.00000   0.00000   1.00000 ]
2.4.2. Bradford Transform
The Bradford transform was defined by Lam following the results of an experiment on corresponding colors. The data used for the experiment consisted of 58 dyed wool samples under the A and D65 illuminants. The original Bradford transform is non-linear, but the non-linear part is usually neglected. The linear matrix is then

[  0.8951   0.2664  -0.1614 ]
[ -0.7502   1.7135   0.0367 ]
[  0.0389  -0.0685   1.0296 ]
2.4.3. Fairchild Transform
The Fairchild transform was suggested by Mark Fairchild to improve the CIECAM97s color appearance model. It was obtained through a linearization of the previous chromatic adaptation transform.
2.4.4. CAT02 Transform
The Commission Internationale de l'Éclairage (CIE) selected, in the report of its technical committee TC8-01, the CAT02 transform as the preferred chromatic adaptation transform. CAT02 was obtained by optimizing over a wide variety of corresponding-colors data while approximating the non-linear transformation of CIECAM97s. The matrix obtained was

[  0.7328   0.4296  -0.1624 ]
[ -0.7036   1.6975   0.0061 ]
[  0.0030   0.0136   0.9834 ]
2.4.5. Chromatic Adaptation Transforms by Numerical Optimization
Bianco and Schettini defined an objective function with two competing terms, gest and gmed.
The term gest becomes larger for better estimations of corresponding colors according to the Wilcoxon signed-rank test, while the term gmed becomes smaller when the median errors on the corresponding-colors datasets are smaller. Therefore, the goal is to find the transformation T maximizing Equation (36). To this end, the authors applied the Particle Swarm Optimization (PSO) technique.
This first optimization may, however, yield negative values in the resulting sensors. For this reason, they defined another objective function.
2.5. Spherical Sampling
Spherical sampling [31,41] provides a means of discretely sampling points on a sphere and relating them to sensors. The main idea is to consider each row of the sharpening matrix T as a point on the sphere.
Mathematically, let us represent our original camera sensors Q as an m × 3 matrix, where m is the number of wavelength samples and 3 the number of sensors. We perform the reduced singular value decomposition (SVD) of these sensors in order to obtain a basis,

Q = UΣVT,

where U is m × 3 with orthonormal columns.
From this basis U, we can define a new set of sensors (m × 3), different from the original ones, by multiplying the basis by any linear transformation P (3 × 3), which simply consists of 3 sample-point vectors located on the 2-sphere.
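The construction can be sketched directly; the toy Gaussian sensors and the random choice of sample points are illustrative assumptions (in practice the sphere is sampled systematically and each candidate P is scored).

```python
import numpy as np

# Spherical sampling sketch: orthonormal basis U from the reduced SVD of the
# sensors, then candidate sensor sets U @ P, where the columns of P are unit
# vectors, i.e., points on the 2-sphere.
wl = np.arange(400, 701, 5)
Q = np.stack([np.exp(-((wl - mu) / 40.0) ** 2) for mu in (450, 540, 610)], axis=1)

U, _, _ = np.linalg.svd(Q, full_matrices=False)   # m x 3 orthonormal basis

rng = np.random.default_rng(2)
P = rng.standard_normal((3, 3))
P /= np.linalg.norm(P, axis=0)                    # columns: points on the sphere
new_sensors = U @ P                               # m x 3 candidate sensor set
```

Because U has orthonormal columns, every candidate sensor has unit norm, so sweeping points over the sphere sweeps over all normalized sensor combinations, and any error measure can be evaluated at each sample.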
2.6. Measuring Diagonal Color Constancy Effectiveness
The effectiveness of sharpening matrices has usually been evaluated by least squares as follows. Let us denote an observed color as in Equation (5), where E is the illuminant and r the reflectance of the observation. Then, if we select a canonical reflectance s (usually an achromatic reflectance), we can compute, for each illuminant, the ratio between any reflectance's response and the white reflectance's response.
Note that if the transformation T perfectly accomplishes diagonal color constancy, this ratio is independent of the illuminant. Therefore, measuring the disparity of this ratio across illuminants tells us the effectiveness of a method.
Mathematically, if we select a canonical illuminant Ec, we can define the error of the sharpening matrix as the deviation of these ratios from those obtained under Ec (Equation (44)).
This formula has been widely used to compare spectral sharpening methods and was already included in the original work of Finlayson et al. Under this formula, two methods outperform the rest. The first is the Measurement Tensor method; it has an inherent advantage because both the method and the measure are based on least squares, so we are dealing with a least-squares minimization/least-squares evaluation paradigm. The second method that excels is spherical sampling, due to its ability to minimize any measure; spherical sampling presents a further advantage in that it avoids local minima.
The formula presented in Equation (44) is good for a first inspection of how the methods perform at simple diagonal color constancy. Recently, however, further applications of sharpened sensors have been found (see next section) where this measure is no longer appropriate.
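A hedged sketch of this least-squares measure follows; the function signature and data layout are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

# Effectiveness sketch: in the sharpened basis, the ratio of each surface's
# response to the white response should not depend on the illuminant, so the
# squared spread of that ratio across illuminants (relative to a canonical
# one) serves as the sharpening error.
def sharpening_error(T, responses, white, canonical=0):
    """responses: (n_illum, n_refl, 3) camera responses; white: (n_illum, 3)."""
    ratios = (responses @ T.T) / (white @ T.T)[:, None, :]
    return np.sum((ratios - ratios[canonical]) ** 2)
```

If the data follow an exact diagonal model in the basis defined by T, this error is zero; larger values indicate that illuminant change is less well captured by per-channel scaling.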
3. Beyond Diagonal Color Constancy
The original aim of spectral sharpening was to achieve diagonal color constancy. Over the years, spectral sharpening has proven beneficial for a number of purposes, some far removed from the original aim. In this section we review some of these new applications. They are presented graphically in Figure 5 where they are listed in terms of their research field and linked to a particular subsection of this paper.
3.1. Chromatic Adaptation
Section 2.4 shows that chromatic adaptation transforms can be understood as spectral sharpening, therefore it is a straightforward idea to use spectral sharpening techniques for handling corresponding colors data and chromatic adaptation.
Finlayson and Drew showed that the Bradford transform can be obtained through spectral sharpening with a careful selection of intervals. Later, Finlayson and Süsstrunk defined a chromatic adaptation transform following a technique very similar to data-based sharpening, with the additional constraint of preserving the white point. Ciurea and Funt used the same algorithm but applied it to spectral quantities instead of tristimulus values. Finally, Finlayson and Süsstrunk used the spherical sampling technique to derive a set of chromatic adaptation transforms that were equivalent, in terms of error, to the colorimetrically derived ones.
3.2. Color Constancy in Perceptual Spaces
Human perception is not linear, but colorimetric spaces are. In other words, when we work in RGB or XYZ spaces, a given Euclidean distance will be perceived differently depending on the region of the color space in which the points are located.
To overcome this issue, the CIE proposed the CIELab and CIELuv color spaces. Later, Finlayson et al. defined a new color constancy error measure in terms of differences in the CIELab perceptual space. From Equation (5), let us call ρ̱D65 the XYZ color value of a particular patch under the D65 illuminant and ρ̱e the value of the same patch under a different illuminant e. An approximation of the value under the D65 illuminant can be obtained from ρ̱e through a sharpened diagonal transform. The basic idea is to convert both ρ̱D65 and its approximation to CIELab and to minimize the measure Δϵ, that is, the Euclidean distance between the two points, which is considered perceptual.
The matrix T minimizing this measure for a set of reflectances and illuminants is defined as the best matrix for perceptual color constancy. It is found using the spherical sampling technique.
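The perceptual error itself is straightforward to compute. The sketch below implements the standard XYZ-to-CIELab conversion and the Euclidean Δϵ distance; the D65 white point values are the usual normalized tristimulus values.

```python
import numpy as np

# XYZ -> CIELab conversion (D65 white point assumed) and the Delta-E
# (Euclidean) distance used as the perceptual error measure.
WHITE_D65 = np.array([0.9505, 1.0, 1.089])

def f(t):
    d = 6.0 / 29.0
    return np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)

def xyz_to_lab(xyz):
    fx, fy, fz = f(np.asarray(xyz, dtype=float) / WHITE_D65)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

def delta_e(xyz1, xyz2):
    return np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2))
```

Minimizing the mean of `delta_e` over a reflectance/illuminant dataset, as a function of T, is exactly the kind of non-least-squares objective for which spherical sampling is convenient.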
3.3. Relational Color Constancy
Foster and co-authors [46,47] defined color constancy as a ratio-based phenomenon, rather than a pixel-based one. This view, called relational color constancy, assumes that the colors in a scene keep a fixed relation to each other. Relational color constancy is also related to Retinex. On the other hand, from the computer vision side, color ratios have proven useful for particular problems such as object recognition and image indexing.
Finlayson et al. defined a color-ratio stability measure that works on each sensor individually. They defined a vector ḇ (m × 1) containing the responses of a particular sensor i to a set of m reflectances under the canonical illuminant c, and from it the vector of color ratios a̱c.
3.4. A Perceptual-Based Color Space for Image Segmentation and Poisson Editing
Chong et al. defined a perception-based color space with two main goals: (1) it should correlate linearly with perceived distances, that is, distances in this space should correlate with perceived ones; and (2) color displacements in this space should be robust to spectral changes in illumination, that is, if we re-illuminate two colors with the same light, the difference between them should stay the same. To obtain a space with these characteristics, the authors showed that one further assumption is needed: diagonal color constancy must be well modeled in the space.
Therefore, the definition of the color space parametrization F, given a point x̱ in XYZ coordinates, follows.
The authors showed the advantages of this new space in two common image processing tasks: image segmentation and Poisson editing .
3.5. Multispectral Processing Without Spectra
Drew and Finlayson showed that spectral sharpening is useful for reducing the cost of modeling the interaction of light and reflectance. Reducing this cost is important for problems such as ray tracing. For this application they applied spectral sharpening in more than the usual 3 dimensions (red, green, and blue).
First, they defined a set of color signals, from which they obtained an n-dimensional (n = 5, 6, 7) basis B(λ) (via SVD). They sharpened this basis to improve the discernibility of its information, obtaining a new basis B̂(λ) = TB(λ), where T is an n × n sharpening matrix obtained using the L2-L2 sensor-based method with positivity. Then, any light or reflectance can be expressed as a coefficient vector in this new basis.
Later, Finlayson et al. showed that even smaller errors are obtained by using spherical sampling, in terms of the Δϵ measure between the Lab values of the real and reconstructed signals.
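The basis-coefficient representation can be sketched as follows; the synthetic Gaussian spectra and the choice n = 6 are illustrative assumptions (the sharpening step on the basis is omitted here for brevity).

```python
import numpy as np

# Multispectral sketch: build an n-dimensional basis from a set of color
# signals via SVD, then represent any spectrum by its n coefficients instead
# of its full wavelength sampling.
wl = np.arange(400, 701, 5)
rng = np.random.default_rng(3)
signals = np.stack([np.exp(-((wl - mu) / sig) ** 2)
                    for mu, sig in zip(rng.uniform(420, 680, 40),
                                       rng.uniform(20, 80, 40))], axis=1)

n = 6
U, _, _ = np.linalg.svd(signals, full_matrices=False)
B = U[:, :n]                       # m x n orthonormal basis

spectrum = signals[:, 0]
coeffs = B.T @ spectrum            # n coefficients instead of m samples
approx = B @ coeffs                # reconstruction from the low-dim code
```

Products of lights and reflectances can then be approximated directly on the n coefficients, which is what makes the approach attractive for ray tracing.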
3.6. Obtaining an Invariant Image and Its Application to Shadows
Finlayson et al. proved theoretically that it is possible to obtain a 1-dimensional representation of reflectances independent of the illuminant if one assumes a narrow-band camera. From this 1-dimensional representation they obtained the invariant image, where the RGB value of each pixel is substituted by its illuminant-independent representation. In the same work they also showed that the illuminant-independent representation was useful for real cameras (the narrower the camera sensors, the better the results). Following this last point, Drew et al. proved that when using spectrally sharpened sensors, the invariant image was better than with the original ones.
Finlayson et al. later used the invariant image to remove shadows from images. Drew et al. recently showed that the results improve when using spectrally sharpened sensors, although this last algorithm requires user interaction.
3.7. Estimating the Information from Image Colors
Recently, Marin-Franch and Foster presented a method to evaluate the amount of information that can be estimated from the image colors captured by a camera under different illuminations. To this end, they applied different statistics to the color images. They showed that with spectrally sharpened sensors the amount of information that can be extracted is higher than with the original ones.
3.8. Color Names, Unique Hues, Hue Cancellation and Hue Equilibrium
Philipona and O'Regan showed the possibility of extracting a surface reflectance descriptor using only the information reaching our eye. To start with, they defined υs as the accessible information about the light reflected by a given surface s, and u as the accessible information about the incident illuminant.
They repeated this procedure for N different lights. They arranged the N response vectors for the surface in a 3 × N matrix Vs and those for the lights in a 3 × N matrix U. They related these two matrices by finding the best 3 × 3 matrix transform As such that Vs ≈ AsU.
Philipona and O'Regan selected the eigenvalues of As as the surface descriptor, from which they were able to precisely predict color naming, unique hues, hue equilibrium and hue cancellation data.
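The construction of As can be sketched with synthetic data; the matrices below (random lights, a made-up ground-truth map) are purely illustrative.

```python
import numpy as np

# Philipona-O'Regan sketch: V_s holds the 3 x N responses to a surface under
# N lights, U the 3 x N responses to the lights themselves; A_s is the best
# 3 x 3 map with V_s ≈ A_s U, and its eigenvalues are the surface descriptor.
rng = np.random.default_rng(4)
N = 20
U = rng.random((3, N))                     # accessible light information
A_true = np.diag([0.8, 0.5, 0.3]) + 0.05 * rng.standard_normal((3, 3))
V_s = A_true @ U                           # surface responses under the lights

A_s = V_s @ np.linalg.pinv(U)              # least-squares 3 x 3 transform
descriptor = np.linalg.eigvals(A_s)        # eigenvalues as surface descriptor
```

With noise-free synthetic data the least-squares fit recovers the underlying map exactly; with real measurements it is only the best 3 × 3 approximation.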
Building upon Philipona and O'Regan's model, Vazquez-Corral et al. demonstrated that it is possible to find a single transformation T for all surfaces such that the Philipona and O'Regan model can be expressed in sharpened form.
4. Conclusions

Sensor sharpening was developed 20 years ago to achieve computational color constancy using a diagonal model. Over this period, spectral sharpening has proven important for solving other problems unrelated to its original aim.
In this paper, we have explained some of the differences between human and computational color constancy: human color constancy relies on the perception of the colors while computational color constancy relies on the absolute color values of the objects viewed under a canonical illuminant.
We have reviewed different methods used to obtain spectrally sharpened sensors, dividing them into perfect sharpening, sensor-based sharpening, sharpening with data, spherical sampling, and chromatic adaptation transforms.
We have also described different research lines where sharpened sensors have proven useful: chromatic adaptation, color constancy in perceptual spaces, relational color constancy, perceptual-based definition of color spaces, multispectral processing without the use of full spectra, shadow removal, extraction of information from an image, estimation of the color names and unique hues present in the human visual system, and estimation of the hue cancellation and hue equilibrium phenomena.
Acknowledgments: This work was supported by the European Research Council, Starting Grant ref. 306337, and by Spanish grants ref. TIN2011-15954-E and ref. TIN2012-38112.
Author Contributions: Both authors contributed equally to this work.
Conflicts of Interest
The authors declare no conflict of interest.
- Foster, D.H. Color constancy. Vis. Res. 2011, 51, 674–700. [Google Scholar]
- Smithson, H. Sensory, computational and cognitive components of human colour constancy. Philos. Trans. Royal Soc. B Biol. Sci. 2005, 360, 1329–1346. [Google Scholar]
- Brainard, D.H. Color Constancy. In The Visual Neurosciences; Chalupa, L., Werner, J., Eds.; MIT Press: Cambridge, MA, USA, 2003; pp. 948–961. [Google Scholar]
- Helson, H.; Judd, D.B.; Warren, M.H. Object-color changes from daylight to incandescent filament illumination. Illum. Eng. 1952, 47, 221–233. [Google Scholar]
- Buchsbaum, G. A spatial processor model for object colour perception. J. Frankl. Inst. 1980, 310, 1–26. [Google Scholar]
- Land, E.H.; McCann, J.J. Lightness and retinex theory. J. Opt. Soc. Am. 1971, 61, 1–11. [Google Scholar]
- Delahunt, P.B.; Brainard, D.H. Control of chromatic adaptation: Signals from separate cone classes interact. Vis. Res. 2000, 40, 2885–2903. [Google Scholar]
- Barbur, J.L.; Spang, K. Colour constancy and conscious perception of changes of illuminant. Neuropsychologia 2008, 46, 853–863. [Google Scholar]
- Roca-Vila, J.; Parraga, C.A.; Vanrell, M. Chromatic settings and the structural color constancy index. J. Vis. 2013, 13, 1–26. [Google Scholar]
- Vazquez-Corral, J.; Párraga, C.; Vanrell, M.; Baldrich, R. Color constancy algorithms: Psychophysical evaluation on a new dataset. J. Imaging Sci. Technol. 2009, 53, 031105:1–031105:9. [Google Scholar]
- Marimont, D.H.; Wandell, B.A. Linear models of surface and illuminant spectra. JOSA A 1992, 9, 1905–1913. [Google Scholar]
- Judd, D.; MacAdam, D.; Wyszecki, G. Spectral distribution of typical daylight as a function of correlated color temperature. J. Opt. Soc. Am. 1964, 54, 1031–1040.
- Forsyth, D.A. A novel algorithm for color constancy. Int. J. Comput. Vis. 1990, 5, 5–35.
- Worthey, J.A.; Brill, M.H. Heuristic analysis of von Kries color constancy. J. Opt. Soc. Am. A 1986, 3, 1708–1712.
- Finlayson, G.D.; Drew, M.S.; Funt, B.V. Spectral sharpening: Sensor transformations for improved color constancy. J. Opt. Soc. Am. A 1994, 11, 1553–1563.
- Brill, M.H.; West, G. Constancy of Munsell colors under varying daylight conditions. Die Farbe 1982, 30, 65–68.
- Barnard, K.; Martin, L.; Funt, B.; Coath, A. A data set for colour research. Color Res. Appl. 2002, 27, 147–151.
- Foster, D.H.; Amano, K.; Nascimento, S.M.C.; Foster, M.J. Frequency of metamerism in natural scenes. J. Opt. Soc. Am. A 2006, 23, 2359–2372.
- Finlayson, G.D.; Drew, M.S.; Funt, B.V. Color constancy: Enhancing von Kries adaptation via sensor transformation. Proc. SPIE 1993, 1913, 473.
- Pearson, M.; Yule, J. Transformations of color mixture functions without negative portions. J. Color Appear. 1973, 2, 30–35.
- Drew, M.S.; Finlayson, G.D. Spectral sharpening with positivity. J. Opt. Soc. Am. A 2000, 17, 1361–1370.
- Cho, K.; Jang, J.; Hong, K. Adaptive skin-color filter. Pattern Recognit. 2001, 34, 1067–1073.
- Romero, J.; Garcia-Beltran, A.; Hernandez-Andres, J. Linear bases for representation of natural and artificial illuminants. J. Opt. Soc. Am. A 1997, 14, 1007–1014.
- Parkkinen, J.P.S.; Hallikainen, J.; Jaaskelainen, T. Characteristic spectra of Munsell colors. J. Opt. Soc. Am. A 1989, 6, 318–322.
- Vrhel, M.J.; Gershon, R.; Iwan, L. Measurement and analysis of object reflectance spectra. Color Res. Appl. 1994, 19, 4–9.
- Krinov, E.L. Spectral Reflectance Properties of Natural Formations. Natl. Res. Council Can. 1947.
- Golub, G.H.; Van Loan, C.F. Matrix Computations, 3rd ed.; Johns Hopkins University Press: Baltimore, MD, USA, 1996.
- Barnard, K.; Ciurea, F.; Funt, B. Sensor sharpening for computational color constancy. J. Opt. Soc. Am. A 2001, 18, 2728–2743.
- Chong, H.; Gortler, S.; Zickler, T. The von Kries Hypothesis and a Basis for Color Constancy. Proceedings of IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
- Harshman, R.A. Foundations of the PARAFAC procedure: Models and conditions for an “explanatory” multi-modal factor analysis. UCLA Work. Pap. Phon. 1970, 16, 84.
- Finlayson, G.D.; Süsstrunk, S. Spherical Sampling and Color Transformations. Proceedings of the 9th Color Imaging Conference, Scottsdale, AZ, USA, 6–9 November 2001; Volume 9, pp. 321–325.
- Fairchild, M.D. Color Appearance Models, 3rd ed.; John Wiley & Sons: Chichester, UK, 2013.
- Lam, K.M. Metamerism and Colour Constancy. PhD Thesis, University of Bradford, Bradford, UK, 1985.
- Fairchild, M.D. A revision of CIECAM97s for practical applications. Color Res. Appl. 2001, 26, 418–427.
- Bianco, S.; Schettini, R. Two new von Kries based chromatic adaptation transforms found by numerical optimization. Color Res. Appl. 2010, 35, 184–192.
- Sobagaki, H.; Nayatani, Y. Field trials of the CIE chromatic-adaptation transform. Color Res. Appl. 1998, 23, 78–91.
- Kuo, W.G.; Luo, M.R.; Bez, H.E. Various chromatic-adaptation transformations tested using new colour appearance data in textiles. Color Res. Appl. 1995, 20, 313–327.
- Luo, M.; Clarke, A.; Rhodes, P.; Schappo, A.; Scrivener, S.; Tait, C. Quantifying color appearance. Part I. LUTCHI color appearance data. Color Res. Appl. 1991, 16, 166–180.
- Breneman, E.J. Corresponding chromaticities for different states of adaptation to complex visual fields. J. Opt. Soc. Am. A 1987, 4, 1115–1129.
- Braun, K.M.; Fairchild, M.D. Psychophysical generation of matching images for cross-media color reproduction. J. Soc. Inf. Disp. 2000, 8, 33–44.
- Finlayson, G.D.; Vazquez-Corral, J.; Süsstrunk, S.; Vanrell, M. Spectral sharpening by spherical sampling. J. Opt. Soc. Am. A 2012, 29, 1199–1210.
- Finlayson, G.D.; Drew, M.S. Positive Bradford Curves through Sharpening. Proceedings of the 7th Color and Imaging Conference, Scottsdale, AZ, USA, 16–19 November 1999; pp. 227–232.
- Finlayson, G.D.; Süsstrunk, S. Performance of a Chromatic Adaptation Transform based on Spectral Sharpening. Proceedings of the 8th Color Imaging Conference, Scottsdale, AZ, USA, 7–10 November 2000; pp. 49–55.
- Funt, B.; Ciurea, F. Chromatic Adaptation Transforms with Tuned Sharpening. Proceedings of the First European Conference on Color in Graphics, Imaging and Vision, Poitiers, France, 2–5 April 2002; pp. 148–152.
- Wyszecki, G.; Stiles, W. Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1982.
- Foster, D.H.; Nascimento, S.M.C.; Craven, B.J.; Linnell, K.J.; Cornelissen, F.W.; Bremer, E. Four Issues Concerning Color Constancy and Relational Color Constancy. Vis. Res. 1997, 1341–1345.
- Nascimento, S.M.C.; Foster, D.H. Relational color constancy in achromatic and isoluminant surfaces. J. Opt. Soc. Am. A 2000, 225–231.
- Land, E. The retinex. Am. Sci. 1964, 52, 247–264.
- Nayar, S.; Bolle, R. Reflectance based object recognition. Int. J. Comput. Vis. 1996, 17, 219–240.
- Funt, B.; Finlayson, G. Color Constant Color Indexing. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 522–529.
- Chong, H.; Gortler, S.; Zickler, T. A Perception-based Color Space for Illumination-invariant Image Processing. ACM Trans. Graph. 2008, 27.
- Pérez, P.; Gangnet, M.; Blake, A. Poisson image editing. ACM Trans. Graph. 2003, 22, 313–318.
- Drew, M.S.; Finlayson, G.D. Multispectral processing without spectra. J. Opt. Soc. Am. A 2003, 20, 1181–1193.
- Finlayson, G.D.; Hordley, S.D. Color constancy at a pixel. J. Opt. Soc. Am. A 2001, 18, 253–264.
- Drew, M.S.; Chen, C.; Hordley, S.D.; Finlayson, G.D. Sensor Transforms for Invariant Image Enhancement. Proceedings of the 10th Color Imaging Conference, Scottsdale, AZ, USA, 12–15 November 2002; pp. 325–330.
- Finlayson, G.; Hordley, S.; Lu, C.; Drew, M. On the removal of shadows from images. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 59–68.
- Drew, M.S.; Joze, H.R.V. Sharpening from Shadows: Sensor Transforms for Removing Shadows using a Single Image. Proceedings of the 17th Color Imaging Conference, Albuquerque, NM, USA, 2009.
- Marin-Franch, I.; Foster, D.H. Estimating information from image colors: An application to digital cameras and natural scenes. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 78–91.
- Philipona, D.; O'Regan, J. Color naming, unique hues and hue cancellation predicted from singularities in reflection properties. Vis. Neurosci. 2006, 3–4, 331–339.
- Vazquez-Corral, J.; O'Regan, J.K.; Vanrell, M.; Finlayson, G.D. A new spectrally sharpened basis to predict colour naming, unique hues, and hue cancellation. J. Vis. 2012, 12, 1–14.
© 2014 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).