Article

How Good Are RGB Cameras Retrieving Colors of Natural Scenes and Paintings?—A Study Based on Hyperspectral Imaging

by João M. M. Linhares 1,*, José A. R. Monteiro 1, Ana Bailão 2,3, Liliana Cardeira 2, Taisei Kondo 4, Shigeki Nakauchi 4, Marcello Picollo 5, Costanza Cucci 5, Andrea Casini 5, Lorenzo Stefani 5 and Sérgio Miguel Cardoso Nascimento 1
1 Centre of Physics, Gualtar Campus, University of Minho, 4710-057 Braga, Portugal
2 Faculty of Fine Arts, University of Lisbon, 1649-004 Lisboa, Portugal
3 Research Center for Science and Technology of the Arts—Portuguese Catholic University, Regional Centre of Porto, 4169-005 Porto, Portugal
4 Toyohashi University of Technology, Aichi 441-8580, Japan
5 Istituto di Fisica Applicata “Nello Carrara” del Consiglio Nazionale delle Ricerche (IFAC-CNR), Via Madonna del Piano 10, 50019 Firenze, Italy
* Author to whom correspondence should be addressed.
Sensors 2020, 20(21), 6242; https://doi.org/10.3390/s20216242
Submission received: 30 September 2020 / Revised: 26 October 2020 / Accepted: 27 October 2020 / Published: 1 November 2020
(This article belongs to the Special Issue Color & Spectral Sensors)

Abstract
RGB digital cameras compress spectral information into a trichromatic system capable of approximately representing the actual colors of objects. Although RGB cameras follow the same compression philosophy as the human eye, their spectral sensitivities are different. To what extent they provide the same chromatic experiences is still an open question, especially with complex images. We addressed this question by comparing the actual colors derived from spectral imaging with those obtained with RGB cameras. Data from hyperspectral imaging of 50 natural scenes and 89 paintings were used to estimate the chromatic differences between the CIE standard observer (OBS) and an RGB camera. The corresponding color errors were estimated and analyzed in the CIELAB color space (using the color difference formulas ΔE*ab and CIEDE2000) and in the Jzazbz and iCAM06 color spaces. In CIELAB, the most frequent error (using ΔE*ab) was 5 for both paintings and natural scenes, a similarity that held for the other spaces tested. In addition, the distribution of errors across the color space shows that the errors are small in the achromatic region and increase with saturation. Overall, the results indicate that the estimated chromatic errors are close to the acceptance error and that RGB digital cameras are therefore able to produce quite realistic colors of complex scenarios.

1. Introduction

Digital color cameras acquire images by sampling the spatial and spectral information available, much like the human eye does [1]. In what concerns color, the acquisition process compresses the spectral information reflected from the objects into three components, discarding most of the spectral information initially available. The entire process produces colors that are device-dependent, different from camera to camera, and do not map linearly to the device-independent tristimulus values, which represent human visual perception [2]. Thus, cameras produce colors that are an approximation of what we see when looking at those scenarios. On the other hand, hyperspectral images acquire the spatial information without the chromatic compression found in digital cameras, maintaining the spectral properties of the light signal, information relevant to research in vision [3].
It is possible to improve the fidelity of digital cameras computationally using training data, but the efficacy of the training set is very dependent on the type of images to be acquired [2,4] or on the camera settings [5]. If the colors are close to the neutral region, the differences are small [6], but otherwise they are large [7,8]. On the other hand, some of the colors may be made metameric by the process [9]. Another approach is to calibrate the spectral response of a digital camera using interference filters to improve the accuracy of the recorded color [10]. It is also possible to increase the number of spectral filters, or bands, to be registered, making the acquired image multispectral [11] and enabling partial reconstruction of the spectral reflectance of the imaged objects [12]. Even in the specific case of fixed illumination and precise camera calibration, it is still not possible to obtain a perfect reproduction of the colors, although considerable improvements are obtained [13]. Studies based on optimal color stimuli, i.e., optimal colors, show that the gamut of colors for a standard observer is significantly larger than the gamut provided by a digital camera [14,15,16].
Typical digital cameras, however, are trichromatic, registering only three bands in the red, green, and blue regions of the light spectrum, and natural images do not have optimal colors but rather a very specific color distribution and statistics [17,18]. To what extent are typical cameras faithful to the real colors of complex natural scenarios? Although much work has been done characterizing fidelity with simple stimuli [6,7,8,14,15,16], it is unclear how cameras perform with complex imagery, such as complex natural scenes.
We address this question by using hyperspectral images of natural scenes and paintings, which contain the contexts and information required for the analysis presented. The spectral reflectance data in the hyperspectral images were converted into colors as perceived by a human observer (OBS) and by a typical digital camera (RGB). The colors obtained for the OBS and RGB were then compared, the chromatic errors estimated, and the frequency of the errors computed. The color difference formulas ΔE*ab and CIEDE2000 were used in the CIELAB color space, and the Euclidean distance formula was used in the Jzazbz and iCAM06 color spaces to compute the color differences. A total of 190 × 10⁶ pixels was used, spanning from natural colors to colors present in paintings from Portugal, Italy, and Japan.

2. Materials and Methods

Spectral imaging data of paintings and natural scenes acquired using several hyperspectral imaging systems (HSI) were converted into a color space. The color coordinates were computed by converting the spectral data of each image pixel into a color coordinate for the CIE 1931 2° Standard Observer [19] and for a commercial digital RGB camera [10]. The color difference between the colors estimated for both devices was then computed and the frequency of errors determined. Figure 1 represents the workflow used to estimate the color differences.
Starting with the spectral data for each image pixel, in the form of reflectances (step 1), the radiance (step 2) was estimated for the CIE D65 illuminant [18] and for an LED with a CCT of 3176 K, close to that of the tungsten or halogen light sources traditionally considered the default museum illumination [20,21,22,23].
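Steps 1 and 2 amount to an elementwise product of each pixel's reflectance spectrum with the illuminant's spectral power distribution. A minimal sketch in Python (NumPy), with hypothetical array names and a flat spectrum standing in for the real illuminant data:

```python
import numpy as np

# Spectral sampling used in the study: 400 to 720 nm in 10 nm steps (33 bands)
wavelengths = np.arange(400, 730, 10)
n_bands = wavelengths.size

# Hypothetical hyperspectral cube: (rows, cols, bands), reflectances in [0, 1]
reflectance = np.random.rand(4, 4, n_bands)

# Illuminant: relative spectral power distribution at the same wavelengths;
# a flat spectrum here stands in for D65 or the 3176 K LED
illuminant = np.ones(n_bands)

# Step 2: radiance is the per-band product of reflectance and illuminant power
radiance = reflectance * illuminant[np.newaxis, np.newaxis, :]
```

With the real D65 or LED spectral power distributions in `illuminant`, the same product yields the radiance cube used in the subsequent steps.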
The radiance data were then processed into tristimulus values to estimate XYZOBS (step 3) and XYZRGB (step 4). To estimate the XYZOBS values, the CIE 1931 2° Standard Observer was considered [19]; this allows the estimation of the real colors. To estimate the XYZRGB values, the spectral sensitivity of an RGB digital camera (Kodak KAF-10500 image sensor in a Leica M8, Leica Camera Inc., Allendale, NJ 07401, USA) was used, as described elsewhere [10]. This step (step 4a) returns the RGB chromaticity coordinates at the camera level. To convert these coordinates from the camera level to the observer level [14,15,16], a linear transformation was used to convert the RGB values to XYZRGB assuming D65 (step 4b—rgb2xyz.m, MATLAB, MathWorks, Natick, MA, USA). For the LED, the same transformation was assumed, but a Bradford chromatic adaptation was introduced [24,25] (this chromatic adaptation was used because it performs better than the von Kries chromatic adaptation, as described elsewhere [25]). This allows the estimation of the colors as if they were displayed on a typical display monitor. Both sets of tristimulus values were then converted into color spaces with associated color difference formulas (step 5) [19,26,27,28,29]. The difference between the colors obtained from the RGB camera and from the CIE observer was then computed and the frequency of errors estimated (step 6).
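The Bradford chromatic adaptation of step 4b can be sketched as follows. The Bradford matrix is the standard one from the literature; the white points below are illustrative only (the warm white is close to CIE illuminant A, not the exact LED used in the study), and the function name is ours:

```python
import numpy as np

# Standard Bradford matrix mapping XYZ to sharpened cone-like responses
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

def bradford_adapt(xyz, white_src, white_dst):
    """Adapt tristimulus values from a source white to a destination white
    using the linear (von Kries-style) Bradford transform."""
    rho_src = M_BFD @ white_src          # cone-like response of source white
    rho_dst = M_BFD @ white_dst          # cone-like response of destination white
    scale = np.diag(rho_dst / rho_src)   # per-channel gain
    M = np.linalg.inv(M_BFD) @ scale @ M_BFD
    return M @ xyz

# Illustrative whites (Y normalized to 1): D65 and a warm, A-like white
white_d65 = np.array([0.9504, 1.0000, 1.0888])
white_warm = np.array([1.0985, 1.0000, 0.3558])

xyz = np.array([0.4, 0.5, 0.6])
xyz_adapted = bradford_adapt(xyz, white_warm, white_d65)
```

By construction, adapting the source white itself returns the destination white exactly, which is a useful sanity check for any implementation.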
When converting the spectral reflectance to CIELAB color coordinates, some of the pixel colors could not be converted properly due to noise in the data. These pixels, amounting to only 2.5% of the total number of pixels, were removed from the analysis.

2.1. Paintings and Natural Scenes

Figure 2 represents some examples of the images used in this work. Images were derived from hyperspectral imaging assuming the D65 illuminant. Above the line are represented samples of the 89 paintings analyzed, from Portugal, Japan, and Italy. Below the line are represented samples of the 50 natural scenes analyzed, all acquired in the north of Portugal. The final set, composed of 139 images, provided 197,936,935 pixels for analysis.
All images were acquired using hyperspectral imaging and delivered, in each image pixel, the reflectance spectrum of the area imaged. The reflectance spectra ranged from 400 nm to 720 nm in 10 nm steps. Only for the Japanese paintings were the reflectance spectra available from 420 nm to 720 nm in 10 nm steps.
The reflectance spectra of the Portuguese paintings and the natural scenes were acquired using a spatial resolution of 1024 × 1344 pixels. The acquisition procedure and the accuracy of the methodology were described elsewhere [3,30]; the accuracy of the system in retrieving the spectral profile is within 2%, with an average color difference error of about 2.2 in the CIELAB color space.
The reflectance spectra of the Japanese paintings were acquired using a spatial resolution of 1024 × 1932 pixels (Nuance-VIS, Cambridge Research and Instrumentation, Inc., Hopkinton, MA, USA). The reduced bandwidth of these images, from 400 nm to 420 nm, when compared to the paintings from Portugal and Italy, is not expected to impact the overall result: simulations using Portuguese paintings, comparing the same image with and without the mentioned bandwidth, returned color differences of about 0.4 (±0.6) in the CIELAB color space [31].
The Italian paintings were acquired using a line scan over the visible and infrared spectrum, but the spectral and spatial resolution of the data used was adjusted to coincide with that of the other images [32,33,34]. To do so, spatial sampling was performed to reduce the spatial resolution, and the spectral data were trimmed and resampled from 400 nm to 720 nm in 10 nm steps.

2.2. Camera and Standard Observer Spectral Sensitivity

Figure 3 represents the relative spectral profiles of the illuminants, the CIE 1931 2° Standard Observer, and the digital camera RGB sensor spectral sensitivities used in this work. Panel (a) represents the CIE D65 standard illuminant [19] (black line) and the LED with a CCT of 3176 K (LXML-PWW1-0060, Luxeon, Philips Lumileds Lighting Company, San Jose, CA, USA) (grey line); in both cases, the maximum was set to 1. Panel (b) represents the CIE 1931 2° Standard Observer color matching functions [19], with the x̄(λ), ȳ(λ), and z̄(λ) color matching functions in red, green, and blue, respectively, normalized so that the maximum of ȳ(λ) is 1. Panel (c) represents the digital camera spectral sensitivities of the r̄(λ), ḡ(λ), and b̄(λ) channels (red, green, and blue lines, respectively), normalized so that the maximum of ḡ(λ) is 1. Also represented are the cone fundamentals normalized to unity, with the l(λ), m(λ), and s(λ) cones as red, green, and blue dashed lines, respectively [35].

2.3. Color Differences

The CIELAB color space was used as a starting point because it is a perceptual system [19], although with some limitations regarding uniformity [29]. To estimate the chromaticity coordinates of each image pixel, the tristimulus values were converted into L* (the lightness of the color, 0 for black and 100 for white), a* (negative values representing green and positive values representing red), and b* (negative values representing blue and positive values representing yellow), assuming the CIE D65 and the LED illuminants, as described elsewhere [3,19]. The CIELAB color space has an associated color difference formula, ΔE*ab = √((ΔL*)² + (Δa*)² + (Δb*)²), based on the Euclidean distance between two chromaticity coordinates [18], comparing the difference in each color attribute. To overcome some of the non-uniformities of the space, the CIE introduced the CIEDE2000 color difference formula [19,29]. The color differences were computed using both the ΔE*ab and the CIEDE2000 color difference formulas. In the case of the CIEDE2000 formula, the parametric factors kL, kC, and kH were set to one, their default values [36].
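The ΔE*ab computation above is a plain Euclidean distance in CIELAB and can be sketched in a few lines of Python (NumPy); the function name is ours, not from the original pipeline:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB points.
    Works elementwise on arrays of shape (..., 3)."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

# Example: two similar colors differing by (2, 2, -1) in (L*, a*, b*)
print(delta_e_ab([50.0, 10.0, 10.0], [52.0, 12.0, 9.0]))  # -> 3.0
```

The same function applied to whole images (arrays of shape (rows, cols, 3)) returns a per-pixel error map of the kind shown later in Figure 6.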
To estimate the color differences in a more recent and more uniform color space, the chromaticity coordinates and color differences for the Jzazbz color space [28] were also computed. Its uniformity is superior to that of CIELAB, and the color difference between two chromaticity coordinates is likewise estimated by computing the Euclidean distance between the two points. The Jzazbz color space is computationally simple and performs well in terms of uniformity and the estimation of both small and large color differences [37].
The CIELAB and Jzazbz color spaces are optimal for comparing individual colors surrounded by a background. The stimuli analyzed here, however, are complex spatial structures, as found in paintings and natural scenes, and these color spaces may not fully describe the resulting pixel color differences. An image appearance model can provide a better description of the complex color stimuli that such images encapsulate [26]. One such model is the iCAM06 color appearance model [27]. It takes as input the XYZ tristimulus values and takes into account the structure of the image itself as the surrounding environment, tagged with absolute luminance, to predict the degree of chromatic adaptation and the increase in perceived colorfulness and image contrast with luminance (the Hunt and Stevens effects, respectively) [38]. The color of each pixel, influenced by the structure of the image and computed for the standard observer and the digital camera under the two illuminants, was then compared using the Euclidean distance between the two coordinates [26], as other color difference formulas are not recommended [38].

2.4. Error Distribution

To estimate the frequency of errors for each illuminant, color space, and color difference formula, the color difference between the colors obtained from the standard observer and from the digital camera was estimated in each case. The frequency of the color differences was then estimated assuming 30 intervals (K) in each case, as predicted by Sturges' rule [39] with the total number of pixels analyzed (N = 197,936,935) as the number of observations: K = 1 + log₂(N). The distribution of the errors was then fitted with a Gaussian curve to extract the position of the center of the curve as the most frequent error. The Gaussian curve was computed by:
y = y₀ + (A / (w √(π / (4 ln 2)))) exp(−4 ln 2 (x − xc)² / w²),
with y₀ being the baseline of the curve, xc the center of the curve, A the area under the curve, and w the full width at half maximum (FWHM). xc was assumed to be the position of the most frequent error and taken as an indication of the magnitude of the chromatic difference between the colors computed for the standard observer and for the digital camera: the higher this value, the higher the chromatic difference between them.
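The binning and fitting procedure can be sketched as follows, assuming SciPy's curve_fit and synthetic error data in place of the real per-pixel differences; all names are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_fwhm(x, y0, xc, A, w):
    """Gaussian of Equation (1): baseline y0, center xc, area A, FWHM w."""
    return y0 + (A / (w * np.sqrt(np.pi / (4 * np.log(2))))) \
        * np.exp(-4 * np.log(2) * (x - xc) ** 2 / w ** 2)

# Synthetic stand-in for the per-pixel color differences
rng = np.random.default_rng(0)
errors = rng.normal(loc=5.0, scale=1.5, size=100_000)

# Sturges' rule for the number of histogram intervals
K = int(np.ceil(1 + np.log2(errors.size)))
counts, edges = np.histogram(errors, bins=K)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit the distribution; the fitted xc estimates the most frequent error
popt, _ = curve_fit(gaussian_fwhm, centers, counts,
                    p0=[0.0, centers[np.argmax(counts)], counts.sum(), 2.0])
y0, xc, A, w = popt
```

For the synthetic data above, the fitted xc recovers the true mode (5.0) of the generated errors; applied to the real histograms, the same fit yields the ΔE*ab modes reported in the Results.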

2.5. Number of Discernible Colors and Chromatic Volumes

To describe the overall chromatic differences between the natural scenes and the paintings, the number of discernible colors and the color volume encompassing those colors were estimated.
The number of discernible colors was estimated in the CIELAB color space. The volume occupied by all the colors of each image was segmented into unitary cubes, and all cubes containing one or more colors were counted, assuming that colors inside the same cube cannot be discriminated. Counting the number of non-empty cubes provides a good estimate of the number of different colors in each image; further details are described elsewhere [17,40]. To estimate the chromatic diversity across images, the number of discernible colors was also estimated ignoring the L* dimension: the area occupied by the colors was segmented into unitary squares, and the number of discernible colors was estimated by counting the number of non-empty squares, again assuming that colors inside the same square cannot be discriminated.
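The unit-cube counting described above can be sketched in Python (NumPy); the function and the sample data are illustrative, not the authors' original MATLAB code:

```python
import numpy as np

def discernible_colors(lab, ignore_lightness=False):
    """Count non-empty unit cubes (or unit squares if L* is ignored)
    occupied by a set of CIELAB coordinates of shape (n, 3)."""
    coords = lab[:, 1:] if ignore_lightness else lab
    cubes = np.floor(coords).astype(int)   # index of the unit cube per color
    return np.unique(cubes, axis=0).shape[0]

# Example: three colors, two of which fall inside the same unit cube
lab = np.array([[50.2, 10.1,   5.3],
                [50.8, 10.9,   5.9],
                [60.0, -20.0, 30.0]])
print(discernible_colors(lab))  # -> 2
```

Flooring each coordinate assigns every color to a unit cube, so counting unique cube indices counts the non-empty cubes directly.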
For each image and illuminant, the volume (or area, if the L* dimension was ignored) occupied by all the colors of that image was computed by estimating the volume of a convex envelope containing all the colors, using a MATLAB routine (convhull.m, MathWorks, Natick, MA, USA). Both the color volume and the number of discernible colors were needed because the color volume overestimates the gamut, as it includes empty regions, whereas the number of discernible colors considers only the colors actually present. Further statistics were not considered in this work, as described elsewhere [41,42].
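The convex-envelope computation (MATLAB's convhull.m) has a direct SciPy counterpart; a sketch with a hypothetical cloud of CIELAB coordinates:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical cloud of CIELAB coordinates for one image
rng = np.random.default_rng(1)
lab = rng.uniform(low=[20, -40, -40], high=[80, 40, 40], size=(1000, 3))

# Volume of the convex envelope containing all the colors
hull = ConvexHull(lab)
volume = hull.volume

# Ignoring L*: hull of the (a*, b*) projection; for 2-D input,
# ConvexHull.volume is the enclosed area (.area would be the perimeter)
hull_ab = ConvexHull(lab[:, 1:])
area = hull_ab.volume
```

Note the SciPy convention: for a 2-D hull, the `volume` attribute holds the enclosed area, which is the quantity needed when the L* dimension is ignored.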

3. Results

3.1. Colors and Gamuts

Figure 4 represents the estimated color gamut for natural scenes (black color) and paintings (light grey color), computed for the standard observer (OBS, full lines) and the digital camera (RGB, dashed lines) assuming the CIE D65 (panel (a)) and LED (panel (b)) illuminants.
Table 1 represents the descriptive information associated with the OBS and the RGB, for the CIE D65 and LED illuminants.
Both Figure 4 and Table 1 show that the color quantities are similar across illuminants, although the shapes of the volumes or areas are slightly different. For example, the color volume occupied by all the colors of the natural scenes is similar for the CIE D65 and LED illuminants, but the latter has a narrower and more elongated shape along the a* and b* axes, respectively. The data for the OBS are consistent with published computations for natural scenes [17] and paintings [42].
Table 2 summarizes the percentage variations found when comparing the volume and the number of discernible colors (NODC) across all images (considering the colors of the natural scenes and paintings combined). The first and second rows present the percentage variation between OBS and RGB of the NODC and the color volume for the CIE D65 and LED illuminants. The third and fourth rows present the percentage variation between the CIE D65 and LED illuminants of the NODC and color volume for the standard observer (OBS) and the digital camera (RGB). Both NODC and volume were estimated in the CIELAB color space and ignoring the L* dimension (CIE(a*,b*)). It was found that the gamut provided by the RGB (dashed lines in Figure 4) is around 75% smaller than the one provided by the OBS (full lines in Figure 4), and that this value is independent of the illuminant tested. It was also found that for the OBS, when comparing the NODC and volumes across illuminants, the variations are negligible, but for the RGB, the value obtained with CIE D65 is around 10% higher than the one obtained with the LED.
The variations across illuminants seem to be illuminant-independent for the OBS, but illuminant-dependent for the RGB. In both cases, as represented in panels (a) and (b) of Figure 4, the gamut available to the RGB is smaller than that available to the OBS, and its shape and orientation are different. The change in size will limit the acquisition of colors with higher saturation, whereas the change in shape and orientation will impact the hues of the colors under acquisition.

3.2. Color Differences

Figure 5 represents the color errors between the CIE 1931 standard observer (OBS) and the digital camera (RGB). The color differences (ΔE*ab) were estimated in the CIELAB color space assuming the Euclidean distance between the two colors. Data in panel (a) were estimated considering the CIE D65 illuminant, for paintings (open triangles and dark solid line) and natural scenes (open squares and light gray solid line); panel (b) shows the same type of data considering the LED illuminant. Solid lines represent the Gaussian curve fitted to the data as described in Equation (1). The numbers depicted inside the rectangles are the most frequent ΔE*ab values obtained from the position of the maximum of the fitted curve: 5.11 and 4.73 for paintings and natural scenes, respectively.
For the LED illuminant (panel (b)), it was found that the most frequent ΔE*ab was 5.8 for images of paintings (open triangles and dark solid line) and 5.14 for images of natural scenes (open squares and light gray solid line), with the solid lines representing the Gaussian curves fitted to the data.
Table 3 represents the same type of data as extracted from the position of the maximum of a Gaussian function fitted to the data, but considering the different color difference formulas, color spaces, illuminants for paintings, and natural scenes.
The data in Table 3 and the standard errors associated with the estimation of the maximum of each function suggest that neither the illuminant nor the class of image (art painting or natural scene) influences the most frequent ΔE*ab.
As an illustration, Figure 6 shows the color errors for two scenes obtained when comparing the colors obtained for the OBS and the RGB, considering the CIELAB color space and the CIE D65 illuminant. On the left side (a) is a natural scene and on the right side (b) a painting from the Portuguese database. Both images were selected to be representative of the images analyzed and to contain some degree of chromatic saturation. The natural scene was also selected to contain natural and artificial colors. The color bar on the right represents the magnitude of the color errors, with dark blue being no color error, while light yellow represents a color error up to 30 units. Errors were estimated considering the chroma and the lightness of each image pixel. It can be seen that not all colors are affected in the same way and that there is some clustering around uniform surfaces characterized by a particular hue. This is the case for natural scenes and paintings.
Figure 7 represents the distribution of the color errors in the CIELAB color space assuming the CIE D65 illuminant and all the colors in the database. The color bar on the right represents the magnitude of the color errors, with dark blue being no color error, while light yellow represents a color error up to 30 units. Errors were estimated considering the chroma and the lightness of each image pixel. The dark blue at the center of the figure represents less saturated colors, which have smaller color errors. As saturation increases, so do the color errors, since the limited gamut of the RGB camera restricts the colors that can be acquired without error.

4. Discussion and Conclusions

Comparison between the actual colors and those produced by RGB cameras shows that the most frequent error in CIELAB was 5, for both paintings and natural scenes. The value of the most frequent error was similar across paintings and natural scenes regardless of the color space used, as represented in Table 3; the differences in absolute magnitude are due to the specificities of each color space. It is known that the texture of an image can change the accepted color difference tolerance [43]. Considering different types of texture, the tolerance for acceptance of color differences ΔE*ab is, on average, higher than 8, while if the CIEDE2000 is considered for homogeneous samples, the tolerance is around 5 in chroma alone and around 3 in hue and luminance [43]. If postcard reproductions of art paintings are considered, the reproduction is judged to represent the painting even with a difference in chroma or color saturation (ΔC*ab) of about 8, depending on the illuminant in use [44]. The magnitude of ΔC*ab is comparable to that of ΔE*ab, even if the former estimates color differences based on saturation while the latter is based on color coordinates. These tolerance values are higher than the values found here, which in turn are higher than the threshold for chromatic difference detection in complex images of about 2.2 ΔE*ab units [45]; this may indicate that the RGB reproductions would be accepted by a real observer.
The distribution of errors across the color space shows that the errors are very small in the achromatic region and increase with saturation. This is expected given the gamut limitations of RGB cameras. Nevertheless, the visual effects on most natural images may be subtle, as most natural colors have low saturation.
Other studies using a limited sample of real colors estimated color differences of 2 to 4 ΔE in the CIELAB color space [2] after optimizing the computation of the RGB data with a training set similar to the colors to be measured [4]. The errors found here are somewhat higher (around 5 ΔE), but, unlike in those studies, no data selection was performed for the color difference test. In addition, no attempt was made to create a model, different from the one already published [2], that would use training sets to improve the colorimetric outcome of digital cameras. When compared to other unconstrained acquisitions of colors under different illuminants [7,8,46], the results found here are better, mainly because the input data are exactly the same for the human observer and the digital camera: no variations exist in the acquisition setup between the two acquisition methods, and a uniform illuminant distribution was assumed for both illuminants in both methods.
The CIELAB color space is known to be optimized for the CIE D65 illuminant. Nevertheless, using other illuminants in this color space to estimate variations and comparisons in color difference is acceptable, since no absolute values are to be estimated, only relative ones.
Camera-specific settings were not considered in this analysis; it was assumed that the spectral stimulus was received by the digital camera RGB sensor with the same optical components and camera settings that were in place at the time of the sensor spectral characterization. Extending the results found here to all possible camera settings and configurations should be done carefully, as each impacts the camera characterization differently [5]. Moreover, regarding the possible optimization of the color difference formulas to decrease the chromatic errors found [47], all color spaces and color difference formulas were used with standard parametric factors. The CIEDE2000 color difference formula accepts parametrizations, e.g., kL = 2, as suggested elsewhere [47]; the present computations are, however, almost unaffected relative to the default parametrization (±0.6 CIEDE2000 units on average, across all conditions). Possible improvements to the workflow of the RGB post-processing pipeline or the CCD array layout [48,49,50] were also not considered, and tristimulus values were directly converted into sRGB values.
The use of complex images also increases the complexity of comparing colors, as a color is never seen in isolation but is always surrounded by other colors. Color difference formulas tend to compare colors individually, independently of their surroundings. The iCAM06 color space was used to overcome this limitation, but it was found to have a limited impact on the overall result.
Only images of natural scenes and paintings were considered in this work. Cameras can also be used in extreme environments, such as underwater, in dentistry [51,52], or in unmanned aircraft systems [53], but such environments are outside the scope of this paper.
Overall, the results presented here seem to indicate that the estimated chromatic errors are within the acceptance error for complex images, although higher than the discrimination threshold assumed for such images.

Author Contributions

Conceptualization, J.M.M.L. and S.M.C.N.; Data curation, J.M.M.L., J.A.R.M., A.B., L.C., T.K., S.N., M.P., C.C., A.C., L.S. and S.M.C.N.; Formal analysis, J.M.M.L. and S.M.C.N.; Investigation, J.M.M.L.; Methodology, J.M.M.L., S.N. and S.M.C.N.; Writing—original draft, J.M.M.L., J.A.R.M. and S.M.C.N.; Writing—review & editing, J.M.M.L., J.A.R.M., A.B., L.C., T.K., S.N., M.P., C.C., A.C., L.S. and S.M.C.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Portuguese Foundation for Science and Technology (FCT) in the framework of the Strategic Funding UIDB/04650/2020.

Acknowledgments

The authors would like to acknowledge the direct contribution of Nobuyo Okada and Kanako Maruchi from the Toyohashi City Museum of Art and History, Japan, for their role in collecting hyperspectral images of Japanese paintings.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

CCT: Correlated color temperature
CCD: Charge-coupled device
CIE: Commission Internationale de l'Éclairage
OBS: Data related to tristimulus values; device-independent
RGB: Data related to a digital camera; device-dependent
sRGB: Standard Red Green Blue color space
STD: Standard deviation

Figure 1. Workflow used to estimate the color differences between the human eye and a RGB digital camera. From a set of reflectance functions derived from spectral imaging (1), the colors processed by the digital camera (4, orange lines) and those seen by the standard observer (3, blue lines) were estimated and compared in the same color space (5 and 6).
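The workflow in Figure 1 can be sketched in a few lines of numpy. The sketch below is illustrative only: the spectral curves are synthetic Gaussian placeholders, not the CIE 1931 color-matching functions or a real camera's sensitivities, and the 3×3 color-correction matrix is fitted on three toy training reflectances. It shows the shape of the computation: the same reflectance is integrated against observer and camera sensitivities, both results are mapped to CIELAB, and the discrepancy is reported as ΔE*ab.

```python
import numpy as np

wl = np.arange(400, 701, 10)            # wavelength samples, nm
reflectance = 0.2 + 0.5 * np.exp(-((wl - 600) / 60.0) ** 2)  # toy reddish surface
illuminant = np.ones_like(wl, float)    # toy equal-energy illuminant

def gaussian(mu, sigma):
    return np.exp(-((wl - mu) / sigma) ** 2)

# Placeholder colour-matching functions / sensor sensitivities
# (NOT the CIE 1931 functions or a real camera's curves).
cmf = np.stack([gaussian(600, 50), gaussian(550, 50), gaussian(450, 40)])
cam = np.stack([gaussian(610, 60), gaussian(540, 55), gaussian(465, 45)])

def integrate(sens):
    """Tristimulus-like integration of sensor x illuminant x reflectance."""
    return sens @ (illuminant * reflectance)

def to_lab(xyz, white):
    """CIE XYZ -> CIELAB (cube root with the standard linear toe)."""
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

white = cmf @ illuminant                      # observer white point
obs_lab = to_lab(integrate(cmf), white)       # colours "seen" by the observer

# Camera pipeline: raw responses mapped towards the observer space by a
# 3x3 matrix fitted (least squares) on three toy training reflectances.
train = np.stack([0.1 + 0.8 * gaussian(mu, 80) for mu in (450, 550, 650)])
obs_train = cmf @ (illuminant * train).T      # 3 x 3 target tristimulus values
cam_train = cam @ (illuminant * train).T      # 3 x 3 camera responses
M = obs_train @ np.linalg.pinv(cam_train)     # colour-correction matrix
rgb_lab = to_lab(M @ integrate(cam), white)   # colours "seen" by the camera

delta_e = np.linalg.norm(obs_lab - rgb_lab)   # CIELAB Delta E*ab
```

Because the camera curves are not linear combinations of the observer curves, the 3×3 correction cannot be exact for arbitrary reflectances, which is the residual error the paper quantifies.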
Figure 2. Representative images of the 89 paintings (top row) and 50 natural scenes (bottom row) analyzed in this work.
Figure 3. Relative spectral power distributions of the illuminants used (a), the CIE 1931 standard observer color-matching functions (b), and the digital camera sensor sensitivities (c), compared with the human cone sensitivities (dashed lines).
Figure 4. Color gamuts in the CIE (a*, b*) color space generated by all the colors of the paintings (light grey lines) and natural scenes (dark grey lines), for the CIE 1931 standard observer (OBS, solid lines) and the digital camera (RGB, dashed lines). Data for the CIE D65 (panel (a)) and LED (panel (b)) illuminants.
Figure 5. Color difference errors (ΔE*ab) in the CIELAB color space between the colors obtained for the CIE 1931 standard observer (OBS) and the digital camera (RGB), computed for paintings (open triangles) and natural scenes (open squares) under the CIE D65 (panel (a)) and LED (panel (b)) illuminants. Solid lines represent the best-fitting Gaussian functions. The numbers depicted correspond to the most frequent color error (ΔE*ab), i.e., the peak of the fitted function.
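The peak-finding step described in the Figure 5 caption can be sketched as follows. The data here are synthetic stand-ins (not the paper's error distributions), and the Gaussian is fitted via Caruana's method: for a Gaussian, ln(counts) is a parabola in the bin centers, and the parabola's vertex gives the most frequent error.

```python
import numpy as np

rng = np.random.default_rng(0)
errors = np.abs(rng.normal(5.0, 2.0, 10_000))    # stand-in Delta E*ab values

counts, edges = np.histogram(errors, bins=32)    # error histogram
centers = 0.5 * (edges[:-1] + edges[1:])

mask = counts > 0                                # log needs positive counts
c2, c1, c0 = np.polyfit(centers[mask], np.log(counts[mask]), 2)
mode = -c1 / (2 * c2)                            # vertex of the log-parabola
```

A nonlinear least-squares fit of the Gaussian itself (e.g., `scipy.optimize.curve_fit`) would give essentially the same peak; the log-parabola version keeps the sketch dependency-free.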
Figure 6. Color errors estimated between the RGB and the OBS in the CIELAB color space for a natural image (a) and a Portuguese painting (b). The original colors of the images are represented in Figure 2. The color bar represents the magnitude of the color errors (dark blue for no error, light yellow for a color error of 30).
Figure 7. Color error distribution across the CIELAB color space for all natural scenes and paintings under the CIE D65 illuminant. The color bar on the right represents the magnitude of the color errors.
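A Figure 7-style map can be produced by binning the (a*, b*) plane and averaging the error in each cell. The sketch below uses synthetic values, not the paper's pixels; the toy errors are deliberately constructed to grow with chroma, mimicking the qualitative finding that errors are small in the achromatic region and increase with saturation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
a = rng.normal(0, 30, n)                             # a* coordinates
b = rng.normal(0, 30, n)                             # b* coordinates
err = 0.1 * np.hypot(a, b) + rng.normal(0, 0.5, n)   # toy Delta E, grows with chroma

# Weighted histogram / plain histogram gives the per-cell mean error.
sums, xe, ye = np.histogram2d(a, b, bins=20, weights=err)
counts, _, _ = np.histogram2d(a, b, bins=[xe, ye])
with np.errstate(invalid="ignore"):
    mean_err = sums / counts                         # NaN where a cell is empty
```

Rendering `mean_err` with an image plot (low values dark, high values light) reproduces the qualitative pattern of Figure 7: a dark center near the achromatic axis and brighter cells toward the gamut boundary.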
Table 1. Average color volume and corresponding number of discernible colors (NODC) estimated across paintings and natural scenes, for the CIE 1931 standard observer (OBS) and the digital camera (RGB), under the CIE D65 (D65) and LED (LED) illuminants, in the CIELAB color space. Also represented are the gamut area ignoring the L* dimension and the corresponding number of discernible colors. Numbers in brackets represent the standard deviation. All values are ×10³.
                         Paintings (×10³)                  Natural Scenes (×10³)
                         D65             LED               D65             LED
Volume        OBS   160.9 (±126.4)   162.0 (±126.3)   439.2 (±284.5)   441.5 (±284.8)
              RGB    25.0 (±18.2)     22.9 (±17.6)     69.6 (±45.4)     61.0 (±39.2)
NODC Volume   OBS    43.4 (±32.5)     43.4 (±32.4)     92.2 (±48.2)     94.4 (±49.5)
              RGB    10.6 (±7.2)       9.5 (±6.9)      25.7 (±14.9)     22.8 (±12.9)
Area          OBS     4.5 (±2.7)       4.6 (±2.8)       9.3 (±5.0)       9.3 (±4.8)
              RGB     0.7 (±0.4)       0.7 (±0.4)       1.5 (±0.9)       1.3 (±0.7)
NODC Area     OBS     2.8 (±1.7)       2.8 (±1.7)       5.1 (±2.4)       5.2 (±2.4)
              RGB     0.6 (±0.3)       0.5 (±0.3)       1.1 (±0.5)       1.0 (±0.5)
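One common way to estimate an NODC figure like those in Table 1 is to segment CIELAB into unit cubes, on the assumption that colors falling in the same cube are roughly indistinguishable, and count the occupied cubes. The sketch below uses random placeholder data, not the paper's pixels; dropping L* gives the corresponding (a*, b*) area count.

```python
import numpy as np

rng = np.random.default_rng(1)
lab = np.column_stack([
    rng.uniform(0, 100, 50_000),   # L* (placeholder pixel values)
    rng.normal(0, 25, 50_000),     # a*
    rng.normal(0, 25, 50_000),     # b*
])

def nodc(points, cube=1.0):
    """Number of occupied cube-sized cells spanned by the points."""
    cells = np.floor(points / cube).astype(int)   # quantize to the grid
    return len(np.unique(cells, axis=0))          # count distinct cells

volume_nodc = nodc(lab)          # discernible colours in (L*, a*, b*)
area_nodc = nodc(lab[:, 1:])     # discernible chromaticities in (a*, b*)
```

The cube size acts as the just-noticeable-difference threshold, so the count scales strongly with that choice; a unit cube in CIELAB is a conventional approximation.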
Table 2. Percentage variations of the data represented in the first two columns of Table 1, considering all images analyzed and combining the data from paintings and natural scenes.
                        NODC (%)                  Color Volume (%)
                        CIELAB    CIE(a*,b*)      CIELAB    CIE(a*,b*)
OBS vs. RGB    D65       26.3       20.9           15.7       16.2
               LED       23.2       18.3           13.9       14.3
D65 vs. LED    OBS      101.3      100.8          100.9      100.2
               RGB       89.5       88.3           89.2       88.1
Table 3. Color errors for the color difference formulas and color spaces considered, under the CIE D65 and LED illuminants, for paintings and natural scenes. Error numbers represent the standard error associated with the fit.
                   Paintings                       Natural Scenes
                   D65            LED              D65            LED
CIELAB             5.1 (±0.5)     5.8 (±0.5)       4.7 (±0.4)     5.1 (±0.4)
CIEDE2000          5.7 (±0.2)     6.2 (±0.1)       5.9 (±0.1)     6.1 (±0.2)
Jzazbz (×10⁻³)    34.5 (±1.5)    35.2 (±1.3)      74.0 (±0.0)    73.3 (±0.6)
iCAM               2.0 (±0.1)     1.8 (±0.1)       1.0 (±0.2)     0.9 (±0.1)

Linhares, J.M.M.; Monteiro, J.A.R.; Bailão, A.; Cardeira, L.; Kondo, T.; Nakauchi, S.; Picollo, M.; Cucci, C.; Casini, A.; Stefani, L.; et al. How Good Are RGB Cameras Retrieving Colors of Natural Scenes and Paintings?—A Study Based on Hyperspectral Imaging. Sensors 2020, 20, 6242. https://doi.org/10.3390/s20216242
