Article

Auxiliary Reference Samples for Extrapolating Spectral Reflectance from Camera RGB Signals

1 Department of Electrophysics, National Yang Ming Chiao Tung University, No. 1001 University Road, Hsinchu 300, Taiwan
2 Department of Electrical Engineering, Yuan Ze University, No. 135 Yuan-Tung Road, Taoyuan 320, Taiwan
3 Department of Photonics, National Yang Ming Chiao Tung University, No. 1001 University Road, Hsinchu 300, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2022, 22(13), 4923; https://doi.org/10.3390/s22134923
Submission received: 6 June 2022 / Revised: 26 June 2022 / Accepted: 28 June 2022 / Published: 29 June 2022
(This article belongs to the Section Optical Sensors)

Abstract

Surface spectral reflectance is useful for color reproduction. In this study, the reconstruction of spectral reflectance using a conventional camera was investigated. The spectrum reconstruction error could be reduced by interpolating camera RGB signals, in contrast to methods based on basis spectra, such as principal component analysis (PCA). The disadvantage of the interpolation method is that it cannot interpolate samples outside the convex hull of the reference samples in the RGB signal space. An interpolation method utilizing auxiliary reference samples (ARSs) to extrapolate the outside samples is proposed in this paper. The ARSs were created using reference samples and color filters. The convex hull of the reference samples and ARSs was expanded to enclose the outside samples for extrapolation. A commercially available camera was taken as an example. The results show that with the proposed method, the extrapolation error was smaller than that of the computationally time-consuming weighted PCA method. Low-cost and fast spectral reflectance recovery can thus be achieved with a conventional camera.

1. Introduction

Surface spectral reflectance is useful for the color reproduction of industrial products and artwork [1,2,3]. It can be measured directly with an imaging spectrometer [4,5]. However, direct spectral measurements are expensive. Indirect measurements using spectrum reconstruction techniques are therefore of interest [6,7,8,9,10,11,12,13,14]. The spectrum of an image pixel is reconstructed from the channel outputs of the image acquisition device. Since no diffractive optical imaging system is required, the indirect method has the advantages of low cost and fast detection. Using a conventional camera therefore enables more field applications, e.g., smartphone cameras serving as sensors to measure surface spectral reflectance.
Orthogonal projection [6], principal component analysis (PCA) [7,8], Gaussian mixture [9], non-negative matrix transformation (NMT) [10,11] and interpolation [11,12,13,14] have been proposed for spectrum reconstruction. Indirect methods that require training spectra, such as orthogonal projection, PCA and NMT, are also known as learning-based methods. The training spectra are used to derive basis spectra, and the reconstructed spectrum is a linear combination of the basis spectra. The coefficients of the basis spectra can be solved from simultaneous equations describing the channel outputs of the imaging device. The accuracy of the reconstructed spectrum increases with the number of channels. For a conventional tricolor camera, where only three channels are available, the accuracy of the reconstructed spectrum might not be high enough.
The interpolation method uses reference spectra to reconstruct a spectrum interpolated from input values, e.g., XYZ tristimulus values [11,12,13] and RGB signal values [14]. Due to the use of a look-up table (LUT) to store the reference spectra, this method is often referred to as the LUT method. The authors of [11,12,13,14] showed that the LUT method has the advantage of being more accurate than the PCA method, where the reference spectra for interpolation are the same as the training spectra for the PCA method. A spectrum is interpolated from its neighboring reference samples using the LUT method, whereas the basis spectra of the PCA method are derived from all training samples. The weighted PCA (wPCA) method was proposed to enhance the contribution of neighboring training samples in the CIELAB color space to the basis spectra [8,14]. Since basis spectra depend on the sample to be reconstructed, the computation time of the wPCA method is significantly increased compared to the conventional PCA method and the LUT method.
Learning-based methods require camera spectral sensitivities to formulate the simultaneous equations describing the channel outputs. Camera spectral sensitivities can be measured directly using a monochromator [15], but accurate measurement is expensive. Without a monochromator, the camera spectral sensitivities can be estimated by solving a quadratic minimization problem [15,16]. An alternative approach is to estimate the combined spectral sensitivities of the camera and light source so that the reflectance spectrum can be calculated from the camera signals [17,18,19,20,21]. The estimation errors of the spectral sensitivities cause additional errors in the reconstructed spectrum.
The LUT method does not require the spectral sensitivity functions because the reconstructed spectrum is interpolated from the measured spectra of the reference samples. However, if the sample lies outside the convex hull of the reference samples in the RGB signal space, it cannot be interpolated. Such a sample can be called an outside sample to distinguish it from the samples inside the convex hull. In the literature, modified PCA and NMT methods have been used to extrapolate outside samples [11,12,13,14], although spectral sensitivity functions are required. The authors of [11,12,13] considered interpolation in the XYZ color space, where spectral sensitivity functions were equivalently assumed to be the CIE color matching functions (CMFs). The authors of [14] considered interpolation in the RGB signal space, where the camera was assumed to follow the sRGB standard so that RGB signal values and XYZ tristimulus values can be converted to each other via the well-known sRGB matrix. This hypothetical camera is called the sRGB camera, and its spectral sensitivities are presented in [22].
The authors of [11,13] used 2D interpolation and 3D interpolation, respectively, to extrapolate outside samples from reference samples. It is not guaranteed that the 2D interpolation method will extrapolate all outside samples [11]. The authors of [14] extrapolated outside samples from reference samples and additional reference samples; the latter are called model-based metameric spectra of extreme points (MMSEPs). The extreme points are the eight corners of the RGB signal cube: black, white, red, green, blue, yellow, cyan and magenta, corresponding to the signal vectors [R, G, B]T = [0, 0, 0]T, [1, 1, 1]T, [1, 0, 0]T, [0, 1, 0]T, [0, 0, 1]T, [1, 1, 0]T, [0, 1, 1]T and [1, 0, 1]T, respectively, where the maximum values of the signals are normalized to 1.0; the superscript T denotes the transpose operation. The metameric spectra are the reflection spectra from eight surfaces under D65 illumination. The spectral reflectance of the eight surfaces was constructed using the sRGB camera, i.e., the MMSEPs were equivalently constructed using the spectral sensitivities of the sRGB camera.
Inspired by [14], we propose the use of auxiliary reference samples (ARSs) for extrapolating outside samples with the LUT method. ARSs are high-saturation samples created using appropriately chosen color filters and color chips. Each color filter is mounted in turn on the spectroradiometer to measure the spectrum of the filtered reflection light from a color chip, and the corresponding RGB signal values are recorded by a camera mounted with the same color filter. The color filters and color chips are chosen so that the outside samples are enclosed by the reference samples and ARSs in the RGB signal space for extrapolation. Numerical studies of the proposed method were carried out. A comparison of the LUT method utilizing ARSs, the LUT method utilizing MMSEP samples, the wPCA method and other methods is presented. For ease of reference, Table 1 lists the abbreviations defined herein in alphabetical order.

2. Materials and Assessment Metrics

A Nikon D5100 camera was taken as an example. The spectral sensitivities of its red, green and blue signal channels, measured with a monochromator, are shown in Figure 1a [16]. The average wavelengths of the spectral sensitivities of the red, green and blue channels are denoted as λCamR, λCamG and λCamB, respectively, and are called the channel wavelengths for simplicity. The full widths at half maximum (FWHM) of the spectral sensitivities of the red, green and blue channels are denoted as ΔλCamR, ΔλCamG and ΔλCamB, respectively. The spectral specifications of the camera are shown in Table 2.
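As an illustration, the channel wavelengths and FWHM values can be computed from tabulated sensitivity curves as in the MATLAB sketch below, assuming the channel wavelength is the sensitivity-weighted average wavelength; the variable names and the sensitivity matrix S are illustrative, not taken from the paper's code.

    % Sketch: channel wavelengths and FWHMs from sampled sensitivities.
    % lam is Mw-by-1 (nm); S is Mw-by-3, one column per R, G, B channel.
    lam = (400:10:700)';                     % sampling grid of Section 2
    lamCh  = zeros(1,3);                     % lamCamR, lamCamG, lamCamB
    fwhmCh = zeros(1,3);                     % corresponding FWHMs
    for k = 1:3
        s = S(:,k);
        lamCh(k) = sum(lam .* s) / sum(s);   % weighted average wavelength
        idx = find(s >= 0.5 * max(s));       % samples above half maximum
        fwhmCh(k) = lam(idx(end)) - lam(idx(1));  % coarse FWHM on the 10 nm
                                                  % grid, assuming unimodal s
    end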
The reflectance spectra of matt Munsell color chips measured by a Perkin-Elmer lambda 9 spectroradiometer were adopted for preparing reference samples and test samples [23]. The available measurement data in [23] comprise 1269 records, but 2 of them are duplicates, namely, record 1242 (annotation 10RP 7/2) and record 1249 (annotation 10RP 7/4). Therefore, 1268 reflectance spectra were used in this paper. The light source was assumed to be illuminant D65. A total of 202 color chips were selected for the preparation of the reference samples.
A spectrum can be represented by the vector S = [S(λ1), S(λ2), …, S(λMw)]T, where S(λj) is the spectral amplitude at wavelength λj, λj = λ1 + (j − 1)Δλ is the j-th sampling wavelength, j = 1, 2, …, Mw, and Δλ is the wavelength sampling interval; Mw is the number of sampling wavelengths. In this paper, spectra were sampled from 400 nm to 700 nm in steps of 10 nm, i.e., λ1 = 400 nm, Δλ = 10 nm and Mw = 31. The spectrum vector of the light reflected from a color chip is SReflection = SRef ∘ SD65, where SRef and SD65 are the spectral reflectance vector of the color chip and the spectrum vector of the illuminant D65, respectively; the operator ∘ is the Hadamard product, also known as the element-wise product. Figure 2a shows the color points of the reflection light from the 1268 Munsell color chips in the CIELAB color space, where the 202 reference samples and 1066 test samples are shown as red and blue dots, respectively. Figure 2b–d are the same as Figure 2a, but with different viewing angles. The CIE 1931 CMFs were adopted in this paper.
The measured signal of a color channel is UMeas = SReflectionT SU, where U = R, G and B for the red, green and blue channels, respectively; SReflection is the reflection spectrum vector; SU is the spectral sensitivity of the channel. For the white balance condition, the channel signals are normalized to U = UMeas/UMeasD65, where U = R, G and B; UMeasD65 is the measured signal when SReflection = SWhite ∘ SD65; SWhite is the spectral reflectance of a white card. The white side of a Kodak gray card was taken as the white card; its spectral reflectance is approximately 0.9 in the visible wavelength range. The vector representing the camera signals is designated as C = [R, G, B]T. Figure 3a shows the color points of the reflection spectra from the Munsell color chips in the RGB signal space using the Nikon D5100, where the 202 reference samples and 1066 test samples are shown as red and blue dots, respectively. There are 62 samples in the convex hull of the 202 reference samples. Figure 3b shows the convex hull.
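The signal model above can be written compactly. The sketch below assumes SRef, SWhite and SD65 are Mw-by-1 vectors and Scam is an Mw-by-3 matrix holding the three sensitivity curves; all names are illustrative.

    % Sketch of the camera signal model of Section 2.
    Sreflection = Sref .* Sd65;              % Hadamard product (element-wise)
    Umeas    = Scam' * Sreflection;          % raw [R; G; B] signals
    UmeasD65 = Scam' * (Swhite .* Sd65);     % signals of the white card
    C = Umeas ./ UmeasD65;                   % white-balanced signal vector C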
For a given test signal vector, the LUT method to reconstruct the reflection light spectrum is shown in Section 3.1. The reconstructed spectrum vector is designated as SRec. The reconstructed spectral reflectance vector SRefRec was calculated as the reflection spectrum vector SRec divided by the D65 spectrum vector SD65 element by element.
The reconstructed spectral reflectance vector SRefRec was assessed by the root mean square (RMS) error ERef = (|SRefRec − SRef|2/Mw)1/2 and the goodness-of-fit coefficient GFC = |SRefRecT SRef|/(|SRefRec| |SRef|), where |·| stands for the norm operation. The color difference between SRec and SReflection was assessed using CIEDE2000 ΔE00. The spectral comparison index (SCI) was also used to assess the reconstructed results [24,25]. The parameter k in the formula for calculating SCI shown in [24] was set to 1.0. For the values of ERef, ΔE00 and SCI, the smaller, the better. The statistics of these three metrics were calculated: the mean μ, standard deviation σ, 50th percentile PC50, 98th percentile PC98 and maximum MAX. For the value of GFC, the larger, the better. The statistics of GFC were calculated: the mean μ, standard deviation σ, 50th percentile PC50 and minimum MIN. The fit of the spectral curve shape is good if GFC > 0.99 [14,26]. The ratio of samples with GFC > 0.99 was calculated, which is called the ratio of good fit and designated as RGF99.
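A minimal sketch of the assessment metrics follows; gfcAll is a hypothetical vector collecting the GFC values of a sample set.

    % Sketch of the assessment metrics of Section 2.
    Mw   = numel(Sref);
    Eref = sqrt(sum((SrefRec - Sref).^2) / Mw);                 % RMS error
    GFC  = abs(SrefRec' * Sref) / (norm(SrefRec) * norm(Sref)); % goodness of fit
    RGF99 = mean(gfcAll > 0.99);             % ratio of good fit over a set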

3. Spectrum Reconstruction Method

3.1. Reflection Spectrum Reconstruction

This subsection describes the LUT method to reconstruct the reflection spectrum vector SRec from the test signal vector C [12]. The color points of the reference samples in the RGB signal space are not regularly distributed, as shown in Figure 3a. Therefore, a scattered data interpolation method was required. Several interpolation methods were surveyed in [27]. Among them, linear tetrahedral interpolation was adopted for its simplicity and computational efficiency [11,12,13]. A tetrahedral mesh in the RGB signal space was generated from the reference signal vectors. Note that the tetrahedrization is not unique, and the interpolation result depends on the tetrahedrization [12,27]. All programs in this paper were implemented in MATLAB (version R2021a, MathWorks). The tetrahedral mesh used for interpolation was generated by the MATLAB function “delaunay” [11,13,14]. Three steps were used to interpolate a test sample.
  • STEP 1: Locate the tetrahedron.
The tetrahedron that encloses the color point Q of the vector C in the RGB signal space was located. Figure 4 shows the tetrahedron, with vertices Q1, Q2, Q3 and Q4 enclosing the color point Q. A database or look-up table storing the tetrahedral mesh can be used to save processing time in locating the tetrahedron [13]. This paper used the MATLAB function “pointLocation” to locate the tetrahedron, which is a related function of “delaunay”.
  • STEP 2: Calculate interpolation coefficients.
The reference signal vectors of Q1, Q2, Q3 and Q4 were assumed to be C1, C2, C3 and C4, respectively. It is required that C is the linear combination of the reference signal vectors, and
C = α1C1 + α2C2 + α3C3 + α4C4,  (1a)
1 = α1 + α2 + α3 + α4,  (1b)
where the coefficients α1, α2, α3 and α4 are weighting factors. Equation (1a) comprises three scalar equations because the signal vectors are 3D. Equation (1b) guarantees that Q is inside the tetrahedron if 0 < α1, α2, α3, α4 < 1. The four coefficients in Equation (1a,b) were solved.
  • STEP 3: Calculate the reconstructed reflection spectrum.
The reconstructed reflection spectrum vector is
SRec = α1S1 + α2S2 + α3S3 + α4S4,  (2)
where Sj is the reference spectrum vector corresponding to the vertex Qj for j = 1, 2, 3 and 4. If the reconstructed spectrum has negative values, they are set to zero.
If both sides of Equation (2) are multiplied by the spectral sensitivity function of a signal channel and integrated over the wavelength, we obtain Equation (1a) corresponding to the signal channel. However, the interpolation is an inverse problem. The reconstructed spectrum vector SRec is one of numerous metameric spectrum vectors corresponding to the test signal vector C. Equation (1a,b) are the four constraints for finding a metameric spectrum vector. The difference between the target spectral reflectance vector SRef and the reconstructed spectral reflectance vector SRefRec calculated from the metameric spectrum vector SRec was assessed using the metrics defined in Section 2.
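The three steps can be implemented compactly in MATLAB. The sketch below uses the delaunayTriangulation class, whose pointLocation method returns the enclosing tetrahedron together with the barycentric coordinates of the query point; these coordinates are exactly the weights α1–α4 satisfying Equation (1a,b). Variable names are illustrative.

    % Sketch of the three-step LUT interpolation of Section 3.1.
    % Cref is N-by-3 (reference signal vectors); Smat is Mw-by-N
    % (reference spectra); c is the 1-by-3 test signal vector.
    DT = delaunayTriangulation(Cref);        % tetrahedral mesh
    [t, alpha] = pointLocation(DT, c);       % STEPs 1 and 2: tetrahedron
                                             % index and weights of Eq. (1a,b)
    if ~isnan(t)                             % c is inside the convex hull
        verts = DT.ConnectivityList(t, :);   % indices of Q1, Q2, Q3, Q4
        Srec  = Smat(:, verts) * alpha';     % STEP 3: Eq. (2)
        Srec(Srec < 0) = 0;                  % clip negative spectral values
    end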

3.2. Spectral Reflectance Reconstruction Workflow

Figure 5 shows a flow chart for reconstructing the spectral reflectance vector SRefRec from the test signal vector C. The convex hull of the tetrahedral mesh of the reference signal vectors is denoted as HR. An example of HR is shown in Figure 3b. The convex hull of the tetrahedral mesh of the reference signal vectors and ARS vectors is denoted as HRA. The method for creating ARSs is shown in Section 4.
If the test signal vector is inside HR, its reflection spectrum vector SRec is interpolated from the reference samples using the three-step procedure in Section 3.1. If the test signal vector is outside HR and inside HRA, its reflection spectrum vector SRec is extrapolated from the expanded reference sample set including the reference samples and ARSs using the three-step procedure in Section 3.1. If the test signal vector is outside HRA, its reflection spectrum vector must be extrapolated using the other method. Therefore, ARSs must be chosen to guarantee that the test signal vectors of interest are inside HRA.
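A sketch of this decision logic follows; DTR and DTRA are triangulations of the reference samples alone and of the reference samples plus ARSs, and reconstruct is a hypothetical helper implementing STEPs 2 and 3 of Section 3.1.

    % Sketch of the workflow of Figure 5 (illustrative names).
    [tR, aR] = pointLocation(DTR, c);
    if ~isnan(tR)                            % inside HR: interpolate
        Srec = reconstruct(DTR, SmatR, tR, aR);
    else
        [tRA, aRA] = pointLocation(DTRA, c);
        if ~isnan(tRA)                       % outside HR, inside HRA:
            Srec = reconstruct(DTRA, SmatRA, tRA, aRA);   % extrapolate
        else                                 % outside HRA: not recoverable
            error('Sample outside HRA; choose ARSs that enclose it.');
        end
    end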

4. Auxiliary Reference Samples

4.1. ARS Creation

The ARSs were described in the last paragraph of Section 1. Figure 6 shows a five-step flow chart for creating a set of ARSs. The description below uses the Nikon D5100 as an example. The example color filters were optimized for the Nikon D5100.
  • STEP 1: Select reflective surfaces.
The Munsell color chips in the convex hull in CIELAB and the white card were used as reflective surfaces to create ARSs. The number of color chips in the convex hull in Figure 2a is N = 62. The reference samples in the convex hull in Figure 3b are the reflection samples from the same 62 color chips. Samples of the white point and black point are default ARSs. The white ARS is the white point sample, whose signal vector is [1, 1, 1]T. The illuminant is D65. The spectrum of the black ARS is zero, and its signal vector is [0, 0, 0]T.
  • STEP 2: Select color filters.
Appropriate cyan, yellow and magenta filters were selected. They were used to filter reflection light to increase color saturation. Figure 7a shows the spectral transmittance of an example filter set. Given a reference sample of a signal vector [R, G, B]T, its signal vector becomes [Rf, Gf, Bf]T after filtering. If the filter is cyan, the ratios Bf/Rf and Gf/Rf will be larger than B/R and G/R, respectively, and a more saturated sample is created. If the reference sample is magenta (G << B, R), a highly saturated blue sample is created. The issue of color filter selection is discussed further in Section 4.2.
  • STEP 3: Measure raw ARSs.
The color filters selected in STEP 2 were sequentially mounted on the camera and the spectroradiometer for measurement. For each color filter, the RGB signal values and spectrum of the reflection light from the reflective surfaces selected in STEP 1 were measured. There were 3N + 3 = 189 measured samples using the color filters, called the raw ARSs. In this paper, the RGB signal values and spectra were calculated according to Section 2 for numerical study. Figure 8a shows an example of the raw ARSs in the RGB signal space, where the color filters in Figure 7a are used. In Figure 8a, the raw ARSs from the color chips and the white card are shown as red dots and crosses, respectively.
  • STEP 4: Create amplified raw ARSs.
From Figure 3a and Figure 8a, we can see that the 3N raw ARSs from the color chips cannot properly enclose the test samples due to attenuation of the reflection light passing through the filter. The spectra and RGB signal vectors of these raw ARSs were therefore multiplied by an amplification factor γ greater than 1.0. If an RGB signal vector is multiplied by an amplification factor, its color point in the RGB signal space moves away from the black point along the direction from the black point to its original color point. The value of γ could be made sample-dependent to reduce the extrapolation error. For simplicity, γ = 1.5 was empirically set for all these samples except for those corresponding to the color chips of L* = 90 in Figure 2a–d (Munsell value 9), which were multiplied by a smaller factor, γ = 1.185, so that they remained within the RGB signal cube. All amplified raw ARSs are shown as blue dots in Figure 8a. A code sketch combining this step with STEP 5 is given after STEP 5.
  • STEP 5: Select the ARSs.
The ARSs are the samples in the convex hull of the amplified raw ARSs, the three raw ARSs from the white card, the white ARS and the black ARS. The latter two are shown as green crosses in Figure 8a. The convex hull is denoted as HA and is shown as a blue mesh in Figure 8b. The total number of ARSs in HA is 53. The convex hull of the reference samples HR defined in Section 3.2 is also shown as a red mesh in Figure 8b for comparison. From Figure 8b, HR is completely inside HA. In this case, HA and HRA are the same because the color points of the reference samples are well enclosed by HA. It seems unnecessary to measure 189 samples in STEP 3 as there are only 53 samples in HA. However, before building HA, we do not know which samples will be in HA.
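A sketch of STEPs 4 and 5 follows, assuming Craw (3N-by-3) and Sraw (Mw-by-3N) hold the signal vectors and spectra of the raw ARSs from the color chips; isLightChip, CwhiteFiltered, SwhiteFiltered, SwhiteD65 and the other names are illustrative.

    % STEP 4: amplify the raw ARSs from the color chips.
    gamma = 1.5 * ones(size(Craw, 1), 1);
    gamma(isLightChip) = 1.185;          % chips with Munsell value 9 (L* = 90)
    Camp = Craw .* gamma;                % color points move away from black
    Samp = Sraw .* gamma';               % spectra amplified accordingly
    % STEP 5: keep the amplified samples on the convex hull HA, plus the
    % raw ARSs from the white card and the white and black ARSs.
    K = convhull(Camp(:,1), Camp(:,2), Camp(:,3));
    keep = unique(K(:));                 % vertex indices of the hull
    Cars = [Camp(keep, :); CwhiteFiltered; 1 1 1; 0 0 0];
    Sars = [Samp(:, keep), SwhiteFiltered, SwhiteD65, zeros(size(Sraw,1), 1)];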

4.2. Color Filter Design Method

Only cyan, yellow and magenta filters were selected in STEP 2 and used in STEP 3. The extrapolation error can be further reduced by using more color filters, e.g., additional red, green and blue filters. This paper limited the number of color filters to three because (1) the use of cyan, yellow and magenta color filters enables all outside samples to be extrapolated for the cases under consideration, and (2) the cost of creating ARSs increases with the number of color filters. The spectral transmittance functions of the considered cyan, yellow and magenta filters are based on a super-Gaussian function. Such color filters can be absorption filters or interference filters [28]. Stock color filters with various specifications are available on the market.
The cyan filter is a short-pass optical filter whose spectral transmittance is assumed to be
fC(λ) = fC0 exp{−[(λ − λS)/σC]^aC},  (3)
where fC0 is the maximum transmittance; λS = 400 nm; σC and aC are parameters determined by the filter edge wavelength λC and edge width ΔλC. Figure 9a shows the definitions of λC and ΔλC. The edge wavelength λC is the wavelength of the half-maximum transmittance, i.e., fC(λC) = 0.5 fC0. The edge width ΔλC is the wavelength interval from 0.1 fC0 to 0.9 fC0. From Equation (3) and the definitions of λC and ΔλC, the following equation can be derived:
[(ln 10)^(1/aC) − (−ln 0.9)^(1/aC)] (λC − λS) = (ln 2)^(1/aC) ΔλC,  (4)
Given the values of λC and ΔλC, the value of aC can be solved from Equation (4) using the MATLAB function “fzero”. After the value of aC is solved, the parameter σC can be easily calculated by
σC = (λC − λS)/(ln 2)^(1/aC).  (5)
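As a concrete illustration, Equation (4) can be solved for aC with “fzero” on a bracketing interval, after which Equation (5) gives σC; the numerical values below are illustrative, not those of Table 3.

    % Sketch: solve Eq. (4) for aC, then evaluate Eq. (5).
    lamS = 400; lamC = 585; dLamC = 30;      % example edge specs (nm)
    g = @(a) (log(10).^(1./a) - (-log(0.9)).^(1./a)) .* (lamC - lamS) ...
             - log(2).^(1./a) .* dLamC;
    aC = fzero(g, [1 100]);                  % g changes sign on [1, 100]
    sigmaC = (lamC - lamS) ./ log(2).^(1./aC);   % Eq. (5)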
The yellow filter is a long-pass color filter whose spectral transmittance is assumed to be
fY(λ) = fY0 exp{−[(λL − λ)/σY]^aY},  (6)
where fY0 is the maximum transmittance; λL = 700 nm; σY and aY are parameters determined by the filter edge wavelength λY and edge width ΔλY. Figure 9a also shows the definitions of λY and ΔλY, which are similar to those of λC and ΔλC. The parameters σY and aY can be solved from the same equations as Equations (4) and (5), except that aC, (λCλS) and ΔλC in Equation (4) are replaced by aY, (λLλY) and ΔλY, respectively, and σC, aC and (λCλS) in Equation (5) are replaced by σY, aY and (λLλY), respectively.
The magenta filter is a notch optical filter whose spectral transmittance is assumed to be
fM(λ) = fM0 {1 − exp{−[(λ − λM)/σM]^aM}},  (7)
where fM0 is the maximum transmittance; λM is the central wavelength; σM and aM are parameters determined by the wavelength separation ΔλSep and edge width ΔλM. Figure 9b shows the definitions of ΔλSep and ΔλM. The wavelength separation ΔλSep = λMLλMS, where λML and λMS are the edge wavelengths at the long-wavelength side and the short-wavelength side of the filter spectral transmittance, respectively. The central wavelength λM = (λMS + λML)/2. The definition of the edge width ΔλM is similar to ΔλC. From Equation (7) and the definitions of ΔλSep and ΔλM, the following equation can be derived:
[(ln 10)^(1/aM) − (−ln 0.9)^(1/aM)] ΔλSep = 2 (ln 2)^(1/aM) ΔλM.  (8)
Given the values of ΔλSep and ΔλM, the value of aM can be solved from Equation (8). After the value of aM is solved, the parameter σM can be easily calculated by
σM = ΔλSep/[2 (ln 2)^(1/aM)].  (9)
The wavelengths λMS and λML were taken as the specifications of the magenta filter, where λMS = λM − ΔλSep/2 and λML = λM + ΔλSep/2.
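Once the shape parameters of the three filters have been solved in the same way, their transmittance curves can be evaluated on the wavelength grid; the sketch below assumes sigmaY, aY, lamM, sigmaM and aM were obtained analogously and uses abs() for the magenta filter so that non-integer exponents stay real.

    % Sketch: transmittance of the three filters, Eqs. (3), (6) and (7).
    lam = (400:10:700)';
    fC = 0.96 * exp(-((lam - 400) ./ sigmaC).^aC);           % cyan, short-pass
    fY = 0.96 * exp(-((700 - lam) ./ sigmaY).^aY);           % yellow, long-pass
    fM = 0.96 * (1 - exp(-(abs(lam - lamM) ./ sigmaM).^aM)); % magenta, notch
    SrawARS = Sreflection .* fC;     % e.g., a raw ARS filtered by the cyan filter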
For simplicity, the maximum transmittance of the filters was set to 0.96, i.e., fY0 = fC0 = fM0 = 0.96; all edge widths for the three filters were set to 30 nm, i.e., ΔλY = ΔλC = ΔλM = 30 nm. The four edge wavelengths λC, λY, λMS and λML were optimized for the minimum mean ERef of the outside samples under the constraints
λCamG ≤ λC ≤ λCamR + ΔλCamR/2,  (10a)
λCamB ≤ λY ≤ λCamG + ΔλCamG/2,  (10b)
λCamB ≤ λMS ≤ λCamG,  (10c)
λCamG ≤ λML ≤ λCamR + ΔλCamR/2.  (10d)
The constraints are empirical, but reasonable. For example, from Equation (10a), the edge wavelength λC of the cyan filter should lie between the wavelengths of the green and red camera channels so that the spectrum amplitudes at short and medium wavelengths are less attenuated. Half the bandwidth of the spectral sensitivity was used as the tolerance for the upper bound in Equation (10a,b,d).
The optimization process consisted of two steps. The first step was to optimize the four edge wavelengths using the Bayesian optimization function “bayesopt” implemented in MATLAB. The objective function is the mean ERef of outside samples. Since Bayesian optimization does not use the derivative of the objective function to find the minimum objective value [29], the second step used the MATLAB optimization function “lsqnonlin” to further optimize the four edge wavelengths, where the result of the first step was taken as the initial trial solution. The function “lsqnonlin” was used because the optimization problem is nonlinear. The optimized edge wavelengths λC, λY, λMS and λML are denoted as λCopt, λYopt, λMSopt and λMLopt, respectively.
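The two-step optimization can be sketched as follows; meanEref is a hypothetical function that builds the filters from the four edge wavelengths, creates the ARSs and returns the mean ERef of the outside samples, and the variable bounds are illustrative stand-ins for Equation (10a–d).

    % Sketch of the two-step edge-wavelength optimization of Section 4.2.
    vars = [optimizableVariable('lamC',  [530 640]), ...
            optimizableVariable('lamY',  [465 560]), ...
            optimizableVariable('lamMS', [465 530]), ...
            optimizableVariable('lamML', [530 640])];
    obj  = @(t) meanEref([t.lamC, t.lamY, t.lamMS, t.lamML]);
    res  = bayesopt(obj, vars, 'MaxObjectiveEvaluations', 60);  % step 1
    x0   = table2array(res.XAtMinObjective);                    % best point
    lb = [530 465 465 530]; ub = [640 560 530 640];
    xOpt = lsqnonlin(@(x) meanEref(x), x0, lb, ub);  % step 2: refine; the
                                                     % scalar error is the residual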
The edge wavelengths of the optimized filters for the Nikon D5100 are shown in Table 3. The spectral transmittance of the optimized color filters is shown in Figure 7a. The convex hulls HA and HRA using the optimized filters are shown as blue meshes in Figure 8b and Figure 10, respectively, though they are the same for the case considered.

5. Results and Discussion

In this section, in addition to the Nikon D5100, an artificial camera is used as a second camera for comparison. The spectral sensitivities of the artificial camera were assumed to be the CIE 1931 CMFs as shown in Figure 1b. It is called the CMF camera, whose spectral specifications are shown in Table 2. The camera used in the numerical results below is the Nikon D5100 unless otherwise specified.

5.1. Interpolation

Table 4 shows the assessment metric statistics for the test samples using the LUT method with the Nikon D5100 and CMF cameras. As can be seen from Table 4, out of the 1066 test samples, about 860 were inside samples that could be interpolated. The table also shows the extrapolation results for about 200 outside samples, which are discussed in the next subsection. The assessment metric statistics for the inside samples using the two cameras were about the same except for the color difference ΔE00. If the spectral sensitivities of a camera differ from the CMFs, the color difference ΔE00 is non-zero by Equations (1a) and (2). Although not zero, the color differences of most inside samples using the Nikon D5100 were small.
The spectrum reconstructions of the test samples using the PCA and wPCA methods are considered for comparison. In the wPCA method, the i-th training sample was multiplied by a weighting factor 1/(ΔEi + s), where ΔEi is the color difference between the test sample and the i-th training sample in CIELAB; s is a small-valued constant to avoid division by zero [8]. Weighted training samples were used to derive basis spectra. A camera device model was used to convert RGB signal values into tristimulus values for calculating ΔEi. A third-order root polynomial regression model (RPRM) was employed and trained using the reference samples [30]. The accuracy of the RPRM was slightly higher than that of the polynomial regression model in this case.
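For reference, a minimal sketch of the wPCA reconstruction of one test sample is given below, assuming three basis spectra; Strain (Mw-by-Nt training spectra), dE (Nt-by-1 color differences from the RPRM device model), M (3-by-Mw channel sensitivities) and c (3-by-1 test signal vector) are illustrative names.

    % Sketch of the wPCA reconstruction of one test sample.
    s = 1e-3;                                % small constant, avoids division by 0
    w = 1 ./ (dE + s);                       % weights emphasizing near neighbors
    [U, ~, ~] = svd(Strain .* w', 'econ');   % PCA of the weighted training set
    B = U(:, 1:3);                           % first three basis spectra
    coef = (M * B) \ c;                      % solve the three channel equations
    Srec = B * coef;                         % reconstructed reflection spectrum
    Srec(Srec < 0) = 0;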
The PCA and wPCA methods were used to reconstruct all test samples using the spectral sensitivities of the Nikon D5100 in Figure 1a. Table 5 shows the assessment metric statistics for the test samples using the PCA and wPCA methods, where the inside samples and outside samples were the same as those using the LUT method. The spectrum reconstruction error using the wPCA method was apparently smaller than that using the PCA method, as expected. Comparing Table 4 with Table 5, we can see that the LUT method outperformed the wPCA method except for GFC. Figure 11a,b show the ERef and GFC histograms for the 864 inside samples, respectively, where the cases using the LUT, PCA and wPCA methods are shown.
The LUT method is at least two orders of magnitude faster than reconstruction methods using basis spectra that emphasize the relationship between the test and training samples [13]. In this work, the ratio of the computation time required by the LUT method to that required by the wPCA method was 1:80.2, where samples were reconstructed from their signal vector C to the spectral reflectance vector SRefRec using MATLAB on the Windows 10 platform.

5.2. Extrapolation Using the LUT Method Utilizing Optimized ARSs

The color filters optimized as in Section 4.2 were used to create the ARSs in this subsection. The edge wavelengths of the optimized color filters for the Nikon D5100 and CMF cameras are shown in Table 3. The filter spectral transmittance for the Nikon D5100 and CMF cameras is shown in Figure 7a,b, respectively. From Table 4, there were 202 and 203 outside samples for the Nikon D5100 and CMF cameras, respectively. The assessment metric statistics for the outside samples are shown in Table 4. As expected, the mean assessment metrics for the outside samples were worse than those for the inside samples. The assessment metric statistics for the two cameras were about the same except for the color difference ΔE00. The assessment metric statistics for all samples are also shown in Table 4.
For the Nikon D5100, there were 98, 79, 22 and 3 outside samples that referenced 1, 2, 3 and 4 ARSs, respectively. Figure 12a–f show the reconstructed spectra SRec using the LUT method for the 2.5G 7/6, 10P 7/8, 2.5R 4/12, 2.5Y 9/4, 10BG 4/8 and 5PB 4/12 color chips, respectively, where their target spectrum SReflection and neighboring reference spectra are also shown. The case in Figure 12a is an interpolation example for comparison. The cases in Figure 12b–f are extrapolation examples. For the cases in Figure 12b–f, the numbers of referenced ARSs are 1, 2, 2, 3 and 4, respectively. The ARS neighborhoods are indicated in the figures. Neighborhood 3 is the black ARS for the case in Figure 12e. The spectrum was well recovered for the case in Figure 12f, although four ARSs were referenced. The reconstructed spectral reflectance SRefRec for the cases in Figure 12a–f is shown in Figure 13a–f, respectively. RMS errors ERef = 0.004, 0.0223, 0.014, 0.0165, 0.0159 and 0.0149 for the cases in Figure 13a–f, respectively.

5.3. Comparison of Extrapolations Using Different Methods

5.3.1. PCA and wPCA Methods

Table 5 shows the assessment metric statistics for the 202 outside samples of the Nikon D5100 using the PCA and wPCA methods. As expected, the spectrum reconstruction error using the wPCA method was apparently smaller than that using the PCA method. Comparing Table 4 with Table 5, we can see that the extrapolation using the LUT method utilizing optimized ARSs outperformed the wPCA method. Note that the ratio of good fit RGF99 was reduced from 0.9803 for the inside samples to 0.7772 for the outside samples when using the wPCA method, i.e., 22.28% of the outside samples had a GFC of less than 0.99. When using the LUT method utilizing optimized ARSs, RGF99 was slightly reduced from 0.9375 for the inside samples to 0.9257 for the outside samples. It was found that when ARSs were included in the training samples of the wPCA method, the extrapolation error did not decrease, but increased further.
Figure 13a–f also show the reconstructed spectral reflectance using the PCA and wPCA methods. The RMS errors ERef = 0.0135, 0.0246, 0.1142, 0.0352, 0.0393 and 0.0366 for the cases using the PCA method in Figure 13a–f, respectively. The RMS errors ERef = 0.0095, 0.0221, 0.0794, 0.03, 0.0297 and 0.0392 for the cases using the wPCA method in Figure 13a–f, respectively.

5.3.2. Nearest Tetrahedron 3D Extrapolation Method

By definition, an outside sample cannot be enclosed by any tetrahedron in the tetrahedral mesh of reference samples in the RGB signal space. However, it can be extrapolated from the nearest tetrahedron [13]. The reference samples at the tetrahedron vertices are used to extrapolate the outside sample, using the same method as interpolation, except that the coefficients in Equation (1a,b) are not restricted to be between 0 and 1. The nearest tetrahedron can be located according to its circumcenter, in-center or centroid. For example, if the locating rule is based on the circumcenter, the nearest tetrahedron is the one with the shortest Euclidean distance between its circumcenter and the outside sample.
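A sketch of the circumcenter-based rule (NTCC) follows; DT and Smat are the triangulation and spectra of the reference samples, and c is the 3-by-1 outside signal vector (illustrative names).

    % Sketch of nearest tetrahedron 3D extrapolation with the NTCC rule.
    CC = circumcenter(DT);                   % circumcenters of all tetrahedra
    [~, t] = min(vecnorm(CC - c', 2, 2));    % nearest tetrahedron; use
                                             % incenter(DT) for the NTIC rule
    v = DT.ConnectivityList(t, :);           % its four vertex indices
    A = [DT.Points(v, :)'; ones(1, 4)];      % Eq. (1a,b) as a 4-by-4 system
    alpha = A \ [c; 1];                      % weights, not restricted to [0, 1]
    Srec  = Smat(:, v) * alpha;              % Eq. (2)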
Table 6 shows the assessment metric statistics for the 202 outside samples of the Nikon D5100 using the nearest tetrahedron 3D extrapolation method. The methods in the table using the locating rules based on the circumcenter, in-center and centroid are designated as NTCC, NTIC and NTCE, respectively. The results using the LUT method utilizing optimized ARSs and the wPCA method shown in Table 4 and Table 5 are also shown in Table 6 for comparison. As can be seen from Table 6, the mean extrapolation error using the NTCC method was much less than that of the NTIC and NTCE methods but was larger than that of the LUT method utilizing optimized ARSs.

5.3.3. LUT Method Utilizing MMSEP Samples

The extrapolation using the LUT method utilizing MMSEP samples is considered. As described in Section 1, eight spectral reflectance functions were constructed so that their color points under D65 illumination were as close as possible to the corners of the RGB signal cube. The eight MMSEP samples were included in the reference sample dataset for extrapolation. The white MMSEP sample and black MMSEP sample are the same as the white ARS and black ARS, respectively. The spectral reflectance functions of the other six MMSEPs are based on the sigmoid function with parameters optimized for the minimum objective function defined in [14].
Table 7 shows the optimized RGB signal values of the MMSEP samples for the Nikon D5100. The RGB signal values were not close to their target values, except for the white and black MMSEP samples. Taking the green MMSEP sample as an example, if the value of its G signal is close to 1.0, its R and B signals will not be small in value because the spectral sensitivities overlap, as shown in Figure 1a. The convex hull of the eight MMSEP samples is shown as a red mesh in Figure 10a,b. The convex hull HRA is smaller in size than the MMSEP convex hull but extends more in the yellow and purple regions. The convex hull HRA can be expanded further if red, green and blue filters are used. The LUT method utilizing MMSEP samples is equivalent to the LUT method utilizing only eight ARSs, where six color filters are used and the white card is the only reflective surface.
The assessment metric statistics for the 202 outside samples of the Nikon D5100 using the LUT method utilizing MMSEP samples are shown in Table 6. There were 148, 46 and 8 outside samples that referenced 1, 2 and 3 MMSEP samples, respectively. As can be seen from Table 6, using the optimized ARSs improved the assessment metrics compared to using MMSEP samples. Figure 13b–f also show the reconstructed spectral reflectance using the LUT method utilizing MMSEP samples, where the RMS errors ERef = 0.0198, 0.0846, 0.0331, 0.0148 and 0.038, respectively. For the cases in Figure 13b–f, the numbers of referenced MMSEP samples are 1, 3, 1, 2 and 1, respectively.
Figure 14a,b show the ERef and GFC histograms for the 202 outside samples, respectively, where the cases using the LUT method utilizing optimized ARSs, the wPCA method and the LUT method utilizing MMSEP samples are shown. For extrapolation, the LUT method utilizing optimized ARSs outperformed the wPCA method and the LUT method utilizing MMSEP samples.

5.4. Effect of Filter Edge Wavelengths

The spectral sensitivities of a camera can be measured or estimated as described in Section 1. If the measurements or estimates are accurate, the color filters can be optimized using the same method as in Section 4.2. On the other hand, estimates of the sensitivity spectral shapes may not be very accurate, while estimates of the channel wavelengths may still be accurate enough for color filter design. It is found that the specifications of the optimized color filters are related to the channel wavelengths. For the Nikon D5100, from Table 2 and Table 3, the optimized edge wavelengths λCopt, λYopt, λMSopt and λMLopt are about λCamR − ΔλCamR/4, (λCamC + λCamG)/2, λCamC and λCamR, respectively, where λCamC is an average wavelength and
λCamC = (λCamB + λCamG)/2.  (11)
These approximate relationships are roughly valid for the CMF camera. Since the channel wavelength is the average wavelength of the spectral sensitivity, the specifications of appropriate color filters could be estimated from less accurate estimates of the spectral sensitivities.
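These rules of thumb translate directly into code; the sketch below assumes the channel wavelengths and ΔλCamR are taken from Table 2, with illustrative variable names.

    % Sketch: estimating filter edge wavelengths from channel wavelengths.
    lamCamC = (lamCamB + lamCamG) / 2;       % Eq. (11)
    lamCamY = (lamCamG + lamCamR) / 2;       % Eq. (13), defined below
    lamC_est  = lamCamR - dLamCamR / 4;      % cyan edge wavelength
    lamY_est  = (lamCamC + lamCamG) / 2;     % yellow edge wavelength
    lamMS_est = lamCamC;                     % magenta short-wavelength edge
    lamML_est = lamCamR;                     % magenta long-wavelength edge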
In this subsection, the use of color filters that are not optimized is studied. Deviations of filter specifications from their optimized values are expressed as δλC = λCλCopt, δλY = λYλYopt, δλMS = λMSλMSopt and δλML = λMLλMLopt. It is found that the appropriate range of edge wavelengths can be roughly estimated as
λCamY ≤ λC ≤ λCopt,  (12a)
λCamC ≤ λY ≤ λYopt,  (12b)
λCamB ≤ λMS ≤ λMSopt,  (12c)
λCamY ≤ λML ≤ λMLopt,  (12d)
where λCamY is an average wavelength and
λCamY = (λCamG + λCamR)/2.  (13)
For the Nikon D5100, λCamC = 498.7 nm and λCamY = 567 nm. The empirical estimates shown in Equation (12a–d) are based on the comparison of Figure 7a,b with Figure 1a,b, respectively, and the tolerance analysis shown below.
The deviation of an edge wavelength from its optimized value changes the convex hull HRA. Since λCopt > λCamR, an increasingly positive δλC results in an increase in the R signal with little change in the B and G signals, which causes the ratios B/R and G/R to decrease. The decrease in signal saturation results in a smaller convex hull HRA in the cyan region, and some outside samples may not be extrapolated. Therefore, the upper bound in Equation (12a) is set to the optimized edge wavelength of the cyan filter. The upper bounds in Equation (12b–d) are set to the optimized values for similar reasons. Since λCamC < λYopt < λCamG, an increasingly positive δλY results in a greater reduction in the G signal, which causes the ratio G/B to decrease. Since λMSopt ≈ λCamC and λMLopt ≈ λCamR, increasingly positive δλMS and δλML result in a greater increase in the G signal and a greater reduction in the R signal, respectively, which cause the ratios B/G and R/G to decrease.
Figure 15a–i show the mean RMS error ERef of outside samples versus δλMS and δλML, where the values of δλC and δλY are shown in the figures. The camera is the Nikon D5100. In the figures, δλC = −51.9 nm and −15.5 nm correspond to λC = λCamY and λCamR, respectively; δλY = −12.1 nm corresponds to λY = λCamC. The white symbols “+” and “x” are the origin (δλMS = δλML = 0 nm) and the point of the minimum mean ERef in each figure, respectively. The values of the mean ERef at the origin and at the point of the minimum mean ERef are shown in Table 8 and Table 9, respectively, for the cases in Figure 15a–i. In the two tables, the corresponding filter edge wavelength deviations and the ratio RGF99 are also shown.
At the origin in Figure 15a–i, all outside samples can be extrapolated. Away from the origin, the red dotted line represents the boundary where at least one outside sample cannot be extrapolated. Beyond the boundary, the mean ERef of the outside samples that can be extrapolated is shown. The white dotted line in each figure represents the contour of the mean ERef = 0.0213, which is the value obtained using the wPCA method. Using color filters that meet the specifications within both the red and white dotted lines, all outside samples can be extrapolated while keeping the mean ERef < 0.0213. For the cases with δλC = 0 in Figure 15c,f,i, the area enclosed by the red and white dotted lines is small because some cyan outside samples cannot be extrapolated. They can be extrapolated by using a blue-shifted cyan filter, i.e., δλC < 0, but this increases the extrapolation error. When δλC = −9.0 nm, the red dotted line is about the same as in the cases of δλC = −15.5 nm in Figure 15b,e,h. Therefore, if a larger edge wavelength tolerance is required, a blue-shifted cyan filter is preferred. For such a requirement, the upper bound λCopt in Equation (12a) can be replaced by λCamR.
The point of (δλML, δλMS) = (−41.1 nm, −33 nm) in Figure 15g is the lower bound case in Equation (12a–d), where λC = λCamY, λY = λCamC, λMS = λCamB and λML = λCamY. Since all filter edge wavelengths are blue-shifted, this case is called the blue-shift lower bound (BSLB) case. In contrast, the upper bound case in Equation (12a–d) is the optimized case at the origin in Figure 15c. The assessment metric statistics of the BSLB case are shown in Table 6, where the mean ERef = 0.019 and RGF99 = 0.8416. As can be seen from Table 6, the assessment metric statistics of the BSLB case were worse than those of the optimized case, but better than those of the wPCA and NTCC methods. Figure 13b–f also show the reconstructed spectral reflectance of the BSLB case, where the RMS errors ERef = 0.0187, 0.0469, 0.0203, 0.0274 and 0.0482, respectively.
Figure 16a,b show the ERef and GFC histograms, respectively, for the 202 outside samples of the BSLB case. Also shown are the results of the cases using the LUT method utilizing optimized ARSs and the LUT method utilizing MMSEP samples for comparison. As can be seen from the two figures and Table 6, for the RMS error ERef, the unoptimized BSLB case was slightly better than the case including MMSEP samples, but for the goodness-of-fit coefficient GFC, the case including MMSEP samples was slightly better. The extrapolation performance of the two cases is comparable. Note that it is easy to design better color filters than the BSLB case for extrapolation.

6. Conclusions

The reconstruction of spectral reflectance using the LUT method to interpolate camera RGB signals was investigated. The LUT method has the advantages of high accuracy, low computation time and no need for the camera spectral sensitivity functions. Its disadvantage is that it cannot interpolate samples outside the convex hull of the reference samples in the RGB signal space. The outside samples can be extrapolated using methods based on basis spectra, but these have two disadvantages: (1) accurate camera spectral sensitivity functions are required; (2) algorithms achieving a low spectrum reconstruction error are computationally time-consuming. This paper proposed the LUT method utilizing auxiliary reference samples for extrapolating outside samples. The auxiliary reference samples were created using reference samples and color filters so that the convex hull of the reference samples and auxiliary reference samples encloses the outside samples in the RGB signal space. Therefore, outside samples can be extrapolated using the LUT method utilizing auxiliary reference samples.
The proposed method was tested with Munsell color chips as examples of reflective surfaces. The Nikon D5100 camera was taken as an example camera. The method to create auxiliary reference samples was described. Cyan, yellow and magenta filters were used in this study. The optimized design of the three filters was presented. The results show that the mean extrapolation error using the proposed method was smaller than that of the weighted PCA method. The specifications for the optimized color filters mainly depend on the average wavelengths of the camera spectral sensitivities. The appropriate range of color filter edge wavelengths was shown. The design of color filters may not require accurate measurement or estimation of the camera spectral sensitivities. Therefore, the proposed method is feasible for overcoming the extrapolation problem. It was also shown that the ratio of computation time required to use the LUT method and the wPCA method was 1:80.2.
Since only one commercially available camera was considered, further studies are required for cameras with other sensitivity characteristics. Further research should use more than three color filters to expand the convex hull of the reference samples and auxiliary reference samples, and further reduce the extrapolation error where possible. Studies implementing the proposed method for field application will be published in the future.

Author Contributions

Conceptualization, Y.-C.W. and S.W.; data collection, Y.-C.W.; methodology, Y.-C.W. and S.W.; software, Y.-C.W.; data analysis, Y.-C.W.; supervision, L.H. and S.C.; writing—original draft, Y.-C.W. and S.W.; writing—review and editing, L.H. and S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in [16,23].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liang, H.; Saunders, D.; Cupitt, J. A new multispectral imaging system for examining paintings. J. Imag. Sci. Technol. 2005, 49, 551–562.
  2. Liang, J.; Wan, X.; Liu, Q.; Li, C.; Li, J. Research on filter selection method for broadband spectral imaging system based on ancient murals. Col. Res. Appl. 2015, 41, 585–595.
  3. Kim, D.B.; Jang, I.Y.; Choi, H.K.; Lee, K.H. Recovery and representation of spectral bidirectional reflectance distribution function from an image-based measurement system. Col. Res. Appl. 2015, 41, 358–371.
  4. Schaepman, M.E. Imaging spectrometers. In The SAGE Handbook of Remote Sensing; Warner, T.A., Nellis, M.D., Foody, G.M., Eds.; Sage Publications: Los Angeles, CA, USA, 2009; pp. 166–178.
  5. Cai, F.; Lu, W.; Shi, W.; He, S. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera. Sci. Rep. 2017, 7, 15602.
  6. Zhao, Y.; Berns, R.S. Image-based spectral reflectance reconstruction using the matrix R method. Col. Res. Appl. 2007, 32, 343–351.
  7. Tzeng, D.Y.; Berns, R.S. A review of principal component analysis and its applications to color technology. Col. Res. Appl. 2005, 30, 84–98.
  8. Agahian, F.; Amirshahi, S.A.; Amirshahi, S.H. Reconstruction of reflectance spectra using weighted principal component analysis. Col. Res. Appl. 2008, 33, 360–371.
  9. Attarchi, N.; Amirshahi, S.H. Reconstruction of reflectance data by modification of Berns’ Gaussian method. Col. Res. Appl. 2009, 34, 26–32.
  10. Hamza, A.B.; Brady, D.J. Reconstruction of reflectance spectra using robust nonnegative matrix factorization. IEEE Trans. Signal Process. 2006, 54, 3637–3642.
  11. Kim, B.G.; Han, J.; Park, S. Spectral reflectivity recovery from the tristimulus values using a hybrid method. J. Opt. Soc. Am. A 2012, 29, 2612–2621.
  12. Abed, F.M.; Amirshahi, S.H.; Abed, M.R.M. Reconstruction of reflectance data using an interpolation technique. J. Opt. Soc. Am. A 2009, 26, 613–624.
  13. Kim, B.G.; Werner, J.S.; Siminovitch, M.; Papamichael, K.; Han, J.; Park, S. Spectral reflectivity recovery from tristimulus values using 3D extrapolation with 3D interpolation. J. Opt. Soc. Korea 2014, 18, 507–516.
  14. Chou, T.-R.; Hsieh, C.-H.; Chen, E. Recovering spectral reflectance based on natural neighbor interpolation with model-based metameric spectra of extreme points. Col. Res. Appl. 2019, 44, 508–525.
  15. Darrodi, M.M.; Finlayson, G.; Goodman, T.; Mackiewicz, M. Reference data set for camera spectral sensitivity estimation. J. Opt. Soc. Am. A 2015, 32, 381–391. Available online: http://spectralestimation.wordpress.com/data/ (accessed on 27 June 2022).
  16. Finlayson, G.; Darrodi, M.M.; Mackiewicz, M. Rank-based camera spectral sensitivity estimation. J. Opt. Soc. Am. A 2016, 33, 589–599.
  17. Maloney, L.T. Evaluation of linear models of surface spectral reflectance with small numbers of parameters. J. Opt. Soc. Am. A 1986, 3, 1673–1683.
  18. Valero, E.M.; Nieves, J.L.; Nascimento, S.M.C.; Amano, K.; Foster, D.H. Recovering spectral data from natural scenes with an RGB digital camera and colored filters. Col. Res. Appl. 2007, 32, 352–360.
  19. Babaei, V.; Amirshahi, S.H.; Agahian, F. Using weighted pseudo-inverse method for reconstruction of reflectance spectra and analyzing the dataset in terms of normality. Col. Res. Appl. 2011, 36, 295–305.
  20. Liang, J.; Wan, X. Optimized method for spectral reflectance reconstruction from camera responses. Opt. Express 2017, 25, 28273–28287.
  21. Xiao, G.; Wan, X.; Wang, L.; Liu, S. Reflectance spectra reconstruction from trichromatic camera based on kernel partial least square method. Opt. Express 2019, 27, 34921–34936.
  22. Wen, S. Color management for future video systems. Proc. IEEE 2013, 101, 31–44.
  23. Kohonen, O.; Parkkinen, J.; Jaaskelainen, T. Databases for spectral color science. Col. Res. Appl. 2006, 31, 381–390. Available online: https://sites.uef.fi/spectral/munsell-colors-matt-spectrofotometer-measured/ (accessed on 27 June 2022).
  24. Viggiano, J.A.S. A perception-referenced method for comparison of radiance ratio spectra and its application as an index of metamerism. Proc. SPIE 2002, 4421, 701–704.
  25. Viggiano, J.A.S. Metrics for evaluating spectral matches: A quantitative comparison. In Proceedings of the International Conference on Computer Graphics, Imaging and Visualization, Penang, Malaysia, 26–29 July 2004; pp. 286–291.
  26. Mansouri, A.; Sliwa, T.; Hardeberg, J.Y.; Voisin, Y. An adaptive-PCA algorithm for reflectance estimation from color images. In Proceedings of the 19th International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008.
  27. Amidror, I. Scattered data interpolation methods for electronic imaging systems: A survey. J. Electron. Imag. 2002, 11, 157–176.
  28. Macleod, H.A. Thin-Film Optical Filters, 5th ed.; CRC Press: Boca Raton, FL, USA, 2018; Chapter 7.
  29. Bayesian Optimization Algorithm. Available online: https://www.mathworks.com/help/stats/bayesian-optimization-algorithm.html (accessed on 27 June 2022).
  30. Finlayson, G.D.; Mackiewicz, M.; Hurlbert, A. Color correction using root-polynomial regression. IEEE Trans. Imag. Process. 2015, 24, 1460–1470.
Figure 1. Spectral sensitivities of (a) the Nikon D5100 and (b) the CMF camera. Spectral sensitivities of the red, green and blue signal channels are denoted as SR, SG and SB, respectively.
Figure 1. Spectral sensitivities of (a) the Nikon D5100 and (b) the CMF camera. Spectral sensitivities of the red, green and blue signal channels are denoted as SR, SG and SB, respectively.
Sensors 22 04923 g001
Figure 2. (a) Color points of the reflection light from Munsell color chips in CIELAB. (bd) show the color points projected on the a*b* plane, a*L* plane and b*L* plane, respectively. Reference samples and test samples are shown as red and blue dots, respectively. The illuminant is D65.
Figure 2. (a) Color points of the reflection light from Munsell color chips in CIELAB. (bd) show the color points projected on the a*b* plane, a*L* plane and b*L* plane, respectively. Reference samples and test samples are shown as red and blue dots, respectively. The illuminant is D65.
Sensors 22 04923 g002
Figure 3. (a) Color points of the reflection light from Munsell color chips in the RGB signal space using the Nikon D5100, where reference samples and test samples are shown as red and blue dots, respectively. The illuminant is D65. (b) Convex hull of reference samples HR.
Figure 3. (a) Color points of the reflection light from Munsell color chips in the RGB signal space using the Nikon D5100, where reference samples and test samples are shown as red and blue dots, respectively. The illuminant is D65. (b) Convex hull of reference samples HR.
Sensors 22 04923 g003
Figure 4. Schematic diagram showing a color point Q enclosed by a tetrahedron with vertices Q1, Q2, Q3 and Q4 in the RGB signal space.
Figure 4. Schematic diagram showing a color point Q enclosed by a tetrahedron with vertices Q1, Q2, Q3 and Q4 in the RGB signal space.
Sensors 22 04923 g004
Figure 5. Flow chart for reconstructing the spectral reflectance vector SRefRec from the signal vector C. Refer to Section 3.2 for details.
Figure 5. Flow chart for reconstructing the spectral reflectance vector SRefRec from the signal vector C. Refer to Section 3.2 for details.
Sensors 22 04923 g005
Figure 6. Flow chart for creating auxiliary reference samples (ARSs). Refer to Section 4.1 for details.
Figure 6. Flow chart for creating auxiliary reference samples (ARSs). Refer to Section 4.1 for details.
Sensors 22 04923 g006
Figure 7. Spectral transmittance of the color filters optimized for (a) the Nikon D5100 and (b) the CMF camera. The spectral transmittance of the cyan, yellow and magenta filters is denoted as fC, fY and fM, respectively.
Figure 7. Spectral transmittance of the color filters optimized for (a) the Nikon D5100 and (b) the CMF camera. The spectral transmittance of the cyan, yellow and magenta filters is denoted as fC, fY and fM, respectively.
Sensors 22 04923 g007
Figure 8. (a) Color points of raw ARSs and amplified raw ARSs shown as red and blue dots, respectively, in the RGB signal space. The raw ARSs from the white card are shown as red crosses. The white and black ARSs are shown as green crosses. Color filters optimized for the Nikon D5100 are used. (b) The convex hull HA of the ARSs and the convex hull HR of the reference samples are shown as blue and red meshes, respectively. Figure 3b shows the same HR.
Figure 8. (a) Color points of raw ARSs and amplified raw ARSs shown as red and blue dots, respectively, in the RGB signal space. The raw ARSs from the white card are shown as red crosses. The white and black ARSs are shown as green crosses. Color filters optimized for the Nikon D5100 are used. (b) The convex hull HA of the ARSs and the convex hull HR of the reference samples are shown as blue and red meshes, respectively. Figure 3b shows the same HR.
Sensors 22 04923 g008
Figure 9. Schematic diagrams showing the specification definitions of the (a) cyan and yellow color filters, and (b) magenta filter. Refer to Section 4.2 for details.
Figure 10. (a) The convex hull HRA of the reference samples and ARSs and the convex hull of the MMSEP samples are shown as blue and red meshes, respectively. The ARSs are the same as in Figure 8b, where the optimized color filters are used. The convex hull HRA is the same as the convex hull HA in Figure 8b, but the viewing angle is different. (b) is the same as (a), except it is rotated 90° clockwise about the G axis.
Figure 11. (a) ERef and (b) GFC histograms for the 864 inside samples using the LUT, PCA and wPCA methods. The camera is the Nikon D5100. The insets show enlarged parts. In (b), the “<0.99” bin collects all samples with GFC < 0.99.
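For reference, the two assessment metrics in Figure 11 can be computed as below, assuming the standard definitions of the RMS reflectance error and the goodness-of-fit coefficient; the function names are ours.

```python
import numpy as np

def rms_error(s_true, s_rec):
    """RMS difference between target and reconstructed reflectance (ERef)."""
    return float(np.sqrt(np.mean((s_true - s_rec) ** 2)))

def gfc(s_true, s_rec):
    """Goodness-of-fit coefficient: cosine similarity of the two spectra."""
    return float(abs(s_true @ s_rec) /
                 (np.linalg.norm(s_true) * np.linalg.norm(s_rec)))
```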
Figure 12. Target spectrum SReflection, reconstructed spectra SRec and neighboring reference spectra using the LUT method and optimized color filters. The Munsell notations of the color chips are (a) 2.5G 7/6, (b) 10P 7/8, (c) 2.5R 4/12, (d) 2.5Y 9/4, (e) 10BG 4/8 and (f) 5PB 4/12.
Figure 13. (a–f) show the target spectral reflectance SRef and the reconstructed spectral reflectance SRefRec for the cases in Figure 12a–f, respectively. The results of the other reconstruction methods are also shown. Refer to Sections 5.1–5.4 for details.
Figure 14. (a) ERef and (b) GFC histograms for the 202 outside samples using the LUT and wPCA methods. Results for the LUT method with optimized ARSs and for the LUT method with MMSEP samples are shown. In (a), the inset shows an enlarged part. In (b), the “<0.99” bin collects all samples with GFC < 0.99.
Figure 15. Mean ERef of the 202 outside samples versus δλM and δλSep for the Nikon D5100; the values of δλC and δλY used in (a–i) are indicated. The white symbols “+” and “×” mark the origin and the point of minimum mean ERef in each panel, respectively. The red dotted line is the boundary beyond which at least one outside sample cannot be extrapolated; past this boundary, the mean ERef of only the extrapolatable outside samples is shown. The white dotted line is the contour ERef = 0.0213.
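The optimization behind Figure 15 amounts to a grid search over filter edge wavelength deviations. The sketch below mirrors only the outer loop; the objective is a stand-in quadratic bowl so the code runs end to end, because the real objective — rebuilding the filters at the given deviations and re-running the extrapolation over the 202 outside samples — requires the full pipeline. All names and the grid range are our assumptions.

```python
import itertools
import numpy as np

def mean_outside_error(d_m: float, d_sep: float) -> float:
    # Stand-in objective with a minimum away from the origin; the real one
    # would rebuild the filters and average ERef over the outside samples.
    return 0.0171 + 1e-6 * ((d_m + 10.0) ** 2 + (d_sep - 5.0) ** 2)

grid = np.arange(-60.0, 61.0, 2.0)
best_err, (best_m, best_sep) = min(
    (mean_outside_error(d_m, d_sep), (d_m, d_sep))
    for d_m, d_sep in itertools.product(grid, grid)
)
print(f"minimum mean ERef {best_err:.4f} at dλM={best_m}, dλSep={best_sep}")
```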
Figure 16. (a) ERef and (b) GFC histograms for the 202 outside samples in the BSLB case. Results for the LUT method with optimized ARSs and for the LUT method with MMSEP samples are shown for comparison. In (a), the inset shows an enlarged part. In (b), the “<0.99” bin collects all samples with GFC < 0.99.
Table 1. Abbreviation list.
Abbreviation   Definition
ARS            Auxiliary Reference Sample
BSLB           Blue-Shift Lower Bound
CMF            Color Matching Function
FWHM           Full Width at Half Maximum
GFC            Goodness-of-Fit Coefficient
LUT            Look-Up Table
MAX            Maximum
MIN            Minimum
MMSEP          Model-based Metameric Spectra of Extreme Point
NMT            Non-negative Matrix Transformation
NTCC           Nearest Tetrahedron based on Circumcenter
NTCE           Nearest Tetrahedron based on Centroid
NTIC           Nearest Tetrahedron based on In-Center
PC50           50th Percentile
PC98           98th Percentile
PCA            Principal Component Analysis
RGF99          Ratio of Good Fit (the ratio of samples with GFC > 0.99)
RMS            Root Mean Square
RPRM           Root Polynomial Regression Model
SCI            Spectral Comparison Index
wPCA           Weighted Principal Component Analysis
Table 2. Spectral specifications of the Nikon D5100 and CMF cameras.
Camera   Channel Wavelength (nm)               FWHM Spectral Width (nm)
         B (λCamB)  G (λCamG)  R (λCamR)       B (ΔλCamB)  G (ΔλCamG)  R (ΔλCamR)
D5100    466.7      530.7      603.4           80.1        88.3        55.8
CMF      452.2      559.2      588.1           55.2        100.4       79.4
Table 3. Edge wavelengths of the optimized color filters for the Nikon D5100 and CMF cameras.
Camera   λCopt (nm)  λYopt (nm)  λMSopt (nm)  λMLopt (nm)
D5100    618.9       510.8       499.7        608.1
CMF      596.7       520.5       504.0        585.4
Table 4. Assessment metric statistics for test samples using the LUT method and optimized color filters. The Nikon D5100 and CMF cameras were used.
                      Nikon D5100                    CMF
Metric   Statistic    All      Inside   Outside      All      Inside   Outside
         No.          1066     864      202          1066     863      203
ERef     mean μ       0.0129   0.0120   0.0171       0.0132   0.0123   0.0169
         std σ        0.0118   0.0107   0.0150       0.0116   0.0103   0.0152
         PC50         0.0091   0.0087   0.0132       0.0099   0.0094   0.0132
         PC98         0.0509   0.0485   0.0599       0.0494   0.0444   0.0650
         MAX          0.1078   0.0859   0.1078       0.1111   0.0816   0.1111
GFC      mean μ       0.9972   0.9974   0.9962       0.9972   0.9974   0.9960
         std σ        0.0074   0.0071   0.0084       0.0063   0.0054   0.0090
         PC50         0.9993   0.9994   0.9986       0.9993   0.9994   0.9986
         MIN          0.9000   0.9000   0.9193       0.9161   0.9457   0.9161
         RGF99        0.9353   0.9375   0.9257       0.9203   0.9212   0.9163
ΔE00     mean μ       0.4244   0.4239   0.4262       0.0000   0.0000   0.0000
         std σ        0.4115   0.4182   0.3827       0.0000   0.0000   0.0000
         PC50         0.2823   0.2795   0.3015       0.0000   0.0000   0.0000
         PC98         1.6842   1.6900   1.6402       0.0000   0.0000   0.0000
         MAX          2.5918   2.5918   1.8962       0.0000   0.0000   0.0000
SCI      mean μ       4.1102   3.7503   5.6495       4.1869   3.8632   5.5631
         std σ        3.1802   2.9266   3.7252       3.0695   2.8885   3.4233
         PC50         3.1484   2.9310   4.6951       3.3348   3.1732   4.8976
         PC98         13.4611  12.1239  15.0412      12.4129  11.7579  14.3999
         MAX          25.2299  25.2299  21.9370      23.8934  23.8934  15.7186
Table 5. Assessment metric statistics for test samples using the PCA and wPCA methods. The camera is the Nikon D5100.
                      PCA                            wPCA
Metric   Statistic    All      Inside   Outside      All      Inside   Outside
         No.          1066     864      202          1066     864      202
ERef     mean μ       0.0221   0.0193   0.0341       0.0147   0.0131   0.0213
         std σ        0.0168   0.0128   0.0247       0.0122   0.0092   0.0194
         PC50         0.0173   0.0160   0.0276       0.0121   0.0114   0.0155
         PC98         0.0817   0.0515   0.1152       0.0565   0.0402   0.0774
         MAX          0.1442   0.1180   0.1442       0.1255   0.0894   0.1255
GFC      mean μ       0.9940   0.9958   0.9860       0.9972   0.9982   0.9931
         std σ        0.0101   0.0072   0.0155       0.0062   0.0031   0.0118
         PC50         0.9974   0.9977   0.9892       0.9990   0.9990   0.9976
         MIN          0.8858   0.8982   0.8858       0.8921   0.9444   0.8921
         RGF99        0.8349   0.9178   0.4802       0.9418   0.9803   0.7772
ΔE00     mean μ       0.8261   0.6970   1.3780       0.5017   0.4318   0.8011
         std σ        0.6202   0.4163   0.9572       0.4650   0.3099   0.7887
         PC50         0.7003   0.6488   1.1215       0.3793   0.3600   0.5094
         PC98         2.8667   1.7963   3.6855       2.2116   1.2505   3.0850
         MAX          4.3546   3.0765   4.3546       3.3029   2.3674   3.3029
SCI      mean μ       7.8531   6.3217   14.4032      4.7942   3.9033   8.6050
         std σ        6.5329   4.2839   9.7024       4.3745   2.7381   7.1555
         PC50         5.8291   5.0862   12.0428      3.4185   3.1503   6.3268
         PC98         27.2764  19.6718  38.8951      19.1506  11.8655  31.3431
         MAX          55.4331  27.4239  55.4331      35.2093  26.8467  35.2093
Table 6. Assessment metric statistics for the 202 outside samples of the Nikon D5100 using the NTCC method, NTIC method, NTCE method, LUT method utilizing MMSEP samples and LUT method utilizing ARSs in the BSLB case (ARS(BSLB)). Also shown are the cases using the LUT method utilizing optimized ARSs (ARS(Opt)) and the wPCA method for comparison.
Metric   Statistic    NTCC     NTIC      NTCE      MMSEP    ARS(BSLB)  ARS(Opt)  wPCA
ERef     mean μ       0.0258   0.0393    0.0470    0.0212   0.0190     0.0171    0.0213
         std σ        0.0225   0.0600    0.0712    0.0201   0.0164     0.0150    0.0194
         PC50         0.0196   0.0173    0.0193    0.0164   0.0154     0.0132    0.0155
         PC98         0.0967   0.2493    0.3295    0.0819   0.0728     0.0599    0.0774
         MAX          0.1279   0.4292    0.4292    0.1531   0.1003     0.1078    0.1255
GFC      mean μ       0.9856   0.9703    0.9617    0.9951   0.9948     0.9962    0.9931
         std σ        0.0313   0.0878    0.0981    0.0090   0.0093     0.0084    0.0118
         PC50         0.9971   0.9965    0.9961    0.9981   0.9982     0.9986    0.9976
         MIN          0.8355   0.3718    0.3718    0.9356   0.9376     0.9193    0.8921
         RGF99        0.7475   0.6931    0.6485    0.8812   0.8416     0.9257    0.7772
ΔE00     mean μ       0.7198   1.0659    1.7758    0.5900   0.6847     0.4262    0.8011
         std σ        0.6564   1.6787    4.3453    0.5657   0.6437     0.3827    0.7887
         PC50         0.5217   0.5207    0.5629    0.3419   0.4782     0.3015    0.5094
         PC98         2.8621   6.8300    21.3735   2.1651   2.5164     1.6402    3.0850
         MAX          3.4551   15.3924   32.8994   2.8408   3.2615     1.8962    3.3029
SCI      mean μ       8.7137   12.5158   16.5762   6.9223   7.3756     5.6495    8.6050
         std σ        6.0169   16.4821   26.2679   5.5522   5.7411     3.7252    7.1555
         PC50         7.5725   7.1176    7.6722    5.7158   5.4641     4.6951    6.3268
         PC98         25.7363  61.6082   130.5460  23.4707  22.2341    15.0412   31.3431
         MAX          30.4515  135.4394  157.6624  38.2342  32.9033    21.9370   35.2093
Table 7. Target and optimized RGB signal values of MMSEP samples for the Nikon D5100.
          Target          Optimized
Signal    R    G    B     R       G       B
Red       1    0    0     0.8174  0.1221  0.008
Green     0    1    0     0.1874  0.6941  0.19
Blue      0    0    1     0.0736  0.2255  0.8014
Cyan      0    1    1     0.1771  0.8719  0.9863
Magenta   1    0    1     0.8506  0.3245  0.8051
Yellow    1    1    0     0.9261  0.7720  0.1972
White     1    1    1     1       1       1
Black     0    0    0     0       0       0
Table 8. Filter edge wavelength deviations and the mean RMS error ERef of outside samples at the origin in Figure 15a–i. The ratio RGF99 of outside samples is also shown.
Figure 15   δλC (nm)  δλY (nm)  δλMS (nm)  δλML (nm)  Mean ERef  RGF99
(a)         −51.9     0         0          0          0.0181     0.8614
(b)         −15.5     0         0          0          0.0182     0.8713
(c)         0         0         0          0          0.0171     0.9257
(d)         −51.9     −6.05     0          0          0.0182     0.8564
(e)         −15.5     −6.05     0          0          0.0183     0.8762
(f)         0         −6.05     0          0          0.0173     0.9208
(g)         −51.9     −12.1     0          0          0.0187     0.8614
(h)         −15.5     −12.1     0          0          0.0185     0.8812
(i)         0         −12.1     0          0          0.0176     0.9257
Table 9. The minimum mean RMS error ERef of outside samples in Figure 15a–i and corresponding filter edge wavelength deviations. The ratio RGF99 of outside samples is also shown.
Figure 15   δλC (nm)  δλY (nm)  δλMS (nm)  δλML (nm)  Mean ERef  RGF99
(a)         −51.9     0         −6         0          0.0180     0.8614
(b)         −15.5     0         −6         −12        0.0178     0.8911
(c)         0         0         0          0          0.0171     0.9257
(d)         −51.9     −6.05     2          −28        0.0181     0.8713
(e)         −15.5     −6.05     2          −28        0.0179     0.9109
(f)         0         −6.05     2          0          0.0173     0.9208
(g)         −51.9     −12.1     8          −48        0.0178     0.8663
(h)         −15.5     −12.1     16         −12        0.0177     0.9158
(i)         0         −12.1     2          −2         0.0175     0.9307
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
