Spectral Reflectance Recovery from the Quadcolor Camera Signals Using the Interpolation and Weighted Principal Component Analysis Methods

The recovery of surface spectral reflectance using a quadcolor camera was numerically studied. The RGB channels of the quadcolor camera were assumed to be the same as those of the Nikon D5100 tricolor camera, and the spectral sensitivity of the fourth signal channel was tailored using a color filter. Munsell color chips were used as reflective surfaces. When the interpolation method or the weighted principal component analysis (wPCA) method is used to reconstruct spectra, using the quadcolor camera can effectively reduce the mean spectral error of the test samples compared to using the tricolor camera. Except for computation time, the interpolation method outperforms the wPCA method in spectrum reconstruction. A long-pass optical filter can be applied to the fourth channel to reduce the mean spectral error. A short-pass optical filter can be applied to the fourth channel to reduce the mean color difference, but at the cost of a larger mean spectral error. Due to the small color difference, the quadcolor camera using an optimized short-pass filter may be suitable as an imaging colorimeter. An empirical design rule for keeping the color difference small is to reduce the error in fitting the color-matching functions with the camera spectral sensitivity functions.

Indirect methods that require training spectra are also known as learning-based methods, such as PCA and NMT. The training spectra are used to derive basis spectra. The reconstructed spectrum is a linear combination of the basis spectra, whose coefficients can be solved from simultaneous equations describing the channel outputs of the imaging device. Due to the higher-dimensional signal space, the extrapolation problem of quadcolor cameras is more severe than that of tricolor cameras using the LUT method. This paper will show that this is also a problem when using the wPCA method. ARSs were used for extrapolation with the LUT method. A Nikon D5100 camera was taken as the reference tricolor camera. The RGB channels of the quadcolor cameras under consideration were assumed to be the same as those of the D5100 camera. The spectral sensitivity of the fourth channel was tailored using a color filter. The reflection spectra from the Munsell color chips irradiated with the illuminant D65 were taken as samples for testing.
This paper is organized as follows. Sections 2.1-2.3 describe the considered camera spectral sensitivities, color samples, and the assessment metrics for the recovered spectral reflectance, respectively. Four spectral sensitivity types for the fourth channel of the quadcolor camera are described in Section 2.1. Sections 3.1 and 3.2 describe the wPCA and LUT methods, respectively. Section 3.3 presents a method for preparing the ARSs for extrapolation of outside samples using the quadcolor camera and the LUT method. Section 3.4 defines the factor that can be used to design the spectral sensitivities of a camera to achieve a small color difference of the reconstructed spectrum. Section 4 shows the results. Compared to the D5100 camera, the reduction in spectral reconstruction errors using the quadcolor cameras is presented. The performances using the wPCA and LUT methods are compared. The optimal designs of the considered quadcolor cameras are shown. The spectral sensitivity characteristics affecting spectral reflectance recovery were investigated. Section 5 gives the conclusions. Appendices A and B give the proofs of the zero-color-difference condition using the LUT and wPCA methods, respectively. For ease of reference, section Abbreviations lists the abbreviations defined herein in alphabetical order.


Camera Spectral Sensitivities
A spectrum can be represented by the vector S = [S(λ1), S(λ2), . . . , S(λMw)]^T, where S(λj) is the spectral amplitude at wavelength λj; λj = λ1 + (j − 1)Δλ is the j-th sampling wavelength, j = 1, 2, . . . , Mw, and Δλ is the wavelength sampling interval; Mw is the number of sampling wavelengths; the superscript T denotes the transpose operation. In this paper, spectra were sampled from 400 to 700 nm in a step of 10 nm, i.e., λ1 = 400 nm, Δλ = 10 nm and Mw = 31. The spectral sensitivity vector of a camera signal channel can be written as

SCam = TOpt • TIRC • TCF • D, (1)

where TOpt, TIRC and TCF are the spectral transmittance vectors of the imaging lens set, IR cut filter and color filter, respectively; D is the spectral sensitivity vector of the CMOS sensor at the focal plane; and the operator • is the Hadamard product, which is also known as the element-wise product. The infrared cut filter blocks the invisible infrared light and can be replaced with a UV/IR cut filter. For simplicity, the lens transmittance was not considered in this paper. The CMOS sensor converts the light into electric signals. The conventional CFA on the CMOS sensor filters light in order to separate the short-, mid- and long-wavelength components of the light. As shown in Figure 1a, the sensor pixels corresponding to the red, green and blue filters provide the R, G and B signals, respectively. The spectral sensitivity vectors of the red, green and blue channels are designated SCamR, SCamG and SCamB, respectively. Figure 2a shows the SCamR, SCamG and SCamB of the D5100 camera measured using a monochromator [30]. The IR cut filter of the D5100 camera has a cutoff wavelength of approximately 690 nm. In this paper, it was assumed that the spectral sensitivities of the red, green and blue channels of the quadcolor cameras under consideration are the same as shown in Figure 2a.
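As a concrete illustration of Equation (1), the sketch below builds a channel sensitivity vector as the element-wise product of the four spectral curves. The curves here are hypothetical smooth stand-ins (the real curves are measured, as in Figure 2):

```python
import numpy as np

# Wavelength grid: 400-700 nm in 10 nm steps (Mw = 31 samples).
wavelengths = np.arange(400, 701, 10)
Mw = wavelengths.size

# Hypothetical stand-ins for the spectral curves in Equation (1).
T_opt = np.ones(Mw)                                     # lens transmittance (ignored, set to 1)
T_irc = 1.0 / (1.0 + np.exp((wavelengths - 690) / 5))   # IR cut filter, ~690 nm cutoff
T_cf = np.exp(-0.5 * ((wavelengths - 550) / 60) ** 2)   # a greenish color filter
D = np.clip((wavelengths - 380) / 320, 0, 1)            # rising silicon sensor response

# Equation (1): Hadamard (element-wise) product of the four curves.
S_cam = T_opt * T_irc * T_cf * D
```

The resulting vector peaks where the color filter passband and the sensor response overlap, which is how the fourth-channel sensitivities of Figures 3b and 4b arise from their filters.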
Figure 2b shows the spectral sensitivity of the fourth channel of a quadcolor camera, which is the product of the spectral sensitivity of a typical silicon sensor [39] and the spectral transmittance of the Baader UV/IR cut filter. This fourth channel is a greenish yellow channel, although only the UV/IR cut filter is applied. The output signal of the channel is designated as the F signal because this channel is free of the optical filter. In order to distinguish it from the CIE stimulus Y, it is not designated as the Y signal. Therefore, the quadcolor camera with this fourth channel is called the RGBF camera. Blue or cyan filters can be used to compensate for the increased sensitivity of silicon sensors with wavelength, e.g., the Isuzu IEC series filters. Figure 3a shows the spectral transmittance of five IEC series filters. The spectral sensitivities of the channels with each of the five filters applied are shown in Figure 3b; they are yellowish green channels. Because of the compensation filter applied, the quadcolor camera with such a fourth channel is called the RGBC camera.
Short-pass and long-pass optical filters were also applied to the fourth channel. The spectral transmittance of these filters is based on the super-Gaussian function; the spectral transmittance functions of the short-pass and long-pass filters are the same as those of the cyan and yellow filters in [29], respectively. Figure 4a shows their specification, where f0 is the maximum transmittance; λS and λL are the edge wavelengths at 0.5f0; ΔλS and ΔλL are the edge widths from 0.1f0 to 0.9f0. In this paper, for simplicity, ΔλS = ΔλL = 30 nm were assumed. The fourth channels using the short-pass and long-pass optical filters are called the S and L channels, respectively. The quadcolor cameras with the S and L channels are called the RGBS and RGBL cameras, respectively. Figure 4b shows the spectral transmittance of the short-pass and long-pass optical filters with λS = 528 nm and λL = 585 nm, respectively, where the corresponding spectral sensitivities of the S and L channels are also shown.
Figure 2. (a) Spectral sensitivities of the Nikon D5100 camera, where the spectra SCamR, SCamG and SCamB are the sensitivities of the red, green and blue signal channels, respectively. (b) Spectral sensitivity of the F signal channel, which is the product of the spectral sensitivity of a typical silicon sensor and the spectral transmittance of the Baader UV/IR cut filter.
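A super-Gaussian edge filter of the kind described above can be sketched as follows. The parameterization (center, half-width at 0.5f0, and order controlling edge steepness) is an assumption for illustration; the exact functional form used in [29] may differ:

```python
import numpy as np

def super_gaussian(lam, center, half_width, order, f0=0.9):
    # Super-Gaussian transmittance: f0 at the center wavelength and
    # 0.5*f0 at |lam - center| = half_width; larger `order` gives a
    # steeper edge (narrower width between 0.1*f0 and 0.9*f0).
    return f0 * np.exp(-np.log(2.0) * ((lam - center) / half_width) ** (2 * order))

lam = np.arange(400, 701, 1.0)

# Short-pass filter: passband anchored at 400 nm so the long-wavelength
# edge sits at lambda_S = 528 nm (the value quoted in the text).
T_short = super_gaussian(lam, center=400.0, half_width=128.0, order=6)

# Long-pass filter: passband anchored at 700 nm, edge at lambda_L = 585 nm.
T_long = super_gaussian(lam, center=700.0, half_width=115.0, order=6)
```

By construction, each curve crosses 0.5f0 = 0.45 exactly at its edge wavelength.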
To sum up, four types of quadcolor cameras were considered: the RGBF, RGBC, RGBS and RGBL cameras. They are identical except for the color filter applied to the fourth channel.

Color Samples
The reference/training and test samples were prepared using the reflectance spectra of matt Munsell color chips measured by a spectroradiometer [40]. A total of 1268 reflectance spectra in [40] were used in this paper. Illuminant D65 was assumed to be the light source. In the case of using the LUT method, the same 202 and 1066 color chips as in [29] were used to prepare the reference and test samples, respectively. In the case of using the wPCA method, the reference and test samples were also used as training and test samples, respectively.

Sensors 2022, 22, 6288
The spectrum vector of the light reflected from a color chip is

SReflection = SRef • SD65, (2)

where SRef and SD65 are the spectral reflectance vector of the color chip and the spectrum vector of the illuminant D65, respectively. The color points of light reflected from the 1268 Munsell color chips in the CIELAB color space have been shown in [29]. The CIE 1931 color-matching functions (CMFs) were adopted in this paper. In the following, the RGBF camera is taken as an example. The measured signal of a color channel is UMeas = SReflection^T SCamU, where U = R, G, B and F for the red, green, blue and fourth channels, respectively; SReflection is the reflection spectrum vector; and SCamU is the spectral sensitivity vector of the channel calculated from Equation (1). For the white balance condition, the channel signals are normalized to U = UMeas/UMeasD65, where U = R, G, B and F; UMeasD65 is the measured signal when SRef = SWhite, where SWhite is the spectral reflectance of a white card. The same white card as in [29] was taken, which is the white side of a Kodak gray card.
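The signal formation and white-balance normalization described above can be sketched as follows, with random hypothetical stand-ins for the measured sensitivities, illuminant and reflectances:

```python
import numpy as np

rng = np.random.default_rng(1)
Mw = 31

# Hypothetical stand-ins for the measured quantities (real data: Figure 2 and [40]).
S_cam = rng.uniform(0.0, 1.0, size=(Mw, 4))    # columns: R, G, B, F channel sensitivities
S_d65 = rng.uniform(0.5, 1.5, size=Mw)         # illuminant D65 spectrum vector
S_white = np.full(Mw, 0.9)                     # near-flat white-card reflectance
S_ref = rng.uniform(0.0, 1.0, size=Mw)         # spectral reflectance of one color chip

# Reflected-light spectrum: element-wise product of reflectance and illuminant.
S_reflection = S_ref * S_d65

# U_Meas = S_Reflection^T S_CamU for each channel, then white-balance normalization.
U_meas = S_cam.T @ S_reflection
U_meas_d65 = S_cam.T @ (S_white * S_d65)
C = U_meas / U_meas_d65                        # normalized signal vector [R, G, B, F]^T
```

With this normalization, the white card itself maps to the signal vector [1, 1, 1, 1]^T by construction.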
The vector representing the camera signals is designated as C = [R, G, B, F]^T. Figure 5a-d show the color points of the light reflected from the Munsell color chips in the RGB, GBF, BFR and FRG signal spaces, respectively, using the RGBF camera. In these figures, the 202 reference samples are shown as red dots; out of the 1066 test samples, the 726 inside samples and 340 outside samples are shown as green and blue dots, respectively.

Assessment Metrics
For a given test signal vector, the wPCA and LUT methods to reconstruct the reflection light spectrum are shown in Sections 3.1 and 3.2, respectively. The reconstructed spectrum vector is designated as SRec. The reconstructed spectral reflectance vector SRefRec was calculated as the reflection spectrum vector SRec divided by the D65 spectrum vector SD65 element by element.
The reconstructed spectral reflectance vector SRefRec was assessed by the root mean square error ERef = |SRef − SRefRec|/√Mw, where |·| stands for the norm operation, and by the goodness-of-fitting coefficient GFC = SRef^T SRefRec/(|SRef||SRefRec|). The color difference between SRec and SReflection was assessed using the CIEDE2000 ΔE00. The spectral comparison index (SCI), an index of metamerism [41], was also used to assess the reconstructed results. The parameter k in the formula for calculating the SCI shown in [41] was set to 1.0.
For the values of ERef, ΔE00 and SCI, the smaller, the better. The statistics of the three metrics were calculated: the mean µ, standard deviation σ, 50th percentile PC50, 98th percentile PC98 and maximum MAX. For the value of GFC, the larger, the better. The statistics of GFC were calculated: the mean µ, standard deviation σ, 50th percentile PC50, and minimum MIN. The fit of the spectral curve shape is good if GFC > 0.99 [28,42]. The ratio of samples with GFC > 0.99 was calculated, which is called the ratio of good fit and designated RGF99.
The assessment metrics ERef and 1 − GFC are related to spectral error. The assessment metrics ΔE00 and SCI are related to color appearance error. Section 4.3 will show that the values of ERef and 1 − GFC are roughly consistent with each other; the values of ΔE00 and SCI are also roughly consistent with each other. Since the reconstructed spectrum is a metameric spectrum, the spectral error can be large even when the color appearance error is small. Therefore, it is necessary to use both types of metrics for assessment.
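Assuming the usual definitions of the RMSE and the goodness-of-fitting coefficient, the two spectral-error metrics can be sketched as (the curves below are hypothetical):

```python
import numpy as np

def e_ref(s_ref, s_rec):
    # Root mean square spectral reflectance error (smaller is better).
    return np.linalg.norm(s_ref - s_rec) / np.sqrt(s_ref.size)

def gfc(s_ref, s_rec):
    # Goodness-of-fitting coefficient: cosine similarity of the two curves
    # (larger is better; GFC > 0.99 indicates a good fit of the curve shape).
    return abs(s_ref @ s_rec) / (np.linalg.norm(s_ref) * np.linalg.norm(s_rec))

s = np.linspace(0.2, 0.8, 31)   # a hypothetical reflectance curve
offset = s + 0.01               # a slightly offset reconstruction
```

A uniform offset of 0.01 gives ERef = 0.01 while leaving GFC well above 0.99, illustrating why the two metrics capture different aspects of the error.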

The wPCA Method
From the theory of PCA [43], the spectrum vector S can be decomposed as

S = P0 + Σ_{k=1}^{Mw} dk Pk, (3)

where P0 is the average spectrum vector of the training samples; dk and Pk are the coefficient and spectrum vector of the k-th principal component, respectively. Principal components are derived from the training samples using PCA. The number of principal components is the number of sampling wavelengths, Mw. The camera spectral sensitivity matrix is defined as DCam = [DCamR, DCamG, DCamB, DCam4th], where DCamR, DCamG, DCamB and DCam4th are the normalized spectral sensitivity vectors of the red, green, blue and fourth channels for the white balance condition. If both sides of Equation (3) are multiplied by DCam^T, we have the signal vector

C = C0 + Σ_{k=1}^{Mw} dk Qk, (4)

where C0 = DCam^T P0 and Qk = DCam^T Pk. Since Equation (4) represents four scalar equations, the summation in Equation (4) was truncated: the upper limit Mw was replaced with 4 to solve the first four coefficients dk. From the solved coefficients, the reconstructed spectrum vector is

SRec = P0 + Σ_{k=1}^{4} dk Pk. (5)

If the reconstructed spectrum has negative values, those values are set to zero. The first four principal components are the basis spectra for the spectrum reconstruction using a quadcolor camera. The channel spectral sensitivity vectors are given in Section 2.1. As described in Section 1, in practice, the spectral sensitivity matrix DCam is measured or estimated experimentally; additional errors introduced by such measurements/estimations were not considered in this paper. The wPCA method is the same as the PCA method shown above, except that the training samples are weighted according to the sample to be reconstructed [22]. The i-th training sample was multiplied by a weighting factor ΔEi^−γ, where ΔEi is the color difference between the test sample and the i-th training sample in the CIELAB color space, and γ is a constant. The weighted training samples were used to derive the basis spectra.
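The truncated-PCA solve of Equations (3)-(5) can be sketched as follows; hypothetical smooth training spectra and a random sensitivity matrix stand in for the Munsell data and the camera of Section 2.1:

```python
import numpy as np

rng = np.random.default_rng(2)
Mw, n_train = 31, 202
lam = np.linspace(0.0, 1.0, Mw)

# Hypothetical smooth, non-negative "training spectra".
freqs = rng.uniform(0.5, 2.0, size=(n_train, 1))
phases = rng.uniform(0.0, np.pi, size=(n_train, 1))
train = np.abs(np.sin(2 * np.pi * freqs * lam + phases))

D_cam = rng.uniform(0.0, 1.0, size=(Mw, 4))   # stand-in sensitivity matrix

# PCA: mean spectrum P0 and principal-component spectra Pk (Equation (3)).
P0 = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - P0, full_matrices=False)
P = Vt[:4].T                   # first four principal components, shape (Mw, 4)

# Equation (4) truncated to four terms: C = C0 + Q d, a 4x4 linear system in d.
s_true = train[0]
C = D_cam.T @ s_true
C0 = D_cam.T @ P0
Q = D_cam.T @ P
d = np.linalg.solve(Q, C - C0)

# Equation (5): truncated reconstruction, negative values clipped to zero.
s_rec = np.clip(P0 + P @ d, 0.0, None)
```

Before clipping, the reconstruction reproduces the four camera signals exactly, since C0 + Q d = C by construction; the spectral error comes entirely from the truncation to four basis spectra.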
The larger the value of γ, the greater the contribution of training samples with small color differences to the basis spectra. If γ = 0, the wPCA method becomes the traditional PCA method. The value of γ is usually set to 1.0 [22]. In this paper, the value of γ was optimized for the minimum mean ERef of the test samples for each individual camera. A camera device model was used to convert RGB signal values into tristimulus values for calculating ΔEi. A third-order root polynomial regression model (RPRM) was employed and trained using the reference samples [44]. The accuracy of the RPRM was slightly higher than that of the polynomial regression model in this case.
We also tried using the weighting factor GFCi^κ instead of ΔEi^−γ, where κ is a constant to be optimized and GFCi is the GFC between the test sample and the i-th training sample. The larger the value of κ, the greater the contribution of training samples with a large goodness-of-fitting coefficient to the basis spectra. Using such a weighting factor requires two-stage spectrum reconstruction. The first stage reconstructs the spectrum using the weighting factor ΔEi^−γ; the reconstructed spectrum is used to calculate GFCi. The second stage reconstructs the spectrum using the weighting factor GFCi^κ. However, the spectrum reconstruction error using the weighting factor GFCi^κ was larger than that using ΔEi^−γ. Therefore, this paper only considers the weighting factor ΔEi^−γ.
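The weighting step of the wPCA method can be sketched as below. This is only an illustration of the idea in [22]: the centered training spectra are scaled by per-sample weights before the basis is derived, and a random stand-in replaces the CIELAB ΔEi computed via the device model:

```python
import numpy as np

def weighted_pca_basis(train, weights, n_components=4):
    # Weighted mean, then PCA of the weight-scaled centered samples.
    # The exact weighting scheme of the wPCA method may differ in detail.
    w = weights / weights.sum()
    P0 = (train * w[:, None]).sum(axis=0)     # weighted mean spectrum
    X = (train - P0) * weights[:, None]       # weight the centered samples
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return P0, Vt[:n_components].T            # P0 and the leading basis spectra

rng = np.random.default_rng(3)
train = rng.uniform(0.0, 1.0, size=(202, 31))   # hypothetical training spectra
delta_e = rng.uniform(0.1, 50.0, size=202)      # stand-in for CIELAB Delta-E_i
gamma = 1.0                                     # gamma = 1.0 is the usual choice [22]
P0, P = weighted_pca_basis(train, delta_e ** (-gamma))
```

Training samples close to the test sample (small ΔEi) receive large weights ΔEi^−γ and therefore dominate the derived basis, which is the intent of the method.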

The LUT Method: Interpolation
Detailed descriptions of the LUT method for the 3D case were given in [25,29]. The LUT method for the 4D case is the same as for the 3D case, but the signal dimensions are different. This subsection shows the reconstruction of the reflection spectrum vector SRec from the test signal vector C for the 4D case. Linear scattered-data interpolation was used to reconstruct the spectrum due to its simplicity and computational time savings [25-27,29]. A simplex mesh in the signal space was generated from the reference signal vectors using the Delaunay triangulation [25]. For example, a simplex is a triangle and a tetrahedron in 2D and 3D signal spaces, respectively. All programs for this work were implemented in MATLAB (version R2021a, MathWorks). The simplex mesh was generated by the MATLAB function "delaunayn" [45]. There were three steps to interpolate a test sample.
(i) The simplex that encloses the vector C in the signal space was located. This paper used the MATLAB function "tsearchn" to locate the simplex [46]. (ii) It is required that C is the linear combination of the five reference signal vectors C1, . . . , C5 at the vertices of the simplex,

C = α1C1 + α2C2 + α3C3 + α4C4 + α5C5, (6)

and

α1 + α2 + α3 + α4 + α5 = 1, (7)

where the coefficients α1, α2, α3, α4 and α5 are weighting factors. Equation (6) comprises four scalar equations because the signal vector is 4D. Equation (7) guarantees that the color point of the signal vector is inside the simplex in the signal space if 0 < α1, α2, α3, α4, α5 < 1. The five coefficients in Equations (6) and (7) were solved. (iii) The reconstructed reflection spectrum vector is

SRec = α1S1 + α2S2 + α3S3 + α4S4 + α5S5, (8)

where Sj is the reference spectrum vector corresponding to the j-th vertex, j = 1, 2, 3, 4 and 5. If the reconstructed spectrum has negative values, those values are set to zero.
The solutions to the coefficients in Equations (6) and (7) are unique. These coefficients are called barycentric coordinates. They describe the location of the color point in the simplex [25]. The linear interpolation is called the barycentric interpolation.
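A Python counterpart of the three interpolation steps (the text uses MATLAB's "delaunayn" and "tsearchn") can be sketched with SciPy, using hypothetical reference data:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(4)
n_ref, dim, Mw = 202, 4, 31

# Hypothetical reference data: 4D signal vectors and their reflection spectra.
C_ref = rng.uniform(0.0, 1.0, size=(n_ref, dim))
S_ref = rng.uniform(0.0, 1.0, size=(n_ref, Mw))

tri = Delaunay(C_ref)    # simplex mesh (counterpart of "delaunayn")

def reconstruct(c):
    # Step (i): locate the enclosing simplex (counterpart of "tsearchn").
    idx = int(tri.find_simplex(c[None, :])[0])
    if idx < 0:
        return None      # outside the convex hull: extrapolation needed (Section 3.3)
    # Step (ii): barycentric coordinates alpha_1..alpha_5 in the 4-simplex.
    T = tri.transform[idx]
    b = T[:dim] @ (c - T[dim])
    alpha = np.append(b, 1.0 - b.sum())       # Equation (7): coefficients sum to 1
    # Step (iii): Equation (8), with negative values clipped to zero.
    return np.clip(alpha @ S_ref[tri.simplices[idx]], 0.0, None)

s_rec = reconstruct(C_ref.mean(axis=0))       # the centroid is always an inside sample
```

`Delaunay.transform` stores the affine map to barycentric coordinates for each simplex, so step (ii) is a single matrix-vector product rather than an explicit solve of Equations (6) and (7).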
If a signal vector is outside the convex hull of the simplex mesh, it is an outside sample. Figure 5a-d show the 340 outside samples as blue dots in the signal space using the RGBF camera. The method to extrapolate outside samples is described in Section 3.3.

The LUT Method: Extrapolation
Section 2.2 shows that there are 340 outside samples using the RGBF camera, while the number of outside samples is 202 using the D5100 camera [29] for the same 202 reference samples and 1066 test samples. Imagine projecting multiple points in a 3D space onto a 2D space. If the point volume density in the 3D space is low, the point density in the 2D space can be high and vice versa. Similarly, if the color point density of the reference samples in the RGB signal space is high enough to interpolate, say, 80% of the test samples, then the color point density of the same reference samples in the 4D signal space may only be able to interpolate, say, 70% of the test samples. Therefore, for the same reference and test samples, the number of outside samples increases with the signal dimension.
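The dimensionality argument above can be illustrated numerically with uniform random points standing in for the samples (hypothetical data, not the Munsell chips): the same number of reference points encloses a smaller fraction of test points as the signal dimension grows.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

def inside_fraction(dim, n_ref=202, n_test=1000):
    # Fraction of random test points that fall inside the convex hull of
    # n_ref random reference points in the unit cube of dimension `dim`.
    ref = rng.uniform(0.0, 1.0, size=(n_ref, dim))
    test = rng.uniform(0.0, 1.0, size=(n_test, dim))
    return float(np.mean(Delaunay(ref).find_simplex(test) >= 0))

f3 = inside_fraction(3)   # 3D signal space (tricolor camera)
f4 = inside_fraction(4)   # 4D signal space (quadcolor camera)
```

With this setup f3 exceeds f4, mirroring the increase from 202 to 340 outside samples reported for the RGBF camera.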
The spectra of all signals can be reconstructed using the wPCA method but not the LUT method. However, due to the lack of suitable nearby training samples, as shown in Figure 5a-d, the spectrum reconstruction error of an outside sample using the wPCA method is likely to be larger than that of an inside sample. In this paper, outside samples of the LUT method were extrapolated utilizing the reference samples and ARSs [29]. ARSs are high-saturation samples. They are created using appropriately selected color filters and color chips. The color filters are called the ARS filters. The extrapolation process is the same as the interpolation method shown in Section 3.2 but using the expanded reference samples including ARSs.
The authors of [29] used cyan, magenta and yellow (CMY) ARS filters to extrapolate the 202 outside samples for the case with the D5100 camera. It was found that, of the 340 outside samples, a few could not be extrapolated using the CMY ARS filters for some of the quadcolor cameras under consideration. For example, 2 outside samples could not be extrapolated for the RGBC cameras. It was also found that using additional red, green and blue (RGB) ARS filters to create more ARSs enables all outside samples to be extrapolated.
The ARS filters can be optimized to minimize spectrum reconstruction errors, but the optimization requires the spectral sensitivity functions of the camera. Filter characteristics can be specified by the edge wavelength and edge width, which are defined in the same way as for the color filter of the fourth channel in Section 2.1. Although the design is not optimal, their specifications can be selected according to the channel wavelengths. The channel wavelength is the average wavelength of the spectral sensitivity of the signal channel. The RGB channel wavelengths of the D5100 camera are λCamR = 603.4 nm, λCamG = 530.7 nm and λCamB = 466.7 nm. From [29], empirically, the edge wavelengths of the cyan and yellow filters can be λC = λCamR and λY = λCamC, respectively, where λCamC = (λCamB + λCamG)/2 is the mean wavelength of λCamB and λCamG; the edge wavelengths at the short-wavelength side and the long-wavelength side of the magenta filter can be λMS = λCamC and λML = λCamR, respectively. Therefore, we set λC = 603.4 nm, λY = 498.7 nm, λMS = 498.7 nm and λML = 603.4 nm. Figure 6a shows the spectral transmittance of the CMY ARS filters, where the maximum transmittance and edge width of all filters were set to 0.9 and 30 nm, respectively. The spectral transmittance of the RGB ARS filters is also based on the super-Gaussian function. In [29], the spectral transmittance function of the magenta ARS filter is an inverted super-Gaussian function. Since filter optimization is not the purpose of this paper, for simplicity, we set the edge wavelengths of the blue and red filters as λB = λCamC and λR = λCamR, respectively; the edge wavelengths on the short- and long-wavelength sides of the green filter were λGS = λCamC and λGL = λCamR, respectively. Therefore, we set λR = 603.4 nm, λB = 498.7 nm, λGS = 498.7 nm and λGL = 603.4 nm.
Figure 6b shows the spectral transmittance of the RGB ARS filters, where the maximum transmittance and edge width of all filters were set to 0.9 and 30 nm, respectively. The CMY and RGB filters are designated as the CMYRGB filters.
The ARSs were created according to the method in [29] using the CMYRGB filters specified above and the color chips corresponding to the vertices of the reference sample convex hull. The convex hull in the RGBF signal space cannot be shown due to its 4D geometry. Figure 7a-d show the color points of the ARSs created with the CYMRGB filters in the RGB, GBF, BFR and FRG signal spaces, respectively, for the case with the RGBF camera. The color points of ARSs created using CYM and RGB filters are shown as 47 red dots and 79 purple hollow dots, respectively. Figure 7a-d also show the 340 outside samples as blue dots for comparison. We can see that the gamut volume expanded by the ARSs in Figure 7a-d is larger than that expanded by the reference samples in Figure 5a-d. Both the reference samples and the ARSs created with the CMYRGB filters were used to extrapolate the outside samples for the cases with the tricolor and quadcolor cameras under consideration using the LUT method. The inclusion of the ARSs in the training sample set was found to deteriorate the spectrum reconstruction using the wPCA method as in [29].
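The super-Gaussian filter profile used for the ARS filters can be sketched numerically. The exact analytic form and order used in the paper are not reproduced here; the function below assumes a symmetric super-Gaussian whose half-maximum points fall at the two edge wavelengths, with an assumed order parameter controlling the edge steepness.

```python
import numpy as np

def super_gaussian_band(wl, edge_short, edge_long, t_max=0.9, order=6):
    """Band-pass transmittance with super-Gaussian edges (sketch).

    Assumes a symmetric profile whose half-maximum points sit at the two
    edge wavelengths; `order` (an assumption here) sets the edge steepness.
    """
    center = 0.5 * (edge_short + edge_long)
    half_width = 0.5 * (edge_long - edge_short)
    return t_max * np.exp(-np.log(2.0) * ((wl - center) / half_width) ** (2 * order))

wl = np.arange(400.0, 701.0)                   # wavelength grid, nm
green = super_gaussian_band(wl, 498.7, 603.4)  # green ARS filter edges from the text
```

By construction, the transmittance equals t_max at the band center and half of t_max at each edge wavelength; a larger order gives a steeper roll-off, i.e., a narrower edge width.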

CMF Mismatch Factor (CMFMisF)
If both sides of Equation (8) are multiplied by the spectral sensitivity matrix D Cam T and integrated over the wavelength, we obtain Equation (6). However, the interpolation is an inverse problem. The reconstructed spectrum vector S Rec is one of numerous metameric spectrum vectors corresponding to the test signal vector C. Equations (6) and (7) are five constraints for finding a metameric spectrum vector. If a tricolor camera is used to reconstruct the spectrum, the number of constraints is only four. The difference between the target spectral reflectance vector S Ref and the reconstructed spectral reflectance vector S RefRec calculated from the metameric spectrum vector S Rec was assessed using the metrics defined in Section 2.3.
Appendices A and B show the zero-color-difference conditions using the LUT and wPCA methods, respectively. If the sensitivity functions of the camera fit the CIE CMFs, x, y and z, ideally, the color difference between the reflection light spectrum S Reflection and the spectrum S Rec reconstructed using the LUT method is zero. Under this condition, the tristimulus XYZ values of the metameric spectrum vector S Rec are the same as those of the sample to be reconstructed. For the case of using the wPCA method, the color difference is also zero if the additional condition is satisfied, which requires that the spectral amplitude calculated from Equation (5) be non-negative. The color difference ∆E 00 between S Reflection and S Rec will be non-zero when the spectral sensitivity functions of the considered cameras do not fit the CMFs ideally. Therefore, the CMFMisF is defined to quantify the error in fitting the CMFs using the camera spectral sensitivity functions.

For each CMF vector u m , where m = x, y and z, u Fit,m is its least squares fit using the spectral sensitivity vectors of the camera. For the case of using the quadcolor camera, u Fit,m is the linear combination of the four channel spectral sensitivity vectors, where the coefficients β R , β G , β B and β 4th were solved using the Moore-Penrose pseudoinversion in the least-squares sense [43].
Table 1 shows the assessment metric statistics for the test samples using the LUT method, where the cameras are the D5100 and the RGBF. The LUT method for the D5100 camera is the same as that for the quadcolor camera shown in Section 3.2, except that the signal space is reduced from 4D to 3D. As can be seen from Table 1, using the RGBF camera reduced the mean E Ref , ∆E 00 and SCI of the inside samples and increased the mean GFC of the inside samples compared to the D5100 camera. The mean E Ref of the outside samples using the RGBF camera was even smaller than that of the inside samples using the D5100 camera, and the mean GFC was even larger. While non-zero as expected, the color difference ∆E 00 was small for most of the test samples. Compared to the D5100 camera, the mean E Ref of the test samples, inside samples and outside samples using the RGBF camera was reduced by 31.98%, 35.81% and 35.82%, respectively. Compared to the D5100 camera, the RGF99 of the test samples, inside samples and outside samples using the RGBF camera was increased from 0.9343, 0.9375 and 0.9208 to 0.9887, 0.9972 and 0.9706, respectively.
Table 2 is the same as Table 1 except that the wPCA method was used. The wPCA method for the D5100 camera is the same as that for the quadcolor camera shown in Section 3.1, except that three basis spectra were used. The inside and outside samples using the wPCA method were the same as those using the LUT method for comparison. The optimized γ = 1.7 and 1.2 for the cases of using the D5100 and RGBF cameras, respectively. From Table 2, the mean assessment metrics of the outside samples were worse than those of the inside samples.
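The least-squares fit of a CMF by the camera spectral sensitivities can be sketched as follows. The arrays below are random placeholders standing in for the real sampled sensitivities and CMFs; only the fitting step itself reflects the pseudoinversion described above.

```python
import numpy as np

rng = np.random.default_rng(0)
d_cam = rng.random((31, 4))  # placeholder camera sensitivities (31 bands x 4 channels)
u_m = rng.random(31)         # placeholder sampled CMF vector

# The Moore-Penrose pseudoinverse gives the least-squares coefficients
# [beta_R, beta_G, beta_B, beta_4th] in one step.
beta = np.linalg.pinv(d_cam) @ u_m
u_fit = d_cam @ beta         # least squares fit of the CMF in the channel span
```

Equivalently, `np.linalg.lstsq(d_cam, u_m, rcond=None)[0]` returns the same coefficients; the residual u_m - u_fit is orthogonal to the column space of d_cam, which is the defining property of the least-squares solution.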
Compared to the D5100 camera, the mean E Ref values of the test samples, inside samples and outside samples using the RGBF camera were reduced by 21.6%, 27.3% and 24.9%, respectively. Compared to the D5100 camera, the RGF99 values of the test samples, inside samples and outside samples using the RGBF camera increased from 0.9493, 0.9676 and 0.8713 to 0.9765, 0.9945 and 0.9382, respectively. The improvement of RGF99 on the outside samples using the RGBF camera is significant.
From Tables 1 and 2, it can be seen that the LUT method outperformed the wPCA method using the RGBF camera. Note that the wPCA method outperformed the LUT method using the D5100 camera, except for an approximately two orders of magnitude longer computation time [27,29]. In [29], the LUT method outperformed the wPCA method using the D5100 camera because the value of γ was not optimized for the wPCA method, where γ = 1.0. Figure 8a-d show the E Ref , GFC, ∆E 00 and SCI histograms for the test samples, respectively, where the three shown cases are (i) using the D5100 camera and the wPCA method, (ii) using the RGBF camera and the LUT method, and (iii) using the RGBF camera and the wPCA method. From Figure 8a-d, the numbers of test samples in the "E Ref > 0.05", "GFC < 0.99", "∆E 00 > 2.0" and "SCI > 20" bins using the LUT method are less than those using the wPCA method. From Tables 1 and 2, using the RGBF camera, the maximum E Ref = 0.0567 and 0.0742 for the cases of using the LUT and wPCA methods, respectively. These results show that using the LUT method is more reliable for spectral reflectance recovery. However, when using either the LUT method or the wPCA method, the assessment metrics were improved using the RGBF camera compared to the D5100 camera.

Using the RGBF Camera
Since the mean assessment metrics of the outside samples are worse than those of the inside samples, the spectrum reconstruction of the outside samples was investigated in more detail. Figure 9a-f show the recovered spectral reflectance S RefRec using the LUT method from the light reflected from the 2.5G 7/6, 10P 7/8, 2.5R 4/12, 2.5Y 9/4, 10BG 4/8 and 5PB 4/12 color chips, respectively, where their target reflectance S Ref values are also shown. In addition to the D5100 and RGBF cameras, Figure 9a-f also show the results using other cameras, which will be considered in the following subsections. The same color chips were used as examples in [29] to show the spectral reflectance recovery using the D5100 camera and the LUT method. All the cases in Figure 9a-f are outside examples. The case in Figure 9a is an inside sample using the D5100 camera, but it becomes an outside sample using the RGBF camera. Figure 10a-f are the same as Figure 9a-f, respectively, except that the spectra were recovered using the wPCA method. The E Ref and ∆E 00 values for the cases in Figures 9a-f and 10a-f are listed in Tables 3 and 4, respectively, where the values larger than 0.03 are shown in bold. The cases where the spectral error E Ref of using the RGBF camera is larger than that of using the D5100 camera are the cases of Figure 9e,f and the case of Figure 10a. The cases where the color difference ∆E 00 of using the RGBF camera is larger than that of using the D5100 camera are the cases of Figure 9b,f and the cases of Figure 10a,b,f. Compared to the D5100 camera, using the RGBF camera effectively improved the statistics of the assessment metrics, but it does not guarantee better spectral reflectance recovery for every color chip tested.

Using the RGBC Camera
Since there are already red, green and blue channels, the fourth channel is reasonably designed to be either a cyan channel or a yellow channel. From the spectral sensitivity of the silicon sensor shown in Figure 2b, if the fourth channel is to be a cyan channel, its sensitivity at long wavelengths must be suppressed. Applying one of the compensation color filters shown in Figure 3a modifies the greenish-yellow channel in Figure 2b into a yellowish-green channel in Figure 3b rather than a cyan channel. If the applied color filter has a much smaller mid- and long-wavelength transmittance than the filters shown in Figure 3a, the fourth channel becomes a blue or greenish-blue channel. The S channel of the RGBS camera is an example of such a case, which will be considered in Section 4.3. The case with the spectral sensitivity shown in Figure 2b has been considered in Section 4.1, i.e., the RGBF camera. This subsection will consider the cases with the spectral sensitivities shown in Figure 3b.
It was found that among the five color filters in Figure 3a, the spectrum reconstruction error was the smallest when the IEC 131K filter was applied to the fourth channel. Using the LUT method, the mean E Ref = 0.0091, 0.0128, 0.0093, 0.0099 and 0.0125 for the cases with the IEC 131K, 501, 508, 518 and 578 filters, respectively; RGF99 = 0.9897, 0.9060, 0.9878, 0.9803 and 0.9240 for the cases with the IEC 131K, 501, 508, 518 and 578 filters, respectively. The mean spectrum reconstruction error increased as the spectral transmittance of the filter decreased in the long-wavelength region. Table 5 shows the assessment metric statistics for the test samples of the RGBC camera with the IEC 131K filter using the LUT and wPCA methods. For the case of using the wPCA method, the optimized γ = 1.2. Figure 11a-d show the E Ref , GFC, ∆E 00 and SCI histograms, respectively, for the test samples of the optimized RGBC camera with the IEC 131K filter using the LUT method. Figure 12a-d are the same as Figure 11a-d, except for using the wPCA method. Figures 9a-f and 10a-f also show the spectral reflectance recovery examples using the optimized RGBC camera, where the values of E Ref and ∆E 00 are shown in Tables 3 and 4, respectively.
Table 5. Assessment metric statistics for the spectrum reconstruction of the 1066 test samples. The quadcolor cameras and spectrum reconstruction methods are indicated. For the case of using the wPCA method, the optimized γ = 1.2, 1.2, 1.9 and 1.3 using the RGBF, RGBC, RGBS and RGBL cameras, respectively. The best values are shown in bold.
For ease of comparison, the assessment metric statistics for using the RGBF camera are also shown in Table 5. As can be seen from Table 5, the assessment metric statistics using the optimized RGBC camera were about the same as those using the RGBF camera, but with an additional compensation color filter. The use of the compensation color filter to suppress the spectral sensitivity of the fourth channel in the long-wavelength region did not improve the performance of spectrum reconstruction.

Using the RGBS and RGBL Cameras
The above results show that using the fourth channel without a compensation color filter produced a slightly smaller mean E Ref . Further reduction in the mean E Ref is possible if the spectral sensitivity of the fourth channel can be appropriately modified using a color filter. This subsection considers the cases of using the short-pass and long-pass filters defined in Section 2.1 as the color filter applied to the fourth channel. Figure 13a shows the mean E Ref and 1-GFC of the test samples using the LUT and wPCA methods versus the edge wavelength λ S of the short-pass optical filter. Figure 13b is the same as Figure 13a except that the mean ∆E 00 and SCI are shown. Figure 14a,b are the same as Figure 13a,b, respectively, except that the long-pass optical filter was used. As can be seen from Figures 13a and 14a, the mean 1-GFC roughly followed the mean E Ref . From Figures 13b and 14b, the mean SCI roughly followed the mean ∆E 00 . Figure 15 shows the optimized value of γ for the case of using the wPCA method in Figures 13 and 14. For the case of using the RGBS camera, the optimized value of γ for the wPCA method is larger around 530 nm, as shown in Figure 15. We will show that the minimum CMFMisF is at this wavelength.
From Figures 13b and 14a, it can be seen that the RGBS and RGBL cameras are suitably designed for low color difference and low spectral error, respectively. From Figure 13a,b, using the LUT method, the quadcolor camera optimized for the minimum mean ∆E 00 is the RGBS camera using the short-pass optical filter of λ S = 528 nm. For this optimized RGBS camera, the mean E Ref = 0.0115 and ∆E 00 = 0.1666, where the mean E Ref is larger than that of the RGBF camera but smaller than that of the D5100 camera; the mean ∆E 00 is much smaller than those of the RGBF and D5100 cameras; the maximum ∆E 00 is only 1.0169. Using the wPCA method, the edge wavelength of the optimized RGBS camera is λ S = 529 nm and the optimized γ = 1.9. For this optimized RGBS camera, the mean E Ref = 0.0141 and ∆E 00 = 0.158, where the mean E Ref and ∆E 00 are larger and smaller, respectively, than those of the D5100 and RGBF cameras.
From Figure 14a,b, using the LUT method, the quadcolor camera optimized for the minimum mean E Ref is the RGBL camera using the long-pass optical filter of λ L = 585 nm. The mean E Ref = 0.0083 and ∆E 00 = 0.3506 using the optimized RGBL camera with λ L = 585 nm are smaller than the mean E Ref = 0.0089 and ∆E 00 = 0.3992 using the RGBF camera, respectively. Using the wPCA method, the edge wavelength of the optimized RGBL is λ L = 587 nm and the optimized γ = 1.3. The mean E Ref = 0.0087 and ∆E 00 = 0.3862 using the optimized RGBL camera are smaller than the mean E Ref = 0.0095 and ∆E 00 = 0.4257 using the RGBF camera, respectively.
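The edge-wavelength optimization above amounts to a one-dimensional exhaustive sweep. The sketch below shows only the search skeleton; mean_error_for_edge is a hypothetical stand-in for the full pipeline (building the camera with the candidate filter, reconstructing all test samples, and averaging ∆E00), here replaced by a toy function with a minimum at 528 nm so the loop is runnable.

```python
import numpy as np

def mean_error_for_edge(edge_nm):
    """Hypothetical stand-in for evaluating one candidate short-pass edge
    wavelength (build the RGBS camera, reconstruct the 1066 test samples,
    average Delta-E00). A toy quadratic with its minimum at 528 nm is
    used here so the sweep itself is runnable."""
    return 0.17 + 1e-4 * (edge_nm - 528.0) ** 2

edges = np.arange(500.0, 561.0)                  # candidate edge wavelengths, nm
errors = np.array([mean_error_for_edge(e) for e in edges])
best_edge = edges[int(np.argmin(errors))]        # exhaustive 1D search
```

Because the search space is one-dimensional and the sweep step is 1 nm, a brute-force scan is cheap; the expensive part in practice is the per-candidate reconstruction of all test samples.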
For the cases of using the LUT method, the spectral transmittance of the optimized long-pass and short-pass optical filters and their corresponding fourth-channel spectral sensitivities are shown in Figure 3b. The histograms of the assessment metrics for using the optimized RGBS and RGBL cameras are shown in Figure 11a-d for the case of using the LUT method. Figure 12a-d are the same as Figure 11a-d, respectively, except for the case of using the wPCA method. From Figures 11a-d and 12a-d, it can be seen that the histogram characteristics for the RGBC and RGBL cameras are similar when using the LUT method or the wPCA method. The histogram characteristics for the RGBS camera are quite different from those for the RGBC and RGBL cameras when using the LUT method or the wPCA method. For the case of using the RGBS camera and the LUT method, there are 28, 52 and 2 test samples in the "E Ref > 0.05", "GFC < 0.99" and "SCI > 20" bins, respectively, while there is only 1 test sample in the "∆E 00 = 1.1" bin and no test sample in the larger ∆E 00 bins. For this case, the color difference of all test samples is low despite the large spectral error. For the case of using the RGBS camera and the wPCA method, there are 52, 82, 4 and 14 test samples in the "E Ref > 0.05", "GFC < 0.99", "∆E 00 > 2.0" and "SCI > 20" bins, respectively. For this case, most of the test samples have a low color difference, but there are a few test samples with a large color difference. Therefore, if the RGBS camera is used as an imaging colorimeter, it is more reliable to reconstruct spectra using the LUT method. Table 5 shows the assessment metric statistics for the test samples of the optimized RGBS and RGBL cameras using the LUT and wPCA methods. For the case of using the LUT or wPCA method, the mean E Ref and mean ∆E 00 using the optimized RGBS camera are the largest and smallest, respectively, compared to the other quadcolor cameras. Figures 9a-f and 10a-f also show the spectral reflectance recovery examples using the optimized RGBS and RGBL cameras, where the values of E Ref and ∆E 00 are shown in Tables 3 and 4, respectively.
Notably, Figure 10f shows poor spectral reflectance recovered using the RGBS camera and the wPCA method, where the reflectance is 1.486 at 700 nm and zero around 580 nm. The zero is due to the negative value calculated from Equation (5).
The small color difference using the optimized RGBS camera can be explained using the CMFMisF defined in Section 3.4. Figure 16 shows the CMFMisF value versus the edge wavelength of the optical filter. Comparing Figure 16 with Figures 13b and 14b, it can be seen that the mean ∆E 00 closely relates to the CMFMisF for the RGBS camera. The optimized edge wavelength of the RGBS camera using the LUT method or the wPCA method is about the wavelength of the minimum value in Figure 16, where the minimum CMFMisF = 0.09537 at λ S = 530 nm. CMFMisF = 0.1495, 0.1495, 0.1486, 0.09543 and 0.1493 for the RGB, RGBF, optimized RGBC, optimized RGBS (λ S = 528 nm) and optimized RGBL (λ L = 585 nm) cameras, respectively. Note that the CMFMisF values are the same for the RGB and RGBF cameras, since the fourth channel of the RGBF camera contributes negligibly to the fit.
The CMFs can be better fitted using the spectral sensitivities of the optimized RGBS camera. Figure 17a,b show the least squares fits of the CMF vectors using the spectral sensitivity vectors of the RGBF camera and the optimized RGBS camera (λ S = 528 nm), respectively. Although the CMF vectors were not well fitted in Figure 17a, the color difference ∆E 00 of a test sample will still be small if its spectrum is well reconstructed using the RGBF camera.
Figure 17. Least squares fits of the CMF vectors using the spectral sensitivity vectors of (a) the RGBF camera and (b) the optimized RGBS camera with the 528 nm short-pass optical filter. The CMFs x, y and z are shown in red, green and blue, respectively. Solid and dashed lines show the CMFs and the least squares fits, respectively.
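Since the color difference tracks how well the camera sensitivities span the CMFs, the mismatch can be quantified as a normalized least-squares residual. The function below is one plausible form for illustration only; the paper's exact CMFMisF expression may differ, and the arrays are random placeholders rather than real sensitivities or CMFs.

```python
import numpy as np

def cmf_mismatch(d_cam, cmfs):
    """Normalized residual of least squares fitting the CMFs (columns of
    `cmfs`, N x 3) with the camera sensitivities (columns of `d_cam`,
    N x 4). Zero means the sensitivities span the CMFs exactly."""
    fit = d_cam @ np.linalg.pinv(d_cam) @ cmfs  # projection onto the channel span
    return np.linalg.norm(cmfs - fit) / np.linalg.norm(cmfs)

rng = np.random.default_rng(0)
d_cam = rng.random((31, 4))           # placeholder sensitivities
perfect = d_cam @ rng.random((4, 3))  # CMFs lying exactly in the channel span
```

For `perfect`, the mismatch is zero up to floating-point error, matching the intuition that a camera whose sensitivity functions span the CMFs exactly would satisfy the Luther condition and yield zero color difference.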
As a comparison, the authors of [29] showed that for the same test samples using the CMF camera and optimized CMY ARS filters, the mean E Ref , GFC, ∆E 00 and SCI are 0.0132, 0.9972, 0.0 and 4.1869, respectively. The CMF camera is the artificial tricolor camera with spectral sensitivity functions that are the same as the CMFs. While the mean ∆E 00 is zero, the mean E Ref of the CMF camera is about the same as the D5100 camera.

Cross Comparison
From the results shown above, the LUT method outperformed the wPCA method in spectrum reconstruction. We first discuss the case of using the LUT method. Compared to the D5100 camera, the mean E Ref values using the RGBF and optimized RGBL cameras were reduced by 32.0% and 36.8%, respectively. Compared to the D5100 camera, the mean ∆E 00 values using the RGBF and optimized RGBS cameras were reduced by 5.3% and 60.5%, respectively. Compared to the RGBF camera, the advantage of using the optimized RGBL camera to obtain a smaller mean E Ref was not significant; the advantage of using the optimized RGBS camera to obtain a smaller mean ∆E 00 was significant. Compared to the RGBF camera, the mean ∆E 00 using the optimized RGBS camera was reduced by 58.3%, but the mean E Ref was increased by 28.6%. However, since the mean ∆E 00 of the RGBF camera was already as small as 0.3992, using the RGBF camera may be better than using the optimized RGBS camera in spectral reflectance recovery due to the smaller mean E Ref and no need for a color filter applied to the fourth channel.
In this paragraph, the use of the wPCA method is discussed. Compared to the D5100 camera, the mean E Ref values using the RGBF and optimized RGBL cameras were reduced by 21.6% and 28.2%, respectively. Compared to the D5100 camera, the mean ∆E 00 values using the RGBF and optimized RGBS cameras were increased by 9.6% and reduced by 59.3%, respectively. Compared to the RGBF camera, the advantage of using the optimized RGBL camera to obtain smaller spectral error was also not significant. Compared to the RGBF camera, using the optimized RGBS camera had a 62.9% smaller mean ∆E 00 but a 49.5% larger mean E Ref .
The RGBF camera is a compromise design for the case of using the LUT or wPCA method. If a mean E Ref smaller than that of the RGBF camera is required, a suitable long-pass optical filter can be applied to the fourth channel. If an ultra-small color difference is required, a suitable short-pass optical filter can be applied to the fourth channel, but care must be taken not to add too much spectral error.
The computation time required for the LUT method is about two orders of magnitude shorter than that required for reconstruction methods using basis spectra that emphasize the relationship between the test and training samples in the 3D case [27,29]. However, in the 4D case, the computation time required to use the LUT method is longer than that of the wPCA method, because it takes much longer to locate the simplex for interpolation than in the 3D case. For the case of using the quadcolor camera, the ratio of the computation time required to use the LUT method to that of the wPCA method was 1:0.49, where samples were reconstructed from their signal vector C to the spectral reflectance vector S RefRec using MATLAB on the Windows 10 platform. The improvement of the algorithm to locate the simplex faster is a computational geometry problem and is beyond the scope of this paper.
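The simplex-location and interpolation steps of the LUT method can be sketched with an off-the-shelf triangulation. The arrays below are random stand-ins for the training signal vectors and spectra (the paper's MATLAB implementation is not reproduced); scipy's LinearNDInterpolator triangulates the 4D signal space with Qhull, locates the enclosing simplex for each query, and blends the five vertex spectra with barycentric weights, which mirrors the interpolation described above.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(1)
train_signals = rng.random((200, 4))   # stand-in 4D training signal vectors (RGBF)
train_spectra = rng.random((200, 31))  # stand-in training reflectance spectra (31 bands)

# Triangulate the 4D signal space once; each query then requires locating
# the enclosing simplex (the costly step in 4D) and barycentric blending
# of its five vertex spectra.
lut = LinearNDInterpolator(train_signals, train_spectra)

query = train_signals.mean(axis=0, keepdims=True)  # a point well inside the hull
recovered = lut(query)[0]                          # interpolated 31-band reflectance
```

Queries outside the convex hull return NaN, which is the extrapolation problem the ARSs address: adding gamut-expanding training samples enlarges the hull so that more test signals fall inside it.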

Conclusions
Using the conventional tricolor cameras to recover the spectral reflectance has the advantages of low cost, high spatial resolution, fast detection, and no need to measure/estimate the camera spectral sensitivity functions. The reduction in the spectrum reconstruction error using the quadcolor cameras was shown, where the color filter array is compatible with the tricolor cameras. The wPCA and LUT methods were used to reconstruct spectra from the quadcolor camera signals. The optimized weighting factor of the wPCA method was used for individual cameras. The spectral error metrics, E Ref and 1-GFC, and the color appearance error metrics, ∆E 00 and SCI, were used to assess the reconstructed spectra. The assessment results for using the two spectrum reconstruction methods were compared.
It was assumed that the spectral sensitivities of the red, green and blue channels of the quadcolor cameras are the same as those of the Nikon D5100 camera. The spectral sensitivity of the fourth channel depends on the spectral transmittance of the IR-cut filter and the color filter, in addition to the spectral sensitivity of the silicon sensor. The quadcolor RGBF camera was considered, where no color filter is applied to its fourth channel. The quadcolor RGBC, RGBS and RGBL cameras were also considered, with sensitivity compensation filters, short-pass filters and long-pass filters applied to their fourth channels, respectively. Five commercially available sensitivity compensation optical filters were used.
The Munsell color chips were taken as reflective surface examples, where 202 and 1066 color chips were used to prepare the reference/training and test samples, respectively, under the illuminant D65. It was found that, using the LUT method, the number of test samples with a large spectral error or a large color difference was smaller than that using the wPCA method. Future research directions include the following:
1. The relation of the color difference and the camera spectral sensitivity functions was shown in this paper. More research is needed to investigate the relation of the spectral error and the camera spectral sensitivity functions.
2. Methods are needed to optimize the camera spectral sensitivity functions to achieve low spectral error and low color difference.
3. Time-saving algorithms need to be developed for the LUT method in the 4D case.

Appendix A. Zero-Color-Difference Condition Using the LUT Method
The CMF matrix is defined as F = [u x u y u z ], where u x , u y and u z are the CMF vectors defined in Section 3.4. If the camera spectral sensitivity functions fit the CMFs ideally, the CMF matrix F can be written as
F = D Cam B T , (A1)
where D Cam is the camera spectral sensitivity matrix defined in Section 3.1 and B is a 3×4 coefficient matrix. If Equation (A1) is valid, the color difference between the reflection light spectrum S Reflection and the spectrum S Rec reconstructed using the LUT method is zero.