Article

Spectral Filter Selection Based on Human Color Vision for Spectral Reflectance Recovery

Shijun Niu, Guangyuan Wu * and Xiaozhou Li
1 Faculty of Light Industry, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
2 State Key Laboratory of Biobased Material and Green Papermaking, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(11), 5225; https://doi.org/10.3390/s23115225
Submission received: 30 March 2023 / Revised: 25 May 2023 / Accepted: 27 May 2023 / Published: 31 May 2023
(This article belongs to the Special Issue Recent Trends and Advances in Color and Spectral Sensors)

Abstract: Spectral filters are an important part of a multispectral acquisition system, and selecting suitable filters can improve spectral recovery accuracy. In this paper, we propose an efficient method, based on human color vision, to recover spectral reflectance through optimal filter selection. The original sensitivity curves of the filters are weighted using the LMS cone response functions. The area enclosed by each weighted filter sensitivity curve and the coordinate axis is calculated and compared with the area before weighting; the three filters with the smallest relative reduction in area are chosen as the initial filters. Initial filters selected in this way are closest to the sensitivity functions of the human visual system. After the three initial filters are combined with the remaining filters one by one, the resulting filter sets are substituted into the spectral recovery model. The best filter sets under L-weighting, M-weighting, and S-weighting are selected according to a custom error score ranking, and the optimal filter set is then selected from these three according to the same ranking. The experimental results demonstrate that the proposed method outperforms existing methods in spectral and colorimetric accuracy, with good stability and robustness. This work will be useful for optimizing the spectral sensitivity of a multispectral acquisition system.

1. Introduction

Over the last few decades, multispectral imaging technology has been widely used because it solves the metamerism problem of traditional three-channel digital cameras and enables true recording of the spectral information of object surfaces. The technology has gradually been applied in museums, art galleries, computer graphics, spectral detection, etc. [1,2,3,4,5,6]. One of the most important components of a multispectral acquisition system is the set of optical filters that allows acquisition in different bands of the visible spectrum. The selection of a specific filter set from a given filter space clearly affects the accuracy of spectral recovery. Although using more filters usually improves recovery accuracy, it also increases the operational complexity, image acquisition time, and data volume. Therefore, considerable research has been devoted to the optimal selection of filters.
Filter set optimization has already been studied in some cases, but many problems remain [7,8,9,10,11]. Some scholars have designed filters with theoretically optimal spectral sensitivity based on specific optimization criteria [12,13,14,15]. However, the combined effects of the optical path, the light source, and the sensor's spectral characteristics make the design process complex. At the same time, a manufactured filter cannot be guaranteed to have exactly the spectral sensitivity of the theoretically designed optimum. Another option is to select the best filters from those available. This exhaustive approach is practical when the total number of filters is small; however, as the number of candidate filters increases, its computational complexity grows dramatically, making it inapplicable [16].
With extensive research, multispectral filter selection no longer relies simply on empirical methods. The filter vector analysis method (FVAM) [17] is commonly used for filter selection. Hardeberg first used the maximum linear independence (MLI) [18] method to select the spectral training set, and Li later applied it to filter selection [19]. The selection principle of the MLI method is that the transmission matrix of the selected filter set has the smallest condition number. The transmission vector maximization orthogonal method (MaxOr) [20] selects the filter with the largest transmission vector norm as the preferred filter, and then uses each filter to form the transmission space and selects the filter set with the largest transmission space orthogonality. The linear distance maximization method (LDMM) [21] uses the linear distance between filter sensitivity vectors as the sole criterion for selecting filter sets. FVAM selects filters directly from the mathematical properties of the filter sensitivity curves, which is simple and time-saving, and the stability of its selection results is better than that of empirical methods.
However, the above methods do not consider other parameters of a multispectral imaging system, such as the spectral power distribution (SPD) of the light source, the spectral sensitivity of the camera, and the characteristics of the imaging scene [16,22]. Consequently, although the filters chosen by FVAM can guarantee the effectiveness of the first channel response of the multispectral camera, it is difficult to satisfy the optimization requirements of the whole system. It is therefore necessary to develop a filter selection method that integrates the other factors of a multispectral imaging system and selects filter sets with reference to the spectral recovery and colorimetric accuracy of each candidate group.
In response to the problems of the above study, a filter selection method combining weighted area selection and custom error score ranking is proposed in this paper. The method can be divided into two parts:
  • The original sensitivity curves of the filters are weighted by the LMS cone response functions. The area reduction rate of each filter before and after weighting is calculated, and the filters with the minimum area reduction rates are selected. The initial filters selected in this way are closest to the sensitivity functions of the human visual system.
  • The three initial filters are combined with the remaining filters one by one, and each combination is substituted into the spectral reconstruction model to obtain the recovery results of the whole imaging system. The respective optimal filter sets under L-weighting, M-weighting, and S-weighting are selected according to the customized minimum recovery error, and then the optimal filter set is selected from the three optimal filter sets by comparing the error set scores.
The innovation of this paper is to apply human visual system weighting in the filter vector analysis process, so that the selected filters are closest to the sensitivity curves of the human eye. The other factors of the multispectral imaging system are integrated, and the filter sets are selected according to the spectral recovery effect and colorimetric accuracy of each group.

2. Materials and Methods

Spectral recovery needs to simulate the camera response process. The spectral imaging model applies to any camera whose spectral sensitivity is known. The multispectral imaging model can be described by Formula (1):
P_i = \int_{400}^{700} E(\lambda) R_j(\lambda) Q_i(\lambda) \, d\lambda + N_i ,  (1)
where P_i is the response value of the ith channel of the sensor, E(\lambda) is the spectral power distribution (SPD) of the illumination at each wavelength, R_j(\lambda) is the spectral reflectance of sample j, Q_i(\lambda) is the spectral sensitivity of the ith channel of the sensor, \lambda is the wavelength sampled over 400–700 nm, and N_i is the noise of the digital camera. Following Liang's study [23], to simplify the calculation, the imaging model used in this paper neglects camera noise and assumes an equal-energy illumination. Formula (1) can be expressed in matrix form:
P = M R ,  (2)
where P is the response matrix, M is the overall spectral sensitivity matrix of the multispectral imaging system, i.e., the product of the matrix forms of E(\lambda) and Q_i(\lambda), and R is the original spectral reflectance matrix.
Spectral recovery is the process of obtaining high-dimensional spectral reflectance from low-dimensional response values. There are various spectral recovery methods, such as the common pseudo-inverse methods [24,25], principal component analysis methods [26], compressive sensing [27,28], Wiener estimation methods [29], and others [30,31,32,33,34]. The method used in this paper is the pseudo-inverse method, which can be expressed by Formula (3):
R = M^{+} P = M^{T} (M M^{T})^{-1} P ,  (3)
where superscript 'T' denotes matrix transposition, '−1' denotes matrix inversion, and M^{+} is the Moore–Penrose pseudo-inverse of M. The pseudo-inverse method is used for spectral recovery as follows: first, the transformation matrix is calculated from the training samples, and then the spectral reflectance of the testing sample is recovered by applying the transformation matrix to the known camera responses.
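As a minimal illustration of Formulas (1)–(3), the following Python sketch simulates camera responses with a hypothetical system matrix and recovers reflectances through the Moore–Penrose pseudo-inverse; the random arrays, dimensions, and noise-free assumption are illustrative stand-ins, not the paper's actual data.

```python
import numpy as np

n_bands, n_channels = 31, 6                  # 400-700 nm at 10 nm steps; 6 filter channels

rng = np.random.default_rng(0)
M = rng.uniform(size=(n_channels, n_bands))  # overall system sensitivity matrix
R = rng.uniform(size=(n_bands, 1269))        # reflectances, one column per sample

P = M @ R                                    # Formula (2): camera responses P = MR

# Formula (3): minimum-norm recovery through the Moore-Penrose pseudo-inverse,
# equivalent to M.T @ inv(M @ M.T) @ P when M has full row rank.
R_rec = np.linalg.pinv(M) @ P

print(np.allclose(M @ R_rec, P))             # True: responses are reproduced exactly
```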

3. The Proposed Method

In this section, the flow chart of the proposed filter selection method is shown in Figure 1. The root mean square error (RMSE), goodness of fit coefficient (GFC), and color difference (ΔE) are used to evaluate the spectral recovery effect of the selected filter set.
This selection method can be divided into four main processes: weighted area selection, exhaustive combination, multispectral recovery, and custom error score ranking.
Figure 1a of the flowchart is the first step of the operational process, where the original filters are weighted using the LMS cone response functions. The three filters that best match the LMS cone response functions of the human visual system are selected as the initial filters, based on the morphological and mathematical characteristics of the weighted spectral sensitivity curves. This step requires calculating the area reduction rate of each filter and selecting the initial filters according to that rate.
Figure 1b of the flowchart shows the second step of the operational process, in which, once the initial filters are identified, the remaining filters are exhaustively combined with them one by one. This step is iterative: when selecting the filter for the next channel, the L-weighted, M-weighted, and S-weighted filter sets that performed best in the previous selection round are used as the respective initial filter sets, and the remaining filters are combined with each initial set one by one in an exhaustive manner.
Figure 1c of the flowchart is the third step of the operational process: the spectra of each filter combination generated by the exhaustive method in the second step are recovered one by one, and the recovery error and chromaticity error of each combination are derived.
Figure 1d of the flowchart is the fourth step of the operational process: the optimal filter sets under L-weighting, M-weighting, and S-weighting are selected according to the custom recovery error ranking, and the overall optimal filter set is selected from these three by comparing their custom error scores. When the number of channels increases, the respective optimal filter sets under the three weightings are exhaustively combined with the remaining filters, and the second step is repeated until the number of filters in the selected set equals the number of channels.
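The four stages can be summarized in code. The sketch below mirrors the loop of Figure 1 under assumed helper names: `initial_filters` (sketched in Section 3.1 below) and `evaluate`, which runs spectral recovery and returns the custom error score of Section 3.4, are hypothetical functions introduced for this illustration.

```python
import numpy as np

def select_filter_set(filters, lms, n_channels, evaluate):
    # Fig. 1a: one seed filter per cone weighting (L, M, S).
    best_sets = [[i] for i in initial_filters(filters, lms)]
    # Fig. 1b-1d: grow each track by one filter per iteration until the target size.
    while len(best_sets[0]) < n_channels:
        grown = []
        for base in best_sets:
            candidates = [base + [i] for i in range(len(filters)) if i not in base]
            scores = [evaluate(c) for c in candidates]        # Fig. 1c: recover and score
            grown.append(candidates[int(np.argmin(scores))])  # Fig. 1d: best set per track
        best_sets = grown
    # Final comparison across the L-, M-, and S-weighted winners.
    return min(best_sets, key=evaluate)
```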

3.1. Weighted Area Selection

There are three types of cone photoreceptor cells on the retina of the human eye, called the L, M, and S cones because they roughly correspond to the long, medium, and short wavelength ranges of the visible spectrum. In this paper, the filters are weighted by the L, M, and S response functions so that the selected filter set better matches the human visual system. The weighting process is shown in Formula (4):
B_n = V_n b_i ,  (4)
where b denotes the original sensitivity curve of a filter; i indexes the filters; V denotes the cone response function; n indexes the cone response functions (L, M, S); and B denotes the filter sensitivity function weighted by the LMS cone response functions.
Formula (4) weights each filter by the response values of the LMS cone response functions. Because the sensitivity of the human eye differs across wavebands, this paper additionally weights each filter by the waveband distance between the filter peak and the corresponding LMS curve peak, as shown in Formulas (5) and (6):
c_n = \left| b_{\max}^{i} - V_{\max}^{n} \right| ,  (5)
C_n = c_n b_i ,  (6)
where b_{\max}^{i} denotes the waveband of the peak of the ith filter's response; V_{\max}^{n} denotes the waveband of the peak of the nth LMS cone response function; c_n denotes the waveband distance between the filter peak and the cone curve peak; and C_n denotes the filter sensitivity curve weighted by this band distance. The two weighted filter sensitivity functions are combined to obtain the final weighted filter sensitivity curve D_n, as in Formula (7):
D_n = B_n C_n  (7)
In this step, with the spectral band on the horizontal axis and the filter sensitivity response on the vertical axis, the areas under the filter sensitivity curve before and after weighting are calculated. The weighted area is subtracted from the original area to obtain the area difference, which is then divided by the original area; the filter whose area decreases the least is selected as the preferred filter. The selection process is described by Formula (8):
Z_n = \arg\min_i \frac{S_i - S_{wi}}{S_i} ,  (8)
where S_i represents the area of the unweighted ith filter and S_{wi} represents the area of the weighted one. Using Formula (8), we obtain the three preferred filters that best match the human eye's L, M, and S cone cells.
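A sketch of this weighted-area selection is given below, under assumed inputs: `filters` is a 15 × 31 array of candidate sensitivity curves and `lms` a 3 × 31 array of cone response functions sampled on the same wavelength grid; taking the peak distance as the absolute waveband difference is an assumption of this illustration.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)                # 31 sampling points, 400-700 nm

def initial_filters(filters, lms):
    """Return indices of the filters best matching the L, M, and S cones."""
    chosen = []
    for V in lms:                                    # one pass per cone response function
        rates = []
        for b in filters:
            B = V * b                                # Formula (4): cone-response weighting
            c = abs(wavelengths[np.argmax(b)] - wavelengths[np.argmax(V)])
            C = c * b                                # Formulas (5)-(6): peak-distance weighting
            D = B * C                                # Formula (7): combined weighted curve
            S = np.trapz(b, wavelengths)             # area before weighting
            Sw = np.trapz(D, wavelengths)            # area after weighting
            rates.append((S - Sw) / S)               # Formula (8): area reduction rate
        chosen.append(int(np.argmin(rates)))         # smallest reduction = closest match
    return chosen
```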

3.2. Exhaustive Combination

We define b1, b2, and b3 as the three preferred filters that best match the human eye after weighting by the L-cone, M-cone, and S-cone response functions, respectively. The remaining filters are combined with each of the three filters separately, as shown in Formula (9):
K_1 = \{ b_1, b_i \} \ (i \neq 1), \quad K_2 = \{ b_2, b_i \} \ (i \neq 2), \quad K_3 = \{ b_3, b_i \} \ (i \neq 3),  (9)
where K_1, K_2, and K_3 denote the filter sets formed by combining the remaining filters with b_1, b_2, and b_3, respectively. When selecting the filter for the next channel, the initial filter set is the best-performing set from the previous selection round under the L-, M-, and S-weighting, respectively. The remaining filters are then combined with the initial filter set one at a time in an exhaustive manner.

3.3. Multispectral Recovery

After obtaining the filter set, the similarity between the training and testing samples also affects the final recovery accuracy. Therefore, the Euclidean distance between the response values of the training and testing samples is used as a weighting function to optimize the recovery process, expressed by Formula (10):
s_j = \sqrt{ (p_1^{test} - p_{1,j}^{train})^2 + (p_2^{test} - p_{2,j}^{train})^2 + \cdots + (p_i^{test} - p_{i,j}^{train})^2 } ,  (10)
where p^{test} is the response of the testing sample; p^{train} is the response of a training sample; the subscript j indexes the training samples; and s_j represents the Euclidean distance between the jth training sample and the testing sample. The training samples are then sorted in ascending order of their distance to the testing sample. The first N (1 ≤ N ≤ j) training samples are selected as the locally optimal training samples, and the inverse distance weighting (IDW) coefficient w_k is calculated for each of them, as shown in Formula (11):
w_k = \frac{1}{s_k + \varepsilon} ,  (11)
where the subscript k indexes the locally optimal training samples; s_k is the Euclidean distance between the kth locally optimal training sample and the testing sample; and ε is a very small value added to avoid division by zero, with ε = 0.001 used here. The weight matrix W is defined in Formula (12).
W = \begin{pmatrix} w_1 & 0 & \cdots & 0 \\ 0 & w_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & w_k \end{pmatrix}_{k \times k}  (12)
The transformation matrix M used for spectral recovery (cf. Formula (3)) can then be expressed as:
M = R_{Train} W (P_{Train} W)^{-1} ,  (13)
R = M P_{Test} ,  (14)
where superscript '−1' here denotes the matrix pseudo-inverse; R_{Train} is the spectral reflectance of the selected locally optimal training samples; P_{Train} is the normalized response matrix of the training samples; P_{Test} represents the normalized response of the testing sample; and R is the corresponding reconstructed spectrum.
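A sketch of this locally weighted recovery (Formulas (10)–(14)) follows, assuming `P_train` (channels × samples), `R_train` (bands × samples), and `p_test` (channels,) are already normalized; N = 150 follows the choice made later in Section 4.1, and ε matches the paper's 0.001.

```python
import numpy as np

def recover_one(p_test, P_train, R_train, N=150, eps=1e-3):
    s = np.linalg.norm(P_train - p_test[:, None], axis=0)   # Formula (10): distances
    idx = np.argsort(s)[:N]                                 # N nearest training samples
    w = 1.0 / (s[idx] + eps)                                # Formula (11): IDW coefficients
    W = np.diag(w)                                          # Formula (12): weight matrix
    Pw = P_train[:, idx] @ W
    Rw = R_train[:, idx] @ W
    M = Rw @ np.linalg.pinv(Pw)                             # Formula (13), with pseudo-inverse
    return M @ p_test                                       # Formula (14): recovered spectrum
```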

3.4. Custom Error Score Ranking

Through the filter vector analysis (FVAM) of the weighted filters, which considers only the characteristics of the filters themselves, the three preferred filters best match the visual sensitivity functions of the human eye. The spectral recovery error and colorimetric error are then calculated for each filter combination, taking into account the other influencing parameters of the multispectral acquisition system, and a custom minimum recovery error is used to select the optimal filter set.
The recovered spectra of the filter sets are obtained by Formula (3). The root mean square error (RMSE), goodness of fit coefficient (GFC), color difference (ΔE), peak signal-to-noise ratio (PSNR), and spectral angle map (SAM) are calculated and normalized. The custom recovery error is calculated as shown in Formula (15):
TOTAL_{ni} = RMSE_i \times (1 - GFC_i) \times \Delta E_i \times PSNR_i \times SAM_i ,  (15)
where TOTAL_{ni} is the custom recovery error corresponding to the ith filter set built on the nth preferred filter. The RMSE, GFC, ΔE, PSNR, and SAM are calculated by Formulas (16)–(20):
RMSE = \sqrt{ \frac{1}{m} (R_{test} - R)^T (R_{test} - R) }  (16)
GFC = \frac{ \left| R_{test}^T R \right| }{ \sqrt{R_{test}^T R_{test}} \sqrt{R^T R} }  (17)
\Delta E_{ab}^{*} = \sqrt{ (\Delta L^*)^2 + (\Delta a^*)^2 + (\Delta b^*)^2 }  (18)
PSNR = 20 \log_{10} \left( \frac{1}{RMSE} \right)  (19)
SAM = \cos^{-1} (GFC)  (20)
As shown in Formula (21), the filter set with the smallest custom recovery error is selected as the optimal filter set under the current number of channels.
G_n = \arg\min_i TOTAL_{ni} ,  (21)
where G_n represents the optimal filter set under the current number of channels. The process of Formulas (9) and (15)–(21) can be repeated according to the number of channels of the multispectral imaging system.
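For concreteness, a sketch of the scoring step follows. It assumes the recovered and measured reflectances are given as (bands × samples) arrays and takes the mean CIELAB color difference ΔE as a precomputed input, since the spectrum-to-Lab conversion is outside this snippet; the per-metric normalization mentioned above is also omitted for brevity.

```python
import numpy as np

def custom_score(R_test, R_rec, delta_e):
    """Custom recovery error of Formula (15) for one candidate filter set."""
    m = R_test.shape[0]
    diff = R_test - R_rec
    rmse = np.mean(np.sqrt(np.sum(diff ** 2, axis=0) / m))       # Formula (16)
    num = np.abs(np.sum(R_test * R_rec, axis=0))
    den = np.linalg.norm(R_test, axis=0) * np.linalg.norm(R_rec, axis=0)
    gfc = np.mean(num / den)                                     # Formula (17)
    psnr = 20.0 * np.log10(1.0 / rmse)                           # Formula (19)
    sam = np.arccos(np.clip(gfc, -1.0, 1.0))                     # Formula (20)
    return rmse * (1.0 - gfc) * delta_e * psnr * sam             # Formula (15)
```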

4. Experiment

To evaluate the performance of the method, comparative experiments are performed on both simulated and actual multispectral acquisition systems. Four metrics are used to assess recovery accuracy: CIE DE1976 (ΔE*ab) and CIE DE2000 (ΔE00) are used as color difference indices, and the root mean square error (RMSE) and goodness of fit coefficient (GFC) are used as spectral reflectance indices.

4.1. Simulation Experiment

To verify the performance of the proposed method, simulation experiments are first performed using a simulated multispectral acquisition system. Systematic noise is not considered in the simulation experiments [23]. The filter data set comprises 15 filters designed at equal intervals by our laboratory; their sensitivity curves are shown in Figure 2a, where each curve represents the spectral sensitivity of one filter and the different colors represent different filters. CIE illuminant A is used as the reference light source, and its spectral power distribution is shown in Figure 2b.
The 1269 Munsell Matt chips [35], the 140-patch Color Checker SG [36], and the 354-sample Vrhel spectral dataset [37] are used in the simulation experiment. To make the experimental results more convincing, the Munsell Matt chips are used as the training samples, while the Munsell Matt chips, Color Checker SG, and Vrhel spectral dataset are used as the testing samples. The spectral reflectance ranges from 400 to 700 nm at 10 nm intervals.
Before the preferred filters are selected, the LMS cone response functions are shown in Figure 3a, and the filter curves weighted by the LMS cone response functions are shown in Figure 3b–d; the different color curves represent different filters.
After the three preferred filters are obtained by the weighted area selection, the remaining filters are combined with each preferred filter to form filter sets. The spectral recovery error and colorimetric error are calculated and the error parameter indices are multiplied, and the filter set with the smallest custom error value is selected as the optimal filter set. Thus, after the spectral information is analyzed, the response values must also be analyzed.
Before verifying the parameter indices of the finally selected filter set, the samples first need to be optimized, and an appropriate number of characteristic samples selected for spectral recovery according to the distances between samples. To obtain the optimal parameter, the effect of the number of locally optimal training samples on the color error is explored, with the results shown in Figure 4. According to Figure 4, 150 is selected as the number of locally optimal training samples in this experiment.
This study compares the spectral recovery accuracy and colorimetric accuracy of the proposed method with three existing methods under the same experimental conditions; the results are shown in Table 1. We compare the recovery results of three sample sets under the same light source, thus verifying the performance of the method under different shooting conditions. The experimental conditions for Table 1 are the Munsell Matt chips as both training and testing samples, 3–7 channels, and the comparison methods under CIE illuminant A.
The experimental results in Table 1 show that the proposed method yields the smallest maximum and mean color differences for every number of channels, so it is superior to the other methods in colorimetric terms. In terms of spectral recovery, both the RMSE and GFC indices of the proposed method are better than those of the existing methods, which means the method also has a good spectral recovery effect.
To make the results more intuitive and to visualize the recovery data, this study uses box plots to show the spectral and colorimetric recovery accuracy of the different methods, as shown in Figure 5. The box plot is a standardized way of displaying the spectral recovery results through the minimum, maximum, median, and first and third quartiles. Values concentrated near the box indicate better spectral recovery, while values far from the box indicate worse recovery. The * in the figure marks outliers; the farther an outlier is from the box, the worse the spectral recovery. The boxes of the proposed method are smaller than those of the other methods and show the best maximum and mean values, which demonstrates intuitively that the proposed method is superior.
The experimental conditions for Table 2 are the Munsell Matt chips as the training sample under CIE illuminant A, the Color Checker SG as the testing sample, 3–7 channels, and the same comparison methods. The corresponding box plots are shown in Figure 6.
Comparing Table 2 and Table 3 and Figure 6 and Figure 7 with the results in Table 1 and Figure 5, the spectral recovery accuracy and colorimetric accuracy are consistent with those for the Munsell Matt chips, and the proposed method still outperforms the other methods. This indicates that the presented method performs better and is more stable across different samples.
In Figure 8, three samples are randomly selected from the Munsell Matt chips under CIE illuminant A after spectral recovery with the different methods, in order to compare the recovered spectral reflectance curves for 3–7 channels. It can be seen that the proposed method is closer to the original samples and performs better.
After this verification of the proposed method, and to further demonstrate its performance, it is applied to spectral images [38].
Figure 9 and Figure 10 depict two images selected from the CAVE Multispectral Image Database. The first multispectral image comes from a section of the database containing common objects from daily life. The second comes from the real-and-fake section, in which real objects and their imitations are photographed together. Both images show common scenes from daily life.
Figure 9 and Figure 10 compare the spectral reflectance recovered by the different methods. Figure 9a shows the original RGB image, and the remaining panels are error maps, which visualize the color difference of the spectral reflectance recovered by each method: redder regions indicate a larger color difference, and bluer regions a smaller one. The side-by-side comparison shows the effect of the different methods on spectral image recovery, and the color distribution indicates that the method proposed in this paper is superior to the other methods.

4.2. Real Experiment

Real experiments are performed in a dark room to further validate the proposed method. The IT8.7/3 color chart, which has 928 color patches (Figure 11a), is used as the data sample. The response values of each color patch are obtained using a Shot 5.0 multispectral camera with an ISO of 50, an aperture of f/5.6, and an exposure time of 1/10 s; the real response values are extracted in the sRGB color space. The spectral power distribution of the light source in the shooting environment is measured using a CS2000 spectroradiometer, as shown in Figure 11b. The filters used in the real experiment are the same as in the simulation experiment, as shown in Figure 11c.

4.2.1. Experimental Environment

In order to obtain effective training sample color data and improve the accuracy of the data, a stable shooting environment must be determined before shooting.
First, in the process of shooting color pictures, the settings of the camera, the lighting, and the surroundings are critical. The stability of the lighting includes both time and space: if the light radiation received by the target surface changes over time, or varies with the spatial position of the color sample, then the camera response signal will change as well. In addition, owing to the optical effect of the convex lens inside the camera, the light radiation received by the photoreceptor is strongest in the central part and decreases along the radius. Therefore, the color samples are placed in the central area of the camera's field of view during shooting. The training and testing samples are placed midway between the two light sources, with the light reaching the samples at a 45° angle and the samples 1.5 m away from the camera lens, as shown in Figure 12a. The light sources are warmed up for 30 min after being turned on, allowing their intensity to stabilize before shooting. These measures ensure lighting stability in both space and time; the real experimental scenario is shown in Figure 12b.

4.2.2. Linear Correction of Camera Response Values

All of the experiments in Section 4.1 are simulations conducted under ideal conditions, ignoring noise and assuming that the multispectral acquisition system has a perfectly linear response model, which naturally produces smaller errors than an actual experiment.
The response signal of the camera is produced by the interaction of the light radiation incident on the sensor with the spectral response function of the camera, and is a set of linear data. In practice, however, the signal from the CCD or CMOS photoelectric conversion of the multispectral camera undergoes a series of transmissions and compressions before it is output to the display device, by which time the response signal has become nonlinear. If the camera signal acquired from the image is converted back to a linear response value [39], i.e., the digital signal of the camera is linearly corrected, then the spectrum recovered from the digital signal agrees as closely as possible with the spectrum measured by a spectrometer, and accurate spectral recovery is achieved. In this experiment, color blocks 19 to 24 of the 24-patch color checker (Figure 13) are used as a grayscale, and Li's method [40] is used to linearly correct the response values of the multispectral camera.
The photoelectric signal of the camera is linearly related to the optical radiation energy received by the CCD or CMOS, and the camera response value is also linearly related to the photoelectric signal of the camera. Therefore, this experiment achieves linear correction by establishing the conversion relationship between the camera response value and the CCD or CMOS optical radiation energy.
The light radiation energy T is obtained by integrating the product of the spectral power distribution of the light source E(λ) and the spectral reflectance of the object surface R(λ) over wavelength (see Formula (22)).
T_i = \int E(\lambda) R(\lambda) \, d\lambda  (22)
Figure 14 shows the spectral reflectance curves of each gray sample, from which it can be seen that the reflectance curves of the gray patches remain basically horizontal, i.e., spectrally flat. The six different color lines in Figure 14 represent the reflectance coefficients of the six gray patches. This indicates that the gradient of the selected grayscale is uniform and reasonable, which ensures the accuracy of the linear calibration data.
Before establishing the linear conversion formula between the light radiation energy and the camera response value, one more important operation is needed: normalization of the acquired data. The processed data are then used to fit the linear formula. Table 4 shows an example of the normalization process for the experimental data of the third of the 15 candidate filters.
T represents the light radiation energy derived from Formula (22), and t represents the light radiation energy normalized with the maximum value of T as the reference. P represents the camera response value obtained by shooting, and p represents the response normalized against the response of the brightest color sample (white), which is selected as the standard.
In this experiment, least squares curve fitting is used to obtain the linear conversion formula for each channel of the camera and thereby achieve linear correction of the digital signal. The linear conversion between the light radiation energy t and the camera response value p is given by Formula (23):
t = f(p) ,  (23)
where f denotes the linear transformation formula; the least squares fit is implemented with the MATLAB package, and the final linear transformation formula is obtained from the MATLAB calculation.
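A sketch of this calibration step follows, using the normalized grayscale data of Table 4 for the third filter; NumPy's least-squares `polyfit` stands in for the MATLAB routine, and a first-degree polynomial t = f(p) is assumed.

```python
import numpy as np

# Normalized radiant energy t and camera response p for the six gray patches (Table 4).
t = np.array([1.00, 0.67, 0.41, 0.22, 0.10, 0.04])
p = np.array([1.00, 0.77, 0.55, 0.34, 0.22, 0.07])

slope, intercept = np.polyfit(p, t, deg=1)       # least-squares fit of t = f(p)

def linearize(response):
    """Map a raw camera response back to a linear radiant-energy scale."""
    return slope * response + intercept

print(linearize(0.55))                           # linearized mid-gray response, ~0.47 here
```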

4.2.3. Experimental Results

The linearly corrected camera response values are substituted into the multispectral imaging model (1), and subsequent calculations and selections are then performed as in the simulated experimental process to obtain the final experimental results.
The real experimental results are shown in Table 5. The proposed method is consistent with the simulation experiments in spectral recovery and color difference and outperforms the existing methods, proving that the actual experiments achieve good results and that the method proposed in this study can be applied to real scenarios.
Figure 15 shows the box plot distribution of the reference metrics for the present method compared with the existing methods, and Figure 16 shows the recovered spectral reflectance of three randomly selected testing samples, confirming that the recovered spectral reflectance is more accurate and has a lower color error than that of the existing methods.
In summary, the high similarity between the simulated and actual experimental results confirms the superiority of this method.

5. Discussion and Conclusions

This study proposes a filter selection method that combines weighted area selection and custom error score ranking. The filters best matching the human visual system are selected as the initial filters by weighting the filters with the LMS cone response functions and applying the area reduction rate selection. Each initial filter is combined with the remaining filters one by one, the spectral recovery error and chromaticity error are calculated and multiplied, and the filter combination with the smallest custom recovery error is selected. In the experiments, four color sample sets and five channel numbers are used to verify the performance of the proposed method. The results show that using seven channels with the Munsell color samples gives the best results, with a mean root mean square error of 0.0031 and a mean color difference of 0.1566.
After validation by simulation and real experiments, the results show that the proposed method is better than other existing methods. By changing the data samples and the shooting environment, the method still outperforms other methods, showing good validity and robustness. This work will be useful for optimizing the spectral sensitivity of multispectral imaging sensors.
Limited by the number of filters purchased by the laboratory, the number of candidate filters in this experiment is 15, so the amount of data is relatively small. The experiments demonstrate the feasibility of the proposed method, and more filters will be purchased to further verify its generality.

Author Contributions

Conceptualization, S.N. and G.W.; methodology, S.N. and G.W.; software, S.N. and G.W.; validation, S.N. and G.W.; formal analysis, S.N. and G.W.; investigation, S.N., G.W. and X.L.; resources, S.N. and G.W.; data curation, S.N. and G.W.; writing—original draft preparation, S.N. and G.W.; writing—review and editing, S.N., G.W. and X.L.; visualization, S.N. and G.W.; supervision, S.N. and G.W.; project administration, S.N. and G.W.; funding acquisition, G.W. and X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shandong Provincial Natural Science Foundation (ZR2020MF091), Key Lab of Intelligent and Green Flexographic Printing (ZBKT202101), Qilu University of Technology (Shandong Academy of Sciences) Pilot Project for Integrating Science, Education, and Industry (2022PX078), Foundation (ZZ20210108) of State Key Laboratory of Biobased Material and Green Papermaking, Qilu University of Technology, Shandong Academy of Sciences, Key Research and Development Program of Shandong Province (2018GGX106009).

Data Availability Statement

The data presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liang, H. Advances in multispectral and hyperspectral imaging for archaeology and art conservation. Appl. Phys. A 2012, 106, 309–323.
  2. Maali Amiri, M.; Garcia-Nieto, S.; Morillas, S.; Fairchild, M.D. Spectral Reflectance Recovery Using Fuzzy Logic System Training: Color Science Application. Sensors 2020, 20, 4726.
  3. Raju, V.B.; Sazonov, E. Detection of Oil-Containing Dressing on Salad Leaves Using Multispectral Imaging. IEEE Access 2020, 8, 86196–86206.
  4. Kim, S.; Kim, J.; Hwang, M.; Kim, M.; Jin Jo, S.; Je, M.; Jang, J.E.; Lee, D.H.; Hwang, J.Y. Smartphone-based multispectral imaging and machine-learning based analysis for discrimination between seborrheic dermatitis and psoriasis on the scalp. Biomed. Opt. Express 2019, 10, 879–891.
  5. Shen, F.; Deng, H.; Yu, L.; Cai, F. Open-source mobile multispectral imaging system and its applications in biological sample sensing. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2022, 280, 121504.
  6. Wang, T.; Shen, F.; Deng, H.; Cai, F.; Chen, S. Smartphone imaging spectrometer for egg/meat freshness monitoring. Anal. Methods 2022, 14, 508–517.
  7. Niu, S.; Wu, G.; Xiong, Y. Spectral filter selection for spectral reflectance recovery. In Proceedings of the Second International Conference on Sensors and Information Technology (ICSI 2022), Nanjing, China, 25–27 March 2022.
  8. Huber-Lerner, M.; Hadar, O.; Rotman, S.R.; Huber-Shalem, R. Hyperspectral Band Selection for Anomaly Detection: The Role of Data Gaussianity. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 732–743.
  9. Zhu, G.; Huang, Y.; Lei, J.; Bi, Z.; Xu, F. Unsupervised Hyperspectral Band Selection by Dominant Set Extraction. IEEE Trans. Geosci. Remote Sens. 2016, 54, 227–239.
  10. Jean, M.; Jean, T.; François, G.; Michel, P.; Nicolas, R.; Jacques, L.; Yann, F. Influence of band selection and target estimation error on the performance of the matched filter in hyperspectral imaging. Appl. Opt. 2011, 50, 4276–4285.
  11. Ma, J.P.; Zheng, Z.B.; Tong, Q.X.; Zheng, L.F. An application of genetic algorithms on band selection for hyperspectral image classification. In Proceedings of the Second International Conference on Machine Learning and Cybernetics, Xi'an, China, 2–5 November 2003.
  12. Sharma, G.; Trussell, H. Optimal Filter Design for Multi-illuminant Color Correction. In Proceedings of the IS&T/OSA's Optics and Imaging in the Information Age, Springfield, Virginia, 13 February 2001.
  13. Wolski, M.; Bouman, C.; Allebach, J.; Walowit, E. Optimization of sensor response functions for colorimetry of reflective and emissive objects. In Proceedings of the International Conference on Image Processing, Washington, DC, USA, 23–26 October 1995.
  14. Vora, P.L.; Trussell, H.J. Mathematical methods for the design of color scanning filters. IEEE Trans. Image Process. 1997, 6, 312–320.
  15. Ng, D.Y.; Allebach, J.P. A subspace matching color filter design methodology for a multispectral imaging system. IEEE Trans. Image Process. 2006, 15, 2631–2643.
  16. Arad, B.; Ben-Shahar, O. Filter Selection for Hyperspectral Estimation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 25 December 2017.
  17. Li, S. Filter Selection for Optimizing the Spectral Sensitivity of Broadband Multispectral Cameras Based on Maximum Linear Independence. Sensors 2018, 18, 1455.
  18. Brill, M.H. Acquisition and reproduction of color images: Colorimetric and multispectral approaches. Color Res. Appl. 2002, 27, 304–306.
  19. Li, S. Several Problems Research of Multispectral Imaging. Ph.D. Thesis, Beijing Institute of Technology, Beijing, China, 2007.
  20. Schettini, R.; Novati, G.; Pellegri, P. Training set and filters selection for the efficient use of multispectral acquisition systems. In Proceedings of the Conference on Color in Graphics, Aachen, Germany, 5–8 April 2004.
  21. Pellegri, P.; Novati, G.; Schettini, R. Training Set Selection for Multispectral Imaging Systems Characterization. J. Imaging Sci. Technol. 2004, 48, 203–210.
  22. Li, S.; Liao, N.F.; Sun, Y.N. Optimal sensitivity of multispectral camera based on PCA. Opto-Electron. Eng. 2006, 33, 127–132.
  23. Liang, J.; Xiao, K.; Pointer, M.R.; Wan, X.; Li, C. Spectra estimation from raw camera responses based on adaptive local-weighted linear regression. Opt. Express 2019, 27, 5165–5180.
  24. Babaei, V.; Amirshahi, S.H.; Agahian, F. Using weighted pseudo-inverse method for recovery of reflectance spectra and analyzing the dataset in terms of normality. Color Res. Appl. 2011, 36, 295–305.
  25. Tzeng, D.Y.; Berns, R.S. A review of principal component analysis and its applications to color technology. Color Res. Appl. 2005, 30, 84–98.
  26. Wu, G. Reflectance spectra recovery from a single RGB image by adaptive compressive sensing. Laser Phys. Lett. 2019, 16, 085208.
  27. Wu, G.; Xiong, Y.; Li, X. Spectral sparse recovery from a single RGB image. Laser Phys. Lett. 2021, 18, 095201.
  28. Nishidate, I.; Maeda, T.; Niizeki, K.; Aizu, Y. Estimation of Melanin and Hemoglobin Using Spectral Reflectance Images Reconstructed from a Digital RGB Image by the Wiener Estimation Method. Sensors 2013, 13, 7902–7915.
  29. Stigell, P.; Miyata, K.; Hauta-Kasari, M. Wiener estimation method in estimating of spectral reflectance from RGB images. Pattern Recognit. Image Anal. 2007, 17, 233–242.
  30. Xiong, Y.; Wu, G.; Li, X.; Wang, X. Optimized clustering method for spectral reflectance recovery. Front. Psychol. 2022, 13, 1051286.
  31. Wang, L.; Sole, A.; Hardeberg, J.Y. Densely Residual Network with Dual Attention for Hyperspectral Reconstruction from RGB Images. Remote Sens. 2022, 14, 3128.
  32. Xiong, Y.; Wu, G.; Li, X.; Niu, S.; Han, X. Spectral reflectance recovery using convolutional neural network. In Proceedings of the International Conference on Optoelectronic Materials and Devices (ICOMD 2021), Guangzhou, China, 10–12 December 2021; pp. 63–67.
  33. Arad, B.; Timofte, R.; Yahel, R.; Morag, N.; Bernat, A.; Cai, Y.; Lin, J.; Lin, Z.; Wang, H.; Zhang, Y. NTIRE 2022 spectral recovery challenge and data set. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 863–881.
  34. Xiong, Y.; Wu, G.; Li, X. Optimized Method Based on Subspace Merging for Spectral Reflectance Recovery. Sensors 2023, 23, 3056.
  35. University of Eastern Finland, Spectral Color Research Group. Available online: http://www.uef.fi/web/spectral/-spectral-database (accessed on 24 November 2022).
  36. Wu, G.; Liu, Z.; Fang, E.; Yu, H. Recovery of spectral color information using weighted principal component analysis. Optik 2015, 126, 1249–1253.
  37. Vrhel, M.J.; Gershon, R.; Iwan, L.S. Measurement and analysis of object reflectance spectra. Color Res. Appl. 1994, 19, 4–9.
  38. Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S.K. Generalized Assorted Pixel Camera: Postcapture Control of Resolution, Dynamic Range, and Spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253.
  39. Chen, Y. Object Surface Color Spectrum Recovery Based on Digital Camera. Master's Thesis, Zhejiang University, Hangzhou, China, 2008.
  40. Li, B. Research on Multi-Spectral Image Acquisition Method Based on Three-Color Camera. Master's Thesis, Qufu Normal University, Qufu, China, 2012.
Figure 1. Filter selection operation schematic chart.
Figure 2. (a) Filter spectral sensitivity; (b) the spectral power distribution of CIE illuminant A.
Figure 3. (a) The LMS cone response functions; (b) the filter transmission weighted by the S response curve; (c) the filter transmission weighted by the M response curve; and (d) the filter transmission weighted by the L response curve.
Figure 4. (a) The relationship between CIE DE1976 color difference and the number of local optimal training samples in Munsell Matt chips; (b) the relationship between CIE DE1976 color difference and the number of Color Checker SG training samples; and (c) the relationship between CIE DE1976 color difference and the number of local optimal training samples in the Vrhel spectral dataset.
Figure 5. A box plot of each parameter index of Munsell Matt chips under CIE illuminant A.
Figure 6. A box plot of each parameter index of Color Checker SG under CIE illuminant A.
Figure 7. A box plot of each parameter index of the Vrhel spectral dataset under CIE illuminant A.
Figure 8. Spectral reflectance recovery results from our proposed and existing methods with three randomly selected samples.
Figure 9. A results comparison of spectral images of different methods using the CIE 1964 color matching function as the spectral sensitivity: (a) MaxOr; (b) LDMM; (c) MLI; and (d) Proposed.
Figure 10. Results comparison of spectral images of different methods using the CIE 1964 color matching function as the spectral sensitivity: (a) MaxOr; (b) LDMM; (c) MLI; and (d) Proposed.
Figure 11. (a) IT8.7/3 CMYK target; (b) real spectral power distribution of the light source; (c) filters purchased in the laboratory.
Figure 12. (a) Diagram of the shooting standard environment; (b) the real shooting environment.
Figure 13. Color blocks 19 to 24 of the 24-patch color checker.
Figure 14. Spectral reflectance curves of gray samples.
Figure 15. A box plot of each parameter index of the IT8.7/3 dataset under real experimental conditions.
Figure 16. Spectral reflectance recovery results from our proposed and existing methods with three randomly selected samples.
Table 1. Results of different methods for recovery of the spectral reflectance of Munsell Matt chips.

Munsell Matt Chips (CIE Illuminant A)
Channel | Method | CIE DE1976 Max | CIE DE1976 Mean | CIE DE2000 Max | CIE DE2000 Mean | RMSE Max | RMSE Mean | GFC Mean
3 | LDMM | 57.0162 | 5.1927 | 38.6427 | 3.8584 | 0.2281 | 0.0233 | 0.9932
3 | MLI | 24.529 | 2.6168 | 14.0742 | 1.7299 | 0.1768 | 0.0153 | 0.9961
3 | MaxOr | 57.0578 | 5.6398 | 38.6547 | 4.2504 | 0.2275 | 0.0254 | 0.9917
3 | Our | 11.1701 | 1.3651 | 7.6661 | 0.9903 | 0.1135 | 0.0136 | 0.9975
4 | LDMM | 33.0217 | 4.4186 | 27.2045 | 3.0929 | 0.1221 | 0.0156 | 0.997
4 | MLI | 19.9484 | 1.4252 | 10.4861 | 0.9279 | 0.1262 | 0.0086 | 0.9987
4 | MaxOr | 13.2427 | 1.5711 | 8.0956 | 1.0936 | 0.1222 | 0.0114 | 0.9982
4 | Our | 7.1393 | 0.5538 | 5.1553 | 0.39 | 0.1029 | 0.0084 | 0.9988
5 | LDMM | 13.5099 | 1.5452 | 7.3899 | 1.0844 | 0.0753 | 0.0082 | 0.9993
5 | MLI | 20.2998 | 0.9775 | 10.7919 | 0.6506 | 0.0845 | 0.0054 | 0.9994
5 | MaxOr | 13.1093 | 1.4526 | 7.4256 | 1.0421 | 0.1314 | 0.0107 | 0.9984
5 | Our | 8.7672 | 0.5783 | 6.2174 | 0.4345 | 0.0456 | 0.0057 | 0.9996
6 | LDMM | 4.3994 | 0.5009 | 2.7248 | 0.3787 | 0.0737 | 0.0046 | 0.9996
6 | MLI | 19.6554 | 0.6257 | 10.4579 | 0.4594 | 0.0809 | 0.0045 | 0.9995
6 | MaxOr | 11.0995 | 0.7065 | 7.1919 | 0.523 | 0.1222 | 0.0064 | 0.9992
6 | Our | 3.461 | 0.2259 | 2.3218 | 0.1543 | 0.0238 | 0.0039 | 0.9998
7 | LDMM | 1.7886 | 0.4236 | 1.3172 | 0.3571 | 0.0292 | 0.0037 | 0.9998
7 | MLI | 19.7661 | 0.7179 | 22.5273 | 0.5512 | 0.0397 | 0.0039 | 0.9997
7 | MaxOr | 6.6673 | 0.6496 | 4.4312 | 0.4812 | 0.0393 | 0.0041 | 0.9997
7 | Our | 1.3168 | 0.207 | 1.1546 | 0.1566 | 0.0235 | 0.0031 | 0.9999
Table 2. Results of different methods for restoring the spectral reflectance of Color Checker SG.

Color Checker SG (CIE Illuminant A)
Channel | Method | CIE DE1976 Max | CIE DE1976 Mean | CIE DE2000 Max | CIE DE2000 Mean | RMSE Max | RMSE Mean | GFC Mean
3 | LDMM | 55.5548 | 10.0086 | 33.767 | 7.2119 | 0.2251 | 0.0451 | 0.9811
3 | MLI | 39.7361 | 7.2702 | 19.551 | 4.3265 | 0.1211 | 0.031 | 0.9924
3 | MaxOr | 63.8962 | 8.7004 | 39.3476 | 5.8597 | 0.2407 | 0.0425 | 0.9811
3 | Our | 18.088 | 2.4002 | 8.4941 | 1.4732 | 0.118 | 0.0267 | 0.9929
4 | LDMM | 36.1135 | 7.1297 | 20.2838 | 5.405 | 0.0955 | 0.0286 | 0.994
4 | MLI | 34.7927 | 4.8944 | 11.4471 | 2.7745 | 0.0992 | 0.0222 | 0.9957
4 | MaxOr | 17.6933 | 2.7797 | 8.8284 | 1.826 | 0.1149 | 0.0258 | 0.9942
4 | Our | 5.9632 | 1.0257 | 3.6337 | 0.6766 | 0.0931 | 0.0201 | 0.9962
5 | LDMM | 11.7749 | 2.9412 | 9.3622 | 2.1346 | 0.072 | 0.0188 | 0.9978
5 | MLI | 17.6457 | 3.722 | 7.6177 | 2.3171 | 0.1382 | 0.0198 | 0.9967
5 | MaxOr | 18.8567 | 3.4702 | 12.3189 | 2.5038 | 0.1591 | 0.0264 | 0.9931
5 | Our | 4.3094 | 0.9334 | 2.5814 | 0.6017 | 0.049 | 0.015 | 0.9986
6 | LDMM | 4.9499 | 1.1014 | 2.9187 | 0.8902 | 0.0463 | 0.0131 | 0.9987
6 | MLI | 19.3011 | 1.9112 | 8.4144 | 1.3322 | 0.1577 | 0.0136 | 0.9974
6 | MaxOr | 6.8946 | 1.4448 | 4.477 | 1.0773 | 0.0768 | 0.0151 | 0.9974
6 | Our | 2.3641 | 0.4807 | 1.2719 | 0.3264 | 0.0443 | 0.0129 | 0.9989
7 | LDMM | 2.943 | 0.5482 | 1.1898 | 0.3772 | 0.0339 | 0.0096 | 0.9994
7 | MLI | 9.7884 | 1.3752 | 5.7661 | 0.9455 | 0.0429 | 0.0084 | 0.9992
7 | MaxOr | 11.8516 | 1.3307 | 4.8524 | 1.0435 | 0.0408 | 0.0103 | 0.9992
7 | Our | 2.5998 | 0.3604 | 0.9242 | 0.2355 | 0.0256 | 0.0071 | 0.9995
Table 3. Results of different methods for restoring the spectral reflectance of the Vrhel spectral dataset.

Vrhel Spectral Dataset (CIE Illuminant A)
Channel | Method | CIE DE1976 Max | CIE DE1976 Mean | CIE DE2000 Max | CIE DE2000 Mean | RMSE Max | RMSE Mean | GFC Mean
3 | LDMM | 55.2149 | 13.4954 | 35.9918 | 9.631 | 0.209 | 0.0576 | 0.9659
3 | MLI | 33.342 | 9.4129 | 17.5367 | 5.031 | 0.1967 | 0.0351 | 0.9841
3 | MaxOr | 56.4556 | 11.6292 | 36.4524 | 8.2406 | 0.1992 | 0.0521 | 0.9709
3 | Our | 18.3209 | 2.6738 | 8.7486 | 1.6532 | 0.1818 | 0.0319 | 0.9862
4 | LDMM | 70.5002 | 10.0769 | 28.8346 | 7.1759 | 0.2316 | 0.0362 | 0.9853
4 | MLI | 29.8968 | 6.5727 | 13.2888 | 3.2126 | 0.1687 | 0.0286 | 0.9804
4 | MaxOr | 25.6143 | 3.7521 | 15.2366 | 2.4752 | 0.1792 | 0.0328 | 0.987
4 | Our | 18.2029 | 1.4805 | 8.213 | 0.91 | 0.167 | 0.0276 | 0.9894
5 | LDMM | 27.6785 | 5.5163 | 14.3806 | 4.0168 | 0.1203 | 0.0254 | 0.9921
5 | MLI | 32.3953 | 5.2023 | 13.698 | 2.6936 | 0.1529 | 0.0252 | 0.9904
5 | MaxOr | 23.524 | 4.2321 | 14.2079 | 2.9 | 0.2085 | 0.0398 | 0.9763
5 | Our | 11.4684 | 1.5269 | 6.2996 | 0.9421 | 0.1173 | 0.0177 | 0.9955
6 | LDMM | 8.7184 | 1.3675 | 5.8822 | 0.9994 | 0.1062 | 0.0193 | 0.9945
6 | MLI | 16.9549 | 2.8241 | 10.3674 | 1.7117 | 0.1538 | 0.0204 | 0.9917
6 | MaxOr | 21.1351 | 2.3253 | 12.4645 | 1.4938 | 0.1706 | 0.0226 | 0.9893
6 | Our | 7.9256 | 0.8298 | 3.2125 | 0.4499 | 0.102 | 0.0149 | 0.995
7 | LDMM | 7.2469 | 0.7687 | 2.6931 | 0.4839 | 0.0871 | 0.016 | 0.9957
7 | MLI | 39.0504 | 2.1657 | 18.9024 | 1.4431 | 0.1144 | 0.0122 | 0.9917
7 | MaxOr | 10.8561 | 1.6483 | 7.202 | 1.1442 | 0.0884 | 0.0125 | 0.9945
7 | Our | 6.981 | 0.6664 | 1.6818 | 0.3561 | 0.0821 | 0.0113 | 0.9964
Table 4. Light radiation energy T and camera response values for each gray sample.

NO. | T | t | P | p
1 | 1.80 | 1 | 97.73 | 1
2 | 1.19 | 0.67 | 75.13 | 0.77
3 | 0.73 | 0.41 | 54.09 | 0.55
4 | 0.39 | 0.22 | 33.42 | 0.34
5 | 0.19 | 0.10 | 21.22 | 0.22
6 | 0.07 | 0.04 | 6.69 | 0.07
Table 5. Results of different methods to recover spectral reflectance using IT8.7/3 samples.

IT8.7/3 (Real Illuminant)
Channel | Method | CIE DE1976 Max | CIE DE1976 Mean | CIE DE2000 Max | CIE DE2000 Mean | RMSE Max | RMSE Mean | GFC Mean
3 | LDMM | 55.1072 | 12.1706 | 36.3163 | 9.0247 | 0.2433 | 0.0462 | 0.9833
3 | MLI | 72.3144 | 12.0688 | 33.6507 | 7.2681 | 0.1371 | 0.0375 | 0.9852
3 | MaxOr | 36.8347 | 7.7783 | 25.5001 | 5.67 | 0.2001 | 0.0339 | 0.9921
3 | Our | 14.5346 | 3.5781 | 9.8733 | 2.5609 | 0.1365 | 0.0237 | 0.9985
4 | LDMM | 60.3669 | 8.6533 | 40.3494 | 6.4562 | 0.1732 | 0.0332 | 0.9896
4 | MLI | 51.0632 | 10.5935 | 31.5347 | 6.2469 | 0.1404 | 0.0328 | 0.9883
4 | MaxOr | 15.4071 | 3.4042 | 10.873 | 2.3232 | 0.1393 | 0.0229 | 0.9989
4 | Our | 11.7962 | 2.9112 | 7.1387 | 2.0037 | 0.1382 | 0.0199 | 0.9992
5 | LDMM | 15.352 | 3.2919 | 10.559 | 2.3487 | 0.1731 | 0.0216 | 0.9981
5 | MLI | 42.5753 | 5.4712 | 14.8481 | 3.36 | 0.1357 | 0.0227 | 0.9974
5 | MaxOr | 15.3741 | 3.3431 | 10.711 | 2.3564 | 0.1658 | 0.0218 | 0.9979
5 | Our | 12.9502 | 3.2026 | 10.081 | 2.3046 | 0.1305 | 0.0215 | 0.9988
6 | LDMM | 11.5558 | 2.6609 | 7.8024 | 1.8385 | 0.1456 | 0.0177 | 0.9991
6 | MLI | 10.8503 | 2.8403 | 8.5263 | 1.9531 | 0.1164 | 0.0187 | 0.9991
6 | MaxOr | 13.0843 | 2.7746 | 6.2731 | 1.8932 | 0.1239 | 0.0184 | 0.9992
6 | Our | 10.4058 | 2.6028 | 7.6026 | 1.7872 | 0.1104 | 0.0163 | 0.9993
7 | LDMM | 11.175 | 2.8872 | 8.0676 | 1.8862 | 0.1416 | 0.0174 | 0.9991
7 | MLI | 11.7824 | 2.7372 | 7.3687 | 1.888 | 0.1133 | 0.0179 | 0.9992
7 | MaxOr | 11.9103 | 2.6994 | 7.061 | 1.8502 | 0.1423 | 0.0173 | 0.999
7 | Our | 10.91 | 2.6887 | 6.4923 | 1.8391 | 0.1107 | 0.0162 | 0.9992