Article

Multispectral Reconstruction in Open Environments Based on Image Color Correction

Jinxing Liang, Xin Hu, Yifan Li and Kaida Xiao
1 School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan 430200, China
2 School of Design, University of Leeds, Leeds LS2 9JT, UK
* Author to whom correspondence should be addressed.
Electronics 2025, 14(13), 2632; https://doi.org/10.3390/electronics14132632
Submission received: 18 May 2025 / Revised: 25 June 2025 / Accepted: 27 June 2025 / Published: 29 June 2025
(This article belongs to the Special Issue Image Fusion and Image Processing)

Abstract

Spectral reconstruction based on digital imaging has become an important way to obtain spectral images with high spatial resolution. Current research has made great strides under laboratory conditions; however, dealing with rapidly changing light sources, illumination, and imaging parameters in an open environment presents significant challenges for spectral reconstruction. This is because a spectral reconstruction model established under one set of imaging conditions is not suitable for use under different imaging conditions. In this study, considering the principle of multispectral reconstruction, we propose a method of multispectral reconstruction in open environments based on image color correction. In the proposed method, a whiteboard is used as a medium to calculate the color correction matrices from an open environment and transfer them to the laboratory. After the digital image is corrected, its multispectral image can be reconstructed using the pre-established multispectral reconstruction model in the laboratory. The proposed method was tested in simulations and practical experiments using different datasets and illuminations. The results show that the root-mean-square error of the color chart is below 2.6% in the simulation experiment and below 6.0% in the practical experiment, which demonstrates the effectiveness of the proposed method.

1. Introduction

Spectra serve as critical features for characterizing the physicochemical properties of materials, acting as “fingerprints” of color information within the visible spectrum. They hold significant application value in color science, computer vision, biomedical research, and cultural heritage preservation [1,2,3]. Traditional spectral measurement technologies, such as spectrophotometers and spectral cameras, face limitations; the former are restricted to single-point measurements, while the latter suffer from low spatial resolution and inflexibility despite enabling area-array acquisition. To address these challenges, computational spectral imaging based on digital cameras has emerged. This technology establishes a mapping model (spectral reconstruction matrix) between digital responses and spectral data, enabling pixel-level multispectral reconstruction from a single RGB image. Compared with traditional scanning-based systems (e.g., whiskbroom or push-broom), this approach significantly improves spatial resolution while maintaining spectral accuracy, offering cost-effectiveness and operational flexibility. As a result, it has become a research hotspot in multispectral image acquisition in recent years [4,5,6,7,8].
To date, significant progress has been made in multispectral reconstruction technologies under controlled laboratory conditions. However, due to the sensitivity of multispectral reconstruction models to variations in imaging conditions [9], the research outcomes achieved in controlled laboratory environments are difficult to apply directly to natural open-illumination environments. Early efforts focused on hardware adaptations: Shrestha et al. (2014) developed a dual-RGB imaging system with broadband filters to estimate illuminant spectra, achieving six-channel open-environment measurements [10]; Khan et al. (2017) proposed a spectral adaptation transform method based on multispectral constancy theory, later refining its robustness [11,12]; Zhang et al. (2018) introduced white-balance and link functions for exposure compensation in single-RGB-camera systems [13]. More recent unsupervised and deep learning approaches have introduced new possibilities: Gu and Chen (2017) designed an iterative spectral reconstruction algorithm requiring no training samples, enhancing generalizability through prior information fusion [14]; Cai et al. (2022) developed the MST++ Transformer architecture, leveraging spectral attention blocks (SABs) to extract multi-resolution features [15]. Lin and Finlayson (2023) introduced a novel pixel-wise spectral reconstruction algorithm termed A++, which combines spectral estimation clustering with polynomial regression [16]. This approach outperforms deep learning models (e.g., AWAN and HSCNN-D) on the ICVL dataset, particularly demonstrating enhanced robustness under realistic conditions such as image rotation and blurring. Fsian et al. (2025) proposed a deep joint demosaicking and super-resolution framework for spectral filter array (SFA) imaging [17]. This method employs a two-branch network architecture, comprising a pseudo-panchromatic image network (PPI-Net) and a pre-demosaicking sub-branch, integrated with a Deep Residual Demosaicking and Super-Resolution (DRDmSR) module. The framework achieves superior image reconstruction through a residual-in-residual (RIR) structure and channel attention mechanisms. Nevertheless, the existing methods share common limitations: (1) sensitivity to exposure-level fluctuations (caused by coupled illuminance–camera parameter interactions), leading to spectral feature distortion [18,19,20]; and (2) insufficient real-time calibration capability in dynamic scenes. Resolving these challenges, particularly by constructing illuminant–exposure decoupling models and adaptive feature learning frameworks, remains pivotal for practical open-environment spectral reconstruction.
To directly address these persistent challenges of exposure sensitivity and real-time calibration deficiency in open environments, this study proposes a novel and pragmatic spectral reconstruction framework centered on imaging color correction with a physical reference. The core innovation and key contribution of our approach lie in leveraging a whiteboard as a physical reference and calibration anchor. This simple yet powerful mechanism actively decouples the illuminant–exposure effects during image acquisition by aligning the open-environment imaging conditions with the laboratory reference settings. Crucially, this real-time alignment step effectively mitigates the spectral distortion caused by exposure fluctuations and varying illumination, directly tackling limitation (1). Following this calibration, spectral reconstruction is performed using a pre-trained spectral reconstruction matrix. This combined strategy inherently provides the necessary real-time calibration capability demanded by dynamic scenes (addressing limitation (2)) without requiring the complex online model retraining or sophisticated hardware modifications characteristic of prior solutions. The effectiveness of the method was rigorously validated through experiments evaluating both spectral accuracy and colorimetric precision, demonstrating significant improvements in robustness under varying open-environment conditions.

2. Materials and Methods

This study focuses on spectral measurement based on single RGB images from digital cameras, utilizing the linearized digital response of raw-format images as the foundation. A spectral reconstruction method based on imaging color correction is proposed. First, under reference imaging conditions, a spectral reconstruction matrix is established using training samples while the raw response values of the reference whiteboard are simultaneously captured. Second, in open measurement environments, digital images of the measurement object and the raw response values of the reference whiteboard are acquired, and matrix $\mathbf{M}_1$ is calculated to correct the imaging conditions from the open environment to the reference conditions, achieving exposure-level correction and preliminary color error correction. Then, based on the theoretical imaging model of digital cameras [21] and a light source database, the spectral power distribution (SPD) of the illuminant in the open environment is estimated using the reference whiteboard, yielding several equivalent light sources for the open environment, and matrix $\mathbf{M}_2$ is computed to further correct color errors caused by differences in light sources. Finally, the measurement object's image is corrected using both matrices $\mathbf{M}_1$ and $\mathbf{M}_2$, and the spectral reconstruction of the object is completed using the spectral reconstruction matrix established under reference imaging conditions. The basic principle of this method is illustrated in Figure 1, with detailed explanations provided below.

2.1. System Model and Spectral Reconstruction Fundamentals

Assuming an ideal linear camera imaging model that ignores noise, each element of the pixel response vector $\mathbf{d}$ is expressed as

$$ d_i = \int_{\Omega} l(\lambda)\, s_i(\lambda)\, r(\lambda)\, d\lambda \tag{1} $$

where $d_i$ represents the response of the $i$-th channel, $l(\lambda)$ denotes the spectral power distribution of the illuminant, $s_i(\lambda)$ is the spectral sensitivity of the camera's $i$-th channel, $r(\lambda)$ is the spectral reflectance of the target pixel, and $\Omega$ defines the spectral integration range of the imaging system. This continuous model is discretized into a matrix form to facilitate computational processing:
$$ \mathbf{d} = \mathbf{M}\,\mathbf{r} \tag{2} $$
Here, $\mathbf{M} \in \mathbb{R}^{K \times N}$ is the combined sensitivity matrix integrating the illuminant spectrum and camera sensitivity functions, and $\mathbf{r} \in \mathbb{R}^{N}$ represents the discretized spectral reflectance vector with $N$ wavelength samples.
To reconstruct spectral reflectance from camera responses, a spectral reconstruction matrix $\mathbf{Q}$ is derived using the pseudo-inverse method [22], which minimizes the mean squared error between the training spectra $\mathbf{R}_{\text{train}}$ and their corresponding camera responses $\mathbf{D}_{\text{train}}$:

$$ \mathbf{Q} = \mathbf{R}_{\text{train}}\, \mathbf{D}_{\text{train}}^{+} \tag{3} $$

where $^{+}$ denotes the Moore–Penrose pseudo-inverse. This matrix enables spectral reconstruction for any test response $\mathbf{d}_{\text{test}}$ via

$$ \mathbf{r}_{\text{test}} = \mathbf{Q}\, \mathbf{d}_{\text{test}} \tag{4} $$
The choice of the pseudo-inverse over alternative methods (e.g., Wiener reconstruction or principal component analysis) is justified by its computational efficiency and robustness in overdetermined systems.
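To make the training and reconstruction step concrete, the following minimal NumPy sketch implements Equations (3) and (4); the array orientations, function names, and shapes are our own assumptions for illustration, not the authors' code.

```python
import numpy as np

def train_reconstruction_matrix(R_train: np.ndarray, D_train: np.ndarray) -> np.ndarray:
    """Equation (3): Q = R_train @ pinv(D_train).
    R_train: (N, S) training reflectances (N wavelengths, S samples).
    D_train: (K, S) camera responses (K channels or expanded terms).
    Returns Q with shape (N, K)."""
    return R_train @ np.linalg.pinv(D_train)  # Moore-Penrose pseudo-inverse

def reconstruct_spectrum(Q: np.ndarray, d_test: np.ndarray) -> np.ndarray:
    """Equation (4): recover an (N,) reflectance from a (K,) response."""
    return Q @ d_test
```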

2.2. Workflow of the Color Correction Method

Under controlled laboratory conditions with a known illuminant $l_{\text{ref}}(\lambda)$, a set of training samples $\mathbf{R}_{\text{train}}$ and a reference whiteboard are imaged. The raw camera responses $\mathbf{D}_{\text{train}}$ and $\mathbf{d}_{\text{white,ref}}$ are extracted and preprocessed. To enhance the generalizability of the spectral reconstruction matrix $\mathbf{Q}$, the camera responses are expanded using a root-polynomial transformation. For instance, the third-degree expansion with 13 terms is formulated as

$$ \mathbf{d}_{\exp} = \left[\, r,\; g,\; b,\; \sqrt{rg},\; \sqrt{rb},\; \sqrt{gb},\; \sqrt[3]{r^{2}g},\; \sqrt[3]{r^{2}b},\; \sqrt[3]{g^{2}r},\; \sqrt[3]{g^{2}b},\; \sqrt[3]{b^{2}r},\; \sqrt[3]{b^{2}g},\; \sqrt[3]{rgb}\,\right]^{T} \tag{5} $$

Because every term in this expansion is degree-1 homogeneous, the expansion is invariant to global exposure scaling and thus mitigates exposure-dependent nonlinearities, as demonstrated by Finlayson et al. [23].
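A minimal sketch of the 13-term third-degree root-polynomial expansion in Equation (5) follows; the helper name is hypothetical, and the channel inputs may be scalars or NumPy arrays of per-pixel values.

```python
import numpy as np

def root_polynomial_13(r, g, b) -> np.ndarray:
    """13-term third-degree root-polynomial expansion (Equation (5)).
    Every term is degree-1 homogeneous, so a global exposure scaling k
    simply scales all 13 features by k."""
    terms = [
        r, g, b,
        np.sqrt(r * g), np.sqrt(r * b), np.sqrt(g * b),
        np.cbrt(r**2 * g), np.cbrt(r**2 * b), np.cbrt(g**2 * r),
        np.cbrt(g**2 * b), np.cbrt(b**2 * r), np.cbrt(b**2 * g),
        np.cbrt(r * g * b),
    ]
    return np.stack([np.asarray(t, dtype=float) for t in terms], axis=0)
```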
In dynamic open environments, the target object and a portable reference whiteboard are captured. The raw responses $\mathbf{D}_{\text{test}}$ and $\mathbf{d}_{\text{white,test}}$ are extracted, ensuring spatial alignment and avoiding saturation.

2.2.1. Steps to Compute the Correction Matrix $\mathbf{M}_1$

To address environmental variability, a two-tier correction framework is proposed.
A diagonal matrix $\mathbf{M}_1$ is constructed to compensate for channel-specific sensitivity drifts caused by illuminant changes:

$$ \mathbf{M}_1 = \mathrm{diag}\left( \mathbf{d}_{\text{white,ref}} \;./\; \mathbf{d}_{\text{white,test}} \right) \tag{6} $$

where $./$ denotes element-wise division. With the constructed matrix $\mathbf{M}_1$, the imaging conditions of the object camera response can be corrected as shown in Equation (7), where $\mathbf{d}_{\text{cor}}$ is the corrected object camera response.

$$ \mathbf{d}_{\text{cor}} = \mathbf{d}_{\text{test}}\, \mathbf{M}_1 \tag{7} $$
It should be noted that, due to the overlap in sensitivity functions across digital camera channels and optical signal crosstalk between filters [24], color discrepancies persist to some extent in the same sample set under different light sources, even after correction with $\mathbf{M}_1$. Therefore, while the $\mathbf{M}_1$ matrix provides the initial image calibration, additional color correction is required to address illumination-induced chromatic deviations. This two-stage approach ensures enhanced color fidelity by compensating for both sensor-related inaccuracies and spectral variability caused by light source differences.
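As a sketch, Equations (6) and (7) translate directly to NumPy; we assume responses are stored as row vectors (one sample per row), so right-multiplying by the diagonal matrix rescales each channel. Names are illustrative.

```python
import numpy as np

def compute_M1(d_white_ref: np.ndarray, d_white_test: np.ndarray) -> np.ndarray:
    """Equation (6): diagonal correction built from the two whiteboard responses."""
    return np.diag(d_white_ref / d_white_test)  # element-wise channel ratios

def apply_M1(D_test: np.ndarray, M1: np.ndarray) -> np.ndarray:
    """Equation (7): D_test holds one (K,) raw response per row."""
    return D_test @ M1
```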

2.2.2. Steps to Compute the Correction Matrix $\mathbf{M}_2$

In this framework, the ambient illuminant in uncontrolled lighting conditions is estimated through a reference whiteboard calibration process coupled with a predefined spectral database. By leveraging spectrally equivalent candidates selected from the database, a data-driven adaptive color correction matrix is derived to systematically compensate for illumination-specific chromatic variations, thereby enabling robust color accuracy across diverse lighting scenarios.
The whiteboard response is normalized to its maximum value to decouple the absolute intensity:

$$ \mathbf{d}_{\text{white,test,norm}} = \mathbf{d}_{\text{white,test}} \;./\; \max(\mathbf{d}_{\text{white,test}}) \tag{8} $$
Within a preconstructed spectral database containing $m$ illuminants, the simulated response $\mathbf{d}_i$ of the reference whiteboard under each candidate illuminant is calculated using Equation (1), which incorporates the digital camera's sensitivity functions and the whiteboard's spectral reflectance. This simulated response is then normalized via Equation (8) to obtain the normalized simulated response $\mathbf{d}_{i,\text{norm}}$. The top $n$ spectrally equivalent candidate illuminants are selected by minimizing a composite error metric $OE$, which quantifies the discrepancy between the normalized raw response $\mathbf{d}_{\text{white,test,norm}}$ (measured in the open environment) and the normalized simulated response $\mathbf{d}_{i,\text{norm}}$. The overall error $OE$ is computed as defined in Equations (9)–(11), where $\arccos(\cdot)$ denotes the inverse cosine function, the superscript $T$ represents the transpose operator, $\lVert \cdot \rVert$ denotes the vector norm, and $\mathrm{norm}(\cdot)$ refers to the maximum-value normalization function.
$$ AE_i = \arccos\left( \frac{ \mathbf{d}_{i,\text{norm}}^{T}\, \mathbf{d}_{\text{white,test,norm}} }{ \sqrt{ \left( \mathbf{d}_{i,\text{norm}}^{T}\, \mathbf{d}_{i,\text{norm}} \right) \left( \mathbf{d}_{\text{white,test,norm}}^{T}\, \mathbf{d}_{\text{white,test,norm}} \right) } } \right) \tag{9} $$

$$ ED_i = \left\lVert \mathbf{d}_{i,\text{norm}} - \mathbf{d}_{\text{white,test,norm}} \right\rVert \tag{10} $$

$$ OE_i = \mathrm{norm}(AE)_i \times \mathrm{norm}(ED)_i \tag{11} $$
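The candidate selection of Equations (8)–(11) can be sketched as below, assuming the $m$ simulated whiteboard responses are stacked row-wise; the function name and the default $n = 20$ (the value used in Section 3) are illustrative.

```python
import numpy as np

def select_equivalent_illuminants(d_white_test: np.ndarray,
                                  D_white_sim: np.ndarray,
                                  n: int = 20) -> np.ndarray:
    """Return indices of the n candidate illuminants whose simulated whiteboard
    responses best match the measured one.
    d_white_test: (K,) measured response; D_white_sim: (m, K) simulated responses."""
    d = d_white_test / d_white_test.max()                    # Equation (8)
    D = D_white_sim / D_white_sim.max(axis=1, keepdims=True)
    cos = (D @ d) / (np.linalg.norm(D, axis=1) * np.linalg.norm(d))
    AE = np.arccos(np.clip(cos, -1.0, 1.0))                  # Equation (9): angle error
    ED = np.linalg.norm(D - d, axis=1)                       # Equation (10): Euclidean distance
    OE = (AE / AE.max()) * (ED / ED.max())                   # Equation (11): combined error
    return np.argsort(OE)[:n]                                # top-n equivalent illuminants
```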
The simulated response matrices $\mathbf{D}_{\text{ref}}$ and $\mathbf{D}_{\text{target}}$, corresponding to the training samples under the reference illuminant and a target illuminant, are computed. Simultaneously, the simulated responses of the reference whiteboard under these illuminants, denoted as $\mathbf{d}_{\text{white,ref,sim}}$ and $\mathbf{d}_{\text{white,target,sim}}$, are derived. The preliminary correction matrix $\mathbf{M}_1$ is then calculated using Equation (6), incorporating the whiteboard's simulated responses. Subsequently, $\mathbf{D}_{\text{target}}$ is corrected via Equation (7) to generate $\mathbf{D}_{\text{target,corr}}$. To construct the training dataset, $\mathbf{D}_{\text{ref}}$ is replicated $n$ times to produce $\mathbf{D}_{\text{ref},1\text{-}n}$, while the corrected matrices $\mathbf{D}_{\text{target,corr}}$ from the $n$ target illuminants are concatenated into $\mathbf{D}_{\text{target,corr},1\text{-}n}$. A secondary correction matrix $\mathbf{M}_2$, which maps $\mathbf{D}_{\text{target,corr},1\text{-}n}$ to $\mathbf{D}_{\text{ref},1\text{-}n}$, is derived via least-squares optimization, as formulated in Equation (12):
$$ \mathbf{M}_2 = \mathbf{D}_{\text{target,corr},1\text{-}n} \,\backslash\, \mathbf{D}_{\text{ref},1\text{-}n} \tag{12} $$
where $\backslash$ denotes the MATLAB least-squares solver operator, and $\mathbf{M}_2$ is a $K \times K$ square matrix ($K$: number of spectral channels).
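In NumPy, Equation (12) can be sketched with `np.linalg.lstsq`, which plays the role of MATLAB's backslash operator; the sample-per-row orientation is an assumption.

```python
import numpy as np

def compute_M2(D_target_corr_1n: np.ndarray, D_ref_1n: np.ndarray) -> np.ndarray:
    """Equation (12): least-squares mapping from the M1-corrected responses under
    the n equivalent illuminants to the responses under the reference illuminant.
    Both inputs have shape (n*S, K); the result M2 is a (K, K) matrix."""
    M2, *_ = np.linalg.lstsq(D_target_corr_1n, D_ref_1n, rcond=None)
    return M2
```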
As formulated in Equation (13), the imaging conditions of the object's camera response are corrected:

$$ \mathbf{d}_{\text{obj,corr}} = \mathbf{d}_{\text{obj}}\, \mathbf{M}_1 \mathbf{M}_2 \tag{13} $$
Finally, under the reference imaging conditions, the surface spectrum of the object is reconstructed using the predefined spectral reconstruction matrix $\mathbf{Q}$. It should be emphasized that the corrected raw camera response of the object must undergo the same degree of root-polynomial expansion as applied to the training sample set.
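Putting the pieces together, a hypothetical end-to-end correction and reconstruction for a single pixel could look as follows; the helper names refer to the sketches above, and a 3-channel raw response with a $\mathbf{Q}$ trained on 13-term expanded responses is assumed.

```python
import numpy as np

def reconstruct_pixel(d_obj: np.ndarray, M1: np.ndarray, M2: np.ndarray,
                      Q: np.ndarray) -> np.ndarray:
    """Equation (13) followed by Equation (4): correct the raw (3,) response,
    expand it to the same 13 root-polynomial terms used in training, and
    reconstruct the (N,) reflectance with Q of shape (N, 13)."""
    d_corr = d_obj @ M1 @ M2                 # Equation (13)
    r, g, b = d_corr                         # corrected raw RGB response
    d_exp = root_polynomial_13(r, g, b)      # must match the training expansion
    return Q @ d_exp                         # Equation (4)
```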

2.3. Simulation Experiments

To validate the proposed method, simulation experiments were first conducted. An idealized camera response model was established using the spectral response functions of a Nikon D7200 camera (Minato, Tokyo, Japan), where the spectral sensitivity functions of the Nikon D7200 were estimated based on the methodology proposed by Jiang et al. [25], as shown in Figure 2a. For both testing models ('M1' and 'M1+M2'; see Section 3), 10 common light sources were collected as test illuminants in open environments, and their spectral power distributions (SPDs) are shown in Figure 2b. These sources included halogen lamps, fluorescent lamps, F-series lamps, and LED sources. Among the 10 collected light sources, the fluorescent lamp with a correlated color temperature (CCT) of approximately 6500 K was selected as the reference illuminant (see the black line with an asterisk in Figure 2b). The experimental samples included the ColorChecker SG (SG) chart for training and the ColorChecker (CC) chart, Munsell color samples, and pigment samples for measurement. The quantities of training and testing samples used in the simulation experiments are detailed in Table 1.

2.4. Real-World Experiments

Further validation was performed in outdoor open environments. The camera response model was similarly constructed based on the spectral characteristics of the Nikon D7200. In the laboratory reference environment, a closed uniform lighting booth was employed, and the E5 white patch of the ColorChecker SG (CCSG) chart served as the reference whiteboard. The Nikon D7200 camera was used to capture digital images, with the chart positioned centrally in the field of view (Figure 3a), approximately 1 m from the camera sensor, at a focal length of 35 mm. The relative spectral power distributions (SPDs) of the illuminants were measured using an EVERFINE SPIC-300AW spectroradiometer (Hangzhou, China) (Figure 3b), while the spectral reflectances of the charts were obtained with an X-Rite i1-Pro3 spectrophotometer (Grand Rapids, MI, USA).
For open-environment testing, the Nikon D7200 captured images of both the CCSG and CC charts under six distinct lighting conditions. The E5 white patch of the CCSG chart was again utilized as the reference whiteboard. To compute the correction matrix $\mathbf{M}_2$, a comprehensive spectral database comprising 701 illuminant SPDs was constructed, including the following:
  • 402 light sources from the dataset by Houser et al. [26];
  • 87 measured sources from Barnard et al. [27];
  • 84 sources from the National Gallery UK database [28];
  • 128 daylight sources spanning correlated color temperatures from 2300 K to 15,000 K (sampled at 100 K intervals) [29].

2.5. Evaluation Metrics

Regarding the experimental results, the study used the spectral root-mean-square error (RMSE), spectral angle mapper (SAM), mean relative absolute error (MRAE), and CIEDE2000 ($\Delta E_{00}$) color difference to evaluate the spectral reconstruction results. The spectral root-mean-square error is calculated as shown in Equation (14):

$$ RMSE = \sqrt{ \frac{1}{N} \left( \mathbf{r}_1 - \mathbf{r}_2 \right)^{T} \left( \mathbf{r}_1 - \mathbf{r}_2 \right) } \tag{14} $$

where $\mathbf{r}_1$ is the reconstructed spectrum of the testing sample, $\mathbf{r}_2$ is the ground-truth spectrum, the superscript $T$ is the transpose operator, and $N$ is the number of spectral sampling points in the visible range from 400 to 700 nm at 10 nm intervals ($N = 31$).
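For completeness, Equation (14) reduces to a one-line NumPy function, assuming reflectance vectors sampled at the $N = 31$ wavelengths.

```python
import numpy as np

def spectral_rmse(r1: np.ndarray, r2: np.ndarray) -> float:
    """Equation (14): RMSE between reconstructed (r1) and ground-truth (r2) spectra."""
    return float(np.sqrt(np.mean((r1 - r2) ** 2)))
```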

3. Results

Using the aforementioned experimental conditions, the experiments tested the proposed method at two levels. The first level corrects the imaging conditions of images using only the $\mathbf{M}_1$ matrix, i.e., an incomplete imaging color correction denoted by the symbol 'M1'; the second level corrects the imaging conditions using both matrices $\mathbf{M}_1$ and $\mathbf{M}_2$, i.e., the complete imaging color correction method proposed in this study, denoted by the symbol 'M1+M2'. The number of equivalent illuminants $n$ used in the calculation of matrix $\mathbf{M}_2$ was set to 20; the impact of the choice of $n$ on the final reconstruction accuracy is discussed in Section 3.3.

3.1. Simulation Experiments

Table 2, Table 3, Table 4 and Table 5 summarize the average spectral reconstruction accuracy across three sample sets in the simulation experiment, as well as their overall average results.
According to the statistical results in Table 2, Table 3, Table 4 and Table 5, for the two testing models of the proposed method, whether for a single sample set or for the average results of all three sample sets, 'M1' consistently shows the largest spectral reconstruction error, with the overall average errors of the three experimental sample sets being 2.94 (RMSE%), 4.61 ($\Delta E_{00}$), 6.63 (SAM), and 0.12 (MRAE), respectively. In contrast, the overall average spectral reconstruction errors for 'M1+M2' are 2.42 (RMSE%), 1.54 ($\Delta E_{00}$), 5.38 (SAM), and 0.08 (MRAE).
Comparing the two application models of the proposed method, the spectral reconstruction results of 'M1+M2' are significantly better than those of 'M1', indicating that the use of matrix $\mathbf{M}_1$ alone is insufficient to achieve good imaging color correction. On the basis of applying matrix $\mathbf{M}_1$, the accuracy of spectral reconstruction can be further improved through additional color correction. This result highlights the importance of performing further color correction on the target camera response.

3.2. Real-World Experiments

As summarized in Table 6, Table 7, Table 8 and Table 9, the proposed method was further evaluated across six real-world scenarios. The partial correction model ('M1') consistently exhibited higher spectral errors, with an overall average RMSE (%) of 4.60 across the six scenarios. In contrast, the hybrid 'M1+M2' model reduced the average spectral error to 4.44 (RMSE%). In colorimetric accuracy, the 'M1+M2' framework underperformed 'M1' in four of the six scenarios (Out1–Out3 and House2), yielding an overall average $\Delta E_{00}$ of 5.76 versus 5.46 for 'M1' (Table 7). For the SAM and MRAE metrics, however, the hybrid 'M1+M2' model outperformed the partial correction model, achieving values of 7.31 and 0.19, respectively. These results confirm that, even in real-world open environments, the 'M1+M2' model achieves better spectral fidelity than the partial correction approach, reinforcing the necessity of multi-stage calibration. Additionally, we included comparisons with the MST++ method. As shown, MST++ performs worse than the proposed approach in all metrics, including RMSE (%), $\Delta E_{00}$, SAM, and MRAE. This is primarily because MST++ is image-based, while our study focuses on sample-driven analysis. Moreover, MST++ relies on models pre-trained on existing public datasets; when applied to real-world scene images and color chart data captured in open environments, significant reconstruction errors emerge.
To further validate the ‘M1+M2’ framework, Figure 4 compares the reconstructed spectral curves with ground-truth measurements. Although minor deviations exist, the reconstructed spectra generally preserve the shape characteristics of the ground-truth curves and demonstrate reasonable overlap. This alignment highlights the feasibility of the proposed method for spectral reconstruction in open environments, where illumination variability and hardware limitations pose significant challenges. The observed discrepancies may stem from residual nonlinearities in the imaging pipeline or database mismatches, warranting further investigation in future work.
To provide a more intuitive visualization and comparison of the spectral measurement results, Figure 5 renders the spectral measurements of the color patches from the CC color chart. The rendering uses the CIE D50 illuminant, the CIE 1931 2° standard observer color matching functions, and the sRGB standard color space.
As illustrated in Figure 5, the true-color reproduction results of the CC chart in various open measurement environments were compared against reference images acquired via spectrophotometer measurements. While the hue accuracy of the color patches remained generally acceptable across the scenarios, significant color deviations were observed in specific open environments, particularly Out2 (c), House2 (f), and House3 (g). These discrepancies indicate that the proposed spectral measurement system currently faces limitations in achieving high-precision color measurement and reproduction in open environments.
However, it is critical to contextualize these results. In practical open-environment spectral measurement applications, spectral data are primarily utilized for spectral analysis [18,19,20,21,22] rather than high-fidelity color reproduction. Within this operational paradigm—where spectral fidelity supersedes colorimetric precision—the proposed method demonstrates sufficient practicality. Future work should focus on optimizing colorimetric accuracy for scenarios requiring both spectral and colorimetric rigor, such as cultural heritage digitization or industrial color quality control.

3.3. Experiment on the Impact of Equivalent Light Sources

To evaluate the impact of the number of equivalent light sources ($n$) on spectral measurement accuracy, experiments were conducted under three simulated imaging conditions and six real-world imaging conditions (labeled Out1–Out3 and House1–House3). The performance was quantified using the RMSE (%) and $\Delta E_{00}$ color difference between the reconstructed spectral reflectance and the reference values. Table 10 and Table 11 summarize the RMSE and CIEDE2000 results, respectively, for values of $n$ ranging from 1 to 50.
Through simulation experiments and open-environment experiments, we validated the spectral reconstruction performance of the proposed method with varying numbers of target illuminants ($n$). In the simulation experiments, the RMSE (%) and color difference ($\Delta E_{00}$) for the CCSG, Munsell, and pigment datasets stabilized when the number of target illuminants reached $n \geq 20$. Notably, even with a single illuminant ($n = 1$), the RMSE (%) and $\Delta E_{00}$ in most scenarios significantly outperformed the unoptimized baseline, validating the capability of the correction matrix $\mathbf{M}_2$ to compensate for illuminant discrepancies.
The experimental results confirm that the proposed method reaches a performance plateau when $n \geq 20$ and exhibits strong robustness against variations in $n$. This characteristic arises from the error-balancing mechanism and global optimization strategy; the overall error metric integrates the angle error and Euclidean distance, mitigating interference from extreme illuminant deviations, while the least-squares method constructs stable mapping relationships through multi-illuminant data. Consequently, regardless of the value of $n$, the method stably compensates for illuminant differences via the correction matrix $\mathbf{M}_2$, enabling high-precision spectral reconstruction.

4. Discussion

The primary objective of this study was to investigate whether the image color correction method could be applied to spectral measurement in complex open environments. This required an understanding of the existing spectral measurement methods for open environments, identifying technical gaps, and proposing solutions to achieve the set objectives. To this end, this study focused on spectral reconstruction based on a single raw-format digital camera RGB image. A spectral reconstruction method based on image color correction was proposed. First, matrix $\mathbf{M}_1$ was calculated to correct the imaging conditions of the open measurement environment to the reference imaging conditions, achieving exposure-level correction and preliminary color error correction. Subsequently, based on the theoretical imaging model of digital cameras [21] and a light source database, matrix $\mathbf{M}_2$ was computed to further correct color errors caused by light source differences. Finally, the correction matrices $\mathbf{M}_1$ and $\mathbf{M}_2$ were applied to the object's image, and spectral reconstruction was completed using the spectral reconstruction matrix established under reference imaging conditions.
Simulation experiments were first conducted. Ten common light sources were collected as test illuminants in open environments. Both the 'M1' and 'M1+M2' methods were applied to three sample sets. The results in Table 2 and Table 3 showed that, compared with using 'M1' alone, the 'M1+M2' method significantly reduced the average RMSE and $\Delta E_{00}$ across all three sample sets. This demonstrated that partial correction using only matrix $\mathbf{M}_1$ could not guarantee sufficient spectral reconstruction accuracy. By combining $\mathbf{M}_2$ to correct color errors caused by light source differences, further improvements in spectral reconstruction precision were achieved. Real-world experiments in open environments (Table 6, Table 7, Table 8 and Table 9) further validated the critical role of $\mathbf{M}_2$.
The experimental results of this study confirm that the proposed spectral reconstruction method based on image acquisition condition correction is effective and promising for open-environment spectral measurement applications. However, several challenges remain to be addressed.
  • Error caused by the estimation of the camera sensitivity function: In the practical experiments, the sensitivity function of the digital camera used was obtained through an estimation method [25], inevitably introducing estimation errors. These errors affect both the selection of the target light sources and the accuracy of solving the correction matrix $\mathbf{M}_2$, consequently compromising spectral measurement precision. Although this study mitigates the impact of camera sensitivity function estimation errors to some extent by selecting the top $n$ target light sources with smaller errors, it cannot entirely eliminate the influence of this factor. Furthermore, the solved correction matrix $\mathbf{M}_2$ is designed for the theoretical-simulation-based measurement system constructed using the estimated sensitivity function, not the actual measurement system. Therefore, the correction matrix $\mathbf{M}_2$ itself possesses inherent limitations in practical spectral measurement applications, affecting spectral measurement accuracy.
  • Error caused by the inherent discrepancy between the actual and theoretical imaging models of the measurement system: The simulation experiments in this study are based on the idealized linear imaging model of Equation (1). These simulations do not account for factors present in actual measurement systems, such as lens optical effects, exposure parameters, and optical signal crosstalk between filters [30,31,32,33,34,35]. In practical experiments, the complexity of the actual imaging model significantly exceeds that of the theoretical model in Equation (1), and the aforementioned factors critically influence the response values of the measurement system. The methodology employed in this study for solving both the target light sources and the correction matrix $\mathbf{M}_2$ is fundamentally grounded in the idealized linear imaging model and does not consider the discrepancy between the actual and theoretical imaging models. Consequently, the applicability of the correction matrix $\mathbf{M}_2$ to practical spectral measurement tasks is limited, ultimately affecting the accuracy of the spectral measurements.

5. Conclusions

Spectral measurement technology based on digital cameras holds significant potential in various spectral analysis applications. To address the issue that the existing methods fail to account for exposure inconsistencies between camera spectral characterization and spectral reconstruction in open environments, this study proposes a spectral reconstruction method based on image color correction for digital cameras. The proposed method was evaluated through both simulation and real-world experiments. The experimental results demonstrated that the adaptive color correction model ('M1+M2') in the proposed imaging color correction method achieved the best spectral reconstruction accuracy, with overall average RMSE (%) and $\Delta E_{00}$ values of 4.44 and 5.76, respectively, in open environments (Table 6 and Table 7). These results verify the feasibility of the proposed method in open environments and provide a viable reference approach for spectral reconstruction applications. Future research will focus on minimizing the errors caused by inaccuracies in the estimated camera sensitivity functions and by the inherent differences between the practical and theoretical imaging models of the measurement system. This holds significant importance for various fields, such as agriculture, environmental monitoring, and cultural heritage preservation and restoration.

Author Contributions

Methodology, validation, and writing (review and editing), J.L.; methodology, data collection and analysis, and writing (original draft preparation), X.H.; investigation and data curation, Y.L.; resources and writing (review), K.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Fundamental Research Funds for the Central Universities (2232024G-14) and the Hubei Provincial Natural Science Foundation General Project (2022CFB537).

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, Y.; Wang, C.; Zhao, J. Locally Linear Embedded Sparse Coding for Spectral Reconstruction From RGB Images. IEEE Signal Process. Lett. 2017, 25, 363–367. [Google Scholar] [CrossRef]
  2. Kim, T.; Visbal-Onufrak, M.A.; Konger, R.L.; Kim, Y.L. Data-driven imaging of tissue inflammation using RGB-based hyperspectral reconstruction toward personal monitoring of dermatologic health. Biomed. Opt. Express 2017, 8, 5282–5296. [Google Scholar] [CrossRef] [PubMed]
  3. Grabowski, B.; Masarczyk, W.; Głomb, P.; Mendys, A. Automatic pigment identification from hyperspectral data. J. Cult. Herit. 2018, 31, 1–12. [Google Scholar] [CrossRef]
  4. Cao, B.; Liao, N.; Cheng, H. Spectral reflectance reconstruction from RGB images based on weighting smaller color difference group. Color Res. Appl. 2017, 42, 327–332. [Google Scholar] [CrossRef]
  5. Bian, L.; Wang, Z.; Zhang, Y.; Li, L.; Zhang, Y.; Yang, C.; Fang, W.; Zhao, J.; Zhu, C.; Meng, Q.; et al. A broadband hyperspectral image sensor with high spatio-temporal resolution. Nature 2024, 635, 73–81. [Google Scholar] [CrossRef]
  6. Zhang, X.; Wang, Q.; Li, J.; Zhou, X.; Yang, Y.; Xu, H. Estimating spectral reflectance from camera responses based on CIE XYZ tristimulus values under multi-illuminants. Color Res. Appl. 2017, 42, 68–77. [Google Scholar] [CrossRef]
  7. Martínez, E.; Castro, S.; Bacca, J.; Arguello, H. Efficient Transfer Learning for Spectral Image Reconstruction from RGB Images. In Proceedings of the 2020 IEEE Colombian Conference on Applications of Computational Intelligence (IEEE ColCACI 2020), Cali, Colombia, 7–9 August 2020; pp. 1–6. [Google Scholar]
  8. Monroy, B.; Bacca, J.; Arguello, H. Deep Low-Dimensional Spectral Image Representation for Compressive Spectral Reconstruction. In Proceedings of the 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP), Gold Coast, Australia, 25–28 October 2021; pp. 1–6. [Google Scholar]
  9. Liang, J.; Xin, L.; Zuo, Z.; Zhou, J.; Liu, A.; Luo, H.; Hu, X. Research on the deep learning-based exposure invariant spectral reconstruction method. Front. Neurosci. 2022, 16, 1031546. [Google Scholar] [CrossRef]
  10. Shrestha, R.; Hardeberg, J.Y. Spectrogenic imaging: A novel approach to multispectral imaging in an uncontrolled environment. Opt. Express 2014, 22, 9123–9133. [Google Scholar] [CrossRef]
  11. Khan, H.A.; Thomas, J.B.; Hardeberg, J.Y. Multispectral constancy based on spectral adaptation transform. In Scandinavian Conference on Image Analysis; Springer: Cham, Switzerland, 2017; pp. 459–470. [Google Scholar]
  12. Khan, H.A.; Thomas, J.B.; Hardeberg, J.Y.; Laligant, O. Spectral adaptation transform for multispectral constancy. J. Imaging Sci. Technol. 2018, 62, 20504-1–20504-12. [Google Scholar] [CrossRef]
  13. Zhang, L.J.; Jiang, J.; Jiang, H.; Zhang, J.J.; Jin, X. Improving Training-based Reflectance Reconstruction via White-balance and Link Function. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 8616–8621. [Google Scholar]
  14. Gu, J.; Chen, H. An algorithm of spectral reflectance function reconstruction without sample training can integrate prior information. In Proceedings of the 2017 2nd International Conference on Image, Vision and Computing (ICIVC), Chengdu, China, 2–4 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 541–544. [Google Scholar]
  15. Cai, Y.; Lin, J.; Lin, Z. MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 745–755. [Google Scholar]
  16. Lin, Y.T.; Finlayson, G.D. A rehabilitation of pixel-based spectral reconstruction from RGB images. Sensors 2023, 23, 4155. [Google Scholar] [CrossRef]
  17. Fsian, A.; Thomas, J.B.; Hardeberg, J.Y.; Gouton, P. Deep Joint Demosaicking and Super Resolution for Spectral Filter Array Images. IEEE Access 2025, 13, 16208–16222. [Google Scholar] [CrossRef]
  18. Lin, Y.T.; Finlayson, G.D. Color and Imaging Conference. In Proceedings of the Society for Imaging Science and Technology, Montreal, QC, Canada, 28 October–1 November 2019; pp. 284–289. [Google Scholar]
  19. Lin, Y.T.; Finlayson, G.D. Physically plausible spectral reconstruction from RGB images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 532–533. [Google Scholar]
  20. Ibrahim, A.; Tominaga, S.; Horiuchi, T. A spectral invariant representation of spectral reflectance. Opt. Rev. 2011, 18, 231–236. [Google Scholar] [CrossRef]
  21. Liang, J.; Wan, X. Spectral Reconstruction from Single RGB Image of Trichromatic Digital Camera. Acta Opt. Sin. 2017, 37, 370–377. [Google Scholar]
  22. Connah, D.; Hardeberg, J. Spectral recovery using polynomial models. Proc SPIE 2005, 5667, 65–75. [Google Scholar]
  23. Finlayson, G.D.; Mackiewicz, M.; Hurlbert, A. Color Correction Using Root-Polynomial Regression. IEEE Trans. Image Process. 2015, 24, 1460–1470. [Google Scholar] [CrossRef]
  24. Qiu, J.; Xu, H. Camera response prediction for various capture settings using the spectral sensitivity and crosstalk model. Appl. Opt. 2016, 55, 6989–6999. [Google Scholar] [CrossRef] [PubMed]
  25. Jiang, J.; Liu, D.; Gu, J.; Süsstrunk, S. What is the space of spectral sensitivity functions for digital color cameras? In Proceedings of the 2013 IEEE Workshop on Applications of Computer Vision (WACV), Clearwater Beach, FL, USA, 15–17 January 2013; pp. 168–179. [Google Scholar]
  26. Houser, K.W.; Wei, M.; David, A.; Krames, M.R.; Shen, X.S. Review of measures for light-source color rendition and considerations for a two-measure system for characterizing color rendition. Opt. Express 2013, 21, 10393–10411. [Google Scholar] [CrossRef]
  27. Barnard, K.; Cardei, V.; Funt, B. A comparison of computational color constancy algorithms. I: Methodology and experiments with synthesized data. IEEE Trans. Image Process. 2002, 11, 972–984. [Google Scholar] [CrossRef]
  28. National Gallery, London. Light Source Spectral Power Distribution Database. Available online: http://research.ng-london.org.uk/scientific/spd/ (accessed on 20 June 2025).
  29. CIE. CIE 15:2004 Colorimetry; Central Bureau of the CIE: Vienna, Austria, 2004. [Google Scholar]
  30. Nakamura, J. Image Sensors and Signal Processing for Digital Still Cameras; CRC Press, Inc.: Boca Raton, FL, USA, 2005. [Google Scholar]
  31. Ramanath, R.; Snyder, W.E.; Yoo, Y.; Drew, M.S. Color image processing pipeline. IEEE Signal Process. Mag. 2005, 22, 34–43. [Google Scholar] [CrossRef]
  32. Farrell, J.E.; Catrysse, P.B.; Wandell, B.A. Digital camera simulation. Appl. Opt. 2012, 51, A80–A90. [Google Scholar] [CrossRef]
  33. Yu, W. Practical anti-vignetting methods for digital cameras. IEEE Trans. Consum. Electron. 2004, 50, 975–983. [Google Scholar]
  34. Getman, A.; Uvarov, T.; Han, Y.; Kim, B.; Ahn, J.; Lee, Y. Crosstalk, color tint and shading correction for small pixel size image sensor. In Proceedings of the International Image Sensor Workshop, Ogunquit, ME, USA, 7–10 June 2007; pp. 166–169. [Google Scholar]
  35. Liang, J.; Hu, X.; Zhuan, Z.; Liu, X.; Li, Y.; Zhou, W.; Luo, H.; Hu, X.; Xiao, K. Exploring multispectral reconstruction based on camera response prediction. In Proceedings of the CVCS2024: The 12th Colour and Visual Computing Symposium, Gjovik, Norway, 5–6 September 2024. [Google Scholar]
Figure 1. Principle of the spectral reconstruction method based on image color correction. The red line section represents the first step of our method, which involves acquiring the reference whiteboard response values and constructing the spectral reconstruction matrix $\mathbf{Q}$. The dark-purple-filled rectangle indicates the calculation of the correction matrix $\mathbf{M}_1$. The green line section represents the second step, which includes acquiring the reference whiteboard response values and estimating the illuminant in the open environment. The light-blue-filled rectangle denotes the computation of the correction matrix $\mathbf{M}_2$. Finally, the black line section at the bottom illustrates the correction and reconstruction process applied to the test samples.
Figure 2. (a) Camera spectral response functions of the Nikon D7200; (b) spectral power distributions of the 10 collected illuminants.
Figure 3. (a) CCSG image captured in the reference environment. (b) Spectral power distribution (SPD) of the reference light source.
Figure 4. The reconstructed spectral reflectance using $\mathbf{M}_1$ and $\mathbf{M}_2$ in the laboratory.
Figure 5. Rendered spectral measurements of CC color chart patches: (a) reference, (b) Out1, (c) Out2, (d) Out3, (e) House1, (f) House2, and (g) House3. Out1, Out2, and Out3 are samples captured under conditions of shadow, facing direct sunlight, and having one's back to sunlight, respectively. House1, House2, and House3 were captured under the following lighting conditions: D65 light source alone, a mixture of D65 and A light sources, and a mixed scenario of indoor lighting and daylight, respectively.
Table 1. Training and the corresponding testing samples. For the Munsell and pigment datasets, the odd-numbered samples were selected as training samples, while the even-numbered samples were used as test samples.

| | Color Chart | Munsell | Pigment |
| --- | --- | --- | --- |
| Training | SG (140) | odd (635) | odd (392) |
| Testing | CC (24) | even (634) | even (392) |
Table 2. The average spectral reconstruction error (RMSE (%)) of the three sample sets and their overall average results.

| | Color Chart | Munsell | Pigment | Average |
| --- | --- | --- | --- | --- |
| M1 | 3.18 | 2.15 | 3.49 | 2.94 |
| M1+M2 | 2.59 | 1.67 | 3.00 | 2.42 |
Table 3. The average spectral reconstruction error ($\Delta E_{00}$) of the three sample sets and their overall average results.

| | Color Chart | Munsell | Pigment | Average |
| --- | --- | --- | --- | --- |
| M1 | 5.11 | 3.88 | 4.85 | 4.61 |
| M1+M2 | 1.69 | 1.33 | 1.59 | 1.54 |
Table 4. The average spectral reconstruction error (spectral angle mapper (SAM)) of the three sample sets and their overall average results.

| | Color Chart | Munsell | Pigment | Average |
| --- | --- | --- | --- | --- |
| M1 | 6.62 | 5.00 | 8.28 | 6.63 |
| M1+M2 | 5.03 | 3.80 | 7.30 | 5.38 |
Table 5. The average spectral reconstruction error (mean relative absolute error (MRAE)) of the three sample sets and their overall average results.

| | Color Chart | Munsell | Pigment | Average |
| --- | --- | --- | --- | --- |
| M1 | 0.12 | 0.09 | 0.14 | 0.12 |
| M1+M2 | 0.08 | 0.06 | 0.11 | 0.08 |
Table 6. The average spectral reconstruction error (RMSE (%)) in open environments and the overall average results.

| | Out1 | Out2 | Out3 | House1 | House2 | House3 | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M1 | 3.51 | 4.32 | 4.55 | 4.05 | 5.06 | 6.11 | 4.60 |
| M1+M2 | 3.31 | 4.32 | 4.45 | 3.76 | 4.90 | 5.91 | 4.44 |
| MST++ | 10.66 | 13.45 | 12.47 | 11.07 | 8.97 | 8.43 | 10.84 |
Table 7. The average spectral reconstruction error ($\Delta E_{00}$) in open environments and the overall average results.

| | Out1 | Out2 | Out3 | House1 | House2 | House3 | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M1 | 3.50 | 5.43 | 5.79 | 4.43 | 6.15 | 7.46 | 5.46 |
| M1+M2 | 4.01 | 6.60 | 6.40 | 3.87 | 7.13 | 6.57 | 5.76 |
| MST++ | 8.83 | 9.19 | 10.46 | 8.13 | 6.87 | 7.86 | 8.56 |
Table 8. The average spectral reconstruction error (spectral angle mapper (SAM)) in open environments and the overall average results.

| | Out1 | Out2 | Out3 | House1 | House2 | House3 | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M1 | 5.70 | 7.36 | 9.18 | 7.03 | 7.52 | 9.30 | 7.68 |
| M1+M2 | 5.60 | 7.13 | 8.70 | 6.50 | 7.49 | 8.44 | 7.31 |
| MST++ | 9.11 | 12.74 | 9.10 | 8.26 | 7.99 | 8.87 | 9.35 |
Table 9. The average spectral reconstruction error (mean relative absolute error (MRAE)) in open environments and the overall average results.

| | Out1 | Out2 | Out3 | House1 | House2 | House3 | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M1 | 0.12 | 0.17 | 0.17 | 0.20 | 0.25 | 0.31 | 0.20 |
| M1+M2 | 0.11 | 0.17 | 0.16 | 0.18 | 0.23 | 0.29 | 0.19 |
| MST++ | 0.43 | 0.38 | 0.52 | 0.34 | 0.25 | 0.29 | 0.37 |
Table 10. RMSE (%) between the reconstructed spectral reflectance of the ColorChecker chart and the reference values for different $n$ values ranging from 1 to 50.

| n | 1 | 3 | 5 | 10 | 20 | 30 | 40 | 50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CCSG | 2.50 | 2.58 | 2.59 | 2.60 | 2.59 | 2.60 | 2.59 | 2.59 |
| Munsell | 1.60 | 1.66 | 1.67 | 1.67 | 1.66 | 1.67 | 1.66 | 1.66 |
| Pigment | 2.90 | 2.99 | 2.99 | 3.00 | 3.00 | 3.00 | 3.02 | 3.00 |
| Out1 | 3.60 | 3.64 | 3.62 | 3.50 | 3.40 | 3.35 | 3.31 | 3.33 |
| Out2 | 4.38 | 4.30 | 4.23 | 4.25 | 4.42 | 4.37 | 4.32 | 4.32 |
| Out3 | 4.81 | 4.48 | 4.59 | 4.65 | 4.66 | 4.52 | 4.45 | 4.39 |
| House1 | 4.54 | 4.57 | 4.58 | 4.54 | 4.40 | 4.03 | 3.76 | 3.70 |
| House2 | 4.95 | 4.95 | 4.94 | 4.88 | 4.90 | 4.88 | 4.90 | 4.91 |
| House3 | 5.88 | 5.90 | 5.93 | 5.91 | 5.92 | 5.91 | 5.91 | 5.90 |
Table 11. $\Delta E_{00}$ between the reconstructed spectral reflectance of the ColorChecker chart and the reference values for different $n$ values ranging from 1 to 50.

| n | 1 | 3 | 5 | 10 | 20 | 30 | 40 | 50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CCSG | 1.38 | 1.65 | 1.65 | 1.70 | 1.69 | 1.07 | 1.69 | 1.70 |
| Munsell | 1.01 | 1.32 | 1.34 | 1.38 | 1.34 | 1.33 | 1.33 | 1.33 |
| Pigment | 1.26 | 1.62 | 1.64 | 1.64 | 1.59 | 1.59 | 1.61 | 1.60 |
| Out1 | 4.44 | 4.56 | 4.48 | 4.35 | 4.25 | 4.14 | 4.01 | 4.10 |
| Out2 | 6.96 | 6.78 | 6.59 | 6.66 | 7.00 | 6.81 | 6.60 | 6.65 |
| Out3 | 7.11 | 6.24 | 6.59 | 6.80 | 6.83 | 6.45 | 6.40 | 6.35 |
| House1 | 5.18 | 5.22 | 5.23 | 5.19 | 4.98 | 4.37 | 3.87 | 3.65 |
| House2 | 7.37 | 7.33 | 7.29 | 7.14 | 7.17 | 7.12 | 7.13 | 7.12 |
| House3 | 6.89 | 6.94 | 6.81 | 6.78 | 6.82 | 6.68 | 6.57 | 6.44 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
