Article

Recovering the Reduced Scattering and Absorption Coefficients of Turbid Media from a Single Image

Institute for Laser Technologies in Medicine and Metrology at the University of Ulm, Helmholtzstr. 12, 89081 Ulm, Germany
*
Authors to whom correspondence should be addressed.
Photonics 2025, 12(11), 1118; https://doi.org/10.3390/photonics12111118
Submission received: 21 October 2025 / Revised: 7 November 2025 / Accepted: 10 November 2025 / Published: 13 November 2025

Abstract

This study introduces a physics-based inverse rendering method for determining the reduced scattering and absorption coefficients of turbid materials with arbitrary shapes, using a single image as input. The approach enables fully spectrally resolved reconstruction of the wavelength-dependent behaviour of the optical properties while also circumventing the specialised sample preparation required by established measurement techniques. Our approach employs a numerical solution of the Radiative Transfer Equation based on an inverse Monte Carlo framework, utilising an improved Levenberg–Marquardt algorithm. By rendering the edge effects accurately, particularly translucency, it becomes possible to differentiate between scattering and absorption from just one image. Importantly, the errors induced by only approximate prior knowledge of the phase function and refractive index of the material were quantified. The method was validated through theoretical studies on three materials spanning a range of optical parameters, initially using a simple cube geometry and later extended to more complex shapes. Evaluated via the CIE ΔE2000 colour difference, forward renderings based on the recovered properties were indistinguishable from those preset, which were obtained from integrating sphere measurements on real materials. The recovered optical properties showed less than 4% difference relative to these measurements. This work demonstrates a versatile approach for optical material characterisation, with significant potential for digital twin creation and soft-proofing in manufacturing.

1. Introduction

The colour appearance of objects is determined by both the intrinsic optical properties of the material and its geometric shape, as well as the lighting conditions. Accurately rendering these objects using optical property-based methods [1,2] requires precise measurements of key parameters underlying the Radiative Transfer Equation (RTE): the absorption coefficient μa, the scattering coefficient μs, the refractive index n, and the scattering phase function, often characterised by the anisotropy factor g. Traditional methods for measuring absorption and scattering properties include integrating sphere setups [3,4], or goniometric measurements [5] that include the scattering phase function and thus the reduced scattering coefficient, μs′ = (1 − g)μs. Although these techniques are, in principle, highly precise, they impose constraints on sample preparation and material types. For example, the integrating sphere method [4,6,7,8,9] often involves preparing plane-parallel samples. Similar sample preparation requirements also limit other conventional methods [10,11].
Many previous studies have addressed various inverse approaches to recover optical properties [12,13], e.g., using single scattering approaches [14,15] or the diffusion equation [16,17]. Other related works that acquire optical parameters include Pranovich et al. [18], who studied thin sample slabs measured with a spectrometer; Elek et al. [19,20], who also use thin sample slabs along with reflectance measurements using a camera setup in the three colour channels; and Iser et al. [21], who extended this approach to achieve spectral resolution. Gkioulekas et al. [22] employ a hyperspectral camera setup with reflectance and transmission measurements to recover optical properties in three channels, and Chen et al. [23] rely on ordinary images to replicate the appearance of objects but require at least three images as input. Further closely related work [24] recovers the geometry of complex objects and their appearance using an inverse bidirectional scattering–surface reflectance distribution function approach. Depending on different scene scenarios and properties examined, additional works that use inverse approaches are Refs. [25,26,27].
In this work, we focused on an inverse Monte Carlo (MC) method for determining both the absorption and reduced scattering coefficients of solid materials simultaneously from a single camera image while assuming that the refractive index, scattering phase function, and the object’s geometry are arbitrary but already known, a reasonable premise given the availability of established, contactless shape measurement techniques [28,29,30,31,32]. Our approach is especially aimed at situations where only samples of a material in geometrically complex or otherwise impractical shapes are available for measurement. Like Ref. [14], we only use a single image as input, but do not limit our approach to the single scattering regime or the three colour channels. At the core of our method is a fully physics-based rendering framework [1,2] that utilises a numerical solution of the RTE to accurately simulate light propagation through the material [33] and explicitly includes the resulting edge effects of the physically correct light transfer. Unlike a semi-infinite medium, the finite objects examined in this study exhibit distinct edge-related phenomena such as brightness and colour gradients, as well as translucency. The optical properties were determined by fitting the computed light distribution to the synthesised image data using an improved Levenberg–Marquardt (LM) algorithm [34] that incorporates the valuable information provided by, e.g., translucency effects. Through the distinct influence that scattering and absorption exert on the translucency and colour, the algorithm successfully separates the contributions of each during the fitting process with only one image as input. Crucially, it is only by including these edge effects in our physics-based rendering that the differences between scattering and absorption can be reliably distinguished by the fit, because for a laterally infinitely extended medium, the reflected light depends only on the quotient of μs′ and μa.
To validate our method, a series of theoretical studies aimed at testing its viability were conducted. We examined three different materials that were first characterised using an integrating sphere, with the resulting values serving as our reference. First, varying detection and illumination angles were examined to find a configuration that optimises the convergence of the fit. Subsequently, we introduced variations in the refractive index and anisotropy factor to simulate realistic scenarios in which these parameters are not known exactly. We then transitioned from an ideal monochromatic setup to a more realistic setup that incorporates a realistic light source, detector sensitivities, and spectral bandpass filters. In addition to this spectral filtering approach, we studied the effect of using filters corresponding to the camera’s intrinsic sensitivity, i.e., RGB channels, which are typically much broader. As the final demonstration of our method’s versatility, we employed the spectral filter setup to recover the optical properties of the same three materials using complex geometries to underscore the method’s adaptability to real-world scenarios, where object geometries can be highly irregular. Additionally, we can directly use the recovered properties to create a digital twin [35,36], similarly to Ref. [18], that can be placed in different rendering environments under any lighting and viewing conditions and produce photorealistic renderings. This can be a step towards improving the soft-proofing of manufacturing processes by visualising the products in virtual scenarios [18,37,38].

2. Materials and Methods

In this section, we detail the core methodologies that underpin our study. First, we describe the MC algorithm used in the rendering process, emphasising its role in simulating complex light interactions and ensuring realistic image synthesis. Next, we explain how the LM fitting procedure is employed to recover the optical properties, providing a robust framework for parameter estimation. Finally, we outline the simulation setup, discussing the configuration and conditions that establish the foundation for our analysis.

2.1. Radiative Transfer Modelling via GPU-Based Monte Carlo Simulation

We employed an MC simulation to compute the physics-based light transfer through turbid media. Our in-house developed MC software leverages graphics processing units to significantly reduce computation times while accommodating flexible lighting and viewing conditions. It was validated thoroughly against another MC software [3] and an analytical RTE solution [39]. The simulation computes light transfer solely based on the optical properties of the medium; a more detailed description and other applications can be found in Refs. [1,2] and [40,41,42], respectively, so here we provide only a brief summary. The light transfer is modelled within the framework of radiative transfer theory by numerically solving the RTE in the limit of an infinite number of photons. The required intrinsic optical properties of the material are the reduced scattering and absorption coefficients, the refractive index, and the scattering phase function, here chosen to be the Henyey–Greenstein phase function [43]. The Fresnel equations [44] handle refraction at interfaces with mismatched refractive indices to accurately represent physical light propagation. As is common for MC methods [45], the process involves executing numerous iterations in which individual energy packets, referred to as photons in this MC context, are propagated through the medium, and random numbers are drawn repeatedly to simulate probabilistic events such as scattering and absorption. These computations are performed on a 3D model that precisely captures the geometry and material distribution of the object, using a backward rendering path that propagates photons from the detector to the light source. For the simple geometries in Section 3.1, a voxel-based code [2] was used, while the more complex geometries in Section 3.4 were computed with a tetrahedron-based code [1]. As described in Refs. [1,2], the simulation data, combined with the measured spectrum of the light source and the camera sensitivity, enable the rendering of images that exhibit excellent agreement with actual photographs, based on the CIE ΔE2000 values. The CIE ΔE2000 colour-difference metric [46] provides a perceptually uniform measure of the deviation between two colours in the CIE L*a*b* space. Lower values correspond to smaller perceived colour differences, with values below 2 generally regarded as visually negligible.
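The photon random walk described above can be sketched compactly. The following is a minimal illustration, not the authors’ GPU code: it assumes a homogeneous slab, ignores Fresnel refraction at the boundaries, and handles absorption by weight reduction with the single-scattering albedo; `sample_hg_costheta` implements the standard inverse-CDF sampling of the Henyey–Greenstein phase function, and all function and parameter names are our own.

```python
import numpy as np

def sample_hg_costheta(g, xi):
    """Inverse-CDF sample of cos(theta) from the Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0                                   # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

def scatter(u, cos_t, phi):
    """Rotate the unit direction u = (ux, uy, uz) by polar angle theta and azimuth phi."""
    ux, uy, uz = u
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    if abs(uz) > 0.99999:                                       # near-vertical: avoid /0
        return (sin_t * np.cos(phi), sin_t * np.sin(phi),
                cos_t if uz > 0.0 else -cos_t)
    denom = np.sqrt(1.0 - uz * uz)
    nx = sin_t * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / denom + ux * cos_t
    ny = sin_t * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / denom + uy * cos_t
    nz = -sin_t * np.cos(phi) * denom + uz * cos_t
    return (nx, ny, nz)

def one_photon(mu_s, mu_a, g, thickness, rng, max_events=10_000):
    """Weighted random walk of one photon packet through a slab along z (lengths in mm).

    Returns (surviving weight, True if the packet left through the back face).
    Absorption is modelled by multiplying the weight with the single-scattering albedo.
    """
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    z, u, weight = 0.0, (0.0, 0.0, 1.0), 1.0
    for _ in range(max_events):
        step = -np.log(1.0 - rng.random()) / mu_t               # exponential free path
        z += u[2] * step
        if z < 0.0 or z > thickness:                            # packet escaped the slab
            break
        weight *= albedo                                        # survival after interaction
        cos_t = sample_hg_costheta(g, rng.random())
        u = scatter(u, cos_t, 2.0 * np.pi * rng.random())
    return weight, z > thickness
```

By reciprocity, tracing packets forward through the slab as done here and the backward rendering path used in the actual code sample the same light transfer; the backward path is simply more efficient when only a few detector pixels are needed.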

2.2. Parameter Estimation via Levenberg–Marquardt Fitting of Monte Carlo Simulated Data

The MC simulation provides detailed information on the light transfer from the source through the object to the detector. Initially, these data represent only the intensity of light detected per pixel on the detector. As described in Ref. [1], the spectrally resolved intensity data can be used to reconstruct the fully rendered image. By isolating a single wavelength, we can compute the corresponding intensity distribution on the detector that is associated with the optical properties of that specific wavelength. To establish a reference for fitting, which can also be an experimentally obtained image, we first computed an MC simulation using known optical properties obtained from integrating sphere measurements, shown as the solid blue lines alongside the recovered reduced scattering and absorption coefficients in Section 3.1. A slice through the resulting detector output was then extracted and used as the reference data set in the LM fitting algorithm. The LM algorithm then performs another simulation, iteratively adjusts the scattering and absorption coefficients, and aims to match the reference data by performing the following steps:
  • Initialisation: Start from an initial guess for the scattering and absorption coefficients. Use the MC simulation to obtain the corresponding intensity profile, and compute the cost function based on the discrepancy between the simulated and reference intensity curves. A gradient-based method then determines the direction and magnitude of the coefficient adjustments [34].
  • Iteration: Update the optical coefficients and run the MC simulation to obtain the new intensity profile. Compute the new cost and decide whether to accept or reject the updated coefficients based on cost minimisation.
  • Convergence: Repeat the iterative process until the algorithm converges on the optimal set of coefficients that best reproduce the reference intensity curve.
The choice of initial parameters was addressed by pre-running the fit with arbitrary starting values and allowing for aggressive parameter changes to locate a parameter region relatively close to the optimal value region. Then the fit was started in this region with more conservative settings that allow for stable convergence. Our goal was also to reduce the noise level in the MC simulation during the fitting process by increasing the number of simulated photons per iteration to 10⁹ while keeping the computation time within reasonable limits, i.e., below 1 min. The best-fit intensity curve yields the scattering and absorption coefficients for the corresponding wavelength. By repeating this fitting procedure across different wavelengths, we recovered the complete spectral distributions of μs′ and μa, while assuming n and g are known. The case where this knowledge is imprecise was investigated in Section 3.1.

2.3. Setup of the Virtual Camera and Reference Data Computation

The next step involved modelling a virtual camera and a realistic light source within the fitting algorithm. This extension enabled more realistic simulations and aligned better with practical measurement scenarios. We chose a virtual camera model that simulates the Nikon D7500 and incorporated properties and settings consistent with its real-world counterpart, including the spectral sensitivity functions of the camera chip and the lens. The light source was chosen to be an LED panel, whose spectral distribution was previously determined [1].
To retrieve absorption and reduced scattering coefficients across the visible spectrum, we introduced spectral filters into the virtual setup. These filters, characterised by Gaussian profiles, isolate wavelengths and allow us to obtain the optical properties with a spectral resolution determined by the positions and widths of the filters. A schematic of the virtual configuration is shown in Figure 1, comprising a fully characterised light source, the sample, the Gaussian optical bandpass filter, and the virtual camera. We used a Gaussian filter of the form
G_{\lambda_0}(\lambda) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left[-\frac{1}{2}\left(\frac{\lambda-\lambda_0}{\sigma}\right)^{2}\right],
which is centred around its mean wavelength λ 0 . This can then be transformed into the following expression:
G_{\lambda_0}(\lambda) = \exp\!\left[-4\ln(2)\left(\frac{\lambda-\lambda_0}{\mathrm{fwhm}}\right)^{2}\right],
that includes the full-width at half maximum (fwhm), where we used σ = fwhm/(2√(2 ln 2)) and normalised the amplitude to unity. The Gaussian-weighted average of the intensity I_weighted(λ0) around λ0 then yields
I_{\mathrm{weighted}}(\lambda_0) = \frac{\int_{\lambda_0-d}^{\lambda_0+d} \mathrm{d}\lambda\; I(\lambda)\, G_{\lambda_0}(\lambda)\, C(\lambda)\, L(\lambda)}{\int_{\lambda_0-d}^{\lambda_0+d} \mathrm{d}\lambda\; G_{\lambda_0}(\lambda)\, C(\lambda)\, L(\lambda)},
where I(λ) represents the intensity from the MC simulation at wavelength λ and d = 3σ defines the integration range around λ0. The spectral intensity distribution of the light source L(λ) and the camera’s spectral sensitivity C_F(λ) in the colour channel F are also taken into account to include all components of a realistic filtered setup, where C(λ) denotes only the sensitivity of the camera channel whose maximum response falls within the filter range. Formally, the channel is determined as follows:
C(\lambda) = C_{F^{\ast}}(\lambda), \qquad F^{\ast} = \operatorname*{arg\,max}_{F} \int_{\lambda_0-d}^{\lambda_0+d} \mathrm{d}\lambda\; I(\lambda)\, G_{\lambda_0}(\lambda)\, C_F(\lambda)\, L(\lambda).
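The filtered-average expressions above translate directly into a short numerical routine. The sketch below assumes `I`, `C`, and `L` are callables returning spectra on a wavelength grid in nm, and evaluates the weighted average on a uniform grid over [λ0 − 3σ, λ0 + 3σ]; names and the grid resolution are our own choices.

```python
import numpy as np

def gaussian_filter(lam, lam0, fwhm):
    """Unit-amplitude Gaussian bandpass with sigma = fwhm / (2*sqrt(2 ln 2))."""
    return np.exp(-4.0 * np.log(2.0) * ((lam - lam0) / fwhm) ** 2)

def weighted_intensity(lam0, fwhm, I, C, L, n=601):
    """Gaussian-weighted average of the simulated intensity over
    [lam0 - 3*sigma, lam0 + 3*sigma] on a uniform wavelength grid (nm)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    d = 3.0 * sigma
    lam = np.linspace(lam0 - d, lam0 + d, n)
    w = gaussian_filter(lam, lam0, fwhm) * C(lam) * L(lam)
    return np.sum(I(lam) * w) / np.sum(w)   # ratio of sums: the grid spacing cancels
```

For flat C(λ) and L(λ), the symmetric Gaussian weights reproduce I(λ0) exactly whenever I is linear over the filter window, which is why the error discussed later grows only where the optical properties vary sharply within the bandwidth.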

3. Results

3.1. Simple Geometry

To initially test our method, we inversely recovered the scattering and absorption coefficients from a virtual cube using the spatially resolved reflectance curves per wavelength while assuming an ideal monochromatic setup. That is, we directly used the intensity I(λ) from Equation (3) as reference data. We aimed to recover the optical properties across the entire visible range, from 400 nm to 700 nm, in increments of 10 nm. Before initiating the fitting process, we explored different configurations of the camera and light source positions to identify ideal conditions for the optimisation of our specific setup. While it is possible to use the entire image, for this test and the following fits of the cubes, only a single image line from top to bottom was computed, marked with a red dashed line in the renderings. This was performed for two reasons: Firstly, because we used a backwards rendering path from the detector to the light source, this allowed us to reduce the noise of the simulation significantly, as only the pixels needed for the fit were calculated. Secondly, for future applications, aligning one or multiple image lines might be more feasible than aligning the 3D object completely. In contrast, the rendering results shown in this section were computed using the complete scene with the optical properties recovered from the fit to the image line data. We varied the positions of the camera and light source by adjusting their angles along fixed axes, i.e., we chose φl = φc = 0° and varied ϑl and ϑc, as they are defined in Figure 1. To evaluate the influence of the configuration angles, the root mean square error (RMSE) of the examined range of optical properties, μa and μs′, was computed and plotted as an error map for each pair of chosen angles.
In Figure 2, we show the variation in the camera angle ϑc, the light source angle ϑl, and the resulting error maps for μs′ and μa, where we set the true optical properties to μs′ = 0.05 mm⁻¹ and μa = 0.015 mm⁻¹. As is typical in optimisation problems, we aimed for an error map that facilitates rapid convergence; thus, flat error landscapes and numerous local minima were avoided. Our results indicated the most favourable conditions at a camera angle of ϑc = 45°.
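Such an error map amounts to a brute-force sweep of the RMSE over a (μs′, μa) grid against a fixed reference profile. In the sketch below, a hypothetical analytic decay profile (`forward_profile`, our own stand-in) replaces the MC-simulated intensity line; in the study proper, every grid point requires a full MC run.

```python
import numpy as np

def forward_profile(mu_s, mu_a, x):
    # Hypothetical analytic stand-in for an MC-simulated intensity line profile.
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s))
    return mu_s / (mu_s + mu_a) * np.exp(-mu_eff * x)

def rmse_error_map(mu_s_grid, mu_a_grid, mu_s_true, mu_a_true, x):
    """RMSE between the reference profile and profiles over a (mu_s', mu_a) grid;
    rows index mu_a, columns index mu_s'."""
    ref = forward_profile(mu_s_true, mu_a_true, x)
    emap = np.empty((len(mu_a_grid), len(mu_s_grid)))
    for i, mu_a in enumerate(mu_a_grid):
        for j, mu_s in enumerate(mu_s_grid):
            emap[i, j] = np.sqrt(np.mean((forward_profile(mu_s, mu_a, x) - ref) ** 2))
    return emap
```

A sharply localised minimum in this map is exactly the property the angle search optimises for: the steeper and more isotropic the basin around the true pair, the faster and more reliably the fit converges.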
In Figure 3, we further analysed the error map at this fixed camera angle while varying the light source angles, closely examining the region near the optimum. Clearly, a light source angle of ϑl = 45° yielded the best error map, with the optimal region neither elongated, as observed at ϑl = 0°, nor blurred, as at ϑl = 90°. The choice of angles is crucial, as it directly affects the prominence of edge effects. These provide the key information needed to reliably distinguish between scattering and absorption during the fitting process. Thus, for each geometry, the configuration should be selected to maximise the visibility of edge effects. Therefore, the configuration ϑc = ϑl = 45° was adopted for subsequent computations using the cube geometry. We examined the optical properties of three real materials, yellow, red, and blue silicone, which were produced in-house [47], and whose optical properties had been precisely determined using an integrating sphere setup, as described in Ref. [3]. For this part, we selected a simple geometry of a cube with 5 cm sides placed on a grey background mat. Renderings of these virtual cubes, created using the measured properties, are presented in the left column of Figure 4.
Following the procedure described in Section 2, we used the same geometry as the virtual object reference in our MC simulations, iteratively adjusting the scattering and absorption coefficients via the LM algorithm until the cost function converged. Figure 5 displays the best-fit results at each wavelength for all three materials. Additionally, the inset plots show the errors between the measured and fitted values of μ s (left column) and μ a (right column) for these materials. The optical properties of all three materials were recovered very accurately, with errors below 4%. Notably, (i) spectral regions with lower absorption showed less error fluctuation compared to regions of higher absorption, and (ii) larger scattering increases the errors, especially when comparing the materials. Across the spectrum, the blue material exhibits the most scattering, reflected in its slightly larger errors. This is a direct consequence of the diminished edge effects when the material becomes less translucent, providing the fit with less information.
Using these fitted properties, new renderings of the cubes were generated, shown in the middle column of Figure 4. Subsequently, we computed the standard CIE ΔE2000 between the two sets of images, displaying the resulting ΔE maps in the right column of Figure 4. Furthermore, the mean ΔE was calculated for a region of interest (ROI) located at the centre of the cubes’ top and front sides. The resulting values were consistently below 0.5, indicating that the colour differences are imperceptible to a human observer, based on the convention that a colour difference with ΔE < 1 is indistinguishable. These results demonstrate successful recovery of the scattering and absorption coefficients using our intensity curve approach, assuming precise knowledge of the refractive index and anisotropy factor.
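The comparison metric can be reproduced with a standalone scalar implementation of the CIEDE2000 formula [46]; the ΔE maps above apply it per pixel in CIE L*a*b* space. The version below follows the published formulation (chroma compensation G, hue weighting T, and rotation term RT) and is a generic sketch, not the authors’ pipeline code.

```python
import math

def delta_e_2000(lab1, lab2):
    """CIEDE2000 colour difference between two CIE L*a*b* triples."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    Cbar = 0.5 * (C1 + C2)
    G = 0.5 * (1.0 - math.sqrt(Cbar**7 / (Cbar**7 + 25.0**7)))
    a1p, a2p = (1.0 + G) * a1, (1.0 + G) * a2          # chroma-compensated a*
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)
    h1p = math.degrees(math.atan2(b1, a1p)) % 360.0
    h2p = math.degrees(math.atan2(b2, a2p)) % 360.0
    dLp, dCp = L2 - L1, C2p - C1p
    dh = h2p - h1p                                      # hue difference, wrapped to +-180
    if C1p * C2p == 0.0:
        dh = 0.0
    elif dh > 180.0:
        dh -= 360.0
    elif dh < -180.0:
        dh += 360.0
    dHp = 2.0 * math.sqrt(C1p * C2p) * math.sin(math.radians(dh) / 2.0)
    Lbp, Cbp, hsum = 0.5 * (L1 + L2), 0.5 * (C1p + C2p), h1p + h2p
    if C1p * C2p == 0.0:                                # mean hue angle
        hbp = hsum
    elif abs(h1p - h2p) <= 180.0:
        hbp = 0.5 * hsum
    elif hsum < 360.0:
        hbp = 0.5 * (hsum + 360.0)
    else:
        hbp = 0.5 * (hsum - 360.0)
    T = (1.0 - 0.17 * math.cos(math.radians(hbp - 30.0))
             + 0.24 * math.cos(math.radians(2.0 * hbp))
             + 0.32 * math.cos(math.radians(3.0 * hbp + 6.0))
             - 0.20 * math.cos(math.radians(4.0 * hbp - 63.0)))
    d_theta = 30.0 * math.exp(-(((hbp - 275.0) / 25.0) ** 2))
    RC = 2.0 * math.sqrt(Cbp**7 / (Cbp**7 + 25.0**7))
    SL = 1.0 + 0.015 * (Lbp - 50.0) ** 2 / math.sqrt(20.0 + (Lbp - 50.0) ** 2)
    SC = 1.0 + 0.045 * Cbp
    SH = 1.0 + 0.015 * Cbp * T
    RT = -math.sin(math.radians(2.0 * d_theta)) * RC
    return math.sqrt((dLp / SL) ** 2 + (dCp / SC) ** 2 + (dHp / SH) ** 2
                     + RT * (dCp / SC) * (dHp / SH))
```

Averaging this quantity over an ROI gives the mean ΔE values reported for the cube faces.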
Before incorporating the spectral filters and the spectral channels of the virtual camera, we assessed the sensitivity of the fitting method to variations in n and g to evaluate the algorithm’s robustness when these parameters are not precisely known. We perturbed the fit by altering n in 1% increments and varying g in the Henyey–Greenstein phase function from 0.0 to 1.0 in steps of 0.1. These variations were performed in the fit simulations, while the reference simulations were kept at the true values. The measured values of n and g range from n(400 nm) = 1.42 to n(700 nm) = 1.40 and from g(400 nm) = 0.65 to g(700 nm) = 0.71. In Figure 6, we illustrate the fit outcomes for varying n values at a fixed true g for each material. The results indicate that an increase in n systematically overestimates μs′ and underestimates μa, while a decrease in n has the opposite effect. This effect slightly differed based on material, being less pronounced in the higher-scattering blue material compared to the yellow and red materials.
We also assessed the sensitivity to variations in g, keeping n fixed and omitting values close to the true g, cf. Figure 7. Changing the anisotropy factor within the Henyey–Greenstein phase function influences the light propagation and changes the detected reflection, especially in the case of g = 0, which represents isotropic scattering. Similar to variations in n, changes in g produced nearly constant spectral offsets. A decrease in g resulted in underestimated μs′ and μa, whereas an increase in g had the opposite effect. The magnitude of this effect is also related to the average number of scattering events each photon undergoes before detection; fewer scattering events mean that the influence of g is greater. This relationship can be assessed by the ratio μs′/μa, as shown in Figure 7. The larger this ratio, the smaller the offset caused by variations in g becomes; e.g., the regions towards 700 nm for the yellow and red materials, and the blue material overall, showcase this effect.

3.2. Filter Setup

We now extend our fitting method to accommodate realistic light sources and detector sensitivities, as well as the filter setup. As described in Section 2, these filters are chosen with a Gaussian profile, setting the cut-off at the 3σ interval. Thus, for all of the following reference computations, Equation (3) was used. Once more, the fitting procedure was repeated for the three materials with cube geometry. To determine an appropriate operational range, various filter widths, specifically with fwhm values of 1, 10, 20, and 50 nm, were tested to represent common filter specifications. The 1 nm filter served as a baseline check since it approximates the unfiltered scenario.
Figure 8 presents the optimal fitting results obtained for each material and filter width, including the respective relative errors compared to integrating sphere reference measurements. The observed errors increase significantly in spectral regions where the absorption varies sharply within the filter’s bandwidth. This effect arises because our filtered computations (cf. Equation (3)) essentially represent weighted averages. The trend of the optical properties within the filtered interval shifts the averaged value. Consequently, depending mainly on the shape of the reference absorption coefficient curve, reflection can be over- or underestimated. Naturally, broader filters are more sensitive to this effect as wider regions of the spectrum of the optical properties are averaged, e.g., the steep drop of the absorption coefficient of the blue material close to 400 nm produces large errors for the 50 nm filter. In our case, this effect is further exacerbated by the lower limit of the wavelength interval at 400 nm, imposed by focusing only on the visible spectrum. This truncates the weighted averaging interval, introducing an additional source of error, particularly for the 50 nm filter. Such errors would likely not occur in real-world applications when the reference data is recorded with a camera setup. Similar to the variations in n and g, this discrepancy is offset by adjustments to the optical properties during the fitting process. Furthermore, errors increase in regions of high absorption, e.g., close to 400 nm for the yellow material. This is again a consequence of the less pronounced edge effects in these areas. The same effect can also be observed in the larger errors found for the higher-scattering blue material compared to the yellow and red materials. Particularly for the broader 20 nm and 50 nm filters, these errors accumulate, resulting in significant deviations.
As anticipated, narrower filters produce superior results, reducing averaging intervals and consequently diminishing the impact of rapid variations in μa. As before, we compare forward-rendered images of cubes. Using the recovered optical properties fitted with the 10 nm filter and sampled every 10 nm, we obtain the images shown in Figure 9. Again, we observe colour differences in ROIs on the top and front surfaces well below the threshold of 1.

3.3. RGB–Channel Filters

Next, we investigated the results of the fit if a common RGB camera setup was used; in this case, we modelled the Nikon D7500. The intrinsic RGB channels of the camera, shown in Figure 10, were used as filters to recover a set of three μs′ and μa pairs that correspond to the R, G, and B channels. We then weighted the reference simulations with this RGB filter set, i.e., we used Equation (3) but weighted only with colour channels. Additionally, the bounds of the integrals were adapted to reflect the width of the camera channels by limiting the range to the 5% intensity interval marked for each channel in Figure 10.
To compare the output from these computations, we chose wavelengths closest to the maximum of a Gaussian that was fitted to each of the channels, which yields λR = 610 nm, λG = 530 nm, and λB = 470 nm. We note that the widths of the camera channels, estimated by the fwhm values of the fitted Gaussians, are roughly fwhmR ≈ 65 nm, fwhmG ≈ 100 nm, and fwhmB ≈ 75 nm. In Figure 11, we show the results when we compare the RGB filter results for all three materials with the integrating sphere reference at the specified wavelengths. As expected, with such broad filters, the resulting optical properties show significant deviations from the reference, with the largest errors in the green channel, at λG = 530 nm, which is the broadest. Compared with the results of the 50 nm filter at the corresponding wavelengths, shown in Figure 8, the errors are larger, as we average over even broader regions of the spectrum.
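Channel centres and widths of this kind can be estimated numerically. Rather than fitting a Gaussian as done above, the numpy-only sketch below locates the peak and interpolates the half-maximum crossings, which for near-Gaussian channels yields essentially the same fwhm; the synthetic curve in the test is an assumption for illustration, not the measured Nikon D7500 sensitivity.

```python
import numpy as np

def channel_centre_and_fwhm(lam, sens):
    """Estimate the peak wavelength and fwhm of a single-peaked channel curve
    by locating the maximum and linearly interpolating the half-maximum crossings."""
    i_max = int(np.argmax(sens))
    half = 0.5 * sens[i_max]
    idx = np.where(sens >= half)[0]                  # assumes one contiguous peak
    lo, hi = idx[0], idx[-1]

    def cross(i0, i1):
        # wavelength where sens crosses 'half' between samples i0 and i1
        f = (half - sens[i0]) / (sens[i1] - sens[i0])
        return lam[i0] + f * (lam[i1] - lam[i0])

    left = lam[lo] if lo == 0 else cross(lo - 1, lo)
    right = lam[hi] if hi == len(lam) - 1 else cross(hi, hi + 1)
    return lam[i_max], right - left
```

For sensitivity curves with pronounced side lobes, a proper Gaussian fit is the more robust choice; the half-maximum estimate serves only when the channel is close to unimodal.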
Again, we rendered images and compared the resulting colour difference, but this time the forward rendering from the RGB channel results included an interpolation of the limited channel reflectance data compared with our usual spectrally resolved simulation output, described in more detail in Ref. [1]. Since we now only have reflectance information at three wavelengths but still need to integrate over the whole spectrum, the simulated reflectance data was first extrapolated to 400 nm and 700 nm by extending the linear trends between 470 nm and 530 nm (toward 400 nm), and between 530 nm and 610 nm (toward 700 nm). This resulted in a complete spectrum, which was then used, just as before, for image rendering. The renderings and the ΔE values are shown in Figure 12. This time, we obtain significantly larger ΔE values in the marked regions, and the resulting colour appearance of the cubes is different. This is a result of both the significant deviations of the optical properties obtained from the fit with broad filters, cf. Figure 11, and the spectral undersampling of the RGB channel data. The red material in particular looks distinctly different due to the nearly 20% error in the green channel.
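The linear extension of the three-channel reflectance data can be sketched as follows; the function and argument names are illustrative. It extends the 470–530 nm trend down to 400 nm and the 530–610 nm trend up to 700 nm, then interpolates a full spectrum, clamping any negative extrapolated reflectance to zero as a plausible safeguard not stated in the text.

```python
import numpy as np

def expand_rgb_reflectance(r470, r530, r610):
    """Extend reflectances at 470, 530, and 610 nm to a 400-700 nm spectrum
    by linear extrapolation of the outer segments and piecewise-linear interpolation."""
    # blue-green slope extended down to 400 nm
    r400 = r470 + (r530 - r470) / (530.0 - 470.0) * (400.0 - 470.0)
    # green-red slope extended up to 700 nm
    r700 = r610 + (r610 - r530) / (610.0 - 530.0) * (700.0 - 610.0)
    lam_nodes = np.array([400.0, 470.0, 530.0, 610.0, 700.0])
    r_nodes = np.array([r400, r470, r530, r610, r700])
    lam = np.arange(400.0, 701.0, 10.0)
    # reflectance cannot be negative, so clamp the extrapolated tails
    return lam, np.clip(np.interp(lam, lam_nodes, r_nodes), 0.0, None)
```

Such a piecewise-linear spectrum is a coarse stand-in for the true reflectance curve, which is precisely the spectral undersampling that drives the larger ΔE values observed with the RGB channels.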

3.4. Complex Geometry

In this section, we change the geometry from the simple cube to more complex objects. We used some of the 3D models provided by the Stanford Computer Graphics Laboratory and available in the Stanford 3D library [48]: the Stanford Bunny, the Stanford Dragon, and the XYZ Thai Statue, later referred to as bunny, dragon, and statue. Renderings of the dragon are shown in the left columns of Figure 13, while renderings of the bunny and the statue can be found in Figure 14. We used the same three materials as references and performed the steps as before. Each model was fitted with each material, and the results were compared to the reference to show that we can recover the optical properties independent of geometry. Since the geometry is irregular, no illumination or viewing angles are preferable a priori; the illumination was therefore kept at the same angle of ϑl = 45° and the camera angle was chosen to view the front side of the objects at ϑc = 90°, as defined in Section 3.1. In this way, the area of the object we could examine in the simulated frame was maximised. Furthermore, two image lines through the object were taken to fit the optical properties, marked with red dashed lines in the first column images of the renderings.
In Figure 15, we show the recovered optical properties from the three different geometries for the 10 nm filter for all three materials, sampled every 10 nm. We find that all three geometries produce comparable results that do not show larger deviations from the integrating sphere measurements than the cube did before with the same filter. They show the same behaviours and trends as the results of the fit in Figure 8.
Again, with the recovered properties, we render images and compare the colour via the ΔE between reference (left column) and fit (middle columns) renderings, as shown in Figure 13 and Figure 14. We find that the colour difference between reference and fit renderings is small enough that they cannot be distinguished. This time, with the more complex geometry, specular reflections cannot be avoided as they were with the cubes. Furthermore, we observe increased ΔE values for the blue material. As before, this is a direct consequence of the higher scattering of the blue material, which limits the information available from edge effects and results in larger errors in the derived optical properties. In contrast, the more translucent yellow and red materials exhibit more pronounced edge effects, allowing for more effective recovery and differentiation of scattering and absorption. This leads to smaller errors in the optical properties and, consequently, a reduced colour difference.

4. Discussion

We presented, to the best of our knowledge, a novel method that uses edge effects to inversely recover and reliably differentiate the reduced scattering and absorption coefficients of macroscopically homogeneous turbid media in the visible spectral range. Employing an inverse MC method based on a numerical solution of the RTE, we require just a single image as input. For a simple cube geometry and three materials, the recovered optical properties deviate by less than 4% from those used to render the reference images. Additionally, the influence of an incorrectly assumed refractive index and anisotropy factor on the fit outcomes was investigated, enabling estimates of the resulting deviations when these properties are known only within a given uncertainty range. Both overestimation and underestimation of the refractive index and the anisotropy factor shifted the resulting scattering and absorption values, with consistent trends across all wavelengths. The fitting method was then adapted to model the use of real camera images as reference input. To this end, we implemented a range of bandpass filters in the setup to spectrally resolve the recovered optical properties. This adaptation enables our method to be used in future experiments that employ a broadband light source and a conventional RGB camera to capture images of the samples. Furthermore, we demonstrated that the reduced scattering and absorption coefficients can be recovered within the same error range independent of the geometry of the object, exemplified on three completely different 3D models. Finally, forward renderings were computed with the recovered optical properties, and the colour differences were evaluated via the CIE ΔE2000. Based on the colour difference, the renderings produced using both sets of optical properties, preset and fitted with the 10 nm filter, are indistinguishable.
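The inverse step summarised above couples a forward model to an improved Levenberg–Marquardt optimiser [34]. The sketch below shows the basic damped least-squares loop for the two sought parameters; in the paper the forward model is a Monte Carlo rendering of an image line, for which a cheap, purely illustrative analytic profile is substituted here (the profile shape, names, and starting values are ours):

```python
import math

def levenberg_marquardt(forward, y_ref, p0, n_iter=200, lam=1.0):
    """Minimal damped least-squares (Levenberg-Marquardt) loop for two
    parameters with a finite-difference Jacobian. Illustrative only: the
    paper pairs this kind of optimiser with a Monte Carlo forward model."""
    p = list(p0)

    def residuals(q):
        return [f - y for f, y in zip(forward(q), y_ref)]

    r = residuals(p)
    cost = sum(ri * ri for ri in r)
    for _ in range(n_iter):
        # finite-difference Jacobian columns: d r / d p_j
        eps = 1e-6
        cols = []
        for j in range(2):
            q = list(p)
            q[j] += eps
            cols.append([(a - b) / eps for a, b in zip(residuals(q), r)])
        # damped normal equations: (J^T J + lam * diag(J^T J)) dp = -J^T r
        A = [[sum(cols[i][k] * cols[j][k] for k in range(len(r)))
              for j in range(2)] for i in range(2)]
        g = [sum(cols[i][k] * r[k] for k in range(len(r))) for i in range(2)]
        A[0][0] *= 1 + lam
        A[1][1] *= 1 + lam
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        dp = [(-g[0] * A[1][1] + g[1] * A[0][1]) / det,
              (-g[1] * A[0][0] + g[0] * A[1][0]) / det]
        p_new = [p[0] + dp[0], p[1] + dp[1]]
        r_new = residuals(p_new)
        cost_new = sum(ri * ri for ri in r_new)
        if cost_new < cost:          # accept the step, relax damping
            p, r, cost, lam = p_new, r_new, cost_new, lam * 0.5
        else:                        # reject the step, increase damping
            lam *= 2.0
    return p

# Hypothetical stand-in for the forward model: an intensity profile along
# an image line, parameterised by (mus, mua); the functional form is ours.
xs = [0.5 * i for i in range(1, 11)]

def toy_forward(p):
    mus, mua = p
    return [mus / (mus + mua) * math.exp(-mua * x) for x in xs]

y_ref = toy_forward((1.5, 0.1))                      # synthetic "measurement"
fit = levenberg_marquardt(toy_forward, y_ref, p0=(0.5, 0.5))
```

The adaptive damping factor interpolates between Gauss–Newton steps (small lam) and gradient descent (large lam), which is what makes the fit robust to poor starting values for the optical properties.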
We have demonstrated that our method effectively recovers the reduced scattering and absorption coefficients over the examined range, approximately 0.25 mm⁻¹ to 3.5 mm⁻¹ for μs and roughly 10⁻² mm⁻¹ to 2 × 10⁻¹ mm⁻¹ for μa; the method is not limited to this range, suggesting applicability to an even broader set of optical properties. Furthermore, we showed that the method operates independently of the sample geometry, which is particularly beneficial when sample preparation for other common measurement methods is challenging or infeasible. Finally, this approach allows the direct creation of a digital twin of the measured material with the exact shape of the sample. This capability opens up numerous possibilities even when the optical properties are not the primary concern but rather the need to render the object, for example for soft proofing of products that must be evaluated under varying lighting or viewing conditions, or for scaling the object up or down while preserving a realistic appearance in terms of translucency and colour. Another possible application is process control of objects on a slowly moving conveyor line: within the limits set by motion-induced errors in the captured image data, it would be possible to perform quality control of colour appearance, verify desired optical properties, and generate digital twins of the objects on the line.
Future work includes the study of macroscopically heterogeneous objects and the application of the fitting method to a real photo-box spectral-filter setup that mirrors the virtual setup employed here to synthesise the reference data. Subsequently, additional factors such as alignment mismatches, image blur, and surface defects of real samples could be incorporated and evaluated when real images are captured and the optical properties are recovered from this input. Overall, this method provides a versatile and cost-effective solution for the optical characterisation of materials. By combining a robust physics-based rendering model with inverse techniques and rigorous validation under various illumination, filtering, and geometric conditions, our theoretical studies demonstrate that the method can accurately determine optical properties across a wide range of scenarios, including materials of varying degrees of translucency. This, in turn, broadens the method's scope and simplifies its application to experimental material analysis.

Author Contributions

P.N.: Conceptualisation, Methodology, Software, Formal Analysis, Data Curation, Writing—Original Draft. D.H.: Software, Validation, Writing—Review and Editing. F.F.: Resources, Data Curation. A.K.: Conceptualisation, Supervision, Validation, Writing—Review and Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) within the Promotion of Joint Industrial Research Programme (IGF) due to a decision of the German Bundestag. It was part of the research project (01IF23188N) by the Association for Research in Precision Mechanics, Optics and Medical Technology (F.O.M.) under the auspices of the DLR Projektträger (DLR-PT).

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hevisov, D.; Foschum, F.; Wagner, M.; Kienle, A. Physically accurate rendering of translucent objects. Opt. Express 2025, 33, 22791–22804. [Google Scholar] [CrossRef]
  2. Kissel, A.; Nguyen, P.; Hevisov, D.; Foschum, F.; Kienle, A. Optical property-based rendering of 3D prints. Opt. Express 2025, 33, 15187–15206. [Google Scholar] [CrossRef] [PubMed]
  3. Foschum, F.; Bergmann, F.; Kienle, A. Precise determination of the optical properties of turbid media using an optimized integrating sphere and advanced Monte Carlo simulations. Part 1: Theory. Appl. Opt. 2020, 59, 3203–3215. [Google Scholar] [CrossRef]
  4. Bergmann, F.; Foschum, F.; Zuber, R.; Kienle, A. Precise determination of the optical properties of turbid media using an optimized integrating sphere and advanced Monte Carlo simulations. Part 2: Experiments. Appl. Opt. 2020, 59, 3216–3226. [Google Scholar] [CrossRef]
  5. Nothelfer, S.; Foschum, F.; Kienle, A. Goniometer for determination of the spectrally resolved scattering phase function of suspended particles. Rev. Sci. Instrum. 2019, 90, 083110. [Google Scholar] [CrossRef] [PubMed]
  6. Pickering, J.W.; Prahl, S.A.; van Wieringen, N.; Beek, J.F.; Sterenborg, H.J.C.M.; van Gemert, M.J.C. Double-integrating-sphere system for measuring the optical properties of tissue. Appl. Opt. 1993, 32, 399–410. [Google Scholar] [CrossRef] [PubMed]
  7. Nelson, N.B.; Prézelin, B.B. Calibration of an integrating sphere for determining the absorption coefficient of scattering suspensions. Appl. Opt. 1993, 32, 6710–6717. [Google Scholar] [CrossRef]
  8. Simpson, C.R.; Kohl, M.; Essenpreis, M.; Cope, M. Near-infrared optical properties of ex vivo human skin and subcutaneous tissues measured using the Monte Carlo inversion technique. Phys. Med. Biol. 1998, 43, 2465. [Google Scholar] [CrossRef]
  9. Terán, E.; Méndez, E.R.; Quispe-Siccha, R.; Peréz-Pacheco, A.; Cuppo, F.L.S. Application of single integrating sphere system to obtain the optical properties of turbid media. OSA Contin. 2019, 2, 1791–1806. [Google Scholar] [CrossRef]
  10. Frisvad, J.R.; Jensen, S.A.; Madsen, J.S.; Correia, A.; Yang, L.; Gregersen, S.K.S.; Meuret, Y.; Hansen, P.E. Survey of models for acquiring the optical properties of translucent materials. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2020; Volume 39, pp. 729–755. [Google Scholar]
  11. Van Veen, R.L.; Sterenborg, H.J.; Pifferi, A.; Torricelli, A.; Chikoidze, E.; Cubeddu, R. Determination of visible near-IR absorption coefficients of mammalian fat using time-and spatially resolved diffuse reflectance and transmission spectroscopy. J. Biomed. Opt. 2005, 10, 054004. [Google Scholar] [CrossRef]
  12. Bal, G. Inverse transport theory and applications. Inverse Probl. 2009, 25, 053001. [Google Scholar] [CrossRef]
  13. Gkioulekas, I.; Levin, A.; Zickler, T. An evaluation of computational imaging techniques for heterogeneous inverse scattering. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part III 14. Springer: Berlin/Heidelberg, Germany, 2016; pp. 685–701. [Google Scholar]
  14. Narasimhan, S.G.; Gupta, M.; Donner, C.; Ramamoorthi, R.; Nayar, S.K.; Jensen, H.W. Acquiring scattering properties of participating media by dilution. In ACM SIGGRAPH 2006 Papers; Association for Computing Machinery: New York, NY, USA, 2006; pp. 1003–1012. [Google Scholar]
  15. Fuchs, C.; Chen, T.; Goesele, M.; Theisel, H.; Seidel, H.P. Density estimation for dynamic volumes. Comput. Graph. 2007, 31, 205–211. [Google Scholar] [CrossRef]
  16. Dong, B.; Moore, K.D.; Zhang, W.; Peers, P. Scattering parameters and surface normals from homogeneous translucent materials using photometric stereo. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2291–2298. [Google Scholar]
  17. Papas, M.; Regg, C.; Jarosz, W.; Bickel, B.; Jackson, P.; Matusik, W.; Marschner, S.; Gross, M. Fabricating translucent materials using continuous pigment mixtures. ACM Trans. Graph. (TOG) 2013, 32, 1–12. [Google Scholar] [CrossRef]
  18. Pranovich, A.; Hannemose, M.R.; Jensen, J.N.; Tran, D.M.; Aanæs, H.; Gooran, S.; Nyström, D.; Frisvad, J.R. Digitizing the appearance of 3D printing materials using a spectrophotometer. Sensors 2024, 24, 7025. [Google Scholar] [CrossRef] [PubMed]
  19. Elek, O.; Zhang, R.; Sumin, D.; Myszkowski, K.; Bickel, B.; Wilkie, A.; Křivánek, J.; Weyrich, T. Robust and practical measurement of volume transport parameters in solid photo-polymer materials for 3D printing. Opt. Express 2021, 29, 7568–7588. [Google Scholar] [CrossRef]
  20. Elek, O.; Sumin, D.; Zhang, R.; Weyrich, T.; Myszkowski, K.; Bickel, B.; Wilkie, A.; Krivanek, J. Scattering-aware texture reproduction for 3D printing. ACM Trans. Graph. 2017, 36, 241. [Google Scholar] [CrossRef]
  21. Iser, T.; Rittig, T.; Nogué, E.; Nindel, T.K.; Wilkie, A. Affordable spectral measurements of translucent materials. ACM Trans. Graph. (TOG) 2022, 41, 1–13. [Google Scholar] [CrossRef]
  22. Gkioulekas, I.; Zhao, S.; Bala, K.; Zickler, T.; Levin, A. Inverse volume rendering with material dictionaries. ACM Trans. Graph. (TOG) 2013, 32, 1–13. [Google Scholar] [CrossRef]
  23. Chen, Z.; Guo, J.; Lai, S.; Fu, R.; Kong, M.; Wang, C.; Sun, H.; Zhang, Z.; Li, C.; Guo, Y. Practical measurements of translucent materials with inter-pixel translucency prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 20932–20942. [Google Scholar]
  24. Deng, X.; Luan, F.; Walter, B.; Bala, K.; Marschner, S. Reconstructing translucent objects using differentiable rendering. In Proceedings of the ACM SIGGRAPH 2022 Conference Proceedings, Vancouver, BC, Canada, 7–11 August 2022; pp. 1–10. [Google Scholar]
  25. Azinovic, D.; Li, T.M.; Kaplanyan, A.; Nießner, M. Inverse path tracing for joint material and lighting estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2447–2456. [Google Scholar]
  26. Levis, A.; Schechner, Y.Y.; Aides, A.; Davis, A.B. Airborne three-dimensional cloud tomography. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3379–3387. [Google Scholar]
  27. Leonard, L.; Westermann, R. Image-based reconstruction of heterogeneous media in the presence of multiple light-scattering. Comput. Graph. 2024, 119, 103877. [Google Scholar] [CrossRef]
  28. Khan, M.S.U.; Pagani, A.; Liwicki, M.; Stricker, D.; Afzal, M.Z. Three-dimensional reconstruction from a single RGB image using deep learning: A review. J. Imaging 2022, 8, 225. [Google Scholar] [CrossRef]
  29. Ding, D.; Sun, J. 3-D shape measurement of translucent objects based on fringe projection. IEEE Sens. J. 2023, 24, 3172–3179. [Google Scholar] [CrossRef]
  30. Feng, S.; Zhang, L.; Zuo, C.; Tao, T.; Chen, Q.; Gu, G. High dynamic range 3d measurements with fringe projection profilometry: A review. Meas. Sci. Technol. 2018, 29, 122001. [Google Scholar] [CrossRef]
  31. Xu, Y.; Zhao, H.; Jiang, H.; Li, X. High-accuracy 3D shape measurement of translucent objects by fringe projection profilometry. Opt. Express 2019, 27, 18421–18434. [Google Scholar] [CrossRef] [PubMed]
  32. Feng, S.; Zuo, C.; Zhang, L.; Tao, T.; Hu, Y.; Yin, W.; Qian, J.; Chen, Q. Calibration of fringe projection profilometry: A comparative review. Opt. Lasers Eng. 2021, 143, 106622. [Google Scholar] [CrossRef]
  33. Martelli, F.; Binzoni, T.; Del Bianco, S.; Liemert, A.; Kienle, A. Light Propagation Through Biological Tissue and Other Diffusive Media: Theory, Solutions, and Validations; SPIE Press: Bellingham, WA, USA, 2022. [Google Scholar]
  34. Transtrum, M.K.; Sethna, J.P. Improvements to the Levenberg-Marquardt algorithm for nonlinear least-squares minimization. arXiv 2012, arXiv:1201.5885. [Google Scholar]
  35. Tao, F.; Xiao, B.; Qi, Q.; Cheng, J.; Ji, P. Digital twin modeling. J. Manuf. Syst. 2022, 64, 372–389. [Google Scholar] [CrossRef]
  36. Haag, S.; Anderl, R. Digital twin–Proof of concept. Manuf. Lett. 2018, 15, 64–66. [Google Scholar] [CrossRef]
  37. Pranovich, A. Modelling Appearance Printing: Acquisition and Digital Reproduction of Translucent and Goniochromatic Materials; Linkopings Universitet: Linköping, Sweden, 2024. [Google Scholar]
  38. Patil, R.A.; Fairchild, M.D.; Johnson, G.M. 3D simulation of prints for improved soft proofing. In Proceedings of the Color and Imaging Conference, Aachen, Germany, 5–8 April 2004; Society of Imaging Science and Technology: Springfield, VA, USA, 2004; Volume 12, pp. 193–199. [Google Scholar]
  39. Liemert, A.; Reitzle, D.; Kienle, A. Analytical solutions of the radiative transport equation for turbid and fluorescent layered media. Sci. Rep. 2017, 7, 3819. [Google Scholar] [CrossRef]
  40. Liemert, A.; Geiger, S.; Kienle, A. Solutions for single-scattered radiance in the semi-infinite medium based on radiative transport theory. J. Opt. Soc. Am. A 2021, 38, 405–411. [Google Scholar] [CrossRef] [PubMed]
  41. Reitzle, D.; Geiger, S.; Liemert, A.; Kienle, A. Semianalytical solution for the transient temperature in a scattering and absorbing slab consisting of three layers heated by a light source. Sci. Rep. 2021, 11, 8424. [Google Scholar] [CrossRef]
  42. Geiger, S.; Liemert, A.; Reitzle, D.; Bijelic, M.; Ramazzina, A.; Ritter, W.; Heide, F.; Kienle, A. Single scattering models for radiative transfer of isotropic and cone-shaped light sources in fog. Opt. Express 2023, 31, 125–142. [Google Scholar] [CrossRef] [PubMed]
  43. Henyey, L.G.; Greenstein, J.L. Diffuse radiation in the galaxy. Astrophys. J. 1941, 93, 70–83. [Google Scholar] [CrossRef]
  44. Fresnel, A.J. Mémoire sur la loi des Modifications que la Réflexion Imprime à la Lumière Polarisée; De l’Imprimerie De Firmin Didot Fréres: Paris, France, 1834. [Google Scholar]
  45. Metropolis, N.; Ulam, S. The monte carlo method. J. Am. Stat. Assoc. 1949, 44, 335–341. [Google Scholar] [CrossRef]
  46. Sharma, G.; Wu, W.; Dalal, E.N. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Res. Appl. 2005, 30, 21–30. [Google Scholar] [CrossRef]
  47. Wagner, M.; Fugger, O.; Foschum, F.; Kienle, A. Development of silicone-based phantoms for biomedical optics from 400 to 1550 nm. Biomed. Opt. Express 2024, 15, 6561–6572. [Google Scholar] [CrossRef] [PubMed]
  48. The Stanford 3D Scanning Repository. 2025. Available online: https://graphics.stanford.edu/data/3Dscanrep/ (accessed on 7 March 2025).
Figure 1. Schematic representation of the virtual setup used in this work. The green spot marks the position of the middle of the light source (L) with its spherical coordinates (dl, ϑl, φl). The red spot marks the position of the camera's aperture (C) with spherical coordinates (dC, ϑC, φC). The orange rectangle represents the spectral filter in the light path towards the detector. The normals of the light source and of the camera point towards the origin, located at the middle of the top surface of the object.
Figure 2. Error maps for varying camera angle ϑc and light source angle ϑl, showing optimisation errors for the scattering (μs) and absorption (μa) coefficients. Possible optimal conditions are identified at ϑc = 45°.
Figure 3. Detailed error maps at a fixed camera angle ϑc = 45°, showing the effects of varying the light source angle ϑl. The optimal error environment is found at ϑl = 45°, characterised by minimal elongation and blur near the global minimum.
Figure 4. Comparison of renderings with measured (left column) and fitted (middle column) optical properties, and resulting standard CIE Δ E 2000 error maps (right column) for virtual cubes. The dashed red line shows the image line that was used to fit the optical properties. The red and cyan squares mark the regions in which the averaged Δ E -values, shown next to the squares, were computed.
Figure 5. Best-fit results across wavelengths (orange crosses) for reduced scattering ( μ s ) and absorption ( μ a ) coefficients for yellow (top row), red (middle row), and blue (bottom row) materials. The solid blue lines represent the measured μ s and μ a obtained with an integrating sphere. Insets display percentage errors between measured and fitted values.
Figure 6. Influence of the refractive index on the fitted reduced scattering (left column) and absorption coefficients (right column) at a fixed anisotropy factor for yellow (top row), red (middle row), and blue (bottom row) silicone materials, compared with those obtained using the reference refractive index (black lines). The red inset plots in the (right column) showcase the values for the regions with low absorption.
Figure 7. Influence of the anisotropy factor on the fitted reduced scattering (left column) and absorption coefficients (right column) at a fixed refractive index for yellow (top row), red (middle row), and blue (bottom row) silicone materials, compared with those obtained using the reference anisotropy factor (black lines). The red inset plots in the (right column) showcase the values for the regions with low absorption.
Figure 8. Fitted optical properties, sampled every 10 nm, for the three materials yellow (left column), red (middle column), and blue (right column) across different filter FWHM values. The (top row) shows the reduced scattering coefficient μs, the (middle row) presents the absorption coefficient μa, and the (bottom row) displays relative differences compared to reference measurements, with solid lines representing differences in μs and dashed lines for μa.
Figure 9. Comparison of renderings with measured (left column) and fitted (middle column) optical properties, obtained with the 10 nm filter, and resulting standard CIE Δ E 2000 error maps (right column) for virtual cubes. The dashed red line shows the image line that was used to fit the optical properties. The red and cyan squares mark the regions in which the averaged Δ E -values, shown next to the squares, were computed.
Figure 10. Measured normalised spectral responses of the camera colour channels of a Nikon D7500. The dashed lines mark the relevant intervals that were used in Section 3.3 as bounds. Additionally, the fitted Gaussian curve for each channel is shown. The peak wavelength of each curve, defined as the wavelength at which the fitted Gaussian reaches its maximum, was determined to be λR = 610 nm, λG = 530 nm, and λB = 470 nm.
Figure 11. Recovered reduced scattering μs and absorption μa coefficients and relative errors from the RGB camera channel setup compared to the integrating sphere measurements at λR = 610 nm, λG = 530 nm, and λB = 470 nm. The properties for each material are shown in its respective colour.
Figure 12. Comparison of rendered virtual cubes using integrating sphere measurements (left column) and fitted (middle column) optical properties obtained with the RGB filter setup. The resulting standard CIE Δ E 2000 error maps are displayed in the (right column). The dashed red line shows the image line that was used to fit the optical properties. The red and cyan squares mark the regions in which the averaged Δ E , shown next to the squares, was computed.
Figure 13. Comparison of renderings of the dragon using integrating sphere measurements (left column) and fitted (middle column) optical properties obtained with the 10 nm filter. The resulting standard CIE ΔE2000 error maps are displayed in the (right column). The dashed red lines mark the image lines through the object that were used for fitting the optical properties.
Figure 14. Comparison of renderings of the bunny and statue using integrating sphere measurements (left column) and fitted (middle column) optical properties obtained with the 10 nm filter. The resulting standard CIE ΔE2000 error maps are displayed in the (right column). The dashed red lines mark the image lines through the object that were used for fitting the optical properties.
Figure 15. Recovered reduced scattering μ s (top row) and absorption μ a (middle row) coefficients from the fits of the three different geometries dragon, bunny, and statue. Each material (yellow: left column, red: middle column, blue: right column) was fitted with each geometry with the 10 nm filter. Additionally, the results from the fit with the cube geometry for the same filter are shown. The bottom row displays relative differences compared to reference measurements, with solid lines representing differences in μ s and dashed lines for μ a .
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Nguyen, P.; Hevisov, D.; Foschum, F.; Kienle, A. Recovering the Reduced Scattering and Absorption Coefficients of Turbid Media from a Single Image. Photonics 2025, 12, 1118. https://doi.org/10.3390/photonics12111118

