Implementing Many-Lights Rendering with IES-Based Lights

In recent years, many research projects on real-time rendering have focused on the introduction of global illumination effects in order to improve the realism of a virtual scene. The main goal of these works is to find a compromise between the achievable quality of the resulting rendering and the intrinsic high computational cost of global illumination. An established approach is based on the use of Virtual Point Lights, i.e., "fictitious" light sources that are placed on surfaces in the scene. These lights simulate the contribution of light rays emitted by the light sources and bouncing on different objects. Techniques using Virtual Point Lights are often called Many-Lights techniques. In this paper, we propose an extension of a real-time Many-Lights rendering technique characterized by the integration of photometric data in the process of Virtual Point Lights distribution. We base the definition of light sources and the creation of Virtual Point Lights on the description provided in the standard IES format, created by the Illuminating Engineering Society (IES).


Introduction
Global illumination (GI) is one of the most investigated topics in computer graphics. GI takes into account the light reaching a surface directly from the light sources (direct illumination) and also the light reflected or refracted by other surfaces in the scene (indirect illumination). Over the years, several approaches have been proposed in order to introduce highly realistic simulations of GI in the rendering of a virtual scene.
The rendering process is based on the simulation of the interaction between light and the surfaces in a scene. For a uniform non-emitting surface, the radiance reflected at a point x is provided by the rendering equation [1]:

L_o(x, ω_o) = ∫_Ω f_r(x, ω_i, ω_o) L_i(x, ω_i) (n · ω_i) dω_i    (1)

where
• n is the surface normal;
• Ω is the hemisphere centered around n;
• ω_i and ω_o are the directions of incoming and reflected light, respectively;
• L_i(x, ω_i) and L_o(x, ω_o) are the incoming and the reflected radiances, respectively;
• f_r(x, ω_i, ω_o) is the Bidirectional Reflectance Distribution Function (BRDF), which describes the amount of radiance reflected in the direction ω_o given the radiance coming from direction ω_i.
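For a purely diffuse (Lambertian) surface, the BRDF reduces to the constant albedo/π, and the integrand of the rendering equation becomes easy to evaluate for a single incoming direction. The following C++ sketch (our illustration, not code from the paper) computes that contribution:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Minimal vector type for the example.
struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Integrand of the rendering equation for a Lambertian surface:
// f_r * L_i * max(0, n . w_i), with f_r = albedo / pi.
// n and wi are assumed to be normalized.
double lambertIntegrand(double albedo, double incomingRadiance,
                        const Vec3& n, const Vec3& wi) {
    const double kPi = 3.14159265358979323846;
    const double fr = albedo / kPi;                     // diffuse BRDF
    const double cosTheta = std::max(0.0, dot(n, wi));  // clamp below-horizon light
    return fr * incomingRadiance * cosTheta;
}
```

For example, radiance π arriving along the normal of a perfectly white surface yields a reflected contribution of exactly 1; light arriving from below the surface contributes nothing.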
Every GI technique in the literature represents an approximation, at different levels of accuracy, of Equation (1). Techniques related to Physically Based Rendering (PBR) [2] are usually based on variants of ray tracing approaches and Monte Carlo methods, which can achieve very high levels of photorealism at the expense of high computational costs, thus limiting their full application to offline photorealistic rendering.
In the context of real-time rendering, the inclusion of GI effects is particularly complex due to the time constraints of the rendering process. The standard approaches usually consider a limited set of GI effects (e.g., color bleeding, shadows, and reflections) and/or decouple the computation of GI from the actual main rendering process, which focuses only on the direct illumination component. The most common techniques rely on screen-based algorithms and the precomputation (baking) of shading data in texture maps to be used in the actual rendering process [3]. The recent availability of real-time ray tracing graphics hardware is slowly allowing the introduction of techniques that were limited only to offline rendering. However, the current capabilities of this new generation of graphics cards are still far from supporting a full ray tracing approach, forcing the use of hybrid solutions still based on screen-based techniques, combined with highly sophisticated and effective denoising algorithms [4,5].
With Many-Lights rendering [6], we refer to a family of rendering techniques that are based on the placement of several Virtual Point Lights (VPLs) in the scene. The VPLs' positions correspond to the intersection points of rays traced from the light sources and bouncing on the surfaces in the environment. In this way, the indirect illumination is approximated using the direct contribution of all the light sources in the scene (both the actual light sources and the VPLs). As reported in [7], this class of techniques enables efficient computation of GI, providing results comparable to path tracing methods but with lower rendering times. Several extensions and improvements were proposed in the literature starting from this original idea, which was also applied successfully in the context of real-time rendering.
In this paper, we analyze the state of the art of Many-Lights techniques for real-time rendering, and we investigate the feasibility of integrating photometric data in the process of generating the VPLs. We base the definition of the light sources and the creation of the VPLs on photometric data provided in the standard IES format proposed by the Illuminating Engineering Society. In particular, our work extends the Reflective Shadow Maps technique [8] so that any kind of light source described through a well-formed IES file can be used to illuminate a virtual environment. The technique generates an "enhanced" version of the original Reflective Shadow Map, associated with an IES-based light source. During the rendering step, each pixel of the map can be viewed as a VPL and used to compute its contribution to the overall lighting. The proposed approach, as in the original work [8], focuses on diffuse reflections. Following the standard Many-Lights approach, gathering and normalizing the illumination contributions from N samples offers a reasonable approximation of color bleeding effects in the final rendering.
To our knowledge, there are no other examples of works that integrate a photometric description of light sources into a Many-Lights rendering technique.While IES-based lights are currently supported by several game engines, they are actually used to "weight" only the intensity of the emission component (thus mainly affecting the direct illumination component), and they are not actively considered in the computation of GI effects.
This paper is structured as follows. In Section 2, we provide an overview of GI models based on Many-Lights techniques. An extensive explanation is reserved for the Reflective Shadow Maps [8] technique, on which our work is based. In Section 3, we describe the photometric data contained in IES files, as defined by the Illuminating Engineering Society. Section 4 describes our proposed technique and how photometric data can be used to characterize a light source in a real-time rendering pipeline. In Section 5, we present the results of the proposed method on two test scenes. Finally, Section 6 draws conclusions and presents future work.

State of the Art on Many-Lights Rendering Techniques
The concept at the basis of Many-Lights techniques is quite straightforward. The rendering process is split into two steps: the first step consists of the generation of a certain amount of VPLs in the scene. This process can follow established approaches for the generation of light transport paths, such as the photon tracing pass in photon mapping [9] or other Monte Carlo methods [2]. In the Many-Lights approach, every time a light ray bounces on a surface in the scene, information related to the intersection point is saved: position x, normal n, incident direction ω_i, incoming radiance L_i(x, ω_i) (appropriately reduced at each bounce), and the bidirectional reflectance distribution function f_r(x, ω_i, ω_o). This is the core intuition of Many-Lights techniques: these data are sufficient to compute the irradiance coming from each point of the light path. As a consequence, each point of a light path can be considered as a virtual light source, contributing to the overall illumination of the scene. Thus, the Many-Lights approach redefines the classical concept of light transport paths used by other GI models [2] and enables dealing with GI without using a different technique for each component.
In the second step, the irradiance at each pixel is approximated by calculating the sum of the direct illumination contributions from all the light sources in the scene, i.e., the "actual" light sources and also the set of generated VPLs. Using as an example the seminal Instant Radiosity [10] method, the radiance reflected by a point x_1 towards the camera in x_0 is provided by:

L(x_1 → x_0) ≈ f_r(x_1) Σ_k Φ_k G(x_1, x_k) V(x_1, x_k)    (2)

where
• V(x_1, x_k) is the visibility term between the k-th VPL and the point we are shading;
• f_r(x_1) is the BRDF of point x_1 (we use a simplified notation because only diffuse reflection is considered);
• G(x_1, x_k) is the geometry term between the k-th VPL and the point we are shading;
• Φ_k is the "flux" of the k-th VPL (i.e., the component of radiance "emitted" by the k-th VPL).
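The gathering step summarized by Equation (2) can be sketched in a few lines of C++. The `Vpl` struct and function names below are our illustration, not code from the paper; the visibility term is reduced to a precomputed flag standing in for an actual shadow test:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct V3 { double x, y, z; };
double dot3(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
V3 sub(const V3& a, const V3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// One virtual point light: position, (normalized) normal, scalar flux.
struct Vpl {
    V3 position;
    V3 normal;
    double flux;
};

// Radiance at point x (normal n, diffuse BRDF value fr) gathered from all
// VPLs: sum of fr * flux * geometry term * visibility, as in Equation (2).
double gatherVpls(const V3& x, const V3& n, double fr,
                  const std::vector<Vpl>& vpls,
                  const std::vector<int>& visible) {
    double radiance = 0.0;
    for (std::size_t k = 0; k < vpls.size(); ++k) {
        if (!visible[k]) continue;                  // V(x, x_k) = 0
        V3 d = sub(vpls[k].position, x);            // from x towards the VPL
        double dist2 = dot3(d, d);
        double invLen = 1.0 / std::sqrt(dist2);
        V3 wi = {d.x * invLen, d.y * invLen, d.z * invLen};
        double cosAtX   = std::max(0.0, dot3(n, wi));
        double cosAtVpl = std::max(0.0, -dot3(vpls[k].normal, wi));
        double G = cosAtX * cosAtVpl / dist2;       // geometry term G(x, x_k)
        radiance += fr * vpls[k].flux * G;
    }
    return radiance;
}
```

A single VPL facing the shading point at unit distance, with unit flux and fr = 1, contributes exactly 1; flipping its visibility flag removes the contribution entirely.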
A wide range of derived algorithms has been devised starting from this idea, each one focusing on different needs with regard to performance or image quality. As an example, some techniques [11][12][13] focused on solving bias issues in the original formulation of Instant Radiosity, for example by optimizing or redefining the generation process of the VPLs.
Many-Lights techniques scale according to the number of VPLs that are considered. A few light sources can roughly approximate the illumination of a scene, producing biased but artifact-free images in a short rendering time, as demonstrated in [14,15]. On the other hand, using thousands of VPLs and evaluating them in an efficient and scalable way allows rendering of images with little or no bias. Dachsbacher et al. [7] provide a large overview of scalable algorithms developed in order to solve this challenge. Some examples are Lightcuts [16,17] or Matrix Row-Column sampling [18].
Moreover, the intrinsic scalability of Many-Lights techniques also makes them often suitable for GPU parallelization. While current GPUs are able to process any number of light sources, Many-Lights techniques in real-time applications are usually applied in a deferred rendering [19] pipeline, considering just a single light bounce. Some techniques focus on the optimization of the interleaved sampling [20] approach, which uses disjoint subsets of VPLs to compute illumination for adjacent pixels, making the processing faster by organizing data in GPU-friendly structures [21] and addressing issues that led to aliasing artifacts and problems with glossy materials [22]. Other examples are techniques that shade "hierarchically" according to the level of geometric detail in the processed portion of the scene [23], or subdivide the image into tiles, each having a subset of VPLs employed in the shading process [24,25]. Visibility computation is another field researched in order to speed up Many-Lights methods [26]: determining whether or not a shading point can see a VPL can save rendering time. An example is the Imperfect Shadow Maps method [15,27].

Reflective Shadow Maps
We provide here a description of the Reflective Shadow Maps [8] method, which we have extended in this paper.
The technique is based on the consideration that all one-bounce indirect illumination for a light source is related to the surfaces "visible" from the light source during the generation of a standard shadow map. Thus, each pixel of a light source's shadow map can be considered a virtual light source for the rest of the scene, and it can simulate the bounce of light on that surface. On such a premise, virtual light generation corresponds to rasterization from the point of view of the light source.
A Reflective Shadow Map (RSM), as described in [8], stores for each pixel p:
• a depth value d_p, as in a standard shadow map;
• the world-space position of the pixel, x_p;
• the world-space normal, n_p;
• the reflected radiant flux Φ_p, which corresponds to the amount of luminous power leaving the surface.
The technique follows a standard deferred rendering approach, saving each of these data in different dedicated G-buffers in a first render step and finalizing the rendering process in a final second step.
The following equation describes how the virtual light source identified by the pixel p in the RSM illuminates the surface point x having a normal n:

E_p(x, n) = Φ_p · max(0, ⟨n_p | x − x_p⟩) · max(0, ⟨n | x_p − x⟩) / ||x − x_p||⁴    (3)

Adding up the contributions of each pixel/virtual light allows the computation of the indirect lighting received by the point. However, the generated RSM is moderately large (512 × 512 pixels/virtual lights), and summing over all of its pixels would not be compatible with a real-time rendering context. To reduce the number of sums, an importance sampling approach is applied in order to concentrate the sampling of the RSM on a subset of relevant lights. The approach is based on the consideration that the closer a virtual light source p is to the point x, the more it will contribute to its illumination. The distance between two points in image space can be considered a reasonable approximation of their distance in world space; thus, the point x is projected to image space, and the texture is then sampled using Poisson disk sampling.
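As a minimal illustration (ours, not the paper's code), the per-pixel transfer of Equation (3) and the density-compensated image-space sampling offsets described in [8] can be written as:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct P3 { double x, y, z; };
double dotp(const P3& a, const P3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
P3 subp(const P3& a, const P3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Irradiance that RSM pixel p (position xp, normal np, flux phi) transfers
// to surface point x with normal n. The 1/||x - xp||^4 factor combines the
// squared-distance falloff with the normalization of the two un-normalized
// direction vectors inside the clamped dot products.
double rsmPixelContribution(const P3& xp, const P3& np, double phi,
                            const P3& x, const P3& n) {
    P3 d = subp(x, xp);                       // x_p -> x (un-normalized)
    double dist2 = dotp(d, d);
    double a = std::max(0.0, dotp(np, d));    // <n_p | x - x_p>
    double b = std::max(0.0, -dotp(n, d));    // <n | x_p - x>
    return phi * a * b / (dist2 * dist2);
}

// Image-space sampling offset around the projection of x, as in [8]:
// (r*xi1*sin(2*pi*xi2), r*xi1*cos(2*pi*xi2)), with each sample weighted by
// xi1^2 to compensate for the higher sample density near the center.
void rsmSampleOffset(double r, double xi1, double xi2,
                     double& du, double& dv, double& weight) {
    const double twoPi = 6.28318530717958647692;
    du = r * xi1 * std::sin(twoPi * xi2);
    dv = r * xi1 * std::cos(twoPi * xi2);
    weight = xi1 * xi1;
}
```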
The technique as presented in [8] considers only diffuse surfaces, and it is applied to directional lights and spotlights only, which can be treated using standard 2D texture images due to the strong directionality of these kinds of light sources. The authors suggested that the technique could be extended to consider omnidirectional lights like point lights. In our proposed approach, we present an extension of the RSM technique characterized by the use of cube maps to manage the omnidirectional emission of a point light source.

Photometric Description of Light Sources
Light sources in computer graphics are usually limited to extremely simplified approximations of real light sources. Point lights, directional lights, and spotlights are expressed just using a position in world space, with no dimension and with simple types of light emission. The only type of virtual light source that has an actual dimension is the area light. However, area lights are computationally expensive, thus limiting their actual use to offline rendering. In real-time applications, illumination data from area lights are usually prebaked in light maps and used during the rendering process through texture sampling operations. Another texture-based approach commonly used in real-time rendering to introduce complex illumination features is Image-Based Lighting [28].
Real light sources may have complex emission characteristics. Indeed, light emitters may have different dimensions, and they may emit light with different intensities in different directions. The IES file format [29], proposed by the Illuminating Engineering Society [30], stores photometric data measured from real light sources (luminaires). IES files are largely used in the lighting design field and in PBR rendering engines [2], where they are used as input in the importance sampling process of Monte Carlo methods. In recent years, they have also been used in real-time applications; examples are the integrations in the Frostbite [31,32] and Unreal [33] game engines. However, IES data are usually applied in game engines only as a sort of "mask" assigned to point lights in order to weight the intensity of light emission around the point light source in a non-uniform way. This process leads to a more realistic rendering of the surfaces close to the light source, which are primarily affected by the direct illumination component. However, the role of IES photometric data is usually not actively considered in the other processes related to the computation of GI effects.
In practice, an IES file is an ASCII file, which contains information about a specific luminaire, and measured photometric data.Listing 1 shows the content of an IES file we have used in our test scenes.The content can be divided into three parts:

• Information about the luminaire product (lines 1 to 10): a list of metadata providing information about the specific luminaire (e.g., model number, manufacturer, destination of use, model of the lamp(s) and ballast, type of mounting, etc.). The list of keywords is not fixed in type, order, or number of elements; thus, IES files may differ in this section. However, this section is not used during the rendering process.

Figure 2 shows an example of visualization of the content of the IES file in Listing 1. The orientation of the luminaire's axes with respect to those of the photometric web is provided by the value of the Photometric Type (the sixth value in line 11 of Listing 1). In this paper, we consider the most common case, where the Photometric Type corresponds to Type C luminaires (=1 in the IES file format), i.e., architectural and roadway luminaires. For this kind of luminaire, the polar axis of the photometric web coincides with the vertical axis of the luminaire. Finally, notice that, even if the actual luminaire may have multiple lamps (like the luminaire described in Listing 1), the results are expressed considering the light as emitted from a single position in space, thus making the data perfectly compatible with the point light sources used in CG. A more comprehensive description of the IES file format can be found in [29].

Method
The main goal of the paper is to investigate the feasibility of the introduction of photometric-based light sources in a real-time Many-Lights technique, thus considering the management and the effect of a non-uniform emission in the computation of both direct and indirect illumination components.
In this section, we describe the main contributions of the paper. In particular:
• we implemented a parser for the IES file format, which creates a polygon mesh representing the photometric solid of a luminaire. This mesh is then used in the rendering process to influence the direct and indirect illumination components;
• we extended the Reflective Shadow Maps technique [8] using cube maps in order to address the omnidirectional emission of point lights. The original technique as proposed in [8] considers only directional lights, using a standard 2D texture image to store the RSM data;
• we weighted the emission from the VPLs on the basis of the parsed data from the IES files. This affects not only the direct illumination component (leading to a more realistic rendering of the area and surfaces closer to the light source) but also the indirect illumination component, because the placement of the VPLs is tuned by the actual volume of emission, which is no longer perfectly spherical but described by the data from a real luminaire (Figure 2).

Implementation Details
The project was implemented in C++20 using the OpenGL API and the GLSL shading language on a Windows machine equipped with an Intel® Core i7-8750H CPU @ 2.20 GHz, 16.0 GB RAM, and an NVIDIA GeForce GTX 1050 Ti graphics card.
We have considered two test scenes:
• a modified Cornell box scene containing the Stanford Lucy model. The scene, which consists of 33,480 triangles, uses only simple diffuse materials without any texturing. Both of the original models (downloaded from [35,36], respectively) were resized by the authors before use;
• the Crytek Sponza, which consists of 262,267 triangles. The scene considers only diffuse surfaces and also presents several textures. The original file (downloaded from [35]) was resized by the authors before use.

Applying an IES Light in a Virtual Scene
The first step for the introduction of IES data into a real-time rendering pipeline is the parsing of the content of an IES file. This is a straightforward step because the IES format is ASCII-based. The most relevant data required are the list of horizontal and vertical angles, the luminous intensities, and the photometric type of the luminaire.
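A minimal sketch of such a parser for the numeric portion of an LM-63 file might look as follows. The struct and function names are ours, and the keyword and TILT sections preceding the numeric block are assumed to have been consumed already by the caller:

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <vector>

// Parsed photometric data: angle lists and the candela grid
// (numHorizontal rows of numVertical values each).
struct IesData {
    int numVertical = 0;
    int numHorizontal = 0;
    std::vector<double> verticalAngles;
    std::vector<double> horizontalAngles;
    std::vector<double> candela;
};

// Reads the numeric block of an LM-63 file: line 11 (lamp count, lumens per
// lamp, candela multiplier, angle counts, photometric type, units, emitting
// surface dimensions), line 12 (ballast data and input watts), then the
// vertical angles, horizontal angles, and candela values.
IesData parseIesNumericBlock(std::istream& in) {
    IesData d;
    int numLamps = 0;
    double lumensPerLamp = 0, multiplier = 0;
    in >> numLamps >> lumensPerLamp >> multiplier
       >> d.numVertical >> d.numHorizontal;
    int photometricType = 0, unitsType = 0;
    double width = 0, length = 0, height = 0;
    in >> photometricType >> unitsType >> width >> length >> height;
    double ballastFactor = 0, futureUse = 0, inputWatts = 0;
    in >> ballastFactor >> futureUse >> inputWatts;
    d.verticalAngles.resize(d.numVertical);
    for (double& a : d.verticalAngles) in >> a;
    d.horizontalAngles.resize(d.numHorizontal);
    for (double& a : d.horizontalAngles) in >> a;
    d.candela.resize(static_cast<std::size_t>(d.numVertical) * d.numHorizontal);
    for (double& c : d.candela) in >> c;
    return d;
}
```

A production parser would also validate the counts against the actual data and apply the candela multiplier; both are omitted here for brevity.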
Once the data have been parsed, a polygon mesh representing the photometric solid described in the IES file is built. The entire process is based on established mesh processing techniques [37]. Each angle pair (i.e., a horizontal and a vertical angle describing a direction of sampling of the luminaire), expressed in the coordinate system of the photometric web, together with the corresponding measured intensity, is converted to a 3D Cartesian coordinate in space using the luminaire position as reference. The grid of points in the IES file usually represents only a portion of the photometric solid. Given the orientation of the luminaire (determined by the photometric type metadata), and exploiting symmetries, a complete solid can be obtained by applying reflection operations. Triangulation and possibly subdivision operations are then applied in order to create the final polygon mesh of the photometric solid [37]. The last step consists of the computation of normals for every vertex of the generated mesh. Finally, all the data of the polygon mesh are saved in an OpenGL Vertex Array Object (VAO) [38].
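The conversion of a single photometric-web sample into a point of the solid can be sketched as follows. This is an illustrative mapping assuming a Type C luminaire with its polar axis along −Z (vertical angle 0 pointing straight down); the actual axis convention used in an engine may differ:

```cpp
#include <cassert>
#include <cmath>

struct Pt { double x, y, z; };

// Maps one (vertical angle, horizontal angle, candela) sample to a point on
// the photometric solid, centered at the luminaire position (the origin).
// Vertical angle 0 points down (-Z), 180 points up; the horizontal angle
// sweeps around the polar axis.
Pt photometricSample(double verticalDeg, double horizontalDeg,
                     double candela) {
    const double kPi = 3.14159265358979323846;
    double v = verticalDeg * kPi / 180.0;
    double h = horizontalDeg * kPi / 180.0;
    // Direction on the unit sphere for this angle pair.
    double dx = std::sin(v) * std::cos(h);
    double dy = std::sin(v) * std::sin(h);
    double dz = -std::cos(v);
    // Scale the direction by the measured intensity: the farther the point
    // lies from the origin, the stronger the emission in that direction.
    return { candela * dx, candela * dy, candela * dz };
}
```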
The polygon mesh of the photometric solid is then used in the rendering module in order to extend the Reflective Shadow Maps technique.It can also be easily visualized using wireframe rendering for debug purposes (see Figures 3 and 4d).

Enhancing Reflective Shadow Maps with IES Data
We have extended the Reflective Shadow Maps technique [8], which we have presented in Section 2, in two ways.
First of all, we have extended the original technique in order to also consider point lights. The main approach remains the same: the rendering is based on deferred rendering, saving the same data (depth, position, normal, and flux) in different G-buffers. However, to manage the omnidirectional emission of a point light, we use cube maps instead of standard 2D textures. Cube maps are generated by performing six different rendering steps, pointing the light sources in the front, back, up, down, left, and right directions. This way, we have a panoramic determination of what is "seen" by the light in every direction [3]. In OpenGL, the process of rendering to each face of the cube map can be completed in a single render pass using an ad hoc geometry shader [38]. Figure 5a-c show examples of the content of the cube map versions of the data required by the RSM technique.

Standard point lights emit light in a uniform way in every direction (i.e., the volume of emission is perfectly spherical). With IES data, we want to "weight" the emission in each direction given the value provided by the photometric solid. In order to obtain such an effect, we can create an IES light mask starting from the photometric solid. In practice, we convert the values of the IES data, currently stored in the polygon mesh of the photometric solid, into a cube map whose values represent how and how much light should be emitted from the light source towards each direction.
The photometric solid is positioned where the point light is placed and oriented accordingly. The associated light mask is computed by projecting its faces onto the cube map. We can re-use the first part of the G-buffer creation step because the vertices of the photometric solid mesh need to be processed in the same way. As a consequence, we apply the same vertex and geometry shaders used for the other cube maps of the RSM technique.
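When the mask is later sampled, a direction must be resolved to one of the six cube-map faces. In GLSL this is implicit in the `samplerCube` lookup, but the underlying selection rule, shown here as a CPU-side illustration, follows the axis with the largest absolute component, matching the standard OpenGL face ordering:

```cpp
#include <cassert>
#include <cmath>

// Face indices follow the GL_TEXTURE_CUBE_MAP_POSITIVE_X + i ordering.
enum CubeFace { PosX = 0, NegX, PosY, NegY, PosZ, NegZ };

// Selects the cube-map face for a direction (x, y, z): the axis with the
// largest absolute component wins, and its sign picks the face.
CubeFace selectCubeFace(double x, double y, double z) {
    double ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) return x >= 0 ? PosX : NegX;
    if (ay >= az)             return y >= 0 ? PosY : NegY;
    return z >= 0 ? PosZ : NegZ;
}
```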
However, in order to achieve the desired result, we apply a fragment shader that saves the IES light mask information using a false-colors approach. We present in Listing 2 an excerpt of the GLSL code relative to the final fragment color computation. The generated texture stores the following data:
• Red color channel: we store the length of the vector that goes from the position of the light to the processed fragment, i.e., the luminous intensity emitted from the light source in that direction.
• Green color channel: distance_to_furthest_ies_vertex is the original maximum distance between the origin and a vertex of the photometric solid. Dividing the distance of the fragment from the light by this value, we compute an intensity modifier based on the maximum intensity (in candelas) that the light source can emit. This compensates for scaling transformations applied to the original photometric solid once transformed to be used in the scene. The resulting value is always contained in the range (0.0, 1.0].
• Blue color channel: since a fragment was generated, the solid emits light in that direction, so the variable emitting_along_direction_l is set to 1.0. Because the G-buffer is initialized with 0.0, the final cube map will have non-zero values only for fragments corresponding to an actual direction of emission from the light source.

In the final shading pass, the illumination of each fragment is computed considering two components:
• Direct illumination component: for each light source present in the scene, the G-buffer storing the standard shadow map is checked to see if the considered fragment is illuminated. If not in shadow, the direct illumination component is computed following the standard screen-space approach of deferred rendering, but the resulting value is weighted by the values sampled in the IES light mask. The blue channel, corresponding to a boolean flag, tells the shader whether the light source emits light in that direction. If it does not, the contribution of that light is set to zero. On the other hand, if the contribution exists, the value in the red channel (possibly corrected by the component in the green channel) can be used as a multiplier for the intensity. An example of a rendering produced considering only this component can be seen in Figure 4a.
• Indirect illumination component: the computation is based on the application of Equation (3) as discussed in Section 2, but computed using G-buffers consisting of cube maps, and weighting the final result using the values sampled in the IES light mask to take into account the light intensity actually emitted in a direction. Figure 4b shows a preview of the indirect illumination component created using our enhanced RSM technique. A multiplication factor of 4 is used for better visualization.
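As a CPU-side illustration of how the sampled mask values could gate and scale a light's contribution: the blue channel acts as a boolean emission flag, and the normalized green channel is used here as the intensity multiplier. This is one plausible combination of the channels; the actual shader may weigh them differently:

```cpp
#include <cassert>

// Applies the IES light mask to an unweighted lighting contribution.
// red:   raw distance/intensity value (unused in this simplified variant);
// green: normalized intensity in (0, 1];
// blue:  1.0 if the luminaire emits in this direction, 0.0 otherwise.
double applyIesMask(double red, double green, double blue,
                    double unweightedContribution) {
    (void)red;                     // the normalized green channel is used instead
    if (blue <= 0.0) return 0.0;   // no emission along this direction
    return unweightedContribution * green;
}
```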
Figure 4c shows the final render of the proposed technique.

Results and Discussion
Figures 3, 4c, 6 and 7 show some renderings obtained using the proposed approach on the two test scenes. The results seem to confirm the initial hypothesis: the original RSM technique can be extended to consider point lights, and the photometric data described in an IES file can be effectively integrated in the Many-Lights pipeline, not only in the direct illumination component (as usually considered in real-time applications) but also playing an active role in the indirect illumination computation. The two test scenes present only diffuse surfaces; as a consequence, the main GI effect introduced by the Many-Lights technique is color bleeding. The presented results show an evident and realistic color bleeding effect: this is particularly noticeable on the solids and the Stanford Lucy model in Figures 3 and 6, and on the floor in Figures 4c and 7.
Regarding computational performance, the implemented technique is more expensive than the original RSM method. This is obviously caused by the use of several cube maps for each light source in the scene, while the original method used standard 2D textures. As a consequence, our method requires six times more memory to store each RSM component, plus an additional cube map for the IES mask. Moreover, all the cube maps require recomputation in the case of dynamic objects moving in the scene. However, the technique is view-independent in the case of static scenes: moving the camera does not require recomputing either the RSMs or the IES masks of the light sources; it only requires an update of the camera G-buffers, and thus of what is rendered to the screen in the last pass of a standard deferred rendering pipeline.
Given N IES lights and taking M samples from their RSMs, N × M random texture accesses are needed to render a single fragment. In our experimental evaluation of the proposed technique, we found that the best visual results are obtained with 400 samples taken from each RSM for each fragment. Fewer samples lead to evident visual artifacts. Figure 6 shows a comparison of the results obtained on the modified Cornell box scene using 100 and 400 samples per fragment. When using fewer samples, rendering artifacts are visible both on the ceiling and on the front face of the taller box.
Performance in terms of achieved frame rate on our test machine (Section 4.1), with the rendering resolution set to 800 × 800 pixels, is summarized in Tables 1 and 2. The analysis has been performed on both test scenes considering a single light source and 100 and 400 samples per fragment. The application maintains an interactive frame rate with a maximum of 4 lights in the scene. We forced a full recomputation of the RSM buffers at each frame, simulating a dynamic scene. By using NVIDIA Nsight [39] to inspect the state of the GPU, we found that computing indirect illumination is the most computationally expensive step. Indeed, repeatedly sampling the RSM cube maps is the actual bottleneck.
The application of an additional screen-space interpolation approach, as proposed in [8], could also be extremely advantageous in the proposed version of the RSM technique, offering relevant speedups while preserving a good final image quality.

Conclusions and Future Work
In this paper, we introduced the use of photometric data in a Many-Lights technique so that the Virtual Point Lights distribution could be driven by the IES description of the light source. We proposed two extensions to the Reflective Shadow Maps (RSM) technique: support for point lights, and the creation of a light mask based on the data in IES files in order to "weight" the emission in different directions.
The results obtained with the proposed technique show how the introduction of photometric data describing real light sources can noticeably contribute to the level of realism in the generated images.The current state of the project allows for an adequate framerate with a limited number of light sources in the scene on the considered test machine.
Future work will focus on the optimization of the proposed technique. In the current implementation, we have not considered the optimization based on screen-space interpolation outlined in the original RSM paper [8]. The introduction of a similar approach could surely enhance the performance of the overall rendering pipeline. Moreover, other possible approaches might involve encoding useful data in a more compact and efficient format, such as merging depth maps and IES light masks in a single cube map by saving more information in each component. The use of IES photometric data can also be considered in other Many-Lights techniques. Indeed, a different approach was proposed in [40]: instead of gathering energy from pixel lights on an ad hoc shadow map, energy might be "propagated" from the indirect light source, illuminating screen-space neighbors. This step could benefit from a description of the actual photometric emission of a light source.

Figure 1
shows a schematic description of the two stages of a standard Many-Lights technique.

Figure 1 .
Figure 1. The two steps of a generic Many-Lights rendering technique. On the (left): generation of VPLs on different surfaces of a scene. On the (right): rendering process taking into account the effect of the different VPLs. The image considers multiple light bounces, which are applied only in offline rendering. Many-Lights techniques applied in real-time rendering (like the proposed method) usually consider just a single light bounce (e.g., as in the case of the object in the center of the scene). Original image from [6].

• Information about the measurement test (lines 11 and 12): a list of values related to the measurement test on the actual luminaire. The values in line 11 represent the number of lamps inside the luminaire, the luminous flux of each lamp, a multiplication factor of light intensity, the number of vertical and horizontal angles considered in the measurements, the Photometric Type of the luminaire, the unit of measurement (1 for feet, 2 for meters), and, finally, the length, width, and height of the emitting surface of the luminaire. The values in line 12 provide information on the electrical ballast and the input power of the luminaire.
• Measured photometric data (lines 13 to 19): the actual measured data, which constitute the core part of an IES file and the data actually used in the rendering process. The luminaire's photometric data are captured by locating it at the center of an imaginary sphere and measuring light intensity values at some points on the surface of this sphere. These points are expressed using polar coordinates relative to a grid called the photometric web. The measured samples define a volume that is called the photometric solid. Lines 13 and 14 show the lists of the vertical and horizontal angles on the photometric web representing the measured samples. The numbers of the considered angles are stated in line 11 (in our example, thirty-seven vertical angles and five horizontal angles). Lines 15 to 19 show the list of luminous intensity values (expressed in candelas) captured at each angle pair (a line for each horizontal angle, each consisting of 37 values).

Listing 1. The content of an IES file (excerpt).
IESNA:LM-63-1995
[REPORT NUMBER] ITL36346
[DATE] 11-28-1989
[MANUFAC] LEDALITE ARCHITECTURAL PRODUCTS, INC.
[CATALOG NUMBER] 111621-PN-12HP-NN
[LUMINAIRE] EXTRUDED SQUARE ALUMINUM HOUSING, WHITE PAINTED REFLECTORS, SEMI-SPECULAR PARABOLIC LOUVER, OPEN TOP.

Figure 2. On the (left), a visualization of the photometric web of the data in Listing 1; on the (right), a rendering produced using the same IES file. Both images have been generated using the IESviewer 3.6 software [34].
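To evaluate the photometric web at an arbitrary angle pair, the measured grid can be interpolated bilinearly. The sketch below is illustrative only: the function name and the assumptions (ascending angle lists, at least two vertical angles, queries clamped to the measured range) are ours, not taken from the paper:

```python
from bisect import bisect_right

def sample_candela(v_angle, h_angle, v_angles, h_angles, candelas):
    """Bilinearly interpolate the candela grid at (v_angle, h_angle).

    `candelas[i][j]` holds the intensity measured at horizontal angle
    `h_angles[i]` and vertical angle `v_angles[j]`.
    """
    def locate(angles, a):
        # Index of the lower bracketing angle and interpolation weight.
        i = max(0, min(bisect_right(angles, a) - 1, len(angles) - 2))
        span = angles[i + 1] - angles[i]
        t = (a - angles[i]) / span if span > 0 else 0.0
        return i, max(0.0, min(t, 1.0))

    j, tv = locate(v_angles, v_angle)
    if len(h_angles) == 1:               # axially symmetric luminaire
        row = candelas[0]
        return row[j] * (1 - tv) + row[j + 1] * tv

    i, th = locate(h_angles, h_angle)
    c00, c01 = candelas[i][j], candelas[i][j + 1]
    c10, c11 = candelas[i + 1][j], candelas[i + 1][j + 1]
    return (c00 * (1 - tv) + c01 * tv) * (1 - th) + \
           (c10 * (1 - tv) + c11 * tv) * th
```

Querying a direction halfway between two measured vertical angles simply averages the two bracketing candela values.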
Examples of the content of the cube-map version of the data required by the RSM technique.

Figure 3. Some rendering results of the modified Cornell box using the proposed method. The scene presents a light source (described by the IES file in Listing 1 and Figure 2) moving from left to right. The photometric solid is rendered in wireframe. The results show the effect of the non-uniform emission of the light source (particularly evident on the ceiling in the image in the center) and the color bleeding effects provided by the Many-Lights approach, in particular on the solid on the left in all the frames and on the Stanford Lucy model in the right image.

Figure 4. A render using the Crytek Sponza scene. The subfigures are: (a) the scene rendered only with direct lighting using the IES light mask; (b) the scene illuminated only by indirect lighting (a 4× multiplication factor has been applied to make the component more visible); (c) the final render of the scene using the proposed method; (d) the final render with a debug visualization of the photometric solid. The scene uses the light source described by the IES file in Listing 1 and Figure 2.

Listing 2. IES light mask generation: false colors computation in the fragment shader.

vec3 light_to_fragment = fragment_position.xyz - scene_lights.positions[light_index].xyz;
float distance_from_light = length(light_to_fragment);
float scaled_distance = distance_from_light / distances_to_furthest_ies_vertex[light_index];
float emitting_along_direction_l = 1.0;
vec4 ies_mask = vec4(distance_from_light, scaled_distance, emitting_along_direction_l, 1.0);
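As a cross-check of the shader excerpt in Listing 2, the same per-fragment quantities can be reproduced on the CPU. This Python sketch mirrors the shader math; the function name and plain-tuple vectors are our assumptions:

```python
import math

def ies_mask(fragment_pos, light_pos, furthest_ies_vertex_distance):
    """CPU reference for the false-color mask of Listing 2.

    Returns (distance_from_light, scaled_distance,
    emitting_along_direction_l, alpha). The third component is left at
    1.0 here, as in the shader excerpt, before the photometric lookup
    replaces it.
    """
    light_to_fragment = [f - l for f, l in zip(fragment_pos, light_pos)]
    distance_from_light = math.sqrt(sum(c * c for c in light_to_fragment))
    # Normalize by the distance to the furthest vertex of the
    # photometric solid, as in the shader.
    scaled_distance = distance_from_light / furthest_ies_vertex_distance
    return (distance_from_light, scaled_distance, 1.0, 1.0)
```

A fragment 5 units from the light, with a photometric solid extending 10 units, yields a scaled distance of 0.5.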

Figure 5. Figure 5d shows an example of the content of the IES light mask.

Figure 6. Rendering results of the modified Cornell box using 100 (left) and 400 (right) samples per fragment. In the image on the left, noticeable rendering artifacts are present on the ceiling and on the front face of the taller box. The scene uses the light source described in Listing 1.

Figure 7. A render of the Crytek Sponza scene using the proposed technique. The scene uses the light source described in Listing 1. As in Figure 4c, the color bleeding effect is evident on the floor.

Table 1. Framerates for the modified Cornell box scene with respect to the number of samples per fragment used in the indirect lighting computation.

Table 2. Framerates for the Crytek Sponza scene with respect to the number of samples per fragment used in the indirect lighting computation.