Application of High-Dynamic Range Imaging Techniques in Architecture: A Step toward High-Quality Daylit Interiors?

High dynamic range (HDR) imaging techniques are now widely used in building research to capture luminances in the occupant's field of view and investigate visual discomfort. This photographic technique also makes it possible to map sky luminances. Such images can be used to illuminate virtual scenes, a technique called image-based lighting (IBL). This paper investigates IBL in a lighting quality research context, with the aim of accelerating the development of appearance-driven performance indicators. Simulations were carried out with the Radiance software. The ability of IBL to accurately predict indoor luminances is discussed by comparing its results with luminances from HDR photographs and with luminances predicted by simulations in which the sky is modeled in several other, more traditional ways. The present study confirms previous observations that IBL leads to luminance values similar to those of far less laborious simulations in which the sky is modeled from outdoor illuminance measurements. Compared with sky models not based on outdoor measurements, IBL and these measurement-based methods minimize the differences with HDR photographs.


Introduction
In the current context of energy crises and climate change, the work of the architect has become highly complex. One of the current risks is to focus on building energy performance to the detriment of other important aspects of architectural quality. In the field of lighting, quality has been defined as conformance to occupants' needs, to architectural requirements, and to economic and environmental matters [1]. As illustrated in Figure 1, which frames lighting quality in light of the Vitruvian triad, occupants' needs should not be understood only as visual needs. Lighting should obviously support occupants' activities by providing sufficient light and avoiding glare situations that cause visual discomfort. However, it must also contribute to the satisfaction of non-visual needs (among others, social interactions, performance, health, and safety matters) [2]. Thanks to its recognized benefits to occupants, its energy-efficiency potential, and its added value for buildings, daylight remains the preferred source for lighting. Additionally, in most places, electric lighting is considered a complement to daylighting. In practice, the most common approach for predicting the lighting quality of daylit spaces is to calculate the daylight factor (DF). This static illuminance-based indicator, developed in the 1970s, informs on daylight provision (visual needs) without taking into account either the variation of daylight over the day and the year or the location of the building. Yet, important advances have been made since the 1970s, leading to the development of three new types of lighting quality indicators:

1. Thanks to computer technology, computing power increased dramatically from the 1970s, and the first physically-based lighting simulation systems were developed in the 1980s. These advances favored the development of dynamic climate-based performance indicators such as daylight autonomy (DA) [3] and useful daylight illuminance (UDI) [4]. These metrics are calculated from weather files describing typical conditions at the building's location. They take into account daylight variability, target illuminance levels, and building occupancy periods. They also inform on the potential to reduce lighting consumption thanks to daylighting. We observe that UDI and DA, developed more than 10 years ago, struggle to be adopted by practitioners. In our opinion, potential reasons are (1) calculation tools and software not adapted to architectural design realities and (2) the lack of a normative/regulatory context specifying targets.

2. In the past decade, HDR photography has been increasingly used by lighting researchers as a luminance data acquisition tool. Compared with spot luminance meters, this technique has the advantage of capturing luminances in the human field of view more rapidly and with a higher resolution. It also makes possible statistical analyses of the luminances of specific surfaces or areas of interest. The accuracy of luminance measurement with HDR photography is strongly influenced by the care taken during acquisition and processing [5][6][7]. A measurement error of less than 10% can be expected [6]. HDR photography has certainly accelerated the development of luminance-based metrics predicting visual discomfort caused by glare (e.g., DGP [8]), and will probably facilitate their validation.

3. Last, in recent years, we have observed a growing interest of lighting researchers in circadian matters [9,10]. This interest follows the discovery in the 2000s of a third type of retinal photoreceptor [11,12]. Light is today recognized as the "major synchronizer of circadian rhythms to the 24-h solar day" [13]. To help designers address the need for building daylight access supporting circadian regulation, circadian daylight metrics are under development [14,15].
Figure 1 highlights that the research efforts presented above have mainly addressed two of the three aspects of the Vitruvian triad (i.e., utilitas and firmitas). By contrast, few works investigate the prediction of visual appearance, atmospheres, and aesthetic matters related to the third dimension of architecture, venustas, which is probably a driving force for the designer. In our opinion, this reveals that lighting research does not take sufficient account of the design process of the architect or designer.
To accelerate the development of appearance-driven performance indicators such as those developed in [16], some methodological challenges should be addressed. The classical method for exploring the appearance of lit spaces is the psychophysical approach: the relationship between physical measurements of (visual) stimuli and the sensations/perceptions those stimuli evoke in observers is studied. In the context of daylighting, one of the difficulties with such an approach is the control of the natural variations of lighting conditions. To overcome this issue, physically-based renderings are interesting and particularly suitable for psychophysics, as they provide both physical data and visual stimuli. The first few validation works suggest that such images could serve as reasonable surrogates for the real world [17][18][19]. This kind of work, investigating the perceptual equivalence between actual and virtual daylit environments, should continue. In such a validation context, image-based lighting (IBL), a process of illuminating virtual scenes with HDR photographs as explained in the tutorial by Debevec [20], is of great interest, as it could minimize light distribution differences between real and virtual scenes. To the best of our knowledge, the rare published works investigating IBL in lighting research are from Inanici [21][22][23]. Her main conclusions are that image-based lighting renderings accurately predict indoor luminous conditions and that the method is particularly interesting in urban contexts or for sites with vegetation.
In the present work, we sought to:
• Investigate the ability of IBL renderings to accurately predict luminance distributions in indoor spaces, in comparison to more traditional ways to describe the light source in Radiance [24];
• Determine how similar our observations are to those reported by Inanici [21][22][23];
• Quantify the error between actual and rendered luminances.

Materials and Methods
To evaluate the accuracy of IBL for predicting luminance distributions, a numerical comparison was made between luminance values extracted from HDR photographs of real rooms and simulated luminances. Four actual (and thus complex) rooms were studied (see Figure 2). They are located in Louvain-la-Neuve, Belgium (50°40′ N, 4°33′ E). Each was photographed three times on 9 March, between 11:00 and 14:20.
Simultaneously with the HDR luminance acquisition in the real rooms, HDR images of the sky were taken, and outdoor illuminances (horizontal global and horizontal diffuse) were recorded. Sky images and outdoor illuminances were used to describe the sky in the simulations (see Section 2.3).

HDR image processing and renderings were carried out with Radiance, a physically-based rendering system developed in the 1980s by Greg Ward for predicting the light levels and appearance of yet unbuilt spaces in an architectural lighting design context [24]. This open-source software supports image-based lighting and is probably the software most used by lighting researchers.

Outdoor Illuminance Measurements
Table 1 summarizes the outdoor illuminance levels recorded simultaneously with the HDR luminance acquisition in the real rooms. Outdoor global horizontal illuminance (E_glob_horiz) and outdoor diffuse horizontal illuminance (E_dif_horiz) were measured with a Hagner EC1-X illuminance meter. For the simulations, the direct normal illuminance (E_dir_norm) was calculated from the outdoor illuminance measurements and the altitude of the sun (theta_sun) as follows:

E_dir_norm = (E_glob_horiz − E_dif_horiz) / sin(theta_sun)    (1)

The altitude of the sun was determined for the given date, time, and location using the solar geometry algorithm given by Szokolay [25]. Table 1 shows that the outdoor global horizontal illuminances (E_glob_horiz) varied between 15,300 and 71,700 lx. Sky types were intermediate or overcast.
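As a sketch, the conversion above can be written in a few lines of Python (the function name is ours; the solar altitude is assumed to come from Szokolay's algorithm [25]):

```python
import math

def direct_normal_illuminance(e_glob_horiz, e_dif_horiz, sun_altitude_deg):
    """E_dir_norm = (E_glob_horiz - E_dif_horiz) / sin(theta_sun).

    The direct horizontal component (global minus diffuse) is divided by
    the sine of the solar altitude to project it onto a plane normal to
    the direction of the sun.
    """
    return (e_glob_horiz - e_dif_horiz) / math.sin(math.radians(sun_altitude_deg))

# With 60,000 lx global and 30,000 lx diffuse and the sun at 30 degrees,
# the 30,000 lx direct horizontal component becomes 60,000 lx direct normal.
print(direct_normal_illuminance(60_000, 30_000, 30.0))
```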

Real Rooms Luminance Acquisition
Pictures of the indoor spaces were taken with a Canon EF-S 17-85 mm IS lens mounted on a Canon 40D camera. In each room, three times, a series of LDR pictures was taken, varying the exposure time while keeping the aperture of the camera constant. For easy and automatic bracketing, the camera was controlled from a computer via a USB cable and the DSLR Remote Pro software. The white balance of the camera was set to daylight, and the lowest sensitivity (ISO 100) was chosen to reduce noise in the HDR picture, as recommended in [6]. A tripod was used to avoid camera shake and obtain sharp HDR pictures. A double-axis bubble level was placed on the camera to ensure that the device was level. In order to create panoramic images, for each exposure, a series of pictures was taken by rotating the camera around its entrance pupil. For each exposure, the pictures were first stitched into an LDR panorama in PTGui Pro.
Merging multiple-exposure LDR images into an HDR image requires knowledge of the camera response function, which establishes the relation between RGB pixel values and relative radiance values. This response can be determined once for a given camera and reused for other sequences [26]. The response function of the Canon 40D camera used in the indoor rooms was recovered with the hdrgen program developed by Ward for Linux [27], based on a sequence of seven images of an interior daylit scene with large, smooth gradients. hdrgen uses Mitsunaga and Nayar's algorithm [28] to derive the camera response, and a specific function is determined for each channel. The recovered response is given in Equation (2).
All HDR images of the indoor spaces were created by reusing the camera response presented in Equation (2) and the hdrgen program. The output HDR data were stored in Radiance RGBE format. To retrieve real luminance values from an HDR image, a photometric calibration was performed. The luminances of several objects in the HDR image (extracted using the pvalue program of the Radiance software [29]) were compared with luminance measurements taken with a Minolta LS-100 spot luminance meter. For each scene, a global calibration factor (CF) was determined as follows:

CF = (1/n) × Σ_{i=1..n} (L_spot_lum_i / L_HDR_i)    (3)

where n is the number of objects, L_spot_lum_i is the luminance measurement of object i taken with the spot luminance meter, and L_HDR_i is the luminance of the same object from the HDR picture. In the present study, the resulting calibration factors for the 12 scenes (4 rooms × 3 times) vary between 1.12 and 1.47.
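A minimal sketch of this calibration step (the function name is ours), assuming the mean-of-ratios form of the definition above:

```python
def calibration_factor(spot_luminances, hdr_luminances):
    """Global calibration factor CF: the mean, over the n reference
    objects, of the ratio between the spot-meter luminance and the
    HDR-derived luminance of the same object."""
    ratios = [spot / hdr for spot, hdr in zip(spot_luminances, hdr_luminances)]
    return sum(ratios) / len(ratios)

# Two reference objects whose HDR luminances read ~20% too low:
print(calibration_factor([120.0, 60.0], [100.0, 50.0]))  # -> 1.2
```

Applying CF as a global multiplier rescales every pixel of the HDR image; in the study the resulting factors fell between 1.12 and 1.47.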

Sky Vault Luminance Acquisition
To capture the entire sky vault, a Sigma f/2.8 4.5 mm fisheye lens was mounted on a second Canon 40D camera. This device (camera + fisheye lens) creates circular pictures capturing a 180° hemisphere. With fisheye lenses, the vignetting effect (the decrease in brightness from the center of the picture to its periphery) is not negligible and should be corrected. With our device, when large apertures are used, luminance losses of more than 50% are observed at the periphery of the pictures [30]. A tripod and a double-axis bubble level were used to ensure the horizontality of the camera. The white balance of the camera was set to daylight and the ISO (sensor sensitivity) to 100.
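The correction itself is device-specific and must be measured; as an illustration only, a radially symmetric quadratic falloff model (our assumption, not the measured curve of [30]) could be applied per pixel as follows:

```python
def vignetting_correction(luminance, r, r_max, peripheral_loss):
    """Undo radial light falloff in a fisheye picture.

    r is the pixel's distance from the image center, r_max the radius of
    the image circle, and peripheral_loss the fractional luminance loss
    at the periphery (e.g., 0.50 at f/16 and 0.04 at f/4 for the device
    used here). A quadratic falloff is assumed for illustration; the real
    curve must be calibrated for the actual lens/aperture combination.
    """
    falloff = 1.0 - peripheral_loss * (r / r_max) ** 2
    return luminance / falloff
```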
Acquiring the luminances of the sky vault with HDR photography is more challenging than for indoor spaces because of the high luminance of the sun. Nevertheless, as demonstrated by Stumpfel et al. [31], it is possible to avoid saturating the camera sensor by (1) using neutral density filters and then correcting for their presence, and (2) carefully selecting the settings (aperture and exposure time) for the capture of the LDR images. In the present study, a neutral density filter (Kodak ND 3.00) transmitting 0.1% of the incident light was placed between the lens and the camera. Then, following best practice [21], two sequences of multiple-exposure images were taken to capture the wide range of luminances of sunny skies. Both varied the exposure time while keeping the aperture of the camera constant. A first series of LDR pictures was taken with a large aperture (f/4) to capture the low luminances of the cloud layer. The second series was taken with a smaller aperture (f/16) to capture the high luminances of the sun and its corona. For both apertures (f/4 and f/16), the ND filter was present, and the shutter speed varied between 25 s and 1/2500 s, in 2-stop increments.
As the camera response function can vary from one device to another, even between cameras of the same model [32], a specific response function was determined for this second Canon 40D camera (see Equation (4)). Again, hdrgen was used to determine the curves, based on a sequence of 11 images of an outdoor scene. Figure 3 illustrates the responses of the two Canon 40D cameras used in the present work. Each sequence of LDR pictures of the sky was merged into an HDR picture using the camera response presented in Equation (4) and the hdrgen program. For intermediate skies, the two HDR sky images (from the f/16 and f/4 aperture series) were combined: luminances higher than 500,000 cd/m² were extracted from the f/16 HDR picture, and luminances lower than 30,000 cd/m² were extracted from the f/4 HDR sky image. Between these values, luminances were linearly blended. In the case of overcast skies, only the f/4 aperture series was used (no pixel is saturated, owing to the absence of the sun).
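The per-pixel combination of the two sky captures can be sketched as follows (a hypothetical helper; the source does not state which of the two images drives the blending weight, so the f/4 estimate is used here):

```python
LOW, HIGH = 30_000.0, 500_000.0  # cd/m2, thresholds given in the text

def combine_sky_luminance(l_f4, l_f16):
    """Combine the f/4 (low range) and f/16 (high range) HDR sky pixels:
    trust f/4 below 30,000 cd/m2, f/16 above 500,000 cd/m2, and blend
    linearly in between."""
    if l_f4 <= LOW:
        return l_f4
    if l_f4 >= HIGH:
        return l_f16
    w = (l_f4 - LOW) / (HIGH - LOW)  # 0 at LOW, 1 at HIGH
    return (1.0 - w) * l_f4 + w * l_f16
```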
The main steps of the calibration process necessary to retrieve real luminance values from the HDR sky images are, as illustrated in Figure 4:
• A neutral density filter correction, determined as proposed by Stumpfel et al. [31] by photographing a Macbeth color chart with and without the ND filter;
• A vignetting correction counteracting, respectively, the 50% and 4% luminance losses observed at the periphery of the sky image with our device (Canon 40D + Sigma 4.5 mm) at an f/16 or an f/4 aperture;
• A calibration of the resulting (combined) HDR image, based on the outdoor illuminance measurement. To determine the calibration factor (see Equation (5)), the outdoor global horizontal illuminance was compared with the illuminance computed from the HDR picture with evalglare (a Radiance-based tool [33]) after changing the projection type of the image from equisolid to equidistant (using the Radiance fisheye_corr.cal file).
CF = E_glob_horiz / E_HDR    (5)

where E_glob_horiz is the outdoor global horizontal illuminance measured during the sky image capture and E_HDR is the illuminance calculated from the HDR image. The calibration factors vary between 0.95 and 1.41.

Renderings
The four actual rooms were modelled in Ecotect and rendered in Radiance. As with most rendering tools, the description of a scene in Radiance requires describing the geometry, the materials, and the light source(s) (see Figure 5). The geometry was described based on building plans and in situ measurements. Materials were described thanks to in situ colorimetric measurements. Some hypotheses were made regarding the specularity and roughness of the materials.

A first series of renderings was created using the HDR sky images (IBL renderings). The mapping onto the virtual hemispherical vault was done as described in the IBL tutorial by Debevec [20], but using an equisolid projection type. Beforehand, the HDR pictures were cropped to a square, and a black border was added around the image circle. For intermediate skies, mksource was used to extract a direct light source from the HDR pictures. After several tests, the radiance threshold was set to 5586 W sr⁻¹ m⁻² and the source diameter to 1.153, as in [23].
Rooms #1 and #2, which have more complex geometries, were simulated with slightly higher Radiance parameters (-ab 8 -aa 0.08 -ar 512 -ad 2048 -as 512) than Rooms #3 and #4 (-ab 7 -aa 0.15 -ar 512 -ad 2048 -as 512).

In order to evaluate the interest of IBL renderings, more traditional ways to describe the light source (the sky vault) in Radiance were also investigated. We tested the two sky model generator programs included in the software: gensky and gendaylit. Gensky produces sky luminance distributions based on the uniform luminance model, the CIE overcast sky model, the CIE clear sky model, or the Matsuura intermediate sky model [34]. Gendaylit generates the daylight source using the Perez models. The following four ways to describe the daylight source were tested:

• Gensky, specifying date, time, and location (gensky_def). This way to describe the light source is used by many novice users and practitioners unfamiliar with lighting simulations.
• Gensky, specifying date, time, location, and sky type (gensky_sky). The sky type was determined based on a subjective evaluation of the cloud layer.
• Gensky, specifying date, time, location, sky type, and horizontal diffuse and direct irradiances (gensky_br). The horizontal diffuse and direct irradiances were determined from the outdoor measurements.
• Gendaylit, specifying date, time, location, sky type, and direct normal and diffuse horizontal illuminances (gendaylit). The direct normal and diffuse horizontal illuminances were determined from the outdoor measurements.
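For illustration, the four descriptions might be generated with command lines of the following shape (a sketch only: the flag spellings and the site values for Louvain-la-Neuve, expressed in Radiance's west-positive convention, should be checked against the gensky and gendaylit manual pages, and the measured illuminance/irradiance values here are placeholders):

```python
# Site description for Louvain-la-Neuve (Radiance counts longitude and the
# standard meridian positive towards the west; CET -> meridian 15 deg E).
site = "-a 50.67 -o -4.55 -m -15"

skies = {
    # 1. date, time, and location only (program-default sky type)
    "gensky_def": f"gensky 3 9 11:30 {site}",
    # 2. plus a sky type chosen by eye (-i intermediate, -c overcast)
    "gensky_sky": f"gensky 3 9 11:30 {site} -i",
    # 3. plus measured irradiances (-B horiz. diffuse, -R direct normal, W/m2)
    "gensky_br": f"gensky 3 9 11:30 {site} -i -B 55 -R 120",
    # 4. gendaylit with measured illuminances (-L direct normal, diffuse horiz., lx)
    "gendaylit": f"gendaylit 3 9 11:30 {site} -L 42000 19000",
}

for name, cmd in skies.items():
    print(f"{name:12s} {cmd}")
```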

Results
The comparison between pictures and simulations is challenging. Indeed, given the small geometric misalignments between the HDR pictures taken in the real world and the renderings, a per-pixel comparison would yield large differences. In the present study, we first visually compared the sky luminance distributions generated by the gensky and gendaylit programs with the HDR sky images used for the IBL renderings.
Then, luminance maps of the real and rendered spaces were compared. Last, a surface-to-surface analysis was carried out, comparing the mean luminances of real surfaces (walls, ceiling, and floor) with those of the rendered ones (see Figure 6 for an illustration of the studied surfaces in Room #3). In order to quantify the difference between real and virtual luminances, three indicators were calculated:

• The relative mean bias error (MBE) with respect to the mean luminance by surface in the real space. MBE is a measure of overall bias and is defined as:

MBE = (1/n) × Σ_{i=1..n} (L_i − L_true_i) / L_true_i

• The mean absolute percentage error (MAPE) with respect to the mean luminance by surface in the real space. MAPE is defined as:

MAPE = (1/n) × Σ_{i=1..n} |L_i − L_true_i| / L_true_i

• The relative root mean square error (RMSE), which, contrary to the other indicators, gives a relatively high weight to large differences with the real luminances. RMSE is calculated as:

RMSE = √[(1/n) × Σ_{i=1..n} ((L_i − L_true_i) / L_true_i)²]

In the three equations, L_i is the mean luminance of surface i in the rendering, L_true_i is the mean luminance of the corresponding real surface, and n is the number of studied surfaces.
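The three indicators can be computed together as sketched below (our helper, assuming the per-surface relative form of the definitions above):

```python
def error_indicators(rendered, real):
    """Relative MBE, MAPE, and RMSE (in %) between the mean luminances of
    n rendered surfaces and of the corresponding real surfaces."""
    rel = [(l_i - l_true) / l_true for l_i, l_true in zip(rendered, real)]
    n = len(rel)
    mbe = 100.0 * sum(rel) / n
    mape = 100.0 * sum(abs(e) for e in rel) / n
    rmse = 100.0 * (sum(e * e for e in rel) / n) ** 0.5
    return mbe, mape, rmse

# Two surfaces, one rendered 10% too dark and one 10% too bright: the bias
# cancels (MBE = 0%) while MAPE and RMSE both report the 10% deviation.
print(error_indicators([90.0, 110.0], [100.0, 100.0]))
```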

Visual Comparison of Sky Maps
As illustrated in Table 2, for most skies, the luminance distribution is similar whatever the method used to produce it. However, the sky maps produced by gensky_def and gensky_sky have lower luminances than the real skies. The skies generated with the gensky and gendaylit programs using outdoor illuminance measurements (gensky_br and gendaylit) are closer to the IBL virtual sky vaults and the real sky luminances.


Visual Comparison of Indoor Spaces
Two groups of renderings can be distinguished based on the indoor luminance maps they produced (see Tables 3-6):

•
Gensky_def and gensky_sky produce similar luminance maps underestimating real luminances;
•
Gensky_br, gendaylit, and IBL form the second group. They produce luminance maps that seem closer to real luminances than those produced by the first group (gensky_def and gensky_sky).
Nevertheless, Table 3 shows a slight underestimation of luminances in Room#1 in comparison with real luminances. This underestimation is slightly larger in Room#2 (see Table 4). In Room#3 at 11:25 (see Table 5), an overestimation by these three kinds of rendering is observed. Also, in Room#4 (see Table 6), the luminance of the ceiling seems overestimated by simulation. Figure 8 illustrates the relative mean bias error calculated by room. It confirms the quasi-systematic underestimation of luminances predicted by gensky_def and gensky_sky. Also, whatever the rendering type, an underestimation is observed in Room#1 and Room#2. Figure 8 also shows that gensky_br, gendaylit, and IBL minimize the error with luminances extracted from HDR photographs in comparison with the errors produced with gensky_def and gensky_sky, which are almost double (except in Room#4). In order to evaluate the impact of the misalignment between photographs of real scenes and renderings, a 50-by-50 px shift was introduced in the real image (the vertical size of the images is about 3000 pixels). Relative MBE by room were calculated between the shifted images and the corresponding original pictures: they vary between −5% and 5%, are always negative in Room#1 and Room#2, and always positive in Room#3 and Room#4.
The mean relative error over all rooms was estimated with MAPE. MAPE varies between 52% with gendaylit renderings and 72% with gensky_def renderings. Gendaylit renderings thus minimize the error, while gensky_def maximizes the error with real luminances. In between, MAPE values are 57%, 62%, and 68% for IBL, gensky_br, and gensky_sky respectively. Depending on the scene, the 50-by-50 px shift alone introduces MAPE between 2% and 42%, which is not negligible.
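The misalignment check described above can be sketched as follows; the 4-by-4 luminance map and the one-pixel shift are hypothetical toy data, not the study's roughly 3000 px photographs:

```python
# Shift the "real" luminance map by a fixed pixel offset and compute the
# relative MBE between shifted and original images over the overlap.
def shifted_rel_mbe(img, dx, dy):
    h, w = len(img), len(img[0])
    num = den = 0.0
    for y in range(h - dy):
        for x in range(w - dx):
            num += img[y + dy][x + dx] - img[y][x]  # shifted minus original
            den += img[y][x]
    return num / den

img = [[10.0, 20.0, 30.0, 40.0] for _ in range(4)]  # horizontal gradient (cd/m2)
bias = shifted_rel_mbe(img, 1, 0)                   # one-pixel shift to the right
```

On this steep toy gradient, even a one-pixel shift produces a large bias, illustrating why the 50-by-50 px shift in the real images yields non-negligible errors.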
In a second step, relative MBE were calculated with IBL renderings as the reference. Differences between gendaylit, gensky_br, and IBL renderings are small (see Figure 9).

Discussion
We observed in this work that IBL, gendaylit, and gensky_br are three ways of describing the light source in Radiance that lead to similar luminance distributions. Moreover, among the five types of simulation we tested, these three types of rendering (which are all based on physical measurements) minimize the error with luminances extracted from HDR photographs. We also observed that the skies generated with gensky when specifying only date, time, and location (gensky_def), or only the sky type (gensky_sky), almost systematically underestimate luminances in comparison to HDR luminance data. This highlights that a basic use of gensky (gensky_def and gensky_sky), without specifying outdoor irradiances, can lead to a substantial underestimation of luminances.
The similarity between HDR photographs and IBL, as well as the underestimation of daylight availability with gensky_def in comparison to IBL, gendaylit, and gensky_br, were also observed by Inanici in [21] and [23], respectively (note that in this second reference, renderings are not compared to real spaces). Contrary to Inanici's observation that skies generated with gensky using outdoor irradiance measurements (gensky_br) are closer to HDR sky pictures than the sky generated with gendaylit, we observed a greater similarity between the skies produced by gendaylit and the HDR sky images.
In the present study, we sought to quantify the observed differences and selected three indicators. The errors calculated between actual and rendered luminances are quite large, but similar to those calculated by Karner and Prantl in a study discussing the difficulties of comparing photographs and rendered images [35]. We partially explain these large errors by the fact that, as highlighted by the 50-by-50 px shift analysis, our indicators are strongly influenced by small misalignments.
We share the point of view of Inanici [21,23] on the interest of IBL renderings when the building is in an urban context or surrounded by vegetation influencing daylight availability. Indeed, these neighbouring elements are difficult and/or time-consuming to model. In such environments, illuminating the virtual scene with a light probe image could be useful.
In the present work, we investigated IBL renderings for reducing the difference in luminance distribution between real and virtual scenes, as part of a process of validating tools aimed at the development of new appearance-oriented indicators. Developing such indicators is a way to reduce the existing gap between designers and lighting researchers, and is essential for favouring high-quality daylit interiors. This is today more important than ever as, in industrialized countries, people spend more than 80% of their time indoors. In the present study, the interest of IBL in the frame of validation works investigating the perceptual equivalence between actual and virtual daylit environments, such as [18,36], has not been highlighted, as gendaylit and gensky_br give results similar to IBL. Moreover, generating skies with gendaylit or gensky is far less laborious than preparing HDR sky images for IBL renderings. As our study cases are not strongly affected by the presence of direct sun, the investigation of IBL renderings should continue, and further work should be done with sunnier skies and interior spaces receiving direct sunlight.
Last, in the context of seeking alternative environments for investigating the visual perception of ambiances, other HDR technologies have to be investigated. Indeed, in the present study, we discussed the interest of IBL renderings, which use HDR sky images for predicting luminance maps of unbuilt spaces (HDR images). In a psychophysical approach, once these visual stimuli (HDR images) are created, they have to be displayed to observers in order to collect perceptions (the aim being to better understand the human perceptual response to the visual environment). Which display devices and which tone-mapping operators to use for ensuring the perceptual accuracy of rendered lit scenes are other recurring issues with this type of approach [19,37,38]. They still have to be tackled to accelerate the development of visual-appearance metrics.

Figure 1 .
Figure 1. The lighting quality definition in light of the Vitruvian triad.


Figure 3 .
Figure 3. Camera response curves for the two CANON 40D cameras used in the present study, as determined with hdrgen.

•
A neutral density filter correction, determined as proposed by Stumpfel et al. [31] by photographing a Macbeth color chart with and without the ND filter;
•
A vignetting correction counteracting the 50% and 4% losses of luminance observed at the periphery of the sky image with our device (CANON 40D + Sigma 4.5 mm) at f/16 and f/4 apertures, respectively;
•
A calibration of the resulting (combined) HDR image, based on the measurement of outdoor illuminance. To determine the calibration factor (see Equation (
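A minimal sketch of this calibration step, under the assumption that the calibration factor is the ratio of the measured outdoor illuminance to the illuminance implied by the uncalibrated image (function names and numeric readings are hypothetical, not the paper's equation or data):

```python
# Hypothetical sketch: scale the combined HDR image so that the
# illuminance it implies matches the outdoor luxmeter reading.
def calibration_factor(e_measured_lux, e_image_lux):
    # e_measured_lux: horizontal illuminance from the outdoor luxmeter
    # e_image_lux: same quantity derived from the uncalibrated image
    return e_measured_lux / e_image_lux

def calibrate(luminances, cf):
    # Apply the factor to every pixel luminance (cd/m2) of the image.
    return [lum * cf for lum in luminances]

cf = calibration_factor(12000.0, 10500.0)   # hypothetical lux readings
pixels = calibrate([1500.0, 8.0, 250.0], cf)
```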

Figure 4 .
Figure 4. Creation of the light probe image. For overcast skies, only the f/4 aperture series is used.


Figure 5 .
Figure 5. The virtualization of a real scene with Radiance requires describing the geometry, the materials, and the light source.


Figure 6 .
Figure 6. Zones for the surface-to-surface comparison of Room#3.


Figure 7
Figure 7 highlights the underestimation of luminances predicted by gensky_def and gensky_sky. Gensky_br, gendaylit, and IBL profiles seem more similar to real luminances despite some large disparities.


Figure 8 .
Figure 8. Relative mean bias error, by room. The reference is the real world.

Figure 9 .
Figure 9. Relative mean bias error, by room. The reference is IBL.


Table 1 .
Description of the sky conditions during the HDR photography of the indoor spaces.

•
Gensky, specifying date, time, location, and sky type (gensky_sky). The sky type was determined based on a subjective evaluation of the cloud layer.
•
Gensky, specifying date, time, location, sky type, and horizontal diffuse and direct irradiances (gensky_br). The horizontal diffuse and direct irradiances were determined based on outdoor measurements.
•
Gendaylit, specifying date, time, location, sky type, and direct normal and diffuse horizontal illuminances (gendaylit). The direct normal and diffuse horizontal illuminances were determined based on outdoor measurements.
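As an illustration, the command lines for these measurement-based sky descriptions could be assembled as below. This is a sketch, not the authors' scripts: the option letters follow the documentation of Radiance's gensky and gendaylit, while the site coordinates, date, and measurement values are hypothetical placeholders.

```python
# Assemble gensky / gendaylit command lines as argument lists.
SITE = ["-a", "50.6", "-o", "-5.6", "-m", "-15"]  # latitude, longitude, meridian

def gensky_sky(month, day, hour, sky_opt):
    # sky_opt: e.g. "-c" for overcast, "+i" for intermediate with sun
    return ["gensky", str(month), str(day), str(hour)] + SITE + [sky_opt]

def gensky_br(month, day, hour, sky_opt, diff_irr, dir_irr):
    # -B / -R: horizontal diffuse and direct irradiances (W/m2)
    return gensky_sky(month, day, hour, sky_opt) + [
        "-B", str(diff_irr), "-R", str(dir_irr)]

def gendaylit_L(month, day, hour, dir_norm_illum, diff_horiz_illum):
    # -L: direct normal and diffuse horizontal illuminances (lux)
    return ["gendaylit", str(month), str(day), str(hour)] + SITE + [
        "-L", str(dir_norm_illum), str(diff_horiz_illum)]

cmd = gensky_br(6, 21, 11.5, "-c", 320.0, 0.0)
```

Each list can then be passed to, for example, subprocess.run to produce the Radiance sky description.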

Table 3 .
Luminance maps in false colors, in the real spaces (Room#1), and by simulation.

Table 4 .
Luminance maps in false colors, in the real spaces (Room#2), and by simulation.

Table 5 .
Luminance maps in false colors, in the real spaces (Room#3), and by simulation.

Table 6 .
Luminance maps in false colors, in the real spaces (Room#4), and by simulation.