Utilizing a Terrestrial Laser Scanner for 3D Luminance Measurement of Indoor Environments

We present a method to measure 3D luminance point clouds by applying the integrated high dynamic range (HDR) panoramic camera system of a terrestrial laser scanning (TLS) instrument to perform luminance measurements simultaneously with laser scanning. We present the luminance calibration of a laser scanner and assess the accuracy, color measurement properties, and dynamic range of the luminance measurement achieved in a laboratory environment. In addition, we demonstrate the 3D luminance measuring process through a case study with a luminance-calibrated laser scanner. The presented method can be utilized directly as a luminance data source: a terrestrial laser scanner can be prepared, characterized, and calibrated for the simultaneous measurement of both geometry and luminance. Finally, we discuss the state and limitations of contemporary TLS technology for luminance measuring.


Introduction
Laser scanning is a commonly applied technology for indoor 3D measurement. It is based on measuring 3D coordinates of an environment using a laser beam; as a result, a dense set of 3D measurements forms a 3D point cloud. Most contemporary laser scanners also contain one or more integrated cameras that capture a panoramic image for point cloud colorization. The R (red), G (green), and B (blue) values of the captured image are projected onto the point cloud to color the points. Beyond visualization, this color information has been applied to registration [1] and segmentation [2]. However, the point cloud colorization quality varies depending on the selected terrestrial laser scanning (TLS) instrument [3].
In the past, terrestrial laser scanning has been widely applied in archaeology [4], cultural heritage [5], forestry [6], industry [7], geology [8], surveying [9], and construction engineering [10]. Today, terrestrial laser scanners are also a commonly used instrument in the architecture, engineering, construction, owner, operator (AECOO) industry. In TLS, one path of development is automating the processing of raw measurement into more sophisticated 3D models [11][12][13]. Another path of development is the integration of parallel data and sensors in laser scanning [14,15].
Two-dimensional luminance photometry is commonly applied to measure indoor surface luminances [16,17]. Luminance is the measure of light reflected or emitted from an area, commonly measured in candelas per square meter (cd·m⁻²). In lighting design, luminance distribution is an important aspect, as it affects the security, well-being, visual comfort [18], and aesthetics of the indoor environment. The luminance distribution is usually measured via imaging luminance photometry, where a calibrated digital camera is used to obtain an absolute luminance value for each pixel. Imaging luminance photometry has been applied in the assessment of light pollution [19,20]. High dynamic range (HDR) imaging is a key technology in imaging luminance photometry [21]. In HDR imaging, a set of images with different exposure times is combined to extend the dynamic range beyond that of a single exposure. This technique has been applied in architecture [22]. Moreover, the HDR technique is under constant development, for example by being applied to 360° imaging [23] and through improved image fusion algorithms [24]. As a technology, imaging luminance photometry via HDR imaging has become well established. However, an innate problem of measurement relying on individual images is the loss of 3D data.
Via photogrammetric 3D reconstruction, 2D luminance images can also be utilized for obtaining a 3D luminance measurement of an indoor environment [25]. Still, photogrammetry can perform poorly when measuring the 3D geometry of smooth, monocolored, and uniform surfaces [26]. Luminance measurement applications require accurate radiometric data, and the use of 3D luminance measuring in design would be beneficial not only for lighting designers but also for architects [27]. However, indoor 3D luminance measurements made with a terrestrial laser scanner have not been extensively studied. Existing research has shown that luminance maps obtained via imaging luminance photometry can be combined with TLS [28] and MLS point clouds [29,30]. As stated, contemporary TLS instruments commonly contain imaging sensors for point cloud colorization. As these sensors are increasingly capable of HDR imaging [3], the utilization of such TLS instruments for producing a 3D point cloud with luminance information is a topical development issue. While the use of TLS for lighting design via luminance measuring has been suggested in earlier research [27,28], a solution employing the TLS images for luminance measuring is missing: Rodrique et al. [27] utilized a separate imaging luminance photometer and did not register the luminance values into a 3D luminance point cloud; instead, they assessed the geometry and luminance measurements as separate entities. Vaaja et al. [28] manually combined images obtained with a conventional single-lens reflex camera with a point cloud produced by TLS. However, in this case, the images did not cover the full 360°, and the data integration relied on a manual methodology, limiting the efficiency.
In this study, we aim to present a method to measure 3D luminance point clouds. We apply the integrated high dynamic range (HDR) panoramic camera system of a TLS instrument for 3D HDR luminance measurements simultaneously with laser scanning. We present a method for utilizing the images captured with a TLS instrument as the luminance data source (Table 1). Firstly, we present the luminance calibration of a laser scanner, and we assess the accuracy, color measurement properties, and dynamic range of luminance measurement achieved in a laboratory environment. Secondly, we demonstrate the 3D luminance measuring process through a case study with a luminance-calibrated laser scanner. We analyze the results and discuss the effect of scanning angles on luminance measurements. In addition, we explore future research directions in 3D luminance measuring. The novelty of our study is that the method covers 360° 3D luminance measurement and increases the level of automation in the data integration. In addition, the luminance point cloud data is enriched with the angle between the surface normal and the measurement direction.

Terrestrial Laser Scanner
For terrestrial laser scanning, we used a time-of-flight scanner Leica RTC360 (Hexagon AB, Stockholm, Sweden) [31,32]. According to the manufacturer, the scanning field of view is 360° horizontal and 300° vertical, and the measured 3D point accuracy is 1.9 mm at 10 m. The scanner has three 4000 × 3000 pixel image sensors mounted to the scanner body (see Figure 1). Together, the sensors cover a vertical view of 300°. These sensors are used to create a panoramic image of 20,480 × 10,240 pixels with 5-bracket HDR imaging. The entire equirectangular panoramic image consists of 12 adjacent vertical images. The total scan time is 4 min 21 s, including HDR imaging with a scan resolution setting of 3.0 mm at 10 m. In the RTC360, HDR imaging is performed with a fixed exposure without any prior exposure measurements [3]. The imaging system of the RTC360 can therefore be calibrated to interpret the absolute luminance values of the measured environment. Furthermore, the panoramic image can be exported for editing as an EXR file without losing the high dynamic range of the images and registered into the point cloud without losing the dynamic information. These attributes make the Leica RTC360 a usable measurement device for luminance-calibrated terrestrial laser scanning.

Reference Color Target
A standardized color target, the X-Rite ColorChecker Classic chart (Grand Rapids, MI, USA) [33], was attached to the wall. The ColorChecker Classic chart is used in photography for creating camera profiles and correcting white balance and color. The chart is designed for color management in a variety of lighting conditions. Figure 2 shows the chart of 24 different colored patches with measured colorimetric reference data provided by X-Rite. The size of the ColorChecker Classic was 21.59 × 27.94 cm. In the X-Rite documents, the patches were labeled in a different order. Table 2 lists colorimetric reference data for the ColorChecker Classic manufactured after November 2014. The values were reported as CIE L*a*b* data.

Reference Luminance Measurements for the Color Target
The 16-bit sRGB values were measured and calculated for each patch in the reference color target. This was done for two reasons. Firstly, 16-bit sRGB values were not provided by the color target manufacturer. Secondly, by measuring and calculating the sRGB values for each patch ourselves, we were able to obtain the exact measurements in our laboratory environment, including especially the influence of lighting. Reference luminance values from the X-Rite ColorChecker Classic were measured with a Konica Minolta CS-2000 spectroradiometer (Teban Gardens Cres, Singapore). According to the manufacturer, the range of measurable luminances of the spectroradiometer is 0.003-500,000 cd·m⁻² with a luminance measurement accuracy of ±2%. For each measured patch of the color target, the average of five consecutive measurements was used. For each channel, every measured value was scaled to the maximum 16-bit sRGB, calculated from the CIELAB values provided by X-Rite [33].
A test environment was set up for measuring the radiometric capability of tripod-mounted TLS instruments (Aalto University, Espoo, Finland). The space was illuminated by luminaires fitted with D65 standard fluorescent tubes with a color rendering index Ra > 93. Figure 3 illustrates the spectrum of patch number 1 (Figure 2) of the ColorChecker measured with the spectroradiometer. The spikes of the D65 fluorescent illuminant are clearly visible in the spectrum. Figure 4 illustrates the CIE color matching functions x̄(λ), ȳ(λ), z̄(λ) [34].
Each spectral power distribution P(λ) of the measured patches was converted into X, Y, and Z color values by applying the CIE color-matching functions [34] x̄(λ), ȳ(λ), z̄(λ) (Equations (1)-(3)):

X = k ∫ P(λ) x̄(λ) dλ, (1)
Y = k ∫ P(λ) ȳ(λ) dλ, (2)
Z = k ∫ P(λ) z̄(λ) dλ. (3)

For each patch, the X, Y, and Z values were normalized and then converted into linear R, G, and B values in the sRGB (IEC 1999) color space, applying Equation (4):

[R, G, B]ᵀ = [[3.2406, −1.5372, −0.4986], [−0.9689, 1.8758, 0.0415], [0.0557, −0.2040, 1.0570]] [X, Y, Z]ᵀ. (4)

The acquired linear R, G, and B values were then scaled to make them comparable with the reference values and applied to calculate the relative luminance values with Equation (5) from the sRGB standard [35]:

Y = 0.2126 R + 0.7152 G + 0.0722 B. (5)
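The XYZ-to-luminance pipeline of Equations (4) and (5) can be sketched in Python. The conversion matrix and luminance coefficients come from the sRGB standard; the D65 white-point check at the end is our own illustration, not part of the measurement procedure:

```python
import numpy as np

# XYZ -> linear sRGB conversion matrix from the sRGB standard
# (IEC 61966-2-1), i.e., the matrix of Equation (4).
M_XYZ_TO_RGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_linear_srgb(xyz):
    """Convert normalized CIE XYZ tristimulus values to linear sRGB."""
    return M_XYZ_TO_RGB @ np.asarray(xyz, dtype=float)

def relative_luminance(rgb_linear):
    """Equation (5): relative luminance from linear sRGB values."""
    r, g, b = rgb_linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Sanity check: the D65 white point maps to linear sRGB ~ (1, 1, 1)
# and a relative luminance of ~ 1.
white = xyz_to_linear_srgb([0.95047, 1.0, 1.08883])
```

For a measured spectral power distribution, the X, Y, and Z values of Equations (1)-(3) would first be obtained by numerically integrating P(λ) against the tabulated CIE color-matching functions, e.g., with `numpy.trapz`.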

Characterizing the Color and Luminance Capturing of the TLS
The HDR images (Figure 5) captured with the TLS were first exported as 32-bit EXR files which were then converted to linear 16-bit TIF format. From the linear images, the sRGB (standard Red Green Blue) R, G, and B values were obtained as a median pixel value for each patch of the color target and as the average of five images. The values were scaled in order to make them comparable with the measured values. For each channel, every value measured with the TLS was scaled to the maximum 16-bit sRGB calculated from the CIE L*a*b* values provided by X-Rite (Table 2). The 16-bit R, G, and B values were then converted into relative luminances applying Equation (5). A luminance calibration factor was obtained by comparing the relative luminance measured with the TLS to the absolute luminance measured with the spectroradiometer.
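The EXR-to-16-bit conversion step can be sketched as follows, assuming the 32-bit linear image has already been read into a float array; the file I/O and the choice of white level are illustrative, not taken from the paper:

```python
import numpy as np

def linear_float_to_uint16(img_f32, white_level=1.0):
    """Scale a linear 32-bit float image (as exported from an EXR file)
    to 16-bit integer values.

    white_level is the float value mapped to 65535; values above it are
    clipped. The mapping stays linear, so relative luminance ratios in
    the image are preserved.
    """
    scaled = np.clip(np.asarray(img_f32, dtype=np.float32) / white_level,
                     0.0, 1.0)
    return np.round(scaled * 65535).astype(np.uint16)
```

Because the scaling is linear, the per-channel values obtained this way remain directly comparable with the scaled spectroradiometer reference after normalization.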

Luminance Data Processing
As in Section 2.2.3, the HDR images were exported as 32-bit EXR files from the TLS measurement data, and the 32-bit EXR files were converted to 16-bit TIF files, applying Python 3.6.9 with the OpenEXR library. The relative luminance was calculated with Equation (5), and the 16-bit relative monochromatic luminance values were coded over the three 8-bit RGB channels of the respective pixel, and a new image was saved as an 8-bit TIF [25]. Hence, the new 8-bit relative luminance TIF image contains a wider dynamic range than a regular 8-bit RGB image, as all three channels carry the relative luminance data. The coded 8-bit file format allowed further processing of the data in software that does not support a wider dynamic range, e.g., 16-bit data. The 8-bit TIF images were projected and registered as the R, G, and B values in the point cloud. Point by point, the R, G, and B values were converted back to relative luminance values. Finally, the luminance calibration factor (see Section 2.2.3) was applied to interpret the relative luminance values as absolute luminance values, and the absolute luminance value was registered to each point in the point cloud. Figure 6 illustrates the luminance point cloud generating process.

Figure 7 shows the measured space, the B-Hall, a lecture hall at Aalto University, Espoo, Finland. The maximum capacity of the B-Hall is 320 persons, and the floor area is 297 m². The lecture hall was illuminated only by its interior lights. Seven scans were taken of the hall, and the scanned point clouds were registered with the manufacturer's Leica Cyclone REGISTER 360 version 1.6.2 (Hexagon AB, Stockholm, Sweden) software [36]. Each scan took 4 min 21 s. The linear EXR images were exported as separate image files and converted to 16-bit TIF images. The scanned point clouds were colored with the TIF images, and the color values of the point clouds were converted to absolute luminance values, as described in Section 2.3.
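The coding of 16-bit luminance values over the three 8-bit channels can be illustrated with a simple sum-based scheme in which every channel carries part of the signal; the exact coding used in [25] may differ, so this particular scheme is an assumption made for illustration:

```python
def encode_lum(l16):
    """Code a 16-bit luminance value (0..65535) over three 8-bit channels.

    Hypothetical sum-based coding: the value is rescaled to 0..765
    (= 3 * 255) and distributed as evenly as possible over R, G, and B,
    so all three channels carry the relative luminance data.
    """
    v = int(round(l16 / 65535 * 765))
    base, extra = divmod(v, 3)  # spread the remainder over the channels
    return tuple(base + (1 if i < extra else 0) for i in range(3))

def decode_lum(rgb):
    """Recover the 16-bit luminance value from the coded channels."""
    return int(round(sum(rgb) / 765 * 65535))
```

Summing three 8-bit channels yields 766 distinct levels instead of 256, which is why the coded 8-bit image carries a wider luminance dynamic range than a regular 8-bit RGB image, at the cost of some quantization relative to the original 16-bit data.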
Lighting analysis was performed with CloudCompare 2.10.2 software (EDF, Paris, France) with standard tools such as plane fitting, octree subsampling, and distribution fitting.
In laser scanning, the point densities of measured surfaces vary, depending on the different angles of incidence and the distance from the laser scanner. Hence, in order to balance the point density, all the point clouds from individual scan stations were sampled in CloudCompare using octree-based subsampling, where the octree level was set to 12. The size of a single scan was about 160 million points, and subsampling reduced the point cloud to about 16-25% of the original. The densest point spacing of the subsampled cloud was about 5 mm. The subsampled point clouds were then merged into a single point cloud, and the merged point cloud was resubsampled with octree level 12 to avoid unnecessarily large file sizes.
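The idea of octree-based subsampling can be sketched with a minimal voxel-grid version: at octree level L, the cubic bounding box of the cloud is divided into 2^L cells per axis and one point is kept per occupied cell. This loosely mirrors CloudCompare's behavior; CloudCompare's own implementation details (e.g., which point in a cell is retained) may differ:

```python
import numpy as np

def octree_subsample(points, level=12):
    """Keep the first point falling in each occupied octree cell."""
    pts = np.asarray(points, dtype=float)
    mins = pts.min(axis=0)
    extent = float((pts.max(axis=0) - mins).max()) or 1.0  # cubic box
    n = 2 ** level
    # Integer cell coordinates per axis, clamped so the box maximum
    # falls into the last cell.
    cells = np.minimum(((pts - mins) / extent * n).astype(np.int64), n - 1)
    # Flatten 3D cell coordinates to a single key and keep the first
    # point per unique cell.
    keys = (cells[:, 0] * n + cells[:, 1]) * n + cells[:, 2]
    _, keep = np.unique(keys, return_index=True)
    return pts[np.sort(keep)]
```

At level 12 the cell size is the bounding-box edge divided by 4096, which for a typical lecture-hall scan is on the order of the ~5 mm point spacing reported above.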

Sample Areas
We chose two sample areas (horizontal and vertical) for detailed analysis. In addition, we present a concise analysis of seven sample areas A-G (Table 3). The size of each sample area was 0.5 m × 0.5 m. Figure 8 presents the sample areas. We applied the CloudCompare 2.10.2 plane fitting tool in order to obtain the angles between the scan stations and the surface normals. The angles between the scan stations and the surface normals of the sample areas ranged from 9 to 88 degrees. Detailed information on the vertical and the horizontal sample areas can be found in Appendix A.2.

Table 4 presents the reference sRGB values measured from the X-Rite ColorChecker Classic with the spectroradiometer. The measured spectral power distributions were converted into sRGB values applying Equations (1)-(4).

Table 5 presents the laser scanner luminance measurements compared to the spectroradiometer luminance measurements. Only the lowest row of grayscale patches (1-6) was used for luminance calibration (see Figure 2). The 16-bit values were converted into relative luminance values, applying Equation (5). The laser scanner absolute luminance measurements were derived using a simple linear regression against the spectroradiometer values. We assume that sensor noise increases the low-end luminance values captured by the camera of the laser scanner. Hence, an improved iteration of the laser scanner absolute luminance values was derived by reducing the original 16-bit value by the absolute difference at the smallest compared luminance value (18.2 cd·m⁻² − 13.9 cd·m⁻² = 4.3 cd·m⁻²) multiplied by the calibration factor (146.3) obtained with the linear regression.

Applying the linear regression and the noise removal, the minimum and maximum measurable luminance values are 4.3 cd·m⁻² and 443.6 cd·m⁻², respectively. Table 6 presents the consistency of five consecutive images captured with the TLS.
The relative standard deviation (RSD) in the five repetitions was less than 2% for every grayscale patch of the reference color target. The channel-wise values can be found in Appendix A.1. For the measured patch number 12, the processing from the spectrum into sRGB values resulted in a negative value for the red channel. This is an expected outcome for certain colors. However, it obviously makes the comparison between the measured values and the image values questionable to a certain extent. Furthermore, some measured color patches deviated substantially in the image compared to the measured values. However, the large relative differences in single channels did not carry through to the calculated luminance values and their relative differences. This may be explained by the fact that often the large relative difference in a single channel was due to a comparison to values that were absolutely small.
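The calibration described above can be sketched as a least-squares fit; the text does not specify whether the regression included an intercept, so a through-origin fit is assumed here, and the noise-floor handling is likewise a simplified illustration:

```python
import numpy as np

def luminance_calibration_factor(relative, reference):
    """Least-squares factor so that reference ≈ factor * relative.

    relative  -- relative luminances from the TLS images (Equation (5))
    reference -- absolute luminances (cd/m^2) from the spectroradiometer
    A through-origin fit is assumed; the study's regression may also
    have included an intercept.
    """
    rel = np.asarray(relative, dtype=float)
    ref = np.asarray(reference, dtype=float)
    return float(rel @ ref) / float(rel @ rel)

def to_absolute(relative, factor, noise_floor_cd_m2=0.0):
    """Convert relative luminances to cd/m^2, subtracting an optional
    sensor-noise floor (e.g., the 4.3 cd/m^2 low-end offset above)."""
    lum = np.asarray(relative, dtype=float) * factor - noise_floor_cd_m2
    return np.maximum(lum, 0.0)
```

In the study, only the six grayscale patches entered the fit, which keeps the factor independent of the color-channel weighting issues discussed for the chromatic patches.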

TLS 16-Bit Linear Images Compared to the Reference Values Measured with a Spectroradiometer
Relative luminance values were calculated from the TLS 16-bit linear images. Figure 11 shows the luminance measurement obtained from a single scan station projected onto 3D points, while Figure 12 illustrates the seven merged luminance measurements subsampled to octree level 12 as described in Section 2.3. The range of measured luminances was 0-443.6 cd·m⁻². In the measured space, this range covers most of the measurable surfaces. However, the luminances of the light sources and the surfaces around them were too high to be measured with the TLS used in this study.

Figure 13 illustrates the point clouds and their corresponding merged histograms for the vertical and horizontal sample areas (see Figure 8). Illustrations of each laser scan and their merged point clouds and corresponding histograms for both sample areas can be found in Appendix A.2. Tables 9 and 10 present the measured features and statistics for the vertical sample area and the horizontal sample area, respectively. The values presented are the median, Gaussian mean, minimum and maximum luminances, standard deviation, relative standard deviation, number of points, and angle between the surface normal and the measurement direction.

The sample areas show that, especially near the scanner, some scans are over-represented (Tables 9 and 10). The number of points depends on the scanning angle and distance, so these features must be taken into account in visual observation. Table 11 presents the same measured features and statistics for the sample areas A-G, taken from the merged luminance measurements subsampled to octree level 12 as described in Section 2.3.

Table 11. The sample areas A-G: median luminance, Gaussian mean luminance, minimum luminance, maximum luminance, standard deviation, relative standard deviation, number of points, and angle between the surface normal and the measurement direction.
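The per-area statistics reported in Tables 9-11 can be computed as follows. We take the "Gaussian mean" to be the mean of a normal distribution fitted to the luminances, which for a maximum-likelihood fit coincides with the arithmetic mean; this reading of the paper's terminology is an assumption:

```python
import numpy as np

def sample_area_stats(luminances):
    """Statistics for one 0.5 m x 0.5 m sample area.

    Returns the quantities reported per sample area: median, Gaussian
    mean, minimum, maximum, standard deviation, relative standard
    deviation (%), and number of points.
    """
    l = np.asarray(luminances, dtype=float)
    stats = {
        "median": np.median(l),
        "gaussian_mean": l.mean(),   # ML fit of a normal distribution
        "min": l.min(),
        "max": l.max(),
        "std": l.std(ddof=1),        # sample standard deviation
        "n_points": int(l.size),
    }
    stats["rsd_percent"] = 100.0 * stats["std"] / stats["gaussian_mean"]
    return stats
```

Because the number of points per area depends on scanning angle and distance, comparing `n_points` across areas indicates how strongly individual scans are over-represented near the scanner.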

Laboratory Measurements
We characterized the color and luminance measurement quality of a terrestrial laser scanner and we presented a workflow where an HDR image captured by a TLS instrument was converted into absolute luminance values. Compared to the reference, the TLS captured luminance values with an average absolute difference of 2.0 cd·m⁻² and an average relative difference of 2.9% for the grayscale patches (No. 1-6). For all patches, the average absolute difference and average relative differences were 5.7 cd·m⁻² and 7.5%, respectively. The relative difference between the TLS measurement and the reference measurement was notable for certain patches such as blue (46.7%) and cyan (22.3%). This indicates that certain heavily weighted spectra translate suboptimally into luminance values when using standard sRGB conversion factors. However, as we can characterize the channel-wise values for each patch in the X-Rite ColorChecker, we would be able to obtain conversion factors that would be more optimal for the camera in the TLS than the sRGB conversion factors. Optimized factors could possibly decrease the difference between the luminance values measured with the TLS and the reference values for the weighted spectra.

Field Measurements
We explored the possibilities of simultaneous laser scanning and luminance imaging through a case study. Compared with previous integrations of luminance data and TLS point clouds, the level of automation increased, and the luminance data integrity and usability improved.
The dynamic range needed for luminance measurement depends on the application. The widest dynamic range is required when measuring nighttime outdoor environments, for example, road lighting. In order to measure from the lowest mesopic luminances on the road surface to the glaring light source, a measurement range of 0.01 cd·m⁻² to approximately 100,000 cd·m⁻² would be needed. This is a little more than 23 f-stops. The system used in this study had an effective dynamic range of 4.3-443.6 cd·m⁻², or a bit less than 7 f-stops. This dynamic range is almost sufficient to measure the luminance distribution of the surfaces in an indoor space but nowhere near wide enough for road lighting measurements. Moreover, it is a technologically difficult task to extend the dynamic range towards the low luminance levels. The sensors would have to be more sensitive yet have a better signal-to-noise ratio. Another solution is to apply HDR imaging with longer exposure times, which obviously makes measuring slower or less convenient.
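The f-stop figures can be checked directly, since each stop corresponds to one doubling of luminance:

```python
import math

def f_stops(l_min, l_max):
    """Dynamic range in f-stops (one stop = one doubling of luminance)."""
    return math.log2(l_max / l_min)

road = f_stops(0.01, 100_000)   # road-lighting requirement, ~23.3 stops
indoor = f_stops(4.3, 443.6)    # effective range of the tested system
```

Expressed this way, the gap between the tested system and the road-lighting requirement is roughly 16-17 stops, most of it at the dark end of the range.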
For indoor applications, however, HDR imaging could be applied by adding images captured with shorter exposure times. This way, the dynamic range of a TLS could be extended to be sufficient for indoor measurement from the low-end surface luminances (1 cd·m⁻²) to the glaring light sources (100,000 cd·m⁻²). This upward extension of the dynamic range would enable the measurements needed when calculating the unified glare rating (UGR). Furthermore, it would be possible to measure the luminance of the light sources if the dynamic range of HDR imaging is wide enough.
Because terrestrial laser scanning determines the location of the measuring device, the measurement angle can be defined for each measured point. Each point can be assigned a location, color value, absolute luminance value, intensity, point normal, and the angle between the point normal and the surface normal. This information can be used in the future to determine the properties of the scanned object, such as reflectivity and gloss. The angle between the scan station and the measured surface normal was not verified by any other method in this study; we considered the collected point cloud data accurate enough for angle measurement.
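Given the scanner position and a per-point surface normal, the angle stored with each point can be computed from the dot product; the function and argument names here are our own:

```python
import numpy as np

def incidence_angle(point, scanner_pos, surface_normal):
    """Angle in degrees between the measurement direction (scanner to
    point) and the surface normal, as stored per point in the
    luminance point cloud."""
    d = np.asarray(point, dtype=float) - np.asarray(scanner_pos, dtype=float)
    n = np.asarray(surface_normal, dtype=float)
    # Absolute value makes the result independent of the normal's sign.
    cos_a = abs(d @ n) / (np.linalg.norm(d) * np.linalg.norm(n))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```

The `clip` guards against floating-point round-off pushing the cosine slightly outside [-1, 1] before `arccos`.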

Limitations of TLS as a Luminance Photometer
Usually, a TLS instrument captures a panoramic image as a composite of several adjacent images that are overlapped and blended together, a technique often called image stitching. The quality of the stitching is difficult to quantify, and we did not assess the inaccuracies of image stitching. However, the TLS instrument (Leica RTC360) would be more suitable if the uncertainty of the panoramic image stitching process were known and if the bit depth of the measurement could be maintained in the RGB-registered point clouds. As for now, registering the raw imaging bit depth into the point cloud requires manual effort.
Different TLS instruments employ various imaging sensor installations, such as completely separate camera systems operated from atop the TLS instrument (e.g., Riegl [37]), integrated imaging sensors utilizing the same rotating mirror as the laser ranging sensor, or sets of cameras mounted in the instrument's chassis, as in the applied Leica RTC360 scanner [3]. The realization of the imaging system affects the quality of produced panoramic images, e.g., through differences in parallax.
Contemporary TLS instruments are capable of obtaining rather high point densities and measurement speeds. For example, for the instrument applied in our work, the manufacturer reports a measuring speed of 2 million points per second and a point spacing of 3 mm at 10 m [31]. As a result, a single point cloud obtained with this instrument may contain up to approx. 200 million points [38]. A mapping campaign in a complex indoor environment may therefore well exceed a billion points. These data amounts present a technical challenge and require suitable storage systems to be applied in processing and distribution. Understandably, point cloud storage [39], distribution [40], and application [41] have become topical development tasks.
For assessing the color measurement of the TLS instrument, the 24-patch X-Rite ColorChecker Classic was applied. In order to improve the color measurement assessment, a color chart with the 99 patches defined in ANSI/IES Method for Evaluating Light Source Color Rendition TM-30-20 [42] could be used.

Future Research Directions
In future studies, a method for determining the reflectivity of a surface can be developed, as the locations of the measurements and the locations and luminances of the light sources are known. However, this alone does not completely solve the reflectivity measurement. For a more reliable reflectivity measurement, the light distribution of the light sources and the integration of light within the measurement space also need to be determined.
Simultaneous geometry and luminance measuring executed with a TLS can be applied in lighting design and lighting retrofitting. A 3D mesh model can be created from the measured point cloud. The mesh model can be converted into a CAD 3D model, which can be imported into lighting design software such as DIALux or Relux.
As of yet, a TLS cannot replace conventional imaging luminance photometry in terms of luminance measurement; since the scanner alone is not yet comparable in image quality, the best result is obtained by combining terrestrial laser scanning and photogrammetry. However, TLS-based luminance measurement does not fall far behind. If the measurable luminance range were widened, TLS luminance measurement would perform at a level similar to conventional imaging luminance photometry for indoor measurements and outdoor daytime measurements. Furthermore, the required improvements have already been solved as individual technologies, but the advancements have not yet been implemented in a TLS. Hence, we are only a few steps away from luminance measurements being obtained as a side product of geometry measurement, or vice versa. In TLS luminance measurement, the luminance data is registered into the measured geometry, a feature that is completely unobtainable using only conventional imaging luminance photometry.
As TLS point clouds capture the surrounding environment from all directions, their study requires different user interfaces than those used for navigating 2D image data sets. 3D point clouds can of course be studied on conventional displays, either with freely navigable 3D environments or, akin to panoramic images, by fixing the viewpoint and jumping from one measuring position to another. In complex indoor environments, immersive display devices, such as virtual reality head-mounted displays, offer a potentially more intuitive alternative for navigating complex virtual 3D environments. By leveraging game-engine technology, laser scanning point clouds can be brought into VR [43]. Adapting the point cloud visualization to the study of luminance data represents an obvious task for future development.

Data Availability Statement:
The data presented in this study are openly available in Zenodo at 10.5281/zenodo.4743890, reference number [44].

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Measurement Details
Appendix A.1. Color Target

Tables A1 and A2 show the 16-bit values for each R, G, and B channel measured with the TLS and the spectroradiometer, respectively. The values are linear and scaled to be comparable. Table A3 presents the relative differences between the measurements.

Table A1. Linear TLS measurement scaled to be comparable with the X-Rite color target values for each channel R, G, and B.

Table A2. Linear spectroradiometer measurement scaled to be comparable with the X-Rite color target values for each channel R, G, and B.

Appendix A.2. Sample Areas

Tables A4 and A5 show the luminance values of each sample area for each scan, as well as illustrations of each laser scan and the merged point clouds and corresponding histograms for both sample areas. The deviations observed between the different scans were caused by different viewing angles and possible changes in luminance between the scans.

Table A4. The vertical sample area: included scans (single scan stations "1-7", merged scans "All", and the merged scan subsampled with octree level 12), sample area visualization, histogram of luminances, number of points, and angle between the surface normal and the measurement direction.

Table A5. The horizontal sample area: included scans (single scan stations "1-7", merged scans "All", and the merged scan subsampled with octree level 12), sample area visualization, histogram of luminances, number of points, and angle between the surface normal and the measurement direction. Scans 2 and 3 were left with no observations due to the large measurement angle. Scan 7 was partly overexposed and was therefore omitted from the merged clouds.