Assessment of the Performance of a Portable, Low-Cost and Open-Source Device for Luminance Mapping through a DIY Approach for Massive Application from a Human-Centred Perspective

Ubiquitous computing has enabled the proliferation of low-cost solutions for capturing information about the user's environment or biometric parameters. In this sense, the do-it-yourself (DIY) approach to building new low-cost systems, or to verifying the correspondence of low-cost systems with professional devices, broadens the range of possible applications. Following this trend, the authors present a complete DIY and replicable procedure to evaluate the performance of a low-cost video luminance meter consisting of a Raspberry Pi and a camera module. The method initially consists of designing and developing an LED panel and a light cube that serve as reference luminance sources. The luminance distribution across the two reference light sources is determined using a Konica Minolta luminance meter. With this approach, it is possible to identify an area for each light source with an almost constant luminance value. By applying a frame that covers part of the panel and exposes only the area with nearly homogeneous luminance values, and by operating the two systems in a dark space, with the low-cost video luminance meter mounted on a professional reference camera photometer (LMK Mobile Air), it is possible to check the discrepancy in luminance values between the low-cost and professional systems when pointing at different homogeneous light sources. In doing so, we primarily consider the peripheral shading effect, better known as the vignetting effect. We then differentiate the correction factor S of the Radiance pcomb function to better match the luminance values of the low-cost system to the professional device. We also introduce an algorithm to differentiate the S factor depending on the light source. In general, the DIY calibration process described in the paper is time-consuming. However, the subsequent applications in various real-life scenarios allow us to verify the satisfactory performance of the low-cost system in terms of luminance mapping and glare evaluation compared to a professional device.


Introduction
Glare is essentially produced by daylight or electrical sources and is characterised by an uneven luminance distribution in the field of view (FoV) [1]. Glare can impair people's visual performance or cause discomfort [2]. There are various indices for quantifying glare in different situations, from the unified glare rating (UGR) used for artificial lighting, to the daylight glare probability (DGP) for light entering through windows, to the contrast ratio (CR), defined by considering the contrast between certain luminance values and those of the surroundings [3]. In defining glare issues, the luminance of the glare source is, of course, the most important factor, but there are several other factors. This study addresses the following questions:
1. What is the response of a low-cost camera compared with a professional camera photometer in different controlled environments with different light sources?
2. Is there a considerable difference between the luminance values of the low-cost camera and the professional one, and is it possible to consider a correction factor differentiated for the different lighting systems?
3. Eventually, is it possible to consider an even simpler algorithm that automatically adjusts the luminance distribution of the low-cost system, considering the different lighting systems, to match that of the professional camera?
The method described in Ref [5] is time-consuming and cannot be performed automatically in a few seconds on a portable device. We would like to find out whether it is possible to limit the time for capturing the images to less than 3 s and how large the error in the resulting luminance mapping is under this important constraint and with different light sources. For this purpose, we considered two cameras: a professional DSLR camera from Canon equipped with a Sigma fisheye lens, and a Raspi cam controlled by a Raspberry Pi. These two devices were positioned in front of different lighting panels used as reference luminance sources (see the Materials and Methods section) to collect data and check the discrepancy between the two camera devices used for luminance mapping. The main results of the study are then applied to different everyday scenarios to confirm our findings. The idea is to verify whether it would be possible to attach the device to a helmet and capture information about the luminance level during the day from a human-centred perspective.

Materials and Methods
Two lighting panels were built, and different light sources (i.e., different light spectra) were considered on a small area with uniform luminance, as described in more detail in Section 2.1 below. Two luminance measurement systems were considered: one based on a low-cost approach and another on a professional reference instrument. For more details on the video luminance meters, see Section 2.2.

Lighting Panels Used as a Reference Luminance Source
Two different lighting systems were developed for the luminance analysis, following the principle of the DIY approach. They consist of an LED panel and a cube with a standard E27 socket (Figure 1). The LED panel is composed of the following elements:
• An aluminium frame, with the LED strips located along its long sides;
• An ethylene vinyl acetate (EVA) layer;
• A reflective paper;
• A light guide panel;
• A diffuser paper.
The strips consist of SMD2835 LEDs, both cool and warm white, spaced 1.6 cm apart (Figure 1a). A black frame is attached to the panel, on which a Cartesian plane was drawn to define a mesh of points with a resolution of 3 × 3 cm (Figure 1c).
The cube, with external dimensions of 32 × 32 cm, is realised using laminated pieces of wood, with an inner cover made of white alveolar polypropylene and an E27 light bulb socket positioned 6 cm from the bottom (Figure 1b), allowing the use of different lighting sources (i.e., halogen, fluorescent, incandescent). A foil of alveolar polypropylene was placed horizontally 15 cm from the floor to reduce the luminance discrepancy on the test surface. The upper surface consists of a white synthetic glass panel. The same Cartesian plane with a grid of 3 × 3 cm points was drawn over this test surface (Figure 1d).
A Konica Minolta LS-110 luminance meter is then used to evaluate the two panels' luminance distribution, considering a template that follows the reference points across the x and y axes of the Cartesian orthogonal system (Figure 2). The luminance values of the LED panel are defined in different configurations to allow CCT and intensity changes. On the other hand, only one configuration is considered for the halogen, fluorescent and incandescent lamps in the cube.
This approach made it possible to identify an area of the two plates with small differences in luminance distribution (see the details in Section 3.2 and Appendix A). In this way, it was possible to install masks on the panels that limited the effective size of the lighting source to 6 × 6 cm, an area characterised by almost constant luminance and useful for the subsequent analysis.

Equipment Used and Flowchart Used to Acquire the High Dynamic Range Images
The wide-angle camera with a focal distance of 1.67 mm and an optical FoV of 160° on the diagonal (122° horizontal, 89.5° vertical), based on the OV5647 sensor (the V1 camera series), is considered in this research study. It has a native resolution of 5 MP and dimensions of 22.5 mm × 24 mm × 9 mm, making it well suited for mobile and other applications. The camera is connected to a Raspberry Pi 3 A+ equipped with a 64-bit quad-core processor running at 1.4 GHz, dual-band 2.4 GHz and 5 GHz wireless LAN, and Bluetooth 4.2/BLE [7]. The data collected by this device are compared with those of the camera photometer based on the Canon EOS 70D digital single-lens reflex (DSLR) camera equipped with a Canon APS-C CMOS sensor and a Sigma Fisheye 4.5 mm F2.8 EX DC HSM lens [8]. Table 1 shows the most important lighting characteristics. A 3D-printed adapter was designed to install the Raspberry Pi with the wide-angle camera on the fisheye lens of the DSLR camera (Figure 3a). The setup also uses an HD30.1 spectroradiometer data logger equipped with the HD30.S1 probe (Figure 3b) for spectral measurements. Both cameras took three pictures of the same subject with different exposure times, which were combined to create an HDR image. The procedure for setting the shutter speed of the camera photometer corresponds to the A2 procedure described in Ref [10] and is based on the use of the hand-held Konica Minolta luminance meter (Figure 3c), which makes it possible to determine the correct time of high dynamic range (THDR). The procedure allows the measurement of the highest luminance value. The three CR2 files collected with the camera photometer are then processed with LMK LabSoft to create the HDR file and generate a false-colour image of the luminance.
On the other hand, the three JPG files taken with the low-cost device are processed with the hdrgen software [11] to create the HDR file. The resulting HDR file is processed with the freely available Aftab HDR False Colour Analysis tool (Figure 4).
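For reference, a bracketed acquisition on the Raspberry Pi could be sketched as follows; shutter times, gains and file names are illustrative assumptions, not the exact script used in the study:

raspistill -ss 1000 -ISO 100 -awb off -awbg 1.5,1.2 -o exp_short.jpg   # 1 ms exposure
raspistill -ss 8000 -ISO 100 -awb off -awbg 1.5,1.2 -o exp_mid.jpg     # 8 ms exposure
raspistill -ss 64000 -ISO 100 -awb off -awbg 1.5,1.2 -o exp_long.jpg   # 64 ms exposure
hdrgen -o capture.hdr exp_short.jpg exp_mid.jpg exp_long.jpg           # fuse the bracketed JPGs into an HDR file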

Final Setup
The final setup of the low-cost camera calibration system is shown in Figure 5. The illuminated panels face the cameras positioned on a tripod. The tripod was also alternatively used to position the spectroradiometer (Figure 5). The vertical position is defined so that the centre of the spectroradiometer, or the centre of the segment connecting the centre of the Canon camera lens to the centre of the Raspberry cam lens, is placed at the same height as the centre of the lit panel. This configuration made it possible to collect luminance data in various configurations with both the professional and the low-cost camera, as well as data on the visual spectrum. The data are then processed to check the discrepancy in the luminance mapping captured by the low-cost camera compared to the professional sensor and to see whether the differences can be corrected depending on the lighting source.
Before starting the acquisition, a uniform white image was positioned in front of the camera, and a script was launched to correct the lens shading, also known as the vignetting effect [12,13], in software, following a methodology often used for Raspberry Pi-based microscopes with different types of cameras and customised lenses. We then checked whether this software correction was performed correctly. For this reason, in line with paragraph 2.3.5 of Ref [5], the setup described here was also used to verify the lens shading [13]. In this case, the tripod was positioned 60 cm from the LED panel, and the area illuminated by the LED panels was reduced to a surface of 2 × 2 cm (Figure 6). The low-cost camera is rotated in steps of 11.25°, covering the FoV of the lens, and three images at different exposures are acquired at each step.
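As a complementary post-hoc check, the residual shading can also be inspected directly in Radiance by dividing a test HDR by a normalised HDR of the uniform white field; a one-line sketch with hypothetical file names (this is not the authors' correction script):

pcomb -e 'ro=ri(1)/ri(2); go=gi(1)/gi(2); bo=bi(1)/bi(2)' test.hdr white_norm.hdr > flat.hdr   # flat-field division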


Vignetting Assessment
As reported in the previous paragraph, the setup allows us to acquire three images with different exposures for each rotation angle. By processing the derived HDR file for each rotation step with the Aftab HDR False Colour Analysis tool, we determined the luminance value for the illuminated area. We normalised those values by taking the value in the centre of the image as equal to 1 (relative luminance, y-axis, Figure 7). The same approach was used for the relative distance (x-axis in Figure 7), in line with Ref [14], where a relative distance of 0 refers to the centre of the image and a relative distance of 1 refers to the corner. Two considerations can be made:
• With the software correction of the low-cost camera applied as described above, the centre of the image records lower luminance values than those towards the corners of the image;
• It is possible to confirm the symmetrical distribution of the values, in line with expectations.
As confirmed by Figure 7, assuming a symmetrical distribution of the relative luminance differences, it is possible to define a calibration curve that starts from the centre of the FoV and extends to the corner. In this case, the third-order polynomial used in the cal file of the pcomb function is composed of the coefficients reported in Figure 7b. By applying the -f option provided by pcomb with this cal file, it was possible to remove the spatial non-uniformity of the luminance, as confirmed by Figure 8. Figure 8 shows how the relative luminance distribution across the different relative distances is effectively almost equal to 1. In the next section, we focus on the difference between the luminance values of the low-cost system and those monitored with the professional camera.
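As an illustration, the vignetting correction can be expressed as a pcomb cal file of the following form. This is a minimal sketch: the coefficients a1, a2 and a3 are placeholders that must be replaced with the fitted third-order values reported in Figure 7b.

cat > vignetting.cal <<'EOF'
{ third-order vignetting correction: divide each channel by the fitted falloff f }
xc = xmax/2; yc = ymax/2;
r = sqrt((x-xc)*(x-xc) + (y-yc)*(y-yc)) / sqrt(xc*xc + yc*yc);  { 0 at centre, 1 at corner }
a1 = 0.05; a2 = 0.25; a3 = -0.05;  { placeholders: use the coefficients in Figure 7b }
f = 1 + a1*r + a2*r*r + a3*r*r*r;
ro = ri(1)/f; go = gi(1)/f; bo = bi(1)/f;
EOF
pcomb -f vignetting.cal input.hdr > corrected.hdr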

Panel and Cube Characterisation with Konica Minolta Luminance Reference Meter
Appendix A shows the details of the analysis of luminance resulting from applying the Konica Minolta luminance meter to the two reference sources. The data are classified using a string consisting of three parts (e.g., 100_C_1). The first part identifies the light intensity (100% or 50%), and the second identifies the white type among warm (W), cool (C) or neutral (N). The third is the distance from the lighting sources: 1 = 55 cm, 2 = 30 cm, 3 = 15 cm. In the case of the cube, the letters H, F or I indicate, respectively, the halogen, fluorescent or incandescent lamp used in the test without changing the intensity, and the numbers 4 = 55 cm, 5 = 30 cm and 6 = 15 cm refer to the distance from the reference lighting sources. In all cases, it is possible to check the luminance distribution over the reference surfaces and identify an area of 6 × 6 cm where the monitored values are almost constant. Even though the light distribution of the warm white and cool white LEDs is unknown (the manufacturer's data, e.g., an .ies file, are not available), we verified experimentally that the selected area is the same for the different LED configurations. This is due to the geometric layout of the LEDs (Figure 1a), which is practically the same for warm and cool white, supporting the idea that there is no relevant difference in light distribution between the two types. Table 2 summarises some details of the area luminance marked in black in Appendix A. Figure 9 summarises the spectrum profiles for the different configurations considered; for a better comprehension of the light source colour rendition, see Ref [15].
Table 3 reports in the second and third columns the pairwise results of the luminance evaluation with the camera photometer and the Raspberry camera. The table also reports the value of the dimensionless coefficient S, the ratio between the luminance value measured with the camera photometer and that measured with the Raspberry camera. This is the factor used in the pcomb [16] tool developed by Greg Ward to edit the starting HDR image. The fourth column reports the luminance corrected by applying the S_pcomb factor, and the last three columns report the data acquired with the spectroradiometer. From Table 3, it is possible to highlight that, for all the LED lighting configurations considered, the S_pcomb factor is equal to 0.105 on average, with a minimum value of 0.08 and a maximum of 0.12. The average S_pcomb is 0.042 for the halogen lamp configurations, 0.116 for daylight, 0.136 for fluorescent and 0.045 for incandescent lamps.
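In other words, S is simply the ratio of the luminance read by the camera photometer to that read by the Raspberry camera, applied as a uniform rescaling of the HDR image. A short sketch with made-up readings for illustration:

# hypothetical LED case: S = L_photometer / L_raspicam = 525 / 5000 = 0.105
pcomb -s 0.105 raspicam.hdr > raspicam_scaled.hdr   # multiply every pixel by S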

Camera Photometer and Raspberry Camera Comparison
To answer the second question posed in the introduction, we want to verify whether it is possible to classify S_pcomb as a function of some of the variables reported in Table 3. For this purpose, Figure 10 plots S_pcomb as a function of different parameters characterising the different spectra. S_pcomb does not seem to be clearly classifiable considering only one parameter among CRI_Ra (Figure 10a), CCT (Figure 10b), the integral of spectral irradiance (Figure 10c) and E (Figure 10d). It is possible to highlight that all LED configurations are characterised by a CRI_Ra of less than 81, while the daylight and halogen configurations are characterised by a CRI_Ra higher than 90 but differ remarkably in terms of the integral of spectral irradiance. For this reason, it is feasible to define a conditional statement that classifies the lighting source as LED, halogen, fluorescent, incandescent or daylight and consequently identifies the correct S factor; a sketch is given below. The fairly marginal difference between halogen and incandescent lamps, and the minimal difference in terms of the S factor, convinced us to consider an average S value of 0.043 and not to distinguish between the two types of lamps.
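A minimal sketch of such a conditional statement follows; the CRI_Ra thresholds are taken from the observations above, while the split on the integral of spectral irradiance (and the assumption that daylight lies on its upper side) is a placeholder to be tuned against the Table 3 data:

#!/bin/sh
# classify_source.sh CRI_RA INTEGRAL -> prints the S factor to pass to pcomb -s
CRI=$1; INTEG=$2
SPLIT=5.0   # placeholder threshold separating daylight from halogen/incandescent
if [ "$(echo "$CRI < 81" | bc -l)" -eq 1 ]; then
    S=0.105                                    # all LED configurations
elif [ "$(echo "$CRI > 90" | bc -l)" -eq 1 ]; then
    if [ "$(echo "$INTEG > $SPLIT" | bc -l)" -eq 1 ]; then
        S=0.116                                # daylight (assumed larger integral)
    else
        S=0.043                                # halogen/incandescent, averaged
    fi
else
    S=0.136                                    # fluorescent
fi
echo "$S"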
In this way, the proper S_pcomb factor can be applied according to the lighting source detected.

False-Colour Analysis in Real Cases
Different scenarios are considered. Figure 11 shows the comparison of luminance mapping in false colour, considering the proper S factor, defined in accordance with the conditional statement used to classify the predominant light source. It is possible to make the following considerations about the luminance distribution [cd/m²] from the HDRs acquired with the two systems:

• The Raspi cam has a lower resolution and a smaller FoV, but this was known in advance;
• Even in a very low-light scenario (living room at dusk), the two systems show good agreement in terms of luminance distribution, demonstrating a good criterion for the selection of the light source and, consequently, of the correct S factor to apply to the low-cost HDR image.

Glare Index Analysis
To perform the glare analysis, different methods are used depending on the system considered.
In the case of the low-cost instrument, two different methods are used:
1. The first one considers a task area, as recommended in Ref [17], a useful approach especially in scenarios 1, 2 and 5, where users are expected to concentrate their gaze on a specific area. The average luminance is calculated, and each pixel exceeding this value multiplied by a default factor of 5 [17] is considered a glare source.
2. The second approach, especially useful in the case of walking, when users are not focused on a specific area, does not consider a task area, in contrast to what is reported in Ref [17]. This allows us to consider the entire captured area. In this case, a constant threshold luminance level of 1500 cd/m² is used. This second method also accounts for the difference in glare assessment due to the different FoV of the acquired images.
Depending on the derived HDR image, two different approaches are considered (Figure 12). For the HDR file generated with the professional camera photometer, the value of UGR is defined in accordance with Section 17.1.5 of Ref [18], as synthesised in Figure 12a. Among the different methods of glare calculation reported in Ref [16], we considered the following three:
a. The first method, the most accurate, is based on the analysis of the overall luminance histogram and sets the first minimum after the first maximum as the luminance threshold level.
b. The second method is based on a task area defined in LMK LabSoft, with the average luminance of the task area, multiplied by a factor set to 5, used as the threshold level.
c. The third method is based on manually setting a luminance threshold level, in this case equal to 1500 cd/m² for the first four scenarios, while a value of 1000 cd/m² is considered for the fifth.
The low-cost images are processed with ra_xyze to create the RGBE Radiance file. The pcomb function is then used to apply the S factor and the vignetting adjustment. Then, a smaller image with the .pic extension is created using the pfilt program [19], for example:
• pfilt -1 -e 1 -x 1120 -y 840 20220705_2128_EVinpixel_0105corr.hdr > 20220705_2128_EVinpixel_0105corr.pic
Then, the evalglare program [17] is used to calculate the glare metrics:
• When considering the task area, the following script first calculates the glare indices and then saves a pic file with the highlighted task area: evalglare -T 580 350 0.7 -vth -vv 122 -vh 90 -c taskarea.pic 20220704_1302_EVinpixel_0116corr.pic
• In scenarios 3 and 4, typically walking scenarios, the y position of the task area is lowered slightly and set equal to 100, imagining that the user is focused on the area where they will place their feet. The pic file is then converted to a more convenient TIFF file: ra_tiff -z taskarea.pic taskarea.tif
• When considering the entire captured area, the following script is used: evalglare -vth -vv 122 -vh 90 -b 1500 xxx.pic > glare_xxx.txt (where "xxx" is the name of the initial HDR file)
The -b option sets the threshold luminance value in line with the third method used for the professional glare calculation; in the case of scenario 5, this value is set to 1000 cd/m². The complete low-cost processing chain is sketched below.
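Putting these steps together, a minimal end-to-end sketch for one low-cost capture could look as follows (file names are placeholders, the S value of 0.105 assumes an LED-dominated scene, and vignetting.cal is the calibration file sketched in the vignetting section):

hdrgen -o raw.hdr exp_short.jpg exp_mid.jpg exp_long.jpg        # fuse the three bracketed JPGs
ra_xyze -r -o raw.hdr > raw_rgbe.hdr                            # rewrite with RGBE pixel encoding
pcomb -f vignetting.cal -s 0.105 raw_rgbe.hdr > corr.hdr        # vignetting correction + S factor
pfilt -1 -e 1 -x 1120 -y 840 corr.hdr > corr.pic                # resize and convert to .pic
evalglare -vth -vv 122 -vh 90 -b 1500 corr.pic > glare_corr.txt # UGR over the entire captured area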
Table 4 reports the values of UGR for the different scenarios and methods considered above, together with the corresponding sensation on the 9-point Hopkinson glare scale [20,21]. Table 4 allows some useful comments to be made. Even if we consider only the professional device, the glare evaluation in relation to the sensation scale can be very different when the scenario is not a "standard" office. In particular, for scenario 4 (outdoor assessment), the glare sensation calculated from the professional device data could be "uncomfortable", "unacceptable" or "just uncomfortable", depending on the method used. On the other hand, if we compare the results of methods 1 and 2 of the low-cost device for scenarios 1, 2 and 5 with the corresponding methods b and c of the professional device, we can see that there is a difference in terms of glare sensation when considering a task area (methods 1 and b), while there is no difference when the whole area is evaluated (methods 2 and c). Additionally, when we compare method 2 with method a for the same scenarios, there are no differences in glare sensation.

Discussion and Future Improvements
The idea of performing luminance mapping with a low-cost camera is certainly not new [22,23]: the costs are more than an order of magnitude lower than those of professional equipment, and the automated procedure for determining the glare index is very fast compared to a classic manual procedure in which the photos have to be copied to a PC and then processed with dedicated software. The novelty of the proposed approach lies in the DIY procedure used to assess the performance of the low-cost camera, which makes the method practically replicable and applicable to light sources other than those considered in this study. With this in mind, Figure 13 shows the profiles of the light sources considered, at 100% light intensity and at the position closest to the reference light source, in the 16-hue-bin circle, which allows us to identify the hue shift compared to a reference blackbody radiator (black line in the figure). Some of the sources (incandescent lamp I_6_100, halogen H_6_100 and daylight D_3) have a colour behaviour similar to the reference; others deviate by a maximum of 20% (fluorescent lamp F_6_100, warm white LED W_3_100, cool white LED C_3_100, neutral white LED N_3_100); others still (blue LED NB_3_100, red LED NR_3_100, green LED NG_3_100) are intentionally very far from the black reference circle. A comprehensive overview of the colour rendering of all light sources can be found in Ref [15]. The approach described here can therefore be repeated, which is why additional information is provided in Appendix A and in Ref [15]: other researchers interested in the same aspects can replicate the simple and inexpensive instrumentation to understand how the system behaves under sources different from those considered so far, or to study contrasting fields, with bright and dark areas side by side, which may also influence the final glare assessment due to the small size of the optical element of the Raspberry camera.
Another consideration is the simultaneous presence of different light sources. In this case, the algorithm considers the total spectrum and then applies a correction coefficient corresponding to the predominant source. In this sense, in the case of daylight at midday, which is predominant compared to the fluorescent spectrum, the algorithm classifies the total spectrum as "daylight" and assigns the corresponding S factor (S = 0.116, Figure 11b,c), while at dusk, when the daylight contribution is low and LED light is present, the algorithm classifies the total spectrum as "LED" and assigns the corresponding S factor (S = 0.105, Figure 11h,i). The approach designed in this way allows mixed lighting conditions to be handled through the total spectrum.
A future improvement could involve placing a surface orthogonal (or at a different angle) to the illuminated area, applying different surface finishes to it, and investigating how the reflection effect could affect the luminance mapping of the low-cost system. This aspect is not considered in this study but does not seem to significantly impact the overall luminance mapping and glare assessment. Another improvement could be the use of a camera with a wider FoV.
Another consideration regards the use of this low-cost solution for glare assessment. Referring to the results of Section 3.4, in our opinion, it would be possible to use the low-cost solution for indoor glare assessment in office spaces (scenarios 1 and 2) or home environments (scenario 5). Using the low-cost system for glare assessment in outdoor spaces (scenario 4) or in indoor spaces that differ from the classical office (scenario 3) requires further investigation since, as shown, even the professional device can give different results depending on the method used.

Conclusions
A new calibration setup based on a DIY approach was proposed. The setup made it possible to calibrate a low-cost camera and compare the results in terms of luminance mapping with a professional DSLR camera photometer, both in a controlled environment and in real case studies.
According to the main questions formulated at the beginning of this study, we can conclude that:
• Luminance mapping can be performed using a low-cost camera if it is subjected to a time-consuming but necessary calibration process;
• The S factor of the pcomb function allows us to consider a correction factor that can be applied to the low-cost system to better match the luminance values of the professional device;
• The S factor can be differentiated by considering different light sources, and in our study, we introduce a rough algorithm that performs this;
• The calibration process could be replicated following a DIY approach to account for the different limitations/improvements, as described in the previous section.