Measuring Facial Illuminance with Smartphones and Mobile Devices

Featured Application: The proposed methodology can be implemented in an app to obtain the objective and accurate face illuminance measures required to investigate the effect of illumination on myopia progression.

Abstract: Introduction: Uncovering a relationship between light exposure and myopia is complicated by the challenging nature of measuring the visually relevant illumination experienced by children. Objective: To find a methodology to measure face illuminance using a mobile device. Methods: The accuracy and precision of the mobile device's built-in ambient light sensor were tested under three different lighting conditions: a full-field source, a single small light, and one mimicking typical office illumination. Face illuminance was computed for six faces with different skin reflectances using pixel values in face images captured by the device camera placed 30 cm in front of the face. The results were compared with those obtained with a commercial light meter situated at the face. Results: The illuminance measured by the device's ambient light sensor showed high linearity (R² > 0.99), slightly under-estimating face illuminance with the full-field source and over-estimating it with the single small source, while remaining accurate for office lighting. Face illuminance measured by the device's camera under indoor conditions using the new methodology showed a mean relative error of 27% and high linearity (R² > 0.94). Conclusions: An app implementing this method can be used to assess the association between visually relevant environmental light levels and myopia progression.


Introduction
The recent dramatic increase in myopia prevalence, reaching almost 100% in some young adult groups (most notably in East Asia) [1][2][3], points to the existence of a causal agent in the modern environment. Notably, in the most recent decades, children have changed their behavior, now spending the majority of their time indoors [4]. Urbanization, modern technology, and modern schooling are all implicated in this behavioral change [5][6][7]. Studies have shown that myopia prevalence is lower in children who spend more time outdoors [8], even when both their parents were myopic [9], and that increasing the time spent by children outdoors (by approximately 80 min per day) significantly reduced the incidence and progression of myopia in a population of East Asian school children [10].
The indoor visual environment differs from the one outdoors in three major ways: less light intensity, different spectral composition of light, and generally nearer viewing distances [10,11]. Experimental studies of young chicks and monkeys have revealed that low light levels can accelerate eye growth, whereas high light levels can slow eye growth [12][13][14]. Furthermore, seasonal variations in eye growth in children in the northern hemisphere provide indirect support for the important role of light level in eye growth regulation [15].
Studies in human children attempting to uncover a relationship between light exposure and myopia onset and progression are complicated because of the challenging nature of environmental light level monitoring, particularly those features of the luminous environment that affect the retinal image. In many situations, single illuminance measures can only have an approximate relationship with light reaching the retina, because although most of the light entering the eye originates as light reflected by the illuminated environment, the eye's visual field is restricted in its extent. For example, in the case of a child viewing a high luminance mobile device's screen in a dark room, a general room illuminance measure will constitute a poor indicator of central retinal illuminance. Since mobile device use is a risk factor for myopia [16], monitoring of environmental light as a surrogate for measures of retinal illuminance should take into account both diffuse environmental illuminance and any local light sources often occurring in the centre of the visual field.
Small ambient light sensors, which can be mounted on the wrist or clothing, or incorporated into spectacle frames [17][18][19][20][21][22][23], have recently been employed to monitor light levels in children's environments and have helped reveal that myopic children, on average, tend to be exposed to less light [8,24]. Since retinal illuminance depends on the eye pupil illuminance, which depends on the direction of the eye's line of sight, the ability of these light measurements to capture the features of the lighting environment relevant to eye growth is disputable. For example, wrist-mounted detectors may be covered by clothing or pointed in a different direction than the eye's line of sight. Moreover, wrist-mounted ambient light sensors integrate light over a wide field of view that may not match the visual field.
The present report examines the feasibility and accuracy of measuring face illuminance as a surrogate for retinal illuminance using cameras included in mobile devices. If feasibility and accuracy can be demonstrated, the devices which are usually in possession of children at risk of myopic eye growth [25] offer an opportunity to collect face illuminance data from very large populations, with the advantage of not requiring any additional equipment. We propose that face illuminance provides a measure of environmental lighting more closely related to retinal illuminance than alternative wide-field ambient light measures that have been used previously.

Quantification of Face Illuminance
Environmental light levels are typically quantified using illuminance (total luminous flux incident on a surface, per unit area) [26]. Room light levels depend on the total luminous flux output of the primary light source (e.g., Sun, room lamp, etc.) as well as the reflectivity of the secondary sources (diffusing walls, floor, objects). Measured illuminance also depends upon the location and orientation of the illuminance meter. The illumination that contributes to the retinal image can be estimated by mounting a forward-viewing light meter near the eye [22,23]. The integration angle of such forward-viewing measurement devices should not exceed the angular area of the human visual field (approximately 20,000 square degrees). Since the light captured by forward-viewing light meters would also be illuminating the face, measuring the light reflected from the face is another approach to capturing visually relevant environmental illuminance. Therefore, if the face reflectance is known, face luminance can provide an accurate measure of the visually relevant environmental illuminance.
The front-facing cameras contained in mobile devices automatically capture light reflected from the face to form images of the face ("selfies"). We propose a method to evaluate face illuminance using images acquired by front-facing cameras. Furthermore, we compared the camera estimates of face illuminance with those derived from ambient light sensors included in the front of most mobile devices [27].
The photometric properties of light captured by each pixel in the camera can be described by three values in a YUV colour space, where Y (luma) represents the amount of light captured by each pixel in the image, which is proportional to the visually weighted energy received during the exposure time t (image flux), and to the gain of the camera's sensor, expressed as ISO value I. Therefore, if the part of the image of the user's face corresponding to the area around the eyes can be identified in the camera image, the mean luma value of the face in that area, Y_m, can be calculated. If A_P is the area of each pixel that captures the image of that region of the user's face, the illuminance received at these pixels (E_P) would be:

E_P = k·Y_m/(I·t),  (1)

where k represents a constant corresponding to transmittance losses of the device camera. The relationship between face luminance L_f and camera image illuminance E_P (Figure 1, right) is given by:

L_f = c·E_P·(f/#)²,  (2)

where f' is the focal length of the front camera of the device, A its aperture, c is a constant (4/π), and f/# = f'/A is the f-number. Knowing the face luminance, and assuming that skin can be approximated as a cosine reflector (Lambertian surface), we can calculate the face illuminance, E_f (Figure 1, left), as [28]:

E_f = π·L_f/ρ,  (3)

where ρ represents the spectral reflectance of the face. Finally, combining Equations (1)-(3) gives:

E_f = k'·Y_m·(f/#)²/(I·t),  (4)

where the constant k' includes all values that are considered constant for a particular device and face combination. The remaining values in Equation (4) are usually chosen automatically by the camera's auto-exposure system based on the scene. Of all these values, the one that varies the most in response to a change in lighting conditions is the luma (grey level) of the pixels in the area of the image which includes the face. Because the value of k' for each device and face combination is unknown, a calibration is necessary for each device/subject pairing to convert camera pixel luma values (Y_m) to face illuminance (E_f).
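As a minimal sketch of how Equation (4) can be applied in code (function and parameter names are illustrative, not from the original), the face illuminance follows directly from the camera's auto-exposure values once the calibration constant is known:

```python
def face_illuminance(y_mean, f_number, iso, exposure_s, k_prime):
    """Estimate face illuminance E_f (lux) from camera exposure data
    via Equation (4): E_f = k' * Y_m * (f/#)^2 / (I * t).

    y_mean     -- mean luma Y_m over the eye region of the face image
    f_number   -- f/# reported by the camera for the frame
    iso        -- ISO gain I
    exposure_s -- exposure time t in seconds
    k_prime    -- device/face calibration constant k'
    """
    return k_prime * y_mean * f_number ** 2 / (iso * exposure_s)

# Illustrative values: k' = 1.0, Y_m = 100, f/2.0, ISO 100, 10 ms exposure
print(face_illuminance(100.0, 2.0, 100, 0.01, k_prime=1.0))  # 400.0
```

In practice, the three exposure parameters would be read from the camera metadata of each captured frame.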

Calibration
The ambient light sensor included in the device itself can be used to calibrate the system for each face and obtain the coefficient k'. A calibration face illuminance, E_f⁰, is first measured by situating the device's ambient light sensor in proximity to the subject's eyes (Figure 2, left), facing forwards. Next, the device is restored to its natural orientation, facing the user at a comfortable distance (Figure 2, right), and a calibration illuminance, E_P⁰, is determined from the CCD of the device camera. The average value Y_m is obtained from the area around the eyes in the image of the face captured by the front camera of the device. The exposure system of the camera adjusts the exposure time, ISO value, and f/# automatically when taking the image. With this calibration approach, using the light sensor to calibrate the camera images, all the values in Equation (4) are known except the constant k', which is computed and can subsequently be applied to any other image captured when the device is used under any other illumination conditions for the same face-device combination.
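The one-point calibration described above can be sketched as follows (a hypothetical illustration; the names are not from the original): k' is obtained by inverting Equation (4) using the sensor-measured reference illuminance, then reused for all later frames of the same face-device pair:

```python
def calibrate_k_prime(e_f_ref, y_mean, f_number, iso, exposure_s):
    """Solve Equation (4) for k' using one reference measurement.

    e_f_ref -- face illuminance (lux) read from the device's ambient
               light sensor held near the eyes, facing forward
    The remaining arguments are the camera's auto-exposure values
    for the calibration frame.
    """
    return e_f_ref * iso * exposure_s / (y_mean * f_number ** 2)

# Calibration frame: sensor reads 52 lux; camera reports Y_m = 100,
# f/2.0, ISO 100, 10 ms exposure.
k_prime = calibrate_k_prime(52.0, 100.0, 2.0, 100, 0.01)

# Any later frame can now be converted back to face illuminance
# with Equation (4); here the calibration frame recovers 52 lux.
e_f = k_prime * 100.0 * 2.0 ** 2 / (100 * 0.01)
print(round(e_f, 3))
```

The recovered value matching the sensor reading is simply a consistency check; the point of the calibration is that k' then carries over to frames taken under different lighting.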


The proposed methodology was evaluated using two mobile devices: a Samsung S6 Edge Plus (smartphone) and a Samsung Galaxy Tab S2 (tablet). Firstly, the illuminance values measured by the ambient light sensors were examined to establish the limitation of the proposed methodology due to changes in the estimation of the k' value during the calibration procedure with the ambient light distribution. To provide a measure of the angular sensitivity of the sensors, illuminance measurements under three light sources with different spatial distributions of light were tested and compared with those obtained with a standard light meter (Konica Minolta Sensing T-10A) that incorporates a wide-field cosine-weighted sensor. This test provides a measure of the angular sensitivity of the device sensors relative to that of the illuminance meter, which includes an integrating dome. Secondly, device camera calibration was performed in several face-device combinations with subjects with different skin reflectances to confirm the veracity of Equation (4).

Angular Selectivity of Device Ambient Light Sensors
Three light sources representing the full range of angular selectivity were examined:
(1) Wide-angle homogeneous source: a modified perimeter (Haag Streit Octopus 900, Switzerland) which emitted approximately spatially homogeneous light over a full hemi-field (2π sr). Dome illuminance was adjusted in 400 lux increments over the range from 0 to 9000 lux. The lux meter and the devices with ambient light sensors were placed at the centre of the hemisphere facing the central fixation target inside the dome.
(2) Small angular source: a high-power (500 W) incandescent light, which subtended an angle of less than 2° from the position of the light sensors, which were pointed directly at the light source. The room lights were turned off, and light coming from secondary reflections in the room was minimized by cloaking the instruments in black cloth. Illumination values were measured over a range from 3 to 4000 lux.
(3) Indoor office environment: an array of incandescent lights and controllable office ceiling lighting (range 80 to 700 lux) was used to illuminate an office containing a desk. The rest of the office was illuminated by reflected and scattered light and directly from a standard set of ceiling light fixtures. The light sensors were not illuminated directly by the light sources, but pointed directly at the desk containing a laptop and documents. Diffuse reflections from the office walls, desks, and light emitted from a computer screen contributed to the illuminance measured by the light sensors.

Measures of Face Illuminance Using the Camera
A Samsung S6 Edge Plus smartphone camera was placed 30 cm from the face, a typical handheld viewing distance [29], with the face approximately centered in the camera field of view. Camera calibration was achieved using a known face illuminance of 52 lux. Because Equation (4) depends on the reflectance of the face, the experiment was carried out on four subjects: one African (a 30-year-old native of Ghana), one East Asian (a 42-year-old from China), and two Caucasians (a 33-year-old from Poland and a 51-year-old from Spain), in addition to two artificial mannequin faces with different skin tones (dark and light skin) and hair color (Figure 3).
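As an illustrative sketch (region coordinates, array shapes, and names are hypothetical, not from the original), the mean luma Y_m required by Equation (4) can be taken from the luma (Y) plane of the captured frame over the region around the eyes:

```python
import numpy as np

def mean_luma(y_plane, eye_box):
    """Average luma Y_m over a rectangular eye region.

    y_plane -- 2-D array of luma values (the Y channel of a YUV frame)
    eye_box -- (row0, row1, col0, col1) bounding the area around the eyes
    """
    r0, r1, c0, c1 = eye_box
    return float(np.mean(y_plane[r0:r1, c0:c1]))

# Toy frame: a dark image with a brighter 2x2 "eye region" patch.
frame = np.zeros((8, 8))
frame[3:5, 3:5] = 120.0
print(mean_luma(frame, (3, 5, 3, 5)))  # 120.0
```

In a real app, the bounding box would come from a face/eye detector rather than fixed coordinates.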


Results
Device light sensor calibrations revealed three key findings. First, the sensor response is highly linear, with all but one least-squares linear fit to the data having R² values in excess of 0.99. In all three environments, the sensor of the tablet reported higher values than that of the smartphone (e.g., slopes of 0.70 vs. 0.52 with the hemispheric light source). The slopes below 1 for the wide-field stimulus (Figure 4a), and the slopes above 1 for the small-field stimulus (Figure 4b), reveal a narrower angular weighting for the detectors in each device compared to the wide-field cosine weighting reported for the illuminance meter. Such narrower weighting will be more appropriate for visual field integration, since the human monocular visual field is smaller than 2π sr due to restrictions of the brow, nose, and retina. For simplicity, we included the values of the slopes and the regression coefficients of the linear fitting lines of Figure 4a-c in Table 1. Results obtained under an office illumination environment with both devices (Figure 4c) indicate that the illuminance can be measured within a relative error lower than 25%.
Face illuminance was computed from camera pixel values by applying Equation (4) to the average pixel value obtained from the camera images in the region surrounding the eyes, and the results are compared to those obtained from the smartphone light sensor (Figure 5). The two faces with darker skin are expected to yield lower estimates because of reduced skin reflectance. Table 2 shows the values of the fitting lines to the data in Figure 5.
Table 2. Slope values and regression coefficients corresponding to the regression fit lines of Figure 5.
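The linearity analysis reported above (slope and R² of a least-squares fit between device readings and the reference lux meter, plus a mean relative error) can be sketched as follows; the data values below are invented purely for illustration:

```python
import numpy as np

def evaluate_sensor(reference_lux, device_lux):
    """Least-squares line, R², and mean relative error for paired readings."""
    ref = np.asarray(reference_lux, dtype=float)
    dev = np.asarray(device_lux, dtype=float)
    slope, intercept = np.polyfit(ref, dev, 1)
    residuals = dev - (slope * ref + intercept)
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((dev - dev.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    rel_err = float(np.mean(np.abs(dev - ref) / ref))
    return slope, r2, rel_err

# Invented example: a sensor reading 70% of the true illuminance.
ref = np.array([100.0, 400.0, 800.0, 1600.0, 3200.0])
dev = 0.70 * ref
slope, r2, rel_err = evaluate_sensor(ref, dev)
print(round(slope, 2), round(r2, 3), round(rel_err, 2))  # 0.7 1.0 0.3
```

A slope below 1 with high R², as in this toy case, mirrors the behaviour reported for the wide-field stimulus.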


Discussion
The goal of this project was to assess the feasibility of using the already available smart phone cameras to monitor the general lighting levels experienced by children who might be at risk of developing myopia. Although many smart phones include a light sensor, when the device is being used, the light sensor faces toward the user and thus captures the light traveling in the opposite direction to the light that creates the retinal image. However, the forward-facing camera captures an image of the face, which is itself illuminated by the same general light sources that create the retinal image (those stimuli within the visual field). Therefore, smart phone cameras offer a convenient way to assess visually relevant environmental light exposure of the device user.
Our approach to assessing the feasibility of using these cameras employed a calibrated illuminance meter to evaluate the smartphone's built-in light sensor (Figure 4), which revealed that, in a typical room situation, the built-in light sensor provided illuminance measures closely matching those of the calibrated illuminance meter. This result is encouraging in that it enables the smartphone light meter to be employed as a way to calibrate the camera for light measurements (see the calibration method, Figure 2), and the results (Figure 5) indicate that accurate estimates of face illuminance can be obtained using the camera, and thus can be obtained in real time over long durations once this initial calibration has been completed. Errors in face illuminance measures made with the camera are relatively small (mean error = 27%), and skin reflectance also has a relatively minor impact. The small magnitude of these errors is insignificant in the broad context of assessing the overall light levels experienced by children, which will vary by more than 1000 times when moving from outside on a bright sunny day to a dimly lit room at home.
Calibration of the built-in light sensor showed an underestimation of about 50% when a homogeneous illumination source was used that extended over a solid angle of 2π sr (Figure 4a) and an overestimation of about 80% when the illuminant was a small-angle source in an otherwise dark field (Figure 4b). These inconsistencies between the device light sensor and the lux meter measurements reflect a combination of the gain/sensitivity of the sensors as well as their angular selectivity, which depends on their optical design. Sensors in smartphones have been reported to assess light levels over approximately 70° [29], with sensitivity maximal in the direction normal to the detector and decreasing with eccentricity [30]. Such angular selectivity is well suited to assessing visually relevant light levels as long as the sensor is located at or near the eye and directed away from the face.

Conclusions
The two main conclusions of the study are: (1) The illuminance measured by the light sensors included in smartphones is highly linear, and reveals a narrower spatial weighting than a full-hemifield lux meter. (2) It is possible to use a smartphone camera to measure face illuminance in real time with an accuracy of around 80%, once a one-time calibration has been performed using the device's built-in light sensor.
If the results can be extrapolated to other devices (which typically use similar cameras and light sensors), the proposed method may give continuous readings (or at least readings at the video rate of the device) of the face illumination of the user, as long as the electronic device is being used and the subject is looking at it. Such technology, implemented in software (an app) working in the background of the device, can be useful for obtaining objective data on the illumination received by the subject while the device is being used naturally. Thus, it can serve as a tool to help control myopia progression, given the large amount of time that children and adults spend daily using their electronic devices [31] under relatively low lighting conditions, usually indoors.