An Operational Radiometric Correction Technique for Shadow Reduction in Multispectral UAV Imagery

Abstract: This study focuses on the recovery of information from shadowed pixels in RGB or multispectral imagery sensed from unmanned aerial vehicles (UAVs). The proposed technique is based on the concept that a property characterizing a given surface is its spectral reflectance, i.e., the ratio between the flux reflected by the surface and the radiant flux received by the surface, and this ratio is usually similar under direct-plus-diffuse irradiance and under diffuse-only irradiance in high-resolution UAV imagery.


Introduction
Since the beginning of optical remote sensing (RS), much of the scientific literature has been devoted to shadow hindrances: first in aerial, later in satellite, and recently in drone-acquired imagery. The shadow problematics differ at these three scales mainly due to the spatial resolution (SR), but they all share the common trait of hampering the radiometric correction techniques commonly used in UAV frame cameras to obtain dimensionless reflectance values. Field spectroradiometric measurements are usually the ground truth for validating the spectral surface reflectance imagery resulting from radiometric corrections of satellite, airborne or UAV optical data [36][37][38][39][40]. The spectral surface reflectance of a given material does not change when located in shadowed conditions (receiving only diffuse irradiance) or in non-shadowed conditions (receiving mainly direct irradiance): what changes is the reflected spectral radiance, because the surface receives less total spectral irradiance and thereby appears darker in shadowed conditions.
The hypothesis of the present work is that the relationship between irradiance conditions in shadowed areas and non-shadowed areas can be determined by following classical empirical line techniques [41][42][43] for fulfilling the objective of a coherent treatment in both kinds of areas. To this end, we present a simple and operational radiometric correction technique based on the usage of twin sets of radiometrically characterized panels: one set located in shadowed conditions, and the other set in non-shadowed conditions. By first determining the response values of the targets in non-shadowed areas, the procedure seeks, on the one hand, to obtain the same reflectance values in the homologous twin targets located in the shadowed areas after image de-shadowing, and on the other hand, to expand the correction model to all the detected shadows in the scene. The method includes three sequential blocks: (1) shadow detection; (2) shadow reduction; and (3) edge correction filtering to smooth the transition between de-shadowed areas and non-shadowed areas.

Experiment Conceptualization and Materials
The main concept is that the spectral reflectance of a sample surface is the same in direct-plus-diffuse illumination conditions as in diffuse illumination conditions (what varies is the spectral irradiance) [2,3]. Additionally, there is a need to account for atmospheric radiance contributions to the at-sensor spectral radiance [4,5]. These principles can be applied using an empirical line approach, among others [41][42][43]. Then, the information retrieval from shadowed pixels can be achieved in three stages: (1) shadow detection; (2) shadow reduction in all shadowed scene pixels using empirical relationships between the characterized sets of panels located in shadowed and non-shadowed areas; and (3) edge filtering to smooth the transition between de-shadowed and non-shadowed areas (Figure 1). The objective was to test whether the de-shadowing workflow was successful and operational in a real scenario. The method for checking the hypothesis is presented in an applied scenario (a forested area). The case study was carried out in a Mediterranean forestry environment; specifically, within a restored quarry in Catalonia (X = 411,713 m; Y = 4,616,052 m (UTM31N-WGS84)) with grasslands, scrublands, and Pinus halepensis trees of several heights due to the afforestation carried out in the frame of the restoration actions (Figure 2a). The authors previously used this area for UAV vegetation mapping studies and geometric and radiometric calibration experiments [38][39][40][44]. The flight was performed on 27 April 2018 between 10:36 and 10:47 UTC, at 90 m above ground level, and was designed to cover a 64,386 m² study area (Table 1).
By applying photogrammetric and Structure from Motion processing, an orthoimage of 6 cm SR (Figure 2b) and a digital surface model (DSM) of 9 cm pixel size were obtained. In order to relate the non-shadowed and the shadowed illumination conditions, we used a set of 500 mm × 500 mm × 20 mm ethylene-vinyl acetate (EVA) panels in direct illumination conditions and a twin set of panels in a cast shadow. The panels were spectrally characterized using a hand-held OceanOptics USB2000+ spectroradiometer [45] (Table 2, Figure 3a-e) and present a Lambertian behavior [37]. The spectral range covered the VNIR region (340 nm-1030 nm), with a sampling interval of 0.3 nm and a spectral resolution (full width at half maximum, FWHM) of 1.26 nm. A reference panel was used to measure the incoming spectral irradiance. At the same time, UAV multispectral imagery was acquired with a Parrot Sequoia sensor (Table 3, Figure 3c,d) on board a DJI Phantom III quadcopter for the applied case in a forestry environment.
The UAV at-sensor reflectance image (obtained with the DLS irradiance reference) was empirically fitted to the characterized panels to obtain at-surface reflectance values [38]. Afterwards, the methodology for recovering information from shadowed areas consisted of the three processing blocks previously mentioned (Figure 1) and detailed in this section: (a) shadow detection, (b) shadow reduction, and (c) edge filtering.

Shadow Detection
Let us consider the scenario of a clear sky and direct illumination, as in the example case, wherein the sun vector hits a given surface, thereby projecting shadows. Shadow detection can mainly be carried out with physical and/or image-based approaches (Figure 4). Physical approaches are based on the sun's position and the cast shadow projected by the surface objects at a given location: the sun's position is well known for a given geographical position, and the surface is modeled using a DSM, geometrically obtaining a digital cast shadow model (DCSM) and a digital illumination model (DIM). Image-based approaches are based on the image band histogram, selecting those dark pixels located under a given threshold, assuming that such pixels have low illumination and are located in shadows. Both approaches provide good results, but commission errors (false positives, i.e., non-shadow pixels classified as shadows, such as low-signal surfaces [23]) and omission errors (false negatives, i.e., shadowed pixels classified as non-shadow) often appear. False positives in the physical approach appear when the DSM is not of sufficient quality or is not truly 3D but 2.5D, especially in the trunk shadows in vegetation environments. False positives in the image-based approach appear in dark surfaces that are not located in shadows, such as water bodies or dark soils. To overcome the commission errors of both methods, in our approach, the detailed shadow detection was obtained by combining image-based and physically based methods [33].
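As an illustration of the physical approach, a digital cast shadow model can be derived from a DSM by marching from each cell towards the sun and testing whether any surface sample blocks the solar ray. This is a minimal brute-force sketch, not the software used in this study; the function name, grid convention (row 0 at the north edge) and the `max_dist` search cutoff are illustrative assumptions:

```python
import numpy as np

def cast_shadow_mask(dsm, cell_size, sun_azimuth_deg, sun_elevation_deg, max_dist=50.0):
    """Boolean mask of DSM cells lying in the cast shadow of higher objects.

    For each cell, march along the horizontal direction towards the sun and
    check whether any DSM sample rises above the line of sight to the sun.
    """
    az = np.radians(sun_azimuth_deg)
    el = np.radians(sun_elevation_deg)
    dx = np.sin(az) * cell_size    # east displacement per step (m)
    dy = -np.cos(az) * cell_size   # row displacement per step (row 0 = north)
    rise = np.tan(el) * cell_size  # height gained by the sun ray per step (m)
    rows, cols = dsm.shape
    shadow = np.zeros_like(dsm, dtype=bool)
    n_steps = int(max_dist / cell_size)
    for r in range(rows):
        for c in range(cols):
            z = dsm[r, c]
            for k in range(1, n_steps + 1):
                rr = int(round(r + k * dy / cell_size))
                cc = int(round(c + k * dx / cell_size))
                if not (0 <= rr < rows and 0 <= cc < cols):
                    break  # reached the edge of the grid: no blocker found
                if dsm[rr, cc] > z + k * rise:
                    shadow[r, c] = True  # a higher sample blocks the sun ray
                    break
    return shadow
```

In production, vectorized or GPU hillshade/shadow-casting routines would replace the per-cell loops, but the geometry is the same.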
We selected pixels fulfilling the joint condition of being under the first-quartile histogram threshold in the intensity image, in the Red Edge band and in the Near-Infrared band (image-based condition), while also lying in a cast shadow of the Digital Cast Shadow Model (DCSM) or in a self-shadow of the Digital Illumination Model (DIM) (physically based condition). This constrains the possible histogram commissions to the geometrically candidate pixels, considering that Digital Elevation Models (DEM) are not truly 3D but 2.5D and provide false positives, especially in the trunk shadows.
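The joint condition above can be sketched as a combination of boolean masks. Array names are illustrative, and reading the physically based condition as the union of the cast-shadow (DCSM) and self-shadow (DIM) masks is our assumption:

```python
import numpy as np

def detect_shadows(intensity, red_edge, nir, cast_shadow, self_shadow):
    """Combine the image-based and physically based shadow criteria.

    A pixel is flagged as shadow only if it is below the first-quartile
    threshold in the intensity, Red Edge and NIR bands (image-based) AND
    lies in a cast shadow or a self-shadow (physically based).
    """
    q1_int = np.percentile(intensity, 25)
    q1_re = np.percentile(red_edge, 25)
    q1_nir = np.percentile(nir, 25)
    dark = (intensity < q1_int) & (red_edge < q1_re) & (nir < q1_nir)
    geometric = cast_shadow | self_shadow  # geometrically candidate pixels
    return dark & geometric
```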
Figure 4. Shadow detection step methodology workflow. Two problems must be solved: (1) when detecting the shadows with a physical approach, the typical DEM (2.5D) does not reproduce the tree silhouettes and produces commission errors (left); (2) when detecting shadows with the image histogram approach, dark objects located in non-shadowed areas also produce commission errors (right). The solution is masking only those dark pixels located in projected shadow areas.

Shadow Reduction
Detected shadows can be managed using three approaches [21]: (1) simple masking; (2) multisource data fusion; (3) radiometric enhancement. In the masking approach, shadowed pixels are simply converted to NODATA values (removed), whereas in the multisource data fusion approach, the shadowed pixels are filled with imagery captured at other moments; both approaches are out of the focus of this work. Shadow reduction with radiometric enhancement techniques recovers the radiometry of the shadowed pixels, increasing their brightness [21] to make them seamless with neighboring non-shadowed pixels. Due to the great complexity and diversity of shadows, errors can appear as pixels with excessively high values after shadow reduction (overcorrection errors) or pixels with excessively low values after shadow reduction (undercorrection errors). In our approach (Figure 5), once the shadowed pixels were selected, they were further processed separately. The reduction was achieved with an invariant color model method [33], which takes advantage of pairs of panels simultaneously located in shadowed and non-shadowed conditions. We used four panel pairs to fit the correlation, whereas the remaining ones were used for validation purposes [38]. The linear function relating both sets of panels was applied to all the detected shadowed pixels in the image. Finally, the de-shadowed pixels were overlaid on the image, yielding a near-final result. However, at this stage, the problem of a bad transition between the de-shadowed areas and the non-shadowed areas appeared (Figure 5). To solve this problem, located in the penumbra areas, a further step involving edge filtering is needed (Section 2.2.3).
Figure 5. Shadow reduction block methodology: finding the empirical line function from the characterized targets located in shadowed and non-shadowed areas, and further applying it to all the detected shadowed areas.
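The empirical line step can be sketched as an ordinary least-squares fit between the shadowed panel values and their sunlit twins, with one gain/offset pair per band; the function names are illustrative and this is a sketch, not the exact software used by the authors:

```python
import numpy as np

def fit_shadow_correction(shaded_panel_refl, sunlit_panel_refl):
    """Least-squares line mapping shadowed panel reflectances to their
    sunlit (reference) reflectances, for one spectral band."""
    gain, offset = np.polyfit(shaded_panel_refl, sunlit_panel_refl, 1)
    return gain, offset

def de_shadow(band, shadow_mask, gain, offset):
    """Apply the per-band linear function only to the detected shadowed pixels."""
    out = band.astype(float).copy()
    out[shadow_mask] = gain * out[shadow_mask] + offset
    return out
```

With four fitting pairs per band (the rest kept for validation), the same two parameters are then extrapolated to every detected shadowed pixel in the scene.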

Edge Filtering
Boundary pixels between de-shadowed and non-shadowed regions are problematic due to the spatial mixture of illumination conditions and the limited accuracy of the DSM. Thus, these pixels tend to be more or less overcorrected if considered as shadows, or undercorrected otherwise, leading to artifacts and a visual sensation of patched de-shadowed areas; the extraction of a boundary map [21,33] is a possible solution for dealing with these areas. In our approach, the shadow mask was vectorized and a buffer (edge belt) was calculated for that limit area; in this case study, the penumbra area was in the order of 10 cm, so the buffer area was 2 pixels (1 pixel at each side of the shadow mask boundary). As we pointed out in Section 1, a precise determination of that area can be performed, according to [21], through Equation (1):

w = H · [cot(e − ε/2) − cot(e + ε/2)]    (1)

where w is the width of the penumbra, H is the height of the object (a tree, a house, etc.), e is the elevation angle of the center of the sun, and ε is the angular width of the sun as seen from the top of the object. Figure 6 illustrates this formula. Nevertheless, given our non-ultra-realistic object modelling goal, a visual inspection was enough to set a convenient value.
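For orientation, the penumbra width relation of Equation (1), read geometrically as w = H · [cot(e − ε/2) − cot(e + ε/2)], can be evaluated numerically; the function name is illustrative, and the default solar angular width of ≈0.53° is an assumption:

```python
import math

def penumbra_width(height_m, sun_elev_deg, sun_width_deg=0.53):
    """Penumbra width w = H * (cot(e - eps/2) - cot(e + eps/2)).

    height_m: object height H; sun_elev_deg: elevation e of the sun's
    center; sun_width_deg: angular width eps of the solar disc.
    """
    e = math.radians(sun_elev_deg)
    half = math.radians(sun_width_deg) / 2.0
    return height_m * (1.0 / math.tan(e - half) - 1.0 / math.tan(e + half))
```

For example, a 10 m tree under a 45° sun elevation yields a penumbra of roughly 0.18-0.19 m, i.e., the decimetre order assumed for the two-pixel buffer in this case study.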
That edge belt was used as follows: firstly, the output image of the shadow reduction block was smoothed with a 3 × 3 low-pass filter kernel; secondly, this smoothed image was clipped with the edge belt, and the smoothed belt was overlaid on the patched image, thus applying the kernel filter only in the penumbra transition (Figure 7).
Figure 7. Edge filtering step methodology workflow: the cover problem is a patching pattern of de-shadowed pixels over the original image. To solve this, the shadow mask is vectorized, a two-pixel buffer is applied and used to clip a smoothed raster of the patched image. This clip is overlaid again on the patched image to smooth the transition between the de-shadowed and the non-shadowed pixels.
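The belt-restricted smoothing can be sketched with a 3 × 3 mean filter and morphological dilation/erosion of the shadow mask; this raster approximation of the vectorized buffer is an assumption, as are the function and parameter names:

```python
import numpy as np
from scipy import ndimage

def smooth_penumbra(patched, shadow_mask, belt_px=1):
    """Apply a 3x3 mean filter only inside an edge belt of belt_px pixels
    at each side of the shadow-mask boundary."""
    smoothed = ndimage.uniform_filter(patched.astype(float), size=3)
    grown = ndimage.binary_dilation(shadow_mask, iterations=belt_px)
    shrunk = ndimage.binary_erosion(shadow_mask, iterations=belt_px)
    belt = grown & ~shrunk  # two-pixel transition belt around the boundary
    out = patched.astype(float).copy()
    out[belt] = smoothed[belt]  # blur only the penumbra transition
    return out
```

Pixels outside the belt keep their original (de-shadowed or sunlit) values, so the blur is confined to the penumbra.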

Methods: Statistical Data Analysis
We will present the coefficients of determination, R², of the linear regression between the seven twin targets located both in diffuse illumination conditions and in direct illumination conditions. If these regressions show high R² (>0.90) and p-values lower than 0.01 in all the spectral bands of the sensor, with a reasonably good distribution along the [0, 100] % of reflectance values (also for all spectral bands), we will consider that the approach is robust and fulfils the hypothesis of our proposal. This analysis will be completed with a visual inspection of the corrected image to check whether most shadows have been removed.
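A minimal sketch of this acceptance test for one band, with made-up placeholder reflectances (not the paper's data):

```python
import numpy as np
from scipy import stats

# Hypothetical reflectance (%) of the seven twin targets in direct and
# diffuse illumination; the values are placeholders for illustration.
direct = np.array([3.0, 8.0, 15.0, 30.0, 45.0, 60.0, 80.0])
diffuse = np.array([1.1, 3.2, 6.0, 12.5, 18.9, 25.0, 33.8])

res = stats.linregress(diffuse, direct)
r2 = res.rvalue ** 2
print(f"R^2 = {r2:.3f}, p = {res.pvalue:.2e}")
# Acceptance criteria from the text: R^2 > 0.90 and p < 0.01 for every band.
robust = (r2 > 0.90) and (res.pvalue < 0.01)
```

The same regression is repeated per spectral band; the approach is accepted only if every band meets both thresholds and the targets span the reflectance range reasonably well.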


Results
The shadow detection method provided good results in detecting those dark pixels located in projected shadows. The sun position was well known during the drone mission (±5 min around the central time of the flight), and the DSM was detailed enough (9 cm/pixel) to obtain a good modelling of the projected shadows (DCSM). However, the application of a constraint to select only dark pixels in the projected shadows avoided false positives, particularly in the trunk shadows (Figure 8). The de-shadow reduction method, based on the empirical relationship between the shadowed panels and the non-shadowed panels, provided the ad hoc parameters for converting the shadowed pixels of the image to de-shadowed pixels in each of the Parrot Sequoia bands (Green, Red, Red Edge and Near-Infrared) (Figure 9, Table 4).
Figure 9. Spectral-dependent linear regression plots for Parrot Sequoia bands, between twin targets (n = 7) located in diffuse illumination conditions and direct illumination conditions. All regressions show an R² higher than 0.90 and a p-value < 0.01; additionally, note the reasonably good distribution along the [0, 100]% reflectance range (also for all spectral bands).
These empirical line functions were applied to all the detected shadowed pixels to enhance their radiometry and obtain de-shadowed pixels. The application of this linear function to the previously detected shadowed pixels led to the recovery of information that was previously hidden, thus obtaining at-surface reflectance values both in direct illumination and diffuse illumination conditions.
However, as mentioned above, edge filtering was required to smooth the patched transition between the de-shadowed and the non-shadowed pixels (Figure 10).
Figure 10. Results of the shadow detection and shadow reduction stages. This is the partial result before edge filtering.
The edge map, which is based on the shadow mask, delimited the zone with overcorrections and undercorrections due to the penumbra, and a buffer of two pixels was applied to delimit the transition zone. This buffer was used to clip the smoothed image, which then covered the de-shadowed image, thereby obtaining reasonably good results (Figure 11).

Discussion
The ultra-high spatial resolution of UAV imagery allows the use of twin sets of affordable and useful targets in field campaigns. Images with less than 10 cm/pix allow for enough pixels inside panels of 500 mm × 500 mm for obtaining a central pixel without adjacency contributions. This new opportunity for remote sensing opens the door to the easy use of the invariant color model method [33], using twin panels simultaneously located in shadowed and non-shadowed conditions together with ultra-high spatial resolution sensors.
The detailed imagery and derived DSM allowed a reasonable modelling of the vegetation structures, but the nature of the 2.5D DSM makes it necessary to combine the physical approach with an image-based approach in order to obtain better shadow detection. However, not all shadows were detected, mainly due to the specific complexity of canopy structures and the photogrammetric surface modelling inaccuracies.
Shadow correction based on the linear relationship between the targets located under direct illumination and diffuse illumination led to good results in the extrapolation to all the detected shadows on the scene, where general overcorrections or undercorrections are not observed. On the other hand, there were artifacts in the limit areas of the detected shadows. This problem was tackled with the edge filtering correction. The edge map based on the shadow mask was precise for detecting the artifacts, whereas the buffer accounted for all of them. The kernel filter, which smoothed the penumbra area, was effective for reducing the patching effects, but obviously blurred these areas.
Limitations: This method is designed for ad hoc radiometric correction, as the approach is completely empirical. Therefore, the empirical line parameters are scene-dependent: the scale factor depends on the opacity of the objects that cast the shadows, and the bias factor depends on the atmospheric composition (atmospheric optical depth). This limits the establishment of generalized, fixed parameters to relate shadowed pixels and non-shadowed pixels. However, the method is still useful for common UAV applications, where the area covered is not large enough to contain great atmospheric composition differences; this is usually a reasonable assumption when the flight height does not exceed 120 m (400 ft), as in the current regulations of most European countries [46].
Although the overall enhancement of the image is evident, it is worth noting that not absolutely all of the shadows were recovered. Due to the ultra-high spatial detail of UAV imagery, some shadows remained inside the tree canopy. Additionally, not all the shadows were enhanced with the same quality, because the empirical approach was fitted to the shadows of the tree canopy but not to a closed forest situation; therefore, we followed a conservative trend to avoid overcorrection errors. The de-shadowed areas do not appear patched over the non-shadowed areas, and the smooth transition between both illumination conditions is, in our humble opinion, a strength of the presented methodology.
The shadow detection, shadow reduction and edge filtering workflow uses GIS tools implemented in most GIS software and can thus be automated using model builders. In our case, we used the free MiraMon RS and GIS software [47].

Conclusions
In this study, a method for shadow reduction in UAV imagery was described and tested; the method used well-known RS and image enhancement techniques from the airborne and satellite image processing world combined with the new opportunities of drone imagery in reduced scenarios. The empirical line correction using radiometric references has proven to be a good method for obtaining at-surface reflectance values from UAV imagery. In this work, it has also proven to be an interesting method for obtaining surface reflectance values in shadowed forested areas. Apart from the efficiency of the method for recovering radiometric information, it is a simple and operational method for smoothing the transition in the edge zone by applying a correction in the buffer of the limit area. The high spatial resolution of UAV imagery allows the use of twin sets of affordable reference targets in field campaigns, which is a common requirement and limitation of ad hoc radiometric corrections. This new opportunity for RS opens the door to the easy use of invariant color models, as well as other well-known remote sensing techniques such as shadow detection or edge filtering smoothing.