Using Unmanned Aerial Vehicles in Postfire Vegetation Survey Campaigns through Large and Heterogeneous Areas: Opportunities and Challenges

This study evaluated the opportunities and challenges of using drones to obtain multispectral orthomosaics at ultra-high resolution that could be useful for monitoring large and heterogeneous burned areas. We conducted a survey using an octocopter equipped with a Parrot SEQUOIA multispectral camera over a 3000 ha framework located within the perimeter of a megafire in Spain. We assessed the quality of both the camera raw imagery and the multispectral orthomosaic obtained, as well as the required processing capability. Additionally, we compared the spatial information provided by the drone orthomosaic at ultra-high spatial resolution with an image provided by the WorldView-2 satellite at high spatial resolution. The drone raw imagery presented some anomalies, such as horizontal banding noise and non-homogeneous radiometry. Camera locations showed a lack of synchrony in the single-frequency GPS receiver. The georeferencing process based on ground control points achieved an error lower than 30 cm in X-Y and lower than 55 cm in Z. The drone orthomosaic provided more information in terms of spatial variability in heterogeneous burned areas than the WorldView-2 satellite imagery. The drone orthomosaic could therefore constitute a viable alternative for evaluating post-fire vegetation regeneration in large and heterogeneous burned areas.


Introduction
Natural hazards, such as wildfires, constitute a serious global concern that is expected to increase in the future [1] mainly due to global warming predictions and changes in land use [2,3]. In particular, the increasing severity and recurrence of large forest fires in Mediterranean Basin ecosystems [4] can lead to severe long-term land degradation, including desertification [5,6]. Thus, post-fire monitoring of these systems through different tools should be a priority for management purposes [7].
Advances in geospatial technologies have led to an increase in the utilization of remote sensing techniques [3], which represent a major opportunity for conducting post-fire surveys in large and heterogeneous burned ecosystems [8]. High spatial resolution satellite imagery, such as that provided by Deimos-2, GeoEye-2, QuickBird or WorldView-2 on-board sensors, among others, have been used to assess post-fire regeneration in terms of fractional vegetation cover [8], species richness [9] or the basal area of tree species [10]. Nevertheless, satellite imagery shows certain weaknesses that could limit its applicability in the post-fire monitoring of highly heterogeneous and dynamic areas. First, the revisit periods of satellite platforms cannot be user-controlled for short-term time series monitoring

Study Area
The study area (Figure 1) is a 3000 ha framework located in the central section of a megafire of about 10,000 ha which occurred in a Pinus pinaster stand in Sierra del Teleno (León Province, northwest Spain) in August 2012. The survey framework was representative of the heterogeneity of the fire regime within the perimeter. The study area is dominated by an Appalachian relief with prominent quartzite crests, wide valleys with moderate slopes on the upper two thirds of the study area, and sedimentary terraces on the lower third. The mean annual temperature in the area is 10 °C, with an average rainfall of 650 mm. The understory plant community after the occurrence of the megafire is composed of species such as Halimium alyssoides, Pterospartum tridentatum and Erica australis [43], with abundant regeneration of Pinus pinaster seedlings.

UAV Platform and Multispectral Camera
A FV8 octocopter (ATyges, Málaga, Spain, Figure 2) was chosen to perform the aerial survey of the large burned surface of 3000 ha. This UAV is manufactured entirely from carbon fiber and titanium and weighs 3.5 kg, with a maximum payload mass of 1.5 kg. The eight brushless motors (AXI-ATYGES 2814/22, 260 W, with a maximum efficiency of 85%) are powered by two lithium-ion polymer batteries (rated capacity and voltage of 8200 mAh and 14.8 V, respectively). The UAV has a cruising speed of 7 m·s⁻¹ (10 m·s⁻¹ max), with an ascent/descent rate of 5.4 km·h⁻¹ (10.8 km·h⁻¹ max). The maximum interference-free flight range is 3 km, with a flight duration of 10-25 min depending on the payload and weather conditions. The maximum flight height is 500 m above ground level (AGL). The platform is remotely controlled by a 12-channel MZ-24 HoTT radio transmitter (Graupner, Kirchheim unter Teck, Germany) operating at 2.4 GHz. The UAV is equipped with a micro FPV camera with real-time video transmission at 5.8 GHz to a Flysight monitor. The core component of the UAV electronics is an ATmega 1284P flight controller (Microchip Technology Inc., Chandler, AZ, USA) with an integrated pressure sensor, gyroscopes and accelerometers. The navigation control board is based on an Atmel ARM9 microcontroller and has a MicroSD card socket for waypoint data storage. The GPS module with integrated antenna is a LEA-6 (u-blox, Thalwil, Switzerland). This system allows for autonomous, semi-autonomous and manual takeoff, landing and flight.
A Parrot SEQUOIA multispectral camera was installed underneath the UAV platform. The camera has four 1.2-megapixel monochrome sensors that collect global shutter imagery along four discrete spectral bands [44]: green (center wavelength, CW: 550 nm; bandwidth, BW: 40 nm), red (CW: 660 nm; BW: 40 nm), red edge (CW: 735 nm; BW: 10 nm) and near infrared, NIR (CW: 790 nm; BW: 40 nm). The horizontal (HFOV), vertical (VFOV) and diagonal (DFOV) fields of view of the multispectral camera are 70.6°, 52.6° and 89.6°, respectively, with a focal length of 4 mm. With a flight altitude of 120 m, a ground sample distance (GSD) of 15 cm can be achieved. The camera was bundled with an irradiance sensor to record light conditions in the same spectral bands as the multispectral sensor. The weight of the multispectral camera plus the irradiance sensor is 107 g. The camera stores 16-bit RAW files (based on 10-bit data) during image shooting. ISO value and exposure time were set to automatic. Every image capture setting is saved in a text metadata file together with the irradiance sensor data.
All this information is taken into account during the preprocessing stage to obtain absolute reflectance values for the final product.
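As a rough cross-check of the reported GSD, the ground footprint of a nadir-pointing camera can be derived from its field of view. The sketch below is illustrative only; the 1280-pixel image width is an assumption about the SEQUOIA sensor, not a value stated above.

```python
import math

def gsd_from_fov(fov_deg: float, n_pixels: int, altitude_m: float) -> float:
    """Ground sample distance (m/pixel) for a nadir camera, from the field
    of view along one image axis and the pixel count along that axis."""
    ground_extent = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))
    return ground_extent / n_pixels

# HFOV 70.6 deg, assumed 1280 px across, 120 m AGL:
gsd = gsd_from_fov(70.6, 1280, 120.0)  # roughly consistent with the ~15 cm reported
```

Under these assumptions the estimate comes out slightly above 13 cm/pixel, in the same range as the nominal 15 cm figure.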

UAV Survey Campaign
The aerial survey campaign comprised 100 h of flight time between June and July 2016. All flights (383) were performed within a 6-h window around the solar zenith to maintain relatively constant lighting conditions. Although small variations in environmental conditions were corrected with the irradiance sensor data, flights under severe wind or cloud cover were avoided.
Mikrokopter Tools software was used to plan the flights, allowing the operator to generate an automatic flight route with waypoints depending on the camera's field of view (FOV), the chosen forward and side overlap between images, and the required GSD [45]. A digital elevation model (DEM) was used to keep the same distance AGL in all flight tracks owing to the large difference in altitude (410 m) within the study framework. Flight tracks were uploaded to the UAV for each complete day. The flight height was fixed at 120 m AGL, providing an average ground resolution of 14.8 cm·pixel⁻¹ given the specific camera characteristics. Each flight had an effective duration of 5-6 min (excluding takeoff and landing), with an average speed of 10 m·s⁻¹. Battery change time and the time needed to reach each takeoff site were not computed; however, both time lapses were included in the total flight time of 100 h. The camera trigger interval was set to a platform advance distance of 22.4 m in order to achieve an 80% forward image overlap at the fixed flight altitude. The planned waypoint route provided an 80% side image overlap, and adjacent flights overlapped by at least one flight line. The quality of the raw imagery dataset acquired during the UAV survey was evaluated to search for potentially undesired anomalies, such as: (1) horizontal banding noise (HBN) [46]; (2) non-homogeneous radiometry and issues related to the hot-spot or opposition effect [47]; or (3) blurring effects [48].
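The trigger distance can be approximated from the overlap geometry. This is a minimal sketch assuming the along-track footprint is governed by the camera's vertical FOV; the actual flight planner may use a slightly different footprint model.

```python
import math

def trigger_spacing(fov_deg: float, altitude_m: float, overlap: float) -> float:
    """Along-track distance between camera shots that yields the target
    forward overlap, for a nadir camera with the given along-track FOV."""
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))
    return footprint * (1 - overlap)

# SEQUOIA VFOV 52.6 deg, 120 m AGL, 80% forward overlap:
spacing = trigger_spacing(52.6, 120.0, 0.80)  # ~24 m, close to the 22.4 m used
```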

Image Data Processing
UAV imagery was processed into a multispectral mosaic with Pix4Dmapper Pro 3.0 [49] following the "Ag Multispectral" template. This software integrates computer vision techniques with photogrammetry algorithms [50] to obtain high accuracy in aerial imagery processing [51,52]. Pix4Dmapper Pro computes keypoints on the images and uses them to find matches between images. From these initial matches, the software runs several automatic aerial triangulation (AAT), bundle block adjustment (BBA) and camera self-calibration steps iteratively until an optimal reconstruction is achieved. Then, a densified point cloud is generated to obtain a highly detailed digital surface model (DSM) that is used to generate the reflectance maps. A preprocessing or normalization step was automatically applied to the imagery, in which 16-bit TIFF files (10-bit RAW data) were converted to standard 8-bit JPG files, taking into account the ISO, exposure time and irradiance sensor data.
A high-end computer with a 12-core Intel i7 processor and 64 GB of RAM was used to process the imagery. Most of the processing steps in Pix4Dmapper Pro demand computational resources that grow exponentially as more images are processed simultaneously. Due to software and hardware limitations for very large projects (above 10,000 images), each of the nine projects was split into smaller subprojects. The subprojects could then be merged after completing the AAT-BBA stage for each one, so that only the less demanding subsequent steps had to be run on the merged project. The flight, subproject and project processing workflows are detailed in Figure 3. Radiometric corrections were introduced based on the camera setup parameters and the sun irradiance measured by the irradiance sensor. Initial georeferencing was achieved by introducing camera locations in the AAT-BBA stage. At least ten ground control points (GCPs) evenly distributed per subproject were extracted from aerial orthophotos of the Spain National Plan of Aerial Orthophotography (PNOA) to improve global spatial accuracy. This dataset has a GSD of 25 cm with an accuracy better than 0.50 m in terms of RMSE X,Y [53]. The multispectral outputs (four reflectance maps with a GSD of 20 cm) of the Pix4D projects were mosaicked using ArcGIS 10.3.1 (Esri, Redlands, CA, USA) [54] without applying reflectance normalization, to avoid modifying the reflectance values computed in the radiometric correction process. Geospatial accuracy of the outputs was assessed in terms of the root mean square error (RMSE) in X, Y and Z from the coordinates of 50 targets uniformly arranged throughout the UAV survey framework. The X, Y and Z coordinates of these Control Points (CPs) were measured with a high-accuracy GPS receiver (Spectra Precision MobileMapper 20, with accuracy better than 0.50 m in terms of RMSE X,Y) in postprocessing mode.
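A geospatial accuracy assessment of this kind reduces to computing the RMSE of the check-point residuals per axis. A minimal sketch with hypothetical residuals (the actual 50 CP coordinates are not reproduced here):

```python
import math

def rmse(errors):
    """Root mean square error of a list of residuals (m)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical residuals: orthomosaic coordinates minus GPS-measured CP coordinates.
dx = [0.21, -0.15, 0.30, -0.08]
dy = [-0.12, 0.25, 0.05, -0.22]
dz = [0.40, -0.35, 0.51, 0.10]
rmse_x, rmse_y, rmse_z = rmse(dx), rmse(dy), rmse(dz)
```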

WorldView-2 High Spatial Resolution Satellite Imagery and Image Comparison Statistical Analysis
A WorldView-2 image acquired on 23 June 2016 for the study framework was used to compare the spatial information provided by the UAV platform with high resolution satellite imagery in a heterogeneous burned landscape. The spatial resolution of the multispectral sensor on board the WorldView-2 satellite at nadir is 1.84 m, but the image was delivered by DigitalGlobe resampled to 2 m. This sensor has eight bands in the visible and NIR region of the spectrum [55]: coastal blue (400-450 nm), blue (450-510 nm), green (510-580 nm), yellow (585-625 nm), red (630-690 nm), red edge (705-745 nm), NIR1 (770-895 nm) and NIR2 (860-1040 nm). The raw image was orthorectified with a DEM (accuracy better than 20 cm in terms of RMSE Z) and GCPs extracted from PNOA orthophotos. The image atmospheric correction was conducted with the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm [56]. The HyPARE algorithm implemented in ENVI 5.3 [57] was used to geometrically align the UAV multispectral orthomosaic and the WorldView-2 image, achieving a subpixel RMSE for the UAV data (<20 cm).
The image comparison was performed on the basis of the reflectance values and the Normalized Difference Vegetation Index (NDVI) of the UAV multispectral orthomosaic and the WorldView-2 image. The UAV multispectral mosaic at its original resolution (20 cm) was resampled to a GSD of 1 m (half of the WorldView-2 spatial resolution) and 2 m (the WorldView-2 spatial resolution) with a block average function applied to the input pixels within a set of non-overlapping windows of the required size (5 × 5 and 10 × 10 pixels). The function was computed with ArcGIS 10.3.1. Pearson bivariate correlations between the UAV multispectral mosaic (GSD of 20 cm, 1 m and 2 m) and the WorldView-2 image (GSD of 2 m) were calculated for each comparable band to assess the spatial information provided by each sensor in our survey framework. To determine the reflectance variability between sensors, we computed the variance of the reflectance values in each band of the UAV images (native spatial resolution and resampled) and the WorldView-2 image. For the most heterogeneous surface within the survey framework, which covers 1.5 ha, basic statistics were computed on the UAV (at native resolution and at 2 m) and WorldView-2 NDVI maps to compare the potential of these products for post-fire vegetation monitoring.
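The block-average resampling and band-wise Pearson correlation described above can be reproduced with a few lines of NumPy. This is a sketch on synthetic data, not the ArcGIS function actually used:

```python
import numpy as np

def block_average(img: np.ndarray, factor: int) -> np.ndarray:
    """Aggregate a 2-D reflectance array over non-overlapping
    factor x factor windows (e.g. factor=10 takes 20 cm to 2 m GSD)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor  # drop edge pixels that don't fill a window
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

uav = np.random.default_rng(0).random((100, 100))  # toy 20 cm band
uav_2m = block_average(uav, 10)                    # each 10 x 10 window -> one 2 m pixel
# Pearson r against a toy linearly-related "satellite" band:
r = np.corrcoef(uav_2m.ravel(), (uav_2m * 0.9 + 0.02).ravel())[0, 1]
```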

Raw Imagery Dataset Quality
From the 383 UAV flights, we acquired 45,875 images per band, for a total of 183,500 raw images representing approximately 430 GB of data. The normalized UAV images had balanced contrast. However, the red channel showed some saturation over surfaces highly reflective at this wavelength, such as forest tracks in our study area (Figure 4).
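The dataset bookkeeping follows directly from these figures. A quick arithmetic check (the per-image size and per-flight count below are derived averages, not values reported above):

```python
flights = 383
images_per_band = 45_875
bands = 4

total_images = images_per_band * bands        # 183,500 raw images
mb_per_image = 430 * 1024 / total_images      # average raw file size, MB
shots_per_flight = images_per_band / flights  # camera triggers per flight
print(total_images, round(mb_per_image, 1), round(shots_per_flight))
```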

Slight horizontal banding noise (HBN) was observed within the four channels of the camera, especially in the green channel (Figure 5). The banding effect was more noticeable at the top and bottom of the image, where differences in the digital levels of alternate rows representing the same object were higher than 10%.
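One simple way to quantify this kind of banding is to compare alternate rows over a radiometrically uniform target. A hedged sketch (this is not the diagnostic actually used in the study):

```python
import numpy as np

def banding_ratio(img: np.ndarray) -> float:
    """Mean relative difference between alternate rows; over a uniform
    target, values above ~0.10 match the >10% banding described above."""
    even, odd = img[0::2], img[1::2]
    n = min(len(even), len(odd))
    return float(np.mean(np.abs(even[:n] - odd[:n]) / np.maximum(odd[:n], 1e-9)))

flat = np.full((8, 4), 200.0)   # perfectly uniform toy image
banded = flat.copy()
banded[0::2] *= 1.12            # alternate rows 12% brighter: synthetic HBN
```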

Another undesired effect observed across the imagery was non-homogeneous radiometry related to the Bidirectional Reflectance Distribution Function (BRDF) [47]. In particular, a specific area of the imagery systematically showed higher reflectance values than the remaining areas (Figure 6). This radiometric anomaly is commonly denominated the hot-spot or opposition effect [58,59] and appears as a consequence of the alignment of the camera and sun positions [60]. For its part, the image dataset did not exhibit the blurring effects usually associated with camera shaking [15].

Multispectral Mosaic Processing and Product Quality
The processing of the multispectral orthomosaic was labor-intensive and time-consuming because of the large size of the surveyed area [19] and the ultra-high spatial resolution of the dataset [11]. Each subproject took 3-6 h to process the AAT, BBA and camera self-calibration. Point cloud densification and generation of the reflectance maps took up to 14 h for each project. The total amount of time required to process the whole dataset was about 320 h (20 days) with the available processing resources, including time lost to software failures.
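The reported total is consistent with the per-stage figures. A quick sanity check using the midpoint of the 3-6 h range (the exact per-subproject times are not reproduced above):

```python
subprojects, projects = 43, 9
aat_bba_hours = subprojects * 4.5            # AAT/BBA/self-calibration, 3-6 h each (midpoint)
densify_hours = projects * 14                # densification + reflectance maps, up to 14 h
total_hours = aat_bba_hours + densify_hours  # ~320 h once failures/retries are added
```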

For each of the 43 subprojects, the 3D reconstruction algorithm (AAT, BBA and self-calibration) aligned between 95% and 99% of the images on the basis of more than 10,000 keypoints extracted from each image, with over 5500 keypoints matching at least two other adjacent images. The green and NIR channels obtained the highest number of matches, whereas the red channel systematically had the lowest. The total number of 2D keypoint observations for BBA in each subproject was about 9 million, whereas the number of 3D matching points was 1.5 million, with a mean reprojection error of 0.2-0.3 pixels. The large forward and side overlap provided high accuracy in the keypoint matching step between adjacent images, as [45] pointed out. Changes between the nominal and final parameters defining the geometrical model of the camera were as low as 0.01%. For its part, the point cloud densification at the merge step of the subprojects obtained between 6 × 10⁶ and 7 × 10⁶ densified 3D points. For each of the nine projects, four reflectance maps (green, red, red edge and NIR) were obtained with a resampled GSD of 20 cm/pixel. Some areas of these maps were excluded (Figure 7) due to reflectance anomalies caused by USB disconnections between the camera and the irradiance sensor.
Initial georeferencing was achieved by introducing the UAV's GPS positions taken at each camera shot in the bundle block adjustment process within the Pix4D workflow. The precision reported by Pix4D, calculated as the root mean square error (RMSE), was between 1.5-3 m in X-Y and between 2-4 m in Z.
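The mean reprojection error reported by the bundle adjustment is simply the average pixel distance between the observed keypoints and the reprojection of their triangulated 3D positions. A minimal pinhole-camera sketch with toy values (not the Pix4D implementation):

```python
import numpy as np

def mean_reprojection_error(points_3d, observed_px, K):
    """Mean pixel distance between observed keypoints and the projection of
    their triangulated 3-D positions through a pinhole camera at the origin."""
    proj = (K @ points_3d.T).T         # camera frame -> image plane (homogeneous)
    proj = proj[:, :2] / proj[:, 2:3]  # perspective division
    return float(np.mean(np.linalg.norm(proj - observed_px, axis=1)))

K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])  # toy intrinsics
pts = np.array([[0.5, 0.2, 10.0], [-1.0, 0.4, 12.0]])          # toy 3-D points (m)
obs = np.array([[690.0, 500.0], [556.7, 513.3]])               # toy detected keypoints (px)
err = mean_reprojection_error(pts, obs, K)                     # sub-pixel, as in the text
```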
The final georeferencing of the subprojects, achieved by using ground control points (GCPs) extracted from PNOA orthophotos, reached an RMSE lower than 30 cm in X-Y and lower than 55 cm in Z. Horizontal and vertical accuracy improved by at least 80% and 73%, respectively, over the initial georeferencing after providing evenly distributed GCPs throughout the UAV survey framework.
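The improvement percentages follow from the best-case (smallest) initial RMSE of each range, so they are lower bounds. A quick check:

```python
def improvement(initial_rmse: float, final_rmse: float) -> float:
    """Fractional reduction in RMSE between georeferencing stages."""
    return 1 - final_rmse / initial_rmse

xy = improvement(1.5, 0.30)  # 0.80  -> the "at least 80%" figure in X-Y
z = improvement(2.0, 0.55)   # 0.725 -> the "at least 73%" figure in Z (rounded)
```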

Comparison of the Spatial Information Provided by UAV and WorldView-2 Imagery
Higher Pearson's r values were obtained for each spectral band as the UAV mosaic resolution approached that of the WorldView-2 image (2 m) (Table 1). The correlation between the two remote sensing platforms at each resolution was stronger in the visible region of the spectrum.
Table 1. Pearson correlation results between native and resampled UAV multispectral mosaics and the WorldView-2 multispectral image.
The largest variance in the reflectance values of each band was found for the UAV orthomosaic at 20 cm spatial resolution (Table 2). The variance of the UAV orthomosaic at 2 m spatial resolution was similar to that of the WorldView-2 image. The comparison between the UAV and WorldView-2 NDVI maps derived from the imagery at the original resolution of each sensor, for a heterogeneous surface of 1.5 ha within the survey framework, revealed greater variability in the UAV pixel values (Figure 8A,B). The horizontal structure of the vegetation observed in this area (Figure 9A) can be identified in the UAV mosaic (Figure 9B), but not in the WorldView-2 image (Figure 9C). The UAV NDVI map resampled to 2 m presented variability similar to the WorldView-2 image (Figure 8B,C).
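NDVI and the variance comparison used above are straightforward to compute. A sketch on synthetic reflectances (hypothetical values) showing how aggregation to a coarser GSD suppresses the variability that the native-resolution UAV product retains:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from reflectance bands."""
    return (nir - red) / np.clip(nir + red, 1e-9, None)

rng = np.random.default_rng(1)
red = rng.uniform(0.05, 0.15, (50, 50))  # toy red reflectance, 20 cm GSD
nir = rng.uniform(0.20, 0.50, (50, 50))  # toy NIR reflectance
m = ndvi(nir, red)

# Block-average each 10 x 10 window to mimic the 2 m product:
coarse = m.reshape(5, 10, 5, 10).mean(axis=(1, 3))
# Aggregation smooths the map: coarse variance drops well below native variance.
```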

Discussion
This study evaluated the strengths and limitations of using a rotor-based UAV equipped with a novel multispectral camera (Parrot SEQUOIA) to conduct a field survey of a large (3000 ha) and heterogeneous burned surface. Our results indicate that the ultra-high spatial resolution UAV multispectral orthomosaic represents a valuable tool for post-fire management applications at fine spatial scales [18]. However, due to the ultra-high spatial resolution of the data and the large size of the surveyed area, data processing was highly time consuming.
Multispectral cameras onboard UAVs provide countless opportunities for remote sensing applications, but the technological limitations of these sensors [46] require evaluating the quality of the captured raw imagery, particularly for novel sensors. In this study, we found that the raw imagery captured by the Parrot SEQUOIA multispectral camera presented some undesired radiometric anomalies. In the red channel we observed sensor saturation over highly reflective surfaces. This effect was not induced by the radiometric downsampling from 10-bit to 8-bit performed by Pix4D during processing, because it was present both in the raw (10-bit) and in the normalized (8-bit) images. The horizontal banding noise observed within the four channels of the camera is a common artifact of CMOS (complementary metal oxide semiconductor) rolling shutter sensors [46]. However, the Parrot SEQUOIA uses a global shutter system, so this effect should not be significant in this multispectral sensor. To our knowledge, this camera has not been used in previous scientific studies and, therefore, this issue has not been reported so far. The issues related to the Bidirectional Reflectance Distribution Function (BRDF) effect are magnified in sensors with a wide field of view [61,62], such as the Parrot SEQUOIA. For its part, the hot-spot or opposition effect was more apparent at shorter wavelengths, as also highlighted by [47]. Some corrections to mask this effect have been proposed [59]; they must be applied individually to each image, taking into account the time and position of the image acquisition, the image orientation and the solar position (azimuth and elevation), following several photogrammetric steps. Thus, the correction of this radiometric anomaly, as well as of the BRDF effect, is very challenging and time consuming, and becomes impractical when dealing with large imagery datasets [58].
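As a geometric illustration of why the hot spot can enter the frame at all, consider a nadir-pointing camera: the opposition direction lies at a view angle equal to the solar zenith angle, so it falls inside the image whenever that angle is smaller than the half field of view. A hedged sketch (this simplified test ignores camera tilt and terrain):

```python
def hotspot_in_frame(solar_elevation_deg: float, half_fov_deg: float) -> bool:
    """True when the opposition (hot-spot) direction falls inside the image
    of a nadir-pointing camera: solar zenith angle < half field of view."""
    solar_zenith = 90.0 - solar_elevation_deg
    return solar_zenith < half_fov_deg

# Midday summer sun (elevation ~65 deg) with the SEQUOIA's 70.6 deg HFOV:
visible = hotspot_in_frame(65.0, 70.6 / 2)  # hot spot can appear in frame
```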
The absence of blurring effects in our dataset could be explained by the increased flight stability that rotor-based UAVs offer over fixed-wing UAV platforms, as they also exhibit fewer vibrations [13,29]. Moreover, the Parrot SEQUOIA camera was attached to the platform with a rubber damper to minimize vibrations, and the camera acquired imagery with the focal length set to infinity and a fast shutter speed [15], preventing the occurrence of this effect. The USB disconnections between the camera and the irradiance sensor could be associated with a poor physical connection. However, the disconnections did not represent a major problem for the irradiance sensor, considering that it provided complete records for more than 90% of the survey framework under atmospheric conditions that varied between adjacent flights, even when these were performed on different days; data acquisition with a rotor-based UAV platform could not be carried out in a single run over such a large area due to restrictions in the flight range [12].
Compared with the delivery times of on-demand commercial satellite imagery and the usual lead times for implementing post-fire management strategies within large burned areas [63], the length of the flight campaign (17 days) and of the laboratory processing tasks (20 days) was reasonable. The computational demand of the project was very high due to the large amount of raw image data collected (183,500 raw images) and its ultra-high spatial resolution. The size of this dataset caused management difficulties in the laboratory in terms of data storage, backup, and processing capability. This circumstance has already been reported by [20], with data transfer between research teams restricted by physical storage units or by processing options such as cloud computing. This computational demand may limit the execution of this type of project to users with access to high-end computers for processing raw imagery. However, recent advances in computational capacity would allow a large-scale implementation of this type of workflow [64]. Other remote sensing products with lower processing requirements, such as on-demand satellite imagery, offer pan-sharpened resolutions increasingly close to what can be obtained with multispectral sensors on board UAVs. However, according to [65][66][67], pan-sharpening techniques present several problems, such as the appearance of spatial impurities or radiometric distortions in the merged product. This type of anomaly could be a serious obstacle to achieving the highest radiometric and spatial accuracy for fine-scale applications. On the other hand, we consider that for this type of study a UAV is more versatile than other remote sensing platforms, allowing flights to be carried out in the immediate post-fire situation given the control it provides over revisit time [18]. 
Another possible alternative to this highly demanding processing framework could be to fly small, non-adjacent areas within the study area to reduce the campaign effort; however, this would not yield a multispectral product that allows extrapolation of, for example, recovery models to parts of the study area where no flights were carried out. The initial georeferencing precision (RMSE X,Y between 1.5 and 3 m and RMSE Z between 2 and 4 m) is not optimal, considering that some authors, such as [51], regard an X-Y error greater than two times the GSD and a Z error greater than three times the GSD as low accuracy. Single-frequency GPS receivers, such as the one used in the platform, with their light antennas and chip power limitations, typically show important drift over time. This is particularly relevant in our case, since every subproject included flights carried out at different times or even on different days due to the large size of the surveyed area. Current research on installing dual-frequency GPS onboard UAV platforms [68] would allow direct georeferencing of the generated geomatic products without the need for GCPs [15]. The geospatial accuracy of the final georeferencing achieved by using GCPs is a good result (RMSE X,Y < 30 cm and RMSE Z < 55 cm), considering the great extent of the UAV survey framework and taking into account that some studies have reported a decrease in accuracy with large survey areas [64]. Other studies, such as that conducted by [11], obtained similar geospatial accuracy, but in our case the error is closer to the lower limit, which approximately matches the pixel size [69]. This accuracy was strongly influenced by the even distribution of the GCPs throughout the UAV survey framework [70,71].
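The GSD-based accuracy criterion of [51] can be made concrete with a small sketch. With a 20 cm GSD, the thresholds are 40 cm for X-Y and 60 cm for Z, so the final RMSE values (< 30 cm and < 55 cm) fall on the acceptable side. The helper below is a hypothetical illustration of that check (the function name and return format are ours; the residual values in the usage are invented for the example):

```python
import numpy as np

def georef_accuracy(residuals_xy, residuals_z, gsd_m):
    """Classify georeferencing accuracy against the GSD-based
    thresholds of [51]: "low" if RMSE_XY > 2*GSD or RMSE_Z > 3*GSD.

    residuals_xy: (n, 2) array of X/Y check-point residuals in metres.
    residuals_z:  (n,) array of Z check-point residuals in metres.
    gsd_m:        ground sampling distance in metres.
    """
    xy = np.asarray(residuals_xy, dtype=float)
    z = np.asarray(residuals_z, dtype=float)
    # Horizontal RMSE combines the X and Y components of each residual.
    rmse_xy = np.sqrt(np.mean(np.sum(xy ** 2, axis=1)))
    rmse_z = np.sqrt(np.mean(z ** 2))
    low = (rmse_xy > 2.0 * gsd_m) or (rmse_z > 3.0 * gsd_m)
    return rmse_xy, rmse_z, ("low" if low else "acceptable")
```

Applied to the initial precision reported above (RMSE X,Y of 1.5-3 m against a 40 cm threshold), this check would classify the direct single-frequency GPS georeferencing as low accuracy, while the GCP-based result passes.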
Within the framework of the comparison of the spatial information provided by the UAV and WorldView-2 imagery, the higher correlations obtained when the UAV orthomosaic was resampled to match the WorldView-2 image resolution confirm that, in the first successional stages of vegetation in heterogeneous burned areas, the highest spatial resolution UAV mosaic (20 cm) does not provide redundant information [12] relative to the satellite image. In this case, the ground variability associated with small vegetation patches occurs at a finer scale than the coarser pixel sizes. Moreover, the stronger correlation between the UAV and WorldView-2 imagery found in the visible region of the spectrum was probably due to the similar relative spectral response of the two sensors in that region [44,55]. The NDVI map comparison between the UAV and WorldView-2 imagery, conducted on a heterogeneous surface within the UAV survey framework, confirmed that coarser resolution satellite imagery cannot represent the spatial variability and patterns of areas characterized by very small vegetation patches [12]. The larger variance in reflectance values for each band of the highest spatial resolution UAV orthomosaic indicates that this product may be able to capture fine-scale ground patterns thanks to the greater spatial information provided by the dataset, improving the interpretation of landscape features. Some authors, such as [18], have stated that at this spatial scale variations in sun azimuth and elevation create variable shadow features throughout the day. This factor may introduce reflectance variability and, therefore, distort the calculation of spectral indices in ultra-high spatial resolution images. For small targets, this effect is less significant in satellite imagery given its pixel size. 
However, within the NDVI map comparison framework, the sun azimuth and elevation of the UAV flight approximately matched those of the WorldView-2 capture, and the variability in the reflectance values of the two sensors was approximately the same as for the entire study area.
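The loss of fine-scale variability when aggregating the 20 cm UAV orthomosaic to a coarser satellite grid can be illustrated with a minimal sketch. The function names and the block-averaging aggregation are our assumptions for illustration, not the resampling method used in the study's processing chain:

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), with safe division."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def block_resample(band, factor):
    """Aggregate a band to a coarser grid by block averaging,
    e.g. 20 cm UAV pixels to a grid 10x coarser (factor=10).
    Any rows/columns that do not fill a whole block are dropped."""
    h, w = band.shape
    h2, w2 = h - h % factor, w - w % factor
    blocks = band[:h2, :w2].reshape(h2 // factor, factor,
                                    w2 // factor, factor)
    return blocks.mean(axis=(1, 3))
```

On a heterogeneous scene, the variance of the block-averaged band is lower than that of the original, which is the quantitative counterpart of the coarser imagery's inability to represent very small vegetation patches.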

Conclusions
(1) The raw imagery acquired by the Parrot SEQUOIA multispectral camera presented some undesirable anomalies, such as horizontal banding noise and non-homogeneous radiometry across the image. Moreover, the irradiance sensor disconnections induced radiometric anomalies across a small area of the multispectral mosaic that had to be discarded.
(2) The 16-bit imagery acquired during the UAV flights over the 3000 ha survey framework represents a large volume of data before processing into a multispectral orthomosaic, due to its ultra-high spatial resolution and the large size of the surveyed area. Nevertheless, this spatial resolution, which cannot be achieved with satellite platforms, could be crucial for developing spatial products to be used in post-fire management decision-making.
(3) Data processing was very labor-intensive, taking about 320 h to obtain the final multispectral orthomosaic. Because of the large imagery dataset generated in a UAV survey of a large area, processing must be subdivided regardless of the available processing capability. The geospatial accuracy obtained for the UAV multispectral orthomosaic was high (RMSE X,Y < 30 cm and RMSE Z < 55 cm), considering the large extent of the surveyed area and the spatial resolution of the dataset.
(4) The spatial information provided by the ultra-high spatial resolution UAV multispectral orthomosaic was not redundant, in these large and heterogeneous burned areas, in comparison with high spatial resolution satellite imagery such as that provided by WorldView-2. The UAV orthomosaic could therefore improve the analysis and interpretation of fine-scale ground patterns.