1. Introduction
Salt marshes are highly complex systems with high ecological value and abundant ecosystem services [1,2]. Salt marshes protect coastal areas from floods and storms [3,4], prevent coastal erosion [5], store significant amounts of organic carbon [6], recycle nutrients, and remove pollutants, thus improving habitat quality and maintaining a high level of productivity in habitats with rich biodiversity [7,8].
Despite their importance, up to 70% of the world's salt marshes were lost during the 20th century [8], mostly due to extensive anthropogenic land cover changes that have accelerated marsh degradation. Climate change is now making these threats much more severe [5,9,10], with sea-level rise (SLR) probably being the greatest current threat to salt marshes [8]. Rising local sea levels could put salt marshes at risk of drowning, depending on SLR scenarios [11]. These habitats may compensate through natural mechanisms that maintain their elevation above local sea level [5,12]. These mechanisms include biophysical interactions between plants and soil and local sediment dynamics [12]. Nevertheless, natural events, such as inundations, and human activities, such as changes in land use, may destabilize these mechanisms [3,12,13], compromising the ability of salt marshes to adapt to future SLR scenarios [14].
Fortunately, conservation efforts, such as the implementation of the Ramsar Convention [15], have slowed the loss of salt marshes during the past few decades. However, further efforts are still needed to improve the likelihood of salt marsh survival, requiring an interdisciplinary approach to understanding the underlying mechanisms [16,17]. Field databases required for modeling such processes need to be extensive (e.g., sediment availability, accurate topography, distribution, and vegetation productivity) and of high quality [11,18,19]. Due to the great accuracy needed for modeling coastal processes [20], the difficulties of accessing these environments, and the disturbance of the sediment during sampling, in situ monitoring is still problematic in salt marshes [21]. Remote sensing (RS) techniques have proven to be an excellent tool for gathering spatial environmental data [22,23]. Traditional platforms include satellite or aerial systems, which are frequently utilized for regional-level studies, such as the mapping of tidal marshes [24], the monitoring of vegetation cover [25,26], and coastal management [27]. However, the spectral, spatial, and temporal resolution of satellite images is constrained, making them generally unsuitable for modeling ecological processes [28,29]. Unmanned aerial vehicles (UAVs) are bridging the high spectral, spatial, and temporal resolution gap left by satellites, enabling the development of rapid and affordable approaches. Current UAV sensors include high-resolution photogrammetric cameras, while most other techniques effectively applied on conventional RS platforms (e.g., thermography, multispectral, LiDAR, and hyperspectral) are also available.
Three RS methods have great potential for high-quality monitoring of salt marshes. (1) Photogrammetry successfully creates orthorectified maps (i.e., orthomosaics) and topographic products using structure-from-motion (SfM) methods [30,31]. (2) Light detection and ranging (LiDAR) gathers highly reliable 3D point clouds for high-resolution topography modeling and creates digital elevation models (DEM) from digital surface models (DSM) through point cloud classification techniques [32,33]. (3) Multispectral techniques offer useful data for vegetation mapping [34].
UAV-based SfM photogrammetric methods have proven to be particularly effective in mapping marsh surfaces and calculating canopy height [35,36,37]. Airborne LiDAR has been shown to enhance habitat classification for wetland areas [38,39,40]. However, a significant obstacle to mapping and modeling salt marshes is the accuracy of elevation data [20]. On the one hand, slight elevational variations (in the order of centimeters) can have a significant impact on plant zonation, which affects biomass and species distribution [41]. On the other hand, the precision of ground elevation measurements (field and LiDAR) below thick vegetation cover is limited by the uneven ground surface and the extremely dense covers typical of salt marshes [42,43]. The accuracy of LiDAR-derived DEMs has been improved by up to 75% using custom DEM-generation techniques [42], by lowering the root mean square error (RMSE) with species-specific correction factors [44], by adjusting LiDAR-derived elevation values with aboveground biomass density estimations [45], or by integrating multispectral data during processing [46].
LiDAR can identify structures at fine spatial scales, which is important for monitoring and modeling elements and features in irregular and dense canopy environments such as salt marshes, offering great potential for studying heterogeneous surface environments. Grassland [47,48], forest, and agricultural vegetation monitoring [49,50] have all demonstrated the effectiveness of UAV-LiDAR. To date, the quality of mapping ground elevation and vegetation characteristics of salt marshes with UAV-LiDAR technology has only been evaluated once [51], and the same goes for assessing the accuracy of UAV-LiDAR and UAV-photogrammetry in determining elevation and vegetation features in salt marshes [52]. It is worth mentioning that results from SfM-based photogrammetry and LiDAR-based techniques can be compared because they are conceptually independent. Photogrammetric processing reconstructs models from images, which involves interpolating what is not visible on the surface. LiDAR, in contrast, is an active sensor whose laser beams can penetrate the spaces between features and pick up small details, thereby incorporating the 3D information of the scene into the model. Pinton et al. [52] demonstrated that LiDAR technology generates more precise salt marsh DEMs and DSMs than photogrammetry-based approaches, and improves habitat classification. Nevertheless, more salt marshes with a wider range of vegetation heights and densities must be tested to confirm this effectiveness. An assessment of the effects of flight settings on the laser beam penetration of salt marsh vegetation is also needed.
The salt marshes of Cádiz Bay Natural Park (CBNP) are an excellent example of Atlantic tidal wetlands in the south of Europe. In addition to being a Ramsar site, a Special Area of Conservation (SAC), and a Special Protection Area (SPA), Cádiz Bay was designated a Natural Park in 1994. This system lies on an important bird migration route and is the southernmost tidal wetland in Europe. Additionally, due to its geographic configuration and location, it is particularly susceptible to the impacts of climate change [53]. This makes the salt marshes of Cádiz Bay an excellent natural laboratory for the study of climate change effects on tidal wetlands.
The main goal of this study is to understand the performance of UAV technologies in salt marshes. We will assess the benefits and drawbacks of using UAV-LiDAR and UAV-photogrammetry to create precise digital models (DEM and DSM), and the related spatial accuracy will also be evaluated. Additionally, the effectiveness of supplementary multispectral data for habitat classification will be assessed. The capability of canopy penetration and the accuracy of canopy height model estimations are explored using several LiDAR sensor setups. Our findings will establish the conditions for standardizing the application of UAV technology in the study of salt marshes and will provide the first data for modeling the future responses of the Bay of Cádiz to SLR scenarios.
2. Materials and Methods
2.1. Site Description
The study area is located in Cádiz Bay, on the southwestern Atlantic coast of Spain (Figure 1). This bay represents the southernmost example of the European coastal wetlands, right at the intersection between the Mediterranean Sea and the Atlantic Ocean and between the European and African continents. The most representative habitat of Cádiz Bay is the tidal marsh, which occupies large extensions of the bay [54].
Cádiz Bay is divided into two waterbodies. A narrow strait connects an inner shallow basin, with a mean depth of around 2 m, to a deeper external basin, with depths of up to 20 m, characterized by sandy beaches. The inner basin is a sheltered area protected from oceanic waves [55]. The study area is situated in the northwest corner of the inner basin (NE zone, 36°30′59.2″N 6°10′14.7″W), in front of the salina of San José de Barbanera. The intertidal system includes natural salt marshes, salinas, mudflats, and a complex network of tidal channels (Figure 1).
Cádiz Bay has a mesotidal, semi-diurnal tidal regime, with a mean tidal range (MTR) of 2.3 m, reaching up to 3.7 m during spring tides [56]. The vegetation communities describe a typical mid-latitude salt marsh zonation [57], which can be divided into three main horizons depending on vegetation types and elevation ranges: upper, medium, and low marsh [54]. Unfortunately, in most cases, the upper marsh is interrupted by the protective walls of the salinas, so the most representative horizons of the natural salt marshes of Cádiz Bay are the medium and low ones. The medium marsh is dominated by Sarcocornia spp. (mainly S. fruticosa and S. perennis) and other halophytic species in lower abundance (Figure 2), and the low marsh is mainly dominated by Sporobolus maritimus. The lowest zones of the intertidal flats are colonized by sequential belts of the seagrasses Zostera noltei and Cymodocea nodosa and small patches of Zostera marina [58].
The sampling site was selected according to the width of the salt marsh vegetation belt and ease of access, as it is the best example of salt marsh plant zonation in the bay and is easily accessed by car. The conclusions of this work are expected to be directly applicable to other tidal salt marshes across the world, given that low and medium tidal marshes usually present comparable structural properties [59].
2.2. UAV and Sensors
The drone service of the University of Cádiz (https://dron.uca.es/vehiculos-aereos/, accessed on 25 April 2022) provided all of the equipment and sensors used for this work. The UAV used was a DJI Matrice 300 RTK quadcopter, which has on-board RTK (real-time kinematic) positioning technology. The RTK records accurate GPS information during the flight, providing up to centimeter-level accuracy in geopositioning. The sensors mounted on the UAV were the DJI Zenmuse P1 photogrammetric sensor, the DJI Zenmuse L1 LiDAR, and the MicaSense RedEdge-MX Dual multispectral camera (Table S1). The missions were planned with the DJI Pilot application.
The DJI Zenmuse P1 RGB photogrammetric sensor supports 24 mm, 35 mm, and 50 mm fixed-focus lenses. For this work, the 35 mm fixed-focus lens was used, which, together with the 45 Mp full-frame sensor, provided an estimated ground sampling distance (GSD) of 1.26 cm/pixel. This sensor offers 0.03 m horizontal and 0.05 m vertical accuracy without deploying ground control points (GCPs).
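As a plausibility check (a sketch: the ~4.4 µm pixel pitch follows from the 45 Mp full-frame format, while the ~100 m flight altitude is our assumption, actual altitudes being set per mission in Table 1), the GSD follows from the pixel pitch \(p\), the flight height \(H\), and the focal length \(f\):

\( \mathrm{GSD} = \dfrac{p\,H}{f} = \dfrac{4.4\ \mu\mathrm{m} \times 100\ \mathrm{m}}{35\ \mathrm{mm}} \approx 1.26\ \mathrm{cm/pixel} \)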
The DJI Zenmuse L1 LiDAR sensor integrates a Livox LiDAR module and a high-precision IMU with a 20 Mp RGB camera (24 mm focal length, mechanical shutter) on a stabilized 3-axis gimbal. When the RGB camera is enabled, images are collected that can be used to assign a color to each point of the cloud generated by the LiDAR; when an adequate overlap is set, the images can also be used to build an orthomosaic. The Livox LiDAR module has a maximum detection range of 450 m at 80% reflectivity, achieves a point cloud data rate of up to 240,000 points/s, and allows up to three returns per laser beam. The laser wavelength is 905 nm. The L1 LiDAR sensor supports two scan modes: repetitive and non-repetitive (Figure 3). The repetitive scan mode executes a regular line scan. The non-repetitive pattern is an accumulative process in which the area scanned inside the field-of-view (FOV) increases with integration time, raising the probability of object detection within the FOV. The sensor can capture data from a nadir or an oblique position. In a nadir flight, data are captured with the sensor axis oriented vertically. In the oblique configuration, data are captured with the sensor tilted at an angle with respect to the vertical, and the sensor scans the area up to five times, changing the perspective from which the data are captured.
The MicaSense RedEdge-MX Dual sensor is a two-camera multispectral imaging system with a total of 10 bands (five per camera), sampling the electromagnetic spectrum from blue to near-infrared. Two bands are centered in the blue (444 and 475 nm), two in the green (531 and 560 nm), two in the red (650 and 668 nm), three in the red edge (705, 717, and 740 nm), and one in the near-infrared (842 nm). The two-camera system is connected to a downwelling light sensor (DLS 2), which is used to correct for global lighting changes during the flight (e.g., clouds passing over the sun) and for sun orientation.
2.3. Flight Campaigns
The study area was surveyed in two consecutive campaigns at the end of summer and the beginning of autumn of 2021.
2.3.1. September Campaign
The first campaign was performed on 8 September 2021, with a low tide of 1.4 m LAT. The campaign included three missions, covering an area of approximately 20 ha (yellow polygon, Figure 1). The first mission collected data with the DJI Zenmuse P1 photogrammetric sensor, while the other two collected data with the DJI Zenmuse L1 LiDAR, changing some configurations between missions (see Table 1 for mission configuration details). The two LiDAR missions were programmed with the repetitive scanning mode and double returns, operating at a frequency of 240 kHz. The altitude of the LiDAR missions was set to obtain adequate point clouds rather than an orthomosaic reconstruction. Nevertheless, the lateral overlap for the second LiDAR mission was increased to 70% to allow for the generation of the corresponding orthomosaic.
2.3.2. October Campaign
The second campaign was performed on 22 October 2021, with a low tide of 1.3 m LAT. This campaign included nine missions: one using the MicaSense RedEdge-MX Dual sensor and eight using the LiDAR (Table 1). The eight LiDAR missions comprised only four LiDAR configurations, each duplicated for the calibration trial (Table 2; see Section 2.6).
The area covered in October was much smaller (approximately 4.5 ha; red polygon, Figure 1), but still representative of the system. The reduction was necessary to limit the processing time for the collected multispectral (MS) data.
The LiDAR missions had the aim of evaluating the best sensor setting combination for an optimum accuracy/processing time balance. The aspects evaluated included flight time, captured LiDAR data size, and the accuracy and spatial resolution of the deliverables. The sensor settings manipulated were scan mode (repetitive or non-repetitive) and sensor orientation (nadir or oblique) (Table 2). The missions were repeated for the calibration trial (see Section 2.6).
2.4. Data Processing
Orthomosaics are generated through photogrammetric processing of images captured either by the Zenmuse P1 or by the Zenmuse L1 LiDAR sensor. Digital models can be obtained from the photogrammetric processing of images or from LiDAR processing of point cloud data (Figure 4). This section summarizes both types of processing, photogrammetric and LiDAR, as well as the methods to generate the multispectral masks and the digital models. Visualization and handling of raster deliverables were always done with the free and open-source software QGIS.
2.4.1. Photogrammetric Processing
The Pix4Dmapper software [60], which transforms the images into orthomosaics and digital models, automatically implements the three steps of the structure-from-motion (SfM) algorithm workflow [30] (Figure 4, Table 3). In the first step, the scale-invariant feature transform (SIFT) identifies key points across multiple images. The second step reconstructs a low-density 3D point cloud based on camera positions and orientations, and densifies the cloud with multi-view-stereo (MVS) algorithms. The third step is the transformation, georeferencing, and post-processing of the dense point clouds, producing the orthomosaics and the corresponding digital models. The ground sample distance (GSD) expresses the spatial resolution of the products in cm/pixel.
The Zenmuse L1 LiDAR sensor captures both image and point cloud datasets. Images can thus undergo photogrammetric processing to generate orthomosaics and digital models. However, when processing the RGB images from the Zenmuse L1, the second step of the SfM workflow is replaced by the directly captured LiDAR 3D point cloud. Unfortunately, Pix4Dmapper does not allow imported point clouds to be edited; therefore, in those cases, the resulting DSM and DEM may contain larger errors and imperfections.
2.4.2. LiDAR Processing
DJI Terra software performs the preliminary processing of the raw LiDAR data [61], which is required to produce a georeferenced, true-color, dense 3D point cloud for the next steps (Figure 4, Table 3).
After pre-treatment, these datasets go through three major processing steps (Figure 4), carried out using the Global Mapper LiDAR module [62] (Table 3). First, to increase the accuracy of the final products, the point cloud is filtered and edited to remove artefacts and signal noise. With greater scan angles, a laser pulse travels a longer path, leading to biased measurements [63]; therefore, the primary filtering step was to reduce the sensor's initial −35°/35° range of scan angles to a more appropriate range of −26°/26°. The second phase is the classification of the points. The algorithms use geometric positions relative to nearby points to assign the classes (see the Point Cloud Classification Section). If multispectral data are available, vegetation masks can be created and imported into the procedure; by separating vegetated environments, these masks help with the accurate classification of plant points (see Section 2.4.3). The third step is the generation of the digital models. Using data interpolation, this step reconstructs the ground surface, resulting in the corresponding DEMs and DSMs.
The difference in elevation between the DSM and the DEM can be attributed to the height of the canopy, as no items other than plants were present. Thus, using a geographic information system (e.g., Global Mapper or QGIS), canopy height models (CHM) can be produced by subtracting one elevation model from the other (see the Canopy Height Models (CHM) Section).
Point Cloud Classification
An accurate DEM can only be obtained when the point cloud has been correctly classified. In our situation, classification entails assigning each point to one of three categories: ground, non-ground, or noise. This method, in which geometric and color information is used to assign the class, is made possible by machine learning algorithms. The method works effectively in contexts comparable to those used to train the algorithms (i.e., trees and buildings). The algorithm is therefore not expected to operate efficiently in our study location, a flat, rough terrain with patches of low, dense vegetation. Manual intervention may be required, which can be a challenging and time-consuming operation.
The auto-classification tool recognizes noise and ground. The remaining points are labelled as non-ground points and interpreted as vegetation points.
Noise can be automatically identified with a classification algorithm that detects elevation values above or below a local average height. The input parameters for this algorithm are the 'maximum allowed variance from local average' and the 'local area base bin size', which were set to 1 SD and 0.2 m, respectively. This means that, using reference areas of 0.2 m, points deviating by more than 1 SD from the local average height are classified as noise.
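A minimal sketch of this local-average filter (our own illustration, assuming a square gridded neighborhood; the function name is hypothetical and this is not Global Mapper's implementation) could look as follows:

```python
import numpy as np

def classify_noise(points, bin_size=0.2, max_sd=1.0):
    """Flag LiDAR points whose elevation deviates from the local average.

    points: (N, 3) array of x, y, z coordinates.
    Returns a boolean array, True where a point is classified as noise.
    Assumption: the local neighborhood is a square XY bin of `bin_size`
    meters, and the threshold is `max_sd` standard deviations of the
    bin's elevations (1 SD and 0.2 m in the settings described above).
    """
    # Assign each point to a square bin of the local-area grid.
    ij = np.floor(points[:, :2] / bin_size).astype(np.int64)
    _, inverse = np.unique(ij, axis=0, return_inverse=True)

    # Per-bin mean and standard deviation of elevation.
    n_bins = inverse.max() + 1
    counts = np.bincount(inverse, minlength=n_bins)
    mean_z = np.bincount(inverse, weights=points[:, 2], minlength=n_bins) / counts
    var_z = (np.bincount(inverse, weights=points[:, 2] ** 2, minlength=n_bins) / counts
             - mean_z ** 2)
    sd_z = np.sqrt(np.clip(var_z, 0.0, None))

    # Noise: elevation further than max_sd local SDs from the local average.
    deviation = np.abs(points[:, 2] - mean_z[inverse])
    return deviation > max_sd * sd_z[inverse]
```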
Ground auto-classification is done in two steps. The algorithm first identifies non-ground points based on morphological attributes, such as the expected terrain slope and the maximum elevation change of the ground. A second phase excludes further points from the ground class by comparing them to a simulated 3D curved surface representing the ground. To compare the points, the algorithm requires the size of the neighboring area and the vertical limit of the ground classification [62].
The auto-classification process starts with default values, which are then improved through trial and error. The parameters for the first filter were chosen based on the salt marsh's flat surface, with a maximum elevation change of 5 m and an expected terrain slope of 1 degree. The base bin size for modeling the 3D curved surface was set to six point spacings (ps). Two values of the minimum height deviation from the local average, 0.03 m and 0.10 m, were tested to determine the appropriate threshold for differentiating vegetation from ground.
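For intuition, a heavily simplified sketch of this two-step logic (per-cell minima as an initial ground guess, then comparison against a smoothed simulated surface; all names are ours, and this is an illustration, not the Global Mapper algorithm):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def classify_ground(points, bin_size, vertical_limit, surface_window=6):
    """Simplified two-step ground classifier (illustrative only).

    points: (N, 3) array of x, y, z.
    bin_size: grid cell size in meters (here ~6x the point spacing).
    vertical_limit: max height above the simulated surface for ground points.
    Returns a boolean array, True where a point is kept as ground.
    """
    ij = np.floor(points[:, :2] / bin_size).astype(np.int64)
    ij -= ij.min(axis=0)  # shift indices so they start at 0
    shape = tuple(ij.max(axis=0) + 1)

    # Step 1: lowest return per cell as a first guess of the ground.
    ground_z = np.full(shape, np.inf)
    np.minimum.at(ground_z, (ij[:, 0], ij[:, 1]), points[:, 2])
    ground_z[np.isinf(ground_z)] = np.nan

    # Step 2: smooth the per-cell minima to simulate a curved ground
    # surface, then keep only the points close enough to that surface.
    filled = np.where(np.isnan(ground_z), np.nanmean(ground_z), ground_z)
    surface = uniform_filter(filled, size=surface_window)
    return points[:, 2] - surface[ij[:, 0], ij[:, 1]] <= vertical_limit
```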
2.4.3. Masks from Multispectral Data
LiDAR data alone seem insufficient for a high-quality classification of salt marsh point clouds. Hard, regular surfaces, such as roads, generate a single-return LiDAR signal, whereas salt marshes generate wide point clouds with scattered returns for the same LiDAR pulse. Thick point clouds in vegetated zones are reasonable and desirable for habitat classification; in salt marshes, however, bare ground also produces thick point clouds, hindering the classification step (Figure 5). A method to solve this issue involves including additional information on the spatial distribution of vegetation. This information is incorporated into the process as multispectral masks that allow the vegetated zones to be separated from the bare ground, thereby creating cut-off areas that enable a successful classification of the point cloud.
The generation of multispectral masks requires the processing of the reflectance maps of the bands of the multispectral images. The procedure is similar to photogrammetric processing, except for the need for radiometric calibration. The calibration is done for each radiometric band by capturing an image of a calibration target immediately before and after the flight. The calibration target is made of a material with a known reflectance and allows the creation of reflectance-compensated outputs, so that changes in data captured on different days or at different times of day can be accurately compared [64]. The Pix4Dmapper software calibrates and corrects the reflectance of the images according to the calibration values, delivering a total of 10 reflectance maps of the surveyed area.
The multispectral masks are obtained from a map of the normalized difference vegetation index (NDVI). The NDVI map is obtained by importing the reflectance band maps into QGIS and stacking them together with the semi-automatic classification plugin (SCP) [65]; the NDVI was calculated according to Equation (1):

\( \mathrm{NDVI} = \dfrac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{RED}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{RED}}} \)  (1)

where \(\rho_{\mathrm{NIR}}\) and \(\rho_{\mathrm{RED}}\) are the reflectances of the near-infrared and red bands, respectively.
Negative NDVI values correspond to water, while NDVI values close to zero represent bare ground. Values approaching 1 correspond to vegetation, increasing with density and physiological condition [25].
The NDVI raster can be classified using several clustering techniques. Among these, the 'k-means clustering' technique was chosen for its quick and simple implementation: it only requires specifying the number of clusters to generate, and each object is then placed in the cluster with the nearest mean [66]. The algorithms used were the combined minimum distance and the hill-climbing method, resulting in the definition of three classes (namely water, bare soil, and vegetation). The resulting raster is polygonized, and the classes are exported as separate shapefiles. These shapefiles are used for cutting the point clouds into vegetated and bare ground point clouds, each of which is treated individually with different classification parameters. Afterwards, the classified point clouds are merged into a single file. To validate the improvement provided by this method, the classification results were corroborated visually. Furthermore, the proportions of vegetation and bare ground from each classified point cloud were compared to the coverage areas obtained from the shapefiles, providing a rough estimate of the classification consistency.
2.4.4. Digital Model Generation
From the P1 datasets, the DSM and DEM are generated with the Pix4Dmapper software, whereas for the L1 datasets the digital models are created using the Global Mapper LiDAR software. The use of the LiDAR software in the second case is due to the limitations of the photogrammetric software: Pix4Dmapper lacks manual intervention options when point clouds are imported, leading to accuracy issues in the final products (see Section 2.4.1). All digital models are referenced to the ellipsoidal elevation.
Digital Surface Models (DSM)
When obtained from photogrammetric processing, the DSMs were generated with the 'Triangulation' method, which is based on Delaunay triangulation and recommended for flat areas [60]. When calculated from LiDAR data, the point clouds were manipulated with the Global Mapper LiDAR module before the generation of the DSMs. In this case, the DSMs are generated with the binning method, a processing technique that takes point data and creates a grid of polygons, or bins [62].
Digital Elevation Models (DEM)
The DEM is the digital model that results from excluding any feature on top of the ground after point cloud classification (Figure 4, Table 3; see the Point Cloud Classification Section). In the specific case of the P1 datasets, since the photogrammetric processing did not include point cloud classification, all points are treated as non-ground points, resulting in a DEM that is a smoothed version of the DSM.
DEMs are created using only the points of the ground class. To identify the true ground points, the general practice is to use only the minimum values of the LiDAR point clouds. However, this method is inefficient in salt marshes, where true ground surfaces generate broad point distributions, so the elevation of bare areas would be underestimated [42]. To address this specific problem, the true ground was classified using the mean values of the cloud points instead of the minimum ones.
Canopy Height Models (CHM)
Canopy height models (CHM) were generated by computing the DEM of difference (DoD), estimated as the difference between the DSM and the DEM. The result is a raster map of the canopy height distribution (i.e., the CHM). This operation does not require matching resolutions; it simply works on the basis of cell overlap, and the output resolution is dictated by the element of the equation with the finest resolution (i.e., the DSM).
In order to determine whether UAV-LiDAR data can generate reliable CHMs, and test which is the optimal resolution of DSM and DTM needed to produce accurate estimates, DODs were generated by executing the subtraction operation using source digital models at different resolutions. Three DSMs—at 1, 3, and 5 ps resolution—and three DEMs—at 5, 10, and 15 ps resolution—were produced per LiDAR datasets. DoDs were generated using all possible combinations of DSM and DEM resolutions (i.e., the 1 ps DSM was subtracted from the 5 ps DEM, the 3 ps DSM was subtracted from the 5 ps DEM, etc.) for a total of nine DODs for each LiDAR mission.
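A sketch of one DoD combination (file names and the bilinear resampling choice are our assumptions): the coarser DEM is resampled onto the finer DSM grid and then subtracted, so the CHM inherits the DSM's resolution.

```python
import numpy as np
import rasterio
from rasterio.warp import reproject, Resampling

with rasterio.open("dsm_1ps.tif") as dsm_src, rasterio.open("dem_5ps.tif") as dem_src:
    dsm = dsm_src.read(1, masked=True)

    # Resample the coarser DEM onto the finer DSM grid.
    dem_on_dsm_grid = np.empty(dsm.shape, dtype=np.float32)
    reproject(
        source=rasterio.band(dem_src, 1),
        destination=dem_on_dsm_grid,
        dst_transform=dsm_src.transform,
        dst_crs=dsm_src.crs,
        resampling=Resampling.bilinear,
    )

    # DoD = DSM - DEM, i.e., the canopy height model.
    chm = dsm.filled(np.nan) - dem_on_dsm_grid

    profile = dsm_src.profile
    profile.update(dtype="float32", nodata=np.nan)
    with rasterio.open("chm_1ps.tif", "w", **profile) as dst:
        dst.write(chm.astype(np.float32), 1)
```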
2.5. Accuracy
RTK systems are supposed to be highly accurate; however, the P1 and L1 sensors have centimetric accuracy (see the specifications in Section 2.2 and Table S1). Therefore, it is necessary to quantify the accuracy of the products. To this end, the UAV sensor results were compared with ground control points (GCPs; blue, red, and yellow points in Figure 6A) measured in situ with a dGPS. For the dGPS measurements, a Leica GS18 GNSS RTK rover was used, with horizontal and vertical measurement precisions of 8 mm + 1 ppm and 15 mm + 1 ppm, respectively. The September 2021 campaign included a total of 41 GCPs. Six of these GCPs were collected on the wall of the salina behind the sampling site, which provides a surface reference that is stable over time. The October 2021 campaign included 63 GCPs (blue points in Figure 6A), with one GCP on the wall of the salina and four GCPs at the calibration trial areas (two points per sector; yellow points in Figure 6B, Section 2.6). In this last campaign, the canopy height was also measured at the salt marsh GCPs.
Product accuracy was evaluated using the coefficient of determination (R²) and the root mean square error (RMSE), which can be calculated from Equations (2) and (3):

\( R^2 = 1 - \dfrac{\sum_{i=1}^{n} (x_i - y_i)^2}{\sum_{i=1}^{n} (x_i - x_m)^2} \)  (2)

\( \mathrm{RMSE} = \sqrt{\dfrac{1}{n} \sum_{i=1}^{n} (x_i - y_i)^2} \)  (3)

where n is the number of samples, \(x_i\) and \(y_i\) are the i-th reference value (GCPs) and the corresponding evaluated value (UAV sensor data), respectively, and \(x_m\) is the mean of all reference data.
The accuracy and mean errors of the orthomosaics were assessed through the photogrammetric software, which estimates the position difference with respect to the GCPs. Only the GCPs measured on the external wall of the salina were used to evaluate the photogrammetric reconstruction. To assess the quality of the point cloud, a linear regression between the dGPS measurements and the corresponding values in the point cloud was executed. The quality was evaluated with and without the salina wall GCPs.
The in situ GCPs only contain information on ground elevation and canopy height (the latter only for the October campaign). Therefore, the accuracy of the digital models was only evaluated for the DEM and CHM, but not for the DSM (as high-precision field measurements of the landscape surface elevation cannot be obtained).
2.6. Calibration Trial
To evaluate the potential of the UAV-LiDAR in discriminating ground and vegetation, a calibration trial was carried out during the October campaign. As part of the trial, the aboveground vegetation was intentionally removed from two randomly selected 50 cm × 100 cm sections (yellow points in Figure 6A,B) using garden shears. Then, the canopy height was measured on the plants present laterally at the four edges of the trial areas (black points in Figure 6B). The gathered values were used to estimate an average value for each sector, which was assumed to be representative of the canopy height in that sector. All October LiDAR missions were run twice, before and after the vegetation removal. The differences in elevation are expected to represent the canopy height in the trial areas.
The capacity of the LiDAR to recognize differences in elevation before and after vegetation removal was evaluated using three methods (see below). For each method, the goodness of fit was evaluated by comparing the field values with those obtained with the corresponding method.
2.6.1. Method A: Point Clouds
The first method compared the LiDAR elevation data with the field measurements. The vegetated and post-pruning datasets were compared using CloudCompare, an open-source 3D point cloud and mesh viewer and processing software [67] (Table 3). The distance between the pre- and post-pruning point clouds was estimated with the 'Compute cloud/cloud distance' tool, using the 'Quadric' model and six neighbor points. This method allows areas with height differences to be filtered and delimited, sampling up to 20 points per area to estimate the corresponding difference value. The results were compared with the field measurements.
2.6.2. Method B: DSM
The second method compared the DSMs obtained from the missions before and after the calibration trial; it evaluates the vertical differences between pairs of DSMs. Up to 15 points per pair were sampled with the 'Path profile' tool in Global Mapper, and the value of the difference was estimated as the average of the 15 differences. The comparison was performed for all the DSM pairs, including photogrammetric and LiDAR-derived ones, evaluating the most reliable processing and resolution to detect canopy differences. The accuracy of the method was evaluated by comparing the results with the field values, but also with points from the point cloud (Method A).
2.6.3. Method C: CHM
This method was validated by applying the DoD to the calibration areas and cross-checking the results against the field measurements of canopy height and the point cloud-derived estimations (Method A). For this method, only the flights before pruning were considered, comparing only the calibration areas.
2.7. PNOA 2015 Dataset
To evaluate the resources generated by the UAV-LiDAR, our data were compared with the LiDAR data of the Spanish National Plan of Aerial Orthophotography (PNOA). The PNOA provides a free library of orthophotography and LiDAR data, the LiDAR resources having been initiated in 2009 (Centro Nacional de Información Geográfica, CNIG). This work required four PNOA 2015 LiDAR files, since the studied area falls at the junction of four tiles of available point clouds (AND-SW, 214/216-4046/4048). These datasets were merged and cut to the same extent as our UAV missions and processed with the LAStools software [68]. Since the PNOA 2015 LiDAR dataset is already classified, the corresponding DSM and DEM were generated without the classification step. The resolution and accuracy of the resulting digital models were compared with those of the UAV-LiDAR-derived results.