Article

Evaluating the Performance of High Spatial Resolution UAV-Photogrammetry and UAV-LiDAR for Salt Marshes: The Cádiz Bay Study Case

1 Department of Earth Sciences, Faculty of Marine and Environmental Sciences, International Campus of Excellence in Marine Science (CEIMAR), University of Cadiz, 11510 Puerto Real, Spain
2 Department of Biology, Faculty of Marine and Environmental Sciences, International Campus of Excellence in Marine Science (CEIMAR), University of Cadiz, 11510 Puerto Real, Spain
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(15), 3582; https://doi.org/10.3390/rs14153582
Submission received: 3 June 2022 / Revised: 18 July 2022 / Accepted: 20 July 2022 / Published: 26 July 2022
(This article belongs to the Special Issue Wetland Monitoring Using Remote Sensing)

Abstract
Salt marshes are very valuable and threatened ecosystems, and are challenging to study due to their difficulty of access and the alterable nature of their soft soil. Remote sensing methods based on unmanned aerial vehicles (UAVs) offer a great opportunity to improve our knowledge of this type of complex habitat. However, further analysis of UAV technology performance is still required to standardize the application of these methods in salt marshes. This work evaluates and tunes UAV-photogrammetry and UAV-LiDAR techniques for high-resolution applications in salt marsh habitats, and also analyzes the best sensor configuration to collect reliable data and generate the best results. The performance is evaluated through the accuracy assessment of the corresponding generated products. UAV-photogrammetry yields the highest spatial resolution (1.25 cm/pixel) orthomosaics and digital models, but at the cost of large files that require long processing times, making it applicable only for small areas. On the other hand, UAV-LiDAR has proven to be a promising tool for coastal research, providing high-resolution orthomosaics (2.7 cm/pixel) and high-accuracy digital elevation models from lighter datasets, with less time required to process them. One issue with UAV-LiDAR application in salt marshes is the limited effectiveness of the autoclassification of bare ground and vegetated surfaces, since the scattering of the LiDAR point clouds for both salt marsh surfaces is similar. Fortunately, when LiDAR and multispectral data are combined, the efficiency of this step improves significantly. The correlation between LiDAR measurements and field values improves from R2 values of 0.79 to 0.94 when stable reference points (i.e., a few additional GCPs in rigid infrastructures) are also included as control points. According to our results, the most reliable LiDAR sensor configuration for salt marsh applications is the nadir non-repetitive combination.
This configuration has the best balance between dataset size, spatial resolution, and processing time. Nevertheless, further research is still needed to develop accurate canopy height models. The present work demonstrates that UAV-LiDAR technology offers a suitable solution for coastal research applications where high spatial and temporal resolutions are required.

Graphical Abstract

1. Introduction

Salt marshes are highly complex systems with high ecological values and abundant ecosystem services [1,2]. Salt marshes protect coastal areas from floods and storms [3,4], prevent coastal erosion [5], store significant amounts of organic carbon [6], recycle nutrients, and remove pollutants, thus improving habitat quality and maintaining a high level of productivity in habitats with rich biodiversity [7,8].
Despite their importance, up to 70% of worldwide salt marshes have been lost in the 20th century [8], mostly due to extensive anthropogenic land cover changes that have accelerated marsh degradation. Climate change is now making these threats much more severe [5,9,10], with sea-level rise (SLR) being probably the greatest current threat to salt marshes [8]. Rising local sea levels could put salt marshes at risk of drowning, depending on SLR scenarios [11]. These habitats may compensate for the situation with natural mechanisms that maintain their elevation above local sea level [5,12]. These mechanisms include biophysical interactions between plants and soil and local sediment dynamics [12]. Nevertheless, natural events, such as inundations, and human activities, such as changes in land use [3,12,13], may destabilize these mechanisms, compromising the ability of salt marshes to adapt to future SLR scenarios [14].
Fortunately, conservation efforts, such as the Ramsar Convention’s implementation [15], have slowed the erosion of salt marshes during the past few decades. However, further efforts are still needed to improve the likelihood of salt marsh survival, requiring an interdisciplinary approach to understanding the underlying mechanisms [16,17]. Field databases required for modeling such processes need to be extensive (i.e., sediment availability, accurate topography, distribution, and vegetation productivity) and of high quality [11,18,19]. Due to the great accuracy needed for modeling coastal processes [20], the difficulties of accessing these environments, and the disturbance of the sediment during sampling, in situ monitoring is still problematic in salt marshes [21]. Techniques for remote sensing (RS) have proven to be an excellent tool for gathering spatial environmental data [22,23]. Traditional platforms include satellite or aerial systems, which are frequently utilized for many studies at the regional level, such as the mapping of tidal marshes [24], the monitoring of vegetation cover [25,26], and coastal management [27]. The spectral, spatial, and temporal resolution of satellite images is constrained, making them generally unsuitable for modeling ecological processes [28,29]. Unmanned aerial vehicles (UAVs) are bridging the high spectral, spatial, and temporal resolution gap left by satellites, enabling the development of rapid and affordable approaches. High-resolution photogrammetric cameras are among the current UAV sensors, while most other methods have also been effectively applied in conventional RS platforms (i.e., thermography, multispectral, LiDAR, and hyperspectral).
Three RS methods have a great potential for high-quality monitoring of salt marshes. (1) Photogrammetry successfully creates orthorectified maps (i.e., orthomosaics) and topographic products using structure-from-motion (SfM) methods [30,31]. (2) Light detection and ranging (LiDAR) gathers highly reliable 3D point clouds for high-resolution topography modeling and creates digital elevation models (DEM) from digital surface models (DSM) by point cloud classification techniques [32,33]. (3) Multispectral techniques offer useful data for vegetation mapping [34].
SfM photogrammetric methods based on UAVs have proven to be particularly effective in mapping marsh surfaces and calculating canopy height [35,36,37]. Airborne-LiDAR has been shown to enhance habitat classification for wetland areas [38,39,40]. However, a significant obstacle to mapping and modeling salt marshes is the accuracy of elevation data [20]. On the one hand, slight elevational variations (in the order of centimeters) can have a significant impact on plant zonation, which affects biomass and species distribution [41]. On the other hand, the precision of ground elevation measurements (field and LiDAR) below thick vegetation cover is limited by the uneven ground surface and extremely dense covers typical of salt marshes [42,43]. The accuracy of LiDAR-derived DEM has been improved by up to 75% using custom DEM-generation techniques [42], by lowering the root mean square error (RMSE) with species-specific correction factors [44], by adjusting LiDAR-derived elevation values with aboveground biomass density estimations [45], or by integrating multispectral data during the processing [46].
LiDAR can identify structures at fine spatial scales, which is important for monitoring and modeling elements and features in irregular and dense canopy environments such as salt marshes, offering great potential for studying heterogeneous surface environments. Grassland [47,48], forest, and agricultural vegetation monitoring [49,50] have all shown the effectiveness of UAV-LiDAR. Currently, the quality of mapping ground elevation and vegetation characteristics of salt marshes based on UAV-LiDAR technology has only been evaluated once [51], and the same goes for assessing the accuracy of UAV-LiDAR and UAV-photogrammetry in determining elevation and vegetation features in salt marshes [52]. It is worth mentioning that results from SfM-based photogrammetry techniques and LiDAR-based techniques can be compared because they are conceptually independent. Photogrammetric processing is based on the reconstruction of models from images, which involves interpolating what is not visible on the surface. LiDAR, in contrast, is an active sensor whose laser beams can penetrate the spaces between features and pick up small details, thereby incorporating the 3D information of the scene into the model. Pinton et al. [52] demonstrated that LiDAR technology generates more precise salt marsh DEM and DSM in comparison to the results from photogrammetry-based approaches, and improves habitat classification. Nevertheless, more salt marshes with a wider range of vegetation heights and densities must be tested to determine this effectiveness. An assessment of the effects of flight settings on the laser beam penetration of salt marsh vegetation is also needed.
The salt marshes of Cádiz Bay Natural Park (CBNP) are an excellent example of Atlantic tidal wetlands in the south of Europe. In addition to being a RAMSAR site, SAC, and SPA, Cádiz Bay was designated as a Natural Park in 1994. This system is within an important bird migration route and the southernmost tidal wetland in Europe. Additionally, due to its geographic configuration and location, it is particularly susceptible to the impacts of climate change [53]. This makes the salt marshes of Cádiz Bay an excellent natural laboratory for the study of climate change effects on tidal wetlands.
The main goal of this study is to understand the performance of UAV technologies in salt marshes. We will assess the benefits and drawbacks of using UAV-LiDAR and UAV-photogrammetry to create precise digital models (DEM and DSM). The related spatial accuracy will also be evaluated. Additionally, the effectiveness of supplementary multispectral data on habitat classification will be assessed. The capability of canopy penetration and the accuracy of canopy height model estimations are explored by using several LiDAR sensor setups. Our findings will establish the conditions for standardizing the application of UAV technology in the study of salt marshes and will provide the first data for modeling the future responses of the Bay of Cádiz to SLR scenarios.

2. Materials and Methods

2.1. Site Description

The study area is located in Cádiz Bay, on the southwestern Atlantic coast of Spain (Figure 1). This bay also represents the southernmost example of the European coastal wetlands, right at the intersection between the Mediterranean Sea and the Atlantic Ocean and between the European and African continents. The most representative habitat of Cádiz Bay is the tidal marsh, presenting large extensions of this environment [54].
Cádiz Bay is divided into two waterbodies. A narrow strait connects an inner shallow basin, with a mean depth of around 2 m, with a deeper external basin, with depths up to 20 m and characterized by sandy beaches. The inner basin is a sheltered area protected from oceanic waves [55]. The study area is situated in the northwest corner of the inner basin (NE zone, 36°30′59.2″N 6°10′14.7″W), in front of the salina of San José de Barbanera. The intertidal system includes natural salt marshes, salinas, mudflats, and a complex network of tidal channels (Figure 1).
Cádiz Bay has a mesotidal and semi-diurnal tidal regime, with a mean tidal range (MTR) of 2.3 m, up to 3.7 m during spring tides [56]. The vegetation communities describe a typical salt marsh zonation of mid-latitudes [57], which can be divided into three main salt marsh horizons, depending on vegetation types and elevation ranges: upper, medium, and low marsh [54]. Unfortunately, in most cases, the upper marsh is interrupted by the protective walls of the salinas, with the most representative horizons of natural salt marshes of Cádiz Bay being the medium and low ones. The medium marsh is dominated by Sarcocornia spp. (mainly S. fruticosa and S. perennis) and other halophytic species in lower abundance (Figure 2), and the low marsh is mainly dominated by Sporobolus maritimus. The lowest zones of the intertidal flats are colonized by sequential belts of the seagrasses Zostera noltei and Cymodocea nodosa and small patches of Zostera marina [58].
The sampling site was selected according to the criteria of salt marsh vegetation width and ease of access, as it was the best example of salt marsh plant zonation in the bay and was easily accessed by car. The conclusions of this work are expected to be of direct applicability to other tidal salt marshes across the world, given that low and medium tidal marshes usually present comparable structural properties [59].

2.2. UAV and Sensors

The drone service of the University of Cádiz (https://dron.uca.es/vehiculos-aereos/) (accessed on 25 April 2022) provided all of the equipment and sensors that were used for this work. The UAV used is a DJI Matrice 300 RTK quadcopter. The drone has on-board RTK (real-time kinematic positioning) technology. The RTK records accurate GPS information during the flight, providing up to centimeter-level accuracy in geopositioning. The sensors implemented in the UAV were the photogrammetric sensor DJI Zenmuse P1, the DJI Zenmuse L1 LiDAR, and the Micasense RedEdge MxDual multispectral camera (Table S1). The missions were planned with the DJI Pilot application.
The DJI Zenmuse P1 RGB photogrammetric sensor supports 24 mm, 35 mm, and 50 mm fixed-focus lenses. For this work, the 35 mm fixed-focus lens was used, which, together with the 45 Mp full-frame sensor, provided an estimated value for ground sampling distance (GSD) of 1.26 cm/pixel. This sensor offers 0.03 m horizontal and 0.05 m vertical accuracy without deploying ground control points (GCPs).
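The relation between flight parameters and GSD can be checked with the standard photogrammetric formula GSD = altitude × pixel pitch / focal length. In the sketch below, the flight altitude (100 m) and the sensor geometry (36 mm width, 8192 px image width) are illustrative assumptions, not values reported in this work:

```python
# Ground sampling distance from camera geometry (illustrative sketch).
# Assumed values: 36 mm sensor width, 8192 px image width, 100 m altitude.
def gsd_cm_per_px(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    pixel_pitch_mm = sensor_width_mm / image_width_px    # size of one pixel on the sensor
    return altitude_m * pixel_pitch_mm / focal_mm * 100  # metres -> centimetres

print(round(gsd_cm_per_px(100, 35, 36, 8192), 2))  # ~1.26 cm/pixel
```

With these assumed values the formula reproduces a GSD close to the 1.26 cm/pixel reported above; doubling the altitude would double the GSD.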
The DJI Zenmuse L1 LiDAR sensor integrates a Livox LiDAR module, a high-precision IMU, and a 20 Mp RGB camera with a focal length of 24 mm and a mechanical shutter, mounted on a stabilized 3-axis gimbal. Enabling the RGB camera entails collecting images, which can be used to assign the color to each point of the cloud generated by the LiDAR. When an adequate overlap is set, images can also be used to build an orthomosaic. The Livox LiDAR module has a maximum detection range of 450 m at 80% reflectivity. It can achieve a point cloud data rate of up to 240,000 points/s and it allows up to three returns per laser beam. The laser wavelength is 905 nm. The L1 LiDAR sensor supports two scan modes: repetitive and non-repetitive (Figure 3). The repetitive scan mode executes a regular line scan. The non-repetitive pattern is an accumulative process with an increase in the area scanned inside the field-of-view (FOV) together with the increase in integration time. This last pattern increases the probability of object detection within the FOV. The sensor can capture data from a nadir or oblique position. In a nadir flight, data are captured with the sensor axis oriented in a straight vertical position. The oblique flight configuration means data are captured with the sensor tilted at an angle with respect to the vertical. The sensor scans the area up to five times, changing the perspective from which the data are captured.
The Micasense RedEdge-MX Dual sensor is a two-camera multispectral imaging system, with a total of 10 bands (five each camera), sampling data in the electromagnetic spectrum from blue to near-infrared. Two bands are centered in the blue (444 and 475 nm), two in the green (531 and 560 nm), two in the red (650 and 668 nm), three in the red edge (707, 715 and 750 nm), and one in the infrared (842 nm). The two-camera system is connected to a downwelling light sensor (DSL2), which is used to correct for global lighting changes during the flight (e.g., changes in clouds covering the sun) and for sun orientation.

2.3. Flight Campaigns

The study area was surveyed in two consecutive campaigns at the end of summer and the beginning of autumn of 2021.

2.3.1. September Campaign

The first campaign was performed on 8 September 2021, with a low tide of 1.4 m LAT. The campaign included three missions, covering an area of approximately 20 ha (yellow polygon, Figure 1). The first mission collected data with the photogrammetric sensor DJI Zenmuse P1, while the other two collected data with the DJI Zenmuse L1 LiDAR, changing some configurations in between missions (see Table 1 for mission configuration details). The two LiDAR missions were programmed with the repetitive scanning mode, double returns operating at a frequency of 240 kHz. The altitude of the LiDAR missions was set to obtain adequate point clouds, rather than an orthomosaic reconstruction. Nevertheless, the lateral overlap for the second LiDAR mission was increased to 70% to allow for the generation of the corresponding orthomosaic.

2.3.2. October Campaign

The second campaign was performed on 22 October 2021, with a low tide of 1.3 m LAT. This campaign included nine missions, one using the Micasense RedEdge MxDual sensor and eight missions with LiDAR (Table 1). The eight LiDAR missions only included four LiDAR configurations, but were duplicated to proceed with the calibration trial (Table 2; see Section 2.6).
The area covered in October was much smaller (4.5 ha approx., red polygon, Figure 1), but still representative of the system. The reduction was necessary to reduce the processing time for the collected multispectral data (MS).
The LiDAR missions had the aim of evaluating the best sensor setting combination for optimum accuracy/processing time balance. Settings evaluated included flight time, captured LiDAR data size, accuracy, and spatial resolution of deliverables. The sensor settings manipulated were scan mode (repetitive or non-repetitive) and sensor orientation (nadir or oblique) (Table 2). The missions were repeated for the calibration trial (see Section 2.6).

2.4. Data Processing

Orthomosaics are generated through photogrammetric processing of images, captured either by the Zenmuse P1 or the Zenmuse L1 LiDAR sensors. Digital models can be obtained from the photogrammetric processing of images or LiDAR processing of point cloud data (Figure 4). This section summarizes both types of processing, photogrammetric and LiDAR, as well as the methods to generate the multispectral masks and the digital models. Visualization and handling of raster deliverables were always done with the free and open-source software QGis.

2.4.1. Photogrammetric Processing

The Pix4Dmapper software [60], which transforms the images into orthomosaics and digital models, automatically implements the three steps of the structure-from-motion (SfM) algorithm workflow [30] (Figure 4, Table 3). In the first step, the scale invariant feature transform (SIFT) identifies key points from multiple images. The second step reconstructs a low-density 3D point cloud, based on camera positions and orientations, and densifies the cloud with the multi-view-stereo (MVS) algorithms. The third step is the transformation, georeferencing, and post-processing of the dense point clouds, producing the orthomosaics and the corresponding digital models. The ground sample distance (GSD) expresses the spatial resolution of the products in cm/pixel.
The Zenmuse L1 LiDAR sensor captures both image and point cloud datasets. Images can thus undergo photogrammetric processing to generate orthomosaics and digital models. However, for processing the RGB images from the Zenmuse L1 LiDAR sensor, the second step of the SfM workflow is replaced by direct capture of the LiDAR 3D point cloud. Unfortunately, Pix4Dmapper does not allow for editing imported point clouds. Therefore, in those cases, the resulting DSM and DEM may contain larger errors and imperfections.

2.4.2. LiDAR Processing

DJI Terra software performs preliminary processing of the raw LiDAR data [61], which is required to produce a georeferenced, true-color, dense 3D point cloud for the next steps (Figure 4, Table 3).
After pre-treatment, these datasets must also go through three major steps for processing (Figure 4), carried out using the Global Mapper LiDAR module [62] (Table 3). Firstly, in order to increase the accuracy of the final products, the point cloud must be filtered and edited to remove artefacts and signal noise. With greater scan angles, a laser pulse travels a longer path, leading to biased measurements [63]. Therefore, the primary filtering method was to reduce the sensor's initial −35°/35° range of scan angles to a proper range of −26°/26°. The classification of the points is the second step. The algorithms employ geometric positions in relation to nearby points to assign the classes (see Digital Surface Models (DSM) Section). Vegetation masks can be created and imported into the procedure if multispectral data are available. By separating vegetated environments, these masks help with the accurate classification of plant points (see Section 2.4.3). The third step is the generation of digital models. By using data interpolation, this step reconstructs the ground surface, which results in the creation of the corresponding DEMs and DSMs.
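The scan-angle filtering step amounts to masking out returns outside the retained angular range. The sketch below assumes a hypothetical per-point layout (one scan angle per x, y, z record), which simplifies the actual LAS attribute structure:

```python
import numpy as np

def filter_scan_angle(angles_deg, points_xyz, max_angle=26.0):
    """Keep only returns whose scan angle lies within -max_angle..max_angle."""
    keep = np.abs(angles_deg) <= max_angle
    return points_xyz[keep]

angles = np.array([-34.0, -20.0, 0.0, 25.9, 33.0])
points = np.arange(15.0).reshape(5, 3)           # dummy x, y, z records
print(filter_scan_angle(angles, points).shape)   # (3, 3): two points dropped
```

Discarding the outermost angles trades a narrower swath per pass for less path-length bias in the retained returns.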
The difference in elevation between the DEM and DSM corresponds to the canopy height, as no items other than plants were present. Thus, using a geographic information system (e.g., Global Mapper or QGis), canopy height models (CHM) can be produced by subtracting one elevation model from the other (see Canopy Height Models (CHM) Section).

Point Cloud Classification

An accurate DEM can only be obtained when the point cloud has been correctly classified. In our situation, classification entails assigning each point to one of the following three categories: ground, non-ground, or noise. This method, in which information on geometry and color is used to assign the class, is made possible by machine learning algorithms. The method works effectively in contexts that are comparable to those used to train the algorithms (i.e., trees and buildings). The algorithm is not expected to operate efficiently in our study location, which is a flat, rough terrain with patches of low, dense vegetation. Manual intervention may be required, which can be a challenging and time-consuming operation.
The auto-classification tool recognizes noise and ground. The remaining points are labelled as non-ground points and interpreted as vegetation points.
Noise may be automatically identified with a classification algorithm that detects elevation values above or below a local average height. 'Maximum allowed variance from local average' and 'local area base bin size', which were set to 1 SD and 0.2 m respectively, are the input parameters for this algorithm. This means that, using reference areas of 0.2 m, points deviating more than 1 SD from the local average height are classified as noise.
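The local-deviation rule can be sketched as follows. This is a simplified 1D version for illustration (the actual tool bins points in 2D and uses its own parameter names):

```python
import numpy as np

def flag_noise(x, z, bin_size=0.2, max_sd=1.0):
    """Flag points whose elevation deviates more than `max_sd` standard
    deviations from the mean elevation of their local bin."""
    bins = np.floor(x / bin_size).astype(int)
    noise = np.zeros(len(z), dtype=bool)
    for b in np.unique(bins):
        idx = bins == b
        mu, sd = z[idx].mean(), z[idx].std()
        if sd > 0:
            noise[idx] = np.abs(z[idx] - mu) > max_sd * sd
    return noise

x = np.array([0.00, 0.05, 0.10, 0.15])   # all in the same 0.2 m bin
z = np.array([1.0, 1.0, 1.0, 5.0])       # one clear elevation outlier
print(flag_noise(x, z))                  # only the last point is flagged
```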
Ground auto-classification is done in two steps. The algorithm first determines non-ground points based on morphological attributes, such as the expected terrain slope and the ground’s maximum elevation change. A second phase allows for the exclusion of some of those remaining from ground classification by comparing them to a simulated 3D curved surface representing the ground. The algorithm requires the neighboring area’s size and the ground classification’s vertical limit in order to compare the points [62].
The auto-classification process starts with default values that are then improved through trial and error. The parameters for the first filter were chosen based on the salt marsh’s flat surface, with a maximum elevation change of 5 m and an expected terrain slope of 1 degree. The base bin size for modeling the 3D curved surface was set to 6-point spacing (ps). Two values of minimum height deviation from the local average height, 0.03 m and 0.10 m, were tested, in order to determine the appropriate threshold to differentiate vegetation from ground classification.

2.4.3. Masks from Multispectral Data

LiDAR data alone seems insufficient for high-quality classification of salt marsh point clouds. Hard and regular surfaces, such as roads, generate a single return LiDAR signal. However, salt marshes generate wide point clouds with scattered returns for the same LiDAR pulse. Thick point clouds in vegetated zones are reasonable and desirable for habitat classification. However, in salt marshes, bare grounds also produce thick point clouds, hindering the classification step (Figure 5). A method to solve this issue involves including additional information on the spatial distribution of vegetation. This information is incorporated into the process as multispectral masks that allow the vegetation zones to be separated from the bare ground ones, thereby allowing the creation of cut-off areas to successfully classify the point cloud.
The generation of multispectral masks requires the processing of the reflectance maps of the bands of the multispectral images. The procedure is similar to photogrammetric processing, except for the need for radiometric calibration. The calibration is done for each radiometric band, capturing the image of a calibration target immediately before and after the flight. The calibration target is made of a material with a known reflectance and allows the creation of reflectance-compensated outputs, in order to accurately compare changes in data captured over different days or at different times of day [64]. The Pix4Dmapper software calibrates and corrects the reflectance of the images according to the calibration values, delivering a total of 10 reflectance maps of the surveyed area.
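The panel-based correction can be reduced, for illustration, to a single multiplicative gain per band; this is a simplification of Pix4Dmapper's full radiometric model, and the digital numbers below are invented:

```python
import numpy as np

def calibrate_band(raw_band, panel_pixels, panel_reflectance):
    """Scale raw digital numbers so that the calibration-target region
    matches its known reflectance (one gain factor per band)."""
    factor = panel_reflectance / np.mean(panel_pixels)
    return raw_band * factor

raw = np.array([[2000.0, 4000.0], [1000.0, 3000.0]])  # raw DNs for one band
panel = np.array([5000.0, 5100.0, 4900.0])            # DNs over the target
print(calibrate_band(raw, panel, 0.5))  # reflectance-compensated output
```

Because the gain is derived from images of the target taken immediately before and after each flight, outputs from different days remain comparable.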
The multispectral masks are obtained from the map of the normalized difference vegetation index (NDVI). The NDVI map is obtained by importing the reflectance band maps into QGis and stacking them together with the semi-automatic classification plugin (SCP) [65]; the NDVI was calculated according to Equation (1):
NDVI = (NIR − RED)/(NIR + RED)
Negative NDVI values correspond to water, while NDVI values close to zero represent bare ground. Values approaching 1 correspond to vegetation, increasing with density and physiological condition [25].
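Per pixel, Equation (1) is a simple band operation; in the sketch below, the small epsilon (our addition, not part of Equation (1)) guards against division by zero over no-data pixels:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Equation (1) applied element-wise to reflectance arrays."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

nir = np.array([0.60, 0.30, 0.05])   # vegetation, sparse cover, water
red = np.array([0.10, 0.25, 0.30])
print(np.round(ndvi(nir, red), 2))   # positive, near-zero, negative
```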
The NDVI raster can be classified using several clustering techniques. Among these, the ‘k-mean clustering’ technique was chosen for its quick and simple implementation. All it requires is to specify the number of clusters to generate; then, each object is placed in the cluster with the nearest “mean” [66]. The algorithms used were the combined minimum distance and the hill-climbing method, resulting in the definition of three classes (namely water, bare soil, and vegetation). The resulting raster is polygonized, and the classes are exported as separate shapefiles. These shapefiles are used for cutting the point clouds into vegetated and bare ground point clouds, treating each of them individually with different classification parameters. After that, the classified point clouds are merged into a single file. To validate the improvement provided by this method, classification results were corroborated visually. Furthermore, the proportions of vegetation and bare ground from each classified point cloud were compared to the values of coverage area obtained from the shapefiles. This would provide a rough estimate of the classification consistency.
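A minimal 1D version of the k-means idea applied to NDVI values is sketched below. The actual classification used the SCP plugin's minimum-distance and hill-climbing algorithms; the quantile initialization here is our own simplification:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=50):
    """Cluster scalar NDVI values into k classes (e.g., water, bare
    soil, vegetation), assigning each value to the nearest cluster mean."""
    centers = np.quantile(values, np.linspace(0, 1, k))  # spread initial means
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

ndvi_values = np.array([-0.40, -0.35, 0.02, 0.05, 0.60, 0.65])
labels, centers = kmeans_1d(ndvi_values)
print(labels)   # clusters ordered low -> high: water, bare soil, vegetation
```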

2.4.4. Digital Model Generation

From P1 datasets, DSM and DEM are generated with the Pix4Dmapper software, whereas, for L1 datasets, the digital models are created using the Global Mapper LiDAR software. The use of the LiDAR software in the second case is due to the limitations of the photogrammetric software. Pix4Dmapper lacks manual intervention options when point clouds are imported, leading to accuracy issues in the final products (see Section 2.4.1). All digital models are referenced to the ellipsoidal elevation.

Digital Surface Models (DSM)

When obtained from photogrammetric processing, DSMs were generated with the “Triangulation” method, which is based on Delaunay triangulation and recommended for flat areas [60]. When calculated from LiDAR data, the point clouds were manipulated with the Global Mapper LiDAR module before the generation of the DSMs. In this case, the DSMs are generated with the binning method, a processing technique that takes point data and creates a grid of polygons, or bins [62].

Digital Elevation Models (DEM)

The DEM is the digital model resulting from excluding any feature on top of the ground after point cloud classification (Figure 4, Table 3, see Point Cloud Classification Section). For the specific case of P1 datasets, since the photogrammetric processing did not include point cloud classification, all points are treated as non-ground points, resulting in a DEM that is a smoothed version of the DSM.
DEMs are created only with points of the ground class. To identify the true ground points, the general practice is to use only the minimum values of the LiDAR point clouds. However, this method is inefficient in salt marshes, where true ground surfaces generate broad point distributions, underestimating the elevation of bare areas [42]. To address this specific problem of salt marshes, the true ground has been classified using the mean values of the cloud points instead of minimum ones.
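The mean-per-cell gridding of ground-class points can be sketched as follows. This is a simplified stand-in for Global Mapper's binning, with an invented 0.5 m cell size:

```python
import numpy as np

def dem_from_ground_points(x, y, z, cell=0.5):
    """Grid ground returns, storing the MEAN elevation per cell rather
    than the minimum, as discussed for salt marsh surfaces."""
    ix = np.floor(np.asarray(x) / cell).astype(int)
    iy = np.floor(np.asarray(y) / cell).astype(int)
    dem = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    for r in range(dem.shape[0]):
        for c in range(dem.shape[1]):
            sel = (iy == r) & (ix == c)
            if np.any(sel):
                dem[r, c] = np.asarray(z, dtype=float)[sel].mean()
    return dem

x = [0.1, 0.2, 0.6]
y = [0.1, 0.1, 0.1]
z = [1.0, 3.0, 5.0]
print(dem_from_ground_points(x, y, z))  # one row, two cells: mean then single value
```

Using the per-cell mean instead of the minimum avoids the systematic underestimation of bare-area elevation caused by the broad point distributions of salt marsh ground returns.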

Canopy Height Models (CHM)

Canopy height models (CHM) were generated by computing the DEM of difference (DoD), which is estimated as the difference between the DSM and the DEM. The result is a raster map with the canopy height distribution (i.e., CHM). This operation does not require matching resolution; it simply works based on cell overlap. Output resolution will be dictated by the element of the equation with the finest resolution (i.e., the DSM).
In order to determine whether UAV-LiDAR data can generate reliable CHMs, and to test which resolutions of DSM and DEM are needed to produce accurate estimates, DoDs were generated by executing the subtraction operation using source digital models at different resolutions. Three DSMs—at 1, 3, and 5 ps resolution—and three DEMs—at 5, 10, and 15 ps resolution—were produced per LiDAR dataset. DoDs were generated using all possible combinations of DSM and DEM resolutions (i.e., the 5 ps DEM was subtracted from the 1 ps DSM, then from the 3 ps DSM, etc.), for a total of nine DoDs for each LiDAR mission.
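The subtraction and the nine resolution pairings can be sketched as below. The toy rasters are assumed to be already on a common grid, which simplifies the cell-overlap handling described above:

```python
import numpy as np

def dod(dsm, dem):
    """DEM of difference: per-cell DSM minus DEM, i.e., the CHM."""
    return np.asarray(dsm) - np.asarray(dem)

# The nine DSM/DEM resolution pairings tested (in point-spacing units).
pairs = [(s, g) for s in (1, 3, 5) for g in (5, 10, 15)]
print(len(pairs))  # 9 DoDs per LiDAR mission

dsm = np.array([[2.4, 2.1], [2.0, 1.8]])  # invented surface elevations (m)
dem = np.array([[2.0, 2.0], [1.9, 1.8]])  # invented ground elevations (m)
print(np.round(dod(dsm, dem), 2))         # canopy heights in metres
```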

2.5. Accuracy

RTK systems are supposed to be highly accurate. However, the P1 and L1 sensors have centimetric accuracy (see specifications in Section 2.2, Table S1). Therefore, it is necessary to quantify the accuracy of the products. To do so, the UAV sensor results are compared with ground control points (GCPs: blue, red, and yellow points in Figure 6A) measured in situ with a dGPS. For dGPS measurements, a Leica GS18 GNSS RTK rover was used, with horizontal and vertical measurement precision of 8 mm + 1 ppm and 15 mm + 1 ppm, respectively. In September 2021, the campaign included a total of 41 GCPs. Six of these GCPs were collected on the wall of the salina behind the sampling site, providing a stable surface reference over time. In October 2021, the campaign included 63 GCPs (blue points in Figure 6A), with one of the GCPs on the wall of the salina and four GCPs at the calibration trial areas (two points per sector, yellow points in Figure 6B, Section 2.6). In this last campaign, the canopy height was also measured at the salt marsh GCPs.
Product accuracy was evaluated using the coefficient of determination (R2) and root mean square error (RMSE), which can be calculated from the following Equations (2) and (3):
$$R^2 = 1 - \frac{\sum_{i=1}^{n}(x_i - y_i)^2}{\sum_{i=1}^{n}(x_i - x_m)^2} \quad (2)$$

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}(x_i - y_i)^2}{n}} \quad (3)$$
where n is the number of samples, x_i and y_i are the i-th reference value (GCP) and the corresponding evaluated value (UAV sensor data), and x_m is the mean of all reference values.
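Equations (2) and (3) translate directly into code; a minimal sketch, with x the GCP reference values and y the UAV-derived values:

```python
import math

def r_squared(x, y):
    """Coefficient of determination per Eq. (2)."""
    xm = sum(x) / len(x)                                  # mean of reference values
    ss_res = sum((xi - yi) ** 2 for xi, yi in zip(x, y))  # residual sum of squares
    ss_tot = sum((xi - xm) ** 2 for xi in x)              # total sum of squares
    return 1 - ss_res / ss_tot

def rmse(x, y):
    """Root mean square error per Eq. (3)."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)) / len(x))
```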
The accuracy and mean errors of the orthomosaic were assessed through the photogrammetric software. This software estimates the position difference with respect to the GCPs. Only GCPs measured on the external wall of the saline were used for evaluating the photogrammetric processing reconstruction. To assess the quality of the point cloud, a linear regression between the dGPS measurements and their corresponding values in the point cloud was executed. The quality was evaluated with and without the saline wall GCPs.
In situ GCPs only contain information on ground elevation and canopy height (the latter only for the October campaign). Therefore, the accuracy of the digital models was evaluated only for the DEM and CHM, but not for the DSM (as high-precision field measurements of landscape surface elevation cannot be obtained).

2.6. Calibration Trial

To evaluate the potential of the UAV-LiDAR in discriminating ground and vegetation, a calibration trial was carried out in the October campaign. As part of the trial, aboveground vegetation was removed with garden shears from two randomly selected 50 cm × 100 cm sections (yellow points in Figure 6A,B). Canopy height was then measured on the plants immediately adjacent to the four edges of each trial area (black points in Figure 6B); the gathered values were averaged per sector, and this average was assumed to be representative of the canopy height there. All October LiDAR missions were flown twice, before and after the vegetation removal. Differences in elevation are expected to represent canopy height in the trial areas.
The LiDAR capacity for recognizing differences in elevation before and after vegetation removal was evaluated using three methods (see below). For each method, the goodness of fit was evaluated by comparing the field values with those obtained with the corresponding method.

2.6.1. Method A: Point Clouds

The first method compared LiDAR elevation data with field measurements. Vegetation and post-pruning datasets were compared using CloudCompare, an open-source 3D point cloud and mesh viewer and processing software [67] (Table 3). The distance between pre- and post-pruning point clouds was estimated with the ‘Compute cloud/cloud distance’ tool, using the ‘Quadric’ model and six neighbor points. This method allows for filtering and delimiting areas with height differences, sampling up to 20 points per area to estimate the corresponding value of the difference. The results were compared with the field measurements.
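As a simplified sketch of the distance computation (CloudCompare's 'Quadric' option refines the nearest-neighbour distance with a locally fitted surface; the plain nearest-neighbour version below only illustrates the principle):

```python
import numpy as np

def cloud_to_cloud_distance(reference, compared):
    """For each point of the compared cloud, return the Euclidean
    distance to its nearest neighbour in the reference cloud.

    reference, compared: (N, 3) arrays of x, y, z coordinates.
    Brute force is fine for small calibration areas; real tools use
    an octree or k-d tree for large clouds.
    """
    diff = compared[:, None, :] - reference[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=2))
    return dists.min(axis=1)  # one distance per compared point
```

Applied to the pre- and post-pruning clouds, the distances in the trial areas approximate the removed canopy height.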

2.6.2. Method B: DSM

The second method compared the DSMs obtained from the missions before and after the calibration trial. This method evaluates vertical differences between pairs of DSMs. Up to 15 points per pair were sampled with the tool “Path profile” in Global Mapper and the value of the difference was estimated as the average of the 15 differences. The comparison was performed for all the DSM pairs, including photogrammetric and LiDAR-derived ones, evaluating the most reliable processing and resolution to detect canopy differences. The accuracy of the method was evaluated by comparing the results with field values, but also with points from the point cloud (Method A).

2.6.3. Method C: CHM

This method was validated by applying the DoD to the calibration areas and cross-checking the results with field measurements of canopy height and point-cloud-derived estimations. For this method, only flights before pruning were considered, and only the calibration areas were compared.

2.7. PNOA 2015 Dataset

To evaluate the resources generated by the UAV-LiDAR, our data were compared with the LiDAR data of the Spanish National Plan of Aerial Orthophotography (PNOA). The PNOA provides a free library of orthophotography and LiDAR, with LiDAR coverage available since 2009 (Centro Nacional de Información Geográfica—CNIG). This work required four PNOA 2015 LiDAR files, since the study area falls at the junction of four tiles of available point clouds (AND-SW, 214/216-4046/4048). These datasets were merged, clipped to the extent of our UAV missions, and processed with the LAStools software [68]. Since the PNOA 2015 LiDAR dataset is delivered already classified, the corresponding DSM and DEM were generated without the classification step. The resolution and accuracy of the resulting digital models were compared with those of the UAV-LiDAR-derived results.

3. Results

3.1. Photogrammetric Processing Deliverables

From the Zenmuse P1 sensor, the photogrammetric deliverables include orthomosaics, DSMs (both with 1.25 cm/pixel GSD), and DEMs (6.25 cm/pixel GSD) (Table 4). From the LiDAR sensor, the orthomosaic resolution depends on the flight altitude: surveys at 100 m had an average GSD of 2.78 cm/pixel, while surveys at 60 m had a GSD of 1.69 cm/pixel. In general, photogrammetric processing is a very time-consuming task, with most of the time dedicated to densifying the point cloud. For L1 datasets, however, the processing is much shorter (Table 4), since this step is omitted: the imported LiDAR point cloud is already dense.
When evaluating the overall accuracy of the photogrammetric processing, the P1 datasets yield products with a low overall RMSE (0.044 m). The horizontal accuracy was comparable in the x and y coordinates (0.012 and 0.009 m RMSE), but the vertical error was higher (0.111 m RMSE). Processing the L1 datasets yields products with RMSEs even lower than those of the P1 processing (0.006–0.010 m). The horizontal RMSEs were <0.010 m, and the vertical ones were 0.011 and 0.014 m for the 100 and 60 m flights, respectively (Table S2).
The DEM produced by the photogrammetric processing of the P1 dataset has lower accuracy, with 0.335 m RMSE (R2 0.6027). The greatest deviations correspond to the six points located at the saline wall (Table S3). When these points are excluded, the P1-derived DEM matches field measurements more accurately (R2 0.946, RMSE 0.070 m; Table S3, Figure S1).

3.2. LiDAR Processing Deliverables

LiDAR processing generates the full range of digital models (DEM, DSM, and CHM). Pre-processing the LiDAR point clouds requires 3D georeferencing and coloring the point cloud, which takes less than 10 min. The next step, filtering the effects of the scan angle, reduces the file size by up to 43% without changing the range of elevations (Table 5 and Table S18).
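The scan-angle filtering step can be illustrated with a short numpy sketch; the 15-degree cutoff below is an illustrative assumption, not the threshold used in this work:

```python
import numpy as np

def filter_scan_angle(points, angles, max_angle_deg=15.0):
    """Drop returns acquired beyond a scan-angle threshold and report
    the fraction of points lost.

    points: (N, 3) array of x, y, z; angles: (N,) scan angles in
    degrees, signed either side of nadir. The 15-degree cutoff is an
    example; the appropriate value depends on sensor and flight design.
    """
    keep = np.abs(angles) <= max_angle_deg
    lost_fraction = 1.0 - keep.mean()  # e.g., ~0.43 for the worst datasets here
    return points[keep], lost_fraction
```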
Most of the salt marsh surface generated only one LiDAR return, precluding classification based on the number of returns. Elevation thresholds for ground/non-ground point classification proved unsatisfactory: depending on the threshold selected, ground points were either underestimated (0.03 m threshold) or overestimated (0.10 m threshold) (Tables S4, S5, S10, S11 and S20–S35). Multispectral data were used to resolve this problem. Using masks created from the multispectral dataset, point clouds could be classified much more accurately based on the actual distribution of the vegetation (Figure 7, Tables S36–S43, Figure S7). No clear pattern linked classification performance to specific datasets: the estimation errors varied arbitrarily, unaffected by flight type (nadir vs. oblique) or scan mode.
LiDAR point clouds had good accuracy, showing that the technology works well even when dealing with the challenges of salt marsh surfaces. Even when omitting stable GCP measurements (see R2 value (marsh) in Table 6), the accuracy remains high, with R2 values within 0.797–0.949 (Table 6). The lowest RMSEs corresponded to surfaces where LiDAR easily detects the ground (e.g., external wall). This was supported by the calibration trial, showing that the bare ground points in the pruned regions had the smallest deviations. On the other hand, the greatest RMSEs were associated with surfaces of Sarcocornia where vegetation obstructs LiDAR penetration (Tables S46–S53).

LiDAR Processing Digital Models

By flying at a lower altitude or using an oblique flight configuration, denser point clouds and finer-resolution products can be obtained, but at the expense of heavier datasets. The output resolution of the digital models depends on the density of the point cloud (Table 5) and the selected grid point spacing. For this task, the digital models were generated with up to three different point spacing values, obtaining DSMs with resolutions of 0.02–0.25 m/pixel and DEMs with resolutions of 0.09–0.76 m/pixel (Table S45).
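The dependence of output resolution on grid point spacing can be illustrated with a simple gridding sketch (a deliberate simplification of the actual LiDAR processing, which also interpolates empty cells): DSMs keep the highest return per cell, DEMs the lowest ground return.

```python
import numpy as np

def grid_point_cloud(points, cell, reducer=np.max):
    """Grid a point cloud into a raster at the chosen point spacing.

    points: (N, 3) array of x, y, z. Use reducer=np.max for a DSM
    (highest return per cell) and reducer=np.min for a DEM from
    ground-classified points (lowest return per cell). Empty cells
    stay NaN; real processing would interpolate them.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y.max() - y) / cell).astype(int)   # row 0 at the north edge
    grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, zi in zip(rows, cols, z):
        # Keep zi if the cell is empty or zi wins under the reducer.
        if np.isnan(grid[r, c]) or reducer((grid[r, c], zi)) == zi:
            grid[r, c] = zi
    return grid
```

A coarser `cell` fills more cells per grid node (fewer NaNs) at the cost of spatial detail, which is why sparse ground returns force coarser DEM resolutions than DSM resolutions.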
LiDAR-derived DEMs correlated highly with field data when all available points were used (R2 > 0.86); the correlation decreased when only points from the marsh surface were used (Table 7). Regarding sensor settings, the non-repetitive scan mode produced the most accurate DEMs, with lower RMSEs. For the repetitive scan mode, DEM precision improves with a coarser resolution.
The nine combinations of DSM and DEM resolutions used for estimating CHM revealed that only the DSM resolution influenced the CHM results (Tables S62–S80). Therefore, only three CHMs per mission—those produced from the operation between DSM at three resolutions and the 5 ps DEM resolution—are displayed (Table 8). In general, the accuracy of the estimated CHMs was very low, with high RMSE (0.09–0.183 m) and low R2 values (0.002–0.172, Table 8). These results show a lack of correspondence between modeled and field values.

3.3. Calibration Trial

3.3.1. Method A: Point Clouds

Depending on the mission configuration (Table 2), point cloud values differed slightly, but were very close to the field measurements, with a deviation of ±0.01 m (Table 9). This result validated the use of point clouds as reference values in further assessments.

3.3.2. Method B: DSM

The estimates from the comparison of paired DSMs (before and after vegetation pruning) deviated from field measurements by between −0.04 and 0.11 m (Table 9). On average, the overall concordance between estimations and field values was good; however, data captured with the nadir non-repetitive configuration (Missions 1 and 5) produced the most accurate estimates (Table 9). The coarsest resolution (5 ps) tended to underestimate canopy height the most, while the 3 ps models generated the lowest deviations.
Exceptionally, the estimations from Mission 3 (oblique non-repetitive configuration) systematically overestimated canopy height by 0.10 m. This effect was attributed to a loss of the RTK signal during the flight, which made post-processed kinematic (PPK) treatment necessary to transform the raw dataset into an operable LiDAR dense point cloud. This operation caused a small z-shift in the reconstructed point cloud. The z-offset affects the results of the photogrammetric processing, which does not edit imported point clouds, but not the LiDAR processing, which allows the shifted point cloud to be corrected against other datasets.

3.3.3. Method C: CHM

In general, CHM estimations corresponded poorly with field values at the vegetation-removal areas (Table 9), although for some missions the differences were reasonable (e.g., ±0.01 m). Data captured by nadir repetitive flights (Mission 2) showed similar differences in the two trimmed areas, always underestimating canopy height. For the rest of the missions, the differences between CHM and field measurements followed no consistent pattern, with similar proportions of under- and overestimation.

3.4. LiDAR Sensor Optimum Settings

LiDAR sensor settings were manipulated to evaluate the best setting combination for optimum accuracy/processing time balance (Table 2).

3.4.1. Sensor Orientation

Sensor orientation includes the nadir vs. oblique configurations. This setting has major consequences for covered area, classification processing time, point cloud size, density, and deliverable resolution. The oblique configuration generates larger point clouds than the nadir one, the dataset size increasing with the spatial range of the tilted sensor (Table S16). A larger dataset implies a longer classification processing time, but also the possibility of generating finer-resolution products.

3.4.2. Scan Mode

Scan mode (repetitive vs. non-repetitive) did not influence flight time or captured dataset size. However, the repetitive scan mode increases the occurrence of extreme values that need to be cleaned and filtered before the datasets are acceptable for modeling. The proportion of points lost during filtering is 36–42% for repetitive scan mode vs. 14–17% for the non-repetitive one (Table 5).

3.5. Comparison of PNOA 2015 and UAV-Based Data

The PNOA 2015 point cloud had a much lower density than the point clouds captured from UAVs: at 0.80 samples/m2, its point density is 400 to 2600 times lower than that of the UAV-derived datasets.
The PNOA 2015 point cloud is an already classified product available in RGB and IR colors. However, the digital elevation models derived from this dataset have an output resolution that is extremely poor for high-resolution environmental applications (Figure 8): 4.4 m/pixel for the DSM and 18 m/pixel for the DEM.
PNOA 2015 LiDAR data presented a large error in elevation values (1.36 m averaged RMSE), with individual point deviations between 1.01 m and 1.54 m. Nevertheless, the PNOA 2015 LiDAR data had a strong relationship with the field points (R2 value 0.952 for all GCPs, 0.780 only for marsh GCPs).
The PNOA 2015-derived DEM estimates elevations between 0.249 and 1.409 m below the field measurements (1.248 m average RMSE). These results confirm that the PNOA 2015 LiDAR dataset underestimates the local elevation of the Cádiz Bay area by more than 1 m. The correlation between the PNOA 2015 DEM and field values was nevertheless strong (R2 0.885), albeit not as strong as the correlation between the UAV-LiDAR data and field values.

4. Discussion

4.1. Orthomosaic

Our UAV-photogrammetric system provides an extremely detailed orthomosaic, with a spatial resolution of up to 1.25 cm. One drawback is that this high spatial resolution requires a long processing time, since the image dataset from the photogrammetric sensor is four times larger than the corresponding LiDAR ones. Flying at the same altitude (100 m), the UAV-LiDAR system provides up to 2.7 cm spatial resolution, still very high for any salt marsh ground monitoring application. Orthomosaics of 2 cm/pixel resolution have been demonstrated to be particularly effective for water and ecological environment monitoring, supporting better hydrogeological simulation and temporal analysis of the area [69,70]. The spatial resolution of our UAV-LiDAR products can be increased by reducing the flight altitude (60 m), but at the cost of larger datasets, with heavier images and point clouds.
Horizontal accuracy is crucial for detecting changes in spatial and temporal distributions with sea-level rise and when modeling the accretion or subsidence rate of salt marshes [10,11,71]. In our case, the processing of the photogrammetry images generates very high horizontal accuracy (1.2 cm, 0.9 cm for x and y coordinates). However, LiDAR images provide even higher horizontal accuracy (0.4 cm and 0.5 cm), demonstrating higher precision in positioning than photogrammetry images. The accuracies obtained here are higher than those found in studies in coastal areas, which reported accuracies ranging from 1.5 to 5 cm [35,72,73].

4.2. LiDAR Point Clouds

As expected, measurements at lower altitudes exhibited higher accuracy than at higher altitudes, but with small differences (R2 0.989 and 0.089 m RMSE vs. R2 0.976 and 0.115 m RMSE, respectively, Table 6). A lower altitude increases measurement reliability by increasing spatial resolution and point cloud density.
There is an average RMSE of nearly 0.10 m associated with all datasets, which supports the hypothesis of a systematic error. However, this is a reasonable value for LiDAR datasets, consistent with previous observations that ascribe up to 0.15 m of error to the limitation of the airborne laser to penetrate dense vegetation [20,74]. Inaccuracy in salt marsh point clouds increases with penetration issues in vegetation. The highest differences between the LiDAR point cloud and field values were observed in areas covered by Sarcocornia, suggesting that this species generates the greatest LiDAR penetration issues. This is not surprising, since this species forms very dense and thick shrubs covering the marsh surface like a carpet [75]. Until technology can overcome this barrier, the best strategy is to determine the most appropriate model to explain the correlation between UAV-LiDAR and field data. Linear regression analysis seems to be a good approximation for salt marshes, especially if the dataset also includes measurements from stable surfaces (i.e., points collected on rigid structures that are stable over time).
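The suggested calibration can be sketched as an ordinary least-squares fit that pools marsh GCPs with stable reference points; the function below is an illustrative sketch, not the processing chain used in the study:

```python
import numpy as np

def calibrate_elevations(lidar_z, field_z, lidar_z_stable, field_z_stable):
    """Least-squares calibration of LiDAR elevations against field
    values, pooling marsh GCPs with stable reference points (e.g.,
    GCPs measured on rigid structures such as the saline wall).

    Returns a function mapping raw LiDAR elevations to calibrated ones.
    """
    z = np.concatenate([lidar_z, lidar_z_stable])
    f = np.concatenate([field_z, field_z_stable])
    slope, intercept = np.polyfit(z, f, 1)  # 1st-degree fit: f ~ slope*z + b
    return lambda new_z: slope * np.asarray(new_z) + intercept
```

Including the stable points widens the elevation range of the regression, which is one plausible reason the reported R2 improves from 0.79 to 0.94 when they are added.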

4.3. Digital Models

The present study presents different types of digital models, including digital surface models (DSM), digital elevation models (DEM), and canopy height models (CHM). Both LiDAR and photogrammetric processing can provide digital models. Elevation values in photogrammetric sensor datasets are interpolated from point clouds generated from images, whereas the LiDAR sensor directly measures a point cloud of elevations. The main benefit of LiDAR is the capacity to penetrate spaces between features and pick up small details, whereas photogrammetry is limited to what is visible at the surface of the images.

4.3.1. Photogrammetric Processing Digital Models

The distribution of plants can be affected by differences of just a few centimeters, making maximum vertical accuracy crucial for salt marsh studies [76,77]. The digital models generated from our photogrammetric datasets did not offer the greatest vertical accuracy (RMSE 0.335 m). The high inaccuracy was related to drone instability during the first flight line, just over the points measured on the saline wall. When those points were excluded, the field measurements and the P1-derived DEM matched well (R2 0.946, RMSE 0.070 m). This implies that the results should be interpreted cautiously if the drone's stability cannot be ensured throughout the entire mission.

4.3.2. LiDAR Processing Digital Models

Point cloud classification is the most critical step in LiDAR processing for DEM generation. Initially, autoclassification algorithms were not able to correctly separate vegetation from ground points, although two elevation thresholds (0.03 m and 0.10 m) were tested for the classification algorithms. In terms of point cloud thickness, vegetated and ground surfaces were comparable, making it difficult to classify them automatically. The rough and irregular surface of the dense canopy reduces the effectiveness of laser penetration, affecting the thickness of the point cloud corresponding to vegetated surfaces. Likewise, bare ground is not flat; it is irregular due to microtopography, rocks, vegetation remnants, puddles, etc., all of which produce scattering returns, which also increase the thickness of the point cloud. To overcome the limitation caused by this LiDAR point dispersion, a multispectral dataset was included. UAV-multispectral systems can be used to map plant communities in wetland environments with high accuracy [78,79]. In our case, NDVI-derived masks proved essential for the habitat classification, allowing the adjustment of classification parameters and lowering the DEM error from an average RMSE of 0.11 m to 0.06 m (Tables S9, S15 and S61).
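The NDVI-mask classification described above can be sketched as follows; the 0.3 threshold and the raster geometry conventions are illustrative assumptions, not the values used in this work:

```python
import numpy as np

def ndvi_mask(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), from co-registered band arrays."""
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

def classify_points(points, ndvi, origin, cell, threshold=0.3):
    """Label LiDAR points as vegetation (True) or ground (False)
    using the NDVI of the raster cell each point falls in.

    points: (N, 3) array of x, y, z; origin: (x0, y0) of the raster's
    upper-left corner; cell: pixel size in metres. The 0.3 threshold
    is illustrative only.
    """
    cols = ((points[:, 0] - origin[0]) / cell).astype(int)
    rows = ((origin[1] - points[:, 1]) / cell).astype(int)
    return ndvi[rows, cols] > threshold
```

Ground-classified points then feed the DEM, while all points feed the DSM, decoupling the classification from the point-cloud geometry that confounds elevation thresholds.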
Modeling marsh environments requires high-quality elevation data. Alizad et al. [20] have shown that microtidal models are particularly vulnerable to imprecision because in these systems the error can be as large as the tidal amplitude. In contrast, in mesotidal environments such as ours, with tidal amplitudes of up to 3.6 m, an error of less than 20 cm (Tables S9, S15 and S61) can be considered irrelevant. A maximum error of 0.162 m is observed in the produced DEMs, with an average error of 0.07 m. The average error corresponds to 2.8% of the tidal amplitude (2.6 m), which proves the potential of the UAV-LiDAR system for accurate elevation mapping of coastal marshes.
Previous works have addressed the issues on DEM and DSM [80,81], associating the uncertainties and variability of digital models with surface complexity, field measurement accuracy, processing methods, interpolation, and resolution. These errors propagate when modeling DSM and DEM, resulting in amplified errors in the DoDs. These effects are supported by our results, explaining the low correlation between the CHMs and the field data.
Underestimations of canopy height on LiDAR-derived CHMs have already been documented [48,82]. Possibly, this issue is caused by insufficient laser scanning frequency for corresponding drone speeds, which translates into the missing vegetation tops [50,83,84]. The loss of information from the top of the canopy is more frequent than the loss of canopy bottom points, and it is mainly influenced by the maximum vegetation height, its standard deviation, and true flight height [79]. This could explain why salt marsh DEMs are accurate, but CHMs are so inaccurate. Other interpretations could be that LiDAR technology does not provide accurate DSMs in salt marshes, or that field canopy height measurements are still inaccurate.

4.4. Calibration Trial

The calibration trial aimed to understand the sensor sensitivity in order to identify changes in elevations by comparing the mission data before and after the removal of vegetation. Method A contrasts paired point clouds (pre- and post-pruning) and shows a good correspondence between the difference in elevation computed from point clouds and field observations (deviations of 0–0.02 m). This is consistent with prior assessments of the efficiency of UAV-LiDAR in determining elevation changes [48,79,84].
Method B shows that paired DSMs and onsite values mostly agree but that, as previously demonstrated [44], the interpolation of point clouds to generate digital models may introduce additional elevation error. Our results support earlier findings that the higher inaccuracy of high-resolution models is related to the natural surface complexity of the environment [80]. Of the three resolutions tested (1, 3, and 5 ps), the intermediate one (3 ps) seems to be the most appropriate for obtaining reliable results.
Method C was unable to find a relation between values extracted from the CHM and onsite measurements at the pruned areas, and was thus inadequate for validating sensor sensitivity. This experiment revealed that CHMs are inaccurate not only in areas covered by dense vegetation (see Section 4.3.2), which could limit laser penetration, but also in areas where bare-ground information is collected (i.e., the pruned areas). This result supports the conclusion that interpolating CHMs from UAV-LiDAR may smooth the range of plant heights and result in low accuracy of height-related structural features [79].

4.5. LiDAR Sensor Optimum Setting

The evaluation of sensor settings included the assessment of sensor orientation (nadir vs. oblique) and scan mode (repetitive vs. non-repetitive). The sensor orientation produces very different flight and processing times: in our case, the oblique flights were six times longer and the datasets six to eight times larger than the nadir ones (Table 2). Our results agree with previous studies that found that oblique flights improve accuracy when collecting point clouds, resulting in high-precision 3D reconstruction [85] (Table S54). However, the accuracy differences from the nadir flights are very small. Therefore, the nadir configuration was considered preferable, as the level of detail of the products is adequate while the datasets are much smaller and need less processing time.
The scan mode includes repetitive and non-repetitive modes. The repetitive mode produces a higher occurrence of extreme scan angle points, which is the main factor in producing artefacts in the resulting models. This is consistent with the findings of Ma et al. [83], who revealed that as scan angle exceeded a specific threshold, the uncertainty in LiDAR-derived estimations increased significantly. Extreme scan angle points need to be removed, causing an important decrease in point counts and dataset density in datasets from repetitive scan mode flights. Non-repetitive scan mode generates more accurate DEMs (lower RMSE values) at a finer resolution. On the other hand, repetitive scan mode datasets require a coarser resolution to improve the precision of the DEMs. This is consistent with previous studies that explain this effect as a function of the data collection method: the repetitive scan mode works with a linear pattern, which is more sensitive to properties such as shininess, clarity, and color, resulting in larger variability and errors [86,87], whereas the non-repetitive mode improves the detection and details of objects, suggesting that this scan mode is the most suitable for salt marsh systems.

4.6. Comparison of PNOA 2015 and UAVs-Based Data

UAV-LiDAR technology is certainly a very effective tool in environments that require very high temporal and spatial resolution for accurate knowledge of the system, such as salt marshes. PNOA datasets are freely accessible and cover the entire Spanish territory, but their temporal and spatial resolutions are too limited for modeling salt marsh processes. Our UAV-LiDAR sensor provided more detailed elevation information than any dataset available in the PNOA 2015 LiDAR library, with higher resolution, correlation, and accuracy. It is important to note that while UAV-LiDAR cannot capture data at the regional scale in a single mission, its high mobility and ease of use allow this technology to cover areas larger than that of this study by planning several flights that, when stitched together, provide large spatial coverage. This will enable the use of this method in additional research contexts and environments where high temporal and spatial resolutions are necessary for monitoring programs.
The results presented in this study are in line with those of García-López et al. [88], who improved the previous PNOA-derived cartography by substantially refining the spatial resolution of the mapped area with a DEM generated from a UAV-LiDAR dataset (0.069 m vs. 5 m). Our results also revealed a systematic underestimation of about 1 m in the PNOA 2015 elevations. This corresponds to 44–48% of the mean tidal range of the area, which is definitively too high for high-resolution modeling of the system. Systematic errors in other national LiDAR datasets have been reported previously [42,89]; the authors attributed these errors to the type of land cover surveyed and the physical and technological limitations of the LiDAR systems employed. The PNOA 2015-derived digital models underestimated the elevation values of the area, in concordance with the findings of García-López et al. [88], who demonstrated a PNOA-LiDAR systematic error of −0.4 m for the marshes of Cádiz Bay.
The results from this work therefore reiterate that PNOA-LiDAR datasets can be useful for a first assessment and a general framework, but they should not be used for applications that require high precision, such as flood risk and coastal hazard estimates.

5. Conclusions

This study demonstrates the potential of UAV sensors for the study of complex and difficult-to-access systems such as salt marshes, where inaccuracies are still difficult to overcome. Photogrammetric and LiDAR techniques provide orthomosaics and digital models at very high spatial resolution. The LiDAR sensor can also capture images that generate products with high accuracy from lighter image datasets in a shorter processing time than the photogrammetric sensor (P1). Nevertheless, the photogrammetric sensor can provide a higher spatial resolution and can be an excellent complementary tool for limited areas. For the use of LiDAR in salt marshes, the nadir non-repetitive configuration seems the best setting for reliable results at fine resolution, providing the best balance between dataset size, spatial resolution, and processing time. Nevertheless, the best results require multispectral data to help with the discrimination of vegetated and non-vegetated zones. Our results demonstrate that LiDAR data can generate accurate salt marsh DEMs, suggesting that LiDAR can penetrate dense vegetation to some extent. However, unless the penetration and reflectance issues observed on natural salt marsh surfaces are solved, additional technical improvements are still required to generate reliable salt marsh canopy height models (CHMs). The inaccuracy of CHMs could be associated not only with LiDAR penetration issues, but also with the reliability of the ground-truthing measurements of elevation and canopy height, as field measurements are challenging in this environment. UAV-LiDAR datasets can reach resolutions and accuracies unachievable with the datasets of the national cartography LiDAR library (PNOA-LiDAR 2015), which definitively has high applicability in large-scale frameworks but lacks the precision and detail required for coastal research, where high spatial and temporal resolutions are crucial.
The results from this research can be used to plan monitoring programs in any marsh environment, as well as in other coastal and continental habitats.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs14153582/s1.

Author Contributions

Conceptualization, M.A., A.C.C., L.B. and G.P.; methodology, A.C.C. and L.B.; software, A.C.C.; formal analysis, A.C.C.; investigation, A.C.C.; resources, A.C.C. and L.B.; data curation, A.C.C.; writing—original draft preparation, A.C.C. and M.A.; writing—review and editing, A.C.C., G.P., M.A. and L.B.; visualization, A.C.C. and G.P.; supervision, G.P. and L.B.; project administration, A.C.C., L.B. and G.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are openly available in Zenodo at https://doi.org/10.5281/zenodo.6850188. Publicly available datasets were analyzed in this study. This data can be found here: https://centrodedescargas.cnig.es/CentroDescargas/index.jsp (accessed on 24 January 2022).

Acknowledgments

The authors want to thank all the members of the drone service of the University of Cádiz, which provided all the UAV systems used to carry out the research for this study. The drones service of the University of Cádiz was equipped through the “State Program for Knowledge Generation and Scientific and Technological Strengthening of the R+D+I System State, Subprogram for Research Infrastructures and Scientific-Technical Equipment in the framework of the State Plan for Scientific and Technical Research and Innovation 2017–2020”, co-financed by 80% FEDER project ref. EQC2018-004446-P. Reviewers and editors are acknowledged. The authors acknowledge the Program of Promotion and Impulse of the activity of Research and Transfer of the University of Cadiz for the productivity associated with the work. A.C. Curcio has a contract funded by the Programme FIREPOCTEP, a project promoted by the cooperation programme Interreg V-A Spain-Portugal (POCTEP) 2014–2020 and 75% funded by the European Regional Development Fund (ERDF). M. Aranda has a postdoctoral contract, funded by the Programme for the Promotion and Encouragement of Research Activity at the University of Cádiz. All authors have approved each acknowledgment.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Donatelli, C.; Ganju, N.K.; Zhang, X.; Fagherazzi, S.; Leonardi, N. Salt Marsh Loss Affects Tides and the Sediment Budget in Shallow Bays. J. Geophys. Res. Earth Surf. 2018, 123, 2647–2662. [Google Scholar] [CrossRef] [Green Version]
  2. Bouma, T.J.; van Belzen, J.; Balke, T.; Zhu, Z.; Airoldi, L.; Blight, A.J.; Davies, A.J.; Galvan, C.; Hawkins, S.J.; Hoggart, S.P.G.; et al. Identifying Knowledge Gaps Hampering Application of Intertidal Habitats in Coastal Protection: Opportunities & Steps to Take. Coast. Eng. 2014, 87, 147–157. [Google Scholar] [CrossRef]
  3. Allen, J.R.L. Morphodynamics of Holocene Salt Marshes: A Review Sketch from the Atlantic and Southern North Sea Coasts of Europe. Quat. Sci. Rev. 2000, 19, 1155–1231. [Google Scholar] [CrossRef]
  4. Barbier, E.B.; Georgiou, I.Y.; Enchelmeyer, B.; Reed, D.J. The Value of Wetlands in Protecting Southeast Louisiana from Hurricane Storm Surges. PLoS ONE 2013, 8, e58715. [Google Scholar] [CrossRef]
  5. Alizad, K.; Hagen, S.C.; Medeiros, S.C.; Bilskie, M.v.; Morris, J.T.; Balthis, L.; Buckel, C.A. Dynamic Responses and Implications to Coastal Wetlands and the Surrounding Regions under Sea Level Rise. PLoS ONE 2018, 13, e0205176. [Google Scholar] [CrossRef] [Green Version]
  6. McLeod, E.; Chmura, G.L.; Bouillon, S.; Salm, R.; Björk, M.; Duarte, C.M.; Lovelock, C.E.; Schlesinger, W.H.; Silliman, B.R. A Blueprint for Blue Carbon: Toward an Improved Understanding of the Role of Vegetated Coastal Habitats in Sequestering CO2. Front. Ecol. Environ. 2011, 9, 552–560. [Google Scholar] [CrossRef] [Green Version]
  7. Rannap, R.; Kaart, T.; Pehlak, H.; Kana, S.; Soomets, E.; Lanno, K. Coastal Meadow Management for Threatened Waders Has a Strong Supporting Impact on Meadow Plants and Amphibians. J. Nat. Conserv. 2017, 35, 77–91. [Google Scholar] [CrossRef]
  8. Kingsford, R.T.; Basset, A.; Jackson, L. Wetlands: Conservation’s Poor Cousins. Aquat. Conserv. Mar. Freshw. Ecosyst. 2016, 26, 892–916. [Google Scholar] [CrossRef] [Green Version]
  9. Hansen, V.D.; Reiss, K.C. Threats to Marsh Resources and Mitigation; Elsevier Inc.: Amsterdam, The Netherlands, 2015; ISBN 9780123965387. [Google Scholar]
  10. Crosby, S.C.; Sax, D.F.; Palmer, M.E.; Booth, H.S.; Deegan, L.A.; Bertness, M.D.; Leslie, H.M. Salt Marsh Persistence Is Threatened by Predicted Sea-Level Rise. Estuar. Coast. Shelf Sci. 2016, 181, 93–99. [Google Scholar] [CrossRef] [Green Version]
  11. Fagherazzi, S.; Mariotti, G.; Leonardi, N.; Canestrelli, A.; Nardin, W.; Kearney, W.S. Salt Marsh Dynamics in a Period of Accelerated Sea Level Rise. J. Geophys. Res. Earth Surf. 2020, 125, e2019JF005200. [Google Scholar] [CrossRef]
  12. Fagherazzi, S.; Mariotti, G.; Wiberg, P.L.; Mcglathery, K.J. Marsh Collapse Does Not Require Sea Level Rise. Oceanography 2013, 26, 70–77. [Google Scholar] [CrossRef] [Green Version]
  13. Kirwan, M.L.; Megonigal, J.P. Tidal Wetland Stability in the Face of Human Impacts and Sea-Level Rise. Nature 2013, 504, 53–60. [Google Scholar] [CrossRef] [PubMed]
  14. Alizad, K.; Hagen, S.C.; Morris, J.T.; Medeiros, S.C.; Bilskie, M.V.; Weishampel, J.F. Coastal Wetland Response to Sea-Level Rise in a Fluvial Estuarine System. Earth Future 2016, 4, 483–497. [Google Scholar] [CrossRef]
  15. Ramsar. Available online: https://www.ramsar.org/es (accessed on 23 February 2022).
  16. Fagherazzi, S.; Marani, M.; Blum, K. Introduction: The Coupled Evolution of Geomorphological and Ecosystem Structures in Salt Marshes. In The Ecogeomorphology of Tidal Marshes, Coastal and Estuarine Studies; American Geophysical Union: Washington, DC, USA, 2004; Volume 59, pp. 1–5. [Google Scholar]
  17. Marani, M.; D’Alpaos, A.; Lanzoni, S.; Carniello, L.; Rinaldo, A. Biologically-Controlled Multiple Equilibria of Tidal Landforms and the Fate of the Venice Lagoon. Geophys. Res. Lett. 2007, 34, 1–5. [Google Scholar] [CrossRef]
  18. Fagherazzi, S.; Kirwan, M.L.; Mudd, S.M.; Guntenspergen, G.R.; Temmerman, S.; D’Alpaos, A.; van de Koppel, J.; Rybczyk, J.M.; Reyes, E.; Craft, C.; et al. Numerical Models of Salt Marsh Evolution: Ecological, Geomorphic, and Climatic Factors. Rev. Geophys. 2012, 50, 1–28. [Google Scholar] [CrossRef]
  19. Hopkinson, C.S.; Morris, J.T.; Fagherazzi, S.; Wollheim, W.M.; Raymond, P.A. Lateral Marsh Edge Erosion as a Source of Sediments for Vertical Marsh Accretion. J. Geophys. Res. Biogeosci. 2018, 123, 2444–2465. [Google Scholar] [CrossRef]
  20. Alizad, K.; Medeiros, S.C.; Foster-Martinez, M.R.; Hagen, S.C. Model Sensitivity to Topographic Uncertainty in Meso- And Microtidal Marshes. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 807–814. [Google Scholar] [CrossRef]
  21. Silvestri, S.; Marani, M.; Marani, A. Hyperspectral Remote Sensing of Salt Marsh Vegetation, Morphology and Soil Topography. Phys. Chem. Earth 2003, 28, 15–25. [Google Scholar] [CrossRef]
  22. Kuenzer, C.; Bluemel, A.; Gebhardt, S.; Quoc, T.V.; Dech, S. Remote Sensing of Mangrove Ecosystems: A Review. Remote Sens. 2011, 3, 878–928. [Google Scholar] [CrossRef] [Green Version]
  23. Macintyre, P.; van Niekerk, A.; Mucina, L. Efficacy of Multi-Season Sentinel-2 Imagery for Compositional Vegetation Classification. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 101980. [Google Scholar] [CrossRef]
  24. Byrd, K.B.; Ballanti, L.; Thomas, N.; Nguyen, D.; Holmquist, J.R.; Simard, M.; Windham-Myers, L. A Remote Sensing-Based Model of Tidal Marsh Aboveground Carbon Stocks for the Conterminous United States. ISPRS J. Photogramm. Remote Sens. 2018, 139, 255–271. [Google Scholar] [CrossRef]
  25. Gandhi, G.M.; Parthiban, S.; Thummalu, N.; Christy, A. Ndvi: Vegetation Change Detection Using Remote Sensing and Gis—A Case Study of Vellore District. Procedia Comput. Sci. 2015, 57, 1199–1210. [Google Scholar] [CrossRef] [Green Version]
  26. Parmehr, E.G.; Amati, M.; Taylor, E.J.; Livesley, S.J. Estimation of Urban Tree Canopy Cover Using Random Point Sampling and Remote Sensing Methods. Urban For. Urban Green. 2016, 20, 160–171. [Google Scholar] [CrossRef]
  27. Ouellette, W.; Getinet, W. Remote Sensing for Marine Spatial Planning and Integrated Coastal Areas Management: Achievements, Challenges, Opportunities and Future Prospects. Remote Sens. Appl. Soc. Environ. 2016, 4, 138–157. [Google Scholar] [CrossRef]
  28. Nex, F.; Remondino, F. UAV for 3D Mapping Applications: A Review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar] [CrossRef]
  29. Hossain, M.S.; Bujang, J.S.; Zakaria, M.H.; Hashim, M. The Application of Remote Sensing to Seagrass Ecosystems: An Overview and Future Research Prospects. Int. J. Remote Sens. 2014, 36, 61–113. [Google Scholar] [CrossRef]
  30. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. “Structure-from-Motion” Photogrammetry: A Low-Cost, Effective Tool for Geoscience Applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  31. Fonstad, M.A.; Dietrich, J.T.; Courville, B.C.; Jensen, J.L.; Carbonneau, P.E. Topographic Structure from Motion: A New Development in Photogrammetric Measurement. Earth Surf. Proc. Landf. 2013, 38, 421–430. [Google Scholar] [CrossRef] [Green Version]
  32. Straatsma, M.W.; Middelkoop, H. Airborne Laser Scanning as a Tool for Lowland Flood Plain Vegetation Monitoring. Hydrobiologia 2007, 565, 87–103. [Google Scholar] [CrossRef]
  33. Brock, J.C.; Purkis, S.J. The Emerging Role of Lidar Remote Sensing in Coastal Research and Resource Management. J. Coast. Res. 2009, 1, 1–5. [Google Scholar] [CrossRef]
  34. Belluco, E.; Camuffo, M.; Ferrari, S.; Modenese, L.; Silvestri, S.; Marani, A.; Marani, M. Mapping Salt-Marsh Vegetation by Multispectral and Hyperspectral Remote Sensing. Remote Sens. Environ. 2006, 105, 54–67. [Google Scholar] [CrossRef]
  35. Kalacska, M.; Chmura, G.L.; Lucanus, O.; Bérubé, D.; Arroyo-Mora, J.P. Structure from Motion Will Revolutionize Analyses of Tidal Wetland Landscapes. Remote Sens. Environ. 2017, 199, 14–24. [Google Scholar] [CrossRef]
  36. DiGiacomo, A.E.; Bird, C.N.; Pan, V.G.; Dobroski, K.; Atkins-Davis, C.; Johnston, D.W.; Ridge, J.T. Modeling Salt Marsh Vegetation Height Using Unoccupied Aircraft Systems and Structure from Motion. Remote Sens. 2020, 12, 2333. [Google Scholar] [CrossRef]
  37. Villoslada Peciña, M.; Bergamo, T.F.; Ward, R.D.; Joyce, C.B.; Sepp, K. A Novel UAV-Based Approach for Biomass Prediction and Grassland Structure Assessment in Coastal Meadows. Ecol. Indic. 2021, 122, 107227. [Google Scholar] [CrossRef]
  38. Hladik, C.; Alber, M. Classification of Salt Marsh Vegetation Using Edaphic and Remote Sensing-Derived Variables. Estuar. Coast. Shelf Sci. 2014, 141, 47–57. [Google Scholar] [CrossRef]
  39. Jahncke, R.; Leblon, B.; Bush, P.; LaRocque, A. Mapping Wetlands in Nova Scotia with Multi-Beam RADARSAT-2 Polarimetric SAR, Optical Satellite Imagery, and Lidar Data. Int. J. Appl. Earth Obs. Geoinf. 2018, 68, 139–156. [Google Scholar] [CrossRef]
  40. Li, Q.; Wong, F.K.K.; Fung, T. Mapping Multi-Layered Mangroves from Multispectral, Hyperspectral, and LiDAR Data. Remote Sens. Environ. 2021, 258, 112403. [Google Scholar] [CrossRef]
  41. Suchrow, S.; Jensen, K. Plant Species Responses to an Elevational Gradient in German North Sea Salt Marshes. Wetlands 2010, 30, 735–746. [Google Scholar] [CrossRef]
  42. Schmid, K.A.; Hadley, B.C.; Wijekoon, N. Vertical Accuracy and Use of Topographic LIDAR Data in Coastal Marshes. J. Coast. Res. 2011, 27, 116–132. [Google Scholar] [CrossRef]
  43. Hladik, C.; Schalles, J.; Alber, M. Salt Marsh Elevation and Habitat Mapping Using Hyperspectral and LIDAR Data. Remote Sens. Environ. 2013, 139, 318–330. [Google Scholar] [CrossRef]
  44. Hladik, C.; Alber, M. Accuracy Assessment and Correction of a LIDAR-Derived Salt Marsh Digital Elevation Model. Remote Sens. Environ. 2012, 121, 224–235. [Google Scholar] [CrossRef]
  45. Medeiros, S.; Hagen, S.; Weishampel, J.; Angelo, J.; Gallant, A.L.; Baghdadi, N.; Thenkabail, P.S. Adjusting Lidar-Derived Digital Terrain Models in Coastal Marshes Based on Estimated Aboveground Biomass Density. Remote Sens. 2015, 7, 3507–3525. [Google Scholar] [CrossRef] [Green Version]
  46. Buffington, K.J.; Dugger, B.D.; Thorne, K.M.; Takekawa, J.Y. Statistical Correction of Lidar-Derived Digital Elevation Models with Multispectral Airborne Imagery in Tidal Marshes. Remote Sens. Environ. 2016, 186, 616–625. [Google Scholar] [CrossRef] [Green Version]
  47. Wang, D.; Xin, X.; Shao, Q.; Brolly, M.; Zhu, Z.; Chen, J. Modeling Aboveground Biomass in Hulunber Grassland Ecosystem by Using Unmanned Aerial Vehicle Discrete Lidar. Sensors 2017, 17, 108. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Zhang, X.; Bao, Y.; Wang, D.; Xin, X.; Ding, L.; Xu, D.; Hou, L.; Shen, J. Using UAV LiDAR to Extract Vegetation Parameters of Inner Mongolian Grassland. Remote Sens. 2021, 13, 656. [Google Scholar] [CrossRef]
  49. Guo, M.; Li, J.; Sheng, C.; Xu, J.; Wu, L. A Review of Wetland Remote Sensing. Sensors 2017, 17, 777. [Google Scholar] [CrossRef] [Green Version]
  50. Hu, T.; Sun, X.; Su, Y.; Guan, H.; Sun, Q.; Kelly, M.; Guo, Q. Development and Performance Evaluation of a Very Low-Cost UAV-Lidar System for Forestry Applications. Remote Sens. 2021, 13, 77. [Google Scholar] [CrossRef]
  51. Pinton, D.; Canestrelli, A.; Wilkinson, B.; Ifju, P.; Ortega, A. A New Algorithm for Estimating Ground Elevation and Vegetation Characteristics in Coastal Salt Marshes from High-Resolution UAV-Based LiDAR Point Clouds. Earth Surf. Proc. Landf. 2020, 45, 3687–3701. [Google Scholar] [CrossRef]
  52. Pinton, D.; Canestrelli, A.; Wilkinson, B.; Ifju, P.; Ortega, A. Estimating Ground Elevation and Vegetation Characteristics in Coastal Salt Marshes Using Uav-Based Lidar and Digital Aerial Photogrammetry. Remote Sens. 2021, 13, 4506. [Google Scholar] [CrossRef]
  53. Vázquez Pinillos, F.J.; Marchena Gómez, M.J. Territorial Impacts of Sea-Level Rise in Marsh Environments. The Case of the Bay of Cádiz, Spain. Geogr. Res. Lett. 2021, 47, 523–543. [Google Scholar] [CrossRef]
  54. Porras, R. Caracterización de La Marisma Mareal de La Bahía de Cádiz. In Proyecto LIFE 14 CCM/ES/000957 “Blue Natura Andalucía” Expte; 2016/000225/M Certificación Final; 2016; Available online: https://life-bluenatura.eu/en/results/ (accessed on 23 February 2022).
  55. Jiménez-Arias, J.L.; Morris, E.; Rubio-de-Inglés, M.J.; Peralta, G.; García-Robledo, E.; Corzo, A.; Papaspyrou, S. Tidal Elevation Is the Key Factor Modulating Burial Rates and Composition of Organic Matter in a Coastal Wetland with Multiple Habitats. Sci. Total Environ. 2020, 724, 138205. [Google Scholar] [CrossRef]
  56. Del Río, L.; Plomaritis, T.A.; Benavente, J.; Valladares, M.; Ribera, P. Establishing Storm Thresholds for the Spanish Gulf of Cádiz Coast. Geomorphology 2012, 143–144, 13–23. [Google Scholar] [CrossRef]
  57. García de Lomas, J.; García, C.M.; Álvarez, Ó. Vegetación de Las Marismas de Aletas-Cetina (Puerto Real). Identificación de Hábitats de Interés Comunitario y Estimaciones Preliminares de Posibles Efectos de Su Inundación. Rev. De La Soc. Gaditana De Hist. Nat. 2006, 5, 9–38. [Google Scholar]
  58. Peralta, G.; Godoy, O.; Egea, L.G.; de los Santos, C.B.; Jiménez-Ramos, R.; Lara, M.; Brun, F.G.; Hernández, I.; Olivé, I.; Vergara, J.J.; et al. The Morphometric Acclimation to Depth Explains the Long-Term Resilience of the Seagrass Cymodocea Nodosa in a Shallow Tidal Lagoon. J. Environ. Manag. 2021, 299, 113452. [Google Scholar] [CrossRef] [PubMed]
  59. de Vries, M.; van der Wal, D.; Möller, I.; van Wesenbeeck, B.; Peralta, G.; Stanica, A. Earth Observation and the Coastal Zone: From Global Images to Local Information. In FP7 FAST Project Synthesis Report; 2018. [Google Scholar] [CrossRef]
  60. PIX4Dmapper: Professional Photogrammetry Software for Drone Mapping Pix4D. Available online: https://www.pix4d.com/product/pix4dmapper-photogrammetry-software (accessed on 2 June 2022).
  61. DJI Terra. Available online: https://www.dji.com/uk/dji-terra (accessed on 2 June 2022).
  62. Global Mapper. Available online: https://www.bluemarblegeo.com/knowledgebase/global-mapper-22-1/Lidar_Module/Automated_Lidar_Analysis_Tools.htm (accessed on 25 February 2022).
  63. Roussel, J.R.; Béland, M.; Caspersen, J.; Achim, A. A Mathematical Framework to Describe the Effect of Beam Incidence Angle on Metrics Derived from Airborne LiDAR: The Case of Forest Canopies Approaching Turbid Medium Behaviour. Remote Sens. Environ. 2018, 209, 824–834. [Google Scholar] [CrossRef]
  64. Micasense. Available online: https://support.micasense.com/hc/en-us/articles/220154947-How-do-calibrated-reflectance-panels-improve-my-data- (accessed on 19 May 2022).
  65. Congedo, L. Semi-Automatic Classification Plugin: A Python Tool for the Download and Processing of Remote Sensing Images in QGIS. J. Open Source Softw. 2021, 6, 3172. [Google Scholar] [CrossRef]
  66. Ashabi, A.; bin Sahibuddin, S.; Salkhordeh Haghighi, M. The Systematic Review of K-Means Clustering Algorithm. In Proceedings of the 9th International Conference on Networks, Communication and Computing (ICNCC 2020), Tokyo, Japan, 18–20 December 2020; p. 6. [Google Scholar]
  67. CloudCompare. Available online: http://www.cloudcompare.org/ (accessed on 2 June 2022).
  68. LAStools, Rapidlasso GmbH. Available online: https://rapidlasso.com/lastools/ (accessed on 18 March 2022).
  69. Furlan, L.M.; Moreira, C.A.; de Alencar, P.G.; Rosolen, V. Environmental Monitoring and Hydrological Simulations of a Natural Wetland Based on High-Resolution Unmanned Aerial Vehicle Data (Paulista Peripheral Depression, Brazil). Environ. Chall. 2021, 4, 100146. [Google Scholar] [CrossRef]
  70. Casagrande, M.F.S.; Furlan, L.M.; Moreira, C.A.; Rosa, F.T.G.; Rosolen, V. Non-Invasive Methods in the Identification of Hydrological Ecosystem Services of a Tropical Isolated Wetland (Brazilian Study Case). Environ. Chall. 2021, 5, 100233. [Google Scholar] [CrossRef]
  71. Doughty, C.L.; Cavanaugh, K.C. Mapping Coastal Wetland Biomass from High Resolution Unmanned Aerial Vehicle (UAV) Imagery. Remote Sens. 2019, 11, 540. [Google Scholar] [CrossRef] [Green Version]
  72. Long, N.; Millescamps, B.; Guillot, B.; Pouget, F.; Bertin, X. Monitoring the Topography of a Dynamic Tidal Inlet Using UAV Imagery. Remote Sens. 2016, 8, 387. [Google Scholar] [CrossRef] [Green Version]
  73. Jaud, M.; Grasso, F.; le Dantec, N.; Verney, R.; Delacourt, C.; Ammann, J.; Deloffre, J.; Grandjean, P. Potential of UAVs for Monitoring Mudflat Morphodynamics (Application to the Seine Estuary, France). ISPRS Int. J. Geo-Inf. 2016, 5, 50. [Google Scholar] [CrossRef] [Green Version]
  74. Chassereau, J.E.; Bell, J.M.; Torres, R. A Comparison of GPS and Lidar Salt Marsh DEMs. Earth Surf. Proc. Landf. 2011, 36, 1770–1775. [Google Scholar] [CrossRef]
  75. Idaszkin, Y.L.; Bortolus, A.; Bouza, P.J. Ecological Processes Shaping Central Patagonian Salt Marsh Landscapes. Austral Ecol. 2011, 36, 59–67. [Google Scholar] [CrossRef]
  76. Rosso, P.H.; Ustin, S.L.; Hastings, A. Use of Lidar to Study Changes Associated with Spartina Invasion in San Francisco Bay Marshes. Remote Sens. Environ. 2006, 100, 295–306. [Google Scholar] [CrossRef]
  77. Aranda, M. Geomorphological and Environmental Characterization of Three Estuaries on the Spanish Coast; University of Cádiz: Cádiz, Spain, 2021. [Google Scholar]
  78. Villoslada, M.; Bergamo, T.F.; Ward, R.D.; Burnside, N.G.; Joyce, C.B.; Bunce, R.G.H.; Sepp, K. Fine Scale Plant Community Assessment in Coastal Meadows Using UAV Based Multispectral Data. Ecol. Indic. 2020, 111, 105979. [Google Scholar] [CrossRef]
  79. Zhao, X.; Su, Y.; Hu, T.; Cao, M.; Liu, X.; Yang, Q.; Guan, H.; Liu, L.; Guo, Q. Analysis of UAV Lidar Information Loss and Its Influence on the Estimation Accuracy of Structural and Functional Traits in a Meadow Steppe. Ecol. Indic. 2022, 135, 108515. [Google Scholar] [CrossRef]
  80. James, L.A.; Hodgson, M.E.; Ghoshal, S.; Latiolais, M.M. Geomorphic Change Detection Using Historic Maps and DEM Differencing: The Temporal Dimension of Geospatial Analysis. Geomorphology 2012, 137, 181–198. [Google Scholar] [CrossRef]
  81. Williams, R. DEMs of Difference. Geomorphol. Tech. 2012, 2, 1–17. [Google Scholar]
  82. Wasser, L.; Day, R.; Chasmer, L.; Taylor, A. Influence of Vegetation Structure on Lidar-Derived Canopy Height and Fractional Cover in Forested Riparian Buffers During Leaf-Off and Leaf-On Conditions. PLoS ONE 2013, 8, e54776. [Google Scholar] [CrossRef] [Green Version]
  83. Ma, Q.; Su, Y.; Guo, Q. Comparison of Canopy Cover Estimations from Airborne LiDAR, Aerial Imagery, and Satellite Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4225–4236. [Google Scholar] [CrossRef]
  84. Lin, Y.C.; Cheng, Y.T.; Zhou, T.; Ravi, R.; Hasheminasab, S.M.; Flatt, J.E.; Troy, C.; Habib, A. Evaluation of UAV LiDAR for Mapping Coastal Environments. Remote Sens. 2019, 11, 2893. [Google Scholar] [CrossRef] [Green Version]
  85. Toschi, I.; Remondino, F.; Rothe, R.; Klimek, K. Combining Airborne Oblique Camera and LiDAR Sensors: Investigation and New Perspectives. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.-ISPRS Arch. 2018, 42, 437–444. [Google Scholar] [CrossRef] [Green Version]
  86. Vukašinovíc, N.; Bračun, D.; Možina, J.; Duhovnik, J. The Influence of Incident Angle, Object Colour and Distance on CNC Laser Scanning. Int. J. Adv. Manuf. Technol. 2010, 50, 265–274. [Google Scholar] [CrossRef]
  87. Bešić, I.; van Gestel, N.; Kruth, J.P.; Bleys, P.; Hodolič, J. Accuracy Improvement of Laser Line Scanning for Feature Measurements on CMM. Opt. Lasers Eng. 2011, 49, 1274–1280. [Google Scholar] [CrossRef]
  88. García-López, S.; Ruiz-Ortiz, V.; Barbero, L.; Sánchez-Bellón, Á. Contribution of the UAS to the Determination of the Water Budget in a Coastal Wetland: A Case Study in the Natural Park of the Bay of Cádiz (SW Spain). Eur. J. Remote Sens. 2018, 51, 965–977. [Google Scholar] [CrossRef] [Green Version]
  89. Crespo, M.; Manso, M.I. Control De Calidad Del Vuelo Lidar Utilizado Para La Modelización 3D De Las Fallas De Alhama (Murcia) Y Carboneras (Almería); Universidad Politécnica de Madrid: Madrid, Spain, 2014. [Google Scholar]
Figure 1. Location of Cádiz Bay and detailed view of the study site in the internal basin (in front of the salina San José de Barbanera). The yellow polygon indicates the flight area in September 2021; the red polygon, the flight area in October 2021. (A–C) are drone-captured images at the corresponding points in the left image: the uppermost part of the salt marsh system and a portion of a salina with its external wall on the right (A), a zone with a transition of dominant vegetation between Sarcocornia spp. and S. maritimus (B), and the lowermost part of the salt marsh system with a clear view of the tidal channel network (C).
Figure 2. Westward view of the study area from the uppermost part of the salt marsh. The zoomed pictures show a detail of Sporobolus maritimus—predominant vegetation of the low marsh—(left) and Sarcocornia spp.—predominant vegetation of the medium marsh—(right).
Figure 3. Non-repetitive petal scanning (left) and repetitive line scanning (right) of DJI Zenmuse L1 Livox LiDAR module.
Figure 4. Flowchart showing the main data collected by the UAV sensors and their processing steps (photogrammetric camera on the left, LiDAR in the middle, and multispectral sensor on the right). The multispectral data allows the creation of masks for correctly classifying the plants in the point cloud. When multispectral imagery is missing, the imported mask step is absent in LiDAR data processing. Rectangular panels represent processing steps; rounded panels represent products.
Figure 5. Profiles showing the thickness of the point cloud for bare ground (A) and vegetation (B).
Figure 6. (A) Distribution of ground control points (GCP) over the surveyed area. The sectors for the calibration trial are identified by c1–c2 and c3–c4. (B) Trial operation scheme. White areas represent areas with vegetation removal. Black points indicate the location of canopy height measurements. Yellow points identify the location of dGPS measurements inside the trial areas.
Figure 7. Approaches for LiDAR point cloud classification. (A) Classification after the application of the non-ground threshold of 0.03 m. (B) Classification after the application of the non-ground threshold of 0.10 m. (C) Classification after the introduction of multispectral data. Brown and grey points represent ‘ground’ and ‘never classified’ classes (i.e., vegetation), respectively. Red points are noise.
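As the Figure 7 caption describes, the first two classification approaches separate ground from vegetation with a simple height-above-ground threshold (0.03 m or 0.10 m) before multispectral masks are introduced. The sketch below illustrates that kind of rule only; the function and variable names are illustrative and do not reproduce the Global Mapper implementation used in the study.

```python
import numpy as np

def classify_ground(points, ground_model, threshold=0.10):
    """Label points as ground (True) when they lie within `threshold`
    metres above a provisional ground surface (illustrative sketch of a
    non-ground height threshold, not the Global Mapper algorithm).

    points: (N, 3) array of x, y, z coordinates.
    ground_model: callable returning the provisional ground elevation
                  at given (x, y) positions.
    """
    z_ground = ground_model(points[:, 0], points[:, 1])
    height_above_ground = points[:, 2] - z_ground
    return height_above_ground <= threshold

# Toy example: flat provisional ground at z = 0 (assumed values).
pts = np.array([
    [0.0, 0.0, 0.02],   # bare ground return
    [1.0, 0.0, 0.08],   # low vegetation return
    [2.0, 0.0, 0.35],   # canopy return
])
flat = lambda x, y: np.zeros_like(x)

print(classify_ground(pts, flat, threshold=0.03))  # [ True False False]
print(classify_ground(pts, flat, threshold=0.10))  # [ True  True False]
```

With the toy points, tightening the threshold from 0.10 m to 0.03 m moves the low-vegetation return out of the ground class, mirroring the trade-off between panels (A) and (B) of Figure 7.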
Figure 8. Comparison of PNOA 2015 and UAV-L1 products. (A) PNOA 2015 DSM. (B) 100 m-L1 DSM. (C) PNOA 2015 DEM. (D) 100 m-L1 DEM. Notice the great difference in resolution between PNOA 2015 and UAV-L1 products.
Table 1. Summary of flight configurations for the missions executed for this work.
| Date | Mission Name | Sensor | Covered Area (ha) | Flight Altitude (m AGL) | Side Overlap (%) | Frontal Overlap (%) | Speed (m/s) | Flight Time (min) |
|---|---|---|---|---|---|---|---|---|
| 8 September 2021 | P1 | P1 | 20 | 100 | 70 | 80 | 7 | 12 |
| 8 September 2021 | 100 m-L1 | L1 | 20 | 100 | 20 | n/a | 7 | 15 |
| 8 September 2021 | 60 m-L1 | L1 | 20 | 60 | 70 | n/a | 7 | 20 |
| 22 October 2021 | MS | MS | 4.5 | 100 | 70 | 80 | 3 | 5 |
| 22 October 2021 | 1–8 | L1 | 4.5 | 60 | 40 | n/a | 5 | * |
* Although the flight configuration was the same, the L1 flight time in the October campaign varied with the sensor configuration (see Table 2 for further details). Sensors: P1: photogrammetric sensor; L1: LiDAR; MS: multispectral.
Table 2. Configuration of LiDAR missions performed on 22 October 2021 in Cádiz Bay. The calibration column indicates whether the mission was performed before or after the vegetation was removed for the calibration trial. The flight time indicates the duration of the mission in minutes and seconds.
| Mission | Scan Mode | Sensor Orientation | Calibration | Flight Time |
|---|---|---|---|---|
| 1 | Non-repetitive | Nadir | Before | 2′48″ |
| 2 | Repetitive | Nadir | Before | 2′48″ |
| 3 | Non-repetitive | Oblique | Before | 18′30″ |
| 4 | Repetitive | Oblique | Before | 18′30″ |
| 5 | Non-repetitive | Nadir | After | 2′48″ |
| 6 | Repetitive | Nadir | After | 2′48″ |
| 7 | Non-repetitive | Oblique | After | 18′30″ |
| 8 | Repetitive | Oblique | After | 18′30″ |
Table 3. Summary of processing operations and performing software as a function of the UAV dataset nature.
| Source | Dataset | Software | Process | End Product |
|---|---|---|---|---|
| P1 sensor, L1 sensor | Images | Pix4Dmapper | Photogrammetric processing (SfM) | Orthomosaic, DSM and DEM |
| L1 sensor | Images + 3D georeferenced and colored point cloud | Pix4Dmapper | Photogrammetric processing (SfM) | Orthomosaic |
| MS sensor | Multispectral images | Pix4Dmapper | Photogrammetric processing (SfM) and calibration | Reflectance maps |
| L1 sensor | L1 raw data | DJI Terra | First LiDAR processing | 3D georeferenced and colored point cloud |
| L1-DJI Terra-processed | 3D georeferenced and colored point cloud | CloudCompare | Point cloud comparison | Point cloud distances |
| L1-DJI Terra-processed | 3D georeferenced and colored point cloud | Global Mapper LiDAR module | Filtering, classification, and digital model generation | DSM, DEM, CHM |
| CNIG | PNOA 2015 point cloud | LAStools | Digital model generation | DSM, DEM |
Table 4. Characteristics of deliverables from photogrammetric processing (SfM) of P1 sensor and LiDAR sensor image datasets. Sens: sensor, Alt: flight altitude, Time: processing time for the entire project.
| Sens | Alt (m) | Captured Images | Data Size | Deliverable | Final Size | Resolution (cm/pixel) | Time |
|---|---|---|---|---|---|---|---|
| P1 | 100 | 374 | 8.34 GB | Orthomosaic | 2.98 GB | 1.26 | 6 h 30 min |
| P1 | 100 | 374 | 8.34 GB | DSM | 1.97 GB | 1.26 | 6 h 30 min |
| P1 | 100 | 374 | 8.34 GB | DEM | 116 MB | 6.25 | 6 h 30 min |
| L1 | 100 | 91 | 762 MB | Orthomosaic | 945 MB | 2.78 | 2 h 10 min |
| L1 | 60 | 236 | 1.92 GB | Orthomosaic | 1.26 GB | 1.69 | 3 h 45 min |
Table 5. LiDAR point cloud characteristics.
| Mission | Raw LiDAR Data Size | DJI Terra-Processed Data Size | Count Decrease after Filtering (%) |
|---|---|---|---|
| 100 m-L1 | 2.5 GB | 4.53 GB | 38.1 |
| 60 m-L1 | 3.7 GB | 7.65 GB | 39.7 |
| 1 | 270 MB | 415 MB | 17.1 |
| 2 | 270 MB | 430 MB | 42.7 |
| 3 | 1.99 GB | 3.5 GB | 15.8 |
| 4 | 1.99 GB | 2.6 GB | 39.6 |
| 5 | 270 MB | 410 MB | 17.0 |
| 6 | 270 MB | 430 MB | 41.9 |
| 7 | 1.99 GB | 3.2 GB | 14.6 |
| 8 | 1.99 GB | 2.4 GB | 36.6 |
Table 6. Analysis of the LiDAR point cloud accuracy. The estimation of the RMSE includes all the GCPs of the corresponding campaign. RMSE: root mean square error. Marsh: including only GCPs on the marsh surface.
| Point Cloud Mission | RMSE (m) | R² Value | R² Value (Marsh) |
|---|---|---|---|
| 100 m-L1 | 0.115 | 0.976 | 0.939 |
| 60 m-L1 | 0.089 | 0.989 | 0.949 |
| 1 | 0.114 | 0.952 | 0.821 |
| 2 | 0.128 | 0.967 | 0.885 |
| 3 | 0.101 | 0.947 | 0.877 |
| 4 | 0.114 | 0.959 | 0.869 |
| 5 | 0.102 | 0.948 | 0.813 |
| 6 | 0.126 | 0.948 | 0.813 |
| 7 | 0.098 | 0.962 | 0.876 |
| 8 | 0.097 | 0.962 | 0.797 |
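The RMSE and R² values in Table 6 quantify the agreement between LiDAR-derived elevations and the dGPS field measurements at the control points. The following self-contained sketch shows how such metrics can be computed; the numerical values below are illustrative only, not the study's measurements.

```python
import numpy as np

def rmse_and_r2(observed, predicted):
    """Compute the root mean square error and coefficient of
    determination between field (e.g., dGPS) values and the
    corresponding sensor-derived values."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residuals = predicted - observed
    rmse = np.sqrt(np.mean(residuals ** 2))
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, r2

# Illustrative elevations in metres (assumed values, not the study's data).
dgps = [0.52, 0.75, 1.10, 1.48, 2.03]
lidar = [0.55, 0.70, 1.18, 1.45, 2.10]
rmse, r2 = rmse_and_r2(dgps, lidar)
print(f"RMSE = {rmse:.3f} m, R2 = {r2:.3f}")  # RMSE = 0.056 m, R2 = 0.989
```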
Table 7. LiDAR-derived DEM accuracy. nr: non-repetitive scan mode; r: repetitive scan mode; ps: point spacing.
| DEM Mission | Non-Ground Threshold | Average RMSE 5-ps | Average RMSE 10-ps | Average RMSE 15-ps | R² 5-ps | R² 10-ps | R² 15-ps | R² (Marsh) 5-ps | R² (Marsh) 10-ps | R² (Marsh) 15-ps |
|---|---|---|---|---|---|---|---|---|---|---|
| 100 m-L1 | 0.03 | 0.088 | 0.086 | – | 0.970 | 0.973 | – | 0.867 | 0.881 | – |
| 100 m-L1 | 0.10 | 0.115 | 0.120 | – | 0.984 | 0.982 | – | 0.941 | 0.923 | – |
| 60 m-L1 | 0.03 | 0.097 | 0.140 | – | 0.971 | 0.930 | – | 0.972 | 0.971 | – |
| 60 m-L1 | 0.10 | 0.099 | 0.101 | – | 0.986 | 0.983 | – | 0.968 | 0.975 | – |
| 1 (nr) | – | 0.069 | 0.056 | 0.076 | 0.904 | 0.930 | 0.888 | 0.703 | 0.774 | 0.768 |
| 2 (r) | – | 0.099 | 0.096 | 0.064 | 0.873 | 0.904 | 0.969 | 0.593 | 0.682 | 0.916 |
| 3 (nr) | – | 0.059 | 0.058 | 0.162 | 0.917 | 0.922 | 0.880 | 0.736 | 0.751 | 0.593 |
| 4 (r) | – | 0.066 | 0.064 | 0.053 | 0.954 | 0.967 | 0.946 | 0.864 | 0.915 | 0.866 |
| 5 (nr) | – | 0.064 | 0.067 | 0.069 | 0.924 | 0.916 | 0.916 | 0.813 | 0.809 | 0.846 |
| 6 (r) | – | 0.100 | 0.091 | 0.065 | 0.863 | 0.903 | 0.949 | 0.611 | 0.723 | 0.859 |
| 7 (nr) | – | 0.048 | 0.045 | 0.073 | 0.954 | 0.961 | 0.946 | 0.791 | 0.853 | 0.863 |
| 8 (r) | – | 0.048 | 0.045 | 0.037 | 0.969 | 0.971 | 0.965 | 0.873 | 0.878 | 0.870 |
Table 8. Analysis of canopy height model accuracy. The subscript indicates the CHM spatial resolution (0.02, 0.06, and 0.09 m/pixel). RMSE: root mean squared error.
| Mission | R² CHM0.02 | R² CHM0.06 | R² CHM0.09 | RMSE (m) CHM0.02 | RMSE (m) CHM0.06 | RMSE (m) CHM0.09 |
|---|---|---|---|---|---|---|
| 1 | 0.002 | 0.004 | 0.005 | 0.123 | 0.156 | 0.173 |
| 2 | 0.038 | 0.029 | 0.043 | 0.179 | 0.167 | 0.155 |
| 3 | 0.069 | 0.069 | 0.033 | 0.138 | 0.103 | 0.100 |
| 4 | 0.061 | 0.116 | 0.079 | 0.164 | 0.136 | 0.126 |
| 5 | 0.021 | 0.038 | 0.027 | 0.124 | 0.100 | 0.093 |
| 6 | 0.035 | 0.036 | 0.034 | 0.183 | 0.169 | 0.160 |
| 7 | 0.105 | 0.078 | 0.082 | 0.151 | 0.119 | 0.104 |
| 8 | 0.079 | 0.126 | 0.172 | 0.152 | 0.121 | 0.110 |
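A canopy height model of the kind assessed in Table 8 is typically obtained by differencing co-registered DSM and DEM rasters (CHM = DSM − DEM). The sketch below illustrates that raster difference; the grid values and the choice to clip negative residuals to zero are illustrative assumptions, not the study's processing chain.

```python
import numpy as np

def canopy_height_model(dsm, dem, min_height=0.0):
    """Derive a canopy height model by subtracting a bare-earth DEM
    from a DSM. Negative differences (DEM above DSM, usually noise)
    are clipped to `min_height` (an assumed convention)."""
    chm = np.asarray(dsm, dtype=float) - np.asarray(dem, dtype=float)
    return np.clip(chm, min_height, None)

# Toy 2x2 rasters in metres (illustrative values only).
dsm = np.array([[1.30, 1.05], [0.98, 1.20]])
dem = np.array([[1.00, 1.00], [1.00, 1.00]])
print(canopy_height_model(dsm, dem))
# → canopy heights of 0.3, 0.05, 0 (clipped), and 0.2 m
```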
Table 9. Deviations between field measurements and trial calibration values according to methods A, B, and C. For Method B, p4d corresponds to the photogrammetric-processing-derived DSM, and the remaining columns to LiDAR-processing DSMs at the indicated point spacing (ps). For Method C, the CHM subscript indicates the spatial resolution. Values are in m.
| Sector | Mission | Method A: Point Cloud | Method B: p4d | Method B: 1-ps | Method B: 3-ps | Method B: 5-ps | Method C: CHM0.02 | Method C: CHM0.06 | Method C: CHM0.09 |
|---|---|---|---|---|---|---|---|---|---|
| c1–c2 | 1–5 | 0.01 | 0.00 | 0.00 | 0.00 | −0.02 | 0.05 | 0.09 | 0.09 |
| c1–c2 | 2–6 | 0.01 | −0.01 | 0.01 | 0.00 | −0.02 | −0.06 | −0.04 | −0.03 |
| c1–c2 | 3–7 | 0.00 | 0.11 | −0.01 | −0.01 | −0.01 | −0.03 | 0.02 | 0.04 |
| c1–c2 | 4–8 | 0.00 | 0.00 | 0.00 | −0.02 | −0.02 | −0.01 | 0.01 | 0.02 |
| c3–c4 | 1–5 | −0.01 | 0.00 | 0.02 | −0.01 | −0.03 | −0.03 | 0.01 | 0.04 |
| c3–c4 | 2–6 | −0.01 | −0.02 | 0.00 | −0.01 | −0.04 | −0.06 | −0.02 | −0.02 |
| c3–c4 | 3–7 | −0.01 | 0.10 | −0.01 | −0.01 | −0.01 | −0.09 | −0.02 | −0.02 |
| c3–c4 | 4–8 | −0.02 | −0.01 | −0.02 | −0.02 | −0.02 | −0.07 | −0.05 | −0.02 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Curcio, A.C.; Peralta, G.; Aranda, M.; Barbero, L. Evaluating the Performance of High Spatial Resolution UAV-Photogrammetry and UAV-LiDAR for Salt Marshes: The Cádiz Bay Study Case. Remote Sens. 2022, 14, 3582. https://doi.org/10.3390/rs14153582

