Article

UAV-Based Wetland Monitoring: Multispectral and Lidar Fusion with Random Forest Classification

School of Geosciences, University of South Florida, Tampa, FL 33620, USA
* Author to whom correspondence should be addressed.
Drones 2024, 8(3), 113; https://doi.org/10.3390/drones8030113
Submission received: 14 February 2024 / Revised: 13 March 2024 / Accepted: 18 March 2024 / Published: 21 March 2024

Abstract

As sea levels rise and temperatures increase, vegetation communities in tropical and sub-tropical coastal areas will be stressed; some will migrate northward and inland. The transition from coastal marshes and scrub–shrubs to woody mangroves is a fundamental change to coastal community structure and species composition. However, this transition will likely be episodic, complicating monitoring efforts, as mangrove advances are countered by dieback from increasingly impactful storms. Coastal habitat monitoring has traditionally been conducted through satellite and ground-based surveys. Here we investigate the use of UAV-LiDAR (unoccupied aerial vehicle–light detection and ranging) and multispectral photogrammetry to study a Florida coastal wetland. These data have higher resolution than satellite-derived data and are cheaper and faster to collect compared to crewed aircraft or ground surveys. We detected significant canopy change in the period between our survey (2020–2022) and a previous survey (2015), including loss at the scale of individual buttonwood trees (Conocarpus erectus), a woody mangrove associate. The UAV-derived data were collected to investigate the utility of simplified processing and data inputs for habitat classification and were validated with standard metrics and additional ground truth. UAV surveys combined with machine learning can streamline coastal habitat monitoring, facilitating repeat surveys to assess the effects of climate change and other change agents.

1. Introduction

Coastal wetlands and associated habitats provide many environmental services and economic benefits to coastal communities, including carbon sequestration, protection from storm surges and other erosive forces, and maintenance of biodiversity [1,2]. These valuable coastal environments are threatened by climate-induced changes to sea level, local climate, and storms, and they require study and monitoring to help ensure their preservation. Coastal mangroves are common along tropical and subtropical coasts, ranging in stature from tall trees exceeding 50 m in the tropics to shrubs at the northern extent of their range [3]. Rising sea levels are contributing to their landward migration along waterways, while ocean warming is providing new habitats to the north [4,5]. Northward migration of storm tracks in the northern hemisphere, combined with increasing storm intensity, challenges this expansion [6]. Growth and dieback form a dynamic process, underscoring the need for monitoring. This migration will encounter anthropogenic and landscape barriers, causing a ‘coastal squeeze’ that may reduce the extent of shoreline suitable for mangroves and require management in near real-time [7]. Effective monitoring of coastal forests will be key to this process.
Traditionally, wetland monitoring has relied on ground surveys, which allow a detailed understanding of the study site. However, this approach can be labor intensive, time consuming, and sometimes dangerous. In response, studies over the past 30 years have increasingly moved toward larger-scale airborne and satellite-based monitoring, directed primarily at wetland classification and change detection [8]. The authors of [8] also found that satellite-based sensing has so far been most successful with convolutional neural networks (CNNs) for deep learning and random forest models for machine learning. These methods are not without their challenges. Challenges for CNNs include the minimum sampling area needed for mangrove detection and spectral confusion with species common to mangrove swamps [9,10]. For random forest methods, studies have shown better performance when optical- and radar-based data are used together as model inputs [11,12]. Reference [13] describes wetlands as a ‘moving target’ because of the large variability between environments; this variability exists between locations and even between seasons at a single location. These and related challenges are discussed by [14,15].
An alternative approach is the use of unoccupied aerial vehicles (UAVs), which can operate at a moderate cost, facilitating repeat surveys [16,17]. UAV systems can also generate accurate digital elevation models (DEMs), using LiDAR or structure from motion (SfM) [18], with higher spatial resolution compared to satellite-based data [19]. SfM is adequate for forest height and density monitoring in mangrove-dominant environments [20] but is challenged where bare earth elevation is required and vegetation is dense.
Reference [21] reviewed the current state of UAVs in wetland applications, reporting on strengths, weaknesses, and emerging trends. Among the strengths were the many ways a UAV survey can be customized. UAV-derived data have been used in biomass calculations, vegetation mapping, and four-dimensional (3D + time) change detection [22,23,24]. UAVs can be deployed at nearly any time, limited mainly by regulations and weather, and can be equipped with multiple sensor types for increased data gathering, with UAV-LiDAR becoming more common. However, some studies provided only limited reporting on accuracy, for example, the number and quality of ground control points. In lieu of ground control points, real-time kinematic (RTK) surveys are playing a larger role. However, processing the high-resolution data acquired by this instrumentation has proven challenging: the positional accuracy of RTK may be much coarser than the pixel resolution of the survey. At the pixel scale, heterogeneities in spectral signatures can make classification and delineation challenging. In response, some studies have preferred an object-based approach, which is in turn complicated by the need to fine-tune segmentation algorithms with appropriate variables. Reference [21] reviews these and related issues.
One emerging trend is to use UAV surveys to augment and scale up to satellite-based regional coverage [25,26]. Hyperspectral data are also becoming more widely available [27,28,29,30], albeit with greater data-processing challenges than simpler multispectral data with fewer bands. Machine learning has also become a common tool in classification efforts, employed in part to deal with the large number of descriptors that UAV surveys can generate, including UAV-LiDAR data [27,29,30,31].
In this study, we aim to use UAV-based methods with simplified data collection and processing protocols for wetland surveying and classification. Our protocols highlight the utility of first-order data products for monitoring efforts. We use high-spatial-resolution multispectral data with five spectral bands, high-resolution ground elevation data derived from LiDAR, and a simple random forest machine learning classification algorithm.

2. Materials and Methods

2.1. Study Area

The study site (27.618640° N, 82.734852° W) is situated within Fort DeSoto County Park (Mullet Key), along the Gulf Coast of Florida, south of the City of St. Petersburg (Figure 1). The park includes recreational facilities, paved roads and bike paths, and native plant communities typical of Gulf Coast subtropical barrier islands. A dense mangrove forest occupies the saltwater lagoon’s shoreline and forms the study site’s eastern border.
The study site is ∼0.055 km² in area and includes one of twelve transects routinely ground surveyed by the Tampa Bay Estuary Program (TBEP), an EPA-funded activity that monitors the ecological health of Tampa Bay and its watershed. The surveys assess species diversity, ecological health, habitat loss, changes related to sea level rise, and other environmental stressors (https://tbep.org/, accessed on 6 January 2020). The ground-based surveys are conducted by skilled personnel and are typically repeated every one to three years at a considerable cost. Habitat restoration activities are recommended where appropriate.

2.2. Overview

The general workflow is summarized in Figure 2 and detailed in Figures S1 and S2 in Supplementary Materials. The overall study site contains two sections: the training site (0.016 km²) to the south and the test site (0.03 km²) to the north (Figure 1B). At both sites, we collected elevation data using UAV-LiDAR, with GNSS ground control for ground truth, and collected imagery using UAV photogrammetry. The photogrammetry included RGB images, which were used to create orthomosaic maps [32], and multispectral images, which were used to create single-band rasters [33]. We compared UAV-LiDAR elevation data to legacy data through a digital elevation model (DEM) of difference to detect changes in elevation, including canopy height, over time. Our machine learning model was trained with a multi-band raster generated by merging five single-color bands, a normalized difference vegetation index (NDVI) [34] raster derived from the photogrammetric processing, and a digital terrain model (DTM) from the UAV-LiDAR. We collected initial vegetation data by conducting a ground survey in the training site and used principal component analysis to assess our input features’ ability to distinguish different habitats. We applied this model to the test site and validated the habitat classification by conducting additional ground surveys.

2.3. Photogrammetry and LiDAR Surveys

We conducted two UAV surveys at the training site in January 2020 and two at the testing site in February 2021, in order to stay within the same season as the previous survey. The first survey at each site employed a DJI Phantom 4 RTK (P4RTK; DJI, Shenzhen, China) equipped with a 20-megapixel RGB camera. The second survey used a DJI Phantom 4 RTK Multi-Spectral (P4Multi; DJI, Shenzhen, China) system that housed six 2-megapixel cameras and a downwelling light sensor for image calibration, ensuring reflectance stability during the survey. Five were single-band cameras with blue, green, red, red-edge, and near-infrared (near-IR) channels (Table 1); the sixth was a full-color RGB camera. A DJI GNSS base station (DJI, Shenzhen, China) provided in-flight location correction to both RTK-capable UAVs. Both the base station and the rover (UAV) collected raw RINEX GNSS files for post-survey post-processing kinematic (PPK) corrections.
In December 2022, again to stay within the winter season, we conducted a third survey using UAV-LiDAR at both field sites to create digital terrain and digital surface models (DTM, DSM). For this survey, we used a Hesai Pandar XT32 (Hesai, Shanghai, China; Table 2). This sensor provides 32 near-IR scanning laser channels acquiring up to 640,000 points/s, with a 360-degree field of view, ±2 cm vertical precision, and a range of up to 120 m. The LiDAR sensor is coupled with a 24-megapixel camera that provides georeferenced images for point cloud colorization.
We flew the UAV 50 m above ground with 50 m line spacing, calibrating the inertial measurement unit (IMU) to improve location measurements by flying the UAV in a sinusoidal or figure-eight pattern after take-off and before landing. We deployed a Trimble R10 GNSS unit as an RTK base station to collect data for PPK location solutions.

2.4. Photogrammetry Processing

Initial processing revealed the presence of distortions; further details are provided in Supplementary Materials. We corrected the positions of both the P4RTK and P4Multi using PPK techniques. This technique uses the DJI GNSS base position, GNSS satellite positional parameters, and the UAV positions obtained in flight to correct the UAV position information to cm-level accuracy.
To process the P4RTK and P4Multi images, we used standard photogrammetry procedures: feature selection, bundle adjustment, point cloud densification, and final rasterization [17,18,35]. We used Agisoft Metashape software [32] to create DEMs from the RGB images, which were used only for the creation of subsequent orthophotos (Figure 3). We used Pix4D software [33] to process the multispectral images, following the predefined workflow provided by the software, with one minor modification: we added a step that calibrated the images using corrections from the downwelling light sensor. Pix4D produced a point cloud and raster for each color band, as well as an NDVI raster calculated from the relevant bands. Rasters were produced at 1.6 cm resolution for the training site and 2 cm resolution for the testing site. These six rasters (five spectral bands plus NDVI) were among the inputs to our random forest classification.
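The NDVI computation itself is simple enough to reproduce outside Pix4D. Below is a minimal Python sketch, assuming the red and near-IR band rasters have been exported to GeoTIFFs (file names are hypothetical placeholders):

```python
import numpy as np
import rasterio

# Read the red and near-IR single-band rasters (placeholder file names).
with rasterio.open("red_band.tif") as red_src, rasterio.open("nir_band.tif") as nir_src:
    red = red_src.read(1).astype("float64")
    nir = nir_src.read(1).astype("float64")
    profile = red_src.profile

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
denom = nir + red
ndvi = np.where(denom != 0, (nir - red) / denom, 0.0)

profile.update(dtype="float64", count=1)
with rasterio.open("ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi, 1)
```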

2.5. LiDAR Processing

PCMaster, proprietary software from the LiDAR vendor, facilitates point cloud creation. It allows for the filtering of unwanted returns, IMU calibration, and post-processing kinematic (PPK) position correction, performing the PPK corrections during data upload. We filtered the initial point cloud to exclude points outside a 70-degree field of view, and also excluded points captured during turns in flight, minimizing errors from UAV tilting. Further cleaning was performed using CloudCompare software [36], which filters noise (e.g., clouds and power lines) and allows for manual point selection and deletion. The final point cloud has a density of 459 points/m².
To segment the point cloud, we used the Cloth Simulation Filter (CSF) plugin provided by CloudCompare [37]. This algorithm divides points into ground and ‘off-ground’ classes by simulating a ‘cloth’ draped over an inverted version of the input point cloud. The parameters we used were rigidness (‘Relief’, 2), cloth resolution (2 m), classification threshold (0.5 m), maximum iterations (500), and slope post-processing (enabled). All values were defaults except for the slope post-processing.
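The same CSF algorithm [37] is also distributed as a standalone Python package, which makes these parameter choices scriptable. The sketch below is an illustrative stand-in for the CloudCompare plugin we actually used, assuming the `CSF` bindings (pip package cloth-simulation-filter) and a placeholder input file:

```python
import laspy
import numpy as np
import CSF  # pip install cloth-simulation-filter

# Load the cleaned point cloud (placeholder file name).
las = laspy.read("cleaned_cloud.las")
xyz = np.vstack((las.x, las.y, las.z)).T

# Configure CSF with the parameter values used in this study.
csf = CSF.CSF()
csf.params.bSloopSmooth = True      # slope post-processing enabled
csf.params.rigidness = 2            # 'Relief' setting
csf.params.cloth_resolution = 2.0   # meters
csf.params.class_threshold = 0.5    # meters
csf.params.interations = 500        # max iterations ('interations' is the package's spelling)

csf.setPointCloud(xyz)
ground_idx, offground_idx = CSF.VecInt(), CSF.VecInt()
csf.do_filtering(ground_idx, offground_idx)

ground_points = xyz[np.array(list(ground_idx))]  # bare-earth returns
```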
After separation, we rasterized the ground points and the original all-points cloud using the lidR and raster packages in R (R Foundation for Statistical Computing, Vienna, Austria). We created a DTM from the ground point cloud and a DSM from the all-points cloud at 10 cm resolution using inverse-distance weighting. From these models, we created canopy height models (CHMs) by subtracting the DTM from the DSM, which normalizes elevations to height above the ground surface. The final DSMs and DTMs were clipped to match the training and testing sites. This LiDAR-derived DTM was added to the spectral bands as input to the machine learning classification detailed in Section 2.8.
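Although we performed this step in R, the gridding logic is compact enough to sketch in Python. The following is a minimal k-nearest-neighbor inverse-distance-weighting stand-in (an assumption, not lidR’s exact algorithm), where `ground_points` comes from the CSF step above and `all_points` is the full cleaned cloud:

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_raster(points, bounds, res=0.10, k=8, power=2.0):
    """Grid a point cloud (N x 3 array) at `res` m using inverse-distance
    weighting of the k nearest points, over a fixed bounding box."""
    xmin, ymin, xmax, ymax = bounds
    gx, gy = np.meshgrid(np.arange(xmin, xmax, res), np.arange(ymin, ymax, res))
    tree = cKDTree(points[:, :2])
    dist, idx = tree.query(np.column_stack([gx.ravel(), gy.ravel()]), k=k)
    dist = np.maximum(dist, 1e-6)                 # avoid division by zero
    w = 1.0 / dist**power
    z = (w * points[idx, 2]).sum(axis=1) / w.sum(axis=1)
    return z.reshape(gy.shape)

# Shared grid extent so the two models are directly comparable.
bounds = (*all_points[:, :2].min(axis=0), *all_points[:, :2].max(axis=0))
dtm = idw_raster(ground_points, bounds)   # bare earth
dsm = idw_raster(all_points, bounds)      # first surface
chm = dsm - dtm                           # canopy height above ground
```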

2.6. Habitat Characterization

We conducted a habitat survey of the training site to better understand the site before creating a training dataset for random forest classification. We identified representative habitat types through photo interpretation followed by ground surveys along transects oriented approximately east–west and north–south across the training site. At each of the 14 survey points, we obtained photographs, identified characteristic plant species, and described notable structural characteristics, e.g., dead branches and substrate type.
There are six characteristic habitat types: mixed hardwood, water, low vegetation, road, sand, and mangrove (Figure 4). The mixed hardwood habitat type is dominated by upland and facultative palms (e.g., Phoenix spp., Sabal palmetto), often with an understory of sea grape (Coccoloba uvifera), and is primarily located along the west border by the paved road. The low vegetation habitat type is common along the side of the road (mowed grasses) and in the interior of the training site, where it is characterized by scattered buttonwood trees (Conocarpus erectus), both alive and dead, hardwood shrubs (e.g., Maytenus phyllanthoides), and a patchy understory composed of salt-tolerant species, such as glasswort (Salicornia ambigua) and seacoast sumpweed (Iva imbricata). Light-colored sands are evident in breaks in the vegetation, alongside the paved road, and on the surface of the unpaved road along the southern edge of the training site (Figure 3). A mangrove forest occupies the saltwater lagoon shoreline along the study site’s eastern border. The mangroves are tree-sized (approximately 5 m in height) and form a dense, almost impenetrable thicket in this protected environment (Figure 4F). Species include red mangrove (Rhizophora mangle), white mangrove (Laguncularia racemosa), and black mangrove (Avicennia germinans).

2.7. Ground-Based Elevation Survey

To assess the accuracy of our UAV-LiDAR DSM and DTM, we compared their elevation values to data we collected during a ground-based survey using a Topcon GTS-249 total station (white points, Figure 5; Topcon Positioning Systems, Inc., Livermore, CA, USA) and to DSMs and DTMs previously derived from crewed-aircraft LiDAR, hereafter aircraft LiDAR.
We conducted the ground-based elevation survey at our training site along the road, traveling from south to north (Figure 5 and Figure 6). Raster elevations were extracted at each ground-based survey location, allowing us to identify and correct any offset in the UAV-LiDAR model.
The aircraft LiDAR DEMs were created from data collected between 2002 and 2015 (Table 2) [38,39,40,41,42]. This multi-year dataset was combined using weighted averages to create the models used here. The DTMs and DSMs were created and provided at 2 m resolution by colleagues at Texas A&M University-Corpus Christi. We refer to these data as the 2015 DTM and 2015 DSM.
We used the 2015 DTM and 2015 DSM to produce a 2015 aircraft LiDAR CHM and compared it with the 2022 UAV-LiDAR CHM at points sampled at 2 m intervals (Figure 5 and Figure 7). The 2 m interval corresponds to the resolution of the 2015 elevation models.
To create a DEM of difference between the 2022 UAV-LiDAR CHM and the 2015 aircraft LiDAR CHM, we subtracted the 2015 CHM values from the 2022 CHM. The DEM of difference illustrates the change in canopy surface elevation between 2015 and 2022.
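A minimal sketch of this differencing step, assuming both CHMs are GeoTIFFs covering the same extent and coordinate reference system (file names are placeholders):

```python
import rasterio
from rasterio.enums import Resampling

# Resample the 10 cm 2022 CHM onto the 2 m grid of the 2015 CHM, then
# difference them. Matching bounds and CRS are assumed.
with rasterio.open("chm_2015_2m.tif") as ref, rasterio.open("chm_2022_10cm.tif") as src:
    chm_2022 = src.read(1, out_shape=(ref.height, ref.width),
                        resampling=Resampling.average)
    chm_2015 = ref.read(1)

dem_of_difference = chm_2022 - chm_2015  # positive values indicate canopy growth
```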

2.8. Machine Learning Habitat Analysis

2.8.1. Random Forest Models

Random forest is an ensemble machine learning algorithm that takes a training set of data across multiple features and fits hundreds of decision trees [43,44]. Classifications are decided by consensus across these decision trees [44]. Random forest has been used extensively as a classifier and regression tool, and comparisons with other ensemble-based and supervised machine learning algorithms show that it is one of the top performers [45,46]. It has also been used for vegetation classification and identification [47,48]. We chose a random forest approach for its ease of use, rapid processing, and ability to handle large datasets with moderate computing power [49,50]. We implemented the random forest method with the Python package scikit-learn [51].

2.8.2. Training and Testing

Our initial model used the five single-band rasters from the multispectral photogrammetry (Table 1), the NDVI raster, and the 2022 UAV-LiDAR DTM, which was resampled to match the 2 cm resolution of the other rasters (Figure 5). We chose the bare earth DTM raster because mangroves and wetlands are sensitive to slight changes in ground elevation, and we fused all rasters into a single multi-band raster for ease of processing.
To create the training data, we defined polygons where sections of land use were known to be uniform based on our initial field site survey and photo interpretation of the orthophoto imagery. Cumulatively, these polygons represent only a small area of the entire training site (Table 3). There were seven land cover classes: the six determined during our initial habitat characterization plus a shadow class (Figure 4). Shadowed surfaces in multispectral and hyperspectral imagery exhibit different spectral signatures, so this class was created to minimize misclassification in dimly lit locations, following [28]. We sampled randomly from the pixels within each polygon to create the training sets, using a pixel-based approach to fully utilize the high-resolution data gathered during the UAV surveys. We gathered two sets of samples for each class, one of 2000 pixels per class and the other of 5000 pixels per class; these sizes were chosen to limit computing time, allowing us to quickly investigate multiple models with different input data sets. Principal component (PC) analysis was performed on each model to assess artifacts in the training process.
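A minimal sketch of the sampling and PC analysis, assuming `stack` is the fused multi-band array (five bands, NDVI, DTM) and `labels` is an integer raster burned from the training polygons (both names are placeholders):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)

def sample_training(stack, labels, n_per_class):
    """Randomly sample n_per_class pixels for each labeled class.
    stack: (n_features, H, W); labels: (H, W), 0 = unlabeled."""
    X, y = [], []
    for cls in np.unique(labels[labels > 0]):
        rows, cols = np.nonzero(labels == cls)
        pick = rng.choice(rows.size, size=n_per_class, replace=False)
        X.append(stack[:, rows[pick], cols[pick]].T)  # (n_per_class, n_features)
        y.append(np.full(n_per_class, cls))
    return np.vstack(X), np.concatenate(y)

X_train, y_train = sample_training(stack, labels, 2000)  # or 5000

# Project the sampled pixels into 3-D PC space to check class separability.
pcs = PCA(n_components=3).fit_transform(X_train)
```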
Before model fitting, 20% of the training data were held back for model validation. Models were fitted with default parameters, except that ‘oob_score’ was set to true to calculate the out-of-bag (OOB) error. We created two models, one for each training set: a 2000-pixel model and a 5000-pixel model. After the initial fit, we performed a grid search optimization, refitting each model with the optimal values of two hyperparameters: the number of decision trees and the number of features considered in each decision tree. Feature importance graphs were used to better understand the final optimized random forest models for each training set. Feature importance was calculated using the mean decrease in impurity method, which scores a feature by how much the splits it provides reduce the mixture of classes in the resulting nodes; the fewer classes in a node, the more “pure” it is, and thus the more discriminatory, or important, the feature.
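Continuing the sketch above, the fitting and tuning steps map directly onto scikit-learn [51]; the grid values shown here are illustrative assumptions rather than the exact grids we searched:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Hold back 20% of the training pixels for validation.
X_fit, X_val, y_fit, y_val = train_test_split(
    X_train, y_train, test_size=0.2, random_state=0, stratify=y_train)

# Default model with the out-of-bag score enabled.
rf = RandomForestClassifier(oob_score=True, random_state=0)
rf.fit(X_fit, y_fit)
print("OOB error:", 1.0 - rf.oob_score_)

# Grid search over the two tuned hyperparameters: number of trees and
# number of features considered at each split (illustrative grids).
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 200, 300, 400],
                "max_features": [2, 3, 4, "sqrt"]},
    cv=5)
grid.fit(X_fit, y_fit)
best_rf = grid.best_estimator_

print("Held-out accuracy:", best_rf.score(X_val, y_val))
# Mean-decrease-in-impurity feature importances.
features = ["blue", "green", "red", "red_edge", "nir", "ndvi", "dtm"]
print(dict(zip(features, best_rf.feature_importances_)))
```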

2.8.3. Classification Map Filtering

We generated the testing site dataset in the same manner as the training site dataset, described in Section 2.3, Section 2.4 and Section 2.5 (Figure 2 and Figure S1). We then classified the testing site with each of the random forest models (the 2000- and 5000-pixel models), resulting in two classified habitat maps, one per model.
Both results contained pixels classified as shadow under the scheme described in Section 2.8.2. To improve information content, we reclassified shadow pixels into one of the other classes based on a nearest-neighbor analysis, applying a moving-window majority filter with a window size of 11 pixels. The majority filter considers the pixels in the 11 × 11 window around each shadow pixel and reclassifies the target pixel to the most common class in that window. This procedure resulted in two additional rasters, which we refer to as the 2000- and 5000-pixel (no shadow) habitat maps.
We then applied a 123-pixel window majority filter to the 2000- and 5000-pixel (no shadow) maps to create a second pair of maps whose effective resolution reflects the approximate size of tree tops at the field site (6 m²). We refer to these as the 2000- and 5000-pixel smoothed habitat maps. These two filtering processes gave us four habitat maps for validation.
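Both filtering steps reduce to a windowed majority vote. A sketch using scipy, assuming `class_map` is the classified raster of integer class codes and the shadow class is coded 7 (both assumptions):

```python
import numpy as np
from scipy import ndimage

SHADOW = 7  # assumed integer code for the shadow class

def majority_ignoring_shadow(window):
    """Most common non-shadow class in the window."""
    vals = window[window != SHADOW].astype(int)
    return np.bincount(vals).argmax() if vals.size else SHADOW

# Step 1: reclassify shadow pixels by an 11 x 11 neighborhood vote.
voted = ndimage.generic_filter(class_map, majority_ignoring_shadow, size=11)
no_shadow = np.where(class_map == SHADOW, voted, class_map)

# Step 2: smooth to roughly tree-top scale with a 123-pixel majority window.
# (generic_filter is slow at this window size; a block-based implementation
# may be preferable for large rasters.)
smoothed = ndimage.generic_filter(
    no_shadow, lambda w: np.bincount(w.astype(int)).argmax(), size=123)
```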

2.8.4. Validation Habitat Characterization

To assess the habitat map accuracy, we conducted ground surveys in the testing site (January 2023 and December 2023) at 54 validation points. Forty-two validation points were pre-selected randomly using functions built into the QGIS software (QGIS Association). Twelve points were added opportunistically in the field to ensure adequate representation of habitat types initially underrepresented in the validation point dataset. At each validation point, we obtained photographs, identified characteristic plant species, and described notable structural characteristics, e.g., dead branches and substrate type.

2.8.5. Validation Metrics

We calculated the user’s accuracy and the producer’s accuracy for each habitat type. Producer’s accuracy is defined as the number of validation points correctly assigned by the model to a particular category divided by the total number of validation points in that category [52]. User’s accuracy is defined as the number of validation points correctly assigned by the model to a particular category divided by the total number of validation points the model assigned to that category [53]. Thus, the producer’s accuracy measures the quality of a classification category, while the user’s accuracy measures how well the classification maps to the real world. Other metrics include balanced accuracy and the kappa coefficient. Balanced accuracy is calculated by computing the accuracy per class and averaging across classes [54]. The kappa coefficient, computed for the whole dataset, measures, on a scale from −1 to 1, how well the classifier performs relative to random chance [55].
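These metrics follow directly from the confusion matrix of the validation points. A sketch with scikit-learn and numpy, assuming `y_true` holds the field-observed habitat classes and `y_pred` the map classes at the 54 validation points:

```python
import numpy as np
from sklearn.metrics import (balanced_accuracy_score, cohen_kappa_score,
                             confusion_matrix)

# Rows of the confusion matrix are true (reference) classes,
# columns are predicted (map) classes.
cm = confusion_matrix(y_true, y_pred)

producers = np.diag(cm) / cm.sum(axis=1)  # correct / total reference points per class
users = np.diag(cm) / cm.sum(axis=0)      # correct / total mapped points per class

print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("Kappa coefficient:", cohen_kappa_score(y_true, y_pred))
```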

3. Results

3.1. Elevation and Canopy Height Analyses

Initial elevation ground-truthing revealed a 1.6 m vertical offset between the points measured during the ground-based total station survey and those of the UAV-LiDAR. This offset arises in the reference frame conversion from WGS84 to NAD83/NAVD88 and is compatible with observations by NOAA’s National Geodetic Survey [56]. After correction, the ground-based survey elevations were plotted against both the corrected UAV-derived elevations and the 2015 DTM elevations (Figure 6). The UAV-LiDAR agrees with the ground-based survey at the centimeter level, with a vertical RMSE of 1.9 cm. The 2015 DTM has slightly larger scatter, with a vertical RMSE of 3.7 cm.
The two LiDAR sources, the 2022 UAV-based and 2015 aircraft-based DEMs, were measured at separate times and hence allow some assessment of change in land use or canopy height. Here the elevation ground-truthing process becomes vital: we observed only minor (cm-level) variation in the unmoving sections (e.g., the paved road) of each dataset, which allows us to separate real changes in the habitat from georeferencing errors. Our DEM of difference (Figure 8A) shows pockets of elevation loss and widespread growth in parts of the tree canopy. We acknowledge that aircraft LiDAR tends to overestimate canopy cover, due in part to scan angle and point density [57,58]; the metadata provided with the aircraft LiDAR source data did not describe these parameters. However, we validated the majority of the elevation loss through field observations, as described in Section 2.5, and through Google Earth imagery. The Google Earth satellite imagery from 2015 depicts a more continuous canopy throughout the field area (Figure 8C). In 2022, a more discontinuous living canopy is evident, with whitish-gray patches of dead vegetation (Figure 8B). Based on our ground-truth observations, we suggest that most of the elevation losses reflect the die-off of individual buttonwoods, while most of the elevation gains reflect canopy growth. Areas maintained by the park, roadways, and adjacent lawns show minimal to no overall growth, as expected.

3.2. Machine Learning Habitat Classification

The results of the principal component (PC) analysis reveal several key points, best seen by plotting the input features in 3-D PC space (Figure 9). When plotting the input pixels by classification within the PC space, we observed that most classes occupied discrete regions. The water class was especially distinct, as expected. The most overlap was observed between the mixed hardwood and mangrove classes (Figure 9). Minor overlaps occurred between adjacent pixels near the margins of the low vegetation and shadow classes; these two classes also lie adjacent to mangrove and mixed hardwood (Figure 9).
Each of the seven model features shows good variability across the PC space. Spectral features group by wavelength proximity: red, green, and blue form one cluster, while near-IR and red-edge form a second. NDVI and the DTM separate clearly from all other groups, indicating their importance for classification.
Initial models (both 2000-pixel and 5000-pixel) showed high accuracies (>95%) on the 20% of data that was held back, and low OOB errors (Table 4). The final grid-search-optimized 2000-pixel model has an accuracy of 98%; the optimized 5000-pixel model was comparable at 97% (Figure 3). Most of the model misfit was between the mixed hardwood and mangrove classes, consistent with the PC analysis. In the 2000-pixel model, the lowest classification accuracy was a tie between the mixed hardwood and mangrove classes at 95%; in the 5000-pixel model, the lowest score was 92%, for the mixed hardwood class. Once the best-fit models were obtained, feature importance was calculated (Figure 10). These results indicate that the DTM and NDVI are the most diagnostic features, while the blue and green bands were the least important.
We applied the 2000- and 5000-pixel models to the testing site to generate habitat classification maps. The classifications from these models produced visually similar results (Figure 11 and Figure 12), although some obvious differences are present west of the road. In the filtered classifications, where each map was down-sampled to approximately tree-top scale (6 m²), the large-scale trends are preserved while noise in the finer-resolution map is smoothed out (Figure 12).
Incorporating more training data (5000 versus 2000 pixels per class) into the model did not increase map accuracy. The accuracy metrics were relatively similar across all four maps, but the map with consistently higher accuracy ratings was the 2000-pixel smoothed habitat map (Table 5). The 2000-pixel smoothed habitat map identified the water and mangrove habitats best, with producer’s accuracies of 100% and 85.7% and user’s accuracies of 100% and 80%, respectively (Table 6). From a producer’s perspective, the mixed hardwood and sand habitats performed worst (54.5% and 71.4%), while from a user’s perspective, the road and low vegetation classes performed worst (50% and 71.4%) (Table 6).

4. Discussion

Our UAV methods provide a range of benefits compared to the 2015 DEMs derived from crewed aircraft, including greatly improved spatial resolution and temporal sampling, at the expense of limited areal coverage. Incorporating LiDAR shows benefits relative to photogrammetric methods, reflecting the latter’s inability to penetrate the canopy [27,59]. Flying low and slow with overlapping scans allowed us to successfully penetrate dense vegetation.
With their higher spatial resolution, UAV surveys can match remotely sensed features with ground surveys (down to the level of individual plants) better than aircraft or satellite surveys, tracking both small and substantial changes between time steps. This is best seen in the DEM of difference, which can be used to detect changes in canopy height while highlighting areas with significant (>1 m) elevation change (Figure 7 and Figure 8A). Colorized maps can be used in the field to connect these topographic changes with local vegetation (Figure 8B,C).

4.1. Assessment of Limiting Features and Pixel-Based Classification

With only two UAV flights, one for the P4Multi and one for the UAV-LiDAR, we captured six model features directly; the seventh, NDVI, was derived from two of the six. Many studies of this type utilize tens to hundreds of features, at least initially, for machine learning-based classifications [12,60,61,62]. These features are usually whittled down by various methods to retain only the most important, typically some combination of spectral indices, positional information, and optical data [8]. Our work reached similar conclusions using only first-order data products, saving time and effort, albeit with some loss of accuracy. Further trade-offs exist when approaching classification from an object-based perspective, which requires more pre-processing and adds the complexity of selecting appropriate variables to delineate each image segment [61,63]. Our methods minimize these complexities and trade-offs, offering a process that is more approachable for park managers and other non-specialists. As UAV lift capacity and sensor miniaturization progress, multiple data types will be collectable on a single flight, but this will increase data-processing complexity while not necessarily increasing utility.

4.2. Model Efficacy

Using just a small fraction of the available data (23.8% of the training area), we achieved accurate classification models with minimal parameter tuning. Both the 2000- and 5000-pixel models achieved this with fewer than 400 estimators, using roughly half of the available features. These models could be made even more efficient by swapping lower-importance features for an additional vegetation index or positional model, such as a DSM, which has been found to increase model accuracy [26].
Both the 2000- and 5000-pixel habitat maps performed well overall. For individual habitats, the 2000-pixel smoothed map performed best. Our results suggest that smoothing reduces edge effects, allowing a consensus of neighboring pixels to define the class. This has been found in at least one other study [63], which noted that post-processing of pixel-based models can provide generalized maps similar to those created using an object-based approach.
Overall, these results show that more pixels in the training data do not necessarily give better results. One explanation could be the difference in feature importances for the 5000-pixel model (Figure 10): this model relies heavily on the NDVI and gives higher importance to the near-IR feature. These differences could result in overfitting of certain classes.
The frequency histogram table (Table 7) allows a quantitative comparison of the habitat maps. We see agreement between habitat maps that share random forest models, suggesting that our smoothing operations did not distort the classification; in some cases, smoothing clearly improved it. This also suggests that an object-based approach would yield similar results, as it operates on larger groups of pixels, much like the smoothed maps. The largest change in frequency is in the low vegetation class: it was assigned to 21.8% of the pixels in the 2000-pixel (no shadow) habitat map but ∼15% in the 5000-pixel (no shadow) map, an approximately 7% reduction (Table 7). In contrast, mixed hardwood’s frequency is ∼8% in the 2000-pixel map and ∼15% in the 5000-pixel maps, a 7% increase. The mangrove classification was unchanged. This suggests that our models were well defined for most classes, with some refinement needed between the low vegetation and mixed hardwood classes. In future studies, this might be accomplished by adding a canopy height parameter.

4.3. Challenges and Future Work

Several limitations of this survey stem from the use of UAVs for data collection. As discussed above, the field area for most UAV surveys is limited by the number and capacity of the available batteries, and the instrumentation carried can further decrease flight time depending on the UAV’s lift capacity. These challenges can be overcome with proper planning. The timing of the photogrammetry surveys is an additional consideration: for both the multispectral and RGB images, flights closer to solar noon produce the best image quality by limiting shadows. Additionally, multispectral images need to be corrected for solar irradiance. The standard method uses some form of calibrated reflectance panel; here, we instead relied solely on the downwelling light sensor integrated into the UAV. This could have introduced some error into our spectral data, possibly causing some of the misclassifications noted above. Future studies using these methods should conduct their respective surveys on the same day or otherwise close in time; here, we used data collected in the same season for consistency. The next step for this research is to expand in size and scope, gathering data over a larger area using a long-endurance UAV, such as a fixed-wing model [64], and gathering data throughout the year to further verify the methodology’s results.

5. Conclusions

The proliferation of hardware and easy-to-use software has simplified geodetic and ecological surveying. UAVs have lowered the cost of data collection, enabling higher-resolution surveys than satellites or high-altitude aircraft, albeit with much less spatial coverage.
Landscape classification based on machine learning has also improved to the point where non-specialists can apply the technique. We used a Python-based machine learning package that provides all the algorithms needed for model generation, data preparation, and analysis. This allowed us to create a streamlined workflow in which models were generated and reviewed in minutes on low-cost workstations.
Combining topographic and multispectral raster layers allowed us to study and characterize a mixed-use subtropical lowland at the mouth of Tampa Bay, Florida. Simple on-site workflows for UAV deployment facilitated rapid and spatially accurate data collection. Several machine learning-based classification models were tested using the random forest algorithm. Despite the small training area, comparison with ground-based validation data indicates that the 2000-pixel smoothed models are 80% accurate. A smoothing process based on a consensus of neighboring pixels improved classification accuracy.
For a coastal wetland site, adding bare earth elevation data to multispectral optical imagery improves classification. With the growing interest in wetland habitats as markers of climate change, detailed mapping will be needed. Our approach offers a cost-effective solution, with sparse parameter input (five spectral bands plus LiDAR-based elevation) and minimal processing. These methods should work in other environments where habitats are partitioned primarily by ground elevation, although this will require additional research.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/drones8030113/s1. The Supplementary Materials describe in further detail the exact steps used in the Materials and Methods section and discuss the difference between RTK and PPK georeferencing results for photogrammetric data. We performed PPK processing using precise orbits from CODE and the RTKLIB software [65,66,67], with data processing functions from [68]. Figure S1: Workflow for processing the LiDAR and photogrammetry surveys to create a multi-band raster; Figure S2: Workflow for the random forest classification; Figure S3: Comparison of modeled DEM derived from RTK and PPK processing with ground truth; Figure S4: UAV position and elevation comparison between RTK and PPK processing with residuals.

Author Contributions

Conceptualization: R.V.A., M.R., R.M. and T.H.D.; methodology: R.V.A., K.C.R., M.R., R.M. and T.H.D.; formal analysis: R.V.A.; funding acquisition: M.R., R.M. and T.H.D.; project administration: R.V.A., M.R. and T.H.D.; resources: M.R., R.M. and T.H.D.; software: R.V.A. and M.R.; validation: R.V.A., K.C.R., M.R., R.M. and T.H.D.; visualization: R.V.A.; supervision: K.C.R., M.R., R.M. and T.H.D.; writing—original draft: R.V.A.; writing—reviewing and editing: R.V.A., K.C.R., M.R., R.M. and T.H.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by a grant from the National Science Foundation (SPOKES project, #1762493, F. Muller-Karger PI); grants to T.H.D. from NASA (80NSSC22K1106), NOAA’s Office of Coast Survey (COMIT Cooperative Agreement NA20NOS4000227), and the USACE Engineer Research and Development Center (Cooperative Agreement W9132V-22-2-0001) for final data analysis and refinement of methodology for expanded applications.

Data Availability Statement

Data are available on request from authors.

Acknowledgments

We would like to thank Texas A&M for supplying us with the aircraft LiDAR, Josephina Reyman for aiding in fieldwork, and Edgar Guerron-Orejuela for assistance in the creation of the flowcharts. We also thank the staff at Ft. DeSoto County Park, particularly Park Manager David Harshbarger, for allowing us access to the field area and giving us permission for UAV operations in the park.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Duarte, C.M.; Losada, I.J.; Hendriks, I.E.; Mazarrasa, I.; Marbà, N. The role of coastal plant communities for climate change mitigation and adaptation. Nat. Clim. Change 2013, 3, 961–968. [Google Scholar] [CrossRef]
  2. Fagherazzi, S.; Anisfeld, S.C.; Blum, L.K.; Long, E.V.; Feagin, R.A.; Fernandes, A.; Kearney, W.S.; Williams, K. Sea level rise and the dynamics of the marsh-upland boundary. Front. Environ. Sci. 2019, 7, 25. [Google Scholar] [CrossRef]
  3. Simard, M.; Fatoyinbo, L.; Smetanka, C.; Rivera-Monroy, V.H.; Castañeda-Moya, E.; Thomas, N.; Stocken, T.V.D. Mangrove canopy height globally related to precipitation, temperature and cyclone frequency. Nat. Geosci. 2019, 12, 40–45. [Google Scholar] [CrossRef]
  4. Feagin, R.A.; Martinez, M.L.; Mendoza-Gonzalez, G. Salt Marsh Zonal Migration and Ecosystem Service Change in Response to Global Sea Level Rise: A Case Study from an Urban Region. Ecol. Soc. 2010, 15, 4. [Google Scholar] [CrossRef]
  5. Rosenzweig, C.; Solecki, W.D.; Romero-Lankao, P.; Mehrotra, S.; Dhakal, S.; Ibrahim, S.A. Climate Change and Cities: Second Assessment Report of the Urban Climate Change Research Network; Cambridge University Press: Cambridge, UK, 2018. [Google Scholar]
  6. Fan, X.; Duan, Q.; Shen, C.; Wu, Y.; Xing, C. Global surface air temperatures in CMIP6: Historical performance and future changes. Environ. Res. Lett. 2020, 15, 104056. [Google Scholar] [CrossRef]
  7. Torio, D.D.; Chmura, G.L. Assessing Coastal Squeeze of Tidal Wetlands. J. Coast. Res. 2013, 29, 1049–1061. [Google Scholar] [CrossRef]
  8. Jafarzadeh, H.; Mahdianpari, M.; Gill, E.W.; Brisco, B.; Mohammadimanesh, F. Remote Sensing and Machine Learning Tools to Support Wetland Monitoring: A Meta-Analysis of Three Decades of Research. Remote Sens. 2022, 14, 6104. [Google Scholar] [CrossRef]
  9. DeLancey, E.R.; Simms, J.F.; Mahdianpari, M.; Brisco, B.; Mahoney, C.; Kariyeva, J. Comparing Deep Learning and Shallow Learning for Large-Scale Wetland Classification in Alberta, Canada. Remote Sens. 2019, 12, 2. [Google Scholar] [CrossRef]
  10. Sun, Z.; Jiang, W.; Ling, Z.; Zhong, S.; Zhang, Z.; Song, J.; Xiao, Z. Using Multisource High-Resolution Remote Sensing Data (2 m) with a Habitat–Tide–Semantic Segmentation Approach for Mangrove Mapping. Remote Sens. 2023, 15, 5271. [Google Scholar] [CrossRef]
  11. Slagter, B.; Tsendbazar, N.E.; Vollrath, A.; Reiche, J. Mapping wetland characteristics using temporally dense Sentinel-1 and Sentinel-2 data: A case study in the St. Lucia wetlands, South Africa. Int. J. Appl. Earth Obs. Geoinf. 2020, 86, 102009. [Google Scholar] [CrossRef]
  12. Chan-Bagot, K.; Herndon, K.E.; Nicolau, A.P.; Martín-Arias, V.; Evans, C.; Parache, H.; Mosely, K.; Narine, Z.; Zutta, B. Integrating SAR, Optical, and Machine Learning for Enhanced Coastal Mangrove Monitoring in Guyana. Remote Sens. 2024, 16, 542. [Google Scholar] [CrossRef]
  13. Gallant, A.L. The challenges of remote monitoring of wetlands. Remote Sens. 2015, 7, 10938–10950. [Google Scholar] [CrossRef]
  14. Krauss, K.W.; Mckee, K.L.; Lovelock, C.E.; Cahoon, D.R.; Saintilan, N.; Reef, R.; Chen, L. How mangrove forests adjust to rising sea level. New Phytol. 2014, 202, 19–34. [Google Scholar] [CrossRef]
  15. Sasmito, S.D.; Murdiyarso, D.; Friess, D.A.; Kurnianto, S. Can mangroves keep pace with contemporary sea level rise? A global data review. Wetl. Ecol. Manag. 2016, 24, 263–278. [Google Scholar] [CrossRef]
  16. McCarthy, M.J.; Radabaugh, K.R.; Moyer, R.P.; Muller-Karger, F.E. Enabling efficient, large-scale high-spatial resolution wetland mapping using satellites. Remote Sens. Environ. 2018, 208, 189–201. [Google Scholar] [CrossRef]
  17. Van Alphen, R.; Rodgers, M.; Dixon, T.H. A Technique-Based Approach to Structure-from-Motion: Applications to Human-Coastal Environments. Master’s Thesis, University of South Florida, Tampa, FL, USA, 2022. [Google Scholar]
  18. Rodgers, M.; Deng, F.; Dixon, T.H.; Glennie, C.L.; James, M.R.; Malservisi, R.; Van Alphen, R.; Xie, S. 2.03—Geodetic Applications to Geomorphology. In Treatise on Geomorphology; Shroder, J.F., Ed.; Academic Press: Oxford, UK, 2022; pp. 34–55. [Google Scholar] [CrossRef]
  19. Yang, B.; Hawthorne, T.L.; Torres, H.; Feinman, M. Using Object-Oriented Classification for Coastal Management in the East Central Coast of Florida: A Quantitative Comparison between UAV, Satellite, and Aerial Data. Drones 2019, 3, 60. [Google Scholar] [CrossRef]
  20. Warfield, A.D.; Leon, J.X. Estimating Mangrove Forest Volume Using Terrestrial Laser Scanning and UAV-Derived Structure-from-Motion. Drones 2019, 3, 32. [Google Scholar] [CrossRef]
  21. Dronova, I.; Kislik, C.; Dinh, Z.; Kelly, M. A review of unoccupied aerial vehicle use in wetland applications: Emerging opportunities in approach, technology, and data. Drones 2021, 5, 45. [Google Scholar] [CrossRef]
  22. Doughty, C.L.; Cavanaugh, K.C. Mapping Coastal Wetland Biomass from High Resolution Unmanned Aerial Vehicle (UAV) Imagery. Remote Sens. 2019, 11, 540. [Google Scholar] [CrossRef]
  23. Jeziorska, J. UAS for Wetland Mapping and Hydrological Modeling. Remote Sens. 2019, 11, 1997. [Google Scholar] [CrossRef]
  24. Suo, C.; McGovern, E.; Gilmer, A. Coastal Dune Vegetation Mapping Using a Multispectral Sensor Mounted on an UAS. Remote Sens. 2019, 11, 1814. [Google Scholar] [CrossRef]
  25. Alvarez-Vanhard, E.; Houet, T.; Mony, C.; Lecoq, L.; Corpetti, T. Can UAVs fill the gap between in situ surveys and satellites for habitat mapping? Remote Sens. Environ. 2020, 243, 111780. [Google Scholar] [CrossRef]
  26. Pricope, N.G.; Minei, A.; Halls, J.N.; Chen, C.; Wang, Y. UAS Hyperspatial LiDAR Data Performance in Delineation and Classification across a Gradient of Wetland Types. Drones 2022, 6, 268. [Google Scholar] [CrossRef]
  27. Sankey, T.; Donager, J.; McVay, J.; Sankey, J.B. UAV lidar and hyperspectral fusion for forest monitoring in the southwestern USA. Remote Sens. Environ. 2017, 195, 30–43. [Google Scholar] [CrossRef]
  28. Ishida, T.; Kurihara, J.; Viray, F.A.; Namuco, S.B.; Paringit, E.C.; Perez, G.J.; Takahashi, Y.; Marciano, J.J. A novel approach for vegetation classification using UAV-based hyperspectral imaging. Comput. Electron. Agric. 2018, 144, 80–85. [Google Scholar] [CrossRef]
  29. Cao, J.; Liu, K.; Zhuo, L.; Liu, L.; Zhu, Y.; Peng, L. Combining UAV-based hyperspectral and LiDAR data for mangrove species classification using the rotation forest algorithm. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102414. [Google Scholar] [CrossRef]
  30. Quan, Y.; Li, M.; Hao, Y.; Liu, J.; Wang, B. Tree species classification in a typical natural secondary forest using UAV-borne LiDAR and hyperspectral data. GISci. Remote Sens. 2023, 60, 2171706. [Google Scholar] [CrossRef]
  31. Wang, D.; Wan, B.; Qiu, P.; Tan, X.; Zhang, Q. Mapping mangrove species using combined UAV-LiDAR and Sentinel-2 data: Feature selection and point density effects. Adv. Space Res. 2022, 69, 1494–1512. [Google Scholar] [CrossRef]
  32. Agisoft LLC. Agisoft Metashape Pro, Version 1.8.2; Agisoft LLC: St. Petersburg, Russia, 2022. [Google Scholar]
  33. Pix4D S.A. Pix4DMapper; Pix4D S.A.: Prilly, Switzerland, 2022. [Google Scholar]
  34. Kriegler, F.J. Preprocessing transformations and their effects on multispectral recognition. In Proceedings of the Sixth International Symposium on Remote Sensing of Environment, Ann Arbor, MI, USA, 13–16 October 1969; pp. 97–131. [Google Scholar]
  35. James, M.R.; Robson, S. Straightforward reconstruction of 3D surfaces and topography with a camera: Accuracy and geoscience application. J. Geophys. Res. Earth Surf. 2012, 117, F03017. [Google Scholar] [CrossRef]
  36. CloudCompare, Version 2.11.1; GPL Software, 2022. Available online: http://www.cloudcompare.org/ (accessed on 1 February 2024).
  37. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
  38. OCM Partners. 2002 Florida USGS/NASA Airborne Lidar Assessment of Coastal Erosion (ALACE) Project for the US Coastline. 2002. Available online: https://www.fisheries.noaa.gov/inport/item/49631 (accessed on 11 February 2024).
  39. OCM Partners. 2007 Florida Division of Emergency Management (FDEM) Lidar Project: Southwest Florida. 2007. Available online: https://www.fisheries.noaa.gov/inport/item/49677 (accessed on 11 February 2024).
  40. OCM Partners. 2006 United States Army Corps of Engineers (USACE) Post Hurricane Wilma Lidar: Hurricane Pass to Big Hickory Pass, FL. 2006. Available online: https://www.fisheries.noaa.gov/inport/item/50059 (accessed on 11 February 2024).
  41. OCM Partners. 2015 USACE NCMP Topobathy Lidar: Florida Gulf Coast. 2015. Available online: https://www.fisheries.noaa.gov/inport/item/49720 (accessed on 11 February 2024).
  42. OCM Partners. 2015 USACE NCMP Topobathy Lidar: Egmont Key (FL). 2015. Available online: https://www.fisheries.noaa.gov/inport/item/49719 (accessed on 11 February 2024).
  43. Dietterich, T.G. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Mach. Learn. 2000, 40, 139–157. [Google Scholar] [CrossRef]
  44. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  45. Caruana, R.; Niculescu-Mizil, A. An empirical comparison of supervised learning algorithms. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 161–168. [Google Scholar]
  46. Thanh Noi, P.; Kappas, M. Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using Sentinel-2 imagery. Sensors 2017, 18, 18. [Google Scholar] [CrossRef]
  47. Burnett, M.; McCauley, D.; Leo, G.A.D.; Micheli, F. Quantifying coconut palm extent on Pacific islands using spectral and textural analysis of very high resolution imagery. Int. J. Remote Sens. 2019, 40, 7329–7355. [Google Scholar] [CrossRef]
  48. Wu, N.; Shi, R.; Zhuo, W.; Zhang, C.; Tao, Z. Identification of native and invasive vegetation communities in a tidal flat wetland using gaofen-1 imagery. Wetlands 2021, 41, 46. [Google Scholar] [CrossRef]
  49. Pal, M. Random forest classifier for remote sensing classification. Int. J. Remote Sens. 2005, 26, 217–222. [Google Scholar] [CrossRef]
  50. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104. [Google Scholar] [CrossRef]
  51. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  52. Story, M.; Congalton, R.G. Accuracy assessment: A user’s perspective. Photogramm. Eng. Remote Sens. 1986, 52, 397–399. [Google Scholar]
  53. Congalton, R.G. Remote sensing and geographic information system data integration: Error sources and research issues. Photogramm. Eng. Remote Sens. 1991, 57, 677–687. [Google Scholar]
  54. Guyon, I.; Bennett, K.; Cawley, G.; Escalante, H.J.; Escalera, S.; Ho, T.K.; Macià, N.; Ray, B.; Saeed, M.; Statnikov, A.; et al. Design of the 2015 chalearn automl challenge. In Proceedings of the 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland, 12–17 July 2015; pp. 1–8. [Google Scholar]
  55. Cohen, A.; Sattler, T.; Pollefeys, M. Merging the Unmatchable: Stitching Visually Disconnected SfM Models. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Killarney, Ireland, 12–17 July 2015; pp. 2129–2137. [Google Scholar] [CrossRef]
  56. Stone, W. 2022 Is Coming—Will You Be Ready? (Or NAD83 and NAVD88 Are Going Away). 2015. Available online: https://www.ngs.noaa.gov/web/science_edu/presentations_library/files/stone_gecowest_2015_for_upload.pdf (accessed on 4 April 2023).
  57. Lovell, J.L.; Jupp, D.L.; Culvenor, D.S.; Coops, N.C. Using airborne and ground-based ranging lidar to measure canopy structure in Australian forests. Can. J. Remote Sens. 2003, 29, 607–622. [Google Scholar] [CrossRef]
  58. Heiskanen, J.; Korhonen, L.; Hietanen, J.; Heikinheimo, V.; Schäfer, E.; Pellikka, P.K.E. Comparison of field and airborne laser scanning based crown cover estimates across land cover types in Kenya. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-7/W3, 409–415. [Google Scholar] [CrossRef]
  59. Wallace, L.; Lucieer, A.; Malenovský, Z.; Turner, D.; Vopěnka, P. Assessment of forest structure using two UAV techniques: A comparison of airborne laser scanning and structure from motion (SfM) point clouds. Forests 2016, 7, 62. [Google Scholar] [CrossRef]
  60. Fu, B.; Wang, Y.; Campbell, A.; Li, Y.; Zhang, B.; Yin, S.; Xing, Z.; Jin, X. Comparison of object-based and pixel-based Random Forest algorithm for wetland vegetation mapping using high spatial resolution GF-1 and SAR data. Ecol. Indic. 2017, 73, 105–117. [Google Scholar] [CrossRef]
  61. Chen, J.; Chen, Z.; Huang, R.; You, H.; Han, X.; Yue, T.; Zhou, G. The Effects of Spatial Resolution and Resampling on the Classification Accuracy of Wetland Vegetation Species and Ground Objects: A Study Based on High Spatial Resolution UAV Images. Drones 2023, 7, 61. [Google Scholar] [CrossRef]
  62. Musungu, K.; Dube, T.; Smit, J.; Shoko, M. Using UAV multispectral photography to discriminate plant species in a seep wetland of the Fynbos Biome. Wetl. Ecol. Manag. 2024. [Google Scholar] [CrossRef]
  63. Duro, D.C.; Franklin, S.E.; Dubé, M.G. A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens. Environ. 2012, 118, 259–272. [Google Scholar] [CrossRef]
  64. Boon, M.A.; Drijfhout, A.P.; Tesfamichael, S. Comparison of a Fixed-wing and Multi-rotor Uav for Environmental Mapping Applications: A Case Study. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W6, 47–54. [Google Scholar] [CrossRef]
  65. RTKLIB: An Open Source Program Package for RTK-GPS. BSD 2-Clause License. Available online: https://gpspp.sakura.ne.jp/rtklib/rtklib.htm (accessed on 1 February 2024).
  66. Dach, R.; Selmke, I.; Villiger, A.; Arnold, D.; Prange, L.; Schaer, S.; Sidorov, D.; Stebler, P.; Jäggi, A.; Hugentobler, U. Review of recent GNSS modelling improvements based on CODEs Repro3 contribution. Adv. Space Res. 2021, 68, 1263–1280. [Google Scholar] [CrossRef]
  67. Lou, Y.; Dai, X.; Gong, X.; Li, C.; Qing, Y.; Liu, Y.; Peng, Y.; Gu, S. A review of real-time multi-GNSS precise orbit determination based on the filter method. Satell. Navig. 2022, 3, 15. [Google Scholar] [CrossRef]
  68. Boehm, J.; Werl, B.; Schuh, H. Troposphere mapping functions for GPS and very long baseline interferometry from European Centre for Medium-Range Weather Forecasts operational analysis data. J. Geophys. Res. 2006, 111, B02406. [Google Scholar] [CrossRef]
Figure 1. Study site and the surrounding landscape. (A): Map of Florida (inset) and the greater Tampa Bay area. The location of the study site is indicated by a red star. (B): Aerial photograph of the study area. The training site is indicated by the solid green box. The testing site is indicated by the dashed blue polygon. Validation survey sites in the testing site are indicated by red points.
Figure 2. Simplified methods workflow emphasizing the steps taken for the random forest classification.
Figure 3. UAV structure-from-motion-derived orthomosaic maps of the testing ((A), to the north) and training ((B), to the south) field sites.
Figure 4. Representative images of habitat types. (A): Mixed hardwood. (B): Water. (C): Low vegetation. (D): Road. (E): Sand. (F): Mangrove (author for scale).
Figure 5. The digital terrain model derived from the 2022 UAV-LiDAR. This model was included as one of the classification features and was used to create the canopy height models. White points correspond to points in Figure 6. Black points correspond to points in Figure 7.
Figure 6. A comparison of the 2022 UAV-LiDAR, 2015 DTM, and ground-based survey elevation data obtained at the point locations indicated by white dots in Figure 5. Black circles are the ground-based survey, red squares are the 2022 UAV-LiDAR, and blue triangles are the 2015 DTM. Vertical RMSE values were computed between the ground-based survey and each of the LiDAR datasets.
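For illustration, vertical RMSE values like those reported in Figure 6 reduce to a short computation over co-located elevation samples. The sketch below is a minimal example, not the authors' processing chain; the elevation arrays are hypothetical stand-ins for values sampled at the white survey points.

```python
import numpy as np

def vertical_rmse(reference, comparison):
    """Root-mean-square error (m) between co-located elevation samples."""
    diff = np.asarray(comparison) - np.asarray(reference)
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical elevations (m) at the white points in Figure 5:
ground_survey = np.array([0.42, 0.55, 0.61, 0.48])   # ground-based survey
uav_lidar_2022 = np.array([0.45, 0.53, 0.64, 0.50])  # 2022 UAV-LiDAR DTM
dtm_2015 = np.array([0.50, 0.60, 0.68, 0.55])        # 2015 DTM

print(f"2022 UAV-LiDAR RMSE: {vertical_rmse(ground_survey, uav_lidar_2022):.3f} m")
print(f"2015 DTM RMSE:       {vertical_rmse(ground_survey, dtm_2015):.3f} m")
```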
Figure 7. A comparison of canopy elevation data obtained in 2022 and in 2015 at the point locations indicated by the black dots in Figure 5. The transect includes points along the paved road (gray bar, left) and within vegetated areas (center and right). East and west are indicated by the letters in the top corners.
Figure 8. Comparison of the DEM of difference with color imagery. (A): DEM of difference between the 2015 aircraft-LiDAR and the 2022 UAV-LiDAR canopy height models, both normalized to the 2020 ground surface. (B): 2020 orthophoto from UAV photogrammetry. (C): Satellite image from 2015 of the training site. Blue circles highlight some areas of negative change. Imagery in (C) taken from Google Earth.
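A DEM of difference like that in panel (A) is, at its core, a per-pixel subtraction of two co-registered canopy height models. The following is a minimal sketch using rasterio; the file names are hypothetical, and both rasters are assumed to share the same grid, datum, and ground-surface normalization.

```python
import rasterio

# Hypothetical file names; both canopy height models are assumed to be
# co-registered on the same grid and normalized to the same ground surface.
with rasterio.open("chm_2015_aircraft.tif") as src15, \
        rasterio.open("chm_2022_uav.tif") as src22:
    chm_2015 = src15.read(1, masked=True)
    chm_2022 = src22.read(1, masked=True)
    profile = src22.profile

dod = chm_2022 - chm_2015          # negative values indicate canopy loss

profile.update(dtype="float32", nodata=-9999.0)
with rasterio.open("dod_2015_2022.tif", "w", **profile) as dst:
    dst.write(dod.filled(-9999.0).astype("float32"), 1)
```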
Figure 9. Three-dimensional plots of the principal component analysis. (A): Comparison of the PC space between mangroves and mixed hardwood. (B): Comparison between shadow and low vegetation. (C): Comparison of all classes. (D): Comparison between mangroves, mixed hardwood, shadow, and low vegetation.
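The panels of Figure 9 visualize training pixels projected onto the first three principal components. A minimal sketch of such a projection with scikit-learn is shown below; X and y are hypothetical stand-ins for the stacked feature values and habitat codes of the training pixels.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-ins: rows are training pixels, columns are the stacked
# classification features; y holds integer habitat codes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 7, size=1000)

# Standardize, then project onto the first three principal components --
# the axes of the 3-D panels in Figure 9.
pca = PCA(n_components=3)
X_pc = pca.fit_transform(StandardScaler().fit_transform(X))
print("Explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
# Scatter X_pc[:, 0..2] colored by y to inspect class separability
# (e.g., mangrove vs. mixed hardwood in panel A).
```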
Figure 10. Feature importance graphs. (A): Feature importance calculated for the 2000-pixel model. (B): Feature importance calculated for the 5000-pixel model.
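Feature importances such as those in Figure 10 are a standard output of a fitted random forest. The sketch below shows the general pattern using scikit-learn's mean-decrease-in-impurity importances; the training data and feature names are hypothetical, not the paper's feature list.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-ins: rows are training pixels, columns are stacked
# classification features; names below are illustrative only.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 8))
y_train = rng.integers(0, 7, size=2000)
feature_names = np.array(["blue", "green", "red", "red_edge",
                          "nir", "ndvi", "dtm", "chm"])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Mean-decrease-in-impurity importances, sorted as in a Figure 10-style plot.
order = np.argsort(clf.feature_importances_)[::-1]
plt.bar(feature_names[order], clf.feature_importances_[order])
plt.ylabel("Feature importance")
plt.tight_layout()
plt.show()
```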
Figure 11. Habitat classification maps with shadow pixels removed (testing site). (A): The 2000-pixel (no shadow) habitat map. (B): The 5000-pixel (no shadow) habitat map.
Figure 12. Smoothed habitat classification maps (testing site). (A): The 2000-pixel smoothed habitat map. (B): The 5000-pixel smoothed habitat map.
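One common way to produce smoothed maps like those in Figure 12 is a majority (modal) filter over the per-pixel classification. The sketch below illustrates the idea with SciPy; the window size and the class raster are hypothetical, and this is one plausible smoothing approach rather than the authors' exact filter.

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority(window: np.ndarray) -> int:
    """Most frequent class code in the moving window."""
    codes, counts = np.unique(window.astype(int), return_counts=True)
    return int(codes[np.argmax(counts)])

# Hypothetical classified raster of integer habitat codes (0-5).
rng = np.random.default_rng(0)
class_map = rng.integers(0, 6, size=(200, 200))

# 3x3 modal filter; generic_filter is simple but slow on large rasters.
smoothed = generic_filter(class_map, majority, size=3)
```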
Table 1. Spectral bands in the P4Multi camera (center wavelength and bandwidth in nm).
Channel | Center Wavelength and Bandwidth (nm)
Blue | 450 ± 16
Green | 560 ± 16
Red | 650 ± 16
RedEdge | 730 ± 16
Near-Infrared (NIR) | 840 ± 26
Table 2. Metadata available for airborne LiDAR source data for 2015 DEMs compared with 2022 UAV-LiDAR.
Acquisition Dates | Horizontal Accuracy | Vertical Accuracy | Scanner | Scanner Wavelength | Scan Angle | Pulse Rate | Points per m²
October 2002 | ±0.8 m ¹ | ±15 cm ¹ | ATM II ¹ | Blue-Green (523 nm) ¹ | 0° ³ | 2–10 kHz ¹ | 0.1–0.2 ³,⁴
May–June 2006 | 80 cm at 2 sigma ¹ | 30 cm at 2 sigma ¹ | CHARTS system (SHOALS-3000 LiDAR) ¹ | NIR (1064 nm) ² | 0° ³ | n/a ⁵ | 0.27 ³
June–August 2007 | ±116 cm ¹ | 9 cm RMSE ¹ | Leica ALS50 ¹ | NIR (1064 nm) ² | 29° ¹ | 75 kHz; 84.4 kHz ¹ | 1.8 ³
May 2015 | 1 m ¹ | 19.6 cm ¹ | CZMIL ¹ | NIR (1064 nm) ² | −21–22° ³,⁴ | 10 kHz ¹ | 1–14 ³,⁴
June 2015 | 1 m RMSE ¹ | 9.5 cm RMSE (topographic data only) ¹ | Leica HawkEye III ¹ | Infrared ² | 13–22° ³ | 300 kHz | 0.1–0.2 ³,⁴
UAV-LiDAR, December 2022 | ±2 cm ² | 2 cm RMSE | Hesai Pandar XT32 | NIR (905 nm) ² | ±30° | 3.5 MHz | 459 ³
¹ Provided by metadata. ² Provided by instrument documentation. ³ Extracted/calculated from files. ⁴ Range based on tiles in the dataset. ⁵ Pulse rate not found in documentation. Scanner manufacturers in the order presented in the table: NASA, Washington, D.C., USA; Teledyne Optech, Toronto, Canada; Leica Geosystems, Heerbrugg, Switzerland; Teledyne Optech, Toronto, Canada; Leica Geosystems, Heerbrugg, Switzerland; Hesai, Shanghai, China.
Table 3. Classification categories with their associated number of polygons for extracting training data and the percentage of training area occupied by the polygons. Polygons were only created where habitats were uniform with little ambiguity.
Habitat | Number of Polygons | % of Training Area
Mixed Hardwood | 9 | 6.6
Water | 1 | 2.4
Low Vegetation | 9 | 1.6
Road | 3 | 4.6
Sand | 8 | 1.1
Mangrove | 7 | 7
Shadow | 22 | 0.5
Total | 59 | 23.8
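Extracting training pixels from digitized polygons like those in Table 3 is typically done by rasterizing the polygons onto the feature grid and sampling where labels are present. A minimal sketch follows; the file names and the class_id attribute are hypothetical.

```python
import geopandas as gpd
import rasterio
from rasterio.features import rasterize

# Hypothetical inputs: a multi-band feature stack and the digitized training
# polygons of Table 3, each polygon carrying an integer "class_id" attribute.
with rasterio.open("feature_stack.tif") as src:
    features = src.read()                            # (bands, rows, cols)
    transform, shape = src.transform, (src.height, src.width)

polygons = gpd.read_file("training_polygons.gpkg")   # same CRS as the raster
labels = rasterize(
    zip(polygons.geometry, polygons["class_id"]),
    out_shape=shape, transform=transform, fill=0, dtype="int32",
)

mask = labels > 0                  # pixels falling inside any training polygon
X_train = features[:, mask].T      # (n_pixels, n_features)
y_train = labels[mask]
```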
Table 4. Results of the random forest grid search and subsequent model fitting analyses performed on the 2000- and 5000-pixel models.
Model | Number of Estimators | Max Features | Out-of-Bag Error | Tuned Model Accuracy (%)
2000-point | 100 | 5 | 0.27 | 98
5000-point | 300 | 4 | 0.2 | 97
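The grid search and out-of-bag (OOB) evaluation summarized in Table 4 follow a standard scikit-learn pattern, sketched below with hypothetical stand-in data and a hypothetical parameter grid; the tabulated values would come from best_params_, the OOB error of a refit model, and the tuned model's accuracy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Hypothetical stand-in training data (see the extraction sketch above).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 8))
y_train = rng.integers(0, 7, size=2000)

# Hypothetical grid; the tuned settings in Table 4 would appear in best_params_.
param_grid = {"n_estimators": [100, 300, 500], "max_features": [3, 4, 5]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X_train, y_train)

# Refit the tuned model with OOB scoring to obtain the out-of-bag error.
tuned = RandomForestClassifier(oob_score=True, random_state=0,
                               **search.best_params_).fit(X_train, y_train)
print("Best parameters:", search.best_params_)
print(f"Out-of-bag error: {1.0 - tuned.oob_score_:.2f}")
print(f"Tuned model accuracy: {100 * search.best_score_:.0f}%")
```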
Table 5. Accuracy metrics for each of the final habitat maps.
Model | Balanced Average (%) | User's Accuracy Average (%) | Producer's Accuracy Average (%) | Kappa
2000-point, no shadows | 75 | 79.3 | 76.2 | 0.70
2000-point, filtered | 78 | 80.8 | 79.2 | 0.70
5000-point, no shadows | 77 | 80.2 | 79.1 | 0.68
5000-point, filtered | 78 | 77.3 | 76.5 | 0.68
Table 6. Accuracy metrics by habitat for the 2000-pixel smoothed habitat map.
Habitat | User's Accuracy (%) | Producer's Accuracy (%)
Mixed Hardwood | 100 | 54.6
Water | 100 | 100
Low Vegetation | 71.4 | 83.3
Road | 50 | 80
Sand | 83.3 | 71.4
Mangrove | 80 | 85.7
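User's and producer's accuracies, balanced accuracy, and kappa (Tables 5 and 6) all derive from the confusion matrix of predicted versus reference labels at the validation points. A minimal sketch with hypothetical labels:

```python
import numpy as np
from sklearn.metrics import (balanced_accuracy_score, cohen_kappa_score,
                             confusion_matrix)

# Hypothetical reference and predicted habitat codes at validation points.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 6, size=300)
y_pred = np.where(rng.random(300) < 0.8, y_true, rng.integers(0, 6, size=300))

cm = confusion_matrix(y_true, y_pred)     # rows: reference, cols: predicted
producers = np.diag(cm) / cm.sum(axis=1)  # complement of omission error
users = np.diag(cm) / cm.sum(axis=0)      # complement of commission error

print("Producer's accuracy (%):", np.round(100 * producers, 1))
print("User's accuracy (%):    ", np.round(100 * users, 1))
print(f"Balanced accuracy (%):   {100 * balanced_accuracy_score(y_true, y_pred):.1f}")
print(f"Kappa:                   {cohen_kappa_score(y_true, y_pred):.2f}")
```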
Table 7. Numerical breakdown of frequency histograms for each habitat map. Values in each column sum to 100%.
Habitat | 2000-Point (%) | 5000-Point (%) | 2000-Point, Filtered (%) | 5000-Point, Filtered (%)
Mixed Hardwood | 8.09 | 14.8 | 7.1 | 14.8
Water | 19.4 | 20.6 | 19.4 | 20.6
Low Vegetation | 21.8 | 14.9 | 22.5 | 14.2
Road | 7.6 | 6.3 | 7.6 | 6.5
Sand | 8.7 | 10 | 8.4 | 10.1
Mangrove | 34.2 | 33.3 | 34.8 | 33.6
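The per-class percentages in Table 7 amount to pixel counts over each classified map. A short sketch with a hypothetical class raster:

```python
import numpy as np

# Hypothetical classified raster of integer habitat codes (Table 7 order).
rng = np.random.default_rng(0)
habitat_map = rng.integers(0, 6, size=(500, 500))

codes, counts = np.unique(habitat_map, return_counts=True)
for code, pct in zip(codes, 100 * counts / counts.sum()):
    print(f"class {code}: {pct:.1f}% of mapped area")
```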
