In this section, we first assess the capability of UAV LiDAR for mapping coastal environments using the Dana Island datasets. The quality of the LiDAR point clouds is evaluated by examining the alignment between point clouds reconstructed from different flight lines. Qualitative and quantitative comparisons between LiDAR and image-based point clouds are carried out to show the level of information acquired by the two techniques. Then, we analyze the shoreline change at Dune Acres and Beverly Shores using UAV LiDAR data.
6.2. Comparative Quality Assessment of LiDAR and Image-Based Point Clouds
In order to evaluate the overall quality of UAV LiDAR, a comparison against the image-based point cloud was performed in terms of spatial coverage, point density, horizontal and vertical alignment at selected profiles, and overall elevation differences. Throughout all missions, the LiDAR and imagery data were collected simultaneously. Therefore, flight configuration and morphological changes can be eliminated as potential factors behind any discrepancy patterns between the two modalities. Prior research has verified the absolute accuracy of image-based point clouds. More specifically, the UAV system used in this study is similar to the systems used in He et al. [48], where the absolute horizontal and vertical accuracies of image-based point clouds were reported to be ±1–3 cm and ±2–4 cm, respectively. Therefore, the comparison in this study focuses on the relative differences between the LiDAR and image-based point clouds.
Point cloud reconstruction revealed that the LiDAR point cloud covers a larger area and has a higher point density than the image-based point cloud. Figure 4 presents the LiDAR and image-based point clouds over the area of interest. The numbers of points in the LiDAR and image-based point clouds are 478,313,717 and 131,178,714, respectively. The area covered by the LiDAR point cloud is 0.23 km², which is substantially larger than the 0.12 km² spatial coverage of the image-based point cloud. The spatial coverage of the LiDAR point cloud depends on the reconstruction parameters. As the across-scan FOV increases, the spatial coverage increases; however, the precision of the point cloud at the edges of the swath might degrade, depending on the accuracy of the system calibration, and will also be impacted by the inherent accuracy of the sensors being used. Therefore, one should choose the reconstruction parameters based on the application of interest, which in turn dictates the desired accuracy of the data. In this study, a relatively wide across-scan FOV (±70°) is chosen, and as discussed in Section 6.1, the alignment between strips is within the noise level of the point cloud even at the edge of the swath, which again demonstrates the strength of the system calibration.
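The trade-off between across-scan FOV and spatial coverage can be sketched numerically. Assuming flat terrain and a nadir-pointing scanner (a simplification of the actual survey geometry), the swath width follows from the flying height and the half-FOV:

```python
import math

def swath_width(flying_height_m: float, half_fov_deg: float) -> float:
    """Ground swath width of a nadir-looking laser scanner over flat terrain."""
    return 2.0 * flying_height_m * math.tan(math.radians(half_fov_deg))

# With the +/-70 degree across-scan FOV used in this study:
w_low = swath_width(50.0, 70.0)   # roughly 275 m at a 50 m flying height
w_high = swath_width(90.0, 70.0)  # roughly 495 m at a 90 m flying height
```

Widening the FOV therefore grows the coverage rapidly, but points near the swath edges travel the longest ranges and are the most sensitive to calibration errors, which is why the strip-alignment check at the swath edges matters.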
The point density maps for the LiDAR and image-based point clouds (displayed in Figure 7) suggest that, in addition to having a higher point density, the LiDAR point cloud is distributed more uniformly over the terrain. Comparing Figure 7a,b, we can see that the LiDAR point cloud has similar coverage on the ground surface and vegetation, while the image-based point cloud mostly covers the ground surface and is relatively sparse over the vegetation. The point density of the LiDAR point cloud depends mostly on technical factors, including flying speed, sensor-to-object distance, across-scan FOV, sidelap percentage between adjacent strips, and laser pulse repetition rate. While the laser pulse repetition rate remained constant throughout the field surveys, the impact of the other factors can be seen in Figure 7a. The flying speed affects the point density along the flight lines. In our field survey, the UAV mostly flew at a constant speed, which guarantees a relatively uniform point density along the flying direction. The dark strips across the flying direction, i.e., high point density areas, in mission 2 in particular, appear to be where the UAV slowed down for turning. The sidelap percentage between adjacent strips, which affects the point density across the flight lines, depends on i) the lateral distance between adjacent flight lines, ii) the across-scan FOV, and iii) the sensor-to-object distance, which is defined by the flying height. In Figure 7a, mission 4 has the highest point density because it has two different flying heights and thus more flight lines within the surveyed area. In mission 5, the flying height is approximately 50 m on the west side and gradually increases to 90 m on the east side, resulting in a slightly higher point density over the shore on the west side. Figure 7b, in contrast, reveals that environmental factors have a crucial impact on the point density of the image-based point cloud. As can be seen in Figure 7b, image-based 3D reconstruction yields poor results over vegetated surfaces. In particular, only sparse points can be found near the tops of tree crowns, and sizeable gaps are present near the sides of tree crowns. This finding is consistent with the results obtained in previous studies [54]. According to previous work in this area [54], near-nadir-oriented images can only reconstruct a sparse point cloud on top of the tree crowns, whereas oblique and horizontal-view images have to be included in order to generate a realistic representation of trees. Although for coastal monitoring we focus on the ground surface, and thus oblique images are not necessary, it is evident that dense vegetation precludes the reconstruction of the occluded ground surface.
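The technical factors listed above combine into a simple first-order estimate of single-pass LiDAR point density. The pulse rate and swath width below are hypothetical illustrative values, not specifications of the sensor used in this study:

```python
def mean_point_density(pulse_rate_hz: float, speed_m_s: float,
                       swath_m: float) -> float:
    """First-order single-pass point density (points/m^2): pulses emitted
    per second spread over the ground area swept per second."""
    return pulse_rate_hz / (speed_m_s * swath_m)

# Hypothetical values: 300 kHz pulse rate, 5 m/s flying speed, 275 m swath
d_cruise = mean_point_density(300_000, 5.0, 275.0)  # ~218 points/m^2
d_turn = mean_point_density(300_000, 1.0, 275.0)    # ~1091 points/m^2 at 1 m/s
```

Slowing down multiplies the density proportionally, which is consistent with the dark high-density strips in Figure 7a coinciding with the turns; sidelap between adjacent strips then adds roughly proportionally on top of this single-pass figure.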
Prior to investigating the alignment between the LiDAR and image-based profiles, which is essentially a 3D-to-3D comparison, we inspected the point cloud to image geo-referencing as a 2D-to-3D qualitative quality control. As mentioned earlier, the data acquired by the LiDAR unit and camera are georeferenced to a common mapping frame defined by the GNSS/INS trajectory and the mounting parameters estimated through the system calibration procedure. The integration of data derived from different sensors is not only a key strength of the mobile mapping system used, but also provides a way of assessing the quality of the GNSS/INS trajectory and system calibration parameters. Given a point in the point cloud, we can locate it on the orthophoto and, more importantly, backproject it onto the images that captured the point. The quality of the backprojection indicates the accuracy level of the system calibration together with the GNSS/INS trajectory. Figure 8 shows the qualitative quality control result in an area with some ruins on Dana Island. Here, an intersection of walls is selected in the LiDAR point cloud, shown on the orthophoto, and backprojected onto the image that captured the point from the closest position. The correct location of the resultant 2D point on the orthophoto and from the backprojection validates the quality of the GNSS/INS trajectory, as well as the mounting parameters from the system calibration. The registration quality also validates the accuracy of the estimated IOPs of the camera.
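The backprojection check can be sketched with a minimal pinhole model. The actual system additionally applies the boresight/lever-arm mounting parameters and the camera IOPs (including lens distortion), all of which are omitted here, and the numbers below are hypothetical:

```python
import numpy as np

def backproject(point_map: np.ndarray, cam_pos: np.ndarray,
                r_map_to_cam: np.ndarray, focal_px: float,
                principal_pt: tuple) -> np.ndarray:
    """Project a mapping-frame 3D point into pixel coordinates with a
    distortion-free pinhole model."""
    p_cam = r_map_to_cam @ (point_map - cam_pos)
    u = principal_pt[0] + focal_px * p_cam[0] / p_cam[2]
    v = principal_pt[1] + focal_px * p_cam[1] / p_cam[2]
    return np.array([u, v])

# Hypothetical example: a camera 50 m directly above a wall corner,
# looking straight down (the rotation flips Z so depth is positive)
pixel = backproject(np.array([10.0, 20.0, 0.0]),
                    np.array([10.0, 20.0, 50.0]),
                    np.diag([1.0, 1.0, -1.0]),
                    focal_px=4000.0, principal_pt=(2736.0, 1824.0))
```

If the projected pixel lands on the same wall corner in the image, the trajectory, mounting parameters, and IOPs are mutually consistent; a systematic offset would flag an error in one of them.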
Profiles along the X and Y directions (referenced to the mapping frame) were extracted every 50 m to evaluate the horizontal and vertical alignment between the LiDAR and image-based point clouds. Figure 9a depicts the profile locations for flight mission 1, where Px1–Px4 are profiles along the X direction and Py1–Py4 are profiles along the Y direction. Figure 9b displays the side view of profiles Px2 and Py2. Table 3 lists the shift and rotation between the LiDAR and image-based profiles for all flight missions (using the image-based profiles as reference), where Px1–Px22 are profiles along the X direction and Py1–Py22 are profiles along the Y direction. Despite the considerable difference in spatial coverage and point density, the profiles illustrate good alignment between the LiDAR and image-based point clouds. The overall discrepancies along the X and Y directions are −0.2 cm and 0.1 cm, respectively. A larger but still acceptable discrepancy is observed along the Z direction, where the shift ranges from −8.9 cm to 8.6 cm, with an average of 0.2 cm. The rotations around the X and Y axes are −0.0051 degrees and 0.0050 degrees, respectively, which translate to a 0.4 cm shift at a distance of 50 m. The example profiles displayed in Figure 9b further demonstrate the potential of UAV LiDAR in mapping coastal environments. Compared to the image-based point cloud, the LiDAR point cloud is significantly denser and captures more details of the terrain. While the image-based approach can only reconstruct points on the visible surface, LiDAR shows an ability to penetrate the vegetation and capture points below the canopy, which facilitates reliable digital terrain model (DTM) generation.
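The conversion of the reported profile rotations into an equivalent shift at the profile extent is a small-angle computation:

```python
import math

def rotation_induced_shift_cm(angle_deg: float, distance_m: float) -> float:
    """Lateral displacement (in cm) that a small rotation produces at a
    given distance from the rotation axis."""
    return distance_m * math.tan(math.radians(angle_deg)) * 100.0

# A 0.0051 degree rotation at 50 m gives ~0.45 cm, i.e. ~0.4 cm
shift = rotation_induced_shift_cm(0.0051, 50.0)
```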
The elevation differences between the LiDAR and image-based point clouds were calculated to evaluate the overall discrepancy between the two point clouds and to investigate any spatial patterns. The area where the comparison was carried out contains full swaths of LiDAR data with an overlap of up to 80% between adjacent LiDAR strips. As mentioned previously, misalignment between overlapping LiDAR strips is most likely to occur at the edges of the swath, where the precision is lower. Since the area covered by the photogrammetric data contains full swaths and overlapping strips, it has similar characteristics, from a LiDAR point of view, to the whole area (including the part not covered by the corresponding images). Therefore, any disagreement between the LiDAR and image-based point clouds will be reflected in the results obtained by comparing the overlapping areas from the two modalities, which can be considered representative of all possible misalignments over the entire area captured by either sensor. Figure 10 illustrates the elevation difference map along with the histogram at Dana Island. Based on the map and the histogram, the majority of elevation differences are close to zero, with an average of 0.020 m and a standard deviation of ±0.065 m. Looking into spatial discrepancy patterns, large elevation differences in missions 1, 3, and 5 are distributed over the entire site rather than being concentrated in specific locations. In contrast, large elevation differences in missions 2 and 4 are concentrated in areas where the UAV slowed down or stopped for turning, which points to poor image reconstruction due to narrow baselines between camera locations. In the field survey, different UAV flight configurations were applied: in missions 2 and 4, the UAV slowed down when changing direction and came to a full stop in order to make turns, while in missions 1, 3, and 5 the UAV maintained an essentially constant speed. Figure 11 shows the camera locations in missions 1 and 2, where we can see the short distances between camera locations at the turns and at the two ends of the flight lines. Such narrow baselines lead to poor image reconstruction results, as reported in a previous study [56]. To investigate the elevation differences in different geomorphic environments, we used the orthophoto as a reference and manually divided the study site into zone 1 (intertidal area and dry beach with no vegetation) and zone 2 (steep slope with dense vegetation). Table 4 lists the mean and standard deviation of the elevation differences in each zone for each flight mission; note that flight mission 2 only captured data in the vegetated area. Based on Table 4, the LiDAR and image-based point clouds are generally well aligned across the different environments and flight missions. The means of the elevation differences range from ±0.6 cm to ±3.1 cm, and the standard deviations from ±4.5 cm to ±10.1 cm. While zone 2 has a larger mean elevation difference than zone 1 in missions 1 and 3, the means and standard deviations of the two zones are similar in missions 4 and 5. For missions 2 and 4, which suffer from the narrow baseline problem, larger standard deviations are observed, suggesting that flight configuration can have a major impact. The analyses of the elevation differences reveal that both environmental and technical factors influence the quality of the image-based point cloud. Overall, the point clouds generated by the two techniques are compatible within a 5 to 10 cm range.
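The per-zone statistics in Table 4 amount to grouping the per-point elevation differences by zone label. A minimal sketch with synthetic numbers (not the survey data):

```python
import statistics
from collections import defaultdict

def zonal_stats(dz_m, zones):
    """Mean and standard deviation of elevation differences (m) per zone."""
    groups = defaultdict(list)
    for dz, zone in zip(dz_m, zones):
        groups[zone].append(dz)
    return {z: (statistics.mean(v), statistics.stdev(v))
            for z, v in groups.items()}

# Synthetic differences (m): zone 1 = bare beach, zone 2 = vegetated slope
result = zonal_stats([0.01, 0.03, -0.02, 0.08, 0.10, 0.04],
                     ["zone1", "zone1", "zone1", "zone2", "zone2", "zone2"])
```

In practice, the zone labels come from the manual delineation on the orthophoto, and the differences from gridded LiDAR-minus-image elevations.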
6.3. Shoreline Change Estimation
Prior to quantifying shoreline change, we first assessed the agreement between the LiDAR datasets collected on different dates by manually extracting a portion of the point cloud capturing a structure (building, street light, etc.) that presumably remained stationary between subsequent data collections, and checking the alignment across multiple dates. Figure 12a shows the point cloud alignment over a building at Dune Acres between 17 May 2018, 7 November 2018, 5 December 2018, and 10 May 2019. We performed plane fitting over a planar segment on the roof for the combined dataset. The root-mean-square error (RMSE) of the normal distances of the roof points from the best-fitting plane is 0.05 m, indicating that all datasets agree within a 0.05 m range. Figure 12b shows the point cloud alignment over a wooden stair at Beverly Shores as captured on 19 November 2018, 5 December 2018, and 10 May 2019. The plane-fitting RMSE over a planar segment on the wooden stair for the combined dataset is 0.03 m, suggesting that the three datasets are compatible within a 0.03 m range. These results show that the point clouds collected on different dates are consistent in elevation. Moreover, the reported RMSE values support our prediction that the precision of the derived point cloud is within ±0.10 m.
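The plane-fitting check can be reproduced with a standard least-squares fit. The sketch below uses synthetic roof points with 1 cm noise rather than the survey data:

```python
import numpy as np

def plane_fit_rmse(points) -> float:
    """Fit a least-squares plane through 3D points (via SVD of the centered
    coordinates) and return the RMSE of their normal distances to it."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The plane normal is the singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    distances = centered @ normal
    return float(np.sqrt(np.mean(distances ** 2)))

# Synthetic planar roof segment z = 0.1x + 0.2y with ~1 cm Gaussian noise
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(200, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + rng.normal(0, 0.01, 200)
rmse = plane_fit_rmse(np.column_stack([xy, z]))  # close to the 0.01 m noise level
```

Applied to the combined multi-date point cloud of a stationary roof or stair segment, this RMSE bounds the inter-date elevation disagreement, which is how the 0.05 m and 0.03 m figures above were obtained.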
The UAV surveys at Dune Acres revealed substantial shoreline erosion both over the one-year period and from the two storms between the 7 November and 5 December surveys. Figure 13a shows the DSM of Dune Acres (using the 10 May 2019 dataset as an example) and the 45 cross-shore transect locations. The foredune can be clearly seen in the DSM as a sudden change in elevation. Figure 13b depicts the corresponding orthophoto, from which the coastline can be easily identified. Comparing Figure 13a,b, we can see that the orthophoto has a much smaller spatial coverage, as it is limited by the ground coverage of the images. The elevation change maps over the entire survey period and the two storm periods are presented in Figure 14. According to the change maps, most of the shoreline erosion was concentrated at the tall face of the foredune, which was marked as the region of interest (ROI) for the volumetric loss calculation. Four sample transects where the erosion was concentrated are selected and shown in Figure 15, in which the transects and ridge points at different epochs can be clearly seen. These transects capture the variability in erosion patterns across the beach. The water levels at the time of the field surveys were obtained by averaging the NOAA gauge measurements within the flight time (there were typically three measurements within a 15-minute flight, since the gauge takes a measurement every six minutes). These water levels are depicted in Figure 15 to indicate the ends of the transects. The ridge points (orange diamonds in Figure 14) were manually identified and their coordinates were recorded. Table 5 lists the foredune ridge recession amounts, the eroded volume per unit length of shoreline, and the total eroded volume. The foredune ridge recession amounts are between 2 m and 9 m over the entire survey period, and between 0 m and 6 m within the storm-induced period (November 2018 to December 2018). The total volume of eroded sand within the one-year period is 3998.6 m³, equating to an average volume loss of 18.2 cubic meters per meter of beach shoreline (the length of the ROI is 220 m). For the storm-induced period, the volume change is 2673.9 m³, equating to an average volume loss of 12.2 cubic meters per meter of beach shoreline.
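The per-meter volume-loss figures follow directly from the total eroded volume within the ROI. A minimal sketch of the computation on an elevation change grid (toy values, not the survey grids):

```python
def eroded_volume(dz_grid, cell_area_m2: float, roi_length_m: float):
    """Total eroded volume (negative elevation changes times cell area)
    and the average volume loss per meter of shoreline."""
    total = sum(-dz * cell_area_m2
                for row in dz_grid for dz in row if dz < 0)
    return total, total / roi_length_m

# Toy 2x2 elevation change grid (m) on 1 m^2 cells, 2 m stretch of shoreline
total_m3, per_m = eroded_volume([[-0.5, -0.5], [0.0, 1.0]],
                                cell_area_m2=1.0, roi_length_m=2.0)

# The reported Dune Acres figures are consistent with this normalization:
# 3998.6 m^3 / 220 m ~= 18.2 m^3 per meter of shoreline
```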
The surveys at Beverly Shores showed a smaller but still significant magnitude of storm-induced erosion over the one-month period between the November and December surveys. Figure 16 shows the DSM (using the 10 May 2019 dataset as an example), along with the locations of the 35 cross-shore transects, and the orthophoto. Similar to Dune Acres, the DSM provides precise information over the beach and foredune, and the coastline can be identified in the orthophoto. Figure 17 depicts the elevation change maps over the entire survey period and the storm-induced period. Figure 18 shows four sample transects, demonstrating the erosion patterns across the beach. The foredune ridge recession amounts, eroded volume per meter of shoreline, and total eroded volume at Beverly Shores are shown in Table 6. Erosion at Beverly Shores was also concentrated at the steep face of the foredune, causing foredune ridge recession magnitudes in the range of 0 m to 4 m and a total volume loss of 938.4 m³ (equating to an average volume loss of 2.8 cubic meters per meter of beach shoreline, as the length of the ROI is 340 m) over the entire survey period (November 2018 to May 2019). Between the November and December surveys, the total volume of sand eroded by the storms was approximately 883.8 m³ over the 340 m length of beach surveyed, equating to an average volume loss of 2.6 cubic meters per meter of beach shoreline.