Article

Identifying Geomorphological Changes of Coastal Cliffs through Point Cloud Registration from UAV Images

by
Xiangxiong Kong
Department of Physics and Engineering Science, Coastal Carolina University, P.O. Box 261954, Conway, SC 29528-6054, USA
Formerly: Division of Civil Engineering, School of Engineering, University of Guam, UOG Station, Mangilao, GU 96913, USA.
Remote Sens. 2021, 13(16), 3152; https://doi.org/10.3390/rs13163152
Submission received: 6 July 2021 / Revised: 3 August 2021 / Accepted: 5 August 2021 / Published: 9 August 2021
(This article belongs to the Special Issue Remote Sensing Application in Coastal Geomorphology and Processes)

Abstract

Cliff monitoring is essential to stakeholders for their decision-making in maintaining a healthy coastal environment. Recently, photogrammetry-based technology has shown great success in cliff monitoring. However, many methods to date require georeferencing efforts, either by measuring the geographic coordinates of ground control points (GCPs) or by using global navigation satellite system (GNSS)-enabled unmanned aerial vehicles (UAVs), which significantly increases implementation costs. In this study, we proposed an alternative cliff monitoring methodology that does not rely on any georeferencing efforts but can still yield reliable monitoring results. To this end, we treated 3D point clouds of the cliff from different periods as geometric datasets and aligned them into the same coordinate system using a rigid registration protocol. We examined the performance of our approach through a series of small-scale experiments on a rock sample as well as a full-scale field validation on a coastal cliff. The findings of this study would be particularly valuable for underserved coastal communities, where high-end GPS devices and GIS specialists may not be easily accessible resources.

1. Introduction

Monitoring coastal cliffs is essential for maintaining a healthy coastal ecosystem and is particularly crucial for the island of Guam. The largest island in the Marianas Chain in the Western Pacific, Guam has a coastline of 125.5 km, 59% of which is rocky coastline characterized by steep cliffs and uplifted limestone terraces [1]. Due to the actions of the sea, strong winds, ground motions, and water surges [2], coastal cliffs are prone to erosion. For example, Typhoon Halong struck Guam in 2002 and eroded the southeast shorelines, and the 1993 Guam earthquake (magnitude 7.8) caused slides in coastal cliffs throughout the island [3]. Other natural factors, such as seasonal changes in rock thermal stress and cliff vegetation, can also influence cliff stability and cause geological hazards.
Cliff erosion can deposit sediment on coastal reefs and weaken the integrity of the local coastal ecosystem. One engineering approach to this concern is to monitor the erosion process using advanced technologies and deliver the results to stakeholders, who can then make timely coastal-zone management decisions. Traditionally, cartographic geological mapping [4,5] has been the most popular method for surveying coastal erosion. However, this method is labor-intensive and prone to error due to mapping inaccuracy [6]. In addition, field deployments at inaccessible locations can be challenging and time-consuming. As such, terrestrial laser scanning (TLS)-based technology [7,8] has received increasing attention in coastal surveying because it offers a non-contact and accurate solution by creating dense 3D point clouds of coastal areas. Nevertheless, laser scanners are costly and, because of their weight, inconvenient to deploy in the field. In addition, the scanner must be operated by trained technicians. Hence, routine field cliff monitoring may not be easily achievable.
Recently, photogrammetry-based methods have shown great success in coastal monitoring. Using computer vision algorithms, 2D images of coastal cliffs can be processed to create dense 3D point clouds, and with unmanned aerial vehicles (UAVs), images of inaccessible coastal areas can be obtained. For example, Westoby et al. [6] created a 3D model of a coastal cliff in northeast England using a photogrammetry-based workflow and concluded that the discrepancy between the photogrammetry model and one obtained by TLS was less than 0.04 m. Letortu et al. [7] performed a similar investigation comparing datasets from TLS, terrestrial photogrammetry, and UAV photogrammetry at a testbed in France. Hayakawa and Obanawa [9] applied UAVs and photogrammetry to detect volumetric changes of a coastal cliff on Suzume-Jima Island in Japan. The authors of [10,11,12] review recent work on photogrammetry-based remote sensing methods in coastal mapping.
A commonality of the above work is that georeferencing is required. This allows point clouds collected at different periods to be aligned into the same geographic coordinate system so that the geomorphological changes of the cliff can be identified. To date, the most dominant georeferencing approach is based on ground control points (GCPs). GCPs are large artificial targets that, ideally, should be deployed evenly across the cliff area. The georeferencing coordinates of each GCP are then measured by global positioning system (GPS) devices (e.g., total stations [7] or global navigation satellite systems (GNSSs) [13,14]). GCP-based georeferencing has several limitations. First, many locations in a cliff area may be inaccessible for deploying and measuring GCPs; a steep cliff face, for instance, would be dangerous for GCP deployment due to rock falls [7]. Second, GPS measurements of GCPs usually require errors of less than a few millimeters or centimeters, demanding high-cost GPS measuring devices (e.g., more than $10,000). Lastly, geographic information system (GIS) specialists must be hired for field GPS data collection and processing, adding further cost to the project.
Some researchers have proposed direct georeferencing methods that do not use GCPs. The idea is to mount customized GNSS modules directly on portable digital cameras [15,16] or onboard UAV cameras [17,18]. The geotagged photos collected in the field can then support 3D mapping of a coastal cliff with accurate geographic coordinates. Direct georeferencing can map inaccessible areas of the cliff where deploying GCPs would be dangerous. However, it is still costly and requires extensive work on GNSS integration [17]. Most recently, efforts have turned to professional UAVs equipped with off-the-shelf onboard GNSS receivers (e.g., the DJI Phantom 4 real-time kinematic (RTK) [19,20]). Nevertheless, the elevation accuracy of the DJI Phantom 4 RTK can be problematic [21].
In this manuscript, we present a non-georeferenced approach for coastal cliff monitoring. We apply a photogrammetry workflow to reconstruct dense 3D point clouds of the cliff from different periods. The point clouds are then aligned into the same coordinate system through a rigid registration protocol so that the geomorphological changes of the cliff can be identified. We examine the performance of our approach through a series of small-scale experiments on a rock sample under different lighting conditions and surface textures, followed by a full-scale field validation on a coastal cliff in Guam. Thereafter, we discuss the results of both the small-scale and full-scale validations.
The major contribution of this study is an alternative cliff monitoring methodology that does not rely on any georeferencing efforts but can still yield reliable monitoring results. Much previous research has focused on cliff monitoring using GCPs and/or GNSS-enabled UAVs. These georeferencing-based methods can be inflexible due to the high cost of hiring GIS specialists and/or purchasing expensive hardware. In contrast, our approach relies solely on computational algorithms to align point clouds and uncover the geomorphological changes caused by cliff erosion, significantly reducing the implementation cost. Although point cloud processing techniques such as the iterative closest point (ICP) algorithm [22,23] have previously been investigated in cliff monitoring [24,25,26,27,28], the ICP algorithm in those studies served only as a supplemental tool for improving point cloud alignment accuracy within a georeferencing framework. To the best of the author's knowledge, no prior work has developed a completely non-georeferenced cliff monitoring methodology. The findings of this study would be particularly valuable for Guam and other underserved coastal communities, where high-end GPS devices and trained GIS professionals may not be easily accessible resources.
The rest of this manuscript is organized as follows: Section 2 illustrates the research methodology and explains the technical details; Section 3 demonstrates the soundness of the proposed method through a series of small-scale experiments; Section 4 validates the method using a full-scale coastal cliff; Section 5 further discusses applicability and limitations of our method; and Section 6 concludes the study.

2. Methodology

The research methodology, illustrated in Figure 1, contains three major components: (a) image collection, (b) point cloud reconstruction, and (c) point cloud registration. Our method starts with image collection of the cliff using UAVs. The UAV images are then processed by a series of computer vision algorithms, termed structure-from-motion with multi-view stereo (SfM-MVS), to reconstruct a point cloud of the cliff. A new point cloud of the cliff is obtained using the same procedure after the second field visit. Thereafter, the two point clouds are aligned into the same coordinate system through a rigid registration protocol, which comprises a few computational algorithms for point cloud alignment. Finally, the differential changes between the two well-aligned point clouds are extracted by computing the cloud-to-cloud distance, identifying the geomorphological changes of the cliff. Each component of the research methodology is further explained in the rest of this section.

2.1. Image Collection

A large volume of digital images of the target cliff is collected using UAVs (see Figure 1a). Many consumer-grade UAVs can fill this role. The flight routes and camera parameters (e.g., ISO, shutter speed, image resolution, and shooting interval) can be predefined through built-in flight operation apps. The UAV images are intended to cover the cliff from different camera positions and angles. Adjacent images must overlap sufficiently for matching feature points, as explained in Section 2.2.

2.2. Point Cloud Reconstruction

The UAV images are processed by SfM-MVS to create a 3D point cloud of the cliff (see Figure 1b). SfM-MVS is a well-established photogrammetry workflow that has been widely applied to coastal surveying [29], civil infrastructure inspection [30], river bathymetry extraction [31], and historic building preservation [32]. To this end, feature points (i.e., tie points or key points), which are small image patches containing unique intensity distributions, are detected in each UAV image. Because feature points are invariant to image translation, rotation, and scaling, those with similar intensity distributions can be consistently tracked and matched across multiple UAV images. Well-known feature detectors include the scale-invariant feature transform (SIFT) [33], Shi-Tomasi [34], features from accelerated segment test (FAST) [35], Harris-Stephens [36], binary robust invariant scalable keypoints (BRISK) [37], and speeded-up robust features (SURF) [38].
Next, feature points across different UAV images are matched based on the similarity of their intensity distributions. A geometric transformation matrix is also estimated at this stage to describe the relation between matched feature pairs (i.e., correspondences) of two adjacent UAV images. Based on the transformation matrix, incorrect matches (i.e., outliers) can be eliminated.
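As an illustration of this detect-match-reject sequence, the following is a minimal sketch using the open-source OpenCV library (an assumption for demonstration only; the reconstruction in this study was performed in Agisoft Metashape, see Section 3.2). The image file names are hypothetical.

```python
import cv2
import numpy as np

# Detect SIFT feature points in two overlapping UAV images
# (hypothetical file names).
img1 = cv2.imread("uav_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("uav_002.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
raw = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.75 * n.distance]

# Estimate a geometric transformation (here a fundamental matrix) with
# RANSAC; the inlier mask eliminates incorrect matches (outliers).
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
inliers = [m for m, keep in zip(good, inlier_mask.ravel()) if keep]
print(f"{len(inliers)} inlier correspondences out of {len(good)} matches")
```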
Thereafter, SfM algorithms are adopted to estimate both the extrinsic parameters (e.g., location and orientation) and the intrinsic parameters (e.g., focal length and pixel sensor size) of the camera. The 3D geometry of the cliff scene is also calculated at this stage. Camera positions and angles are then further refined through bundle-adjustment algorithms to reduce reprojection errors in MVS. Next, the multiple-view UAV images and their corresponding camera parameters are used to reconstruct a sparse 3D point cloud of the cliff. Users can examine the reconstruction quality of the sparse point cloud and, if needed, adjust the algorithm parameters and re-create it. Finally, pixels are back-projected to all UAV images to create an RGB-colored dense point cloud representing the 3D surface of the cliff. Detailed reviews of SfM-MVS can be found in [39,40,41].

2.3. Point Cloud Registration

To uncover the geomorphological changes of the cliff, the two dense point clouds from different periods are aligned using the rigid registration protocol (Figure 1c). The protocol finds geometric similarities between the two point clouds and applies rotation, scaling, and translation to rigidly align one to the other. The procedure comprises three steps: (1) scaling one point cloud to a real-world length unit; (2) roughly aligning the two point clouds based on manually selected correspondences; and (3) finely aligning the two point clouds using the automated ICP algorithm. Each step is explained below.
As shown in Figure 1c, point cloud A is first scaled to the correct real-world unit using a scaling factor, defined as the ratio of the distance between two existing points measured at the cliff site in the real world to the distance between the same two points in the point cloud. The scaled point cloud is treated as the reference point cloud, which remains fixed for the rest of the registration procedure.
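In code, this step reduces to multiplying every coordinate by the ratio of the two measured distances. A minimal sketch with NumPy (the numbers mirror the Case A arithmetic in Section 3.3; the random point array is a stand-in):

```python
import numpy as np

def scale_point_cloud(points, real_distance, cloud_distance):
    """Scale an (N, 3) point array so that cloud units become real-world units."""
    return points * (real_distance / cloud_distance)

# Stand-in point cloud; in Section 3.3, 10.5 cm (real world) divided by
# 7.753 (cloud units) gives a scaling factor of 1.354.
points = np.random.rand(1000, 3)
scaled = scale_point_cloud(points, real_distance=10.5, cloud_distance=7.753)
```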
Then, point cloud B (denoted the floating point cloud) is roughly aligned to the reference point cloud (i.e., point cloud A) by manually finding correspondences. Correspondences are points that appear at similar locations in both the reference and floating point clouds. The selection of correspondences is flexible as long as they can be visually identified. From the correspondences, a geometric transformation matrix is estimated that rigidly translates, rotates, and scales the floating point cloud to match the reference point cloud.
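This transformation can be estimated in closed form from a handful of picked correspondences via the Umeyama/Kabsch method; CloudCompare's point-pair alignment tool performs an equivalent computation internally. A sketch with synthetic coordinates (the function below is illustrative, not the software used in this study):

```python
import numpy as np

def rigid_transform_from_correspondences(src, dst):
    """Estimate scale s, rotation R, translation t with s * R @ src_i + t ≈ dst_i.
    src, dst: (N, 3) arrays of manually picked corresponding points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    # Umeyama/Kabsch: SVD of the cross-covariance matrix.
    U, sigma, Vt = np.linalg.svd(D.T @ S)
    sign = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ np.diag([1.0, 1.0, sign]) @ Vt
    s = (sigma * np.array([1.0, 1.0, sign])).sum() / (S ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Demo: recover a known transform from four synthetic correspondences,
# mirroring the four pairs (A0-R0 ... A3-R3) in Figure 5c.
floating = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
theta = np.deg2rad(30)
true_R = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
reference = 1.354 * floating @ true_R.T + np.array([5.0, -2.0, 0.3])
s, R, t = rigid_transform_from_correspondences(floating, reference)
aligned = s * floating @ R.T + t                   # roughly aligned cloud
print(np.allclose(aligned, reference))             # True
```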
Because correspondences are selected manually, errors are inevitably introduced during rough alignment. These errors can be further reduced through fine registration. Here, we adopt the ICP algorithm to further optimize the transformation matrix. The ICP algorithm starts from an initial guess of the rigid-body transform between the two point clouds and iteratively improves the transformation matrix by repeatedly finding correspondences with minimum error. The last row of Figure 1c compares the two point clouds at each stage of the registration.
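For reference, the fine-alignment step can be reproduced with the open-source Open3D library (an alternative to the CloudCompare ICP tool used in this study; the file names and the correspondence-distance threshold are assumptions):

```python
import numpy as np
import open3d as o3d

# Load the reference cloud and the roughly aligned floating cloud
# (hypothetical file names).
reference = o3d.io.read_point_cloud("reference.ply")
floating = o3d.io.read_point_cloud("floating_rough_aligned.ply")

# Point-to-point ICP. The initial transform is the identity because the
# floating cloud has already been roughly aligned; ICP then iterates
# between finding nearest-neighbor correspondences and re-estimating
# the rigid transform.
result = o3d.pipelines.registration.registration_icp(
    floating, reference,
    max_correspondence_distance=0.05,   # in point cloud units; an assumption
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

floating.transform(result.transformation)
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
```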
Rough alignment can effectively bring the two point clouds together, but small misalignments may remain. Fine alignment, on the other hand, can correct small misalignments but may not work well if the initial misalignment between the two point clouds is large. By applying the two alignments successively in this order, the misalignment between the two point clouds is gradually reduced.

3. Small-Scale Validation

3.1. Test Configuration

A series of small-scale tests on a rock sample was performed to (1) reconstruct dense 3D point clouds of the test sample under different lighting and surface-texture conditions; and (2) detect, localize, and quantify differential features of the rock sample after geometric changes. To this end, a rock sample was collected from Tumon Bay in Guam in June 2020. The longest diameter of the sample is about 13.5 cm (see the sample photographs in Table 1). Five test cases were established to mimic different testing environments; the third and fourth columns of Table 1 describe the lighting conditions and geometric changes for each case. The rock sample in Case A had a darker texture due to its high moisture content just after collection from the beach. Images for Cases B to E were taken a few days later; hence, the sample has a brighter surface texture in those cases.
To mimic the landscape changes one would see on a cliff, some geometric features of the rock sample were intentionally changed in Cases C, D, and E (see the fourth column of Table 1). Briefly, in Case C, three small stones, denoted S1, S2, and S3, were placed on top of the rock sample (see Figure 2a). In Case D, instead of adding stones, a thin layer of salt particles was added on top of the sample (see Figure 2b). That layer was then removed, and a new layer of salt particles was added at a different location on the sample in Case E (see Figure 2c).
A consumer-grade digital camera (Sony Alpha 6400 with the E PZ 16–50 mm Lens) was adopted for image collection. The auto mode was selected to allow the camera to define its preferred shooting parameters. The distance between the lens and the rock sample varied from 20 to 40 cm during image collection. Images were shot with a resolution of 6000 pixels by 4000 pixels. In Cases A to E, 199, 86, 70, 67, and 98 images were collected, respectively.

3.2. Point Cloud Reconstruction

The 3D point clouds of the sample were reconstructed using the off-the-shelf software Agisoft Metashape (version 1.6.2) [42] installed on a mobile workstation (Lenovo ThinkPad P72 with 16 GB of RAM and a 2.2 GHz CPU). Here, we use Case A as an example to illustrate the workflow. Figure 3a shows 7 of the 199 images of the rock sample in Case A. The collected images were aligned through the SfM-MVS workflow: a sparse point cloud (Figure 3c) is constructed first, based on which a dense point cloud is built (Figure 3d). The estimated camera positions are shown in Figure 3b, where small blue patches indicate the camera positions and angles.
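Metashape also exposes a Python scripting interface, so the point-and-click workflow above can be batched. The following is a hedged sketch using the 1.6-era API method names (the photo paths are hypothetical, and the calls should be verified against the installed version's reference manual):

```python
import Metashape  # Agisoft Metashape Professional scripting module

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["case_a_001.jpg", "case_a_002.jpg"])  # hypothetical paths

# SfM: feature detection/matching, then camera alignment, which also
# produces the sparse point cloud.
chunk.matchPhotos()
chunk.alignCameras()

# MVS: depth maps, then the RGB-colored dense point cloud.
chunk.buildDepthMaps()
chunk.buildDenseCloud()
chunk.exportPoints("case_a_dense.ply")  # export for registration
```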
Figure 4 shows the 3D reconstruction results of Cases B to E. The dense point clouds of the sample have different surface colors due to changes in the lighting conditions. For instance, the dense point clouds have a lighter color in Cases B and E (Figure 4a,d) than in Cases C and D (Figure 4b,c), because the sample was photographed outdoors in Cases B and E. Additionally, notice that the dense point cloud in Case A (Figure 3d) is slightly darker than those of Cases C and D (Figure 4b,c); this is because the sample had a higher moisture content in Case A, even though all three cases were under indoor lighting conditions.

3.3. Point Cloud Registration

To align the dense point clouds, we adopt the open-source software CloudCompare (version 2.10.2) [43] and first scale the point cloud of Case A to the real-world unit. To do this, two points (#4332244 and #3697936 in Figure 5a) were selected in the unscaled point cloud. The distance between these two points was measured as 7.753 in CloudCompare; note that this distance carries no real-world dimension. Next, the locations of these two points were identified on the rock sample, and the corresponding distance was measured as 10.5 cm. This led to a scaling factor of 10.5 cm/7.753 = 1.354 cm per unit. The initial point cloud was then scaled by multiplying the coordinates of each point by 1.354. The new point cloud, after scaling, is treated as the reference point cloud. Figure 5b compares the point clouds before and after scaling.
Next, the point cloud from each new test case is aligned to the reference point cloud; we use Case C as an example for illustration. First, rough registration was performed using four correspondences (A0-R0, A1-R1, A2-R2, and A3-R3 in Figure 5c) from both point clouds. Thereafter, fine registration was conducted through the ICP algorithm. The point clouds from Cases B, D, and E were aligned with the point cloud of Case A using the same procedure; these alignments are not shown here for brevity.

3.4. Point Cloud Comparison

Once the point clouds of Cases B to E are aligned with the reference point cloud of Case A, the differential features can be identified by computing the cloud-to-cloud distance in CloudCompare. The cloud-to-cloud distances between Cases A–B and A–C are illustrated in Figure 6. The test sample in Case B experienced no geometric change but was under a different lighting condition. As a result, the cloud-to-cloud distance between Case B and the reference point cloud (i.e., Case A) is extremely small (0.07 cm in Figure 6a,b), indicating that the two point clouds match well. The three stones in Case C can be identified from the cloud-to-cloud distance, as shown in Figure 6d,e, and their locations agree well with the ground truth measurements in Figure 6f. Furthermore, the heights of S1, S2, and S3 can be roughly quantified as 0.4, 0.3, and 0.7 cm, respectively.
The cloud-to-cloud distances between Cases A–D and A–E are shown in Figure 7. As can be seen in the first and second columns of the figure, the salt particles on the test sample in Cases D and E can be identified. The cloud-to-cloud distance on a log scale better delineates the boundary of the particles, while the linear-scale result is more suitable for quantifying the thickness of the salt layer. The results indicate that the proposed method can reliably find geometric changes in the test sample regardless of changes in the lighting conditions, as seen in Cases D and E.
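The nearest-neighbor core of this comparison can be sketched with Open3D (CloudCompare's cloud-to-cloud tool adds local surface modeling on top of nearest-neighbor distances; the file names here are hypothetical):

```python
import numpy as np
import open3d as o3d

reference = o3d.io.read_point_cloud("case_a_scaled.ply")     # hypothetical
compared = o3d.io.read_point_cloud("case_c_registered.ply")  # hypothetical

# For every point in the compared cloud, the distance to its nearest
# neighbor in the reference cloud (units follow the scaled cloud, cm here).
dists = np.asarray(compared.compute_point_cloud_distance(reference))
print(f"median: {np.median(dists):.3f} cm, max: {dists.max():.3f} cm")

# Color the compared cloud by log-scale distance (red = far, green = near)
# to visualize changed regions, mirroring Figures 6 and 7.
log_d = np.log10(dists + 1e-6)
norm = (log_d - log_d.min()) / (log_d.max() - log_d.min() + 1e-12)
compared.colors = o3d.utility.Vector3dVector(
    np.c_[norm, 1.0 - norm, np.zeros_like(norm)])
o3d.visualization.draw_geometries([compared])
```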

4. Field Validation

4.1. Site Description

A cliff at Tagachang Beach in Guam was selected as the testbed for the field validation. Tagachang Beach is located on the east side of the island. As shown in Figure 8d, the cliff starts at the south end of the beach and extends to the south. A small portion of the cliff is selected in this study (see the white circle in Figure 8d). Figure 9 illustrates the testbed from different views. The target cliff is about 30 m high measured from its base and has a relatively flat top surface covered by vegetation (Figure 9a). Both the north and east sides of the target cliff are steep rock surfaces (Figure 9b,c). A rock slide from previous erosion can be observed on the east vertical face of the cliff (Figure 9d).

4.2. UAV Operation, Data Collection, and Point Cloud Reconstruction

Two visits were carried out, on 25 June and 11 July 2020. The east side of the cliff was inaccessible due to high tides during both visits; hence, the deployment work was performed on the north side of the cliff (see the deployment area in Figure 8d). Two off-the-shelf UAVs, the DJI Air (SZ DJI Technology Co., Ltd., Shenzhen, China) and the DJI Phantom 4 Pro + V2.0 (DJI Phantom 4 hereafter; SZ DJI Technology Co., Ltd., Shenzhen, China), were adopted as tools for image collection.
To capture the testbed evenly from different camera positions, two image collection strategies were used. The first was to take a series of images under a preprogrammed flight route to scan the cliff from the top. This was achieved by operating the DJI Air through an off-the-shelf smartphone app, Pix4Dcapture (version 4.10.0) [44], installed on an iPhone 11. A double-grid mapping mission was created in the app. The flight altitude was defined as 90.2 m with front and side overlap of 90% and 75%, respectively, from which the app calculated the UAV locations for shooting each image. As a result, 83 images were collected by the DJI Air in both field visits, with an image resolution of 4056 pixels by 3040 pixels. The UAV camera angle was set to 80 degrees.
For the second image collection strategy, images were captured by the DJI Phantom 4 through an intelligent mode named point of interest (POI), using the smartphone app DJI Go 4 (version 4.3.36) [45] preinstalled on the all-in-one DJI remote controller. The POI mode allows the UAV to fly horizontally along a circular path with a predefined center point and radius. The center point was defined at the cliff top (see the white cross in Figure 9a), and the radius was selected as 62 m. Multiple POI flights were then performed at altitudes of 25 m to 45 m. Images were automatically collected by the onboard UAV camera at a shooting interval of 2 s with an image resolution of 4864 pixels by 3648 pixels. In total, 284 and 251 images were collected in the field visits of 25 June and 11 July, respectively.
Figure 10a,c show the sample UAV images from the DJI Phantom 4 under the POI mode for both field visits. Figure 10b,d show the camera positions where the backgrounds are sparse point clouds of the testbed. As can be seen in the figures, the DJI Air follows flight missions of a 3-by-3 grid to cover the top of the cliff area. The DJI Phantom 4 is operated in POI mode to mainly scan the east and north sides of the cliff from four different altitudes.
Based on the UAV images collected by the DJI Air and Phantom 4, the dense point clouds of the two field visits were reconstructed using Agisoft Metashape on a workstation (Dell XPS 8930-7814BLK-PUS with 32 GB of RAM and a 3.0 GHz CPU). Figure 11 illustrates the dense point clouds from both field visits, with points outside the scope of the testbed truncated. The point cloud from the 25 June visit contains 48.5 million points, while that from 11 July contains 55.8 million points.

4.3. Point Cloud Registration

To align the point clouds, we first scale a point cloud to the correct real-world unit in CloudCompare. To this end, we treat the point cloud from the second visit, on 11 July, as the reference point cloud. During this visit, three markers (M1, M2, and M3) were placed in the testbed, as seen in Figure 12d. M1 and M2 were X marks made with blue paint tape, while M3 was the UAV landing pad. The distances between the three markers were measured with a measuring tape (see the second column of Table 2). Next, the markers were visually identified in the dense point cloud (Figure 12a–c), and the distances between them in the point cloud were measured (see the third column of Table 2). Finally, three scaling factors were calculated, and their average of 1.054 was applied to scale the point cloud from the second visit.
Thereafter, the point cloud from the first visit was aligned to the reference point cloud through the registration protocol. Figure 11 shows the correspondences (A1-R1, A2-R2, A3-R3, and A4-R4) selected from both point clouds for rough alignment. The point cloud from the first visit was then further aligned by the automated ICP algorithm.
Figure 13 compares the point clouds from the two visits under different views of the cliff during the registration procedure. The point cloud from the 25 June visit is rendered in blue. As can be seen in Figure 13b,e,h, small misalignments remain after rough alignment; these are minimized after fine alignment is performed (Figure 13c,f,i).

4.4. Cliff Monitoring

The cloud-to-cloud distance was computed in CloudCompare, and the results are shown in Figure 14. As can be seen from the figure, the majority of the cliff area is covered in green, indicating that the discrepancies between the two point clouds are about 1.47 cm or less (read from the figure). However, scattered yellow and red spots can also be found in the results. The cloud-to-cloud distances at these locations span from 19 cm (yellow) to 2.47 m (red), indicating significant discrepancies between the point clouds from the two field visits.
To further investigate these discrepancies, two locations (Patches A and B) are identified in the bird's-eye view in Figure 14a. Patch A contains a steep cliff face covered by scattered vegetation, as shown in Figure 15b; Patch B lies on the flat, vegetation-covered top of the cliff (Figure 15d). As observed in the figures, the cloud-to-cloud distances are large in the vegetated areas (yellow spots in Figure 15a,c) and smaller on the cliff rock face (e.g., the green area in Figure 15a). This is because the SfM-MVS workflow has difficulty reconstructing thin structures such as plants [41], introducing reconstruction errors into the point clouds.
To reduce the errors caused by vegetation, we truncate the cloud-to-cloud distance result in Figure 14, retaining only the steep cliff faces on the east and north sides. The new cloud-to-cloud distance results are shown in Figure 16. As a result, the maximum cloud-to-cloud distance is reduced from 2.47 m in Figure 14 to 0.66 m in Figure 16. Red spots can still be observed in the figures, mainly caused by the scattered vegetation on the cliff faces. We further queried the cloud-to-cloud distance at three typical cliff-face locations; the results range from 0.7 cm to 2.2 cm. Considering the size of the entire cliff (about 30 m in height), such differences are negligible.
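A simple way to automate this kind of screening is to threshold the cloud-to-cloud distances and inspect only the flagged points. A sketch with Open3D (the 5 cm threshold and file names are illustrative assumptions, not values from this study):

```python
import numpy as np
import open3d as o3d

cloud = o3d.io.read_point_cloud("truncated_cliff_july.ply")      # hypothetical
reference = o3d.io.read_point_cloud("truncated_cliff_june.ply")  # hypothetical

# Nearest-neighbor cloud-to-cloud distances, in meters for the field data.
dists = np.asarray(cloud.compute_point_cloud_distance(reference))

# Flag points whose distance exceeds an illustrative 5 cm threshold;
# clusters of flagged points are candidates for erosion or vegetation
# artifacts and warrant visual inspection (cf. Figures 15 and 16).
flagged = cloud.select_by_index(np.where(dists > 0.05)[0])
print(f"{len(flagged.points)} of {len(cloud.points)} points exceed 5 cm")
```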

5. Discussion

We validated the proposed method through a series of small-scale experiments on a rock sample. Although SfM-MVS is a well-established workflow for reconstructing point clouds, few studies in the literature have focused on its robustness to different lighting conditions and surface textures in the context of coastal cliffs. The small-scale validation in this study addresses this concern. The lighting conditions and surface textures (see the second and third columns of Table 1) simulate the different weather conditions a cliff could experience in the field: the lighting at a cliff site changes across the day, and the rock surface may darken after rain or a typhoon. The geometric changes in the small-scale validation include abrupt changes, such as adding stones (Case C), and gradual changes, such as adding salt-particle layers (Cases D and E). These changes mimic the geomorphological changes of a cliff. For the erosion behavior of a cliff, landscape features would be removed rather than added; in this case, the point clouds of Cases C, D, and E can be considered the initial models, while the point cloud of Case A is the new model after erosion.
The results of the small-scale validation demonstrated the effectiveness of our method in detecting, localizing, and quantifying geometric changes in the rock sample, regardless of variations in lighting conditions and surface texture. Although the cliff in the field validation did not experience visible erosion over the short inspection interval, the findings of the small-scale validation serve as the basis for the success of our method in monitoring cliff erosion over the long term.
In terms of correspondence selection, four pairs of correspondences were selected on top of the test sample in the small-scale validation, as shown in Figure 5. Selecting correspondences from other locations on the rock sample is also feasible. Since correspondence selection only serves as the means of rough alignment, errors introduced at this registration stage can be further reduced during fine alignment and do not affect the final registration result.
One difference between the small-scale and field validations is that extra errors are induced in the field validation by vegetation in the cliff area. Vegetation fully covers the top surface of the cliff and appears in scattered patterns on the vertical cliff faces. Estimating the locations of the true rock surfaces in these areas from the point cloud can be very challenging, as the surfaces are barely visible in UAV images. However, the false-positive results can be easily identified through visual comparison of the cloud-to-cloud distance with ground truth measurements (see Figure 15).
Since our method is by nature non-georeferenced, the point cloud it generates is not intended to contain any geographic information. Although most consumer-grade UAVs (including the ones in this study) provide geotagged images, such images are not suitable for georeferencing due to the low accuracy of their GPS coordinates. Furthermore, the point cloud produced by our method cannot be directly linked to georeferenced datasets (e.g., geotagged maps, point clouds, or models). However, if a georeferenced point cloud of a cliff already exists, one can align a newly collected non-georeferenced point cloud from our method to it through the registration method established in this study. In terms of geomorphological changes, our method assumes that only a small portion of the cliff erodes while the remainder stays unchanged between inspections, which is common in coastal surveying [46]. Investigating dramatic geomorphological changes of a cliff due to severe erosion is out of the scope of this study.

6. Conclusions

Monitoring cliff erosion is essential for maintaining a healthy coastal ecosystem. The use of photogrammetry-based workflows and UAVs has proven effective in monitoring coastal cliffs. To date, many photogrammetry-based methods rely on georeferencing frameworks for point cloud alignment. Despite the successes reported in these studies, georeferencing efforts significantly increase project costs through securing high-end GPS equipment, hiring GIS specialists, and/or relying on GNSS-enabled UAVs. This may hinder the use of photogrammetry for monitoring cliffs on a routine basis, particularly in underserved coastal communities where expensive hardware and trained GIS specialists are limited resources.
In this study, we proposed a novel photogrammetry-based approach for identifying the geomorphological changes of coastal cliffs that does not rely on any georeferencing efforts. SfM-MVS algorithms were adopted to reconstruct dense 3D point clouds of the cliff. A rigid registration protocol was then established to gradually align two point clouds from different periods and uncover the differential changes caused by cliff erosion. Our method was examined through a series of small-scale experiments on a rock sample; the results indicated that the proposed method can detect, localize, and quantify small changes in the rock sample regardless of variations in lighting and surface texture. Thereafter, we further validated our method on a full-scale coastal cliff in Guam, where point clouds from two field visits were reconstructed and aligned to find the differential features caused by geomorphological changes. The findings of this study offer a low-cost and flexible cliff monitoring methodology to government agencies and stakeholders for their decision-making in coastal zone management.

Funding

This study is based on work supported by the seed grant through the National Science Foundation project Guam EPSCoR (Grant No. 1457769) in the United States. However, any opinions, findings, and conclusions, or recommendations expressed in this study are those of the author and do not necessarily reflect the views of the National Science Foundation or Guam EPSCoR.

Data Availability Statement

The data presented in this study are available from the author upon reasonable request.

Acknowledgments

This study was funded and the data was collected during the author’s previous term at the University of Guam. The author analyzed data and wrote the manuscript during the periods at both the University of Guam and Coastal Carolina University. The author wants to thank undergraduate student assistant Angela Maglaque for assisting in field visits; Maria Kottermair and Yuming Wen for providing fruitful discussions on equipment and software selections; the Guam EPSCoR program and the Research Corporation of the University of Guam for managing the grant; and the Office of Research and Sponsored Programs at the University of Guam for supporting a portion of the equipment used in this study. The author also wants to thank Colleen Bamba, Bastian Bentlage, Terry Donaldson, Roseann Jones, Jordan Jugo, and anonymous reviewers for improving the quality of this manuscript significantly.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

1. Guam Coastal Zone Management Program and Draft Environmental Impact Statement; National Oceanic and Atmospheric Administration, U.S. Department of Commerce: New York, NY, USA, 1979.
2. Emery, K.O.; Kuhn, G.G. Sea cliffs: Their processes, profiles, and classification. Geol. Soc. Am. Bull. 1982, 93, 644–654.
3. Guam Hazard Mitigation Plan, Guam Homeland Security Office. 2019. Available online: https://ghs.guam.gov/sites/default/files/final_2019_guam_hmp_20190726.pdf (accessed on 7 August 2021).
4. Thieler, E.R.; Danforth, W.W. Historical shoreline mapping (II): Application of the digital shoreline mapping and analysis systems (DSMS/DSAS) to shoreline change mapping in Puerto Rico. J. Coast. Res. 1994, 10, 600–620.
5. Sear, D.A.; Bacon, S.R.; Murdock, A.; Doneghan, G.; Baggaley, P.; Serra, C.; LeBas, T.P. Cartographic, geophysical and diver surveys of the medieval town site at Dunwich, Suffolk, England. Int. J. Naut. Archaeol. 2011, 40, 113–132.
6. Westoby, M.J.; Lim, M.; Hogg, M.; Pound, M.J.; Dunlop, L.; Woodward, J. Cost-effective erosion monitoring of coastal cliffs. Coast. Eng. 2018, 138, 152–164.
7. Letortu, P.; Jaud, M.; Grandjean, P.; Ammann, J.; Costa, S.; Maquaire, O.; Delacourt, C. Examining high-resolution survey methods for monitoring cliff erosion at an operational scale. GIScience Remote Sens. 2018, 55, 457–476.
8. Rosser, N.J.; Petley, D.N.; Lim, M.; Dunning, S.A.; Allison, R.J. Terrestrial laser scanning for monitoring the process of hard rock coastal cliff erosion. Q. J. Eng. Geol. Hydrogeol. 2005, 38, 363–375.
9. Hayakawa, Y.S.; Obanawa, H. Volumetric Change Detection in Bedrock Coastal Cliffs Using Terrestrial Laser Scanning and UAS-Based SfM. Sensors 2020, 20, 3403.
10. Klemas, V.V. Coastal and environmental remote sensing from unmanned aerial vehicles: An overview. J. Coast. Res. 2015, 31, 1260–1267.
11. Crommelinck, S.; Bennett, R.; Gerke, M.; Nex, F.; Yang, M.Y.; Vosselman, G. Review of automatic feature extraction from high-resolution optical sensor data for UAV-based cadastral mapping. Remote Sens. 2016, 8, 689.
12. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15.
13. Duffy, J.P.; Shutler, J.D.; Witt, M.J.; DeBell, L.; Anderson, K. Tracking fine-scale structural changes in coastal dune morphology using kite aerial photography and uncertainty-assessed structure-from-motion photogrammetry. Remote Sens. 2018, 10, 1494.
14. Fonstad, M.A.; Dietrich, J.T.; Courville, B.C.; Jensen, J.L.; Carbonneau, P.E. Topographic structure from motion: A new development in photogrammetric measurement. Earth Surf. Process. Landf. 2013, 38, 421–430.
15. Jaud, M.; Bertin, S.; Beauverger, M.; Augereau, E.; Delacourt, C. RTK GNSS-Assisted Terrestrial SfM Photogrammetry without GCP: Application to Coastal Morphodynamics Monitoring. Remote Sens. 2020, 12, 1889.
16. Forlani, G.; Pinto, L.; Roncella, R.; Pagliari, D. Terrestrial photogrammetry without ground control points. Earth Sci. Inform. 2014, 7, 71–81.
17. Chiang, K.W.; Tsai, M.L.; Chu, C.H. The development of an UAV borne direct georeferenced photogrammetric platform for ground control point free applications. Sensors 2012, 12, 9161–9180.
18. Turner, D.; Lucieer, A.; Wallace, L. Direct georeferencing of ultrahigh-resolution UAV imagery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 2738–2745.
19. Taddia, Y.; Stecchi, F.; Pellegrinelli, A. Coastal Mapping using DJI Phantom 4 RTK in Post-Processing Kinematic Mode. Drones 2020, 4, 9.
20. Peppa, M.V.; Hall, J.; Goodyear, J.; Mills, J.P. Photogrammetric assessment and comparison of DJI Phantom 4 Pro and Phantom 4 RTK small unmanned aircraft systems. ISPRS Geospat. Week 2019, XLII-2/W13, 503–509.
21. Urban, R.; Reindl, T.; Brouček, J. Testing of drone DJI Phantom 4 RTK accuracy. In Advances and Trends in Geodesy, Cartography and Geoinformatics II: Proceedings of the 11th International Scientific and Professional Conference on Geodesy, Cartography and Geoinformatics (GCG 2019), Demänovská Dolina, Low Tatras, Slovakia, 10–13 September 2019; CRC Press: Boca Raton, FL, USA; p. 99.
22. Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155.
23. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures; International Society for Optics and Photonics: Washington, DC, USA, 1992; Volume 1611, pp. 586–606.
24. Costantino, D.; Settembrini, F.; Pepe, M.; Alfio, V.S. Develop of New Tools for 4D Monitoring: Case Study of Cliff in Apulia Region (Italy). Remote Sens. 2021, 13, 1857.
25. Obanawa, H.; Hayakawa, Y.S. Variations in volumetric erosion rates of bedrock cliffs on a small inaccessible coastal island determined using measurements by an unmanned aerial vehicle with structure-from-motion and terrestrial laser scanning. Prog. Earth Planet. Sci. 2018, 5, 1–10.
26. Michoud, C.; Carrea, D.; Costa, S.; Derron, M.H.; Jaboyedoff, M.; Delacourt, C.; Davidson, R. Landslide detection and monitoring capability of boat-based mobile laser scanning along Dieppe coastal cliffs, Normandy. Landslides 2015, 12, 403–418.
27. Dewez, T.J.B.; Leroux, J.; Morelli, S. Cliff Collapse Hazard from Repeated Multicopter UAV Acquisitions: Return on Experience. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 805–811.
28. Stumpf, A.; Malet, J.P.; Allemand, P.; Pierrot-Deseilligny, M.; Skupinski, G. Ground-based multi-view photogrammetry for the monitoring of landslide deformation and erosion. Geomorphology 2015, 231, 130–145.
29. Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussee, N.; Bertin, X. Accuracy assessment of coastal topography derived from UAV images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 1127.
30. Chaiyasarn, K.; Kim, T.K.; Viola, F.; Cipolla, R.; Soga, K. Distortion-free image mosaicing for tunnel inspection based on robust cylindrical surface estimation through structure from motion. J. Comput. Civ. Eng. 2016, 30, 04015045.
31. Dietrich, J.T. Bathymetric structure-from-motion: Extracting shallow stream bathymetry from multi-view stereo photogrammetry. Earth Surf. Process. Landf. 2017, 42, 355–364.
32. Bhadrakom, B.; Chaiyasarn, K. As-built 3D modeling based on structure from motion for deformation assessment of historical buildings. Int. J. Geomate 2016, 11, 2378–2384.
33. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
34. Shi, J. Good features to track. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'94), Seattle, WA, USA, 21–23 June 1994; pp. 593–600.
35. Rosten, E.; Drummond, T. Fusing points and lines for high performance tracking. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV'05), Beijing, China, 17–21 October 2005; Volumes 1–2, pp. 1508–1515.
36. Harris, C.; Stephens, M. A combined corner and edge detector. Alvey Vis. Conf. 1988, 15, 50.
37. Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary robust invariant scalable keypoints. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2548–2555.
38. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417.
39. James, M.R.; Robson, S. Straightforward reconstruction of 3D surfaces and topography with a camera: Accuracy and geoscience application. J. Geophys. Res. Earth Surf. 2012, 117, F03017.
40. Furukawa, Y.; Hernández, C. Multi-view stereo: A tutorial. Found. Trends Comput. Graph. Vis. 2015, 9, 1–148.
41. Smith, M.W.; Carrivick, J.L.; Quincey, D.J. Structure from motion photogrammetry in physical geography. Prog. Phys. Geogr. 2016, 40, 247–275.
42. AgiSoft Metashape Professional (Version 1.6.2) (Software). 2020. Available online: http://www.agisoft.com/downloads/installer/ (accessed on 1 April 2020).
43. CloudCompare (Version 2.10.2) (GPL Software). 2020. Available online: http://www.cloudcompare.org/ (accessed on 1 June 2020).
44. Pix4Dcapture (Version 4.10.0) (Smartphone App). 2020. Available online: https://www.pix4d.com/product/pix4dcapture (accessed on 1 June 2020).
45. DJI GO 4 (Version 4.3.36) (Smartphone App). 2020. Available online: https://www.dji.com/downloads/djiapp/dji-go-4 (accessed on 1 June 2020).
46. Grottoli, E.; Biausque, M.; Rogers, D.; Jackson, D.W.; Cooper, J.A.G. Structure-from-Motion-Derived Digital Surface Models from Historical Aerial Photographs: A New 3D Application for Coastal Dune Monitoring. Remote Sens. 2021, 13, 95.
Figure 1. Research methodology of this study: (a) image collection; (b) point cloud reconstruction; and (c) point cloud registration.
Figure 2. Bird's-eye views of the rock sample showing the geometric changes under different test cases. (a) Three stones were added in Case C; (b) a thin layer of salt particles was added in Case D; and (c) a thin layer of salt particles was added at a different location in Case E.
Figure 3. 3D reconstruction of the rock sample in Case A. (a) Sample input images; (b) camera positions; (c) sparse point cloud; and (d) dense point cloud.
Figure 4. 3D reconstruction of the rock sample in (a) Case B; (b) Case C; (c) Case D; and (d) Case E.
Figure 5. (a) The unscaled point cloud from Case A; (b) a comparison of the unscaled and scaled point clouds in Case A; and (c) the unscaled point cloud in Case C aligned to the reference point cloud by manually selecting four correspondences (A0-R0, A1-R1, A2-R2, and A3-R3).
Figure 6. (a,b) Cloud-to-cloud distance between Cases A and B; (c) the point cloud in Case B serves as the ground truth measurement; (d,e,g,h,j,k) cloud-to-cloud distance between Cases A and C; and (f,i,l) the point clouds in Case C serve as the ground truth measurements. Results in (a,d,g,j) are in log scale with the unit of cm; results in (b,e,h,k) are in linear scale with the unit of cm.
Figure 7. (a,b,d,e) Cloud-to-cloud distance between Cases A and D; (c,f) the point clouds in Case D serve as the ground truth measurements; (g,h,j,k) cloud-to-cloud distance between Cases A and E; and (i,l) the point clouds in Case E serve as the ground truth measurements. Results in (a,d,g,j) are in log scale with the unit of cm; results in (b,e,h,k) are in linear scale with the unit of cm.
Figure 8. The location of the testbed. (a) The island of Guam; (b–d) successive blow-up orthographic satellite views. Tagachang Beach and the target cliff are illustrated in (d). All satellite images were generated from Google Earth. Map data in (a,b) are from Google, NOAA, Maxar Technologies, and CNES/Airbus; map data in (c,d) are from Google and CNES/Airbus.
Figure 9. (a) Bird's-eye view of the target cliff; (b) view from the north; (c) view from the east; and (d) blow-up view of the rock slide. CP in (a) is the center point for the UAV POI flight mode.
Figure 10. (a) Sample DJI Phantom 4 images from the visit on 25 June 2020; (b) camera positions from the visit on 25 June 2020; (c) sample DJI Phantom 4 images from the visit on 11 July 2020; and (d) camera positions from the visit on 11 July 2020.
Figure 11. Dense point clouds of the cliff from (a) the 25 June visit; and (b) the 11 July visit. Correspondences are selected from both point clouds for rough alignment.
Figure 12. Deployment of markers during the visit on 11 July: (a) the dense point cloud in the top view; (b) blow-up detail of the point cloud with three markers; (c) blow-up details of the three markers truncated from the dense point cloud; and (d) images of the markers collected by a smartphone camera in the field.
Figure 13. Investigation of point cloud registration. (a,d,g) Rough alignment results of the point clouds from the top, north, and east views; (b,e,h) blow-up details; and (c,f,i) the same observation locations after fine alignment. The point cloud from the 25 June visit is rendered in blue.
Figure 14. Cloud-to-cloud distance between the point clouds from the two visits from (a) bird's-eye view; (b) top view; (c) east elevation view; and (d) north elevation view. Results are in log scale with the unit of m.
Figure 15. Investigation of Patches A and B in Figure 14: (a,b) cloud-to-cloud distance and the point cloud in the area defined by Patch A; and (c,d) cloud-to-cloud distance and the point cloud in the area defined by Patch B.
Figure 16. Cloud-to-cloud distance and the point cloud of the truncated cliff. (a,b) Results from the bird's-eye view; (c,d) results from the east view; and (e,f) results from the north view. The unit in (a,c,e) is m.
Table 1. Test matrix for the small-scale validation.

Test Case | Surface Texture | Lighting Condition | Geometric Change | Sample and Test Environment
Case A | Dark | Indoor lighting condition; daylight was the only light source | Reference dataset | (photo)
Case B | Light | Outdoor lighting condition; the sample was directly under sunlight | No geometric change was made | (photo)
Case C | Light | Indoor lighting condition; daylight was the only light source | Three small stones were added (see Figure 2a) | (photo)
Case D | Light | Indoor lighting condition; the roof lamp was the only light source | A thin layer of salt particles was added (see Figure 2b) | (photo)
Case E | Light | Outdoor lighting condition; the sample was placed in the shadow | A thin layer of salt particles was added at a different location (see Figure 2c) | (photo)
Table 2. Scaling factor calculation.

Marker Distance | Field Measurement | CloudCompare Measurement | Scaling Factor
M1 to M2 | 17.48 m | 16.61 | 1.052 m/1
M1 to M3 | 10.06 m | 9.52 | 1.057 m/1
M2 to M3 | 14.99 m | 14.22 | 1.054 m/1
Average scaling factor: 1.054 m/1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
