Measuring Change Using Quantitative Differencing of Repeat Structure-From-Motion Photogrammetry: The Effect of Storms on Coastal Boulder Deposits

Abstract: Repeat photogrammetry is increasingly the go-to tool for long-term geomorphic monitoring, but quantifying the differences between structure-from-motion (SfM) models is a developing field. Volumetric differencing software (such as the open-source package CloudCompare) provides an efficient mechanism for quantifying change in landscapes. In this case study, we apply this methodology to coastal boulder deposits on Inishmore, Ireland. Storm waves are known to move these rocks, but boulder transportation and evolution of the deposits are not well documented. We used two disparate SfM data sets for this analysis. The first model was built from imagery captured in 2015 using a GoPro Hero 3+ camera (fisheye lens) and the second used 2017 imagery from a DJI FC300X camera (standard digital single-lens reflex (DSLR) camera), and we used CloudCompare to measure the differences between them. This study produced two noteworthy findings. First, volumetric differencing reveals that short-term changes in boulder deposits can be larger than expected, and that frequent monitoring can reveal not only the scale but the complexities of boulder transport in this setting. This is a valuable addition to our growing understanding of coastal boulder deposits. Second, SfM models generated by different imaging hardware can be successfully compared at sub-decimeter resolution, even when one of the camera systems has substantial lens distortion. This means that older image sets, which might not otherwise be considered of appropriate quality for co-analysis with more recent data, should not be ignored as data sources in long-term monitoring studies. Combining repeat photogrammetry (first data collected by a GoPro Hero 3+, and later using a DJI FC300X) and quantitative differencing, we demonstrate re-organization of the deposit by storm waves over a two-year period.
This work is part of a long-term monitoring project that will contribute to understanding CBD dynamics, and thereby unlock the record of high-energy wave events preserved in these deposits.

The swiftness with which researchers have embraced and applied this relatively new technology speaks to its inherent potential (UAV mapping is ideal for repeat, low-cost, high-resolution data collection [41]), but also reflects rapid advances in the efficiency and stability of UAVs, the usability and reliability of controller software, and the sophistication of automated photogrammetric image processing [42-45].

Coastal Boulder Deposits
Coastal boulder deposits (CBD) accumulate above the high-tide line on exposed rocky coasts, and include clasts that can weigh hundreds of tons in some cases [65-68]. They often form imbricated boulder ridges that can be several meters high, tens of meters wide, and hundreds of meters long [65,69-71] (Figure 2). CBD occur worldwide along high-energy coastlines. They have been documented around the Mediterranean [72-75], as well as in the Atlantic on the Aran Islands, Ireland [70,76], the Shetland and Orkney Islands, Scotland [69], Banneg Island, France [77], and Iceland [78]. CBD are also located in both western and northeast Australia [66,79], Iran [80], Oman [81], the Philippines [67,68,82], the Caribbean [71,83-86], South Africa [87], and elsewhere. Until recently, however, they received little attention: about 90% of published studies are from the last 18 years [88]. CBD are ideal candidates for photogrammetric monitoring and CloudCompare analysis, because the processes by which they form and evolve are poorly understood. Some studies have inferred tsunami emplacement, possibly with subsequent modification by storm waves [72,89-92]. Others argue that characteristic features of CBD, including boulder imbrication, are primary indicators of tsunami transport, and exclude storm waves [93]. Further, the precise mechanics of boulder generation and transport remain contentious, and the dynamics of boulder ridge evolution over time (including inland migration rates) are not understood. Before-and-after positions of individual boulders and of ridge fronts have been documented [65,67,68], but the details remain elusive. Boulders may remain in place for decades, perhaps even centuries, but may then be transported meters or tens of meters during a single event [65,67,79,81,94].
The Aran Islands are of particular interest in this context because waves generated by intense storms have recently been shown to deposit and rearrange CBD along the Atlantic-facing coasts of all three islands [65,88]. Recognition that these deposits are currently active has made them a focus for studying the energetics and effects of high-energy storms [95,96]. Repeat observations of boulder ridges and quantitative change analyses over short time scales are critical for understanding CBD dynamics.
Quantifying the timescale and magnitude of change from year to year sets a baseline, and even in years without extreme storms may reveal lower-energy dynamics in the CBD system. Documenting these changes is a necessary precursor to unravelling the long-term evolution of CBD systems, and better understanding the mechanisms of their movement. UAV-based photogrammetry is an ideal methodology for this work. Observations are easily repeatable even in relatively inaccessible areas, so that SfM models can be created on an annual basis or better. Point-cloud differencing using CloudCompare can rapidly quantify changes at all scales. Thus, SfM can record CBD changes, including responses to storm events.

Study Area
This case study focuses on part of an extensive CBD system on the Aran Islands (Figure 2). Inishmore, the largest of the three islands, has about 7 linear km of CBD along its Atlantic coasts [70,76]; and our test site is at the far northwestern end (Figure 3), where wave energies tend to be greatest (Cox et al., in review).
Figure 1. The boulder ridge forms a sinuous line, positioned at the back of the shallowly dipping bedrock platform. The ridge front is ~15 m above high water, and 30-60 m inland from the high-tide line. The photo was taken about one hour before high tide. The dark brown colouration close to the water marks the intertidal extent.

The Aran Islands are composed of Carboniferous limestone that dips 2-4° to the southwest [97]. A combination of near-flat bedding and orthogonal sets of vertical veins and joints creates stair-step cliffs and broad platforms on the exposed Atlantic sides of the islands. Erosion along these planes of weakness yields tabular boulders (Figure 3), which pile up at the back of the bedrock platforms. The Aran Islands CBD have been the site of several studies [65,70,76,98]. The largest boulders, which weigh hundreds of tonnes, tend to be close to sea level, while smaller boulders (still weighing tonnes or tens of tonnes) can be up to 220 m inland, or on cliffs that reach as high as 50 m above sea level [65,70,76].
The test site exhibits well-developed boulder ridges that sit 8-15 m above high water, ~30-70 m inland (Figure 1), and contain boulders with characteristic masses of tonnes to several tonnes, with intermediate axes of order 1 m [99] (sites S31-S33 in Table A2 therein). Individual blocks in the ridges (often concentrated at the seaward edge: Figure 3) tend to be larger, and commonly weigh tens of tonnes, up to ≈100 t in some cases (intermediate axes of order 2-5 m, density 2.66 t m⁻³) [65,76]. Comparison with 1930s film footage revealed transport of boulders up to about 60 t [100], and during large storms in winter 2013-2014 movement of boulders weighing as much as 50 t was documented [65]. Conventional wisdom suggests there is little or no activity in these deposits on a year-to-year basis, with change happening only during particularly extreme storm events [76,95,98]. The data presented in this study challenge that assumption by showing considerable deposit rearrangement during the winters of 2015-2016 and 2016-2017.

Materials and Methods
Our methodology had four phases: (1) on-site image acquisition using a DJI Phantom drone; (2) image correction and optimization in Adobe Lightroom; (3) 3-D point-cloud and mesh generation using the SfM software package Agisoft PhotoScan (referred to here as "Agisoft": www.agisoft.com); and (4) quantitative differencing and analysis using CloudCompare (www.danielgm.net/cc/). Fieldwork was carried out by a two-person field team, and the survey time (from unpacking the drone to leaving the area) was 30 minutes without ground control points (GCPs) and 75 minutes with GCP deployment and collection.
We note that Agisoft PhotoScan has recently been renamed Agisoft Metashape; despite the name change, Metashape is essentially an upgrade with added functionality. The integrated changelog is at www.agisoft.com/pdf/metashape_changelog.pdf.



Image Acquisition
Data were collected in 2015 and 2017 using a DJI Phantom 3 (FC300X) drone controlled by DJI Groundstation Pro (Table 1). Phantom drones are relatively inexpensive, costing between $400 and $1500, and have been used for similar SfM studies [57,82,101]. Images were collected at nadir along precisely calibrated flightpaths parallel to the coast (Figure 4), at altitude 90 m in 2015 and 50 m in 2017 (Table 1). To ensure sufficient target points in adjacent images, flight paths were set up to provide 80% overlap between successive images and 60% sidelap between adjacent paths. The benefits of high-overlap image sets are well-documented [25,64,[102][103][104], so the practice has become standard operating procedure.
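The overlap and sidelap targets translate directly into exposure spacing and flight-line spacing once the ground footprint of a nadir image is known from altitude and field of view. A minimal sketch of that geometry (the field-of-view angles below are illustrative placeholders, not specifications of the cameras used in these surveys):

```python
import math

def footprint(altitude_m, fov_deg):
    """Ground distance (m) covered by one image dimension at nadir."""
    return 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)

def spacing(altitude_m, fov_deg, overlap_frac):
    """Exposure spacing (along-track) or flight-line spacing (across-track)
    needed to achieve the target overlap fraction."""
    return footprint(altitude_m, fov_deg) * (1 - overlap_frac)

# Illustrative only: placeholder field-of-view angles, not real camera specs.
along_track = spacing(50, 65.5, 0.80)   # 80% forward overlap at 50 m altitude
across_track = spacing(50, 51.0, 0.60)  # 60% sidelap at 50 m altitude
```

Higher overlap shrinks the spacing between exposures, which is why high-overlap missions collect many more images per unit area.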
Different cameras were used in the two missions: a GoPro Hero 3+ in 2015, and a DJI single-lens reflex camera in 2017. Both cameras had 12-megapixel sensors, but the GoPro camera had a much wider field of view (Table 1). In both surveys, the in-camera GPS automatically tagged each image's metadata with the capture-location coordinates. These positional data have limited accuracy, however, as they are captured while the camera is in motion, using a relatively low-precision GPS unit. To improve georeferencing, GCPs (i.e., discrete objects that would be recognizable in the photographs; Figure 5) were distributed throughout the study area during the 2017 survey. Their positions were recorded on the ground using hand-held mapping-grade GPS, which was sufficiently precise for the scale of changes measured in this study. While not utilized in this project, Agisoft provides a tool to generate unique coded GCPs via the Print Markers tool, which the software can then automatically recognize and label via the Detect Markers tool [105] (see pages 71-72). These can be useful in surveys flown at low altitude, but require very high image resolution for reliability. Further, while coded GCPs are natively implemented by Agisoft, manual GCPs are widely used in the literature [47,106].


Image Optimisation and Distortion Correction
Drone imagery was imported into Adobe Lightroom for organization and processing. Lightroom is professional-grade software for photographic catalog management and bulk processing, which in addition to having powerful processing algorithms, is relatively inexpensive and user-friendly. Thus, it is often used in scientific photogrammetry for both image correction [35] and lens distortion correction [33,107]. The image corrections employed in this study were simple and uniform across the images: the contrast was increased to +30, and clarity (a mid-tone contrast enhancement) was increased to +25. This process improves performance of the SfM software in common-point identification, minimizing errors. It is especially important when target features are relatively homogenous, as is the case with the monochrome limestone bedrock of the test site (e.g., Figure 5).
Lens distortion was addressed via Lightroom's lens correction profile. This tool fixes image distortion and chromatic aberration (either automatically or manually) by reading lens information from the metadata and applying lens-specific image adjustment profiles [108-114]. The 2017 images required very little adjustment, but the corrections were essential for the 2015 GoPro images, which had strong barrel (or "fisheye") distortion. More complex and sophisticated lens correction techniques exist [41,54,115-118], but Adobe's lens profiles have been tested and recommended in the photogrammetric literature [64,119-122]. The quality of Lightroom's corrections is evident in before-and-after images (Figure 6), and in the quality of the final model.
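Profile-based corrections of this kind typically encode a polynomial radial model of the lens. A toy sketch of such a model, with invented coefficients (Lightroom's actual profiles are proprietary and considerably more elaborate):

```python
import numpy as np

def apply_radial_model(xy, k1, k2):
    """Apply a two-term Brown-style radial model to normalized image
    coordinates: x' = x * (1 + k1*r^2 + k2*r^4). Depending on the sign of
    the coefficients, this either models or (approximately) undoes
    barrel/pincushion distortion."""
    r2 = np.sum(xy**2, axis=1, keepdims=True)
    return xy * (1 + k1 * r2 + k2 * r2**2)

# Made-up coefficients for illustration; a negative k1 pulls points toward
# the image center, mimicking the inward compression of barrel distortion.
pts = np.array([[0.5, 0.0], [0.0, 0.8]])
mapped = apply_radial_model(pts, k1=-0.20, k2=0.05)
```

Points far from the optical center (large r) are displaced most, which is why fisheye imagery is usable near the frame center but badly warped at the edges.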


Model Generation Using Agisoft Photogrammetry Software
Agisoft's ease of use, relatively low cost, and high-quality output [123-125] have led to broad uptake in scientific communities including forestry [101], geology [126], geomorphology [41,56,82,127,128], and archeology [32,129,130]. The software has an intuitive workflow that guides users step-by-step through the process of generating a 3D model [105]. For this project, we used the 2017 version 1.3.4. The product has been updated and renamed since this study was completed (see www.agisoft.com/pdf/metashape_changelog.pdf) but the workflow described here is the same in the current version of the software.
Image alignment is the first workflow step. Agisoft's algorithms [105] are generally very good at aligning photos, but the software has a few percent failure rate even with systematically acquired, GPS-tagged, uniformly oriented photos, and a couple of passes may be necessary. The precise error rate varies depending on variables such as image resolution and format, image overlap, sensor type and size, lens focal length and aperture, lens geometry, camera calibration accuracy, camera GPS accuracy, target contrast, target geometry, and flight altitude, path, and speed [131][132][133]. In this study, 21 of 355 images (6%) in the 2015 survey and 4 of 211 (2%) in the 2017 survey failed to align on first pass. Selecting each of the failed images, resetting its alignment (via the Reset Camera Alignment command), and then using the Align Selected Cameras command fixed every failed alignment in this study, i.e., a 100% alignment success rate.
The sparse point cloud (i.e., a very low-resolution 3D visualization of tie points common to multiple images, which is generated during the image alignment process) permits the user to evaluate accuracy and identify problematic tie points prior to the more computationally intensive generation of dense point clouds. Each point has associated uncertainty information. Points with high uncertainty are likely to be inaccurate, so it is standard practice to improve model quality by systematic elimination of points that fail to meet defined thresholds [134]. Uncertain points were identified via two passes of Agisoft's Gradual Selection tool, with threshold set to reconstruction uncertainty = 10 [135] (as recommended by Mayer et al., 2018), and then deleted. This process may not identify all problematic reconstructions, so it is recommended that the user manually rotate the model about all axes to visually identify inaccurate tie points. For example, Agisoft sometimes mis-correlates points located in the sky and the ocean, or incorrectly reconstructs bedrock relationships such that some points 'float' above or below the model surface. These points can be manually selected and deleted. Previous work has shown that expert manual filtering is the most efficient and accurate way of removing outlier points, and an indispensable step in the model-creation process [1]. The optimized tie points are then used to construct the dense cloud.
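Conceptually, the Gradual Selection step is a filter over per-point uncertainty scores. A sketch with mock data (Agisoft computes the uncertainty values internally; the numbers here are fabricated):

```python
import numpy as np

# Mock sparse cloud: xyz tie-point positions plus a per-point
# reconstruction-uncertainty score (values invented for illustration).
rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 3))
uncertainty = rng.uniform(1, 50, size=1000)

THRESHOLD = 10.0              # mirrors the reconstruction uncertainty = 10 setting
keep = uncertainty <= THRESHOLD
filtered = points[keep]       # points exceeding the threshold are discarded
```

Only the surviving, lower-uncertainty tie points are carried forward into dense-cloud generation.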
Dense point clouds can be built at a range of resolution settings: lowest, low, medium, high, and ultra. Each increase in resolution produces an exponentially larger number of points in the final model. To achieve the highest resolution possible, all models in this study were built at ultra quality.
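As a rough rule of thumb, each step down in quality processes the source imagery at half the linear resolution of the level above, so dense-cloud point counts fall by roughly a factor of four per step. This is an approximation for illustration, not a published specification:

```python
# Approximate relative dense-cloud point counts per quality level, assuming
# a 2x-per-side (4x in area) downscale at each step below "ultra".
levels = ["ultra", "high", "medium", "low", "lowest"]
relative_points = {name: 4 ** -i for i, name in enumerate(levels)}
# e.g., a "medium" model has on the order of 1/16 the points of "ultra"
```

This geometric falloff is why the jump from high to ultra quality dominates both model detail and processing time.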
During dense-cloud creation, additional mis-correlated points are generated. Agisoft has a built-in function called Depth Filtering to help reduce the number of these points. This tool has four settings (off, mild, moderate, and aggressive). Testing the various filtering levels revealed that aggressive filtering produced better dense clouds in less time, so that setting was used in this project.
Although GCPs are not required for model generation, their inclusion improves positional accuracy [47,116]. GCPs acquired during the 2017 survey were brought into the model via Agisoft's import function. Agisoft places a marker giving an estimate of the GCP location in each image, but for precision, the user should visually check the placement, and refine if necessary. Images can be filtered by GCP presence, so that the user can efficiently work through just those images containing GCPs. For example, sometimes the digital GCP marker may not be centered on the physical GCP marker in the photograph. In such cases, the user can manually move the digital marker to the correct place.
Once all GCP markers have been accurately positioned in the model, the digital marker coordinates are updated to match the coordinates measured by GPS in the field. Whether by use of pre-coded targets or by manual input of GCP locations, the GCPs thus provide accurate anchor points for georeferencing the model. Once all GCPs have been located and anchored, the user simply updates the model projection via the Update tool. Agisoft then measures the difference between GCP anchor point coordinates and the estimated coordinates for other points in the model, and refines the projection accordingly. The precise georeferencing of the 2017 model via the GCPs was transmitted to the 2015 model when the two models were aligned in CloudCompare.
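Conceptually, updating the projection from GCP anchor points amounts to estimating a least-squares similarity transform (scale, rotation, translation) that maps the model coordinates of the markers onto their surveyed coordinates. A sketch using the Umeyama/Kabsch SVD method; this is a conceptual stand-in, not Agisoft's actual solver:

```python
import numpy as np

def fit_similarity(model_pts, gps_pts):
    """Least-squares scale, rotation, and translation mapping
    model_pts -> gps_pts (Umeyama method). Rows are 3D points."""
    mu_m, mu_g = model_pts.mean(0), gps_pts.mean(0)
    A, B = model_pts - mu_m, gps_pts - mu_g
    U, S, Vt = np.linalg.svd(B.T @ A / len(A))       # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))               # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt                                   # best-fit rotation
    scale = np.trace(np.diag(S) @ D) / A.var(0).sum()
    t = mu_g - scale * R @ mu_m                      # best-fit translation
    return scale, R, t
```

With three or more well-spread, non-collinear GCPs this system is overdetermined, which is why redundant markers improve the georeferencing.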
Polygon surface meshes were built from the georeferenced dense clouds via the Build Mesh command in the Workflow menu. Mesh surfaces have much smaller output files and are therefore much faster to process in CloudCompare. Meshes can be one of two types: arbitrary or height field. Arbitrary surfaces can be applied to any kind of object; no assumptions are made about the object being modeled, although this comes at a cost of higher memory consumption [136]. To achieve the maximum possible mesh resolution, the meshes for this project were of arbitrary type.
Building high-resolution models is computationally intensive. Using a desktop Mac Pro (with 1 TB PCIe storage, 32 GB of 1866 MHz RAM, a 3.0 GHz 8-core 16-thread Intel Xeon CPU, and two AMD FirePro D700 GPUs with 6 GB of total memory) it took ~25-30 hours to generate the dense clouds and 5-10 hours to build the meshes (the shorter times were for the lower-resolution starting image sets). Those times would be less with more recently developed multi-core processors. It is important to note that not all applications require the highest precision, and processing time should be taken into account when determining the optimal precision for addressing a specific research question.

Quantitative Differencing Using CloudCompare
CloudCompare is free open-source software that offers a comprehensive suite of tools for comparing a variety of model formats, largely via the Multiscale Model to Model Cloud Comparison (M3C2) algorithm [137]. See www.cloudcompare.org/doc/wiki/index.php?title=FILE_I/O for a full list of supported formats. Supported file types are imported into CloudCompare via a simple Open menu. The software provides quantitative differencing and statistical manipulation functions, in addition to a variety of display enhancement features (custom color ramps, shaders, handling of calibrated pictures, etc.). CloudCompare can analyze any combination of meshes and dense point clouds: point-to-point, mesh-to-point, and mesh-to-mesh [136].
For this project, we used mesh-to-mesh analysis. Although dense clouds generally have better resolution than corresponding meshes, the point cloud file sizes can be several orders of magnitude larger and are therefore computationally cumbersome to work with. Our dense cloud files were ~20 GB, while the meshes were only ~300 MB. The smaller file sizes and simpler geometries made mesh-to-mesh comparisons much faster than point cloud comparisons of the same area. The triangulated irregular network (TIN) .ply mesh file format (also known as the Stanford Triangle Format) was optimal because of its broad compatibility and ease of integration with other 3D software [50,138].
Achieving high-quality alignment is the critical step in model differencing, and CloudCompare's registration methods have been tested and used in the literature [26,139-141]. The process is iterative, with the alignment becoming better with each iteration. Three initial input values are required: theoretical overlap, iteration stop condition, and alignment scaling. A theoretical overlap of less than 100% allows CloudCompare to align partially overlapping models (as is generally the case for drone flights collected at different times) without having to scale or move one of the models to completely overlap the other [136]. Even when data are collected in controlled environments and/or with very high-precision instruments, the overlap is rarely 100% [142,143].
The iteration stop condition can be set to a maximum number of iterations (e.g., stop after 30 iterations), or to a target root mean square (RMS) improvement between iterations (e.g., stop when the improvement between iterations is less than 1 cm), whichever comes first [136]. For example, the distance between the models may decrease by several meters between the first and second iterations, but the improvement becomes gradually smaller as the alignment improves; by the twentieth and twenty-first iterations, it may be a centimeter or less. The default settings are 20 iterations or an improvement threshold of 1 × 10−5 units, and these have been used elsewhere in the literature (e.g., Vasilakos et al., 2018). For this study, an improvement threshold of 1 × 10−5 m was used, and the maximum number of iterations was set to 1,000 so that the improvement threshold would be the condition met first. The extremely small threshold ensures that the algorithm asymptotically approaches an optimized alignment, stopping only when it reaches a plateau of diminishing returns in which each additional iteration produces a tiny marginal improvement regardless of the computation time invested.
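The interaction of the two stop conditions (whichever triggers first ends the registration) can be sketched in a few lines of illustrative Python; the function and the toy convergence curve below are ours, not CloudCompare code:

```python
def run_alignment(rms_at, improvement_threshold=1e-5, max_iterations=1000):
    """Stop-condition logic as described in the text: iterate until the RMS
    improvement between successive iterations falls below the threshold, or
    the iteration cap is reached, whichever comes first."""
    previous_rms = rms_at(0)
    for i in range(1, max_iterations + 1):
        current_rms = rms_at(i)
        if previous_rms - current_rms < improvement_threshold:
            return i, current_rms        # plateau of diminishing returns reached
        previous_rms = current_rms
    return max_iterations, previous_rms  # stopped by the iteration cap

# Toy convergence curve (meters): large early gains, tiny late gains.
rms_curve = lambda i: 0.05 + 2.0 * 0.5 ** i
iterations, final_rms = run_alignment(rms_curve)
```

With a 1 × 10−5 m threshold and a 1,000-iteration cap, the threshold fires long before the cap on this toy curve, mirroring the settings used in this study.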
The last input is whether to enable model scaling for cases where the models are of slightly different sizes [136]. CloudCompare applies a scaling factor only when it improves the alignment between models, so enabling it is highly recommended; scaling has proven useful for models generated from wide-angle imagery even in highly controlled laboratory settings [58]. Alignment scaling was therefore enabled in this study.
Model alignment should maximize accuracy; thus the model with the best georeferencing and/or resolution should provide the template to which other models are aligned. In this case, the 2017 model, which was georeferenced with GCPs, was the more accurate, so the 2015 mesh was aligned to the 2017 mesh.
Vertical distances between sampled points on the two aligned mesh surfaces are calculated via the mesh-to-mesh option within CloudCompare's cloud-to-mesh (C2M) tool [141,144–146]. Note that these differences are given as absolute values (i.e., the tool does not distinguish between surface raising and lowering, but simply computes the raw distance between sampled points on the two input models). A first-party plugin called M3C2 returns signed values (negative for surface lowering, positive for surface raising) [136], but this is not required for mesh-to-mesh comparisons, which automatically differentiate between added and lost volume.
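The absolute-versus-signed distinction can be made concrete with a toy pair of gridded surfaces (an illustrative numpy sketch; the arrays and values are invented, and real C2M/M3C2 computations operate on meshes and point clouds rather than rasters):

```python
import numpy as np

# Stand-ins for sampled heights (m) on the 2015 and 2017 model surfaces.
surface_2015 = np.zeros((4, 4))
surface_2017 = np.zeros((4, 4))
surface_2017[1, 1] = 0.8    # boulder deposited here: surface raised
surface_2017[2, 2] = -0.5   # boulder removed here: surface lowered

signed = surface_2017 - surface_2015   # M3C2-style: sign encodes direction
absolute = np.abs(signed)              # C2M-style: raw distance only

raised = signed[signed > 0].sum()      # contributes to added volume
lowered = signed[signed < 0].sum()     # contributes to lost volume
```

The absolute map alone cannot separate the 0.8 m gain from the 0.5 m loss; the signed map does, which is why signed differencing is needed for volume budgeting on point clouds.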

Agisoft Models
The Agisoft models for 2015 and 2017 had resolutions of 3.7 and 2.4 cm/pixel, respectively. For the larger boulders, with y-axis lengths in the range of meters to several meters, this represents spans of tens to hundreds of pixels. Specific boulders are readily discernible, as are details of the bedrock (Figure 7). It was clear from initial comparison of the models that the majority of the area had not changed between 2015 and 2017, and that there had been no erosion of the bedrock platform, but that many individual boulders had changed locations.
Detailed visual comparisons revealed complex rearrangement dynamics in the clasts comprising the boulder ridge (Figure 7). Some boulders visible in 2015 could not be found in the 2017 model or imagery, and there were new boulders in the 2017 data that could not be identified in the 2015 model or imagery. For example, in Figure 7 there are 24 boulders with trackable movement and 52 boulders that could not be matched to the 2015 data.

We were able to calculate volumes for individual boulders in Agisoft via the built-in Measure Area and Volume tool. Multiplying volume by the measured density of 2.66 t m−3 [65,76] provided approximate boulder masses (Table 2). Figure 7 also shows movement vectors for boulders that could be identified in both models (arrows) and 'new' boulders (stars) for which the origin points could not be determined.
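The mass estimate is a single multiplication of the SfM-derived volume by the measured density; a minimal sketch (the function name is ours):

```python
# Measured clast density from the text: 2.66 t per cubic meter [65,76].
DENSITY_T_PER_M3 = 2.66

def boulder_mass_tonnes(volume_m3, density_t_per_m3=DENSITY_T_PER_M3):
    """Approximate boulder mass from an SfM-measured volume."""
    return volume_m3 * density_t_per_m3

mass_10_m3 = boulder_mass_tonnes(10.0)   # a 10 m^3 boulder: ~26.6 t
```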

Aligning Disparate Datasets with CloudCompare
The 2015 and 2017 models aligned well in the CloudCompare output. Significant changes in surface elevation due to boulder motions are clearly evident as red and blue features in Figure 8, and provide the basis for a robust quantitative analysis, as will be discussed below.
We assessed the alignment quality in this study by examining areas where we know there was zero change over the time interval. Zero difference between models is represented in the output by green. We know that most of the area underwent no change, and that is borne out by the green color that dominates Figure 8. This gives us confidence in the quality of the quantitative differencing. Minor residual distortion from the fisheye 2015 images clearly impacted model alignment and hence the differencing. This is shown by areas in Figure 8 that display as pale yellow, indicating a slight (≤30 cm) offset between the models. As there was no erosion or deposition on the bedrock platform or in the fields to the north, this yellow tint reveals that lens corrections removed most but not all of the distortion. The residual effect is minor, however, and does not obscure the actual changes that occurred; as this study (in common with most geomorphic analyses) is interested in macroscopic change, the scale of the distortion does not affect the interpretations.
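The zero-change check described here amounts to measuring residual offsets inside a mask of cells known to be stable; a minimal illustrative sketch (the values and mask are invented):

```python
import numpy as np

# Model-to-model differences (m) and a mask of cells known to be stable
# (bedrock platform, fields to the north); nonzero differences inside the
# mask can only be registration or residual-distortion error.
diff_m = np.array([[0.02, -0.01, 0.12],
                   [0.00,  0.03, 0.25]])
stable = np.array([[True, True, False],
                   [True, True, False]])
residual_rms = float(np.sqrt(np.mean(diff_m[stable] ** 2)))
max_residual = float(np.abs(diff_m[stable]).max())
```

Residuals well under the ≤30 cm distortion scale reported above give confidence that apparent changes larger than that threshold are real.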

Figure 8. CloudCompare differencing output; detailed views of the change zones appear in Figures 9 and 10. Black arrows indicate places where residual distortion in the GoPro images led to slightly imperfect surface alignment, leading to false detection of slight vertical changes by CloudCompare's C2M algorithm. Note that the majority of the output area is green (no change), indicating a successful alignment.

Quantitative Differencing via CloudCompare
The high resolution of the Agisoft models made boulder movement easily detectable in the CloudCompare output. Boulders that moved to new positions between 2015 and 2017 stand out in bright red (positive change in model surface elevation), while their previous locations are displayed as blue footprints (negative change).
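Reading the colour ramp amounts to thresholding signed differences. A hedged sketch of that binning follows; the function and the 0.30 m noise floor (taken from the residual-distortion scale reported for this comparison) are our choices, not CloudCompare internals:

```python
import numpy as np

def classify_change(signed_diff_m, noise_floor=0.30):
    """Bin signed vertical differences (m): raising above the noise floor
    displays red, lowering below it displays blue, and anything within
    +/- the floor is treated as no detectable change (green)."""
    labels = np.full(signed_diff_m.shape, "green", dtype=object)
    labels[signed_diff_m > noise_floor] = "red"     # new boulder position
    labels[signed_diff_m < -noise_floor] = "blue"   # vacated footprint
    return labels

example = classify_change(np.array([1.2, -0.9, 0.05, -0.2]))
```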
Some boulders moved considerable distances along the platform; others simply rotated in place. Boulder movement was unequally distributed: across most of the boulder ridge there were few or no changes, and differences between the 2015 and 2017 configurations are concentrated in two zones. First, on the west side, a patchy area ~10 m above high water and >50 m inland lost substantial mass (blue in Figure 8) as many small (mostly 0.5-1 t) boulders were moved. Because sizable groups of contiguous boulders were removed, they display in the CloudCompare output as a more-or-less uniform surface lowering. Their new depositional locations are represented by a diffuse array of mass additions (warmer colors) elsewhere on the boulder ridge. Erosion in this area also revealed a paleosol formerly buried beneath the boulder ridge (Figure 9). Second, along the seaward edge of the boulder ridge, a large number of individually identifiable larger clasts were dislocated (red in Figure 10).
We tracked more than 100 individual boulders, of which the largest 18 are listed in Table 2. Sixteen of the largest boulders were located more than 8 m above high water. Of the 12 for which we know both the before and after positions, 9 were transported more than 5 m. The dominant transportation mode was simple translation, but a few boulders were rotated or overturned (Figure 11, Table 2). The largest single boulder (3 m × 3 m × 1 m, ~28 t) was flipped upright ~90°. The second largest (19 t) slid 7 m. The longest transport distance was 23 m by a 3-tonne boulder. Especially notable boulders include numbers 16, 17, and 18 (Table 2), which were initially located 12 m above high water and ~60 m inland, and which by 2017 had moved 7-11 m further inland and gained 1-3 m in elevation (Figure 11).

Effective, Efficient Methodology for Quantitative Repeat Photogrammetric Analysis
Digital cameras are rapidly evolving. Increasing resolutions and the 'pixel limit' have been the subject of intense study recently, as sensors are made smaller for use in smartphones [147–150]. The action cameras used in this study, at 12 megapixels, were high-resolution for drone-mounted cameras at the time we collected these data. But now, four years and five camera generations later, GoPro's current model has 50% greater resolution at 18 megapixels (https://gopro.com/en/us/compare). Higher-end full-frame DSLR resolutions have similarly increased and, at 40-60 megapixels or better, are currently more than double those of the action cameras. The point is that although image quality continues to increase, these technological changes do not prevent scientists from conducting longitudinal studies that may have to integrate across multiple formats and levels of resolution. In the same way that legacy technologies such as black-and-white aerial photography were integrated into Geographic Information Systems studies [151,152], photogrammetry software is sufficiently versatile to normalize digital image sets with a wide range of resolutions.
The lens corrections applied to the fisheye GoPro images, although seemingly drastic (Figure 6), were effective, resulting in an Agisoft model relatively free of distortion (Figure 8). Our results show that distorted source imagery can be addressed by simple tools that are not labor-intensive (e.g., Hastedt et al., 2016; Wierzbicki, 2018). The Agisoft model built from the corrected imagery was of sufficient quality to permit quantitative comparison with a model generated by a different camera. This finding demonstrates that hardware consistency between surveys is not critical for high-resolution quantitative differencing. Examination of the output (Figure 8), specifically the pale yellow color of some areas where we know no change occurred (and which should therefore be green), shows that the quantitative effects of residual distortion are at scales ≤30 cm. Thus, even with incomplete removal of lens effects, the quantitative comparison clearly reveals the substantive changes between models.
This approach could be useful to a variety of scientists: those who changed imaging platforms in the past and did not consider linking their datasets across platforms; those who hope to change imaging platforms in the future (e.g., to take advantage of technological improvements) and are worried about preserving the consistency of their data; and scientists and citizens who wish to collaborate through the compilation and comparison of datasets collected at different times, for different purposes, or on disparate platforms. Some have advised that GoPro fisheye imagery should be avoided in SfM studies (e.g., Mosbrucker et al., 2017), but this analysis shows that distorted imagery, appropriately corrected, can be used successfully for quantitative geomorphology.
The workflow implemented in this study is ideal for efficient repeat observations. This potential for producing four-dimensional data has already been realized in a range of studies [19,20,41,46,153,154]. Much of the procedure in Agisoft can be automated via custom scripts or the Batch Process option [105]. Thus, once an effective workflow has been prototyped and tested, repeating those operations with additional datasets is very time-efficient. CloudCompare has fewer automation options [136], but setting the software up to process models requires very little time once effective alignment parameters are established through testing.
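For repeat runs, the interactive CloudCompare steps can also be scripted. As an illustration, the sketch below builds (without executing) a command line for CloudCompare's documented headless mode; the flags shown (-SILENT, -O, -ICP, -C2M_DIST) and their entity-ordering conventions are assumptions to verify against the CloudCompare wiki for your version, and the file names are placeholders:

```python
import subprocess  # only needed if the command is actually executed

def cloudcompare_batch_command(mesh_a, mesh_b, executable="CloudCompare"):
    """Assemble a headless compare run: open two meshes, register them
    with ICP, then compute cloud/mesh distances. Flag semantics should be
    checked against your CloudCompare version's documentation."""
    return [executable, "-SILENT",
            "-O", mesh_a,
            "-O", mesh_b,
            "-ICP",
            "-C2M_DIST"]

cmd = cloudcompare_batch_command("mesh_2017.ply", "mesh_2015.ply")
# subprocess.run(cmd, check=True)  # uncomment on a machine with CloudCompare
```

Wrapping the call in a loop over survey epochs would make each repeat comparison a one-line operation once the alignment parameters are settled.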

Boulder Movement
Comparison of high-resolution SfM models demonstrates that between 2015 and 2017, at least some waves in the study area were capable of transporting boulders that weigh up to 26 t, situated at considerable elevations above high water and tens of meters inland. It is impossible to hind-cast exact characteristics of the waves that drove boulder movement, because interactions between wave size, wave approach angle, coastline shape, cliff height, bathymetry, and cliff-top boulders are complex [155]. The large number of boulders transported along the seaward edge of the boulder ridge, and the variable directions of boulder movement (See Figures 7 and 10) suggest multiple rearrangement events.
The large numbers of 'missing' and 'new' boulders ( Figure 7) exemplify the complex nature of boulder transport and the challenge of tracking the movement of specific boulders. 'Missing' boulders are those that were present in the 2015 model but could not be identified in the 2017 model, and 'new' boulders are those that were not identifiable in 2015 images but were observed in 2017. Missing boulders have three possible explanations: they could have been transported out of the study site (into the ocean, or out of the model's field of view), they might have been buried under other clasts during rearrangement, or they might have been overturned, rendering them unrecognizable in the later photographs. There is no evidence that new boulders were quarried from the bedrock in this area, so 'new' boulders in the 2017 images are probably boulders that were uncovered, overturned, or transported into the field of view.
The results highlight the need to observe CBD on a regular basis in order to document year-to-year changes. The data illustrate that boulders weighing tens of tonnes are moved on short timescales, and also provide insights into the ways in which CBD are reshaped. These changes are relatively small, but over time could lead to significant reorganization of the deposits. Furthermore, the distributed nature of the changes (small areas experiencing significant erosion while adjacent boulders of the same size distribution are unaffected) illustrates the stochastic nature of change in this environment.
The two models only capture the start and end points of boulder movement, so the measured transport distances (Table 2) are minima. Boulders may in fact have travelled greater distances along non-linear paths, possibly in more than one transportation event (see, for example, Figure 12).

Advantages and Disadvantages of CloudCompare
CloudCompare is a robust and flexible open-source tool, but it does have some limitations. Most notable is its inability to save intermediate analysis steps. The comparison output can be saved as a custom ".bin" file, but currently there is no way to preserve a save-state after importing a file or after registration. For example, there is no option for saving progress after loading models into CloudCompare (a process that can take several minutes to over an hour depending on the file's size and the storage medium's read/write speed) but before registration, nor is there an option to save the two clouds' alignment once they have been registered. This means that the entire comparison must be run from start to finish without closing the CloudCompare application, and that a crash necessitates starting from scratch. Lastly, once the final comparison .bin file has been generated and saved, there is no way to go back and manually examine the two aligned clouds from which it was derived (e.g., if an unusual artifact appears in the output).
CloudCompare was effective at identifying differences between models, enabling accurate calculations of volume change and volume redistribution. Tracking the movement of individual objects, however, still requires a non-trivial time investment by the user. In this study, we attempted to pair each blue 'before' boulder footprint with a red 'after' boulder in the comparison output so that boulder travel paths could be identified. This process was complicated by 'orphan' features: boulders for which the corresponding origin footprint could not be identified, and origin footprints for which a corresponding boulder could not be found in the study area. Identifying the 'before' and 'after' boulder pairs required manually surveying the point clouds, meshes, and orthomosaics to try to match boulders by shape, texture, color, tone, and/or distinctive markings, a laborious and time-consuming process. It is important to note, however, that making those same measurements on images or models without the benefit of the CloudCompare analysis would have been far more time-consuming.

Conclusions
The analysis of drone-derived SfM model time-series via CloudCompare is a promising technique for quantifying volumetric change over time. SfM data can be collected essentially on-demand with a small field team, and the resulting models can be very high resolution (1-10 cm/pixel) and can cover kilometer-scale areas. Comparing these models via CloudCompare allows rapid and repeatable analysis of change.
This study establishes that neither consistent imaging hardware nor deployment of GCPs is required to take advantage of CloudCompare's quantitative differencing capabilities. The application of basic lens corrections in Adobe Lightroom and a judicious choice of alignment parameters within CloudCompare enabled a model derived from GoPro fisheye imagery and built without GCPs to be successfully aligned with a fully georeferenced model derived from undistorted images. We hope that this finding encourages researchers to incorporate older or alternative photographic datasets in their own work, e.g., those produced with different hardware or different imaging parameters, permitting longer timelines for geomorphic comparison.
In this study, CloudCompare detected boulder movement down to sub-decimeter scale over a two-year period despite some minor residual distortion in the 2015 model. Boulders up to 28 t were rearranged, and scores of smaller boulders were moved and reorganized. These changes are much larger than conventional wisdom would suggest and indicate that the Aran Islands coastal boulder deposits are active on a yearly basis.
This study illustrates how CloudCompare provides a straightforward toolkit for quantifying change, both at the scale of individual boulders and for deposits as a whole. This potential for producing four-dimensional data using SfM has already been realized in a wide variety of fields (e.g., Bryson et al., 2012, 2013; Eltner et al., 2015, 2017; Gillan et al., 2017; Rossini et al., 2018), and the methodology described here could be widely adopted in disciplines other than geomorphology (e.g., ecology, land-use surveying). The ease of use and minimal training required mean that this methodology can be adopted by both expert and non-expert users, opening the door to rapid data acquisition, effective use of datasets collected with different hardware, and short-term monitoring of changing sites by researchers, citizen scientists, and other stakeholders. This is important, as more frequent and detailed records are critical to developing a better understanding of CBD dynamics from one wave event to the next and the mechanisms behind boulder movement. With more data, specific wave events and their