Article

Digital Deforestation: Comparing Automated Approaches to the Production of Digital Terrain Models (DTMs) in Agisoft Metashape

by Matthew D. Howland 1,*, Anthony Tamberino 2, Ioannis Liritzis 3,4,5 and Thomas E. Levy 2

1 Jacob M. Alkow Department of Archaeology and Ancient Near Eastern Cultures, Tel Aviv University, Tel Aviv-Yafo 6997801, Israel
2 Scripps Center for Marine Archaeology, Department of Anthropology, University of California San Diego, La Jolla, CA 92092, USA
3 Key Research Institute of Yellow River Civilization and Sustainable Development, College of Environment and Planning, University of Henan, Kaifeng 475001, China
4 Department of Archaeology, School of History, Classics and Archaeology, College of Arts, Humanities & Social Sciences, The University of Edinburgh, Edinburgh EH8 9AG, UK
5 Department of Physics & Electronics, Rhodes University, P.O. Box 94, Makhanda 6140, South Africa
* Author to whom correspondence should be addressed.
Quaternary 2022, 5(1), 5; https://doi.org/10.3390/quat5010005
Submission received: 20 November 2021 / Revised: 13 December 2021 / Accepted: 4 January 2022 / Published: 14 January 2022
(This article belongs to the Special Issue Advances in Geoarchaeology and Cultural Heritage)

Abstract:
This paper tests the suitability of automated point cloud classification tools provided by the popular image-based modeling (IBM) software package Agisoft Metashape for the generation of digital terrain models (DTMs) at moderately-vegetated archaeological sites. DTMs are often required for various forms of archaeological mapping and analysis. The suite of tools provided by Agisoft is relatively user-friendly compared to many point cloud classification algorithms and does not require the use of additional software. Based on a case study from the Mycenaean site of Kastrouli, Greece, the mostly-automated, geometric classification tool “Classify Ground Points” provides the best results and produces a quality DTM that is sufficient for mapping and analysis. Each of the methods tested in this paper can likely be improved through manual editing of point cloud classification.
Keywords:
photogrammetry; DTM; GIS; IBM

1. Introduction

In recent years, aerial laser scanning (ALS; also known as LIDAR) has attracted much attention for its ability to “see through” dense vegetation [1,2,3,4,5,6,7,8]. This technology can record ground surfaces below vegetation because some fraction of the laser pulses emitted from the scanner reach the ground through small gaps between leaves. ALS-derived point clouds can therefore be processed to filter out “first returns”, representing the canopy, leaving only “last returns”, representing the ground surface [8,9,10,11,12]. One end product of such a process is a digital terrain model (DTM; also referred to as a bare-earth digital elevation model), a measure of the elevation of a ground surface that does not include vegetation or structures [13,14,15]. DTMs are critical datasets for archaeologists, useful for the identification of sites and features below dense vegetation but also for performing various kinds of landscape modeling and spatial analyses in GIS [13,14,15,16,17]. Thus, digitally stripping a site of its vegetation through the production of DTMs is critical for many avenues of archaeological inquiry.
Despite the advantages of ALS for DTM production, laser scanning approaches are not always feasible as the technique can be cost-prohibitive [2,18,19,20,21]—though costs are likely to come down with the rapid pace of technological development. As such, archaeologists often prefer the combination of low-altitude aerial photography, often from UAVs, and image-based modeling (IBM) for the collection of elevation datasets at sitewide scale [22,23,24,25,26,27,28,29,30]. Agisoft Metashape (formerly Agisoft Photoscan) stands out as one particularly common software package used for the production of 3D datasets through IBM [25,26,31,32,33]. Standard IBM procedures produce a digital surface model (DSM; i.e., an elevation dataset including both vegetation and structures) rather than a DTM [34]. DSMs are not suitable for most forms of archaeological analysis and mapping as the elevations in the dataset represent a combination of vegetation, structures, and ground surfaces. In the context of ALS, where point cloud filtering techniques are common and well-known, DSMs are infrequently used by archaeologists [14]. Among IBM practitioners, however, the use of DSMs produced from unfiltered or minimally-filtered point clouds is common [25,35].
Though IBM is often preferred by archaeologists due to cost considerations and ease of use, IBM approaches suffer in comparison to ALS because they cannot record surfaces hidden by dense vegetation. As such, a DSM produced through IBM methods does not contain data on archaeological features hidden below trees, and IBM/photogrammetry is often considered inappropriate for application in vegetated areas [20,26,36,37]. However, many—if not most—archaeological sites exist in moderately vegetated areas, often featuring some trees, shrubs, or bushes, but not enough to entirely cover the site and prevent it from being seen from above. Collecting elevation data through low-altitude aerial photography and IBM at sites such as these is an attractive proposition due to the cost-effectiveness and user-friendliness of the techniques [21]. In this context, IBM is viable, but additional data processing to produce a DTM is necessary for mapping and spatial analysis.
IBM-derived data can be processed in a number of ways to produce a DTM [38]. Geometric filters normally used for the processing of ALS data are often applied to IBM data as well [39,40]. Examples of this include the use of algorithms such as the fast Fourier transform [41], cloth simulation filtering [42,43], and a TIN-based filtering approach [38,44,45,46]. These geometric algorithms are frequently applied using specialized point cloud filtering software tools such as LASGround [46], TerraSolid [45], or LP360 [44]. IBM point clouds can also be classified based on the use of RGB and NIR imagery to identify the spectral signature of points, either through the use of an NDVI threshold [38,47,48] or a machine learning-based classification [47]. Machine learning classifiers can also be based on geometric relationships within a scene [49], or both color and geometry [50]. GIS-based filtering of DSMs provides yet another approach to DTM production from IBM data [51]. Most of these tools are suitable for use by specialists in ALS or remote sensing but are beyond the technical expertise of most field archaeologists who nevertheless require DTMs for mapping or analysis. This is problematic as a major appeal of low-altitude aerial photography (LAAP) and IBM-based approaches is their ease of use.
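As a simple illustration of the spectral route, the sketch below computes a per-point NDVI from red and near-infrared values and flags likely vegetation with a fixed threshold; the arrays and the threshold value are invented for illustration and are not drawn from the cited workflows.

```python
import numpy as np

# Hypothetical per-point red and near-infrared reflectance values (0-1 scale)
# for a small photogrammetric point cloud; real workflows read these from
# co-registered RGB and NIR imagery.
red = np.array([0.32, 0.28, 0.05, 0.07, 0.30])
nir = np.array([0.35, 0.31, 0.45, 0.52, 0.33])

# NDVI = (NIR - Red) / (NIR + Red); vegetation tends toward high positive values.
ndvi = (nir - red) / (nir + red)

# Assumed threshold: points above it are treated as vegetation and excluded
# from the ground class before DTM interpolation.
VEG_THRESHOLD = 0.4
is_ground = ndvi < VEG_THRESHOLD
print(ndvi.round(2), is_ground)
```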
Fortunately, user-friendly alternatives exist for point cloud filtering and DTM production. One prominent example is the point cloud classification toolkit provided by Agisoft Metashape [52]. The set of tools in this program, ranging from fully manual to fully automated, can be used effectively to filter out vegetation and structures from photogrammetric point clouds in an easy and straightforward way [21,28,34,53,54]. The results of these approaches have been validated as accurate and useful for analysis of bare earth surfaces through comparative analysis of derivative DSMs and DTMs [34]. However, studies measuring the accuracy of Agisoft-based point cloud filtering have found that results are not as accurate as those generated from more technically-intensive point cloud filtering algorithms or DTM production through ALS [34,38,55]. Moreover, users of the automated tools in Agisoft have also pointed out the need for manual editing of point cloud classification results [21]. Still, automated approaches for IBM point cloud filtering mean that IBM can be a viable and accurate alternative to ALS even when ground classification is necessary [21]. As such, the availability of user-friendly point cloud classification tools within Agisoft is a boon to archaeologists working at all but the most densely vegetated sites. These tools allow for the use of cost-effective and user-friendly photogrammetric methods to generate GIS-based elevation datasets that can facilitate both sophisticated spatial analyses and more straightforward goals of contour generation and mapping.
This paper explores the point cloud classification and DTM production functionality offered by Agisoft Metashape Professional [52,56]. The approaches offered by Agisoft are largely automated and user-friendly and, as such, provide a useful alternative to more technical approaches requiring specialized knowledge or additional software. The utility of these approaches has not been adequately tested despite the software’s widespread use by archaeologists. Each of the three automated approaches to point cloud classification provided by Agisoft Metashape is tested based on IBM data recorded at the Late Helladic period site of Kastrouli, in central Greece. The generation of derivative contour data for both an unmodified DSM and a DTM using standard GIS techniques is also presented in order to discuss the utility of DTM production using these techniques.
Kastrouli is located in the Phokis region of central Greece, on the Desfina Peninsula [57]. The site lies at the end of a ridgeline, on a hilltop encircled by a fortification wall. The slopes of the hill are rocky with sparse vegetation, while the hilltop is partially covered by trees, large bushes, and shrubs, as well as several low field walls. The site’s earliest and most notable remains are, in addition to the fortification wall, three tombs dating to the Late Helladic period, one of which (Tomb A) was dated more precisely to the LH IIIB 2 [58,59], though the tomb was re-assigned to the LH IIIA 2 or slightly later based on excavations in 2016 and optically stimulated luminescence dating [60,61]. The tomb was also re-used through the LH IIIC and again later in the Middle Geometric period, before being partially looted in the 20th century [58,59]. The fortification wall was also constructed in the Late Helladic period and was reinforced in later periods [59,62,63]. Excavations in 2017 and scientific analysis of finds and skeletal remains have shed additional light on the site’s occupation in the Late Helladic period [62,63,64,65,66,67]. The current case study was part of the 2016 excavation campaign, which excavated Tomb A and conducted two small wall sample probes in other parts of the site. The campaign also featured a recording program designed to comprehensively document the site and the progress of the excavation through spatial and 3D recording [58,59,60,68].

2. Materials and Methods

The data used to create a sitewide model of Kastrouli were collected using a helium balloon with an attached custom frame and a Canon EOS 50D DSLR triggered by an interval timer (updated from the LAAP system described in [25]). The balloon was tethered to an operator on the ground, who maneuvered it around the site with the goal of collecting images in transects and with a great deal of (ca. 90%+) overlap between consecutive images. The balloon, though less predictable in flight than a typical small UAV, carries a higher-resolution camera and allows for longer flight time. As such, the balloon system generates results comparable to a UAV system for photogrammetric purposes if coverage is adequate, though recent technological development in UAVs has rendered them the gold standard of LAAP recording. In total, sitewide LAAP photography at Kastrouli captured 790 images, a sufficient number to generate a detailed model of the site.
Once these images were captured, they were processed using a straightforward and standard workflow in Agisoft Metashape Pro. The ways in which these processing stages work within Agisoft are discussed in further detail elsewhere [25,27]. At Kastrouli, the model was georeferenced using nine control points established at the site with differential GPS. The model overall had a horizontal spatial error—as reported by Agisoft—of 8.23 cm, which we regard as acceptable for a sitewide model, especially given the acquisition of smaller, more precise models at key parts of the site. The processed and georeferenced model was sufficient to generate a high-resolution (2.5 cm) orthophotograph suitable for the project’s mapping goals (Figure 1) as well as a DSM of the site. As Kastrouli is moderately vegetated, this DSM includes elevations of vegetation and architecture at the site, meaning that the dataset is inadequate for mapping and spatial analysis. Thus, producing a DTM was necessary. As discussed above, many methods for producing a DTM from IBM datasets exist, though most require a relatively high level of technical expertise. Those provided by Agisoft Metashape, however, are relatively user-friendly, though varied in degree of automation and effectiveness.
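For projects that script this workflow rather than run it through the GUI, the sketch below outlines the same standard sequence using the Metashape Python API. It is a minimal sketch under assumptions: function and parameter names follow the 1.x Python API and differ slightly between versions (e.g., buildDenseCloud() became buildPointCloud() in 2.x), the image folder and project filename are hypothetical, and georeferencing with ground control points is omitted.

```python
import glob
import Metashape  # Agisoft Metashape Professional Python module (licensed)

doc = Metashape.Document()
chunk = doc.addChunk()

# Hypothetical folder of balloon/UAV photographs.
chunk.addPhotos(glob.glob("kastrouli_photos/*.JPG"))

# Standard image-based modeling steps: feature matching, camera alignment,
# depth maps, and dense point cloud generation.
chunk.matchPhotos()
chunk.alignCameras()
chunk.buildDepthMaps()
chunk.buildDenseCloud()  # buildPointCloud() in Metashape 2.x

# A DEM built from all dense-cloud points is a DSM (vegetation and structures
# included); the orthomosaic is draped over that elevation surface.
chunk.buildDem(source_data=Metashape.DataSource.DenseCloudData)
chunk.buildOrthomosaic(surface_data=Metashape.DataSource.ElevationData)

doc.save("kastrouli.psx")
```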
Agisoft Metashape provides a suite of point cloud classification tools that facilitate the production of DTMs from a subset of points in a model (Table 1) [52]. These include four options to classify points at varying levels of automation. First, a nearly fully automated classification (Classify Points) is available, applying machine learning techniques to sort points into standard LIDAR classes. Users have the option to select which classes will be used and to set a confidence parameter for point classification. At 0.00 confidence, this tool will classify every point in a scene based on Agisoft’s proprietary classification algorithm. This tool is the easiest to apply as it requires the input of only one parameter.
A second, mostly automated tool (Classify Ground Points) requires users to set three geometric parameters, which are used to distinguish points representing the bare earth from those representing other features based on the angle and distance from adjacent points [34,38]. This tool uses a TIN-based algorithm [21]. The Classify Ground Points tool in Agisoft Metashape is very easy to use but requires an iterative process in which the three geometric parameters used to differentiate points are manually adjusted and the tool re-run in order to find a quality result [34]. As such, it is difficult to ensure that the best possible result is attained.
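Continuing the same hedged sketch, the two automated classifiers can also be invoked programmatically; the geometric call below uses the parameter values listed in Table 1. Method and argument names are taken from the 1.x Python API reference and may vary by version, so treat this as an illustration rather than a definitive recipe.

```python
# Geometric, TIN-based classification (Classify Ground Points), using the
# parameter values that produced the best result at Kastrouli (Table 1).
chunk.dense_cloud.classifyGroundPoints(
    max_angle=15.0,     # degrees
    max_distance=0.05,  # meters
    cell_size=10.0,     # meters
)

# Fully automated, machine learning-based classification (Classify Points),
# run at 0.00 confidence so that every point receives a class label.
# Source/target class arguments can also be supplied; exact keywords are
# version-dependent.
chunk.dense_cloud.classifyPoints(confidence=0.0)
```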
A third tool provided by Agisoft (Select Points by Color) allows for the classification of points by their color values, with an option to set a tolerance range. This tool can be applied to select and classify points into the various categories one at a time. At Kastrouli, a medium-light shade of orange (hex code #b69b8a), generally representing the color of the site’s surface in the acquired imagery, was selected for identifying ground points. This tool also requires an iterative process in which the color and tolerance are adjusted in order to find a visually optimal result.
The final tool (Assign Class) is fully manual, allowing users to select points by hand and sort them into classes. The first three, automated approaches are tested and presented here. In each case, the user-provided parameter values were chosen based on the best appearance of the resulting classification (Table 1); these values will naturally vary by site and data collection methods. In order to test the reliability of these approaches, results are presented below without any manual editing of point cloud classification, which should normally be applied for best results [21].
Producing a DTM from a subset of a dense point cloud in Agisoft Metashape is a straightforward task and only requires the selection of relevant point classes during DEM production [52]. A DTM was produced from the results of each point cloud classification process described above. The results of these processes are presented below, along with smoothed contours generated from the DTMs/DSM for comparative purposes.
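To round out the hedged Python sketch, a DTM can then be built by restricting DEM interpolation to the ground class and exported as a GeoTIFF for use in GIS; the output filename is again hypothetical and argument names depend on the Metashape version.

```python
# Build a DTM by interpolating only points classified as ground.
chunk.buildDem(
    source_data=Metashape.DataSource.DenseCloudData,
    classes=[Metashape.PointClass.Ground],
)

# Export the bare-earth elevation dataset for GIS-based mapping and contours.
chunk.exportRaster("kastrouli_dtm.tif",
                   source_data=Metashape.DataSource.ElevationData)
doc.save()
```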

3. Results

The methods described above resulted in four elevation models of 2.5 cm spatial resolution: an unmodified DSM (Figure 2a) and three DTMs produced through classifying points geometrically (Figure 2b), through color selection (Figure 2c), and using the fully automatic procedure (Figure 2d). Each DTM is intended to represent the bare earth surface of the site, stripped of vegetation and architecture, though results vary by method. In each method, in areas of the site where points were classified as ground, the cloud of points was used by Agisoft to generate a continuous raster surface. However, in areas covered by vegetation or architecture in which no ground-classified points existed due to the limitations of photography and photogrammetry discussed above, the program interpolated a surface from the ground points on each side of the open space. In other words, the elevation measurements in areas covered by vegetation or stone walls serve as estimates based on the nearest ground points [21]. Ultimately, the entire DTM is generated through an interpolative process, though this interpolation becomes more speculative in areas with fewer relevant points—i.e., those with dense vegetation and a complex ground surface [21]. It is difficult to quantify the error in elevation induced by the interpolation of vegetated areas, given the lack of a control dataset of points below vegetation. Testing the quality of the DTM-generation process was not a primary goal of the field project, and so the data needed to conduct this type of accuracy testing were not collected in the field. However, visual inspection of the resulting DTMs provides insight into the relative utility of the results, and subsequent research can investigate the accuracy of each classification method [38].

4. Discussion

Visual inspection of each resulting DEM is useful for understanding the extent to which each point cloud classification method was successful in identifying ground points, and therefore in removing vegetation and architecture at the site from the elevation model. The DSM produced without classifying points (Figure 2a) serves as a control for comparison. On this DSM, the vegetation is clearly visible as roughly circular high points across the site, especially near its summit. Architecture is also apparent on closer inspection.
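One way to make such a visual comparison concrete is to difference the DSM and a DTM, which isolates exactly what the classification removed (in effect, a vegetation and structure height map). The sketch below assumes two co-registered GeoTIFF exports with hypothetical filenames and uses rasterio and numpy; it illustrates the check rather than reproducing the published workflow.

```python
import numpy as np
import rasterio

# Hypothetical, co-registered rasters exported at the same 2.5 cm resolution.
with rasterio.open("kastrouli_dsm.tif") as dsm_src, \
     rasterio.open("kastrouli_dtm.tif") as dtm_src:
    dsm = dsm_src.read(1, masked=True)
    dtm = dtm_src.read(1, masked=True)
    profile = dtm_src.profile

# Positive differences mark vegetation and architecture removed by classification.
removed = (dsm - dtm).filled(np.nan).astype("float32")
print("Max removed height (m):", np.nanmax(removed))

# Write the difference surface for inspection in GIS.
profile.update(dtype="float32", nodata=np.nan, count=1)
with rasterio.open("kastrouli_removed.tif", "w", **profile) as dst:
    dst.write(removed, 1)
```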
Of the other elevation models, the DTM produced by the geometric classification tool Classify Ground Points is clearly superior. The model is free of any elevations representing large vegetation, and architecture at the site is not apparent in the dataset. Previous testing of this point cloud classification technique for DTM production has reported that quantitative error results for the approach are highly competitive, but qualitative results are less so [43]. In the present case study, visual inspection shows that the qualitative results, in terms of the algorithm’s ability to differentiate vegetation from ground points, are superior to those of the other methods provided within Agisoft. However, further testing can examine how its quantitative error rates compare to other approaches.
Other methods provided by Agisoft were less successful in differentiating ground points from vegetation. The color-based approach (Figure 2c), despite classifying the lowest number of points of the three classification techniques, was the least successful in removing vegetation from the DEM. The DEM produced through this method clearly illustrates that points on top of vegetation were classified as ground points, despite the iterative process and fine-tuning of the color and tolerance parameters. Despite the flawed result, this tool still has potential for point cloud classification, as it can be used in combination with other tools, including manual identification, to produce a more refined result. However, at Kastrouli, this tool was not sufficient to produce a DTM without manual editing.
The final approach tested here is the fully automated tool, Classify Points, which applies Agisoft’s proprietary algorithm, developed through machine learning, to sort points in the model into classes. Points identified as “ground” were subsequently used to produce the DEM. Unfortunately, though easy to use, this approach was not effective in classifying points at Kastrouli. It classified many points on the surface of the site as “building” (orange points in Figure 3), despite the lack of buildings in the scene. Low stone walls across the site were correctly classified as building; however, bedrock outcrops and much of the bare earth at the site were incorrectly grouped as such. Only a fraction of the actual ground surface was classified as “ground” (brown points in Figure 3), and this class also included enough low-lying vegetation to disrupt the quality of the DTM. However, much of the vegetation at the site was classified correctly, suggesting potential for more nuanced use of this approach to classify and remove vegetation rather than relying on the identification of ground points. Overall, this tool was not accurate at Kastrouli, though it may be effective in other contexts and sites.
Though these approaches were of varying effectiveness, in total they suggest that the use of multiple methods, combined with some manual editing of point cloud classification, will provide effective results. At Kastrouli, the geometric procedure Classify Ground Points provided quality results without any need for manual editing. Nevertheless, producing a highly accurate DTM would likely require manual editing of automated classifications [21], which in turn demands many hours of additional labor: classification by a trained eye able to differentiate rocky outcrops from stone walls, as well as ground-truthing. These tasks are important for best results, but they substantially complicate and limit the efficiency of the overall process. As such, projects producing DTMs from photogrammetric datasets should consider the balance between the efficiency of data processing and the ultimate accuracy of the final DTM.
In this case study, the geometric approach produced a DTM that generally reflects the topography of Kastrouli, especially on the top area of the site. The elevation dataset clearly has removed the artificially high measurements that represent the highest trees and bushes. This is a valuable accomplishment, as the current vegetation on the site most likely does not reflect ancient vegetation, and these elements also disrupt mapping and analysis efforts. The smoother surface of the DTM on the top of the site more accurately reflects the ground surface in that area, as it is largely flat and gently sloped within the area encircled by the fortification wall. The vast majority of the archaeological remains at Kastrouli are also found in this area. That the DTM generation method performed most strongly in this area is an encouraging result for study and analysis of the site’s anthropogenic component and therefore the project’s goals. This is evidenced by the contours generated from the DTM, which—by contrast with contours generated from an unmodified DEM of the site—provide a much simpler and more intuitive representation of the site’s elevation (Figure 4). The DTM-generated contours serve as a strong basis for mapping of key site features and excavation areas. Meanwhile, many of the contours generated from the DSM appear to represent the topography of the vegetation at the site rather than the variations in ground level elevations. Ultimately, the DTM produced through these methods provides a more useful basis for understanding ancient occupation of the site as it more closely reflects the ancient occupation surface rather than more recent vegetation. Future investigation at the site will be able to make use of this dataset to examine the patterns of occupation across the site.
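As a quick illustration of the contouring step, the sketch below reads an exported DTM and draws contour lines with matplotlib; the filename and the 0.5 m interval are assumptions for illustration, and the smoothed production contours in Figure 4 would normally be generated in GIS software.

```python
import matplotlib.pyplot as plt
import numpy as np
import rasterio

# Hypothetical DTM exported from Metashape (see the earlier sketch).
with rasterio.open("kastrouli_dtm.tif") as src:
    dtm = src.read(1, masked=True)
    left, bottom, right, top = src.bounds

# Contour the bare-earth surface at an assumed 0.5 m interval.
levels = np.arange(np.floor(dtm.min()), np.ceil(dtm.max()) + 0.5, 0.5)
fig, ax = plt.subplots(figsize=(8, 8))
cs = ax.contour(dtm, levels=levels, extent=(left, right, bottom, top),
                origin="upper", colors="black", linewidths=0.5)
ax.clabel(cs, fmt="%.1f")
ax.set_title("DTM-derived contours (illustrative)")
plt.savefig("kastrouli_contours.png", dpi=300)
```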
One of the main benefits of the workflow described here is the relative simplicity and efficiency of the approach. Combined LAAP-IBM methods have become relatively standard applications at archaeological sites, with Agisoft also commonly being applied for its integrated photogrammetric workflow [27,33]. The steps for generating a DTM are also efficient in terms of the manual labor required, with the most time-consuming steps being field collection of data and testing different point selection and classification methods and standards. Refining the classification of the point cloud to reduce false negatives and false positives can also be manually intensive, depending on the standards that must be met. The work described here produced quality results without manual editing in order to demonstrate the efficiency of that process, though review and manual editing of point cloud classification would likely improve outputs [21]. Processing photographs into 3D models, and subsequently DTMs, is another potential limiting factor on the efficiency of the process, depending on the quality of the computational resources available. In general, however, these LAAP, photogrammetric processing, and point cloud classification methods serve as an effective and efficient approach to DTM generation and in many cases may complement already-practiced workflows on archaeological projects. These methods also allow for the generation of accurate and simple contours that are important for cartography at archaeological sites. The approach presented here is applicable to all moderately-vegetated sites recorded with IBM, though the quality of point cloud classification methods may vary according to the topography and vegetation at the site. Though more sophisticated methods may also provide better results, the methods tested here are extremely user-friendly and can be improved by manual editing of point cloud classification. As such, this approach provides an easy and straightforward method of DTM production.
The drawbacks of the approach described above are its applicability only to areas with limited vegetation or architecture and its dependence on automated methods that may not capture the complexity of an archaeological site. In areas with no vegetation or architecture, the DTM does not depend on interpolation, which means that the dataset can achieve high fidelity to the ground surface given the potential for photogrammetric methods to achieve high levels of accuracy [35,69]. However, any part of a site where the ground surface is obscured by vegetation or architecture relies on interpolation from the nearest ground points, which increases the amount of estimation at the expense of actual measurement. Thus, more vegetated or built-up sites will suffer in accuracy compared to more sparsely-covered sites. This trade-off should be factored in when choosing this method for DTM generation. The combined approach described above should also be applied with the understanding that automated methods lack the sophistication provided by expertise and a trained eye. A strictly automated approach may be unable to distinguish between a low stone wall and a rocky outcrop, for example. However, tools for manual point cloud classification allow archaeologists to consider the extent to which they would like to modify or replace the results of an automated system. Ultimately, maps and spatial analyses are subjective enterprises that can be facilitated by objective methods, so a combination of automated and manual approaches seems appropriate for this type of study. Looking forward, future studies should address the extent to which an interpolative method of creating a DTM reflects the reality of the ground surface below vegetation and architecture by comparing the results generated using the method described above to measurements derived by other means [38,55]. This verification will help to demonstrate the accuracy of an IBM-based point cloud classification approach to DTM generation.
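If independent measurements become available, a verification along these lines could sample the DTM at surveyed checkpoints and report vertical error statistics. The checkpoint coordinates below are invented placeholders and the filename follows the earlier hypothetical export; this sketches the idea of the comparison rather than reporting any result from Kastrouli.

```python
import numpy as np
import rasterio

# Invented checkpoint list: (easting, northing, surveyed elevation in meters).
checkpoints = [
    (462101.3, 4258740.8, 512.41),
    (462155.9, 4258712.2, 514.07),
    (462188.4, 4258765.5, 510.88),
]

with rasterio.open("kastrouli_dtm.tif") as src:
    xy = [(x, y) for x, y, _ in checkpoints]
    dtm_z = np.array([val[0] for val in src.sample(xy)])

survey_z = np.array([z for _, _, z in checkpoints])
errors = dtm_z - survey_z

print("Mean error (m):", round(float(errors.mean()), 3))
print("RMSE (m):", round(float(np.sqrt((errors ** 2).mean())), 3))
```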

5. Conclusions

A combined LAAP-photogrammetry-point cloud classification approach is an effective and efficient workflow for generating a DTM of an archaeological site sufficient for cartography and spatial analysis. Such a methodology bests LiDAR on cost and may integrate more fully into existing archaeological practices. The popular IBM software package Agisoft Metashape provides a number of mostly-automated tools for point cloud classification that can be applied to DTM production. Of these, the Classify Ground Points tool appears to be the most reliable based on a case study from Kastrouli, Greece. However, it is likely that automated methods can be improved by manual editing of results, at the expense of efficiency. Overall, the use of automated point cloud classification tools in Agisoft provides a streamlined and user-friendly process for the production of DTMs. This approach thus represents a useful addition to the toolbox of archaeological projects interested in mapping and spatial analysis at moderately-vegetated or moderately built-up sites.

Author Contributions

Conceptualization, M.D.H.; Methodology, M.D.H. and A.T.; Investigation, M.D.H.; Writing—original draft preparation, M.D.H.; Writing—review and editing, M.D.H., A.T., I.L. and T.E.L.; Visualization, M.D.H.; Project administration, I.L. and T.E.L.; Funding acquisition, T.E.L. All authors have read and agreed to the published version of the manuscript.

Funding

Some graduate student support was provided by the University of California Office of the President through a Research Catalyst Grant for At-Risk Cultural Heritage and the Digital Humanities (Grant ID: CA16-376911; Lead PI: Thomas Levy).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

GIS data described in this article are available in a brief StoryMap: https://storymaps.arcgis.com/stories/39280437536b4c458817fa4aa5c0b541 (accessed on 19 November 2021). 3D datasets are available on request.

Acknowledgments

The authors would like to thank the Ministry of Culture for granting permission to dig and A. Tsaroucha, member and representative of the Ephoreia of Antiquities at Phokis (EAPh) in Delphi, for excellent collaboration. We also thank Malcolm Wiener for his thoughts about the Tomb A assemblage, as well as Fotis Dasios for discussing with us the Sykia chamber tomb and its finds. We are grateful to the villagers of Kastrouli for hosting the 2016 expedition in their homes and tavernas with warmth and good will. The Kastrouli municipality was very helpful throughout the project, and we are grateful for their help and support. We also thank the president of the Desfina municipal community, Babis Kaliakoudas, for excellent support before, during, and after the 2016 fieldwork; Panagiota Karamani, conservator, for her assistance; Fotini Koukou, the guardian, for excellent cooperation; and Alina Levy for her care with logistics and the whole project. Tom Levy is thankful to the Kershaw Family Foundation for their support, and we are grateful to Liz Anne and Phokion Potamianos of La Jolla, California, for their generous support of this project given to Tom Levy. All archaeological works and surveys were supervised by the EAPh’s delegate, Anthoula Tsaroucha, and made possible through the constant support of the EAPh and Nancy Psalti, its director.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Opitz, R.S.; Cowley, D.C. Interpreting Archaeological Topography: Airborne Laser Scanning, 3D Data and Ground Observation; Oxbow Books (Occasional Publication of the Aerial Archaeology Research Group, 5): Oxford, UK, 2013. [Google Scholar]
  2. Casana, J.; Laugier, E.J.; Hill, A.C.; Reese, K.M.; Ferwerda, C.; McCoy, M.D.; Ladefoged, T. Exploring archaeological landscapes using drone-acquired lidar: Case studies from Hawai’i, Colorado, and New Hampshire, USA. J. Archaeol. Sci. Rep. 2021, 39, 103133. [Google Scholar]
  3. Chase, A.F.; Chase, D.Z.; Weishampel, J.F.; Drake, J.B.; Shrestha, R.L.; Slatton, K.C.; Awe, J.J.; Carter, W.E. Airborne LiDAR, archaeology, and the ancient Maya landscape at Caracol, Belize. J. Archaeol. Sci. 2011, 38, 387–398. [Google Scholar] [CrossRef]
  4. Devereux, B.J.; Amable, G.S.; Crow, P.; Cliff, A.D. The potential of airborne lidar for detection of archaeological features under woodland canopies. Antiquity 2005, 79, 648–660. [Google Scholar] [CrossRef]
  5. Doneus, M.; Briese, C.; Fera, M.; Janner, M. Archaeological prospection of forested areas using full-waveform airborne laser scanning. J. Archaeol. Sci. 2008, 35, 882–893. [Google Scholar] [CrossRef]
  6. Johnson, K.M.; Ouimet, W.B. Rediscovering the lost archaeological landscape of southern New England using airborne Light Detection and Ranging (LiDAR). J. Archaeol. Sci. 2014, 43, 9–20. [Google Scholar] [CrossRef]
  7. Inomata, T.; Triadan, D.; Pinzón, F.; Burham, M.; Ranchos, J.L.; Aoyama, K.; Haraguchi, T. Archaeological application of airborne LiDAR to examine social changes in the Ceibal region of the Maya lowlands. PLoS ONE 2018, 13, e0191619. [Google Scholar]
  8. Doneus, M.; Briese, C. Full-waveform airborne laser scanning as a tool for archaeological reconnaissance. BAR Int. Ser. 2006, 1568, 99. [Google Scholar]
  9. Meng, X.; Currit, N.; Zhao, K. Ground filtering algorithms for airborne LiDAR data: A review of critical issues. Remote Sens. 2010, 2, 833–860. [Google Scholar] [CrossRef] [Green Version]
  10. Montealegre, A.L.; Lamelas, M.T.; de la Riva, J. A comparison of open-source LiDAR filtering algorithms in a Mediterranean forest environment. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 4072–4085. [Google Scholar] [CrossRef] [Green Version]
  11. Opitz, R.S. An overview of airborne and terrestrial laser scanning in archaeology. In Interpreting Archaeological Topography: 3D Data, Visualization and Observation; Opitz, R.C., Cowley, D.C., Eds.; Oxbow Books: Oxford, UK, 2013; pp. 13–31. [Google Scholar]
  12. Sithole, G.; Vosselman, G. Experimental comparison of filter algorithms for bare-Earth extraction from airborne laser scanning point clouds. ISPRS J. Photogramm. Remote Sens. 2004, 59, 85–101. [Google Scholar] [CrossRef]
  13. Mlekuž, D. Skin deep: LiDAR and good practice of landscape archaeology. In Good Practice in Archaeological Diagnostics; Corsi, C., Slapšak, B., Vermeulen, F., Eds.; Springer: Cham, Switzerland, 2013; pp. 113–129. [Google Scholar]
  14. Štular, B.; Eichert, S.; Lozić, E. Airborne LiDAR Point Cloud Processing for Archaeology. Pipeline and QGIS Toolbox. Remote Sens. 2021, 13, 3225. [Google Scholar] [CrossRef]
  15. White, D.A. LiDAR, point clouds, and their archaeological applications. In Mapping Archaeological Landscapes from Space; Comer, D.C., Harrower, M.J., Eds.; Springer: New York, NY, USA, 2013; pp. 175–186. [Google Scholar]
  16. Evans, D.H.; Fletcher, R.J.; Pottier, C.; Chevance, J.-B.; Soutif, D.; Tan, B.S.; Im, S.; Ea, D.; Tin, T.; Kim, S.; et al. Uncovering archaeological landscapes at Angkor using lidar. Proc. Natl. Acad. Sci. USA 2013, 110, 12595–12600. [Google Scholar] [CrossRef] [Green Version]
  17. Masini, N.; Lasaponara, R. Airborne lidar in archaeology: Overview and a case study. In Computational Science and Its Applications–ICCSA 2013, Proceedings of the International Conference on Computational Science and Its Applications, Ho Chi Minh City, Vietnam, 24–27 June 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 663–676. [Google Scholar]
  18. Fernandez-Diaz, J.C.; Carter, W.E.; Shrestha, R.L.; Glennie, C.L. Now you see it… now you don’t: Understanding airborne mapping LiDAR collection and data product generation for archaeological research in Mesoamerica. Remote Sens. 2014, 6, 9951–10001. [Google Scholar] [CrossRef] [Green Version]
  19. Howland, M.D. 3D Recording in the Field: Style Without Substance? In Cyber-Archaeology and Grand Narratives; Jones, I.W.N., Levy, T.E., Eds.; Springer: Cham, Switzerland, 2018; pp. 19–33. [Google Scholar]
  20. Vilbig, J.M.; Sagan, V.; Bodine, C. Archaeological surveying with airborne LiDAR and UAV photogrammetry: A comparative analysis at Cahokia Mounds. J. Archaeol. Sci. Rep. 2020, 33, 102509. [Google Scholar] [CrossRef]
  21. Jiménez-Jiménez, S.I.; Ojeda-Bustamante, W.; Marcial-Pablo, M.d.J.; Enciso, J. Digital Terrain Models Generated with Low-Cost UAV Photogrammetry: Methodology and Accuracy. ISPRS Int. J. Geo-Inf. 2021, 10, 285. [Google Scholar] [CrossRef]
  22. Campana, S. Drones in archaeology. State-of-the-art and future perspectives. Archaeol. Prospect. 2017, 24, 275–296. [Google Scholar] [CrossRef]
  23. Hill, A.C.; Rowan, Y. Droning on in the Badia: UAVs and site documentation at Wadi al-Qattafi. Near East. Archaeol. 2017, 80, 114–123. [Google Scholar] [CrossRef]
  24. Hill, A.C. Economical drone mapping for archaeology: Comparisons of efficiency and accuracy. J. Archaeol. Sci. Rep. 2019, 24, 80–91. [Google Scholar] [CrossRef]
  25. Howland, M.D.; Kuester, F.; Levy, T.E. Photogrammetry in the field: Documenting, recording, and presenting archaeology. Mediterr. Archaeol. Archaeom. 2014, 14, 101–108. [Google Scholar]
  26. Magnani, M.; Douglass, M.; Schroder, W.; Reeves, J.; Braun, D.R. The digital revolution to come: Photogrammetry in archaeological practice. Am. Antiq. 2020, 85, 737–760. [Google Scholar] [CrossRef]
  27. Olson, B.R.; Placchetti, R.A.; Quartermaine, J.; Killebrew, A.E. The Tel Akko Total Archaeology Project (Akko, Israel): Assessing the suitability of multi-scale 3D field recording in archaeology. J. Field Archaeol. 2013, 38, 244–262. [Google Scholar] [CrossRef]
  28. Themistocleous, K. The use of UAVs for cultural heritage and archaeology. In Remote Sensing for Archaeology and Cultural Landscapes; Hadjimitsis, D.G., Themistocleous, K., Cuca, B., Agapiou, A., Lysandrou, V., Lasaponara, R., Masini, N., Schreier, G., Eds.; Springer: Cham, Switzerland, 2020; pp. 241–269. [Google Scholar]
  29. Waagen, J. New technology and archaeological practice. Improving the primary archaeological recording process in excavation by means of UAS photogrammetry. J. Archaeol. Sci. 2019, 101, 11–20. [Google Scholar]
  30. Wernke, S.A.; Adams, J.A.; Hooten, E.R. Capturing Complexity: Toward an Integrated Low-Altitude Photogrammetry and Mobile Geographic Information System Archaeological Registry System. Adv. Archaeol. Pract. 2014, 2, 147–163. [Google Scholar] [CrossRef] [Green Version]
  31. Hill, A.C.; Rowan, Y.; Kersel, M.M. Mapping with aerial photographs: Recording the past, the present, and the invisible at Marj Rabba, Israel. Near East. Archaeol. 2014, 77, 182–186. [Google Scholar] [CrossRef]
  32. Jones, C.A.; Church, E. Photogrammetry is for everyone: Structure-from-motion software user experiences in archaeology. J. Archaeol. Sci. Rep. 2020, 30, 102261. [Google Scholar] [CrossRef]
  33. Verhoeven, G. Taking Computer Vision Aloft–Archaeological Three-Dimensional Reconstructions from Aerial Photographs with Photoscan. Archaeol. Prospect. 2011, 18, 67–73. [Google Scholar] [CrossRef]
  34. Dubbini, M.; Curzio, L.I.; Campedelli, A. Digital elevation models from unmanned aerial vehicle surveys for archaeological interpretation of terrain anomalies: Case study of the Roman castrum of Burnum (Croatia). J. Archaeol. Sci. Rep. 2016, 8, 121–134. [Google Scholar] [CrossRef] [Green Version]
  35. Uysal, M.; Toprak, A.S.; Polat, N. DEM generation with UAV Photogrammetry and accuracy analysis in Sahitler hill. Measurement 2015, 73, 539–543. [Google Scholar] [CrossRef]
  36. O’Driscoll, J. Landscape applications of photogrammetry using unmanned aerial vehicles. J. Archaeol. Sci. Rep. 2018, 22, 32–44. [Google Scholar] [CrossRef]
  37. Sapirstein, P.; Murray, S. Establishing best practices for photogrammetric recording during archaeological fieldwork. J. Field Archaeol. 2017, 42, 337–350. [Google Scholar] [CrossRef]
  38. Anders, N.; Valente, J.; Masselink, R.; Keesstra, S. Comparing filtering techniques for removing vegetation from UAV-based photogrammetric point clouds. Drones 2019, 3, 61. [Google Scholar] [CrossRef] [Green Version]
  39. Zeybek, M.; Şanlıoğlu, İ. Point cloud filtering on UAV based point cloud. Measurement 2019, 133, 99–111. [Google Scholar] [CrossRef]
  40. Serifoglu Yilmaz, C.; Gungor, O. Comparison of the performances of ground filtering algorithms and DTM generation from a UAV-based point cloud. Geocarto Int. 2016, 33, 1–16. [Google Scholar] [CrossRef]
  41. Fernández-Lozano, J.; Gutiérrez-Alonso, G. Improving archaeological prospection using localized UAVs assisted photogrammetry: An example from the Roman Gold District of the Eria River Valley (NW Spain). J. Archaeol. Sci. Rep. 2016, 5, 509–520. [Google Scholar] [CrossRef]
  42. Polat, N.; Uysal, M. DTM generation with UAV based photogrammetric point cloud. ISPRS 2017, XLII-4/W6, 77–79. [Google Scholar] [CrossRef] [Green Version]
  43. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An easy-to-use airborne lidar data filtering method based on cloth simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
  44. Jensen, J.L.R.; Mathews, A.J. Assessment of image-based point cloud products to generate a bare earth surface and estimate canopy heights in a woodland ecosystem. Remote Sens. 2016, 8, 50. [Google Scholar] [CrossRef] [Green Version]
  45. Rahmayudi, A.; Rizaldy, A. Comparison of Semi Automatic DTM from Image Matching with DTM from LIDAR. ISPRS 2016, 41, 373–380. [Google Scholar]
  46. Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M.Y. Filtering Photogrammetric Point Clouds using Standard LIDAR Filters Towards DTM Generation. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 4, 319–326. [Google Scholar] [CrossRef] [Green Version]
  47. Durupt, M.; Flamanc, D.; le Bris, A.; Iovan, C.; Champion, N. Evaluation of the potential of Pleiades system for 3D city models production: Building, vegetation and DTM extraction. In Proceedings of the ISPRS Commission I Symposium, Karlsruhe, Germany, 9–12 October 2016. [Google Scholar]
  48. Skarlatos, D.; Vlachos, M. Vegetation removal from UAV derived DSMS, using combination of RGB and NIR imagery. In Proceedings of the ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, Riva del Garda, Italy, 4–7 June 2018; Volume IV-2, pp. 255–262. [Google Scholar]
  49. Özdemir, E.; Remondino, F.; Golkar, A. Aerial point cloud classification with deep learning and machine learning algorithms. ISPRS 2019, 42, 843–849. [Google Scholar] [CrossRef] [Green Version]
  50. Becker, C.; Rosinskaya, E.; Häni, N.; d’Angelo, E.; Strecha, C. Classification of aerial photogrammetric 3D point clouds. Photogramm. Eng. Remote Sens. 2018, 84, 287–295. [Google Scholar] [CrossRef]
  51. Sammartano, G.; Spanò, A. DEM Generation based on UAV Photogrammetry Data in Critical Areas. In Proceedings of the GISTAM, Rome, Italy, 26–27 April 2016; pp. 92–98. [Google Scholar]
  52. Dense Cloud Classification. Available online: https://agisoft.freshdesk.com/support/solutions/articles/31000148866-dense-cloud-classification#Automatic-Classify-Ground-Points%C2%A0 (accessed on 31 October 2021).
  53. Hatzopoulos, J.N.; Stefanakis, D.; Georgopoulos, A.; Tapinaki, S.; Pantelis, V.; Liritzis, I. Use of Various Surveying Technologies to 3D Digital Mapping and Modelling of Cultural Heritage Structures for Maintenance and Restoration Purposes: The Tholos in Delphi, Greece. Mediterr. Archaeol. Archaeom. 2017, 17, 311–336. [Google Scholar]
  54. Sonnemann, T.F.; Hung, J.U.; Hofman, C.L. Mapping indigenous settlement topography in the Caribbean using drones. Remote Sens. 2016, 8, 791. [Google Scholar] [CrossRef] [Green Version]
  55. Salach, A.; Bakuła, K.; Pilarska, M.; Ostrowski, W.; Górski, K.; Kurczyński, Z. Accuracy assessment of point clouds from LiDAR and dense image matching acquired using the UAV platform for DTM creation. ISPRS Int. J. Geo-Inf. 2018, 7, 342. [Google Scholar] [CrossRef] [Green Version]
  56. Tutorial (Intermediate Level): Dense Cloud Classification and DTM Generation with Agisoft PhotoScan Pro 1.1. Available online: https://www.agisoft.com/pdf/PS_1.1%20-Tutorial%20(IL)%20-%20Classification%20and%20DTM.pdf (accessed on 31 October 2021).
  57. Liritzis, I. Kastrouli fortified settlement (desfina, phokis, Greece): A chronicle of research. Sci. Cult. 2021, 7, 17–32. [Google Scholar]
  58. Raptopoulos, S. Phokis (Φωκίδα). Arkheologiko Deltio 2005, 60, 463–464. [Google Scholar]
  59. Raptopoulos, S. Mycenaean tholos tomb in Desfina of Phokis (Μυκηναϊκός θολωτός τάφος στη Δεσφίνα Φωκίδος). Arkheologiko Ergo Thessal. Kai Ster. Elladas 2012, 3, 1071–1078. [Google Scholar]
  60. Liritzis, I.; Polymeris, G.S.; Vafiadou, A.; Sideris, A.; Levy, T.E. Luminescence dating of stone wall, tomb and ceramics of Kastrouli (Phokis, Greece) Late Helladic settlement: Case study. J. Cult. Herit. 2019, 35, 76–85. [Google Scholar] [CrossRef]
  61. Sideris, A.; Liritzis, I.; Liss, B.; Howland, M.D.; Levy, T.E. At-Risk Cultural Heritage: New Excavations and Finds from the Mycenaean Site of Kastrouli, Phokis, Greece. Mediterr. Archaeol. Archaeom. 2017, 17, 271–285. [Google Scholar]
  62. Liritzis, I.; Sideris, A. The Mycenaean Site of Kastrouli, Phokis, Greece: Second Excavation Season, July 2017. Mediterr. Archaeol. Archaeom. 2018, 18, 209–224. [Google Scholar]
  63. Baziotis, I.; Xydous, S.; Manimanaki, S.; Liritzis, I. An integrated method for ceramic characterization: A case study from the newly excavated Kastrouli site (Late Helladic). J. Cult. Herit. 2020, 42, 274–279. [Google Scholar] [CrossRef]
  64. Chovalopoulou, M.-E.; Bertsatos, A.; Manolis, S.K. Identification of skeletal remains from a Mycenaean burial in Kastrouli-Desfina, Greece. Mediterr. Archaeol. Archaeom. 2017, 17, 265–269. [Google Scholar]
  65. Koh, A.J.; Birney, K.J.; Roy, I.M.; Liritzis, I. The Mycenaean citadel and environs of Desfina-Kastrouli: A transdisciplinary approach to southern Phokis. Mediterr. Archaeol. Archaeom. 2020, 20, 47–73. [Google Scholar]
  66. Kontopoulos, I.; Penkman, K.; Liritzis, I.; Collins, M.J. Bone diagenesis in a Mycenaean secondary burial (Kastrouli, Greece). Archaeol. Anthropol. Sci. 2019, 11, 5213–5230. [Google Scholar] [CrossRef] [Green Version]
  67. Liritzis, I.; Xanthopoulou, V.; Palamara, E.; Papageorgiou, I.; Iliopoulos, I.; Zacharias, N.; Vafiadou, A.; Karydas, A.G. Characterization and provenance of ceramic artifacts and local clays from Late Mycenaean Kastrouli (Greece) by means of p-XRF screening and statistical analysis. J. Cult. Herit. 2020, 46, 61–81. [Google Scholar] [CrossRef]
  68. Levy, T.E.; Sideris, T.; Howland, M.; Liss, B.; Tsokas, G.; Stambolidis, A.; Fikos, E.; Vargemezis, G.; Tsourlos, P.; Georgopoulos, A.; et al. At-risk world heritage, cyber, and marine archaeology: The Kastrouli–Antikyra Bay land and sea project, Phokis, Greece. In Cyber-Archaeology and Grand Narratives; Jones, I.W.N., Levy, T.E., Eds.; Springer: Cham, Switzerland, 2018; pp. 143–234. [Google Scholar]
  69. Doneus, M.; Verhoeven, G.; Fera, M.; Briese, C.; Kucera, M.; Neubauer, W. From deposit to point cloud – a study of low-cost computer vision approaches for the straightforward documentation of archaeological excavations. Geoinf. FCE CTU 2011, 6, 81–88. [Google Scholar] [CrossRef] [Green Version]
Figure 1. An orthophotograph of the site of Kastrouli, Greece. Note the vegetation obscuring the ground surface across the site.
Figure 2. Comparison between an unmodified DEM of Kastrouli produced by (a) LAAP and photogrammetry and DTMs of the site generated by LAAP, photogrammetry, and point cloud classification through: (b) Classify Ground Points, (c) Select Points by Color, and (d) Classify Points. The site wall and excavation areas are shown for context. DEMs (a,b) are also viewable in interactive format here: https://storymaps.arcgis.com/stories/39280437536b4c458817fa4aa5c0b541 (accessed on 19 November 2021).
Figure 3. The dense point cloud from Kastrouli, as classified by the fully automated Classify Points algorithm. Green points are classified as “high vegetation”, orange points are classified as “building”, and brown points are classified as “ground”. Much of the bare earth at the site is incorrectly classified as “building.”
Figure 4. (a) Contour map produced from the unmodified DSM. Note the contours representing vegetation. (b) Contour map produced from the Classify Ground Points-derived, geometrically filtered DTM. In both cases, contour lines below 5 m have been removed and contours have been smoothed. These maps are also viewable in interactive format here: https://storymaps.arcgis.com/stories/39280437536b4c458817fa4aa5c0b541 (accessed on 19 November 2021).
Table 1. Methods of point cloud classification in Agisoft Metashape Professional and the corresponding user parameters used to produce optimal DTMs at Kastrouli.
Point Cloud Classification Method | User Parameters
Classify Points | Confidence: 0.00
Classify Ground Points | Max angle (deg): 15; Max distance (m): 0.05; Cell size (m): 10
Select Points by Color | Color: #b69b8a; Tolerance: 15; Channels: red, green, blue, hue, saturation, value
Assign Class | Fully manual
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
