Article

Correlation among Earthwork and Cropmark Anomalies within Archaeological Landscape Investigation by Using LiDAR and Multispectral Technologies from UAV

1 Department of Civil Engineering, University of Salerno, 84084 Fisciano, Italy
2 Spiron Heritage and Survey, 00198 Rome, Italy
* Author to whom correspondence should be addressed.
Drones 2020, 4(4), 72; https://doi.org/10.3390/drones4040072
Submission received: 16 October 2020 / Revised: 26 November 2020 / Accepted: 27 November 2020 / Published: 30 November 2020

Abstract

This project aimed to systematically investigate the archaeological remains of the imperial villa of Domitian in Sabaudia (Italy), using different three-dimensional survey techniques. Particular attention was paid to the identification and documentation of the traces that buried structures left on the surface occupied by the villa, which extends for 46 hectares, an area fully covered with structures. Since a dense pine forest was planted during the 1940s and currently covers the site, this contribution investigates in particular the correlation between cropmarks, identifiable by processing multispectral maps and vegetation indices from RGB images, and earthwork anomalies identified in a Digital Terrain Model (DTM) built from a light detection and ranging (LiDAR) flight with an Unmanned Aerial Vehicle (UAV). The study demonstrates how the use of vegetation maps, calculated from RGB and multispectral aerial photos, can provide a more expeditious preliminary analysis of the position and extent of areas characterized by the presence of buried structures, but also that, in order to investigate in depth a context in similar conditions, the most effective approach remains the one based on LiDAR technology. The integration of the two techniques may prove fruitful in limiting the extent of the areas to be investigated with terrestrial survey techniques.


1. Introduction

Applications for the accurate documentation of anomalies in the archaeological field are intensely discussed in the scientific community [1,2,3]. Until the early 2000s, the most frequent technique for archaeological prospecting was undoubtedly the identification of cropmarks, parch marks and soil marks through aerial photography; more recently, applications from UAVs (Unmanned Aerial Vehicles) have prevailed, essentially because of the lower cost of data acquisition and the higher ground resolution [4,5].
It is possible to classify the three-dimensional survey equipment installed on UAVs into two categories of sensors: passive and active. Passive techniques are based on the acquisition of images that can be processed in software exploiting the principles of photogrammetric science and/or Structure from Motion (SfM) [6,7], which allows the generation of dense point clouds, 3D models, Digital Surface Models and orthomosaics with high data density and accuracy [8]. This type of acquisition is used in good visibility conditions and in the absence of vegetation—such as trees or shrubs—as the photogrammetric technique is limited to the calculation of the visible surface and is therefore unable to go beyond the vegetation. Most aerial photogrammetric applications from UAVs for archaeological prospecting purposes are based on the use of RGB sensors, which perform well for geometric documentation and for capturing the textures characterizing the structures to be documented. In the last decade, there has been a gradual miniaturization of sensors—not imaginable until a few years ago—which has allowed the installation of compact and lightweight cameras on UAVs, favoring their development both in mapping and in inspection [4]. Multispectral cameras, too, have been involved in this miniaturization. These cameras are equipped with sensors capable of recording the amount of energy reflected by objects in different wavelengths of the electromagnetic spectrum (in archaeology, the most investigated bands are generally the Near Infrared and Red-Edge ranges). A multispectral sensor can therefore produce a multi-band image, allowing the analysis of the spectral response in the different acquired bands and the generation of accurate thematic maps, especially when using classifiers for crops and vegetation [9]. The accessibility of this type of sensor, in terms of both size and cost, has in recent years allowed an increase in applications in the archaeological field as well, especially for surveying archaeological structures buried in cultivated fields [10,11].
The active sensors most often employed to identify traces of buried archaeological structures are those based on LiDAR (light detection and ranging) technology. There are two primary platforms: airplanes, used almost exclusively until the last decade, and UAVs. LiDAR systems on the UAV platform in particular have recently shown evident progress, while Airborne LiDAR (or Airborne Laser Scanning) has seen few reported developments, notwithstanding being so extensively required [12,13,14]. The advantages of installing LiDAR sensors on UAVs rather than on airplanes are essentially the same as for the photogrammetric technique, i.e., higher resolution and lower cost. In the field of archaeological prospecting, LiDAR on UAVs is often used for the detection of earthwork anomalies, i.e., depressions and/or elevations of the ground level in correspondence with buried archaeological structures, especially in the presence of vegetation [15]. LiDAR technology on UAVs is often combined with multi-echo technology, in which the sensor evaluates multiple echo signals for each emitted measuring beam, allowing it to deliver reliable measurements of the soil (to generate Digital Terrain Models (DTMs)) even in the presence of thick vegetation [16,17].
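To make the multi-echo idea concrete, the sketch below (Python with the laspy library; the file name is a placeholder) keeps only the last return of each emitted pulse, the echo most likely to have passed through gaps in the canopy and reached the ground; dedicated ground-filtering algorithms then operate on this reduced subset.

```python
import laspy
import numpy as np

las = laspy.read("flight.las")  # placeholder file name

# In multi-echo data, each pulse can produce several returns; the last
# return of each pulse is the best candidate for a ground hit under canopy.
is_last = np.asarray(las.return_number) == np.asarray(las.number_of_returns)
ground_candidates = np.column_stack([las.x, las.y, las.z])[is_last]
print(f"{is_last.sum()} of {len(is_last)} returns kept as ground candidates")
```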
Given the rapid development of UAV surveying techniques—equipped with both RGB and multispectral sensors—and the use of LiDAR sensors, this paper aims to compare the archaeological anomalies found by processing a LiDAR point cloud acquired by UAV (earthwork anomalies) with the vegetation variations determined by the presence of buried archaeological structures (cropmarks). In the proposed study, the earthworks were identified from a LiDAR dataset obtained from UAVs, while the cropmarks visible on the vegetation covering the area—tall maritime pine trees—were identified from vegetation indices obtained from an aerial photogrammetric flight with an RGB camera and, subsequently, from another flight with a multi-band multispectral sensor.
The test case for this work is the imperial villa of Domitian, located in the Circeo National Park (Italy), a site entirely covered by maritime pines (Figure 1). This circumstance, which makes excavation and survey extremely difficult, at the same time represents an interesting and satisfying challenge for an integrated application of the latest Remote Sensing techniques. For field data collection, other techniques could have been implemented: geophysics, Terrestrial Laser Scanning (TLS) and manual survey. Although these approaches are possible, they would require a longer time for data acquisition and survey logistics, due to the extent of the area and the terrain's orography, as well as noisier data, due to vegetation. With the type of vegetation that characterizes the case study, it may be difficult, even from a UAV with LiDAR technology, to obtain data from which to extract information on archaeological anomalies; however, through careful planning of the flight operations, it was possible to document the site [18].

2. Archaeological Background and Methodological Overview

The Domitian villa stands on the shores of Lago di Paola in Sabaudia (Italy), within the Circeo National Park (Figure 2). It is a vast imperial villa, second in size only to Hadrian's villa in Tivoli (Italy), explored only in minimal part and probably extending well beyond the 46 fenced hectares. The site, largely still to be excavated, is characterized by complete coverage of maritime pines planted in the 1940s [19]. It is therefore possible to consider the surveyed area—about 46 hectares—covered by homogeneous vegetation. From Google Earth, as from the produced orthoimages, the areas with less vegetation (always consisting of maritime pines) correspond to a greater number of buried ancient remains. We would like to underline that, for detection purposes, the nearby agricultural areas were also surveyed by multispectral aerial photogrammetry, finding several anomalies, but the calculations presented here are based only on the maritime pine vegetation. Schematically, from a chronological point of view, we can say that, after the most ancient phases of occupation, dating back to the first half of the 1st century BC, the peninsula was affected by two major construction interventions: one, responsible for the current structure and extension of the complex, datable to the beginning of the 1st century AD; the other, datable to the end of the Domitian age, important, more than for its constructive impact (in itself not negligible), for the interventions on the decorative apparatus and the water distribution systems.
The other phases identified, albeit numerous, can generally be traced back to limited restoration or reuse interventions, implemented in any case within the planning framework defined by the great construction programs just described. In the period between the 1930s and the 1990s, only three areas were systematically excavated, and for this reason they are now characterized by good or excellent visibility: the “Area Nord” or “Area del Bacino Absidato” [20], the “Area Centrale” [21] and the “Area dei Balnea ad Esedre” [22].
At the end of this brief presentation of the case study, a clarification is needed regarding the visibility encountered during the land surveying operations undertaken to verify the information coming from the multispectral and LiDAR flights. The archaeological visibility encountered was poor [23], both because of occasional bramble bushes and because of the pine needles that almost completely cover the ground.
The vegetation also placed significant limits on measurement operations with GNSS techniques, restricting their use to open areas. The extremely dense vegetation makes ground survey and excavation very difficult.
The research presented in this paper is part of the project “The Domitian Villa: An Imperial Residence in Sabaudia, Italy”, winner of the 2019 Grant of the “Shelby White and Leon Levy” program. The project's aim is the realization of a complete edition of the Domitian villa, including unpublished excavations, already explored structures and archaeological materials. Particular importance in the project, alongside the survey and complete documentation of the three hectares of already excavated structures using TLS and photogrammetry, is given to identifying the archaeological anomalies deriving from the presence of buried structures. For the identification and mapping of these anomalies, which affect the entire area and testify to an incredibly vast and articulated complex, an acquisition was planned integrating the systematic use of the most recent three-dimensional survey technologies, including aerial photogrammetry from UAVs with RGB and multispectral sensors and UAV LiDAR, both presented in this paper. The acquisition of documentation on buried archaeological remains through the observation of the anomalies they have left on the landscape has a long history that, starting from the photo-interpretation of historical aerial images in the 1960s, has developed constantly, substantially in parallel with the development of sensors [24]. From the observation of anomalies of different typologies [25,26], influenced by both the environmental context and the nature of the buried evidence, different approaches and best practices for their identification and documentation can be found in the literature, also in relation to visibility conditions. Archaeological visibility often drives the choice of the best documentation technique. Following the methodological guidelines available in the literature, visibility and anomaly type are the variables to be taken into consideration when planning a documentation campaign for archaeological traces.
For the documentation of feature anomalies (emerging or partially emerging portions of structures), whose surveying and positioning are crucial to correctly interpret still-buried evidence, one of the most frequent techniques is UAV photogrammetry with an RGB sensor. If feature anomalies are covered by vegetation, other approaches, such as topographic survey, TLS, LiDAR or SLAM (Simultaneous Localization and Mapping), are preferable [27]. To map earthwork anomalies (depressions or bumps on the ground surface caused by the presence of buried structures, Figure 3) in good visibility conditions, an effective approach is the generation of a DTM from UAV aerial photogrammetry [28,29]. In cases of poor visibility, other approaches are preferable: although, in recent years, surveys of earthwork anomalies have been carried out using total station or GNSS techniques [30], for speed, density and quality of the final data, the approach providing the most information is LiDAR instrumentation [31,32]. In this case, it was consequently chosen to detect earthwork anomalies using LiDAR technology by UAV.
In order to spot and document cropmarks (vegetative states influenced by ground conditions), several approaches are possible [33]; in our case, two were chosen, both based on UAV aerial photogrammetric data acquisition, using RGB and multispectral sensors. When using RGB sensors for cropmark identification, in addition to the processing of photorealistic orthophotos [34] from which to proceed to the photo-interpretation of anomalies, some raster manipulations are possible. These manipulations allow the extraction of vegetation indices that can facilitate this type of operation [35,36]. When instead using multispectral sensors, which are able to record the amount of energy reflected also in the Near Infrared and/or Red-Edge range, it is possible to obtain an accurate map of the anomalies—especially using classifiers for crops and vegetation—which accounts for the health of the plants and, indirectly, for the presence of buried structures that have influenced their vegetative condition [37].
Table 1 shows the targeted types of archaeological anomalies and the different possible approaches to their location and documentation.

3. Background Theory

3.1. RGB Maps

Recent technologies applied within landscape archaeology have made it possible to acquire images in a short time, at a particular moment of the year, to locate buried and otherwise invisible archaeological evidence. Such technological development is among the main reasons for the widespread increase in UAV applications, as these vehicles have proven to be increasingly reliable and straightforward to pilot. In the literature, there are various vegetation indices that can be calculated from the bands of an RGB sensor and that can provide further information and/or confirmation of traces partially visible during archaeological survey operations or in the orthophoto itself. The RGB-based vegetation indices are computed on a pixel basis from an orthophoto, usually calculated beforehand by photogrammetric processing. Some of the most used in the literature, even for non-archaeological purposes, are listed below [38].
VARI—Visible Atmospherically Resistant Index [39]—is based on the ARVI (Atmospherically Resistant Vegetation Index):

$$VARI = \frac{R_{GREEN} - R_{RED}}{R_{GREEN} + R_{RED} - R_{BLUE}}$$

TGI—Triangular Greenness Index [40]—is highly correlated with leaf chlorophyll content:

$$TGI = R_{GREEN} - 0.39\,R_{RED} - 0.61\,R_{BLUE}$$

GLI—Green Leaf Index [41]—was originally designed for use with a digital RGB camera to measure wheat cover. GLI values range from −1 to +1:

$$GLI = \frac{2R_{GREEN} - R_{RED} - R_{BLUE}}{2R_{GREEN} + R_{RED} + R_{BLUE}}$$

RGBV—RGB Vegetation Index [42]—was developed for estimating biomass:

$$RGBV = \frac{R_{GREEN}^2 - (R_{RED} \cdot R_{BLUE})}{R_{GREEN}^2 + (R_{RED} \cdot R_{BLUE})}$$

EXG—Excess of Green [43]—is generally used to distinguish between plant material and the background of soil or residue [44]:

$$EXG = 2R_{GREEN} - R_{RED} - R_{BLUE}$$

where, for each equation, $R_{RED}$, $R_{GREEN}$ and $R_{BLUE}$ are the R, G and B bands, respectively.
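As a compact illustration, the sketch below computes all five of the listed indices per pixel from an RGB orthophoto loaded as a NumPy array (the band order and the small epsilon guard are assumptions of this example, not part of the original processing chain).

```python
import numpy as np

def rgb_indices(rgb):
    """Compute VARI, TGI, GLI, RGBV and EXG from an (H, W, 3) float array
    holding the R, G and B bands of an orthophoto."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-10  # guards against division by zero on flat pixels
    return {
        "VARI": (g - r) / (g + r - b + eps),
        "TGI":  g - 0.39 * r - 0.61 * b,
        "GLI":  (2 * g - r - b) / (2 * g + r + b + eps),
        "RGBV": (g**2 - r * b) / (g**2 + r * b + eps),
        "EXG":  2 * g - r - b,
    }
```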

3.2. Multispectral Maps

A multispectral sensor is an instrument capable of recording the amount of energy that an object reflects in different wavelengths of the electromagnetic spectrum (generally visible and infrared). From the analysis of the images obtained from this type of sensor and the analysis of the different spectral responses, it is possible to produce, with the use of classifiers, accurate thematic maps, useful for the identification of buried archaeological structures [45,46].
By combining the pixel intensity values of two or more image bands, it is possible to generate different vegetation indices, i.e., maps computed pixel by pixel by relating the bands of interest to each other. These indices mainly use three spectral ranges (Green, Red and NIR), related to spatial differences in the density and physical state of the vegetation and in soil moisture, which often appear to be linked to the presence of archaeological remains [47].
The study reported here aims to evaluate the potential of a low-cost multispectral sensor to spot vegetation anomalies by relating the state of the trees to the buried archaeological structures beneath them. From the information obtained from the multispectral camera, the following indices were calculated:
NDVI—Normalized Difference Vegetation Index—is a measure of healthy green vegetation. The combination of its normalized difference formulation and its use of the regions of highest chlorophyll absorption and reflectance makes it robust over a wide range of conditions. The NDVI ranges from −1 to +1, where positive values indicate increasing greenness and negative values indicate non-vegetated features such as water, barren areas, ice, snow or clouds [44]:

$$NDVI = \frac{R_{NIR} - R_{RED}}{R_{NIR} + R_{RED}}$$

GNDVI—Green Normalized Difference Vegetation Index—is similar to NDVI, except that it measures the Green spectrum from 540 to 570 nm instead of the Red spectrum. This index is more sensitive to chlorophyll concentration than NDVI and ranges from 0 to 1 [48]:

$$GNDVI = \frac{R_{NIR} - R_{GREEN}}{R_{NIR} + R_{GREEN}}$$

LCI—Leaf Chlorophyll Index—is generally used to evaluate the chlorophyll content in areas with full leaf coverage. The leaf chlorophyll content is an important variable for agricultural remote sensing because of its close relationship to leaf nitrogen content [49]:

$$LCI = \frac{R_{NIR} - R_{REDedge}}{R_{NIR} + R_{RED}}$$

NDRE—Normalized Difference Red-Edge—is similar to NDVI but uses the Red-Edge instead of the Red band, that is, the band of wavelengths found in the transition zone between Red and Near Infrared. Red-Edge light (used in NDRE) can pass through leaves far deeper than Red light (used in NDVI). In other studies, the NDRE index detected changes in plant stress prior to NDVI and GNDVI. Red-Edge information has the potential to considerably improve the monitoring of forest health from UAVs, airborne platforms or satellites [50]:

$$NDRE = \frac{R_{NIR} - R_{REDedge}}{R_{NIR} + R_{REDedge}}$$
MCARI—Modified Chlorophyll Absorption in Reflectance Index [51]—measures the absorption depth of chlorophyll and is very sensitive to changes in chlorophyll concentration. MCARI shows weakness in predicting low concentrations of chlorophyll; in particular, the impact of the soil signal limits its functionality. To avoid misjudgments, MCARI should be interpreted in conjunction with NDVI [52]:

$$MCARI = \left[(r_{700} - r_{670}) - 0.2\,(r_{700} - r_{550})\right] \cdot \frac{r_{700}}{r_{670}}$$

where, for each equation, $R_{RED}$, $R_{GREEN}$, $R_{NIR}$ and $R_{REDedge}$ are, respectively, the Red, Green, NIR and Red-Edge bands, and $r_x$ is the reflectance value at x nanometres.
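A per-pixel computation of these indices from co-registered band rasters can be sketched as follows (the function signature and the approximation of the narrow MCARI bands by the Sequoia's broader bands are assumptions of this example):

```python
import numpy as np

def multispectral_indices(green, red, rededge, nir):
    """Index maps from co-registered float rasters of the four bands."""
    eps = 1e-10  # guards against division by zero
    return {
        "NDVI":  (nir - red) / (nir + red + eps),
        "GNDVI": (nir - green) / (nir + green + eps),
        "LCI":   (nir - rededge) / (nir + red + eps),
        "NDRE":  (nir - rededge) / (nir + rededge + eps),
        # MCARI strictly requires reflectance at 550, 670 and 700 nm; here
        # the Green, Red and Red-Edge bands are used as rough stand-ins.
        "MCARI": ((rededge - red) - 0.2 * (rededge - green))
                 * (rededge / (red + eps)),
    }
```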

4. Data and Methods

The data used were collected in two different aerial surveys. The first dataset, a dense point cloud collected with a UAV-mounted LiDAR, was obtained in October 2019 with the 3DTARGET ScanFly system (www.scanfly.3dtarget.it/it-it). In particular, the system used was the ScanFly ULTRA, equipped with the Velodyne ULTRA VLP-32C LiDAR sensor. Its main features are 32 channels, a range of 200 m, an acquisition speed of up to 600,000 points/s in dual return mode, and a FOV of 360° horizontally and 40° vertically. Weighing less than 2 kg, it can be installed onboard a UAV; its positioning devices have two antennas and dual-frequency receivers, capable of receiving GPS and GLONASS data, and can operate in PPK or RTK mode. The external orientation of the scan origin (inertial platform with a frequency from 100 Hz to 250 Hz and a positioning accuracy of 0.05°) is integrated into the laser scanner. The ScanFly ULTRA sensor was installed on a POTENZA 8HSE 8-rotor multicopter from Italdron (www.italdron.com/it/i-nostri-droni/potenza-8hse/#!/specs). The main features of this drone are a maximum payload of 10 kg, a maximum flight endurance of up to 40 min, a maximum range of 3 km and a maximum speed of 15 m/s (Figure 4). The second survey was performed in July 2020, using the fixed-wing Sensefly eBee X (www.sensefly.com/drone/ebee-x-fixed-wing-drone), equipped in a first flight with the S.O.D.A. 3D RGB camera (www.sensefly.com/camera/sensefly-soda-3d-mapping-camera) and, on the same day, in a second flight with a Parrot Sequoia multispectral camera (Figure 5; www.sensefly.com/app/uploads/2018/05/Parrot_Sequoia_senseFly_Specifications_EN.pdf). The S.O.D.A. 3D is a 24 MPixel camera with a 1” sensor (13.2 × 8.8 mm, pixel size 2.33 µm) and a focal length of 10.6 mm, installed inside a sled that changes its orientation during flight to capture three images for each waypoint (two oblique, at an assigned tilt, and one nadir) instead of a single nadiral image.
The Sequoia multispectral sensor is capable of capturing imagery in the visible and near-infrared spectrum, providing calibrated data to optimally monitor the health and vigor of vegetation. It has five cameras integrated within a single body, acquiring in the Green (~550 nm), Red (~660 nm), NIR (~790 nm) and Red-Edge (~735 nm) regions. The system includes a sky-facing, fully integrated sunshine sensor, which captures and logs the current lighting conditions and automatically calibrates the outputs of the camera, so that the measurements are absolute. It is essential that UAV-based multispectral observations are calibrated through such a system. The four multispectral bands have a 1/3.2” sensor (1280 × 960 pixels, pixel size 3.75 µm, focal length 4 mm). The camera also simultaneously captures true-color images with a 16 MPixel sensor (RGB Sequoia) of 6.17 × 4.55 mm (4608 × 3456 pixels, pixel size 1.34 µm) and a focal length of 5 mm. For cropmark detection, a project GSD of at least 10 cm, for both the RGB and the multispectral camera, was defined.

4.1. LiDAR Data Acquisition and Preprocessing

A total of 50 hectares was scanned with LiDAR technology from UAVs in three flights, positioning the take-off point so as to remain in VLOS (Visual Line of Sight) conditions during each acquisition, in accordance with ENAC regulations [53].
Each flight took place in automatic mode using a flight plan. The flight lines were designed using a DJI Ground Station software package. For all the surveys, the UAV was set to a target altitude of 70 m above the take-off point, with a horizontal ground speed of 3.0 m/s. The height was computed in the DJI Ground Station software using elevation data derived from Google Earth. Parallel flight lines were programmed with a front-overlap of 80% and a side-overlap of 60%. The returned point cloud, 260 million points with a density of approximately 508 pt/m², contained both the vegetated part—mainly the crowns of the maritime pines, their trunks, a few emerging structures and scrub—and the ground.
The classification of ground versus non-ground LiDAR returns was performed in the LiDAR360 software (https://greenvalleyintl.com/software/lidar360), using the “Filter Ground Points” command. In order to obtain an adequate descriptive result, the following parameters were set: the Grid Size (in our case 0.1 m); the Ground Thickness (in our case 0.3 m), i.e., the thickness above the lowest point of the point cloud within which points belonging to the ground are searched; the optional Window Smooth parameter, which uses neighborhood grid data to conduct ground point consistency filtering; and the Window Size parameter (in our case 3, i.e., a 3 × 3 neighborhood window).
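The logic behind these parameters can be illustrated with a deliberately simple grid-based filter (a sketch in the spirit of the description above, not a reproduction of LiDAR360's actual algorithm, and not optimized for clouds of hundreds of millions of points):

```python
from collections import defaultdict
import numpy as np

def filter_ground_points(xyz, grid_size=0.1, ground_thickness=0.3):
    """Keep, in each grid cell, only the points lying within
    `ground_thickness` of the cell's lowest point."""
    cells = defaultdict(list)
    for idx, (x, y, _) in enumerate(xyz):
        cells[(int(x // grid_size), int(y // grid_size))].append(idx)
    keep = []
    for idxs in cells.values():
        z_min = xyz[idxs, 2].min()
        keep.extend(i for i in idxs if xyz[i, 2] <= z_min + ground_thickness)
    return xyz[np.array(keep)]
```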
After filtering out a “ground layer”, the generated cloud was optimized using a noise filter to clean points located above and below the ground plane. The denoising procedure was performed using the “Noise Filter” command in LiDAR360. The parameters set for this filter are the radius of the fitting plane, in our case equal to the calculation grid, and the Multiples of standard deviation parameter, which uses the relative error (σ) as the threshold for outlier removal (in our case study, the value is 1.0). The chosen parameters did not cause loss of information on the buried archaeological evidence but improved the general cleanliness of the cloud by eliminating the brushwood.
The algorithm automatically calculates the standard deviation (SD) of the plane fitted to each point P's neighborhood and eliminates P if its distance from the plane is greater than the chosen multiple of σ. It is also possible to eliminate isolated outliers with the Remove Isolated Points parameter, which deletes a point when there are fewer than 4 points within the searching radius. From the obtained point cloud, DTMs with a grid resolution of 10 cm were calculated. In particular, it can be observed that the filtering for DTM generation removed the trees with good results, while in some areas where the slopes vary rapidly a minimum of noise remains, particularly in areas densely vegetated with brushwood. The point cloud containing only the ground layer has about 13 million points, with a density of about 25 pt/m² (Figure 6).
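The denoising step just described can be sketched as follows (again an illustrative, unoptimized reading of the description above, not LiDAR360's implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def noise_filter(xyz, radius=0.1, k_sigma=1.0, min_neighbors=4):
    """Drop a point if it is isolated or if its distance from the plane
    fitted to its neighborhood exceeds k_sigma times the neighborhood's
    standard deviation."""
    tree = cKDTree(xyz)
    keep = np.ones(len(xyz), dtype=bool)
    for i, p in enumerate(xyz):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < min_neighbors:          # "Remove Isolated Points"
            keep[i] = False
            continue
        nb = xyz[idx]
        # Least-squares plane z = a*x + b*y + c over the neighborhood.
        A = np.column_stack([nb[:, 0], nb[:, 1], np.ones(len(nb))])
        coef, *_ = np.linalg.lstsq(A, nb[:, 2], rcond=None)
        sigma = (nb[:, 2] - A @ coef).std()
        dist = abs(p[2] - (coef[0] * p[0] + coef[1] * p[1] + coef[2]))
        if sigma > 0 and dist > k_sigma * sigma:
            keep[i] = False
    return xyz[keep]
```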

4.2. RGB and Multispectral Images Data Acquisition

The aerial photogrammetric survey was performed with a commercial-grade fixed-wing UAV (Sensefly eBee Plus). This UAV has a central body, detachable fixed wings and a tail propeller. It features an onboard Global Navigation Satellite System (GNSS) module and Inertial Measurement Unit (IMU) modules. The hardware weighs approximately 1.2 kg, including the battery and the camera housing module. Two GNSS modes are available both for the flight and for image georeferencing; in our acquisitions, the “standalone” mode, i.e., without RTK correction, was chosen.
For georeferencing and photogrammetric optimization, GCPs (Ground Control Points) were used, previously positioned both inside and outside the villa. The first flight with the fixed-wing system was carried out with the S.O.D.A. 3D camera, which can take triplets of images at each waypoint computed from a pre-programmed flight plan. In our application, the camera inclination for the oblique photogrammetric shots was set to ±30° with respect to the nadir image.
A total of 564 images were acquired in the RGB acquisition. The flight lines were designed using the eMotion software package, version 3.13.0 Build 236 (Figure 7). For this survey, the target altitude was fixed at 145 m above the take-off point. This software package also estimates the flight height using elevation data derived from Google Earth. Parallel flight lines were programmed with a front-overlap of 85% and a side-overlap of 60%. For the fixed-wing RGB camera flight, the altitude was set according to current flight regulations (lower than 150 m), at the same time guaranteeing the predefined GSD and an overlap of 85%, assuming that a higher overlap may improve image alignment in cases of homogeneous texture, such as the maritime pine vegetation. The flight took approximately 22 min, including take-off, landing and the flight plan over the 55 hectares acquired. The image acquisition was planned with the project requirements in mind: a ground sampling distance (GSD) of approximately 3.3 cm/px.
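The stated GSD can be verified from the sensor parameters quoted above with the standard relation GSD = pixel size × flight height / focal length, as in the following check:

```python
# S.O.D.A. 3D parameters quoted earlier: 2.33 µm pixel, 10.6 mm focal
# length, 145 m flight height above ground.
pixel_size = 2.33e-6     # m
focal_length = 10.6e-3   # m
height = 145.0           # m AGL
gsd = pixel_size * height / focal_length
print(f"GSD ≈ {gsd * 100:.1f} cm/px")  # ≈ 3.2 cm/px, matching the ~3.3 cm/px above
```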
With the multispectral Sequoia, images were acquired from 120 m AGL with 80% longitudinal overlap and 60% lateral overlap, yielding a GSD of 8 cm/px (Figure 8). The surveyed area is the same as that acquired by the S.O.D.A. 3D camera, i.e., 55 ha, in an estimated flight time of 35 min. A total of 1075 images were acquired, in the four available bands and in RGB. The Parrot Sequoia was radiometrically calibrated according to the method proposed for dual-sensor pair calibration by Jin and Eklundh [54]. The method revolves around radiometrically calibrating a pair of sensors using the sun as the illumination source: an upward-looking sensor registers the incoming radiation, while a downward-looking sensor fixated on a reflectance plate registers the outgoing radiation.
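The principle of this two-sensor calibration can be reduced to a few lines (a simplified sketch; the real pipeline also handles vignetting and exposure corrections, and all names here are illustrative):

```python
def to_reflectance(scene_radiance, scene_irradiance,
                   panel_radiance, panel_irradiance, panel_reflectance):
    """Convert a radiance image to reflectance using a sunshine sensor
    and a panel of known reflectance (cf. Jin and Eklundh [54]).

    The panel observation ties the downward camera (radiance) to the
    upward sunshine sensor (irradiance); the resulting factor is then
    applied to every scene image, normalized by its own irradiance."""
    k = panel_reflectance * panel_irradiance / panel_radiance
    return k * scene_radiance / scene_irradiance
```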
All flight operations were conducted maintaining visual line of sight (VLOS) with the UAV, in accordance with ENAC regulations [53].
For the Sequoia, a calibration target provided by the manufacturer was recorded in order to perform the radiometric calibration in postprocessing [55]. In UAV photogrammetry, the best solution is to distribute the GCPs uniformly, both at the edges and within the area.
To support the photogrammetric project, 6 Ground Control Points (GCPs) were measured to georeference and assess the accuracy of the generated 3D model and orthophotos. The GCPs were materialized on the ground using photogrammetric targets (50 × 50 cm) and topographic nails. The GNSS survey refers to the Italian geodetic and cartographic system UTM/ETRF00 [56]. The instrumentation used to measure each target consists of a Geomax Zenith 20 antenna with a built-in receiver.

4.3. RGB and Multispectral Images Data Processing

In order to process the RGB photogrammetric data from the S.O.D.A. 3D, which required no metric calibration, Agisoft Metashape (ver. 1.5.3 build 8469, 2019) [57] was preferred. The orientation parameters were estimated in Metashape using a self-calibrating bundle adjustment (BA) including the GCPs. These estimated parameters were then used to orient the images and were kept constant during the entire RGB data processing. The following parameters were set for the calculation of the point clouds: in the Align Photos phase, Accuracy = High (original photos), Key-Point limit = 40,000 and Tie-Point limit = 40,000. To optimize the camera alignment, f (focal length); cx and cy (principal point offset); and k1, k2, k3 and k4 (radial distortion coefficients) were fitted. For the building of the Dense Cloud, the parameters used were Quality = High (1/4 of the original photos) and Depth Filtering = Disabled. Once the elaboration of the photogrammetric shots was complete, the textured 3D model of the villa was created and used to extract the orthophoto required for the subsequent calculations.
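For reference, the same settings map onto the Metashape Python API roughly as follows (call names from the 1.5-era API; they changed in later releases, so treat this as an indicative sketch rather than the project's actual script):

```python
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG"])  # placeholder image list

# Align Photos: Accuracy = High, Key-Point/Tie-Point limits = 40,000.
chunk.matchPhotos(accuracy=Metashape.HighAccuracy,
                  keypoint_limit=40000, tiepoint_limit=40000)
chunk.alignCameras()

# Self-calibrating bundle adjustment fitting f, cx, cy and k1..k4
# (GCP markers would be imported and enabled before this step).
chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                      fit_k1=True, fit_k2=True, fit_k3=True, fit_k4=True)

# Dense Cloud: Quality = High, Depth Filtering = Disabled.
chunk.buildDepthMaps(quality=Metashape.HighQuality,
                     filter=Metashape.NoFiltering)
chunk.buildDenseCloud()
```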
An orthophoto was thus returned in the GCP reference system, with a resolution of 4 cm/px. For the analysis of the georeferencing residuals on the GCPs, the RMSE estimates on the coordinates and their combination were considered:
$$RMSE_E = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(E_{C_i} - E_{R_i}\right)^2}$$

$$RMSE_N = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(N_{C_i} - N_{R_i}\right)^2}$$

$$RMSE_h = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(h_{C_i} - h_{R_i}\right)^2}$$

$$RMSE = \sqrt{RMSE_E^2 + RMSE_N^2 + RMSE_h^2}$$
where the subscript C indicates the coordinates estimated from the bundle adjustment, R indicates the reference values, E and N are respectively the East and North cartographic coordinates, h is the ellipsoidal height, and n is the number of GCPs.
$RMSE_E$, $RMSE_N$ and $RMSE_h$ are the Root Mean Square Errors on the GCPs' East and North coordinates and on the ellipsoidal height, respectively. $RMSE_E$ and $RMSE_N$ are estimated at about 2 and 3 cm, respectively; $RMSE_h$ is estimated at about 3 cm.
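These statistics amount to a few lines of code once the estimated and reference GCP coordinates are available (the array layout is an assumption of this example):

```python
import numpy as np

def gcp_rmse(estimated, reference):
    """RMSE on GCP coordinates per the formulas above.

    estimated, reference: arrays of shape (n, 3) holding (E, N, h)."""
    d = np.asarray(estimated) - np.asarray(reference)
    rmse_e, rmse_n, rmse_h = np.sqrt((d ** 2).mean(axis=0))
    combined = np.sqrt(rmse_e**2 + rmse_n**2 + rmse_h**2)
    return rmse_e, rmse_n, rmse_h, combined
```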
For the multispectral images acquired with the Parrot Sequoia, the photogrammetric project was processed using Pix4Dmapper Pro, version 4.5.6 [58], while for the RGB images—as already mentioned—the Agisoft Metashape software was used. The results differ from each other in metric terms (point clouds), but the study was centered on the spectral response of the orthoimagery. It is therefore assumed that, for the proposed resolutions (GSD < 10 cm) and for the final purposes (earthwork and cropmark correspondences between the DTM from LiDAR and the vegetation index maps from orthophotos), the metric differences in processing may be considered marginal.
We preferred to switch to the Pix4Dmapper Pro software because specific processing-option templates for the Sequoia camera are implemented in it: the four multispectral bands (NIR, Red-Edge, Red and Green) were processed with the Ag Multispectral template, while the RGB images, obtained with a mechanical shutter and sometimes also subject to rolling shutter phenomena, were processed with the Ag Modified Camera template.
The software environment uses a classic SfM pipeline that can be summarized in the following steps: (1) calculation of internal and external orientation parameters (with automated detection of Key-Points and Tie-Points) and creation of the sparse cloud, (2) extraction of a dense cloud, (3) construction of a polygonal model, (4) texture mapping and (5) production of the orthophoto and, eventually, of the vegetation indices [59]. The Ag Multispectral processing options template was used for the multispectral images. The Initial Processing was set in the Pix4D processing options at full image scale. The Matching Image Pairs was set to Aerial Grid or Corridor. The targeted number of Key-Points was set at 10,000. For the dense point cloud generation phase, an image reduction factor of 1/2 per side was used, with a minimum of 2 matching images; for the generation of the 3D textured mesh, a medium resolution with a texture of 8192 × 8192 pixels was used. For this project, the estimated $RMSE_E$ and $RMSE_N$ on the GCPs were about 6 cm, and the estimated $RMSE_h$ was about 9 cm. After the realization of the orthophotos for the four bands, georeferenced using the GCPs, the Index Calculator tool in Pix4Dmapper made it possible to calculate the following multispectral vegetation indices in raster format (pixel size of 10 cm): NDVI, GNDVI, LCI and MCARI, used for the subsequent correspondence analyses. For the RGB images of the Sequoia, the Ag Modified Camera processing options template was used instead. The Initial Processing was set in the Pix4D processing options at full image scale. The Matching Image Pairs was set to Aerial Grid or Corridor.
The targeted number of Key-Points was set at 10,000. For the dense point cloud generation phase, an image reduction factor of 1/4 per side (1/16 of the total resolution) was used, with a minimum of two matching images; for the generation of the 3D textured mesh, a medium resolution with a texture of 8192 × 8192 pixels was used.
The orthophoto generated from the RGB images of the Sequoia was not involved in the calculation of the vegetation maps. However, it was considered useful for a comparison between the two RGB orthophotos available in this application, i.e., from the S.O.D.A. 3D and from the Parrot Sequoia.
The tests were carried out on a Dell Alienware Area 51 with 32 GB RAM, 1 TB SSD storage, an Intel Core i7-5820K processor and a double NVIDIA Titan X GPU.
Figure 9 shows a flow chart of the data processing, while Figure 10 shows the maps of the vegetation indices and the color orthophotos, from the processing of the RGB images (S.O.D.A. 3D) and of the multispectral camera (Parrot Sequoia), respectively.

4.4. LiDAR Data Processing

After the ground-filtering activities carried out with the LiDAR360 software, a DTM was generated using the Create Elevation Grid from 3D Vector Data command in the Global Mapper software. This is a straightforward and effective procedure in which the software estimates, or the user imposes, the spacing of the raster grid. In our case, the spacing was imposed at 10 cm.
After computing the DTM, two shaders were created to highlight the jumps in height: one in grayscale with 10 cm steps and one with a cyclic color ramp with 20 cm steps. These two views were created in order to observe the general conformation of the identified buildings in the coarser one, and to use the one with a color change at every height step to distinguish, when possible, the internal articulation of the anomalies.
After the production of the shaders, contours with 1 m spacing were computed from the DTM in the Global Mapper software, to develop a cartographic base for the archaeological map of the peninsula.
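The chain from ground points to DTM, banded shaders and contours can be sketched as follows (an illustration of the procedure, not of Global Mapper's interpolation; `ground_xyz` is assumed to be the ground-only point array produced by the filtering steps above):

```python
import numpy as np
import matplotlib.pyplot as plt

def grid_dtm(ground_xyz, cell=0.10):
    """Rasterize ground points to a DTM by taking the minimum elevation
    per cell; empty cells remain NaN."""
    x, y, z = ground_xyz.T
    nx = int(np.ceil((x.max() - x.min()) / cell))
    ny = int(np.ceil((y.max() - y.min()) / cell))
    dtm = np.full((ny, nx), np.nan)
    ix = np.minimum(((x - x.min()) / cell).astype(int), nx - 1)
    iy = np.minimum(((y - y.min()) / cell).astype(int), ny - 1)
    for i, j, zi in zip(iy, ix, z):
        if np.isnan(dtm[i, j]) or zi < dtm[i, j]:
            dtm[i, j] = zi
    return dtm

dtm = grid_dtm(ground_xyz)
# Grayscale bands every 10 cm of height (cycled), analogous to the shader
# described above, plus 1 m contour lines.
plt.imshow(np.floor(dtm / 0.10) % 8, cmap="gray", origin="lower")
plt.contour(dtm, levels=np.arange(np.nanmin(dtm), np.nanmax(dtm), 1.0),
            colors="red", linewidths=0.3)
plt.show()
```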
Once the production of the cartographic data was concluded, the analytical phase began, in order to outline the anomalies. This operation was based on the use of feature anomalies as indicators of the orientation and reliability of the data produced and, when possible, was integrated using metric grids based on Roman measurement units, in order to identify more accurately the limits of the identified buildings.

5. Anomalies Analysis and Conclusions

In order to produce a map of the vast archaeological site covered by dense vegetation, the research project at the villa envisaged a first step of documentation of the visible emerging structures, through both RGB aerial photogrammetry and topographic survey. Using the data collected in this first part of the campaign, it was possible to draw up an archaeological map of the site. As previously highlighted, a LiDAR acquisition and a multispectral flight then made it possible to add a considerable amount of data to those obtained during the first step of the campaign. A first piece of macroscopic information among those that emerged concerns the articulation of the villa in artificial terraces, built to regularize the slope and better exploit the entire available area. For this purpose, the terrain was organized with a system of terracing, visible both through feature anomalies and, very clearly, in the LiDAR data. This terracing system is characterized by two main levels: a higher one, at an average altitude of 11.3 m, and a lower one, at an average altitude of 4.5 m (Figure 10). Below this lower terrace, at an altitude ranging between 0 and 2 m, the land, variously organized, is characterized by a very dense building occupation. Concerning the detection of anomalies related to the presence of systems for slope regularization, a clear disproportion can be observed in favor of the LiDAR survey compared to the multispectral and RGB maps (Figure 11).
In addition to the traces relating to land organization, it was possible to identify nine major anomalies relating to buildings as shown in Figure 12.
Anomaly 1 (Figure 13) refers to a building located along the northern coast of the peninsula, immediately behind the north edge of the fence, towards the hinterland, a modern limit exceeded by the ancient structure.
The anomaly has a recognizable rectangular shape and measures 180.6 × 248.6 m, or 5 × 7 actus (1 actus = 35.52 m). The LiDAR data made it possible to reconstruct significant details of the internal articulation of the building, located immediately below the “Cisterna Maggiore”.
In the center of the long southern side of the anomaly, a square trace (A) with a side of 1 actus can be observed. Two walls (B and C) branch off from this square trace, probably describing a corridor. It can also be observed how the space inside the structure was divided into bands using a ½ actus grid. During the survey, it was possible to observe the presence of a feature anomaly (D), consisting of a considerably buried, roughly rectangular structure, raised about 2 m, on top of which the ridges of two walls can be observed. In the multispectral and RGB maps, only a small anomaly corresponding to this feature anomaly (D) can be observed.
Anomaly 2 (Figure 14) is located along the northern coast of the peninsula, immediately east of the “Area Nord” to which it is closely linked, as the feature anomalies identified indicate.
This is a highly evident anomaly consisting of a narrow, long building band (A) measuring 29.70 × 148.70 m (100 × 500 feet), which borders part of the terraces (B), behind which there is access to the highest terrace (C). It should be noted that, although the anomaly is visible in all the multispectral and RGB maps, its internal articulation is visible only in the LiDAR acquisition.
Anomaly 3 (Figure 15), clearly visible with all the techniques implemented, consists of a quadrangular portico (A) open along the southwestern bank of the peninsula. The rectangular space open to the lake, measuring 82.50 × 55.40 m, is surrounded by a portico divided into two halves (B) by a passage (C), perhaps a staircase, which leads to the top of the substructure.
The whole area is characterized by a very dense frequency of feature anomalies; in particular, in the southernmost half of the portico, it was possible to identify the remains of thermae (baths) (D), with systems equipped with suspensurae and some lobed pools. As previously observed, although the trace is visible in both the multispectral and RGB maps, it is still impossible to appreciate its internal articulation using only passive mapping techniques.
Anomaly 4 (Figure 16), clearly visible in the LiDAR images, is practically invisible both in the multispectral and in the RGB maps. What is visible can be interpreted as a passage whose path connected the northern area and the portico (anomaly 3) with the southern thermae (bath) complex. This passage area preserves an evident trace of masonry works in opus caementicium, made to regularize the coastline.
Anomaly 5 (Figure 17) has a rectangular shape and is probably to be interpreted as part of thermae (baths) still to be excavated. It is evident in the LiDAR data but remains indistinguishable with passive techniques. Only modest feature anomalies can be observed, also due to the particularly poor visibility in this area.
The large anomaly 6 (Figure 18) has a rectangular shape that is visible in the LiDAR data and almost invisible in the multispectral and RGB flights, except for one anomaly in correspondence with some emerging structures (A). The structure, of which few testimonies can be observed in the thick of the woods, occupies an intermediate position in the terracing system that marks the slope of the peninsula in regular planes. Given the internal organization of the building, it can be assumed to be an ergastulum (a room intended for slaves' detention and rest) or a modular structure like a horreum (warehouse).
Anomaly 7 (Figure 19), visible both in the LiDAR data and in the multispectral maps, has a square shape; unfortunately, not much can be said about its interpretation. Nothing characterizing could be observed for this anomaly during the survey, since no feature anomaly was present.
Anomaly 8 (Figure 20) has a rectangular shape and is visible, always with great clarity, in the LiDAR data as well as in the multispectral and RGB maps. This anomaly, also investigated through a survey campaign with MMS, is characterized by the frequent presence of feature anomalies. As for its interpretation, the relationship of the structure with the “Cisterna dell'Eco” through a corridor useful for water management, together with the observation of structures equipped with suspensurae, has made it possible to interpret the area as thermae (baths).
Anomaly 9 (Figure 21) has a square shape and lies in a corner area of the terracing system. The anomaly is excellently visible in the LiDAR data as well as in the multispectral and RGB maps. Frequent feature anomalies characterize the area but, due to burial and vegetation, its interpretation remains impossible.
Following field survey, LiDAR, RGB and multispectral acquisitions, it was possible to produce an archaeological map of the Domitian Villa, a site characterized by poor visibility and very difficult accessibility conditions.
As shown in Figure 21, the possibility of identifying traces of terracing by using exclusively passive mapping techniques is significantly reduced compared to the possibilities offered by LiDAR technology.
It should also be added that it would have been impossible to deduce the shapes, sizes and heights of the terraces, using exclusively multispectral and RGB maps.
As an additional element, useful for the comparative evaluation of the acquisition techniques, it must be emphasized that the resolution obtainable by passive techniques is, in this circumstance, limited by the size of the tree crowns. A better result was produced by passive techniques in building identification but, even in this case, a clear superiority of the information deducible from LiDAR can be observed. As a matter of fact, both in terms of identified anomalies and in the possibility of observing the internal articulation of the structures, the LiDAR data proved to be the most effective choice. Out of the nine anomalies identified by the active technology, only six are visible in the multispectral and RGB maps, and of these six, only four allowed the extent of the building to be appreciated, albeit with less precision than with LiDAR. It was not possible to observe a clear qualitative prevalence among the multispectral indices; however, the data returned by the multispectral maps are of slightly better quality than those inferred from the RGB maps (Table 2).
As for the latter, the most useful index for identifying traces in this case was the VARI. A final observation on the case study derives from the peculiarity of the vegetation, mainly composed of tall trees with crowns ranging from 5 to 8 m. This aspect has certainly made the application of techniques for documenting archaeological anomalies based on vegetation indices less effective, significantly reducing the possibility of observing details smaller than the tree crowns.
Among the multispectral indices, the one that offered the least correspondence with the traces from LiDAR is the MCARI. This difference is probably due to the specific wavelengths involved in its calculation. In conclusion, it must be emphasized that the type of vegetation present considerably attenuates the responses. In fact, from the processing of the same multispectral data over portions of the crops beside the pine forest, some of the indices clearly highlighted the presence of traces. The latter are not presented in this work so as not to vary the homogeneity of the vegetation taken as reference (large-crowned maritime pines).

Author Contributions

Conceptualization, S.B., M.L. and D.R.; data curation, M.L. and D.R.; formal analysis: M.L.; investigation, D.R.; methodology, M.L. and D.R.; validation: D.R.; writing—review and editing: S.B., M.L. and D.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was carried out as part of the “The Domitian Villa: An Imperial Residence in Sabaudia, Italy” project, winner of the 2019 Grant for the “Shelby White and Leon Levy” program.

Acknowledgments

We sincerely thank the Shelby White and Leon Levy Foundation for the generous grant that allowed us to complete and publish the research on the Domitian villa. We would also like to thank the Soprintendenza (SABAP Lazio) and F. Di Mario for granting access to the site and for allowing the publication of the results. The authors would like to thank the drone pilot Angelo Pizzella for his collaboration in acquiring the images with the Sensefly eBee.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gojda, M. The contribution of aerial archaeology to European landscape studies: Past achievements, recent developments and future perspectives. J. Eur. Archaeol. Arch. 1997, 5, 91–104. [Google Scholar] [CrossRef]
  2. Gojda, M.; Warszawa, P. Current development in archaeological remote sensing: A central European experience and evaluation. Interdiscip. Archaeol. Nat. Sci. Archaeol. 2019, X, 155–164. [Google Scholar] [CrossRef]
  3. Luo, L.; Wang, X.; Guo, H.; Lasaponara, R.; Zong, X.; Masini, N.; Wang, G.; Shi, P.; Khatteli, H.; Chen, F.; et al. Airborne and spaceborne remote sensing for archaeological and cultural heritage applications: A review of the century (1907–2017). Remote Sens. Environ. 2019, 232, 111280. [Google Scholar] [CrossRef]
  4. Adamopoulos, E.; Rinaudo, F. UAS-based archaeological remote sensing: Review, meta-analysis and state-of-the-art. Drones 2020, 4, 46. [Google Scholar] [CrossRef]
  5. Eisenbeiß, H. UAV Photogrammetry. Ph.D. Thesis, University of Technology Dresden, Dresden, Germany, 2009. [Google Scholar]
  6. Rinaudo, F.; Chiabrando, F.; Lingua, A.; Spanò, A. Archaeological site monitoring: UAV photogrammetry can be an answer. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXIX-B5, 583–588. [Google Scholar] [CrossRef] [Green Version]
  7. Fernández-Hernandez, J.; González-Aguilera, D.; Rodríguez-Gonzálvez, P.; Mancera-Taboada, J. Image-based Modelling from unmanned aerial vehicle (UAV) photogrammetry: An effective, low-cost tool for archaeological applications. Archaeometry 2015, 57, 128–145. [Google Scholar] [CrossRef]
  8. Barba, S.; Barbarella, M.; Di Benedetto, A.; Fiani, M.; Gujski, L.; Limongiello, M. Accuracy assessment of 3D photogrammetric models from an unmanned aerial vehicle. Drones 2019, 3, 79. [Google Scholar] [CrossRef] [Green Version]
  9. Verhoeven, G.J.; Smet, P.F.; Poelman, D.; Vermeulen, F. Spectral characterization of a digital still camera’s NIR modification to enhance archaeological observation. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3456–3468. [Google Scholar] [CrossRef]
  10. Moriarty, C.; Cowley, D.C.; Wade, T.; Nichol, C.J. Deploying multispectral remote sensing for multi-temporal analysis of archaeological crop stress at Ravenshall, Fife, Scotland. Archaeol. Prospect. 2019, 26, 33–46. [Google Scholar] [CrossRef] [Green Version]
  11. Calleja, J.F.; Requejo Pagés, O.; Díaz-Álvarez, N.; Peón, J.; Gutiérrez, N.; Martín-Hernández, E.; Cebada Relea, A.; Rubio Melendi, D.; Fernández Álvarez, P. Detection of buried archaeological remains with the combined use of satellite multispectral data and UAV data. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 555–573. [Google Scholar] [CrossRef]
  12. Challis, K.; Forlin, P.; Kincey, M. A generic toolkit for the visualization of archaeological features on airborne LiDAR elevation data. Archaeol. Prospect. 2011, 18, 279–289. [Google Scholar] [CrossRef]
  13. Davis, D.S.; Sanger, M.C.; Lipo, C.P. Automated mound detection using lidar and object-based image analysis in Beaufort County, South Carolina. Southeast. Archaeol. 2019, 38, 23–37. [Google Scholar] [CrossRef]
  14. Corns, A.; Shaw, R. High resolution 3-dimensional documentation of archaeological monuments & landscapes using airborne LiDAR. J. Cult. Herit. 2009, 10, 72–77. [Google Scholar] [CrossRef]
  15. Sittler, B.; Schellberg, S. The potential if LiDAR in assessing elements of cultural heritage hidden under forest canopies of overgrown by vegetation: Possibilities and limits in detecting microrelief structures for archaeological surveys. In From Space to Place: 2nd International Conference on Remote Sensing in Archaeology, Proceedings of the 2nd International Workshop, CNR, Rome, Italy, 4–7 December 2006; Campana, S., Forte, M., Eds.; BAR British Archaeological Reports International Series; Archaeopress: Oxford, UK, 2006; pp. 117–122. [Google Scholar]
  16. Crow, P.; Benham, S.; Devereux, B.J.; Amable, G.S. Woodland vegetation and its implications for archaeological survey using LiDAR. For. Int. J. For. Res. 2007, 80, 241–252. [Google Scholar] [CrossRef] [Green Version]
  17. Devereux, B.J.; Amable, G.S.; Crow, P.; Cliff, A.D. The potential of airborne lidar for detection of archaeological features under woodland canopies. Antiquity 2005, 79, 648–660. [Google Scholar] [CrossRef]
  18. Rajani, M.B.; Kasturirangan, K. Multispectral remote sensing data analysis and application for detecting moats around medieval settlements in South India. J. Indian Soc. Remote Sens. 2014, 42, 651–657. [Google Scholar] [CrossRef]
  19. Ronchi, D. La Villa di Domiziano sul Lago di Paola: Fotointerpretazione ed Indagini Archeologiche; Ceraudo, G., Ed.; Istituto poligrafico e Zecca dello Stato, Libreria dello Stato: Roma, Italy, 2010; pp. 347–349. Available online: https://www.academia.edu/38199498/La_Villa_di_Domiziano_sul_lago_di_Paola_Fotointerpretazione_ed_indagini_archeologiche (accessed on 29 November 2020).
  20. Ronchi, D. La Villa di Domiziano sul Lago di Paola a Sabaudia (Latina): Le nuove indagini presso la zona nord. In Lazio e Sabina 8, Proceedings of the Ottavo Incontro di Studi sul Lazio e la Sabina, Roma, Italy, 30 March–1 April 2011; Ghini, G., Mari, Z., Eds.; Edizioni Quasar: Rome, Italy, 2012; pp. 393–401. [Google Scholar]
  21. Livi, V.; Righi, R. Studi e Ricerche sul Patrimonio Archeologico del Parco Nazionale del Circeo: Atti del Convegno Promosso dall’Ufficio Gestione Beni ex ASFD di Sabaudia in Occasione del Settantennale del Parco Nazionale del Circeo, Sabaudia, 27 marzo 2004. Available online: https://opac.sbn.it/opacsbn/opac/iccu/scheda.jsp?bid=IT%5CICCU%5CRMS%5C1327869 (accessed on 1 September 2019).
  22. Vivaldi, A. Il rilievo dei “Balnea ad esedre” della Villa di Domiziano sul Lago di Paola a Sabaudia (Latina), acquisizioni preliminari: Le fasi costruttive del complesso. In Lazio e Sabina 9, Proceedings of the Nono Incontro di Studi sul Lazio e la Sabina, Roma, Italy, 27 March–29 March 2012; Ghini, G., Mari, Z., Eds.; Edizioni Quasar: Rome, Italy, 2013; pp. 385–391. [Google Scholar]
  23. Cambi, F.; Tentennato, N. Introduzione All’archeologia dei Paesaggi; La Nuova Italia scientifica: Rome, Italy, 1994. [Google Scholar]
  24. Campana, S.; Musson, C.; Rog, P. In Volo nel Passato. Aerofotografia e Cartografia Archeologica; Edizioni All’Insegna del Giglio: Firenze, Italy, 2005; ISBN 9788878144996. [Google Scholar]
  25. Wilson, D.R. Air Photo Interpretation for Archaeologists; Stroud, T., Ed.; Tempus Publishing: Stroud, Gloucestershire, UK, 2000; ISBN 978-0752414980. [Google Scholar]
  26. Picarreta, F. Manuale di Fotografia Aerea: Uso Archeologico; L’Erma di Bretschneider (Studia Archaeologica 2): Roma, Italy, 1994; ISBN 9788870626025. [Google Scholar]
  27. Barrile, V.; Bilotta, G.; Meduri, G. Archaeological investigations with TLS and GPR surveys and geomatics techniques. In Proceedings of the Towards Horizon 2020—33rd EARSeL Symposium, Matera, Italy, 3–6 June 2013. [Google Scholar]
  28. Haala, N.; Rothermel, M. Dense multi-stereo matching for high quality digital elevation models. PFG Photogramm. Fernerkundung Geoinf. 2012, 2012, 331–343. [Google Scholar] [CrossRef]
  29. Leberl, F.; Irschara, A.; Pock, T.; Meixner, P.; Gruber, M.; Scholz, S.; Wiechert, A. Point clouds: Lidar versus 3D vision. Photogramm. Eng. Remote Sens. 2010, 76, 1123–1134. [Google Scholar] [CrossRef]
  30. Capra, A.; Gandolfi, S.; Laurencich, L.; Mancini, F.; Minelli, A.; Orsini, C.; Rodríguez, A. Multidisciplinary approach for archeological survey: Exploring GPS method in landscape archeology studies. Mancini and C. Orsini worked on data and figures and edited the present paper. A. Capra and L. Laurencich coordinated the fieldwork and collabo. J. Cult. Herit. 2002, 3, 93–99. [Google Scholar] [CrossRef]
  31. Doneus, M.; Briese, C.; Studnicka, N. Analysis of full-waveform ALS data by simultaneously acquired TLS data: Towards an advanced DTM generation in wooded areas. Geography 2010. [Google Scholar]
  32. Opitz, R.S. Interpreting Archaeological Topography: 3D Data, Visualisation and Observation; Oxbow Books: Oxford, UK, 2013. [Google Scholar]
  33. Stott, D.; Boyd, D.S.; Beck, A.; Cohn, A.G. Airborne LiDAR for the detection of archaeological vegetation marks using biomass as a proxy. Remote Sens. 2015, 7, 1594–1618. [Google Scholar] [CrossRef] [Green Version]
34. Powlesland, D. Identifying, mapping and managing the unmanageable: The implications of long term multi-sensor research into the archaeology of the Vale of Pickering, Yorkshire, England. In Space, Time, Place. Proceedings of the International Conference 3rd Remote Sensing in Archaeology, Tiruchirappalli, Tamil Nadu, India, 17–21 August 2009; Archaeopress: Oxford, UK, 2010; pp. 9–16. [Google Scholar]
  35. Agapiou, A.; Lysandrou, V.; Hadjimitsis, D.G. Optical remote sensing potentials for looting detection. Geosciences 2017, 7, 98. [Google Scholar] [CrossRef] [Green Version]
  36. Agudo, P.U.; Pajas, J.A.; Pérez-Cabello, F.; Redón, J.V.; Lebrón, B.E. The potential of drones and sensors to enhance detection of archaeological cropmarks: A comparative study between multi-spectral and thermal imagery. Drones 2018, 2, 29. [Google Scholar] [CrossRef] [Green Version]
37. Koucká, L.; Kopačková, V.; Fárová, K.; Gojda, M. UAV mapping of an archaeological site using RGB and NIR high-resolution data. Proceedings 2018, 2, 5164. [Google Scholar] [CrossRef] [Green Version]
  38. Bennett, R.; Welham, K.; Hill, R.A.; Ford, A.L.J. The application of vegetation indices for the prospection of archaeological features in grass-dominated environments. Archaeol. Prospect. 2012, 19, 209–218. [Google Scholar] [CrossRef]
  39. Gitelson, A.A.; Stark, R.; Grits, U.; Rundquist, D.; Kaufman, Y.; Derry, D. Vegetation and soil lines in visible spectral space: A concept and technique for remote estimation of vegetation fraction. Int. J. Remote Sens. 2002, 23, 2537–2562. [Google Scholar] [CrossRef]
  40. Hunt, E.R.; Cavigelli, M.; Daughtry, C.S.T.; Mcmurtrey, J.E.; Walthall, C.L. Evaluation of digital photography from model aircraft for remote sensing of crop biomass and nitrogen status. Precis. Agric. 2005, 6, 359–378. [Google Scholar] [CrossRef]
  41. Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially located platform and aerial photography for documentation of grazing impacts on wheat. Geocarto Int. 2001, 16, 65–70. [Google Scholar] [CrossRef]
  42. Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.L.; Bareth, G. Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87. [Google Scholar] [CrossRef]
  43. Meyer, G.; Neto, J. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293. [Google Scholar] [CrossRef]
44. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring vegetation systems in the Great Plains with ERTS. In Proceedings of the Third ERTS Symposium, NASA, Washington, DC, USA, 10–14 December 1973; pp. 309–317. [Google Scholar]
  45. Lasaponara, R.; Masini, N. Identification of archaeological buried remains based on the normalized difference vegetation index (NDVI) from Quickbird satellite data. IEEE Geosci. Remote Sens. Lett. 2006, 3, 325–328. [Google Scholar] [CrossRef]
  46. Kalayci, T.; Lasaponara, R.; Wainwright, J.; Masini, N. Multispectral contrast of archaeological features: A quantitative evaluation. Remote Sens. 2019, 11, 913. [Google Scholar] [CrossRef] [Green Version]
  47. Hunt, E.R.; Doraiswamy, P.C.; McMurtrey, J.E.; Daughtry, C.S.T.; Perry, E.M.; Akhmedov, B. A visible band index for remote sensing leaf chlorophyll content at the canopy scale. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 103–112. [Google Scholar] [CrossRef] [Green Version]
  48. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  49. Cerra, D.; Agapiou, A.; Cavalli, R.M.; Sarris, A. An objective assessment of hyperspectral indicators for the detection of buried archaeological relics. Remote Sens. 2018, 10, 500. [Google Scholar] [CrossRef]
  50. Easterday, K.; Kislik, C.; Dawson, T.E.; Hogan, S.; Kelly, M. Remotely sensed water limitation in vegetation: Insights from an experiment with unmanned aerial vehicles (UAVs). Remote Sens. 2019, 11, 1853. [Google Scholar] [CrossRef] [Green Version]
  51. Daughtry, C.S.T.; Walthall, C.L.; Kim, M.S.; de Colstoun, E.B.; McMurtrey, J.E. Estimating corn leaf chlorophyll concentration from leaf and canopy reflectance. Remote Sens. Environ. 2000, 74, 229–239. [Google Scholar] [CrossRef]
  52. Haboudane, D.; Miller, J.R.; Tremblay, N.; Zarco-Tejada, P.J.; Dextraze, L. Integrated narrow-band vegetation indices for prediction of crop chlorophyll content for application to precision agriculture. Remote Sens. Environ. 2002, 81, 416–426. [Google Scholar] [CrossRef]
53. ENAC. Regolamento “Mezzi Aerei a Pilotaggio Remoto”, 3rd ed., 11 November 2019. Available online: https://www.enac.gov.it/la-normativa/normativa-enac/regolamenti/regolamenti-ad-hoc/regolamento-mezzi-aerei-pilotaggio-remoto (accessed on 29 November 2020).
  54. Jin, H.; Eklundh, L. In situ calibration of light sensors for long-term monitoring of vegetation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3405–3416. [Google Scholar] [CrossRef]
  55. Domenzain, L.M.; Fallet, C. Necessary steps for the systematic calibration of a multispectral imaging system to achieve a targetless workflow in reflectance estimation: A study of Parrot SEQUOIA for precision agriculture. In Proceedings of the Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXIV, Orlando, FL, USA, 17–19 April 2018; Velez-Reyes, M., Messinger, D.W., Eds.; SPIE: Bellingham, WA, USA, 2018; Volume 10644, pp. 354–361. [Google Scholar]
  56. Barbarella, M. Digital technology and geodetic infrastructures in Italian cartography. Città Storia 2014, 9, 91–110. [Google Scholar]
  57. Agisoft LLC. Agisoft Metashape User Manual: Professional Edition, Version 1.5. 2019. Available online: https://www.agisoft.com/pdf/metashape-pro_1_5_en.pdf (accessed on 1 September 2019).
58. Pix4D. Pix4Dmapper. Available online: https://www.pix4d.com/product/pix4dmapper-photogrammetrysoftware (accessed on 9 September 2020).
59. Franzini, M.; Ronchetti, G.; Sona, G.; Casella, V. Geometric and radiometric consistency of Parrot Sequoia multispectral imagery for precision agriculture applications. Appl. Sci. 2019, 9, 5314. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The “Area del Bacino Absidato” and the vegetation of maritime pines covering Domitian’s villa.
Figure 2. Test area: (a) location map, (b) the “Domitian Villa” (Italy), (c) “Area dei Balnea ad Esedre” and (d) an example of earthwork inside the pine forest.
Figure 3. Representation of, and differentiation between, cropmark and earthwork anomalies.
Figure 4. The ScanFly system with LiDAR Velodyne Ultra Puck sensor.
Figure 5. Sensefly fixed-wing eBee Plus with S.O.D.A. 3D and Parrot Sequoia cameras.
Figure 6. Point cloud from LiDAR (a) before vegetation filtering (Digital Surface Model, DSM) and (b) after filtering (Digital Terrain Model, DTM).
Figure 7. Flight plan in eMotion for RGB image acquisition with the Sensefly eBee X.
Figure 8. Flight plan in eMotion for multispectral image acquisition with the Sensefly eBee X.
Figure 9. Block diagram of the Unmanned Aerial Vehicle (UAV) workflow in its different stages: data acquisition, data processing, and data product generation.
Figure 10. Vegetation indices and orthophotos extracted from (a) the S.O.D.A. 3D and (b) the Parrot Sequoia.
Figure 11. A comparison of anomalies attributable to terracing systems as detectable in (a) the LiDAR product and (b) the RGB and vegetation index maps. In red are the structures surveyed with photogrammetry and TLS; gray hatching indicates the terracing system.
Figure 12. In red are the structures detected with photogrammetry and TLS, and in gray is the position of the anomalies identified with LiDAR and multispectral photogrammetry.
Figure 13. Anomaly 1, comparison between (a) LiDAR anomalies and (b) anomalies visible with vegetation index.
Figure 14. Anomaly 2, comparison between (a) LiDAR anomalies and (b) anomalies visible with vegetation index.
Figure 15. Anomaly 3, comparison between (a) LiDAR anomalies and (b) anomalies visible with vegetation index.
Figure 16. Anomaly 4, comparison between (a) LiDAR anomalies and (b) anomalies visible with vegetation index.
Figure 17. Anomaly 5, comparison between (a) LiDAR anomalies and (b) anomalies visible with vegetation index.
Figure 18. Anomaly 6, comparison between (a) LiDAR anomalies and (b) anomalies visible with vegetation index.
Figure 19. Anomaly 7, comparison between (a) LiDAR anomalies and (b) anomalies visible with vegetation index.
Figure 20. Anomaly 8, comparison between (a) LiDAR anomalies and (b) anomalies visible with vegetation index.
Figure 21. Anomaly 9, comparison between (a) LiDAR anomalies and (b) anomalies visible with vegetation index.
Table 1. Summary table of detection techniques by type of anomaly.

| Anomaly Type | RGB Photogrammetry | Multispectral | LiDAR | SLAM | TLS |
|---|---|---|---|---|---|
| Feature Anomalies | A | – | B | B | C |
| Earthwork Anomalies | A | – | X | – | D |
| Cropmarks | X | X | E | – | – |

(A) Only in case of excellent visibility; (B) in case of little or no visibility, noisy returns; (C) in case of little or no visibility, slower but more precise acquisitions; (D) slow acquisitions, only for small extensions or refinements; (E) light detection and ranging (LiDAR) intensities, if properly calibrated, can be used to map moisture or radiometric differentiations caused by undersoil features in vegetated areas; (X) effective procedure in the majority of cases. SLAM = Simultaneous Localization and Mapping; TLS = Terrestrial Laser Scanner.
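Footnotes (B)–(D) hinge on the same practical question: whether a technique can recover the ground surface under canopy at all. The vegetation filtering that turns the LiDAR DSM into the DTM of Figure 6 can be approximated, purely for illustration, by a grid-minimum filter that keeps the lowest return in each XY cell as a ground candidate. The sketch below is a minimal Python/NumPy version under that assumption; the function name and the 0.5 m cell size are our choices, not values from this paper, and a production workflow would use a dedicated ground classifier (e.g., a progressive morphological or cloth-simulation filter) instead.

```python
import numpy as np

def grid_min_dtm(points, cell=0.5):
    """Crude DSM-to-DTM step: keep the lowest LiDAR return in each
    XY cell as a ground candidate and grid it as an elevation raster.

    points: (N, 3) float array of x, y, z returns.
    Returns a 2D array of minimum elevations (NaN marks empty cells)."""
    xy_min = points[:, :2].min(axis=0)
    # Map each return to an integer grid cell of the chosen size
    ij = np.floor((points[:, :2] - xy_min) / cell).astype(int)
    nrows, ncols = ij.max(axis=0) + 1
    # One flat index per cell; ufunc.at keeps the minimum z on duplicates
    flat = ij[:, 0] * ncols + ij[:, 1]
    zmin = np.full(nrows * ncols, np.inf)
    np.minimum.at(zmin, flat, points[:, 2])
    return np.where(np.isinf(zmin), np.nan, zmin).reshape(nrows, ncols)
```

Subtracting a smoothed copy of such a grid from itself (a local relief model) is a common way to make metre-scale earthwork anomalies stand out, which is essentially what the hillshaded LiDAR panels in Figures 13–21 convey.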
Table 2. Comparison among anomaly detection procedures (X = anomaly is visible; X(1) = visible only in correspondence of feature anomalies).

| Detection System | Index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| Multispectral Maps | GNDVI | X(1) | X | X | – | – | – | X | X | X |
| | LCI | X(1) | X | X | – | – | – | X | X | X |
| | MCARI | X(1) | X | – | – | – | – | X | – | X |
| | NDRE | X(1) | X | X | – | – | – | X | X | X |
| | NDVI | X(1) | X | X | – | – | – | X | X | X |
| RGB Maps | EXG | – | X | – | – | – | – | – | – | X |
| | GLI | X(1) | X | – | – | – | – | – | X | X |
| | RGBI | X(1) | X | – | – | – | – | X | X | X |
| | TGI | – | X | – | – | – | – | – | – | X |
| | VARI | X(1) | X | X | – | – | – | X | X | X |
| LiDAR Product | — | X | X | X | X | X | X | X | X | X |
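Most of the indices compared in Table 2 reduce to one-line band arithmetic, so they can be recomputed directly from exported reflectance maps. The following Python/NumPy sketch is illustrative only: the band names, the Sequoia-style band set (green, red, red edge, NIR), and the 670/550/480 nm band centres assumed for TGI are assumptions for illustration rather than values taken from this paper; EXG, LCI, and RGBI are omitted for brevity.

```python
import numpy as np

def nd(a, b):
    """Normalized difference (a - b) / (a + b) on float reflectance
    arrays, returning NaN wherever the denominator is zero."""
    s = a + b
    return np.divide(a - b, s, out=np.full_like(s, np.nan), where=s != 0)

# Multispectral indices
def ndvi(nir, red):       return nd(nir, red)        # Rouse et al. [44]
def gndvi(nir, green):    return nd(nir, green)      # Gitelson et al. [48]
def ndre(nir, red_edge):  return nd(nir, red_edge)

def mcari(red_edge, red, green):
    # Daughtry et al. [51], with red_edge/red/green standing in for the
    # narrow 700/670/550 nm bands (an approximation); assumes red > 0
    return ((red_edge - red) - 0.2 * (red_edge - green)) * (red_edge / red)

# RGB-only indices
def vari(red, green, blue):
    d = green + red - blue
    return np.divide(green - red, d,
                     out=np.full_like(d, np.nan), where=d != 0)

def gli(red, green, blue):
    d = 2 * green + red + blue
    return np.divide(2 * green - red - blue, d,
                     out=np.full_like(d, np.nan), where=d != 0)

def tgi(red, green, blue):
    # Hunt et al. [47], assuming band centres near 670/550/480 nm
    return -0.5 * (190.0 * (red - green) - 120.0 * (red - blue))
```

Because every index is computed pixel-wise on co-registered rasters, anomaly visibility can then be assessed per map, as summarized in Table 2.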