Article

Analysis and Precision of Light Detection and Ranging Sensors Integrated in Mobile Phones as a Framework for Registration of Ground Control Points for Unmanned Aerial Vehicles in the Scanning Technique for Building Information Modelling in Archaeological Sites

by Juan Moyano 1,*, Juan E. Nieto-Julián 1, María Fernández-Alconchel 1, Daniela Oreni 2 and Rafael Estévez-Pardal 3
1 Department of Graphical Expression and Building Engineering, University of Seville, Ave. Reina Mercedes 4A, 41012 Seville, Spain
2 Department of Architecture, Built Environment and Construction Engineering, Politecnico di Milano, Via Bonardi 9, 20133 Milano, Italy
3 Department of Graphical Engineering, University of Seville, Ave. Reina Mercedes 4A, 41012 Seville, Spain
* Author to whom correspondence should be addressed.
Drones 2023, 7(7), 477; https://doi.org/10.3390/drones7070477
Submission received: 9 June 2023 / Revised: 5 July 2023 / Accepted: 15 July 2023 / Published: 20 July 2023

Abstract: The protection of heritage sites is one of the key responsibilities our civilisation faces. For this reason, great efforts have been invested in protecting and preserving movable and immovable property of historical value, such as the archaeological sites scattered throughout southern Iberia (Spain) in the form of dolmens and negative structures dug into the ground, which constitute a good sample of the megalithic culture of southern Spain. Studying, managing and preserving these archaeological monuments, considered a set of cultural assets, requires various techniques and methodologies that facilitate the acquisition of three-dimensional geometric information. The Scan-to-BIM approach has become one of the most up-to-date workflows for achieving these objectives. The appearance of LiDAR techniques, and recently their incorporation into smartphones through integrated sensors, is revolutionising the world of 3D scanning. However, the precision of these techniques is an issue that has yet to be addressed by the scientific community. This research therefore proposes a framework, based on experimental measurement, comparison and knowledge of the limitations of this technology, for determining the precision of these smartphones, specifically the iPhone 13 Pro, as a measurement instrument for establishing control points with the aid of photogrammetry by unmanned aerial vehicles (UAVs) in archaeological sites. The results demonstrate a residual uncertainty of ±5 mm in the capture of GCPs with the mobile phone's LiDAR (light detection and ranging) sensor, and the measurements deviated within a range of (0.021, 0.069) m for distances between GCPs of 0 to 28 m.

1. Introduction

The Industrial Revolution 4.0 is changing the way the industry and construction sectors are perceived, making the use of operational and information technology increasingly important. The paradigm of the fourth industrial revolution involves data management and the interconnection between machines, objects and processes [1]. In this context, digitisation covers both architectural and archaeological fields and is an integral process in the convergence towards industrial digitisation. In particular, the geometric characterisation of the complex forms present in archaeology and related fields implies a detailed and exhaustive description of the geometry of the objects that make up the configuration of the structures.
The idea of conservation of Cultural Heritage has to do with qualification systems where relationships of internal order (principles governing geometric shape) and relationships of shapes and volumes are described. Therefore, geometric characterisation implies the use of 3D digitisation techniques and instruments and interactive environments that bring us closer to a Historic Building Information Model (HBIM) [2] based on geomatic techniques that generate precise and detailed data on the shape, surface, texture and volume, among other geometric properties of the object.
In this sense, acquiring records of the complex shapes that occur in archaeology has traditionally required technologies with a high acquisition price, such as terrestrial laser scanning (TLS).
However, since 12 October 2020, portable devices such as mobile phones and tablets have emerged that incorporate a LiDAR (light detection and ranging) sensor in different versions of the iPhone, making this technology accessible to many people and professional fields. The sensor integrated in this 2020 Apple release was designed for Augmented Reality (AR), without specifications on its relative and absolute accuracies. Despite this, the versatility and applicability of the system challenged the geomatics and telemetry specialist community [3].
On the other hand, Structure from Motion (SfM) short-range photogrammetry has a highly developed space in the fields of architecture and archaeology [4,5,6] due to the enormous amount of data it provides. When this process is developed by unmanned aerial vehicles (UAVs), it acquires greater applications, especially in the registration of building roofs, where the surfaces of the zenithal planes are important since terrestrial laser scanning is not capable of covering them.
Fields such as the dynamic study of the earth's surface have been explored in various works, an example being the one developed by Duró et al. [7], who studied erosion on lake shores by means of UAVs, placing the ground control points (GCPs) in the alluvial plain. This type of study has also been carried out in work related to civil engineering [8] and safety at construction sites [9].
UAV-based photogrammetry typically uses ground control points to aid in measurements and establish precise scales in the model. If the points are measured with a total station, georeferencing is indirect, referring to methods that assign world-frame coordinates to 3D measurements collected in a relative local reference frame.
In the case of ground control points (GCPs) that have been taken with a global navigation satellite system (GNSS) receiver, the GCPs will be georeferenced in a universal navigation system. In the field of archaeology, the precision of unmanned aerial flights is an important factor, which involves flight plan indicators and georeferencing [10].
The advantages of unmanned aerial vehicles over the terrestrial laser scanner are well known, owing to the low cost of some flight equipment and the speed of data acquisition, although this advantage is relative if one considers the precision of current personal laser scanners [11], the speed of their records and the time needed to measure the GCPs.
Regarding the study of the precision of the sensors integrated in mobile phones, one of the first studies was carried out in the field of geoscience [12], examining small objects and comparing the results with photogrammetry techniques. The fourteen samples chosen presented a singular morphology with sharp edges, and the field measurements taken for their registration did not exceed fifty centimetres in length.
In the case of three-dimensional models for small objects studied by some authors [13], it is stated that the limitation of the method is found at a length of 5 m. However, the case studies are varied, from the facade of a building to the sculpture and round bust of a piece of art.
One of the most interesting works on light detection and ranging (LiDAR) precision is the work of Costantino et al. [14], in which they perform a sampling of ten objects with an analysis of the quality of the cloud of points obtained in relation to the variation in the surface, loss of planarity and omnivariance.
Another interesting work is the one carried out by Chase et al. [15], who studied the accuracy of the sensor integrated in the iPhone 12/13 Pro in an interior space with a length of 25 m and with different control points that took as reference the TLS and the total station (TS).
In addition, in the framework of architectural topography, Spreafico et al. [3] evaluated the LiDAR sensor for fast 3D mapping.
In summary, there are some very current studies focused on the precision of the aforementioned sensor, which range from the analysis of small objects to interior spaces of greater dimension, reaching its maximum expression at a length of 25 m. A basic question that the researchers expose in the work of Kršák et al. [16] is the low precision of 3D models from UAV photogrammetry. It is necessary to validate the sets of points obtained through algorithms to know in advance the level of precision that one is going to achieve in the records and if these are specially intended to achieve planimetry or to obtain results in the Scan-to-BIM process.
On the other hand, the possibility of using portable devices such as mobile phones or tablets in Pro versions that are relatively cheap and easy to transport presents a new challenge in the field of geomatics, replacing large-scale TS or TLS equipment. Therefore, an operator with a simple drone and a mobile phone could perform a suitable data capture record for Scan-to-BIM in archaeological sites.
For all these reasons, this research proposes a framework where, through measurement and comparison techniques and knowledge of the limitations of the experimental methods, the precision of using the iPhone 13 Pro as a measurement element to establish control points is determined. The aim is to create a framework that assists in photogrammetry through unmanned aerial vehicles (UAVs) in works related to archaeology and heritage architecture. For this purpose, a set of archaeological sites was investigated through different work procedures that will be seen below.

2. Case Study and 3D Processing

The megalithic ensembles of Los Gabrieles (Valverde del Camino), Spain, are among the most relevant spaces in terms of the quality and number of megalithic monuments. They are located in the province of Huelva (Spain) (Figure 1a). Huelva's megalithism developed as a cultural phenomenon tied to the funerary world of the Copper Age [17]. It was after Obermaier's discoveries at the Dolmen del Soto in 1924 that this type of construction became popular, especially due to the appearance of grave goods and ceramic artefacts.
As a result of the study of funerary architecture, researchers and archaeologists have agreed that two types of culture were established in Huelva: On the one hand, there is the group of shepherds of the Neolithic tradition who inhabited the Andévalo and the Sierra regions and, on the other hand, there are the tombs with false domes that relate to the culture of the Millares [17].
The territorial context in which these sites are found is very similar to that of the others in the Sierra de Huelva and Seville regions, where mountain orography and forest spaces predominate. The georeferenced data were processed in a Geographic Information System using the QGIS software [18], which allowed us to define priorities in the cartographic information (Figure 1b).
Massive Data Capture Systems (MDCS) like aerial photogrammetry processes are ideal for capturing geometry from nadiral planes, which ground-based laser scans cannot reach [19]. Therefore, from a technical point of view, it improves the capture of the process that we call Scan-to-BIM, that is, the obtaining of construction information models of historical buildings or, in this case, archaeological sites.
For the specific aerial photogrammetry work at these sites, we used two small dolmens with an open morphological structure and small dimensions located in the Los Gabrieles complex in the municipality of Huelva. We could thus observe the process of photographic shots through two intelligent flights: the image in Figure 2 is a shot at a height of 12 m, and Figure 3 shows a second intelligent flight mode based on zenithal and oblique exposures with a minimum height of 24 m. The 3D photogrammetric models extracted from the UAVs make it possible to collect a large amount of data in a short time but, like any system, have their limitations: although they cover the overhead space, they do not fully capture the terrestrial geometry. To estimate the accuracies achieved, one of the most common methods from a scientific point of view is to compare the SfM model with the TLS.
The aerial photogrammetry point cloud was processed using v. 1.8.5 of the Agisoft Metashape software [20]. For the study of the dolmens, seven control points were taken using wooden targets of 37.5 × 25.3 cm strategically placed (i.e., uniformly distributed) on the ground. The control points were measured with a LEICA Flexline TS02 total station. To cover the entire surface, the GCPs were distributed with a diagonal geometry and outside the surface of interest of the dolmens. The coordinate system was established through axes parallel and perpendicular to some established axes, with the local XYZ coordinates set to 100, 100 and 10 m, respectively. Subsequently, the coordinates of the elements in each space were recorded to achieve a uniform set of points.
Accordingly, after importing the images to a Chunk, Agisoft Metashape performed an automatic calibration process using an exchangeable image file (EXIF) [21], detecting each of the images. The processing method used is the one described by Westoby et al. [22], which is carried out from the alignment of the photographs to the postprocessing with mesh generation. The processing parameters of said software were set to “high accuracy” for dense cloud extraction. Once the dense cloud extraction results for the two processes were obtained—one dense cloud with images at a height of 12 m, obtaining a set of 7,046,235 points, and another processing at a height of 24 m with oblique images, obtaining a dense point cloud of 2,281,026 points—the processing was also repeated for Dolmen number 3, which is very close to number 4 (they are separated by approximately ninety metres).

3. Experimental Process Equipment

One of the unmanned aerial vehicles with enormous potential for field work in inspection and photogrammetric records is the DJI Mavic 2 Enterprise Dual drone, whose takeoff weight is under 910 g and which has a GPS+GLONASS GNSS system with an operating frequency of 2.400–2.4835/5.725–5.850 GHz (Figure 4). It features two cameras contained in the same gimbal: (i) an infrared M2ED camera with an uncooled VOx microbolometer and a 57° HFOV lens with f/1.1 aperture and a 12-micron pixel size; (ii) an M2ED visual camera with a 1/2.3″ CMOS sensor with 12 M effective pixels and a lens with an FOV of approximately 85° (equivalent to a 35 mm format) with f/2.8 aperture. Both sensors output .jpeg images and MP4 video.
The operator, through the Smart Controller, can see the two spectra presenting different views from each of the sensors in the drone's gimbal. The DJI has four intelligent flight plan modes, including the "oblique plan", but the minimum flight height in this plan is 25 m, which implies a ground sampling distance (GSD) of 0.76 cm/px for images acquired with the UAV. The other configured flight plan is a flight mode in which a minimum height of 12 m can be achieved, reducing the GSD to 0.43 cm/px.
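The GSD figures quoted above follow from the standard ground-sampling-distance relation between sensor size, focal length, image width and flight height. The sketch below uses illustrative parameters (the sensor width, focal length and image width are assumptions for a generic 1/2.3″ camera, not the published M2ED specification):

```python
def gsd_cm_per_px(sensor_width_mm, focal_length_mm, flight_height_m, image_width_px):
    """Ground sampling distance: the ground footprint of one pixel, in cm/px."""
    return (sensor_width_mm * flight_height_m * 100.0) / (focal_length_mm * image_width_px)

# Illustrative parameters only (assumed, not the exact M2ED camera specification):
sensor_w, focal, img_w = 6.17, 4.5, 4000
for h in (12, 25):
    print(f"height {h} m -> GSD {gsd_cm_per_px(sensor_w, focal, h, img_w):.2f} cm/px")
```

With these assumed parameters, a 12 m flight gives a GSD of the same order of magnitude as the 0.43 cm/px reported; exact values depend on the real camera specification, and the GSD scales linearly with flight height.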
In the experimental work, the TLS was used as an auxiliary record. With the appearance of new geomatic technology equipment such as Personal Laser Scanning (PLS), which simplifies data capture and the manageability of records in architectural spaces, it was decided to carry out a first survey through BLK360, which uses digitalisation technology in waveform (WFD) with a maximum scanning speed of 360,000 points/second.
There are three HDR digital cameras with a colour sensor and fixed focal length (single image 2592 × 1944 pixels, 60° × 45° (vx hz), full-dome scanning of 30 images and automatic gap rectification, 150 Mpx, 360° × 300°), as well as an infrared thermal camera (160 × 120 pixel single image, 71° × 56° (vx hz), 10-image full-dome scan, 360° × 70°), all four included in the kit. This instrument achieves a range accuracy, according to the manufacturer, of 4 mm at 10 m and 7 mm at 20 m [11].
The result obtained is a point cloud of 1,350,000 points, confined to the space of the archaeological site. The use of two massive data acquisition techniques allows us to guarantee that the entire surface of the dolmens is covered. The other data capture technique, based on Structure from Motion (SfM) images, is well known to the scientific and academic community [23] and allows for the construction of 3D models widely used in the field of archaeological architecture [4]. The photogrammetry workflow must manage a data plan based on measurement control points so that they are incorporated into Agisoft's Metashape software and, at the intermediate postprocessing stage of the images, include the measurements that scale the final model.
Control points are widely used for formal architectural control, in which an analytical model can be made after geometric analysis. Dos Santos et al. [24] revealed the complexity of the dimensional control of the control points in the georeferencing of a record with a terrestrial laser scanner through a scenario of interior and exterior spaces. Within this complexity, several authors [25] examined the georeferencing and geometric quality of an SfM record through multiple combinations of ground control points (GCPs); their conclusions established the proportionality of the control points in a uniform distribution. In our case study, the control points were taken with the LEICA Flexline TS02 total station, which has a precision of 2 mm [26]. The photogrammetry processing methodology used is fully described in previous works [27,28], leaving, in this case, the workflow well defined. Regarding the orientation of the base station, it was fixed by means of the local XYZ coordinates established at a single point of the project, which were taken as 100, 100 and 10 m, respectively.
Regarding the iPhone 13 Pro, its technical characteristics are the same as those of its predecessor, the 12 Pro version, as described by Monsalve et al. [29], a reliable source since they in turn draw on their predecessors Luetzenburg et al. [12]. The LiDAR (light detection and ranging) sensor of this device operates at 8XX nm wavelengths, using vertical-cavity surface-emitting lasers to emit the laser, and the direct time of flight of the pulses is measured with a Single-Photon Avalanche Diode (SPAD). The iPhone 13 Pro captures a mesh through a mobile app such as "Polycam", which requires a paid subscription to export the data. Another well-known and, in this case, free app is "3d Scanner App". There are other applications, such as Scaniverse [30], but the most used are those specified above, which export the data into various point cloud files such as .e57, .ply or .pts, among others.
The "3d Scanner" application presents six capture modes, among which LiDAR Advanced stands out as a prominent mode in which the resolution of the point cloud can be set in a range of 5 to 50 millimetres. When measuring with Polycam, the software captures a record through a triangular mesh on the surface of the objects. In this case study, since the objective is to measure the different distances between targets used as GCPs, the field of view follows the trajectory marked by the points. It must be remembered that the iPhone 13 Pro with a LiDAR sensor is equipped with a barometer, a three-axis gyroscope and an accelerometer, which the data capture software uses to determine the three spatial coordinates in the local reference system.

4. Limitations of the Methods Used

Stringent and reproducible measurements based on SfM must follow a flow order (established by researchers such as James et al. [31]) that determines the parameters to be taken into account, and any evaluation of the quality of the data must include a comparison with the coordinates of the verification points, the surfaces or the measures of length. A split test to determine systematic errors is also admissible; a split test aims to produce two sets of data with different shapes in the survey design. For example, it is of interest to analyse the differences between the distances measured between the GCPs and those obtained with a total station, in terms of the differential along the three coordinate axes (X, Y, Z), so that a data set can also be taken into account when comparing absolute distances between pairs of points.
In the measurements carried out, there is an uncertainty added to the method that we could call residual uncertainty. This uncertainty stems from the absence of points recorded by the UAV on the crosshairs of the target and, likewise, from the absence of points on the crosshairs in the data captured with the iPhone 13 Pro scan and the TLS. In most cases, the best estimate is the arithmetic mean q̄ of the n observations, according to Equation (1).
q̄ = (1/n) Σ_{k=1}^{n} q_k
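Equation (1) can be evaluated together with the standard uncertainty of the mean, one common way to quantify the residual uncertainty of repeated observations. A minimal sketch; the sample values below are invented for illustration, not the paper's data:

```python
import math

def best_estimate(observations):
    """Arithmetic mean of n observations (Equation (1)) and the
    standard uncertainty of the mean, s / sqrt(n)."""
    n = len(observations)
    mean = sum(observations) / n
    # Sample standard deviation (n - 1 in the denominator).
    s = math.sqrt(sum((q - mean) ** 2 for q in observations) / (n - 1))
    return mean, s / math.sqrt(n)

# Hypothetical repeated measurements of one target-to-target distance (m):
obs = [9.736, 9.739, 9.734, 9.738, 9.737]
mean, u = best_estimate(obs)
print(f"mean = {mean:.4f} m, uncertainty of the mean = {u:.4f} m")
```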
One of the parameters to take into account in the study of precision is the quality of the point cloud. This factor is established by means of a verification procedure on the density of the point cloud, measured in points per cm², and can be determined from the data set obtained from one of the targets, which serves as the reference of the measurement model. Costantino et al. [14] determined three parameters that influence the quality of the point cloud: (i) the variation in the surface, (ii) the planarity and (iii) the omnivariance. These three parameters determine aspects that can influence the construction of a 3D model, that is, the transformation of the point cloud into a digital mesh; however, for the precision of the data sets that we are going to observe, the most important parameter is the omnivariance, that is, the degree of inhomogeneity of the point cloud in the three dimensions (Equation (2)), or what we could specify as the density of the point cloud.
Oλ = (λ1 ⋅ λ2 ⋅ λ3)^(1/3)
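In Equation (2), λ1, λ2 and λ3 are the eigenvalues of the 3 × 3 covariance matrix of a point neighbourhood, so the omnivariance is their geometric mean. A minimal sketch of evaluating it, with synthetic clouds standing in for real scan data:

```python
import numpy as np

def omnivariance(points):
    """O_lambda = (l1 * l2 * l3)^(1/3), with l1..l3 the eigenvalues of the
    3x3 covariance matrix of the point neighbourhood (Equation (2))."""
    cov = np.cov(np.asarray(points, dtype=float).T)   # 3 x 3 covariance matrix
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    return float(np.prod(eigvals)) ** (1.0 / 3.0)

# Illustrative: a near-planar patch has one near-zero eigenvalue, so its
# omnivariance is much lower than that of a fully volumetric cloud.
rng = np.random.default_rng(0)
flat = rng.uniform(size=(500, 3)) * np.array([1.0, 1.0, 0.001])
volume = rng.uniform(size=(500, 3))
print(omnivariance(flat), omnivariance(volume))
```

This matches the intuition in the text: the lower the omnivariance, the less homogeneous (more degenerate) the point distribution is across the three dimensions.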
In our experiment, we analyse each of the results that show the density of points on a fixed element which, as already mentioned, is a rectangular wooden target of 948.75 cm² strategically placed on the ground surface as a GCP. The procedure consists of taking a series of 12 samples from the points closest to the fiducial cross of the target on the point cloud and analysing and averaging the results obtained in each of the samples. The point sets of the aerial photogrammetry series at 24 m are taken first, followed by those at 12 m; the resulting point sets are denoted Dsfm_24 and Dsfm_12. We work in the same way with the TLS point set, and this procedure is also carried out on the point cloud from the scan using the sensor integrated in the mobile phone. From this analysis, the random error can be naturally inferred when taking the measurements of each of the samples.
To determine the density of the 3D point cloud, CloudCompare [32] software was used, in which elements are segmented to allow one to observe the factors already exposed by Rizali et al. [33], such as roughness, reflectivity strength, RGB colours and texture.
After the analysis, the following results were obtained (illustrated in Figure 5): Dsfm_24 obtained a value of 0.3573 pt/cm², Dsfm_12 obtained a value of 1.2057 pt/cm², the mobile phone sensor recorded 4.0737 pt/cm² and, finally, the TLS obtained a value of 7.9841 pt/cm², all obtained through surveys carried out in Dolmen no. 3. The values are shown in red on the graph in Figure 5. Thus, from the values of the 12 samples that determine the distance between the points, the box plots of each of the selected samples were obtained.
Of all the surveys, the one with the greatest dispersion was the one from the TLS, and the one that performed best was the record of the iPhone 13 Pro, although its uncertainty expressed in the range of metric units was very similar to the values of the TLS. Clearly, the density values obtained from aerial photogrammetry between 24 and 12 m are very different, with the 24 m survey yielding greater uncertainty.
Point density is one of the parameters that determine the quality of the geometry of a 3D mesh. In the modelling process, the number of points and their distribution can determine the morphological parameters of the objects and their final result. The resulting triangular mesh for 3D modelling varies according to the point density parameters, the modelling algorithms [34,35] and the shape of the object surface [23]. Point density is a parameter measured with respect to surface area; therefore, the point density on the target surface was evaluated per square centimetre, the pattern taken being the rectangular target. The best way to check the spatial resolution of the point cloud, that is, the 3D Euclidean distance (see Equation (3)) between the closest points, is to sample the set of points. The distance between the points and the box plots of each of the selected samples were obtained as follows:
dE(P1, P2) = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²)
where dE is the Euclidean distance between two points in space and x, y and z are the Cartesian coordinates of those points.
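The spatial-resolution check via Equation (3) can be sketched as a nearest-neighbour sampling of the point set. The brute-force version below is adequate for the small samples described in the text; the grid cloud is synthetic:

```python
import numpy as np

def nearest_neighbour_distances(points):
    """For each point, the Euclidean distance (Equation (3)) to its
    closest other point; the mean approximates the cloud's resolution."""
    pts = np.asarray(points, dtype=float)
    # Pairwise distance matrix (fine for the small samples used here).
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    return d.min(axis=1)

# Illustrative regular grid with unit spacing: every point's nearest
# neighbour lies exactly one spacing away.
grid = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
print(nearest_neighbour_distances(grid).mean())
```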

5. Experimental Methodology and Results

In the first phase, an alignment is carried out with the Hand-Held Mobile Laser Scanner (HMLS), a portable mobile laser scanner that obtains a dense point cloud exported as a .ply file in the same coordinate system, and then a network of control points is measured with the total station (TS) using LEICA Flexline TS02 equipment. Throughout the work, different tests were therefore used to obtain a set of data comparing the measurements of the total station (TS) with the set of points obtained through the iPhone 13 Pro scan.
In the second phase of the experiment, a comparison is carried out to find out the geometric precision between the Hand-Held Mobile Laser Scanner and the SfMUAV using ground control points taken with the LEICA Flexline TS02 equipment. All of this estimation processing has been carried out using the open-source software CloudCompare. This analysis will determine the behaviour of the HMLS in the morphological structures of archaeological sites that involve organic forms and complex geometries.
Finally, in a case study different from the previous ones, we proceed to evaluate the precision of the HMSL that serves as a reference for capturing the control points on the ground for photogrammetry using a UAV. In this case, the point cloud record obtained from the TLS is taken as a reference. A comparison between the two of them will determine new data on the performance of the systems in the area of the archaeological site being studied. The 3D model of the terrestrial laser scanner is used as a reference to determine the coincidence points and their dispersion in the system used.

5.1. Analysis of the Distances between TS Targets and iPhone 13 Pro

For this section, three case studies were used, the first of which was Dolmen no. 3, in which five ground control points were used (Figure 6). The data set was exported as a .ply file, and the TS data were exported as .txt. The deviation data were plotted on a graph to establish whether a direct correlation exists between the deviation and the measurements obtained in the images of Figures 7, 9 and 11. In this first data output, a total of ten samples were taken with a maximum distance of 9.7367 m and a minimum distance between targets of 3.0432 m. The absolute differential values obtained ranged from 20.47 cm (points 3–6) to 1.69 cm (points 1–6). This analysis is performed in order to predict the changes and local trends observed after the field work is carried out.
The next work was carried out using aerial photogrammetry of the Palma del Condado train station (Figure 8), in which seven control points were taken. This analysis used twelve samples with a maximum distance of 14.1996 m and a minimum distance between targets of 2.8557 m. The absolute differential values obtained ranged from 1.98 cm (points 5–6) to 0.471 cm (points 4–5).
The last work (Figure 9) was carried out on an almost horizontal plane with little unevenness to check how the LiDAR sensor of the mobile phone works in a parking lot of the Higher Technical School of Building Engineering. Also in this work, twelve samples were used with a maximum distance of 23.3692 m and a minimum distance of 2.3680 m. The absolute differential values obtained ranged from 23.1087 cm (points 1–8) to 2.3560 cm (points 7–8). Therefore, the results of the following studies will determine the correlation between the accuracy of the measurements and the distance between targets.
The data sets obtained from the three series, Dolmen no. 3 (Figure 6), the station (Figure 8) and the school (Figure 10), were mathematically analysed to observe the behaviour of the mobile phone sensor (Figure 7, Figure 9 and Figure 11).
From the processed data set, the range of the deviation was represented in a box plot to determine the dispersion of the values obtained (Figure 12). The distances between previously selected characteristic points represented the differences between the three point clouds in the experiments carried out at the School, the Station and Dolmen no. 3. The standard deviation was lowest in the Station experiment, with a value of 0.02125 m, compared with the values obtained in the School and Dolmen no. 3 experiments, which were 0.06921 and 0.06343 m, respectively.
The results are shown in Table 1, configured by the parameters of the total number of samples in each case study and their corresponding values.
Therefore, after analysing the experiment results for the three sets of points, we can observe that the school experiment obtained the worst results. However, the maximum length of 23.3692 m between targets 1 and 8 shows that, in this case, the accuracy of the values decreases at greater distances.
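The deviation analysis of this subsection can be reproduced in outline: given the same inter-target distances measured with the total station and with the phone, the per-pair deviations and the standard deviation of the deviations summarise the dispersion. The values below are invented for illustration, not the paper's data:

```python
import statistics

def deviation_stats(ts_distances, phone_distances):
    """Absolute deviation per target pair and the standard deviation of
    the signed deviations between the two instruments."""
    deltas = [p - t for t, p in zip(ts_distances, phone_distances)]
    return [abs(d) for d in deltas], statistics.stdev(deltas)

# Hypothetical inter-target distances (m), TS vs iPhone 13 Pro LiDAR:
ts    = [3.043, 5.120, 7.480, 9.737]
phone = [3.060, 5.101, 7.512, 9.721]
abs_dev, sd = deviation_stats(ts, phone)
print(abs_dev, round(sd, 5))
```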

5.2. Analysis of the Deviation between Photogrammetry Point Clouds Using an UAV and the iPhone 13 Pro

From the field work carried out in Dolmen no. 3 of the Gabrieles complex, data related to aerial SfMUAV photogrammetry with precision processing based on three GCPs at a distance of 12 m using an intelligent flight plan were obtained. The photogrammetry results show average error values in the scale points, a precision of 0.005 m, an error of 0.197 pix and a GSD of 0.43 cm/px, so it can be considered that the use of the UAV is reliable for subsequent modelling. The SfMUAV data at a distance of 24 m yielded average error values in the scale points, a precision of 0.005 m, an error of 0.168 pix and a GSD of 0.76 cm/px.
The applications developed in photogrammetry for surveying, land construction or Scan-to-BIM purposes in the field of archaeology have shown that GCPs greatly increase the accuracy of a dense point cloud [36]. Three black-and-white photogrammetric targets are enough to establish a scaled model, although it is known that other parameters can also influence the quality of the point cloud in UAV photogrammetry, such as the flight height, the orientation of the camera in the flight plan and the quality of the camera in the drone's gimbal [37,38]. The objectives of this part of the work are to perform a scan with the LiDAR sensor of the mobile phone (HMLS) and, through an analysis process with the algorithms available in CloudCompare, to make a comparison.
Once the SfM point cloud had been obtained using the UAV, it was first scaled with the measurements taken in the Agisoft Metashape software and subsequently aligned with the registration data obtained from the mobile phone's LiDAR sensor, by selecting common point pairs between the two point clouds in CloudCompare. The three pairs of points (R0-A0, R1-A1 and R2-A2) were located at the centres of the ground control points, which, as already specified, used a 10 mm thick chipboard base coloured black and white, yielding a final alignment RMS of 0.00580 m.
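The pair-based registration step described above can be sketched as a least-squares rigid transform (Kabsch/SVD solution) computed from the matched GCP centres, followed by the alignment RMS that CloudCompare reports. This is an illustrative reimplementation of the principle, not the software's internal code:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst
    (Kabsch/SVD); src and dst are (n, 3) arrays of matched points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def alignment_rms(src, dst, R, t):
    """RMS of the residuals after applying (R, t) to src."""
    res = np.asarray(dst) - (R @ np.asarray(src).T).T - t
    return float(np.sqrt((res ** 2).sum(axis=1).mean()))
```

With three well-spread pairs, as used here (R0-A0, R1-A1, R2-A2), the transform is fully determined and the residual RMS quantifies how consistently the two clouds agree at the targets.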
Subsequently, the Iterative Closest Point (ICP) algorithm was applied to automatically refine the alignment, but this process did not improve the registration, so the coupling between the two point clouds obtained by searching for pairs of adjacent points in the two data sets and computing the transformation parameters between them was retained [39].
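For reference, a minimal point-to-point ICP iteration of the kind applied here (nearest-neighbour matching plus a Kabsch solve per iteration) can be sketched as follows; production tools such as CloudCompare add subsampling, pair rejection and convergence criteria on top of this bare loop:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, ref, iters=50):
    """Minimal point-to-point ICP: match each source point to its nearest
    neighbour in the reference cloud, solve the best-fit rigid transform
    (Kabsch/SVD) and repeat. Returns the transformed source cloud."""
    src = np.asarray(src, float).copy()
    ref = np.asarray(ref, float)
    tree = cKDTree(ref)
    for _ in range(iters):
        _, idx = tree.query(src)              # current correspondences
        dst = ref[idx]
        cs, cd = src.mean(0), dst.mean(0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        src = (R @ (src - cs).T).T + cd       # apply incremental transform
    return src
```

ICP only converges when the initial alignment is already close and the clouds overlap well; with sparse or noisy correspondences, as in this case, keeping the manual pair-based registration can indeed be the better choice.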
To determine the deviation between point clouds, the C2C tool was applied; it calculates, for each point of the compared cloud, the distance to the closest point in the reference cloud. In this case, the aerial photogrammetry point cloud was taken as the reference and compared with the iPhone 13 Pro point cloud.
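The C2C computation amounts to a nearest-neighbour query from each point of the compared cloud into the reference cloud. A sketch with a k-d tree, following the same principle as CloudCompare's basic C2C distance (without its optional local-modelling refinements):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(compared, reference):
    """C2C distances: for each point of `compared`, the Euclidean distance
    to its nearest neighbour in `reference`."""
    distances, _ = cKDTree(np.asarray(reference, float)).query(
        np.asarray(compared, float))
    return distances
```

The resulting per-point distance array is what CloudCompare maps onto the colour scale and summarises in the histogram.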
In point cloud-to-point cloud comparisons, there are likely to be outliers that are not visible to the naked eye but may affect the results. The best way to detect them is to run a first comparison analysis and then eliminate the points that generate noise and distort the final results. These filters on the point cloud establish an adjustment that allows the results to be evaluated in their entirety [40].
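One common automatic filter of this kind is statistical outlier removal: flag points whose mean distance to their k nearest neighbours is anomalously large compared with the cloud average. A sketch follows; the parameters k and n_sigma are illustrative defaults, and CloudCompare's SOR filter applies the same idea:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, n_sigma=2.0):
    """Keep points whose mean distance to their k nearest neighbours is
    within n_sigma standard deviations of the cloud-wide average
    (statistical outlier removal)."""
    pts = np.asarray(points, float)
    d, _ = cKDTree(pts).query(pts, k=k + 1)  # k+1: first hit is the point itself
    mean_d = d[:, 1:].mean(axis=1)
    keep = mean_d <= mean_d.mean() + n_sigma * mean_d.std()
    return pts[keep], keep
```

Running the C2C comparison once, filtering with a mask like this and comparing again is exactly the two-pass workflow described above.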
In aerial photogrammetry over forests or rocky massifs with abundant vegetation, gaps in the record are very frequent, especially between the ground and the base of the tree crowns [41]. The occlusion of the points recording the tree trunks in nadiral photogrammetry means that the SfM model obtained with a UAV cannot reconstruct their position. Therefore, occlusions may appear in the open-air dolmen records, as shown in Figure 13. The advantage of the mobile phone's LiDAR sensor is the possibility of getting closer to the details of the morphology of the archaeological site's surroundings, whereas in SfM with a UAV, an incorrectly processed flight plan can affect the final results. The traditional way of characterising an archaeological complex through TLS involves certain difficulties [42], especially in horizontal ground planes, such as eliminating non-detailed parts, inaccessibility and bias in the collection of overhead data.
Once the differential between both point clouds has been processed, it is possible to extract information from matching points, where the CloudCompare software [39] determines the C2C deviation on a colour scale that represents the deviation of the compared model.
The distance between the point sets was calculated by comparing SfMUAV and HMLS; the analysis is shown in Figure 14 and its histogram in Figure 15. The parameters resulting from the comparison, given in Table 2, are the root mean square error (RMS), the standard deviation, the minimum distance, the maximum distance, the mean distance and the maximum estimation error. The deviation between similar point sets has two characteristics: (i) the high concentration of points at values close to zero in relation to values especially far from 0.04 m and the rest of the intervals; and (ii) the high standard deviation computed over the points along those intervals (Equation (4)).
$$\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}}$$
where $n$ is the sample size, $x_i$ are the points in the intervals and $\bar{x}$ is the average sample value.
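Equation (4) and the other Table 2 parameters can be computed directly from the array of C2C distances. A sketch is given below; the maximum estimation error depends on CloudCompare's internal octree model and is therefore omitted:

```python
import numpy as np

def deviation_stats(distances):
    """Summary statistics reported for a C2C comparison: RMS, sample
    standard deviation (Equation (4), 1/(n-1) denominator), minimum,
    maximum and mean distance."""
    d = np.asarray(distances, float)
    return {
        "rms": float(np.sqrt(np.mean(d ** 2))),
        "std": float(d.std(ddof=1)),  # ddof=1 gives the n-1 denominator
        "min": float(d.min()),
        "max": float(d.max()),
        "mean": float(d.mean()),
    }
```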

5.3. Analysis of the Deviation between the Point Cloud Obtained from SfMUAV Using GCPs by HMLS and TLS

The direct processing carried out can bring us closer to understanding the results, bearing in mind that the main objective of the research is to determine the limitations of the experimental methods regarding the precision of the iPhone 13 Pro as a measurement instrument for establishing ground control points, with the aid of photogrammetry using unmanned aerial vehicles (UAVs), in archaeological sites.
For this case, a point cloud was processed from the UAV image capture with ground control points, entering the “Scale Bar” parameters with the dimensions obtained by the mobile phone's LiDAR sensor. For both point-set analyses, a portion of the area surrounding the archaeological site was selected, segmenting the two previously aligned sets.
According to studies in other works, image-based remote sensing (SfM) is gaining great popularity in the characterisation of landscapes with centimetre-level precision, for example, in landslide studies [43], where the use of multi-rotor UAVs was shown to improve the point cloud by 16% compared to fixed-wing UAVs, or in the heritage preservation field [44], where, in a study of a monastery, the quality parameters of the point cloud were evaluated using what the authors call a Remotely Piloted Aircraft System (RPAS) and TLS. However, in the field of archaeology, there are studies [45] that only approach this investigation using SfM without a UAV. Hence, it can be deduced from the previous approaches that the deviation analysis between SfMUAV with GCPs by HMLS and TLS is innovative.
Figure 16 shows the deviation between aerial photogrammetry using intelligent flight at 24 m and the TLS survey made from four BLK360 stations using LiDAR technology. The results show a standard deviation of 0.066 m and a maximum distance of 0.83 m.
Certainly, from the point of view of geometric understanding, the highest values are found in the registration points of the treetops, but these data are not representative. The mostly green area that corresponds to values of 0.31 m refers to occlusion points on the floor of the Dolmen recorded by the TLS.
Figure 17 shows the histogram of the study, where 90% of all points have a coincidence of less than 0.032 m. The agreement between Figure 16 and Figure 17 is consistent, since most of the points lie within the consolidated values in the blue colour range. The quantitative data provided by the analysis software are presented in Table 3.
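Figures such as “90% of all points below 0.032 m” come straight from the empirical distribution of the C2C distances; a sketch of that percentile reading:

```python
import numpy as np

def coincidence_threshold(distances, fraction=0.90):
    """Distance below which the given fraction of C2C distances fall,
    e.g. the value such that 90% of the points coincide more closely."""
    return float(np.quantile(np.asarray(distances, float), fraction))
```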
Analysing the morphological structure of archaeological sites and the geomorphology of the environment requires accurate topographic information, but when deposits are inaccessible, operators must resort to technologies that revolutionise the current digitalisation market. The presence of trees and shrubs should not be considered an obstacle to characterising these structures. In the processing and analysis stages, it was observed that the SfMUAV point cloud at 12 m behaved differently from the SfMUAV point cloud at 24 m. For this reason, it was decided to open a new analysis section.

5.4. Analysis of the Deviation between Photogrammetry Point Clouds Using a UAV in Intelligent Flight at a Height of 12 m and 24 m

Very recent studies address the processing of image records captured by UAVs at different heights on the Z axis. These studies have focused on forestry and agriculture, analysing tree morphology, essentially volume analysis. In other words, most studies [46] fall within the framework of forestry analysis and geology [25], exposing flight scenarios with different UAV camera angles in complex landscapes with high tree density and a structured, layered morphology, while other works address the accuracy and reconstruction of extensive topographical environments [47]. It therefore seems interesting to expose, in the context of archaeological sites, the difficulties that can be found when processing field work under different intelligent flight plans. For this, a study was carried out within the context of the dolmen, taking the SfMUAV point sets at 24 and 12 m, as can be seen in Figure 18.
The data revealed that 71% of the points are coincident at values lower than 0.122 m. This is an important deviation, considering that many points are in the green zone with deviations of 40 cm. This analysis can be seen in the histogram of Figure 19.
Deviations of around forty centimetres can be observed in most of the treetops, shrubs and, especially, the upper part of the orthostats, that is, the stone blocks that make up the wall course of the dolmen. From this, the presence of systematic vertical deviations can be inferred. Such errors, which in typical UAV data appear as a vertical “doming” of the surface, result from a combination of nearly parallel imaging directions and an imprecise correction of radial lens distortion [48].
The error in the systematic vertical deviations can be corrected with the additional capture of oblique images. By separating the context from the site and focusing on the dolmen itself, the evidence for deviations is more significant, and a greater appreciation of the results can be obtained. In this second data output, a segmentation of a rectangular prism is carried out on Dolmen no. 3, in which the set of points of the 24 m SfMUAV is taken as a reference.
In Figure 20, it can be seen that the systematic vertical deviations referred to in the studies by James and Robson [48] appear in red.
The resulting histogram (Figure 21) shows the number of points per distance. This is counted in absolute values and in metric units. The distribution parameters corresponding to the maximum, minimum, standard deviation and mean values are not really significant here, but the most relevant piece of data is that the area of points in red in the range of colours has a range between 0.15 and 0.25 m.

6. Scanning for BIM in an Archaeological Context

The use of HBIM for archaeology arises in analogy and continuity with research and applications on BIM for historical buildings, a subject on which the scientific community has been debating for many years, with different regulatory repercussions from country to country. Nevertheless, there are several specificities in the use of BIM models for archaeology that call for a separate discussion.
As is well known, BIM is, according to ISO 19650, a series of standards whose content is intended to support agents and operators that participate in any phase of the building cycle or infrastructure pathways through the BIM methodology [49], which includes a part of process management. In the case of archaeology, this tool could help not only in the management of all the activities related to preservation, conservation and restoration but also in the “interpretation, presentation, access and public use of the material remains of the past” through virtual archaeology. In the international document “Principles of Seville” [50], aimed at providing practical applications of the principles of the “London Charter” [51], virtual archaeology is defined as “the scientific discipline that seeks to research and develop ways of using computer-based visualisations for the comprehensive management of archaeological heritage” [52], including “virtual restoration, anastylosis, reconstruction, and recreation” [53,54]. To this aim, “inventories, surveys, excavation work, documentation, and research” become fundamental moments of knowledge to create two- and three-dimensional representations of the real objects, guaranteeing their “authenticity”, in the meaning expressed in the Seville Charter, to be obtained through a rigorous and faithful process of representation of their real consistency, also using parametric software, which is fundamental to being true to the geometric and morphological complexity that characterises the stratified ancient heterogeneous structures [55]. All this makes the construction of an HBIM particularly complex [56]. However, research and experimentation in the field of three-dimensional and parametric modelling of these kinds of objects, especially through Scan-to-BIM and photogrammetric processes, have led to the identification of modelling workflows shared by the scientific community [57].
From this point of view, the use of HBIM models represents a valid aid in terms of integrated management, in a single three-dimensional work environment, of different types of data: geometric, stratigraphic, historical, material, environmental, geological, etc. The HBIM model could be considered a 3D database that can be queried in different ways [58] during the design, construction and valorisation phases. The possibility of linking parametric objects, more or less simplified according to different levels of geometric and informative detail, depending on the purpose, with relational databases to be implemented and updated continuously, guarantees that we have a tool for various users to exchange and share information, from archaeologists to conservation offices to scholars. Operators involved in a restoration project can consult the model to access related documents, such as degradation mapping, to use them directly on the archaeological site and plan the interventions to be carried out [59]. In the same way, the model can be consulted by experts interested in accessing iconographic material or archival sources inserted within the HBIM, allowing for new spatial correlations to be established and not only documentary ones. At the same time, there is the possibility of the models being consulted by tourists or simply curious people who want to live immersive experiences through new modes of digital narration, using, for example, augmented and virtual reality [60]. Finally, an HBIM could represent a valid support for public administrations, taking advantage of a valid and reliable verification and control system that, in the long term, provides a mapping of works on the territory and their state of conservation [61], thanks to the possibility of BIM/GIS integration.
It is for this reason that, in recent years, many researchers, at an international level, have dealt with different topics related to BIM for archaeology. The most frequent ones tackle the issues of parametric three-dimensional modelling [62,63], starting from complex data surveying [64], ontologies and sharable vocabularies [65], interoperability between different software [66] and stratigraphic analysis [67].
Currently, there is an evaluation method that combines the creation of a geometric model from a point cloud with the set of points obtained from aerial photogrammetry. This method, known as C2M and implemented through CloudCompare, has been used by several researchers [42,68,69] in the context of Cultural Heritage. The direct geometric model is the result of the process of creating static 3D models with simple or complex surfaces [70]. In the field of archaeology, complex or very complex surfaces are involved most of the time, which means that field tests for their evaluation require more complete processes.

7. Discussion

New data capture systems for three-dimensional representation are changing the way geomatics operators and topography experts work. New, easier-to-use tools keep appearing, but their precision compared with other, more specialised equipment can be called into question.
The incorporation of LiDAR techniques, such as the sensors integrated into mobile phones, is revolutionising the world of 3D scanning. Among the most positive aspects of LiDAR smartphones are their working conditions: the possibility of scanning different surfaces, including different materials in the same scene, from a medium-range distance, and data processing in the system's app that takes only a few minutes.
But these recording systems are not exempt from serious problems. For example, Costantino et al. [14], evaluating an iPhone 12 Pro, showed that small-scale objects in different materials were captured with variations of up to 2 cm, and larger architectural spaces, such as the transept of a church, with precisions in the order of 1 to 3 cm. Some inertial navigation system drift and loss-of-planarity issues were also reported.
Luetzenburg et al. [12], who measured a 130 m long by 15 m wide cliff, compared SfM-MVS with the iPhone 12 Pro point cloud; the results showed that, for 80% of all recorded points, the maximum distance was 15 cm. From the analysis of the precision graph with respect to the three coordinate axes, it should be emphasised that Z (height) yields quite imprecise values in that work.
The comparison between the integrated sensor of an iPad Pro and TLS was measured by Spreafico et al. [3], who determined that the precision was 2 cm and the accuracy was 4 cm.
One of the most interesting studies developed in the field of interior spaces is the work of Chase et al. [15], where the results in workspaces with a length between 1 and 13 m determined an average error of 3 cm in coordinate (X) and 7 mm in coordinate (Z) in fifteen different shots.
The main difference between the results of Luetzenburg et al. [12] and Chase et al. [15] is the variability of the precision in the (Z) coordinate. We believe that Chase's work was conducted under conditions where the surface is fairly flat, so the iPhone sensor behaves extremely precisely in those measurements; when the morphology is abrupt and highly variable, on the other hand, the changes in precision are important.
One of the study samples that made us re-evaluate the behaviour of the mobile phone sensor was the results reported by Chase et al. [15]. We therefore selected the results from Table 3 of their work and graphed them to check whether a linear correlation exists between the equipment error and the distance, taking the total station as the reference for the iPhone. This graph is shown in Figure 22, where it can be seen that there is no correlation between distance and relative precision, so the differences may be related to other factors.
One might expect the relationship between the distance between target points and their precision to increase monotonically, and thus predict the behaviour of the deviation between the length and precision values of the LiDAR system from the linear regression line.
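The distance-precision correlation check described above reduces to fitting a least-squares line and inspecting the coefficient of determination R². A sketch follows; the data values used in the test are illustrative, not those of Table 3 in [15]:

```python
import numpy as np

def linear_fit_r2(x, y):
    """Least-squares line y = a*x + b and the coefficient of
    determination R^2 used to judge a distance-error correlation.
    Assumes y is not constant (non-zero variance)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a, b = np.polyfit(x, y, 1)
    residuals = y - (a * x + b)
    r2 = 1.0 - residuals.var() / y.var()
    return float(a), float(b), float(r2)
```

R² values near zero, such as those found in Figures 7 and 8, indicate that the regression line explains almost none of the scatter, i.e., no useful distance-precision correlation.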
On the basis of these studies, we carried out a section on the uncertainty associated with the measurement method, which has been called “residual uncertainty”, by studying the density of points on the object to be measured. Interestingly, the method that registered the highest point density was TLS, followed by the mobile phone's LiDAR sensor. Regarding the dispersion of points in the measurements, the iPhone behaved quite stably; that is, in short measurements, the point set is well distributed, reaching uncertainty values in the range of 0.005 m, i.e., 5 mm.
Next, we generated an experimental methodology consisting of four sections. The first concerned the errors of the mobile phone's LiDAR sensor compared to the data obtained by the total station (TS), using the LEICA Flexline TS02 equipment. On the basis of this analysis, we proceeded to determine whether a linear correlation exists between the distance of the GCPs and the deviation that occurs with respect to the TS measurements. The statistical results of the comparison revealed only a slight relationship between the analysed parameters, with very low R2 values of 0.04 and 0.09 in Figure 7 and Figure 8, respectively. The graph that presents some correlation is Figure 11, with a value of 0.24.
The differences between the graphs of Chase et al. [15] and those elaborated in this investigation can be explained by the fact that those authors take distances as absolute values. From the subsequent analysis (Figure 12), the three cases comprise standard deviations between 0.021 and 0.069 m. In the Station case study, the deviation value was quite satisfactory, with only a 21 mm difference, but the rest of the case studies presented highly significant variations.
Within the field work, the School and Station case studies were conducted on fairly flat surfaces, which was not the case in the Dolmen case study, where the Z axis was more variable. It cannot be ruled out that an analysis surface in a variable geomorphological environment increases the error of the mobile phone's LiDAR sensor.
In the next study, an analysis of the difference between photogrammetry using a UAV and HMLS was carried out. The results showed a standard deviation of 0.0889 m. However, this deviation was estimated to be due to non-alignment values, since most of the scattered points were at the ends of the GCPs, as shown in Figure 14. A significant value was the high presence of points close to 0 in relation to values especially far from 0.04 m and the rest of the intervals; that is to say, most of the points presented a differential of 4 centimetres. This falls within the average interval of the results obtained in the previous investigation [15].
Regarding the analysis of the difference between SfMUAV with GCPs by HMLS and TLS, there was a good degree of agreement; the objectives set for this part of the study were met and the results were positive. In this case, the standard deviation was lower than in the previous analysis, at 0.0666 m. A significant value was the high presence of points close to 0: 90% of all points had a coincidence of less than 0.032 m. These values were lower than in the previous studies. Therefore, when photogrammetry is processed using a UAV in intelligent flight at a height of 24 m with nadiral and oblique images, and the iPhone 13 Pro is used to measure the ground control points, an error of 32 mm with respect to the TLS (BLK360) is obtained. These results can thus be assumed for Scan-to-BIM environments.
The use of aerial photogrammetry at different levels, and the way of proceeding, is also essential to knowing how an optimal 3D model can be obtained. Thus, the next field work test analysed the difference between SfMUAV at 12 and 24 m using the intelligent flights provided by the drone. In this work, the presence of systematic vertical deviations could be determined; in typical UAV data, these errors are expressed as a vertical “dome” of the surface in the absence of oblique images. The deviation value at the orthostats could be quantified, establishing a range between 0.15 and 0.25 m.
To provide comprehensive knowledge in the context of Scan-to-BIM applied to archaeological sites, all results have been graphed in Figure 23, which combines, in millimetres, the level of accuracy (LOA) of the independent standard deviation for each acquisition method, following the ranges of the USIBD specification [71]. The green points are randomly established point sets representing the record of a surface from a point cloud, where the actual surface is established as an imaginary line close to zero. The second, overlaid graph represents the mean deviation from four researchers and the results of the four experiments conducted in this work, expressed in metres. The equivalence between the ordinate axes in red and black is evident, although they are expressed in different units. It should be mentioned that every test was conducted with the mobile phone's LiDAR sensor except Survey_4, which determines the deviation between aerial photogrammetry at heights of 12 and 24 m.

8. Conclusions

Disruptive techniques of massive data acquisition are presented as a new opportunity for work that completes the records of 3D digitisation and the implementation of Scan-to-BIM nowadays. Each method requires different data processing and is generally associated with specific software that processes the data until the point cloud is ready to be taken into BIM. With this study, we have been able to establish the deviations shown by the use of the LiDAR sensor of the iPhone 13 Pro.
Thus, in the data processing phase of this work, four comparison studies were carried out, along with a section in which an approximation to the Scan-to-BIM of archaeological sites was investigated. Through the study of aerial photogrammetry data obtained through two intelligent flight plans, the deviations generated by the measurements of a mobile phone could be compared, and its applicability to capture data from GCPs (ground control points) that serve to improve photogrammetry was assessed.
It must be taken into account that the analysis of the morphological structure of archaeological sites and their geomorphology is carried out in a complex context, such as on open sites where abrupt vegetation and groves abound. Most of these deposits are difficult to access, so having a drone and a mobile phone with these characteristics in one’s backpack means that any expert can carry out 3D survey work with certain guarantees of precision.
With the set of data obtained, the precision of this equipment can be inferred from these investigations, although the door is left open for further investigations relating records captured with a mobile phone LiDAR sensor, correlating the results with other parameters, to clarify how to reach precisions close to LOA50 (range values between 0 and 1 millimetre).
It must be recognised that applying BIM to heritage is a form of digital registration open to different administrations, and that this registration can sometimes perform optimally. The path that leads us towards digitisation and the construction information model in historical buildings or archaeological sites consists of innumerable stages, one of the first being the generation of geospatial data. Knowing the requirements and specifications of the levels of precision allows professionals in the Architecture, Engineering, Construction and Ownership (AECO) industry to clearly articulate the objectives of the works. Therefore, precisely understanding the scope of the impact of component errors improves the results of the applicability of BIM.
In the future, through experimental work related to point cloud mapping in archaeological pieces and altarpieces, it will be possible to study how certain deformations produced by the mobile phone's LiDAR sensor influence linear structures, quantifying the deformation according to the image capture.
Finally, it is also considered necessary to indicate that one of the most important objectives of this work has been fulfilled since a new scenario of possibilities for the applicability of new techniques in the management of the acquisition of massive records to produce increasingly accurate 3D models has opened up.

Author Contributions

J.M.: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Software, Supervision, Visualization, Writing—original draft, Writing—review and editing; J.E.N.-J.: Data curation, Software, Supervision, Validation; M.F.-A.: Visualization, Writing—original draft; D.O.: Data curation, Resources, Visualization, Writing—original draft, Project administration; R.E.-P.: Data curation, Software, Resources. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Universidad de Sevilla through VII Plan Propio de Investigación y Transferencia (VIIPPIT).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Daniotti, B.; Pavan, A.; Lupica Spagnolo, S.; Caffi, V.; Pasini, D.; Mirarchi, C. Collaborative Working in a BIM Environment (BIM Platform). In BIM-Based Collaborative Building Process Management; Springer Tracts in Civil Engineering; Springer: Cham, Switzerland, 2020; pp. 71–102. [Google Scholar] [CrossRef]
  2. Moyano Campos, J.J. Desde la Forma al Modelo Digital: Teoría de la Morfogénesis en el Ejercicio de la Forma Natural; de Sevilla, U., Ed.; Universidad de Sevilla: Sevilla, Spain, 2022; ISBN 9788447223800. [Google Scholar]
  3. Spreafico, A.; Chiabrando, F.; Teppati Losè, L.; Giulio Tonolo, F. The iPad Pro Built-In LiDAR Sensor: 3D Rapid Mapping Tests and Quality Assessment. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 43, 63–69. [Google Scholar] [CrossRef]
  4. Borrero, M.; Stroth, L.R. A Proposal for the Standardized Reporting of Error and Paradata Regarding Structure from Motion (SfM) 3D Models Used in Recording and Consolidating Archaeological Architecture. Adv. Archaeol. Pract. 2020, 8, 376–388. [Google Scholar] [CrossRef]
  5. Jones, C.A.; Church, E. Photogrammetry is for everyone: Structure-from-motion software user experiences in archaeology. J. Archaeol. Sci. Rep. 2020, 30, 102261. [Google Scholar] [CrossRef]
  6. Colosi, F.; Malinverni, E.S.; Leon Trujillo, F.J.; Pierdicca, R.; Orazi, R.; Di Stefano, F. Exploiting HBIM for Historical Mud Architecture: The Huaca Arco Iris in Chan Chan (Peru). Heritage 2022, 5, 2062–2082. [Google Scholar] [CrossRef]
  7. Duró, G.; Crosato, A.; Kleinhans, M.G.; Uijttewaal, W.S.J. Bank erosion processes measured with UAV-SfM along complex banklines of a straight mid-sized river reach. Earth Surf. Dyn. 2018, 6, 933–953. [Google Scholar] [CrossRef]
  8. Turner, D.; Lucieer, A.; de Jong, S.M. Time series analysis of landslide dynamics using an Unmanned Aerial Vehicle (UAV). Remote Sens. 2015, 7, 1736–1757. [Google Scholar] [CrossRef]
  9. de Melo, R.R.S.; Costa, D.B.; Álvares, J.S.; Irizarry, J. Applicability of unmanned aerial system (UAS) for safety inspection on construction sites. Saf. Sci. 2017, 98, 174–185. [Google Scholar] [CrossRef]
  10. Elkhrachy, I. Accuracy Assessment of Low-Cost Unmanned Aerial Vehicle (UAV) Photogrammetry. Alex. Eng. J. 2021, 60, 5579–5590. [Google Scholar] [CrossRef]
  11. Moyano, J.; Justo-Estebaranz, Á.; Nieto-Julián, J.E.; Barrera, A.O.; Fernández-Alconchel, M. Evaluation of records using terrestrial laser scanner in architectural heritage for information modeling in HBIM construction: The case study of the La Anunciación church (Seville). J. Build. Eng. 2022, 62, 105190. [Google Scholar] [CrossRef]
  12. Luetzenburg, G.; Kroon, A.; Bjørk, A.A. Evaluation of the Apple iPhone 12 Pro LiDAR for an Application in Geosciences. Sci. Rep. 2021, 11, 22221. [Google Scholar] [CrossRef]
  13. Łabędź, P.; Skabek, K.; Ozimek, P.; Rola, D.; Ozimek, A.; Ostrowska, K. Accuracy Verification of Surface Models of Architectural Objects from the iPad LiDAR in the Context of Photogrammetry Methods. Sensors 2022, 22, 8504. [Google Scholar] [CrossRef]
  14. Costantino, D.; Vozza, G.; Pepe, M.; Alfio, V.S. Smartphone LiDAR Technologies for Surveying and Reality Modelling in Urban Scenarios: Evaluation Methods, Performance and Challenges. Appl. Syst. Innov. 2022, 5, 63. [Google Scholar] [CrossRef]
  15. Chase, P.P.C.; Clarke, K.H.; Hawkes, A.J.; Jabari, S.; Jakus, J.S. Apple IPhone 13 Pro LiDAR Accuracy Assessment for Engineering Applications. Transform. Constr. Real. Capture Technol. 2022, 1–10. [Google Scholar] [CrossRef]
  16. Kršák, B.; Blišťan, P.; Pauliková, A.; Puškárová, P.; Kovanič, L.; Palková, J.; Zelizňaková, V. Use of low-cost UAV photogrammetry to analyze the accuracy of a digital elevation model in a case study. Meas. J. Int. Meas. Confed. 2016, 91, 276–287. [Google Scholar] [CrossRef]
  17. Lazarich, M.; Briceño, E.; Ramos, A.; Carreras, A.; Fernández, J.V.; Jenkins, V.; Feliu, M.J.; Versaci, M.; Torres, F.; Richarte, M.J.; et al. Análisis arquitectónico y territorial de los conjuntos megalíticos de Los Gabrieles (Valverde del Camino) y El Gallego-Hornueco (Berrocal-Madroño): El megalitismo en el Andévalo Oriental (Huelva). In IV Encuentro de Arqueología del Suroeste Peninsular [Recurso Electrónico]; Universidad de Huelva: Huelva, Spain, 2010; pp. 209–248. ISBN 978-84-92679-59-1. [Google Scholar]
  18. QGIS Project. Download QGIS. Available online: https://qgis.org/es/site/forusers/download.html (accessed on 29 May 2023).
  19. Tian, J.; Dai, T.; Li, H.; Liao, C.; Teng, W.; Hu, Q.; Ma, W.; Xu, Y. A Novel Tree Height Extraction Approach for Individual Trees by Combining TLS and UAV Image-Based Point Cloud Integration. Forests 2019, 10, 537. [Google Scholar] [CrossRef] [Green Version]
  20. Agisoft. Agisoft Metashape Software. Available online: https://www.agisoft.com/ (accessed on 30 January 2020).
  21. Romero, N.L.; Gimenez Chornet, V.; Cobos, J.S.; Selles Carot, A.; Centellas, F.C.; Mendez, M.C. Recovery of descriptive information in images from digital libraries by means of EXIF metadata. Libr. Hi Tech 2008, 26, 302–315. [Google Scholar] [CrossRef]
  22. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’ photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  23. Park, H.; Lee, D. Comparison between point cloud and mesh models using images from an unmanned aerial vehicle. Measurement 2019, 138, 461–466. [Google Scholar] [CrossRef]
  24. dos Santos, D.R.; Dal Poz, A.P.; Khoshelham, K. Indirect Georeferencing of Terrestrial Laser Scanning Data using Control Lines. Photogramm. Rec. 2013, 28, 276–292. [Google Scholar] [CrossRef]
  25. Sanz-Ablanedo, E.; Chandler, J.H.; Rodríguez-Pérez, J.R.; Ordóñez, C. Accuracy of Unmanned Aerial Vehicle (UAV) and SfM Photogrammetry Survey as a Function of the Number and Location of Ground Control Points Used. Remote Sens. 2018, 10, 1606. [Google Scholar] [CrossRef] [Green Version]
  26. Leica Geosystems. Leica FlexLine TS02/TS06/TS09 User Manual. 2008. Available online: https://leica-geosystems.com/ (accessed on 16 March 2020).
  27. Moyano, J.; Nieto-Julián, J.E.; Antón, D.; Cabrera, E.; Bienvenido-Huertas, D.; Sánchez, N. Suitability Study of Structure-from-Motion for the Digitisation of Architectural (Heritage) Spaces to Apply Divergent Photograph Collection. Symmetry 2020, 12, 1981. [Google Scholar] [CrossRef]
  28. Moyano, J.; Fernández-Alconchel, M.; Nieto-Julián, J.E.; Carretero-Ayuso, M.J. Methodologies to Determine Geometrical Similarity Patterns as Experimental Models for Shapes in Architectural Heritage. Symmetry 2022, 14, 1893. [Google Scholar] [CrossRef]
  29. Monsalve, A.; Yager, E.M.; Tonina, D. Evaluating Apple iPhone LiDAR measurements of topography and roughness elements in coarse bedded streams. J. Ecohydraulics 2023, 1–11. [Google Scholar] [CrossRef]
  30. Carter, N.; Hashemian, A.; Rose, N.A.; Neale, W.T.C. Evaluation of the Accuracy of Image Based Scanning as a Basis for Photogrammetric Reconstruction of Physical Evidence. SAE Tech. Pap. 2016. [Google Scholar] [CrossRef]
  31. James, M.R.; Chandler, J.H.; Eltner, A.; Fraser, C.; Miller, P.E.; Mills, J.P.; Noble, T.; Robson, S.; Lane, S.N. Guidelines on the use of structure-from-motion photogrammetry in geomorphic research. Earth Surf. Process. Landf. 2019, 44, 2081–2084. [Google Scholar] [CrossRef]
  32. Girardeau-Montaut, D. Cloud-to-Mesh Distance. Available online: http://www.cloudcompare.org/doc/wiki/index.php?title=Cloud-to-Mesh_Distance (accessed on 25 February 2023).
  33. Razali, M.I.; Idris, A.N.; Razali, M.H.; Syafuan, W.M. Quality Assessment of 3D Point Clouds on the Different Surface Materials Generated from iPhone LiDAR Sensor. Int. J. Geoinform. 2022, 18, 51–59. [Google Scholar] [CrossRef]
  34. Bernardini, F.; Mittleman, J.; Rushmeier, H.; Silva, C.; Taubin, G. The ball-pivoting algorithm for surface reconstruction. IEEE Trans. Vis. Comput. Graph. 1999, 5, 349–359. [Google Scholar] [CrossRef]
  35. Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson Surface Reconstruction. In Proceedings of the Eurographics Symposium on Geometry Processing 2006, Sardinia, Italy, 26–28 June 2006. [Google Scholar]
  36. Mukhlisin, M.; Astuti, H.W.; Kusumawardani, R.; Wardihani, E.D.; Supriyo, B. Rapid and low cost ground displacement mapping using UAV photogrammetry. Phys. Chem. Earth 2023, 130, 103367. [Google Scholar] [CrossRef]
  37. Gindraux, S.; Boesch, R.; Farinotti, D. Accuracy assessment of digital surface models from Unmanned Aerial Vehicles’ imagery on glaciers. Remote Sens. 2017, 9, 186. [Google Scholar] [CrossRef] [Green Version]
  38. Harwin, S.; Lucieer, A. Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from Unmanned Aerial Vehicle (UAV) imagery. Remote Sens. 2012, 4, 1573–1599. [Google Scholar] [CrossRef] [Green Version]
  39. Girardeau-Montaut, D. CloudCompare Point Cloud Processing Workshop. Available online: https://docplayer.net/184042061-Cloudcompare-point-cloud-processing-workshop.html (accessed on 28 January 2021).
  40. Moyano, J.; Cabrera-Revuelta, E.; Nieto-Julián, J.E.; Fernández-Alconchel, M.; Fernández-Valderrama, P. Evaluation of Geometric Data Registration of Small Objects from Non-Invasive Techniques: Applicability to the HBIM Field. Sensors 2023, 23, 1730. [Google Scholar] [CrossRef] [PubMed]
  41. Webster, C.; Westoby, M.; Rutter, N.; Jonas, T. Three-dimensional thermal characterization of forest canopies using UAV photogrammetry. Remote Sens. Environ. 2018, 209, 835–847. [Google Scholar] [CrossRef] [Green Version]
  42. Moyano, J.; Odriozola, C.P.; Nieto-Julián, J.E.; Vargas, J.M.; Barrera, J.A.; León, J. Bringing BIM to archaeological heritage: Interdisciplinary method/strategy and accuracy applied to a megalithic monument of the Copper Age. J. Cult. Herit. 2020, 45, 303–314. [Google Scholar] [CrossRef]
  43. Ruggles, S.; Clark, J.; Franke, K.W.; Wolfe, D.; Reimschiissel, B.; Martin, R.A.; Okeson, T.J.; Hedengren, J.D. Comparison of sfm computer vision point clouds of a landslide derived from multiple small uav platforms and sensors to a tls-based model. J. Unmanned Veh. Syst. 2016, 4, 246–265. [Google Scholar] [CrossRef]
  44. Deliry, S.I.; Ugur, A. Accuracy evaluation of UAS photogrammetry and structure from motion in 3D modeling and volumetric calculations. J. Appl. Remote Sens. 2023, 17, 024515. [Google Scholar] [CrossRef]
  45. Marín-Buzón, C.; Pérez-Romero, A.M.; León-Bonillo, M.J.; Martínez-álvarez, R.; Mejías-García, J.C.; Manzano-Agugliaro, F. Photogrammetry (SfM) vs. Terrestrial Laser Scanning (TLS) for Archaeological Excavations: Mosaic of Cantillana (Spain) as a Case Study. Appl. Sci. 2021, 11, 11994. [Google Scholar] [CrossRef]
  46. Mlambo, R.; Woodhouse, I.H.; Gerard, F.; Anderson, K. Structure from motion (SfM) photogrammetry with drone data: A low cost method for monitoring greenhouse gas emissions from forests in developing countries. Forests 2017, 8, 68. [Google Scholar] [CrossRef] [Green Version]
  47. Del Savio, A.A.; Luna Torres, A.; Chicchón Apaza, M.A.; Vergara Olivera, M.A.; Llimpe Rojas, S.R.; Urday Ibarra, G.T.; Reyes Ñique, J.L.; Macedo Arevalo, R.I. Integrating a LiDAR Sensor in a UAV Platform to Obtain a Georeferenced Point Cloud. Appl. Sci. 2022, 12, 12838. [Google Scholar] [CrossRef]
  48. James, M.R.; Robson, S. Mitigating systematic error in topographic models derived from UAV and ground-based image networks. Earth Surf. Process. Landf. 2014, 39, 1413–1420. [Google Scholar] [CrossRef] [Green Version]
  49. Almeida, R.; Chaves, L.; Silva, M.; Carvalho, M.; Caldas, L. Integration between BIM and EPDs: Evaluation of the main difficulties and proposal of a framework based on ISO 19650:2018. J. Build. Eng. 2023, 68, 106091. [Google Scholar] [CrossRef]
  50. ICOMOS. Principles of Seville: International Principles of Virtual Archaeology. Available online: http://www.sevilleprinciples.com/ (accessed on 15 February 2020).
  51. London Charter. The London Charter for the Computer-Based Visualisation of Cultural Heritage. Available online: https://www.london-charter.org/downloads.html (accessed on 30 May 2023).
  52. Valente, R.; Brumana, R.; Oreni, D.; Banfi, F.; Barazzetti, L.; Previtali, M. Object-Oriented Approach for 3D Archaeological Documentation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 707–712. [Google Scholar] [CrossRef] [Green Version]
  53. Fiorillo, F.; Rossi, C. 3D Parametric Modelling based on Point Cloud for the Interpretation of the Archaeological Remains. Disegnare Con 2021, 14, 1–11. [Google Scholar] [CrossRef]
  54. Demetrescu, E. Archaeological stratigraphy as a formal language for virtual reconstruction. Theory and practice. J. Archaeol. Sci. 2015, 57, 42–55. [Google Scholar] [CrossRef]
  55. Banfi, F.; Brumana, R.; Landi, A.G.; Previtali, M.; Roncoroni, F.; Stanga, C. Building archaeology informative modelling turned into 3D volume stratigraphy and extended reality time-lapse communication. Virtual Archaeol. Rev. 2022, 13, 1–21. [Google Scholar] [CrossRef]
  56. Barazzetti, L. Parametric as-built model generation of complex shapes from point clouds. Adv. Eng. Inform. 2016, 30, 298–311. [Google Scholar] [CrossRef]
  57. Banfi, F. HBIM, 3D drawing and virtual reality for archaeological sites and ancient ruins. Virtual Archaeol. Rev. 2020, 11, 16–33. [Google Scholar] [CrossRef]
  58. Diara, F.; Rinaudo, F. Building Archaeology Documentation and Analysis through Open Source HBIM Solutions via NURBS Modelling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 1381–1388. [Google Scholar] [CrossRef]
  59. Trizio, I.; Savini, F.; Giannangeli, A.; Boccabella, R.; Petrucci, G. The Archaeological Analysis of Masonry for the Restoration Project in HBIM. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 715–722. [Google Scholar] [CrossRef] [Green Version]
  60. Trizio, I.; Savini, F. Archaeology of buildings and HBIM methodology: Integrated tools for documentation and knowledge management of architectural heritage. In Proceedings of the IMEKO International Conference on Metrology for Archaeology and Cultural Heritage, TC4 MetroArchaeo, Bergamo, Italy, 22–24 October 2020; pp. 84–89. [Google Scholar]
  61. Chiabrando, F.; Lo Turco, M.; Rinaudo, F.; Chiabrando, F.; Lo Turco, M.; Rinaudo, F. Modeling the Decay in AN Hbim Starting from 3d Point Clouds. A Followed Approach for Cultural Heritage Knowledge. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 605–612. [Google Scholar] [CrossRef] [Green Version]
  62. Dore, C.; Murphy, M.; McCarthy, S.; Brechin, F.; Casidy, C.; Dirix, E. Structural simulations and conservation analysis-historic building information model (HBIM). Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 351–357. [Google Scholar] [CrossRef] [Green Version]
  63. Brumana, R.; Banfi, F.; Cantini, L.; Previtali, M.; Della Torre, S. HBIM Level of Detail-Geometry-Accuracy and Survey Analysis for Architectural Preservation. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 293–299. [Google Scholar] [CrossRef] [Green Version]
  64. Adami, A. (Ed.) Geomatica e HBIM per i beni culturali. In Geomatica e HBIM per i Beni Culturali; Franco Angeli: Milan, Italy, 2021; pp. 1–233. [Google Scholar]
  65. Acierno, M.; Cursi, S.; Simeone, D.; Fiorani, D. Architectural heritage knowledge modelling: An ontology-based framework for conservation process. J. Cult. Herit. 2017, 24, 124–133. [Google Scholar] [CrossRef]
  66. Daniotti, B.; Gianinetto, M.; Della Torre, S. (Eds.) Digital Transformation of the Design, Construction and Management Processes of the Built Environment; Research for Development; Springer Nature: Cham, Switzerland, 2020; p. 400. ISBN 978-3-030-33569-4. [Google Scholar]
  67. Brusaporci, S.; Trizio, I.; Ruggeri, G.; Maiezza, P.; Tata, A.; Giannangeli, A. AHBIM per l’analisi stratigrafica dell’architettura storica. Restauro Archeol. 2018, 26, 112–131. [Google Scholar] [CrossRef]
  68. Fryskowska, A.; Stachelek, J. A no-reference method of geometric content quality analysis of 3D models generated from laser scanning point clouds for hBIM. J. Cult. Herit. 2018, 34, 95–108. [Google Scholar] [CrossRef]
  69. Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R. Scan-to-bim output validation: Towards a standardized geometric quality assessment of building information models based on point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 45–52. [Google Scholar] [CrossRef] [Green Version]
  70. Barazzetti, L.; Banfi, F.; Brumana, R.; Previtali, M. Creation of Parametric BIM Objects from Point Clouds Using Nurbs. Photogramm. Rec. 2015, 30, 339–362. [Google Scholar] [CrossRef]
  71. U.S. Institute of Building Documentation. USIBD Level of Accuracy (LOA) Specification Guide, Document C120, Version 2.0, 2016. Available online: https://cdn.ymaws.com/www.nysapls.org/resource/resmgr/2019_conference/handouts/hale-g_bim_loa_guide_c120_v2.pdf (accessed on 25 February 2020).
Figure 1. (a) Situation of the archaeological sites; (b) Digital Elevation Model of the cartographic context of the Gabrieles dolmens; (c) layout of a subregion as 3D surface models.
Figure 2. Photogrammetry of Dolmen no. 4 using the Linear Intelligent Flight Plan at a height of 12 m.
Figure 3. Photogrammetry of Dolmen no. 4 using an Intelligent Oblique Flight Plan at 24 m.
Figure 4. DJI Mavic 2 Enterprise Dual drone.
Figure 5. Analysis of the density of points and uncertainty.
Figure 6. Scanning of Dolmen no. 3 with the LiDAR sensor of the iOS device.
Figure 7. Linear correlation diagram of the deviation between TS and iPhone 13 Pro measurements in Dolmen no. 3.
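Figures 7, 9 and 11 plot the deviation between the total station (TS) and iPhone 13 Pro measurements against distance and summarise it with a linear correlation. Assuming the TS values are taken as ground truth, the fit reduces to an ordinary least-squares line; the sketch below shows the computation with NumPy on invented paired measurements (not the survey data):

```python
import numpy as np

# Hypothetical paired distance measurements in metres (NOT the paper's data);
# the TS readings are treated as the reference values.
ts     = np.array([2.10, 5.40, 9.80, 14.30, 21.00, 27.60])
iphone = np.array([2.12, 5.44, 9.87, 14.41, 21.12, 27.77])

# Absolute deviation of the iPhone reading from the TS reading
deviation = np.abs(iphone - ts)

# Least-squares line: deviation = a * distance + b
a, b = np.polyfit(ts, deviation, 1)

# Pearson correlation between distance and deviation
r = np.corrcoef(ts, deviation)[0, 1]
print(f"slope={a:.5f} m/m, intercept={b:.5f} m, r={r:.3f}")
```

With these illustrative values the deviation grows with distance, so the slope is positive and the correlation strong, which is the kind of trend the diagrams in the paper visualise.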
Figure 8. Partial aerial photogrammetry of the GCPs taken for SfMUAV at the Palma del Condado train station (Spain).
Figure 9. Linear correlation diagram of the deviation between TS and iPhone measurements at the station.
Figure 10. Photograph of the field of experimentation at the School.
Figure 11. Linear correlation diagram of the deviation between the measurements of the TS and iPhone at the school.
Figure 12. Box-and-whisker plot of measurement results and comparison between TS and iPhone.
Figure 13. Overlap of the two SfM point sets from the UAV and the iPhone 13 Pro LiDAR (light detection and ranging) sensor for Dolmen no. 3.
Figure 14. Analysis of the difference between SfMUAV and HMLS. Result of applying the C2C algorithm. The colour scale indicates the distance range; red represents the maximum distance, expressed in metres.
Figure 15. Histogram of Figure 14. Histogram units: metres (X axis) and number of points (Y axis).
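The C2C comparisons behind Figures 14–21 compute, for every point of the compared cloud, the Euclidean distance to its nearest neighbour in the reference cloud; CloudCompare accelerates this with an octree, but the core idea can be sketched by brute force with NumPy broadcasting (the tiny point sets here are illustrative, not survey data):

```python
import numpy as np

def c2c_distances(compared, reference):
    """Cloud-to-cloud (C2C) distances: for each point of `compared`,
    the Euclidean distance to its nearest neighbour in `reference`.
    Brute-force O(n*m); shapes are (n, 3) and (m, 3)."""
    compared = np.asarray(compared, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # (n, m) matrix of all pairwise distances via broadcasting
    diff = compared[:, None, :] - reference[None, :, :]
    pairwise = np.sqrt((diff ** 2).sum(axis=2))
    return pairwise.min(axis=1)

# Tiny illustrative clouds (metres)
ref_cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cmp_cloud = np.array([[0.0, 0.0, 0.1], [1.0, 0.1, 0.0]])
d = c2c_distances(cmp_cloud, ref_cloud)
print(d)
```

The histograms in Figures 15, 17, 19 and 21 are simply the distribution of these per-point distances (e.g. `np.histogram(d)`), with metres on the X axis and point counts on the Y axis.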
Figure 16. Analysis of the difference between SfMUAV and GCP by HMLS and TLS. Result of applying the C2C algorithm. Distance distribution colour map. In the colour scale, red represents the maximum distance, expressed in metres, and blue represents the minimum distance.
Figure 17. Histogram of Figure 16. Histogram units: metres (X axis) and number of points (Y axis).
Figure 18. Analysis of the difference between SfMUAV at 12 and 24 m. Result of applying the C2C algorithm. Distance distribution colour map. In the colour scale, red represents the maximum distance, expressed in metres, and blue represents the minimum distance.
Figure 19. Histogram of Figure 18. Histogram units: metres (X axis) and number of points (Y axis).
Figure 20. Analysis of the difference between SfMUAV at 12 and 24 m. Result of applying the C2C algorithm. Distance distribution colour map. In the colour scale, red represents the maximum distance, expressed in metres.
Figure 21. Histogram of Figure 20. Histogram units: metres (X axis) and number of points (Y axis).
Figure 22. Linear correlation diagram of the deviation, graphing the results from Table 3 of the work of Chase et al. [15].
Figure 23. Average values and their errors plotted against the LOA ranges of the USIBD guide, comparing Costantino et al. [14], Luetzenburg et al. [12], Spreafico et al. [3] and Chase et al. [15] with the surveys carried out here.
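Figure 23 places the reported errors on the USIBD Level of Accuracy (LOA) scale [71]. As a rough aid, the LOA bands commonly quoted from the USIBD C120 guide (LOA50: 0–1 mm, LOA40: 1–5 mm, LOA30: 5–15 mm, LOA20: 15–50 mm, LOA10: above 50 mm) can be encoded as a simple lookup; the limits below should be checked against the current edition of the guide:

```python
def usibd_loa(error_m):
    """Map a measurement error (in metres) to a USIBD LOA band.
    Band limits follow the commonly cited USIBD C120 v2.0 ranges,
    with upper bounds treated as inclusive."""
    bands = [
        ("LOA50", 0.001),  # 0-1 mm
        ("LOA40", 0.005),  # 1-5 mm
        ("LOA30", 0.015),  # 5-15 mm
        ("LOA20", 0.050),  # 15-50 mm
    ]
    for name, upper in bands:
        if error_m <= upper:
            return name
    return "LOA10"  # above 50 mm (lower band, user-defined limit)

# e.g. a 5 mm uncertainty falls in LOA40, a 21 mm deviation in LOA20
print(usibd_loa(0.005), usibd_loa(0.021))
```

Under this classification, the ±5 mm GCP capture uncertainty reported in the abstract sits at the LOA40/LOA30 boundary, while the 0.021–0.069 m deviation range spans LOA20 and below.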
Table 1. Table of deviations of the three case studies.

| Case | Total No. | Mean (m) | Standard Deviation (σ) (m) | Sum (m) | Min. Distance (m) | Median (m) | Max. Distance (m) |
|---|---|---|---|---|---|---|---|
| SCHOOL | 12 | 0.11131 | 0.06921 | 1.33573 | 0.02356 | 0.11524 | 0.25848 |
| STATION | 12 | 0.02447 | 0.02125 | 0.29363 | 0.00255 | 0.01813 | 0.06552 |
| DOLMEN | 10 | 0.07036 | 0.06343 | 0.70361 | 0.00169 | 0.05809 | 0.20470 |
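The summary statistics in Table 1 (count, mean, population standard deviation, sum, minimum, median and maximum of the per-GCP deviations) can be reproduced with a short NumPy routine; the deviation values used below are illustrative placeholders, not the study's raw measurements:

```python
import numpy as np

def deviation_summary(deviations):
    """Summarise a set of absolute deviations (metres) in the same
    form as Table 1: n, mean, sigma, sum, min, median, max."""
    d = np.asarray(deviations, dtype=float)
    return {
        "n": d.size,
        "mean": d.mean(),
        "std": d.std(ddof=0),   # population standard deviation (sigma)
        "sum": d.sum(),
        "min": d.min(),
        "median": np.median(d),
        "max": d.max(),
    }

# Illustrative per-GCP deviations in metres (NOT the surveyed values)
sample = [0.0026, 0.0105, 0.0181, 0.0220, 0.0310, 0.0655]
stats = deviation_summary(sample)
print(stats)
```

Running the same routine on the measured deviations of each case study (SCHOOL, STATION, DOLMEN) yields rows of the form shown in Table 1.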
Table 2. Table of deviation between the SfMUAV and HMLS sets of points from the survey in Figure 14.

| Comparison between SfMUAV and HMLS | Standard Deviation (σ) (m) | Min. Distance (m) | Max. Distance (m) | Average Distance (m) | Estimated Standard Error (m) |
|---|---|---|---|---|---|
| STEP 1 | 0.0889 | 0 | 0.386 | 0.0403 | 0.09651 |
Table 3. Table of deviation between the SfMUAV and TLS sets of points from the survey in Figure 16.

| Comparison between SfMUAV and TLS | Standard Deviation (σ) (m) | Min. Distance (m) | Max. Distance (m) | Average Distance (m) | Estimated Standard Error (m) |
|---|---|---|---|---|---|
| STEP 1 | 0.06658 | 0 | 0.8194 | 0.01769 | 0.0359 |
