Morphometric Analysis through 3D Modelling of Bronze Age Stone Moulds from Central Sardinia

Stone moulds were basic elements of Bronze Age metallurgy, and their analysis and characterization are important for improving knowledge of these artefacts and supporting their typological characterization. The stone moulds investigated in this study were found during an archaeological field survey of several Nuragic (Bronze Age) settlements in Central Sardinia. Recent studies have shown that photogrammetry can be effectively used for the 3D reconstruction of small and medium-sized archaeological finds, although producing high-quality digital replicas of ancient artefacts remains challenging due to their surface complexity and consistency. In this paper, we propose a multidisciplinary approach combining mineralogical (X-ray powder diffraction) and petrographic (thin section) analysis of the stone materials with an experimental photogrammetric method for 3D reconstruction from multi-view images, performed with recent software based on the CMPMVS algorithm. The photogrammetric image dataset was acquired using an experimental rig equipped with a 26.2 Mpix full-frame digital camera. We also assessed the accuracy of the reconstructed models in order to verify their precision and readability with respect to archaeological goals, enabling a more detailed study of the geometric-dimensional aspects of the moulds. Furthermore, this paper demonstrates the potential of an integrated minero-petrographic and photogrammetric approach for the characterization of small artefacts, providing an effective tool for future typological comparisons and provenance studies.


Introduction
In this paper we propose a multidisciplinary approach to investigate five fragmented stone moulds, using mineralogical (X-ray powder diffraction) and petrographic (thin section) analysis and an experimental photogrammetric method for 3D reconstruction from multi-view images. The five moulds were found during field survey in several Nuragic sites in West-Central Sardinia (Italy; Figure 1) belonging to the Early Bronze Age [1]. The first three fragmented moulds (B1, B2, B3) were collected in a large Nuragic settlement called Santu Perdu (district of Bidonì) near the South-Eastern shore of Omodeo Lake (Sardinia, Italy). Another artefact (N1) is a multiple mould recovered in the Nuragic settlement of Mura, in the municipality of Narbolia (Sardinia, Italy; Figure 1) [2]. The last stone mould fragment (S1) was found close to Nuraghe Zuri (district of Sorradile, Sardinia, Italy; Figure 1), currently submerged by Omodeo Lake.
These moulds have been studied by various authors from an archaeological point of view, describing the typological and morphological aspects in a traditional way and attributing the lithological nature based exclusively on the comparison with similar artefacts and on their macroscopic aspects [1][2][3].
We carried out a preliminary mineralogical (XRD) and petrographic analysis of these five moulds in order to verify their lithological nature. This characterization was integrated with a morphometric analysis based on 3D photogrammetric reconstruction from multi-view images (MVS).
The morphometric analysis allowed accurate and repeatable measurements to be made of the artefacts. In particular, it was possible to define the dimensional parameters of the shapes more completely (reciprocal orientation of the worked surfaces and volumes of the artefacts) and to obtain physical characteristics (apparent specific weight) useful for lithological comparison of both the moulds and compatible raw materials.
3D digitalization is increasingly being adopted in archaeological studies, and research efforts in this field aim to identify workflows that guarantee the best performance in relation to processing time and costs for the production of 3D digital replicas. Each workflow involves the use of different hardware, software and digital sensor solutions [4]. In recent years, several software solutions have become available for the reconstruction of 3D objects from images using a combination of Structure-From-Motion (SFM) and dense Multi-View Stereo Reconstruction (MVS) algorithms. The SFM algorithm reconstructs a scattered point cloud of a scene or object that has been captured by an arbitrary number of images arranged in space according to more or less rigid and ordered geometries, by calculating the intrinsic and extrinsic parameters of the digital camera. The richer the surface to be detected, the more effective the SFM algorithm will be. In order to obtain a point cloud or a dense polygonal mesh, SFM is coupled with MVS algorithms that calculate the correspondence of information for each pixel of the images and the corresponding depth maps [4]. The field of application of photogrammetric methods based on SFM-MVS varies depending on the scale of investigation, from 3D reconstruction of artefacts in the cultural heritage sector to monuments of historical interest and archaeological excavations [5,6]. Recent scientific studies have attempted to evaluate the applicability of photogrammetric methods in relation to the purpose of the study and the quality of the expected models [4]. For photogrammetric acquisition of the moulds, we used an experimental photogrammetry rig and a full-frame digital camera. For the processing of the image dataset, we used the photogrammetric reconstruction software Reality Capture, based on the Clustering Multi-View Patch/Multi-View Stereo (CMPMVS) algorithm [7,8].
The finds generally present a homogeneous coloring; on the other hand, their shapes are highly irregular due to the presence of grooves and holes of different depths. This requires testing and careful fine-tuning of the 3D reconstruction method in order to obtain models with a level of detail and readability that overcomes the limits of the traditional methods of graphic restitution and measurement often used to study archaeological artefacts [4]. Another aspect of great interest is the possibility of creating detailed 3D digital models reproducing the original geometry of the forms, thus allowing a better understanding and interpretation of the nature of the artefacts themselves [5]. We would also like to highlight some critical issues related to the reliability of the recording, whose accuracy and precision may or may not reach the level required by the scientific investigation, depending on the technique and acquisition system used [5,6].

The Stone Moulds
The first three fragmented moulds, B1, B2 and B3 (Figure 2), come from the Nuragic settlement of Santu Perdu (district of Bidonì, Sardinia, Italy) near the South-Eastern shores of Omodeo Lake. The artefact B1 has a rhomboidal shape and preserves the mould for a flat axe; on the same face there are three blind holes, probably functional for fixing the lid [1]. The find B2 has an irregular parallelepiped shape and preserves the carved moulds for three different kinds of cast: a flat axe, a double axe with converging or orthogonal cuts, and a flat object that is not recognizable [1]. The third mould fragment (B3) has a trapezoidal shape with remains of at least two well-finished faces. It preserves the carving for a probable discoidal object; on the same face there is also a badly worn elongated cut, probably the residue of the carving for another kind of cast [1]. On the edge and on two other faces there are several holes facing in different directions. The artefact N1 is a multiple carved mould recovered in the Nuragic settlement of Mura, in the municipality of Narbolia (Sardinia, Italy) [2] (Figure 2). This find has a very complex shape, with three well-finished faces and traces of at least seven carvings for producing axes and chisels; in addition, on two faces there are four irregular holes, presumably for fixing the lid [1]. The stone mould S1 is from Nuraghe Zuri, district of Sorradile (Figure 2); it is loosely defined by archaeologists as "greenstone" and has a parallelepiped shape, strongly blackened, with two sides well finished, one face broken and the other raw. On the upper face it preserves part of the carving for an axe, hypothesized to have orthogonal or converging cutting edges, with a hole for the handle and a cylindrical collar; furthermore, on a lateral face it is possible to see part of a hollow with an arched margin; there are no holes for fixing the lid [1].
All the moulds are of the monovalve type and were intended for reuse through the carving of new shapes, often involving all available faces of the stone block [3].

Mineralogical and Petrographic Characterization of the Stone Moulds
All the mould fragments were carefully sampled to carry out X-Ray powder diffraction analysis (XRD) to define the mineral composition of the rock type used. The XRD analysis was performed with a Rigaku Geigerflex B/Max diffractometer (Cu anode), operating at 30 kV and 30 mA. For the thin section petrographic analysis, due to the importance of maintaining the integrity of the finds as much as possible, only one of the artefacts was allowed to be sampled (B1). The analysis was carried out using a Zeiss Axioplan petrographic microscope (Carl Zeiss, Oberkochen, Germany). Following the results of the minero-petrographic analyses a preliminary geological survey was carried out to find the possible provenance of the stone materials used for the moulds. The survey led to finding, sampling and analysing some compatible materials.

Photogrammetric Method and 3D Modeling
It is well known that, in terms of accuracy and precision, the reliability of acquired 3D data is closely related to the size and quality of the minimum pixel acquired on the object (pixmin) and to the baseline/distance (B/D) ratio between object and camera position. The accuracy of reconstructing a point cloud in space is a non-linear function of the B/D ratio and of the angle of convergence of the camera's optical axis (β). For shots with axes converging on an object, the optimal range of B/D values is from 0.4 to 0.8; however, such values limit stereo matching (photogrammetric rules), so the best balance between image matching and accuracy vs. depth estimation errors can be obtained with B/D values ranging between 0.1 and 0.5 [4,5]. Preliminary measurement of the finds was carried out with a DIN 862 caliper, with a measuring range of 0-300 mm, resolution 0.01 mm and maximum permissible error <0.040 mm (Table 1). In this work, we set the pixmin value to 0.060 mm and the B/D value to 0.1, with B = 50 mm and a fixed camera-object distance D = 500 mm. The photographs were taken with a Canon EOS RP mirrorless camera (26.2 megapixel, 6240 × 4160 pix, Raw 24 bit, full-frame CMOS sensor 35.9 × 24.0 mm, pixel size 5.75 × 5.75 µm), equipped with a Canon EF 50 mm f/1.4 lens with polarizing filter, positioned on the photogrammetry rig schematically shown in Figure 3. The shooting sequence was carried out with a turntable on which the finds were placed. In order to guarantee uniform light conditions without shadows, the turntable was placed in an 80 × 80 × 80 cm diffuse lighting system (Light Led Box 90+ CRI).
The complexity of the shapes, their dimensions and the number of available flat faces of each artefact defined the maximum possible number of rotations for the photogrammetric capture of the object. Assuming that the ratio of the size of each pixel on the digital sensor to the pixel on the object is constant, the minimum possible pixel size (pixmin) on the object can be calculated as the ratio of the camera-object distance (D) to the focal length of the lens (F), multiplied by the pixel size of the digital sensor used. Thus, in our case the minimum object pixel with a 50 mm lens will be: pixmin = (D/F) × 5.75 µm = (500/50) × 0.00575 mm = 0.0575 mm ≈ 0.057 mm
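Under the stated assumptions (thin-lens approximation, square sensor pixels), the pixmin and B/D values used above can be reproduced with a short script; the function names are illustrative, not part of any cited software:

```python
# Object pixel size (pixmin) and baseline/distance check for the acquisition
# geometry described in the text: D = 500 mm camera-object distance,
# F = 50 mm lens, 5.75 um sensor pixels, baseline B = 50 mm.

def object_pixel_size(distance_mm: float, focal_mm: float, sensor_pixel_mm: float) -> float:
    """Smallest resolvable pixel footprint on the object (thin-lens approximation)."""
    return (distance_mm / focal_mm) * sensor_pixel_mm

def baseline_ratio(baseline_mm: float, distance_mm: float) -> float:
    """B/D ratio controlling the stereo-matching vs. depth-accuracy trade-off."""
    return baseline_mm / distance_mm

pix_min = object_pixel_size(500.0, 50.0, 0.00575)
bd = baseline_ratio(50.0, 500.0)
print(f"pixmin = {pix_min:.4f} mm, B/D = {bd:.1f}")  # pixmin = 0.0575 mm, B/D = 0.1
```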

Pre-Acquisition Procedure and High-Precision Positioning
For the photogrammetric acquisition process, various common coded targets are used, which can be classified into two categories: dot-dispersing and circular. In our study, we used circular Schneider targets placed on the surfaces of the moulds [9]. Such targets have a design with a central circle and a surrounding pattern of concentric ring segments [9,10]. Targets with a Schneider design are mostly used for industrial measurements, due to the simplicity of their structure, the encoding combinations that can be obtained and the lower distortion in the detecting phase. The recognition of concentric circular coded targets was performed with the RC software during the alignment of the images acquired with an optimal viewing angle α0 = 60°, using an automatic identification and decoding algorithm with sub-pixel accuracy on n ≥ 6 photos. In addition, in order to define the reference coordinates, an L-shaped reference system (LRS) consisting of black and white encoded target CPs (12-16 bit) was placed on the turntable plane of the photogrammetric rig [5,10]. The LRS design was calibrated according to the size of each artefact and printed on special heavyweight matte paper, used to define the reference XYZ coordinate system for each artefact. Using this system, 10 images were captured, one for each rotation of α0 = 12°, with a camera tilt angle of β0 = 60°. The procedure was carried out only once for each artefact, positioned on a single face (Table 2).

Acquisition
Each artefact was placed on the rig's turntable, and 180 images were captured in Raw format, one at every rotation angle α0 = 12° (repeated, where possible, for all six supporting faces). The camera-object distance was fixed at D = 500 mm, while the shooting axis was tilted at β0 = 60° for the finds B1, B2, B3 and N1, and at β0 = 60° and β1 = 30°, α1 = 7° for the larger find, S1. To obtain the sharpest and most focused images possible, we calculated the optimal Depth of Field (DoF) values, setting the maximum possible lens aperture values (Table 2). Given the camera-object distance and the focal length of the lens, the Krauss formula for the calculation of the DoF allowed us to establish the two limits, near (an) and far (af), within which optimal sharpness as a function of the aperture and the diameter c of the circle of confusion could be attained [11]. For the finds B1, B2, B3 and N1 we used the following values: DoF (f/16) = 87 mm; an = 46 cm; af = 54.7 cm; c = 0.03 mm. For the find S1, we calculated DoF (f/22) = 124 mm; an = 44.6 cm; af = 57.0 cm; c = 0.03 mm. According to the mean of the near (an) and far (af) values, we obtained pixmed = 0.060 mm. The fill ratio of object to frame was 25% for the smallest artefacts (B1, B2, B3) and 45% for the larger ones (N1, S1). This acquisition procedure was repeated five times for each artefact.
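As a sketch of the DoF computation, the near and far limits for the f/16 setup can be reproduced from the standard hyperfocal-distance formulas, which give essentially the same limits as the Krauss formulation cited in the text:

```python
# Depth-of-field sketch for the shooting setup described above:
# f = 50 mm lens, s = 500 mm focus distance, circle of confusion c = 0.03 mm.

def dof_limits(focal_mm, f_number, focus_mm, coc_mm):
    """Return (near, far, dof) in mm, computed via the hyperfocal distance."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm        # hyperfocal distance
    near = focus_mm * (h - focal_mm) / (h + focus_mm - 2 * focal_mm)
    far = focus_mm * (h - focal_mm) / (h - focus_mm)
    return near, far, far - near

near, far, dof = dof_limits(50.0, 16, 500.0, 0.03)
print(f"f/16: near = {near:.0f} mm, far = {far:.0f} mm, DoF = {dof:.0f} mm")
# f/16: near = 460 mm, far = 547 mm, DoF = 87 mm -- matching an = 46 cm,
# af = 54.7 cm and DoF = 87 mm reported in the text.
```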

Image Pre-Processing
Once the photographic acquisition phase was completed, the dataset in Raw format was processed for image enhancement, optimizing in particular the color balance and the brightness/contrast ratio, to improve the performance of the entire photogrammetric workflow in terms of 3D model quality and elaboration time (Figure 4). The image datasets were then processed with the Reality Capture (RC) photogrammetric software, installed on a Windows workstation with an Intel i7-4820K 3.7 GHz CPU, 64 GB RAM, a 512 GB SSD, and an Nvidia GTX 1080 Ti GPU with 11 GB of RAM.

Alignment Procedure
The camera calibration was performed automatically by the photogrammetric software (RC), which accurately calculated the internal and external parameters. The distortion correction model applied was Brown 3 [8,12,13]. The models were aligned and scaled with a semi-automatic procedure of the RC software on the basis of three XYZ values (X,0,0; 0,0,0; 0,Y,0) assigned to the vertex CPs of the LRS, in order to orient in space, scale and position each 3D model for comparison. The average displacement error calculated on the X, Y, Z coordinates of the LRS CPs was 0.015 mm (RMS value), with a maximum displacement error of 0.065595 pix (Maximum Projection Error). The positions of the CPs aligned in a common coordinate system are therefore to be considered acceptable for each sample.
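As an illustrative sketch (not the internal RC procedure), the three LRS control points with assigned coordinates (X,0,0), (0,0,0) and (0,Y,0) are sufficient to define a rigid reference frame; given their measured positions in an arbitrary model space, the rotation and translation mapping model space into the LRS frame can be recovered as follows. The point values below are hypothetical:

```python
import numpy as np

def lrs_frame(p_x, p_origin, p_y):
    """Build the rigid transform (R, t) that maps measured points into the
    L-shaped reference system: p_origin -> (0,0,0), p_x -> +X axis, p_y -> +Y."""
    ex = p_x - p_origin
    ex = ex / np.linalg.norm(ex)              # unit X axis
    ez = np.cross(ex, p_y - p_origin)
    ez = ez / np.linalg.norm(ez)              # unit Z axis (normal to the LRS plane)
    ey = np.cross(ez, ex)                     # unit Y axis completes the frame
    r = np.vstack([ex, ey, ez])               # model-space -> LRS rotation
    t = -r @ p_origin
    return r, t

# hypothetical measured CP positions in an arbitrary model frame
p0 = np.array([1.0, 2.0, 0.5])                        # origin CP
px = p0 + np.array([0.0, 0.1, 0.0]) * 120.0           # CP on the X arm, 120 mm away
py = p0 + np.array([-0.1, 0.0, 0.0]) * 80.0           # CP on the Y arm, 80 mm away
r, t = lrs_frame(px, p0, py)
print(np.round(r @ px + t, 6), np.round(r @ p0 + t, 6))
```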

Point Cloud Reconstruction
For each find, a reconstruction region (RR) was established in order to delimit the essential space for processing the dense clouds and mesh models. For this purpose, a sub-dataset of images was selected, excluding those acquired with the reference system (LRS) positioned on the turntable. The RC software allows the selection of different density levels (normal, medium, high) in the cloud construction phase; the clouds of our finds were all processed at the highest density (high quality) with a minimum spacing between points of 0.060 mm (PCMS). The reconstruction of the artefacts in RC did not require a clipping mask. During data processing, we also made effective use of the RC algorithms for editing (clean model topology tool: non-manifold vertices and edges, holes and isolated vertices) and for model simplification (simplify tool), in order to standardize all the test models for comparison in terms of minimal edge length and target triangle count, which allowed a correct comparison between them as well as assessment of the accuracy of the proposed method (Figure 5).

Texturing Procedure
During the unwrapping, the minimum calculation value was consistent with the established PCMS = 0.060 mm. In the following texturing and coloring phase, a sub-dataset of images was selected, which allowed improvement of the quality of the final texture in relation to the processing time. From the image alignment phase to the 3D reconstruction and final texturing, the total time required for the generation of each single 3D model was up to 2 h. At the end of this procedure, a total of 25 point clouds were generated, five for each archaeological find. Table 3 shows the total number of vertices and triangles obtained for each find.

Evaluation of Reconstruction Accuracy
To evaluate the accuracy of the photogrammetric method proposed in this study, we compared a series of five 3D models processed separately for each of the studied artefacts. We verified the spatial variability of the estimated error in terms of the distance between the reconstructed 3D surfaces, the volume variation (V), and the variability of the linear measurements taken in the X, Y and Z directions. For this verification, a three-dimensional surface deviation analysis was performed based on the estimation of the averages and relative standard deviations of the Euclidean distances, calculated between the different point clouds processed for each artefact. All of the dense point clouds (25 in total) were exported by the RC software as text files (X, Y, Z, R, G, B, Nx, Ny, Nz) under the following conditions: the same minimum spacing between points (PCMS = 0.060 mm) and the same number of points for each cloud. The point clouds were processed with CloudCompare software (version 2.11.3, Free Software Foundation, Inc., Boston, USA) for data recording, analysis and cloud-to-cloud (C2C) comparison. The point clouds, already oriented and scaled on the basis of the previously defined CPs, were aligned and registered with each other using the Iterative Closest Point (ICP) algorithm, one of the most popular methods for registering point cloud data [14]. The C2C comparison method is fast and straightforward, as it does not require data gridding, meshing or calculation of surface normals [4,5,15,16]. With C2C comparison, the mean and standard deviation of the Euclidean distances of each point from its nearest neighbours were calculated. The C2C algorithm uses an octree structure to subdivide 3D space and then calculates Hausdorff distances (the maximum distance between each point in the PC1 cloud and the nearest points in the PC2 cloud) [17].
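A minimal numpy sketch of the C2C idea, nearest-neighbour Euclidean distances between two clouds, is given below; CloudCompare's octree-accelerated implementation is far more efficient, and the synthetic clouds are purely illustrative:

```python
import numpy as np

def c2c_distances(pc1, pc2):
    """For each point of pc1, the Euclidean distance to its nearest
    neighbour in pc2 (brute force; fine for small clouds)."""
    diffs = pc1[:, None, :] - pc2[None, :, :]          # (n1, n2, 3) pairwise offsets
    d = np.sqrt((diffs ** 2).sum(axis=2))              # (n1, n2) pairwise distances
    return d.min(axis=1)                               # nearest-neighbour distances

rng = np.random.default_rng(0)
cloud_a = rng.random((500, 3)) * 100.0                 # synthetic points in a 100 mm cube
cloud_b = cloud_a + np.array([0.05, 0.0, 0.0])         # copy shifted by 0.05 mm along X
dist = c2c_distances(cloud_a, cloud_b)
print(f"mean = {dist.mean():.4f} mm, std = {dist.std():.4f} mm")
```

With widely spaced points, each nearest neighbour is the point's own shifted copy, so the mean distance recovers the 0.05 mm offset.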

Morphometric Analysis Procedure
After verifying the limits of the photogrammetric method, each 3D model of the moulds was measured in a CAD environment to define all geometrical aspects relevant to the identification of the main characteristics of the bronze artefacts produced. For this purpose, each 3D model was positioned according to the support plane (XY) of the mould and oriented in a reference direction chosen according to the shape of the artefact. Then, the LRS coordinates of the selected 3D model were fixed in the workspace of the RC software to process the orthoviews in the same reference system. The casting moulds were measured by orienting the 3D models according to the horizontal best-fit plane corresponding to each worked surface. This analysis allowed for a better understanding of the shape of each carving, using different views as well as longitudinal and cross sections of the 3D models. For each digital model, a further analysis of the surface curvature was carried out using CloudCompare software with the ShadeVis plug-in. This algorithm is derived from one developed by Tarini et al. [18] and calculates the illumination of a point cloud (or of the vertices of a mesh) as if the light were coming from a theoretical hemisphere or sphere around the object. It uses normal maps for recovering detail on meshes in order to evaluate the quality of a 3D model. The technique automatically produces a normal-map illumination for a simple 3D model based on the computation of visibility and ambient occlusion information. With this tool, we added a texture map that improves on the given texture mapping, better highlighting discontinuities such as cuts, hollows and flat sides on the surfaces, which are often not well defined on photorealistic textured models.
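A simplified sketch of the hemisphere-lighting idea behind ShadeVis follows: per-vertex brightness is accumulated from light directions sampled on a hemisphere. The real algorithm also accounts for occlusion between surface parts (ambient occlusion), which is omitted here, so this is only the unoccluded Lambert term:

```python
import numpy as np

def hemisphere_directions(n):
    """Roughly uniform directions on the upper (z >= 0) hemisphere
    via a Fibonacci spiral."""
    i = np.arange(n)
    z = (i + 0.5) / n                                   # uniform in z on (0, 1)
    phi = i * np.pi * (3.0 - np.sqrt(5.0))              # golden-angle increments
    r = np.sqrt(1.0 - z ** 2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def vertex_brightness(normals, n_dirs=256):
    """Mean clamped Lambert term over hemisphere light directions
    (no self-occlusion: every light direction is assumed visible)."""
    dirs = hemisphere_directions(n_dirs)                # (n_dirs, 3)
    lambert = np.clip(normals @ dirs.T, 0.0, None)      # (n_verts, n_dirs)
    return lambert.mean(axis=1)

normals = np.array([[0.0, 0.0, 1.0],                    # vertex facing the light dome
                    [0.0, 0.0, -1.0]])                  # vertex facing away
print(np.round(vertex_brightness(normals), 2))
```

A vertex facing the dome receives the mean cosine over the hemisphere (0.5), while one facing away receives nothing; on a real mesh this gradient emphasizes cuts and hollows.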

Mineralogical-Petrographic Analysis and Provenance of Stone Materials
The XRD analysis shows a compositional homogeneity, with a mineralogical assemblage mainly represented by a chlorite mineral (clinochlore), quartz and subordinate plagioclase (Figures 6 and 7). The preliminary petrographic analysis in thin section (only possible on artefact B1) confirms the mineral association of the XRD analysis, given mainly by chlorite group phenocrysts with associated very fine-grained quartz and subordinate plagioclase crystals (Figure 8). The observation in non-polarized light, besides the typical pleochroism of clinochlore, highlights some opaque spots with a reddish halo, probably due to oxidation of the structural iron in the chlorite mineral. In addition to the compositional homogeneity of the investigated artefacts, their apparent specific weights are practically identical. These considerations allow us to assert that our stone moulds were probably made from the same lithotype. Their mineralogical composition led us to investigate the mineralised outcrops in the nearby district of Orani (Central Sardinia, Italy), where talc-chlorite bodies were actively exploited in open-pit mines during most of the past century [19]. A preliminary geoarchaeological survey conducted in this region highlighted the presence of materials that seemed likely to be compatible with our stone moulds. The minero-petrographic analysis carried out on these samples showed the same mineral association. However, the comparison with only one petrographic sample (taken from B1) does not allow us to define with certainty the precise location of the materials used to make the casting moulds.

Accuracy of the Photogrammetric Models
Most accuracy verifications on photogrammetric models are based on comparison of the lengths calculated by the software from the relevant 3D models with those detected using other systems; the errors detected can be expressed by the ratio 1/k, where k is the maximum size of the subject divided by the absolute error [5]. With the use of CPs (coded targets), alignment errors can reach accuracies of 1:100,000. Several studies report errors for 3D surfaces created by photogrammetry in the range 1:1000 ÷ 1:5000 compared to models generated by optical scanner systems. However, errors in the range of 1:1000 or greater are likely to be acceptable for many reconstruction projects in the Cultural Heritage field [5,6]. In this work, we assumed a tolerance value for the measurements between 1:1000 ÷ 1:2000. Recent studies have shown that comparisons of model distances with measurements taken manually using digital calipers show generally low discrepancies between reference points, in the range of 1:300 ÷ 1:1000; however, this may indicate gauge and operator error rather than problems with the photogrammetric reconstruction process [5].
As mentioned above, in order to evaluate the accuracy of the proposed photogrammetric method a three-dimensional surface deviation analysis was performed based on estimation of the averages and relative standard deviations of the Euclidean distances, calculated between the point clouds processed for each find with CloudCompare software for data recording and analysis using the Iterative Closest Point (ICP) algorithm [14]. The general purpose of the ICP algorithm is to estimate a rigid transformation among the points in the reference cloud and those in the comparison points cloud.

The ICP method implements a closest-neighbour search with Euclidean distance calculation and estimates the nearest point [5,14]. During the accuracy verification process, the algorithm was run cyclically with an increasing number of calculation points until the minimum possible RMS value was obtained. The rigid registration (ICP) showed a better initial orientation for finds B1, B2 and B3. The presence of cloud noise and outliers can influence the results of registration with ICP, in particular for artefacts N1 and S1. A final registration RMS ranging from 0.032 ÷ 0.040 mm was obtained for each comparison.
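The rigid registration step just described can be sketched in a few lines of NumPy: nearest-neighbour pairing followed by an SVD-based (Kabsch) rigid transform, iterated until the RMS stabilises. This is a simplified stand-in for CloudCompare's optimised ICP, using a brute-force neighbour search suitable only for small clouds:

```python
import numpy as np

def icp_rigid(src, ref, iterations=20):
    """Align point cloud `src` (N x 3) to `ref` (M x 3) with rigid ICP.

    Returns the aligned copy of `src` and the final RMS nearest-point distance.
    """
    aligned = src.copy()
    for _ in range(iterations):
        # Pair each source point with its nearest reference point (brute force).
        d2 = ((aligned[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
        nearest = ref[d2.argmin(axis=1)]
        # Best-fit rigid transform between the paired sets (Kabsch / SVD).
        mu_a, mu_n = aligned.mean(axis=0), nearest.mean(axis=0)
        H = (aligned - mu_a).T @ (nearest - mu_n)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_n - R @ mu_a
        aligned = aligned @ R.T + t
    # Final RMS of nearest-point distances after the last transform.
    d2 = ((aligned[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
    return aligned, float(np.sqrt(d2.min(axis=1).mean()))
```

As in the study, noise and outliers in the clouds would degrade this registration; the sketch assumes clean, well-overlapping clouds.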
Once the point cloud datasets were spatially registered, surface discordance analysis was performed using the C2C method, calculating absolute Euclidean distances between clouds with the nearest-point technique. A total of 10 comparisons was carried out for each artefact, showing the differences in the mean distance and standard deviation values for each C2C comparison (Table 4).

Table 4. Average distances (M) and standard deviations (SD) of C2C absolute distance errors of 3D comparisons, after ICP registration.

Following this preliminary analysis, the mean comparison data were selected and, for each one, the scalar ranges (C2C-X, C2C-Y and C2C-Z) of the C2C distance and the local cloud-to-cloud distance (C2C-L; Table 5) were calculated. The three-dimensional surface deviation analysis was performed with a statistical distance error threshold at the cumulative Q95 percentile (for C2C and C2C-L) and at M ± 2SD (for the C2C scalar fields). Based on these values, we defined the variability of the C2C and C2C-L displacement vectors and the maximum surface deviation intervals used to verify the tolerance limits established for this study.

Table 5. Average distances (M) and standard deviations (SD) of selected C2C and C2C-L local distance comparisons (from the 2D1/2 triangulation model): B1 1-2 = compared point clouds 1 and 2, . . . , S1 1-5 = compared point clouds 1 and 5.

The distance calculated in the comparison process is affected by variability in point cloud density, roughness and possible outliers [16,19,20]. To assess the presence of outliers in the data, deviations at the Q99.9 percentile (C2C and C2C-L) were calculated for each comparison to eliminate incongruent distance values (outliers 0.08 ÷ 0.09% of the total population).
Outliers due to isolated points inside the reconstruction region (RR) were removed manually, following a visual assessment of the reconstructed model in the RC software and by selection of maximum distance values in C2C (maximum search distance). Table 4 shows the results of the estimated variability of the absolute error using the C2C method. Finds B1, B2 and B3 showed mean absolute C2C error values ranging from 0.032 ÷ 0.035 mm, with standard deviations ranging from 0.012 ÷ 0.015 mm. The mean absolute C2C error was greater for artefacts N1 and S1, which have more problematic shape and surface complexity, with mean and standard deviation values ranging from 0.039 ÷ 0.043 mm and 0.015 ÷ 0.018 mm, respectively.
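The C2C summary statistics used above (absolute nearest-point distances, their mean and standard deviation, the cumulative Q95 threshold, and the Q99.9 outlier filter) can be sketched as follows; the brute-force search and the function name are illustrative, not CloudCompare's implementation:

```python
import numpy as np

def c2c_stats(compared, reference):
    """C2C-style absolute nearest-point distances of `compared` w.r.t.
    `reference`, summarised as in the deviation analysis."""
    d2 = ((compared[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    dist = np.sqrt(d2.min(axis=1))
    # Discard incongruent values above the Q99.9 percentile (outlier filter).
    dist = dist[dist <= np.percentile(dist, 99.9)]
    return {
        "mean": float(dist.mean()),
        "sd": float(dist.std()),
        "q95": float(np.percentile(dist, 95)),  # cumulative Q95 threshold
    }
```

For real models, the remaining isolated-point outliers would still need the manual, distance-capped removal described above.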

C2C-L, as mentioned before, is a local-scale estimation method that allows reduction of the effects of noise and elimination of outliers, using a modelling strategy focused around the nearest point in order to obtain a better calculation of the 'real' distance, which may be overestimated by the plain C2C comparison. C2C-L works by fitting a mathematical primitive locally to the surface of the reference cloud, using the nearest point and some of its closest neighbours. The effectiveness of the local surface model depends statistically on the cloud sampling and on how appropriate the local surface approximation is [16]. CloudCompare offers three local surface models: the least-squares plane, the 2D1/2 triangulation and the quadric (Figure 9). The local model used in this study was the 2D1/2 triangulation, in which the points are projected onto a plane to compute the Delaunay triangulation, with the original 3D points as mesh vertices, yielding a 2.5D mesh [20]. In the C2C distance computation, each point cloud is characterised by a roughness (σ1 and σ2) that combines the instrumental noise and the surface roughness of each object. The simplest cloud-to-cloud distance (C2C and the X-Y-Z scalar fields) is based on the distance to the nearest point. For surface deviation values lower than PCMS, C2C depends on the roughness and point density of the clouds being compared. The distance in the local C2C-L model is instead calculated using the closest point and its neighbouring points (n = 6), providing a better approximation of the true distance.
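A minimal sketch of a local-model distance of this kind, using the least-squares plane (one of the three CloudCompare local models; the 2D1/2 triangulation actually used in the study is analogous but meshes the neighbours) fitted to the nearest reference point and its 6 closest neighbours:

```python
import numpy as np

def c2c_local_distance(p, reference, k=6):
    """Distance from point `p` to a least-squares plane fitted to its nearest
    reference point and that point's k closest neighbours (C2C-L-style)."""
    d = np.linalg.norm(reference - p, axis=1)
    order = np.argsort(d)
    local = reference[order[:k + 1]]      # nearest point + k neighbours
    centroid = local.mean(axis=0)
    # The plane normal is the right singular vector of the smallest
    # singular value of the centred neighbourhood.
    _, _, Vt = np.linalg.svd(local - centroid)
    normal = Vt[-1]
    return float(abs((p - centroid) @ normal))  # point-to-plane distance
```

Because the distance is taken to the fitted surface rather than to the discrete nearest point, it is less inflated by sampling gaps, which is the overestimation effect noted above for plain C2C.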
On the basis of the 10 comparisons carried out for each 3D model, we defined the mean values of the surface deviation errors, calculated with C2C and C2C-L, respectively (Table 4). Then, we identified the comparisons corresponding to the mean average values: B1 1-2, B2 2-5, B3 1-2, N1 2-4, and S1 1-5.
The three-dimensional surface deviation analysis was based on verification of the scalar C2C-X, C2C-Y and C2C-Z distance values and of the values calculated at the local scale with C2C-L, referenced to PCMS = 0.060 mm (point cloud minimum spacing) and T = 0.100 mm (tolerance limit). In this case, the absolute error estimated for the C2C method has mean distance values ranging from 0.018 ÷ 0.032 mm and standard deviations in the range 0.011 ÷ 0.017 mm. The distance error at the cumulative Q95 percentile of the C2C method ranges from 0.0 ÷ 0.055 mm (B1 1-2) to 0.0 ÷ 0.076 mm (S1 1-5).
The absolute error estimated with the C2C-L method has mean distance values ranging from 0.016 ÷ 0.021 mm and standard deviations in the range 0.011 ÷ 0.018 mm. The distance error at the cumulative Q95 percentile with the C2C-L method ranges from 0.0 ÷ 0.035 mm (B1 1-2) to 0.0 ÷ 0.053 mm (S1 1-5). Errors estimated at M ± 2SD for the three C2C scalar fields range from 0.002 ± 0.038 mm (B3 1-2) to 0.003 ± 0.058 mm (S1 1-5). The results of this study show the potential of the proposed photogrammetric reconstruction workflow, with a deviation range of 1:2000 ÷ 1:3500, well within the assumed maximum tolerance range of 1:1000 ÷ 1:2000 (Tables 5 and 6, Figure 10).
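The per-axis M ± 2SD intervals quoted for the C2C scalar fields are ordinary statistics over the signed X, Y and Z components of the nearest-point displacement; a sketch with an illustrative brute-force pairing:

```python
import numpy as np

def scalar_field_bounds(compared, reference):
    """Signed per-axis (C2C-X/Y/Z style) nearest-point deviation components,
    returned as their per-axis M - 2SD and M + 2SD bounds."""
    d2 = ((compared[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    delta = compared - reference[d2.argmin(axis=1)]   # signed X, Y, Z components
    m = delta.mean(axis=0)
    sd = delta.std(axis=0)
    return m - 2 * sd, m + 2 * sd                     # per-axis M +/- 2SD bounds
```

Checking that these intervals stay within the tolerance T is then a simple comparison against the 0.100 mm limit used in this study.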

Dimensional Analysis of Archaeological Finds from 3D Models
The dimensional analysis carried out on the casting moulds allowed us to define the following parameters:
• maximum dimension in the three axis directions (X, Y, Z);
• worked surface characteristics and orientation in space;
• characteristics of the angles, arches and curves of the hollows;
• minimum and maximum thicknesses of the mould area;
• reciprocal orientation of the multiple moulds with respect to the shape of the stone;
• examples of virtual reconstruction of the possible cast artefact;
• weight, volume and apparent specific weight of the stone artefact.
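Some of the listed parameters follow directly from the model geometry; a minimal sketch (the vertex coordinates, weight and volume below are illustrative values, not the measured ones) computing the axis-aligned maximum dimensions and the apparent specific weight:

```python
import numpy as np

def mould_metrics(vertices, weight_g, volume_cm3):
    """Maximum dimensions along X, Y, Z of a 3D model (mm) and the apparent
    specific weight (g/cm^3) of the artefact."""
    extents = vertices.max(axis=0) - vertices.min(axis=0)   # X, Y, Z spans
    asw = weight_g / volume_cm3                             # apparent specific weight
    return {"dims_mm": extents, "apparent_specific_weight": asw}
```

Volume itself can be taken from the closed 3D mesh (e.g. via the divergence theorem over its triangles), while the shape-dependent parameters (hollow angles, thicknesses, mould orientation) require measurements on sections and orthoviews of the model.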
The average dimensional characteristics and some physical parameters of the studied artefacts were obtained by measurements on the respective 3D models (Tables 7 and 8). Digital elaborations of the orthoviews, isometric views and sections, with the dimensional parameters used to define the morphometric characters of the moulds, are shown below (Figures 11-17).