Article

ML Approaches for the Study of Significant Heritage Contexts: An Application on Coastal Landscapes in Sardinia

by
Marco Cappellazzo
1,*,
Giacomo Patrucco
1 and
Antonia Spanò
1,2
1
LabG4CH—Laboratory of Geomatics for Cultural Heritage, Department of Architecture and Design (DAD)—Politecnico di Torino Viale Mattioli 39, 10125 Torino, Italy
2
Polito FULL|The Future Urban Legacy Lab—OGR Tech.—Corso Castelfidardo 22, 10128 Torino, Italy
*
Author to whom correspondence should be addressed.
Heritage 2024, 7(10), 5521-5546; https://doi.org/10.3390/heritage7100261
Submission received: 9 August 2024 / Revised: 27 September 2024 / Accepted: 2 October 2024 / Published: 5 October 2024

Abstract:
Remote Sensing (RS) and Geographic Information Science (GIS) techniques are powerful tools for spatial data collection, analysis, management, and digitization within cultural heritage frameworks. Despite their capabilities, challenges remain in automating the semantic classification of data for conservation purposes. To address this, leveraging airborne Light Detection And Ranging (LiDAR) point clouds, complex spatial analyses, and automated data structuring is crucial for supporting heritage preservation and knowledge processes. In this context, the present contribution investigates the latest Artificial Intelligence (AI) technologies for automating the structuring of existing LiDAR data, focusing on the case study of the Sardinia coastlines. Moreover, the study preliminarily addresses automation challenges from the perspective of mapping historical defensive landscapes. Since historical defensive architectures and landscapes are characterized by several challenging complexities—including their association with dark periods in recent history and their chronological stratification—their digitization and preservation are highly multidisciplinary issues. This research aims to improve data structuring automation in these large heritage contexts with a multiscale approach by applying Machine Learning (ML) techniques to low-scale 3D Airborne Laser Scanning (ALS) point clouds. The study thus develops a predictive Deep Learning Model (DLM) for the semantic segmentation of sparse point clouds (<10 pts/m²), adaptable to large landscape heritage contexts and heterogeneous data scales. Additionally, a preliminary investigation into object-detection methods has been conducted to map specific fortification artifacts efficiently.

1. Introduction

Since the conservation of landscape and built heritage is a well-known global issue [1], the frameworks for documenting and digitizing Cultural Heritage (CH), though extensively discussed, remain crucial [2]. The necessities and questions of these research frameworks are thus increasingly leading to new methodological strategies for enhancing the mapping of widespread CH. In this sense, the proposed contribution applies Machine Learning (ML) methodologies to support documentation in extended heritage contexts. The conservation of historical defensive heritage is an increasingly important theme: from the 12th century onwards, territorial defense has been a primary concern [3]. Since military offensive technologies have continuously evolved over the centuries, defensive systems have undergone a parallel and constant adaptation to fortify the territory. This focus on territorial defense has produced a complex, rich, and stratified landscape whose preservation is crucial for its historical, social, and anthropogenic significance [4]. Given the memory of tragic events and the chrono-typological stratification of diverse elements, identifying these architectures is highly significant, especially as these structures are increasingly oriented toward touristic and economic exploitation.
Within the disciplinary framework, the notion of military archaeology [5] arises from the need to conduct investigations beyond merely mapping individual artifacts. Understanding the continuously evolving relationships between these heterogeneous heritage assets and their context (Figure 1) is a crucial issue [4]. For this reason, documentation activities must provide sufficiently accurate and extensive data to support complex CH analyses and investigations.
Integrating remotely sensed data into documentation workflows is increasingly essential for such widespread and complex heritage [6]. In this sense, satellite imagery, airborne Light Detection And Ranging (LiDAR), and digital photogrammetry techniques have influenced the built and landscape heritage domain. These represent an alternative or a supplement to ground-based 3D metric survey approaches where the study is conducted with a multi-scale approach [7]. Specifically, Airborne Laser Scanning (ALS) data are considered significantly valuable for many applications. In recent decades, the development of airborne LiDAR technology has provided significant advancements in echo information and coverage extension. Current technologies allow densities of up to 100 points/m², depending on the Above-Ground Level (AGL) altitude, flight speed, and scanner type [8]. In this framework, Unmanned Aerial System (UAS) LiDAR solutions can provide ultra-high-scale data reaching 800 points/m² [9].
However, it is worth mentioning that existing airborne datasets rarely reach average densities higher than 10 points per square meter when the final results are cartographical products at a standard regional scale (usually 1:10,000/1:5000). In fact, low-resolution airborne LiDAR surveys are often sufficient for generating altimetric and geometric information through Digital Elevation Models (DEMs). These models can be exploited for cartographical, geomorphological, hydraulic, and general engineering applications, as well as for creating orthoimages [10]. Although such data are generally used for standard applications, dense elevation models and LiDAR/photogrammetric primary data are particularly valuable for built and landscape heritage preservation strategies, especially at a territorial scale, which concerns large areas characterized by a high number of artifacts.
Given an overall framework, the research aims have been addressed starting from two primary considerations:
  • Is it possible to provide efficient and automatic data structuring pipelines for existing regional low-scale datasets even though they have not been acquired for heritage documentation and detection purposes?
  • Is it possible to enrich semantically unstructured 3D datasets through ML techniques in the context of widespread heritage?
Starting from these specific yet significantly complex questions, the present contribution has been developed to address 3D data structuring and semantic content enhancement related to widespread heritage conservation processes. This article explores the solutions offered by Remote Sensing (RS) and Deep Learning (DL) techniques and their application for data acquisition and structuring [11]. In this sense, the research gaps that the proposed contribution addresses are related to the provision of efficient data structuring methodologies for heritage contexts. Specifically, one of the objectives is to leverage the existing low-scale ALS datasets to train a predictive Deep Learning Model (DLM) for semantic segmentation. In this sense, the focus is to provide a data structuring method that is adaptable to large heritage contexts where chronological stratification generates a complex landscape context.
The proposed research represents an applied development of a previous study that began with analyzing and examining high-resolution DEMs and geomorphological classification methodologies for the landscape and archaeological heritage context in Como, Italy [12]. The initial study focused on the ground-level identification of anthropic shape features using ML techniques, leveraging Digital Terrain Models (DTMs) and geomorphological raster analyses obtained from a helicopter LiDAR survey. Building on this, a subsequent study [13] extended the approach to a comprehensive user-oriented ML methodology for airborne point clouds and geomorphological analysis classification. Specifically, this second study provided an exhaustive comparison between supervised DLM and unsupervised geometric filters for macro-class point cloud segmentation (point cloud density: 75 pts/m2). Moreover, the research also integrated several composite geomorphological layers for an Object-Based Image Analysis (OBIA) using Machine Learning Classifiers (MLCs).
This article aims to fill the gap of a transferable methodological approach that varies by primary data density, addressing low-density applications. Furthermore, in addition to the continuous development of tailored methods for structuring 3D data in specific heritage contexts, it is possible to advance the investigation of DL object-detection methods applied to the identification of artifacts related to historical defensive heritage.
Finally, it is essential to underline that the research investigates accessible and easy-to-use solutions via Graphical User Interfaces (GUIs), such as the Esri Python Environment and Anaconda. This approach aims to address the main issues of geomatics methodologies in the CH framework, such as mapping automation enhancement and technological accessibility for heritage domain experts.

1.1. Research Background and Related Works

1.1.1. Heritage Research Framework

In this context, the island of Sardinia (Italy) represents a significant example of territorial defense development over time. From the 12th century to the 20th century, Sardinian fortification was directed towards constructing and improving defensive structures, especially along the coastal strips. However, despite the severe risk of obliteration of these artifacts, there are no established documentation and conservation plans [3], nor consolidated methodologies for recognizing and mapping the multiple historical defensive systems. It is thus important to underline the plurality of architectural systems built and developed over the centuries. In this contribution, however, a particular focus is given to two systems: the coastal tower system commissioned by the Spanish Crown (16th–19th centuries) and the coastal defensive containment arch of World War II (1939–1943).
Concerning the coastal tower system, starting from the 16th century, the Spanish Crown was particularly committed to constructing coastal towers and bastions to protect against Saracen raids. Until the system's dissolution in the 19th century, the Spanish administration maintained an integrated defense network of fixed towers and mobile units.
Furthermore, concerning the WWII coastal containment arch, particular attention has been given to the construction of concrete pillbox bunkers and strongholds as part of a system designed for the most vulnerable coastal urban centers of Sardinia. These architectures are referred to as “difficult heritage”, since they have long been neglected due to their association with traumatic war events [14]. Moreover, it should be underlined that after losing their defensive function, some of these artifacts have been completely obliterated, while the remaining “modern ruins” await appropriate documentation, preservation, and possibly reuse processes.
These defensive heritage systems have been studied locally from historical and landscape perspectives, while a more comprehensive census is missing. Consequently, the present contribution focuses on developing a mapping automation approach from a knowledge base of domain-specific datasets related to localized censuses produced by University of Cagliari research groups [3] and by the authors [15].

1.1.2. Integration of Passive and Active Sensors for Landscape Context Mapping and Documentation

As briefly introduced, in recent decades, RS and ML technologies have significantly advanced the field of CH documentation [16,17]. The integration of these technologies allows for detailed, efficient, and non-invasive methods of recording and analyzing CH sites [11]. This section provides an overview of the state of the art in RS and ML as they apply to the CH domain. Since remotely sensed data have revolutionized data acquisition by offering both far-distance and close-range approaches, methods can be categorized into passive and active technologies, which can also be described as spaceborne, airborne, and terrestrial [18]. Accordingly, the following focuses on spaceborne multispectral imagery and airborne LiDAR scanning systems from the perspective of multi-sensor data acquisition for large heritage contexts.
In recent decades, spaceborne and airborne imagery techniques have benefited from the advent of multispectral and hyperspectral imaging [19]. In this context, several applications have been developed in the CH domain to detect land or subsurface features and monitor site conditions at a landscape scale. For example, [20] applied multispectral imagery enhancement and analysis to investigate potential buried structures in the medieval monastic settlement of San Vincenzo al Volturno, Italy. Furthermore, the potential for data fusion between point clouds and multispectral UAS photogrammetry has been demonstrated in [21]. Specifically, the integration of Mobile Mapping System (MMS) data and multispectral imagery has allowed for an extended and comprehensive mapping and understanding of military heritage.
Following this multi-sensor approach, active RS technologies, including Terrestrial Laser Scanning (TLS), MMS, and airborne LiDAR, have become essential tools in the CH domain. In fact, multi-sensor and multi-scale acquisition methodologies have been widely adopted in heritage documentation processes, as in [22]. Airborne LiDAR, in particular, has shown great potential in densely vegetated extended areas where other survey methods are less effective [18]. Such systems can penetrate forest canopies to reveal underlying archaeological or landscape features [23], making them an invaluable tool for mapping, studying, and analyzing extended CH sites [11]. For instance, in [24], ALS data from several heritage case studies were collected and analyzed, demonstrating the capabilities of airborne LiDAR in specific heritage contexts, specifically mapping unknown features and detecting landforms in dense woodland areas. Among others, Golden et al. [25] collected 331 km² of high-resolution airborne LiDAR data, demonstrating ALS point clouds to be suitable for extended heritage contexts, such as the detection of unmapped rural Central American heritage artifacts and infrastructures.
Moreover, it is worth mentioning that Unmanned Aerial Systems (UAS) equipped with LiDAR and photogrammetric sensors offer flexible and high-resolution data acquisition capabilities [26], enhancing the documentation of both large and small sites. An instance of this is represented by the work of Mazzacca et al. [27], where a UAS LiDAR system was used to detect ground-level archaeological features in a densely vegetated area using point cloud ML approaches. Furthermore, these systems have proven particularly effective in dense urban areas, as described in [9], where a VHR (Very-High resolution) point cloud was used to generate a semantic segmentation benchmark dataset for urban contexts. However, it should be underlined that one of the critical challenges in using airborne LiDAR data derives from interpolation processing to generate accurate DEMs. In [28], the spatial accuracy of raster interpolation techniques is evaluated, demonstrating how the density and context of input data affect the accuracy of interpolation. Moreover, in [29], it is underlined how interpolation techniques directly impact the utility of the data for archaeological analysis. Furthermore, the recent advancements in full-waveform LiDAR [30] and improved classification algorithms have enhanced the pulse return accuracy, facilitating better analysis of micro-topographic features [31].

1.1.3. Semantic Enrichment and Structuring: Data Fusion and ML Approaches for Point Cloud Segmentation and Object Detection

In the framework of semantic enrichment, the classification and segmentation of 3D data have been proven to be crucial aspects of the RS technique. Furthermore, it should be underlined that a multi-sensor data fusion approach leads to the generation of more comprehensive and informative datasets in CH contexts [32].
Furthermore, innovations in data processing and analysis, such as the development of multiple spectral indices [33] and machine learning algorithms [34], have improved the efficiency and effectiveness of these techniques. These advancements allow for better visualization and interpretation of the data, facilitating the identification of features crucial for 3D data structuring in the CH research framework [18].
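One widely used example of such spectral indices is the Normalized Difference Vegetation Index (NDVI). The following minimal Python sketch (illustrative only, not part of the original study) shows how it is computed from NIR and red reflectance arrays:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero on no-data pixels.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

# Dense vegetation reflects strongly in NIR and absorbs red light, so NDVI
# approaches +1; water, which absorbs NIR, trends negative.
out = ndvi(np.array([[0.45, 0.02]]), np.array([[0.05, 0.04]]))
print(out)  # ≈ [[0.8, -0.333]]
```

Such per-pixel indices are what make multispectral rasters directly usable as additional point cloud attributes once resampled to a compatible resolution.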
The application of ML and DL techniques in analyzing remotely sensed data has opened new avenues for the analysis and interpretation of CH sites. The classification and segmentation of point clouds are essential for generating semantic content and extracting meaningful information from unstructured data [35].
In this context, semantic segmentation involves dividing a point cloud into meaningful parts or objects, a critical step in automated analysis. Several methods, such as clustering and unsupervised filters [36], rely on geometric properties, but ML approaches use data-derived features [37] to improve performance accuracy. Techniques such as supervised learning, where algorithms are trained on labeled datasets, have shown great promise in accurately classifying and segmenting point clouds or mesh [38].
Moreover, DL algorithms, specifically artificial Neural Networks (NNs), are increasingly used for object detection and semantic labeling in several application fields [39]. These techniques, being applied both on 2D and 3D data [37], allow for the automated identification of particular elements, artifacts, and other features, enhancing the efficiency of the automatic identification process. However, it is essential to note that point cloud object detection is predominantly developed for applications in autonomous driving and indoor modeling [40,41]. Therefore, most of the benchmark datasets, such as KITTI [42] and H3D [43], have been designed and generated for these purposes, consisting of tens of thousands of labels, making them less directly applicable to heritage contexts without significant adaptation [44]. In this framework, deep NNs that exploit LiDAR data can be divided into three main categories, relating to the point cloud representation. Projection-based methods apply 2D Convolutional Neural Networks (CNNs) on a 2D projection of the point cloud, such as MV3D and Scanet [45,46]. Voxel-based methods [47,48] are designed to structure the disordered point clouds into a 3D voxel structure, forming multiple bird’s eye view (BEV) feature maps, where 2D CNNs are applied. Point-set methods extract point-wise features from the unstructured point clouds by neighbor cluster aggregation [49,50].
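To make the voxel/BEV category above concrete, the following hypothetical numpy sketch rasterizes an unordered point cloud into a single maximum-height BEV map, the kind of 2D feature map such detectors pass to ordinary 2D CNNs (real pipelines stack many such maps):

```python
import numpy as np

def bev_height_map(points, cell=1.0, extent=((0.0, 8.0), (0.0, 8.0))):
    """Rasterize an unordered point cloud into a bird's-eye-view (BEV)
    maximum-height grid over a fixed extent with square cells."""
    (x0, x1), (y0, y1) = extent
    nx, ny = int((x1 - x0) / cell), int((y1 - y0) / cell)
    grid = np.full((ny, nx), -np.inf)
    ix = ((points[:, 0] - x0) / cell).astype(int)
    iy = ((points[:, 1] - y0) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    for i, j, z in zip(ix[keep], iy[keep], points[keep][:, 2]):
        grid[j, i] = max(grid[j, i], z)   # keep the highest return per cell
    grid[np.isinf(grid)] = 0.0            # mark empty cells with 0
    return grid

pts = np.array([[0.5, 0.5, 2.0], [0.7, 0.4, 3.5], [4.2, 4.8, 1.0]])
bev = bev_height_map(pts)
print(bev.shape, bev[0, 0], bev[4, 4])  # (8, 8) 3.5 1.0
```

The appeal of this representation is that the disordered 3D data become a regular image, so mature 2D CNN architectures apply without modification.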
Finally, the integration of RS and ML segmentation approaches thus leads to significant advancements in processing automation for large CH contexts, providing powerful tools for preserving and understanding heritage. However, concerning point cloud deep learning object-detection methodologies for heritage mapping, it is worth mentioning that these represent an unexplored domain. Furthermore, the correlation of geometries in spatial unstructured data with semantic ontologies plays a significant role in structuring 3D data in heritage contexts, as highlighted in [51].
In this context, while heritage-tailored mapping and digitization methodologies have been widely discussed, the challenges related to adaptable ML approaches for the semantic segmentation of unstructured 3D data remain open and not yet consolidated. For this reason, the present research focuses on the training and application of a transferable DL predictive model for extended landscape contexts.
This manuscript is organized as follows: in Section 2, Materials and Methods, Section 2.1 describes the case studies and the primary open data from the Sardinia Region, while Section 2.2.1 and Section 2.2.2 present the methodological approach to ML classification and object-detection pipelines. Section 3 describes and discusses the issues concerning data preparation for the trained DLM and the results of the DLM itself. Conclusions and future perspectives follow in Section 4.

2. Materials and Methods

The present contribution thus explores data fusion techniques and innovative ML approaches for the semantic structuring of existing low-scale point cloud data. The semantic structuring of 3D data should therefore be understood as part of an integrated methodology addressing the mapping and detection of historical defensive artifacts in extended landscape heritage contexts. Although Section 1.1.2 highlights that several research studies have used high spatial-resolution UAS datasets [9], the challenge remains the employment of already available data acquired for more general purposes.

2.1. Case Studies and Airborne LiDAR Sardinia Datasets

This research proposes a methodologically transferable workflow for Sardinia defensive heritage mapping by considering two distinct case studies (Figure 2). One focuses on a portion of the southern main city (Cagliari), while the other is located in a northwest coastal urban center with a strong tourism vocation (Alghero). The two case studies primarily differ in the chrono-typological heterogeneity of military artifacts and the scale of datasets that have been exploited. In both case studies, the primary data that have been identified for the application of the ML methodologies pertains to a regional dataset derived from two 2008 airborne LiDAR surveys. The acquisition campaigns were planned and executed with different characteristics and sensors, resulting in point clouds with different superficial densities.
The application of two distinct case studies has thus allowed the evaluation of the methodology from multi-scale and multi-sensor perspectives.
Case study 1 (Figure 3) is related to the urban area of Cagliari Municipality in the southern part of the region. The density of the regional data is 2 points per square meter, and a balanced chrono-typological heterogeneity characterizes the historical defensive artifacts present. Different object typologies are thus distinguishable, ranging from towers and bastions to fortresses, bunkers, and batteries.
The map in Figure 4 represents the area concerning case study 2 in the Alghero municipality. In this case, the ALS point cloud density is 10 points/m2, while the strong presence of WWII bunker artifacts characterizes the landscape.
Given the vast dimension of the research areas and the widespread diverse typologies of fortifications, the necessity was to acquire sufficient data that would be useful to adequately document the territorial context of military landscapes [4].
After a brief analysis of the grid DTM products in the catalog of the Sardinia Geoportal website (Sardinia Geoportal—Autonomous Region of Sardinia, 2024), the datasets of the research areas have been made available by the Cartographic Information Sector of Sardinia region. The ALS point cloud pertains to two surveys (Table 1) conducted in 2008.
The total extension of the acquisitions (Figure 5) covers almost the entire waterline length. Specifically, Survey 1* represents most of the acquisitions and was planned and executed at 1400 m AGL and a ground speed of 140 knots (≈260 km/h). This resulted in a point cloud dataset with an average density of 2 points per square meter, which constitutes the primary data for case study 1. In contrast, Survey 2 was planned to cover only the Alghero municipality area (case study 2). In this case, the acquisition was conducted at a lower AGL (500 m) and a 110-knot ground speed (≈200 km/h), resulting in a density of approximately 10 pts/m². Besides the flight characteristics, the available information shows that the sensors and operating conditions were significantly different.
In fact, the sensor used in Survey 1* was a multi-echo [52] laser scanner head operating at a 125 kHz laser pulse repetition rate (LPRR). The point cloud dataset strips (Figure 5) were obtained from the regional Cartographic Information Sector in .las file format, storing sensor information such as intensity and (incomplete) echo returns.
Instead, the sensor used in Survey 2 was a full-waveform topographic LiDAR [52] laser head operating at a 240 kHz PRR. Despite the technological superiority of the second sensor, these point cloud strips were delivered in .xyz file format, which does not store significant information regarding the multiple echo returns.
For this specific reason, the deep learning methodologies described in the following section were developed exploiting only the geometrical information of the point clouds.
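As a back-of-the-envelope check (not from the paper), the reported flight parameters are consistent with the stated densities: for a linear-scanning airborne LiDAR, average density is roughly the pulse rate divided by ground speed times swath width. The swath widths below assume a hypothetical full scan angle of about 35°, which is an assumption, not a documented survey parameter:

```python
import math

def swath_width(agl_m: float, fov_deg: float = 35.0) -> float:
    """Swath on flat ground for a given AGL and full scan angle (assumed)."""
    return 2.0 * agl_m * math.tan(math.radians(fov_deg / 2.0))

def avg_density(prr_hz: float, speed_ms: float, swath_m: float) -> float:
    """Average points per square metre for a single pass."""
    return prr_hz / (speed_ms * swath_m)

# Survey 1*: 125 kHz PRR, ≈260 km/h (≈72 m/s), 1400 m AGL
s1 = avg_density(125_000, 260 / 3.6, swath_width(1400))
# Survey 2: 240 kHz PRR, ≈200 km/h (≈56 m/s), 500 m AGL
s2 = avg_density(240_000, 200 / 3.6, swath_width(500))
print(f"{s1:.1f} pts/m², {s2:.1f} pts/m²")  # roughly 2 and 14 pts/m²
```

The estimates land on the same order as the densities reported for the two surveys, which is all such a first-order model can be expected to show.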

2.2. Methodological Approaches for Semi-Automatic Data Structuring

In order to introduce the integrated methodology, it is worth underlining that exploiting the regional low-scale point cloud datasets (<10 pts/m²) from the 2008 surveys could be crucial for semantic segmentation and object-detection methodologies.
In fact, the application aims to verify whether leveraging the existing regional ALS dataset as primary data for ML analysis of the defensive landscape could allow a robust and comprehensive analysis of fortification systems without the necessity of acquiring new data. The following sections thus describe the primary datasets and the methodological pipeline developed to answer the research questions and aims. The proposed methodological schema consists of 4 blocks (Figure 6): Block 1 lists the two case studies addressed by exploiting the existing regional data presented in Section 2.1. Block 2 covers the integrated point cloud data structuring methodology: Block 2a concerns the application of unsupervised geometrical filters; given the lack of significant waveform information in the primary data, Block 2b provides a satellite imagery data fusion approach to integrate multispectral bands within the point cloud; finally, Block 2c describes the generation of reference data and the training stage of the macro-class semantic segmentation DLM. While describing the methodologies used for mapping and analyzing defensive heritage systems and architectures, it should be clear that the proposed integrated method is conceived as a multi-scale approach. In fact, Block 3 consists of the preliminary investigation and training data generation for the DL object-detection strategies aimed at identifying specific point cloud objects. The application of DL methodologies has therefore been developed at a lower scale (semantic segmentation) for land cover classification, combined with a higher-scale approach for the possible automatic mapping of heterogeneous defensive architectures. In this last sense, a class schema has been prepared for the generation of training labels for the object-detection DL approach (Table 2).
While the proposed class schema has been deployed as a macro-categorization for a holistic application, the aim of the present contribution is to investigate the requirements and adaptations for an object-detection approach. In fact, in order to develop a methodology for the entire class group, a preliminary test was conducted on the Bunker class. Specifically, this class embeds pillbox fortifications that can be easily recognized from sufficiently dense point cloud datasets. Moreover, the bunker class is highly represented, particularly in Area 2. Although data augmentation strategies are needed, most elements are original, which enhances the generalization capabilities of a potential detection model.
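As a minimal illustration of one such augmentation strategy (a generic sketch, not the authors' actual pipeline), each labeled object can be rotated about the vertical axis around its centroid to produce additional training examples:

```python
import numpy as np

def augment_rotation_z(points: np.ndarray, angle_rad: float) -> np.ndarray:
    """Rotate an object's point cloud about the vertical (z) axis around its
    centroid: a typical augmentation when labeled 3D examples are scarce."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    centroid = points.mean(axis=0)
    return (points - centroid) @ rot.T + centroid

# Two points of a toy "bunker"; rotation preserves the centroid and all
# distances from it, so object geometry is unchanged while orientation varies.
bunker = np.array([[0.0, 0.0, 1.0], [2.0, 0.0, 1.0]])
rotated = augment_rotation_z(bunker, np.pi / 2)
```

Rotations (and, similarly, small jitter or mirroring) multiply the effective number of training labels without introducing synthetic geometry, which matters when the detectable class is as small as a regional bunker census.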

2.2.1. Data Preparation for Semantic Segmentation DLM Training

In order to document the defensive landscape, the adequate structuring of ALS point cloud datasets was essential. This was performed by exploiting unsupervised filtering algorithms and supervised learning approaches. In such a complex research context, it was crucial to define a semantic classification scheme that could equally meet both the mapping and data structuring necessities dictated by the objectives and the constraints imposed by the data scales. For this reason, a 5-class scheme is proposed (Table 3).
The classification scheme consists of 5 classes, which, given the scale and thus the data density, are based on the .las specifications of the American Society for Photogrammetry and Remote Sensing (ASPRS) standard [53]. In the framework of the present research, it should be underlined that since the application of ML analysis is directed toward the comprehension of fortification architectures, particular attention is given to the building class that embeds most of the studied artifacts.
The aim of the class scheme is to provide an efficient macro-class segmentation of unstructured point clouds, distinguishing between terrain, trees, buildings, and water. Moreover, the Unclassified class is designed to host object classes that are not identifiable due to the scarce density. In fact, points pertaining to low vegetation, urban furniture, and cars are not clearly distinguishable from one another. Therefore, data density remains a crucial factor when it comes to providing a highly specific classification scheme [54].
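The correspondence between such a macro-class scheme and the standard ASPRS LAS classification codes can be sketched as follows. The codes themselves come from the ASPRS LAS specification; the class names echo the scheme described above but are otherwise illustrative assumptions, not a verbatim copy of Table 3:

```python
# Indicative mapping of the 5-class scheme onto standard ASPRS LAS codes.
ASPRS_CODES = {
    "Unclassified": 1,     # low vegetation, urban furniture, cars, ...
    "Ground": 2,           # terrain
    "High Vegetation": 5,  # trees
    "Building": 6,         # embeds most of the studied defensive artifacts
    "Water": 9,
}

def to_las_code(label: str) -> int:
    """Translate a semantic label into its ASPRS LAS classification code."""
    return ASPRS_CODES[label]

print(to_las_code("Building"))  # 6
```

Writing labels back into the standard .las classification field keeps the structured output interoperable with common GIS and point cloud tools.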
Once the semantic classification scheme was defined, the first step in structuring the data was the employment of unsupervised filtering approaches. First, a preprocessing phase was necessary for the entire dataset, from which data blocks organized into 2000 m by 2000 m tiles were generated. After organizing the data for both areas, it was decided to generate a reference ground-truth dataset using the point clouds from case study 1 (Cagliari). In fact, using low-density data (2 pts/m²) would allow training a classification model with significantly representative data while avoiding GPU memory overflow errors [55]. Preliminary analyses of a part of the ground truth were thus conducted, similarly to previous research work. Geometric features derived from the covariance eigenvalues (normal—λ3, curvature, sphericity, number of neighbors, surface density, surface variation) were computed with search radii of 1, 2.5, 5, and 10 m. These indices were used for an unsupervised segmentation of the (non-ground) points remaining after applying the Simple Morphological Filter (SMRF) algorithm [56]. Moreover, while evaluating different filters, the Cloth Simulation Filter (CSF) [57], despite its accuracy capabilities, was considered less effective in this context.
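A minimal numpy sketch of how such covariance-eigenvalue descriptors can be computed for a single point neighborhood follows. The radius search and the SMRF ground filter itself are omitted, and the descriptor definitions follow common geomatics usage, so they may differ in detail from the exact formulas used in the study:

```python
import numpy as np

def eigen_features(neighbors: np.ndarray):
    """Covariance eigenvalues (λ1 ≥ λ2 ≥ λ3) of a point neighborhood and two
    derived descriptors often used for unsupervised segmentation:
    sphericity (λ3/λ1) and surface variation (λ3/(λ1+λ2+λ3))."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending order
    sphericity = lam[2] / lam[0]
    surface_variation = lam[2] / lam.sum()
    return lam, sphericity, surface_variation

# A nearly planar patch (e.g. ground or a roof): λ3 ≈ 0, so both descriptors
# stay close to zero; a volumetric blob (vegetation) would push them up.
rng = np.random.default_rng(0)
patch = rng.uniform(0, 1, (200, 3))
patch[:, 2] *= 0.01                    # flatten the vertical extent
lam, sph, sv = eigen_features(patch)
print(sph < 0.05, sv < 0.05)  # True True
```

Computing these per-point descriptors at several radii, as done in the study, captures both fine detail and broader surface behavior in a single feature vector.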
Figure 6. Methodological schema. The workflow is developed for heterogeneous landscape heritage frameworks leveraging multiple existing airborne LiDAR datasets (1). The case studies that have been selected thus not only pertain to distinguished heritage contexts but are characterized by different acquisition scales and densities. The second stage of the methodology consists of applying unsupervised (2a) and data fusion strategies (2b) to prepare reference data for DL classification model training (2c). Finally, a preliminary investigation of object detection strategies (3) addresses the system mapping and artifact recognition challenges of historical defensive heritage using point cloud deep learning approaches [56].
Multi-sensor data integration techniques provide rich datasets that capture a wide range of information about the condition and composition of the territory and landscape [58]. They have risen in popularity in several application fields, such as cartography, forestry, and agricultural sciences [59,60]. Moreover, providing valuable information about material composition has proven essential for the study of heritage sites [61]. Since spaceborne imagery has proven efficient in several water and vegetation resource studies, this contribution proposes a data fusion approach to enhance the automation of the point cloud labeling process. Specifically, an integration of ALS point cloud data with the near-infrared (NIR) band of the Sentinel-2 missions [62] was developed to support manual labeling activities in this coastal landscape heritage context. It is worth mentioning that, due to the resolution difference between the ALS point cloud and the spaceborne imagery, an upsampling strategy was essential to match the spatial resolution of the Sentinel-2 bands (10 m) with the average point spacing of the ALS point cloud data (1 m). To this end, a cubic resampling algorithm was selected to reach the point cloud spatial resolution while avoiding the sharp interpolation produced by the nearest neighbor algorithm [63]. However, although spatial resampling algorithms are a consolidated approach, they do not provide an efficient solution for radiometric resolution enhancement. The integration of the point cloud data with the NIR band was therefore exploited for the segmentation of the water class. To segment the water class points, two different approaches were tested, as shown in Figure 7.
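The resampling step can be sketched as follows, assuming the NIR band is available as a numpy array; a zoom factor of 10 brings the 10 m Sentinel-2 grid to the 1 m average point spacing (illustrative code, not the processing chain actually used in the study):

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_band(band, src_res=10.0, dst_res=1.0, order=3):
    """Resample a raster band from src_res to dst_res metres per pixel.

    order=3 selects cubic spline interpolation, which avoids the blocky
    output produced by nearest-neighbor resampling (order=0).
    """
    return zoom(band, src_res / dst_res, order=order)
```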
The first approach consisted of generating a 2D vector mask to be used in CloudCompare for a semi-automatic areal water segmentation. The normalized difference water index (NDWI) (1) [64] was calculated, from which a 2D polyline vector was generated.
\( \mathrm{NDWI} = \dfrac{\mathrm{Green}\,(b03) - \mathrm{NIR}\,(b08)}{\mathrm{Green}\,(b03) + \mathrm{NIR}\,(b08)} \)
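The NDWI computation and a possible water masking rule can be sketched as follows (the zero threshold is an assumption for illustration, not the value adopted in the study):

```python
import numpy as np

def ndwi(green, nir, eps=1e-12):
    """Normalized Difference Water Index per Equation (1);
    eps guards against division by zero on empty pixels."""
    return (green - nir) / (green + nir + eps)

def water_mask(green, nir, threshold=0.0):
    """Water pixels have higher green than NIR reflectance, hence NDWI > 0."""
    return ndwi(green, nir) > threshold
```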
Additionally, a second approach was tested, directly integrating the ALS data with the near-infrared information as a point cloud scalar value. In this case, the NIR band was projected onto a 3D mesh triangulated from the DSM, and a point cloud scalar field was generated with a majority-voting nearest neighbor (6 points) algorithm. However, despite the successful integration of NIR semantic information, the enriched ALS point clouds were not considered sufficient for the identification or discrimination of other elements (e.g., vegetation and buildings), which are not coherent with the original spatial resolution of the Sentinel-2 data (10 m). The results of the geometric filtering and data fusion approaches are further discussed in Section 3.
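The scalar field transfer of this second approach can be sketched as follows, assuming the NIR information has been reduced to a binary water/non-water label on the mesh vertices (a hypothetical helper, simplified to a binary majority vote over the 6 nearest vertices):

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_label(points_xyz, mesh_vertices, vertex_labels, k=6):
    """Attach a categorical scalar (e.g. a binary water label) to each ALS
    point by majority vote over the k nearest mesh vertices."""
    tree = cKDTree(mesh_vertices)
    _, idx = tree.query(points_xyz, k=k)
    votes = vertex_labels[idx]                  # shape (n_points, k)
    # majority vote for a binary label
    return (votes.mean(axis=1) >= 0.5).astype(int)
```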
Therefore, this approach led to an efficient automatic segmentation of water class points. Finally, after a rapid manual correction of the data classification resulting from the unsupervised approaches, valid ground-truth reference data were selected to train a DLM for semantic segmentation (Table 4).
The training dataset was selected from the ALS data of case study 1 and consisted of training and validation datasets (Figure 8) balanced at 80%–20%. Furthermore, particular attention was given to the selection of data to ensure both the representativeness of the urban and landscape morphology of the area and a homogeneous distribution of classes (Table 4) between the two parts of the dataset.
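A per-class split of this kind can be sketched as follows (a generic stratified 80/20 split, not the authors' exact selection procedure, which also accounted for urban and landscape morphology):

```python
import numpy as np

def stratified_split(labels, train_frac=0.8, seed=0):
    """Split point indices per class so that class frequencies remain
    homogeneous between the training and validation parts."""
    rng = np.random.default_rng(seed)
    train, val = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        cut = int(len(idx) * train_frac)
        train.append(idx[:cut])
        val.append(idx[cut:])
    return np.concatenate(train), np.concatenate(val)
```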
Moreover, in order to evaluate the predictive model performance, four test datasets were selected (Table 5) from the ALS data of case study 2 (Figure 9). The primary aim was to evaluate the performance of the DL classification model with data different from the training area. Furthermore, assessing the model effectiveness and accuracy was useful from a multiscale perspective. The 100 m by 100 m areas were selected to diverge as much as possible from the class frequency of the training data, as shown in Table 4 and Table 5. For example, area C is characterized by many points related to the building class (80%), while area D has a more balanced frequency but with more points related to vegetation (36%).
Finally, a predictive segmentation DLM was trained using the training dataset from case study 1. The training process employed the RandLA-Net [65] neural network architecture, a point-wise multi-layer perceptron architecture designed to efficiently handle large-scale datasets by incorporating a random sampling phase at each network node. A logarithmic cross-entropy loss function was used to validate the model. The evaluation of the predictive DLM is further discussed in Section 3.
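The random sampling stage that makes RandLA-Net efficient on large point clouds can be illustrated with a minimal sketch (the sampling ratio is arbitrary here): unlike farthest-point sampling, selection is O(1) per point.

```python
import numpy as np

def random_downsample(points, ratio=0.25, seed=0):
    """Random point sampling, as used between RandLA-Net encoder layers:
    keep a random subset of the input points at each downsampling step."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(len(points) * ratio))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]
```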

2.2.2. Historical Defensive Heritage Artifact Label Generation and NN Investigations

Since one of the aims was the development and application of an integrated methodology for historical defensive landscapes, it was necessary to evaluate the priorities, challenges, and issues related to the identification of fortifications. Due to the high number of fortification sites and buildings and the limited accessibility of their locations, a preliminary investigation of automatic object-detection approaches exploiting the semantically segmented point clouds was carried out.
In this research framework, this part of the methodology therefore explored solutions for the automatic identification of fortification classes from existing airborne LiDAR point clouds. It should be underlined, however, that input data density, training data preparation, and predictive model training require several specific capabilities.
As mentioned in Section 1.1.3, deep 3D object-detection NNs for LiDAR data are increasingly being studied due to LiDAR spatial capture capabilities. In contrast to 2D detection based on imagery data, the spatial comprehension of an environment provided by active LiDAR techniques is a great benefit for recognizing 3D objects. Yet several challenges remain in 3D detection because of the sparsity of LiDAR point clouds [66].
Moreover, the preparation of a reference dataset requires analysis of data characteristics, scene typology, and the number of object classes and annotations. In Table 6, some dataset examples are presented.
In the proposed contribution, the authors addressed two essential preliminary technical questions: input point cloud data and reference data preparation. In order to test the label generation process, the research focused on bunker architectures of the WWII coastal containment arch system that are widespread in Alghero’s territory (case study 2).
Moreover, the ALS data from case study 2 were considered dense enough (10 pts/m2) to recognize bunker objects, which were easily distinguishable from other elements (e.g., bushes). In this context, the case study 2 area is characterized by a relatively high number of bunker artifacts, as reported in Table 2. Comparing the number of such objects with the number of annotations in the object-detection benchmark datasets, it is evident that the data augmentation challenge must be addressed. This issue is deepened in Section 4.
The generation of reference data was completed in a GIS environment, starting from the point geometries of different census domain datasets [3,15]. The label generation workflow consisted of extruding square buffers around the object centroids between two altimetric levels (Figure 10). The base height of each 3D bounding box was calculated from the average reference DTM height. The extrusion value was then added to the base height, generating the 3D bounding box for the training labels, stored in ASCII format.
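The extrusion logic can be sketched as follows (half_size and extrusion are illustrative values, not those used for the bunker labels):

```python
def bunker_bbox(cx, cy, dtm_height, half_size=3.0, extrusion=2.5):
    """3D bounding-box label from a census centroid: a square buffer around
    (cx, cy) extruded between two altimetric levels above the DTM.

    Returns (xmin, ymin, zmin, xmax, ymax, zmax).
    """
    zmin = dtm_height               # base height from the average DTM height
    zmax = dtm_height + extrusion   # extrusion added to the base height
    return (cx - half_size, cy - half_size, zmin,
            cx + half_size, cy + half_size, zmax)
```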
Section 1.1.3 presents different types of 3D detector NNs, each with precise characteristics and limitations. The projection and voxel methodologies result in a loss of LiDAR data information. However, the application of 2D CNNs in voxel-based methods still allows local features to be learned and extracted from the 3D representation [66]. Point-based approaches, by contrast, fully exploit the 3D geometry without significant information loss [37], despite some limitations in point-wise feature generation.
In the framework of the present contribution, a preliminary training experiment was conducted exploiting the SECOND (Sparsely Embedded CONvolutional Detection) [47] detector with ESRI arcpy 2.9 libraries.

3. Results and Discussion

Although the predictive model results represent the final stage of the methodology, it is equally important to discuss the outcomes achieved during the data preparation stage. As introduced in Section 2.1, the echo information embedded in the airborne LiDAR data was either absent or incomplete. To address this, the present research analyzed and compared return information, reflectance intensity, and newly calculated fields (see Figure 11). From both visual and statistical inspections, the echo return information (Figure 11a) and intensity (Figure 11b) were found to be incomplete and noisy, making them unsuitable for accurately distinguishing between classes. Moreover, it is possible to observe that the presented NIR data fusion approach (Figure 11c) is particularly suitable for segmentation, where water bodies satisfy Equation (2).
\( \mathrm{NIR\ reflectance} \leq 0.09 \)
Furthermore, the λ3 (normals) eigenvalue (Figure 11d) is considered more suitable for a preliminary exclusion of the building class than the LiDAR intensity and return information. The threshold value is, in this case, given by the statistical distribution analysis and is relative to the topology of this specific LiDAR dataset (3).
\( \lambda_3 \leq 0.1 \)
In this specific case study, the integration of the NIR band and the employment of a reduced number of geometric features for the unsupervised filter allowed effective results for the generation of a reference dataset.
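The two threshold rules can be combined into a simple unsupervised filter sketch (assuming water corresponds to low NIR reflectance and planar candidate surfaces to a low λ3 value; the thresholds are those reported above):

```python
import numpy as np

def unsupervised_masks(nir_reflectance, lambda3):
    """Threshold rules of Equations (2) and (3): water points from the fused
    NIR scalar field, and planar points (low variance along the normal
    direction) from the lambda3 eigenvalue."""
    water = nir_reflectance <= 0.09
    planar = lambda3 <= 0.1
    return water, planar
```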
As described in Section 2.2.1, a DL predictive model for the semantic segmentation of low-scale ALS point clouds was trained; the training completed in 46 h after 38 epochs. The model was trained and validated with reference data pertaining to case study 1 (Figure 8). As shown in Table 7, the model achieved excellent results in terms of Accuracy and Precision, while slightly lower and improvable results were observed for the Recall and F1 score metrics.
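The reported metrics can be computed per class directly from the predicted point labels; a minimal sketch (not the evaluation code actually used in the study):

```python
import numpy as np

def segmentation_metrics(y_true, y_pred):
    """Per-class (precision, recall, F1) from predicted point labels."""
    out = {}
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))   # true positives
        fp = np.sum((y_pred == c) & (y_true != c))   # false positives
        fn = np.sum((y_pred != c) & (y_true == c))   # false negatives
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out[int(c)] = (prec, rec, f1)
    return out
```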
The lower overall metrics results are attributable to classes 0 and 9, which are under-represented in the training dataset. In fact, the segmentation results on the validation dataset for the most represented objects (ground, high vegetation, and building) are slightly better than those for the remaining classes, as observed in Figure 12.
Nevertheless, despite the overall good results, it is possible to observe some cases of the following:
  • Overprediction of building class over flat ground areas;
  • Underprediction of building class over oblique occluded surfaces;
  • Overprediction of unclassified class on small wall objects;
  • Underprediction of the unclassified class (small evergreen shrubs difficult to distinguish from the ground);
  • Underprediction of water class.
Since the focus of the application of DL semantic segmentation methodologies is on historical defensive contexts, it should be underlined that the recognition of specific artifacts is considered a challenging issue. The accuracy evaluation indicated that, while the model is highly precise, there is a trade-off in its ability to correctly classify relevant structures, particularly those that are partially eroded or obscured. The identification of fortification walls and bastions represents a real limitation where reference training data are lacking.
In order to accurately analyze the model behavior, four supplementary data blocks from case study 2 (Figure 9) were selected to test the DLM performance. The aim was to understand the model generalization capabilities on higher-scale ALS point cloud datasets describing areas with different urban and territorial morphologies.
The results of applying the semantic segmentation DLM to the test dataset areas (1, 2, 3, 4) are analyzed and summarized in Table 8. The metric statistics successfully demonstrate the generalization capabilities of the predictive model, although further improvements are still possible.
Moreover, Figure 13 visually confirms the excellent generalization capability of the trained model in case study 2. In the test areas, cases of under-prediction and over-prediction similar to those in the validation dataset areas are observable.
In this scenario, deep learning approaches for point cloud semantic segmentation are thus confirmed as a valuable tool for the heritage context, although the accurate identification of specific elements (e.g., bastions) should be adequately addressed in future work.
Concerning the object-detection approach for historical defensive architectures, the effectiveness of the semi-automatic labeling task was demonstrated: 66 3D bounding boxes were generated (Figure 14).
The object-detection model was trained employing a cross-entropy loss function for the training and validation datasets (4) [70]:
\( \mathrm{Loss} = -\dfrac{1}{N}\displaystyle\sum_{i=1}^{N}\left[\, y_{true} \cdot \log(p_i) + (1 - y_{true}) \cdot \log(1 - p_i) \,\right] \)
where N is the total number of observations, \( y_{true} \) is the binary indicator of prediction correctness for observation i, and \( p_i \) is the probability of the observation being in the correct class. Since a 3D detector should be evaluated considering both classification and localization accuracy, the performance of the model was analyzed with the mean Average Precision (mAP) metric, as described in [66].
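Equation (4) can be implemented directly; a minimal numpy sketch, with clipping to avoid log(0):

```python
import numpy as np

def bce_loss(y_true, p, eps=1e-12):
    """Binary cross-entropy of Equation (4); eps clipping avoids log(0)."""
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))
```

The loss is near zero for confident correct predictions and grows without bound as the predicted probability of the true outcome approaches zero, which is why a diverging validation curve (as in Figure 15) signals that the model is not learning the task.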
However, the training graph in Figure 15 shows that the calculation of the validation loss function could not be concluded throughout the batch processing, demonstrating the difficulty of the model in effectively learning the given task. Moreover, the localization and classification accuracies given by the mAP metric were not computable. This result was nonetheless predictable, because the number of bunker architectures, and thus of reference training data, was not sufficient to train and validate any object-detection model.
Several considerations regarding object-detection future perspectives will be discussed in the Conclusions and Future Perspectives section.

4. Conclusions and Future Perspectives

In the framework of heritage preservation strategies, the integration of various proposed methodologies provides valuable lessons and opportunities. The proposed study underlines the importance of combining diverse approaches, as well as critically evaluating and analyzing the method from a multi-scale perspective. In fact, such insights significantly contribute to advancements in CH site conservation through remote sensing data analysis. Moreover, the application presented in this contribution also addressed the topic of user-oriented approaches, demonstrating how graphical user interface software could represent a valuable solution for heritage domain experts.
Regarding airborne LiDAR technology, previous research has underlined the suitability of full-waveform LiDAR sensors for archaeological and heritage investigations and for semantic classification [13,27,30]. The present contribution also demonstrates the suitability of multi-echo sensors in heritage contexts. In fact, despite recent advancements in remote sensing technologies [9,26], lower-density ALS data [71] from less recent sensors have proven to be still effective for semantic segmentation purposes.
Furthermore, this research highlighted that integrating point clouds with satellite multispectral imagery is a valuable strategy to overcome the lack of information in point clouds consisting only of coordinates. Still, it should be noted that enhancing the spatial and radiometric resolution of free, open-source satellite data [62] remains a crucial issue, as this resolution needs to match that of the point cloud data. In this research framework, the employment of high-resolution satellite datasets can be considered, although DL super-resolution approaches could represent a more innovative and valid further implementation, e.g., [72].
Since the research aimed to test DL approaches for data structuring, it should be noted that NNs are widely explored but not yet fully established tools for heritage domain applications [17,35]. The proposed contribution thus explored ML methodologies, demonstrating how predictive segmentation DLMs are a valuable approach for the semantic structuring of ALS point cloud data in a landscape heritage context. Moreover, although the DLM is suitable for extended applications, the model was tested in areas representative of the morphological and urban conformation of the case studies, assessing its accuracy in this heritage context.
However, low-density issues have made the disambiguation of ground-level anthropic features challenging; this can be addressed with different classification strategies. In this sense, previously developed OBIA ML approaches on DTMs can be considered [13]. Moreover, starting from the application of DL pipelines, the generation of semantic ontologies by correlating point cloud geometries should also be addressed, as demonstrated in [51].
Since it is crucial to establish a minimum threshold for data density and accuracy, several issues emerged when investigating point cloud object-detection approaches. In this research context, the low-scale ALS point clouds of case study 1 (2 pts/m2) were deemed unsuitable because of the difficulty of recognizing specific elements in them. Moreover, owing to their original defensive functions and designed camouflage characteristics, most of these architectural objects are not easily detectable using spaceborne or airborne imagery products. For this reason, further development of a DL object-detection methodology exploiting existing LiDAR point clouds is considered a crucial future challenge.
Yet, as already stated in Section 1.1.3, most DL methodologies related to point cloud processing pertain to robotics, autonomous driving, and indoor modeling [37]. Therefore, in CH and landscape contexts, it is hard to address the most relevant issues of these approaches without significant adaptations.
Specifically, the number of annotations required for training NN detectors is considerably high, with a minimum of 70k bounding boxes [67], as shown in Table 6. The limitations of the first training experiment (Section 2.2.2) are particularly evident, as shown by the loss functions and the non-computable metrics (Figure 15). This predictable result, due to the number of available annotations (Table 2), nonetheless represents a starting point for overcoming the challenges inherent in 3D detection. In this sense, preparing algorithms that generate synthetic point cloud data and 3D bounding box labels by copying and transforming (e.g., translation, rotation) the original data is considered a valuable future development. It is thus crucial to integrate data augmentation strategies [44,73], which will be investigated in future research.
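A copy-and-transform augmentation of this kind can be sketched as follows: a rotation around the vertical axis plus a translation, applied jointly to the point cloud crop and its 3D box center (a minimal illustration, not a full augmentation pipeline):

```python
import numpy as np

def augment_sample(points, box_center, angle, shift):
    """Synthesize a new training sample by rotating a point cloud crop and
    its bounding-box center around the z axis, then translating both."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T + shift, box_center @ rot.T + shift
```

Applying several random (angle, shift) pairs to each of the 66 labeled bunkers would multiply the available annotations without new manual labeling.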

Author Contributions

Conceptualization, M.C.; methodology, M.C., G.P. and A.S.; software, M.C.; validation, M.C. and G.P.; formal analysis, M.C.; investigation, M.C.; resources, M.C., G.P. and A.S.; data curation, M.C. and G.P.; writing—original draft preparation, M.C.; writing—review and editing, M.C., G.P. and A.S.; visualization, M.C. and G.P.; supervision, G.P. and A.S.; project administration, A.S.; funding acquisition, A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding authors.

Acknowledgments

Thanks should go to Donatella Rita Fiorino, Caterina Giannattasio, and the whole research group of the University of Cagliari for the granted domain datasets. Thanks to the Sardegna Region for the data availability and, specifically, to Manuela Matta and all the staff members of the Cartographic Information Sector.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Stubbs, J.H. Time Honored: A Global View of Architectural Conservation; Wiley: Hoboken, NJ, USA, 2009; ISBN 978-0-470-26049-4.
  2. Moullou, D.; Vital, R.; Sylaiou, S.; Ragia, L. Digital Tools for Data Acquisition and Heritage Management in Archaeology and Their Impact on Archaeological Practices. Heritage 2024, 7, 107–121.
  3. Fiorino, D.R. Sinergies: Interinstitutional Experiences for the Rehabilitation of Military Areas; UNICApress: Cagliari, Italy, 2021; Volume 1, ISBN 9788833120485.
  4. Fiorino, D.R. Military Landscapes: A Future for Military Heritage. In Proceedings of the International Conference, La Maddalena, Italy, 21–24 June 2017; Fiorino, D.R., Ed.; Skirà: Milano, Italy, 2017.
  5. Virilio, P. Bunker Archaeology, 2nd ed.; Princeton Architectural Press: New York, NY, USA, 1994; ISBN 9781568980157.
  6. Bassier, M.; Vincke, S.; Hernandez, R.d.L.; Vergauwen, M. An Overview of Innovative Heritage Deliverables Based on Remote Sensing Techniques. Remote Sens. 2018, 10, 1607.
  7. Rabbia, A.; Sammartano, G.; Spanò, A. Fostering Etruscan Heritage with Effective Integration of UAV, TLS and SLAM-Based Methods. In Proceedings of the 2020 IMEKO TC-4 International Conference on Metrology for Archaeology and Cultural Heritage, Trento, Italy, 22–24 October 2020; pp. 322–327.
  8. Petras, V.; Petrasova, A.; McCarter, J.B.; Mitasova, H.; Meentemeyer, R.K. Point Density Variations in Airborne Lidar Point Clouds. Sensors 2023, 23, 1593.
  9. Kölle, M.; Laupheimer, D.; Schmohl, S.; Haala, N.; Rottensteiner, F.; Wegner, J.D.; Ledoux, H. The Hessigheim 3D (H3D) Benchmark on Semantic Segmentation of High-Resolution 3D Point Clouds and Textured Meshes from UAV LiDAR and Multi-View-Stereo. ISPRS Open J. Photogramm. Remote Sens. 2021, 1, 100001.
  10. Brovelli, M.A.; Cina, A.; Crespi, M.; Lingua, A.; Manzino, A.; Garretti, L. Ortoimmagini e Modelli Altimetrici a Grande Scala-Linee Guida; CISIS, Centro Interregionale per I Sistemi Informatici Geografici e Statistici In Liquidazione: Rome, Italy, 2009.
  11. Argyrou, A.; Agapiou, A. A Review of Artificial Intelligence and Remote Sensing for Archaeological Research. Remote Sens. 2022, 14, 6000.
  12. Cappellazzo, M.; Baldo, M.; Sammartano, G.; Spano, A. Integrated Airborne LiDAR-UAV Methods for Archaeological Mapping in Vegetation-Covered Areas. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 357–364.
  13. Cappellazzo, M.; Patrucco, G.; Sammartano, G.; Baldo, M.; Spanò, A. Semantic Mapping of Landscape Morphologies: Tuning ML/DL Classification Approaches for Airborne LiDAR Data. Remote Sens. 2024, 16, 3572.
  14. Cherchi, G.; Fiorino, D.R.; Pais, M.R.; Pirisino, M.S. Bunker Landscapes: From Traces of a Traumatic Past to Key Elements in the Citizen Identity. In Defensive Architecture of the Mediterranean: Vol. XV; Pisa University Press: Pisa, Italy, 2023; pp. 1195–1201.
  15. Cappellazzo, M. Layered Landscape and Archeology of Military Heritage: Valorization Strategies for Porto Conte Park Territories (Alghero, SS) with GIS Technologies and Low-Cost Survey Contributions. Master's Thesis, Politecnico di Torino, Torino, Italy, 2019.
  16. Bassier, M.; Vergauwen, M.; Van Genechten, B. Automated Classification of Heritage Buildings for As-Built BIM Using Machine Learning Techniques. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 25–30.
  17. Yang, S.; Hou, M.; Li, S. Three-Dimensional Point Cloud Semantic Segmentation for Cultural Heritage: A Comprehensive Review. Remote Sens. 2023, 15, 548.
  18. Luo, L.; Wang, X.; Guo, H.; Lasaponara, R.; Zong, X.; Masini, N.; Wang, G.; Shi, P.; Khatteli, H.; Chen, F.; et al. Airborne and Spaceborne Remote Sensing for Archaeological and Cultural Heritage Applications: A Review of the Century (1907–2017). Remote Sens. Environ. 2019, 232, 111280.
  19. Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective, 2nd ed.; Prentice Hall, Inc.: Upper Saddle River, NJ, USA, 1996; ISBN 0132058405.
  20. Abate, N.; Frisetti, A.; Marazzi, F.; Masini, N.; Lasaponara, R. Multitemporal–Multispectral UAS Surveys for Archaeological Research: The Case Study of San Vincenzo Al Volturno (Molise, Italy). Remote Sens. 2021, 13, 2719.
  21. Santoro, V.; Patrucco, G.; Lingua, A.; Spanò, A. Multispectral UAV Data Enhancing the Knowledge of Landscape Heritage. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 1419–1426.
  22. Martino, A.; Gerla, F.; Balletti, C. Multi-Scale and Multi-Sensor Approaches for the Protection of Cultural Natural Heritage: The Island of Santo Spirito in Venice. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2023, 48, 1027–1034.
  23. Wieser, M.; Hollaus, M.; Mandlburger, G.; Glira, P.; Pfeifer, N. ULS LiDAR Supported Analyses of Laser Beam Penetration from Different ALS Systems into Vegetation. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 233–239.
  24. Masini, N.; Coluzzi, R.; Lasaponara, R. On the Airborne Lidar Contribution in Archaeology: From Site Identification to Landscape Investigation. In Laser Scanning, Theory and Applications; IntechOpen: London, UK, 2011.
  25. Golden, C.; Scherer, A.K.; Schroder, W.; Murtha, T.; Morell-Hart, S.; Fernandez Diaz, J.C.; Del Pilar Jiménez Álvarez, S.; Firpi, O.A.; Agostini, M.; Bazarsky, A.; et al. Airborne Lidar Survey, Density-Based Clustering, and Ancient Maya Settlement in the Upper Usumacinta River Region of Mexico and Guatemala. Remote Sens. 2021, 13, 4109.
  26. Kalacska, M.; Arroyo-Mora, J.P.; Lucanus, O. Comparing UAS LiDAR and Structure-from-Motion Photogrammetry for Peatland Mapping and Virtual Reality (VR) Visualization. Drones 2021, 5, 36.
  27. Mazzacca, G.; Grilli, E.; Cirigliano, G.P.; Remondino, F.; Campana, S. Seeing among Foliage with LiDaR and Machine Learning: Towards a Transferable Archaeological Pipeline. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 46, 365–372.
  28. Adedapo, S.M.; Zurqani, H.A. Evaluating the Performance of Various Interpolation Techniques on Digital Elevation Models in Highly Dense Forest Vegetation Environment. Ecol. Inform. 2024, 81, 102646.
  29. Albrecht, C.M.; Fisher, C.; Freitag, M.; Hamann, H.F.; Pankanti, S.; Pezzutti, F.; Rossi, F. Learning and Recognizing Archeological Features from LiDAR Data. In Proceedings of the 2019 IEEE International Conference on Big Data, Big Data 2019, Los Angeles, CA, USA, 9–12 December 2019; pp. 5630–5636.
  30. Mallet, C.; Bretar, F.; Roux, M.; Soergel, U.; Heipke, C. Relevance Assessment of Full-Waveform Lidar Data for Urban Area Classification. ISPRS J. Photogramm. Remote Sens. 2011, 66, S71–S84.
  31. Mancini, F.; Dubbini, M.; Gattelli, M.; Stecchi, F.; Fabbri, S.; Gabbianelli, G. Using Unmanned Aerial Vehicles (UAV) for High-Resolution Reconstruction of Topography: The Structure from Motion Approach on Coastal Environments. Remote Sens. 2013, 5, 6880–6898.
  32. Adamopoulos, E.; Rinaudo, F. Close-Range Sensing and Data Fusion for Built Heritage Inspection and Monitoring—A Review. Remote Sens. 2021, 13, 3936.
  33. Agapiou, A.; Lysandrou, V.; Hadjimitsis, D.G. Optical Remote Sensing Potentials for Looting Detection. Geosciences 2017, 7, 98.
  34. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of Machine-Learning Classification in Remote Sensing: An Applied Review. Int. J. Remote Sens. 2018, 39, 2784–2817.
  35. Matrone, F.; Grilli, E.; Martini, M.; Paolanti, M.; Pierdicca, R.; Remondino, F. Comparing Machine and Deep Learning Methods for Large 3D Heritage Semantic Segmentation. ISPRS Int. J. Geo-Inf. 2020, 9, 535.
  36. Poux, F.; Billen, R. Voxel-Based 3D Point Cloud Semantic Segmentation: Unsupervised Geometric and Relationship Featuring vs Deep Learning Methods. ISPRS Int. J. Geo-Inf. 2019, 8, 213.
  37. Guo, Y.; Wang, H.; Hu, Q.; Liu, H.; Liu, L.; Bennamoun, M. Deep Learning for 3D Point Clouds: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 4338–4364.
  38. Laupheimer, D.; Haala, N. Multi-Modal Semantic Mesh Segmentation in Urban Scenes. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 5, 267–274.
  39. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment Anything. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 3992–4003.
  40. Xiong, X.; Adan, A.; Akinci, B.; Huber, D. Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data. Autom. Constr. 2013, 31, 325–337.
  41. Li, Y.; Ma, L.; Zhong, Z.; Liu, F.; Chapman, M.A.; Cao, D.; Li, J. Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 3412–3432.
  42. Geiger, A.; Lenz, P.; Urtasun, R. Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361.
  43. Patil, A.; Malla, S.; Gang, H.; Chen, Y.T. The H3D Dataset for Full-Surround 3D Multi-Object Detection and Tracking in Crowded Urban Scenes. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 9552–9557.
  44. Hahner, M.; Dai, D.; Liniger, A.; Van Gool, L. Quantifying Data Augmentation for LiDAR Based 3D Object Detection. arXiv 2020, arXiv:2004.01643.
  45. Chen, X.; Ma, H.; Wan, J.; Li, B.; Xia, T. Multi-View 3D Object Detection Network for Autonomous Driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–27 July 2017; pp. 6526–6534.
  46. Lu, H.; Chen, X.; Zhang, G.; Zhou, Q.; Ma, Y.; Zhao, Y. SCANet: Spatial-Channel Attention Network for 3D Object Detection. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1992–1996.
  47. Yan, Y.; Mao, Y.; Li, B. SECOND: Sparsely Embedded Convolutional Detection. Sensors 2018, 18, 3337.
  48. Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. PointPillars: Fast Encoders for Object Detection from Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 12689–12697.
  49. Yang, Z.; Sun, Y.; Liu, S.; Jia, J. 3DSSD: Point-Based 3D Single Stage Object Detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11037–11045.
  50. Yang, Z.; Sun, Y.; Liu, S.; Shen, X.; Jia, J. STD: Sparse-to-Dense 3D Object Detector for Point Cloud. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1951–1960.
  51. Colucci, E.; Xing, X.; Kokla, M.; Mostafavi, M.A.; Noardo, F.; Spanò, A. Ontology-Based Semantic Conceptualisation of Historical Built Heritage to Generate Parametric Structured Models from Point Clouds. Appl. Sci. 2021, 11, 2813.
  52. Mallet, C.; Bretar, F. Full-Waveform Topographic Lidar: State-of-the-Art. ISPRS J. Photogramm. Remote Sens. 2009, 64, 1–16.
  53. Graham, L. LAS 1.4 Specification. Photogramm. Eng. Remote Sens. 2012, 78, 93–102.
  54. Niemeyer, J.; Rottensteiner, F.; Soergel, U. Contextual Classification of Lidar Data and Building Object Detection in Urban Areas. ISPRS J. Photogramm. Remote Sens. 2014, 87, 152–165.
  55. Matrone, F.; Lingua, A.; Pierdicca, R.; Malinverni, E.S.; Paolanti, M.; Grilli, E.; Remondino, F.; Murtiyoso, A.; Landes, T. A Benchmark for Large-Scale Heritage Point Cloud Semantic Segmentation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 1419–1426. [Google Scholar] [CrossRef]
  56. Pingel, T.J.; Clarke, K.C.; McBride, W.A. An Improved Simple Morphological Filter for the Terrain Classification of Airborne LIDAR Data. ISPRS J. Photogramm. Remote Sens. 2013, 77, 21–30. [Google Scholar] [CrossRef]
  57. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
  58. Batar, A.K.; Watanabe, T.; Kumar, A. Assessment of Land-Use/Land-Cover Change and Forest Fragmentation in the Garhwal Himalayan Region of India. Environments 2017, 4, 34. [Google Scholar] [CrossRef]
  59. Iglhaut, J.; Cabo, C.; Puliti, S.; Piermattei, L.; O’Connor, J.; Rosette, J. Structure from Motion Photogrammetry in Forestry: A Review. Curr. For. Rep. 2019, 5, 155–168. [Google Scholar] [CrossRef]
  60. Candiago, S.; Remondino, F.; De Giglio, M.; Dubbini, M.; Gattelli, M. Evaluating Multispectral Images and Vegetation Indices for Precision Farming Applications from UAV Images. Remote Sens. 2015, 7, 4026–4047. [Google Scholar] [CrossRef]
  61. Picollo, M.; Cucci, C.; Casini, A.; Stefani, L. Hyper-Spectral Imaging Technique in the Cultural Heritage Field: New Possible Scenarios. Sensors 2020, 20, 2843. [Google Scholar] [CrossRef] [PubMed]
  62. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  63. Roy, D.P.; Li, J.; Zhang, H.K.; Yan, L. Best Practices for the Reprojection and Resampling of Sentinel-2 Multi Spectral Instrument Level 1C Data. Remote Sens. Lett. 2016, 7, 1023–1032. [Google Scholar] [CrossRef]
  64. McFeeters, S.K. The Use of the Normalized Difference Water Index (NDWI) in the Delineation of Open Water Features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  65. Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; Markham, A. RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11105–11114. [Google Scholar] [CrossRef]
  66. Wu, Y.; Wang, Y.; Zhang, S.; Ogai, H. Deep 3D Object Detection Networks Using LiDAR Data: A Review. IEEE Sens. J. 2021, 21, 1152–1171. [Google Scholar] [CrossRef]
  67. Huang, X.; Wang, P.; Cheng, X.; Zhou, D.; Geng, Q.; Yang, R. The ApolloScape Open Dataset for Autonomous Driving and Its Application. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 42, 2702–2719. [Google Scholar] [CrossRef]
  68. Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B.; et al. Scalability in Perception for Autonomous Driving: Waymo Open Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 2443–2451. [Google Scholar] [CrossRef]
  69. Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. NuScenes: A Multimodal Dataset for Autonomous Driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11618–11628. [Google Scholar] [CrossRef]
  70. Good, I.J. Rational Decisions. J. R. Stat. Society. Ser. B 1952, 14, 107–114. [Google Scholar] [CrossRef]
  71. Sardinia Geoportal—Autonomous Region of Sardinia. Available online: https://www.sardegnageoportale.it/index.html (accessed on 30 July 2024).
  72. Collins, C.B.; Beck, J.M.; Bridges, S.M.; Rushing, J.A.; Graves, S.J. Deep Learning for Multisensor Image Resolution Enhancement. In Proceedings of the 1st Workshop on GeoAI: AI and Deep Learning for Geographic Knowledge Discovery, GeoAI 2017, Los Angeles, CA, USA, 7–10 November 2017; pp. 37–44. [Google Scholar]
  73. Zhu, Q.; Fan, L.; Weng, N. Advancements in Point Cloud Data Augmentation for Deep Learning: A Survey. Pattern Recognit. 2024, 153, 110532. [Google Scholar] [CrossRef]
Figure 1. Defensive heritage artifacts. Capo Boi tower, Sinnai—Cagliari (a). Sant’Ignazio fortress, Calamosca—Cagliari (b). Position no. 5 of Stronghold V, Porto Ferro—Alghero (c). Position no. 2 (hypothesis) of Stronghold XI, Alghero (d).
Figure 2. Map of the locations of the presented case studies within the research framework of the present contribution, Sardinia (Italy).
Figure 3. Case study 1. The area is located in the southern region of Sardinia and covers the whole extent of the town of Cagliari and its hinterland.
Figure 4. Case study 2. The area is located in the northwest region of Sardinia and covers the whole extent of the town of Alghero and its hinterland.
Figure 5. Map of the data provided by the Sardinia Region. The strips available from the two airborne LiDAR surveys are located on this map. The two surveys were carried out using two different sensors, as detailed in Table 1.
Figure 7. Sentinel-2 data fusion approaches. Vector water mask generation for areal segmentation (a). Band 8 NIR projection on DSM mesh for scalar value interpolation (b).
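The water-mask generation sketched in panel (a) of Figure 7 typically builds on the Normalized Difference Water Index (NDWI) [64], NDWI = (Green − NIR)/(Green + NIR), computed from Sentinel-2 band 3 (green) and band 8 (NIR). A minimal sketch, with small hypothetical reflectance arrays standing in for the actual Sentinel-2 rasters:

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """McFeeters NDWI: (Green - NIR) / (Green + NIR).
    Positive values typically indicate open water; vegetation and bare
    soil, which reflect strongly in the NIR, fall below zero."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    ndwi = (green - nir) / (green + nir + 1e-12)  # epsilon avoids division by zero
    return ndwi, ndwi > threshold

# Toy 2x2 scene: column 0 is vegetation-like (bright NIR), column 1 is water-like
green = np.array([[0.10, 0.30], [0.12, 0.28]])
nir = np.array([[0.40, 0.05], [0.35, 0.06]])
ndwi, mask = ndwi_water_mask(green, nir)
```

The boolean mask can then be polygonized into the vector water mask used for areal segmentation; the 0.0 threshold is the conventional default and is usually tuned per scene.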
Figure 8. Map of the location of the training dataset (case study 1, Cagliari). The red tiles correspond to the training set, while the green tiles correspond to the point clouds used to validate the model.
Figure 9. Map of the locations of test dataset areas A, B, C, and D. The test point clouds are in the case study 2 area (Alghero).
Figure 10. Label generation workflow: from a point feature class to a 3D geometry.
Figure 11. Comparison of point cloud echo information, reflectance intensity, and newly computed scalar fields used for unsupervised segmentation by geometric and digital-number filtering. Number of returns (a). Intensity (b). Near-infrared from Sentinel-2 band 8 (784–899.5 nm) obtained by data fusion (c). λ3 eigenvalue (normals) computed on a 2.5 m radius (d).
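The λ3 scalar field in panel (d) of Figure 11 is the smallest eigenvalue of each point's local covariance matrix: near-zero values flag locally planar surfaces (ground, roofs), while larger values flag volumetric clutter such as tree canopies. A brute-force sketch on a synthetic cloud (a real ALS tile would use a KD-tree for the radius search; the data here are illustrative only):

```python
import numpy as np

def lambda3_field(points, radius=2.5):
    """Per-point smallest eigenvalue of the local 3D covariance matrix
    within `radius`. Brute-force O(n^2) neighbor search for clarity."""
    lam3 = np.zeros(len(points))
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[d <= radius]
        if len(nbrs) < 3:
            continue  # too few neighbors for a stable covariance
        c = nbrs - nbrs.mean(axis=0)
        cov = c.T @ c / len(nbrs)
        lam3[i] = np.linalg.eigvalsh(cov)[0]  # eigvalsh sorts ascending
    return lam3

# Flat "ground" patch (z = 0) vs. a scattered "vegetation" blob far away
rng = np.random.default_rng(0)
flat = np.column_stack([rng.uniform(0, 5, 200), rng.uniform(0, 5, 200), np.zeros(200)])
blob = rng.uniform(0, 2, (200, 3)) + np.array([20.0, 0.0, 0.0])
pts = np.vstack([flat, blob])
lam3 = lambda3_field(pts)  # near zero on the patch, clearly positive in the blob
```

Thresholding such a field gives the kind of geometric pre-segmentation compared against echo and intensity cues in the figure.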
Figure 12. Predictive model training results. The model performance is evaluated using the validation data from the training dataset of case study 1 (Cagliari).
Figure 13. Predictive model testing results. The model performance is evaluated using test datasets A, B, C, and D of case study 2 (Alghero).
Figure 14. Bounding box generation workflow for reference data. The aim is to apply 3D deep learning to defensive heritage mapping; in this case, the three areas focus on bunker-class objects.
Figure 15. Model validation graph, showing the training and validation logarithmic loss across epochs. While the training loss decreases, the validation loss remains flat.
Table 1. Survey details. This table provides information about the sensors used and the survey and post-processing characteristics.
| Survey | Sensor | LPRR [kHz] | Target Echoes | Overlap | Vertical Accuracy [m] | Ground Speed [kn] | AGL [m] | Average Density [pts/m²] |
|---|---|---|---|---|---|---|---|---|
| 1 * | ALTM Gemini | 125 | 4 | 30% | 25 | 140 | 1400 | 2 |
| 2 * | Riegl LMS-Q560 | 240 | unlimited ** | 60% | 10 | 110 | 500 | 10 |

\* Survey 1 is intended as the sum of acquisitions completed over several weeks; the echo information embedded in the point clouds is often missing or incomplete. ** The stored echo information is limited, in practice, only by the maximum data range allowed by the data recorder of the LiDAR system.
Table 2. Historical defensive heritage object-detection and land cover schema. This class schema was created to provide a basic semantic structure for the DL object-detection approach. The number of elements in each area is also listed for each class.
| Class Value | Class Name | Area 1 Objects | Area 2 Objects | Description |
|---|---|---|---|---|
| 0 | Not classified or in use | 28 out of 62 (45%) | 28 out of 113 (25%) | This class contains all the points remaining from the other classes. |
| 1 | Tower | 2 out of 62 (3%) | 6 out of 113 (5%) | This class integrates all kinds of objects related to military observation towers and outposts. |
| 2 | Walls | – | 1 out of 113 (1%) | This class is intended as objects pertaining to fortification walls and bastions. |
| 3 | Bunker | 14 out of 62 (23%) | 66 out of 113 (58%) | This class contains objects related to pillbox-fortified structures, a special type of camouflaged concrete guard post. |
| 4 | Batteries | 15 out of 62 (24%) | 12 out of 113 (11%) | This class contains objects related to military batteries, whether antiaircraft or antinavy. |
| 5 | Fortress | 3 out of 62 (5%) | – | This class contains objects pertaining to fortresses and strongholds. |
Table 3. Semantic segmentation class scheme description.
| Class Value | Class Name | Description |
|---|---|---|
| 0 | Unclassified | All the points remaining from the other classes; in particular, low vegetation (e.g., bushes), cars, etc. |
| 2 | Ground | All the points pertaining to the ground surface. |
| 5 | High vegetation | Objects recognized as trees. |
| 6 | Buildings | Points related to human-made artifacts, such as buildings, ruins, etc. |
| 9 | Water | Water surface points. |
Table 4. Data consistency and class distribution of the training dataset that was used for training and validating the predictive segmentation model.
| Training Dataset | Data Percentage | Class | Class Distribution |
|---|---|---|---|
| Training | 78% | Unclassified | 5% |
| | | Ground | 58% |
| | | High Vegetation | 10% |
| | | Building | 26% |
| | | Water | 2% |
| Validation | 22% | Unclassified | 6% |
| | | Ground | 67% |
| | | High Vegetation | 8% |
| | | Building | 18% |
| | | Water | 1% |
Table 5. Class frequency of the test datasets used to evaluate the trained predictive model.
| Area | Unclassified | Ground | High Vegetation | Building | Water |
|---|---|---|---|---|---|
| A | 2% | 52% | 2% | 38% | 6% |
| B | 4% | 43% | 9% | 44% | – |
| C | 1% | 19% | 1% | 80% | – |
| D | 3% | 41% | 36% | 20% | – |
Table 6. Summary of existing open benchmark datasets for 3D object detection, showing data type, number of classes, and annotated 3D bounding boxes.
Table 6. Summary of existing open benchmark datasets for 3D object detection, showing data type, number of classes, and annotated 3D bounding boxes.
| Dataset | Reference | Year | Data | Type | Object Classes | Annotated 3D Boxes |
|---|---|---|---|---|---|---|
| KITTI | [42] | 2012 * | RGB + LiDAR | Autonomous driving | 8 | 200K |
| ApolloScape | [67] | 2018 | RGB + LiDAR | Autonomous driving | 6 | 70K |
| H3D | [43] | 2019 | RGB + LiDAR | Autonomous driving | 8 | 1.1M |
| Waymo Open | [68] | 2020 | RGB + LiDAR | Autonomous driving | 4 | 12M |
| nuScenes | [69] | 2020 | RGB + LiDAR | Autonomous driving | 23 | 1.4M |

\* In 2017, KITTI released a benchmark for 3D object detection.
Table 7. Predictive model validation results. The metrics in the table were calculated using the validation data from the training dataset (case study 1).
| Class | Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|
| 0—Unclassified | 0.95 | 0.78 | 0.29 | 0.42 |
| 2—Ground | 0.92 | 0.93 | 0.94 | 0.94 |
| 5—High Vegetation | 0.97 | 0.81 | 0.90 | 0.85 |
| 6—Building | 0.94 | 0.78 | 0.89 | 0.84 |
| 9—Water | 0.99 | 0.71 | 0.64 | 0.67 |
| Macro Average | 0.95 | 0.80 | 0.73 | 0.74 |
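The per-class scores reported in Tables 7 and 8 follow the standard definitions — precision = TP/(TP + FP), recall = TP/(TP + FN), F1 = the harmonic mean of the two — with the macro average taken as the unweighted mean over classes. A minimal sketch with hypothetical true/false positive counts (the paper's actual confusion matrices are not reported here):

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from raw counts; guards against empty denominators."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical per-class counts (tp, fp, fn), purely illustrative
counts = {"ground": (930, 70, 60), "building": (780, 220, 110)}
scores = {name: prf(*c) for name, c in counts.items()}

# Macro average: unweighted mean over classes, so rare classes (e.g., water)
# weigh as much as frequent ones and can pull the average down sharply
macro = [sum(s[i] for s in scores.values()) / len(scores) for i in range(3)]
```

This is why, in Table 7, the strong Ground and Building scores coexist with a modest 0.74 macro F1: the low-recall Unclassified and Water classes count equally in the average.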
Table 8. Predictive model test results. The metrics in the table were calculated using test datasets A, B, C, and D.
| Area | Class | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| A | 0—Unclassified | 0.99 | 0.71 | 0.58 | 0.64 |
| A | 2—Ground | 0.84 | 0.80 | 0.93 | 0.86 |
| A | 5—High Vegetation | 0.98 | 0.57 | 0.66 | 0.61 |
| A | 6—Building | 0.89 | 0.89 | 0.81 | 0.85 |
| A | 9—Water | 0.95 | 0.97 | 0.21 | 0.35 |
| A | Macro Average | 0.93 | 0.79 | 0.64 | 0.66 |
| B | 0—Unclassified | 0.97 | 0.78 | 0.53 | 0.64 |
| B | 2—Ground | 0.94 | 0.87 | 1.00 | 0.93 |
| B | 5—High Vegetation | 0.97 | 0.86 | 0.80 | 0.83 |
| B | 6—Building | 0.89 | 0.92 | 0.83 | 0.87 |
| B | Macro Average | 0.96 | 0.86 | 0.79 | 0.82 |
| C | 0—Unclassified | 0.99 | 0.98 | 0.30 | 0.46 |
| C | 2—Ground | 0.99 | 0.94 | 1.00 | 0.97 |
| C | 5—High Vegetation | 1.00 | 0.87 | 0.92 | 0.89 |
| C | 6—Building | 0.98 | 0.99 | 0.99 | 0.99 |
| C | Macro Average | 0.99 | 0.95 | 0.80 | 0.83 |
| D | 0—Unclassified | 0.99 | 0.97 | 0.58 | 0.73 |
| D | 2—Ground | 0.96 | 0.91 | 1.00 | 0.96 |
| D | 5—High Vegetation | 0.99 | 0.99 | 0.98 | 0.99 |
| D | 6—Building | 0.96 | 0.95 | 0.83 | 0.89 |
| D | Macro Average | 0.97 | 0.96 | 0.85 | 0.89 |

Share and Cite

MDPI and ACS Style

Cappellazzo, M.; Patrucco, G.; Spanò, A. ML Approaches for the Study of Significant Heritage Contexts: An Application on Coastal Landscapes in Sardinia. Heritage 2024, 7, 5521-5546. https://doi.org/10.3390/heritage7100261

