Review

Three-Dimensional Point Cloud Semantic Segmentation for Cultural Heritage: A Comprehensive Review

1 College of Geosciences and Surveying Engineering, China University of Mining and Technology, Beijing 100083, China
2 School of Geomatics and Urban Information, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
3 Department of Civil Engineering, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(3), 548; https://doi.org/10.3390/rs15030548
Submission received: 1 December 2022 / Revised: 4 January 2023 / Accepted: 11 January 2023 / Published: 17 January 2023
(This article belongs to the Special Issue Semantic Segmentation Algorithms for 3D Point Clouds)

Abstract
In the cultural heritage field, point clouds, as important raw data of geomatics, are not only three-dimensional (3D) spatial representations of objects but also have the potential to gradually advance towards an intelligent data structure with scene understanding, autonomous cognition, and decision-making ability. Point cloud semantic segmentation, as a preliminary stage, can help to realize this advancement. With the demand for semantic comprehensibility of point cloud data and the widespread application of machine learning and deep learning approaches in point cloud semantic segmentation, there is a need for a comprehensive literature review covering the topics from point cloud data acquisition to semantic segmentation algorithms and application strategies in cultural heritage. This paper first reviews the current trends in acquiring point cloud data of cultural heritage from a single platform with multiple sensors and from multi-platform collaborative data fusion. Then, the point cloud semantic segmentation algorithms are discussed with their advantages, disadvantages, and specific applications in the cultural heritage field. These algorithms include region growing, model fitting, unsupervised clustering, supervised machine learning, and deep learning. In addition, we summarize the public benchmark point cloud datasets related to cultural heritage. Finally, the problems and promising development trends of 3D point cloud semantic segmentation in the cultural heritage field are presented.

1. Introduction

The use of 3D point cloud data for cultural heritage assets is becoming widespread since 3D models support various applications of digital documentation [1,2,3,4], interpretation of information and knowledge [5,6], and visual experience [7,8]. Point cloud data include massive 3D geometric information with colour and reflection intensity attributes. However, semi-automatic or automatic, efficient, and reliable solutions for linking point clouds with semantic information are still lacking. Three-dimensional point cloud semantic segmentation (3DPCSS) has been regarded as a technology that links a single point or a set of points in the point cloud data with semantic labels (e.g., roof, facade, wall of architectural heritage, plants, damaged area, etc.) [9]. Moreover, 3DPCSS is the key step in extracting implicit geometric features and semantic information from point cloud data for 3D scene understanding and cognition. Point clouds with semantic and geometric feature information are easier for users outside the remote sensing profession, such as cultural heritage managers, researchers, conservators, and restorers, to apply and understand [10]. For example, the scan-to-BIM (building information modeling) [11,12] method, from point cloud to BIM [13] or heritage building information modeling (H-BIM) [14,15], mainly depends on 3DPCSS to extract geometric primitives, semantic information, and building structures for parametric modeling. Smart point clouds [16,17] and rich point clouds [18] are new intelligent point cloud data structures that can be used for 3D scene understanding and decision-making, in which 3DPCSS acts as a core step of the technology framework. 3DPCSS can also support damaged area investigation, thematic mapping, and surface material analysis of cultural relics [19,20,21].
In addition, the scientometric method has proven that the semantic segmentation of point clouds is one of the hot topics of leveraging point cloud data in the cultural heritage field [22]. Therefore, it is evident that 3DPCSS has become the essential data processing step in cultural heritage.
Cultural heritage comprises objects of varying nature, size, and complexity, which can be classified into tangible cultural heritage and intangible cultural heritage. Tangible cultural heritage consists of two categories: immovable cultural heritage (e.g., built heritage, monuments, archaeological sites, etc.) and moveable cultural heritage (e.g., paintings, sculptures, furniture, coins, etc.) [23]. The tasks of point cloud semantic segmentation differ across spatial scales, from landscapes (i.e., immovable heritage and surrounding environments) to small artifacts (i.e., a part of an immovable heritage). In macro geographical space, a cultural heritage landscape is regarded as a complex dynamic environment composed of elements with semantic information such as land, vegetation, water, buildings (modern and historical), and artifacts [24]. These elements can be reconstructed in geographic information systems (GIS) to support landscape and archaeological site annotation [25] and landscape planning [26]. With regard to immovable heritage, point cloud semantic segmentation can improve the degree of automation of parametric modeling of objects at different levels of detail [27,28]. The common goal is to locate different elements in a three-dimensional point cloud scene, annotate them, and associate them with semantics, attributes, and external knowledge databases [29]. At present, the application of point cloud semantic segmentation mainly focuses on immovable cultural heritage, especially historical buildings.
While point cloud semantic segmentation has been applied in various industrial fields, such as urban and rural scenes [30], agricultural scenes [31], and railway environments [32], the field of cultural heritage has its own unique features, including:
(1) Point clouds in cultural heritage require a higher point density to express the complex geometric details of the object surface.
(2) The basic geometric elements of cultural heritage include many non-planar, curved geometrical shapes, irregular shapes, and complex structures [33].
(3) Before handling point clouds, the segmentation categories always depend on the knowledge of experts in the field of cultural heritage.
(4) For the same cultural heritage, the segmentation categories can be identified based on different research objectives and practical applications [34].
(5) The same segmentation categories in different heritages have very significant morphological differences. For example, different historical periods and architectural styles include a variety of vaults supported by pillars of various patterns and shapes.
(6) A high level of accuracy is required for the semantic segmentation of point clouds for applications such as structural analysis and damage detection [35].
3DPCSS has attracted significant attention in computer vision, remote sensing, and robotics. As a result, similar terms have emerged to describe the same concept. In existing studies, point cloud semantic segmentation has also been known as classification [36,37] or point labeling [38,39]. In addition, 3D point cloud segmentation (3DPCS) is also an essential task in point cloud data processing. 3DPCS is essentially a process of extracting specific geometric structures or features in cultural heritage scenes based on geometric constraints and statistical rules rather than explicit supervised learning or prior knowledge [9]. 3DPCS groups point clouds into subsets with one or more common characteristics, whereas 3DPCSS defines and assigns points to specific classes with semantic labels according to different criteria [40]. Therefore, 3DPCS algorithms are also included in this article.
Figure 1 demonstrates the overall pipeline from point cloud data acquisition to applications in the cultural heritage field. Three-dimensional laser scanning and photogrammetry are the techniques that transform objects in the physical world into point cloud data. The inherent characteristics of point cloud data (high density, incompleteness, massive volume, and lack of explicit neighborhood information) pose significant challenges to its direct use in the cultural heritage field [41,42]. 3DPCS and 3DPCSS provide a high-level, semantic representation of the original point cloud data for applications such as digital orthophoto maps, damaged area investigations, object recognition, and BIM/HBIM.
We searched the Web of Science database (WoS) on 20 October 2022 using “(point cloud*) AND (segmentation OR classification)” to conduct queries through “Topic”, which includes the title, abstract, keywords, and Keywords Plus. We further filtered the documents by checking “review article” under “document types”, obtaining 96 papers. Finally, four papers related to cultural heritage were selected manually. Specifically, Yang et al. (2020) [43] focused on built heritage documentation using geometry modeling, knowledge management, and BIM/HBIM; semantic segmentation of 3D point clouds can enhance the ability of knowledge fusion in built heritage modeling, but it is not the main topic of that paper. Bassier et al. (2020) [44] reviewed the interpretation and reconstruction of raw point cloud data in the construction industry, where semantic segmentation was mentioned as a key step. Moyano et al. (2021) [45] reviewed point cloud segmentation for HBIM parameterization in the field of architecture and archaeology, but semantic segmentation algorithms are not that paper’s focus. Rashdi et al. (2022) [46] explained point cloud data processing in scan-to-BIM methodologies, such as sampling, registration, and semantic segmentation. However, the above review articles focus on BIM and HBIM technology in the construction field without comprehensively reviewing 3DPCS and 3DPCSS algorithms and their extensive applications in the cultural heritage field. In addition, we conducted a supplementary search on Google Scholar using the same search conditions as for WoS and retrieved one review paper about point cloud semantic segmentation algorithms in cultural heritage. Grilli et al. (2017) [40] carried out a short review of point cloud segmentation and classification algorithms in cultural heritage, but it was not exhaustive, particularly in its coverage of algorithmic detail and applications.
Each of the five mentioned review papers has explored 3DPCS and 3DPCSS in building heritage from a specific point of view. Therefore, it is necessary to provide a comprehensive review that covers up-to-date studies and organizes them efficiently, containing an introduction to point cloud data acquisition and a detailed summary of the algorithms and applications of point cloud semantic segmentation in the cultural heritage field. This review aims to introduce 3DPCS and 3DPCSS algorithms and comprehensively analyze their applications in different segmentation tasks. The various sources (platforms or sensors) of raw point cloud data significantly affect the performance of 3DPCSS algorithms. Therefore, this paper first surveys the existing studies from two perspectives of 3D point cloud data acquisition in cultural heritage, i.e., a single platform with multiple sensors and multi-platform data fusion. Then, we classify 3DPCS algorithms into three types: region growing, model fitting, and clustering-based methods. 3DPCSS algorithms are classified into supervised machine learning and deep learning methods. The classification criteria follow those of general review papers [9,40,47]. Since the edge-based method usually reviewed in the literature has barely been used for processing cultural heritage, we excluded it from this survey. Last, we discuss the current challenges and potential future research directions.
The rest of this paper is organized as follows. Section 2 introduces the acquisition of multi-source point cloud data in cultural heritage. Section 3 summarizes the three types of 3DPCS methods in detail. The learning-based method of 3DPCSS and public benchmark datasets are summarized in detail in Section 4. Section 5 discusses some existing problems and development trends in 3DPCSS. Finally, some conclusions are drawn in Section 6.

2. Three-Dimensional Point Cloud Data in Cultural Heritage

This section first introduces point cloud data acquisition in the field of cultural heritage and the performance of various data acquisition technologies of different types of cultural heritage at different spatial scales. Then, we summarize the characteristics of data acquisition and data fusion from two perspectives, i.e.:
(1) A single platform with multiple sensors: a point cloud data acquisition platform equipped with various sensors can obtain additional information in a single acquisition task. This additional information can improve the effect of 3D point cloud segmentation and semantic segmentation.
(2) Multi-platform data fusion: combining the advantages of point cloud data acquisition of different platforms, a more complete and multi-resolution point cloud can be obtained by data fusion.

2.1. Point Cloud Data Acquisition Technologies

Photogrammetry and 3D laser scanning have become the principal approaches for 3D point cloud data acquisition of cultural heritage. Salonia et al. (2009) [48] presented a quick photogrammetric system for surveying archaeological and architectural artifacts at different spatial scales. Multi-image photogrammetry with UAVs has become the most economical and convenient 3D survey technology for cultural heritage landscapes, archaeological sites, and immovable heritage (e.g., historical buildings) [49,50,51]. Jeon et al. (2017) [52] compared the performance of image-based 3D reconstruction from UAV photography using different commercial software, e.g., Context Capture, Photo Scan, and Pix4Dmapper. Kingsland (2020) [53] used three types of digital photogrammetry processing software, including Metashape, Context Capture, and Reality Capture, for small-scale artifact digitization. In addition, computer vision (CV) offers mathematical techniques for reconstructing 3D models from imagery [54]. CV together with photogrammetry is known as the image-based point cloud data acquisition method. Aicardi et al. (2018) [55] reviewed the definitions, similarities, and differences between the two technologies in detail. In the last decade, software, hardware, and algorithms proposed in the CV field have improved photogrammetric solutions in terms of workflow automation and computational efficiency, and it is no longer easy to identify the actual break point between the photogrammetry and CV approaches. Dense matching [56,57,58], multiple-view stereovision (MVS), and structure from motion (SFM) [59,60,61,62,63], as CV algorithms, have been widely used in structural engineering and in the conservation, maintenance, and restoration of sites and structures belonging to the cultural heritage field. Photometric stereo is another popular method for reconstructing small objects; it can recover detailed surface shapes even on objects without texture, which is its advantage over the geometry-based methods [64,65].
Three-dimensional laser scanning is another technology used to acquire cultural heritage point cloud data. A variety of 3D laser scanning platforms can be used to obtain 3D point cloud data of cultural heritage at different spatial scales, such as airborne laser scanning (ALS), terrestrial laser scanning (TLS), mobile laser scanning (MLS), and handheld laser scanning. For example, Risbøl et al. (2014) [66] used ALS data for change detection of the detailed information of a landscape and individual monuments automatically. Damięcka-Suchocka et al. (2020) [67] used TLS point cloud data at the millimeter level to investigate historical buildings, walls, and structures for conducting inventory activities, documentation, and conservation work. Di Filippo et al. (2018) [68] proposed a wearable MLS for collecting indoor and outdoor point cloud data of complex cultural heritage buildings. Lou et al. (2022) [69] proposed a methodology to digitize, extract, and classify cave features of rockeries in Chinese classical gardens by a handheld laser scanner with a camera. Ramm et al. (2022) [70] used a structured-light 3D sensor and a photo camera to capture 3D models of museum objects with a resolution of 0.1 mm.
The requirements of documentation and application determine the choice of technologies, platforms, and sensors. Gomes et al. (2014) [71] reviewed the 3D digitization of cultural heritage in the past and the technical workflow of 3D reconstruction. Maté-González et al. (2022) [72] compared the performance of three lidar techniques (TLS, ALS, and MLS) in surveying built heritage in vegetated areas. Ruiz et al. (2022) [73] made a comparative analysis between the main 3D scanning techniques: photogrammetry, TLS, and a structured light scanner in sculpture heritage. Table 1 provides the basic information about various point clouds, including technology, point density, advantages, disadvantages, and spatial scales.

2.2. A Single Platform with Multiple Sensors

In the field of cultural heritage, the equipment used for acquiring point cloud data is mainly based on 3D lidar and photogrammetric cameras carried by unmanned aerial vehicles (UAVs), mobile vehicles, robots, terrestrial stations, handheld devices, and other platforms. The purpose of equipping one platform with multiple sensors is to obtain more information in one data acquisition process. For example, Nagai et al. (2009) [74] proposed a UAV 3D mapping platform consisting of charge-coupled device cameras, a laser scanner, an inertial measurement unit, and a global positioning system (GPS). Erenoglu et al. (2017) [75] used digital, thermal, and multi-spectral camera systems on a UAV platform to collect visible, thermal, and infrared radiation of the electromagnetic spectrum. That multi-source information was employed to produce a highly accurate geometric model of an ancient theater, and it revealed material features through spectral classification. Rodríguez-Gonzálvez et al. (2017) [76] employed the Optech LYNX Mobile Mapper platform on a vehicle to acquire 3D point clouds of enormous cultural heritage sites. The platform includes two lidar sensors, four RGB cameras, and an inertial navigation system. During data collection, the system can simultaneously acquire point cloud data, colour information, and spatial geographic references. Milella et al. (2018) [77] assembled a stereo camera, a visible and near-infrared camera, and a thermal imager on a robot with an inertial measurement unit (IMU) to identify soil characteristics and detect changes through the integration of data from different sensors. Hakala et al. (2012) [78] designed a prototype of a full-waveform hyperspectral terrestrial laser scanner to produce hyperspectral 3D point clouds, which can support visualization, automated classification of the point cloud, and calculation of spectral indices for extraction of target physical properties. Zlot et al. (2014) [79] developed a handheld mobile mapping system called Zebedee for cultural heritage applications. The system consisted of a laser scanner, a camera, and an IMU, suitable for 3D data acquisition of large, complex cultural heritage scenes and capable of collecting data from inside buildings.
To sum up, two main trends exist in using a single platform for scientific observation of cultural heritage. The first is the mobile platform combining a global positioning system with inertial navigation, which can achieve a global geospatial reference for cultural heritage without ground control points. The mobile platform can reduce the time required for planning the sensor network and installing instrumentation; such planning may be impractical for non-professional researchers who lack surveying expertise [80]. This approach can quickly obtain point cloud data over a large area with a complex spatial structure, such as cultural heritage landscapes and historical buildings. However, cultural heritage authorities may prohibit UAVs and mobile vehicles from accessing cultural heritage sites, and sometimes there is insufficient space or road access for flying drones or driving vehicles. In addition, the accuracy of mobile platforms is lower than that of ground-station-based methods. The second trend is that the content of point cloud data is evolving from purely geometric towards synchronous acquisition of spectral and texture information in one data acquisition. Table 2 summarizes the data acquisition cases of a single platform with multiple sensors.

2.3. Multi-Platform Data Fusion

In cultural heritage, it is difficult for a single platform to meet the requirements of point cloud integrity and accuracy within its limited observation perspective [81]. Multi-platform collaborative observation can effectively integrate the advantages of different platforms. Multi-source point cloud fusion can make the scene have a more complete spatial scale and geometric information.
Multi-platform data fusion can make up for the problems of incomplete data and excessive data errors that exist in a single platform. For example, Fassi et al. (2011) [82] proposed that in order to understand the global structure of complex artifacts (e.g., Milan Cathedral’s main spire, Italy), together with its reconstruction accuracy, connections, and topological and geometrical logic, different instruments and modeling methods must be used and integrated. Achille et al. (2015) [83] constructed a 3D model of interior and exterior buildings with complex structures by integrating UAV photogrammetry and TLS data. Galeazzi (2016) [84] combined 3D laser scanning and photogrammetry to reconstruct the archaeological record of cave microtopography in extreme environments, such as extreme humidity, difficulty of access, and challenging light conditions. Zaragoza et al. (2017) [85] integrated UAV photography with TLS for 3D documentation in a hazardous situation. The UAV photography method has higher accuracy at the roof surface area, while the TLS method is more suitable for obtaining the facade area (such as walls). Herrero-Tejedor et al. (2020) [86] used the UAV-TSL model in a densely vegetated cultural heritage area (an ancient garden). In their data acquisition process, densely vegetated areas hindered the penetration of airborne lidar, making it challenging to detect near-ground objects. UAV photography destroys the image effect due to ground shadow and vegetation occlusion, resulting in point cloud noise in the near-ground area.
Another purpose is to reconstruct a multi-spatial resolution point cloud model for cultural heritage. For example, Guidi et al. (2009) [87] reported on a multi-resolution and multi-sensor approach developed for the accurate and detailed 3D modeling of the entire Roman Forum. Abate et al. (2018) [88] provided a 3D resolution spanning from a few centimeters in the landscape digital terrain model (DTM) to a few millimeters in the layer-by-layer archaeological site. Young et al. (2019) [89] used a combination method of TLS and UAV photogrammetry to establish a 3D model of a temple. In the study, UAV photogrammetry yielded a higher planar data acquisition rate on the roof of a building than TLS. However, laser scanning was observed to provide higher positional accuracy than photogrammetry. A few studies of multi-platform data fusion are shown in Table 3.

3. 3D Point Cloud Segmentation

This section introduces the main methods and applications of 3DPCS in cultural heritage. 3DPCS is mainly used for the recognition and extraction of basic geometric shapes from point clouds, especially for immovable cultural heritage. The 3DPCS algorithms are divided into three categories: region growing, model fitting, and clustering-based methods. Among them, the region growing method is used to detect and segment planes from point clouds of historic buildings. The model fitting method can segment more basic geometric or irregular geometric shapes. These geometric elements are mainly used for the parametric modeling of BIM and HBIM. The model fitting and clustering-based methods can support surface defect detection and deformation analysis of immovable heritage by calculating the distance between the real points and the fitted plane.

3.1. Region Growing

The region growing method is a classic algorithm in 3DPCS. This algorithm is widely used in the geometric segmentation of planar structures [90,91]. In addition, the region growing algorithm is also applied to segment planar elements from the point clouds of historic buildings [92,93]. The basic idea of the region growing algorithm is to merge two spatial points or two spatial regions when they are close enough in a particular geometric measure.
Three critical factors need to be considered when constructing a region growing algorithm with different strategies. The first key factor is selecting the appropriate seed point or region [94,95,96]. The second key factor is dividing the region growth unit to improve the computational efficiency using region unit division or hybrid units division, such as voxel [97], super voxel [98], KD-Tree [94], and Octree [99] structures, etc. For example, Xiao et al. (2013) [100] used a single point and a sub-region as growth units to detect planes. Dong et al. (2018) [101] adopted a hybrid region-growing algorithm based on a single point and super voxel units to achieve coarse segmentation before global energy optimization. The third key factor is determining the appropriate criteria for measuring the similarity, including the normal vector, the distance between adjacent points and the adjustment plane, and the distance between the current point and the candidate point [90,92].
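To make the growth loop concrete, the following minimal sketch (illustrative only, not code from the cited works; it assumes per-point normals have already been estimated) grows a single region from a seed, merging neighbors whose normals stay within an angular tolerance of the seed normal. A brute-force radius search stands in for the voxel, super voxel, KD-Tree, or Octree growth units discussed above:

```python
import numpy as np

def region_grow(points, normals, seed, radius=0.15, angle_tol=np.deg2rad(10.0)):
    # Grow one region from `seed`: a candidate neighbor joins the region when
    # its normal is within `angle_tol` of the seed normal (one simple
    # similarity criterion; curvature or point-to-plane distance could be added).
    n_seed = normals[seed]
    visited = np.zeros(len(points), dtype=bool)
    visited[seed] = True
    region, queue = [], [seed]
    while queue:
        i = queue.pop()
        region.append(i)
        # Brute-force radius search; a KD-Tree or octree replaces this at scale.
        dists = np.linalg.norm(points - points[i], axis=1)
        for j in np.nonzero((dists < radius) & ~visited)[0]:
            visited[j] = True  # examine each point at most once
            if abs(normals[j] @ n_seed) > np.cos(angle_tol):
                queue.append(j)
    return np.array(sorted(region))
```

On synthetic data, a region seeded on a horizontal plane stops at an adjoining vertical wall because the wall normals fail the similarity criterion; the seed choice, growth units, and similarity measure are exactly the three factors discussed above.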
The region growing method is mainly applied to segmenting walls and roofs of historic buildings. For example, Grussenmeyer et al. (2008) [92] selected the gravity point of each voxel unit as the seed point for region growing and then extracted planes for parametric modeling from the TLS point cloud of a medieval castle. Paiva et al. (2020) [93] extracted planar elements from point cloud data of five historical buildings of different styles and periods. Their method combines hierarchical watershed transform and curvature analysis with region growing to obtain more suitable growing seeds, and it can be applied to multi-source point clouds from drones and terrestrial laser scanners. Pérez-Sinticala et al. (2019) [102] combined a hybrid region growing algorithm with primitive fitting by sample consensus, simplifying the point clouds into a model based on geometric primitives such as walls and towers and enabling the segmentation and automatic recognition of roofs and slopes.
The region growing method has been proven to segment point cloud data of ancient historical buildings effectively. However, it still suffers from weak generalization ability and high computational cost [103]. For example, the reliability of the segmentation results depends on the growth criteria of the seed. Selecting parameters (such as the number of nearest points) and determining region growth units require prior knowledge, leading to poor generalization of the algorithm. Moreover, the point-wise calculation is time-consuming for massive point clouds (millions or tens of millions of points).

3.2. Model Fitting

Model fitting is a shape detection method that matches point clouds to different primitive geometries. Therefore, it can also segment regular geometric shapes from point clouds. The two most important model fitting algorithms are the Hough transform (HT) and random sample consensus (RANSAC).

3.2.1. Hough Transform (HT)

The Hough transform can detect parametric geometric objects such as lines [104], planes [105], cylinders [106], and spheres [107] in point clouds. There have been several review articles about the 3D Hough transform [108,109,110]. In HT, samples extracted from 2D images or 3D point cloud data in the original space are mapped into a discretized parameter space, in which an accumulator with an array of cells is constructed. Each sample then casts a vote for every basic geometric element, represented by parameter coordinates, that could pass through it in the original space. The cell with the locally maximal score is selected as the output.
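As a minimal illustration of this voting scheme (a sketch, not code from the reviewed studies), the following 2D line Hough transform discretizes the parameter space of the line x·cos θ + y·sin θ = ρ, lets every point vote for all lines through it, and returns the accumulator cell with the maximal score:

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_res=0.05):
    # Lines are parameterized as x*cos(theta) + y*sin(theta) = rho.
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # Every point votes for all (theta, rho) pairs it lies on.
    rhos = points[:, :1] * np.cos(thetas) + points[:, 1:2] * np.sin(thetas)
    rho_idx = np.round(rhos / rho_res).astype(int)
    offset = -rho_idx.min()
    acc = np.zeros((rho_idx.max() + offset + 1, n_theta), dtype=int)
    cols = np.broadcast_to(np.arange(n_theta), rho_idx.shape)
    np.add.at(acc, (rho_idx + offset, cols), 1)  # unbuffered accumulation of votes
    # The cell with the maximal score identifies the dominant line.
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], (r - offset) * rho_res, int(acc.max())
```

Detecting planes, cylinders, or spheres in 3D follows the same accumulate-and-peak pattern with a higher-dimensional parameter space, which is the source of the method's high computational cost.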
One application of the HT algorithm in cultural heritage is to extract planar features. For example, Lerma and Biosca (2005) [111] extracted planar surfaces from the point cloud of a monument through the HT algorithm to remove irrelevant points and reduce the data volume. Another application is to create digital orthophoto maps of cultural relics. A digital orthophoto map is an essential form of documentation in two-dimensional reference systems in cultural heritage. Selecting the appropriate projection surface is the key to generating orthoimages with sufficient geometric accuracy and visual quality [112]. Markiewicz et al. (2015) [113] utilized the extended random Hough transform (ERHT) to detect horizontal and vertical planes in point clouds for automatically generating orthophotos from TLS data and digital images. Maltezos et al. (2018) [114] proposed the adaptive point random Hough transform (APRHT) to extract planes from polyhedral cultural heritage. The advantage of APRHT is that it automatically selects subregions around an adaptive center point through an automatic descent adjustment process for the parameters, removing points that meet distance tolerances and additional normal tolerances with respect to the probed plane. The geometric constraint criteria used by APRHT and the new properties of the accumulator parameters make this algorithm widely applicable, scalable, and general. Alshawabkeh [115] proposed a depth-image-based method for linear feature extraction to detect façade surface features, such as edges and cracks, from TLS point clouds. This work clearly explains and quantifies weathering processes and dangerous fissures in a survey of the Treasury monument in the ancient city of Petra, Jordan.
The advantage of the HT algorithm is that all points are processed independently and they are not affected by outliers. It is robust to noise and can recognize multiple geometric shapes (such as multiple different planes). The disadvantages of the HT algorithm are large computation, high computational complexity, and the choice of parameter length [110].

3.2.2. Random Sample Consensus (RANSAC)

In 1981, Fischler and Bolles [116] proposed the RANSAC model fitting method, which has been widely used in computer vision to detect simple shapes. Since then, many classical RANSAC-based algorithms have been proposed for detecting planar features [117,118,119] and irregular geometric shapes [120,121] from point clouds.
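The core RANSAC loop for plane detection can be sketched in a few lines (an illustration under simplified assumptions, not an implementation from the cited studies): repeatedly fit a candidate plane to a minimal random sample of three points, count the points within a distance tolerance, and keep the best candidate.

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.01, seed=0):
    # Returns the unit normal n and offset d of the plane n·x = d with the
    # most inliers, plus the boolean inlier mask.
    rng = np.random.default_rng(seed)
    best, best_count = (None, None, None), -1
    for _ in range(n_iter):
        # 1. Minimal sample: three points define a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:        # degenerate (collinear) sample, try again
            continue
        n = n / norm
        d = n @ p0
        # 2. Consensus: count points within the distance tolerance.
        mask = np.abs(points @ n - d) < tol
        if mask.sum() > best_count:
            best_count, best = mask.sum(), (n, d, mask)
    return best
```

The per-point distances computed here are also what deviation-map applications threshold when mapping material loss on artifact surfaces.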
The RANSAC algorithm supports the segmentation of historic buildings into subsets of point clouds belonging to building components, including basic geometric shapes such as planes, spheres, cylinders, cones, and tori. For example, Aitekadi et al. (2013) [122] extracted the principal planes of classical architecture from coloured point cloud data combined with RGB values, laser intensity, and geometric data. Chan et al. (2021) [123] used the RANSAC algorithm to separate individual planar features, which is the first step of a point cloud colourization method based on point-to-pixel orthogonal projection. Kivilcim and Zaide (2021) [124] extracted the geometries of architectural façade elements from noisy airborne and ground lidar data; the extracted geometric elements facilitate transfer to BIM following the Industry Foundation Classes (IFC) standards. Macher et al. (2014) [125] segmented geometric elements based on the RANSAC paradigm, such as planes, cylinders, cones, and spheres, from the TLS point cloud data of churches and fortresses of the 11th and 12th centuries. Andrés et al. (2012) [126] used the least median of squares (LMedS) method and an improved RANSAC to segment the surface of the Antioch Gate of an ancient building in Aleppo, Syria. The segmented results were used to construct a parametric geometric model to calculate structural deformation through finite element analysis (FEA).
The RANSAC algorithm can also detect defects on the surfaces of cultural relics by calculating the spatial distance between the original point cloud and a fitted plane. For example, Nespeca and Luca (2016) [127] created a deviation map between the point cloud of a wall and the RANSAC-fitted plane in the Saint-Maurice church in Carrom, southern France. Such a deviation map can reveal areas of material loss on an artifact’s surface, while the roughness of the material surface indicates the degree of surface corrosion and can reflect stormwater runoff. Poux et al. [128] used RANSAC and the convex hull algorithm to extract the contour polygon of each tessera (a small piece of stone, glass, ceramic, or other hard material cut into a cube or other regular shape for mosaic work). The tesserae were then classified using domain knowledge of their size, geometry, and spatial distribution.
Due to its high computational requirements, the HT method exhibits poor runtime performance when applied to the large-scale and geometrically complex structures of cultural heritage. In contrast, the RANSAC algorithm significantly improves the processing of massive point cloud data: it shows strong robustness and effectiveness when dealing with point cloud models [129] and can robustly handle data containing more than 50% outliers [130].
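As a concrete illustration of the outlier robustness just described, the following self-contained NumPy sketch (a simplified, hypothetical implementation, not the code used in the cited studies) fits a dominant plane by repeatedly sampling three points and keeping the hypothesis with the largest inlier set; on synthetic data with roughly one-third outliers it still recovers the planar "wall":

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, rng=None):
    """Fit a plane n.x + d = 0 by repeatedly sampling 3 points and
    keeping the candidate with the most inliers (|distance| < tol)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:              # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# synthetic wall: 200 points on z ~ 0 plus 100 uniform outliers (~33%)
rng = np.random.default_rng(0)
wall = np.c_[rng.uniform(0, 10, (200, 2)), rng.normal(0, 0.01, 200)]
noise = rng.uniform(0, 10, (100, 3))
model, inliers = ransac_plane(np.vstack([wall, noise]), rng=1)
```

In practice, the detected plane is removed from the cloud and the procedure repeats on the remainder, which is how façades are decomposed into multiple planar components.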

3.3. Unsupervised Clustering Based

Point cloud unsupervised classification algorithms mainly include K-means [131], mean shift [132], and fuzzy clustering [133]. Cluster-based 3DPCS methods are better suited to irregular geometric objects than region growing and model fitting methods because they do not require the basic geometry of objects to be predefined. Surface or structural deterioration of cultural heritage often manifests as irregular geometric features, such as biological colonization, weathering, cracks, and partial defects. Clustering algorithms are therefore well suited to extracting surface defects.
Segmenting damaged areas with similar geometric, colour, and reflection intensity features from 3D point cloud data and then mapping the results for a cultural heritage deterioration survey has become an efficient method for diagnosing conservation status [134,135]. Armesto-González et al. (2010) [136] used TLS with unsupervised classification methods to produce a thematic map of damage affecting building materials. This work tested three types of 3D laser scanning equipment (FARO Photon, TRIMBLE GX200, and RIEGL-Z390i) with different wavelengths; the best classification result was obtained with the fuzzy K-means algorithm rather than K-means or ISODATA. Sánchez-Aparicio et al. (2018) [137] used the fuzzy K-means method to detect certain types of pathological processes (biological colonization, salts, or moisture) from point cloud data of highly affected historical masonry. Wood et al. (2021) [138] selected three damage-sensitive features (covariance-based, normal-vector-based, and curvature-based) and used the ordering points to identify the clustering structure (OPTICS) classifier [139] to detect damaged areas (surface damage, defects, and cracks) in frescoes. The advantage of the OPTICS classifier is its ability to detect clusters with different shapes and spatial distributions in 3D space, which makes it suitable for classifying damaged regions with random patterns. Unsupervised clustering of point clouds supports multi-dimensional feature classification; therefore, the reflection intensity and colour information captured during point cloud data acquisition can improve classification accuracy [140,141]. The disadvantage is the difficulty of choosing suitable predefined parameters to obtain a good segmentation, a problem common to all of the abovementioned studies [136,137,138]. Hou et al. (2017) [142] compared the efficiency of four clustering algorithms (K-means, fuzzy C-means, subtractive clustering, and density-based spatial clustering of applications with noise (DBSCAN)) in detecting damaged areas on modern building surfaces; K-means and fuzzy C-means performed better than subtractive clustering and DBSCAN. Although the test was applied to modern architecture, the method provides a reference for damage surveys of historical buildings.
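The practical difference between a partitioning method such as K-means and a density-based method such as DBSCAN, which underlies the comparison by Hou et al. (2017) [142] discussed above, can be reproduced on synthetic data. In this illustrative scikit-learn sketch (the parameter values are our own choices), K-means forces every point, including background noise, into one of k clusters, whereas DBSCAN isolates the two dense "damage" clusters and flags sparse points as noise:

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(0)
# two compact "damage" clusters of 3D points plus sparse background noise
c1 = rng.normal([0, 0, 0], 0.2, (100, 3))
c2 = rng.normal([5, 5, 0], 0.2, (100, 3))
noise = rng.uniform(-2, 7, (20, 3))
X = np.vstack([c1, c2, noise])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
db = DBSCAN(eps=0.5, min_samples=5).fit(X)

# DBSCAN marks low-density points with the label -1 ("noise")
n_db_clusters = len(set(db.labels_) - {-1})
```

The need to choose k (K-means) or eps/min_samples (DBSCAN) in advance is precisely the predefined-parameter difficulty noted for the cited studies.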

4. Three-Dimensional Point Cloud Semantic Segmentation

This section introduces the 3DPCSS algorithms, the public benchmark datasets, and their applications in cultural heritage. Currently, the case studies for 3DPCSS algorithms are mainly based on European historical buildings, with complex structures, diverse roof types, decorated windows, columns in different styles, and complex geometric decorations. A 3DPCSS algorithm can segment building elements and the surrounding environment (the ground and trees) in one task. These elements have different semantic and physical properties and functions in cultural heritage. The segmentation results are used for parametric modeling, incorporating digital models of artifacts into the HBIM environment.

4.1. Supervised Machine Learning

3DPCSS based on machine learning has been widely used for the semantic understanding of indoor and outdoor 3D urban scenes [143,144]. This method has also become a trend in classifying complex structures from cultural heritage point clouds, especially ancient architecture [145,146]. The basic process, from importing raw point clouds to outputting semantically annotated 3D point clouds, includes four steps [38,147,148]:
(1) Point cloud neighborhood selection.
(2) Local feature extraction.
(3) Salient feature selection.
(4) Point cloud supervised classification.
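The four steps can be sketched end-to-end. The example below is an illustrative simplification (feature set and parameters are our own): k-nearest-neighbor neighborhoods are selected with a k-d tree (step 1), covariance eigenvalues yield linearity/planarity/scattering features (step 2), and a random forest performs the supervised classification (step 4); salient feature selection (step 3) is omitted for brevity.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def covariance_features(points, k=15):
    """Steps (1)-(2): k-NN neighborhood selection, then eigenvalue-based
    local features (linearity, planarity, scattering) per point."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
        w = np.maximum(w, 1e-12)
        feats[i] = [(w[0] - w[1]) / w[0],            # linearity
                    (w[1] - w[2]) / w[0],            # planarity
                    w[2] / w[0]]                     # scattering
    return feats

rng = np.random.default_rng(0)
# synthetic scene: a planar "wall" patch and a volumetric "vegetation" blob
wall = np.c_[rng.uniform(0, 5, (300, 2)), rng.normal(0, 0.01, 300)]
blob = rng.normal([8, 8, 8], 0.5, (300, 3))
X = covariance_features(np.vstack([wall, blob]))
y = np.r_[np.zeros(300), np.ones(300)]               # 0 = wall, 1 = blob

# step (4): supervised classification with a random forest
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
```

The choice of k here stands in for the neighborhood-selection problem discussed below: too small a neighborhood makes the eigenvalue features noisy, too large a one blurs class boundaries.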
Weinmann et al. (2015) [147] discussed seven methods for defining point cloud proximity, twenty-one geometric features, seven feature selection methods, and ten classifiers for supervised classification. They used two standard datasets (urban 3D point clouds) to evaluate the method in a detailed assessment (Figure 2).
The applications of 3DPCSS based on supervised machine learning involve investigating damaged areas [149,150] and extracting building structures [34,151,152]. Depending on the data source, point density, object type, and application, different choices are made at each segmentation step. Table 4 summarizes the semantic segmentation literature according to the process in Figure 2.
To improve the effect of semantic segmentation, some scholars [34,150,151,152] have used radiometric features and colour information as additional features. Since the density of cultural heritage point clouds generated by fusing various sensors is not uniform, the point cloud neighborhood must be selected according to the actual characteristics of the data, and neighborhood selection in turn affects local feature extraction. For example, neighborhood selection differs across spatial scales when local features are calculated in the feature selection step [34,151,152]. Teruggi et al. (2020) [153] adopted a hierarchical machine learning method for the semantic segmentation of multi-level, multi-resolution point clouds, progressing from the main structures of a building (such as roofs, the ground, and walls) to basic building elements (such as doors and windows) and finally to architectural details (such as ornaments and sculptures on pillars). These cases demonstrate the success of machine learning algorithms in extracting cultural heritage deterioration information and in the refined segmentation of structures. However, a gap remains between applications in the cultural heritage field and those in 3D urban scenes. For example, the machine learning classifiers used for segmentation are dominated by random forests. Such algorithms build local geometric feature descriptors from a single point or a set of points and do not consider the point cloud's contextual features. Although individual classifiers are computationally efficient, they are sensitive to noise, which degrades classification accuracy.

4.2. Deep Learning

With the emergence of deep learning techniques, point cloud semantic segmentation has improved tremendously, and many deep learning models have been proposed for the semantic segmentation of 3D point clouds [154]. Compared with traditional segmentation algorithms, deep-learning-based methods better capture multi-scale spatial 3D information with semantic information at different levels of granularity, such as semantic segmentation (scene level), instance segmentation (object level), and part segmentation (part level) [155,156]. Because point clouds are irregular, deep-learning-based point cloud semantic segmentation models are divided into indirect and direct methods [157]. The indirect method converts the irregular point cloud into a regular structure to achieve segmentation. For example, Pellis et al. (2022) [158] adopted the DeepLabv3+ deep learning model (a CNN model) for 2D image semantic segmentation, used a version of the network pre-trained on the ImageNet database with ResNet-18 as the backbone classification architecture, and finally mapped the semantic information back onto the point cloud through the direct linear transform (DLT) algorithm. The method was validated on four European architectural heritage sites, labeling the 3D point clouds with eleven classes (arch, column, moldings, floor, door, wall, stairs, vault, roof, other, and none). However, indirect semantic segmentation of point clouds has unavoidable defects: since images do not carry full 3D information, areas invisible in the images cannot be classified.
PointNet [159] is a pioneering deep learning framework that directly uses the 3D information of the point cloud for classification. In a further study, Qi et al. (2017) [160] improved the basic PointNet model with a hierarchical neural network (PointNet++) to capture local geometric features. Malinverni et al. (2019) [161] chose PointNet++ for the semantic segmentation of 3D point clouds of historical buildings and achieved remarkable results. Three-dimensional deep learning research focuses on enhancing features, especially local features and the relationships between points, using knowledge from the cultural heritage field to improve the performance of the basic PointNet++ algorithms. To that end, Wang et al. (2019) [162] designed an operation called EdgeConv that extracts edge features while maintaining permutation invariance, resulting in the dynamic graph CNN (DGCNN). Pierdicca et al. (2020) [29] proposed a framework to recognize historical architectural elements (Figure 3) that adopts an improved DGCNN with additional meaningful features. Morbidoni et al. (2020) [163] presented an improved version of DGCNN named RadDGCNN, which exploits the radius distance. Francesca et al. (2020) [164] compared machine learning methods (k-nearest neighbor, naive Bayes, decision trees, and random forests) and deep learning methods (DGCNN, DGCNN-Mod, and DGCNN-3Dfeat) on European architectural heritage point cloud data, and then proposed an architecture named DGCNN-Mod+3Dfeat for large-scale 3D cultural heritage semantic segmentation. Chen et al. (2021) [165] proposed a ring grouping neural network architecture with an attention module (RGAM) to enhance complex scene recognition. Lee et al. (2021) [166] proposed a graph-based hierarchical DGCNN (HGCNN) model for representing 3D objects such as bridge components in the BIM framework. Yin et al. (2021) [167] proposed a deep-learning-based approach, ResPointNet++, which integrates deep residual learning with a conventional PointNet++ network to automatically create as-built BIM models from point clouds.
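The key design of PointNet, a shared per-point MLP followed by a symmetric aggregation function, can be shown in isolation. This NumPy sketch uses random untrained weights and omits the T-Net alignment and segmentation head, so it illustrates only the permutation-invariance property, not the full architecture: the global feature is identical for any reordering of the input points.

```python
import numpy as np

rng = np.random.default_rng(0)

# shared per-point MLP weights (applied identically to every point)
W1, b1 = rng.normal(size=(3, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 64)), np.zeros(64)

def global_feature(points):
    """Lift each point to a feature vector with a shared MLP, then
    max-pool (a symmetric function) into an order-independent
    global descriptor -- the core idea of PointNet."""
    h = np.maximum(points @ W1 + b1, 0)   # per-point ReLU layer
    h = np.maximum(h @ W2 + b2, 0)
    return h.max(axis=0)                  # symmetric aggregation

cloud = rng.normal(size=(128, 3))
shuffled = cloud[rng.permutation(128)]
```

Because max pooling discards which point produced each feature, PointNet is weak at local geometry, which is exactly what PointNet++ (hierarchical grouping) and DGCNN (EdgeConv on dynamic neighbor graphs) address.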
However, deep learning methods also have limitations. Building a suitable deep learning model requires substantial human effort to label and prepare training data in the early stages, and manually annotating large point clouds is very time-consuming. Moreover, compared with 3D urban scenes, annotating point clouds of cultural heritage requires specific domain knowledge (a correct understanding of the cultural heritage scene and structure), and non-professional annotation is prone to errors.

4.3. Public Benchmark Dataset

A public standard benchmark dataset helps train 3DPCSS models, based on which the automation of point cloud data processing and 3D reconstruction of cultural heritage can be improved. An intensive search indicated only a handful of benchmark datasets that include cultural heritage, most of which concern architectural heritage. The architectural cultural heritage (ArCH) dataset was collected by 3D laser scanning and oblique photography and includes 17 annotated and 10 unlabeled scenes [168]. The WHU-TLS point cloud benchmark dataset [169,170,171] is not dedicated to the cultural heritage field but includes a small amount of architectural heritage. SEMANTIC3D.NET [172] presents a 3D point cloud classification benchmark with manually labeled points and covers a few historic buildings. Pepe et al. (2022) [173] shared a point cloud for the 3D reconstruction of the Temple of Hera (Italy) based on a photogrammetric approach and georeferenced through a UAV survey. The 3D Pottery dataset [174] includes 1012 digitized, hand-modeled, and semi-automatically generated 3D models. Given the wide variety of cultural heritage artifacts, such a small number of benchmark point cloud datasets hinders the automatic interpretation of cultural heritage scenes.

5. Discussion

5.1. Multi-Source Point Cloud Data

Multi-source point cloud data fusion aims to obtain more complete data (TLS has difficulty observing building roofs), multi-resolution data (UAV and TLS data fusion), and additional information (colour, spectral information, etc.), as described in Section 2. Owing to differences in observation modes, spatial scales, and accuracy among platforms and sensors, multi-source point clouds suffer from a series of problems, such as uneven point density, different colour spaces (e.g., RGB or HSV), multiple spatial scales, and additional heterogeneous sensor information. These characteristics also bring new challenges to 3DPCSS. For example, 3D laser scanning point clouds contain laser reflection intensity, while the dense matching point cloud of a photogrammetric survey has colour information. Both reflection intensity and pixel colour contribute to point cloud semantic segmentation; however, for the same scene, semantic segmentation of point cloud data obtained by 3D laser scanning and by photogrammetry may lead to different results.

5.2. Over-Segmentation Results in Useless Classes

Over-segmentation is a common problem in semantic segmentation, resulting in many meaningless classes, so further structural regularization and smoothing of the segmentation results are necessary. Hao et al. (2022) [175] designed a self-supervised pretext task to improve the poor performance of point cloud semantic segmentation in detail processing, especially in boundary regions. Yang et al. (2022) [176] built an encoder–decoder network based on conditional random field (CRF) graph convolution (CRFConv) to enhance the localization ability of the network and thereby benefit segmentation. However, there is no similar treatment of 3DPCSS in cultural heritage. A potential solution to the over-segmentation problem is introducing contextual information or a domain knowledge strategy into the 3DPCSS process [177]. For example, the doors and windows of buildings often do not need more detailed semantic subdivision; if a new category appears within a door or window area, it should be merged into the major category. The 3DPCSS algorithms for cultural heritage can be improved through such known conditions, making the segmentation results more aligned with human cognition. Colucci et al. (2021) [178] used an ontological scheme to guide the segmentation of cultural heritage point clouds and then generated parametric geometries for use in a historical building information model (HBIM).
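A minimal form of the domain-knowledge merging strategy described above, relabeling spurious small segments to their nearest large segment, can be sketched as follows (an illustrative NumPy/SciPy implementation with hypothetical names and thresholds, not a method from the cited works):

```python
import numpy as np
from scipy.spatial import cKDTree

def merge_small_segments(points, labels, min_size=30):
    """Domain-knowledge post-processing: relabel each under-sized segment
    to the label of the nearest point belonging to a large segment."""
    labels = labels.copy()
    ids, counts = np.unique(labels, return_counts=True)
    small = set(ids[counts < min_size])
    if not small or len(small) == len(ids):
        return labels
    keep = ~np.isin(labels, list(small))        # points in large segments
    tree = cKDTree(points[keep])
    kept_labels = labels[keep]
    tiny = ~keep
    _, nearest = tree.query(points[tiny])       # nearest large-segment point
    labels[tiny] = kept_labels[nearest]
    return labels

# a 100-point "wall" segment with a spurious 5-point segment inside it
pts = np.random.default_rng(0).uniform(0, 1, (105, 3))
lab = np.r_[np.zeros(100, dtype=int), np.full(5, 7)]
merged = merge_small_segments(pts, lab, min_size=30)
```

In a real pipeline, the size threshold would be replaced by semantic rules (e.g., "no new class inside a door or window region"), but the merge mechanics are the same.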

5.3. Supervised Machine Learning Versus Deep Learning

Learning-based 3DPCSS algorithms require a large number of labeled point cloud samples, and manual labeling is inefficient. Compared with typical 3D urban scenes, annotating point cloud samples of cultural heritage relies more heavily on professional knowledge and domain experts. Weakly supervised learning, which has already been applied in urban scenarios, is an alternative approach that avoids this exhausting annotation [179,180].
Deep learning can automatically learn features from massive point cloud data without relying on manually designed features. However, there are few point cloud datasets of cultural heritage landscapes, immovable heritage, or objects, and their content is insufficiently rich. The diversity of artifact geometries also results in unbalanced samples across artifact types; the cultural heritage field in particular contains extremely complex geometric forms and unique appearances, such as decorative objects and sculptures. Furthermore, since point cloud data are unstructured (discrete points in space), standard convolutional neural network (CNN) filters cannot be applied directly. Most deep learning models therefore require extensive preprocessing of point clouds, such as voxelization, which increases algorithmic complexity and loses local details.
The classifiers and feature selections commonly used in machine learning methods are shown in Table 4. The training of these classifiers depends on the selection of informative features, and machine learning methods achieve good classification results only when the training data and the data to be classified share the same characteristics. Judging from the contributions in the literature [29,34,149,151,164], learning-based methods have mainly been applied to ancient European buildings, achieving satisfactory research progress and results and accumulating several datasets. However, 3DPCSS algorithms are rarely applied to other types of cultural heritage, such as grottoes, caves, and ancient ruins.

5.4. The Application of 3DPCSS in Cultural Heritage

Point cloud data have been used for the 3D reconstruction of cultural heritage for digital documentation and visualization. As important spatial information, point cloud data can express complex geometric shapes through sufficiently dense points. As shown in Section 2, data fusion yields uneven point density, and complex internal structures lead to incomplete point clouds. In addition, point cloud data lack semantic, attribute, and functional information. These problems make point cloud data difficult for cultural heritage experts to use directly: the user must identify the collections of points that belong to individual surfaces and then fit surfaces and solid geometry objects appropriate for the analysis [181]. The point clouds of cultural heritage landscapes and archaeological sites need semantic segmentation of their elements to generate vector data for thematic mapping in GIS, enabling spatial retrieval, semantic annotation, and heritage management. For the immovable heritage of historical buildings, semantic segmentation is also needed to build parametric models, such as 3D CAD or HBIM, to support numerical simulation and restoration engineering management. Because those tasks are prohibitively time-consuming when performed manually, 3DPCSS research has focused on developing algorithms that automate vector data extraction and parametric or solid modeling.
The particularity of cultural heritage makes the industrial application of 3DPCSS more challenging. For example, deformation analysis can provide input for the structural analysis (stress and deformation) of wooden building construction, which requires the algorithm to extract building units from complex structures and irregular geometric shapes [182]. The classification and analysis of damaged surface areas also require highly accurate segmentation: ancient architectural heritage undergoes deformation (e.g., a straight boundary becoming a circular arc) and local defects, and surface damage is geometrically irregular and involves small deformations, so the distance between high-roughness areas and the reference surface must be calculated accurately. 3DPCSS can effectively assist the generation of 3D geometric models and solve key problems in rehabilitating a heritage building, such as controlling the deformations related to static and dynamic structural behavior [183]. Three-dimensional geometric models can be further used for 3D numerical finite element analysis to study the structural response, providing information on construction rules, geometry, and the connections between different structural elements [33,184]. However, the point cloud covers only the visible surfaces, while the internal structure is missing; therefore, the segmentation result cannot be converted directly into a solid model and may even require knowledge of the architectural structure or historical drawings. In the construction of HBIM, the point cloud or mesh of 3D components can be inserted into the HBIM environment; in particular, parametric modeling struggles to express complex geometric objects and structures (such as decorations and sculptures) [185].

5.5. Understanding and Cognition of Cultural Heritage 3D Scenes

3DPCSS promotes the 3D virtual reconstruction of cultural heritage with semantic information. An autonomous understanding of 3D scenes can be achieved by identifying the physical meaning of each point or set of points in the point cloud. Furthermore, 3DPCSS results are widely used in 3D GIS, HBIM, and other 3D spatial information systems or models for complex calculations such as spatial and structural analysis. These calculations and analyses help reveal the spatial relationships and patterns between physical entities and elements in cultural heritage 3D scenes. However, forming spatial cognition still depends on a complex data processing chain that transforms point clouds into the raster or vector data required by a 3D spatial information system. As a data source, the geometric detail that point clouds provide for presenting 3D scenes with complex shapes and structures is unmatched by parametric models, and cultural heritage in particular needs such refined three-dimensional representations; for example, irregular surfaces may be deterioration areas. Nevertheless, point clouds still face significant challenges in supporting autonomous cognition without relying on 3D spatial information systems.
The smart point cloud is a promising development trend in the field of cultural heritage (detailed discussions of the smart point cloud concept can be found in the papers published by Poux et al. [6,17,18,186]). After point cloud semantic segmentation, each point has spatial coordinates, physical attributes, semantic information, etc.; this transforms the point cloud into a high-dimensional point cloud. High-dimensional point clouds require new data structures for the individual points, described by standardized meta-data or conceptual models, and defining such structures depends heavily on cultural heritage knowledge. In a cultural heritage point cloud 3D scene, a structured presentation is required to express the spatial relationships between single points or point sets (point cloud patches) with the same semantic label, converting discrete, disordered point clouds into geometric primitives with topology that support the computation and analysis of spatiotemporal patterns. For example, methods for expressing, computing, and querying the spatial topological relationships between point cloud patches must be specified in detail.

6. Conclusions

This paper reviews and analyzes current 3DPCS and 3DPCSS algorithms and their applications in cultural heritage. Since 3DPCS and 3DPCSS algorithms depend heavily on the density and additional information of point cloud data, this paper first introduces point cloud data acquisition and two development trends in the cultural heritage field: single platforms with multiple sensors and multi-platform data fusion. 3DPCS algorithms are mainly used to extract basic geometric elements from point cloud data and can support surface damage detection and deformation analysis. Supported by machine learning and deep learning technologies, 3DPCSS algorithms have become the focus of current research, as learning-based approaches can separate different categories of elements in a point cloud scene in one task. According to the existing studies, the main applications of 3DPCSS in the field of cultural heritage concern landscapes and immovable cultural heritage, especially historical buildings and their surrounding environments. The extracted geometric primitives can be directly associated with semantic information and external knowledge, improving the efficiency of parametric models such as BIM and HBIM. The development of the HBIM platform illustrates the strengthening connection between geometric entities and the physical domain. The combination of 3D digitization, semantic segmentation, reconstruction of geometric entities, and knowledge engineering enables computers to understand the scene content expressed by digital models rather than merely visualizing scenes. More importantly, based on semantic annotation, technical terms, qualitative attributes, morphological characteristics, and the correlations between physical data, digital twin technology can realize the mapping and feedback between the physical world and the digital world to support the preventive conservation of heritage places [187,188,189,190].
However, the diversity and geometric complexity of cultural heritage pose challenges for point cloud semantic segmentation. Many methods have produced adequate results for automatically segmenting cultural heritage geometric structures and damaged areas, but they are mainly limited to historic buildings; more publicly available datasets are needed to enable the discussion of algorithms across more types of cultural heritage. The main open problems in the 3DPCSS of cultural heritage include the semantic consistency of multi-source point cloud data, semantic coherence across spatial scales, and over-segmentation. Learning-based methods are the primary development trend, but they are limited by the scarcity of public benchmark datasets and by the generalization ability of the algorithms. Semantic segmentation technology will also be used for scene cognition to identify the implicit spatial knowledge and patterns in cultural heritage scenes.

Author Contributions

Conceptualization, S.Y. and M.H.; methodology, S.Y.; validation, S.Y. and M.H.; formal analysis, S.Y.; investigation, S.Y., M.H. and S.L.; resources, M.H.; writing—original draft preparation, S.Y.; writing—review and editing, M.H. and S.L.; visualization, S.Y.; supervision, M.H. and S.L.; project administration, M.H.; funding acquisition, M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Beijing Natural Science Foundation, grant number KZ202110016021, and the National Natural Science Foundation of China, grant number 4217012259.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bakirman, T.; Bayram, B.; Akpinar, B.; Karabulut, M.F.; Bayrak, O.C.; Yigitoglu, A.; Seker, D.Z. Implementation of ultra-light UAV systems for cultural heritage documentation. J. Cult. Herit. 2020, 44, 174–184. [Google Scholar] [CrossRef]
  2. Pan, Y.; Dong, Y.; Wang, D.; Chen, A.; Ye, Z. Three-Dimensional Reconstruction of Structural Surface Model of Heritage Bridges Using UAV-Based Photogrammetric Point Clouds. Remote Sens. 2019, 11, 1204. [Google Scholar] [CrossRef] [Green Version]
  3. Yastikli, N. Documentation of cultural heritage using digital photogrammetry and laser scanning. J. Cult. Herit. 2007, 8, 423–427. [Google Scholar] [CrossRef]
  4. Pavlidis, G.; Koutsoudis, A.; Arnaoutoglou, F.; Tsioukas, V.; Chamzas, C. Methods for 3D digitization of Cultural Heritage. J. Cult. Herit. 2007, 8, 93–98. [Google Scholar] [CrossRef] [Green Version]
  5. Pepe, M.; Costantino, D.; Alfio, V.S.; Restuccia, A.G.; Papalino, N.M. Scan to BIM for the digital management and representation in 3D GIS environment of cultural heritage site. J. Cult. Herit. 2021, 50, 115–125. [Google Scholar] [CrossRef]
  6. Poux, F.; Neuville, R.; Van Wersch, L.; Nys, G.-A.; Billen, R. 3D Point Clouds in Archaeology: Advances in Acquisition, Processing and Knowledge Integration Applied to Quasi-Planar Objects. Geosciences 2017, 7, 96. [Google Scholar] [CrossRef] [Green Version]
  7. Barrile, V.; Bernardo, E.; Fotia, A.; Bilotta, G. A Combined Study of Cultural Heritage in Archaeological Museums: 3D Survey and Mixed Reality. Heritage 2022, 5, 1330–1349. [Google Scholar] [CrossRef]
  8. Bekele, M.K.; Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Gain, J. A Survey of Augmented, Virtual, and Mixed Reality for Cultural Heritage. J. Comput. Cult. Herit. 2018, 11, 1–36. [Google Scholar] [CrossRef]
  9. Xie, Y.; Tian, J.; Zhu, X.X. Linking Points with Labels in 3D: A Review of Point Cloud Semantic Segmentation. IEEE Geosci. Remote Sens. Mag. 2020, 8, 38–59. [Google Scholar] [CrossRef] [Green Version]
  10. Poux, F.; Billen, R. Voxel-based 3D Point Cloud Semantic Segmentation: Unsupervised Geometric and Relationship Featuring vs. Deep Learning Methods. ISPRS Int. J. Geo-Inf. 2019, 8, 213. [Google Scholar] [CrossRef]
  11. Bosché, F.; Ahmed, M.; Turkan, Y.; Haas, C.T.; Haas, R. The value of integrating Scan-to-BIM and Scan-vs-BIM techniques for construction monitoring using laser scanning and BIM: The case of cylindrical MEP components. Autom. Constr. 2015, 49, 201–213. [Google Scholar] [CrossRef]
  12. Rocha, G.; Mateus, L.; Fernández, J.; Ferreira, V. A Scan-to-BIM Methodology Applied to Heritage Buildings. Heritage 2020, 3, 47–67. [Google Scholar] [CrossRef] [Green Version]
  13. Volk, R.; Stengel, J.; Schultmann, F. Building Information Modeling (BIM) for existing buildings—Literature review and future needs. Autom. Constr. 2014, 38, 109–127. [Google Scholar] [CrossRef] [Green Version]
  14. López, F.; Lerones, P.; Llamas, J.; Gómez-García-Bermejo, J.; Zalama, E. A Review of Heritage Building Information Modeling (H-BIM). Multimodal Technol. Interact. 2018, 2, 21. [Google Scholar] [CrossRef] [Green Version]
  15. Pocobelli, D.P.; Boehm, J.; Bryan, P.; Still, J.; Grau-Bové, J. BIM for heritage science: A review. Herit. Sci. 2018, 6, 30. [Google Scholar] [CrossRef] [Green Version]
  16. Yang, S.; Hou, M.; Shaker, A.; Li, S. Modeling and Processing of Smart Point Clouds of Cultural Relics with Complex Geometries. ISPRS Int. J. Geo-Inf. 2021, 10, 617. [Google Scholar] [CrossRef]
  17. Florent Poux, R.B. A Smart Point Cloud Infrastructure for intelligent environments. In Laser Scanning; CRC Press: London, UK, 2019; p. 23. [Google Scholar]
  18. Poux, F.; Neuville, R.; Hallot, P.; Billen, R. Model for Semantically Rich Point Cloud Data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-4/W5, 107–115. [Google Scholar] [CrossRef] [Green Version]
  19. Alkadri, M.F.; Alam, S.; Santosa, H.; Yudono, A.; Beselly, S.M. Investigating Surface Fractures and Materials Behavior of Cultural Heritage Buildings Based on the Attribute Information of Point Clouds Stored in the TLS Dataset. Remote Sens. 2022, 14, 410. [Google Scholar] [CrossRef]
20. Arias, P.; González-Aguilera, D.; Riveiro, B.; Caparrini, N. Orthoimage-Based Documentation of Archaeological Structures: The Case of a Mediaeval Wall in Pontevedra, Spain. Archaeometry 2011, 53, 858–872. [Google Scholar] [CrossRef]
  21. Chen, S.; Hu, Q.; Wang, S.; Yang, H. A Virtual Restoration Approach for Ancient Plank Road Using Mechanical Analysis with Precision 3D Data of Heritage Site. Remote Sens. 2016, 8, 828. [Google Scholar] [CrossRef]
  22. Yang, S.; Xu, S.; Huang, W. 3D Point Cloud for Cultural Heritage: A Scientometric Survey. Remote Sens. 2022, 14, 5542. [Google Scholar] [CrossRef]
  23. Ronchi, A.M. Cultural Content. In eCulture: Cultural Content in the Digital Age; Springer: Berlin/Heidelberg, Germany, 2009; pp. 15–20. [Google Scholar]
  24. Van Eetvelde, V.; Antrop, M. Indicators for assessing changing landscape character of cultural landscapes in Flanders (Belgium). Land Use Policy 2009, 26, 901–910. [Google Scholar] [CrossRef]
  25. Soler, F.; Melero, F.J.; Luzón, M.V. A complete 3D information system for cultural heritage documentation. J. Cult. Herit. 2017, 23, 49–57. [Google Scholar] [CrossRef]
  26. Sánchez, M.L.; Cabrera, A.T.; Del Pulgar, M.L.G. Guidelines from the heritage field for the integration of landscape and heritage planning: A systematic literature review. Landsc. Urban Plan. 2020, 204, 103931. [Google Scholar] [CrossRef]
  27. Moyano, J.; Justo-Estebaranz, Á.; Nieto-Julián, J.E.; Barrera, A.O.; Fernández-Alconchel, M. Evaluation of records using terrestrial laser scanner in architectural heritage for information modeling in HBIM construction: The case study of the La Anunciación church (Seville). J. Build. Eng. 2022, 62, 105190. [Google Scholar] [CrossRef]
  28. Barrile, V.; Fotia, A. A proposal of a 3D segmentation tool for HBIM management. Appl. Geomat. 2021, 14, 197–209. [Google Scholar] [CrossRef]
  29. Pierdicca, R.; Paolanti, M.; Matrone, F.; Martini, M.; Morbidoni, C.; Malinverni, E.S.; Frontoni, E.; Lingua, A.M. Point cloud semantic segmentation using a deep learning framework for cultural heritage. Remote Sens. 2020, 12, 1005. [Google Scholar] [CrossRef] [Green Version]
  30. Chew, A.W.Z.; Ji, A.; Zhang, L. Large-scale 3D point-cloud semantic segmentation of urban and rural scenes using data volume decomposition coupled with pipeline parallelism. Autom. Constr. 2022, 133, 103995. [Google Scholar] [CrossRef]
  31. Chen, Y.; Xiong, Y.; Zhang, B.; Zhou, J.; Zhang, Q. 3D point cloud semantic segmentation toward large-scale unstructured agricultural scene classification. Comput. Electron. Agric. 2021, 190, 106445. [Google Scholar] [CrossRef]
  32. Grandio, J.; Riveiro, B.; Soilán, M.; Arias, P. Point cloud semantic segmentation of complex railway environments using deep learning. Autom. Constr. 2022, 141, 104425. [Google Scholar] [CrossRef]
  33. Angjeliu, G.; Cardani, G.; Coronelli, D. A parametric model for ribbed masonry vaults. Autom. Constr. 2019, 105, 102785. [Google Scholar] [CrossRef]
  34. Grilli, E.; Özdemir, E.; Remondino, F. Application of Machine and Deep Learning Strategies for The Classification of Heritage Point Clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-4/W18, 447–454. [Google Scholar] [CrossRef] [Green Version]
  35. Hamid-Lakzaeian, F. Point cloud segmentation and classification of structural elements in multi-planar masonry building facades. Autom. Constr. 2020, 118, 103232. [Google Scholar] [CrossRef]
  36. Grilli, E.; Remondino, F. Classification of 3D Digital Heritage. Remote Sens. 2019, 11, 847. [Google Scholar] [CrossRef] [Green Version]
  37. Li, Y.; Luo, Y.; Gu, X.; Chen, D.; Gao, F.; Shuang, F. Point Cloud Classification Algorithm Based on the Fusion of the Local Binary Pattern Features and Structural Features of Voxels. Remote Sens. 2021, 13, 3156. [Google Scholar] [CrossRef]
38. Hackel, T.; Wegner, J.D.; Savinov, N.; Ladicky, L.; Schindler, K.; Pollefeys, M. Large-Scale Supervised Learning for 3D Point Cloud Labeling: Semantic3D.net. Photogramm. Eng. Remote Sens. 2018, 84, 297–308. [Google Scholar] [CrossRef]
  39. Ramiya, A.M.; Nidamanuri, R.R.; Ramakrishnan, K. A supervoxel-based spectro-spatial approach for 3D urban point cloud labelling. Int. J. Remote Sens. 2016, 37, 4172–4200. [Google Scholar] [CrossRef]
  40. Grilli, E.; Menna, F.; Remondino, F. A Review of Point Clouds Segmentation and Classification Algorithms. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 339–344. [Google Scholar] [CrossRef] [Green Version]
  41. van Oosterom, P.; Martinez-Rubi, O.; Ivanova, M.; Horhammer, M.; Geringer, D.; Ravada, S.; Tijssen, T.; Kodde, M.; Gonçalves, R. Massive point cloud data management: Design, implementation and execution of a point cloud benchmark. Comput. Graph. 2015, 49, 92–125. [Google Scholar] [CrossRef]
  42. Yang, J.; Huang, X. A Hybrid Spatial Index for Massive Point Cloud Data Management and Visualization. Trans. GIS 2014, 18, 97–108. [Google Scholar] [CrossRef]
  43. Yang, X.; Grussenmeyer, P.; Koehl, M.; Macher, H.; Murtiyoso, A.; Landes, T. Review of built heritage modelling: Integration of HBIM and other information techniques. J. Cult. Herit. 2020, 46, 350–360. [Google Scholar] [CrossRef]
  44. Bassier, M.; Vergauwen, M. Unsupervised reconstruction of Building Information Modeling wall objects from point cloud data. Autom. Constr. 2020, 120, 103338. [Google Scholar] [CrossRef]
  45. Moyano, J.; León, J.; Nieto-Julián, J.E.; Bruno, S. Semantic interpretation of architectural and archaeological geometries: Point cloud segmentation for HBIM parameterisation. Autom. Constr. 2021, 130, 103856. [Google Scholar] [CrossRef]
  46. Rashdi, R.; Martínez-Sánchez, J.; Arias, P.; Qiu, Z. Scanning Technologies to Building Information Modelling: A Review. Infrastructures 2022, 7, 49. [Google Scholar] [CrossRef]
47. Nguyen, A.; Le, B. 3D Point Cloud Segmentation: A survey. In Proceedings of the 6th IEEE International Conference on Robotics, Automation and Mechatronics (RAM), De La Salle University, Manila, Philippines, 12–15 November 2013; pp. 225–230. [Google Scholar]
  48. Salonia, P.; Scolastico, S.; Pozzi, A.; Marcolongo, A.; Messina, T.L. Multi-scale cultural heritage survey: Quick digital photogrammetric systems. J. Cult. Herit. 2009, 10, e59–e64. [Google Scholar] [CrossRef]
  49. McCarthy, J. Multi-image photogrammetry as a practical tool for cultural heritage survey and community engagement. J. Archaeol. Sci. 2014, 43, 175–185. [Google Scholar] [CrossRef]
  50. Nikolakopoulos, K.G.; Soura, K.; Koukouvelas, I.K.; Argyropoulos, N.G. UAV vs. classical aerial photogrammetry for archaeological studies. J. Archaeol. Sci. Rep. 2017, 14, 758–773. [Google Scholar] [CrossRef]
  51. Vavulin, M.V.; Chugunov, K.V.; Zaitceva, O.V.; Vodyasov, E.V.; Pushkarev, A.A. UAV-based photogrammetry: Assessing the application potential and effectiveness for archaeological monitoring and surveying in the research on the ‘valley of the kings’ (Tuva, Russia). Digit. Appl. Archaeol. Cult. Herit. 2021, 20, e00172. [Google Scholar] [CrossRef]
  52. Jeon, E.-I.; Yu, S.-J.; Seok, H.-W.; Kang, S.-J.; Lee, K.-Y.; Kwon, O.-S. Comparative evaluation of commercial softwares in UAV imagery for cultural heritage recording: Case study for traditional building in South Korea. Spat. Inf. Res. 2017, 25, 701–712. [Google Scholar] [CrossRef]
  53. Kingsland, K. Comparative analysis of digital photogrammetry software for cultural heritage. Digit. Appl. Archaeol. Cult. Herit. 2020, 18, e00157. [Google Scholar] [CrossRef]
  54. Szeliski, R. Computer Vision: Algorithms and Applications; Springer Nature: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  55. Aicardi, I.; Chiabrando, F.; Maria Lingua, A.; Noardo, F. Recent trends in cultural heritage 3D survey: The photogrammetric computer vision approach. J. Cult. Herit. 2018, 32, 257–266. [Google Scholar] [CrossRef]
  56. Murtiyoso, A.; Grussenmeyer, P. Documentation of heritage buildings using close-range UAV images: Dense matching issues, comparison and case studies. Photogramm. Rec. 2017, 32, 206–229. [Google Scholar] [CrossRef] [Green Version]
57. Grussenmeyer, P.; Alby, E.; Landes, T.; Koehl, M.; Guillemin, S.; Hullo, J.-F.; Assali, P.; Smigiel, E. Recording approach of heritage sites based on merging point clouds from high resolution photogrammetry and terrestrial laser scanning. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 553–558. [Google Scholar] [CrossRef] [Green Version]
  58. Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F. State of the art in high density image matching. Photogramm. Rec. 2014, 29, 144–166. [Google Scholar] [CrossRef] [Green Version]
  59. Pepe, M.; Alfio, V.S.; Costantino, D. UAV Platforms and the SfM-MVS Approach in the 3D Surveys and Modelling: A Review in the Cultural Heritage Field. Appl. Sci. 2022, 12, 12886. [Google Scholar] [CrossRef]
  60. Capolupo, A. Accuracy assessment of cultural heritage models extracting 3D point cloud geometric features with RPAS SfM-MVS and TLS techniques. Drones 2021, 5, 145. [Google Scholar] [CrossRef]
  61. Koutsoudis, A.; Ioannakis, G.; Arnaoutoglou, F.; Kiourt, C.; Chamzas, C. 3D reconstruction challenges using structure-from-motion. In Applying Innovative Technologies in Heritage Science; IGI Global: Hershey, PA, USA, 2020; pp. 138–152. [Google Scholar]
  62. Adamopoulos, E.; Rinaudo, F. Enhancing image-based multiscale heritage recording with near-infrared data. ISPRS Int. J. Geo-Inf. 2020, 9, 269. [Google Scholar] [CrossRef] [Green Version]
63. Peppa, M.; Mills, J.; Fieber, K.; Haynes, I.; Turner, S.; Turner, A.; Douglas, M.; Bryan, P. Archaeological feature detection from archive aerial photography with a SfM-MVS and image enhancement pipeline. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII-2, 869–875. [Google Scholar] [CrossRef] [Green Version]
  64. Ju, Y.; Shi, B.; Jian, M.; Qi, L.; Dong, J.; Lam, K.-M. NormAttention-PSN: A High-frequency Region Enhanced Photometric Stereo Network with Normalized Attention. Int. J. Comput. Vis. 2022, 130, 3014–3034. [Google Scholar] [CrossRef]
  65. Woodham, R.J. Photometric method for determining surface orientation from multiple images. Opt. Eng. 1980, 19, 139–144. [Google Scholar] [CrossRef]
  66. Risbøl, O.; Briese, C.; Doneus, M.; Nesbakken, A. Monitoring cultural heritage by comparing DEMs derived from historical aerial photographs and airborne laser scanning. J. Cult. Herit. 2015, 16, 202–209. [Google Scholar] [CrossRef] [Green Version]
  67. Damięcka-Suchocka, M.; Katzer, J.; Suchocki, C. Application of TLS Technology for Documentation of Brickwork Heritage Buildings and Structures. Coatings 2022, 12, 1963. [Google Scholar] [CrossRef]
  68. di Filippo, A.; Sánchez-Aparicio, L.; Barba, S.; Martín-Jiménez, J.; Mora, R.; González Aguilera, D. Use of a Wearable Mobile Laser System in Seamless Indoor 3D Mapping of a Complex Historical Site. Remote Sens. 2018, 10, 1897. [Google Scholar] [CrossRef] [Green Version]
  69. Lou, L.; Wei, C.; Wu, H.; Yang, C. Cave feature extraction and classification from rockery point clouds acquired with handheld laser scanners. Herit. Sci. 2022, 10, 177. [Google Scholar] [CrossRef]
  70. Ramm, R.; Heinze, M.; Kühmstedt, P.; Christoph, A.; Heist, S.; Notni, G. Portable solution for high-resolution 3D and colour texture on-site digitization of cultural heritage objects. J. Cult. Herit. 2022, 53, 165–175. [Google Scholar] [CrossRef]
  71. Gomes, L.; Regina Pereira Bellon, O.; Silva, L. 3D reconstruction methods for digital preservation of cultural heritage: A survey. Pattern Recognit. Lett. 2014, 50, 3–14. [Google Scholar] [CrossRef]
  72. Maté-González, M.Á.; Di Pietra, V.; Piras, M. Evaluation of Different LiDAR Technologies for the Documentation of Forgotten Cultural Heritage under Forest Environments. Sensors 2022, 22, 6314. [Google Scholar] [CrossRef] [PubMed]
  73. Ruiz, R.M.; Torres, M.T.M.; Allegue, P.S. Comparative Analysis Between the Main 3D Scanning Techniques: Photogrammetry, Terrestrial Laser Scanner, and Structured Light Scanner in Religious Imagery: The Case of The Holy Christ of the Blood. J. Comput. Cult. Herit. 2022, 15, 1–23. [Google Scholar] [CrossRef]
74. Nagai, M.; Chen, T.; Shibasaki, R.; Kumagai, H.; Ahmed, A. UAV-Borne 3-D Mapping System by Multisensor Integration. IEEE Trans. Geosci. Remote Sens. 2009, 47, 701–708. [Google Scholar] [CrossRef]
  75. Erenoglu, R.C.; Akcay, O.; Erenoglu, O. An UAS-assisted multi-sensor approach for 3D modeling and reconstruction of cultural heritage site. J. Cult. Herit. 2017, 26, 79–90. [Google Scholar] [CrossRef]
  76. Rodríguez-Gonzálvez, P.; Jiménez Fernández-Palacios, B.; Muñoz-Nieto, Á.; Arias-Sanchez, P.; Gonzalez-Aguilera, D. Mobile LiDAR System: New Possibilities for the Documentation and Dissemination of Large Cultural Heritage Sites. Remote Sens. 2017, 9, 189. [Google Scholar] [CrossRef] [Green Version]
  77. Milella, A.; Reina, G.; Nielsen, M. A multi-sensor robotic platform for ground mapping and estimation beyond the visible spectrum. Precis. Agric. 2018, 20, 423–444. [Google Scholar] [CrossRef]
  78. Hakala, T.; Suomalainen, J.; Kaasalainen, S.; Chen, Y. Full waveform hyperspectral LiDAR for terrestrial laser scanning. Opt. Express 2012, 20, 7119–7127. [Google Scholar] [CrossRef] [PubMed]
  79. Zlot, R.; Bosse, M.; Greenop, K.; Jarzab, Z.; Juckes, E.; Roberts, J. Efficiently capturing large, complex cultural heritage sites with a handheld mobile 3D laser mapping system. J. Cult. Herit. 2014, 15, 670–678. [Google Scholar] [CrossRef]
  80. Alsadik, B. Practicing the geometric designation of sensor networks using the Crowdsource 3D models of cultural heritage objects. J. Cult. Herit. 2018, 31, 202–207. [Google Scholar] [CrossRef]
  81. Ramos, M.M.; Remondino, F. Data fusion in Cultural Heritage—A Review. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W7, 359–363. [Google Scholar] [CrossRef] [Green Version]
  82. Fassi, F.; Achille, C.; Fregonese, L. Surveying and modelling the main spire of Milan Cathedral using multiple data sources. Photogramm. Rec. 2011, 26, 462–487. [Google Scholar] [CrossRef]
83. Achille, C.; Adami, A.; Chiarini, S.; Cremonesi, S.; Fassi, F.; Fregonese, L.; Taffurelli, L. UAV-Based Photogrammetry and Integrated Technologies for Architectural Applications—Methodological Strategies for the After-Quake Survey of Vertical Structures in Mantua (Italy). Sensors 2015, 15, 15520–15539. [Google Scholar] [CrossRef] [Green Version]
  84. Galeazzi, F. Towards the definition of best 3D practices in archaeology: Assessing 3D documentation techniques for intra-site data recording. J. Cult. Herit. 2016, 17, 159–169. [Google Scholar] [CrossRef]
  85. Martínez-Espejo Zaragoza, I.; Caroti, G.; Piemonte, A.; Riedel, B.; Tengen, D.; Niemeier, W. Structure from motion (SfM) processing of UAV images and combination with terrestrial laser scanning, applied for a 3D-documentation in a hazardous situation. Geomat. Nat. Hazards Risk 2017, 8, 1492–1504. [Google Scholar] [CrossRef]
86. Herrero-Tejedor, T.R.; Arques Soler, F.; Lopez-Cuervo Medina, S.; de la O Cabrera, M.R.; Martin Romero, J.L. Documenting a cultural landscape using point-cloud 3D models obtained with geomatic integration techniques. The case of the El Encin atomic garden, Madrid (Spain). PLoS ONE 2020, 15, e0235169. [Google Scholar] [CrossRef] [PubMed]
  87. Guidi, G.; Russo, M.; Ercoli, S.; Remondino, F.; Rizzi, A.; Menna, F. A multi-resolution methodology for the 3D modeling of large and complex archeological areas. Int. J. Archit. Comput. 2009, 7, 39–55. [Google Scholar] [CrossRef]
  88. Abate, D.; Sturdy-Colls, C. A multi-level and multi-sensor documentation approach of the Treblinka extermination and labor camps. J. Cult. Herit. 2018, 34, 129–135. [Google Scholar] [CrossRef]
  89. Jo, Y.; Hong, S. Three-Dimensional Digital Documentation of Cultural Heritage Site Based on the Convergence of Terrestrial Laser Scanning and Unmanned Aerial Vehicle Photogrammetry. ISPRS Int. J. Geo-Inf. 2019, 8, 53. [Google Scholar] [CrossRef] [Green Version]
  90. Nurunnabi, A.; Belton, D.; West, G. Robust segmentation in laser scanning 3D point cloud data. In Proceedings of the 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA), Fremantle, Australia, 3–5 December 2012; pp. 1–8. [Google Scholar]
  91. Su, Z.; Gao, Z.; Zhou, G.; Li, S.; Song, L.; Lu, X.; Kang, N. Building Plane Segmentation Based on Point Clouds. Remote Sens. 2021, 14, 95. [Google Scholar] [CrossRef]
  92. Grussenmeyer, P.; Landes, T.; Voegtle, T.; Ringle, K. Comparison methods of terrestrial laser scanning, photogrammetry and tacheometry data for recording of cultural heritage buildings. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 213–218. [Google Scholar]
  93. Paiva, P.V.V.; Cogima, C.K.; Dezen-Kempter, E.; Carvalho, M.A.G. Historical building point cloud segmentation combining hierarchical watershed transform and curvature analysis. Pattern Recognit. Lett. 2020, 135, 114–121. [Google Scholar] [CrossRef]
94. Deschaud, J.-E.; Goulette, F. A fast and accurate plane detection algorithm for large noisy point clouds using filtered normals and voxel growing. In 3DPVT; HAL Archives-Ouvertes: Paris, France, 2010. [Google Scholar]
  95. Fan, Y.; Wang, M.; Geng, N.; He, D.; Chang, J.; Zhang, J.J. A self-adaptive segmentation method for a point cloud. Vis. Comput. 2017, 34, 659–673. [Google Scholar] [CrossRef]
  96. Ning, X.; Zhang, X.; Wang, Y.; Jaeger, M. Segmentation of architecture shape information from 3D point cloud. In Proceedings of the 8th International Conference on Virtual Reality Continuum and its Applications in Industry, Yokohama, Japan, 14–15 December 2009; pp. 127–132. [Google Scholar]
  97. Saglam, A.; Makineci, H.B.; Baykan, N.A.; Baykan, Ö.K. Boundary constrained voxel segmentation for 3D point clouds using local geometric differences. Expert Syst. Appl. 2020, 157, 113439. [Google Scholar] [CrossRef]
  98. Aijazi, A.; Checchin, P.; Trassoudaine, L. Segmentation Based Classification of 3D Urban Point Clouds: A Super-Voxel Based Approach with Evaluation. Remote Sens. 2013, 5, 1624–1650. [Google Scholar] [CrossRef] [Green Version]
  99. Vo, A.-V.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote Sens. 2015, 104, 88–100. [Google Scholar] [CrossRef]
  100. Xiao, J.; Zhang, J.; Adler, B.; Zhang, H.; Zhang, J. Three-dimensional point cloud plane segmentation in both structured and unstructured environments. Robot. Auton. Syst. 2013, 61, 1641–1652. [Google Scholar] [CrossRef]
  101. Dong, Z.; Yang, B.; Hu, P.; Scherer, S. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds. ISPRS J. Photogramm. Remote Sens. 2018, 137, 112–133. [Google Scholar] [CrossRef]
  102. Pérez-Sinticala, C.; Janvier, R.; Brunetaud, X.; Treuillet, S.; Aguilar, R.; Castañeda, B. Evaluation of Primitive Extraction Methods from Point Clouds of Cultural Heritage Buildings. In Structural Analysis of Historical Constructions; RILEM Bookseries; Springer: Cham, Switzerland, 2019; pp. 2332–2341. [Google Scholar]
  103. Poux, F.; Mattes, C.; Selman, Z.; Kobbelt, L. Automatic region-growing system for the segmentation of large point clouds. Autom. Constr. 2022, 138, 104250. [Google Scholar] [CrossRef]
  104. Dalitz, C.; Schramke, T.; Jeltsch, M. Iterative Hough Transform for Line Detection in 3D Point Clouds. Image Process. Line 2017, 7, 184–196. [Google Scholar] [CrossRef] [Green Version]
  105. Tian, P.; Hua, X.; Yu, K.; Tao, W. Robust Segmentation of Building Planar Features From Unorganized Point Cloud. IEEE Access 2020, 8, 30873–30884. [Google Scholar] [CrossRef]
106. Rabbani, T.; Van Den Heuvel, F. Efficient Hough transform for automatic detection of cylinders in point clouds. ISPRS WG III/3, III/4 2005, 3, 60–65. [Google Scholar]
  107. Camurri, M.; Vezzani, R.; Cucchiara, R. 3D Hough transform for sphere recognition on point clouds. Mach. Vis. Appl. 2014, 25, 1877–1891. [Google Scholar] [CrossRef]
108. Borrmann, D.; Elseberg, J.; Lingemann, K.; Nüchter, A. The 3D Hough transform for plane detection in point clouds: A review and a new accumulator design. 3D Res. 2011, 2, 3. [Google Scholar] [CrossRef]
  109. Hassanein, A.S.; Mohammad, S.; Sameer, M.; Ragab, M.E. A survey on Hough transform, theory, techniques and applications. arXiv 2015, arXiv:1502.02160. [Google Scholar]
  110. Kaiser, A.; Ybanez Zepeda, J.A.; Boubekeur, T. A survey of simple geometric primitives detection methods for captured 3D data. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2019; pp. 167–196. [Google Scholar]
  111. Lerma, J.; Biosca, J. Segmentation and filtering of laser scanner data for cultural heritage. In Proceedings of the CIPA 2005 XX International Symposium, Torino, Italy, 26 September–1 October 2005; p. 6. [Google Scholar]
  112. Pierrot-Deseilligny, M.; De Luca, L.; Remondino, F. Automated image-based procedures for accurate artifacts 3D Modeling and orthoimage. J. Geoinform. FCE CTU 2011, 6, 1–10. [Google Scholar] [CrossRef]
  113. Markiewicz, J.; Podlasiak, P.; Zawieska, D. A New Approach to the Generation of Orthoimages of Cultural Heritage Objects—Integrating TLS and Image Data. Remote Sens. 2015, 7, 16963–16985. [Google Scholar] [CrossRef] [Green Version]
  114. Maltezos, E.; Ioannidis, C. Plane detection of polyhedral cultural heritage monuments: The case of tower of winds in Athens. J. Archaeol. Sci. Rep. 2018, 19, 562–574. [Google Scholar] [CrossRef]
  115. Alshawabkeh, Y. Linear feature extraction from point cloud using colour information. Herit. Sci. 2020, 8, 28. [Google Scholar] [CrossRef]
  116. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  117. Li, L.; Yang, F.; Zhu, H.; Li, D.; Li, Y.; Tang, L. An improved RANSAC for 3D point cloud plane segmentation based on normal distribution transformation cells. Remote Sens. 2017, 9, 433. [Google Scholar] [CrossRef] [Green Version]
  118. Xu, B.; Jiang, W.; Shan, J.; Zhang, J.; Li, L. Investigation on the weighted ransac approaches for building roof plane segmentation from lidar point clouds. Remote Sens. 2015, 8, 5. [Google Scholar] [CrossRef] [Green Version]
  119. Yang, M.Y.; Förstner, W. Plane detection in point cloud data. In Proceedings of the 2nd International Conference on Machine Control Guidance, Bonn, Germany, 25 January 2010; pp. 95–104. [Google Scholar]
120. Tittmann, P.; Shafii, S.; Hartsough, B.; Hamann, B. Tree detection and delineation from LiDAR point clouds using RANSAC. In Proceedings of the SilviLaser 2011, Hobart, Australia, 16–19 October 2011; pp. 1–23. [Google Scholar]
  121. Xu, B.; Chen, Z.; Zhu, Q.; Ge, X.; Huang, S.; Zhang, Y.; Liu, T.; Wu, D. Geometrical Segmentation of Multi-Shape Point Clouds Based on Adaptive Shape Prediction and Hybrid Voting RANSAC. Remote Sens. 2022, 14, 2024. [Google Scholar] [CrossRef]
122. Aitelkadi, K.; Tahiri, D.; Simonetto, E.; Sebari, I.; Polidori, L. Segmentation of heritage building by means of geometric and radiometric components from terrestrial laser scanning. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 1, 1–6. [Google Scholar] [CrossRef] [Green Version]
  123. Chan, T.O.; Xiao, H.; Liu, L.; Sun, Y.; Chen, T.; Lang, W.; Li, M.H. A Post-Scan Point Cloud Colourization Method for Cultural Heritage Documentation. ISPRS Int. J. Geo-Inf. 2021, 10, 737. [Google Scholar] [CrossRef]
  124. Kivilcim, C.Ö.; Duran, Z. Parametric Architectural Elements from Point Clouds for HBIM Applications. Int. J. Environ. Geoinform. 2021, 8, 144–149. [Google Scholar] [CrossRef]
125. Macher, H.; Landes, T.; Grussenmeyer, P.; Alby, E. Semi-automatic segmentation and modelling from point clouds towards historical building information modelling. In Proceedings of the Euro-Mediterranean Conference, Limassol, Cyprus, 3–8 November 2014; pp. 111–120. [Google Scholar]
  126. Andrés, A.N.; Pozuelo, F.B.; Marimón, J.R.; de Mesa Gisbert, A. Generation of virtual models of cultural heritage. J. Cult. Herit. 2012, 13, 103–106. [Google Scholar] [CrossRef]
127. Nespeca, R.; De Luca, L. Analysis, thematic maps and data mining from point cloud to ontology for software development. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B5, 347–354. [Google Scholar] [CrossRef]
  128. Poux, F.; Neuville, R.; Hallot, P.; Billen, R. Point cloud classification of tesserae from terrestrial laser data combined with dense image matching for archaeological information extraction. Int. J. Adv. Life Sci. 2017, 4, 203–211. [Google Scholar] [CrossRef] [Green Version]
  129. Li, Z.; Shan, J. RANSAC-based multi primitive building reconstruction from 3D point clouds. ISPRS J. Photogramm. Remote Sens. 2022, 185, 247–260. [Google Scholar] [CrossRef]
  130. Roth, G.; Levine, M.D. Extracting geometric primitives. CVGIP Image Underst. 1993, 58, 1–22. [Google Scholar] [CrossRef]
131. Shi, B.-Q.; Liang, J.; Liu, Q. Adaptive simplification of point cloud using k-means clustering. Comput.-Aided Des. 2011, 43, 910–922. [Google Scholar] [CrossRef]
  132. Melzer, T. Non-parametric segmentation of ALS point clouds using mean shift. J. Appl. Geod. 2007, 1, 159–170. [Google Scholar] [CrossRef]
  133. Biosca, J.M.; Lerma, J.L. Unsupervised robust planar segmentation of terrestrial laser scanner point clouds based on fuzzy clustering methods. ISPRS J. Photogramm. Remote Sens. 2008, 63, 84–98. [Google Scholar] [CrossRef]
  134. Quagliarini, E.; Clini, P.; Ripanti, M. Fast, low cost and safe methodology for the assessment of the state of conservation of historical buildings from 3D laser scanning: The case study of Santa Maria in Portonovo (Italy). J. Cult. Herit. 2017, 24, 175–183. [Google Scholar] [CrossRef]
135. Galantucci, R.A.; Fatiguso, F. Advanced damage detection techniques in historical buildings using digital photogrammetry and 3D surface analysis. J. Cult. Herit. 2019, 36, 51–62. [Google Scholar] [CrossRef]
  136. Armesto-González, J.; Riveiro-Rodríguez, B.; González-Aguilera, D.; Rivas-Brea, M.T. Terrestrial laser scanning intensity data applied to damage detection for historical buildings. J. Archaeol. Sci. 2010, 37, 3037–3047. [Google Scholar] [CrossRef]
  137. Sánchez-Aparicio, L.J.; Del Pozo, S.; Ramos, L.F.; Arce, A.; Fernandes, F.M. Heritage site preservation with combined radiometric and geometric analysis of TLS data. Autom. Constr. 2018, 85, 24–39. [Google Scholar] [CrossRef]
  138. Wood, R.L.; Mohammadi, M.E. Feature-Based Point Cloud-Based Assessment of Heritage Structures for Nondestructive and Noncontact Surface Damage Detection. Heritage 2021, 4, 775–793. [Google Scholar] [CrossRef]
  139. Ankerst, M.; Breunig, M.M.; Kriegel, H.-P.; Sander, J. OPTICS: Ordering points to identify the clustering structure. ACM Sigmod Rec. 1999, 28, 49–60. [Google Scholar] [CrossRef]
  140. Hassan, M.; Akçamete Güngör, A.; Meral, Ç. Investigation of terrestrial laser scanning reflectance intensity and RGB distributions to assist construction material identification. In Proceedings of the Joint Conference on Computing in Construction, Heraklion, Greece, 4–7 July 2017; pp. 507–515. [Google Scholar]
  141. Valero, E.; Bosché, F.; Forster, A. Automatic segmentation of 3D point clouds of rubble masonry walls, and its application to building surveying, repair and maintenance. Autom. Constr. 2018, 96, 29–39. [Google Scholar] [CrossRef]
  142. Hou, T.-C.; Liu, J.-W.; Liu, Y.-W. Algorithmic clustering of LiDAR point cloud data for textural damage identifications of structural elements. Measurement 2017, 108, 77–90. [Google Scholar] [CrossRef]
  143. Niemeyer, J.; Rottensteiner, F.; Soergel, U. Contextual classification of lidar data and building object detection in urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 87, 152–165. [Google Scholar] [CrossRef]
  144. Vosselman, G.; Coenen, M.; Rottensteiner, F. Contextual segment-based classification of airborne laser scanner data. ISPRS J. Photogramm. Remote Sens. 2017, 128, 354–371. [Google Scholar] [CrossRef]
  145. Fiorucci, M.; Khoroshiltseva, M.; Pontil, M.; Traviglia, A.; Del Bue, A.; James, S. Machine Learning for Cultural Heritage: A Survey. Pattern Recognit. Lett. 2020, 133, 102–108. [Google Scholar] [CrossRef]
  146. Mesanza-Moraza, A.; García-Gómez, I.; Azkarate, A. Machine Learning for the Built Heritage Archaeological Study. J. Comput. Cult. Herit. 2021, 14, 1–21. [Google Scholar] [CrossRef]
  147. Weinmann, M.; Jutzi, B.; Hinz, S.; Mallet, C. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS J. Photogramm. Remote Sens. 2015, 105, 286–304. [Google Scholar] [CrossRef]
  148. Hackel, T.; Wegner, J.D.; Schindler, K. Fast semantic segmentation of 3D point clouds with strongly varying density. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 177–184. [Google Scholar] [CrossRef]
  149. Grilli, E.; Dininno, D.; Marsicano, L.; Petrucci, G.; Remondino, F. Supervised segmentation of 3D cultural heritage. In Proceedings of the 2018 3rd Digital Heritage International Congress (DigitalHERITAGE) held jointly with 2018 24th International Conference on Virtual Systems & Multimedia (VSMM 2018), San Francisco, CA, USA, 26–30 October 2018; pp. 1–8. [Google Scholar]
  150. Valero, E.; Forster, A.; Bosché, F.; Hyslop, E.; Wilson, L.; Turmel, A. Automated defect detection and classification in ashlar masonry walls using machine learning. Autom. Constr. 2019, 106, 102846. [Google Scholar] [CrossRef]
  151. Grilli, E.; Remondino, F. Machine learning generalisation across different 3D architectural heritage. ISPRS Int. J. Geo-Inf. 2020, 9, 379. [Google Scholar] [CrossRef]
  152. Croce, V.; Caroti, G.; De Luca, L.; Jacquot, K.; Piemonte, A.; Véron, P. From the semantic point cloud to heritage-building information modeling: A semiautomatic approach exploiting machine learning. Remote Sens. 2021, 13, 461. [Google Scholar] [CrossRef]
  153. Teruggi, S.; Grilli, E.; Russo, M.; Fassi, F.; Remondino, F. A hierarchical machine learning approach for multi-level and multi-resolution 3D point cloud classification. Remote Sens. 2020, 12, 2598. [Google Scholar] [CrossRef]
  154. Guo, Y.; Wang, H.; Hu, Q.; Liu, H.; Liu, L.; Bennamoun, M. Deep learning for 3d point clouds: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 4338–4364. [Google Scholar] [CrossRef]
  155. Bello, S.A.; Yu, S.; Wang, C.; Adam, J.M.; Li, J. Deep learning on 3D point clouds. Remote Sens. 2020, 12, 1729. [Google Scholar] [CrossRef]
  156. Liu, W.; Sun, J.; Li, W.; Hu, T.; Wang, P. Deep learning on point clouds and its application: A survey. Sensors 2019, 19, 4188. [Google Scholar] [CrossRef] [Green Version]
  157. Zhang, J.; Zhao, X.; Chen, Z.; Lu, Z. A review of deep learning-based semantic segmentation for point cloud. IEEE Access 2019, 7, 179118–179133. [Google Scholar] [CrossRef]
  158. Pellis, E.; Murtiyoso, A.; Masiero, A.; Tucci, G.; Betti, M.; Grussenmeyer, P. 2D to 3D Label Propagation for the Semantic Segmentation of Heritage Building Point Clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 861–867. [Google Scholar] [CrossRef]
159. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
160. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
  161. Malinverni, E.S.; Pierdicca, R.; Paolanti, M.; Martini, M.; Morbidoni, C.; Matrone, F.; Lingua, A. Deep learning for semantic segmentation of 3D point cloud. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W15, 735–742. [Google Scholar] [CrossRef]
162. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. 2019, 38, 1–12. [Google Scholar] [CrossRef]
  163. Morbidoni, C.; Pierdicca, R.; Paolanti, M.; Quattrini, R.; Mammoli, R. Learning from synthetic point cloud data for historical buildings semantic segmentation. J. Comput. Cult. Herit. 2020, 13, 1–16. [Google Scholar] [CrossRef]
  164. Matrone, F.; Grilli, E.; Martini, M.; Paolanti, M.; Pierdicca, R.; Remondino, F. Comparing machine and deep learning methods for large 3D heritage semantic segmentation. ISPRS Int. J. Geo-Inf. 2020, 9, 535. [Google Scholar] [CrossRef]
  165. Chen, X.-T.; Li, Y.; Fan, J.-H.; Wang, R. RGAM: A novel network architecture for 3D point cloud semantic segmentation in indoor scenes. Inf. Sci. 2021, 571, 87–103. [Google Scholar] [CrossRef]
  166. Lee, J.S.; Park, J.; Ryu, Y.-M. Semantic segmentation of bridge components based on hierarchical point cloud model. Autom. Constr. 2021, 130, 103847. [Google Scholar] [CrossRef]
  167. Yin, C.; Wang, B.; Gan, V.J.; Wang, M.; Cheng, J.C. Automated semantic segmentation of industrial point clouds using ResPointNet++. Autom. Constr. 2021, 130, 103874. [Google Scholar] [CrossRef]
  168. Matrone, F.; Lingua, A.; Pierdicca, R.; Malinverni, E.; Paolanti, M.; Grilli, E.; Remondino, F.; Murtiyoso, A.; Landes, T. A benchmark for large-scale heritage point cloud semantic segmentation. In Proceedings of the XXIV ISPRS Congress, Nice, France, 31 August–2 September 2020; pp. 1419–1426. [Google Scholar]
  169. Dong, Z.; Liang, F.; Yang, B.; Xu, Y.; Zang, Y.; Li, J.; Wang, Y.; Dai, W.; Fan, H.; Hyyppä, J. Registration of large-scale terrestrial laser scanner point clouds: A review and benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 163, 327–342. [Google Scholar] [CrossRef]
  170. Dong, Z.; Yang, B.; Liang, F.; Huang, R.; Scherer, S. Hierarchical registration of unordered TLS point clouds based on binary shape context descriptor. ISPRS J. Photogramm. Remote Sens. 2018, 144, 61–79. [Google Scholar] [CrossRef]
  171. Dong, Z.; Yang, B.; Liu, Y.; Liang, F.; Li, B.; Zang, Y. A novel binary shape context for 3D local surface description. ISPRS J. Photogramm. Remote Sens. 2017, 130, 431–452. [Google Scholar] [CrossRef]
172. Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J.D.; Schindler, K.; Pollefeys, M. Semantic3D.net: A new large-scale point cloud classification benchmark. arXiv 2017, arXiv:1704.03847. [Google Scholar]
  173. Pepe, M.; Alfio, V.S.; Costantino, D.; Scaringi, D. Data for 3D reconstruction and point cloud classification using machine learning in cultural heritage environment. Data Brief 2022, 42, 6. [Google Scholar] [CrossRef] [PubMed]
  174. Lengauer, S.; Sipiran, I.; Preiner, R.; Schreck, T.; Bustos, B. A Benchmark Dataset for Repetitive Pattern Recognition on Textured 3D Surfaces. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2021; pp. 1–8. [Google Scholar]
  175. Hao, F.; Li, J.; Song, R.; Li, Y.; Cao, K. Mixed Feature Prediction on Boundary Learning for Point Cloud Semantic Segmentation. Remote Sens. 2022, 14, 4757. [Google Scholar] [CrossRef]
  176. Yang, F.; Davoine, F.; Wang, H.; Jin, Z. Continuous conditional random field convolution for point cloud segmentation. Pattern Recognit. 2022, 122, 108357. [Google Scholar] [CrossRef]
  177. Ponciano, J.-J.; Roetner, M.; Reiterer, A.; Boochs, F. Object Semantic Segmentation in Point Clouds—Comparison of a Deep Learning and a Knowledge-Based Method. ISPRS Int. J. Geo-Inf. 2021, 10, 256. [Google Scholar] [CrossRef]
  178. Colucci, E.; Xing, X.; Kokla, M.; Mostafavi, M.A.; Noardo, F.; Spanò, A. Ontology-based semantic conceptualisation of historical built heritage to generate parametric structured models from point clouds. Appl. Sci. 2021, 11, 2813. [Google Scholar] [CrossRef]
  179. Wang, P.; Yao, W. A new weakly supervised approach for ALS point cloud semantic segmentation. ISPRS J. Photogramm. Remote Sens. 2022, 188, 237–254. [Google Scholar] [CrossRef]
  180. Zhang, Y.; Qu, Y.; Xie, Y.; Li, Z.; Zheng, S.; Li, C. Perturbed self-distillation: Weakly supervised large-scale point cloud semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–18 October 2021; pp. 15520–15528. [Google Scholar]
  181. Dimitrov, A.; Golparvar-Fard, M. Segmentation of building point cloud models including detailed architectural/structural features and MEP systems. Autom. Constr. 2015, 51, 32–45. [Google Scholar] [CrossRef]
  182. Cabaleiro, M.; Hermida, J.; Riveiro, B.; Caamaño, J. Automated processing of dense points clouds to automatically determine deformations in highly irregular timber structures. Constr. Build. Mater. 2017, 146, 393–402. [Google Scholar] [CrossRef]
  183. Moyano, J.; Gil-Arizón, I.; Nieto-Julián, J.E.; Marín-García, D. Analysis and management of structural deformations through parametric models and HBIM workflow in architectural heritage. J. Build. Eng. 2022, 45, 103274. [Google Scholar] [CrossRef]
  184. Cardani, G.; Angjeliu, G. Integrated Use of Measurements for the Structural Diagnosis in Historical Vaulted Buildings. Sensors 2020, 20, 4290. [Google Scholar] [CrossRef]
  185. Barrile, V.; Bernardo, E.; Bilotta, G. An Experimental HBIM Processing: Innovative Tool for 3D Model Reconstruction of Morpho-Typological Phases for the Cultural Heritage. Remote Sens. 2022, 14, 1288. [Google Scholar] [CrossRef]
186. Poux, F.; Hallot, P.; Neuville, R.; Billen, R. Smart Point Cloud: Definition and Remaining Challenges. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, IV-2/W1, 119–127. [Google Scholar] [CrossRef]
  187. Marra, A.; Gerbino, S.; Greco, A.; Fabbrocino, G. Combining integrated informative system and historical digital twin for maintenance and preservation of artistic assets. Sensors 2021, 21, 5956. [Google Scholar] [CrossRef]
188. Jouan, P.; Hallot, P. Digital twin: Research framework to support preventive conservation policies. ISPRS Int. J. Geo-Inf. 2020, 9, 228. [Google Scholar] [CrossRef]
  189. Funari, M.F.; Hajjat, A.E.; Masciotta, M.G.; Oliveira, D.V.; Lourenço, P.B. A parametric scan-to-FEM framework for the digital twin generation of historic masonry structures. Sustainability 2021, 13, 11088. [Google Scholar] [CrossRef]
190. De Luca, L. Towards the Semantic-aware 3D Digitisation of Architectural Heritage: The "Notre-Dame de Paris" Digital Twin Project. In Proceedings of the 2nd Workshop on Structuring and Understanding of Multimedia heritAge Contents, Seattle, WA, USA, 12 October 2020; pp. 3–4. [Google Scholar]
Figure 1. The generic workflow of 3D point cloud semantic segmentation for cultural heritage.
Figure 2. The basic supervised machine learning process for point cloud semantic segmentation (adapted from [147]).
Figure 3. A deep learning framework for 3DPCSS (adapted from [29]).
Table 1. An overview of various point cloud acquisition technologies for different types of cultural heritage.

| Technology | Point Density | Advantages | Disadvantages | Spatial Scales |
|---|---|---|---|---|
| Photogrammetry | Depends on the resolution of the camera sensors | Includes colour and spectral information; can be installed on different platforms | Influenced by light and shadows | Landscape, immovable heritage, and movable heritage |
| 3D laser scanning—ALS | Low density | Rapid acquisition over a wide range (meter-to-centimeter resolution); able to penetrate occlusion | 2.5D point clouds | Landscape scale (heritage landscape, large site) |
| 3D laser scanning—TLS | High density | High accuracy (centimeter-to-millimeter resolution); access to geometric surface and structural details | Expensive and time-consuming; object occlusion | Immovable heritage scale (archaeological site, small landscape, and historical building) |
| 3D laser scanning—MLS | Medium density | High accuracy (centimeter resolution); larger measurement range and higher efficiency than TLS | Expensive and time-consuming; object occlusion | Landscape and immovable heritage scale |
| 3D laser scanning—Handheld | High to very high density | Very high accuracy (centimeter to submillimeter) | Expensive and time-consuming | Immovable and movable heritage scale (artefacts, objects, parts that are immovable) |
Table 2. Three-dimensional point cloud data acquisition using a single platform with multiple sensors.

| Paper | Platform | Sensors | Data Characteristics |
|---|---|---|---|
| Nagai et al. [74] | UAV | Charge-coupled device cameras; laser scanner; inertial measurement unit; GPS | Terrain shapes, detailed textures, and global geospatial references |
| Erenoglu et al. [75] | UAV | Stereo camera; visible, thermal, and infrared radiation sensors | Geometric features and material classification information |
| Rodríguez-Gonzálvez et al. [76] | Mobile vehicle | Laser scanner; RGB cameras; Applanix POS LV 520 IMU; GPS | Colour information and spatial geographic reference |
| Milella et al. [77] | All-terrain vehicle | Stereo camera; visible and near-infrared camera; thermal imager | Colour, geometry, spectral, and mechanical properties of soil |
| Hakala et al. [78] | Ground station | Full-waveform hyperspectral laser scanner | Hyperspectral point clouds |
| Zlot et al. [79] | Handheld | Laser scanner; IMU; cameras | Site context and building detail of comparable accuracy |
Table 3. The multi-platform data fusion.

| Case | Platform and Main Sensors | Application |
|---|---|---|
| Fassi et al. [82] | UAV photogrammetry (Canon 5D Mark II); TLS (Leica HDS6000) | Integrating different instruments and modeling methods to survey and model very complex architecture (the main spire of Milan Cathedral) |
| Achille et al. [83] | UAV photogrammetry (Canon EOS 5D Mark III with a 35 mm lens); TLS (Leica HDS 7000) | Integration of the building's interior and exterior 3D models with a tall and complex façade |
| Galeazzi [84] | Ground photogrammetry (Nikon D90 at 12 MPixel with a 60 mm Nikkor lens); TLS (Faro Focus 3D); DeWALT DC020 fluorescent light | 3D documentation of archaeological stratigraphy in extreme environments characterized by extreme humidity, access difficulty, and challenging light conditions |
| Zaragoza et al. [85] | UAV photogrammetry (Canon IXUS 220HS with 35 mm lens); TLS (Riegl VZ-1000) | Integrated survey of roofs, gardens, and inner courts |
| Herrero-Tejedor et al. [86] | UAV photogrammetry (Phantom 4 RGB camera and MicaSense RedEdge multispectral camera); TLS (Faro Focus S330) | 3D documentation for the management and conservation of cultural landscapes with unique biogeographical features |
| Guidi et al. [87] | UAV photogrammetry (Zeiss RMK A 30/23); TLS (Leica HDS3000 and Leica HDS6000); ground photogrammetry (Canon 10D, Canon 20D, Kodak DCS Pro) | Multi-resolution 3D modeling of the complex area of Roman Pompeii (150 m × 80 m): DSM (25 mm), medium-resolution 3D model (5–20 mm), ground photogrammetry (0.5–10 mm) |
| Abate et al. [88] | UAV laser scanner; UAV photogrammetry (Canon 5D Mark V) | Centimeter-to-millimeter multi-resolution 3D model of the Treblinka concentration camp (3.75 square kilometers) |
| Young et al. [89] | TLS (Leica ScanStation C10); fisheye camera (Sigma F3.5 EX DG circular fisheye); UAV photogrammetry (Sony Alpha 6000) | 3D model and associated digital documentation of the Magoksa Temple, Republic of Korea |
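Every fusion case in Table 3 depends on registering heterogeneous point clouds (e.g., TLS scans and UAV photogrammetric clouds) into a common reference frame, typically with coarse alignment followed by iterative closest point (ICP) refinement. The following is a minimal point-to-point ICP sketch in NumPy, not taken from any of the cited implementations; the toy clouds and parameter choices are purely illustrative.

```python
import numpy as np

def icp_point_to_point(src, dst, iters=30):
    """Align src (N, 3) onto dst (M, 3); returns R, t with dst ~= src @ R.T + t."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small demo clouds)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # closed-form rigid transform between matched sets (Kabsch / SVD)
        mu_c, mu_m = cur.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_c).T @ (matched - mu_m))
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:        # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_m - Ri @ mu_c
        cur = cur @ Ri.T + ti            # apply the incremental transform
        R, t = Ri @ R, Ri @ t + ti       # accumulate the total transform
    return R, t

# demo: recover a known small rigid offset between two copies of a cloud
rng = np.random.default_rng(0)
dst = rng.uniform(0.0, 1.0, (200, 3))
a = 0.05                                 # 0.05 rad rotation about the z axis
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.03, -0.02, 0.01])
src = (dst - t_true) @ R_true            # so that dst = src @ R_true.T + t_true
R, t = icp_point_to_point(src, dst)
```

In practice, production pipelines use k-d trees for the correspondence search and robust outlier rejection; the sketch only shows the core alternation between matching and the closed-form rigid fit.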
Table 4. Application of supervised machine learning 3DPCSS in the cultural heritage field.

| Case | Object | Classification | Neighborhood Selection | Feature Extraction | Geometry Feature Selection | Classifier |
|---|---|---|---|---|---|---|
| Grilli et al. [149] | European historical buildings, ancient ruins, and stone cultural relics | Damaged areas | - | Orthophoto or UV map (2D supervised machine learning results projected onto 3D data) | - | J48, random tree, REPTree, LogitBoost, random forest, fast random forest (16), and fast random forest (40) |
| Valero et al. [150] | Ancient ruin walls | Wall structure and damage information (erosion, delamination, mechanical damage, and non-defective) | - | 17 colour-related features, 16 geometric features | Ten geometric features | Logistic regression multi-class classifier and binary classifier |
| Grilli et al. [151] | European historical buildings | Nine structures | 0.1–0.8 m | Radiometric features and 77 multi-scale geometric features | Seven geometric features | Random forest classifier |
| Croce et al. [152] | European historical buildings | 19 structures | 0.2 m, 0.4 m, and 0.6 m | 27 geometric features, RGB values, laser scanner intensity, and point cloud Z coordinate | Nine geometric features | Random forest classifier |
| Grilli et al. [34] | European historical buildings and temple | Building: 15 structures; temple: 15 structures | - | Decentralized coordinates, radiometric values, and geometric features | Seven geometric features | Random forest one-versus-one (OvO) classifier |
| Teruggi et al. [153] | European historical buildings | Building structures, subdivision structures, and detailed structures | 0.2 cm–3 m | Geometric features | Six geometric features | Random forest |
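Most of the supervised pipelines in Table 4 follow the same recipe: select a local neighborhood around each point, derive covariance (eigenvalue-based) geometric features such as linearity, planarity, and sphericity, and feed them to a random forest. The sketch below, using NumPy and scikit-learn, illustrates that recipe on a hypothetical toy scene (a flat "wall" patch versus a volumetric "rubble" blob); the feature set, neighborhood size, and data are illustrative choices, not those of any cited study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

def covariance_features(points, k=20):
    """Per-point eigenvalue features (linearity, planarity, sphericity)
    from the covariance matrix of each point's k-nearest-neighbour patch."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    feats = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        lam = np.linalg.eigvalsh(np.cov(points[nbrs].T))[::-1]  # l1 >= l2 >= l3
        lam = np.maximum(lam, 1e-12)                            # numerical guard
        feats[i] = [(lam[0] - lam[1]) / lam[0],                 # linearity
                    (lam[1] - lam[2]) / lam[0],                 # planarity
                    lam[2] / lam[0]]                            # sphericity
    return feats

# toy scene: a planar "wall" patch (label 0) vs. a volumetric "rubble" blob (label 1)
rng = np.random.default_rng(1)
wall = np.c_[rng.uniform(0, 1, (300, 2)), rng.normal(0.0, 0.002, 300)]
blob = rng.normal(0.5, 0.1, (300, 3))
X = covariance_features(np.vstack([wall, blob]))
y = np.r_[np.zeros(300), np.ones(300)]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

Real heritage pipelines (e.g., [151,152]) compute these features at multiple neighborhood radii and add radiometric channels, but the core loop of neighborhood selection, eigenvalue features, and a random forest classifier is the same.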

Share and Cite

MDPI and ACS Style

Yang, S.; Hou, M.; Li, S. Three-Dimensional Point Cloud Semantic Segmentation for Cultural Heritage: A Comprehensive Review. Remote Sens. 2023, 15, 548. https://doi.org/10.3390/rs15030548