Highlights
What are the main findings?
- The integration of terrestrial laser scanning (TLS) and UAV photogrammetry produced a high-consistency and complete 3D model of a heritage building; the observed differences between the two reconstructions (0.8–3.2 cm) reflect relative internal consistency.
- The developed workflow enables the generation of HBIM models at different Levels of Detail (LOD100-300), suitable for technical documentation and spatial–historical analysis.
What is the implication of the main finding?
- The proposed method provides a replicable approach for creating digital twins of heritage objects, supporting sustainable conservation and monitoring.
Abstract
This study presents an integrated workflow for acquiring, processing, and fusing terrestrial laser scanning and Unmanned Aerial Vehicle (UAV) photogrammetric data to generate digital twins of heritage buildings within Heritage Building Information Modeling (HBIM) and Historical Geographic Information System (HGIS) environments. Using a historic wooden church as a case study, the proposed approach demonstrates improved completeness and geometric quality compared to UAV-only models. Dimensional differences between UAV-only and integrated models ranged from 0.8 to 3.2 cm, confirming internal consistency and suitability for documentation purposes. The workflow standardizes key stages of acquisition, scaling, and point cloud fusion, and establishes links between HBIM models at Level of Detail (LOD) 100–300 and conservation requirements. Additionally, it identifies integration points for Artificial Intelligence (AI)-based automation, supporting future developments in classification, segmentation, and conversion of 2D documentation into HBIM. The results highlight the potential of terrestrial laser scanning (TLS)-UAV integration for accurate, replicable heritage documentation and spatial–historical analysis.
1. Introduction
Geo-information technologies for capturing, storing, processing, analysing, sharing and visualising spatial data are developing rapidly and are increasingly accessible. They make it possible, among other things, to create detailed 3D models, virtual reconstructions and interactive visualisations. Photogrammetry and laser scanning in particular allow precise data collection and the creation of accurate 2D and 3D models. Automating survey processes with these technologies means data can be collected, analysed and shared more quickly and efficiently, yielding higher-quality data and therefore more accurate 3D models than before, which increases the reliability and usability of the results. Higher data quality is changing the way the geometry of historic buildings is interpreted, and innovative analysis methods, such as drone and laser scanning data processing, open new possibilities for interpreting the results. Beyond improving model quality, such automation increases productivity, reducing the time needed for surveys and cutting costs. The key requirement is to acquire high-quality data in a short time while precisely determining the spatial position of the measured object.
Photogrammetry and laser scanning alone cannot economically reproduce the smallest yet essential elements of the construction and furnishing of monuments, so the resulting 3D models are not, by themselves, digital twins and cannot support further work. In addition to creating the model itself, it is therefore necessary to design the database in an interoperable way, adapted to ingesting data from 2D documentation and GIS databases. In the context of KIS 10 (National Smart Specialisations), the development and integration of information, communication and geo-information technologies allows an innovative approach to spatial data management and processing. Geo-information technologies enable more accurate planning and minimise interference with the environment, in line with the principles of sustainable development.
With the development of technology at the end of the 20th century came the first commercial laser scanners, which revolutionised the way spatial data is acquired. Terrestrial laser scanning (TLS) enables the rapid and precise collection of large amounts of geometric information in the form of point clouds, from which 3D models, digital terrain models (DTM) or highly detailed visualisations can be created. Thanks to the possibility of texturing point clouds with photographs taken during the survey, colour models in RGB space can be obtained, which further increases the documentary value of the acquired data [1,2]. In parallel, photogrammetry using unmanned aerial vehicles (UAVs) is becoming increasingly common. Drones allow fast, inexpensive and flexible acquisition of images from different heights and angles, which is particularly useful for hard-to-reach architectural features. With appropriate calibration, they can be used to generate high-quality point clouds and orthophotos [3,4,5].
1.1. State of the Art
The literature emphasises that the choice of survey method should depend on the purpose of the study. TLS is characterised by its high consistency and suitability for modelling architectural details [6], while UAV-photogrammetry is better suited for rapid documentation of larger areas [7,8,9,10], including robust matching for oblique imagery [11]. The fusion of data from different sources (integration of point clouds from TLS and UAV) is also becoming crucial, resulting in more complete and representative spatial models [12].
Recent studies further highlight the expanding role of UAV photogrammetry in cultural heritage documentation, including the use of combined nadir and oblique imagery, high-resolution multi-view reconstruction and integration with Heritage Building Information Modeling (HBIM)-oriented workflows. These works demonstrate both the maturity and diversity of UAV-based approaches in capturing complex architectural geometry and improving model completeness, as shown in several recent applications published in dedicated UAV and heritage research collections [12,13,14,15].
Integrated workflows combining TLS and UAV photogrammetry are increasingly used for heritage documentation and conservation [13,16,17]. These approaches enable accurate and comprehensive 3D reconstruction of complex architectural structures, supported by advanced image matching algorithms for oblique UAV imagery [18] and fully automated reconstruction pipelines [19]. Furthermore, transformer-based architectures such as the Visual Geometry Grounded Transformer (VGGT) represent state-of-the-art solutions for scene and building reconstruction [20], which we discuss as future directions for automation. Recent transformer-based advances in 3D reconstruction and point-cloud understanding [21,22,23] further confirm the relevance of transformer architectures for future automation in HBIM-oriented workflows.
Comparing different photogrammetric methods and software packages is crucial for assessing their impact on model completeness and accuracy, as this defines best practices in heritage documentation workflows [24,25].
In addition, the adoption of HBIM and its interoperability with Geographic Information System (GIS) and Historical Geographic Information System (HGIS) environments is increasingly recognized as a key strategy for managing cultural heritage data [13,25]. This trend is accompanied by the formalization of Level of Detail (LOD) standards, which ensure consistency in geometric accuracy and semantic enrichment across different stages of conservation [26,27].
Advances in Artificial Intelligence (AI)-based segmentation and semantic enrichment of point clouds further enhance the automation of these processes [28,29]. State-of-the-art approaches, such as deep learning frameworks for point cloud classification and transformer-based models for scene reconstruction, are paving the way for semi-automated HBIM generation and integration with historical datasets [30,31].
1.2. Standards and Guidelines
In line with current trends, effective cultural heritage documentation relies on the fusion of multiple data acquisition methods, supported by specialised software. Parallel developments in technology and increasing access to precision measurement tools are making the creation of 3D models—both technical and semantic—not only more accessible, but also more cost-effective [12,32,33,34].
The object of the inventory and research in the presented article is a historic wooden building. Therefore, the digitisation of cultural heritage should follow the guidelines of the Commission Expert Group on the common European Data Space for Cultural Heritage (CEDCHE), which place great emphasis on strategic planning and proper management of digitisation processes [35,36]. It is important to take into account a long-term vision, including the sustainability of digital resources and their preservation and sharing.
To achieve adequate data quality according to CEDCHE, it is necessary to use standards for image quality, file formats and metadata that ensure interoperability. It is recommended to use formats such as TIFF for archiving and Europeana Data Model for metadata [26,27]. Safeguarding digitised data and creating backups is important [37]. The guidelines also recommend open access to the created resources, while respecting copyright.
1.3. Workflow Overview and Future AI Integration
Another key element is the sustainability of digital assets. The guidelines impose the need to regularly migrate data to new media and formats to prevent data loss. All digitisation and preservation processes should be carefully documented.
This paper outlines potential AI-based automation steps for future work, rather than presenting implemented solutions. The methods and analyses described provide a conceptual introduction to how AI could be applied for detection and segmentation, aiming to automatically extract data from documentation about solid elements.
AI steps were not implemented—they are noted as potential future research directions. Combining a simplified model with information obtained from an AI operation would require the development of algorithmic patterns capable of detecting where in the model a defined element is located [30]. This creates the possibility, in subsequent stages, to integrate BIM with GIS and perform advanced spatial analyses and interpretation of the results (Figure 1).
Figure 1.
A diagram showing the entire TLS/UAV → HBIM → HGIS workflow, with optional stages where AI could be integrated in future work.
Current solutions offer automatic detection limited to basic elements, i.e., walls and solids, based only on 3D models developed from scanning or direct measurements. Future developments may enable the identification of more complex elements and their types, which is not feasible with existing tools.
1.4. Research Gap, Positioning and Contributions
Reviews of image-based 3D modeling methods highlight the maturity of tools and standardized procedures [32]. In cultural heritage documentation, TLS-UAV fusion is widely used to fill gaps and improve model completeness (e.g., [12]). However, despite the technological maturity, there is still a lack of replicable, clearly defined workflows that guide the process from acquisition to HBIM/HGIS integration, including LOD specification and AI integration points. Existing studies often focus on algorithmic innovation or isolated case studies, leaving a practical gap in operational standardization and interoperability.
This study addresses the identified gap by introducing a comprehensive workflow that harmonizes sensor configuration and data acquisition, point cloud scaling and alignment, mesh generation and texturing, and translation into HBIM embedded in HGIS.
Unlike studies focused on algorithmic innovation, our contribution lies in standardizing the process and explicitly linking it to LOD requirements and AI-ready steps (e.g., point cloud cleaning, segmentation, and recognition of elements from 2D documentation). These AI steps are indicated as future possibilities and were not implemented in this study. This approach ensures reproducibility and facilitates reuse in conservation practice.
The main contributions of this paper include the development of a structured workflow for TLS-UAV integration leading to HBIM models within HGIS environments, the formalization of LOD100-LOD300 for heritage objects with defined accuracy requirements and data sources, and the identification of AI integration points (planned for future automation), covering classification, segmentation, and 2D-to-HBIM translation under specified quality constraints. Additionally, the study provides a comparative analysis reporting geometric differences between UAV-only and integrated models as an indicator of internal consistency, while outlining a protocol for absolute accuracy validation.
2. Materials and Methods
2.1. Data Acquisition and Processing Workflow
The object used for the research was the All Saints’ Church in Sobolów (Figure 2). The village is located in the Lesser Poland Voivodeship, in Bochnia County, Łapanów Municipality. The church lies on the Wooden Architecture Trail in Małopolska. It is a wooden single-nave church arranged on a rectangular plan. The structure consists of a nave and a three-sided closed presbytery, with a sacristy to the north and a chapel to the south. There is also an open vestibule on the west side of the nave, which connects to the arcades (soboty) that surround the whole building. The greater part of the church is covered by a gabled roof, with the remainder covered by a pent roof [38].
Figure 2.
The All Saints’ parish church in Sobolów.
2.1.1. Data Preparation and Methodology
A key element of this study was the acquisition of a point cloud with a Leica ScanStation P40 (Leica Geosystems AG, Heerbrugg, Switzerland) laser scanner. Seven scanner stations were distributed around the object, and the survey was oriented to reference spheres, which were laid out so that three spheres were visible both backwards and forwards from each station. Equipment settings are presented in Table 1.
Table 1.
Equipment settings.
A recurring problem is the lack of standardisation of input data. Vector data extracted from 2D designs often lacks a homogeneous layer and geometric structure, the objects themselves are often represented in slightly different ways, and the basic information needed to generate a digital twin is obscured by irrelevant elements.
After the initial steps, a new project was set up in the scanner along with the hardware settings.
In the scanning mode for all seven instrument stations, the scan + photos option was selected, which allows the images taken during the scan to be used for subsequent texturing of the object. To carry out the analysis, it was also necessary to acquire a point cloud generated from photographs taken with a DJI Air 2S (SZ DJI Technology Co., Shenzhen, China) Fly More Combo UAV. During the flight, 310 photographs of the building were taken and a video was recorded, the latter covering only the roof of the church. The photographs were taken in rows with sufficient longitudinal and transverse overlap to ensure the highest possible quality of the final product. An example photo is shown in Figure 3.
Figure 3.
Photo of a section of the church taken with UAV.
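For context, the exposure spacing required for a given longitudinal and transverse overlap follows directly from the camera geometry and flight altitude. The sketch below is a minimal illustration of this standard calculation; the sensor and lens parameters are rounded assumptions for a 1-inch-sensor drone camera, not the exact flight parameters used in this survey.

```python
# Illustrative flight-planning sketch: ground sampling distance (GSD)
# and exposure spacing for a target forward/side overlap.
# Camera parameters are assumptions, not the surveyed DJI Air 2S values.

def gsd_cm(sensor_width_mm, focal_mm, image_width_px, altitude_m):
    """Ground sampling distance in cm/px at a given flight altitude."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_mm * image_width_px)

def photo_spacing_m(footprint_m, overlap):
    """Distance between consecutive exposures for a given overlap fraction."""
    return footprint_m * (1.0 - overlap)

# Assumed camera model (approximate 1-inch sensor geometry)
sensor_w, sensor_h = 13.2, 8.8       # sensor size, mm
focal = 8.4                          # actual focal length, mm
img_w, img_h = 5472, 3648            # image size, px
altitude = 30.0                      # flight height above the object, m

gsd = gsd_cm(sensor_w, focal, img_w, altitude)   # cm/px
footprint_w = img_w * gsd / 100.0                # ground footprint across track, m
footprint_h = img_h * gsd / 100.0                # ground footprint along track, m

# Typical heritage-documentation overlaps: 80 % forward, 60 % side
base = photo_spacing_m(footprint_h, 0.80)        # along-track exposure spacing
lane = photo_spacing_m(footprint_w, 0.60)        # across-track lane spacing

print(f"GSD: {gsd:.2f} cm/px")
print(f"Footprint: {footprint_w:.1f} m x {footprint_h:.1f} m")
print(f"Exposure every {base:.1f} m, lanes every {lane:.1f} m")
```

Plugging in the assumed values yields a sub-centimetre GSD at 30 m, consistent with the image quality required for dense reconstruction of architectural detail.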
2.1.2. Data Analysis
Data processing was carried out using four programs: Agisoft Metashape Professional 2.2.0, Leica Cyclone 2022, CloudCompare 2.13.2 and MicroStation V8i. The point cloud acquired from the scanner was processed in Cyclone, while the images acquired from the drone were processed in Agisoft Metashape Professional 2.2.0. CloudCompare was then used to merge the two point clouds and create a differential model, while MicroStation was used to perform the further analyses and create the final 3D model.
2.1.3. Processing of Point Cloud Data from a Laser Scanner
The first step after exporting the data from the scanner was to set up a new database in Leica Cyclone, where all the point cloud processing was done [38]. Once the correct settings were entered and the clouds were imported, the reference spheres were registered on the scan from each station individually. With all the reference points registered at each station, the scans were oriented. All seven scans were then combined into a single point cloud, and the registration quality was checked to assess internal consistency between scans. For this purpose, a report was generated, an excerpt of which can be seen in Figure 4, providing information on the sphere fit errors, which in this case did not exceed 0.003 m for any of the stations.
Figure 4.
Errors in fitting reference spheres in Cyclone.
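Cyclone performs the sphere registration internally, but the underlying computation is a least-squares sphere fit to the points scanned on each target. A minimal numpy sketch of such a fit, with synthetic data standing in for real scan points, illustrates how fit errors like those reported in Figure 4 are quantified:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit to an (N, 3) point array.

    Solves |p|^2 = 2 p . c + (r^2 - |c|^2) for centre c and radius r.
    Returns (centre, radius, rms_residual), where the residual is the
    RMS deviation of point-to-centre distances from the fitted radius.
    """
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, k = x[:3], x[3]
    radius = np.sqrt(k + centre @ centre)
    dists = np.linalg.norm(p - centre, axis=1)
    rms = np.sqrt(np.mean((dists - radius) ** 2))
    return centre, radius, rms

# Synthetic check: noisy samples on a hypothetical 14.5 cm reference sphere
rng = np.random.default_rng(0)
true_c, true_r = np.array([1.0, 2.0, 0.5]), 0.145
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = true_c + dirs * (true_r + rng.normal(scale=0.001, size=(500, 1)))
c, r, rms = fit_sphere(pts)
print(c, r, rms)  # centre and radius near the true values, rms on the mm level
```

A fit residual of this kind, computed per station, is what the registration report summarises; the sub-3 mm values in Figure 4 indicate well-distributed, cleanly scanned targets.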
The final step was to clean up the point cloud, a section of which can be seen in Figure 5, export it in .e57 format and subject it to the 3D modelling process.
Figure 5.
A fragment of the cleaned point cloud. Author’s own work prepared in Leica Cyclone.
2.1.4. UAV Image Processing
The next stage involved processing the images taken with the UAV. All work was carried out in the Agisoft Metashape Professional 2.2.0 software. During the fieldwork, distances on the object were not measured directly; instead, the point cloud obtained from the laser scanner was used as a reference. Distances between characteristic points of the object were measured on this cloud in MicroStation V8i, and markers were then added in Agisoft at the locations where the distances had been determined. It was essential to select a local coordinate system in advance: after all the markers were placed on the images and the distances between them entered, the software automatically rescaled the point cloud and aligned all images, which significantly facilitated the subsequent integration of the two point clouds. Once all markers were placed, optimization was performed and errors were checked for both the markers and the distances; the errors amounted to approximately 9 mm for the markers and just over 8 mm for the distances. In the next phase, the dense point cloud was generated from the images (Figure 6). It was created at the highest quality setting and, after the process was completed, cleaned of measurement noise [38].
Figure 6.
Point cloud generated from images in Agisoft Metashape.
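The rescaling step can be summarised as a one-parameter least-squares problem: find the scale factor that best maps the arbitrary-scale photogrammetric distances onto the TLS-derived reference distances. The short numpy sketch below uses hypothetical distance values, not the actual survey measurements, to illustrate the computation and the resulting residuals:

```python
import numpy as np

def estimate_scale(model_d, reference_d):
    """Least-squares scale factor mapping model distances onto
    TLS-derived reference distances, plus per-distance residuals.

    Minimises sum((s * d_model - d_ref)^2); the closed-form solution
    is s = (d_ref . d_model) / (d_model . d_model).
    """
    m = np.asarray(model_d, dtype=float)
    r = np.asarray(reference_d, dtype=float)
    s = (r @ m) / (m @ m)
    residuals = s * m - r                 # metres, after rescaling
    rmse = np.sqrt(np.mean(residuals ** 2))
    return s, residuals, rmse

# Hypothetical marker-pair distances (metres): arbitrary-scale SfM
# model vs. TLS reference; values are illustrative only.
model = [3.51, 7.02, 12.28, 5.26]
reference = [4.012, 8.021, 14.034, 6.011]
s, res, rmse = estimate_scale(model, reference)
print(f"scale = {s:.4f}, distance RMSE = {rmse * 1000:.1f} mm")
```

Metashape solves a richer problem (scale enters the bundle adjustment together with camera poses), but the residual it reports for scale-bar distances has the same interpretation as the RMSE above.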
Then, further work was undertaken, which involved creating a three-dimensional model based on the dense point cloud and the Digital Elevation Model (DEM) (Figure 7).
Figure 7.
Church front—Model. Author’s own work prepared in Agisoft Metashape.
To clarify the UAV image processing pipeline, Figure 8 illustrates the sequence of operations applied in this study. The workflow begins with the acquisition of nadir and oblique images during UAV flights, ensuring sufficient longitudinal and transverse overlap for complete coverage of the object. The next step involves the placement of markers and scaling, where distances between characteristic points—measured in Microstation using TLS data as a reference—are imported into Agisoft Metashape to enable accurate rescaling and alignment of the photogrammetric model. Following this, image alignment and optimization are performed using structure-from-motion algorithms combined with bundle adjustment to minimize internal errors. Once alignment is complete, a dense point cloud is generated at the highest quality setting and subsequently cleaned to remove measurement noise. The point cloud is then converted into a mesh and textured using UAV imagery to enhance visual realism. Finally, the UAV-derived dataset is integrated with TLS data in CloudCompare, producing a unified point cloud suitable for HBIM.
Figure 8.
UAV image processing workflow with optional AI integration points.
The diagram also highlights optional integration points for artificial intelligence, which could automate several stages of the process in future research. These include automatic detection of homologous points during marker placement, adaptive error correction during image alignment, intelligent filtering and semantic segmentation of point clouds, and automated mesh optimization and texture alignment. AI-based registration and fusion of multi-source point clouds represent further opportunities for streamlining the integration phase.
This approach follows current UAV 3D reconstruction practices, including robust matching for oblique imagery and mesh optimization strategies, as discussed in “Full-automatic high-precision scene 3D reconstruction method with water-area intelligent complementation and mesh optimization for UAV images” [19] and viewpoint-invariant matching techniques [18].
2.1.5. Generation of a Differential Model in CloudCompare
After completing all the previously described steps, two point clouds were generated and saved in the .e57 format, then imported into the CloudCompare software, where a differential model was created. The imported clouds were in a uniform scale and shared identical coordinates, thanks to the earlier step in which characteristic points were identified on the laser scanner cloud and distances between them were measured. These points and distances were entered into Agisoft. The point cloud obtained from the laser scanner was used as the reference. The result of these operations was a differential model, shown in Figure 9. Upon analysis, only minor differences were observed. The most significant anomalies appeared on the roof turret, which was nearly absent in the UAV-derived point cloud, as well as on the rear wall of the church. The remaining parts show very small discrepancies, which can be seen on the differential model plot. Subsequently, the two clouds were merged using the same approach as previously used to combine segments from the laser scanner cloud. Once the integration was complete, the resulting point cloud was exported in .e57 format and used for generating the final 3D model.
Figure 9.
Cloud-to-Cloud (C2C) deviation map generated in CloudCompare (TLS as reference), illustrating relative point-to-point deviations on the façade.
The differences between the UAV-derived and TLS point clouds were evaluated in CloudCompare using the Cloud-to-Cloud (C2C) distance metric. For interpretation, only the overlapping regions of the two point clouds were considered conceptually (non-corresponding areas, such as the roof turret and wall–ground contact, would otherwise bias the distribution). Based on the C2C deviation map (Figure 9), the vast majority of points exhibit small deviations, predominantly within 0–6 cm, which is indicative of relative internal consistency between the TLS and UAV datasets. The spatial pattern suggests a distribution concentrated near 0 cm, with larger local deviations occurring in geometrically complex or less favorably imaged areas (e.g., the roof turret). This narrative summary complements the C2C map and is intended to support the interpretation of relative internal consistency.
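The C2C metric itself is straightforward: for each point of the compared cloud, the distance to its nearest neighbour in the reference cloud. The numpy sketch below reproduces the metric on synthetic data; CloudCompare accelerates the same computation with an octree, so the brute-force version here is for illustration only.

```python
import numpy as np

def c2c_distances(compared, reference, chunk=1024):
    """Unsigned Cloud-to-Cloud distances: for every point of the
    compared cloud, the distance to its nearest reference point.

    Brute-force nearest neighbour, evaluated in chunks to bound memory.
    """
    cmp_pts = np.asarray(compared, dtype=float)
    ref_pts = np.asarray(reference, dtype=float)
    out = np.empty(len(cmp_pts))
    for i in range(0, len(cmp_pts), chunk):
        block = cmp_pts[i:i + chunk]
        # (B, N) pairwise squared distances for this chunk
        d2 = ((block[:, None, :] - ref_pts[None, :, :]) ** 2).sum(-1)
        out[i:i + chunk] = np.sqrt(d2.min(axis=1))
    return out

# Toy example: a UAV-like cloud jittered by ~1 cm around a TLS reference
rng = np.random.default_rng(0)
tls = rng.uniform(0, 10, size=(2000, 3))
uav = tls + rng.normal(scale=0.01, size=tls.shape)
d = c2c_distances(uav, tls)
print(f"mean {d.mean() * 100:.2f} cm, 95th pct {np.percentile(d, 95) * 100:.2f} cm")
```

Summary statistics of this kind (mean, percentiles, histogram) are what the colour-coded map in Figure 9 visualises spatially.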
2.1.6. Creation of a 3D Model of the Object
The 3D model was generated as the final result of the described process. Its final version was based on the integrated point cloud, combining properly scaled data from the laser scanner and the point cloud obtained from UAV image processing. The model data were further enriched with identified features and attributes from the design documentation.
The overlaid point clouds were merged and exported in .e57 format. After verifying the quality of the combined dataset, it was imported into Agisoft Metashape Professional, where the 3D model was generated using the mesh-based method. Upon loading the previously integrated point cloud into the project, the software settings were reviewed, and the 3D model generation tool was launched with customized parameters to ensure the highest possible quality of the reconstructed object (Figure 10).
Figure 10.
Model view—right elevation of the object. Author’s own work prepared in Agisoft Metashape.
2.1.7. Comparison of Methods
In the documentation of historic architecture, various photogrammetric methods and software packages are used to enable 3D modeling of the same object using different technological approaches. UAV photogrammetry offers speed and cost efficiency but is prone to errors in shaded areas and at the junction between the building and the ground. TLS guarantees high consistency of architectural details but is time-consuming and has limitations in capturing upper parts of structures. The best results are achieved by integrating both methods, supported by specialized software: Agisoft Metashape (photogrammetry), Leica Cyclone (TLS), CloudCompare (analysis and integration), and MicroStation (verification and preparation of data for HBIM). A comparison of models shows that the integrated model contains significantly more mesh elements (approximately eight times higher than the UAV-only model) and provides greater completeness than models created separately. Table 2 summarizes the advantages and limitations of the two data streams and the benefits of integration in the context of heritage objects.
Table 2.
Comparative analysis of UAV photogrammetry, TLS, and integrated TLS-UAV workflows for heritage documentation.
Beyond these general characteristics, each method and software tool has distinct advantages and limitations that affect the final quality of 3D models of historic architecture. UAV photogrammetry stands out for its speed of data acquisition, low cost, and ability to document hard-to-reach elements, especially roofs and upper parts of buildings. However, UAV-derived data is sensitive to shading, flight geometry, and image quality, which can lead to gaps or distortions in complex areas. TLS provides very high consistency and point cloud density, enabling precise reproduction of architectural details such as windows, columns, and decorative elements. Its main drawback is the time required for planning and scanning, as well as difficulty in capturing high-altitude elements due to geometric constraints.
The software used in this workflow plays a critical role in shaping model quality. Agisoft Metashape is effective for processing UAV imagery and generating dense point clouds, meshes, and textures, but its performance depends on image quality. Leica Cyclone offers precise tools for TLS scan registration and error control, while CloudCompare provides an analytical environment for comparing and merging point clouds and creating differential models. MicroStation is used for geometric analysis and dimensional verification, as well as for preparing input data for HBIM systems. Each tool contributes unique functionality, but only their complementary use ensures a model that is accurate, complete, and suitable for HBIM integration and digital twin development.
This comparison aligns with state-of-the-art practices in photogrammetric modeling and supports the integration of TLS and UAV workflows for heritage documentation [12]. Advanced image matching strategies for oblique UAV imagery, such as viewpoint-invariant algorithms [18], further enhance the robustness of photogrammetric reconstruction. These approaches, combined with HBIM interoperability standards and LOD formalization [39], provide a foundation for creating AI-ready models that can be embedded in Historical Geographic Information Systems (HGIS) and Common Data Environments (CDE).
2.1.8. Analysis of Results
As a result of the data integration process, a point cloud consisting of 79,617,791 points was created. This total combines two point clouds: one generated from UAV images (50,680,249 points) and the other from laser scanning (28,937,542 points). In the case of UAV-based imagery, the number of points in the resulting cloud depends on the number of photographs, the camera sensor, and, most importantly, the resolution of the images. The differential model based on both point clouds shows the geometry, dimensions, and spatial position of the object, with no visually evident gross deformations; the quantitative differences reported below are interpreted as measures of internal consistency. For the purposes of the analysis, the laser scanning point cloud was designated as the reference. While this measurement method has limitations, such as small gaps in certain areas, visual analysis of the color-coded difference map helped identify zones requiring further optimization and correction in the 3D modeling process.
Subsequent quality analyses were conducted on mesh models: one created solely from the UAV point cloud and another based on the integrated point cloud. Both models were textured using photographs. Already at the model generation stage, significant differences in surface detail were noticeable. The model generated exclusively from images consisted of 8,725,440 flat surfaces (so-called “faces”), while the integrated model contained as many as 69,075,531 faces. These values illustrate the substantial difference in the level of detail between the two models. Visualizations composed of a larger number of surfaces are well suited for detailed technical documentation and the analysis of heritage structures. The number of elements in each model also affected the time required to generate the final 3D object. The first model took considerably less time to generate, with the texturing process for the integrated cloud consuming the most time.
The resulting model shows no missing parts, which contrasts sharply with the version based solely on UAV images. Notable deficiencies in the UAV-only model include the upper part of the roof, where the turret failed to generate, and a gap on the left side of the building near the ground contact point. In addition, the insufficient number of frontal images caused the top of the front wall to appear darkened. All of these deficiencies significantly reduced the quality of the model. Besides these major flaws, smaller errors were also visible in the textures, particularly in areas with structural breaks or stairs.
In the case of the final integrated model, no significant defects were observed. The generated object preserved its geometry and displayed details clearly on the roofs, pillars, and windows. Moreover, a small chapel located on the rear wall of the church was accurately reconstructed, without noticeable inconsistencies. Some minor issues with textures remained on the roof, such as color mismatches or slightly blurred areas; however, all details, including crosses and the turret, were fully rendered.
In the final stage of the analysis, the models were imported into Microstation 2023, where differences in the lengths and heights of the walls were examined. Measurements were also taken of two different windows and of the width of the side door on the church’s right elevation. Differences in distances between the two models were determined by measuring the same elements on the building elevations; these differences ranged from 0.8 cm to 3.2 cm. The values represent relative internal consistency between the TLS and UAV datasets and must not be interpreted as absolute geospatial accuracy, as no independent check points (CPs) were collected in this measurement campaign. In the integrated point cloud, details were significantly more visible and featured clearer edges; however, for the purposes of the analysis, only elements that could be accurately identified in both models were selected. Although the differences are relatively small and within an acceptable range, the data from the integrated point cloud proved to be more internally consistent and reliable for relative dimensional analysis, making it the preferred source for design work where relative consistency is sufficient.
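The reported relative comparison reduces to a per-element difference summary. The dimension values in the sketch below are hypothetical placeholders chosen only to reproduce the reported 0.8–3.2 cm range; they are not the actual measurements taken in MicroStation:

```python
# Hypothetical element dimensions (metres) measured on the same
# features in the UAV-only and integrated models; values illustrative.
dims_uav = {"window_1_w": 1.214, "window_2_w": 0.986,
            "side_door_w": 1.042, "nave_wall_h": 5.830}
dims_int = {"window_1_w": 1.226, "window_2_w": 0.994,
            "side_door_w": 1.074, "nave_wall_h": 5.846}

# Absolute per-element differences in centimetres, largest first
diffs_cm = {k: abs(dims_int[k] - dims_uav[k]) * 100 for k in dims_uav}
for name, d in sorted(diffs_cm.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} diff = {d:.1f} cm")
print(f"range: {min(diffs_cm.values()):.1f}-{max(diffs_cm.values()):.1f} cm")
```

Because no independent survey underlies either set of dimensions, such a summary characterises relative consistency between the two reconstructions, exactly as argued above.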
2.2. Accuracy Assessment (Planned Methodology)
Absolute accuracy will be evaluated using an independent set of check points (CPs) surveyed with a total station or Real-Time Kinematic (RTK) Global Navigation Satellite System (GNSS), ensuring CP precision exceeds model resolution. After TLS-UAV data fusion and georeferencing, model coordinates at CP locations will be extracted and compared with the surveyed CPs to compute ΔX, ΔY, ΔZ and 3D RMSE, along with error distributions and spatial residual maps. This protocol, illustrated in Figure 11, provides a rigorous and traceable approach to accuracy reporting.
Figure 11.
Workflow for absolute accuracy validation using independent check points (CPs).
The dimensional differences observed (0.8–3.2 cm) between the UAV-only and integrated models should be interpreted as a measure of internal consistency rather than as absolute accuracy metrics. Independent check points with higher geodetic precision were not implemented in this study; their inclusion is planned for subsequent measurement campaigns to enable full 3D validation against CPs and reporting of absolute XYZ errors. Figure 11 presents the proposed pipeline for absolute accuracy validation using independent check points, which will be implemented in future research stages. The process includes: network and CP design (total station/GNSS, with CP accuracy higher than model resolution), CP measurement independent of modeling, model registration and fusion, georeferencing, error analysis (ΔX, ΔY, ΔZ, 3D Root Mean Square Error (RMSE), error distribution maps), and reporting of absolute accuracy and uncertainty. Potential AI integration points are indicated for network design, model optimization (combining transformation and selection), and deficiency detection.
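The planned error metrics are straightforward to compute once CP residuals are available. The sketch below is a minimal illustration of the protocol's arithmetic, not the production validation code; it assumes model and surveyed CP coordinates are supplied as (n, 3) arrays in the same coordinate reference system and units.

```python
import numpy as np

def cp_error_report(model_xyz, surveyed_xyz):
    """Per-axis errors and 3D RMSE between model coordinates extracted at
    check-point locations and the independently surveyed CP coordinates.

    Both inputs are (n, 3) arrays of X, Y, Z in the same CRS and units.
    """
    model_xyz = np.asarray(model_xyz, dtype=float)
    surveyed_xyz = np.asarray(surveyed_xyz, dtype=float)
    d = model_xyz - surveyed_xyz                        # per-CP residuals (dX, dY, dZ)
    rmse_axis = np.sqrt(np.mean(d ** 2, axis=0))        # RMSE per axis
    rmse_3d = np.sqrt(np.mean(np.sum(d ** 2, axis=1)))  # 3D RMSE
    return {"dX": d[:, 0], "dY": d[:, 1], "dZ": d[:, 2],
            "RMSE_X": rmse_axis[0], "RMSE_Y": rmse_axis[1],
            "RMSE_Z": rmse_axis[2], "RMSE_3D": rmse_3d}
```

By construction, the 3D RMSE satisfies RMSE_3D² = RMSE_X² + RMSE_Y² + RMSE_Z², so the per-axis values reported alongside it are internally consistent.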
In practical terms, the proposed workflow mitigates typical limitations of single-source approaches, such as gaps in TLS data on roof structures and artifacts in UAV-derived models at wall–ground junctions. As a result, it provides a robust foundation for HBIM models at LOD200-LOD300 and supports seamless integration into HGIS environments. This approach introduces operational standardization and defines AI-ready steps (planned for future automation), such as point cloud cleaning, segmentation, and recognition of elements from 2D documentation, thereby addressing a critical implementation gap in heritage documentation workflows.
3. HBIM Model Development
A key aspect in the creation of a data infrastructure is the preparation of input for the development of a BIM model, which is intended to support all stages of the infrastructure’s life cycle, including design processes and asset management [40].
By linking geospatial data with BIM models, it becomes possible not only to represent the object itself, but also to analyze its behavior within a broader spatial context. A crucial factor in this regard is georeferencing, which assigns the model a specific position within a GIS coordinate reference system. This allows the building to be located in relation to terrain topography, infrastructure elements, and other surrounding structures.
An integrated approach combining Heritage Building Information Modeling (HBIM) with Historical Geographic Information Systems (HGIS) can be implemented at both the data exchange and application levels. As noted in [41], such integration enables the connection of construction-related data with the environmental and social conditions specific to a given historical period. As a result, a digital HBIM model embedded within an HGIS context becomes a powerful tool for multidimensional and diachronic analysis [42,43].
In practice, this means that traditional design and engineering documentation can be transformed into structured HGIS components, accessible to various stakeholders—from designers to cultural heritage managers. Digital twins created from this kind of integrated dataset support more informed and multidimensional management of heritage buildings.
In situations where standardized formats for HBIM-HGIS data exchange are not yet available, it is advisable to consider flexible integration methods. At the operational level, it is possible to combine CAD tools with a semantic approach, for example through the use of ontologies, which help preserve the semantic consistency of the data. An alternative solution with significant potential is the use of graph-based databases, which can serve as carriers of semantics and spatial relationships in complex information systems.
A general issue is the lack of standardization of input data. There is a need to develop solutions for structuring input—both in terms of the content of drawings and the organization of complete technical documentation—and to automate the digitalization process so that the data complies with BIM standards, as has been the case in countries like the United Kingdom, where BIM documentation standards have been mandatory since 2016 [44].
It is not possible to create a universal algorithm for such a detailed task as developing a model of a heritage object (digital twin). Often, the process requires an individual approach to unique architectural elements. For LOD100-300 levels of detail, this approach appears to be sufficient.
3.1. Level of Detail in HBIM
Heritage Building Information Modeling (HBIM) extends the classical BIM (Building Information Modeling) approach to account for the specific requirements of cultural heritage. In the case of historical objects, such as the wooden church in Sobolów, it is possible to use data from terrestrial laser scanning and photogrammetry to generate dense point clouds, which serve as the basis for creating HBIM models at various levels of geometric detail.
In this study, we adopted a classification based on Levels of Development (LOD), adapted to the needs of conservation documentation and integration with spatial information systems. The developed models range from LOD100 (a conceptual model) through LOD200 to LOD300 (survey-based geometric models differing in detail and completeness). Using LOD as a classification framework allows for clear identification of the scope and intended use of the modeled data, while also facilitating compatibility with other digital environments, such as Historical Geographic Information Systems (HGIS), with which HBIM models can be directly integrated.
Integrating HBIM with HGIS enables the building to be situated within its spatial and temporal context, which is particularly important for studying the urban evolution of a site and analyzing the relationship between the object and its surroundings across historical periods.
According to the proposals in [39] regarding the integration of LOG (Level of Geometry) and GOA (Grade of Accuracy) in HBIM, it is important to emphasize the standardization of geometric levels for various purposes—from documentation to design and conservation management throughout the lifecycle of the object. HBIM models, when shared in Common Data Environments (CDE), should be enriched with technical metadata describing accuracy, level of detail, and sources of primary data. This enhances their interoperability and facilitates reuse.
In addition to the integration of LOG (Level of Geometry) and GOA (Grade of Accuracy), recent research emphasizes the importance of advanced Level of Detail (LOD) rendering techniques for realistic visualization and efficient data management in HBIM environments. State-of-the-art methods address challenges related to mesh simplification and texture alignment while preserving structural integrity across multiple levels of detail. One notable approach is the multilevel structure-keeping mesh simplification combined with fast texture alignment, which enables the creation of highly realistic 3D models without compromising geometric accuracy or visual quality. This technique significantly improves performance in large-scale heritage documentation projects and supports interactive visualization in Common Data Environments (CDE). Such solutions are discussed in “A Novel LOD Rendering Method With Multilevel Structure-Keeping Mesh Simplification and Fast Texture Alignment for Realistic 3-D Models” [45], which represents a benchmark for future developments in HBIM and visualization.
It is also worth mentioning the solutions proposed in [46], such as the BIMExplorer tool, which enables semantic searching and browsing of HBIM models directly in a web interface, thus supporting conservation planning based not only on geometric data, but also on historical, material, and inspection-related information.
Furthermore, the research presented in [47] demonstrates that the effective implementation of HBIM for planned conservation requires appropriate systems for the semantic classification of technological elements, aligned with conservation practices and the logic of BIM environments. Such an approach allows for better organization of information while facilitating its ongoing updating and long-term analysis.
In addition to measurement data, the HBIM process can be supported by digitized historical sources such as old cadastral maps, architectural drawings, photographs, and engravings. Comparative analysis of these materials with contemporary data allows for the identification of transformations, stages of development, and the reconstruction of the historical appearance of the object and its surroundings.
This study links LOD classifications with source data requirements, including the specification of essential datasets, modeling tolerance thresholds (GOA) and documentation of data provenance. Furthermore, the HBIM models developed in this workflow were structured as AI-ready, incorporating standardized naming conventions, metadata on geometric fidelity and data sources, and explicit identification of non-standard families requiring recognition. Such formatting enhances the reusability of HBIM models within Common Data Environments (CDE) and facilitates their integration into Historical Geographic Information Systems (HGIS), thereby supporting interdisciplinary conservation and spatial–historical analysis.
3.2. Application of LOD Levels in the HBIM Model of the Sobolów Church
To illustrate differences in the levels of geometric detail in HBIM models, three versions of a digital model of the same object—the historic wooden church in Sobolów—were developed. Each version corresponds to a different Level of Development (LOD), in accordance with recommendations for use in documentation and conservation analysis.
Figure 12 provides an intuitive visual comparison between LOD100, LOD200, and LOD300, illustrating the progressive increase in geometric richness and semantic specification. LOD100 represents only the basic volumetric massing of the church, LOD200 introduces simplified but survey-based geometric elements, and LOD300 incorporates detailed openings, secondary architectural components, and refined proportions derived from the integrated TLS-UAV dataset. This visual differentiation clarifies both the geometric and semantic progression across LOD levels and strengthens the contribution of the LOD framework within the workflow.
Figure 12.
HBIM models of the church at three Levels of Detail (LOD100, LOD200, LOD300), illustrating the increasing geometric richness and semantic specification across LOD levels.
The decision to develop HBIM models in LOD100, LOD200, and LOD300 variants was made to ensure their compatibility with Historical Geographic Information Systems (HGIS). Thanks to their varied levels of geometric detail, these models can be used not only for technical documentation but also for spatial–historical analysis. LOD100 and LOD200 enable the integration of geolocation, building orientation, and basic structure data with historical and cartographic datasets. LOD300, on the other hand, allows for linking with more detailed sources such as cadastral maps, archival photographs, or conservation records. In this way, the models serve as a foundation for digitally representing the transformation of the object over time and its role in urban, landscape, and social contexts.
LOD400 represents a very high level of geometric detail, suitable for capturing architectural features (e.g., cornice profiles) with modeling tolerances on the order of ±1 cm. This level is typically used for advanced conservation analysis, restoration planning, and long-term building management. Such models also include extended material parameters and records of past technical interventions. In this study, LOD400 was not required, as the focus was on integrating spatial data with HGIS rather than producing full conservation-grade documentation.
The following levels of detail were adopted:
- LOD100—Conceptual Model
A simplified model used in orientation and early-stage spatial analysis.
- Represents the main massing of the building.
- Modeling is limited to primary shapes: building footprint, roof, and walls.
- No material information or interior components are included.
- LOD200—Geometric Model Based on Survey Data
A model reflecting real building dimensions (main volume) obtained from photogrammetry or terrestrial laser scanning.
- Represents basic elements such as walls and roof as simple volumes.
- No detailed architectural features are included.
- Required geometric accuracy: ±10 cm.
- A point cloud should be imported as a reference (e.g., RCP file).
- LOD300—Detailed Geometric Model Based on Survey Data
A model reflecting the building’s real dimensions obtained from photogrammetry or TLS.
- Includes primary geometric elements: walls (with window and door openings), roof, and secondary volumes (e.g., annexes, shelters).
- Required geometric accuracy: ±5 cm.
- A point cloud should be imported as a reference (e.g., RCP file).
- Custom Revit families should be created for non-standard elements.
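The tolerance thresholds above (±10 cm for LOD200, ±5 cm for LOD300) lend themselves to a simple automated check against reference dimensions from the integrated point cloud. The helper below is a hypothetical sketch (the names and structure are ours, not part of the workflow's tooling), assuming dimensions in metres:

```python
# Modeling tolerances per LOD, in metres, as specified in Section 3.2.
LOD_TOLERANCE_M = {"LOD200": 0.10, "LOD300": 0.05}

def within_lod_tolerance(modeled_m, reference_m, lod):
    """True if the modeled dimension deviates from the reference
    (TLS-UAV survey) dimension by no more than the LOD tolerance."""
    return abs(modeled_m - reference_m) <= LOD_TOLERANCE_M[lod]
```

For example, a wall length modeled as 2.08 m against a surveyed 2.00 m would pass at LOD200 but fail at LOD300.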
All model versions were saved in native Revit format (.RVT) with the appropriate LOD level indicated in the filename. These models can be integrated with Common Data Environments (CDE), HGIS platforms, and semantic data exploration tools (e.g., BIMExplorer), increasing their usefulness in interdisciplinary conservation and research projects.
3.3. HBIM Implementation and IFC/HGIS Integration
Figure 13 illustrates the workflow for preparing HBIM models for export in IFC format and their integration with HGIS, addressing interoperability requirements and potential automation using AI. The process includes: LOD mapping (LOD100-LOD300); a family strategy for non-standard elements, i.e., the definition and management of custom parametric families for heritage components within the HBIM environment, ensuring accurate representation and compatibility with IFC export; semantic enrichment (materials and conservation attributes); IFC export (version, MVD, interoperability); metadata integration (GOA/LOG, source datasets, CDE); and georeferencing. Optional AI integration points are indicated for automatic element recognition, attribute assignment, and automated classification; these represent future research directions based on recent advances in deep learning and semantic enrichment for HBIM.
Figure 13.
Proposed Workflow for HBIM Model Preparation and IFC Export with Integration into HGIS, Highlighting Potential AI Integration Points.
HBIM models (LOD100-300) were developed in Autodesk Revit 2024 based on the integrated point cloud. Models were exported to IFC 4.3 (alternatively IFC2x3) using Model View Definitions (Coordination View 2.0/Reference View), ensuring that object GUIDs and metadata (GOA/LOG, source datasets, accuracy class) were preserved for interoperability within Common Data Environments (CDE). Georeferencing was applied using an appropriate projected coordinate system, and local HBIM coordinates were linked via a Helmert transformation to align the IFC models with HGIS layers, including topography, cadastral data, and conservation zones. Geometry was not generated using AI in this study; AI integration is planned as a future enhancement for classification, element recognition, and 2D-to-HBIM attribution.
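A 3D Helmert (seven-parameter similarity) transformation of the kind used to link local HBIM coordinates to the projected CRS can be estimated from tie-point correspondences via the Umeyama/Procrustes solution. The numpy sketch below is an illustration under the assumption of at least three well-distributed, non-degenerate tie points; it is not the routine implemented in the commercial software used here.

```python
import numpy as np

def fit_helmert_3d(src, dst):
    """Least-squares similarity transform (scale, rotation R, translation t)
    mapping local coordinates `src` onto target coordinates `dst`, both
    (n, 3) arrays, via the Umeyama/Procrustes method."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    # cross-covariance, then SVD; D guards against an improper (mirrored) rotation
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def apply_helmert_3d(points, scale, R, t):
    """X' = t + scale * R @ X, applied to every row of `points`."""
    return t + scale * (np.asarray(points, float) @ np.asarray(R).T)
```

Applying the fitted parameters to the full model then places every HBIM vertex in the HGIS coordinate frame.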
4. Discussion
4.1. Discussion of Results and Comparison with Literature
The article by Mitka B. [48] discusses the potential applications of terrestrial laser scanning in the documentation of heritage buildings, emphasizing the high quality of data obtained with this technique. Data quality is the most important factor when creating a three-dimensional model; in the model developed for this study, the point cloud acquired from the laser scanner proved essential for accurately reproducing features such as windows and pillars. Beyond the scanning process itself, however, proper data preparation for further processing is equally important. This is discussed in the work by Klapa P. [33], which focuses on point cloud processing techniques, particularly filtering and the removal of measurement noise. These techniques were also significant in the context of this study, as they form a key step in producing a high-quality 3D model. In this case, the point cloud had to be thoroughly cleaned due to the use of mesh-based modeling.
The study in [7] highlights that integrating point clouds obtained from different methods results in 3D models that are more detailed and complete. In this research, data from both a laser scanner and a UAV were integrated. The results of the analysis and comparisons confirm that the integrated data exhibited higher geometric fidelity. The combined point cloud consisted of 79,617,791 points and was more faithful in capturing the geometry of the object compared to the individual point clouds acquired from the UAV (50,680,249 points) or the laser scanner (28,937,542 points).
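The point counts above add up exactly, consistent with a straightforward concatenation of the two co-registered clouds. A minimal sketch of the fusion step, with an optional voxel thinning helper that we include purely for illustration (the voxel size is an assumption, not a parameter from this study), might look as follows:

```python
import numpy as np

def fuse_clouds(tls_xyz, uav_xyz):
    """Concatenate two co-registered (n, 3) point clouds; assumes both
    are already aligned in a common coordinate frame."""
    return np.vstack([np.asarray(tls_xyz, float), np.asarray(uav_xyz, float)])

def voxel_thin(xyz, voxel=0.01):
    """Keep one point per voxel-sized cell, a common way to tame the
    redundancy of a fused cloud before meshing."""
    xyz = np.asarray(xyz, float)
    keys = np.floor(xyz / voxel).astype(np.int64)   # integer cell index per point
    _, idx = np.unique(keys, axis=0, return_index=True)
    return xyz[np.sort(idx)]
```

In practice the fused cloud inherits the scanner's dense facade detail and the UAV's roof coverage, at the cost of the larger point count reported above.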
In [4], the functionality of unmanned aerial vehicles (UAVs) is discussed. The author emphasizes that drones allow rapid and relatively inexpensive data acquisition compared to other measurement methods. However, UAV-based photogrammetry has certain limitations, which result in data of slightly lower quality than, for example, laser scanning; such data is therefore best used as auxiliary information. The findings of this study confirm these assumptions: photogrammetric data proved very useful for filling in missing parts of the model, and its acquisition was relatively quick, but gaps were observed in shaded areas, in zones requiring high-detail representation, and at the contact point between the building and the ground. This confirms that the accuracy of this method is limited.
The findings of this study are consistent with emerging international trends that emphasize operational standardization and interoperability in heritage documentation workflows. Recent research demonstrates the growing relevance of AI-assisted segmentation and automated HBIM generation for improving efficiency and semantic enrichment [23]. Similarly, advances in transformer-based architectures for scene reconstruction (e.g., VGGT) and optimized LOD rendering techniques [45] indicate a clear trajectory toward automation and high-fidelity modeling. The proposed workflow provides a structured basis for TLS-UAV integration and explicitly identifies AI-ready steps, which addresses the current implementation gap in heritage modeling practices. Furthermore, the integration of HBIM with Historical Geographic Information Systems (HGIS), as highlighted in recent studies [39,41], reinforces the importance of linking geometric accuracy with spatial and historical context, a principle embedded in the presented methodology.
4.2. Strengths of the Study
A key strength is the end-to-end operational standardization: from acquisition (target-based TLS registration and scaled UAV photogrammetry), through point-cloud fusion and mesh generation, to HBIM preparation with explicit LOD mapping and HGIS embedding. The manuscript documents a replicable configuration of widely used tools (Cyclone, Metashape, CloudCompare, Revit) and clarifies where AI-ready steps could be integrated later, without conflating such perspectives with the present experimental scope. The explicit LOD framing (100–300) and the demonstration that the integrated dataset supports richer geometry and semantics are directly actionable for conservation workflows.
4.3. Limitations, Uncertainty, and Sources of Error
First, absolute accuracy was not evaluated because independent CPs (total station/RTK GNSS) were not collected; therefore, all claims refer to internal consistency between TLS and UAV reconstructions. Second, the Cloud-to-Cloud (C2C) analysis was used qualitatively via a deviation map; a single model-wide histogram (Mean/SD/RMSE) was not reported due to non-overlapping regions (e.g., roof turret, occluded wall–ground contact), which would bias global statistics toward non-corresponding areas. Third, the workflow relies on closed-source commercial software; consequently, certain internal routines are not transparent, which introduces limits to full methodological reproducibility. Finally, results can be sensitive to acquisition geometry and parameter choices (e.g., image overlap, TLS setup distribution, dense-cloud filtering), which may affect local completeness and the stability of mesh detail; these effects are partially visible in roof textures and at structural discontinuities.
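The masking rationale described above can be made concrete: compute cloud-to-cloud nearest-neighbour distances and exclude points whose nearest counterpart lies beyond a cutoff, so that genuinely non-overlapping regions (the missing turret, the occluded wall-ground contact) do not inflate the global statistics. The brute-force numpy sketch below is for illustration only; a k-d tree would be required at real cloud sizes, and the cutoff value is an assumption rather than a parameter from this study.

```python
import numpy as np

def c2c_overlap_stats(cloud_a, cloud_b, max_dist=0.05):
    """C2C nearest-neighbour distances from A to B, restricted to the
    overlap region: points of A whose nearest neighbour in B is farther
    than `max_dist` are excluded from the statistics.
    Brute-force O(n*m); use a k-d tree for clouds of realistic size."""
    A = np.asarray(cloud_a, float)
    B = np.asarray(cloud_b, float)
    # pairwise distances, then nearest neighbour in B for every point of A
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)).min(axis=1)
    overlap = d <= max_dist
    return {"mean": d[overlap].mean(), "std": d[overlap].std(),
            "rmse": np.sqrt((d[overlap] ** 2).mean()),
            "excluded": int((~overlap).sum())}
```

Reporting the excluded-point count alongside Mean/SD/RMSE makes explicit how much of the cloud was treated as non-corresponding, which is the transparency the qualitative deviation map currently lacks.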
4.4. Implications for Heritage HBIM and LOD Practice
Operationally, the findings support using integrated TLS-UAV datasets as reference geometry with demonstrated internal consistency for LOD200-LOD300 HBIM, where modeling tolerances and semantic requirements can be met by combining high-fidelity TLS detail with UAV coverage of elevated or occluded elements. Positioning the models within HGIS facilitates spatial–historical analysis and multi-campaign comparisons. Importantly, such comparative analyses may be tracked over time to support long-term condition monitoring, which is particularly relevant for risk-prone or seismically active areas.
4.5. Integration of AI in the Process of Creating 3D and HBIM Models of Heritage Objects
Artificial intelligence can streamline heritage-modeling pipelines by automating key stages of data handling and interpretation. At the acquisition and preprocessing steps, AI-based routines may classify and clean point clouds, reducing noise and measurement artifacts and accelerating preparation of inputs for subsequent modeling. During documentation processing, recognition of graphical components in 2D technical drawings can enable structured conversion into BIM/HBIM-compliant entities, limiting manual transcription. For geometric modeling, AI methods for element detection and semantic classification on spatial/photogrammetric data have been shown to support the generation of structurally coherent HBIM components [28,49]. In addition, AI-based analytics can contribute to condition assessment by highlighting deformations, damage, or anomalies and by facilitating time-series comparisons for long-term monitoring. Recent transformer-based pipelines, including the Visual Geometry Grounded Transformer (VGGT), indicate end-to-end scene-reconstruction capabilities with potential relevance to heritage scenarios [20]. In the present study, such AI procedures were not implemented; they are discussed as prospective, optional extensions intended to augment a standardized TLS-UAV workflow and to enhance semantic enrichment and integration with GIS/HGIS contexts.
4.6. Future Challenges and Development Pathways
Future work will include independent CP-based validation (ΔX, ΔY, ΔZ, 3D RMSE, residual maps) to report absolute accuracy and uncertainty. The deviation analysis will be extended with region-of-overlap C2C histograms (with robust masking), and a parameter-sensitivity study will be conducted for dense-cloud and mesh reconstruction. Semi-automated steps for element recognition and data attribution (AI-ready) are considered prospective and remain outside the scope of the present experiment.
To support reproducibility, user-controlled processing parameters will be consolidated in openly accessible resources in future work, while vendor-internal routines will be explicitly acknowledged as non-transparent.
Future research may focus on matching algorithms to link components extracted from 2D project documentation with simplified photogrammetric models, enabling the generation of comprehensive 3D outputs while reducing acquisition and processing time. Such low-resolution models could serve as an intermediate step for AI-based extraction of information from CAD drawings, supporting semi-automated HBIM generation. This approach would require advances in detection and segmentation methodologies [50] and rigorous input standardization to ensure reliable identification of volumetric components and effective integration with high-resolution TLS-UAV datasets.
Current barriers to full automation include the lack of input data standardization, varying quality of TLS and UAV point clouds, difficulties in precise alignment, and the manual conversion of 2D documentation into HBIM. In this context, the TLS-UAV integration presented here should be regarded as a necessary baseline that enables subsequent algorithmic advances in normalization, adaptive reconstruction, and AI-driven semantic enrichment. Effective HBIM implementation also depends on structured information management strategies that ensure interoperability and support long-term conservation workflows [25].
There is a need for robust procedures to automatically normalize and align point clouds, detect data gaps, and support adaptive reconstruction of elements inaccessible to UAV or TLS; combined with AI, such procedures could enable semi-automated enrichment of HBIM models with material and structural attributes, tighter 2D-3D integration, and stable, repeatable difference analyses for conservation diagnostics over time.
5. Summary and Conclusions
The aim of this study was to analyze three-dimensional models created using data acquired through laser scanning and unmanned aerial vehicles (UAVs). The focus was on a comparative assessment of the internal geometric consistency, completeness, and methodology of these surveying techniques, as well as their integration for HBIM and spatial–historical analysis.
The project began with the design and execution of a field survey that included both terrestrial laser scanning and UAV photography. The collected data were processed using Leica Cyclone, Agisoft Metashape PRO, Microstation V8i, and CloudCompare software. The final result was a mesh-based 3D model of the church, built from an integrated point cloud. Final analyses demonstrated that using integrated data from multiple sources—such as laser scanning and UAV imagery—enables the generation of higher-quality and more complete 3D models [51]. For example, laser scanning allows for detailed capture of architectural features, while drone images effectively supplement missing areas that are difficult to access by scanner, such as the roof or turret. Automated model generation accelerated the workflow but required careful preparation of input data, including proper filtering and thorough removal of measurement noise.
During the process, several issues were encountered. The scanner-based point cloud had gaps on the roof and in shaded areas. Additionally, a small chapel located at the rear of the church was problematic for the scanner. These deficiencies were resolved using drone images of sufficient quantity and quality, along with carefully placed markers. This made it possible to generate a complete point cloud including the roof, turrets, and the aforementioned chapel. In contrast, the drone data contained more errors near the ground and where walls joined the roof. However, combining the two datasets enabled the generation of a complete and detailed point cloud, which served as the basis for building the final 3D model. The high level of detail and large number of points negatively impacted processing time and required high computational performance.
Both technologies used in the study have their advantages and disadvantages. Photogrammetry is significantly cheaper and faster in the data acquisition phase, whereas laser scanning requires careful planning of measurement positions, making it more time-consuming. Scanning is clearly the better choice when the object is large, complex, and rich in architectural detail that must be accurately represented. Conversely, drone imagery is better suited for smaller, simpler objects with fewer features. Nonetheless, both methods can be used to create technical documentation for heritage buildings.
The integrated TLS-UAV dataset provides a detailed and consistent geometric basis that can support repeated measurements and comparative analyses in future campaigns. Importantly, such analyses can be tracked over time, supporting long-term monitoring of the building’s condition [14,15,29], which is especially valuable in seismically active areas [52]. This reinforces the applicability of the proposed workflow for heritage documentation and ongoing condition assessment, even though absolute accuracy could not be evaluated without independent check points.
Author Contributions
Conceptualization, J.B.-B., I.P. and G.W.; methodology, J.B.-B., I.P. and G.W.; software, J.B.-B., I.P. and G.W.; validation, J.B.-B., I.P. and G.W.; formal analysis, J.B.-B., I.P. and G.W.; investigation, J.B.-B., I.P. and G.W.; resources, J.B.-B., I.P. and G.W.; data curation, J.B.-B., I.P. and G.W.; writing—original draft preparation, J.B.-B., I.P. and G.W.; writing—review and editing, J.B.-B., I.P. and G.W.; visualization, J.B.-B., I.P. and G.W.; supervision, J.B.-B., I.P. and G.W.; project administration, J.B.-B., I.P. and G.W.; funding acquisition, J.B.-B., I.P. and G.W. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to privacy restrictions related to the cultural heritage site and institutional data protection policies.
Acknowledgments
The authors would like to acknowledge the technical and administrative support provided by their home institutions and all individuals who contributed to the research activities and data processing related to this study.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| Abbreviation | Full Term |
| --- | --- |
| AI | Artificial Intelligence |
| BIM | Building Information Modeling |
| HBIM | Heritage Building Information Modeling |
| HGIS | Historical Geographic Information System |
| TLS | Terrestrial Laser Scanning |
| UAV | Unmanned Aerial Vehicle |
| DEM | Digital Elevation Model |
| GNSS | Global Navigation Satellite System |
| GIS | Geographic Information System |
| LOD | Level of Detail/Level of Development |
| CDE | Common Data Environment |
| CAD | Computer-Aided Design |
| NMT | Numerical Terrain Model |
| RGB | Red, Green, Blue (color space) |
| RVT | Revit File Format |
| LOG | Level of Geometry |
| GOA | Grade of Accuracy |
| LiDAR | Light Detection and Ranging |
References
- Bęcek, K.; Gawronek, P.; Klapa, P.; Kwoczyńska, B.; Matuła, P.; Michałowska, K.; Mikrut, S.; Mitka, B.; Piech, I.; Makuch, M. Modelowanie i Wizualizacja Danych 3D na Podstawie Pomiarów Fotogrametrycznych i Skaningu Laserowego (3D Data Modeling and Visualization Based on Photogrammetric and Laser Scanning Measurements); WSIiE: Rzeszów, Poland, 2015. [Google Scholar]
- Bernat, M.; Byzdra, A.; Chmielecki, M.; Laskowski, P.; Orzechowski, J.; Rzepa, S.; Szulwic, J.; Ziółkowski, P. Zastosowanie Naziemnego Skaningu Laserowego i Przetwarzanie Danych: Inwentaryzacja i Inspekcja Obiektów Budowlanych (Terrestrial Laser Scanning and Data Processing: Inventory and Inspection of Construction Objects); Wydawnictwo Polskiego Internetowego Informatora Geodezyjnego: Politechnika Gdańska: Gdańsk, Poland, 2016. [Google Scholar]
- Bakuła, K.; Ostrowski, W. Zastosowanie cyfrowej kamery niemetrycznej w fotogrametrii lotniczej na wybranych przykładach (Use of a Non-Metric Digital Camera in Aerial Photogrammetry: Selected Case Studies). Arch. Fotogram. Kartogr. I Teledetekcji 2012, 24, 11–20. [Google Scholar]
- Burdziakowski, P. Przegląd budowy i funkcjonalności współczesnych bezzałogowych statków powietrznych do celów fotogrametrycznych (A review of construction and functionality of photogrammetric unmanned aerial vehicles). Biul. Wojsk. Akad. Tech. 2016, 65, 69–91. [Google Scholar]
- Drzewicki, R.; Bujakiewicz, A. Ocena dokładności modelu budynku z bardzo gęstej chmury punktów pozyskanej z integracji zdjęć o różnej geometrii (Accuracy Assessment of Building Models from Dense Point Clouds Obtained by Integrating Images with Varying Geometry). Arch. Fotogram. Kartogr. I Teledetekcji 2018, 30, 83–93. [Google Scholar]
- Wyjadłowski, M.; Muszyński, Z.; Kujawa, P. Application of laser scanning to assess the roughness of the diaphragm wall. Sensors 2021, 21, 7275. [Google Scholar] [CrossRef]
- Piech, I.; Adam, T.; Dudas, P. 3D Modeling with the Use of Photogrammetric Methods. Arch. Civ. Eng. 2022, 68, 481–500. [Google Scholar]
- Mitka, B.; Mikołajczyk, Ł.; Noszczyk, T. Modelowanie obiektów przemysłowych z użyciem chmur punktów: Możliwości i wyzwania (Modeling Industrial Objects Using Point Clouds: Opportunities and Challenges). Infrastrukt. i Ekol. Teren. Wiej. Pol. Akad. Nauk. Oddział w Krakowie 2013, 2/II/2013, 5–16. [Google Scholar]
- Borkowski, A.; Józków, G. Ocena dokładności modelu 3D zbudowanego na podstawie danych skaningu laserowego—Przykład Zamku Piastów Śląskich w Brzegu (Accuracy Evaluation of a 3D Model Based on Laser Scanning Data: The Case of the Piast Castle in Brzeg). Arch. Fotogram. Kartogr. I Teledetekcji 2012, 23, 37–47. [Google Scholar]
- Kędzierski, M.; Walczykowski, P.; Fryśkowska, A. Aspekty pozyskiwania danych z Naziemnego Skaningu Laserowego (Aspects of Data Acquisition from Terrestrial Laser Scanning). Biul. Wojsk. Akad. Tech. 2010, 59, 211–221. [Google Scholar]
- Luhmann, T.; Chizhova, M.; Gorkovchuk, D. Fusion of UAV and Terrestrial Photogrammetry with Laser Scanning for 3D Reconstruction of Historic Churches in Georgia. Drones 2020, 4, 53. [Google Scholar] [CrossRef]
- Chatzistamatis, S.; Kalaitzis, P.; Chaidas, K.; Chatzitheodorou, C.; Papadopoulou, E.E.; Tataris, G.; Soulakellis, N. Fusion of TLS and UAV photogrammetry data for post-earthquake 3D modeling of a cultural heritage church. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 143–150. [Google Scholar] [CrossRef]
- Martínez-Carricondo, P.; Carvajal-Ramírez, F.; Yero-Paneque, L.; Agüera-Vega, F. Combination of nadiral and oblique UAV photogrammetry and HBIM for the virtual reconstruction of cultural heritage. Case study of Cortijo del Fraile in Níjar, Almería (Spain). Build. Res. Inf. 2020, 48, 140–159. [Google Scholar] [CrossRef]
- Wang, C.; Shi, H.; Song, B.; Cai, L.; Wu, L. Hierarchical weighted LSTM with one-class classifier for preventive protection of cultural heritage in museums. ACM J. Comput. Cult. Herit. 2025, 4, 4. [Google Scholar] [CrossRef]
- Dolińska, N.; Wojciechowska, G.E.; Bednarz, Ł.J. Monitoring environmental and structural parameters in historical masonry buildings using IoT LoRaWAN-based wireless sensors. Buildings 2025, 15, 282. [Google Scholar] [CrossRef]
- Jo, Y.H.; Hong, S. Three-dimensional digital documentation of cultural heritage site based on the convergence of terrestrial laser scanning and unmanned aerial vehicle photogrammetry. ISPRS Int. J. Geo-Inf. 2019, 8, 53. [Google Scholar] [CrossRef]
- Owda, A.; Balsa-Barreiro, J.; Fritsch, D. Methodology for digital preservation of the cultural and patrimonial heritage: Generation of a 3D model of the Church St. Peter and Paul (Calw, Germany) by using laser scanning and digital photogrammetry. Sens. Rev. 2018, 38, 282–288. [Google Scholar] [CrossRef]
- Xiao, X.; Li, D.; Guo, B.; Jiang, W.; Zang, Y.; Liu, J. A Robust and Rapid Viewpoint-Invariant Matching Method for Oblique Images. Geomat. Inf. Sci. Wuhan Univ. 2016, 41, 1151–1159. [Google Scholar] [CrossRef]
- Guo, B.; Ge, Y.; Xiao, X.; Wang, C.; Gong, J.; Li, D. Full-Automatic High-Precision Scene 3D Reconstruction Method with Water-Area Intelligent Complementation and Mesh Optimization for UAV Images. Int. J. Digit. Earth 2024, 17, 2317441. [Google Scholar] [CrossRef]
- Wang, J.; Chen, M.; Karaev, N.; Vedaldi, A.; Rupprecht, C.; Novotny, D. VGGT: Visual Geometry Grounded Transformer. arXiv 2025, arXiv:2503.11651. [Google Scholar] [CrossRef]
- Aleissaee, A.A.; Kumar, A.; Anwer, R.M.; Khan, S.; Cholakkal, H.; Xia, G.S.; Khan, F.S. Transformers in remote sensing: A survey. Remote Sens. 2023, 15, 1860. [Google Scholar] [CrossRef]
- Wang, R.; Ma, L.; He, G.; Johnson, B.A.; Yan, Z.; Chang, M.; Liang, Y. Transformers for remote sensing: A systematic review and analysis. Sensors 2024, 24, 3495. [Google Scholar] [CrossRef]
- Wang, Y.; Zhou, P.; Geng, G.; An, L.; Zhou, M. Enhancing point cloud registration with transformer: Cultural heritage protection of the Terracotta Warriors. Herit. Sci. 2024, 12, 314. [Google Scholar] [CrossRef]
- Turchetti, F.; Cuca, B.; Oreni, D.; Agapiou, A. Recording of Historic Buildings and Monuments for FEA: Current Practices and Future Directions. Heritage 2025, 8, 55. [Google Scholar] [CrossRef]
- Parente, M.; Bruno, N.; Ottoni, F. HBIM and Information Management for Knowledge and Conservation of Architectural Heritage: A Review. Heritage 2025, 8, 306. [Google Scholar] [CrossRef]
- Ioannides, M.; Patias, P. (Eds.) 3D Research Challenges in Cultural Heritage III: Complexity and Quality in Digitisation; Springer: Berlin/Heidelberg, Germany, 2023. [Google Scholar]
- Yiğit, A.Y.; Uysal, M. Detection of cracks in cultural heritage buildings using UAV photogrammetry-based digital twin. ACM J. Comput. Cult. Herit. 2025, 18, 13. [Google Scholar] [CrossRef]
- Pierdicca, R.; Paolanti, M.; Matrone, F.; Martini, M.; Morbidoni, C.; Malinverni, E.S.; Lingua, A.M. Point cloud semantic segmentation using a deep learning framework for cultural heritage. Remote Sens. 2020, 12, 1005. [Google Scholar] [CrossRef]
- Niccolucci, F.; Felicetti, A. Digital twin sensors in cultural heritage applications. Sensors 2024, 24, 3978. [Google Scholar] [CrossRef]
- Ma, J.W.; Czerniawski, T.; Leite, F. Automated scan-to-building information modeling. In Research Companion to Building Information Modeling; Edward Elgar Publishing: Cheltenham, UK, 2022; pp. 169–189. [Google Scholar]
- Capolupo, A. Accuracy Assessment of Cultural Heritage Models Extracting 3D Point Cloud Geometric Features with RPAS SfM-MVS and TLS Techniques. Drones 2021, 5, 145. [Google Scholar] [CrossRef]
- Remondino, F.; El-Hakim, S. Image-based 3D modelling: A review. Photogramm. Rec. 2006, 21, 269–291. [Google Scholar] [CrossRef]
- Klapa, P. TLS Point Cloud as a Data Source for Multi-LOD of 3D Models. Geomat. Landmanag. Landsc. 2022, 2, 63–73. [Google Scholar]
- Piech, I.; Klapa, P.; Szatan, P. The Use of Terrestrial Laser Scanning in the Preservation of Valuable Architectural Objects. Geomat. Landmanag. Landsc. 2021, 3, 53–64. [Google Scholar] [CrossRef]
- European Commission. Expert Group on the Common European Data Space for Cultural Heritage (CEDCHE). 2021. Available online: https://digital-strategy.ec.europa.eu/en/news/commission-proposes-common-european-data-space-cultural-heritage (accessed on 19 January 2026).
- European Commission. Recommendation (EU) 2021/1970 of 10 November 2021 on a Common European Data Space for Cultural Heritage. Off. J. Eur. Union 2021, L 401, 5–16. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32021H1970 (accessed on 19 January 2026).
- Panezi, A. Europe’s new renaissance: New policies and rules for digital preservation and access to European cultural heritage. Columbia J. Eur. Law 2017, 24, 596. [Google Scholar]
- Knaś, P. Accuracy Analysis of 3D Models Created Using Different Photogrammetric Methods. Master’s Thesis, Hugo Kołłątaj University of Agriculture in Krakow, Faculty of Environmental Engineering and Geodesy, Krakow, Poland, July 2024. [Google Scholar]
- Brumana, R.; Stanga, C.; Banfi, F. Models and scales for quality control: Toward the definition of specifications (GOA-LOG) for the generation and re-use of HBIM object libraries. Appl. Geomat. 2021, 13, 317–337. [Google Scholar] [CrossRef]
- Volk, R.; Stengel, J.; Schultmann, F. Building information modeling (BIM) for existing buildings—Literature review and future needs. Autom. Constr. 2014, 38, 109–127. [Google Scholar] [CrossRef]
- Zhu, J.; Wu, P. BIM/GIS data integration from the perspective of information flow. Autom. Constr. 2022, 136, 104166. [Google Scholar] [CrossRef]
- Borkowski, A.S.; Osińska, N.; Szymańska, N. Przegląd dotychczasowych rozwiązań na poziomie aplikacyjnym w zakresie integracji technologii BIM i GIS (Review of Existing Solutions for BIM-GIS Integration at the Application Level). Builder 2022, 305, 64–69. [Google Scholar] [CrossRef]
- Gotlib, D.; Gnat, M. Conversion between BIM and GIS models: Objectives and selected issues. Rocz. Geomatyki 2018, 16, 19–31. [Google Scholar]
- UK Government. Government Construction Strategy 2016–2020. 2016. Available online: https://assets.publishing.service.gov.uk/media/5a80ac49ed915d74e622fca7/Government_Construction_Strategy_2016-20.pdf (accessed on 19 January 2026).
- Ge, Y.; Xiao, X.; Guo, B.; Shao, Z.; Gong, J.; Li, D. A Novel LOD Rendering Method with Multilevel Structure-Keeping Mesh Simplification and Fast Texture Alignment for Realistic 3-D Models. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5640519. [Google Scholar] [CrossRef]
- Quattrini, R.; Malinverni, E.S.; Clini, P.; Nespeca, R.; Orlietti, E. From TLS to HBIM: High-quality semantically aware 3D modeling of complex architecture. ISPRS Arch. 2015, 40, 367–374. [Google Scholar] [CrossRef]
- Adami, A.; Bruno, N.; Rosignoli, O.; Scala, B. HBIM for planned conservation: A new approach to information management. In Proceedings of the 23rd Conference on Cultural Heritage and New Technologies, CHNT 23, Vienna, Austria, 12–15 November 2018. [Google Scholar]
- Mitka, M. Możliwości zastosowania naziemnych skanerów laserowych w procesie dokumentacji i modelowania obiektów zabytkowych (Possibilities of Using Terrestrial Laser Scanners in the Documentation and Modeling of Historic Objects). Arch. Fotogram. Kartogr. I Teledetekcji 2007, 17, 525–534. [Google Scholar]
- Mahmoudnejad, A.; Andaroodi, E. Characterizing geometric decoration styles of the Ilkhanid period using advanced image analysis. ACM J. Comput. Cult. Herit. 2025, 14, 14. [Google Scholar] [CrossRef]
- Yin, C.; Wang, B.; Gan, V.J.L.; Wang, M.; Cheng, J.C.P. Automated semantic segmentation of industrial point clouds using ResPointNet++. Autom. Constr. 2021, 130, 103874. [Google Scholar] [CrossRef]
- Bac-Bronowicz, J.K.; Wojciechowska, G.; Piech, I.A. 3D data acquisition for spatio-temporal analysis of architectural and urban environment changes of Wrocław Cathedral. Civ. Environ. Eng. Rep. 2025, 35, 314–329. [Google Scholar] [CrossRef]
- Mosoarca, M.; Onescu, I.; Onescu, E.; Anastasiadis, A. Seismic vulnerability assessment methodology for historic masonry buildings in the near-field areas. Eng. Fail. Anal. 2020, 115, 104662. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.