Abstract
Historic buildings and urban streetscapes face increasing threats from climate change, development, and aging infrastructure, creating a pressing need for accurate and scalable documentation methods. This review assesses the combined use of photogrammetry and unmanned aerial vehicle (UAV) technologies in preserving built cultural heritage. We systematically analyze the end-to-end workflow, from data acquisition using diverse UAV platforms and sensor payloads to the processing of imagery into highly detailed and accurate 3D models in photogrammetry software. Through case studies, including the mapping of ancient Maya sites in the Yucatán Peninsula and the conservation of the Notre Dame Cathedral, the review highlights the accuracy, efficiency, and accessibility offered by this technological synergy, underscoring its significance for heritage conservation, research, and the development of digital twins. Furthermore, it explores how these advancements foster public engagement and virtual accessibility, enabling immersive experiences and enriched educational opportunities. The paper also critically assesses the inherent technical, ethical, and legal challenges associated with this methodology, offering a balanced perspective on its application. By synthesizing the current knowledge, this review proposes future research trajectories and advocates for best practices, aiming to guide heritage professionals in leveraging photogrammetry and UAVs for the effective documentation and safeguarding of global cultural heritage.
1. Introduction
1.1. Background of Cultural Building Heritage
Historic buildings and streetscapes tell the stories of who we are. They are tangible links to the past that capture the architectural skill, artistic values, and social histories of the generations before us. Whether in the fine details of a façade or the layout of an old city street, these places anchor the community’s identity and offer a direct link to its history [,]. In this sense, preserved buildings are not static monuments but active classrooms to experience the past []. Preserving this heritage is therefore not just about protecting bricks and mortar; it is about sustaining cultural identity and ensuring that future generations can learn from and connect with their history.
However, their most captivating features—from weathered carvings to historic urban layers—also make them fragile. This vulnerability is now amplified. Climate change brings harsher weather that accelerates material decay [], while the economics of urban growth often sacrifice conservation for new development []. These forces, along with aging infrastructure, create a plethora of threats that can lead to the permanent loss of our built heritage.
To counter these risks, we need documentation methods that are both precise and practical. Although traditional surveys remain important, Letellier reminds us that they are often too slow, costly, and difficult to apply on a large scale [,]. This limitation has pushed the heritage field toward photogrammetry technologies, which can produce detailed 3D models of entire sites with remarkable speed and accuracy [,]. These digital replicas serve as precise records, safeguarding cultural treasures for future generations and providing a basis for research, education, and conservation. Thus, they are becoming essential tools, allowing us to create a lasting digital record that can protect our architectural legacy from the threats of time and change.
1.2. The Rise of Photogrammetry for Cultural Building Heritage
The pressing need for more effective documentation has spurred the adoption of “reality capture”, a term for technologies that create dimensionally accurate 3D models from physical objects and environments []. For built cultural heritage, these methods provide an invaluable tool for preservation and analysis, supporting crucial restoration and educational work by capturing detailed digital records of historic structures []. Central to this approach is photogrammetry, the science of extracting 3D information from overlapping photographs. As this field has matured, the development of specialized software has become instrumental in streamlining the complex pipeline from image acquisition to a fully realized 3D model.
Beyond geometric documentation, photogrammetry also plays a critical role in the study of building pathology. High-resolution, image-based models allow researchers and conservators to detect and monitor cracks, surface erosion, moisture ingress, and biological growth over time []. Because these surveys capture texture and color, as well as shape, they can reveal subtle material changes that conventional inspections may miss. This information supports predictive maintenance, helping stakeholders to prioritize interventions before damage becomes severe.
There are numerous leading photogrammetry software programs, such as RealityCapture (Reality Scan v2.0) [], Agisoft Metashape v2.2 [], and Pix4Dmapper v4.10 [], and open-source alternatives like Meshroom []. RealityCapture is renowned for its high-speed processing and exceptional detail, making it a favorite in professional surveying and VFX []. Agisoft Metashape offers a robust feature set suitable for a wide array of applications, from archaeology to construction []. Pix4Dmapper is particularly strong in UAV-based mapping and analysis []. Meanwhile, Meshroom provides a powerful and accessible entry point into photogrammetry for researchers and hobbyists [].
At the same time, the hardware for data acquisition has advanced equally rapidly, with unmanned aerial vehicles (UAVs) (or drones) becoming essential tools for heritage documentation []. UAVs overcome the physical limitations of ground-based surveys, safely accessing high-altitude features and systematically covering large areas in a fraction of the time []. Equipped with high-resolution cameras, they can capture both nadir (top-down) and oblique (angled) imagery, providing the comprehensive dataset required for a complete and accurate 3D reconstruction. Additionally, the combination of UAVs with light detection and ranging (LiDAR) and 3D laser scanning is also an emerging trend in aerial data capture []. Photogrammetry can be integrated with these methods to produce highly precise 3D data.
The combination of UAV-based photogrammetry and related technologies unlocks a highly efficient documentation workflow. UAVs gather rich, overlapping imagery across hard-to-reach areas, while photogrammetry software such as RealityScan processes these photographs into accurate, georeferenced 3D models. In tandem, they streamline every stage, from broad site surveys to the capture of fine architectural details, offering heritage professionals a fast, scalable way to preserve at-risk structures.
1.3. Aim and Scope
This review provides a focused analysis of the workflow that combines UAVs with photogrammetry technology for the documentation of cultural buildings and street heritage. The primary aim is to move beyond a general overview of photogrammetry and instead to focus on the specific outcomes, advantages, and challenges inherent in this widely adopted technological pairing.
To achieve this, the scope of this paper is defined by four objectives:
- To analyze and detail the complete workflow, from UAV data acquisition to final model generation in photogrammetry software, as it applies specifically to the complex geometries of historic buildings and urban streetscapes;
- To identify the unique contributions that this specific hardware and software combination offers the heritage field, such as enhanced model fidelity, scalability, and efficiency compared to other methods;
- To critically examine the distinct limitations and challenges that arise from this workflow, including issues of data quality, processing bottlenecks, and the practical hurdles in field deployment in heritage contexts;
- To synthesize these findings to propose future research directions and establish a set of best practices for heritage professionals using UAV photogrammetry for documentation and preservation.
2. Review Methodology
2.1. Data Sources and Search Strategy
To support the objectives of this review, we conducted a comprehensive literature search across academic databases with high relevance to cultural heritage, geospatial technologies, and digital documentation. Background studies are organized into two sections: databases and registers (DNR) and other methods. The DNR category includes Scopus, Web of Science, IEEE Xplore, and the ACM Digital Library, and we also targeted specialist journals in UAVs, photogrammetry, and LiDAR (e.g., Drones, Remote Sensing). Other methods comprise preprint servers (arXiv), vendor and technical documentation for relevant software and hardware, and publications from leading cultural protection organizations.
The search strategy was guided by a targeted set of “keywords and Boolean operators” to capture the literature [] at the intersection of UAV technologies, photogrammetric methods, and heritage documentation. There were three stages of searching.
The first stage involved a technology overview. Examples of search strings include the following:
- “photogrammetry” AND “algorithm”;
- “photogrammetry” AND “SfM”;
- “photogrammetry” AND “MVS”;
- “photogrammetry” AND (“RealityCapture” OR “RealityScan”);
- “photogrammetry” AND “software”;
- “Quixel” AND (“RealityCapture” OR “RealityScan”);
- “Unreal” AND (“RealityCapture” OR “RealityScan”);
- “UAV photogrammetry” AND “building”;
- “UAV sensors” AND “building”;
- “UAV LiDAR” AND “building”;
- “UAV thermal” AND “building”;
- “UAV” AND “flight plan”;
- “photogrammetry” AND “BIM”.
The second stage involved different projects. Examples of search strings include the following:
- “UAV photogrammetry” AND “Maya”;
- “UAV LiDAR” AND “Maya”;
- “UAV photogrammetry” AND “Notre-Dame Cathedral”;
- “UAV LiDAR” AND “Notre-Dame Cathedral”;
- “photogrammetry” AND “virtual reality”;
- “photogrammetry” AND “game”;
- “game” AND “Notre-Dame Cathedral”.
The third stage involved different challenges. Examples of search strings include the following:
- “UAV photogrammetry” AND “limitation”;
- “UAV sensors” AND “limitation”;
- “Surface-only Capture” AND “limitation”;
- “UAV photogrammetry” AND “cost”;
- “UAV LiDAR” AND “cost”;
- “UAV photogrammetry” AND “privacy”;
- “UAV photogrammetry” AND “copyright”.
Additionally, secondary search strategies were employed to expand the scope and ensure the inclusion of highly relevant research. This involved the following:
- Backward Searching (Citation Tracking): Examining the bibliographies of key papers identified through the initial search to discover foundational or related studies that might have been missed.
- Forward Searching (Cited By): Utilizing the “cited by” features within databases (e.g., Scopus, Web of Science, Google Scholar) to identify more recent research that has built upon or referenced the foundational works.
2.2. Screening, Inclusion/Exclusion, and Reporting
The PRISMA flow diagram provides transparency and supports the reproducibility of the review methodology. Therefore, the overall search and selection process, including the numbers of records identified, screened, excluded, and retained, is illustrated in the PRISMA-style flow diagram (Appendix A Figure A1).
Screening Process: We included only English-language publications that directly addressed UAVs, photogrammetry, LiDAR, or related digital documentation methods to ensure relevance and quality. Priority was given to peer-reviewed journal articles, conference proceedings, and major software or institutional reports; promotional and non-technical materials were excluded.
Retrieval Process: The titles and abstracts of all retrieved records were screened systematically. Full texts of potentially relevant studies were then reviewed in detail. To reduce bias, the screening and selection were conducted by the first author and cross-checked by a second reviewer (other authors). Disagreements were resolved through discussion until consensus was achieved.
Guolong has noted the rapid growth in research output on spatial technology for world heritage since 2015 []. This review’s literature search included studies published between 2015 and 2025 (inclusive). Significant theories and findings from earlier periods were included on a case-by-case basis. This range was selected to reflect the emergence and rapid evolution of UAV and photogrammetry technologies applied in cultural heritage contexts. It also aligns with the timeframe in which RealityScan (first version in 2016 []) and other photogrammetric tools became widely accessible and robust enough for professional-grade heritage work.
Eligibility Selection Process: To strengthen the reliability of our findings, we conducted a methodological quality appraisal of all included studies. Because the corpus included peer-reviewed articles, conference papers, and technical reports, we used a narrative appraisal approach informed by the Joanna Briggs Institute (JBI) checklists appropriate to each study design []. This flexible yet systematic method allowed us to assess the methodological rigor of each publication and incorporate these assessments into our synthesis. Our appraisal focused on four core criteria to evaluate the trustworthiness, relevance, and potential for bias in each study, as shown in Table 1.

Table 1.
Core criteria in eligibility selection process.
As with the retrieval process, the quality appraisal was performed independently by the first author and verified by a second reviewer (other authors). Any discrepancies in the assessment were resolved through discussion to reach a consensus. The outcomes of this quality appraisal, along with important details of each included study, are presented in a comprehensive table in Appendix A Table A1.
3. How to Capture Reality
3.1. Understanding Photogrammetry Technology
3.1.1. Core Functionality
Photogrammetry technology’s core functionality is based on the twin pillars of structure from motion (SfM) and multiview stereo (MVS) [], the two principal photogrammetric processes to derive 3D geometry from overlapping photographs, as shown in Figure 1. In the SfM stage, the application automatically detects and matches feature points in input images, estimates camera positions and orientations, and generates a sparse point cloud representing the general shape of the object or scene []. Based on this, the MVS stage densifies the sparse cloud by estimating depth maps for each view and combining them into a high-density point cloud, before constructing a continuous mesh surface []. This two-step pipeline ensures both robustness (through SfM’s global bundle adjustment) and detail (through MVS’s local depth estimation).

Figure 1.
SfM and MVS workflow.
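To make the geometry behind this pipeline concrete, the following minimal sketch shows the triangulation step at the heart of SfM/MVS: recovering a 3D point from two matched 2D observations using the linear (DLT) method. The camera intrinsics, baseline, and test point are hypothetical values chosen for illustration, not parameters from any software discussed here.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: matched 2D image points (u, v) in each view.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X, stacked into A X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the last right singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two synthetic pinhole cameras with identical intrinsics; the second
# is offset by a 1-unit stereo baseline along the x-axis.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, -0.2, 4.0])            # ground-truth 3D point
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

X_est = triangulate_point(P1, P2, x1, x2)
print(np.round(X_est, 3))  # recovers [ 0.5 -0.2  4. ]
```

In a full SfM pipeline, this triangulation runs over thousands of feature matches, and bundle adjustment then refines all camera poses and 3D points jointly to minimize the reprojection error.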
A key software program in this field is RealityCapture from Epic Games, which has set a high standard for photogrammetric processing [,]. Its latest version, rebranded as RealityScan in June 2025, has a reputation for speed and the ability to manage immense datasets []. The engine is highly effective in aligning images and reconstructing them into detailed, textured 3D models with impressive fidelity. For these reasons, it has become a go-to tool for heritage professionals who need to produce archival-grade digital models from photographic data.
When compared to other leading photogrammetry software, such as Agisoft Metashape [] and Pix4Dmapper [], RealityScan consistently outperforms them when handling datasets of more than one million images [], in terms of both speed and memory efficiency. Its GPU-centric architecture offers a significant advantage over CPU-bound competitors, reducing reconstruction times by up to 50–70% on equivalent hardware configurations []. In addition, RealityScan’s automated outlier detection and camera calibration routines require minimal user intervention, making it particularly suitable for large-scale heritage surveys, where a rapid response and ease of use are paramount.
We summarize the key features of RealityScan, a leading representative of photogrammetry technology, as follows:
- High-speed processing: By taking advantage of GPU acceleration and optimized parallel algorithms, RealityScan dramatically reduces the alignment and reconstruction times, enabling users to process thousands of images in a matter of hours rather than days [].
- Accuracy and robustness: The integrated bundle adjustment routines minimize the reprojection error on all cameras, producing geometrically precise models even under challenging conditions (e.g., low-texture or repetitive patterns) [].
- Dense mesh generation: After MVS reconstruction, the software efficiently converts the dense point cloud into a watertight triangular mesh, preserving fine architectural details such as ornamentation and weathering [].
- High-quality texture mapping: RealityScan projects the original high-resolution images onto the mesh, blending multiple views to produce seamless, photorealistic textures that faithfully reproduce the surface color and material properties [].
In short, the software’s key features include its remarkable speed in processing large datasets, its high degree of accuracy in geometric reconstruction, its ability to generate dense and detailed meshes, and its sophisticated texture mapping capabilities. These attributes make it particularly well suited for complex projects involving extensive data acquisition, such as those encountered in cultural heritage documentation.
3.1.2. Workflow and Integration Within Epic Ecosystem
Apart from the strong advantages described above, RealityScan has another distinctive feature: integration into a standardized game development ecosystem. RealityScan’s versatile output formats enable a fluid workflow and deep integration within the Epic Games ecosystem, as shown in Figure 2. This synergy is particularly beneficial for heritage documentation, allowing the creation of rich and interactive digital experiences.
- Scanning Assets: RealityScan’s output seamlessly complements Quixel, a vast library of high-quality scanned assets. Heritage models can be integrated with scan assets under Mixer to enrich scenes [], providing context and detail that might not have been captured or is difficult to replicate. This combination allows for the creation of highly detailed and historically accurate digital environments.
- Game Engine: Direct integration with Unreal Engine allows these detailed, reality-captured models to be imported and used in interactive real-time environments. This is crucial in creating virtual reconstructions of heritage sites, enabling immersive exploration and detailed analysis. The ability to create large-scale, high-fidelity open worlds within Unreal Engine 5’s Nanite technologies means that digitized heritage assets can be presented with unprecedented realism [,,].
By combining RealityScan’s accurate photogrammetry with Epic’s real-time engine, creators can build immersive, data-driven reconstructions. These can include interactive guides, realistic lighting, and scalable deployment, all within the Epic ecosystem.

Figure 2.
How heritage object information proceeds from scan to virtual environment via Epic ecosystem.
3.2. UAV Photogrammetry Data Acquisition
3.2.1. Types of UAVs and Sensor Payloads
UAVs employed in photogrammetric surveys are broadly classified into two main categories: multirotor and fixed-wing platforms []. Multirotor UAVs, including quadcopters and hexacopters, provide superior maneuverability, stable hovering, and safe operation within confined or complex environments. These attributes make them highly suitable for the high-resolution, detailed mapping of small- to medium-scale areas, such as archaeological sites, the exteriors of buildings, and localized topographic surveys [,]. However, their primary limitations are limited flight durations (typically 20–40 min) and lower cruise speeds, restricting the geographical scope of a single mission [,]. In contrast, fixed-wing UAVs are more efficient in covering large areas due to their longer flight times (often exceeding one hour) and higher airspeeds [,]. Consequently, they are favored for applications such as agricultural monitoring, watershed-scale terrain modeling, and other scenarios that demand broad, continuous aerial coverage. The inherent trade-off is that fixed-wing platforms lack hovering capabilities and require larger open spaces for takeoff and landing, potentially limiting their deployment in urban or densely vegetated terrains [].
The camera sensor plays a central role in photogrammetry. Today, most UAV surveys use high-resolution RGB cameras with either full-frame or APS-C sensors and interchangeable lenses. Full-frame sensors gather more light, produce less noise, and offer a greater dynamic range, which translates into sharper images and more dependable feature matching in processing []. Cameras with 20 MP or more also reduce the ground sampling distance (GSD) at a given altitude, so that the digital surface models and orthophotos are crisper []. While a fixed-focus, downward-looking mount is the simplest setup, adding a gimbal to counteract pitch and roll is usually beneficial, maintaining even overlap and reducing motion blur. Moreover, when users need the highest precision—e.g., for engineering control surveys or measuring stockpile volumes—swapping in metric or cine-style lenses and a global (mechanical) shutter eliminates the rolling-shutter “jello” or skew effects common to CMOS sensors [].
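The relationship between sensor, lens, and altitude can be made explicit with the standard GSD formula. The sketch below uses hypothetical camera parameters (a 36 mm-wide sensor, 5472 px image width, 24 mm lens) purely for illustration.

```python
def ground_sampling_distance(sensor_width_mm, image_width_px,
                             focal_length_mm, altitude_m):
    """Ground sampling distance (cm/pixel) for a nadir-pointing camera.

    GSD = (sensor width * flight altitude) / (focal length * image width),
    converted to centimetres per pixel.
    """
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Hypothetical full-frame camera: 36 mm sensor width, 5472 px image
# width, 24 mm lens, flown at 60 m above ground level.
gsd = ground_sampling_distance(36.0, 5472, 24.0, 60.0)
print(f"{gsd:.2f} cm/px")  # ≈ 1.64 cm/px
```

The same formula shows why a higher-resolution sensor (more pixels across the same sensor width) or a lower flight altitude yields a finer GSD and thus crisper surface models.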
While photogrammetry is powerful, there are scenarios, particularly in vegetated or highly complex terrains, where LiDAR payloads provide complementary advantages [,]. LiDAR can be categorized by platform and by signal/return type. Platform categories include airborne (topographic) LiDAR for broad-area surveys; terrestrial laser scanning (TLS) for high-resolution, close-range documentation; and satellite LiDAR, carried on Earth-orbiting satellites, which can survey large terrestrial and atmospheric regions []. For UAV applications, the most relevant are lightweight UAV-mounted units and TLS.
UAV-borne LiDAR systems emit laser pulses and measure return times to directly capture three-dimensional point clouds, penetrating gaps in foliage to map underlying ground surfaces and intricate structural details []. Such systems typically weigh more and consume more power than standard cameras, and their raw data often require different processing pipelines []. Nevertheless, integrating UAV LiDAR with photogrammetric imagery can yield richer datasets: the LiDAR point cloud adds true 3D structure and penetration capabilities, while the high-resolution imagery supplies color information and texture for enhanced visualization and classification [,]. In practice, mission planners select between photogrammetry, LiDAR, or a hybrid approach based on the project scale, desired accuracy, terrain complexity, and available payload capacity.
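The time-of-flight principle behind these measurements is simple: a pulse travels to the target and back, so the one-way range is half the round-trip distance. A minimal sketch, with an illustrative return time:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range_m(return_time_ns):
    """Range from a time-of-flight LiDAR pulse.

    The pulse travels out and back, so the one-way distance is
    c * t / 2, with t the measured round-trip time.
    """
    return C * (return_time_ns * 1e-9) / 2.0

# A round-trip return time of ~667 ns corresponds to a target
# roughly 100 m away.
print(round(lidar_range_m(667.0), 1))  # prints 100.0
```

Production systems refine this with multiple returns per pulse (canopy vs. ground), precise GNSS/IMU trajectories, and atmospheric corrections, but the core range equation is the same.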
Apart from photogrammetry and LiDAR, some high-value projects have also used thermal infrared (TIR) sensors on UAVs to acquire radiometric imagery [,]. TIR sensors measure the surface brightness temperature rather than visible-spectrum reflectance and therefore provide information complementary to photogrammetric and LiDAR measurements []. However, TIR sensors impose specific operational and processing constraints that must be addressed to produce quantitatively useful results. Typical TIR cameras have lower spatial resolutions than visible cameras for a given payload class, and the measured temperatures are affected by sensor calibration, detector warmup and drift, the sensor’s field of view, atmospheric effects, surface emissivity, mixed pixels, and the time of day and sky conditions at acquisition [].
3.2.2. UAV Flight Planning and Data Acquisition
Effective UAV photogrammetry begins with thorough mission planning, typically conducted in dedicated flight planning software (e.g., Pix4Dcapture, DJI Ground Station Pro, UgCS [,,,]), as shown in Figure 3. These platforms allow users to define the survey area, set the flight altitude, specify the image overlap (front and side), and generate waypoints along predetermined flight lines []. Standard flight patterns include nadir passes (vertical), ideal for terrain and orthophoto production, and oblique passes that capture building façades and vertical structures. Combining nadir and oblique imagery significantly enhances the 3D reconstruction of complex features such as rooftops, balconies, and architectural details [].

Figure 3.
Overview of UAV flight planning and data acquisition.
When surveying buildings, special attention must be paid to image overlap and camera angles to ensure the complete coverage of façades and roof planes. A minimum of 75% frontal and 60% side overlap is common for nadir flights, as shown in Figure 3 []; oblique flights often require similar or slightly higher overlap to resolve vertical details []. Complex geometries (e.g., turrets, dormers, overhangs) and inaccessible areas (e.g., steep roofs, upper façades) may necessitate additional or tighter flight lines, variable-altitude passes, and manual waypoint adjustments. Ground control points (GCPs) or real-time kinematic (RTK) GNSS can improve the absolute accuracy, particularly in urban settings, where GPS multipath may be problematic [].
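These overlap percentages translate directly into camera trigger spacing and flight-line separation. The sketch below derives both from the image footprint on the ground, using the same hypothetical camera (36 × 24 mm sensor, 24 mm lens, 60 m altitude) and the overlap figures quoted above.

```python
def footprint_m(sensor_dim_mm, focal_length_mm, altitude_m):
    """Ground footprint of one image dimension for a nadir camera."""
    return sensor_dim_mm * altitude_m / focal_length_mm

def capture_spacing_m(footprint, overlap):
    """Distance between successive image centres for a given overlap
    fraction (e.g. 0.75 for 75% frontal overlap)."""
    return footprint * (1.0 - overlap)

# Hypothetical camera: 36 x 24 mm sensor, 24 mm lens, flown at 60 m.
along = footprint_m(24.0, 24.0, 60.0)    # 60 m footprint along-track
across = footprint_m(36.0, 24.0, 60.0)   # 90 m footprint across-track

trigger = capture_spacing_m(along, 0.75)  # 75% frontal overlap
lines = capture_spacing_m(across, 0.60)   # 60% side overlap
print(trigger, lines)  # camera fires every 15 m; flight lines 36 m apart
```

Flight planning software performs this calculation automatically from the chosen overlap and GSD, but the relationship is useful when judging whether a mission plan will actually resolve fine façade detail.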
Street-scale surveys introduce further constraints. Linear flight corridors must follow street centerlines or parallel offsets to capture façades and road surfaces, while maintaining safe distances from buildings, pedestrians, and vehicles []. Image overlap remains critical, but the flight altitude and speed should be balanced against local airspace regulations and the need to avoid moving obstacles []. Vertical elements, such as street signs, lamp posts, and trees, are best captured with nadir and shallow oblique passes to support 3D modeling and semantic classification []. In pedestrian zones or narrow alleys, manual piloting or hybrid automatic/manual missions may be required to navigate tight turns and ensure comprehensive coverage without intrusion into no-fly or high-traffic zones [].
3.3. The Synergy: UAV Data Processed by RealityScan
Combining UAV multi-angle imagery with RealityScan’s GPU-powered engine delivers a rapid, seamless workflow and produces detailed 3D outputs, including the following:
- Highly Detailed 3D Models: Through its robust SfM and MVS pipelines, RealityScan reconstructs the captured environment into dense, geometrically precise 3D models. These models preserve intricate architectural details, surface textures, and the overall form of the surveyed objects or areas with remarkable fidelity.
- Textured Meshes and Materials: The software seamlessly projects the high-resolution, often color-rich imagery onto the generated 3D mesh. This process creates photorealistic, visually compelling textured meshes that faithfully represent the surface appearance, materials, and colors of the original subject, making them ideal for visualization, virtual tours, and detailed inspection.
Beyond geometric reconstruction, these outputs serve as the foundation for the creation of semantically rich building information models (BIM) through a process known as Scan-to-BIM []. This workflow uses computer vision and deep learning algorithms to perform semantic segmentation on the unstructured point clouds or meshes, automatically identifying and classifying architectural elements like walls, floors, windows, and doors []. These classified point clusters are then converted into structured, parametric BIM objects, transforming a purely visual model into an intelligent, data-rich asset. While this process is often semi-automated and requires human oversight for validation, it bridges the gap between reality capture and digital design and construction, enabling advanced analysis, facility management, and heritage conservation.
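To illustrate the labeling idea at the start of a Scan-to-BIM pipeline, the toy sketch below classifies a synthetic indoor point cloud into floor, wall, and ceiling purely by elevation. Real pipelines use learned semantic segmentation rather than fixed height thresholds; this is only a conceptual illustration, and all the geometry here is synthetic.

```python
import numpy as np

def classify_by_height(points, floor_z=0.1, ceiling_z=2.9):
    """Toy 'semantic segmentation' of an indoor point cloud by elevation:
    points near the minimum z are floor, points near the maximum are
    ceiling, and everything in between is wall. Real Scan-to-BIM uses
    learned classifiers; this shows only the label-then-structure idea."""
    z = points[:, 2]
    labels = np.full(len(points), "wall", dtype=object)
    labels[z < floor_z] = "floor"
    labels[z > ceiling_z] = "ceiling"
    return labels

rng = np.random.default_rng(0)
# Synthetic 5 x 5 m room: 100 floor points (z ~ 0), 100 ceiling points
# (z ~ 3), and 200 wall points spread over the intermediate height.
floor = np.c_[rng.uniform(0, 5, (100, 2)), rng.normal(0.0, 0.02, 100)]
ceiling = np.c_[rng.uniform(0, 5, (100, 2)), rng.normal(3.0, 0.02, 100)]
wall = np.c_[rng.uniform(0, 5, (200, 2)), rng.uniform(0.2, 2.8, 200)]
cloud = np.vstack([floor, ceiling, wall])

labels = classify_by_height(cloud)
print({c: int((labels == c).sum()) for c in ("floor", "wall", "ceiling")})
```

Once points are labeled, each cluster can be fitted with a plane or extruded profile and exported as a parametric BIM element (wall, slab, ceiling), which is where the human-validated conversion step described above takes over.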
4. Contributions to Cultural Building Heritage
4.1. Unprecedented Detail and Accuracy
We note that modern documentation techniques have elevated heritage preservation beyond traditional photography and architectural drawings. High-resolution laser scanning, photogrammetry, aerial drone platforms, and 3D modeling now enable researchers, conservators, and architects to create comprehensive “as-is” digital archives—rich in detail at the centimeter to millimeter level. The Yucatán Peninsula and Notre Dame Cathedral are high-profile subjects that attract researchers, organizations, and corporations. Therefore, we conducted case studies to review the value of these methods in both contexts.
4.1.1. Case Study: Mapping and Managing Ancient Maya Sites in the Yucatán Peninsula, Mexico
The Yucatán Peninsula of Mexico is home to numerous sprawling ancient Mayan archaeological sites, many of which are nestled within dense tropical rainforests []. These environments present significant challenges for traditional archaeological survey methods, including limited visibility, difficult terrain, and the risk of damaging delicate archaeological features. UAVs, equipped with both photogrammetric sensors and LiDAR, have emerged as a game-changer for the mapping and management of these complex heritage landscapes.
Projects at sites such as Chichén Itzá and Calakmul have extensively utilized UAV technology to overcome these obstacles, as shown in Figure 4. In areas around Chichén Itzá, where dense vegetation obscures ground-level features, UAV-mounted LiDAR has proven invaluable. Geoearth used the YellowScan Explorer LiDAR unit on a Skyfront Perimeter drone to survey approximately 30 km2 across Chichén Itzá over just three days []. They achieved a point density of 110 pts/m2 and digital terrain models with a 30 cm resolution, penetrating the forest canopy to reveal buried structures, causeways, and terraces formerly hidden beneath foliage []. Similarly, surveys in Yucatán’s Puuc region mapped water storage features—such as chultuns and aguadas—as well as terracing and stone workshops, illustrating LiDAR’s utility for landscapes beyond monumental architecture []. In 2024, Northern Arizona University scholar Luke Auld-Thomas reanalyzed decade-old LiDAR survey data (originally gathered for carbon stock studies) across Campeche, uncovering a “lost” city (Valeriana) with approximately 6674 unseen structures—some comparable to Chichén Itzá—in just 50 mi2 [].

Figure 4.
Overview of UAV photogrammetry and LiDAR work in Yucatán Peninsula.
Complementing the LiDAR data, high-resolution UAV photogrammetry has been used at the same site to create detailed 3D models of visible monumental architecture—such as El Castillo and the Great Ball Court—capturing carvings, limestone textures, and architectural forms with exceptional accuracy within the Chichen Itza 3D Atlas project []. These models facilitate detailed condition assessments, pinpointing areas of erosion or structural instability and enabling targeted restoration and conservation.
At Calakmul, a UAV-LiDAR survey in the Campeche rainforest revealed hundreds of previously unmapped structures, plazas, and agricultural features under heavy canopies. Hansen’s team conducted broader regional airborne LiDAR surveys in the Mirador–Calakmul Karst Basin, which uncovered over 775 ancient Maya settlements within a 1703 km2 region, linked by an extensive network of causeways—underscoring the method’s regional-scale potential [].
In conclusion, UAVs—when equipped with LiDAR and photogrammetric sensors—have transformed archaeological 3D scanning in the Yucatán. From high-density 3D terrain mapping at Chichén Itzá to regional-scale settlement revelations across Calakmul and beyond, these tools enable non-invasive, comprehensive documentation, critical for conservation, urban planning, tourism, and research into Maya urbanism and environmental adaptation.
4.1.2. Case Study: The Conservation and Digital Recreation of the Notre Dame Cathedral, Paris
The devastating fire that struck the Notre Dame Cathedral in April 2019 [] highlighted the critical importance of accurate and comprehensive digital documentation for cultural heritage. Fortunately, years before the disaster, numerous high-resolution photogrammetric and LiDAR surveys had been conducted on the iconic Parisian landmark. These surveys, often carried out for conservation and research purposes, provided an invaluable digital repository of the cathedral’s intricate Gothic architecture.
Before the fire, architectural historians and conservationists had already leveraged drone-based photogrammetry and laser scanning to document Notre Dame with unprecedented detail. In 2010, art historian Andrew Tallon deployed a tripod-mounted Leica ScanStation C10 LiDAR scanner, spending 5 days capturing data from approximately 50 positions inside and outside the cathedral [,,]. This meticulous process collected over a billion points, yielding a point cloud model accurate to within 5 millimeters that formed the foundation of a precise digital blueprint of the cathedral []. Tallon also combined these scans with high-resolution panoramic photographs to overlay color and texture onto the 3D data []. Between 2014 and 2016, Art Graphique & Patrimoine (AGP) expanded on this work using UAVs and additional terrestrial scans, assembling a rich billion-point database of the entire structure [,]. Their detailed scan of the oak timber roof, known as “the forest”, involved 150 scan locations that captured 3 to 5 billion data points, resulting in an exceptional point density of one to two points per square millimeter []. During the fire response itself, UAVs were deployed for aerial reconnaissance, thermal assessment, and post-fire structural mapping [].
These aerial and ground photogrammetric surveys, processed via SfM pipelines, produced high-resolution 3D meshes capturing façade ornamentation, statuary, gargoyles, and stone erosion patterns. Meanwhile, LiDAR point clouds provided accurate geometric data—complete spatial coordinates for every architectural feature and structural element. The combined dataset was integrated within a BIM framework by Autodesk and AGP [,], enabling a digital twin that fused geometry, material metadata, temporal documentation, and structural annotations. In the immediate aftermath of the fire, these pre-existing digital datasets became indispensable, serving as the primary reference for the international efforts to assess the damage and plan the reconstruction. Architects and engineers relied heavily on these high-accuracy 3D models to understand the original form and construction of the damaged sections, particularly the collapsed spire and roof.
Furthermore, these digital twins enabled researchers and conservationists to virtually explore and analyze the cathedral in its pre-fire state. These digital replicas also powered immersive virtual reality (VR) experiences like Ubisoft’s “Journey Back in Time []” and Orange’s “Eternal Notre Dame” [], where visitors could wander through the cathedral in stereoscopic 3D, viewing the nave, towers, and inaccessible spaces from multiple historical perspectives. Such experiences fostered emotional connections and enabled condition assessments even when the physical site was closed for restoration.
We summarize how digital protection was implemented in major projects related to the Notre Dame site below (see Figure 5). Significantly, the Notre Dame fire serves as a stark reminder of heritage’s fragility—and a powerful testament to the irreplaceable value of integrated photogrammetry, LiDAR, and digital twin technologies for future preservation initiatives.

Figure 5.
Overview of major digital protection projects at Notre Dame Cathedral.
4.2. Public Engagement and Virtual Accessibility
Emerging digital technologies are not only revolutionizing professional conservation, restoration, and monitoring workflows but are also opening up new avenues for public engagement and education. By transforming heritage sites into interactive digital assets, these tools dissolve geographic and physical barriers, inviting global audiences to explore cultural treasures from anywhere in the world []. Such accessibility fosters a deeper public connection to heritage, positioning communities as active participants in preservation, rather than passive observers [].
A core component of this transformation is the creation of highly realistic 3D models. Photogrammetry platforms like RealityScan enable the rapid generation of dense, textured meshes from smartphone imagery, which can then be optimized and imported into game engines such as Unreal Engine []. The result is an immersive environment where users can undertake guided virtual tours, engage in augmented-reality sandbox experiences, or navigate gamified narratives that blend play with pedagogy. Industry leaders have already demonstrated the educational power of this approach. CyArk’s MasterWorks: Journey Through History—developed with FarBridge and optimized for Oculus Rift and Gear VR—transforms precise LiDAR and photogrammetry data into fully explorable VR environments []. It offers guided tours through four iconic heritage sites (Ayutthaya, Chavín de Huántar, Mesa Verde, and Mount Rushmore), complete with audio narration from experts, interactive artifact collection, and nuanced storytelling that is rooted in cultural and climate contexts. In addition to Journey Back in Time, Ubisoft’s “Discovery Tour” series utilizes the Anvil engine (used in Assassin’s Creed) to deliver historian-curated, combat-free explorations of ancient worlds—offering a structured and engaging learning experience rooted in photogrammetric authenticity [].
The potential of these technologies also extends to educational applications, where interactive 3D models can significantly promote cultural awareness and facilitate digital storytelling. One notable project is the “Re-Live History” project—a VR learning experience that reconstructs a Maltese Neolithic hypogeum []. Developed in collaboration with heritage experts and tested with middle-school students (ages 11–12), this immersive VR environment blends spatial exploration with representations of prehistoric cultural practices. Evaluation showed strong engagement, authenticity, and improved learning outcomes compared to traditional materials. Such VR stories effectively dissolve geographical barriers, enabling any student or enthusiast with a headset to explore remote archaeological contexts from anywhere. Yueting’s research further supports this: by combining guided navigation, multimodal narration, and visually rich reconstructions, such experiences democratize heritage education, making it accessible, immersive, and emotionally resonant []. In doing so, they help to foster deeper cultural connections, support curriculum-aligned learning, and empower users globally to engage with heritage, irrespective of physical constraints.
5. Challenges and Limitations
5.1. Technical Challenges
5.1.1. UAV Data Acquisition Specifics
Collecting high-quality UAV imagery in street heritage environments presents a range of technical challenges. Environmental and operational constraints can degrade both flight performance and the accuracy of downstream photogrammetric processing. Key issues include the following:
- Environmental Factors: Wind gusts can destabilize small multirotor platforms, introducing motion blur or misalignment between frames. Rain or high humidity not only reduces image clarity but also risks water ingress into sensitive electronics [,]. Temperature extremes—both hot and cold—affect battery chemistry, reducing the available flight time and potentially triggering automatic safety cut-outs [].
- Lighting Conditions: Street scenes feature highly variable illumination: sharp contrasts between sunlit façades and deep shadows, glare from glass windows or wet pavement, and rapid changes as clouds pass overhead []. These inconsistencies result in uneven feature detection across images, leading to gaps or noise in 3D reconstructions.
- Sensor Limitations: Standard RGB cameras struggle to “see through” dense foliage or deep recesses beneath overhangs. Although LiDAR sensors offer better penetration, they also add payload weight, power consumption, and cost []. Deploying hybrid systems can therefore be challenging on lightweight UAVs.
- Surface-Only Capture Limitations: Photogrammetry records only the visible surface, i.e., its geometry and texture. It therefore cannot directly reveal subsurface features, interior structural conditions, or elements hidden behind occluding objects [].
- Battery Life and Flight Duration: Typical UAV batteries support only short flights, even under ideal conditions []. Mapping extensive street corridors or complex heritage districts thus demands multiple batteries and careful mission planning to ensure complete coverage without data gaps.
To tackle these issues in the field, it is advisable to choose rugged, weather-rated UAVs and to fly in the clearest daylight and calmest conditions possible. Supplementary sensors (such as LiDAR or multispectral cameras) can fill in the blind spots that a standard RGB camera cannot handle. Smart mission-planning tools that adjust the altitude, speed, and image overlap on the fly, based on live telemetry, further reduce the risk of coverage gaps. Altogether, these steps make it significantly easier to capture clean, complete data when surveying heritage streetscapes.
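The planning arithmetic behind altitude, overlap, and trigger spacing can be sketched as follows. The camera parameters used in the example (a 13.2 mm wide 1-inch sensor, an 8.8 mm lens, a 5472 px image width, a 60 m flight altitude) are hypothetical illustration values, not recommendations for any particular platform:

```python
def ground_sample_distance(sensor_width_mm: float, focal_length_mm: float,
                           altitude_m: float, image_width_px: int) -> float:
    """GSD in metres per pixel for a nadir-pointing camera (pinhole model)."""
    return (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)

def trigger_spacing_m(footprint_m: float, overlap: float) -> float:
    """Distance between successive exposures for a given forward overlap (0-1)."""
    return footprint_m * (1.0 - overlap)

# Hypothetical example: 1-inch sensor (13.2 mm wide), 8.8 mm lens, 5472 px image
# width, flown at 60 m with 80% forward overlap.
gsd = ground_sample_distance(13.2, 8.8, 60, 5472)  # ~0.016 m/px (about 1.6 cm)
footprint = gsd * 5472                             # ~90 m swath on the ground
spacing = trigger_spacing_m(footprint, 0.80)       # ~18 m between exposures
```

Mission-planning software performs this computation continuously, which is why lowering the altitude (finer GSD) shrinks the footprint and forces tighter trigger spacing and denser flight lines.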
5.1.2. Photogrammetry Software Processing Specifics
Post-flight processing in software such as RealityScan and Pix4Dmapper introduces its own set of technical challenges, particularly when handling UAV-derived imagery for street heritage projects. Key issues include the following:
- Massive Data Volume: A single mission can yield thousands of high-resolution photographs. Processing such datasets demands substantial RAM (often 64 GB+), powerful GPUs, and high-speed storage (SSDs or RAID arrays) []. Long load times and intermediate cache files further increase disk usage and extend the overall processing time.
- Processing Complexity: Insufficient image overlap, repetitive architectural textures (e.g., brick façades), reflective surfaces (glass windows, wet stone), and motion blur from wind-induced UAV shake can confuse feature matching. These factors often manifest as alignment errors, mesh holes, or texture artifacts in the final model [].
- Learning Curve: Achieving optimal results requires operators to master UAV flight planning (setting overlap, altitude, and camera parameters) as well as RealityScan’s parameters (alignment tolerances, reconstruction modes, texture baking). Inexperienced users can inadvertently degrade model quality or waste compute resources.
5.1.3. Cost Considerations
An additional practical consideration is cost: photogrammetry workflows can often be deployed at a relatively low upfront cost (consumer/prosumer drones plus photogrammetry software), while LiDAR-based systems typically require substantially higher hardware and platform investments []. Photogrammetry hardware and software options exist across a broad price range, from hobbyist setups to professional subscriptions, which makes them attractive for budget-constrained projects. However, the photogrammetric processing of large image sets can drive up the computing and cloud processing costs []. By contrast, UAV LiDAR solutions commonly incur higher capital, maintenance, and operator-training costs (sensor, GNSS/IMU integration and platform). LiDAR does offer operational advantages in certain contexts, especially in terms of the better penetration of vegetation and reduced dependence on ambient lighting. Thus, the choice often becomes a trade-off between the required deliverable (penetration/accuracy/texture fidelity), project budget, and operational complexity. Hybrid workflows (photogrammetry for broad-area, texture-rich mapping and targeted LiDAR for vegetation penetration or engineering-grade surveys) can balance cost and data quality.
5.2. Ethical, Legal, and Social Considerations
5.2.1. Privacy Concerns
Deploying UAVs for street heritage documentation raises privacy challenges. Aerial imaging can inadvertently record identifiable individuals, license plates, private gardens, and other private property details. While UAVs provide valuable perspectives on architecture and urban patterns, they also traverse airspace in ways that may infringe on privacy rights [].
First, capturing identifiable persons in public or semi-public spaces creates legal risks. Under regulations such as the EU’s General Data Protection Regulation (GDPR) or various US privacy laws, recognizable images constitute personal data []. Collecting and processing such footage without consent can incur fines and reputational harm. Even in “public” airspace, individuals may expect privacy in contexts like private courtyards or small gatherings.
Second, because license plates and other vehicle identifiers visible in UAV imagery are treated as personal data under frameworks like the GDPR and various US laws [], they must be anonymized by pixelating or masking—otherwise, the footage can leave people vulnerable to stalking, identity theft, or other harms.
Third, details of private properties are often captured unintentionally. While landmark owners may welcome documentation, neighboring residents typically do not. Heritage surveys must clearly define flight boundaries to avoid peering over fences or recording beyond public rights of way.
Balancing the legitimate need for comprehensive heritage documentation with these privacy concerns requires a multi-pronged strategy.
- Pre-flight Planning. Conduct a privacy impact assessment to identify sensitive areas. Plan routes to maintain setbacks from private homes and schedule flights when foot traffic is minimal.
- Community Engagement. Notify residents, property owners, and authorities in advance. Secure written permissions and post public notices to allow opt-outs.
- Technical Controls. Use face and license plate detection tools to blur or mask identifiers.
- Data Governance. Restrict raw footage access through role-based controls. Define retention periods after which images containing private details are deleted or permanently anonymized.
- Transparency. Publish a privacy statement with any public release, detailing the collection scope, anonymization measures, and intended uses. Audit flight logs and archives regularly for compliance.
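The “Technical Controls” step above can be implemented as an irreversible masking pass over detected regions. The sketch below assumes that bounding boxes are supplied by an external face/plate detector, which is out of scope here:

```python
import numpy as np

def mask_regions(image: np.ndarray, boxes) -> np.ndarray:
    """Irreversibly anonymize rectangular regions (e.g. faces, license plates)
    by flattening each to its mean colour.

    `boxes` holds (x0, y0, x1, y1) tuples in pixel coordinates, as produced by
    a separate detection step. The input image is not modified.
    """
    out = image.copy()
    for x0, y0, x1, y1 in boxes:
        patch = out[y0:y1, x0:x1]
        # Replace the whole region with its per-channel mean colour.
        out[y0:y1, x0:x1] = patch.mean(axis=(0, 1), keepdims=True).astype(out.dtype)
    return out
```

Flattening to a mean colour, unlike light Gaussian blurring, leaves no residual structure from which the original pixels could be reconstructed in published imagery.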
5.2.2. Data Ownership and Copyright
Advances in UAV-based photogrammetry and software like RealityScan have made it straightforward to generate high-fidelity 3D models of heritage sites. However, the question of who owns these models is nuanced, especially when the subject is public property or protected cultural heritage. Four main factors determine ownership and downstream rights:
Source Imagery and Raw Data: Typically, the UAV operator holds the raw aerial imagery under the principle that photographs are protected by copyright from the moment of capture. Ahmad indicated that, if the operator is an employee or contractor, the commissioning entity (e.g., a municipal heritage office) may claim “work for hire” ownership []. In the absence of a contract, the photographer retains copyright, including rights over any derivative works created from the images.
EULA and Output Rights: An end-user license agreement (EULA) generally grants the licensee full rights to the output data, subject to payment of any applicable fees []. Unlike some SaaS photogrammetry solutions that restrict the commercial use of derived 3D meshes, RealityScan allows the user to exploit, modify, and distribute the results without additional royalties []. It is critical to review the specific terms to ensure that there are no clauses requiring attribution or restricting redistribution.
Commissioning Agreements and Institutional Policies: Public heritage projects are often funded by governmental or non-profit bodies that impose open data mandates. The local government’s cultural heritage department might require that all 3D assets be released under a Creative Commons license (CC BY or CC BY–SA) to maximize public access and scholarly reuse []. Conversely, a private donor may insist on exclusive commercial rights, limiting sharing to internal or paid licensing channels. Clear contractual language at project inception is therefore essential.
Copyrightability of 3D Models of Public-Domain Subjects: While many historic monuments are themselves in the public domain, the photogrammetric model can still attract copyright if it exhibits sufficient originality—for instance, in the choice of viewpoints, texture editing, or post-processing enhancements. Conversely, purely mechanical reconstructions with no creative embellishment may be deemed “merely factual” and thus uncopyrightable in US jurisdictions [], but practitioners should not assume automatic public-domain status without legal counsel.
6. Discussion
The integration of photogrammetry with UAV technology represents a significant leap forward in the documentation and preservation of cultural building heritage. As highlighted throughout this review, this synergy addresses the limitations of traditional survey methods by offering unparalleled efficiency, detail, and accessibility [,,]. The capability to rapidly capture vast amounts of high-resolution imagery from diverse perspectives, even from previously inaccessible vantage points, has revolutionized how we record, analyze, and engage with historic structures.
6.1. Technological Contributions
UAVs have transformed data acquisition by providing stable, high-resolution aerial platforms. Their ability to execute pre-programmed flight plans with precise overlap ensures comprehensive coverage, while the flexibility for manual control allows for the capture of intricate details on complex facades and roofs [,]. The evolution of sensor payloads, from high-resolution RGB cameras to complementary LiDAR systems, further enhances the richness and accuracy of the captured data, allowing for the penetration of dense foliage and detailed structural analysis where needed [,].
Simultaneously, photogrammetry software, exemplified by RealityScan, has kept pace with these hardware advancements. The sophisticated SfM and MVS algorithms, coupled with powerful GPU acceleration, enable the processing of massive image datasets into highly accurate, textured 3D models [,,]. The seamless integration of photogrammetry outputs with game engines like Unreal Engine further unlocks innovative applications in digital heritage, facilitating immersive virtual reconstructions and interactive educational experiences [,].
The case studies from the Yucatán Peninsula and Notre Dame Cathedral vividly illustrate the profound impact of these technologies. In the Yucatán, UAV-LiDAR and photogrammetry have been instrumental in uncovering and mapping vast archaeological landscapes previously hidden by dense rainforests, revealing the scale and complexity of Maya urbanism [,]. At Notre Dame, pre-existing digital records proved invaluable in the assessment and planning of its reconstruction following a devastating fire, underscoring the critical role of accurate digital twins for heritage resilience [,]. Digital twins can link 3D assets with geographic information systems (GIS), placing buildings and monuments in a broader digital management context []. They support integration with geospatial and building datasets, enable multi-user web–GIS access for stakeholders and the public, and make data actionable for conservation workflows, automated alerts, and multidisciplinary collaboration. VR applications can extend public access, improve training, and enhance interpretation and outreach []. These examples highlight how advanced documentation methods provide unprecedented detail and accuracy, crucial for conservation, restoration, and scholarly research.
Furthermore, the democratization of these technologies through user-friendly software and more accessible hardware has broadened their application beyond specialist institutions. This accessibility is fostering greater public engagement with and virtual accessibility to cultural heritage. Interactive 3D models and VR experiences allow global audiences to explore heritage sites, enhancing cultural awareness and educational outreach in ways that were previously unimaginable [,].
6.2. Technical and Ethical Challenges
Despite the immense progress, several challenges remain. Technical hurdles in data acquisition, such as environmental factors, lighting variability, and battery limitations, necessitate careful planning and execution [,]. In terms of processing, managing massive datasets, mitigating alignment errors caused by complex textures or reflective surfaces, and mastering the software’s parameters are critical in achieving optimal results [,]. Ethical and legal considerations, including privacy concerns related to aerial imaging and the complexities of data ownership and copyright, require careful navigation through robust planning, community engagement, and transparent data governance [,].
6.3. Practical Recommendations
To fully exploit UAV photogrammetry, practitioners should adopt clear, repeatable workflows; combine drone surveys with complementary methods (LiDAR); and commit to regular training so that teams can keep up with new tools and techniques. Institutions should back this up by funding training and building in-house capacity; setting practical protocols for metadata, storage, backups, and access; and promoting close collaboration between conservators, surveyors, GIS/IT specialists, and decision makers so that digital records are useful, reliable, and reusable over the long term.
6.4. Future Directions
6.4.1. Intangible Heritage
Although this review focuses on tangible heritage, it should be acknowledged that the methods highlighted here could also be extended to capture the intangible cultural heritage (ICH) [] of a space []. This remains a largely unexplored area, as most current approaches rely on standardized video and/or photo capture.
6.4.2. Technological Directions
Looking ahead, future research should focus on developing more robust and automated workflows that can better handle challenging acquisition environments and data processing complexities. The continued integration of AI and machine learning will help to automate feature extraction and matching, enable semantic segmentation and object recognition, support the temporal analysis of performances and rituals, detect anomalies and damage, and give clear measures of model confidence. Moreover, establishing standardized best practices for data management, ethical deployment, and digital preservation will be crucial in maximizing the long-term benefits of UAV and photogrammetry technologies for cultural heritage worldwide. The ongoing evolution of these tools promises to further revolutionize our ability to document, understand, and safeguard our shared global heritage for generations to come.
Author Contributions
Conceptualization, Y.X. and J.S.; Methodology, Y.X. and J.S.; Investigation, Y.X.; Resources, S.Y., C.F., T.H. and J.S.; Writing—original draft preparation, Y.X., S.Y., C.F., T.H. and J.S. All authors have read and agreed to the published version of the manuscript.
Funding
This work was partly supported by the Philosophy and Social Sciences 2025 Annual Planning Project in Foshan (grant no. 2025-QN11), the 14th Five-Year Planning Project for the Development of Philosophy and Social Sciences in Guangzhou (grant no. 2025GZGJ240), and the Humanities and Social Sciences Planning Fund Project of the Ministry of Education of China (grant no. 23YJA760014).
Data Availability Statement
No new data were created or analyzed in this study.
Acknowledgments
The manuscript’s language was refined with the assistance of two AI-based proofreading tools, ChatGPT (OpenAI) and Writefull, within Overleaf. These tools were used solely for grammar correction, sentence restructuring, and overall readability enhancement, guided by prompts such as “Proofread the following content” and “Proofread only”, as well as Overleaf’s sentence correction suggestions. No content was generated, analyzed, or interpreted by AI. All scientific content, data interpretation, and conclusions remain entirely the original work of the authors.
Conflicts of Interest
The authors declare no conflicts of interest.
Appendix A

Figure A1.
Background study flow diagram.

Table A1.
Template data extraction table for included studies (DNR).
Study ID | Site | Design | Technology | Deliverables | Data Sources | Key Findings | Theme(s) Linked |
---|---|---|---|---|---|---|---|
Ahmad et al., 2021 [] | General | Critical overview | Drones (UAVs) | Analysis of privacy threats from unregulated drones | Literature review | Unregulated drones pose a significant threat to the right to privacy, necessitating a new legal framework. | Privacy; Regulation; Drones |
Alsadik et al., 2022 [] | N/A (Simulation) | Simulation | Hybrid acquisition system for UAVs | Simulated performance of a hybrid UAV–sensor system | Simulation data | A hybrid system can optimize data acquisition for UAV platforms. | UAV Technology; Simulation; Data Acquisition |
Asadzadeh et al., 2022 [] | N/A (Review) | State-of-the-art review | UAV-based remote sensing | Overview of UAV applications in the petroleum industry and environmental monitoring | Literature review | UAVs are a powerful tool for the petroleum industry and environmental monitoring. | UAV Applications; Remote Sensing; Environmental Monitoring |
Avrami, 2019 [] | General | Theoretical framework | Heritage management approaches | Discussion of values in heritage management | Case studies, theoretical analysis | Emphasizes the importance of value-based approaches in heritage management. | Heritage Management; Cultural Values |
Barbara, 2022 [] | N/A (Virtual) | Case study | Immersive virtual reality (VR) | VR learning experience for prehistoric intangible cultural heritage | Development and user experience data | VR can provide an immersive and effective way to experience and learn about cultural heritage. | Virtual Reality; Cultural Heritage; Education |
Bao et al., 2024 [] | N/A (Software) | Comparative performance analysis | Game engines (Unity, Unreal) | Performance comparison of rendering optimization methods | Benchmarking and performance metrics | Provides a comparative analysis of rendering optimization in different game engines. | Computer Graphics; Software Performance; Game Development |
Biryukova and Nikonova, 2017 [] | General | Review and analysis | Digital technologies for cultural heritage | Analysis of the role of digital technologies | Literature review | Digital technologies play a crucial role in the preservation and accessibility of cultural heritage. | Digital Heritage; Cultural Preservation; Technology |
Boon et al., 2017 [] | N/A (Case study) | Case study and comparison | Fixed-wing and multirotor UAVs | Comparison of two UAV types for environmental mapping | Field data from case study | The choice between fixed-wing and multirotor UAVs depends on the specific application. | UAV Technology; Environmental Mapping; Remote Sensing |
Bramer et al., 2018 [] | N/A (Methodology) | Methodological paper | Systematic literature searching methods | An efficient and complete method for developing literature searches | Methodological description | Proposes a systematic approach to improve the efficiency and completeness of literature searches. | Literature Search; Research Methodology |
Cai et al., 2025 [] | N/A (Dataset) | Dataset creation | Drones; semantic segmentation | A varied drone dataset (VDD) for semantic segmentation tasks | Drone imagery | Introduces a new dataset to advance research in semantic segmentation for drone-captured images. | Drones; Computer Vision; Datasets |
Chen et al., 2024 [] | China | Methodological paper | Digital cultural heritage | Approach for integrating digital cultural heritage into sustainable education | Literature review | Explores paths for integrating digital cultural heritage into sustainable education to popularize knowledge and improve heritage protection. | Digital Heritage; Education; Cultural Preservation |
Dai et al., 2023 [] | N/A (Methodology) | Methodological research | UAV-SfM photogrammetry | Method to improve terrain modeling accuracy | Experimental data; error analysis | Analyzing the spatial structure of errors can enhance the accuracy of terrain models created with UAV-SfM photogrammetry. | UAV Photogrammetry; Terrain Modeling; Error Analysis |
Dai et al., 2023 [] | Low-altitude urban environments | Algorithm development | Vision-based UAV self-positioning | A method for UAV self-positioning without GPS | Image data; experimental results | Developed a vision-based method enabling accurate UAV self-positioning in low-altitude urban areas where GPS may be unavailable. | UAV Navigation; Computer Vision; Urban Environments |
De Fino et al., 2023 [] | Heritage buildings | Scoping review | Photogrammetry | Review on photogrammetry for heritage building condition assessment | Literature review | Provides a comprehensive overview of how photogrammetry can be used for the condition assessment of heritage buildings from a decision maker’s perspective. | Heritage Management; Photogrammetry; Condition Assessment |
Enesi et al., 2022 [] | N/A (Case study) | Case study | Photogrammetry | Quality assessment of 3D reconstruction for small objects | Case study data | Investigates the quality of 3D reconstructions of small objects using photogrammetry. | 3D Reconstruction; Photogrammetry; Quality Assessment |
Epaud, 2019 [] | Notre Dame | Historical analysis | N/A (Historical study) | Description of the lost roof structure of Notre Dame | Historical and architectural analysis | Provides a historical account of the medieval timber roof structure of Notre Dame Cathedral that was lost in the 2019 fire. | Architectural History; Cultural Heritage; Notre Dame |
Fatorić and Seekamp, 2017 [] | Global | Systematic literature review | N/A (Review) | Review of threats from climate change to cultural heritage | Literature databases | Identifies that cultural heritage is threatened by climate change and highlights research gaps. | Climate Change; Cultural Heritage; Risk Assessment |
Furukawa and Hernández, 2015 [] | N/A (Methodology) | Tutorial | Multiview stereo (MVS) | A comprehensive tutorial on MVS for 3D reconstruction | Computer vision literature | Provides a foundational overview and tutorial regarding multiview stereo algorithms used for 3D reconstruction from multiple images. | 3D Reconstruction; Computer Vision; Multiview Stereo |
Gao, 2024 [] | China | Visualization study | Digital simulation and reconstruction | Visualization of traditional artistic crafts from the “Kao Gong Ji” text | The “Kao Gong Ji” ancient text | Digital visualization can reproduce ancient craft technology, providing inspiration for modern craft innovation and promoting the integration of traditional crafts with modern technology. | Digital Heritage; Visualization; Traditional Crafts |
Hansen et al., 2023 [] | Guatemala – Mirador–Calakmul Basin | Archaeological analysis | LiDAR | Analysis of regional early Maya organization | LiDAR data | LiDAR analyses provide new perspectives on the socioeconomic and political organization of the early Maya civilization. | LiDAR; Archaeology; Maya Civilization |
Hu and Minner, 2023 [] | General (Urban environments) | Systematic review | UAVs, 3D city modeling | Review of UAVs and 3D modeling for urban planning and historic preservation | Literature review | Systematically reviews the use of UAVs and 3D city modeling to support urban planning and historic preservation, identifying trends and gaps. | UAVs; 3D Modeling; Urban Planning; Historic Preservation |
Hurst et al., 2024 [] | N/A (Lab/research setting) | Technology assessment | Apple’s Object Capture photogrammetry API | Assessment of the API for creating cultural heritage 3D models | 3D models created with the API, quality assessment metrics | Assesses the suitability and quality of Apple’s Object Capture for rapidly creating research-quality 3D models of cultural heritage objects. | 3D Modeling; Photogrammetry; Cultural Heritage; Technology Assessment |
Jacquot, 2024 [] | Notre Dame de Paris | Communication and image analysis | N/A (Analysis of images) | Analysis of the images of the “Forest” (roof structure) of Notre Dame | Images, media representations | Analyzes how the “Forest”, the historic timber roof of Notre-Dame de Paris, has been communicated and represented through images. | Notre Dame; Cultural Heritage; Image Analysis; Communication |
Joosen et al., 2022 [] | EU | Political analysis | N/A (Rulemaking) | Analysis of EU aviation rulemaking | Policy analysis | Examines how interest groups and national agencies influence the rulemaking of the European Union Aviation Safety Agency (EASA). | Regulation; Aviation Safety; EU Policy |
Komárek, 2025 [] | General | Analysis | Drones (UAVs) | Analysis of wind constraints on drone mapping | Literature review, meteorological data | Exposes how wind acts as a major constraint for drone-based environmental mapping, affecting feasibility and data quality. | Drones; Environmental Mapping; Weather Constraints |
Kong et al., 2022 [] | N/A (Methodology) | Methodological development | UAV photogrammetry; GCPs | Automatic method for marking ground control points (GCPs) | Experimental data | Proposes an automatic and accurate method for marking GCPs to improve the efficiency and accuracy of UAV photogrammetry. | UAV Photogrammetry; Automation; Ground Control Points |
Kovanič et al., 2023 [] | N/A (Review) | Literature review | UAV, photogrammetry, LiDAR | Review of photogrammetric and LiDAR applications of UAVs | Literature review | Provides a comprehensive review of the current applications and developments in UAV-based photogrammetry and LiDAR. | UAV; Photogrammetry; LiDAR; Remote Sensing |
Letellier, 2007 [] | General | Guiding principles | Information management | Principles for heritage documentation and conservation | Best practices, case studies | Establishes guiding principles for recording, documentation, and information management for conserving heritage places. | Heritage Conservation; Documentation; Information Management |
Li et al., 2025 [] | Urban forests | Systematic review | LiDAR | Review of LiDAR for estimating individual tree aboveground biomass | Empirical studies | Systematically reviews empirical studies on modeling individual tree aboveground biomass in urban forests using LiDAR-derived metrics. | LiDAR; Urban Forestry; Remote Sensing; Biomass Estimation |
Maboudi et al., 2023 [] | N/A (Review) | Literature review | UAV, 3D reconstruction | Review of viewpoint and path planning methods for UAV-based 3D reconstruction | Literature review | Reviews and classifies path planning methods for UAVs to optimize the process of 3D reconstruction. | UAV; 3D Reconstruction; Path Planning; Automation |
Marek, 2022 [] | General | Legal analysis | Digital cultural heritage | Analysis of intellectual property (IP) issues | Legal frameworks, literature | Navigates the complex landscape of intellectual property rights as they apply to digital cultural heritage. | Digital Heritage; Intellectual Property; Law |
Masters et al., 2022 [] | N/A (Virtual) | Experimental study | Immersive virtual reality (VR) | VR application for stress reduction (forest bathing) | User study data | Investigates the effect of biomass levels in a virtual forest on stress reduction in an immersive VR forest bathing application. | Virtual Reality; Health; Wellbeing; Stress Reduction |
Moyano et al., 2022 [] | Architectural restoration | Methodological paper | Historical building information modeling (HBIM) | Systematic approach for HBIM generation | Case study/project data | Proposes a systematic approach for generating HBIM models for architectural restoration projects. | HBIM; Architectural Restoration; Digital Heritage |
Niu et al., 2024 [] | N/A (Field test) | Accuracy assessment | UAV photogrammetry; RTK | Accuracy assessment of RTK-UAV systems for direct georeferencing | Field measurements, UAV data | RTK-equipped UAVs can achieve centimeter-level accuracy without GCPs, but urban multipath is a challenge. | UAV Photogrammetry; RTK; Georeferencing; Accuracy |
O’Connor et al., 2017 [] | Geosciences | Methodological review | Aerial survey cameras | Guidelines for optimizing image data for aerial surveys | Literature review, technical specs | Discusses the importance of sensor size, pixel pitch, GSD, and calibration in optimizing aerial survey imagery. | Aerial Survey; Remote Sensing; Photogrammetry; Data Quality |
Patrucco et al., 2020 [] | Built heritage | Data fusion methodology | Thermal imaging, optical sensors | Fused thermal and optical data to support built heritage analyses | Sensor data | Demonstrates the benefits of fusing thermal and optical data to support the analysis and conservation of built heritage. | Data Fusion; Thermal Imaging; Built Heritage |
Poiron, 2021 [] | Virtual Ancient Egypt | Case study | Video game (Assassin’s Creed) | “Discovery Tour” educational mode | Game development process | Provides a behind-the-scenes look at the creation of the educational “Discovery Tour”, highlighting its potential for public engagement. | Virtual Heritage; Education; Video Games; Public Engagement |
Remondino and Rizzi, 2010 [] | Heritage sites | Review | 3D documentation (photogrammetry, laser scanning) | Review of techniques, problems, and examples in reality-based 3D documentation | Case studies, literature review | Reviews techniques and problems of reality-based 3D documentation, highlighting its importance for heritage conservation. | 3D Documentation; Cultural Heritage; Photogrammetry |
Ringle et al., 2021 [] | Yucatan, Mexico | Archaeological survey | LiDAR | LiDAR survey of ancient Maya settlement | LiDAR data | LiDAR survey reveals extensive ancient Maya settlement and landscape modifications in the Puuc region. | LiDAR; Archaeology; Maya Civilization |
Rocha et al., 2024 [] | Lisbon, Portugal | Case study | Scan-to-BIM | BIM model for heritage maintenance | Laser scan data, building docs | Demonstrates the application of a Scan-to-BIM approach for the maintenance of historical heritage buildings. | Scan-to-BIM; HBIM; Heritage Maintenance |
Roders and Van Oers, 2011 [] | World Heritage cities | Management analysis | N/A (Management frameworks) | Analysis of management practices | Case studies, policy documents | Discusses the challenges in and approaches to managing World Heritage cities, balancing conservation and development. | World Heritage; Urban Management; Heritage Policy |
Sandron and Tallon, 2020 [] | Notre Dame Cathedral | Historical monograph | N/A (Historical/architectural analysis) | Comprehensive history of the cathedral | Historical archives, architectural analysis | Provides a detailed historical and architectural account of the Notre Dame Cathedral through nine centuries. | Notre Dame; Architectural History; Cultural Heritage |
Schonberger and Frahm, 2016 [] | N/A (Methodology) | Algorithmic improvement | Structure from motion (SfM) | An improved and more robust SfM pipeline | Image datasets, algorithm metrics | Presents a revisited and improved structure-from-motion pipeline that is more robust and accurate. | Structure From Motion; 3D Reconstruction; Computer Vision |
Smith, 2006 [] | General | Theoretical framework/book | N/A (Heritage studies) | Theoretical framework on the uses of heritage | Theoretical analysis, literature review | Argues that heritage is a cultural process and discourse rather than an inherent quality of objects, shaped by social and political contexts. | Heritage Studies; Cultural Theory; Social Value |
Themistocleous, 2019 [] | Cultural heritage and archaeology | Review and book chapter | UAVs | Overview of UAV applications in cultural heritage and archaeology | Literature review, case studies | UAVs are valuable tools for documentation, monitoring, and analysis in cultural heritage and archaeology. | UAVs; Cultural Heritage; Archaeology; Remote Sensing |
Tu et al., 2021 [] | Rock formations | Methodological study | UAVs, photogrammetry | Improved method for 3D reconstruction of complex structures | UAV imagery (nadir, oblique, façade) | Combining nadir, oblique, and façade imagery enhances the completeness and accuracy of 3D reconstructions of complex rock formations. | UAV Photogrammetry; 3D Reconstruction; Data Acquisition |
Wang et al., 2022 [] | Street scenes (virtual) | Algorithm development | Neural light fields, differentiable rendering | Method for neural light field estimation and virtual object insertion | Image datasets | Developed a neural network approach to estimate light fields, enabling realistic virtual object insertion into street scenes. | Computer Graphics; Neural Rendering; Augmented Reality |
Wesner and Blevins, 2021 [] | USA and UK | Comparative legal analysis | Automated license plate readers (ALPRs) | Comparison of privacy policies for ALPRs | Legal documents, policy analysis | Compares US and UK privacy policies for ALPRs, highlighting different approaches to regulating surveillance technology. | Surveillance; Privacy; Regulation; Policy |
Yan and Du, 2025 [] | Historical districts | User study and empirical research | Virtual reality (VR) | Analysis of factors influencing tourists’ travel intention from VR experiences | User surveys, experimental data | Investigates how VR-based reconstructions of historical districts influence a tourist’s intention to visit the actual site. | Virtual Reality; Tourism; Cultural Heritage; User Experience |
Zhang et al., 2022 [] | Urban environments | Methodological development | LiDAR, photogrammetry, deep learning (U-Net) | A method for 3D urban building extraction | Airborne LiDAR, photogrammetric point clouds | A deep learning model (U-Net) can effectively fuse LiDAR and photogrammetric data to extract 3D urban buildings. | 3D Modeling; Urban Mapping; LiDAR; Deep Learning; Data Fusion |
Zhou et al., 2020 [] | N/A (Methodology) | Methodological development | UAV photogrammetry | A two-step method to correct rolling shutter distortion | UAV imagery, experimental data | Proposes a two-step approach that effectively corrects rolling shutter distortion in UAV imagery, improving photogrammetric accuracy. | UAV Photogrammetry; Image Processing; Data Quality |
Barker et al., 2023 [] | N/A (Methodology) | Methodological development | N/A (Appraisal tools) | Revised JBI quantitative critical appraisal tools | Development process, expert consultation | Outlines the process and methods for revising the JBI critical appraisal tools to improve their applicability. | Research Methodology; Critical Appraisal; Evidence Synthesis |
Casini, 2022 [] | Smart buildings | Review | Extended reality (XR) | A review of XR for smart building operation and maintenance | Literature review | Reviews the application of extended reality technologies in improving the operation and maintenance of smart buildings. | Extended Reality; Smart Buildings; Building Management |
Doulamis et al., 2017 [] | Cultural heritage | Book chapter/methodology | 3D Modeling; digitization | Methods for digitizing tangible and intangible cultural heritage | N/A (Methodological description) | Discusses methods for modeling and digitizing both static (tangible) and moving (intangible) aspects of cultural heritage. | Digital Heritage; 3D Modeling; Intangible Heritage |
Gazagne et al., 2023 [] | Vietnam | Case study and application | UAVs, thermal infrared (TIR) sensors | Assessment of UAVs for primate monitoring | Field data (UAV imagery) | Demonstrates that UAVs with thermal sensors are effective in monitoring and counting threatened primate species. | UAVs; Thermal Imaging; Wildlife Monitoring; Conservation |
Gerchow et al., 2025 [] | N/A (Vegetation) | Methodological development | UAVs, thermal imaging | Enhanced flight planning and calibration methods | Experimental data | Proposes enhanced flight planning and calibration techniques for UAV-based thermal imaging to improve analysis of canopy temperature. | UAVs; Thermal Imaging; Remote Sensing; Vegetation Analysis |
Koo et al., 2021 [] | N/A (BIM) | Algorithm development | BIM, deep learning (DNNs) | A method for automatic classification of BIM element subtypes | 3D geometric data | Developed a deep neural network to automatically classify wall and door subtypes in BIM models based on their 3D geometry. | BIM; Deep Learning; Automation; 3D Modeling |
Lenzerini, 2011 [] | General | Legal and theoretical analysis | N/A | Analysis of intangible cultural heritage | Legal frameworks, literature | Discusses the concept of intangible cultural heritage as the living culture of people and its significance in international law. | Intangible Heritage; Cultural Policy; Law |
Meschini et al., 2022 [] | University campus | System development/case study | Digital twins, BIM, GIS | A BIM-GIS asset management system for a cognitive digital twin | University building data | Presents a framework for creating cognitive digital twins for university asset management by integrating BIM and GIS. | Digital Twin; BIM; GIS; Asset Management |
Probst et al., 2018 [] | N/A (Vegetation) | Comparative study | Photogrammetry software | Comparison of photogrammetry software for 3D vegetation modeling | Test datasets, software outputs | Provides an intercomparison of different photogrammetry software packages for their effectiveness in 3D vegetation modeling. | Photogrammetry; 3D Modeling; Vegetation; Software Comparison |
Tallon, 2014 [] | Architectural history | Historical and methodological analysis | Laser scanning | Analysis of architectural proportions using modern tech | Laser scan data of gothic cathedrals | Discusses how modern technologies like laser scanning can be used to analyze and understand the geometric proportions of historical architecture. | Architectural History; Laser Scanning; Digital Heritage |
Verykokou and Ioannidis, 2023 [] | N/A | Review | 3D modeling, photogrammetry, laser scanning | An overview of image-based and scanner-based 3D modeling | Literature review | Provides a comprehensive overview and comparison of image-based (photogrammetry) and scanner-based (LiDAR) 3D modeling technologies. | 3D Modeling; Photogrammetry; Laser Scanning; Review |
Virtue et al., 2021 [] | N/A | Methodological development | UAVs, thermal sensors | A method for thermal sensor calibration | Experimental data | Proposes a method for calibrating thermal sensors on UAVs using an external heated shutter to improve data accuracy. | UAVs; Thermal Imaging; Sensor Calibration; Data Quality |
Yin et al., 2016 [] | N/A (Simulation) | Simulation model development | LiDAR, DART model | Simulation of LiDAR with the DART model | N/A (Model description) | Details the simulation of airborne and terrestrial LiDAR systems using the DART model, including features like multi-pulse and photon counting. | LiDAR; Remote Sensing; Simulation |
Yoo et al., 2024 [] | N/A (Buildings) | Methodological development | 3D semantic segmentation, point clouds | A semi-automated method for creating building datasets | Point cloud data | Proposes a semi-automated process for creating labeled building datasets from point clouds for 3D semantic segmentation tasks. | 3D Modeling; Semantic Segmentation; Datasets; Automation |
References
- Avrami, E. Values in Heritage Management: Emerging Approaches and Research Directions; Getty Publications: Los Angeles, CA, USA, 2019. [Google Scholar]
- Chen, M.; Wu, S.; Wang, F.; Pang, X. Study on the Approach of Integrating Cultural Heritage into Sustainable Education in the Context of Digitization. J. Glob. Humanit. Soc. Sci. 2024, 5, 383–388. [Google Scholar] [CrossRef]
- Smith, L. Uses of Heritage; Routledge: Abingdon, UK, 2006. [Google Scholar]
- Fatorić, S.; Seekamp, E. Are cultural heritage and resources threatened by climate change? A systematic literature review. Clim. Change 2017, 142, 227–254. [Google Scholar] [CrossRef]
- Roders, A.P.; Van Oers, R. World Heritage cities management. Facilities 2011, 29, 276–285. [Google Scholar] [CrossRef]
- Letellier, R. Recording, Documentation, and Information Management for the Conservation of Heritage Places: Guiding Principles; Getty Conservation Institute: Los Angeles, CA, USA, 2007. [Google Scholar]
- Gao, F. Visualization Study of Traditional Artistic Crafts of “Kao Gong Ji” Based on Digital Simulation and Reconstruction. J. Glob. Humanit. Soc. Sci. 2024, 5, 234–239. [Google Scholar] [CrossRef]
- De Fino, M.; Galantucci, R.A.; Fatiguso, F. Condition assessment of heritage buildings via photogrammetry: A scoping review from the perspective of decision makers. Heritage 2023, 6, 7031–7066. [Google Scholar] [CrossRef]
- Moyano, J.; Carreño, E.; Nieto-Julián, J.E.; Gil-Arizón, I.; Bruno, S. Systematic approach to generate Historical Building Information Modelling (HBIM) in architectural restoration project. Autom. Constr. 2022, 143, 104551. [Google Scholar] [CrossRef]
- Remondino, F.; Rizzi, A. Reality-based 3D documentation of natural and cultural heritage sites—Techniques, problems, and examples. Appl. Geomat. 2010, 2, 85–100. [Google Scholar] [CrossRef]
- Rocha, G.; Mateus, L.; Ferreira, V. Historical heritage maintenance via Scan-to-BIM approaches: A case study of the lisbon agricultural exhibition pavilion. ISPRS Int. J. Geo-Inf. 2024, 13, 54. [Google Scholar] [CrossRef]
- Epic Games, Inc. RealityScan 2.0: New Release Brings Powerful New Features to a Rebranded RealityCapture. 2025. Available online: https://www.realityscan.com/en-US/news/realityscan-20-new-release-brings-powerful-new-features-to-a-rebranded-realitycapture (accessed on 10 June 2025).
- Agisoft LLC. Agisoft Metashape User Manual, Professional Edition, Version 2.1.2. 2024. Available online: https://www.agisoft.com/downloads/user-manuals/ (accessed on 10 June 2025).
- Pix4D SA. PIX4Dmapper: Photogrammetry Software for Professional Drone Mapping. 2023. Available online: https://www.pix4d.com/product/pix4dmapper-photogrammetry-software (accessed on 11 June 2025).
- Enesi, I.; Kuqi, A.; Zanaj, E. Quality of 3D reconstruction based on photogrammetry for small objects, a case study. IOP Conf. Ser. Mater. Sci. Eng. 2022, 1254, 012039. [Google Scholar] [CrossRef]
- Themistocleous, K. The use of UAVs for cultural heritage and archaeology. In Remote Sensing for Archaeology and Cultural Landscapes: Best Practices and Perspectives Across Europe and the Middle East; Springer: Berlin/Heidelberg, Germany, 2019; pp. 241–269. [Google Scholar]
- Patrucco, G.; Cortese, G.; Giulio Tonolo, F.; Spanò, A. Thermal and optical data fusion supporting built heritage analyses. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 619–626. [Google Scholar] [CrossRef]
- Kovanič, L.; Topitzer, B.; Pet’ovskỳ, P.; Blišt’an, P.; Gergel’ová, M.B.; Blišt’anová, M. Review of photogrammetric and lidar applications of UAV. Appl. Sci. 2023, 13, 6732. [Google Scholar] [CrossRef]
- Bramer, W.M.; De Jonge, G.B.; Rethlefsen, M.L.; Mast, F.; Kleijnen, J. A systematic approach to searching: An efficient and complete method to develop literature searches. J. Med. Libr. Assoc. JMLA 2018, 106, 531. [Google Scholar] [CrossRef]
- Chen, G.; Yang, R.; Zhao, X.; Li, L.; Luo, L.; Liu, H. Bibliometric analysis of spatial technology for world heritage: Application, trend and potential paths. Remote Sens. 2023, 15, 4695. [Google Scholar] [CrossRef]
- Barker, T.H.; Stone, J.C.; Sears, K.; Klugar, M.; Leonardi-Bee, J.; Tufanaru, C.; Aromataris, E.; Munn, Z. Revising the JBI quantitative critical appraisal tools to improve their applicability: An overview of methods and the development process. JBI Evid. Synth. 2023, 21, 478–493. [Google Scholar] [CrossRef]
- Schonberger, J.L.; Frahm, J.M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113. [Google Scholar]
- Furukawa, Y.; Hernández, C. Multi-view stereo: A tutorial. Found. Trends® Comput. Graph. Vis. 2015, 9, 1–148. [Google Scholar] [CrossRef]
- Hurst, S.; Franklin, L.; Johnson, E. Assessment of Apple’s object capture photogrammetry API for rapidly creating research quality cultural heritage 3D models. PLoS ONE 2024, 19, e0314560. [Google Scholar] [CrossRef]
- Masters, R.; Interrante, V.; Watts, M.; Ortega, F. Virtual nature: Investigating the effect of biomass on immersive virtual reality forest bathing applications for stress reduction. In Proceedings of the SAP ’22: ACM Symposium on Applied Perception 2022, Online, 22–23 September 2022; pp. 1–10. [Google Scholar]
- Epic Games, Inc. Unreal Engine 5. 2025. Available online: https://www.unrealengine.com/en-US/unreal-engine-5 (accessed on 20 June 2025).
- Bao, M.; Tao, Z.; Wang, X.; Liu, J.; Sun, Q. Comparative Performance Analysis of Rendering Optimization Methods in Unity Tuanjie Engine, Unity Global and Unreal Engine. In Proceedings of the 2024 IEEE Smart World Congress (SWC), Denarau Island, Fiji, 2–7 December 2024; pp. 1627–1632. [Google Scholar]
- Epic Games, Inc. Quixel FAb. 2025. Available online: https://www.fab.com/sellers/Quixel (accessed on 22 June 2025).
- Boon, M.A.; Drijfhout, A.P.; Tesfamichael, S. Comparison of a fixed-wing and multi-rotor UAV for environmental mapping applications: A case study. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 47–54. [Google Scholar] [CrossRef]
- Crosby, A. Fixed-Wing vs Multirotor Drones: Which Is Better? 2022. Available online: https://geonadir.com/fixed-wing-vs-multirotor-drones/ (accessed on 22 June 2025).
- O’Connor, J.; Smith, M.J.; James, M.R. Cameras and settings for aerial surveys in the geosciences: Optimising image data. Prog. Phys. Geogr. 2017, 41, 446–467. [Google Scholar] [CrossRef]
- Zhou, Y.; Daakir, M.; Rupnik, E.; Pierrot-Deseilligny, M. A two-step approach for the correction of rolling shutter distortion in UAV photogrammetry. ISPRS J. Photogramm. Remote Sens. 2020, 160, 51–66. [Google Scholar] [CrossRef]
- Li, R.; Wang, L.; Zhai, Y.; Huang, Z.; Jia, J.; Wang, H.; Ding, M.; Fang, J.; Yao, Y.; Ye, Z.; et al. Modeling LiDAR-Derived 3D Structural Metric Estimates of Individual Tree Aboveground Biomass in Urban Forests: A Systematic Review of Empirical Studies. Forests 2025, 16, 390. [Google Scholar] [CrossRef]
- Asadzadeh, S.; de Oliveira, W.J.; de Souza Filho, C.R. UAV-based remote sensing for the petroleum industry and environmental monitoring: State-of-the-art and perspectives. J. Pet. Sci. Eng. 2022, 208, 109633. [Google Scholar] [CrossRef]
- Yin, T.; Lauret, N.; Gastellu-Etchegorry, J.P. Simulation of satellite, airborne and terrestrial LiDAR with DART (II): ALS and TLS multi-pulse acquisitions, photon counting, and solar noise. Remote Sens. Environ. 2016, 184, 454–468. [Google Scholar] [CrossRef]
- Zhang, P.; He, H.; Wang, Y.; Liu, Y.; Lin, H.; Guo, L.; Yang, W. 3D urban buildings extraction based on airborne lidar and photogrammetric point cloud fusion according to U-Net deep learning model segmentation. IEEE Access 2022, 10, 20889–20897. [Google Scholar] [CrossRef]
- Gerchow, M.; Kühnhammer, K.; Iraheta, A.; Marshall, J.D.; Beyer, M. Enhanced flight planning and calibration for UAV based thermal imaging: Implications for canopy temperature and transpiration analysis. Front. For. Glob. Change 2025, 8, 1457762. [Google Scholar] [CrossRef]
- Gazagne, E.; Gray, R.J.; Ratajszczak, R.; Brotcorne, F.; Hambuckers, A. Unmanned aerial vehicles (UAVs) with thermal infrared (TIR) sensors are effective for monitoring and counting threatened Vietnamese primates. Primates 2023, 64, 407–413. [Google Scholar] [CrossRef]
- Virtue, J.; Turner, D.; Williams, G.; Zeliadt, S.; McCabe, M.; Lucieer, A. Thermal sensor calibration for unmanned aerial systems using an external heated shutter. Drones 2021, 5, 119. [Google Scholar] [CrossRef]
- Pix4D Documentation Team. Mission Types—PIX4Dcapture Pro. 2025. Available online: https://support.pix4d.com/hc/en-us/articles/8945986694301 (accessed on 24 June 2025).
- DJI Documentation Team. DJI GS Pro. 2025. Available online: https://www.dji.com/uk/ground-station-pro?backup_page=index (accessed on 24 June 2025).
- UgCS Documentation Team. UgCS-Drone Flight Planning Software. 2025. Available online: https://www.sphengineering.com/flight-planning/ugcs (accessed on 24 June 2025).
- UgCS Documentation Team. UgCS Downloads. 2025. Available online: https://www.sphengineering.com/flight-planning/ugcs-downloads (accessed on 24 June 2025).
- Maboudi, M.; Homaei, M.; Song, S.; Malihi, S.; Saadatseresht, M.; Gerke, M. A review on viewpoints and path planning for UAV-based 3-D reconstruction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 5026–5048. [Google Scholar] [CrossRef]
- Tu, Y.H.; Johansen, K.; Aragon, B.; Stutsel, B.M.; Ángel, Y.; Camargo, O.A.L.; Al-Mashharawi, S.K.; Jiang, J.; Ziliani, M.G.; McCabe, M.F. Combining nadir, oblique, and façade imagery enhances reconstruction of rock formations using unmanned aerial vehicles. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9987–9999. [Google Scholar] [CrossRef]
- Pix4D Documentation Team. How to Verify That There Is Enough Overlap Between the Images. 2025. Available online: https://support.pix4d.com/hc/en-us/articles/203756125 (accessed on 24 June 2025).
- Wallin, D.O.; Bergner, J. Evaluation of Recreational Impacts on Eelgrass Using Unoccupied Aerial Systems and Virtual Ground Truth Data. J. Coast. Res. 2025, 41, 187–198. [Google Scholar] [CrossRef]
- Niu, Z.; Xia, H.; Tao, P.; Ke, T. Accuracy assessment of UAV photogrammetry system with RTK measurements for direct georeferencing. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2024, X-1, 169–176. [Google Scholar] [CrossRef]
- Dai, M.; Zheng, E.; Feng, Z.; Qi, L.; Zhuang, J.; Yang, W. Vision-based UAV self-positioning in low-altitude urban environments. IEEE Trans. Image Process. 2023, 33, 493–508. [Google Scholar] [CrossRef] [PubMed]
- Kong, L.; Chen, T.; Kang, T.; Chen, Q.; Zhang, D. An automatic and accurate method for marking ground control points in unmanned aerial vehicle photogrammetry. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 16, 278–290. [Google Scholar] [CrossRef]
- Cai, W.; Jin, K.; Hou, J.; Guo, C.; Wu, L.; Yang, W. Vdd: Varied drone dataset for semantic segmentation. J. Vis. Commun. Image Represent. 2025, 109, 104429. [Google Scholar] [CrossRef]
- Yoo, H.; Kim, Y.; Ryu, J.H.; Lee, S.; Lee, J.H. Semi-Automated Building Dataset Creation for 3D Semantic Segmentation of Point Clouds. Electronics 2024, 14, 108. [Google Scholar] [CrossRef]
- Koo, B.; Jung, R.; Yu, Y. Automatic classification of wall and door BIM element subtypes using 3D geometric deep neural networks. Adv. Eng. Inform. 2021, 47, 101200. [Google Scholar] [CrossRef]
- UNESCO World Heritage Centre. Ancient Maya City and Protected Tropical Forests of Calakmul, Campeche. 2014. Available online: https://whc.unesco.org/en/list/1061/ (accessed on 28 June 2025).
- YellowScan. How LiDAR Mapping Uncovered Archeological Ruins. 2024. Available online: https://www.yellowscan.com/knowledge/how-lidar-mapping-uncovered-archeological-ruins-in-mexico/ (accessed on 28 June 2025).
- Ringle, W.M.; Gallareta Negrón, T.; May Ciau, R.; Seligson, K.E.; Fernandez-Diaz, J.C.; Ortegón Zapata, D. Lidar survey of ancient Maya settlement in the Puuc region of Yucatan, Mexico. PLoS ONE 2021, 16, e0249314. [Google Scholar] [CrossRef]
- Kimball, J. Have We Found All the Major Maya Cities? Not even Close. 2024. Available online: https://news.nau.edu/maya-lidar-study/ (accessed on 28 June 2025).
- McAvoy, S. Chichen Itza 3D Archaeological Atlas. 2023. Available online: https://pointcloud.ucsd.edu/Mexico/chichen_itza/ (accessed on 28 June 2025).
- Hansen, R.D.; Morales-Aguilar, C.; Thompson, J.; Ensley, R.; Hernández, E.; Schreiner, T.; Suyuc-Ley, E.; Martínez, G. LiDAR analyses in the contiguous Mirador-Calakmul Karst Basin, Guatemala: An introduction to new perspectives on regional early Maya socioeconomic and political organization. Anc. Mesoam. 2023, 34, 587–626. [Google Scholar] [CrossRef]
- Vallée, A.; Sorbets, E.; Lelong, H.; Langrand, J.; Blacher, J. The lead story of the fire at the Notre-Dame cathedral of Paris. Environ. Pollut. 2021, 269, 116140. [Google Scholar] [CrossRef]
- Tallon, A. Divining proportions in the information age. Archit. Hist. 2014, 2, 15. [Google Scholar]
- Sandron, D.; Tallon, A. Notre Dame Cathedral: Nine Centuries of History; Penn State Press: University Park, PA, USA, 2020. [Google Scholar]
- Vincent, A. The Man Who Scanned Notre-Dame: Could Andrew Tallon’s Work Help Rebuild the Cathedral? 2019. Available online: https://www.telegraph.co.uk/art/architecture/man-scanned-notre-dame-could-andrew-tallons-work-help-rebuild/ (accessed on 1 July 2025).
- Art Graphique Patrimoine. Notre-Dame de Paris. 2021. Available online: https://artgp.fr/references/notre-dame/ (accessed on 1 July 2025).
- Epaud, F. La charpente disparue de Notre-Dame. Rev. Hist. 2019, 57, 40–45. [Google Scholar]
- Peskoe-Yang, L. Paris Firefighters Used This Remote-Controlled Robot to Extinguish the Notre Dame Blaze. IEEE Spectrum. Available online: https://spectrum.ieee.org/colossus-the-firefighting-robot-that-helped-save-notre-dame (accessed on 1 July 2025).
- Jacquot, K. Les images de la «Forêt» de Notre-Dame de Paris. Études de Commun. 2024, 113–131. [Google Scholar] [CrossRef]
- Ubisoft. Notre-Dame de Paris: Journey Back in Time. 2020. Available online: https://store.steampowered.com/app/1341280/NotreDame_de_Paris_Journey_Back_in_Time/ (accessed on 1 July 2025).
- Orange SA. Eternal Notre-Dame. 2022. Available online: https://www.meta.com/experiences/eternal-notre-dame/5456047137753124/ (accessed on 1 July 2025).
- Biryukova, M.V.; Nikonova, A.A. The role of digital technologies in the preservation of cultural heritage. Muzeológia a Kultúrne Dedičstvo 2017, 5, 1. [Google Scholar]
- McKenna, F.; Rivera-Carlisle, J.; Foale, M.; Gould, A.; Rivera-Carlisle, P.; Andrews, H.; Grant, S.; Hawcroft, A.; Head, D. Digital Cultural Heritage: Imagination, Innovation and Opportunity. 2025. Available online: https://www.britishcouncil.org/research-insight/digital-cultural-heritage (accessed on 3 July 2025).
- Ristevski, J. CyArk Brings Accurate 3D Immersive Data to Life in Virtual Reality. 2018. Available online: https://www.cyark.org/about/cyark-brings-accurate-3d-immersive-data-to-life-in-virtual-reality? (accessed on 3 July 2025).
- Poiron, P. Assassin’s Creed Origins Discovery Tour: A behind the scenes experience. Near East. Archaeol. 2021, 84, 79–85. [Google Scholar] [CrossRef]
- Barbara, J. Re-live history: An immersive virtual reality learning experience of prehistoric intangible cultural heritage. Front. Educ. 2022, 7, 1032108. [Google Scholar] [CrossRef]
- Yan, Y.; Du, Q. From digital imagination to real-world exploration: A study on the influence factors of VR-based reconstruction of historical districts on tourists’ travel intention in the field. Virtual Real. 2025, 29, 1–19. [Google Scholar] [CrossRef]
- Gao, M.; Hugenholtz, C.H.; Fox, T.A.; Kucharczyk, M.; Barchyn, T.E.; Nesbit, P.R. Weather constraints on global drone flyability. Sci. Rep. 2021, 11, 12092. [Google Scholar] [CrossRef]
- Komárek, J. When the Wind Blows: Exposing the Constraints of Drone-Based Environmental Mapping. Nat. Sci. 2025, 5, e70003. [Google Scholar] [CrossRef]
- Wang, Z.; Chen, W.; Acuna, D.; Kautz, J.; Fidler, S. Neural light field estimation for street scenes with differentiable virtual object insertion. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 380–397. [Google Scholar]
- Alsadik, B.; Remondino, F.; Nex, F. Simulating a hybrid acquisition system for UAV platforms. Drones 2022, 6, 314. [Google Scholar] [CrossRef]
- Verykokou, S.; Ioannidis, C. An overview on image-based and scanner-based 3D modeling technologies. Sensors 2023, 23, 596. [Google Scholar] [CrossRef] [PubMed]
- Hu, D.; Minner, J. UAVs and 3D City Modeling to Aid Urban Planning and Historic Preservation: A Systematic Review. Remote Sens. 2023, 15, 5507. [Google Scholar] [CrossRef]
- Pix4D. Recommended Hardware—PIX4Dmatic. 2025. Available online: https://support.pix4d.com/hc/en-us/articles/360044895711 (accessed on 3 July 2025).
- Dai, W.; Qiu, R.; Wang, B.; Lu, W.; Zheng, G.; Amankwah, S.O.Y.; Wang, G. Enhancing UAV-SfM photogrammetry for terrain modeling from the perspective of spatial structure of errors. Remote Sens. 2023, 15, 4305. [Google Scholar] [CrossRef]
- Probst, A.; Gatziolis, D.; Strigul, N. Intercomparison of photogrammetry software for three-dimensional vegetation modelling. R. Soc. Open Sci. 2018, 5, 172192. [Google Scholar] [CrossRef] [PubMed]
- Ahmad, N.; Chaturvedi, S.; Masum, A. Unregulated drones and an emerging threat to right to privacy: A critical overview. J. Data Prot. Priv. 2021, 4, 124–145. [Google Scholar] [CrossRef]
- Joosen, R.; Haverland, M.; de Bruijn, E. Shaping EU agencies’ rulemaking: Interest groups, national regulatory agencies and the European Union Aviation Safety Agency. Comp. Eur. Politics 2022, 20, 411–442. [Google Scholar] [CrossRef]
- Wesner, K.; Blevins, K. Restraining the Surveillance Society: Comparing Privacy Policies for Automated License Plate Readers in the United States and the United Kingdom. Ohio St. Tech. LJ 2021, 18, 99. [Google Scholar]
- ServiceNow. What Is EULA? 2025. Available online: https://www.servicenow.com/products/it-asset-management/what-is-eula.html (accessed on 5 July 2025).
- Capturing Reality/Epic Games. RealityScan End User License Agreement. 2025. Available online: https://www.realityscan.com/en-US/eula (accessed on 5 July 2025).
- Wallace, A.; Pavis, M. Working with Open Licences. 2024. Available online: https://www.heritagefund.org.uk/good-practice-guidance/working-open-licences (accessed on 5 July 2025).
- Marek, H.M. Navigating intellectual property in the landscape of digital cultural heritage sites. Int. J. Cult. Prop. 2022, 29, 1–21. [Google Scholar] [CrossRef]
- Meschini, S.; Pellegrini, L.; Locatelli, M.; Accardo, D.; Tagliabue, L.C.; Di Giuda, G.M.; Avena, M. Toward cognitive digital twins using a BIM-GIS asset management system for a diffused university. Front. Built Environ. 2022, 8, 959475. [Google Scholar] [CrossRef]
- Casini, M. Extended reality for smart building operation and maintenance: A review. Energies 2022, 15, 3785. [Google Scholar] [CrossRef]
- Lenzerini, F. Intangible cultural heritage: The living culture of peoples. Eur. J. Int. Law 2011, 22, 101–120. [Google Scholar] [CrossRef]
- Doulamis, N.; Doulamis, A.; Ioannidis, C.; Klein, M.; Ioannides, M. Modelling of static and moving objects: Digitizing tangible and intangible cultural heritage. In Mixed Reality and Gamification for Cultural Heritage; Springer: Cham, Switzerland, 2017; pp. 567–589. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).