Article

Low-Altitude Photogrammetry and 3D Modeling for Engineering Heritage: A Case Study on the Digital Documentation of a Historic Steel Truss Viaduct

by Tomasz Ciborowski 1, Dominik Księżopolski 2, Dominika Kuryłowicz 2, Hubert Nowak 2, Paweł Rocławski 2, Paweł Stalmach 2, Paweł Wałdowski 2, Anna Banas 3,* and Karolina Makowska-Jarosik 4
1 Department of Mechanics of Materials and Structures, Faculty of Civil and Environmental Engineering and EkoTech Center, Gdansk University of Technology, ul. Narutowicza 11/12, 80-233 Gdansk, Poland
2 Faculty of Civil and Environmental Engineering and EkoTech Center, Gdansk University of Technology, ul. Narutowicza 11/12, 80-233 Gdansk, Poland
3 Department of Engineering Structures, Faculty of Civil and Environmental Engineering and EkoTech Center, Gdansk University of Technology, ul. Narutowicza 11/12, 80-233 Gdansk, Poland
4 Department of Geodesy, Faculty of Civil and Environmental Engineering and EkoTech Center, Gdansk University of Technology, ul. Narutowicza 11/12, 80-233 Gdansk, Poland
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(23), 12491; https://doi.org/10.3390/app152312491
Submission received: 14 October 2025 / Revised: 12 November 2025 / Accepted: 20 November 2025 / Published: 25 November 2025

Featured Application

The presented UAV-based photogrammetric workflow provides a validated, non-invasive methodology for the accurate digital documentation of historic steel bridges. It can be directly applied to the preservation, monitoring, and restoration planning of other engineering heritage structures with complex geometries.

Abstract

For many historic engineering structures, including early 20th-century truss bridges, no comprehensive technical documentation has survived, making them highly vulnerable to irreversible loss. This study addresses this challenge by developing and testing a non-invasive, UAV-based photogrammetric methodology for the comprehensive documentation of the Niestępowo railway viaduct in Northern Poland. A dense geodetic control network was established using GNSS and total station measurements, providing a metrically verified reference framework for 3D reconstruction. Two photogrammetric software platforms—Bentley ContextCapture and Agisoft Metashape—were employed and comparatively evaluated in terms of processing workflow, accuracy, and model fidelity. To ensure methodological robustness, both tools were used for cross-validation of the generated 3D models and for the comparative assessment of their dimensional consistency against archival documentation. The results confirm that both platforms can produce highly accurate, photorealistic 3D models suitable for engineering inventory and heritage preservation, with Agisoft Metashape yielding slightly higher geometric precision, while Bentley ContextCapture ensured superior automation for large datasets. The generated 3D models reproduced details such as rivets, cracks, and corrosion marks with millimeter-level accuracy. The presented workflow demonstrates the potential of UAV photogrammetry as a reliable and scalable method for safeguarding cultural and technical heritage. By enabling the creation of metrically precise digital archives of historic bridges, the methodology supports future conservation, monitoring, and restoration efforts—preserving not only physical form but also the historical and engineering legacy of these structures.

1. Introduction

Photogrammetry, defined as the science of acquiring reliable metric data from photographs, has a longstanding history of application in surveying, mapping, and documentation. Conventional aerial and terrestrial photogrammetric approaches, although well established, are often associated with substantial operational costs and logistic complexity. Over the past decade, the advent of low-altitude unmanned aerial vehicle (UAV) photogrammetry, typically employing Structure-from-Motion (SfM) Multi-View Stereo (MVS) techniques, has substantially broadened the practical scope of this field by enabling flexible, high-resolution, and cost-effective data acquisition workflows [1,2,3].
Contemporary applications of UAV-based photogrammetry now extend beyond classic topographic surveys, encompassing domains such as coastal monitoring [2], assessment of erosion processes [3], debris-flow tracking [4], and high-precision mapping of engineering structures [5,6]. Comparative studies, for example [7], have confirmed that UAV photogrammetry can achieve results comparable to terrestrial laser scanning (TLS), albeit sometimes with limitations regarding point density. The accuracy of UAV-derived output is closely linked to the optimal configuration of ground control points (GCPs), as demonstrated in recent research by Cho et al. [8].
In the context of cultural heritage preservation, photogrammetry plays an increasingly important role in the documentation and monitoring of historic structures, including bridges, viaducts, and other infrastructural monuments. Steel truss bridges, many of which were built during the late 19th and early 20th centuries, are not only valuable engineering artifacts but also significant cultural assets. Their documentation contributes to heritage protection by enabling accurate records of geometry, material condition, and long-term change detection [9,10]. Recent studies emphasize the role of UAV photogrammetry in monitoring temporal changes in the state of preservation of heritage structures [11].
Engineering heritage structures—such as steel truss viaducts, arch and masonry bridges—are characterized by complex geometries, difficult-to-access components, and progressive material deterioration, all of which present unique documentation challenges. UAV photogrammetry addresses many of these issues by enabling fast, non-invasive, and detailed documentation of such assets. Tang et al. [12] illustrated that strategic oblique imaging notably enhances the fidelity of 3D reconstructions of railway bridges, while Ioli et al. [13] utilized UAV photogrammetry for crack detection on concrete structures, achieving millimeter-level measurement precision. Other studies establish UAV photogrammetry as a reliable, though sometimes less dense, alternative to TLS in digital twin generation for infrastructure monitoring, and highlight its role in rapid seismic risk assessments [14], structural monitoring [15], and in bridging gaps within combined UAV–TLS methodologies [16].
The method is also exploited within aquatic and hydraulic contexts, such as mapping submerged infrastructure and simulating hydraulic impacts [2]. These collective advancements reaffirm UAV photogrammetry as both a robust and versatile instrument for the digital documentation of heritage bridges and similar assets.
Data products generated from UAV photogrammetry—such as dense point clouds and photorealistic 3D meshes—are increasingly embedded in digital twin workflows, advancing efforts in heritage conservation, structural defect detection, and infrastructure condition monitoring. Mohammadi et al. [17] underscored the growing potential for multi-scale deterioration assessment through photogrammetrically derived digital twins, while Mousavi et al. [18] reviewed digital twin integrations in bridge management, stressing the centrality of UAV-acquired data.
The field is also witnessing a trend toward automation, as shown by Yasit et al. [15], who linked UAV-based digital twins with algorithmic crack detection for predictive bridge maintenance. The synergy of UAV photogrammetry and GIS further streamlines workflows for bridge risk assessment [19], while multi-temporal surveys are now effectively deployed for change detection in both geomorphological and structural contexts [4].
Despite these technological advances, various practical challenges persist. The capture of fine construction details, such as rivets or corrosion spots, demands exceptionally high spatial resolution and robust georeferencing strategies [8,12]. Occlusions, particularly under decks or within truss interiors, continue to impede model completeness [12,16]. TLS still provides denser point clouds, but UAV photogrammetry offers superior accessibility and cost-efficiency, as shown in comparative studies [7,14]. Workflow standardization, e.g., selection of image overlap and GCP strategy, remains an evolving area [6,8]. Furthermore, multi-epoch change detection imposes technical and analytical demands on processing pipelines [17], and the increasing scale of datasets strains existing computational resources [5].
Building upon these foundations, the present study focuses on the case of the Niestępowo steel truss viaduct located in Northern Poland, which represents a valuable example of early twentieth-century engineering heritage. Despite significant advances in UAV photogrammetry, the precise detection of small-scale structural defects such as micro-cracks, missing rivets, or local deformations remains challenging due to the homogeneous surface texture of renovated steel components. The aim of this work is to evaluate the geometric reliability and internal consistency of 3D models generated through UAV photogrammetry under real engineering conditions. The adopted methodology combines high-resolution imaging with precise geodetic surveying to assess how effectively the applied workflow captures subtle irregularities in bridge geometry. Previous studies have confirmed the suitability of UAV photogrammetry for the detailed documentation of bridge structures and its growing potential in infrastructure diagnostics [20,21,22,23].
In the present study, the proposed photogrammetric approach allowed the identification of potential defects through geometric irregularities and texture anomalies observable in dense point clouds and orthophotos [24,25]. However, the Niestępowo viaduct was intentionally selected as a recently renovated structure, characterized by a uniform surface color and texture, to test the robustness of the method under visually homogeneous conditions. Such surfaces are generally challenging for defect detection, as color-based cues such as rust stains, cracks, or discolorations are absent. The methodology, therefore, focused on evaluating geometric consistency and the potential to detect subtle local discontinuities, such as missing rivets or deformations in metallic elements [26,27,28].
The aim of this work is to expand the growing body of literature on the digital documentation of engineering heritage objects by providing methodological advances as well as actionable insights for practice and further research. In particular, this study shows how UAV photogrammetry, together with precise geodetic measurements and advanced 3D reconstruction algorithms, can be used to document a historic steel truss viaduct. Unlike previous studies focusing primarily on qualitative heritage documentation, this work integrates UAV photogrammetry with precise geodetic control and comparative reconstruction parameters to quantitatively assess modeling accuracy in a complex steel truss viaduct. The comparison of different platforms and computational parameters contributes to a deeper understanding of how modeling accuracy is affected by data acquisition and processing strategies. From a scientific perspective, the research also advances knowledge on the optimization of tie-point distribution in repetitive structural geometries—an issue well recognized in SfM algorithms [10,11].

2. Materials and Methods

2.1. Bridge Description

The bridge under consideration was selected due to the favorable topographic conditions for conducting a photogrammetric survey, the availability of archival documentation, and the historical significance of the span. Furthermore, an additional factor justifying the selection of the analyzed structure for the study was the uniform coating applied over the entire surface, free from contaminants (such as corrosion or graffiti), and finished in a non-reflective color, minimizing light scattering. Notably, the homogeneous surface characteristics of the bridge posed an additional challenge for photogrammetric reconstruction, as such textures provide fewer distinctive features for tie-point identification. This aspect allowed the assessment of the robustness of the proposed UAV-based documentation workflow under visually demanding conditions.
The first crossing over the Radunia River was built in 1911 as part of the construction of the Kokoszki–Stara Piła railway line (Figure 1). However, the historic span was destroyed during World War II. The bridge was rebuilt in 1952. The facility is a living witness to the local history of engineering reconstructions, thus constituting an important element of the local cultural landscape. In its current state, the structure has undergone renovation completed in 2023 as part of the Kartuzy Bypass project, an investment aimed at improving passenger transport in the Pomerania region (Figure 2). The renovation works included strengthening the existing structure, replacing the track deck, and cleaning, anti-corrosion protection, and painting of the steel surfaces [29].
The bridge crosses the Radunia River and a municipal road. In the setting of the structure, there are arable fields and wastelands. The load-bearing system consists of a single-span steel truss with a top deck, following the static scheme of a simply supported beam. The total width of the span is 5.70 m, and the total length of the structure is 59.00 m. The truss girders have a theoretical span of 58.00 m. The upper and lower chords of the trusses are connected by diagonals and verticals arranged in a “W” configuration. The main load-bearing element of the structure is the upper chord, which supports the deck structure. The truss girder height at mid-span (measured at the chord axes) is 8.04 m. The truss was constructed from plate girder sections connected together with rivets. The upper chords are open box sections composed of L-profiles and plates. The verticals and diagonals were made from rolled I-beams reinforced with riveted plate overlays. The cross-sections of the vertical and horizontal bracings consist of L-profiles connected together.
The railway track deck was constructed as an open deck system. The structure is supported on monolithic abutments. The steel superstructure of the bridge was made of St3SX steel (PN-88/H-84020), while the deck structure and the reinforcements added during the renovation in the form of overlays were made using S355 steel (PN-EN 1993-2:2006). The geometry of the truss girder and the cross-section are shown in Figure 3.
The bridge’s configuration and material characteristics provide an appropriate experimental framework for assessing the accuracy and reliability of UAV photogrammetry in the documentation of historic engineering structures.

2.2. Theoretical Foundations of Photogrammetry

Photogrammetry is a scientific and technical discipline concerned with the derivation of precise quantitative information about physical objects and the environment through the process of recording, measuring, and interpreting photographic images and other forms of electromagnetic radiation [31,32,33].
In engineering applications, the primary objective of photogrammetry is to reconstruct the three-dimensional geometry of an object from a series of photographs. This process is fundamentally based on triangulation, a principle that utilizes multiple images of an object captured from different viewpoints to determine the 3D position of corresponding points. Modern photogrammetry employs digital sensors, including those integrated into Unmanned Aerial Vehicles (UAVs), in conjunction with advanced computational methods to generate accurate and high-precision 3D models. This process involves a structured workflow, encompassing key stages such as image acquisition, camera calibration, the identification of homologous points across multiple images, and the subsequent reconstruction of the spatial model.
A fundamental aspect of photogrammetry involves the geometric relationship that establishes a correspondence between two-dimensional (2D) image coordinates and three-dimensional (3D) spatial coordinates. This relationship is governed by the collinearity equation, which mathematically describes the projection of a 3D object point onto a 2D image plane using a rotation matrix and a translation vector to define the camera’s pose.
$$
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
=
\begin{bmatrix}
R_{11} & R_{12} & R_{13} \\
R_{21} & R_{22} & R_{23} \\
R_{31} & R_{32} & R_{33}
\end{bmatrix}
\begin{bmatrix} X - X_S \\ Y - Y_S \\ Z - Z_S \end{bmatrix}
+
\begin{bmatrix} f \\ 0 \\ 0 \end{bmatrix}
$$
The variables in the equation are defined as follows:
  • f: The focal length represents the distance from the camera’s perspective center to the image plane.
  • $(X_S, Y_S, Z_S)$: The coordinates of the camera’s perspective center in the object coordinate system, defining the camera’s position in 3D space.
  • R: The rotation matrix, a 3 × 3 matrix that specifies the camera’s orientation or attitude relative to the object coordinate system.
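For illustration, the short sketch below evaluates this projection numerically for a single object point, using the classical collinearity form with perspective division along the optical axis. The rotation matrix, camera position, and focal length are hypothetical values chosen only for the example.

```python
import numpy as np

def collinearity_project(X, X_S, R, f):
    """Project an object point X into image coordinates (collinearity model).

    X   : 3D point in the object coordinate system [m]
    X_S : perspective center of the camera in object coordinates [m]
    R   : 3x3 rotation matrix (object-to-camera orientation)
    f   : focal length [mm]
    """
    Xc = R @ (X - X_S)              # transform into the camera coordinate system
    x = -f * Xc[0] / Xc[2]          # perspective division onto the image plane
    y = -f * Xc[1] / Xc[2]
    return np.array([x, y])

# Hypothetical nadir view: camera 30 m above the deck, no rotation, f = 8.8 mm
R = np.eye(3)
X_S = np.array([0.0, 0.0, 30.0])    # perspective center [m]
X = np.array([1.0, 2.0, 0.0])       # object point on the deck [m]
print(collinearity_project(X, X_S, R, f=8.8))   # image coordinates [mm]
```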
Digital photogrammetry relies on precise geometric transformations. As illustrated in Figure 4, the process maps a physical object point, P, defined within the object coordinate system [X, Y, Z], onto a corresponding image point, P′, located in the image (or sensor) coordinate system $[X_g, Y_g]$. This mapping is defined by the projection center O and the camera coordinate system $[X_c, Y_c, Z_c]$. The point where the camera’s optical axis intersects the image plane is known as the principal point, H [34].
To move from a single 2D image plane to a fully determined 3D model, stereoscopic photogrammetry is employed, utilizing at least two images captured from different perspectives. The fundamental principle governing this 3D reconstruction is epipolar geometry, which significantly reduces the search space for corresponding points between two images [34]. As detailed in Figure 5 and Figure 6, the key benefit of epipolar geometry is that the search for a corresponding image point (P″) is constrained to a single line (l″(P)), rather than the entire image plane, greatly enhancing the efficiency of image matching [34].
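As a minimal numerical illustration of this constraint, the sketch below computes the epipolar line l″(P) in the second image for a point observed in the first image, given a fundamental matrix F. The matrix used here is a hypothetical rank-2 example; in practice F is estimated from matched tie points (e.g., with OpenCV's cv2.findFundamentalMat).

```python
import numpy as np

def epipolar_line(F, p1):
    """Coefficients (a, b, c) of the epipolar line a*x + b*y + c = 0 in image 2
    corresponding to the pixel p1 = (x, y) observed in image 1."""
    x1 = np.array([p1[0], p1[1], 1.0])        # homogeneous image coordinates
    line = F @ x1                             # l'' = F * p'
    return line / np.linalg.norm(line[:2])    # normalize so (a, b) is a unit normal

# Hypothetical skew-symmetric (rank-2) fundamental matrix of a translating camera pair
F = np.array([[ 0.0,  -1e-6,  2e-3],
              [ 1e-6,  0.0,  -3e-3],
              [-2e-3,  3e-3,  0.0]])
a, b, c = epipolar_line(F, (1520.0, 980.0))
print(f"search line in image 2: {a:.4f}*x + {b:.4f}*y + {c:.2f} = 0")
```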
A variety of data processing methods are employed in photogrammetry. The state-of-the-art technique is Bundle Adjustment, which simultaneously refines the network of camera poses (position and orientation) and the 3D coordinates of all reconstructed points. By optimizing all parameters in a single, large-scale least-squares adjustment, this method provides a statistically rigorous estimation of the results’ accuracy, a critical element for high-precision applications [16].
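As a minimal illustration of this principle, the following sketch jointly refines two simplified camera positions and six synthetic object points by minimizing the total reprojection error with a general least-squares solver. Camera rotations and interior orientation are deliberately omitted, and the data are synthetic rather than taken from the bridge survey.

```python
import numpy as np
from scipy.optimize import least_squares

f = 2000.0                                             # focal length [px], assumed
pts_true = np.array([[0., 0., 0.], [5., 0., 0.], [0., 5., 0.],
                     [5., 5., 0.], [2., 3., 1.], [1., 4., 0.5]])   # object points [m]
cams_true = np.array([[0., 0., 30.], [4., 0., 30.]])   # camera centers, looking down

def project(pts, cam):
    d = pts - cam                                      # camera-to-point vectors
    return f * d[:, :2] / d[:, 2:3]                    # pinhole projection (R = identity)

obs = np.vstack([project(pts_true, c) for c in cams_true])   # "measured" image points

def residuals(params):
    cams = params[:6].reshape(2, 3)                    # camera positions being refined
    pts = params[6:].reshape(-1, 3)                    # object points being refined
    proj = np.vstack([project(pts, c) for c in cams])
    return (proj - obs).ravel()                        # reprojection errors

# Start from perturbed unknowns; the joint adjustment drives the residuals back toward zero
rng = np.random.default_rng(0)
x0 = np.concatenate([cams_true.ravel(), pts_true.ravel()]) + 0.05 * rng.standard_normal(24)
sol = least_squares(residuals, x0)
print("final reprojection RMS [px]:", np.sqrt(np.mean(sol.fun ** 2)))
```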
In addition to its traditional engineering and surveying applications, photogrammetry is increasingly recognized as a state-of-the-art, non-contact, and non-destructive diagnostic tool within the field of artwork conservation. The integration of high computing power for photogrammetric processing allows conservators to gather unprecedented detail about the artwork, which is essential for preserving human culture. Specifically, the technique is leveraged to generate highly accurate 3D models of paintings, murals, and artifacts, providing critical data on surface topology—including the precise location and extent of deformations, cracks, or material deficiencies. The resulting models serve as an indispensable tool for pre- and post-intervention documentation, supporting the diagnostic phase and enabling conservators to select and apply the most appropriate restoration and preservation procedures with high confidence and precision [35].

2.3. Advantages and Limitations of Photogrammetry

Photogrammetry is a recognized and widely adopted measurement technology for the documentation, analysis, and reconstruction of objects across diverse fields, including cultural heritage and civil engineering [36,37,38,39,40].
The primary advantages of photogrammetry can be summarized as follows (Figure 7):
  • Good geometric consistency: The accuracy of UAV photogrammetry is generally sufficient for documentation and monitoring purposes, although it is lower than that achieved with TLS. The method facilitates the generation of precise 3D models with high geometric fidelity, which is particularly critical for the detailed documentation of historical sites and artifacts [41].
  • Cost-Effectiveness: In comparison to active remote sensing methods like LiDAR, photogrammetry is significantly more cost-effective. The required equipment, primarily high-resolution digital cameras and consumer-grade UAVs, involves substantially lower initial investment and operating expenses [7].
  • Non-Invasiveness: As a non-contact measurement technique, photogrammetry is an ideal solution for documenting fragile or delicate objects without the risk of physical damage [38].
  • Versatility and Scalability: The technology is highly adaptable, enabling its application to objects of varying scales, from small artifacts to large-scale architectural structures. The integration of Unmanned Aerial Vehicles (UAVs) further extends the utility of this technology by allowing access to hard-to-reach areas [37,42].
  • Integration: Photogrammetric data can be seamlessly integrated with other technologies, such as 3D printing, to create physical replicas for educational purposes or conservation efforts [43].
Despite its numerous benefits, photogrammetry has several limitations that can affect the quality and accuracy of the final results [34].
  • Lighting and Image Quality: The technique relies heavily on clear images with good lighting and sharp contrast. Poor lighting, reflections, or motion blur can introduce significant errors and compromise the accuracy of the 3D model.
  • Calibration: For precise spatial measurements, the cameras used must be accurately calibrated. Errors in camera calibration can lead to inaccuracies in the final 3D model, as the geometric relationship between the camera and the object is incorrectly defined.
  • Image Geometry: The process can be challenged by complex objects or environments with hidden or obscured areas. Difficulty in acquiring images from appropriate perspectives can result in incomplete or distorted models.
  • Computational Intensity: The processing of large datasets of high-resolution images is computationally demanding. It requires significant processing power and time, which can be a major constraint for large-scale projects.
  • Object Properties: The method is less effective on objects with smooth, monochrome, or transparent surfaces (such as polished metal or glass). These surfaces lack the distinct feature points necessary for accurate image matching, which is a fundamental step in the photogrammetry workflow.
  • Regulatory and Legal Constraints: The increasing reliance on Unmanned Aerial Vehicles (UAVs) for aerial photogrammetry introduces significant limitations derived from national and international flight regulations. Operators are subject to mandatory registration, specific pilot qualifications, and strict airspace restrictions defined by relevant air navigation services. Non-compliance with these rules, including mandatory Civil Liability (OC) insurance and respecting designated geographical zones, can lead to substantial financial penalties [44].
To enhance clarity and provide a concise overview of the study’s workflow, the overall research methodology is presented in a flowchart (Figure 8). This diagram illustrates the sequential stages, from the initial data acquisition using UAV photogrammetry to the final data processing and analysis steps.
It is important to note that ongoing technological advancements, particularly in digital photogrammetry and computational power, are continuously helping to mitigate many of these limitations.

2.4. Geodetic Control Network Establishment

The first stage of the geodetic work involved establishing a measurement control network in the vicinity of the bridge using a high-precision Leica GS07 GNSS receiver (Leica Geosystems, St. Gallen, Switzerland). Six control points were stabilized on wooden stakes and designated with identifiers ranging from 1001 to 1006. These points were selected to ensure good visibility both to the structure and between one another, allowing their subsequent use as a stable reference base. By employing satellite positioning techniques, all control points were tied to the national horizontal coordinate system PL-2000 and the vertical reference system PL-EVRF2007-NH, thus providing a reliable geodetic framework for the entire project.
After the external control network had been firmly established, the electronic total station Leica TS03 was set up and aligned. The accuracy of the total station measurements, depending on the current setup and orientation, is approximately 5 mm for linear (horizontal) measurements and 3 mm for height (vertical) measurements. Two convenient instrument positions were chosen—on the northern and southern sides of the bridge—offering full visibility to both the control points and the surveyed structure. Precise leveling of the instrument, along with accurate orientation measurements, ensured that the measurement setup was fully integrated with the previously established control network.
The main stage of the survey focused on the detailed measurement of characteristic points on the bridge structure. From two independent total station positions, the spatial coordinates of all significant and easily identifiable elements were recorded, facilitating their subsequent recognition and marking in the photographs. These elements included rivet heads, joints between structural elements, truss nodes, and pre-attached photogrammetric targets. The targets were printed on A4 sheets and fixed in accessible locations. This approach enabled precise measurement of the targets and their accurate identification in photogrammetry software such as Bentley ContextCapture 24.1.6.2180 [39] and Agisoft Metashape 2.1.2 [39]. Each point’s spatial position was recorded multiple times to allow for accuracy verification and error elimination arising from minor misalignment of the total station’s crosshairs. In total, 41 points were measured and assigned coordinates in the PL-2000 horizontal coordinate system and PL-EVRF2007-NH vertical reference system. These points were labeled from 1 to 41 and were instrumental in orienting all photographs within 3D space.
For the georeferencing of the photogrammetric models, 6 Ground Control Points (GCPs) were stabilized for the primary GNSS network, alongside 41 measurement points established directly on the object. This extensive number of control points, strategically distributed both around the object and across its surfaces, was selected to ensure a robust and comprehensive scale definition and to effectively eliminate geometric distortions that can occur in large-scale or complex structures. The objective was to provide sufficient geometric constraints for the photogrammetric block’s Bundle Block Adjustment (BBA), which is crucial for achieving high geometric fidelity across the final 3D model.
The rigorous field procedures involved multiple readings for each point to verify measurement precision. The quality of the established network was confirmed by low resultant errors. The mean errors achieved were very low, specifically approximately ±2 mm for linear measurements and ±5 mm for height measurements within the GNSS and tachymetric measurements. This low level of uncertainty in the control point coordinates is vital, as these precise coordinates were subsequently used to georeference the final 3D models within the processing software, namely Bentley ContextCapture 24.1.6.2180 and Agisoft Metashape 2.1.2.
The appropriate number and optimal distribution of control points are fundamental to ensuring high accuracy of the final 3D model. The more points are precisely surveyed and evenly spread across the structure, the better the model’s distortion error is corrected.
Additionally, the Leica GS07 GNSS receiver was used to survey points located at the top of the embankment and on the structure itself, specifically in locations that were easily accessible with a surveying rod. In this phase, photogrammetric targets were distributed on the ground to ensure even coverage of the entire surveyed area. A total of 7 such ground control points were established and labeled FOT1 through FOT7. The mean errors for GNSS measurements are 2 cm for linear (horizontal) measurements and 3 cm for height (vertical) measurements. The accuracy of GNSS measurements primarily depends on satellite geometry (GDOP), atmospheric disturbances, and receiver quality.
In parallel with the measurement work, situational sketches were produced manually. These drawings included the location of each measured point, its identifier, and a brief description. This documentation proved invaluable both during the fieldwork and in the later data processing phase, enabling quick and unambiguous identification of individual structural elements.
The collected survey data—point coordinates combined with their corresponding sketch documentation—provided a solid foundation for further design and analytical work (see Appendix A). The precise geometric representation of the structure, referenced to the national spatial reference systems, constitutes a key element in the process of creating a metrically accurate 3D model of the historic structure.
A top view of the control points measured with a GNSS receiver is shown in Figure 9; north and south side sketches of the points measured with a total station from two setups are shown in Figure 10 and Figure 11. Point 32 in Figure 10 is shown outside the bridge structure because the corresponding photogrammetric target was placed on an elevated railway sign adjacent to the viaduct. This element was not included in the schematic drawing due to generalization and the need to maintain overall clarity.

2.5. UAV Data Acquisition and Processing

A simultaneous photogrammetric flight was conducted using three unmanned aerial platforms: DJI Phantom 4 Pro, DJI Mavic 2 Pro, and DJI Mini 3 Pro (Figure 12). All three drones operated concurrently. This decision was primarily operational, driven by two key constraints: First, the time allocated for the field photogrammetric work was severely limited due to the planned temporary cessation of railway traffic on the bridge. To meet this tight deadline and minimize disruption, simultaneous flights were necessary. Second, the use of three separate UAVs was mandated by the limited battery life and restricted number of available batteries for each model, making it impossible to complete the extensive data acquisition within the required single-day timeframe using only one drone. In addition, the DJI Mini 3 Pro was used because its camera can be tilted upward by up to 60°, allowing better coverage of the lower surfaces of the bridge structure, which would otherwise be difficult to capture with standard downward-facing cameras.
The DJI Phantom 4 Pro has a take-off weight of 1388 g. It is equipped with a camera featuring a 1-inch CMOS sensor and an image resolution of 20 megapixels (MP). The drone offers an adjustable aperture ranging from f/2.8 to f/11 and includes both a mechanical and electronic shutter. Its maximum flight time is 30 min, it has 5-directional obstacle detection, and its maximum transmission range (CE standard) is 3.5 km. The DJI Mavic 2 Pro is lighter, weighing 907 g, and utilizes a 1-inch CMOS (Hasselblad) sensor, which also provides an image resolution of 20 MP. Similar to the Phantom 4 Pro, it has an adjustable aperture from f/2.8 to f/11. Its maximum flight time is 31 min, it features a more advanced 6-directional obstacle detection system, and its transmission range (CE) is significantly greater at 8 km. It uses an electronic shutter only. The DJI Mini 3 Pro is the lightest model, with a take-off weight of just 249 g. Despite its smaller size, it offers the highest image resolution—48 MP—thanks to its 1/1.3-inch CMOS sensor. It features a fixed, bright aperture of f/1.7 and uses an electronic shutter. It provides the longest maximum flight time at 34 min and achieves a transmission range (CE) of 8 km. Obstacle detection is 3-directional. All these specifications are presented in Table 1. The provided table details key photogrammetric parameters for three UAV models: DJI Phantom 4 Pro, DJI Mavic 2 Pro, and DJI Mini 3 Pro, alongside their calculated Ground Sample Distance (GSD) in cm/px. The GSD, a measure of ground resolution, is universally calculated using the formula:
$$
\mathrm{GSD} = \frac{\text{pixel size} \cdot H}{f}
$$
where H is the flight altitude and f is the focal length.
Crucially, this GSD calculation assumes a pure nadir view (camera looking straight down, perpendicular to the ground). This is the standard scenario for photogrammetric mapping where distortion must be minimized; GSD increases (resolution worsens) as the camera angle deviates from nadir. The consistency of the presented GSD values (0.82 cm/px, 0.70 cm/px, and 1.09 cm/px) confirms they were all derived from a constant, typical photogrammetric flight altitude of 30 m. The DJI Mavic 2 Pro achieves the lowest and thus best GSD of 0.70 cm/px, primarily due to its 10.3 mm focal length and large 13.2 mm sensor, offering the highest ground resolution. The DJI Phantom 4 Pro has a slightly worse GSD of 0.82 cm/px because of its shorter 8.8 mm focal length. The DJI Mini 3 Pro yields the largest, and thus poorest, GSD of 1.09 cm/px, resulting from its smaller 9.8 mm sensor and shortest 6.7 mm focal length, indicating the lowest spatial resolution among the three models under ideal nadir flight conditions at 30 m.
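For transparency, the sketch below reproduces this GSD calculation for the three platforms at the 30 m flight altitude. The image widths are assumptions based on the manufacturers' specifications (5472 px for the two 1-inch sensors and 4032 px, i.e., the default 12 MP output of the Mini 3 Pro); with these values the reported GSDs are recovered.

```python
H = 30.0  # flight altitude [m]

uavs = {
    # name:              (sensor width [mm], image width [px], focal length [mm])
    "DJI Phantom 4 Pro": (13.2, 5472, 8.8),
    "DJI Mavic 2 Pro":   (13.2, 5472, 10.3),
    "DJI Mini 3 Pro":    (9.8,  4032, 6.7),
}

for name, (sensor_w, image_w, f) in uavs.items():
    pixel_size = sensor_w / image_w                 # physical pixel size [mm]
    gsd_cm = pixel_size * (H * 1000) / f / 10       # GSD = p*H/f [mm], converted to cm/px
    print(f"{name:18s} GSD = {gsd_cm:.2f} cm/px")   # 0.82, 0.70 and 1.09 cm/px
```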
UAV flights were conducted in two vertical planes oriented perpendicularly to each other: parallel to the railway track, on both sides of the structure, and perpendicularly, through crosswise passes. Visual data acquisition also included flights at varying angles, with the camera oriented both perpendicularly to the object and with approximately 45-degree rotations to the left and right. The top-down (upper grid) flights were carried out at a constant altitude of 30 m above the object, while the lateral (vertical) flights were conducted at a variable height ranging from 3 to 7 m due to terrain obstacles. This flight pattern and the variety of camera angles are crucial as they enable complete and detailed documentation of the structure. This is primarily essential for creating precise 3D models using photogrammetry, since for the software to create an accurate three-dimensional model, it needs data from multiple perspectives, including side and top views, as well as the oblique shots provided by the 45-degree camera rotations. Without this variety of shots, the 3D model would have gaps and distortions. Additionally, these flights are critical for a detailed visual inspection as they allow for a thorough examination of every part of the structure, even those difficult to access, and the change in camera angle helps detect minor damage, such as cracks or rust, that may be invisible from a perpendicular perspective.
Additionally, to obtain comprehensive documentation, automated flights were performed above the structure along a pre-defined flight path arranged in a grid pattern, in accordance with standard photogrammetric procedures. The remaining lateral flights were conducted manually (Figure 13). To ensure the accuracy and proper real-world positioning of the resulting 3D model, the UAV images were linked to a network of ground control points. This was achieved through a process called georeferencing, which assigns precise coordinates to the 3D model. First, the ground control points were accurately measured on-site using a total station to obtain their exact coordinates in the PL-2000 coordinate system (Section 2.4). The drone, meanwhile, recorded the approximate geographical coordinates of each image. By identifying these same points in both the survey data and the drone’s images, a coordinate transformation was performed in the processing software. This process aligned the geometrically correct model with the precise ground control points, resulting in a highly accurate final product.
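As an illustration of this georeferencing step, the sketch below estimates a similarity (Helmert-type) transformation, i.e., scale, rotation, and translation, from point correspondences using the standard SVD-based solution. The coordinates are made up for the example and do not correspond to the actual PL-2000 control points.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R and translation t such that dst ~ s * R @ src + t.
    src, dst: (N, 3) arrays of corresponding points (model vs. surveyed GCP coordinates)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                        # centered coordinates
    U, S, Vt = np.linalg.svd(B.T @ A)                    # cross-covariance decomposition
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(A ** 2)
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical check: a model cloud rotated, scaled and shifted relative to the GCP frame
rng = np.random.default_rng(1)
gcp = rng.uniform(0.0, 60.0, size=(6, 3))                # six "surveyed" control points [m]
theta = np.radians(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
model = (gcp - np.array([10.0, 20.0, 0.0])) @ R_true.T / 1.002
s, R, t = similarity_transform(model, gcp)
aligned = s * model @ R.T + t
print("residual RMS [m]:", np.sqrt(np.mean(np.sum((aligned - gcp) ** 2, axis=1))))
```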

2.6. Three-Dimensional Model Creation Methodology

Recent years have seen a significant increase in the application of photogrammetry, leading to the development of numerous software solutions. The selection of an appropriate package is critical, as it must be suited to the specific requirements of the project, such as the scale of the dataset, the required accuracy, and the object’s geometry, to efficiently achieve the desired outcomes. For this study, two software packages with different characteristics were chosen for comparison.
Bentley ContextCapture 24.1.6.2180 was selected for its capability to handle large volumes of imagery and generate high-resolution 3D models. Its automated processing workflows are designed for managing complex projects, making it suitable for large-scale engineering and documentation tasks.
For comparison, Agisoft Metashape 2.1.2 was chosen as a widely available and cost-effective solution. While it offers flexible parameter control for the users, creating an accurate model, especially of objects with repetitive geometric patterns (such as trusses), demands significant user expertise and processing time to manually correct potential automatic processing errors.
Both programs were deliberately selected to address complementary needs: Bentley ContextCapture 24.1.6.2180 efficiently processes large, relatively uniform datasets, whereas Agisoft Metashape 2.1.2 offers greater user control when handling complex, repetitive truss geometry. Using both packages in parallel provides internal cross-validation, enhancing the robustness of the workflow and increasing confidence in the accuracy and completeness of the digital record of the heritage asset.
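Processing of this kind can also be scripted. The following sketch outlines a typical Agisoft Metashape Python API sequence for such a dataset; it is illustrative only, the paths and quality settings are placeholders rather than the values used in this study, and the method names follow the 2.x API, which may change between releases.

```python
import glob
import Metashape  # Agisoft Metashape Professional Python module (requires a license)

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("uav_images/*.JPG"))       # import the UAV photographs

chunk.matchPhotos(downscale=1, generic_preselection=True)  # tie-point matching
chunk.alignCameras()                                       # aerotriangulation (sparse cloud)

# At this stage, markers would be placed on the surveyed photogrammetric targets and
# their PL-2000 / PL-EVRF2007-NH coordinates imported before re-optimizing the alignment.

chunk.buildDepthMaps(downscale=2)                    # dense image matching
chunk.buildPointCloud()                              # dense point cloud (2.x method name)
chunk.buildModel()                                   # 3D mesh reconstruction
chunk.buildTexture()                                 # photorealistic texture

doc.save("bridge_model.psx")
```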
The block diagram presented in Figure 14 shows a comprehensive methodology for the photogrammetric documentation of structures. The process begins with preliminary planning, including object selection, equipment preparation, and flight plan development. Fieldwork is then executed, involving UAV flights and the measurement of ground control points. The core of the workflow encompasses data processing in specialized software, leading to the generation of accurate 3D models through aerotriangulation and reconstruction. The procedure concludes with result verification, analysis, and the preparation of final documentation, underscoring the method’s significance for heritage protection.
The partial reliance on a manual process was a deliberate methodological choice, rather than a deficiency in algorithmic integration. The primary objective of this study was the quantitative testing and comparison of 3D modeling accuracy under challenging conditions, specifically the uniform steel surface and repetitive truss geometry, which pose a significant challenge to automated feature matching algorithms. The meticulous field documentation, which included manual placement of numerous marked targets and precise data processing across two distinct software platforms, ensured rigorous control over calibration and georeferencing. This level of control was paramount for achieving the required millimeter-level accuracy for documentation, in contrast to simpler automation. Nevertheless, subsequent research stages are planned to advance the methodology toward increased automation, including the algorithmic integration for structural damage detection and geometric analysis, thereby enhancing the efficiency and repeatability of the entire workflow for long-term monitoring purposes.

3. Results

3.1. Results of the Bentley Model

Photogrammetric 3D reconstruction of the Niestępowo Bridge performed in Bentley ContextCapture 24.1.6.2180 resulted in a high-resolution 3D model accurately representing the geometry of the structure (Figure 15). The model successfully captured both the overall truss configuration and the fine surface details, including riveted joints, deck plates, and texture variations in the materials, as shown in Figure 16, Figure 17, Figure 18 and Figure 19.
A total of 9274 photographs were used for model generation. These images were collectively acquired by all UAVs described in Section 2.5, each contributing to the dataset according to its assigned flight mission. The software automatically selected and aligned the images, identifying 53 tie points across the dataset. Their non-uniform spatial distribution, particularly in the mid-span region, led to minor local discrepancies in the alignment of the truss elements.
The final model achieved a high level of geometric completeness and surface continuity. The reconstruction was performed at the highest available quality settings, and the full processing time amounted to approximately 10 days on a high-performance workstation. Due to the sequential nature of the software’s workflow, any interruption or misalignment required restarting the entire reconstruction process.
Overall, the Bentley ContextCapture 24.1.6.2180 model provided a reliable and detailed digital representation of the bridge, serving as a reference for the subsequent comparative analysis presented in Section 4.1.

3.2. Results of the Agisoft Metashape Model

Despite locally visible inaccuracies, the 3D model generated in Agisoft Metashape 2.1.2 demonstrates high global accuracy and provides a good representation of details such as joints, gusset plates, and bridge fittings across the entire structure. The primary limitation in model generation was the insufficient number of tie points, particularly at mid-span, where the repetition of elements in the images was significant and the number of distinctive features available to be used as tie points was too low.
Agisoft Metashape 2.1.2 enables the generation of models with high global accuracy, as clearly illustrated in Figure 20, Figure 21 and Figure 22. Moreover, the model reproduces defects in the protective coating, cracks, and surface moisture on the wooden elements of the structure. However, achieving this level of global accuracy comes at the cost of high hardware requirements. The database containing the 3D model of a single span occupies 261 GB of disk space, and the computation of the point cloud and 3D model based on 11,212 photographs required 4 days of processing. In total, 12,500 aerial photographs captured during the measurement campaigns were imported into Agisoft Metashape 2.1.2 to ensure full coverage of the bridge structure and its surroundings. These images were acquired by all UAVs listed in Section 2.5, with each platform contributing according to its designated flight mission. From this dataset, 11,212 photographs were selected and processed to generate the final 3D model in Agisoft Metashape 2.1.2.
The dimensions obtained from the photogrammetric survey and the creation of the 3D model were extracted in the software with an accuracy of up to 1 cm, which allows for the comparison of the global geometry of the truss. The following measurements of the geometry of the bridge span in Niestępowo, derived from the developed 3D model, are presented in the images (Figure 23 and Figure 24).
A cross-section of the railway track was also generated as part of the rail gauge measurement. The measurement accuracy in the case of determining the width in the cross-section was 1 mm. The diagram below presents the cross-section of the railway track, including the measured rail gauge as well as the height of the bridge railing (Figure 25).

4. Discussion

4.1. Comparison of Photogrammetry 3D Models

This case study, focusing on the documentation of the historic viaduct in Niestępowo, underscores the vital role of digital preservation for endangered cultural heritage. For structures like this truss bridge, which often lack any original technical documentation, creating a precise record is an urgent and critical task. This project deliberately evaluated a methodology reliant solely on UAV technology to address this need. This case study is not only technical but also methodological, as it provides a replicable workflow for documenting other endangered truss bridges.
To compare both models obtained from the photogrammetric survey processing, Table 2 presents the basic parameters of the generated 3D models, along with a tabular summary of the strengths and weaknesses of using both software solutions.
Both image sets originated from the same flight mission, with the larger dataset used in Agisoft Metashape 2.1.2 encompassing all photographs utilized in Bentley ContextCapture 24.1.6.2180. The selection of differently sized datasets was a direct result of software-specific processing characteristics and did not affect the documented conditions or the geometry of the analyzed structure, thereby ensuring the credibility of the comparative results. As clearly illustrated in Table 2, a significant difference is evident in the number of input images used by each software. It is crucial to note that the same master set of aerial photographs, captured during a single flight with identical parameters, was used as input for both software packages. The discrepancy in the number of images ultimately processed stemmed from the inherent characteristics of the software. Bentley ContextCapture 24.1.6.2180 demonstrates a higher sensitivity to photographs acquired under suboptimal conditions, such as harsh lighting. Consequently, this software automatically excluded a subset of approximately 1938 images during the reconstruction process, which were successfully utilized by Agisoft Metashape 2.1.2. Despite importing the entire dataset, ContextCapture 24.1.6.2180 independently determined to omit the images it deemed problematic for generating a reliable 3D model. It should be emphasized that the study’s objective was not a strict, like-for-like comparison of the software using an identical data subset. Rather, the aim was to evaluate the performance of each program under realistic conditions, reflecting typical engineering documentation workflows where software-specific pre-processing and data handling are integral to the results.
As shown, in the case of the model developed in Agisoft, the computation and 3D model generation process is significantly faster than in Bentley ContextCapture; however, Agisoft requires greater disk space utilization. Another notable difference lies in the higher capability for intermediate verification during model generation in Agisoft. The user of this software can assess the accuracy of the developing model already at the image alignment stage, whereas in Bentley ContextCapture 24.1.6.2180, such verification is possible only after the full 3D model has been generated.
Both Bentley ContextCapture 24.1.6.2180 and Agisoft Metashape 2.1.2 proved to be effective tools for cultural heritage documentation. However, their performance in defect identification differed. Bentley ContextCapture 24.1.6.2180 offered a more automated workflow, suitable for the reconstruction of small object anomalies such as missing rivets or minor deformations; however, it rendered the overall object model with less detail. In contrast, Agisoft Metashape 2.1.2 provided greater control over point cloud density and filtering settings, resulting in a superior representation of the entire structure’s geometry. In terms of post-processing, Agisoft Metashape 2.1.2 delivered better model consistency for visualization purposes and also achieved more favorable results for metric analyses. Conversely, Bentley ContextCapture 24.1.6.2180 was more advantageous for analyzing individual details, thereby enabling better damage detection. The selection of a well-maintained and recently renovated structure was deliberate. It facilitated an assessment of the photogrammetric method’s capability to operate under visually uniform conditions, where typical damage indicators are absent. This approach strengthens the conclusions by demonstrating the method’s applicability not only for defect detection but also for preventive documentation and condition monitoring of historical engineering structures.
Both models guarantee high accuracy of results. Although both 3D models can be used for inventory purposes, the differences between them are noticeable.
The first visible difference can be observed in the generated texture on the surface of the railway ballast as well as on the checkered plates serving as fire protection plates along the entire length of the structure. In the case of the 3D model generated with Bentley ContextCapture 24.1.6.2180 (b), the texture and details are more distinct than in the model developed in Agisoft Metashape 2.1.2 (a). Figure 26 shows that Bentley ContextCapture 24.1.6.2180 preserves surface texture more effectively than Agisoft Metashape 2.1.2.
The most significant difference can be seen when comparing the entire solid structure. Bentley ContextCapture 24.1.6.2180 (d) showed considerable difficulties in creating the point cloud in the middle part of the span, where, in both cases, there are gaps in point cloud generation. However, in the case of Agisoft Metashape 2.1.2 (c), the point cloud creation in the central part of the bridge resulted in smaller discrepancies than with Bentley ContextCapture. This effect results from the higher density of the point cloud, leading to a more accurate and detailed reconstruction achieved with Agisoft Metashape 2.1.2 (Figure 27).
Table 3 presents a comparison of the basic structural dimensions generated in both programs against the dimensions given in the archival documentation of the object. Although differences between the generated models are visible, indicating an advantage of Agisoft Metashape 2.1.2, both programs guarantee high accuracy of the 3D model.
As can be seen in Table 3, the relative error between all compared dimensions is below 2%. The largest observable deviations, unfavorable to Bentley ContextCapture, occur in the total span dimension of the 3D model. This difference results from the variation in the number and uneven distribution of tie points, as shown in Table 3, where the quantity of tie points is limited by the software itself.
The results shown in Table 3 demonstrate the great potential of close-range photogrammetry for the inventory of historical bridge structures. This method also enables systematic monitoring and archiving of structural changes over the entire service life, while providing the capability to analyze the progression of deterioration processes (including the propagation of corrosion products, the rate of structural degradation, and the emergence of visible defects indicative of more severe failures).
The incomplete coverage of the 3D model in the lower and middle sections, as well as the interior of the truss structure, resulted exclusively from site-specific constraints and, most importantly, the safety regulations governing Unmanned Aerial Vehicle (UAV) operations. The reconstruction of these areas using UAVs presents a significant challenge, which was corroborated in the present study by the suboptimal modeling of these parts. As expected, this limitation stemmed primarily from restricted access to the structure’s underside, for instance, due to flowing water beneath it. This prevented the comprehensive data collection of structural elements located directly above the water surface.
During the measurement campaign, it was physically impossible to perform flight passes and capture imagery of the underside of the span or the interior of the truss structure without violating flight safety protocols. Consequently, imagery of these areas could not be acquired, precluding their representation in the 3D model. To enhance model quality, the acquisition of supplemental terrestrial imagery was considered. However, this methodology was deemed insufficient for capturing structural sections located directly above the water, as they could not be fully documented from the riverbank, and thus it does not constitute a universal solution for every scenario. Consequently, terrestrial photogrammetry was dismissed for this investigation. This deliberate decision facilitates the evaluation of a 3D model generated exclusively from aerial imagery—a more versatile method, albeit one inherently susceptible to the specific challenges we sought to analyze. The observed deficiencies in the lower and central parts of the model were, therefore, an anticipated outcome of the selected methodology. It should be emphasized that the primary objective of this study was not to deliver a complete geometric inventory of the structure, but to assess and compare the modeling accuracy of two photogrammetric software platforms—Bentley ContextCapture 24.1.6.2180 and Agisoft Metashape 2.1.2—using a realistic data acquisition scenario typical of engineering documentation practice. Despite the mentioned limitations, the obtained study area (the upper deck and external truss surfaces) provided sufficient and representative data to achieve the stated research objectives.
Additionally, no significant material or structural defects were observed on the Niestępowo railway viaduct, as the structure had undergone comprehensive renovation in 2023, including cleaning, anti-corrosion protection, and repainting of the steel surfaces. The photogrammetric models revealed only minor geometric inconsistencies, such as local deviations between riveted joints or slight alignment irregularities along the truss members.
Due to the necessity of performing a quantitative analysis of model consistency and considering the initial, significant discrepancies identified in the global georeferencing, the comparison of the photogrammetric point clouds (Agisoft and Bentley) was conducted locally, on selected and best-covered fragments of the bridge structure. This approach allowed the focus to remain on evaluating the relative geometric consistency of the models rather than their absolute global positioning accuracy.
For this purpose, the Cloud-to-Cloud (C2C) Distance tool in CloudCompare software was utilized. To minimize the impact of initial alignment errors, a local affine transformation was performed on the Bentley model’s point cloud against the Agisoft cloud for each compared segment. This process led to a significant reduction in error, confirming the high local geometric agreement between the models.
The results of the C2C analysis are presented as color-coded deviation maps (Figure 28, Figure 29 and Figure 30). The distance scale was configured from 0.0 m (blue), representing perfect agreement, to 0.05 m (red), representing deviations of 5 cm or more. These maps visibly demonstrate the high level of fit on structural elements such as truss joints and the bottom chord, while highlighting areas of noise and extraneous elements (red/yellow) in the Bentley model as discussed in the text.
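An equivalent cloud-to-cloud comparison can also be reproduced outside CloudCompare. The sketch below computes nearest-neighbor C2C distances with a k-d tree; the two arrays are synthetic stand-ins for the exported and locally aligned Agisoft and Bentley clouds, not the actual survey data.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(reference, compared):
    """Nearest-neighbor (C2C) distance from every point of `compared`
    to the `reference` cloud, as visualized in the deviation maps."""
    tree = cKDTree(reference)
    distances, _ = tree.query(compared, k=1)
    return distances

# Synthetic stand-ins (in practice, load the exported point clouds instead)
rng = np.random.default_rng(42)
agisoft_cloud = rng.uniform(0.0, 1.0, size=(50_000, 3))
bentley_cloud = agisoft_cloud[:30_000] + rng.normal(0.0, 0.004, size=(30_000, 3))  # ~4 mm noise

d = cloud_to_cloud_distances(agisoft_cloud, bentley_cloud)
print(f"mean C2C: {d.mean() * 100:.2f} cm, 95th percentile: {np.percentile(d, 95) * 100:.2f} cm")
# Points with d > 0.05 m would appear red on the 0.00-0.05 m color scale of Figures 28-30.
```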
Future work will focus on completing the documentation of the bridge’s lower and interior sections using complementary techniques, such as oblique UAV imaging and Terrestrial Laser Scanning (TLS), thereby enabling comprehensive coverage of the entire structure.

4.2. Challenges of Photogrammetric Post-Processing

A key planning consideration was the selection of photogrammetric software. Programs like Bentley ContextCapture 24.1.6.2180 and Agisoft Metashape 2.1.2 employ distinct modeling methodologies. The former excels with large-scale projects involving repetitive structures, while the latter is more adept at processing complex, atypical objects that require manual fine-tuning. The choice also hinges on user expertise. Agisoft Metashape 2.1.2 demands a more hands-on approach and a deeper understanding of the processing steps, but this closer oversight allows for earlier problem identification. In contrast, Bentley ContextCapture 24.1.6.2180 offers a more intuitive interface with integrated utilities to expedite the workflow. While both programs successfully generated the 3D model, Agisoft Metashape 2.1.2 produced a more accurate overall representation of the structure. The dimensions (as theoretical documentation) applied to the comparison were derived from the up-to-date inventory documentation and the detailed renovation design. These materials can be considered a reliable source, as the inventory documentation was prepared on the basis of field measurements and on-site inspection, while the subsequent renovation of the structure was executed in accordance with this renovation design. Although no TLS (Terrestrial Laser Scanning) measurements were performed, the comparison with the data from this documentation accurately reflects the actual condition of the structure, thus allowing the inventory and design documentation to be used as a reliable reference for evaluating model consistency.
The mission to comprehensively document the viaduct produced a massive dataset of 10,272 images, which presented significant computational challenges; the aerotriangulation process alone required at least 24 h on high-performance hardware. To manage this, a rigorous data reduction strategy was implemented, discarding out-of-focus, poorly exposed, or off-target images. This quality-based selection refined the dataset to 9274 images. Because the retained set still far exceeded the minimum required for adequate coverage, the roughly 10% reduction did not compromise the final model’s accuracy, yet it cut the processing time to 16 h, highlighting the importance of strategic data collection for large-scale projects.
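Part of this quality-based selection can be scripted. The sketch below is a simple illustration assuming OpenCV and a hypothetical image folder (the thresholds are placeholders, not the criteria applied in this project); it flags images whose variance-of-Laplacian sharpness or mean brightness falls outside acceptable bounds, so that they can be reviewed and, if necessary, discarded before aerotriangulation:

```python
import cv2
from pathlib import Path

SHARPNESS_MIN = 60.0          # variance of Laplacian below this -> likely out of focus (placeholder)
BRIGHTNESS_RANGE = (40, 215)  # mean 8-bit brightness outside this band -> poorly exposed (placeholder)

def screen_image(path: Path) -> tuple[bool, float, float]:
    """Return (acceptable, sharpness score, mean brightness) for one photograph."""
    gray = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    if gray is None:                                   # unreadable file
        return False, 0.0, 0.0
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # focus measure
    brightness = float(gray.mean())                    # crude exposure measure
    ok = sharpness >= SHARPNESS_MIN and BRIGHTNESS_RANGE[0] <= brightness <= BRIGHTNESS_RANGE[1]
    return ok, sharpness, brightness

flagged = []
for img in sorted(Path("uav_images").glob("*.JPG")):   # hypothetical image folder
    ok, sharpness, brightness = screen_image(img)
    if not ok:
        flagged.append((img.name, round(sharpness, 1), round(brightness, 1)))

print(f"{len(flagged)} images flagged for manual review")
```

Scripted screening of this kind narrows the manual review to a small subset of suspect photographs instead of the full dataset.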
The methodology intentionally omitted supplementary terrestrial photographs to test the standalone capability of UAV technology. The consequences of this choice were a prolonged processing time, caused by the need for manual corrections, and some geometric simplifications in occluded areas, particularly on the bridge’s underside. Nevertheless, the approach remains highly applicable for surveys of inaccessible locations. For future projects where the accurate depiction of every element is paramount, supplementing aerial data with terrestrial imagery is recommended.
For complex truss bridges, the automatic identification of matching points across photographs is a fundamental challenge. In this project, meticulous field documentation—including marked photo targets and precise sketches supplemented with close-range photos—proved invaluable. These comprehensive sketches prevented identification errors and significantly accelerated the office-based processing, confirming that robust field work is a cornerstone of efficient digital modeling.
Finally, the visual similarity of the truss’s repetitive spans posed a significant obstacle for automated algorithms. While 53 tie points were manually placed on unique features, this number—combined with their uneven distribution due to limited access—was insufficient to fully separate all spans, leading to local inaccuracies. For future documentation of similar structures, it is recommended to at least double the number of manually placed points or, where possible, use unique coded targets on each span to guide the software effectively.
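Unique coded targets of this kind can be produced with standard tools. The sketch below assumes OpenCV version 4.7 or later with the aruco module (an illustrative choice; no such targets were used in this survey) and generates one distinct marker image per span that can be printed, mounted, and later recognized unambiguously in the photographs:

```python
import cv2

# One unique ArUco marker per truss span; the dictionary, IDs, and sizes are illustrative only.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)

for span_id in range(1, 6):                                            # e.g., five spans to separate
    marker = cv2.aruco.generateImageMarker(dictionary, span_id, 1200)  # 1200 px per side for printing
    cv2.imwrite(f"span_{span_id:02d}_target.png", marker)
```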

5. Conclusions

The presented study demonstrates that the integration of UAV-based photogrammetry with precise geodetic referencing offers an effective, non-invasive, and scientifically robust method for documenting historic engineering structures. The case of the Niestępowo Bridge, a century-old truss viaduct, illustrates how advanced digital techniques can safeguard cultural and technological heritage by creating an accurate, metrically verified 3D record of a structure that once existed only in archival fragments.
The comparative analysis of two photogrammetric platforms—Bentley ContextCapture 24.1.6.2180 and Agisoft Metashape 2.1.2—proved that both can deliver high-fidelity models suitable for engineering inventory, yet with distinct advantages. Bentley ContextCapture 24.1.6.2180 provides an efficient and automated workflow for large-scale datasets, while Agisoft Metashape 2.1.2, despite being more hardware-demanding, ensures greater control over model accuracy and detail reconstruction. Together, they form a complementary toolkit that enables the generation of precise and visually realistic 3D representations even for geometrically complex truss bridges.
Beyond the technical findings, this research emphasizes the broader significance of digital photogrammetry as a bridge between engineering science and cultural heritage conservation. Each 3D model produced represents more than a geometric dataset—it is a digital preservation of human creativity, craftsmanship, and historical engineering knowledge. By implementing such methodologies systematically, it becomes possible to build a digital archive of endangered bridges and other infrastructural monuments, ensuring that their design and form remain accessible to future generations even if the physical structures are lost.
The methodology developed in this study—based solely on UAV photogrammetry and refined through rigorous geodetic control—has proven its capacity to deliver reliable, metrically consistent documentation. This approach can be replicated for similar heritage structures, especially those located in inaccessible or environmentally sensitive areas, where non-contact measurement is essential.
Finally, this study demonstrates that modern photogrammetry is not merely a documentation tool but an act of cultural preservation, allowing engineers and researchers to merge technological precision with a deep responsibility toward history. Continuing the development of such integrative methodologies will strengthen our ability to document, interpret, and protect the legacy of historic engineering for generations to come.

Author Contributions

Conceptualization, A.B.; supervision, A.B.; project administration, A.B. and D.K. (Dominika Kuryłowicz); funding acquisition, A.B.; methodology, A.B., K.M.-J., T.C., D.K. (Dominika Kuryłowicz), D.K. (Dominik Księżopolski), H.N. and P.W.; software selection, A.B., K.M.-J., D.K. (Dominika Kuryłowicz) and P.R.; validation, A.B. and K.M.-J.; formal analysis, A.B.; geodetic surveying, A.B., K.M.-J., D.K. (Dominik Księżopolski) and P.W.; geodetic data processing, K.M.-J.; 3D model creation in Bentley ContextCapture, D.K. (Dominika Kuryłowicz); 3D model creation in Agisoft Metashape, P.R.; geodetic data analysis, D.K. (Dominik Księżopolski) and H.N.; results comparison, P.W.; writing—original draft preparation, T.C., D.K. (Dominika Kuryłowicz), D.K. (Dominik Księżopolski), H.N., P.R., P.W. and P.S.; writing—review and editing, A.B. and K.M.-J.; writing—final version preparation, A.B., T.C., D.K. (Dominika Kuryłowicz), D.K. (Dominik Księżopolski), H.N., P.R., P.W. and P.S. All authors have read and agreed to the published version of the manuscript.

Funding

Financial support for these studies from Gdańsk University of Technology under grant DEC-9/2022/IDUB/III.4.3/Pu, awarded within the Plutonium ‘Excellence Initiative–Research University’ program, is gratefully acknowledged.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
3D – Three-Dimensional
BBA – Bundle Block Adjustment
CMOS – Complementary Metal-Oxide Semiconductor
DLT – Direct Linear Transformation
GCP – Ground Control Point
GDOP – Geometric Dilution of Precision
GIS – Geographic Information System
GNSS – Global Navigation Satellite System
LD – Linear Dichroism
MP – Megapixel
MVS – Multi-View Stereo
PL-2000 – Coordinate System 2000
PL-EVRF2007-NH – European Vertical Reference Frame 2007 for Poland, Normal Height
SfM – Structure-from-Motion
TLS – Terrestrial Laser Scanning
UAV – Unmanned Aerial Vehicle

Appendix A

Summary of coordinates in the PL-2000 plane coordinate system and the PL-EVRF2007-NH height system, including points measured with a total station on the structure, points measured with GNSS, and the established control network.
Table A1. Summary of coordinates for control network points and measured points.

Number    X [m]           Y [m]           Z [m]
1         6,021,882.57    6,528,774.50    119.61
2         6,021,881.19    6,528,769.42    121.01
3         6,021,881.67    6,528,771.98    119.62
4         6,021,893.00    6,528,777.25    119.68
5         6,021,906.64    6,528,783.58    119.71
6         6,021,880.77    6,528,770.77    113.57
7         6,021,888.56    6,528,774.62    111.67
8         6,021,892.52    6,528,779.14    119.67
9         6,021,892.83    6,528,781.68    121.11
10        6,021,918.70    6,528,792.41    113.68
11        6,021,868.77    6,528,763.73    119.94
12        6,021,874.67    6,528,767.64    117.97
13        6,021,874.60    6,528,766.36    120.98
14        6,021,892.31    6,528,780.11    111.96
15        6,021,894.87    6,528,777.07    111.95
16        6,021,901.41    6,528,780.11    111.55
17        6,021,914.47    6,528,786.17    112.57
18        6,021,875.79    6,528,772.43    115.05
19        6,021,876.26    6,528,772.62    118.42
20        6,021,926.84    6,528,792.25    116.86
21        6,021,925.66    6,528,795.59    118.07
22        6,021,922.65    6,528,794.19    115.07
23        6,021,895.32    6,528,777.76    110.78
24        6,021,901.73    6,528,780.73    110.56
25        6,021,887.31    6,528,777.30    111.67
26        6,021,894.08    6,528,780.46    110.78
27        6,021,900.47    6,528,783.22    110.57
28        6,021,881.50    6,528,775.08    113.21
29        6,021,878.88    6,528,769.62    114.53
30        6,021,882.75    6,528,771.42    113.34
31        6,021,881.30    6,528,770.77    115.12
32        6,021,902.66    6,528,786.15    122.62
33        6,021,899.45    6,528,782.99    110.78
34        6,021,876.82    6,528,772.55    118.19
35        6,021,883.24    6,528,775.60    118.31
36        6,021,889.76    6,528,778.59    118.37
37        6,021,896.30    6,528,781.64    118.43
38        6,021,902.90    6,528,784.67    118.44
39        6,021,909.52    6,528,787.75    118.44
40        6,021,879.74    6,528,773.83    113.67
41        6,021,892.90    6,528,779.94    111.10
FOT1      6,021,862.81    6,528,760.64    119.18
FOT2      6,021,858.10    6,528,766.58    119.34
FOT3      6,021,947.79    6,528,800.58    119.83
FOT4      6,021,944.60    6,528,805.21    119.95
FOT5      6,021,925.66    6,528,793.42    120.37
FOT6      6,021,903.92    6,528,783.32    120.31
FOT7      6,021,885.55    6,528,774.84    120.25
1001      6,021,806.63    6,528,799.75    103.35
1002      6,021,893.54    6,528,779.18    103.22
1003      6,021,863.78    6,528,705.30    103.67
1004      6,021,923.10    6,528,734.32    103.70
1005      6,021,920.75    6,528,799.78    112.65
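To reuse these coordinates in a GIS, the planar values can be converted from PL-2000 zone 6 to geographic coordinates. The following is a minimal Python sketch assuming pyproj and the EPSG:2177 definition of PL-2000 zone 6 (the EPSG code is an assumption, not stated in the paper); heights in PL-EVRF2007-NH are left unchanged:

```python
from pyproj import Transformer

# PL-2000 zone 6 (central meridian 18° E) to WGS84 geographic coordinates.
# EPSG:2177 is assumed here as the PL-2000 zone 6 definition.
transformer = Transformer.from_crs("EPSG:2177", "EPSG:4326", always_xy=True)

# Point 1 from Table A1: X is the northing, Y the easting (metres).
northing, easting = 6_021_882.57, 6_528_774.50
lon, lat = transformer.transform(easting, northing)  # always_xy: (east, north) in, (lon, lat) out
print(f"lat = {lat:.6f} N, lon = {lon:.6f} E")
```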

References

  1. Geyik, M.; Tarı, U.; Özcan, O.; Sunal, G.; Yaltırak, C. A new technique mapping submerged beachrocks using low-altitude UAV photogrammetry, the Altınova region, northern coast of the Sea of Marmara (NW Türkiye). Quat. Int. 2024, 712, 109579. [Google Scholar] [CrossRef]
  2. Pan, Y.; Dong, Y.; Wang, D.; Chen, A.; Ye, Z. Three-Dimensional Reconstruction of Structural Surface Model of Heritage Bridges Using UAV-Based Photogrammetric Point Clouds. Remote Sens. 2019, 11, 1204. [Google Scholar] [CrossRef]
  3. Wang, Q.; Fang, N.; Zeng, Y.; Yuan, C.; Dai, W.; Fan, R.; Chang, H. Optimizing UAV-SfM photogrammetry for efficient monitoring of gully erosion in high-relief terrains. Measurement 2025, 256, 118154. [Google Scholar] [CrossRef]
  4. Dahal, S.; Imaizumi, F.; Takayama, S. Spatio-temporal distribution of boulders along a debris-flow torrent assessed by UAV photogrammetry. Geomorphology 2025, 480, 109757. [Google Scholar] [CrossRef]
  5. Sestras, P.; Badea, G.; Badea, A.C.; Salagean, T.; Roșca, S.; Kader, S.; Remondino, F. Land surveying with UAV photogrammetry and LiDAR for optimal building planning. Autom. Constr. 2025, 173, 106092. [Google Scholar] [CrossRef]
  6. Wu, S.; Feng, L.; Zhang, X.; Yin, C.; Quan, L.; Tian, B. Optimizing overlap percentage for enhanced accuracy and efficiency in oblique photogrammetry building 3D modeling. Constr. Build. Mater. 2025, 489, 142382. [Google Scholar] [CrossRef]
  7. Gruszczyński, W.; Matwij, W.; Ćwiąkała, P. Comparison of low-altitude UAV photogrammetry with terrestrial laser scanning as data-source methods for terrain covered in low vegetation. ISPRS J. Photogramm. Remote Sens. 2017, 126, 168–179. [Google Scholar] [CrossRef]
  8. Cho, J.; Jeong, S.; Lee, B. Optimal ground control point layout for UAV photogrammetry in high precision 3D mapping. Measurement 2025, 257, 118343. [Google Scholar] [CrossRef]
  9. Pepe, M.; Costantino, D. UAV photogrammetry and 3D modelling of complex architecture for maintenance purposes: The case study of the masonry bridge on the Sele River, Italy. Period. Polytech. Civ. Eng. 2021, 65, 191–203. [Google Scholar] [CrossRef]
  10. Zollini, S.; Alicandro, M.; Dominici, D.; Quaresima, R.; Giallonardo, M. UAV Photogrammetry for Concrete Bridge Inspection Using Object-Based Image Analysis (OBIA). Remote Sens. 2020, 12, 3180. [Google Scholar] [CrossRef]
  11. Olaszek, P.; Maciejewski, E.; Rakoczy, A.; Cabral, R.; Santos, R.; Ribeiro, D. Remote Inspection of Bridges with the Integration of Scanning Total Station and Unmanned Aerial Vehicle Data. Remote Sens. 2024, 16, 4176. [Google Scholar] [CrossRef]
  12. Tang, Z.; Peng, Y.; Li, J.; Li, Z. UAV 3D modeling and application based on railroad bridge inspection. Buildings 2024, 14, 26. [Google Scholar] [CrossRef]
  13. Ioli, F.; Pinto, A.; Pinto, L. UAV photogrammetry for metric evaluation of concrete bridge cracks. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 1025–1032. [Google Scholar] [CrossRef]
  14. Wang, X.; Demartino, C.; Narazaki, Y.; Monti, G.; Spencer, B.F. Rapid seismic risk assessment of bridges using UAV aerial photogrammetry. Eng. Struct. 2023, 279, 115589. [Google Scholar] [CrossRef]
  15. Yiğit, A.Y.; Uysal, M. Virtual reality visualisation of automatic crack detection for bridge inspection from 3D digital twin generated by UAV photogrammetry. Measurement 2025, 242, 115931. [Google Scholar] [CrossRef]
  16. Castellani, M.; Meoni, A.; Garcia-Macias, E.; Antonini, F.; Ubertini, F. UAV photogrammetry and laser scanning of bridges: A new methodology and its application to a case study. Procedia Struct. Integr. 2024, 62, 193–200. [Google Scholar] [CrossRef]
  17. Mohammadi, M.; Rashidi, M.; Mousavi, V.; Karami, A.; Yu, Y.; Samali, B. Quality evaluation of digital twins generated based on UAV photogrammetry and TLS: Bridge case study. Remote Sens. 2021, 13, 3499. [Google Scholar] [CrossRef]
  18. Mousavi, V.; Rashidi, M.; Mohammadi, M.; Samali, B. Evolution of digital twin frameworks in bridge management: Review and future directions. Remote Sens. 2024, 16, 1887. [Google Scholar] [CrossRef]
  19. Jürgen, H.; Michał, W.; Oliver, S. Use of unmanned aerial vehicle photogrammetry to obtain topographical information to improve bridge risk assessment. J. Infrastruct. Syst. 2018, 24, 04017041. [Google Scholar] [CrossRef]
  20. Graves, W.; Aminfar, K.; Lattanzi, D. Full-Scale Highway Bridge Deformation Tracking via Photogrammetry and Remote Sensing. Remote Sens. 2022, 14, 2767. [Google Scholar] [CrossRef]
  21. Shang, Z.; Shen, Z. Flight Planning for Survey-Grade 3D Reconstruction of Truss Bridges. Remote Sens. 2022, 14, 3200. [Google Scholar] [CrossRef]
  22. Pargieła, K. Optimising UAV Data Acquisition and Processing for Photogrammetry: A Review. Geomat. Environ. Eng. 2023, 17, 29–59. [Google Scholar] [CrossRef]
  23. Burdziakowski, P.; Szulwic, J.; Janowski, A.; Tysiąc, P.; Dawidowicz, A. Using UAV Photogrammetry to Analyse Changes in the Coastal Zone Based on the Sopot Tombolo (Salient) Measurement Project. Sensors 2020, 20, 4000. [Google Scholar] [CrossRef] [PubMed]
  24. Cano, M.; Pastor, J.L.; Tomás, R.; Riquelme, A.; Asensio, J.L. A New Methodology for Bridge Inspections in Linear Infrastructures from Optical Images and HD Videos Obtained by UAV. Remote Sens. 2022, 14, 1244. [Google Scholar] [CrossRef]
  25. Qi, Y.; Lin, P.; Yang, G.; Liang, T. Crack detection and 3D visualization of crack distribution for UAV-based bridge inspection using efficient approaches. Structures 2025, 78, 109075. [Google Scholar] [CrossRef]
  26. Dabous, S.A.; Al-Ruzouq, R.; Llort, D. Three-dimensional modeling and defect quantification of existing concrete bridges based on photogrammetry and computer aided design. Ain Shams Eng. J. 2023, 14, 12. [Google Scholar] [CrossRef]
  27. Rashidi, M.; Mousavi, V.; Perera, S.; Devitt, J. Bridge health monitoring through photogrammetry-based digital twins: A topological data analysis approach to missing bolts detection. Measurement 2025, 259, 119713. [Google Scholar] [CrossRef]
  28. Luo, K.; Kong, X.; Zhang, J.; Hu, J.; Li, J.; Tang, H. Computer Vision-Based Bridge Inspection and Monitoring: A Review. Sensors 2023, 23, 7863. [Google Scholar] [CrossRef]
  29. Dudek, M.; Lachowicz, Ł. Przegląd Specjalny Mostu Kolejowego w km 17+106 W Ramach Zadania pn. „Przygotowanie Linii Kolejowych nr 234 na Odcinku Kokoszki—Stara Piła Oraz nr 229 na Odcinku Stara Piła—Glincz Jako Trasy Objazdowej na Czas Realizacji Projektu „Prace na Alternatywnym Ciągu Transportowym Bydgoszcz—Trójmiasto, Etap I”; Technical Report; PKP Polskie Linie Kolejowe S.A.: Gdańsk, Poland, 2023; (material not publicly available). [Google Scholar]
  30. Szukaj w Archiwach. Available online: https://www.szukajwarchiwach.gov.pl/jednostka/-/jednostka/37485097 (accessed on 13 October 2025).
  31. Marín-Buzón, C.; Pérez-Romero, A.; López-Castro, J.L.; Ben Jerbania, I.; Manzano-Agugliaro, F. Photogrammetry as a New Scientific Tool in Archaeology: Worldwide Research Trends. Sustainability 2021, 13, 5319. [Google Scholar] [CrossRef]
  32. Borkowski, A.S.; Kubrat, A. Integration of Laser Scanning, Digital Photogrammetry and BIM Technology: A Review and Case Studies. Eng 2024, 5, 2395–2409. [Google Scholar] [CrossRef]
  33. Karami, A.; Menna, F.; Remondino, F. Combining Photogrammetry and Photometric Stereo to Achieve Precise and Complete 3D Reconstruction. Sensors 2022, 22, 8172. [Google Scholar] [CrossRef]
  34. Hellwich, O. Photogrammetric methods. In Encyclopedia of GIS; Shekhar, S., Xiong, H., Zhou, X., Eds.; Springer: Cham, Switzerland, 2017; pp. 1574–1580. [Google Scholar] [CrossRef]
  35. Borg, B.; Dunn, M.; Ang, A.; Villis, C. The application of state-of-the-art technologies to support artwork conservation: Literature review. J. Cult. Herit. 2020, 44, 239–259. [Google Scholar] [CrossRef]
  36. Tommasi, C.; Achille, C.; Fassi, F. From point cloud to BIM: A modelling challenge in the cultural heritage field. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 429–436. [Google Scholar] [CrossRef]
  37. Konstantakis, M.; Trichopoulos, G.; Aliprantis, J.; Gavogiannis, N.; Karagianni, A.; Parthenios, P.; Serraos, K.; Caridakis, G. An Improved Approach for Generating Digital Twins of Cultural Spaces through the Integration of Photogrammetry and Laser Scanning Technologies. Digital 2024, 4, 215–231. [Google Scholar] [CrossRef]
  38. Afaq, S.; Jain, S.K.; Sharma, N.; Sharma, S. Comparative assessment of 2D photogrammetry versus direct anthropometry in nasal measurements. Eur. J. Clin. Exp. Med. 2025, 23, 307–315. [Google Scholar] [CrossRef]
  39. Kingsland, K. Comparative analysis of digital photogrammetry software for cultural heritage. Digit. Appl. Archaeol. Cult. Herit. 2020, 18, e00157. [Google Scholar] [CrossRef]
  40. Cabral, R.; Oliveira, R.; Ribeiro, D.; Rakoczy, A.M.; Santos, R.; Azenha, M.; Correia, J. Railway bridge geometry assessment supported by cutting-edge reality capture technologies and 3D as-designed models. Infrastructures 2023, 8, 114. [Google Scholar] [CrossRef]
  41. Maboudi, M.; Backhaus, J.; Mai, I.; Ghassoun, Y.; Khedar, Y.; Lowke, D.; Riedel, B.; Bestmann, U.; Gerke, M. Very high-resolution bridge deformation monitoring using UAV-based photogrammetry. J. Civ. Struct. Health Monit. 2025, 15, 1–18. [Google Scholar] [CrossRef]
  42. Xing, Y.; Yang, S.; Fahy, C.; Harwood, T.; Shell, J. Capturing the Past, Shaping the Future: A Scoping Review of Photogrammetry in Cultural Building Heritage. Electronics 2025, 14, 3666. [Google Scholar] [CrossRef]
  43. Balletti, C.; Ballarin, M.; Vernier, P. Replicas in cultural heritage: 3D printing and the museum experience. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 55–62. [Google Scholar] [CrossRef]
  44. Ustawa z Dnia 24 Stycznia 2025 r. o Zmianie Ustawy—Prawo Lotnicze Oraz Niektórych Innych Ustaw. Dziennik Ustaw 2025, Poz. 179. Available online: https://isap.sejm.gov.pl/isap.nsf/DocDetails.xsp?id=WDU20250000179 (accessed on 3 November 2025).
Figure 1. Archival side plan of the Niestępowo Bridge [30].
Figure 2. Overall view of the Niestępowo Bridge.
Figure 3. Geometry of the Niestępowo Bridge: (a) cross-section of the Niestępowo Bridge; (b) side view of the truss girder geometry.
Figure 4. Geometric relationship between the object, camera, and image planes in digital photogrammetry [34].
Figure 5. Geometry of an image pair [34].
Figure 6. Elements of epipolar geometry in stereoscopic photogrammetry [34].
Figure 7. Advantages and limitations of photogrammetry.
Figure 8. UAV photogrammetry workflow flowchart.
Figure 9. Sketch of control points in a top view.
Figure 10. North-side sketch of the points measured with a total station.
Figure 11. South-side sketch of the points measured with a total station.
Figure 12. Examples of DJI drones used in photogrammetry: the DJI Phantom 4 Pro, DJI Mavic 2 Pro, and DJI Mini 3 Pro (photos from the DJI official website).
Figure 13. Schematic of UAV flight patterns and camera orientations used for photogrammetric documentation.
Figure 14. Workflow of the photogrammetric documentation process, from preliminary analysis to final validation and heritage assessment.
Figure 15. Side view of the Niestępowo Bridge, Bentley 3D model.
Figure 16. Detail view of the railway structure, Bentley 3D model: (a) on the bridge; (b) at the approaches to the bridge.
Figure 17. Detail view of the truss joint of the Niestępowo Bridge, Bentley 3D model.
Figure 18. Bentley 3D model measurement of the bridge span.
Figure 19. Bentley 3D model measurement of the crossbeam spacing.
Figure 20. Side view of the Niestępowo Bridge, Agisoft 3D model.
Figure 21. Detail view of the railway structure, Agisoft 3D model: (a) on the bridge; (b) at the approaches to the bridge.
Figure 22. Detail view of the truss joint of the Niestępowo Bridge, Agisoft 3D model.
Figure 23. Agisoft 3D model measurement of the bridge span.
Figure 24. Agisoft 3D model measurement of the crossbeam spacing.
Figure 25. Cross-sections of the railway track 3D model: (a) rail gauge; (b) height of the bridge railing.
Figure 26. Detailed views of the 3D model of the railway structure texture: (a) generated in Agisoft Metashape; (b) generated in Bentley ContextCapture.
Figure 27. Detail views of the 3D model of the truss joint: (a) generated in Agisoft Metashape; (b) generated in Bentley ContextCapture.
Figure 28. C2C deviation map of a selected fragment of the bridge truss.
Figure 29. C2C deviation map of the abutment joint.
Figure 30. C2C deviation map of the bridge abutment area with the approach slab and track surface.
Table 1. Parameters of the geometric resolution (GSD) of selected drone models at a flight altitude of 30 m.

UAV                  Sensor Width    Image Width    Focal Length    Pixel Size      GSD
DJI Phantom 4 Pro    13.2 mm         5472 px        8.8 mm          0.002412 mm     0.82 cm/px
DJI Mavic 2 Pro      13.2 mm         5472 px        10.3 mm         0.002412 mm     0.70 cm/px
DJI Mini 3 Pro       9.8 mm          4032 px        6.7 mm          0.002430 mm     1.09 cm/px
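The GSD values in Table 1 follow from the standard relation GSD = (pixel size × flight altitude)/focal length. The short Python sketch below reproduces the tabulated values for the 30 m flight altitude (a minimal illustration of the formula, not a script from the project):

```python
# GSD [cm/px] = pixel size [mm] * flight altitude [m] / focal length [mm] * 100
DRONES = {
    "DJI Phantom 4 Pro": (0.002412, 8.8),   # (pixel size [mm], focal length [mm])
    "DJI Mavic 2 Pro":   (0.002412, 10.3),
    "DJI Mini 3 Pro":    (0.002430, 6.7),
}
ALTITUDE_M = 30.0

for name, (pixel_mm, focal_mm) in DRONES.items():
    gsd_cm = pixel_mm * ALTITUDE_M / focal_mm * 100.0
    print(f"{name}: GSD = {gsd_cm:.2f} cm/px")   # 0.82, 0.70, and 1.09 cm/px, respectively
```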
Table 2. Comparison of software workflow stages.

Parameter                 Bentley ContextCapture                        Agisoft Metashape
Number of input images    9274                                          11,212
Processing time           10 days                                       4 days
Tie points used           53                                            78
Texture quality           Very high                                     High
Strengths                 Robust automation, photorealistic texture     User control, detailed crack detection
Limitations               Long processing time, alignment errors        Hardware demanding, repetitive geometry challenges
Table 3. Comparison of the basic structural dimensions.

Element                          Archival Records [m]    Bentley ContextCapture [m] / Δ [%]    Agisoft Metashape [m] / Δ [%]
Theoretical span Lt              58.00                   57.16 / 1.45%                         58.10 / 0.17%
Spacing crossbeams bc            3.63                    3.68 / 1.52%                          3.63 / 0.05%
Height of the truss girder hg    8.04                    8.09 / 0.50%                          8.21 / 1.99%
Track gauge gt                   1.435                   1.446 / 0.77%                         1.433 / 0.14%
Railing height hr                1.10                    1.09 / 1.27%                          1.09 / 1.27%