Search Results (21)

Search Parameters:
Keywords = true orthophoto

19 pages, 8766 KiB  
Article
Fusion of Airborne, SLAM-Based, and iPhone LiDAR for Accurate Forest Road Mapping in Harvesting Areas
by Evangelia Siafali, Vasilis Polychronos and Petros A. Tsioras
Land 2025, 14(8), 1553; https://doi.org/10.3390/land14081553 - 28 Jul 2025
Viewed by 362
Abstract
This study examined the integration of airborne Light Detection and Ranging (LiDAR), Simultaneous Localization and Mapping (SLAM)-based handheld LiDAR, and iPhone LiDAR to inspect forest road networks following forest operations. The goal was to overcome the challenges posed by dense canopy cover and ensure accurate and efficient data collection and mapping. Airborne data were collected using the DJI Matrice 300 RTK UAV equipped with a Zenmuse L2 LiDAR sensor, which achieved a high point density of 285 points/m² at an altitude of 80 m. Ground-level data were collected using the BLK2GO handheld laser scanner (HPLS) with SLAM methods (LiDAR SLAM, Visual SLAM, Inertial Measurement Unit) and the iPhone 13 Pro Max LiDAR. Data processing included generating DEMs, DSMs, and True Digital Orthophotos (TDOMs) via DJI Terra, LiDAR360 V8, and Cyclone REGISTER 360 PLUS, with additional processing and merging using CloudCompare V2 and ArcGIS Pro 3.4.0. The pairwise comparison between ALS data and each alternative method revealed notable differences in elevation, highlighting discrepancies between methods. ALS + iPhone demonstrated the smallest deviation from ALS (MAE = 0.011, RMSE = 0.011, RE = 0.003%) and HPLS the largest (MAE = 0.507, RMSE = 0.542, RE = 0.123%). The findings highlight the potential of fusing point clouds from diverse platforms to enhance forest road mapping accuracy. However, the selection of technology should consider trade-offs among accuracy, cost, and operational constraints. Mobile LiDAR solutions, particularly the iPhone, offer promising low-cost alternatives for certain applications. Future research should explore real-time fusion workflows and strategies to improve the cost-effectiveness and scalability of multisensor approaches for forest road monitoring.
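
The pairwise elevation comparison reported above (MAE, RMSE, and relative error between ALS and each alternative point cloud) reduces to simple array arithmetic once the clouds are rasterized onto a common grid. A minimal sketch, assuming two co-registered elevation arrays in NumPy; the variable names and the relative-error convention are illustrative, not taken from the paper:

```python
import numpy as np

def elevation_deviation(z_ref, z_alt):
    """Compare two co-registered elevation grids (e.g., ALS vs. iPhone LiDAR).

    z_ref, z_alt : 2D arrays of elevations in metres; NaN marks no-data cells.
    Returns MAE, RMSE, and a relative error expressed as a percentage.
    """
    valid = ~np.isnan(z_ref) & ~np.isnan(z_alt)   # compare only cells present in both grids
    diff = z_alt[valid] - z_ref[valid]
    mae = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    # One common relative-error convention: MAE over the mean reference elevation.
    re_pct = 100.0 * mae / np.mean(np.abs(z_ref[valid]))
    return mae, rmse, re_pct

# Toy example with synthetic grids
z_als = np.random.default_rng(0).normal(420.0, 2.0, (100, 100))
z_phone = z_als + np.random.default_rng(1).normal(0.0, 0.012, (100, 100))
print(elevation_deviation(z_als, z_phone))
```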

22 pages, 4083 KiB  
Article
Employing Aerial LiDAR Data for Forest Clustering and Timber Volume Estimation: A Case Study with Pinus radiata in Northwest Spain
by Alberto López-Amoedo, Henrique Lorenzo, Carolina Acuña-Alonso and Xana Álvarez
Forests 2025, 16(7), 1140; https://doi.org/10.3390/f16071140 - 10 Jul 2025
Viewed by 263
Abstract
In the case of forest inventory, heterogeneous areas are particularly challenging due to variability in vegetation structure. This is especially true in Galicia (northwest Spain), where land is highly fragmented, complicating the planning and management of single-species plantations such as Pinus radiata. This study proposes a cost-effective strategy using open-access tools and data to characterize and estimate wood volume in these plantations. Two stratification approaches—classical and cluster-based—were compared to a modeling method based on Principal Component Analysis (PCA). Data came from open-access national LiDAR point clouds, acquired using manned aerial vehicles under the Spanish National Aerial Orthophoto Plan (PNOA). Two volume estimation methods were applied: one from the Xunta de Galicia (XdG) and another from Spain’s central administration (4IFN). A Generalized Linear Model (GLM) was also fitted using PCA-derived variables with a logarithmic transformation. The results show that although overall volume estimates are similar across methods, cluster-based stratification yielded significantly lower absolute errors per hectare (XdG: 28.04 m³/ha vs. 44.07 m³/ha; 4IFN: 25.64 m³/ha vs. 38.22 m³/ha), improving accuracy by 7% over classical stratification. Moreover, it does not require precise field parcel locations, unlike PCA modeling. Both official volume estimation methods tended to overestimate stock by about 10% compared to PCA. These results confirm that clustering offers a practical, low-cost alternative that improves estimation accuracy by up to 18 m³/ha in fragmented forest landscapes.
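
The PCA-based volume model described above (PCA-derived predictors with a logarithmic transformation) can be prototyped in a few lines. The sketch below is a generic illustration with scikit-learn and a log-transformed response, not the authors' fitted model, and the predictor values are invented:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical plot-level LiDAR metrics (e.g., height percentiles, canopy cover)
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 8))                      # 200 plots, 8 LiDAR metrics
volume = np.exp(3.0 + 0.4 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 0.1, 200))  # m3/ha

# PCA to decorrelate the metrics, then a linear model on log(volume):
# equivalent to a Gaussian GLM on the log-transformed response.
model = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
model.fit(X, np.log(volume))

pred_m3_ha = np.exp(model.predict(X))              # back-transform to m3/ha
print("mean absolute error (m3/ha):", np.mean(np.abs(pred_m3_ha - volume)))
```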

26 pages, 20953 KiB  
Article
Optimization-Based Downscaling of Satellite-Derived Isotropic Broadband Albedo to High Resolution
by Niko Lukač, Domen Mongus and Marko Bizjak
Remote Sens. 2025, 17(8), 1366; https://doi.org/10.3390/rs17081366 - 11 Apr 2025
Viewed by 374
Abstract
In this paper, a novel method for estimating high-resolution isotropic broadband albedo is proposed by downscaling satellite-derived albedo using an optimization approach. First, broadband albedo is calculated from the lower-resolution multispectral satellite image using standard narrow-to-broadband (NTB) conversion, where the surfaces are considered Lambertian with isotropic reflectance. The high-resolution true orthophoto for the same location is segmented with the deep learning-based Segment Anything Model (SAM), and the resulting segments are refined with a classified digital surface model (cDSM) to exclude small transient objects. Afterwards, the remaining segments are grouped using K-means clustering, considering the orthophoto's visible (VIS) and near-infrared (NIR) bands. These segments represent surfaces with similar materials and underlying reflectance properties. Next, the Differential Evolution (DE) optimization algorithm is applied to approximate albedo values for these segments so that their spatial aggregate matches the coarse-resolution satellite albedo, using two novel objective functions. Extensive experiments considering different DE parameters were carried out over a 0.75 km² urban area in Maribor, Slovenia, where Sentinel-2 Level-2A NTB-derived albedo was downscaled to 1 m spatial resolution. In the spatiospectral analysis, the proposed method achieved absolute differences of 0.09 per VIS band and below 0.18 per NIR band in comparison to the lower-resolution NTB-derived albedo. Moreover, the proposed method achieved a root mean square error (RMSE) of 0.0179 and a mean absolute percentage error (MAPE) of 4.0299% against ground truth broadband albedo annotations of characteristic materials in the given urban area. The proposed method outperformed the Enhanced Super-Resolution Generative Adversarial Networks (ESRGANs), which achieved an RMSE of 0.0285 and an MAPE of 9.2778%, and the Blind Super-Resolution Generative Adversarial Network (BSRGAN), which achieved an RMSE of 0.0341 and an MAPE of 12.3104%.
(This article belongs to the Section AI Remote Sensing)
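
The core of the downscaling step above is an optimization that assigns per-segment albedo values whose area-weighted aggregate reproduces the coarse satellite albedo. A minimal sketch with SciPy's differential evolution for a single coarse pixel, using a simple quadratic mismatch objective rather than the paper's two objective functions; the segment areas and the coarse albedo value are synthetic:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic setup: one coarse Sentinel-2 pixel covered by 5 SAM/K-means segments.
rng = np.random.default_rng(7)
area_fraction = rng.dirichlet(np.ones(5))      # each segment's share of the coarse pixel
coarse_albedo = 0.21                           # NTB-derived broadband albedo of that pixel

def mismatch(segment_albedo, weights, target):
    """Squared difference between the area-weighted aggregate and the coarse albedo,
    plus a mild penalty discouraging implausible spread between segments."""
    aggregate = np.dot(weights, segment_albedo)
    spread_penalty = 1e-3 * np.var(segment_albedo)
    return (aggregate - target) ** 2 + spread_penalty

result = differential_evolution(
    mismatch,
    bounds=[(0.0, 1.0)] * 5,                   # albedo is bounded between 0 and 1
    args=(area_fraction, coarse_albedo),
    maxiter=200, popsize=20, seed=7,
)
print("per-segment albedo:", np.round(result.x, 3))
print("aggregate:", round(float(np.dot(area_fraction, result.x)), 4))
```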

17 pages, 12277 KiB  
Article
Is Your Training Data Really Ground Truth? A Quality Assessment of Manual Annotation for Individual Tree Crown Delineation
by Janik Steier, Mona Goebel and Dorota Iwaszczuk
Remote Sens. 2024, 16(15), 2786; https://doi.org/10.3390/rs16152786 - 30 Jul 2024
Cited by 10 | Viewed by 1788
Abstract
For the accurate and automatic mapping of forest stands based on very-high-resolution satellite imagery and digital orthophotos, precise object detection at the individual tree level is necessary. Currently, supervised deep learning models are primarily applied for this task. To train a reliable model, it is crucial to have an accurate tree crown annotation dataset. The current method of generating these training datasets still relies on manual annotation and labeling. Because of the intricate contours of tree crowns, the vegetation density in natural forests, and the insufficient ground sampling distance of the imagery, manually generated annotations are error-prone. It is unlikely that the manually delineated tree crowns represent the true conditions on the ground. If these error-prone annotations are used as training data for deep learning models, this may lead to inaccurate mapping results. This study critically validates manual tree crown annotations on two study sites: a forest-like plantation on a cemetery and a natural city forest. The validation is based on tree reference data in the form of an official tree register and tree segments extracted from UAV laser scanning (ULS) data for the quality assessment of the training dataset. The validation results reveal that the manual annotations correctly detect only 37% of the tree crowns in the forest-like plantation area and 10% of the tree crowns in the natural forest. Furthermore, multiple trees were frequently interpreted as a single tree in the annotations at both study sites.
(This article belongs to the Section AI Remote Sensing)
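
Validating manual crown polygons against reference segments, as in the study above, typically comes down to one-to-one matching with an overlap criterion and a detection rate. A minimal sketch with Shapely, assuming an intersection-over-union threshold of 0.5; the threshold and the greedy matching strategy are illustrative choices, not the authors' protocol:

```python
from shapely.geometry import Polygon

def detection_rate(annotations, references, iou_threshold=0.5):
    """Fraction of reference tree crowns matched by exactly one manual annotation."""
    matched_refs = set()
    for ann in annotations:
        best_iou, best_idx = 0.0, None
        for i, ref in enumerate(references):
            if i in matched_refs or not ann.intersects(ref):
                continue
            iou = ann.intersection(ref).area / ann.union(ref).area
            if iou > best_iou:
                best_iou, best_idx = iou, i
        if best_idx is not None and best_iou >= iou_threshold:
            matched_refs.add(best_idx)          # greedy one-to-one assignment
    return len(matched_refs) / len(references)

# Toy example: two reference crowns, one annotation covering both (a frequent error).
refs = [Polygon([(0, 0), (4, 0), (4, 4), (0, 4)]), Polygon([(5, 0), (9, 0), (9, 4), (5, 4)])]
anns = [Polygon([(0, 0), (9, 0), (9, 4), (0, 4)])]
print(detection_rate(anns, refs))   # 0.0 -- the merged annotation matches neither crown at IoU >= 0.5
```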

25 pages, 29087 KiB  
Article
HBIM for Conservation of Built Heritage
by Yahya Alshawabkeh, Ahmad Baik and Yehia Miky
ISPRS Int. J. Geo-Inf. 2024, 13(7), 231; https://doi.org/10.3390/ijgi13070231 - 1 Jul 2024
Cited by 5 | Viewed by 3274
Abstract
Building information modeling (BIM) has recently become more popular for historical buildings as a method to rebuild their geometry and collect relevant information. Heritage BIM (HBIM), which combines high-level data about surface conditions, is a valuable tool for conservation decision-making. However, implementing BIM in heritage has its challenges because BIM libraries are designed for new constructions and are incapable of accommodating the morphological irregularities found in historical structures. This article discusses an architectural survey workflow that uses TLS, imagery, and deep learning algorithms to optimize HBIM for the conservation of the Nabatean built heritage. In addition to creating new, highly detailed Nabatean libraries, the proposed approach enhanced HBIM by including two data outputs. The first dataset contained the TLS 3D dense mesh model, which was enhanced with high-quality textures extracted from independent imagery captured at the optimal time and location for accurate depiction of surface features. These images were also used to create true orthophotos based on an accurate and reliable 2.5D DSM derived from TLS, which eliminated image distortion. The true orthophoto was then used in HBIM texturing to create a realistic decay map and combined with a deep learning algorithm to automatically detect and outline surface features and cracks in the BIM model, along with their statistical parameters. Applying deep learning to a structured 2D true orthophoto produced segmentation results in the metric units required for damage quantification and helped overcome the limitations of applying deep learning to 2D non-metric imagery, which typically measures crack widths and areas in pixels. The results show that the integration of scanner and imagery data allows for the efficient collection of data for informative HBIM models and provides stakeholders with an efficient tool for investigating and analyzing buildings to ensure proper conservation.
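
A practical consequence noted above is that segmentation on a true orthophoto yields results directly in metric units, because every pixel has a known ground footprint. A minimal sketch of that conversion, assuming a binary crack mask and a known orthophoto ground sample distance; the GSD value and the mask are placeholders:

```python
import numpy as np

def crack_metrics(crack_mask: np.ndarray, gsd_m: float):
    """Convert a binary crack segmentation on a true orthophoto into metric quantities.

    crack_mask : 2D boolean array (True where a crack pixel was detected).
    gsd_m      : ground sample distance of the orthophoto in metres per pixel.
    """
    pixel_area_m2 = gsd_m ** 2
    crack_area_m2 = crack_mask.sum() * pixel_area_m2
    # Rough width estimate per image row: count of crack pixels times the GSD.
    row_widths_m = crack_mask.sum(axis=1) * gsd_m
    max_width_m = row_widths_m.max() if crack_mask.any() else 0.0
    return crack_area_m2, max_width_m

# Toy example: a 3-pixel-wide vertical crack on a 5 mm GSD orthophoto.
mask = np.zeros((200, 200), dtype=bool)
mask[:, 100:103] = True
print(crack_metrics(mask, gsd_m=0.005))   # (0.015 m^2, 0.015 m)
```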

19 pages, 6925 KiB  
Article
Advantages of Using Transfer Learning Technology with a Quantitative Measurement
by Emilia Hattula, Lingli Zhu, Jere Raninen, Juha Oksanen and Juha Hyyppä
Remote Sens. 2023, 15(17), 4278; https://doi.org/10.3390/rs15174278 - 31 Aug 2023
Cited by 5 | Viewed by 1990
Abstract
The number of people living in cities is continuously growing, and the buildings in topographic maps need frequent updates, which are costly to perform manually. This makes automatic building extraction a significant research subject. Transfer learning, on the other hand, offers solutions in situations where the data of a target area are scarce, making it a worthwhile research subject. Moreover, previous studies lacked metrics for quantifying the accuracy improvement achieved with transfer learning techniques. This paper investigated various transfer learning techniques and their combinations with U-Net for the semantic segmentation of buildings from true orthophotos. The results were analyzed using quantitative methods. Open-source remote sensing data from Poland were used to pretrain a model for building segmentation. The fine-tuning techniques studied included fine-tuning the contracting path, fine-tuning the expanding path, retraining the contracting path, and retraining the expanding path. These techniques and their combinations were tested with three local datasets from diverse environments in Finland: urban, suburban, and rural areas. Knowledge from the pretrained model was transferred to the local datasets from Helsinki (urban), Kajaani (suburban), and selected rural areas across Finland. Three models with no transfer learning were trained from scratch with the three sets of local data to compare against the fine-tuning results. Our experiment focused on how various transfer learning techniques perform on datasets from different environments (urban, suburban, and rural areas) and multiple locations (southern, northern, and across Finland). A quantitative assessment of the performance improvement obtained with transfer learning techniques was conducted. Despite the differences in datasets, the results showed that several transfer learning techniques achieved at least 5% better accuracy than a model trained from scratch. In addition, the effect of the size of the training datasets was also studied.
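
Fine-tuning either the contracting (encoder) or expanding (decoder) path of a U-Net, as studied above, amounts to freezing one set of parameters and updating the other. A minimal PyTorch sketch with a toy two-block U-Net-style model; the module names, layer sizes, and checkpoint name are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy stand-in for a U-Net: one contracting block, one expanding block."""
    def __init__(self):
        super().__init__()
        self.contracting = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                         nn.MaxPool2d(2))
        self.expanding = nn.Sequential(nn.ConvTranspose2d(16, 16, 2, stride=2), nn.ReLU(),
                                       nn.Conv2d(16, 1, 1))   # 1-channel building mask

    def forward(self, x):
        return self.expanding(self.contracting(x))

model = TinyUNet()
# Pretrained weights from the source domain (e.g., Poland) would be loaded here:
# model.load_state_dict(torch.load("pretrained_poland.pt"))  # hypothetical checkpoint name

# "Fine-tuning the expanding path": freeze the contracting path, train only the decoder.
for p in model.contracting.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(2, 3, 64, 64)            # dummy orthophoto patches
y = torch.randint(0, 2, (2, 1, 64, 64)).float()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print("trainable parameters:", sum(p.numel() for p in model.parameters() if p.requires_grad))
```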

24 pages, 12069 KiB  
Article
Exploring the Use of Orthophotos in Google Earth Engine for Very High-Resolution Mapping of Impervious Surfaces: A Data Fusion Approach in Wuppertal, Germany
by Jan-Philipp Langenkamp and Andreas Rienow
Remote Sens. 2023, 15(7), 1818; https://doi.org/10.3390/rs15071818 - 29 Mar 2023
Cited by 9 | Viewed by 3684
Abstract
Germany aims to reduce soil sealing to under 30 hectares per day by 2030 to address the negative environmental impacts of expanding impervious surfaces. As cities adapt to climate change, spatially explicit, very high-resolution information about the distribution of impervious surfaces is becoming increasingly important for urban planning and decision-making. This study proposes a method for mapping impervious surfaces in Google Earth Engine (GEE) using a data fusion approach based on 0.9 m colour-infrared true orthophotos, digital elevation models, and vector data. We conducted a pixel-based random forest (RF) classification utilizing spectral indices, Grey-Level Co-occurrence Matrix texture features, and topographic features. Impervious surfaces were mapped with 0.9 m precision, resulting in an Overall Accuracy of 92.31% and a Kappa coefficient of 84.62%. To address the challenges posed by high-resolution imagery, we superimposed the RF classification results with land use data from Germany’s Authoritative Real Estate Cadastre Information System (ALKIS). The results show that 25.26% of the city of Wuppertal is covered by impervious surfaces, which coincides with a 2020 government-funded study based on Copernicus Sentinel-2 data that reported a built-up proportion of 25.22%. This demonstrates the effectiveness of our method for the semi-automated mapping of impervious surfaces in GEE to support urban planning at a local to regional scale.
(This article belongs to the Special Issue Urban Planning Supported by Remote Sensing Technology)
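
The classification step described above maps onto a short Google Earth Engine script: derive spectral indices and GLCM textures from the orthophoto bands, sample labelled polygons, and train a random forest. A hedged sketch with the GEE Python API; the asset IDs, band names, and label property are placeholders, and the feature set is reduced for brevity:

```python
import ee
ee.Initialize()

# Placeholder assets: a colour-infrared true orthophoto (bands: nir, red, green),
# a normalized DSM, and labelled training polygons with a 'sealed' property (0/1).
ortho = ee.Image("users/example/wuppertal_cir_ortho")        # hypothetical asset ID
ndsm = ee.Image("users/example/wuppertal_ndsm")              # hypothetical asset ID
samples = ee.FeatureCollection("users/example/training_polygons")

ndvi = ortho.normalizedDifference(["nir", "red"]).rename("ndvi")
# GLCM textures need an integer image; use the NIR band scaled to 8 bit.
glcm = ortho.select("nir").multiply(255).toUint8().glcmTexture(size=3)
stack = ee.Image.cat([ortho, ndvi, ndsm.rename("ndsm"), glcm.select("nir_contrast")])

training = stack.sampleRegions(collection=samples, properties=["sealed"], scale=0.9)
classifier = ee.Classifier.smileRandomForest(numberOfTrees=200).train(
    features=training, classProperty="sealed", inputProperties=stack.bandNames())

impervious = stack.classify(classifier)   # 0/1 impervious-surface map at 0.9 m
```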

20 pages, 86247 KiB  
Article
SunMap: Towards Unattended Maintenance of Photovoltaic Plants Using Drone Photogrammetry
by David Hernández-López, Esteban Ruíz de Oña, Miguel A. Moreno and Diego González-Aguilera
Drones 2023, 7(2), 129; https://doi.org/10.3390/drones7020129 - 10 Feb 2023
Cited by 11 | Viewed by 4251
Abstract
Global awareness of environmental issues has boosted interest in renewable energy resources, among which solar energy is one of the most attractive. The massive growth of PV plants, both in number and size, has motivated the development of new approaches for their inspection and monitoring. In this paper, a rigorous drone photogrammetry approach using optical Red, Green and Blue (RGB) and Infrared Thermography (IRT) images is applied to detect one of the most common faults (hot spots) in photovoltaic (PV) plants. The latest advances in photogrammetry and computer vision (i.e., Structure from Motion (SfM) and multiview stereo (MVS)), together with advanced and robust analysis of IRT images, are the main elements of the proposed methodology. We developed an in-house software application, SunMap, that allows automatic, accurate, and reliable detection of hot spots on PV panels. Along with the identification and geolocation of malfunctioning PV panels, SunMap provides high-quality cartographic products in the form of 3D models and true orthophotos that offer additional support for maintenance operations. Validation of SunMap was performed in two different PV plants located in Spain, yielding positive results in the detection and geolocation of anomalies, with an error incidence lower than 15% as validated by the manufacturer’s standard electrical tests.
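
Hot-spot detection on a radiometric IRT frame, as performed by SunMap above, is often approached as an anomaly threshold followed by connected-component labelling. The sketch below shows that generic pattern with NumPy and SciPy; the threshold rule and temperatures are illustrative, not SunMap's algorithm:

```python
import numpy as np
from scipy import ndimage

def detect_hot_spots(temp_c: np.ndarray, k: float = 3.0, min_pixels: int = 4):
    """Flag pixels hotter than mean + k*std within a PV-panel thermal image,
    then group them into candidate hot spots with connected-component labelling."""
    threshold = temp_c.mean() + k * temp_c.std()
    labels, n = ndimage.label(temp_c > threshold)
    spots = []
    for i in range(1, n + 1):
        mask = labels == i
        if mask.sum() >= min_pixels:                       # drop single-pixel noise
            row, col = ndimage.center_of_mass(mask)
            spots.append({"centroid_px": (row, col), "max_temp_c": float(temp_c[mask].max())})
    return spots

# Toy example: a ~40 C panel with one 8-pixel spot at 65 C.
panel = np.full((60, 120), 40.0) + np.random.default_rng(3).normal(0, 0.5, (60, 120))
panel[20:22, 50:54] = 65.0
print(detect_hot_spots(panel))
```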

24 pages, 5666 KiB  
Article
Supporting Long-Term Archaeological Research in Southern Romania Chalcolithic Sites Using Multi-Platform UAV Mapping
by Cornelis Stal, Cristina Covataru, Johannes Müller, Valentin Parnic, Theodor Ignat, Robert Hofmann and Catalin Lazar
Drones 2022, 6(10), 277; https://doi.org/10.3390/drones6100277 - 26 Sep 2022
Cited by 8 | Viewed by 4037
Abstract
Spatial data play a crucial role in archaeological research, and orthophotos, digital elevation models, and 3D models are frequently used for the mapping, documentation, and monitoring of archaeological sites. Thanks to the availability of compact and low-cost uncrewed airborne vehicles, the use of UAV-based photogrammetry has matured in this field over the past two decades. More recently, compact airborne systems have also become available that allow the recording of thermal data, multispectral data, and airborne laser scanning data. In this article, various platforms and sensors are applied at Chalcolithic archaeological sites in the Mostiștea Basin and Danube Valley (Southern Romania). By analysing the performance of the systems and the resulting data, insight is given into the selection of the appropriate system for the right application. This analysis requires thorough knowledge of both data acquisition and data processing. As both laser scanning and photogrammetry typically result in very large amounts of data, a special focus is also placed on the storage and publication of the data. Hence, the objective of this article is to provide a full overview of the various aspects of 3D data acquisition for UAV-based mapping. Based on the conclusions drawn in this article, photogrammetry and laser scanning can produce data with similar geometrical properties when acquisition parameters are set appropriately. On the one hand, the ALS-based system used outperforms the photogrammetric platforms in terms of operational time and area covered. On the other hand, conventional photogrammetry provides flexibility that might be required for very low-altitude flights or emergency mapping. Furthermore, as the ALS sensor used only provides a geometrical representation of the topography, photogrammetric sensors are still required to obtain true colour or false colour composites of the surface. Lastly, the variety of data, such as pre- and post-rendered raster data, 3D models, and point clouds, requires the implementation of multiple methods for the online publication of the data. Various client-side and server-side solutions are presented to make the data available to other researchers.
(This article belongs to the Special Issue Drone Inspection in Cultural Heritage)

18 pages, 5360 KiB  
Article
Influence of Flight Height and Image Sensor on the Quality of the UAS Orthophotos for Cadastral Survey Purposes
by Hrvoje Sertić, Rinaldo Paar, Hrvoje Tomić and Fabijan Ravlić
Land 2022, 11(8), 1250; https://doi.org/10.3390/land11081250 - 5 Aug 2022
Cited by 2 | Viewed by 2278
Abstract
The possibility of using unmanned aircraft systems (UAS) for cadastral survey purposes was investigated in this research. A study site consisting of 26 ground control points (GCPs) and checkpoints (CPs) was established. The study site was first measured by the classical methods of geodetic surveying, i.e., by the polar method using a total station. After that, all points were additionally measured by the Global Navigation Satellite System (GNSS) Real-Time Kinematic (RTK) method. The GNSS RTK method was used to determine the coordinates of all points in the official map projection of Croatia, HTRS96/TM, while the polar method was used to increase the positional “strength” of points in all directions, i.e., to improve the relative accuracy between them. Using UASs with different image sensor characteristics, the study site was measured by aerial photogrammetry at different flight heights with the purpose of obtaining a high-quality digital orthophoto plan (DOF). The absolute orientation of the model was performed using the exterior orientation data of each digital image, based on the UAS’s GNSS and Inertial Measurement Unit (IMU) sensors, as well as using GCPs. The precision of the obtained DOFs, as well as the accuracy of the aerial photogrammetry, was assessed by treating the adjusted survey data collected by the classical and GNSS RTK methods as true values and comparing them with the coordinates obtained from the DOFs by the aerial photogrammetry method. Based on the results and conclusions obtained from the study site, a second field test was performed over a small settlement, which served as an area for a cadastral survey using the UAS and GNSS RTK methods. Again, precision and accuracy were determined, based on which we derived recommendations and conclusions for using UASs for cadastral survey purposes.
(This article belongs to the Special Issue Geospatial Data for 4D Land Administration)
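
Accuracy assessment against checkpoints, as described above, usually reports a horizontal RMSE computed from the differences between the surveyed (reference) and orthophoto-derived coordinates. A minimal NumPy sketch, assuming two arrays of easting/northing pairs in the same projection; the values are synthetic:

```python
import numpy as np

def horizontal_accuracy(reference_en: np.ndarray, measured_en: np.ndarray):
    """RMSE per axis and overall horizontal RMSE for checkpoint coordinates.

    reference_en, measured_en : arrays of shape (n, 2) with easting/northing in metres,
    e.g. total-station / GNSS RTK values vs. coordinates digitized from the orthophoto.
    """
    d = measured_en - reference_en
    rmse_e = np.sqrt(np.mean(d[:, 0] ** 2))
    rmse_n = np.sqrt(np.mean(d[:, 1] ** 2))
    rmse_h = np.sqrt(rmse_e ** 2 + rmse_n ** 2)
    return rmse_e, rmse_n, rmse_h

rng = np.random.default_rng(0)
ref = rng.uniform(0, 500, size=(26, 2))          # 26 checkpoints
meas = ref + rng.normal(0, 0.02, size=(26, 2))   # ~2 cm simulated noise
print(horizontal_accuracy(ref, meas))
```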

18 pages, 12482 KiB  
Article
Comparative Assessment of Pixel and Object-Based Approaches for Mapping of Olive Tree Crowns Based on UAV Multispectral Imagery
by Ante Šiljeg, Lovre Panđa, Fran Domazetović, Ivan Marić, Mateo Gašparović, Mirko Borisov and Rina Milošević
Remote Sens. 2022, 14(3), 757; https://doi.org/10.3390/rs14030757 - 6 Feb 2022
Cited by 27 | Viewed by 4303
Abstract
Pixel-based (PB) and geographic object-based (GEOBIA) classification approaches allow the extraction of different objects from multispectral (MS) images. The primary goal of this research was the analysis of UAV imagery applicability and the accuracy assessment of the MLC and SVM classification algorithms within the PB and GEOBIA classification approaches. The secondary goal was to use different accuracy assessment metrics to determine which of the two tested classification algorithms (SVM and MLC) most reliably distinguishes olive tree crowns and which approach is more accurate (PB or GEOBIA). The third goal was to add false polygon samples for the Correctness (COR), Completeness (COM), and Overall Quality (OQ) metrics and use them to calculate the Total Accuracy (TA). The methodology can be divided into six steps, from data acquisition to the selection of the best classification algorithm after accuracy assessment. A high-quality digital orthophoto (DOP) and UAV multispectral (UAVMS) imagery were generated. A new accuracy metric, called Total Accuracy (TA), combined both false and true positive polygon samples, thus providing a more comprehensive insight into the assessed classification accuracy. The SVM (GEOBIA) was the most reliable classification algorithm for extracting olive tree crowns from UAVMS imagery. The assessment indicated that GEOBIA-SVM achieved a TA-COR of 0.527, a TA-COM of 0.811, a TA-OQ of 0.745, an Overall Accuracy (OA) of 0.926 or 0.980, and an Area Under Curve (AUC) value of 0.904 or 0.929. The calculated accuracy metrics confirmed that the GEOBIA approach (SVM and MLC) achieved more accurate olive tree crown extraction than the PB approach (SVM and MLC) when classifying VHR UAVMS imagery. The SVM classification algorithm extracted olive tree crowns more accurately than MLC in both approaches. However, the accuracy assessment showed that PB classification algorithms can also achieve satisfactory accuracy.
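
The Correctness, Completeness, and Overall Quality metrics mentioned above follow the standard definitions based on true positive, false positive, and false negative sample counts. The sketch below computes them for illustrative counts; the paper's Total Accuracy is its own combination of these metrics and is not reproduced here:

```python
def polygon_sample_metrics(tp: int, fp: int, fn: int):
    """Standard object-extraction quality metrics from polygon sample counts.

    Correctness  (COR) = TP / (TP + FP)   -- how many extracted crowns are real
    Completeness (COM) = TP / (TP + FN)   -- how many real crowns were extracted
    Overall Quality (OQ) = TP / (TP + FP + FN)
    """
    cor = tp / (tp + fp)
    com = tp / (tp + fn)
    oq = tp / (tp + fp + fn)
    return cor, com, oq

# Illustrative counts only (not taken from the study).
print(polygon_sample_metrics(tp=160, fp=40, fn=30))   # (0.8, 0.842..., 0.695...)
```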

21 pages, 17528 KiB  
Article
As-Textured As-Built BIM Using Sensor Fusion, Zee Ain Historical Village as a Case Study
by Yahya Alshawabkeh, Ahmad Baik and Ahmad Fallatah
Remote Sens. 2021, 13(24), 5135; https://doi.org/10.3390/rs13245135 - 17 Dec 2021
Cited by 24 | Viewed by 4399
Abstract
The work described in the paper emphasizes the importance of integrating imagery and terrestrial laser scanning (TLS) techniques to optimize the geometry and visual quality of Heritage BIM. The fusion-based workflow was applied during the recording of Zee Ain Historical Village in Saudi Arabia. The village is a unique example of traditional human settlements and represents a complex natural and cultural heritage site. The proposed workflow divides data integration into two levels. At the basic level, UAV photogrammetry, with its enhanced mobility and visibility, is used to map the rugged terrain and supplement TLS point data in upper and inaccessible building zones where data shadows originated. The merging of point clouds ensures that the building’s overall geometry is correctly rebuilt and that data interpretation is improved during HBIM digitization. In addition to correct geometry, texture mapping is particularly important in the area of cultural heritage. Constructing a realistic texture remains a challenge in HBIM because the standard textures and materials provided in BIM libraries do not allow for a reliable representation of heritage structures, so the mapped and shared information is not always truthful. Thereby, at the second level, the workflow proposes a true orthophoto texturing method for HBIM models by combining close-range imagery and laser data. True orthophotos have a uniform scale that depicts all objects in their respective planimetric positions, providing reliable and realistic mapping. The process begins with the development of a Digital Surface Model (DSM) by sampling TLS 3D points in a regular grid, with each cell uniquely associated with a model point. Each DSM cell is then projected into the corresponding perspective imagery in order to map the relevant spectral information. The methods allow for flexible data fusion and image capture using either a TLS-installed camera or a separate camera at the optimal time and viewpoint for radiometric data. The developed workflow demonstrated adequate results in terms of a complete and realistically textured HBIM, allowing for a better understanding of the complex heritage structures.
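
The true orthophoto texturing step above projects each DSM grid cell into a perspective image to pick up its colour. A minimal pinhole-camera sketch of that projection in NumPy, assuming a known camera position, rotation matrix, and focal length in pixels; all values here are synthetic, not the survey's calibration:

```python
import numpy as np

def project_dsm_cell(point_xyz, cam_xyz, R, f_px, cx, cy):
    """Project one DSM cell centre (world coordinates, metres) into a perspective image.

    R is the 3x3 world-to-camera rotation matrix; f_px the focal length in pixels;
    (cx, cy) the principal point. Returns pixel coordinates (u, v) and the depth,
    or None if the point lies behind the camera.
    """
    p_cam = R @ (np.asarray(point_xyz) - np.asarray(cam_xyz))
    if p_cam[2] <= 0:                      # behind the image plane
        return None
    u = cx + f_px * p_cam[0] / p_cam[2]
    v = cy + f_px * p_cam[1] / p_cam[2]
    return u, v, p_cam[2]

# Synthetic example: camera 20 m above a DSM cell, looking straight down.
R_nadir = np.diag([1.0, -1.0, -1.0])       # 180-degree rotation about X: camera axis points down
print(project_dsm_cell(point_xyz=(100.0, 50.0, 300.0),
                       cam_xyz=(99.5, 50.2, 320.0),
                       R=R_nadir, f_px=4000.0, cx=3000.0, cy=2000.0))
```

Sampling the image colour at the returned (u, v), after an occlusion check, would fill the corresponding cell of the true orthophoto.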

25 pages, 13116 KiB  
Article
Color and Laser Data as a Complementary Approach for Heritage Documentation
by Yahya Alshawabkeh
Remote Sens. 2020, 12(20), 3465; https://doi.org/10.3390/rs12203465 - 21 Oct 2020
Cited by 11 | Viewed by 3688
Abstract
Heritage recording has received much attention and has benefited from recent developments in the field of range and imaging sensors. While these methods have often been viewed as two different methodologies, data integration can yield products that are not always achievable with a single technique. Data integration in this paper can be divided into two levels: laser scanner data aided by photogrammetry, and photogrammetry aided by scanner data. At the first level, the superior radiometric information, mobility, and accessibility of imagery can be actively used to add texture information and allow for new possibilities in terms of data interpretation and the completeness of complex site documentation. At the second level, a true orthophoto is generated based on laser data; the results are rectified images with a uniform scale representing all objects at their planimetric positions. The proposed approaches enable flexible data fusion and allow images to be taken at the optimum time and position for radiometric information. Data fusion usually involves serious distortions in the form of double mapping of occluded objects, which affects product quality. In order to enhance the efficiency of visibility analysis in complex structures, a proposed visibility algorithm is implemented in the developed methods of texture mapping and true orthophoto generation. The algorithm filters occluded areas based on patch processing using a square grid unit set around the projected vertices. The depth of the mapped triangular vertices within the patch neighborhood is calculated to assign the visible one. In this contribution, experimental results from different historical sites in Jordan are presented as a validation of the proposed algorithms. The algorithms show satisfactory performance in terms of the completeness and correctness of occlusion detection and spectral information mapping. The results indicate that hybrid methods can be used efficiently in the representation of heritage structures.
(This article belongs to the Special Issue Sensors & Methods in Cultural Heritage)
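
The visibility algorithm summarized above is essentially a depth-buffer test performed per grid patch: among the vertices projecting into the same image-plane cell, only the nearest is kept as visible. A minimal sketch of that idea in NumPy; the patch size and depth tolerance are illustrative parameters, not the paper's values:

```python
import numpy as np

def visible_mask(u, v, depth, patch_px=5, tol=0.05):
    """Flag projected vertices as visible or occluded via a per-patch depth buffer.

    u, v   : pixel coordinates of projected mesh vertices (1D arrays).
    depth  : distance of each vertex from the camera.
    A vertex is visible if its depth is within `tol` of the minimum depth
    among all vertices falling in the same patch of `patch_px` x `patch_px` pixels.
    """
    keys = np.stack([np.floor(u / patch_px), np.floor(v / patch_px)], axis=1).astype(int)
    min_depth = {}
    for k, d in zip(map(tuple, keys), depth):
        min_depth[k] = min(min_depth.get(k, np.inf), d)
    return np.array([d <= min_depth[tuple(k)] + tol for k, d in zip(keys, depth)])

# Toy example: two vertices project into the same patch; the farther one is occluded.
u = np.array([100.2, 101.7, 300.0])
v = np.array([50.1, 52.3, 80.0])
depth = np.array([12.0, 19.5, 15.0])
print(visible_mask(u, v, depth))   # [ True False  True]
```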

14 pages, 4617 KiB  
Letter
Towards Real-Time Building Damage Mapping with Low-Cost UAV Solutions
by Francesco Nex, Diogo Duarte, Anne Steenbeek and Norman Kerle
Remote Sens. 2019, 11(3), 287; https://doi.org/10.3390/rs11030287 - 1 Feb 2019
Cited by 99 | Viewed by 7603
Abstract
The timely and efficient generation of detailed damage maps is of fundamental importance following disaster events to speed up first responders’ (FRs) rescue activities and help trapped victims. Several works dealing with the automated detection of building damage have been published in the last decade. The increasingly widespread availability of inexpensive UAV platforms has also driven their recent adoption for rescue operations (i.e., search and rescue). Their deployment, however, remains largely limited to visual image inspection by skilled operators, which limits their applicability under time-constrained real-world conditions. This paper proposes a new solution to autonomously map building damage with a commercial UAV in near real-time. The solution integrates different components that allow the live streaming of the images to a laptop and their processing on the fly. Advanced photogrammetric techniques and deep learning algorithms are combined to deliver a true orthophoto showing the position of building damage, already processed by the time the UAV returns to base. These algorithms have been customized to deliver fast results, fulfilling the near real-time requirements. The complete solution has been tested in different conditions and received positive feedback from the FRs involved in the EU-funded project INACHUS. Two realistic pilot tests are described in the paper. The achieved results show the great potential of the presented approach, how close the proposed solution is to FRs’ expectations, and where more work is still needed.

15 pages, 10573 KiB  
Article
Generating a High-Precision True Digital Orthophoto Map Based on UAV Images
by Yu Liu, Xinqi Zheng, Gang Ai, Yi Zhang and Yuqiang Zuo
ISPRS Int. J. Geo-Inf. 2018, 7(9), 333; https://doi.org/10.3390/ijgi7090333 - 21 Aug 2018
Cited by 71 | Viewed by 11039
Abstract
Unmanned aerial vehicle (UAV) low-altitude remote sensing technology has recently been adopted in China. However, the mapping accuracy and production processes of true digital orthophoto maps (TDOMs) generated from UAV images require further improvement. In this study, ground control points were distributed and images were collected using a multi-rotor UAV and a professional camera, at a flight height of 160 m above the ground and a designed ground sample distance (GSD) of 0.016 m. A workflow comprising structure from motion (SfM), a revised digital surface model (DSM), and multi-view image texture compensation was outlined to generate a high-precision TDOM. We then used randomly distributed checkpoints on the TDOM to verify its precision. The horizontal accuracy of the generated TDOM was 0.0365 m, the vertical accuracy was 0.0323 m, and the GSD was 0.0166 m. Tilted and shadowed areas of the TDOM were eliminated so that buildings maintained vertical viewing angles. This workflow produced a TDOM with an accuracy within 0.05 m and provided an effective method for identifying rural homesteads, as well as for land planning and design.
(This article belongs to the Special Issue Applications and Potential of UAV Photogrammetric Survey)
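
The designed GSD quoted above follows from the usual relation GSD = flight height x sensor pixel size / focal length. A small worked check for the reported 160 m flight height, with assumed camera parameters; the pixel pitch and focal length below are placeholders, not the camera actually flown:

```python
def ground_sample_distance(height_m: float, pixel_pitch_um: float, focal_mm: float) -> float:
    """GSD in metres per pixel: flight height times pixel pitch divided by focal length."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

# Assumed camera: 4.5 um pixels, 45 mm lens (illustrative values only).
print(round(ground_sample_distance(160.0, 4.5, 45.0), 4))   # 0.016 m per pixel
```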
