Search Results (112)

Search Parameters:
Keywords = oblique aerial images

22 pages, 32941 KiB  
Article
Assessment of Building Vulnerability to Tsunami in Ancon Bay, Peru, Using High-Resolution Unmanned Aerial Vehicle Imagery and Numerical Simulation
by Carlos Davila, Angel Quesquen, Fernando Garcia, Brigitte Puchoc, Oscar Solis, Julian Palacios, Jorge Morales and Miguel Estrada
Drones 2025, 9(6), 402; https://doi.org/10.3390/drones9060402 - 29 May 2025
Viewed by 2671
Abstract
Traditional tsunami vulnerability assessments often rely on empirical models and field surveys, which can be time-consuming and of limited accuracy. In this study, we propose a novel approach that integrates high-resolution Unmanned Aerial Vehicle (UAV) photogrammetry with numerical simulation to improve vulnerability assessment efficacy in Ancon Bay, Lima, Peru, using the Papathoma Tsunami Vulnerability Assessment (PTVA-4) model. For this purpose, a detailed 3D representation of the study area was generated using UAV-based oblique photogrammetry, enabling the extraction of building attributes. Additionally, a high-resolution numerical tsunami simulation was conducted using the TUNAMI-N2 model for a potential worst-case scenario associated with the Central Peru subduction zone, incorporating topographic and land-use data obtained with UAV-based nadir photogrammetry. The results indicate that the northern region of Ancon Bay exhibits higher relative vulnerability levels due to greater inundation depths and more tsunami-prone building attributes. UAV-based assessments provide a rapid and detailed method for evaluating building vulnerability. These findings indicate that the proposed methodology is a valuable tool for supporting coastal risk planning and disaster preparedness in tsunami-prone areas. Full article
(This article belongs to the Special Issue Drones for Natural Hazards)

25 pages, 4496 KiB  
Article
Assessment of Photogrammetric Performance Test on Large Areas by Using a Rolling Shutter Camera Equipped in a Multi-Rotor UAV
by Alba Nely Arévalo-Verjel, José Luis Lerma, Juan Pedro Carbonell-Rivera, Juan F. Prieto and José Fernández
Appl. Sci. 2025, 15(9), 5035; https://doi.org/10.3390/app15095035 - 1 May 2025
Viewed by 799
Abstract
The generation of digital aerial photogrammetry products using unmanned aerial vehicle-digital aerial photogrammetry (UAV-DAP) has become an essential task due to the increasing use of UAVs in the world of geomatics, thanks to their low cost and high spatial resolution. It is therefore relevant to explore the performance of new digital cameras mounted on UAVs that use electronic rolling shutters instead of ideal mechanical or global shutter cameras, to achieve accurate and reliable photogrammetric products while minimizing workload, especially in projects that require a high level of detail. In this paper, we analyse performance using oblique images along the perimeter (3D perimeter) on a flat area, i.e., with slopes of less than 3%. The area was photogrammetrically surveyed with a DJI (Dà-Jiāng Innovations) Inspire 2 multirotor UAV equipped with a Zenmuse X5S rolling shutter camera. The photogrammetric survey was accompanied by a Global Navigation Satellite System (GNSS) survey, in which dual-frequency receivers were used to determine the ground control points (GCPs) and checkpoints (CPs). The study analysed different scenarios, including combinations of forward and transversal strips and oblique images. After examining the ideal scenario with the least root mean square error (RMSE), six different combinations were analysed to find the best location for the GCPs. The most significant results indicate that optimal camera calibration is obtained in scenarios that include oblique images, which outperform the rest in achieving the lowest RMSE (2.5x the GSD in Z and 3.0x the GSD in XYZ) with an optimum GCP layout; with a non-ideal GCP layout, unacceptable errors can result (11.4x the GSD in XYZ), even with ideal block geometry. The UAV-DAP rolling shutter effect can only be minimised in the scenario that uses oblique images and GCPs at the edges of the overlapping zones and the perimeter. Full article
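Since this abstract reports its accuracy as multiples of the ground sample distance (GSD), a minimal sketch converting those multipliers into ground units may help; the GSD value below is an assumed example, not a figure from the study.

```python
# Convert RMSE figures reported as k x GSD into centimetres on the ground.
# The 3 cm/pixel GSD is an assumed illustrative value.

def rmse_from_gsd(gsd_cm: float, multiplier: float) -> float:
    """RMSE in centimetres for an error expressed as multiplier x GSD."""
    return gsd_cm * multiplier

gsd = 3.0  # assumed GSD: 3 cm/pixel
print(rmse_from_gsd(gsd, 2.5))    # optimal GCP layout, Z
print(rmse_from_gsd(gsd, 3.0))    # optimal GCP layout, XYZ
print(rmse_from_gsd(gsd, 11.4))   # non-ideal GCP layout, XYZ
```

At a 3 cm GSD, the gap between the optimal and non-ideal GCP layouts is the difference between roughly 9 cm and 34 cm of XYZ error.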
(This article belongs to the Special Issue Technical Advances in UAV Photogrammetry and Remote Sensing)

22 pages, 9648 KiB  
Article
Three-Dimensional Real-Scene-Enhanced GNSS/Intelligent Vision Surface Deformation Monitoring System
by Yuanrong He, Weijie Yang, Qun Su, Qiuhua He, Hongxin Li, Shuhang Lin and Shaochang Zhu
Appl. Sci. 2025, 15(9), 4983; https://doi.org/10.3390/app15094983 - 30 Apr 2025
Viewed by 671
Abstract
With the acceleration of urbanization, surface deformation monitoring has become crucial. Existing monitoring systems face several challenges, such as data singularity, the poor nighttime monitoring quality of video surveillance, and fragmented visual data. To address these issues, this paper presents a 3D real-scene (3DRS)-enhanced GNSS/intelligent vision surface deformation monitoring system. The system integrates GNSS monitoring terminals and multi-source meteorological sensors to accurately capture minute displacements at monitoring points and multi-source Internet of Things (IoT) data, which are then automatically stored in MySQL databases. To enhance the functionality of the system, the visual sensor data are fused with 3D models through streaming media technology, enabling 3D real-scene augmented reality to support dynamic deformation monitoring and visual analysis. WebSocket-based remote lighting control is implemented to enhance the quality of video data at night. The spatiotemporal fusion of UAV aerial data with 3D models is achieved through Blender image-based rendering, while edge detection is employed to extract crack parameters from intelligent inspection vehicle data. The 3DRS model is constructed through UAV oblique photography, 3D laser scanning, and the combined use of SVSGeoModeler and SketchUp. A visualization platform for surface deformation monitoring is built on the 3DRS foundation, adopting an “edge collection–cloud fusion–terminal interaction” approach. This platform dynamically superimposes GNSS and multi-source IoT monitoring data onto the 3D spatial base, enabling spatiotemporal correlation analysis of millimeter-level displacements and early risk warning. Full article

23 pages, 20311 KiB  
Article
Bridge Geometric Shape Measurement Using LiDAR–Camera Fusion Mapping and Learning-Based Segmentation Method
by Shang Jiang, Yifan Yang, Siyang Gu, Jiahui Li and Yingyan Hou
Buildings 2025, 15(9), 1458; https://doi.org/10.3390/buildings15091458 - 25 Apr 2025
Cited by 2 | Viewed by 784
Abstract
The rapid measurement of three-dimensional bridge geometric shapes is crucial for assessing construction quality and in-service structural conditions. Existing geometric shape measurement methods predominantly rely on traditional surveying instruments, which suffer from low efficiency and are limited to sparse point sampling. This study proposes a novel framework that utilizes an airborne LiDAR–camera fusion system for data acquisition, reconstructs high-precision 3D bridge models through real-time mapping, and automatically extracts structural geometric shapes using deep learning. The main contributions include the following: (1) A synchronized LiDAR–camera fusion system integrated with an unmanned aerial vehicle (UAV) and a microprocessor was developed, enabling the flexible and large-scale acquisition of bridge images and point clouds; (2) A multi-sensor fusion mapping method coupling visual-inertial odometry (VIO) and LiDAR-inertial odometry (LIO) was implemented to robustly construct 3D bridge point clouds in real time; and (3) An instance segmentation network-based approach was proposed to detect key structural components in images, with detected geometric shapes projected from image coordinates to 3D space using LiDAR–camera calibration parameters, addressing challenges in automated large-scale point cloud analysis. The proposed method was validated through geometric shape measurements on a concrete arch bridge. The results demonstrate that, compared to the oblique photogrammetry method, the proposed approach reduces errors by 77.13%, while its detection time is 4.18% of that required by a stationary laser scanner and 0.29% of that needed for oblique photogrammetry. Full article
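The projection step this abstract describes — lifting an image detection into 3D using calibration parameters and a depth — can be sketched with the standard pinhole relations X_cam = d·K⁻¹[u, v, 1]ᵀ, X_world = R·X_cam + t. This is a generic sketch, not the authors' code; all numeric values below are assumed examples.

```python
# Back-project a detected pixel (u, v) with LiDAR-derived depth into the
# world (map) frame via assumed camera intrinsics and extrinsics.

def pixel_to_world(u, v, depth, fx, fy, cx, cy, R, t):
    # Pinhole back-projection into the camera frame.
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    cam = (x, y, depth)
    # Rotate and translate into the world frame.
    return tuple(sum(R[i][j] * cam[j] for j in range(3)) + t[i]
                 for i in range(3))

# Assumed example calibration: principal point at the image centre,
# identity rotation, zero translation.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
print(pixel_to_world(960, 540, 10.0, 1000.0, 1000.0, 960.0, 540.0, R, t))
# -> (0.0, 0.0, 10.0): the centre pixel maps straight down the optical axis
```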
(This article belongs to the Special Issue Urban Infrastructure and Resilient, Sustainable Buildings)

16 pages, 11425 KiB  
Article
Unmanned Aerial Vehicles Applicability to Mapping Soil Properties Under Homogeneous Steppe Vegetation
by Azamat Suleymanov, Mikhail Komissarov, Mikhail Aivazyan, Ruslan Suleymanov, Ilnur Bikbaev, Arseniy Garipov, Raphak Giniyatullin, Olesia Ishkinina, Iren Tuktarova and Larisa Belan
Land 2025, 14(5), 931; https://doi.org/10.3390/land14050931 - 25 Apr 2025
Viewed by 814
Abstract
Unmanned aerial vehicles (UAVs) are rapidly becoming a popular tool for digital soil mapping at a large scale. However, their applicability in areas with homogeneous vegetation (i.e., not bare soil) has not been fully investigated. In this study, we aimed to predict soil organic carbon and soil texture at several depths, as well as the thickness of the AB soil horizon and penetration resistance, using a machine learning algorithm in combination with UAV images. We used an area in the Eurasian steppe zone (Republic of Bashkortostan, Russia) covered with the Stipa vegetation type as a test plot, and collected 192 soil samples from it. We evaluated the models using a cross-validation approach and spatial prediction uncertainties. To improve the prediction performance, we also tested the inclusion of oblique geographic coordinates (OGCs) as covariates that reflect spatial position. The following results were achieved: (i) the predictive models demonstrated poor performance using only UAV images as predictors; (ii) the incorporation of OGCs slightly improved the predictions, whereas their uncertainties remained high. We conclude that the inability to accurately predict soil properties using these predictor variables (UAV and OGC) is likely due to the limited access to soil spectral signatures and the high variability of soil properties within what appears to be a homogeneous site, particularly in relation to soil-forming factors. Our results demonstrate the limitations of applying UAVs to model soil properties on a site with homogeneous vegetation, whereas including spatial autocorrelation information can be beneficial and should not be ignored in further studies. Full article
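One common construction of the oblique geographic coordinates (OGCs) mentioned in this abstract — assumed here, not necessarily the authors' exact formulation — projects each point's (x, y) position onto a set of rotated axes, giving simple covariates that encode spatial position:

```python
import math

# OGC covariates: projections of (x, y) onto axes rotated by evenly spaced
# angles in [0, pi). The number of angles is an illustrative choice.

def ogc_features(x, y, n_angles=6):
    feats = []
    for k in range(n_angles):
        theta = math.pi * k / n_angles
        feats.append(x * math.cos(theta) + y * math.sin(theta))
    return feats

# For theta = 0 the covariate reduces to the plain x coordinate.
print(ogc_features(100.0, 50.0)[0])  # 100.0
```

Each feature varies linearly along one direction, so a tree-based learner can approximate spatial trends in any orientation from a small set of such covariates.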
(This article belongs to the Special Issue Digital Soil Mapping for Soil Health Monitoring in Agricultural Lands)

25 pages, 34678 KiB  
Article
Historical Coast Snaps: Using Centennial Imagery to Track Shoreline Change
by Fátima Valverde, Rui Taborda, Amy E. East and Cristina Ponte Lira
Remote Sens. 2025, 17(8), 1326; https://doi.org/10.3390/rs17081326 - 8 Apr 2025
Viewed by 910
Abstract
Understanding long-term coastal evolution requires historical data, yet accessing reliable information becomes increasingly challenging for extended periods. While vertical aerial imagery has been extensively used in coastal studies since the mid-20th century, and satellite-derived shoreline measurements are now revolutionizing shoreline change studies, ground-based images, such as historical photographs and picture postcards, provide an alternative source of shoreline data for earlier periods when other datasets are scarce. Despite their frequent use for documenting qualitative morphological changes, these valuable historical data sources have rarely supported quantitative assessments of coastal evolution. This study demonstrates the potential of historical ground-oblique images for quantitatively assessing shoreline position and long-term change. Using Conceição-Duquesa Beach (Cascais, Portugal) as a case study, we analyze shoreline evolution over 92 years by applying a novel methodology to historical photographs and postcards. The approach combines image registration, shoreline detection, coordinate transformation, and rectification while accounting for positional uncertainty. Results reveal a significant counterclockwise rotation of the shoreline between the 20th and 21st centuries, exceeding estimated uncertainty thresholds. This study highlights the feasibility of using historical ground-based imagery to reconstruct shoreline positions and quantify long-term coastal change. The methodology is straightforward, adaptable, and offers a promising avenue for extending the temporal range of shoreline datasets, advancing our understanding of coastal evolution. Full article
(This article belongs to the Special Issue Advances in Remote Sensing of the Inland and Coastal Water Zones II)

25 pages, 13237 KiB  
Article
A High-Precision Virtual Central Projection Image Generation Method for an Aerial Dual-Camera
by Xingzhou Luo, Haitao Zhao, Yaping Liu, Nannan Liu, Jiang Chen, Hong Yang and Jie Pan
Remote Sens. 2025, 17(4), 683; https://doi.org/10.3390/rs17040683 - 17 Feb 2025
Viewed by 730
Abstract
Aerial optical cameras are the primary method for capturing high-resolution images to produce large-scale mapping products. To improve aerial photography efficiency, multiple cameras are often used in combination to generate large-format virtual central projection images. This paper presents a high-precision method for directly transforming raw images obtained from a dual-camera system mounted at an oblique angle into virtual central projection images, thereby enabling the construction of low-cost, large-format aerial camera systems. The method begins with adaptive sub-blocking of the overlapping regions of the raw images to extract evenly distributed feature points, followed by iterative relative orientation to improve accuracy and reliability. A global projection transformation matrix is constructed, and the sigmoid function is employed as a weighted distance function for image stitching. The results demonstrate that the proposed method produces more evenly distributed feature points, higher relative orientation accuracy, and greater reliability. Simulation analysis of image overlap indicates that when the overlap exceeds 7%, stitching accuracy can be better than 1.25 μm. The aerial triangulation results demonstrate that the virtual central projection images satisfy the criteria for the production of 1:1000 scale mapping products. Full article
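The sigmoid weighting this abstract mentions can be sketched as a blend of the two images' pixel values in the overlap, with weights driven by the signed distance from the seam; the steepness parameter and values below are assumptions for illustration, not from the paper.

```python
import math

# Sigmoid-weighted blending across an overlap region. `d` is the signed
# distance from the seam (positive toward the left image); `a` controls
# how quickly the transition happens and is an assumed value.

def sigmoid_weight(d, a=1.0):
    """Weight given to the left image's pixel."""
    return 1.0 / (1.0 + math.exp(-a * d))

def blend(left_pix, right_pix, d):
    w = sigmoid_weight(d)
    return w * left_pix + (1.0 - w) * right_pix

print(blend(200.0, 100.0, 0.0))  # 150.0: equal weights exactly at the seam
```

Far from the seam the weight saturates toward 0 or 1, so each image dominates its own side and the transition stays smooth, which is the point of using a sigmoid rather than a hard cut.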

36 pages, 25347 KiB  
Article
Construction of a Real-Scene 3D Digital Campus Using a Multi-Source Data Fusion: A Case Study of Lanzhou Jiaotong University
by Rui Gao, Guanghui Yan, Yingzhi Wang, Tianfeng Yan, Ruiting Niu and Chunyang Tang
ISPRS Int. J. Geo-Inf. 2025, 14(1), 19; https://doi.org/10.3390/ijgi14010019 - 3 Jan 2025
Cited by 1 | Viewed by 2321
Abstract
Real-scene 3D digital campuses are essential for improving the accuracy and effectiveness of spatial data representation, facilitating informed decision-making for university administrators, optimizing resource management, and enriching user engagement for students and faculty. However, current approaches to constructing these digital environments face several challenges. They often rely on costly commercial platforms, struggle with integrating heterogeneous datasets, and require complex workflows to achieve both high precision and comprehensive campus coverage. This paper addresses these issues by proposing a systematic multi-source data fusion approach that employs open-source technologies to generate a real-scene 3D digital campus. A case study of Lanzhou Jiaotong University is presented to demonstrate the feasibility of this approach. Firstly, oblique photography based on unmanned aerial vehicles (UAVs) is used to capture large-scale, high-resolution images of the campus area, which are then processed using open-source software to generate an initial 3D model. Afterward, a high-resolution model of the campus buildings is created by integrating the UAV data, while a 3D Digital Elevation Model (DEM) and OpenStreetMap (OSM) building data provide a 3D overview of the surrounding campus area, resulting in a comprehensive 3D model for a real-scene digital campus. Finally, the 3D model is visualized on the web using Cesium, which enables functionalities such as real-time data loading, perspective switching, and spatial data querying. Results indicate that the proposed approach effectively eliminates reliance on expensive proprietary systems while rapidly and accurately reconstructing a real-scene digital campus. This framework not only streamlines data harmonization but also offers an open-source, practical, cost-effective solution for real-scene 3D digital campus construction, promoting further research and applications in digital twin cities, Virtual Reality (VR), and Geographic Information Systems (GIS). Full article

17 pages, 9384 KiB  
Article
Multi-Spectral Point Cloud Constructed with Advanced UAV Technique for Anisotropic Reflectance Analysis of Maize Leaves
by Kaiyi Bi, Yifang Niu, Hao Yang, Zheng Niu, Yishuo Hao and Li Wang
Remote Sens. 2025, 17(1), 93; https://doi.org/10.3390/rs17010093 - 30 Dec 2024
Viewed by 900
Abstract
Reflectance anisotropy in remote sensing images can complicate the interpretation of spectral signatures, and extracting precise structural information for the affected pixels is a promising way to address this. Low-altitude unmanned aerial vehicle (UAV) systems can capture high-resolution imagery down to centimeter-level detail, potentially simplifying the characterization of leaf anisotropic reflectance. We proposed a novel maize point cloud generation method that combines an advanced UAV cross-circling oblique (CCO) photography route with the Structure from Motion–Multi-View Stereo (SfM-MVS) algorithm. A multi-spectral point cloud was then generated by fusing multi-spectral imagery with the point cloud using a DSM-based approach. Finally, the Rahman–Pinty–Verstraete (RPV) model was applied to establish maize leaf-level anisotropic reflectance models. Our results indicated a high degree of similarity between measured and estimated maize structural parameters (R2 = 0.89 for leaf length and 0.96 for plant height) based on accurate point cloud data obtained from the CCO route. Most data points clustered around the principal plane due to a constant angle between the sun and view vectors, resulting in a limited range of view azimuths. Leaf reflectance anisotropy was characterized by the RPV model with R2 ranging from 0.38 to 0.75 for five wavelength bands. These findings hold significant promise for promoting the decoupling of plant structural information and leaf optical characteristics within remote sensing data. Full article
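For reference, the RPV model named in this abstract is commonly written in the literature in a form like the following; this is a standard parameterization from the general BRDF literature and may differ in detail (sign conventions, hot-spot term) from the exact variant the authors fitted:

```latex
% Standard RPV parameterization (general literature form)
\rho(\theta_s,\theta_v,\phi) \;=\; \rho_0\,
  \frac{\cos^{k-1}\theta_s\,\cos^{k-1}\theta_v}{(\cos\theta_s+\cos\theta_v)^{1-k}}\,
  \frac{1-\Theta^2}{\bigl(1+\Theta^2+2\Theta\cos g\bigr)^{3/2}}\,
  \Bigl[1+\frac{1-\rho_0}{1+G}\Bigr],
\qquad
G=\sqrt{\tan^2\theta_s+\tan^2\theta_v-2\tan\theta_s\tan\theta_v\cos\phi}
```

Here θ_s and θ_v are the sun and view zenith angles, φ the relative azimuth, g the phase angle, and ρ_0, k, Θ the three fitted parameters that together describe the anisotropy.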

14 pages, 5971 KiB  
Article
Flight Altitude and Sensor Angle Affect Unmanned Aerial System Cotton Plant Height Assessments
by Oluwatola Adedeji, Alwaseela Abdalla, Bishnu Ghimire, Glen Ritchie and Wenxuan Guo
Drones 2024, 8(12), 746; https://doi.org/10.3390/drones8120746 - 10 Dec 2024
Cited by 2 | Viewed by 1670
Abstract
Plant height is a critical biophysical trait indicative of plant growth and developmental conditions and is valuable for biomass estimation and crop yield prediction. This study examined the effects of flight altitude and camera angle in quantifying cotton plant height using unmanned aerial system (UAS) imagery. The study was conducted in a field with a sub-surface irrigation system in Lubbock, Texas, between 2022 and 2023. Images were collected with a DJI Phantom 4 RTK at two altitudes (40 m and 80 m) and three sensor angles (45°, 60°, and 90°) at different growth stages, yielding six scenarios of UAS altitude and camera angle. The derived plant height was calculated as the vertical difference between the apical region of the plant and the ground elevation. Linear regression compared UAS-derived heights to manual measurements from 96 plots. Lower altitudes (40 m) outperformed higher altitudes (80 m) across all dates. For the early season (4 July 2023), the 40 m altitude had r2 = 0.82–0.86 and RMSE = 2.02–2.16 cm, compared to 80 m (r2 = 0.66–0.68, RMSE = 7.52–8.76 cm). Oblique angles (45°) yielded higher accuracy than nadir (90°) images, especially in the late-season (24 October 2022) results (r2 = 0.96, RMSE = 2.95 cm vs. r2 = 0.92, RMSE = 3.54 cm). These findings guide the choice of optimal UAS parameters for plant height measurement. Full article
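The validation step described here — regressing UAS-derived heights against manual measurements and reporting r² and RMSE — can be sketched with ordinary least squares; the data values below are made up for illustration, not from the study's 96 plots.

```python
# Simple linear regression of manual heights on UAS-derived heights,
# returning the r^2 and RMSE used to compare flight scenarios.

def fit_and_score(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                 # slope
    a = my - b * mx               # intercept
    pred = [a + b * xi for xi in x]
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    rmse = (ss_res / n) ** 0.5
    return r2, rmse

uas    = [52.0, 61.0, 70.0, 84.0, 95.0]   # hypothetical UAS heights (cm)
manual = [54.0, 60.0, 72.0, 85.0, 93.0]   # hypothetical manual heights (cm)
r2, rmse = fit_and_score(uas, manual)
print(round(r2, 3), round(rmse, 2))
```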
(This article belongs to the Special Issue Advances of UAV Remote Sensing for Plant Phenology)

16 pages, 21810 KiB  
Article
Enhancing Direct Georeferencing Using Real-Time Kinematic UAVs and Structure from Motion-Based Photogrammetry for Large-Scale Infrastructure
by Soohee Han and Dongyeob Han
Drones 2024, 8(12), 736; https://doi.org/10.3390/drones8120736 - 5 Dec 2024
Cited by 2 | Viewed by 1545
Abstract
The growing demand for high-accuracy mapping and 3D modeling using unmanned aerial vehicles (UAVs) has accelerated advancements in flight dynamics, positioning accuracy, and imaging technology. Structure from motion (SfM), a computer vision-based approach, is increasingly replacing traditional photogrammetry by facilitating the automation of processes such as aerial triangulation (AT), terrain modeling, and orthomosaic generation. This study examines methods to enhance the accuracy of SfM-based AT through real-time kinematic (RTK) UAV imagery, focusing on large-scale infrastructure applications, including a dam and its entire basin. The target area, primarily consisting of homogeneous water surfaces, poses considerable challenges for feature point extraction and image matching, which are crucial for effective SfM. To overcome these challenges and improve the AT accuracy, a constraint equation was applied, incorporating weighted 3D coordinates derived from RTK UAV data. Furthermore, oblique images were combined with nadir images to stabilize AT, and confidence-based filtering was applied to point clouds to enhance geometric quality. The results indicate that assigning appropriate weights to 3D coordinates and incorporating oblique imagery significantly improve the AT accuracy. This approach presents promising advancements for RTK UAV-based AT in SfM-challenging, large-scale environments, thus supporting more efficient and precise mapping applications. Full article

25 pages, 12807 KiB  
Article
Improving Real-Scene 3D Model Quality of Unmanned Aerial Vehicle Oblique-Photogrammetry with a Ground Camera
by Jinghai Xu, Suya Zhang, Haoran Jing, Craig Hancock, Peng Qiao, Nan Shen and Karen B. Blay
Remote Sens. 2024, 16(21), 3933; https://doi.org/10.3390/rs16213933 - 22 Oct 2024
Cited by 3 | Viewed by 2465
Abstract
In UAV (unmanned aerial vehicle) oblique photogrammetry, the occlusion of ground objects, particularly at their base, often results in low-quality real-scene 3D models. To address this issue, we propose a method to enhance model quality by integrating ground-based camera images. This image acquisition method allows the rephotographing of areas of the 3D model that exhibit poor quality. Three critical parameters for reshooting are the reshooting distance and the front- and side-overlap ratios of the reshooting images. The proposed method for improving 3D model quality focuses on point accuracy, dimensional accuracy, texture details, and the triangular mesh structure. Validated by a case study involving a complex building, this method demonstrates that integrating ground camera photos significantly improves the overall quality of the 3D model. The findings show that optimal settings for reshooting include a distance (in meters) of 1.5–1.6 times the camera's focal length (in millimeters), a front-overlap ratio of 30%, and a side-overlap ratio of 20%. Furthermore, we conclude that an overlap rate of 20–30% in reshooting is workable. Full article
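The distance rule of thumb reported here mixes units (metres of distance per millimetre of focal length), so a short worked example may help; the focal lengths below are arbitrary illustrations, not values from the paper.

```python
# Reshooting distance in metres as 1.5-1.6 x focal length in millimetres,
# per the rule of thumb reported in the abstract.

def reshoot_distance_m(focal_mm: float, factor: float = 1.5) -> float:
    return factor * focal_mm

for f in (16, 24, 35):  # example focal lengths in mm
    print(f, reshoot_distance_m(f), reshoot_distance_m(f, 1.6))
```

For a 16 mm lens, for instance, the rule suggests standing roughly 24–25.6 m from the facade being rephotographed.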

38 pages, 98377 KiB  
Article
FaSS-MVS: Fast Multi-View Stereo with Surface-Aware Semi-Global Matching from UAV-Borne Monocular Imagery
by Boitumelo Ruf, Martin Weinmann and Stefan Hinz
Sensors 2024, 24(19), 6397; https://doi.org/10.3390/s24196397 - 2 Oct 2024
Viewed by 1422
Abstract
With FaSS-MVS, we present a fast, surface-aware semi-global optimization approach for multi-view stereo that allows for rapid depth and normal map estimation from monocular aerial video data captured by unmanned aerial vehicles (UAVs). The data estimated by FaSS-MVS, in turn, facilitate online 3D mapping, meaning that a 3D map of the scene is immediately and incrementally generated as the image data are acquired or being received. FaSS-MVS is composed of a hierarchical processing scheme in which depth and normal data, as well as corresponding confidence scores, are estimated in a coarse-to-fine manner, allowing efficient processing of large scene depths, such as those inherent in oblique images acquired by UAVs flying at low altitudes. The actual depth estimation uses a plane-sweep algorithm for dense multi-image matching to produce depth hypotheses from which the actual depth map is extracted by means of a surface-aware semi-global optimization, reducing the fronto-parallel bias of Semi-Global Matching (SGM). Given the estimated depth map, the pixel-wise surface normal information is then computed by reprojecting the depth map into a point cloud and computing the normal vectors within a confined local neighborhood. In a thorough quantitative and ablative study, we show that the accuracy of the 3D information computed by FaSS-MVS is close to that of state-of-the-art offline multi-view stereo approaches, with the error not even an order of magnitude higher than that of COLMAP. At the same time, however, the average runtime of FaSS-MVS for estimating a single depth and normal map is less than 14% of that of COLMAP, allowing us to perform online and incremental processing of full HD images at 1–2 Hz. Full article
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)

16 pages, 15437 KiB  
Article
Digital Construction Preservation Techniques of Endangered Heritage Architecture: A Detailed Reconstruction Process of the Dong Ethnicity Drum Tower (China)
by Wantao Huang, Xiang Gao and Jiaguo Lu
Drones 2024, 8(9), 502; https://doi.org/10.3390/drones8090502 - 19 Sep 2024
Cited by 1 | Viewed by 2170
Abstract
This study suggests a pioneering conservation framework that significantly enhances the preservation, renovation, and restoration of heritage architecture through the integration of contemporary digital technologies. Focusing on the endangered drum towers of the Dong ethnic group in Southwestern China, the research employs a meticulous data collection process that combines manual measurements with precise 2D imaging and oblique unmanned aerial vehicle (UAV) photography, enabling comprehensive documentation of tower interiors and exteriors. Collaboration with local experts in drum tower construction not only enriches the data gathered but also provides profound insights into the architectural nuances of these structures. An accurate building information modeling (BIM) simulation illuminates the internal engineering details, deepening the understanding of their complex design. Furthermore, UAV-obtained point cloud data facilitate a 3D reconstruction of the tower’s exterior. This innovative approach to heritage preservation not only advances the documentation and comprehension of heritage structures but also presents a scalable, replicable model for cultural conservation globally, paving the way for future research in the field. Full article

24 pages, 16499 KiB  
Article
Estimating Maize Crop Height and Aboveground Biomass Using Multi-Source Unmanned Aerial Vehicle Remote Sensing and Optuna-Optimized Ensemble Learning Algorithms
by Yafeng Li, Changchun Li, Qian Cheng, Fuyi Duan, Weiguang Zhai, Zongpeng Li, Bohan Mao, Fan Ding, Xiaohui Kuang and Zhen Chen
Remote Sens. 2024, 16(17), 3176; https://doi.org/10.3390/rs16173176 - 28 Aug 2024
Cited by 5 | Viewed by 2840
Abstract
Accurately assessing maize crop height (CH) and aboveground biomass (AGB) is crucial for understanding crop growth and light-use efficiency. Unmanned aerial vehicle (UAV) remote sensing, with its flexibility and high spatiotemporal resolution, has been widely applied in crop phenotyping studies. Traditional canopy height models (CHMs) are significantly influenced by image resolution and meteorological factors. In contrast, the accumulated incremental height (AIH) extracted from point cloud data offers a more accurate estimation of CH. In this study, vegetation indices and structural features were extracted from optical imagery, nadir and oblique photography, and LiDAR point cloud data. Optuna-optimized models, including random forest regression (RFR), light gradient boosting machine (LightGBM), gradient boosting decision tree (GBDT), and support vector regression (SVR), were employed to estimate maize AGB. Results show that AIH99 has higher accuracy in estimating CH. LiDAR demonstrated the highest accuracy, while oblique photography and nadir photography point clouds were slightly less accurate. Fusion of multi-source data achieved higher estimation accuracy than single-sensor data. Embedding structural features can mitigate spectral saturation, with R2 ranging from 0.704 to 0.939 and RMSE ranging from 0.338 to 1.899 t/hm2. Over the entire growth cycle, the R2 values for LightGBM and RFR were 0.887 and 0.878, with RMSEs of 1.75 and 1.76 t/hm2. LightGBM and RFR also performed well across different growth stages, while SVR showed the poorest performance. As nitrogen application decreased, both the accumulated AGB and its accumulation rate declined. This high-throughput crop-phenotyping analysis method offers advantages such as speed and high accuracy, providing a valuable reference for precision agriculture management in maize fields. Full article