Search Results (10)

Search Parameters:
Keywords = UAV-based oblique photography

22 pages, 9648 KiB  
Article
Three-Dimensional Real-Scene-Enhanced GNSS/Intelligent Vision Surface Deformation Monitoring System
by Yuanrong He, Weijie Yang, Qun Su, Qiuhua He, Hongxin Li, Shuhang Lin and Shaochang Zhu
Appl. Sci. 2025, 15(9), 4983; https://doi.org/10.3390/app15094983 - 30 Apr 2025
Viewed by 664
Abstract
With the acceleration of urbanization, surface deformation monitoring has become crucial. Existing monitoring systems face several challenges, such as data singularity, the poor nighttime monitoring quality of video surveillance, and fragmented visual data. To address these issues, this paper presents a 3D real-scene (3DRS)-enhanced GNSS/intelligent vision surface deformation monitoring system. The system integrates GNSS monitoring terminals and multi-source meteorological sensors to accurately capture minute displacements at monitoring points and multi-source Internet of Things (IoT) data, which are then automatically stored in MySQL databases. To enhance the functionality of the system, the visual sensor data are fused with 3D models through streaming media technology, enabling 3D real-scene augmented reality to support dynamic deformation monitoring and visual analysis. WebSocket-based remote lighting control is implemented to enhance the quality of video data at night. The spatiotemporal fusion of UAV aerial data with 3D models is achieved through Blender image-based rendering, while edge detection is employed to extract crack parameters from intelligent inspection vehicle data. The 3DRS model is constructed through UAV oblique photography, 3D laser scanning, and the combined use of SVSGeoModeler and SketchUp. A visualization platform for surface deformation monitoring is built on the 3DRS foundation, adopting an “edge collection–cloud fusion–terminal interaction” approach. This platform dynamically superimposes GNSS and multi-source IoT monitoring data onto the 3D spatial base, enabling spatiotemporal correlation analysis of millimeter-level displacements and early risk warning. Full article
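The crack-extraction step above names edge detection but no specific operator. Below is a minimal sketch of that idea in Python with OpenCV, assuming a Canny-plus-contours pipeline; the thresholds, minimum length, and the function `extract_crack_parameters` are illustrative stand-ins, not details from the paper:

```python
import cv2

def extract_crack_parameters(image_path, canny_low=50, canny_high=150):
    """Rough crack extraction from an inspection image via Canny edges.

    Thresholds and the minimum-length filter are illustrative
    assumptions, not values from the paper.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # suppress texture noise
    edges = cv2.Canny(blurred, canny_low, canny_high)  # binary edge map

    # Group edge pixels into candidate cracks via connected contours.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    params = []
    for c in contours:
        length = cv2.arcLength(c, False)     # contour length in pixels
        x, y, w, h = cv2.boundingRect(c)     # crack extent in the image
        if length > 50:                      # ignore tiny fragments
            params.append({"length_px": length, "bbox": (x, y, w, h)})
    return params
```

Converting pixel lengths to physical crack widths would additionally require the camera-to-surface scale, which the abstract does not specify.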

31 pages, 21485 KiB  
Article
UAV-SfM Photogrammetry for Canopy Characterization Toward Unmanned Aerial Spraying Systems Precision Pesticide Application in an Orchard
by Qi Bing, Ruirui Zhang, Linhuan Zhang, Longlong Li and Liping Chen
Drones 2025, 9(2), 151; https://doi.org/10.3390/drones9020151 - 18 Feb 2025
Cited by 3 | Viewed by 1039
Abstract
The development of unmanned aerial spraying systems (UASSs) has significantly transformed pest and disease control methods for crop plants. Precisely adjusting pesticide application rates to the target conditions is an effective way to improve pesticide use efficiency. In orchard spraying, the structural characteristics of the canopy are crucial for guiding the pesticide application system to adjust spraying parameters. This study selected mango trees as the research sample and evaluated the differences between UAV aerial photography processed with a Structure from Motion (SfM) algorithm and airborne LiDAR in extracting canopy parameters. The maximum canopy height, canopy projection area, and canopy volume were extracted from the canopy height model of SfM (CHMSfM) and the canopy height model of LiDAR (CHMLiDAR) using grids with the same width as the planting rows (5.0 m) and 14 different heights (0.2 m, 0.3 m, 0.4 m, 0.5 m, 0.6 m, 0.8 m, 1.0 m, 2.0 m, 3.0 m, 4.0 m, 5.0 m, 6.0 m, 8.0 m, and 10.0 m). Linear regression equations were used to fit the canopy parameters obtained from the different sensors. The correlation was evaluated using R2 and rRMSE, and a t-test (α = 0.05) was employed to assess the significance of the differences. The results show that as the grid height increases, the R2 values for the maximum canopy height, projection area, and canopy volume extracted from CHMSfM and CHMLiDAR increase, while the rRMSE values decrease. When the grid height is 10.0 m, the R2 for the maximum canopy height extracted from the two models is 92.85%, with an rRMSE of 0.0563; for the canopy projection area, the R2 is 97.83%, with an rRMSE of 0.01; and for the canopy volume, the R2 is 98.35%, with an rRMSE of 0.0337. When the grid height exceeds 1.0 m, the t-test p-values for the three parameters are all greater than 0.05, supporting the hypothesis that there is no significant difference between the canopy parameters obtained by the two sensors. Additionally, taking the coordinate x0 of the intersection between the linear regression line and the line y = x as a reference, CHMSfM tends to overestimate the maximum height and projection area of lower canopies and to underestimate those of higher canopies compared with CHMLiDAR, which partly reflects the smoother surface of CHMSfM. This study demonstrates the effectiveness of canopy parameters extracted from UAV oblique photography combined with the SfM algorithm for guiding UASSs in variable-rate spraying. Full article
(This article belongs to the Special Issue Recent Advances in Crop Protection Using UAV and UGV)
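The comparison in this abstract rests on three agreement statistics per grid height: R2 and rRMSE from a linear fit, a t-test at α = 0.05, and the intersection x0 of the fit line with y = x. The sketch below shows how these could be computed with SciPy; the use of a paired t-test and all variable names are assumptions, since the abstract does not state the exact test variant:

```python
import numpy as np
from scipy import stats

def agreement_metrics(sfm_vals, lidar_vals):
    """R^2, rRMSE, t-test p-value, and fit/identity intersection x0 for
    per-grid canopy parameters from CHM_SfM vs. CHM_LiDAR (illustrative)."""
    x, y = np.asarray(lidar_vals, float), np.asarray(sfm_vals, float)
    slope, intercept, r, _, _ = stats.linregress(x, y)  # fit y = a*x + b
    rmse = np.sqrt(np.mean((y - x) ** 2))
    rrmse = rmse / np.mean(x)                 # RMSE relative to the mean
    _, p_value = stats.ttest_rel(x, y)        # H0: no mean difference
    # The fit line y = a*x + b meets y = x at x0 = b / (1 - a); for a < 1,
    # values below x0 are overestimated and values above underestimated.
    x0 = intercept / (1.0 - slope) if slope != 1.0 else np.nan
    return {"R2": r ** 2, "rRMSE": rrmse, "p": p_value, "x0": x0}
```

With a fitted slope below 1, this x0 reference reproduces the over/underestimation pattern the abstract reports for lower versus higher canopies.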

36 pages, 25347 KiB  
Article
Construction of a Real-Scene 3D Digital Campus Using a Multi-Source Data Fusion: A Case Study of Lanzhou Jiaotong University
by Rui Gao, Guanghui Yan, Yingzhi Wang, Tianfeng Yan, Ruiting Niu and Chunyang Tang
ISPRS Int. J. Geo-Inf. 2025, 14(1), 19; https://doi.org/10.3390/ijgi14010019 - 3 Jan 2025
Cited by 1 | Viewed by 2317
Abstract
Real-scene 3D digital campuses are essential for improving the accuracy and effectiveness of spatial data representation, facilitating informed decision-making for university administrators, optimizing resource management, and enriching user engagement for students and faculty. However, current approaches to constructing these digital environments face several challenges. They often rely on costly commercial platforms, struggle to integrate heterogeneous datasets, and require complex workflows to achieve both high precision and comprehensive campus coverage. This paper addresses these issues by proposing a systematic multi-source data fusion approach that employs open-source technologies to generate a real-scene 3D digital campus. A case study of Lanzhou Jiaotong University is presented to demonstrate the feasibility of this approach. First, oblique photography based on unmanned aerial vehicles (UAVs) is used to capture large-scale, high-resolution images of the campus area, which are then processed with open-source software to generate an initial 3D model. Next, a high-resolution model of the campus buildings is created by integrating the UAV data, while a Digital Elevation Model (DEM) and OpenStreetMap (OSM) building data provide a 3D overview of the surrounding area, resulting in a comprehensive model for a real-scene digital campus. Finally, the 3D model is visualized on the web using Cesium, which enables functionalities such as real-time data loading, perspective switching, and spatial data querying. Results indicate that the proposed approach effectively eliminates reliance on expensive proprietary systems while rapidly and accurately reconstructing a real-scene digital campus. This framework not only streamlines data harmonization but also offers an open-source, practical, cost-effective solution for real-scene 3D digital campus construction, promoting further research and applications in digital twin cities, Virtual Reality (VR), and Geographic Information Systems (GIS). Full article

17 pages, 9384 KiB  
Article
Multi-Spectral Point Cloud Constructed with Advanced UAV Technique for Anisotropic Reflectance Analysis of Maize Leaves
by Kaiyi Bi, Yifang Niu, Hao Yang, Zheng Niu, Yishuo Hao and Li Wang
Remote Sens. 2025, 17(1), 93; https://doi.org/10.3390/rs17010093 - 30 Dec 2024
Viewed by 899
Abstract
Reflectance anisotropy in remote sensing images can complicate the interpretation of spectral signatures, and extracting precise structural information for the affected pixels is a promising way to address this. Low-altitude unmanned aerial vehicle (UAV) systems can capture high-resolution imagery down to centimeter-level detail, potentially simplifying the characterization of leaf anisotropic reflectance. We proposed a novel maize point cloud generation method that combines an advanced UAV cross-circling oblique (CCO) photography route with the Structure from Motion-Multi-View Stereo (SfM-MVS) algorithm. A multi-spectral point cloud was then generated by fusing multi-spectral imagery with the point cloud using a DSM-based approach. The Rahman–Pinty–Verstraete (RPV) model was finally applied to establish maize leaf-level anisotropic reflectance models. Our results indicated a high degree of similarity between measured and estimated maize structural parameters (R2 = 0.89 for leaf length and 0.96 for plant height) based on the accurate point cloud data obtained from the CCO route. Most data points clustered around the principal plane due to a constant angle between the sun and view vectors, resulting in a limited range of view azimuths. Leaf reflectance anisotropy was characterized by the RPV model, with R2 ranging from 0.38 to 0.75 across five wavelength bands. These findings hold significant promise for decoupling plant structural information from leaf optical characteristics in remote sensing data. Full article
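The RPV model fitted here is a standard three-parameter bidirectional reflectance function (amplitude ρ0, Minnaert-like exponent k, Henyey-Greenstein asymmetry Θ). The following sketch gives its commonly published form for reference; angle and sign conventions vary between implementations, and the paper's exact parameterization is not stated in the abstract:

```python
import numpy as np

def rpv(theta_s, theta_v, phi, rho0, k, theta_hg):
    """Standard 3-parameter Rahman-Pinty-Verstraete BRF model (sketch).

    theta_s, theta_v: sun/view zenith angles (rad); phi: relative azimuth.
    rho0: amplitude; k: Minnaert-like exponent; theta_hg: HG asymmetry.
    """
    mu_s, mu_v = np.cos(theta_s), np.cos(theta_v)
    # Minnaert-like term controlling the bowl/bell shape of the BRF.
    minnaert = (mu_s * mu_v * (mu_s + mu_v)) ** (k - 1.0)
    # Scattering phase angle g and the Henyey-Greenstein phase function.
    cos_g = mu_s * mu_v + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi)
    f_hg = (1.0 - theta_hg ** 2) / (
        1.0 + 2.0 * theta_hg * cos_g + theta_hg ** 2) ** 1.5
    # Hot-spot correction term.
    big_g = np.sqrt(np.tan(theta_s) ** 2 + np.tan(theta_v) ** 2
                    - 2.0 * np.tan(theta_s) * np.tan(theta_v) * np.cos(phi))
    hot_spot = 1.0 + (1.0 - rho0) / (1.0 + big_g)
    return rho0 * minnaert * f_hg * hot_spot
```

Fitting ρ0, k, and Θ per wavelength band to the multi-spectral point cloud samples (e.g., with scipy.optimize.curve_fit) would yield per-band goodness-of-fit values of the kind reported above.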

21 pages, 13326 KiB  
Article
Analysis Methods for Landscapes and Features of Traditional Villages Based on Digital Technology—The Example of Puping Village in Zhangzhou
by Liangliang Wang, Yixin Wang, Wencan Huang and Jie Han
Land 2024, 13(9), 1539; https://doi.org/10.3390/land13091539 - 23 Sep 2024
Cited by 2 | Viewed by 1662
Abstract
Many traditional villages have degraded to some extent under urbanization and poorly controlled management. In addition, the lack of recognition and continuation of spatial texture in some village conservation and planning work has, in turn, led to the gradual disappearance of distinctive landscape features. Studying the spatial form of traditional villages helps preserve their authenticity as cultural landscapes and carry forward their historical characteristics. Using Puping Village in Zhangzhou City, Fujian Province as an example, this paper obtains integrated information data on the village through UAV oblique photography, classifies and extracts the spatial constitution of the traditional village using digital technology, analyzes it quantitatively from the macroscopic to the microscopic scale, and summarizes a spatial morphology analysis method for traditional villages. The results demonstrate that digital technology can complete data collection effectively and accurately and can provide an objective basis for the zoned conservation of traditional villages based on the distinction between new and historic buildings. In addition, digital information collection on traditional village landscape features will prepare the ground for establishing a database and for comparative analysis in the future. We further suggest that digital analysis needs to be combined with traditional methods to reach a deeper understanding of how village spatial morphology formed. The practice in Puping Village shows that digital technology can provide a scientific basis for the protection and planning of traditional villages and that the method is adaptable: it can help to efficiently collect and analyze landscape-characteristic data for other similar villages in China and support innovative methodologies and technologies for China's rural revitalization efforts. Full article

18 pages, 6001 KiB  
Article
Improving Target Geolocation Accuracy with Multi-View Aerial Images in Long-Range Oblique Photography
by Chongyang Liu, Yalin Ding, Hongwen Zhang, Jihong Xiu and Haipeng Kuang
Drones 2024, 8(5), 177; https://doi.org/10.3390/drones8050177 - 30 Apr 2024
Cited by 8 | Viewed by 2841
Abstract
Target geolocation in long-range oblique photography (LOROP) is challenging because measurement errors become more pronounced with increasing shooting distance, significantly affecting the calculation results. This paper introduces a novel high-accuracy target geolocation method based on multi-view observations. Unlike typical target geolocation methods, which depend heavily on the accuracy of the GNSS (Global Navigation Satellite System) and INS (Inertial Navigation System), the proposed method overcomes these limitations and demonstrates enhanced effectiveness by utilizing multiple aerial images captured at different locations, without any additional supplementary information. To achieve this, camera optimization is performed to minimize the errors measured by the GNSS and INS sensors. We first use feature matching between the images to acquire matched keypoints, which determine the pixel coordinates of the landmarks in different images. A map-building process is then performed to obtain the spatial positions of these landmarks. With initial guesses for the landmarks, bundle adjustment is used to optimize the camera parameters and the landmarks' spatial positions. After camera optimization, a line-of-sight (LOS) geolocation method is used to calculate the target position from the optimized camera parameters. The proposed method is validated through simulation and an experiment using unmanned aerial vehicle (UAV) images, demonstrating its efficiency, robustness, and ability to achieve high-accuracy target geolocation. Full article
(This article belongs to the Section Drone Design and Development)
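The final step this abstract describes, LOS geolocation from optimized camera parameters, reduces to intersecting a view ray with an earth model. Below is a flat-ground sketch in a local ENU frame; the paper's actual earth model and frame conventions are not given in the abstract, so this is illustrative only:

```python
import numpy as np

def los_ground_intersection(cam_pos, los_dir, ground_height=0.0):
    """Intersect a camera line of sight with the plane z = ground_height
    in a local ENU frame (flat-earth sketch, not the paper's earth model).

    cam_pos: (3,) camera position after bundle adjustment.
    los_dir: (3,) unit view direction toward the target.
    """
    cam_pos = np.asarray(cam_pos, float)
    los_dir = np.asarray(los_dir, float)
    dz = los_dir[2]
    if dz >= 0:  # ray points level or upward: no ground intersection
        raise ValueError("line of sight does not intersect the ground")
    t = (ground_height - cam_pos[2]) / dz  # ray parameter at the plane
    return cam_pos + t * los_dir           # estimated target position
```

Averaging the intersections obtained from several bundle-adjusted views further suppresses residual per-image pointing error, which is the benefit the multi-view formulation targets.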

14 pages, 19802 KiB  
Article
Three-Dimensional Reconstruction of Railway Bridges Based on Unmanned Aerial Vehicle–Terrestrial Laser Scanner Point Cloud Fusion
by Jian Li, Yipu Peng, Zhiyuan Tang and Zichao Li
Buildings 2023, 13(11), 2841; https://doi.org/10.3390/buildings13112841 - 13 Nov 2023
Cited by 18 | Viewed by 2516
Abstract
Single-sensor bridge surveys have complementary blind spots: unmanned aerial vehicle (UAV) oblique photography alone collects incomplete image data for close-to-ground structures such as bridge piers and for local features such as suspension cables, while single terrestrial laser scanners (TLSs) struggle to acquire point cloud data for the top structures of bridges, and TLS point clouds lack textural information. To address this, this study aims to establish a high-precision, complete, and realistic bridge model by integrating UAV image data and TLS point cloud data. Using a particular large-scale dual-track bridge as a case study, the methodology involves aerial surveys with a DJI Phantom 4 RTK for comprehensive image capture: 564 images circling the bridge arches, 508 images for orthorectification, and 491 close-range side-view images. Subsequently, all images, POS data, and ground control point information are imported into Context Capture 2023 software for aerial triangulation and multi-view dense image matching to generate dense point clouds of the bridge. Additionally, ground LiDAR scanning is conducted with six scanning stations placed on and beneath the bridge, and the point cloud data from each station are registered in Trimble Business Center 5.5.2 software based on identical feature points. Noise point clouds are then removed using statistical filtering. The UAV image point clouds are integrated with the TLS point clouds using the iterative closest point (ICP) algorithm, followed by the creation of a TIN model and texture mapping in Context Capture 2023. The effectiveness of the integrated modeling is verified by comparing its geometric accuracy and completeness with those of a model built from UAV images alone. The integrated model is used to generate cross-sectional profiles of the dual-track bridge, with detailed annotations of boundary dimensions. Structural inspections reveal honeycombed surfaces and seepage in the bridge piers, as well as rust on painted surfaces and cracks in the arch ribs. The geometric accuracy of the integrated model in the X, Y, and Z directions is 1.2 cm, 0.8 cm, and 0.9 cm, respectively, while the overall 3D model accuracy is 1.70 cm. This method provides a technical reference for the 3D point cloud reconstruction of bridge models. Through 3D reconstruction, railway operators can better monitor and assess the condition of bridge structures, promptly identifying potential defects and damage and enabling the necessary maintenance and repair measures to ensure structural safety. Full article
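The fusion step here combines statistical outlier filtering with ICP registration. A minimal sketch of that combination using the open-source Open3D library follows; the paper itself used Trimble Business Center and Context Capture, so this is an assumed stand-in, and the correspondence threshold and filter parameters are illustrative:

```python
import numpy as np
import open3d as o3d

def fuse_uav_tls(uav_pcd_path, tls_pcd_path, threshold=0.05, init=np.eye(4)):
    """Align a UAV photogrammetric point cloud to a TLS cloud with
    point-to-point ICP (parameters are illustrative, not the paper's)."""
    uav = o3d.io.read_point_cloud(uav_pcd_path)  # source: image-derived cloud
    tls = o3d.io.read_point_cloud(tls_pcd_path)  # target: laser-scanned cloud

    # Statistical outlier removal, as the abstract describes for noise points.
    uav, _ = uav.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    reg = o3d.pipelines.registration.registration_icp(
        uav, tls, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return uav.transform(reg.transformation), reg.fitness, reg.inlier_rmse
```

In practice a coarse initial alignment (e.g., from common control points) stands in for the identity `init`, since ICP only converges locally.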

19 pages, 14472 KiB  
Article
A Rapid Water Region Reconstruction Scheme in 3D Watershed Scene Generated by UAV Oblique Photography
by Yinguo Qiu, Yaqin Jiao, Juhua Luo, Zhenyu Tan, Linsheng Huang, Jinling Zhao, Qitao Xiao and Hongtao Duan
Remote Sens. 2023, 15(5), 1211; https://doi.org/10.3390/rs15051211 - 22 Feb 2023
Cited by 13 | Viewed by 2940
Abstract
Oblique photography based on unmanned aerial vehicles (UAVs) provides an effective means for the rapid, real-scene 3D reconstruction of geographical objects at the watershed scale. However, existing approaches cannot achieve automatic, high-precision reconstruction of water regions because water surface patterns are sensitive to wind and waves, reflections of objects on the shore, and similar effects. To solve this problem, a novel rapid reconstruction scheme for water regions in 3D models from oblique photography is proposed in this paper. It first extracts the boundaries of water regions using a designed eight-neighborhood traversal algorithm and then reconstructs the triangulated irregular network (TIN) of the water regions. Afterwards, the corresponding texture images of the water regions are intelligently selected and processed using a designed method based on coordinate matching, image stitching, and clipping. Finally, the processed texture images are mapped onto the obtained TIN, and the real appearance of the water regions can be reconstructed, visualized, and integrated into the original real-scene 3D environment. Experimental results show that the proposed scheme can rapidly and accurately reconstruct water regions in 3D models from oblique photography. The outcome of this work can refine the current technical system of 3D modeling by UAV oblique photography and expand its application in the construction of digital twin watersheds, digital twin cities, and similar environments. Full article
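The boundary-extraction step names an eight-neighborhood traversal over the water area. The sketch below shows the underlying neighborhood test on a binary water mask; the authors' exact traversal order and data structures are not given, so this illustrates only the general technique:

```python
import numpy as np

# Offsets of the eight neighbors, in clockwise order starting east.
NEIGHBORS8 = [(0, 1), (1, 1), (1, 0), (1, -1),
              (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def water_boundary(mask):
    """Return boundary pixels of a binary water mask: water cells with at
    least one non-water cell among their eight neighbors (a simple
    eight-neighborhood test; not the paper's exact algorithm)."""
    h, w = mask.shape
    boundary = []
    for r in range(h):
        for c in range(w):
            if not mask[r, c]:
                continue
            for dr, dc in NEIGHBORS8:
                rr, cc = r + dr, c + dc
                # Treat pixels outside the raster as non-water.
                if not (0 <= rr < h and 0 <= cc < w) or not mask[rr, cc]:
                    boundary.append((r, c))
                    break
    return boundary
```

Ordering the resulting pixels into a closed polyline (e.g., by Moore-neighbor tracing) would then give the boundary from which the water-region TIN is built.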

12 pages, 2819 KiB  
Article
Edge Restoration of a 3D Building Model Based on Oblique Photography
by Defu Che, Kai He, Kehan Qiu, Yining Liu, Baodong Ma and Quan Liu
Appl. Sci. 2022, 12(24), 12911; https://doi.org/10.3390/app122412911 - 15 Dec 2022
Cited by 6 | Viewed by 2367
Abstract
Unmanned aerial vehicle (UAV) oblique photography is widely used across many fields because of its efficiency, realism, and low production cost. However, due to the influence of lighting, occlusion, weak textures, and other factors in aerial images, the modeling results can exhibit structural errors that are inconsistent with the real scene. The edge line of a building is the main external expression of its structure, and whether the edge line is straight directly affects the realism of the building, so restoring the edge line can improve that realism. In this study, we proposed and developed a method for restoring the edge lines of a 3D building model based on triangular mesh cutting. First, the feature line of the edge is drawn through human-computer interaction, and axis-aligned bounding box (AABB) collision detection is carried out around the feature line to determine the triangular patches to be cut. Then, a triangle-cutting algorithm cuts the triangular patches projected onto the plane. Finally, the structure and texture of the 3D building model are reconstructed. This method physically separates the continuous triangulation: the triangulation around the edge line is cut and the plane is fitted. It improves cutting accuracy and edge flatness and enhances both the edge features of buildings and the rendering quality of models. The experimental results showed that the proposed edge restoration method is reliable and can effectively improve the rendering of a 3D building model based on UAV oblique photography while enhancing the realism of the model. Full article
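The cutting step selects triangles by AABB collision detection around the interactively drawn feature line. Here is a minimal sketch of that selection; the array layout and the padding margin are assumptions for illustration, not the paper's data structures:

```python
import numpy as np

def aabb(points):
    """Axis-aligned bounding box of an (N, 3) point array."""
    pts = np.asarray(points, float)
    return pts.min(axis=0), pts.max(axis=0)

def aabb_overlap(box_a, box_b):
    """True if two AABBs intersect: intervals overlap on every axis."""
    (min_a, max_a), (min_b, max_b) = box_a, box_b
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

def triangles_to_cut(triangles, feature_line, margin=0.05):
    """Select mesh triangles whose AABB hits the (padded) AABB of the
    drawn feature line. triangles: (M, 3, 3) vertex array;
    feature_line: (K, 3) polyline; margin (model units) is an assumed
    padding, not a value from the paper."""
    line_min, line_max = aabb(feature_line)
    line_box = (line_min - margin, line_max + margin)
    return [i for i, tri in enumerate(triangles)
            if aabb_overlap(aabb(tri), line_box)]
```

Only the selected triangles then enter the more expensive triangle-cutting stage, which is the usual reason for using AABB tests as a broad-phase filter.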

14 pages, 8616 KiB  
Article
Investigation of Measurement Accuracy of Bridge Deformation Using UAV-Based Oblique Photography Technique
by Shaohua He, Xiaochun Guo, Jianyan He, Bo Guo and Cheng Zheng
Sensors 2022, 22(18), 6822; https://doi.org/10.3390/s22186822 - 9 Sep 2022
Cited by 3 | Viewed by 2449
Abstract
This paper investigates the measurement accuracy of unmanned aerial vehicle-based oblique photography (UAVOP) for bridge deformation identification. A simply supported concrete beam model was selected and measured using the UAVOP technique. The influences of several parameters, including the overall flight altitude (h), local shooting distance (d), partial image overlap (λ), and arrangement of control points, on the quality of the reconstructed three-dimensional (3D) beam model are presented and discussed. Experimental results indicated that the quality of the reconstructed 3D model was significantly improved by fusing overall and partial flight routes (FR), yielding a model quality 46.7% higher than that of the single flight route (SR). While the overall flight altitude had minimal impact, the reconstructed model quality varied markedly with the local shooting distance, partial image overlap, and control-point arrangement. As d decreased from 12 m to 8 m, the model quality improved by 48.2%, and an improvement of 42.5% was also achieved by increasing λ from 70% to 80%. The reconstructed model quality of UAVOP with global-plane control points was 78.4% and 38.4% higher than with linear and regional control points, respectively. Furthermore, an optimized UAVOP scheme with control points in a global-plane arrangement and FR (h = 50 m, d = 8 m, and λ = 80%) is recommended. A comparison between the results measured by UAVOP and by a total station showed maximum identification errors of 1.3 mm. The study's outcomes are expected to serve as a reference for future applications of UAVOP in bridge measurement. Full article
(This article belongs to the Section Remote Sensors)