Proceeding Paper

Autonomous Route Planning for UAV-Based 3D Reconstruction †

by César Gómez Arnaldo *, Francisco Pérez Moreno, María Zamarreño Suárez and Raquel Delgado-Aguilera Jurado
Department of Aerospace Systems, Air Transport and Airports, Universidad Politécnica de Madrid (UPM), 28040 Madrid, Spain
* Author to whom correspondence should be addressed.
Presented at the 14th EASN International Conference on “Innovation in Aviation & Space towards sustainability today & tomorrow”, Thessaloniki, Greece, 8–11 October 2024.
Eng. Proc. 2025, 90(1), 78; https://doi.org/10.3390/engproc2025090078
Published: 27 March 2025

Abstract

This study presents an innovative approach for the autonomous navigation of unmanned aerial vehicles (UAVs) in complex three-dimensional environments. By implementing the Rapidly-exploring Random Tree (RRT) algorithm, the system can efficiently plot safe flight paths that avoid obstacles of varying shapes and sizes. The discussion covers the technical challenges encountered and the strategies employed to overcome them, leading to the successful development of collision-free navigation routes. This foundational work aims to support future projects that will further refine the system through extensive simulations and real-world UAV deployments for tasks such as image capture and structural inspections.

1. Introduction

The autonomous scanning and three-dimensional reconstruction of buildings using drones is a significant technological advancement [1]. This research addresses the increasing need for automated systems that can perform these tasks without human intervention [2]. The focus is on developing methods to plan optimal flight routes around structures, enabling comprehensive imagery capture while ensuring safety and avoiding collisions.

1.1. Motivation

Advancements in 3D modeling and virtual reality have created a high demand for detailed three-dimensional models [3]. Drones have become vital in this field, providing efficient and versatile means of data collection that allow for detailed visualizations and enhanced understanding of environments.
The use of drones for capturing three-dimensional landscapes has numerous applications [4]. In tourism, drones provide aerial tours and panoramic views, offering new possibilities for virtual exploration. In technical inspections, drones can safely and efficiently examine infrastructures, like bridges and communication towers, which are difficult or dangerous for humans to inspect.
Drones have also transformed aerial cinematography in filmmaking and documentaries, offering unique perspectives that enhance storytelling. The main challenge is planning optimal flight paths around various structures to capture images from multiple angles while avoiding collisions and ensuring the safety of the equipment and the environment [5].

1.2. Objectives

This study aims to build upon and integrate previous research efforts to develop an autonomous path planning algorithm for UAVs focused on 3D scene reconstruction [6]. The primary goal is to devise a collision-free path planning algorithm that enables UAVs to perform reconnaissance missions around specific areas. The algorithm will account for various factors such as execution time, collision risk thresholds, and safety distances from objects to ensure optimal routing.
In addition to path planning, the research seeks to implement cutting-edge technologies in object recognition and image segmentation. Segment Anything Model (SAM) technology will be incorporated to segment and identify objects within images, which will assist in planning routes that avoid collisions.
Furthermore, the research aims to consolidate prior developments and technologies to enhance the overall system capabilities. This includes obtaining scene information via orthophotographs, extracting point clouds of buildings for path planning, and producing detailed 3D reconstructions.
Ultimately, the goal is to document the methodologies employed for data extraction, object reconstruction, and route definition for collision-free autonomous flights. This documentation will provide valuable insights and contribute to the advancement of fields such as computer science, robotics, aeronautics, and related disciplines.

2. State of the Art

The field of 3D reconstruction using autonomous drones has rapidly evolved, driven by advancements in perception and navigation technologies [7]. This section provides an overview of the key techniques and algorithms employed in this domain, highlighting their applications across various sectors.
Autonomous drones leverage a range of methods to generate detailed maps and perceive their environment in 3D. These techniques are crucial for capturing precise data and creating accurate three-dimensional models. Key methods include:
  • Simultaneous localization and mapping (SLAM) [8]: SLAM allows drones to build a map of their environment while simultaneously estimating their location within it. Using sensors such as cameras, IMUs, and LiDAR, SLAM enables drones to navigate and explore unknown areas, including GPS-denied environments.
  • Photogrammetry: This technique involves capturing and analyzing images from various angles to reconstruct objects and environments in 3D. Drones equipped with cameras can take aerial photographs and use point matching and triangulation to create accurate models of terrain and structures (a triangulation sketch follows this list).
  • LiDAR (light detection and ranging): LiDAR sensors emit laser pulses and measure their reflection time to generate a 3D point cloud of the environment. This method provides detailed and precise data, even in areas with dense vegetation or complex terrain [9].
  • Multi-sensor data fusion: Combining data from multiple sensors, such as RGB [10] cameras, thermal cameras, and LiDAR, enhances the accuracy and detail of the generated 3D models. This fusion provides a more comprehensive view of the environment [11].
  • Object recognition and tracking: Utilizing computer vision and machine learning, drones can recognize and track objects in real time. This capability is valuable for industrial inspections, surveillance, and urban mapping.
  • Route planning and intelligent exploration: Algorithms for route planning enable drones to determine the most efficient paths for data collection, optimizing coverage and flight duration [12].
Recent advances in artificial intelligence have also significantly improved image segmentation and object recognition capabilities, which are essential for 3D reconstruction. Technologies like Meta’s Segment Anything Model (SAM) leverage convolutional neural networks (CNNs) [13] and deep learning to identify and classify objects within images and videos.
SAM can detect and recognize objects in real time [14], enhancing the efficiency and accuracy of data capture. This technology supports better route planning and data acquisition by ensuring comprehensive coverage and avoiding collisions. Additionally, SAM’s ability to segment images accurately facilitates detailed 3D reconstructions by correctly identifying and classifying structural elements.
In addition, several techniques are used to convert captured data into detailed 3D models:
  • Point cloud and mesh methods: These techniques represent captured data as a collection of 3D points or meshes [15]. Methods like Delaunay triangulation, marching cubes, and Poisson surface reconstruction generate continuous surfaces from point clouds (see the reconstruction sketch after this list).
  • Data fusion from different sensors: Combining data from various sensors provides a more complete view of the environment. This approach improves model accuracy by integrating visual data with depth information.
  • Volumetric methods and octrees: These methods represent objects in 3D using discrete volumes [16]. Octrees, a hierarchical data structure, efficiently represent complex objects by dividing space into manageable segments.
  • Machine learning approaches: Machine learning algorithms enhance point matching, object recognition, and segmentation of 3D data, improving the quality and accuracy of reconstructions.

3. Methodology

The methodology proposed for this research encompasses several critical areas:
  • Problem understanding: The first step involves a thorough understanding of the problem space, which includes analyzing the requirements and constraints associated with UAV path planning in dynamic and cluttered environments. The main goal is to ensure navigation without collisions.
  • Literature survey: A detailed survey of the current literature is conducted to understand state-of-the-art techniques and algorithms for UAV path planning and obstacle avoidance [17,18,19,20,21]. Special focus is given to the RRT algorithm due to its extensive application in autonomous navigation.
  • Algorithm customization: Based on the insights gained from the literature review, the RRT algorithm is customized to meet the specific needs of the target application. This customization involves modifying the algorithm to generate trajectories in 3D environments while considering obstacles and spatial constraints (a minimal sketch of such a planner appears after this list).
  • Software development: The customized RRT algorithm is then developed into software using suitable programming languages and libraries. This development includes creating algorithms for path generation, collision detection, and trajectory optimization, as well as integrating the necessary data structures and algorithms.
  • Simulation setup: A simulation environment is created to facilitate rigorous testing and evaluation of the proposed path planning system. This setup involves designing a virtual 3D environment that closely mimics real-world scenarios.
  • Performance evaluation: Finally, the system’s performance is assessed through validation experiments. The results are analyzed to evaluate various performance metrics, including path quality, collision avoidance capability, computational efficiency, and scalability.

4. Implementation

The results of this study showcase the effectiveness of the proposed framework for UAV path planning in complex environments. Through a series of drone missions, high-resolution orthophotos and detailed point cloud models were generated, providing a comprehensive understanding of the test area. The application of the SAM for image segmentation allowed for precise identification of structures and obstacles, which facilitated the development of accurate and collision-free flight paths using the RRT algorithm. These stages collectively demonstrate the system’s capability to integrate advanced image processing and path planning techniques to ensure efficient and safe UAV navigation.

4.1. Generation of Orthophotos and Point Clouds

Drone missions are conducted to capture aerial images, which are then processed to generate orthophotos and point cloud models that are essential for the subsequent stages of the system. The orthophotos provide high-resolution, georeferenced images that accurately represent the test area, while the point clouds offer a 3D representation of the environment, capturing detailed spatial information about the structures and terrain.
The orthophoto shown in Figure 1 (left) provides a comprehensive overview of the test area, highlighting the layout and features that the UAVs must navigate. The high-resolution imagery is crucial for accurate segmentation and path planning.
Figure 1 (right) illustrates the point cloud model generated from the captured images. This 3D model is essential for understanding the spatial relationships and dimensions of the objects within the environment, enabling precise path planning and collision avoidance.

4.2. Image Segmentation

The SAM is applied to the orthophotos to segment structures and relevant objects, facilitating precise path planning around the identified features. Image segmentation helps isolate specific areas of interest, such as buildings and obstacles, from the background, ensuring that the UAV can navigate accurately and efficiently [22].
In Figure 2 (left), the results of the image segmentation process are displayed. The segmented image clearly distinguishes between the structures and the surrounding environment, providing the necessary data for detailed path planning. This segmentation makes it possible to isolate a single object, in this case the building shown in Figure 2 (right), around which a collision-free route can then be traced so that it can be fully visualized.

4.3. Path Planning

Paths are generated and optimized using the RRT algorithm, ensuring the trajectories are collision-free and efficient for UAV navigation. The RRT algorithm explores the search space probabilistically, generating diverse trajectories that account for the segmented structures and obstacles.
Figure 3 depicts the flight path generated using the RRT algorithm. The path is designed to navigate around obstacles, optimizing the route for safety and efficiency. The algorithm’s adaptability allows it to handle dynamic and complex environments effectively.

5. Discussion

The RRT algorithm’s capability to efficiently explore the search space and generate diverse trajectories is highlighted as a key strength of the system. The probabilistic nature of RRT allows it to handle the inherent uncertainties and complexities of 3D environments effectively [23]. By randomly sampling the search space, RRT can identify feasible paths that avoid obstacles and optimize the UAV’s route, making it particularly suitable for dynamic and cluttered environments.
One of the major advantages of using RRT is its ability to find feasible solutions quickly, even in high-dimensional spaces, which makes it well suited to applications where computational speed is crucial. Furthermore, the algorithm’s adaptability means it can be modified and extended, for example by incorporating RRT* to optimize path length.
However, several areas for improvement have been identified:
  • Computational performance optimization: While RRT is efficient, the algorithm can still be computationally intensive, especially in highly complex environments. Optimizing the algorithm to reduce computational load without compromising path quality is essential. This could involve parallel processing techniques, more efficient sampling methods, or integrating heuristics to guide the search process more effectively.
  • Integration of advanced object recognition technologies: Incorporating state-of-the-art object recognition and machine learning algorithms can significantly enhance the system’s ability to identify and classify obstacles. Techniques such as convolutional neural networks (CNNs) and deep learning models can provide more accurate and detailed environmental data, improving the overall robustness of the path planning process.
  • Enhanced image segmentation: The SAM has proven effective for basic segmentation tasks. However, integrating more advanced segmentation algorithms could improve the accuracy and reliability of the identified features. This includes using multi-spectral imaging and data fusion techniques to combine different sensor inputs, providing a more comprehensive understanding of the environment.
  • Real-time adaptability: Ensuring that the system can adapt to changes in real time is critical for practical applications. This involves developing algorithms that can update the path dynamically in response to new obstacles or changes in the environment. Techniques such as dynamic RRT (DRRT) or hybrid approaches combining RRT with other path planning algorithms could be explored.
  • Scalability for large-scale environments: The current system is designed for relatively confined environments. Scaling the system to handle larger and more complex areas, such as urban landscapes or expansive rural settings, requires further development. This includes optimizing the data structures and algorithms to manage larger datasets and more extensive flight paths.
  • Validation and testing: Extensive real-world testing and validation are necessary to ensure the system’s reliability and performance in various conditions. This includes testing in different weather conditions, varying terrain types, and diverse structural environments. Feedback from these tests can be used to refine and improve the system continuously.
In conclusion, while the current system demonstrates significant capabilities in UAV path planning, continuous improvements and integrations with advanced technologies are essential to enhance its performance further. The ongoing research and development efforts will focus on addressing these areas of improvement, ensuring that the system remains at the forefront of autonomous UAV navigation technologies.

6. Conclusions and Future Works

This research provides a robust foundation for developing autonomous UAV path planning systems in complex environments. The integration of the RRT algorithm with advanced image processing techniques demonstrates a viable approach for generating collision-free trajectories in dynamic and cluttered settings. The ability to efficiently explore the search space and generate diverse paths highlights the system’s potential for real-world applications, from infrastructure inspection to environmental monitoring.
Future work will focus on several key areas to enhance and validate the system further:
  • Extensive simulation testing and real-world missions to validate and refine the system.
  • Enhancing path planning algorithms to improve efficiency and accuracy.
  • Integrating advanced object recognition and image segmentation technologies to improve route quality and precision.

Future Research Directions

Future research will explore the integration of swarm deployment and 3D reconstruction using various sensor combinations, aiming to refine and expand the system’s capabilities. Further developments will address the challenges of real-time data processing and the scalability of the proposed solution for large-scale applications.

Author Contributions

Conceptualization, C.G.A.; methodology, R.D.-A.J.; software, F.P.M.; validation, M.Z.S.; formal analysis, C.G.A.; investigation, C.G.A.; resources, R.D.-A.J.; data curation, F.P.M.; writing—original draft preparation, C.G.A.; writing—review and editing, M.Z.S.; visualization, F.P.M.; supervision, R.D.-A.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article. Any further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kuffner, J.J.; LaValle, S.M. RRT-Connect: An efficient approach to single-query path planning. In Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000; pp. 995–1001. [Google Scholar]
  2. Karaman, S.; Frazzoli, E. Sampling-based algorithms for optimal motion planning. Int. J. Robot. Res. 2011, 30, 846–894. [Google Scholar]
  3. Zermas, D.; Izzat, I.; Papanikolopoulos, N. Fast segmentation of 3D point clouds: A paradigm on LiDAR data for autonomous vehicle applications. In Proceedings of the IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017; pp. 5067–5073. [Google Scholar]
  4. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  5. Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-scale direct monocular SLAM. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 834–849. [Google Scholar]
  6. Li, J.; Guo, Y.; Zhu, S.; Yuan, Y. UAV path planning method based on deep reinforcement learning. IEEE Access 2019, 7, 157083–157093. [Google Scholar]
  7. Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.J.; Medina-Carnicer, R. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognit. 2014, 47, 2280–2292. [Google Scholar]
  8. Whelan, T.; Salas-Moreno, R.F.; Glocker, B.; Davison, A.J.; Leutenegger, S. ElasticFusion: Real-time dense SLAM and light source estimation. Int. J. Robot. Res. 2016, 35, 1697–1716. [Google Scholar]
  9. Newcombe, R.A.; Fox, D.; Seitz, S.M. DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 343–352. [Google Scholar]
  10. Kerl, C.; Sturm, J.; Cremers, D. Robust odometry estimation for RGB-D cameras. In Proceedings of the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 3748–3754. [Google Scholar]
  11. Shkurti, F.; Girdhar, Y. Underwater multi-robot convoying using visual tracking by detection. IEEE Robot. Autom. Lett. 2017, 2, 193–200. [Google Scholar]
  12. Gómez, C.; Zamarreño, M.; Pérez, F.; Delgado-Aguilera, R. Path planning for UAVs in complex environments. Drones 2024, 8, 288. [Google Scholar] [CrossRef]
  13. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  14. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  15. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  16. Newcombe, R.A.; Davison, A.J. Live dense reconstruction with a single moving camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 1498–1505. [Google Scholar]
  17. Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 573–580. [Google Scholar]
  18. Dellaert, F.; Kaess, M. Square Root SAM: Simultaneous localization and mapping via square root information smoothing. Int. J. Robot. Res. 2006, 25, 1181–1203. [Google Scholar] [CrossRef]
  19. Konolige, K.; Agrawal, M. FrameSLAM: From bundle adjustment to real-time visual mapping. IEEE Trans. Robot. 2008, 24, 1066–1077. [Google Scholar] [CrossRef]
  20. Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef]
  21. Engel, J.; Koltun, V.; Cremers, D. Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 611–625. [Google Scholar] [CrossRef] [PubMed]
  22. Kerl, C.; Sturm, J.; Cremers, D. Dense visual SLAM for RGB-D cameras. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 2100–2107. [Google Scholar]
  23. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
Figure 1. Orthophoto (left) and point cloud (right).
Figure 2. System segmentation tool (left) and point cloud of the selected building (right).
Figure 3. Identified points through which the route should pass (left). Optimal route generated (right).
