Autonomous Route Planning for UAV-Based 3D Reconstruction
Abstract
1. Introduction
1.1. Motivation
1.2. Objectives
2. State of the Art
- Simultaneous localization and mapping (SLAM) [8]: SLAM allows drones to build a map of their environment while simultaneously estimating their own location within it. Using sensors such as cameras, inertial measurement units (IMUs), and LiDAR, SLAM enables drones to navigate and explore unknown areas, including GPS-denied environments.
- Photogrammetry: This technique involves capturing and analyzing images from various angles to reconstruct objects and environments in 3D. Drones equipped with cameras can take aerial photographs and use point matching and triangulation to create accurate models of terrain and structures.
- LiDAR (light detection and ranging): LiDAR sensors emit laser pulses and measure their reflection time to generate a 3D point cloud of the environment. This method provides detailed and precise data, even in areas with dense vegetation or complex terrain [9].
- Object recognition and tracking: Utilizing computer vision and machine learning, drones can recognize and track objects in real time. This capability is valuable for industrial inspections, surveillance, and urban mapping.
- Route planning and intelligent exploration: Algorithms for route planning enable drones to determine the most efficient paths for data collection, optimizing coverage and flight duration [12].
- Point cloud and mesh methods: These techniques represent captured data as a collection of 3D points or meshes [15]. Methods like Delaunay triangulation, marching cubes, and Poisson surface reconstruction generate continuous surfaces from point clouds.
- Data fusion from different sensors: Combining data from various sensors provides a more complete view of the environment. This approach improves model accuracy by integrating visual data with depth information.
- Volumetric methods and octrees: These methods represent objects in 3D using discrete volumes [16]. Octrees, a hierarchical data structure, efficiently represent complex objects by dividing space into manageable segments.
- Machine learning approaches: Machine learning algorithms enhance point matching, object recognition, and segmentation of 3D data, improving the quality and accuracy of reconstructions.
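The volumetric and octree representation mentioned above can be sketched as a small recursive data structure. The following is a minimal illustration in Python, not tied to any particular library; the `max_points` and `max_depth` parameters are illustrative assumptions:

```python
import numpy as np

class OctreeNode:
    """Axis-aligned cube that subdivides into 8 children once it holds
    more than `max_points` points, until `max_depth` is reached."""
    def __init__(self, center, half_size, depth=0, max_points=4, max_depth=6):
        self.center = np.asarray(center, dtype=float)
        self.half_size = half_size
        self.depth = depth
        self.max_points = max_points
        self.max_depth = max_depth
        self.points = []
        self.children = None  # list of 8 OctreeNode after splitting

    def insert(self, p):
        if self.children is not None:
            self._child_for(p).insert(p)
            return
        self.points.append(np.asarray(p, dtype=float))
        if len(self.points) > self.max_points and self.depth < self.max_depth:
            self._split()

    def _split(self):
        h = self.half_size / 2.0
        # 8 octants, ordered by the sign of each axis offset
        self.children = [
            OctreeNode(self.center + h * np.array([sx, sy, sz]), h,
                       self.depth + 1, self.max_points, self.max_depth)
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)
        ]
        for p in self.points:
            self._child_for(p).insert(p)
        self.points = []

    def _child_for(self, p):
        # index matches the (sx, sy, sz) ordering used in _split
        idx = ((p[0] >= self.center[0]) * 4
               + (p[1] >= self.center[1]) * 2
               + (p[2] >= self.center[2]))
        return self.children[int(idx)]
```

Because only occupied octants are refined, memory grows with scene complexity rather than with the volume of the bounding box, which is what makes the hierarchy efficient for sparse UAV point clouds.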
3. Methodology
- Problem understanding: The first step involves a thorough understanding of the problem space, which includes analyzing the requirements and constraints associated with UAV path planning in dynamic and cluttered environments. The main goal is to ensure navigation without collisions.
- Algorithm customization: Based on the insights gained from the literature review, the RRT algorithm is customized to meet the specific needs of the target application. This customization involves modifying the algorithm to generate trajectories in 3D environments while considering obstacles and spatial constraints.
- Software development: The customized RRT algorithm is then developed into software using suitable programming languages and libraries. This development includes creating algorithms for path generation, collision detection, and trajectory optimization, as well as integrating the necessary data structures and algorithms.
- Simulation setup: A simulation environment is created to facilitate rigorous testing and evaluation of the proposed path planning system. This setup involves designing a virtual 3D environment that closely mimics real-world scenarios.
- Performance evaluation: Finally, the system’s performance is assessed through validation experiments. The results are analyzed to evaluate various performance metrics, including path quality, collision avoidance capability, computational efficiency, and scalability.
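The RRT customization described in the steps above can be sketched as follows. This is a minimal 3D RRT with goal biasing, not the authors' implementation; the spherical obstacle model, step size, and bias probability are illustrative assumptions:

```python
import numpy as np

def rrt_3d(start, goal, obstacles, bounds, step=0.5,
           goal_tol=0.5, max_iters=5000, seed=0):
    """Basic RRT in 3D. `obstacles` is a list of (center, radius) spheres;
    `bounds` is a (lo, hi) pair for the sampling box.
    Returns a list of waypoints from start to goal, or None."""
    rng = np.random.default_rng(seed)
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    goal = np.asarray(goal, dtype=float)
    nodes = [np.asarray(start, dtype=float)]
    parents = [-1]

    def collides(p):
        return any(np.linalg.norm(p - np.asarray(c)) <= r for c, r in obstacles)

    for _ in range(max_iters):
        # goal bias: sample the goal 10% of the time to speed convergence
        sample = goal if rng.random() < 0.1 else rng.uniform(lo, hi)
        i = min(range(len(nodes)), key=lambda k: np.linalg.norm(nodes[k] - sample))
        direction = sample - nodes[i]
        dist = np.linalg.norm(direction)
        if dist < 1e-9:
            continue
        new = nodes[i] + direction / dist * min(step, dist)
        if collides(new):
            continue
        nodes.append(new)
        parents.append(i)
        if np.linalg.norm(new - goal) <= goal_tol:
            # walk back up the tree to recover the path
            path, k = [], len(nodes) - 1
            while k != -1:
                path.append(nodes[k])
                k = parents[k]
            return path[::-1]
    return None
```

A full planner would also check the segment between consecutive waypoints, not only the new node, against the obstacle set; this sketch omits that for brevity.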
4. Implementation
4.1. Generation of Orthophotos and Point Clouds
4.2. Image Segmentation
4.3. Path Planning
5. Discussion
- Computational performance optimization: While RRT is efficient, the algorithm can still be computationally intensive, especially in highly complex environments. Optimizing the algorithm to reduce computational load without compromising path quality is essential. This could involve parallel processing techniques, more efficient sampling methods, or integrating heuristics to guide the search process more effectively.
- Integration of advanced object recognition technologies: Incorporating state-of-the-art object recognition and machine learning algorithms can significantly enhance the system’s ability to identify and classify obstacles. Techniques such as convolutional neural networks (CNNs) and deep learning models can provide more accurate and detailed environmental data, improving the overall robustness of the path planning process.
- Enhanced image segmentation: The Segment Anything Model (SAM) has proven effective for basic segmentation tasks. However, integrating more advanced segmentation algorithms could improve the accuracy and reliability of the identified features. This includes using multi-spectral imaging and data fusion techniques to combine different sensor inputs, providing a more comprehensive understanding of the environment.
- Real-time adaptability: Ensuring that the system can adapt to changes in real time is critical for practical applications. This involves developing algorithms that can update the path dynamically in response to new obstacles or changes in the environment. Techniques such as dynamic RRT (DRRT) or hybrid approaches combining RRT with other path planning algorithms could be explored.
- Scalability for large-scale environments: The current system is designed for relatively confined environments. Scaling the system to handle larger and more complex areas, such as urban landscapes or expansive rural settings, requires further development. This includes optimizing the data structures and algorithms to manage larger datasets and more extensive flight paths.
- Validation and testing: Extensive real-world testing and validation are necessary to ensure the system’s reliability and performance in various conditions. This includes testing in different weather conditions, varying terrain types, and diverse structural environments. Feedback from these tests can be used to refine and improve the system continuously.
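One lightweight post-processing step in the spirit of the path-quality and optimization points above is randomized shortcutting, which straightens the jagged paths raw RRT produces. This is an illustrative sketch, not part of the system described; the `collides` predicate and sampling parameters are assumptions:

```python
import numpy as np

def shortcut(path, collides, tries=200, samples=20, seed=0):
    """Randomized shortcutting: repeatedly pick two waypoints and, if the
    straight segment between them is collision-free (checked at `samples`
    interpolated points), splice it in. `collides(p) -> bool` tests a point."""
    rng = np.random.default_rng(seed)
    path = [np.asarray(p, dtype=float) for p in path]
    for _ in range(tries):
        if len(path) < 3:
            break  # already a straight segment
        i, j = sorted(rng.choice(len(path), size=2, replace=False))
        if j - i < 2:
            continue  # adjacent waypoints, nothing to remove
        ts = np.linspace(0.0, 1.0, samples)
        seg = [path[i] + t * (path[j] - path[i]) for t in ts]
        if not any(collides(p) for p in seg):
            path = path[:i + 1] + path[j:]  # drop the waypoints in between
    return path
```

Because each accepted splice only ever removes waypoints from a feasible path, the result is never worse than the input, and the cost is a handful of extra collision checks rather than a full replanning pass.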
6. Conclusions and Future Works
- Extensive simulation testing and real-world missions to validate and refine the system.
- Enhancing path planning algorithms to improve efficiency and accuracy.
- Integrating advanced object recognition and image segmentation technologies to improve route quality and precision.
Future Research Directions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Kuffner, J.J.; LaValle, S.M. RRT-Connect: An efficient approach to single-query path planning. In Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000; pp. 995–1001. [Google Scholar]
- Karaman, S.; Frazzoli, E. Sampling-based algorithms for optimal motion planning. Int. J. Robot. Res. 2011, 30, 846–894. [Google Scholar]
- Zermas, D.; Izzat, I.; Papanikolopoulos, N. Fast segmentation of 3D point clouds: A paradigm on LiDAR data for autonomous vehicle applications. In Proceedings of the IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017; pp. 5067–5073. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-scale direct monocular SLAM. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 834–849. [Google Scholar]
- Li, J.; Guo, Y.; Zhu, S.; Yuan, Y. UAV path planning method based on deep reinforcement learning. IEEE Access 2019, 7, 157083–157093. [Google Scholar]
- Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.J.; Medina-Carnicer, R. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognit. 2014, 47, 2280–2292. [Google Scholar]
- Whelan, T.; Salas-Moreno, R.F.; Glocker, B.; Davison, A.J.; Leutenegger, S. ElasticFusion: Real-time dense SLAM and light source estimation. Int. J. Robot. Res. 2016, 35, 1697–1716. [Google Scholar]
- Newcombe, R.A.; Fox, D.; Seitz, S.M. DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 343–352. [Google Scholar]
- Kerl, C.; Sturm, J.; Cremers, D. Robust odometry estimation for RGB-D cameras. In Proceedings of the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 3748–3754. [Google Scholar]
- Shkurti, F.; Girdhar, Y. Underwater multi-robot convoying using visual tracking by detection. IEEE Robot. Autom. Lett. 2017, 2, 193–200. [Google Scholar]
- Gómez, C.; Zamarreño, M.; Pérez, F.; Delgado-Aguilera, R. Path planning for UAVs in complex environments. Drones 2024, 8, 288. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Newcombe, R.A.; Davison, A.J. Live dense reconstruction with a single moving camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 1498–1505. [Google Scholar]
- Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 573–580. [Google Scholar]
- Dellaert, F.; Kaess, M. Square Root SAM: Simultaneous localization and mapping via square root information smoothing. Int. J. Robot. Res. 2006, 25, 1181–1203. [Google Scholar] [CrossRef]
- Konolige, K.; Agrawal, M. FrameSLAM: From bundle adjustment to real-time visual mapping. IEEE Trans. Robot. 2008, 24, 1066–1077. [Google Scholar] [CrossRef]
- Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef]
- Engel, J.; Koltun, V.; Cremers, D. Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 611–625. [Google Scholar] [CrossRef] [PubMed]
- Kerl, C.; Sturm, J.; Cremers, D. Dense visual SLAM for RGB-D cameras. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 2100–2107. [Google Scholar]
- Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Arnaldo, C.G.; Moreno, F.P.; Suárez, M.Z.; Jurado, R.D.-A. Autonomous Route Planning for UAV-Based 3D Reconstruction. Eng. Proc. 2025, 90, 78. https://doi.org/10.3390/engproc2025090078