Path Planning for Unmanned Aerial Vehicles in Complex Environments
Abstract
1. Introduction
1.1. Motivation
1.2. Objectives
- Scene information acquisition through orthophotographs: This entails generating orthophotographs of cities and other surfaces using multiple drones to obtain aerial images [16,17]. The goal is to apply this system to extract the point clouds of buildings for path planning. The resulting system will be based on previous successes in orthophotograph generation.
- 3D Reconstruction: The objective is to obtain the data necessary for 3D reconstructions that are as close to reality as possible [18]. This involves generating routes for UAVs to capture comprehensive imagery of structures, ensuring complete reconstructions without gaps. The system will build upon previous work to guide the process and validate route selection and sensor characteristics.
2. State of the Art
2.1. Mapping and Perception Techniques for Autonomous Drones
- SLAM (Simultaneous Localization and Mapping): SLAM is a key technique for autonomous drones that allows them to build maps of the environment while simultaneously estimating their position within that environment. Through SLAM, drones can navigate and explore unknown areas using sensor information (such as cameras, IMU, and LiDAR) to determine their position and create a real-time map. This technique is essential for autonomous flights and for operating in GPS-denied environments, such as indoors or in areas with poor satellite coverage.
- Photogrammetry: Photogrammetry is a technique that involves capturing and analyzing images from different angles to reconstruct objects and environments in 3D. Autonomous drones equipped with cameras can take aerial photographs of an area and then, using point matching and triangulation techniques, generate accurate three-dimensional models of the terrain surface or structures in the mapped area [21].
- LiDAR (Light Detection and Ranging): LiDAR is a sensor that emits laser light pulses and measures the time it takes to reflect off nearby objects or surfaces. By combining multiple distance measurements, a 3D point cloud representing the surrounding environment is obtained. Autonomous drones equipped with LiDAR can obtain precise and detailed data on the terrain, buildings, or other objects, even in areas with dense vegetation or complex terrain.
- Multi-sensor data fusion: Autonomous drones may be equipped with multiple sensors, such as RGB cameras, thermal cameras, LiDAR, and IMU. Multi-sensor data fusion allows combining information obtained from different sources to obtain a more complete and detailed view of the environment. The combination of visual and depth data, for example, improves the accuracy and resolution of the 3D models generated by drones [22].
- Object recognition and tracking: Autonomous drones can use computer vision and machine learning techniques to recognize objects of interest in real time and track their position and movement. This capability is useful in applications such as industrial inspections, surveillance, and the mapping of urban areas [23].
- Route planning and intelligent exploration: Route planning algorithms allow autonomous drones to determine the best trajectory to cover an area of interest and collect data efficiently. Intelligent route planning is essential to maximize the coverage of the mapped area and optimize flight duration.
2.2. Image Segmentation Techniques and Object Recognition
2.3. Three-Dimensional Reconstruction Methods
- Point cloud and mesh methods: Point cloud is a common technique in 3D reconstruction where the data captured by sensors, such as LiDAR or RGB cameras, are represented as a collection of points in a 3D space. These points can be densely distributed and contain information about the geometry and position of the objects in the environment. Various methods can be applied to generate a three-dimensional mesh from the point cloud, such as Delaunay triangulation [27], Marching Cubes [28], or Poisson Surface Reconstruction [29]. The resulting mesh provides a continuous surface that is easier to visualize and analyze [30].
- Fusion of data from different sensors: Autonomous drones can be equipped with multiple sensors, such as RGB cameras, thermal cameras, and LiDAR. Data fusion from the different sensors allows combining information obtained from each source to obtain a more complete and detailed view of the environment. For example, combining visual data with LiDAR depth data can improve the accuracy and resolution of the generated 3D model. Data fusion is also useful for obtaining complementary information about properties such as the texture, temperature, and reflectivity of the environment.
- Volumetric representations and octrees [31]: Volumetric representation is a technique that models objects and scenes in 3D using discrete volumes instead of continuous surfaces. Octrees are a common hierarchical data structure used in volumetric methods, dividing space into octants to represent different levels of detail. This technique is especially useful for the efficient representation of complex objects or scenes with fine details.
- Machine Learning methods: Machine learning and artificial intelligence have been increasingly applied in 3D reconstruction with drones. Machine learning algorithms can improve the accuracy of point matching in photogrammetry, object recognition, and point cloud segmentation. They have also been used in the classification and labeling of 3D data to improve the interpretation and understanding of the generated models.
2.4. Relevant Research and Projects
- Project: “Drones for archaeological documentation and mapping” [32]: This project focused on using autonomous drones equipped with RGB cameras for archaeological documentation and mapping. Autonomous flights were conducted over archaeological sites, capturing aerial images from different perspectives. Photogrammetry was used to generate the detailed 3D models of historical structures. The results highlighted the efficiency and accuracy of the technique, allowing the rapid and non-invasive documentation of archaeological sites, facilitating the preservation of cultural heritage.
- Research: “Simultaneous localization and mapping for UAVs in GPS-denied environments” [33]: This research addressed the challenge of autonomous drone navigation in GPS-denied environments, such as indoors or densely populated urban areas. A specific SLAM algorithm for drones was implemented, allowing real-time map creation and position estimation using vision and LiDAR sensors. The results demonstrated the feasibility of precise and safe autonomous flights in GPS-denied scenarios, significantly expanding the possibilities of drone applications in complex urban environments.
- Project: “Precision agriculture using autonomous drones” [34]: This project focused on the application of autonomous drones for precision agriculture. Drones equipped with multispectral cameras and LiDAR were used to inspect crops and collect data on plant health, soil moisture, and nutrient distribution. The generated 3D models provided valuable information for agricultural decision making, such as optimizing irrigation, early disease detection, and efficient resource management.
- Research: “Deep Learning approaches for 3D reconstruction from drone data” [35]: This research explored the application of deep learning techniques in 3D reconstruction from the data captured by drones. Convolutional neural networks were implemented to improve the quality and accuracy of the 3D models generated by photogrammetry and LiDAR. The results showed significant advances in the efficiency and accuracy of 3D reconstruction, opening new opportunities for the use of drones in applications requiring high resolutions and fine details.
3. Methodology
- Problem framing: The research begins with a comprehensive analysis of the problem space, delineating the requirements and constraints associated with UAV path planning in dynamic and cluttered environments. This step involves identifying the primary objectives of the study, such as ensuring collision-free navigation and optimizing flight efficiency.
- Literature review: A thorough review of the existing literature is conducted to gain insights into the state-of-the-art techniques and algorithms for UAV path planning and obstacle avoidance. Emphasis is placed on understanding the principles underlying the Rapidly Exploring Random Tree (RRT) algorithm, a widely used approach in autonomous navigation systems.
- Algorithm adaptation: Building upon the insights gained from the literature review, the RRT algorithm is adapted and tailored to suit the specific requirements of the target application. This involves modifying the algorithm to enable trajectory generation in 3D environments while accounting for obstacles and dynamic spatial constraints.
- Software implementation: The adapted RRT algorithm is implemented in software, leveraging appropriate programming languages and libraries. The implementation process encompasses the development of algorithms for path generation, collision detection, and trajectory optimization, along with the integration of the necessary data structures and algorithms.
- Simulation environment setup: A simulation environment is set up to facilitate the rigorous testing and evaluation of the proposed path-planning system. This involves creating a virtual 3D environment that replicates a real-world scenario.
- Performance analysis: The results obtained from the validation experiments are analyzed and interpreted to evaluate the system’s performance in terms of path quality, collision avoidance capability, computational efficiency, and scalability.
- Discussion and reflection: The findings of the research are discussed in detail, highlighting the strengths, limitations, and potential areas for the improvement of the proposed methodology. Insights gained from the analysis are used to refine and iterate upon the developed system.
4. Solution Design
4.1. Orthophoto Mission
- Enhanced geographical precision and uniform scale.
- Detailed visual representation.
- Compatibility with map overlay and integration.
4.2. Image Segmentation Model
Viewpoint Selection
- Aircraft safety and integrity [37]: A minimum safety distance must be defined to prevent collisions with the building structure. Although regulations do not specify an exact distance, common sense dictates defining this metric.
- Comprehensive structure inspection and surface detail acquisition: Capturing images close to the building yields detailed, information-rich photographs; however, the closer the images are taken, the more of them are required to cover the entire volume of the object.
- Simplicity and efficiency: Triangles are the simplest and most basic geometric shapes, which facilitates their manipulation and mathematical calculations. In addition, graphics engines and rendering algorithms are optimized to work efficiently with triangles.
- Versatility: They are flat polygons that can be used to approximate any three-dimensional surface with sufficient detail.
- Interpolation and smoothing: They allow the simple interpolation of vertex values, such as textures, colors, or surface normals. Additionally, when enough triangles are used, the resulting surface appears smooth.
- Lighting calculations: Triangles are ideal for performing lighting and shading calculations, since their flat geometric properties allow the easy determination of normals and light behavior on the surface.
- Ease of subdivision: If more detail is needed on a surface, triangles can be subdivided into smaller triangles without losing coherence in the representation.
4.3. Generation of Routes for Unmanned Aerial Vehicles in a 3D Scenario
4.3.1. Study of Path-Planning Algorithms
- A* (A-Star): The A* algorithm is widely used in pathfinding problems on graphs and meshes. A* performs an informed search using a heuristic function to estimate the cost of the remaining path to the goal. While A* can be efficient in environments with precise and known information, its performance can significantly degrade in complex and unknown 3D environments due to a lack of adequate information about the structure of the space.
- PRM (Probabilistic Roadmap): PRM is a probabilistic algorithm that creates a network of valid paths through the random sampling of the search space. Although PRM can generate valid trajectories, its effectiveness is affected by the density of the search space and may require a high number of sampling points to represent accurate trajectories in a 3D environment with complex obstacles.
- D* (D-Star): D* is a real-time search algorithm that recalculates the route when changes occur in the environment. While D* is suitable for dynamic environments, its computational complexity can be high, especially in 3D scenarios with many moving objects and obstacles.
- Theta* (Theta-Star): Theta* is an improvement of A* that performs a search in the discretized search space using linear interpolation to smooth the path. While Theta* can produce more direct and efficient trajectories than A*, its performance may degrade in environments with multiple obstacles and complex structures.
- RRT (Rapidly Exploring Random Tree): The RRT algorithm is a widely used path-planning technique in complex and unknown environments. RRT uses random sampling to build a search tree that represents the possible trajectories of the UAV. Its probabilistic nature and its ability to efficiently explore the search space make it suitable for route generation in 3D environments with obstacles and unknown structures.
- RRT* (Rapidly Exploring Random Tree Star): RRT* is an improvement of RRT that optimizes the trajectories generated by the original algorithm. RRT* seeks to improve the efficiency of the route found by RRT by reducing the path length and optimizing the tree structure. While RRT* can provide optimal routes, its computational complexity may significantly increase in complex 3D environments.
- Potential Fields: Potential fields is a path-planning technique that uses attractive and repulsive forces to guide the movement of the UAV towards the goal and away from obstacles. While potential fields can generate smooth trajectories, they may suffer from local minima and oscillations in environments with complex obstacles.
4.3.2. Selection of the RRT Algorithm
- RRT is a probabilistic algorithm that can efficiently explore the search space, making it suitable for unknown and complex environments.
- The random sampling nature of RRT allows for the generation of diverse trajectories and the exploration of the different regions of a space.
- RRT has a relatively simple implementation and low computational complexity compared to more advanced algorithms like RRT*.
4.3.3. Process of Route Generation and Optimization Using the RRT Algorithm
1. Initialization: The process begins with the initialization of a tree T, which initially contains only the starting position of the UAV, denoted as q_init.
2. Random sampling: In this step, the algorithm randomly samples a point q_rand within the search space. This random point represents a potential new position that the UAV might move to.
3. Nearest neighbor selection: Once a random point is chosen, the algorithm identifies the nearest node q_near in the existing tree T. This is typically accomplished using a distance metric such as the Euclidean distance to determine the closest existing node to the new random sample.
4. New node generation: After identifying the nearest node, the algorithm generates a new node q_new. This new node is created by moving a certain step size δ from q_near towards q_rand. This step ensures that the tree expands incrementally and steadily. The formula for the new node is as follows:
   q_new = q_near + δ · (q_rand − q_near) / ‖q_rand − q_near‖
   Here, ‖q_rand − q_near‖ represents the Euclidean distance between q_rand and q_near.
5. Collision checking: The next crucial step involves checking for collisions. The algorithm verifies whether the path from q_near to q_new intersects with any obstacles in the environment. If the path is collision-free, it is considered a valid extension of the tree.
6. Tree expansion: If no collisions are detected, q_new is added to the tree T, with an edge connecting it to q_near. This new node and edge represent a feasible segment of the UAV’s path.
7. Repeat process: The steps of random sampling, nearest neighbor selection, new node generation, and collision checking are repeated iteratively. The process continues until the tree reaches the goal or a predefined number of iterations is completed.
8. Path extraction: When the tree reaches the goal, the algorithm extracts the path from the initial position q_init to the goal. This path is obtained by tracing back through the tree from the goal node to the start node, resulting in a series of connected nodes representing the route.
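The eight steps above can be sketched as a minimal RRT loop. This is an illustrative implementation rather than the authors' code: obstacles are reduced to spheres for the collision check, and the bounds, goal tolerance, and helper names (`segment_hits_sphere`, `goal_tol`) are assumptions of the sketch.

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def segment_hits_sphere(p, q, center, radius):
    # Distance from the sphere centre to the closest point on segment pq
    pq = [b - a for a, b in zip(p, q)]
    L2 = sum(c * c for c in pq)
    t = 0.0 if L2 == 0 else max(0.0, min(1.0,
        sum((c - a) * d for a, c, d in zip(p, center, pq)) / L2))
    closest = [a + t * c for a, c in zip(p, pq)]
    return dist(closest, center) < radius

def rrt(start, goal, obstacles, step_size=5.0, goal_bias=0.1, max_nodes=1000,
        goal_tol=5.0, bounds=((0, 100), (0, 100), (0, 50))):
    """Minimal RRT in 3D. Obstacles are (center, radius) spheres."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_nodes):
        # Step 2: goal-biased random sampling
        if random.random() < goal_bias:
            q_rand = goal
        else:
            q_rand = tuple(random.uniform(lo, hi) for lo, hi in bounds)
        # Step 3: nearest neighbour by Euclidean distance
        i_near = min(range(len(nodes)), key=lambda i: dist(nodes[i], q_rand))
        q_near = nodes[i_near]
        d = dist(q_near, q_rand)
        if d == 0:
            continue
        # Step 4: steer one step of size delta from q_near towards q_rand
        q_new = tuple(a + step_size * (b - a) / d for a, b in zip(q_near, q_rand))
        # Step 5: collision check on the candidate edge
        if any(segment_hits_sphere(q_near, q_new, c, r) for c, r in obstacles):
            continue
        # Step 6: tree expansion
        parent[len(nodes)] = i_near
        nodes.append(q_new)
        # Step 8: path extraction once the goal is reached
        if dist(q_new, goal) <= goal_tol:
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None
```

A call such as `rrt((0, 0, 10), (90, 90, 10), obstacles=[((50, 50, 10), 8.0)])` returns a list of waypoints from start to goal, or `None` if the iteration budget is exhausted.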
4.3.4. Parameter Settings in the RRT Algorithm
- Step size (δ): The step size determines the distance between the nodes in the RRT tree. It represents how far the algorithm moves from the nearest node towards a randomly sampled point in each iteration.
- Impact: A smaller step size results in a finer, more precise path but increases computational time and the number of nodes. Conversely, a larger step size reduces computational load but may miss narrow passages and lead to less optimal paths.
- Setting used: We set the step size to 5 m, which provided a good balance between path precision and computational efficiency.
- Maximum number of nodes: This parameter defines the upper limit on the number of nodes in the RRT tree.
- Impact: Limiting the number of nodes helps in managing computational resources and ensuring that the algorithm terminates in a reasonable time frame. However, too few nodes might result in the incomplete exploration of the search space.
- Setting used: We set the maximum number of nodes to 1000, ensuring sufficient exploration while maintaining computational feasibility.
- Goal bias: The goal bias parameter defines the probability of sampling the goal point directly instead of a random point in the search space.
- Impact: A higher goal bias increases the chances of quickly reaching the goal, but may reduce the algorithm’s ability to explore alternative, potentially better paths. A lower goal bias promotes exploration but might slow down convergence to the goal.
- Setting used: We used a goal bias of 0.1 (10%), which allowed for the balanced exploration and goal-directed growth of the tree.
- Collision detection threshold: This threshold defines the minimum distance from obstacles within which a path is considered a collision.
- Impact: A higher threshold increases the sensitivity to obstacles, ensuring safer paths but potentially making it harder to find feasible routes. A lower threshold reduces this sensitivity, which might lead to paths that pass too close to obstacles.
- Setting used: We set the collision detection threshold to 3 m to ensure safe navigation around obstacles.
- Path quality: The chosen step size and goal bias facilitated the generation of smooth and direct paths, enhancing the UAV’s ability to navigate complex environments efficiently.
- Computational efficiency: Setting a reasonable maximum number of nodes and step size ensured that the algorithm performed within acceptable time limits, making it suitable for real-time applications.
- Safety: The collision detection threshold ensured that the generated paths maintained a safe distance from obstacles, reducing the risk of collisions during the UAV’s mission.
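As a sketch, the four parameters and the values reported above could be collected in a single configuration object; the key names and the `validate` helper are hypothetical, not part of the described system.

```python
# Hypothetical parameter set mirroring the values reported in Section 4.3.4.
rrt_params = {
    "step_size_m": 5.0,           # node spacing in the tree
    "max_nodes": 1000,            # upper bound on tree size
    "goal_bias": 0.1,             # probability of sampling the goal directly
    "collision_threshold_m": 3.0, # minimum clearance from obstacles
}

def validate(params):
    """Basic sanity checks before starting a planning run."""
    assert params["step_size_m"] > 0
    assert 0.0 <= params["goal_bias"] <= 1.0
    assert params["max_nodes"] > 0
    assert params["collision_threshold_m"] >= 0
    return True
```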
5. Implementation
- First, the implementation of image segmentation from an orthophoto and the application of a SAM model.
- Second, the generation of viewpoints around a sample structure and the subsequent generation of a route that allows for the collision-free scanning of the structure.
5.1. Image Segmentation and Identification
5.1.1. Orthophoto Missions
5.1.2. Application of SAM Model
5.2. Generation of Viewpoints and Flight Route
- The generation of viewpoints;
- The definition of the flight route.
- The definition of functions;
- Data processing;
- The generation and filtering of displaced points;
- The clustering of filtered points;
- The additional filtering of centroids and second cluster.
5.2.1. Definition of Functions
5.2.2. Data Processing
- Reading the PLY file: In this stage, a point cloud file in PLY format is read. The PLY (Polygon File Format) format is widely used to represent 3D data, such as point clouds and three-dimensional meshes. The information contained in this file represents the 3D scene to be processed. Once the file has been read, the data are stored in the variable pcd, representing the 3D point cloud.
- Downsampling of the point cloud: Downsampling is a technique that allows reducing the number of points in a point cloud without losing too much relevant information. In this stage, downsampling is applied to the 3D point cloud using the voxel_grid_downsampling function. This function samples points using a voxel grid, where each voxel represents a three-dimensional cell in space. The result of downsampling is stored in the variable pcd_downsampled, containing the 3D point cloud with a reduced number of points.
- Creation of mesh using alpha shapes: The alpha shape technique is a mathematical tool used in computational geometry to represent sets of points in three-dimensional space. In this stage, an alpha value is defined, a crucial parameter in mesh generation. Then, the alpha shape technique is applied to the downsampled point cloud to create a three-dimensional mesh encapsulating and representing the points in space. The resulting mesh is stored in the variable mesh.
- Calculation of vertex normals of the mesh: Normals are vectors perpendicular to the surface of the mesh at each of its vertices. Calculating normals is useful for determining the surface orientation and direction at each point of the mesh. In this stage, the normals of the vertices of the mesh created in the previous step are calculated, providing additional information about the scene’s geometry and facilitating certain subsequent operations, such as the proper visualization of the mesh.
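The downsampling stage can be illustrated with plain NumPy (reading the PLY file and building the alpha-shape mesh are typically delegated to a 3D library such as Open3D). The sketch below is an assumption: only the function name mirrors the `voxel_grid_downsampling` function mentioned in the text.

```python
import numpy as np

def voxel_grid_downsampling(points, voxel_size):
    """Reduce an (n, 3) point cloud by averaging points in the same voxel."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = inverse.reshape(-1)  # guard against versions that return it 2-D
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```

With a 1 m voxel size, all points falling inside the same cubic metre collapse into their mean, which is the behaviour described for the `pcd_downsampled` cloud.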
5.2.3. Generation and Filtering of Displaced Points
- Creation of displaced points: In this stage, the centers of each triangle of the previously created three-dimensional mesh are obtained. These centers represent points in space located at the center of each of the mesh’s triangles. From these points, new displaced points are generated along the normals of the triangles. The normal of a triangle is a vector perpendicular to its surface and is used to determine the direction in which points should be displaced. In this case, the points are displaced 10 m along the normals, creating new points in space.
- Creation of displaced point cloud: Once the displaced points have been generated, a new three-dimensional point cloud called displaced_pcd containing these points is created. This point cloud is a digital representation of the displaced points in space (see Figure 11).
- Calculation of distances: In this stage, the distances between the displaced points and the vertices of the three-dimensional mesh are calculated. For each displaced point, the minimum distance to all the vertices of the mesh is measured. The result of these calculations is stored in the list distances, containing the distances corresponding to each displaced point.
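The displacement and distance stages above can be sketched in NumPy, under the assumption that the mesh is available as a vertex array and a triangle index array; the helper names are illustrative.

```python
import numpy as np

def displaced_points(vertices, triangles, offset=10.0):
    """Centre of each mesh triangle, pushed `offset` metres along its normal."""
    tri = vertices[triangles]                  # (n, 3, 3) triangle vertices
    centers = tri.mean(axis=1)                 # triangle centroids
    # Unit normals from the cross product of two edge vectors
    n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return centers + offset * n

def min_distances(points, vertices):
    """Minimum Euclidean distance from each point to any mesh vertex."""
    d = np.linalg.norm(points[:, None, :] - vertices[None, :, :], axis=2)
    return d.min(axis=1)
```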
5.2.4. Clustering of Filtered Points
1. Applying the DBSCAN algorithm.
2. Separating the points into clusters.
3. Assigning colors to the clusters.
4. Creating point clouds for each cluster.
5. Calculating the cluster centroids.
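The clustering and centroid steps can be sketched with scikit-learn's DBSCAN; the `eps` and `min_samples` values below are placeholders, not the values used in the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_points(points, eps=2.0, min_samples=3):
    """Cluster an (n, 3) array of points; returns (labels, centroids)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    centroids = {}
    for k in set(labels):
        if k == -1:          # DBSCAN labels noise as -1; it gets no centroid
            continue
        centroids[k] = points[labels == k].mean(axis=0)
    return labels, centroids
```

Separating the points into per-cluster clouds then reduces to indexing with `points[labels == k]`, and coloring is a matter of assigning one random RGB triple per label.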
5.2.5. Additional Filtering of Centroids and New Clustering
1. Creating a new point cloud of centroids.
2. Calculating distances between the centroids and the mesh vertices.
3. Filtering the centroids based on their minimum distance to the mesh.
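A hypothetical sketch of the distance-based centroid filter; the 5 m minimum distance is an assumed placeholder, since the text does not give the value used.

```python
import numpy as np

def filter_centroids(centroids, mesh_vertices, min_dist=5.0):
    """Keep only centroids whose nearest mesh vertex is at least min_dist away."""
    d = np.linalg.norm(centroids[:, None, :] - mesh_vertices[None, :, :],
                       axis=2).min(axis=1)
    return centroids[d >= min_dist]
```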
5.2.6. Second Clustering of Filtered Centroids
1. Second clustering with DBSCAN.
2. Separating the centroids into clusters.
3. Assigning random colors to the clusters.
4. Creating a new point cloud for each cluster.
5. Recalculating the centroids.
5.3. Flight Path Definition
5.3.1. Sampling Point Selection
1. Generating sampling points:
   - In each iteration of the RRT algorithm, a sampling point is randomly generated in the 3D space of the environment. This point represents a potential position to which the drone could move along its route.
   - The sampling points are generated randomly, allowing for the exploration of the different regions of the search space and the discovery of new possible trajectories.
2. Collision checking:
   - Once the sampling point is generated, the algorithm checks whether the path between the new node and its nearest parent node collides with objects present in the environment. Collision detection functions are used to evaluate the distance between the new node and the objects represented in the 3D mesh of the environment.
   - If the path between the new node and its parent node does not collide with objects, the drone can fly from its current position to the new sampling point without encountering obstacles along the way.
3. Distance constraint compliance:
   - In addition to collision checking, the algorithm evaluates whether the new node complies with the distance constraint; that is, it checks whether the distance between the new node and its nearest parent node is less than or equal to a predetermined value.
   - The distance constraint is essential to ensure that the drone makes reasonable movements and does not make excessively large jumps in space.
4. Addition to the RRT tree:
   - If the new node complies with the distance constraint and does not collide with objects, it is added to the RRT tree as a new node. The nearest parent node becomes its predecessor in the tree, allowing the path from the initial node to the new node to be tracked.
   - As new nodes are generated and added, the RRT tree expands and explores the different possible trajectories for the drone.
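The collision and distance checks above can be sketched as a single edge-validation function. Approximating mesh collision by the clearance of points sampled along the edge to the mesh vertices is an assumption of this sketch, as are the function name and defaults (which reuse the 5 m step size and 3 m clearance from Section 4.3.4).

```python
import numpy as np

def edge_is_valid(q_near, q_new, mesh_vertices,
                  step_limit=5.0, clearance=3.0, n_samples=10):
    """Accept a candidate edge only if it respects the step-size constraint
    and keeps every sampled point at least `clearance` metres from the mesh."""
    q_near = np.asarray(q_near, float)
    q_new = np.asarray(q_new, float)
    # Distance constraint: reject jumps larger than the step limit
    if np.linalg.norm(q_new - q_near) > step_limit:
        return False
    # Sample points along the segment and test clearance to the mesh vertices
    ts = np.linspace(0.0, 1.0, n_samples)[:, None]
    samples = q_near + ts * (q_new - q_near)
    d = np.linalg.norm(samples[:, None, :] - mesh_vertices[None, :, :], axis=2)
    return bool(d.min() >= clearance)
```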
5.3.2. Path Search and Optimization
1. Path smoothing:
   - In this stage, a smoothing technique is applied to the initial trajectory generated by the RRT tree. The goal of smoothing is to make the trajectory smoother and eliminate sharp oscillations or abrupt changes in direction. This not only improves the aesthetics of the trajectory but can also improve flight efficiency and reduce energy consumption.
   - A commonly used technique for trajectory smoothing is fitting a curve through the points of the initial trajectory, achieved through interpolation or curve approximation. By fitting a curve to the trajectory points, a new smoothed trajectory is obtained that follows the general direction of the original but with smoother curves.
   - In this case, trajectory smoothing is performed with a spline fitting algorithm. A spline is a mathematical curve that passes through a set of given points and creates a smooth, continuous path between them. With a smoother trajectory, the drone can follow the route more steadily, resulting in a more efficient and safer flight.
   - The spline fitting process involves the following steps:
     ○ Taking the points of the initial route generated by the RRT tree.
     ○ Applying the spline fitting algorithm, which calculates a curve that fits the points optimally, minimizing deviations between the curve and the points.
     ○ Using the resulting curve as the new smoothed trajectory that the drone will follow during flight.
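The spline fitting steps can be sketched with SciPy's parametric B-spline routines; the `smooth_path` name and its parameters are illustrative, not the study's implementation.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_path(waypoints, n_out=100, smoothing=0.0):
    """Fit a parametric B-spline through RRT waypoints and resample it."""
    pts = np.asarray(waypoints, float).T            # shape (3, n)
    # s=0 interpolates the waypoints exactly; larger s trades fit for smoothness
    tck, _ = splprep(pts, s=smoothing, k=min(3, pts.shape[1] - 1))
    u = np.linspace(0.0, 1.0, n_out)
    return np.stack(splev(u, tck), axis=1)          # (n_out, 3) smoothed path
```

Raising `smoothing` above zero lets the curve deviate slightly from the waypoints, which further damps oscillations at the cost of exact waypoint passage.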
2. Unnecessary node removal:
   - Once the trajectory has been smoothed, it may still contain unnecessary nodes that do not provide relevant information for the drone’s flight. These redundant nodes may arise from the hierarchical structure of the RRT tree and the way the initial trajectory is constructed.
   - To optimize the trajectory, the unnecessary nodes are removed by applying trajectory simplification algorithms, such as the Douglas–Peucker algorithm, which reduces the number of points in the trajectory while maintaining its general shape. Removing unnecessary nodes reduces the complexity of the trajectory and improves system efficiency by simplifying the calculations required to follow the route.
   - The process involves the following steps:
     ○ Analyzing the initial trajectory generated by the RRT tree.
     ○ Identifying the nodes that do not provide relevant information for reaching the desired destination point.
     ○ Removing these unnecessary nodes from the trajectory, resulting in a simplified and more direct trajectory.
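A compact recursive sketch of the Douglas–Peucker simplification named above, assuming the trajectory is given as an (n, 3) NumPy array of waypoints.

```python
import numpy as np

def douglas_peucker(points, tol):
    """Simplify a 3D polyline, keeping points that deviate more than `tol`
    from the straight line between the current segment's endpoints."""
    points = np.asarray(points, float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    line = end - start
    L = np.linalg.norm(line)
    if L == 0:
        d = np.linalg.norm(points - start, axis=1)
    else:
        # Perpendicular distance of every point to the start-end line
        d = np.linalg.norm(np.cross(points - start, line), axis=1) / L
    i = int(np.argmax(d))
    if d[i] <= tol:
        return np.array([start, end])      # all points within tolerance
    # Keep the farthest point and recurse on both halves
    left = douglas_peucker(points[: i + 1], tol)
    right = douglas_peucker(points[i:], tol)
    return np.vstack([left[:-1], right])
```

Near-collinear waypoints collapse to their endpoints, while any waypoint deviating more than `tol` from the chord is preserved, which matches the "general shape" guarantee described above.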
- The framework successfully identified multiple viewpoints around the statue, ensuring comprehensive coverage.
- The flight path generated allowed the UAV to navigate around the statue smoothly, with no detected collisions.
- The captured images had significant overlap, which is crucial for high-quality 3D reconstruction.
- The 3D model generated from these images was detailed and accurate, capturing the intricate features of the statue.
6. Validation
- The framework effectively adapted to the complex architecture of the building, generating appropriate viewpoints.
- The UAV followed the planned path without any collisions, successfully navigating around the building’s features.
- The images captured provided a comprehensive view of the building from various angles, essential for 3D modeling.
- The resulting 3D reconstruction was accurate, capturing the building’s structural details comprehensively.
7. Future Research Directions
8. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Hayat, S.; Yanmaz, E.; Muzaffar, R. Survey on Unmanned Aerial Vehicle Networks for Civil Applications: A Communications Viewpoint. IEEE Commun. Surv. Tutor. 2016, 18, 2624–2661. [Google Scholar] [CrossRef]
- Erdelj, M.; Natalizio, E.; Chowdhury, K.R.; Akyildiz, I.F. Help from the Sky: Leveraging UAVs for Disaster Management. IEEE Pervasive Comput. 2017, 16, 24–32. [Google Scholar] [CrossRef]
- DPedroche, S.; Amigo, D.; García, J.; Molina, J.M. Architecture for Trajectory-Based Fishing Ship Classification with AIS Data. Sensors 2020, 20, 3782. [Google Scholar] [CrossRef] [PubMed]
- Blanco, M.T. Generación y Explotación de Ortofotografías Mediante Enjambre de Drones Simulados; Universidad Carlos III de Madrid: Madrid, Spain, 2022. [Google Scholar]
- Gómez-Calderrada, J.L. Reconstrucción y Análisis de Objetos 3D con UAV; Universidad Carlos III de Madrid: Madrid, Spain, 2022. [Google Scholar]
- Amigo, D.; García, J.; Molina, J.M.; Lizcano, J. UAV Simulation for Object Detection and 3D Reconstruction Fusing 2D LiDAR and Camera. In Proceedings of the 17th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2022), Salamanca, Spain, 5–7 September 2022. [Google Scholar]
- Yang, L.; Qi, J.; Xiao, J.; Yong, X. A literature review of UAV 3D path planning. In Proceeding of the 11th World Congress on Intelligent Control and Automation, Shenyang, China, 29 June–4 July 2014. [Google Scholar]
- Aharchi, M.; Kbir, M.A. A Review on 3D Reconstruction Techniques from 2D Images. In Innovations in Smart Cities Applications Edition 3—The Proceedings of the 4th International Conference on Smart City Applications; Springer International Publishing: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
- Santana, L.V.; Brandão, A.S.; Sarcinelli-Filho, M. Outdoor waypoint navigation with the AR.Drone quadrotor. In Proceedings of the International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015. [Google Scholar]
- Fuqiang, L.; Runxue, J.; Hualing, B.; Zhiyuan, G. Order Distribution and Routing Optimization for Takeout Delivery under Drone–Rider Joint Delivery Mode. J. Theor. Appl. Electron. Commer. Res. 2024, 19, 774–796. [Google Scholar] [CrossRef]
- Fuqiang, L.; Weidong, C.; Wenjing, F.; Hualing, B. 4PL Routing Problem Using Hybrid Beetle Swarm Optimization. Soft Comput. 2023, 27, 17011–17024. [Google Scholar]
- Tongren, Y.; Fuqiang, L.; Suxin, W.; Leizhen, W.; Hualing, B. A hybrid metaheuristic algorithm for the multi-objective location-routing problem in the early post-disaster stage. J. Ind. Manag. Optim. (JIMO) 2023, 19, p1. [Google Scholar]
- Kaur, D.; Kaur, Y. Various Image Segmentation Techniques: A Review. Int. J. Comput. Sci. Mob. Comput. 2014, 3, 809–814. [Google Scholar]
- Zaitoun, N.M.; Aqel, M.J. Survey on Image Segmentation Techniques. Procedia Comput. Sci. 2015, 65, 797–806. [Google Scholar] [CrossRef]
- Meta AI—Introducing Segment Anything: Working toward the First Foundation Model for Image Segmentation. Available online: https://segment-anything.com/ (accessed on 28 July 2023).
- Hohle, J. Experiences with the Production of Digital Orthophotos. Available online: https://www.asprs.org/wp-content/uploads/pers/1996journal/oct/1996_oct_1189-1194.pdf (accessed on 5 April 2023).
- Agüera-Vega, F.; Carvajal-Ramírez, F.; Martínez-Carricondo, P. Accuracy of Digital Surface Models and Orthophotos Derived from Unmanned Aerial Vehicle Photogrammetry. J. Surv. Eng. 2017, 143, 04016025. [Google Scholar] [CrossRef]
- Janowski, A.; Nagrodzka-Godycka, K.; Szulwic, J.; Ziółkowski, P. Remote sensing and photogrammetry techniques in diagnostics. Comput. Concr. 2016, 18, 405–420. [Google Scholar] [CrossRef]
- Dering, G.M.; Micklethwaite, S.; Thiele, S.T.; Vollgger, S.A.; Cruden, A.R. Review of drones, photogrammetry and emerging sensor technology for the study of dykes: Best practises and future potential. J. Volcanol. Geotherm. Res. 2019, 373, 148–166. [Google Scholar] [CrossRef]
- Weiss, U.; Biber, P. Plant detection and mapping for agricultural robots using a 3D LIDAR sensor. Robot. Auton. Syst. 2011, 59, 265–273. [Google Scholar] [CrossRef]
- Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
- Khaleghi, B.; Khamis, A.; Karray, F.O.; Razavi, S.N. Multisensor data fusion: A review of the state-of-the-art. Inf. Fusion 2013, 14, 28–44. [Google Scholar] [CrossRef]
- Sudderth, E.B. Graphical Models for Visual Object Recognition and Tracking. Doctoral Dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 2006. [Google Scholar]
- Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.-Y.; et al. Segment Anything. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
- Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef] [PubMed]
- Abdullahi, H.S.; Sheriff, R.E.; Mahieddine, F. Convolution Neural Network in Precision Agriculture for Plant Image Recognition and Classification; University of Bradford, School of Engineering and Computer Science: Bradford, UK, 2017. [Google Scholar]
- Lee, D.T.; Schachter, B.J. Two algorithms for constructing a Delaunay triangulation. Int. J. Comput. Inf. Sci. 1980, 9, 219–242. [Google Scholar] [CrossRef]
- Newman, T.S.; Yi, H. A survey of the marching cubes algorithm. Comput. Graph. 2006, 30, 854–879. [Google Scholar] [CrossRef]
- Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson Surface Reconstruction. In Proceedings of the Eurographics Symposium on Geometry Processing, Cagliari, Italy, 26–28 June 2006. [Google Scholar]
- Guo, Y.; Wang, H.; Hu, Q.; Liu, H.; Liu, L.; Bennamoun, M. Deep Learning for 3D Point Clouds: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 4338–4364. [Google Scholar] [CrossRef] [PubMed]
- Samet, H. An Overview of Quadtrees, Octrees, and Related Hierarchical Data Structures. In Proceedings of the Theoretical Foundations of Computer Graphics and CAD, Ciocco, Italy, 4–17 July 1988. [Google Scholar]
- Adamopoulos, E.; Rinaudo, F. UAS-Based Archaeological Remote Sensing: Review, Meta-Analysis and State-of-the-Art. Drones 2020, 4, 46. [Google Scholar] [CrossRef]
- Hong, B.; Ngo, E.; Juarez, J.; Cano, T.; Dhoopar, P. Simultaneous Localization and Mapping for Autonomous Navigation of UAVs in GPS-Denied Indoor Environments; California State Polytechnic University: Pomona, CA, USA, 2021. [Google Scholar]
- Zualkernan, I.; Abuhani, D.A.; Hussain, M.H.; Khan, J.; ElMohandes, M. Machine Learning for Precision Agriculture Using Imagery from Unmanned Aerial Vehicles (UAVs): A Survey. Drones 2023, 7, 382. [Google Scholar] [CrossRef]
- Buyukdemircioglu, M.; Kocaman, S. Deep Learning for 3D Building Reconstruction: A Review. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 359–366. [Google Scholar] [CrossRef]
- Maboudi, M.; Homaei, M.; Song, S.; Malihi, S.; Saadatseresht, M.; Gerke, M. A Review on Viewpoints and Path-planning for UAV-based 3D Reconstruction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 16, 5026–5048. [Google Scholar] [CrossRef]
- Yaacoub, J.-P.; Noura, H.; Salman, O.; Chehab, A. Security analysis of drones systems: Attacks, limitations, and recommendations. Internet Things 2020, 11, 100218. [Google Scholar] [CrossRef] [PubMed]
- Bircher, A.; Alexis, K.; Burri, M.; Oettershagen, P.; Omari, S.; Mantel, T.; Siegwart, R. Structural inspection path planning via iterative viewpoint resampling with application to aerial robotics. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015. [Google Scholar]
- Song, C.; Chen, Z.; Wang, K.; Luo, H.; Cheng, J.C. BIM-supported scan and flight planning for fully autonomous LiDAR-carrying UAVs. Autom. Constr. 2022, 142, 104533. [Google Scholar] [CrossRef]
- Shang, Z.; Shen, Z. Topology-based UAV path planning for multi-view stereo 3D reconstruction of complex structures. Complex Intell. Syst. 2022, 9, 909–926. [Google Scholar] [CrossRef]
- Zammit, C.; van Kampen, E.-J. Comparison between A* and RRT Algorithms for UAV Path Planning. In Proceedings of the 2018 AIAA Guidance, Navigation, and Control Conference, Kissimmee, FL, USA, 8–12 January 2018. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Gómez Arnaldo, C.; Zamarreño Suárez, M.; Pérez Moreno, F.; Delgado-Aguilera Jurado, R. Path Planning for Unmanned Aerial Vehicles in Complex Environments. Drones 2024, 8, 288. https://doi.org/10.3390/drones8070288