Sensors | Review | Open Access | 11 November 2025

Geometrical Optimal Navigation and Path Planning—Bridging Theory, Algorithms, and Applications

1 Control Engineering Department, Amirkabir University, Tehran 1591634311, Iran
2 Control Electrical Engineering Department, Iran University of Science and Technology, Tehran 1684613114, Iran
3 Faculty of Design and Creative Technologies, Auckland University of Technology, Auckland 1142, New Zealand
* Author to whom correspondence should be addressed.
This article belongs to the Section Navigation and Positioning

Abstract

Autonomous systems, such as self-driving cars, surgical robots, and space rovers, require efficient and collision-free navigation in dynamic environments. Geometric optimal navigation and path planning have become critical research areas, combining geometry, optimization, and machine learning to address these challenges. This paper systematically reviews state-of-the-art methodologies in geometric navigation and path planning, with a focus on integrating advanced geometric principles, optimization techniques, and machine learning algorithms. It examines recent advancements in continuous optimization, real-time adaptability, and learning-based strategies, which enable robots to navigate dynamic environments, avoid moving obstacles, and optimize trajectories under complex constraints. The study identifies several unresolved challenges in the field, including scalability in high-dimensional spaces, real-time computation for dynamic environments, and the integration of perception systems for accurate environment modeling. Additionally, ethical and safety concerns in human–robot interactions are highlighted as critical issues for real-world deployment. The paper provides a comprehensive framework for addressing these challenges, bridging the gap between classical algorithms and modern techniques. By emphasizing recent advancements and unresolved challenges, this work contributes to the broader understanding of geometric optimal navigation and path planning. The insights presented here aim to inspire future research and foster the development of more robust, efficient, and intelligent navigation systems. This survey not only highlights the novelty of integrating geometry, optimization, and machine learning but also provides a roadmap for addressing critical issues in the field, paving the way for the next generation of autonomous systems.

1. Introduction

The automated industry is rapidly evolving, with robots being increasingly deployed in diverse applications such as manufacturing, healthcare, transportation, and smart homes [,]. To operate safely and efficiently in these environments, robots must navigate autonomously while avoiding obstacles, making path planning and motion planning critical components of robotics. Path planning involves generating a collision-free trajectory that guides a robot from an initial state to a desired goal. This requires accurate environment modeling to understand the spatial structure and relationships between locations. Three primary approaches are used to link the environment and navigation strategy: Geometric, Topological, and Semantic.
Geometric methods focus on guiding the robot from a start point to a goal based on map information, often leveraging mathematical representations of the environment []. These methods are widely used due to their simplicity and efficiency in static environments. Topological methods represent the environment as a graph, enabling decision-making that mimics human-like navigation []. Semantic approaches use logical representations of the environment, incorporating human cognitive processes to infer navigation strategies []. While topological and semantic methods are gaining attention for their adaptability, geometric approaches remain highly relevant, especially when combined with optimization techniques [].
This paper provides a systematic overview of geometric optimal navigation and path planning, focusing on the integration of classical geometric methods with modern optimization and machine learning techniques. We explore recent advancements in continuous optimization, real-time adaptability, and learning-based strategies, which enable robots to navigate complex environments, avoid moving obstacles, and optimize trajectories under constraints. Additionally, we highlight unresolved challenges, including scalability in high-dimensional spaces, real-time computation for dynamic environments, and the integration of perception systems for accurate environment modeling. Safety concerns in human–robot interactions are also discussed as critical issues for real-world deployment [].

3. Optimization Criteria in Geometric Trajectory Planning

In geometric trajectory planning, the primary and most observable parameter is the determination of the shortest possible trajectory. However, while trajectory length is a critical factor, it should not be the sole criterion for optimization. A comprehensive approach must consider additional factors such as trajectory smoothness, time efficiency, energy consumption, and potential hazards, including collisions or environmental disturbances like wind resistance. Furthermore, the robot’s specific task and objectives often necessitate the inclusion of other pertinent models and metrics to ensure optimal performance.
It is important to note that multiple optimization criteria often exhibit inherent interdependencies. The way these interdependencies are addressed depends largely on the priorities assigned to the navigation problem and the sensitivity of the scenario. For instance, in safety-critical applications, collision avoidance may dominate, while in energy-constrained missions, energy efficiency might take precedence. Common approaches include weighted-sum formulations, hierarchical prioritization, or treating certain objectives as hard constraints while optimizing others. While a fully generalized treatment of these interdependencies is beyond the scope of this work, this overview illustrates how diverse objectives can be balanced to achieve feasible, efficient, and robust navigation outcomes [,].
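To make the weighted-sum formulation concrete, the following minimal Python sketch combines several per-criterion costs into a single scalar objective; the waypoints, weights, and surrogate cost terms are assumptions introduced here only for demonstration, not values drawn from the cited works:

import numpy as np

def weighted_sum_cost(path, weights):
    # path: (m, 2) array of waypoints; weights: dict of non-negative weights.
    # Each term below is a simple surrogate, not a calibrated model.
    segs = np.diff(path, axis=0)
    seg_lengths = np.linalg.norm(segs, axis=1)
    length = np.sum(seg_lengths)                        # total path length
    headings = np.arctan2(segs[:, 1], segs[:, 0])
    turning = np.sum(np.abs(np.diff(headings)))         # heading changes as a smoothness surrogate
    energy = np.sum(seg_lengths ** 2)                   # crude energy surrogate
    return (weights["length"] * length
            + weights["smoothness"] * turning
            + weights["energy"] * energy)

path = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 1.5], [3.0, 1.6]])
print(weighted_sum_cost(path, {"length": 1.0, "smoothness": 0.5, "energy": 0.1}))

Hard requirements such as a minimum obstacle clearance would instead be enforced by rejecting or heavily penalizing violating candidates rather than folding them into the weighted sum.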

3.1. Key Optimization Criteria

The following criteria are essential for evaluating and optimizing trajectories in geometric path planning:
  • Trajectory Length: Minimizing the path length is often the primary objective, as it directly impacts the robot’s efficiency and resource utilization. Shorter trajectories reduce travel time and energy consumption, making them ideal for many applications [].
  • Trajectory Smoothness: Smooth trajectories are crucial for ensuring stable and efficient robot motion. Abrupt changes in direction or velocity can lead to mechanical stress, increased energy consumption, and reduced accuracy. Smoothness is often quantified using curvature and jerk metrics [].
  • Time Efficiency: Time-optimal trajectories are critical in applications where speed is a priority, such as in industrial automation or search-and-rescue operations. Time efficiency is closely tied to the robot’s velocity profile and acceleration limits [].
  • Energy Consumption: Energy-efficient trajectories are vital for battery-powered robots or systems operating in energy-constrained environments. Optimizing energy usage involves minimizing unnecessary acceleration, deceleration, and idling [].
  • Collision Avoidance: Ensuring collision-free trajectories is a fundamental requirement in any navigation task. This involves not only avoiding static obstacles but also dynamically adapting to moving obstacles in real-time [].
  • Environmental Factors: External conditions such as wind resistance, terrain roughness, or fluid dynamics (in underwater or aerial robots) can significantly impact trajectory planning. These factors must be modeled and accounted for to ensure robust performance [].

3.2. Path Length as an Optimization Criterion

Path length is one of the most critical parameters in the optimization of mobile robot path planning. It is frequently employed as a primary criterion for evaluating and optimizing trajectories. The length of a path is typically calculated by summing the distances traveled by the robot at each time step as it moves from its initial position to the target destination. Mathematically, path length can be determined using algebraic norms, with the Euclidean norm (or $L_2$ norm) being one of the most widely used distance metrics. The general definition of the $L_p$ norm, denoted as $\|\cdot\|_p$, between two points in an $n$-dimensional space is given by
$$\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p},$$
where $x = (x_1, x_2, \ldots, x_n)$ is a vector representing the coordinates of a point, and $p$ is the order of the norm. For path planning in three-dimensional (3D) space, $n = 3$, and the coordinates $x$, $y$, and $z$ correspond to the Cartesian axes.
The Euclidean norm ($L_2$ norm) is the special case of the $L_p$ norm with $p = 2$. It is defined as follows:
$$\|x\|_2 = \sqrt{x^2 + y^2 + z^2}.$$
Path Length Calculation
To calculate the total length of a path, the Euclidean norm is applied to each segment of the path. For a path consisting of $m$ points, the total path length $L_k$ for the $k$-th path is given by
$$L_k = \sum_{l=1}^{m-1} \left\| p_{l+1} - p_l \right\|_2, \tag{3}$$
where $p_l = (x_l, y_l, z_l)$ represents the coordinates of the $l$-th point on the path, and $m$ is the total number of points, including the starting and target points. To illustrate this concept, consider Figure 2, which depicts two possible paths from an initial state to a target destination. Path 1 passes through 3 intermediate points ($m = 5$), while Path 2 passes through 2 intermediate points ($m = 4$). Using Equation (3), the total path lengths $L_1$ and $L_2$ for Path 1 and Path 2, respectively, can be calculated. Typically, the path with the shorter length is preferred, as it minimizes travel distance and, consequently, energy consumption and time expenditure [].
Figure 2. Two possible paths from an initial state to a target destination. Path 1 consists of 3 intermediate points, while Path 2 consists of 2 intermediate points.
While minimizing path length is a common objective in path planning, it is essential to consider other factors such as trajectory smoothness, collision avoidance, and energy efficiency. For instance, a shorter path may require sharper turns or higher acceleration, which can increase energy consumption and mechanical stress on the robot []. Therefore, path length should be optimized in conjunction with other criteria to achieve a balanced and efficient trajectory.
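As a concrete illustration of Equation (3), the short Python sketch below sums the Euclidean lengths of consecutive segments for two candidate waypoint lists; the coordinates are invented for illustration and are not taken from Figure 2:

import numpy as np

def path_length(points):
    # Total Euclidean length of a piecewise-linear path given as an (m, 3) sequence of points.
    points = np.asarray(points, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

# Two illustrative candidate paths sharing the same start and target.
path_1 = [(0, 0, 0), (1, 2, 0), (3, 2, 1), (4, 4, 1), (6, 5, 1)]   # three intermediate points
path_2 = [(0, 0, 0), (2, 1, 0), (4, 3, 1), (6, 5, 1)]              # two intermediate points

L1, L2 = path_length(path_1), path_length(path_2)
print(f"L1 = {L1:.3f}, L2 = {L2:.3f} -> prefer Path {'1' if L1 < L2 else '2'}")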

3.3. Path Smoothness in Robotic Navigation

The notion of path smoothness in navigation arises from the limitations imposed by the robot’s constrained angular velocity or the disparity in movement angles between its current intended state and the subsequent state (Figure 3) []. However, it is essential to acknowledge that the smoothness of a planned path is a complex outcome influenced by various factors, encompassing the robot’s dynamics and kinematics, as well as the dynamics associated with surrounding obstacles. Parameters such as angle, velocity, acceleration, and environmental dynamics collectively contribute to determining the displacement and regularity of a path. Neglecting elements such as obstacle avoidance, non-holonomic features, and speed limitations can result in a significant disparity between the intended course and the actual path followed by the robot [].
Figure 3. Movement angle difference in a smooth path and a non-smooth path. The green line indicates the smooth path and the red dashed line shows the non-smooth path.
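A simple way to quantify the movement-angle disparity illustrated in Figure 3 for a discretized path is to compare the headings of successive segments. The Python sketch below reports the maximum and mean heading change for two invented waypoint sequences; in practice, the threshold at which a turn counts as "sharp" would come from the robot's angular-velocity limits:

import numpy as np

def heading_changes(points):
    # Absolute change in movement angle between consecutive segments, in radians.
    pts = np.asarray(points, dtype=float)
    segs = np.diff(pts, axis=0)
    headings = np.arctan2(segs[:, 1], segs[:, 0])
    d = np.diff(headings)
    return np.abs((d + np.pi) % (2 * np.pi) - np.pi)   # wrap differences into [-pi, pi]

smooth_path = [(0, 0), (1, 0.3), (2, 0.8), (3, 1.5), (4, 2.4)]
sharp_path = [(0, 0), (1, 1.2), (2, 0.1), (3, 1.8), (4, 0.5)]
for name, p in [("smooth", smooth_path), ("non-smooth", sharp_path)]:
    dh = heading_changes(p)
    print(f"{name}: max turn = {np.degrees(dh.max()):.1f} deg, mean turn = {np.degrees(dh.mean()):.1f} deg")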
The pursuit of a smooth and uninterrupted path is motivated by its capability to enable the robot to navigate without sudden and acute turns. To accomplish path smoothness, three primary approaches are commonly employed: interpolation [,], special curves [,], and optimization [,]. Interpolation algorithms, while aiming to generate regular paths, can encounter challenges such as high computational costs and the non-convergence issue known as Runge’s phenomenon []. Transition curves, connecting straight segments of a path with a curve, offer advantages like zero start curvature, tangential joining of circular curves, and a uniform rate of curvature change. However, they necessitate the tuning of control points and parameters and may incur high computational costs, proving less effective at higher speeds [].
Optimization-Based Path Smoothing
One notably effective technique for achieving path smoothness is optimization-based path smoothing. This approach involves identifying the best path that satisfies various criteria, encompassing path length, safety, and energy consumption. Formulating the path design problem as an optimization issue allows for the development of algorithms that seek paths meeting these criteria. Recent research has extensively explored diverse optimization techniques for path design and trajectory optimization in both ground and aerial vehicles [,,].
Figure 3 provides a visual comparison between non-smooth and smooth paths: the smooth path not only covers a shorter distance but also follows a route that is feasible with respect to the robot’s dynamics, with a visibly smaller movement-angle difference than the non-smooth path.
Three-Dimensional Navigation
Optimal strategies for navigating in three-dimensional environments have witnessed notable advancements, as a variety of approaches have been developed that incorporate transition curves to abide by kinematic and dynamic motion constraints. Transition curves play a crucial role in achieving the seamless integration of smooth paths within these navigation frameworks. The key lies in ensuring that these curves meet the intricate requirements imposed by the motion dynamics of the robot, thus guaranteeing a harmonious trajectory. The notion of path smoothing, which is essential for enhancing the efficiency of robotic systems in terms of navigation, is intricately intertwined with a set of motion constraints. This becomes particularly vital in the context of three-dimensional environments, where the robot must traverse complex spatial configurations. The utilization of transition curves serves as an advanced solution, contributing to the overall optimization of the robot’s trajectory. Recent studies have introduced advanced smoothness criteria to address the limitations of traditional methods. These include the following:
  • Curvature Continuity: Ensuring that the curvature of the path is continuous, which is critical for high-speed navigation and dynamic environments [].
  • Jerk Minimization: Minimizing the rate of change of acceleration (jerk) to ensure smoother motion and reduce wear on the robot’s actuators [].
  • Energy-Efficient Smoothing: Optimizing paths to minimize energy consumption, which is particularly important for battery-operated robots [].
  • Adaptive Smoothing: Dynamically adjusting the smoothness of the path based on environmental changes and obstacle movements [].
Path Smoothing Methods
Table 1 provides a summary of some of the well-established path smoothing methods, including recent advancements. These methods leverage transition curves and encompass a diverse array of techniques aimed at addressing motion constraints. The table also includes detailed descriptions of the parameters and variables used in the mathematical formulations.
Table 1. Well-known path smoothing methods with practical considerations.
This section has presented a holistic perspective on the integration of transition curves in optimal navigation methods, emphasizing the crucial consideration of both kinematic and dynamic constraints. By doing so, the goal is to contribute to the development of a comprehensive and efficient robotic navigation experience. In this context, optimization-based methods emerge as a key player, adept at striking a delicate balance between competing criteria.
Optimization-based methods, showcased as a robust framework, demonstrate their prowess in generating paths that not only optimize efficiency but also ensure safety in robotic navigation. Leveraging mathematical optimization algorithms, these methods engage in an iterative refinement of the trajectory. This iterative process takes into account a spectrum of factors. The ultimate result of these optimization-based approaches is the creation of a path that optimally navigates the conflicting objectives inherent in robotic navigation systems. This involves a delicate equilibrium between minimizing path length, ensuring safety through obstacle avoidance, and conserving energy. The synthesis of these considerations culminates in an enhanced overall performance of robotic navigation systems, marking a significant stride toward achieving seamless and effective autonomous navigation.

3.4. Time Cost in Robotic Navigation

Time cost in navigation encapsulates the duration it takes for a robot to execute a predetermined path within its operational constraints. The imperative to minimize this time cost emerges as a pivotal optimization criterion, intricately shaped by the dynamic interplay of the robot’s functionality and the environmental dynamics and kinematics. The overarching objective is to bolster navigation efficiency by curtailing the time required for precise path tracking.
Time-Optimal Trajectory Planning (TOTP)
At the forefront of time optimization strategies is Time-Optimal Trajectory Planning (TOTP), a process finely tuned to craft a robot’s path for expeditious tracking of the predetermined trajectory. The pivotal juncture when the robot seamlessly adheres to the path while meeting operational requisites is termed the execution time. TOTP methodologies are strategically crafted to streamline path tracking, minimizing the time cost, which unfolds as a multifaceted function intricately woven with the robot’s dynamics and the environmental intricacies.
However, the resolution constraints inherent in both the robot and its environment pose formidable challenges to achieving TOTP and make the modeling task difficult. One pivotal approach involves maximizing velocity while judiciously considering constraints, mitigating the risk of undesirable jerking in the robot’s motion. Two types of time-optimal criteria can be defined for the general TOTP problem: continuous time and discrete time problem definitions. The continuous time-optimal optimization problem is defined as follows:
$$\min_{u(t)} \int_0^T 1 \, dt \quad \text{subject to} \quad \dot{x}(t) = f\big(x(t), u(t)\big),$$
where
  • $f(x, u)$ is the state-space model of the robot dynamics;
  • $x$ is the state vector, consisting of position, velocity, and possibly acceleration;
  • $u$ is the control input, which may correspond to voltage, torque, or other actuator commands;
  • $T$ is the total execution time.
The discrete time-optimal optimization problem is defined as follows:
$$\min_{u_k} \sum_{k=0}^{N-1} 1 \quad \text{subject to} \quad x_{k+1} = g(x_k, u_k),$$
where
  • $g(\cdot)$ is the discrete-time motion model, which maps the state $x_k$ and the control input $u_k$ at the $k$-th time sample to the next state;
  • $N$ is the final time step at which the robot reaches the goal;
  • $x_k$ is the $k$-th collision-free waypoint.
The methods for calculating the time cost of robotic navigation incorporate dynamic constraints, environmental factors, and optimization techniques to minimize the total execution time. Below are some key formulas used in recent research:
Time Cost with Dynamic Constraints
The time cost J for a trajectory can be calculated as:
$$J = \int_0^T \left( \|\dot{x}(t)\|^2 + \lambda \|u(t)\|^2 \right) dt, \tag{6}$$
where:
  • $\dot{x}(t)$ is the velocity of the robot;
  • $u(t)$ is the control input;
  • $\lambda$ is a weighting factor that balances the trade-off between velocity and control effort.
This formulation ensures that the robot minimizes both the time and energy consumption during navigation []. The weighting factor λ in Equation (6) balances the trade-off between minimizing the robot’s velocity magnitude and the control effort, and its optimal value is typically selected through empirical tuning, multi-objective optimization, or domain-specific criteria to align with the navigation task’s priorities.
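For a sampled trajectory, the functional in Equation (6) can be evaluated numerically. The sketch below approximates the integral with the trapezoidal rule for assumed velocity and control profiles; the profiles and the value of λ are placeholders chosen only to demonstrate the computation:

import numpy as np

def time_effort_cost(t, x_dot, u, lam):
    # Trapezoidal approximation of J = integral of (||x_dot||^2 + lam * ||u||^2) dt.
    # t: (N,) sample times; x_dot: (N, d) velocities; u: (N, m) control inputs.
    integrand = np.sum(x_dot ** 2, axis=1) + lam * np.sum(u ** 2, axis=1)
    return float(np.trapz(integrand, t))

t = np.linspace(0.0, 5.0, 200)
x_dot = np.stack([np.full_like(t, 0.8), 0.2 * np.sin(t)], axis=1)   # assumed velocity profile
u = np.stack([0.1 * np.cos(t), 0.05 * np.ones_like(t)], axis=1)     # assumed control profile
print(f"J = {time_effort_cost(t, x_dot, u, lam=0.5):.3f}")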
Time Cost with Environmental Constraints
In dynamic environments, the time cost must account for obstacles and environmental changes. A common approach is to use a penalty function:
$$J = \int_0^T \left( \|\dot{x}(t)\|^2 + \lambda \|u(t)\|^2 + \sum_i \phi_i\big(x(t)\big) \right) dt,$$
where
  • $\phi_i(x(t))$ is a penalty function that increases the cost when the robot approaches obstacles or violates environmental constraints;
  • $\lambda$ is a weighting factor for the control effort.
This method ensures collision-free navigation while minimizing time cost [].
Time Cost in Multi-Robot Systems
For multi-robot systems, the time cost can be extended to include coordination constraints:
$$J = \sum_{j=1}^{M} \int_0^T \left( \|\dot{x}_j(t)\|^2 + \lambda \|u_j(t)\|^2 + \sum_{k \neq j} \psi_{jk}\big(x_j(t), x_k(t)\big) \right) dt,$$
where
  • $M$ is the number of robots;
  • $\psi_{jk}(x_j(t), x_k(t))$ is a penalty function that ensures collision avoidance between robots $j$ and $k$.
This formulation is particularly useful in swarm robotics and collaborative tasks []. Recent research has introduced advanced methods to address the challenges of TOTP, including the following:
Deep Reinforcement Learning (DRL)
DRL-based approaches have been employed to learn time-optimal trajectories in complex environments []. These methods leverage neural networks to approximate the optimal policy, minimizing the time cost while satisfying dynamic constraints. The calculation criteria for DRL include the following:
  • Reward Function: Designed to penalize time consumption and deviations from the desired trajectory.
  • State-Action Space: Encodes the robot’s dynamics and environmental constraints.
  • Training Efficiency: Measured by the convergence rate and computational resources required.
Model Predictive Control (MPC)
MPC frameworks have been extended to incorporate time-optimal constraints, enabling real-time trajectory optimization []. The calculation criteria for MPC include:
  • Horizon Length: Determines the number of future steps considered in the optimization.
  • Constraint Handling: Ensures the feasibility of the trajectory under dynamic and environmental constraints.
  • Computational Complexity: Measured by the time required to solve the optimization problem at each time step.
Multi-Objective Optimization
Recent studies have combined time-optimality with other objectives, such as energy efficiency and safety, using Pareto optimization techniques []. The calculation criteria for multi-objective optimization include the following:
  • Pareto Front: Represents the trade-off between competing objectives.
  • Weighting Factors: Used to prioritize time-optimality over other objectives.
  • Scalability: Evaluated based on the ability to handle high-dimensional state spaces.
The choice of the weighting factor λ is crucial, as it governs the trade-off between trajectory smoothness, energy efficiency, and responsiveness to dynamic changes. In practice, λ can be selected in several ways:
  • Empirical tuning: Values of λ are adjusted experimentally until the robot achieves satisfactory performance in representative environments.
  • Optimization-based methods:  λ is treated as a hyperparameter and optimized through grid search, Bayesian optimization, or reinforcement learning to minimize a performance metric such as tracking error or energy usage.
  • Domain-specific constraints: For safety-critical or resource-limited systems, λ may be constrained by hardware limits (e.g., maximum actuator torque) or mission requirements (e.g., prioritizing faster response over energy saving).
Thus, λ is not fixed universally but is determined by a balance between robot dynamics, environmental complexity, and task requirements.
Adaptive TOTP
Adaptive methods dynamically adjust the trajectory based on environmental changes, ensuring time-optimality in dynamic settings []. The calculation criteria for adaptive TOTP include the following:
  • Replanning Frequency: Determines how often the trajectory is updated.
  • Convergence Speed: Measures the time required to adapt to new environmental conditions.
  • Robustness: Evaluated based on the ability to handle uncertainties and disturbances.
Applications and Challenges
Time-Optimal Trajectory Planning finds applications in various domains, including the following:
  • Autonomous vehicles;
  • Industrial robotics;
  • Aerial drones.
However, challenges such as computational complexity, real-time adaptability, and the trade-off between time-optimality and other constraints (e.g., energy consumption) remain active areas of research.
In a nutshell, the aim of TOTP involves the consideration of crucial factors, including acceleration, velocity, jerk, and the dynamic characteristics of the robot. In defining the TOTP problem, two primary criteria are typically emphasized: continuous time and discrete time optimization. These criteria are essential for optimizing the performance of robotic systems, ensuring the shortest time for trajectory execution. Recent advancements, such as DRL, MPC, and adaptive methods, have further enhanced the capabilities of TOTP, paving the way for more efficient and robust robotic navigation systems.

3.5. Energy Cost in Robotic Navigation

The optimization of energy consumption in mobile robots is a multifaceted endeavor intricately tied to the dynamic and kinematic model of the robot and environmental characteristics. Formulating optimal control strategies necessitates a nuanced consideration of the interplay between these factors. A pivotal criterion for effective energy management involves the reduction of the generated path length. Additionally, environmental features play a crucial role in shaping an optimum governing strategy for energy consumption in mobile robots. Research indicates that minimizing unnecessary movements and deploying predictive control algorithms are key strategies for significant energy reduction. This holistic perspective encompasses the summation of various energy costs, including the following:
  • Kinetic Energy ($E_k$): Energy associated with the robot’s motion.
  • Traction Resistance Energy ($E_f$): Energy dissipated in overcoming traction resistances.
  • Motor Heating Energy ($E_e$): Energy lost as heat in the motors.
  • Mechanical Friction Energy ($E_m$): Energy dissipated in overcoming friction torque.
  • Idle Energy ($E_{\text{idle}}$): Energy consumed by idling motors and onboard electric devices.
The total energy cost $E_{\text{total}}$ of a mobile robot can be comprehensively defined as follows:
$$E_{\text{total}} = \int_0^T P_{\text{total}}(t)\, dt,$$
where
  • $P_{\text{total}}(t)$ is the total power consumption and loss at time $t$;
  • $T$ is the total execution time.
This formulation provides a comprehensive metric for evaluating and optimizing the energy efficiency of mobile robots []. Recent research has introduced advanced methods to optimize energy consumption in robotic navigation, including the following:
Predictive Energy Management
Predictive control algorithms leverage environmental data to minimize energy consumption. The energy cost can be expressed as follows:
$$E_{\text{predictive}} = \int_0^T \left( P_{\text{motion}}(t) + P_{\text{idle}}(t) \right) dt,$$
where
  • $P_{\text{motion}}(t)$ is the power consumed during motion;
  • $P_{\text{idle}}(t)$ is the power consumed during idle states.
This approach has been shown to reduce energy consumption by up to 20% in dynamic environments [].
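For sampled power profiles, the predictive energy cost above reduces to a numerical integration of motion and idle power over the mission. The following sketch uses invented power profiles purely to show the computation; real values would come from motor models or measured telemetry:

import numpy as np

def predictive_energy(t, p_motion, p_idle):
    # E = integral of (P_motion(t) + P_idle(t)) dt, in joules when power is in watts.
    return float(np.trapz(p_motion + p_idle, t))

t = np.linspace(0.0, 60.0, 600)                     # a one-minute mission (assumed)
p_motion = 35.0 + 10.0 * np.abs(np.sin(0.2 * t))    # assumed drive power profile [W]
p_idle = np.full_like(t, 4.0)                       # assumed idle/electronics power [W]
print(f"E_predictive = {predictive_energy(t, p_motion, p_idle):.1f} J")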
Regenerative Energy Recovery
Regenerative braking systems recover kinetic energy during deceleration. The recovered energy $E_{\text{regen}}$ can be calculated as:
$$E_{\text{regen}} = \int_0^T \eta \cdot P_{\text{braking}}(t)\, dt,$$
where
  • $\eta$ is the efficiency of the regenerative braking system;
  • $P_{\text{braking}}(t)$ is the power generated during braking.
This method is particularly effective in electric and hybrid robotic systems [].

3.6. Risk Cost in Robotic Navigation

Ensuring safety is a pivotal consideration in path planning algorithms, and it is essential to create a risk map that accurately assesses the level of danger associated with different routes. The development of such a risk map is indispensable for evaluating the potential hazards when traversing specific positions, taking into account the presence of obstacles. To construct a risk map, a grid of probabilities is established using two-dimensional coordinates. Within this framework, a probability of zero signifies a negligible collision risk, while a probability of one indicates a heightened likelihood of collision. The computation of risk costs is contingent upon the probability of unforeseen events, encompassing:
  • Collision Risk: Probability of collisions with environmental elements or individuals.
  • Robot Malfunction: Probability of robot failure or abrupt movements.
  • Environmental Hazards: Probability of natural events such as rain or wind increasing the risk of slipping or crashing.
The risk level $R(t)$ at a specific location $r(t)$ can be calculated as:
$$R(t) = P_{\text{event}}(t) \cdot C_{\text{event}},$$
where
  • $P_{\text{event}}(t)$ is the probability of a risk event at time $t$;
  • $C_{\text{event}}$ is the cost associated with the event.
The total risk cost $R_{\text{total}}$ of a generated path from $r_0$ to $r_{\text{goal}}$ can be expressed as:
$$R_{\text{total}} = \int_0^T R(t)\, dt.$$
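The sketch below illustrates this formulation on a discretized risk map: each grid cell stores a collision probability, and the total risk of a candidate cell path is accumulated as the discrete analogue of the integral above. The map, event cost, and candidate paths are invented for demonstration:

import numpy as np

rng = np.random.default_rng(0)
risk_map = rng.random((20, 20)) * 0.3           # background collision probabilities in [0, 0.3]
risk_map[8:12, 5:15] = 0.9                      # an invented high-risk corridor
C_EVENT = 100.0                                 # assumed cost of a collision event
DT = 0.5                                        # assumed dwell time per visited cell [s]

def total_risk(path_cells):
    # Discrete analogue of R_total = integral of P_event(t) * C_event dt along a cell path.
    return sum(risk_map[i, j] * C_EVENT * DT for i, j in path_cells)

path_a = [(k, k) for k in range(20)]                                      # cuts through the corridor
path_b = [(k, 0) for k in range(20)] + [(19, j) for j in range(1, 20)]    # skirts around it
print(f"risk of path A = {total_risk(path_a):.1f}, risk of path B = {total_risk(path_b):.1f}")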
Recent studies have presented innovative techniques for evaluating risks in robotic navigation, which encompass:
Neural Network-Based Risk Prediction
Neural networks are used to predict collision probabilities based on historical data and real-time sensor inputs. The risk cost can be expressed as
$$R_{\text{NN}}(t) = f_{\text{NN}}\big(r(t), \theta\big),$$
where
  • $f_{\text{NN}}$ is the neural network function;
  • $\theta$ represents the network parameters.
This approach has been shown to improve risk prediction accuracy by up to 30% [].
Fuzzy Logic for Dynamic Risk Mapping
Fuzzy logic systems are employed to handle uncertainties in dynamic environments. The risk cost can be calculated as follows:
$$R_{\text{fuzzy}}(t) = \sum_i \mu_i\big(r(t)\big) \cdot C_i,$$
where
  • $\mu_i(r(t))$ is the membership function for the $i$-th risk factor;
  • $C_i$ is the cost associated with the $i$-th risk factor.
This method is particularly effective in unstructured environments [].
Figure 4 illustrates a concrete case example of a risk map within the context of path planning. This visual representation showcases the practical application of risk assessment and mapping techniques in determining optimal paths, providing valuable insights for navigating through dynamic environments safely and efficiently.
Figure 4. Risk map—Example of risk-aware navigation formulation. The left panel illustrates multiple candidate navigation paths from the Start to the Target, with different colored lines representing alternative routes generated by the path-planning algorithm (green: optimal path, purple: secondary paths, and blue: exploratory paths). The legend indicates that the urban map includes pedestrian zones, intersections, and road lanes that were considered in the navigation process. The right panel presents the corresponding risk map, where the height and color intensity represent the computed pedestrian–vehicle collision risk and other environmental obstacles along the route (from low—blue—to high—red). The optimal route is selected by minimizing the integrated pedestrian and obstacle collision risk across the path.

3.7. Integration of Optimization Criteria

The integration of these criteria into a unified optimization framework is a challenging yet essential task. Multi-objective optimization techniques are often employed to balance competing criteria, such as minimizing trajectory length while ensuring smoothness and collision avoidance []. Additionally, task-specific requirements, such as payload constraints or mission deadlines, may further influence the optimization process.

4. Approaches to Solving Optimal Navigation Problems

To address optimal navigation challenges, researchers use various methodologies, including numerical and analytical approaches. Analytical methods aim to minimize costs directly to obtain optimal solutions, but they can be challenging to apply in complex and constrained navigation scenarios. Therefore, researchers tend to use numerical solutions, which include sampling and sequence-based techniques such as greedy algorithms, Dynamic Programming, evolutionary algorithms, and sampling-based algorithms. These techniques have been applied in various studies, including multi-robot navigation, off-road navigation, and navigation under localization uncertainty [].

4.1. Greedy Algorithms

Greedy algorithms are optimization methods that select the best immediate (locally optimal) choice at each step without considering the problem as a whole or revisiting earlier decisions. They are simple to implement and understand, but they do not guarantee the best overall solution and can be slow. Some classic sequence-based approaches of this family include Depth First Search (DFS) [], Dijkstra’s Algorithm [], the A* Algorithm [], the D* Algorithm [], and the Phi* Algorithm []. Greedy algorithms are commonly used in applications such as robotic navigation and general problem-solving. While they can be ideal for problems with an optimal substructure, they may not always yield the best solution. The comparison provided in Figure 5 offers insights into the fundamental differences between greedy and optimal approaches in navigation scenarios.
Figure 5. Greedy algorithm pipeline: (Left) The real-world environment is represented as a map with obstacles and a target location. (Center) The environment is discretized into a grid map, with cells marking obstacles and accessible spaces. (Right) The grid is converted into a graph, where numbered circles (1–4) correspond to sampled key nodes (locations in the environment), and edge labels indicate transition costs used by the greedy algorithm to select the next step toward the target. This progression demonstrates how spatial navigation problems are systematically transformed for algorithmic decision-making and cost evaluation.
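To ground the graph stage of Figure 5, the sketch below runs Dijkstra's algorithm, one of the classic sequence-based methods listed above, on a small hand-made graph of four key nodes; the node labels and edge costs are invented and do not reproduce the figure:

import heapq

def dijkstra(graph, start, goal):
    # Shortest path on a weighted graph given as {node: [(neighbor, cost), ...]}.
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                              # stale queue entry
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(pq, (nd, nxt))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

graph = {1: [(2, 1.0), (3, 4.0)], 2: [(3, 2.0), (4, 6.0)], 3: [(4, 1.5)], 4: []}
print(dijkstra(graph, start=1, goal=4))           # -> ([1, 2, 3, 4], 4.5)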

4.2. Dynamic Programming (DP)

Dynamic Programming (DP) strategically addresses optimal problems by breaking them down into more manageable sub-optimal problems, progressing towards the optimal solution step by step []. This process enhances efficiency and computational effectiveness. In the context of graph theory, DP formulations play a pivotal role in determining optimal sequences for visiting nodes within an adjacency graph associated with a decomposed environment. Figure 6 provides a general overview of dynamic programming in the context of navigation.
Figure 6. Dynamic programming pipeline for shortest path planning: (Left) The GPS map shows the robot’s position, obstacles, and the target. (Center) The environment is discretized into a grid, with black cells representing obstacles. (Right) Dynamic programming iteratively calculates the minimum cost-to-go (numbers 1–6) from each cell to the target (star) in a reverse manner—starting from the goal and expanding outward—and marks the optimal path cells used to reach the target. The blue arrow illustrates this backward calculation mechanism from the target to the start. (Far right) The solution is mapped onto a decision tree for policy extraction or further analysis. Symbols: black triangle = robot, black square = obstacle, green = optimal path cell, star = target.
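The cost-to-go computation sketched in Figure 6 can be written as a short value-iteration loop over a grid. The layout below (grid size, obstacle wall, unit move cost) is invented for illustration:

import numpy as np

FREE, OBST = 0, 1
grid = np.zeros((5, 6), dtype=int)
grid[1, 1:4] = OBST                              # an invented obstacle wall
goal = (0, 5)

cost = np.full(grid.shape, np.inf)               # minimum cost-to-go; 0 at the goal
cost[goal] = 0.0
for _ in range(grid.size):                       # enough backward sweeps for values to settle
    updated = cost.copy()
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            if grid[i, j] == OBST or (i, j) == goal:
                continue
            nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < grid.shape[0] and 0 <= j + dj < grid.shape[1]]
            updated[i, j] = 1.0 + min(cost[n] for n in nbrs)   # one unit cost per move
    if np.array_equal(updated, cost):
        break
    cost = updated

print(cost)                                      # moves-to-goal from every free cell (inf at obstacles)

The optimal path from any start cell is then extracted by repeatedly stepping to the neighbor with the smallest cost-to-go, corresponding to the marked cells in Figure 6.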

4.3. Evolutionary Algorithms (EAs)

Evolutionary algorithms (EAs) maintain a randomly generated candidate set of solutions and apply a quality function as an abstract fitness measure, iteratively converging from the initial population toward the global optimum. Algorithms such as Particle Swarm Optimization (PSO) [], Ant Colony Optimization (ACO) [], and the Genetic Algorithm (GA) [] fall under the umbrella of EAs. Particularly effective in addressing multi-objective problems, they offer viable solutions for optimizing path planning scenarios with diverse criteria. However, it is essential to note that the inclusion of constraints can introduce computational challenges for these algorithms, since constraints restrict the feasible solution space and can increase the computational burden. Despite this, EAs remain powerful tools for solving complex optimization problems, providing valuable solutions in various domains. Recent state-of-the-art (SOTA) approaches combine EAs with hybrid strategies, multi-objective optimization, and adaptive mechanisms to improve path quality, efficiency, and robustness, as summarized in Table 2.
Table 2. SOTA approaches in evolutionary algorithms for navigation and path planning.

4.4. Sampling-Based Algorithms

Sampling-based algorithms strike a compromise between greedy search and exhaustive exploration of large, unknown areas, making them suitable for high-dimensional and complex problems with low computational costs. Probabilistic Roadmap (PRM) [], Rapidly-exploring Random Tree (RRT) [], and Rapidly-exploring Random Tree Star (RRT*) [] are some of the most popular sampling-based algorithms. However, these algorithms may fail in dynamic environments. Recent advancements, such as Adaptive RRT* [] and Deep Sampling-Based Planning [], have improved their performance in dynamic and uncertain environments. Table 3 summarizes recent developments in sampling-based navigation algorithms.
Table 3. Recent sampling-based navigation algorithms.
By and large, sampling-based motion planners offer a unique balance of strengths and limitations compared to other planning methods.
  • Key Advantages: These planners are scalable and flexible, making them suitable for high-dimensional spaces (e.g., robotic manipulators) where grid-based searches are computationally prohibitive. They do not require explicit environment discretization, enabling efficient path generation in complex, cluttered settings. Modern extensions, such as Adaptive RRT* and deep learning-guided planners, further enhance adaptability by dynamically adjusting sampling strategies in response to moving obstacles or semantic environmental cues.
  • Limitations: Despite their strengths, standard sampling-based methods can produce sub-optimal, non-smooth paths, often requiring post-processing. They also struggle in dynamic environments without specific adaptations and are sensitive to parameter tuning (e.g., step size, sampling distributions). Additionally, learning-based enhancements, while improving adaptability, can introduce extra computational overhead from model training and real-time inference, potentially affecting performance.
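As a minimal illustration of the sampling-based family discussed above, the sketch below implements a basic RRT in a 2-D workspace with circular obstacles. Obstacle positions, step size, and sampling parameters are invented; a practical planner would also check the edge between nodes for collisions and typically post-process the result for smoothness:

import numpy as np

rng = np.random.default_rng(1)
OBSTACLES = [((5.0, 5.0), 2.0), ((2.0, 7.0), 1.0)]    # invented circular obstacles (center, radius)
START, GOAL = np.array([0.5, 0.5]), np.array([9.0, 9.0])
STEP, GOAL_TOL, MAX_ITERS = 0.5, 0.5, 3000

def collision_free(p):
    # Point-wise check only; a full planner would also check the connecting segment.
    return all(np.linalg.norm(p - np.array(c)) > r for c, r in OBSTACLES)

def rrt():
    nodes, parents = [START], {0: None}
    for _ in range(MAX_ITERS):
        sample = GOAL if rng.random() < 0.1 else rng.uniform(0.0, 10.0, size=2)   # goal bias
        near = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
        direction = sample - nodes[near]
        new = nodes[near] + STEP * direction / (np.linalg.norm(direction) + 1e-9)
        if not collision_free(new):
            continue
        nodes.append(new)
        parents[len(nodes) - 1] = near
        if np.linalg.norm(new - GOAL) < GOAL_TOL:
            path, idx = [], len(nodes) - 1
            while idx is not None:                 # walk back to the root to recover the path
                path.append(nodes[idx])
                idx = parents[idx]
            return list(reversed(path))
    return None

path = rrt()
print(f"found a path with {len(path)} nodes" if path else "no path found")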

4.5. Recent Advances in Optimal Navigation

Recent research has introduced advanced methods to address the challenges of optimal navigation, including the following:
  • Deep Reinforcement Learning (DRL): DRL-based approaches have been employed to learn optimal navigation policies in complex environments [].
  • Model Predictive Control (MPC): MPC frameworks have been extended to incorporate dynamic constraints, enabling real-time navigation optimization [].
  • Multi-Objective Optimization: Recent studies have combined navigation objectives, such as time-optimality and energy efficiency, using Pareto optimization techniques [].
  • Adaptive Navigation: Adaptive methods dynamically adjust navigation strategies based on environmental changes, ensuring robustness in dynamic settings [].
The selection of path planning strategies and styles should consider the characteristics of the robot and environments, as well as priorities and tasks. A general comparison of different path planning architectures is provided by []. Recent advancements in greedy algorithms, dynamic programming, evolutionary algorithms, and sampling-based algorithms have significantly improved the efficiency and robustness of optimal navigation systems. These approaches, combined with modern techniques such as DRL and MPC, pave the way for more effective and adaptive navigation solutions in complex and dynamic environments.

5. Overview of Collision-Free Path Planning Strategy

To craft robust path planning algorithms, a meticulous understanding of both priorities and constraints specific to the problem is essential. Beyond prioritization, the path planning algorithm design must address various constraints inherent in the system. The process involves a comprehensive evaluation of factors influencing the navigation system’s performance. Figure 7 provides valuable insight into the overarching block diagram of a navigation system, serving as a guide to identify additional elements crucial for an inclusive design approach. This holistic perspective ensures that the developed algorithms align with the intricate requirements of the navigation environment, fostering efficiency and adaptability.
Figure 7. Overview of a collision-free path planning system diagram.
Assuming the robot is equipped with a tracking controller and relies on monitored motion, global map data, real-time sensory information, and obstacle positions, Figure 7 emphasizes the generation of a risk map matrix and the design of a path planning algorithm. While optimizing path length is a priority, the limitations of global approaches in ensuring safety within dynamic environments are acknowledged. To address this, a hybrid path planning approach integrates reactive and classic methods. This combination offers a robust solution to navigate challenges in dynamic environments, where real-time adaptation is crucial for ensuring both efficiency and safety. This approach can benefit from the advantages of both classic and reactive methods while reducing their deficiencies.

5.1. Hybrid Path Planning

Hybrid path planning combines the strengths of global and local planning methods to achieve robust navigation in dynamic environments. A typical hybrid approach can be formulated as follows:
$$P_{\text{hybrid}} = \alpha \cdot P_{\text{global}} + (1 - \alpha) \cdot P_{\text{local}},$$
where
  • $P_{\text{hybrid}}$ is the hybrid path;
  • $P_{\text{global}}$ is the globally optimal path;
  • $P_{\text{local}}$ is the locally adjusted path;
  • $\alpha$ is a weighting factor that balances global and local planning.
In hybrid path planning, the weighting factor $\alpha$ plays a pivotal role in determining how much influence the globally optimal path $P_{\text{global}}$ has compared to the locally adjusted path $P_{\text{local}}$. Rather than keeping $\alpha$ fixed, recent research emphasizes the importance of adaptively tuning this factor in real time, allowing robots to balance long-term optimality with short-term responsiveness under dynamic and uncertain conditions.
Several adaptive strategies have been proposed in the literature:
  • Behavior-based methods: These approaches adjust α depending on environmental cues such as obstacle density or the need for smoother local maneuvers. For instance, heuristic and bio-inspired algorithms like the Hybrid Improved Artificial Fish Swarm Algorithm (HIAFSA) dynamically regulate α based on proximity to obstacles [].
  • Optimization-driven methods: Here, α is updated online through explicit optimization frameworks. A notable example is the adaptive visibility graph combined with A*, which minimizes tracking error and energy consumption by recalibrating α in response to evolving constraints [].
  • Learning-based approaches: More recent methods rely on reinforcement learning and evolutionary strategies to “learn” how α should evolve across diverse scenarios. The Multi-strategy Hybrid Adaptive Dung Beetle Optimization (MSHADO) algorithm, for example, employs chaotic mapping and multi-strategy fusion to enhance population diversity, enabling robots to adaptively balance global exploration with local exploitation [].
These adaptive mechanisms highlight that α is not merely a tuning constant, but a dynamic variable central to achieving both robust global guidance and agile local responsiveness in hybrid path planning frameworks.
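A minimal sketch of the blending rule above is given below, with a simple clearance-based heuristic standing in for the adaptive schemes just described (it is not an implementation of HIAFSA or MSHADO). Both paths are assumed to have been resampled to the same number of waypoints, and the clearance thresholds are illustrative:

import numpy as np

def adaptive_alpha(clearance, d_safe=1.0, d_free=3.0):
    # Trust the global plan when clearance is large; shift toward the local plan near obstacles.
    return float(np.clip((clearance - d_safe) / (d_free - d_safe), 0.0, 1.0))

def blend_paths(p_global, p_local, clearance):
    # P_hybrid = alpha * P_global + (1 - alpha) * P_local, applied waypoint-wise.
    alpha = adaptive_alpha(clearance)
    return alpha * np.asarray(p_global) + (1.0 - alpha) * np.asarray(p_local), alpha

p_global = np.array([[0, 0], [1, 1], [2, 2], [3, 3]], dtype=float)        # long-horizon plan
p_local = np.array([[0, 0], [1, 0.5], [2, 1.2], [3, 2.5]], dtype=float)   # reactive adjustment
for clearance in (0.5, 2.0, 4.0):                                         # distance to nearest obstacle [m], assumed
    blended, a = blend_paths(p_global, p_local, clearance)
    print(f"clearance = {clearance}: alpha = {a:.2f}, second waypoint = {blended[1]}")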

5.2. Real-Time Adaptation

Real-time adaptation is crucial for ensuring collision-free navigation in dynamic environments. One approach involves the use of Model Predictive Control (MPC) to continuously update the robot’s trajectory based on real-time sensory data. The MPC formulation can be expressed as follows:
$$\min_{u(t)} \sum_{t=0}^{T} \left( \| x(t) - x_{\text{ref}}(t) \|^2 + \| u(t) \|^2 \right),$$
where
  • $x(t)$ is the robot’s state at time $t$;
  • $x_{\text{ref}}(t)$ is the reference trajectory;
  • $u(t)$ is the control input;
  • $T$ is the prediction horizon.
The specific form of $u(t)$ depends on the system dynamics and the navigation problem under consideration. For instance, in a unicycle model,
$$u(t) = \begin{bmatrix} v(t) \\ \omega(t) \end{bmatrix},$$
where $v(t)$ is the linear velocity and $\omega(t)$ is the angular velocity. In a differential-drive robot, $u(t)$ may correspond to the left and right wheel velocities, while for holonomic robots, it can include translational components along multiple axes. In motion control formulations, $u(t)$ may directly represent the applied torques, whereas in embedded tracking controllers, it can denote the next desired position of the robot at each prediction step.
The inclusion of the control penalty $\|u(t)\|^2$ in the cost function ensures smooth and feasible motion by discouraging aggressive actuation, while the tracking term $\|x(t) - x_{\text{ref}}(t)\|^2$ enforces accurate trajectory following. Thus, $u(t)$ is central to balancing tracking accuracy, energy efficiency, and dynamic feasibility in the MPC framework.
Recent advancements in MPC include the integration of deep learning models to improve prediction accuracy [].
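To make the receding-horizon idea concrete, the sketch below solves one MPC step for a unicycle model tracking an assumed straight-line reference, using a general-purpose optimizer in place of a dedicated MPC solver. The horizon, time step, weights, and control bounds are illustrative assumptions:

import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.1, 10                              # step size [s] and prediction horizon (assumed)

def rollout(x0, u_flat):
    # Simulate unicycle dynamics x = (px, py, theta) under controls u = (v, omega).
    xs, x = [np.array(x0, dtype=float)], np.array(x0, dtype=float)
    for v, w in u_flat.reshape(-1, 2):
        x = x + DT * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
        xs.append(x.copy())
    return np.array(xs)

def mpc_cost(u_flat, x0, x_ref):
    # Tracking term plus control penalty, mirroring the objective above.
    xs = rollout(x0, u_flat)
    tracking = np.sum((xs[1:, :2] - x_ref) ** 2)   # penalize position error along the horizon
    effort = np.sum(u_flat ** 2)
    return tracking + 0.1 * effort

x0 = np.array([0.0, 0.0, 0.0])
x_ref = np.stack([np.linspace(0.1, 1.0, HORIZON), np.zeros(HORIZON)], axis=1)   # straight-line reference
u0 = np.zeros(2 * HORIZON)                          # warm start (here simply zeros)
res = minimize(mpc_cost, u0, args=(x0, x_ref), method="L-BFGS-B",
               bounds=[(-1.0, 1.0)] * (2 * HORIZON))   # crude control limits
u_opt = res.x.reshape(-1, 2)
print("first control to apply (v, omega):", np.round(u_opt[0], 3))

In a receding-horizon loop, only the first control is applied, the state is re-measured, and the optimization is repeated at the next step.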

5.3. Recent Advancements in Collision-Free Path Planning

Recent research has introduced advanced methods to address the challenges of collision-free path planning, including the following:
  • Deep Reinforcement Learning (DRL): DRL-based approaches have been employed to learn collision-free navigation policies in complex environments [].
  • Multi-Agent Path Planning: Techniques for coordinating multiple robots to avoid collisions while achieving individual goals [].
  • Uncertainty-Aware Planning: Methods that account for uncertainties in sensor data and environmental dynamics [].
  • Energy-Efficient Path Planning: Energy-efficient path planning combines the optimization of energy consumption with collision avoidance, employing various algorithmic strategies tailored to different robotic systems and environments. These approaches address challenges like terrain roughness, multi-agent coordination, and dynamic obstacles while minimizing motion costs [].
The design of collision-free path planning algorithms requires a holistic approach that considers both global and local constraints. Recent advancements in risk map generation, hybrid path planning, and real-time adaptation have significantly improved the robustness and efficiency of navigation systems. These approaches, combined with modern techniques such as DRL and MPC, pave the way for more effective and adaptive collision-free navigation solutions in complex and dynamic environments.

6. Challenges in Geometric Optimal Navigation

Geometric optimal navigation involves finding the shortest or most efficient path for a robot while avoiding obstacles and adhering to dynamic constraints. However, several challenges arise in this domain, particularly in high-dimensional spaces, real-time computation, and the integration of perception systems. This section discusses these challenges, recent advancements, and their implications for robotic navigation.

6.1. Scalability in High-Dimensional Spaces

High-dimensional spaces pose significant computational challenges for path planning. As the dimensionality of the environment increases, the computational complexity grows exponentially, making it difficult to find optimal paths efficiently. Recent studies have explored dimensionality reduction techniques and machine learning to address this issue [].

6.1.1. Dimensionality Reduction Techniques

Dimensionality reduction techniques, such as Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE), have been applied to reduce the complexity of high-dimensional spaces. Mathematically, PCA can be expressed as
$$Y = X \cdot W,$$
where
  • X is the high-dimensional data matrix;
  • W is the transformation matrix;
  • Y is the low-dimensional representation.
These techniques help reduce the computational burden while preserving the essential structure of the environment [].
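As a brief sketch of the PCA projection above, the following Python snippet computes the transformation matrix from the leading singular vectors of centered data and projects invented high-dimensional configuration samples onto three components:

import numpy as np

def pca_project(X, k):
    # Project data onto its top-k principal components: Y = X_centered @ W.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows of Vt are the principal directions
    W = Vt[:k].T
    return Xc @ W, W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12)) @ rng.normal(size=(12, 12))   # invented samples of a 12-D configuration space
Y, W = pca_project(X, k=3)
print("reduced representation:", Y.shape, "- transformation matrix:", W.shape)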

6.1.2. Machine Learning Approaches

Machine learning approaches, such as deep autoencoders, have been used to learn compact representations of high-dimensional spaces. The autoencoder loss function is given by
$$\mathcal{L} = \| X - \hat{X} \|^2,$$
where $\hat{X}$ is the reconstructed data. These methods enable efficient path planning in high-dimensional environments by learning low-dimensional embeddings [].

6.2. Real-Time Computation for Dynamic Environments

Real-time navigation in dynamic environments involves solving optimal path planning problems under stringent timing requirements while accounting for moving obstacles, robot dynamics, and environmental uncertainty. As problem complexity scales with environment size, obstacle density, and number of agents, scalable algorithmic techniques become vital []. The optimal navigation problem in a dynamic environment can be formulated as follows:
$$\begin{aligned}
\min_{x(t),\, u(t)} \quad & J = \int_0^T c\big(x(t), u(t), t\big)\, dt \\
\text{s.t.} \quad & \dot{x}(t) = f\big(x(t), u(t), t\big) && \text{(dynamics)} \\
& x(t) \notin O(t) && \text{(collision avoidance)} \\
& x(0) = x_{\text{start}}, \quad x(T) = x_{\text{goal}} && \text{(boundary conditions)} \\
& u(t) \in U && \text{(control limits)}
\end{aligned}$$
where $x(t)$ is the robot state trajectory, $u(t)$ are the controls, $c(\cdot)$ is a cost integrating path length, smoothness, energy, or safety margins, and $O(t)$ models time-varying obstacles.
Scalability challenges arise from the curse of dimensionality, dynamic constraints, and frequent replanning triggered by environmental changes.
Overall scalability strategies include the following:
  • Parallelizing computationally intensive steps (sampling, collision checking, graph search).
  • Adaptive spatial and temporal resolution to reduce unnecessary computation.
  • Decomposing the problem into hierarchical or modular sub-problems.
  • Leveraging learned models to prune search space or estimate costs rapidly.

6.3. Parallel Optimization for Real-Time Trajectory Planning

Optimization-based planners seek continuous trajectories by minimizing a cost function under dynamic and environmental constraints. While effective in theory, traditional solvers often converge too slowly for real-time navigation, limiting their use in dynamic environments.
Parallel gradient-based optimization addresses these challenges by leveraging multi-core CPUs or GPUs to accelerate cost and gradient evaluations. These methods enable planners to meet strict real-time requirements while maintaining trajectory quality, as described in Algorithm 1.
Algorithm 1 Real-Time Parallel Trajectory Optimization
 1: Input: previous trajectory $x(t)$, control inputs $u(t)$ (warm start)
 2: while within time budget and convergence not met do
 3:     Parallel: for each knot along the trajectory:
 4:         Compute gradients for smoothness, obstacle cost, and dynamics
 5:         Compute constraint gradients (e.g., collision or control limits)
 6:     Aggregate gradients
 7:     Update $x(t)$ and $u(t)$ (e.g., via projected gradient descent)
 8:     Project the updated trajectory to enforce hard constraints
 9: end while
10: Output: refined trajectory $x^*$, controls $u^*$
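The NumPy sketch below mirrors the structure of Algorithm 1 on a CPU: all knot gradients (a smoothness term and a hinge-style obstacle penalty) are computed in one batched operation, which is the step a GPU implementation would parallelize across knots. The obstacle, step size, and iteration budget are invented, and the boundary knots are simply held fixed in place of a full constraint projection:

import numpy as np

OBSTACLE, RADIUS = np.array([2.5, 2.2]), 1.0            # invented circular obstacle
start, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
traj = np.linspace(start, goal, 30)                      # warm start: straight-line trajectory

def batched_gradients(x):
    # Gradients of a smoothness cost and an obstacle penalty, evaluated for all knots at once.
    g = np.zeros_like(x)
    g[1:-1] += 2 * x[1:-1] - x[:-2] - x[2:]              # discrete second difference (smoothness)
    diff = x - OBSTACLE
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    pen = np.clip(RADIUS + 0.5 - dist, 0.0, None)        # active only inside a 0.5 m safety margin
    g += -2.0 * pen * diff / np.maximum(dist, 1e-9)      # push intruding knots away from the obstacle
    return g

for _ in range(200):                                     # fixed iteration budget ("time budget")
    step = batched_gradients(traj)
    step[0] = step[-1] = 0.0                             # keep the start and goal knots fixed
    traj = traj - 0.1 * step
print("minimum clearance from obstacle:",
      float(np.min(np.linalg.norm(traj - OBSTACLE, axis=1)) - RADIUS))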

6.4. Strategies for Real-Time Scalability

Effective real-time trajectory optimization relies on a combination of algorithmic innovations and hardware acceleration. The following strategies form the backbone of scalable real-time planners:
  • GPU-Accelerated Evaluation: Modern GPUs allow hundreds to thousands of trajectory knot gradients to be evaluated simultaneously, enabling speed-ups by orders of magnitude compared to serial CPUs. This parallelism dramatically reduces iteration times for gradient computation and constraint checking, empowering planners to meet real-time deadlines even in high-dimensional state spaces [,,].
    Recent work by Rastgar [] proposes novel GPU-parallel optimization algorithms that adapt constraint formulations to fully leverage GPU architectures, markedly enhancing scalability and robustness in dynamic scenarios. Similarly, Yu et al. [] introduce TOP, trajectory optimization via parallel consensus ADMM that achieves near-constant time complexity per iteration by decomposing long trajectories into parallelizable segments, enabling large-scale real-time path planning on GPUs.
  • Warm Starting: Utilizing the previously computed optimal trajectory as an initial guess accelerates the convergence of iterative solvers. This approach is especially effective in dynamic environments where consecutive plans differ only slightly [,].
    By reusing prior solutions, planners reduce redundant computation while improving continuity and smoothness of resulting trajectories. This practical technique aligns with state-of-the-art GPU-accelerated approaches that integrate warm-starting for real-time feasibility.
  • Receding Horizon Execution: Instead of optimizing over the entire trajectory horizon, planners focus on a shorter, fixed-duration segment. Only the initial portion of the plan is executed before replanning occurs, maintaining continual responsiveness to environmental changes and dynamic obstacles [,].
    This limited horizon approach bounds computational demands, facilitates faster replanning, and supports adaptive trajectory refinement, key for scalable navigation in cluttered or rapidly changing environments.
  • Adaptive Termination and Step Sizing: Optimization algorithms dynamically adjust their step sizes and employ early stopping criteria once trajectories reach a suitable quality level within the operational time budget. This balances the trade-off between latency and solution optimality.
    For example, algorithms may terminate as soon as collision-free smooth paths are found, even if not perfectly optimal, ensuring timely availability of actionable plans without compromising safety [].
  • Integrated Recent Advances:
    Incorporating the above core principles, recent methods extend scalability and robustness in real-time planning:
    Semantic-Aware Optimization: He et al. [] develop a spatio-temporal semantic graph optimizer tailored for urban autonomous driving. Their approach handles dynamic obstacle semantics through sparse graph formulations, enabling real-time feasible trajectories that intelligently incorporate semantic understanding of traffic participants and road elements.
    Hybrid Sampling and Rewiring: Silveira et al. [] propose RT-FMT, a hybrid of Fast Marching Tree and RT-RRT*, that combines incremental rewiring and local-to-global tree reuse for faster execution and improved path quality in dynamic environments. By efficiently reusing prior tree structures and limiting expansion scope, RT-FMT exemplifies algorithmic decomposition, enabling scalability.
    Piecewise Parallel Optimization: Yu et al. [] introduce the TOP framework, which decomposes long trajectories into smaller segments solved in parallel while ensuring high-order continuity through consensus constraints. Deploying this method on GPUs supports extremely large-scale and long-horizon real-time trajectory optimization, pushing the frontier of computational performance in robotics.
    Constraint Reformulation for Parallelism: Rastgar’s thesis [] innovates by remodeling kinematic and collision constraints to be more amenable to parallel GPU computation, thereby enhancing planner scalability and resulting in more reliable real-time performance. GPU acceleration leverages parallel processing in graphics units to speed up computation-heavy tasks like sensor data processing and path planning, enabling real-time navigation in complex environments. Neural approximators, such as neural networks running on GPUs or neuromorphic chips, offer efficient models that reduce computation while preserving accuracy. These technologies are crucial for power-limited robots and edge devices, allowing them to perform advanced navigation tasks with manageable latency and energy usage [,,].
Together, these innovations illustrate a robust framework for trajectory optimization under real-time constraints, combining modern parallel hardware, algorithmic acceleration, and semantic-aware planning to maintain low latency and high fidelity in dynamic, obstacle-rich environments.

6.5. Integration of Perception Systems

Accurate environment modeling relies on the seamless integration of perception systems, such as LiDAR, cameras, and IMUs. Recent advancements in multi-sensor fusion and deep learning have improved perception accuracy [].

Deep Learning for Perception

Deep learning models, such as convolutional neural networks (CNNs), have been used to process sensory data and extract meaningful features. The loss function for training a CNN is given by
L = \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2 ,
where $y_i$ and $\hat{y}_i$ are the ground-truth and predicted values, respectively. These models improve perception accuracy in complex environments [].
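As a minimal illustration of this loss in practice (not tied to any specific perception system from the surveyed literature), the following PyTorch sketch trains a small CNN regressor with the summed squared-error objective; the architecture, input resolution, and dummy data are assumptions.

```python
# Minimal PyTorch sketch: small CNN regressor trained with the squared-error loss L.
# Architecture, input size, and data are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                         # e.g., a 2-D regression target
)
criterion = nn.MSELoss(reduction="sum")       # summation over samples matches L
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(8, 3, 64, 64)            # dummy sensor images
targets = torch.randn(8, 2)                   # dummy ground-truth values y_i

for epoch in range(5):
    optimizer.zero_grad()
    preds = model(images)
    loss = criterion(preds, targets)          # L = sum_i (y_i - y_hat_i)^2
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.3f}")
```

In practice, the targets would be supervised labels derived from calibrated sensor data rather than random tensors.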
Geometric optimal navigation is a critical component of robotic systems, enabling efficient and safe movement in complex environments. Future research should therefore focus on developing more adaptive and energy-efficient navigation strategies and on improving the robustness of perception systems in uncertain and dynamic environments. Addressing these challenges will pave the way for more effective and reliable robotic navigation systems.

6.6. Ethical and Safety Concerns in Human–Robot Interactions

Ethical and safety concerns are critical for the real-world deployment of autonomous systems. Recent research has focused on developing ethical frameworks and safety protocols for human–robot interactions. Ethical frameworks guide robots to act in ways that respect societal norms and human rights. Core principles include the following:
  • Transparency: Robots should clearly communicate intentions and behaviors to enhance human trust.
  • Accountability: Developers and operators must be accountable for system behavior, particularly in high-stakes contexts.
  • Fairness: Systems should actively mitigate bias and promote equitable treatment across user demographics.
Ethically aligned systems must reconcile trade-offs between efficiency, safety, and moral responsibility [,]. Recent research emphasizes formalizing these principles in algorithms for real-time navigation and decision-making [].
Beyond ethical reasoning, physical safety remains a critical design constraint. Key safety mechanisms include the following:
  • Collision Avoidance: Sensor fusion and predictive models allow robots to anticipate and avoid human contact.
  • Emergency Stop Mechanisms: Systems must be capable of halting immediately under risk conditions.
  • Human-in-the-Loop Control: Dynamic shared autonomy enables humans to intervene in uncertain or dangerous scenarios.
Recent developments use machine learning to dynamically assess and adapt to human proximity and behavior, improving safety in unpredictable settings [,,].
To translate abstract ethical principles into practical algorithm designs, we propose a clear, step-wise framework that integrates ethics and safety throughout the path planning process [,,,]. The framework begins by identifying stakeholders and eliciting their safety and equity requirements, grounding the system in real social needs. Measurable metrics such as probabilistic risk scores, fairness constraints, and safety limits are then formalized to guide planning decisions.
Uncertainty and risk are modeled using stochastic predictions of dynamic agents, enabling the computation of per-trajectory risk assessments that inform ethical trade-offs. Ethics are embedded via multi-objective cost augmentation, supervisory filters to reject ethically unacceptable trajectories, or policy learning methods that internalize fairness and safety goals. Real-time feasibility and explainability are ensured by optimizing computational efficiency and producing human-interpretable justifications for decisions. Validation through diverse scenario testing and continuous audit ensures robustness and accountability.
A concrete example is the ethical trajectory planning algorithm by Geißlinger et al. [], which encodes five ethical principles—risk minimization, priority for the worst-off, equal treatment, responsibility, and acceptable risk thresholds—into the trajectory evaluation. Through probabilistic risk modeling, the planner scores multiple candidate paths, penalizing those that unfairly endanger vulnerable users or exceed risk limits. The result is fair, safe trajectories selected transparently in real time, validated across thousands of scenarios.
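The following Python sketch illustrates, in simplified form, how such multi-objective cost augmentation and a supervisory filter can be combined when scoring candidate trajectories. It is not the planner of Geißlinger et al.; the Gaussian risk model, vulnerability weights, and thresholds are illustrative assumptions.

```python
# Illustrative sketch of multi-objective cost augmentation with a supervisory ethical
# filter. NOT the planner of Geißlinger et al.; risk model, weights, and thresholds
# are assumptions chosen for readability.
import numpy as np

def collision_risk(traj, agent_pred, sigma=0.6):
    """Toy per-trajectory risk: Gaussian-shaped closeness to a predicted agent
    position at each time step, accumulated over the horizon."""
    d = np.linalg.norm(traj - agent_pred, axis=1)
    return float(np.sum(np.exp(-(d ** 2) / (2 * sigma ** 2))))

def ethical_score(traj, goal, agent_preds, vulnerabilities,
                  w_eff=1.0, w_risk=4.0, max_risk=1.5):
    """Augmented cost = efficiency + vulnerability-weighted risk.
    Returns None (supervisory rejection) if any agent's risk exceeds the threshold."""
    efficiency = (np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
                  + np.linalg.norm(traj[-1] - goal))
    total_risk = 0.0
    for pred, vuln in zip(agent_preds, vulnerabilities):
        r = collision_risk(traj, pred)
        if r > max_risk:                       # unacceptable risk: reject outright
            return None
        total_risk += vuln * r                 # priority for vulnerable road users
    return w_eff * efficiency + w_risk * total_risk

def select_trajectory(candidates, goal, agent_preds, vulnerabilities):
    scored = [(ethical_score(t, goal, agent_preds, vulnerabilities), t) for t in candidates]
    feasible = [(s, t) for s, t in scored if s is not None]
    if not feasible:
        return None                            # fall back to a safe stop if all rejected
    return min(feasible, key=lambda st: st[0])[1]

# Toy usage: three straight-line candidates and one predicted pedestrian track.
goal = np.array([8.0, 0.0])
candidates = [np.linspace([0, 0], [8, dy], 20) for dy in (-1.0, 0.0, 1.0)]
pedestrian = np.linspace([4, 1.5], [4, -0.5], 20)
best = select_trajectory(candidates, goal, [pedestrian], vulnerabilities=[2.0])
print("selected lateral offset:", None if best is None else best[-1][1])
```

In a real planner, the risk terms would come from probabilistic predictions of traffic participants, and an empty feasible set would trigger a certified fallback maneuver rather than a simple stop.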

6.7. Toward Human-Centric Design and Learning

Human-centric design integrates physical, cognitive, and emotional dimensions of safety:
  • Safe Reinforcement Learning (Safe RL): Safety-aware RL ensures that robots learn policies without violating safety constraints, even during exploration (a minimal shield-style sketch is given after this list) [,].
  • Context-Aware Adaptation: Integrating human feedback and context improves trust and user comfort in collaborative robotics [,].
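As a concrete illustration of the Safe RL idea above, the following minimal sketch filters exploratory actions through a safety shield before execution. It assumes a single-integrator robot model and a simple minimum-distance constraint; the fallback action and the random placeholder policy are illustrative assumptions, not a published Safe-RL method.

```python
# Minimal "safety shield" sketch: proposed actions that would violate a minimum-distance
# constraint are replaced by a conservative fallback before execution.
# Generic illustration only; dynamics model, constraint, and policy are assumptions.
import numpy as np

MIN_DIST = 0.5    # assumed safety constraint: stay at least 0.5 m from the human
DT = 0.1

def predict_next(state, action):
    """Assumed single-integrator model: state is position, action is velocity."""
    return state + DT * np.asarray(action)

def shield(state, action, human_pos, fallback=np.zeros(2)):
    """Execute the learned action only if the predicted next state stays safe;
    otherwise substitute the fallback (e.g., stop) and flag the intervention."""
    nxt = predict_next(state, action)
    if np.linalg.norm(nxt - human_pos) >= MIN_DIST:
        return action, False
    return fallback, True

# Toy exploration loop with a random "policy" standing in for an RL agent.
rng = np.random.default_rng(0)
state, human = np.array([0.0, 0.0]), np.array([1.0, 0.0])
for step in range(20):
    proposed = rng.uniform(-1.0, 1.0, size=2)        # exploratory action from the policy
    safe_action, intervened = shield(state, proposed, human)
    state = predict_next(state, safe_action)
    if intervened:
        print(f"step {step}: unsafe action overridden")
```

In a full system, the shield would use the robot's actual dynamics together with a certified reachability or control-barrier check instead of this one-step distance test.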
Future systems must adopt unified frameworks that combine ethical reasoning, real-time risk assessment, and safe learning under uncertainty [,].
While progress is promising, significant challenges remain:
  • Integration Complexity: Embedding ethical and safety modules into fast, real-time planners on resource-constrained platforms remains difficult.
  • Standardization Gaps: Harmonization of international safety and ethics standards—such as ISO 12100 [], which outlines general principles for machinery risk assessment and reduction, and ISO/TS 15066 [], which defines collaborative robot safety requirements—with learning-based models is still ongoing []. These standards serve as global benchmarks for ensuring that autonomous and robotic systems meet essential safety and risk-reduction criteria while interacting with humans.
  • Human Perception: Models often neglect psychological dimensions such as perceived safety and emotional response.
Looking forward, future research should aim to integrate ethical decision-making algorithms, robust safe reinforcement learning methods, and human-centric design principles into cohesive frameworks. These frameworks must balance operational efficiency with respect for human values and dynamic risk management, paving the way for autonomous robotic systems that are reliable, trustworthy, and ethically aligned [,,].
By addressing these multifaceted challenges, the robotics community can accelerate the deployment of effective, reliable, and ethically sound robotic navigation and collaboration systems, fostering safer and more harmonious human–robot coexistence.

7. Enhancing Geometric Path Planning Through Optimization and Machine Learning

Optimization and machine learning (ML) techniques significantly enhance geometric path planning by combining precise trajectory computation with adaptive decision-making. Multi-modal Model Predictive Control (MMPC) combined with Q-Learning, as in the MAR-MPC framework, improves feasibility, enlarges the convergence region, and ensures collision-free navigation []. Reinforcement learning approaches, such as Deep Deterministic Policy Gradient (DDPG), allow robots to adaptively handle dynamic uncertainties, improving path smoothness and convergence [].
Advanced methods further integrate optimization with learning-based solvers. The PTP RSNN-based predictive neuro-navigator combines log-convex optimization with kinematic and collision constraints, often using multi-objective formulations, constrained optimization, or Lagrangian methods to achieve optimal or near-optimal solutions []. Similarly, the smooth PSO-IPF navigator uses Particle Swarm Optimization with inverse predictive filtering under kinematic constraints to generate smooth and feasible trajectories []. Hybrid strategies that integrate ML-based real-time obstacle avoidance with geometric planners such as S-RRT [] or GA-ACO [] achieve robust, efficient, and safe path planning in dynamic environments.
Overall, this synergy highlights a core principle: optimization ensures precise geometric trajectories, while learning and adaptive solvers expand the feasible region, improve robustness, and enable mobile robots to navigate efficiently and reliably in complex, dynamic environments.
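To make the optimization side of this synergy concrete, the sketch below refines a two-point geometric path with a basic Particle Swarm Optimization. It is a generic toy example rather than the PSO-IPF navigator cited above; the cost function, obstacle model, and PSO hyper-parameters are assumptions.

```python
# Generic sketch: refining a geometric path with basic Particle Swarm Optimization.
# Illustration only (not the cited PSO-IPF navigator); each particle encodes the
# intermediate waypoints of a path between fixed endpoints.
import numpy as np

rng = np.random.default_rng(1)
start, goal = np.array([0.0, 0.0]), np.array([6.0, 6.0])
obstacle, radius = np.array([3.0, 3.0]), 1.0
n_way, n_particles, iters = 6, 30, 100

def cost(waypoints):
    path = np.vstack([start, waypoints.reshape(n_way, 2), goal])
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))      # path length
    d = np.linalg.norm(path - obstacle, axis=1)
    penalty = np.sum(np.maximum(0.0, radius + 0.3 - d) ** 2)            # keep a 0.3 m margin
    return length + 50.0 * penalty

dim = n_way * 2
pos = rng.uniform(0.0, 6.0, size=(n_particles, dim))                    # particle positions
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("refined path:\n", np.vstack([start, gbest.reshape(n_way, 2), goal]).round(2))
```

Learning-based or adaptive components would typically replace the fixed inertia and acceleration coefficients, or seed the initial particle population with a path from a geometric planner, reflecting the hybrid strategies discussed above.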

8. Conclusions

This survey presents a comprehensive synthesis of geometric optimal navigation and path planning strategies, with an emphasis on scalability, real-time performance, and adaptability to dynamic environments. By unifying classical geometric methods with contemporary optimization techniques and learning-based models, including GPU-accelerated frameworks and neural approximators, this paper underscores the significant strides made toward real-time, scalable autonomous navigation.
The reviewed literature reveals a clear trend toward hybrid approaches—combining hierarchical decomposition, parallel computation, and learning-augmented modules—to tackle increasing computational demands and environmental complexity. Recent advancements, such as attention-based neural networks, particularly Transformer architectures, offer new pathways to represent spatial–temporal dependencies and plan in high-dimensional, uncertain spaces more efficiently.
This work also highlights the importance of ethical and safety considerations in deploying optimization-based navigation systems. The real-world deployment of autonomous agents in safety-critical domains (e.g., healthcare robotics, autonomous vehicles, drones) necessitates guarantees on bounded sub-optimality, resilience to adversarial conditions, and alignment with human-centric values.

Key Takeaways for Future Research

  • Transformer-Based Planning: Future systems should explore integrating spatial–temporal attention mechanisms into planning pipelines to improve generalization and context-aware navigation, especially in multi-agent and partially observable environments.
  • Real-Time and Embedded Efficiency: Further innovation is needed to support GPU and neuromorphic execution on power-constrained platforms, ensuring autonomy is feasible for small-scale robots and edge devices.
  • Scalable Multi-Agent Coordination: Scalability remains a bottleneck. Approaches that combine decentralized optimization, learning-based approximations, and adaptive communication protocols are promising.
  • Safe and Ethical Optimization: New planning frameworks should incorporate constraints and verification layers that explicitly account for safety, fairness, and human preferences, particularly when operating alongside humans.
  • Standardization and Benchmarking: To assess progress meaningfully, standardized evaluation frameworks and real-world benchmarks—particularly those involving uncertainty, real-time constraints, and ethical dilemmas—must be developed.
In summary, the convergence of parallel computing, geometric control theory, and deep learning promises robust, real-time path planning in dynamic environments. However, ensuring ethical behavior, safety guarantees, and hardware efficiency will remain central challenges in realizing truly autonomous, intelligent agents.

Author Contributions

Conceptualization was carried out by H.J. and S.A.S. The original draft of the manuscript was prepared by H.J. and S.A.S. M.M. critically reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

During the preparation of this manuscript, the authors used DeepSeek AI (model version DeepSeek-V2) by DeepSeek Company for purposes of language polishing, editing, and initial draft refinement. Following the use of this tool, the authors reviewed and edited the content and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Licardo, J.T.; Domjan, M.; Orehovački, T. Intelligent robotics—A systematic review of emerging technologies and trends. Electronics 2024, 13, 542. [Google Scholar] [CrossRef]
  2. Kamuni, N.; Dodda, S.; Vuppalapati, V.S.M.; Arlagadda, J.S.; Vemasani, P. Advancements in Reinforcement Learning Techniques for Robotics. J. Basic Sci. Eng. 2025, 19, 101–111. [Google Scholar]
  3. Zhang, Y.; Zhao, W.; Wang, J.; Yuan, Y. Recent progress, challenges and future prospects of applied deep reinforcement learning: A practical perspective in path planning. Neurocomputing 2024, 608, 128423. [Google Scholar] [CrossRef]
  4. Kanakis, G.S.; Rovithakis, G.A. Motion Planning in Topologically Complex Environments Via Hybrid Feedback. IEEE Trans. Autom. Control 2024, 70, 3742–3754. [Google Scholar] [CrossRef]
  5. Alqobali, R.; Alshmrani, M.; Alnasser, R.; Rashidi, A.; Alhmiedat, T.; Alia, O.M. A survey on robot semantic navigation systems for indoor environments. Appl. Sci. 2023, 14, 89. [Google Scholar] [CrossRef]
  6. Zhao, Z.; Cheng, S.; Ding, Y.; Zhou, Z.; Zhang, S.; Xu, D.; Zhao, Y. A survey of optimization-based task and motion planning: From classical to learning approaches. IEEE/ASME Trans. Mechatron. 2024, 30, 2799–2825. [Google Scholar] [CrossRef]
  7. Sutikno, T. The future of artificial intelligence-driven robotics: Applications and implications. IAES Int. J. Robot. Autom. 2024, 13, 361–372. [Google Scholar] [CrossRef]
  8. LaValle, S.M. Planning Algorithms; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar]
  9. Choset, H.; Lynch, K.M.; Hutchinson, S.; Kantor, G.; Burgard, W.; Kavraki, L.E.; Thrun, S. Principles of Robot Motion: Theory, Algorithms, and Implementations; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  10. Lozano-Perez, T. Spatial Planning: A Configuration Space Approach. IEEE Trans. Comput. 1983, 32, 108–120. [Google Scholar] [CrossRef]
  11. Sánchez-Ibáñez, J.R.; Pérez-del Pulgar, C.J.; García-Cerezo, A. Path planning for autonomous mobile robots: A review. Sensors 2021, 21, 7898. [Google Scholar] [CrossRef]
  12. Khatib, O. Real-Time Obstacle Avoidance for Manipulators and Mobile Robots. Int. J. Robot. Res. 1986, 5, 90–98. [Google Scholar] [CrossRef]
  13. Brooks, R.A. A Robust Layered Control System for a Mobile Robot. IEEE J. Robot. Autom. 1986, 2, 14–23. [Google Scholar] [CrossRef]
  14. Burgess-Limerick, B.; Haviland, J.; Lehnert, C.; Corke, P. Reactive base control for on-the-move mobile manipulation in dynamic environments. IEEE Robot. Autom. Lett. 2024, 9, 2048–2055. [Google Scholar] [CrossRef]
  15. Chen, H.; Zang, X.; Zhu, Y.; Zhao, J. Hybrid sampling-based path planning for mobile manipulators performing pick and place tasks in narrow spaces. Appl. Sci. 2024, 14, 10313. [Google Scholar] [CrossRef]
  16. Li, J.; Wang, K.; Chen, Z.; Wang, J. An Improved RRT* Path Planning Algorithm in Dynamic Environment. In Methods and Applications for Modeling and Simulation of Complex Systems; Fan, W., Zhang, L., Li, N., Song, X., Eds.; Springer Nature: Singapore, 2022; pp. 301–313. [Google Scholar]
  17. Amar, L.B.; Jasim, W.M. Hybrid metaheuristic approach for robot path planning in dynamic environment. Bull. Electr. Eng. Inform. 2021, 10, 2152–2162. [Google Scholar] [CrossRef]
  18. He, H.; Shi, P.; Zhao, Y. Adaptive connected hierarchical optimization algorithm for minimum energy spacecraft attitude maneuver path planning. Astrodynamics 2023, 7, 197–209. [Google Scholar] [CrossRef]
  19. Wang, N.; Jin, Z.; Wang, T.; Xiao, J.; Zhang, Z.; Wang, H.; Zhang, M.; Li, H. Hybrid path planning methods for complete coverage in harvesting operation scenarios. Comput. Electron. Agric. 2025, 231, 109946. [Google Scholar] [CrossRef]
  20. Hank, M.; Haddad, M. Hybrid Control Architecture for Mobile Robots Navigation in Partially Known Environments. In Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Vienna, Austria, 2–4 September 2014. [Google Scholar]
  21. Patle, B.; Ganesh, B.L.; Pandey, A.; Parhi, D.R.K.; Jagadeesh, A. A review: On path planning strategies for navigation of mobile robot. Def. Technol. 2019, 15, 582–606. [Google Scholar] [CrossRef]
  22. Raiesdana, S. A Hybrid Method for Industrial Robot Navigation. J. Optim. Ind. Eng. 2021, 14, 133–148. [Google Scholar] [CrossRef]
  23. Mohanan, M.; Salgoankar, A. A survey of robotic motion planning in dynamic environments. Robot. Auton. Syst. 2018, 100, 171–185. [Google Scholar] [CrossRef]
  24. Meegle. Real-Time Robotics Control. 2025. Available online: https://www.meegle.com/en_us/topics/robotics/real-time-robotics-control (accessed on 9 January 2025).
  25. Chen, W.; Chi, W. A survey of autonomous robots and multi-robot navigation: Perception, planning and collaboration. Biomim. Intell. Robot. 2025, 5, 100203. [Google Scholar] [CrossRef]
  26. Saeedinia, S.A.; Tale Masouleh, M. The synergy of the multi-modal MPC and Q-learning approach for the navigation of a three-wheeled omnidirectional robot based on the dynamic model with obstacle collision avoidance purposes. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2022, 236, 9716–9729. [Google Scholar] [CrossRef]
  27. Mohaghegh, M.; Saeedinia, S.; Roozbehi, Z. Optimal predictive neuro-navigator design for mobile robot navigation with moving obstacles. Front. Robot. AI 2023, 10, 1226028. [Google Scholar] [CrossRef]
  28. Ben-Messaoud, W.; Basset, M.; Lauffenburger, J.P.; Orjuela, R. Smooth obstacle avoidance path planning for autonomous vehicles. In Proceedings of the 2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES), Madrid, Spain, 12–14 September 2018; pp. 1–6. [Google Scholar]
  29. Betts, J.T. Practical Methods for Optimal Control and Estimation Using Nonlinear Programming; SIAM: Philadelphia, PA, USA, 2010. [Google Scholar]
  30. Zhang, H.; Zhang, Y.; Yang, T. A survey of energy-efficient motion planning for wheeled mobile robots. Ind. Robot Int. J. Robot. Res. Appl. 2020, 47, 607–621. [Google Scholar] [CrossRef]
  31. Liu, J.; Yan, Y.; Yang, Y.; Li, J. An improved artificial potential field UAV path planning algorithm guided by RRT under environment-aware modeling: Theory and simulation. IEEE Access 2024, 12, 12080–12097. [Google Scholar] [CrossRef]
  32. Karur, K.; Sharma, N.; Dharmatti, C.; Siegel, J.E. A survey of path planning algorithms for mobile robots. Vehicles 2021, 3, 448–468. [Google Scholar] [CrossRef]
  33. Sun, S.; Yu, D.; Wang, C.; Xie, C. A smooth tool path generation and real-time interpolation algorithm based on B-spline curves. Adv. Mech. Eng. 2018, 10, 1687814017750281. [Google Scholar] [CrossRef]
  34. Simba, K.R.; Uchiyama, N.; Sano, S. Real-time smooth trajectory generation for nonholonomic mobile robots using Bézier curves. Robot. Comput.-Integr. Manuf. 2016, 41, 31–42. [Google Scholar] [CrossRef]
  35. Zafar, M.N.; Mohanta, J. Methodology for path planning and optimization of mobile robots: A review. Procedia Comput. Sci. 2018, 133, 141–152. [Google Scholar] [CrossRef]
  36. Niu, H.; Lu, Y.; Savvaris, A.; Tsourdos, A. An energy-efficient path planning algorithm for unmanned surface vehicles. Ocean Eng. 2018, 161, 308–321. [Google Scholar] [CrossRef]
  37. Belanger, N. External fake constraints interpolation: The end of Runge phenomenon with high degree polynomials relying on equispaced nodes Application to aerial robotics motion planning. Indicator 2017, 2, 1–9. [Google Scholar]
  38. Villagra, J.; Milanés, V.; Rastelli, J.P.; Godoy, J.; Onieva, E. Path and speed planning for smooth autonomous navigation. In Proceedings of the IV 2012-IEEE Intelligent Vehicles Symposium, Alcalá de Henares, Spain, 3–7 June 2012. [Google Scholar]
  39. Wang, H.; Ren, G.; Chen, J.; Ding, G.; Yang, Y. Unmanned aerial vehicle-aided communications: Joint transmit power and trajectory optimization. IEEE Wirel. Commun. Lett. 2018, 7, 522–525. [Google Scholar] [CrossRef]
  40. Liu, J.; Anavatti, S.; Garratt, M.; Abbass, H.A. Modified continuous ant colony optimisation for multiple unmanned ground vehicle path planning. Expert Syst. Appl. 2022, 196, 116605. [Google Scholar] [CrossRef]
  41. Ma, C.; Zhao, M.; Liu, Y. Vehicle navigation path optimization based on complex networks. Phys. A Stat. Mech. Its Appl. 2025, 665, 130509. [Google Scholar] [CrossRef]
  42. Wang, X.; Jiang, P.; Li, D.; Sun, T. Curvature continuous and bounded path planning for fixed-wing UAVs. Sensors 2017, 17, 2155. [Google Scholar] [CrossRef] [PubMed]
  43. Dai, C.; Lefebvre, S.; Yu, K.M.; Geraedts, J.M.; Wang, C.C. Planning jerk-optimized trajectory with discrete time constraints for redundant robots. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1711–1724. [Google Scholar] [CrossRef]
  44. Datouo, R.; Motto, F.B.; Zobo, B.E.; Melingui, A.; Bensekrane, I.; Merzouki, R. Optimal motion planning for minimizing energy consumption of wheeled mobile robots. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Macau, Macao, 5–8 December 2017; pp. 2179–2184. [Google Scholar]
  45. Liu, X.; Li, Y.; Zhang, J.; Zheng, J.; Yang, C. Self-adaptive dynamic obstacle avoidance and path planning for USV under complex maritime environment. IEEE Access 2019, 7, 114945–114954. [Google Scholar] [CrossRef]
  46. Reclik, D.; Kost, G. The comparison of elastic band and B-Spline polynomials methods in smoothing process of collision-free robot trajectory. J. Achiev. Mater. Manuf. Eng. 2008, 29, 187–190. [Google Scholar]
  47. Shen, P.; Zhang, X.; Fang, Y. Complete and time-optimal path-constrained trajectory planning with torque and velocity constraints: Theory and applications. IEEE/ASME Trans. Mechatron. 2018, 23, 735–746. [Google Scholar] [CrossRef]
  48. Xu, Z.; Han, X.; Shen, H.; Jin, H.; Shimada, K. Navrl: Learning safe flight in dynamic environments. IEEE Robot. Autom. Lett. 2025, 10, 3668–3675. [Google Scholar] [CrossRef]
  49. Choudhury, S.; Gupta, J.K.; Kochenderfer, M.J.; Sadigh, D.; Bohg, J. Dynamic multi-robot task allocation under uncertainty and temporal constraints. Auton. Robot. 2022, 46, 231–247. [Google Scholar] [CrossRef]
  50. Ren, J.; Huang, X.; Huang, R.N. Efficient deep reinforcement learning for optimal path planning. Electronics 2022, 11, 3628. [Google Scholar] [CrossRef]
  51. Zhang, L.; Wang, Y.; Zhao, X.; Zhao, P.; He, L. Time-optimal trajectory planning of serial manipulator based on adaptive cuckoo search algorithm. J. Mech. Sci. Technol. 2021, 35, 3171–3181. [Google Scholar] [CrossRef]
  52. Parasuraman, R.; Kershaw, K.; Pagala, P.; Ferre, M. Model based on-line energy prediction system for semi-autonomous mobile robots. In Proceedings of the 2014 5th International Conference on Intelligent Systems, Modelling and Simulation, Langkawi, Malaysia, 27–29 January 2014; pp. 411–416. [Google Scholar]
  53. Palomba, I.; Wehrle, E.; Carabin, G.; Vidoni, R. Minimization of the energy consumption in industrial robots through regenerative drives and optimally designed compliant elements. Appl. Sci. 2020, 10, 7475. [Google Scholar] [CrossRef]
  54. Shamsfakhr, F.; Bigham, B.S. A neural network approach to navigation of a mobile robot and obstacle avoidance in dynamic and unknown environments. Turk. J. Electr. Eng. Comput. Sci. 2017, 25, 1629–1642. [Google Scholar] [CrossRef]
  55. Jacob, I.J. Fuzzy Logic Decision Model for Robust Risk Management in ubiquitous environment—A Review. J. Ubiquitous Comput. Commun. Technol. 2024, 5, 399–411. [Google Scholar]
  56. Zhang, X.; Shi, G. Multi-objective optimal trajectory planning for manipulators in the presence of obstacles. Robotica 2022, 40, 888–906. [Google Scholar] [CrossRef]
  57. Mellouk, A.; Benmachiche, A. A survey on navigation systems in dynamic environments. In Proceedings of the ICIST ’20: 10th International Conference on Information Systems and Technologies, Lecce, Italy, 4–5 June 2020; pp. 1–7. [Google Scholar]
  58. Xiao, B.; Zhang, Z.; Wang, Q.; Zhang, B.; Zheng, S. TSP-based depth-first search algorithms for enhanced path planning in laser-based directed energy. Precis. Eng. 2025, 93, 224–236. [Google Scholar] [CrossRef]
  59. Sundarraj, S.; Reddy, R.V.K.; Basam, M.B.; Lokesh, G.H.; Flammini, F.; Natarajan, R. Route planning for an autonomous robotic vehicle employing a weight-controlled particle swarm-optimized Dijkstra algorithm. IEEE Access 2023, 11, 92433–92442. [Google Scholar] [CrossRef]
  60. Korkmaz, M.; Durdu, A. Comparison of optimal path planning algorithms. In Proceedings of the 2018 14th International Conference on Advanced Trends in Radioelecrtronics, Telecommunications and Computer Engineering (TCSET), Lviv-Slavske, Ukraine, 20–24 February 2018; pp. 255–258. [Google Scholar]
  61. Halder, U.; Das, S.; Maity, D. A cluster-based differential evolution algorithm with external archive for optimization in dynamic environments. IEEE Trans. Cybern. 2013, 43, 881–897. [Google Scholar] [CrossRef]
  62. Saeedi, S.; Bodin, B.; Wagstaff, H.; Nisbet, A.; Nardi, L.; Mawer, J.; Melot, N.; Palomar, O.; Vespa, E.; Spink, T.; et al. Navigating the landscape for real-time localization and mapping for robotics and virtual and augmented reality. Proc. IEEE 2018, 106, 2020–2039. [Google Scholar] [CrossRef]
  63. Mohaghegh, M.; Jafarpourdavatgar, H.; Saeedinia, S.A. New design of smooth PSO-IPF navigator with kinematic constraints. IEEE Access 2024, 12, 175108–175121. [Google Scholar] [CrossRef]
  64. Miao, C.; Chen, G.; Yan, C.; Wu, Y. Path planning optimization of indoor mobile robot based on adaptive ant colony algorithm. Comput. Ind. Eng. 2021, 156, 107230. [Google Scholar] [CrossRef]
  65. Bao, C.; Gao, D.; Gu, W.; Xu, L.; Goodman, E.D. A new adaptive decomposition-based evolutionary algorithm for multi-and many-objective optimization. Expert Syst. Appl. 2023, 213, 119080. [Google Scholar] [CrossRef]
  66. Zhang, Y.; Zhang, H. Enhancing Robot Path Planning through a Twin-Reinforced Chimp Optimization Algorithm and Evolutionary Programming Algorithm. IEEE Access 2023, 12, 170057–170078. [Google Scholar] [CrossRef]
  67. Tang, B.; Xiang, K.; Pang, M.; Zhanxia, Z. Multi-robot Path Planning Using an Improved Self-adaptive Particle Swarm Optimization. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420936154. [Google Scholar] [CrossRef]
  68. Zhang, X.; Guo, Y.; Yang, J.; Li, D.; Wang, Y.; Zhao, R. Many-objective evolutionary algorithm based agricultural mobile robot route planning. Comput. Electron. Agric. 2022, 200, 107274. [Google Scholar] [CrossRef]
  69. Peng, C.; Qiu, S. A decomposition-based constrained multi-objective evolutionary algorithm with a local infeasibility utilization mechanism for UAV path planning. Appl. Soft Comput. 2022, 118, 108495. [Google Scholar] [CrossRef]
  70. Xu, X.; Xie, C.; Luo, Z.; Zhang, C.; Zhang, T. A Multi-objective Evolutionary Algorithm Based on Dimension Exploration and Discrepancy Evolution for UAV Path Planning Problem. Inf. Sci. 2024, 657, 119977. [Google Scholar] [CrossRef]
  71. Thammachantuek, I.; Ketcham, M. Path planning for autonomous mobile robots using multi-objective evolutionary particle swarm optimization. PLoS ONE 2022, 17, e0271924. [Google Scholar] [CrossRef]
  72. Chen, G.; Du, Y.; Xi, X.; Zhang, K.; Yang, J.; Xu, L.; Ren, C. Improved genetic algorithm based on bi-level co-evolution for coverage path planning in irregular region. Sci. Rep. 2025, 15, 10047. [Google Scholar] [CrossRef]
  73. Jiménez-Domínguez, M.A.; García-Rojas, N.A.; Zapotecas-Martínez, S.; Hemández, R.D.; Altamirano-Robles, L. Exploring Multi-Objective Evolutionary Approaches for Path Planning of Autonomous Mobile Robots. In Proceedings of the 2024 IEEE Congress on Evolutionary Computation (CEC), Yokohama, Japan, 30 June–5 July 2024; pp. 1–8. [Google Scholar] [CrossRef]
  74. Smierzchalski, R.; Kuczkowski, L.; Kolendo, P.; Jaworski, B. Distributed Evolutionary Algorithm for Path Planning in Navigation Situation. Transn. Int. J. Mar. Navig. Saf. Sea Transp. 2013, 7, 293–300. [Google Scholar] [CrossRef]
  75. Hu, Y.; Yang, X.; Zhou, K.; Liu, Q.; Ding, K.; Gao, H.; Zhu, P.; Liu, C. SwarmPRM: Probabilistic Roadmap Motion Planning for Large-Scale Swarm Robotic Systems. In Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates, 14–18 October 2024; pp. 10222–10228. [Google Scholar]
  76. Zhang, Z.; Qiao, B.; Zhao, W.; Chen, X. A predictive path planning algorithm for mobile robot in dynamic environments based on rapidly exploring random tree. Arab. J. Sci. Eng. 2021, 46, 8223–8232. [Google Scholar] [CrossRef]
  77. Wang, J.; Chi, W.; Li, C.; Wang, C.; Meng, M.Q.H. Neural RRT*: Learning-based optimal path planning. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1748–1758. [Google Scholar] [CrossRef]
  78. Sun, Z.; Xia, B.; Xie, P.; Li, X.; Wang, J. NAMR-RRT: Neural Adaptive Motion Planning for Mobile Robots in Dynamic Environments. IEEE Trans. Autom. Sci. Eng. 2025, 22, 13087–13100. [Google Scholar] [CrossRef]
  79. Wang, J.; Jia, X.; Zhang, T.; Ma, N.; Meng, M.Q.H. Deep neural network enhanced sampling-based path planning in 3D space. IEEE Trans. Autom. Sci. Eng. 2021, 19, 3434–3443. [Google Scholar] [CrossRef]
  80. Khalilullah, K.I.; Jindai, M.; Ota, S.; Yasuda, T. Fast road detection methods on a large scale dataset for assisting robot navigation using kernel principal component analysis and deep learning. In Proceedings of the 2018 57th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), Nara, Japan, 11–14 September 2018; pp. 798–803. [Google Scholar]
  81. Hu, K.; O’Connor, P. Learning a representation map for robot navigation using deep variational autoencoder. arXiv 2018, arXiv:1807.02401. [Google Scholar] [CrossRef]
  82. Gouda, K.C.; Thakur, R. Energy-efficient clustering and path planning for UAV-assisted D2D cellular networks. Ad Hoc Netw. 2025, 170, 103757. [Google Scholar] [CrossRef]
  83. Yu, H.; Lu, W.; Han, Y.; Liu, D.; Zhang, M. Heterogeneous dimensionality reduction for efficient motion planning in high-dimensional spaces. IEEE Access 2020, 8, 42619–42632. [Google Scholar] [CrossRef]
  84. Nardi, L.; Stachniss, C. Uncertainty-aware path planning for navigation on road networks using augmented MDPs. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 5780–5786. [Google Scholar]
  85. Zhu, K.; Zhang, T. Deep reinforcement learning based mobile robot navigation: A review. Tsinghua Sci. Technol. 2021, 26, 674–691. [Google Scholar] [CrossRef]
  86. Han, J.R.; Thomas, H.; Zhang, J.; Rhinehart, N.; Barfoot, T.D. DR-MPC: Deep Residual Model Predictive Control for Real-World Social Navigation. IEEE Robot. Autom. Lett. 2025, 10, 4029–4036. [Google Scholar] [CrossRef]
  87. Sellers, T.; Lei, T.; Jan, G.E.; Wang, Y.; Luo, C. Multi-objective optimization robot navigation through a graph-driven PSO mechanism. In Advances in Swarm Intelligence; Springer: Cham, Switzerland, 2022; pp. 66–77. [Google Scholar]
  88. Zeng, J.; Qin, L.; Hu, Y.; Yin, Q.; Hu, C. Integrating a path planner and an adaptive motion controller for navigation in dynamic environments. Appl. Sci. 2019, 9, 1384. [Google Scholar] [CrossRef]
  89. Hsueh, H.Y.; Toma, A.I.; Ali Jaafar, H.; Stow, E.; Murai, R.; Kelly, P.H.; Saeedi, S. Systematic comparison of path planning algorithms using PathBench. Adv. Robot. 2022, 36, 566–581. [Google Scholar] [CrossRef]
  90. Zhang, Y.; Hua, Y. Path planning of mobile robot based on hybrid improved artificial fish swarm algorithm. Vibroengineering Procedia 2018, 17, 131–138. [Google Scholar] [CrossRef]
  91. Ou, J.; Hong, S.H.; Song, G.; Wang, Y. Hybrid path planning based on adaptive visibility graph initialization and edge computing for mobile robots. Eng. Appl. Artif. Intell. 2023, 126, 107110. [Google Scholar] [CrossRef]
  92. Wu, M.; Li, G.; Liao, J.; Wang, H.; Liu, W.; Yan, X.; Yang, M.; Li, S. Multi-strategy hybrid adaptive dung beetle optimization for path planning. Sci. Rep. 2025, 15, 15088. [Google Scholar] [CrossRef]
  93. Wang, J.; Swartz, C.L.; Huang, K. Deep learning-based model predictive control for real-time supply chain optimization. J. Process. Control 2023, 129, 103049. [Google Scholar] [CrossRef]
  94. Wang, H.; Qiu, Y.; Hou, Y.; Shi, Q.; Huang, H.W.; Huang, Q.; Fukuda, T. Deep Reinforcement Learning-Based Collision-Free Navigation for Magnetic Helical Microrobots in Dynamic Environments. IEEE Trans. Autom. Sci. Eng. 2024, 22, 7810–7820. [Google Scholar] [CrossRef]
  95. Murano, A.; Perelli, G.; Rubin, S. Multi-agent path planning in known dynamic environments. In PRIMA 2015: Principles and Practice of Multi-Agent Systems; Springer: Cham, Switzerland, 2015; pp. 218–231. [Google Scholar]
  96. Williams, S.; Indelman, V.; Kaess, M.; Roberts, R.; Leonard, J.J.; Dellaert, F. Concurrent filtering and smoothing: A parallel architecture for real-time navigation and full smoothing. Int. J. Robot. Res. 2014, 33, 1544–1568. [Google Scholar] [CrossRef]
  97. Almadhoun, R.; Taha, T.; Seneviratne, L.; Dias, J.; Cai, G. GPU Accelerated Coverage Path Planning Optimized for Accuracy in Robotic Inspection Applications. In Proceedings of the 2016 IEEE 59th International Midwest Symposium on Circuits and Systems (MWSCAS), Abu Dhabi, United Arab Emirates, 16–19 October 2016; pp. 1–4. [Google Scholar] [CrossRef]
  98. Rastgar, F. Towards Reliable Real-Time Trajectory Optimization. Ph.D. Thesis, University of Tartu, Tartu, Estonia, 2024. [Google Scholar]
  99. Yu, J.; Chen, N.; Liu, G.; Xu, C.; Gao, F.; Cao, Y. TOP: Trajectory Optimization via Parallel Optimization towards Constant Time Complexity. arXiv 2025, arXiv:2507.10290. [Google Scholar] [CrossRef]
  100. Dastider, A.; Ray, A. Retro-trajectory optimization: A memory-efficient warm-start approach. IEEE Trans. Robot. 2023, 39, 1105–1117. [Google Scholar]
  101. Williams, G.; Aldrich, A.; Theodorou, E.A. Model predictive path integral control: From theory to parallel computation. In Proceedings of the IEEE ICRA, Singapore, 29 May–3 June 2017. [Google Scholar]
  102. He, S.; Ma, Y.; Song, T.; Jiang, Y.; Wu, X. A Real-Time Spatio-Temporal Trajectory Planner for Autonomous Vehicles with Semantic Graph Optimization. IEEE Robot. Autom. Lett. 2024, 10, 72–79. [Google Scholar] [CrossRef]
  103. Silveira, J.; Cabral, K.; Givigi, S.; Marshall, J. Real-Time Fast Marching Tree for Mobile Robot Motion Planning in Dynamic Environments. arXiv 2025, arXiv:2502.09556. [Google Scholar] [CrossRef]
  104. Meegle. GPU Acceleration in Autonomous Navigation. 2025. Available online: https://www.meegle.com/en_us/topics/gpu-acceleration/gpu-acceleration-in-autonomous-navigation (accessed on 20 October 2025).
  105. Einfochips. GPU-Accelerated Real-Time SLAM Optimization for Autonomous Robots; Einfochips: San Jose, CA, USA, 2025. [Google Scholar]
  106. Cao, Y. GPU-Enabled Genetic Algorithm Optimization and Path Planning for Robotic Manipulators. Master’s Thesis, University of South Carolina, Columbia, SC, USA, 2023. [Google Scholar]
  107. Guo, X.; Jiang, F.; Chen, Q.; Wang, Y.; Sha, K.; Chen, J. Deep learning-enhanced environment perception for autonomous driving: MDNet with CSP-DarkNet53. Pattern Recognit. 2025, 160, 111174. [Google Scholar] [CrossRef]
  108. Papachristos, C.; Mascarich, F.; Khattak, S.; Dang, T.; Alexis, K. Localization uncertainty-aware autonomous exploration and mapping with aerial robots using receding horizon path-planning. Auton. Robot. 2019, 43, 2131–2161. [Google Scholar] [CrossRef]
  109. Cosenzo, K.A.; Barnes, M.J. Human–robot interaction research for current and future military applications: From the laboratory to the field. In Unmanned Systems Technology XII; SPIE: Bellingham, WA, USA, 2010; Volume 7692, pp. 30–38. [Google Scholar]
  110. Eder, K.; Harper, R.; Leonards, U. Towards ethical robots: A survey of existing paradigms. In Proceedings of the IEEE International Conference on Advanced Robotics, Hong Kong, China, 31 May–7 June 2014; pp. 1–6. [Google Scholar]
  111. Martinetti, A.; van Wingerden, J.W.; Nizamis, K.; Fosch-Villaronga, E. Redefining safety in human-robot interaction: From physical protection to social understanding. Front. Robot. AI 2021, 8, 666237. [Google Scholar]
  112. Valori, A.; Mendes, A. Validating robot safety protocols with international ISO standards: Challenges and best practices. In Proceedings of the IEEE International Conference on Human-Robot Interaction (HRI), Boulder, CO, USA, 8–11 March 2021. [Google Scholar]
  113. Yi, S.; Liu, S.; Yang, Y.; Yan, S.; Guo, D.; Wang, X.V.; Wang, L. Safety-Aware Human-Centric Collaborative Assembly. Adv. Eng. Inform. 2024, 60, 102371. [Google Scholar] [CrossRef]
  114. Geißlinger, M.; Poszler, F.; Lienkamp, M. An ethical trajectory planning algorithm for autonomous vehicles. Nat. Mach. Intell. 2023, 5, 137–144. [Google Scholar] [CrossRef]
  115. Awad, E.; Dsouza, S.; Kim, R.; Schulz, J.; Henrich, J.; Shariff, A.; Bonnefon, J.; Rahwan, I. The moral machine experiment. Nature 2018, 563, 59–64. [Google Scholar] [CrossRef]
  116. Liang, Y.; Zhao, H. CCPF-RRT*: An improved path planning algorithm with potential function-based sampling heuristic considering congestion. Robot. Auton. Syst. 2023, 162, 104326. [Google Scholar] [CrossRef]
  117. Xiao, Y. Application of Machine Learning in Ethical Design of Autonomous Driving Crash Algorithms. Comput. Intell. Neurosci. 2022, 2022, 2938011. [Google Scholar] [CrossRef]
  118. Achiam, J.; Held, D.; Tian, A.; Abbeel, P. Constrained policy optimization. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 22–31. [Google Scholar]
  119. Pandey, P.; Parasuraman, R.; Doshi, P. Integrating Perceptions: A Human-Centered Physical Safety Model for Human–Robot Interaction. arXiv 2025, arXiv:2507.06700. [Google Scholar]
  120. ISO 12100; Safety of Machinery—General Principles for Design—Risk Assessment and Risk Reduction. International Organization for Standardization: Geneva, Switzerland, 2010.
  121. ISO/TS 15066; Robots and Robotic Devices—Collaborative Robots. International Organization for Standardization: Geneva, Switzerland, 2016.
  122. Hess, R.; Jerg, R.; Lindeholz, T.; Eck, D.; Schilling, K. SRRT*—A Probabilistic Optimal Trajectory Planner for Problematic Area Structures. IFAC-PapersOnLine 2016, 49, 331–336. [Google Scholar] [CrossRef]
  123. Lu, Y.; Da, C. Global and local path planning of robots combining ACO and dynamic window algorithm. Sci. Rep. 2025, 15, 9452. [Google Scholar] [CrossRef] [PubMed]
