Autonomous Mobile Robot Path Planning Techniques—A Review: Metaheuristic and Cognitive Techniques
Abstract
1. Introduction
- Provide a structured taxonomy of metaheuristic and cognitive (AI-based) algorithms for AMR path planning, highlighting their principles, strengths, and limitations.
- Summarize recent advancements, hybridization strategies, and application domains, with emphasis on emerging trends that integrate global optimization and learning-based adaptability.
- Present a comprehensive comparative analysis of algorithmic capabilities across static and dynamic scenarios, including 2D vs. 3D navigation, multi-robot coordination, and available real-world experimental validations.
- Identify persistent research challenges, such as parameter sensitivity, computational cost, sim-to-real transfer, and interoperability, while outlining promising future directions in multi-robot collaboration, real-time adaptability, and scalable 3D navigation.
- Introduce a dedicated discussion of open research questions in AMR path planning, providing a roadmap for addressing safety guarantees, explainability, benchmarking, and lifelong adaptation in complex real-world environments.
2. Methodology
(“AMR” OR “Autonomous mobile robot”) AND (“Path planning” OR “Navigation” OR “Route planning”) AND (“Techniques” OR “Approaches” OR “Algorithms”)
3. Autonomous Mobile Robot Path Planning Techniques
3.1. Terminology and Taxonomy (Path Planning vs. Motion Planning vs. Navigation)
- Planning scope: global (map-based route generation), local (reactive obstacle avoidance in the vicinity), or exploration/coverage (information-driven navigation in unknown environments).
- Output type: waypoints/path, trajectory, or direct control commands.
- Constraint level: geometric-only vs. feasibility-aware planning that respects mobile robot motion constraints, including nonholonomic kinematics (e.g., unicycle/differential-drive, Ackermann steering) and, when available, dynamic limits.
- Environmental assumptions: static vs. dynamic obstacles/targets, known vs. unknown maps, and 2D vs. 3D operation.
3.2. Metaheuristic Approach
3.2.1. Particle Swarm Optimization
3.2.2. Genetic Algorithm
3.2.3. Ant Colony Optimization
3.2.4. Firefly Algorithm
3.2.5. Grey Wolf Optimization
3.2.6. Other Metaheuristic Algorithms
3.3. Artificial Intelligence Techniques
3.3.1. Fuzzy Logic
3.3.2. Artificial Neural Networks
3.3.3. Neuro-Fuzzy
3.3.4. Reinforcement Learning
Generic Formulation and Scope (Planning vs. Avoidance/Exploration)
3.3.5. Deep Reinforcement Learning
3.3.6. Foundation Models and LLM/VLM-Enabled High-Level Navigation
1. Natural-language goal specification;
2. Semantic cost-map construction and constraint generation;
3. Long-horizon task decomposition into navigation subgoals;
4. Improved human–robot interaction.
4. Analysis and Discussion
4.1. Algorithm Discussion and Analysis


| Method | Article | Year | Kinematic Model | Dynamic Obstacles | Dynamic Targets | Hybrid Approach | Real Experiments | Multi-Robot | Search Space 3D | Optimality | Robustness | Efficiency |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [27] | 2018 | •• | • | •• | ||||||||
| [28] | 2018 | ✓ | ✓ | ••• | •• | •• | ||||||
| [29] | 2020 | •• | • | ••• | ||||||||
| [30] | 2018 | ✓ | ✓ | • | • | •• | ||||||
| [31] | 2019 | ✓ | ••• | •• | ••• | |||||||
| [32] | 2019 | ✓ | ✓ | ✓ | •• | •• | •• | |||||
| [34] | 2021 | ✓ | ✓ | ••• | ••• | ••• | ||||||
| [35] | 2020 | ✓ | ✓ | ••• | ••• | •• | ||||||
| [33] | 2021 | ✓ | ••• | •• | ••• | |||||||
| [36] | 2021 | ✓ | ✓ | ••• | •• | ••• | ||||||
| PSO | [37] | 2021 | ✓ | ✓ | •• | •• | •• | |||||
| [38] | 2022 | ✓ | ••• | ••• | ••• | |||||||
| [39] | 2022 | ✓ | ✓ | ✓ | ✓ | ✓ | •• | • | • | |||
| [40] | 2023 | ✓ | ••• | ••• | •• | |||||||
| [41] | 2023 | ✓ | ✓ | •• | • | • | ||||||
| [42] | 2024 | ✓ | ✓ | ✓ | •• | • | • | |||||
| [45] | 2024 | ✓ | ✓ | ✓ | •• | •• | •• | |||||
| [43] | 2024 | ✓ | ✓ | ••• | •• | ••• | ||||||
| [44] | 2025 | ✓ | ✓ | ••• | •• | •• | ||||||
| [46] | 2025 | ✓ | ✓ | ✓ | ✓ | •• | •• | •• | ||||
| [49] | 2017 | ✓ | •• | • | • | |||||||
| [50] | 2018 | •• | •• | •• | ||||||||
| [51] | 2018 | ✓ | ••• | •• | ••• | |||||||
| [52] | 2019 | ✓ | ✓ | ✓ | ✓ | •• | •• | • | ||||
| [53] | 2020 | •• | •• | •• | ||||||||
| [54] | 2020 | •• | •• | •• | ||||||||
| [55] | 2021 | ✓ | ••• | •• | ••• | |||||||
| [57] | 2021 | •• | •• | •• | ||||||||
| GA | [59] | 2021 | ✓ | •• | •• | •• | ||||||
| [60] | 2022 | ✓ | ✓ | ✓ | •• | ••• | ••• | |||||
| [61] | 2022 | ✓ | • | •• | • | |||||||
| [62] | 2024 | ••• | •• | ••• | ||||||||
| [63] | 2024 | •• | •• | •• | ||||||||
| [64] | 2024 | ✓ | ••• | •• | •• | |||||||
| [65] | 2024 | ✓ | ••• | •• | ••• | |||||||
| [66] | 2025 | ✓ | ✓ | ••• | •• | ••• | ||||||
| [67] | 2025 | ✓ | ✓ | ✓ | ✓ | •• | •• | • | ||||
| [72] | 2018 | ✓ | •• | ••• | ••• | |||||||
| [73] | 2019 | ✓ | ✓ | ✓ | ••• | ••• | ••• | |||||
| [74] | 2019 | ✓ | ✓ | • | •• | • | ||||||
| [75] | 2021 | ✓ | ✓ | • | •• | • | ||||||
| [71] | 2020 | ✓ | •• | •• | • | |||||||
| [76] | 2020 | ✓ | •• | • | •• | |||||||
| [77] | 2021 | ✓ | ✓ | •• | • | •• | ||||||
| [78] | 2021 | ✓ | •• | • | •• | |||||||
| [79] | 2022 | ••• | ••• | •• | ||||||||
| ACO | [80] | 2022 | ✓ | •• | • | •• | ||||||
| [81] | 2022 | ✓ | ✓ | ✓ | ✓ | •• | • | •• | ||||
| [83] | 2022 | •• | •• | ••• | ||||||||
| [84] | 2024 | ✓ | ••• | •• | ••• | |||||||
| [86] | 2024 | ✓ | •• | •• | ••• | |||||||
| [85] | 2025 | ••• | ••• | ••• | ||||||||
| [87] | 2025 | ✓ | ✓ | ✓ | ••• | ••• | ••• | |||||
| [88] | 2025 | ✓ | ✓ | ✓ | ✓ | ••• | •• | •• | ||||
| [89] | 2025 | ✓ | ✓ | •• | •• | •• | ||||||
| [90] | 2025 | ✓ | ✓ | ✓ | ••• | ••• | ••• | |||||
| [96] | 2018 | •• | •• | •• | ||||||||
| [97] | 2018 | ✓ | ✓ | ✓ | ✓ | •• | •• | • | ||||
| [98] | 2021 | ✓ | ✓ | •• | •• | • | ||||||
| [99] | 2021 | ✓ | ✓ | ✓ | •• | •• | • | |||||
| [100] | 2022 | ✓ | ✓ | ••• | •• | ••• | ||||||
| [101] | 2022 | ✓ | ✓ | ••• | •• | •• | ||||||
| FA | [103] | 2023 | ✓ | ✓ | ✓ | ✓ | •• | •• | •• | |||
| [102] | 2024 | ✓ | ••• | •• | ••• | |||||||
| [104] | 2024 | ✓ | ✓ | ✓ SMC | •• | • | • | |||||
| [105] | 2024 | ✓ | ✓ | ✓ RRT | •• | • | •• | |||||
| [106] | 2024 | ✓ | ✓ | ✓ CS | • | • | • | |||||
| [107] | 2024 | ✓ | •• | •• | •• | |||||||
| [110] | 2020 | ✓ | ✓ | ••• | •• | •• | ||||||
| [111] | 2021 | ✓ | ✓ | ✓ | •• | •• | •• | |||||
| [113] | 2022 | ✓ | ✓ | ✓ | ✓ | •• | •• | •• | ||||
| [114] | 2022 | ✓ | ✓ | ✓ | • | •• | • | |||||
| GWO | [115] | 2023 | ✓ | ✓ | ••• | •• | ••• | |||||
| [116] | 2023 | ✓ | ••• | ••• | ••• | |||||||
| [43] | 2024 | ✓ | ✓ | ••• | •• | ••• | ||||||
| [117] | 2025 | ✓ | ✓ | ✓ DWA | •• | •• | •• | |||||
| [118] | 2025 | ••• | •• | •• | ||||||||
| [119] | 2021 | ✓ | ✓ | • | • | • | ||||||
| [120] | 2022 | ✓ | ✓ | • | •• | • | ||||||
| [121] | 2022 | ✓ | ✓ | • | •• | • | ||||||
| Others | [122] | 2022 | ✓ | ✓ | ✓ | •• | •• | • | ||||
| [123] | 2024 | ✓ | ✓ | ✓ (HHO+AVOA) | •• | •• | •• | |||||
| [124] | 2025 | ✓ | •• | ••• | •• | |||||||
| [125] | 2025 | ✓ | ✓ (SSA+AVOA) | ✓ | ••• | ••• | ••• |


| Method | Article | Year | Kinematic Model | Dynamic Obstacles | Dynamic Targets | Hybrid Approach | Real Experiments | Multi-Robot | Search Space 3D | Optimality | Robustness | Efficiency |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [130] | 2017 | ✓ | ✓ | ✓ | •• | ••• | ••• | |||||
| [133] | 2018 | ✓ | ✓ | •• | ••• | ••• | ||||||
| [131] | 2018 | ✓ | ✓ | •• | ••• | ••• | ||||||
| [132] | 2019 | ✓ | ✓ | •• | ••• | ••• | ||||||
| [134] | 2020 | ✓ | ✓ | •• | ••• | ••• | ||||||
| [135] | 2020 | ✓ | ✓ | ✓ | •• | ••• | ••• | |||||
| Fuzzy | [136] | 2021 | ✓ | •• | ••• | ••• | ||||||
| Logic | [137] | 2022 | ✓ | ✓ | ✓ | •• | ••• | ••• | ||||
| [138] | 2022 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ••• | ••• | •• | ||
| [139] | 2023 | ✓ | ✓ | ✓ DWA | •• | ••• | ••• | |||||
| [140] | 2023 | ✓ | ✓ | ✓ DWA | •• | ••• | ••• | |||||
| [141] | 2024 | ✓ | ✓ | ✓ | •• | ••• | ••• | |||||
| [142] | 2024 | ✓ | ✓ DCA | ✓ | ••• | ••• | •• | |||||
| [143] | 2025 | ✓ | ✓ D*L, Quad3 | ••• | ••• | •• | ||||||
| [144] | 2025 | ✓ | ✓ A*, APF, QL | ••• | ••• | •• | ||||||
| [147] | 2018 | ✓ | ✓ | •• | ••• | ••• | ||||||
| [148] | 2018 | ✓ | •• | ••• | ••• | |||||||
| [149] | 2019 | ✓ | ✓ | ✓ | •• | ••• | ••• | |||||
| [150] | 2019 | ✓ | ✓ | ✓ | •• | ••• | ••• | |||||
| [151] | 2019 | ✓ | ✓ | ✓ | ✓ | ••• | ••• | •• | ||||
| [152] | 2020 | ✓ | ✓ | ✓ | •• | ••• | ••• | |||||
| [153] | 2020 | ✓ | ✓ | •• | ••• | ••• | ||||||
| [156] | 2021 | ✓ | ✓ | ✓ | ••• | ••• | •• | |||||
| Artificial | [157] | 2021 | ✓ | ✓ | •• | ••• | ••• | |||||
| Neural | [158] | 2021 | ✓ | •• | ••• | ••• | ||||||
| Network | [159] | 2022 | ✓ | ✓ | •• | ••• | ••• | |||||
| [160] | 2022 | ✓ | ✓ | ✓ | •• | ••• | ••• | |||||
| [161] | 2023 | ✓ | •• | •• | ••• | |||||||
| [162] | 2024 | ✓ | ✓ | ✓ | •• | ••• | ••• | |||||
| [163] | 2025 | ✓ | ✓ RL | ••• | ••• | •• | ||||||
| [164] | 2025 | ✓ | ✓ | ✓ DWA | ✓ | •• | ••• | ••• | ||||
| [165] | 2025 | ✓ | ✓ IWO | ••• | •• | •• | ||||||
| [166] | 2025 | ✓ | ✓ | ✓ IGA | ✓ | ••• | ••• | •• | ||||
| [170] | 2019 | ✓ | ✓ | ✓ | •• | ••• | •• | |||||
| [171] | 2019 | ✓ | ✓ | ✓ | •• | ••• | •• | |||||
| [172] | 2020 | ✓ | ✓ | ✓ GPS | ••• | ••• | •• | |||||
| [173] | 2021 | ✓ | ✓ Fuzzy | •• | ••• | •• | ||||||
| [174] | 2022 | ✓ | •• | ••• | •• | |||||||
| [175] | 2022 | ✓ | ✓ | ✓ | ••• | ••• | •• | |||||
| Neuro- | [176] | 2022 | ✓ | ✓ | ••• | ••• | •• | |||||
| Fuzzy | [179] | 2022 | ✓ | ✓ | ✓ FCM | ✓ | ••• | ••• | •• | |||
| [177] | 2023 | ✓ | ✓ | ••• | ••• | •• | ||||||
| [178] | 2024 | ✓ | ✓ | ✓ | ••• | ••• | •• | |||||
| [180] | 2025 | ✓ | •• | ••• | ••• | |||||||
| [181] | 2025 | ✓ | ✓ TLBO | ••• | ••• | •• | ||||||
| [182] | 2025 | ✓ | ✓ | ✓ A*, DWA | ••• | ••• | •• | |||||
| [166] | 2025 | ✓ | ✓ | ✓ BINN, GA | ✓ | ••• | ••• | •• | ||||
| [183] | 2025 | ✓ | ✓ | •• | ••• | •• | ||||||
| [195] | 2019 | ✓ | ✓ | •• | ••• | •• | ||||||
| [193] | 2020 | ✓ | ✓ | •• | •• | •• | ||||||
| [192] | 2021 | ✓ | •• | •• | •• | |||||||
| [194] | 2021 | ✓ | ✓ | •• | ••• | •• | ||||||
| [194] | 2021 | ✓ | ✓ | ✓ | •• | ••• | •• | |||||
| [190] | 2022 | ✓ | •• | •• | •• | |||||||
| [189] | 2023 | ✓ | ✓ | ✓ | ••• | ••• | •• | |||||
| RL | [196] | 2023 | ✓ | ✓ | ✓ | •• | ••• | •• | ||||
| [197] | 2024 | ✓ | •• | •• | •• | |||||||
| [238] | 2024 | ✓ | ✓ | •• | ••• | •• | ||||||
| [198] | 2024 | ✓ | •• | •• | •• | |||||||
| [144] | 2025 | ✓ | ✓ Fuz A*, APF | ••• | ••• | •• | ||||||
| [199] | 2025 | ✓ | ✓ | ✓ A*, DWA | ✓ | ••• | ••• | •• | ||||
| [200] | 2025 | ✓ | •• | •• | •• | |||||||
| [201] | 2025 | ✓ | ✓ ACO, Adap | ••• | ••• | •• | ||||||
| [208] | 2019 | ✓ | ✓ | ✓ | ••• | ••• | •• | |||||
| [209] | 2020 | ✓ | ✓ | ✓ | ••• | ••• | •• | |||||
| [210] | 2020 | ✓ | ✓ | •• | ••• | •• | ||||||
| [211] | 2021 | ✓ | ✓ | ✓ | ••• | ••• | •• | |||||
| [212] | 2021 | ✓ | ✓ | ✓ | ✓ | ••• | ••• | •• | ||||
| [213] | 2021 | ✓ | ✓ | ✓ | ✓ | ••• | ••• | •• | ||||
| [215] | 2022 | ✓ | ✓ | ✓ | ✓ | ••• | ••• | •• | ||||
| [216] | 2022 | ✓ | ✓ | •• | ••• | •• | ||||||
| Deep | [217] | 2022 | ✓ | ✓ | ✓ | ✓ | ••• | ••• | •• | |||
| RL | [218] | 2022 | ✓ | •• | ••• | •• | ||||||
| [224] | 2023 | ✓ | ✓ | ✓ | ••• | ••• | •• | |||||
| [219] | 2024 | •• | •• | •• | ||||||||
| [220] | 2024 | ✓ | ✓ | •• | ••• | •• | ||||||
| [225] | 2024 | ✓ | •• | ••• | •• | |||||||
| [227] | 2024 | ✓ | ✓ | ••• | ••• | •• | ||||||
| [128] | 2025 | ✓ | ✓ SAC, PER | ••• | ••• | •• | ||||||
| [228] | 2025 | ✓ | ✓ | •• | ••• | •• | ||||||
| [229] | 2025 | ✓ | ✓ ANFIS, TAHA | ••• | ••• | •• | ||||||
| [230] | 2025 | ✓ | ✓ | ✓ MOSFMO | ✓ | ••• | ••• | •• |
4.2. Key Synthesized Findings and Maturity-Level Insights
- F1 (Adoption trend in dynamic settings). AI-based methods appear more frequently in dynamic-environment studies than metaheuristics, reflecting a shift toward perception-driven and reactive navigation policies.
- F2 (Deployment maturity gap). Despite substantial algorithmic progress, real-world validation remains limited (only about 21% of surveyed works report physical experiments), indicating that the field is still deployment-immature under real sensing noise, latency, and safety constraints.
- F3 (Evaluation scope remains uneven). The literature strongly emphasizes static obstacles and 2D settings (see Figure 17 and Figure 18), whereas comparatively fewer studies address mobile-target tracking, multi-robot coordination, and 3D navigation. These scenarios expose additional requirements (prediction, communication, and dynamic feasibility) that are not consistently evaluated across benchmarks.
- F4 (Dominance of hierarchical navigation stacks). Many contemporary systems are best viewed as stacks rather than single planners: a global planner provides route structure, while a local module handles reactive obstacle avoidance and short-horizon feasibility. This is particularly evident in RL/DRL studies, which frequently act as local avoidance/exploration modules rather than standalone global path planners.
- F5 (Metaheuristics as flexible optimizers, not guaranteed solvers). Metaheuristics remain attractive for flexible objective design (e.g., smoothness, energy, risk) and global search; however, they are typically near-optimal, sensitive to tuning/initialization, and computationally heavy for strict real-time settings unless combined with receding-horizon updates, parallelization, or hybridization.
- F6 (Learning methods trade training burden for online adaptability). Once trained, learning-based policies (especially DRL) can react quickly in dynamic scenes and under partial observability. Still, performance depends on data/simulation fidelity and usually requires additional safety constraints (e.g., shielding) for reliable deployment.
- F7 (Hybridization as a maturity bridge). Hybrid pipelines (metaheuristic + deterministic or global planner + learned local policy) increasingly represent a practical compromise: they improve robustness and adaptability by combining complementary strengths, at the cost of greater integration complexity.
4.3. Practical Decision Guidance
4.4. Computational Considerations: Time and Space Complexity
4.4.1. Metaheuristic Approaches
4.4.2. Heuristic-Guided RL and Deep RL
5. Opportunities in AMR Path Planning
5.1. Parameter Tuning of Existing Metaheuristic Approaches
- Utilizing automated methods, such as grid search, random search, Bayesian optimization, and AI-based approaches, to find the best parameter combinations in a variety of settings.
- Algorithm performance in a dynamic environment can be improved by creating self-adaptive metaheuristics that adjust parameters in real time, such as mutation rates in GAs or adaptive inertia weights in PSOs.
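As an illustration of the second point, the snippet below sketches a PSO velocity/position update with a linearly decreasing inertia weight, one of the simplest adaptive schedules. The function names (`linear_inertia`, `pso_step`) and the default coefficient values are illustrative choices, not taken from any surveyed work:

```python
import random

def linear_inertia(w_max, w_min, t, t_max):
    """Linearly decreasing inertia weight: a common adaptive PSO schedule."""
    return w_max - (w_max - w_min) * t / t_max

def pso_step(positions, velocities, pbest, gbest, t, t_max,
             w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """One PSO update in which the inertia weight varies with iteration t."""
    w = linear_inertia(w_max, w_min, t, t_max)
    for i, (x, v) in enumerate(zip(positions, velocities)):
        r1, r2 = random.random(), random.random()
        # Standard velocity update: inertia + cognitive + social terms.
        velocities[i] = [w * vj + c1 * r1 * (pb - xj) + c2 * r2 * (gb - xj)
                         for vj, xj, pb, gb in zip(v, x, pbest[i], gbest)]
        positions[i] = [xj + vj for xj, vj in zip(x, velocities[i])]
    return positions, velocities
```

Early iterations (large `w`) favor global exploration; late iterations (small `w`) favor local refinement, which is the usual rationale for this schedule in dynamic settings.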
5.2. Hybrid Algorithm Development
5.2.1. Principles for Designing Hybrid Planners
5.2.2. Common Hybridization Patterns
- Hierarchical decomposition (global–local stack): A global planner produces a coarse route or waypoint sequence, while a local planner/controller performs reactive collision avoidance and enforces short-horizon feasibility. This pattern is particularly effective in dynamic or partially observed environments where frequent replanning is required.
- Embedded local refinement: A global search stage (e.g., sampling-based or metaheuristic optimization) generates candidate paths, followed by deterministic refinement (e.g., smoothing, constraint projection, or local optimization) to satisfy kinematic/dynamic limits and improve clearance and smoothness.
- Safety shielding/constraint enforcement: A learning-based or heuristic module proposes actions or waypoints, while an explicit safety layer (e.g., rule-based constraints, constrained optimization, control barrier functions, or a classical fallback planner) filters unsafe outputs and guarantees collision avoidance.
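The safety-shielding pattern above can be sketched in a few lines: a learned module proposes a velocity command, and an explicit check (here, a hypothetical one-step forward simulation with a clearance threshold) either passes it through or substitutes a conservative fallback. All names and parameter values are illustrative assumptions:

```python
import math

def min_clearance(pos, obstacles):
    """Distance from a 2D position to the nearest obstacle centre."""
    return min(math.dist(pos, o) for o in obstacles)

def is_safe(state, action, obstacles, robot_radius=0.3, dt=0.5):
    """Forward-simulate one step under the proposed velocity and
    require the predicted position to keep positive clearance."""
    nxt = (state[0] + action[0] * dt, state[1] + action[1] * dt)
    return min_clearance(nxt, obstacles) > robot_radius

def shielded_action(state, proposed, obstacles, fallback=(0.0, 0.0)):
    """Safety shield: accept the learned module's proposal only if the
    one-step prediction stays collision-free; otherwise stop (fallback)."""
    return proposed if is_safe(state, proposed, obstacles) else fallback
```

In a real stack, the safety check would typically use the full costmap and kinematic model, and the fallback would be a classical planner rather than a stop command; the structure, however, is the same.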
5.2.3. Methodology for Hybrid Algorithm Design
- Match the decomposition to the task regime: Assign each module a well-scoped role (global routing, reactive avoidance, exploration, or trajectory feasibility) based on the operating scenario (e.g., static vs. dynamic, known vs. unknown, kinematically constrained vs. unconstrained).
- Define shared objectives and constraints: Ensure modules optimize compatible criteria (e.g., time/length, clearance, smoothness, energy) and encode hard constraints (collision-free, kinematic/dynamic feasibility) consistently to avoid conflicting behaviors.
- Specify coupling and information flow: Define what is exchanged (e.g., waypoints, local costmaps, obstacle predictions, feasibility feedback) and how often replanning occurs (periodic vs. event-triggered), including how to resolve disagreements between modules.
- Manage complexity and failure modes: Hybrids improve capability but increase system complexity; evaluation should include ablations (removing modules), sensitivity to hyperparameters, and systematic failure-case analysis under sensing noise and map errors.
5.2.4. Future Opportunities
- Learning-guided metaheuristics: combining DRL with bio-inspired optimization to bias search toward high-value regions and improve planning in unknown or rapidly changing environments.
- Layered hybrid architectures: using a metaheuristic (or graph/sampling planner) for global routing, coupled with a learned local policy (e.g., DRL or fuzzy logic) for reactive avoidance and short-horizon feasibility.
- AutoML-style adaptation: using metaheuristics to tune or evolve hyperparameters of learning-based planners online (or across deployment sites), improving robustness across different layouts and traffic patterns.
5.3. Dynamic and Complex Environments
- Disaster response zones with unpredictable hazards.
- Developing real-time adaptive planners that continuously learn from environmental changes, e.g., via reinforcement learning.
- Introducing predictive models to estimate the movement of dynamic obstacles (e.g., pedestrians, vehicles) and adjust navigation plans accordingly.
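The second bullet can be made concrete with a minimal constant-velocity forecast, the simplest predictive model for a dynamic obstacle; the function and its arguments are illustrative, not drawn from a specific surveyed method:

```python
def predict_positions(pos, vel, horizon, dt):
    """Constant-velocity forecast of a dynamic obstacle's 2D position
    over a short horizon of `horizon` steps of length `dt` seconds."""
    return [(pos[0] + vel[0] * k * dt, pos[1] + vel[1] * k * dt)
            for k in range(1, horizon + 1)]
```

A planner would inflate the costmap around these predicted positions before replanning; richer models (interacting pedestrians, learned trajectory predictors) follow the same interface.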
5.4. Multi-Robot Systems and Coordination
- Decentralized learning-based approaches (e.g., multi-agent reinforcement learning) enable robots to learn cooperative strategies without relying on centralized control.
- Communication-aware planning, where robots share partial knowledge of the environment or goals to improve collective decision-making.
- Dynamic role assignment and task sharing, where robots adapt roles (leader, follower, scout) based on changing mission needs or failures.
5.5. 3D Path Planning and Aerial/Underwater Robots
5.6. Real-World Deployment and Validation
- Combining heuristic methods with DRL to create systems that are both interpretable (via heuristics) and adaptive (via learning), enabling safer real-world deployment.
- Field testing in application-specific settings, such as agriculture, mining, or logistics, to assess algorithm robustness and adaptability under real-world constraints, including limited sensing, noise, and latency.
5.7. Agentic AI for Autonomous Mobile Robots
Key Research Directions and Opportunities
- Agentic orchestration of planning tools. Develop task-level agents that decompose goals into sub-goals, select/configure planners (e.g., D*, RRT*), and iteratively refine plans through self-critique. Integrate formal verifiers (collision, kinematics, and aisle/traffic rules) to provide structured feedback for the next iteration in case of failures.
- Uncertainty- and risk-aware routing. Couple agentic controllers with probabilistic forecasts of dynamic actors and sensing quality; optimize risk-sensitive objectives (e.g., chance constraints, Conditional Value at Risk) rather than focusing solely on time/distance. Explore anytime variants that trade computation for reduced tail risk.
- Decentralized multi-robot coordination. Use agentic negotiation to allocate right-of-way, form platoons, and de-conflict routes with minimal communication under partial observability. Compare emergent conventions with centralized schedulers across density regimes.
- Energy-, compute-, and latency-aware scheduling. Co-design agent policies with embedded constraints (CPU/GPU power, thermal limits), prioritizing planner/tool calls that meet real-time deadlines while minimizing energy for long-duration autonomy.
- Human-in-the-loop intent and semantics. Map high-level human directives (semantic goals, no-go zones, and task priorities) into sub-goals and constraints, enabling interpretable re-planning and operator override when mission objectives change.
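As a concrete note on the risk-sensitive objectives mentioned above, Conditional Value at Risk over sampled route costs can be estimated with a short empirical tail average. This is an illustrative sketch assuming cost samples (e.g., simulated travel times under traffic uncertainty) are already available:

```python
def cvar(samples, alpha=0.9):
    """Empirical Conditional Value at Risk: the mean of the worst
    (1 - alpha) fraction of sampled route costs."""
    s = sorted(samples)
    k = max(1, int(round((1 - alpha) * len(s))))  # size of the upper tail
    tail = s[-k:]
    return sum(tail) / len(tail)
```

Ranking candidate routes by `cvar` rather than by mean cost penalizes plans with rare but severe outcomes, which is the motivation for risk-sensitive routing over purely time/distance objectives.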
6. Open Questions in AMR Path Planning
7. Conclusions and Future Prospects
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
- The following abbreviations are used in this manuscript for key planning strategies and algorithms:
| Acronym | Meaning |
|---|---|
| Robotic Platforms | |
| AMR | Autonomous Mobile Robot |
| WMR | Wheeled Mobile Robot |
| AGV | Automated Guided Vehicle |
| Heuristic and Bio-Inspired Path Planning Methods | |
| PSO | Particle Swarm Optimization |
| GA | Genetic Algorithm |
| ACO | Ant Colony Optimization |
| GWO | Grey Wolf Optimizer |
| BSO | Bat Swarm Optimization |
| ABC | Artificial Bee Colony |
| FA | Firefly Algorithm |
| WOA | Whale Optimization Algorithm |
| AVOA | African Vulture Optimization Algorithm |
| LS | Local Search |
| Classical Path Planning Methods | |
| APF | Artificial Potential Field |
| ODA | Obstacle Detection and Avoidance |
| DWA | Dynamic Window Approach |
| Fuzzy, Analytic, and Neuro-Fuzzy Techniques | |
| FL | Fuzzy Logic |
| AHP | Analytic Hierarchy Process |
| ANN | Artificial Neural Network |
| BNN | Behavioral Neural Network |
| GRU | Gated Recurrent Unit |
| ANFIS | Adaptive Neuro-Fuzzy Inference System |
| Reinforcement Learning and Deep RL Methods | |
| RL | Reinforcement Learning |
| DRL | Deep Reinforcement Learning |
| Q-L | Q-Learning |
| DQN | Deep Q-Network |
| DDQN | Double Deep Q-Network |
| DDPG | Deep Deterministic Policy Gradient |
| PPO | Proximal Policy Optimization |
| PER | Prioritized Experience Replay |
References
- Teja, G.K.; Mohanty, P.K.; Das, S. Review on path planning methods for mobile robot. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2025, 239, 5547–5580. [Google Scholar] [CrossRef]
- Loganathan, A.; Ahmad, N.S. A systematic review on recent advances in autonomous mobile robot navigation. Eng. Sci. Technol. Int. J. 2023, 40, 101343. [Google Scholar] [CrossRef]
- Sheltami, T.; Ahmed, G.; Ghaleb, M.; Mahmoud, A. UAV Path Planning and Trajectory Optimization: A Comprehensive Survey. Arab. J. Sci. Eng. 2025, 1–41. [Google Scholar] [CrossRef]
- Sheltami, T.; Ahmed, G.; Yasar, A.U.H. An Optimization Approach of IoD Deployment for Optimal Coverage Based on Radio Frequency Model. Comput. Model. Eng. Sci. 2024, 139, 2627–2647. [Google Scholar] [CrossRef]
- Badamasi Aremu, M.; Kabir, I.K.; Ahmed, G.; El-Ferik, S. Autonomous Mobile Robot Path Planning Techniques—A Review: Classical and Heuristic Techniques. IEEE Access 2025, 13, 117999–118022. [Google Scholar] [CrossRef]
- Ugwoke, K.C.; Nnanna, N.A.; Abdullahi, S.E.Y. Simulation-based review of classical, heuristic, and metaheuristic path planning algorithms. Sci. Rep. 2025, 15, 12643. [Google Scholar] [CrossRef]
- Liu, L.; Wang, X.; Yang, X.; Liu, H.; Li, J.; Wang, P. Path planning techniques for mobile robots: Review and prospect. Expert Syst. Appl. 2023, 227, 120254. [Google Scholar] [CrossRef]
- Ibraheem, I.K.; Ajeil, F.H. Path Planning of an Autonomous Mobile Robot in a Dynamic Environment using Modified Bat Swarm Optimization. arXiv 2018, arXiv:1807.05352. [Google Scholar]
- Wahab, M.N.A.; Nefti-Meziani, S.; Atyabi, A. A comparative review on mobile robot path planning: Classical or meta-heuristic methods? Annu. Rev. Control 2020, 50, 233–252. [Google Scholar] [CrossRef]
- Shahid, S. Artificial Intelligence in Path Planning for Autonomous Robots: A Review. Metaheuristic Optim. Rev. 2024, 2, 37–47. [Google Scholar] [CrossRef]
- Thrun, S. Probabilistic algorithms in robotics. AI Mag. 2000, 21, 93. [Google Scholar]
- LaValle, S.M. Planning Algorithms; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar]
- Siegwart, R.; Nourbakhsh, I.R.; Scaramuzza, D. Introduction to Autonomous Mobile Robots; MIT Press: Cambridge, MA, USA, 2011. [Google Scholar]
- Dokeroglu, T.; Canturk, D.; Kucukyilmaz, T. A survey on pioneering metaheuristic algorithms between 2019 and 2024. arXiv 2024, arXiv:2501.14769. [Google Scholar]
- Lin, S.; Liu, A.; Wang, J.; Kong, X. An intelligence-based hybrid PSO-SA for mobile robot path planning in warehouse. J. Comput. Sci. 2023, 67, 101938. [Google Scholar] [CrossRef]
- Lou, T.S.; Yue, Z.P.; Jiao, Y.; He, Z. A hybrid strategy-based GJO algorithm for robot path planning. Expert Syst. Appl. 2023, 238, 121975. [Google Scholar] [CrossRef]
- Sehuveret Hernández, D.; García-Muñoz, J.A.; Barranco Gutiérrez, A.I. Evaluation of metaheuristic optimization algorithms applied to path planning. Int. J. Adv. Robot. Syst. 2024, 21, 17298806241285302. [Google Scholar] [CrossRef]
- Qin, H.; Shao, S.; Wang, T.; Yu, X.; Jiang, Y.; Cao, Z. Review of Autonomous Path Planning Algorithms for Mobile Robots. Drones 2023, 7, 211. [Google Scholar] [CrossRef]
- Xu, S.; Ho, E.S.; Shum, H.P. A hybrid metaheuristic navigation algorithm for robot path rolling planning in an unknown environment. Mechatron. Syst. Control 2019, 47, 216–224. [Google Scholar] [CrossRef]
- Garip, Z.; Karayel, D.; Erhan Çimen, M. A study on path planning optimization of mobile robots based on hybrid algorithm. Concurr. Comput. Pract. Exp. 2022, 34, e6721. [Google Scholar] [CrossRef]
- Sood, M.; Panchal, V.K. Meta-heuristic techniques for path planning: Recent trends and advancements. Int. J. Intell. Syst. Technol. Appl. 2020, 19, 36–77. [Google Scholar] [CrossRef]
- Ahmed, G.; Sheltami, T.; Mahmoud, A.; Yasar, A. IoD swarms collision avoidance via improved particle swarm optimization. Transp. Res. Part A Policy Pract. 2020, 142, 260–278. [Google Scholar] [CrossRef]
- Ahmed, G.; Eltayeb, A.; Alyazidi, N.M.; Imran, I.H.; Sheltami, T.; El-Ferik, S. Improved particle swarm optimization for fractional order PID control design in robotic Manipulator system: A performance analysis. Results Eng. 2024, 24, 103089. [Google Scholar] [CrossRef]
- Mobarez, E.; Sarhan, A.; Ashry, M. Obstacle avoidance for multi-UAV path planning based on particle swarm optimization. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2021; Volume 1172, p. 012039. [Google Scholar]
- Ahmed, G.; Sheltami, T.R. A safety system for maximizing operated uavs capacity under regulation constraints. IEEE Access 2023, 11, 139069–139081. [Google Scholar] [CrossRef]
- Aremu, M.B.; Abdel-Nasser, M.; Alyazidi, N.M.; El-Ferik, S. Disturbance Observer-Based Bio-Inspired LQR Optimization for DC Motor Speed Control. IEEE Access 2024, 12, 152418–152429. [Google Scholar] [CrossRef]
- Adamu, P.I.; Jegede, J.T.; Okagbue, H.I.; Oguntunde, P.E. Shortest path planning algorithm—A Particle Swarm Optimization (PSO) approach. In Proceedings of the World Congress on Engineering, London, UK, 4–6 July 2018; Volume 1, pp. 4–6. [Google Scholar]
- Wang, B.; Li, S.; Guo, J.; Chen, Q. Car-like mobile robot path planning in rough terrain using multi-objective particle swarm optimization algorithm. Neurocomputing 2018, 282, 42–51. [Google Scholar] [CrossRef]
- Alam, M.S.; Rafique, M.U.; Khan, M.U. Mobile robot path planning in static environments using particle swarm optimization. arXiv 2020, arXiv:2008.10000. [Google Scholar] [CrossRef]
- Meerza, S.I.A.; Islam, M.; Uzzal, M.M. Optimal Path Planning Algorithm for Swarm of Robots Using Particle Swarm Optimization Technique. In Proceedings of the 2018 3rd International Conference on Information Technology, Information System and Electrical Engineering (ICITISEE), Yogyakarta, Indonesia, 13–14 November 2018; IEEE: New York, NY, USA, 2018; pp. 330–334. [Google Scholar]
- Lian, J.; Yu, W.; Liu, W. A Chaotic Adaptive Particle Swarm Optimization for Robot Path Planning. In Proceedings of the 2019 Chinese Control Conference (CCC), Guangzhou, China, 27–30 July 2019; pp. 4751–4756. [Google Scholar] [CrossRef]
- Krell, E.; Sheta, A.; Balasubramanian, A.; King, S. Collision-Free Autonomous Robot Navigation in Unknown Environments Utilizing PSO for Path Planning. J. Artif. Intell. Soft Comput. Res. 2019, 9, 267–282. [Google Scholar] [CrossRef]
- Cheng, X.; Li, J.; Zheng, C.; Zhang, J.; Zhao, M. An Improved PSO-GWO Algorithm with Chaos and Adaptive Inertial Weight for Robot Path Planning. Front. Neurorobot. 2021, 15, 770361. [Google Scholar] [CrossRef] [PubMed]
- Gul, F.; Rahiman, W.; Alhady, S.; Ali, A.; Mir, I.; Jalil, A. Meta-heuristic approach for solving multi-objective path planning for autonomous guided robot using PSO–GWO optimization algorithm with evolutionary programming. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 7873–7890. [Google Scholar] [CrossRef]
- Ajeil, F.H.; Ibraheem, I.K.; Azar, A.T.; Humaidi, A.J. Autonomous navigation and obstacle avoidance of an omnidirectional mobile robot using swarm optimization and sensors deployment. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420929498. [Google Scholar] [CrossRef]
- Lu, J.; Zhang, Z. An Improved Simulated Annealing Particle Swarm Optimization Algorithm for Path Planning of Mobile Robots Using Mutation Particles. Wirel. Commun. Mob. Comput. 2021, 2021, 2374712. [Google Scholar] [CrossRef]
- Mohanty, P.K.; Dewang, H.S. A smart path planner for wheeled mobile robots using adaptive particle swarm optimization. J. Braz. Soc. Mech. Sci. Eng. 2021, 43, 101. [Google Scholar] [CrossRef]
- Abaas, T.F.; Shabeeb, A.H. Obstacle Avoidance and Path Planning of a Wheeled Mobile Robot Using Hybrid Algorithm. Eng. Technol. J. 2022, 40, 1659–1670. [Google Scholar] [CrossRef]
- Abed, B.M.; Jasim, W.M. Multi-objective optimization path planning with moving target. IAES Int. J. Artif. Intell. 2022, 11, 1184–1196. [Google Scholar] [CrossRef]
- Jia, L.; Li, J.; Ni, H.; Zhang, D. Autonomous mobile robot global path planning: A prior information-based particle swarm optimization approach. Control Theory Technol. 2023, 21, 173–189. [Google Scholar] [CrossRef]
- Xin, J.; Li, Z.; Zhang, Y.; Li, N. Efficient real-time path planning with self-evolving particle swarm optimization in dynamic scenarios. Unmanned Syst. 2024, 12, 215–226. [Google Scholar] [CrossRef]
- Xin, J.; Kim, J.; Chu, S.; Li, N. OkayPlan: Obstacle Kinematics Augmented Dynamic real-time path Planning via particle swarm optimization. Ocean Eng. 2024, 303, 117841. [Google Scholar] [CrossRef]
- Promkaew, N.; Thammawiset, S.; Srisan, P.; Sanitchon, P.; Tummawai, T.; Sukpancharoen, S. Development of metaheuristic algorithms for efficient path planning of autonomous mobile robots in indoor environments. Results Eng. 2024, 22, 102280. [Google Scholar] [CrossRef]
- Menebröker, F.; Stadtler, J.; Mohamed, M. Mobile Robot Path Planning Under Kinematic Constraints by Metaheuristic B-Spline Optimization. In Proceedings of the 2025 11th International Conference on Automation, Robotics, and Applications (ICARA), Zagreb, Croatia, 12–14 February 2025; IEEE: New York, NY, USA, 2025; pp. 224–229. [Google Scholar]
- Mohaghegh, M.; Jafarpourdavatgar, H.; Saeedinia, S.A. New design of smooth PSO-IPF navigator with kinematic constraints. IEEE Access 2024, 12, 175108–175121. [Google Scholar] [CrossRef]
- Zhuang, H.; Jiang, C. RRT* path planning method for mobile robot based on particle swarm optimization and rotation angle constraint. In Proceedings of the International Conference Optoelectronic Information and Optical Engineering (OIOE2024), Kunming, China, 8–10 March 2024; SPIE: Philadelphia, PA, USA, 2025; Volume 13513, pp. 593–602. [Google Scholar]
- Patle, B.; Pandey, A.; Parhi, D.; Jagadeesh, A. A review: On path planning strategies for navigation of mobile robot. Def. Technol. 2019, 15, 582–606. [Google Scholar] [CrossRef]
- Ahmed, G.; Sheltami, T.; Ghaleb, M.; Hamdan, M.; Mahmoud, A.; Yasar, A. Energy-efficient internet of drones path-planning study using meta-heuristic algorithms. Appl. Sci. 2024, 14, 2418. [Google Scholar] [CrossRef]
- Wang, Y.; Zhou, H.; Wang, Y. Mobile robot dynamic path planning based on improved genetic algorithm. AIP Conf. Proc. 2017, 1864, 020046. [Google Scholar] [CrossRef]
- Lamini, C.; Benhlima, S.; Elbekri, A. Genetic algorithm based approach for autonomous mobile robot path planning. Procedia Comput. Sci. 2018, 127, 180–189. [Google Scholar] [CrossRef]
- Mao, L.; Ji, X.; Qin, F. A robot obstacle avoidance method based on improved genetic algorithm. In Proceedings of the 2018 11th International Conference on Intelligent Computation Technology and Automation (ICICTA), Changsha, China, 22–23 September 2018; IEEE: New York, NY, USA, 2018; pp. 327–331. [Google Scholar]
- Nazarahari, M.; Khanmirza, E.; Doostie, S. Multi-objective multi-robot path planning in continuous environment using an enhanced genetic algorithm. Expert Syst. Appl. 2019, 115, 106–120. [Google Scholar] [CrossRef]
- Li, Y.; Huang, Z.; Xie, Y. Path planning of mobile robot based on improved genetic algorithm. In Proceedings of the 2020 3rd International Conference on Electron Device and Mechanical Engineering (ICEDME), Suzhou, China, 1–3 May 2020; IEEE: New York, NY, USA, 2020; pp. 691–695. [Google Scholar]
- Li, Y.; Dong, D.; Guo, X. Mobile robot path planning based on improved genetic algorithm with A-star heuristic method. In Proceedings of the 2020 IEEE 9th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Online, 11–13 December 2020; IEEE: New York, NY, USA, 2020; Volume 9, pp. 1306–1311. [Google Scholar]
- Wang, M. Real-time path optimization of mobile robots based on improved genetic algorithm. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2021, 235, 646–651. [Google Scholar] [CrossRef]
- Zhang, Z.; Lu, R.; Zhao, M.; Luan, S.; Bu, M. Robot path planning based on genetic algorithm with hybrid initialization method. J. Intell. Fuzzy Syst. 2021, 42, 2041–2056. [Google Scholar] [CrossRef]
- Li, K.; Hu, Q.; Liu, J. Path planning of mobile robot based on improved multiobjective genetic algorithm. Wirel. Commun. Mob. Comput. 2021, 2021, 8836615. [Google Scholar] [CrossRef]
- Suresh, K.; Venkatesan, R.; Venugopal, S. Mobile robot path planning using multi-objective genetic algorithm in industrial automation. Soft Comput. 2022, 26, 7387–7400. [Google Scholar] [CrossRef]
- Huang, F.; Fu, H.; Chen, J.; Wang, X. Mobile robot path planning based on improved genetic algorithm. In Proceedings of the 2021 4th World Conference on Mechanical Engineering and Intelligent Manufacturing (WCMEIM), Wuhan, China, 29 November–1 December 2021; pp. 378–383. [Google Scholar] [CrossRef]
- Shi, K.; Huang, L.; Jiang, D.; Sun, Y.; Tong, X.; Xie, Y.; Fang, Z. Path planning optimization of intelligent vehicle based on improved genetic and ant colony hybrid algorithm. Front. Bioeng. Biotechnol. 2022, 10, 905983. [Google Scholar] [CrossRef]
- Rahmaniar, W.; Rakhmania, A.E. Mobile Robot Path Planning in a Trajectory with Multiple Obstacles Using Genetic Algorithms. J. Robot. Control 2022, 3, 1–7. [Google Scholar] [CrossRef]
- Ab Wahab, M.N.; Nazir, A.; Khalil, A.; Ho, W.J.; Akbar, M.F.; Noor, M.H.M.; Mohamed, A.S.A. Improved genetic algorithm for mobile robot path planning in static environments. Expert Syst. Appl. 2024, 249, 123762. [Google Scholar] [CrossRef]
- Rajendran, P.A.; Othman, M. A Comparative Study on Ant-Colony Algorithm and Genetic Algorithm for Mobile Robot Planning. In Proceedings of the International Conference on Soft Computing and Data Mining, Putrajaya, Malaysia, 21–22 August 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 286–295. [Google Scholar]
- Zhu, J.; Pan, D. Improved Genetic Algorithm for Solving Robot Path Planning Based on Grid Maps. Mathematics 2024, 12, 4017. [Google Scholar] [CrossRef]
- Mankudiyil, R.; Dornberger, R.; Hanne, T. Improved Genetic Algorithm in a Static Environment for the Robotic Path Planning Problem. In Proceedings of the International Conference on Data Science and Applications (ICDSA 2023), Jaipur, India, 14–15 July 2023; Springer: Berlin/Heidelberg, Germany, 2024; Volume 819, pp. 217–230. [Google Scholar]
- Zhang, Z.; Yang, H.; Bai, X.; Zhang, S.; Xu, C. The Path Planning of Mobile Robots Based on an Improved Genetic Algorithm. Appl. Sci. 2025, 15, 3700. [Google Scholar] [CrossRef]
- Balza, M.; Goldbarg, M.A.; Silva, S.N.; Silva, L.M.; Fernandes, M.A. A Real-Time Safe Navigation Proposal for Mobile Robots in Unknown Environments Using Meta-Heuristics. IEEE Access 2025, 13, 23987–24013. [Google Scholar] [CrossRef]
- Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66. [Google Scholar] [CrossRef]
- Yang, Y.; Xiong, X.; Yan, Y. UAV Formation Trajectory Planning Algorithms: A Review. Drones 2023, 7, 62. [Google Scholar] [CrossRef]
- Zhang, H.Y.; Lin, W.M.; Chen, A.X. Path planning for the mobile robot: A review. Symmetry 2018, 10, 450. [Google Scholar] [CrossRef]
- Ajeil, F.H.; Ibraheem, I.K.; Azar, A.T.; Humaidi, A.J. Grid-based mobile robot path planning using aging-based ant colony optimization algorithm in static and dynamic environments. Sensors 2020, 20, 1880. [Google Scholar] [CrossRef]
- Akka, K.; Khaber, F. Mobile robot path planning using an improved ant colony optimization. Int. J. Adv. Robot. Syst. 2018, 15, 1729881418774673. [Google Scholar] [CrossRef]
- Chen, G.; Liu, J. Mobile robot path planning using ant colony algorithm and improved potential field method. Comput. Intell. Neurosci. 2019, 2019, 1932812. [Google Scholar] [CrossRef]
- Liu, Y.; Ma, J.; Zang, S.; Min, Y. Dynamic path planning of mobile robot based on improved ant colony optimization algorithm. In Proceedings of the 2019 8th International Conference on Networks, Communication and Computing, Luoyang, China, 13–15 December 2019; pp. 248–252. [Google Scholar]
- Song, Q.; Li, S.; Yang, J.; Bai, Q.; Hu, J.; Zhang, X.; Zhang, A. Intelligent Optimization Algorithm-Based Path Planning for a Mobile Robot. Comput. Intell. Neurosci. 2021, 2021, 8025730. [Google Scholar] [CrossRef]
- Luo, Q.; Wang, H.; Zheng, Y.; He, J. Research on path planning of mobile robot based on improved ant colony algorithm. Neural Comput. Appl. 2020, 32, 1555–1566. [Google Scholar] [CrossRef]
- Sangeetha, V.; Krishankumar, R.; Ravichandran, K.; Kar, S. Energy-efficient green ant colony optimization for path planning in dynamic 3D environments. Soft Comput. 2021, 25, 4749–4769. [Google Scholar] [CrossRef]
- Miao, C.; Chen, G.; Yan, C.; Wu, Y. Path planning optimization of indoor mobile robot based on adaptive ant colony algorithm. Comput. Ind. Eng. 2021, 156, 107230. [Google Scholar] [CrossRef]
- Hou, W.; Xiong, Z.; Wang, C.; Chen, H. Enhanced ant colony algorithm with communication mechanism for mobile robot path planning. Robot. Auton. Syst. 2022, 148, 103949. [Google Scholar] [CrossRef]
- Gong, C.; Yang, Y.; Yuan, L.; Wang, J. An improved ant colony algorithm for integrating global path planning and local obstacle avoidance for mobile robot in dynamic environment. Math. Biosci. Eng. 2022, 19, 12405–12426. [Google Scholar] [CrossRef] [PubMed]
- Yang, L.; Fu, L.; Li, P.; Mao, J.; Guo, N. An Effective Dynamic Path Planning Approach for Mobile Robots Based on Ant Colony Fusion Dynamic Windows. Machines 2022, 10, 50. [Google Scholar] [CrossRef]
- Wang, Q.; Li, J.; Yang, L.; Yang, Z.; Li, P.; Xia, G. Distributed Multi-Mobile Robot Path Planning and Obstacle Avoidance Based on ACO–DWA in Unknown Complex Terrain. Electronics 2022, 11, 2144. [Google Scholar] [CrossRef]
- Huang, H.; Tan, G.; Jiang, L. Robot Path Planning Using Improved Ant Colony Algorithm in the Environment of Internet of Things. J. Robot. 2022, 2022, 1739884. [Google Scholar] [CrossRef]
- Gajendra, K.; Thivagar, K.; Karna, V.V.R. Path Planning and Trajectory Control of Autonomous Robot Using Metaheuristic Algorithms. In Proceedings of the International Conference on Algorithms and Computational Theory for Engineering Applications, Online, 2–3 February 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 285–292. [Google Scholar]
- Ma, J.; Liu, Q.; Yang, Z.; Wang, B. Improved Trimming Ant Colony Optimization Algorithm for Mobile Robot Path Planning. Algorithms 2025, 18, 240. [Google Scholar] [CrossRef]
- Si, J.; Bao, X. A novel parallel ant colony optimization algorithm for mobile robot path planning. Math. Biosci. Eng. 2024, 21, 2568–2586. [Google Scholar] [CrossRef]
- Li, P.; Wei, L.; Wu, D. An Intelligently Enhanced Ant Colony Optimization Algorithm for Global Path Planning of Mobile Robots in Engineering Applications. Sensors 2025, 25, 1326. [Google Scholar] [CrossRef] [PubMed]
- Liu, J.; Qian, Y.; Zhang, W.; Ji, M.; Xv, Q.; Song, H. High-safety path optimization for mobile robots using an improved ant colony algorithm with integrated repulsive field rules. Robot. Auton. Syst. 2025, 190, 104998. [Google Scholar] [CrossRef]
- Nor Azmi, S.N.L.K.; Rafique, M.; Anwar Apandi, N.I.; Md Noar, N.A.Z. Investigation of Autonomous Mobile Robot Path Planning with Edge Cloud Based on Ant Colony Optimization. In Proceedings of the Smart and Sustainable Industrial Ecosystem Conference, Kuala Lumpur, Malaysia, 5–6 August 2025; Springer Nature: Singapore, 2025; pp. 51–56. [Google Scholar]
- Li, Q.; Li, Q.; Cui, B. Enhanced Ant Colony Algorithm Based on Islands for Mobile Robot Path Planning. Appl. Sci. 2025, 15, 7023. [Google Scholar] [CrossRef]
- Yang, X.S.; He, X. Firefly algorithm: Recent advances and applications. arXiv 2013, arXiv:1308.3898. [Google Scholar] [CrossRef]
- Banerjee, A.; Singh, D.; Sahana, S.; Nath, I. Chapter 3—Impacts of metaheuristic and swarm intelligence approach in optimization. In Cognitive Big Data Intelligence with a Metaheuristic Approach; Mishra, S., Tripathy, H.K., Mallick, P.K., Sangaiah, A.K., Chae, G.S., Eds.; Cognitive Data Science in Sustainable Computing; Academic Press: Cambridge, MA, USA, 2022; pp. 71–99. [Google Scholar] [CrossRef]
- Li, J.; Wei, X.; Li, B.; Zeng, Z. A survey on firefly algorithms. Neurocomputing 2022, 500, 662–678. [Google Scholar] [CrossRef]
- Nayak, J.; Naik, B.; Dinesh, P.; Vakula, K.; Dash, P.B. Firefly algorithm in biomedical and health care: Advances, issues and challenges. SN Comput. Sci. 2020, 1, 311. [Google Scholar] [CrossRef]
- Karur, K.; Sharma, N.; Dharmatti, C.; Siegel, J.E. A survey of path planning algorithms for mobile robots. Vehicles 2021, 3, 448–468. [Google Scholar] [CrossRef]
- Duan, P.; Li, J.; Sang, H.; Han, Y.; Sun, Q. A Developed Firefly Algorithm for Multi-Objective Path Planning Optimization Problem. In Proceedings of the 2018 IEEE 8th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Tianjin, China, 19–23 July 2018; IEEE: New York, NY, USA, 2018; pp. 1393–1397. [Google Scholar]
- Patle, B.; Pandey, A.; Jagadeesh, A.; Parhi, D.R. Path planning in uncertain environment by using firefly algorithm. Def. Technol. 2018, 14, 691–701. [Google Scholar] [CrossRef]
- Fu, H.; Liu, X. A Path Planning Method for Mobile Robots Based on Fuzzy Firefly Algorithms. Recent Adv. Comput. Sci. Commun. 2021, 14, 3040–3045. [Google Scholar] [CrossRef]
- Abbas, N.A.F. Mobile Robot Path Planning Optimization Based on Integration of Firefly Algorithm and Quadratic Polynomial Equation. In Proceedings of the International Conference on Intelligent Systems & Networks, Hanoi, Vietnam, 9 March 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 538–547. [Google Scholar]
- Zhang, T.W.; Xu, G.H.; Zhan, X.S.; Han, T. A new hybrid algorithm for path planning of mobile robot. J. Supercomput. 2022, 78, 4158–4181. [Google Scholar] [CrossRef]
- Patle, B.; Pagar, N.; Parhi, D.; Sanap, S. Hybrid FA-GA Controller for Path Planning of Mobile Robot. In Proceedings of the 2022 International Conference on Intelligent Controller and Computing for Smart Power (ICICCSP), Hyderabad, India, 21–23 July 2022; IEEE: New York, NY, USA, 2022; pp. 1–6. [Google Scholar]
- Ab Wahab, M.N.; Nazir, A.; Khalil, A.; Bhatt, B.; Noor, M.H.M.; Akbar, M.F.; Mohamed, A.S.A. Optimised path planning using Enhanced Firefly Algorithm for a mobile robot. PLoS ONE 2024, 19, e0308264. [Google Scholar] [CrossRef] [PubMed]
- Patle, B.; Patel, A.J.; Kashyap, S.K. Self-Directed Mobile Robot Navigation Based on Functional Firefly Algorithm and Choice Function. Eng 2023, 4, 2656–2681. [Google Scholar] [CrossRef]
- Achouri, M.; Zennir, Y. Path planning and tracking of wheeled mobile robot: Using firefly algorithm and kinematic controller combined with sliding mode control. J. Braz. Soc. Mech. Sci. Eng. 2024, 46, 228. [Google Scholar] [CrossRef]
- Muhsen, D.K.; Raheem, F.A.; Yusof, Y.; Sadiq, A.T.; Al Alawy, F. Improved Rapidly-Exploring Random Tree using Firefly Algorithm for Robot Path Planning. J. Soft Comput. Comput. Appl. 2024, 1, 1. [Google Scholar] [CrossRef]
- Sood, M.; Panchal, V.K. Optimal path planning with hybrid firefly algorithm and cuckoo search optimisation. Int. J. Adv. Intell. Paradig. 2024, 27, 223–248. [Google Scholar] [CrossRef]
- Tian, T.; Liang, Z.; Wei, Y.; Luo, Q.; Zhou, Y. Hybrid whale optimization with a firefly algorithm for function optimization and mobile robot path planning. Biomimetics 2024, 9, 39. [Google Scholar] [CrossRef]
- Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
- Wang, J.S.; Li, S.X. An improved grey wolf optimizer based on differential evolution and elimination mechanism. Sci. Rep. 2019, 9, 7181. [Google Scholar] [CrossRef]
- Qu, C.; Gai, W.; Zhang, J.; Zhong, M. A novel hybrid grey wolf optimizer algorithm for unmanned aerial vehicle (UAV) path planning. Knowl.-Based Syst. 2020, 194, 105530. [Google Scholar] [CrossRef]
- Jamshidi, V.; Nekoukar, V.; Refan, M.H. Real time UAV path planning by parallel grey wolf optimization with align coefficient on CAN bus. Clust. Comput. 2021, 24, 2495–2509. [Google Scholar] [CrossRef]
- Seyyedabbasi, A.; Kiani, F. I-GWO and Ex-GWO: Improved algorithms of the Grey Wolf Optimizer to solve global optimization problems. Eng. Comput. 2021, 37, 509–532. [Google Scholar] [CrossRef]
- Kiani, F.; Seyyedabbasi, A.; Nematzadeh, S.; Candan, F.; Çevik, T.; Anka, F.A.; Randazzo, G.; Lanza, S.; Muzirafuti, A. Adaptive metaheuristic-based methods for autonomous robot path planning: Sustainable agricultural applications. Appl. Sci. 2022, 12, 943. [Google Scholar] [CrossRef]
- Zhou, M.; Wang, Z.; Wang, J.; Dong, Z. A Hybrid Path Planning and Formation Control Strategy of Multi-Robots in a Dynamic Environment. J. Adv. Comput. Intell. Intell. Inform. 2022, 26, 342–354. [Google Scholar] [CrossRef]
- Li, H.; Lv, T.; Shui, Y.; Zhang, J.; Zhang, H.; Zhao, H.; Ma, S. An improved grey wolf optimizer with weighting functions and its application to unmanned aerial vehicles path planning. Comput. Electr. Eng. 2023, 111, 108893. [Google Scholar] [CrossRef]
- Liu, L.; Li, L.; Nian, H.; Lu, Y.; Zhao, H.; Chen, Y. Enhanced grey wolf optimization algorithm for mobile robot path planning. Electronics 2023, 12, 4026. [Google Scholar] [CrossRef]
- Chen, W.; Liu, L.; Zhang, L.; Lin, Z.; Chen, J.; He, D. Path Planning of Mobile Robots with an Improved Grey Wolf Optimizer and Dynamic Window Approach. Appl. Sci. 2025, 15, 3999. [Google Scholar] [CrossRef]
- Teng, Z.; Dong, Q.; Zhang, Z.; Huang, S.; Zhang, W.; Wang, J.; Li, J.; Chen, X. An Improved Grey Wolf Optimizer Inspired by Advanced Cooperative Predation for UAV Shortest Path Planning. arXiv 2025, arXiv:2506.03663. [Google Scholar] [CrossRef]
- Ajeil, F.H.; Ibraheem, I.K.; Humaidi, A.J.; Khan, Z.H. A novel path planning algorithm for mobile robot in dynamic environments using modified bat swarm optimization. J. Eng. 2021, 2021, 37–48. [Google Scholar] [CrossRef]
- Abed, M.S.; Lutfy, O.F.; Al-Doori, Q.F. Online Path Planning of Mobile Robots Based on African Vultures Optimization Algorithm in Unknown Environments. J. Eur. Syst. Autom. 2022, 55, 405–412. [Google Scholar] [CrossRef]
- Reguii, I.; Hassani, I.; Rekik, C. Mobile Robot Navigation Using Planning Algorithm and Sliding Mode Control in a Cluttered Environment. J. Robot. Control 2022, 3, 166–175. [Google Scholar] [CrossRef]
- Abdulsaheb, J.A.; Kadhim, D.J. Robot Path Planning in Unknown Environments with Multi-Objectives Using an Improved COOT Optimization Algorithm. Int. J. Intell. Eng. Syst. 2022, 15, 548–565. [Google Scholar] [CrossRef]
- Loganathan, A.; Ahmad, N.S. A hybrid HHO-AVOA for path planning of a differential wheeled mobile robot in static and dynamic environments. IEEE Access 2024, 12, 25967–25979. [Google Scholar] [CrossRef]
- Das, P.; Parhi, D.R.; Mahapatro, A.; Dash, H.S.; Prakash, V. Navigational Analysis of Legged Robot Using the Modified African Vulture Optimization Algorithm. Res. Sq. 2025, preprint. [Google Scholar] [CrossRef]
- Zhu, W.; Kuang, X.; Jiang, H. Unmanned Aerial Vehicle Path Planning Based on Sparrow-Enhanced African Vulture Optimization Algorithm. Appl. Sci. 2025, 15, 8461. [Google Scholar] [CrossRef]
- Deshpande, S.; Kashyap, A.K.; Patle, B.K. A review on path planning AI techniques for mobile robots. Robot. Syst. Appl. 2023, 3, 27–46. [Google Scholar] [CrossRef]
- Sharma, U.; Rani, S.; Anand, A. Path Planning and Navigation in Autonomous Mobile Robots (AMRs): A Review. In Proceedings of the International Conference on Artificial-Business Analytics, Quantum and Machine Learning, Faridabad, India, 14–15 July 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 517–529. [Google Scholar]
- Zhang, Z.; Fu, H.; Yang, J.; Lin, Y. Deep reinforcement learning for path planning of autonomous mobile robots in complicated environments. Complex Intell. Syst. 2025, 11, 277. [Google Scholar] [CrossRef]
- Wang, Y.; Li, X.; Zhang, J.; Li, S.; Xu, Z.; Zhou, X. Review of wheeled mobile robot collision avoidance under unknown environment. Sci. Prog. 2021, 104, 00368504211037771. [Google Scholar] [CrossRef] [PubMed]
- Boujelben, M.S.; Rekik, C.; Derbel, N. A reactive approach for mobile robot navigation in static and dynamic environment using fuzzy logic control. Int. J. Model. Identif. Control 2017, 27, 293–302. [Google Scholar] [CrossRef]
- Singh, N.H.; Thongam, K. Mobile robot navigation using fuzzy logic in static environments. Procedia Comput. Sci. 2018, 125, 11–17. [Google Scholar] [CrossRef]
- Wang, C.; Piao, Z.; Li, L.; Sun, W.; Wang, J.; Li, C. Path planning for mobile robot using fuzzy controllers with artificial potential field. In Proceedings of the 2019 IEEE International Conference on Unmanned Systems (ICUS), Beijing, China, 17–19 October 2019; IEEE: New York, NY, USA, 2019; pp. 391–396. [Google Scholar]
- Dirik, M. Collision-free mobile robot navigation using fuzzy logic approach. Int. J. Comput. Appl. 2018, 179, 33–39. [Google Scholar] [CrossRef]
- Kim, C.; Kim, Y.; Yi, H. Fuzzy analytic hierarchy process-based mobile robot path planning. Electronics 2020, 9, 290. [Google Scholar] [CrossRef]
- Kim, C.; Suh, J.; Han, J.H. Development of a hybrid path planning algorithm and a bio-inspired control for an omni-wheel mobile robot. Sensors 2020, 20, 4258. [Google Scholar] [CrossRef]
- Tao, Y.; Gao, H.; Ren, F.; Chen, C.; Wang, T.; Xiong, H.; Jiang, S. A mobile service robot global path planning method based on ant colony optimization and fuzzy control. Appl. Sci. 2021, 11, 3605. [Google Scholar] [CrossRef]
- Lin, Z.; Yue, M.; Chen, G.; Sun, J. Path planning of mobile robot with PSO-based APF and fuzzy-based DWA subject to moving obstacles. Trans. Inst. Meas. Control 2022, 44, 121–132. [Google Scholar] [CrossRef]
- Sathiya, V.; Chinnadurai, M.; Ramabalan, S. Mobile robot path planning using fuzzy enhanced improved Multi-Objective particle swarm optimization (FIMOPSO). Expert Syst. Appl. 2022, 198, 116875. [Google Scholar] [CrossRef]
- Zhu, Y.; Lu, T. A Fuzzy-Based Improved Dynamic Window Approach for Path Planning of Mobile Robot. In Proceedings of the International Conference on Intelligent Robotics and Applications, Hangzhou, China, 5–7 July 2023; Springer: Berlin/Heidelberg, Germany, 2023; Volume 14270, pp. 586–597. [Google Scholar]
- Sun, Y.; Wang, W.; Xu, M.; Huang, L.; Shi, K.; Zou, C.; Chen, B. Local path planning for mobile robots based on fuzzy dynamic window algorithm. Sensors 2023, 23, 8260. [Google Scholar] [CrossRef] [PubMed]
- Kumar, A.; Sahasrabudhe, A.; Nirgude, S. Fuzzy Logic Control for Indoor Navigation of Mobile Robots. arXiv 2024, arXiv:2409.02437. [Google Scholar] [CrossRef]
- Hentout, A.; Maoudj, A.; Kouider, A. Shortest path planning and efficient fuzzy logic control of mobile robots in indoor static and dynamic environments. Sci. Technol. 2024, 27, 21–36. [Google Scholar] [CrossRef]
- Nguyen Minh, H.; Trinh An, H.; Tran Thi Cam, G.; Dinh Thi Ha, L.; Do Quoc, H. Fuzzy Logic and Quadtree-Based Control for Mobile Robots in Dynamic Environments. In Proceedings of the International Conference on Intelligent Robotics and Applications, Okayama, Japan, 6–9 August 2025; Springer: Berlin/Heidelberg, Germany, 2025; Volume 15205, pp. 429–445. [Google Scholar]
- Hu, L.; Wei, C.; Yin, L. Fuzzy A* quantum multi-stage Q-learning artificial potential field for path planning of mobile robots. Eng. Appl. Artif. Intell. 2025, 141, 109866. [Google Scholar] [CrossRef]
- Katona, K.; Neamah, H.A.; Korondi, P. Obstacle Avoidance and Path Planning Methods for Autonomous Navigation of Mobile Robot. Sensors 2024, 24, 3573. [Google Scholar] [CrossRef]
- Peng, G.; Yang, C.; He, W.; Chen, C.P. Force sensorless admittance control with neural learning for robots with actuator saturation. IEEE Trans. Ind. Electron. 2019, 67, 3138–3148. [Google Scholar] [CrossRef]
- Khnissi, K.; Seddik, C.; Seddik, H. Smart navigation of mobile robot using neural network controller. In Proceedings of the 2018 International Conference on Smart Communications in Network Technologies (SaCoNeT), El Oued, Algeria, 27–31 October 2018; IEEE: New York, NY, USA, 2018; pp. 205–210. [Google Scholar]
- Dewi, T.; Risma, P.; Oktarina, Y.; Nawawi, M. Neural network simulation for obstacle avoidance and wall follower robot as a helping tool for teaching-learning process in classroom. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2018; Volume 403, p. 012043. [Google Scholar]
- Yuan, J.; Wang, H.; Lin, C.; Liu, D.; Yu, D. A novel GRU-RNN network model for dynamic path planning of mobile robot. IEEE Access 2019, 7, 15140–15151. [Google Scholar] [CrossRef]
- Zheng, W.; Wang, H.B.; Zhang, Z.M.; Li, N.; Yin, P.H. Multi-layer feed-forward neural network deep learning control with hybrid position and virtual-force algorithm for mobile robot obstacle avoidance. Int. J. Control. Autom. Syst. 2019, 17, 1007–1018. [Google Scholar] [CrossRef]
- Zhu, Q.; Han, Y.; Liu, P.; Xiao, Y.; Lu, P.; Cai, C. Motion planning of autonomous mobile robot using recurrent fuzzy neural network trained by extended Kalman filter. Comput. Intell. Neurosci. 2019, 2019, 1934575. [Google Scholar] [CrossRef] [PubMed]
- Pandey, K.K.; Parhi, D.R. Trajectory planning and the target search by the mobile robot in an environment using a behavior-based neural network approach. Robotica 2020, 38, 1627–1641. [Google Scholar] [CrossRef]
- Khnissi, K.; Jabeur, C.B.; Seddik, H. A smart mobile robot commands predictor using recursive neural network. Robot. Auton. Syst. 2020, 131, 103593. [Google Scholar] [CrossRef]
- Wang, J.; Chi, W.; Li, C.; Wang, C.; Meng, M.Q.H. Neural RRT*: Learning-Based Optimal Path Planning. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1748–1758. [Google Scholar] [CrossRef]
- Qureshi, A.H.; Miao, Y.; Simeonov, A.; Yip, M.C. Motion Planning Networks: Bridging the Gap Between Learning-Based and Classical Motion Planners. IEEE Trans. Robot. 2021, 37, 48–66. [Google Scholar] [CrossRef]
- Ren, Z.; Lai, J.; Wu, Z.; Xie, S. Deep neural networks-based real-time optimal navigation for an automatic guided vehicle with static and dynamic obstacles. Neurocomputing 2021, 443, 329–344. [Google Scholar] [CrossRef]
- Farag, K.K.A.; Shehata, H.H.; El-Batsh, H.M. Mobile robot obstacle avoidance based on neural network with a standardization technique. J. Robot. 2021, 2021, 1129872. [Google Scholar] [CrossRef]
- Molina-Leal, A.; Gómez-Espinosa, A.; Escobedo Cabello, J.A.; Cuan-Urquizo, E.; Cruz-Ramírez, S.R. Trajectory Planning for a Mobile Robot in a Dynamic Environment Using an LSTM Neural Network. Appl. Sci. 2021, 11, 10689. [Google Scholar] [CrossRef]
- Chen, Y.; Cheng, C.; Zhang, Y.; Li, X.; Sun, L. A neural network-based navigation approach for autonomous mobile robot systems. Appl. Sci. 2022, 12, 7796. [Google Scholar] [CrossRef]
- Guan, L.; Lu, Y.; He, Z.; Chen, X. Intelligent Obstacle Avoidance Algorithm for Mobile Robots in Uncertain Environment. J. Robot. 2022, 2022, 8954060. [Google Scholar] [CrossRef]
- Li, J.; Wang, S.; Chen, Z.; Kan, Z.; Yu, J. Lightweight neural path planning. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; IEEE: New York, NY, USA, 2023; pp. 6713–6718. [Google Scholar]
- Galarza-Falfan, J.; García-Guerrero, E.E.; Aguirre-Castro, O.A.; López-Bonilla, O.R.; Tamayo-Pérez, U.J.; Cárdenas-Valdez, J.R.; Hernández-Mejía, C.; Borrego-Dominguez, S.; Inzunza-Gonzalez, E. Path Planning for Autonomous Mobile Robot Using Intelligent Algorithms. Technologies 2024, 12, 82. [Google Scholar] [CrossRef]
- Zhang, J.; Chen, H.; Sun, H.; Xu, H.; Yan, T. Convolutional neural network-based deep Q-network (CNN-DQN) path planning method for mobile robots. Intell. Serv. Robot. 2025, 18, 929–950. [Google Scholar] [CrossRef]
- Teng, Y.; Feng, T.; Li, J.; Chen, S.; Tang, X. A Dual-Layer Symmetric Multi-Robot Path Planning System Based on an Improved Neural Network-DWA Algorithm. Symmetry 2025, 17, 85. [Google Scholar] [CrossRef]
- Sahoo, B.; Das, D.; Pujhari, K.C.; Vikas. Optimization of route planning for the mobile robot using a hybrid Neuro-IWO technique. Int. J. Inf. Technol. 2025, 17, 1431–1439. [Google Scholar] [CrossRef]
- Chen, S.; Feng, T.; Li, J.; Yang, S.X. Research on Intelligent Path Planning of Mobile Robot Based on Hybrid Symmetric Bio-Inspired Neural Network Algorithm in Complex Road Environments. Symmetry 2025, 17, 836. [Google Scholar] [CrossRef]
- Huang, H.; Yang, C.; Chen, C.P. Optimal robot–environment interaction under broad fuzzy neural adaptive control. IEEE Trans. Cybern. 2020, 51, 3824–3835. [Google Scholar] [CrossRef]
- Le, M. Robust Deep Neural Networks Inspired by Fuzzy Logic. arXiv 2019, arXiv:1911.08635. [Google Scholar]
- Zhang, S.; Sakulyeva, T.; Pitukhin, E.; Doguchaeva, S. Neuro-fuzzy and soft computing-A computational approach to learning and artificial intelligence. Int. Rev. Autom. Control 2020, 13, 191–199. [Google Scholar] [CrossRef]
- Brahimi, S.; Azouaoui, O.; Loudini, M. Intelligent mobile robot navigation using a neuro-fuzzy approach. Int. J. Comput. Aided Eng. Technol. 2019, 11, 710–726. [Google Scholar] [CrossRef]
- Pandey, A.; Kashyap, A.K.; Parhi, D.R.; Patle, B. Autonomous mobile robot navigation between static and dynamic obstacles using multiple ANFIS architecture. World J. Eng. 2019, 16, 275–286. [Google Scholar] [CrossRef]
- Gharajeh, M.S.; Jond, H.B. Hybrid global positioning system-adaptive neuro-fuzzy inference system based autonomous mobile robot navigation. Robot. Auton. Syst. 2020, 134, 103669. [Google Scholar] [CrossRef]
- Batti, H.; Ben Jabeur, C.; Seddik, H. Autonomous smart robot for path predicting and finding in maze based on fuzzy and neuro-Fuzzy approaches. Asian J. Control 2021, 23, 3–12. [Google Scholar] [CrossRef]
- Gharajeh, M.S.; Jond, H.B. An intelligent approach for autonomous mobile robots path planning based on adaptive neuro-fuzzy inference system. Ain Shams Eng. J. 2022, 13, 101491. [Google Scholar] [CrossRef]
- Haider, M.H.; Wang, Z.; Khan, A.A.; Ali, H.; Zheng, H.; Usman, S.; Kumar, R.; Bhutta, M.U.M.; Zhi, P. Robust mobile robot navigation in cluttered environments based on hybrid adaptive neuro-fuzzy inference and sensor fusion. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 9060–9070. [Google Scholar] [CrossRef]
- Mishra, D.K.; Thomas, A.; Kuruvilla, J.; Kalyanasundaram, P.; Prasad, K.R.; Haldorai, A. Design of mobile robot navigation controller using neuro-fuzzy logic system. Comput. Electr. Eng. 2022, 101, 108044. [Google Scholar] [CrossRef]
- Ayub, S.; Singh, N.; Hussain, M.Z.; Ashraf, M.; Singh, D.K.; Haldorai, A. Hybrid approach to implement multi-robotic navigation system using neural network, fuzzy logic, and bio-inspired optimization methodologies. Comput. Intell. 2023, 39, 592–606. [Google Scholar] [CrossRef]
- Stavrinidis, S.; Zacharia, P. An ANFIS-Based Strategy for Autonomous Robot Collision-Free Navigation in Dynamic Environments. Robotics 2024, 13, 124. [Google Scholar] [CrossRef]
- Mohanty, P. Path Planning of Mobile Robots Under Uncertain Navigation Environments Using FCM Clustering ANFIS. Wirel. Pers. Commun. 2024, 137, 1251–1276. [Google Scholar] [CrossRef]
- Hilali, B.; Ramdani, M.; Naji, A. An efficient strategy for optimizing a neuro-fuzzy controller for mobile robot navigation. Int. J. Electr. Comput. Eng. 2025, 15, 1065–1078. [Google Scholar] [CrossRef]
- Mostafanasab, A.; Menhaj, M.B.; Shamshirsaz, M.; Fesharakifard, R. A novel mobile robot path planning method based on neuro-fuzzy controller. AUT J. Math. Comput. 2025, 6, 41–53. [Google Scholar]
- Machavaram, R. Intelligent path planning for autonomous ground vehicles in dynamic environments utilizing adaptive Neuro-Fuzzy control. Eng. Appl. Artif. Intell. 2025, 144, 110119. [Google Scholar] [CrossRef]
- Saleh, M.S.; Al Mashhadany, Y.I.; Alshaibi, M.; Ameen, F.M.; Algburi, S. Optimal Mobile Robot Navigation for Obstacle Avoidance Based on ANFIS Controller. J. Robot. Control 2025, 6, 484–492. [Google Scholar] [CrossRef]
- Albrecht, S.V.; Christianos, F.; Schäfer, L. Multi-Agent Reinforcement Learning: Foundations and Modern Approaches; MIT Press: Cambridge, MA, USA, 2024. [Google Scholar]
- Singh, R.; Ren, J.; Lin, X. A review of deep reinforcement learning algorithms for mobile robot path planning. Vehicles 2023, 5, 1423–1451. [Google Scholar] [CrossRef]
- Mitchell, T. Reinforcement learning. In Machine Learning; McGraw-Hill: New York, NY, USA, 1997; pp. 367–390. [Google Scholar]
- Khlif, N.; Nahla, K.; Safya, B. Reinforcement learning with modified exploration strategy for mobile robot path planning. Robotica 2023, 41, 2688–2702. [Google Scholar] [CrossRef]
- Shi, Z.; Wang, K.; Zhang, J. Improved reinforcement learning path planning algorithm integrating prior knowledge. PLoS ONE 2023, 18, e0284942. [Google Scholar] [CrossRef] [PubMed]
- Zhang, L.; Hou, Z.; Wang, J.; Liu, Z.; Li, W. Robot navigation with reinforcement learned path generation and fine-tuned motion control. IEEE Robot. Autom. Lett. 2023, 8, 4489–4496. [Google Scholar] [CrossRef]
- Orozco-Rosas, U.; Picos, K.; Pantrigo, J.J.; Montemayor, A.S.; Cuesta-Infante, A. Mobile robot path planning using a QAPF learning algorithm for known and unknown environments. IEEE Access 2022, 10, 84648–84663. [Google Scholar] [CrossRef]
- Gong, H.; Wang, P.; Ni, C.; Cheng, N. Efficient path planning for mobile robot based on deep deterministic policy gradient. Sensors 2022, 22, 3579. [Google Scholar] [CrossRef] [PubMed]
- López-Lozada, E.; Rubio-Espino, E.; Sossa-Azuela, J.H.; Ponce-Ponce, V.H. Reactive navigation under a fuzzy rules-based scheme and reinforcement learning for mobile robots. PeerJ Comput. Sci. 2021, 7, e556. [Google Scholar] [CrossRef] [PubMed]
- Wang, B.; Liu, Z.; Li, Q.; Prorok, A. Mobile robot path planning in dynamic environments through globally guided reinforcement learning. IEEE Robot. Autom. Lett. 2020, 5, 6932–6939. [Google Scholar] [CrossRef]
- Wen, S.; Wen, Z.; Zhang, D.; Zhang, H.; Wang, T. A multi-robot path-planning algorithm for autonomous navigation using meta-reinforcement learning based on transfer learning. Appl. Soft Comput. 2021, 110, 107605. [Google Scholar] [CrossRef]
- Bae, H.; Kim, G.; Kim, J.; Qian, D.; Lee, S. Multi-robot path planning method using reinforcement learning. Appl. Sci. 2019, 9, 3057. [Google Scholar] [CrossRef]
- Chen, Y.J.; Jhong, B.G.; Chen, M.Y. A Real-Time Path Planning Algorithm Based on the Markov Decision Process in a Dynamic Environment for Wheeled Mobile Robots. Actuators 2023, 12, 166. [Google Scholar] [CrossRef]
- Huang, J.; Zhang, Z.; Ruan, X. An Improved Dyna-Q Algorithm Inspired by the Forward Prediction Mechanism in the Rat Brain for Mobile Robot Path Planning. Biomimetics 2024, 9, 315. [Google Scholar] [CrossRef]
- Zhou, Q.; Lian, Y.; Wu, J.; Zhu, M.; Wang, H.; Cao, J. An optimized Q-Learning algorithm for mobile robot local path planning. Knowl.-Based Syst. 2024, 286, 111400. [Google Scholar] [CrossRef]
- Zhang, Y.; Cui, C.; Zhao, Q. Path Planning of Mobile Robot Based on A Star Algorithm Combining DQN and DWA in Complex Environment. Appl. Sci. 2025, 15, 4367. [Google Scholar] [CrossRef]
- Nguyen, A.T.; Pham, D.D.; Le, V.N.; Luu, V.H. Design a path–planning strategy for mobile robot in multi-structured environment based on distributional reinforcement learning. MethodsX 2025, 15, 103554. [Google Scholar] [CrossRef]
- Zhang, Y.; Liu, Y.; Chen, Y.; Yang, Z. ARE-QL: An enhanced Q-learning algorithm with optimized search for mobile robot path planning. Phys. Scr. 2025, 100, 036015. [Google Scholar] [CrossRef]
- Kober, J.; Bagnell, J.A.; Peters, J. Reinforcement learning in robotics: A survey. Int. J. Robot. Res. 2013, 32, 1238–1274. [Google Scholar] [CrossRef]
- Tang, Y.; Zhao, C.; Wang, J.; Zhang, C.; Sun, Q.; Zheng, W.X.; Du, W.; Qian, F.; Kurths, J. Perception and navigation in autonomous systems in the era of learning: A survey. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 9604–9624. [Google Scholar] [CrossRef] [PubMed]
- Zhao, Y.; Zhang, Y.; Wang, S. A Review of Mobile Robot Path Planning Based on Deep Reinforcement Learning Algorithm. J. Phys. Conf. Ser. 2021, 2138, 012011. [Google Scholar] [CrossRef]
- Hoseinnezhad, R. A comprehensive review of deep learning techniques in mobile robot path planning: Categorization and analysis. Appl. Sci. 2025, 15, 2179. [Google Scholar] [CrossRef]
- Sun, H.; Zhang, W.; Yu, R.; Zhang, Y. Motion planning for mobile robots—Focusing on deep reinforcement learning: A systematic review. IEEE Access 2021, 9, 69061–69081. [Google Scholar] [CrossRef]
- Nguyen, M.; Dubay, R. Application of Deep Learning in Autonomous Mobile Robot Control: An Overview. In Proceedings of the 2025 IEEE International Systems Conference (SysCon), Montreal, QC, Canada, 7–10 April 2025; IEEE: New York, NY, USA, 2025; pp. 1–8. [Google Scholar]
- Niroui, F.; Zhang, K.; Kashino, Z.; Nejat, G. Deep reinforcement learning robot for search and rescue applications: Exploration in unknown cluttered environments. IEEE Robot. Autom. Lett. 2019, 4, 610–617. [Google Scholar] [CrossRef]
- Gao, J.; Ye, W.; Guo, J.; Li, Z. Deep reinforcement learning for indoor mobile robot path planning. Sensors 2020, 20, 5493. [Google Scholar] [CrossRef]
- Xiaoxian, S.; Chenpeng, Y.; Haoran, Z.; Chengju, L.; Qijun, C. Obstacle Avoidance Algorithm for Mobile Robot Based on Deep Reinforcement Learning in Dynamic Environments. In Proceedings of the 2020 IEEE 16th International Conference on Control & Automation (ICCA), Singapore, 9–11 October 2020; IEEE: New York, NY, USA, 2020; pp. 366–372. [Google Scholar]
- Gao, X.; Yan, L.; Wang, G.; Wang, T.; Du, N.; Gerada, C. Toward Obstacle Avoidance for Mobile Robots Using Deep Reinforcement Learning Algorithm. In Proceedings of the 2021 IEEE 16th Conference on Industrial Electronics and Applications (ICIEA), Chengdu, China, 1–4 August 2021; IEEE: New York, NY, USA, 2021; pp. 2136–2139. [Google Scholar]
- Chen, G.; Pan, L.; Chen, Y.; Xu, P.; Wang, Z.; Wu, P.; Ji, J.; Chen, X. Deep Reinforcement Learning of Map-Based Obstacle Avoidance for Mobile Robot Navigation. SN Comput. Sci. 2021, 2, 417. [Google Scholar] [CrossRef]
- Yu, X.; Wang, P.; Zhang, Z. Learning-based end-to-end path planning for lunar rovers with safety constraints. Sensors 2021, 21, 796. [Google Scholar] [CrossRef]
- Samma, H.; Abubaker, A.; Aremu, M.B.; Abdel-Nasser, M.; El-Ferik, S. Fusion of Visual Attention and Scene Descriptions with Deep Reinforcement Learning for UAV Indoor Autonomous Navigation. IEEE Access 2025, 13, 81298–81311. [Google Scholar] [CrossRef]
- Wang, W.; Wu, Z.; Luo, H.; Zhang, B. Path planning method of mobile robot using improved deep reinforcement learning. J. Electr. Comput. Eng. 2022, 2022, 5433988. [Google Scholar] [CrossRef]
- Tang, Y.; Chen, Q.; Wei, Y. Robot Obstacle Avoidance Controller Based on Deep Reinforcement Learning. J. Sens. 2022, 2022, 4194747. [Google Scholar] [CrossRef]
- Chai, R.; Niu, H.; Carrasco, J.; Arvin, F.; Yin, H.; Lennox, B. Design and experimental validation of deep reinforcement learning-based fast trajectory planning and control for mobile robot in unknown environment. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 5778–5792. [Google Scholar] [CrossRef]
- Ren, J.; Huang, X.; Huang, R.N. Efficient Deep Reinforcement Learning for Optimal Path Planning. Electronics 2022, 11, 3628. [Google Scholar] [CrossRef]
- Fu, H.; Wang, Q.; He, H. Path-Following Navigation in Crowds with Deep Reinforcement Learning. IEEE Internet Things J. 2024, 11, 20236–20245. [Google Scholar] [CrossRef]
- Yin, H.; Wang, C.; Yan, C.; Xiang, X.; Cai, B.; Wei, C. Deep Reinforcement Learning with Multi-Critic TD3 for Decentralized Multi-Robot Path Planning. IEEE Trans. Cogn. Dev. Syst. 2024, 16, 1233–1247. [Google Scholar] [CrossRef]
- Ni, J.; Gu, Y.; Tang, G.; Ke, C.; Gu, Y. Cooperative Coverage Path Planning for Multi-Mobile Robots Based on Improved K-Means Clustering and Deep Reinforcement Learning. Electronics 2024, 13, 944. [Google Scholar] [CrossRef]
- Han, H.; Wang, J.; Kuang, L.; Han, X.; Xue, H. Improved Robot Path Planning Method Based on Deep Reinforcement Learning. Sensors 2023, 23, 5622. [Google Scholar] [CrossRef] [PubMed]
- Liu, H.; Shen, Y.; Yu, S.; Gao, Z.; Wu, T. Deep reinforcement learning for mobile robot path planning. arXiv 2024, arXiv:2404.06974. [Google Scholar] [CrossRef]
- Yan, C.; Chen, G.; Li, Y.; Sun, F.; Wu, Y. Immune deep reinforcement learning-based path planning for mobile robot in unknown environment. Appl. Soft Comput. 2023, 145, 110601. [Google Scholar] [CrossRef]
- Zhang, J.; Zhao, H. Mobile Robot Path Planning Based on Improved Deep Reinforcement Learning Algorithm. In Proceedings of the 2024 4th International Conference on Neural Networks, Information and Communication (NNICE), Guangzhou, China, 19–21 January 2024; IEEE: New York, NY, USA, 2024; pp. 1758–1761. [Google Scholar]
- Tabakis, I.M.; Dasygenis, M. Deep Reinforcement Learning-Based Path Planning for Dynamic and Heterogeneous Environments. In Proceedings of the 2024 Panhellenic Conference on Electronics & Telecommunications (PACET), Thessaloniki, Greece, 28–29 March 2024; IEEE: New York, NY, USA, 2024; pp. 1–4. [Google Scholar]
- Vashisth, A.; Ruckin, J.; Magistri, F.; Stachniss, C.; Popovic, M. Deep Reinforcement Learning with Dynamic Graphs for Adaptive Informative Path Planning. IEEE Robot. Autom. Lett. 2024, 9, 7747–7754. [Google Scholar] [CrossRef]
- Ozdemir, K.; Tuncer, A. Navigation of autonomous mobile robots in dynamic unknown environments based on dueling double deep Q networks. Eng. Appl. Artif. Intell. 2025, 139, 109498. [Google Scholar] [CrossRef]
- Chakraborty, S.; Raghuvanshi, A.S. Adaptive Deep Reinforcement Learning Hybrid Neuro-Fuzzy Inference System Based Path Planning Algorithm for Mobile Robot. J. Field Robot. 2025, 42, 3425–3439. [Google Scholar] [CrossRef]
- Hai, X.; Zhu, Z.; Liu, Y.; Khong, A.W.; Wen, C. Resilient real-time decision-making for autonomous mobile robot path planning in complex dynamic environments. IEEE Trans. Ind. Electron. 2025, 72, 11551–11562. [Google Scholar] [CrossRef]
- Shah, D.; Osiński, B.; Ichter, B.; Levine, S. LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action. In Proceedings of the 6th Conference on Robot Learning (CoRL 2022), Auckland, New Zealand, 14–18 December 2022; Proceedings of Machine Learning Research (PMLR): Cambridge, MA, USA, 2023; Volume 205, pp. 492–504. [Google Scholar] [CrossRef]
- Ahn, M.; Brohan, A.; Brown, N.; Chebotar, Y.; Cortes, O.; David, B.; Zeng, A. Do As I Can, Not As I Say: Grounding Language in Robotic Affordances. In Proceedings of the Conference on Robot Learning (CoRL), Auckland, New Zealand, 14–18 December 2022. [Google Scholar] [CrossRef]
- Liang, J.; Huang, W.; Xia, F.; Xu, P.; Hausman, K.; Ichter, B.; Florence, P.; Zeng, A. Code as Policies: Language Model Programs for Embodied Control. arXiv 2022, arXiv:2209.07753. [Google Scholar] [CrossRef]
- Driess, D.; Xia, F.; Sajjadi, M.S.M.; Lynch, C.; Chowdhery, A.; Ichter, B.; Wahid, A.; Tompson, J.; Vuong, Q.; Yu, T.; et al. PaLM-E: An Embodied Multimodal Language Model. In Proceedings of the 40th International Conference on Machine Learning (ICML 2023), Honolulu, HI, USA, 23–29 July 2023; PMLR: Cambridge, MA, USA, 2023; Volume 202, pp. 8469–8488. [Google Scholar]
- Zitkovich, B.; Yu, T.; Xu, S.; Xu, P.; Xiao, T.; Xia, F.; Wu, J.; Wohlhart, P.; Welker, S.; Wahid, A.; et al. RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control. In Proceedings of the Conference on Robot Learning (CoRL 2023), Atlanta, GA, USA, 6–9 November 2023; PMLR: Cambridge, MA, USA, 2023; pp. 2165–2183. [Google Scholar]
- Gemini Robotics Team. Gemini Robotics 1.5: Pushing the Frontier of Generalist Robots with Advanced Embodied Reasoning, Thinking, and Motion Transfer. arXiv 2025, arXiv:2510.03342. [Google Scholar] [CrossRef]
- Zhang, L.; Cai, K.; Sun, Z.; Bing, Z.; Wang, C.; Figueredo, L.; Haddadin, S.; Knoll, A. Motion planning for robotics: A review for sampling-based planners. Biomim. Intell. Robot. 2025, 5, 100207. [Google Scholar] [CrossRef]
- Xiao, H.; Chen, C.; Zhang, G.; Chen, C.P. Reinforcement learning-driven dynamic obstacle avoidance for mobile robot trajectory tracking. Knowl.-Based Syst. 2024, 297, 111974. [Google Scholar] [CrossRef]

| Ref. | Year | Method | Robot/Platform/Environment | Key Outcome |
|---|---|---|---|---|
| [27] | 2018 | PSO for global path optimization | Mobile robot (simulation) | Achieved shortest obstacle-free route |
| [30] | 2018 | PSO with IR-based dynamic obstacle avoidance | Multiple mobile robots | Collision-free navigation for moving obstacles |
| [28] | 2018 | Crowding-radius adaptive PSO | Constrained mobile robot | Enhanced diversity and global optimum convergence |
| [31] | 2019 | Adaptive PSO + BSA hybrid with chaotic control | Mobile robot (simulation) | Improved global search and efficiency |
| [32] | 2019 | PSO-based autonomous navigation using sensors and LIDAR | TurtleBot (3D Gazebo) | Robust guidance and obstacle avoidance in complex terrain |
| [29] | 2020 | PSO on grid maps with convex obstacles | Mobile robot (simulation) | Generated shortest collision-free paths |
| [35] | 2020 | Hybrid frequency BA–PSO with obstacle detection | Omnidirectional mobile robot | Effective static and dynamic obstacle avoidance |
| [33] | 2021 | Hybrid chaotic PSO–GWO with adaptive inertia | Autonomous mobile robot | Enhanced convergence and avoidance of local minima |
| [34] | 2021 | PSO–GWO–LS hybrid planner | Autonomous robot (simulation) | Smoothed paths and improved feasibility |
| [37] | 2021 | Adaptive PSO with route-smoothing fitness | Wheeled mobile robot | Shortest smooth path, validated in simulation and experiment |
| [36] | 2021 | Enhanced PSO–SA with mutation | AMR in polygonal obstacle environment | Optimal paths with premature convergence avoidance |
| [38] | 2022 | Improved PSO–GWO hybrid | Wheeled mobile robot | Balanced global and local search, secure navigation |
| [39] | 2022 | Multi-objective PSO–BA–LS–ODA | AMR in dynamic environment | Optimized length, smoothness, and safety |
| [40] | 2023 | Prior-knowledge PSO (PKPSO) with quintic smoothing | Mobile robot | Shorter, smoother paths with adaptive velocity update |
| [41] | 2023 | Self-Evolving PSO (SEPSO) with auto hyperparameter tuning | AMR (dynamic settings) | Real-time performance, improved path efficiency |
| [42] | 2024 | Obstacle Kinematics Augmented Optimization (OkayPlan) | TurtleBot in VRX simulator | Real-time 125 Hz planning, safer trajectories |
| [43] | 2024 | Improved PSO, GWO, and ABC comparison study | Two-wheeled AMR (ROS 2 + LiDAR) | Experimental validation; PSO achieved superior static path efficiency |
| [45] | 2024 | Smooth PSO-IPF (SPSO-IPF) with kinematic constraints | Mobile robot (static/dynamic) | Reduced jerks, smoother paths, improved convergence |
| [44] | 2025 | Adaptive Random Fluctuation PSO + B-spline optimization | AMR (simulation) | Smooth curvature-constrained trajectories with local minima avoidance |
| [46] | 2025 | PSO-RRT* with B-spline smoothing | Raspberry Pi mobile robot | 37% faster planning, 20% fewer turns, shorter paths |

| Ref. | Year | Method | Robot/Platform/Environment | Key Outcome |
|---|---|---|---|---|
| [49] | 2017 | Reward-based dynamic GA path planner using probabilistic obstacle modeling | Dynamic mobile robot | Enhanced efficiency and obstacle avoidance in dynamic settings |
| [50] | 2018 | GA with enhanced crossover considering chromosome length variation | Mobile robot (simulation) | Reduced infeasible paths and improved convergence |
| [51] | 2018 | Genetic adaptive navigation control strategy integrated with GA | Wheeled mobile robot | Robust obstacle avoidance and reliable motion generation |
| [52] | 2019 | Hybrid Artificial Potential Field (APF)–GA with collision mitigation operator | Mobile robot (grid environment) | High-quality, collision-free paths and better optimality |
| [53] | 2020 | Improved GA with adaptive operators for local optima avoidance | Mobile robot (static) | Faster convergence and shorter, smoother paths |
| [54] | 2020 | Hybrid A*–GA for complex map navigation | Autonomous mobile robot | Efficient and stable trajectory generation |
| [55] | 2021 | Sensor-based adaptive GA for real-time path planning | AMR with onboard sensors | Reduced path length and faster convergence |
| [57] | 2021 | Multi-objective GA with heuristic median initialization | Mobile robot (simulation) | Balanced global paths with multi-objective optimization |
| [58] | 2022 | Industrial GA-based planner for crowded environments | Industrial AMR | Optimal routes with reduced computation time |
| [59] | 2022 | Hybrid 2-way RRT with enhanced GA operators | Mobile robot (grid maps) | Smoother trajectories, improved population diversity |
| [60] | 2022 | Hybrid ACO–GA with adaptive parameters | Wheeled mobile robot | Shorter, smoother, and fewer-turn paths |
| [61] | 2022 | GA with path-level fitness and dynamic mutation | Dynamic mobile robot | Real-time feasible path planning under moving obstacles |
| [62] | 2024 | Improved GA with Clearance-Based Probabilistic Roadmap (CBPRM) | Mobile robot (simulation) | Safer paths and fewer infeasible routes in static 2D maps |
| [63] | 2024 | Comparative GA–ACO routing optimization study | AMR in warehouse environment | ACO achieved shorter distances and better scheduling |
| [64] | 2024 | Direction-guided GA with adaptive crossover/mutation | Grid-based AMR | Higher success rate and faster convergence |
| [65] | 2024 | Double-Domain Inversion (DDI)-based GA for improved local search | Mobile robot (simulation) | Faster convergence and superior local exploration |
| [66] | 2025 | Dichotomy-based GA with adaptive operators and Bézier smoothing | Mobile robot (simulation) | Improved smoothness, reduced length, and faster computation |
| [67] | 2025 | MHRTSN (GA + APF metaheuristic) hybrid for real-time navigation | Dynamic AMR (3D environment) | Safe, adaptive navigation; PSO variant achieved faster processing |

| Ref. | Year | Method | Robot/Platform/Environment | Key Outcome |
|---|---|---|---|---|
| [72] | 2018 | Modified ACO with limitless step length and stimulus probability for decision-making | Mobile robot (grid map) | Faster convergence and wider search area coverage |
| [73] | 2019 | Hybrid APF–ACO with directional forces from potential fields | Mobile robot (static/dynamic) | Accelerated convergence and obstacle-free motion |
| [74] | 2019 | APF–ACO with GA operators and unbounded step length heuristics | Mobile robot (dynamic) | Reliable navigation and fast convergence |
| [75] | 2021 | Hybrid GA–ACO for real-time collision avoidance | Static/dynamic robot environment | Shorter path length, reduced loops, faster convergence |
| [76] | 2020 | Improved ACO with uneven pheromone initialization and deadlock penalty | Autonomous mobile robot | Reduced stagnation and faster convergence |
| [77] | 2021 | Energy-efficient 3D ACO with gain-based pheromone enhancement | 3D AMR platform | Minimized energy use and smoother global trajectories |
| [78] | 2021 | Adaptive multi-objective ACO for indoor navigation | Indoor mobile robot | Balanced optimization of distance, safety, and smoothness |
| [79] | 2022 | Communication-inspired ACO using tentacle interaction modeling | Multi-robot (simulation) | Improved cooperation and composite route formation |
| [80] | 2022 | ACO with adaptive parameter adjustment and dynamic avoidance | AMR (simulation) | Robust performance in cluttered and changing environments |
| [81] | 2022 | Improved ACO integrated with DWA for global–local path planning | Mobile robot (dynamic) | Efficient obstacle avoidance and real-time adaptation |
| [82] | 2022 | Distributed ACO for multi-robot path coordination | Multi-AMR system | Reduced collision rate and computation load |
| [83] | 2022 | Enhanced ACO for complex grid environments | Wheeled robot | Avoided stagnation, generated optimal routes |
| [84] | 2024 | Hybrid ACO–GA framework for smoother and faster paths | Mobile robot (simulation) | Outperformed standalone ACO and GA in trajectory quality |
| [86] | 2024 | Parallel ACO (PACO) with rank pheromone updates and “continue-or-kill” strategy | Mobile robot (multi-grid tests) | Halved planning time and avoided deadlocks |
| [85] | 2025 | Improved Trimming ACO (ITACO) with triangular pruning and APF heuristics | Mobile robot (simulation) | 60% shorter paths and faster convergence than classical ACO |
| [87] | 2025 | Intelligently Enhanced ACO (IEACO) with multi-layer innovation (adaptive tuning, ε-greedy) | Physical and simulated AMRs | Superior path quality and faster convergence |
| [88] | 2025 | AR-ACO combining repulsive potential field and ACO | Mobile robot (complex map) | Faster convergence and improved robustness |
| [89] | 2025 | Edge-assisted ACO with cloud pheromone update optimization | Dynamic AMR system | Low-latency, safe, and real-time path planning |
| [90] | 2025 | Enhanced island-based ACO (EACI) with irregular pheromone initialization and auxiliary map pre-processing | Mini-ROS vehicle testbed | 95% fewer lost ants and 90% fewer iterations |

| Ref. | Year | Method | Robot/Platform/Environment | Key Outcome |
|---|---|---|---|---|
| [96] | 2018 | Firefly Algorithm for shortest and smooth trajectory optimization | Mobile robot (simulation) | Reduced path length with smooth and feasible trajectories |
| [97] | 2018 | Dynamic FA-based planner for moving obstacles and goals | Mobile robot (dynamic environment) | Effective path generation and obstacle avoidance |
| [98] | 2021 | Fuzzy-FA hybrid using Sobol initialization and dynamic displacement | Mobile robot (simulation) | Improved convergence speed and adaptability |
| [99] | 2021 | D–FA hybrid with quadratic factor for obstacle avoidance | Wheeled robot (simulation) | Safer paths and faster obstacle detection |
| [100] | 2022 | Hybrid GA–FA planner for 2D/3D navigation | Mobile robot (multi-domain) | Mitigated local optima and enhanced computational efficiency |
| [101] | 2022 | GAFA hybrid validated on Khepera-II platform | Khepera-II robot | Reliable navigation and reduced routing time in cluttered settings |
| [93] | 2022 | Comprehensive survey of FA variants and hybrids | Various AMR systems | Overview of FA’s evolution and hybridization trends |
| [102] | 2024 | Enhanced FA (EFA) with linear reduction | Mobile robot (multiple maps) | 10% shorter paths and reduced variance in path length |
| [103] | 2023 | Functional FA (FFA) with choice-based functions and constraints | 2D/3D dynamic environments | Efficient and collision-free navigation |
| [104] | 2024 | FA-SMC hybrid for real-time control and path optimization | Wheeled mobile robot | Improved trajectory tracking and reduced error vs. PSO/TLBO |
| [105] | 2024 | ERRT–FA hybrid combining FA with Rapidly Exploring Random Tree | AMR (complex static/dynamic) | Faster exploration and shorter paths |
| [106] | 2024 | Hybrid FA–Cuckoo Search (HAC) planner | Mobile robot (simulation) | Enhanced obstacle avoidance and global path efficiency |
| [107] | 2024 | Whale–Firefly Optimization Algorithm (FWOA) with opposition-based learning | AMR (static complex environment) | Improved exploration, convergence, and multi-population diversity |

| Ref. | Year | Method | Robot/Platform/Environment | Key Outcome |
|---|---|---|---|---|
| [109] | 2019 | Improved GWO with evolution and elimination strategies | Mobile robot (simulation) | Faster convergence and improved global search performance |
| [110] | 2020 | Hybrid GWO–Symbiotic Organism Search (SOS) | AMR in cluttered environments | Enhanced detection capability and convergence speed |
| [111] | 2021 | Improved GWO with variable weighting factor for 3D UAV path planning | UAV (3D environment) | Mitigated waypoint dispersion and improved trajectory stability |
| [112] | 2021 | Expanded and Incremental Adaptive GWO variants | UAV (farm environment) | Balanced exploration and exploitation; improved path feasibility |
| [113] | 2022 | Adaptive GWO for multi-obstacle navigation | Mobile robot (simulation) | Generated accident-free, time-efficient, and cost-effective paths |
| [114] | 2022 | Hybrid GWO–WOA with Dynamic Window Approach (DWA) | AMR (dynamic environment) | Combined global and local planning; improved obstacle avoidance |
| [43] | 2024 | Improved GWO for static indoor navigation | Two-wheeled AMR (LiDAR, Raspberry Pi) | Real-world validation; efficient path generation in structured maps |
| [115] | 2023 | Improved GWO with adaptive weighting functions (IGWO-WFs) | UAV (simulation) | Smoother trajectories and stable convergence |
| [116] | 2023 | Hybrid Improved GWO (HI-GWO) with chaotic mapping, Lévy flight, and golden sine | AMR (simulation) | Enhanced accuracy, robustness, and convergence rate |
| [117] | 2025 | Piecewise Adaptive GWO with DWA (PAGWO-IDWA) | Mobile robot (dynamic environment) | Reduced turns by 33% and runtime by 30% vs. standard GWO |
| [118] | 2025 | Improved GWO with cooperative predation and Lens Opposition-Based Learning | UAV (dynamic navigation) | Improved exploration, reduced local trapping, and shorter trajectories |

| Ref. | Year | Method | Robot/Platform/Environment | Key Outcome |
|---|---|---|---|---|
| [130] | 2017 | Fuzzy controller integrated with VFH | Wheeled mobile robot (WMR) | Reactive obstacle avoidance using safe-zone concepts |
| [133] | 2018 | FL-based obstacle recognition and avoidance | Sensor-equipped WMR | Reliable collision avoidance in cluttered environments |
| [131] | 2018 | Mamdani-type fuzzy inference system | Mobile robot | Near-optimal path with reduced time and energy |
| [132] | 2019 | Improved APF + dual fuzzy controllers (heading/stepping) | WMR in complex environment | Smooth trajectories and reduced local minima effects |
| [134] | 2020 | FL-based AHP for multi-objective planning | Omnidirectional WMR | Balanced trade-offs between path length, rotation, and safety |
| [135] | 2020 | A* + FL-AHP + bio-inspired BLS controller | Dynamic environment robot | Resilient, rapid control with adaptive decision-making |
| [136] | 2021 | Fuzzy-tuned heuristic parameters in improved ACO | Mobile robot (simulation) | Faster convergence and stable path generation |
| [137] | 2022 | Hybrid global–local planner (APF + fuzzy DWA) | AMR in dynamic environments | Improved local obstacle avoidance via fuzzy hazard evaluation |
| [138] | 2022 | FL + multi-objective PSO framework for 3D navigation | Car-like mobile robot | Energy-efficient, smooth, and collision-free 3D path |
| [139] | 2023 | Fuzzy-enhanced Dynamic Window Approach (FDWA) | WMR in dynamic obstacles | Improved heading accuracy and responsiveness |
| [140] | 2023 | Adaptive Fuzzy DWA with sub-target generation | AMR (simulation) | Robust navigation avoiding local optima |
| [141] | 2024 | Takagi–Sugeno fuzzy inference + Kinect-based perception | TurtleBot 2 (indoor) | Reliable real-time navigation with reduced computational cost |
| [142] | 2024 | DCA + Efficient FL Controller (EFLC) | Pioneer robot in V-REP | Shorter path and runtime than GA, RRT, and ACO |
| [143] | 2025 | Quad_D*–Fuzzy hybrid (quadtree + D* Lite + FL) | Mobile robot in dense/trap-like environments | 100% success rate and 80% reduction in planning time |
| [144] | 2025 | Fuzzy A*–quantum multi-stage Q-learning–APF hybrid | AMR in static/dynamic environments | 80% faster learning convergence and smoother collision-free paths |

| Ref. | Year | Method | Robot/Platform/Environment | Key Outcome |
|---|---|---|---|---|
| [147] | 2018 | ANN control scheme for 3D WMR | Non-holonomic WMR | Effective obstacle avoidance in 3D |
| [148] | 2018 | Teacher–student ANN for wall-following robot | Simulated MobotSim platform | Steering and path optimization |
| [149] | 2019 | GRU RNN for path planning in uncharted spaces | Autonomous robot in uncharted environment | Smooth, real-time path planning |
| [150] | 2019 | Feedforward ANN + potential field hybrid | Autonomous vehicle | Trajectory tracking with uncertainty handling |
| [151] | 2019 | Recurrent fuzzy NN + Extended Kalman Filter | Autonomous MR | Resilient collision-free navigation |
| [152] | 2020 | Behavior-based ANN with BP training | Mobile robot | Fast goal-reaching with adaptive control |
| [153] | 2020 | Recursive ANN predictor for PWM control | Autonomous MR | Improved accuracy and speed |
| [154] | 2020 | Neural RRT*: CNN-assisted RRT* | Mobile robot (simulation) | Improved exploration efficiency and path optimality |
| [155] | 2019 | MPNet: Deep neural motion planning network with continual learning + RRT* | Various robotic platforms | Achieved near-optimal, collision-free paths in <1 s with generalization to unseen environments |
| [156] | 2021 | DNN + high-fidelity direct optimization | AGV (industrial) | Robust real-time planning via DNN |
| [157] | 2021 | ANN with online data update for obstacle avoidance | Mobile robot | Dynamic obstacle avoidance in real time |
| [158] | 2021 | LSTM for LiDAR-based trajectory prediction | Mobile robot (LiDAR-equipped) | Optimal navigation under dynamic obstacles |
| [159] | 2022 | NN decision-making for unknown environments | Mobile robot | Robust motion behavior adaptation |
| [160] | 2022 | Hybrid ANN + FL adaptive control with monitoring layer | AMR (general framework) | Improved localization, stability, and safety |
| [161] | 2023 | Lightweight dual-input CNN planner with hybrid sampling | TurtleBot, simulated and real | 10× smaller model, 5× faster computation, and competitive accuracy |
| [162] | 2024 | CNN-based ResNet18 + YOLOv3 for perception-driven navigation | Jetson Nano-based AMR | 98.5% recognition accuracy in dynamic obstacle-rich scenarios |
| [163] | 2025 | CNN-DQN with exponential decay + B-spline smoothing | Mobile robot (simulation) | Faster convergence, smoother and more efficient paths than DQN |
| [164] | 2025 | Dual-layer BINN + reward-augmented DWA | Multi-robot (static/dynamic/mixed) | 20% shorter paths, 90% fewer turns than GA, ACO, BINN |
| [165] | 2025 | Neuro-IWO (NN + Invasive Weed Optimization) | Mobile robot (simulation) | <5% deviation, smoother and shorter paths vs. NN planners |
| [166] | 2025 | Hybrid Symmetric BINN + Improved GA | AMR on heterogeneous terrain | 11.4% shorter paths, 10 fewer turns in corridor experiments |

| Ref. | Year | Method | Robot/Platform/Environment | Key Outcome |
|---|---|---|---|---|
| [170] | 2019 | Mamdani-type fuzzy controller + RL-based NN | Car-like mobile robot | Collision-free navigation in cluttered environments |
| [171] | 2019 | Multiple adaptive neuro-fuzzy inference system using IR sensors | Mobile robot (simulation) | Real-time navigation, reduced path length, robust against collisions |
| [172] | 2020 | Adaptive neuro-fuzzy inference system + GPS guidance | AMR with GPS | Collision-free path following toward fixed or moving targets |
| [173] | 2021 | Adaptive neuro-fuzzy controller | Mobile robot in static obstacle environment | Smoother and shorter paths compared to standalone FL |
| [174] | 2022 | ANFIS with ultrasonic sensors for obstacle avoidance | AMR with ultrasonic sensors | Dynamic steering control when obstacles detected |
| [175] | 2022 | ANFIS + GPS + heading sensor hybrid motion planner | Wheeled mobile robot | Robust navigation in crowded/unknown environments, improved global–local control |
| [176] | 2022 | Neuro-fuzzy system for deliberative and reactive strategies | Mobile robot | Improved corrective decision-making in real time |
| [179] | 2022 | FCM-MANFIS (clustering + ANFIS) | Khepera-IV, V-REP simulation | Reduced rule complexity; <9% deviation between simulation and real-world; shorter/faster paths |
| [177] | 2023 | Neuro-fuzzy optimized with PSO for multi-robot navigation | Multiple mobile robots (crowded setting) | Increased navigation accuracy, reduced path length and duration |
| [178] | 2024 | Adaptive Neuro-Fuzzy Inference System (ANFIS) | AMR in dynamic environments | Reduced computational complexity, smoother trajectories |
| [180] | 2025 | Subtractive clustering + fuzzy set merging + ANFIS | AMR (simulation) | RMSE 0.0442 with only 5 rules; outperformed GPS-ANFIS, CS-ANFIS, IWO-ANFIS |
| [181] | 2025 | Type-1, Type-2, Type-3 ANFIS with TLBO training | Turtlebot (Gazebo) | Type-3 ANFIS achieved smoother trajectories, faster convergence, safer navigation |
| [182] | 2025 | Hybrid A*–DWA–ANFIS PID controller | Ground vehicle | Faster convergence (0.038 s vs. 0.052–0.075 s); efficient dynamic obstacle handling |
| [166] | 2025 | Hybrid Symmetric BINN + IGA + NF reasoning | Corridor experiments | Reduced path length by 11.4% and turns by 10; robust navigation in heterogeneous terrain |
| [183] | 2025 | Compact ANFIS (8 rules) with IR + ultrasonic sensing | Matlab simulations | Collision-free paths up to 20% shorter vs. baseline fuzzy controllers |

| Ref. | Year | Method | Robot/Platform/Environment | Key Outcome |
|---|---|---|---|---|
| [195] | 2019 | PPO + DQN + CNN | Multi-robot AMR navigation | Improved perception, robust trajectory generation, faster convergence |
| [193] | 2020 | Globally guided RL for navigation with dynamic obstacles and robots | Multi-robot setting | Effective navigation and collision avoidance in dynamic multi-agent scenarios |
| [192] | 2021 | Hybrid RL + fuzzy inference system | Mobile robot (navigation + charging decision) | Reduced training time, improved energy efficiency |
| [194] | 2021 | PPO + transfer learning for multi-robot path planning | Multi-robot simulation | Stable learning, efficient coordination, reduced training overhead |
| [190] | 2022 | Q-learning + Artificial Potential Field (APF) | AMR in cluttered environment | Faster learning, improved performance, reduced training time |
| [191] | 2022 | Augmented DDPG with LSTM, reward shaping, normalization, and mixed noise | AMR (simulation) | Accelerated convergence, better generalization, more efficient paths |
| [187] | 2023 | Q-learning with improved exploration–exploitation balance | AMR in dynamic environment | Reduced execution time, shorter paths, and improved cost performance |
| [188] | 2023 | Q-learning with prior knowledge initialization and adaptive greedy factor | Mobile robot (simulation) | Faster convergence and higher success rate toward targets |
| [189] | 2023 | RL Path Generation with deep Markov model + motion fine-tuning | Mobile robot | Predictive path generation and refined trajectory control |
| [196] | 2023 | Real-time MDP-based RL planning | AMR in dynamic environment | Robust real-time navigation under uncertainty |
| [197] | 2024 | Improved Dyna-Q with hippocampus-inspired forward prediction | AMR (simulation) | Better exploration–exploitation balance, enhanced navigation efficiency |
| [198] | 2024 | Optimized Q-learning (O-QL) with distance-based initialization, hybrid exploration, Gaussian reward shaping | AMR (randomized maps) | Faster convergence, higher cumulative rewards than GA-QL and EnDQN |
| [144] | 2025 | Hybrid fuzzy A* + quantum multi-stage Q-learning + APF | AMR (simulations with static/dynamic obstacles) | 80% faster learning, smoother and shorter paths, avoided trap deadlocks |
| [199] | 2025 | A* + DQN + DWA with Bezier smoothing | ROS-based wheeled robots | 99.2% success rate, robust against dynamic obstacles compared to standalone A*, DWA, or DQN |
| [200] | 2025 | Distributional RL with LIDIA + QR-DQN | AMR in Gazebo | Reduced path length by 13%, smoother trajectories, improved reliability in cluttered maps |
| [201] | 2025 | ARE-QL (Q-learning + ant colony pheromone guidance + adaptive + continuous rewards) | AMR in grid environments | Path length reduced up to 64%, convergence time reduced by >80% compared to Q-learning, IQ-FPA, DRQN |
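Most entries in the table above build on tabular Q-learning, so a minimal baseline is useful for reference. The sketch below trains a Q-table on an empty grid with an epsilon-greedy policy; grid size, rewards, and hyperparameters are illustrative assumptions, not values from any cited study.

```python
import random

def train_q_grid(size=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on an empty size x size grid.

    State = (row, col); actions = up/down/left/right; reward = -1 per step,
    +10 on reaching the goal at the bottom-right corner.
    """
    rng = random.Random(seed)
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    goal = (size - 1, size - 1)
    Q = {(r, c): [0.0] * 4 for r in range(size) for c in range(size)}

    for _ in range(episodes):
        s = (0, 0)
        while s != goal:
            a = rng.randrange(4) if rng.random() < eps else Q[s].index(max(Q[s]))
            dr, dc = actions[a]
            s2 = (min(max(s[0] + dr, 0), size - 1),
                  min(max(s[1] + dc, 0), size - 1))
            r = 10.0 if s2 == goal else -1.0
            # Temporal-difference update toward the bootstrapped target.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

def greedy_path(Q, size=5):
    """Roll out the greedy policy from (0, 0); cap length to avoid loops."""
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    s, path = (0, 0), [(0, 0)]
    for _ in range(4 * size * size):
        if s == (size - 1, size - 1):
            break
        dr, dc = actions[Q[s].index(max(Q[s]))]
        s = (min(max(s[0] + dr, 0), size - 1),
             min(max(s[1] + dc, 0), size - 1))
        path.append(s)
    return path
```

The improvements surveyed above (prior-knowledge initialization, adaptive greedy factors, pheromone guidance, reward shaping) all modify either the initialization of `Q`, the action-selection line, or the reward term in this loop.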
| Ref. | Year | Method | Robot/Platform/Environment | Key Outcome |
|---|---|---|---|---|
| [208] | 2019 | Frontier-based exploration + DRL | Urban search and rescue robot | Autonomous navigation in uncharted, cluttered terrains |
| [209] | 2020 | Incremental training strategy for DRL | AMR (simulation) | Improved convergence speed and learning efficiency |
| [210] | 2020 | Value-function enhanced DRL path planner | AMR in dynamic environments | Adaptive obstacle avoidance with high success rate |
| [211] | 2021 | DDPG with separate experience replay | Mobile robot | Improved obstacle avoidance and learning from experience |
| [212] | 2021 | Dueling DQN with end-to-end grid map input | AMR in complex maps | Enhanced decision-making, robust steering commands |
| [213] | 2021 | DRL-based secure navigation framework | Lunar rover | Safety-constrained planning for space exploration |
| [215] | 2022 | Improved DQN with enhanced reward and exploration | AMR in unknown environments | Faster learning, improved exploration, optimized state–action mapping |
| [216] | 2022 | Hierarchical DRL controller (DQN-based) | Wheeled robot in congested environments | Effective obstacle avoidance via sub-action decomposition |
| [217] | 2022 | Two-layer controller (motion planning + DRL path-tracking) | AMR in unknown terrains | Real-time navigation, deadlock resolution, faster training |
| [218] | 2022 | DRL + dynamic programming with prioritized replay | Autonomous agent (simulation) | Efficient learning, improved robustness in complex environments |
| [222] | 2023 | Improved DDQN framework | Physical AMR | Real-world validation, enhanced convergence |
| [219] | 2024 | DRL with pedestrian danger modeling + virtual robot | AMR in crowd navigation | Smoother and safer trajectories in dense pedestrian environments |
| [220] | 2024 | AMC-TD3 decentralized DRL | Multi-robot systems | Coordinated multi-robot navigation with robust performance |
| [221] | 2024 | DRL + improved K-means clustering | Multi-robot coverage tasks | Efficient cooperative coverage path planning |
| [223] | 2024 | Hybrid A* + DRL framework | Real AMR platform | Real-time deployment, robust navigation in dynamic maps |
| [128] | 2025 | GAP_SAC with gated attention, prioritized replay, dynamic reward | TurtleBot 3 in ROS–Gazebo | Improved robustness, 90%+ success rate, smoother/shorter paths |
| [228] | 2025 | Dueling Double DQN (D3QN) variants with depth images and orientation cues | AMR in dynamic unknown environments | 85–90% success in real and simulated crowded scenarios |
| [229] | 2025 | Hybrid DRL–ANFIS with Tent-based Artificial Hummingbird Algorithm (TAHA) | AMR in unstructured environments | Reduced path length (15%), computation time (25%), energy use |
| [230] | 2025 | Dual-layer MOSFMO + DRL local planner with composite rewards | AgileX Ranger-Mini 2.0, Gazebo | 92–95% success, 99% on-time arrivals in dense pedestrian traffic |
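Two mechanisms recur throughout the DRL planners above: an experience replay buffer and a periodically synchronized target network. The sketch below isolates that machinery on a one-dimensional corridor task; a lookup table stands in for the deep network so the example stays dependency-free, and the task, rewards, and all hyperparameters are illustrative assumptions.

```python
import random
from collections import deque

def dqn_style_train(n=7, episodes=300, gamma=0.95, alpha=0.2,
                    buf_size=200, batch=16, sync_every=20, seed=1):
    """DQN-style loop (replay buffer + target table) on a 1-D corridor.

    States 0..n-1, actions {left, right}; reward +1 at the terminal state
    n-1, 0 elsewhere. The replay/target structure, not the task, is the point.
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n)]      # "online network"
    T = [row[:] for row in Q]               # "target network"
    buf = deque(maxlen=buf_size)
    step = 0

    for ep in range(episodes):
        s = 0
        eps = max(0.05, 1.0 - ep / 150)     # linear epsilon decay
        while s != n - 1:
            a = rng.randrange(2) if rng.random() < eps else int(Q[s][1] >= Q[s][0])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n - 1 else 0.0
            buf.append((s, a, r, s2))       # store transition for reuse
            s = s2
            step += 1
            if len(buf) >= batch:           # minibatch update from replay
                for bs, ba, br, bs2 in rng.sample(list(buf), batch):
                    target = br if bs2 == n - 1 else br + gamma * max(T[bs2])
                    Q[bs][ba] += alpha * (target - Q[bs][ba])
            if step % sync_every == 0:      # periodic target synchronization
                T = [row[:] for row in Q]
    return Q
```

Replacing the table with a neural network, the uniform sampling with prioritized replay, and the scalar reward with a shaped or distributional one recovers, respectively, DQN, the prioritized-replay variants, and the distributional methods listed above.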
| Problem Dimension | Key Requirements | Commonly Effective Design Pattern | How to Evaluate |
|---|---|---|---|
| Static, known environment (global routing) | Near-optimal cost; smoothness; clearance; moderate compute | Graph/sampling/metaheuristic: global planning + smoothing | Path cost/length; clearance; smoothness; runtime; success rate |
| Dynamic obstacles (time-varying scene) | Fast replanning; safety under uncertainty; stability | Hierarchical stack: global route + reactive local avoidance; prediction-aware local module | Replanning latency; replanning rate; collision rate; time-to-goal; robustness to sensing delay/noise |
| Unknown/partially known (online navigation/exploration) | Safety with partial observability; exploration efficiency; robustness | Online mapping + local policy; frontier/information gain; safety shielding | Coverage/time-to-map; safety violations; compute; generalization across layouts |
| Kinematically constrained (nonholonomic/dynamic feasibility) | Dynamic feasibility; tracking stability; smoothness | Trajectory optimization or constrained refinement (e.g., spline-based) on top of a route/path | Feasibility violation rate; curvature/jerk; tracking error; runtime; safety margin |
| Narrow passages/cluttered (high constraint density) | Escape local minima; maintain clearance; avoid dead-ends | Diversity-preserving global search + feasibility projection; hybrid refinement | Success rate; clearance distribution; failure cases; sensitivity to initialization |
| Multi-objective/task-driven (time–energy–risk tradeoffs) | Pareto tradeoffs; constraint satisfaction; interpretability | Multi-objective optimization (e.g., weighted or Pareto) + scenario-specific objectives | Pareto front quality; constraint violations; stability under weight changes; task success rate |
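The evaluation column above repeatedly invokes path length, clearance, and smoothness. The sketch below computes one common formulation of each for a 2-D waypoint path against circular obstacles; definitions vary across the surveyed works (curvature or jerk are often used instead of heading change), so these are assumptions chosen for simplicity.

```python
import math

def path_metrics(path, obstacles):
    """Common path-quality metrics for a 2-D waypoint path.

    path:      list of (x, y) waypoints
    obstacles: list of (x, y, radius) circular obstacles
    Returns total length, minimum clearance from any waypoint to an obstacle
    surface, and total absolute heading change (a smoothness proxy: lower is
    smoother).
    """
    length = sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))

    clearance = min(
        (math.dist((x, y), (ox, oy)) - r
         for x, y in path for ox, oy, r in obstacles),
        default=float("inf"),
    )

    turning = 0.0
    for i in range(1, len(path) - 1):
        h1 = math.atan2(path[i][1] - path[i - 1][1], path[i][0] - path[i - 1][0])
        h2 = math.atan2(path[i + 1][1] - path[i][1], path[i + 1][0] - path[i][0])
        d = h2 - h1
        turning += abs(math.atan2(math.sin(d), math.cos(d)))  # wrap to [-pi, pi]
    return length, clearance, turning
```

Note that checking clearance only at waypoints is optimistic; a dense resampling of each segment (or an exact point-to-segment distance) is needed for a rigorous safety margin.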
| Method | Strengths and Advantages | Limitations and Challenges |
|---|---|---|
| Metaheuristic Approaches | | |
| PSO | Simple implementation; applicable to both continuous and discrete optimization problems; efficient global search behavior | Convergence speed is sensitive to parameter settings; may suffer from premature convergence in complex environments |
| GA | Handles complex, multimodal optimization; maintains population diversity for broad exploration | Slow convergence in large search spaces; computationally demanding for real-time applications |
| ACO | Suitable for graph-based path planning; effective in dynamic and partially known environments | Slow convergence; high memory usage; parameter tuning is nontrivial |
| FA | Balances exploration and exploitation; useful in multimodal search problems | Performance varies significantly with parameter tuning; effectiveness may degrade in noisy environments |
| GWO | Fast convergence; effective for continuous path optimization; easy to implement | Limited exploration in complex search spaces; sensitive to population size and parameter scaling |
| AI-Based Approaches | | |
| FL | Robust to sensor noise and uncertainty; interpretable decision-making using linguistic rules | Rule design is heuristic and problem-specific; scalability and performance degrade with complex scenarios |
| NN | Learns nonlinear mappings for obstacle avoidance and path prediction; generalizes across environments | Requires large training datasets; lacks transparency and interpretability; high computational demand |
| Neuro-Fuzzy | Combines the adaptability of NNs with the transparency of FL; handles both crisp and fuzzy inputs | High model complexity; training is sensitive and may lead to overfitting or instability |
| RL | Learns policies through trial-and-error interaction with the environment; adaptable to dynamic tasks without explicit models | Convergence can be slow; performance is sensitive to reward design; limited generalization beyond trained scenarios |
| DRL | Learns optimal navigation strategies in dynamic, uncertain environments; enables autonomous adaptation | Training requires extensive data and compute; interpretability of policies remains a significant challenge |
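As a concrete instance of the metaheuristic family in the table above, the sketch below applies canonical PSO to place a single intermediate waypoint between start and goal, penalizing paths that intersect circular obstacles. The swarm parameters, penalty weight, and the sampled collision check are all illustrative assumptions; practical planners optimize many waypoints and use exact segment-circle tests.

```python
import math, random

def pso_waypoint(start, goal, obstacles, n_particles=30, iters=100, seed=3):
    """PSO over one intermediate 2-D waypoint (extendable to more)."""
    rng = random.Random(seed)

    def hits(a, b, ox, oy, r, samples=20):
        # Sample points along segment a->b and test them against the circle.
        return any(
            math.dist((a[0] + (b[0] - a[0]) * t / samples,
                       a[1] + (b[1] - a[1]) * t / samples), (ox, oy)) < r
            for t in range(samples + 1)
        )

    def cost(p):
        c = math.dist(start, p) + math.dist(p, goal)
        if any(hits(start, p, *ob) or hits(p, goal, *ob) for ob in obstacles):
            c += 100.0                      # large penalty for collision
        return c

    # Initialize particles uniformly in a box around start and goal.
    lo = [min(start[d], goal[d]) - 2 for d in range(2)]
    hi = [max(start[d], goal[d]) + 2 for d in range(2)]
    X = [[rng.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(n_particles)]
    V = [[0.0, 0.0] for _ in range(n_particles)]
    P = [x[:] for x in X]                   # personal bests
    g = min(P, key=cost)[:]                 # global best

    w, c1, c2 = 0.7, 1.5, 1.5               # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if cost(X[i]) < cost(P[i]):
                P[i] = X[i][:]
                if cost(P[i]) < cost(g):
                    g = P[i][:]
    return g, cost(g)
```

The parameter sensitivity noted in the table is visible here: the inertia and acceleration weights directly control the exploration-exploitation balance, and poor settings stall the swarm in the penalty region.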
| Operating Regime/Constraint | Typical Suitable Families | Why This Choice Is Common | Key Caution/Failure Mode |
|---|---|---|---|
| Known map; mostly static obstacles; need global route | Graph search/roadmap/sampling (e.g., A*, D*, PRM/RRT*) | High reliability and interpretability; mature toolchain; repeatable performance | May degrade in dense dynamics; requires good map; may ignore nonholonomic/dynamic limits unless kinodynamic variants are used |
| Known map but frequent local changes (temporary obstacles, congestion) | Global planner + local reactive layer (e.g., A*/D* + DWA/APF/FL) | Global optimality/structure + fast local reaction; practical for warehouses and indoor navigation | Local minima/oscillation risks; tuning needed; may violate dynamic feasibility without constraints |
| Unknown/partially known environment; exploration/coverage required | Frontier/exploration methods + local avoidance; RL/DRL for exploration policies (often hybridized) | Handles incomplete maps; learns exploration behaviors; supports information-gain objectives | Sparse rewards and brittleness; safety constraints needed; sim-to-real transfer can fail |
| Highly dynamic scenes (crowds, moving obstacles); strict real-time response | DRL local avoidance (policy inference) + global planner; or adaptive FL local layer | Fast online inference and responsiveness; handles partial observability when trained appropriately | Training cost and data/simulation fidelity; safety verification/shielding often required |
| Multi-objective planning (short, smooth, safe, energy-aware) with complex cost terms | Metaheuristics (GA/PSO/ACO/GWO) and hybrid metaheuristic + deterministic | Flexible objective design; can escape local minima; effective in cluttered search spaces | Parameter sensitivity; higher compute; real-time use often requires receding-horizon or parallelization |
| Tight nonholonomic constraints (Ackermann, differential drive) and/or dynamic feasibility required | Kinodynamic planners/optimization-based methods; constrained sampling; hybrid with feasibility checks | Produces feasible trajectories respecting kinematics/dynamics; better for high-speed or constrained platforms | Computational burden; requires accurate models; may need replanning under disturbances |
| Safety-critical deployment; need guarantees (hard constraints, certification) | Deterministic planners + constrained optimization; safe RL with shields/control barrier functions (CBFs) as augmentation | Easier to reason about constraints; can integrate formal checks (collision, speed/accel limits) | Conservatism; may underperform in complex dynamic scenes without learned prediction |
| Multi-robot coordination (formation, deconfliction, shared goals) | Centralized/distributed planners + MARL (hybrid) depending on comms | Supports coordination objectives and negotiation; handles partial observability in learned settings | Communication limits; non-stationarity in learning; scalability and safety remain open issues |
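The first rows of the table above anchor on graph search such as A*. For reference, the sketch below is a compact 4-connected grid A* with a Manhattan heuristic; the character-grid map format is an assumption chosen for readability.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """4-connected A* on a character grid ('#' = obstacle), Manhattan heuristic.

    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = count()                        # tiebreaker so the heap never compares cells
    open_set = [(h(start), next(tie), start, None)]
    came, best_g = {}, {start: 0}
    while open_set:
        _, _, cur, parent = heapq.heappop(open_set)
        if cur in came:
            continue                     # already expanded at lower cost
        came[cur] = parent
        if cur == goal:                  # reconstruct by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#'):
                g = best_g[cur] + 1
                if g < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g
                    heapq.heappush(open_set, (g + h(nxt), next(tie), nxt, cur))
    return None                          # goal unreachable
```

In the hierarchical stacks surveyed above, a planner like this supplies the global route, while a DWA, APF, fuzzy, or DRL layer handles local avoidance between its waypoints.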
| Method | Time Complexity | Space Complexity |
|---|---|---|
| Heuristic-guided RL | ||
| Deep RL | ||
| Tabular RL |
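The space cost that separates the table's rows comes from how values are stored: tabular methods keep one entry per state-action pair, so memory grows as O(|S| x |A|). The toy calculation below illustrates that growth for an assumed grid discretization (the grid size, heading count, and action count are arbitrary examples).

```python
def q_table_entries(n_states, n_actions):
    """Space cost of tabular RL: one stored value per state-action pair."""
    return n_states * n_actions

# A 100 x 100 occupancy grid with 4 discrete headings and 4 actions:
n_states = 100 * 100 * 4
table_size = q_table_entries(n_states, 4)   # 160000 stored values
# Each additional discrete state variable with k levels multiplies this by k,
# which is why large or continuous state spaces push toward function
# approximation (deep RL) at the cost of training compute and interpretability.
```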
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Aremu, M.B.; Ahmed, G.; Elferik, S.; Saif, A.-W.A. Autonomous Mobile Robot Path Planning Techniques—A Review: Metaheuristic and Cognitive Techniques. Robotics 2026, 15, 23. https://doi.org/10.3390/robotics15010023

