Article

Research on Mobile Agent Path Planning Based on Deep Reinforcement Learning

Hunan Institute of Engineering, College of Information Science and Engineering, Xiangtan 411104, China
*
Author to whom correspondence should be addressed.
Systems 2025, 13(5), 385; https://doi.org/10.3390/systems13050385
Submission received: 9 April 2025 / Revised: 9 May 2025 / Accepted: 15 May 2025 / Published: 16 May 2025

Abstract

For mobile agent path planning, traditional path planning algorithms frequently induce abrupt variations in path curvature and steering angle, increasing the risk of lateral tire slippage and undermining operational safety. Meanwhile, conventional reinforcement learning methods struggle to converge rapidly, yielding planning efficiency too low to meet energy-economy demands. This study proposes LSTM Bézier–Double Deep Q-Network (LB-DDQN), an advanced path-planning framework for mobile agents based on deep reinforcement learning. The architecture first enables mapless navigation through a DDQN foundation, then integrates long short-term memory (LSTM) networks to fuse environmental features and preserve training information, and finally improves path quality by eliminating redundant nodes via an obstacle–path relationship analysis combined with Bézier curve-based trajectory smoothing. A sensor-driven three-dimensional simulation environment with static obstacles was constructed on the ROS and Gazebo platforms, where LiDAR-equipped mobile agent models were trained for real-time environmental perception and strategy optimization before deployment on experimental vehicles. Simulation and physical-implementation results show that LB-DDQN achieves effective collision avoidance while markedly improving key metrics: path smoothness, energy efficiency, and motion stability each improve by more than 50% on average. The framework also maintains superior safety and operational efficiency across diverse scenarios.
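Two of the framework's building blocks lend themselves to a compact illustration: the decoupled action selection/evaluation at the heart of Double DQN, and Bézier-curve smoothing of a planned waypoint sequence. The sketch below is a minimal NumPy rendering of both ideas; the function names and shapes are our own illustrative choices, not the authors' implementation.

```python
import numpy as np
from math import comb

def double_dqn_target(q_online_next, q_target_next, reward, gamma, done):
    """Double DQN bootstrap target: the online network selects the
    greedy next action, while the target network evaluates it,
    reducing the overestimation bias of vanilla DQN."""
    a_star = int(np.argmax(q_online_next))          # select with online net
    return reward + gamma * q_target_next[a_star] * (1.0 - done)

def bezier_curve(control_points, n_samples=50):
    """Sample a Bézier curve through the given control points using
    the Bernstein polynomial form; the curve starts at the first
    point and ends at the last, smoothing the corners in between."""
    pts = np.asarray(control_points, dtype=float)
    k = len(pts) - 1                                # curve degree
    t = np.linspace(0.0, 1.0, n_samples)
    # Bernstein basis: B_i(t) = C(k, i) * t^i * (1 - t)^(k - i)
    basis = np.array([comb(k, i) * t**i * (1 - t)**(k - i)
                      for i in range(k + 1)])
    return basis.T @ pts                            # (n_samples, 2)

# Example: smooth a sharp 90-degree turn in a planned path
waypoints = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
curve = bezier_curve(waypoints, n_samples=100)
```

In a full pipeline, the planner's remaining waypoints (after redundant-node elimination) would serve as the control points, so the smoothed trajectory preserves the start and goal while bounding curvature changes along the way.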
Keywords: reinforcement learning; path planning; deep Q-network

Share and Cite

MDPI and ACS Style

Jin, S.; Zhang, X.; Hu, Y.; Liu, R.; Wang, Q.; He, H.; Liao, J.; Zeng, L. Research on Mobile Agent Path Planning Based on Deep Reinforcement Learning. Systems 2025, 13, 385. https://doi.org/10.3390/systems13050385


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
