Open Access Article

Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder

College of System Engineering, National University of Defense Technology, Changsha 410073, Hunan, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(2), 323; https://doi.org/10.3390/app9020323
Received: 9 August 2018 / Revised: 31 August 2018 / Accepted: 3 September 2018 / Published: 17 January 2019
(This article belongs to the Special Issue Advanced Mobile Robotics)
In this paper, we present a hierarchical path-planning framework called SG–RL (subgoal graphs–reinforcement learning) to plan rational paths for agents maneuvering in continuous and uncertain environments. By “rational”, we mean (1) efficient path planning that eliminates first-move lags; and (2) collision-free, smooth paths that satisfy the agents’ kinematic constraints. SG–RL works at two levels. At the first level, SG–RL uses a geometric path-planning method, simple subgoal graphs (SSGs), to efficiently find optimal abstract paths, also called subgoal sequences. At the second level, SG–RL uses an RL method, least-squares policy iteration (LSPI), to learn near-optimal motion-planning policies that generate kinematically feasible and collision-free trajectories between adjacent subgoals. The first advantage of the proposed method is that SSGs overcome the sparse-reward and local-minimum-trap limitations of RL agents, so LSPI can be used to generate paths in complex environments. The second advantage is that, when the environment changes slightly (e.g., unexpected obstacles appear), SG–RL does not need to reconstruct the subgoal graph or replan subgoal sequences with SSGs, because LSPI can handle such uncertainties by exploiting its generalization ability. Simulation experiments in representative scenarios demonstrate that, compared with existing methods, SG–RL works well on large-scale maps with relatively low action-switching frequencies and shorter path lengths, and that it can cope with small changes in the environment. We further show that the design of the reward function and the type of training environment are important factors in learning feasible policies.
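As a rough illustration of the two-level scheme described in the abstract, the sketch below chains a high-level subgoal sequence with a low-level point-to-point policy. It is not the authors' implementation: the names plan_subgoals and lspi_policy are hypothetical placeholders, where plan_subgoals stands in for the SSG planner (here a toy straight-line decomposition) and lspi_policy stands in for the LSPI-learned motion policy (here a greedy step toward the current subgoal); obstacle handling and kinematic constraints are omitted.

```python
# Minimal sketch of a two-level "subgoals + low-level policy" planner.
# Placeholder logic only; it mirrors the structure of SG-RL, not its details.

import math

def plan_subgoals(start, goal, spacing=2.0):
    """High level (SSG stand-in): split the start-goal segment into subgoals."""
    dist = math.hypot(goal[0] - start[0], goal[1] - start[1])
    n = max(1, int(dist // spacing))
    return [(start[0] + (goal[0] - start[0]) * k / n,
             start[1] + (goal[1] - start[1]) * k / n) for k in range(1, n + 1)]

def lspi_policy(state, subgoal, step=0.5):
    """Low level (LSPI stand-in): take a bounded greedy step toward the subgoal."""
    dx, dy = subgoal[0] - state[0], subgoal[1] - state[1]
    d = math.hypot(dx, dy)
    if d < 1e-9:
        return state
    scale = min(step, d) / d
    return (state[0] + dx * scale, state[1] + dy * scale)

def sg_rl_plan(start, goal, tol=0.1, max_steps=1000):
    """Chain low-level trajectories over the high-level subgoal sequence."""
    trajectory, state = [start], start
    for sg in plan_subgoals(start, goal):
        for _ in range(max_steps):
            if math.hypot(sg[0] - state[0], sg[1] - state[1]) <= tol:
                break
            state = lspi_policy(state, sg)
            trajectory.append(state)
    return trajectory

if __name__ == "__main__":
    path = sg_rl_plan((0.0, 0.0), (8.0, 6.0))
    print(f"{len(path)} waypoints, ending near {path[-1]}")
```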
Keywords: subgoal graphs; reinforcement learning; hierarchical path planning; uncertain environments; mobile robots
MDPI and ACS Style

Zeng, J.; Qin, L.; Hu, Y.; Hu, C.; Yin, Q. Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder. Appl. Sci. 2019, 9, 323.
