Electronics
  • Article
  • Open Access

13 November 2022

GR(1)-Guided Deep Reinforcement Learning for Multi-Task Motion Planning under a Stochastic Environment

1 School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213100, China
2 Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Recent Advances in Multi-Agent Systems

Abstract

Motion planning has been used in robotics research to make movement decisions under given movement constraints. Deep Reinforcement Learning (DRL) approaches have been applied to motion planning with continuous state representations. However, current DRL approaches suffer from reward sparsity and overestimation issues, and it is challenging to train agents to handle complex task specifications under deep neural network approximations. This paper considers a fragment of Linear Temporal Logic (LTL), Generalized Reactivity of rank 1 (GR(1)), as a high-level reactive temporal logic to guide robots in learning efficient movement strategies in a stochastic environment. We first use the synthesized strategy of GR(1) to construct a potential-based reward machine, in which we store the experiences for each state. We integrate GR(1) with DQN, double DQN, and dueling double DQN. We also observe that the synthesized strategies of GR(1) can take the form of directed cyclic graphs. We therefore develop a topological-sort-based reward-shaping approach to calculate the potential values of the reward machine, based on which we apply the dueling architecture to the double deep Q-network and train the agents with the stored experiences. Experiments on multi-task learning show that the proposed approach learns faster and attains higher optimal rewards than state-of-the-art algorithms. In addition, compared with value-iteration-based reward-shaping approaches, our topological-sort-based approach achieves a higher accumulated reward when the synthesized strategies are in the form of directed cyclic graphs.
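The topological-sort-based reward shaping described above can be illustrated with a short sketch. In the following Python snippet, the function name `topological_potentials`, the discounting rule gamma * phi(successor), and the restriction to acyclic strategy graphs are all illustrative assumptions, not the paper's exact construction: it assigns a potential value to each reward-machine state in reverse topological order, so the potentials can feed a standard potential-based shaping term F(u, v) = gamma * phi(v) - phi(u).

```python
from collections import defaultdict, deque

def topological_potentials(edges, goal_states, gamma=0.9):
    """Assign potentials to reward-machine states in reverse topological order.

    A minimal sketch: states closer to a goal in the synthesized GR(1)
    strategy graph get higher potentials, so the shaping term
    F(u, v) = gamma * phi(v) - phi(u) rewards progress toward the goals.
    Assumes an acyclic strategy graph; the paper's approach also covers
    cyclic strategy graphs, which this sketch does not handle.
    """
    succ = defaultdict(list)
    pred = defaultdict(list)
    nodes = set()
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)
        nodes.update((u, v))
    nodes.update(goal_states)

    # Kahn's algorithm on the reversed graph: start from sinks (goal
    # states and dead ends) and propagate potentials backwards, so each
    # state is finalized only after all of its successors are.
    out_deg = {n: len(succ[n]) for n in nodes}
    phi = {n: (1.0 if n in goal_states else 0.0) for n in nodes}
    queue = deque(n for n in nodes if out_deg[n] == 0)
    while queue:
        v = queue.popleft()
        for u in pred[v]:
            # A state's potential is the discounted best successor potential.
            phi[u] = max(phi[u], gamma * phi[v])
            out_deg[u] -= 1
            if out_deg[u] == 0:
                queue.append(u)
    return phi
```

For example, on the diamond-shaped strategy graph q0 -> {q1, q2} -> q3 with goal q3, the sketch yields phi = {q3: 1.0, q1: 0.9, q2: 0.9, q0: 0.81}, so every transition toward the goal receives a positive shaping reward.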
