Open Access Article

Goal-Oriented Obstacle Avoidance with Deep Reinforcement Learning in Continuous Action Space

1 Department of Intelligent Robot Engineering, Hanyang University, Seoul 04763, Korea
2 Department of Electronics and Computer Engineering, Hanyang University, Seoul 04763, Korea
* Author to whom correspondence should be addressed.
Electronics 2020, 9(3), 411; https://doi.org/10.3390/electronics9030411
Received: 20 January 2020 / Revised: 24 February 2020 / Accepted: 26 February 2020 / Published: 28 February 2020
(This article belongs to the Section Artificial Intelligence)
In this paper, we propose a goal-oriented obstacle avoidance navigation system based on deep reinforcement learning that uses depth information of the scene, together with the goal position in polar coordinates, as its state inputs. The control signals for robot motion are output in a continuous action space. We devise a deep deterministic policy gradient network that includes depth-wise separable convolution layers to process the large amount of sequential depth image information. The goal-oriented obstacle avoidance navigation is performed without prior knowledge of the environment or a map. We show that, through the proposed deep reinforcement learning network, a goal-oriented collision avoidance model can be trained end-to-end without manual tuning or supervision by a human operator. We train our model in simulation, and the resulting network is directly transferred to other environments. Experiments demonstrate the capability of the trained network to navigate safely around obstacles and arrive at the designated goal positions in simulation, as well as in the real world. The proposed method exhibits higher reliability than the compared approaches when navigating around obstacles with complex shapes. The experiments also show that the approach is capable of avoiding not only static but also dynamic obstacles.
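The mixed-input actor described in the abstract can be illustrated with a minimal sketch: stacked depth frames pass through depth-wise separable convolutions, the resulting features are concatenated with the goal expressed in polar coordinates (distance, heading), and a fully connected head outputs bounded continuous velocity commands. All layer counts, sizes, and variable names below are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch of a mixed-input DDPG actor: depth images plus a
# polar goal vector mapped to continuous actions. Layer sizes are assumed.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depth-wise 3x3 convolution followed by a 1x1 point-wise convolution."""

    def __init__(self, in_ch, out_ch, stride=2):
        super().__init__()
        # groups=in_ch makes the first conv operate per-channel (depth-wise)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   stride=stride, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return torch.relu(self.pointwise(torch.relu(self.depthwise(x))))


class Actor(nn.Module):
    """Maps (stacked depth frames, polar goal) to continuous velocities."""

    def __init__(self, frames=4, action_dim=2):
        super().__init__()
        self.conv = nn.Sequential(
            DepthwiseSeparableConv(frames, 16),
            DepthwiseSeparableConv(16, 32),
            DepthwiseSeparableConv(32, 32),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # +2 appends the goal as (distance, heading angle)
        self.head = nn.Sequential(
            nn.Linear(32 + 2, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # bounded action outputs
        )

    def forward(self, depth, goal):
        features = self.conv(depth)
        return self.head(torch.cat([features, goal], dim=1))


actor = Actor()
depth = torch.rand(1, 4, 64, 64)   # batch of 4 stacked depth frames
goal = torch.tensor([[1.5, 0.3]])  # (distance in m, angle in rad) to goal
action = actor(depth, goal)        # e.g., (linear, angular) velocity in [-1, 1]
```

In a full DDPG setup this actor would be paired with a critic taking the same inputs plus the action; the `Tanh` output keeps commands in a bounded range that can be rescaled to the robot's velocity limits.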
Keywords: deep reinforcement learning; obstacle avoidance; map-less vector navigation; mixed-input network
Figure 1

  • Externally hosted supplementary file 1
    Link: https://youtu.be/nNWoabjKxIA
    Description: Training of the network in a simulation followed by validation in simulation as well as real-world environments.
MDPI and ACS Style

Cimurs, R.; Lee, J.H.; Suh, I.H. Goal-Oriented Obstacle Avoidance with Deep Reinforcement Learning in Continuous Action Space. Electronics 2020, 9, 411.

