Article

Learning Mobile Manipulation through Deep Reinforcement Learning

by Cong Wang 1,2,3,4, Qifeng Zhang 1,2,*, Qiyan Tian 1,2, Shuo Li 1,2, Xiaohui Wang 1,2, David Lane 4, Yvan Petillot 4 and Sen Wang 4
1 State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
2 Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 School of Engineering & Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK
* Author to whom correspondence should be addressed.
Sensors 2020, 20(3), 939; https://doi.org/10.3390/s20030939
Received: 31 December 2019 / Revised: 3 February 2020 / Accepted: 5 February 2020 / Published: 10 February 2020
Mobile manipulation has a broad range of applications in robotics. However, it is usually more challenging than fixed-base manipulation due to the complex coordination required between a mobile base and a manipulator. Although recent works have demonstrated that deep reinforcement learning is a powerful technique for fixed-base manipulation tasks, most of them are not applicable to mobile manipulation. This paper investigates how to leverage deep reinforcement learning to tackle whole-body mobile manipulation tasks in unstructured environments using only on-board sensors. A novel mobile manipulation system that integrates state-of-the-art deep reinforcement learning algorithms with visual perception is proposed. Its framework decouples visual perception from the deep reinforcement learning control, which enables the system to generalize from simulation training to real-world testing. Extensive simulation and real-world experiments show that the proposed system can autonomously grasp different types of objects in various scenarios, verifying its effectiveness.
Keywords: mobile manipulation; deep reinforcement learning; deep learning
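
To make the decoupled perception-and-control design described in the abstract concrete, the sketch below shows one way such a pipeline could be organized in Python. It is an illustrative assumption rather than the authors' implementation: the class names, the state layout (a 6-D target pose plus proprioceptive readings), and the network sizes are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code) of a decoupled mobile manipulation pipeline:
# a visual-perception module turns raw RGB-D observations into a compact target pose,
# and a separately trained DRL policy maps that low-dimensional state to whole-body
# commands (mobile base + manipulator). All names below are hypothetical.
import numpy as np
import torch
import torch.nn as nn


class PerceptionModule:
    """Hypothetical stand-in for the visual-perception stage.

    It consumes an on-board RGB-D frame and returns the grasp target's pose in the
    robot base frame, so the policy never sees raw pixels.
    """

    def estimate_target_pose(self, rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
        # In practice this would run an object detector / pose estimator.
        # Returned as [x, y, z, roll, pitch, yaw] in the robot base frame.
        raise NotImplementedError


class WholeBodyPolicy(nn.Module):
    """Hypothetical DRL policy: low-dimensional state in, whole-body action out."""

    def __init__(self, state_dim: int = 12, action_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # normalized base + joint velocities
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def control_step(perception, policy, rgb, depth, proprio):
    """One control cycle: perceive, build the state vector, query the policy."""
    target_pose = perception.estimate_target_pose(rgb, depth)          # 6-D pose
    state = np.concatenate([target_pose, proprio]).astype(np.float32)  # + joint/base state
    with torch.no_grad():
        action = policy(torch.from_numpy(state).unsqueeze(0)).squeeze(0).numpy()
    base_cmd, arm_cmd = action[:2], action[2:]  # e.g., (v, w) for the base, joint velocities for the arm
    return base_cmd, arm_cmd
```

Keeping raw pixels out of the policy input means only the perception stage has to bridge the visual sim-to-real gap, which is consistent with the abstract's claim that decoupling enables generalization from simulation training to real-world testing.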
MDPI and ACS Style

Wang, C.; Zhang, Q.; Tian, Q.; Li, S.; Wang, X.; Lane, D.; Petillot, Y.; Wang, S. Learning Mobile Manipulation through Deep Reinforcement Learning. Sensors 2020, 20, 939. https://doi.org/10.3390/s20030939

AMA Style

Wang C, Zhang Q, Tian Q, Li S, Wang X, Lane D, Petillot Y, Wang S. Learning Mobile Manipulation through Deep Reinforcement Learning. Sensors. 2020; 20(3):939. https://doi.org/10.3390/s20030939

Chicago/Turabian Style

Wang, Cong, Qifeng Zhang, Qiyan Tian, Shuo Li, Xiaohui Wang, David Lane, Yvan Petillot, and Sen Wang. 2020. "Learning Mobile Manipulation through Deep Reinforcement Learning" Sensors 20, no. 3: 939. https://doi.org/10.3390/s20030939
