Article

Variable Compliance Control for Robotic Peg-in-Hole Assembly: A Deep-Reinforcement-Learning Approach

Cristian C. Beltran-Hernandez, Damien Petit, Ixchel G. Ramirez-Alpizar and Kensuke Harada

Affiliations:
1. Graduate School of Engineering Science, Osaka University, Osaka 560-8531, Japan
2. Automation Research Team, Industrial CPS Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tokyo 135-0064, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(19), 6923; https://doi.org/10.3390/app10196923
Received: 24 August 2020 / Revised: 17 September 2020 / Accepted: 23 September 2020 / Published: 2 October 2020
(This article belongs to the Special Issue Machine-Learning Techniques for Robotics)
Industrial robot manipulators play a significant role in modern manufacturing. Although peg-in-hole assembly is a common industrial task that has been extensively researched, safely solving complex, high-precision assembly in an unstructured environment remains an open problem. Reinforcement-learning (RL) methods have proven successful in autonomously solving manipulation tasks. However, RL is still not widely adopted on real robotic systems, because working with real hardware entails additional challenges, especially when using position-controlled manipulators. The main contribution of this work is a learning-based method to solve peg-in-hole tasks with hole-position uncertainty. We propose the use of an off-policy, model-free reinforcement-learning method, and we speed up training with several transfer-learning techniques (sim2real) and domain randomization. Our proposed learning framework for position-controlled robots was extensively evaluated on contact-rich insertion tasks in a variety of environments.
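The abstract describes the approach only at a high level. As a rough illustration of the general idea, the sketch below shows what one step of an admittance-style variable-compliance controller for a position-controlled arm could look like, with an RL policy selecting per-axis stiffness and a small positional correction from force feedback. Everything here (the `policy` interface, the observation layout, the stiffness range) is an assumption made for illustration, not the authors' implementation.

```python
# Hedged sketch (not the paper's code): one control step of a variable-compliance,
# admittance-style scheme for a position-controlled arm. An RL policy chooses a
# small position correction and per-axis stiffness from the current state.
import numpy as np

def compliance_step(policy, x_cmd, x_goal, wrench):
    """Return the next commanded end-effector position (3,).

    x_cmd  : current commanded position [m], shape (3,)
    x_goal : nominal goal, e.g. the estimated hole position [m], shape (3,)
    wrench : measured contact force at the wrist [N], shape (3,)
    """
    # Simplified observation: position error and sensed force.
    obs = np.concatenate([x_goal - x_cmd, wrench])

    # Assumed policy output in [-1, 1]^6: 3 position-correction values
    # and 3 stiffness values.
    action = np.clip(policy(obs), -1.0, 1.0)
    dx_pi = 0.001 * action[:3]                                    # correction [m]
    stiffness = np.interp(action[3:], [-1.0, 1.0], [100.0, 2000.0])  # [N/m]

    # Quasi-static compliance law: the commanded pose yields to contact forces
    # in inverse proportion to the chosen stiffness (softer axis -> larger yield).
    return x_goal + dx_pi + wrench / stiffness

# Example with a stand-in random policy (illustration only).
rng = np.random.default_rng(0)
policy = lambda obs: rng.uniform(-1.0, 1.0, size=6)
x_next = compliance_step(policy,
                         x_cmd=np.zeros(3),
                         x_goal=np.array([0.0, 0.0, -0.05]),
                         wrench=np.array([0.0, 1.5, -4.0]))
```

In a sim2real pipeline, quantities such as the assumed hole position, friction, and sensor noise would typically be randomized between training episodes (domain randomization) before transferring the policy to hardware; the specific distributions used in the paper are not given in this abstract.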
Keywords: reinforcement learning; compliance control; robotic assembly; sim2real; domain randomization

Graphical abstract

MDPI and ACS Style

Beltran-Hernandez, C.C.; Petit, D.; Ramirez-Alpizar, I.G.; Harada, K. Variable Compliance Control for Robotic Peg-in-Hole Assembly: A Deep-Reinforcement-Learning Approach. Appl. Sci. 2020, 10, 6923. https://doi.org/10.3390/app10196923

