Reinforcement Learning for Robotics Applications

A special issue of Robotics (ISSN 2218-6581).

Deadline for manuscript submissions: closed (30 September 2020) | Viewed by 18668

Special Issue Editors


Prof. Dr. Wail Gueaieb
Guest Editor
School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, Canada
Interests: robotics and mechatronics; intelligent systems; multi-sensory data fusion

Dr. Mohammed Abouheaf
Guest Editor
School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, Canada
Interests: systems and controls; machine learning; artificially intelligent controllers for unmanned vehicles

Special Issue Information

Dear Colleagues,

As modern robotic systems grow increasingly complex, it is becoming essential that they develop their own intelligence to reduce their dependence on human operators. Reinforcement learning is an artificial intelligence technique that many researchers regard as one of the most promising vehicles in this direction. It is a machine learning paradigm in which an agent learns to choose its actions in a dynamic environment so as to maximize a cumulative reward. Thanks to their relative simplicity and their effectiveness in handling systems with large uncertainties, reinforcement learning algorithms have attracted growing interest from the robotics community, and a number of successful robotic applications have recently been reported in the literature. These applications range from mapless navigation and model-free control to object grasping and multi-robot coordination.
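To make the learn-by-reward loop concrete, the following is a minimal sketch in Python. It assumes a toy environment object exposing reset(), step(), and a list of discrete actions; the algorithm shown is standard tabular Q-learning, and all parameter values are illustrative.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
        """Tabular Q-learning on a toy environment (assumed interface)."""
        Q = defaultdict(float)  # Q[(state, action)] -> estimated return
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # Epsilon-greedy: mostly exploit the best known action,
                # but explore a random one with probability epsilon.
                if random.random() < epsilon:
                    action = random.choice(env.actions)
                else:
                    action = max(env.actions, key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(action)
                # Temporal-difference update toward the reward plus the
                # discounted value of the best next action, i.e., toward
                # the cumulative reward the agent is trying to maximize.
                best_next = max(Q[(next_state, a)] for a in env.actions)
                Q[(state, action)] += alpha * (reward + gamma * best_next
                                               - Q[(state, action)])
                state = next_state
        return Q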

This Special Issue invites researchers to showcase their novel contributions to the theory and applications of reinforcement learning in the field of robotics. It aims to promote recent advances in this research field while highlighting the main real-world challenges that are yet to be overcome. Potential topics include, but are not limited to, the following:

  • Model-based deep reinforcement learning methods
  • Lifelong learning for autonomous robots
  • Multi-task reinforcement learning
  • Goal-based skill learning
  • Reinforcement learning in humanoid robotics
  • Computational emotion models
  • Imitation learning
  • Self-supervised learning
  • Inverse reinforcement learning
  • Assistive and medical technologies
  • Multi-agent learning
  • Cooperative swarm robotics
  • System identification
  • Intelligent control systems

Prof. Dr. Wail Gueaieb
Dr. Mohammed Abouheaf
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)

Research

24 pages, 9414 KiB  
Article
Model-Free Optimized Tracking Control Heuristic
by Ning Wang, Mohammed Abouheaf, Wail Gueaieb and Nabil Nahas
Robotics 2020, 9(3), 49; https://doi.org/10.3390/robotics9030049 - 29 Jun 2020
Cited by 1 | Viewed by 4203
Abstract
Many tracking control solutions proposed in the literature rely on various forms of tracking error signals at the expense of possibly overlooking other dynamic criteria, such as the control effort, overshoot, and settling time. In this article, a model-free control architectural framework is presented to track reference signals while optimizing other criteria as per the designer's preference. The control architecture is model-free in the sense that the plant's dynamics do not have to be known in advance. To this end, we propose and compare four tracking control algorithms which synergistically integrate a few machine learning tools to compromise between tracking a reference signal and optimizing a user-defined dynamic cost function. This is accomplished via two orchestrated control loops, one for tracking and one for optimization. Two control algorithms are designed and compared for the tracking loop: the first is based on reinforcement learning, while the second is based on a nonlinear threshold accepting technique. The optimization control loop is implemented using an artificial neural network. Each controller is trained offline before being integrated into the aggregate control system. Simulation results of three scenarios with various complexities demonstrate the effectiveness of the proposed control schemes in forcing the tracking error to converge while minimizing a predefined system-wide objective function.
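As a rough illustration of the two orchestrated control loops described in this abstract, the sketch below combines a tracking correction with an optimization correction into one control signal. The component names and signal shapes are hypothetical placeholders; the paper's actual algorithms are not reproduced here.

    def control_step(tracking_policy, optimizer_net, reference, measurement):
        """Hypothetical aggregate controller mirroring the two-loop idea."""
        error = reference - measurement
        u_track = tracking_policy(error)    # tracking loop (e.g., trained via RL)
        u_opt = optimizer_net(measurement)  # optimization loop (offline-trained ANN)
        return u_track + u_opt              # combined, model-free control action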

15 pages, 2848 KiB  
Article
Application of Reinforcement Learning to a Robotic Drinking Assistant
by Tejas Kumar Shastha, Maria Kyrarini and Axel Gräser
Robotics 2020, 9(1), 1; https://doi.org/10.3390/robotics9010001 - 18 Dec 2019
Cited by 14 | Viewed by 6556
Abstract
Meal assistant robots form a very important part of the assistive robotics sector, since self-feeding is a priority activity of daily living (ADL) for people with physical disabilities such as tetraplegia. A quick survey of current trends in this domain reveals that, while tremendous progress has been made in the development of assistive robots for the feeding of solid foods, the task of feeding liquids from a cup remains largely underdeveloped. Therefore, this paper describes an assistive robot that focuses specifically on the feeding of liquids from a cup using tactile feedback through force sensors with direct human–robot interaction (HRI). The main focus of this paper is the application of reinforcement learning (RL) to learn the best robotic actions based on the force applied by the user. A model of the application environment is developed based on the Markov decision process, and a software training procedure is designed for quick development and testing. Five commonly used RL algorithms are investigated with the intention of finding the best fit for training, and the system is tested in an experimental study. The preliminary results show a high degree of acceptance by the participants. Feedback from the users indicates that the assistive robot functions intuitively and effectively.
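For intuition, a force-driven interaction like the one described here might be encoded as a small Markov decision process along the following lines. The discretized force states, robot actions, and reward values below are invented for illustration and are not the paper's actual design.

    # Hypothetical MDP ingredients for a force-driven drinking assistant.
    FORCE_STATES = ["no_contact", "light_push", "firm_push"]  # discretized sensor readings
    ACTIONS = ["hold", "tilt_cup", "retract"]                 # robot motion primitives

    def reward(state, action):
        """Reward tilting only under firm contact; discourage idle motion."""
        if state == "firm_push" and action == "tilt_cup":
            return 1.0
        if state == "no_contact" and action == "retract":
            return 0.5
        return -0.1  # small penalty for every other state-action pair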

16 pages, 3415 KiB  
Article
Accelerating Interactive Reinforcement Learning by Human Advice for an Assembly Task by a Cobot
by Joris De Winter, Albert De Beir, Ilias El Makrini, Greet Van de Perre, Ann Nowé and Bram Vanderborght
Robotics 2019, 8(4), 104; https://doi.org/10.3390/robotics8040104 - 16 Dec 2019
Cited by 12 | Viewed by 6660
Abstract
The assembly industry is shifting towards customizable products and the assembly of small batches. This requires frequent reprogramming, which is expensive because a specialized engineer is needed. It would therefore be an improvement if untrained workers could help a cobot learn an assembly sequence by giving advice. Learning an assembly sequence is a hard task for a cobot, because the solution space grows drastically as the complexity of the task increases. This work introduces a novel method in which human knowledge is used to reduce this solution space and, as a result, increase the learning speed. The proposed method, IRL-PBRS, uses Interactive Reinforcement Learning (IRL) to learn from human advice in an interactive way, and uses Potential Based Reward Shaping (PBRS), in a simulated environment, to focus learning on a smaller part of the solution space. The method was compared in simulation to two other feedback strategies. The results show that IRL-PBRS converges more quickly to a valid assembly sequence policy and does so with the fewest human interactions. Finally, a use case is presented in which participants were asked to program an assembly task. Here, the results show that IRL-PBRS learns quickly enough to keep up with the advice given by a user and is able to adapt online to a changing knowledge base.
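Potential-based reward shaping, the PBRS component named above, adds the term γΦ(s') − Φ(s) to the environment reward; Ng et al. (1999) showed that this leaves the optimal policy unchanged while steering exploration. A minimal sketch follows, with a placeholder potential function (in IRL-PBRS, the potential is built from human advice):

    def shaped_reward(r, state, next_state, phi, gamma=0.95):
        """Potential-based shaping: r + gamma * phi(s') - phi(s).

        phi is a placeholder potential; IRL-PBRS derives it from the
        advice a human gives about promising assembly states.
        """
        return r + gamma * phi(next_state) - phi(state)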