Special Issue "Reinforcement Learning for Robotics Applications"

A special issue of Robotics (ISSN 2218-6581).

Deadline for manuscript submissions: 30 April 2020.

Special Issue Editors

Prof. Dr. Wail Gueaieb
Guest Editor
School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada
Interests: robotics and mechatronics; control systems; intelligent systems; multi-sensory data fusion
Dr. Mohammed Abouheaf
Guest Editor
School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada
Interests: systems and controls; machine learning; artificial intelligence controllers for unmanned vehicles

Special Issue Information

Dear Colleagues,

With the ever-increasing complexity of current robotic systems, it is becoming imperative that they develop their own intelligence to reduce their dependence on human operators. Reinforcement learning is an artificial intelligence technique that many researchers regard as among the most promising routes in this direction. It is a machine learning concept that teaches an agent how to choose its actions in a dynamic environment so as to maximize a cumulative reward. Thanks to their relative simplicity and effectiveness in handling systems with large uncertainties, these algorithms have attracted growing interest from the robotics community. A number of successful robotic applications of such methods have recently been reported in the literature, ranging from mapless navigation and model-free control to object grasping and multirobot coordination.
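As a purely illustrative aside, the core idea sketched above (an agent learning to maximize a cumulative reward through trial and error) can be captured by tabular Q-learning on a toy problem. The corridor task, states, rewards, and parameters below are hypothetical examples, not drawn from any paper in this issue.

```python
import random

# Toy illustration of reinforcement learning: an agent in a short 1-D
# corridor learns, by trial and error, to walk right toward a goal
# state that yields reward +1.

N_STATES = 5            # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    # Deterministic transition, clipped to the corridor; reward only at goal.
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection balances exploration and exploitation.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right (+1) from every non-goal state.
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)}
print(policy)
```

The same reward-maximization loop underlies the far richer robotic applications discussed in this issue; only the state, action, and reward definitions change.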

This Special Issue invites researchers to showcase their novel contributions to the theory and applications of reinforcement learning in the field of robotics. It aims at promoting the recent advances in this research field while highlighting the main real-world challenges that are yet to be overcome. Potential topics include, but are not limited to, the following:

  • Model-based deep reinforcement learning methods
  • Lifelong learning for autonomous robots
  • Multi-task reinforcement learning
  • Goal-based skill learning
  • Reinforcement learning in humanoid robotics
  • Computational emotion models
  • Imitation learning
  • Self-supervised learning
  • Inverse reinforcement learning
  • Assistive and medical technologies
  • Multi-agent learning
  • Cooperative swarm robotics
  • System identification
  • Intelligent control systems

Prof. Dr. Wail Gueaieb
Dr. Mohammed Abouheaf
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as they are accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Research

Open Access Article
Application of Reinforcement Learning to a Robotic Drinking Assistant
Robotics 2020, 9(1), 1; https://doi.org/10.3390/robotics9010001 - 18 Dec 2019
Abstract
Meal assistant robots form a very important part of the assistive robotics sector since self-feeding is a priority activity of daily living (ADL) for people suffering from physical disabilities like tetraplegia. A quick survey of the current trends in this domain reveals that, while tremendous progress has been made in the development of assistive robots for the feeding of solid foods, the task of feeding liquids from a cup remains largely underdeveloped. Therefore, this paper describes an assistive robot that focuses specifically on the feeding of liquids from a cup using tactile feedback through force sensors with direct human–robot interaction (HRI). The main focus of this paper is the application of reinforcement learning (RL) to learn what the best robotic actions are, based on the force applied by the user. A model of the application environment is developed based on the Markov decision process and a software training procedure is designed for quick development and testing. Five of the commonly used RL algorithms are investigated, with the intention of finding the best fit for training, and the system is tested in an experimental study. The preliminary results show a high degree of acceptance by the participants. Feedback from the users indicates that the assistive robot functions intuitively and effectively.
(This article belongs to the Special Issue Reinforcement Learning for Robotics Applications)

Open Access Article
Accelerating Interactive Reinforcement Learning by Human Advice for an Assembly Task by a Cobot
Robotics 2019, 8(4), 104; https://doi.org/10.3390/robotics8040104 - 16 Dec 2019
Abstract
The assembly industry is shifting more towards customizable products, or requiring assembly of small batches. This requires a lot of reprogramming, which is expensive because a specialized engineer is required. It would be an improvement if untrained workers could help a cobot to learn an assembly sequence by giving advice. Learning an assembly sequence is a hard task for a cobot, because the solution space increases drastically when the complexity of the task increases. This work introduces a novel method where human knowledge is used to reduce this solution space, and as a result increases the learning speed. The method proposed is the IRL-PBRS method, which uses Interactive Reinforcement Learning (IRL) to learn from human advice in an interactive way, and uses Potential Based Reward Shaping (PBRS), in a simulated environment, to focus learning on a smaller part of the solution space. The method was compared in simulation to two other feedback strategies. The results show that IRL-PBRS converges more quickly to a valid assembly sequence policy and does this with the fewest human interactions. Finally, a use case is presented where participants were asked to program an assembly task. Here, the results show that IRL-PBRS learns quickly enough to keep up with advice given by a user, and is able to adapt online to a changing knowledge base.
(This article belongs to the Special Issue Reinforcement Learning for Robotics Applications)
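As background for readers unfamiliar with the PBRS component mentioned in the abstract above, the general idea can be sketched as follows. The potential function and the toy task here are hypothetical stand-ins to illustrate the shaping formula, not the authors' implementation.

```python
# Minimal sketch of potential-based reward shaping (PBRS). A potential
# function phi(s) encodes prior knowledge (e.g. human advice) about how
# promising a state is; the shaping term gamma * phi(s') - phi(s) is
# added to the environment reward. Shaping of this form is known to
# leave the optimal policy unchanged while guiding exploration.

GAMMA = 0.9

def phi(state, goal):
    # Hypothetical potential: states closer to the goal rank higher.
    return -abs(goal - state)

def shaped_reward(r_env, s, s2, goal):
    # r_env: the environment's own reward; the shaping bonus is added on top.
    return r_env + GAMMA * phi(s2, goal) - phi(s, goal)

# A step toward the goal earns a positive shaping bonus even when the
# environment reward is zero, while a step away is penalized.
print(shaped_reward(0.0, s=2, s2=3, goal=4))
print(shaped_reward(0.0, s=3, s2=2, goal=4))
```

In the article above, this mechanism is what lets human advice shrink the effective solution space: advice shapes the reward signal without overriding the underlying task objective.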
