
Table of Contents

Robotics, Volume 2, Issue 3 (September 2013), Pages 122-186

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-4

Editorial

Jump to: Research

Open Access Editorial: Special Issue on Intelligent Robots
Robotics 2013, 2(3), 185-186; doi:10.3390/robotics2030185
Received: 1 August 2013 / Accepted: 2 August 2013 / Published: 6 August 2013
PDF Full-text (59 KB) | HTML Full-text | XML Full-text
Abstract
The research on intelligent robots will produce robots that are able to operate in everyday life environments, to adapt their behaviour to environmental changes, and to cooperate with other team members and humans. Operating in human environments, robots need to process, in real time, a large amount of sensory data (such as vision, laser and microphone input) in order to determine the best action. Intelligent algorithms have been successfully applied to link complex sensory data to robot action. This editorial briefly summarizes recent findings in the field of intelligent robots as described in the articles published in this special issue.
(This article belongs to the Special Issue Intelligent Robots)

Research

Jump to: Editorial

Open Access Article: Reinforcement Learning in Robotics: Applications and Real-World Challenges
Robotics 2013, 2(3), 122-148; doi:10.3390/robotics2030122
Received: 4 June 2013 / Revised: 24 June 2013 / Accepted: 28 June 2013 / Published: 5 July 2013
Cited by 15 | PDF Full-text (1941 KB) | HTML Full-text | XML Full-text
Abstract
In robotics, the ultimate goal of reinforcement learning is to endow robots with the ability to learn, improve, adapt and reproduce tasks with dynamically changing constraints, based on exploration and autonomous learning. We give a summary of the state of the art of reinforcement learning in the context of robotics, in terms of both algorithms and policy representations. Numerous challenges faced by the policy representation in robotics are identified. Three recent examples of the application of reinforcement learning to real-world robots are described: a pancake flipping task, a bipedal walking energy minimization task and an archery-based aiming task. In all examples, a state-of-the-art expectation-maximization-based reinforcement learning algorithm is used, and different policy representations are proposed and evaluated for each task. The proposed policy representations offer viable solutions to six rarely addressed challenges in policy representations: correlations, adaptability, multi-resolution, globality, multi-dimensionality and convergence. Both the successes and the practical difficulties encountered in these examples are discussed. Based on insights from these particular cases, conclusions are drawn about the state of the art and future research directions for reinforcement learning in robotics.
(This article belongs to the Special Issue Intelligent Robots)
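The expectation-maximization-based reinforcement learning the abstract refers to (e.g. PoWER-style reward-weighted policy search) can be sketched as follows. The quadratic task reward, the two-dimensional parameter vector and all constants here are illustrative stand-ins, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_reward(theta):
    # Hypothetical rollout: reward peaks at theta = [1, -1], standing in
    # for an expensive robot trial such as a pancake-flipping attempt.
    return np.exp(-np.sum((theta - np.array([1.0, -1.0])) ** 2))

theta = np.zeros(2)   # policy parameters (e.g. motion-primitive weights)
sigma = 0.5           # exploration noise, annealed over iterations
for _ in range(100):
    # E-step: sample exploratory rollouts around the current policy.
    samples = theta + sigma * rng.standard_normal((20, 2))
    rewards = np.array([rollout_reward(s) for s in samples])
    # M-step: reward-weighted average of the explored parameters,
    # the core update of EM-based policy search.
    theta = (rewards / rewards.sum()) @ samples
    sigma *= 0.97     # gradually reduce exploration

print(np.round(theta, 2))
```

Because the update is a convex combination of sampled parameters, no learning-rate tuning is needed; exploration noise plays that role instead.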

Open Access Article: An Improved Reinforcement Learning System Using Affective Factors
Robotics 2013, 2(3), 149-164; doi:10.3390/robotics2030149
Received: 30 May 2013 / Revised: 25 June 2013 / Accepted: 27 June 2013 / Published: 10 July 2013
Cited by 1 | PDF Full-text (431 KB) | HTML Full-text | XML Full-text
Abstract
As a powerful and intelligent machine learning method, reinforcement learning (RL) has been widely used in many fields, such as game theory, adaptive control, multi-agent systems and nonlinear forecasting. The main contribution of this technique is its exploration and exploitation approach to finding the optimal or semi-optimal solution of goal-directed problems. However, when RL is applied to multi-agent systems (MASs), problems such as the "curse of dimensionality", the "perceptual aliasing problem" and the uncertainty of the environment constitute high hurdles. Meanwhile, although RL is inspired by behavioral psychology and uses reward/punishment from the environment, higher mental factors such as affects, emotions and motivations are rarely adopted in its learning procedure. In this paper, to address the challenges agents face when learning in MASs, we propose a computational motivation function, which adopts the two principal affective factors, "Arousal" and "Pleasure", of Russell's circumplex model of affect, to improve the learning performance of a conventional RL algorithm, Q-learning (QL). Computer simulations of pursuit problems with static and dynamic prey were carried out, and the results showed that, compared with conventional QL, the proposed method gives agents a faster and more stable learning performance.
(This article belongs to the Special Issue Intelligent Robots)
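A minimal sketch of the general idea of modulating Q-learning with affective signals: here a hypothetical "pleasure" term tracks recent reward and an "arousal" term tracks surprise (TD-error magnitude), loosely following Russell's two axes. The update rule, the chain task and all constants are illustrative assumptions, not the paper's motivation function:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 5, 2       # 1-D chain; action 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2
pleasure, arousal = 0.0, 0.0     # affective state (illustrative)

for _ in range(200):
    s = 0
    while s != n_states - 1:     # goal is the rightmost state
        # Epsilon-greedy action selection with random tie-breaking.
        if rng.random() < eps or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        td = r + gamma * Q[s2].max() - Q[s, a]
        # Hypothetical affect dynamics: pleasure tracks recent reward,
        # arousal tracks surprise (the magnitude of the TD error).
        pleasure = 0.9 * pleasure + 0.1 * r
        arousal = 0.9 * arousal + 0.1 * abs(td)
        # Motivation function: arousal amplifies the effective learning
        # rate; pleasure adds a small intrinsic bonus to the update.
        Q[s, a] += alpha * (1.0 + arousal) * (td + 0.1 * pleasure)
        s = s2

print(int(Q[0].argmax()))   # greedy action in the start state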
Open Access Article: Hormone-Inspired Behaviour Switching for the Control of Collective Robotic Organisms
Robotics 2013, 2(3), 165-184; doi:10.3390/robotics2030165
Received: 1 June 2013 / Revised: 15 July 2013 / Accepted: 16 July 2013 / Published: 24 July 2013
Cited by 1 | PDF Full-text (4806 KB) | HTML Full-text | XML Full-text
Abstract
Swarming and modular robotic locomotion are two disconnected behaviours that a group of small homogeneous robots can achieve. Both are popular subjects in robotics research involving search, rescue and exploration. However, they are rarely addressed as two behaviours that can coexist within a single robotic system. Here, we present a bio-inspired decision mechanism, which provides a convenient way for evolution to configure the conditions and timing of behaving as a swarm or as a modular robot in an exploration scenario. The decision mechanism switches between two previously developed behaviours (pheromone-based swarm control and sinusoidal rectilinear modular robot movement). We use Genetic Programming (GP) to evolve the controller for these decisions, which acts without a centralized mechanism and with limited inter-robot communication. The results show that the proposed bio-inspired decision mechanism provides an evolvable medium for GP to exploit in evolving an effective decision-making mechanism.
(This article belongs to the Special Issue Intelligent Robots)
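The hormone-inspired switching idea can be illustrated with a toy controller: a virtual hormone is secreted in response to a stimulus and decays otherwise, and a threshold on its level selects between swarm and modular-organism behaviour. The secretion, decay and threshold constants are invented for illustration; in the paper the switching conditions and timing are evolved by GP rather than hand-set:

```python
def step_hormone(h, stimulus, secretion=0.3, decay=0.1):
    # Virtual hormone: secreted in response to a stimulus, decays
    # otherwise; level is clamped to [0, 1].
    return max(0.0, min(1.0, h + secretion * stimulus - decay * h))

def behaviour(h, threshold=0.5):
    # Above the threshold the robots dock into a modular organism;
    # below it they disperse as a swarm (threshold is illustrative).
    return "organism" if h > threshold else "swarm"

h = 0.0
trace = []
# A burst of stimulus (e.g. an obstacle only a connected organism can
# cross), followed by quiet steps in which the hormone decays away.
for stimulus in [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]:
    h = step_hormone(h, stimulus)
    trace.append(behaviour(h))

print(trace)
```

The decay term gives the switch hysteresis in time: the group stays in organism mode for several steps after the stimulus ends, then falls back to swarming.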

Journal Contact

MDPI AG
Robotics Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
robotics@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18