Special Issue "Intelligent Robots"

A special issue of Robotics (ISSN 2218-6581).

Deadline for manuscript submissions: closed (31 March 2013)

Special Issue Editor

Guest Editor
Prof. Dr. Genci Capi

Department of Electrical and Electronic Systems Engineering, Faculty of Engineering, University of Toyama Gofuku Campus, 3190 Gofuku, Toyama, 930-8555, Japan
Interests: evolutionary robotics; human-robot interaction; reinforcement learning; humanoid robots; service robots; intelligent robots

Special Issue Information

Dear Colleagues,

Future robots are expected to operate in unstructured and unpredictable environments. Therefore, robots must adapt their policies as the environment changes. Learning and evolution have been shown to produce good mappings from various sensory inputs to robot actions. The goal of this special issue is to bring together recent work on a wide range of topics concerning the application of learning and evolution in robotics.

The scope of the special issue includes, but is not limited to:

  • Reinforcement learning
  • Evolutionary robotics
  • Sensorimotor learning
  • Combining learning and evolution
  • Hierarchical learning
  • Biologically motivated neural controllers
  • Imitation learning
  • Learning and evolution in multi-robot systems

Prof. Dr. Genci Capi
Guest Editor

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (N.B. Conference papers may only be submitted if the paper was not originally copyrighted and if it has been extended substantially and completely re-written). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed Open Access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. For the first couple of issues the Article Processing Charge (APC) will be waived for well-prepared manuscripts. English correction and/or formatting fees of 250 CHF (Swiss Francs) will be charged in certain cases for those articles accepted for publication that require extensive additional formatting and/or English corrections.

Keywords

  • reinforcement learning
  • evolutionary robotics
  • imitation learning
  • hybrid learning

Published Papers (6 papers)


Editorial

Open Access Editorial: Special Issue on Intelligent Robots
Robotics 2013, 2(3), 185-186; doi:10.3390/robotics2030185
Received: 1 August 2013 / Accepted: 2 August 2013 / Published: 6 August 2013
Abstract
The research on intelligent robots will produce robots that are able to operate in everyday environments, to adapt their programs according to environment changes, and to cooperate with other team members and humans. Operating in human environments, robots need to process, in real time, a large amount of sensory data (such as vision, laser, and microphone data) in order to determine the best action. Intelligent algorithms have been successfully applied to link complex sensory data to robot actions. This editorial briefly summarizes recent findings in the field of intelligent robots as described in the articles published in this special issue.
(This article belongs to the Special Issue Intelligent Robots)

Research

Open Access Article: Hormone-Inspired Behaviour Switching for the Control of Collective Robotic Organisms
Robotics 2013, 2(3), 165-184; doi:10.3390/robotics2030165
Received: 1 June 2013 / Revised: 15 July 2013 / Accepted: 16 July 2013 / Published: 24 July 2013
Abstract
Swarming and modular robotic locomotion are two disconnected behaviours that a group of small homogeneous robots can be used to achieve. The use of these two behaviours is a popular subject in robotics research involving search, rescue and exploration. However, they are rarely addressed as two behaviours that can coexist within a single robotic system. Here, we present a bio-inspired decision mechanism, which provides a convenient way for evolution to configure the conditions and timing of behaving as a swarm or a modular robot in an exploration scenario. The decision mechanism switches between two previously developed behaviours (a pheromone-based swarm control and a sinusoidal rectilinear modular robot movement). We use Genetic Programming (GP) to evolve the controller for these decisions, which acts without a centralized mechanism and with limited inter-robot communication. The results show that the proposed bio-inspired decision mechanism provides an evolvable medium for the GP to utilize in evolving an effective decision-making mechanism.
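As a rough illustration of the idea described above, the following sketch (not the authors' code; the class name, threshold, and decay rate are hypothetical) shows a single internal "hormone" level that accumulates stimulus, decays over time, and toggles a robot between swarm and modular behaviours when it crosses a threshold:

```python
class HormoneSwitch:
    """Toy hormone-inspired behaviour switch (illustrative only)."""

    def __init__(self, threshold=1.0, decay=0.9):
        self.level = 0.0          # internal "hormone" concentration
        self.threshold = threshold
        self.decay = decay
        self.behaviour = "swarm"  # start in swarm mode

    def step(self, stimulus):
        """Decay the hormone, add the new stimulus, and switch
        behaviour when the level crosses the threshold."""
        self.level = self.level * self.decay + stimulus
        if self.level >= self.threshold:
            self.behaviour = "modular" if self.behaviour == "swarm" else "swarm"
            self.level = 0.0      # reset after a switch
        return self.behaviour

switch = HormoneSwitch(threshold=1.0)
modes = [switch.step(0.4) for _ in range(5)]
```

A constant stimulus of 0.4 per step crosses the threshold on the third step, so the robot runs in swarm mode twice and then switches to modular locomotion.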

Open Access Article: An Improved Reinforcement Learning System Using Affective Factors
Robotics 2013, 2(3), 149-164; doi:10.3390/robotics2030149
Received: 30 May 2013 / Revised: 25 June 2013 / Accepted: 27 June 2013 / Published: 10 July 2013
Abstract
As a powerful and intelligent machine learning method, reinforcement learning (RL) has been widely used in many fields, such as game theory, adaptive control, multi-agent systems, and nonlinear forecasting. The main contribution of this technique is its exploration and exploitation approaches for finding optimal or semi-optimal solutions to goal-directed problems. However, when RL is applied to multi-agent systems (MASs), problems such as the "curse of dimensionality", the perceptual aliasing problem, and the uncertainty of the environment constitute high hurdles. Meanwhile, although RL is inspired by behavioral psychology and uses reward/punishment from the environment, higher mental factors such as affects, emotions, and motivations are rarely adopted in its learning procedure. In this paper, to improve agent learning in MASs, we propose a computational motivation function, which adopts the two principal affective factors "Arousal" and "Pleasure" from Russell's circumplex model of affect, to improve the learning performance of a conventional RL algorithm, Q-learning (QL). Computer simulations of pursuit problems with static and dynamic prey were carried out, and the results showed that, compared with conventional QL, the proposed method gives agents faster and more stable learning performance.
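For reference, the conventional Q-learning baseline that the paper augments with affective factors can be sketched as follows; the 5-state corridor task and all parameters are hypothetical, chosen only to show the update rule:

```python
import random

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy 5-state corridor (goal at state 4)."""
    rng = random.Random(seed)
    n_states, goal = 5, 4
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == goal else 0.0
            # Q-learning update: bootstrap from the greedy next-state value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(4)]
```

After training, the greedy policy moves right in every non-goal state, which is the shortest path to the reward.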
Open Access Article: Reinforcement Learning in Robotics: Applications and Real-World Challenges
Robotics 2013, 2(3), 122-148; doi:10.3390/robotics2030122
Received: 4 June 2013 / Revised: 24 June 2013 / Accepted: 28 June 2013 / Published: 5 July 2013
Abstract
In robotics, the ultimate goal of reinforcement learning is to endow robots with the ability to learn, improve, adapt and reproduce tasks with dynamically changing constraints based on exploration and autonomous learning. We give a summary of the state-of-the-art of reinforcement learning in the context of robotics, in terms of both algorithms and policy representations. Numerous challenges faced by the policy representation in robotics are identified. Three recent examples of the application of reinforcement learning to real-world robots are described: a pancake flipping task, a bipedal walking energy minimization task and an archery-based aiming task. In all examples, a state-of-the-art expectation-maximization-based reinforcement learning algorithm is used, and different policy representations are proposed and evaluated for each task. The proposed policy representations offer viable solutions to six rarely-addressed challenges in policy representations: correlations, adaptability, multi-resolution, globality, multi-dimensionality and convergence. Both the successes and the practical difficulties encountered in these examples are discussed. Based on insights from these particular cases, conclusions are drawn about the state-of-the-art and future directions for reinforcement learning in robotics.
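The expectation-maximization flavour of reinforcement learning mentioned in the abstract can be illustrated, very loosely, by reward-weighted averaging over sampled policy parameters. The 1-D task below and all numbers are hypothetical and are not from the surveyed work:

```python
import math
import random

def em_policy_search(iterations=30, samples=50, sigma=0.5, seed=1):
    """Toy EM-style policy search: find the scalar parameter theta that
    maximizes a reward peaked at theta = 2."""
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(iterations):
        # E-step: sample perturbed parameters and score each rollout
        thetas = [theta + rng.gauss(0.0, sigma) for _ in range(samples)]
        rewards = [math.exp(-(t - 2.0) ** 2) for t in thetas]
        # M-step: new mean is the reward-weighted average of the samples
        total = sum(rewards)
        theta = sum(t * r for t, r in zip(thetas, rewards)) / total
    return theta

theta = em_policy_search()
```

Each iteration pulls the policy mean toward the samples that earned the most reward, so theta converges to the neighbourhood of the optimum without any gradient information.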

Open Access Article: Computationally Efficient Adaptive Type-2 Fuzzy Control of Flexible-Joint Manipulators
Robotics 2013, 2(2), 66-91; doi:10.3390/robotics2020066
Received: 1 April 2013 / Revised: 4 May 2013 / Accepted: 13 May 2013 / Published: 21 May 2013
Abstract
In this paper, we introduce an adaptive type-2 fuzzy logic controller (FLC) for flexible-joint manipulators with structured and unstructured dynamical uncertainties. Simplified interval fuzzy sets are used for real-time efficiency, and internal stability is enhanced by adopting a trade-off strategy between the manipulator's and the actuators' velocities. Furthermore, the control scheme is independent of the computationally expensive and noisy torque and acceleration signals. The controller is validated through a set of numerical simulations and by comparison against its type-1 counterpart. The ability of the adaptive type-2 FLC to cope with large magnitudes of uncertainty yields improved performance. The stability of the proposed control system is guaranteed using Lyapunov stability theory.
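A minimal sketch of the interval idea behind simplified type-2 fuzzy sets (not the paper's controller; the triangular shapes and numbers are hypothetical): the footprint of uncertainty is bounded by an upper and a lower membership function, and a simplified firing strength can be taken as the midpoint of that interval:

```python
def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def interval_firing(x):
    """Evaluate an interval fuzzy set: a wide triangle bounds the
    membership from above, a narrow one from below; the simplified
    firing strength is the midpoint of the two."""
    upper = tri(x, -1.5, 0.0, 1.5)   # upper membership function
    lower = tri(x, -1.0, 0.0, 1.0)   # lower membership function
    return lower, upper, (lower + upper) / 2.0

lo, up, mid = interval_firing(0.5)
```

Collapsing the interval to a single number is what makes such simplified sets cheap enough for real-time control, at the cost of discarding some of the uncertainty information a full type-2 inference would keep.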

Open Access Article: An Artificial Neural Network Based Robot Controller that Uses Rat's Brain Signals
Robotics 2013, 2(2), 54-65; doi:10.3390/robotics2020054
Received: 27 March 2013 / Revised: 16 April 2013 / Accepted: 19 April 2013 / Published: 29 April 2013
Abstract
The brain-machine interface (BMI) has been proposed as a novel technique to control prosthetic devices aimed at restoring motor functions in paralyzed patients. In this paper, we propose a neural-network-based controller that maps a rat's brain signals and transforms them into robot movement. First, the rat is trained to move the robot by pressing the right and left levers in order to get food. Next, we collect brain signals with four implanted electrodes, two in the motor cortex and two in the somatosensory cortex. The collected data are used to train and evaluate different artificial neural controllers. The trained neural controllers are then employed online to map brain signals into robot motion. Offline and online classification results show that the Radial Basis Function Neural Network (RBFNN) outperforms the other neural networks. In addition, online robot control results show that, even with a limited number of electrodes, the robot motion generated by the RBFNN matched the motion commanded by the left and right lever positions.
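A toy radial-basis-function classifier illustrates the kind of signal-to-command mapping evaluated in the paper; the centers, width, and weights below are hand-picked for illustration, not values from the study:

```python
import math

def rbf_features(x, centers, width=1.0):
    """Gaussian activation of the input around each center."""
    return [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * width ** 2))
        for c in centers
    ]

def classify(x, centers, weights):
    """Linear readout over RBF activations: positive score -> 'right'."""
    phi = rbf_features(x, centers)
    score = sum(w * p for w, p in zip(weights, phi))
    return "right" if score > 0 else "left"

centers = [(1.0, 0.0), (-1.0, 0.0)]   # one prototype per lever/command
weights = [1.0, -1.0]
cmd = classify((0.8, 0.1), centers, weights)
```

A feature vector near the first prototype activates it most strongly and yields the "right" command; in the paper's setting the inputs would be features of the recorded neural signals and the weights would be learned from the collected data.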

Journal Contact

MDPI AG
Robotics Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
robotics@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18