Robotics 2013, 2(3), 185-186; doi:10.3390/robotics2030185
Abstract: Research on intelligent robots aims to produce robots that are able to operate in everyday life environments, to adapt their behavior to environmental changes, and to cooperate with other team members and humans. Operating in human environments, robots need to process, in real time, a large amount of sensory data (vision, laser, microphone) in order to determine the best action. Intelligent algorithms have been successfully applied to link complex sensory data to robot actions. This editorial briefly summarizes recent findings in the field of intelligent robots as described in the articles published in this special issue.
Robots are expected to operate in everyday life environments. In contrast to industrial robots, which perform a set of predetermined motions, robots operating in human environments must adapt their policy to the surrounding conditions. Over the last few decades, many successful applications of intelligent algorithms, such as neural networks, genetic algorithms, and fuzzy logic, have been proposed. Other important applications of intelligent algorithms have been developed in the field of multiple-robot systems, for tasks such as environment exploration, collaborative object transportation, and surveillance missions. The focus of this special issue is on recent findings in the field of intelligent robots [1,2,3,4,5].
Genetic Programming (GP) has proven to give good results in the field of intelligent robots. Kuyucu et al. [1] present a bio-inspired decision mechanism that provides a convenient way for evolution to configure the conditions and timing of behaving as a swarm or as a modular robot in an exploration scenario. The collective decision of a multi-robot system to switch from one type of behavior to another, when each individual robot makes its own decisions, is very complex. The authors used GP to evolve the controller for these decisions, which acts without a centralized mechanism and with limited inter-robot communication.
In the field of intelligent robots, reinforcement learning has proven effective for creating robots with the ability to learn and adapt their policy in dynamic environments. An interesting application of reinforcement learning in multi-agent systems is presented by Kuremoto et al. [2]. In contrast to conventional Q-learning algorithms, the authors propose a computational motivation function that adopts the two principal affective factors, “Arousal” and “Pleasure”, of Russell’s circumplex model. Computer simulations of pursuit problems with static and dynamic prey show fast and stable learning performance.
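The general idea of folding an affective motivation term into a Q-learning update can be sketched in a few lines. The following is only an illustration under assumed names and constants (a plain grid-world pursuit with a hypothetical arousal-times-pleasure bonus added to the reward), not the formulation of Kuremoto et al.:

```python
import random

# Toy grid-world pursuit of a fixed goal with Q-learning plus an
# illustrative "motivation" bonus. The arousal/pleasure terms below are a
# simplified sketch, not the exact model from the paper.

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
GOAL, SIZE = (4, 4), 5
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def motivation(arousal, pleasure):
    # Hypothetical affective bonus: arousal scales the effect of pleasure.
    return arousal * pleasure

def step(state, action):
    nx = min(max(state[0] + action[0], 0), SIZE - 1)
    ny = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (nx, ny)
    return nxt, (1.0 if nxt == GOAL else -0.01)

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            if rng.random() < EPS:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
            s2, r = step(s, ACTIONS[a])
            # Pleasure follows the reward; arousal follows its magnitude.
            r_total = r + motivation(arousal=abs(r), pleasure=r)
            best_next = max(Q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + ALPHA * (r_total + GAMMA * best_next - q)
            s = s2
            if s == GOAL:
                break
    return Q
```

The affective bonus simply reshapes the reward signal seen by the standard temporal-difference update, which is why such schemes can accelerate learning without altering the underlying algorithm.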
Kormushev et al. [3] provide a state-of-the-art summary of reinforcement learning and its applications in robotics. Specifically, the paper focuses on the performance evaluation of reinforcement learning policy representations. The authors propose a policy representation that offers correlation, adaptability, multi-resolution, globality, multi-dimensionality, and convergence. The proposed policy representation is applied to a pancake-flipping task on a robot manipulator.
Control of flexible manipulators, which have several advantages over their rigid counterparts, is a challenging research problem. The combination of strong, and particularly unstructured, nonlinearities, such as Coulomb friction and joint elasticity, significantly changes the system’s dynamics. Chaoui et al. [4] applied an adaptive fuzzy logic controller that can deal with both structured and unstructured dynamic uncertainties. The authors show that a type-2 fuzzy controller outperforms a type-1 controller when uncertainties of large magnitude are present.
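The intuition behind type-2 fuzzy sets under uncertainty can be sketched briefly. As an illustration only (an interval type-2 Gaussian membership with an uncertain mean; the function names and numbers are assumptions, not taken from the paper), a type-2 set returns an interval of membership grades where a type-1 set must commit to a single value:

```python
import math

# Minimal sketch of an interval type-2 Gaussian membership function.
# When the mean is only known to lie in [m1, m2], the membership grade of
# an input x becomes an interval [lower, upper] rather than a single number.

def gauss(x, m, sigma):
    return math.exp(-0.5 * ((x - m) / sigma) ** 2)

def it2_membership(x, m1, m2, sigma):
    """Lower/upper membership of an interval type-2 Gaussian set."""
    if m1 <= x <= m2:
        upper = 1.0  # some admissible mean puts x at the peak
    else:
        upper = max(gauss(x, m1, sigma), gauss(x, m2, sigma))
    lower = min(gauss(x, m1, sigma), gauss(x, m2, sigma))
    return lower, upper
```

The width of the interval grows with the uncertainty in the mean, which is the extra degree of freedom a type-2 controller exploits when model uncertainties are large.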
Brain-machine interfaces (BMIs) have been proposed as a novel technique to control prosthetic devices aimed at restoring motor functions in paralyzed patients. In this issue, Mano et al. [5] propose a neural-network-based controller that maps brain signals and transforms them into robot movement. The experiments were performed with electrodes implanted in a rat’s brain. First, the rat was trained to move the robot, and the brain signals from four electrodes, two in the motor cortex and two in the somatosensory cortex, were collected. These data were then used to train several neural networks, which are employed online to map brain signals into robot motion.
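The offline-train/online-map pipeline can be sketched with a minimal stand-in. Purely as a hedged illustration (a single logistic unit trained on synthetic four-channel "firing rates"; the data, labels, and architecture are invented for this sketch and are not the authors' recordings or network):

```python
import math

# Sketch of the signal-to-motion mapping idea: train a tiny network
# offline on recorded channel activity, then use it online to turn
# 4-channel activity into a motion command. All data here are synthetic.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=200):
    """Logistic unit: 4 input channels -> P(move forward)."""
    w, b = [0.0] * 4, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the pre-activation
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def command(x, w, b):
    """Online step: map one 4-channel sample to a motion command."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return "forward" if p > 0.5 else "stop"
```

The same structure scales to the multi-electrode, multi-layer case: the offline phase fixes the weights, and the online phase is a single cheap forward pass per sample, which is what makes real-time control feasible.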
- Kuyucu, T.; Tanev, I.; Shimohara, K. Hormone-inspired behaviour switching for the control of collective robotic organisms. Robotics 2013, 2, 165–184. [Google Scholar] [CrossRef]
- Kuremoto, T.; Tsurusaki, T.; Kobayashi, K.; Mabu, S.; Obayashi, M. An improved reinforcement learning system using affective factors. Robotics 2013, 2, 149–164. [Google Scholar] [CrossRef]
- Kormushev, P.; Calinon, S.; Caldwell, D.G. Reinforcement learning in robotics: Applications and real-world challenges. Robotics 2013, 2, 122–148. [Google Scholar] [CrossRef]
- Chaoui, H.; Gueaieb, W.; Biglarbegian, M.; Yagoub, M.C.E. Computationally efficient adaptive type-2 fuzzy control of flexible-joint manipulators. Robotics 2013, 2, 66–91. [Google Scholar] [CrossRef]
- Mano, M.; Capi, G.; Tanaka, N.; Kawahara, S. An artificial neural network based robot controller that uses rat’s brain signals. Robotics 2013, 2, 54–65. [Google Scholar] [CrossRef]
© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).