Robots are expected to operate in everyday environments. In contrast to industrial robots, which perform a set of predetermined motions, robots operating in human environments must adapt their policies to the surrounding conditions. In the last few decades, many successful applications of intelligent algorithms, such as neural networks, genetic algorithms, and fuzzy logic, have been proposed. Other important applications of intelligent algorithms have been developed in the field of multiple-robot systems, for tasks such as environment exploration, collaborative object transportation, and surveillance missions. The focus of this special issue is on recent findings in the field of intelligent robots [1,2,3,4,5].
Genetic programming (GP) has proven to give good results in the field of intelligent robots. Kuyucu et al. [1] present a bio-inspired decision mechanism that provides a convenient way for evolution to configure the conditions and timing of behaving as a swarm or as a modular robot in an exploration scenario. Reaching a collective decision to switch from one type of behavior to another is very complex when each individual robot makes its own decisions. The authors used GP to evolve the controller for these decisions, which acts without a centralized mechanism and with limited inter-robot communication.
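To make the idea concrete, the sketch below shows how GP can evolve an expression tree that maps a robot's local sensor readings to a swarm-versus-modular mode decision. It is a minimal illustration of the technique, not the controller of Kuyucu et al.; the sensor names, fitness function, and evolutionary parameters are all assumptions.

```python
# Minimal genetic-programming sketch (illustrative only; not the controller
# from Kuyucu et al.). Evolves an expression tree that maps each robot's
# local sensor readings to a binary decision: behave as a swarm (False) or
# dock into a modular robot (True). Sensor names and the fitness function
# are hypothetical placeholders.
import random
import operator

OPS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2), (max, 2), (min, 2)]
TERMINALS = ["neighbor_density", "obstacle_gap", "progress"]  # assumed inputs

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS + [round(random.uniform(-1, 1), 2)])
    op, arity = random.choice(OPS)
    return (op, [random_tree(depth - 1) for _ in range(arity)])

def evaluate(tree, sensors):
    if isinstance(tree, tuple):
        op, children = tree
        return op(*(evaluate(c, sensors) for c in children))
    return sensors[tree] if isinstance(tree, str) else tree

def decide(tree, sensors):
    # Threshold the evolved expression: positive -> dock (modular mode).
    return evaluate(tree, sensors) > 0.0

def fitness(tree, scenarios):
    # Hypothetical fitness: reward agreeing with a hand-labeled "best mode"
    # over a set of simulated exploration scenarios.
    return sum(decide(tree, sensors) == label for sensors, label in scenarios)

def mutate(tree, depth=2):
    if random.random() < 0.2:
        return random_tree(depth)  # replace subtree with a fresh one
    if isinstance(tree, tuple):
        return (tree[0], [mutate(c, depth) for c in tree[1]])
    return tree

def evolve(scenarios, pop_size=60, generations=40):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, scenarios), reverse=True)
        elite = pop[: pop_size // 4]  # truncation selection
        pop = elite + [mutate(random.choice(elite)) for _ in range(pop_size - len(elite))]
    return pop[0]
```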
In the field of intelligent robots, reinforcement learning has proven to give good results for creating robots that learn and adapt their policies in dynamic environments. An interesting application of reinforcement learning to multi-agent systems is presented by Kuremoto et al. [2]. In contrast to conventional Q-learning algorithms, the authors propose a computational motivation function that adopts the two principal affective factors, “Arousal” and “Pleasure”, of Russell’s circumplex model. Computer simulations of pursuit problems with static and dynamic prey show fast and stable learning performance.
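As a rough illustration of how affective factors can be folded into Q-learning, the sketch below derives an “arousal” signal from the magnitude of the temporal-difference error and a “pleasure” signal from the reward trend, and uses their weighted sum to shape the learning update. These functional forms are assumptions chosen for illustration, not the motivation function of Kuremoto et al.

```python
# Sketch of Q-learning augmented with an internal "motivation" signal built
# from two affective factors, arousal and pleasure, loosely in the spirit of
# Russell's circumplex model. The functional forms (TD-error-based arousal,
# reward-trend-based pleasure, additive shaping) are illustrative assumptions.
import random
from collections import defaultdict

class MotivatedQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.95, eps=0.1, w=0.2):
        self.Q = defaultdict(float)   # state-action values, default 0
        self.actions = actions
        self.alpha, self.gamma, self.eps, self.w = alpha, gamma, eps, w
        self.avg_reward = 0.0         # running reward baseline for "pleasure"

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.Q[(state, a)])

    def update(self, s, a, r, s_next):
        best_next = max(self.Q[(s_next, a2)] for a2 in self.actions)
        td_error = r + self.gamma * best_next - self.Q[(s, a)]
        arousal = abs(td_error)          # surprise: large when predictions are off
        pleasure = r - self.avg_reward   # doing better or worse than usual
        motivation = self.w * (arousal + pleasure)
        # Shape the standard Q-update with the motivation term.
        self.Q[(s, a)] += self.alpha * (td_error + motivation)
        self.avg_reward += 0.01 * (r - self.avg_reward)
```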
Kormushev et al. [3] provide a state-of-the-art survey of reinforcement learning and its applications in robotics. Specifically, the paper focuses on the performance evaluation of policy representations for reinforcement learning. The authors propose a policy representation that offers correlation, adaptability, multi-resolution, globality, multi-dimensionality, and convergence. The proposed policy representation is applied to a pancake-flipping task with a robot manipulator.
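As a generic example of what a parameterized policy representation looks like in policy-search reinforcement learning, the sketch below encodes a trajectory as a weighted sum of radial basis functions and improves the weights by simple random-perturbation search. This illustrates the general concept only; it is not the representation proposed by Kormushev et al., and the reward function is assumed to be supplied by a task simulator.

```python
# Generic policy-representation sketch: a time-indexed command encoded by
# radial basis functions (RBFs), with weights improved by random search.
# Not the representation of Kormushev et al.; reward_fn is assumed given.
import numpy as np

def rbf_policy(weights, t, centers, width=0.05):
    # Map normalized time t in [0, 1] to a command via RBF features.
    phi = np.exp(-((t - centers) ** 2) / (2 * width ** 2))
    return phi @ weights / (phi.sum() + 1e-9)

def rollout_return(weights, centers, reward_fn, steps=50):
    ts = np.linspace(0.0, 1.0, steps)
    commands = np.array([rbf_policy(weights, t, centers) for t in ts])
    return reward_fn(commands)  # task-specific; e.g., from a simulator

def random_search(reward_fn, n_basis=10, iters=200, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    centers = np.linspace(0.0, 1.0, n_basis)
    best_w = np.zeros(n_basis)
    best_r = rollout_return(best_w, centers, reward_fn)
    for _ in range(iters):
        cand = best_w + sigma * rng.standard_normal(n_basis)
        r = rollout_return(cand, centers, reward_fn)
        if r > best_r:
            best_w, best_r = cand, r
    return best_w, best_r
```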
Control of flexible manipulators, which have several advantages over their rigid counterparts, is a challenging research problem. Strong, largely unstructured nonlinearities, such as Coulomb friction and joint elasticity, significantly change the system’s dynamics. Chaoui et al. [4] applied an adaptive fuzzy logic controller that can deal with structured and unstructured dynamic uncertainties. The authors show that the type-2 fuzzy controller outperforms its type-1 counterpart when uncertainties of large magnitude are present.
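The sketch below illustrates the core idea behind interval type-2 fuzzy inference: each fuzzy set carries lower and upper membership functions (a footprint of uncertainty), rule firing strengths become intervals, and a type-reduction step yields a crisp command. The membership shapes, rule consequents, and the simplified type reduction (averaging the interval endpoints rather than a full Karnik-Mendel procedure) are illustrative assumptions, not the controller of Chaoui et al.

```python
# Interval type-2 fuzzy inference sketch for a one-input, one-output
# controller. Each set has lower/upper membership functions; the firing
# strength of each rule is an interval, reduced here by simple averaging.
def tri(x, a, b, c):
    # Triangular membership function with feet at a, c and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Interval type-2 sets for the error input: (lower_mf, upper_mf) pairs.
# The upper MF is a widened copy of the lower MF (footprint of uncertainty).
SETS = {
    "neg":  (lambda e: tri(e, -1.0, -0.5, 0.0), lambda e: tri(e, -1.2, -0.5, 0.2)),
    "zero": (lambda e: tri(e, -0.5,  0.0, 0.5), lambda e: tri(e, -0.7,  0.0, 0.7)),
    "pos":  (lambda e: tri(e,  0.0,  0.5, 1.0), lambda e: tri(e, -0.2,  0.5, 1.2)),
}
CONSEQUENTS = {"neg": 0.8, "zero": 0.0, "pos": -0.8}  # crisp rule outputs (assumed)

def it2_control(error):
    num_lo = num_hi = den_lo = den_hi = 0.0
    for name, (lo_mf, hi_mf) in SETS.items():
        f_lo, f_hi = lo_mf(error), hi_mf(error)  # interval firing strength
        y = CONSEQUENTS[name]
        num_lo += f_lo * y; den_lo += f_lo
        num_hi += f_hi * y; den_hi += f_hi
    y_lo = num_lo / den_lo if den_lo else 0.0
    y_hi = num_hi / den_hi if den_hi else 0.0
    return 0.5 * (y_lo + y_hi)  # simplified type reduction + defuzzification

print(it2_control(0.3))  # small positive error -> negative corrective command
```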
Brain-machine interfaces (BMIs) have been proposed as a novel technique to control prosthetic devices aimed at restoring motor function in paralyzed patients. In this issue, Mano et al. [5] propose a neural-network-based controller that maps brain signals into robot movement. The experiments were performed with electrodes implanted in a rat’s brain. First, the rat was trained to move the robot, while brain signals from four electrodes, two in the motor cortex and two in the somatosensory cortex, were collected. These data were then used to train several neural networks, which are employed online to map brain signals into robot motion.
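A decoder of this kind can be sketched as a small regression network from multi-channel neural activity to robot velocity commands, as below. The architecture, the feature choice (binned firing rates from four channels), and the training data are hypothetical; this is not the network of Mano et al.

```python
# Illustrative sketch of a neural-network decoder mapping multi-channel
# neural activity to a robot command, in the spirit of the BMI pipeline
# described above. Architecture and data are hypothetical placeholders.
import numpy as np

def init_mlp(n_in=4, n_hidden=16, n_out=2, seed=0):
    rng = np.random.default_rng(seed)
    return {"W1": rng.standard_normal((n_in, n_hidden)) * 0.1,
            "b1": np.zeros(n_hidden),
            "W2": rng.standard_normal((n_hidden, n_out)) * 0.1,
            "b2": np.zeros(n_out)}

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h, h @ p["W2"] + p["b2"]  # output: e.g., (linear, angular) velocity

def train(p, X, Y, lr=0.01, epochs=500):
    # Plain batch gradient descent on mean-squared error.
    for _ in range(epochs):
        h, y_hat = forward(p, X)
        err = y_hat - Y
        dW2 = h.T @ err / len(X)
        dh = (err @ p["W2"].T) * (1 - h ** 2)  # backprop through tanh
        dW1 = X.T @ dh / len(X)
        p["W2"] -= lr * dW2; p["b2"] -= lr * err.mean(axis=0)
        p["W1"] -= lr * dW1; p["b1"] -= lr * dh.mean(axis=0)
    return p

# Hypothetical usage: X holds binned firing rates (samples x 4 channels),
# Y the robot commands recorded while the rat drove the robot.
X = np.random.rand(200, 4)
Y = np.random.rand(200, 2)
params = train(init_mlp(), X, Y)
_, command = forward(params, X[:1])  # online decoding of a new sample
```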