Robotics 2013, 2(3), 149-164; doi:10.3390/robotics2030149

An Improved Reinforcement Learning System Using Affective Factors

Graduate School of Science and Engineering, Yamaguchi University, Tokiwadai 2-16-1, Ube, Yamaguchi 755-8611, Japan
School of Information Science & Technology, Aichi Prefectural University, Ibaragabasama 1522-3, Nagakute, Aichi 480-1198, Japan
Author to whom correspondence should be addressed.
Received: 30 May 2013 / Revised: 25 June 2013 / Accepted: 27 June 2013 / Published: 10 July 2013
(This article belongs to the Special Issue Intelligent Robots)


As a powerful and intelligent machine learning method, reinforcement learning (RL) has been widely used in fields such as game theory, adaptive control, multi-agent systems, and nonlinear forecasting. The main strength of this technique lies in its balance of exploration and exploitation when searching for optimal or near-optimal solutions to goal-directed problems. However, when RL is applied to multi-agent systems (MASs), problems such as the “curse of dimensionality”, the “perceptual aliasing problem”, and environmental uncertainty pose high hurdles. Meanwhile, although RL is inspired by behavioral psychology and uses reward/punishment signals from the environment, higher mental factors such as affect, emotion, and motivation are rarely incorporated into its learning procedure. In this paper, to address agent learning in MASs, we propose a computational motivation function that adopts the two principal affective dimensions, “arousal” and “pleasure”, of Russell’s circumplex model of affect to improve the learning performance of the conventional RL algorithm Q-learning (QL). Computer simulations of pursuit problems with static and dynamic prey were carried out, and the results showed that, compared with conventional QL, the proposed method gives agents faster and more stable learning performance.
Keywords: multi-agent system (MAS); computational motivation function; circumplex model of affect; pursuit problem; reinforcement learning (RL)
This is an open access article distributed under the Creative Commons Attribution License (CC BY 3.0).
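The abstract describes modulating Q-learning with an affective motivation function built on arousal and pleasure. The paper's exact formula is not given in the abstract, so the following is only a minimal sketch under stated assumptions: a standard tabular Q-learning update where a hypothetical `motivation` function rescales the raw environmental reward using the two affective factors (the specific form of `motivation` is illustrative, not the authors' method).

```python
from collections import defaultdict

def motivation(reward, arousal=0.5, pleasure=0.5):
    # Hypothetical affective modulation (NOT the paper's exact formula):
    # arousal amplifies the reward signal, pleasure shifts its baseline.
    return reward * (1.0 + arousal) + pleasure

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9, arousal=0.5, pleasure=0.5):
    """One standard Q-learning step using the affectively modulated reward."""
    r = motivation(reward, arousal, pleasure)
    best_next = max(Q[(next_state, a)] for a in actions)
    # Conventional Q-learning update rule with modulated reward r.
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    return Q[(state, action)]

# Toy usage: one hunter agent in a pursuit-style task catches the prey
# from state 0 by moving "up" and receives raw reward 1.0.
Q = defaultdict(float)
actions = ["up", "down", "left", "right"]
q_update(Q, 0, "up", 1.0, 1, actions)
```

With the illustrative parameters above, the modulated reward is 1.0 * 1.5 + 0.5 = 2.0, so the first update moves Q(0, "up") from 0 to 0.2; a larger arousal would make the same raw reward drive a larger update, which is one plausible route to the faster learning the abstract reports.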

Share & Cite This Article

MDPI and ACS Style

Kuremoto, T.; Tsurusaki, T.; Kobayashi, K.; Mabu, S.; Obayashi, M. An Improved Reinforcement Learning System Using Affective Factors. Robotics 2013, 2, 149-164.


Robotics EISSN 2218-6581, published by MDPI AG, Basel, Switzerland.