Robotics 2013, 2(3), 149-164; doi:10.3390/robotics2030149
Article

An Improved Reinforcement Learning System Using Affective Factors

Takashi Kuremoto, Tetsuya Tsurusaki, Kunikazu Kobayashi, Shingo Mabu and Masanao Obayashi

1 Graduate School of Science and Engineering, Yamaguchi University, Tokiwadai 2-16-1, Ube, Yamaguchi 755-8611, Japan
2 School of Information Science & Technology, Aichi Prefectural University, Ibaragabasama 1522-3, Nagakute, Aichi 480-1198, Japan
* Author to whom correspondence should be addressed.
Received: 30 May 2013; in revised form: 25 June 2013 / Accepted: 27 June 2013 / Published: 10 July 2013
(This article belongs to the Special Issue Intelligent Robots)
Abstract: As a powerful and intelligent machine learning method, reinforcement learning (RL) has been widely used in fields such as game theory, adaptive control, multi-agent systems, and nonlinear forecasting. The main strength of this technique lies in its exploration and exploitation approaches, which search for optimal or near-optimal solutions to goal-directed problems. However, when RL is applied to multi-agent systems (MASs), problems such as the “curse of dimensionality”, the “perceptual aliasing problem”, and environmental uncertainty become serious obstacles. Meanwhile, although RL is inspired by behavioral psychology and makes use of reward and punishment from the environment, higher mental factors such as affects, emotions, and motivations are rarely adopted in its learning procedure. In this paper, to address the challenges of agent learning in MASs, we propose a computational motivation function, which adopts the two principal affective factors “Arousal” and “Pleasure” of Russell’s circumplex model of affect, to improve the learning performance of a conventional RL algorithm named Q-learning (QL). Computer simulations of pursuit problems with static and dynamic prey were carried out, and the results showed that, compared with conventional QL, the proposed method gives agents faster and more stable learning performance.
Keywords: multi-agent system (MAS); computational motivation function; circumplex model of affect; pursuit problem; reinforcement learning (RL)
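
For readers unfamiliar with how an affective signal can be coupled to Q-learning, the following Python sketch shows one possible arrangement. The grid-world action set, the weighted-sum motivation function, and the use of motivation as a reward-shaping factor are illustrative assumptions made here for exposition; the paper defines its own computational motivation function and pursuit-problem setting.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount factor, exploration rate
ACTIONS = ["up", "down", "left", "right"]

Q = defaultdict(float)                   # Q-table indexed by (state, action)

def motivation(arousal, pleasure, w_a=0.5, w_p=0.5):
    # Illustrative motivation value in [0, 1] built from the two affective factors.
    return w_a * arousal + w_p * pleasure

def choose_action(state):
    # Epsilon-greedy action selection over the current Q-values.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, arousal, pleasure):
    # One Q-learning step; the environmental reward is scaled up by the motivation value.
    shaped_reward = reward * (1.0 + motivation(arousal, pleasure))
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (shaped_reward + GAMMA * best_next - Q[(state, action)])

In a pursuit problem, for example, the arousal and pleasure inputs could be tied to the prey’s movement and to the change in hunter–prey distance, respectively.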

