Learning to Teach Reinforcement Learning Agents
Abstract

In this article, we study the transfer learning model of action advice under a budget. We focus on reinforcement learning teachers providing action advice to heterogeneous students playing the game of Pac-Man under a limited advice budget. First, we examine several critical factors affecting advice quality in this setting, such as the average performance of the teacher, its variance, and the importance of reward discounting in advising. The experiments show that the best performers are not always the best teachers, and they reveal the non-trivial importance of the coefficient of variation (CV), the ratio of the standard deviation to the corresponding mean, as a statistic for choosing policies that generate advice. Second, the article studies policy learning for distributing advice under a budget. Whereas most methods in the relevant literature rely on heuristics for advice distribution, we formulate the problem as a learning problem and propose a novel reinforcement learning algorithm capable of learning when to advise and when not to. The proposed algorithm is able to advise even when it does not have knowledge of the student's intended action, and it needs significantly less training time than previous learning approaches. Finally, we argue that learning to advise under a budget is an instance of a more generic learning problem: Constrained Exploitation Reinforcement Learning.
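The coefficient of variation mentioned above is simply the standard deviation of a policy's episodic returns divided by their mean. As a minimal sketch (not the paper's code; the teacher return data below is purely illustrative), a low-CV teacher is one whose performance is consistent relative to its average:

```python
import numpy as np

def coefficient_of_variation(returns):
    """CV = sample standard deviation / mean of episodic returns.

    A lower CV indicates a policy whose performance is more consistent
    relative to its average level, which the article argues is a useful
    signal when selecting a teacher policy to generate advice.
    """
    returns = np.asarray(returns, dtype=float)
    mean = returns.mean()
    if mean == 0:
        raise ValueError("CV is undefined for zero-mean returns")
    return returns.std(ddof=1) / mean

# Hypothetical example: two candidate teachers with the same mean return
teacher_a = [100, 102, 98, 101, 99]   # consistent performer
teacher_b = [160, 40, 150, 50, 100]   # erratic performer, same mean
print(coefficient_of_variation(teacher_a) < coefficient_of_variation(teacher_b))
# prints True: teacher_a is the lower-variance choice at equal mean
```

Under this criterion, a teacher with a slightly lower mean but a much lower CV can be preferable to the top performer, which is consistent with the article's finding that the best performers are not always the best teachers.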
Share & Cite This Article
Fachantidis, A.; Taylor, M.E.; Vlahavas, I. Learning to Teach Reinforcement Learning Agents. Mach. Learn. Knowl. Extr. 2018, 1, 2.
Note that, from the first issue of 2016, MDPI journals use article numbers instead of page numbers.