Open Access Article

Entropic Regularization of Markov Decision Processes

1 Department of Computer Science, Technische Universität Darmstadt, 64289 Darmstadt, Germany
2 Max Planck Institute for Intelligent Systems, 72076 Tübingen, Germany
* Author to whom correspondence should be addressed.
Entropy 2019, 21(7), 674; https://doi.org/10.3390/e21070674
Received: 14 June 2019 / Revised: 6 July 2019 / Accepted: 8 July 2019 / Published: 10 July 2019
(This article belongs to the Special Issue Entropy Based Inference and Optimization in Machine Learning)
PDF [651 KB, uploaded 12 July 2019]
Abstract

An optimal feedback controller for a given Markov decision process (MDP) can in principle be synthesized by value or policy iteration. However, if the system dynamics and the reward function are unknown, a learning agent must discover an optimal controller via direct interaction with the environment. Such interactive data gathering commonly leads to divergence towards dangerous or uninformative regions of the state space unless additional regularization measures are taken. Prior works proposed bounding the information loss, measured by the Kullback–Leibler (KL) divergence, at every policy improvement step to eliminate instability in the learning dynamics. In this paper, we consider a broader family of f-divergences, and more concretely α-divergences, which inherit the beneficial property of providing the policy improvement step in closed form while at the same time yielding a corresponding dual objective for policy evaluation. This entropic proximal policy optimization view gives a unified perspective on compatible actor-critic architectures. In particular, common least-squares value function estimation coupled with advantage-weighted maximum likelihood policy improvement is shown to correspond to the Pearson χ²-divergence penalty. Other actor-critic pairs arise for various choices of the penalty-generating function f. On a concrete instantiation of our framework with the α-divergence, we carry out an asymptotic analysis of the solutions for different values of α and demonstrate the effects of the divergence-function choice on standard reinforcement learning problems.
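The closed-form policy improvement step described above can be illustrated with a minimal sketch. The snippet below contrasts two proximal updates for a tabular policy: exponentiated-advantage weights for a KL penalty and linear advantage weights (clipped at zero) for a Pearson χ² penalty, mirroring the advantage-weighted maximum-likelihood view mentioned in the abstract. The function name, the temperature eta, and the normalization details are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def proximal_policy_improvement(pi_old, advantages, eta=1.0, penalty="kl"):
    """Closed-form proximal improvement of a tabular policy (illustrative sketch).

    pi_old     : (num_states, num_actions) array, the old stochastic policy.
    advantages : array of the same shape with estimated advantages A(s, a).
    eta        : penalty strength (temperature); a hypothetical tuning knob.
    penalty    : "kl"   -> exponentiated-advantage weights,
                 "chi2" -> linear advantage weights (Pearson chi^2 case).
    """
    if penalty == "kl":
        # KL penalty: new policy proportional to pi_old * exp(A / eta).
        # Subtracting the per-state max is only for numerical stability.
        shifted = advantages - advantages.max(axis=1, keepdims=True)
        weights = np.exp(shifted / eta)
    elif penalty == "chi2":
        # Pearson chi^2 penalty: weights affine in the advantage,
        # clipped at zero so the reweighted policy stays non-negative.
        weights = np.maximum(1.0 + advantages / eta, 0.0)
    else:
        raise ValueError(f"unknown penalty: {penalty}")

    pi_new = pi_old * weights
    # Normalize per state; assumes at least one action keeps positive weight.
    return pi_new / pi_new.sum(axis=1, keepdims=True)

# Toy usage: 2 states, 3 actions, uniform old policy, random advantages.
rng = np.random.default_rng(0)
pi_old = np.full((2, 3), 1.0 / 3.0)
adv = rng.normal(size=(2, 3))
print(proximal_policy_improvement(pi_old, adv, penalty="kl"))
print(proximal_policy_improvement(pi_old, adv, penalty="chi2"))
```

In both cases the new policy remains proportional to the old one, which is what makes the penalty "proximal": the divergence term keeps each improvement step close to the data-generating policy.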
Keywords: maximum entropy reinforcement learning; actor-critic methods; f-divergence; KL control
This is an open access article distributed under the Creative Commons Attribution (CC BY 4.0) License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite This Article

MDPI and ACS Style

Belousov, B.; Peters, J. Entropic Regularization of Markov Decision Processes. Entropy 2019, 21, 674.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
