Search Results (1)

Search Parameters:
Authors = Janosch Moos (ORCID: 0000-0003-2484-3830)

40 pages, 1371 KiB  
Review
Robust Reinforcement Learning: A Review of Foundations and Recent Advances
by Janosch Moos, Kay Hansel, Hany Abdulsamad, Svenja Stark, Debora Clever and Jan Peters
Mach. Learn. Knowl. Extr. 2022, 4(1), 276-315; https://doi.org/10.3390/make4010013 - 19 Mar 2022
Cited by 96 | Viewed by 21666
Abstract
Reinforcement learning (RL) has become a highly successful framework for learning in Markov decision processes (MDPs). As RL is adopted in realistic and complex environments, solution robustness becomes an increasingly important aspect of deployment. Nevertheless, current RL algorithms struggle with robustness to uncertainty, disturbances, or structural changes in the environment. We survey the literature on robust approaches to reinforcement learning and categorize these methods into four groups: (i) transition-robust designs account for uncertainties in the system dynamics by manipulating the transition probabilities between states; (ii) disturbance-robust designs leverage external forces to model uncertainty in the system behavior; (iii) action-robust designs redirect transitions of the system by corrupting an agent’s output; (iv) observation-robust designs exploit or distort the system state perceived by the policy. Each of these robust designs alters a different aspect of the MDP. Additionally, we address the connection of robustness to the risk-based and entropy-regularized RL formulations. The resulting survey covers all fundamental concepts underlying the approaches to robust reinforcement learning and their recent advances.
(This article belongs to the Special Issue Advances in Reinforcement Learning)
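The first category from the abstract, transition-robust design, can be illustrated with a small sketch: value iteration on a toy MDP where an adversary is allowed to shift a bounded amount of transition-probability mass toward the least favorable successor state, so the agent optimizes against the worst case inside that uncertainty set. This is a hedged, minimal illustration only; the interval-style uncertainty set, the function name, and the toy dynamics are assumptions for demonstration, not the specific formulations covered by the survey.

```python
import numpy as np

def robust_value_iteration(P_nominal, R, radius=0.1, gamma=0.9, iters=200):
    """Worst-case value iteration sketch (assumed setup, for illustration).

    P_nominal: (S, A, S) nominal transition probabilities.
    R:         (S, A) rewards.
    radius:    how much probability mass the adversary may move per (s, a).
    """
    n_states, n_actions, _ = P_nominal.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = np.empty((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                p = P_nominal[s, a].copy()
                worst = np.argmin(V)           # adversary favors the worst successor
                best = np.argmax(V)
                shift = min(radius, p[best])   # mass taken from the best successor
                p[best] -= shift
                p[worst] += shift              # p still sums to 1, stays in [0, 1]
                Q[s, a] = R[s, a] + gamma * p @ V
        V = Q.max(axis=1)                      # agent maximizes against worst case
    return V, Q.argmax(axis=1)
```

Because the adversary can only lower the expected continuation value, the robust value function is bounded above by the nominal one (recovered here with `radius=0`); the gap grows with the uncertainty radius, which is exactly the conservatism/performance trade-off that transition-robust methods navigate.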
