An Intrinsically-Motivated Approach for Learning Highly Exploring and Fast Mixing Policies

Authors

  • Mirco Mutti, Università di Bologna
  • Marcello Restelli, Politecnico di Milano

DOI:

https://doi.org/10.1609/aaai.v34i04.5968

Abstract

What is a good exploration strategy for an agent that interacts with an environment in the absence of external rewards? Ideally, we would like a policy that drives towards a uniform state-action visitation (highly exploring) in a minimum number of steps (fast mixing), in order to ease the efficient learning of any goal-conditioned policy later on. Unfortunately, it is remarkably arduous to directly learn an optimal policy of this nature. In this paper, we propose a novel surrogate objective for learning highly exploring and fast mixing policies, which focuses on maximizing a lower bound to the entropy of the steady-state distribution induced by the policy. In particular, we introduce three novel lower bounds, which lead to as many optimization problems that trade off theoretical guarantees against computational complexity. We then present a model-based reinforcement learning algorithm, IDE3AL, to learn an optimal policy according to the introduced objective. Finally, we provide an empirical evaluation of this algorithm on a set of hard-exploration tasks.
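
As a rough sketch of the objective described in the abstract (the notation below is assumed for illustration, not taken from the paper): writing d_π for the steady-state distribution over states induced by a policy π in the resulting Markov chain, the ideal exploration objective is the maximization of its entropy,

    \max_{\pi} \; H(d_{\pi}), \qquad H(d_{\pi}) = -\sum_{s} d_{\pi}(s) \log d_{\pi}(s),

which the paper approaches by maximizing tractable lower bounds on H(d_π) rather than the entropy itself (the state-action case is analogous, with d_π taken over state-action pairs).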

Published

2020-04-03

How to Cite

Mutti, M., & Restelli, M. (2020). An Intrinsically-Motivated Approach for Learning Highly Exploring and Fast Mixing Policies. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5232-5239. https://doi.org/10.1609/aaai.v34i04.5968

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning