Model Learning for Look-Ahead Exploration in Continuous Control

Authors

  • Arpit Agarwal, Carnegie Mellon University
  • Katharina Muelling, Carnegie Mellon University
  • Katerina Fragkiadaki, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v33i01.33013151

Abstract

We propose an exploration method that incorporates look-ahead search over basic learnt skills and their dynamics, and uses it for reinforcement learning (RL) of manipulation policies. Our skills are multi-goal policies learned in isolation in simpler environments using existing multi-goal RL formulations, analogous to options or macro-actions. Coarse skill dynamics, i.e., the state transition caused by a (complete) skill execution, are learnt and are unrolled forward during look-ahead search. Policy search benefits from temporal abstraction during exploration, yet itself operates over low-level primitive actions; thus the resulting policies do not suffer from the suboptimality and inflexibility caused by coarse skill chaining. We show that the proposed exploration strategy results in effective learning of complex manipulation policies faster than current state-of-the-art RL methods, and converges to better policies than methods that use options or parameterized skills as building blocks of the policy itself, as opposed to guiding exploration.
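To make the idea concrete, the sketch below illustrates one plausible form of the look-ahead step described in the abstract: learned coarse skill dynamics are unrolled forward over sequences of skills, and the first skill of the best-scoring sequence is used to guide exploration. This is a minimal illustrative sketch, not the authors' implementation; the names skill_dynamics, skills, and distance_to_goal are hypothetical stand-ins for a learned one-step (per-skill) transition model, a discrete skill set, and a goal-distance heuristic.

```python
# Illustrative sketch (assumptions, not the paper's code): greedy look-ahead
# over learned coarse skill dynamics to select a skill that guides exploration.
import numpy as np

def lookahead_explore(state, goal, skills, skill_dynamics, distance_to_goal,
                      horizon=3):
    """Unroll every skill sequence up to `horizon` steps with the learned
    coarse dynamics and return the first skill of the best-scoring sequence
    (None if no sequence improves on the current state)."""
    best_score = distance_to_goal(state, goal)
    best_first_skill = None

    def recurse(s, depth, first_skill):
        nonlocal best_score, best_first_skill
        if depth > 0:
            score = distance_to_goal(s, goal)   # heuristic: predicted distance to goal
            if score < best_score:
                best_score, best_first_skill = score, first_skill
        if depth == horizon:
            return
        for skill in skills:
            # Predicted state after executing the complete skill (coarse dynamics).
            s_next = skill_dynamics(s, skill)
            recurse(s_next, depth + 1, skill if first_skill is None else first_skill)

    recurse(state, 0, None)
    return best_first_skill
```

The returned skill would only bias exploration (e.g., by supplying a sub-goal or an action prior); the policy being learned still acts over low-level primitive actions, as the abstract emphasizes.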

Published

2019-07-17

How to Cite

Agarwal, A., Muelling, K., & Fragkiadaki, K. (2019). Model Learning for Look-Ahead Exploration in Continuous Control. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3151-3158. https://doi.org/10.1609/aaai.v33i01.33013151

Section

AAAI Technical Track: Machine Learning