Diverse Exploration via Conjugate Policies for Policy Gradient Methods

Authors

  • Andrew Cohen, Binghamton University
  • Xingye Qiao, Binghamton University
  • Lei Yu, Binghamton University
  • Elliot Way, Binghamton University
  • Xiangrong Tong, Yantai University

DOI:

https://doi.org/10.1609/aaai.v33i01.33013404

Abstract

We address the challenge of achieving effective exploration while maintaining good performance in policy gradient methods. As a solution, we propose diverse exploration (DE) via conjugate policies. DE learns and deploys a set of conjugate policies, which can be generated conveniently as a byproduct of conjugate gradient descent. We provide both theoretical and empirical results showing that DE achieves effective exploration, improves policy performance, and outperforms exploration by random policy perturbations.
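The abstract's key mechanism is that conjugate policies fall out of conjugate gradient descent essentially for free. The sketch below illustrates that idea under assumptions not spelled out here: a standard conjugate gradient solve on a symmetric positive-definite system (of the kind a natural-gradient or TRPO-style update solves against the Fisher matrix) produces mutually conjugate search directions, which can then perturb the current policy parameters to form a set of conjugate policies. The function and variable names (cg_directions, epsilon, theta) and the toy matrices are illustrative, not the authors' code.

```python
# Minimal sketch: collect the mutually conjugate search directions that
# conjugate gradient produces as a byproduct, then use them to perturb
# policy parameters. Illustrative only; not the paper's implementation.
import numpy as np

def cg_directions(A, b, iters=5, tol=1e-10):
    """Solve A x = b by conjugate gradient, returning the solution and the
    search directions p_k, which satisfy p_i^T A p_j = 0 for i != j."""
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # first search direction
    directions = []
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)   # exact step size along p
        x = x + alpha * p
        directions.append(p.copy())
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p         # next direction, A-conjugate to the rest
        r = r_new
    return x, directions

# Toy usage: A stands in for the curvature (e.g., Fisher) matrix of a
# natural-gradient step; theta stands in for the policy parameters.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)          # symmetric positive definite
b = rng.standard_normal(8)
theta = rng.standard_normal(8)

step, dirs = cg_directions(A, b)
epsilon = 0.1                         # assumed perturbation scale
conjugate_policies = [theta + epsilon * p / np.linalg.norm(p) for p in dirs]
print(f"generated {len(conjugate_policies)} conjugate policy perturbations")

# Sanity check: the collected directions are pairwise A-conjugate.
for i in range(len(dirs)):
    for j in range(i + 1, len(dirs)):
        bound = 1e-6 * np.linalg.norm(dirs[i]) * np.linalg.norm(dirs[j])
        assert abs(dirs[i] @ A @ dirs[j]) < bound
```

The point of the sketch is the reuse: the directions are computed anyway while solving for the gradient step, so deploying policies perturbed along them adds structured (mutually conjugate) diversity at negligible extra cost, in contrast to the random perturbations the abstract compares against.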

Published

2019-07-17

How to Cite

Cohen, A., Qiao, X., Yu, L., Way, E., & Tong, X. (2019). Diverse Exploration via Conjugate Policies for Policy Gradient Methods. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3404-3411. https://doi.org/10.1609/aaai.v33i01.33013404

Section

AAAI Technical Track: Machine Learning