Variational Adversarial Kernel Learned Imitation Learning

Authors

  • Fan Yang, SUNY Buffalo
  • Alina Vereshchaka, SUNY Buffalo
  • Yufan Zhou, SUNY Buffalo
  • Changyou Chen, SUNY Buffalo
  • Wen Dong, SUNY Buffalo

DOI:

https://doi.org/10.1609/aaai.v34i04.6135

Abstract

Imitation learning refers to the problem in which an agent learns to perform a task by observing and mimicking expert demonstrations, without knowledge of the cost function. State-of-the-art imitation learning algorithms reduce imitation learning to a distribution-matching problem: the agent's state-action distribution is driven toward the expert's by minimizing a distance measure between them. However, the chosen distance measure may not always provide informative signals for a policy update. To this end, we propose variational adversarial kernel learned imitation learning (VAKLIL), which measures the distance using the maximum mean discrepancy with variational kernel learning. Our method optimizes over a large cost-function space, is sample efficient, and is robust to overfitting. We demonstrate the performance of our algorithm by benchmarking against four state-of-the-art imitation learning algorithms on five high-dimensional control tasks and a complex transportation control task. Experimental results indicate that our algorithm significantly outperforms the related algorithms in all scenarios.
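For intuition, the distance at the core of the abstract is the maximum mean discrepancy (MMD). The sketch below estimates the squared MMD between expert and agent state-action samples under a fixed Gaussian kernel; it is a minimal illustration only, since VAKLIL learns the kernel variationally rather than fixing its bandwidth, and the function and parameter names here are hypothetical rather than taken from the paper.

import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix between two batches of samples.
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_squared(expert, agent, bandwidth=1.0):
    # Estimate of the squared MMD between expert and agent samples
    # under a fixed RBF kernel (a learned kernel would replace rbf_kernel).
    k_ee = rbf_kernel(expert, expert, bandwidth)
    k_aa = rbf_kernel(agent, agent, bandwidth)
    k_ea = rbf_kernel(expert, agent, bandwidth)
    return k_ee.mean() + k_aa.mean() - 2.0 * k_ea.mean()

A small MMD indicates the agent's state-action samples are hard to distinguish from the expert's under the chosen kernel; adversarially learning the kernel, as the abstract describes, keeps this signal informative for policy updates.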

Published

2020-04-03

How to Cite

Yang, F., Vereshchaka, A., Zhou, Y., Chen, C., & Dong, W. (2020). Variational Adversarial Kernel Learned Imitation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6599-6606. https://doi.org/10.1609/aaai.v34i04.6135

Section

AAAI Technical Track: Machine Learning