Embedding High-Level Knowledge into DQNs to Learn Faster and More Safely

Authors

  • Zihang Gao, Shenzhen University
  • Fangzhen Lin, Hong Kong University of Science and Technology
  • Yi Zhou, Shanghai Research Center for Brain Science and Brain-Inspired Intelligence / Zhangjiang Laboratory
  • Hao Zhang, Dorabot Inc.
  • Kaishun Wu, Shenzhen University
  • Haodi Zhang, Shenzhen University

DOI:

https://doi.org/10.1609/aaai.v34i09.7091

Abstract

Deep reinforcement learning has been successfully applied in many decision-making scenarios. However, its slow training process and lack of interpretability limit its application. In this paper, we attempt to address some of these problems by proposing a framework of Rule-interposing Learning (RIL) that embeds high-level knowledge into deep reinforcement learning. In this framework, the rules dynamically affect the training process and accelerate learning. The embedded knowledge, in the form of rules, not only improves learning efficiency but also prevents unnecessary or disastrous explorations at the early stages of training. Moreover, the modularity of the framework makes it straightforward to transfer high-level knowledge among similar tasks.
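The abstract's idea of interposing rules on a DQN's action selection can be illustrated with a minimal sketch. This is not the paper's actual implementation; the function name, the predicate-based rule representation, and the epsilon-greedy fallback are all assumptions made for illustration:

```python
import random

def rule_interposed_action(q_values, state, rules, epsilon=0.1):
    """Epsilon-greedy action selection restricted to rule-permitted actions.

    q_values: per-action Q-value estimates (hypothetical DQN output).
    rules:    predicates (state, action) -> bool; an action is considered
              safe only if every rule permits it. Illustrative only.
    """
    allowed = [a for a in range(len(q_values))
               if all(rule(state, a) for rule in rules)]
    if not allowed:                      # fall back if the rules prune everything
        allowed = list(range(len(q_values)))
    if random.random() < epsilon:        # explore, but only among permitted actions
        return random.choice(allowed)
    # exploit: highest-valued action among those the rules allow
    return max(allowed, key=lambda a: q_values[a])
```

Masking unsafe actions before selection is one simple way a rule could "prevent disastrous explorations" during early training, since the agent never samples a forbidden action regardless of its Q-value.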

Published

2020-04-03

How to Cite

Gao, Z., Lin, F., Zhou, Y., Zhang, H., Wu, K., & Zhang, H. (2020). Embedding High-Level Knowledge into DQNs to Learn Faster and More Safely. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09), 13608-13609. https://doi.org/10.1609/aaai.v34i09.7091