AAAI Publications, The Thirtieth International Flairs Conference

Towards An Understanding of What is Learned: Extracting Multi-Abstraction-Level Knowledge from Learning Agents
Daan Apeldoorn, Gabriele Kern-Isberner

Last modified: 2017-05-08

Abstract


Machine Learning approaches used in the context of agents (like Reinforcement Learning) commonly result in weighted state-action pair representations, where the weights determine which action should be performed given a perceived state. The weighted state-action pairs are stored, e.g., in tabular form or as approximated functions, which makes the learned knowledge hard for humans to comprehend, since the number of state-action pairs can be extremely high. In this paper, a knowledge extraction approach is presented which extracts compact and comprehensible knowledge bases from such weighted state-action pairs. For this purpose, so-called Hierarchical Knowledge Bases are introduced, which allow for a top-down view of the learned knowledge at an adequate level of abstraction. The approach can be applied to gain structural insights into a problem and its solution, and the extracted knowledge bases can easily be transformed into common knowledge representation formalisms, like normal logic programs.
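To make the setting concrete, the following minimal Python sketch illustrates weighted state-action pairs in tabular form and a naive two-level abstraction (a general default rule plus state-specific exceptions). The states, actions, and extraction step here are purely illustrative assumptions; they do not reproduce the Hierarchical Knowledge Base construction presented in the paper.

```python
from collections import Counter

# Illustrative only: a toy table of weighted state-action pairs, where each
# state is a tuple of feature values and the weights determine which action
# a greedy agent would perform in that state.
q_table = {
    ("wall_left", "goal_ahead"): {"move_forward": 0.9, "turn_right": 0.2},
    ("wall_left", "goal_right"): {"move_forward": 0.3, "turn_right": 0.8},
    ("free",      "goal_ahead"): {"move_forward": 0.7, "turn_right": 0.1},
    ("free",      "goal_right"): {"move_forward": 0.4, "turn_right": 0.9},
}

def greedy_action(state):
    """Return the action with the highest weight for the given state."""
    return max(q_table[state], key=q_table[state].get)

# A naive two-level abstraction: one general default rule (the action chosen
# most often across all states) plus state-specific exception rules.
policy = {state: greedy_action(state) for state in q_table}
default_action, _ = Counter(policy.values()).most_common(1)[0]
exceptions = {s: a for s, a in policy.items() if a != default_action}

print("default rule: perform", default_action)
for state, action in exceptions.items():
    print("exception: in state", state, "perform", action)
```

Even in this toy example, the extracted default rule together with its exceptions is far more compact and readable than the full table of weighted state-action pairs.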

Keywords


agents; knowledge base extraction; machine learning; reinforcement learning
