A Task Specification Language for Bootstrap Learning

Ian Fasel, Michael Quinlan, Peter Stone

Reinforcement learning (RL) is an effective framework for online learning by autonomous agents. Most RL research focuses on domain-independent learning algorithms, requiring an expert human to define the environment (state and action representation) and task to be performed (e.g. start state and reward function) on a case-by-case basis. In this paper, we describe a general language for a teacher to specify sequential decision making tasks to RL agents. The teacher may communicate properties such as start states, reward functions, termination conditions, successful execution traces, task decompositions, and other advice. The learner may then practice and learn the task on its own using any RL algorithm. We demonstrate our language in a simple BlocksWorld example and on the RoboCup soccer keepaway benchmark problem. The language forms the basis of a larger "Bootstrap Learning" model for machine learning, a paradigm for incremental development of complete systems through integration of multiple machine learning techniques.
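The abstract does not show the syntax of the paper's specification language, but the properties it enumerates (start states, reward function, termination conditions, advice) can be sketched as a plain data structure. The names below (`TaskSpec`, `reward_fn`, `is_terminal`) are hypothetical illustrations, not the paper's actual language:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch of a task specification; names and structure are
# illustrative only and do not come from the paper itself.
@dataclass
class TaskSpec:
    start_states: List[int]                        # states the task may begin in
    reward_fn: Callable[[int, int, int], float]    # (state, action, next_state) -> reward
    is_terminal: Callable[[int], bool]             # termination condition
    advice: List[str] = field(default_factory=list)  # free-form teacher advice

# Toy chain task: start at state 0, reach state 4; small step penalty.
spec = TaskSpec(
    start_states=[0],
    reward_fn=lambda s, a, s2: 1.0 if s2 == 4 else -0.01,
    is_terminal=lambda s: s == 4,
    advice=["prefer actions that move right"],
)
```

A learner handed such a specification could then practice the task with any standard RL algorithm, which is the division of labor the abstract describes.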


This page is copyrighted by AAAI. All rights reserved.