DOI:
Abstract:
My thesis aims to contribute towards building autonomous agents that develop competency over their environment -- agents that achieve mastery of their domain and can solve new problems as they arise using the knowledge and skills they acquired in the past. I propose a number of methods for building competence in autonomous agents within the reinforcement learning framework, a computational approach to learning from interaction. These methods allow an agent to autonomously develop a set of skills (closed-loop policies over lower-level actions) that enables it to interact effectively with its environment and to deal flexibly with new tasks.