AAAI Publications, Thirty-First AAAI Conference on Artificial Intelligence

Scalable Multitask Policy Gradient Reinforcement Learning
Salam El Bsat, Haitham Bou Ammar, Matthew E. Taylor

Last modified: 2017-02-13

Abstract


Policy search reinforcement learning (RL) allows agents to learn autonomously with limited feedback. However, such methods typically require extensive experience to achieve successful behavior due to their tabula rasa nature. Multitask RL is an approach that aims to reduce data requirements by allowing knowledge transfer between tasks. Although successful, current multitask learning methods suffer from scalability issues when the number of tasks is large. The main reason behind this limitation is the reliance on centralized solutions. This paper proposes a novel distributed multitask RL framework, improving scalability across many different types of tasks. Our framework maps multitask RL to an instance of general consensus and develops an efficient decentralized solver. We justify the correctness of the algorithm both theoretically and empirically: we first prove convergence at a rate of O(1/k), with k being the number of iterations, and then show that our algorithm surpasses others on multiple dynamical-system benchmarks.
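The abstract's "general consensus" formulation can be illustrated with a minimal sketch: each task holds local policy parameters, takes a local gradient step, and then averages with its neighbors over a communication graph via a doubly stochastic mixing matrix. This is not the paper's actual solver; the quadratic surrogate objectives, the mixing weights, and the function names below are illustrative assumptions.

```python
# Hypothetical sketch of consensus-based multitask parameter sharing.
# Each task: local gradient step, then averaging with graph neighbors.
import numpy as np

def consensus_step(thetas, grads, mixing, lr=0.1):
    """One decentralized round.

    thetas: (n_tasks, d) per-task policy parameters
    grads:  (n_tasks, d) local gradients
    mixing: (n_tasks, n_tasks) doubly stochastic neighbor weights
    """
    local = thetas - lr * grads   # local (policy-)gradient step
    return mixing @ local         # consensus averaging with neighbors

# Toy example: 3 tasks, quadratic surrogate objectives standing in
# for per-task policy-gradient losses.
rng = np.random.default_rng(0)
n_tasks, d = 3, 2
thetas = rng.normal(size=(n_tasks, d))
targets = rng.normal(size=(n_tasks, d))   # per-task optima (stand-ins)
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])        # doubly stochastic ring weights

for _ in range(200):
    grads = thetas - targets   # gradient of 0.5 * ||theta - target||^2
    thetas = consensus_step(thetas, grads, W)

# After many rounds, all tasks' parameters cluster near the average
# of the per-task optima: local learning plus consensus agreement.
```

Because the mixing matrix is doubly stochastic, the averaging step preserves the mean of the parameters while shrinking disagreement between tasks, which is the mechanism that lets decentralized updates approximate a centralized multitask solution.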

Keywords


Transfer Learning; Multi-Task Learning; Reinforcement Learning; Scalable MTL
