Generative Continual Concept Learning

Authors

  • Mohammad Rostami, University of Pennsylvania
  • Soheil Kolouri, HRL Labs, LLC
  • Praveen Pilly, HRL Labs, LLC
  • James McClelland, Stanford University

DOI:

https://doi.org/10.1609/aaai.v34i04.6006

Abstract

After learning a concept, humans can continually generalize it to new domains from only a few labeled instances, without interfering with previously learned knowledge. In contrast, learning concepts efficiently in a continual learning setting remains an open challenge for current artificial intelligence algorithms, as persistent model retraining is necessary. Inspired by the Parallel Distributed Processing and Complementary Learning Systems theories of learning, we develop a computational model that can expand its previously learned concepts to new domains efficiently using a few labeled samples. We couple the new form of a concept to its previously learned forms in an embedding space to enable effective continual learning. As a result, the model learns a generative distribution in the embedding space that is shared across tasks and models the abstract concepts. This procedure enables the model to generate pseudo-data points that replay past experience and thereby tackle catastrophic forgetting.
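The replay mechanism described in the abstract, sampling a shared generative distribution in the embedding space, decoding the samples into pseudo-data, and mixing them into training on the new task, can be illustrated with a minimal sketch. This is not the paper's implementation: the AE model, the quadratic prior-matching penalty, and the synthetic two-blob tasks below are hypothetical stand-ins, assuming PyTorch and a Gaussian latent prior in place of the authors' actual architecture and losses.

```python
# Minimal sketch of generative pseudo-rehearsal (hypothetical, not the paper's code).
import torch
import torch.nn as nn

torch.manual_seed(0)

class AE(nn.Module):
    """Autoencoder whose latent space stands in for the shared embedding."""
    def __init__(self, dim=2, latent=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, dim))
        self.clf = nn.Linear(latent, 2)  # concept classifier on the embedding

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), self.clf(z), z

def train(model, x, y, replay=None, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
    for _ in range(epochs):
        xb, yb = x, y
        if replay is not None:  # mix pseudo-samples of past tasks into the batch
            xb = torch.cat([x, replay[0]])
            yb = torch.cat([y, replay[1]])
        recon, logits, z = model(xb)
        # reconstruction + classification + a crude penalty pulling the
        # embedding toward the N(0, I) prior we later sample from
        loss = mse(recon, xb) + ce(logits, yb) + 1e-2 * z.pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

# Task 1: two Gaussian blobs as the two concepts.
x1 = torch.cat([torch.randn(100, 2) + 3, torch.randn(100, 2) - 3])
y1 = torch.cat([torch.zeros(100), torch.ones(100)]).long()
model = AE()
train(model, x1, y1)

# Before task 2: sample the latent prior, decode pseudo-data, and label it
# with the current classifier (pseudo-rehearsal of past concepts).
with torch.no_grad():
    z = torch.randn(200, 2)
    px = model.dec(z)
    py = model.clf(z).argmax(dim=1)

# Task 2: the same concepts in a shifted domain, with few labeled samples.
x2 = torch.cat([torch.randn(5, 2) + torch.tensor([3., -6.]),
                torch.randn(5, 2) + torch.tensor([-3., 6.])])
y2 = torch.cat([torch.zeros(5), torch.ones(5)]).long()
train(model, x2, y2, replay=(px, py))
```

Labeling the decoded samples with the model's own classifier is what makes the replay self-contained: no data from the first task needs to be stored, only the generative model itself.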

Published

2020-04-03

How to Cite

Rostami, M., Kolouri, S., Pilly, P., & McClelland, J. (2020). Generative Continual Concept Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5545-5552. https://doi.org/10.1609/aaai.v34i04.6006

Section

AAAI Technical Track: Machine Learning