Proceedings: Training Issues in Incremental Learning
Abstract:
In connectionist networks, newly learned information destroys previously learned information unless the network is continually retrained on the old information. This behavior, known as catastrophic forgetting, is unacceptable both for practical purposes and as a model of mind. This paper advances the claim that catastrophic forgetting is a direct consequence of the overlap of the system's distributed representations and can be reduced by reducing this overlap. A simple algorithm is presented that allows a standard feedforward backpropagation network to develop semi-distributed representations, thereby significantly reducing the problem of catastrophic forgetting.
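The abstract does not spell out the algorithm, so the following is a minimal sketch of one way a backpropagation network could be pushed toward semi-distributed (reduced-overlap) hidden representations: after each forward pass, the hidden activation vector is "sharpened" (the k most active units are moved toward 1, the rest toward 0), and the hidden layer is trained toward that sharpened vector alongside the ordinary backpropagated error. The function names (sharpen, train_step), the sharpening rule, and the parameters k and factor are illustrative assumptions, not necessarily the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sharpen(h, k=2, factor=0.5):
    """Return a 'sharpened' copy of the hidden activations h:
    the k most active units are moved toward 1, all others
    toward 0. (Illustrative rule, not the paper's exact update.)"""
    target = h.copy()
    winners = np.argsort(h)[-k:]              # indices of k largest activations
    mask = np.zeros_like(h, dtype=bool)
    mask[winners] = True
    target[mask] += factor * (1.0 - target[mask])   # push winners up
    target[~mask] -= factor * target[~mask]         # push losers down
    return target

# Tiny network for demonstration: 4 inputs -> 8 hidden -> 2 outputs.
W1 = rng.normal(0, 0.5, (8, 4)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (2, 8)); b2 = np.zeros(2)
lr = 0.5

def train_step(x, y, k=2):
    """One backprop step on pattern (x, y), with an extra hidden-layer
    error term pulling the hidden activations toward their sharpened
    targets so representations become semi-distributed."""
    global W1, b1, W2, b2
    h = sigmoid(W1 @ x + b1)                  # hidden activations
    o = sigmoid(W2 @ h + b2)                  # network output

    do = (o - y) * o * (1 - o)                # output delta (squared error)
    h_tgt = sharpen(h, k)                     # sparse hidden target
    # hidden delta: backpropagated error plus pull toward sharpened target
    dh = (W2.T @ do + (h - h_tgt)) * h * (1 - h)

    W2 -= lr * np.outer(do, h); b2 -= lr * do
    W1 -= lr * np.outer(dh, x); b1 -= lr * dh

# Toy usage: two binary patterns trained for a few hundred sweeps.
X = np.array([[1, 0, 1, 0], [0, 1, 0, 1]], dtype=float)
Y = np.array([[1, 0], [0, 1]], dtype=float)
for _ in range(500):
    for x, y in zip(X, Y):
        train_step(x, y)
```

The intuition behind this kind of rule is that pulling each hidden vector toward a sparse target concentrates a pattern's representation on a few units, so learning a new pattern is less likely to disturb the weights that encode old ones; the sharpening strength trades reduced forgetting against the generalization benefits of fully distributed codes.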