Abstract:
Knowledge transfer research has traditionally focused on features that are relevant to a class of problems. In contrast, our research focuses on features that are irrelevant. When attempting to acquire a new concept from sensory data, a learner is exposed to large volumes of extraneous data. To use knowledge transfer to quickly acquire new concepts within a given class (e.g., learning a new character from the set of characters, a new face from the set of faces, or a new vehicle from the set of vehicles), a learner must know which features are ignorable, or it will repeatedly be forced to relearn them. We have previously demonstrated knowledge transfer in deep convolutional neural networks (DCNNs). In this paper, we present experimental results demonstrating the increased importance of knowledge transfer when learning new concepts from noisy data. Additionally, we exploit the layered nature of DCNNs to discover more efficient and targeted methods of knowledge transfer. We observe that most of the transfer occurs within the 3.2% of weights that are closest to the input.
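The following is a minimal sketch of the kind of input-proximal weight transfer the abstract describes: copy the earliest layer's weights from a network trained on the source concept class into a fresh network, freeze them, and train only the remaining layers on the new concept. It assumes PyTorch; the architecture, layer sizes, and variable names are illustrative, not the paper's actual setup.

```python
import torch
import torch.nn as nn

def make_dcnn(num_classes: int) -> nn.Sequential:
    """Small DCNN for 28x28 grayscale inputs; the first Conv2d holds
    the weights closest to the input."""
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 4 * 4, num_classes),
    )

source_net = make_dcnn(num_classes=25)  # trained on known concepts (training not shown)
target_net = make_dcnn(num_classes=1)   # will learn the new concept

# Transfer only the layer closest to the input, then freeze it, so the
# learner reuses (rather than relearns) which low-level features are ignorable.
target_net[0].load_state_dict(source_net[0].state_dict())
for p in target_net[0].parameters():
    p.requires_grad = False

# Optimize only the unfrozen parameters when training on the new concept.
optimizer = torch.optim.SGD(
    (p for p in target_net.parameters() if p.requires_grad), lr=0.01
)
```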