Abstract:
An efficient and useful representation for an object viewed from different positions is in terms of its instantiation parameters. We show how the Minimum Description Length principle (MDL) can be used to train the hidden units of a neural network to develop a population code for the instantiation parameters of an object in an image. Each hidden unit has a location in a low-dimensional implicit space. If the hidden unit activities form a standard shape (a bump) in this space, they can be cheaply encoded by the center of this bump. So the weights from the input units to the hidden units in a self-supervised network are trained to make the activities form a bump. The coordinates of the hidden units in the implicit space are also learned, thus allowing flexibility, as the network develops separate population codes when presented with different objects.
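The core idea above can be illustrated with a minimal sketch: given hidden-unit activities and their learned coordinates in a one-dimensional implicit space, fit a Gaussian bump whose center summarizes the whole activity pattern, and measure how costly the residual mismatch is. The function below is a hypothetical illustration, not the paper's actual objective; the fixed bump width, the squared-error mismatch, and the activity-weighted center estimate are all simplifying assumptions.

```python
import numpy as np

def bump_fit_cost(activities, coords, width=1.0):
    """Fit a Gaussian 'bump' to hidden-unit activities laid out in a
    1-D implicit space.

    Returns the bump center (the cheap code for the whole pattern)
    and the squared mismatch between the actual activities and the
    ideal bump (the residual that would still need encoding).

    activities : (n,) non-negative hidden-unit activities
    coords     : (n,) learned implicit-space coordinate of each unit
    width      : assumed fixed bump width (a simplification here)
    """
    a = np.asarray(activities, dtype=float)
    x = np.asarray(coords, dtype=float)
    # Cheap code: the activity-weighted mean position in implicit
    # space, i.e. the estimated center of the bump.
    center = np.sum(a * x) / np.sum(a)
    # Ideal bump: a Gaussian in implicit space at that center,
    # scaled to carry the same total activity.
    bump = np.exp(-0.5 * ((x - center) / width) ** 2)
    bump *= np.sum(a) / np.sum(bump)
    # Mismatch cost: a bump-shaped activity pattern is cheap to
    # describe (small residual); other patterns are expensive.
    cost = np.sum((a - bump) ** 2)
    return center, cost
```

In a training loop one would backpropagate a cost of this flavor through both the input-to-hidden weights and the implicit coordinates themselves, so that activity patterns for each object settle into their own bump.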