Recurrent Representation Reinterpreted

David Landy

Unlike many classical systems, what counts as a "representation" in a neural network is not always explicit. In many analyses, the contents of the hidden layer are taken to be the representations. From this perspective, simple recurrent networks (SRNs) are context-sensitive and not generally compositional. In this paper, however, an alternative analysis of the way SRNs "represent" is presented that leads to a different conclusion. It is shown that if an SRN's representation of some input is taken to be a function specifying the network's dispositional response to that input, then SRNs are in fact formally compositional. This analysis of representation is defended as both natural and valid. Implications for the relation between compositionality and systematicity are then explored, and it is concluded that compositionality does not play a large part in explaining systematicity.
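The contrast at the heart of the abstract can be made concrete with a toy example. The following is a minimal illustrative sketch, not code from the paper: a randomly initialized SRN update rule, under which the "hidden-state" view of an input's representation is context-dependent, while the "dispositional" view (the input's representation as the function it induces on hidden states) is a single fixed object. All names, sizes, and the tanh update are assumptions for illustration.

```python
import numpy as np

# Toy SRN with assumed (arbitrary) sizes and random weights.
rng = np.random.default_rng(0)
H, I = 4, 3                                  # hidden and input dimensions
W_rec = rng.standard_normal((H, H)) * 0.5    # recurrent weights
W_in = rng.standard_normal((H, I)) * 0.5     # input weights

def step(h, x):
    """One SRN update: next hidden state from prior state and input."""
    return np.tanh(W_rec @ h + W_in @ x)

x0, x1 = np.eye(I)[0], np.eye(I)[1]          # two one-hot input tokens
h0 = np.zeros(H)                             # a null prior context

# View 1: the representation of x0 is the hidden state it produces.
# This depends on the prior context, so it is context-sensitive.
rep_state = step(h0, x0)

# View 2: the representation of x0 is the function h -> step(h, x0),
# i.e. the network's dispositional response to x0 across all contexts.
rep_disposition = lambda h: step(h, x0)

# The same input yields different hidden states in different contexts,
# but the dispositional function itself never changes.
h_ctx = step(h0, x1)                         # a different prior context
same_in_null_ctx = np.allclose(rep_disposition(h0), rep_state)
same_in_other_ctx = np.allclose(rep_disposition(h_ctx), rep_state)
print(same_in_null_ctx, same_in_other_ctx)
```

Under this sketch, the first comparison holds and the second (almost surely, for random weights) fails, mirroring the abstract's point: the hidden-state reading of representation varies with context, whereas the dispositional reading picks out one stable function per input.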


This paper is copyrighted by AAAI. All rights reserved.