This paper introduces a new representational scheme for word sense disambiguation. Drawing on work in information retrieval (Latent Semantic Indexing, as proposed by [Deerwester et al. 1990]), an efficient method for learning sublexical representations is described: words and contexts are represented as vectors in a multidimensional space that captures similarity of collocational patterns. Closeness of words in the space corresponds to occurrence in similar contexts, giving a rough approximation of semantic similarity. The Bayesian classification system AutoClass was then used to perform an unsupervised classification of sublexical representations of contexts of the three ambiguous words interest, suit and plant in a training text. Applying this classification to a test text, AutoClass disambiguated 90% of all occurrences correctly. Unsupervised classification failed for tank, but a more sophisticated algorithm also achieved a disambiguation rate of 90%.
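
The core intuition, that words occurring in similar contexts end up close together in the vector space, can be sketched with raw co-occurrence counts and cosine similarity. This is an illustrative toy version only, not the paper's actual dimensionality-reduced (SVD-based) representation; the function names and the example sentences are hypothetical:

```python
import math
from collections import defaultdict

def cooccurrence_vectors(tokens, window=2):
    """Build a sparse co-occurrence vector for each word: counts of
    the words appearing within +/- window positions of it."""
    vecs = defaultdict(lambda: defaultdict(int))
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                vecs[word][tokens[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

tokens = "the plant grows in the garden the tree grows in the garden".split()
vecs = cooccurrence_vectors(tokens)
# Words with similar collocational patterns (plant, tree) score higher
# than words with dissimilar ones (plant, garden).
print(cosine(vecs["plant"], vecs["tree"]), cosine(vecs["plant"], vecs["garden"]))
```

In the paper's scheme these high-dimensional count vectors are compressed into a lower-dimensional space, and the context vectors of an ambiguous word are then clustered without supervision, so that each cluster corresponds to one sense.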