The realization of natural human-computer interfaces faces a wide range of obstacles, including noisy data, vague meanings, and context dependence. An essential aspect of everyday communication is the human ability to ground verbal interpretations in visual perception. A system must therefore solve the correspondence problem of relating verbal and visual descriptions of the same object. This contribution proposes a novel solution to this problem based on Bayesian networks. To capture the vague meanings of adjectives used by the speaker, psycholinguistic experiments are evaluated. Object recognition errors are taken into account through conditional probabilities estimated on test sets. The Bayesian network is built up dynamically from the verbal object description and is evaluated with an inference technique that combines bucket elimination and conditioning. Results show that speech and image data are interpreted more robustly in the combined case than when either modality is interpreted in isolation.
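The fusion idea can be illustrated with a minimal sketch: a tiny Bayesian network in which a true object class generates both a vague verbal attribute and a noisy visual classifier label, and exact inference by enumeration recovers a posterior over the class. All class names and probability tables below are invented for illustration; they stand in for the psycholinguistically estimated adjective semantics and the test-set confusion probabilities described above, and this is not the paper's actual network or inference procedure.

```python
# Illustrative sketch only: fusing a vague verbal cue with a noisy
# visual label via Bayes' rule. All numbers below are made up.

classes = ["cube", "sphere", "bar"]

# Prior over the true object class (assumed uniform here).
prior = {c: 1.0 / len(classes) for c in classes}

# P(speaker says "large" | true class): vague adjective semantics,
# standing in for values estimated from psycholinguistic experiments.
p_large_given = {"cube": 0.7, "sphere": 0.2, "bar": 0.6}

# P(recognizer outputs label | true class): confusion probabilities,
# standing in for estimates obtained on a test set.
p_vision = {
    "cube":   {"cube": 0.8, "sphere": 0.1, "bar": 0.1},
    "sphere": {"cube": 0.2, "sphere": 0.7, "bar": 0.1},
    "bar":    {"cube": 0.3, "sphere": 0.1, "bar": 0.6},
}

def posterior(said_large: bool, vision_label: str) -> dict:
    """Posterior over the true class given both evidence sources,
    assuming the two cues are conditionally independent given the class."""
    unnorm = {}
    for c in classes:
        p_verbal = p_large_given[c] if said_large else 1.0 - p_large_given[c]
        unnorm[c] = prior[c] * p_verbal * p_vision[c][vision_label]
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# Combining both cues typically yields a sharper posterior than
# either the verbal or the visual evidence alone.
post = posterior(said_large=True, vision_label="cube")
best = max(post, key=post.get)
```

In a full system, the network would be built dynamically from the parsed verbal description and evaluated with a scalable inference scheme (the paper combines bucket elimination and conditioning); enumeration suffices here because the toy network has a single query variable.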