Integrating Vision and Natural Language Without Central Models

Ian Horswill

Ludwig answers natural language queries about simple scenes using a real-time vision system based on current biological theories of vision. Ludwig is unusual in that it does not use a propositional database to model the world. Instead, it simulates the interface of a traditional world model by providing plug-compatible operations that are implemented directly using real-time vision. Logic variables are bound to image regions, rather than complex data structures, while predicates, relations, and existential queries are computed on demand by the vision system. This architecture allows Ludwig to "use the world as its own best model" in the most literal sense. The resulting simplifications in the modeling, reasoning, and parsing systems allow them to be implemented as communicating finite state machines, thus giving them a weak biological plausibility. The resulting system is highly pipelined and incremental, allowing noun phrase referents to be visually determined even before the entire sentence has been parsed.

This page is copyrighted by AAAI. All rights reserved.