Proceedings: Lessons Learned from Implemented Software Architectures for Physical Agents
Issue: Papers from the 1995 AAAI Spring Symposium
Abstract:
Researchers designing robots that can perform nontrivial behaviors in real-world environments face a problem. There is a growing consensus that machines for several highly desirable applications, especially visually guided navigation and manipulation, must incorporate mechanisms peculiar to their function and must interact with the world in order to derive the necessary information. This behavioral/active vision paradigm has been the subject of numerous recent papers and several conferences. Basically, the philosophy holds that if a machine is going to work in the real world, it must be built and tested in that real world. Moreover, most of the intellectual effort required to produce such a machine revolves around the specific real-world interactions in which the machine must be involved. Working with sophisticated physical manipulators or navigators, however, is an extremely expensive, time-consuming, and in some ways hazardous process. Unlike pure software development, an error in an algorithm running a physical machine can result in thousands of dollars of damage and weeks of delay, not to mention the possibility of personal injury. And yet, because of the complexity of real-world environments, physical hardware has typically been involved at relatively early, error-prone phases of system development. Furthermore, controlling machinery in the real world requires dealing with real-time interactions among multiple complex computational processes. The design and testing of such real-time parallel systems is a major issue in itself. The consequent risk, complexity, and expense have limited the development of nontrivial visually controlled machines to a few well-funded institutions, and even there development has been slow.