Proceedings:
Lessons Learned from Implemented Software Architectures for Physical Agents
Issue:
Papers from the 1995 AAAI Spring Symposium
Track:
Contents
Abstract:
Recently, considerable interest in behavior-based robots has been generated by industrial, space, and defense-related activities. Such autonomous robots are envisioned to perform tasks where safety or economic factors prevent direct human control and where communication difficulties prevent easy remote control. Although many successes have been reported using behavior-based robots with prespecified skills and behaviors, it is clear that there are many more applications where learning and adaptation are required. In this research, a method is presented whereby reinforcement learning is incorporated into a behavior-based control system. Skills and behaviors that are impossible or impractical to embed as predetermined responses are learned by the robot through exploration and discovery, using a temporal difference reinforcement learning technique. The result is referred to as a Distributed Adaptive Control System (DACS), in effect the robot's artificial nervous system. This paper presents only a general overview of the DACS architecture; many details are omitted. A DACS is then developed for a simulated quadruped mobile robot, and the locomotion and body-coordination behavioral levels are isolated and evaluated.
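For readers unfamiliar with the temporal difference technique the abstract refers to, the following is a minimal, purely illustrative sketch of a tabular TD(0) value update. The states, rewards, and constants are hypothetical examples and are not taken from the paper's DACS implementation.

```python
# Minimal tabular TD(0) sketch of a temporal-difference value update.
# All states, rewards, and hyperparameters below are illustrative
# assumptions, not the DACS design described in the paper.

ALPHA = 0.1   # learning rate (assumed)
GAMMA = 0.9   # discount factor (assumed)

def td0_update(values, state, reward, next_state):
    """Apply one TD(0) update: V(s) += alpha * (r + gamma*V(s') - V(s))."""
    td_error = reward + GAMMA * values.get(next_state, 0.0) - values.get(state, 0.0)
    values[state] = values.get(state, 0.0) + ALPHA * td_error
    return values

# Hypothetical usage: leg-phase "states" observed while a simulated
# quadruped explores, with reward for forward progress.
values = {}
trajectory = [("stance", 0.0, "swing"), ("swing", 1.0, "stance")]
for state, reward, next_state in trajectory:
    td0_update(values, state, reward, next_state)
print(values)
```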