Proceedings of the AAAI Conference on Artificial Intelligence, 13
Track: AAAI-96 Student Abstracts

Abstract:
Autonomous agents functioning in complex and rapidly changing environments can improve their task performance by updating and correcting their world model over their lifetime. Existing research on this problem falls into two classes. The first, reinforcement learners, use weak inductive methods to directly modify an agent's procedural execution knowledge. These systems are robust in dynamic and complex environments, but they generally do not support planning or the pursuit of multiple goals, and their weak methods make learning slow. The second class, theory revision systems, learns declarative planning knowledge through stronger methods that use explicit reasoning to identify and correct errors in the agent's domain knowledge. However, these methods are generally applicable only to agents with instantaneous actions in fully sensed domains.
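To illustrate the first class the abstract describes, the sketch below shows tabular Q-learning on a hypothetical toy environment (the environment, state count, and hyperparameters are illustrative assumptions, not from the paper). It demonstrates the contrast being drawn: the agent revises its procedural execution knowledge directly from reward feedback, one state-action pair at a time, with no explicit domain theory to reason over.

```python
import random

def q_learning(n_states=5, alpha=0.5, gamma=0.9, epsilon=0.1, episodes=500):
    """Weak inductive learning: adjust action values purely from reward
    feedback, with no declarative model of the domain (a hypothetical
    toy example, not the system described in the abstract)."""
    # Toy chain environment: states 0..n_states-1, actions 0 (left) and
    # 1 (right); reward 1 only on reaching the rightmost state.
    q = {(s, a): 0.0 for s in range(n_states) for a in (0, 1)}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection over the current value table.
            if random.random() < epsilon:
                a = random.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: q[(s, act)])
            s2 = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Incremental update: procedural knowledge is modified in place,
            # one state-action pair at a time -- hence the slow learning
            # the abstract attributes to weak methods.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q
```

After enough episodes, the greedy action in every interior state is "right", but nothing in the learned table supports planning toward other goals; that limitation is what motivates the theory revision systems discussed next.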
ISBN 978-0-262-51091-2
August 4-8, 1996, Portland, Oregon. Published by The AAAI Press, Menlo Park, California.