Learning Situation-Dependent Rules: Improving Planning for an Incompletely Modeled Domain

Karen Zita Haigh and Manuela M. Veloso

Most real-world environments are hard to model completely and correctly, especially their dynamics. In this paper we present our work on improving a domain model through learning from execution, thereby improving a task planner's performance. Our system collects execution traces from the robot and automatically extracts the information relevant to improving the domain model. We introduce the concept of situation-dependent rules, in which situational features identify the conditions that affect action achievability. The system then converts this execution knowledge into a symbolic representation that the planner can use to generate plans appropriate to the given situation.
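To make the idea concrete, the following is a minimal, illustrative sketch of situation-dependent rules, not the paper's actual algorithm: execution traces pair situational features with action outcomes, simple rules are extracted from those traces, and the planner consults the rules before committing to an action. The feature names, the success-rate threshold, and the rule representation are all assumptions made for this example.

```python
from collections import defaultdict

# Hypothetical execution traces: situational features observed when an action
# was attempted, paired with whether the action succeeded.
traces = [
    ({"corridor": "A", "time_of_day": "morning"}, True),
    ({"corridor": "A", "time_of_day": "evening"}, False),
    ({"corridor": "B", "time_of_day": "morning"}, True),
    ({"corridor": "A", "time_of_day": "evening"}, False),
    ({"corridor": "B", "time_of_day": "evening"}, True),
]

def learn_rules(traces, threshold=0.5):
    """Aggregate success rates per (feature, value) pair and keep the pairs
    whose success rate falls below the threshold as 'avoid' rules."""
    counts = defaultdict(lambda: [0, 0])  # (feature, value) -> [successes, attempts]
    for features, succeeded in traces:
        for key in features.items():
            counts[key][1] += 1
            if succeeded:
                counts[key][0] += 1
    return {key for key, (succ, total) in counts.items() if succ / total < threshold}

def action_achievable(rules, situation):
    """Symbolic check a planner could apply before selecting an action."""
    return not any(item in rules for item in situation.items())

rules = learn_rules(traces)
print(rules)
print(action_achievable(rules, {"corridor": "A", "time_of_day": "evening"}))  # False
print(action_achievable(rules, {"corridor": "B", "time_of_day": "morning"}))  # True
```

In this toy setting the learned rules flag situations (here, corridor A and evening runs) in which the action has historically failed, so the planner can prefer alternative routes or times; the paper's system operates on richer robot execution data and a symbolic planner interface.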
