AAAI Publications, 2017 AAAI Fall Symposium Series

Integrating Knowledge Representation, Reasoning, and Learning for Human-Robot Interaction
Mohan Sridharan

Last modified: 2017-10-09

Abstract


Robots interacting with humans often have to represent and reason with different descriptions of incomplete domain knowledge and uncertainty, and revise this knowledge over time. Towards achieving these capabilities, the architecture described in this paper combines the complementary strengths of declarative programming, probabilistic graphical models, and reinforcement learning. For any given goal, non-monotonic logical reasoning with a coarse-resolution representation of the domain is used to compute a tentative plan of abstract actions. Each abstract action is implemented as a sequence of concrete actions by reasoning probabilistically over the relevant part of a fine-resolution representation tightly coupled to the coarse-resolution representation. The outcomes of executing the concrete actions are used for subsequent reasoning at the coarse resolution. Furthermore, the task of interactively learning axioms governing action capabilities, preconditions, and effects is posed as a relational reinforcement learning problem, using decision tree regression and sampling to construct and generalize over candidate axioms. These capabilities are illustrated in simulation and on a physical robot moving objects to specific people or locations in an indoor domain.
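The coarse-to-fine control loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the plan, the action names, and the simulated execution outcomes are all hypothetical stand-ins for the non-monotonic logical reasoner and the probabilistic fine-resolution reasoning the architecture actually uses.

```python
import random

def coarse_plan(goal):
    """Hypothetical stand-in for non-monotonic logical reasoning at the
    coarse resolution; a real system would compute this plan of abstract
    actions with a declarative solver. Here it is hard-coded."""
    return ["move_to(shelf)", "pick_up(book)", "move_to(office)", "put_down(book)"]

def execute_abstract_action(action, rng):
    """Hypothetical stand-in for fine-resolution execution: each abstract
    action is refined into a sequence of concrete actions executed under
    uncertainty. A real system would reason probabilistically over the
    relevant part of the fine-resolution representation; here each concrete
    step simply succeeds with probability 0.9."""
    concrete_steps = [f"{action}::step{i}" for i in range(3)]
    outcomes = [(step, rng.random() < 0.9) for step in concrete_steps]
    succeeded = all(ok for _, ok in outcomes)
    return succeeded, outcomes

def run(goal, seed=0):
    """Execute the coarse plan, feeding concrete-action outcomes back to the
    coarse resolution; a failure would trigger replanning there."""
    rng = random.Random(seed)
    history = []
    for action in coarse_plan(goal):
        succeeded, _ = execute_abstract_action(action, rng)
        history.append((action, succeeded))
        if not succeeded:
            break  # observed outcomes prompt coarse-resolution replanning
    return history
```

The key design point this sketch mirrors is the separation of concerns: the coarse level commits only to abstract actions, while uncertainty is handled entirely inside each abstract action's fine-resolution refinement, with only the abstracted outcome reported back.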

Keywords


Human-robot interaction, declarative language, probabilistic graphical models, relational reinforcement learning
