How to Build Explanations of Automated Proofs: A Methodology and Requirements on Domain Representations

Helmut Horacek

There is ample evidence that the results produced by problem-solving methods and the ingredients suitable for human-adequate explanations may differ fundamentally, which makes documenting the behavior of intelligent systems and explaining the solutions they produce quite challenging. Focusing on the explanation of solutions found by the most general problem-solvers, automated theorem provers, we sketch what has emerged over the past decade in our working group as a methodology for building content specifications for this kind of explanation. The methodology is conceived as a stratified model with dedicated transformation processes between adjacent strata. Our investigations have shown that explanation capabilities based on problem-solving knowledge alone are limited in a number of ways, which motivates representing extra knowledge relevant for communication purposes.

Subjects: 11. Knowledge Representation; 13. Natural Language Processing

Submitted: May 15, 2007
