Learning by demonstration technology holds great promise for enabling users to automate routine but time-consuming online tasks. However, generalizing correctly from demonstration traces presents numerous technical challenges stemming from limited or noisy training data. To address these challenges, we propose a framework that augments learning by demonstration with a facility for asking questions of the demonstrator. We define a catalog of questions designed to inform learning by demonstration technologies, covering function and causality, abstraction, alternatives, limitations on learned knowledge, and the process of learning. We present approaches for deciding when to ask questions and which questions to ask, balancing the potential utility of the answers against the cost to the demonstrator of answering them. Metareasoning over explicit learning state and goals, together with models of question utility and cost, guides the question-asking process.