This paper addresses the problem of reinforcement learning in continuous domains through teaching by demonstration. Our approach builds on the Continuous U-Tree algorithm, which generates a tree-based discretization of a continuous state space while applying standard reinforcement learning techniques. We introduce a method for deriving a preliminary state discretization and policy from expert demonstration, represented as a decision tree. This discretization is used to bootstrap the Continuous U-Tree algorithm and guide the autonomous learning process. Our experiments show that a small number of demonstration trials provided by an expert substantially reduces the number of trials required to learn an optimal policy, improving both learning efficiency and the compactness of the resulting state space.