Imitation in robotics is seen as a powerful means of reducing the complexity of robot programming: it allows users to instruct robots simply by showing them how to execute a given task. Through imitation, robots can learn from their environment and adapt to it, just as human newborns do. To be useful as human companions, robots must act purposefully, achieving goals and fulfilling human expectations. But what is the goal behind the surface of a demonstrated behavior? How can any observed regularities be extracted, encoded and reused? Answering these questions is indispensable for the development of cognitive agents capable of serving as human companions in everyday life. In this paper we present ConSCIS, a framework for robot teaching through observation and imitation, inspired by recent findings in the cognitive sciences, biology and neuroscience. In ConSCIS we regard imitation as the process of manipulating high-level symbols in order to achieve the goals and intentions hidden in the observation of a task. The architecture has been tested both in simulation and on an anthropomorphic robot platform.