Coping with rapid technological change in professional life requires tools that can train users persistently. An important human task is controlling the behavior of physical systems through prediction. Although humans learn to perform this task with relative ease on many natural physical systems, engineered physical systems pose a problem because they are often inaccessible for natural interaction. Our research proposes a new architecture for autonomous agents that allows them to act as persistent assistants. These agents learn to perform a given control-based prediction task on a physical system by applying a modified Q-learning algorithm. During this learning, the agent acquires the state and action spaces of the physical system, which are projected as perceptual spaces that the user can explore to learn the given task. While facilitating such exploration, the agent learns to support the user's learning by applying a modified temporal-difference (TD) algorithm. In this paper, we present the architecture of the agent, the learning algorithms, and solutions to some complex implementation issues. The agent is being tested on examples from several domains, such as electronics, mechatronics, and economics.
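For background, the standard tabular Q-learning update on which such an agent could build is sketched below. This is a minimal, hypothetical illustration; the paper's modified variant, its state/action encoding, and its reward structure are not specified here, and the toy two-state environment is invented purely for demonstration.

```python
import random
from collections import defaultdict

# Illustrative hyperparameters (assumptions, not the paper's values).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def q_update(Q, s, a, r, s_next, actions):
    """One Q-learning (off-policy TD) backup for the transition (s, a, r, s'):
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def epsilon_greedy(Q, s, actions):
    """Explore uniformly with probability EPSILON, otherwise exploit."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

# Toy environment: from state 0, action 1 yields reward 1, action 0 yields 0;
# both transitions lead to an absorbing state 1.
random.seed(0)
Q = defaultdict(float)
actions = [0, 1]
for _ in range(200):
    s = 0
    a = epsilon_greedy(Q, s, actions)
    r = 1.0 if a == 1 else 0.0
    q_update(Q, s, a, r, 1, actions)
```

After a few hundred updates, the estimated value of the rewarding action dominates, which is the core mechanism an agent could use to acquire a control policy over a physical system's state and action spaces.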