In our current research, we devise a qualitative control layer to be integrated into a real-time multi-agent reactive planner. The reactive planning system consists of distributed planning agents, each attending to a different perspective of the task environment. Each perspective corresponds to an objective, and the objectives are sometimes in conflict with one another. Each agent receives information about events as they occur and can respond with a set of heuristic actions.

Within the qualitative control scheme, we use a set of qualitative feature vectors to describe the effects of applying actions, and a qualitative transition vector to denote the qualitative distance between the current state and the target state. Given a target state and a set of heuristics, we have an algorithm that tests whether the target state is reachable.

We will then apply on-line learning at the qualitative control level to achieve adaptive planning. Our goal is to design a mechanism that refines the heuristics used by the reactive planner each time an action is taken toward the objectives, using feedback from the results of those actions. When an outcome is compared with expectations, the prior objectives may be modified and a new set of objectives (or a new assessment of the relative importance of the different objectives) can be introduced. Because this yields better estimates of the time-varying objectives, the reactive strategies can be improved and better predictions can be made.
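The qualitative machinery described above can be sketched concretely. The following is a minimal illustration, not the planner's actual implementation: it assumes each feature takes a qualitative value in {-1, 0, +1}, that each heuristic action is summarized by a qualitative effect vector over those features, and that reachability can be tested by searching the (finite) qualitative state space. All function names and the encoding are illustrative assumptions.

```python
from collections import deque

# Illustrative encoding (an assumption, not from the paper):
# a qualitative state is a tuple of feature values in {-1, 0, +1},
# and each heuristic action has a qualitative effect vector that
# nudges each feature down (-1), leaves it alone (0), or up (+1).

def apply_effect(state, effect):
    """Apply a qualitative effect vector, clamping each feature to [-1, +1]."""
    return tuple(max(-1, min(1, s + e)) for s, e in zip(state, effect))

def transition_vector(current, target):
    """Qualitative distance: the sign of the change each feature still needs."""
    sign = lambda x: (x > 0) - (x < 0)
    return tuple(sign(t - c) for c, t in zip(current, target))

def reachable(start, target, effects):
    """Test reachability of the target state by breadth-first search
    over qualitative states, using the given heuristic effect vectors."""
    frontier, seen = deque([start]), {start}
    while frontier:
        state = frontier.popleft()
        if state == target:
            return True
        for eff in effects:
            nxt = apply_effect(state, eff)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Usage: two heuristic actions, one raising feature 0, one lowering feature 1.
effects = [(1, 0), (0, -1)]
print(transition_vector((0, 1), (1, -1)))   # (1, -1): raise feature 0, lower feature 1
print(reachable((0, 1), (1, -1), effects))  # True
```

Because the qualitative state space is finite, the search always terminates; a failed search certifies that no sequence of the given heuristic actions reaches the target under this qualitative abstraction.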