The pursuit of moving targets in real-time environments such as computer games and robotics presents several challenges to situated agents. A priori unknown state spaces and the need to interleave acting and planning limit the applicability of traditional search, learning, and adversarial game-tree search methods. In this paper we build on the previous idea of hierarchical state abstraction, showing how it can be effectively applied to each of the problems in moving target pursuit. First, we develop new algorithms for both chasing agents and target agents that automatically build and refine a hierarchical state abstraction. We then demonstrate the performance improvements that this abstraction affords when applied to incremental A* search, learning real-time heuristic search, and minimax adversarial search. Finally, we propose and evaluate a systematic and efficient method of applying single-agent techniques in adversarial domains. This yields a diverse family of new moving target pursuit algorithms that draw on recent advances in single-agent search. In a preliminary empirical evaluation, we demonstrate the effects of state abstraction on search and learning in these agents.