An intelligent agent must integrate three central components of behavior: planning, action, and perception. Different researchers have explored alternative strategies for interleaving these processes, typically assuming that their approach is desirable in all domains. In contrast, we believe that different domains require different interleaving schemes. In this paper we identify three continua along which these strategies can vary, show how one can represent each spectrum in terms of probabilistic information, and outline how an agent might learn the best position along each continuum through experience with a particular domain.

We are exploring these issues in the context of ICARUS, an integrated architecture for controlling physical agents (Langley et al., 1991). The framework's central data structure is the grounded plan, which it represents as a sequence of segments that correspond to qualitative 'states'. That is, each segment specifies an interval of time - a continuous sequence of situations - during which the signs of a set of observed derivatives are constant; thus, state boundaries between segments occur between pairs of situations in which the sign of one or more derivatives changes. A segment also specifies a set of primitive actions that should occur while the segment is active. A problem is simply an abstract plan that has not yet been completely grounded.
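To make the segment representation concrete, the following sketch shows one way the qualitative-state decomposition might be coded. It is an illustrative reconstruction, not the ICARUS implementation: the names `Segment` and `segment_plan`, and the representation of each situation as a tuple of observed derivative values, are our assumptions. A new segment begins exactly where the sign of one or more derivatives changes, as described above.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


def sign(x: float) -> int:
    """Return -1, 0, or +1 for the sign of x."""
    return (x > 0) - (x < 0)


@dataclass
class Segment:
    """One qualitative 'state': an interval of situations with constant
    derivative signs, plus the primitive actions active during it.
    (Hypothetical structure, for illustration only.)"""
    start: int                      # index of first situation in the interval
    end: int                        # one past the last situation (exclusive)
    signs: Tuple[int, ...]          # sign of each observed derivative
    actions: List[str] = field(default_factory=list)


def segment_plan(derivatives: List[Tuple[float, ...]]) -> List[Segment]:
    """Split a sequence of situations (each a tuple of observed derivative
    values) into segments; a boundary falls between two situations whenever
    the sign of one or more derivatives changes."""
    segments: List[Segment] = []
    start = 0
    current = tuple(sign(d) for d in derivatives[0])
    for i, obs in enumerate(derivatives[1:], start=1):
        signs = tuple(sign(d) for d in obs)
        if signs != current:
            segments.append(Segment(start, i, current))
            start, current = i, signs
    segments.append(Segment(start, len(derivatives), current))
    return segments


# Example: four situations observing two derivatives; the first derivative
# flips sign at index 2 and the second at index 3, yielding three segments.
plan = segment_plan([(1.0, -0.5), (0.8, -0.2), (-0.3, -0.1), (-0.2, 0.4)])
```

Here the first segment covers situations 0-1 with sign vector `(+1, -1)`; the boundaries at indices 2 and 3 each mark a derivative changing sign.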