The goal of most agents is not only to reach a goal state but also (or instead) to place restrictions on their trajectories, in terms of states they must avoid and goals they must maintain. This is analogous to the notions of `safety' and `stability' in the discrete event systems and temporal logic communities. In this paper we argue that the notion of `stability' is too strong for formulating the `maintenance' goals of agents (in particular, reactive and software agents), and we give examples of such agents. We present a weaker notion of `maintainability' and show that our agents, which do not satisfy the stability criterion, do satisfy this weaker criterion. We give algorithms to test maintainability and to generate control for maintainability. We then develop the notion of `supportability', which generalizes both `maintainability' and `stabilizability'; develop an automata theory that distinguishes between exogenous and control actions; and develop a temporal logic based on it.
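To make the contrast concrete, the following is a small illustrative sketch of a bounded-recovery reading of maintainability on a toy transition system: after any disturbance the environment can cause from the desired states, the agent must be able to return to the desired set within k control actions. The function `k_maintainable`, the state names, and the transition dictionaries are all invented for illustration; this is not the paper's formal definition or algorithm.

```python
from collections import deque

def k_maintainable(desired, control, exo, k):
    """Illustrative check (a toy variant, not the paper's definition):
    can the agent always return to the desired set within k control
    actions after any exogenous disturbance?"""
    # 1. Compute the states the environment can reach from the
    #    desired set using exogenous actions alone.
    disturbed, frontier = set(desired), deque(desired)
    while frontier:
        s = frontier.popleft()
        for t in exo.get(s, ()):
            if t not in disturbed:
                disturbed.add(t)
                frontier.append(t)

    # 2. Every such state must reach the desired set in <= k control steps.
    def recoverable(s):
        seen, queue = {s}, deque([(s, 0)])
        while queue:
            u, d = queue.popleft()
            if u in desired:
                return True
            if d < k:
                for v in control.get(u, ()):
                    if v not in seen:
                        seen.add(v)
                        queue.append((v, d + 1))
        return False

    return all(recoverable(s) for s in disturbed)

# Toy system: the environment can heat things up; the agent cools in stages.
desired = {"ok"}
exo = {"ok": {"hot"}}                         # exogenous action: ok -> hot
control = {"hot": {"warm"}, "warm": {"ok"}}   # control actions back to ok

print(k_maintainable(desired, control, exo, k=2))  # True: recovery in 2 steps
print(k_maintainable(desired, control, exo, k=1))  # False: 1 step is not enough
```

Note that, unlike a stability requirement, nothing here forces the system to eventually stay inside the desired set forever; the environment may disturb it again and again, and the criterion only asks that recovery always remains possible within the window.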