Recent exciting, ambitious applications of agent technology involve agents acting individually or in teams in support of critical activities of individual humans or entire human organizations. Applications range from intelligent homes, to "routine" organizational coordination, to electronic commerce, to long-term space missions. These new applications have brought an increasing interest in agents' adjustable autonomy (AA), i.e., in agents dynamically adjusting their own level of autonomy based on the situation. In fact, many of these applications will not be deployed unless reliable AA reasoning is a central component. At the heart of AA is the question of whether and when agents should make decisions autonomously and when they should transfer decision-making control to other entities (e.g., human users). Unfortunately, previous work in adjustable autonomy has focused on individual agent-human interactions, and the techniques developed fail to scale up to complex heterogeneous organizations. Indeed, as a first step, we focused on a small-scale but real-world agent-human organization called Electric Elves, where an individual agent and a human worked together within a larger multiagent context. Although the application limits the interactions among entities, key weaknesses of previous approaches to adjustable autonomy are readily apparent. In particular, previous approaches to transfer of control prove too rigid, employing one-shot transfers of control that can result in unacceptable coordination failures. Furthermore, they ignore the potential costs (e.g., from delays) that such transfers of control impose on an agent's team. To remedy these problems, we propose a novel approach to AA, based on the notion of a transfer-of-control strategy.
A transfer-of-control strategy consists of a conditional sequence of two types of actions: (i) actions to transfer decision-making control (e.g., from the agent to the user or vice versa) and (ii) actions to change an agent's pre-specified coordination constraints with team members, aimed at minimizing miscoordination costs. The goal is for high-quality individual decisions to be made with minimal disruption to the team's coordination. We operationalize such strategies via Markov decision processes (MDPs), which select the optimal strategy given an uncertain environment and the costs to individuals and teams. To facilitate applying the approach to different domains, we have developed a general reward function and state representation for such an MDP. We present results from a careful evaluation of this approach, including its use in our real-world, deployed Electric Elves system.
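To make the idea of MDP-selected transfer-of-control strategies concrete, the toy sketch below runs finite-horizon value iteration over three illustrative action types: deciding autonomously, transferring control to a human who may not respond before the next step, and a one-off coordination-change action ("delay") that reduces the team's per-step waiting cost. All state variables, actions, probabilities, and costs here are hypothetical illustrations chosen for this sketch, not the actual Electric Elves model or its reward function.

```python
# Toy finite-horizon MDP for choosing a transfer-of-control strategy.
# All parameters below are illustrative assumptions, not values from the paper.

EQ_AGENT, EQ_HUMAN = 0.6, 1.0       # expected decision quality of each entity
P_RESPOND = 0.4                     # chance the human responds in any one step
WAIT_COST, DELAY_COST = 0.15, 0.1   # per-step and one-off coordination costs
T = 6                               # decision deadline (number of steps)

def solve():
    # V[(t, d)]: expected value at step t; d = 1 once a delay has been bought.
    # At the deadline the agent must decide autonomously.
    V = {(T, 0): EQ_AGENT, (T, 1): EQ_AGENT}
    policy = {}
    for t in range(T - 1, -1, -1):
        for d in (1, 0):            # compute d=1 first: "delay" consults it
            wait = WAIT_COST * (0.5 if d else 1.0)
            q = {
                # Agent decides now, at its own (lower) decision quality.
                "act": EQ_AGENT,
                # Transfer control: human may answer this step; otherwise we
                # wait, paying the team a miscoordination cost for the step.
                "transfer": P_RESPOND * EQ_HUMAN
                            + (1 - P_RESPOND) * V[(t + 1, d)] - wait,
            }
            if not d:
                # Coordination-change action: pay once, halve waiting costs.
                q["delay"] = V[(t, 1)] - DELAY_COST
            best = max(q, key=q.get)
            policy[(t, d)] = best
            V[(t, d)] = q[best]
    return policy, V

policy, V = solve()
```

Backward induction over the horizon yields a conditional sequence of actions rather than a one-shot transfer: under these numbers the optimal strategy first changes the coordination constraints, then waits on the human, and falls back to autonomous action at the deadline, which is precisely the flexibility the strategy notion is meant to capture.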