This paper summarizes our efforts to address some of the technical and social aspects of agent design that bear on human acceptability. From a technical perspective, we want to ensure the protection of agent state, the viability of agent communities, and the reliability of the resources on which they depend. To accomplish this, we must guarantee, insofar as possible, that agent autonomy can always be bounded by explicit, enforceable policy that can be continually adjusted to maximize agents' effectiveness and safety in both human and computational environments. From a social perspective, we want agents to be designed to fit well with how people actually work together. Explicit policies governing human-agent interaction, based on careful observation of work practice and an understanding of current social science research, can help ensure effective and natural coordination, appropriate levels and modalities of feedback, and adequate predictability and responsiveness to human control. We see these technical and social factors as key to providing the reassurance and trust that are prerequisites to the widespread acceptance of agent technology for nontrivial applications.
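The idea of bounding agent autonomy with explicit, continually adjustable policy can be illustrated by a minimal sketch. The class and predicate names below are hypothetical illustrations, not part of the paper's framework: each proposed action is checked against the currently active policies before the agent may execute it, and policies can be added or removed at runtime.

```python
# Minimal sketch (hypothetical names) of policy-bounded autonomy:
# an action is permitted only if no active policy forbids it.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Policy:
    name: str
    forbids: Callable[[str], bool]  # predicate over proposed actions

@dataclass
class PolicyGovernedAgent:
    policies: List[Policy] = field(default_factory=list)

    def add_policy(self, policy: Policy) -> None:
        # Adding or removing policies at runtime lets the bounds on
        # autonomy be continually adjusted.
        self.policies.append(policy)

    def permitted(self, action: str) -> bool:
        # Every proposed action is screened against all active policies.
        return not any(p.forbids(action) for p in self.policies)

agent = PolicyGovernedAgent()
agent.add_policy(Policy("no-deletes", lambda a: a.startswith("delete")))
print(agent.permitted("read log"))     # True
print(agent.permitted("delete file"))  # False
```

A real policy framework would, of course, also need policy conflict resolution, obligation (not just prohibition) policies, and distributed enforcement; the sketch only shows the core check that keeps every action within explicitly stated bounds.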