Although autonomous systems can support a wide variety of application goals, their perceived risk inhibits their deployment. While this problem is traditionally attributed to deficiencies in an agent’s skills, we take the novel perspective that the issue is one of communication: the agent simply needs a better representation of the user’s interests so that its choices will not produce unintended effects. This paper proposes a methodology for constructing artificial agents that is rooted in this perspective. In particular, we show how the goal of aligning agent-held objective functions with human utility can be transformed into a practical methodology for constructing, and then deploying, agents that provably act in their users’ best interests.