Developing and testing intelligent agents is a complex task that is both time-consuming and costly. This is especially true for agents whose behavior is judged not only by the final states they achieve, but also by how they accomplish their tasks. In this paper, we examine methods for ensuring that an agent upholds constraints particular to a given domain. We review two significant projects addressing this problem and determine that two properties are crucial to success in complex domains. First, we need efficient methods for representing domain constraints and for testing potential actions for consistency with them. Second, behavior must be assessed at run-time, not only during a planning phase. Finally, we explore how abstract behavior representations might be used to satisfy these two properties. We close with a brief discussion of the current state of our self-assessment framework and our plans for future work.