Interactive narratives (INs) are stories that branch and change based on the actions of a participant. A class of automated systems generates INs in which all story branches conform to a set of constraints predefined by an author. Participants in these systems may create invalid branches by navigating the story world outside the bounds of the author's constraints. Two existing methods, choice removal and intervention, are designed to mitigate these situations. However, these methods are expected to lower invisibility, meaning the participant recognizes them as system manipulations. In this paper we present an evaluation of a new method, domain revision, that is designed to have no negative effect on invisibility. We measure invisibility by asking survey participants how believable a choice's options and outcomes are in the context of a Choose Your Own Adventure story. We find that domain revision is more believable than choice removal when applied to a choice's options. We also find that domain revision is as believable as intervention when applied to a choice's outcomes, since intervention itself does not cause a drop in invisibility.