Optimizing the level of detail of an interactive simulation involves maximizing its perceived scope while minimizing the computational resources required to maintain it. Varying levels of detail are common in computer graphics, but the challenges of doing so in simulations remain substantially less explored. The interactive simulations of video games often govern the behaviour of intelligent agents in the environment, and such behaviours can take substantial computational resources to maintain. As the ambitions of designers and players demand larger and more complex simulations, new strategies are needed to decouple the perceived scope of a simulation from its computational needs. To this end, we propose a way to automatically adjust between different levels of detail in an interactive, narrative planning context, while simultaneously identifying and visualizing the elements that can currently be perceived.