A decision space is defined by the range of options at the decision-maker's disposal. Each option carries a distribution of possible consequences. That distribution is a function of uncertainty about elements of the decision situation (how big is the fire?) and uncertainty about executing the course of action the option defines (what fraction of fire trucks will reach the scene, and when?). To aid decision-makers, we can use computer models to visualize this decision space, explicitly representing the distribution of consequences for each option. Because decisions in dynamic domains such as emergency response must be made in seconds or minutes, the underlying (possibly complex) simulation models must frequently recalculate the myriad plausible consequences of every possible decision choice. This raises the question of how much precision and fidelity such simulations need in order to support these decision spaces. If we can avoid needless fidelity that does not substantially change the decision space, we can save development cost and computational time, which in turn supports more tactical decision-making. This work explored the trade space of the precision/fidelity required of simulation models that feed data to decision-support tools. We performed sensitivity analyses to determine breakpoints where simulations become too imprecise to provide decision-quality data. The eventual goal of this work is to provide general principles, or a methodology, for determining the boundary conditions of needed precision/fidelity.
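The core ideas above (a distribution of consequences per decision option, and a fidelity knob whose coarsening may or may not change which option looks best) can be illustrated with a minimal Monte Carlo sketch. Everything here is hypothetical: the toy damage model, the arrival probability, the `precision` rounding used as a stand-in for reduced simulation fidelity, and all function names are illustrative assumptions, not the paper's actual models.

```python
import random
import statistics

def simulate_option(deploy_trucks, n_samples, seed, precision=None):
    """Monte Carlo sketch: a distribution of damage outcomes for one option.

    Toy model (assumed for illustration): fire size is situational
    uncertainty; whether each truck arrives in time is execution
    uncertainty. `precision`, if given, rounds the sampled fire size
    to that many decimal places -- a crude stand-in for lower fidelity.
    """
    rng = random.Random(seed)
    damages = []
    for _ in range(n_samples):
        fire_size = rng.uniform(1.0, 10.0)           # uncertain situation
        if precision is not None:
            fire_size = round(fire_size, precision)  # coarsened fidelity
        # Each dispatched truck independently arrives in time w.p. 0.8
        arrived = sum(rng.random() < 0.8 for _ in range(deploy_trucks))
        damages.append(max(fire_size - 0.9 * arrived, 0.0))
    return damages

def expected_damage(deploy_trucks, n_samples=2000, seed=0, precision=None):
    return statistics.mean(
        simulate_option(deploy_trucks, n_samples, seed, precision))

# Compare two decision options (dispatch 2 vs 6 trucks) at full fidelity...
full = {k: expected_damage(k) for k in (2, 6)}
# ...and at coarsened fidelity. The decision-relevant question is not
# whether the numbers shift, but whether the *ranking* of options flips.
coarse = {k: expected_damage(k, precision=0) for k in (2, 6)}

best_full = min(full, key=full.get)
best_coarse = min(coarse, key=coarse.get)
print(best_full, best_coarse)
```

In this sketch a fidelity breakpoint would show up as the first precision level at which `best_coarse` differs from `best_full`; sweeping `precision` downward is one simple way to locate it.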