Classical AI planners use loops over subgoals to move a stack of blocks by repeatedly moving the top block. Probabilistic planners and reactive systems repeatedly try to pick up a block to increase the probability of success in an uncertain environment. These planners terminate a loop only when the goal is achieved or when the probability of success has reached some threshold, ignoring the tradeoff between the cost of repeating the loop and its expected benefit. Decision-theoretic refinement planners take this tradeoff into account but, to date, have been limited to plans of finite length. In this paper, we describe extensions to a decision-theoretic refinement planner, DRIPS, that handle loops. The extended planner terminates a loop when it can show that every plan with one or more additional iterations of the loop has lower utility. We give conditions under which optimal plans are finite, and conditions under which the planner is guaranteed to find an optimal plan and terminate. With loops, a decision-theoretic refinement planner searches an infinite space of plans, making search control critical. We demonstrate that our search-control technique, based on sensitivity analysis, provides effective control without requiring the domain designer to hand-tune parameters or otherwise supply search-control information.
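The termination criterion described above can be illustrated with a minimal sketch. This is not the DRIPS implementation; `utility` and `upper_bound` are hypothetical stand-ins for the planner's utility function and an optimistic bound on the utility of every plan with at least a given number of iterations. The loop stops as soon as that bound falls below the best utility found, at which point all longer plans provably have lower utility.

```python
def best_iteration_count(utility, upper_bound, max_k=1000):
    """Return the number of loop iterations with highest utility.

    Stops as soon as the optimistic bound on all plans with >= k
    iterations drops to or below the best utility found so far,
    so no longer plan needs to be examined.
    """
    best_k, best_u = 0, utility(0)
    for k in range(1, max_k + 1):
        if upper_bound(k) <= best_u:
            return best_k  # every plan with >= k iterations is worse
        u = utility(k)
        if u > best_u:
            best_k, best_u = k, u
    return best_k

# Illustrative domain (assumed, not from the paper): each iteration
# raises the success probability but costs one unit of effort.
# Utility of k tries with per-try success probability 0.5:
u = lambda k: 10 * (1 - 0.5 ** k) - k
# Optimistic bound: pretend success is certain from k tries onward.
ub = lambda k: 10 - k
print(best_iteration_count(u, ub))  # -> 3
```

Here the search stops at k = 5, where the bound (10 - 5 = 5) can no longer beat the best utility found at k = 3, so the infinite family of longer plans is pruned without enumeration.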