Methods for planning in stochastic domains typically aim to find plans that minimize expected execution cost or maximize the probability of goal achievement. Researchers have largely ignored the question of how to incorporate risk-sensitive attitudes into their planning mechanisms. Since utility theory shows that it can be rational to maximize expected utility, one might believe that replacing all costs with their respective utilities (for an appropriate utility function) would achieve risk-sensitive attitudes without any change to the existing probabilistic planning methods. Unfortunately, we show that this is usually not the case and, moreover, that the best action in a state can depend on the total cost that the agent has already accumulated. However, we demonstrate how one can transform risk-sensitive planning problems into equivalent ones for risk-neutral agents, provided that utility functions with the delta property are used. The transformed risk-seeking planning problem can then be solved with any AI planning algorithm that either minimizes (or satisfices) expected execution cost or, equivalently, maximizes (or satisfices) the probability of goal achievement. Thus, one can extend the functionality of these planners to risk-sensitive planning.
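To make the abstract's central claim concrete, the following sketch contrasts a risk-neutral planner (minimizing expected cost) with a risk-seeking agent (maximizing expected exponential utility, which satisfies the delta property U(c1 + c2) = U(c1) * U(c2)). The two-action lottery and the parameter gamma = 0.9 are illustrative assumptions, not taken from the paper:

```python
# Hypothetical example (not from the paper):
# Action A: cost 10 for certain.
# Action B: cost 0 or cost 22, each with probability 0.5 (expected cost 11).
gamma = 0.9  # risk-seeking exponential utility U(c) = gamma**c, with 0 < gamma < 1

def expected_cost(outcomes):
    """Expected cost of a lottery given as [(probability, cost), ...]."""
    return sum(p * c for p, c in outcomes)

def expected_utility(outcomes, gamma):
    """Expected exponential utility; U is decreasing in cost and convex."""
    return sum(p * gamma ** c for p, c in outcomes)

A = [(1.0, 10)]
B = [(0.5, 0), (0.5, 22)]

# A risk-neutral planner minimizes expected cost and prefers A (10 < 11) ...
assert expected_cost(A) < expected_cost(B)
# ... but the risk-seeking agent maximizes expected utility and prefers
# the gamble B, even though B's expected cost is higher.
assert expected_utility(B, gamma) > expected_utility(A, gamma)
```

Because the exponential utility factors over accumulated cost, the transformation mentioned in the abstract can be applied locally to each action outcome, which is what allows an unmodified risk-neutral planner to solve the transformed problem.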