Markov decision processes (MDPs) offer a rich model that has been extensively used by the AI community for planning and learning under uncertainty. However, solving MDPs exactly is often intractable, which has led to the development of many approximate algorithms. In my dissertation work I introduce a new paradigm for handling this complexity by defining a family of reduced MDP models characterized by two parameters: the maximum number of primary outcomes per action that are fully accounted for, and the maximum number of occurrences of the remaining exceptional outcomes that are planned for in advance. Reduced models can be solved much faster using heuristic search algorithms, benefiting from the dramatic reduction in the number of reachable states. This framework places recent work on MDP determinization in a broader context and lays the foundation for efficient and systematic exploration of the space of MDP model reductions. Progress so far includes a formal definition of this family of MDP reductions, a continual planning paradigm that handles the case in which the number of exceptions reaches the maximum allowed, a simple greedy approach to generating good reductions for a given planning domain, and a compilation scheme that generates MDP reductions from a PPDDL description of a planning problem.
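The two-parameter reduction described above can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the dissertation's implementation: function and parameter names (`reduce_outcomes`, `k`, `j_max`, the `(state, exception_count)` augmented state) are hypothetical, and primary outcomes are assumed to be selected by probability rank.

```python
def reduce_outcomes(outcomes, k):
    """Split a list of (successor, probability) pairs into the k most
    likely 'primary' outcomes and the remaining 'exceptional' ones.
    (Hypothetical helper; the selection criterion is an assumption.)"""
    ranked = sorted(outcomes, key=lambda o: o[1], reverse=True)
    return ranked[:k], ranked[k:]

def reduced_transition(state, exc_count, outcomes, k, j_max):
    """Transition function of a reduced model over augmented states
    (state, exceptions-so-far). While fewer than j_max exceptions have
    occurred, exceptional outcomes are still planned for (incrementing
    the counter); once the budget is exhausted, only primary outcomes
    remain and their probabilities are renormalized."""
    primary, exceptional = reduce_outcomes(outcomes, k)
    successors = []
    if exc_count < j_max:
        # Full distribution kept; exceptions advance the counter.
        for s, p in primary:
            successors.append(((s, exc_count), p))
        for s, p in exceptional:
            successors.append(((s, exc_count + 1), p))
    else:
        # Exception budget exhausted: drop exceptions, renormalize.
        total = sum(p for _, p in primary)
        for s, p in primary:
            successors.append(((s, exc_count), p / total))
    return successors
```

Under this sketch, setting `k = 1` and `j_max = 0` recovers a determinization of the original MDP, which is how determinization fits into the broader family as a special case.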