Computer scientists continually strive to find better and faster algorithms for computational problems. As a result, a programmer or user who wants to solve a particular problem efficiently typically faces a plethora of candidate algorithms. Each of these algorithms may offer different guarantees and properties, but it is unlikely that any single one is the best (fastest) in all possible cases. The question the programmer or user then faces is: "Which algorithm should I select?" This question arises largely from uncertainty: uncertainty in the input space, in the inner workings of the algorithm (especially for randomized algorithms), and in the characteristics of the hardware. It is hard to know in advance what kinds of inputs will be provided, how exactly the computation will proceed, or how efficiently the underlying hardware will support the needs of the different algorithms. A careful study can sometimes reveal that committing to one particular algorithm is better than committing to any of the others, but is this the best we can do? What if uncertainty were taken into account explicitly, and the right decision were made dynamically, on an instance-by-instance basis?
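As a minimal sketch of what instance-by-instance selection might look like, consider choosing between two sorting algorithms at run time based on cheap features of the input. The function names, the size threshold, and the "nearly sorted" heuristic below are all illustrative assumptions, not part of the text:

```python
def insertion_sort(a):
    """Fast on small or nearly-sorted inputs; O(n^2) in the worst case."""
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """O(n log n) on every input, but with larger constant factors."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def select_and_sort(a, small_threshold=32):
    """Hypothetical per-instance selector: inspect the instance first,
    then dispatch to whichever algorithm is expected to be faster on it."""
    # Cheap feature: count adjacent out-of-order pairs as a proxy for sortedness.
    disorder = sum(a[i] > a[i + 1] for i in range(len(a) - 1))
    if len(a) <= small_threshold or disorder <= len(a) // 10:
        return insertion_sort(a)
    return merge_sort(a)
```

The point is not these particular heuristics, but the shape of the decision: rather than committing to one algorithm in advance, the selector consults features of each concrete instance before choosing.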