In this work, we address the problem of finding paths for multiple agents while avoiding collisions between them, where agents' actions have stochastic outcomes. The objective is to create a joint policy for all agents that minimizes the expected sum of costs of getting all agents to their goals, while guaranteeing that collisions never occur. Unlike previous work on multi-agent pathfinding (MAPF), the stochastic outcomes are not limited to delays, so the set of locations each agent may end up at can be very large. Consequently, offline planning is prohibitively expensive, since collisions between agents may occur at many locations and time steps, while avoiding them is a hard constraint. Instead, we propose a suboptimal online approach in which each agent follows its individually-optimal policy until it detects a potential future collision. Then, the potentially conflicting agents create a joint policy for resolving the potential collision. We evaluated this approach experimentally on an existing MAPF benchmark, modified to include stochasticity. The results show that we are able to find high-quality solutions for non-trivial grids with up to 12 agents, significantly surpassing several baseline approaches.
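The online scheme can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the 1-D grid, the stay-or-advance stochastic action model, and all function names are assumptions made for the example. It shows how a bounded-lookahead reachability check can flag agent pairs whose possible future locations overlap, which is the trigger for joint replanning.

```python
# Illustrative sketch (assumed model, not the paper's algorithm): each agent
# follows its own policy, and each action may stochastically succeed or leave
# the agent in place. Agents whose reachable sets intersect within a lookahead
# horizon are flagged as potentially conflicting and would then plan jointly.
from itertools import combinations


def reachable(pos, policy, horizon):
    """Locations an agent might occupy within `horizon` steps, assuming each
    action either succeeds (intended move) or fails (agent stays put)."""
    frontier, seen = {pos}, {pos}
    for _ in range(horizon):
        nxt = set()
        for p in frontier:
            nxt.add(p)          # action fails: agent stays where it is
            nxt.add(policy(p))  # action succeeds: intended next location
        frontier = nxt - seen
        seen |= nxt
    return seen


def potential_conflicts(positions, policies, horizon=3):
    """Agent pairs whose reachable location sets intersect, i.e. pairs for
    which a collision is possible within the lookahead horizon."""
    sets = [reachable(p, pol, horizon) for p, pol in zip(positions, policies)]
    return [(i, j) for i, j in combinations(range(len(sets)), 2)
            if sets[i] & sets[j]]
```

For example, two agents at cells 0 and 4 of a corridor, moving toward each other, are flagged with horizon 3, whereas agents at cells 0 and 10 are not; only the flagged pair would incur the cost of joint planning.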