We describe Dec-RPAE, a system for decentralized multi-agent acting and planning in partially observable and non-deterministic environments. The system includes both an acting component and an online planning component. The acting component is similar to RAE, a well-known acting engine, but incorporates changes that enable it to be used by multiple autonomous agents working independently in a collaborative setting. Each agent runs a local copy of Dec-RPAE with a set of hierarchical refinement methods, whose operational models specify various ways to accomplish the agent's designated tasks. To perform actions, the agent uses Dec-RPAE's acting component to execute the methods in the agent's environment. To advise the acting component on which method to execute, the planning component repeatedly performs Monte Carlo simulations of the methods to estimate their potential outcomes. Agents can communicate with each other to exchange information about their states, tasks, goals, and plans in order to cooperatively succeed in their respective missions. Our experimental results show that Dec-RPAE improves the agents' performance.
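To make the planner's advisory role concrete, the sketch below illustrates the general idea of Monte Carlo method selection: estimate each candidate method's expected utility by averaging simulated rollouts, then recommend the best one. This is a minimal illustration under invented assumptions (the toy simulator, reward values, and method names are hypothetical), not the paper's actual algorithm.

```python
import random

def rollout(method, simulate_step, horizon=20):
    """Simulate one execution of `method` in a sampled environment;
    return the accumulated utility over at most `horizon` steps."""
    utility = 0.0
    state = "start"
    for _ in range(horizon):
        state, reward, done = simulate_step(method, state)
        utility += reward
        if done:
            break
    return utility

def choose_method(methods, simulate_step, n_rollouts=200):
    """Monte Carlo method selection: average the utility of many
    simulated rollouts per method and return the highest-scoring one."""
    estimates = {
        m: sum(rollout(m, simulate_step) for _ in range(n_rollouts)) / n_rollouts
        for m in methods
    }
    return max(estimates, key=estimates.get)

# Hypothetical non-deterministic simulator: method "a" succeeds with
# probability 0.8 per step, method "b" with probability 0.3; each step
# costs 0.1, and success yields a reward of 1.0.
def toy_step(method, state):
    p = 0.8 if method == "a" else 0.3
    if random.random() < p:
        return "done", 1.0 - 0.1, True
    return state, -0.1, False
```

Because method "a" succeeds far more often in the toy simulator, the rollout estimates favor it, and `choose_method(["a", "b"], toy_step)` will almost always recommend "a". In Dec-RPAE, an analogous estimate would be refreshed as the agent's state and tasks evolve.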