Team formation, i.e., allocating agents to roles within a team or its subteams, and the reorganization of a team upon member failure or the arrival of new tasks are critical aspects of teamwork. Despite significant progress, research in multiagent team formation and reorganization has failed to provide a rigorous analysis of the computational complexity of the approaches proposed or of their degree of optimality. This shortcoming has hindered quantitative comparisons of approaches and of their complexity-optimality tradeoffs; e.g., is the team reorganization approach in practical teamwork models such as STEAM optimal in most cases, or only as an exception? To alleviate these difficulties, this paper presents R-COM-MTDP, a formal model based on decentralized communicating POMDPs in which agents explicitly take on and change roles to (re)form teams. R-COM-MTDP significantly extends the earlier COM-MTDP model by analyzing how agents' roles, local states, and reward decompositions gradually reduce the complexity of its policy generation from NEXP-complete to PSPACE-complete to P-complete. We also encode key role reorganization approaches (e.g., STEAM) as R-COM-MTDP policies and compare them with a locally optimal policy derivable in R-COM-MTDP, thus illustrating the complexity-optimality tradeoffs both theoretically and empirically.