Point-based methods are an effective way to produce reasonable solutions for large POMDP problems. Their effectiveness relies on the fact that existing policies can be evaluated efficiently and accurately at a specific belief state. In decentralized POMDPs (DEC-POMDPs), however, this evaluation cannot be done efficiently, because the belief state must also include a belief about the other agents' policies. Current point-based DEC-POMDP methods therefore take a slightly different tack: because each point requires more computation, they select fewer points and generate multiple new policies for each one. Observation aggregation techniques for DEC-POMDPs represent one implementation of this compromise. In this paper, we explore the ramifications of applying previously developed DEC-POMDP aggregation methods to POMDPs.