Cooperative behavior is a desired trait in many fields, from computer games to robotics. Yet achieving cooperative behavior is often difficult, as maintaining shared information about the dynamics of agents in the world can be complex. We focus on the specific task of cooperative pathfinding and introduce a new approach based on the idea of "direction maps," which learn about the movement of agents in the world. This learned data is then used to produce implicit cooperation between agents. The resulting approach is less expensive than, and outperforms, several existing cooperative pathfinding algorithms.
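The core idea of a direction map can be illustrated with a minimal sketch. The class below is a hypothetical illustration, not the paper's implementation: each grid cell keeps a running average of the unit direction vectors of agents observed moving through it, and a pathfinder can then penalize moves that oppose the learned flow, yielding implicit cooperation. The names (`DirectionMap`, `observe`, `move_cost`) and the exponential-moving-average update are assumptions for this sketch.

```python
import math

class DirectionMap:
    """Hypothetical sketch of a direction map for cooperative pathfinding.

    Each cell stores a running average of the unit direction vectors of
    agents observed moving through it.
    """

    def __init__(self, width, height, learning_rate=0.1):
        self.lr = learning_rate
        self.grid = [[(0.0, 0.0) for _ in range(width)] for _ in range(height)]

    def observe(self, x, y, dx, dy):
        """Blend an observed move direction into cell (x, y)."""
        norm = math.hypot(dx, dy)
        if norm == 0:
            return
        ux, uy = dx / norm, dy / norm
        ox, oy = self.grid[y][x]
        # Exponential moving average of observed unit directions.
        self.grid[y][x] = ((1 - self.lr) * ox + self.lr * ux,
                           (1 - self.lr) * oy + self.lr * uy)

    def move_cost(self, x, y, dx, dy, base_cost=1.0, penalty=1.0):
        """Edge cost for moving in direction (dx, dy) out of cell (x, y).

        Moves aligned with the learned flow stay near base_cost;
        moves against it pay up to `penalty` extra.
        """
        norm = math.hypot(dx, dy)
        if norm == 0:
            return base_cost
        ux, uy = dx / norm, dy / norm
        mx, my = self.grid[y][x]
        # Dot product in [-1, 1]: +1 means fully aligned with the flow.
        alignment = ux * mx + uy * my
        return base_cost + penalty * (1.0 - alignment) / 2.0
```

Plugging `move_cost` into the edge weights of an ordinary single-agent planner such as A* is one way agents could avoid moving against prevailing traffic without explicitly coordinating.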