Researchers in Distributed Artificial Intelligence have suggested that it would be worthwhile to isolate "aspects of cooperative behavior": general rules that cause agents to act in ways conducive to cooperation. One kind of cooperative behavior occurs when agents independently alter the environment so that everyone can function more effectively. An example would be putting away a hammer found lying on the floor, knowing that another agent will then be able to find it more easily. We examine the effect of a specific "cooperation rule" on agents in the multi-agent Tileworld domain: agents are encouraged to increase tiles' degrees of freedom, even for tiles not involved in the agent's own primary plan. The amount of extra work an agent is willing to do is captured by the agent's cooperation level. Results from simulations are presented. We present a way of characterizing domains as multi-agent deterministic finite automata, and of characterizing cooperative rules as transformations of these automata. We also discuss general characteristics of cooperative state-changing rules. We show that a relatively simple, easily calculated rule can sometimes improve global system performance in the Tileworld; coordination emerges among agents that use this rule, without any explicit coordination or negotiation.
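The cooperation rule sketched above can be illustrated with a toy example. This is not the paper's simulator; the grid encoding, the `degrees_of_freedom` helper, and the `should_cooperate` decision function are all illustrative assumptions, showing only the idea that an agent spends extra work on a tile, bounded by its cooperation level, when doing so would increase the tile's degrees of freedom.

```python
# Toy sketch (assumed names, not the paper's code): a tile's degrees of
# freedom are counted as its free orthogonal neighbors, and an agent
# agrees to extra work only up to its cooperation level.

FREE, OBSTACLE = ".", "#"

def degrees_of_freedom(grid, x, y):
    """Number of orthogonally adjacent free cells around the tile at (x, y)."""
    rows, cols = len(grid), len(grid[0])
    return sum(
        1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
        if 0 <= x + dx < rows and 0 <= y + dy < cols
        and grid[x + dx][y + dy] == FREE
    )

def should_cooperate(grid, tile, cooperation_level, cost_to_free):
    """Cooperation rule sketch: do the extra work if its cost fits within
    the agent's cooperation level and the tile is not already fully free."""
    x, y = tile
    return (cost_to_free <= cooperation_level
            and degrees_of_freedom(grid, x, y) < 4)

grid = [list(".#."), list("#T#"), list("...")]  # 'T' marks a tile at (1, 1)
print(degrees_of_freedom(grid, 1, 1))           # → 1 (only the cell below is free)
print(should_cooperate(grid, (1, 1), cooperation_level=2, cost_to_free=1))  # → True
```

A cooperation level of zero reduces the agent to purely self-interested behavior; raising it makes the agent willing to clear obstacles around tiles it never intends to use, which is the kind of independent environment-altering work the abstract describes.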