Autonomous agents are designed to carry out problem-solving actions in order to achieve given, or self-generated, goals. A central aspect of this design is the agent's decision-making function, which determines the right actions to perform in a given situation to best achieve the agent's objectives. Traditionally, this function has been solipsistic in nature, based upon the principle of individual utility maximisation. However, we believe that when designing multi-agent systems this may not be the most appropriate choice. Rather, we advocate a more social view of rationality which strikes a balance between the needs of the individual and those of the overall system. To this end, we describe a preliminary formulation of social rationality, indicate how the definition can vary depending on resource bounds, and illustrate its use in a fire-fighting scenario.
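The balance described above can be sketched as a decision function that scores each action by a weighted blend of individual and system-wide utility. This is only an illustrative sketch, not the paper's actual formulation: the function name, the action labels, the utility values, and the `weight` parameter (which could stand in for how resource bounds shift the balance) are all hypothetical.

```python
def socially_rational_choice(actions, individual, social, weight=0.5):
    """Return the action maximising a weighted blend of individual and
    system-wide utility; weight=1.0 recovers classic self-interested
    utility maximisation, weight=0.0 pure altruism. All names and
    numbers here are illustrative, not taken from the paper."""
    return max(actions, key=lambda a: weight * individual[a] + (1 - weight) * social[a])

# Hypothetical fire-fighting numbers: guarding one's own house is best
# for the individual agent, joining the bucket chain is best for the system.
individual = {"guard_own_house": 0.9, "join_bucket_chain": 0.3}
social = {"guard_own_house": 0.2, "join_bucket_chain": 1.0}
print(socially_rational_choice(list(individual), individual, social))
# -> join_bucket_chain (0.65 beats 0.55 under an even balance)
```

Raising `weight` towards 1.0 tips the same agent back into guarding its own house, which is one way to picture how the definition of social rationality can vary with the agent's circumstances.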