Many systems (typically robotic ones) are controlled by a multiagent system (MAS). In such systems, the group of agents is responsible for deciding which action the system should take. The methods used to select this action are known as action selection mechanisms, and a wide range of them exists in the literature (a review of coordination mechanisms can be found in (Weiss 1999)). We have focused on developing an action selection mechanism within the framework of a robot navigation system. An interesting characteristic of such a system is that there is a clear distinction between the agents that have a global view of the task to be performed (i.e. navigating to reach a target) and those that have only local views (such as avoiding obstacles). The agents with global views have a wider perception of the environment than those with local views, and they have the knowledge needed to solve the task. The agents with local views, on the other hand, have a very restricted perception and are only capable of solving local situations. This distinction also applies to the kind of contribution each agent makes to the system. The agents with global views contribute to reaching the goal and, in terms of rewards, maximise the benefit associated with the robot's performance, while the agents with local views contribute to solving specific problems and minimise local costs that affect the final benefit for the system.

In this paper we present an action selection mechanism based on this idea. The agents in the MAS are divided into two groups: global agents and local agents. By combining the different views of the agents, the action found to be the most beneficial for reaching the goal is selected. The aim of this mechanism is not to find optimal behaviours but robust ones: it does not find optimal paths to the target, but it ensures that the robot reaches it.
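The idea of combining global and local views can be sketched as follows. This is a minimal illustration, not the paper's actual mechanism: the combination rule (summed global benefits minus summed local costs), the agent names, and the action set are all assumptions made for the example.

```python
from typing import Callable, List

Action = str

def select_action(
    actions: List[Action],
    global_agents: List[Callable[[Action], float]],  # each estimates a benefit
    local_agents: List[Callable[[Action], float]],   # each estimates a local cost
) -> Action:
    """Select the action whose combined view, i.e. global benefit
    minus local costs, is highest (assumed combination rule)."""
    def net_benefit(a: Action) -> float:
        benefit = sum(g(a) for g in global_agents)
        cost = sum(l(a) for l in local_agents)
        return benefit - cost
    return max(actions, key=net_benefit)

# Toy usage: a hypothetical goal-seeking agent prefers moving forward,
# but an obstacle-avoidance agent assigns a high cost to it, so a
# turning action wins instead.
goal_agent = {"forward": 1.0, "turn_left": 0.4, "turn_right": 0.3}.get
avoid_agent = {"forward": 2.0, "turn_left": 0.1, "turn_right": 0.1}.get

print(select_action(["forward", "turn_left", "turn_right"],
                    [goal_agent], [avoid_agent]))  # -> turn_left
```

In this toy run the global agent alone would choose "forward", but the local agent's cost for that action outweighs its benefit, so the combined view selects a safer turn.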