An emerging trend in multiagent systems research is the use of decentralized control models (notably the decentralized MDP formalism) as the basis for decision-theoretic studies of multiagent problem solving. Research often focuses on developing solution algorithms for special classes of decentralized MDPs. While these studies will no doubt enhance our understanding of the formal foundations of multiagent systems, they have a long way to go toward the actual design and deployment of decision-theoretic agents. This motivates new thinking about the role of MDPs in multiagent research. Since MDP policies define the mapping between an agent’s knowledge (information states) and its actions, they can be viewed as abstractions of, or external views of, an agent’s internal reasoning processes. As such, we propose the use of MDPs as a tool for agents to establish meta-level capabilities such as self-monitoring and self-control, capabilities that are essential for agents to learn organizational knowledge. Furthermore, we propose applying the MDP formalism at the group level to represent group control and coordination knowledge.
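To make concrete the claim that an MDP policy is an explicit mapping from information states to actions, the following is a minimal sketch, not taken from the paper: value iteration on a hypothetical two-state self-monitoring domain (the agent believes its current plan is "ok" or "failing"), in which the computed policy is literally a state-to-action table. All states, actions, and numeric values are illustrative assumptions.

```python
# Hypothetical self-monitoring MDP: the agent's information state is its
# belief about its current plan ("ok" or "failing"); its meta-level
# actions are to "continue" the plan or "replan".
STATES = ["ok", "failing"]
ACTIONS = ["continue", "replan"]

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
# All numbers are illustrative assumptions, not from the paper.
P = {
    "ok":      {"continue": [("ok", 0.9), ("failing", 0.1)],
                "replan":   [("ok", 1.0)]},
    "failing": {"continue": [("failing", 1.0)],
                "replan":   [("ok", 0.8), ("failing", 0.2)]},
}
R = {
    "ok":      {"continue": 1.0,  "replan": -0.5},
    "failing": {"continue": -1.0, "replan": -0.5},
}
GAMMA = 0.95  # discount factor

def q_value(s, a, V):
    """Expected discounted return of taking action a in state s."""
    return R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])

def value_iteration(eps=1e-6):
    """Return the optimal policy as an information-state -> action table."""
    V = {s: 0.0 for s in STATES}
    while True:
        V_new = {s: max(q_value(s, a, V) for a in ACTIONS) for s in STATES}
        done = max(abs(V_new[s] - V[s]) for s in STATES) < eps
        V = V_new
        if done:
            break
    # The policy is an external, table-like view of the agent's decision
    # process: for each information state, the action it will take.
    return {s: max(ACTIONS, key=lambda a: q_value(s, a, V)) for s in STATES}

policy = value_iteration()
# e.g. policy == {"ok": "continue", "failing": "replan"}
```

The resulting `policy` dictionary illustrates the abstraction discussed above: nothing about the agent's internal deliberation is exposed, only the mapping from what it knows to what it does.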