Abstract:
We provide a framework for studying learning in certain types of multi-agent systems (MAS) that divides an agent's knowledge about others into different types. We use concepts from computational learning theory to calculate the relative sample complexities of learning these different types of knowledge, given either a supervised or a reinforcement learning algorithm. These results hold only for learning a fixed target function, which is unlikely to exist when the other agents are also learning. We then show how a changing target function affects the learning behaviors of the agents, and how to determine the advantages of having a lower sample complexity. Our results can be used by the designer of a learning agent in a MAS to determine which knowledge to build into the agent and which knowledge the agent should learn on its own.