Abstract:
Trust between agents has been explored extensively in the literature. However, trust between agents and users has largely been left untouched. In this paper, we report preliminary results on how reinforcement-learning agents (i.e., broker agents, or brokers) win the trust of their clients in an artificial market, I-TRUST. The goals of these broker agents are not only to maximize total revenue subject to their clients' risk preferences, as most other agents do [LeBaron et al. 1997; Parkes and Huberman 2001; Schroeder et al. 2000], but also to maximize the trust they receive from their clients. Trust is introduced into I-TRUST as a relationship between clients and their software broker agents, measured by the amount of money clients are willing to give these agents to invest on their behalf. To achieve this, broker agents first elicit user models, both explicitly through questionnaires and implicitly through three games. Then, based on the initial user model, a broker agent learns to invest and updates the model when necessary. In addition to each broker agent's individual learning of how to maximize the reward it receives from its client, we have incorporated cooperative reinforcement learning among agents to adjust their portfolio-selection strategies, implemented in FIPA-OS. A large-scale experiment is planned as future work.
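
The abstract describes a reward that combines investment revenue, the client's risk preference, and the trust the client expresses through the funds they allocate to the broker. The sketch below is not the paper's implementation; it is a minimal illustration, under assumed names and weights, of how such a combined reward signal could be computed for one investment period.

```python
# Minimal sketch (illustrative only, not the I-TRUST implementation):
# a broker agent's per-period reward combines portfolio return, a penalty
# for exceeding the client's risk preference, and the change in client
# trust, modelled here as the change in funds the client entrusts to it.
# All names (risk_tolerance, trust_weight, ...) are assumptions.

from dataclasses import dataclass


@dataclass
class ClientModel:
    risk_tolerance: float   # maximum portfolio risk the client accepts
    allocated_funds: float  # money the client currently entrusts to the broker


def broker_reward(portfolio_return: float,
                  portfolio_risk: float,
                  client_before: ClientModel,
                  client_after: ClientModel,
                  risk_penalty: float = 10.0,
                  trust_weight: float = 0.5) -> float:
    """Scalar reward for one investment period."""
    # Revenue component, penalised when realised risk exceeds the client's preference.
    revenue = portfolio_return - risk_penalty * max(
        0.0, portfolio_risk - client_before.risk_tolerance)
    # Trust component: growth in the funds the client is willing to entrust.
    trust_delta = client_after.allocated_funds - client_before.allocated_funds
    return revenue + trust_weight * trust_delta


# Example: a 3% return at acceptable risk, after which the client adds $500.
before = ClientModel(risk_tolerance=0.2, allocated_funds=10_000.0)
after = ClientModel(risk_tolerance=0.2, allocated_funds=10_500.0)
print(broker_reward(0.03, 0.15, before, after))
```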