Effective team task performance requires that participants make appropriate use of all available knowledge and skills. In natural settings plagued by unreliable data and data sources, the difficulty of improving performance through aggregation grows with the number of available resources. To derive full value from pooled resources, teams must develop a shared perception of the task, knowledge of one another, and knowledge of the reliability of each other’s data. Initial participant roles and expectations are refined to incorporate past experience, and interaction patterns evolve to recognize individual strengths and weaknesses. While our understanding of informational dynamics in human teams remains incomplete, even less is known about the dynamics of human-computer teams. We hypothesized that effective human-computer performance may also require calibration of roles and expectations, so that the decision maker can accurately interpret software behavior and anticipate its limitations. This calibration should incorporate expectations, experience, and supporting evidence. We designed a target classification task in which a software agent played different roles, ranging from a simple data aggregator to a full-fledged decision maker. In experimental trials with 120 subjects, we found that higher levels of software participation positively influenced willingness to adopt aiding and resulted in improved task performance, even under conditions of known errors.