A Commitment Theory of Emotions

Michel Aube

In the near future, useful software agents or robots will have to exhibit a fair amount of autonomy, with the capacity to pursue their own goals and even to set up their own motives. As complex and expensive tools, they will also have to be endowed with a built-in incentive to protect themselves and endure. As sophisticated "employees", they will eventually have to interact and cooperate with other agents of similar ontology, whether from the same company or from other ones. In such a scenario, the ability to establish and maintain solid and trustworthy alliances, as well as to detect enemies and profiteers, will likely become a matter of sheer survival. The article proposes that establishing and securing commitments between social agents is the key to harnessing the benefits of cooperation, and that emotions emerged through natural selection precisely to meet such requirements, by ensuring the management and control of commitments. The article concludes by listing some of the minimal specifications that seem required to implement emotions as commitment operators.


This page is copyrighted by AAAI. All rights reserved.