Abstract:
Autonomous systems must consider the moral ramifications of their actions. Moral norms vary among people and depend on common sense, which makes encoding them explicitly in a system challenging. I propose to develop a model of repeated analogical chaining and analogical reasoning that enables autonomous agents to interactively learn to apply common sense and to model an individual’s moral norms.
DOI:
10.1609/aaai.v30i1.9817