Abstract:
Codenames, a board game by Vlaada Chvátil, requires deep, multi-modal language understanding. One player, the codemaster, gives a clue to a second set of players, the guessers, who must determine which of 25 possible words on the board correspond to the clue. The nature of the game demands that language be understood in a multi-modal manner; the clue ‘cold’, for example, could refer to temperature or to disease. The recently proposed Codenames AI Competition seeks to advance natural language processing by using Codenames as a testbed for multi-modal language understanding. In this work, we evaluate a number of natural language processing techniques, ranging from neural approaches to classical knowledge-base methods, within the Codenames AI framework to determine how the different approaches perform. Each agent is evaluated both when paired with an identical agent and when paired with each of the other approaches, i.e., when it has no knowledge about its partner.
DOI:
10.1609/aiide.v15i1.5239