DOI: 10.1609/aiide.v8i1.12507
Abstract:
Speech recognition failures and limited vocabulary coverage pose challenges for speech interaction with characters in games. We describe an end-to-end system for automating characters from a large corpus of recorded human game logs, and demonstrate that inferring utterance meaning through a combination of plan recognition and surface text similarity compensates for recognition and understanding failures significantly better than relying on surface similarity alone.
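As a rough illustration of the kind of combination the abstract describes, the sketch below ranks candidate utterances from a corpus by mixing a plan-recognition prior with surface text similarity to the ASR output. This is not the paper's implementation; the scoring functions, candidate set, priors, and mixing weight are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's system): combine a plan-recognition
# prior with surface text similarity to pick the most likely intended
# utterance from a corpus of logged lines, despite ASR errors.
from collections import Counter
from math import sqrt


def surface_similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words token counts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0


def interpret(asr_text: str, candidates: dict[str, float], weight: float = 0.5) -> str:
    """Rank candidate corpus utterances by a weighted mix of the
    plan-recognition prior (candidates[utt]) and surface similarity
    to the (possibly misrecognized) ASR output."""
    return max(
        candidates,
        key=lambda utt: weight * candidates[utt]
        + (1 - weight) * surface_similarity(asr_text, utt),
    )


if __name__ == "__main__":
    # Hypothetical priors from a plan recognizer over what the player
    # is likely to say next in the current game state.
    plan_prior = {
        "open the door": 0.6,
        "pick up the key": 0.3,
        "attack the guard": 0.1,
    }
    # ASR mangles "open the door"; the plan prior plus partial word
    # overlap still recovers the intended utterance.
    print(interpret("hope in the door", plan_prior))
```

With surface similarity alone, a badly misrecognized utterance can match an unrelated line; weighting in the plan-recognition prior biases interpretation toward utterances that make sense in the current game context, which is the intuition behind the result reported in the abstract.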