Abstract:
This project aims to compose background music in real time for tabletop role-playing games. To accomplish this goal, we propose a system called MTG that listens to the players' speech in order to recognize the context of the current scene and generate background music to match it. A speech recognition system transcribes the players' speech to text, and a supervised learning algorithm detects when scene transitions take place. In the current version, a scene transition occurs whenever the emotional state of the narrative changes. Moreover, the background music is not generated but selected, based on its emotion, from a library of hand-authored pieces. As future work, we plan to generate the background music considering the current scene context and the probability of a scene transition. We also plan to extract more information from the narrative to detect scene transitions, such as the scene's location and time of day, as well as actions taken by characters.
DOI:
10.1609/aiide.v13i1.12914