A method is presented for training an Input-Output Hidden Markov Model (IOHMM) to identify a player's current goal in an action-adventure game. The three goals (Explore, Fight, and Return to Town) served as the hidden states of the IOHMM. The observation model was trained by directing players to pursue particular goals and counting the actions they took. When trained on first-time players, models fit to individual players showed no apparent benefit over a model trained on the experimenter. However, models trained on the same players' subsequent trials were significantly better than both their first-trial counterparts and the experimenter-trained model. This suggests that goal recognition systems for games are best trained after players have had time to develop a style of play. Such systems for probabilistic reasoning over time could help game designers make games more responsive to players' individual styles and approaches.
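To make the recognition scheme concrete, the sketch below shows forward filtering over the three goal states described above. The transition and observation probabilities are illustrative assumptions only (the paper estimates its observation model empirically by counting actions under directed goals), and the action vocabulary is hypothetical.

```python
import numpy as np

# Hypothetical sketch of Bayesian goal filtering over the three goal
# states named in the abstract. All probability values here are
# illustrative assumptions, not trained parameters from the paper.

GOALS = ["Explore", "Fight", "ReturnToTown"]
ACTIONS = ["move", "attack", "use_portal"]  # hypothetical action set

# Goal transition matrix: T[i, j] = P(goal_t = j | goal_{t-1} = i).
T = np.array([
    [0.90, 0.05, 0.05],
    [0.10, 0.85, 0.05],
    [0.15, 0.05, 0.80],
])

# Observation model: B[i, k] = P(action = k | goal = i), the kind of
# distribution one could estimate by directing a player to pursue a
# goal and counting the actions taken.
B = np.array([
    [0.80, 0.15, 0.05],  # Explore: mostly movement
    [0.20, 0.75, 0.05],  # Fight: mostly attacks
    [0.60, 0.05, 0.35],  # ReturnToTown: movement plus portal use
])

def filter_goals(actions, prior=None):
    """Return the posterior over goals after each observed action."""
    belief = np.full(len(GOALS), 1.0 / len(GOALS)) if prior is None else prior
    posteriors = []
    for a in actions:
        k = ACTIONS.index(a)
        belief = (belief @ T) * B[:, k]  # predict, then weight by likelihood
        belief /= belief.sum()           # renormalize to a distribution
        posteriors.append(belief.copy())
    return posteriors

if __name__ == "__main__":
    trace = ["move", "attack", "attack", "move", "use_portal"]
    for a, p in zip(trace, filter_goals(trace)):
        print(a, dict(zip(GOALS, np.round(p, 3))))
```

A full IOHMM additionally conditions the transition probabilities on input variables (e.g., game context); the plain HMM filter above keeps only the goal-as-hidden-state structure for brevity.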