Abstract:
In a multiplayer game, your opponents are other human players. These players make mistakes, and mistakes and miscalculations create opportunities for other players. The challenge (and ultimately the fun) of a multiplayer game is the give and take of human interaction. The standard opponent in a first-person shooter uses a finite-state machine and a series of hand-coded rules. Drawbacks of this approach include highly predictable opponents and the large amount of work required to program each rule by hand. Mimicking the multiplayer experience of human-vs.-human combat typically requires extensive tuning for game balance. Because of the difficulty of the problem, most single-player games instead focus on story and other game types. A perfect artificial opponent for a first-person shooter has never been modeled. Modern advances in machine learning have enabled agents to accurately learn rules from a set of examples. By sampling data from an expert player, we use these machine learning algorithms to model a player in a first-person shooter. With this system in place, the programmer spends less time hand-coding combat rules, and the learned behaviors are often more unpredictable and lifelike than those of a hard-wired finite-state machine. This paper explores several popular machine learning algorithms and shows how they can be applied to the game. We show that a subset of AI behaviors can be learned effectively through player modeling, using neural network classifiers trained with boosting and bagging. With this system we have successfully learned the combat behaviors of an expert player and applied them to an agent in a modified version of the video game Soldier of Fortune 2. The learning system also has the potential to be extended to many other game types.
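As a rough illustration of the approach summarized above, the sketch below trains a bagged ensemble of small neural network classifiers on combat snapshots sampled from an expert player. The feature vector, action labels, and use of scikit-learn are assumptions made for illustration only; they are not the paper's actual game integration or data pipeline.

```python
# Minimal sketch: player modeling with bagged neural-network classifiers.
# Features, action labels, and the scikit-learn pipeline are illustrative
# assumptions, not the implementation described in the paper.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical combat snapshots sampled from an expert player:
# [distance_to_enemy, relative_angle, own_health, enemy_visible]
rng = np.random.default_rng(0)
X = rng.random((500, 4))
# Hypothetical action labels recorded at each snapshot:
# 0 = attack, 1 = strafe, 2 = retreat
y = rng.integers(0, 3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: each small neural network is trained on a bootstrap sample,
# and the ensemble votes on the predicted combat action.
model = BaggingClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(16,), max_iter=500),
    n_estimators=10,
    random_state=0,
)
model.fit(X_train, y_train)

# At runtime, the agent would query the model with the current game state
# and execute the predicted combat action.
print("held-out accuracy:", model.score(X_test, y_test))
```

In practice the snapshots would come from instrumented gameplay logs rather than random data, and the predicted action would be mapped onto the agent's existing movement and firing controls in the game engine.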