Using Deep Convolutional Neural Networks to Detect Rendered Glitches in Video Games

Authors

  • Carlos García Ling, KTH Royal Institute of Technology
  • Konrad Tollmar, SEED Electronic Arts, Stockholm
  • Linus Gisslén, SEED Electronic Arts, Stockholm

DOI:

https://doi.org/10.1609/aiide.v16i1.7409

Abstract

In this paper, we present a method using Deep Convolutional Neural Networks (DCNNs) to detect common glitches in video games. The problem setting consists of an image (800x800 RGB) as input to be classified into one of five defined classes: a normal image, or one of four kinds of glitches (stretched, low-resolution, missing, and placeholder textures). Using a supervised approach, we train a ShuffleNetV2 on generated data. This work focuses on detecting texture graphical anomalies, achieving an accuracy of 86.8% and detecting 88% of the glitches with a false positive rate of 8.7%, with the models able to generalize and detect glitches even in unseen objects. We also apply a confidence measure to tackle the issue of false positives, as well as an effective way of aggregating images to achieve better detection in production. The main use of this work is the partial automation of graphical testing in the final stages of video game development.
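The classification setup described in the abstract can be illustrated with a short sketch. The following is not the authors' implementation; it is a minimal example assuming a PyTorch/torchvision ShuffleNetV2 with five output classes, a hypothetical softmax-confidence threshold to suppress false positives, and one plausible way of aggregating several frames of the same object (the class names, threshold value, and averaging scheme are illustrative assumptions).

```python
# Hypothetical sketch of the paper's setup: a 5-class glitch classifier
# on 800x800 RGB frames, with a confidence threshold and frame aggregation.
import torch
import torch.nn.functional as F
from torchvision.models import shufflenet_v2_x1_0

# Assumed class labels; the paper defines one "normal" class and four glitch types.
CLASSES = ["normal", "stretched", "low_resolution", "missing", "placeholder"]

model = shufflenet_v2_x1_0(num_classes=len(CLASSES))
model.eval()

@torch.no_grad()
def classify_frame(frame: torch.Tensor, threshold: float = 0.9) -> str:
    """Classify a single (3, 800, 800) frame tensor.

    Predictions whose softmax confidence falls below `threshold` are
    treated as "normal" to keep the false-positive rate low
    (an assumed heuristic, not the paper's exact confidence measure).
    """
    probs = F.softmax(model(frame.unsqueeze(0)), dim=1).squeeze(0)
    conf, idx = probs.max(dim=0)
    return CLASSES[idx] if conf.item() >= threshold else "normal"

@torch.no_grad()
def classify_aggregate(frames: list, threshold: float = 0.9) -> str:
    """Aggregate several views of the same object by averaging their
    softmax outputs before thresholding (one plausible aggregation scheme)."""
    probs = torch.stack(
        [F.softmax(model(f.unsqueeze(0)), dim=1).squeeze(0) for f in frames]
    ).mean(dim=0)
    conf, idx = probs.max(dim=0)
    return CLASSES[idx] if conf.item() >= threshold else "normal"
```

In practice, averaging predictions across multiple rendered views of the same asset, as sketched in classify_aggregate, is one way to realize the abstract's point about aggregating images for more reliable detection in production.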

Published

2020-10-01

How to Cite

Ling, C., Tollmar, K., & Gisslén, L. (2020). Using Deep Convolutional Neural Networks to Detect Rendered Glitches in Video Games. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 16(1), 66-73. https://doi.org/10.1609/aiide.v16i1.7409