TD-Gammon is a neural network that is able to teach itself to play backgammon solely by playing against itself and learning from the results, based on the TD(λ) reinforcement learning algorithm (Sutton, 1988). Despite starting from random initial weights (and hence a random initial strategy), TD-Gammon achieves a surprisingly strong level of play. With zero knowledge built in at the start of learning (i.e. given only a "raw" description of the board state), the network learns to play at a strong intermediate level. Furthermore, when a set of hand-crafted features is added to the network’s input representation, the result is a truly staggering level of performance: the latest version of TD-Gammon is now estimated to play at a strong master level that is extremely close to the world’s best human players.
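The core of the TD(λ) algorithm referenced above can be illustrated with a minimal sketch. This is not TD-Gammon's actual implementation (which uses a multilayer neural network trained by backpropagating TD errors); for clarity the example below uses a linear value function V(x) = w·x with eligibility traces, which is the textbook form of the TD(λ) update from Sutton (1988). All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def td_lambda_update(w, e, x_t, x_t1, reward,
                     alpha=0.1, lam=0.7, gamma=1.0):
    """One TD(lambda) step for a linear value estimate V(x) = w . x.

    w      : weight vector
    e      : eligibility-trace vector (same shape as w)
    x_t    : feature vector of the current board state
    x_t1   : feature vector of the successor state
    reward : reward observed on the transition (nonzero only at game end)
    """
    v_t = w @ x_t
    v_t1 = w @ x_t1
    delta = reward + gamma * v_t1 - v_t   # TD error: one-step prediction difference
    e = gamma * lam * e + x_t             # decay old traces, accumulate current features
    w = w + alpha * delta * e             # credit all recently visited states
    return w, e

# Illustrative single step: a terminal transition with reward 1
w = np.zeros(3)
e = np.zeros(3)
w, e = td_lambda_update(w, e,
                        x_t=np.array([1.0, 0.0, 0.0]),
                        x_t1=np.array([0.0, 1.0, 0.0]),
                        reward=1.0)
```

In self-play training, this update would be applied after every move, with the reward signal arriving only at the end of each game; the eligibility traces propagate that final outcome back to the earlier positions that led to it.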