AAAI Publications, 2017 AAAI Spring Symposium Series

Markov Transitions between Attractor States in a Recurrent Neural Network
Jeremy Bernstein, Ishita Dasgupta, David Rolnick, Haim Sompolinsky

Last modified: 2017-03-20


Stochasticity is an essential part of explaining the world. Increasingly, neuroscientists and cognitive scientists are identifying mechanisms whereby the brain uses probabilistic reasoning in representational, predictive, and generative settings. But stochasticity is not always useful: robust perception and memory retrieval require representations that are immune to corruption by stochastic noise. In an effort to combine these robust representations with stochastic computation, we present an architecture that generalizes traditional recurrent attractor networks to follow probabilistic Markov dynamics between stable and noise-resistant fixed points.
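The architecture described above can be illustrated with a minimal sketch (not the authors' model): a standard Hopfield network whose stored patterns act as noise-resistant attractors, with a hypothetical Markov chain choosing which attractor the network is kicked toward at each macro-step. The transition matrix `T`, network size, and noise level are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store P random binary patterns in an N-neuron Hopfield network
# via the Hebbian outer-product rule.
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)  # no self-connections

def settle(state, max_steps=50):
    """Run deterministic attractor dynamics until a fixed point."""
    for _ in range(max_steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Hypothetical Markov chain over the three stored attractors
# (illustrative transition probabilities, not from the paper).
T = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])

# At each macro-step: sample the next attractor, present a noisy
# cue for it, and let the deterministic dynamics clean up the
# noise into a stable, noise-resistant fixed point.
idx, visited = 0, [0]
state = patterns[0].copy()
for _ in range(10):
    idx = rng.choice(P, p=T[idx])
    noisy = patterns[idx] * rng.choice([1, -1], size=N, p=[0.9, 0.1])
    state = settle(noisy)
    visited.append(idx)

overlap = np.mean(state == patterns[idx])
print(visited, overlap)
```

Here the stochasticity lives only in the between-attractor transitions, while the retrieved representations themselves remain robust to the injected noise, which is the separation of concerns the abstract highlights.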


Keywords: Hopfield network; Markov transition; attractor; recurrent neural network
