Proceedings: Proceedings of the AAAI Conference on Artificial Intelligence, 35
Issue: No. 18: AAAI-21 Student Papers and Demonstrations
Track: AAAI Student Abstract and Poster Program
Abstract:
Imparting biological realism to the learning process is gaining attention as a route to computationally efficient algorithms without compromising performance. Feedback alignment and the mirror neuron concept are two such approaches: the feedback weights remain static in the former and are updated via Hebbian learning in the latter. Although these approaches have proven to work efficiently for supervised learning, it remained unknown whether they are also applicable to reinforcement learning. This study therefore introduces RHebb-DFA, in which reward-based Hebbian learning is used to update the feedback weights in direct feedback alignment mode. The approach is validated on various Atari games and achieves performance equivalent to DDQN.
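To make the mechanism concrete, the sketch below illustrates the general idea described in the abstract: a direct-feedback-alignment (DFA) update for the forward weights, combined with a reward-modulated Hebbian update of the feedback matrix. This is a minimal illustrative sketch, not the paper's implementation; the network sizes, learning rates, and the exact form of the reward-gated Hebbian rule (outer product of hidden activity and output error, scaled by the scalar reward) are assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network (illustrative dimensions, not from the paper)
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0, 0.1, (n_hid, n_in))   # input -> hidden forward weights
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # hidden -> output forward weights
B1 = rng.normal(0, 0.1, (n_hid, n_out))  # feedback weights (fixed in plain DFA)

def forward(x):
    h = np.maximum(W1 @ x, 0.0)  # ReLU hidden activation
    q = W2 @ h                   # output, e.g. Q-values
    return h, q

def rhebb_dfa_update(x, error, reward, lr=1e-2, lr_fb=1e-3):
    """One hypothetical RHebb-DFA step: DFA credit assignment for the
    forward weights plus a reward-modulated Hebbian update of B1
    (assumed rule, for illustration only)."""
    global W1, W2, B1
    h, _ = forward(x)
    # DFA: project the output-layer error to the hidden layer through B1
    delta_h = (B1 @ error) * (h > 0)
    W2 -= lr * np.outer(error, h)
    W1 -= lr * np.outer(delta_h, x)
    # Reward-gated Hebbian update of the feedback weights
    B1 += lr_fb * reward * np.outer(h, error)

# Usage with dummy data: error plays the role of a TD-style output error
x = rng.normal(size=n_in)
_, q = forward(x)
error = q - np.array([1.0, 0.0])  # placeholder target
rhebb_dfa_update(x, error, reward=1.0)
```

In plain feedback alignment the matrix B1 would stay frozen at its random initialization; the only change sketched here is the final line of the update, where the scalar reward gates a Hebbian correlation term so that the feedback pathway itself adapts during reinforcement learning.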
DOI: 10.1609/aaai.v35i18.17871