The HSIC Bottleneck: Deep Learning without Back-Propagation

Authors

  • Wan-Duo Kurt Ma, Victoria University of Wellington
  • J. P. Lewis, Victoria University of Wellington
  • W. Bastiaan Kleijn, Victoria University of Wellington

DOI:

https://doi.org/10.1609/aaai.v34i04.5950

Abstract

We introduce the HSIC (Hilbert-Schmidt independence criterion) bottleneck for training deep neural networks. The HSIC bottleneck is an alternative to the conventional cross-entropy loss and backpropagation that has a number of distinct advantages. It mitigates exploding and vanishing gradients, resulting in the ability to learn very deep networks without skip connections. There is no requirement for symmetric feedback or update locking. We find that the HSIC bottleneck provides performance on MNIST/FashionMNIST/CIFAR10 classification comparable to backpropagation with a cross-entropy target, even when the system is not encouraged to make the output resemble the classification labels. Appending a single layer trained with SGD (without backpropagation) to reformat the information further improves performance.
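The abstract refers to the Hilbert-Schmidt independence criterion (HSIC) without defining it. As a minimal sketch, the snippet below implements the standard biased empirical HSIC estimator (Gretton et al., 2005) with Gaussian kernels, together with a per-layer bottleneck-style objective of the form HSIC(Z, X) - beta * HSIC(Z, Y), where Z denotes a layer's hidden activations. The function names, the kernel bandwidth, and the value of beta are illustrative assumptions and not taken from the paper itself.

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq_dists = (np.sum(X**2, axis=1, keepdims=True)
                - 2 * X @ X.T
                + np.sum(X**2, axis=1))
    return np.exp(-sq_dists / (2 * sigma**2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC estimator (Gretton et al., 2005):
    HSIC = (m - 1)^{-2} tr(K_X H K_Y H), with H the centering matrix."""
    m = X.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m
    Kx = gaussian_kernel(X, sigma)
    Ky = gaussian_kernel(Y, sigma)
    return np.trace(Kx @ H @ Ky @ H) / (m - 1) ** 2

def hsic_bottleneck_objective(Z, X, Y, beta=100.0, sigma=1.0):
    """Bottleneck-style layer objective (illustrative): make the hidden
    activations Z carry little information about the input X while staying
    dependent on the labels Y. beta and the kernels are assumptions here."""
    return hsic(Z, X, sigma) - beta * hsic(Z, Y, sigma)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 10))               # batch of inputs
    Z = np.tanh(X @ rng.normal(size=(10, 8)))   # hidden activations
    Y = rng.normal(size=(64, 3))                # label representation
    print(hsic_bottleneck_objective(Z, X, Y))
```

In the training scheme the abstract describes, an objective of this kind would be minimized separately for each layer, so no error signal needs to be propagated backwards through the rest of the network.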

Published

2020-04-03

How to Cite

Ma, W.-D. K., Lewis, J. P., & Kleijn, W. B. (2020). The HSIC Bottleneck: Deep Learning without Back-Propagation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5085-5092. https://doi.org/10.1609/aaai.v34i04.5950

Section

AAAI Technical Track: Machine Learning