Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers

Authors

  • Masoumeh Soflaei, University of Ottawa
  • Hongyu Guo, National Research Council Canada
  • Ali Al-Bashabsheh, Beihang University
  • Yongyi Mao, University of Ottawa
  • Richong Zhang, Beihang University

DOI:

https://doi.org/10.1609/aaai.v34i04.6038

Abstract

We consider the problem of learning a neural network classifier. Under the information bottleneck (IB) principle, we associate with this classification problem a representation learning problem, which we call “IB learning”. We show that IB learning is, in fact, equivalent to a special class of the quantization problem. The classical results in rate-distortion theory then suggest that IB learning can benefit from a “vector quantization” approach, namely, simultaneously learning the representations of multiple input objects. Such an approach, assisted by variational techniques, results in a novel learning framework, “Aggregated Learning”, for classification with neural network models. In this framework, several objects are jointly classified by a single neural network. The effectiveness of this framework is verified through extensive experiments on standard image recognition and text classification tasks.
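The abstract does not spell out the architecture, but the core idea, several objects jointly classified by a single network, can be illustrated with a minimal sketch. Everything below (the shared trunk, hidden size, and per-position classification heads) is an illustrative assumption, not the paper's exact model.

    import torch
    import torch.nn as nn

    class AggregatedClassifier(nn.Module):
        """Hypothetical sketch: jointly classify a group of n inputs with one network."""

        def __init__(self, input_dim, num_classes, n_objects, hidden_dim=256):
            super().__init__()
            self.n_objects = n_objects
            # A single shared trunk sees the concatenation of all n inputs,
            # so their representations are learned jointly (the "vector
            # quantization" intuition from the abstract).
            self.trunk = nn.Sequential(
                nn.Linear(input_dim * n_objects, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim),
                nn.ReLU(),
            )
            # One head per position in the group; each head predicts the
            # label of the corresponding input object.
            self.heads = nn.ModuleList(
                [nn.Linear(hidden_dim, num_classes) for _ in range(n_objects)]
            )

        def forward(self, x_group):
            # x_group has shape (batch, n_objects, input_dim).
            joint = self.trunk(x_group.flatten(start_dim=1))
            # Output shape (batch, n_objects, num_classes): one prediction per object.
            return torch.stack([head(joint) for head in self.heads], dim=1)

In such a sketch, training would simply sum a cross-entropy loss over the n positions of each group; how the paper actually forms groups and regularizes the representation is described in the full text, not here.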

Published

2020-04-03

How to Cite

Soflaei, M., Guo, H., Al-Bashabsheh, A., Mao, Y., & Zhang, R. (2020). Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5810-5817. https://doi.org/10.1609/aaai.v34i04.6038

Section

AAAI Technical Track: Machine Learning