AffinityNet: Semi-Supervised Few-Shot Learning for Disease Type Prediction

Authors

  • Tianle Ma, State University of New York at Buffalo
  • Aidong Zhang, State University of New York at Buffalo

DOI:

https://doi.org/10.1609/aaai.v33i01.33011069

Abstract

While deep learning has achieved great success in computer vision and many other fields, it currently does not work well on patient genomic data with the “big p, small N” problem (i.e., a relatively small number of samples with high-dimensional features). To make deep learning work with a small amount of training data, we have to design new models that facilitate few-shot learning. Here we present the Affinity Network Model (AffinityNet), a data-efficient deep learning model that can learn from a limited number of training examples and generalize well. The backbone of the AffinityNet model consists of stacked k-nearest-neighbor (kNN) attention pooling layers. The kNN attention pooling layer is a generalization of the Graph Attention Model (GAM) and can be applied not only to graphs but to any set of objects, regardless of whether a graph is given. As a new deep learning module, kNN attention pooling layers can be plugged into any neural network model just like convolutional layers. As a simple special case of the kNN attention pooling layer, the feature attention layer can directly select important features that are useful for classification tasks. Experiments on both synthetic data and cancer genomic data from TCGA projects show that our AffinityNet model has better generalization power than conventional neural network models with little training data.
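The sketch below illustrates the general idea of a kNN attention pooling layer as described in the abstract: each sample's new representation is an attention-weighted average over its k nearest neighbors in a learned feature space, so the layer can be stacked like a convolutional layer and needs no predefined graph. This is a minimal PyTorch illustration, not the authors' released implementation; the class name, the cosine-similarity affinity, and the linear transform are assumptions made for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KNNAttentionPooling(nn.Module):
    """Illustrative kNN attention pooling layer (assumed design, not the official code)."""

    def __init__(self, in_dim, out_dim, k=5):
        super().__init__()
        self.k = k
        self.transform = nn.Linear(in_dim, out_dim)  # per-sample feature transform

    def forward(self, x):
        # x: (N, in_dim) -- a set of N samples; no graph structure is required
        h = self.transform(x)                         # (N, out_dim)
        # Pairwise affinities between samples (cosine similarity assumed here)
        h_norm = F.normalize(h, dim=1)
        sim = h_norm @ h_norm.t()                     # (N, N)
        # Keep each sample's k most similar neighbors (including itself)
        k = min(self.k, h.size(0))
        topk_val, topk_idx = sim.topk(k, dim=1)       # (N, k)
        # Attention weights over the k neighbors
        attn = F.softmax(topk_val, dim=1)             # (N, k)
        # New representation: attention-weighted average of neighbors' features
        neighbors = h[topk_idx]                       # (N, k, out_dim)
        return (attn.unsqueeze(-1) * neighbors).sum(dim=1)


if __name__ == "__main__":
    # Toy "big p, small N" setting: 20 samples with 50-dimensional features
    x = torch.randn(20, 50)
    layer = KNNAttentionPooling(in_dim=50, out_dim=16, k=5)
    print(layer(x).shape)  # torch.Size([20, 16])
```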

Published

2019-07-17

How to Cite

Ma, T., & Zhang, A. (2019). AffinityNet: Semi-Supervised Few-Shot Learning for Disease Type Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 1069-1076. https://doi.org/10.1609/aaai.v33i01.33011069

Section

AAAI Technical Track: Applications