Conditional Generative Neural Decoding with Structured CNN Feature Prediction

Authors

  • Changde Du, CASIA
  • Changying Du, Huawei Noah's Ark Lab
  • Lijie Huang, CASIA
  • Huiguang He, CASIA

DOI:

https://doi.org/10.1609/aaai.v34i03.5647

Abstract

Decoding visual contents from human brain activity is a challenging task with great scientific value. Two main factors hinder existing methods from producing satisfactory results: 1) the typically small amount of paired training data; and 2) the under-exploitation of the structural information underlying the data. In this paper, we present a novel conditional deep generative neural decoding approach with structured intermediate feature prediction. Specifically, our approach first decodes the brain activity to the multilayer intermediate features of a pretrained convolutional neural network (CNN) with a structured multi-output regression (SMR) model, and then inverts the decoded CNN features to the visual images with an introspective conditional generation (ICG) model. The proposed SMR model can simultaneously leverage the covariance structures underlying the brain activities, the CNN features, and the prediction tasks to improve decoding accuracy and interpretability. Furthermore, our ICG model can 1) leverage abundant unpaired images to augment the training data; 2) self-evaluate the quality of its conditionally generated images; and 3) adversarially improve itself without an extra discriminator. Experimental results show that our approach yields state-of-the-art visual reconstructions from brain activities.
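The two-stage pipeline described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it substitutes plain multi-output ridge regression for the SMR model (omitting the covariance structures over voxels, features, and tasks) and a fixed linear map for the ICG model (omitting the introspective, adversarial generation). All shapes and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper uses fMRI voxel responses and
# multilayer CNN features of far higher dimensionality.
n_train, n_voxels, n_feats = 80, 200, 50

# Stage 1 stand-in: multi-output ridge regression from brain activity
# to CNN features. The paper's SMR additionally exploits covariance
# structure across brain activities, features, and prediction tasks.
X = rng.standard_normal((n_train, n_voxels))                     # brain activity
W_true = rng.standard_normal((n_voxels, n_feats))
Y = X @ W_true + 0.1 * rng.standard_normal((n_train, n_feats))   # CNN features

lam = 1.0  # ridge penalty
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)   # closed-form fit

# Stage 2 stand-in: map decoded features to a toy 32x32 image with a
# fixed linear decoder; the paper instead trains an introspective
# conditional generative (ICG) model that self-evaluates its outputs.
G = rng.standard_normal((n_feats, 32 * 32))

x_new = rng.standard_normal((1, n_voxels))   # a new brain recording
feats_hat = x_new @ W                        # decoded CNN features
image_hat = (feats_hat @ G).reshape(32, 32)  # reconstructed "image"

print(image_hat.shape)  # (32, 32)
```

The sketch only conveys the data flow (brain activity → intermediate CNN features → image); the paper's contributions lie precisely in the structured regression and generative components this toy version replaces.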

Published

2020-04-03

How to Cite

Du, C., Du, C., Huang, L., & He, H. (2020). Conditional Generative Neural Decoding with Structured CNN Feature Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 34(03), 2629-2636. https://doi.org/10.1609/aaai.v34i03.5647

Section

AAAI Technical Track: Humans and AI