Zero-Shot Ingredient Recognition by Multi-Relational Graph Convolutional Network

  • Jingjing Chen, Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University
  • Liangming Pan, NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore
  • Zhipeng Wei, Jilin University
  • Xiang Wang, School of Computing, National University of Singapore
  • Chong-Wah Ngo, City University of Hong Kong
  • Tat-Seng Chua, School of Computing, National University of Singapore


Recognizing the ingredients in a given dish image is at the core of automatic dietary assessment and has attracted increasing attention from both industry and academia. Nevertheless, the task is challenging due to the difficulty of collecting and labeling sufficient training data. On one hand, there are hundreds of thousands of food ingredients in the world, ranging from common to rare, and collecting training samples for every ingredient category is impractical. On the other hand, because ingredient appearance varies greatly during food preparation, robust recognition requires training samples that cover different cooking and cutting methods. Since obtaining sufficient fully annotated training data is difficult, a more practical way of scaling up recognition is to develop models capable of recognizing unseen ingredients. Therefore, in this paper, we target the problem of ingredient recognition with zero training samples. More specifically, we introduce a multi-relational GCN (graph convolutional network) that integrates ingredient hierarchy, attribute, and co-occurrence relations for zero-shot ingredient recognition. Extensive experiments on both Chinese and Japanese food datasets demonstrate the superior performance of the multi-relational GCN and shed light on zero-shot ingredient recognition.
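The abstract does not spell out the layer formulation, but a multi-relational GCN in the spirit described here (one adjacency matrix per relation type, e.g. hierarchy, attribute, and co-occurrence) is commonly computed as a per-relation graph convolution whose outputs are summed before a nonlinearity. The following is a minimal illustrative sketch, not the authors' implementation; the per-relation weight matrices and row normalization are our assumptions:

```python
import numpy as np

def normalize_adj(A):
    # Row-normalize the adjacency with self-loops: D^{-1} (A + I).
    # Self-loops guarantee every node has degree >= 1.
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1, keepdims=True)
    return A_hat / deg

def multi_relational_gcn_layer(X, adjs, weights):
    """One multi-relational GCN layer (assumed form): propagate node
    features along each relation's graph with its own weight matrix,
    sum the per-relation messages, then apply ReLU."""
    out = sum(normalize_adj(A) @ X @ W for A, W in zip(adjs, weights))
    return np.maximum(out, 0.0)

# Toy example: 4 ingredient nodes, 3 relations
# (hypothetical stand-ins for hierarchy, attribute, co-occurrence).
rng = np.random.default_rng(0)
n, d_in, d_out = 4, 8, 5
X = rng.standard_normal((n, d_in))                          # node features
adjs = [(rng.random((n, n)) > 0.5).astype(float) for _ in range(3)]
weights = [rng.standard_normal((d_in, d_out)) for _ in range(3)]
H = multi_relational_gcn_layer(X, adjs, weights)
print(H.shape)
```

In a zero-shot setting, layers of this form let information from seen ingredient classes propagate along the relation graphs to unseen ones, which is what makes recognition without training samples for a class possible.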

AAAI Technical Track: Vision