Harnessing GANs for Zero-Shot Learning of New Classes in Visual Speech Recognition

Authors

  • Yaman Kumar, Adobe
  • Dhruva Sahrawat, NUS
  • Shubham Maheshwari, IIIT-Delhi
  • Debanjan Mahata, Bloomberg
  • Amanda Stent, Bloomberg
  • Yifang Yin, NUS
  • Rajiv Ratn Shah, IIIT-Delhi
  • Roger Zimmermann, NUS

DOI:

https://doi.org/10.1609/aaai.v34i03.5649

Abstract

Visual Speech Recognition (VSR) is the process of recognizing or interpreting speech by watching the lip movements of the speaker. Recent machine learning-based approaches model VSR as a classification problem; however, the scarcity of training data leads to error-prone systems with very low accuracies in predicting unseen classes. To solve this problem, we present a novel approach to zero-shot learning by generating new classes using Generative Adversarial Networks (GANs), and show how the addition of unseen class samples increases the accuracy of a VSR system by a significant margin of 27% and allows it to handle speaker-independent out-of-vocabulary phrases. We also show that our models are language agnostic and therefore capable of seamlessly generating, using English training data, videos for a new language (Hindi). To the best of our knowledge, this is the first work to show empirical evidence of the use of GANs for generating training samples of unseen classes in the domain of VSR, hence facilitating zero-shot learning. We make the added videos for new classes publicly available along with our code.
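
To make the approach concrete, the sketch below illustrates the pipeline the abstract describes: a class-conditional generator (standing in for the paper's GAN) synthesizes lip-movement clips for phrase classes that have no real training videos, and those synthetic clips are mixed with real data before the VSR classifier is trained. This is a minimal sketch, not the authors' code: all module architectures, tensor shapes, class counts, and the random placeholder "dataset" are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the paper's real clips, vocabulary, and networks differ.
NUM_CLASSES = 10            # total phrase classes (seen + unseen)
UNSEEN = {8, 9}             # classes with no real training videos
T, H, W = 16, 32, 32        # frames per clip, frame height/width
LATENT = 64                 # noise dimension for the generator


class CondGenerator(nn.Module):
    """Maps (noise, class label) to a T x H x W grayscale lip clip."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, LATENT)
        self.net = nn.Sequential(
            nn.Linear(2 * LATENT, 512), nn.ReLU(),
            nn.Linear(512, T * H * W), nn.Tanh(),
        )

    def forward(self, z, y):
        x = torch.cat([z, self.embed(y)], dim=1)
        return self.net(x).view(-1, T, H, W)


class Classifier(nn.Module):
    """Tiny stand-in for a real VSR classifier (e.g. a 3D-CNN + RNN)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(T * H * W, 256), nn.ReLU(),
            nn.Linear(256, NUM_CLASSES),
        )

    def forward(self, x):
        return self.net(x)


# Placeholder "real" clips, available only for the seen classes.
seen = [c for c in range(NUM_CLASSES) if c not in UNSEEN]
real_x = torch.randn(200, T, H, W)
real_y = torch.tensor(seen).repeat(25)        # 8 seen classes x 25 = 200 labels

# Assume the conditional GAN was already trained on the seen classes;
# here we only sample its generator to hallucinate clips for unseen ones.
gen = CondGenerator().eval()
with torch.no_grad():
    y_fake = torch.tensor(sorted(UNSEEN)).repeat(25)   # 50 synthetic labels
    fake_x = gen(torch.randn(len(y_fake), LATENT), y_fake)

# Mix real and synthetic samples, then train the classifier as usual.
x, y = torch.cat([real_x, fake_x]), torch.cat([real_y, y_fake])
clf = Classifier()
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                            # a few toy epochs
    opt.zero_grad()
    loss_fn(clf(x), y).backward()
    opt.step()
```

The key design point is that the generator is conditioned on class labels, so clips for unseen phrase classes can be requested at generation time even though no real examples of them exist, which is what lets the downstream classifier learn those classes zero-shot.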

Published

2020-04-03

How to Cite

Kumar, Y., Sahrawat, D., Maheshwari, S., Mahata, D., Stent, A., Yin, Y., Ratn Shah, R., & Zimmermann, R. (2020). Harnessing GANs for Zero-Shot Learning of New Classes in Visual Speech Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 34(03), 2645-2652. https://doi.org/10.1609/aaai.v34i03.5649

Issue

Vol. 34 No. 03 (2020)

Section

AAAI Technical Track: Humans and AI