Unsupervised Stylish Image Description Generation via Domain Layer Norm

Authors

  • Cheng-Kuan Chen, National Tsing Hua University
  • Zhufeng Pan, National Tsing Hua University
  • Ming-Yu Liu, NVIDIA Corporation
  • Min Sun, National Tsing Hua University

DOI:

https://doi.org/10.1609/aaai.v33i01.33018151

Abstract

Most existing works on image description focus on generating expressive descriptions. The few works dedicated to generating stylish (e.g., romantic, lyric) descriptions suffer from limited style variation and content digression. To address these limitations, we propose a controllable stylish image description generation model. It learns to generate stylish image descriptions that are more closely related to the image content, and it can be trained on an arbitrary monolingual corpus without collecting new paired images and stylish descriptions. Moreover, it enables users to generate descriptions in various styles and to add new styles to the existing model simply by plugging in style-specific parameters. We achieve this capability via a novel layer normalization design, which we refer to as the Domain Layer Norm (DLN). Extensive experimental validation and a user study on various stylish image description generation tasks demonstrate the competitive advantages of the proposed model.
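The core idea of the abstract, normalization statistics shared across styles while the affine (scale/shift) parameters are style-specific, can be illustrated with a minimal sketch. This is an assumption-laden illustration in PyTorch, not the authors' implementation: the class name `DomainLayerNorm`, the use of `nn.ParameterDict`, and the example styles and dimensions are all hypothetical.

```python
import torch
import torch.nn as nn

class DomainLayerNorm(nn.Module):
    """Layer norm whose scale/shift parameters are style-specific (sketch).

    Normalization statistics are shared; each style keeps its own
    (gamma, beta) pair, so a new style can be plugged in by registering
    one extra parameter pair without retraining the shared backbone.
    """

    def __init__(self, hidden_size, styles, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.ParameterDict(
            {s: nn.Parameter(torch.ones(hidden_size)) for s in styles})
        self.beta = nn.ParameterDict(
            {s: nn.Parameter(torch.zeros(hidden_size)) for s in styles})

    def add_style(self, style, hidden_size):
        # Plug in parameters for a new style; shared weights stay untouched.
        self.gamma[style] = nn.Parameter(torch.ones(hidden_size))
        self.beta[style] = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x, style):
        mean = x.mean(-1, keepdim=True)
        var = x.var(-1, unbiased=False, keepdim=True)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        return self.gamma[style] * x_hat + self.beta[style]


# Hypothetical usage: re-scale decoder hidden states per style at generation time.
dln = DomainLayerNorm(hidden_size=512, styles=["factual", "romantic"])
h = torch.randn(2, 512)                  # e.g. decoder hidden states
romantic_h = dln(h, style="romantic")    # style-specific scale and shift
```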

Published

2019-07-17

How to Cite

Chen, C.-K., Pan, Z., Liu, M.-Y., & Sun, M. (2019). Unsupervised Stylish Image Description Generation via Domain Layer Norm. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8151-8158. https://doi.org/10.1609/aaai.v33i01.33018151

Section

AAAI Technical Track: Vision