FET-GAN: Font and Effect Transfer via K-shot Adaptive Instance Normalization

Authors

  • Wei Li, Zhejiang University
  • Yongxing He, Zhejiang University
  • Yanwei Qi, Zhejiang University
  • Zejian Li, Zhejiang University
  • Yongchuan Tang, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v34i02.5535

Abstract

Text effect transfer aims at learning the mapping between text visual effects while maintaining the text content. While remarkably successful, existing methods have limited robustness in font transfer and weak generalization to unseen effects. To address these problems, we propose FET-GAN, a novel end-to-end framework that implements visual effect transfer with font variation across multiple text-effect domains. Our model achieves remarkable results both on arbitrary effect transfer between texts and on effect translation from text to graphic objects. Through a few-shot fine-tuning strategy, FET-GAN can generalize a pre-trained model to a new effect. Extensive experimental validation and comparison show that our model advances the state of the art in the text effect transfer task. In addition, we have collected a font dataset containing 100 fonts of more than 800 Chinese and English characters. Based on this dataset, we demonstrate the generalization ability of our model through an application that automatically completes a font library from a few sample characters, which significantly reduces the labor cost for font designers.
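The abstract names K-shot adaptive instance normalization but does not spell out the mechanism. As a rough, hypothetical sketch only (assuming the reference style statistics are simply averaged over the K effect exemplars at the feature level, which may differ from FET-GAN's actual encoder-based latent-code averaging), the core operation could look like the following PyTorch snippet:

    import torch

    def kshot_adain(content_feat, ref_feats, eps=1e-5):
        # content_feat: (N, C, H, W) features of the content (glyph) image.
        # ref_feats:    (K, C, H, W) features of K reference effect images.
        # Hypothetical K-shot AdaIN: average channel-wise statistics over
        # the K references, then re-style the normalized content features.

        # Channel-wise statistics of the content features.
        c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
        c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps

        # Per-reference statistics, averaged over the K shots.
        r_mean = ref_feats.mean(dim=(2, 3), keepdim=True).mean(dim=0, keepdim=True)
        r_std = ref_feats.std(dim=(2, 3), keepdim=True).mean(dim=0, keepdim=True)

        normalized = (content_feat - c_mean) / c_std
        return normalized * r_std + r_mean

This is a minimal illustration of the K-shot idea described in the title; the paper's actual architecture (encoder, generator, and fine-tuning procedure) should be consulted for the authoritative formulation.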

Published

2020-04-03

How to Cite

Li, W., He, Y., Qi, Y., Li, Z., & Tang, Y. (2020). FET-GAN: Font and Effect Transfer via K-shot Adaptive Instance Normalization. Proceedings of the AAAI Conference on Artificial Intelligence, 34(02), 1717-1724. https://doi.org/10.1609/aaai.v34i02.5535

Section

AAAI Technical Track: Game Playing and Interactive Entertainment