Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition

Authors

  • Hui Li University of Adelaide
  • Peng Wang Northwestern Polytechnical University
  • Chunhua Shen University of Adelaide
  • Guyu Zhang Northwestern Polytechnical University

DOI:

https://doi.org/10.1609/aaai.v33i01.33018610

Abstract

Recognizing irregular text in natural scene images is challenging due to the large variance in text appearance, such as curvature, orientation and distortion. Most existing approaches rely heavily on sophisticated model designs and/or extra fine-grained annotations, which, to some extent, increase the difficulty in algorithm implementation and data collection. In this work, we propose an easy-to-implement strong baseline for irregular scene text recognition, using off-the-shelf neural network components and only word-level annotations. It is composed of a 31-layer ResNet, an LSTM-based encoder-decoder framework and a 2-dimensional attention module. Despite its simplicity, the proposed method is robust. It achieves state-of-the-art performance on irregular text recognition benchmarks and comparable results on regular text datasets. The code will be released.
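The abstract's 2-dimensional attention module attends over the full spatial grid of the CNN feature map rather than a 1-D sequence. The sketch below is a minimal NumPy illustration of generic additive 2-D attention for one decoding step; the array sizes, weight names (`W_h`, `W_s`, `v`) and the exact scoring function are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def attend_2d(H, s, W_h, W_s, v):
    """One step of additive 2-D attention over a CNN feature map.

    H: (h, w, d) feature map; s: (d,) decoder hidden state.
    Scores are computed per spatial position, then normalised
    with a softmax over the whole h*w grid.
    """
    # e_ij = v . tanh(W_h h_ij + W_s s), broadcast over all (i, j)
    scores = np.tanh(H @ W_h + s @ W_s) @ v            # (h, w)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                               # attention map, sums to 1
    glimpse = (alpha[..., None] * H).sum(axis=(0, 1))  # (d,) weighted context
    return glimpse, alpha

# Toy sizes: a 4x12 feature map with 8-dim features, 16-dim attention space.
h, w, d, a = 4, 12, 8, 16
H = rng.standard_normal((h, w, d))
s = rng.standard_normal(d)
W_h = rng.standard_normal((d, a))
W_s = rng.standard_normal((d, a))
v = rng.standard_normal(a)

glimpse, alpha = attend_2d(H, s, W_h, W_s, v)
print(alpha.shape, glimpse.shape)
```

At each step the glimpse vector is fed to the decoder to predict the next character, so curved or rotated text can be followed position by position across the 2-D grid. The paper's actual module additionally conditions on neighbouring attention information, which this sketch omits.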

Published

2019-07-17

How to Cite

Li, H., Wang, P., Shen, C., & Zhang, G. (2019). Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8610-8617. https://doi.org/10.1609/aaai.v33i01.33018610

Section

AAAI Technical Track: Vision