TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection

Authors

  • Siddhant Garg, University of Wisconsin-Madison
  • Thuy Vu, Amazon Alexa
  • Alessandro Moschitti, Amazon Alexa

DOI:

https://doi.org/10.1609/aaai.v34i05.6282

Abstract

We propose TANDA, an effective technique for fine-tuning pre-trained Transformer models for natural language tasks. Specifically, we first transfer a pre-trained model into a model for a general task by fine-tuning it on a large and high-quality dataset. We then perform a second fine-tuning step to adapt the transferred model to the target domain. We demonstrate the benefits of our approach for answer sentence selection, a well-known inference task in Question Answering. We built a large-scale dataset to enable the transfer step, exploiting the Natural Questions dataset. Our approach establishes the state of the art on two well-known benchmarks, WikiQA and TREC-QA, achieving MAP scores of 92% and 94.3%, respectively, which largely outperform the previous best results of 83.4% and 87.5%. We empirically show that TANDA generates more stable and robust models, reducing the effort required to select optimal hyper-parameters. Additionally, we show that the transfer step of TANDA makes the adaptation step more robust to noise, enabling a more effective use of noisy datasets for fine-tuning. Finally, we confirm the positive impact of TANDA in an industrial setting, using domain-specific datasets subject to different types of noise.
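
The two-step recipe described in the abstract, transfer then adapt, amounts to sequential fine-tuning of one model on two datasets. The sketch below is illustrative rather than the authors' released code: it casts answer sentence selection as binary classification over (question, candidate sentence) pairs using the Hugging Face transformers and datasets libraries. The model choice, toy corpora, and hyper-parameters are assumptions standing in for ASNQ (the transfer dataset built from Natural Questions) and the target benchmarks.

    # A minimal sketch of TANDA-style transfer-then-adapt fine-tuning.
    # Assumptions: bert-base-uncased as the pre-trained model, toy in-memory
    # corpora in place of ASNQ and WikiQA/TREC-QA, illustrative hyper-parameters.
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    MODEL_NAME = "bert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,
                                                               num_labels=2)

    def encode(batch):
        # Answer sentence selection as binary classification over
        # (question, candidate sentence) pairs.
        enc = tokenizer(batch["question"], batch["sentence"],
                        truncation=True, padding="max_length", max_length=128)
        enc["labels"] = batch["label"]
        return enc

    def fine_tune(model, raw, output_dir, epochs):
        data = Dataset.from_dict(raw).map(encode, batched=True)
        args = TrainingArguments(output_dir=output_dir,
                                 num_train_epochs=epochs,
                                 per_device_train_batch_size=8,
                                 learning_rate=2e-5,
                                 report_to=[])
        Trainer(model=model, args=args, train_dataset=data).train()
        return model

    # Toy stand-ins for the real corpora (hypothetical examples).
    general_as2 = {"question": ["who wrote hamlet", "who wrote hamlet"],
                   "sentence": ["Hamlet was written by Shakespeare.",
                                "Hamlet is a tragedy."],
                   "label": [1, 0]}
    target_as2 = {"question": ["what is the capital of france"],
                  "sentence": ["Paris is the capital of France."],
                  "label": [1]}

    # Step 1 (transfer): fine-tune on a large, general AS2 dataset.
    model = fine_tune(model, general_as2, "./transfer", epochs=1)
    # Step 2 (adapt): continue fine-tuning on the target-domain dataset.
    model = fine_tune(model, target_as2, "./adapt", epochs=1)

The same model object flows through both calls, so the adaptation step starts from the transferred weights rather than from the generic pre-trained checkpoint, which is the essential point of the technique.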

Published

2020-04-03

How to Cite

Garg, S., Vu, T., & Moschitti, A. (2020). TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7780-7788. https://doi.org/10.1609/aaai.v34i05.6282

Section

AAAI Technical Track: Natural Language Processing