MA-DST: Multi-Attention-Based Scalable Dialog State Tracking

Authors

  • Adarsh Kumar, UW Madison
  • Peter Ku, Amazon
  • Anuj Goyal, Amazon
  • Angeliki Metallinou, Amazon
  • Dilek Hakkani-Tur, Amazon

DOI:

https://doi.org/10.1609/aaai.v34i05.6322

Abstract

Task-oriented dialog agents provide a natural language interface for users to complete their goals. Dialog State Tracking (DST), often a core component of these systems, tracks the system's understanding of the user's goal throughout the conversation. To enable accurate multi-domain DST, the model needs to encode dependencies between past utterances and slot semantics, and to understand the dialog context, including long-range cross-domain references. We introduce a novel architecture for this task that encodes the conversation history and slot semantics more robustly by using attention mechanisms at multiple granularities. In particular, we use cross-attention to model relationships between the context and slots at different semantic levels, and self-attention to resolve cross-domain coreferences. In addition, our proposed architecture does not rely on knowing the domain ontologies beforehand and can also be used in a zero-shot setting for new domains or unseen slot values. Our model improves joint goal accuracy by 5% (absolute) in the full-data setting and by up to 2% (absolute) in the zero-shot setting over the previous state of the art on the MultiWoZ 2.1 dataset.
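
To make the attention scheme concrete, below is a minimal PyTorch sketch of the pattern the abstract describes: self-attention over the encoded dialog history (to help resolve cross-domain coreferences), followed by cross-attention in which the encoded slot name queries the history. All module names, dimensions, and the overall wiring here are illustrative assumptions, not the paper's actual MA-DST implementation.

    import torch
    import torch.nn as nn

    class AttentionDSTSketch(nn.Module):
        """Illustrative sketch only: hypothetical names and dimensions,
        not the authors' released architecture."""

        def __init__(self, hidden_dim: int = 256, num_heads: int = 4):
            super().__init__()
            # Self-attention over the encoded history, intended to link
            # mentions across turns (e.g., cross-domain coreferences).
            self.history_self_attn = nn.MultiheadAttention(
                hidden_dim, num_heads, batch_first=True
            )
            # Cross-attention: the encoded slot name queries the history,
            # modeling slot-context relationships.
            self.slot_cross_attn = nn.MultiheadAttention(
                hidden_dim, num_heads, batch_first=True
            )

        def forward(self, history_enc, slot_enc):
            # history_enc: (batch, history_len, hidden_dim), e.g., encoder
            # outputs over the concatenated conversation so far.
            # slot_enc: (batch, slot_len, hidden_dim) for a slot name
            # such as "hotel-price range".
            history_ctx, _ = self.history_self_attn(
                history_enc, history_enc, history_enc
            )
            slot_ctx, weights = self.slot_cross_attn(
                slot_enc, history_ctx, history_ctx
            )
            # slot_ctx would feed a downstream decoder/classifier that
            # predicts the slot's value, which is what would let the model
            # handle unseen slot values without a fixed ontology.
            return slot_ctx, weights

    # Toy usage with random tensors:
    model = AttentionDSTSketch()
    history = torch.randn(2, 50, 256)  # two dialogs, 50 history positions
    slot = torch.randn(2, 3, 256)      # a 3-token slot-name encoding
    out, weights = model(history, slot)
    print(out.shape)  # torch.Size([2, 3, 256])

In this sketch, applying self-attention before the slot query means the slot attends over history representations that have already been contextualized across turns; the paper applies such mechanisms at multiple semantic granularities.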

Published

2020-04-03

How to Cite

Kumar, A., Ku, P., Goyal, A., Metallinou, A., & Hakkani-Tur, D. (2020). MA-DST: Multi-Attention-Based Scalable Dialog State Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8107-8114. https://doi.org/10.1609/aaai.v34i05.6322

Section

AAAI Technical Track: Natural Language Processing