Diana's World: A Situated Multimodal Interactive Agent

Authors

  • Nikhil Krishnaswamy, Brandeis University
  • Pradyumna Narayana, Colorado State University
  • Rahul Bangar, Colorado State University
  • Kyeongmin Rim, Brandeis University
  • Dhruva Patil, Colorado State University
  • David McNeely-White, Colorado State University
  • Jaime Ruiz, University of Florida
  • Bruce Draper, DARPA
  • Ross Beveridge, Colorado State University
  • James Pustejovsky, Brandeis University

DOI:

https://doi.org/10.1609/aaai.v34i09.7096

Abstract

State-of-the-art unimodal dialogue agents lack core aspects of peer-to-peer communication: the nonverbal and visual cues that are a fundamental part of human interaction. To facilitate true peer-to-peer communication with a computer, we present Diana, a situated multimodal agent who exists in a mixed-reality environment with a human interlocutor, is situation- and context-aware, and responds to the human's language, gesture, and affect to complete collaborative tasks.

Published

2020-04-03

How to Cite

Krishnaswamy, N., Narayana, P., Bangar, R., Rim, K., Patil, D., McNeely-White, D., Ruiz, J., Draper, B., Beveridge, R., & Pustejovsky, J. (2020). Diana’s World: A Situated Multimodal Interactive Agent. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09), 13618-13619. https://doi.org/10.1609/aaai.v34i09.7096