Monday, February 5, 2018
Grand Ballroom A
AAAI-18 is pleased to present the special AAAI-18 Emerging Topic Program on Human-AI Collaboration. While AI systems continue to get better at independent perception, learning, and well-defined reasoning tasks, there is emerging interest in designing AI systems to complement and enhance, rather than supplant, human capabilities. New technical challenges arise in the development of machines that model and infer the mental and physical state of human counterparts, and apply these models to collaborate with people in richer, more flexible ways. The goal of the emerging topic on Human-AI Collaboration at AAAI-18 is to highlight these technical challenges and opportunities, as well as showcase the value of new human-AI partnerships.
The program will include four invited talks and 21 technical papers in full oral and spotlight/poster presentations. The four 30-minute invited talks will be split between two hour-long sessions, beginning at 10:00 AM and 4:00 PM.
The papers will be presented in the intervening 11:30 AM and 2:00 PM sessions. For complete schedule information, please consult the online program.
Cornell University
Talk: Communicative Actions in Human-Robot Teams
Time: Monday, February 5, 10:00-10:30 AM
Abstract: Robots out in the world today work for people but not with people. Before robots can work closely with ordinary people as part of a human-robot team in a home or office setting, robots need the ability to think and act more like people. When people act jointly as part of a team, they engage in collaborative planning, which involves forming a consensus through an exchange of information about goals, capabilities, and partial plans. In this talk, I describe a framework for robots to understand and generate messages broadly — not only through natural language but also by functional actions that carry meaning through the context in which they occur. Careful action selection allows robots to clearly and concisely communicate meaning with human partners in a manner that almost resembles telepathy. I show examples of how this implicit communication can facilitate activities as basic as hallway navigation and as sophisticated as collaborative tool use in assembly tasks. I also show how these abilities can assist in recovery after a failure.
Ross A. Knepper is an Assistant Professor in the Department of Computer Science at Cornell University, where he directs the Robotic Personal Assistants Lab. His research focuses on the theory and algorithms of human-robot interaction in collaborative work. He builds systems to perform complex tasks where partnering a human and robot together is advantageous for both, such as factory assembly or home chores. Ross has built robot systems that can assemble Ikea furniture, ask for help when something goes wrong, and interpret informal speech and gesture commands. Before Cornell, Ross was a Research Scientist at MIT. He received his Ph.D. in Robotics from Carnegie Mellon University in 2011.
Ben-Gurion University of the Negev
Talk: Supporting People’s Interactions in Online Groups: A New Challenge for AI
Time: Monday, February 5, 10:30-11:00 AM
Abstract: Advances in network technologies and interface design are enabling group activities of varying complexities to be carried out, in whole or in part, over the internet (e.g., citizen science, Massive Open Online Courses (MOOCs), and question-and-answer sites).
The need to support these highly diverse interactions brings new and significant challenges to AI: how to design efficient representations for describing online group interactions; how to provide incentives that keep participants motivated and productive; and how to provide useful, non-intrusive information to system designers to help them decide whether and how to intervene with the group’s work. I will describe two ongoing projects that address these challenges in the wild, with the goals of supporting student group-learning in the classroom and of increasing the contributions of thousands of volunteers in one of the largest citizen science platforms on the web.
Joint work with: Avi Segal, Ece Kamar, Eric Horvitz, Baruch Schwartz
Dr. Ya’akov (Kobi) Gal is a faculty member of the Department of Information Systems Engineering at the Ben-Gurion University of the Negev, and an associate of the School of Engineering and Applied Sciences at Harvard University. His work investigates representations and algorithms for making decisions in heterogeneous groups comprising both people and computational agents. He has worked on combining artificial intelligence algorithms with educational technology towards supporting students in their learning and teachers’ understanding of students’ learning strategies. He is a recipient of the Wolf Foundation’s 2013 Krill Prize for young Israeli scientists, the ACM Economics and Computation best paper award for 2016, a Marie Curie International Fellowship, and a three-time recipient of Harvard University’s outstanding teacher award.
Washington State University
Talk: Improving Reinforcement Learning with Human Input
Time: Monday, February 5, 4:00-4:30 PM
Abstract: Reinforcement learning has had many successes, but significant amounts of time and/or data can be required to reach acceptable performance. If agents or robots are to be deployed in real-world environments, it is critical that our algorithms take advantage of existing human knowledge. This talk will discuss a selection of recent work that improves reinforcement learning by leveraging 1) demonstrations and 2) reward feedback from imperfect users, with an emphasis on how interactive machine learning can be extended to best leverage the unique abilities of both computers and humans.
Matthew E. Taylor received his doctorate from the Department of Computer Sciences at UT-Austin in 2008. Matt then completed a two-year postdoctoral research position at the University of Southern California and spent two years as an assistant professor at Lafayette College. He holds the Allred Distinguished Professorship in Artificial Intelligence at Washington State University in the School of EECS and is a recipient of the National Science Foundation CAREER award. Matt is currently on leave at Borealis AI, a Canadian institute funded by the Royal Bank of Canada, where he leads a research team focused on reinforcement learning in Edmonton.
Georgia Institute of Technology
Talk: Towards Theory of AI’s Mind
Time: Monday, February 5, 4:30-5:00 PM
Abstract: To effectively leverage the progress in Artificial Intelligence (AI) to make our lives more productive, it is important for humans and AI to work well together in a team. Traditionally, research has focused primarily on making AI more accurate, and (to a lesser extent) on having it better understand human intentions, tendencies, beliefs, and contexts. The latter involves making AI more human-like and having it develop a theory of our minds. In this talk, I will argue that for human-AI teams to be effective, humans must also develop a Theory of AI’s Mind – get to know its strengths, weaknesses, beliefs, and quirks. I will present some (very) initial results in the context of visual question answering and visual dialog — where the AI agent is trained to answer natural language questions about images.
Devi Parikh is an Assistant Professor in the School of Interactive Computing at Georgia Tech, and a Research Scientist at Facebook AI Research (FAIR). From 2013 to 2016, she was an Assistant Professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. From 2009 to 2012, she was a Research Assistant Professor at Toyota Technological Institute at Chicago (TTIC), an academic computer science institute affiliated with University of Chicago. She has held visiting positions at Cornell University, University of Texas at Austin, Microsoft Research, MIT, Carnegie Mellon University, and Facebook AI Research. She received her M.S. and Ph.D. degrees from the Electrical and Computer Engineering department at Carnegie Mellon University in 2007 and 2009 respectively. She received her B.S. in Electrical and Computer Engineering from Rowan University in 2005.
Her research interests include computer vision and AI in general and visual recognition problems in particular. Her recent work involves exploring problems at the intersection of vision and language, and leveraging human-machine collaboration for building smarter machines. She has also worked on other topics such as ensemble of classifiers, data fusion, inference in probabilistic models, 3D reassembly, barcode segmentation, computational photography, interactive computer vision, contextual reasoning, hierarchical representations of images, and human-debugging.
She is a recipient of an NSF CAREER award, an IJCAI Computers and Thought award, a Sloan Research Fellowship, an Office of Naval Research (ONR) Young Investigator Program (YIP) award, an Army Research Office (ARO) Young Investigator Program (YIP) award, an Allen Distinguished Investigator Award in Artificial Intelligence from the Paul G. Allen Family Foundation, four Google Faculty Research Awards, an Amazon Academic Research Award, an Outstanding New Assistant Professor award from the College of Engineering at Virginia Tech, a Rowan University Medal of Excellence for Alumni Achievement, Rowan University’s 40 under 40 recognition, and a Marr Best Paper Prize awarded at the International Conference on Computer Vision (ICCV). https://www.cc.gatech.edu/~parikh