AAAI-18 Invited Speakers Schedule
Wednesday, February 7
8:50 – 9:50 am – Percy Liang
AAAI-18 Invited Speakers
AAAI-18 / IAAI-18 will feature the following series of distinguished speakers (partial list):
Subbarao Kambhampati
Arizona State University
AAAI 2018 Presidential Address
Talk: Challenges of Human-Aware AI Systems
Abstract: Research in AI suffers from a longstanding ambivalence toward humans, swinging as it does between their replacement and their augmentation. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. To do this effectively, AI systems must pay more attention to the aspects of intelligence that helped humans work with each other, including emotional and social intelligence.
I will discuss the research challenges in designing such human-aware AI systems, including modeling the mental states of humans in the loop, recognizing their desires and intentions, providing proactive support, exhibiting explicable behavior, giving cogent explanations on demand, and engendering trust. I will survey the progress made so far on these challenges, and highlight some promising directions. I will also touch on the additional ethical quandaries that such systems pose.
I will end by arguing that the quest for human-aware AI systems broadens the scope of AI enterprise, necessitates and facilitates true inter-disciplinary collaborations, and can go a long way towards increasing public acceptance of AI technologies.
Subbarao Kambhampati (Rao) is a professor of Computer Science at Arizona State University. He received his B.Tech. in Electrical Engineering (Electronics) from the Indian Institute of Technology, Madras (1983), and his M.S. (1985) and Ph.D. (1989) in Computer Science from the University of Maryland, College Park. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems.
Kambhampati is a fellow of AAAI and AAAS, and was an NSF Young Investigator. He has received multiple teaching awards, including a university last-lecture recognition. Kambhampati served as Technical Program Co-Chair of AAAI 2005, as an AAAI Councilor, and as AAAI Conference Committee Chair before being elected President of AAAI. He is also a trustee of IJCAI and was the program chair for IJCAI 2016. He serves on the board of directors of the Partnership on AI. Other roles include program co-chair of AIPS 2000 and ICAPS 2013.
Kambhampati’s research as well as his views on the progress and societal impacts of AI have been featured in multiple national and international media outlets. Further information available at rakaposhi.eas.asu.edu.
Yejin Choi
University of Washington
AAAI-18 Invited Speaker
Talk: From Naive Physics to Connotation: Learning and Reasoning about the World using Language
Abstract: Intelligent communication requires reading between the lines, which in turn requires rich background knowledge about how the world works. However, learning unspoken commonsense knowledge from language is nontrivial, as people rarely state the obvious, e.g., “my house is bigger than me.” In this talk, I will discuss how we can recover this trivial everyday knowledge from language alone, without an embodied agent. A key insight is this: the implicit knowledge people share and assume systematically influences the way people use language, which provides indirect clues for reasoning about the world. For example, if “Jen entered her house,” it must be that her house is bigger than her.
In this talk, I will first present how we can organize various aspects of commonsense — ranging from naive physics knowledge to more abstract connotations — by adapting representations of frame semantics. I will then discuss neural network approaches that complement the frame-centric approaches. I will conclude the talk by discussing the challenges in current models and formalisms, pointing to avenues for future research.
Yejin Choi is an associate professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her recent research focuses on integrating language and vision, learning knowledge about the world from text and images, modeling richer context for natural language generation, and modeling the nonliteral meaning of text using connotation frames. She was named one of IEEE AI’s 10 to Watch in 2015 and was a co-recipient of the Marr Prize at ICCV 2013. Her work on detecting deceptive reviews, predicting literary success, and learning to interpret connotation has been featured by numerous media outlets, including NBC News, NPR, the New York Times, and Bloomberg Businessweek. She received her Ph.D. in Computer Science from Cornell University.
Cynthia Dwork
Harvard / Radcliffe Institute for Advanced Study
AAAI-18 Invited Speaker
Talk: Fair Questions
Abstract: “Unfairness” of algorithms – for tasks ranging from advertising to recidivism prediction – has attracted considerable attention in the popular press. Algorithmic techniques for achieving fairness now routinely appear in dedicated workshops and symposia as well as in established research conferences. This talk will focus on the (relatively) new study of a mathematically rigorous theory of fairness: definitions, methods, and provable limits and tradeoffs, providing a lens for hot-button policy issues such as “interpretability” and raising new questions for future research.
Cynthia Dwork, Gordon McKay Professor of Computer Science at Harvard and Radcliffe Alumnae Professor at the Radcliffe Institute for Advanced Study, is renowned for placing privacy-preserving data analysis on a mathematically rigorous foundation. A cornerstone of this work is differential privacy, a strong privacy guarantee frequently permitting highly accurate data analysis, recognized by the 2016 Theory of Cryptography Conference Test-of-Time Award and the Gödel Prize. Dwork has also made seminal contributions in cryptography and distributed computing, and is a recipient of the Edsger W. Dijkstra Prize, recognizing some of her earliest work establishing the pillars on which every fault-tolerant system has been built for decades. Her most recent foci include stability in adaptive data analysis (especially via differential privacy) and fairness in classification. Dwork is a member of the US National Academy of Sciences, the US National Academy of Engineering, and the American Philosophical Society, and is a Fellow of the American Academy of Arts and Sciences.
Zoubin Ghahramani
University of Cambridge / Uber
AAAI/IAAI Joint Invited Talk
Talk: Probabilistic Machine Learning and AI
Abstract: Probability theory provides a mathematical framework for understanding learning and for building rational intelligent systems. I will review the foundations of the field of probabilistic AI. I will then highlight some current areas of research at the frontiers, touching on topics such as Bayesian deep learning, probabilistic programming, Bayesian optimisation, and AI for data science.
Zoubin Ghahramani FRS is Professor of Information Engineering at the University of Cambridge and Chief Scientist at Uber. He is also Deputy Director of the Leverhulme Centre for the Future of Intelligence, and a Fellow of St John’s College. He was a founding Cambridge Director of the Alan Turing Institute, the UK’s national institute for data science. He has worked and studied at the University of Pennsylvania, MIT, the University of Toronto, the Gatsby Unit at University College London, and Carnegie Mellon University. His research focuses on probabilistic approaches to machine learning and artificial intelligence, and he has published over 250 research papers on these topics. He was co-founder of Geometric Intelligence (now Uber AI Labs) and advises a number of AI and machine learning companies. In 2015, he was elected a Fellow of the Royal Society for his contributions to machine learning.
Joseph Halpern
Cornell University
AAAI-18 Invited Speaker
Talk: Actual Causality: A Survey
Abstract: What does it mean that an event C “actually caused” event E? The problem of defining actual causation goes beyond mere philosophical speculation. For example, in many legal arguments, it is precisely what needs to be established in order to determine responsibility. (What exactly was the actual cause of the car accident or the medical problem?) The philosophy literature has been struggling with the problem of defining causality since the days of Hume, in the 1700s. Many of the definitions have been couched in terms of counterfactuals. (C is a cause of E if, had C not happened, then E would not have happened.) In 2001, Judea Pearl and I introduced a new definition of actual cause, using Pearl’s notion of structural equations to model counterfactuals. The definition has been revised twice since then, extended to deal with notions like “responsibility” and “blame”, and applied in databases and program verification. I survey the last 15 years of work here, including joint work with Judea Pearl, Hana Chockler, and Chris Hitchcock. The talk will be completely self-contained.
Joseph Halpern received a Ph.D. in mathematics from Harvard after spending two years as the head of the Mathematics Department at Bawku Secondary School, in Ghana. After a postdoc at MIT and 14 years at the IBM Almaden Research Center (and serving as a consulting professor at Stanford), he joined the CS Department at Cornell in 1996, and was department chair from 2010 to 2014. Halpern is a Fellow of AAAI, AAAS (American Association for the Advancement of Science), the American Academy of Arts and Sciences, ACM, IEEE, the Game Theory Society, and the Society for the Advancement of Economic Theory. He has received the ACM SIGART Autonomous Agents Research Award, the Dijkstra Prize, the Newell Award, the Kampé de Fériet Award, and the Gödel Prize. He was editor-in-chief of the Journal of the ACM (1997–2003) and started, and continues to be the administrator of, CoRR, the computer science section of arXiv.
Charles Isbell
Georgia Institute of Technology
AAAI-18 Invited Speaker
Abstract: We build machine-learning systems because we want them to behave a certain way. In this case, the “we” is usually human beings. Whether we want to convey particular strategies or subtle preferences that define the objective itself, some form of knowledge transfer from person to algorithm is always needed. Interactive machine learning focuses on techniques for facilitating that transfer in the context of solving artificial intelligence problems with machine-learning techniques. This talk will survey some of the problems and techniques studied in interactive machine learning, with a special emphasis on counterintuitive design principles that have arisen from the results of experiments with human participants, especially where those counterintuitive principles arise from “we” being wrong about “us.”
Charles Isbell (Ph.D., MIT, 1998) is a Professor and Senior Associate Dean in the College of Computing at Georgia Tech. A machine learning researcher, his passion is building AI systems that live and interact with large numbers of agents, some of whom may be human. Isbell was a National Academy of Sciences Kavli Fellow for three years and earned both the NSF CAREER and DARPA CSSG awards for young investigators. He has won best paper awards at Agents and ICML. He has served on the organizing committees of ICML, NIPS, RoboCup, Tapia, and the NAS Frontiers of Science Symposia, among others, and has organized meetings at a number of conferences. He participates regularly in doctoral consortia and workshops on mentoring young faculty, and sits on three NSF boards.
Percy Liang
Stanford University
AAAI-18 Invited Speaker
Talk: How Should We Evaluate Machine Learning for AI?
Abstract: Machine learning has undoubtedly been hugely successful in driving progress in AI, but it implicitly brings with it the train-test evaluation paradigm. This standard evaluation only encourages behavior that is good on average; it does not ensure robustness as demonstrated by adversarial examples, and it breaks down for tasks such as dialogue that are interactive or do not have a correct answer. In this talk, I will describe alternative evaluation paradigms with a focus on natural language understanding tasks, and discuss ramifications for guiding progress in AI in meaningful directions.
Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research spans machine learning and natural language processing, with the goal of developing trustworthy agents that can communicate effectively with people and improve over time through interaction. Specific topics include question answering, dialogue, program induction, interactive learning, and reliable machine learning. His awards include the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).