AAAI-11 / IAAI-11 Invited Speakers
AAAI-11 Panel Discussion
25th AAAI Conference Anniversary Panel
AAAI-11 / IAAI-11 Invited Talk
Building Watson: An Overview of DeepQA for the Jeopardy! Challenge
David Ferrucci (IBM T. J. Watson Research Center)
Computer systems that can directly and accurately answer people’s questions over a broad domain of human knowledge have been envisioned by scientists and writers since the advent of computers themselves. Open-domain question answering holds tremendous promise for facilitating informed decision making over vast volumes of natural language content. Applications in business intelligence, healthcare, customer support, enterprise knowledge management, social computing, science, and government would all benefit from deep language processing. The DeepQA project is aimed at exploring how advancing and integrating natural language processing, information retrieval, machine learning, massively parallel computation, and knowledge representation and reasoning can greatly advance open-domain automatic question answering. An exciting proof point in this challenge is to develop a computer system that can successfully compete against top human players at the Jeopardy! quiz show. Attaining champion-level performance at Jeopardy! requires a computer system to rapidly and accurately answer rich open-domain questions, and to predict its own performance on any given category/question. The system must deliver high degrees of precision and confidence over a very broad range of knowledge and natural language content with a 3-second response time. To do this, DeepQA generates, evidences, and evaluates many competing hypotheses. A key to success is automatically learning and combining accurate confidences across an array of complex algorithms and over different dimensions of evidence. Accurate confidences are needed to know when to “buzz in” against your competitors and how much to bet. High precision and accurate confidence computations are just as critical for providing real value in business settings, where helping users focus on the right content sooner and with greater confidence can make all the difference.
The need for speed and high precision demands a massively parallel computing platform capable of generating, evaluating, and combining thousands of hypotheses and their associated evidence. In this talk I will introduce the audience to the Jeopardy! Challenge and how we tackled it using DeepQA.
David Ferrucci is the lead researcher and principal investigator for the Watson/Jeopardy! project. He has been a research staff member at IBM’s T. J. Watson Research Center since 1995, where he heads the Semantic Analysis and Integration department. Ferrucci focuses on technologies for automatically discovering valuable knowledge in natural language content and using it to enable better decision making.
As part of his research he led the team that developed UIMA. UIMA is a software framework and open standard widely used by industry and academia for collaboratively integrating, deploying, and scaling advanced text and multimodal (speech, video) analytics. As chief software architect for UIMA, Ferrucci led its design and chaired the UIMA standards committee at OASIS. The UIMA software framework is deployed in IBM products and has been contributed to Apache open source to facilitate broader adoption and development. In 2007, Ferrucci took on the Jeopardy! Challenge — tasked to create a computer system that can rival human champions at the game of Jeopardy!. As the PI for the exploratory research project dubbed DeepQA, he focused on advancing automatic, open-domain question answering using massively parallel evidence-based hypothesis generation and evaluation. By building on UIMA and key university collaborations, and by taking bold research, engineering, and management steps, he led his team to integrate and advance many search, natural language processing, and semantic technologies to deliver results that have outperformed all expectations and have demonstrated world-class performance at a task previously thought insurmountable with the current state of the art. Watson, the computer system built by Ferrucci’s team, is now competing with top Jeopardy! champions. Under his leadership they have already begun to demonstrate how DeepQA can make dramatic advances for intelligent decision support in areas including medicine, finance, publishing, government, and law.
Ferrucci has been the principal investigator on several government-funded research programs on automatic question answering, intelligent systems, and scalable text analytics. His team at IBM consists of 28 researchers and software engineers specializing in the areas of natural language processing, software architecture, information retrieval, machine learning, and knowledge representation and reasoning.
Ferrucci graduated from Manhattan College with a BS in biology and from Rensselaer Polytechnic Institute in 1994 with a PhD in computer science specializing in knowledge representation and reasoning. He is published in the areas of AI, knowledge representation and reasoning, natural language processing, and automatic question-answering.
AAAI-11 Invited Talk
Towards Artificial Systems: What Can We Learn From Human Perception?
Heinrich H. Buelthoff (Max Planck Institute for Biological Cybernetics)
Recent progress in learning algorithms and sensor hardware has led to rapid advances in artificial systems. However, their performance continues to fall short of the efficiency and plasticity of human behavior. In many ways, a deeper understanding of how humans process and act upon physical sensory information can contribute to the development of better artificial systems. In this presentation, Buelthoff will highlight how the latest tools in computer vision, computer graphics, and virtual reality technology can be used to systematically understand the factors that determine how humans behave and solve tasks in realistic scenarios.
Heinrich Buelthoff is scientific member of the Max Planck Society and director at the Max Planck Institute for Biological Cybernetics in Tuebingen. He is head of the Department of Human Perception, Cognition, and Action in which a group of about 70 researchers investigate psychophysical and computational aspects of higher level visual processes in object and face recognition, sensory-motor integration, spatial cognition, and perception and action in virtual environments. He holds a Ph.D. degree in the natural sciences from the Eberhard-Karls-Universitaet in Tuebingen. From 1980 to 1988 he worked as a research scientist at the Max Planck Institute for Biological Cybernetics and the Massachusetts Institute of Technology and from 1988-1993 he was an assistant, associate and full professor of cognitive science at Brown University in Providence. He is an honorary professor at the Eberhard-Karls-Universitaet (Tuebingen) and Korea University (Seoul) and editor of several international journals.
AAAI-11 Invited Talk
From Turn-Taking to Social Ties
Karrie Karahalios (University of Illinois)
Online communities have been studied from various perspectives since the 1980s. Much of this work has taken existing sociology techniques and molded them to fit a specific electronic environment such as IRC, Usenet, Facebook, and others. The existence of digital traces of online interaction has made this research possible at a large scale.
In this talk, Karahalios begins by discussing a brief history of the study of online interaction and the cues used by researchers to formulate their research. She continues by describing how the study of online social spaces has changed through the lens of the work done in the Social Spaces Group. Karahalios argues that digital traces can be misleading and that new techniques and interfaces are necessary to improve and study social online interaction. This discussion includes work highlighting differences in interaction between rural and urban areas, tie strength inferred from social network software, and the implications of this work. Finally, Karahalios concludes by highlighting how online social interaction is diverging from face-to-face interaction and the importance of new methodologies and interfaces for studying this change.
Karrie Karahalios is an associate professor in computer science at the University of Illinois, where she heads the Social Spaces Group. Her work focuses on the interaction between people and the social cues they emit and perceive in networked electronic spaces. Her work is informed by studies and visualizations of social communities. Of particular interest are interfaces for public online and physical gathering spaces such as Twitter, chatrooms, cafes, parks, and others. One goal is to create interfaces that enable users to perceive conversational patterns that are present, but not obvious, in traditional communication interfaces. Karahalios completed an S.B. in electrical engineering, an M.Eng. in electrical engineering and computer science, and an S.M. and Ph.D. in media arts and science at MIT.
AAAI-11 Invited Talk
Strategic Intelligence in Social Networks
Michael Kearns (University of Pennsylvania)
For the past six years at Penn, we have been conducting controlled human-subject experiments on strategic interaction in social networks. The overarching goal of these experiments is to provide a behavioral counterpart to the flourishing research on mathematical models of social networks, diffusion dynamics, influence in social networks, and related topics.
To date we have conducted experiments on a wide variety of strategic and computational tasks in social networks, including graph coloring (which can be viewed as a problem of social differentiation), consensus, biased voting, trading and bargaining in networks, and network formation. These experiments have yielded a wealth of findings and data on the ability of human subjects to solve challenging collective tasks from only local interactions, and have shed light on basic topics such as influence and altruism in social networks, and the relationship between network structure and collective and individual performance and behavior. The experiments also raise interesting challenges for notions of collective intelligence in humans and machines, and for the application of machine learning to the resulting data.
Michael Kearns is a professor in the Department of Computer and Information Science at the University of Pennsylvania, where he pursues research in machine learning, algorithmic game theory, social networks, and computational finance. Prior to joining the Penn faculty he spent a decade in research at AT&T/Bell Labs, where he was head of the Artificial Intelligence Research Department. Kearns received his undergraduate degree from the University of California, Berkeley in mathematics and computer science, and his doctorate from Harvard University in computer science.
AAAI-11 Invited Talk
Registration and Recognition for Robotics
Kurt Konolige (Willow Garage, Inc and Stanford University)
Robotic manipulation around the home and office requires perception of the environment and objects within it. In this talk, Konolige highlights the key roles played by visual registration. The first role is in keeping track of where the robot is, and for understanding how multiple views of the environment correspond to each other. The second is in finding and manipulating objects in the world. Registration and recognition methods will be illustrated with examples from Willow Garage’s PR2 robot.
Kurt Konolige is a senior researcher at Willow Garage, a robotics startup dedicated to open-source software for personal robotics. He is also a consulting professor of computer science at Stanford University, and a Fellow of AAAI. He received his PhD in computer science from Stanford University in 1984. He has authored or coauthored over 300 scientific publications, including 3 books and best papers at the 1995 IJCAI conference and the 1998 IROS conference. He teaches a course in mobile robotics at Stanford University, co-developed the Pioneer mobile robots, and co-developed the low-cost laser sensor in the Neato vacuuming robot. His recent research has concentrated on real-time perception and navigation for mobile robots.
IAAI-11 Robert S. Engelmore Memorial Award Lecture
Playing with Cases: Rendering Expressive Music Performance with Case-Based Reasoning
Ramon Lopez De Mantaras (Artificial Intelligence Research Institute (IIIA) and Spanish National Research Council (CSIC))
Rendering expressive music performances involves complex processes that constitute a challenging research area for computer music research. Moreover, it is a rich field for investigating aspects of human intelligence, emotion, and creativity. Case-based reasoning is one of the AI techniques that has produced the most promising results in rendering expressive music performances. Furthermore, it has advanced the state of the art in case-based reasoning through the invention of new approaches to case representation, case retrieval, and case reuse adapted to musical knowledge. In this talk Lopez de Mantaras will describe in some detail two successful case-based reasoning systems applied to expressive music performance that have been developed at the Artificial Intelligence Research Institute.
Ramon Lopez de Mantaras is a research professor in the Spanish National Research Council (CSIC) and director of the Artificial Intelligence Research Institute (IIIA). He earned his MS in computer science from the University of California, Berkeley, a PhD in physics from the University of Toulouse, and a PhD in computer science from the Technical University of Barcelona. He is an associate editor of the AI Journal, and an editorial board member of several international journals including AI Magazine. He was program committee chairman of UAI-94 and ECML-00, conference chair of ECAI-04, ECML-07, and IJCAI-07, and is an ECCAI Fellow and recipient of several awards including ECCAI’s DEC European AI Research Paper Award, the “City of Barcelona” Research Prize, and the International Computer Music Association “Swets & Zeitlinger” Award. Lopez de Mantaras was president of the Board of Trustees of IJCAI from 2007 to 2009. His research interests include case-based reasoning, machine learning, artificial intelligence and music, and object recognition for robotics.
IAAI-11 Invited Talk
HaloBook and Progress Towards Digital Aristotle
David Gunning (Vulcan Inc.)
Project Halo is a long-range research effort, pursuing the vision of the “Digital Aristotle” — a system containing large volumes of scientific knowledge and capable of applying sophisticated problem-solving methods to answer novel questions, with applications in education and scientific research. The current focus of the project is the development of HaloBook — an electronic textbook capable of answering a student’s questions. This talk will summarize the history and motivation for Project Halo, describe the current work on HaloBook, and discuss possible Grand Challenges to motivate future research.
David Gunning is a senior research program manager at Vulcan Inc., where he leads the HaloBook development. Prior to joining Vulcan, he served as a program manager at the Defense Advanced Research Projects Agency (DARPA) from 2003–2008 and 1994–2000, where he managed a number of AI projects, including the Personalized Assistant that Learns (PAL), Command Post of the Future (CPOF), Evidence Extraction and Link Discovery (EELD), and High-Performance Knowledge Bases (HPKB). Between tours at DARPA, Gunning worked as vice president of Cycorp, Inc. and SET Corporation. He holds MS degrees in computer science from Stanford University and experimental psychology from the University of Dayton.