AAAI-12 Invited Talks and Presidential Address
AAAI-12 will feature the following series of distinguished speakers (preliminary list):
- Henry Kautz: AAAI Presidential Address
- Judea Pearl: 2011 ACM A. M. Turing Award Lecture
- Christos H. Papadimitriou: AAAI Turing Lecture
- Regina Barzilay: AAAI-12 invited talk
- Ross King: AAAI-12 invited talk
- Josh Tenenbaum: AAAI-12 invited talk
- Luis von Ahn: AAAI-12 invited talk
- Steven Minton: Robert S. Engelmore Award Lecture
- Sebastian Thrun: IAAI-12 invited talk
- Andrew Ng: EAAI-12 invited talk
Videos and Slides Available!
AAAI-12 invited talk videos and corresponding slides are now available at VideoLectures.net.
The ACM Turing Lecture presented by Judea Pearl is available on the ACM website.
AAAI Presidential Address
Revisiting the Dream
Henry Kautz (University of Rochester)
Tuesday, July 24
9:00 – 10:00 AM
For much of its history, the field of AI has been in retreat from the most ambitious goals of its founders. Rather than attempting to understand and replicate general intelligence, research concentrated on smaller, better-defined perceptual and reasoning tasks over limited domains. We are now at a point, however, where the subfields are coming together again, and the idea of achieving the old dream of AI is no longer fanciful. We are entering an age of computerized personal servants, self-driving vehicles, the universal natural language translator, and the solution to the mysteries of the brain. The merging of human and machine intelligence will drive progress on problems across science, industry, and education. I will recap some of the transformative events we have witnessed over the past few years, and describe a vision of the near future that is both realistic and more wildly optimistic than most serious scientists would have dared imagine a decade ago.
Henry Kautz is chair of the Department of Computer Science at the University of Rochester. He performs research in knowledge representation, satisfiability testing, pervasive computing, and assistive technology. His academic degrees include an A.B. in mathematics from Cornell University, an M.A. in Creative Writing from the Johns Hopkins University, an M.Sc. in Computer Science from the University of Toronto, and a Ph.D. in computer science from the University of Rochester. He was a researcher and department head at Bell Labs and AT&T Laboratories until becoming a professor in the Department of Computer Science and Engineering of the University of Washington in 2000. He left Seattle for the University of Rochester in 2006. He is president (2010-2012) of the Association for the Advancement of Artificial Intelligence, a Fellow of the Association for the Advancement of Artificial Intelligence, a Fellow of the American Association for the Advancement of Science, and a recipient of the IJCAI Computers and Thought Award.
ACM A. M. Turing Award Lecture
The Mechanization of Causal Inference: A “Mini Turing Test” and Beyond
Judea Pearl (University of California, Los Angeles)
(Open to all registrants and ACM members)
Tuesday, July 24
10:20 – 11:20 AM
Judea Pearl, professor of computer science at the University of California, Los Angeles, was recently named the recipient of the 2011 ACM A. M. Turing Award for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning. Pearl has chosen AAAI-12 as the venue to deliver his Turing Award Lecture. This lecture is open to all conference participants and ACM members.
Judea Pearl will review concepts, principles, and mathematical tools that were found useful in applications involving causal and counterfactual reasoning. This semantic framework gives rise to a coherent and friendly calculus that unifies several approaches to causation and resolves long-standing problems in the empirical sciences. The mechanization of counterfactual reasoning amounts to passing a mini “Turing test” in causal conversations. Its application in the empirical sciences unveils several opportunities and limitations of the “big-data” enterprise.
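The distinction at the heart of Pearl's calculus can be illustrated in a few lines of simulation. The following sketch is not from the lecture; the model, variable names, and probabilities are invented. It builds a toy structural causal model with a confounder Z and shows that the observational quantity P(Y=1 | X=1) differs from the interventional quantity P(Y=1 | do(X=1)) obtained by severing the arrow from Z into X:

```python
# Toy structural causal model: Z -> X, Z -> Y, X -> Y.
# Conditioning on X (seeing) differs from intervening on X (doing),
# because Z confounds the X-Y relationship.
import random

random.seed(0)
N = 200_000

def sample(do_x=None):
    """Draw one (z, x, y). If do_x is given, X is set by intervention,
    which severs the Z -> X edge."""
    z = 1 if random.random() < 0.5 else 0
    if do_x is None:
        x = 1 if random.random() < (0.8 if z else 0.2) else 0
    else:
        x = do_x
    p_y = 0.3 + 0.4 * x + 0.2 * z   # Y depends on X and on the confounder Z
    y = 1 if random.random() < p_y else 0
    return z, x, y

# Observational estimate of P(Y=1 | X=1): biased upward (analytic value 0.86),
# because units with X=1 tend to have Z=1 as well.
obs = [y for _, x, y in (sample() for _ in range(N)) if x == 1]
p_y_given_x1 = sum(obs) / len(obs)

# Interventional estimate of P(Y=1 | do(X=1)): the causal effect (analytic value 0.80).
intv = [y for _, _, y in (sample(do_x=1) for _ in range(N))]
p_y_do_x1 = sum(intv) / len(intv)
```

The gap between the two estimates is exactly the confounding bias that the do-calculus is designed to identify and remove from observational data.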
Judea Pearl is a professor of computer science and statistics at the University of California, Los Angeles. He is a graduate of the Technion, Israel, and joined the faculty of UCLA in 1970, where he currently directs the Cognitive Systems Laboratory and conducts research in artificial intelligence, causal inference, and philosophy of science. Pearl has authored three books: Heuristics (1984), Probabilistic Reasoning (1988), and Causality (2000, 2009), and is a member of the National Academy of Engineering and the American Academy of Arts and Sciences. He is the recipient of the 2008 Benjamin Franklin Medal for Computer and Cognitive Science and the 2011 David Rumelhart Prize from the Cognitive Science Society. In 2012 he received the Technion’s Harvey Prize and the ACM A. M. Turing Award.
AAAI Turing Lecture
The Origin of Computable Numbers: A Tale of Two Classics
Christos H. Papadimitriou (University of California, Berkeley)
Wednesday, July 25
Turing, like Darwin, transformed scientific and human culture through a singularly disruptive work written in a brilliantly self-conscious style. I shall recount the stories of these two classics, concluding with certain unexpected connections between computational ideas and evolution.
Christos H. Papadimitriou is the C. Lester Hogan Professor of Electrical Engineering and Computer Science, Computer Science Division, University of California at Berkeley. Before joining Berkeley in 1996 he taught at Harvard, MIT, Athens Polytechnic, Stanford, and the University of California, San Diego. He has written five textbooks and many articles on algorithms and complexity, and their applications to optimization, databases, AI, economics, evolution, and the Internet. He holds a PhD from Princeton (1976), and honorary doctorates from ETH (Zurich), Athens Polytechnic, and the Universities of Macedonia, Athens, Cyprus, and Patras. He is a member of the US National Academy of Sciences, the American Academy of Arts and Sciences, and the National Academy of Engineering, and a fellow of the ACM. His novel Turing (a novel about computation) was published by The MIT Press in 2003, and his graphic novel Logicomix (with Apostolos Doxiadis) has been translated into more than 25 languages.
AAAI-12 Invited Talk
Learning to Behave by Reading
Regina Barzilay (Massachusetts Institute of Technology)
Thursday, July 26
In this talk, I will address the problem of grounding linguistic analysis in control applications, such as game playing and robot navigation. We assume access to natural language documents that describe the desired behavior of a control algorithm (such as game strategy guides). Our goal is to demonstrate that knowledge automatically extracted from such documents can dramatically improve performance of the target application. First, I will present a reinforcement learning algorithm for learning to map natural language instructions to executable actions. This technique has enabled automation of tasks that until now have required human participation — for example, automatically configuring software by consulting how-to guides. Next, I will present a Monte-Carlo search algorithm for game playing that incorporates information from game strategy guides. In this framework, the task of text interpretation is formulated as a probabilistic model that is trained based on feedback from Monte-Carlo search. When applied to the Civilization strategy game, a language-empowered player outperforms its traditional counterpart by a significant margin.
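The idea of letting guide text bias a Monte-Carlo player can be sketched very simply. The following toy is not the authors' system; the guide sentence, actions, and win rates are all invented. It scores each candidate action by combining a noisy rollout estimate with a bonus for actions whose words appear in the strategy guide:

```python
# Toy sketch: combine Monte-Carlo rollout estimates with a feature
# extracted from strategy-guide text when choosing a game action.
import random

random.seed(1)
GUIDE = "early in the game build a city near a river and research pottery"
GUIDE_WORDS = set(GUIDE.split())

# Invented per-action win rates standing in for a real game simulator.
TRUE_WIN_RATE = {"build city": 0.55, "attack barbarians": 0.50, "research pottery": 0.52}

def rollout(action):
    """One noisy Monte-Carlo playout: win (1) or loss (0)."""
    return 1 if random.random() < TRUE_WIN_RATE[action] else 0

def text_score(action):
    """Fraction of the action's words that appear in the guide text."""
    words = action.split()
    return sum(w in GUIDE_WORDS for w in words) / len(words)

def choose_action(actions, n_rollouts=200, text_weight=0.1):
    scores = {}
    for a in actions:
        mc = sum(rollout(a) for _ in range(n_rollouts)) / n_rollouts
        scores[a] = mc + text_weight * text_score(a)  # guide text acts as a prior bonus
    return max(scores, key=scores.get)

best = choose_action(list(TRUE_WIN_RATE))
```

In the talk's framework the text feature is not a fixed bonus like this but a learned probabilistic model, trained from the Monte-Carlo search's own feedback; the sketch only shows where guide text enters the action-scoring loop.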
Regina Barzilay is an associate professor in the Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. Her research interests are in natural language processing. She is a recipient of various awards including the NSF CAREER Award, the MIT Technology Review TR-35 Award, the Microsoft Faculty Fellowship, and several Best Paper Awards at top NLP conferences. She received her Ph.D. in Computer Science from Columbia University in 2003 and spent a year as a postdoc at Cornell University.
AAAI-12 Invited Talk
Automating Biology Using Robot Scientists
Ross D. King (University of Manchester, UK)
Wednesday, July 25
A robot scientist is a physically implemented robotic system that applies techniques from artificial intelligence to execute cycles of automated scientific experimentation: hypothesis formation, selection of efficient experiments to discriminate between hypotheses, execution of experiments using laboratory automation equipment, and analysis of results. We developed the robot scientist Adam to investigate yeast functional genomics. Adam is the first machine to have discovered novel scientific knowledge. This knowledge is described in a formal argument involving over 10,000 different research units that relates Adam’s 6.6 million observations to its conclusions. Our new robot scientist Eve applies the same approach to drug design. Eve has efficiently found “lead compounds” for malaria and other neglected tropical diseases.
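The experiment-selection step of such a cycle can be sketched abstractly. The following is not Adam's actual implementation; the "hypotheses" and "assays" are invented placeholders. It repeatedly picks the experiment whose outcome comes closest to halving the surviving hypothesis set, runs it (here, simulated), and discards the hypotheses the result refutes:

```python
# Hypothesis-discrimination loop: choose informative experiments,
# run them, and eliminate refuted hypotheses until one survives.

def best_experiment(hypotheses, experiments):
    """Choose the experiment that comes closest to halving the hypothesis set."""
    def imbalance(exp):
        positives = sum(exp(h) for h in hypotheses)
        return abs(2 * positives - len(hypotheses))
    return min(experiments, key=imbalance)

def run_cycle(hypotheses, experiments, true_hypothesis):
    """Iterate experiment selection until one hypothesis survives."""
    trials = 0
    while len(hypotheses) > 1:
        exp = best_experiment(hypotheses, experiments)
        outcome = exp(true_hypothesis)  # the laboratory step, simulated here
        hypotheses = [h for h in hypotheses if exp(h) == outcome]
        trials += 1
    return hypotheses[0], trials

# Toy instance: which of 8 candidate genes encodes the missing enzyme?
# Each "assay" reports one bit of the candidate's index.
candidates = list(range(8))
assays = [lambda g, bit=b: bool(g & (1 << bit)) for b in range(3)]
found, n_trials = run_cycle(candidates, assays, true_hypothesis=5)
```

Because each chosen experiment halves the candidate set, the loop isolates the true hypothesis in three trials rather than the eight a brute-force screen would need; this efficiency is what makes closing the loop with laboratory automation worthwhile.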
Ross King’s first degree was in microbiology; he then switched to computer science and has been interested in the intersection of these disciplines ever since. His Ph.D., on machine learning for bioinformatics, was done in the late 1980s at the Turing Institute in Scotland, which was led by the AI pioneer Donald Michie and was the best place he has ever worked. In his career he has worked in a statistics department, a startup company, a structural biology lab, and computer science departments. Unusually, he ran both a “wet” biology laboratory (with his robots) and a computer science lab. In his view science is an excellent application domain for AI research, and AI can have a real impact on making the practice of science more efficient. The physics Nobel Laureate Frank Wilczek is on record as stating that the best physicist in 100 years’ time may be a machine; King is working towards this vision.
AAAI-12 Invited Talk
How to Grow a Mind: Statistics, Structure and Abstraction
Joshua B. Tenenbaum (Massachusetts Institute of Technology)
Thursday, July 26
The fields of cognitive science and artificial intelligence grew up together, with the twin goals of understanding human minds and making machines smarter in more humanlike ways. Yet since the 1980s they have mostly grown apart, as cognitive scientists came to see AI as too focused on applications and technical engineering issues rather than big questions of intelligence, while AI researchers came to see cognitive science as too informal and concerned with peculiarities of human minds and brains rather than general principles. Just in the last few years, however, these fields appear poised to reconverge in exciting and deep ways. Cognitive scientists have begun to adopt the toolkit of modern probabilistic AI as a unifying framework for modeling natural intelligence, while many AI researchers are looking beyond immediate applications to some of the big picture questions that originally motivated the field, and both communities are increasingly aware of and even informed by the other’s moves in these directions.
This talk will describe recent work at the center of the convergence: computational accounts of human intelligence that both draw on and advance state-of-the-art AI. I will focus on capacities for which even young children still far surpass machines: learning from very few examples, and common sense reasoning about the physical and social world. These abilities can be explained as approximate forms of probabilistic (Bayesian) inference over richly structured models — probabilistic models built on top of knowledge representations familiar from earlier, classic AI days, such as graphs, grammars, schemas, predicate logic, and functional programs. In many cases, sampling-based approximate inference with these models can be surprisingly tractable and can predict human judgments with high quantitative accuracy. Extended in a hierarchical nonparametric Bayesian framework, these models can explain how children learn to learn, bootstrapping adult-like intelligence from more primitive foundations. Using probabilistic programming languages, these models can be integrated into a unified cognitive architecture. Throughout the talk I will present concrete examples, along with a few more speculative predictions, of how these cognitive modeling efforts can inform the development of more intelligent machine systems.
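The "learning from very few examples" phenomenon has a compact Bayesian explanation. The following toy, written in the spirit of Tenenbaum's number-game experiments but with invented hypotheses and a uniform prior, shows the "size principle" at work: under P(data | h) = (1/|h|)^n, three examples consistent with both a broad and a narrow hypothesis strongly favor the narrower one:

```python
# Bayesian concept learning from sparse examples via the size principle.
from fractions import Fraction

# Each hypothesis is a set of numbers in 1..100 the concept might denote.
hypotheses = {
    "even numbers":   set(range(2, 101, 2)),
    "powers of two":  {1, 2, 4, 8, 16, 32, 64},
    "multiples of 8": set(range(8, 101, 8)),
}
prior = {name: Fraction(1, len(hypotheses)) for name in hypotheses}

def posterior(data):
    """P(h | data) with likelihood (1/|h|)^n if data is consistent with h, else 0."""
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in data):
            scores[name] = prior[name] * Fraction(1, len(h)) ** len(data)
        else:
            scores[name] = Fraction(0)
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

post = posterior([2, 8, 32])
```

Seeing 2, 8, and 32 rules out "multiples of 8" outright and makes "powers of two" far more probable than "even numbers", even though both remain logically consistent: a narrow hypothesis that keeps predicting the data is rewarded for the coincidence a broad one cannot explain.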
Josh Tenenbaum studies learning, reasoning and perception in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. His current work focuses on building probabilistic models to explain how people come to be able to learn new concepts from very sparse data, how we learn to learn, and the nature and origins of people’s intuitive theories about the physical and social worlds. He is Professor of Computational Cognitive Science in the Department of Brain and Cognitive Sciences at MIT, and is a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his Ph.D. from MIT in 1999, and was a member of the Stanford University faculty in Psychology and (by courtesy) Computer Science from 1999 to 2002. His papers have received awards at the IEEE Computer Vision and Pattern Recognition (CVPR), NIPS, IJCAI and Cognitive Science Society conferences. He is the recipient of early career awards from the Society for Mathematical Psychology (2005), the Society of Experimental Psychologists, and the American Psychological Association (2008), and the Troland Research Award from the National Academy of Sciences (2011).
AAAI-12 Invited Talk
Duolingo: Translating the Web with Millions of People
Luis von Ahn (Carnegie Mellon University)
Tuesday, July 24
I want to translate the web into every major language: every web page, every video, and, yes, even Justin Bieber’s tweets. With its content split up into hundreds of languages — and with over 50 percent of it in English — most of the web is inaccessible to most people in the world. This problem is pressing, now more than ever, with millions of people from China, Russia, Latin America and other quickly developing regions entering the web. In this talk, I introduce my new project, called Duolingo, which aims at breaking the language barrier, and thus making the web truly world wide.
We have all seen how systems such as Google Translate are improving every day at translating the gist of things written in other languages. Unfortunately, they are not yet accurate enough for my purpose: Even when what they spit out is intelligible, it’s so badly written that I can’t read more than a few lines before getting a headache. This is why you don’t see machine-translated books. With Duolingo, our goal is to encourage people, like you and me, to translate the web into their native languages.
Luis von Ahn is the A. Nico Habermann Associate Professor of Computer Science at Carnegie Mellon University. He is working to develop a new area of computer science that he calls Human Computation. He builds systems that combine the intelligence of humans and computers to solve large-scale problems that neither can solve alone. An example of his work is reCAPTCHA, in which over 900 million people — more than 10 percent of humanity — have helped digitize books and newspapers. Among his many honors are a MacArthur Fellowship, a Packard Fellowship, a Sloan Research Fellowship, a Microsoft New Faculty Fellowship, and CMU’s Herbert A. Simon Award for Teaching Excellence and Alan J. Perlis Teaching Award.
Robert S. Engelmore Award Lecture
Building AI: Our Shared Enterprise
Steven Minton (President, InferLink Corporation)
Wednesday, July 25
The past few decades have seen great progress in AI. Most of this progress has resulted from contributions made by teams of scientists and engineers building on the earlier contributions of other teams, rather than from individual “breakthroughs”. Ultimately, this is made possible because we share our methods and results. There are many mechanisms that help us build upon previous work, from traditional scholarly journals and conferences, to open source software, to data repositories. In fact, the development of the internet has facilitated new and exciting ways to build communities and to jointly contribute to the development of AI. This talk will discuss new and evolving models for scientific collaboration, approaches for funding nonprofit enterprises, and innovative ways that we can all contribute to the development of AI.
Steven Minton is the president and founder of InferLink Corporation, which is developing technology for integrating massive amounts of entity-oriented data. Previously, he was chairman and cofounder of Fetch Technologies, a project leader and research associate professor at USC’s Information Sciences Institute, and a principal investigator at NASA’s Ames Research Center. He received his Ph.D. from Carnegie Mellon University in 1988. He is a fellow of the AAAI, and a previous recipient of AAAI’s Classic Paper and Best Paper awards. He founded AI Access Foundation in 1993, and served as JAIR’s first executive editor and managing editor.
IAAI-12 Invited Talk
Recent Progress on Self-Driving Cars
Sebastian Thrun (Stanford University/Google)
Thursday, July 26
This talk provides an update on self-driving car technology developed at Google. Google is on its way to developing technology that can safely control cars that drive in traffic without human attention. In doing so, the team is heavily leveraging AI technology in the areas of perception, planning, control, and machine learning. The speaker will discuss these advances, and also reflect on potential applications of this technology.
Sebastian Thrun is a Google Fellow and a research professor at Stanford University. He is also cofounder of Udacity, a twenty-first-century university. Thrun will be chair of IJCAI-2013.
EAAI-12 Invited Talk
ml-class.org: Teaching Machine Learning to 100,000 Students
Andrew Ng (Stanford University and Coursera)
Monday, July 23
Last year, Stanford University offered three online courses, which anyone in the world could enroll in and take for free. Students were expected to submit homework and meet deadlines, and were awarded a “Statement of Accomplishment” only if they met our high grading bar. Offered this way, my machine learning class had over 100,000 enrolled students. To put this number in context, in order to reach an audience of this size, I would have had to teach my normal Stanford class (enrollment of ~400) for 250 years.
In this talk, I’ll report on the outcome of this bold experiment in distributed education. I’ll also describe my experience teaching one of these classes, and leading the development of the platform used to teach two of the classes. I’ll describe the key technology and pedagogy ideas used to offer these courses, ranging from easy-to-create video chunks, to a scalable online question and answer forum where students can get their questions answered quickly, to sophisticated autograded homeworks. Importantly, using a “flipped classroom” model, we also used these resources to improve the education of the enrolled, on-campus, Stanford students as well.
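The autograding idea scales because the grader, not a human, runs the hidden checks. The following sketch is not the actual ml-class.org grader; the assignment, reference solution, and tolerance are invented for illustration. It compares a student's submitted function against a reference implementation on hidden test cases and returns a fractional score:

```python
# Toy autograder: score a submitted function against a reference
# solution on hidden test cases, tolerating floating-point slack.

def reference_mean(xs):
    """Instructor's reference solution for the (invented) assignment."""
    return sum(xs) / len(xs)

def grade(student_fn, test_cases, tol=1e-9):
    """Return a score in [0, 1]: the fraction of hidden cases matched."""
    passed = 0
    for xs in test_cases:
        try:
            if abs(student_fn(xs) - reference_mean(xs)) <= tol:
                passed += 1
        except Exception:
            pass  # a crashing submission simply misses that case
    return passed / len(test_cases)

# A correct and an off-by-one submission, graded on the same hidden cases.
hidden = [[1, 2, 3], [10.0, 20.0], [5], [0, 0, 0, 4]]
good_score = grade(lambda xs: sum(xs) / len(xs), hidden)
bad_score = grade(lambda xs: sum(xs) / (len(xs) + 1), hidden)
```

Because the hidden cases never leave the server, the same grader can score 400 or 100,000 submissions with identical rigor, which is what makes the "high grading bar" enforceable at this scale.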
Whereas technology and automation have made almost all segments of our economy — such as agriculture, energy, manufacturing, and transportation — vastly more efficient, education today isn’t much different from what it was 300 years ago. Given also the rising costs of higher education, the hyper-competitive nature of college admissions, and the lack of access to a high-quality education, I think there is a huge opportunity to use modern internet and AI technology to inexpensively offer a high-quality education online. Through such technology, we envision millions of people gaining access to the world-leading education that has so far been available only to a tiny few, and using this education to improve their lives, the lives of their families, and the communities they live in. Following the success of the first set of courses, there are now 14 planned courses for Winter quarter (offered by instructors from the University of Michigan, the University of California, Berkeley, and Stanford), and we hope to grow this effort further over time.
Andrew Ng received his PhD from the University of California, Berkeley, and is now an associate professor of computer science at Stanford University, where he works on machine learning and AI. He is also director of the Stanford AI Lab, which is home to about 12 professors and 150 PhD students and postdocs. His previous work includes autonomous helicopters, the Stanford AI Robot (STAIR) project, and ROS (probably the most widely used open-source robotics software platform today). His current work focuses on neuroscience-informed deep learning and unsupervised feature learning algorithms. His group has won best paper or best student paper awards at ICML, ACL, CEAS, and 3DRR. He is a recipient of the Alfred P. Sloan Fellowship and the 2009 IJCAI Computers and Thought Award. He also works on free online education, and recently taught a machine learning class (ml-class.org) to over 100,000 students. He is also a cofounder of Coursera.