The 38th Annual AAAI Conference on Artificial Intelligence
February 20-27, 2024 | Vancouver, Canada
AAAI-24 Panels
Sponsored by the Association for the Advancement of Artificial Intelligence
February 22-25, 2024 | Vancouver Convention Centre – West Building | Vancouver, BC, Canada
AI Strategic Initiatives and Policies
Friday, February 23 – 11:15 AM – 12:30 PM
Location: Ballroom AB
As the massive excitement about recent advances in AI indicates, AI is a transformative technology that will likely have a deep and systemic impact on national and global economies and security. Recognition of this potential impact has led many countries to launch national strategic research initiatives and policies on the development and deployment of AI. The goal of this panel is to discuss some of these strategic initiatives and policies from different perspectives and at different levels of aggregation. We will begin with an overview of the US NSF’s National AI Institutes Program (Donlon), illustrate it with a specific National AI Institute (Goel), describe the structures and processes in the US that result in a national strategic initiative (Littman), present current and potential AI-related policies in the US (Wagstaff), and compare with similar initiatives and policies across the world (Walsh).
Panelists:
Ashok Goel
Georgia Institute of Technology, Chair
Ashok K. Goel is a Professor of Computer Science and Human-Centered Computing in the School of Interactive Computing at Georgia Institute of Technology and the Chief Scientist with Georgia Tech’s Center for 21st Century Universities. For almost forty years, he has conducted research into cognitive systems at the intersection of artificial intelligence and cognitive science, with a focus on computational design and creativity. For almost two decades, much of his research has increasingly focused on AI in education and education in AI. He is a Fellow of AAAI and the Cognitive Science Society, an Editor Emeritus of AAAI’s AI Magazine, and a recipient of AAAI’s Outstanding AI Educator Award as well as the University System of Georgia’s Scholarship of Learning and Teaching Award. Ashok is the PI and Executive Director of the National AI Institute for Adult Learning and Online Education (aialoe.org), sponsored by the United States National Science Foundation.
James Donlon
National Science Foundation
James Donlon is a Program Director at NSF. He created and leads the National AI Research Institutes program, the nation’s flagship, multisector program for federally funded research advancing AI and AI-powered innovation in a wide range of use-inspired sectors. He is also a program lead for initiatives aimed at growing the AI Institutes into a richly interconnected research community, including the Expanding AI Innovation through Capacity Building and Partnerships (ExpandAI) program and the AI Institutes Virtual Organization (AIVO). Prior to NSF, from 2008 to 2013, Jim was a Program Manager for AI at the Defense Advanced Research Projects Agency (DARPA), where he created the Mind’s Eye program and led the Computer Science Study Group. Prior to federal civil service, Jim served 20 years in the U.S. military, where he conducted use-inspired research and development in knowledge-based systems, intelligent tutoring systems, evolutionary algorithms, and discrete optimization.
Michael Littman
National Science Foundation
Michael Littman is currently serving as Division Director for Information and Intelligent Systems at the National Science Foundation. The division is home to the programs and program officers that support researchers in artificial intelligence, human-centered computing, data management, and assistive technologies, as well as those exploring the impact of intelligent information systems on society. Littman is also University Professor of Computer Science at Brown University, where he studies machine learning and decision-making under uncertainty. He has earned multiple university-level awards for teaching and his research has been recognized with three best-paper awards and three influential paper awards. Littman is a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery.
Kiri Wagstaff
AAAS Congressional AI Fellow
Kiri L. Wagstaff is a machine learning researcher, educator, and AAAI Fellow. She is currently serving a one-year term as a U.S. Congressional Fellow in Artificial Intelligence, sponsored by the American Association for the Advancement of Science (AAAS). As a Principal Researcher at NASA’s Jet Propulsion Laboratory, she specialized in developing machine learning methods for use onboard spacecraft and in data archives for planetary science, astronomy, cosmology, and more. She also investigates how we can understand and trust machine learning systems. She co-founded the Symposium on Educational Advances in Artificial Intelligence (EAAI) and teaches graduate machine learning courses at Oregon State University. She earned a Ph.D. in Computer Science from Cornell University followed by an M.S. in Geological Sciences and a Master of Library and Information Science (MLIS). Her work has been recognized by two NASA Exceptional Technology Achievement Medals. She is passionate about keeping machine learning relevant to real-world problems.
Toby Walsh
University of New South Wales
Toby Walsh is Chief Scientist of UNSW.AI, UNSW’s new AI Institute. He is a strong advocate for limits to ensure AI is used to improve our lives, having spoken at the UN and to heads of state, parliamentary bodies, company boards and many others on this topic. This advocacy has led to him being “banned indefinitely” from Russia. He is a Fellow of the Australian Academy of Science and was named on the international “Who’s Who in AI” list of influencers. He has written four books on AI for a general audience, the most recent being “Faking It! Artificial Intelligence in A Human World”.
Special Session: Envisioning Open Research Resources for Artificial Intelligence in the US
Friday, February 23 – 1:00 PM – 2:00 PM (Bring your own lunch to the session)
Location: Room 220
Presenters: Yolanda Gil (USC), Shantenu Jha (Rutgers), Michael Littman (NSF), Cornelia Caragea (NSF)
The US National Artificial Intelligence Research Resource (NAIRR) Task Force published a report laying out a roadmap for shared research infrastructure that would provide AI researchers and students with significantly expanded access to computational resources, high-quality data, educational tools, and user support. While the National Science Foundation (NSF) has funded significant cyberinfrastructure efforts for research in different scientific disciplines, the use of national cyberinfrastructure is not very common in the AI research community. As AI becomes more experimental and important research breakthroughs require large-scale computation, the availability of advanced cyberinfrastructure for AI research is paramount to new AI innovations. We invite the AI community to share ideas on how AI researchers currently access the infrastructure necessary for experimental work, desiderata for resources and infrastructure for AI research, and the resource requirements that may be unique to AI as a discipline. The discussion will inform an upcoming NSF workshop on this topic as well as other planning activities for NAIRR.
Implications of LLMs
Saturday, February 24 – 4:30 PM – 6:00 PM
Location: Ballroom AB
Moderator: Kevin Leyton-Brown, University of British Columbia
Kevin Leyton-Brown is a professor of Computer Science and a Distinguished University Scholar at the University of British Columbia. He also holds a Canada CIFAR AI Chair at the Alberta Machine Intelligence Institute and is an associate member of the Vancouver School of Economics. He received a Ph.D. and an M.Sc. from Stanford University (2003 and 2001, respectively) and a B.Sc. from McMaster University (1998). He studies artificial intelligence, mostly at the intersection of machine learning and either the design and operation of electronic markets or the design of heuristic algorithms. He is increasingly interested in large language models, particularly as components of agent architectures. He is passionate about leveraging AI to benefit underserved communities, particularly in the developing world.
Panelists: Christopher Manning, Subbarao Kambhampati, Sheila McIlraith, and Charles Sutton
Christopher Manning
Stanford University
Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). His research is on computers that can intelligently process, understand, and generate human language. Chris is the most-cited researcher within NLP, with best paper awards at the ACL, Coling, EMNLP, and CHI conferences and an ACL Test of Time award for his pioneering work on applying neural network or deep learning approaches to human language understanding. He founded the Stanford NLP group, has written widely used NLP textbooks, and teaches the popular NLP class CS224N, which is also available online.
Subbarao Kambhampati
Arizona State University
Subbarao Kambhampati is a professor of computer science at Arizona State University. Kambhampati studies fundamental problems in planning and decision making, and has recently been interested in the role generative AI systems can play there (a topic on which he is also delivering a tutorial at AAAI-24). His research group also studies the challenges of human-aware AI systems. He is a fellow of AAAI, AAAS and ACM. He served as president of AAAI, was a trustee of IJCAI, and was a founding board member of the Partnership on AI. He is the current chair of AAAS Section T (Information, Communication and Computation). Kambhampati’s research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. He can be followed on Twitter @rao2z.
Sheila McIlraith
University of Toronto
Sheila McIlraith is a Professor in the Department of Computer Science, University of Toronto, a Canada CIFAR AI Chair (Vector Institute for Artificial Intelligence), and Associate Director and Research Lead of the Schwartz Reisman Institute for Technology and Society. Prior to joining U of T, McIlraith spent six years as a Research Scientist at Stanford University and one year at Xerox PARC. McIlraith is the author of over 100 scholarly publications in the areas of knowledge representation, automated reasoning, and machine learning. Her work focuses on AI sequential decision making, broadly construed, through the lens of human-compatible AI.
McIlraith is a fellow of the ACM, a fellow of the Association for the Advancement of Artificial Intelligence (AAAI), and a past President of KR Inc., the international scientific foundation concerned with fostering research and communication on knowledge representation and reasoning. She is currently serving on the Standing Committee of the Stanford One Hundred Year Study on Artificial Intelligence (AI100). McIlraith is an associate editor of the Journal of Artificial Intelligence Research (JAIR), a past associate editor of the journal Artificial Intelligence (AIJ), and a past board member of AI Magazine. In 2018, McIlraith served as program co-chair of the 32nd AAAI Conference on Artificial Intelligence (AAAI-18). She also served as program co-chair of the International Conference on Principles of Knowledge Representation and Reasoning (KR2012) and the International Semantic Web Conference (ISWC2004).
McIlraith’s early work on Semantic Web Services has had notable impact. In 2011, she and her co-authors were honoured with the SWSA 10-year Award, a test-of-time award recognizing the highest-impact paper from the International Semantic Web Conference 10 years prior; in 2022, McIlraith and co-authors were honoured with the ICAPS Influential Paper Award, recognizing a significant and influential paper published 10 years prior at the International Conference on Automated Planning and Scheduling; and in 2023, McIlraith and co-authors were honoured with the IJCAI-JAIR Best Paper Prize, awarded annually to an outstanding paper published in JAIR in the preceding five years.
Doug Lenat, CYC, and Future Directions in Reasoning and Knowledge Representation
Sunday, February 25 – 9:30 AM – 10:30 AM
Location: Ballroom AB
Co-organizers: Gary Marcus and Michael Witbrock
At AAAI 2024, we honour the legacy of Doug Lenat, a pioneering figure in artificial intelligence who founded Cycorp and profoundly impacted AI research by scaling both the extent and the ambition of logic-based and common-sense reasoning. This memorial and retrospective session brings together a distinguished panel of speakers who were close to Doug and the Cyc project to reflect on his contributions to AI, particularly through his work on Cyc. Panelists, including Blake Shepard, Francesca Rossi, Gary Marcus, and Michael Witbrock, will discuss Lenat’s vision for AI, his groundbreaking approaches to knowledge representation and reasoning, and the enduring influence of his ideas on current and future AI research. We’ll explore how Lenat’s work on Cyc laid foundational principles for building intelligent systems, and his advocacy for a comprehensive, inferentially powerful AI knowledge base. Attendees will gain insights into Lenat’s impact on AI’s past and present, and on its trajectory towards more sophisticated and human-like reasoning capabilities.
Francesca Rossi
IBM
Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader. She works at the IBM T.J. Watson Research Lab in New York.
Her research interests focus on artificial intelligence; specifically, they include constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behaviour of AI systems, in particular for decision support systems for group decision making. She has published over 200 scientific articles in journals and conference proceedings, and as book chapters. She has co-authored a book and edited 17 volumes, including conference proceedings, collections of contributions, special issues of journals, and a handbook.
She is a fellow of both the worldwide AI association (AAAI) and the European one (EurAI). She has been president of IJCAI (the International Joint Conference on AI), an executive councillor of AAAI, and the Editor in Chief of the Journal of AI Research. She is a member of the scientific advisory board of the Future of Life Institute (Cambridge, USA) and a deputy director of the Leverhulme Centre for the Future of Intelligence (Cambridge, UK). She is on the executive committee of the IEEE global initiative on ethical considerations in the development of autonomous and intelligent systems, and she is a member of the board of directors of the Partnership on AI, where she represents IBM as one of the founding partners.
She has been a member of the European Commission High-Level Expert Group on AI and was the general chair of the AAAI 2020 conference. She is a member of the Responsible AI working group of the Global Partnership on AI and the industry representative on its Steering Committee. She has been AAAI President since 2022.
Blake Shepard
Cycorp
Blake Shepard is the Director of Ontological Engineering at Cycorp. Over the course of his 25-year career at Cycorp, he has directed a wide range of extensions of the Cyc platform for commercial and government applications, and he has published numerous articles on Cyc. Some areas in which he has led Cyc platform development include integrating Cyc with LLMs to automatically expand the Cyc knowledge base and to validate LLM output with Cyc reasoning, abductive planning for embodied AI, abductive reasoning and scenario generation for terrorist threat anticipation, computer network risk assessment, decision support for space launch facilities, learning-by-teaching, simulation of realistic emotional engagement with fictional characters in rich fictional universes, and root cause anomaly understanding for complex systems, including deep-sea and unconventional oil wells. He holds a Ph.D. in Philosophy from The University of Texas at Austin.
Michael Witbrock
The University of Auckland
Michael Witbrock is a Computer Science professor at The University of Auckland, leading its Broad AI Lab. With a PhD from Carnegie Mellon University and a rich background in AI research and development, he worked alongside Doug Lenat at Cycorp for 15 years, serving as Vice President of Research. His work focuses on blending formal logic with machine learning to create intelligent systems. A passionate advocate for AI’s positive impact, Witbrock’s contributions span academia and industry, aiming to advance the field toward more human-like reasoning and social good.