AAAI-22 is pleased to present the following series of distinguished speakers:
AAAI Presidential Address: Bart Selman
AAAI Squirrel AI Award Lecture: Cynthia Rudin
IAAI Robert S. Engelmore Memorial Award Lecture: Andrew Ng
AAAI/EAAI Outstanding Educator Award Lecture: AI4K12 Team (David Touretzky, Christina Gardner-McCune, Fred Martin, and Deborah Seehorn)
AAAI-22 Invited Talk: Gil Alterovitz
AAAI-22 Invited Talk: Marta Kwiatkowska
AAAI-22 Invited Talk: Michael Littman
AAAI-22 Invited Talk: Francesca Rossi
AAAI-22 Invited Talk: Patrick Schnable
AAAI-22 Invited Talk: Funding Panel
AAAI-22 Invited Talk: AI Institutes Panel
AAAI 2022 Presidential Address
Title: The State of AI
We are witnessing a highly accelerated phase of progress in AI, largely due to the deep learning revolution. This revolution is also reunifying our field, with researchers building bridges across research areas such as computer vision, natural language understanding, and decision making. Through this work, we see significant progress on the big questions that have challenged our field since its inception. I will review the current state of AI and outline challenges that need to be addressed to develop genuinely robust and reliable AI systems. I postulate that the next level of AI will require integrating the highly successful data-driven paradigm with a knowledge-driven approach, coupled with human feedback, to produce human-aligned intelligent systems.
Bart Selman is the Joseph C. Ford Professor of Engineering and Computer Science at Cornell University. Prof. Selman is President of the Association for the Advancement of Artificial Intelligence (AAAI), the main international professional society for AI researchers and practitioners. He was also co-chair of a national study that developed a 20-year Roadmap for AI research to guide US government research investments in AI; the Roadmap incorporates input from over 100 leading AI researchers. Prof. Selman was previously at AT&T Bell Laboratories. His research interests include artificial intelligence, computational sustainability, efficient reasoning procedures, machine learning, deep learning, deep reinforcement learning, planning, knowledge representation, and connections between computer science and statistical physics. He has (co-)authored over 150 publications, including six best paper awards and two classic paper awards. His papers have appeared in venues spanning Nature, Science, Proc. Natl. Acad. Sci., and a variety of conferences and journals in AI and computer science.
He is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Fellow of the American Association for the Advancement of Science (AAAS), and a Fellow of the Association for Computing Machinery (ACM). He is the recipient of the inaugural IJCAI John McCarthy Research Award.
AAAI 2022 Invited Talk
Recipient of the 2022 AAAI Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity
Title: Interpretable Machine Learning: Bringing Data Science Out of the “Dark Age”
With widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions in criminal justice, healthcare, financial lending, and beyond. Interpretability of machine learning models is critical when the cost of a wrong decision is high. Throughout my career, I have had the opportunity to work with power engineers, doctors, and police detectives. Using interpretable models has been the key to allowing me to help them with important high-stakes societal problems. Interpretability can bring us out of the “dark” age of the black box into the age of insight and enlightenment.
Cynthia Rudin is a professor of computer science and engineering at Duke University and directs the Interpretable Machine Learning Lab. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds degrees from the University at Buffalo and Princeton. She received the 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from AAAI. She is also a three-time winner of the INFORMS Innovative Applications in Analytics Award and a fellow of the American Statistical Association and the Institute of Mathematical Statistics. Her goal is to design predictive models that people can understand.
DeepLearning.AI, Landing AI, and Coursera
IAAI-22 Robert S. Engelmore Memorial Lecture Award
Title: Data-Centric AI
Data-Centric AI (DCAI) represents the recent transition from focusing on modeling to focusing on the underlying data used to train and evaluate models. Increasingly, common model architectures have begun to dominate a wide range of tasks, and predictable scaling rules have emerged. While building and using datasets has been critical to these successes, the endeavor is often artisanal — painstaking and expensive. The community lacks high-productivity, efficient open data engineering tools to make building, maintaining, and evaluating datasets easier, cheaper, and more repeatable. In this presentation, Dr. Ng will discuss what DCAI is, its challenges, and tips for applying the DCAI approach.
Andrew Ng is Founder of DeepLearning.AI, Founder and CEO of Landing AI, Chairman and Co-Founder of Coursera, and an Adjunct Professor at Stanford University.
As a pioneer in both machine learning and online education, Dr. Ng has changed countless lives through his work in AI, authoring or co-authoring over 200 research papers in machine learning, robotics, and related fields. Previously, he was chief scientist at Baidu, the founding lead of the Google Brain team, and the co-founder of Coursera, the world’s largest MOOC platform. Dr. Ng now focuses his time primarily on his entrepreneurial ventures, looking for the best ways to accelerate responsible AI practices in the larger global economy. Follow Dr. Ng on Twitter (@AndrewYNg) and LinkedIn.
U.S. Department of Veterans Affairs
AAAI-22 Invited Talk
Title: Toward an AI Network for Trustworthy AI
Trustworthy AI is being enabled by new federal strategies, policies, and collaborative possibilities – bringing to bear a new paradigm. AI also brings the promise of better and more efficient care, especially for our nation’s Veterans. This session will share several use cases from the work of the National Artificial Intelligence Institute (NAII) at the VA and collaborations, including pilots that empower Veterans to take control of medication adherence, physicians to evaluate COVID-19-associated prognosis and needs, and VHA staff to triage text-based input to quickly identify and assist Veterans in crisis. Establishing an AI network to scale up such research and development will also be covered.
Bio: Dr. Gil Alterovitz is the inaugural Director of the National Artificial Intelligence Institute at the U.S. Department of Veterans Affairs. He is focusing on leveraging health information for AI as well as building AI research and development capacity to help our nation’s Veterans through the Office of Research and Development. He is also a faculty member of Harvard Medical School.
Dr. Alterovitz was one of the core authors of the White House Office of Science and Technology Policy’s National AI R&D Strategic Plan from 2019. He has also spearheaded the “AI-able Data Ecosystem” pilot, creating a new approach for public-private collaborations with personnel and resources across a dozen agencies, and working with companies internationally.
University of Oxford
AAAI-22 Invited Talk
Title: Safety and Robustness for Deep Learning with Provable Guarantees
Computing systems are becoming ever more complex, with automated decisions increasingly often based on deep learning components. A wide variety of applications are being developed, many of them safety-critical, such as self-driving cars and medical diagnosis. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning components. This lecture will describe progress in developing automated certification techniques for learnt software components to ensure the safety and adversarial robustness of their decisions, including a discussion of the role played by Bayesian learning and causality.
Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. She is known for fundamental contributions to the theory and practice of model checking for probabilistic systems and is currently focusing on safety, robustness, and fairness of automated decision-making in Artificial Intelligence. She led the development of the PRISM model checker (www.prismmodelchecker.org), which has been adopted in diverse fields, including wireless networks, security, robotics, healthcare, and DNA computing, with genuine flaws found and corrected in real-world protocols. Her research has been supported by two ERC Advanced Grants, VERIWARE and FUN2MODEL, the EPSRC Programme Grant on Mobile Autonomy, and the EPSRC Prosperity Partnership FAIR. Kwiatkowska won the Royal Society Milner Award, the BCS Lovelace Medal, and the Van Wijngaarden Award and received an honorary doctorate from KTH Royal Institute of Technology in Stockholm. She is a Fellow of the Royal Society, a Fellow of ACM, and a Member of Academia Europaea.
AAAI 2022 Invited Talk
Title: Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report
The One Hundred Year Study on Artificial Intelligence (AI100) is a long-term investigation of the field of AI and its influences on people, their communities, and society. With oversight from a standing committee administered out of Stanford University, the AI100 convenes a study panel every five years with the goal of issuing a report accessible to AI researchers, policymakers, industry leaders, and the public at large. The reports describe the technical and societal challenges and opportunities that have arisen since the previous report and envision potential future advances. This talk will summarize the second report, issued in September 2021.
Michael L. Littman is the Royce Family Professor of Teaching Excellence in Computer Science at Brown University, currently on leave at Georgia Tech. He studies machine learning and decision making under uncertainty. He has earned multiple awards for teaching, and his research has been recognized with four best-paper awards and three influential paper awards for his work on reinforcement learning, probabilistic planning, and automated crossword puzzle solving. Littman was program chair of AAAI in 2013. He is co-director of Brown’s Humanity Centered Robotics Initiative and a Fellow of both AAAI and the ACM. He chaired the 2021 AI100 report.
AAAI 2022 Invited Talk
Title: Thinking Fast and Slow in AI
Current AI systems lack several important human capabilities, such as adaptability, generalizability, self-control, consistency, common sense, and causal reasoning. We believe that existing cognitive theories of human decision-making, such as the “thinking fast and slow” theory, can provide insights on how to advance AI systems toward some of these capabilities. In this talk, I will describe a general architecture based on fast/slow solvers and a metacognitive component. I will then present experimental results on the behavior of an instance of this architecture for AI systems that make decisions about navigating in a constrained environment. I will show how combining the fast and slow decision modalities allows this system to evolve over time, gradually passing from slow to fast thinking with enough experience, and how this greatly improves decision quality, resource consumption, and efficiency.
Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader. In this role, she leads research projects to advance AI capabilities, and she co-chairs the IBM AI Ethics board. Her research interests span various areas of AI, from constraints to preferences to graphical models to neuro-symbolic AI. Prior to joining IBM, she was a professor of computer science at the University of Padova, Italy. Francesca is a fellow of both AAAI and EurAI, and she will be the next president of AAAI.
Patrick S. Schnable
Iowa State University
AAAI-22 Invited Talk
Title: Advancing Agricultural Genome to Phenome Research
Our goal is to develop statistical models that will predict crop performance in diverse environments. Crop phenotypes such as yield and drought tolerance are controlled by genotype, environment, and their interactions. The necessary volumes of phenotypic data, however, remain scarce, and our understanding of how genotypes interact with environments is incomplete. To address these limitations, we are building new sensors and robots to automatically collect large volumes of phenotypic data.
Pat Schnable is a distinguished professor at Iowa State University, where he holds an endowed chair and directs the Plant Sciences Institute, which fosters collaborations among plant scientists, engineers, and data scientists. Schnable received his BS in Agronomy from Cornell University and his Ph.D. in Plant Breeding and Cytogenetics from Iowa State University; he conducted post-doctoral research in Molecular Genetics at the Max Planck Institute in Köln, Germany.
Schnable’s wide-ranging investigations of the maize genome have resulted in over 200 peer-reviewed publications, an h-index of 77, and over 24,000 citations. He is a fellow of the American Association for the Advancement of Science, co-lead of the Genomes to Fields Initiative, the PI of the Agricultural Genomes to Phenomes Initiative (AG2PI), a past chair of the American Society of Plant Biology’s Science Policy Committee, and a past chair of the Maize Genetics Executive Committee.
Schnable is also a serial entrepreneur and serves on the scientific advisory boards of several ag-tech companies.
David Touretzky, Christina Gardner-McCune, Fred Martin, and Deborah Seehorn
AAAI/EAAI Outstanding Educator Award
Title: You Know AI Has Arrived When We’re Teaching It In Elementary School
In mid-2018, we launched the AI4K12 initiative (AI4K12.org) to develop national guidelines for teaching AI in K-12. The AI4K12 Working Group produced a list of “Five Big Ideas in AI” that has influenced views worldwide about what students should know about AI. We are now releasing detailed grade band progression charts for each big idea as guidance for curriculum developers. AI4K12.org is also working with education departments in 15 states and 2 US territories to help them incorporate AI into their K-12 curriculum standards. In this talk, we’ll share what AI looks like in elementary school and beyond.
David S. Touretzky
Carnegie Mellon University
David S. Touretzky is a Research Professor in the Computer Science Department and Neuroscience Institute at Carnegie Mellon University. He is the founder and chair of the AI4K12 Initiative (AI4K12.org). Touretzky’s research interests over his 40+ year career have included knowledge representation, connectionist modeling, computational neuroscience (specifically, spatial representations in the rodent brain), cognitive robotics, and CS and AI education. He is a Senior Member of AAAI, a Fellow of the American Association for the Advancement of Science, and was named a Distinguished Scientist by the Association for Computing Machinery.
University of Florida
Christina Gardner-McCune is an Associate Professor in the Computer & Information Science & Engineering Department at the University of Florida’s Herbert Wertheim College of Engineering. Dr. Gardner-McCune is the co-chair of the AI for K-12 Initiative (AI4K12.org) and Director of the Engaging Learning Lab. With her students in the Engaging Learning Lab research group at UF, Gardner-McCune researches and develops engaging hands-on learning experiences for K-12 students and teachers in the areas of artificial intelligence, cybersecurity, robotics, mobile app development, game design, and introductory programming.
University of Massachusetts Lowell
Fred Martin is Associate Dean for Teaching, Learning, and Undergraduate Studies at the University of Massachusetts Lowell’s Kennedy College of Sciences and Professor in its Computer Science Department. Martin’s research team develops and studies new computational technologies for STEM teaching and learning, including learnmyr.org, a virtual reality programming environment, and isenseproject.org, a cloud-based collaborative data visualization platform. He served on the Board of Directors of the Computer Science Teachers Association from 2014–2020, including as chair from 2018–2019.
Deborah Seehorn served as the chair of the 2011 CSTA Computer Science Standards Task Force and as Co-Chair of the 2016 CSTA Computer Science Standards Task Force. After more than 40 years in K-12 education, she retired in 2015 from the North Carolina Department of Public Instruction. Deborah has presented about CS education at multiple conferences, including CSTA. She currently serves as the North Carolina state lead for ECEP (Expanding Computer Education Pathways).
AAAI-22 Invited Talk: Funding Panel
The panelists will present the vision and AI funding priorities of their respective organizations and answer questions from the audience.
Moderator: Katia Sycara, General Chair, AAAI-2022
Panelists: Henry Kautz, Division Director, Information and Intelligent Systems Division, National Science Foundation; Doug Riecken, Program Officer, Machine Learning, Air Force Office of Scientific Research; Bo Xu, President, Institute of Automation, Chinese Academy of Sciences and Associate Director, Center for Excellence in Brain Science & Intelligent Technologies; Marc Steinberg, Program Manager, Robotics and Human-Robot Interaction Program, Office of Naval Research; Cecile Huet, Deputy Head, Robotics and Artificial Intelligence Innovation and Excellence, European Commission; David Boothe, Neuroscientist, Center for Agent-Soldier Teaming, Army Research Lab; Thomas Kalil, Chief Innovation Officer, Schmidt Futures
AAAI-22 Invited Talk: AI Institutes Panel
In 2020, the U.S. National Science Foundation established the Artificial Intelligence Research Institutes awards. These Institutes represent the nation’s most significant federal investment in AI research and education to date. The panelists will present their Institutes’ research visions and results.
Moderator: Vasant Honavar, Technical Program Co-Chair, AAAI-2022
NSF AI Institutes Awards from 2020
- Student-AI Teaming: Sydney D’Mello
- Molecular Discovery, Synthetic Strategy, and Manufacturing: Huimin Zhao
- Food Systems: Ilias Tagkopoulos
- Trustworthy AI in Weather, Climate, and Coastal Oceanography: Amy McGovern
- Artificial Intelligence and Fundamental Interactions: Jesse Thaler
- Future Agricultural Resilience, Management, and Sustainability: Vikram Adve
- Foundations of Machine Learning: Adam Klivans