AAAI-22 is pleased to present the following series of distinguished speakers:
AAAI Presidential Address: Bart Selman
AAAI Squirrel AI Award Lecture: Cynthia Rudin
IAAI Robert S. Engelmore Memorial Award Lecture: Andrew Ng
AAAI/EAAI Outstanding Educator Award Lecture: AI4K12 Team (David Touretzky, Christina Gardner-McCune, Fred Martin, and Deborah Seehorn)
AAAI-22 Invited Talk: Gil Alterovitz
AAAI-22 Invited Talk: Marta Kwiatkowska
AAAI-22 Invited Talk: Michael Littman
AAAI-22 Invited Talk: Francesca Rossi
AAAI-22 Invited Talk: Patrick Schnable
AAAI-22 Invited Talk: Funding Panel
AAAI-22 Invited Talk: AI Institutes Panel

Bart Selman
Cornell University
AAAI 2022 Presidential Address
Title: The State of AI

Cynthia Rudin
Duke University
AAAI 2022 Invited Talk
Recipient of the 2022 AAAI Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity
Title: Interpretable Machine Learning: Bringing Data Science Out of the “Dark Age”
With widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions in criminal justice, healthcare, financial lending, and beyond. Interpretability of machine learning models is critical when the cost of a wrong decision is high. Throughout my career, I have had the opportunity to work with power engineers, doctors, and police detectives. Using interpretable models has been the key to allowing me to help them with important high-stakes societal problems. Interpretability can bring us out of the “dark” age of the black box into the age of insight and enlightenment.
Cynthia Rudin is a professor of computer science and engineering at Duke University, and directs the Interpretable Machine Learning Lab. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds degrees from the University at Buffalo and Princeton. She is the recipient of the 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from AAAI. She is also a three-time winner of the INFORMS Innovative Applications in Analytics Award, and a fellow of the American Statistical Association and the Institute of Mathematical Statistics. Her goal is to design predictive models that people can understand.

Andrew Ng
DeepLearning.AI, Landing AI, and Coursera
IAAI Robert S. Engelmore Memorial Award Lecture
Title: Data-Centric AI
Data-Centric AI (DCAI) represents the recent transition from focusing on modeling to the underlying data used to train and evaluate models. Increasingly, common model architectures have begun to dominate a wide range of tasks, and predictable scaling rules have emerged. While building and using datasets has been critical to these successes, the endeavor is often artisanal — painstaking and expensive. The community lacks high productivity and efficient open data engineering tools to make building, maintaining, and evaluating datasets easier, cheaper, and more repeatable. In the presentation, Dr. Ng will talk about what DCAI is, the challenges, and tips for using the DCAI approach.
Andrew Ng is Founder of DeepLearning.AI, Founder and CEO of Landing AI, Chairman and Co-Founder of Coursera, and an Adjunct Professor at Stanford University.
As a pioneer both in machine learning and online education, Dr. Ng has changed countless lives through his work in AI, authoring or co-authoring over 200 research papers in machine learning, robotics, and related fields. Previously, he was chief scientist at Baidu, the founding lead of the Google Brain team, and the co-founder of Coursera – the world’s largest MOOC platform. Dr. Ng now focuses his time primarily on his entrepreneurial ventures, looking for the best ways to accelerate responsible AI practices in the larger global economy. Follow Dr. Ng on Twitter (@AndrewYNg) and LinkedIn.

Gil Alterovitz
U.S. Department of Veterans Affairs
AAAI-22 Invited Talk
Title: Toward an AI Network for Trustworthy AI
Trustworthy AI is being enabled by new federal strategies, policies, and collaborative possibilities – bringing to bear a new paradigm. AI also brings the promise of better and more efficient care, especially for our nation’s Veterans. This session will share several use cases from the work of the National Artificial Intelligence Institute (NAII) at the VA and collaborations, including pilots that empower Veterans to take control of medication adherence, physicians to evaluate COVID-19-associated prognosis and needs, and VHA staff to triage text-based input to quickly identify and assist Veterans in crisis. Establishing an AI network to scale up such research and development will also be covered.
Dr. Gil Alterovitz is the inaugural Director of the National Artificial Intelligence Institute at the U.S. Department of Veterans Affairs. He focuses on leveraging health information for AI as well as building AI research and development capacity to help our nation’s Veterans through the Office of Research and Development. He is also a faculty member with Harvard Medical School.
Dr. Alterovitz was one of the core authors of the White House Office of Science and Technology Policy’s National AI R&D Strategic Plan from 2019. He has also spearheaded the “AI-able Data Ecosystem” pilot, creating a new approach for public-private collaborations with personnel/resources across a dozen agencies and working with companies internationally.

Marta Kwiatkowska
University of Oxford
AAAI-22 Invited Talk
Title: Safety and Robustness for Deep Learning with Provable Guarantees
Computing systems are becoming ever more complex, with automated decisions increasingly often based on deep learning components. A wide variety of applications are being developed, many of them safety-critical, such as self-driving cars and medical diagnosis. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning components. This lecture will describe progress with developing automated certification techniques for learnt software components to ensure safety and adversarial robustness of their decisions, including discussion of the role played by Bayesian learning and causality.
Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. She is known for fundamental contributions to the theory and practice of model checking for probabilistic systems, and is currently focusing on safety, robustness and fairness of automated decision making in Artificial Intelligence. She led the development of the PRISM model checker (www.prismmodelchecker.org), which has been adopted in diverse fields, including wireless networks, security, robotics, healthcare and DNA computing, with genuine flaws found and corrected in real-world protocols. Her research has been supported by two ERC Advanced Grants, VERIWARE and FUN2MODEL, the EPSRC Programme Grant on Mobile Autonomy and the EPSRC Prosperity Partnership FAIR. Kwiatkowska won the Royal Society Milner Award, the BCS Lovelace Medal and the Van Wijngaarden Award, and received an honorary doctorate from KTH Royal Institute of Technology in Stockholm. She is a Fellow of the Royal Society, Fellow of ACM and Member of Academia Europaea.

Michael Littman
Brown University
AAAI 2022 Invited Talk
Title: Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report
The One Hundred Year Study on Artificial Intelligence (AI100) is a long-term investigation of the field of AI and its influences on people, their communities, and society. With oversight from a standing committee administered out of Stanford University, the AI100 convenes a study panel every 5 years with the goal of issuing a report accessible to AI researchers, policy makers, industry leaders, and the public at large. The reports describe the technical and societal challenges and opportunities that have arisen since the previous report and envision potential future advances. This talk will summarize the 2nd report, issued September 2021.

Francesca Rossi
IBM Research
AAAI 2022 Invited Talk
Title: Thinking Fast and Slow in AI

Patrick S. Schnable
Iowa State University
AAAI-22 Invited Talk
Title: Advancing Agricultural Genome to Phenome Research
Our goal is to develop statistical models that will predict crop performance in diverse environments. Crop phenotypes such as yield and drought tolerance are controlled by genotype, environment, and their interactions. The necessary volumes of phenotypic data, however, remain scarce, and our understanding of how genotypes interact with environments is limited. To address these gaps, we are building new sensors and robots to automatically collect large volumes of phenotypic data.
Pat Schnable is a distinguished professor at Iowa State University where he holds an endowed chair and directs the Plant Sciences Institute that is fostering collaborations among plant scientists, engineers, and data scientists. Schnable received his BS in Agronomy from Cornell University and his PhD in Plant Breeding and Cytogenetics from Iowa State University; he conducted post-doctoral research in Molecular Genetics at the Max Planck Institute in Köln, Germany.
Schnable’s wide-ranging investigations of the maize genome have resulted in over 200 peer-reviewed publications, an h-index of 77, and over 24,000 citations. He is a fellow of the American Association for the Advancement of Science, co-lead of the Genomes to Fields Initiative, the PI of the Agricultural Genomes to Phenomes Initiative (AG2PI), a past chair of the American Society of Plant Biology’s Science Policy Committee, and a past chair of the Maize Genetics Executive Committee.
Schnable is also a serial entrepreneur and serves on the scientific advisory boards of several ag-tech companies.
David Touretzky, Christina Gardner-McCune, Fred Martin, and Deborah Seehorn
AAAI/EAAI Outstanding Educator Award
Title: You Know AI Has Arrived When We’re Teaching It In Elementary School
In mid-2018 we launched the AI4K12 initiative (AI4K12.org) to develop national guidelines for teaching AI in K-12. The AI4K12 Working Group produced a list of “Five Big Ideas in AI” that has influenced views world-wide about what students should know about AI. We are now releasing detailed grade band progression charts for each big idea as guidance for curriculum developers. AI4K12.org is also working with education departments in 15 states and 2 US territories to help them incorporate AI into their K-12 curriculum standards. In this talk we’ll share what AI looks like in elementary school and beyond.

David S. Touretzky
Carnegie Mellon University
David S. Touretzky is a Research Professor in the Computer Science Department and Neuroscience Institute at Carnegie Mellon University. He is the founder and chair of the AI4K12 Initiative (AI4K12.org). Touretzky’s research interests over his 40+ year career have included knowledge representation, connectionist modeling, computational neuroscience (specifically, spatial representations in the rodent brain), cognitive robotics, and CS and AI education. He is a Senior Member of AAAI, a Fellow of the American Association for the Advancement of Science, and was named a Distinguished Scientist by the Association for Computing Machinery.

Christina Gardner-McCune
University of Florida
Christina Gardner-McCune is an Associate Professor in the Computer & Information Science & Engineering Department at the University of Florida’s Herbert Wertheim College of Engineering. Dr. Gardner-McCune is the co-chair of the AI for K-12 Initiative (AI4K12.org) and Director of the Engaging Learning Lab. As Director of the Engaging Learning Lab research group at UF, Gardner-McCune and her students research and develop engaging hands-on learning experiences for K-12 students and teachers in the areas of artificial intelligence, cybersecurity, robotics, mobile app development, game design, and introductory programming.

Fred Martin
University of Massachusetts Lowell
Fred Martin is Associate Dean for Teaching, Learning and Undergraduate Studies at the University of Massachusetts Lowell’s Kennedy College of Sciences and Professor in its Computer Science Department. Martin’s research team develops and studies new computational technologies for STEM teaching and learning, including learnmyr.org, a virtual reality programming environment, and isenseproject.org, a cloud-based collaborative data visualization platform. He served on the Board of Directors of the Computer Science Teachers Association from 2014–2020, including as chair from 2018–2019.

Deborah Seehorn
Deborah Seehorn served as the chair of the 2011 CSTA Computer Science Standards Task Force and as Co-Chair of the 2016 CSTA Computer Science Standards Task Force. After more than 40 years in K-12 education, she retired in 2015 from the North Carolina Department of Public Instruction. Deborah has presented about CS education at multiple conferences, including CSTA. She currently serves as the North Carolina state lead for ECEP (Expanding Computer Education Pathways).
AAAI-22 Invited Talk: Funding Panel
The panelists will present the vision and AI funding priorities of their respective organizations and answer questions from the audience.
Moderator: Katia Sycara, General Chair, AAAI-2022
Panelists:
- Henry Kautz, Division Director, Information and Intelligent Systems Division, National Science Foundation
- Doug Riecken, Program Officer, Machine Learning, Air Force Office of Scientific Research
- Bo Xu, President, Institute of Automation, Chinese Academy of Sciences, and Associate Director, Center for Excellence in Brain Science & Intelligent Technologies
- Marc Steinberg, Program Manager, Robotics and Human-Robot Interaction Program, Office of Naval Research
- Cecile Huet, Deputy Head, Robotics and Artificial Intelligence Innovation and Excellence, European Commission
- David Boothe, Neuroscientist, Center for Agent-Soldier Teaming, Army Research Lab
- Thomas Kalil, Chief Innovation Officer, Schmidt Futures
AAAI-22 Invited Talk: AI Institutes Panel
The U.S. National Science Foundation established the Artificial Intelligence Research Institutes awards in 2020. These Institutes represent the nation’s most significant federal investment in AI research and education to date. The panelists will present their Institutes’ research visions and results.
Moderator: Vasant Honavar, Technical Program Co-Chair, AAAI-2022
NSF AI Institutes Awards from 2020
- Student-AI Teaming: Sydney D’Mello
- Molecular Discovery, Synthetic Strategy, and Manufacturing: Huimin Zhao
- Food Systems: Ilias Tagkopoulos
- Trustworthy AI in Weather, Climate, and Coastal Oceanography: Amy McGovern
- Artificial Intelligence and Fundamental Interactions: Jesse Thaler
- Future Agricultural Resilience, Management and Sustainability: Vikram Adve
- Foundations of Machine Learning: Adam Klivans