AAAI-22 / IAAI-22 / EAAI-22 Invited Speaker Program

February 24-27, 2022

AAAI-22 is pleased to present the following series of distinguished speakers:

AAAI Presidential Address: Bart Selman
AAAI Squirrel AI Award Lecture: Cynthia Rudin
IAAI Robert S. Engelmore Memorial Award Lecture: Andrew Ng
AAAI/EAAI Outstanding Educator Award Lecture: AI4K12 Team (David Touretzky, Christina Gardner-McCune, Fred Martin, and Deborah Seehorn)
AAAI-22 Invited Talk: Gil Alterovitz
AAAI-22 Invited Talk: Marta Kwiatkowska
AAAI-22 Invited Talk: Francesca Rossi
AAAI-22 Invited Talk: Patrick Schnable


 

Cynthia Rudin

Duke University

AAAI-22 Invited Talk

Recipient of the 2022 AAAI Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity

Title: Interpretable Machine Learning: Bringing Data Science Out of the “Dark Age”

With the widespread use of machine learning, there have been serious societal consequences from using black-box models for high-stakes decisions in criminal justice, healthcare, financial lending, and beyond. Interpretability of machine learning models is critical when the cost of a wrong decision is high. Throughout my career, I have had the opportunity to work with power engineers, doctors, and police detectives. Using interpretable models has been key to helping them with important high-stakes societal problems. Interpretability can bring us out of the “dark” age of the black box into the age of insight and enlightenment.

Cynthia Rudin is a professor of computer science and engineering at Duke University, and directs the Interpretable Machine Learning Lab. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds degrees from the University at Buffalo and Princeton. She is the recipient of the 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from AAAI. She is also a three-time winner of the INFORMS Innovative Applications in Analytics Award, and a fellow of the American Statistical Association and the Institute of Mathematical Statistics. Her goal is to design predictive models that people can understand.

Andrew Ng

DeepLearning.AI, Landing AI, and Coursera

IAAI-22 Robert S. Engelmore Memorial Award Lecture

Title: The Data-Centric AI

Data-Centric AI (DCAI) represents the recent shift in focus from modeling to the underlying data used to train and evaluate models. Increasingly, common model architectures have begun to dominate a wide range of tasks, and predictable scaling rules have emerged. While building and using datasets has been critical to these successes, the endeavor is often artisanal: painstaking and expensive. The community lacks high-productivity, efficient open data-engineering tools to make building, maintaining, and evaluating datasets easier, cheaper, and more repeatable. In this presentation, Dr. Ng will discuss what DCAI is, its challenges, and tips for applying the DCAI approach.
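
The sketch below is an illustration of the general idea only, not material from the talk: a simple model is held fixed while the iteration effort goes into auditing labels. The dataset, the scikit-learn classifier, the binary 0/1 labels, and the suspicion threshold are all assumptions made for demonstration.

    # Minimal, hypothetical sketch of one data-centric iteration: the model stays
    # fixed and the effort goes into finding examples whose labels look suspect.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict

    def flag_suspect_labels(X, y, suspicion_threshold=0.9):
        """Return indices of examples whose labels disagree with out-of-fold predictions."""
        y = np.asarray(y)                                  # assumes binary labels 0/1
        model = LogisticRegression(max_iter=1000)          # architecture stays fixed
        proba = cross_val_predict(model, X, y, cv=5, method="predict_proba")
        confidence_in_other_class = proba[np.arange(len(y)), 1 - y]
        return np.where(confidence_in_other_class > suspicion_threshold)[0]

    # Usage: send flag_suspect_labels(X_train, y_train) back for re-annotation,
    # retrain, and re-measure on a fixed, high-quality evaluation set; repeat.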

Andrew Ng is Founder of DeepLearning.AI, Founder and CEO of Landing AI, Chairman and Co-Founder of Coursera, and an Adjunct Professor at Stanford University.

As a pioneer in both machine learning and online education, Dr. Ng has changed countless lives through his work in AI, authoring or co-authoring over 200 research papers in machine learning, robotics, and related fields. Previously, he was chief scientist at Baidu, the founding lead of the Google Brain team, and the co-founder of Coursera, the world’s largest MOOC platform. Dr. Ng now focuses his time primarily on his entrepreneurial ventures, looking for the best ways to accelerate responsible AI practices in the larger global economy. Follow Dr. Ng on Twitter (@AndrewYNg) and LinkedIn.

Marta Kwiatkowska

University of Oxford

AAAI-22 Invited Talk

Title: Safety and Robustness for Deep Learning with Provable Guarantees

Computing systems are becoming ever more complex, with automated decisions increasingly often based on deep learning components. A wide variety of applications are being developed, many of them safety-critical, such as self-driving cars and medical diagnosis. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning components. This lecture will describe progress with developing automated certification techniques for learnt software components to ensure safety and adversarial robustness of their decisions, including discussion of the role played by Bayesian learning and causality.
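
As one concrete illustration of what a provable guarantee can look like (a generic sketch, not Prof. Kwiatkowska's tools or methods), the example below uses interval bound propagation to certify that no perturbation within an L-infinity ball of radius eps changes a small ReLU network's predicted class; the network structure and all inputs are assumptions chosen for simplicity.

    # Generic interval-bound-propagation (IBP) sketch: certify that every input
    # within an L-infinity ball of radius eps keeps the network's predicted class.
    import numpy as np

    def interval_affine(lo, hi, W, b):
        """Propagate an axis-aligned box [lo, hi] through the affine map x -> W @ x + b."""
        center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius
        return new_center - new_radius, new_center + new_radius

    def certify_robust(x, eps, layers, predicted_class):
        """Return True if the prediction provably cannot change within the eps-ball."""
        lo, hi = x - eps, x + eps
        for i, (W, b) in enumerate(layers):                # layers: list of (W, b) pairs
            lo, hi = interval_affine(lo, hi, W, b)
            if i < len(layers) - 1:                        # ReLU on hidden layers only
                lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
        others = [hi[j] for j in range(len(hi)) if j != predicted_class]
        return bool(lo[predicted_class] > max(others))     # sound but possibly conservative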

Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. She is known for fundamental contributions to the theory and practice of model checking for probabilistic systems, and is currently focusing on safety, robustness and fairness of automated decision making in Artificial Intelligence. She led the development of the PRISM model checker (www.prismmodelchecker.org), which has been adopted in diverse fields, including wireless networks, security, robotics, healthcare and DNA computing, with genuine flaws found and corrected in real-world protocols. Her research has been supported by two ERC Advanced Grants, VERIWARE and FUN2MODEL, the EPSRC Programme Grant on Mobile Autonomy and the EPSRC Prosperity Partnership FAIR. Kwiatkowska won the Royal Society Milner Award, the BCS Lovelace Medal and the Van Wijngaarden Award, and received an honorary doctorate from KTH Royal Institute of Technology in Stockholm. She is a Fellow of the Royal Society, Fellow of ACM and Member of Academia Europaea.

Patrick S. Schnable

Iowa State University

AAAI-22 Invited Talk

Title: Advancing Agricultural Genome to Phenome Research

Our goal is to develop statistical models that will predict crop performance in diverse environments. Crop phenotypes such as yield and drought tolerance are controlled by genotype, environment, and their interactions. The necessary volumes of phenotypic data, however, remain a limiting factor, and our understanding of how genotypes and environments interact is still incomplete. To address this limitation, we are building new sensors and robots to automatically collect large volumes of phenotypic data.
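
As a toy illustration of modeling genotype, environment, and their interactions (assuming a simple linear G + E + GxE formulation, not the Schnable group's actual models), the sketch below fits a regularized linear model to synthetic yield data; all variables and data here are simulated assumptions.

    # Hypothetical G + E + GxE phenotype prediction sketch on synthetic data.
    import numpy as np
    from sklearn.linear_model import RidgeCV

    rng = np.random.default_rng(0)
    n_plots, n_markers, n_env = 500, 20, 3
    G = rng.integers(0, 3, size=(n_plots, n_markers)).astype(float)   # genotype calls (0/1/2)
    E = rng.normal(size=(n_plots, n_env))                              # environmental covariates
    GE = np.einsum("ij,ik->ijk", G, E).reshape(n_plots, -1)            # pairwise GxE terms
    X = np.hstack([G, E, GE])
    y = X @ rng.normal(size=X.shape[1]) + rng.normal(scale=0.5, size=n_plots)  # synthetic yield

    model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, y)           # regularization tames GxE terms
    print("in-sample R^2:", round(model.score(X, y), 3))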

Pat Schnable is a distinguished professor at Iowa State University, where he holds an endowed chair and directs the Plant Sciences Institute, which fosters collaborations among plant scientists, engineers, and data scientists. Schnable received his BS in Agronomy from Cornell University and his PhD in Plant Breeding and Cytogenetics from Iowa State University; he conducted post-doctoral research in Molecular Genetics at the Max Planck Institute in Köln, Germany.

Schnable’s wide-ranging investigations of the maize genome have resulted in over 200 peer-reviewed publications, an h-index of 77, and over 24,000 citations. He is a fellow of the American Association for the Advancement of Science, co-lead of the Genomes to Fields Initiative, the PI of the Agricultural Genomes to Phenomes Initiative (AG2PI), a past chair of the American Society of Plant Biology’s Science Policy Committee, and a past chair of the Maize Genetics Executive Committee.

Schnable is also a serial entrepreneur and serves on the scientific advisory boards of several ag-tech companies.

Francesca Rossi

IBM Research

AAAI-22 Invited Talk

Title: Thinking Fast and Slow in AI

Current AI systems lack several important human capabilities, such as adaptability, generalizability, self-control, consistency, common sense, and causal reasoning. We believe that existing cognitive theories of human decision making, such as the “thinking fast and slow” theory, can provide insights on how to advance AI systems towards some of these capabilities. In this talk, I will describe a general architecture that is based on fast/slow solvers and a metacognitive component. I will then present experimental results on the behavior of an instance of this architecture, for AI systems that make decisions about navigating in a constrained environment. I will show how combining the fast and slow decision modalities allows this system to evolve over time and gradually pass from slow to fast thinking as it gains experience, and that this greatly benefits decision quality, resource consumption, and efficiency.
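
As a rough illustration of the fast/slow idea (a hypothetical sketch, not the architecture presented in the talk), the example below uses a metacognitive check to route each decision either to a cached fast policy or to an expensive slow solver, so that decisions gradually migrate to the fast path with experience; the solvers, confidence rule, and state representation are assumptions.

    # Hypothetical fast/slow agent with a metacognitive check.
    from collections import defaultdict

    class FastSlowAgent:
        def __init__(self, slow_solver, confidence_threshold=3):
            self.slow_solver = slow_solver            # deliberative planner ("slow" thinking)
            self.cache = {}                           # state -> action learned so far ("fast" thinking)
            self.experience = defaultdict(int)        # how often each state has been solved
            self.confidence_threshold = confidence_threshold

        def act(self, state):
            # Metacognitive decision: is the fast path trustworthy for this state?
            if state in self.cache and self.experience[state] >= self.confidence_threshold:
                return self.cache[state]              # fast thinking: cheap lookup
            action = self.slow_solver(state)          # slow thinking: expensive deliberation
            self.cache[state] = action
            self.experience[state] += 1
            return action

    # With repeated calls to act() on recurring states, decisions migrate from the
    # slow solver to the fast cached policy, mirroring the slow-to-fast transition.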
 
 
Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader. In this role, she leads research projects to advance AI capabilities and co-chairs the IBM AI Ethics board. Her research interests span various areas of AI, from constraints to preferences to graphical models to neuro-symbolic AI. Prior to joining IBM, she was a professor of computer science at the University of Padova, Italy. Francesca is a fellow of both AAAI and EurAI, and she will be the next president of AAAI.

David Touretzky, Christina Gardner-McCune, Fred Martin, and Deborah Seehorn
AAAI/EAAI Outstanding Educator Award

Title: You Know AI Has Arrived When We’re Teaching It In Elementary School

In mid-2018 we launched the AI4K12 initiative (AI4K12.org) to develop national guidelines for teaching AI in K-12. The AI4K12 Working Group produced a list of “Five Big Ideas in AI” that has influenced views worldwide about what students should know about AI. We are now releasing detailed grade-band progression charts for each big idea as guidance for curriculum developers. AI4K12.org is also working with education departments in 15 states and 2 US territories to help them incorporate AI into their K-12 curriculum standards. In this talk we’ll share what AI looks like in elementary school and beyond.

David S. Touretzky

Carnegie Mellon University

David S. Touretzky is a Research Professor in the Computer Science Department and Neuroscience Institute at Carnegie Mellon University. He is the founder and chair of the AI4K12 Initiative (AI4K12.org). Touretzky’s research interests over his 40+ year career have included knowledge representation, connectionist modeling, computational neuroscience (specifically, spatial representations in the rodent brain), cognitive robotics, and CS and AI education. He is a Senior Member of AAAI, a Fellow of the American Association for the Advancement of Science, and was named a Distinguished Scientist by the Association for Computing Machinery.

Christina Gardner-McCune

University of Florida

Christina Gardner-McCune is an Associate Professor in the Computer & Information Science & Engineering Department at the University of Florida’s Herbert Wertheim College of Engineering. Dr. Gardner-McCune is the co-chair of the AI for K-12 Initiative (AI4K12.org) and Director of the Engaging Learning Lab. In the Engaging Learning Lab research group at UF, Gardner-McCune and her students research and develop engaging, hands-on learning experiences for K-12 students and teachers in the areas of artificial intelligence, cybersecurity, robotics, mobile app development, game design, and introductory programming.

Fred Martin

University of Massachusetts Lowell

Fred Martin is Associate Dean for Teaching, Learning, and Undergraduate Studies at the University of Massachusetts Lowell’s Kennedy College of Sciences and a professor in its Computer Science Department. He has served on the board of the Computer Science Teachers Association since 2014 and is past chair of its Board of Directors.

Deborah Seehorn

Deborah Seehorn served as chair of the 2011 CSTA Computer Science Standards Task Force and as co-chair of the 2016 CSTA Computer Science Standards Task Force. After more than 40 years in K-12 education, she retired in 2015 from the North Carolina Department of Public Instruction. Deborah has presented on CS education at multiple conferences, including CSTA. She currently serves as the North Carolina state lead for ECEP (Expanding Computing Education Pathways).

 
