AAAI-23 / IAAI-23 / EAAI-23 Invited Speaker Program
AAAI Presidential Address
Francesca Rossi (IBM)
Thursday, February 9, 8:40 – 9:30 AM
Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader, based at the T.J. Watson IBM Research Lab in New York, USA. Her research interests center on artificial intelligence, with a special focus on constraint reasoning, preferences, multi-agent systems, computational social choice, neuro-symbolic AI, cognitive architectures, and value alignment. She is also very active in the AI ethics space: she co-chairs the IBM AI Ethics Board; she participates in many global multi-stakeholder initiatives on AI ethics, such as the Partnership on AI, the World Economic Forum, the United Nations ITU AI for Good Summit, and the Global Partnership on AI; and she serves on the steering committee of the AAAI/ACM Conference on AI, Ethics, and Society. She is a fellow of both AAAI and EurAI, and she has served as president of IJCAI and as Editor-in-Chief of the Journal of Artificial Intelligence Research. She is currently the president of AAAI.
AAAI-23 Invited Speakers
Sebastien Bubeck (Microsoft Research)
Thursday, February 9, 6:00-7:00 PM
Josh Tenenbaum (Massachusetts Institute of Technology, USA)
Friday, February 10, 5:00-6:00 PM
Susan Murphy (Harvard University)
Friday, February 10, 6:00-7:00 PM
Sami Haddadin (Technical University of Munich)
Saturday, February 11, 8:30-9:30 AM
Sheila McIlraith (University of Toronto)
Saturday, February 11, 3:45-4:45 PM
Isabelle Augenstein (University of Copenhagen)
Saturday, February 11, 4:45-5:45 PM
Vincent Conitzer (Carnegie Mellon University)
Sunday, February 12, 8:30-9:30 AM
Anima Anandkumar (California Institute of Technology)
Sunday, February 12, 5:00-6:00 PM
IAAI-23 Speaker
2023 Robert S. Engelmore Memorial Lecture Award
Manuela Veloso (JP Morgan Chase)
Friday, February 10, 8:30-9:30 AM
EAAI-23 Speaker
AAAI/EAAI Patrick Henry Winston Outstanding Educator Award
Ayanna Howard (The Ohio State University)
Saturday, February 11, 2:00-3:00 PM
Anima Anandkumar
California Institute of Technology and NVIDIA
AAAI 2023 Invited Talk
Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors, including an Alfred P. Sloan Fellowship, an NSF CAREER Award, Young Investigator Awards from the DoD, and Faculty Fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum’s Expert Network. She is passionate about designing principled AI algorithms and applying them in interdisciplinary applications. Her research focuses on unsupervised AI, optimization, and tensor methods.
Isabelle Augenstein
University of Copenhagen
AAAI 2023 Invited Talk
Talk Title: Beyond Fact Checking — Modelling Information Change in Scientific Communication
Abstract: Most work on scholarly document processing assumes that the information processed is trustworthy and factually correct. However, this is not always the case. There are two core challenges, which should be addressed: 1) ensuring that scientific publications are credible — e.g. that claims are not made without supporting evidence, and that all relevant supporting evidence is provided; and 2) that scientific findings are not misrepresented, distorted or outright misreported when communicated by journalists or the general public. In this talk, I will present some first steps towards addressing these problems, discussing our research on exaggeration detection, scientific fact checking, and on modelling information change in scientific communication more broadly.
Isabelle Augenstein is a Professor at the University of Copenhagen, Department of Computer Science, where she heads the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. Her main research interests are fact checking, low-resource learning, and explainability. Prior to starting a faculty position, she was a postdoctoral researcher at University College London, and before that a PhD student at the University of Sheffield.
In October 2022, Isabelle Augenstein became Denmark’s youngest ever female full professor. She currently holds a prestigious ERC Starting Grant on ‘Explainable and Robust Automatic Fact Checking’, as well as its Danish equivalent, a DFF Sapere Aude Research Leader fellowship on ‘Learning to Explain Attitudes on Social Media’. She is a member of the Young Royal Danish Academy of Sciences and Letters, and Vice President-Elect of SIGDAT, which organises the EMNLP conference series.
Sebastien Bubeck
Microsoft Research
AAAI 2023 Invited Talk
Talk Title: Physics of AI — some first steps
Abstract: I would like to propose an approach to the science of deep learning that roughly follows what physicists do to understand reality: (1) explore phenomena through controlled experiments, and (2) build theories based on toy mathematical models and non-fully-rigorous mathematical reasoning. I will illustrate (1) with the LEGO study (LEGO stands for Learning Equality and Group Operations), where we observe how transformers learn to solve simple linear systems of equations. I will also briefly illustrate (2) with an analysis of the emergence of threshold units when training a two-layer neural network to solve a simple sparse coding problem. The latter analysis connects to the recently discovered Edge of Stability phenomenon.
Based on joint works with Kwangjun Ahn, Arturs Backurs, Sinho Chewi, Ronen Eldan, Suriya Gunasekar, Yin Tat Lee, Felipe Suarez, Tal Wagner, and Yi Zhang; see arxiv.org/abs/2206.04301 and arxiv.org/abs/2212.07469.
Sebastien Bubeck is a Senior Principal Research Manager in the Machine Learning Foundations group at Microsoft Research (MSR). He joined the Theory Group at MSR in 2014, after three years as an assistant professor at Princeton University. His work on convex optimization, online algorithms, and adversarial robustness in machine learning has received several best paper awards (NeurIPS 2021 best paper, NeurIPS 2018 best paper, ALT 2018 best student paper in joint work with MSR interns, COLT 2016 best paper, and COLT 2009 best student paper). Since 2022, he has focused on exploring a physics-like theory of neural network learning.
Vincent Conitzer
Carnegie Mellon University
AAAI 2023 Invited Talk
Talk Title: New Design Decisions for Modern AI Agents
Abstract: Consider an intelligent virtual assistant such as Siri, or perhaps a more capable future version of it. Should we think of all of Siri as one big agent? Or is there a separate agent on every phone, each with its own objectives and/or beliefs? And what should those objectives and beliefs be? Such questions reveal that the traditional, somewhat anthropomorphic model of an agent – with clear boundaries, centralized belief formation and decision making, and a clear given objective – falls short for thinking about today’s AI systems. We need better methods for specifying the objectives that these agents should pursue in the real world, especially when their actions have ethical implications. I will discuss some methods that we have been developing for this purpose, drawing on techniques from preference elicitation and computational social choice. But we need to specify more than objectives. When agents are distributed, systematically forget what they knew before (say, for privacy reasons), can be simulated by others, and potentially face copies of themselves, it is no longer obvious what the correct way is even to do probabilistic reasoning, let alone to make optimal decisions. I will explain why this is so and discuss our work on doing these things well. (No previous background required.)
Vincent Conitzer is Professor of Computer Science (with affiliate/courtesy appointments in Machine Learning, Philosophy, and the Tepper School of Business) at Carnegie Mellon University, where he directs the Foundations of Cooperative AI Lab (FOCAL). He is also Head of Technical AI Engagement at the Institute for Ethics in AI, and Professor of Computer Science and Philosophy, at the University of Oxford.
Prior to joining CMU, Conitzer was the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University.
Conitzer has received the 2021 ACM/SIGAI Autonomous Agents Research Award, the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an NSF CAREER award, the inaugural Victor Lesser dissertation award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI’s Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC).
Sami Haddadin
Technical University of Munich (TUM)
AAAI 2023 Invited Talk
Talk Title: Robots with a sense of touch: self-replicating the machine and learning the self
Abstract: The development of robots that can learn to interact with the world and manipulate the objects in it has emerged as one of the greatest and so far largely unsolved challenges in robotics research. In this talk, I will argue that the development of such advanced machines requires a transition from classical manual design with purely model-based control to a novel synthesis paradigm. We need to allow the machine to autonomously develop its own blueprint and algorithmically generate its topological, kinematic, and dynamic self. Building on this, the machine should develop controls for its own body as it moves, learn to manipulate objects in a controlled way, and interact sensitively with the world.
Drawing on our work on torque-controlled lightweight robots and their evolution into human-safe tactile robots that can manipulate, fly, or drive, I will outline the technological quantum leaps that have recently taken place. In particular, this progress was made possible by human-centered design, soft and force-sensitive control, contact reflexes, and model-based machine learning. By enabling human-robot coexistence, collaboration, and interaction in the real world for the first time, this robotic technology has already proven transformative to traditional manufacturing around the globe. Increasingly, it is now impacting professional services, domestic applications, medicine, and healthcare.
After that, I will use our current work to chart the path toward the next generation of tactile machines. We have taken first steps towards autonomously designing and building machines that have the ability to learn their self and thus adapt to changes in body topology and ultimately their entire dynamics. Finally, I will present recent results on designing modular control and learning architectures that achieve complex behaviors for challenging manipulation problems while being provably stable.
Sami Haddadin is the Executive Director of the Munich Institute of Robotics and Machine Intelligence at the Technical University of Munich (TUM) and holds the Chair of Robotics and Systems Intelligence. His research interests include human-centered robotics, embodied AI, collective intelligence, and human-robot symbiosis. His scientific contributions range from tactile mechatronics, contact-aware robots, and safety methods in human-robot interaction to autonomous manipulation learning. Before joining TUM, he was Chair of the Institute of Automatic Control at Gottfried Wilhelm Leibniz University Hannover from 2014 to 2018. Prior to that, he held various positions as a researcher at the German Aerospace Center DLR. He holds degrees in electrical engineering, computer science, and technology management from the Technical University of Munich and the Ludwig Maximilian University of Munich. He received his PhD summa cum laude from RWTH Aachen University and has published more than 200 scientific articles in international journals and conferences, many of them award-winning. He has received numerous awards for his scientific work, including the Georges Giralt PhD Award (2012), the RSS Early Career Spotlight (2015), the IEEE/RAS Early Career Award (2015), the Alfried Krupp Award for Young Professors (2015), the German President’s Award for Innovation in Science and Technology (2017), and the highest German basic science award, the Leibniz Prize (2019). He is a member of the German National Academy of Sciences Leopoldina and the national academy of science and engineering acatech, and chairman of the Bavarian AI Council.
Ayanna Howard
The Ohio State University
EAAI 2023 Invited Talk – AAAI/EAAI Patrick Henry Winston Outstanding Educator Award
Talk Title: Socially Interactive Robots for Supporting Early Interventions for Children with Special Needs
Abstract: It is estimated that 15% of children aged 3 through 17 born in the U.S. have one or more developmental disabilities. For many of these children, proper early intervention is provided as a mechanism to support the child’s academic, developmental, and functional goals from birth and beyond. With recent advances in robotics and artificial intelligence (AI), early intervention protocols using robots are now ideally positioned to make an impact in this domain. In this talk, I will discuss the role of robotics and AI in engaging children with special needs and highlight our methods and preclinical studies that bring us closer to this goal.
Dr. Ayanna Howard is the Dean of Engineering at The Ohio State University. Previously she was the Chair of the School of Interactive Computing at the Georgia Institute of Technology. Dr. Howard’s research encompasses advancements in artificial intelligence (AI), assistive technologies, and robotics, and has resulted in over 275 peer-reviewed publications. She is a Fellow of IEEE, AAAI, AAAS, and the National Academy of Inventors, and an elected member of the American Academy of Arts and Sciences. Prior to Georgia Tech, Dr. Howard was at NASA’s Jet Propulsion Laboratory, where she held the title of Senior Robotics Researcher and Deputy Manager in the Office of the Chief Scientist.
Sheila McIlraith
University of Toronto
AAAI 2023 Invited Talk
Talk Title: (Formal) Languages Help AI Agents Learn and Reason
Susan A. Murphy
Harvard University
AAAI 2023 Invited Talk
Talk Title: We used Reinforcement Learning; but did it work?
Abstract: Reinforcement Learning provides an attractive suite of online learning methods for personalizing interventions in Digital Behavioral Health. However, after a reinforcement learning algorithm has been run in a clinical study, how do we assess whether personalization occurred? We might find users for whom it appears that the algorithm has indeed learned in which contexts the user is more responsive to a particular intervention. But could this have happened completely by chance? We discuss some first approaches to addressing these questions.
Susan Murphy’s research focuses on improving sequential, individualized decision making in digital health. She developed the micro-randomized trial for use in constructing digital health interventions; this trial design is in use across a broad range of health-related areas. Her lab works on online learning algorithms for developing personalized digital health interventions. Dr. Murphy is a member of the National Academy of Sciences and of the National Academy of Medicine, both of the US National Academies. In 2013 she was awarded a MacArthur Fellowship for her work on experimental designs to inform sequential decision making. She is a Fellow of the College on Problems of Drug Dependence, Past President of the Institute of Mathematical Statistics, Past President of the Bernoulli Society, and a former editor of the Annals of Statistics.
Josh Tenenbaum
Massachusetts Institute of Technology, USA
AAAI 2023 Invited Talk
Talk Title: Learning to see the human way
Abstract: Computer vision is one of the great AI success stories. Yet we are still far from having machine systems that can reliably and robustly see everything a human being sees in an image or in the real world. Despite rapid advances in self-supervised visual and multimodal representation learning, we are also far from having systems that can learn to see as richly as a human does, from so little data, or that can learn new visual concepts or adapt their representations as quickly as a human does. And even today’s remarkable generative image synthesis systems imagine the world in a very different and fundamentally less flexible way than human beings do. How can we close these gaps? I will describe several core insights from the study of human vision and visual cognitive development that run counter to the dominant trends in today’s computer vision and machine learning world, but that can motivate and guide an alternative approach to building practical machine vision systems.
Technically, this approach rests on advances in differentiable and probabilistic programming: hybrids of neural, symbolic and probabilistic modeling and inference that can be more robust, more flexible and more data-efficient than purely neural approaches to learning to see. New probabilistic programming platforms offer to make these approaches scalable as well. Conceptually, this approach draws on classic proposals for understanding vision as ‘inverse graphics’, ‘analysis by synthesis’ or ‘inference to the best explanation’, and the notion that at least some high-level architecture for scene representation is built into the brain by evolution rather than learned from experience, reflecting invariant properties of the physical world. Learning then enables, enriches and extends these built-in representations; it does not create them from scratch. I will show a few examples of recent machine vision successes based on these ideas, from our group and others. But the hardest problems are still very open. I will highlight some ‘Grand Challenge’ tasks for building machines that learn to see like people: problems that far outstrip the abilities of any current system, and that I hope can inspire the next steps towards progress for computer vision researchers regardless of which approach they favor.
Josh Tenenbaum is Professor of Computational Cognitive Science at the Massachusetts Institute of Technology in the Department of Brain and Cognitive Sciences, the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds and Machines (CBMM). He received a BS from Yale University (1993) and a PhD from MIT (1999). His long-term goal is to reverse-engineer intelligence in the human mind and brain, and use these insights to engineer more human-like machine intelligence. In cognitive science, he is best known for developing theories of cognition as probabilistic inference in structured generative models, and applications to concept learning, causal reasoning, language acquisition, visual perception, intuitive physics, and theory of mind. In AI, he and his group have developed widely influential models for nonlinear dimensionality reduction, probabilistic programming, and Bayesian unsupervised learning and structure discovery. His current research focuses on the development of common sense in children and machines, common sense scene understanding in humans and machines, and models of learning as program synthesis. His work has been recognized with awards at conferences in Cognitive Science, Philosophy and Psychology, Computer Vision, Neural Information Processing Systems, Reinforcement Learning and Decision Making, and Robotics. He is the recipient of the Troland Research Award from the National Academy of Sciences (2012), the Howard Crosby Warren Medal from the Society of Experimental Psychologists (2015), the R&D Magazine Innovator of the Year (2018), and a MacArthur Fellowship (2019), and he is an elected member of the American Academy of Arts and Sciences.
Manuela Veloso
JP Morgan Chase
2023 Robert S. Engelmore Memorial Lecture Award
Talk Title: AI in Robotics & AI in Finance
Dr. Laura Freeman
(Virginia Tech National Security Institute)
IAAI AI Assurance Panel
Dr. Laura Freeman is a Research Associate Professor of Statistics, dual-hatted as the Deputy Director of the Virginia Tech National Security Institute and Assistant Dean for Research for the College of Science. Her research leverages experimental methods to bring together cyber-physical systems, data science, artificial intelligence (AI), and machine learning to address critical challenges in national security. She develops new methods for test and evaluation focusing on emerging system technology, and focuses on transitioning emerging research to solve challenges in Defense and Homeland Security. She is also a hub faculty member in the Commonwealth Cyber Initiative and leads research in AI Assurance.
Previously, Dr. Freeman was the Assistant Director of the Operational Evaluation Division at the Institute for Defense Analyses. In that position, she established and developed an interdisciplinary analytical team of statisticians, psychologists, and engineers to advance scientific approaches to DoD test and evaluation. During 2018, Dr. Freeman served as the acting Senior Technical Advisor for the Director, Operational Test and Evaluation (DOT&E). As the Senior Technical Advisor, Dr. Freeman provided leadership, advice, and counsel to all personnel on technical aspects of testing military systems. She reviewed test strategies, plans, and reports for all systems under DOT&E oversight.
Dr. Freeman has a B.S. in Aerospace Engineering, an M.S. in Statistics, and a Ph.D. in Statistics, all from Virginia Tech. Her Ph.D. research was on the design and analysis of experiments for reliability data.
Ima Okonny
(Employment and Social Development Canada (ESDC))
IAAI AI Assurance Panel
Ima Okonny, the Chief Data Officer at Employment and Social Development Canada (ESDC), has over 23 years of experience in the field of data.
She has extensive experience building the evidence base through the development of analytical databases and tools, implementing departmental data reporting and release strategies, data management, data privacy protocols, and forward-looking policy development and research.
Ima has an educational background in Mathematics, Computer Programming, and Public Management, and during her time with the Government of Canada she has received several nominations and awards for her leadership and results.
She is passionate about helping organizations develop the capabilities required to ethically and intentionally unleash concrete business value from data.
Dr. Yevgeniya (Jane) Pinelis
(Chief Digital and Artificial Intelligence Office (CDAO))
IAAI AI Assurance Panel
Dr. Jane Pinelis is the Chief of AI Assurance at the Chief Digital and Artificial Intelligence Office (CDAO). In this role, she leads a diverse team of testers and analysts in rigorous test and evaluation (T&E) of CDAO capabilities, as well as the development of T&E-specific products and standards that will support testing of AI-enabled systems across the DoD. She also leads the team responsible for instantiating Responsible AI principles in DoD practices. Prior to joining the CDAO, Dr. Pinelis served as the Director of Test and Evaluation for USDI’s Algorithmic Warfare Cross-Functional Team, better known as Project Maven. She directed the developmental testing of the AI models, including computer vision, machine translation, facial recognition, and natural language processing.
Dr. Pinelis also led the design and analysis of the widely publicized study on the effects of integrating women into combat roles in the Marine Corps. Based on this experience, she co-authored a book titled “The Experiment of a Lifetime: Doing Science in the Wild for the United States Marine Corps.”
Dr. Pinelis holds a BS in Statistics, Economics, and Mathematics, an MA in Statistics, and a PhD in Statistics, all from the University of Michigan, Ann Arbor.
Dr. Michael R. Salpukas
(Raytheon Technologies)
IAAI AI Assurance Panel
Dr. Michael Salpukas is a Raytheon Technologies Senior Engineering Fellow focusing on Artificial Intelligence and Advanced Algorithms. He is presently the Principal Investigator for Artificial Intelligence Research and Development projects in Sensors, C5, Predictive Maintenance, and Manufacturing. His research includes Radar and Sonar Classification, Pattern-of-Life, Predictive Analytics, and Defect Containment. Dr. Salpukas is also a Lead Technologist for Artificial Intelligence and Mission Application Algorithm development. His past work includes Advanced Tracking, Compressed Sensing, Antenna Calibration, Search Patterns, Scheduling, and Clutter Mitigation.
Dr. Salpukas is active in innovation and university partnering; he holds two patents, with four more filed, along with many more filed trade secrets. He has served as Chief Engineer and Systems Engineering Lead on a wide range of programs, and as lead on multiple SBIR partnerships and program transitions.
Dr. Salpukas received his Bachelor’s degree in Mathematics from the University of Chicago, and his Ph.D. in Mathematics and Master’s in Statistics from SUNY-Albany.