Thirty-Fourth Conference on Artificial Intelligence
February 7-8, 2020
New York, NY, USA
What Is the Tutorial Forum?
The Tutorial Forum provides an opportunity for researchers and practitioners to spend two days each year exploring exciting advances in disciplines outside their normal focus. We believe this type of forum is essential for the cross-fertilization, cohesiveness, and vitality of the AI field. We all have a lot to learn from each other; the Tutorial Forum promotes the continuing education of each member of AAAI.
Schedule
The following tutorials have been accepted for presentation at AAAI-20:
Friday, February 7, 2020
8:30 am – 12:30 pm
- FA1: AI in Precision Medicine: Towards Knowledge Empowered Intelligence over “Small” Data
Fei Wang
Tutorial Materials: https://sites.google.com/site/cornellwanglab/aaai20tutorial
- FA2: AI Planning for Robotics with ROSPlan
Michael Cashmore and Daniele Magazzeni
Tutorial Materials: https://kcl-planning.github.io/ROSPlan/demos/conference_pages/tutorialAAAI2020.html
- FA3: Exploration-Exploitation in Reinforcement Learning
Mohammad Ghavamzadeh, Alessandro Lazaric and Matteo Pirotta
Tutorial Materials: https://rlgammazero.github.io/
- FA4: Graph Neural Networks: Models and Applications
Yao Ma, Wei Jin, Jiliang Tang, Lingfei Wu and Tengfei Ma
Tutorial Materials: http://cse.msu.edu/~mayao4/tutorials/aaai2020/
- FA5: Recent Directions in Heuristic-Search
Ariel Felner, Sven Koenig, Nathan Sturtevant and Daniel Harabor
2:00 pm – 6:00 pm
- FP1: Differential Deep Learning on Graphs and its Applications
Chengxi Zang and Fei Wang
Tutorial Materials: http://www.calvinzang.com/DDLG_AAAI_2020.html
- FP2: New Frontiers of Automated Mechanism Design for Pricing and Auctions
Maria-Florina Balcan, Tuomas Sandholm and Ellen Vitercik
- FP3: Probabilistic Circuits: Representations, Inference, Learning and Applications
Antonio Vergari, YooJung Choi, Robert Peharz, and Guy Van den Broeck
Tutorial Materials: http://starai.cs.ucla.edu/slides/AAAI20.pdf
- FP4: Recent Advances in Transferable Representation Learning
Muhao Chen, Kai-Wei Chang and Dan Roth
Tutorial Materials: https://cogcomp.seas.upenn.edu/page/tutorial.202002/
- FP5: Statistical Machine Learning: Big, Multi-Source and Sparse Data with Complex Relations and Dynamics
Trong Dinh Thac Do, Longbing Cao and Jinjin Guo
Tutorial Materials: https://sites.google.com/site/trongdinhthacdo/talks-and-tutorials/aaai-2020
Saturday, February 8, 2020
8:30 am – 12:30 pm
- SA1: Explainable AI: Foundations, Industrial Applications, Practical Challenges, and Lessons Learned (Part One, ¾-day extending until 3:45 pm)
Freddy Lecue, Krishna Gade, Fosca Giannotti, Sahin Geyik, Riccardo Guidotti, Krishnaram Kenthapadi, Pasquale Minervini, Varun Mithal and Ankur Taly
- SA2: Fairness and Bias in Peer Review and other Sociotechnical Intelligent Systems
Nihar Shah and Zachary Lipton
Tutorial Materials: http://www.cs.cmu.edu/~nihars/tutorials/AAAI2020/
- SA3: Recent Advances in Fair Resource Allocation
Rupert Freeman and Nisarg Shah
Tutorial Materials: https://users.cs.duke.edu/~rupert/fair-division-aaai20/index.html
8:30 am – 10:15 am
- SA5Q: Guidelines for Human-AI Interaction
Besmira Nushi, Dan Weld, Saleema Amershi and Adam Fourney
Tutorial Materials: https://www.microsoft.com/en-us/research/project/guidelines-for-human-ai-interaction/articles/aaai-2020-tutorial-guidelines-for-human-ai-interaction/
10:45 am – 12:30 pm
- SA6Q: Modularizing Natural Language Processing
Zhengzhong Liu, Zhiting Hu and Eric Xing
Tutorial Materials: https://asyml.github.io/aaai_tutorial/
2:00 pm – 6:00 pm
- SP1: Rigorous Verification and Explanation of ML Models
Alexey Ignatiev, Joao Marques-Silva, Kuldeep Meel and Nina Narodytska
Tutorial Materials: https://alexeyignatiev.github.io/aaai20-tutorial/
- SP2: Optimization and Learning Approaches to Resource Allocation for Social Good
Faez Ahmed, Sanmay Das, John Dickerson, Duncan McElfresh and Bryan Wilder
- SP3: Representation Learning for Causal Inference
Sheng Li, Liuyi Yao, Yaliang Li, Jing Gao and Aidong Zhang
Tutorial Materials: http://cobweb.cs.uga.edu/~shengli/AAAI20-Causal-Tutorial.html
- SP7: Synthesizing Explainable and Deceptive Behavior in Human-AI Interaction
Subbarao Kambhampati, Tathagata Chakraborti, Sarath Sreedharan and Anagha Kulkarni
2:00 pm – 3:45 pm
- SA1: Explainable AI: Foundations, Industrial Applications, Practical Challenges, and Lessons Learned (Part Two)
Freddy Lecue, Krishna Gade, Fosca Giannotti, Sahin Geyik, Riccardo Guidotti, Krishnaram Kenthapadi, Pasquale Minervini, Varun Mithal and Ankur Taly
Tutorial Materials: https://xaitutorial2020.github.io/
- SP4Q: Multi-Agent Distributed Constrained Optimization
Ferdinando Fioretto and William Yeoh
Tutorial Materials: https://www2.isye.gatech.edu/~fferdinando3/cfp/AAAI20/
4:15 pm – 6:00 pm
- SP5Q: Creative and Artistic Writing via Text Generation
Juntao Li and Rui Yan
Tutorial Materials: https://lijuntaopku.github.io/AAAI2020-tutorial/
- SP6Q: Recent Advances in Machine Teaching: From Machine to Human
Yao Zhou and Jingrui He
Tutorial Materials: https://sites.google.com/view/aaai20tutorial-mt/home
FA1: AI in Precision Medicine: Towards Knowledge Empowered Intelligence over “Small” Data
Fei Wang
Artificial Intelligence (AI) technologies have demonstrated great promise in different areas of medicine in recent years, such as clinical decision support, drug discovery and development, and healthcare insurance plan management. We are now in the era of precision medicine, whose goal is to provide tailored treatment or management plans to individual patients. One big challenge of applying AI in precision medicine is “small data”: typically we only have data from a limited number of individuals (patients) to conduct our research. In this tutorial I will introduce recent advances in knowledge-empowered AI technologies that are particularly promising for precision medicine, including knowledge integration, knowledge distillation, and knowledge transfer; discuss the challenges; and point out future directions.
Fei Wang
Cornell University
Fei Wang is an Associate Professor in Division of Health Informatics, Department of Healthcare Policy and Research, Weill Cornell Medicine, Cornell University. His major research interest is developing effective data mining and machine learning algorithms for helping with various healthcare problems such as clinical decision support and computational drug development.
FA2: AI Planning for Robotics with ROSPlan
Michael Cashmore and Daniele Magazzeni
This tutorial will provide an introduction to AI Planning & Scheduling for Robotics, the essentials of the Robot Operating System (ROS), and ROSPlan. It will also provide an overview of recent advances in integrating planning and robotics. The main goal is to illustrate integrated planning and execution for robotics, describe how it benefits both communities, and highlight the main challenges and open issues.
The first part of the tutorial will be an introduction to planning, including an overview of techniques and an explanation of PDDL. The second part introduces the basic concepts of the Robot Operating System (ROS). In the final part of the tutorial, an introduction to ROSPlan will cover the ROS nodes that comprise a basic planning and execution framework. This will include an overview of recent work and related systems in planning and robotics, highlighting the process of using planning to direct the behavior of a robot.
The tutorial includes a hands-on exercise that will also be demonstrated on the projector for those without a laptop. The instructions and materials for the hands-on exercise will be made available on the tutorial website.
Michael Cashmore
University of Strathclyde
Michael Cashmore is a Chancellor’s Fellow at the University of Strathclyde Department of Computer Science. His research explores the integration of AI Planning and autonomous robots, and he is the developer of ROSPlan, a widely used framework for AI task planning and scheduling in the Robot Operating System (ROS).
Daniele Magazzeni
King’s College London
Daniele Magazzeni is a Reader in Artificial Intelligence at King’s College London, where he leads the Human-AI Teaming Lab. His research interests are in Safe, Trusted, and Explainable AI, with a particular focus on AI Planning for Robotics and Autonomous Systems, and Human-AI Teaming.
FA3: Exploration-Exploitation in Reinforcement Learning
Mohammad Ghavamzadeh, Alessandro Lazaric and Matteo Pirotta
Reinforcement Learning (RL) studies the problem of sequential decision-making when the environment (i.e., the dynamics and the reward) is initially unknown but can be learned through direct interaction. RL algorithms have recently achieved impressive results in a variety of problems including games and robotics. Nonetheless, most recent RL algorithms require a huge amount of data to learn a satisfactory policy and cannot be used in domains where samples are expensive and/or long simulations are not possible (e.g., human-computer interaction). A fundamental step towards more sample-efficient algorithms is to devise methods that properly balance the exploration of the environment, to gather useful information, and the exploitation of the learned policy to collect as much reward as possible. The objective of the tutorial is to bring awareness of the importance of the exploration-exploitation dilemma in improving the sample-efficiency of modern RL algorithms. The tutorial will provide the audience with a review of the major algorithmic principles (notably, optimism in the face of uncertainty and posterior sampling), their theoretical guarantees in the exact case (i.e., tabular RL) and their application to more complex environments, including parameterized MDPs, linear-quadratic control, and their integration with deep learning architectures. The tutorial should provide enough theoretical and algorithmic background to enable researchers in AI and RL to integrate exploration principles in existing RL algorithms and devise novel sample-efficient RL methods able to deal with complex applications such as human-computer interaction (e.g., conversational agents), medical applications (e.g., drug optimization), and advertising (e.g., lifetime value optimization in marketing). Throughout the whole tutorial, we will discuss open problems and possible future research directions.
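The principle of optimism in the face of uncertainty mentioned above is easiest to see in the multi-armed bandit setting. The sketch below is an illustration (not part of the tutorial materials) of the classic UCB1 algorithm: each arm is scored by its empirical mean plus a confidence bonus, so rarely tried arms look optimistically good and get explored.

```python
import math
import random

def ucb1(arm_means, horizon=10000, seed=0):
    """Minimal UCB1 on Bernoulli arms: pull the arm with the largest
    optimistic estimate (empirical mean + confidence bonus)."""
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms          # pulls per arm
    sums = [0.0] * n_arms          # cumulative reward per arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:            # pull each arm once to initialize
            arm = t - 1
        else:
            # optimistic index: mean + sqrt(2 ln t / n_i)
            arm = max(range(n_arms),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return counts, total_reward

counts, _ = ucb1([0.2, 0.5, 0.8])
# suboptimal arms receive only O(log T) pulls; the best arm dominates
```

The confidence bonus shrinks as an arm is pulled more, so exploration tapers off exactly where uncertainty has been resolved.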
Mohammad Ghavamzadeh
Facebook AI Research (FAIR)
M. Ghavamzadeh received a PhD from UMass Amherst in 2005. He was a postdoctoral fellow at UAlberta from 2005 to 2008. He became a permanent researcher at INRIA in 2008. He was the recipient of the “INRIA award for scientific excellence” in 2011, and obtained his Habilitation in 2014. Since 2013, he has been a senior researcher, first at Adobe Research, then at DeepMind, and now at Facebook AI Research (FAIR). He has published over 70 refereed papers in major machine learning, AI, and control journals and conferences.
Alessandro Lazaric
Facebook AI Research (FAIR)
A. Lazaric has been a research scientist at the Facebook AI Research (FAIR) lab since 2017; he was previously a researcher at INRIA in the SequeL team. His main research topic is reinforcement learning, with extensive contributions on both the theoretical and algorithmic aspects of RL. In the last ten years, he has studied the exploration-exploitation dilemma in both the multi-armed bandit and reinforcement learning frameworks, notably on the problems of regret minimization, best-arm identification, pure exploration, and hierarchical RL. He has published over 40 papers in top machine learning conferences and journals.
Matteo Pirotta
Facebook AI Research (FAIR)
M. Pirotta is a research scientist at Facebook AI Research (FAIR) lab in Paris. Previously, he was a postdoc at INRIA in the SequeL team. He received his PhD in computer science from the Politecnico di Milano (Italy) in 2016. For his doctoral thesis in reinforcement learning, he received the Dimitris N. Chorafas Foundation Award and an honorable mention for the EurAI Distinguished Dissertation Award. His main research interest is reinforcement learning. In the last years, he has mainly focused on the exploration-exploitation dilemma in RL.
FA4: Graph Neural Networks: Models and Applications
Yao Ma, Wei Jin, Jiliang Tang, Lingfei Wu and Tengfei Ma
Graph-structured data such as social networks and molecular graphs are ubiquitous in the real world. It is of great research importance to design advanced algorithms for representation learning on graph-structured data so that downstream tasks can be facilitated. Graph Neural Networks (GNNs), which generalize deep neural network models to graph-structured data, pave a new way to effectively learn representations for graph-structured data at either the node level or the graph level. Thanks to their strong representation learning capability, GNNs have gained practical significance in applications ranging from recommendation and natural language processing to healthcare. They have recently become a hot research topic, attracting increasing attention from the machine learning and data mining communities. This tutorial on GNNs is timely for AAAI 2020 and covers relevant and interesting topics, including representation learning on graph-structured data using GNNs, the robustness of GNNs, the scalability of GNNs, and applications based on GNNs.
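The core of the representation learning idea above is neighborhood aggregation. As an illustrative sketch (not code from the tutorial), a single graph-convolution layer computes H' = σ(Â H W), where Â is the degree-normalized adjacency matrix with self-loops:

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution layer: aggregate each node's self-looped,
    degree-normalized neighborhood, then apply a linear map and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)   # ReLU

# toy graph: a path 0 - 1 - 2, two input features, three output features
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.random.RandomState(0).randn(3, 2)
w = np.random.RandomState(1).randn(2, 3)
h = gcn_layer(adj, x, w)    # shape (3, 3): one embedding per node
```

Stacking such layers lets information propagate over multi-hop neighborhoods, which is what gives GNNs their representational power on graphs.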
Yao Ma
Michigan State University
Yao Ma is a Ph.D. student of Computer Science and Engineering at Michigan State University. His research interests include network embedding and graph neural networks for representation learning on graph-structured data. He has published innovative works in top-tier conferences such as WSDM, ASONAM, ICDM, SDM, WWW, KDD and IJCAI.
Wei Jin
Michigan State University (MSU)
Wei Jin is a Ph.D. student of Computer Science and Engineering at Michigan State University (MSU), supervised by Dr. Jiliang Tang. He works on the area of graph neural network including its theory foundations, model robustness and applications. Before joining MSU, he obtained his bachelor’s degree from Zhejiang University.
Jiliang Tang
Michigan State University (MSU)
Jiliang Tang is an assistant professor in the CSE department at MSU. His research interests include social computing, data mining, and machine learning, and their applications in education. He was the recipient of the 2019 NSF CAREER Award and 7 best paper awards (and runner-ups). He serves as a conference organizer and journal editor.
Lingfei Wu
IBM T. J. Watson Research Center
Lingfei Wu is a Research Staff Member at IBM T. J. Watson Research Center. He earned his Ph.D. degree in computer science from the College of William and Mary in August 2016. He has given several tutorials and organized multiple workshops on deep learning on graphs at KDD’19, AAAI’20, and BigData’19.
Tengfei Ma
IBM T.J. Watson Research Center
Tengfei Ma is a research staff member of IBM T.J. Watson Research Center. He obtained his PhD from The University of Tokyo in 2015. His recent research is focused on graph neural networks and their applications.
FA5: Recent Directions in Heuristic-Search
Ariel Felner, Sven Koenig, Nathan Sturtevant and Daniel Harabor
Heuristic state-space search is a fundamental technique in AI and virtually every AI class includes the A* algorithm. But what is exciting in heuristic search today? This tutorial provides overviews of important results in several areas within heuristic search that have seen recent progress, presented by four researchers active in those areas from different parts of the globe (USA, Canada, Israel and Australia). In particular, we will deal with the following topics:
(1) Background and Fundamental Algorithms.
(2) Optimal search and bidirectional search.
(3) Suboptimal and bounded-suboptimal search.
(4) Search in explicit domains, such as grids.
(5) Any-angle search in continuous domains.
(6) Multi-agent pathfinding.
Prerequisite knowledge: we assume that participants have taken a basic AI class and are familiar with the basics of state-space search (e.g., the A* algorithm and admissible heuristics). Deeper knowledge in heuristic search is not required.
More information on this tutorial can be found at https://movingai.com/AAAI-HS20/about.html
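Since the A* algorithm is the assumed background for this tutorial, a minimal refresher may help. The sketch below (illustrative only, not from the tutorial materials) runs A* on a 4-connected grid with the Manhattan distance as an admissible heuristic:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid of 0 (free) / 1 (blocked) cells.
    Returns the length of a shortest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_list = [(h(start), 0, start)]   # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while open_list:
        f, g, node = heapq.heappop(open_list)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                     # stale queue entry, skip
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_list, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
# the shortest path from (0,0) to (2,0) must go around the wall: length 6
```

Because the heuristic is admissible, the first time the goal is popped its g-value is optimal, which is the key property the tutorial's optimal-search topics build on.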
Ariel Felner
Ben-Gurion University
Ariel Felner is a Professor at Ben-Gurion University, Israel. His research area includes all aspects of heuristic search such as theoretical foundations, new search algorithms, the study and development of heuristics and applying all these to different domains and settings. He also has special interest in historical and pedagogical aspects of search algorithms. In addition, he has been recently working on the problem of multi-agent pathfinding and on bidirectional search.
Sven Koenig
University of Southern California
Sven Koenig is a professor in computer science at the University of Southern California. Most of his research centers around techniques for decision making (planning and learning) that enable single situated agents (such as robots or decision-support systems) and teams of agents to act intelligently in their environments and exhibit goal-directed behavior in real-time, even if they have only incomplete knowledge of their environment, imperfect abilities to manipulate it, limited or noisy perception or insufficient reasoning speed.
Nathan Sturtevant
University of Alberta
Nathan Sturtevant is a professor at the University of Alberta. His research looks at heuristic and combinatorial search for single and multiple agents, including bidirectional search, cooperative search, large-scale and parallel search, search for game design, heuristic learning, automated abstraction for building heuristics, refinement search, and inconsistent heuristics. Particular applications of his work include pathfinding and planning in memory-constrained real-time environments (e.g. commercial video games) as well as algorithms for building and using memory-based heuristics via large-scale search. He is also interested in theoretical and practical issues in games with more than two players, including opponent modeling, learning, and imperfect information.
Daniel Harabor
Monash University
Daniel Harabor is a Senior Lecturer at the Faculty of Information Technology at Monash University. Daniel’s background is in the area of Artificial Intelligence and Heuristic Search. His research interests include single and multi-agent pathfinding, journey planning, rail scheduling and other interesting topics with applications in Transportation and Logistics and/or Computer Games.
FP1: Differential Deep Learning on Graphs and its Applications
Chengxi Zang and Fei Wang
In this tutorial, we will cover recent advances in introducing Differential Equation (DE) theory into deep learning methods, which we call Differential Deep Learning for short. We will first introduce DE theory and physics models, and then rethink deep learning from the perspective of dynamical systems. Such an understanding can bring a broad range of theories developed in the DE literature into the computer science community, letting us view black-box deep models as mechanistic models in physics. We will discuss several new differential deep learning architectures, with an emphasis on dealing with graphs. We further showcase applications of differential deep learning methods on graphs, including the data-driven discovery of complex systems’ dynamics and drug molecule generation. All researchers and practitioners engaged in data mining and machine learning are welcome. Basic knowledge of deep learning, graph mining, and differential equations is preferred but not required.
Chengxi Zang
Cornell University
Chengxi Zang is currently a Postdoctoral Research Associate collaborating with Dr. Fei Wang at Weill Cornell Medicine. He received his Ph.D. from Tsinghua University in 2019 with an Excellent Ph.D. Award (top 3%). He has worked extensively on data-driven dynamical modeling of complex social and biological systems.
Fei Wang
Cornell University
Fei Wang is an Associate Professor in Division of Health Informatics, Department of Healthcare Policy and Research, Weill Cornell Medicine, Cornell University. His major research interest is developing effective data mining and machine learning algorithms for helping with various healthcare problems such as clinical decision support and computational drug development.
FP2: New Frontiers of Automated Mechanism Design for Pricing and Auctions
Maria-Florina Balcan, Tuomas Sandholm and Ellen Vitercik
Mechanism design is a field of game theory with significant real-world impact, encompassing areas such as pricing and auction design. Mechanisms are used in sales settings ranging from large-scale internet marketplaces to the US government’s radio spectrum reallocation efforts. A powerful and prominent approach in this field is automated mechanism design, which uses optimization and machine learning to design mechanisms based on data. This automated approach helps overcome challenges faced by traditional, manual approaches to mechanism design, which have been stuck for decades due to inherent computational complexity challenges: the revenue-maximizing mechanism is not known even for just two items for sale! This tutorial is focused on the rapidly growing area of automated mechanism design for revenue maximization. This encompasses both the foundations of batch and online learning (including statistical guarantees and optimization procedures), as well as real-world success stories.
Maria-Florina Balcan
Carnegie Mellon University
Maria-Florina Balcan is an Associate Professor of Computer Science at Carnegie Mellon University, working in machine learning, game theory, and algorithms. She was Program Committee Co-chair for COLT 2014 and ICML 2016, and will be Program Committee Co-Chair for NeurIPS 2020, and General Chair for ICML 2021.
Tuomas Sandholm
Carnegie Mellon University
Tuomas Sandholm is Angel Jordan Professor of Computer Science at Carnegie Mellon University and Co-Director of CMU AI. He is Founder and Director of the Electronic Marketplaces Laboratory. He is a successful serial entrepreneur. He has fielded over 800 combinatorial auctions, worth over $60 billion. He is Founder and CEO of Optimized Markets, Strategic Machine, and Strategy Robot.
Ellen Vitercik
Carnegie Mellon University
Ellen Vitercik is a PhD student at Carnegie Mellon University. Her research interests include artificial intelligence, machine learning theory, and the interface between economics and computation. She has received the IBM PhD Fellowship, a fellowship from CMU’s Center for Machine Learning and Health, and the NSF Graduate Research Fellowship.
FP3: Probabilistic Circuits: Representations, Inference, Learning and Applications
Antonio Vergari, YooJung Choi, Robert Peharz, and Guy Van den Broeck
In several real-world scenarios, decision making involves complex reasoning, i.e., the ability to answer complex probabilistic queries (e.g., involving logical constraints) within a limited amount of time. Moreover, in many sensitive domains like healthcare and economic decision making, the results of these queries are required to be exact, as approximations without guarantees would make the decision-making process brittle. In all these scenarios, tractable probabilistic inference and learning are increasingly necessary. In this tutorial, we will show how tractability is a continuous spectrum that can be traversed by trading off model expressiveness against flexibility in answering complex probability queries. On one side of the spectrum we have recent neural estimators, e.g., variational autoencoders, which have very limited inference capabilities, and intractable classical probabilistic graphical models like Bayesian and Markov networks. On the other side, there are expressive but tractable models, like tree models and mixtures thereof. We will introduce probabilistic circuits as probabilistic models allowing several complex probability query families to be answered exactly and efficiently with little or no compromise in terms of model expressiveness. Furthermore, we will show how probabilistic circuits provide a unifying computational framework under which one can make sense of the alphabet soup that populates the current landscape of tractable probabilistic models (ACs, CNs, DNNFs, d-DNNFs, OBDDs, PSDDs, SDDs, SPNs, etc.). We will also discuss which structural properties of the circuits delineate each model class and enable different kinds of tractability. We will provide a bridge between probabilistic circuits and their counterparts in propositional logic, which we name logical circuits. Logical circuits have been extensively researched for decades in the automated reasoning and verification communities.
We will also provide a unifying view for learning probabilistic circuits – both their structures and parameters – from data. Lastly, we will showcase several successful application scenarios where probabilistic circuits have been employed as an alternative to or in conjunction with intractable models, including image classification, completion and generation, scene understanding, activity recognition, language and speech modeling, bioinformatics, collaborative filtering, verification and diagnosis.
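To make the notion of exact, efficient inference concrete, here is a toy example (illustrative only, not from the tutorial materials) of a smooth and decomposable probabilistic circuit over two binary variables: a sum node mixing two product nodes of Bernoulli leaves. Marginalizing a variable out is just setting its leaf to 1, so any marginal is computed in one bottom-up pass, with no enumeration over states.

```python
def bern(p, x):
    """Bernoulli leaf; x = None marginalizes the variable out (leaf -> 1)."""
    if x is None:
        return 1.0
    return p if x == 1 else 1.0 - p

def circuit(x1, x2):
    """A tiny smooth, decomposable probabilistic circuit over two binary
    variables: a sum (mixture) node over two product nodes of Bernoulli
    leaves. Pass None for any variable to sum it out exactly."""
    comp1 = bern(0.9, x1) * bern(0.2, x2)   # product node over disjoint scopes
    comp2 = bern(0.3, x1) * bern(0.7, x2)
    return 0.4 * comp1 + 0.6 * comp2        # sum node with mixture weights

# full evidence: a joint probability
p11 = circuit(1, 1)      # 0.4*0.9*0.2 + 0.6*0.3*0.7 = 0.198
# exact marginal p(x1 = 1), computed without enumerating x2
p1 = circuit(1, None)    # 0.4*0.9 + 0.6*0.3 = 0.54
```

Smoothness (children of a sum share the same scope) and decomposability (children of a product have disjoint scopes) are exactly the structural properties that make this per-leaf marginalization trick valid, which is the sense in which structural properties delineate tractable query families.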
Antonio Vergari
UCLA
Antonio Vergari is a postdoc at UCLA working on enabling advanced probabilistic reasoning on deep representations. Previously, he was a postdoc at the MPI-IS, Tuebingen, working on automating machine learning via tractable models. He organized the Tractable Probabilistic Modeling workshop at ICML2019.
YooJung Choi
UCLA
YooJung Choi is a Ph.D. student in Computer Science at UCLA. Her research focuses on probabilistic reasoning with tractable probabilistic models, especially with application in verifying and learning robust and fair decision making systems.
Robert Peharz
TU Eindhoven, Netherlands
Robert Peharz is an Assistant Professor at TU Eindhoven, Netherlands. His main research focus lies on combining (tractable) probabilistic models and deep learning approaches. Previously, he held a Marie-Curie Individual Fellowship at the University of Cambridge (Computational Biological Learning Lab), UK.
Guy Van den Broeck
UCLA
Guy Van den Broeck is an Assistant Professor at UCLA, where he directs the Statistical and Relational Artificial Intelligence (StarAI) lab. His work has been recognized with best paper awards from key artificial intelligence venues, and he is the recipient of the IJCAI-19 Computers and Thought Award.
FP4: Recent Advances in Transferable Representation Learning
Muhao Chen, Kai-Wei Chang and Dan Roth
This tutorial targets AI researchers and practitioners who are interested in applying deep learning techniques to cross-domain decision making tasks. These include tasks that involve multilingual and cross-lingual natural language processing, domain-specific knowledge, and different data modalities. This tutorial will provide the audience with a holistic view of (i) a wide selection of representation learning methods for unlabeled text, multi-relational and multimedia data, (ii) techniques for aligning and transferring knowledge across multiple representations, with limited supervision, and (iii) a wide range of AI applications using these techniques in natural language understanding, knowledge bases, and computational biology. We will conclude the tutorial by outlining future research directions in this area. No specific background knowledge is assumed of the audience.
Muhao Chen
University of Pennsylvania
Muhao Chen is currently a postdoctoral fellow in CogComp, UPenn. He received a Ph.D. degree in Computer Science from UCLA in 2019. Muhao has worked on various topics in machine learning and NLP. His recent research also applies related techniques to computational biology. Additional information is available at http://muhaochen.github.io.
Kai-Wei Chang
UCLA
Kai-Wei Chang is an assistant professor in the Department of Computer Science at UCLA. His research interests include designing robust machine learning methods for large and complex data and building language processing models for social good applications. Additional information is available at http://kwchang.net.
Dan Roth
UPenn
Dan Roth is the Eduardo D. Glandt Distinguished Professor at CIS, UPenn, and a Fellow of the AAAS, ACM, AAAI, and the ACL. Roth was recognized for major conceptual and theoretical advances in the modeling of natural language understanding, machine learning, and reasoning. Additional information is available at http://www.cis.upenn.edu/~danroth/
FP5: Statistical Machine Learning: Big, Multi-Source and Sparse Data with Complex Relations and Dynamics
Trong Dinh Thac Do, Longbing Cao and Jinjin Guo
With the explosion of data on the Internet, social networks, finance, and e-commerce websites, modelling large and sparse datasets while exploring the complex relations and dynamics inside the data is highly in demand yet challenging. Traditional methods face problems in handling these real-life datasets because of the intensive mathematical computation required. In this tutorial, we summarize various statistical methods that are effective and efficient in handling large and sparse datasets. In addition, combining observable data (e.g., users’ ratings on items in recommender systems, user friendship or item relations, and user/item metadata) and learning the complex relations within and between multiple sources, along with the dynamics of the data, helps deal with cold-start problems where we have no or limited preliminary knowledge about one specific element. Accordingly, we focus on introducing our series of designs for tackling these challenges on large, sparse, and multi-source data with complex relations and dynamics. This will create new opportunities, directions, and means for learning and analysing complex and practical machine learning problems. People who are familiar with the basics of machine learning, statistics, and Bayesian theory will find it easier to understand the algorithms and case studies introduced in this tutorial.
Trong Dinh Thac Do
University of Technology Sydney (UTS)
Trong Dinh Thac Do is currently a Research Fellow in the Advanced Analytics Institute (AAI) at the University of Technology Sydney (UTS), where he also obtained his PhD degree in Machine Learning and Artificial Intelligence. His research interests include machine learning, statistical models, graphical models, Bayesian nonparametrics and deep learning.
Longbing Cao
University of Technology Sydney (UTS)
Longbing Cao is a Professor in Advanced Analytics Institute (AAI) at the University of Technology Sydney. He has a PhD in Pattern Recognition and Intelligent Systems and another in Computing Sciences. His research interests include data science, analytics and machine learning, and behavior informatics and their enterprise applications.
Jinjin Guo
University of Macau
Jinjin Guo is currently a final year Ph.D. student majoring in Computer Science at University of Macau. Her research interests include Bayesian Nonparametrics, statistical Bayesian models and applications in event detection for social media data.
SA1: Explainable AI: Foundations, Industrial Applications, Practical Challenges, and Lessons Learned
Freddy Lecue, Krishna Gade, Fosca Giannotti, Sahin Geyik, Riccardo Guidotti, Krishnaram Kenthapadi, Pasquale Minervini, Varun Mithal and Ankur Taly
The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, clarity and understanding. XAI (eXplainable AI) aims at addressing such challenges by combining the best of symbolic AI and traditional Machine Learning. This topic has been studied for years by the different communities of AI, with different definitions, evaluation metrics, motivations and results.
This tutorial presents a snapshot of XAI work to date, surveying what the AI community has achieved with a focus on machine learning and symbolic AI approaches. We will motivate the need for XAI in real-world and large-scale applications, while presenting state-of-the-art techniques and best practices. In the first part of the tutorial, we give an introduction to the different aspects of explanation in AI. We then focus on two specific approaches: (i) XAI using machine learning and (ii) XAI using a combination of graph-based knowledge representation and machine learning. For both, we cover the specifics of the approach, the state of the art, and the limitations and research challenges for the next steps. The final part of the tutorial gives an overview of real-world applications of XAI.
Freddy Lecue
Accenture Technology Labs, Dublin – Ireland
Freddy Lecue (PhD 2008, Habilitation 2015) is a principal scientist and research manager in artificial intelligence systems (systems combining learning and reasoning capabilities) at Accenture Technology Labs, Dublin, Ireland. He is also a research associate at INRIA, in WIMMICS, Sophia Antipolis, France. Before joining Accenture Labs, he was a Research Scientist at IBM Research, Smarter Cities Technology Center (SCTC) in Dublin, Ireland, and lead investigator of the Knowledge Representation and Reasoning group. His main research interest is explainable AI systems. The application domain of his current research is Smarter Cities, with a focus on smart transportation and buildings. In particular, he is interested in exploiting and advancing Knowledge Representation and Reasoning methods for representing and inferring actionable insight from large, noisy, heterogeneous and big data. He has over 40 publications in refereed journals and conferences related to Artificial Intelligence (AAAI, ECAI, IJCAI, IUI) and the Semantic Web (ESWC, ISWC), all describing new systems to handle expressive semantic representation and reasoning. He co-organized the first three workshops on semantic cities (AAAI 2012, 2014, 2015, IJCAI 2013), and the first two tutorials on smart cities at AAAI 2015 and IJCAI 2016. Prior to joining IBM, Freddy Lecue was a Research Fellow (2008-2011) with the Centre for Service Research at The University of Manchester, UK. He was awarded the second prize for his Ph.D. thesis by the French Association for the Advancement of Artificial Intelligence in 2009, and received the Best Research Paper Award at the ACM/IEEE Web Intelligence conference in 2008.
Krishna Gade
Fiddler Labs
Krishna Gade is the founder and CEO of Fiddler Labs, an enterprise startup building an explainable AI engine to address problems regarding bias, fairness, and transparency in AI. An entrepreneur and engineering leader with a strong technical experience of creating scalable platforms and delightful consumer products, Krishna previously held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft. He has given several invited talks at prominent practitioner forums, including a talk on addressing bias, fairness, and transparency in AI at Strata Data Conference, 2019.
Sahin Cem Geyik
Sahin Cem Geyik has been part of the Careers/Talent AI teams at LinkedIn over the past three years, focusing on personalized and fairness-aware recommendations across several LinkedIn Talent Solutions products. Prior to LinkedIn, he was a research scientist at Turn Inc., an online advertising startup which was later acquired by Amobee, a subsidiary of Singtel. He received his Ph.D. degree in Computer Science from Rensselaer Polytechnic Institute in 2012, and his Bachelor’s degree in Computer Engineering in 2007 from Bogazici University, Istanbul, Turkey. Sahin has worked on various research topics in ML spanning online advertising models and algorithms, recommender and search systems, fairness-aware ML, and explainability. He has also performed extensive research in the systems domain, which resulted in multiple publications in the ad-hoc/sensor networks and service-oriented architecture fields. Sahin has authored papers in several top-tier conferences and journals such as KDD, WWW, INFOCOM, SIGIR, ICDM, CIKM, IEEE TMC, and IEEE TSC, and presented his work in multiple external venues.
Krishnaram Kenthapadi
Krishnaram Kenthapadi is part of the AI team at LinkedIn, where he leads the fairness, transparency, explainability, and privacy modeling efforts across different LinkedIn applications. He also serves as LinkedIn’s representative in Microsoft’s AI and Ethics in Engineering and Research (AETHER) Committee. He shaped the technical roadmap and led the privacy/modeling efforts for the LinkedIn Salary product, and prior to that, served as the AI lead for the LinkedIn Careers and Talent Solutions team, which powers search/recommendation products at the intersection of members, recruiters, and career opportunities. Previously, he was a Researcher at Microsoft Research Silicon Valley, where his work resulted in product impact (and Gold Star / Technology Transfer awards), and several publications/patents. Krishnaram received his Ph.D. in Computer Science from Stanford University in 2006, and his Bachelors in Computer Science from IIT Madras. He serves regularly on the program committees of KDD, WWW, WSDM, and related conferences, and co-chaired the 2014 ACM Symposium on Computing for Development. He received Microsoft’s AI/ML conference (MLADS) distinguished contribution award, the NAACL best thematic paper award, the CIKM best case studies paper award, the SODA best student paper award, and a WWW best paper award nomination. He has published 40+ papers, with 2500+ citations, and filed 140+ patents (35 granted). He has taught tutorials and presented lectures on privacy, fairness, and explainable AI in industry at forums such as KDD ’18 and ’19, WSDM ’19, and WWW ’19, instructed a course on artificial intelligence at Stanford, and given several talks on his research work.
Varun Mithal
Varun Mithal is an AI researcher at LinkedIn, where he works on jobs and hiring recommendations. Prior to joining LinkedIn, he received his PhD in Computer Science from the University of Minnesota-Twin Cities, and his Bachelors in Computer Science from the Indian Institute of Technology, Kanpur. He has developed several algorithms to identify rare classes and anomalies using unsupervised change detection as well as supervised learning from weak labels. His thesis also explored machine learning models for scientific domains that incorporate physics-based constraints and make them interpretable for domain scientists. He has published 20 papers with 350+ citations. His work has appeared in top-tier data mining conferences and journals such as IEEE TKDE, AAAI, and ICDM.
Ankur Taly
Fiddler Labs
Ankur Taly is the Head of Data Science at Fiddler Labs, where he is responsible for developing and evangelizing core explainable AI technology. Previously, he was a Staff Research Scientist at Google Brain, where he carried out research in explainable AI and is best known for his contributions to developing and applying Integrated Gradients (220+ citations), a new interpretability algorithm for deep networks. His research in this area has resulted in publications at top-tier machine learning conferences (ICML 2017, ACL 2018) and prestigious journals such as the American Academy of Ophthalmology (AAO) journal and the Proceedings of the National Academy of Sciences (PNAS). He has also given invited talks at several academic and industrial venues, including UC Berkeley (DREAMS seminar), SRI International, the Dagstuhl seminar, and Samsung AI Research. Besides explainable AI, Ankur has a broad research background and has published 25+ papers in several other areas, including computer security, programming languages, formal verification, and machine learning. He has served on several conference program committees (PLDI 2014 and 2019, POST 2014, PLAS 2013), taught guest lectures at graduate courses, and instructed a short course on distributed authorization at the FOSAD summer school in 2016. Ankur obtained his Ph.D. in computer science from Stanford University in 2012 and a B.Tech in CS from IIT Bombay in 2007.
Riccardo Guidotti
University of Pisa, Italy
Riccardo Guidotti is currently a post-doc researcher at the Department of Computer Science, University of Pisa, Italy, and a member of the Knowledge Discovery and Data Mining Laboratory (KDDLab), a joint research group with the Information Science and Technology Institute of the National Research Council in Pisa. He was born in 1988 in Pitigliano (GR), Italy. He graduated cum laude in Computer Science at the University of Pisa (BS in 2010, MS in 2013) and received his PhD in Computer Science from the same institution with a thesis on Personal Data Analytics. He won the IBM fellowship program and was an intern at IBM Research Dublin, Ireland, in 2015. His research interests are in personal data mining, clustering, explainable models, and the analysis of transactional data related to recipes and to migration flows.
Pasquale Minervini
University College London (UCL)
Pasquale Minervini is a Research Associate at University College London (UCL), United Kingdom, working with the Machine Reading group led by Prof. Sebastian Riedel. He received a Ph.D. in Computer Science from the University of Bari, Italy, with a thesis titled “Mining Methods for the Web of Data,” advised by Prof. Nicola Fanizzi. After obtaining his Ph.D., Pasquale worked as a postdoctoral researcher at the University of Bari, Italy, and at the INSIGHT Centre for Data Analytics (INSIGHT), Galway, Ireland. At INSIGHT, he worked in the Knowledge Engineering and DIscovery (KEDI) group, composed of researchers and engineers from INSIGHT and Fujitsu Ireland Research and Innovation. Over the course of his research career, Pasquale has published 29 peer-reviewed papers, including in top-tier AI conferences (such as UAI, AAAI, ICDM, CoNLL, ECML, and ESWC), receiving two best paper awards. He is the main inventor of a patent application assigned to Fujitsu Ltd.
SA2: Fairness and Bias in Peer Review and other Sociotechnical Intelligent Systems
Nihar Shah and Zachary Lipton
Questions of fairness and bias abound in all socially-consequential decision-making. Whether designing the protocols for peer review of research papers, setting hiring policies, or framing research questions in genetics, any decision with the potential to allocate benefits or confer harms raises concerns about *who* gains or loses that may fail to surface in naively-chosen performance measures.
Data science interacts with these questions in two ways:
(i) as the technology driving the very systems responsible for certain social impacts, posing new questions about what it means for such systems to accord with ethical norms and the law; and
(ii) as a set of powerful tools for analyzing existing systems (even those that don’t themselves depend on ML), e.g., for auditing existing systems for various biases.
This tutorial will tackle both angles on the interaction between technology and society vis-a-vis concerns over fairness and bias. Our presentation will cover a wide range of disciplinary perspectives, with the first part focusing on the social impacts of technology and the formulations of fairness and bias defined via protected characteristics, and the second part taking a deep dive into peer review to explore other forms of bias, such as those due to subjectivity, miscalibration, and fraud.
Nihar Shah
Carnegie Mellon University
Nihar B. Shah is an assistant professor in the Machine Learning and Computer Science departments at Carnegie Mellon University. His research interests span machine learning, statistics, information theory and game theory. The current focus of his research is on issues in applications involving distributed evaluations by people, such as peer review.
Zachary Lipton
Carnegie Mellon University
Zachary Lipton is an assistant professor at Carnegie Mellon University whose research spans both core machine learning methods and their social impact. This work addresses diverse methodological focuses—including algorithmic perspectives on fairness, robustness under distribution shift, and sequence learning—and diverse application areas— including medical diagnosis, dialogue systems, and product recommendation.
SA3: Recent Advances in Fair Resource Allocation
Rupert Freeman and Nisarg Shah
Fairness in algorithmic decision-making has received growing attention recently. However, fairness in the context of resource allocation has been formally studied for many decades in microeconomics, and for a few decades in computer science. This tutorial will present an overview of this literature, its various fairness definitions, and fair algorithms. The focus will be on recent advances, but no prior background will be required.
The first part of the tutorial will look at the classic setting of cake-cutting, which models allocation of a divisible resource. This part will cover classic fairness notions such as proportionality, envy-freeness, equitability, and Pareto optimality, and explore their interplay with game-theoretic notions such as strategyproofness. This part will end with a discussion on connections between fairness and market equilibria.
The second part of the tutorial will focus on the allocation of indivisible items. This will cover relaxations of proportionality, envy-freeness, and equitability, as well as other fairness notions such as maximin share guarantee. This part will explore static versus dynamic allocations, goods versus chores, private versus public goods, etc.
The tutorial will end with a high-level discussion on how to apply these fairness definitions in other contexts such as voting, machine learning, or ethical decision-making.
Nisarg Shah
University of Toronto
Nisarg Shah is an Assistant Professor at the University of Toronto. His research is focused on theory and applications of algorithmic economics, and spans areas such as computational social choice, fair division, multi-agent systems, and algorithmic fairness. Shah is the winner of the 2016 IFAAMAS Victor Lesser Distinguished Dissertation Award.
Rupert Freeman
Microsoft Research, New York City
Rupert Freeman is a postdoc at Microsoft Research New York City. His research focuses on the intersection of artificial intelligence and economics, particularly in topics such as resource allocation, voting, and information elicitation. He is the recipient of a Facebook Ph.D. Fellowship and a Duke Computer Science outstanding dissertation award.
SA5Q: Guidelines for Human-AI Interaction
Besmira Nushi, Dan Weld, Saleema Amershi and Adam Fourney
Considerable research attention has focused on improving the raw performance of AI and ML systems, but much less on the best ways to facilitate effective human-AI interaction. Due to their probabilistic behavior and inherent uncertainty, AI-based systems are fundamentally different from traditional computing systems, and mismatches between AI capabilities and user experience (UX) design can cause frustrating and even harmful outcomes. Therefore, the development and deployment of beneficial AI systems affording appropriate user experiences requires guidelines to help AI developers make informed decisions with respect to model selection, objective function design, and data collection. This tutorial will introduce the audience to a comprehensive set of guidelines for building systems and interfaces designed for fluid human-AI interaction. Most importantly, it will also reflect upon the research and engineering challenges whose solutions can enable the implementation of such guidelines for real-world AI systems.
This is the first time this tutorial is being organized and we hope it will promote an inter-community discussion on how to build and deploy human-centered machine learning. The audience needs to be familiar with basic concepts in AI and ML, such as training and validation, optimization techniques, and objective functions.
Saleema Amershi
Microsoft Research AI
Saleema is a Principal Researcher at Microsoft Research AI working at the intersection of human-computer interaction and AI, creating technologies to help people build and use AI-based systems. Saleema currently chairs Microsoft’s Aether working group on Human-AI Interaction and Collaboration. Aether is Microsoft’s advisory committee on responsible and ethical AI.
Adam Fourney
Microsoft Research, Redmond
Adam Fourney is a computer scientist and Senior Researcher in the Information and Data Sciences group at Microsoft Research in Redmond. Adam’s research intersects the fields of HCI and IR, and explores the roles that information systems (e.g., search & conversational agents) play in supporting people’s tasks and interactions with other technologies.
Besmira Nushi
Microsoft Research AI
Besmira Nushi is a Senior Researcher at Microsoft Research AI. She currently works on two main directions at the intersection of human and machine intelligence: Human-AI Collaboration for solving complex decision-making, as well as Debugging and Failure Analysis for Machine Learning.
Daniel S. Weld
Paul G. Allen School of Computer Science & Engineering
Daniel S. Weld is Thomas J. Cable / WRF Professor in the Paul G. Allen School of Computer Science & Engineering. He received both Presidential and ONR Young Investigator’s awards and is a fellow of both AAAI and ACM. Weld’s research focus is human-centered Artificial Intelligence.
SA6Q: Modularizing Natural Language Processing
Zhengzhong Liu, Zhiting Hu and Eric Xing
The recent success and growth in the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI) have presented the world with a large number of new applications, techniques, models, and architectures. In this tutorial, the audience will learn how appropriate abstraction and modularization can streamline both the development and deployment of NLP technologies. The tutorial will provide a systematic view of the NLP landscape, spanning text understanding, generation, and retrieval. We will present a principled breakdown of the broad tasks/techniques, and the actual systems that implement and operationalize modular NLP development. The tutorial also includes hands-on sessions, from which the audience will use the open-source systems to practice modular NLP and build complex applications. In sum, the tutorial delivers NLP in a modularized and systematic view, with a significant focus on practical development.
The audience is expected to know the basic concepts of natural language processing and machine learning. People with hands-on experience within these fields will be particularly suitable. Knowledge of deep learning tools (PyTorch/TensorFlow) is helpful but not required.
Zhengzhong Liu
Petuum Inc.
Zhengzhong Liu is a Research Scientist at Petuum Inc. and a Ph.D. candidate on NLP at Carnegie Mellon University (CMU). Liu’s research covers a broad range of topics in NLP and computational semantics. He received an ACL Outstanding Long Paper award in 2016 and an ACL Best System Demonstration Nomination in 2019.
Zhiting Hu
Petuum Inc.
Zhiting Hu is a PhD candidate in the CMU ML Department and a scientist at Petuum Inc. Hu’s research centers around a unified learning framework ingesting rich supervision, and controllable text generation. His work on harnessing DNNs with logic rules won an ACL 2016 Outstanding Paper award, and his text generation tool Texar was nominated for the ACL 2019 Best Demonstration award.
Eric P. Xing
Petuum Inc.
Eric P. Xing is a Professor of CS at CMU, and the Founder and CEO of Petuum Inc., a company that builds a standardized AI development platform and operating system for broad and general industrial AI applications. Xing has served as the Program Chair (2014) and General Chair (2019) of ICML.
SP1: Rigorous Verification and Explanation of ML Models
Alexey Ignatiev, Joao Marques-Silva, Kuldeep Meel and Nina Narodytska
The tutorial overviews recent trends in the verification and explainability of machine learning (ML) models from the formal logic standpoint. It will illustrate how powerful logic-based methods can be employed to solve a variety of practical applications, including certifying the robustness and safety of ML models, generating explanations of ML model decisions, etc. The primary objective of this tutorial is to introduce and explain a topic of emerging importance for the AI researcher and practitioner.
The tutorial is designed around three emerging research areas:
- Property verification of ML models: (1) logic-based methods for checking existential properties of ML models (e.g. is there an input that violates a given property?); (2) quantitative estimation of probabilistic properties (e.g. what is the probability that a random valid input violates a property?)
- Two orthogonal approaches to interpretability of ML models: (1) building interpretable (transparent) models, like decision trees or decision sets and (2) extracting post-hoc explanations from non-interpretable models, like neural networks.
- Explaining predictions of ML models. We will cover recent and popular heuristic approaches, e.g. LIME, Anchor, and SHAP, and then delve into rigorous logic-based approaches for computing explanations.
Prerequisites: The audience is assumed to have basic understanding of the concepts arising in formal logic and automated reasoning.
Alexey Ignatiev
Monash University, Australia
Alexey Ignatiev is a Senior Lecturer at Monash University, Australia. His recent work is mainly focused on reasoning with SAT and SMT oracles, analysis of overconstrained systems, knowledge compilation, and a multitude of practical applications in AI: from graph optimization problems to model-based diagnosis and explainable AI. Webpage: https://alexeyignatiev.github.io/
Joao Marques-Silva
ANITI, Univ. Toulouse, France
Joao Marques-Silva is affiliated with ANITI, Univ. Toulouse, France. Joao has made seminal contributions in the area of automated reasoning (AR), including the development of Conflict-Driven Clause Learning (CDCL). Joao has taught at SAT/SMT summer schools and has presented tutorials at leading venues, including a recent tutorial at IJCAI 2019. Webpage: https://jpmarquessilva.github.io/
Kuldeep Meel
National University of Singapore
Kuldeep Meel is the Sung Kah Kay Assistant Professor of Computer Science at the National University of Singapore. His research interests lie at the intersection of AI and Formal Methods. He is a recipient of the 2019 NRF Fellowship for AI and has presented tutorials at AAAI, UAI, and IJCAI. Webpage: https://www.comp.nus.edu.sg/~meel/
Nina Narodytska
VMware Research
Nina Narodytska is a senior researcher at VMware Research. Nina works on developing efficient search algorithms for decision and optimization problems, and on the verification and explainability of machine learning models. She has presented invited talks at SAT 2017 and CP 2019, and a tutorial at FMCAD 2018. Webpage: http://narodytska.com/
SP2: Optimization and Learning Approaches to Resource Allocation for Social Good
Sanmay Das, John Dickerson, Duncan McElfresh and Bryan Wilder
Societies around the world face an array of difficult challenges: preventing and treating disease, confronting poverty and homelessness, and a range of other issues impacting billions of people. In response, governments and communities deploy interventions addressing these problems (e.g., outreach campaigns to enroll patients in treatment or offering subsidized public housing). However, these interventions are always subject to limited resources and are deployed under considerable uncertainty about properties of the system; deciding manually on the best way to deploy an intervention is extremely difficult.
At the same time, research in artificial intelligence has witnessed incredible growth, providing us with unprecedented computational tools with which to contribute to solving societal problems. This tutorial will introduce AI students and researchers to the use of techniques from optimization and machine learning to enhance the delivery of policy or community-level interventions aimed at addressing social challenges. We will focus in particular on three application areas: public health, social work, and healthcare. On a technical level, the tutorial will introduce methods for aggregating value judgments from multiple agents about an intervention’s goals, discuss the creation of agents which can learn and plan under uncertainty to aid in resource allocation, and showcase examples of how these techniques are used in concrete, deployed applications. The goal of this tutorial is to provide a unified view of computational methods for resource allocation for social good and spark new research cutting across the sub-areas we cover.
Faez Ahmed
Northwestern University
Faez Ahmed is a Postdoctoral Fellow at Northwestern University. He will join MIT’s Mechanical Engineering Department as an Assistant Professor in Fall 2020 and establish a new computational design lab. His research centers on solving complex engineering design problems using techniques from optimization and machine learning. His recent work has focused on developing algorithms for diverse matching, crowd content filtering, and automated creativity evaluation.
Sanmay Das
Washington University, St. Louis
Sanmay Das is an associate professor at Washington University in St. Louis. His research interests are in designing effective algorithms for agents in complex, uncertain environments, and in understanding the collective outcomes of individual behavior. His recent work focuses on algorithmic allocation of scarce societal resources, with an eye towards distributive justice implications.
John Dickerson
University of Maryland
John P Dickerson is an Assistant Professor of Computer Science at the University of Maryland. His research centers on solving practical economic problems using techniques from computer science, stochastic optimization, and machine learning. He has worked extensively on theoretical and empirical approaches to designing markets for organ allocation, blood donation, school admissions, hiring, and computational advertising.
Duncan C McElfresh
University of Maryland
Duncan C McElfresh is a PhD student in applied math & computer science at the University of Maryland, College Park. His research centers on applications of computer science for social good; recently he has worked on kidney exchange, public housing allocation, and blood donation. This work involves approaches from matching, recommender systems, preference elicitation, and machine learning.
Bryan Wilder
Harvard University
Bryan Wilder is a PhD Student in Computer Science at Harvard University. His work focuses on the intersection of optimization and machine learning, with the goal of improving decision-making for interventions that serve vulnerable populations. Example applications include HIV prevention for homeless youth and improving tuberculosis treatment in India.
SP3: Representation Learning for Causal Inference
Sheng Li, Liuyi Yao, Yaliang Li, Jing Gao and Aidong Zhang
Causal inference has numerous real-world applications in many domains such as health care, marketing, political science and online advertising. Treatment effect estimation, as a fundamental problem in causal inference, has been extensively studied in statistics for decades. However, traditional treatment effect estimation methods may not handle large-scale and high-dimensional heterogeneous data well. In recent years, an emerging research direction has attracted increasing attention in the broad artificial intelligence field, which combines the advantages of traditional treatment effect estimation approaches (e.g., matching estimators) and advanced representation learning approaches (e.g., deep neural networks). In this tutorial, we will introduce both traditional and state-of-the-art representation learning algorithms for treatment effect estimation. Background on causal inference, counterfactuals and matching estimators will be covered as well. We will also showcase promising applications of these methods in different domains.
Sheng Li
University of Georgia
Sheng Li is an Assistant Professor of Computer Science at the University of Georgia. His research interests include representation learning, graph-based machine learning, causal inference and computer vision. He has published over 80 papers in peer-reviewed conferences and journals. He serves as an SPC member of AAAI.
Liuyi Yao
University at Buffalo
Liuyi Yao is a fifth-year Ph.D. student at University at Buffalo advised by Dr. Aidong Zhang and Dr. Jing Gao. Her research focuses on Causal Inference especially for analyzing treatment effect with representation learning, and temporal data analysis with its application on healthcare.
Yaliang Li
Alibaba Group, DAMO Academy
Yaliang Li is a research scientist at Alibaba Group, DAMO Academy. His research topics include truth discovery, knowledge graphs, question answering, and automated machine learning. He has published over 50 papers in refereed journals and conferences. He serves on the senior program committee of AAAI.
Jing Gao
University at Buffalo
Jing Gao is an Associate Professor in the CSE Department at the University at Buffalo. She has published more than 150 papers in refereed journals and conferences, with a focus on data mining and machine learning. She is a recipient of the NSF CAREER award and an IBM faculty award.
Aidong Zhang
University of Virginia
Aidong Zhang is a William Wulf Faculty Fellow and Professor at University of Virginia. Prior to UVA, she was a SUNY Distinguished Professor at University at Buffalo. Her research interests include data mining/data science, machine learning, bioinformatics, and health informatics. She has authored over 300 research publications in these areas. She is a Fellow of ACM and IEEE.
SP7: Synthesizing Explainable and Deceptive Behavior in Human-AI Interaction
Subbarao Kambhampati, Tathagata Chakraborti, Sarath Sreedharan and Anagha Kulkarni
With the increasing complexity of AI systems, it has become harder for naive users to understand these systems at an intuitive level and work with them effectively. Thus the onus is on us, as AI system developers, to equip these systems with capabilities that allow them to effectively interact and collaborate with humans-in-the-loop. In this tutorial, we will introduce the problem of human-aware decision making and the challenges associated with the generation of agent behaviors in these settings. In particular, we will discuss state-of-the-art works that have looked at capturing and reasoning with human mental models to achieve fluent coordination. We will discuss how such models allow the agent to generate interpretable as well as privacy-preserving behavior and provide explanations. This half-day tutorial is aimed at researchers and graduate students with a background and/or interest in exploring real-world AI systems that are meant to interact and collaborate with people.
Subbarao Kambhampati
Arizona State University
Subbarao Kambhampati is a professor of Computer Science at Arizona State University. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. Kambhampati is a fellow of AAAI and AAAS, and the past president of AAAI.
Tathagata Chakraborti
IBM Research
Tathagata Chakraborti works at IBM Research on human-AI interaction and explainable AI. He received back-to-back IBM PhD Fellowships, received an honorable mention for the ICAPS Best Dissertation award, and was invited to draft the landscaping primer for the PAI Pillar on Collaborations Between People and AI Systems.
Anagha Kulkarni
Arizona State University
Anagha Kulkarni is a fifth-year Ph.D. student at Arizona State University working in the Yochan lab led by Prof. Subbarao Kambhampati. Her research interests include human-aware AI planning and privacy-preserving planning for AI systems. Her research has been featured in conferences such as AAAI, AAMAS, ICAPS, and ICRA.
Sarath Sreedharan
Arizona State University
Sarath Sreedharan is a fourth-year Ph.D. student at Arizona State University working in the Yochan lab under Prof. Subbarao Kambhampati. His research interests include explanations for automated planning and human-aware decision making. His research has been featured at conferences such as AAMAS, ICAPS, ICRA, IJCAI, and HRI, and in journals such as AIJ.
SP4Q: Multi-Agent Distributed Constrained Optimization
Ferdinando Fioretto and William Yeoh
Teams of agents often have to coordinate their decisions in a distributed manner to achieve both individual and shared goals. Examples include service-oriented computing, sensor network problems, and smart home device coordination problems. The resulting Distributed Constraint Optimization Problem (DCOP) is NP-hard to solve, and the multi-agent coordination process is non-trivial.
In this tutorial we will provide an overview of DCOPs, focusing on their algorithms and applications. We will present an accessible and structured overview of the available optimal and suboptimal approaches to solving DCOPs. We will discuss recent extensions to the DCOP framework that capture agents acting in a dynamic environment and/or using continuous domains and objective functions. Finally, we will discuss which applications can be suitably modeled and solved as a DCOP, and conclude with the most recurrent challenges and open questions.
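To make the DCOP formulation concrete, here is a minimal centralized brute-force sketch in Python. The variables, domains, and cost tables are invented for illustration; actual DCOP algorithms covered in the tutorial (e.g., DPOP, Max-Sum) find the optimum via message passing among the agents rather than by central enumeration.

```python
from itertools import product

# Hypothetical toy DCOP: three agents, each controlling one binary variable.
# Constraints are cost tables over pairs of variables; the goal is an
# assignment minimizing the total cost.
domains = {"x1": [0, 1], "x2": [0, 1], "x3": [0, 1]}
constraints = {
    ("x1", "x2"): {(0, 0): 2, (0, 1): 0, (1, 0): 1, (1, 1): 3},
    ("x2", "x3"): {(0, 0): 1, (0, 1): 4, (1, 0): 0, (1, 1): 2},
}

def total_cost(assignment):
    """Sum the cost of every binary constraint under this assignment."""
    return sum(table[(assignment[a], assignment[b])]
               for (a, b), table in constraints.items())

# Exhaustive search: exponential in the number of agents, which is why
# DCOPs are NP-hard and distributed (often approximate) algorithms matter.
names = list(domains)
best = min((dict(zip(names, values)) for values in product(*domains.values())),
           key=total_cost)
print(best, total_cost(best))
```

With these tables, the optimum sets x1=0, x2=1, x3=0, since both constraints then contribute zero cost.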
Ferdinando Fioretto
Syracuse University
Ferdinando Fioretto is an assistant professor at Syracuse University. His research focuses on multiagent systems, data privacy, and optimization. He is the recipient of a best student paper award (CMSB, 2013), a most visionary paper award (AAMAS workshop series, 2017), and a best AI dissertation award (AI*IA, 2017).
William Yeoh
Washington University in St. Louis
William Yeoh is an assistant professor in the Computer Science and Engineering Department at Washington University in St. Louis. His research interests include multi-agent systems, distributed constraint reasoning, and planning with uncertainty. He is an NSF CAREER awardee and was named to IEEE Intelligent Systems' 2015 "AI's 10 to Watch" list.
SP5Q: Creative and Artistic Writing via Text Generation
Juntao Li and Rui Yan
Text generation and automatic writing have gradually become one of the frontiers of the artificial intelligence community. To facilitate the development of the text generation field, we summarize existing research and give an overview of the technical implementations in this tutorial. We focus on creative and artistic writing, including storytelling, poetry composition, multi-modal poetry/story generation, and lyrics creation. Moreover, we will elaborate on the "core" challenges of artistic text generation and existing advanced solutions.
Juntao Li
Peking University
Juntao Li is now in the 5th year of his doctoral program at Peking University, supervised by Professor Rui Yan. His research focuses on Natural Language Processing and Artificial Intelligence, and he has published multiple papers at AAAI, ACL, EMNLP, IJCAI, etc. More concretely, he is now working on personalized conversation systems and artistic writing.
Rui Yan
Peking University
Rui Yan is now a tenure-track assistant professor at Peking University. For the past 10+ years, Dr. Rui Yan has been working on Artificial Intelligence (AI) for Natural Language Processing (NLP) and other related research fields such as Data Mining (DM), Information Retrieval (IR), and Machine Learning (ML). Dr. Rui Yan now focuses on human-computer conversational models (a.k.a., dialogues), natural language cognition and generation, summarization, and other interdisciplinary tasks. Previously, he has been invited to give tutorial talks at EMNLP, WWW, and SIGIR.
SP6Q: Recent Advances in Machine Teaching: From Machine to Human
Yao Zhou and Jingrui He
Machine teaching is the inverse problem of machine learning. It aims at constructing an optimal data set for a given target concept so that the target concept can be learned from this data set. Based on the different types of machine teaching settings, we will introduce several applications: (1) Machine teaches human, i.e., supervising crowdsourcing workers to learn and label in the form of teaching (e.g., teaching crowdsourcing workers a concept such as labeling an image or categorizing a document); (2) Machine teaches machine, i.e., an adversary can intentionally modify the training data and force the training framework to end up with an ill-trained model (e.g., adversarial attack and defense); (3) Human teaches machine, i.e., in AI system building, machine teaching can enable a machine learning system to be trained faster and more accurately by a human domain expert who simply provides labeled data and selected features. For each teaching setting, we will provide a comprehensive review of existing techniques and discuss related applications.
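As a toy illustration of this inverse problem, consider teaching a 1-D threshold concept: a passive learner may need many labeled samples, but an optimal teacher who already knows the threshold can convey it with just the two examples that bracket it most tightly. The helper names and data below are invented for illustration, not drawn from any specific teaching setting above.

```python
# Hypothetical 1-D threshold concept: points x >= t are positive.
def optimal_teaching_set(points, t):
    """Teacher's side: pick only the closest negative and closest
    positive example around the known threshold t."""
    negatives = [x for x in points if x < t]
    positives = [x for x in points if x >= t]
    return [(max(negatives), 0), (min(positives), 1)]

def learner(teaching_set):
    """Consistent learner: place the threshold midway between the
    largest negative and smallest positive example it was shown."""
    neg = max(x for x, y in teaching_set if y == 0)
    pos = min(x for x, y in teaching_set if y == 1)
    return (neg + pos) / 2

points = [0.1, 0.25, 0.4, 0.55, 0.7, 0.9]
true_t = 0.5
shown = optimal_teaching_set(points, true_t)  # only two examples needed
learned_t = learner(shown)
```

Here the learner recovers a threshold that labels every point exactly as the true concept does, after seeing only two of the six examples; this size-two teaching set is the classic "teaching dimension" result for threshold functions.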
Yao Zhou
UIUC
Yao Zhou is a Ph.D. student at UIUC. He received M.S. degrees from the University of Rochester and Oregon State University. His research focuses on human-in-the-loop learning, including crowdsourcing, heterogeneous learning, machine teaching, etc. He has published multiple articles in peer-reviewed conferences and journals (e.g., KDD, ICDM, SDM, IJCAI, TKDD). He has also served as a program committee member for major conferences (e.g., ICML, NeurIPS, AAAI, IJCAI, SDM, PAKDD).
Jingrui He
UIUC
Jingrui He is an Associate Professor at the iSchool of UIUC. She received her Ph.D. from CMU. Her research focuses on heterogeneous machine learning, rare category analysis, active learning, and semi-supervised learning. Dr. He is the recipient of the NSF CAREER Award, IBM Faculty Awards (three times), and an IJCAI 2017 Early Career Spotlight. She has more than 90 publications at major conferences and in journals (e.g., IJCAI, AAAI, KDD, ICML, NeurIPS, ICDM, TKDE, TKDD, DMKD). Her papers have been selected as Best of the Conference by ICDM 2016, ICDM 2010, and SDM 2010.